Advances in Human-Computer Interaction

Research Article | Open Access

Valérie Maquil, Hoorieh Afkari, Béatrice Arend, Svenja Heuser, Patrick Sunnen, "Balancing Shareability and Positive Interdependence to Support Collaborative Problem-Solving on Interactive Tabletops", Advances in Human-Computer Interaction, vol. 2021, Article ID 6632420, 17 pages, 2021. https://doi.org/10.1155/2021/6632420

Balancing Shareability and Positive Interdependence to Support Collaborative Problem-Solving on Interactive Tabletops

Academic Editor: Thomas Mandl
Received: 29 Dec 2020
Revised: 12 Mar 2021
Accepted: 08 Apr 2021
Published: 24 Apr 2021

Abstract

To support collaboration, researchers from different fields have proposed the design principles of shareability (engaging users in shared interactions around the same content) and positive interdependence (distributing roles and information to make users dependent on each other). While each principle, on its own, was shown to successfully support collaboration in different contexts, the two principles are partially conflicting, and their combination creates several design challenges. This paper describes how shareability and positive interdependence were jointly implemented in an interactive tabletop-mediated environment called Orbitia, with the aim of inducing collaboration between three adult participants. We present the design details and rationale behind the proposed application. Furthermore, we describe the results of an empirical evaluation focusing on joint problem-solving efficiency, collaboration styles, participation equity, and perceived collaboration effectiveness.

1. Introduction

Many studies have identified the potential of tabletop interfaces to mediate and support collaboration. The explanation for this potential lies in the large shared screen and the possibility for direct multiuser interaction, which allow users to jointly view and work on tasks [1]. According to Roschelle and Teasley [2], collaboration can be defined as “a coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem.” It is distinguished from cooperation, where labour is divided among several persons, with each of them being responsible for one portion of the work.

Tabletop interfaces provide important collaboration support as they allow users to easily monitor each other’s actions, bodily movements and gazes [3, 4]. This perceptual access to both group members’ actions and computational artefacts supports the group’s shared focus, coordinating their actions and repeatedly verifying that a common understanding is maintained. Not all interfaces enable such shared configurations. Hornecker et al. [5] propose the concept of shareability to describe the extent to which an interface allows a group of users to be engaged in shared interactions around the same content.

To discuss the design of shared interfaces, Benford et al. [6] distinguish between enabling collaboration, encouraging collaboration, and enforcing collaboration. An interface enables collaboration if joint work is possible. Collaboration is encouraged if an advantage is provided by working together, but individual work is still possible. Finally, collaboration is enforced if the task can only be accomplished successfully when participants collaborate. In our work, we are interested in the latter case and seek to enforce collaboration with the aim of providing participants with a collaboration experience and allowing them to reflect on their strategies.

In order to enforce collaboration in learning situations, the field of Computer Supported Collaborative Learning (CSCL) proposes using scripts that generate positive interdependence between group members. More specifically, every group member is only provided with part of the available knowledge, tools, and skills. Therefore, group members are required to coordinate their activities to successfully solve the tasks [7].

While both shareability and positive interdependence were individually shown to be successful in different contexts, they are partially in conflict with each other, and their combination creates some challenges for the design. Indeed, positive interdependence typically consists of dividing information and roles between group members to ensure that everyone only has part of the resources and therefore needs to contribute to the problem-solving process in order for the group to be successful. By contrast, shareability seeks to make information and tools accessible to all group members at the same time, with the aim of fostering engagement in shared interactions. To date, it is not clear how tangible and tabletop interaction needs to be designed to combine both design principles in an interactive tabletop application and enforce collaboration.

In this paper, we report on the design and evaluation of a collaborative tabletop activity, called Orbitia, developed as part of a user-centred, iterative design process involving literature research, paper prototyping, a design workshop, and electronic prototyping. Orbitia aims to enhance users’ awareness of their collaboration strategies by providing them with tasks and tools that induce collaboration and create a positive experience. More specifically, in Orbitia, participants are provided with a radar drone and role-specific steering controls to collaboratively find and collect minerals on an unexplored planet (Figure 1). To enforce collaboration, the design principles of shareability and positive interdependence are balanced across several features. A large common area and two shared tangible objects target shareability, whereas distributed, touch-based control panels aim to facilitate positive interdependence. In this work, we present the design details and rationale behind the application. To evaluate our design, we provide the results of an empirical evaluation focusing on joint problem-solving efficiency, collaboration styles, participation equity, and perceived collaboration effectiveness.

2.1. Designing for Shareability

It is widely known that the large size of tabletops allows users to “view” and “work” on tasks together [8–13]. However, shared interaction around the same content can be facilitated or constrained by the way that an interface is made shareable [14]. Hornecker et al. [14] discussed managing shared access and shared interaction by providing multiple inputs for supporting interaction, called access points. Access points enable participants to point at and operate shared content and join a group activity, allowing “perceptual” and “manipulative” access and “fluidity of sharing.” Perceptual access refers to being aware of what a group is doing, which enables participants to join the ongoing interaction. Manipulative access refers to being able to actively interact with the system, and fluidity of sharing indicates how easily participants can switch roles or interleave their actions [14].

Focusing on tabletop interaction, Scott et al. [15] analysed participants’ spatial interactions in shared tabletop settings and identified three types of tabletop territories (personal, group, and storage). They suggest that each of these territories requires different visibility and transparency of action and serves to facilitate different functionalities. Accordingly, Fernaeus and Tholander [4] found a shared space to enhance the visibility of each other’s interactions and promote equity of participation [16]. Moreover, in a comparative study, Fan et al. [17] discussed codependent and independent access point designs. In a system with codependent access points, inputs are sensed separately but processed together in such a way that two or more input actions are required for a successful response. Their results demonstrate support for the codependent strategy and suggest ways in which the codependent design could be used to support flexible collaboration on tabletops.

In addition, it is reported by Morris et al. [18] that shared controls contributed to the equity of collaboration and the task outcome. They evaluated two design alternatives: a shared centralised set of controls and separate per-user controls around the borders of the tabletop. They reported that it seemed preferable to design tabletop interfaces that share the controls and leave the centre of the table open for a variety of communicative purposes (e.g., as a focal area for items currently being discussed). However, they noted that certain types of tasks or physical configurations may make controls located along the edges of a table less optimal. In the same way, it has been reported that, in some situations, multiple access points promoted parallel and independent work rather than collaborative action [19–21]. Therefore, in order to support collaborative activities, it is suggested that information, skills, roles, or tools are distributed among participants in such a way that they are able to actively operate the relevant objects, determine observation opportunities, and become involved, hands-on. The challenge here is to design collaborative activities that make the most of such affordances for regulating and guiding collaborative work [8, 14, 20, 22, 23].

To sum up, the studies reported have highlighted a series of design aspects that induce collaboration and decision-making. These are, in particular, the effect of territories on the visibility and transparency of actions, the role of codependent design to support more flexible collaboration, and the use of shared controls to affect the equity of participation. In this work, we implement a mix of these design concepts in a tabletop-based joint problem-solving activity, discuss them from the perspective of shareability and positive interdependence, and report on their effect on collaboration.

2.2. Designing for Positive Interdependence

Positive interdependence, which has its roots in collaborative learning, describes a situation in which group members depend on each other to carry out the task [24]. Positive interdependence occurs when participants see the positive effect of their work on others and vice versa; moreover, they work together in small groups to maximize the learning of all members. This learning happens through sharing resources, providing mutual support, and celebrating joint success [24]. In such a situation, (1) the effort of each participant is both required and indispensable for achieving the mutual goal, and (2) the contribution of each participant is unique, as the resources, roles, and responsibilities are shared. According to Johnson and Johnson [24], positive interdependence can be structured in four ways: positive goal interdependence, positive reward/celebration interdependence, positive role interdependence, and positive resource interdependence.

Studies in CSCL have discussed strategies to promote positive interdependence using technology to provide learners with the means to share knowledge and build shared understanding. Positive interdependence as a design principle can promote collaboration through designing interdependent tasks, roles, and resources and can consequently facilitate shared interaction [25]. It is reported that the methods to promote positive interdependence are twofold: reward-based or task-based [26]. In a reward-based system, positive interdependence is implemented in such a way that the students’ individual grades depend on the achievement of the whole group [27]. According to Slavin [28], collaborative learning without group rewards is rarely successful. However, there are studies indicating that, in higher education, reward-based interdependence is sometimes inconclusive, because the extrinsic motivation stimulated by the reward may be detrimental to intrinsic motivation [27, 29].

Task-based interdependence forces the students to exchange information either by assigning different roles, resources, or tasks to the group members or by “scripting” the process, which means providing the students with a set of instructions on how they should collaborate and interact [30]. For instance, in the “jigsaw” technique, each member has access to only one piece of necessary information to solve the problem; therefore, no group member is capable of solving the problem on his/her own [30]. Dillenbourg explains that passing the information to another group member first requires processing that piece of information and becoming an “expert” in the specific subdomain. This then leads to the roles of each group member being defined [30].

When it comes to interactive tabletops, it is reported that tangibles provide affordances for implementing positive interdependence by allowing the physical embodiment of distributed control [31]. Technological interdependence in this context is employed on its own [32] or jointly with social interdependence [33].

2.3. Interactive Tabletop Activities Supporting Collaboration

Most commonly, the collaborative potential of tabletop technologies has been investigated in the context of collaborative learning, resulting in many interesting systems proposed in the literature. For example, DeepTree [10] is a multitouch tabletop interface where users can interact with a tree-based visualization of a large evolutionary dataset. Build-a-Tree [11] uses a similar context but requires users to construct the trees themselves. The Gnome Surfer [34] is a tabletop application supporting students in learning about genomics. “Digital mysteries” [13] targets the development of students’ higher level thinking skills and proposes a tabletop application where users need to interpret and process different types of hints in order to solve a provided mystery.

To support collaboration, all these examples are designed to be shareable. Visualizations and controls are arranged for them to be accessible by several users at the same time, thus supporting them in easily following and participating in the ongoing activity with the aim of fostering joint exploration. A qualitative analysis of dyads interacting with DeepTree [10] showed that the joint interaction could happen with different levels of coordination (high or low) and with different targets of action (mechanical or conceptual goals).

A different approach is provided by Futura [8], a multitouch tabletop-based simulation environment where users collaboratively plan communities through the assignment of different types of land uses. Futura focuses on positive interdependence and provides every user with a personal toolbar, which allows them to add a different type of resource onto the map. Depending on the combination of resources used, the environmental impact changes and players receive related feedback. A variation of the activity is proposed with Towards Utopia [35] where physical stamps are provided as interaction handles. In a user study, it was found that Futura promoted the sharing and discussion of resources, as well as helping each other out [36]. However, the researchers also observed that many groups adopted parallel play or a more individual approach at the beginning and only shifted to common strategies and goals when they had played the game several times [36].

Although these works proved to be effective in supporting collaboration, none of them was designed with the aim of enforcing collaboration or generating a positive perception of collaboration.

3. Design of Orbitia

The design of Orbitia is part of the ORBIT project, aiming to support the development of collaboration methods through a tabletop-based joint problem-solving activity. More specifically, with Orbitia, participants should experience successful collaboration to become aware of and reflect on their collaboration strategies. To increase the possibilities for participants to experience such collaborative moments, Orbitia implements features that enforce collaboration (in the sense of Benford et al. [6]). As a use case, it is planned for Orbitia to be implemented in vocational training sessions, as a tool that allows participants to enhance their collaboration skills.

Orbitia targets groups of three adult learners without any particular prior knowledge or common background. The design and research process rely on user-centred design methods and multiple iterations, which progressively deal with a different focus and related subactivities of Orbitia. In this paper, we focus on the second version of Orbitia, which aims to induce participants’ face-to-face collaboration through features that implement shareability and positive interdependence.

3.1. Design Process

To design Orbitia, we conducted literature research, iterative paper prototyping, a design workshop, and iterative electronic prototyping. The design process was structured in three steps.

The first step consisted of analysing previous studies on collaboration and interactive tabletops (e.g., [2, 8, 23]) and, based on the information obtained, defining three candidate activities with the potential to solicit the participants’ collaboration [37]. Two of the team members turned each idea into a paper prototype and tested them informally in several iterations to finalize the rules and determine whether they showed enough features to support a role play for the rest of the group in the context of a design workshop.

In the second step, we tested and discussed the paper prototypes of Orbitia in our multidisciplinary project team as part of a 4-hour design workshop [37]. The two team members involved in the previous step organised the workshop, and three team members who were not familiar with the proposed activities acted as participants. Taking into account the fact that the paper prototyping has limitations in simulating the tabletop interface in terms of providing instant feedback or multitouch features, the aim of the design workshop was to explore and discuss the three proposed collaborative activities and gain an understanding of which design aspects might induce participants’ collaboration.

The main findings of the workshop highlighted the need to support both shareability and positive interdependence in the design. In particular, the task difficulty, the physical organization of the tabletop space, and the distribution of complementary competencies were found to be major and intertwined design challenges for supporting collaboration in the planned interactive tabletop-mediated activity [37].

The third step then consisted of identifying the activity with the most potential for addressing the proposed design challenges, refining it, and implementing a digital version. As for the paper prototyping, this step was done iteratively, while testing and discussing intermediate versions in the project team. During this process, we adjusted the difficulty, the number of levels, and the visual representations of different items. To ensure Orbitia fulfilled our design goals in terms of shareability and positive interdependence, we tested the first digital version with 5 groups of three participants who used the tabletop for 55 min on average. Drawing upon the results, we then improved the overall graphical design, added an additional tangible object, and extended the steering mechanism with an additional role-related action. Furthermore, explanatory feedback was integrated into personal areas.

While results from previous steps have already been reported elsewhere [3740], this paper describes the entire design rationale of the refined digital prototype, as well as the results obtained from its evaluation.

3.2. Activity Design and Rationale

We chose the context of space mining as the narrative of the activity, as we expected it to be an interesting topic for most participants; at the same time, since it is unrelated to common professions in the area, it would not interfere with their prior knowledge. Through this approach, we wanted all group members to have similar prior knowledge and therefore to have knowledge resources equally distributed, as required for positive interdependence.

Participants are located on Orbitia, an imaginary planet where they need to act as a space mining crew in order to steer a rover, mine valuable minerals, and ship them to Earth. Meanwhile, participants need to deal with the limitations of the environment, such as obstacles and energy constraints. In total, participants are required to solve three missions, each with increasing difficulty.

The underlying problem consists of finding a route and steering a vehicle across a two-dimensional space. Such problems are frequently found in various activities and games (e.g., [23]), and we expect the provided goal and the basic mechanisms (steering and collecting) to be understood rapidly by the participants. Furthermore, such problems can be solved using different routes, and the required items can be collected in different orders. Hence, there are multiple solutions to the problem.
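The kind of grid-based route-finding described above can be sketched with a simple breadth-first search. The snippet below is a minimal illustration, not the Orbitia implementation (which is written in Java); the grid encoding and function name are our own assumptions, with '#' standing for an impassable cell such as a canyon:

```python
from collections import deque

def find_route(grid, start, goal):
    """Breadth-first search for a shortest route on a 2D grid.

    `grid` is a list of strings; '#' marks an impassable cell.
    Returns the list of cells from `start` to `goal` (inclusive),
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None
```

Because BFS explores cells level by level, it returns one of possibly several shortest routes, which reflects the multiple-solution property mentioned above.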

3.3. Activity Grid

A 9 × 11 grid, presented at the centre of the tabletop screen, acts as a common, shared area for participants. It shows the majority of the important elements (see Figure 2). To best support shareability [14], the grid is presented in the centre of the tabletop, covering most of the common space, and is well visible and within the reach of all participants. Participants can use bodily actions (e.g., pointing to routes) to discuss the routes and to serve communicative and collaborative purposes [4].

A part of the grid is shown as a cloudy area (Figure 2 (7)). According to the activity scenario, this area is affected by a dust storm and, therefore, the items located in all of these cells are hidden. Participants need to use the radar drone in order to find and reveal the hidden items (see the next section).

In each of the three missions, the grid configuration, including the location of the special cells, differs. The area affected by the dust storm increases, affecting the whole grid in the last mission.

3.4. Radar Drone

The radar drone is a shared tangible object that participants need to use in order to locate the items hidden in the dust storm (Figure 2 (7)). We decided to use a tangible object, as, from the perspective of shareability, the spatial properties of tangibles were found to support shared attention [3], lower the threshold for participation, and increase fluidity of sharing [14].

Once the tangible object is placed on the grid, a highlighted frame appears around it that covers nine cells (Figure 3(a)). The LED matrix display integrated into the tangible object shows the total number of items hidden within this frame, regardless of whether they are minerals, sharp rocks, or batteries. By moving the drone across the grid, participants can scan all the cells and check the number of hidden items that appear on the matrix display. This information can be used by the participants to discuss and decide on the route (e.g., the area with the highest number of hidden items and/or the area which can be best reached by the rover) and their strategy for completing the task.

To reveal the items, participants need to push the button on the drone. The hidden items appear for one second on the grid (Figure 3(b)) and then disappear, and a snapshot of the area is sent to the control panels. Snapshots are shown as small 3 × 3 grids indicating the location of the revealed items in a complementary way: the snapshots in each control panel indicate only the location of the items relevant to that control panel (see the next section). In each mission, the number of snapshots is limited to four.
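The drone’s behaviour, counting all hidden items within its 3 × 3 frame while each snapshot filters the items by role, can be sketched as follows. This is an illustrative reconstruction with assumed names and data structures, not the actual implementation:

```python
def scan_frame(items, center):
    """Count hidden items inside the 3x3 frame centred on `center`.

    `items` maps (row, col) -> item type ('mineral', 'rock', 'battery').
    The returned total is what the drone's LED matrix would display,
    regardless of item type.
    """
    r0, c0 = center
    return sum(1 for (r, c) in items
               if abs(r - r0) <= 1 and abs(c - c0) <= 1)

def role_snapshot(items, center, role_item):
    """Return a 3x3 snapshot marking only the items relevant to one role,
    e.g., only batteries for the energy panel."""
    r0, c0 = center
    return [[1 if items.get((r0 + dr, c0 + dc)) == role_item else 0
             for dc in (-1, 0, 1)]
            for dr in (-1, 0, 1)]
```

The complementary design is visible here: each panel receives a different `role_snapshot`, so only by pooling the three snapshots can the group reconstruct the full picture.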

3.5. Control Panels

To create positive interdependence between group members, each participant is provided with a personal control panel (Figure 4), spatially distributed around the table and showing complementary steering controls and information related to three different roles, i.e., energy, mining, and damage.

To control steering, each group member has two touch-controlled arrows (Figure 4 (1)) showing three buttons (two directions and an additional button), which are unique to their role. To collect minerals, participants need to steer the rover towards the cells containing minerals and then touch the claw button in the mining control panel (Figure 4 (6)). In a similar way, to retrieve batteries, the charge button in the energy control panel must be tapped after steering the rover to a battery cell. Finally, steering the rover into one of the cells containing sharp rocks results in damage to the rover’s wheel. In such a situation, participants are unable to move the rover unless they repair the damaged wheel by touching the wrench button in the damage control panel.

In addition to the steering controls, each control panel shows the current status with regard to the respective role, i.e., energy level, number of collected minerals (Figure 4 (5)), or number of spare wheels. Finally, it shows the snapshots provided by the drone, indicating the location of the revealed items related to the role (Figure 4 (2)). For example, the energy control panel only shows information related to batteries, and the damage control panel only shows information related to the location of sharp rocks.

To make the steering challenging, we implemented a series of movement restrictions. Two directions are not directly available (south- and northeast) and need to be compensated for by using the directions that are available. Furthermore, each movement of the rover costs one unit of energy and, to recharge, participants need to extract the batteries. If the rover runs out of energy, the mission fails. In addition, there are restrictions on the number of spare wheels: damaging the rover more than three times causes the mission to fail. Finally, leading the rover to a canyon results in the destruction of the rover and the failure of the mission. Each time the rover is destroyed, participants can start a new trial (same configuration) for the same mission.
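The movement rules just described (one energy unit per move, wheel damage on sharp rocks, and the failure conditions) can be summarised in a small state sketch. This simplified version uses four cardinal directions and assumed names; in the actual application, the direction buttons are distributed across three role-specific panels and two diagonal directions are withheld:

```python
class MissionFailed(Exception):
    """Raised when a failure condition of the mission is met."""

class Rover:
    """Simplified sketch of the rover's movement rules (not the Java original)."""

    def __init__(self, pos, energy=20, spare_wheels=2):
        self.pos, self.energy = pos, energy
        self.spare_wheels = spare_wheels
        self.damaged = False

    def move(self, direction, grid):
        """Move one cell; `grid` maps (row, col) -> 'rock'/'canyon'/etc."""
        if self.damaged:
            return  # a damaged wheel must be repaired before moving again
        dr, dc = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1)}[direction]
        self.pos = (self.pos[0] + dr, self.pos[1] + dc)
        self.energy -= 1                     # every move costs one energy unit
        cell = grid.get(self.pos, 'empty')
        if cell == 'canyon':
            raise MissionFailed('rover destroyed')
        if cell == 'rock':
            self.damaged = True              # wheel damaged by a sharp rock
            if self.spare_wheels == 0:
                raise MissionFailed('no spare wheels left')
        if self.energy <= 0:
            raise MissionFailed('out of energy')

    def repair(self):
        """Consume a spare wheel to fix the damage (the wrench button)."""
        if self.damaged and self.spare_wheels > 0:
            self.spare_wheels -= 1
            self.damaged = False
```

Each failure path raises `MissionFailed`, after which a new trial with the same initial values would be started.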

The idea of the control panels is to provide each of the participants with only some of the resources needed to complete the task. Each group member possesses only two directions for steering the rover and has only part of the information related to the status of the rover and the location of items. As formalised by Johnson and Johnson [24], each group member’s efforts are required and indispensable in order for the group to succeed. Furthermore, because of the unique roles and resources, each group member has a unique contribution to make to the joint effort.

Overall, the control panels integrate many sources of information, which need to be monitored in order to solve the task. One person alone could hardly keep track of all this information and consider everything in order to decide on the best strategy. Hence, referring to Vass and Littleton [41], the task demands more intellectual resources than one individual person has at his/her disposal. Therefore, participants are forced to share knowledge and figure out a solution collaboratively. If participants do not collaborate, they might miss some of the information and will either fail to solve the task or not come up with as good a solution.

3.6. Highlight Marker

To facilitate the planning of routes, we added another shared tangible object to the activity: the highlight marker (Figure 3(c)). Once the marker is put on a cell, the cell is highlighted (Figure 3(c)); repeating the same action undoes the effect. Moving the marker over the grid either marks or unmarks a group of cells.
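The marker’s toggle behaviour can be captured in a few lines; the set-based representation below is our own assumption for illustration:

```python
def toggle_highlight(highlighted, cell):
    """Toggle a cell's highlight state, as placing the marker tangible does.

    `highlighted` is a set of (row, col) cells: placing the marker on an
    already-highlighted cell un-highlights it, and vice versa. Dragging the
    marker over the grid would apply this per cell traversed.
    """
    if cell in highlighted:
        highlighted.discard(cell)
    else:
        highlighted.add(cell)
    return highlighted
```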

The marker is the only feature that is not compulsory for solving the task. The idea was to support participants in presenting their suggestions about item locations or routes to each other. Similar to the drone, the marker was implemented as a tangible object, to support shareability.

3.7. Implementation

To develop Orbitia, we used Java and TULIP [42], a software framework for implementing widgets on tangible tabletop interfaces. The framework allows us to define the physical qualities of the widget, such as handles, identifiers, and dimensions, and to link it with digital components, such as different types of visualizations. It handles the receipt of protocol messages such as TUIO [43] and MQTT [44]. The processing unit used was a PC-compatible computer running Windows 10, with an Intel Core i7-8650U processor at 1.90/2.11 GHz and 16 GB RAM, and we used a MultiTaction MT550 to run the application.

The radar drone was developed with a Kniwwelino microcontroller board [45] as its core component. It provides a 5 × 5 LED matrix, an RGB LED, and two push buttons. The board was equipped with a Li-ion battery and a charging circuit in a 3D-printed enclosure. Its position on the MultiTaction is detected via an optical marker, and it handles communication with the Java application through MQTT messages. More details about the implementation of the radar drone can be found in [46].

4. Evaluation

To evaluate our design, we set up a user study focusing on the joint problem-solving efficiency, collaboration styles, participation equity, and perceived collaboration effectiveness. Joint problem-solving efficiency is defined by the extent to which the provided resources (time, trials, mental effort) were used in an optimal way to solve the problem. The collaboration style describes the way the groups shared their work and how closely they worked together. Participation equity refers to the relative contribution of the individual group members in terms of amount of speech and performed tabletop interactions. Finally, perceived collaboration effectiveness describes how successful the participants perceived their collaboration to be.
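Participation equity, as defined here, can be operationalised in several ways; one common choice is the normalised entropy of the members’ contribution shares. The sketch below is our own illustration and not necessarily the measure used in the study:

```python
import math

def participation_equity(contributions):
    """Normalised entropy of per-member contribution shares.

    `contributions` are non-negative counts per member (e.g., speech
    turns or tabletop touches), with at least one member active.
    Returns 1.0 for perfectly equal participation and 0.0 when a
    single member accounts for all activity.
    """
    total = sum(contributions)
    shares = [c / total for c in contributions if c > 0]
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(len(contributions))
```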

4.1. Participants

5 groups of 3 participants each (N = 15) participated in our study; 10 of them were male and 5 female; 5 were aged between 25 and 34, 5 between 35 and 44, and 5 between 45 and 54. Among them, 12 were invited to test the application prior to a seminar on collaboration, and 3 of them were researchers working within the authors’ respective research departments.

None of the participants was familiar with Orbitia, and, although some of the participants had met before, none of them knew each other well. The occupational background of the participants is heterogeneous; six were employees in different municipal departments (urban planning, transportation, festivals and market services, human resources, sport services, and general secretariat), whereas another six participants were schoolteachers. Two participants were computer science researchers and one was a civil engineering researcher.

4.2. Setup and Procedure

Orbitia was deployed in the centre of an experimentation room. The two tangibles (radar drone and highlight marker) were placed on the border of the table (Figure 1).

We first informed participants of the objectives and context of the study. Additionally, we explained to them which data would be recorded and how it would be used and stored. We also informed participants about their rights to withdraw their consent and ask for the deletion of the data at any time and without giving reasons. They were provided with an information sheet and a consent form to sign. Participants were then led to the experimentation room.

As we were also interested in studying how participants discovered and apprehended Orbitia (which will be reported in another paper), we did not provide them with detailed explanations or a tutorial but instructed them only on how to use the table and the two tangibles. We then left the room and encouraged participants to find solutions on their own. Overall, the procedure took between 45 and 70 minutes.

All groups were required to solve the same three missions with increasing difficulty. When starting the application, an introductory text was displayed on the tabletop and read out aloud by an integrated voice. It explained the narrative, i.e., the main objective of their mission (finding minerals, minding obstacles, and keeping the robot charged). After that, short instructions were provided at the top of the screen, specifying the number of required minerals. For example, “Find two minerals and transport them back to the station!” (Mission 1).

The first mission was designed to be as easy as possible, but to require the participants to use all the integrated features (steering, radar drone, control panels) in order to complete it. The aim of this mission was to learn to use the controls and the different interaction possibilities. In the subsequent missions, the difficulty was increased by varying the number of minerals needed, the size of the dust storm area, and the number of canyons, as well as the number of available minerals, sharp rocks, and batteries. In the first two missions, the items were partly visible, whereas in the last mission, there were no visible items on the grid.

When participants started a new mission, they were provided with 20 energy units and 2 spare wheels. If their energy dropped to 0, or they damaged the rover with no spare wheel left, they lost and had to start over (with the same initial values); this was then displayed as a subsequent trial. Only when they completed a trial could they move on to the next mission.

4.3. Data Collection and Analysis

To measure each group’s joint problem-solving efficiency (i.e., how well the problem was solved and how much effort it required), we captured the time, successes, and failures of each mission making use of system logs. Furthermore, to detect the required effort, we used the NASA TLX workload questionnaire [47]. Of the six different dimensions of workload, we used five that were relevant to our context: mental demand, temporal demand, frustration, effort, and performance.
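The group workload scores reported later (Table 2) can be reproduced from the subscale ratings. The following sketch is an illustrative reconstruction, not the authors' code: it assumes the common unweighted ("raw") TLX variant on a 0-100 scale and a population standard deviation, since the paper does not state which variants were used.

```python
from statistics import mean, pstdev

# The five subscales retained in the study (the sixth TLX dimension,
# physical demand, was not used in this context).
DIMENSIONS = ("mental demand", "temporal demand", "frustration",
              "effort", "performance")

def raw_tlx(ratings):
    """Unweighted ('raw') TLX score for one participant: the mean of
    the subscale ratings, each assumed to be on a 0-100 scale."""
    return mean(ratings[d] for d in DIMENSIONS)

def group_workload(participant_ratings):
    """Group-level workload as in Table 2: mean and standard deviation
    of the members' individual scores (population SD assumed)."""
    scores = [raw_tlx(r) for r in participant_ratings]
    return mean(scores), pstdev(scores)

# Three members with identical ratings yield a zero spread.
same = {d: 30.0 for d in DIMENSIONS}
print(group_workload([same, same, same]))  # → (30.0, 0.0)
```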

To gain insights into collaboration styles (i.e., how the work was shared between group members), we recorded the problem-solving process using four fixed cameras (top, front, left, and right angles, Figure 5).

Building upon Isenberg et al. [48], we conducted video analysis of the collected observations of participants’ approaches when working together. Due to the differences in the context of this study compared to the original one by Isenberg et al. [48] (no collaborative coupling and 3 participants instead of 2), we first adapted the coding scheme by reviewing the recorded data and discussing each style individually. This included identifying whether each category was relevant to our data and how it could be adapted to a case with three participants. As a result, we defined four different categories: (1) discussion (Disc) (see Figure 6), (2) work on the same problem (SP) (see Figure 7), (3) work on different problems (DP) (see Figure 8), and (4) disengaged (D), all illustrated in Table 1.


Disc: Active discussion about the task or proposed strategies. Only minor system interaction (e.g., pointing to items or marker use).
SP: At least one person is actively interacting with the system. The other(s) are actively watching, engaged in conversation about and/or commenting on the observed activities. All group members are working on the same problem. Gestures and body language relate to that problem.
DP: All group members are either actively interacting with the system or engaged in conversation, but system interactions, gestures, or body language relates to different problems.
D: Only one person is actively working. The other two are watching passively or are fully disengaged from the task.

In previous studies, the level of physical participation is typically measured by cumulatively counting the number of interactions (e.g., [14, 15] and [17]). For this study, we counted button hits, but for the tangibles we measured the duration of use instead, as it is difficult to segment the use of the drone and marker into individual interactions. We therefore combined system logs tracking the frequency of button hits with manual annotation of tangible use based on the video data.

The level of verbal participation was previously measured based on either the amount of speech, such as speaking time, or the number of turns or spoken words (e.g., [14, 15] and [17]). More precise methods also consider the content of dialog and classify utterances into different categories (e.g., [16]). Due to time restrictions, we limited the analyses to the amount of speech and counted the number of nonblank characters noted for each person based on transcripts.

To measure equity of participation, we first transformed the participation levels for each type of contribution (verbal, button hits, marker, and drone) into relative proportions of the overall group. We then calculated the group’s physical participation by weighting the proportions of button hits, marker, and drone with the respective total group interaction durations and summing them up. This means that if interactions with one feature were considerably shorter, they had also less weight in the overall score. We then used these proportions to calculate the standard deviation as a measure of equity [49].
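The two-step procedure above can be sketched in Python. This is an illustrative reconstruction of the described computation; the function names and data layout are our assumptions, not the authors' code.

```python
from statistics import pstdev

def participation_equity(contributions):
    """Equity of participation as described above: convert each
    member's contribution into a share of the group total, then take
    the standard deviation of those shares (in percent). A lower
    value means more equal participation."""
    total = sum(contributions)
    shares = [100.0 * c / total for c in contributions]
    return pstdev(shares)

def physical_participation(shares_by_feature, durations):
    """Combine per-feature participation shares (button hits, marker,
    drone) into one physical-participation share per member, weighting
    each feature by the group's total interaction duration with it, so
    that briefly used features carry less weight in the overall score."""
    total_duration = sum(durations.values())
    n_members = len(next(iter(shares_by_feature.values())))
    return [
        sum(shares_by_feature[f][m] * durations[f] for f in durations)
        / total_duration
        for m in range(n_members)
    ]

# Perfectly equal contributions (one third each) yield an equity of 0.
print(participation_equity([100, 100, 100]))  # → 0.0
```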

Finally, to measure group members’ perceived collaboration effectiveness, we used a questionnaire based on the team effectiveness questionnaire [50]. It consisted of 4 items: “my team collaborated effectively to complete our assignments,” “I feel a sense of accomplishment in my team’s ability to work together,” “this team gave me confidence in the ability of teamwork to solve problems,” and “I was confident that our team produced acceptable solutions to assignments.” Each of these items had to be rated on a 5-point Likert scale.

5. Results

Our results provide an insight into how groups collaborated with the different features of Orbitia and how they perceived their collaboration.

5.1. Joint Problem-Solving Efficiency

All five groups were able to successfully complete the three missions; on average, they required 25:42 min and 116.4 rover movements and encountered 2.4 failures.

Looking at the differences between groups, G1, G4, and G5 managed to complete the missions without any mission failures, whereas G3 and G2 had 4 and 8 mission failures, respectively (see Table 2); of these 12 failures, 7 were caused by running out of energy and the remaining 5 by running into canyons. Likewise, groups G4 (43), G1 (62), and G5 (62) used the fewest rover movements, and G3 (162) and G2 (253) the most. Regarding time, G4 (17:31 min) and G2 (19:36 min) were the quickest in solving the tasks. In contrast, G1 (41:51 min) took more than double that amount of time, while the times of groups G5 (22:02 min) and G3 (27:29 min) can be considered average.


Group   Time       Failures   Movements   Workload
G1      00:41:51   0          62          M: 35.00; SD: 7.35
G2      00:19:36   8          253         M: 31.33; SD: 11.61
G3      00:27:29   4          162         M: 48.33; SD: 12.36
G4      00:17:31   0          43          M: 14.67; SD: 0.94
G5      00:22:02   0          62          M: 28.33; SD: 6.94
M       00:25:42   2.4        116.4       M: 31.53; SD: 14.00

According to the results of the NASA TLX questionnaire, the perceived workload was on average 31.53 (SD: 14.00). G4 perceived the workload as the lowest (M: 14.67; SD: 0.94), and G3 as the highest (M: 48.33; SD: 12.36) (see Figure 9).

From this data, we can conclude that G4 was the most efficient, with no failures and a shorter time, fewer movements, and a lower workload than the other groups. G1 and G5 had a high efficiency regarding failures and number of moves, but G1 required more than twice the time of G4. We can explain this by the long discussion and planning times that we observed for G1. Both G1 and G5 reported an average workload.

G2 had the most failures and used the highest number of rover movements but was the second fastest and perceived their workload as average. We observed that G2 engaged in little discussion and little exploration with the drone, instead steering the rover in a risky manner to find the minerals as quickly as possible. Finally, G3 used an average amount of time and was above average with respect to failures and number of movements. G3 also perceived the workload as the highest. The high workload, combined with the average problem-solving results, suggests that G3 had more difficulties in solving the tasks. Indeed, for G3, we observed intense discussions in which members often disagreed with each other: they held discussions in order to find the best procedure but still encountered failures.

The results presented here show that the joint problem-solving efficiency of the groups was generally positive, although there were important differences depending on the groups. We assumed these differences could be explained by different collaboration approaches that the groups applied.

5.2. Collaboration Styles

Throughout the use of Orbitia, participants organised themselves in different ways, using closer and looser styles of collaboration. They spent part of the time jointly discussing the tasks and potential strategies to solve them. At other times, one or more persons actively interacted with the system, with the rest of the group watching, instructing, or commenting; all focused on the same problem. Finally, some groups also collaborated during a few short moments in a looser style, with one group member working on a problem different from that tackled by the rest of the group.

In order to better understand how collaboration styles alternate throughout the problem-solving process, we video-coded participants’ interactions based on their interactions with tabletop features and other group members, taking into account their body postures. Building on previous work by Isenberg et al. [48], we identified four different collaboration styles that participants adopted, and listed them in Table 1. We annotated the video data with ELAN [51] and generated time-stamped event-logs for each of these collaboration styles.
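The step from time-stamped event logs to the per-style time proportions reported below can be sketched as follows. This is an illustrative Python reconstruction; the interval format is our assumption and does not reflect the actual ELAN export.

```python
def style_proportions(intervals, styles=("Disc", "SP", "DP", "D")):
    """Turn time-stamped annotations (style, start_s, end_s) into the
    percentage of coded time spent in each collaboration style.
    Intervals are assumed to be non-overlapping, as produced by a
    single coding tier."""
    totals = {s: 0.0 for s in styles}
    for style, start, end in intervals:
        totals[style] += end - start
    coded = sum(totals.values())
    return {s: 100.0 * t / coded for s, t in totals.items()}

# Two discussion phases and one joint-work phase over 100 s of coded time:
log = [("Disc", 0, 30), ("SP", 30, 80), ("Disc", 80, 100)]
print(style_proportions(log))  # Disc: 50.0, SP: 50.0, DP: 0.0, D: 0.0
```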

Our coding revealed that the teams showed high task engagement, with no time spent disengaged (D) from the task (Figure 10 and Table 3).


Group   Disc (%)   SP (%)   DP (%)   D (%)   Close collaboration (%)
G1      42.21      57.79    0.00     0.00    100
G2      3.28       94.18    2.54     0.00    97.46
G3      33.82      58.68    7.50     0.00    92.5
G4      47.50      52.50    0.00     0.00    100
G5      42.10      57.90    0.00     0.00    100

During discussion phases (Disc), group members considered different route options to navigate the rover around the obstacles and collect the required minerals. During this time, they used only minor system interactions, such as pointing to items. In SP, they were jointly interacting with the drone, with one person moving the drone and the others commenting or instructing. Another SP example was steering the rover, where group members took turns tapping the steering buttons, using verbal instructions and gestures to coordinate their movements. Parallel work (DP) could only be observed for two groups, and this happened when one person interacted with the drone while either the other two were discussing a route together or each of them was exploring the control panel or the grid separately.

We found that the groups with no failures (G1, G4, and G5) used Disc in a similar way, i.e., for 42.10% to 47.50% of the time, even though the overall time was different for these groups. In contrast, G2 and G3 showed different patterns. G2 used SP almost exclusively (94.18% of the time), whereas G3 used SP almost exclusively in the first part but changed strategy in the second part, where Disc was considerably more present. DP, as a loose collaboration style, was only observed for groups with a lower proportion of Disc. Overall, the groups worked in close collaboration between 92.5% and 100% of the time. Compared to another study by Isenberg et al. [48], where close collaboration was observed for 32% to 92% of the time, these numbers can be considered particularly high.

From these results, we suggest that Orbitia affords close collaboration styles, both Disc and SP. Alternating between the two styles appears to be important for efficiency. Loose collaboration (here, only DP), on the other hand, was observed only in combination with failures.

5.3. Equity of Physical and Verbal Participation

An overview of the results gained from the data on verbal and physical participation can be seen in Figure 11. To visualize the results, we used radar charts, as proposed by Martinez et al. [52]. The shape of the radars provides insight into the symmetry of activity around the tabletop. Each colour refers to a different type of contribution (verbal contribution, button taps, drone use, and marker use). The more symmetrical the triangular shapes, the more equally the respective activities were distributed; a peak, in contrast, indicates that one user was more active than the other group members.

The results indicate an equity between 3.4 (G5) and 15.8 (G4) for the amount of speech. Physical actions, on the other hand, had an equity between 8.9 (G5) and 26.9 (G3). Building upon Fan et al. [17], we consider participation to show the most equity when the equity value is less than 5, to show some equity when the value is between 5 and 12.5, and to be unequal when the value is higher than 12.5. With this classification, the most equitable group was G5, which showed the most equity in verbal participation and some equity in physical participation (Figure 12). G4 can be considered the least equitable, with both verbal and physical participation being classified as unequal (see Figure 12). Looking at the video data, a possible explanation for the lower equity could be their more instruction-based approach. Indeed, we observed that one participant was more dominant in the decisions, giving instructions to the other two participants without prior discussion of the plan within the group.
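The three-band classification adopted from Fan et al. [17] amounts to thresholding the equity score. A minimal sketch follows; the handling of the boundary value 12.5 is our assumption, as the original text does not specify it.

```python
def classify_equity(value):
    """Map an equity score (standard deviation of participation
    shares) onto the three bands adopted from Fan et al. [17];
    lower scores mean more equal participation."""
    if value < 5:
        return "most equity"
    if value <= 12.5:
        return "some equity"
    return "unequal"

# G5's verbal equity of 3.4 falls into the most equitable band,
# G4's 15.8 into the unequal one.
print(classify_equity(3.4))   # → most equity
print(classify_equity(15.8))  # → unequal
```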

Comparing these results at the level of contribution type, we can see some differences (Table 4). The button hits were most equally distributed, providing an average equity of 5.53 (SD: 2.69). Throughout Missions 2 and 3, participants remained in their own positions and only reached over and touched the other group members’ buttons on rare occasions. The least equal group was G4, where P2 was almost twice as active (28 button hits) in comparison with the other group members (15 button hits each).


Contribution   G1       G2       G3       G4       G5       M
Verbal part    8.598    11.009   9.264    15.804   3.441    9.62
Button hits    3.708    3.061    4.392    10.566   5.909    5.53
Drone          24.769   24.412   47.140   31.265   20.174   29.55
Marker         23.026   23.750   28.272   25.343   35.291   27.14
Phys part      12.445   13.128   26.782   17.344   8.875    15.71

The amount of speech was also close to equally distributed, with an average equity of 9.62 (SD: 3.99). Drone and marker handling, however, were considerably less equally distributed, with an average equity of 29.55 (SD: 9.48) for the drone and 27.14 (SD: 4.46) for the marker. The most active user operated the drone for 73% of the time on average (min 60%, max 100%) and the marker for 66% of the time on average (min 54%, max 82%). For both objects, the participation of the least active user was also very low (on average, 6% for the drone and 2% for the marker).

These results show us that, with Orbitia, actions with complementary buttons on personal control panels, as well as the amount of speech, tend to be close to equally distributed. Shared tangible objects, on the other hand, tend to have one main user with one or two assistant users. The unequal distribution of actions with physical objects might be explained by the concept of ownership [31]. The physicality of the marker and drone encouraged participants to pick them up and hold them in their hands. This, in turn, might generate the feeling that the object temporarily belongs to that participant and make the other participants hesitant to take further action with it.

We also used the participation data to analyse the relative ratio between physical and verbal participation. In particular, we were interested in whether people who spoke less were more active in physical handling, and vice versa. In Figure 13, we plotted verbal participation against physical participation. We can see that, except for G3, less talkative participants did not show a higher ratio of physical actions, nor did participants who performed fewer physical actions show a higher ratio of verbal contributions. On the contrary, participants who contributed more in speech also seemed to be more active in physical participation. However, this needs to be interpreted with caution due to the small number of participants.

5.4. Perceived Collaboration Effectiveness

Our questionnaire revealed that collaboration effectiveness was rated on average M: 4.33 (SD: 0.76). Participants from 4 groups (G1, G2, G4, and G5) perceived the level of collaboration effectiveness as very high (between M: 4.50 and M: 4.83), whereas G3 rated it as medium (M: 3.0; SD: 0.41) (see Figure 14).

From this, we can conclude that collaboration was perceived as very effective for all groups except G3. The lower value of G3 might be explained by their below average joint problem-solving efficiency and their above average ratio of loose collaboration (DP).

We can also note that G2 perceived their collaboration effectiveness to be very similar to that of G1, G4, and G5, although they repeatedly encountered failures. We consider this an indication that G2 did not interpret the task as intended and adopted a trial-and-error approach to solve the problem as quickly as possible, treating failures as part of the strategy. Since G2 was eventually able to collect the required minerals, they may well have considered their collaboration effective.

6. Discussion

Our overall aim with the design of Orbitia was to enforce collaboration to enable users to experience and reflect on collaboration strategies. We aimed to provide a problem that would require collaboration in order to be solved and would provide participants with a positive collaboration experience. Our results showed that the problem was solved entirely by three of five groups according to the defined criteria, meaning that they did collect the required number of minerals without destroying the rover. Moreover, we showed that those three groups perceived collaboration effectiveness as very high (≥4.5) and worked in close collaboration constantly, making use of Disc and SP at similar ratios.

Of the two remaining groups, one misinterpreted the task and pursued a different objective from that intended by the task design. They collected the minerals using a trial-and-error approach, without minding whether they destroyed their rover. Therefore, we can claim that, according to their own interpretation, they still performed well, but with an alternative problem-solving approach. This explains the high perceived collaboration effectiveness and the divergent collaboration styles, which consisted almost entirely of SP.

Finally, one group understood the task correctly but, especially in the beginning, had difficulty collaborating and solving the task. This can be seen in the collaboration styles diagram where the pattern in the beginning (mainly SP and partly DP) is different from that towards the end (Disc and SP mixed). Overall, they described the perceived collaboration effectiveness as medium, showing that they did not perceive their approach as very good.

We can therefore say that, for the participants who succeeded, the design of Orbitia successfully generated high perceived collaboration effectiveness and promoted close collaboration styles. However, since one of the five groups did not correctly understand the task, there is a need to adapt the instructional design and state more clearly that the task is to be solved with the fewest resources possible.

The findings relating to equity of participation are less conclusive. On average, for all groups, there is most equity or some equity for verbal contributions and button hits. In contrast, for the shared tangibles (drone and marker), participation was unequal for all groups. Overall, the most efficient group (G4) had unequal participation both in verbal and physical participation, and for 4 groups, more verbal participation tended to go hand-in-hand with more physical participation.

These observations might, on the one hand, indicate that the design of Orbitia was not optimal for promoting equity of participation. For instance, by adding more physical objects, group members could be encouraged to distribute them and the related actions more evenly among themselves. On the other hand, it must also be mentioned that our approach of quantifying the participation might not be the most suitable for this context. As already noted by Marshall et al. [19], there might be participants that say or do little but still have a significant influence on the activity. The same applies to physical participation. Some participants might have a slower or more explorative approach to handling objects, while others might be more efficient and take decisive action in a shorter amount of time. Further work is needed in order to establish the most suitable approach for measuring participation in such a multimodal setting.

6.1. Comparison with Other Collaborative Interactive Tabletop Applications

In a user trial with DeepTree [10], it was found that joint interactions in dyads happened not only in high coordination (actions directed at the same target and complementing each other) but also in low coordination (simultaneous actions that are in conflict with each other). A similar observation was made with Futura [8], where groups were found to be focused on individual actions in the first rounds and only started to adopt a common group focus after repetitive play.

In comparison, with Orbitia, close collaboration involving a common focus was considerably more frequent. We can explain this by several aspects. First, the overall goal, i.e., steering a rover across a grid to collect minerals, was easy to understand and clear for participants. The same applies to the buttons provided in the individual control panels. After the initial familiarization, they were not required to explore and test several options in order to know how to use the buttons and how to proceed. The only exploration task was related to the radar drone. To find the minerals, participants need to use the drone and try out several locations to scan the grid. To do this, they were provided with only one shared object; hence, during this explorative phase, participants were prevented from working individually, and the group focus was maintained by the spatial properties of the drone.

A second aspect that promoted close collaboration was the grid that served as planning tool. Participants could use it to explain their ideas and illustrate them via gestures, through, for instance, pointing to the main steps or showing the next direction to take. This helped participants, throughout the activity, to maintain a common understanding about what to do and how to do it and, thus, kept their actions coordinated.

The Futura study [8] also showed that users frequently reached over to help each other and to perform an action for another group member through other participants’ control panels. In Orbitia, we only observed this behaviour on very rare occasions (it happened only 5 times in total among all groups). Once again, we can explain this by the clarity of the individual buttons and the respective roles, so that no physical help was needed to use the controls. Instead, participants could support each other using simple and short verbal instructions, such as “come here” or “one more.” However, there are also other aspects that might promote this helping behaviour, like, for instance, the degree to which group members are familiar with each other, their personality, the size of the tabletop, or the more relaxed atmosphere of a museum.

Regarding equity of participation, previous studies with interactive tabletops have shown that equity in terms of physical actions varied depending on the number of entry points and the way they are accessible and inviting for participants [16, 19]. In a comparative study [19], the impact on verbal participation was less conclusive (no impact was found), whereas another study [16] showed that those who spoke the least contributed more in terms of physical actions. Our results are in line with these findings; in our work, participation equity varied depending on the characteristics of the features and was higher for button taps than for tangible interactions. However, for most of our groups, participants who spoke less also contributed less to tabletop interactions. Based on the video data, we could not find any explanation for this. Further investigation with a larger number of participants is needed in order to better understand participation equity during enforced collaboration around interactive tangible tabletops.

6.2. Design Implications

Our results are in line with previous work stating that there are no universal design guidelines for supporting collaboration that can be applied without taking into account more detailed information about the specific context. Most importantly, the group composition, the collaboration experience of the group members, and the purpose of the collaborative activity lead to different design approaches.

In our context, we were interested in creating a problem that enforced collaboration, i.e., a problem that could only be solved if users collaborated. We wanted members of the group to be able to experience effective collaboration and reflect on their applied strategies. In contrast to other works on collaboration with interactive tabletops, there was no additional learning or design objective related to an underlying topic. Hence, we could focus on collaboration as our only objective.

Keeping this context in mind, and based on our results, we can express some design implications for interactive tabletop applications in support of collaboration:

(1) Shared areas, well visible and accessible by all group members, are crucial. They should show enough elements to allow participants to present and comment on their ideas both vocally and with gestures. Participants should be able to point out both the overall problem-solving approach and the next step to be taken, to support a common understanding about what to do and how to do it.

(2) Shared objects for tabletop interaction are suitable for exploration tasks. By providing only one object, parallel work can be prevented and a joint group focus maintained. However, shared objects might also lead to inequity in participation.

(3) To support positive interdependence, distributed personal control panels can be used. Special attention needs to be paid to the clarity of the roles and related controls. To encourage group members to remain responsible for their assigned roles (and to avoid taking over for another participant), it must be clear to each participant how the personal controls work and what impact they have on the shared items in the application.

6.3. Limitations

Our study was conducted in a controlled setting with five groups of adult participants who did not know each other well and already had some collaboration experience in their professional lives. Therefore, the results need to be considered as tendencies, and we cannot generalize them to a wider range of users, including less experienced users (e.g., youth) or groups that know each other well (e.g., families). Furthermore, we recognize that the context of vocational training and the laboratory setting might have led to longer interaction times and higher levels of close collaboration compared to studies conducted in museums.

7. Conclusions

In this work, we have described a solution that simultaneously addresses shareability and positive interdependence in an interactive tabletop application, with the aim of enforcing collaboration and creating a positive experience. Orbitia makes use of (1) a shared area that allows the discussion of strategies, (2) shared objects for joint exploration tasks, and (3) personal control panels to distribute roles and resources between participants.

Through the results gained from a user study with 15 participants, we have shown that Orbitia promotes close collaboration styles and supports a high perception of collaboration effectiveness. Results for participation equity vary depending on the groups and the features. In particular, shared objects, although they promote a joint focus, seem to foster inequity in terms of participation. Further work is needed to better understand how participation equity should and can be supported during enforced collaboration around interactive tangible tabletops.

In future work, we will use Orbitia to explore how new design features could trigger breaching moments within the activity, i.e., generate an unexpected system characteristic that requires groups to reorganize themselves and adopt a new procedure. We will then study how such breaching moments impact participants’ collaboration strategies and their awareness of them. With our research, we seek to contribute to a better understanding of how interactive tabletop-mediated collaborative activities can become a tool that supports users in developing and enhancing their collaboration strategies.

Data Availability

The data underlying this study are essentially video-based. Due to ethical reasons, it is not possible to share this data outside the project team.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank the Luxembourg National Research Fund (FNR) for funding this research under the CORE scheme (Ref. 11632733). Furthermore, the authors would like to thank all the participants of their user study.

References

  1. E. Mercier and S. Higgins, “Creating joint representations of collaborative problem solving with multi-touch technology,” Journal of Computer Assisted Learning, vol. 30, no. 6, pp. 497–510, 2014. View at: Publisher Site | Google Scholar
  2. J. Roschelle and S. D. Teasley, “The construction of shared knowledge in collaborative problem solving,” Computer Supported Collaborative Learning, vol. 30, pp. 69–97, 1995. View at: Publisher Site | Google Scholar
3. A. N. Antle and A. F. Wise, “Getting down to details: using theories of cognition and learning to inform tangible user interface design,” Interacting with Computers, vol. 25, no. 1, pp. 1–20, 2013.
4. Y. Fernaeus and J. Tholander, “Designing for programming as joint performances among groups of children,” Interacting with Computers, vol. 18, no. 5, pp. 1012–1031, 2006.
5. E. Hornecker, P. Marshall, N. S. Dalton, and Y. Rogers, “Collaboration and interference: awareness with mice or touch input,” in Proceedings of the ACM 2008 Conference on Computer Supported Cooperative Work—CSCW’08, pp. 167–176, San Diego, CA, USA, November 2008.
6. S. Benford, C. O’Malley, K. T. Simsarian et al., “Designing storytelling technologies to encourage collaboration between young children,” in Proceedings of the Conference on Human Factors in Computing Systems, pp. 556–563, The Hague, The Netherlands, April 2000.
7. N. Yuill and Y. Rogers, “Mechanisms for collaboration: a design and evaluation framework for multi-user interfaces,” ACM Transactions on Computer-Human Interaction, vol. 19, no. 1, 2012.
8. A. N. Antle, A. Bevans, J. Tanenbaum, K. Seaborn, and S. Wang, “Futura: design for collaborative learning and game play on a multi-touch digital tabletop,” in Proceedings of the 5th International Conference on Tangible, Embedded and Embodied Interaction, TEI’11, pp. 93–100, Madeira, Portugal, January 2011.
9. A. N. Antle, A. F. Wise, and K. Nielsen, “Towards Utopia: designing tangibles for learning,” in Proceedings of the IDC 2011—10th International Conference on Interaction Design and Children, pp. 11–20, Ann Arbor, MI, USA, June 2011.
10. P. Davis, M. Horn, F. Block et al., “‘Whoa! we’re going deep in the trees!’: patterns of collaboration around an interactive information visualization exhibit,” International Journal of Computer-Supported Collaborative Learning, vol. 10, no. 1, pp. 53–76, 2015.
11. M. S. Horn, Z. Atrash Leong, F. Block et al., “Of BATs and APEs: an interactive tabletop game for natural history museums,” in Proceedings of the Conference on Human Factors in Computing Systems, pp. 2059–2068, Austin, TX, USA, May 2012.
12. O. Shaer and H. Wang, “Enhancing genomic learning through tabletop interaction,” in Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems—CHI’11, pp. 2817–2826, Vancouver, Canada, May 2011.
13. A. Kharrufa, D. Leat, and P. Olivier, “Digital mysteries: designing for learning at the tabletop,” in Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, pp. 197–206, Saarbrücken, Germany, 2010.
14. E. Hornecker, P. Marshall, and Y. Rogers, “From entry to access—how shareability comes about,” in Proceedings of the 2007 Conference on Designing Pleasurable Products and Interfaces, DPPI’07, pp. 328–342, Helsinki, Finland, August 2007.
15. S. D. Scott, M. S. T. Carpendale, and K. M. Inkpen, “Territoriality in collaborative tabletop workspaces,” in Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW), pp. 294–303, Chicago, IL, USA, November 2004.
16. Y. Rogers, Y.-K. Lim, W. R. Hazlewood, and P. Marshall, “Equal opportunities: do shareable interfaces promote more group participation than single user displays?” Human-Computer Interaction, vol. 24, no. 1-2, pp. 79–116, 2009.
17. M. Fan, A. N. Antle, C. Neustaedter, and A. F. Wise, “Exploring how a co-dependent tangible tool design supports collaboration in a tabletop activity,” in Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work, pp. 81–90, Sanibel Island, FL, USA, 2014.
18. M. R. Morris, A. Paepcke, T. Winograd, and J. Stamberger, “TeamTag: exploring centralized versus replicated controls for co-located tabletop groupware,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems—CHI’06, p. 1273, Montréal, Canada, 2006.
19. P. Marshall, E. Hornecker, R. Morris, N. S. Dalton, and Y. Rogers, “When the fingers do the talking: a study of group participation with varying constraints to a tabletop interface,” in Proceedings of the 2008 IEEE International Workshop on Horizontal Interactive Human Computer System, TABLETOP, pp. 33–40, Newport, RI, USA, 2008.
20. D. Stanton and H. R. Neale, “The effects of multiple mice on children’s talk and interaction,” Journal of Computer Assisted Learning, vol. 19, no. 2, pp. 229–238, 2003.
21. T. Pontual Falcão and S. Price, “Interfering and resolving: how tabletop interaction facilitates co-construction of argumentative knowledge,” International Journal of Computer-Supported Collaborative Learning, vol. 6, no. 4, pp. 539–559, 2011.
22. M. Mateescu, C. Pimmer, C. Zahn, D. Klinkhammer, and H. Reiterer, “Collaboration on large interactive displays: a systematic review,” Human–Computer Interaction, vol. 36, no. 3, pp. 243–277, 2019.
23. A. M. Piper, E. O’Brien, M. R. Morris, and T. Winograd, “SIDES: a cooperative tabletop computer game for social skills development,” in Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, pp. 1–10, Alberta, Canada, 2006.
24. D. W. Johnson and R. T. Johnson, “Making cooperative learning work,” Theory Into Practice, vol. 38, no. 2, pp. 67–73, 1999.
25. P. Resta and T. Laferrière, “Technology in support of collaborative learning,” Educational Psychology Review, vol. 19, no. 1, pp. 65–83, 2007.
26. D. W. Johnson, R. T. Johnson, and K. Smith, “The state of cooperative learning in postsecondary and professional settings,” Educational Psychology Review, vol. 19, no. 1, pp. 15–29, 2007.
27. K. Scager, J. Boonstra, T. Peeters, J. Vulperhorst, and F. Wiegant, “Collaborative learning in higher education: evoking positive interdependence,” CBE—Life Sciences Education, vol. 15, no. 4, p. ar69, 2016.
28. R. Slavin, “Group rewards make groupwork work,” Educational Leadership, vol. 15, pp. 89–91, 1991.
29. T. J. Parkinson and A. M. George, “Are the concepts of andragogy and pedagogy relevant to veterinary undergraduate teaching?” Journal of Veterinary Medical Education, vol. 30, no. 3, pp. 247–253, 2003.
30. P. Dillenbourg, “Over-scripting CSCL: the risks of blending collaborative learning with instructional design,” in Three Worlds of CSCL: Can We Support CSCL?, P. A. Kirschner, Ed., Open Universiteit Nederland, Heerlen, The Netherlands, 2002.
31. T. Speelpenning, A. N. Antle, T. Doering, and E. van den Hoven, “Exploring how tangible tools enable collaboration in a multi-touch tabletop game,” Human-Computer Interaction—Interact 2011, vol. 39, no. 3, pp. 605–621, 2011.
32. P. Dillenbourg and M. Evans, “Interactive tabletops in education,” International Journal of Computer-Supported Collaborative Learning, vol. 6, no. 4, pp. 491–514, 2011.
33. A. F. Wise, M. Saghafian, and P. Padmanabhan, “Towards more precise design guidance: specifying and testing the functions of assigned student roles in online discussions,” Educational Technology Research and Development, vol. 60, no. 1, pp. 55–82, 2012.
34. O. Shaer, G. Kol, M. Strait et al., “A tabletop interface for collaborative exploration of genomic data,” in Proceedings of the Conference on Human Factors in Computing Systems, vol. 3, pp. 1427–1436, Atlanta, GA, USA, 2010.
35. A. N. Antle, A. F. Wise, and K. Nielsen, “Towards Utopia: designing tangibles for learning,” in Proceedings of the IDC 2011—10th International Conference on Interaction Design and Children, pp. 11–20, Ann Arbor, MI, USA, June 2011.
36. A. N. Antle, J. Tanenbaum, A. Bevans, K. Seaborn, and S. Wang, “Balancing act: enabling public engagement with sustainability issues through a multi-touch tabletop collaborative game,” Human-Computer Interaction—Interact 2011, vol. 6947, pp. 194–211, 2011.
37. P. Sunnen, B. Arend, S. Heuser, H. Afkari, and V. Maquil, “Designing collaborative scenarios on tangible tabletop interfaces—insights from the implementation of paper prototypes in the context of a multidisciplinary design workshop,” in Proceedings of the ECSCW 2019—17th European Conference on Computer Supported Cooperative Work, Siegen, Germany, 2020.
38. H. Afkari, V. Maquil, B. Arend, S. Heuser, and P. Sunnen, “Designing different features of an interactive tabletop application to support collaborative problem-solving,” in Proceedings of the ACM International Conference Proceeding Series, New York, NY, USA, 2020.
39. P. Sunnen, B. Arend, and V. Maquil, “Orbit—overcoming breakdowns in teams with interactive tabletops,” in Proceedings of the International Conference of the Learning Sciences, ICLS, vol. 3, pp. 1459–1460, London, UK, 2018.
40. P. Sunnen, B. Arend, S. Heuser, H. Afkari, and V. Maquil, “Developing an interactive tabletop mediated activity to induce collaboration by implementing design considerations based on cooperative learning principles,” Communications in Computer and Information Science, vol. 1225, pp. 316–324, 2020.
41. E. Vass and K. Littleton, “Peer collaboration and learning in the classroom,” in Proceedings of the International Handbook of Psychology in Education, p. 808, New York, NY, USA, 2010.
42. E. Tobias, V. Maquil, and T. Latour, “TULIP: a widget-based software framework for tangible tabletop interfaces,” in Proceedings of the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS 2015, pp. 216–221, Duisburg, Germany, June 2015.
43. M. Kaltenbrunner, T. Bovermann, R. Bencina, and E. Costanza, “TUIO: a protocol for table-top tangible user interfaces,” in Proceedings of the 6th International Workshop on Gesture in Human-Computer Interaction and Simulation (GW 2005), Berder Island, France, 2005.
44. MQTT—the standard for IoT messaging, https://mqtt.org/, 2020.
45. V. Maquil, C. Moll, L. Schwartz, and J. Hermen, “Kniwwelino: a lightweight and WiFi enabled prototyping platform for children,” in Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 94–100, Stockholm, Sweden, March 2018.
46. V. Maquil, H. Afkari, C. Moll, J. Hermen, and T. Latour, “Active tangibles for tabletop interaction based on the Kniwwelino prototyping platform,” in Proceedings of the ACM International Conference Proceeding Series, pp. 667–671, New York, NY, USA, 2019.
47. S. G. Hart, “NASA-task load index (NASA-TLX); 20 years later,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 50, no. 9, pp. 904–908, 2006.
48. P. Isenberg, D. Fisher, S. A. Paul, M. R. Morris, K. Inkpen, and M. Czerwinski, “Co-located collaborative visual analytics around a tabletop display,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 5, pp. 689–702, 2012.
49. M. R. Morris, A. Cassanego, A. Paepcke, T. Winograd, A. M. Piper, and A. Huang, “Mediating group dynamics through tabletop interface design,” IEEE Computer Graphics and Applications, vol. 26, no. 5, pp. 65–73, 2006.
50. J. Wang and P. K. Imbrie, “Assessing team effectiveness: comparing peer-evaluations to a team effectiveness instrument,” in Proceedings of the ASEE Annual Conference and Exposition, Austin, TX, USA, 2009.
51. P. Wittenburg, H. Brugman, A. Russel, A. Klassmann, and H. Sloetjes, “ELAN: a professional framework for multimodality research,” in Proceedings of the 5th International Conference on Language Resources and Evaluation, LREC 2006, pp. 1556–1559, Genoa, Italy, 2006.
52. R. Martinez, J. Kay, and K. Yacef, “Visualisations for longitudinal participation, contribution and progress of a collaborative task at the tabletop,” in Proceedings of the Connecting Computer-Supported Collaborative Learning to Policy and Practice: CSCL 2011 Conference Proceedings—Long Papers, 9th International Computer-Supported Collaborative Learning Conference, vol. 1, pp. 25–32, Hong Kong, China, 2011.

Copyright © 2021 Valérie Maquil et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
