Mobile Information Systems
Volume 2019, Article ID 4545917, 10 pages
https://doi.org/10.1155/2019/4545917
Research Article

Coordinating Real-Time Serial Cooperative Work by Cuing the Order in the Case of Theatrical Performance Practice

Kosuke Sasaki and Tomoo Inoue

University of Tsukuba, Ibaraki, Japan

Correspondence should be addressed to Kosuke Sasaki; ksasaki@slis.tsukuba.ac.jp and Tomoo Inoue; inoue@slis.tsukuba.ac.jp

Received 22 September 2018; Revised 17 December 2018; Accepted 21 January 2019; Published 13 February 2019

Academic Editor: Sergio Mascetti

Copyright © 2019 Kosuke Sasaki and Tomoo Inoue. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Our goal is to facilitate real-time serial cooperative work. In real-time serial cooperative work, the order of subtasks is important because a mistake in the order leads to failure of the whole task. Therefore, coordinating the order of workers’ subtasks is necessary to accomplish such tasks smoothly. In this paper, we propose a method that uses vibration to present the action order and thereby coordinate the order. We conducted a user study to verify the effectiveness of the method in theatrical performance practice, an example of real-time serial cooperative work. As a result, order coordination reduced mistakes in the order of speech and action and in the contents of lines and actions. This result suggests that order coordination can improve real-time serial cooperative work.

1. Introduction

Work is generally divided into several steps. For example, cooking includes processes such as cutting ingredients, stir-frying ingredients, and arranging ingredients. Assembling furniture includes steps like reading instructions, preparing the necessary tools, and combining parts according to the instructions. In these examples, cooking and assembling furniture are called tasks, and each step is called a subtask or process [1, 2]. A task is a unit of work that the worker has to achieve, and it seems natural to take a break after a task [3]. A task can be divided into several subtasks [3], and subtasks can be arranged hierarchically [4, 5]. In the context of cooperative work as well, a task is divided into subtasks that are assigned to workers, and each worker completes his or her own assigned subtasks [6, 7]. In this study, we focus on cooperative work where tasks are divided into several subtasks.

Some cooperative work has subtasks arranged serially, so the order in which the subtasks are performed is essential. To accomplish this kind of work without mistakes, it is necessary to coordinate the order of subtasks and notify workers of that order. In this paper, we propose a method that supports cooperative work in which the subtask order is important by coordinating that order. We deal with theatrical performance practice as an example of cooperative work in which the order of executing subtasks is essential.

In addition, some subtasks in cooperative work are difficult for workers unfamiliar with the work, and such workers can disturb the task by making mistakes. Interruption of a task in cooperative work can necessitate restarting the task, and it can increase the stress of the other workers [8–10]. Therefore, this study aims to avoid task failures by supporting beginners, and we propose a method of order coordination for novice workers to facilitate cooperative work.

In this paper, we deal with theatrical performance practice, equivalent to a school play, as an example of real-time serial cooperative work. In theatrical performance practice, a subtask is an utterance or action of each actor. Actors need to take their actions in accordance with the other actors’ actions, so they must always watch the other actors to grasp which one will act next. Moreover, actors must pay attention to their own subtasks, such as posture and voice. The need for so much awareness burdens the actors. Theatrical performance practice is thus difficult work for novices, but we show that our proposed method can support them.

To coordinate the action order in theatrical performance practice, we used vibration to cue the order. Each actor attached a smartwatch to his or her arm to receive cues for the action order through vibrations. We conducted a theatrical performance practice experiment with 16 teams of 3 persons (totaling 48 persons) to verify the effect of notifying action order. In this experiment, we used a cuing system that vibrates the smartwatch of the actor who takes the next turn. As a result, giving actors two kinds of vibration patterns reduced the number of mistakes. Finally, we discuss the possibility of reducing mistakes during real-time cooperative work with serial subtasks.

2. Taxonomy in Cooperative Work

In order to define cooperative work in this study, we describe a taxonomy of cooperative work.

2.1. Real-Time or Non-Real-Time Work

Cooperative work has been classified as synchronous or asynchronous [11–13]. In this paper, we focus on synchronous, real-time work to verify the effectiveness of coordinating the order of subtasks, taking theatrical performance practice as the example. Theatrical performance practice is a task whose subtasks are performed in the same place, but the location of the subtasks does not matter for the method of order coordination proposed in this paper.

2.2. Serial or Parallel Subtask Design

Cooperative work has a parallel subtask design or a serial subtask design [14, 15]. For example, when all the classmates in a school class create a thousand Japanese paper cranes, each student’s folding of cranes is a subtask. Because the students can fold cranes independently of one another, making a thousand paper cranes can be considered a parallel task. In a relay race in physical education, however, each runner’s run is considered a subtask. If the baton is not passed, the next runner cannot start, so the subtasks of all runners are arranged serially. Therefore, a relay race can be regarded as a serial task. In this research, we focus on serial tasks in order to investigate whether workers’ mistakes concerning the order of subtasks will decrease.

In the serial task in particular, the order of subtasks is important. In a music ensemble or team action, such as a dance or a march, once the order of subtasks collapses, the entire task will fail. Although support for completing these tasks has been reported [16, 17], we propose a coordination method for any real-time cooperative work with serial subtasks by presenting the order of subtasks to the workers in this study.

2.3. Deciding the Order of Subtasks in Advance

In some tasks, the order of subtasks is predetermined, whereas in others it is not. In a group discussion, for instance, each speaker’s remark can be considered a subtask. However, the appropriate order of speech in a discussion changes from moment to moment, so the order of remarks cannot be decided in advance. In this study, we deal only with cooperative work whose subtask order can be determined before the task starts. For example, in an ensemble, the subtask is playing a sound with a musical instrument. Since a musical score shows when a certain kind of sound is needed, the order of subtasks is predetermined.

3. Related Work

3.1. Real-Time Cooperative Work and Serial Cooperative Work

Many studies argue that managing task sequences in serial cooperative work is necessary to facilitate tasks. In a serial crowdsourcing task, for example, subtasks may become stagnant [18] or it may be difficult to request complicated work [19] unless subtasks are presented in an appropriate order. Therefore, some studies have focused on the order or the content of the subtasks [20–22]. Other than crowdsourcing, some studies used a virtual conductor to inform performers of the order in an ensemble [23, 24], showing that it is useful to present the order of subtasks. In this study, we deal with theatrical performance practice, but we propose a method that can also be used for other tasks.

In real-time cooperative work, it is necessary to do subtasks while paying attention to the actions of other workers. Conflicts in which workers talk or act simultaneously with other workers are a known issue in such work [11, 25]. Although gaze information has been used to avoid these collisions [26, 27], workers then have to watch their surroundings, which can distract attention from their own subtasks. In this paper, we propose a method that uses vibration to present the order of actions, to reduce mistakes while workers concentrate on their individual subtasks, and to avoid burdening them.

Meanwhile, a presentation support system has been proposed as a method to support serial work done by one person. The system uses speech recognition technology to automatically show the script to a presenter who is not accustomed to giving presentations. These studies showed the importance of showing the order to a single worker [28, 29]; our research extends them and shows the effectiveness of presenting the order to multiple workers.

3.2. Theatrical Performance Practice
3.2.1. Steps of the Theatrical Performance Practice Process

Theatrical performance practice steps include reading the script, practicing parts, run-through practice, and rehearsal. While reading the script, each actor practices reading their lines aloud. While practicing parts, actors check their actions and posture. During run-through practice, actors play their roles through the whole performance. While rehearsing, actors perform the whole production along with sound and lighting. In this research, we focus on practicing parts. In this step, actors must be conscious of their relationship with other actors’ movements, which they do not have to consider while only reading the script. Therefore, actors must pay attention to more elements, and beginners especially may easily make mistakes in this step. Because mistakes interrupt practice and can prevent other actors from grasping the overall flow, we aim to alleviate mistakes during the part practice step.

3.2.2. Supporting the Part Practice Step

As theatrical performance practice support methods, systems have been proposed that allow independent practice even when not all of the actors are available [30] or that support remote performance instruction [31]. This study, on the other hand, supports theatrical performance practice that progresses in real time with actors in the same place.

As ways to support actors by giving them information during a theatrical performance, a cue card presentation system [32] and a system that shows the script on Google Glass [33] have also been proposed. In this study, we notify actors of the subtask order, which is the action order of each actor, to coordinate the order of action in the collaborative work.

3.3. Memory

In a theatrical performance, actors must remember their own lines and actions. Although detailed models such as Baddeley’s [34] and Cowan’s [35] exist, memory is generally divided into two kinds: short-term memory, held for several tens of seconds in a limited storage area, and long-term memory, which can be stored for a long time. To transfer short-term memories into long-term memory, it is necessary to carry out iterative work called rehearsal [35, 36]. For beginners in a theatrical performance equivalent to a school play, trying to memorize the whole script may leave the task incomplete, and it is hard for them to carry out the repetitive work of memorizing the script. Presenting a part of the script’s contents during practice can solve this problem by reducing the items that actors have to memorize.

4. Study of Cuing System for Theatrical Performance Practice as an Example of Cooperative Work

To accomplish our goal, we present a method of cuing the action order for novice workers. Workers who are unfamiliar with the task can make mistakes in the order and disturb the task. We present a method of order coordination that individually notifies each worker of the action order [37, 38].

We use the practice of theatrical performance as an example of real-time serial cooperative work and target novice actors. Actors who are unfamiliar with theatrical performance must pay attention to various elements, such as the order of actions, their own behavior, and the other actors’ actions. Therefore, it is difficult for them to complete the work. Because theatrical performance practice has subtasks arranged serially, the task can be interrupted when actors get the order wrong, so it is necessary to coordinate the order of action for individual actors. In this paper, we use a cuing system that instantly notifies each actor when they should take action.

Notifying the action order is expected to reduce the number of mistakes made by beginners during practice. However, because the actors do not always move as intended, even using the system, it is necessary to confirm that the number of mistakes in the speech/action order decreases by cuing the order.

In addition, we had to consider mistakes in the speech content and in the action content because theatrical performance practice includes mistakes in the order and mistakes in the content. By reducing the items that the actors must memorize, they can memorize the content-related items more easily. It is necessary to check whether the order cues can reduce the number of content-related mistakes. If we find that our method can reduce the number of content mistakes, a better quality of work is expected because the actors will not make mistakes in the content or in the order.

Therefore, we verified the following hypotheses about whether the order coordination is useful in theatrical performance practice:

(i) H1: notifying the actors of the action order in real time reduces the mistakes in the order of action in the theatrical performance practice.

(ii) H2: notifying the actors of the action order in real time reduces the content mistakes, such as the content of speech and action, in the theatrical performance practice.

5. Cuing System

We introduced a cuing system that notifies actors of the speech/action order [37, 38].

5.1. Scenario

An example scenario is shown in Figure 1. In this paper, one utterance or action by an actor is defined as a turn. The figure is an example of a script composed of three turns. The horizontal axis shows time. The light blue lines in the figure show that the system gave a one-shot vibration, the dark blue lines indicate two-shot vibrations, the orange lines indicate the utterance section of an actor, and the green lines indicate the section of an actor’s action. In this scenario, the system operation for actors A, B, and C is as follows:

(i) [Turn 1] Actor A: only speaks
(1) The smartwatch of actor B vibrates twice, and the smartwatch of actor A simultaneously starts voice detection.
(2) Actor A speaks.
(3) The smartwatch finishes speech detection.
(4) The system reads the next turn.

(ii) [Turn 2] Actor B: speaks and acts
(1) The smartwatches of actors A and C vibrate once, and the smartwatch of actor B starts voice detection simultaneously.
(2) Actor B speaks and acts.
(3) The smartwatch finishes speech detection.
(4) The system reads the next turn.

(iii) [Turn 3] Actors A and C: only speak
(1) The smartwatches of actors A and C start voice detection.
(2) Actors A and C speak.
(3) The smartwatches finish speech detection.

Figure 1: A timeline of a script.

Because the system used in this study was a prototype, if the actors made some mistake during the experiment, the experimenter operating the server temporarily interrupted the system and manually restarted the system.

5.2. Implementation

In this section, we explain our cuing system that notifies actors of the order.

5.2.1. Requirements

In this study, we used an experimentally constructed cuing system to notify actors of the order. This cuing system is a prototype system using existing technology. To begin, we note the requirements to be satisfied by this system.

First, in order to cue the order individually, a notification is sent to an individual actor’s device. As described in Section 3.2.1, we assumed system usage in the part practice step, and we had to consider the possibility that an actor performs with props in both hands. Therefore, we chose a smartwatch, a wearable device that leaves both hands free. Wearable devices also make it possible to cue actors individually.

Second, this system can register scripts in advance. If the system records the script’s information beforehand, it is possible to grasp the action order of the actors, and the system can correctly cue the appropriate actors.

Third, this system can detect the utterances of speakers. By detecting the speaker’s utterance, it is possible to automatically notify the next actor of the order after the previous actor’s turn. We used speech detection technology in the smartwatch worn by each actor to observe who is speaking.

5.2.2. System Configuration

Figure 2 shows the outline of the system. Each actor wore a smartwatch (Samsung Gear Live) on one arm as a device for notifying them of the action order through vibration. Each smartwatch was connected to the server via an Android tablet (Nexus 7) so that all smartwatches could be synchronized. Smartwatches and tablets were connected one-to-one by Bluetooth, and all tablets and the server communicated bidirectionally using WebSocket. The tablet mediated the connection between each smartwatch and the server, so the actor did not need to carry the tablet as long as the tablet was within the range where the smartwatch, tablet, and server could stay connected.

Figure 2: System configuration.
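The paper specifies the transport (Bluetooth between watch and tablet, WebSocket between tablet and server) but not the code. As a rough illustration only, the following is a minimal sketch of a tablet-side relay client, assuming the open-source Java-WebSocket library (org.java_websocket) and a hypothetical JSON registration/cue message format; neither the library choice nor the message format is stated in the paper.

```java
import org.java_websocket.client.WebSocketClient;
import org.java_websocket.handshake.ServerHandshake;
import java.net.URI;

// Minimal sketch of a tablet-side relay. The library (org.java_websocket) and the
// message format are assumptions for illustration; the paper only states that the
// tablets and the server communicate bidirectionally over WebSocket.
public class CueRelayClient extends WebSocketClient {

    private final String actorId;

    public CueRelayClient(URI serverUri, String actorId) {
        super(serverUri);
        this.actorId = actorId;
    }

    @Override
    public void onOpen(ServerHandshake handshake) {
        // Tell the server which actor's smartwatch this tablet mediates (hypothetical protocol).
        send("{\"type\":\"register\",\"actor\":\"" + actorId + "\"}");
    }

    @Override
    public void onMessage(String message) {
        // A cue message for this actor would be forwarded to the paired
        // smartwatch over Bluetooth here (forwarding code omitted).
        System.out.println("Cue received: " + message);
    }

    @Override
    public void onClose(int code, String reason, boolean remote) { }

    @Override
    public void onError(Exception ex) {
        ex.printStackTrace();
    }
}
```

In such a design, the tablet only forwards messages; the turn logic stays on the server, which keeps the smartwatch application thin.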
5.2.3. Storing Scripts

The scripts were stored as a CSV file on the server and read by the cuing system. CSV is a versatile file format that can be read and written by various text editors, including Microsoft Excel spreadsheet software. Considering that this system will be used in cooperative work other than theatrical performance practice in the future, we avoided using a unique file format like Opera Liber DTD [39] as the encoding model for the script. The system automatically read the script data and created the cue. The script CSV file included the following data: the order of speech, whether or not the action is included in the next turn, and the contents of the speech and the actions.
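The paper lists the fields stored in the script CSV but not their order or exact format. As an illustration only, here is a minimal sketch assuming one row per turn with hypothetical columns turn, actor, hasAction, speech, action, together with a naive loader.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical CSV layout (the paper lists the fields but not their order):
// turn,actor,hasAction,speech,action
// 1,A,false,Good morning.,
// 2,B,true,Look over there!,points to the window
public class ScriptLoader {

    public static class Turn {
        public final int number;
        public final String actor;
        public final boolean hasAction;
        public final String speech;
        public final String action;

        public Turn(int number, String actor, boolean hasAction, String speech, String action) {
            this.number = number;
            this.actor = actor;
            this.hasAction = hasAction;
            this.speech = speech;
            this.action = action;
        }
    }

    // Naive parser for the layout above; real scripts containing commas inside
    // fields would need a proper CSV library.
    public static List<Turn> load(String path) throws IOException {
        List<Turn> turns = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line = reader.readLine(); // skip the header row
            while ((line = reader.readLine()) != null) {
                String[] f = line.split(",", -1);
                turns.add(new Turn(Integer.parseInt(f[0]), f[1],
                        Boolean.parseBoolean(f[2]), f[3], f[4]));
            }
        }
        return turns;
    }
}
```

Keeping the script in plain CSV means a spreadsheet-edited file can be placed on the server without any conversion step, which matches the versatility argument above.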

5.2.4. Cuing the Action Order to Actors

The action order cue was presented through the smartwatch’s vibration. The cuing system gave two kinds of vibration patterns. When actors only had to utter their own lines, a one-shot vibration lasting 500 ms was given. If actors had to act, regardless of whether they also uttered lines, two vibrations lasting 500 ms and 700 ms were given with a 200 ms interval between them. The two vibrations differ in length so that the actor can easily distinguish the number of vibrations. These vibrations were given one turn before the actor should speak or act, so actors knew when their turn was next.
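As a concrete sketch of these two cues, the standard Android Vibrator API can play such wait/vibrate patterns directly. The timings below are the ones given in the text; the class and method names are ours, not the authors’ implementation.

```java
import android.content.Context;
import android.os.Vibrator;

// Sketch of the two cue patterns, using Vibrator.vibrate(long[] pattern, int repeat),
// which is available on Android Wear devices of this era. Pattern arrays alternate
// wait/vibrate durations in milliseconds.
public class CueVibrator {

    private final Vibrator vibrator;

    public CueVibrator(Context context) {
        this.vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
    }

    /** One-shot cue: the next turn contains speech only. */
    public void cueSpeech() {
        long[] pattern = {0, 500};              // vibrate 500 ms
        vibrator.vibrate(pattern, -1);          // -1 = do not repeat
    }

    /** Two-shot cue: the next turn includes an action. */
    public void cueAction() {
        long[] pattern = {0, 500, 200, 700};    // 500 ms, pause 200 ms, 700 ms
        vibrator.vibrate(pattern, -1);
    }
}
```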

5.2.5. Detecting Actors’ Utterances

The cuing system used the Android SpeechRecognizer to detect actors’ utterances. The smartwatch worn by an actor shifted to the utterance detection state when the actor’s turn came, making it possible to detect the utterance. Since Android’s speech recognition does not always detect the phrase accurately, the smartwatches in this study’s experiment detected only whether the speaker spoke, so that the system could follow the script and give cues in the right order. It took approximately 2 to 3 seconds to finish detecting an actor’s speech (the processing time of the Android SpeechRecognizer was measured using a script prepared for our experiment; the average was 2.63 s and the SD was 0.47 s). During an utterance, actors did not need to bring the smartwatch close to their mouth, and both arms could be positioned freely according to the action at that time.
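A minimal sketch of this detection mode, using the standard Android SpeechRecognizer and ignoring the recognized text as described above, might look like the following; the callback that reports the end of the turn back to the server is a placeholder, not the authors’ code.

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

// Sketch of turn-end detection with the standard Android SpeechRecognizer.
// Only the fact that the actor spoke is used; the recognized text is ignored.
public class UtteranceDetector implements RecognitionListener {

    private final SpeechRecognizer recognizer;
    private final Runnable onTurnFinished; // e.g. a callback that notifies the server

    public UtteranceDetector(Context context, Runnable onTurnFinished) {
        this.recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        this.recognizer.setRecognitionListener(this);
        this.onTurnFinished = onTurnFinished;
    }

    /** Called when this actor's turn starts: switch the watch into detection mode. */
    public void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    @Override public void onEndOfSpeech() {
        // The actor stopped speaking; that alone ends the turn in this sketch.
        onTurnFinished.run();
    }

    @Override public void onResults(Bundle results) { /* recognized text not needed */ }

    // Remaining callbacks are unused in this sketch.
    @Override public void onReadyForSpeech(Bundle params) { }
    @Override public void onBeginningOfSpeech() { }
    @Override public void onRmsChanged(float rmsdB) { }
    @Override public void onBufferReceived(byte[] buffer) { }
    @Override public void onError(int error) { }
    @Override public void onPartialResults(Bundle partialResults) { }
    @Override public void onEvent(int eventType, Bundle params) { }
}
```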

5.2.6. System Workflow

The workflow of this system is shown in Figure 3. First, the system checked if the actor’s action was included in the next turn by reading the stored script. If an action was included, this system gave two vibrations via smartwatch to the actor who acts in the next turn. If an action was not included, the system offered a one-shot vibration to the next actor. If the actor performed the speech and/or action without making mistakes, the system read the next turn. This sequence was repeated until reaching the end of the script.

Figure 3: System workflow.
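Putting the pieces together, the turn loop implied by Figure 3 and the scenario in Section 5.1 can be sketched as follows. It reuses the hypothetical Turn type from the CSV sketch above and assumes one actor per turn for simplicity; the private helpers stand in for the WebSocket messages exchanged with the tablets and are placeholders, not the authors’ actual code.

```java
import java.util.List;

// Sketch of the server-side cue loop: at the start of each turn, cue the actor
// of the following turn and switch the current actor's watch into speech
// detection, then wait for the end of speech before advancing.
public class CueLoop {

    public void run(List<ScriptLoader.Turn> script) {
        for (int i = 0; i < script.size(); i++) {
            ScriptLoader.Turn current = script.get(i);

            // Cue the actor of the *next* turn one turn ahead:
            // two-shot vibration if that turn includes an action, one-shot otherwise.
            if (i + 1 < script.size()) {
                ScriptLoader.Turn next = script.get(i + 1);
                sendCue(next.actor, next.hasAction);
            }

            // The current actor's smartwatch switches to speech detection at the
            // same moment, and the loop blocks until it reports the end of speech.
            startSpeechDetection(current.actor);
            awaitEndOfSpeech(current.actor);
        }
    }

    private void sendCue(String actor, boolean includesAction) {
        // Placeholder: would send a cue message to the actor's tablet over WebSocket.
    }

    private void startSpeechDetection(String actor) {
        // Placeholder: would tell the actor's smartwatch to start utterance detection.
    }

    private void awaitEndOfSpeech(String actor) {
        // Placeholder: would block until the "utterance finished" message arrives.
    }
}
```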

6. User Study of the System

In order to evaluate whether notifying the action order reduced the number of mistakes in the part practice step of theatrical performance practice, we carried out a user study in which the participants practiced a theatrical performance using the cuing system [37, 38].

6.1. Participants

Forty-eight undergraduate and graduate students participated in this experiment. All of them were beginners in theatrical performance, which conformed to the target of this research.

6.2. The Scripts Used in the Study

In this experiment, two different scripts were prepared for each team. Each script had 12–14 speech lines and 5–7 actions for each actor, and there were about 40 turns in total. Considering the target users, speech lines were limited to a single word or to lines that could be spoken within ten seconds; on average, it took about 3 seconds to complete a line. The scripts included small movement actions, such as raising a hand or putting a hand to the mouth, as well as whole-body actions, such as standing up, crouching and stroking a dog, slowly looking back, falling down, and more. There was no action that participants could not physically perform. Because no speech or action crossed over between turns, no one was speaking or acting when the system gave the vibration. Moreover, in all turns except one, a single actor performed an utterance or action alone; in the remaining turn, two actors performed simultaneously. It took about three minutes to read through each script.

6.3. Experimental Condition

We set up two experiments in this study. In one, the system cued only the order of the utterance (Speech-cue experiment). In the other, the system notified the order of both speech and action (Speech + action-cue experiment). In the Speech-cue experiment, the cuing system gave only a one-shot vibration for both speech and action. In the Speech + action-cue experiment, the system gave two patterns of vibrations. Two vibrations were given for action and one vibration for speech.

Forty-eight participants were divided into sixteen teams of three persons. Eight teams belonged to group A, and the other eight teams belonged to group B. The participants in group A took part in the Speech-cue experiment, and the ones in group B took part in the Speech + action-cue experiment (Table 1).

Table 1: Participants in the study.

For each experiment, we set two experimental conditions. One was participating in the experiment using the cuing system (system condition), and the other was participating without the system (control condition). All participants took part in both conditions, using a different script in each trial. The eight teams in each group were divided into four subsets, and the experiments were conducted in the order shown in Table 2 for counterbalancing.

Table 2: Combination of the study conditions.
6.4. Procedure

In conducting the user study, we focused on the part practice step of theatrical performance practice. We therefore had to carry out the experiments on the premise that the actors had already finished reading the script, which is the previous practice step, so participants needed to memorize their lines and actions before starting the experiment. In accordance with the opinion of a person with theatrical performance experience, and to prepare a state in which the participants had memorized roughly 80% of their lines and actions, the experiments were carried out according to the following procedure:

(1) Instruction of the Experiment and the System. The experimenter told participants that this experiment was to practice using a cuing system that supports theatrical performance practice. Participants were not informed of the purpose of the experiments other than what to do in this experiment. Next, the experimenter explained to the participants how the system works and let them wear a smartwatch to try the system. Because voice detection did not perform correctly when the voice was unclear or too quiet, the experimenter instructed participants to speak clearly. This instruction did not deviate from the assumed environment because a loud and clear voice is necessary for the audience to hear the actors’ words in an actual theater.

(2) Memorize the Script (1st Round). Each participant was given a role and a printed script. Then, they were instructed to memorize the script. They could memorize the script without restriction, but they were not allowed to memorize it with other participants. In this step, participants were given five minutes to memorize the script.

(3) Reading the Script. Participants read the whole script aloud while holding their own printed script. In this step, participants came to understand how to utter their lines and play their roles.

(4) Instruction of Action by the Experimenter. The experimenter demonstrated the actions written in the script to the participants once.

(5) Memorize the Script (2nd Round). Participants were instructed to memorize the script again. As in the first round, they could memorize the script freely, but memorizing with other participants was forbidden. Three minutes were given for memorizing, which was shorter than in the first round.

(6) Starting Practice. The participants performed the whole script. During this step, the experimenter counted the number of mistakes described in Section 6.5. In the case of a mistake, the experimenter stopped the practice and restarted it from the turn in which the mistake occurred.

(7) Switch to the Next Script. After the participants completed practicing the first script, they repeated the procedure from (2) to (6) with the next script.

6.5. Data Collection

In the Speech-cue experiment, the experimenter counted the number of mistakes in speech order and in speech content. In the Speech + action-cue experiment, the experimenter counted the number of mistakes in speech order and in speech content, along with the mistakes in action order and in action content. The speech content did not have to be correct verbatim; changing the expression somewhat without changing the meaning of the line was not counted as a mistake. For example, inverting the order of words and phrases or replacing part of a phrase with synonyms was not judged as a mistake.

7. Result of the User Study

In the user study, we carried out two experiments: a Speech-cue experiment and a Speech + action-cue experiment [37, 38]. The numbers of mistakes in speech order and in speech content were obtained from all 16 teams, and the numbers of mistakes in action order and in action content were obtained from the eight teams in the Speech + action-cue experiment.

The result of the user study is shown in Figure 4. Because different scripts were used in the two experiments, the ratios of the system condition to the control condition were calculated and compared. Combining the two experiments, mistakes in speech order and in speech content came from 16 teams (48 participants), and mistakes in action order and in action content came from 8 teams (24 participants).

Figure 4: The result of the user study. The values are the ratios of the system condition to the control condition. Error bar shows the normalized standard deviation.

Table 3 shows the statistical differences in the measured items. Post hoc t-tests with the Bonferroni correction were applied. Significant differences between the system condition and the control condition were found in the speech order and the speech content. Marginally significant differences between the system condition and the control condition were found in the action order and the action content.

Table 3: Statistical differences in the measured items.
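For reference, the Bonferroni correction divides the nominal significance level by the number of comparisons. Assuming the four measured items (speech order, speech content, action order, action content) were treated as four comparisons, which the paper does not state explicitly, the corrected threshold would be

```latex
\alpha' = \frac{\alpha}{m} = \frac{0.05}{4} = 0.0125
```

so a difference would be reported as significant only if its uncorrected p value fell below this threshold.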

Using the system reduced the number of mistakes during the theatrical performance practice. This result supported H1, that the system reduces mistakes in speech and action order, and H2, that the system reduces mistakes in speech and action content.

8. Discussion

8.1. Effectiveness of Notifying the Action Order

In the user study, the system notified the actors of the order. As a result, the number of mistakes in speech and action order decreased compared with the control condition. The speech and action order during theatrical performance practice corresponds to the order of subtasks in the real-time serial cooperative work dealt with in this study. The order of subtasks is essential because once workers make a mistake in the order, it leads to the failure of the entire task. Therefore, we argue that the system is effective for the real-time serial cooperative work focused on in this study.

8.2. Effect of Difference in Two Types of Cuing Order

In the experiments carried out in this study, the system gave two patterns of vibration. To distinguish the one-shot vibration from the two-shot vibration, the lengths of the two vibrations in the two-shot pattern were varied. By changing only the vibration pattern, the system notified the actors of two kinds of cue, with or without action. As shown in Figure 4, the numbers of mistakes in speech order and in action order were both reduced. This result suggests that different vibration patterns can convey the proper order of different kinds of subtasks.

8.3. Effect on Content-Related Subtasks

In theatrical performance practice, actors must memorize their lines and actions in addition to their relationships with other actors. In this study, the system only presented the order, but the results confirm that the number of mistakes related to the content of the speech and actions also decreased, as shown in Figure 4. It seems that beginners in theatrical performance were able to concentrate on remembering speech and actions because automatically presenting the order reduced the number of items to memorize. This result suggests that notifying workers of the order of subtasks improves the quality of individual work.

8.4. Possible Effect on Learning Theatrical Performance

This study focuses on the school-play level of theater. After the practice, there will be an actual production of the performance. However, it may be possible to mitigate mistakes in the production even without this system. This can be inferred from the concept of scaffolding, which enables efficient learning by establishing a foothold for learning [40–43]. Originally, scaffolding was used to enable children or novices to solve problems beyond their unassisted efforts [44, 45]. According to Vygotsky, learners have a zone called the “Zone of Proximal Development (ZPD),” which is the distance between what a learner can do with help and what the learner can do without help [46, 47]. Scaffolding temporarily and dynamically supports learners in the ZPD [45]. As learners gain understanding, the scaffolding can fade over time as they take more control over their own learning [40, 42, 43, 45–48]. Therefore, learners can finally solve problems smoothly without scaffolding. Similarly, considering theatrical performance practice as learning content, the cuing system in this study can serve as scaffolding. The actors practice efficiently using the system, and they might finally be able to perform an actual play without making mistakes.

8.5. System’s Capability of Supporting Overlapping Turns

There was no overlapping utterance or action in the user study. However, we expect that our cuing system can support overlapping turns. Figure 5 shows examples of scripts including overlapping turns. Example (1) shows that actor B starts speaking while actor A is speaking. In this case, actor B’s smartwatch vibrates when actor A starts to speak. Example (2) shows that actors B and C start speaking while actor A is acting. In this case, actor B is notified when actor A starts acting, and actor C is cued when actor B starts speaking. In example (3), actors B and C start speaking and acting simultaneously during actor A’s action. Both actors B and C are cued when actor A starts his/her turn. In all of the cases (1) to (3), the system cues actors one turn before the actor(s) should start speaking or acting. Thus, the system can support various overlapping actions, including those shown in Figure 5.

Figure 5: Examples of overlapping turns.
8.6. Application to Other Real-Time Serial Cooperative Work

In this study, we considered three actors practicing a theatrical performance as an example of real-time serial collaborative work, and we verified the effectiveness of notifying action order. As a result, mistakes related to the order of speech and action decreased, along with mistakes related to the content of speech and action.

Considering the cooperative work covered by this study, it is believed that similar results can be obtained in similar real-time serial cooperative work. For example, dance performances and playing music with handbells are other examples of real-time serial cooperative work. These tasks also have subtasks whose order is essential to completing the entire task. Our method of cuing the order may be able to support these kinds of cooperative work.

However, it has been pointed out that vibrations can be difficult to sense when a worker is walking or running [49]. In this study’s experiment, every participant was able to sense the vibration because no participant was moving when the vibration was given. For other cooperative work, how to present the cue in other ways while paying careful attention to the state of the workers remains a matter for future research.

In this study, we discussed only tasks whose subtask order is determined in advance. As mentioned in Section 2.3, in a group discussion, for example, the order of each person’s remarks cannot be determined in advance. However, some studies have already proposed how to decide the appropriate person in a discussion [50, 51], that is, to determine the order of subtasks in real time. By adding such knowledge to this method, the proposed method is applicable to tasks whose order of subtasks is not predetermined.

9. Conclusion

In this study, we focused on real-time serial cooperative work, in which the order of subtasks is important. Workers who are unfamiliar with the order of subtasks may make a mistake that interrupts the entire task, so we support novice workers by coordinating the order of actions. In this paper, we dealt with theatrical performance practice as an example of real-time serial cooperative work, and we presented a method of coordinating the order of subtasks to reduce the number of mistakes. We used a cuing system that gave vibrations to each actor through a smartwatch to notify them of the action order. To verify whether order coordination by notifying the action order leads to mitigation of mistakes, we conducted a user study in which participants practiced theatrical performances, and we confirmed the effectiveness of the method. As a result, notifying actors of the order reduced mistakes in both the order and the contents. It was also suggested that this system could improve the quality of subtasks by helping the workers focus.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. J. Grudin and S. Poltrock, “CSCW, groupware and workflow: experiences, state of art, and future trends,” in Proceedings of the Conference Companion on Human Factors in Computing Systems, CHI’96, pp. 338–339, ACM, New York, NY, USA, April 1996.
  2. T. W. Malone, K. Crowston, J. Lee, and B. Pentland, “Tools for inventing organizations: toward a handbook of organizational processes,” in Proceedings of the Second Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, pp. 72–82, Morgantown, WV, USA, April 1993.
  3. J. B. Jørgensen, K. B. Lassen, and W. M. P. van der Aalst, “From task descriptions via colored petri nets towards an implementation of a new electronic patient record workflow system,” International Journal on Software Tools for Technology Transfer, vol. 10, no. 1, pp. 15–28, 2008.
  4. H. Trætteberg, Modeling Work: Workflow and Task Modeling, Springer, Dordrecht, Netherlands, 1999.
  5. S. Kwon, “The effects of knowledge awareness on peer interaction and shared mental model in CSCL,” in Proceedings of the 8th International Conference on Computer Supported Collaborative Learning, CSCL’07, pp. 851–855, International Society of the Learning Sciences, New Brunswick, NJ, USA, July 2007.
  6. J. M. Carroll, D. C. Neale, P. L. Isenhour, M. B. Rosson, and D. S. McCrickard, “Notification and awareness: synchronizing task-oriented collaborative activity,” International Journal of Human-Computer Studies, vol. 58, no. 5, pp. 605–632, 2003.
  7. T. W. Malone and K. Crowston, “The interdisciplinary study of coordination,” ACM Computing Surveys, vol. 26, no. 1, pp. 87–119, 1994.
  8. B. Y. Lim, O. Brdiczka, and V. Bellotti, “Show me a good time: using content to provide activity awareness to collaborators with ActivitySpotter,” in Proceedings of the 16th ACM International Conference on Supporting Group Work, GROUP’10, pp. 263–272, ACM, New York, NY, USA, November 2010.
  9. M. Czerwinski, E. Cutrell, and E. Horvitz, “Instant messaging: effects of relevance and timing,” in Proceedings of HCI 2000: People and Computers XIV, vol. 2, pp. 71–76, January 2000, https://www.microsoft.com/en-us/research/publication/instant-messaging-effects-of-relevance-and-timing/.
  10. G. Mark, D. Gudith, and U. Klocke, “The cost of interrupted work: more speed and stress,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’08, pp. 107–110, ACM, New York, NY, USA, April 2008.
  11. C. A. Ellis, S. J. Gibbs, and G. Rein, “Groupware: some issues and experiences,” Communications of the ACM, vol. 34, no. 1, pp. 39–58, 1991.
  12. C. V. Bullen and J. L. Bennett, “Groupware in practice: an interpretation of work experiences,” in Readings in Groupware and Computer-Supported Cooperative Work: Assisting Human-Human Collaboration, pp. 69–84, Morgan Kaufmann Publishers Inc., Burlington, MA, USA, 1994.
  13. S. Viller, “The group facilitator: a CSCW perspective,” in Proceedings of the Second European Conference on Computer-Supported Cooperative Work, ECSCW’91, pp. 81–95, Springer, 1991.
  14. A. B. Baskin, S. C. Lu, R. E. Stepp, and M. Klein, “Integrated design as a cooperative problem solving activity,” in Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, Volume III: Decision Support and Knowledge Based Systems Track, pp. 387–396, January 1989.
  15. A. Adya, J. Howell, M. Theimer, W. J. Bolosky, and J. R. Douceur, “Cooperative task management without manual stack management,” in Proceedings of the General Track of the Annual Conference on USENIX Annual Technical Conference, ATEC’02, pp. 289–302, USENIX Association, Berkeley, CA, USA, June 2002.
  16. F. Lyu, F. Tian, W. Feng et al., “EnseWing: creating an instrumental ensemble playing experience for children with limited music training,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI’17, pp. 4326–4330, ACM, New York, NY, USA, May 2017.
  17. S. Tsuchida, T. Terada, and M. Tsukamoto, “A system for practicing formations in dance performance supported by self-propelled screen,” in Proceedings of the 4th Augmented Human International Conference, AH’13, pp. 178–185, ACM, New York, NY, USA, January 2013.
  18. P. Kucherbaev, F. Daniel, S. Tranquillini, and M. Marchese, “ReLauncher: crowdsourcing micro-tasks runtime controller,” in Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, CSCW’16, pp. 1609–1614, ACM, New York, NY, USA, February-March 2016.
  19. A. Kulkarni, M. Can, and B. Hartmann, “Collaboratively crowdsourcing workflows with Turkomatic,” in Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW’12, pp. 1003–1012, ACM, New York, NY, USA, February 2012.
  20. J. Kim, P. T. Nguyen, S. Weir, P. J. Guo, R. C. Miller, and K. Z. Gajos, “Crowdsourcing step-by-step information extraction to enhance existing how-to videos,” in Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, CHI’14, pp. 4017–4026, ACM, New York, NY, USA, April-May 2014.
  21. K. Hara, V. Le, and J. Froehlich, “Combining crowdsourcing and Google Street View to identify street-level accessibility problems,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’13, pp. 631–640, ACM, New York, NY, USA, May 2013.
  22. L. B. Chilton, G. Little, D. Edge, D. S. Weld, and J. A. Landay, “Cascade: crowdsourcing taxonomy creation,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’13, pp. 1999–2008, ACM, New York, NY, USA, May 2013.
  23. D. Reidsma, A. Nijholt, and P. Bos, “Temporal interaction between an artificial orchestra conductor and human musicians,” Computers in Entertainment, vol. 6, no. 4, p. 1, 2008.
  24. P. Bos, D. Reidsma, Z. Ruttkay, and A. Nijholt, “Interacting with a virtual conductor,” in Entertainment Computing-ICEC 2006, R. Harper, M. Rauterberg, and M. Combetto, Eds., pp. 25–30, Springer, Berlin, Germany, 2006.
  25. M. Zancanaro, O. Stock, Z. Eisikovits, C. Koren, and P. L. Weiss, “Co-narrating a conflict: an interactive tabletop to facilitate attitudinal shifts,” ACM Transactions on Computer-Human Interaction, vol. 19, no. 3, pp. 1–30, 2012.
  26. A. Sauppé and B. Mutlu, “How social cues shape task coordination and communication,” in Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW’14, pp. 97–108, ACM, New York, NY, USA, February 2014.
  27. W. Dong and W.-T. Fu, “One piece at a time: why video-based communication is better for negotiation and conflict resolution,” in Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW’12, pp. 167–176, ACM, New York, NY, USA, February 2012.
  28. T. Okada, T. Yamamoto, T. Terada, and M. Tsukamoto, “Wearable MC system: a system for supporting MC performances using wearable computing technologies,” in Proceedings of the 2nd Augmented Human International Conference, AH’11, pp. 25.1–25.7, ACM, New York, NY, USA, March 2011.
  29. R. Asadi, T. Ha, H. J. Fell, and T. W. Bickmore, “IntelliPrompter: speech-based dynamic note display interface for oral presentations,” in Proceedings of the 19th ACM International Conference on Multimodal Interaction, ICMI 2017, pp. 172–180, ACM, New York, NY, USA, November 2017.
  30. S. Mitsuki, F. So, and K.-I. Okada, “Supporting actor’s voluntary training considering the direction of drama,” IPSJ Transactions, DCON, vol. 4, no. 1, pp. 1–9, 2016.
  31. F. So, Y. Go, K. Hiroshi, and K. Okada, “Supporting direction of acting in remote location by director to stage actor,” IPSJ SIG Technical Reports, vol. 44, pp. 1–6, 2014.
  32. A. Kirtland, “An unrehearsed cue script perspective on Love’s Labour’s Lost,” Actes des congrès de la Société française Shakespeare, vol. 32, 2015.
  33. J. A. Ward and L. Paul, “What’s my line? Glass versus paper for cold reading in duologues,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, UbiComp’16, pp. 1765–1768, ACM, New York, NY, USA, September 2016.
  34. A. Baddeley, “Working memory,” Current Biology, vol. 20, no. 4, pp. R136–R140, 2010.
  35. N. Cowan, “The magical mystery four: how is working memory capacity limited, and why?,” Current Directions in Psychological Science, vol. 19, no. 1, pp. 51–57, 2010.
  36. E. Awh, J. Jonides, E. E. Smith, E. H. Schumacher, R. A. Koeppe, and S. Katz, “Dissociation of storage and rehearsal in verbal working memory: evidence from positron emission tomography,” Psychological Science, vol. 7, no. 1, pp. 25–31, 1996.
  37. R. Takatsu, N. Katayama, T. Inoue, H. Shigeno, and K.-I. Okada, “A wearable system with individual cuing for theatrical performance practice,” in Collaboration and Technology, T. Yuizono, H. Ogata, U. Hoppe, and J. Vassileva, Eds., pp. 37–49, Springer International Publishing, Cham, Switzerland, 2016.
  38. R. Takatsu, N. Katayama, T. Inoue, H. Shigeno, and K.-I. Okada, “A wearable action cueing system for theatrical performance practice,” in Collaboration Technologies and Social Computing, pp. 130–145, Springer, Berlin, Germany, 2016.
  39. E. Pierazzo, An Encoding Model for Librettos: The Opera Liber DTD, King’s College London, London, UK, 2005.
  40. M. Erickson and L. R. Hales, “Increasing art understanding and inspiration through scaffolded inquiry,” Studies in Art Education, vol. 59, no. 2, pp. 106–125, 2018.
  41. J. Mousley, R. Zevenbergen, and P. Sullivan, “Improving whole class teaching: differentiated learning trajectories,” in Proceedings of Making Mathematics Vital: Twentieth Biennial Conference of the Australian Association of Mathematics Teachers, Inc., pp. 201–208, Sydney, Australia, January 2005.
  42. Great Schools Partnership, “Scaffolding definition,” The Glossary of Education Reform, Portland, ME, USA, https://www.edglossary.org/scaffolding/.
  43. Northern Illinois University, Instructional Scaffolding to Improve Learning, Fall 2008 Spectrum Newsletter, DeKalb, IL, USA, 2008, https://niu.edu/spectrum/archives/scaffolding.shtml.
  44. D. Wood, J. S. Bruner, and G. Ross, “The role of tutoring in problem solving,” Journal of Child Psychology and Psychiatry, vol. 17, no. 2, pp. 89–100, 1976.
  45. T. Gonulal and S. Loewen, “Scaffolding technique,” in The TESOL Encyclopedia of English Language Teaching, p. 1, Wiley, Hoboken, NJ, USA, 2018.
  46. L. S. Vygotsky, Mind in Society: The Development of Higher Psychological Processes, Harvard University Press, Cambridge, MA, USA, 1978.
  47. I. Verenikina, Scaffolding and Learning: Its Role in Nurturing New Learners, University of Wollongong, Wollongong, NSW, Australia, 2008.
  48. J. van de Pol, M. Volman, and J. Beishuizen, “Scaffolding in teacher-student interaction: a decade of research,” Educational Psychology Review, vol. 22, no. 3, pp. 271–296, 2010.
  49. T. Roumen, S. T. Perrault, and S. Zhao, “NotiRing: a comparative study of notification channels for wearable interactive rings,” in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI’15, pp. 2497–2500, ACM, New York, NY, USA, April 2015.
  50. Y. Takase, T. Yoshino, and Y. I. Nakano, “Conversational robot with conversation coordination and intervention functionality in multi-party conversations,” IPSJ Journal, vol. 58, no. 5, pp. 967–980, 2017.
  51. J. Gratch, A. Okhmatovskaia, F. Lamothe et al., “Virtual rapport,” in Intelligent Virtual Agents, J. Gratch, M. Young, R. Aylett, D. Ballin, and P. Olivier, Eds., pp. 14–27, Springer, Berlin, Germany, 2006.