This article describes a proposal and case study, based on mobile phones and QR Codes, to assist individuals with cognitive disabilities in their labour training and integration. The proposal, named AssisT-Task, is a fully functional mobile application for Android smartphones that offers step-by-step guidance, establishing a learning method through task sequencing. It has been tested with a group of 10 users and 2 types of labour tasks. Across 7 recorded sessions, we compared performance and learning progress with the tool against the traditional assistance method, based on paper instructions. The results show that people with cognitive disabilities learnt and performed better and faster with AssisT-Task than with the traditional method, particularly on tasks that require cognitive effort rather than manual skills. This learning has proved essential to achieving an adequate degree of personal autonomy for people with cognitive impairment.

1. Introduction

Autonomy is the main goal for people with any type of cognitive impairment, and support is critical to achieving a fulfilling life [1]. In this article, we focus on people with cognitive disabilities who may be recruited to work in a company and, particularly, on people with Down syndrome. These people are usually educated in labour centres, where special education teachers and labour tutors train them in several skills adapted to their profiles. Their curricula usually include internships in companies, and they are eventually hired afterwards. Therefore, educators have to teach their students a large number of skills, including stationery handling, office management, and reception or cleaning services.

The traditional way to train and teach these skills is task repetition over a long period. During this process, caregivers monitor users’ performance and provide oral or textual information. Although they usually provide direct supervision and support, they have to let the student progressively complete the task without any help. To this end, instruction-based support is usually provided, so that students can consult it and complete the task [2]. Traditional methods of support include verbal instructions, cards with text and/or pictures, and lists. Although this support is carefully developed by caregivers and experts, it presents some challenges to people with cognitive disabilities: they often have difficulty reading, finding their place in a text when they get lost, looking for the specific information of a certain instruction, and even understanding the instructions.

These issues motivated researchers to study how to introduce technological aids for the performance of daily-life activities, both in the learning process and at users’ homes or workplaces. In the literature, we find that most researchers proposed new devices that provide instructions through prompting-based interfaces. This approach requires users to learn how to operate a new device (usually wearable screens or specifically developed interfaces) and then use it to do the tasks. In other words, this training introduces a new challenge, using an unknown device, which entails a new learning process and changes in their learning paradigm, and its success might again be jeopardized.

Studies such as [3, 4] conducted interviews with people with cognitive disabilities and tried to find a common pattern of technology usage among them. Additionally, Hallgrenn et al. [5] applied the Everyday Technology Use Questionnaire (ETUQ) to 120 users with different levels of cognitive impairment. The study showed that the perceived difficulty of technology use among these people is directly proportional to the severity of their cognitive impairment, slightly influenced by the user’s interest in the device or the topic it covers. Lancioni et al. [6] considered that progress on assistive technologies should not be separated from progress on mainstream technologies and advocated new intervention strategies so that users get the most out of innovative technologies.

Therefore, it would be far better for users to rely on well-known devices, in a less intrusive manner, instead of introducing new, unknown devices into their lives. As will be presented in later sections, smartphones are very popular among them and fit these requirements perfectly. Even for users with no previous smartphone experience, learning how to use one would be advantageous, since these devices will become useful at some moment of their lives. Besides, Holzinger et al. [7] discussed the acceptance of technology and individuals’ tolerance to introducing a new device into their routine. A person with a cognitive disability, even after being trained in its use, will not use the device in his/her real life without total acceptance of it. The authors concluded that acceptance is strictly related to previous exposure to technology, so smartphones remain an optimal choice for our study. In fact, choosing a device with a high acceptance level decreases the risk of abandonment over time, a problem that affects almost 35% of assistive developments, as some studies have pointed out [8].

On that basis, we chose smartphones as the development platform. Our approach is based on an intuitive and uncomplicated way to assist in doing tasks: users only need to take their smartphones, launch the application, and follow the previously prepared sequence of instructions. Thanks to the ubiquity of these devices, the assistance is available anytime they need it. The process can be divided into two steps: task selection and task execution. As will be detailed in further sections, task identification can be challenging, so different approaches should be considered. In our case, we decided to release users from this work by tagging the environment with identification codes easily readable by the device. The cheapest, most widespread, and most error-tolerant tagging technology nowadays is the QR Code [9]: task information is coded into a visual mark, which is printed and placed near the spots where tasks must be done, providing pervasive assistance [10]. Current smartphones can read these marks through the camera, and the capturing-decoding procedure is easily integrated into applications. On the other hand, assistance to perform the task is offered as a prompted sequence of instructions, supplemented with visual and audio cues. This way, users receive the stimulus through different channels. Besides, thanks to the navigation controls, users can go forward and backward as they need.

In order to validate our development, we ran an evaluation with 10 young adults with Down syndrome from a labour training course. We used a hybrid methodology, combining elements from inquiry methods (e.g., direct observation during the trials) and test methods (e.g., focus groups), which provided us with objective information about users’ performance and knowledge acquisition.

This article is organized as follows: after the introduction, we present a review of the related work in the literature in Section 2. Then, in Section 3, the AssisT-Task system is described in detail. After that, the evaluation carried out is explained in Section 4 and the results are discussed in Section 5. Finally, we summarize the conclusions extracted from the experience and outline some future work lines in Section 6.

2. Related Work

Cognitive disabilities are related to mental and intellectual functioning and can be caused by several factors, such as genetic, congenital, and environmental ones. A widely accepted definition is the one by the American Association on Intellectual and Developmental Disabilities (AAIDD) [11]: “intellectual disability is a disability characterized by significant limitations in both intellectual functioning and in adaptive behaviour, which covers many everyday social and practical skills. This disability originates before the age of 18.” Intellectual functioning refers to mental capacity (reasoning, learning, and problem solving), while adaptive behaviour refers to conceptual, social, and practical skills (such as activities of daily living, occupational skills, and schedules).

Therefore, cognitive disabilities include different diagnoses, such as Alzheimer’s disease, traumatic brain injury, Down syndrome, autism spectrum disorders (ASD), and attention deficit hyperactivity disorder (ADHD). In this article, we focus on people with Down syndrome; however, some of the ideas and studies can easily be adapted to other cognitive disabilities.

Works on assistive technologies for people with disabilities rarely focus on cognitive impairments. Despite this, the literature still provides some interesting works. In this review, we focus particularly on assistive technologies that help people with cognitive disabilities do different tasks. Due to the wide variety of activities and contexts, we classified the works into three groups: daily-life activities at home, activities related to education, and activities related to the workplace.

2.1. Daily-Life Activities at Home

As daily-life activities at home, we usually consider every task or basic capability related to personal care (hygiene, dressing, food, etc.), instrumental activities (cleaning, meal preparation, transport, and money management), and relations with relatives, neighbours, and flat/residence mates. This context provides a basic level of independence and is closely related to age. Therefore, many researchers have been working on the empowerment of users at this level. A good example of a project related to our work is the “Memory Aiding Prompting System” (MAPS) [12]. It uses mobile technologies (i.e., PDAs) as assistive devices to present an interactive guide to doing activities at home. The guides can be composed of pictures and videos that caregivers prepare with a PC tool. Additionally, the system was extended to MAPS-LifeLine in [13] to provide assistance in the workplace as well as feedback to caregivers.

Another interesting project is ARCHIPEL [14], an intelligent kitchen equipped with sensors and actuators developed to help people with special needs in their daily lives. Thanks to visual and audio cues, users are guided in preparing meals and alerted in case of dangerous situations (e.g., leaving a stove on). Additionally, the environment has a touch screen to guide users through the activity by providing recipes, videos, and help. Along the same lines, the “TEeth BRushing Assistance” (TEBRA) project also employs intelligent systems, in this case to help users brush their teeth [10]. It uses sensors, cameras, and complex decision systems to recognize the action the user performs and automatically provide the next instruction.

Finally, PREVIRNEC is a distributed telerehabilitation system based on virtual environments that allows caregivers to design and adapt activities and rehabilitation programs to users’ needs [15]. This process can be done manually by the caregiver or automatically, thanks to the generated reports and their subsequent analysis.

2.2. Related to Education

Another key area that researchers in assistive technologies usually focus on is education. It includes the whole learning process at all stages, from kindergarten to higher education, as well as relations with other students, teachers, and the centres’ staff. Besides, innovative technologies are commonly integrated into the curricula as part of the learning areas.

Thus, we found projects such as Artifact-AR [16], an augmented reality system for cognitive rehabilitation. It uses a three-dimensional structure with tags and pointers to work on memory, sequencing, and image identification. The system was evaluated by therapists, who reported its suitability for rehabilitation and learning processes but also highlighted some limitations, such as its performance dependency on external illumination and the need for pointers with adapted fasteners.

Regarding the use of mobile devices, we found Picaa [17], a mobile platform specifically designed to help people with special needs in their educational process. It takes advantage of iOS features, such as the touch screen, accessibility support, and ubiquitous Internet access, to offer four types of activities: visual exploration of content, association, puzzles, and sorting activities. One notable feature of Picaa is that it includes the authoring tool and the users’ activities in the same application, so educators can develop content directly on the user’s device. From the evaluation, the authors concluded that Picaa helped in the development of basic skills, such as perception, attention, and memory.

Another interesting work using iOS devices (particularly, iPads) was presented in [18]. In this article, the authors contribute an empirical view of the iPad as a mobile learning tool for students with cognitive disabilities in postsecondary education. To do that, they distributed devices, previously loaded with applications, among a group of students and asked about their usage after a period of time. From the surveys and interviews, the authors concluded that, although all users were satisfied, they had difficulty choosing, configuring, and testing the most suitable application. However, once they found the proper one, they reported being very satisfied.

2.3. Related to the Workplace

One of the problems that people with cognitive disabilities have to address is the lack of autonomy. Authors such as Taylor and Hodapp [1] stated that their real independence relies directly on economic factors. Therefore, finding and keeping a job is key to promoting their autonomy. Thus, the workplace context includes all the skills necessary to get, keep, and develop remunerated work, as well as personal relations with colleagues, supervisors, or suppliers.

Thanks to the possibilities that current technologies offer, we can find novel examples of assistive developments such as ARCoach [19], which uses augmented reality to train people with cognitive disabilities in work tasks. In particular, the authors presented an evaluation of the system applied to meal preparation in a restaurant. Participants had to prepare a menu by selecting four different plates, represented by tags, and putting them on a tray in the appropriate order. An overhead camera recognized the tags and presented a virtual model of each plate on a screen. The menu was then automatically analysed by the system, checking the selected plates and their position on the tray. From the evaluation, the authors validated the system, since participants learnt the activity and retained this knowledge.

Another example of technology-based labour task training is Kinempt [2]. This system used the Microsoft Kinect camera to identify the action the user performed and compared it to the (previously recorded) action required to complete the task. Once the action is performed correctly, the system automatically provides the next instruction until the user finishes the task; therefore, no direct supervision is required. The system was validated with real users with cognitive disabilities in a pizzeria environment. They were asked to prepare pizzas with different ingredients with Kinempt support, and the results demonstrated the potential and suitability of this kind of development to make the training period easier and cheaper.

Finally, the literature also provides novel examples of mobile technologies applied to work inclusion and training. Smith et al. presented in [20] a study of the viability of mobile technologies as self-instruction devices. Three users with cognitive disabilities performed a labour-related task (upgrading a PC’s memory) with the support of a mobile phone loaded with videos explaining the procedure. From the data collected, the authors validated these devices as appropriate for labour training. Another interesting study of mobile devices (i.e., the iPod touch) as vocational task support was carried out by Gentry et al. [21]. In their article, they present three studies in which participants used an iPod touch loaded with commercial applications as a support system. During the experiments, the authors measured different time variables, such as time working and time needing direct supervision or external support. In general, thanks to the iPods, the time employed by the labour trainer was drastically reduced as the experiments advanced.

In summary, we found that mobile devices (such as smartphones or PDAs) are among the most promising technologies for assistance. Although other technologies, such as computer vision or smart environments, have been studied, they require additional, expensive infrastructure. In contrast, users’ familiarity with smartphones, their high penetration in society, and the increasing capabilities they offer make them suitable for task assistance in all three contexts: home, education, and workplace. Besides, they act as a motivator, which may help reduce the high abandonment rates of assistive technologies [8, 22, 23].

In general, assistance is delivered through different channels or modes (audio, images, video, and so on) that can be prompted automatically or by user interaction. That is, some works make the user ask for the next step, while others provide the next instruction automatically. Although both have their advantages and disadvantages, the manual approach makes users more conscious of their progress through the task, since they have to identify when a step is completed in order to ask for the next one. This reinforces the assistance-training duality [24]. In this sense, the MAPS project fits this idea perfectly, but it is strongly limited compared with current devices and interaction possibilities.

Besides, as demonstrated in different studies, mobile devices reduce the supervisor’s load, which may lead to cost reductions (in both time and human resources).

3. Materials

In the previous section, we presented a view of people with Down syndrome and current developments for their assistance. As noted, a few approaches could fit their needs, but they have limitations. Therefore, we decided to design, develop, and evaluate a novel application for Android smartphones.

AssisT-Task is a mobile system based on task sequencing and QR Codes that provides pervasive guidance for daily-life activities. To do that, we employ smartphones as prompting devices and performance recorders.

The operation has been simplified as much as possible. First, the caregiver defines the task by means of the set of steps that compose it, using the authoring tool provided. Once the task definition is ready, a QR Code containing its information is printed and placed in a suitable spot (e.g., close to the washing machine for the “doing the laundry” task). Then, users only have to open the application on their phones, point at the tag, and follow the steps to complete the task. Moreover, this guidance is adapted to the task, the user, and his/her needs.
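Since the printed tag only needs to identify the task, the QR payload can be kept very small. The following sketch illustrates one possible payload format; the `assist-task://` scheme and function names are assumptions for illustration, not AssisT-Task’s actual encoding.

```python
# Illustrative sketch: the printed QR tag only needs to carry the task id.
# The URI-like payload scheme below is an assumption, not the application's
# actual format.

def encode_task_payload(task_id: int) -> str:
    """Build the string to be coded into the printed QR tag."""
    return f"assist-task://task/{task_id}"

def decode_task_payload(payload: str) -> int:
    """Recover the task id from a scanned payload."""
    prefix = "assist-task://task/"
    if not payload.startswith(prefix):
        raise ValueError("not an AssisT-Task tag")
    return int(payload[len(prefix):])
```

On Android, a library such as ZXing can perform the capture and decoding through the camera, leaving only this small parsing step to the application.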

During the activity, the smartphone records every interaction that takes place. This way, caregivers would be able to reproduce and analyse users’ performance.

3.1. Data Model and Architecture

The system is based on a client-server architecture. The server stores all the information related to users and activities, while the client keeps a local copy of this information to provide offline assistance.
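This offline-first behaviour can be sketched in a few lines; the `load_task` helper and its server/cache interfaces are hypothetical, shown only to illustrate the fallback logic.

```python
# Illustrative offline-first lookup: try the server and refresh the local
# copy; without connectivity, fall back to the cached information so the
# assistance remains available.

def load_task(task_id, fetch_remote, cache):
    try:
        task = fetch_remote(task_id)
        cache[task_id] = task      # keep the local copy up to date
        return task
    except ConnectionError:
        return cache[task_id]      # offline: use the local information
```

The design choice here is that the client never blocks on the server: a connectivity failure simply degrades to the last synchronized copy.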

Activities (or tasks) are modelled as a set of steps (instructions) and other activities. They also have a name and a unique id, which is coded into a QR Code so they can be easily identified. Each step, in turn, is represented by a textual instruction and a descriptive image. Besides, steps have ordering relations with other steps or tasks to define the sequence. In order to adapt the task to the user and his/her needs, these relations are tagged, and two additional features are included in steps: repetitions and branches. This way, the sequence can be adapted at execution time (the sequence adaptation options are explained in detail in the next subsection).
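The data model just described can be sketched as follows; the field names and conventions (e.g., using 0 to mean “ask at execution time”) are our own illustration, not the actual schema.

```python
from dataclasses import dataclass, field

# Illustrative model of tasks and steps; field names are assumptions.

@dataclass
class Step:
    instruction: str                               # textual instruction shown to the user
    image: str = ""                                # descriptive picture for the step
    repetitions: int = 1                           # >1: repeat; 0: ask at execution time
    branches: dict = field(default_factory=dict)   # option label -> subsequence of steps
    skip_to: dict = field(default_factory=dict)    # user id -> index of that user's next step

@dataclass
class Task:
    task_id: int                                   # unique id, coded into the QR tag
    name: str
    steps: list = field(default_factory=list)
```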

Caregivers can create and edit tasks with the provided authoring tool. It consists of a graphical user interface that allows viewing all the available tasks, modifying them (changing images, descriptions, and features such as branches, repetitions, or user labelling), creating new tasks, deleting others, and exporting the QR Code. These features have been implemented in a drag-and-drop environment. An example of the interface is shown in Figure 1. As can be seen, the screen is divided into two parts: a left sidebar showing a tree scheme of the available tasks and a right frame presenting the steps that compose the selected task. In this example, the “make coffee” task is composed of seven steps.

Additionally, a toolbar in the upper part provides drag-and-drop icons to add tasks and steps, as well as buttons to delete and save the work, access the user adaptation mode, and generate the task’s QR Code.

3.2. Sequence Adaptation

Each user has a unique set of abilities, and his/her cognition level is very difficult to measure. Unlike other disabilities, the level of “cognitive prosthesis” required depends on a varying number of factors, some of which are quite subtle [8]. Therefore, there is an imperative need for flexible tools to assist people with cognitive disabilities. With that idea in mind, AssisT-Task allows caregivers to decrease the prompting level for each user: when designing tasks, each step is chained to the next by default, but it is possible to label a step specifying what the next step for a certain user is, skipping all steps between them. The intention of this feature is that the designer of the task (usually the caregiver or the person responsible for the job) makes it as fine-grained as possible, so users with lower cognitive levels have adequate assistance, whereas users with higher levels can do the same task in fewer steps, since they do not need as much help. This feature not only covers variability among cognitive levels but also avoids prompting dependency [25] and decreases the prompting level over time. This way, users have an adapted version of the task during their learning process: caregivers can program steps to be skipped as users progress and become able to perform the task with less assistance.
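The step-skipping mechanism can be illustrated with a minimal sketch; the dictionary-based representation, the `skip_to` label, and the user ids are hypothetical.

```python
# Sketch of per-user prompting-level adaptation: a step may carry a label
# mapping a user id to the index of that user's next step, so intermediate,
# finer-grained steps are skipped for users who no longer need them.

def adapted_sequence(steps, user_id):
    """Return the instructions a given user is actually shown."""
    shown, i = [], 0
    while i < len(steps):
        step = steps[i]
        shown.append(step["instruction"])
        i = step.get("skip_to", {}).get(user_id, i + 1)  # jump or advance by one
    return shown

task = [
    {"instruction": "Open the washing machine lid"},
    {"instruction": "Check that the drum is empty", "skip_to": {"user-2": 3}},
    {"instruction": "Remove any forgotten items"},
    {"instruction": "Load the clothes"},
]
```

Here a hypothetical “user-2”, who no longer needs the finest-grained help, jumps straight from step 2 to step 4, while every other user still sees all four steps.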

On the other hand, the system also provides mechanisms to adapt the sequence to the user’s and task’s contexts. To do that, we developed two mechanisms: repetitions and branches. Under some circumstances, it is useful to repeat a step a number of times. This number can be specified while designing the task, or users can be asked at execution time. This feature is supported by the benefit that people with cognitive disabilities get, in most cases, from mechanical instructions instead of complex or numerical instructions. For example, for the “prepare the mail” task, it is preferable to say “seal the next envelope” repeatedly instead of “seal ten envelopes.” The former makes the step atomic, clear, and understandable by the user, whereas the latter introduces conditional, complex information that can be difficult to understand for some users with a lower cognitive level. In fact, the user will likely internalize the complex component of these indications over time, in a more natural way than through a complex instruction.

The other adaptation mechanism is branching. In some daily-life activities, the sequence of steps varies depending on events during the performance or factors affecting the nature of the activity. For example, doing the laundry differs depending on whether the clothes are coloured, white, or delicate. With a linear model of steps, caregivers would have to design three different tasks, generate three QR Codes, and put them all near the washing machine. With three options this does not seem very problematic, but with many other activities, such as photocopying, with all its options for paper, zoom, density, and arrangement, it becomes rather impractical. Therefore, the application allows creating branches in the sequence, through steps that ask the user to choose an option to continue. For example, doing the laundry would have a step asking what kind of clothes the user wants to wash, with three possible answers: white, coloured, and delicate. Each option leads to a different subsequence of steps; depending on the activity being designed, these subsequences may (but need not) converge again before the end, in this case at the steps for turning off the washing machine.
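The laundry example can be sketched as a single branch step; the data layout, option labels, and programme names below are hypothetical illustrations of the idea.

```python
# Illustrative branch step: the user's choice selects a subsequence, and
# (in this activity) all options converge at the final shared steps.

laundry_branch = {
    "instruction": "What kind of clothes do you want to wash?",
    "options": {
        "white":    ["Select the whites programme"],
        "coloured": ["Select the colours programme"],
        "delicate": ["Select the delicates programme"],
    },
    "after": ["Press the start button", "Turn off the washing machine"],
}

def resolve_branch(step, choice):
    """Expand a branch step into the concrete steps for the chosen option."""
    return step["options"][choice] + step["after"]
```

With this structure, a single task definition and a single QR Code cover all three variants of the activity.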

3.3. Interface Design

The interface design process followed an expert-centred approach. That is, we had the support of experts and therapists from the Down Syndrome Foundation of Madrid and discussed different versions of the interface with them. It took three iterations to reach the final version of the interface. Figure 2 shows two screenshots of the interface. In (a), the user selection screen is presented. In the first stages of the design, we did not consider the smartphone being shared by different users, but educators suggested that this would be very useful in class; this way, all users would have the opportunity to use it. Therefore, we decided to include this optional screen, which asks users who they are in order to identify them and provide personalized assistance and a personalized registry. As can be seen, the interface has been simplified as much as possible. At the top of the screen, the interface presents users’ names and photographs. Then, there are two arrows to go to the next/previous user and an OK button to enter the application. Additionally, an exit button has been included at the bottom of the screen.

Once users select themselves, the QR Code screen is launched. It loads a camera view and automatically detects and decodes the QR Code it is pointing at. After that, the system requests the related information and loads the sequence of steps. An example is shown in Figure 2(b). At the top of the screen, right under the black title bar, the instruction is shown. The selected font is clean and big enough to be read easily. Right under the text, the descriptive image is included, taking up most of the available space. At the bottom of the interface, we included two navigation buttons to go to the next or the previous step. They have distinct, colour-blind-safe colours, chosen with a subtle intention: the next button is green, as a metaphor of positive reinforcement, since pressing it means advancing within the whole task, whereas the previous button is yellow, carrying a neutral connotation; it is not negative to go back and retry if you feel lost, but you have not advanced in the process. The meaning of both buttons is given by the arrow symbols on them. Educators explained that users tend to respond well to arrow indications representing directional messages. Finally, in addition to the textual instruction and the image, the interface can be configured to automatically read the instruction aloud on load.

As stated, the whole design process and all elements of the interface have been carefully studied and discussed with experts in special education. In particular, we have the following.

Texts: they must be written in a simple and natural style (a recommendation for the caregiver in charge of modelling the task), so that understanding them does not become a challenge for the user. Studies such as [26, 27] showed that, at a moderate level of cognitive disability, text-based instructions, if given properly, are more useful than instructions requiring interpretation of the information or understanding of metaphors. In fact, nontechnological support for daily-life activities is often given in text-only instruction format. Finally, reading also increases focus and attention.

Pictures: unless they are clear enough by themselves, they should be highlighted in the zones to which the user must pay special attention [28]; for example, if the step is turning on the copier, the attached picture would be the control panel of the copier, with a highlighted area around the on/off button.

Audio: when users reach a certain step, the text shown with the description is also read by a text-to-speech engine. It is also read when they touch the screen and when a certain amount of time passes without them interacting with the application. Spoken instructions have proved to be the most helpful prompting source in several case studies [27, 29].

Vibration: the device vibrates slightly when a configurable timeout expires. Although some studies tried to build prompting systems based only on device vibration, Mechling et al. [27] showed that vibration works better as a supplement to prompting systems based on multimodal support, so here it is used only to notify the user that some time has passed.

There are two particular cases related to step interfaces: repetitions and branches. As said before, the number of repetitions can be asked at execution time or set at design time. In the first case, we first present a new screen asking for the number of times to repeat the step. An example of this interface is shown in Figure 3(a). As can be seen, the system asks the user for the number of times he/she has to do the step: in this example, “press copy button.” With the blue up and down arrow buttons, users can choose the appropriate number and then press OK to continue. After that, the step screen is presented. It is very similar to the standard one, but a new message has been included, indicating the number of remaining repetitions. The corresponding example is shown in Figure 3(b). As can be seen, under the instruction, the system indicates the number of additional repetitions, in a smaller and lighter font.

On the other hand, branches are implemented as lists. The interface shows the instruction like any other step and, instead of the image, includes a list with the options the caregiver designed. An example of this interface is shown in Figure 4. At the top of the screen is the instruction: in this case, “select copy type.” In general, it is advisable to write it as a selection order. Under it, the list of options is shown. In this case, as part of the “make photocopies” task, users have to choose between single- and double-sided copies. Depending on their decision, the sequence will be different.

3.4. Interaction

The interaction with the application has been designed to be as simple and intuitive as possible. Apart from the workload required to understand the information that represents a step, users only have to navigate to the next or previous step, in a natural way. The button layout has been designed to be handy and comfortable while holding the mobile phone with both hands or with one; the latter is a desirable option for users who have acquired a good handling level with the device and are able to perform a task with one hand while holding the device with the other.

During the early stages of the design process, we ran different trials to test whether our designs were suitable for the users. One of the main issues we observed was that some users pressed the next button repeatedly, even without reading or listening to the information, thinking that this would allow them to finish earlier, which led to misunderstandings and errors in the performance. Therefore, we introduced a short delay before the navigation buttons are enabled: users cannot quickly press the next button and finish the task; they have to wait two seconds (this value is configurable) before being allowed to go to the next step. This way, they are forced to wait and pay attention to the information presented on the screen.
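The anti-rushing delay reduces to a small piece of timing logic. The sketch below models it outside Android (the class name is our own; the two-second default comes from the text, and the clock is injectable so the behaviour can be tested deterministically):

```python
import time

class NavigationGate:
    """Model of the anti-rushing delay: the 'next' button only responds
    once a configurable time has elapsed since the step was displayed."""

    def __init__(self, delay_seconds=2.0, clock=time.monotonic):
        self.delay = delay_seconds
        self.clock = clock          # injectable for testing
        self.shown_at = None

    def show_step(self):
        """Record the moment the step's information is presented."""
        self.shown_at = self.clock()

    def next_enabled(self):
        """True once the user has had time to read/listen to the step."""
        return (self.shown_at is not None
                and self.clock() - self.shown_at >= self.delay)
```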

People with cognitive disabilities often get overwhelmed when they have to remember how to do a task or when understanding an instruction is a bit more complex. Moreover, they usually get blocked and cannot continue; the caregiver then has to intervene and provide some stimulus for the user to continue. Regarding that, the application has been designed with a proactive philosophy, so the device not only expects interaction from the user but also requests it. If a certain time has passed since the step information was given, the device vibrates and reads the information aloud again; this behaviour is also configurable. This way, we encourage the user to try again to complete the step, or we draw his/her attention.
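This proactive behaviour amounts to a simple idle check, sketched below; the default timeout value and the cue names are assumptions (in the application, the timeout is configurable).

```python
# Illustrative idle check: once the configurable timeout expires without
# user interaction, the device issues its proactive cues (a slight
# vibration plus reading the instruction aloud again).

def idle_cues(last_interaction, now, timeout=15.0):
    """Return the cues to issue, given the time of the last interaction."""
    if now - last_interaction >= timeout:
        return ["vibrate", "read_instruction_aloud"]
    return []
```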

4. Evaluation

Even though the design process was assisted by experts, which reduces the probability of technology rejection, we decided to carry out an evaluation with users with cognitive disabilities. We wanted to evaluate the system from two perspectives: the first was related to its suitability in users' daily lives and the quality of the assistance provided; the second was to compare users' performance with our system against traditional support. In general, traditional methods include paper or cards with actions and pictures, verbal instructions, and direct supervision. They present advantages but, as we said before, also many disadvantages, such as the cost (in terms of human resources) and the difficulties many users have in finding information or recovering from an error.

4.1. Methodology

The methodology carried out can be considered a hybrid between inquiry and test methods [30]. The former involves interacting with and observing users during the experiments, group sessions, surveys, and statistical analysis of record files (typically log registries). The latter focuses on retrospective tests, such as video analysis, thinking aloud, the coaching method, and analysis of users' actions. In general, inquiry methods may lead to new design ideas, allow a better initial adjustment of the design, and help avoid usability issues, whereas test methods involve analysing the results to obtain general conclusions.

The evaluation process took place in a realistic working setting. Specifically, all the sessions took place in a labour training centre, furnished as an office with computers, bookcases, file cabinets, and shelves. Additionally, there are office-related devices, such as photocopiers, binding and laminating machines, and recycling points. This setting supports training in the common tasks users may have to carry out in their future workplaces.

In order to reduce the carry-over effect across the sessions, we prepared an incomplete factorial experiment design [31], based on the Latin square [32], but including repeated measures; that is, participants repeated the tasks several times, in order to get trained as they usually do with traditional methods of support. To do so, we asked the educators and labour trainers of the centre for two different tasks. They selected making photocopies (including configuration options) and archiving documents (according to different criteria). Regarding user profiling, we asked them to recruit participants who could be divided into two equivalent but heterogeneous groups (A and B), to avoid age, gender, or level biases.
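The counterbalancing scheme can be sketched as follows. This is an illustrative Python fragment (the actual assignment in Tables 1 and 2 was prepared by hand with the educators): the support rotates with the group index, so each group uses each support, each on a different task:

```python
def latin_square_assignment(groups, tasks, supports):
    """2x2 Latin-square assignment: rotate the support with the group
    index so every group meets every support, each on a different task."""
    assignment = {}
    for gi, group in enumerate(groups):
        for ti, task in enumerate(tasks):
            assignment[(group, task)] = supports[(gi + ti) % len(supports)]
    return assignment

# AT = AssisT-Task, PS = paper support (abbreviations as in Table 2)
plan = latin_square_assignment(["A", "B"], ["T1", "T2"], ["AT", "PS"])
```

With two groups, two tasks, and two supports, this yields the crossed design described above: Group A uses AssisT-Task on T1 and paper on T2, and Group B the reverse, so no group-task-support combination is confounded.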

Therefore, following the Latin-square experiment design, we assigned tasks, users, and support alternately. Table 1 summarizes the distribution of tasks, support, and groups. As can be seen, each group did each task using a different support. This way we ensured that results were not biased by the users' distribution or the task assigned.

Each user did each task once a week, over an eight-week period. We arranged with the centre to schedule the sessions during their workshop classes. This way, the experiment fitted naturally into their curricula and users would not feel pressured or coerced. We agreed to schedule the photocopying sessions on Monday mornings and the archiving sessions on Wednesday mornings. Each participant did the task individually, so there was no interference or interaction between participants.

Although support was assigned beforehand, we introduced some modifications in order to obtain reference values: during the sessions of the first, the fourth, and the eighth weeks, users performed the tasks with no support other than oral instructions at the beginning. Moreover, we left one week (number 7) without training before the last session, to check whether the acquired knowledge was retained over time. The resulting distribution of tasks, support, and groups is summarized in Table 2. Each week (from 1 to 8) is divided into two columns, the first corresponding to Task 1 (photocopies) and the second to Task 2 (documents archiving). Each row corresponds to a group (A or B), and the cells show the support used (X means no support, only oral instructions; PS, paper support; and AT, AssisT-Task). All the sessions were recorded, and two observers took notes about users' performance and any other relevant issue.

The number of repetitions was determined by the educators of the centre. After reviewing users' performance in week 6, they decided that most of the users were already doing their best, so we could conclude the experiment. Despite the limited number of sessions, educators reported that they usually schedule the same number of sessions but, due to time limitations, they cannot run each task once a week and have to extend the time between sessions to up to once every two weeks for each user. Therefore, our system already adds value here: it allowed a more time-efficient way of training.

4.2. Tasks

As said before, we carefully chose the tasks by asking experts about the most suitable ones. Thus, we prepared a list of requisites:
(1) The tasks should be interesting for users, both from the user's point of view (enjoyable) and from the training curricula point of view. This way, users may understand the experiment as part of their studies, which may avoid biases.
(2) They should be easy to arrange as a sequence of steps and standardized as much as possible. That is, they should be appropriate for all the participants.
(3) Tasks should be different enough to avoid carry-over effects, but relatively similar in terms of difficulty or time needed.
(4) As far as possible, tasks should not have been trained before.

According to these requisites and the experts' criteria, the first task (T1) consisted of making photocopies and the second one (T2) of archiving documents. Both tasks fulfilled all the requisites except the fourth. Since the training course takes two years and many of the participants were in their second semester, they had already been trained on photocopies and, some of them, also on sorting documents (but not archiving). Therefore, we decided to increase the difficulty of T1 by introducing feature configuration on the photocopier: users had to make one (and only one) copy of a map of the subway, but it had to be reduced from DIN-A3 size to DIN-A4 and the density had to be increased by exactly 3 points. This task was modelled as a sequence of 10 steps, including a branch step: users had to choose the type of copy from a list (simple, double sided, enlargement, or reduction). T2, in turn, consisted of archiving documents. In this case, educators prepared a set of contracts and put them, unsorted, on a desktop tray. Users had to put one of them in the proper file according to the following criteria: the document date had to be 2012-2013; then, they had to look for the WOP code (an invented array of letters and numbers) and, depending on it, choose the proper folder (there were three possibilities). Finally, they had to look for a company name in the first paragraph of the contract and archive the document alphabetically by that name. This task was modelled as a sequence of 9 steps, including a branch as well (asking for the folder name).
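The step-sequence-with-branch model used for both tasks can be sketched as a small data structure. This is a hypothetical Python sketch (`Step`, `BranchStep`, and `flatten` are our illustrative names, not the application's internal API):

```python
class Step:
    """One instruction shown to the user, optionally with a photograph."""
    def __init__(self, instruction, image=None):
        self.instruction = instruction
        self.image = image

class BranchStep(Step):
    """A step where the user chooses among options; each option leads
    to its own sub-sequence of steps."""
    def __init__(self, instruction, options):
        super().__init__(instruction)
        self.options = options  # e.g. {"reduction": [Step(...), ...]}

def flatten(steps, choices):
    """Resolve a task into the linear sequence actually followed,
    given an iterator over the user's choices at each branch."""
    result = []
    for step in steps:
        result.append(step)
        if isinstance(step, BranchStep):
            result.extend(flatten(step.options[next(choices)], choices))
    return result
```

For example, the photocopying task would branch on the copy type (simple, double sided, enlargement, or reduction) and continue with the steps of the chosen option, giving the step-by-step linear guidance the user sees on screen.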

Although both tasks were quite similar in terms of the number of steps and time to completion, they were intentionally different regarding the skills required: while T1 required manipulative skills (handling the paper, opening and closing the machine, and so on), T2 required strong cognitive skills, since users had to read, look for specific information within a text, and sort alphabetically.

Support materials for both tasks were also developed by the educators. For the traditional support (paper based), we used the materials they already employed in their courses. The Task 1 manual included instructions for making different types of photocopies on the same sheet of paper; it used highlighted fonts to separate the types of photocopies and numbered the steps of each activity. The traditional support for documents archiving was more elaborate and included examples and colours to highlight relevant information (such as one colour for each folder). The AssisT-Task support, in contrast, was developed specifically for this evaluation. It was based on the traditional support (same instructions), but we also included photographs for both tasks.

4.3. Users Profiling

Participants of labour training programs usually have mild to moderate cognitive disabilities. In many cases, they also have other disabilities, such as reduced vision or mild motor impairments. Despite their disabilities, most of them are able to read, understand simple instructions, and do basic arithmetic, and they have social manners and politeness. All these skills are usually acquired in previous stages, so the training now focuses on the abilities and capabilities typical of the workplace.

Thus, we asked educators to recruit participants according to each one's capabilities and the possible benefit they could get from the experience. In order to get a wide view of the field and attend to diversity, we asked them to select users of different levels, so we had some heterogeneity, both genders, and the typical age range (around 20 years old). Hence, they chose 10 users, 5 males and 5 females, who were 23.8 years old on average. The certified degree of disability ranges between 33% (the minimum required according to current law) and 75%. Although the certificate is necessary to access special education centres and government support, many educators and specialists rate their users according to different abilities along four dimensions: cognition, social skills, handling capabilities, and attitude. Each dimension comprises a set of characteristics, each usually rated from 1 to 3 (lower values mean lower capabilities). For this study, the most important characteristics are those of the cognition and handling capabilities dimensions, which are as follows.

Cognition
(i) Attention: the ability to keep concentrated on an object/action/task
(ii) Memory: the ability to hold and manipulate information in the short or long term
(iii) Instructions comprehension: the capability to understand and process simple and/or complex instructions
(iv) Flexibility

Handling Capabilities
(i) Mobility
(ii) Rhythm
(iii) Cleanliness

The distribution of users in groups and their profiles are summarized in Table 3.

As can be seen, most of the job profiles are office assistants, so the users are usually trained on the typical tasks they will have to carry out in their work. Therefore, the tasks fitted their curricula perfectly and were highly relevant to their training.

In order to establish their technological profile and familiarity with mobile devices, we conducted an interview. From their answers, we concluded that all participants had a mobile phone, but only 6 out of 10 considered it a smartphone. Tablets were also popular, but not all of them had one; some reported using a relative's. Regarding Internet access, all participants reported having a connection at home, and some also on their phones. As for smartphone usage, the most popular purposes were instant messaging (e.g., WhatsApp), photography, multimedia playback, and videogames. Besides, 9 out of the 10 participants reported using email and social networks (e.g., Facebook) frequently. Thus, despite their disabilities, they have technological profiles similar to those of people without disabilities of the same age.

5. Results and Discussion

In order to evaluate AssisT-Task in terms of the quality of assistance and suitability (regarding the improvement of users' performance), we made a retrospective analysis of the recordings, phone logs, and observers' notes. Traditionally, experts in special education assess users' performance with two factors: completion and time needed, that is, whether they finish the task properly and in less time. In our case, in agreement with the educators, all users had to finish the task. That is, if they made a mistake that would not allow them to finish the task correctly, the observer helped them to recover from the error. This decision was made in order to act as they usually do in the centre (with the traditional support).

Regarding data collection, we focused on the following measurements during the analysis:
(1) Completion time: measured as the time between the start and the end of the performance. This factor must be interpreted carefully, due to its weak representativeness when comparing between subjects: the fact that one user takes one minute while another takes five does not show an actual difference in performance quality. In general, some users simply take more time to complete a task than others, regardless of their success. However, this measure becomes an invaluable progress indicator when used within subjects, in other words, when comparing the time taken to complete a task in the first session with the last one.
(2) Errors: we counted an error when users did not follow the specified instruction. For example, in T1 we considered making two copies instead of one an error, since the instructions specified exactly one copy. In T2, a typical error was filing a document into an incorrect tab of the folder.
(3) Help requests: the number of times users asked the observers for assistance. In some cases, users did not know how to continue, got lost, or hesitated at some point of the task and asked directly. In other cases, they looked for approval or made gestures to indirectly call the observer (in the view of the experts).
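As an illustration of how such measurements could be derived from a session log, the following Python sketch assumes a hypothetical event format (timestamped "start", "end", "error", and "help_request" entries); the actual analysis combined recordings, phone registries, and observers' notes:

```python
def summarize_session(events):
    """Compute the three measures from one session's event log.
    `events` is a list of (timestamp_in_seconds, kind) pairs, where
    kind is 'start', 'end', 'error', or 'help_request'."""
    times = {kind: t for t, kind in events if kind in ("start", "end")}
    return {
        "completion_time": times["end"] - times["start"],
        "errors": sum(1 for _, kind in events if kind == "error"),
        "help_requests": sum(1 for _, kind in events
                             if kind == "help_request"),
    }
```

Summarizing each session this way yields one row per user and session, which is the shape needed for the within-subjects and between-subjects analyses described next.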

These measurements were analysed for each user and session to study their progress individually (within-subjects analysis). The evolution of each measurement along the sessions is represented in Figures 5, 6, and 7 (completion time, errors, and help requests, respectively). Each figure is composed of 4 graphs, one for every group, task, and support combination. Completion time is shown in Figure 5: (a) corresponds to Group A, T1, and AssisT-Task as support; (b) shows time results for T1 as well, but in this case for Group B, who used paper support; and (c) and (d) correspond to T2 and the respective support for both groups. As can be seen, most of the users present a decreasing curve, which means they needed less time to do the tasks as sessions progressed. The exception is graph (c), where this decrease is not so clear. We consider that this issue was caused by the complexity of the task and the fact that users got confused with the paper instructions. As we will see in later analysis, the other measurements are also unclear for this combination (Group A, T2, and paper support).

In relation to errors, we can observe similar behaviour in Figure 6. There is a tendency for the number of errors to decrease. The exception again is graph (c), which is noisy and shows no clear trend. In this case, participants who used AssisT-Task made fewer mistakes than the other group. At this point, we want to stress that the number of errors (or even the mere presence of errors) can prevent users from completing the task correctly, which is one of the most relevant factors to consider.

Finally, help requests are presented in Figure 7. As can be seen in the graphs, there were only a few requests during the sessions. Moreover, as we observed directly and later in the videos, most of them were very subtle and sought approval. Again, and for both tasks, users who were supported by AssisT-Task asked for help less often. This may be explained by the fact that participants did not get lost when using AssisT-Task.

On the other hand, if we analyse the behaviour between subjects, we can observe evidence of the influence of the support on the results. In Table 4, we have summarized the statistical analysis of the measurements for both tasks. As can be seen, for T1 there is no statistical evidence of the influence of the support, while for T2 there is: the number of errors and help requests is significantly lower for AssisT-Task. As said before, time factors are not as representative by themselves as other factors (such as errors). Making mistakes may lead to not completing the task correctly, which matters even more than doing activities fast. However, due to the limited number of users, the statistical analysis should be considered carefully and as an illustrative result.

As presented in Methodology, in sessions 1, 4, and 7, users had no support other than oral instructions at the beginning. In Table 5, we have summarized the average values of the time and error measurements for both tasks and groups (see Table 1 for the task, support, and group distribution). Help requests were also analysed but, due to the reduced number of values, they have not been included in this part of the study. Regarding the time needed, there is an improvement for both tasks and groups, except for T2 and Group A. That is, on average, all users improved except those who trained the cognitive task with paper support. Therefore, paper support seemed to be the least appropriate support for training tasks that require a higher cognitive load. On the other hand, if we focus on the average number of errors for T2, there is also a slight improvement in Group B, the group trained using AssisT-Task.

From the recordings, observers' notes, and focus groups with educators and labour trainers, we made a qualitative analysis. In general, all users handled the smartphone properly; that is, all of them held the smartphone in portrait mode, as the application was designed.

Regarding the QR scanning, all of them understood the process perfectly. Many of them referred to it as “taking a picture of the code.” Therefore, it was easy for them to point at the code with the phone and wait for the application to capture it.

Finally, another interesting conclusion we drew from the recordings and from the educators and labour trainers was the motivational component of the application. As young adults, most of the participants are very interested in new technologies. Therefore, using these technologies as part of their training made them keener to participate and do their best.

6. Conclusions

Although the evaluation revealed promising results, they should be considered carefully: we tried to include users with different capabilities and levels, which adds value to the experiment, but due to the limited number of users, the limited variation (in terms of type of disability), and the number of sessions, we cannot generalize the study to all people with cognitive disabilities. However, we think it is representative for a particular group: people with Down syndrome who are being trained to get a job.

First of all, we would like to highlight the variability of the results. As can be seen in Table 4, in some measurements the standard deviation is relatively high. This denotes the presence of many atypical values and a lack of normality. On the other hand, the statistical analysis (presented in the same table) shows the influence of the type of support on the number of errors and help requests in the T2 case. In contrast, we did not find any evidence for T1. This may be explained by the nature of the tasks: while T1 requires higher manipulative capabilities, T2 requires cognitive skills.

Secondly, and as said before, the completion time is usually related to knowledge acquisition, although it is not the most representative measure. We did not find any evidence of the influence of the support on this factor in our study.

In addition to the retrospective analysis, we carried out focus groups with educators and labour trainers. In their opinion, AssisT-Task fitted both higher and lower profiles well. Higher-profile users are usually more impulsive and try to finish the tasks quickly, regardless of whether they are doing them right or wrong; moreover, they are reluctant to follow fixed and repetitive orders, which can reduce their chances of getting a job. AssisT-Task proved ideal for them. At the other end, as reported by the educators after the experiment, one of the users (U9) was selected to participate although she had a very low profile and was not considered suited for cognitive tasks. Surprisingly, she was able to do the archiving task perfectly with the support, and satisfactorily without any kind of help. This fact suggests that AssisT-Task provides new opportunities for these users.

As future work, we propose to extend the trials, including more users and settings, as well as different tasks. Moreover, it would be very interesting to test the system in a real setting (company) and evaluate the impact of AssisT-Task in the work-inclusion process.

Additionally, educators suggested improving the authoring tool to make it available on tablets. This way all the design process could be done on-site.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

The authors wish to thank the Madrid’s Down Foundation staff for their collaboration in this research. The work has been partially funded by the following projects: “e-Training y e-Coaching para la Integración Socio-Laboral” (TIN2013-44586--R) and “eMadrid-CM: Investigación y Desarrollo de Tecnologías Educativas en la Comunidad de Madrid” (S2013/ICE-2715).