Abstract

Young children with multiple disabilities (e.g., both cognitive and motor disabilities) are confronted with severe limitations in language development from birth and later on. Stimulating adult-child communication can reduce these limitations. Within LinguaBytes, a three-year research program, we try to stimulate language development by developing an interactive and adaptive play and learning environment that incorporates tangible objects and multimedia content and is based on interactive storytelling and anchored instruction. The development of a product for such a heterogeneous user group presents substantial challenges. We use a Research-through-Design method, that is, an iterative process of developing successive experiential prototypes and testing them in real-life settings, for example, a center for rehabilitation medicine. This article outlines the development of the LinguaBytes play and learning environment from the earliest studies up to the current prototype, CLICK-IT.

1. Introduction

Typically developing children acquire language skills seemingly effortlessly. However, this is not the case for non- or hardly speaking toddlers with multiple disabilities [1, 2]. These children are confronted with severe limitations in language, early literacy, and communication development from birth on. While language refers to the use of words in spoken, written, signed, or other symbolic forms in either the expressive or receptive modality [3], early literacy refers specifically to early behaviours, such as storybook reading, that precede and develop into conventional literacy. These behaviours are evident in even very young children as part of informal daily experiences [4]. Communication refers to the transmission of meaning from one individual to another, whatever the means used (verbal, with and without speech; nonverbal, with and without vocal output) [5]. Communication implies a process of social interaction. In this article we focus on language development, with the notion that language development can be stimulated in adult-child communication and that early literacy activities are an important context for stimulating language development.

A major part of the non- or hardly speaking toddlers with multiple disabilities have the diagnosis Cerebral Palsy (CP). CP is an umbrella term for a group of disorders caused by nonprogressive damage to the immature brain before, during, or shortly after birth, with motor disabilities as a consequence. The fact that it is a nonprogressive disorder means that the brain damage does not worsen, but secondary deformities are common. The word cerebral means that the brain is injured; the word palsy refers to a weakness in the way a person moves or positions his or her body. Subgroups of CP have been classified according to clinical signs: spastic (70–80 percent), which is characterized by muscles that are stiffly and permanently contracted; athetoid (10–20 percent), which is characterized by uncontrolled, slow movements; and ataxic (5–10 percent), which affects depth perception and the sense of balance. Depending upon which muscle groups are affected, CP may also be classified as monoplegic, triplegic, or quadriplegic, for one, three, or four limbs, respectively; diplegic usually refers to both legs being affected, and hemiplegic to one side of the body. A toddler with CP has trouble controlling the muscles of the body; the child might not be able to walk, talk, eat, or play the way most children do.

If the part of the brain that controls speech is affected, a child with CP might have trouble talking clearly. Another child with CP might not be able to speak at all. So the limitations in language development can arise directly from the brain injury. Other factors that contribute to the language limitations are as follows.

(i) Motor Problems. Because arm and hand function is delayed, these toddlers have restricted access to their environment and therefore an impoverished experiential base for language development [2]. Another result of motor problems is that the facial, gestural, and verbal expressions of toddlers with multiple disabilities can be hard for their caregivers to interpret, making it difficult to understand what the children are trying to communicate, especially since communication with non- or hardly speaking children is highly dependent on nonverbal expressions. As a consequence, these children receive fewer communicative reactions than typically developing children, or only reactions that are less rich in information. This leads to further impoverishment of the child's opportunities for language development.

(ii) The Requirement of Much Physical Care. Because a lot of time is needed for physical care, less time and attention is left for caregivers to spend on play and communication. The toddlers miss opportunities for learning from their caregivers and surroundings, which leads to a restricted environment [6].

The limitations in language development can also have serious repercussions on other developmental areas, such as social, emotional, and personal development, since at this age the development of all skills is interdependent. Early intervention including augmentative and alternative communication (AAC) is essential to minimize these negative impacts and to stimulate the development of language, emergent literacy, and communication [7]. The effectiveness of early intervention programs can be reinforced by the use of multimedia technology [4]. Although there are many contexts in which early intervention can take place, one receiving recent attention in the AAC literature is interactive storybook reading [8], since it plays an important role in the early development of language and literacy skills of young children [9].

Following a short inventory study of multimedia computer programs for stimulating the language development of toddlers with multiple disabilities in the Netherlands, we concluded that multimedia technology incorporating AAC to stimulate language development was not available. It was decided to examine in a preliminary study whether there was a need for multimedia technology that includes AAC and, if so, what the guidelines for such a multimedia program should be.

In this article, we describe how this preliminary study has led to LinguaBytes, a three-year research program, aimed at developing an interactive and adaptive play and learning environment for stimulating the language development of toddlers with multiple handicaps. We will describe (1) the abovementioned preliminary study; (2) the development and evaluation of a first prototype, the E-Scope; (3) the development and evaluation of the follow-up prototype, KLEED; and (4) the current prototype, CLICK-IT, along with preliminary findings.

2. Preliminary Study

The aim of the preliminary study was twofold: (1) to conduct a needs assessment and define initial guidelines and (2) to build and evaluate a preliminary program based on these guidelines [10]. The methods used were a literature study and expert consultation using the Delphi method [11], a method for obtaining judgments from a panel of independent experts.

The literature study showed that 50% of the non- or hardly speaking toddlers with multiple disabilities have problems with their language development and early literacy [12, 13]. The transition from the prelinguistic to the linguistic period is the most important phase of language development, in which the foundation for further language development is laid. Another finding was that young children are more distracted by details in two-dimensional graphics than older children are [14]. The results of the literature study were used as starting points for formulating the propositions used in the Delphi method.

The invited experts (2 linguists, 5 educational psychologists and speech therapists working with toddlers with multiple disabilities, 3 computer scientists, 3 teachers in special education, and 1 industrial designer) were asked to react to propositions in two subsequent phases (36 propositions in the first phase and 27 in the second) via the Internet. The propositions were categorized into propositions about the target group (including the need for a program), the content, AAC, graphics, devices, and instructions and support. An example of a proposition in the category "graphics" is "To show the symbolic function of a concept an animation of the concept is needed." On a five-point scale, the experts indicated how much they agreed or disagreed with each proposition and argued why. In the second phase, the propositions on which almost all experts agreed or disagreed were left out. The other propositions were reformulated or specified based on arguments given in the first phase. An example of a more specific proposition in the category "graphics" is "For each concept it should be considered whether an animation supports the meaning of the concept. In the case of verbs and dynamic concepts animations are needed. In the case of nouns like 'house' and 'tree' animations are not needed." Anonymous summaries of the experts' opinions from the previous phase, as well as the rationale behind their judgments, were given. Thus, experts were encouraged to revise earlier answers in light of the replies of other members of the group.

The results of the preliminary study confirmed the need for a multimedia program, and the results of the Delphi study led to a first global set of project guidelines.

2.1. Guidelines

The guidelines concerning the development of a preliminary LinguaBytes program were defined as follows.

(i) Target Group. The LinguaBytes multimedia computer program should aim at toddlers with a developmental age between 1 and 4 years.

(ii) Language Development. The content of the multimedia program should cover the transition from the prelinguistic to the linguistic period and the linguistic period itself, with an accent on the early linguistic period. This outcome was also supported by literature [8, 15].

(iii) Content. The content should contain interactive story reading and story-related exercises that provide appropriate vocabulary for the child to explore, predict, and practice.

(iv) Levels of Difficulty/Adaptivity. The program should offer different levels of difficulty and grow along with the developing child.

(v) AAC. The program should make use of AAC and at least contain picture communication symbols (PCSs), since this is the most used form of supported communication for toddlers.

(vi) Independence. The child should, as much as possible, be able to use the program independently, which means that the child should be able to start, stop, and replay parts of the program. Replay is especially important because toddlers enjoy rereading stories, which has been shown to be very powerful in supporting language, emergent literacy, and communication development [16]. Because children with disabilities often lack the possibility for independent exploration, it is important to give them these opportunities. This will have a positive impact on their social-emotional development.

(vii) Graphics. The graphics should be simple, without too many distracting details; animations should be used when needed, for example, in the case of dynamic concepts like verbs.

2.2. Program

To verify the results of the Delphi study, a prototype was built: a computer program containing a nine-scene story about a boy who is going to sleep (Figure 1). This subject is close to the daily experiences of the toddler. The "core words" of the story were derived from Dutch word lists. Some core words in the story, like "pyjama," "sleep," and "toothbrush," were highlighted on screen using PCSs. Animations were only used to illustrate dynamic concepts like "undress" and "brushing teeth." The computer program supported the use of "traditional" PC input devices (mouse, trackball, etc.). By clicking one of the navigation icons at the bottom of the screen, the toddler could stop the story, go to the next or previous scene, or replay the current scene. The program was presented in a plenary meeting with the experts who had participated in the preliminary study.

2.3. Evaluation

After demonstrating the prototype, the experts were divided into smaller groups and asked to evaluate the prototype focusing on (1) the themes that should be incorporated in the content, (2) the graphical interface, and (3) the user interface in general. The experts were positive about the content of the story and proposed several other themes, for example, "eating and drinking," "animals," and "taking a bath." With regard to the graphical interface, the experts mentioned that buttons like the stop and forward buttons should not be shown all the time. Concerning the design of the user interface and the hardware, several aspects with greater impact were identified, as follows.

(i) The program should be adjustable to the sensory-motor skills of the child to optimize the interaction for each individual child. If the designed interaction does not fit the child's skills, the child will be less motivated to engage with the program and will eventually stop using it. This does not benefit the child's language development.

(ii) The program should appear more as a toy than as a PC-based computer program, for two main reasons. Firstly, practically none of the multimedia play and learning applications that have been developed for toddlers with multiple handicaps (mostly traditional, PC-based software) support the explorative, natural interaction style of toddlers, making these programs less appealing than many toys. Most computer programs are assignment-based and solitary, and do not support the child's urge to explore. Interacting with a PC is simply not rich and social enough for toddlers. Our prototype was no exception. Secondly, the structure (menu-based decision making) and input (mostly button-like) of most programs are not suitable for toddlers, due to the high cognitive load [17].

These aspects led us to conclude that, in order to stimulate the language skills of toddlers with multiple disabilities, we should design a different interaction, better tailored to their individual skills and needs. This should be a richer system that facilitates active exploration and interaction with the environment and integrates interactive storytelling and AAC, capitalizing on new technology (embedded intelligence, sensor technology, tangible input systems). This could lead to improved and more effective play and learning systems for toddlers with multiple disabilities [18].

3. Follow-Up Study

To research what such a product should look like, what its content should be, and how it would be used by toddlers with multiple handicaps, the LinguaBytes project, a three-year research program, was started. LinguaBytes aims to develop an adaptive and interactive play and learning system for stimulating the language competencies of toddlers with multiple handicaps, aged 1 to 4 years.

3.1. Method

The development of a product for this highly heterogeneous user group is a complex process, in which numerous choices have to be made that are impacted by factors related to the child (e.g., motor, cognitive, and linguistic skills, interests, and attractiveness), to the therapist or parent (e.g., efficiency to learn, maintain, and develop), and to the product itself (technology, material, costs). In order to keep this process structured and efficient, we use a constructive research method, in our field of design research more commonly known as "Research-through-Design." This is a process in which scientific knowledge is generated through iterations of designing, building, and testing experiential prototypes in real-life settings [19]. When designing interactive products like the LinguaBytes product, this process typically moves through several cycles of designing, building, and testing, each cycle yielding refined guidelines for the content and the design of the product. This means that early iterations are often more diverging in character (focused on mapping out all aspects involved in the project) and later iterations more converging (refining within these aspects). As a consequence, research activities, such as literature search, are often repeated throughout the process at different levels of detail: from global to specific knowledge. Subsequent cycles are evaluated using process or formative evaluation [20, 21]. The aim of formative evaluation is to collect data with which the product can be improved.

Below we describe two cycles, in which two explorative prototypes of the LinguaBytes project were developed and tested: the ExploraScope and KLEED. Finally, we outline the evolution towards a more definitive prototype, called CLICK-IT.

4. ExploraScope

The preliminary study showed that there was a need for an early-intervention multimedia program to stimulate the language competencies of toddlers with multiple disabilities in the Netherlands. It also showed that, for this program to be successful, it should appear more as a toy than as a computer program in the traditional sense and should be highly flexible in order to create optimal learning settings for individual children. In the LinguaBytes project we have taken these guidelines as the starting point for actually designing such a system.

4.1. Guidelines

In order to get more insight into the scope of these guidelines, we conducted a broad literature search and subsequently built and tested several cardboard models, mockups, and semifunctional 3D sketches. This resulted in an extension of our design guidelines, as follows.

(i) Playing. Very young children learn mostly through play [22]. Play permits making mistakes and trying again. Therefore, the interaction with our system should be playful, in order to motivate the child and stimulate exploration [17, 23–25]. Within LinguaBytes, this could be done by taking the initiative away from the computer and giving it to the child, for example, by offering materials with which the child can control the content of the program.

(ii) Social Interaction. The new toy should focus on stimulating interpersonal interaction [26], because stimulating the communication between caregiver and child is essential [27–29]. This means, for example, that the LinguaBytes system should shift from solitary use to collaborative use, compared with PC-based programs [23, 30].

(iii) Tangibility. Especially for very young children, who naturally explore the world through play, interaction should draw on all bodily skills. Tangible interfaces [31], for example, offer a number of advantages over the standard PC interface: stimulation of multiple senses and skills [32, 33], affording both action and play, offering a slower pace [34] and thus more room for social interaction, a more personal interaction style, more involvement, and a more active interaction [34].

(iv) Challenge. The interaction should be challenging. Challenge is a key element of motivation [35]. It engages children by stimulating them to reach for the boundaries of their skills. We wish to challenge children by designing interactions that are appealing, rewarding, engaging, and fun. This also means that the content of our system should be tailored to the developmental level of the child.

(v) Technology. The LinguaBytes system should be highly adaptive to individual users to enable the diverse group of multihandicapped toddlers to use it independently. This optimizes the learning setting and avoids frustration. Supporting such adaptability requires advanced technologies, which are not capitalized on today. Embedded intelligence, wireless networking, and interactive, adaptive narratives offer possibilities for innovative designs. For example, small motors and sensors could be integrated into the interface to react to the child's behaviour during interaction, or even to trigger behaviour.

(vi) Appeal. We wish to design products that are appealing to both disabled and able-bodied children by making products that resonate with them. Designs should be nonstigmatizing and can benefit from success formulas from the toy industry [7].

We have used these guidelines to design a new prototype called ExploraScope, or E-Scope.

4.2. Design

The E-Scope is a tangible controller that enables young children to learn simple concepts (e.g., sleep, clock, bear) through tangible interaction and play [18]. The E-Scope consists of a wooden ring-shaped toy with sensors and actuators, a computer with a wireless station, and a monitor. The E-Scope and the computer communicate through radio transceivers. All sensors, actuators, and batteries are built into the ringed layers of E-Scope.

E-Scope is adaptable to a child in the sense that it can be used in different configurations (Figure 2) to suit a child's preferred interaction style. A child can listen to stories or play educational games by rolling E-Scope over pictures that are lying on the floor. Each picture triggers a matching one-scene story. The buttons can be used for further deepening of the linguistic concepts within the scene. For example, within a scene about a goat at the farm, pushing a button can trigger auditory output (e.g., the sound of the goat, the word "goat," a song about the goat) or visual output (e.g., the PCS of a goat, a different picture of a goat), or be used to highlight parts of the goat (legs, belly, tail, etc.).
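To make this picture-to-content mapping concrete, the sketch below shows one way the floor pictures and ring buttons could be wired to scenes and "deepening" output. It is a minimal, hypothetical Python sketch rather than the actual E-Scope software (which ran on a PC communicating with the toy over radio); all tag identifiers, button names, and file paths are invented for illustration.

```python
# Hypothetical sketch: routing E-Scope input to story scenes and "deepening" output.
# All tag identifiers, button names, and file paths are invented for illustration.

SCENES = {
    "tag_goat": "scenes/farm_goat",  # one-scene story triggered by the goat picture
    "tag_bed": "scenes/jitte_bed",   # one-scene story triggered by the bed picture
}

# Per-scene content triggered by the buttons on the E-Scope's upper ring.
DEEPENING = {
    "tag_goat": {
        "button_1": "audio/goat_bleat.wav",  # the sound of the goat
        "button_2": "audio/word_goat.wav",   # the spoken word "goat"
        "button_3": "audio/goat_song.wav",   # a song about the goat
        "button_4": "images/pcs_goat.png",   # the PCS of a goat
    },
}

def on_picture_detected(tag_id):
    """Called when the E-Scope is rolled over a tagged picture on the floor."""
    scene = SCENES.get(tag_id)
    if scene:
        print(f"[play scene] {scene}")

def on_button_pressed(current_tag, button_id):
    """Called when a ring button is pushed within the current scene."""
    content = DEEPENING.get(current_tag, {}).get(button_id)
    if content:
        print(f"[deepening] {content}")

if __name__ == "__main__":
    on_picture_detected("tag_goat")            # rolling over the goat picture
    on_button_pressed("tag_goat", "button_2")  # hearing the word "goat"
```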

If another configuration is preferred, E-Scope can also be used on a table or other workspace. By rotating the upper part of the ring and pushing its buttons, which are connected to individual scenes of the story, a child can interact with stories shown on an integrated or a separate screen, depending on the ergonomic and social requirements. If required, E-Scope can also be attached to alternative input devices, for example, single-button or eye-movement interaction. In this last case, the upper ring is rotated by means of a motor.

4.3. Content: Story

The story offered through the E-Scope, a linear story about a girl called Jitte who is going to sleep, aims at being rich and engaging. To this end, it uses a variety of visual and auditory outputs such as photos, drawings, symbols, sounds, and songs. The graphical style aims at being realistic for optimal recognition of the concepts to be learned, although this leaves little freedom to stimulate the children's imagination.

4.4. Evaluation

The E-Scope was tested with configurations in Figures 2(a) and 2(c) with three children and three therapists in the center for rehabilitation medicine St. Maartenskliniek in Nijmegen, the Netherlands. Each session took half an hour and was conducted during a “regular” speech therapy session. The sessions were videotaped and the therapists were interviewed after the sessions.

The outcome was that the overall concept of the E-Scope, enabling young children to learn simple concepts (e.g., sleep, clock, bear) through tangible interaction and play [18], was useful and promising. The children were excited by the stories and graphics and showed good concentration. The therapists were positive about the toy-like design and its playful, sensorial character. They were enthusiastic about the diversity in interaction styles but encouraged further adjustability for a more personal fit. The product should make more use of physical objects that can be adjusted to the (cognitive and motor) skills of the child. The interaction style should also be adaptable, because for some children it would be too hard to push the (correct) buttons; moreover, the graphics on the buttons were relatively small and could not be adjusted. Finally, one therapist indicated that she wanted an integrated screen to enhance social interaction by sitting opposite the child with E-Scope in the middle (configuration b). Unfortunately, integrating a circular screen in the E-Scope's ring would be costly. We have not yet solved this problem but have built and tested a mockup version of this configuration. In this mockup, the E-Scope was placed in a fixed position over a circular tabletop projection. This proved to facilitate the desired eye contact with the child but, of course, heavily limited the freedom to move and explore the E-Scope.

5. KLEED

The tests with E-Scope showed that a more playful, toy-like interface has great potential, provided it is tunable to the skills and needs of the individual child, not only cognitively but also physically. The E-Scope already had some flexibility in terms of configuration and content, but the therapists indicated the necessity of further adjustability. Building on these results, we extended our literature search to further deepen our guidelines, and built and tested two semifunctional mockups, which led to the development of our second prototype, kids learn through engaging edutainment (KLEED).

5.1. Guidelines

Based on the results of the evaluation of the E-Scope, additional guidelines were formulated as follows.

(i) Physical Objects. The LinguaBytes system should allow for the use of a child's own preferred physical objects and AAC systems. This is important since not all toddlers are capable of symbolizing the world into abstract representations [36]. They should be able to use materials they know as a starting point.

(ii) Adaptability and Adaptivity. This also means that the system should be flexible enough to support different input materials and levels of abstraction or difficulty. The LinguaBytes system could benefit from database technologies to set initial settings per child and monitor the child's development.

(iii) Costs. The LinguaBytes system should be affordable, despite being innovative; otherwise it will not be feasible. To achieve this, the system could benefit from the advantages of modularity.

5.2. Design

Based on these and other guidelines, KLEED was developed (Figure 3). KLEED is a modular system consisting of exercise mats that can be connected to a central console; upon these mats, a standard set of tagged objects and additional tagged personal materials can be used to hear and respond to interactive stories and exercises. Apart from the exercise modules, a separate module for navigating through stories was developed. The modularity, tangibility, and adaptability of the system all add to its playfulness, appeal, and challenge, making it motivating for the child to use and learn with.

The central console contains a 15-inch flat-screen monitor and the electronics for connecting exercise modules to the system. The position of the monitor can be adjusted to the optimal learning setting of individual toddlers. This means that the screen can be placed in a horizontal position, enabling the use of KLEED on the floor or a table, as well as in a range of tilted positions. The central console is embedded in a sleeve of soft and friendly material, which can be washed separately after the screen and electronics are taken out.

Exercise modules can be easily attached to the console in different setups (Figure 4), enabling both individual and collaborative use, thus stimulating social interaction. Every exercise module has its own goal, for example, exercising phonological awareness (through rhymes or songs), semantics, syntax, or just free play. By giving every module its own goal, each can be designed optimally for the type of exercise, making the interaction more intuitive, engaging, and suitable for toddlers. Materials, textures, colours, sounds, and so forth will therefore vary between exercises, thus offering a wide range of sensory stimuli. Each module supports different difficulty levels, depending on the development of the child.

All parts of the prototype were made interactive using Phidgets sensors [37], Macromedia Flash, and MAX/MSP, a widely used graphical programming environment [38].
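The actual event handling was built with Phidgets, Flash, and MAX/MSP, so the following Python sketch is only a hedged stand-in that illustrates the underlying architecture: raw events from mat sensors are forwarded to whichever exercise module is currently attached, and each module reacts according to its own goal. The class names and sensor identifiers are assumptions, not part of the KLEED implementation.

```python
# Hypothetical Python stand-in for the Phidgets/Flash/MAX-MSP pipeline: raw sensor
# events are routed to whichever exercise module is currently attached to the console.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorEvent:
    sensor_id: str  # e.g. a pull cord in a mat or an RFID reader under the mat
    value: str      # e.g. "pulled" or the identifier of a tagged object

class ExerciseModule:
    """Each module has its own goal and its own way of reacting to input."""
    def handle(self, event: SensorEvent) -> None:
        raise NotImplementedError

class RhymeModule(ExerciseModule):
    def handle(self, event: SensorEvent) -> None:
        print(f"[rhyme module] plays a rhyme for {event.sensor_id}={event.value}")

class FreePlayModule(ExerciseModule):
    def handle(self, event: SensorEvent) -> None:
        print(f"[free play module] reacts playfully to {event.sensor_id}={event.value}")

class Console:
    """Central console: holds the attached module and forwards sensor events to it."""
    def __init__(self) -> None:
        self.active: Optional[ExerciseModule] = None

    def attach(self, module: ExerciseModule) -> None:
        self.active = module

    def on_sensor(self, event: SensorEvent) -> None:
        if self.active is not None:
            self.active.handle(event)

if __name__ == "__main__":
    console = Console()
    console.attach(RhymeModule())
    console.on_sensor(SensorEvent("cord_left", "pulled"))
```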

5.3. Content
5.3.1. Stories

Along with the KLEED prototype, two stories and two exercises were developed within the semantic category "people." Both stories consisted of nine scenes. The first story concerns two children, Tom and Tess, who are playing with a ball, and daddy, who wants to join them; when he does, he falls down and tears his trousers, so he has to go home to put on new ones. Some core words are "daddy," "cuddle," "play," "join," and "help." The second story also starts with Tom and Tess playing with a ball, but now a woman with a baby appears. The children want to see the baby and give the baby a kiss. They sing a song for the baby. Then the baby falls asleep and the woman with the baby goes home. Some core words in this story are "woman," "baby," "cuddle," "kiss," "sing," and "sleep." The core words for the stories and exercises were chosen on the basis of three Dutch word lists: the N-CDI [39], the Lexilijst [40], and the list Duizend-en-een-woorden [41]. In each scene, up to three PCSs were shown to emphasize the core words in the scene.

5.3.2. Exercises

The aim of the first exercise is to stimulate vocabulary (words like “daddy,” “woman,” “baby,” “boy”), turn-taking, cause and effect, and visual memory. In the exercise (Figure 5), three outlines of characters from the story were shown on screen. Pulling a cord on the accompanying exercise mat would “open” the corresponding character and trigger audio that said something about the character, for example, “this is daddy, daddy is a man.” After this, the revealed character would be replaced by the outline of a different character.

The aim of the second exercise (Figure 6) was to construct two-word sentences. The child could choose a wooden character piece and combine it on the exercise mat with a PCS verb card. The constructed sentence (e.g., "Tom" and "sleeping") would be animated on the screen and pronounced ("Tom is sleeping"). This exercise aims to stimulate active syntax, turn-taking, and playing with the elements of a sentence, by letting the same character do something else or letting someone else do the same.
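As an illustration of the combination exercise's logic, the sketch below maps tagged character pieces and PCS verb cards to a spoken two-word sentence. It is a hypothetical Python sketch; the tag identifiers and the English phrasing (the prototype used Dutch) are assumptions.

```python
# Hypothetical sketch of the combination exercise: a tagged character piece plus a
# tagged PCS verb card yield a two-word sentence that is animated and pronounced.
# Tag identifiers and the English phrasing are assumptions (the prototype was Dutch).
from typing import Optional

CHARACTERS = {"tag_tom": "Tom", "tag_tess": "Tess", "tag_daddy": "daddy"}
VERBS = {"tag_sleep": "sleeping", "tag_play": "playing"}

def combine(character_tag: str, verb_tag: str) -> Optional[str]:
    """Return the sentence to animate and speak, or None if a tag is unknown."""
    who = CHARACTERS.get(character_tag)
    what = VERBS.get(verb_tag)
    if who is None or what is None:
        return None
    return f"{who} is {what}"

if __name__ == "__main__":
    sentence = combine("tag_tom", "tag_sleep")
    if sentence:
        print(f"[animate and speak] {sentence}")  # -> Tom is sleeping
```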

5.4. Evaluation

Seven children between the ages of 3 years 1 month and 6 years 1 month (with developmental ages between 1 year 5 months and 3 years 9 months) took part in the evaluation study. Four children visited a day care center for children with cognitive delays; these children had no or only minor motor problems. The other three children visited a center for rehabilitation medicine. Two of these children had the diagnosis CP; the third had the diagnosis hydrocephalus. These children had moderate to severe problems with their arm and hand function: they could pick up objects with their hands but were not able to pick up objects with thumb and forefinger. All seven children had a delay in language development.

For the evaluation, we used the following: (1) a questionnaire concerning child data, completed by the researcher on the basis of the children's files; (2) an observation list, completed by the researcher to analyze the video material of the children; and (3) a questionnaire for the child's therapist.

The most important results were as follows.

(i) The toddlers looked continuously at the screen and seemed interested in the story and the exercises. The therapists indicated that the concentration, speed of work, and motivation of the children were at least similar to, or better than, when working with other, comparable materials. It was noted that the children's motivation could be enhanced even further by offering both child and therapist more control over the content and timing of the exercises. For this, a more extensive database would have to be set up.

(ii) The children reacted well to the design, including the use of material, colour, graphics, animations, and audio (type of voice, speed of pronunciation). The separate modules, each with its own action space and its own goal, were considered to make the interaction more intuitive for toddlers.

(iii) The use of AAC was considered satisfactory by the therapists, but from a design perspective it showed considerable drawbacks in terms of clarity and flexibility. We will address some here. Firstly, recent literature suggests (1) that it is best to offer language concepts within a visual scene [42] and (2) that it is best to show a communication symbol at the same location as the object it refers to. In the case of onscreen animations this means that when, for example, the PCS of "daddy" is shown in the story, it should be placed as close to the onscreen daddy figure as possible. This, however, obscures part of the scene, making it unclear, and the effect increases with each symbol placed within the scene. Secondly, it is also preferred to reveal the communication symbol at the moment the corresponding audio is being pronounced. However, we observed that toddlers often looked away from the screen at this crucial moment, because many toddlers with CP move around involuntarily. This raises substantial timing problems for the animator. Thirdly, in order to keep the scene as clear as possible, it is necessary to keep the symbols small, which makes them harder to "read" for the toddlers, who often have vision problems. Finally, we used RFID-tagged cardboard PCS verb cards with the second exercise. However, these widespread communication symbols were often slightly customized or replaced by the caregivers or therapists in order to make them suitable for individual children. Some children did not understand the standard PCS but did understand a slightly altered version. Some children preferred using photograph representations. In other words, although the tangible symbol cards had their interaction advantages in the sense of exploration of, and control over, content, it would be highly recommendable to allow therapists to customize the symbols (a sketch of such a per-child mapping follows this list of results).

(iv) The contents of the stories and exercises were considered suitable. All children liked the characters in the story; they spontaneously used the names of the children in the story. In spite of the fact that most children already mastered the core words, the vocabulary seemed to be chosen well as indicated by the therapists.

(v) The physical interaction needed for storytelling was not always suitable. One problem was that some children liked moving the story navigation handle so much that it disrupted continuity and concentration. The tangible interaction in the combination exercise was clear to all children and caused no difficulties, not even for the multiply disabled children. The hiding exercise, however, proved to be physically difficult for the multiply handicapped children.

(vi) The therapists indicated that they wanted to make choices in offering content to the child, so more (types of) stories and exercises should be implemented. The therapists did not give a high priority to integrating their own personalized pictures or audio in the program.
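One way to support the symbol customization suggested above is a per-child lookup that resolves the same tagged card to different representations (standard PCS, adapted symbol, or photograph). The sketch below is a minimal, hypothetical Python illustration; the profile structure and file names are assumptions and not part of the KLEED implementation.

```python
# Minimal hypothetical sketch of per-child symbol customization: the same tagged card
# resolves to a standard PCS, an adapted symbol, or a photograph, depending on the child.
# Profile structure and file names are assumptions, not the KLEED implementation.

SYMBOL_SETS = {
    "standard_pcs": {"tag_daddy": "pcs/daddy.png"},
    "adapted_pcs": {"tag_daddy": "custom/daddy_adapted.png"},
    "photographs": {"tag_daddy": "photos/daddy.jpg"},
}

CHILD_PROFILES = {
    "child_01": {"symbol_set": "standard_pcs"},
    "child_02": {"symbol_set": "photographs"},  # this child prefers photo representations
}

def resolve_symbol(child_id: str, tag_id: str) -> str:
    """Look up which representation this child should see for a tagged card."""
    profile = CHILD_PROFILES.get(child_id, {})
    symbol_set = SYMBOL_SETS[profile.get("symbol_set", "standard_pcs")]
    return symbol_set.get(tag_id, "pcs/unknown.png")

if __name__ == "__main__":
    print(resolve_symbol("child_02", "tag_daddy"))  # -> photos/daddy.jpg
```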

We are currently using these results for the development of KLEED's follow-up, CLICK-IT.

6. CLICK-IT

6.1. Guidelines

The results of the evaluation of KLEED enabled us to refine our body of design guidelines. The most important alterations are as follows.

(i) Control. The LinguaBytes play and learning system should offer both child and therapist as much control over the content, exercises, and interaction itinerary as possible. This will benefit the child's comprehension of the content and the social interaction with the therapist or parent.

(ii) Adaptivity. This means that the system should make both the software/content and the hardware/interfaces highly adaptable and adaptive [43].

(iii) Database Technology. This means that the system should capitalize more on database technology and randomization (sounds, details, visual effects, etc.) to enhance the motivation of the child, within the constraints of the individual learning settings.

(iv) AAC. The system should externalize the use of PCSs or other symbols, both to keep the visual content clear and to give the initiative to use symbols to the child and the therapist/parent.

Based on this new body of design guidelines, we have recently started developing our current prototype, CLICK-IT.

6.2. Design

Like the previous prototype, CLICK-IT consists of a console, exercise modules, and a collection of input materials, but the design shows the following significant changes.

(i) The exercise modules have been split into one general base unit, on top of which various interface modules can be placed.

(ii) The base unit contains most of the sensors, actuators, and processing, so that these can be used by any of the interface modules, thus reducing costs. Connecting an interface module automatically changes the setup of the base unit.

(iii) For various reasons, the fabrics used in the KLEED prototype have been replaced by wood.

(iv) The base unit contains a slot in which the current user's identifying tag can be inserted. This changes the product's settings to fit the user optimally (level of difficulty of the content, sensitivity of sensors, etc.).

(v) To increase the flexibility of the system, more use has been made of tangible input materials. A major change in this respect is that the stories have become physical books again, which can be augmented by running them through the story-reading module.

The CLICK-IT prototype consists of a console, a base unit, four different interface modules, a booklet of the story, and 15 input characters (Figure 7). Additionally, the 30 core words from the story are provided on  cm (  inch) cards containing the word and an illustration of the word. The console contains a 17-inch flat-screen monitor, stereo speakers, and connectors for the base unit; the base unit itself contains various Phidgets sensors and connectors for the exercise modules; and the exercise modules occasionally house additional electronics (a speaker, slider, light sensor, or DC motor). All parts are made of ash wood, shaped using a 3D milling machine, and plastics.
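To illustrate how the identifying tag could drive this adaptation, the sketch below loads per-child settings when a tag is inserted and reconfigures the base unit when an interface module is connected. It is a hypothetical Python sketch; the settings fields and their values are assumptions rather than the actual CLICK-IT data model.

```python
# Hypothetical sketch of the identifying-tag mechanism: inserting a child's tag loads
# that child's settings, and connecting an interface module reconfigures the base unit.
# Settings fields and values are assumptions, not the actual CLICK-IT data model.

DEFAULTS = {"difficulty": 1, "sensor_sensitivity": 0.5, "speech_rate": "normal"}

CHILD_SETTINGS = {
    "tag_child_a": {"difficulty": 1, "sensor_sensitivity": 0.9, "speech_rate": "slow"},
    "tag_child_b": {"difficulty": 3, "sensor_sensitivity": 0.4},
}

class BaseUnit:
    def __init__(self):
        self.settings = dict(DEFAULTS)
        self.module = None

    def insert_tag(self, tag_id):
        """Adapt the setup to the child identified by the inserted tag."""
        self.settings = {**DEFAULTS, **CHILD_SETTINGS.get(tag_id, {})}
        print(f"[base unit] settings: {self.settings}")

    def connect_module(self, module_name):
        """Connecting an interface module automatically changes the base unit's setup."""
        self.module = module_name
        print(f"[base unit] configured for the '{module_name}' module")

if __name__ == "__main__":
    unit = BaseUnit()
    unit.insert_tag("tag_child_b")
    unit.connect_module("story_reading")
```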

6.3. Content

The content of the CLICK-IT prototype was created in the same way as that of KLEED: a body of core words was assembled based on the word lists mentioned earlier in this paper. This time another semantic category was chosen, "animals." In this context, a nine-scene story about a children's farm was created (Figure 8), along with seven exercises (four phonological exercises with increasing levels of complexity, two syntax exercises, and a semantic exercise). All exercises and the story were animated in Macromedia Flash and made interactive using MAX/MSP.

6.4. Evaluation

We are currently testing the CLICK-IT prototype at two centers for rehabilitation medicine in the Netherlands, with twelve children between the ages of 2 years 3 months and 3 years 10 months and a developmental age between 1 year 3 months and 3 years. At this point it is too early to draw definite conclusions, but the first signs are promising.

7. Discussion

In the introduction of this article, we outlined some of the main factors that cause severe limitations in the language, emergent literacy, and communication development of very young children [1, 2, 6]. We described the repercussions these limitations have on the total development of the child and identified the need for an interactive multimedia play and learning system to stimulate the linguistic development of the child and help diminish these repercussions [7–10]. A preliminary study resulted in the following two important conclusions that formed the foundation for the further development of this interactive multimedia play and learning system:

(1) the system should appear more as a toy than as a PC;

(2) the interface of the system should be adjustable not only to the cognitive and linguistic level of the child, but also to the child's other needs and skills (perceptual-motor, social, and emotional).

These conclusions led to the iterative development of the LinguaBytes system, of which we have described two iterations. Throughout the paper we have highlighted the most important design guidelines that led up to our current work, CLICK-IT. We have summarized all guidelines in Table 1; each column represents one iteration.

Focusing on the two, abovementioned foundation points for the development of our system, we have made some interesting observations.

Firstly, throughout our iterative process we have seen that the physical manipulability of our toy-like prototypes had the following major advantages for these children, compared to the familiar PC interface.

(i) By offering physical input materials (in the KLEED prototype) and an interaction that is closer to their usual style of exploration, these children were offered more access to their environment, thus getting a richer base for language and emergent literacy development [2].

(ii) Consequently, our observations showed that the children generally had a longer attention span than usual and showed more initiative [32, 33]. These observations were confirmed in the therapists' questionnaires.

(iii) Additionally, using tangible input material slowed down the interaction, giving both children and caregivers more control over the timing of the interaction. For example, the KLEED prototype clearly seemed to stimulate the communication between the therapist and the child; especially in working with the combination exercise, children seemed to communicate more than in other comparable situations, using the physical input material as an alternative means of communication.

(iv) As a result of this, there are more opportunities for facial, gestural, and verbal expressions of the children, letting them evoke more communicative reactions from their surroundings.

Secondly, throughout our research we have seen that, in order for the LinguaBytes system to optimally fit all its different users, it is crucial that it is highly adaptable and adaptive. By the former we mean "adjustable by the user," by the latter "adjusting to the user." The E-Scope already showed that the interactions in the different exercises should be more intuitive and suitable, that the system should allow for the use of a child's own preferred physical objects and AAC systems, and that it should offer the possibility to optimally fit each exercise to the individual child. And although these refinements were executed in the KLEED prototype and the children and caregivers were positive about the content and the tangibility of the product, they still urged further broadening of the content (more stories and exercises) and a more flexible and adaptive user interface with regard to the children's motor skills.

All this brings us to one of the major challenges of developing the LinguaBytes system: combining the two foundation points "more toy than PC" and "highly adaptive and adaptable." We clearly see the advantages of physical interfaces that adapt themselves to individual users, not only within our own target group but also for any other highly heterogeneous group of users. Actually developing such interfaces, however, becomes very complex, precisely because of this high heterogeneity. We find incremental research, or Research-through-Design [18], a helpful method for achieving this, because it enables us to shift between different aspects of the design, building on previously generated knowledge and slowly working toward a more or less complete body of guidelines.

This brings us to the second challenge we wish to address with regard to interaction design: the complexity of mapping out guidelines when designing complex products such as the LinguaBytes system. Table 1 illustrates this quite well, since it shows the growth of our design guidelines as a rapidly expanding set. However, the guidelines it holds are still very general; the table does not include detailed guidelines such as ergonomic dimensions. Of course, in order to fit Table 1 in this paper we had to cut it back to the bare minimum, but this illustrates an important point: mapping out the guidelines, requirements, and criteria for developing a complex product or system rapidly becomes complex itself. A table often does not suffice, due to the interdependencies of many guidelines: often, when one guideline changes, it has repercussions on other guidelines, which in turn might lead to necessary changes in yet other guidelines. We feel that we need a new form of representation in order to keep track of all these changes. We are still investigating ways to tackle this problem, which we feel most developers of complex products or systems have encountered.

8. Conclusions

In this paper, we have outlined some of the main factors that cause severe limitations in the language, emergent literacy, and communication development of very young children, described the repercussions these limitations have on the total development of the child, and identified the need for an interactive multimedia play and learning system to stimulate the linguistic development of the child and help diminish these repercussions. We have described how we developed and tested two prototypes, ExploraScope and KLEED. Finally, we have given insight into current and future work.

The subsequent outcomes of the two prototypes indicate that the iterative process leading toward a definitive concept of LinguaBytes is promising. The current iteration, with the CLICK-IT prototype as its result, shows that the LinguaBytes system is gradually evolving into a more definitive concept and a valuable addition to the currently available early intervention products for non- or hardly speaking children with multiple disabilities.

We do feel, however, that for the system to be really effective, more adaptivity and adaptability still need to be designed and implemented. We hope that the results from our current tests with CLICK-IT will enable us to develop a final, fully adaptive prototype, which we plan to build and test in early 2009.

Acknowledgments

The authors thank Dr. W. M. Phelps-Stichting voor Spastici and the consortium of additional sponsors for funding the LinguaBytes project. They also thank the therapists of the two centers for rehabilitation medicine, Rijndam Revalidatiecentrum in Rotterdam and St. Maartenskliniek in Nijmegen for their useful feedback and for enabling them to test the LinguaBytes prototypes.