Journal of Robotics
Volume 2018, Article ID 3246708, 12 pages
Research Article

A Control Architecture of Robot-Assisted Intervention for Children with Autism Spectrum Disorders

School of Automation, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing 100876, China

Correspondence should be addressed to Yongli Feng; nc.ude.tpub@ilgnoygnef

Received 1 November 2017; Revised 23 April 2018; Accepted 4 June 2018; Published 2 July 2018

Academic Editor: Ali Meghdari

Copyright © 2018 Yongli Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Robot-assisted intervention has been successfully applied to the education and training of children with autism spectrum disorders. However, it is necessary to increase the autonomy of the robot to reduce the burden on human therapists. This paper proposes a robotic architecture to improve the autonomy of the robot in the course of interaction between the robot and the child with autism. Following the perception-cognition-action model, the architecture also incorporates concepts from traditional autism intervention approaches and a human cognitive model. The details of the robotic architecture are described in this paper, and a typical scenario is used at the end to verify the proposed method.

1. Introduction

Autism, or Autism Spectrum Disorder (ASD), describes a broad range of developmental disorders whose main symptoms include impairment in social communication and restricted or repetitive patterns of behavior, interests, or activities [1]. Previous studies reveal that behavioral intervention is one of the major approaches to stimulate children with ASD, reduce their symptoms, and improve their social interaction abilities [2]. In the field of ASD intervention, a major challenge is to maintain the children's motivation, while long-term, repetitive therapies require therapists to invest considerable time and energy. To address these challenges, novel techniques and devices have been studied to ensure effective ASD intervention while reducing the workload of therapeutic professionals [3, 4].

Among these technologies, Socially Assistive Robotics (SAR) is considered a support tool for autism therapy through social interaction [5]. In current studies, these robots have shown high efficiency in autism intervention, and the social skills of children with ASD have been positively improved [6, 7]. Robot intervention for ASD is expected to be a valuable complement to traditional intervention procedures. In many systems, the robot is controlled remotely by an operator to make appropriate responses. This control method limits the long-term, large-scale use of robots in ASD interventions. In the field of ASD intervention, robots should be positioned as support tools for human therapists rather than as replacements. During the intervention, robots must collaborate with people who lack knowledge of ASD intervention or robotics. To achieve this goal, a major trend is to develop autonomous robots that interact with children with ASD in the course of intervention sessions. In the field of ASD intervention, the robot can play the role of a human therapist, and its function is to stimulate and encourage the social skills of children with ASD (as shown in Figure 1).

Figure 1: A basic overview of the robot application for autism intervention.

In an unstructured environment, realizing autonomous interaction between the robot and the child is a complex process. To improve the autonomy of the robot in autism therapy, an appropriate control architecture design is indispensable. In particular, a well-designed robotic architecture is essential for adjusting the robot's actions by interpreting interaction signals reliably [8]. The main purpose of this paper is to present a control architecture that enables the robot to intervene in autism therapy with higher autonomy. The remainder of this paper is organized as follows: Section 2 provides an overview of related work; Section 3 presents the details of each part of the control architecture and explains the robot's intervention process for autism therapy; Section 4 describes the experiment to validate the proposed architecture; in Section 5, we conclude our study and present future work.

2. Related and Previous Work

At present, the pathogenesis of autism is not clear, and early intervention is an effective means to relieve the symptoms of children with ASD and promote their skills. In this section, we introduce traditional intervention methods and the application of robots in autism therapy.

2.1. Traditional Methods of Autism Intervention

With years of development, researchers have presented various methods and models to help children with ASD, for example, the Denver Model, Comprehensive Programs, the TEACCH Program, Applied Behavior Analysis, and DIR/Floortime. Among these methods, Applied Behavior Analysis (ABA) and DIR/Floortime are widely used and accepted. In the following sections, we introduce the two methods, respectively.

(i) Applied Behavior Analysis for Autism. Applied Behavior Analysis (ABA), which is based on the principles of learning and motivation, is a general term for behavior modification [9]. ABA therapy is usually defined as a systematic approach to improving people's social behavior based on a mixture of psychological and educational techniques. Over the past few decades, intervention methods based on ABA principles have been applied to the behavior and learning of children with special needs. In particular, ABA therapy is one of the most effective means of intervention for children with ASD [10].

In the course of autism intervention, Discrete Trial Teaching (DTT) is a concrete implementation method based on the principles of ABA therapy [11]. DTT was developed by psychologist Ivar Lovaas in the 1970s, and its core elements include instruction, individual response, prompting, reinforcement, and pause [9]. In practice, DTT includes the following main steps. First, the target task is decomposed into a series of smaller or mutually independent steps in a certain manner and sequence. Subsequently, each small step is trained and taught using an appropriate reinforcement method. Through this process, children with autism can master all the steps and fulfill the task independently. The ultimate goal of DTT teaching for children with ASD is to allow them to apply the knowledge and skills in other situations. DTT is a highly structured teaching model, and this type of teaching is generally one-on-one intervention for children with ASD. The basic process of DTT is shown in Figure 2.

Figure 2: The basic process of DTT for children with ASD.

(ii) The DIR/Floortime. The Developmental, Individual-difference, Relationship-based (DIR) model is an outstanding developmental approach for autism intervention. The DIR model was developed by Stanley Greenspan, and its core is Floortime [12].

Different from the ABA method, DIR/Floortime emphasizes children's emotional experience, imagination training, and interpersonal interaction. According to the characteristics and developmental stages of children with ASD, a ladder of capability development is set up, which consists of six milestones: (1) attention and interest, (2) engagement and intimate relationships, (3) two-way communication, (4) continuous problem solving, (5) creative thinking, and (6) abstract and logical thinking [13]. Adopting different strategies based on the specific developmental stage of the child with ASD achieves a better intervention effect. For children with autism, the time and place of DIR/Floortime intervention are flexible, and the intervener can use this method in living rooms, bedrooms, schools, or training institutions at any time [14]. The intervention process of DIR/Floortime for autism is summarized in Figure 3.

Figure 3: The basic process of DIR/Floortime for children with ASD.
2.2. Application of Robots in ASD Intervention

Researchers have conducted many studies on the use of robots in the intervention of children with autism. The results of these studies demonstrate that using robots in autism intervention is an effective approach.

In the course of intervention for children with autism, robots play a complementary therapeutic role. In previous studies, researchers typically used robots to assist human therapists in mediated interaction tasks such as action imitation, joint attention, and turn taking [15]. These tasks can improve the social skills of children with ASD and help them integrate into society. Nevertheless, in current approaches, almost all robots are controlled remotely by human operators and lack autonomy. In this mode, the operators need specialized knowledge of autism intervention and robot operation, which is unsustainable for long-term, large-scale use of robots. Therefore, it is necessary to give robots a certain degree of autonomy. To achieve this goal, research on suitable robot control architectures for the intervention of children with autism should be developed as a priority.

At present, research on robot control architectures for autism intervention mainly focuses on the control of robots and rarely takes the traditional intervention methods into account. Feil-Seifer and Mataric [16] proposed a behavior-based control architecture named B3IA, which is composed of a sensor-and-interpretation module, an activity history module, a behavior network, and an effector module. For the task of playing video games, Wainer et al. [17] designed a detection-planning-action control architecture, which enables the robot to complete the task with a certain degree of autonomy. Sang-Seok et al. [18] developed a control architecture based on four modules: human perception, interaction manager, user input, and robot effector. In their study, the DTT protocol was used. In addition, Gonzalez et al. [19] proposed a three-layer planning architecture to carry out rehabilitation therapy for patients with physical impairments. Pour et al. [20] proposed a human-robot facial expression reciprocal interaction platform. The reciprocal interaction comprises two main phases: nonstructured and structured interaction modes. On this platform, a facial expression imitation task between the robot and children with autism is realized. Their study took the traditional rehabilitation procedure into consideration, which gave us some inspiration.

The intervention of autism therapy has its own particularities. Integrating ideas from traditional intervention approaches into the design of the robot's control architecture is a promising research direction. In this paper, we carry out such an exploratory study.

3. The Design of Control Architecture for the Robot in Autism Intervention

3.1. The Feasible Procedure of the Robot for Autism Intervention

In autism intervention, improving the autonomy of the robot is necessary because it can reduce the workload of humans (e.g., therapists, educators, and parents). Furthermore, people who lack professional knowledge of autism intervention or robotics can then use robots to achieve their aims. In this way, the scope of use of the robot can be expanded, and the robot can play a better role in autism intervention. To achieve this goal, it is necessary to design the robot's control architecture based on traditional methods of autism intervention.

As mentioned in the preceding section, ABA and DIR/Floortime are two traditional methods commonly used for early autism intervention. DTT is the specific method of the ABA program in autism therapy. It has the advantages of a structured process and ease of application, and it has been used by robots in some studies of autism intervention [21]. Although DTT is very effective in early intervention for children with ASD, quite a few scholars criticize this method as rigid and mechanical and as not conducive to the development of children with ASD [13]. Compared to DTT, DIR/Floortime attaches great importance to the development, individual differences, and relationships of children with autism. It advocates intervention in the child's daily life, according to the child's developmental stage. Nevertheless, for a robot in autism intervention, this method requires a high-level ability of natural interaction, which existing technology cannot support.

Therefore, as shown in Figure 4, we propose a feasible procedure for robot-based autism intervention to improve the autonomy of the robot.

Figure 4: The flowchart of the robot intervention for children with autism.

In the course of autism intervention, the robot awakens the child's interest by dancing, singing, dialogue, and so on. In this way, the child can focus attention on the robot, which is the basis for efficient interaction. Simultaneously, the robot collects environmental information and evaluates the behavior of the child with ASD. The content of the assessment includes engagement, the child's developmental stage (which refers to the method of DIR/Floortime), requirement analysis of the child, and evaluation of task feedback. Afterward, the robot assigns tasks based on the evaluation results; the tasks include DTT teaching and others (e.g., storytelling, singing, and dancing). Following this, the robot decomposes the selected task and plans its execution. Finally, the robot interacts with the child or else ends the interaction.
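One round of the procedure above can be sketched as a short loop. All function names and thresholds here are illustrative placeholders, not the authors' implementation:

```python
def evaluate(obs):
    # Toy evaluation: engagement taken directly from an attention signal,
    # developmental stage passed through from the observation.
    return {"engagement": obs["attention"], "stage": obs["stage"]}

def assign_task(result):
    # Pick a DTT imitation task whose level matches the developmental stage.
    return {"kind": "imitation", "level": result["stage"]}

def decompose(task):
    # Split the task into primitive steps the actuators can execute.
    return ["instruct", "demonstrate_level_%d" % task["level"],
            "wait_for_response", "reinforce"]

def intervention_round(obs):
    """One pass through the Figure 4 loop: evaluate, then either
    end the interaction or return the planned primitive steps."""
    result = evaluate(obs)
    if result["engagement"] < 0.2:
        return None  # child no longer attending: end the interaction
    return decompose(assign_task(result))
```

For instance, `intervention_round({"attention": 0.8, "stage": 2})` plans a level-2 imitation trial, while a low attention signal ends the session.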

In the process of interaction between robots and children with ASD, the robot plays the role of a human therapist. Therefore, the human cognitive mechanism should be considered in the design of the robotic architecture. Among cognitive architectures, ACT-R (Adaptive Control of Thought-Rational) has been studied for a long time and is well regarded. ACT-R is a hybrid cognitive architecture consisting of two parts: a symbolic system and a subsymbolic system. ACT-R is a general theory of cognition and provides a framework for information processing [22]. The ACT-R system is composed of several different modules. The production module connects the other modules into a whole through buffers. In this framework, the symbolic system is driven by the production system, and the buffers of the different modules are operated through production rules. The subsymbolic system runs in the external structure of ACT-R and controls the operation of the symbolic system through a series of mathematical methods. One important feature distinguishing ACT-R from other similar theories is its direct application of a large amount of experimental data to the research work. Therefore, some ideas from ACT-R were used in our study.

In the next section, we will propose the control architecture of the robot for autism intervention based on ACT-R and the flowchart which we have presented in this section.

3.2. The Control Architecture of the Robot for Autism Intervention

To improve the autonomy of the robot in the intervention of children with autism, we designed the control architecture of the robot for autism intervention (CARAI), shown in Figure 5. Following the perception-cognition-action model, the CARAI comprises several modules and submodules with specific functions, represented by rectangles. In the architecture, the arrows express the direction of information transmission and the dependencies between modules. Each module is described as follows.

Figure 5: The control architecture of the robot for autism intervention (CARAI).
3.2.1. Perception Module

The perception module is the interface through which the robot obtains environmental information and maps it to an internal representation. In our study, the perception module is divided into two submodules: the data acquisition module and the data processing module. In the data acquisition module, the robot collects data through its sensors and transfers them to the data processing module. The data processing task is carried out in three steps. First, according to the needs of the interaction task, the data processing module obtains an instruction from the upper process of the CARAI through production rules; then, it shifts the focus of attention to the corresponding position according to the instruction, obtains detailed information about the object, and processes the data; finally, it transmits the results to the high-level module. Some general algorithms are applied in this module. For example, the robot locates the child with ASD through HOG features with a linear SVM and detects the child's face based on Haar features. A library based on Deep Neural Networks (DNN) is used for natural language processing.
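The HOG-plus-linear-SVM step can be sketched minimally in NumPy. This is a single-cell toy version, not the paper's detector: a real detector (e.g., OpenCV's HOGDescriptor) slides a multi-cell window over the image, and the weight vector w and bias b come from training:

```python
import numpy as np

def hog_features(patch, n_bins=9):
    """Histogram of oriented gradients over one grayscale cell."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180     # unsigned orientation
    bin_idx = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bin_idx == b].sum()          # accumulate magnitudes per bin
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist       # L2-normalized descriptor

def svm_score(features, w, b):
    """Linear SVM decision value: positive means 'person present'."""
    return float(np.dot(w, features) + b)
```

A patch with a pure horizontal gradient puts all its mass in the first orientation bin, so a trained weight vector emphasizing that bin yields a positive score.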

3.2.2. Intention Module

Goal making is a very important process in human-robot interaction. Because the ability of the robot is always limited, the robot should revolve around its purpose when accepting input, processing, and producing output. In the CARAI, we use the intention module to store goals and to generate therapy plans for children with autism. In addition, the goal buffer serves as the interface between the intention module and the robot's core processing. In the goal buffer, a goal can be created, temporarily stored, and modified. When the robot interacts with the child, a goal can be decomposed into subgoals, which are managed as a stack.

As shown in Figure 6, the ultimate goal for the robot to achieve is G0, which is the initial state of the goal stack. However, in order to achieve goal G0, some subgoals G1, ..., Gn need to be completed. Therefore, these subgoals are pushed onto the goal stack. In this way, goal Gn is at the top of the goal stack and is popped for execution first. During the robot's execution of the current goal, a new goal may be created. At this point, the new goal is pushed onto the top of the goal stack. This process loops until the ultimate goal G0 is completed.
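This last-in-first-out goal management can be sketched directly; the goal names in the usage are illustrative:

```python
class GoalStack:
    """Goal management as described for the intention module:
    subgoals are pushed on top and popped for execution first."""

    def __init__(self, ultimate_goal):
        self._stack = [ultimate_goal]   # initial state: only the ultimate goal

    def push(self, subgoal):
        self._stack.append(subgoal)     # a newly created goal goes on top

    def pop(self):
        return self._stack.pop()        # the topmost goal executes first

    def current(self):
        return self._stack[-1]

    def done(self):
        return not self._stack          # empty stack: ultimate goal completed
```

For example, decomposing a session goal into "greet the child" and then "look at the child" means the look-at goal, pushed last, is executed first.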

Figure 6: The schematic diagram of goal management.
3.2.3. Memory Module

The function and operating rules of the memory module are similar to the declarative module in ACT-R. This module stores the knowledge of intervention therapy for autism. The knowledge, which is converted into production rules, comes from experienced therapists and is filtered according to the characteristics of robot intervention. During the intervention, the central processing system of the CARAI retrieves rules from the memory module in real time and updates the knowledge (e.g., modifying, removing, or adding rules). In addition, user information (e.g., names, genders, and symptoms), the robot's behavior set, and task information are also stored in this module.

Production rules can generally be expressed as "P → Q". In this expression, P represents a set of preconditions, and Q denotes one or more conclusions or actions. Its meaning is: if the premise P is satisfied, the conclusion Q can be drawn (or the action Q should be performed). The knowledge represented by production rules is an ordered set of productions. The syntax can be represented in BNF (Backus Normal Form) as follows: <rule> ::= IF <premise> THEN <conclusion>. In this study, two of the robot's production rules are shown as follows: rule 1 ::= IF "visual_goal location_state" THEN "move-attention"; rule 2 ::= IF "action_goal leftarm_state" THEN "joint_angle_1 ∣ joint_angle_3".
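A minimal forward-chaining matcher in the spirit of these rules might look as follows; the rule contents and the action name "set_joint_angles" are illustrative stand-ins:

```python
RULES = [
    # (premises that must all hold in working memory, action to fire)
    ({"visual_goal", "location_state"}, "move-attention"),
    ({"action_goal", "leftarm_state"}, "set_joint_angles"),
]

def fire_rules(working_memory, rules=RULES):
    """Return the actions of every rule whose premises are all satisfied
    by the current working memory (a set of asserted facts)."""
    return [action for premises, action in rules
            if premises <= working_memory]
```

Here rule matching is simple subset testing; a real production system like ACT-R additionally resolves conflicts when several rules match at once.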

3.2.4. Evaluation Module

The evaluation module appraises the state of the child and the situational context using data received from the perception module, based on several features and variables. In the CARAI, we used a parallel mechanism to evaluate the user's state. The evaluation module contains three submodules: engagement recognition, feedback appraisal, and assessment of the developmental stage. The engagement recognition module assesses the user's current degree of participation in the interaction task; the feedback appraisal module evaluates the interaction partner's completion of the current task; in the developmental stage module, the development of children with ASD is divided into several levels, and the user's level is determined by his or her performance in the human-robot interaction. The evaluation results of these modules serve as the basis for the system's goal setting and task planning.

In the evaluation module, engagement evaluation is an important step when using the robot to interact with children with ASD. The evaluation results of engagement can reflect the degree to which children with ASD accept the intervention tasks. To classify the child's degree of participation, we developed an engagement evaluation model based on dynamic Bayesian networks and domain experts' knowledge. The inputs (evidence variables) of the model are the child's features: face orientation, interpersonal distance, and acoustic state. The outputs (query variables) of the model are the state of the child when he/she interacts with the robot. The expression of engagement evaluation is shown as

B̂(X_t) = β (T · B(X_{t-1})), B(X_t) = α (O · B̂(X_t)), (1)

where α and β are normalization factors. Under the influence of the evidence variables E_t, B(X_t) denotes the probability value set of the query variables, and B̂(X_t) represents the calculated values of the hidden variables at time t. Transition relations between two adjacent time slices are represented by 3-order matrices, which are T and O. The symbol "·" indicates the dot product between matrices. Detailed content of the engagement evaluation can be found in [23].
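One forward-filtering step of such a dynamic Bayesian network can be sketched as follows; the transition matrix and evidence likelihoods below are invented for illustration and are not the paper's learned parameters:

```python
import numpy as np

def filter_step(b_prev, T, O_diag):
    """One forward-filtering step over the engagement states:
    predict through the transition matrix T, weight by the current
    evidence likelihoods O_diag, then renormalize (the alpha factor)."""
    b_hat = T @ b_prev        # predicted belief over hidden states
    b = O_diag * b_hat        # fold in the current evidence
    return b / b.sum()        # normalize back to a probability distribution
```

With three engagement states (low, medium, high), a uniform prior, and evidence strongly favoring the high-engagement state, one step of filtering shifts the belief toward "high".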

Feedback evaluation measures whether the child's response to the robot is correct. For the imitation task, we use "PyOpenPose" to evaluate the child's posture. PyOpenPose is a Python-based implementation of OpenPose (an algorithm for human posture recognition). In the application, the key points of the human body are first detected. Then, based on the coordinates of these points, a threshold is set to evaluate the child's feedback.
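The thresholding step might be sketched as below. The specific metric (mean key-point distance in normalized coordinates) and the threshold value are assumptions for illustration; the paper does not specify them:

```python
import math

def imitation_score(child_kpts, target_kpts):
    """Mean Euclidean distance between corresponding body key points
    (e.g., as returned by a pose estimator such as OpenPose),
    in normalized image coordinates."""
    dists = [math.dist(c, t) for c, t in zip(child_kpts, target_kpts)]
    return sum(dists) / len(dists)

def feedback_correct(child_kpts, target_kpts, threshold=0.1):
    """Hypothetical rule: the imitation counts as correct when the
    mean key-point deviation stays below the threshold."""
    return imitation_score(child_kpts, target_kpts) < threshold
```

A pose close to the robot's demonstrated pose passes the threshold; a very different pose does not.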

The evaluation result of the developmental stage is an important basis for the robot's decision-making. In our application, we divided the development of children with ASD into three levels: focused attention, two-way communication, and continuous interaction. During the interaction, the robot asks the child to complete tasks of different difficulty, each assigned a different score. For example, the robot raises a hand and asks the child to imitate it, or the robot raises a hand and asks the child to answer which hand it is. The tasks are described in Table 1. When the child completes the tasks, the robot evaluates his or her developmental stage by adding and subtracting the scores.

Table 1: The description of the task.
3.2.5. Task Planning Module

The process of task planning is divided into two stages: decision-making and task decomposition. In decision-making, the system integrates information from the evaluation module, memory module, intention module, and social module. The learning mechanism and selection mechanism are also applied in this step. In task decomposition, the target task set in the previous step is decomposed into simple primitive tasks based on the robot's abilities. In this way, these primitive tasks can be performed directly through the robot's actuators.

During the interaction between the robot and children with ASD, the role of the supervisor is indispensable [24]. Therefore, we considered the role of the supervisor in the robot's decision-making and achieved this through interactive reinforcement learning, which can be defined by

a* = argmax_{a ∈ A(s)} [Q(s, a) + δ R(s, a)], (2)

where a* is the result of the decision, which expresses the robot's action. A(s) represents the action set in state s. Q(s, a) is the Q-learning value and R(s, a) is the result of the supervisor's reinforcement. The symbol δ represents the confidence level of the supervisor's reinforcement. Q(s, a) can be obtained by (3), which is the expression of Q-learning, and R(s, a) can be received through prior agreement.

Q(s, a) ← Q(s, a) + α [r(s, a) + γ max_{a'} Q(s', a') − Q(s, a)], (3)

where α is the learning rate, which defines the extent to which new information overrides old information. γ is the discount factor and reflects the importance of future rewards in the learning process. r(s, a) represents the reward value of executing action a in state s. s' and a' denote the next state and the action to be performed there. The detailed derivation of the algorithm will be discussed in another paper.
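An interactive Q-learning step of this flavor can be sketched compactly. The state and action names, parameter values, and the way the supervisor's reinforcement is stored are all illustrative assumptions:

```python
from collections import defaultdict

# Illustrative parameter values: learning rate, discount, supervisor confidence.
ALPHA, GAMMA, DELTA = 0.5, 0.9, 0.3

Q = defaultdict(float)       # Q[(state, action)]: learned action values
R_sup = defaultdict(float)   # supervisor's reinforcement per (state, action)

def choose_action(state, actions):
    """Pick the action maximizing Q(s, a) + delta * R(s, a), as in (2)."""
    return max(actions, key=lambda a: Q[(state, a)] + DELTA * R_sup[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    """Standard Q-learning update, as in (3)."""
    best_next = max(Q[(next_state, a)] for a in next_actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

With zero-initialized values, a single positive supervisor reinforcement is enough to tilt action selection, and a subsequent reward then folds that preference into Q itself.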

In the CARAI, task decomposition is achieved through a hierarchical task network. The simple primitive tasks comprise the basic actions the robot can accomplish, such as rotating its joints to specific angles, producing voice output, setting the LED colour of its eyes, and so on.
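A minimal hierarchical-task-network style decomposition can be sketched as below; the task names, methods, and primitive commands are illustrative, not the paper's task library:

```python
# Primitive commands the robot's actuators can execute directly.
PRIMITIVES = {"say", "set_joint", "set_led"}

# Methods: how each compound task expands into subtasks (illustrative).
METHODS = {
    "raise_left_hand": [("set_joint", "LShoulderPitch", -60),
                        ("set_led", "eyes", "green")],
    "prompt_imitation": [("say", "Do what I do!"), "raise_left_hand"],
}

def decompose(task):
    """Recursively expand a task into a flat list of primitive commands."""
    if isinstance(task, tuple) and task[0] in PRIMITIVES:
        return [task]                       # already primitive
    plan = []
    for subtask in METHODS[task]:
        plan.extend(decompose(subtask))     # expand compound subtasks
    return plan
```

Decomposing "prompt_imitation" yields a flat sequence of voice, joint, and LED commands ready for the action module.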

3.2.6. Action Module

When the task planning is finished, the CARAI translates the primitive tasks into the robot’s behavior through the action module. First, the action sequence of the robot is formulated by the behavior planning module, and then the movements of the actuators are controlled by the action control module.

3.2.7. Social Module

In the therapeutic task, we expect the robot to infer and understand the child's intentions and take appropriate actions to meet the child's individual needs. Usually, the capabilities of a robot are limited. In intervention schemes, the robot sometimes needs to use other agents (e.g., a glowing ball that can be controlled by programming) or sensors to achieve the intervention goals. Besides, full automation, meaning the robot could adapt to any event during the intervention in unstructured environments, is unrealistic. Therefore, in our study, while the robot interacts autonomously with the child, it must accept and prioritize special instructions given by the supervisor (the therapist, a teacher, or a parent). In other words, when the robot's behavior does not correspond to the interactive content, the supervisor should be able to command the robot to adjust its behavior via special instructions. To achieve the above goals, we designed the social module of the CARAI. The social module includes two submodules: message handling and message generation. Through the social module, the robot can exchange information and data with other agents, devices, or supervisors.

3.3. Intervention Process of the Robot

In this section, we will describe the robot intervention process based on the robotic architecture which is proposed in the previous section.

In the robot-assisted behavioral intervention for children with ASD, the overall process can be divided into two steps. As shown in Figure 7, the first step is to develop intervention plan and the second step is to execute the plan.

Figure 7: Steps of robot-assisted behavioral intervention.

The intervention procedure starts with the individual information of the interaction partner. Based on the evaluation result, the robot develops an initial scheme. This process is accomplished by the intention module of the robotic architecture. As the interaction proceeds, the robot updates the intervention plan according to the environmental information.

The execution of the intervention plan includes three parts: real-time evaluation, decision support, and robot execution. Real-time evaluation is realized by the evaluation module of the robotic architecture. The decision support function is achieved by the task planning module. Robot execution belongs to the basic planning of the robot and is realized by the action module. At this stage, the robot converts the decision result into a sequence of directly executable instructions, including the robot's joint angles, voice, and LED control.

4. Experimental Scenarios

4.1. The Robot Platform

We used the NAO robot platform to support our study, as shown in Figure 8. The NAO robot is a humanoid robot with an appealing appearance that children readily accept. In previous studies, the NAO robot has been successfully applied to training tasks for children with ASD and achieved good results [25–28]. With a height of 574 mm and a weight of 5.4 kg, the NAO robot integrates various sensors (i.e., video cameras, microphones, an inertial unit, tactile sensors, and joint position sensors) and actuators (i.e., loudspeakers, joint motors, and LEDs). Therefore, the NAO robot can conveniently collect environmental information and interact with people. The supported programming languages of the NAO robot include Python, C++, Java, and JavaScript, allowing flexible program development. In the experiment, we developed the application based on the Python SDK of the NAO robot.

Figure 8: The NAO robot.
4.2. Interaction Session Design

In the study, we designed a scenario in which the robot guided the child with ASD to imitate its actions during the session. The training session was divided into four phases: initialization, arousing the child's interest, training, and finishing the session. In Figure 9, a rectangular box represents each phase, with the basic events indicated in braces. The basic events are explained in Table 2.

Table 2: Overview and explanation of the basic event.
Figure 9: Phases of the interaction session.
4.3. Results

A 4-year-old boy and a 6-year-old girl were invited to participate in our study, with the support of their parents. Figures 10 and 12 show sequences of the interaction between the robot and child A and child B, respectively. The session was divided into five rounds, and each round lasted 3 minutes. When the robot output an action, it waited 5 seconds for the child to perform the task. The difficulty of action imitation was divided into 3 training levels, and the robot's reactions were counted. For each round, the distribution rate of the training levels and the robot's reactions to the children's performance are shown in Figures 11 and 13. Figures 11(a) and 13(a) show the proportions of the feedback that the robot output when the child performed an imitation task, including encouragement, praise, and claiming attention. Figures 11(b) and 13(b) show the proportions of the three movement difficulties output by the robot in each round. These percentages are obtained by dividing the number of corresponding items by the total number.

Figure 10: The robot is interacting with the child A.
Figure 11: The distribution rate of training levels and the robot’s feedback for the child A in the session.
Figure 12: The robot is interacting with the child B.
Figure 13: The distribution rate of training levels and the robot’s feedback for the child B in the session.

Figure 11 shows the distribution of training levels and the robot's feedback when the robot interacted with child A. For the training levels, the difficulty of the robot's actions increases gradually from level 1 to level 3. In Figure 11, different legends distinguish the distribution of the robot's feedback and training levels in each round. In round 1, the robot asked the child to imitate actions with a higher proportion (64.3%) of level 1, and the robot's feedback consisted of encouragement (38.3%), praise (43.2%), and claiming attention (18.5%). In round 2, the training difficulty was increased. According to Figure 11(b), the proportion of level 1 training decreased while the proportions of level 2 and level 3 training increased. In this round, the robot gave more praise (57.5%). As the child became skilled at the task, the robot guided the child to imitate more level 3 actions in round 3. However, in this round the child did not perform well, so the robot gave more encouragement and reduced the training difficulty in round 4. In round 5, the child tired of the imitation task, and the robot had to spend more time asking the child to maintain his attention.

Similar to Figure 11, Figure 13 shows the distribution of training levels and of the robot’s feedback when the robot interacted with child B. Compared with child A, child B had a stronger imitation ability, so the robot performed higher-level actions during the interaction. As shown in Figure 13(b), the proportions of level-2 and level-3 training steadily increased, while the proportion of level-1 training decreased from round 1 to round 5. However, child B was often inattentive during the interaction; therefore, compared with the interaction with child A, the robot’s feedback showed a higher proportion of claiming attention (as shown in Figure 13(a)).

In the experiment, the robot could adjust its actions according to changes in the interaction environment and, under the guidance of the supervisor, quickly meet the individual needs of the interactive object. At times, the robot’s perception and cognition of the environment were inaccurate and its behavioral output was unreasonable; the supervisor then had to intervene in the interactive process to ensure the smooth progress of the task. This process is implemented through the social module of the control architecture. Table 3 shows how many times the supervisor gave suggestions to the robot during the interaction.

Table 3: The number of times the supervisor intervened during the interaction.

In Table 3, total A represents the total number of imitative actions the robot output in each round while interacting with child A, and times A is the number of times the supervisor gave advice during that interaction. Total B and times B have the same meanings for the interaction with child B. During the interaction, when the supervisor judged a decision of the robot to be inappropriate, he intervened through spoken suggestions, and the robot adjusted its output through its decision-making algorithm (based on interactive reinforcement learning). In the experiment, the overall rate of supervisor intervention in the robot’s decision-making was about 14%. Analysis shows that the error rate of the robot’s evaluation of the children’s feedback is the main factor leading the robot to make inappropriate decisions. Therefore, in future research, the supervisor’s intervention in the robot-child interaction can be reduced by improving the robustness of the related algorithms.
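The paper names interactive reinforcement learning as the basis of the decision-making algorithm but gives no formulas in this section. One common realization is tabular Q-learning in which a human supervisor’s advice overrides the environment reward; the sketch below follows that generic scheme, and all class names, states, and parameters are our assumptions rather than the authors’ implementation:

```python
import random
from collections import defaultdict

class InteractiveQLearner:
    """Generic tabular Q-learning where supervisor advice, when given,
    replaces the environment reward (illustrative sketch only)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy selection over the known actions.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, advice=None):
        # Supervisor advice overrides the environment reward.
        r = advice if advice is not None else reward
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = r + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

In this scheme, a spoken correction from the supervisor acts as a strong negative (or positive) reward signal, steering the robot away from decisions the supervisor considers inappropriate.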

5. Conclusion

Robot-assisted intervention is an effective way to develop social skills in children with ASD. In this study, we proposed a control architecture to improve the autonomy of the robot when it is used in the intervention of children with ASD. Following the perception-cognition-action model, the architecture is designed based on ideas from traditional intervention approaches (DTT and DIR/Floortime) and from ACT-R. The operating mechanism and some algorithms of the proposed architecture are described in detail in this paper. Finally, experiments verified that the proposed control architecture can improve the autonomy of the robot in the intervention of children with ASD and reduce the burden on supervisors.

It should be noted that this study still has some limitations. First, only two children participated in the validation; since children with ASD show large individual differences and intervention sessions involve many situations, more participants are needed to verify the advantages and disadvantages of the architecture. In addition, only one robot platform (the NAO robot) was used in our study, which limited the application of some algorithms. Moreover, the tasks of robot intervention for children with ASD need to be further enriched.

In the near future, we plan to work on three aspects: (1) improving the collection, analysis, and interpretation of sensory information according to the characteristics of the interactive objects, and enhancing the robustness of the robot by refining existing algorithms; (2) designing interactive tasks suitable for robot expression based on the experience of traditional intervention; (3) expanding the scope of clinical application, which can help us improve the study and make it more meaningful.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Acknowledgments

The research presented in this paper is supported in part by the National Natural Science Foundation of China (61573066 and 61327806).


References

  1. J.-J. Cabibihan, H. Javed, M. Ang Jr., and S. M. Aljunied, “Why Robots? A Survey on the Roles and Benefits of Social Robots in the Therapy of Children with Autism,” International Journal of Social Robotics, vol. 5, no. 4, pp. 593–618, 2013.
  2. B. Ingersoll, “Brief report: Effect of a focused imitation intervention on social functioning in children with autism,” Journal of Autism and Developmental Disorders, vol. 42, no. 8, pp. 1768–1773, 2012.
  3. S. Boucenna, A. Narzisi, E. Tilmont et al., “Interactive Technologies for Autistic Children: A Review,” Cognitive Computation, vol. 6, no. 4, pp. 722–740, 2014.
  4. L. Escobedo, C. Ibarra, J. Hernandez, M. Alvelais, and M. Tentori, “Smart objects to support the discrimination training of children with autism,” Personal and Ubiquitous Computing, vol. 18, no. 6, pp. 1485–1497, 2014.
  5. A. Peca, M. Coeckelbergh, R. Simut et al., “Robot Enhanced Therapy for Children with Autism Disorders: Measuring Ethical Acceptability,” IEEE Technology and Society Magazine, vol. 35, no. 2, pp. 54–66, 2016.
  6. C. A. G. J. Huijnen, M. A. S. Lexis, and L. P. de Witte, “Matching Robot KASPAR to Autism Spectrum Disorder (ASD) Therapy and Educational Goals,” International Journal of Social Robotics, vol. 8, no. 4, pp. 445–455, 2016.
  7. P. Pennisi, A. Tonacci, G. Tartarisco et al., “Autism and social robotics: A systematic review,” Autism Research, vol. 9, no. 2, pp. 165–183, 2016.
  8. B. Scassellati, H. Admoni, and M. Matarić, “Robots for use in autism research,” Annual Review of Biomedical Engineering, vol. 14, no. 1, pp. 275–294, 2012.
  9. R. P. Hastings, “Behavioral adjustment of siblings of children with autism engaged in applied behavior analysis early intervention programs: The moderating role of social support,” Journal of Autism and Developmental Disorders, vol. 33, no. 2, pp. 141–150, 2003.
  10. J. Virués-Ortega, “Applied behavior analytic intervention for autism in early childhood: meta-analysis, meta-regression and dose-response meta-analysis of multiple outcomes,” Clinical Psychology Review, vol. 30, no. 4, pp. 387–399, 2010.
  11. M. W. Steege, F. Charles Mace, L. Perry, and H. Longenecker, “Applied behavior analysis: Beyond discrete trial teaching,” Psychology in the Schools, vol. 44, no. 1, pp. 91–99, 2007.
  12. S. I. Greenspan and S. Wieder, “A Functional Developmental Approach to Autism Spectrum Disorders,” Research and Practice for Persons with Severe Disabilities, vol. 24, no. 3, pp. 147–161, 1999.
  13. C. M. Corsello, “Early intervention in autism,” Infants & Young Children, vol. 18, no. 2, pp. 74–85, 2005.
  14. S.-T. Liao, Y.-S. Hwang, Y.-J. Chen, P. Lee, S.-J. Chen, and L.-Y. Lin, “Home-based DIR/Floortime™ intervention program for preschool children with autism spectrum disorders: Preliminary findings,” Physical & Occupational Therapy in Geriatrics, vol. 34, no. 4, pp. 356–367, 2014.
  15. S. Thill, C. A. Pop, T. Belpaeme, T. Ziemke, and B. Vanderborght, “Robot-Assisted Therapy for Autism Spectrum Disorders with (Partially) Autonomous Control: Challenges and Outlook,” Paladyn, Journal of Behavioral Robotics, vol. 3, no. 4, pp. 209–217, 2012.
  16. D. Feil-Seifer and M. J. Matarić, “B3IA: A control architecture for autonomous robot-assisted behavior intervention for children with autism spectrum disorders,” in Proceedings of the 17th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN, pp. 328–333, August 2008.
  17. J. Wainer, B. Robins, F. Amirabdollahian, and K. Dautenhahn, “Using the humanoid robot KASPAR to autonomously play triadic games and facilitate collaborative play among children with autism,” IEEE Transactions on Autonomous Mental Development, vol. 6, no. 3, pp. 183–199, 2014.
  18. S.-S. Yun, H. Kim, J. Choi, and S.-K. Park, “A robot-assisted behavioral intervention system for children with autism spectrum disorders,” Robotics and Autonomous Systems, vol. 76, pp. 58–67, 2016.
  19. J. C. González, J. C. Pulido, and F. Fernández, “A three-layer planning architecture for the autonomous control of rehabilitation therapies based on social robots,” Cognitive Systems Research, vol. 43, pp. 232–249, 2017.
  20. A. Ghorbandaei Pour, A. Taheri, M. Alemi, and A. Meghdari, “Human–Robot Facial Expression Reciprocal Interaction Platform: Case Studies on Children with Autism,” International Journal of Social Robotics, vol. 10, no. 2, pp. 179–198, 2018.
  21. M. Salvador, A. S. Marsh, A. Gutierrez, and M. H. Mahoor, “Development of an ABA Autism Intervention Delivered by a Humanoid Robot,” in Proceedings of the International Conference on Social Robotics, vol. 9979, pp. 551–560, Springer International Publishing.
  22. G. Trafton, L. Hiatt, A. Harrison, F. Tanborello, S. Khemlani, and A. Schultz, “ACT-R/E: an embodied cognitive architecture for human-robot interaction,” Journal of Human-Robot Interaction, vol. 2, no. 1, pp. 30–55, 2013.
  23. Y. Feng, Q. Jia, M. Chu, and W. Wei, “Engagement Evaluation for Autism Intervention by Robots Based on Dynamic Bayesian Network and Expert Elicitation,” IEEE Access, vol. 5, pp. 19494–19504, 2017.
  24. P. G. Esteban, P. Baxter, T. Belpaeme et al., “How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder,” Paladyn, Journal of Behavioral Robotics, vol. 8, no. 1, pp. 18–38, 2017.
  25. S. Shamsuddin, H. Yussof, L. I. Ismail, S. Mohamed, F. A. Hanapiah, and N. I. Zahari, “Humanoid robot NAO interacting with autistic children of moderately impaired intelligence to augment communication skills,” Procedia Engineering, vol. 41, pp. 1533–1538, 2012.
  26. L. I. Ismail, S. Shamsudin, H. Yussof, F. A. Hanapiah, and N. I. Zahari, “Robot-based Intervention Program for autistic children with Humanoid Robot NAO: Initial response in stereotyped behavior,” Procedia Engineering, vol. 41, pp. 1441–1447, 2012.
  27. A. Tapus, A. Peca, A. Aly et al., “Children with autism social engagement in interaction with Nao, an imitative robot: A series of single case experiments,” Interaction Studies, vol. 13, no. 3, pp. 315–347, 2012.
  28. M. A. Miskam, M. A. C. Hamid, H. Yussof, S. Shamsuddin, N. A. Malik, and S. N. Basir, “Study on social interaction between children with autism and humanoid robot NAO,” Applied Mechanics and Materials, vol. 393, pp. 573–578, 2013.