British Children’s and Adults’ Perceptions of Robots
Robotics and artificial intelligence (AI) systems are quickly becoming a familiar part of everyday life. We know very little about how children and adults perceive the abilities of different robots and whether these ascriptions are associated with a willingness to interact with a robot. In the current study, we asked British children aged 4–13 years and British adults to complete an online experiment. Participants were asked to describe what a robot looks like, give their preference for various types of robots (a social robot, a machine-like robot, and a human-like robot), and answer whether they were willing to engage in different activities with the different robots. Results showed that younger children (4 to 8 years old) were more willing to engage with robots than older children (9 to 13 years) and adults. Specifically, younger children were more likely than older children and adults to see robots as kind, and both younger and older children were more likely than adults to rate the social robot as helpful. This is also the first study to examine preferences for robots engaging in religious activities: results show that British adults prefer humans over robots to pray for them, but such biases may not be generally applicable to children. These results provide new insight into how children and adults in the United Kingdom accept the presence and function of robots.
1.1. The Changing Age of Robots in Society
Robotics and artificial intelligence (AI) systems are rapidly becoming normal components of modern technological societies. Robotics and AI help to solve several workforce problems: making industry more efficient, completing hazardous jobs (such as work at the Fukushima Daiichi Nuclear Power Station), tackling the social problem of loneliness (via interactive robots such as Pepper), encouraging language skills or exercise in children, helping the elderly as domestic assistants (such as by fetching containers), and praying or completing rituals as religious robots, such as Mindar in Kodaiji or SanTO, a robot designed to have divine features similar to Christian saints. Perhaps due to their widespread usefulness, robotic presence has increased. Despite this increase, we still know very little about how the presence and abilities of AI and robots are, or would be, accepted among children and adults, and especially how children and adults perceive religious robots. Children’s perceptions are particularly valuable because children are likely unaware of a robot’s intended design and function. By examining children’s perceptions, we gain worthwhile information about whether children like and prefer these robots based on their appearance and behavior rather than on predetermined ideas of the robot’s function. Additionally, although studies have been conducted with children in the USA, Asia, and across Europe, very few studies have focused on children in the UK, a nation that is seeing an increase in robotic presence. As robots become more familiar in British lives, it is important to understand how they may be perceived and accepted. This study explores both British children’s and adults’ perceptions of, and motivation to interact with, three different types of robots and introduces a novel exploration of their acceptance of religious robots.
Our research aims are to (1) compare British children’s and adults’ perceptions of robots, (2) examine differences in impressions of three different robots, and (3) explore whether participants prefer a human over a robot for different activities, including a novel one: praying.
1.2. Current Perspectives of Robotics
Understanding how children and adults perceive and accept the presence and assistance of robots is a pressing topic globally. Many nations are investing in research to explore the usefulness of AI and robots for various social, health, educational, religious, and economic sectors. In particular, the UK government acknowledges the increasing importance of robots and AI, and substantial investments, up to €2.8 billion in 2014–2020, were made in robotic technologies for social care. However, extensive surveys of European countries found that British adults are reluctant to use robots for social care; strikingly, 60 percent of participants thought robotic research for the care of children, the elderly, and disabled people should be banned. A gap in these surveys and research is understanding children’s and adults’ perception and acceptance of robots in sectors beyond social care, including religion.
Other work suggests that the increase in the presence and use of robots has been met with mixed reactions and perceptions. Some adults were concerned that increased robotic presence might lead to losses in jobs, safety and privacy, and the quality of human relationships. Others saw the benefits of robots, such as their assistance in healthcare or education. The difference in reactions may arise because robots are used for different activities. Evidence suggests that adults may be more accepting of robots engaging in some activities than others. For example, in one study, adults were happy to have robots clean their house but not cook for them, and in another, half of the participants in a hospital pain clinic did not mind the presence of a robot during their first consultation with a doctor. Additionally, some nations, such as Japan, may be more open to robotic presence in society than others, such as the USA and Korea. Comparatively, other studies suggest much more hesitation. One recent study found that American adults were reluctant to have robots take over jobs and to employ them in the future [18, 20]. There are also reports that adults in Western nations, especially those in the Christian tradition, are hesitant to have robots in a religious service.
This hesitancy may depend on the shape and design of robots; in particular, hesitancy to accept robots may increase with the degree to which a robot appears human. Early work in human-robot interaction found that if robots appear anthropomorphic, or have just some elements of human form, adults are more likely to cooperate with them. However, if a robot looks too human-like, for instance having skin and human facial features, adults perceive it as eerie. This phenomenon is called the “uncanny valley,” in which people may feel uncomfortable when a robot appears too human-like [22–25]. Indeed, some work suggests that adults prefer machine-like robots over humanoids [16, 26]. In summary, adult acceptance of robots is uncertain, especially the more human-like they are.
Although adults have mixed perceptions and feelings regarding the use of robots, other work suggests that children may be more accepting. Across several different domains, children are willing to interact with robots. For example, 3-year-old children will choose to learn information from social robots that give accurate (but not inaccurate) information and will also choose not to learn from inanimate toys that give either accurate or inaccurate names for items. Further, 3-to-6-year-old children consider robots to be better sources for particular types of information, such as information about machines rather than about biology or psychology. Other work has shown that 3- and 4-year-old children are willing to help robots, and 4- to 9-year-old children will even seek comfort from machine-like robots.
Some of the desire to interact may stem from initial perceptions of robots. Although children may be more receptive to robots than adults, we know that this acceptance is not due to an inability to conceptualize robots. Several studies suggest that preschool-aged children easily distinguish between living and nonliving agents [31–33], but these distinctions are clearer when children have more experience with robots and are older. Children also perceive robots differently according to their appearance, and studies suggest that this perception may change with age. For example, 4- and 5-year-olds attribute more human-like characteristics to robots than older children do [32, 35], and more psychological attributes than biological ones. Children older than 9 years are similar to adults in that they find human-like robots creepy (although see evidence from a study with infants showing an uncanny valley effect at 12 months). The study by Brink and colleagues is important because it suggests that younger children perceive robots differently from older children and adults and may be more open and receptive to interacting with robots. Outward appearance, then, may affect how children conceptualize robots. Despite the work by Brink et al. and Lewkowicz and Ghazanfar, no prior work has compared both children’s and adults’ perceptions of various robots and their willingness to interact with them.
1.3. Current Gaps in the Literature
We identified three gaps in the literature that align with our research goals. The first is a gap in comparing children’s and adults’ perceptions of robots in Britain, a nation just beginning to see an increase in robotic presence. In the only British study we are aware of, Woods showed that the uncanny valley appears for British children aged 9–11, though that study did not examine developmental trends with a larger sample of children or compare children’s perceptions with those of adults.
The second is a gap in understanding whether children and adults would be willing to interact with different types of robots. Some robots look industrial (e.g., a robotic arm); others are anthropomorphic, ranging from machine-like with a body and face (a humanoid) to more human-like (an android). We wanted to explore how children and adults perceived a machine-like robot (Titan, used to perform specific tasks in a factory), a human-like religious robot (Mindar, an android robot used in a Japanese temple for prayer), and a social robot (Nao, a humanoid robot used for social interaction with children and adults). Although prior work shows that younger children may be more accepting of robots and that children and adults find some robots creepier than others, we do not know whether they would be willing to interact with robots across different activities. One underexamined activity is how British children and adults would perceive, and be motivated to interact with, a robot that could perform prayers. The use of robots for religious services has increased, especially in the wake of COVID-19, and religious organizations have considered robotics and AI for offering prayer while avoiding human contact. Prior to the pandemic, robotic prayer was already in practical use in a Japanese temple (Mindar in Kodaiji) and in a German church (Bless-U). We used the android robot, Mindar, to explore whether British children and adults are willing to accept prayers from this robot.
The third gap is understanding whether children and adults prefer interacting with a robot or with a human. One study with Dutch and Pakistani 8- and 12-year-olds showed that child-robot interaction is enjoyed more than playing alone (and much more so by Pakistani than Dutch children), but playing with a friend is preferred to playing with a robot. We asked participants to choose whether they would like the social robot (Nao) or a human to cook for them, play with them, or pray for them. We asked only about Nao because this robot is frequently used in child-robot interaction settings and has served as a representative robot in previous studies with child samples.
1.4. The Current Study
To address these gaps, we asked British children aged 4 to 13 years and British adults to complete an online experiment in three sections.
In the first section, we asked participants to describe a robot. This question was asked first to help us understand how children and adults conceptualize what a robot is prior to showing them photos and asking them to reason about different robots.
In the second section, we asked participants about three types of robots: a social robot (Nao: a characterized anthropomorphic humanoid robot with a metal body, used for social interaction), an industrial, machine-like robot (Titan: a nonanthropomorphic mechanical arm used for various industrial tasks), and a religious, human-like robot (Mindar: an anthropomorphic android robot used in a Japanese Buddhist temple, Kodaiji, which has received academic interest [44, 45] and is regarded both positively and negatively according to comments on a press release video clip). See Figure 1(a) for images of the three robot types.
Figure 1: (a) Sections 1 and 2; (b) Section 3.
We wanted to compare participants’ perceptions of robots on a spectrum from least to most uncanny and chose three robots that might evoke different perceptions and engagement. According to the IEEE robotics website (https://robots.ieee.org/), which publishes a ranking of how much people like each robot, Nao is highly liked (17th of 230 robots when accessed on 27 May 2021) and Titan is in the middle (98th). Although Mindar is not registered on the website, all robots that have human-like skin on mechanical bodies are in the less-liked group (e.g., Sophia: 207th; HRP-4C: 216th); Mindar, which has human-like skin with metal limbs, would likely be rated as one of the least-liked robots. In another robot database, ABOT (http://www.abotdatabase.info/collection), which provides subjective human-likeness scores of humanoid robots based on more than 1000 participants’ ratings, Nao’s human likeness is rated in the middle (45.92 out of 100.00) and Alter (which has the same design as Mindar; the names differ because of their different uses) has a higher human-likeness rating (61.30). Although there are robots rated as more human-like, such as Geminoid with a score of 92.60, they might not be distinguishable from actual humans in static images alone: work has shown that people find it difficult to tell the difference between a Geminoid and an actual human. Because of this, we did not include robots with a higher human-likeness rating than Alter/Mindar, which might be positively evaluated by overcoming the uncanny valley.
We did not explain to participants the purpose and intended design of these robots, although we showed images of the robots in their characteristic poses (Titan with a poised tool, Nao standing in a friendly pose, and Mindar praying). In section two, we wanted to know whether children and adults were open to a machine-like, human-like/religious, or social robot helping, playing, or praying for them, and whether they thought each robot was kind.
In the third section, we asked whether participants preferred a robot or a human to perform different actions, including playing, cooking, and praying. Here, we asked three types of questions about each activity. First, we asked whether participants would prefer a robot or a human (e.g., “Who would you want to play with?”; who item). Then, we asked their preference when focusing on the ability or skill involved in the activity (e.g., “Who would be the best to play a game with?”; best item). Finally, we asked whether a robot or a human would fulfill the activity in a way the participant likes (e.g., “Who would play a game in a way that you like?”; like item). We chose playing and cooking because these actions are familiar to children. We also chose praying to explore how children and adults feel about a robot versus a human praying for them.
We divided children into two age groups, 4 to 8 years old (younger) and 9 to 13 years old (older), which approximately match the age groups of previous studies. Prior work found that the uncanny valley emerges around 9 years of age, so it was important to compare children above and below 9 years. Previous studies focused on narrowly defined age groups: typical younger groups spanned ages 4 to 8 years (e.g., 4 to 5 years; 4 to 7 years; 4 to 9 years; and 4 to 10 years), and some studies with older children covered 9 to 15 years. None of these studies compared both younger and older children.
We had several research questions. Based on the work by Brink et al., we predicted that younger children would have stronger preferences for, and be more willing to interact with, robots than adults, and that the social robot would be favored over the machine-like and human-like robots. In addition, because recent surveys have reported that British adults are reluctant to see robots in everyday life, we also predicted that children and adults would prefer to play with, receive cooked food from, and receive help and prayer from a human over a robot.
Forty-four younger children from 4 to 8 years old (15 females, 29 males), 40 older children from 9 to 13 years old (17 females, 23 males) (for age distribution, see Figure S1), and 30 adults (14 females, 15 males, and 1 other; 19 nonreligious, 4 Christians, 1 Muslim, 5 Atheist/Agnostic, and 1 Spiritualist) participated online via Qualtrics. Children were recruited as part of an online public engagement event at a university (redacted for review) that provides science activities and studies for children (website of event redacted for review). Parents filled out a short demographic survey when giving consent for their children to participate in the larger event, which included many different studies. Because of limits on the number of questions collected and to keep tasks short, ethnicity, SES, and religious affiliation were not collected for the children during the online event. Children aged 4 to 13 years participated either on their own or with the aid of a parent (to read out instructions). Participants were asked to indicate at the end of the questionnaire whether they received help: among younger children, 18 participated with help and 24 on their own (2 did not answer); all older children indicated that they participated on their own. Adult participants were recruited from Amazon Mechanical Turk (MTurk) and limited to UK-based workers. Adult participants received 1.60 USD for participation; children received virtual “tokens” that tallied a child’s overall participation in the event, which the child could see on a leaderboard. Child participants were assigned an anonymous ID for each activity to keep their identity confidential throughout the event. Ethical approval was obtained for both the children and the adults through a university (redacted for review).
The sample size of each age group is comparable to previous studies (approximately 30 to 40 participants per age group or condition; e.g., [30, 35, 50]).
Our study used images retrieved from several sources. In section 1, images were retrieved from the IEEE robots website (https://robots.ieee.org/robots/). Each image shows the whole body of a robot on a plain white background (see Figure 1(a)). In section 2, the same images of the social robot (Nao) and machine-like robot (Titan) presented in section 1 were used. For the human-like robot (Mindar), we used a picture from a news source in which it faces the front with its palms together in front of its chest (see Figure 1(a)). In section 3, the pictures of the social robot (Nao) were created in the lab (redacted for review). Pictures of the robot were taken first, and then the corresponding pictures of a human (a White female adult) were taken, aligning her posture with the robot pictures. The color of the balloon and the relative sizes of the whisk and the chef hat were matched between pictures (see Figure 1(b)).
All questions were administered via Qualtrics at a time convenient for the child and independent of interaction with the researchers. On the first screen, participants answered demographic questions (ID, age) so that the child could later be matched with the larger demographic survey from the event. The web survey then started with an example question and an explanation of how to respond. After completing an example question (e.g., the participant’s favorite color), participants responded to the target questions. For children, before parental consent was given, an information screen explained to parents that experimenters were interested in children’s responses and asked them to refrain from suggesting answers. Parents of young children who could not read were further instructed to read aloud any screens that were not narrated and to type in any responses that the child could not type. Parents of older children were instructed to let the child respond and type answers independently.
Target questions were arranged in three sections in a fixed order. The first section asked participants to describe their perception of robots. We hoped that this section would provide some clarity about how children and adults conceptualize a robot. It was presented first so that descriptions would not be influenced by the images in sections 2 and 3 and would precede more specific questions about different kinds of robots. It consisted of two items asking participants to describe the general appearance of robots. On the first screen, the first item was an open question: “Can you describe what a robot looks like?” The second item, on the following screen, asked participants to choose one of two pictures: “Which picture looks like a ‘robot’ to you?” We displayed two pictures horizontally: a machine-like robot (Titan: a mechanical arm designed for industrial manufacturing needs) and a social robot (Nao: a characterized humanoid robot designed for communication needs). The position of the pictures was counterbalanced between participants.
In the second section, participants were asked to rate their impression of three different robots (all counterbalanced): a social robot (Nao), a machine-like robot (Titan), and a human-like robot (Mindar: a humanoid robot designed for religious prayers). This section came second so as to capture impressions before we examined preferences in the next section. We wanted children to make judgments based on the image of the robot. We also wanted to ensure that children knew that Mindar was a religious robot, so we posed Mindar with its hands in prayer. Previous work has shown that Nao is perceived as friendly; its image was simply of Nao standing. The image of the machine-like robot was a side view to capture the full length of the robot and give a better view of its features. Pictures of the social robot and the machine-like robot were identical to those in the first section. On each of three webpages, a picture of one of the three robots was displayed at the top of the page. Below the photo were four items to which participants responded on 4-point Likert scales. Participants rated willingness to play with the robot (“Do you want to play with this robot?”), helpfulness of the robot (“Do you think the robot would help you?”), kindness of the robot (“Is this robot mean or kind?”), and willingness to be prayed for by the robot (“If you needed a prayer, could this robot help you?”). For the kindness item only, participants chose from the following four options: “Mean” (1), “Maybe Mean” (2), “Maybe Kind” (3), and “Kind” (4). Responses were coded according to the numbers in parentheses. For the other three items, the options were “Yes” (4), “Maybe Yes” (3), “Maybe No” (2), and “No” (1). All participants answered for each robot in the same order (1st: social, 2nd: machine-like, and 3rd: human-like). In the third section, participants were asked whether they preferred a robot (Nao) or a human in three activities: playing, cooking, and praying.
For each activity, a pair of pictures (counterbalanced across participants) was presented horizontally. In the two pictures, a human (a female adult) and a robot (Nao) had the same posture with the same equipment (e.g., a chef hat in the cooking activity; see Figure 1(b)). Three questions then followed, and participants chose one of the pictures (of the human or the robot) for each question. Participants were asked to think about the action (e.g., playing) and choose whom they would want to interact with (e.g., “Who would you want to play with?”), who would be the best at that action (e.g., “Who would be the best to play a game with?”), and who would fulfill the action in the way the participant wants (e.g., “Who would play a game in a way that you like?”). Items for each action are shown in Table 1. The position of each picture (right or left) was counterbalanced across participants. All participants answered in the same order (1st: playing, 2nd: cooking, and 3rd: praying).
The responses to the free-text entry in section 1 were analyzed with manual coding. We analyzed only responses that had more than one word. Two coders (the second author and a masters student, who was blind to hypotheses and did not participate in the research) independently coded two items: whether participants made “reference to machine likeness” or “reference to human likeness.” Coding consisted of checking whether the response included (1) or did not include (0) words representing machine likeness (“metal,” “metallic,” “machine,” and “shiny”) or human likeness (“human,” “person,” and “humanoid”). Three responses were coded differently between the two coders. The differences were omissions by one coder: specifically, one coder initially overlooked two responses that should have been coded for machine likeness and one response for human likeness. The initial agreement was 98.2% for “machine likeness” (2 disagreements) and 99.1% for “human likeness” (1 disagreement). These were discussed and edited for a final agreement of 100%.
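The coding rule described above can be sketched in code. The original coding was done by hand (and all reported analyses were run in R), so the following Python helpers are only an illustration of the scheme: a response is coded 1 if it contains any keyword from the relevant list, and percent agreement is the share of responses on which the two coders match. The function names are our own.

```python
import re

# Keyword lists mirror those reported in the text.
MACHINE_WORDS = {"metal", "metallic", "machine", "shiny"}
HUMAN_WORDS = {"human", "person", "humanoid"}

def code_response(text, keywords):
    """Return 1 if the response contains any keyword as a whole word, else 0."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return int(bool(tokens & keywords))

def percent_agreement(codes_a, codes_b):
    """Percentage of items on which two coders gave the same 0/1 code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)
```

For example, `code_response("It has shiny metal arms", MACHINE_WORDS)` returns 1, while a description like "a big box on wheels" is coded 0 for both items.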
3. Analyses and Results
3.1. Description of Analysis
All analyses were conducted in R. The code and output are included in the supplementary materials, as is the list of R packages. We report results separately for each section. For each section, we excluded participants with missing values: one younger child for the free-text entry in section 1, three younger children in section 2, and two younger children and one older child in section 3. These children were excluded because they did not respond to items in these sections (i.e., did not select options or did not type any text). The missing values were distributed randomly across items and may be due to forgetting or skipping an answer.
3.2. General Description of Robots (Section 1)
When answering which picture looks like a robot, the majority of children in both age groups (younger: 84.1%; older: 80.0%) and adults (83.3%) chose Nao (a social robot) as a “typical” robot over Titan (a machine-like, nonanthropomorphic robot). To test whether the distribution of the two choices (Nao or Titan) deviated from chance (probability 0.5), we conducted two-tailed exact binomial tests. In all three groups, binomial tests showed that participants chose Nao as a “typical” robot significantly more often than chance. A chi-squared test on the 2 (Nao versus Titan) × 3 (younger children, older children, adults) contingency table showed that the proportion did not differ significantly among the three age groups. Overall, more than 80% of participants chose Nao (a social robot) as a typical robot.
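As an illustration of the exact binomial test used above, the following Python sketch uses scipy (the original analyses were run in R) with an approximate count: 84.1% of the 44 younger children corresponds to 37 choices of Nao. The counts are illustrative, not the exact data.

```python
from scipy.stats import binomtest

# Two-tailed exact binomial test against chance (p = 0.5):
# 37 of 44 younger children choosing Nao (approximate, for illustration).
result = binomtest(k=37, n=44, p=0.5, alternative="two-sided")
print(result.pvalue)  # well below .05: Nao was chosen above chance
```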
For the free-text responses, we analyzed whether the proportion of mentions of each term (“reference to human likeness” or “reference to machine likeness”) differed by age group (see Figure 2). For “reference to human likeness,” a chi-squared test on the 2 (mentioned versus not mentioned) × 3 (younger children, older children, adults) contingency table showed that the proportion differed significantly among age groups. Fisher’s exact tests with Hochberg adjustment revealed that the proportion of younger children who made reference to human likeness was significantly smaller than that of adults, but there was no significant difference between younger and older children or between older children and adults. Similarly, we conducted a chi-squared test on “reference to machine likeness” and found no significant difference in the proportion of mentions among the three age groups.
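A chi-squared test of homogeneity on a 2 × 3 table like the one described can be sketched as follows. The counts below are invented for illustration only; the real analysis used the actual coded responses and was run in R.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 (mentioned vs. not) x 3 (younger, older, adults) table.
table = [
    [5, 10, 14],   # mentioned "human likeness"
    [34, 29, 16],  # did not mention it
]
chi2, p, dof, expected = chi2_contingency(table)
# For a 2 x 3 table, dof = (2 - 1) * (3 - 1) = 2.
```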
We note that some child participants used the pronoun “he” for robots, perhaps signifying that they personified robots. These responses, however, contained no explicit references to human likeness; instead, they described appearances without using words like human or machine (e.g., “He is fat and he is very small” and “Two boxes, top box smaller than bottom box, two arms and caterpillar tracks, and two radar sticking up from his ears with small yellow balls on top”). Participants’ use of pronouns for robots was as follows: for younger children, it (11/39), they (9), he (3), and no pronouns (16); for older children, it (10/39), they (5), he (2), and no pronouns (22); and for adults, it (10/30), they (5), and no pronouns (15). We retained a strict definition of “human likeness” rather than include descriptions with pronouns, because the descriptions themselves did not describe a human-like agent.
3.3. Evaluation of Three Types of Robots for Four Activities (Section 2)
3.3.1. Description of Analyses
Next, we analyzed participants’ evaluations of the three robots, a social, anthropomorphic humanoid robot (Nao); an industrial, machine-like, nonanthropomorphic robot (Titan); and a religious, human-like android robot (Mindar), on four items: helpfulness (help), kindness (kind), willingness to play (play), and willingness to be prayed for (pray) (see Table 1). A descriptive summary is plotted in Figure 3. To investigate whether participants responded differently depending on their age group or the type of robot, we conducted a two-way mixed-design analysis of variance (ANOVA, type III) for each item. The dependent variable was participants’ rating of the item, and the independent variables were age group (younger children, older children, and adults; between-participants) and robot type (social, machine-like, and human-like; within-participants). p values were adjusted with the Bonferroni correction. Below, we report the results for each item in turn.
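The Bonferroni correction mentioned above multiplies each p value by the number of comparisons and caps the result at 1. A minimal Python sketch follows (the authors' analyses were in R, where `p.adjust(p, method = "bonferroni")` performs the same adjustment):

```python
def bonferroni(p_values):
    """Bonferroni-adjust p values: multiply each by the number of
    comparisons, capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

adjusted = bonferroni([0.01, 0.04, 0.5])  # three pairwise comparisons
```

With three comparisons, an unadjusted p of .01 becomes .03, .04 becomes .12, and .5 is capped at 1.0.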
3.3.2. Helpfulness (Help)
A statistically significant two-way interaction (robot type × age group) was found for ratings of the helpfulness of robots. The simple main effect of age group was significant for the social robot but not for the machine-like or human-like robots. Pairwise comparisons for the social robot showed that younger children and older children rated it as significantly more helpful than adults did.
3.3.3. Kindness (Kind)
A significant interaction (robot type × age group) was not found for ratings of the kindness of robots. Significant main effects were found for age group and for robot type. For age group, pairwise comparisons showed that younger children rated robots as significantly kinder than older children and adults did. Across age groups, the social robot was rated as kinder than the human-like robot and the machine-like robot. A summary of these ANOVA results is shown in Figure 3.
3.3.4. Willingness to Play (Play)
A significant (robot type × age group) interaction effect was not found in responses regarding the willingness to play with robots. A significant main effect was found for robot type but not for age group. Pairwise comparisons showed that participants were significantly more likely to play with the social robot than with the human-like robot or the machine-like robot.
3.3.5. Willingness to Be Prayed For (Pray)
A significant (robot type × age group) interaction effect was not found for responses regarding the willingness to be prayed for by robots. Significant main effects were found for both age group and robot type. For age group, pairwise comparisons showed that younger children responded that they were significantly more willing to be prayed for by robots than older children and adults were. For robot type, participants responded that they were significantly more willing to be prayed for by the human-like robot than by the social robot or the machine-like robot, and more willing to be prayed for by the social robot than by the machine-like robot. Since responses could be influenced by a participant’s religious affiliation, we also examined whether self-reported religious adults (n = 6: 4 Christians, 1 Muslim, and 1 Spiritualist) were more or less willing to be prayed for by a robot. The data pattern shown in Figure S3 did not show a notable trend between religious and nonreligious participants (note that religious affiliation was only available for adult participants and the sample size of the religious group was small; see Procedure).
3.3.6. Summary of Four Items (Help, Kind, Play, and Pray)
Overall, the results show that younger children perceive robots as kinder, and are more willing to have robots pray for them, than older children and adults; this pattern was not found for the play (willingness to play with) and help (helpfulness) items. However, there was a significant age-related effect for the social robot on the help item: younger and older children were more likely than adults to respond that the social robot was helpful.
Main effects of robot type were also found for all four items. The social robot was rated higher on the kind and play items than the human-like and machine-like robots, and the machine-like robot was rated higher than the human-like one on the play and help items. Finally, although the human-like robot was evaluated lower on the play and help items, participants responded that they were willing to have it pray for them (see Figure 3(c)). We further interpret these results for age groups and robot types in the discussion.
3.4. Comparison of Preferences for a Human or Robot in Three Activities (Section 3)
3.4.1. Description of Analysis
In the final section, we examined whether participants preferred to interact with a human or a robot in three activities (play, pray, and cook). The proportion choosing the human over the robot is shown in Figure 4. For each activity (play, cook, and pray) and item (who, best, and like), we conducted Fisher’s exact test on contingency tables of 3 age groups (younger children, older children, and adults) by 2 selection choices (human or robot). We then compared each pair of groups with Fisher’s exact tests on 2 × 2 contingency tables (2 age groups by 2 selection choices), with p values adjusted by the Bonferroni method. Additionally, we merged the two children’s groups (younger and older) and ran a logistic regression analysis for each case to predict participants’ choice of human or robot (dependent variable) from age (independent variable).
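The pairwise Fisher's exact tests described above can be sketched as below. The choice counts here are invented for illustration; only the structure (2 age groups × 2 choices per test, Bonferroni over three pairwise tests) follows the analysis described in the text:

```python
from scipy import stats

# Hypothetical (chose human, chose robot) counts for one activity/item cell
counts = {
    "younger": (10, 30),
    "older": (20, 20),
    "adults": (35, 5),
}

pairs = [("younger", "older"), ("younger", "adults"), ("older", "adults")]
adj_p = {}
for a, b in pairs:
    table = [list(counts[a]), list(counts[b])]  # 2 age groups x 2 choices
    _, p = stats.fisher_exact(table)            # exact test on the 2x2 table
    adj_p[(a, b)] = min(1.0, p * len(pairs))    # Bonferroni correction
```

With counts this extreme, the younger-versus-adults comparison remains significant even after the correction, while comparisons between more similar groups may not.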
3.4.2. Result of Analysis
The proportion choosing the human over the robot in the three activities is shown in Figure 4. Results of the Fisher’s exact tests and logistic regression analyses are summarized in Table 2, and the regression analyses are also visualized in Figure 5. Fisher’s exact tests showed that in all 9 cases (3 activities × 3 items), the proportion choosing the human over the robot differed among the three age groups. Each pair of age groups was then compared with an additional Fisher’s exact test. First, we report the comparisons of each children’s group versus adults (i.e., younger children versus adults; older children versus adults), which were consistent regardless of the activity (play, pray, or cook) or how the question was asked (who, like, or best items). In all 9 cases, there were significant differences between the responses (see Table 2 for each result): both younger and older children were more likely than adults to prefer robots over humans. These results show that, regardless of the activity or question wording, the strong bias to choose humans (human preference bias) seen in adults was not present in either younger or older children.
We also compared responses between the younger and older children’s groups. Only for particular items were younger children significantly more likely than older children to prefer robots over humans. For the play activity, Fisher’s exact tests showed that younger children were significantly more likely than older children to choose robots on the best item but not on the who and like items. This result was consistent with the regression analysis (see Table 2). For the cook activity, neither the Fisher’s exact tests nor the regression analyses found significant differences on any of the three items, indicating that younger children did not prefer robots over humans more than older children did.
Finally, for the pray activity, although no significant difference was found between younger and older children on the best item, younger children were significantly more likely than older children to choose robots on both the who and like items (see Table 2). Further, the regression coefficients were significant among children for all three items, but this result was not consistent with the Fisher’s exact test for the best item, which was not significant between the younger and older children’s groups. This contrasting result may arise because the criterion of the Fisher’s exact test was stricter due to the Bonferroni correction. The descriptive pattern of the data (see Figure S2), however, suggested that preference for the human model on the best and who items increases with age. In addition, further inspection of the descriptive pattern (Figure 5) suggests that participants’ preference for a human to pray for them emerges around 6–8 years.
4.1. Summary of Studies
In the present study, we investigated how children and adults evaluate robots differently. Furthermore, we investigated participants’ willingness to interact with robots depending on the robot type and the activity, including in a novel context: praying. The study had three sections. In the first section, we asked participants what a “robot” looks like. Analysis of the free-text descriptions showed that many participants used terms or concepts related to “metalness/machine likeness” or “human likeness.” Although there was no developmental shift in the proportion of participants describing a metallic aspect, there was a developmental shift in the use of “human likeness” to describe robot appearance: compared to the older age groups, younger children were less likely to use the concept of “human” when describing robots. This tendency aligns with previous findings that younger children are less likely than older children and adults to ascribe human-like characteristics (emotional, social, and perceptual abilities) to robots [27, 31].
When we asked participants to choose whether they judged a robot to be metallic or humanoid, the majority selected the humanoid robot (Nao) over the industrial machine-like robot (Titan). Perhaps British children are more likely to conceptualize “robot” as humanoid because robots such as Nao are created and marketed for children, and exposure to stories and movies starring humanoid robots may make children more likely to accept them. British children may be unaware of what other types of robots look like, especially those used in industry.
In section two, we asked participants to evaluate three types of robots: Nao (a humanoid, social robot), Titan (a machine-like robot), and Mindar (an android, human-like robot) on four items (help, kind, play, and pray). There were differences across age groups and robot types. Younger children’s responses showed that they perceive robots as kinder than older participants do. Further and overall, the social robot (Nao) was rated more highly than the machine-like robot and the human-like robot, which may reflect Nao’s character as a social robot designed to interact with children and elderly people.
However, a similar pattern was not found in the responses for helpfulness and willingness to play. Age did not influence either item, except that children rated the social robot as more helpful than adults did; participants overall also preferred to play with the social robot (Nao) over the machine-like and human-like robots. Participants also evaluated the machine-like robot as more helpful than the human-like robot (Mindar). Participants’ willingness to play with Nao confirms (in line with section 1 and with prior work) that British participants are receptive to interacting with the social robot. Indeed, Nao was designed for communicative purposes and is currently used across a wide range of ages, such as in the care of the elderly or of children.
We also found that the human-like robot was perceived as appropriate for praying. In particular, the youngest children responded most strongly, compared to the other age groups, that they would be willing to have Mindar pray for them. Although it is possible that participants recognized that the human-like robot, Mindar, was designed for reciting prayers during religious services, it is more likely that participants took note of the praying pose and made this distinction based on appearance. Because we intentionally presented Mindar with praying hands to signify that it is a praying robot, we do not know whether participants would have rated the other robots differently had they been shown in a praying-like pose. This should be followed up in future research. Future work should also explore the context in which the robot engages in these actions. For example, responses may change if participants see a robot praying or playing in a home versus a playground versus a religious service. Future work should compare different robots (e.g., including the social robot in a praying pose) and different social contexts or activities to explore whether ratings would be more favorable to a social versus a human-like robot. A matching paradigm would allow participants to compare many different combinations.
Finally, in section three, we asked participants whether they preferred a human adult or the social robot for three activities: playing, cooking, and praying. Adult participants showed a strong bias to prefer the human over the robot across all activities, compared to both younger and older children. Furthermore, visual inspection of the descriptive data suggested that adult participants and children aged 6 to 8 years, but not younger children, preferred the human over the robot to pray for them (see Figures 4 and 5; section 3). Similar to section 2, the results also showed that willingness to be prayed for by robots decreased in the older age groups (i.e., younger children versus older children and adults). Together, these results may suggest a developmental trajectory towards a preference against robots in the religious context. Further work should examine whether children are more comfortable accepting prayer from different types of robots, as well as whether they are more comfortable with a robot versus a human in different play activities. This would tell us more about the specific preferences that children and adults may hold. A limitation of this section is that, in the previous section, children saw a photo of Mindar praying and may have been predisposed to think that only Mindar prays and that Nao does not. If all robots had been shown in a neutral pose in section two, children may have responded differently about praying in this section.
Overall, our results show that younger children are more willing to interact with robots than older children and adults, and that British adults seem to prefer a human over a robot. Perhaps children are more tolerant of robots performing duties in various activities such as playing, cooking, and praying. Could the results imply that the younger generation is more accepting of robots? Future work should explore whether younger children have a higher tolerance of nonhuman entities, and whether this generation is influenced by exposure to more robotic technologies and media than previous generations. Longitudinal studies and cross-cultural studies comparing two or more cultures would also deepen our understanding of how different levels of exposure to robotic technologies influence children’s and adults’ acceptance of them.
4.2. Limitations and Future Directions
There are several limitations to the present study. First, we did not control the appearance of the robots across all activities. We used three types of robots on a spectrum from machine-like/industrial (Titan) to anthropomorphic and social/friendly (Nao) to anthropomorphic and human-like (Mindar), chosen for their functions (Titan is used in industry, Nao in social interactions, and Mindar in a temple). When we presented these robots, Mindar was shown in a prayer pose while Titan and Nao were in neutral poses, and differences in responses may stem from these predetermined poses. We purposefully put Mindar in the praying pose to signify that Mindar is used in religion, because most British children will likely not have seen Mindar. However, because we did not show Nao or Titan in a prayer pose, or Mindar in a nonprayer pose, responses may be biased by participants noticing that Mindar is praying and the other robots are not. Further work should keep the pose constant across different robots. The data suggest that younger children’s responses about whether they would like to play with Mindar were slightly lower than adults’ responses. This could be because children thought Mindar was only a praying robot, but also because Mindar looks realistic and children were uncertain about playing with a robot with such a humanoid appearance. Some of the unease may be triggered by the physical appearance of robots: fifty years of work suggests there is an “uncanny valley,” a point at which robots that look too human are perceived as strange compared to robots that look less human [23–25]. We did not ask participants for the rationales behind their answers, so we cannot conclude whether their more negative responses arose from unease about the robots’ appearance, from reluctance to have a robot pray for them, or from bias induced by Mindar’s pose.
One additional reason that the youngest children did not want to play with Mindar, or did not respond that they wanted Nao to pray for them, is that they may see these robots as designed for a purpose. Studies show that children can be biased to ascribe teleology to objects and might see Mindar as designed for prayer and not for play; likewise, Nao and Titan were not presented as designed for prayer (because they were not in a prayer pose). Further studies that control for pose and context would help answer this question.
Another limitation is that we only showed photographs of the robots rather than videos. Children have been shown to be more favorable to robots when robot behavior is displayed in video form rather than in a photo. Viewing a robot’s behavior seems to help children understand its intentions. Indeed, if a robot displays a mix of human and mechanical features, children experience less discomfort than if the robot has strongly human-like features [59, 60]. Thus, if we had shown videos of the three types of robots, participant responses might have differed.
Additionally, the present study only focused on British participants. As suggested above, future work should explore cultural influences (including a participant’s exposure to robots) on attitudes towards and perceptions of robots. Cross-cultural data would be valuable for understanding how robots are understood and used in different nations. Other studies have found cultural differences in evaluating robots [5, 18, 19]. For example, Japanese university students rated robots as having more autonomy, emotional capacity, and social relationships than university students in the USA or Korea. Future work could test whether these cultural differences appear in younger children in these cultures. Additionally, future work should ask participants about their level of exposure to robots to examine whether it influences perception and motivation to interact. Such studies would help us understand whether the hesitation towards robots seen in Western participants stems from cultural influence, technological exposure, or other reactions, such as those that arise from the “uncanny valley.”
Finally, our study presents a novel contribution in examining British children’s and adults’ impressions of a human-like robot and whether they would allow a robot to pray for them. We discovered that British children and adults have different opinions about whether they would want a human-like robot to pray for them: children were much more open than adults. Much more work is needed to understand whether this openness changes longitudinally or reflects a generation that welcomes new technologies. Cross-cultural studies would also show whether this trend is seen in children in other countries where robots are more present (e.g., Japan) or absent (e.g., hunter-gatherer societies). Further, an interesting study would be to examine Christian/Catholic children’s and adults’ perceptions of SanTO, a robot in the shape of a statue of a saint that can recite prayers and sacred texts. Would children and adults accept this prayer companion, or would SanTO be seen as inferior to a priest or fellow congregant who could say the same prayers? Comparisons of religious robots across children and adults of different faiths could also show whether acceptance or hesitancy stems from the morphological design and proposed function of the robot [61, 62] or from the influence of religious doctrine. For example, Japanese acceptance of Mindar may occur because the Shinto and Buddhist traditions accept that objects can be sacred, whereas British hesitancy may come from a Christian belief that souls reside within people, not objects.
Our study provides a glimpse of British children’s and adults’ current view of robotics and their willingness to interact with them. Our results suggest that British adults may be hesitant to interact with different robots in their life and society, including the religious context, while young children may be open to their use and presence. More work is needed to understand this difference.
The data that support the findings of this study are available in the supplementary material of this article. Analysis code and results are also available in the supplementary file.
This study was approved by the Ethics Committee in the School of Psychology at the University of Nottingham.
All adult participants consented to participating in the study by signing a consent form on the first screen of the survey. All children who participated were required to have signed approval from one of their parents before participating.
Conflicts of Interest
The authors declare no conflicts of interest.
We thank Mao Fujiwara for helping with coding the data. We thank Faisal Mehmood for helping with preparing robots’ images. We also thank the Toshiba International Foundation for funding to both ERB and YN.
We have included several supplementary files, including a supplementary figure file (supplementary Figures 1–3), the analysis code and results, and a raw data file. (Supplementary Materials)
S. Turkle, Alone Together: Why we Expect More from Technology and Less from each Other, Basic Books, New York, 2011.
J. R. Movellan, M. Eckhardt, M. Virnes, and A. Rodriguez, “Sociable robot improves toddler vocabulary skills,” in Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI’09), pp. 307-308, San Diego, USA, 2009.View at: Publisher Site | Google Scholar
C. Suárez Mejías, C. Echevarría, P. Nuñez et al., “Ursus: a robotic assistant for training of children with motor impairments,” in Biosystems and Biorobotics, G. Eugenio, Ed., pp. 249–253, 1st edition, 2013.View at: Publisher Site | Google Scholar
G. Trovato, L. De Saint Chamas, M. Nishimura et al., “Religion and robots: towards the synthesis of two extremes,” International Journal of Social Robotics, vol. 13, no. 4, pp. 539–556, 2019.View at: Publisher Site | Google Scholar
Nomura Research Institute, “Social acceptance and impact of robots and artificial intelligence: findings of a survey in Japan, the U.S. and Germany,” NRI Papers, no. 211, pp. 1–17, 2017, https://www.nri.com/~/media/PDF/global/opinion/papers/2017/np2017211.pdf.View at: Google Scholar
K. A. Brink, K. Gray, and H. M. Wellman, “Creepiness creeps in: uncanny valley feelings are acquired in childhood,” Child Development, vol. 90, no. 4, pp. 1202–1214, 2019.View at: Publisher Site | Google Scholar
T. Kanda, T. Hirano, D. Eaton, and H. Ishiguro, “Interactive robots as social partners and peer tutors for children: a field trial,” Human–Computer Interaction, vol. 19, no. 1-2, pp. 61–84, 2004.View at: Publisher Site | Google Scholar
A. Peca, R. Simut, S. Pintea, C. Costescu, and B. Vanderborght, “How do typically developing children and children with autism perceive different social robots?” Computers in Human Behavior, vol. 41, pp. 268–277, 2014.View at: Publisher Site | Google Scholar
S. Woods, “Exploring the design space of robots: children’s perspectives,” Interacting with Computers, vol. 18, pp. 1390–1418, 2006.View at: Publisher Site | Google Scholar
C. Kenny and R. Wilson, Robots in social care, UK Parliament, 2018, https://post.parliament.uk/research-briefings/post-pn-0591/.
“Dataset of Eurobarometer survey on public attitudes towards robots. Special Eurobarometer 382. Retrieved from,” European Commission., 2012, https://digital-strategy.ec.europa.eu/en/library/dataset-eurobarometer-survey-public-attitudes-towards-robots.View at: Google Scholar
E. Broadbent, “Interactions with robots: the truths we reveal about ourselves,” Annual Review of Psychology, vol. 68, pp. 627–652, 2017.View at: Publisher Site | Google Scholar
A. Waytz and M. I. Norton, “Botsourcing and outsourcing: robot, british, chinese, and german workers are for thinking-not feeling-jobs,” Emotion, vol. 14, no. 2, pp. 434–444, 2014.View at: Publisher Site | Google Scholar
H. M. Robinson, B. A. MacDonald, N. Kerse, and E. Broadbent, “Suitability of healthcare robots for a dementia unit and suggested improvements,” Journal of the American Medical Directors Association, vol. 14, no. 1, pp. 34–40, 2012.View at: Publisher Site | Google Scholar
F. Basoeki, F. Libera, E. Menegatti, and M. Moro, “Robots in education: new trends and challenges from the Japanese market,” Themes in Science and Technology Education., vol. 6, no. 1, pp. 51–62, 2013.View at: Google Scholar
C. Ray, F. Mondada, and R. Siegwart, “What Do People Expect from Robots?” in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3816–3821, Nice, France, 2008.View at: Publisher Site | Google Scholar
K. Yoshikawa, M. Sumitani, Y. Matsumoto, and H. Ishiguro, “Android robot system to support medical and welfare fields,” The Transactions of Human Interface Society, vol. 14, no. 2, pp. 197–207, 2012.View at: Publisher Site | Google Scholar
K. F. MacDorman, S. K. Vasudevan, and C. C. Ho, “Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures,” AI and Society, vol. 23, no. 4, pp. 485–510, 2009.View at: Publisher Site | Google Scholar
T. Nomura, T. Kanda, T. Suzuki et al., “Implications on humanoid robots in pedagogical applications from cross-cultural analysis between Japan, Korea, and the USA,” in Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, pp. 1052–1057, Jeju, Korea (South), 2007.View at: Publisher Site | Google Scholar
A. Granulo, C. Fuchs, and S. Puntoni, “Preference for human (vs. robotic) labor is stronger in symbolic consumption contexts,” Journal of Consumer Psychology, vol. 31, no. 1, pp. 72–80, 2021.View at: Publisher Site | Google Scholar
A. Powers and S. Kiesler, “The advisor robot: tracing people’s mental model from a robot’s physical attributes,” in Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, pp. 218–225, Salt Lake City, USA, 2006.View at: Publisher Site | Google Scholar
K. Gray and D. M. Wegner, “Feeling robots and human zombies: mind perception and the uncanny valley,” Cognition, vol. 125, no. 1, pp. 125–130, 2012.View at: Publisher Site | Google Scholar
K. F. MacDorman and H. Ishiguro, “The uncanny advantage of using androids in cognitive and social science research,” Interaction Studies, vol. 7, no. 3, pp. 297–337, 2006.View at: Publisher Site | Google Scholar
M. Mori, “Bukimi no tani,” Energy, vol. 7, no. 4, pp. 33–35, 1970.View at: Google Scholar
M. Mori, K. F. MacDorman, and N. Kageki, “The uncanny valley,” IEEE Robotics & Automation Magazine, vol. 19, no. 2, pp. 98–100, 2012.View at: Publisher Site | Google Scholar
D. Cerqui and K. O. Arras, “Human beings and robots: towards a symbiosis? A 2000 people survey,” in International Conference on Socio Political Informatics and Cybernetics (PISTA’03), Orlando, USA, 2003.View at: Google Scholar
K. A. Brink and H. M. Wellman, “Robot teachers for children? Young children trust robots depending on their perceived accuracy and agency,” Developmental Psychology, vol. 56, no. 7, pp. 1268–1277, 2020.View at: Publisher Site | Google Scholar
C. Oranç and A. C. Küntay, “Children’s perception of social robots as a source of information across different domains of knowledge,” Cognitive Development, vol. 54, article 100875, 2020.View at: Publisher Site | Google Scholar
D. U. Martin, C. Perry, M. I. MacIntyre, L. Varcoe, S. Pedell, and J. Kaufman, “Investigating the nature of children’s altruism using a social humanoid robot,” Computers in Human Behavior, vol. 104, article 106149, 2020.View at: Publisher Site | Google Scholar
T. N. Beran, A. Ramirez-Serrano, O. G. Vanderkooi, and S. Kuhn, “Reducing children’s pain and distress towards flu vaccinations: a novel and effective application of humanoid robotics,” Vaccine, vol. 31, no. 25, pp. 2772–2777, 2013.View at: Publisher Site | Google Scholar
J. L. Jipson and S. A. Gelman, “Robots and rodents: children’s inferences about living and nonliving kinds,” Child Development, vol. 78, no. 6, pp. 1675–1688, 2007.View at: Publisher Site | Google Scholar
F. Manzi, G. Peretti, C. Di Dio et al., “A robot is not worth another: exploring children’s mental state attribution to different humanoid robots,” Frontiers in Psychology, vol. 11, 2020.View at: Publisher Site | Google Scholar
M. M. Saylor, M. Somanader, D. T. Levin, and K. Kawamura, “How do young children deal with hybrids of living and non-living things: the case of humanoid robots,” British Journal of Developmental Psychology, vol. 28, no. 4, pp. 835–851, 2010.View at: Publisher Site | Google Scholar
D. Bernstein and K. Crowley, “Searching for signs of intelligent life: an investigation of young children's beliefs about robot intelligence,” Journal of the Learning Sciences, vol. 17, no. 2, pp. 225–247, 2008.View at: Publisher Site | Google Scholar
P. H. Kahn, T. Kanda, H. Ishiguro et al., ““Robovie, you’ll have to go into the closet now”: children’s social and moral relationships with a humanoid robot,” Developmental Psychology, vol. 48, no. 2, pp. 303–314, 2012.View at: Publisher Site | Google Scholar
M. K. Nigam and D. Klahr, “If robots make choices, are they alive?: children’s judgments of the animacy of intelligent artifacts,” in Proceedings of the Annual Meeting of the Cognitive Science Society, Philadelpia, USA, 2000, https://escholarship.org/uc/item/6bw2h51d.View at: Google Scholar
D. J. Lewkowicz and A. A. Ghazanfar, “The development of the uncanny valley in infants,” Developmental Psychobiology, vol. 54, no. 2, pp. 124–132, 2012.View at: Publisher Site | Google Scholar
K. L. Ladd and D. N. McIntosh, “Meaning, God, and prayer: physical and metaphysical aspects of social support,” Mental Health, Religion & Culture, vol. 11, no. 1, pp. 23–38, 2008.View at: Publisher Site | Google Scholar
A. Quraishi, “Will the future of spirituality include artificial intelligence and virtual worship?” The Denver Channel, 2020, (accessed 2021/06/02), https://www.thedenverchannel.com/news/national-politics/the-race-2020/will-the-future-of-spirituality-include-artificial-intelligence-and-virtual-worship.View at: Google Scholar
J. Omura, Kitanomandokoro yukarinochini Android Kanon kaihatsuhi ichiokuen, Asahi Shimbun Digital, 2019, 2021/06/02, https://www.asahi.com/articles/ASM2R56Z8M2RPLZB00D.html.
K. Leslie, Press here to activate your robot priest. Cnet, 2017, https://www.cnet.com/news/robot-priest-blessu2-germany-anniversary-reformation/.
S. Shahid, E. Krahmer, and M. Swerts, “Child–robot interaction across cultures: how does playing a game with a social robot compare to playing a game alone or with a friend?” Computers in Human Behavior, vol. 40, pp. 86–100, 2014.View at: Publisher Site | Google Scholar
T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, and F. Tanaka, “Social robots for education: a review,” Science Robotics, vol. 3, no. 21, 2018.View at: Publisher Site | Google Scholar
V. J. Sunim, “Religious education of Buddhism and the fourth industrial revolution,” Buddhism and the fourth industrial revolution, pp. 245–255, 2019.View at: Google Scholar
J. K. Wight, “The battle for the robot soul,” Philosophy Now, vol. 139, pp. 16–19, 2020, https://philosophynow.org/issues/139/The_Battle_for_the_Robot_Soul.View at: Google Scholar
SankeiNews, Kodaiji ga android kannon wo kokai [Video], 2019, https://www.youtube.com/watch?v=KptQUZ6Vjj0.
E. Phillips, X. Zhao, D. Ullman, and B. F. Malle, “What is human-like? Decomposing robots’ human-like appearance using the anthropomorphic roBOT (ABOT) database,” in Proceedings of the 2018 ACM/IEEE international conference on human-robot interaction, pp. 105–113, Chicago, USA, 2018.View at: Publisher Site | Google Scholar
M. Noma, N. Saiwaki, S. Itakura, and H. Ishiguro, “Composition and evaluation of the humanlike motions of an android,” in 2006 6th IEEE-RAS International Conference on Humanoid Robots, pp. 163–168, Genova, Italy.View at: Publisher Site | Google Scholar
M. B. Mathur and D. B. Reichling, "Navigating a social world with robot partners: a quantitative cartography of the uncanny valley," Cognition, vol. 146, pp. 22–32, 2016.
M. C. Somanader, M. M. Saylor, and D. T. Levin, "Remote control and children's understanding of robots," Journal of Experimental Child Psychology, vol. 109, no. 2, pp. 239–247, 2011.
J. M. Kory-Westlund and C. Breazeal, "Assessing children's perceptions and acceptance of a social robot," in Proceedings of the 18th ACM International Conference on Interaction Design and Children, IDC 2019, pp. 38–50, Boise, USA, 2019.
K. Sommer, M. Nielsen, M. Draheim, J. Redshaw, E. J. Vanman, and M. Wilks, "Children's perceptions of the moral worth of live agents, robots, and inanimate objects," Journal of Experimental Child Psychology, vol. 187, p. 104656, 2019.
C. J. Clopper and E. S. Pearson, "The use of confidence or fiducial limits illustrated in the case of the binomial," Biometrika, vol. 26, no. 4, pp. 404–413, 1934.
M. Appel, S. Krause, U. Gleich, and M. Mara, "Meaning through fiction: science fiction and innovative technologies," Psychology of Aesthetics, Creativity, and the Arts, vol. 10, no. 4, pp. 472–480, 2016.
J. P. M. Vital, M. S. Couceiro, N. M. M. Rodrigues, C. M. Figueiredo, and N. M. F. Ferreira, "Fostering the NAO platform as an elderly care robot," in SeGAH 2013 - IEEE 2nd International Conference on Serious Games and Applications for Health, Vilamoura, Portugal, 2013.
S. Shamsuddin, H. Yussof, L. I. Ismail, S. Mohamed, F. A. Hanapiah, and N. I. Zahari, "Initial response in HRI: a case study on evaluation of child with autism spectrum disorders interacting with a humanoid robot NAO," Procedia Engineering, vol. 41, pp. 1448–1455, 2012.
D. Kelemen, "The scope of teleological thinking in preschool children," Cognition, vol. 70, no. 3, pp. 241–272, 1999.
F. W. Tung, "Child perception of humanoid robot appearance and behavior," International Journal of Human-Computer Interaction, vol. 32, no. 6, pp. 493–502, 2016.
K. Bumby and K. Dautenhahn, "Investigating children's attitudes towards robots: a case study," in Proceedings of CT99, The Third International Cognitive Technology Conference, pp. 391–410, San Francisco, USA, 1999.
S. Woods, K. Dautenhahn, and J. Schulz, "The design space of robots: investigating children's view," in Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Japan, 2004.
D. Löffler, J. Hurtienne, and I. Nord, "Blessing robot BlessU2: a discursive design study to understand the implications of social robots in religious contexts," International Journal of Social Robotics, vol. 13, pp. 569–586, 2019.
X. Wang and E. G. Krumhuber, "Mind perception of robots varies with their economic versus social function," Frontiers in Psychology, vol. 9, pp. 1–10, 2018.
R. N. Geraci, "Spiritual robots: religion and our scientific view of the natural world," Theology and Science, vol. 4, no. 3, pp. 229–246, 2006.