Abstract

Social attributes of intelligent robots are important for human-robot systems. This paper investigates the influence of robot autonomy (i.e., high versus low) and group orientation (i.e., ingroup versus outgroup) on the human decision-making process. We conducted a laboratory experiment with 48 college students and tested the hypotheses with MANCOVA. We find that a robot with high autonomy has greater influence on human decisions than a robot with low autonomy. No significant effect is found for group orientation or for the interaction between group orientation and autonomy level. The results provide implications for social robot design.

1. Introduction

Robots play an increasingly important role in our daily lives. More and more robots are moving out of laboratories into everyday life, providing services or decision support for human beings. Recent research exploring the influence of a robot's attributes on human decision-making has shed some light on designing social robots, from their physical attributes to the organization of the human-robot team (e.g., [13]). The current research focuses on two of the less explored but important factors in the human-robot decision-making process: robot autonomy and group orientation.

Current technology allows for different levels of robotic autonomy, which describes to what degree a robot can act on its own accord. Determining a proper level of autonomy can benefit the interaction between a human and a robot. Autonomy has been studied comprehensively in the design of industrial robots (e.g., [3, 4]). However, determining an appropriate autonomy level for social robots that interact closely with humans remains an open problem.

Additionally, as the social robot is increasingly endowed with human-like attributes (e.g., voice, appearance, and motion), it is necessary to define its social identity, as we generally do with human beings. The group orientation of a robot toward its human partner is one of these essential identities, especially in a collaborative decision-making process. Prior research on the human decision-making process has revealed that humans tend to have contrasting attitudes toward ingroup and outgroup members [5]. Similarly, when a social robot is perceived as an ingroup member by the interacting human, it may receive different evaluations and exert a different level of influence on human decisions compared with a robot that is perceived as an outgroup member.

Therefore, this study investigates the influences of a social robot's autonomy level and group orientation on the human decision-making process and on humans' subjective attitudes toward the robot. The findings can be leveraged to design a proper autonomy level and group orientation for a social robot, facilitating the interaction process and maximizing the benefits of using social robots in daily life, especially in decision-support scenarios.

2. Literature Review

Prior research in the field of human-robot interaction highlights that a careful design of robots' attributes, including physical, behavioral, and linguistic attributes as well as control systems, plays a crucial role in facilitating communication and cooperation between humans and robots [1, 2, 6–8]. Moreover, the relationship between humans and robots also influences their interaction processes [9].

Goodrich and Schultz [10] classify the influence of robot attributes on human-robot collaboration into five categories: (1) level and behavior of autonomy, which consists of mapping inputs from the environment into actuator movements, representational schemas, or speech acts; (2) nature of information exchange, meaning the manner in which information is exchanged between the human and the robot, including the communications medium and the format of the communications; (3) structure of the human-robot team with different roles of robot(s) and person(s); (4) adaptation, learning, and training, as regards the learning and training schemes of artificial intelligence; and (5) shape of the task, which emphasizes the change in the manner of completing a task when new technology is implemented.

Among the five categories, we investigate two in this study, namely, the robot's autonomy level and its group orientation, which defines the structure of the human-robot team. We review related work in the following sections. In addition, as humans' subjective attitudes toward their robot partners are important indicators of the collaboration process, we also review studies on subjective attitudes at the end of this section.

2.1. Robot Autonomy Level

Autonomy is an essential design feature of a robot. Among the numerous definitions of autonomy, the notion of level of autonomy (LOA) is one of the most human-centered ones. LOA describes the degree to which a robot can act on its own accord [11]. Adapted from the autonomy scale used in human-computer interaction by Sheridan and Verplank [11], a scale describing levels of autonomy in human-robot interaction can be obtained by replacing “computer” with “robot” as follows.
(1) Robot offers no assistance; human does it all.
(2) Robot offers a complete set of action alternatives.
(3) Robot narrows the selection down to a few choices.
(4) Robot suggests a single action.
(5) Robot executes that action if human approves.
(6) Robot allows the human a limited time to veto before automatic execution.
(7) Robot executes automatically, then necessarily informs the human.
(8) Robot informs human after automatic execution only if human asks.
(9) Robot informs human after automatic execution only if it decides to.
(10) Robot decides everything and acts autonomously, ignoring the human.
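
As an illustration only, this ten-level scale could be encoded as a simple enumeration when implementing an adjustable-autonomy controller. The following Python sketch is not part of the scale itself, and the shorthand names are ours:

from enum import IntEnum

class LevelOfAutonomy(IntEnum):
    # Shorthand labels are ours; the numbering follows the ten-level scale above.
    NO_ASSISTANCE = 1          # human does it all
    OFFERS_ALTERNATIVES = 2    # robot offers a complete set of alternatives
    NARROWS_CHOICES = 3        # robot narrows the selection to a few choices
    SUGGESTS_ONE = 4           # robot suggests a single action
    EXECUTES_IF_APPROVED = 5   # robot executes if the human approves
    VETO_WINDOW = 6            # limited time to veto before automatic execution
    EXECUTES_THEN_INFORMS = 7  # executes automatically, then informs the human
    INFORMS_IF_ASKED = 8       # informs after execution only if asked
    INFORMS_IF_IT_DECIDES = 9  # informs after execution only if it decides to
    FULLY_AUTONOMOUS = 10      # robot decides everything, ignoring the human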

Kaupp and Makarenko [3] investigate how the level of robot autonomy influences human-robot team effectiveness. In their study, maximizing robot performance (low autonomy) and minimizing the amount of human input (high autonomy) are traded off in a human-robot communication system. Sellner et al.'s [12] study on the level of robot autonomy reveals the importance of adjustable autonomy in multiagent domains, where remote human operators have the flexibility to join or leave a human-robot team. Their results support the claim that incorporating a remote human operator in multiagent teams can increase the robustness and efficiency of the team. Nevertheless, when the focus shifts to social robots, the generalizability of these results becomes questionable, because both studies target maximizing the robots' efficiency and minimizing human input. These targets are not suitable for a social robot, for which adequate interaction between the human and the robot is necessary and expected. Thus, studies on the effects of a social robot's autonomy on human-robot interaction processes and outcomes are needed.

2.2. Group Orientation

Group orientation can be viewed as a component of team structure; a group can be classified as an ingroup or an outgroup. Ingroups can be defined as groups with which we are taught to associate [13]. We are concerned about the welfare of ingroup members; we wish to cooperate with them without demanding equitable returns; and separation from them leads to discomfort or even pain. Outgroups can be defined as groups of people with whom we are not taught to associate: we are not concerned about their welfare, and we require an equitable return in order to cooperate [14].

Since the 1970s, the impact of ingroup and outgroup orientation on human perceptions of and attitudes toward other group members has been extensively investigated. Tajfel and his colleagues [5] conducted a series of studies assessing the effects of social categorization on intergroup behavior, and their results strongly support that participants show favoritism toward ingroup members and discrimination against outgroup members. These contrasting attitudes toward ingroup and outgroup members have been shown repeatedly in later studies. Some researchers suggest that ingroup favoritism and discrimination against outgroup members originate from human nature rather than from conflicts of interest that partially or wholly come from the social environment. In contrast, other researchers argue that ingroup favoritism occurs only when participants expect other ingroup members to reciprocate the favor they receive [15]. Adding to this discussion, one recent study indicates that outgroup discrimination may be traced to higher expectations of ingroup members rather than to hostility toward outgroup members [16]. All in all, prior studies suggest that perceived group orientation influences human perception and behavior.

When the focus comes to group cooperation, extant studies show that a shared group orientation within a group can increase cooperation with ingroup members by generating favoritism toward them [17–19]. This effect can be observed even in a minimal group situation. For example, in Tajfel et al.'s [5] experiment, participants were arbitrarily grouped by meaningless criteria, such as preference for a certain painting or the color of the participants' shirts. Even such a distinction triggered the ingroup and outgroup orientation: participants treated an ingroup member more favorably than they treated an outgroup member. In addition, people who feel strongly connected to the ingroup are more likely to be influenced by other ingroup members' opinions [20].

However, no evidence has been found that humans also form an ingroup or outgroup orientation toward a robot in human-robot interaction. Evers et al. [9] suggest that the most recognizable cues that prompt an ingroup orientation in interpersonal interaction (e.g., sharing a long history or successful experiences) lack validity in human-robot interaction. Similarly, Wang et al. [21] report a failure to manipulate group orientation in human-robot interaction when visual signs were used to differentiate an ingroup from an outgroup. Nevertheless, Lin et al. [8] find that once participants perceive a robot as an ingroup member, they perceive it as more trustworthy and credible and are more likely to accept its recommendations. This suggests that ingroup favoritism can exist between humans and robots, but humans might form a group orientation toward a robot differently than toward a human partner. We explore which attributes can prompt humans to form an ingroup orientation toward a robot in a laboratory experiment.

2.3. Human Subjective Attitudes toward a Robot

The human-robot collaboration process can be evaluated both objectively and subjectively, according to cognitive engineering theories in automation [27]. Humans' subjective attitudes toward a robot are usually evaluated with different factors depending on the task scenario. We summarize the most commonly used subjective attitudes in human-robot interaction and their measurements in Table 1: trust, credibility, and workload.

Trust is an important factor in human-robot collaboration. Human trust in a robot's autonomous decision capabilities is considered a major issue that significantly influences the effectiveness of human-robot collaboration, especially the willingness to share tasks and information and to engage in supportive behavior [28]. The level of human trust in a robot largely depends on the human's observation of the robot's characteristics, such as its performance, reliability, and manner of reaching the goal [29, 30].

Credibility concerns the quality of feedback from the system. Information from a credible source is more likely to be believed, internalized, and incorporated into the receiver's beliefs. Thus, a credible source is considered more persuasive, and its influence is more likely to lead to attitude change [31]. The robot must be seen as presenting correct information to the user, whether outside information (i.e., something it is programmed to have knowledge about) or data about the user or their interactions with the robot (e.g., health data the system has observed over time). This measure has been found to be reliable and can be used to measure aspects of trust in human-robot interaction [32].

Assessment of human perceptions of cognitive workload has been widely used in automation and user interface design [27]. For example, one of the most prevalent scales for measuring human performance and workload, namely, the NASA Task Load Index, has been extensively used in teleoperation scenarios. The general results indicate that when system autonomy increases, subjective ratings of workload tend to decrease, and that shorter task times lead to lower workload ratings [33].

3. Research Framework and Hypotheses

This paper focuses on the effects of robot autonomy and group orientation on the human decision-making process. The two factors considered were the robot's designed autonomy level (i.e., high or low) and its group orientation relative to the interacting human (i.e., ingroup or outgroup). The dependent variables were the robot's influence on the final decision, its credibility, the user's trust in the robot, and user workload.

Hypothesis 1a. A robot with higher autonomy exerts more influence on participants’ decision-making.

Hypothesis 1b. A robot with higher autonomy is perceived as more trustworthy and this leads to lower workload for participants.

Hypothesis 1 concerns the effects of robot autonomy on human decision-making and subjective attitudes. A robot with a high level of autonomy behaves actively and requires little human input; therefore, it may alleviate humans' workload and improve their trust. As a result, a highly autonomous robot, which is trusted by humans and requires little human interference, may have a high influence on human decision-making.

Hypothesis 2a. An ingroup robot exerts more influence on participants’ decision-making than an outgroup robot.

Hypothesis 2b. An ingroup robot is considered more credible and leads to lower workload for participants than an outgroup robot.

Hypothesis 2 concerns the influence of group orientation on human decision-making and subjective attitudes. According to the rich findings from research on human-human communication, outgroups are usually viewed with suspicion and are expected to discriminate against the ingroup. When the robot appears as an ingroup member, participants may subconsciously assign positive attributes to the agent and tend to expect it to demonstrate favorable actions. Such evaluations and expectations may lead to a higher perception of credibility. When the robot appears as an outgroup member, participants may need more effort to process the information it provides before accepting its recommendations, which may lead to higher workload. Consequently, an ingroup robot, which is perceived as more credible and requires less human effort, may have a higher influence on human decision-making.

Hypothesis 3. The autonomy of an ingroup robot has a greater influence on participants' decision-making than that of an outgroup robot.

In Hypothesis 3, ingroup favoritism and hostility against outgroup members may interact with the robot's level of autonomy and thereby influence human decision-making. Given ingroup favoritism and outgroup hostility, an ingroup robot may be expected to be more active and autonomous than an outgroup robot. When interacting with an ingroup robot, people are more likely to accept recommendations from a highly autonomous robot than from a less autonomous one; when the robot is an outgroup member, this effect may not be as pronounced.

4. Methodology

4.1. Task and Participants

This study examines how a robot's autonomy and group orientation influence human decision-making, as well as humans' perception of and reaction to the robot. A laboratory experiment was developed to test the hypotheses.

In the experiment, a participant and a robot formed a team to complete a sea survival task based on the US Army Survival Manual [34]. The scenario was described as follows: the participant and the robot chartered a yacht for a holiday trip across the Atlantic Ocean; unfortunately, in mid-Atlantic, a fire destroyed the ship, and the team had to escape to a life raft. The participants needed to make a series of decisions: selecting six items out of twelve to carry, how to set up the sail, where to drop the anchor, how to drive a shark away, and where to land on the island. In total, participants made ten decisions in the task. Before the robot was present, participants made initial decisions based on their own experience; during the experiment, the robot gave its recommendations, and the participants formed final decisions.

The robot gave recommendations at either a low or a high level of autonomy. At the low level of autonomy, the robot gave recommendations reactively: it offered its recommendation only after the participant had made a decision, and the participant could then change his/her decision based on the robot's suggestion. For example, when the scenario showed that the sailing yacht encountered a shark, the system asked the participant to decide between “stay quiet and wait until the shark leaves” and “sound the alarm to scare the shark away”; after the participant made the decision, the robot suggested that sounding the alarm might be the better choice; the participant could then change or keep his/her former decision; finally, the robot acted according to the participant's decision. At the high level of autonomy, the robot gave recommendations proactively: it offered its recommendation before the participant carried out any action, and the participant only had the right to veto the robot's decision. After the system displayed the two choices in the shark-encounter scenario, the high-autonomy robot directly suggested sounding the alarm, and the participant only needed to accept or reject the robot's suggestion.
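
The difference between the two autonomy conditions can be summarized in a minimal control-flow sketch. This is Python-style pseudocode under our own naming; the actual experimental program was implemented in VBA, and what happens after a veto in the high-autonomy condition is our assumption, not specified in the text:

def low_autonomy_decision(options, robot_suggestion, ask_user):
    # Reactive robot: the participant decides first, then hears the suggestion
    # and may keep or change the decision; the robot acts on the final choice.
    initial = ask_user("Choose an option:", options)
    suggestion = robot_suggestion(options)
    final = ask_user(f"The robot suggests '{suggestion}'. Keep or change your choice?", options)
    return initial, final

def high_autonomy_decision(options, robot_suggestion, ask_user):
    # Proactive robot: it recommends before the participant acts;
    # the participant may only accept or veto the recommendation.
    suggestion = robot_suggestion(options)
    verdict = ask_user(f"The robot suggests '{suggestion}'. Accept or veto?", ["accept", "veto"])
    # Fallback after a veto is our assumption: the participant chooses instead.
    final = suggestion if verdict == "accept" else ask_user("Choose an option:", options)
    return suggestion, final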

The robot was characterized either as an ingroup member or as an outgroup member. In the ingroup setting, the robot asked which school the participant was from and introduced itself as a student from the participant's university; a school badge was also attached to the robot's body. In the outgroup setting, the robot asked about the participant's school and introduced itself as a student from another university; no school badge was attached. The task was presented and manipulated through an interactive computer program designed for the experiment.

The number of participants was determined by a power analysis [35], using data from a relevant study [1] to estimate the population deviation. The expected statistical power was set to be greater than 0.76 with a significance level of 0.05, and the calculated sample size was 48 (effect size Cohen's ). We therefore recruited 48 participants (31 males and 17 females) with an age range of 18 to 27 years ( , ). All participants were undergraduate or graduate students at Tsinghua University in Beijing, recruited through postings on the campus BBS and personal contacts. Students majoring in automation or artificial intelligence were excluded from the candidate pool, and no participant had prior experience with the robot used in this experiment. Participants were randomly assigned to one of the four treatment conditions upon arrival. A summary of the participants' profiles is reported in Table 2.
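
For readers who wish to reproduce this kind of sample-size calculation, a minimal sketch with statsmodels is given below. The effect size is a placeholder because the Cohen's f value from the paper is not reproduced in this text, so the printed number will not necessarily match the reported sample of 48:

from statsmodels.stats.power import FTestAnovaPower

# Placeholder effect size (Cohen's f); for illustration only.
effect_size = 0.5

analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                               power=0.76, k_groups=4)
print(f"required total sample size: {n_total:.1f}")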

As shown in Table 2, the participants were mainly young people with university education and little experience with or knowledge of robotics and the experimental task. Prior studies indicated that gender plays a role in human-robot interaction [40] and that prior experience with and knowledge of robotics influence users' behavior toward and perception of robots [41]. We therefore took participants' gender and their prior knowledge of robotics and sea sailing as covariates in the data analysis.

4.2. Design of Experiment

A 2 × 2 between-subject design with robot autonomy (high versus low) and group orientation (ingroup versus outgroup) as factors was used. Forty-eight participants were randomly assigned to the four groups (Table 3). Prior experience with a robot has been shown to significantly influence participants' attitudes toward it [41]. Since the same robot was used for all four treatments, the between-subject design avoids learning effects and keeps the interaction experience comparable across treatments.

4.3. Measurements

One behavioral dependent variable was the robot's influence on decisions, measured by the differences between participants' self-made decisions and their decisions under the robot's influence. We asked participants to make independent individual decisions before the experimental task. Whenever a participant changed his/her initial decision to comply with the robot's suggestion, we counted it as one decision change under the robot's influence. All decision changes were summed and used as an indicator of the extent to which the participant was influenced by the robot. Both the initial and the final decisions were recorded by the computer and compared in the data analysis.
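
As a minimal illustration of this count (the variable names are ours, not those of the experimental program), the behavioral measure could be computed as follows:

def count_robot_influence(initial_decisions, final_decisions, robot_suggestions):
    # One "decision change under the robot's influence" is counted whenever the
    # final decision differs from the initial one and matches the robot's suggestion.
    return sum(
        1
        for initial, final, suggestion in zip(initial_decisions, final_decisions, robot_suggestions)
        if final != initial and final == suggestion
    )

# Example with three of the ten decisions, encoded as option labels:
# count_robot_influence(["A", "B", "A"], ["A", "C", "B"], ["A", "C", "A"])  -> 1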

The other three dependent variables (trust in the robot, robot credibility, and user workload) were measured through self-report scales. Trust in this study means that the user has faith in the future ability of the system to perform even in situations in which it is untried. The trust scale was adapted from Madsen and Gregor's [36] five-item questionnaire (reported ). We adopted this scale by changing the term “system” to “robot” (Table 4). It measures the level of trust participants have in the robot's undemonstrated skills. Participants rated the items on a 7-point Likert scale from 1 (strongly disagree) to 7 (strongly agree).

Credibility is a perceived quality composed of multiple dimensions. It was assessed using McCroskey and Young's [25] source credibility scale (reported ), which contains 12 items. Each item presents two antonyms for participants to choose between on a 7-point scale, such as “honest versus dishonest” and “trained versus untrained” (Table 4). The 12 items measure two dimensions of credibility: trustworthiness, the perceived goodness or morality of the source, and expertise, the perceived knowledge and skill of the source.

Workload can be defined as a theoretical construct of the cost incurred by an operator to achieve a certain level of performance [26]. NASA Ames Research Center developed the NASA Task Load Index, a rating procedure to evaluate the overall workload in a task. It contains six subscales: Mental Demands, Physical Demands, Temporal Demands, Own Performance, Effort, and Frustration [26]. As indicated by Xiao [37], the item “Performance” has low discrimination and a small correlation with total workload, which suggests that it should be deleted from the workload scale; the reported Cronbach's α increased to 0.790 when the item was deleted. We used the resulting five-item scale to measure participants' workload in the task (Table 4).

Moreover, negative attitudes toward robots and individual (versus group) self-representation were measured as two control variables (covariates) because they significantly predicted outcomes in several models.

The Negative Attitudes toward Robots Scale (NARS) was developed to measure people's attitudes toward social robots [38]. It consists of three subscales: (S1) “negative attitude toward situations of interaction with robots” (3 items); (S2) “negative attitude toward social influence of robots” (4 items); and (S3) “negative attitude toward emotions in interaction with robots” (2 items). All three subscales were included in the pretask questionnaire to measure participants' attitudes toward robots in general (Table 4). Each item was rated on a 7-point Likert scale (1: strongly disagree; 7: strongly agree). The reported Cronbach's α values were 0.738, 0.732, and 0.657 for the three subscales, respectively, indicating acceptable reliability.

Self-representation, which determines whether people define the self through aspects that differentiate them from others (i.e., the individual self) or through aspects shared with others (i.e., the group self), has been recognized as an influential factor in predicting decision-making style and interpersonal relationships [42]. Studies in human-robot interaction have revealed how people's self-definition influences their acceptance of and relationship with a robot as a recommendation provider [8, 9]. We adopted the scales of individual self-representation (IR, 3 items, ) and group self-representation (GS, 4 items, ) in the questionnaire.

The items of the five subjective measurements are shown in Table 4. All measurements were translated into the participants' native language, Chinese. A translation and back-translation process was conducted to ensure scale validity for this cross-cultural use.

4.4. Apparatus
4.4.1. Robot

A remote-controlled mobile robot was used in the experiment (see Figure 1).

The four main design features of the robot were its appearance, spoken content, voice, and movement.

Firstly, previous studies indicated that users form a mental model of a robot's capability, character, and social role by observing its appearance [2, 43]. To avoid bias introduced by the robot's appearance, the robot was designed to be as neutral as possible, with a recognizable head, legs, and body. The robot was about 1.2 meters tall. In the ingroup condition, the school logo of Tsinghua University was attached to the body of the robot, while in the outgroup condition, no school logo was attached.

Secondly, in the task, the robot expressed its opinions through speech. Its spoken content included greetings, task introduction, opinions on decisions, and transitional words. The sequence and content of its speech were predetermined according to the treatment. For example, in the ingroup situation, the robot introduced itself as a student from Tsinghua University, while in the outgroup situation, it introduced itself as a student from another university. The robot's opinions were designed to have very high accuracy, to simulate real scenarios in which a robot acts as a decision supporter and provides expert opinions for decision-makers. However, if the robot had been introduced as an expert, there was a high risk that participants would overwhelmingly believe its opinions and the effect of the treatments would be minor. Thus, the robot was introduced as an ordinary student with half a year of sailing experience. In the task description, we highlighted that “the robot may not be an expert in sailing, but it will give you some information or suggestions based on its own understanding. You can choose to accept or reject its suggestions.” In this way, we expected the robot's perceived sailing expertise to be held constant across participants, while its perceived trustworthiness and its influence on decision-making would vary by treatment.

Thirdly, since we were not interested in analyzing the influence of the robot's gender, the robot's voice was a male voice for all participants to minimize bias from perceived gender. The Chinese voice was generated by the Neospeech TTS App, and VW Liang's voice was chosen as the male voice. The generated speech had a high-quality, natural timbre; it was produced at 16 kHz, 16-bit, and delivered at a normal speech rate.

Fourthly, the robot's sound and movement were remotely controlled. The head of the robot was a sound box connected to a computer through a Bluetooth link. When a participant carried out a certain operation, the Bluetooth adapter on the computer sent a signal to the sound box, and the robot “spoke” in response to the participant's operation. The base of the robot consisted of four wheels, which enabled it to move; a remote-control receiver embedded in the robot's body drove the wheels. The robot's moving speed was kept as constant and stable as possible for all participants.

4.4.2. Operational Program

To create a scenario for decision-making, a computer program was developed to present the task and collect data. The program had two functions. Firstly, it presented the pretask questionnaires and collected participants' initial decisions before the robot was present, and after the experimental task it presented the posttask questionnaires and collected the data. Secondly, it presented the “survival on the sea” scenario, activated the robot's speech, and recorded participants' decisions in response to the robot's recommendations.

The questionnaires were developed in Microsoft Access, and the scenario was programmed in Visual Basic for Applications (VBA) in Microsoft PowerPoint. A Bluetooth adapter was connected to the operational computer and controlled the sound box on the robot.

4.5. Procedure

The study took place in the Usability Lab of the Department of Industrial Engineering at Tsinghua University. The complete experiment was finished within five days. The experiment for each participant lasted about 40 minutes, and every participant took part individually.

The layout of the lab and a scene from the experiment are shown in Figure 2. Two experimenters were involved: an observer and an operator. The observer viewed the participants' behaviors through a camera on top of the computer used to present the questionnaires and the task. The operator stayed in another room and monitored the experiment through three cameras on the ceiling of the lab (cameras 1–3 in Figure 2). He remotely controlled the robot so that it appeared at the specified time points. The same operator controlled the robot for all participants to avoid bias introduced by different operators.

The robot had two staging positions (points [C] and [E] in Figure 2). Position [C] was where the robot waited before the task began; position [E] was where the robot interacted with the participant during the task. A screen was used to keep the robot out of the participant's field of view, so all participants met the robot only when the task began, and their behaviors at this first meeting were captured and analyzed later.

The following describes the process of the experiment.
(1) Upon arrival, the participant was led to Chair [A], where she/he signed the informed consent and filled out the personal information and pretask questionnaire on the computer.
(2) When the experimental task began, the operator remotely moved the robot smoothly from point [C] to point [E]. Meanwhile, the observer sitting in Chair [B] recorded the participant's reactions through camera [4]. The robot greeted the participant when it moved out from behind the screen and oriented its body toward the participant at position [D]. When standing at position [E], the robot faced the participant.
(3) The participant interacted and worked collaboratively with the robot in the task. The robot provided its opinions and the participant operated the computer. At the same time, the participant's behaviors were observed and recorded by the observer.
(4) After the task, the participant completed a posttask questionnaire.
(5) Finally, the observer conducted a three-minute interview in which the participant was asked about his/her feelings and concerns during the experiment. The information collected in the interview was used to verify the construct validity of the questionnaires.

5. Results

To verify the validity of the experiment, we first checked the manipulation of the two independent variables with two questions in the posttask questionnaire. The manipulation of autonomy was checked with a question asking to what extent participants thought the robot was controlling the task (7-point scale: 1 = lowest autonomy, 7 = highest autonomy). Group orientation was checked using Aron et al.'s [44] Inclusion of Other in the Self Scale, which consists of six graphic depictions of the relationship between the participant and the robot (1: largest distance between the two agents, 6: smallest distance). Judging from the histograms, the data of the two measurements were not normally distributed; therefore, nonparametric tests (Mann-Whitney U tests) were used to test the main effects of LOA and group orientation. The results showed that the robot was perceived with a higher level of autonomy in the high LOA condition (after a logarithmic transformation, Mean = 0.466, SD = 0.249) than in the low LOA condition (Mean = 0.563, SD = 0.213; , ). Participants in the ingroup condition (Mean = 4.080, SD = 0.133) perceived the robot to have a closer relationship with them than participants in the outgroup condition (Mean = 3.752, SD = 0.138; , ). Considering the relatively small sample size, a significance level of 0.1 was deemed acceptable for the manipulation check. Thus, we conclude that the manipulation achieved its intended purpose.
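
The manipulation check can be reproduced with a standard nonparametric test. A minimal sketch follows; the ratings below are placeholders, not the experimental data, which would be the 7-point perceived-autonomy ratings from the two LOA conditions:

from scipy.stats import mannwhitneyu

# Placeholder ratings for illustration only.
high_loa = [6, 5, 7, 6, 5, 6, 7, 5]
low_loa = [4, 3, 5, 4, 4, 3, 5, 4]

u_stat, p_value = mannwhitneyu(high_loa, low_loa, alternative="two-sided")
print(u_stat, p_value)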

We then tested the reliabilities of the scale measurements. Nunnally and Bernstein [45] indicated that 0.70 is an acceptable level of reliability, but lower thresholds are sometimes used in the literature, especially when participants' responses are influenced by the task and when the measurements are not intrinsic attributes. The measurements in the current research had Cronbach's α values ranging from 0.455 to 0.876. Specifically, Cronbach's α was 0.704 for the trust scale, 0.876 for the credibility scale, and 0.673 for the NASA workload scale. The three NARS subscales, “interaction,” “social impact,” and “negative emotion,” had Cronbach's α values of 0.679, 0.621, and 0.455, respectively. Cronbach's α of the two self-representation subscales, “group self-representation” and “individual self-representation,” was 0.652 and 0.516, respectively. As a result, the internal consistencies of two subscales, “negative emotion” and “individual self-representation,” were too low, and these subscales were excluded from further analysis.
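
The reliability coefficients above can be computed with the standard Cronbach's α formula. A minimal sketch (our own implementation, with rows as respondents and columns as the items of one scale) is:

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: 2-D array, rows = respondents, columns = items of one scale.
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example: five trust items rated by four hypothetical respondents.
# cronbach_alpha([[5, 6, 5, 6, 5], [3, 4, 3, 3, 4], [6, 6, 7, 6, 6], [4, 4, 5, 4, 4]])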

To test our hypotheses, we used multivariate analysis of covariance (MANCOVA) to determine the effects of the independent variables on multiple dependent variables. MANCOVA protects against Type I errors that might occur if multiple ANCOVAs were conducted independently [46]. Robot autonomy (high versus low) and group orientation (ingroup versus outgroup) were the independent variables. The dependent variables were the robot's influence on decisions, trust, credibility, and workload. Negative attitudes toward the robot and group self-representation were included in the model as covariates.
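
As a sketch of this kind of model, the analysis could be set up in statsmodels as shown below. The column names and the synthetic data are ours, used only to make the example self-contained; they are not the experimental data set:

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic placeholder data mirroring the design described above
# (48 participants, 2 x 2 between-subject factors, two continuous covariates).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "autonomy": np.repeat(["high", "low"], 24),
    "orientation": np.tile(np.repeat(["ingroup", "outgroup"], 12), 2),
    "nars_social": rng.normal(4, 1, 48),                 # NARS "social impact" covariate
    "group_self": rng.normal(4, 1, 48),                  # group self-representation covariate
    "influence": rng.integers(0, 11, 48).astype(float),  # decision changes (0-10)
    "trust": rng.normal(4, 1, 48),
    "credibility": rng.normal(4, 1, 48),
    "workload": rng.normal(4, 1, 48),
})

# Dependent variables on the left; factors, their interaction, and covariates on the right.
model = MANOVA.from_formula(
    "influence + trust + credibility + workload ~ "
    "C(autonomy) * C(orientation) + nars_social + group_self",
    data=df,
)
print(model.mv_test())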

In Hypothesis 1, we argued that a robot with a higher level of autonomy, compared with a less autonomous robot, has more influence on participants' decision-making, receives more trust, and reduces participants' workload in the task. The statistical results are shown in Table 5.

The results indicated that the robot's influence on participants' decisions was significantly greater when it gave recommendations at the high level of autonomy than at the low level. The difference in participants' trust in the robot between the high and low levels of autonomy was marginally significant: people trusted the less autonomous robot more than the highly autonomous one. No significant difference was found in workload.

In Hypothesis 2, we predicted that, when the robot is considered an ingroup member rather than an outgroup member, it exerts more influence on participants' decision-making, seems more credible, and reduces participants' workload. The statistical results are shown in Table 6.

No significant effects were found on the measurements except for workload, which was marginally significant: when the robot was characterized as an ingroup member, participants had slightly lower mental workload in the task. Although ingroup favoritism was not reflected in the credibility scale, we tentatively conclude that participants felt more relaxed with an ingroup robot (i.e., one from the same university). It is possible that group orientation did not strongly influence users' perception of the robot's expertise and capability but did affect their mental workload.

For Hypothesis 3, we predicted that autonomy has a greater influence on participants' decision-making when the robot is considered an ingroup member rather than an outgroup member. The hypothesis was supported with marginal significance ( , ). The results (see Figure 3) showed that for both the ingroup robot and the outgroup robot, the high level of autonomy significantly increased the robot's influence on decision-making, and this increase was greater when the robot was an ingroup member than when it was an outgroup member. It seems that participants were more sensitive to the change in level of autonomy when the robot was an ingroup member.

Two subscales of NARS were included as covariates in the MANCOVA model. They evaluated negative attitudes toward “interaction” with robots and toward the “social impact” of robots, respectively. The results indicated that negative attitudes toward the “social impact” of the robot significantly influenced participants' mental workload in the task ( , ). The two variables were positively correlated in a follow-up Pearson correlation analysis ( , ), meaning that negative attitudes toward a robot's social impact can increase people's workload when interacting with it.

The other covariate was group self-representation. The MANCOVA results showed a significant effect of group self-representation on credibility ( , ). The two variables were positively correlated ( , ), indicating that people with a more group-oriented self-representation perceived the robot as more credible.

6. Discussion

Our study demonstrated the effects of a social robot's autonomy level on the human decision-making process. Building on previous research on autonomy-level design for industrial robots (e.g., [3, 4]), we experimentally manipulated a social robot's autonomy level. We found strong evidence that a highly autonomous robot has more influence on human decisions than a less autonomous robot. This provides an initial reference for social robot designers when they face the problem of autonomy.

A surprising result was that people tended to trust the less autonomous robot more than the highly autonomous one, although this difference was only marginally significant at the 0.05 level. The trust scale measured the user's faith in and prediction of the undemonstrated capability of the robot. At the high level of autonomy in this study, the robot gave its opinions autonomously and fewer interactions between the robot and the participant were required. From the participant's point of view, the reduced sense of control over the robot may lead to uncertainty about its undemonstrated capability and, consequently, to lower trust in its future capability. Furthermore, automation reliability is an important determinant of human trust in automation systems [30]. The lack of demonstration of the robot's reliability may reduce users' trust more in the high-autonomy situation than in the low-autonomy situation. More studies are encouraged to further understand the effect of robot autonomy on users' trust.

The results also showed a marginally significant trend that an ingroup robot may reduce participants' workload more than an outgroup robot. Previous studies in interpersonal communication indicated that group membership increases positive evaluations of ingroup members: ingroup members are believed to behave more fairly [47] and to be more trustworthy and cooperative [17]. Foddy et al. [48] reported that such positive evaluations of ingroup members arise from participants' expectation of favorable treatment from ingroup counterparts rather than from the perception of positive qualities of the ingroup members. In our study, the lower mental workload with an ingroup robot could be explained by participants' expectation of favorable treatment from the ingroup robot: they might consider the ingroup robot more supportive and better intentioned than the outgroup robot. Such expectations might reduce their mental tension and monitoring behaviors, resulting in lower mental workload in the task.

Regarding the interaction between autonomy and group orientation, the study found that the high level of autonomy significantly increased the robot's influence on decision-making, and that this increase was marginally greater when the robot was an ingroup member than when it was an outgroup member. In the task, the robot was first characterized as either an ingroup or an outgroup member and then demonstrated proactive or reactive behaviors. It is possible that the differentiation between ingroup and outgroup shaped participants' expectations of the robot. Participants might expect an ingroup member to take part proactively in the decision-making process and provide decision support, so they were most likely to accept the ingroup robot's proactive recommendations among the four conditions. When the ingroup robot behaved passively and reactively, it may have failed to meet participants' expectations, and participants were least willing to accept its recommendations. For the outgroup robot, although participants were more influenced when it behaved actively than when it behaved passively, the difference was not as large as for the ingroup robot.

Even though our study offers new insights into human interaction with social robots of different levels of autonomy and group orientation, some questions remain unanswered, and additional studies are required to overcome the limitations of the current research. Firstly, the results, except for the autonomy effect on decision change, were only marginally significant at the 0.05 level. These marginal results reveal the complexity of the effects and possible interactive influences from other factors, especially unexamined individual factors. Further studies on robot autonomy and group orientation should address this issue and explore other influencing factors. Secondly, the current study involved only one robot and one person in the human-robot system; it would be interesting to analyze other compositions of human-robot teams in social interactions. Thirdly, group orientation has been found to carry different meanings in different cultures; for example, Chinese people tend to distinguish more clearly between ingroup and outgroup members than Americans do [49]. Therefore, the results may differ when investigating the group orientation effect on human-robot interaction in cultures other than Chinese.

Although our research raises many questions and further research is needed to fully understand the effects of autonomy and group orientation on human-robot interaction, we offer initial findings suggesting that the design of a social robot should take into consideration both its technical and its social aspects, which differs from the approach used for industrial robots. As robots achieve higher levels of autonomy with the development of technology, it will be increasingly important to endow them with adequate social attributes and to satisfy people's requirements for social interaction with robots.

To fulfill this objective in practice, the current technology-orientated paradigm, which mainly focuses on developing natural robotic technologies but overlooks the social attributes of robots, might be improved. Robot designers need to be more aware of people's different perceptions and expectations of robots with different social attributes. They should also define the robot's social identity and study people's expectations of the particular social robot before designing the technical attributes accordingly. This socially orientated paradigm may help increase people's trust in and acceptance of social robots and improve the performance of human-robot teams.

7. Conclusions

The purpose of this study was to investigate the effects of robot autonomy level and group orientation on the collaborative decision-making process between people and robots. The results showed that a highly autonomous robot has more influence on human decisions than a less autonomous robot; no significant effects were found for group orientation or for the interaction between group orientation and autonomy level. Further investigations are needed to better understand these effects and to generalize the results to more complex human-robot team compositions and to cultures other than Chinese.

More practically, this study suggests that a robot's influence on human-robot team performance can be improved by increasing its autonomy level. The study also recommends a shift from technology-orientated design to socially orientated design in order to satisfy people's requirements in interactions with social robots.

Acknowledgment

This study was funded by the National Natural Science Foundation of China (Grants no. 71031005 and no. 71188001).