Abstract

This study investigates whether people perceive social drones differently depending on pilot type and perceived safety. A “drone campus tour guide” social drone service was examined to explore these questions. The study involves a between-subjects experiment crossing two drone control types (human-driven and algorithm-driven) with two levels of perceived safety (low and high). The results demonstrate that pilot type changes the service experience when the drone flies in an unsafe manner: participants who experienced unsafe flying reported higher satisfaction with the algorithm-driven drone guide, whereas both types of drone received the same level of satisfaction when flown safely. These results have implications for understanding how expectations tied to human connection influence service evaluations.

1. Introduction

Following the recent growth of the drone industry, there has been an increase in drone services that assist or collaborate with humans. In particular, advances in drone technology have made it possible for drones to autonomously perform given tasks by moving to a specified location along a specified path using the global positioning system (GPS) [1]. More sophisticated drones can be developed by applying conventional human-robot interaction (HRI) techniques. For example, in the near future, drones could lead exercise groups while moving together with people [2], guide first-time visitors around cultural sites (e.g., Skycall, MIT), guide the blind [3], or act as bodyguards that ensure safe journeys home [4].
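To make the GPS-based autonomy concrete, the sketch below shows a minimal waypoint-following loop of the kind such drones rely on. It is illustrative only: the `drone` object and its `get_position`/`fly_toward` methods are hypothetical stand-ins for a real flight SDK.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def follow_route(drone, waypoints, tolerance_m=2.0):
    """Fly to each GPS waypoint in order, advancing once within tolerance.

    drone.get_position() and drone.fly_toward(lat, lon) are hypothetical
    stand-ins for a real flight SDK.
    """
    for lat, lon in waypoints:
        while haversine_m(*drone.get_position(), lat, lon) > tolerance_m:
            drone.fly_toward(lat, lon)
```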

As the use of drones becomes increasingly prevalent, research on effective human-drone communication and on HRI for social robots is needed to facilitate the acceptance of drones in everyday environments. We refer to such drones as social drones. Gongora and Gonzalez-Jimenez [5] examined technology for surveying drone maneuvers using GPS, and Cho et al. [6] examined usability, considering how approachable drone control is to the general public. However, there has been little research on service evaluations based on the perceptions of the humans (users) who interact with drones. We therefore consider which variables must be taken into account in human-drone interaction and what effects these variables have on service evaluations.

As defined by Clarke [7], control over drone flight may be exercised by a human pilot or an autopilot. A review of remote-control and autopilot functions for small drones can be found in the work of Chao et al. [8]. These are, in effect, the two fundamental types of drone control. Remotely controlled drones and drones that operate autonomously according to a fixed algorithm correspond to the concepts of avatars and agents discussed in human-computer interaction (HCI). According to several HCI studies, users’ perceptions and evaluations differ between avatars and agents [9–12].

Furthermore, as social robots, drones face a robot-ontological issue, namely, safety. Dautenhahn and colleagues [13] studied human comfort during interaction with a social robot. They argued that actual feelings of safety with a robot are difficult to study directly and that user comfort should be the focus instead. In this paper, perceived safety encompasses both meanings, and we treat it as a variable that can influence satisfaction with social drone services. Academic attention has gradually shifted toward human perceptions of social drones: Cauchard and her colleagues [14], for example, studied drones as a type of social computing featuring an affective factor. However, this is merely a starting point for the study of social drones. In this paper, we explore the relationship between user satisfaction and two fundamental issues: drone control condition and perceived safety. To investigate this relationship empirically, the present study examines the following research question.

Research Question. What is the relationship between tourist satisfaction and drone control type or perceived safety?

2. Literature Review and Hypotheses

2.1. Human-Driven versus Algorithm-Driven

Bailenson and Blascovich [15] define an avatar as “a perceptible digital representation whose behaviors reflect those executed, typically in real time, by a specific human being,” whereas an agent’s behaviors reflect a computational algorithm. Such characters are often encountered when using a computer application or playing a game; examples include Clippy in MS Office, Siri on an iPhone, and the avatars featured in the game Second Life. Depending on who controls these characters (a human versus the system), perceptions and service evaluations can differ. Lim and Reeves [9] studied engagement, part of the game experience, and found that playing a game with avatars produced greater engagement than playing with agents. Concerning general interactions, Cauchard and her colleagues [14] studied social evaluations during interactions with digital human representations and observed that evaluations of avatars and agents differ when the representations are made to smile, one of the social cues of interaction; the smile of a digital human known to be an agent tended to be evaluated negatively.

(H1) The group using a human-driven drone tour guide will have a higher level of satisfaction than the group using an algorithm-driven drone tour guide.

2.2. Perceived Safety

Duncan and Murphy [16] examined the maneuvering positions that minimize danger or stress when a drone interacts with people in public places. They found that drones interacting with people are regarded as social robots (beings) and are therefore expected to keep an appropriate distance or position, much as humans do when interacting with one another in daily life. Whether human or not, a social being that interacts with people can feel either stable or unstable to interact with, and this can affect the service evaluation. Dautenhahn and colleagues [13] likewise focused on human comfort, which can be interpreted as perceived safety. Young and colleagues [17] pointed to safety as a factor affecting acceptance because robots have the potential to injure humans. Researchers in social robotics continue to confirm that perceived safety, the psychological comfort felt during human-robot interaction, is a major factor to be considered [18–20].

(H2) A drone tour guide flying stably will receive a higher level of satisfaction from participants than a drone tour guide flying in an unstable manner.

3. Materials and Methods

3.1. Participants

We recruited 60 undergraduate or graduate students from Sungkyunkwan University in South Korea. Subjects took part in the experiment voluntarily by responding to an online announcement on the university’s main website. Females made up 47% of the sample and the age range of subjects was 19–29 (M = 23.72, SD = 2.36).

For ethical reasons, we followed the guidelines proposed by Brownhill [21] (informed consent, privacy, incentives, the right to withdraw, and protection of the researcher): our researchers distributed a detailed information sheet prior to the experiment and spent approximately five minutes obtaining consent. The information sheet explained the experimental procedure and the required time commitment. Because the experiment took place in an open space, participants were asked whether they felt comfortable taking part and were told to inform the accompanying researchers of any discomfort during the experience. It was explained that the survey following the experiment would be anonymous and that approximately 5,000 KRW would be provided as an incentive. After these explanations, we confirmed that the participants understood all the details.

3.2. Experimental Design

A between-participants, full factorial experiment was designed to examine how human connection cues and levels of perceived safety in a social drone service influence user satisfaction.

3.3. Procedure

To study social drones, we operated a trial drone service in which people could tour a college campus while communicating with a drone guide. The drone, as a social robot, provided information on specific locations and buildings during the four-minute tour.

Participants were instructed to gather in front of a designated structure at a predetermined time. Researchers explained how the drone would work: either controlled by a remote guide or moving along a preprogrammed route, depending on the condition. Participants were given receivers through which they could hear the drone’s guidance. Once three to five participants were ready, the tour began. The drone weighed approximately 420 g and had a smooth external cover. In both conditions, the drone was actually controlled in the same manner, by a well-trained guide in a “Wizard of Oz” setup.

The drone’s narration was transmitted through the earphones of the receivers provided to participants, who followed the movement of the drone and received information on buildings and structures as they approached them. The recorded voice was that of a well-trained female student announcer. An excerpt of the guidance script is shown below:

“Now, let us begin the campus guide. Our university traces its origins to … the 7th year of the rule of the first king of the Chosun dynasty in 1398. This campus is a natural science campus, devoted to the research and development of natural science, and contains the departments of natural science, engineering, pharmacy, and medical science. The building on your left looks like a single building, but it is composed of four zigzagging buildings. When viewed from the sky, it looks like a honeycomb. The designer admitted that he drew his inspiration for the architecture from a honeycomb [to be continued].”

Researchers provided participants with a manual for their respective drone guide and took time to explain how the drone worked. There was a manual for each drone type: human-driven and algorithm-driven.

Group 1 (Drone Tour Guide Service Manual for a Human-Driven Drone): a flying drone guide provides a guide service for the campus. You can listen to the voice of the drone guide through the supplied receiver. The drone is remotely controlled by someone who guides you around the campus while viewing the route through the camera on the front of the drone. Experience the campus guide service led by the drone guide.

Group 2 (Drone Tour Guide Service Manual for an Algorithm-Driven Drone): a flying drone guide provides a guide service for the campus. You can listen to the voice of the drone guide through the supplied receiver. This drone is a guide service robot that flies according to a predefined algorithm and stops to give explanations about buildings or structures. Experience the campus guide service led by the drone guide.

Scores for perceived safety were divided into two groups, high and low, by a median split. Each group contained a roughly balanced number of males and females: the high perceived-safety group included 15 males and 15 females, and the low perceived-safety group included 17 males and 13 females.
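For readers who wish to reproduce this kind of grouping, a minimal Python sketch of the median split is shown below. The data are simulated stand-ins, since the raw scores are not reproduced here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated stand-in scores: three 7-point safety items summed per
# participant, so the possible range is 3-21 (N = 60, as in the study).
df = pd.DataFrame({
    "participant": np.arange(1, 61),
    "safety_score": rng.integers(3, 22, size=60),
})

# Median split: participants scoring above the median form the
# high perceived-safety group; the rest form the low group.
median = df["safety_score"].median()
df["safety_group"] = np.where(df["safety_score"] > median, "high", "low")
print(df["safety_group"].value_counts())
```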

3.4. Measures

All measurements were adopted from previous studies and revised for this drone guidance service context. All items were 7-point scales ranging from strongly disagree (1) to strongly agree (7).

3.4.1. Satisfaction

Satisfaction with the guidance service experience was measured (mean = 16.26, standard deviation = 2.55, Cronbach’s alpha = .76) using 7-point scaled items [22, 23], from strongly disagree (1) to strongly agree (7), for each of the following statements: “I do not have a positive attitude or evaluation about the service” (reverse-scored), “I think the system is very helpful,” and “Overall, I am satisfied with the system.”

3.4.2. Perceived Safety

Perceived safety of the guidance service was measured (mean = 13.18, standard deviation = 3.51, Cronbach’s alpha = .72) using the following bipolar, 7-point scaled items [18]: “Anxious to Relaxed,” “Agitated to Calm,” and “Quiescent to Surprised” (reverse-scored).
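Scoring both scales involves reverse-coding the inverted items before summing and checking internal consistency with Cronbach’s alpha. Below is a minimal Python sketch of that procedure; the item columns and simulated responses are hypothetical, not the study’s data.

```python
import numpy as np
import pandas as pd

def reverse_code(item, low=1, high=7):
    """Reverse a Likert item so that 1 maps to 7, 2 to 6, and so on."""
    return high + low - item

def cronbach_alpha(items):
    """Cronbach's alpha for a DataFrame whose columns are scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated responses for three satisfaction items (column names hypothetical).
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 8, size=(60, 3)),
                     columns=["sat1", "sat2", "sat3"])
items["sat1"] = reverse_code(items["sat1"])  # first item is reverse-scored
print(f"alpha = {cronbach_alpha(items):.2f}")
items["satisfaction"] = items.sum(axis=1)    # summed score, as reported above
```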

4. Results

A 2 (drone control type: human-driven versus algorithm-driven) × 2 (perceived safety: low versus high) factorial analysis of variance (ANOVA) was conducted, with both variables entered as between-participants factors.

With satisfaction as the dependent variable, there was no significant difference between the human-driven drone (M = 5.30) and the algorithm-driven drone (M = 5.54); therefore, our data did not support (H1). However, the same ANOVA revealed a significant main effect of perceived safety: participants in the low perceived-safety group reported lower satisfaction (M = 5.12) than those in the high perceived-safety group (M = 5.72). Therefore, our data supported (H2).

Additionally, the ANOVA revealed a significant two-way interaction between drone control type and perceived safety, indicating that participants who felt less safe with the drone reported a higher level of satisfaction with the algorithm-driven drone than with the human-driven drone.
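As a point of reference, a 2 × 2 between-participants ANOVA of this form can be run with statsmodels as sketched below. The data frame is simulated, and the column names are illustrative rather than those of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in data (N = 60); column names are illustrative.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "satisfaction": rng.normal(5.4, 0.8, size=60),
    "control_type": np.tile(["human", "algorithm"], 30),
    "safety_group": np.repeat(["low", "high"], 30),
})

# 2 x 2 between-participants ANOVA including the interaction term.
model = smf.ols("satisfaction ~ C(control_type) * C(safety_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```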

5. Discussion

First, the results confirmed that the control type of a social drone did not, by itself, influence users’ evaluation levels, which suggests that human connection cues alone do not impact satisfaction in a social drone context. According to the Computers Are Social Actors (CASA) paradigm proposed by Nass and his colleagues [24, 25], HCI follows the social rules of human-human interaction. In particular, humans mindlessly treat a computing medium as a social being and react according to social rules, owing to anthropomorphic cues applied to the medium, such as gender, personality, and voice [26, 27]. This paradigm also applies to our test drone, which corresponds to HRI: when a drone interacts through a voice, the drone itself is regarded as a social being during the service experience regardless of who controls it, leading to the same level of satisfaction.

However, we found that perceived safety significantly affects levels of satisfaction; thus, we reiterate that perceived safety is still a critical factor in the social drone service environment.

We also found an interaction effect between the type of drone control and the level of perceived safety. Participants with high perceived safety reported similar levels of satisfaction regardless of whether the drone was human controlled. In contrast, satisfaction became significantly lower when the drone was controlled by a human and participants felt that the guide was not flying it safely. This may indicate that participants expected the human-controlled drone to exhibit a higher-quality flight and were sharply disappointed when they experienced a lower-quality one. This can be understood from a perspective similar to that of Oliver [28], who showed that disconfirmation of expectations affects the attitude or satisfaction of a customer; Oliver and DeSarbo [29] further showed that disconfirmation of expectations is the most important factor in customer satisfaction.

6. Conclusions

The implications of this study are as follows. We showed that human connection affects user perceptions of a drone service. Research on user service evaluations of drones has mostly been conducted in the field of HCI under the assumption that social robots are independently controlled by a fully autonomous algorithm. However, considering the emergence of telepresence robots and the continued necessity of human supervision given the limits of fully automatic robot control [30, 31], it is a valuable finding that human supervision of social robots is a major factor affecting service evaluation.

Moreover, we verified that perceived safety, a continuing issue in social robotics, still acts as a service quality factor for social drones. Furthermore, the observed interaction effect of human connection supports the interpretation that users recognize the imperfection of automated robotic services and hold high expectations for human-driven robots. We believe this is in line with the findings of Zimmerman and his colleagues [32], who studied embodied agents: user perceptions of agents with humanoid and non-humanoid exterior designs were examined, and agents with humanoid designs were perceived to be more intelligent.

Another interpretation is as follows. The group with low perceived safety exhibited lower service satisfaction with human-controlled drones than with algorithm-driven drones. This may indicate that, when a robot flies unstably, the control capability of artificial intelligence (AI) is trusted more than that of a human [10, 11]. Given the numerous works on consumer perception in which trust stands in a defining relationship with satisfaction [33, 34], it may be possible to infer that low confidence in human control of drones led to low satisfaction in our study. In any case, the second implication of this study is that the control method of a robot can act as a major factor in user service evaluation when designing a service robot that assists or collaborates with humans.

We examined our hypotheses in a laboratory experiment, which is an unusual situation for participants [35]. One limitation of this method is that people in their natural context may not pay full attention to a guide and often do other things while walking, whereas our subjects were asked to focus specifically on the service. Additionally, they experienced the service for only five minutes and may have felt that the allotted time was insufficient.

The findings of this study should be generalized with caution. The sample size was insufficient to represent the population of drone guidance service users, and the participants were not only from a younger generation but also university students familiar with new media technology, for whom experiencing new technologies such as drones is natural [36]. The results need to be verified in other settings before being generalized to the broader population of drone guidance service users.

In addition, the experiment was conducted in South Korea, where the network infrastructure supports synchronous, delay-free communication. The experience of a drone guidance service would be affected by such factors. It is therefore also necessary to compare this study with studies in different cultures to generalize the current findings.

Disclosure

These findings were presented at the 19th International Conference on Human-Computer Interaction.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

Partial support for this research was provided by the Department of Human ICT Convergence in Sungkyunkwan University.