Advances in Human-Computer Interaction
Volume 2018, Article ID 9280581, 5 pages
https://doi.org/10.1155/2018/9280581
Research Article

Effects of Human Connection through Social Drones and Perceived Safety

1Department of Interaction Science, Sungkyunkwan University, Seoul, Republic of Korea
2Human ICT Convergence, Sungkyunkwan University, Seoul, Republic of Korea

Correspondence should be addressed to Hwayeon Kong; flohello@gmail.com

Received 6 September 2017; Accepted 23 November 2017; Published 16 January 2018

Academic Editor: Marco Porta

Copyright © 2018 Hwayeon Kong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study investigates whether people perceive social drones differently depending on pilot type and perceived safety. To explore these variables, we examined a “drone campus tour guide” social drone service in a between-subjects experiment crossing two drone control types (human-driven and algorithm-driven) with two levels of perceived safety (low and high). The results demonstrate that the pilot type changes the service experience only when the drone flies in a seemingly unsafe manner: in that condition, participants were more satisfied with the algorithm-driven drone guide, whereas both types of drones received the same level of satisfaction when flown safely. The results have implications for understanding how expectations influence service evaluations in relation to human connection.

1. Introduction

Following the recent growth in the drone industry, there has been an increase in drone services that assist or collaborate with humans. In particular, advances in drone technology have made it possible for drones to autonomously perform given tasks by moving to a specified location and along a specified path based on a global positioning system (GPS) [1]. In fact, more sophisticated drones can be developed by applying conventional human-robot interaction (HRI) technology. For example, in the near future, we will be able to use drones to lead exercise groups while moving together with people [2], to guide people visiting cultural areas for the first time (e.g., Skycall, MIT), to guide the blind [3], or to act as bodyguards to ensure safe journeys home [4].

As the use of drones becomes increasingly prevalent, research on effective communication with humans and HRI for social robots is necessary in order to facilitate the acceptance of drones in everyday environments. We refer to these drones as social drones. Gongora and Gonzalez-Jimenez [5] examined the technology for surveying drone maneuvers using GPS, and Cho et al. [6] examined the aspect of usability, considering the approachability of drone control to the general public. However, there has been a lack of research on service evaluations based on the perceptions of the humans (users) that interact with drones. Therefore, we consider what variables must be considered in human-drone interactions and what effects these variables have on service evaluations.

As defined by Clarke [7], control over drone flight may be exercised by a human pilot or an autopilot. A review of remote control and autopilot functions for small drones can be found in the work of Chao et al. [8]. Thus, any discussion of drone control must consider these two types. Remotely controlled drones and drones that operate autonomously based on a fixed algorithm correspond to the concepts of avatars and agents discussed in human-computer interaction (HCI). According to several studies on HCI, users’ perceptions and evaluations differ between avatars and agents [9–12].

Furthermore, as social robots, drones face a fundamental robot-ontological issue, namely, safety. Dautenhahn and colleagues [13] studied human comfort while interacting with a social robot; they argued that feelings of safety with a robot are difficult to study directly and that user comfort should be the focus instead. In this paper, perceived safety encompasses both of these meanings, and we treat it as a variable that could influence satisfaction with social drone services. Academic focus has gradually shifted toward human perceptions of social drones. Cauchard and her colleagues [14] studied drones as a type of social computing that features an affective factor. However, this is merely a starting point in the study of social drones. In this paper, we explore the relationship between user satisfaction and two fundamental issues: drone control conditions and perceived safety. To empirically investigate this relationship, the present study examines the following research question.

Research Question. What is the relationship between tourist satisfaction and drone control type or perceived safety?

2. Literature Review and Hypotheses

2.1. Human-Driven versus Algorithm-Driven

Bailenson and Blascovich [15] define an avatar as “a perceptible digital representation whose behaviors reflect those executed, typically in real time, by a specific human being”; an agent, in contrast, is a digital representation controlled by a computational algorithm. Such entities are often encountered when using a computer application or playing a game; examples include Clippy, the assistant in MS Office, Siri on an iPhone, and the various avatars featured in the game Second Life. Depending on who controls these characters (a human versus the system), perceptions and service evaluations can differ. Lim and Reeves [9] studied engagement, a part of the game experience, and found that playing a game with avatars produced higher engagement than playing with agents. Concerning general interactions, Guadagno and her colleagues [12] studied social evaluations during interactions with digital human representations and observed a difference between avatar and agent conditions when the representations were made to smile, one of the social cues of interaction: the smile of an agent-controlled digital human tended to be evaluated negatively.

(H1) The group using a human-driven drone tour guide will have a higher level of satisfaction than the group using an algorithm-driven drone tour guide.

2.2. Perceived Safety

Duncan and Murphy [16] examined the maneuvering position of a drone that minimizes danger or stress when the drone interacts with people in public places. They found that drones that interact with people are regarded as social robots (beings) and are hence required to keep an appropriate distance or position, much as humans do when interacting with other humans in daily life. Whether human or not, a social being that interacts with people can feel either safe or unsafe to interact with, and this can affect the service evaluation. Dautenhahn and colleagues [13] likewise focused on human comfort, which can be interpreted as perceived safety. Young and colleagues [17] pointed to safety as a factor affecting acceptance because robots have the potential to injure humans. Researchers in social robotics continue to reconfirm that perceived safety, a psychological comfort with human-robot interaction, is a major factor to be considered [18–20].

(H2) A drone tour guide flying stably will receive a higher level of satisfaction from participants than a drone tour guide flying in an unstable manner.

3. Materials and Methods

3.1. Participants

We recruited 60 undergraduate or graduate students from Sungkyunkwan University in South Korea. Subjects took part in the experiment voluntarily by responding to an online announcement on the university’s main website. Females made up 47% of the sample and the age range of subjects was 19–29 (M = 23.72, SD = 2.36).

For ethical reasons, the guidelines proposed by Brownhill [21] (informed consent, privacy, incentives, the right to withdraw, and protection of the researcher) were followed: our researchers distributed a detailed information sheet prior to the experiment and spent approximately five minutes obtaining consent. The information sheet explained the experimental procedure and the required time commitment. The participants were asked whether they felt comfortable taking part, as the experiment took place in an open space; they were also asked to inform the accompanying researchers of any discomfort during the experience. It was explained that the survey following the experiment would be anonymous and that approximately 5,000 KRW would be provided as an incentive. Following these explanations, it was confirmed that the participants understood all the details.

3.2. Experimental Design

A between-participants, full factorial experiment was designed to examine how human connection cues and levels of perceived safety influence user satisfaction with a social drone service.

3.3. Procedure

To study social drones, we operated a trial drone service where people could tour a college campus while communicating with a drone guide. The drone, as a social robot, could provide humans with information on a specific location or building during the four-minute tour.

The participants were instructed to be in front of a designated structure at a predetermined time. Researchers explained how the drone would work: either controlled by a remote guide or moving along a preprogrammed route. Participants were given receivers through which they could hear the drone’s guidance. Once three to five participants were ready, the tour began. The drone used weighed approximately 420 g and had a smooth external cover. In both conditions, the drone was actually controlled in the same manner, by a well-trained guide in a “Wizard of Oz” setup.

The drone’s narration was transmitted through the earphones of the receivers provided to participants, who followed the movement of the drone and received information on buildings and structures as they approached them. The recorded voice was that of a well-trained female student announcer. An excerpt of the guidance script is shown below:

Now, let us begin with the campus guide. Our university traces its origins to … the 7th year of the rule of the first king of the Chosun dynasty in 1398. This campus is a natural science campus, which is devoted to the research and development of natural science and contains the departments of natural science, engineering, pharmacy, and medical science. The building on your left looks like it is a single building, but it is composed of four zigzagging buildings. When viewed from the sky, it looks like a honeycomb. The designer of the building admitted that he got the inspiration for the architecture from a honeycomb [to be continued].

Researchers provided participants with a manual for their respective drone guide and took the time to explain how the drone worked. There were manuals for both drone types: human-driven and algorithm-driven.

Group 1 (Drone Tour Guide Service Manual for a Human-Driven Drone): A flying drone guide provides a guide service for the campus. You can listen to the voice of the drone guide through the supplied receiver. The drone is being remotely controlled by someone who guides you around the campus while viewing the route through the camera on the front of the drone. Experience the campus guide service led by the drone’s guide.

Group 2 (Drone Tour Guide Service Manual for an Algorithm-Driven Drone): A flying drone guide provides a guide service for the campus. You can listen to the voice of the drone guide through the supplied receiver. This drone is a guide service robot that flies according to a predefined algorithm and stops and gives explanations about buildings or structures. Experience the campus guide service led by the drone’s guide.

Perceived safety scores were split into high and low groups around the median value, with a balanced number of males and females in each group: the high perceived-safety group included 15 males and 15 females, and the low perceived-safety group included 17 males and 13 females.
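For illustration only, the median split described above could be computed as in the following pandas sketch; the file name responses.csv and all column names are hypothetical, as the study’s data file is not published.

```python
import pandas as pd

# Hypothetical data file: one row per participant.
df = pd.read_csv("responses.csv")

# Sum the three 7-point perceived-safety items (the reverse-coded item
# is assumed to be flipped already), giving a score in the range 3-21.
df["safety"] = df[["safety_1", "safety_2", "safety_3"]].sum(axis=1)

# Median split: scores above the median form the "high" group.
median = df["safety"].median()
df["safety_group"] = (df["safety"] > median).map({True: "high", False: "low"})

# Check the gender balance within each group.
print(df.groupby(["safety_group", "gender"]).size())
```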

3.4. Measures

All measurements were adapted from previous studies and revised for this tour guidance service context. All items were 7-point Likert items: strongly disagree (1) to strongly agree (7).

3.4.1. Satisfaction

Satisfaction with the guidance service experience was measured (mean = 16.26, standard deviation = 2.55, Cronbach’s alpha = .76) using three 7-point items [22, 23]: “I do not have a positive attitude or evaluation about the service” (reverse-coded), “I think the system is very helpful,” and “Overall, I am satisfied with the system.”

3.4.2. Perceived Safety

Perceived safety of the guidance service was measured (mean = 13.18, standard deviation = 3.51, Cronbach’s alpha = .72) using the following bipolar, 7-point items [18]: “Anxious to Relaxed,” “Agitated to Calm,” and “Quiescent to Surprised” (reverse-coded).
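Both reliability coefficients reported above are Cronbach’s alpha, alpha = (k / (k − 1)) × (1 − Σ s_i² / s_total²), where k is the number of items, s_i² is the variance of item i, and s_total² is the variance of the summed scale. As a minimal sketch (the item values below are illustrative placeholders, not study data), the coefficient and the reverse-coding of the inversed item could be computed as follows:

```python
import numpy as np

def reverse_7pt(x: np.ndarray) -> np.ndarray:
    """Reverse-code a 7-point item (1 <-> 7, 2 <-> 6, ...)."""
    return 8 - x

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative responses to the three perceived-safety items.
safety = np.array([[5, 6, 3], [6, 6, 2], [4, 5, 3], [7, 7, 1], [5, 5, 4]])
safety[:, 2] = reverse_7pt(safety[:, 2])  # "Quiescent to Surprised" is reversed
print(cronbach_alpha(safety))
```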

4. Results

A 2 (control type: human-driven versus algorithm-driven) × 2 (perceived safety: low versus high) analysis of variance (ANOVA) was conducted, with both variables treated as between-participants factors.
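For illustration, such a 2 × 2 between-participants ANOVA could be run with statsmodels as sketched below; the data file and column names (control_type, safety_group, satisfaction) are hypothetical, since the study’s data are not published.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one row per participant, with the mean
# satisfaction score and the two between-participants factors.
df = pd.read_csv("responses.csv")

# 2 x 2 between-participants ANOVA including the interaction term.
model = ols("satisfaction ~ C(control_type) * C(safety_group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```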

When the ANOVA was performed with satisfaction as the dependent variable, there was no significant difference between the two control-type groups: human-driven drone (M = 5.30) and algorithm-driven drone (M = 5.54). Therefore, our data did not support (H1). However, a significant main effect of perceived safety was found: participants in the low perceived-safety group reported lower satisfaction (M = 5.12) than those in the high perceived-safety group (M = 5.72). Therefore, our data supported (H2).

Additionally, the ANOVA revealed a significant two-way interaction between control type and perceived safety, indicating that participants who felt less safe with the drone reported a higher level of satisfaction with the algorithm-driven drone than with the human-driven drone.
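Continuing the hypothetical data frame from the sketch above, the interaction pattern can be inspected directly from the four cell means of the 2 × 2 design:

```python
# Mean satisfaction per cell; a crossover toward the algorithm-driven
# drone within the low-safety group would reflect the interaction
# reported above.
print(df.groupby(["control_type", "safety_group"])["satisfaction"]
        .mean()
        .unstack())
```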

5. Discussion

Firstly, the results confirmed that the control type of a social drone does not by itself influence users’ evaluation levels, which indicates that human connection cues alone do not impact satisfaction in a social drone context. According to the Computers Are Social Actors (CASA) paradigm proposed by Nass and his colleagues [24, 25], HCI follows the social rules of human-human interactions. In particular, it was suggested that humans mindlessly treat the computing medium in question as a social being and exhibit reactions based on social rules, owing to the anthropomorphic cues that computing media carry, for instance, gender, personality, and voice [26, 27]. Such a paradigm also applies to our test drone, which corresponds to HRI: one can therefore argue that when a drone interacts through a voice, the drone itself is regarded as a social being during the service experience regardless of who controls it, leading to the same level of satisfaction.

However, we found that perceived safety significantly affects levels of satisfaction; thus, we reiterate that perceived safety is still a critical factor in the social drone service environment.

We also found an interaction effect between the type of drone control and the level of perceived safety. Participants with high perceived safety reported similar levels of satisfaction regardless of whether the drone was human controlled. In contrast, satisfaction became significantly lower when the drone was controlled by a human and participants felt that the guide was not flying the drone safely. This could indicate that participants expect a human-controlled drone to exhibit a higher quality flight; when they experienced a lower quality flight, they were critically disappointed. This can be understood from a perspective similar to that of Oliver [28], whose work showed that disconfirmation of expectation affects the attitude or satisfaction of a customer. Furthermore, Oliver and DeSarbo [29] showed that disconfirmation of expectation is the most important factor in customer satisfaction.

6. Conclusions

The implications of this study are as follows. We showed that human connection affects user perceptions of a drone service. Research on user service evaluations of drones has been conducted mostly in the field of HCI, where it has been assumed that social robots are independently controlled by a fully autonomous algorithm. However, considering the appearance of telepresence robots and the necessity of human supervision due to the limitations of fully automatic robot control [30, 31], it is a valuable finding that human supervision of social robots is a major factor affecting service evaluation.

Moreover, we verified that perceived safety, a continuing issue in social robotics, still acts as a service quality factor for social drones. Furthermore, the observation that human connection has an interaction effect supports the interpretation that users recognize the imperfection of automated robotic services and hold high expectations for human-driven robots. We believe that this is in line with the findings of Zimmerman and his colleagues [32], who studied embodied agents: examining user perceptions of agents with humanoid and non-humanoid exterior designs, they found that agents with humanoid designs are perceived to be more intelligent.

Another interpretation is as follows. The group with low perceived safety exhibited lower service satisfaction with human-controlled drones than with algorithm-driven drones. This may indicate that, when a robot flies unstably, the control capability of artificial intelligence (AI) is trusted more than that of a human [10, 11]. Based on the extensive consumer-perception literature in which trust is closely tied to satisfaction [33, 34], it may be inferred that low confidence in the human control of drones led to low satisfaction in our study. In any case, the second implication of this study is that the control method of a robot can act as a major factor in user service evaluation when designing a service robot that assists or collaborates with humans.

We examined our hypotheses in a laboratory experiment, which is an artificial situation [35]. One limitation of this method is that, in their natural context, participants may not pay full attention to a guide and often do other things while walking, whereas the subjects of the experiment were asked to focus specifically on the service. Additionally, they experienced the service for only five minutes, and participants may have felt that the time allotted for the experience was insufficient.

The findings of this study should be generalized with care. The sample size was insufficient to represent the population of drone guidance service users. Moreover, the participants not only belonged to a younger generation but also were university students familiar with new media technology, for whom trying new technologies [36], such as drones, is natural. It is therefore necessary to verify the results in other settings before generalizing them to the broader population of drone guidance service users.

In addition, the experiment was conducted in South Korea, where the network infrastructure supports synchronous communication without noticeable delay. The experience of a drone guidance system could be affected by such factors. Thus, it is also necessary to compare this study with studies conducted in different cultures to generalize the current findings.

Disclosure

These findings were presented at the 19th International Conference on Human-Computer Interaction.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

Partial support for this research was provided by the Department of Human ICT Convergence in Sungkyunkwan University.

References

  1. D. Scaramuzza, M. C. Achtelik, L. Doitsidis et al., “Vision-controlled micro flying robots: From system design to autonomous navigation and mapping in GPS-denied environments,” IEEE Robotics and Automation Magazine, vol. 21, no. 3, pp. 26–40, 2014.
  2. F. F. Mueller and M. Muirhead, “Understanding the design of a flying jogging companion,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST 2014), pp. 81–82, New York, NY, USA, October 2014.
  3. M. Avila, M. Funk, and N. Henze, “DroneNavigator: Using drones for navigating visually impaired persons,” in Proceedings of the 17th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2015), pp. 327–328, Portugal, October 2015.
  4. M. Delmont, “Drone encounters: Noor Behram, Omer Fast, and visual critiques of drone warfare,” American Quarterly, vol. 65, no. 1, pp. 193–202, 2013.
  5. A. Gongora and J. Gonzalez-Jimenez, “Enhancement of a commercial multicopter for research in autonomous navigation,” in Proceedings of the 23rd Mediterranean Conference on Control and Automation (MED 2015), pp. 1204–1209, Torremolinos, Spain, June 2015.
  6. K. Cho, M. Cho, and J. Jeon, “Fly a drone safely: Evaluation of an embodied egocentric drone controller interface,” Interacting with Computers, vol. 29, no. 3, pp. 345–354, 2017.
  7. R. Clarke, “Understanding the drone epidemic,” Computer Law and Security Review, vol. 30, no. 3, pp. 230–246, 2014.
  8. H. Chao, Y. Cao, and Y. Chen, “Autopilots for small unmanned aerial vehicles: A survey,” International Journal of Control, Automation, and Systems, vol. 8, no. 1, pp. 36–44, 2010.
  9. S. Lim and B. Reeves, “Computer agents versus avatars: Responses to interactive game characters controlled by a computer or other player,” International Journal of Human-Computer Studies, vol. 68, no. 1-2, pp. 57–68, 2010.
  10. C. Clerwall, “Enter the robot journalist,” Journalism Practice, vol. 8, no. 5, pp. 519–531, 2014.
  11. H. A. J. van der Kaa and E. J. Krahmer, “Journalist versus news consumer: The perceived credibility of machine written news,” paper presented at the Computation + Journalism Symposium, New York, NY, USA, 2014.
  12. R. E. Guadagno, K. R. Swinth, and J. Blascovich, “Social evaluations of embodied agents and avatars,” Computers in Human Behavior, vol. 27, no. 6, pp. 2380–2385, 2011.
  13. K. Dautenhahn, M. Walters, S. Woods et al., “How may I serve you?” in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, pp. 172–179, Salt Lake City, Utah, USA, March 2006.
  14. J. R. Cauchard, K. Y. Zhai, M. Spadafora, and J. A. Landay, “Emotion encoding in human-drone interaction,” in Proceedings of the 11th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016), pp. 263–270, March 2016.
  15. J. N. Bailenson and J. Blascovich, “Avatars,” in Encyclopedia of Human-Computer Interaction, W. S. Bainbridge, Ed., pp. 64–68, Berkshire Publishing Group, Great Barrington, Mass, USA, 2004.
  16. B. A. Duncan and R. R. Murphy, “Comfortable approach distance with small unmanned aerial vehicles,” in Proceedings of the 22nd IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2013), pp. 786–792, Kyeongju, Republic of Korea, August 2013.
  17. J. E. Young, R. Hawkins, E. Sharlin, and T. Igarashi, “Toward acceptable domestic robots: Applying insights from social psychology,” International Journal of Social Robotics, vol. 1, no. 1, pp. 95–108, 2009.
  18. C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi, “Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots,” International Journal of Social Robotics, vol. 1, no. 1, pp. 71–81, 2009.
  19. C. W. Lee, Z. Bien, G. Giralt, P. I. Corke, and M. Kim, “Report on the First IART/IEEE-RAS Joint Workshop: Technical Challenge for Dependable Robots in Human Environments,” IART/IEEE-RAS Robotics and Automation Society Magazine Report, Seoul, Korea, 2001.
  20. I. R. Nourbakhsh, C. Kunz, and T. Willeke, “The Mobot museum robot installations: A five year experiment,” in Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3636–3641, USA, October 2003.
  21. S. Brownhill, “The researcher and the researched: Ethical research in children’s and young people’s services,” in Empowering the Children’s and Young People’s Services: Practice Based Knowledge, Skills and Understanding, S. Brownhill, Ed., pp. 45–61, Routledge, Abingdon, UK, 2014.
  22. T. McGill, V. Hobbs, and J. Klobas, “User-developed applications and information systems success: A test of DeLone and McLean's model,” Information Resources Management Journal, vol. 16, no. 1, pp. 24–45, 2003.
  23. A. Rai, S. S. Lang, and R. B. Welker, “Assessing the validity of IS success models: An empirical test and theoretical analysis,” Information Systems Research, vol. 13, no. 1, pp. 50–69, 2002.
  24. C. Nass, Y. Moon, B. J. Fogg, B. Reeves, and D. C. Dryer, “Can computer personalities be human personalities?” International Journal of Human-Computer Studies, vol. 43, no. 2, pp. 223–239, 1995.
  25. B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, Cambridge University Press, New York, NY, USA, 1996.
  26. C. Nass, J. Steuer, and E. R. Tauber, “Computers are social actors,” in Proceedings of the SIGCHI Conference, pp. 72–78, Boston, Mass, USA, April 1994.
  27. K. M. Lee and C. Nass, “The multiple source effect and synthesized speech: Doubly-disembodied language as a conceptual framework,” Human Communication Research, vol. 30, no. 2, pp. 182–207, 2004.
  28. R. L. Oliver, “What is customer satisfaction?” Wharton Magazine, vol. 5, no. 3, pp. 36–41, 1981.
  29. R. L. Oliver and W. S. DeSarbo, “Response determinants in satisfaction judgments,” Journal of Consumer Research, vol. 14, no. 4, pp. 495–507, 1988.
  30. K. Misawa and J. Rekimoto, “Wearing another's personality: A human-surrogate system with a telepresence face,” in Proceedings of the 19th ACM International Symposium on Wearable Computers (ISWC 2015), pp. 125–132, Japan, September 2015.
  31. K. Kraft and W. D. Smart, “Seeing is comforting: Effects of teleoperator visibility in robot-mediated health care,” in Proceedings of the 11th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016), pp. 11–18, Christchurch, New Zealand, March 2016.
  32. J. Zimmerman, E. Ayoob, J. Forlizzi, and M. McQuaid, “Putting a face on embodied interface agents,” in Proceedings of the Designing Pleasurable Products and Interfaces Conference, S. Wensveen, Ed., pp. 233–248, Technical University Press, Eindhoven, Netherlands, 2005.
  33. C. Moorman, R. Deshpande, and G. Zaltman, “Factors affecting trust in market research relationships,” Journal of Marketing, vol. 57, no. 1, pp. 81–101, 1993.
  34. R. C. Caceres and N. G. Paparoidamis, “Service quality, relationship satisfaction, trust, commitment and business-to-business loyalty,” European Journal of Marketing, vol. 41, no. 7-8, pp. 836–867, 2007.
  35. J. M. McLeod and B. Reeves, “On the nature of mass media effects,” in Television and Social Behavior: Beyond Violence, S. B. Withey and R. P. Abeles, Eds., pp. 17–54, Lawrence Erlbaum Associates, Hillsdale, NJ, USA, 1980.
  36. T. Syvertsen, G. Enli, O. J. Mjøs, and H. Moe, The Media Welfare State: Nordic Media in the Digital Era, University of Michigan Press, Ann Arbor, Mich, USA, 2014.