Advances in Human-Computer Interaction
Volume 2018, Article ID 5781363, 12 pages
https://doi.org/10.1155/2018/5781363
Research Article

UbiCompass: An IoT Interaction Concept

1Lund University, P.O. Box 118, 221 00 Lund, Sweden
2Sony Mobile Communications, Nya Vattentornet, 221 88 Lund, Sweden
3Axis Communications, Emdalavägen 14, 223 69 Lund, Sweden

Correspondence should be addressed to Günter Alce; gunter.alce@design.lth.se

Received 26 September 2017; Accepted 5 February 2018; Published 12 April 2018

Academic Editor: Thomas Mandl

Copyright © 2018 Günter Alce et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Lately, different wearable form factors have reached the consumer domain. Wearables enable at-a-glance access to information and can continually sense the surrounding environment. Internet of Things (IoT) researchers have focused on the main enabling factors: the integration of several technologies and communication solutions. Less effort has been devoted to exploring how not-so-tech-savvy end users can discover and directly interact with the numerous connected things predicted by the IoT vision. This paper presents a novel IoT interaction concept called UbiCompass. A functional smartwatch face prototype of the UbiCompass was developed and integrated with an existing smart home system, in which five different connected devices could be controlled using simple interaction. It was then compared to a traditional smartphone mobile application in a controlled experiment. The results show statistically significant differences in favor of the proposed concept. This highlights the potential the UbiCompass has as an IoT interaction concept.

1. Introduction

Over the past few years, there has been a proliferation of wearable devices such as smartwatches, smart glasses, and other forms of wearable computing in the consumer domain. Characteristics of wearable devices are that they are intended to always be “on,” that they enable access to information at a glance, and that they continually sense the surrounding environment in order to offer a better interface to the real world [1]. While most wearables are connected to the cloud via smartphones, they can still be considered a subset of the Internet of Things (IoT), a global infrastructure for the information society that enables advanced services by interconnecting physical and virtual things [2]. IoT comes with several challenges that involve dealing with device-to-human interactions and device-to-device interactions.

IoT interaction can be roughly divided into two types: explicit and implicit [3]. Pure explicit interaction is context-free, which means that users must repeat the required action every time (e.g., pressing a switch to turn a light on or off). With implicit interaction, the same effect can be achieved with a sensor that detects when people enter a room and automatically switches on the light for authorized people. Mark Weiser originally proposed the shift from explicit to implicit interaction in the early 90s. One notion is "smart" rooms instrumented with different embedded sensors, such as the iRoom [4], smart classrooms [5], and environment-aware handheld projectors [6].

In contrast to Weiser, Rogers argues that "We need to design new technologies to encourage people to be proactive in their lives, performing ever greater feats, extending their ability to learn, make decisions, reason, create, solve complex problems and generate innovative ideas" [7].

In general, IoT researchers focus on two main enabling factors: the integration of several technologies and communication solutions [8]. Less effort has been devoted to exploring how a not-so-tech-savvy end user can discover and directly interact with the numerous connected things predicted by the IoT vision. One notion is to use the benefits of wearables to facilitate IoT interaction. Four basic tasks that a user of an IoT system needs to be able to perform are discovering devices, selecting a particular device, viewing the device’s status, and controlling the device [9].

The fast development in virtual personal assistant (VPA) technology (e.g., Apple's Siri and Microsoft's Cortana) has introduced new ways for users to interact with their connected devices and services. However, as pointed out by Norman [10], natural user interfaces built on, for example, gestures and speech lack the ability to make all possible actions visible to the user. This might be less of a problem in a familiar home environment, where the user knows which devices and services are available and where they are located. However, in an unknown environment, such as a new workplace, it could be difficult for a user to discover nearby devices and their capabilities.

The purpose of this paper is twofold: to introduce a novel IoT interaction concept called UbiCompass and to compare a functional, smartwatch prototype of it with a commercial mobile application when using an IoT solution.

This paper contributes a novel IoT interaction concept that addresses how a not-so-tech-savvy end user can discover, select, and directly interact with numerous connected things.

The next section presents relevant related work. Then the UbiCompass concept is described followed by evaluation, results, discussion, and conclusions.

2. Related Work

A variety of studies have explored different IoT aspects including technology challenges, enablers, applications, and interaction concepts. Naturally, one important aspect is technology enablers [8]. Many research groups in the IoT community are currently targeting enablers for context-aware computing and computational intelligence that is part of both the physical and the digital worlds [11–13]. Several technical enablers have been developed with an application that runs on a mobile device. Examples are the Fibaro Home Center 2 [14], Samsung SmartThings [15], Apple HomeKit [16], and Google Weave [17]. Having yet another application to control things is perhaps not the best solution, particularly when it comes to short interactions, such as turning a device on/off. This requires the user to pick up the phone, unlock it, find the application, start it, find the device, and then finally be able to control it. Nevertheless, such applications are practical for doing more advanced configurations.

Ledo et al. [9] discuss four dominant concepts for interacting with connected devices: touching, pointing, scanning, and world in miniature.
(i) Touching: using RFID to pair two devices.
(ii) Pointing: using techniques such as infrared, computer vision, and inertial sensors to select devices.
(iii) Scanning: using a hub to get a list of all devices; this is the typical form of interaction seen nowadays.
(iv) World in miniature: representing devices through their spatial topography.

The UbiCompass is built on the notion of combining the last three: pointing, scanning, and world in miniature.

2.1. Pointing

Pointing a mobile device at an intended object is appropriate when the two are at a distance from each other. Many technologies enable this technique, including infrared and computer vision (CV). The UbiControl [18] system is an example of an infrared pointing system that uses laser pointers to select devices and control them with PDAs. All controllable devices need to be equipped with sensor nodes carrying photodiodes that recognize the pulse sequences emitted by the laser pointer. Budde et al. [19] developed a CV system that uses Microsoft Kinect to enable point-and-click interaction to control devices in smart environments. A server determines through collision detection which device the user is pointing at and sends the respective control interface to the user's mobile device. New devices can be registered manually or by using markers such as QR codes, which identify them and provide their position at the same time. The MISO [20] system takes a similar approach, using Kinect for pointing recognition; the difference is that MISO uses six gestures to control the system and does not have a separate display. Most of these techniques rely heavily on nonstandard hardware, making them difficult to deploy outside of the lab.

A number of different CV-based pointing systems have been developed by researchers. Snap-To-It [21] was recently introduced and allows users to interact with any device by simply taking a picture of it. Snap-To-It shares the image of the device over a local area network; the image is then analyzed along with the user's location, and the corresponding control interface is delivered to the user's mobile device. However, the Snap-To-It system does not offer good discoverability (i.e., the ability of an IoT system to enable a user to find information, applications, and services). The user needs to guess, or take a photo and then wait for a UI, a process that can potentially irritate the user.

Another CV-based idea is Tag-It! [22]. It uses two wearable technologies: a head-worn wearable computer (Google Glass) and a chest-worn depth sensor (Tango). Google Glass generates and displays virtual information to the user while Tango provides robust indoor position tracking for Google Glass. Tag-It! is a promising project and will be even better once the technologies have been merged into a smaller device. Mayer and Soros [23] combine wearables to interact with devices in the user's environment: smart glasses are used to select a device, and the user interface for that device is rendered on the user's smartwatch. This setup, though, has problems similar to those of Snap-To-It: it does not provide a quick overview of the devices the user can interact with and where they are located.

2.2. Scanning

According to Ledo et al. [9], scanning is based on having a dedicated application that retrieves a list of all devices from a hub, such as Fibaro Home Center 2 [14]. Most of the existing IoT applications work in this manner. However, retrieving a list with all available devices can overwhelm the user with devices and does not represent the spatial topography.

2.3. World in Miniature

The world in miniature (WIM) concept can be used to represent the spatial topography of devices on a display. This can be achieved in several ways, such as through live video feeds in which the state of the displayed devices can be modified through a mobile device [24, 25].

Ledo et al. [9] represent devices in a WIM solution with a proxemics-aware approach that exploits the spatial relationships between a person's handheld device and all surrounding devices to create a dynamic device control interface. A user discovers and selects a device by orienting the handheld device around the room and then, simply by moving towards the device, progressively views its status and controls its features in increasing detail: first the device's presence is shown, then its state, and finally a detailed status view together with the control mechanism.

None of the mentioned research projects utilizes the benefits of wearables to offer a not-so-tech-savvy end user easy discovery and simple interaction with numerous connected things. The UbiCompass concept is an attempt to explore how these requirements can be fulfilled.

3. The UbiCompass Concept

The ideas behind the UbiCompass concept were developed in an iterative design process. Focus group meetings took place every two weeks; in total there were six focus group sessions. The focus group consisted of representatives from EASE (The Industrial Excellence Centre for Embedded Applications Software Engineering) [26]. The representatives were both academic, two from the Design Department at Lund University and one from MAPCI (Mobile and Pervasive Computing Institute Lund University) [27], and industrial, two from Axis [28] and one from Sony Mobile [29]. The focus group meetings bred new ideas through brainstorming and discussions and also gave valuable feedback throughout the project's progress. One important requirement was to build on off-the-shelf standard components available for smart homes. The focus was on easy discoverability and simple interaction. The following scenario gives an idea of how the UbiCompass can be used.

IT consultant Tom is visiting a big company to make a presentation. He is received by Matthew who shows him the presentation room: “You have access to everything you need and you can just set it up as you like. I’ll be right back!” Tom looks at his watch. A number of icons appear on the watch face, signaling the presence and whereabouts of connected things in the room that he has access to:

“Ok, so we have the projector and a motorized projection screen… video conference system, speakers, lamps… aha, motorized blinds! Convenient!”

“First, let’s fix the projector”. He points the watch at the projector. When the projector icon is positioned under the 12 on the watch face, a button with “Control” written on it appears. Tom taps it and a large projector icon appears. He can now turn the projector on by tapping the icon. In the same way, he also brings down the projection screen as well as the blinds and turns on some lights.

“Hmm, too much light around the projection screen.” Tom points the watch towards the lamp over the projection screen and taps its control button. He then places his index finger on the appearing lamp icon while twisting his wrist to lower the brightness of the lamp.

Next, the background and the ideas of the proposed interaction concept are described, followed by a more technical explanation of the UbiCompass prototype implementation and the limitations of the current setup.

3.1. Background

The UbiCompass concept is based on the watch form factor. The wrist has long been a compelling location for wearable technology [30]. Our usage of watches is also transforming, from just showing the time to becoming more and more of a personal computer. Smartwatches are available on the market and have proven to be more socially viable than smart glasses, since most people are comfortable wearing a watch. Using a wearable device to interact with other devices has several benefits, one of them being that the device can almost always be worn and will almost always be on and running [1].

The smartwatches available on the market come with a limited number of applications, but with the ability to download additional ones, similar to smartphones. However, finding the right application on a smartwatch once downloaded is more difficult, mainly due to the limited size of the display. We wanted to avoid requiring the user to install yet another application, find it, and start it up. Instead, the UbiCompass was implemented as a new watch face. Almost all smartwatches come with a variety of watch faces from which the user can choose; the most common ones show the time, date, and weather. The UbiCompass watch face shows the time and the devices that the user is allowed to interact with in a certain room (Figure 1).

Figure 1: (a) No device in focus. (b) The Sonos sound system is in focus, indicated by a triangle and brackets.

3.2. Interaction

As already mentioned, a user of an IoT system needs to be able to perform four basic tasks: discovering devices, selecting a particular device, viewing the device’s status, and controlling the device [9]. The UbiCompass concept addresses these tasks by using a compass metaphor in combination with traditional touch interaction in the prototype; see video (UbiCompass at YouTube: https://youtu.be/5onr-NGYta4).

3.2.1. Discover

A compass is always on, always showing the four cardinal directions. In a similar manner, the UbiCompass prototype facilitates the discovery of devices and of the directions in which they are placed in a room. It should be noted that no application needs to be started: the UbiCompass runs as a watch face, so the devices that the user can interact with are always visible. Having a UI running in symbiosis with the watch face enables access to information at a glance; since the user does not need to open an application, the interface is always ready for interaction.

It should be noted that everything else in the smartwatch runs as usual. The devices, which the user can interact with, and their positions are updated automatically. The user does not need to search for a certain application, but only needs to set the UbiCompass as his or her watch face once. When the user chooses to interact with a device and clicks on the control button (Figure 1), an application automatically starts, offering simple interaction possibilities for the chosen device.

3.2.2. Select

The watch face is inspired by the Sony SmartWatch 3 [31] standard watch face and is monochrome, moderate, and minimalistic in its design. It provides an aesthetically appealing design while leaving room to highlight other details of the prototype. The user selects a device to interact with by moving his or her arm until the 12 o’clock position (i.e., north) is pointing at the device. When the device is in the line of sight, the user feels a distinct vibration and the device icon is highlighted. This indicates it is ready to be selected (Figure 1).
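The selection step described above can be sketched as a simple bearing comparison: the device whose known bearing lies closest to the watch's 12 o'clock heading, within some tolerance, is the one in focus. The function below is a minimal illustration only; the device names, the 15-degree tolerance, and the exact matching rule are assumptions, not details taken from the prototype.

```python
def device_in_focus(heading_deg, device_bearings, tolerance_deg=15.0):
    """Return the name of the device whose bearing lies within
    `tolerance_deg` of the watch's 12 o'clock heading, or None.

    heading_deg: current compass azimuth of the watch (0-360).
    device_bearings: dict mapping device name -> bearing in degrees.
    """
    best, best_diff = None, tolerance_deg
    for name, bearing in device_bearings.items():
        # Smallest angular difference, wrapping around 360 degrees.
        diff = abs((bearing - heading_deg + 180) % 360 - 180)
        if diff <= best_diff:
            best, best_diff = name, diff
    return best
```

When the returned device changes from `None` to a name, the watch would trigger the vibration and highlight the icon.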

3.2.3. Status

The user finds out the status of the device after choosing which device to control. The icons, shown in Figure 1, follow the same simplicity as the watch face as a whole: stylish and minimalistic. They are monochrome as well, but, at a later stage, colors will most likely be introduced to show a device's active status. A yellow lamp icon, for example, would indicate that the lamp is turned on. This feature, however, has not yet been implemented.

3.2.4. Control

In the beginning of the design process, the focus was on coming up with a simple on/off functionality for the lamps; for this, a traditional on/off switch was used. Later on, dimming functions, simple Sonos control functions, and the ability to check the temperature were added. For the last two, basic functions such as play/pause/skip and a traditional touch interface were used. To increase/decrease the volume and light intensity, we decided to test the limits of the UbiCompass prototype by implementing a special feature called wrist-twist. Wrist-twist uses the smartwatch accelerometer to calculate how much the watch is tilted and then increases or decreases the volume or the light intensity. The idea was to use a control knob metaphor, which is a common way to increase or decrease the volume and light intensity on a radio or a dimmer light switch (Figure 2).

Figure 2: Conceptual picture of the wrist-twist feature.
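The wrist-twist mapping can be sketched as follows: estimate the roll angle from the accelerometer's gravity components and map the change in angle since the gesture started to a change in level. The axis conventions, the sensitivity factor, and the 0-100 level range below are illustrative assumptions, not measured parameters of the prototype.

```python
import math

def roll_degrees(ax, ay, az):
    """Roll (wrist-twist) angle in degrees, estimated from the
    accelerometer's gravity components. Axis convention is assumed:
    ay/az span the plane perpendicular to the forearm."""
    return math.degrees(math.atan2(ay, az))

def adjust_level(level, roll_start_deg, roll_now_deg, sensitivity=0.5):
    """Map the change in roll since the gesture began to a new
    volume/brightness level, clamped to 0-100. `sensitivity` is
    level units per degree of twist (an illustrative choice)."""
    delta = roll_now_deg - roll_start_deg
    return max(0, min(100, level + sensitivity * delta))
```

With this control-knob mapping, twisting the wrist 40 degrees at sensitivity 0.5 moves the level by 20 units.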
3.3. Prototype Implementation

An early decision in the UbiCompass project was to use existing off-the-shelf standard components available for smart homes. The connected devices use the Z-Wave standard: a widespread standard found in plenty of third-party devices that are easily available and relatively affordable. The Z-Wave communication between the devices and the controller is carried out on the 868 MHz band to avoid interference with other equipment such as Bluetooth or Wi-Fi, which use the 2.4 GHz band. The controlling communication runs through the network via a Wi-Fi router (Figure 3). To connect and configure the devices, the Fibaro web application was used, and each device was given a title containing the relative position of that specific device. This means that if the user moves the devices around, their positions will not be updated automatically; they have to be updated manually via the Fibaro web application.

Figure 3: A simple setup with two connected Z-Wave lamps.
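Since the position is carried in the device title, the controlling software has to parse it back out. The paper does not specify the naming scheme, so the sketch below assumes a hypothetical format in which the title ends with "@" followed by a bearing in degrees (e.g., "Desk lamp @ 045"); both the pattern and the example names are assumptions.

```python
import re

# Hypothetical naming scheme: "<name> @ <bearing>", e.g. "Desk lamp @ 045".
TITLE_PATTERN = re.compile(r"^(?P<name>.+?)\s*@\s*(?P<bearing>\d{1,3})$")

def parse_device_title(title):
    """Split a device title into (name, bearing_degrees).
    Returns (title, None) if no position is encoded in the title."""
    m = TITLE_PATTERN.match(title)
    if not m:
        return title, None
    return m.group("name"), int(m.group("bearing"))
```

A scheme like this explains the limitation noted below: moving a device physically requires renaming it in the Fibaro web application.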

To make the prototype adaptable to other wearable devices in the future, a smartphone was used as a routing device to forward commands from the watch and to update the watch's interface depending on the status of the connected devices. This way, the wearable application focused on interaction and simple communication with the smartphone, while the smartphone was responsible for the communication with the Fibaro Home Center controller. Bluetooth was used for the communication between the smartwatch and the smartphone. The following devices were used for the project:
(i) Asus RT-N56U [32], wireless router for internet and LAN communication
(ii) Fibaro Home Center 2 [14], as Z-Wave controller
(iii) Sony SmartWatch 3 [31], running Android Wear
(iv) Samsung S6 Edge [33], running Android
(v) Sonos PLAY:1 [34], for playing music
(vi) Zipato LED bulb [35]
(vii) Aeotec LED bulb [36]
(viii) Popp Wall Plug Switch Indoor [37], with a table fan connected to this wall plug switch
(ix) Fibaro eyeball [38], multisensor used as thermometer
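The phone-side relay can be sketched as a thin HTTP client. The Fibaro Home Center 2 can be driven over an HTTP API; the `callAction` endpoint shape used below follows Fibaro's commonly documented API but should be treated as an assumption for this sketch, and the host address, credentials, and device IDs are placeholders.

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def call_action_url(host, device_id, action):
    """Build a Home Center callAction URL (e.g. turnOn/turnOff)."""
    query = urlencode({"deviceID": device_id, "name": action})
    return f"http://{host}/api/callAction?{query}"

def send_command(host, user, password, device_id, action):
    """Forward a watch command to the Home Center (phone-side relay).
    Uses HTTP basic authentication, as the controller expects."""
    req = Request(call_action_url(host, device_id, action))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urlopen(req, timeout=5) as resp:
        return resp.status
```

In this architecture, the watch only sends a short Bluetooth message such as ("turnOn", 13); the phone translates it into the HTTP call above.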

3.4. Limitations

The UbiCompass concept as expressed in the prototype has several limitations, including the following:
(i) Positioning: the position of each device is coded in its name when connecting to the Fibaro Home Center.
(ii) The user's position is not tracked, and at this time the prototype works only within a limited area; the user cannot walk around the whole room.
(iii) The inertial sensors that come with the smartwatch are not very accurate and can sometimes lag.

The way we implemented the UbiCompass concept in the prototype also imposes limitations:
(i) Scaling: the prototype works for about ten devices; adding more would clutter the watch face with icons.
(ii) Simple interaction: the prototype works for simple interaction. If the user wants to make more complex adjustments to a system, another user interface would be preferable.

4. The Fibaro Application

This section briefly introduces the Fibaro application [39] with which the UbiCompass prototype was compared. The Fibaro application is connected to the controlling unit, the Fibaro Home Center 2 [14], via Wi-Fi. The Fibaro Home Center 2 acts as a hub for the communication with all connected smart devices and provides them with an Internet connection. Handheld devices and web pages can then be used to communicate with and control the connected devices. From the main screen of the Fibaro application (Figure 4), a room can be selected; the connected devices in that room are then listed, and finally the UI of the selected device is shown.

Figure 4: The Fibaro application. (a) The main screen. (b) A specific room is selected and corresponding devices are listed. (c) A dimmable lamp icon with color adjustments is selected to control a lamp.

5. UbiCompass Evaluation

A comparative evaluation was conducted in a laboratory environment to compare the UbiCompass concept prototype with the Fibaro application [39]. Both quantitative and qualitative data were collected. The evaluation compared the UbiCompass watch face prototype and the mobile Fibaro application in terms of usability and perceived workload.

5.1. Setup

The evaluation was conducted in a usability laboratory with audio and video recording facilities. The sessions involved a participant and a test leader (Figure 5). All test sessions were recorded. Five devices were connected to the system as seen in Figure 5. A Sony SmartWatch 3 [31] and a Samsung S6 Edge [33] were the units used to control the devices.

Figure 5: The evaluation setup for the UbiCompass prototype. Test leader, fan, desk lamp, thermometer, sound system, dimmable spotlight, test participant, and smartwatch. For the Fibaro application condition, the smartwatch was replaced with a smartphone.

To control the Sonos system through Fibaro, a third party software called Sonos Remote [40] was used.

5.2. Participants

Personal social networking was used to recruit participants, preferably with nontechnical backgrounds to see if not-so-tech-savvy participants would be able to manage the given tasks. In total, 36 participants (18 female, 18 male) were recruited. Friends and family members were excluded. The age of the participants ranged from 18 to 51 years (M = 30.8, SD = 9.39). The group was composed of 18 participants with an engineering background and 18 participants with a nontechnical background. The participants for the most part were novice users of smart home concepts, although nine had tried a smart home device prior to the evaluation. Twenty-seven had never tried a smart home device, while 10 had no prior knowledge of the smart home concept. Roughly, one-third of the participants had tried a smartwatch prior to the evaluation.

5.3. Procedure

When the participants arrived in the laboratory, the test leader asked them to complete the consent form and fill out a short demographic/background survey. The test leader then introduced them to IoT, the smart home concept, and the smartwatch, followed by an overview of the study. In order to focus on the differences in how to control devices and not on how to set up the room, the UbiCompass watch face was preselected on the smartwatch and the Fibaro application was set to start with the corresponding room on the smartphone.

The experiment consisted of one scenario that was repeated twice (Figure 6), alternating between the UbiCompass prototype and the Fibaro application. The order was counterbalanced for number of participants and gender: half of the participants started with the UbiCompass prototype and the other half with the Fibaro application. After the tasks in each scenario were completed, the participant filled out the NASA Task Load Index (TLX) and the System Usability Scale (SUS) questionnaires.

The NASA TLX was used to understand and describe the users' perceived workload; it is commonly used to evaluate perceived workload for a specific task. It uses an ordinal scale on six subscales (mental demand, physical demand, temporal demand, performance, effort, and frustration). A second part of the NASA TLX creates an individual weighting of the subscales by letting the subjects compare them pairwise based on their perceived importance; for each pair, the subject chooses which measurement is more relevant to workload. The collected data is useful both when comparing different concepts and when analyzing a single concept for future improvements. The NASA TLX was utilized in this study to gain an understanding of the contributing factors that determined the task workload [41, 42].

The SUS was used to understand and describe the users' perceived usability. It is often used to get a rapid usability evaluation of a system's human interaction [43] and attempts to measure cognitive attributes such as learnability and perceived ease of use. Scores for individual items, such as "I thought the system was easy to use," can be studied and compared, but the main intent is the combined rating (0 to 100) [44].

The questionnaires were followed by a short semistructured interview. All the semistructured interviews were video recorded, and the recordings were reviewed to detect recurring themes and to find representative quotes.

Figure 6: Test session procedure.
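The overall weighted NASA TLX score used in the analysis can be computed as follows: each of the six raw subscale ratings (0-100) is multiplied by the number of times that subscale was chosen in the 15 pairwise comparisons, and the sum is divided by 15. The sketch below illustrates the standard computation; the sample ratings and weights in the usage note are invented for illustration.

```python
SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def weighted_tlx(ratings, weights):
    """Overall weighted NASA TLX score.

    ratings: subscale -> raw rating on a 0-100 scale.
    weights: subscale -> number of times it was chosen in the 15
             pairwise comparisons (0-5); weights must sum to 15.
    """
    assert sum(weights.values()) == 15, "expected 15 pairwise comparisons"
    return sum(weights[s] * ratings[s] for s in SUBSCALES) / 15.0
```

For example, if every subscale is rated 50, the weighted score is 50 regardless of how the 15 comparison wins are distributed.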

The scenario was designed to balance different content types and device combinations:
(1) You arrive at home and it is a bit dark, so you turn on the desk lamp (item, Figure 5).
(2) You decide to turn off the music (item, Figure 5).
(3) You press the home button on your phone to check some new messages and notice that it needs charging. You connect the charger.
(4) You check the temperature (item, Figure 5).
(5) It is quite warm, so you turn on the fan (item, Figure 5).
(6) Time to relax, so you dim the spotlight (item, Figure 5).
(7) To get into the right mood, you turn on the music again and increase the volume (item, Figure 5).

After having completed the scenario twice, the participants shared their initial thoughts about the UbiCompass prototype and the Fibaro application and reported what went well and what did not. The discussion then turned to more specific questions regarding differences in discoverability and controllability. Finally, the test subjects were asked which concept they liked the most and why. Each session lasted about 30 minutes.

6. Results

In the following section, the results from the NASA TLX, the SUS, and the semistructured interviews are presented.

All of the 36 participants managed to accomplish the tasks for both the UbiCompass prototype and the Fibaro application. Twenty of the 36 participants showed signs of enjoying the UbiCompass prototype experience, as determined by posttest interviews and spontaneous positive comments during the testing.

We used an alpha level of .05 for all statistical tests.

6.1. NASA TLX Data

The overall weighted NASA TLX scores are illustrated in Figure 7 for the UbiCompass prototype and in Figure 8 for the Fibaro application. The results obtained from the NASA TLX subscales are illustrated in Figure 9 for the UbiCompass prototype and in Figure 10 for the Fibaro application.

Figure 7: UbiCompass NASA TLX score.
Figure 8: Fibaro NASA TLX score.
Figure 9: UbiCompass NASA TLX subscores (outliers marked).
Figure 10: Fibaro NASA TLX subscores (outliers marked).

A paired t-test of the overall weighted NASA TLX scores indicated no statistically significant difference between the two.

Moreover, all subscales except the physical one have the same or very similar median values (Table 1). On the physical subscale, the Fibaro application had a significantly lower value (Wilcoxon signed-rank test).

Table 1: Median values for the UbiCompass prototype and the Fibaro application.

In addition, there were no statistically significant differences between tech and nontech participants for the UbiCompass prototype; for the Fibaro application, the difference between the groups was close to the margin of statistical significance.

6.2. SUS Data

The results obtained from the SUS questionnaire for the UbiCompass prototype range from a minimum score of 47.5 to a maximum score of 95 (Figure 11); for the Fibaro application, from a minimum score of 20 to a maximum score of 95 (Figure 12). A paired t-test was used to explore the difference and found that the UbiCompass prototype had a significantly higher SUS score than the Fibaro application.

Figure 11: UbiCompass SUS score.
Figure 12: Fibaro SUS score.
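For reference, the combined SUS rating (0 to 100) is derived from the ten questionnaire items with the standard SUS scoring rule: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Combined SUS rating (0-100) from ten item responses rated 1-5.
    Odd-numbered items contribute (rating - 1); even-numbered items
    contribute (5 - rating); the sum is scaled by 2.5."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5
```

A maximally positive response pattern (5 on odd items, 1 on even items) yields the ceiling score of 100; neutral responses (all 3s) yield 50.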

There were no statistically significant differences between tech and nontech participants for the UbiCompass prototype, but, for the Fibaro application, the difference between the groups approached statistical significance.

6.3. Semi-Structured Interviews

In general, participants tended to describe the UbiCompass prototype as innovative and intuitive. The Fibaro application was often described as inconsistent, but also being based on a comfortable, familiar form-factor. All interviews ended with a final question asking which system was preferred: 20 participants preferred the UbiCompass prototype, 11 preferred the Fibaro application, and 5 would like to use both, since they found them to be complementary. The data from the semistructured interviews were split between the UbiCompass prototype and the Fibaro application and were grouped into different themes representative for each system.

6.3.1. UbiCompass Prototype Comments

Recurring themes for the UbiCompass prototype were discoverability, being easy to use, tapping icons, and wrist-twist functionality.

(a) Discoverability. One of the goals of the UbiCompass concept is to offer an easy and comfortable way to discover devices, and most participants found that the UbiCompass provided a fast and easy way to see all the interactable devices in a given room. One participant stated, “The mapping of devices and their positions were very easy to understand. You get a quick overview of what is there and which one you can control.” Concerns about having several devices were also mentioned: “If too many devices are available in the room, the discoverability and the device selection might get ambiguous.” Overall, the physical ingredients of the interaction were found to be intuitive and inspiring.

(b) Easy to Use. A very common comment from the participants was about the UbiCompass being easy to use and consistent. One participant stated, “The concept is innovative, very consistent, clear and logical.” Another added, “The watch is always available and easy to use for simple interactions.”

(c) Tapping Icons. Several participants tried to take control of a device by tapping its icon instead of the control button. One stated, “Why do I have to move my arm when it would be easier to just tap on the icons?” Another suggested being able to tap icons to interact with devices that were behind you, “When interacting with devices behind me, an icon tap function would be practical.”

(d) Wrist-Twist Functionality. The comments about the wrist-twist functionality were very mixed. Several participants had trouble understanding how it worked. Some understood directly and liked it and some did not like it mostly due to system delays. Those who figured it out usually showed signs of excitement, “That’s so cool and innovative!” Some found it very hard to control. One stated, “The feedback was very clear on the watch face but the wrist-twist function was not intuitive. I prefer a simple touchscreen interface.” Several participants requested more feedback from the wrist-twist function, especially due to system delay. To increase the volume or the light intensity, the participant had to twist the wrist towards his or her body; about half of the participants felt that the twist direction was right.

6.3.2. Fibaro Application Comments

Recurring themes for the Fibaro application were discoverability, familiar form-factor, and inconsistency.

(a) Discoverability. Several participants had trouble identifying the devices, in particular the light sources, since they had to rely on the device name, “I didn’t know which light source I wanted to control, so I guessed.” Some felt the application offered good discoverability, “The mobile application gives a good overview of the available devices.” Others commented on the advantages of a larger screen, “A larger screen offers more possibilities for complex interaction. Visually impaired users might also benefit from the larger screen.”

(b) Familiar Form-Factor. Several participants emphasized that the mobile form factor was comfortable to use. One participant commented, “I always have my phone nearby and I do not want to have another device that has to be recharged.” Another stated, “There is no need for a smartwatch interaction method if you already have a mobile.” Some preferred the watch form factor, though, since it offers access to at-a-glance information. One participant reported, “The at-a-glance interaction is not provided with the phone form factor since there are too many steps before reaching the desired function or application. Unlocking the phone, finding the application, opening it, navigating to the desired device, and finally controlling it.” Another participant commented, “The watch is superior due to its coolness and its simple interaction capabilities.”

(c) Inconsistency. There were several comments about the user interface being inconsistent. One participant stated, “Some icons are interactable, some not. Some devices need hierarchical navigation through layers to reach basic functions, some don’t.” Another commented, “Some basic functions are hidden; the scrollable room menu (to see all devices) or the layer-based navigation are only reachable by tapping the device name.”

6.3.3. General Comments

A few participants reported that they wanted both systems. One commented, “The interaction methods complement each other. The watch is good for simple and quick interactions while the mobile application is better suited for more complex functions.”

7. Discussion

In this section, we will discuss the “take-aways” from developing the UbiCompass prototype: the benefits, the limitations, and the comparative study.

7.1. Benefits

The UbiCompass aimed to make it easy and comfortable for a user to discover and do simple, quick interactions with numerous connected things. The main strength of the concept is that it truly exploits the characteristics of a wrist-worn wearable device [1], first, by making the icons that illustrate the connected devices part of the watch face. This results in information that is always available or in a sense always “on.” Second, the information is available at a glance, which means that a user can quickly get an idea of how many connected devices are available and their approximate whereabouts. Comments about this appeared in the semistructured interviews; most participants found that the UbiCompass provided a fast and easy way to see all interactable devices. Third, the results from the SUS scores and the semistructured interviews indicated that the UbiCompass prototype was easy to use and consistent. A final benefit is that the concept is built on off-the-shelf components and supports the numerous products that can be connected to the Fibaro Home Center 2 hub.

7.2. Limitations

In its current form, the UbiCompass has some flaws. First, it scales badly for environments with many connected devices. At a certain point the many icons on the watch face will make the user interface cluttered and hard to read. Second, the UbiCompass only shows the direction to connected devices in the horizontal plane; no distinction is made between devices that are placed above/below each other. This makes it difficult for the user to distinguish one from another if they are placed right on top of each other. Third, physical comfort can be compromised in some situations. A seated user who wants to turn on a lamp directly behind him would have to bend his arm to a rather unergonomic position to point the watch directly at the lamp. Comments about this appeared in the semistructured interviews; some participants argued that they would prefer to just tap on the icons instead of being required to point at the device. Fourth, the concept assumes functional indoor positioning, a well-known issue for IoT in general. Dedicated sensors or beacons, such as BeSpoon [45], can be used, but so can existing network infrastructure. One interesting project is Chronos, developed by MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) [46]. The project exploits phase differences at different channels (i.e., different frequencies) in the Wi-Fi nodes to approximate the distance. The Wi-Fi units require a slight firmware modification, but no additional hardware is needed.
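The phase-slope idea behind approaches like Chronos can be illustrated with a simplified, noise-free sketch: for a single direct path of length d, the channel phase at frequency f is 2πfd/c, so the slope of phase against frequency yields the distance. The real system additionally resolves phase wrapping, multipath, and hardware offsets; the frequencies and distance below are invented for illustration only.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phases(freqs_hz, phases_rad):
    """Estimate path length from the slope of (unwrapped) phase vs. frequency.

    For a direct path of length d, phase(f) = 2*pi*f*d/c, so the
    least-squares slope dphi/df gives d = c * slope / (2*pi).
    """
    n = len(freqs_hz)
    fm = sum(freqs_hz) / n
    pm = sum(phases_rad) / n
    slope = (sum((f - fm) * (p - pm) for f, p in zip(freqs_hz, phases_rad))
             / sum((f - fm) ** 2 for f in freqs_hz))
    return C * slope / (2 * math.pi)

# Synthetic check: noise-free phases generated for a 4.0 m path
# across eleven 2.4 GHz channel centres (hypothetical measurement set).
freqs = [2.412e9 + ch * 5e6 for ch in range(11)]
d_true = 4.0
phases = [2 * math.pi * f * d_true / C for f in freqs]
print(round(distance_from_phases(freqs, phases), 2))  # → 4.0
```

With phases measured modulo 2π, an unwrapping step would be needed before the fit; that is one of the harder parts the real system solves.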

Finally, one of the main goals of the UbiCompass prototype is to offer at-a-glance access to information and simple and quick interactions built on familiar touch interaction. We chose to challenge the design space by adding a wrist-twist feature that uses inertial sensors to increase or decrease the volume of the Sonos device and the intensity of a dimmable light source. This caused some notable issues, mainly due to the system delays but also to difficulties in twisting the wrist in mid-air as if turning a “real” control knob. Several participants did not understand how it worked and commented that they missed getting feedback. The fact that the volume increased or the light dimmed was not sufficient. However, those who figured it out found the feature to be “cool and innovative.” This emphasizes the importance of two basic rules of interaction design: having direct visible feedback on the watch face and signifier(s) of how the user can interact with the system [10]. We used a horizontal ticker to inform the participants about the wrist-twist feature but few of them paid any attention to it.
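A minimal sketch of how a wrist-twist could be mapped to a volume change, assuming a roll rate from the watch’s gyroscope. The gain and deadband values here are invented for illustration and do not come from the prototype; a deadband of this kind is one way to suppress the unintentional jitter that made the feature hard to control.

```python
def twist_to_volume(volume, roll_rate_dps, dt_s, gain=0.2, deadband_dps=15.0):
    """Map a wrist-roll rate (degrees/s from the gyroscope) to a volume change.

    Small rates inside the deadband are ignored as jitter; otherwise the
    rate is integrated over the sample interval and the result is
    clamped to the 0-100 volume range.
    """
    if abs(roll_rate_dps) < deadband_dps:
        return volume                        # ignore unintentional movement
    volume += gain * roll_rate_dps * dt_s    # integrate angular rate
    return max(0.0, min(100.0, volume))

# Ten 50 ms samples of a steady 90 deg/s twist towards the body:
vol = 50.0
for _ in range(10):
    vol = twist_to_volume(vol, roll_rate_dps=90.0, dt_s=0.05)
print(round(vol, 1))  # → 59.0
```

Echoing the feedback issue raised above, a real implementation would also drive an on-screen indicator from the same integrated value, so the user sees the effect of the twist even when the audible change lags behind.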

The UbiCompass prototype was designed with very basic interaction tasks in mind. For more advanced use cases, such as changing the settings of a connected device, switching to another form factor such as a smartphone would probably be preferable.

7.3. Comparative Study

Overall, the UbiCompass prototype and the Fibaro application received acceptable scores on both the SUS questionnaire and the NASA TLX. This indicates that the participants were able to complete the evaluation tasks with either concept reasonably well.

The UbiCompass had a significantly higher median score in the NASA TLX physical subscale. This was expected since the UbiCompass requires more physical activity. However, the overall weighted NASA TLX score was slightly lower for the UbiCompass prototype than for the Fibaro application, indicating that the latter was perceived as having a higher workload. One attribute that might have influenced the overall weighted NASA TLX score is the wearable characteristic of always being “on” and available and not having to pull out a smartphone and make a lot of clicks before starting the correct application. According to Sauro [47], more clicks usually mean more screens. More screens usually mean spending more time completing tasks. More time spent on tasks usually means higher task failure and a poorer user experience.
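The overall weighted NASA TLX score mentioned above combines the six subscale ratings with weights obtained from 15 pairwise comparisons between the subscales [41, 42]. A minimal sketch of the computation, using a hypothetical participant rather than the study’s data:

```python
SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def weighted_tlx(ratings, weights):
    """Overall weighted NASA TLX score.

    ratings: per-subscale ratings on the 0-100 scale.
    weights: tallies from the 15 pairwise comparisons (they sum to 15).
    """
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

# Hypothetical participant who weighted physical demand highly
# (all numbers invented for illustration):
ratings = {"mental": 30, "physical": 55, "temporal": 25,
           "performance": 20, "effort": 35, "frustration": 15}
weights = {"mental": 3, "physical": 5, "temporal": 2,
           "performance": 1, "effort": 3, "frustration": 1}
print(weighted_tlx(ratings, weights))  # → 37.0
```

A high physical rating can thus dominate the overall score only if the participant also weighted that subscale heavily, which is consistent with the UbiCompass scoring higher on the physical subscale yet lower overall.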

The UbiCompass had a significantly higher SUS score compared to the Fibaro application. Worth noting is that 25 out of 36 participants had a score that was equal to or greater than 68, which is considered to be above average. The SUS score measures cognitive attributes such as learnability and perceived ease of use. Possible attributes that may have lowered the cognitive workload for the UbiCompass prototype are having access to information at a glance, simple and quick interactions, and a consistent user interface. Thus, the UbiCompass had a significantly better SUS score and also a slightly better overall weighted NASA TLX score.

One of the goals was to explore whether a not-so-tech-savvy end user would find the UbiCompass harder to use than the Fibaro application. With that in mind, it was a positive result that there were no significant differences between nontech and tech participants in controlling the devices. We believe that the scenario tasks were too simple and too short to find any significant difference in NASA TLX.

Another aspect is the compass metaphor that the UbiCompass is based upon. UI metaphors use a source domain to help the user understand the target domain and, if well designed, can provide a good conceptual model [48]. However, UI metaphors also present some drawbacks. They may impede users who lack knowledge about the metaphor’s source domain, and they may trigger unrelated prior knowledge in the user’s mental model. In this comparative study, some participants seemed to have problems constructing a relevant mental model of the UI and initially did not map the lamp icons to the physical lamps. Instead, they clicked on the icons and did not understand why nothing happened. Nevertheless, once they grasped the UI, they had no problems solving the tasks. This suggests that the way the compass metaphor is used in the UbiCompass is learnable, but could be designed to be more intuitive.

8. Conclusions

The contribution of this paper is a novel IoT interaction concept called UbiCompass. The UbiCompass concept prototype is developed for the watch form factor and works together with off-the-shelf products. What differentiates the UbiCompass from other IoT solutions is the ability to access information at a glance. It requires almost no motor skills or effort to quickly get an overview of the devices that the user can interact with and the horizontal direction in which they are positioned in the room. The UbiCompass also offers simple, quick interactions such as turning devices on/off. Although the UbiCompass comes with certain limitations, as discussed above, the results show that it had a significantly higher SUS score compared with a traditional smartphone application. In summary, this indicates that the UbiCompass has potential as an IoT interaction concept.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research is funded by the VINNOVA funded Industrial Excellence Center EASE (Mobile Heights). Special thanks go to the team working with EASE Theme A.

Supplementary Materials

A functional, smartwatch face prototype of the UbiCompass was developed and integrated with an existing smart home system. It was then compared to a traditional smartphone mobile application in a controlled experiment. The results show statistically significant differences in favor of the proposed concept. This highlights the potential the UbiCompass has as an IoT interaction concept (Supplementary Materials).

References

  1. S. Mann, “Wearable computing as means for personal empowerment,” in Proceedings of the 3rd Int. Conf. on Wearable Computing (ICWC), pp. 51–59, 1998.
  2. Global Standards Initiative on Internet of Things (IoT-GSI), “Internet of Things Global Standards Initiative,” 2015.
  3. S. Poslad, Ubiquitous Computing: Smart Devices, Environments and Interactions, Wiley, 2009.
  4. B. Johanson, A. Fox, and T. Winograd, “The interactive workspaces project: experiences with ubiquitous computing rooms,” IEEE Pervasive Computing, vol. 1, no. 2, pp. 67–74, 2002.
  5. S. Yuanchun et al., “The smart classroom: merging technologies for seamless tele-education,” IEEE Pervasive Computing, vol. 2, no. 2, pp. 47–55, 2003.
  6. D. Molyneaux et al., “Interactive environment-aware handheld projectors for pervasive computing spaces,” in Pervasive Computing, pp. 197–215, Springer, 2012.
  7. Y. Rogers, The Changing Face of Human-Computer Interaction in the Age of Ubiquitous Computing, Springer, 2009.
  8. L. Atzori, A. Iera, and G. Morabito, “The internet of things: a survey,” Computer Networks, vol. 54, no. 15, pp. 2787–2805, 2010.
  9. D. Ledo, S. Greenberg, N. Marquardt, and S. Boring, “Proxemic-aware controls: designing remote controls for ubiquitous computing ecologies,” in Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 187–198, 2015.
  10. D. A. Norman, “Natural user interfaces are not natural,” Interactions, vol. 17, no. 3, pp. 6–10, 2010.
  11. Y. Rogers, “Moving on from Weiser’s vision of calm computing: engaging UbiComp experiences,” in Proceedings of UbiComp: Ubiquitous Computing - 8th International Conference, P. Dourish and A. Friday, Eds., pp. 404–421, 2006.
  12. Q. Sun, W. Yu, N. Kochurov, Q. Hao, and F. Hu, “A multi-agent-based intelligent sensor and actuator network design for smart house and home automation,” Journal of Sensor and Actuator Networks, vol. 2, no. 3, pp. 557–588, 2013.
  13. H. Wirtz, J. Rüth, M. Serror, J. Á. B. Link, and K. Wehrle, “Opportunistic interaction in the challenged internet of things,” in Proceedings of the 9th ACM MobiCom Workshop on Challenged Networks, pp. 7–12, 2014.
  14. Fibaro Group, “Fibaro Homecenter 2 Z-Wave controller,” 2015.
  15. SmartThings Inc., “Smart Home. Intelligent Living,” 2014.
  16. Apple Inc., “iOS 9 - HomeKit,” 2015.
  17. Google Inc., “Weave - Google Developers,” 2015.
  18. M. Ringwald, “UbiControl: providing new and easy ways to interact with various consumer devices,” in Adjunct Proceedings, p. 81, 2002.
  19. M. Budde et al., “Point & control--interaction in smart environments: you only click twice,” in Proceedings of the ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, pp. 303–306, 2013.
  20. D. Fleer and C. Leichsenring, “MISO: a context-sensitive multimodal interface for smart objects based on hand gestures and finger snaps,” in Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, pp. 93–94, 2012.
  21. A. A. De Freitas, M. Nebeling, X. Chen, J. Yang, A. S. K. K. Ranithangam, and A. K. Dey, “Snap-To-It: a user-inspired platform for opportunistic device interactions,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 5909–5920, 2016.
  22. A. Nassani, H. Bai, G. Lee, and M. Billinghurst, “Tag it!: AR annotation using wearable sensors,” in Proceedings of SIGGRAPH Asia Mobile Graphics and Interactive Applications, p. 12, Japan, 2015.
  23. S. Mayer and G. Soros, “User interface beaming--seamless interaction with smart things using personal wearable computers,” in Proceedings of the Wearable and Implantable Body Sensor Networks Workshops (BSN Workshops), 11th International Conference, pp. 46–49, 2014.
  24. S. Boring, D. Baur, A. Butz, S. Gustafson, and P. Baudisch, “Touch projector: mobile interaction through video,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2287–2296, 2010.
  25. T. Seifried et al., “CRISTAL: a collaborative home media and device controller based on a multi-touch display,” in Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, pp. 33–40, 2009.
  26. Lund University, “EASE - Embedded Applications Software Engineering,” http://ease.cs.lth.se, [Accessed: 16-Jan-2018].
  27. Lund University, “MAPCI - Mobile and Pervasive Computing Institute Lund University,” http://mapci.lu.se/, [Accessed: 16-Jan-2018].
  28. Axis Communications, “Axis Communications,” https://www.axis.com/, [Accessed: 16-Jan-2018].
  29. Sony, “Sony Mobile,” http://www.sonymobile.com, [Accessed: 16-Jan-2018].
  30. K. Lyons, “What can a dumb watch teach a smartwatch? Informing the design of smartwatches,” in Proceedings of the ACM International Symposium on Wearable Computers, pp. 3–10, 2015.
  31. Sony, “Sony SmartWatch 3 SWR50,” 2014.
  32. ASUS, “Dual-Band Wireless-N600 Gigabit Router,” 2015.
  33. Samsung Electronics Co. Ltd, “Samsung Galaxy S6 Edge,” 2015.
  34. Sonos, “PLAY:1 Wireless Speaker - Compact & Powerful,” 2013.
  35. Zipato, “Zipato RGBW bulb,” 2015.
  36. Aeotec, “Z-Wave LED Bulb with RGBW,” 2015.
  37. Fibaro Group, “Wall Plug Switch Indoor,” 2015.
  38. Fibaro Group, “FIBARO Motion Sensor,” 2015.
  39. Fibaro Group, “Google Play - Fibaro,” 2015.
  40. V. Jean-Christophe, “GDomotique Fibaro - SONOS Remote V1.0.0 beta pour Fibaro HC2,” 2016.
  41. NASA, “NASA TLX homepage,” 2016.
  42. S. Hart, “NASA-task load index (NASA-TLX); 20 years later,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2006.
  43. J. Brooke, “SUS - a quick and dirty usability scale,” Usability Evaluation in Industry, vol. 189, no. 194, pp. 4–7, 1996.
  44. T. Tullis and W. Albert, Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics (Interactive Technologies), Morgan Kaufmann, 2008.
  45. A. Jean-Marie, “BeSpoon,” 2015.
  46. M. Harris, “MIT turns Wi-Fi into indoor GPS,” 2016.
  47. J. Sauro, “Click versus clock: measuring website efficiency,” 2011.
  48. Y. Rogers and H. Sharp, Beyond Human-Computer Interaction, Wiley, 2002.