Abstract

The increasing number of NFC phones is attracting application developers to utilize NFC functionality. We can hence soon expect a large number of mobile applications that users command by touching NFC tags in their environment with their NFC phones. The communication technology and the data formats have been standardized by the NFC Forum, but there are no conventions for advertising to users the NFC tags and the functionality that touching the tags triggers. Only individual graphical symbols have been suggested, whereas guidelines for advertising a rich variety of functionality are called for. In this paper, we identify the main challenges and present our proposal, a set of design guidelines based on the more than twenty application prototypes we have built. We hope to initiate discussion and research resulting in uniform user interfaces for NFC-based services.

1. Introduction

Near field communication (NFC) technology [1] enables building tangible user interfaces [2] for mobile phones. The display and the buttons of a phone no longer play the main role; rather, the phone is used as a physical object to touch other physical objects. Generally, users can start services and give commands to the services by touching objects in their local environment with their phones. In the case of NFC, the phones are equipped with NFC readers, and NFC tags are placed in the environment. NFC is a short-range wireless technology, compatible with the technology used in some proximity RFID tags and contactless smartcards. As the reading distance is short (about 5 centimeters), users can be instructed to touch tags, and the data read from tags can be interpreted as commands. Separate NFC readers are available as well, and the technology supports communication between two NFC readers and readers that emulate tags, but in this paper we focus on NFC phones reading NFC tags placed in the environment.

NFC is being discussed as revolutionizing payment, ticketing, and advertising applications [3, 4], but we see much bigger potential in NFC-based user interfaces. NFC can bring a good user experience to any situation in which interacting with a service requires knowing and performing a sequence of actions (like menu selections) or entering text (like URLs). All such information can be stored in an NFC tag and entered by a single touch. In a nutshell, NFC enables building extremely easy-to-use user interfaces that connect the digital and physical worlds: users can trigger actions and fetch digital content matching the situation at hand with simple acts of touching NFC tags with their mobile phones. We have implemented over twenty prototypes, including a remote control for wall displays [5], an application for collecting content from museums [6], and an application supporting three-to-five-year-old children in their efforts to learn to read [7]. During this work, we have repeatedly received very positive feedback from the test users.

NFC technology is being standardized by the NFC Forum [1]; hence, interoperable NFC devices can be expected. All manufacturers’ phones can read standard data formats from NFC tags and, for example, fetch the webpage determined by the URL read from a tag. However, there are no guidelines for advertising the tags to the users. Only individual graphical symbols have been suggested, whereas guidelines for advertising a rich variety of functionality are called for. Such guidelines would facilitate fully exploiting the potential of NFC as a technology for building easy-to-use user interfaces, as users would experience uniform user interfaces.
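
To make this interoperability concrete, the following sketch (Java, using the standard Android NFC API and assuming API level 14 or later; the URL is a placeholder) builds the kind of NDEF message a service provider could write to a tag so that any compliant phone opens the advertised webpage with a single touch.

    import android.nfc.NdefMessage;
    import android.nfc.NdefRecord;

    public class TagContent {

        // Build an NDEF message containing a single URI record. Phones that
        // follow the NFC Forum NDEF specification recognize this record type
        // and, by default, open the URL in the browser.
        public static NdefMessage buildUriMessage() {
            // Placeholder URL; a real deployment would use the service's address.
            NdefRecord uri = NdefRecord.createUri("http://example.org/museum-guide");
            return new NdefMessage(new NdefRecord[] { uri });
        }
    }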

The importance of this work is emphasized by the large-scale deployment of NFC user interfaces expected in the near future due to the mushrooming of NFC phones. Several phone manufacturers already have NFC phones on the market, and more phone models are expected during 2011 and 2012. The Nokia C7 and the Samsung Nexus S are currently (June 2011) the most popular phone models supporting NFC technology. Forthcoming models from RIM, LG, Apple, Nokia, Samsung, and Sony Ericsson are expected to offer NFC functionality as well. The analyst firm iSuppli estimates that over 400 million NFC handsets will be shipped in 2014 [8]. This development will lead to an increasing interest in developing applications utilizing the NFC functionality.

In this paper, we suggest design guidelines for advertising NFC tags placed in the environment. This set of rules has the same goal as the GUI design guidelines published for modern mobile phone platforms like Android, iPhone, and Windows Mobile. On all these platforms, the size, appearance, and position of icons, menus, and other GUI widgets shown on the phone screen are specified in detail. In the case of NFC, we need a set of visual elements to advertise the points to touch in the environment and the functionality triggered when these points are touched. As these visual elements resemble computer icons (and other widely used graphical symbols and small pictures), we mainly use the term “icon” to refer to them. We expect our everyday environment to offer rich sets of services and hence to contain large numbers of NFC tags. Moreover, we consider flexibility an essential requirement for the guidelines, as NFC user interfaces can be expected to be constructed by a large variety of application developers, service providers, and users.

We base this paper on six years of research on NFC user interfaces. The new user interfaces presented in this paper have not yet been tested in real-life situations, as the guidelines are brand new. However, we considered it important to publish this work right away, as NFC applications are expected to become common soon. We consider now to be a good time to start a discussion and, hopefully, a common effort to develop the guidelines. We believe that by agreeing on a common basis for NFC user interfaces, we can achieve the best user experience and, through satisfied users, also advance business and the use of NFC technology.

The rest of the paper is structured as follows. In the second section, we present the work done so far in advertising NFC tags, that is, the short history of NFC user interfaces. In the third section, we suggest our guidelines, and in the fourth section, we present examples. We present discussion in the fifth section and conclusions in the sixth section.

2. Advertising NFC Tags

Visualizing NFC-based physical user interfaces is still in its infancy, and hence, only a few suggestions have been made for visualizing NFC tags. Some suggestions have been made for advertising RFID tags as well. RFID technology is similar to NFC, and both can be used to build touch-based user interfaces. For example, the system reported in [9] presents the home page of a person on a computer display when that person’s photograph is touched with an RFID reader. An icon marks passports that contain RFID tags, and icons have been suggested to help workers with hand-held readers to identify RFID tags [10]. Arnall [11] presents a few suggestions for visualizing RFID tags. Figure 1 presents a recent suggestion of an NFC icon for advertising NFC tags [12]. The icon has been designed to inform users about the vicinity of the invisible RFID radio field. In fact, the icon is a stylized representation of an RFID reader’s operation range: it describes the form of the physical volume inside which a tag needs to be placed to be read. Recently, Hang et al. [13] have performed usability tests to identify the best symbols for advertising actions and the best ways to embed the symbols in the environment.

Figure 2 presents a collection of NFC icons in use. These examples do not inform users about the related services; rather, the focus of their design has been on marketing and branding. The examples can be seen as technical service promises of each brand. The icon on the left of Figure 2 is the symbol proposed by the NFC Forum (the N-Mark). The icon in the middle advertises NFC-based services in the city of Nice, France [14]. The icon on the right advertises the Felica technology [15] widely used in Japan.

Figure 3 presents the NFC icons we have designed. Together with the RFID tags placed behind them, the icons form a two-sided interface between the physical and digital worlds. We are now in our fifth iteration of the icon design. In the first iteration [16], the icons were divided into two sections. The general section, a rectangular shape representing an RFID antenna (RFID was used instead of NFC), advertises a point to touch with an RFID reader. The special section advertises the action. In the second iteration [17], the icons were also divided into two sections, but now the general symbol was more circular, emphasizing the wireless communication between a reader and a tag.

In the third iteration, we moved to a completely circular external border. The blue and black external border communicates a point to touch. The pictogram (and sometimes text) in the middle represents the action performed when the tag is touched. In the fourth iteration [18], we integrated the icons into cartoons. The cartoons contain several characters performing different actions. Each action is surrounded by a blue circle indicating an interactive region of the cartoon. Actions in the cartoons are metaphors for the commands stored in the NFC tags placed behind the icons. Devices are indicated as well; in the figure, most actions require just a phone, and one action requires a phone and a wall display.

3. Designing NFC User Interfaces

3.1. Basics of NFC-Based Interaction

An NFC user interface consists of NFC readers and NFC tags placed in the environment. Interaction occurs when an NFC reader and an NFC tag are brought close to each other and the reader reads the data stored in the tag. We focus in this paper on reading NFC tags placed in the environment with mobile phones equipped with NFC readers. As the reading distance is short, users can be instructed to touch tags with their mobile phones. The action performed as a response to the act of touching is determined by the data stored in the tag; the data can directly represent a command to perform the action, or the command can be generated based on the data. The command is delivered to the corresponding service, which performs the action and produces a response. The response is then observed by the user.
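
As an illustration of this chain from touch to command, the following sketch (Java, Android NFC API) shows how an application might receive the NDEF message read from a touched tag and turn its first record’s payload into a command for a service. The command vocabulary and the dispatchCommand helper are hypothetical, and the activity is assumed to be registered in the manifest for the tag’s content type.

    import android.app.Activity;
    import android.content.Intent;
    import android.nfc.NdefMessage;
    import android.nfc.NfcAdapter;
    import android.os.Bundle;
    import android.os.Parcelable;

    public class TouchActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            handleTagIntent(getIntent());
        }

        @Override
        protected void onNewIntent(Intent intent) {
            super.onNewIntent(intent);
            handleTagIntent(intent);
        }

        // Extract the NDEF message read from the touched tag and interpret
        // its payload as a command. This assumes the tag stores the command
        // as a plain UTF-8 payload (e.g., in a MIME record).
        private void handleTagIntent(Intent intent) {
            if (!NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
                return;
            }
            Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
            if (raw == null || raw.length == 0) {
                return;
            }
            NdefMessage message = (NdefMessage) raw[0];
            String payload = new String(message.getRecords()[0].getPayload());
            dispatchCommand(payload); // hypothetical helper: maps the payload to a service command
        }

        private void dispatchCommand(String command) {
            // Application-specific: e.g., send "PLAY" to a media service.
        }
    }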

Users have to recognize tags, as the default mode of service discovery is visual browsing [19]: users scan their local environment, recognize the tags present, and select among them the tag to touch. The locations have to be recognized accurately, as a tag has to be touched with a phone in order to read its data. In fact, a phone can read a tag only if the active area of the phone (containing the embedded NFC reader’s antenna) is placed close to the tag. The active area depends on the phone design but is usually not very large. Moreover, users have to understand the action that touching a tag triggers and the service performing the action. This information can be provided to the users by means of icons placed in front of the tags.

Basically, a user needs to have enough information to decide whether or not to touch a tag. However, the icons are not the only source of information; they form the user interface together with the user’s mobile phone and the other user interface components in the local environment. As the user interface is scattered into the environment, the positions of the icons (i.e., tags) and other context provide additional information to the user. The interpretation of the available information also depends on the user’s earlier experience and on information communicated by other means (e.g., web pages). All this information together facilitates recognizing available tags correctly, but a sufficient amount of information at each tag location is still needed to provide a good user experience.

After a user has touched a tag, the interaction between the user and a service utilizes the available user interface components as specified by the developer of the service. The mobile phone’s user interface can be used to give feedback on a successful operation: for example, vibration can announce that data has been read successfully from a tag, or the tags accepted by the service being used can be listed on the phone’s display. A wall display and loudspeakers can present content, and additional commands can be given by touching other NFC tags, for example.
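
A minimal sketch of such vibration feedback on Android is shown below; the 200 ms pulse duration is an arbitrary choice, and the android.permission.VIBRATE permission is assumed to be declared in the manifest.

    import android.content.Context;
    import android.os.Vibrator;

    public class TouchFeedback {

        // Give a short vibration once tag data has been read successfully.
        // Requires android.permission.VIBRATE in the application manifest.
        public static void confirmTagRead(Context context) {
            Vibrator vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
            if (vibrator != null) {
                vibrator.vibrate(200); // 200 ms pulse; duration is an arbitrary choice
            }
        }
    }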

3.2. Service Advertisements

Although the user interface has many components and potentially many modalities as well, we focus in this paper on graphical advertisements communicating information about the NFC-based interaction possibilities. We call these advertisements service advertisements. They are placed on top of NFC tags, and they consist of visual elements (i.e., icons). For the best visibility, parts of the advertisements can be placed separately, for example, at a higher position.

A complete service advertisement specifies the exact position to touch, an action, a service, and all the other details a user needs to know when deciding whether or not to touch a tag. However, as discussed above, a service advertisement does not have to specify all details of a service and an action. For example, when the service is known by the user, specifying the action is enough. When the service is not specified in an advertisement, such a general advertisement can be used to control several services. General advertisements require common data formats as well, but these are outside the scope of this paper.
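
As one hypothetical illustration of what such a common data format could look like, the sketch below (Java, Android NFC API, assuming a platform level that provides NdefRecord.createMime) encodes an action name in an NDEF record with a custom MIME type. Both the MIME type and the action vocabulary are invented for this example and are not standardized.

    import android.nfc.NdefMessage;
    import android.nfc.NdefRecord;
    import java.nio.charset.Charset;

    public class GeneralAdvertisement {

        // Hypothetical MIME type for general, service-independent commands.
        private static final String COMMAND_MIME = "application/vnd.example.nfc-action";

        // Encode an action name (e.g., "PLAY" or "STOP") so that any service
        // registered for this MIME type can interpret the same tag.
        public static NdefMessage encodeAction(String action) {
            NdefRecord record = NdefRecord.createMime(
                    COMMAND_MIME, action.getBytes(Charset.forName("UTF-8")));
            return new NdefMessage(new NdefRecord[] { record });
        }
    }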

Figure 4 presents an example of a service advertisement. The attention element advertises to the user: “Here is a service available!” It draws a user’s attention even from a distance. This element supports visual browsing; it is the same for each advertisement and hence enables users to quickly estimate the number and the locations of NFC tags by scanning the local environment visually. Separate attention elements can be placed higher to draw users’ attention from a distance in public places like bus stops. This can be necessary because service advertisements placed at a height convenient for users to touch are easily blocked by people using the services, passers-by, and vehicles.

The technology element indicates the technology utilized in the interaction. Our aim is also to support other interaction techniques, like 2D barcodes, WiFi, Bluetooth, infrared, gestures, and speech, but in this paper we focus on NFC. The N-Mark (Figure 2, left) is an obvious choice for the technology element, though in Japan the Felica symbol (Figure 2, right) might be more familiar.

The interaction element advertises the exact point to touch. This symbol indicates a possibility for interaction, namely that this point can be touched with an NFC phone to interact with a service. The shape of the interaction element can be seen as the counterpart of a button in traditional GUIs. Moreover, the shape hints that an important part of the service advertisement is in the middle of the element, so we suggest placing the most valuable information, the action element (see below), there.

A service advertisement can contain multiple interaction elements, as in Figure 5. This service advertisement might be placed next to a tourist attraction. It announces a multimedia player service for watching videos related to that attraction. When the play icon is touched, a video is shown on a nearby display. When the stop icon is touched, the video is stopped. The two interaction elements need to be far enough from each other to guarantee that the correct tag is read. On the other hand, each interaction element needs to be connected to the rest of the service advertisement. The black line segment in Figure 5 is used for this purpose.

The action element indicates the action that the system performs when a user touches an interaction element. The related NFC tag stores the command(s) to execute that action, or data that the system maps to the command(s). We suggest using well-known icons, for example, the “Play,” “Stop,” “Pause,” “Forward,” and “Rewind” icons that have been in use since tape recorders, and the icons familiar from GUIs, like “Save,” “Open,” “Print,” and “Send.” A tag for starting (and stopping) a service can be advertised with an “On/Off” icon. Another possibility is to use an icon representing the service itself to advertise such a tag. For example, the Google Maps icon might be used as an action element when touching the corresponding tag opens Google Maps on the phone.
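
The following sketch (Java, Android NFC API) illustrates how such a command message, built for example as in the earlier sketches, could be written into an NDEF-formatted tag; error handling is kept minimal, and this is only one possible way to provision the tags behind the icons.

    import android.nfc.FormatException;
    import android.nfc.NdefMessage;
    import android.nfc.Tag;
    import android.nfc.tech.Ndef;
    import java.io.IOException;

    public class TagWriter {

        // Write a command message into an NDEF-formatted tag that has just been
        // touched; the Tag object is obtained from NfcAdapter.EXTRA_TAG.
        public static void writeCommand(Tag tag, NdefMessage message)
                throws IOException, FormatException {
            Ndef ndef = Ndef.get(tag);
            if (ndef == null) {
                throw new IOException("Tag does not support NDEF");
            }
            ndef.connect();
            try {
                ndef.writeNdefMessage(message);
            } finally {
                ndef.close();
            }
        }
    }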

The action element is the most significant part of a service advertisement, as a user should associate this icon with the action that touching the tag triggers. Together with the attention, technology, and interaction elements, it gives the core information to nearby users. When these elements together with other available information do not determine the service and the action accurately enough, the context and instruction elements can be used as well (see below). The service in question can be advertised in several ways, and hence, we do not include a service icon as part of the core information. On the other hand, part of the core information can be left out when the information is communicated by other means (see the examples below).

The context element defines context for the advertisement. For example, when the service to be commanded is not obvious, it can be advertised with a context element. The symbols created by service developers are the obvious choice for advertising services, including those of Facebook, Twitter, Google Maps, OVI services, World of Warcraft, and iTunes. Organizations like cities might also use their icons to advertise information services. A context element advertising a service can be left out when, for example, the placement of the service advertisement determines the service.

Another important function of the context element is to advertise the device(s) belonging to the user interface and presenting the responses of the service. For example, digital content can be presented on a wall display or on the phone’s display, or it can be printed. When we have tested NFC prototypes presenting content on wall displays, we have found that devices other than the users’ phones need to be advertised; otherwise, the users might get confused about which device to focus their attention on.

We suggest that context elements are placed inside the attention element. The tourist information symbol and text in Figure 5 form a context element. Moreover, the chargeable element in Figure 4 is a context element, illustrating how several context elements can be included in an advertisement. This element indicates whether the service is free or chargeable and which payment methods the service supports. A chargeable service can be advertised using the symbol of the local currency, a credit card, or some other well-known symbol of the payment method used. The actual payment process is payment and service specific; information about it could be shown on the phone display, and the required interaction could be performed by the user through the phone UI.

If several sets of icons are developed for advertising actions, the icon set might also be identified with a context element, for example, a company logo. Other types of context elements can be specified as required by services. On the other hand, it should be noted that the situation at hand is the main source of context. Context elements can hence be seen as an additional tool for defining context that is not otherwise obvious. For example, the topic of the videos a user can watch by touching the service advertisement presented in Figure 5 is determined by the tourist attraction.

The instruction element is used to explain how the interaction is to be performed. This element is optional and can be presented alone or with the other elements. We expect instructions to be common and verbose when NFC phones and new applications controlled by touching NFC tags are introduced, but to be compressed and even left out completely from many advertisements once the technology becomes more common. Moreover, when a service and its user interface are advertised by other means, for example, on the Internet, the need for instructions in the local environment decreases. Instructions can be given as text, cartoons, and so forth. Figure 4 presents a simple instruction element and Figure 5 an example of a cartoon instructing users.

Separate instruction elements (together with attention elements) can give general instructions about services in a certain area. The instructions can describe the area, the interaction technologies used, and the available services. If a specific client application is needed for using the services, a service advertisement for downloading the client can be attached to the instructions. Alternatively, a user can be instructed to start the application (e.g., iTunes) to which an NFC user interface is offered. For example, a hotel room’s instructions might state: “If you want to stream music from your iPhone’s iTunes, please start the application and touch the following icon next to the loudspeakers.” Other information can be included as well; rich general instructions can in fact allow simplifying the advertisements at tag locations.
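
On Android, one possible way to implement such a client-download tag is sketched below: an Android Application Record (AAR) is appended to the service data, so that touching the tag opens the named client application if it is installed and otherwise directs the user to download it. The package name is a placeholder, and the approach assumes a platform level that supports AARs.

    import android.nfc.NdefMessage;
    import android.nfc.NdefRecord;

    public class ClientDownloadTag {

        // The first record carries the service-specific data; the trailing
        // Android Application Record (AAR) makes the platform open the named
        // client application, or offer to install it if it is missing.
        // The package name is a placeholder.
        public static NdefMessage buildMessage(NdefRecord serviceRecord) {
            NdefRecord aar = NdefRecord.createApplicationRecord("com.example.nfcclient");
            return new NdefMessage(new NdefRecord[] { serviceRecord, aar });
        }
    }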

4. Examples

The following examples illustrate service advertisements. Existing applications may require modifications to support the functionality presented in these examples, but we do not discuss those modifications in this paper.

As discussed above, a service advertisement does not have to be complete. Elements that are communicated to users by other means can be left out. Figure 6 presents service advertisements for a video player; the advertisement on the left is complete. The second advertisement does not contain the context elements determining the service, the third one does not determine the action, and the fourth one determines neither of these. The context defines what information needs to be included in the advertisement.

Figure 7 presents a service advertisement for a photo album service presenting photographs on a wall display. We have implemented several versions of this service; one version is presented in [5]. Touching the “On/Off” action element starts the service and configures the phone as a remote control; the commands for controlling the service are then given with the phone UI. The figure illustrates the placement of the elements relative to each other: the interaction element is at the edge of the attention element, with the action element inside it, and the context elements are inside the attention element.

Figure 8 presents a configuration service. Touching the interaction element connects the related photo printer to the phone so that the printer appears in the application menus and a user can select it to print photographs. In this case, the action element depicts a connection between a phone and a printer, and the price icon on the right indicates that this service is not free.

Figure 9 presents an advertisement for a multiplayer game, played by several players on a single wall display. The communities of currently popular online games like World of Warcraft might already be large enough for this kind of service. As this advertisement would be placed in a public place, for example, at an airport, and the users are not expected to have much a priori information about NFC user interfaces, all instructions are given in the advertisement. On the other hand, the attention element is left out, as the game logo is expected to attract potential users. The first interaction element specifies an action for downloading a mobile client and the second one an action for joining the game.

These examples illustrate that service advertisements do not need to be complete when part of the information can be communicated by other means; moreover, complete advertisements might simply be too big. The environment might not have large enough surfaces, or complete advertisements might not be considered aesthetic enough. Furthermore, service providers might want compact advertisements that advertise their brand as well. Finally, when users become familiar with NFC user interfaces or part of the information is available otherwise, less information suffices.

Advertisements placed in homes might be quite compact. As the family members themselves select the services and actions, they know the context of the actions. For example, just an attention element and an action element inside it might be sufficient. An application icon might be used as a context element when several applications are controlled. Figure 10 shows an example of a living room table containing service advertisements for controlling a media player. This UI might be used to start and stop playing the playlist currently selected in the living room’s media center. The third action brings the media player’s user interface to the TV screen. One obvious media player would be iTunes; Spotify is another. If we compare this user interface to the iPhone’s remote control application for iTunes, the difference is that the tabletop user interface does not require starting any application on the iPhone or focusing attention on the phone’s display; touching the corresponding action element suffices.

As another example, a museum might offer a service for collecting information about the presented objects. In this case, instructions can be presented at the entrance, as shown in Figure 11: the interaction element at the top left and textual instructions below it. The instructions specify that the service is free and show a map of the area containing NFC tags. They also explain how to download the client application to the phone: the interaction element at the top left needs to be touched. Advertisements are placed next to the objects; the advertisement next to a stuffed lynx is shown in Figure 12. In the prototype we implemented for a local zoological museum, touching this kind of advertisement brought a list of files to the phone’s display, and the user could then select files to download to the phone [6].

Figure 13 presents an example from a tourist office offering maps and contact information to tourists. In this example, the action elements are service icons, and the services can be started by touching the icons. The content presented by these services is determined by the context: local maps and contacts considered useful for tourists. The LightHouse company symbol in the context element indicates that these specific services are distributed by this company; the company could be Nokia, for example. Furthermore, the application will be opened on the mobile phone.

In a similar fashion, prefilled StoryTeller messages might be offered (Twitter being an obvious existing service). Figure 14 presents a service advertisement for creating a StoryTeller message. Touching this advertisement opens a StoryTeller client on the phone’s display with a prefilled message describing the place (and possibly also the event). A user can complete the message with her/his comments before sending it. The obvious context element is the StoryTeller icon. The action symbol specifies writing a message. Service advertisements can also be offered for reading StoryTeller feeds, and the topic can be specified as context. Sending a message to Facebook could be achieved in a similar fashion by a single touch; when the context related to the place is sufficient, the user would not need to write a single character her/himself. Moreover, users of Google Places [20] could rate services by touching the rating icons of the service advertisements.

Service advertisements can also be compact when they are general. When an action can be performed by several different services, the service is not identified. We have implemented an NFC application that supports children in their efforts to learn to read by helping them to associate pronunciation with written text [7]. Teachers explain the service to the children, so the service advertisements can be minimized. Children’s name tags in a kindergarten (placed on chairs, coat racks, etc.) are equipped with NFC tags storing the same name as the name tag shows. The service advertisements can consist of only an attention element indicating the exact place to touch (the “Star” icon in Figure 15). In this case, the same tag can be used by several services: one service says the name aloud when the element is touched, and another one first says a name aloud and asks the child to touch the corresponding name tag.
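
A minimal sketch of the “say the name aloud” behaviour could look like the following (Java, Android text-to-speech API); it assumes the name has already been read from the tag as a plain string, for example, with the tag-handling code sketched earlier.

    import android.content.Context;
    import android.speech.tts.TextToSpeech;

    public class NameSpeaker implements TextToSpeech.OnInitListener {

        private final TextToSpeech tts;
        private boolean ready = false;

        public NameSpeaker(Context context) {
            tts = new TextToSpeech(context, this);
        }

        @Override
        public void onInit(int status) {
            // The speech engine is initialized asynchronously.
            ready = (status == TextToSpeech.SUCCESS);
        }

        // Called with the name read from the touched name tag.
        public void sayName(String name) {
            if (ready) {
                tts.speak(name, TextToSpeech.QUEUE_FLUSH, null);
            }
        }
    }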

5. Discussion

We presented design guidelines for NFC-based user interfaces. We suggest advertising the presence of a service with the attention element and the exact place to touch with the interaction element. The technology element enables advertising interaction based on other technologies as well, although this paper focuses on NFC. Moreover, the action element determines the actual functionality a user can trigger by touching the advertisement. We do not specify special elements for other information but suggest general context and instruction elements for any other information included in advertisements. We also suggested relative positions for these elements and discussed the visual appearance of each element. Finally, we presented several examples of service advertisements.

These guidelines are not final. In our earlier work, we have received positive feedback on individual visual elements. Now, we need to perform usability tests and field trials with different service advertisements to get feedback from users, to improve the guidelines, and to define them in more detail. We need to evaluate how well users understand the attention, interaction, and technology elements; the relative positions and sizes of the elements also need to be studied. We need to study the best way to advertise actions and the number of context elements needed. We do not expect complete advertisements to always be used; in fact, they might be rare once NFC technology is widely known. However, at this stage, this approach facilitates detailed consideration of different functionalities and of different ways of compressing service advertisements, including general advertisements. The appearance of the icons is a fundamental research topic. Uniform attention and interaction elements can facilitate interpreting even proprietary action and context elements. These two elements can help to achieve the goal that we see as crucial for the success of the design guidelines: they have to be flexible.

NFC-based user interfaces share challenges with traditional GUIs but introduce new ones as well. The new challenges follow from scattering user interface elements into our everyday environment. Components of user interfaces are more difficult to recognize, and not all recognized icons belong to user interfaces. Our everyday environment also provides a richer and more dynamic context for the user interfaces than traditional GUIs do. Balancing the amount of information provided by the context and the amount of information encoded in a service advertisement is a challenging task, especially as part of the contextual information is implicit and depends on the experience of the user. The service advertisements need to provide enough information for new users but must not provide so much information that they distract experienced users. We expect the aesthetics of the advertisements to play a crucial role in tackling this challenge.

Typical scenarios need to be considered when these user interfaces are designed: the types of users, how they move to and in the space, which services they use and at which locations, what the typical service and command sequences are, what the users know about the services before entering the space, what software the users can be expected to have in their phones, what devices are used, how the environment can change, and so forth. Based on such scenarios, the amount of information to be encoded in service advertisements can be decided: what kind of instructions to give at the entrance, how to place the service advertisements, and which elements to include in each advertisement. This is a classical situation involving the designer model, the system model, and the user model [21]: the designer has to design for the user so that the user model and the designer model are equivalent, and the developer has to transform the designer model correctly into a system offering the services to the user. Imperfections in transforming the model and in understanding the user cause incompatibilities between the models and decrease the user experience.

6. Conclusions

With this paper, we would like to start a discussion on advertising services that are embedded in our everyday environment. These services are activated and controlled by touching NFC tags in the environment with mobile phones. This technology enables new ways of interacting with services and the environment; it creates totally new practices for using services in everyday settings. A lot of effort has been put into standardizing NFC technology and into developing NFC phones. Surprisingly, much less effort has been put into studying how services using this technology should be advertised to users, although common guidelines would be valuable, especially as NFC technology is becoming common in mobile phones.

We present in this paper initial guidelines based on our experience in building more than twenty different prototypes and testing them with real users. We define service advertisements consisting of attention, interaction, action, technology, context, and instruction elements. We consider flexibility essential, as the amount of information to communicate to users can be expected to vary. Moreover, companies can be expected to emphasize their brands, and hence, strict guidelines might not be followed. We provide a set of examples that illustrate the design guidelines.

The potential of our work stems from uniform user interfaces. If different services are advertised with similar advertisements, they are easier to recognize and learn. We expect this to facilitate achieving a good user experience and, through satisfied users, to advance business and the use of NFC technology in general. By publishing these guidelines, we hope to advance the development of NFC-based user interfaces and invite others to join this work.

Acknowledgment

The NFC icon presented in Figure 1 was created by Timo Arnall and Jack Schulze in the Touch project [12].