Advances in Human-Computer Interaction
Volume 2012, Article ID 787469, 12 pages
Research Article

Haptic Addition to a Visual Menu Selection Interface Controlled by an In-Vehicle Rotary Device

Division of Human Work Science, Department of Business Administration, Technology and Social Sciences, Luleå University of Technology, 97187 Luleå, Sweden

Received 4 July 2011; Accepted 21 October 2011

Academic Editor: Ian Oakley

Copyright © 2012 Camilla Grane and Peter Bengtsson. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Today, several vehicles are equipped with a visual display combined with a haptic rotary device for handling in-vehicle information system tasks while driving. This experimental study investigates whether a haptic addition to a visual interface interferes with or supports secondary task performance and whether haptic information could be used without taking eyes off road. Four interfaces were compared during simulated driving: visual only, partly corresponding visual-haptic, fully corresponding visual-haptic, and haptic only. Secondary task performance and subjective mental workload were measured. Additionally, the participants were interviewed. It was found that some haptic support improved performance. However, when more haptic information was used, the results diverged in terms of task completion time and interface comprehension. Some participants did not sense all haptics provided, some did not comprehend the correspondence between the haptic and visual interfaces, and some did. Interestingly, the participants managed to complete the tasks when using haptic-only information.

1. Introduction

As complexity in vehicles increases, new techniques are being developed to reduce the demands on a driver’s attention [1–5]. Because driving is mainly a visual task [6], many new systems have been developed to reduce visual load by providing supporting auditory or haptic information. For example, a haptic rotary device can provide haptic information intended to support interaction with a visual user interface. In this paper, we focus on this type of haptic information.

Today, several cars are outfitted with haptic rotary devices to help the driver handle secondary tasks [7]. This type of haptic information includes kinaesthetic and tactile sensations [8] provided through active touch [9]. An exploratory procedure of repeated hand movement [10], in this case turning the rotary device back and forth, is required to perceive the haptic information. Haptic information includes the placement of a ridge between menu items [7] and special haptic effects for scrolling through a list or searching for radio stations. These kinds of haptic effects could help a driver if designed to extend or correlate with the visual information, that is, if the haptic interface provides information similar to that of the visual interface. This redundant information may help drivers perform actions without looking at the visual display. If the driver knows that a desired function is three steps to the right in the menu, the driver can select the correct function by simply counting the haptic ridges, a strategy that allows the driver to keep focus on the road. In principle, this ability to multitask while maintaining one’s main visual attention on the task of driving might be a positive outcome of the new multimodal techniques developed for in-vehicle use. However, the effects of complementary haptic information are not fully understood. For example, it is unclear whether the mental resources required to operate haptic devices make such devices unsafe. Many studies have shown that high mental workload negatively impacts driving [11–17]. Hence, the challenge with the new visual-haptic techniques is to find a way of communicating information that supports rather than burdens or confuses the driver.

These new multimodal techniques use more than one sense, a strategy addressed in multiple resources and time-sharing theories. According to Wickens [18], multiple resource theory concludes that sometimes it is better to divide information across modalities instead of presenting all information through the same modality. Although this assumption is somewhat vague, Wickens [18] argues that the effectiveness of multiple modalities could be due to the fact that different senses use different resources. Furthermore, the multiple resource theory [18] states that some information is better suited for one modality even though that modality is time-shared with another task using the same modality. Moreover, the model refers to the visual and auditory modalities, but it is not clear if the same holds for the visual and haptic modality. Therefore, it is difficult to predict the relationship between vision and haptics, especially in highly demanding tasks such as driving a vehicle. Nevertheless, some studies have shown that using a combination of visual and haptic information can be beneficial [19, 20], a conclusion that suggests similar multimodal effects concerning secondary tasks might be expected while driving.

Few studies have examined haptic interfaces for in-vehicle use, and typically these studies deal with force feedback or vibrotactile information. Force feedback provided through a haptic gas pedal was found promising when a car was closely following other cars [21]. Moreover, vibrotactile information has proven effective for directing a driver’s attention and for presenting direction information [22, 23]. According to Van Erp and Van Veen [23], visual direction information induced a higher workload than vibrotactile information. Furthermore, the fastest reaction time was found with multimodal visual-vibrotactile information. Van Erp and Van Veen’s study [23] implies that drivers may benefit from haptic information. However, vibrations are primarily suited for on/off information; more complex content is difficult to present through vibrations. New haptic devices, providing haptic cues in different ways, are constantly being developed to ease handling of in-vehicle secondary tasks [24–27]. These haptic devices usually do not evoke actions; rather, they support driver-initiated activities.

Rydström et al. [28] studied the use of a haptic rotary device providing specific haptic cues as a complement to an in-vehicle visual interface. For example, a haptic cue marked where a radio station could be found with an attraction force. The haptic cues were compared to a reference interface with more common haptic effects, such as ridges placed between different alternatives. One of five haptic cues improved performance in terms of task completion time and reduced the number of glances off road. It was not clear whether the lack of improvement for the other haptic cues was due to design issues, unfamiliarity, or something else such as mental overload. However, since the reference interface also included haptic information, the results do not exclude haptics as a positive complement for in-vehicle design. Lederman and Abbott [29] presented an interesting theory about ecological validity that might explain the somewhat negative results for haptic cues in the study by Rydström et al. [28]. Lederman and Abbott [29] concluded that, since early computers mainly presented information visually, users are more sensitive and open to visual stimuli in computer interfaces even when haptic cues are provided; that is, because users expect visual information, visual information overpowers information provided by the other senses. On the other hand, when a haptic rotary device is used, the interface has similarities with a traditional mechanical knob that commonly provides inflexible haptic steps. Consequently, based on Lederman and Abbott [29], haptic effects that resemble mechanical knobs could be expected to have higher ecological value than new and unexpected haptic cues, such as those used by Rydström et al. [28]. According to Hayward et al. [8], haptic effects do not need to imitate reality; they need only to be suggestive. Hence, haptic cues may be more accepted and easily sensed if simultaneously presented with visual cues.

A haptic rotary device was also used in a study by Grane and Bengtsson [30]. Instead of complementing haptic cues, as used by Rydström et al. [28], the study by Grane and Bengtsson [30] comprised a fully corresponding haptic and visual interface that provided the same information through both channels. This was done by using different textures, instead of functions, as menu items. Because textures can effectively be perceived both haptically and visually [31–33], textures were considered suitable for modality comparisons. Additionally, the textures made it possible to investigate whether the participants could learn to choose between menu items in an interface with haptic-only information. It was thought that experienced users would be able to find and select frequently used functions without taking their eyes off the road if the interface provided effective haptics. Grane and Bengtsson [30] found it possible to use a haptic-only interface even though the visual-haptic interfaces in the study resulted in better performance. Furthermore, a fully corresponding visual-haptic interface induced significantly less mental workload and fewer turn errors (that is, occasions when a target was passed without being selected) than a more common interface with visual information supported by haptic ridges placed between menu items. Based on these results, a fully corresponding interface was predicted to have benefits in high-workload environments; however, this needs to be studied further by including a driving task. A haptic addition to a visual interface could help drivers keep their eyes on the road. However, since driving is a highly demanding cognitive task, there is a risk that added information could confuse rather than help drivers.

This study investigates the use of a visual interface combined with a haptic rotary device for solving menu selection tasks during simulated driving. The purpose was to determine whether a haptic interface that corresponds well to a visual interface interferes with or supports secondary task performance and whether the haptic interface could be used without drivers taking their eyes off the road. Four interfaces were compared during simulated driving: visual only, partly corresponding visual-haptic, fully corresponding visual-haptic, and haptic only. The interfaces were experimentally compared in terms of task completion time and error rate. In addition, mental workload was measured and the participants were interviewed. Interviews were used to provide a deeper understanding of the statistical data and to capture the participants’ comprehension and interpretation of the interfaces.

Three hypotheses were made in the experimental study. First, a haptic addition was expected to improve secondary task performance. Second, a fully corresponding visual-haptic interface was expected to produce better performance results than a partly corresponding interface. Third, it was expected that a haptic-only interface would allow tasks to be completed successfully although generating lower performance than interfaces that also provided visual information.

2. Experiment Method

2.1. Participants

Forty first-year engineering students (27 men and 13 women) participated in the study as part of an academic course. The participants were between 19 and 25 years old. None of the participants had experience with the haptic rotary device or the simulator environment used in the study.

2.2. Equipment

A simple desktop simulator was set up in accordance with the equipment specified in the Lane Change Test User Guide 1.2 [34]. Figure 1 shows the driving environment with a 20″ LCD monitor (2) and a Logitech Momo Racing steering wheel (1) placed in front of the participant. The figure also shows the equipment for solving a menu selection task: a haptic rotary device (3), a laptop computer (4), and a 6.4″ display (5). The haptic rotary device (Alps Haptic Commander) is a knob (Ø 3.5 cm) that could be turned and pushed. The equipment for the menu selection task was placed in the imagined centre stack with the haptic rotary device placed about a forearm’s distance from the participant and approximately 30 degrees from the participant’s right side. The laptop computer was also placed at a 30-degree angle from the participant’s centreline, and the 6.4″ display was placed just under the monitor associated with the primary task. All equipment was fixed, but the participant’s chair was adjustable so that each participant could sit at a comfortable distance from the steering wheel and haptic rotary device.

Figure 1: The figure shows the experimental setup: a steering wheel (1), a screen with a simulated highway (2), a haptic rotary device (3), a laptop used for the menu selection task (4), and a 6.4″ display (5).
2.3. Simulated Driving

The Lane Change Test (LCT) method was chosen as the primary task since it is a simulated driving task with a high level of control and reliability [35], suitable for comparing different conditions. The driving scene and driving task were simple and the same for all participants. When using the LCT, the participant drove for about three minutes on a straight three-lane road on which no other cars or obstacles were present. The driving task was to keep the car inside a driving lane and change lanes when directed by signs. Eighteen signs were placed along the road to show which lane to choose. Different tracks were available in the LCT method, and varying the tracks was recommended to avoid learning effects. The tracks differed only in the order in which the signs were placed; the signs were spaced the same distance apart. In this study, tracks one through five were used once for each participant, and the order of the tracks was consistent. The driving speed was controlled by the test leader and fixed at 60 km/h.

2.4. Haptic and Visual Interface Design

The experimental task was a simple menu selection task programmed in and controlled by Macromedia Director 8.5. Textures were used as menu items instead of letters or functions, which are more common in these types of studies. The textures made it possible to create a fully corresponding visual-haptic interface. Moreover, textures were necessary to investigate whether a haptic interface could be used alone. Four textures—A, B, C, and D—were presented to the participants as visual images on the laptop screen and/or perceived as haptic effects through the haptic rotary device (presented in alphabetic order in Figure 2). The textures were designed using ergonomic policies developed by Ivergård [36] and results from user tests. The visual interface was created in Adobe Illustrator CS5 and the haptic interface with Alps Rotary Haptic Editor. Repeated click effects with a linearly changing torque were used to create the haptic textures (Table 1).

Table 1: Specification of the haptic effects used in the experiment.
Figure 2: The visual and haptic information provided in the four different interfaces (V, pVH, fVH, and H). The haptic information is represented visually around the haptic rotary device. The visual and haptic textures are placed in alphabetic order (A, B, C, and D).

The rotation angle for a whole menu was 150 degrees; the rotation angle for each menu item was 30 degrees with 10 degrees in between. For some experimental conditions, a haptic ridge separated the menu items. A ridge was made by a single click effect with a linearly changing torque of 5 mN m/deg, a maximum torque of 50 mN m, and a traction force of 30%. A traction force makes the click effect more distinct. Haptic walls were placed at the menu borders as end stops with a steep incline (50 mN m/deg) and a maximum torque set at 90 mN m. A damper effect, that is, a friction proportional to the knob velocity, was added over the whole menu to reduce unwanted vibrations. The damper coefficient (d) was set to 30 mN m s. The damper torque can be calculated as d multiplied by the knob velocity (rad/s).
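The geometry and damper parameters above can be expressed as a short calculation. The sketch below is illustrative only: the names are ours, not part of the Alps Haptic Editor toolchain, and only the numerical values are taken from the text.

```python
# Menu geometry and damper torque, using the parameter values given in
# the text. Names are illustrative; they do not come from the Alps tools.

ITEM_ANGLE_DEG = 30    # rotation angle per menu item (degrees)
GAP_ANGLE_DEG = 10     # gap between adjacent items (degrees)
N_ITEMS = 4
DAMPER_COEFF = 0.030   # d = 30 mN m s, expressed here in N m s

def menu_rotation_angle(n_items=N_ITEMS):
    """Total menu angle: n items plus the gaps between them."""
    return n_items * ITEM_ANGLE_DEG + (n_items - 1) * GAP_ANGLE_DEG

def damper_torque(velocity_rad_s, d=DAMPER_COEFF):
    """Damper torque (N m), proportional to knob velocity (rad/s)."""
    return d * velocity_rad_s
```

With four items this reproduces the 150-degree whole-menu angle stated above, and a knob velocity of 1 rad/s yields a damper torque of 30 mN m.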

2.5. Experimental Conditions

The interface had four menu fields with different textures. The menus and textures were presented visually, haptically, or both. Figure 2 presents the experimental conditions compared in the study:
(i) interface V: visual only,
(ii) interface pVH: partly corresponding visual-haptic,
(iii) interface fVH: fully corresponding visual-haptic, and
(iv) interface H: haptic only.

The visual-only interface (V) had no haptic support other than end stops, that is, haptic walls at the beginning and at the end of a menu. Similar end stops were also found in the other three interfaces. The partly corresponding visual-haptic interface (pVH) used the same visual interface as V with the addition of perceptible menu field borders, that is, haptic ridges placed between the menu fields. In the fully corresponding interface (fVH), the visual interface information was also presented haptically; that is, both menu field borders and textures were presented visually and haptically. In the haptic-only interface (H), the menu field borders and textures could be felt, as in fVH, but no visual information was provided.
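The four conditions can be summarised as a small lookup table. This is our own encoding of Figure 2 for illustration, not part of the experiment software; the channel sets record which modality carries the menu field borders and the textures in each interface.

```python
# Our own encoding of the four experimental conditions (Figure 2):
# which channels carry the borders and the textures in each interface.
INTERFACES = {
    "V":   {"borders": {"visual"},           "textures": {"visual"}},
    "pVH": {"borders": {"visual", "haptic"}, "textures": {"visual"}},
    "fVH": {"borders": {"visual", "haptic"}, "textures": {"visual", "haptic"}},
    "H":   {"borders": {"haptic"},           "textures": {"haptic"}},
}

def is_fully_corresponding(interface):
    """True when every interface element reaches both channels."""
    spec = INTERFACES[interface]
    return all(channels == {"visual", "haptic"} for channels in spec.values())
```

Only fVH satisfies the full-correspondence test; pVH duplicates the borders but not the textures, which is exactly the distinction the study manipulates.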

2.6. Experimental Design

The experiment used a between-subjects design because Rydström and Bengtsson [37] found asymmetric learning effects in a similar study. Ten participants were randomly assigned to each experimental condition.

2.7. Procedure

An experimental session lasted about one hour, and the test leader gave all participants the same instructions. Each session started with the simulated driving. The participants drove on two practice tracks, one immediately after the other. Thereafter, the menu selection task was explained, and the participants practiced the task during two training trials. The first training trial presented the textures in alphabetic order with the label “A,” “B,” “C,” or “D” displayed on the laptop monitor. In this trial, the participants learned the textures and moved on when they felt ready to continue. The second training trial resembled the experimental trial. The participants were asked to find and select one of four textures identified as the target. The texture to select, for example, “Locate A”, was given through headphones by a computer voice as well as displayed on the 6.4″ display. The start position was always on the leftmost texture, and the active texture was marked blue. The participants turned the haptic rotary device to the appropriate texture and selected it by pressing the device. If they selected the right texture, a tone was played, the textures changed order, and a new target texture was given. If they selected a wrong texture, nothing happened; to proceed, they had to select the right texture. This training phase continued until 12 correct selections were made in a sequence, which required them to select each texture correctly at least two times. The length of a practice trial differed between participants, but at the beginning of the experimental trials all of the participants had reached the threshold level of proficiency. In the experimental trial, the participants drove three tracks in the LCT and carried out tasks with the rotary device at the same time. The order of textures and target textures was counterbalanced and was the same for all participants. Twelve textures were selected during a driving round, which took approximately three minutes. The experimental tasks occurred once every 13 seconds throughout the whole round except for a small pause at the beginning and end. The interval (13 seconds) was selected to reduce a floor effect and was based on results from a prestudy. To ensure that selections were based on visual and haptic perception only, pink noise was provided through the headphones. At the end of each experiment, the participants completed a questionnaire that asked them to provide information about themselves (e.g., their level of computer experience) and two NASA-TLX forms [38].
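The training criterion (12 consecutive correct selections, with a wrong selection forcing a retry) can be sketched as a simple loop. This is our reading of the procedure, not the actual experiment software; `respond` stands in for the participant's selection of a texture.

```python
def run_training(respond, targets, required_streak=12):
    """Present targets until `required_streak` consecutive correct
    selections are made. A wrong selection resets the streak, and the
    same target is retried until it is selected correctly."""
    streak = 0
    selections = 0
    i = 0
    while streak < required_streak:
        target = targets[i % len(targets)]
        choice = respond(target)
        selections += 1
        if choice == target:
            streak += 1
            i += 1          # correct: move on to the next target
        else:
            streak = 0      # wrong: streak resets, same target retried
    return selections
```

A participant who never errs finishes in exactly 12 selections; every error resets the streak, which is why practice-trial length differed between participants.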

2.8. Measurements and Analysis

Performance was measured as the time it took to complete a task and the number of errors made. Two types of errors were measured: push errors and turn errors. A push error was registered when the participants selected a texture that was not a target. A turn error was registered when the participants went past the right texture without selecting it. In the analysis, the number of push errors was divided by the total number of tasks for each participant; the same was done for the turn errors. This normalisation did not alter the outcome of the analyses but made the results easier to interpret. If the participants did not manage to select a texture within 13 seconds, the task was logged as a missing value and given the highest possible time in the analysis, 13 seconds. NASA-TLX [38] was used to measure the participants’ experienced mental workload. After the experimental trials, the participants completed two NASA-TLX forms: the rating scale form and the pair-wise comparison form. In addition to descriptive statistics, the results from the menu selection task and the NASA-TLX forms were analysed with Kruskal-Wallis tests. The interfaces were also compared pair-wise with Mann-Whitney tests to test the hypotheses. Nonparametric tests were used because the data were not normally distributed and the variances were not homogeneous. To test the third hypothesis, the total number of push errors made by each participant was analysed using the binomial distribution (the Bernoulli trial). The significance level (α) was set to 0.05.
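As an illustration of the pairwise comparisons mentioned above, the Mann-Whitney U statistic can be computed directly from two independent samples. This is a textbook sketch, not the authors' analysis code; in practice a statistics package would be used.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U: over all pairs, count how often an observation
    from x exceeds one from y (ties count 0.5); report the smaller of
    U and its complement, as is conventional."""
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    return min(u, len(x) * len(y) - u)
```

Two fully separated samples give U = 0, the strongest possible evidence of a group difference for those sample sizes; the reported p value then depends on the group sizes.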

3. Experiment Results

3.1. Task Completion Time

Figure 3 shows a boxplot of the mean task completion time (s) for each interface. The boxplot shows a larger spread for the fully corresponding visual-haptic interface (fVH) compared to the other interfaces. A significant difference was found between the interfaces with the Kruskal-Wallis test (H (3) , ). To test the first hypothesis, the visual-only interface (V) was compared to the two interfaces with a haptic addition (pVH and fVH) with the Mann-Whitney tests. The menu selection tasks were completed significantly faster with the partly corresponding interface (pVH) ( , , ). However, no difference was found between V and fVH ( , , ). To test the second hypothesis, the partly corresponding interface (pVH) was compared to the fully corresponding interface (fVH), revealing no difference ( , , ). To test the third hypothesis, the haptic-only interface (H) was compared with the interfaces with visual information. Completing a task took significantly longer with the haptic-only interface than with the other interfaces: H-V ( , , ), H-pVH ( , , ), and H-fVH ( , , ).

Figure 3: Boxplot showing the mean task completion time (s).
3.2. Turn Errors

The spread of turn errors (%) in each interface is visualised with a boxplot (Figure 4). The Kruskal-Wallis test revealed a significant difference between the interfaces (H (3) , ). The first hypothesis, tested with the Mann-Whitney tests, revealed that significantly more turn errors were made with only visual information (V) compared to the partly corresponding visual-haptic interface (pVH) ( , , ). However, no difference was found between V and fVH ( , , ). When the second hypothesis was tested, no difference between the partly and fully corresponding interfaces was found ( , , ). In addition, the test of the third hypothesis revealed that significantly more turn errors were made with the haptic-only interface than the interfaces with visual information: H-V ( , , ), H-pVH ( , , ), and H-fVH ( , , ).

Figure 4: Boxplot showing the turn errors (%).
3.3. Push Errors

No significant differences were found with the Kruskal-Wallis test for push errors (H (3) , ). The Mann-Whitney tests were used to answer the three hypotheses. Significantly more push errors were made with the haptic-only interface (H) than with the partly corresponding interface (pVH) ( , , ). No other differences were found. Table 2 presents the median values and ranges of push errors for each interface. The third hypothesis was tested with the Bernoulli trial. The largest number of push errors made by a participant using the haptic-only interface was used in the analyses, that is, 13 push errors out of 36 selections. With the probability of selecting a nontarget set to .75, the probability of making 13 or fewer push errors out of 36 selections by chance was less than .05 ( ).
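The Bernoulli-trial analysis above amounts to a binomial tail probability: if a participant in the haptic-only condition were merely guessing, each selection would be wrong with probability .75, and the chance of observing 13 or fewer errors in 36 selections can be computed directly. A minimal sketch (our illustration, not the original analysis script):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Chance probability of at most 13 push errors in 36 selections when a
# wrong pick has probability 0.75 on every trial:
p_chance = binom_cdf(13, 36, 0.75)   # far below the 0.05 criterion
```

Since the expected number of errors under guessing is 27 out of 36, observing only 13 is an extreme tail event, which is why even the worst-performing haptic-only participant was clearly not selecting at random.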

Table 2: Median and range of push errors and mental workload for each interface.
3.4. Mental Workload

No significant differences were found with the Kruskal-Wallis test for mental workload (H (3) , ). The Mann-Whitney tests were used to answer the three hypotheses. No differences were found. Table 2 presents the median values and ranges of mental workload for each interface.

4. Interview Method

4.1. Respondents

Every participant in the experiment was also interviewed at the end of the experimental session.

4.2. Interview Design and Questions

Interviews were used to provide a deeper understanding of the statistical data by capturing the participants’ experiences and comprehension of the different interface conditions. The interviews were semistructured since we wished to compare the answers between different interface conditions. Every participant was asked the same questions, but the questions were sometimes explained, followed up, or adjusted to suit the type of interface they had used. The questions were all open ended since we were interested in the participants’ thoughts, experiences, and difficulties with the different interfaces. We did not know what to expect and could therefore not ask specific questions with predetermined answer alternatives. The interviews all began with questions concerning the menu selection task: “How did it feel to use the rotary device and the user interface you used for solving the tasks?” and “What did you find good and bad?” All participants were asked how they experienced their performance: “How do you think you managed to drive and carry out tasks at the same time?” The participants were also asked more specific questions about the information presented in their interface in relation to other types of information. These questions differed according to the interface used. For example, the participants from the visual-only interface were asked the following questions: “How do you think it would have been if you were able to feel ridges between the menu alternatives?”, “How do you think it would have been if you also could sense the textures when you were moving past them?”, and “How do you think it would have been if you could sense the textures but not see them?”. The participants who experienced the fully corresponding visual-haptic interface were also asked more specific questions about their comprehension of the information: “You could both see and feel the four textures. Which information did you use the most?” and “Was any information unnecessary?” The interviews lasted about ten minutes and were conducted in Swedish, the participants’ native language. Each interview was, with permission from the participants, recorded on tape.

4.3. Analysis

A method similar to the sequential analyses described by Miles and Huberman [39] was used for the analyses. At first, the interviews were transcribed verbatim and reduced to individual case synopses. Since the interview material was short, the next step in the analyses was to make matrices with key terms and key phrases. Thereafter, the phrases were reduced or labelled as quotations. The material was further analysed by creating clusters and attaching labels, such as plus or minus signs. The matrices made it possible to produce an overview of the material and to compare answers between the interface conditions. The data was not analysed with statistical methods since open-ended questions were used. If some participants identified a specific feature as important, it did not mean that the other participants would have agreed or disagreed. However, those comments could still help explain the statistical data. Therefore, the answers were summarized in text form describing key incidents as “the majority,” “some,” or “a few” and similar. The actual answer incidences can be found in Tables 3–5 and, for some parts, directly in the text. The key words and quotations used in the results section have been translated into English.

Table 3: Answer incidences for spontaneous remarks.
Table 4: Answer incidences for experienced performance.
Table 5: Answer incidences for interface preferences.

5. Interview Results

5.1. Spontaneous Remarks

At the opening of the interviews, the participants were asked how it felt to use the interface for the menu selection task and what they considered good and bad about the interface. The spontaneous answers differed depending on which interface was used (Table 3). However, participants from the same interface group often used similar expressions and mentioned the same problems. To describe their user interfaces, many participants from V and pVH used the phrases “easy to use,” “easy to learn,” and “it felt good.” Some participants from fVH also described the user interface as “easy to use.” The participants from H were least satisfied with the user interface. Some of them described the haptic-only interface as “quite easy” or “fairly easy” to use. They thought it was troublesome in the beginning, but after a while they got used to it. Almost all participants from the haptic-only interface (H) mentioned difficulties differentiating a pair of the textures. However, the participants did not all find the same textures similar: for example, some participants had problems with textures A and B while others mentioned C and D. Only a few from the interfaces with visual information mentioned problems differentiating or finding textures. Some participants from interface V mentioned problems with turn errors. The marker sometimes moved past the target when they tried to select it, or they turned too far since there were no ridges between the menu items. No participants from the other interfaces mentioned the same problem.

5.1.1. Interface V

Two participants from interface V spontaneously said that they would have preferred a user interface with haptic ridges provided through the rotary device. As one participant expressed it, “I would like to see that it had some form of response when turning it… it may perhaps be some ridge or so.”

5.1.2. Interface pVH

Two participants spontaneously remarked that the haptic ridges were positive. One of them said, “It was good that there were ridges… because I looked at the display and then I saw where it (the target) was and could sort of count out were [sic] it was with the ridges”.

5.1.3. Interface fVH

The opinions of interface fVH diverged. Two participants described the haptics as non-congruent: they found no correlation between what they felt and saw. Two other participants spontaneously said they had preferred haptic ridges instead of haptic ridges and textures. One participant expressed it as “(y)ou could not really feel where you were… you felt these structures instead. I had probably thought it was better if there were only four positions”. Another participant had the opposite opinion: “I liked it (the haptic textures); it is better than merely ridges.”

5.1.4. Interface H

Two participants from interface H spontaneously said they would have preferred more visual feedback. Two others wanted more pronounced ridges and larger space for the textures or in-between textures. One wanted the textures to differ more and another wanted an addition of auditory information.

5.2. Experienced Performance

The participants explained their performance differently depending on which interface they had used (Table 4). The majority of participants who used interface pVH described their performance as “good,” whereas most of the participants who used interfaces V and fVH expressed their performance as “fairly good.” The participants who used interface H were least satisfied with their performance. One participant said, “(i)t was more difficult than I thought; you got a bit stressed when you did not really find the right texture”.

5.3. Interface Preferences

The participants had a relatively clear idea of what it would have been like to have other types of information, although they occasionally found it hard to imagine. Table 5 presents the positive and negative responses to different types of information. The information discussed was similar to the different interfaces compared in the experimental study, and the results are therefore grouped accordingly.

5.3.1. Interface V

Almost all participants from interfaces pVH and fVH wanted more than just visual information. A participant from interface pVH feared that “(t)he risk is… that you, when you push, happen to move it (the cursor) to another menu item.” Six participants from pVH and five from fVH specifically explained that, without haptic information, they would have taken their eyes off the road more often. With the haptic information, they said, they could watch the road while performing the task by counting the ridges. Many participants from interface H were negative about visual-only information. One participant would have preferred visual-only information to haptic-only, whereas two said they would rather have had haptic-only information than visual-only. One participant who preferred haptic-only information argued that it was “(b)etter to use the perception of touch so that you will not need to look off the road.”

5.3.2. Interface pVH

When interface pVH was discussed, the tone was different: almost all of the comments about this interface were positive. The arguments for interface pVH were much the same regardless of which interface the participant had used. A participant from interface V expressed the common argument for visual information with ridges: “(i)t would probably be much easier because… then it is only to turn without looking at the display because you know how many clicks there should be”.

5.3.3. Interface fVH

When interface fVH was discussed, the responses from participants who used interfaces V and H were more positive than negative, whereas the numbers of negative and positive responses from participants who used interface pVH were similar. Participants from interfaces V and pVH had similar arguments for and against an addition of haptic textures. A typical positive response came from a participant from interface V: “It had possibly been better because then you would not need to look on that (the visual interface); you could concentrate on watching the road and just feel”. Another participant from interface V gave a typical negative response: “Then you would get more to think on. I would think it could have been a bit laborious”. Several participants from interface H thought a visual addition would make the task easier and speed up their responses. However, some feared that it would be too much information and that they would look too much at the display and miss the road.

5.3.4. Interface H

The two visual interfaces, V and pVH, generated mostly negative responses to having haptic information only, as in interface H. One participant from interface V described it in the following way: “It can probably be quite comfortable to have something that you can look at in case you get insecure. It feels as if you trust vision more than the perception of touch”. However, not all participants were negative; some from interfaces V and fVH were fairly positive. For example, one participant from interface fVH chose to use only the haptic information even though visual information was available.

5.4. Perception of Interface fVH

The perception of the fully corresponding visual-haptic interface differed among the participants. Three categories were found: only ridges, ridges and textures—not correlated, and ridges and textures—correlated.

5.4.1. Only Ridges

Three participants did not perceive the haptic texture information; they only felt the ridges between menu items, as in pVH. The user interface did include haptic texture information; that is, these participants filtered out the information themselves. One of them perceived the textures during training but not while driving, explaining it as “(t)he clicks had different characters for different positions in the beginning (during training), which it never was during the test. Perhaps my brain fooled me, but I thought it felt as if there was one distinct and identical click between every position in the test.”

5.4.2. Ridges and Textures: Not Correlated

The other participants all perceived the textures, but two of them did not understand the correlation between the visual and haptic information: “It was a bit bumpy, there were ridges or something but when you turned, it did not jump on every step. I saw that I was two steps away, but it was not really only two steps”.

5.4.3. Ridges and Textures: Correlated

Five of the participants who perceived the textures also understood their purpose. Three of those participants chose not to use the textures. One of them explained it this way: “I think it was a bit odd that you should feel…. It would demand more exercise to use the sense of touch, so now I mostly thought it was a bit disturbing…. I had thought it would be better if there were only four positions”. However, two participants deliberately used the haptic textures. One of them did not use the visual information at all: “While I was driving, I did not look at this (the visual interface); you did not have to”.

6. Discussion

When driving, the eyes and mind need to be focused on the road; secondary tasks should therefore be carefully designed. A haptic addition to a visual interface could ease interaction or could provide too much information to process. In this study, different amounts of haptic information were compared, and both advantages and disadvantages were found. All interfaces were studied while participants concurrently performed a simulated driving task that demanded both visual and cognitive attention. In other human-computer interaction situations, these types of visual-haptic interfaces might yield different results.

6.1. First Hypothesis

The first hypothesis in this paper was accepted: haptic additions to a visual interface improved performance. Both task completion time and error rate, in terms of turn errors, were significantly lower when haptic ridges were added to the visual interface (pVH). The fully corresponding interface with haptic ridge and texture information did not produce the same positive results and responses; this issue is discussed more thoroughly later. The turn error result means that the target was more often passed over without being selected when the interface lacked haptic information. This suggests that the haptic ridges made it easier to stop at a certain position and stay there during selection. The interviews support this interpretation: some participants using interface V spontaneously mentioned these kinds of problems.

The positive results for the visual-haptic interface (pVH) in comparison to the visual-only interface (V) agree with Wickens’ [18] theory of multiple resources: the addition of a haptic interface reduces the visual load and makes effective multitasking possible. According to the interviews, the haptic ridges were important. Because of the ridges, the participants did not have to look away from the road to find a target; many participants using interface pVH mentioned that they could count the ridges to be sure of their position while watching the road. With only visual information, the participants needed to look away from the road throughout the task. The interview answers indicate that the participants looked off the road less often with the partly corresponding visual-haptic interface (pVH) than with the visual-only interface (V), which is good from a driving and safety perspective. Furthermore, the interview answers revealed that almost all participants who received only visual information would have preferred haptic ridges and felt that the knob lacked them. This agrees with Lederman and Abbott’s [29] theory of ecological validity: the haptic rotary device resembles a mechanical knob, which usually provides ridges. The participants may therefore have expected ridges, which would explain why they felt the ridges were lacking. If they had used a nonhaptic computer mouse instead, they might not have thought of haptic ridges as a possible improvement.

6.2. Second Hypothesis

The second hypothesis in the paper was rejected: a fully corresponding interface with haptic textures (fVH) did not produce better performance than an interface with haptic ridges only (pVH). The haptic rotary device’s full potential was used in the fully corresponding visual-haptic interface (fVH) with the intention of easing driver demand and facilitating use; redundant information was expected to aid decision making. These expectations were based on results from a study by Grane and Bengtsson [30] in which a fully corresponding visual-haptic interface generated fewer turn errors and less mental demand than a partly corresponding interface. In this paper, however, no differences between the interfaces were found, and consequently the results do not correspond with Grane and Bengtsson’s [30] findings. The main difference between the two studies is that this study comprised a simulated driving task while Grane and Bengtsson’s [30] study did not. Accordingly, a fully corresponding interface that normally induces low mental demand cannot be expected to give the same positive results in a driving situation. The interviews revealed an inconsistent comprehension of the fully corresponding interface: a few participants reported that the haptic textures were useful, while others described them as unnecessary. Apparently, there is a risk that haptic information intended to facilitate the use of a visual menu selection interface confuses rather than helps.

6.3. The Fully Corresponding Interface

According to the interviews, the perception and comprehension of the fully corresponding interface varied, and three different groups were found. A few participants did not perceive the haptic textures, only the haptic ridges separating the menu items. Guest and Spence [40] point out that there is no evidence of enhanced discrimination performance with visual-haptic texture perception. Rather, the two senses seem to act independently, and dividing attention between the modalities reduces each sense’s ability to discriminate. In highly demanding environments, such as driving a vehicle, this effect could be further augmented. This may explain why some of the participants using the fully corresponding visual-haptic interface sensed only part of the haptics provided, the haptic ridges. The ridges stood out more since they had a higher torque than the haptic textures and were therefore easier to discriminate.
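The design described above, high-torque ridges at menu-item boundaries and weaker item-specific textures in between, can be sketched as a torque-rendering profile for a haptic rotary device. Everything in this sketch is an illustrative assumption, not the authors' implementation: the item width, torque amplitudes, and texture frequencies are invented to mirror the described behaviour.

```python
import math

# Assumed parameters (arbitrary units); the paper does not publish its profiles.
ITEM_ANGLE = 30.0      # assumed angular width of one menu item (degrees)
RIDGE_TORQUE = 1.0     # assumed peak torque of a ridge at an item boundary
TEXTURE_TORQUE = 0.3   # weaker texture torque, easier to miss under load

# Assumed texture frequencies (bumps per item) distinguishing four menu items.
TEXTURE_FREQS = [2, 4, 6, 8]

def knob_torque(angle_deg: float) -> float:
    """Resistive torque felt at a given knob angle (sketch)."""
    item = int(angle_deg // ITEM_ANGLE) % len(TEXTURE_FREQS)
    within = (angle_deg % ITEM_ANGLE) / ITEM_ANGLE  # position 0..1 inside the item
    # Ridge: a narrow high-torque band near each item boundary.
    ridge = RIDGE_TORQUE if within < 0.1 or within > 0.9 else 0.0
    # Texture: a low-amplitude oscillation whose frequency codes the item identity.
    texture = TEXTURE_TORQUE * math.sin(2 * math.pi * TEXTURE_FREQS[item] * within)
    return ridge + texture
```

Because the ridge amplitude dominates the texture amplitude, a user sweeping the knob under load would mainly feel the boundary detents, which matches the finding that some participants perceived only the ridges.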

Some of the participants using the fully corresponding visual-haptic interface (fVH) sensed more haptic information than the ridges but did not comprehend its correspondence with the visual information provided. According to Ernst and Bülthoff [41], information from different modalities has to be efficiently merged to form a coherent percept: if visual and haptic information are to be integrated, it should be clear that the information comes from the same object. Wall and Harwin [42] remarked that visual and haptic exploration through probes and monitors provides approximations rather than exact models of natural surfaces. Even if an interface is designed to be corresponding, it is unclear whether users will merge the information as they do in real life. Exactly how the brain decides to interpret information as a whole is not known, but according to Ernst and Bülthoff [41], signals should most likely not differ too much spatially. In the study described in this paper, the information that meets the eye and the hand is spatially separated. It could be difficult for the brain to build a whole from such obviously separated information bearers even though the signals match in time and have a similar design. This may explain why some of the participants using the fully corresponding interface did not interpret it as coherent. However, in a study by Grane and Bengtsson [30], a fully corresponding interface with a spatial separation similar to that in this paper proved better than a partly corresponding interface. The main difference is that Grane and Bengtsson’s study [30] lacked a simulated driving task. Therefore, the problem with integrating the haptic and visual information might be an effect of rational resource utilization under mental overload: according to Ernst and Bülthoff [41], the brain is not willing to wait for an accurate answer if it can deliver a quick, uncertain response.
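The account of merging reviewed by Ernst and Bülthoff [41] is often formalized as variance-weighted (maximum-likelihood) cue combination. The short sketch below works through that standard model with invented numbers: under driving load the haptic estimate is assumed to be noisier, so the merged percept leans toward the visual cue.

```python
def mle_combine(visual_est, visual_var, haptic_est, haptic_var):
    """Combine two noisy estimates, weighting each by its reliability (1/variance)."""
    w_v = (1 / visual_var) / (1 / visual_var + 1 / haptic_var)
    w_h = 1 - w_v
    combined = w_v * visual_est + w_h * haptic_est
    # The combined variance is never worse than that of either single cue.
    combined_var = 1 / (1 / visual_var + 1 / haptic_var)
    return combined, combined_var

# Illustrative numbers: a reliable visual cue and a noisy haptic cue about
# target position in the menu. The merged estimate sits close to the visual
# cue because its weight (0.8) dominates the haptic weight (0.2).
est, var = mle_combine(visual_est=4.0, visual_var=0.5, haptic_est=2.0, haptic_var=2.0)
```

On this model, a heavily down-weighted haptic cue contributes little to the percept, which is one way to read the participants who sensed the textures but ignored them.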
If the visual information describing the textures is easier to grasp, some participants may have responded based on that information alone, without spending time on the haptic textures. As concluded by Wickens [18] and Lederman and Abbott [29], some tasks are more appropriate for one modality than another. When drivers want to find a target in a menu as quickly as possible, it is naturally more effective to visually scan the menu than to turn serially through a haptic menu. If the textures are already perceived visually, it may seem unnecessary to spend mental resources on merging the haptic information with the visual information. The perceived noncorrespondence between the haptic and visual information could also be explained by Lederman and Abbott’s [29] theory of ecological validity: it is possible that the participants fell back on familiar behaviour. Visual menu selection interfaces are more commonly encountered in daily life without informative haptics. As a result, the participants might have expected visual information, perhaps accompanied by haptic ridges, and consequently been more open to the visual stimuli and confused by the haptic textures. With more exposure to these types of systems, it might become easier to take in and process the haptic information; more experienced users might learn new interaction strategies that use the haptic information in a resource-effective manner.

Interestingly, some participants using the fully corresponding visual-haptic interface (fVH) had no problems sensing the haptic information and interpreted it as corresponding with the visual interface. According to their interview answers, some still considered the haptic textures unnecessary, although two participants considered them useful. One of those two said he did not need the visual information and chose to use only the haptics so he could pay more attention to the road. Why did he do that if the available visual information was more efficient to process? Could it be that the haptic modality dominates over vision for some people? Visual information is traditionally said to dominate haptic information in multimodal tasks; this was shown by Rock and Victor [43] and has been confirmed by others. Interestingly, in the Rock and Victor [43] study, two out of ten participants mainly used the haptic information, indicating a haptic dominance for those two. Lederman et al. [44] question vision as the dominant modality: they found vision to be important when spatial density was judged and tactile cues important when roughness was judged. Furthermore, McDonnell and Duffett [45] found clear individual differences in modality dominance. If haptics dominates vision for some people, this could explain why the haptic information was interpreted as corresponding by some of the participants even though they had never used a similar interface. Accordingly, when designing interfaces for demanding situations such as driving, the designer should not assume that users will use the most efficient information.

6.4. Third Hypothesis

In the third hypothesis, the haptic information was expected to be useful even though it would yield lower performance than when visual information was available concurrently. This hypothesis was tested by including an interface with haptic-only information; by removing the visual interface, the participants were forced to rely on their haptic sense. The third hypothesis was accepted, since the participants managed to complete the tasks with this interface. Furthermore, as expected, the haptic-only interface demanded more time and resulted in more turn errors than the interfaces with visual information. In addition, more push errors were made with the haptic-only interface than with the partly corresponding interface (pVH). For clarification, a turn error was registered when a target was passed without being selected, and a push error when the wrong target was selected. Other studies have also found haptic exploration of objects to be more time consuming than visual exploration [30, 31]. This could be expected, since a haptic search using a rotary device is restricted to serial exploration and requires repeated hand movements back and forth in a menu field to sense the textures [46]. Moreover, the menu items could easily be compared visually, while a haptic comparison required a hand movement. Haptic comparison of textures while searching for the target could also explain the increased turn errors: if the participants were uncertain about a target and wanted to compare it with other textures, the target sometimes had to be passed over without being selected, resulting in a turn error. According to the interview answers, the participants using the haptic-only interface had problems differentiating the textures, which explains the increased numbers of turn errors and push errors. It is clear that the haptic information did not provide sufficient support for making quick selections.
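The distinction between the two error types can be illustrated with a small classifier over a hypothetical interaction log. The event format and function below are assumptions for illustration, not the study's actual logging code: a turn error is counted when the cursor leaves the target without a selection, and a push error when a non-target item is selected.

```python
def classify_errors(events, target):
    """Count turn and push errors in a log of ('move', item) / ('push', item) events."""
    turn_errors = push_errors = 0
    on_target = False           # cursor currently on the target item
    selected_on_target = False  # target was selected during the current visit
    for kind, item in events:
        if kind == 'move':
            if on_target and not selected_on_target:
                turn_errors += 1          # passed the target without selecting it
            on_target = (item == target)
            if on_target:
                selected_on_target = False
        elif kind == 'push':
            if item == target:
                selected_on_target = True
            else:
                push_errors += 1          # selected the wrong item
    return turn_errors, push_errors

# Example log: the cursor overshoots the target (item 2) once, returns, and
# then selects it correctly, giving one turn error and no push errors.
errors = classify_errors(
    [('move', 1), ('move', 2), ('move', 3), ('move', 2), ('push', 2)], target=2)
```

Under these definitions, comparing textures by turning past the target and back necessarily registers a turn error, which is consistent with the explanation given above for the elevated turn-error counts.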
Most participants using the haptic-only interface described it as difficult to use and would rather have used an interface with visual support. Nevertheless, many participants were negative toward visual-only information, a common setup in many vehicles today. Since the interfaces were developed for this study only, their novelty was high; with more interaction, performance might improve. Furthermore, the interfaces were developed primarily for modality comparison. Other types of haptic effects, developed with a focus on usability, might be more easily comprehended and used, especially after some training.

6.5. Further Research

This study focused on secondary task performance. An interesting continuation would be to analyse the driving data, since secondary task performance does not necessarily correlate with driving performance. Moreover, the primary task in this study consisted of a simple desktop simulator. It would be interesting to increase validity by further investigating visual-haptic interfaces with more advanced driving simulators or in real driving.

7. Conclusions

As expected in the first hypothesis, a multimodal approach that adds haptic information to an in-vehicle visual interface for solving menu selection tasks supported the participants’ performance. However, this applied to a visual-haptic interface with marked menu borders, not to a fully corresponding visual-haptic interface; consequently, the second hypothesis was rejected. Interestingly, the fully corresponding visual-haptic interface, expected to ease interaction by providing redundant visual-haptic information, was interpreted and comprehended differently by different users. Some participants did not sense all the haptics provided, and some did not comprehend the correspondence between the senses. This study makes clear that a haptic interface that correlates well with an in-vehicle visual interface can confuse rather than support some drivers. Furthermore, this study clarifies the importance of including some form of driving task when testing in-vehicle interfaces: the results did not correspond with the findings of a similar study with no driving task [30], in which a fully corresponding visual-haptic interface proved better. Moreover, an informative haptic interface could be used without any visual information, as expected in the third hypothesis. Finally, this study does not present a fully developed solution, but it provides a step towards an explanation of why haptic interfaces sometimes confuse rather than support drivers using in-vehicle interfaces.


  1. P. Bengtsson, C. Grane, and J. Isaksson, “Haptic/graphic interface for in-vehicle comfort functions—a simulator study and an experimental study,” in Proceedings of the 2nd IEEE International Workshop on Haptic, Audio and Visual Environments and their Applications, pp. 25–29, Ottawa, Canada, September 2003.
  2. G. E. Burnett and J. M. Porter, “Ubiquitous computing within cars: designing controls for non-visual use,” International Journal of Human Computer Studies, vol. 55, no. 4, pp. 521–531, 2001.
  3. K. Prynne, “Tactile controls,” Automotive Interiors International, pp. 30–36, 1995.
  4. C. Spence and C. Ho, “Multisensory interface design for drivers: past, present and future,” Ergonomics, vol. 51, no. 1, pp. 65–70, 2008.
  5. W. W. Wierwille, “Demands on driver resources associated with introducing advanced technology into the vehicle,” Transportation Research Part C, vol. 1, no. 2, pp. 133–142, 1993.
  6. M. Sivak, “The information that drivers use: is it indeed 90% visual?” Perception, vol. 25, no. 9, pp. 1081–1089, 1996.
  7. D. Grant, “Two new commercial haptic rotary controllers,” in Proceedings of the EuroHaptics, pp. 451–455, Munich, Germany, June 2004.
  8. V. Hayward, O. R. Astley, M. Cruz-Hernandez, D. Grant, and G. Robles-De-La-Torre, “Haptic interfaces and devices,” Sensor Review, vol. 24, no. 1, pp. 16–29, 2004.
  9. J. J. Gibson, “Observations on active touch,” Psychological Review, vol. 69, no. 6, pp. 477–491, 1962.
  10. R. L. Klatzky, S. J. Lederman, and D. E. Matula, “Haptic exploration in the presence of vision,” Journal of Experimental Psychology, vol. 19, no. 4, pp. 726–743, 1993.
  11. H. Alm and L. Nilsson, “The effects of a mobile telephone task on driver behaviour in a car following situation,” Accident Analysis and Prevention, vol. 27, no. 5, pp. 707–715, 1995.
  12. C. Collet, A. Guillot, and C. Petit, “Phoning while driving I: a review of epidemiological, psychological, behavioural and physiological studies,” Ergonomics, vol. 53, no. 5, pp. 589–601, 2010.
  13. C. Collet, A. Guillot, and C. Petit, “Phoning while driving II: a review of driving conditions influence,” Ergonomics, vol. 53, no. 5, pp. 602–616, 2010.
  14. J. Engström, E. Johansson, and J. Östlund, “Effects of visual and cognitive load in real and simulated motorway driving,” Transportation Research Part F, vol. 8, no. 2, pp. 97–120, 2005.
  15. D. Lamble, T. Kauranen, M. Laakso, and H. Summala, “Cognitive load and detection thresholds in car following situations: safety implications for using mobile (cellular) telephones while driving,” Accident Analysis and Prevention, vol. 31, no. 6, pp. 617–623, 1999.
  16. T. C. Lansdown, N. Brook-Carter, and T. Kersloot, “Distraction from multiple in-vehicle secondary tasks: vehicle performance and mental workload implications,” Ergonomics, vol. 47, no. 1, pp. 91–104, 2004.
  17. D. L. Strayer and W. A. Johnston, “Driven to distraction: dual-task studies of simulated driving and conversing on a cellular telephone,” Psychological Science, vol. 12, no. 6, pp. 462–466, 2001.
  18. C. D. Wickens, “Multiple resources and performance prediction,” Theoretical Issues in Ergonomic Science, vol. 3, no. 2, pp. 159–177, 2002.
  19. M. S. Prewett, L. Yang, F. R. B. Stilson et al., “The benefits of multimodal information: a meta-analysis comparing visual and visual-tactile feedback,” in Proceedings of the 8th International Conference on Multimodal Interfaces, pp. 333–338, ACM Press, Alberta, Canada, November 2006.
  20. H. S. Vitense, J. A. Jacko, and V. K. Emery, “Multimodal feedback: an assessment of performance and mental workload,” Ergonomics, vol. 46, no. 1–3, pp. 68–87, 2003.
  21. M. Mulder, M. Mulder, M. M. van Paassen, and D. A. Abbink, “Haptic gas pedal feedback,” Ergonomics, vol. 51, no. 11, pp. 1710–1720, 2008.
  22. C. Ho, H. Z. Tan, and C. Spence, “Using spatial vibrotactile cues to direct visual attention in driving scenes,” Transportation Research Part F, vol. 8, no. 6, pp. 397–412, 2005.
  23. J. B. F. Van Erp and H. A. H. C. Van Veen, “Vibrotactile in-vehicle navigation system,” Transportation Research Part F, vol. 7, no. 4-5, pp. 247–256, 2004.
  24. F. Asif, J. Vinayakamoorthy, J. Ren, and M. Green, “Haptic controls in cars for making driving more safe,” in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '09), pp. 2023–2028, Guilin, China, 2009.
  25. G. Costagliola, S. Di Martino, F. Ferrucci, G. Oliviero, U. Montemurro, and A. Paliotti, “Handy: a new interaction device for vehicular information systems,” in Proceedings of the Mobile Human-Computer Interaction (Mobile HCI '04), S. Brewster and M. Dunlop, Eds., vol. 3160 of Lecture Notes in Computer Science, pp. 264–275, Springer, Glasgow, UK, 2004.
  26. J. M. Porter, S. Summerskill, G. Burnett, and K. Prynne, “BIONIC – ’eyes-free’ design of secondary driving controls,” in Proceedings of the Accessible Design in the Digital World Conference 2005, Dundee, Scotland, August 2005.
  27. A. Tang, P. McLachlan, K. Lowe, C. R. Saka, and K. MacLean, “Perceiving ordinal data haptically under workload,” in Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI '05), pp. 317–324, ACM, Trento, Italy, October 2005.
  28. A. Rydström, R. Broström, and P. Bengtsson, “Can haptics facilitate interaction with an in-vehicle multifunctional interface?” IEEE Transactions on Haptics, vol. 2, no. 3, pp. 141–147, 2009.
  29. S. J. Lederman and S. G. Abbott, “Texture perception: studies of intersensory organization using a discrepancy paradigm, and visual versus tactual psychophysics,” Journal of Experimental Psychology, vol. 7, no. 4, pp. 902–915, 1981.
  30. C. Grane and P. Bengtsson, “Menu selection based on haptic and/or graphic information,” in Proceedings of the 11th International Conference on Human-Computer Interaction, G. Salvendy, Ed., Las Vegas, Nev, USA, July 2005.
  31. W. M. Bergmann Tiest and A. M. L. Kappers, “Haptic and visual perception of roughness,” Acta Psychologica, vol. 124, no. 2, pp. 177–189, 2007.
  32. E. Gentaz and Y. Hatwell, “Haptic processing of spatial and material object properties,” in Touching for Knowing: Cognitive Psychology of Haptic Manual Perception, Y. Hatwell, A. Strieri, and E. Gentaz, Eds., pp. 123–159, J. Benjamins, Amsterdam, The Netherlands, 2003.
  33. M. A. Heller, “Visual and tactual texture perception: intersensory cooperation,” Perception and Psychophysics, vol. 31, no. 4, pp. 339–344, 1982.
  34. DaimlerChrysler AG, Lane Change Test—User Guide 1.2, DaimlerChrysler AG, Research and Technology, Stuttgart, Germany, 2004.
  35. S. Mattes, “The lane-change-task as a tool for driver distraction evaluation,” in Quality of Work and Products in Enterprises of the Future, H. Strasser, K. Kluth, H. Rausch, and H. Bubb, Eds., pp. 57–60, Ergonomia, Stuttgart, Germany, 2003.
  36. T. Ivergård, Handbook of Control Room Design and Ergonomics, Taylor and Francis, London, UK, 1989.
  37. A. Rydström and P. Bengtsson, “Haptic, visual and cross-modal perception of interface information,” in Proceedings of the Human Factors Issues in Complex System Performance, D. de Waard, G. R. J. Hockey, P. Nickel, and K. A. Brookhuis, Eds., pp. 399–409, Shaker Publishing, Maastricht, The Netherlands, 2007.
  38. S. G. Hart and L. E. Staveland, “Development of NASA-TLX (Task Load Index): results of empirical and theoretical research,” in Human Mental Workload, P. A. Hancock and N. Meshkati, Eds., pp. 139–183, North-Holland, Amsterdam, The Netherlands, 1988.
  39. M. B. Miles and A. M. Huberman, Qualitative Data Analysis, SAGE, Thousand Oaks, Calif, USA, 1994.
  40. S. Guest and C. Spence, “What role does multisensory integration play in the visuotactile perception of texture?” International Journal of Psychophysiology, vol. 50, no. 1-2, pp. 63–80, 2003.
  41. M. O. Ernst and H. H. Bülthoff, “Merging the senses into a robust percept,” Trends in Cognitive Sciences, vol. 8, no. 4, pp. 162–169, 2004.
  42. S. A. Wall and W. S. Harwin, “Interaction of visual and haptic information in simulated environments: texture perception,” in Proceedings of the Haptic Human-Computer Interaction 2000, S. Brewster and R. Murray-Smith, Eds., pp. 108–117, Springer, Glasgow, UK, August-September 2000.
  43. I. Rock and J. Victor, “Vision and touch: an experimentally created conflict between the two senses,” Science, vol. 143, no. 3606, pp. 594–596, 1964.
  44. S. J. Lederman, G. Thorne, and B. Jones, “Perception of texture by vision and touch. Multidimensionality and intersensory integration,” Journal of Experimental Psychology, vol. 12, no. 2, pp. 169–180, 1986.
  45. P. M. McDonnell and J. Duffett, “Vision and touch: a reconsideration of conflict between the two senses,” Canadian Journal of Psychology, vol. 26, no. 2, pp. 171–180, 1972.
  46. C. Grane and P. Bengtsson, “Serial or parallel search with a multi-modal rotary device for in-vehicle use,” in Proceedings of the 2nd International Conference on Applied Human Factors and Ergonomics (AHFE '08), W. Karwowski and G. Salvendy, Eds., USA Publishing, 2008.