Abstract

The usage of a smartphone while driving is a pervasive problem and has been acknowledged as a significant source of road accidents and crashes. Several solutions have been developed to control and minimize risky driving behavior. However, these solutions were mainly designed from the perspective of normal users in nondriving scenarios. In a driving scenario, any deviation from these assumptions (e.g., touching or tapping interfaces and looking at visual items) could impact driving performance. In this paper, we design and develop a context-aware adaptive user interface framework to minimize driver distraction. The proposed framework is implemented on the Android platform as “DriverSense,” which adapts smartphone user interfaces in real time, using adaptation rules, based on contextual factors including driver preferences, environmental factors, and device usage. The proposed solution is evaluated both in real time using the AutoLog application and through an empirical study collecting data from 93 drivers via a mixed-mode questionnaire survey. Results obtained from the AutoLog dataset show that performing activities on native smartphone interfaces while driving leads to abrupt changes in speed and steering wheel angle, whereas only minimal variations were observed while performing the same activities on DriverSense interfaces. The results of the empirical study show that the data are internally consistent, with a Cronbach’s alpha value of 0.7. Furthermore, an Iterated Principal Factor Analysis (IPFA) retained 60 of the 61 measurement items with low uniqueness values. The findings show that the proposed solution significantly minimizes driver distraction and is perceived positively in terms of usefulness, attitude, learnability and understandability, and user satisfaction.

1. Introduction

Smartphone-distracted driving is one of the main concerns in road safety, which is evident from the fact that 1.25 million deaths and 50 million injuries are reported each year [1]. The usage of a smartphone while driving has made driving more complex by requiring fine-grained cognitive, physical, and psychological skills to perform concurrent tasks [2]. Despite the known catastrophes, people habitually use smartphones while driving; for example, 0.66 million drivers are using smartphones at any given instant while driving [3]. In reality, the status of a driver differs from that of a person who is not driving. In nondriving scenarios, a person is free to engage with and perform smartphone activities in almost every situation. In driving scenarios, however, a driver can be regarded as a special user because of limitations on performing smartphone activities, which arise from excessive physical and visual interaction as well as cognitive overload. One of the main reasons for physical and mental engagement in performing smartphone activities is the complex nature and rich interfaces of smartphone platforms. The existing interfaces (i.e., handheld and dock-mounted smartphone interfaces) are typically designed with the assumption that they will be used by normal users (e.g., a user who has full perceptual and cognitive abilities, who can interact for an unrestricted duration, and who is sitting in a comfortable environment) [4]. In a driving scenario, any deviation from these assumptions (e.g., touching or tapping interfaces by hand, looking at visual items, and cognitive overload) could impact driving performance.

Furthermore, the interaction of the driver with smartphone applications is hampered by complex interfaces: each activity is time-consuming, redundant, and repetitive, has a complex navigational structure, requires considerable cognitive effort, and follows a long route [5]. For example, typing and reading text messages require several steps, which can seriously affect eye movements, reaction time, lane positioning, stimulus detection, speed, and headway while driving [6]. A driver consumes about 12.4 seconds while interacting with a smartphone to dial a call and an average of 36.4 seconds to perform a texting activity [7]. Moreover, using a smartphone to send or receive a text message diverts the eyes off the road for an average of 23 seconds [7]. This means that a single text message sent or received can divert a driver’s eyes off the road for more than half a kilometre when driving at 90 km/h [7]. Similarly, safe driving requires full attention, and taking the eyes off the road for 2 seconds can increase the chance of an accident twenty-four-fold [8].
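
The half-kilometre figure follows directly from unit conversion:

```latex
90~\mathrm{km/h} = \frac{90{,}000~\mathrm{m}}{3600~\mathrm{s}} = 25~\mathrm{m/s},
\qquad 25~\mathrm{m/s} \times 23~\mathrm{s} = 575~\mathrm{m} > 0.5~\mathrm{km}.
```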

Therefore, there is a strong need to balance the safety and usability of the smartphone while keeping the driver’s status in mind. One way to improve usability is to change the interaction between drivers and smartphones using an adaptive user interface. Adaptive user interfaces use a context-awareness approach and generate new interfaces according to changes in the environment, user preferences, and device usage [9, 10]. This approach helps drivers personalize their smartphone user interfaces irrespective of their visual, physical, and cognitive limitations. Achieving a considerably improved driver-friendly user interface design requires moderate revisions of existing interfaces to meet the driver’s needs and requirements. This in turn calls for a framework supporting an adaptation mechanism that addresses drivers’ needs, capabilities, and context-of-use to ensure a high degree of acceptability and usability [11].

In this paper, we propose a multimodal context-aware adaptive smartphone user interface framework for drivers. The proposed framework aims to accommodate the user interface requirements of drivers based on the evaluation of different driving and environmental contexts. The framework is implemented on the Android platform as “DriverSense,” which makes effective use of smartphone and vehicular sensing capabilities to capture and identify different driving contexts (e.g., number of people in the vehicle, road status, weather status, traffic status, speed, noise, vehicle dynamics, and the driver’s interests and preferences) and adjust the smartphone user interface automatically. The context-dependent simplified interface is generated using adaptation rules and improves driver safety by minimizing visual, manual, and mental interactions. In this work, available research and best practices from other domains (e.g., ICT applications for people with disabilities) have been reused at different levels of detail to arrive at a more flexible and adaptive solution that helps drivers ensure a safe journey on the road.

The rest of the paper is organized as follows. Section 2 describes related work. Section 3 introduces the proposed framework, and its implementation is presented in Section 4. Section 5 illustrates the experimental evaluation. The results and discussion are presented in Section 6. Finally, Section 7 concludes the paper.

2. Related Work

With the rapid development of vehicular technologies, Intelligent Transportation Systems, Advanced Driver Assistance Systems, and vehicle handling stability have been promoted over the past century [12]. However, the growing problem of driver distraction, especially smartphone usage, persists. Driver distraction by a smartphone, such as texting, phone calling, and using a navigation system, diverts attention away from the primary task and is one of the main contributors to road traffic accidents [13]. The usage of a smartphone while driving contributes to nearly one thousand crashes or near-crashes per year, which is a challenging hurdle for road safety [3]. Researchers have tried to minimize driver engagement with a smartphone with the help of adaptive technologies, which aim to limit interactions or provide simplified interactions to drivers. The existing adaptive technologies focus on three basic principles: blocking smartphone features, changing the nature of interactions, and simplifying smartphone functionalities (e.g., with the help of shortcuts to apps) [14, 15].

Several solutions have been designed to reduce drivers’ interactions with their smartphones while driving [16]. These solutions block some smartphone features/functions, including texting, web browsing, and phone calls [14, 15]. Although the blocking approach is encouraging given the leading causes of accidents and crashes [14, 17, 18], it is not a viable solution to fully mitigate the issue, as it goes against the will of smartphone users [19]. In addition, researchers from Australia and the USA have reported that blocking smartphone features has low acceptability among drivers, as it works against adoption of the technology [20–22].

The other approach used to minimize driver distraction is to change the nature of the interactions between drivers and smartphones by using text-to-speech and speech-to-text metaphors instead of visual-manual interactions [23, 24]. This is an emerging concept and has shown distinct advantages over visual-manual interfaces [25, 26]. However, researchers have suggested that drivers could still face numerous challenges, as such interaction still imposes visual-manual demands, interior glance time, and higher mental demand than a baseline drive [15]. In addition, cognitive demands are high for tasks using voice-based interfaces [27]. Similarly, voice-command-based interfaces are difficult to operate in a noisy environment and can suffer from language barriers, as most systems support only a few natural languages, typically including English [28, 29]. Privacy is another issue in a driving scenario, because the presence of other commuters in the vehicle may restrict smartphone use. The privacy issues of auditory interactions can be resolved by using headphones; however, this compromises safety by blocking important background sounds and increases cognitive overload [27]. Moreover, the interaction between driver and smartphone can also be minimized using Head-Up Displays (HUDs) (e.g., Android Auto, CarPlay), which can be paired with a smartphone via Bluetooth or a physical interface [30]. The aim of these devices is to keep the eyes on the road and the hands on the steering wheel while performing common smartphone activities. However, there is a probability of losing focus on the road when looking into the HUD for necessary operations. In addition, external hands-free systems are often a barrier due to usability, cost, and lack of practicality [14]. Although hands-free systems can reduce visual-manual interaction, they do not reduce cognitive overload [31–33].

A third and emerging approach is the simplification of smartphone functionalities [14, 15]. This approach aims to reduce visual interactions by simplifying driver interactions with smartphone applications. Following this idea, several solutions have been developed that simplify the interactions between drivers and smartphones with the help of app shortcuts and voice commands [34]. However, these solutions can still result in excessive cognitive overload due to voice commands, as discussed earlier, as well as off-road visual engagement and navigational complexity [35]. Furthermore, a recent study [14] found no empirical evidence that these applications minimize the risk of crashes. Similarly, performing common activities on smartphones and other technologies is a tedious and risky task for drivers; even people in normal daily routines spend about 66% of their effort and time correcting and editing text in automatic speech recognition systems [5]. Various high-quality applications have been introduced but were washed out of the market due to their complex, inefficient, unattractive, static, and confusing user interfaces [36]. Such nonadaptive user interfaces can create frustration, which impacts usability and performance among end users [37]. Adaptive user interfaces can therefore provide significant assistance in overcoming these usability barriers. Researchers from different domains have emphasized the development of adaptive user interfaces and have designed easy-to-use, user-friendly, and accessible interfaces according to HCI guidelines to solve real-world problems [5].

Similarly, various tools and methodologies have been used to automatically generate user interfaces at runtime. The Supple system [4] generates user interfaces based on users’ tasks, preferences, and cognitive abilities; its findings show that novice users can complete a complex task in less than 20 minutes using the generated interface. Multipath user interface systems have been developed that use XML to generate user interfaces on the basis of the current context [38]. The Egoki system is a user interface generator designed for people with disabilities [39]; its purpose is to recommend appropriate user interfaces for selecting multimedia content based on users’ needs. The MARIA system proposed a model-based user interface description language to automatically generate and customize user interfaces for different devices at runtime [40]. The ODESeW system is a semantic web portal built on the WebODE platform that uses an ontology application to automatically generate a knowledge portal of interest [41]; for example, it generates different menus based on users’ interests and adjusts the visibility of content according to their needs. A generic interface infrastructure is presented in the MyUI system, which aims to increase accessibility through an adaptive user interface [42]; MyUI provides runtime adaptation to user preferences, device usage, and work conditions. An XML-based pervasive multimodal user interface framework has been proposed that helps designers target a wide range of platforms supporting multiple languages [43]; its main aim is to turn a monomodal, web-oriented environment into a simplified interface for a variety of platforms. A context-aware framework called ViMos provides adapted information to users through devices embedded in the environment [44]; the system is composed of a set of available widgets that render different data patterns with various visualization techniques to adapt and customize visual layouts within the available area. Finally, a conceptual framework has been designed for Intelligent Adaptive Interfaces (IAIs) to guide interface design through a user-centred design approach and the proactive use of adaptive intelligent agents (AIAs), which provide interface aids to minimize workload and increase awareness; the framework also enables researchers to design knowledge-based systems, such as uninhabited aerial vehicles, using the IAI models [45].

Researchers have proposed numerous tools for designing creative adaptive UIs in heterogeneous domains. An adaptive UI has been designed [46] to prevent and block phone calls and messages in distracting conditions. However, blocking smartphone features is against the will of drivers and is strongly discouraged by them, as discussed earlier; furthermore, that work investigated only limited adaptive effects, such as the speed of the car and the angle of the steering wheel. ICCS [46, 47] is an in-car communication system intended to minimize driver distraction when drivers engage with their cell phones, using speech input and output. However, this system has not been widely adopted because it does not use vehicular contextual information to generate the UI automatically.

Researchers have proposed different adaptation techniques targeting user interface features such as content, layout optimization, navigation, and modality. These existing techniques still have limitations and gaps, as they focus on design-time feature minimization rather than runtime adaptation. Likewise, they cannot be effectively applied to generate the user interfaces drivers need while driving: most rely on a preidentified UI feature set fixed at design time and cannot recommend different modes of interaction, which is essential given the contextual changes of driving scenarios. To the best of our knowledge, no attention has been given to a system that automatically generates user interfaces based on the driver’s history and profile under varying contexts such as speed, road status, noise, and weather.

3. Proposed Framework

Providing cellular connectivity to drivers while avoiding the distractions caused by smartphone usage has been a prime focus of researchers. A number of solutions with varying capabilities and strengths have been presented over the years; however, each has its own shortcomings and limitations. In addition, these solutions have been developed by researchers, academia, and organizations using self-devised methodologies with no common understanding or consensus, resulting in separate islands and a waste of potential, resources, and time. The context-aware adaptive user interface paradigm can potentially resolve these distractions and increase the usability of a smartphone while driving. Therefore, a context-aware adaptive user interface framework is proposed. The proposed framework is intended to be adaptive, flexible, workable, and context-aware in different driving scenarios. Its architecture is pluggable, so external services may be plugged in seamlessly. The framework makes effective use of smartphone and vehicular sensing capabilities to capture and identify different driving contexts (e.g., number of people in the vehicle, road status, weather status, traffic status, speed, noise, vehicle dynamics, and the driver’s interests and preferences) and adjust the smartphone user interface dynamically. The context-dependent simplified user interface improves driver safety by minimizing visual and manual interactions and reducing physical and mental distractions. The framework has a layered architecture consisting of three layers (as shown in Figure 1): the data curation layer, the processing layer, and the UI layer. The schematic diagram of the proposed framework and the flow of information between the components that materialize the context-aware adaptive user interface for a driver is depicted in Figure 2. The layered architecture and schematic diagram are explained in the following subsections.

3.1. Data Curation Layer

The data curation layer is responsible for obtaining data from multiple sources for processing and use by the upper layers. It is divided into several modules, including the interaction module, the sensory module, and the data acquisition and preprocessing module. Initially, driver input can be captured through voice commands, touches, or gestures and stored in a user interaction log for further operations. The driver’s speech input can be captured using the smartphone microphone, the car’s internal infotainment system, or a hands-free Bluetooth device. Sensory input from smartphone sensors as well as vehicle sensors can also be collected; for example, information can be obtained from the Global Positioning System (GPS), accelerometer, light, noise, and gyroscope sensors. The GPS is used to find the location, altitude, direction, and speed of the car. Information from online sources (i.e., web services) can also be used to obtain weather information, temperature, wind speed, humidity, and so forth. The status of a road can be detected using accelerometer data. Vehicular data can be obtained from the Controller Area Network (CAN) via the standard On-Board Diagnostics (OBD-II) port [48]; similarly, data regarding steering angle, brake pressure, and accelerator position can be obtained using a Bluetooth scanner. The captured data are then processed into meaningful contextual information, whose contextual values are used to devise a new mode of interaction for the driver while driving.
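
As an illustration of how this layer might poll one vehicular value, the following Java sketch requests the vehicle-speed PID (mode 01, PID 0x0D, a standard OBD-II parameter reported in km/h) from an ELM327-style Bluetooth OBD-II adapter over already-opened streams. The class and method names are ours; this is a minimal sketch under those assumptions, not DriverSense’s actual acquisition code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

/** Minimal sketch: read vehicle speed via an ELM327-style OBD-II adapter. */
public class ObdSpeedReader {

    private final InputStream in;   // e.g., BluetoothSocket.getInputStream()
    private final OutputStream out; // e.g., BluetoothSocket.getOutputStream()

    public ObdSpeedReader(InputStream in, OutputStream out) {
        this.in = in;
        this.out = out;
    }

    /** Sends "010D" (mode 01, PID 0x0D = vehicle speed) and parses the km/h byte. */
    public int readSpeedKmh() throws IOException {
        out.write("010D\r".getBytes(StandardCharsets.US_ASCII));
        out.flush();

        // ELM327 adapters terminate every response with a '>' prompt.
        StringBuilder raw = new StringBuilder();
        int c;
        while ((c = != -1 && c != '>') {
            raw.append((char) c);
        }

        // A typical reply is "41 0D 3C", where 0x3C = 60 km/h.
        String[] tokens = raw.toString().trim().split("\\s+");
        for (int i = 0; i + 2 < tokens.length; i++) {
            if ("41".equals(tokens[i]) && "0D".equalsIgnoreCase(tokens[i + 1])) {
                return Integer.parseInt(tokens[i + 2], 16);
            }
        }
        throw new IOException("Unexpected OBD-II response: " + raw);
    }
}
```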

3.2. Processing Layer

The processing layer is the core layer of the proposed system and is responsible for processing and storing the contextual information received from the data curation layer. Receiving contextual information, identifying the user context, building the user information models, and transforming the user interface into an appropriate layout are the responsibilities of this layer. To simplify its operation, the layer is divided into three main modules: information model building, the adaptation rule manager, and transformation.

3.2.1. Information Model Building

This module focuses on the development of different models, based on which adaptation rules are created in the online and offline phases. These models include the driver model, vehicle model, device model, and context model; their main classes are shown in Figure 3. These models and the associated rules can be considered the baseline requirements for generating context-aware adaptive user interfaces. The driver model stores information about the driver’s demographics, cognition, sensing power, and experience. The demographic information covers driving skills, education, and age, while cognition includes the driver’s attention, learning ability, perception, and concentration. The driver’s sensory information is modelled as hearing, sight, and touch sensitivity, which directly affect interaction with the system. Experience is modelled as the level of satisfaction with the user interface after it changes according to the context.

The vehicle model stores information about the vehicle (i.e., type of vehicle, type of transmission, capacity, safety features, types of telematics, etc.). The type of vehicle includes the manufacturer and model, and the transmission system may be automatic or manual, which also affects interaction with the system.

The capacity is modelled by the maximum number of passengers in the vehicle. The safety features include brake assist, automatic emergency braking, and adaptive cruise control. Device information is stored in the device model (e.g., device type (i.e., smartphone, smartwatch, or other infotainment system), screen size, screen resolution, display type, interaction mode, input/output capabilities, connectivity, etc.); this information is essential for efficient adaptation of the user interface. Furthermore, the user’s preferred mode of interaction also contributes to better user interface adaptation. The context model stores information about the environment and context (e.g., road condition, weather, noise, light, temperature, location, time, speed, traffic condition, etc.) and is composed of user, platform, vehicle, and environment components (as shown in Figure 4). Once the models are built, they are passed to the adaptation rule manager.
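
To make the structure of these models concrete, the following sketch encodes the four models as plain Java value classes. The fields mirror the attributes listed above, but the class and field names are illustrative assumptions, not the actual classes of Figure 3.

```java
/** Illustrative value classes for the four information models (names are ours). */
public class InformationModels {

    public static class DriverModel {
        public int age;
        public String education;
        public int drivingSkillLevel;      // demographics
        public int attention, perception;  // cognition
        public int hearing, sight, touch;  // sensing power
        public int uiSatisfaction;         // experience after adaptation
    }

    public static class VehicleModel {
        public String make, model;
        public boolean automaticTransmission;
        public int passengerCapacity;
        public boolean brakeAssist, emergencyBraking, adaptiveCruise;
    }

    public static class DeviceModel {
        public String deviceType;          // smartphone, smartwatch, infotainment
        public double screenSizeInches;
        public String interactionMode;     // touch, voice, gesture
    }

    public static class ContextModel {
        public String roadCondition, weather, trafficCondition;
        public double noiseDb, lightLux, temperatureC;
        public double speedKmh;
        public int passengersOnBoard;
    }
}
```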

3.2.2. Adaptation Rule Manager

The information models are input to the adaptation rule manager, where the concepts associated with different contextual dimensions are selected from these models. The adaptation rules are specified in the form of events, conditions, and actions [49], an approach used extensively in [50, 51] to provide adaptive UIs. The event part of a rule comprises the associated event whose occurrence triggers evaluation of the rule. The condition part is a Boolean condition that must be satisfied for the action part to execute.

The action part may comprise one or more simple actions indicating how the description of the proposed UI should be changed to perform the adaptation process. Rules are triggered by contextual cues, which can depend on various aspects (i.e., user preferences, environmental changes, etc.). The UI or mode of interaction is changed according to the adaptation rules (e.g., the user interface changes from vocal to graphical when the environment is noisy). The proposed adaptation rules for generating the context-aware adaptive UI for drivers are listed in Table 1, and their threshold values are described in Table 2.
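
The event-condition-action structure can be sketched in Java as follows. The rule shown (switch from a vocal to a graphical interface when the cabin is noisy) is the example given above; the 70 dB threshold and all type names are illustrative assumptions rather than the values of Tables 1 and 2.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

/** Minimal event-condition-action (ECA) rule engine sketch; names are illustrative. */
public class AdaptationRuleManager {

    public enum Event { NOISE_CHANGED, SPEED_CHANGED }

    /** Snapshot of the contextual values that rule conditions inspect. */
    public static class Context {
        public double noiseDb;
        public double speedKmh;
        public String uiMode = "vocal"; // current mode of interaction
    }

    /** A rule fires on an event, tests a Boolean condition, then runs its action. */
    public static class Rule {
        final Event event;
        final Predicate<Context> condition;
        final Consumer<Context> action;

        public Rule(Event event, Predicate<Context> condition, Consumer<Context> action) {
            this.event = event;
            this.condition = condition;
            this.action = action;
        }
    }

    private final List<Rule> rules = new ArrayList<>();

    public void register(Rule rule) { rules.add(rule); }

    /** Called whenever a contextual event occurs. */
    public void onEvent(Event event, Context ctx) {
        for (Rule r : rules) {
            if (r.event == event && r.condition.test(ctx)) {
                r.action.accept(ctx);
            }
        }
    }

    public static void main(String[] args) {
        AdaptationRuleManager manager = new AdaptationRuleManager();
        // Example rule from the text: noisy cabin -> switch vocal UI to graphical.
        // The 70 dB threshold is an assumed value, not taken from Table 2.
        manager.register(new Rule(
                Event.NOISE_CHANGED,
                ctx -> ctx.noiseDb > 70 && "vocal".equals(ctx.uiMode),
                ctx -> ctx.uiMode = "graphical"));

        Context ctx = new Context();
        ctx.noiseDb = 82;
        manager.onEvent(Event.NOISE_CHANGED, ctx);
        System.out.println("UI mode after adaptation: " + ctx.uiMode); // graphical
    }
}
```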

3.2.3. Transformation

This module ensures the delivery of a personalized user interface to drivers while driving. The user information model and context model are fed into the transformation module through the adaptation rules, which generate the appropriate user interface for the driver. As the driver’s context and preferences change over time, the adaptation rule manager automatically fires rules to generate a new instance of the user interface or mode of interaction at runtime. The automatic user interface transformation identifies the common interface elements/features and transforms them into a specific interface through a series of adaptation rules. These rules constitute a knowledge base for drivers, and the transformation module keeps drivers from being visually, mentally, and physically distracted while using a smartphone during driving.

3.3. Adaptive User Interface Generator

The adaptive user interface generator communicates with the transformation module to receive information in real time and visualize the appropriate user interface according to the contextual information and adaptation rules. It implements the action part of the adaptation rule depending upon the content received from the transformation module: it can either render a new simplified user interface or apply specific changes to the existing interface. The generated user interfaces can be multimodal (e.g., voice-based, gesture-based, and tactile-based) and change dynamically according to the context.

4. Implementation

The proposed framework is implemented on the Android platform. Figure 5 shows snapshots of the DriverSense application. The DriverSense app is primarily developed for smartphones; however, it can be deployed on any other platform (e.g., an infotainment system) if the required technologies (e.g., libraries and APIs) and resources (e.g., sensors) are available. The app is developed with all relevant design considerations in view (e.g., privacy and security, battery power consumption, and accessibility) and is flexible enough to accommodate and support upcoming technologies, especially those related to accessibility. On startup, whenever the vehicle status changes to driving mode, the main user interface is divided into subsections within a simplified interface. The app assesses the driver’s behavior based on interactions with the user interface and automatically adjusts the icons on the main screen, the font size, and the alert volume based on the context and the driver’s responses. The front screen automatically contains a selection of the most frequently used applications. Furthermore, the settings are adjusted according to the context: for example, if a high noise level is detected, the graphical user interface option is initiated.

Text messaging is found to be the most distracting activity while driving; it diverts the eyes off the road and can lead to accidents and crashes. The DriverSense app handles the text messaging process according to the driving context (i.e., speed, road condition, etc.). For example, if a low speed is detected, such as 30 km/h or less, text messages of 30 characters or fewer are allowed to be read with a maximum adjustable font size, whereas lengthy text messages of more than 30 characters are placed in a read-later queue. An auto-reply message is generated for SMSs from unknown contacts. The DriverSense app divides the SMS reply into categories from which the driver chooses an option; for example, an SMS reply can be offered in three forms (i.e., a standard reply (“I’m driving”), a personal reply (an option to write a short message or auto-reply), and a fun reply (a gossip-type message from friends, which may be skipped)).
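
The text-messaging policy just described reduces to a small decision procedure. The sketch below encodes the stated thresholds (30 km/h, 30 characters) in Java; the class, method, and action names are our own, and queuing known-sender messages at higher speeds is an assumed reading of the policy.

```java
/** Sketch of the SMS policy described above; only the thresholds come from the text. */
public class SmsPolicy {

    public enum Action { READ_WITH_LARGE_FONT, QUEUE_FOR_LATER, AUTO_REPLY }

    private static final double LOW_SPEED_KMH = 30; // "30 km/h or less"
    private static final int SHORT_SMS_CHARS = 30;  // "30 characters" limit

    public static Action decide(double speedKmh, int messageLength, boolean knownSender) {
        if (!knownSender) {
            return Action.AUTO_REPLY;               // unknown contacts get an auto-reply
        }
        if (speedKmh <= LOW_SPEED_KMH && messageLength <= SHORT_SMS_CHARS) {
            return Action.READ_WITH_LARGE_FONT;     // short SMS at low speed is shown
        }
        return Action.QUEUE_FOR_LATER;              // lengthy SMS goes to the later queue
    }

    public static void main(String[] args) {
        System.out.println(decide(25, 20, true));   // READ_WITH_LARGE_FONT
        System.out.println(decide(25, 120, true));  // QUEUE_FOR_LATER
        System.out.println(decide(60, 20, false));  // AUTO_REPLY
    }
}
```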

Likewise, emails and WhatsApp messages can be managed similarly to SMSs. The DriverSense app also effectively manages the driver’s phone calling activities based on the driving scenario. When the app detects the vehicle’s driving mode, a simplified user interface for managing phone calls is launched. Phone call activities are classified into simplified, easy-to-access modes, including a simplified dialer, missed calls, dialed calls, received calls, favorite contacts, and the contact list. These activities can be performed using simple touches or, when there is no external noise, voice commands. When a medium vehicle speed is detected, the dialer activity is automatically sent into the background and the mode of interaction changes to voice; when a high speed is detected, only the favorite contact list remains visible and the other activities are hidden. Furthermore, the DriverSense app manages the call-receiving activity across driving contexts. For example, the receive-call option is displayed for every call if a low speed is detected; an auto-reply SMS option is made accessible alongside the receive-call option if a medium speed is detected (the driver may swipe the receive-call option or simply touch the auto-reply SMS to the caller); and incoming calls from unknown numbers are automatically cancelled with an auto-reply SMS if a high speed is detected. The DriverSense app also manages the navigation activity: it is placed on top in the case of unknown routes, and if the place is familiar (visited five times), the navigation activity is automatically hidden from the main user interface. For unknown routes, the navigation activity informs drivers of their current location on request as well as automatically after a time interval based on their speed. The app automatically announces the points saved by the driver and public points of interest through voice. Furthermore, the DriverSense app automatically sends the web-browsing activity into the background whenever vehicle motion is detected and blocks video watching in any driving scenario.
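
The incoming-call policy can be sketched the same way. The three speed tiers below follow the text, while their km/h boundaries belong to Table 2; the behavior for a known caller at high speed is our assumption, and all names are illustrative.

```java
/** Sketch of the incoming-call policy described above; names are illustrative. */
public class CallPolicy {

    /** Speed tiers as used in the text; their km/h boundaries come from Table 2. */
    public enum SpeedTier { LOW, MEDIUM, HIGH }

    public enum Action {
        SHOW_RECEIVE_OPTION,       // low speed: every call is displayed
        RECEIVE_OR_AUTO_REPLY_SMS, // medium speed: swipe to answer or tap auto-reply
        AUTO_CANCEL_WITH_SMS       // high speed, unknown number: cancel + auto-reply SMS
    }

    public static Action onIncomingCall(SpeedTier tier, boolean knownCaller) {
        if (tier == SpeedTier.HIGH) {
            // Only the unknown-number case is stated in the text; treating known
            // callers like the medium-speed case is our assumption.
            return knownCaller ? Action.RECEIVE_OR_AUTO_REPLY_SMS
                               : Action.AUTO_CANCEL_WITH_SMS;
        }
        return tier == SpeedTier.LOW ? Action.SHOW_RECEIVE_OPTION
                                     : Action.RECEIVE_OR_AUTO_REPLY_SMS;
    }

    public static void main(String[] args) {
        System.out.println(onIncomingCall(SpeedTier.LOW, true));   // SHOW_RECEIVE_OPTION
        System.out.println(onIncomingCall(SpeedTier.HIGH, false)); // AUTO_CANCEL_WITH_SMS
    }
}
```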

5. Experimental Evaluation

To the best of our knowledge, the DriverSense app is the first attempt to demonstrate context-aware adaptive user interfaces for drivers to minimize distractions. Therefore, there are no widely agreed evaluation techniques proposed by researchers. The DriverSense app is tested using a basic research-oriented technique and user-based evaluation to demonstrate its effectiveness, accuracy, and usability. In addition, the evaluation is aimed at investigating a systematic understanding of user experiences in using smartphone applications on DriverSense and at measuring the reduction in visual interactions, physical interactions, and cognitive overload; for the evaluation, hypotheses H1–H5 were formulated (see Table 10 and Section 6.3).

5.1. Evaluation Parameters

The evaluation of DriverSense has been carried out through an empirical study on drivers. Usability methods commonly assess applications through users’ smartphone usage over time. Among others, the most commonly used usability evaluation methods include heuristic evaluation, end-user usability testing, surveys, and cognitive modelling [52]. Similarly, numerous alternative methods have been used for usability, user experience, and accessibility evaluation, including automated checking of conformance to guidelines and standards, evaluation using models and simulations, evaluation conducted by experts, evaluation through users, and evaluation of collected data using keystroke analysis [53]. The DriverSense app is evaluated through an established set of methods, metrics, and usability parameters suggested by Human-Computer Interaction (HCI) research (i.e., ease of use, perceived usefulness, intention to use, operability, understandability and learnability, minimal memory load, System Usability Scale, consistency, and user satisfaction).

5.2. Participants Recruitment

To conduct the empirical evaluation, a sample of 93 participants (79 males and 14 females) was recruited on a voluntary basis from different professional and casual sectors, including truck drivers, taxi drivers, students, businessmen, and employees. The participants were screened for (1) having a valid driving license and more than two years of post-license driving experience and (2) having at least one year of experience using a smartphone while driving. The participants were briefed about the purpose of the study and expressed their willingness to take part. Table 3 details the participants’ demographic profile, educational background, and gender. The DriverSense app was installed on the participants’ smartphones, and initial training on its usage was provided.

5.3. Evaluation Criteria

Three types of experiments (i.e., user satisfaction, user experience assessment, and perceived usability) were performed in the evaluation process. User satisfaction was assessed using the Questionnaire for User Interaction Satisfaction, which measures overall satisfaction with the system in terms of nine user interface (UI) factors [54]. Similarly, the user experience was assessed using the User Experience Questionnaire (UEQ) [55], which allows a quick assessment of the user experience by capturing impressions, feelings, and attitude after using DriverSense; the UEQ measures both user experience aspects and classical usability aspects. Finally, perceived usability was measured with the most widely used instrument, the System Usability Scale (SUS); findings obtained from the SUS are more accurate than those from the Post-Study System Usability Questionnaire (PSSUQ) and the Computer System Usability Questionnaire (CSUQ) when the sample size is greater than 8. We were interested in the user experience, perceived usefulness, and user satisfaction of the drivers when performing common activities on the interfaces shown by the DriverSense app. The effectiveness of each activity was evaluated through a set of usability parameters, including degree of easiness, navigational complexity, consistency, and persistency.

5.4. Evaluation Process

User evaluation of the proposed methodology was performed using the real-world DriverSense and AutoLog [56] Android applications, both installed on the participants’ smartphones. We instructed the participants that the AutoLog application would run in the background to record the drivers’ smartphone activities (e.g., time of activity and activity completion time), vehicle dynamics (e.g., speed, steering angle, brake status, accelerator status, and engine RPM), and environmental data (e.g., location, traffic status, road condition, weather information, temperature, and light intensity) [56]. The participants were assured that the data would automatically be anonymized before being stored in the database to protect their privacy. Furthermore, the AutoLog application automatically stops logging data whenever the driver stops driving. The participants were informed that the logged data would be used only for evaluation, to compare activities performed in native smartphone interfaces with activities performed on DriverSense. After completing the three-month exercise, the participants were asked to fill in the questionnaire to investigate DriverSense user satisfaction, perceived usability, user experience, and efficiency.

6. Results and Discussion

The data were collected through both the questionnaire and the AutoLog application and used to perform two types of analysis: empirical analysis and dataset-based analysis. The purpose of these two analyses is to establish the significance of the DriverSense application.

We carried out different tests in this study and analyzed the statistical data using software packages such as STATA, SPSS, AMOS, and Excel. We first used descriptive tabulation, reporting frequencies and percentages of the categories of the variables. After that, cross-tabulation was performed with cell percentages and cell likelihood-ratio Chi-squared tests. The compiled results are significant in that they give two-way (2 × 2) cell frequency counts and cell percentages along with measures of association of the measurement items. To check the reliability of the variables’ scales, Cronbach’s alpha test was carried out. Furthermore, we performed factor analysis, in which Iterated Principal Factor Analysis (IPFA) was found to perform better than the alternatives. The purpose of these tests is to investigate the relationship between the user experience attributes of the DriverSense user interfaces and attitude, perceived usefulness, ease of use, intention to use, understandability and learnability, minimal memory load, minimal visual interaction, minimal physical interaction, etc. Finally, structural models were estimated to test the study hypotheses.

6.1. Descriptive Test Statistics

The results in Table 4 are self-explanatory, showing descriptive statistics of frequencies and percentages of the categorical indicators of all the variables. For attitude, 60.22% of the respondents chose “very probably” and 31% chose “definitely” to use DriverSense. In terms of intention to use DriverSense, 49% and 29% chose “very probably” and “definitely,” respectively. In terms of perceived usefulness, 69% of the respondents agreed and 12.90% strongly agreed, showing that more than 80% perceived the app as useful.

For understandability and learnability, more than 90% found DriverSense understandable and easy to learn. About 80% were satisfied with the operation of DriverSense, and 77% agreed that it is easy to use. In terms of system usability, 74% were in agreement with the software’s usability, while 18% chose “probably.”

Regarding minimal memory load, 88% moderately agreed, 7% strongly agreed, and only 4% slightly agreed, showing that more than 90% were in agreement, which means that DriverSense requires a significantly low memory load. For minimal visual interaction, the results were similar: 63% moderately agreed, 14% strongly agreed, and 22% slightly agreed. For minimizing physical interaction, 48% moderately agreed, 19% strongly agreed, 30% slightly agreed, and a negligible 2% moderately disagreed. These results show that the majority of respondents agreed that DriverSense did not demand much memory load or visual and physical interaction. Finally, 66% were very satisfied, 21% were extremely satisfied, and 12% were moderately satisfied with the usefulness of DriverSense.

In Table 5, the cross-tabulation of cell percentages and LR Chi-squared test statistics is presented. These results are significant in that they give two-way (2 × 2) cell frequency counts and cell percentages along with measures of association of the measurement items. Cell frequencies and percentages indicate more precisely how much each category of one factor contributes to each category of the second factor. We have also calculated cell test statistics, which measure the association of each cell’s contribution to the LR Chi-square of both factors. The significant coefficients of the cell LR Chi-squared test statistics are marked with an asterisk (*) at different levels of significance.

6.2. Data Reliability and Factor Analysis

Cronbach’s alpha tests were carried out to measure the reliability or, more specifically, the internal consistency of the scales of the measurement items [57, 58]. The alpha is measured for each measurement item (factor), and the alpha score represents the expected squared correlation of one scale (also called a test) of an item with all other scales (the correlation between observed and true values). Here the coefficient of scale reliability is 0.68 (≈0.7), which is good, and the alpha score for each item ranges from 0.67 to 0.68. This shows that our scale items are reliable and internally consistent. For reference, an alpha value of 0.70 and above is considered good, and 0.60 is acceptable [57, 59]; however, a good alpha score varies with the nature of the study and the scales of the measurement items. In Table 6, the observations in the alpha column show the number of nonmissing values of the measurement items, while the sign shows the direction of the scale correlation. The item-test coefficient shows the strength of correlation of each item with the scales of all other items, while the more robust rest-item coefficient (corrected item-total correlation) shows the strength of correlation with the scales of the other 60 items only. The higher the item-test and item-rest correlation coefficients, the better the items fit. The average interitem correlation shows the average correlation between the measurement items. Scale reliability of the measurement items has a theoretical relationship with factor analysis, as it is assumed that the factor loadings contribute almost equal information to the score [60]. We carried out all types of factor analysis of the measurement items (Principal Factor Analysis (PFA), Principal Component Factor Analysis (PCFA), Iterated Factor Analysis (IFA), and Maximum Likelihood Estimation (MLE)) but report the Iterated Principal Factor Analysis because it retained 60 of the 61 factors (measurement items).
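
For reference, Cronbach’s alpha for k items (here k = 61) is computed from the item variances and the variance of the total score:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
```

where \(\sigma^{2}_{Y_i}\) is the variance of item \(i\) and \(\sigma^{2}_{X}\) is the variance of the total score.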

Several studies have shown that PCFA is the most commonly used factor analysis. We nevertheless preferred IPFA over PCFA because of the lower uniqueness values of the former, and there is no significant difference in factor retention between the two analyses, despite the fact that PCFA retained all 61 factors. Secondly, PCFA assumes a uniqueness of 0, but here the uniqueness values were all higher than those of IPFA. In a factor model, uniqueness is the variance of a particular variable that is not explained by the factors in the model; the results are presented in Table 7. Higher uniqueness values indicate higher measurement error, i.e., a variable with a high uniqueness is not well explained by the factor model. For comparison, PCFA has higher uniqueness than IPFA.

In terms of interpretation of the FA results, the eigenvalues show the amount of variation (variance) explained by a particular factor out of the total variation. In IPFA, 60 of the 61 factors contributed to the total variance, as their eigenvalues are above 0 (positive eigenvalues), but the first 23 factors are stronger than the rest because their eigenvalues are above 1. The difference column shows the difference between one eigenvalue and the next, while the proportion, which is more important to discuss, shows the share of a particular factor’s explained variation in the total variation. Finally, the LR test for the factor model is significant, showing low factor saturation, which is good.

Based on the nature of our variables, we estimated Kendall’s tau-b rank correlation coefficients. Table 8 shows that there is no multicollinearity issue in the data (responses to the scales of the variables). Kendall’s tau-b correlation coefficients show the independence of the responses across the factors’ scales, which is good for the analysis. The values marked with an asterisk (*) in Table 8 indicate significant correlations. The IPFA results for the independence versus the saturated model are similar to the correlation matrix results.

6.3. Model Summary and Fitness

The measurement model had 61 items for 8 latent variables, and we estimated the absolute, relative, parsimony, and noncentrality fit indices, i.e., Chi-square/d.f., Comparative Fit Index (CFI), Normed Fit Index (NFI), Incremental Fit Index (IFI), Tucker-Lewis Index (TLI), Parsimonious Comparative Fit Index (PCFI), Parsimonious Normed Fit Index (PNFI), Relative Fit Index (RFI), and RMSEA for the model’s assessment. The results show good model fitness with Chi-square/d.f. = 1.227, CFI = 0.543, NFI = 0.84, IFI = 0.825, TLI = 0.5, PCFI = 0.5, RFI = 0.15, PNFI = 0.542, and RMSEA = 0.05. The model estimates the measurement items with their standard errors and probability values.

These measures indicate that the estimated covariance matrices of the proposed model, as well as the observed model, are significant and satisfactory. Figure 6 shows the final structural model generated from the relationships of the latent variables, and Table 9 shows the model estimates of the measurement items with their standard errors and probability values.

With respect to hypotheses H1, H2, H3, H4, and H5, we reject the null hypotheses, as the structural model has significant positive estimates, as shown in Table 10. The structural model gives p values less than 0.05, which means that the DriverSense app minimizes mental, visual, and physical interaction and significantly improves user satisfaction. In terms of attitude, we have significant positive estimates, which shows that respondents have a positive attitude towards the usage of DriverSense. Similarly, understandability and learnability and intention to use the app have significant positive estimates, showing positive perceptions of the proposed solution.

6.4. Analysis through AutoLog Dataset

The AutoLog application is used to log data about drivers’ interactions with common smartphone applications [56]. The logged data contain information about the different operations carried out in a smartphone application, such as the number of activities used to perform a task and the number of input taps. The common smartphone applications include Calls, SMS, E-mail, WhatsApp, Navigation, and Weather. As discussed earlier, these applications and their interfaces are designed from the perspective of a normal user: their activities are redundant or repetitive, have a complex structure, follow long routes, and so forth. The logged data obtained from the smartphone native interfaces were analyzed and compared with the data obtained from DriverSense for the same common smartphone activities. The DriverSense interfaces were found to be less complex, with fewer activities and input taps; the comparison is shown in Table 11. To investigate the performance of DriverSense, the AutoLog data generated from the DriverSense app during the participants’ normal operations were analyzed and compared with the AutoLog dataset generated from the smartphone native interfaces. The findings, shown in Figure 7, indicate that DriverSense requires comparatively less visual and physical attention to perform smartphone activities while driving than the native interfaces. This is because the DriverSense interfaces are simplified, adaptive, and consistent, with a minimum number of activities and input taps; since most activities can be performed automatically based on context, drivers’ interactions are minimized. The results of the analysis are discussed in the following sections.

6.4.1. Automatic Response

Since the DriverSense user interfaces change according to the driver’s context, most activities are performed automatically. From the dataset, the activities automatically performed by DriverSense include auto-reply, auto-skipping of lengthy SMSs and SMSs from unknown senders, and auto-reply to unknown calls at high speed. The operations automatically performed by the DriverSense user interface are compared with smartphone native interfaces and other technologies (i.e., Android Auto, CarPlay, etc.), and the results are shown in Table 12.

6.4.2. Steering Wheel Control Variations

The datasets also captured steering wheel control variations while driving. Steering wheel control was analyzed while drivers performed smartphone activities in both the smartphone native interfaces and DriverSense. Comparatively large steering wheel variations were observed when drivers performed common activities such as SMS and phone calls using the native interfaces, whereas significantly smaller variations were observed when the same activities were performed on DriverSense. A comparison of the steering wheel control variations while receiving a voice call is depicted in Figure 8.

6.4.3. Speed Variations

Speed variation data were also captured while performing activities such as attending a call and reading and replying to text messages, using both the smartphone native interfaces and the DriverSense interfaces. Significant speed variations were observed when drivers attended calls and read and replied to text messages on the native interfaces: the speed dropped from approximately 80 km/h to 50 km/h. By contrast, the data extracted from the DriverSense dataset show smaller speed variations than the native interfaces. A comparison of the speed variations is depicted in Figure 9.

7. Conclusions

The usage of a smartphone while driving is a global phenomenon and has been acknowledged as a major source of accidents and crashes. Using a smartphone while driving demands considerable visual interaction, physical interaction, and mental workload, which drivers cannot afford, as taking the eyes off the road for two seconds increases the chance of an accident twenty-four-fold. Researchers have tried to minimize drivers’ visual, physical, and mental distractions with the help of supportive technologies. However, the available solutions are not designed with the assumption that drivers have certain physical, visual, and cognitive limitations, which vary with the driving context.

In this paper, we have designed and developed a context-aware adaptive user interface framework named DriverSense to minimize driver distractions and the subsequent catastrophes. The proposed framework uses contextual information and information models to minimize drivers’ distractions by providing an adaptive, semantically consistent, simplified, context-sensitive, and task-oriented user interface design. The efficiency of the proposed solution with respect to the adaptive user interface is found to be significant and acceptable in terms of usability and user satisfaction. The users’ experiences after using DriverSense were measured through a questionnaire and evaluated along different dimensions, such as the driver’s attitude towards DriverSense usage, intention to use the app, perceived usefulness, understandability and learnability, operability, ease of use, System Usability Scale, minimal memory load, minimal physical and visual interaction, and user satisfaction. The results indicate that DriverSense significantly reduces the drivers’ distractions caused by cognitive overload, visual interactions, and physical interactions. Furthermore, the results also show that DriverSense is more robust, adaptable, and easier to use than other infotainment solutions.

Data Availability

The data that support the findings of this study are available upon request from the first author, Mr. Inayat Khan ([email protected]).

Conflicts of Interest

The authors declare that they have no conflicts of interest.