Mobile Information Systems
Volume 2018, Article ID 6941631, 9 pages
Research Article

User Evaluation of the Smartphone Screen Reader VoiceOver with Visually Disabled Participants

1Department of Information and Communication Technology, University of Agder, Grimstad N-4879, Norway
2Kongsgård School Centre, Kristiansand N-4631, Norway
3Department of Health and Nursing Science, University of Agder, Grimstad N-4879, Norway

Correspondence should be addressed to Berglind F. Smaradottir; berglind.smaradottir@uia.no

Received 3 August 2018; Revised 30 October 2018; Accepted 11 November 2018; Published 2 December 2018

Guest Editor: Giuseppe De Pietro

Copyright © 2018 Berglind F. Smaradottir et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Touchscreen assistive technology is designed to support speech interaction between visually disabled people and mobile devices, allowing them to interact with a touch user interface through hand gestures. From a global perspective, the World Health Organization estimates that around 285 million people are visually disabled, with two-thirds of them over 50 years old. This paper presents a user evaluation of VoiceOver, the built-in screen reader in Apple Inc. products, with a detailed analysis of the gesture interaction, of familiarity and training among visually disabled users, and of the system response. Six participants with prescribed visual disability took part in the tests in a usability laboratory under controlled conditions. Data were collected and analysed using a mixed methods approach, with quantitative and qualitative measures. The results showed that the participants found most of the hand gestures easy to perform, although they reported inconsistent responses and a lack of information associated with several functionalities. User training on each gesture was reported as key to enabling the participants to perform certain difficult or unknown gestures. This paper also reports on how to perform mobile device user evaluations in a laboratory environment and provides recommendations on technical and physical infrastructure.

1. Introduction

Over the last decade, touchscreen technology has been increasingly used not only in multiple types of devices, such as smartphones and tablets [1–3], but also in photocopying machines, automated teller machines (ATMs), and ticket machines at bus stations, railway stations, and airports. Reviews from the perspective of human factors and ergonomics, as well as studies of people with developmental disabilities, have pointed out the relevance of the specific context of system interaction for maximizing safety, performance, and user satisfaction [4] and the need for more research [5]. Touchscreens require the use of fingers and a choreography of gestures for interaction between the user and the device’s user interface (UI) [6, 7]. However, this type of screen interaction can represent a challenge for visually disabled users, because the screens are designed to provide visual feedback while the system is in use [8].

The World Health Organization (WHO) estimates that the number of people with visual disability is around 285 million globally and that about two-thirds of them are older than 50 years [9, 10]. Traditionally, visually disabled people have used different assistive technology devices, such as an external keyboard, a braille terminal, or a screen reader that provides speech feedback related to the visual elements on the screen. Mobile phones with physical buttons are still functional for many visually disabled people because the surface and rugosity of the buttons provide palpable guidance when using the device. However, this type of communication device has become less popular in favour of smartphones with touchscreens, which currently dominate the market. Smartphones with touchscreen interaction mainly incorporate visual and sound feedback for communication with the user. This type of communication represents a challenge for UI navigation for visually disabled people, who cannot see the screen in sufficient detail and receive no tactile feedback from on-screen buttons [11]. Several solutions are available in the market to improve the accessibility of smartphone technology for visually disabled people [12–14]. Some of these solutions are standalone products, and others are used in conjunction with other technology. One of the products available is VoiceOver [12], the integrated screen reader in Apple Inc. products. VoiceOver allows users to interact with the UI through gestures, with speech feedback to guide the navigation. The screen reader has been included in Apple Inc. products since April 2005 in Mac OS X 10.4, since June 2009 in iPhone 3GS OS 3.0, and in iPad OS 3.2 since its introduction in April 2010. VoiceOver has to be activated in the device’s settings, and when activated, the device provides speech feedback when a user interacts using hand gestures on the touchscreen.
There are different gestures that can be performed on the UI, and they provide immediate feedback interpreted by the screen reader. For instance, tap with one finger and drag will read the item in the cursor (selected), and four-finger tap near the top of the screen will read the first item at the top. The gestures must be made with the fingers, and the screen reader does not respond to voice commands or sense motion.
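Conceptually, the screen reader behaves as a mapping from a (gesture, finger count) pair to a spoken response. The following Python sketch models a subset of the gestures used in this study; the gesture names are our own shorthand and the responses are paraphrased from the task descriptions in this paper, not Apple's API or its exact speech output.

```python
# A toy model of the screen reader as a lookup from (gesture, finger count)
# to the spoken response. Gesture names are our own shorthand; responses are
# paraphrased from this paper's task descriptions, not Apple's API.
VOICEOVER_GESTURES = {
    ("tap", 1): "speak the item under the cursor",
    ("flick_up", 2): "read the page from the top",
    ("flick_down", 2): "read the page from the bottom",
    ("flick_left_or_right", 3): "swipe between screens",
    ("tap_top", 4): "read the first item at the top of the screen",
    ("double_tap", 2): "terminate the current phone call",
}

def describe(gesture: str, fingers: int) -> str:
    """Return the expected speech response, or a fallback for unknown input."""
    return VOICEOVER_GESTURES.get((gesture, fingers),
                                  "no feedback (unrecognized gesture)")
```

This lookup-table view also makes explicit why an unrecognized or badly executed gesture yields no feedback at all, which, as reported later in the results, is one source of "inconsequent" system responses.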

In this context, the research project “Visually impaired users touching the screen—A user evaluation of assistive technology” aimed at evaluating the accessibility and usability of a screen reader for touchscreens in smartphones [15]. This paper presents the results from the evaluation of the usability and the accessibility of the screen reader VoiceOver (iOS 7.1.2), which is an integrated functionality in iPhone mobile devices. In addition, the paper provides recommendations on technical and physical infrastructure to perform an evaluation of mobile devices in a laboratory environment.

The three research questions (RQs) targeted by this study were as follows:
RQ1: What is the user experience of visually disabled users when interacting with VoiceOver?
RQ2: How does the VoiceOver screen reader respond to a set of 16 hand gestures performed during a user evaluation?
RQ3: What technical infrastructure is suitable for an evaluation of mobile assistive technology with visually disabled users?

Following this introduction, the research methodology and the technical test infrastructure are described. The results are presented based on the user evaluation outcomes and experience related to the test infrastructure. Furthermore, a discussion of the main results is provided followed by a summary of the research contributions and conclusions.

2. Materials and Methods

A mixed methods research approach was employed in the evaluation of the screen reader [16–18], with quantitative and qualitative measures. The evaluation was conducted in three phases: (1) individual user training at the participant’s home and an introduction to the gestures a few days before the test, supplemented by written instructions sent by e-mail; (2) a usability test in a controlled laboratory environment, including a pretest interview for collecting participant background information; and (3) a posttest interview for qualitative analysis of the test output. The research team had three members, whose backgrounds were health technology, educational training with assistive technology, and clinical practice. All research team members had professional experience in working with people with visual disabilities.

In the initial preparation of the study, phone interviews were conducted with three key informants with expertise in visual disabilities, who worked at the Norwegian State Agency for Special Needs Education Service (StatPed) [19]. The goal of the interviews with the key informants was to gather insights on assistive technology for visually disabled people. Based on the interviews, a pilot test of the evaluation was prepared with a comparison of Android and Apple tablet devices. Two voluntary members of the Norwegian Association of the Blind and Partially Sighted [20] participated in the pilot test, running several tasks. Afterwards, a focus group interview was conducted in order to better understand the interactions and any problems that the users encountered. In both the phone interviews and the pilot test, the informants explained that, in their experience, the iPhone was the most commonly used and preferred smartphone among their peers, including visually disabled people. Based on that information, an iPhone 4 (iOS 7.1.2) was chosen for the study (the device can be seen in Figure 1), because it was widely available and had the VoiceOver screen reader integrated. The tasks were inspired by the standard gesture descriptions in the VoiceOver guide manual [21].

Figure 1: The smartphone used in the test.
2.1. Recruitment of Participants

The recruitment of participants was made in collaboration with the Norwegian Association of the Blind and Partially Sighted [20]. In addition, the professional network of one of the researchers, with expertise in teaching and user training of assistive technology, was used to support the recruitment process. The first contact with the participants was a phone conversation to inform them about the study. The second contact was an e-mail with information about the study and a consent form to be signed by each participant. Six visually disabled people were recruited to participate in the user evaluation; see Table 1 for the distribution of participants. They had a mean age of 42.8 years and an average of 1.9 years of user experience with VoiceOver. All the participants had previous experience with using a screen reader for desktop and/or laptop computers.

Table 1: The background of the test participants.
2.2. Test Procedure

In the first phase of the evaluation, each participant had individual user training at home (Figure 2) on 16 specific hand gestures for screen interaction. The individual user training lasted 15–30 minutes (with an average of 21.7 minutes), led by a member of the research team. The gestures that a user knew in advance and those learned during the training were registered during the training session.

Figure 2: User training of VoiceOver gestures at a participant’s home.

The second phase was executed in a usability laboratory. One of the researchers acted as the moderator and sat beside the test participant. The participants were informed about the subsequent test and signed a consent form before the test began. Demographic information and user experience with specific technical devices were also collected. Each user evaluation followed the same test plan, with a set of 16 tasks related to the use of gestures for touchscreen interaction. The moderator guided the participants through the tasks and asked them to speak out loud during the task solving (Figure 3), following a think aloud protocol [22–24].

Figure 3: The moderator (left) guiding a participant (right) through the task solving in the test room.

The task solving was followed by a posttest individual interview (third phase). The participants were asked to score the gesture performance and task solving, choosing among three categories: “easy,” “medium,” or “difficult.” In addition, problems or obstacles observed or reported were discussed. The interviews also covered the general user experience with the smartphone and the first-time use of the VoiceOver.

Each test session (second and third phases) lasted between 90 and 120 minutes, and a total of six test sessions were run across three separate days.

2.3. Technical and Physical Test Infrastructure

The evaluation was executed in the usability laboratory at the Centre for eHealth of the University of Agder, Norway [25]. The usability laboratory consisted of two rooms: a test room and a control room, connected through a one-way mirror facing the test room. In the test room, the moderator was placed together with a test participant, and in the control room, two observers followed the test on monitors and directly through the one-way mirror. The technical and physical infrastructure is shown in Figure 4.

Figure 4: The technical and physical test infrastructure.

For replicability and information purposes, the technical material and equipment used during the study are presented below grouped by rooms.

Test room:
(i) Apple Inc. iPhone 4 MD128B/A iOS 7.1.2 with VoiceOver activated
(ii) Fixed camera: Sony BRCZ330 HD 1/3 1CMOS P/T/Z 18x optical zoom (72x with digital zoom) colour video camera
(iii) Portable camera: Sony HXR-NX30 series
(iv) Apple Inc. iPad MD543KN/A iOS 8.1 for additional sound recording
(v) Sennheiser e912 condenser boundary microphone
(vi) Landline phone communication

Control room:
(i) Stationary PC: HP Z220 CMT workstation, Intel Core i7-3770 CPU @ 3.4 GHz, 24 GB RAM, Windows 7 Professional SP1 64 bit
(ii) Monitor: 3x HP Compaq LA2405x
(iii) Remote controller: Sony IP Remote Controller RM-IP10
(iv) Streaming: 2x Teradek RX Cube-455 TCP/IP 1080p H.264
(v) Software: Wirecast 4.3.1
(vi) Landline phone communication

2.4. Data Collection

The test sessions were audio-visually recorded in the F4V video file format. The recordings from two audio-visual sources were merged into one video file using the software Wirecast v.4.3.1 [26], with multiple video perspectives and a single audio channel. The files were exported to the Windows Media Video (WMV) format and then imported into the qualitative software tool QSR NVivo 10 [27]. The recordings were transcribed verbatim and categorized for a qualitative content analysis [28]. Quantitative measurements of the time and number of attempts in the task solving were made as part of the analysis of the recordings. In addition, the research team made annotations during the test sessions that were included in the data collection (Figure 5).

Figure 5: The control room showing the visual access to the test room through the one-way mirror.
2.5. Ethical Approval

The Norwegian Centre for Research Data [29] approved this study with the project number 40636. All participants received verbal and written information about the project and confidential treatment of their collected data. They were informed that their participation was voluntary, and each participant signed a consent form. The participants were aware that they could withdraw at any time without reason. In that case, their data would be consequently withdrawn and deleted. For health and safety reasons, each test participant was thoroughly informed about the physical environment before entering the test room and the participants were never left alone in the laboratory facilities.

3. Results

All six participants went through the laboratory test. The test results are presented divided into three categories: user training, quantitative metrics from the user tests, and qualitative outcome of the posttest interviews.

3.1. Pretest User Training

The familiarity with the VoiceOver gestures registered in the user training is presented in Table 2. The registration showed that all participants knew the double tap gesture (number 4) and the three-finger flick to the left or right (number 10). Five out of six were familiar with the one-finger tap gestures (numbers 1–3). For gestures 6 and 7, the four-finger tap at the top or the bottom of the screen, five out of six participants did not know them in advance.

Table 2: Familiarity per participant with the VoiceOver gestures in the pretest user training.
3.2. User Evaluations

The quantitative measurements from the user evaluations are presented in Table 3, separated into six columns. The first column describes the 16 VoiceOver standard gestures that were used to solve the associated tasks. The tasks are described in the second column. The third column displays the average number of attempts needed to solve the task. The fourth column shows the average task solving time, measured in seconds. The fifth column presents the system response to the gesture interaction, differentiated into the categories “consequent” and “inconsequent” speech feedback. In usability studies, task accuracy is often categorized as completed or not completed [23, 30]. In this particular test, there was an additional variable related to task performance: the feedback that the system provided when a participant performed a specific action. The category “consequent feedback” therefore referred to the system appropriately providing feedback that corresponded to the hand gesture performed by a participant, while “inconsequent feedback” referred to system feedback that did not correspond to the hand gesture performed, or to the absence of any feedback. The sixth column specifies the type of inconsequent response that occurred.

Table 3: Quantitative metrics of the user evaluations.
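The per-task quantities in the third to fifth columns of Table 3 can be derived mechanically from session logs. The Python sketch below illustrates the aggregation, assuming a hypothetical record format; the field names are ours and the sample values are illustrative, not the study's actual data.

```python
from statistics import mean

# One record per (participant, task) attempt set: number of attempts,
# task solving time in seconds, and whether the speech feedback matched
# the performed gesture. Field names and values are illustrative only.
sessions = [
    {"task": 16, "attempts": 1, "seconds": 4.0, "feedback_matched": False},
    {"task": 16, "attempts": 1, "seconds": 3.5, "feedback_matched": True},
    {"task": 16, "attempts": 2, "seconds": 6.0, "feedback_matched": False},
]

def summarize(records, task):
    """Aggregate one task's rows into the Table 3 style metrics."""
    rows = [r for r in records if r["task"] == task]
    return {
        "avg_attempts": mean(r["attempts"] for r in rows),
        "avg_seconds": mean(r["seconds"] for r in rows),
        # Label the response "consequent" only if every recorded system
        # response matched the gesture; any mismatch or missing feedback
        # makes the task's feedback "inconsequent".
        "feedback": ("consequent"
                     if all(r["feedback_matched"] for r in rows)
                     else "inconsequent"),
    }
```

Under this scheme, a single mismatched or absent response is enough to flag a task's feedback as "inconsequent", mirroring how task 16 (two-finger double tap) is classified in the results even though some participants received correct feedback.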

The performance of the three different one-finger tap gestures (tasks 1–3) for speaking the item in the cursor required many attempts to succeed, although the system response was consequent. The double tap and split-tap gestures (tasks 4-5) were easy and fast for the participants to perform. The four-finger tap gestures at the top and bottom of the screen (tasks 6-7) were reported as technically difficult to perform by the participants, which was also reflected in the task solving time. The two-finger flick up and down gestures, to read the page from the top or bottom (tasks 8-9), were easy to perform and showed consequent speech feedback. The three-finger flick and tap gestures (tasks 10-11) were reported as easy to perform, but there was an inconsequent system response related to insufficient speech feedback when informing about the current page. For the rotor-related tasks, 12 and 13, two of the participants needed several attempts (7 and 41) to find the rotor settings, but adjusting the speed of the speech feedback was easier. The three-finger double and triple tap gestures (tasks 14-15) were easy to perform, with quick task solving. The two-finger double tap in task 16, to terminate a phone call, was easy to perform, but there was inconsequent feedback from the system and the phone call was not terminated in three out of six tests.

3.3. Posttest Interviews

The participants graded the performance of gestures and task solving (Table 4) during the individual posttest interview.

Table 4: The grading of the task solving made by the participants in the posttest interview.

Five of the gestures in the task solving were categorized as “easy” to perform, such as the one-finger double tap and the three-finger double and triple taps. Six gestures were categorized as “easy” or “medium,” such as the one-finger flick up and down and the three-finger tap. Some gestures were categorized as “difficult” by two participants, such as the four-finger tap at the bottom and the top of the screen and the two-finger double tap. The task for the two-finger double tap was termination of a phone call, and in the interviews, the participants confirmed that, both during the test and in general, the gesture was associated with inconsistent system behaviour. For the rotor-related gestures, one participant emphasised the importance of user training for succeeding with the specific use of the rotor function.

Regarding the first-time user experience, all participants needed user training to be able to start using the smartphone and to activate the screen reader VoiceOver. Three had family or friends who helped them with the first-time use, one attended a course organized by the Norwegian Association of the Blind and Partially Sighted, and two worked it out by themselves, explaining that VoiceOver itself provides user training and guidance by announcing which gesture to perform for an action. Four participants stated: It was a bit complicated with first-time set up of the new phone with apple-id and activation of VoiceOver, besides that it is easy to use. […] After user training, when I understood how the system worked, I found it easy to use. […] The functions make sense, and there is a logical structure. […] It was terrible in the beginning, because I knew none of the gestures and I wanted to throw the phone away, but the price stopped me from doing it … now I find it fantastic!

Two participants highlighted the benefits of the smartphone: I like that I can buy it myself in the store, I did not need to apply for and receive assistive technology from the municipal services. […] This is the first device I use with built-in accessibility, as the screen reader is included.

Two participants described how the use of the screen reader had increased their self-management: I feel more included in the society, now I can use the Internet and check the same apps as other people do, such as Facebook, weather forecast and reading news. […] It is a feeling of freedom when the phone can read messages for you when you are outdoors, before I had to ask people I did not know about reading from the screen if I received a message, I can now manage it myself and that is a new world for me. In addition, one participant expressed: VoiceOver has made my life much easier and I have become much more independent. Everyone with a visual impairment should use a phone with it.

However, text input with the VoiceOver keyboard was reported as complicated by four participants, who for this reason preferred to use an external keyboard. Another participant stated: It was hard in the beginning with the virtual keyboard, but with some training I overcame the difficulties. Five participants said that at home they preferred to use a desktop or laptop computer with a reading list, because text input was quicker than on the smartphone, relying on the latter when they were out of the home. Two participants expressed that it was easier for them to navigate on a small screen than on a larger tablet screen.

4. Discussion

This paper has presented a user evaluation of the Apple screen reader VoiceOver (iOS 7.1.2) with six visually disabled participants. The aim was to identify challenges related to the performance of the standard VoiceOver gestures and to evaluate the associated system response. Considering the sensory limitation of the target user group, the screen reader was expected to be intuitive, with an optimal presentation of the functionality and layout of the UI. The study showed that most of the gestures were easy for the participants to perform; however, some gestures were unfamiliar to the participants, especially those connected to the rotor function. The possibility of receiving individual user training before the evaluation was an advantage for succeeding with the practical use of those gestures. The system generally responded appropriately to the users’ hand gestures, but inconsistent responses and a lack of information were reported for the two-finger flick up, the three-finger flick to the left or right, the three-finger tap, and the two-finger double tap. The three research questions (RQs) formulated at the beginning of this paper are answered below based on the results from the study.

RQ1 asked about the user experience when interacting with VoiceOver. The user experience with VoiceOver was in general positive, as the function was described as increasing self-management and supporting independence. Most of the gestures were both reported and observed as easy to perform, with some exceptions. The two most difficult ones reported by the participants were the four-finger tap and the two-finger double tap gestures. The four-finger tap at the bottom or the top of the screen, used to read the content of the UI from the respective end, was explicitly reported as difficult to perform.

RQ2 asked about the system response to the 16 hand gestures made on the touchscreen mobile device. The speech feedback responded appropriately during the test, with useful information for the participants to navigate through the UI, but a few inconsistent responses to correctly performed gestures were registered, such as with the two-finger double tap to terminate a phone call. The phone call was terminated correctly in only 3 out of 6 tests, which can be considered a weakness in the system with negative consequences for users, since speaking on the phone is one of the most frequently used functions. Other user problems identified were related to the three-finger flick to the left or right for swiping between screens, where the speech feedback was inconsistent and lacked information.

RQ3 asked about the recommended technical infrastructure for evaluations of mobile assistive technology with visually disabled users. A suitable infrastructure is one that optimizes the data collection and allows an effective retrospective analysis under more demanding conditions than in other user evaluations. In addition, the comfort, safety, and trust of the visually disabled test participants are crucial to avoid interference with and distortion of the test results. The technical and physical infrastructure described in Figure 4 serves as an example of a controlled scenario for an evaluation with the same type of technology and participants. The video recordings require sufficient quality to allow zooming in on the user interface and the finger interactions in detail. Professional video software is needed to substantially reduce the playback speed for optimal viewing and retrospective analysis. In addition, the data should be collected with synchronized audio and video signals, because streaming over a network usually incorporates latency. This synchronization is of high importance for the retrospective analysis, as the gestures and finger interactions with a mobile device’s screen are often made at high speed. Another issue experienced, specific to tests with visually disabled participants, was that the sound from VoiceOver interfered and overlapped with the sound from the test participant and the moderator in the recordings from the table microphone unit. This can complicate the retrospective analysis, and based on that experience, we recommend using several microphones to record the sound sources separately.

This study of the screen reader VoiceOver had some limitations, such as the number of test participants and the fact that the tests were conducted only in a usability laboratory setting. However, the participants, with a spread in age and smartphone skills, meaningfully represented the user group of visually disabled smartphone users. Other studies have shown that a small number of participants in usability studies can be sufficient for valid results [31–33]. The laboratory setting allowed the collection of detailed research data under controlled conditions. The collected data material was thoroughly analysed to study the interaction between the visually disabled user and the touchscreen UI. Furthermore, the application of mixed methods research, combining laboratory tests with detailed interviews, provided insights into the user experiences, as well as the benefits and barriers of using the VoiceOver function.

5. Conclusions

This study was made as a part of the project “Visually impaired users touching the screen—A user evaluation of assistive technology,” which aimed at evaluating the usability and accessibility of the screen reader VoiceOver. The main contribution of this study lies in the detailed analysis of the gesture interaction between the visually disabled participants and the screen reader, together with the associated responses from the system. In general, most of the hand gestures were easy for the participants to perform, although user training played a key role in the understanding and successful performance of specifically complex gestures. Without training, participants would not have been able to perform such gestures. The system response and speech feedback were in most cases correct, but some functionalities of the system could be improved. The results presented are in line with other studies on assistive technologies and visually disabled users [34–36]. The methodological procedure, using mixed methods to combine a quantitative laboratory test with qualitative interviews and observations, can be recommended for other studies of similar characteristics. The test procedure with user training on the specific hand gestures in advance reduced the memory load in the laboratory test situation, as all the participants were familiar with the gestures and could focus on performing the tasks. The application of a think aloud protocol in the usability laboratory, together with posttest interviews, is strongly recommended for other studies related to touchscreen assistive technology because it may provide a more comprehensive result.

In terms of future work, it is proposed to validate the laboratory results in the field and to address research with a larger sample size, focusing on text input and navigation using VoiceOver on a smartphone or tablet device. A comparison between the screen reader VoiceOver from Apple Inc. and TalkBack, which is mainly developed for Android devices, could illustrate differences across platforms. The integration of VoiceOver in the Apple Watch provides new opportunities for studying user-friendliness and accessibility for visually disabled users. A comparison with the use of VoiceOver on a desktop or laptop computer, which is generally more command based, could easily be made in a similar usability laboratory. Finally, newer iPhone models, such as the 8 and the XS, provide more tactile feedback through vibration during interactions than previous versions, and the impact of those functions for visually disabled users would be interesting to evaluate.

Data Availability

The video recordings data used to support the findings of this study have not been made available because of national regulations regarding the privacy of the test participants.

Conflicts of Interest

The authors declare that there are no conflicts of interest with any of the participants, private organizations, or vendors regarding the publication of this paper.


Acknowledgments

The authors would like to thank all the participants in the study and the informants prior to the study for their voluntary contribution. Thanks are due to Hildegunn Mellesmo Aslaksen for professional photographing and to the Centre for eHealth at the University of Agder, Norway, for close collaboration and adaptation of the test infrastructure facilities.


  1. R. Ling, The Mobile Connection: The Cell Phone’s Impact on Society, Morgan Kaufmann, Burlington, MA, USA, 2004.
  2. S. L. Jarvenpaa and K. R. Lang, “Managing the paradoxes of mobile technology,” Information Systems Management, vol. 22, no. 4, pp. 7–23, 2005. View at Publisher · View at Google Scholar · View at Scopus
  3. G. Goggin, Cell Phone Culture: Mobile Technology in Everyday Life, Routledge, Abingdon, UK, 2012.
  4. A. K. Orphanides and C. S. Nam, “Touchscreen interfaces in context: a systematic review of research into touchscreens across settings, populations, and implementations,” Applied Ergonomics, vol. 61, pp. 116–143, 2017. View at Publisher · View at Google Scholar · View at Scopus
  5. J. Stephenson and L. Limbrick, “A review of the use of touch-screen mobile devices by people with developmental disabilities,” Journal of Autism and Developmental Disorders, vol. 45, no. 12, pp. 3777–3791, 2015. View at Publisher · View at Google Scholar · View at Scopus
  6. P. A. Albinsson and S. Zhai, “High precision touch screen interaction,” in Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 105–112, Fort Lauderdale, FL, USA, April 2003.
  7. A. Butler, S. Izadi, and S. Hodges, “SideSight: multi-touch interaction around small devices,” in Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology, pp. 201–204, Monterey, CA, USA, October 2008.
  8. L. Hakobyan, J. Lumsden, D. O’Sullivan, and H. Bartlett, “Mobile assistive technologies for the visually impaired,” Survey of Ophthalmology, vol. 58, no. 6, pp. 513–528, 2013.
  9. World Health Organization (WHO), October 2018,
  10. D. Pascolini and S. P. Mariotti, “Global estimates of visual impairment: 2010,” British Journal of Ophthalmology, vol. 96, no. 5, pp. 614–618, 2012.
  11. S. K. Kane, J. P. Bigham, and J. Wobbrock, Fully Accessible Touch Screens for the Blind and Visually Impaired, University of Washington, Tacoma, WA, USA, 2011.
  12. VoiceOver, October 2018,
  13. Window-Eyes, October 2018,
  14. TalkBack, October 2018,
  15. B. F. Smaradottir, S. G. Martinez, and J. A. Håland, “Evaluation of touchscreen assistive technology for visually disabled users,” in Proceedings of IEEE Symposium on Computers and Communications (ISCC), pp. 248–253, Heraklion, Greece, July 2017.
  16. J. W. Creswell and V. L. P. Clark, Designing and Conducting Mixed Methods Research, SAGE Publications Inc., Thousand Oaks, CA, USA, 2007.
  17. R. B. Johnson, A. J. Onwuegbuzie, and L. A. Turner, “Toward a definition of mixed methods research,” Journal of Mixed Methods Research, vol. 1, no. 2, pp. 112–133, 2007.
  18. C. Teddlie and A. Tashakkori, “Mixed methods research,” in The SAGE Handbook of Qualitative Research, N. K. Denzin and Y. S. Lincoln, Eds., vol. 4, pp. 285–300, SAGE Publications Inc., Thousand Oaks, CA, USA, 2011.
  19. StatPed, October 2018,
  20. The Norwegian Association of the Blind and Partially Sighted, October 2018,
  21. Learn VoiceOver gestures on iPhone, October 2018,
  22. M. W. M. Jaspers, “A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence,” International Journal of Medical Informatics, vol. 78, no. 5, pp. 340–353, 2009.
  23. A. W. Kushniruk and V. L. Patel, “Cognitive and usability engineering methods for the evaluation of clinical information systems,” Journal of Biomedical Informatics, vol. 37, no. 1, pp. 56–76, 2004.
  24. K. A. Ericsson and H. A. Simon, “Verbal reports as data,” Psychological Review, vol. 87, no. 3, pp. 215–251, 1980.
  25. Centre for eHealth at University of Agder, October 2018,
  26. Wirecast, October 2018,
  27. QSR NVIVO 10, October 2018,
  28. J. Lazar, J. H. Feng, and H. Hochheiser, Research Methods in Human-Computer Interaction, John Wiley & Sons, Hoboken, NJ, USA, 2010.
  29. Norwegian Social Science Data Services, October 2018,
  30. J. M. C. Bastien, “Usability testing: a review of some methodological and technical aspects of the method,” International Journal of Medical Informatics, vol. 79, no. 4, pp. e18–e23, 2010.
  31. J. Nielsen and T. K. Landauer, “A mathematical model of the finding of usability problems,” in Proceedings of ACM Conference on Human Factors in Computing Systems, pp. 206–213, Amsterdam, Netherlands, April 1993.
  32. C. W. Turner, J. R. Lewis, and J. Nielsen, “Determining usability test sample size,” International Encyclopedia of Ergonomics and Human Factors, vol. 3, no. 2, pp. 3084–3088, 2006.
  33. J. Nielsen, “Why you only need to test with 5 users,” Alertbox, October 2018,
  34. K. Park, T. Goh, and H. J. So, “Toward accessible mobile application design: developing mobile application accessibility guidelines for people with visual impairment,” in Proceedings of HCI Korea, pp. 31–38, Hanbit Media, Inc., Seoul, Republic of Korea, 2014.
  35. B. Leporini and M. C. Buzzi, “Interacting with mobile devices via VoiceOver: usability and accessibility issues,” in Proceedings of the 24th Australian Computer-Human Interaction Conference, pp. 339–348, ACM, Melbourne, VIC, Australia, November 2012.
  36. D. McGookin, S. Brewster, and W. Jiang, “Investigating touchscreen accessibility for people with visual impairments,” in Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges, pp. 298–307, ACM, Lund, Sweden, October 2008.