Advances in Human-Computer Interaction
The latest articles from Hindawi. © 2017, Hindawi Limited. All rights reserved.

The Effect of Personality on Online Game Flow Experience and the Eye Blink Rate as an Objective Indicator
Sun, 18 Jun 2017 07:53:43 +0000
This study aimed to explore the effects of dominant and compliant personalities on both flow experience and its external characteristics. A total of 48 participants were recruited to play an online game and subsequently asked to recall the songs they had heard while playing. Eye blink rate was recorded. The results demonstrated that (1) participants were more immersed in the game if they were relatively dominant or noncompliant; (2) perception of the external environment declined markedly during a flow state; and (3) eye blink rate decreased only when flow occurred at the beginning of the game, rather than throughout the whole process. The results suggest that gamers who tend to be dominant or noncompliant are more likely to experience flow. Eye blink rate and perception of the external environment could serve as objective indicators of flow experience.
Pei-Luen Patrick Rau, Yu Chien Tseng, Xiao Dong, Caihong Jiang, and Cuiling Chen. Copyright © 2017 Pei-Luen Patrick Rau et al. All rights reserved.

Developers as Users: Exploring the Experiences of Using a New Theoretical Method for Usability Assessment
Sun, 05 Mar 2017 06:42:08 +0000
There is a need for appropriate evaluation methods to efficiently identify and counteract usability issues early in the development process. The aim of this study was to investigate how product developers assessed a new theoretical method for identifying usability problems and use errors. Two cases in which the method had been applied were selected, and the users of the method in each case filled in a questionnaire and were then interviewed about their experiences of using the method.
Overall, the participants (students and professionals) found the methods useful and their outcome trustworthy. At the same time, the methods were assessed as difficult to learn and as cumbersome and tedious to use. Nevertheless, both students and professionals thought that the methods would be useful in future development work. Suggestions for further improvement included the provision of additional instructions (for example, on how to adapt the methods) and the development of an IT-support tool.
Lars-Ola Bligård, Helena Strömberg, and MariAnne Karlsson. Copyright © 2017 Lars-Ola Bligård et al. All rights reserved.

An Integrated Support to Collaborative Semantic Annotation
Tue, 21 Feb 2017 00:00:00 +0000
Every day, everybody experiences the need to manage a huge amount of heterogeneous shared resources, causing information overload and fragmentation problems. Collaborative annotation tools are the most common way to address these issues, but collaboratively tagging resources is usually perceived as a boring and time-consuming activity and a possible source of conflicts. To face this challenge, collaborative systems should effectively support users in the resource annotation activity and in the definition of a shared view. The main contribution of this paper is the presentation and evaluation of a set of mechanisms (personal annotations over shared resources and tag suggestions) that provide users with such support. The goals of the evaluation were to (1) assess the improvement with respect to the situation without support; (2) evaluate user satisfaction with respect to both the final choice of annotations and possible conflicts; and (3) evaluate the usefulness of the support mechanisms in terms of actual usage and user perception. The experiment consisted of a simulated collaborative work scenario, in which small groups of users annotated a few resources and then answered a questionnaire.
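One of the support mechanisms, tag suggestions, could be sketched as ranking the tags collaborators have already applied to a shared resource. This is a hypothetical minimal approach for illustration, not the authors' actual algorithm:

```python
from collections import Counter

def suggest_tags(collaborator_annotations, own_tags, k=3):
    """Rank tags used by collaborators on a shared resource,
    excluding tags the user has already applied."""
    counts = Counter(tag for tags in collaborator_annotations for tag in tags)
    candidates = [(tag, n) for tag, n in counts.items() if tag not in own_tags]
    # Most frequently used tags first; ties broken alphabetically.
    candidates.sort(key=lambda t: (-t[1], t[0]))
    return [tag for tag, _ in candidates[:k]]

annotations = [["history", "rome"], ["rome", "travel"], ["rome", "history"]]
print(suggest_tags(annotations, own_tags={"travel"}))  # ['rome', 'history']
```

A real system would likely also weight suggestions by semantic similarity, but frequency ranking already reduces the effort of converging on a shared vocabulary.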
The evaluation results demonstrate that the proposed support mechanisms can reduce both overload and possible disagreement.
Annamaria Goy, Diego Magro, Giovanna Petrone, Claudia Picardi, Marco Rovera, and Marino Segnan. Copyright © 2017 Annamaria Goy et al. All rights reserved.

Extending the Touchscreen Pattern Lock Mechanism with Duplicated and Temporal Codes
Tue, 29 Nov 2016 06:20:10 +0000
We investigate improvements to authentication on mobile touchscreen phones and present a novel extension to the widely used touchscreen pattern lock mechanism. Our solution allows including nodes in the grid multiple times, which enhances resilience to smudge and other forms of attack. For example, for a smudge pattern covering 7 nodes, our approach increases the number of possible lock patterns by a factor of 15. Our concept was implemented and evaluated in a laboratory user test. The test participants found the usability of the proposed concept to be equal to that of the baseline pattern lock mechanism but considered it more secure. Our solution is fully backwards-compatible with the current baseline pattern lock mechanism, hence enabling easy adoption whilst providing higher security at a comparable level of usability.
Ashley Colley, Tobias Seitz, Tuomas Lappalainen, Matthias Kranz, and Jonna Häkkilä. Copyright © 2016 Ashley Colley et al. All rights reserved.

Combinations of Methods for Collaborative Evaluation of the Usability of Interactive Software Systems
Mon, 26 Sep 2016 07:47:07 +0000
Usability is a fundamental quality characteristic for the success of an interactive system. It is a concept that includes a set of metrics and methods for obtaining easy-to-learn and easy-to-use systems. Usability Evaluation Methods (UEM) are quite diverse; their application depends on variables such as cost, time availability, and human resources.
A large number of UEM can be employed to assess interactive software systems, but questions arise when deciding which method and/or combination of methods gives more (relevant) information. We propose Collaborative Usability Evaluation Methods (CUEM), following the principles defined by Collaboration Engineering. This paper analyzes a set of CUEM conducted on different interactive software systems. It proposes combinations of CUEM that provide more complete and comprehensive information about the usability of interactive software systems than evaluation methods conducted independently.
Andrés Solano, César A. Collazos, Cristian Rusu, and Habib M. Fardoun. Copyright © 2016 Andrés Solano et al. All rights reserved.

Capturing the Perceived Phantom Limb through Virtual Reality
Mon, 05 Sep 2016 12:31:44 +0000
Phantom limb is the sensation amputees may feel that the missing limb is still attached to the body and still moving as it would if it still existed. Although between 50% and 80% of amputees report neuropathic pain, also known as phantom limb pain (PLP), there is still little understanding of why PLP occurs, and no fully effective long-term treatments are available. One of the struggles with PLP is that amputees find it difficult to describe the sensations of their phantom limbs. The sensations may be of a limb in a position that is impossible for a normal limb to attain. The goal of this project was to help treat PLP by developing a system that communicates, accurately and easily, the sensations experienced by those with PLP through various hand positions, using a model arm with a user-friendly interface. The system was developed with Maya 3D animation software, the Leap Motion input device, and the Unity game engine. The 3D modeled arm was designed to mimic the phantom sensation, being able to go beyond the normal joint extensions of a regular arm, in order to obtain a true 3D visualization of the phantom limb.
Christian Rogers, Jonathan Lau, Denver Huynh, Steven Albertson, James Beem, and Enlin Qian. Copyright © 2016 Christian Rogers et al. All rights reserved.

Kinect-Based Sliding Mode Control for Lynxmotion Robotic Arm
Wed, 27 Jul 2016 13:44:53 +0000
Recently, manipulator robot technology has developed very quickly and has had a positive impact on human life. Implementing manipulator robot technology offers greater efficiency and high performance for many human tasks. Published efforts in this context have focused on implementing control algorithms with preprogrammed desired trajectories (the passive robot case) or on trajectory generation based on feedback sensors (the active robot case). Gesture-based robot control, however, is another channel of system control that has not been widely discussed. This paper focuses on the implementation of a Kinect-based real-time interactive control system. A human-machine interface (HMI), developed in the LabVIEW integrated development environment (IDE), allows the user to control a Lynxmotion robotic arm in real time. The Kinect software development kit (SDK) provides a tool to track the human body skeleton and abstract it into 3-dimensional coordinates; the Kinect sensor is therefore integrated into our control system to detect the coordinates of the user's joints. The Lynxmotion dynamics are handled by a real-time sliding mode control algorithm. Experiments were carried out to test the effectiveness of the system, and the results verify its tracking ability, stability, and robustness.
Ismail Ben Abdallah, Yassine Bouteraa, and Chokri Rekik. Copyright © 2016 Ismail Ben Abdallah et al. All rights reserved.

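Sliding mode control of this general kind drives a sliding surface s = de/dt + λe (e being the tracking error) to zero with a switching law u = -k·sign(s). A minimal single-joint sketch, assuming a simplified double-integrator joint model and invented gains, not the paper's actual Lynxmotion dynamics:

```python
def simulate_sliding_mode(q0, qd, lam=5.0, k=40.0, dt=0.001, steps=4000):
    """Drive one joint, modeled as a double integrator (q'' = u),
    toward the target angle qd with a basic sliding mode law."""
    q, dq = q0, 0.0
    for _ in range(steps):
        e, de = q - qd, dq
        s = de + lam * e                             # sliding surface
        u = -k if s > 0 else (k if s < 0 else 0.0)   # u = -k * sign(s)
        dq += u * dt                                 # integrate acceleration
        q += dq * dt                                 # integrate velocity
    return q

# After 4 simulated seconds the joint angle should be close to the target.
print(abs(simulate_sliding_mode(q0=0.0, qd=1.0) - 1.0) < 0.05)  # True
```

The hard switching produces the chattering characteristic of sliding mode control; practical implementations typically smooth the sign function with a saturation or boundary layer.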
Transitions in Interface Objects: Searching Databases
Mon, 23 May 2016 14:09:10 +0000
Two experiments demonstrate that a list-like database interface, which benefits from the persistence of contextual information, does not show the same degree of benefit from collocating objects across display changes as has previously been observed in a map-searching study. This provides some support for the claim that the nature of the task must be taken into account when choosing how to design dynamic displays. We discuss the benefit of basing design principles on theoretical models derived from the film cutting methods used in cinematography, so that they can be extended to novel design situations.
Tim Gamble and Jon May. Copyright © 2016 Tim Gamble and Jon May. All rights reserved.

A Case Study of MasterMind Chess: Comparing Mouse/Keyboard Interaction with Kinect-Based Gestural Interface
Wed, 04 May 2016 14:27:19 +0000
As gestural interfaces emerged as a new type of user interface, their use has been widely explored by the entertainment industry to better immerse the player in games. Despite being used mainly in dance and sports games, little use has been made of gestural interaction in slower-paced genres, such as board games. In this work, we present a Kinect-based gestural interface for an online, multiplayer chess game and describe a case study with users of different playing skill levels. Comparing mouse/keyboard interaction with gesture-based interaction, the results of the activity were synthesized into lessons learned regarding general usability and the design of game control mechanisms. These results could be applied to slow-paced board games like chess. Our findings indicate that gestural interfaces may not be suitable for competitive chess matches, yet they can be fun to use in casual matches.
Gabriel Alves Mendes Vasiljevic, Leonardo Cunha de Miranda, and Erica Esteves Cunha de Miranda. Copyright © 2016 Gabriel Alves Mendes Vasiljevic et al.
All rights reserved.

Appraisals of Salient Visual Elements in Web Page Design
Tue, 19 Apr 2016 09:40:17 +0000
Visual elements in user interfaces elicit emotions in users and are, therefore, essential to users interacting with different software. Although there is research on the relationship between emotional experience and visual user interface design, the focus has been on the overall visual impression rather than on individual visual elements. Additionally, in a software development process, programming and general usability guidelines are often considered the most important parts of the process. Knowledge of programmers' appraisals of visual elements can therefore be utilized to understand the web page designs we interact with. In this study, appraisal theory of emotion is utilized to elaborate the relationship between emotional experience and visual elements from the programmers' perspective. Participants used 3E-templates to express their visual and emotional experiences of web page designs. Content analysis of the textual data illustrates how emotional experiences are elicited by salient visual elements. Eight hierarchical visual element categories were found and connected to various emotions, such as frustration, boredom, and calmness, via relational emotion themes. The emotional emphasis was on centered, symmetrical, and balanced composition, which was experienced as pleasant and calming. The results benefit user-centered visual interface design and researchers of visual aesthetics in human-computer interaction.
Johanna M. Silvennoinen and Jussi P. P. Jokinen. Copyright © 2016 Johanna M. Silvennoinen and Jussi P. P. Jokinen. All rights reserved.

Designing Digital Solutions for Preserving Penan Sign Language: A Reflective Study
Wed, 30 Mar 2016 06:59:47 +0000
Oroo' is a language of the nomadic Penan in the rainforests of Borneo and the only means of asynchronous communication between nomadic groups on forest journeys.
Like many other indigenous languages, Oroo' is facing imminent extinction. In this paper, we present the research process and reflections of a multidisciplinary community-based research project on digitalizing and preserving the Oroo' sign language. As the methodology for project activities, we employ Participatory Action Research in Software Development Methodology Augmentation (PRISMA). Preliminary results show a general interest in the digital contents and a positive impact of the project activities. We present the scenario of a research project that was retooled to fit the needs of communities, informing language revitalization efforts and assisting with the evolution of community-based research design.
Tariq Zaman, Alvin W. Yeo, and Geran Jengan. Copyright © 2016 Tariq Zaman et al. All rights reserved.

Lower Order Krawtchouk Moment-Based Feature-Set for Hand Gesture Recognition
Sun, 13 Mar 2016 14:29:38 +0000
The capability of lower order Krawtchouk moment-based shape features is analyzed. The behaviour of 1D and 2D Krawtchouk polynomials at lower orders is observed by varying the Region of Interest (ROI). The paper measures the shape recognition capability of 2D Krawtchouk features at lower orders on the basis of Jochen-Triesch's database and a hand gesture database of 10 Indian Sign Language (ISL) alphabets. The original and reduced feature-sets are also compared. Experimental results demonstrate that the reduced feature dimensionality gives accuracy comparable to the original feature-set for all the proposed classifiers. Thus, Krawtchouk moment-based features prove effective in terms of shape recognition capability at lower orders.
Bineet Kaur and Garima Joshi. Copyright © 2016 Bineet Kaur and Garima Joshi. All rights reserved.

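Krawtchouk moments are built from Krawtchouk polynomials K_n(x; p, N), which are orthogonal under a binomial weight and can be evaluated with a standard three-term recurrence. A small sketch (the parameters N and p are illustrative; the paper's exact moment computation is not reproduced here):

```python
from math import comb

def krawtchouk(n, x, p, N):
    """K_n(x; p, N) via the three-term recurrence,
    with K_0(x) = 1 and K_1(x) = 1 - x/(pN)."""
    k_prev, k = 1.0, 1.0 - x / (p * N)
    if n == 0:
        return k_prev
    for m in range(1, n):
        k_next = ((p * (N - m) + m * (1 - p) - x) * k
                  - m * (1 - p) * k_prev) / (p * (N - m))
        k_prev, k = k, k_next
    return k

def weight(x, p, N):
    """Binomial weight under which the polynomials are orthogonal."""
    return comb(N, x) * p**x * (1 - p)**(N - x)

# Orthogonality check: the weighted inner product of K_2 and K_3
# over x = 0..N should vanish (up to floating-point error).
N, p = 8, 0.5
ip = sum(weight(x, p, N) * krawtchouk(2, x, p, N) * krawtchouk(3, x, p, N)
         for x in range(N + 1))
print(abs(ip) < 1e-12)  # True
```

Moment-based shape features are then obtained by projecting the image onto products of these polynomials; restricting to low orders n keeps the feature set small, which is the dimensionality reduction the abstract refers to.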
Evaluating the Authenticity of Virtual Environments: Comparison of Three Devices
Thu, 03 Mar 2016 10:58:04 +0000
Immersive virtual environments (VEs) have the potential to provide novel, cost-effective ways of evaluating not only new environments and usability scenarios but also potential user experiences. To achieve this, VEs must be adequately realistic. The level of perceived authenticity can be ascertained by measuring the levels of immersion people experience in their VE interactions. In this paper, the degree of authenticity is measured via an authenticity index for three different immersive virtual environment devices: (1) a headband, (2) 3D glasses, and (3) a head-mounted display (HMD). A quick scale for measuring immersion, feeling of control, and simulator sickness was developed and tested. The HMD proved to be the most immersive device, although the headband provided the most stable environment, causing the least simulator sickness. The results have design implications, as they provide insight into the specific factors that make experience in a VE seem more authentic to users. The paper emphasizes that, in addition to the quality of the VE, focus needs to be placed on ergonomic factors such as the weight of the devices, as these may compromise the quality of results obtained when studying human-technology interaction in a VE.
Aila Kronqvist, Jussi Jokinen, and Rebekah Rousi. Copyright © 2016 Aila Kronqvist et al. All rights reserved.

Dynamic Arm Gesture Recognition Using Spherical Angle Features and Hidden Markov Models
Mon, 16 Nov 2015 13:55:14 +0000
We introduce a vision-based arm gesture recognition (AGR) system using Kinect. The AGR system learns a discrete Hidden Markov Model (HMM), an effective probabilistic graph model for gesture recognition, from the dynamic pose of the arm joints provided by the Kinect API.
Because Kinect's viewpoint and the subject's arm length can substantially affect the estimated 3D pose of each joint, it is difficult to recognize gestures reliably from these features alone. The proposed system therefore performs a feature transformation that converts the 3D Cartesian coordinates of each joint into the 2D spherical angles of the corresponding arm part, yielding view-invariant and more discriminative features. We confirmed the high recognition performance of the proposed AGR system through experiments with two different datasets.
Hyesuk Kim and Incheol Kim. Copyright © 2015 Hyesuk Kim and Incheol Kim. All rights reserved.

Vibrotactile Stimulation as an Instructor for Mimicry-Based Physical Exercise
Tue, 27 Oct 2015 11:53:23 +0000
The present aim was to investigate the functionality of vibrotactile stimulation in mimicry-based behavioral regulation during physical exercise. Vibrotactile stimuli communicated instructions from an instructor to an exerciser to perform lower extremity movements. A wireless prototype was first tested in controlled laboratory conditions (Study 1), followed by a user study (Study 2) conducted in a group exercise situation for elderly participants, using a new version of the system with improved construction and extended functionality. The results of Study 1 showed that vibrotactile instructions were successful in both supplementing and substituting visual knee lift instructions. Vibrotactile stimuli were accurately recognized, and exercise with the device received affirmative ratings. Interestingly, tactile stimulation appeared to stabilize the acceleration magnitude of the knee lifts in comparison to visual instructions. Study 2 found that the user experience of the system was mainly positive for both the exercisers and their instructors. For example, exercise with vibrotactile instructions was experienced as more motivating than a conventional exercise session.
Together, the results indicate that tactile instructions could make it easier for people who have difficulty following visual and auditory instructions to take part in mimicry-based group training. Both studies also revealed areas for development, primarily related to a slight delay in triggering the vibrotactile stimulation.
Jani Lylykangas, Jani Heikkinen, Veikko Surakka, Roope Raisamo, Kalle Myllymaa, and Arvo Laitinen. Copyright © 2015 Jani Lylykangas et al. All rights reserved.

NFC-Based User Interface for Smart Environments
Wed, 26 Aug 2015 11:54:25 +0000
The physical support of a home automation system, combined with a simplified user-system interaction modality, may allow people affected by motor impairments or limitations, such as elderly and disabled people, to live safely and comfortably at home by improving their autonomy and facilitating daily life tasks. The proposed solution takes advantage of Near Field Communication (NFC) technology, which is simple and intuitive to use, to enable advanced user interaction. The user can perform normal daily activities, such as lifting a gate or closing a window, through a device enabled to read NFC tags containing the commands for the home automation system. A passive Smart Panel is implemented, composed of multiple properly programmed NFC tags, to enable the execution of both individual commands and so-called scenarios. The work compares several versions of the proposed Smart Panel, differing in the interrogation and composition of single commands, the number of tags, and the dynamic user interaction model, for the same number of commands to issue. Main conclusions are drawn from the experimental results about the effective adoption of NFC in smart assistive environments.
Susanna Spinsante and Ennio Gambi. Copyright © 2015 Susanna Spinsante and Ennio Gambi. All rights reserved.

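The tag-to-command mapping behind such a Smart Panel could be sketched as follows; the payload format, command names, and scenario grouping here are invented for illustration, not taken from the paper:

```python
# Hypothetical tag payloads: a single command, or a named scenario
# that expands to several commands.
COMMANDS = {
    "GATE_UP": "raise the gate",
    "WINDOW_CLOSE": "close the window",
    "LIGHT_ON": "turn on the lights",
}
SCENARIOS = {
    "GOOD_NIGHT": ["WINDOW_CLOSE", "LIGHT_ON"],  # illustrative grouping
}

def handle_tag(payload):
    """Translate an NFC tag payload into home automation actions."""
    if payload.startswith("SCENARIO:"):
        name = payload.split(":", 1)[1]
        return [COMMANDS[c] for c in SCENARIOS.get(name, [])]
    return [COMMANDS[payload]] if payload in COMMANDS else []

print(handle_tag("GATE_UP"))              # ['raise the gate']
print(handle_tag("SCENARIO:GOOD_NIGHT"))  # ['close the window', 'turn on the lights']
```

Because each tag is passive and preprogrammed, the panel itself needs no power: all logic lives in the reading device, which is what makes the approach attractive for low-cost assistive installations.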
Should I Stop Thinking About It: A Computational Exploration of Reappraisal Based Emotion Regulation
Wed, 12 Aug 2015 07:54:54 +0000
Agent-based simulation of people's behaviors and minds has become increasingly popular in recent years. It provides a research platform for simulating and comparing alternative psychological and social theories, as well as for creating virtual characters that can interact with people or with each other for pedagogical or entertainment purposes. In this paper, we investigate computationally modeling people's coping behaviors, in particular in relation to depression, using decision-theoretic agents. Recent studies have suggested that depression can result from failed emotion regulation under limited cognitive resources. In this work, we demonstrate how reappraisal can fail under high levels of stress and limited cognitive resources using an agent-based simulation. Further, we explored the effectiveness of reappraisal under different conditions. Our experiments suggest that for people who are more likely to recall positive memories, it is more beneficial to think about the recalled events from multiple perspectives. However, for people who are more likely to recall negative memories, the better strategy is not to evaluate the recalled events against multiple goals.
Mei Si. Copyright © 2015 Mei Si. All rights reserved.

WozARd: A Wizard of Oz Method for Wearable Augmented Reality Interaction—A Pilot Study
Wed, 10 Jun 2015 13:45:39 +0000
Head-mounted displays and other wearable devices open up innovative types of interaction for wearable augmented reality (AR). However, to design and evaluate these new types of AR user interfaces, it is essential to quickly simulate undeveloped components of the system and collect feedback from potential users early in the design process. One way of doing this is the Wizard of Oz (WOZ) method.
The basic idea behind WOZ is to create the illusion of a working system by having a human operator perform some or all of the system's functions. WozARd is a WOZ method developed for wearable AR interaction. The presented pilot study was an initial investigation of the capability of the WozARd method to simulate an AR city tour. Qualitative and quantitative data were collected from 21 participants performing a simulated AR city tour. The data analysis focused on seven categories that can have an impact on how the WozARd method is perceived by participants: precision, relevance, responsiveness, technical stability, visual fidelity, general user experience, and human-operator performance. Overall, the results indicate that the participants perceived the simulated AR city tour as a relatively realistic experience despite a certain degree of technical instability and human-operator mistakes.
Günter Alce, Mattias Wallergård, and Klas Hermodsson. Copyright © 2015 Günter Alce et al. All rights reserved.

Design and Validation of an Attention Model of Web Page Users
Sat, 28 Feb 2015 09:59:22 +0000
In this paper, we propose a model to predict the locations of the most attended pictorial information on a web page and the attention sequence of that information. Based on a survey of more than 100 web pages, we propose dividing the content of a web page into conceptually coherent units or objects. The proposed model takes into account three characteristics of an image object (chromatic contrast, size, and position) and computes a numerical value, the attention factor. From the attention factor values we can predict the image objects most likely to draw attention and the sequence in which attention will be drawn. We have carried out empirical studies to both develop the proposed model and determine its efficacy.
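An attention factor of the kind just described combines per-object visual characteristics into a single score, and sorting objects by that score yields a predicted attention sequence. A minimal sketch, assuming normalized per-object scores and illustrative weights (not the authors' actual formula):

```python
def attention_factor(obj, w_contrast=0.5, w_size=0.3, w_position=0.2):
    """Combine normalized chromatic contrast, size, and position
    scores (each assumed to lie in [0, 1]) into one attention factor."""
    return (w_contrast * obj["contrast"]
            + w_size * obj["size"]
            + w_position * obj["position"])

objects = {
    "banner": {"contrast": 0.9, "size": 0.8, "position": 0.9},
    "thumbnail": {"contrast": 0.4, "size": 0.2, "position": 0.5},
    "logo": {"contrast": 0.7, "size": 0.3, "position": 1.0},
}
# Predicted attention sequence: objects in decreasing attention factor.
sequence = sorted(objects, key=lambda name: attention_factor(objects[name]),
                  reverse=True)
print(sequence)  # ['banner', 'logo', 'thumbnail']
```

In the paper the weights and the normalization of each characteristic are fitted and validated empirically; the point of the sketch is only the two-step structure: score each object, then rank.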
The study results revealed a prediction accuracy of about 80% for a set of artificially designed web pages and about 60% for a set of real web pages sampled from the Internet. The performance was found to be better, in terms of prediction accuracy, than the visual saliency model, a popular model for predicting human attention on an image.
Ananya Jana and Samit Bhattacharya. Copyright © 2015 Ananya Jana and Samit Bhattacharya. All rights reserved.

CaRo 2.0: An Interactive System for Expressive Music Rendering
Mon, 02 Feb 2015 09:01:28 +0000
In several application contexts in the multimedia field (education, extreme gaming), interaction with the user requires that the system be able to render music expressively. Expressiveness is the added value of a performance and is part of the reason that music is interesting to listen to. Understanding and modeling the communication of expressive content is important for many engineering applications in information technology (e.g., Music Information Retrieval, as well as several applications in the affective computing field). In this paper, we present an original approach to modifying the expressive content of a performance in a gradual way, applying a smooth morphing among performances with different expressive content in order to adapt the audio's expressive character to the user's desires. The system won the final stage of Rencon 2011, a performance RENdering CONtest that organizes competitions for computer systems generating expressive musical performances.
Sergio Canazza, Giovanni De Poli, and Antonio Rodà. Copyright © 2015 Sergio Canazza et al. All rights reserved.

Dimensions of Situatedness for Digital Public Displays
Mon, 22 Dec 2014 00:10:04 +0000
Public displays are often strongly situated signs, deeply embedded in their physical, social, and cultural setting.
Understanding how a display is coupled with ongoing situations (its level of situatedness) provides a key element for interpreting the displays themselves, but also for interpreting the place, its situated practices, and its social context. Most digital displays, however, do not achieve the same sense of situatedness that seems so natural in their nondigital counterparts. This paper investigates people's perception of situatedness when considering the connection between public displays and their context. We collected over 300 photos of displays and conducted a set of analysis tasks involving focus groups and structured interviews with 15 participants. The contribution is a consolidated list of situatedness dimensions that should provide a valuable resource for reasoning about situatedness in digital displays and for informing the design and development of display systems.
Rui José, Nuno Otero, and Jorge C. S. Cardoso. Copyright © 2014 Rui José et al. All rights reserved.

The Interplay between Usability and Aesthetics: More Evidence for the “What Is Usable Is Beautiful” Notion
Tue, 25 Nov 2014 14:46:56 +0000
In light of inconsistent findings on the interplay between usability and aesthetics, the current paper aimed to further examine the effect of these variables on the perceived qualities of a mobile phone prototype. An experiment was conducted with four versions of the prototype varying on two factors, (1) usability (high versus low) and (2) aesthetics (high versus low), with perceived usability and perceived beauty, as well as hedonic experience and the system's appeal, as dependent variables. Participants were instructed to complete four typical tasks with the prototype before assessing its quality. Results showed that the mobile phone's aesthetics does not affect its perceived usability, either directly or indirectly.
Instead, the results revealed an effect of usability on perceived beauty, which supports the “what is usable is beautiful” notion rather than “what is beautiful is usable.” Furthermore, effects of aesthetics and of usability on hedonic experience, in terms of endowing identity and appeal, were found, indicating that both instrumental (usability) and noninstrumental (beauty) qualities contribute to a positive user experience.
Kai-Christoph Hamborg, Julia Hülsmann, and Kai Kaspar. Copyright © 2014 Kai-Christoph Hamborg et al. All rights reserved.

Large Display Interaction via Multiple Acceleration Curves and Multifinger Pointer Control
Tue, 25 Nov 2014 00:00:00 +0000
Large high-resolution displays combine high pixel density with ample physical dimensions. The combination of these factors creates a multiscale workspace where interactive targeting of on-screen objects requires both high speed for distant targets and high accuracy for small targets. Modern operating systems support implicit dynamic control-display gain adjustment (i.e., a pointer acceleration curve) that helps to maintain both speed and accuracy. However, large high-resolution displays require a broader range of control-display gains than a single acceleration curve can usably enable. Some interaction techniques attempt to solve the problem by utilizing multiple explicit modes of interaction, where different modes provide different levels of pointer precision. Here, we investigate the alternative hypothesis of using a single mode of interaction for continuous pointing that enables both (1) standard implicit granularity control via an acceleration curve and (2) explicit switching between multiple acceleration curves in an efficient and dynamic way. We evaluate a sample solution that augments standard touchpad accelerated pointer manipulation with multitouch capability, where the choice of acceleration curve changes dynamically depending on the number of fingers in contact with the touchpad.
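Such dynamic curve switching can be sketched as a lookup from finger count to gain function; the specific curves and gain values below are invented for illustration, not taken from the paper:

```python
def make_gain_curve(base, exponent):
    """Control-display gain as a function of pointer speed (illustrative)."""
    return lambda speed: base * speed ** exponent

# One curve per number of fingers on the touchpad (assumed mapping):
# one finger = precise low gain, two = standard acceleration,
# three = high gain for fast long-range movement.
CURVES = {1: make_gain_curve(0.5, 0.0),
          2: make_gain_curve(1.0, 0.5),
          3: make_gain_curve(2.0, 1.0)}

def pointer_delta(finger_count, device_delta, speed):
    """Scale raw touchpad movement by the gain of the active curve."""
    gain = CURVES[finger_count](speed)
    return device_delta * gain

print(pointer_delta(1, 10.0, 4.0))  # 5.0  (low fixed gain for precision)
print(pointer_delta(3, 10.0, 4.0))  # 80.0 (high gain scaling with speed)
```

The key property is that switching requires no mode dialog or button: the curve changes the instant the contact count changes, which is what makes the technique a single continuous pointing mode.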
Specifically, users can dynamically switch among three different acceleration curves by using one, two, or three fingers on the touchpad.
Andrey Esakia, Alex Endert, and Chris North. Copyright © 2014 Andrey Esakia et al. All rights reserved.

A Study of Correlations among Image Resolution, Reaction Time, and Extent of Motion in Remote Motor Interactions
Mon, 17 Nov 2014 00:00:00 +0000
Motor interaction in virtual sculpting, dance training, and physiological rehabilitation requires close virtual proximity of users, which may be hindered by low image resolution and system latency. This paper reports on the results of our investigation exploring the pros and cons of using ultrahigh 4K resolution displays (4096 × 2160 pixels) in remote motor interaction. 4K displays overcome the problem of visible pixels and can show more accurate image detail at the level of textures, shadows, and reflections. Our assumption was that such image detail can not only satisfy the visual comfort of users but also provide detailed visual cues and improve users' reaction time in motor interaction. To validate this hypothesis, we explored the relationship between the reaction time of subjects responding to a series of action-reaction games and the resolution of the image used in the experiment. The results of our experiment showed that the subjects' reaction time is significantly shorter with 4K images than with HD or VGA images in motor interaction with a small motion envelope.
Zoltán Rusák, Adrie Kooijman, Yu Song, Jouke Verlinden, and Imre Horváth. Copyright © 2014 Zoltán Rusák et al. All rights reserved.

Orchestrating End-User Perspectives in the Software Release Process: An Integrated Release Management Framework
Sun, 16 Nov 2014 07:53:00 +0000
Software bugs discovered by end-users are an inevitable consequence of a vendor's lack of testing.
Such bugs frequently result in costly system failures, and one way to detect and prevent them is to engage the customer in acceptance testing during the release process. Yet, there is a considerable lack of empirical studies examining release management from the end-users’ perspective. To address this gap, we propose and empirically test a release framework that positions the customer release manager at the center of the release process. Using a participatory action research strategy, a twenty-seven-month study was conducted to evaluate and improve the effectiveness of the framework through seven major and thirty-nine minor releases. Simon Cleveland and Timothy J. Ellis Copyright © 2014 Simon Cleveland and Timothy J. Ellis. All rights reserved. PaperCAD: A System for Interrogating CAD Drawings Using Small Mobile Computing Devices Combined with Interactive Paper Thu, 13 Nov 2014 06:48:13 +0000 Smartphones have become indispensable computational tools. However, some tasks can be difficult to perform on a smartphone because these devices have small displays. Here, we explore methods for augmenting the display of a smartphone, or other PDA, using interactive paper. Specifically, we present a prototype interface that enables a user to interactively interrogate technical drawings using an Anoto-based smartpen and a PDA. Our software system, called PaperCAD, enables users to query geometric information from CAD drawings printed on Anoto dot-patterned paper. For example, the user can measure a distance by drawing a dimension arrow. The system provides output to the user via the smartpen’s audio speaker and the dynamic video display of the PDA. The user can select either verbose or concise audio feedback, and the PDA displays a video image of the portion of the drawing near the pen tip.
The project entails advances in the interpretation of pen input, including a method that uses contextual information to interpret ambiguous dimensions and a technique that uses a hidden Markov model to correct interpretation errors in handwritten equations. Results of a user study suggest that our user interface design and interpretation techniques are effective and that users are highly satisfied with the system. WeeSan Lee and Thomas F. Stahovich Copyright © 2014 WeeSan Lee and Thomas F. Stahovich. All rights reserved. Encoding Theory of Mind in Character Design for Pedagogical Interactive Narrative Thu, 23 Oct 2014 09:52:28 +0000 Computer-aided interactive narrative allows people to participate actively in a dynamically unfolding story, by playing a character or by exerting directorial control. Because of its potential for providing interesting stories as well as allowing user interaction, interactive narrative has been recognized as a promising tool for providing both education and entertainment. This paper discusses the challenges in creating interactive narratives for pedagogical applications and how these challenges can be addressed by using agent-based technologies. We argue that a rich model of characters, and in particular a Theory of Mind capacity, is needed. The character architecture in the Thespian framework for interactive narrative is presented as an example of how decision-theoretic agents can be used for encoding Theory of Mind and for creating pedagogical interactive narratives. Mei Si and Stacy C. Marsella Copyright © 2014 Mei Si and Stacy C. Marsella. All rights reserved. The Role of Verbal and Nonverbal Communication in a Two-Person, Cooperative Manipulation Task Thu, 07 Aug 2014 06:56:36 +0000 Motivated by the differences between human and robot teams, we investigated the role of verbal communication between human teammates as they worked together to move a large object to a series of target locations.
Only one member of the group was told the target sequence by the experimenters, while the second teammate had no target knowledge. The two experimental conditions we compared were haptic-verbal (teammates were allowed to talk) and haptic only (no talking allowed). Each team’s trajectory was recorded and evaluated. In addition, participants completed a NASA TLX-style postexperimental survey, which gauges workload along six different dimensions. In our initial experiment, we found no significant difference in performance when verbal communication was added. In a follow-up experiment using a different manipulation task, we did find that the addition of verbal communication significantly improved performance and reduced perceived workload. In both experiments, for the haptic-only condition, we found that a remarkable number of groups independently improvised common haptic communication protocols (CHIPs). We speculate that such protocols can be substituted for verbal communication and that the performance difference between verbal and nonverbal communication may be related to how easily the CHIPs can be distinguished from the motions required for task completion. Sarangi P. Parikh, Joel M. Esposito, and Jeremy Searock Copyright © 2014 Sarangi P. Parikh et al. All rights reserved. A Proactive Approach of Robotic Framework for Making Eye Contact with Humans Wed, 23 Jul 2014 07:26:22 +0000 Making eye contact is an essential prerequisite for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases.
Therefore, in some situations the robot should perform stronger actions so that it can attract the target person’s attention before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring that attention has been captured. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view. Mohammed Moshiul Hoque, Yoshinori Kobayashi, and Yoshinori Kuno Copyright © 2014 Mohammed Moshiul Hoque et al. All rights reserved. A Large-Scale Quantitative Survey of the German Geocaching Community in 2007 Thu, 26 Jun 2014 06:59:42 +0000 We present a large-scale quantitative contextual survey of the geocaching community in Germany, one of the world’s largest geocaching communities. We investigate the features, attitudes, interests, and motivations that characterise German geocachers. Two anonymous surveys were carried out on this issue in 2007: a large-scale general study based on web questionnaires and a more targeted study, which aimed at a comprehensive set of revealed geocaches in a particular region. With sample sizes of (study 1: general study) and (study 2: regional study), we provide a representative basis to ground previous qualitative research in this domain. In addition, we investigated geocachers’ use of technology in combination with traditional paper-based media. This knowledge can be used to reflect on past and future trends within the geocaching community. Daniel Telaar, Antonio Krüger, and Johannes Schöning Copyright © 2014 Daniel Telaar et al. All rights reserved.