Abstract

Augmented reality (AR) may support effective design and constructability reviews by providing both the physical exploration benefits of traditional physical mock-ups and the flexibility benefits of building information models (BIM). Many different types of mobile computing devices can present the same technical AR environment, but it remains unclear how the properties of these devices impact user behaviors in an architecture, engineering, and construction (AEC) context. This study tasked users with completing the same design review task, using the same technical AR environment, viewed through different commercially available mobile AR devices. Specifically, 32 participants collaboratively laid out and reviewed a simple office design using a randomly assigned AR device. In total, 11 distinct behaviors were observed, and different mobile computing devices elicited different behaviors. To add further context to the findings, the results were compared to those of a similar, previously published study in which users completed a design review with the option to choose one or multiple AR devices. For several types of behaviors, including alternative design formulation, navigation of the design, and problem solving, no differences were observed between the two groups or among specific AR devices. Conversely, other behaviors, including explanative behaviors, decision making, and discussions with team members, were not observed when participants could self-select devices but were observed when participants were required to use a particular device. This suggests that, for some applications, while users may tend to prefer one type of AR interface, they are fully capable of performing the same types of design review tasks with any AR device. The novelty of this work is in demonstrating how the context in which devices are applied impacts the ways in which they are used. This may help future practitioners and researchers to strategically choose to use, or not to use, certain types of devices to elicit specific behaviors.

1. Introduction

In recent years, many researchers have attempted to develop methods to resolve communication and coordination challenges related to traditional two-dimensional (2D) documentation. As project complexity increases, traditional 2D documentation may not adequately represent all project information for project parties [1]. For example, observations of project meetings reveal that communicating project information through 2D representations limits project stakeholders’ ability to work together to solve problems and make decisions [2]. In response to these limitations, researchers have explored various 3D visualization strategies, including virtual reality (VR) or building information modeling (BIM) walkthroughs, physical mock-ups, and augmented reality (AR).

The research community has gravitated toward using 3D visualization media in part because of recent developments in computing, modeling, and visualization technologies, which have become inexpensive and powerful. These technologies can facilitate collective decision making throughout project phases in the architecture, engineering, and construction (AEC) industries [2, 3]. This is especially important during design and constructability review sessions, when a number of project teams come together to review a design concept [3].

A prior study explored how AR can enable different human behaviors in design review sessions by allowing participants to choose from several mobile computing devices capable of displaying the same technical AR experience [4]. This prior work used the DEEPAND coding system developed by Garcia et al. [5] to identify 10 different behaviors that can impact meeting outcomes when using mobile AR computing tools [4]. In that study, participants were presented with a variety of mobile AR devices and could freely choose to use any device at any point in the design review session. While this enabled individuals to experiment with different devices to support AR visualization, it also allowed participants to gravitate toward devices for reasons other than pure effectiveness. In other words, participants may have been more familiar with a particular computing device, or had some other reason for using it, which might have led them to use it more and thereby increased their likelihood of demonstrating certain design review behaviors. While this approach provided evidence of the types of behaviors that AR may enable, it did not specifically illustrate the types of behaviors that might be observable in the presence of only a single device. In practice, many design review teams do not have a variety of devices available for viewing the same environment. Therefore, this study aims to explore the extent to which the behaviors observed in the prior study are also observable when participants do not have the option to choose different devices. This may support technology planning in future studies, allowing individuals to strategically plan for certain technologies to enable targeted human behaviors.

This study presents findings to address the following research questions:
(1) What human behaviors are observed with different mobile AR devices when participants are forced to use a particular device?
(2) How do the behaviors observed in this work compare and contrast to those observed during sessions where individuals were provided with various AR device choices?

To address these research questions, the authors conducted design review sessions with graduate and undergraduate students at Arizona State University. While it might initially seem like a limitation to study student participants with less experience than typical industry practitioners, it is common for owners without prior design review experience to be involved in these types of review sessions. In many cases, owners may have a strong understanding of their own needs but do not necessarily understand all detailed design or construction needs. Therefore, the authors aimed to replicate this type of scenario by creating a design review task that would require consideration of space constraints but would not require substantial prior industry experience to make plausible decisions. Furthermore, using student participants enabled the researchers to systematically test individual AR computing devices in a collaborative design review session, including devices that the prior study [4] suggested would be counterproductive for design review sessions. Therefore, the students were provided with only a single mobile computing device for experiencing the AR design environment. A structured analysis approach was used according to the coding strategies defined in DEEPAND [5] and the initial study [4]. The results are presented, and a detailed discussion provides insights into the similarities and differences observed between implementations.

2. Background

Design and constructability review sessions involve coordinating design information among project teams to understand and negotiate the interests and objectives of the owners and the project in a timely manner [6–8]. This process focuses primarily on analyzing design components and methods, such as structural, mechanical, electrical, and plumbing elements. In many cases, project stakeholders spend substantial time and effort trying to illustrate, define, and understand cross-disciplinary knowledge and information throughout design phases [9, 10]. As a result, important information cannot be properly leveraged, and decisions cannot be made efficiently [2, 11, 12]. With so many design and construction concerns to be discussed, analyzed, and decided upon, 3D visualization technologies have become increasingly useful and necessary.

Studies reveal that design review using 3D representations, such as physical mock-ups and VR, is crucial for identifying conflicts, errors, and inconsistencies in designs [6, 13]. Physical mock-ups and VR can support reviewers in addressing potential concerns prior to construction, such as constructability and assembly needs, safety, structural performance, environmental performance, and planning [14–16]. While several studies used physical mock-ups and VR to facilitate design review practices, AR has also been used in various collaborative tasks [14, 17, 18]. For example, AR has been used as an interactive architectural visualization tool [2, 19, 20] and for evaluating heights of virtual buildings for site planning [21].

Researchers have also explored various types of computing devices to display AR content. For example, smartphone-based AR has been used to enhance face-to-face collaboration and communication [22–25]. Personal computers have been used to model, simulate, and visualize AR construction scenarios [26]. A helmet-based wearable mobile computer was also developed and tested for on-site visualization of construction drawings and relevant information through a projection-based AR system [27, 28]. Moreover, Kopsida and Brilakis [29] surveyed markerless solutions for pose estimation with respect to a known 3D model on mobile devices, using 2D images, a monocular simultaneous localization and mapping (SLAM) algorithm, or a combination of RGB images and corresponding depth images [30]. They claim that the depth-image-based solution should be the most robust for advancing AR on mobile devices. These prior studies each aimed to develop an AR environment for a single device to support a specific AEC use-case.

In most cases, the contributions of these prior works relate to proving that AR can support the targeted use-cases in some way. However, there is a limited understanding of how different mobile AR interfaces may enable different user behaviors when users have no choice to select a preferred AR device in design review sessions. Understanding the types of behaviors can support decision making when planning for the use of AR technology through specific computing interfaces to support specific human behaviors. This study aims to contribute to this understanding.

3. Methodology

Student participants were solicited for this study to complete design and constructability review sessions for a hypothetical office space using specific AR devices. These participants were video and audio recorded, and their behaviors were coded in order for the researchers to understand the typical types of behaviors demonstrated by individuals using different AR devices. Using students as participants allowed the researchers to repeat the same design review session, but to modify the specific AR device supplied to participants for completing the session. Furthermore, it allowed the researchers to incorporate AR devices that prior AR research [4] suggested should be counterproductive. This helped to identify the types of behaviors that would be observed by participants who were required to use a specific AR device for the entirety of their review session and did not have the option to choose to use different or additional devices.

3.1. Design and Constructability Review Scenario

The constraints of the hypothetical design scenario were the same for all participants. During each session, each of the two participants was provided with a single AR computing device. During different sessions, different pairs of participants were provided with different devices to allow each device to be tried in multiple sessions. The following sections present the detailed method involved in collecting and analyzing the data recorded from these sessions.

Upon arriving at the sessions, participants were given a quick introduction to AR and to the device they were both assigned. Then, they were presented with the design review activity. The participants were brought to a mostly empty room and told that they would need to plan the layout of that space so it could serve as an executive office. This required them to plan how they would arrange AR-based design components in the space. This scenario does not require substantial prior expertise to define a preferred layout, as the students were familiar with the types of design components included (i.e., desks, cabinets, chairs, table lamps, and computers) and the general needs of an office worker. However, the activity did challenge students to collaboratively determine what elements they would include in their office layout, given the space constraints. Participants were provided with printed fiducial markers that represented various design elements. To ensure that participants would have to prioritize what objects would fit in their layout, they were intentionally given more office components than could realistically fit in the space. Figure 1 shows participants laying out these markers and determining how to best allocate space for the office layout according to the following design programming requirements:
(i) Required items that needed to be included:
(a) Office desk, office chairs, computer, table lamp, bookshelves, and office drawers.
(ii) Optional additional components:
(a) Larger office desk options, office chairs (different styles and sizes), and other miscellaneous furniture options. These items challenged participants to find space in the office, which could not come at the expense of omitting required components.
(iii) Room constraints:
(a) The size of the office (the physical room where the activity was conducted) could not be changed.
(b) The built-in architectural elements in the space (including a window, door, electrical outlets, TV, and a built-in bookshelf) could not be changed.

The constraints provided a challenge to the student participants, but they also provided a direct method for allowing participants to engage with both physical and virtual objects involved in the session. In order to test different design concepts, participants were required to physically pick up and move printed fiducial markers. This behavior is simple for participants to perform, and it also enabled the researchers to accurately code their behaviors (i.e., when a participant moves a marker, it is immediately clear that this represents the consideration of a design alternative). Therefore, this approach enabled the researchers to effectively observe how the assigned device might affect the behaviors of the participants.

Participants were allowed to lay out the space for as long as they felt was necessary, but most groups used approximately 30 minutes. When the participants felt that they had identified their ideal space layout, they were asked to document their design decisions on a provided form. This form tasked students with listing all items they had selected, including both required items and optional components.

3.2. Technology Selection and Development

The mobile computing interfaces used in this study are listed in Table 1. These devices included both handheld devices and wearable, head-mounted display (HMD) based devices. Both handheld and wearable devices represented a range of sizes and modes of displaying content, but all were chosen because they can present the same technical AR experience. Theoretically, the authors could have included other AR devices that involve newer, gesture-based interaction (e.g., Microsoft HoloLens), but this change in interaction could also have impacted the users’ ability to interact with the AR content. Therefore, the selected devices allowed for exactly the same type of marker-based AR interaction.

The authors followed a structured process to develop a single AR application that could be deployed to all of the devices used in the design and constructability review sessions [31]. The Android-based mobile AR application allowed users to visualize virtual objects in a physical space: participants were given various printed fiducial markers that represented different design elements. Regardless of the device they were given, interaction with AR involved participants placing markers on the ground, pointing the computing device’s camera at the markers, and viewing augmented content at full scale. This allowed participants to physically navigate around the space while their view of the augmented content updated accordingly. This development approach also gave the researchers a consistent AR environment for comparison across all devices.
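The paper does not detail the application’s internals, but the core mechanism it describes is fiducial-marker tracking. As a minimal, illustrative sketch of that mechanism, the following Python example uses OpenCV’s ArUco module (an assumption on our part; the study’s Android application may well have used a different toolkit) to detect printed markers in camera frames and identify which design element each marker anchors:

```python
# Illustrative sketch of fiducial-marker detection, assuming OpenCV >= 4.7
# with the ArUco module. The study's Android application may have used a
# different AR toolkit; this only shows the general marker-tracking idea.
import cv2

# Hypothetical mapping from marker IDs to the design elements they represent.
MARKER_TO_ELEMENT = {0: "office desk", 1: "office chair", 2: "bookshelf"}

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # device camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            element = MARKER_TO_ELEMENT.get(int(marker_id), "unknown element")
            # In a full AR pipeline, the corner geometry would drive a pose
            # estimate used to render the 3D model of `element` at full scale.
            center = marker_corners[0].mean(axis=0)
            print(f"Marker {marker_id} ({element}) at image point {center}")
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Because the same detection logic runs identically regardless of which screen displays the output, this kind of design is consistent with the paper’s goal of presenting one technical AR experience across all devices.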

4. Data Collection and Analysis

The research techniques adopted in this study aimed to facilitate comprehensive data collection. The authors used direct observation to record behaviors as they occurred. These observational data were collected through video and audio recordings of the sessions. Previous research has categorized the behaviors people exhibit when working together in engineering meetings [2, 5]. This prior work classified meeting activities by analyzing the ways people interacted, participated, and contributed to meetings, as well as the way projects evolved. They classified all utterances spoken during several engineering meetings according to the reactions they promoted. They also identified seven codes of behavior among participants that influenced the outcomes and efficiency of the meetings, including describe; explain; evaluate; predict; formulate alternative; negotiate; and decide (DEEPAND). In this research, the authors have used the codes of behavior from DEEPAND as their coding system to study participant behaviors. The authors also developed five additional codes of behavior that could occur in an AR environment that were not included in the original DEEPAND system. All codes used from DEEPAND and those developed specifically for AR are shown in Table 2. Furthermore, the definitions of these behaviors and examples that were observed in this study are provided to clarify how the data were analyzed in this research.

The authors analyzed the data in each category as either a time-based or event-based dataset. The time dataset records the amount of time spent engaging in a given behavior while using a given mobile computing device. In this category, eight codes of behavior were considered: visualizing; describing; explaining; evaluating; predicting; walking/navigating through a design; discussing while looking at others; and discussing while looking at the markers. The time spent on each behavior was noted in intervals of minutes. The event dataset counted the number of occurrences of different behaviors. Three codes were considered for this dataset: deciding; problem solving; and formulating alternatives (moving markers). This approach of coding certain behaviors based on total time and others based on number of occurrences was implemented to provide meaningful results based on how teams might actually want to use the findings of this work. For example, “formulate alternative (move markers)” could theoretically be treated as a time dataset, but the time it would take to move markers would be highly dependent on factors outside of the AR device used (i.e., if the size of the room were much larger, it may take longer to move markers per alternative, yet this would still indicate the same number of design alternatives considered). Therefore, the researchers aimed to use coding approaches that would provide practical meaning to the data collected.
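To make the two dataset types concrete, the sketch below shows one hypothetical way the coded observations could be aggregated: time-based codes accumulate minutes, while event-based codes accumulate counts. The behavior names follow the paper; the log format itself is our assumption.

```python
from collections import defaultdict

# Behavior codes grouped as the paper describes: eight time-based codes and
# three event-based codes (names from the text; the log format is hypothetical).
TIME_CODES = {"visualizing", "describing", "explaining", "evaluating",
              "predicting", "navigating", "discussing_looking_at_others",
              "discussing_looking_at_markers"}
EVENT_CODES = {"deciding", "problem_solving", "formulating_alternative"}

# Hypothetical coded log: (behavior, duration in minutes, or None for events).
session_log = [
    ("visualizing", 6.0),
    ("formulating_alternative", None),  # one marker move = one event
    ("evaluating", 3.5),
    ("deciding", None),
]

time_dataset = defaultdict(float)  # minutes per behavior
event_dataset = defaultdict(int)   # occurrences per behavior

for behavior, minutes in session_log:
    if behavior in TIME_CODES:
        time_dataset[behavior] += minutes
    elif behavior in EVENT_CODES:
        event_dataset[behavior] += 1

print(dict(time_dataset))   # {'visualizing': 6.0, 'evaluating': 3.5}
print(dict(event_dataset))  # {'formulating_alternative': 1, 'deciding': 1}
```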

The observed time for each behavior was normalized to provide an average use per person. Because some sessions took longer than others, the results were divided by the total time spent in each session to indicate the average time spent on a specific behavior per minute. For example, when using handheld devices with less than a four-inch screen, 4 participants spent 24 minutes visualizing, and they took 50 minutes in total to complete their sessions. Therefore, the average use per person was calculated according to the following equation:

\[ \text{average use per person} = \frac{\text{total time observed for a behavior}}{\text{number of participants}} = \frac{24\ \text{min}}{4} = 6\ \text{min} \]

After determining the average use per person, the average use per minute was calculated according to the following equation:

\[ \text{average use per minute} = \frac{\text{average use per person}}{\text{total session time}} = \frac{6\ \text{min}}{50\ \text{min}} = 0.12 = 12\% \]

The main purpose of this analysis strategy is to normalize the data so that the behavior observed per minute can be compared without being skewed by variation in the total length of the sessions. All observational data were normalized based on the total amount of time that participants spent in the session. The following sections use these percentages to report results.
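Under the reconstruction above, the normalization from the worked example (24 minutes of visualizing, 4 participants, 50 minutes of sessions) can be expressed as a short sketch; the function names are ours, not the authors’.

```python
def average_use_per_person(total_behavior_minutes: float, participants: int) -> float:
    """Minutes of a behavior attributable to one participant on average."""
    return total_behavior_minutes / participants

def average_use_per_minute(per_person_minutes: float, session_minutes: float) -> float:
    """Fraction of session time a typical participant spent on the behavior."""
    return per_person_minutes / session_minutes

# Worked example from the text: 24 min visualizing, 4 participants, 50 min total.
per_person = average_use_per_person(24, 4)           # 6.0 minutes
normalized = average_use_per_minute(per_person, 50)  # 0.12
print(f"{normalized:.0%}")  # 12%
```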

5. Results and Discussion

In total, 32 participants completed AR-based design and constructability reviews for this research across 16 design and constructability review sessions, with two participants per session. Devices varied between sessions, but within each session both participants used the same device. In total, eight devices were tested across all sessions, meaning that each device was tested by four participants over two sessions. In every session completed by every participant, the office space layout use-case and the AR experience remained identical. The results from the data collected are summarized in Table 3, which illustrates the percentage of each behavior observed in this research.

In order to provide meaningful results, the discussion sections are separated by the specific behaviors coded. The results observed through the different AR-based design review sessions are included. Additionally, results from prior, industry-based AR design review sessions [4] are provided in order to add context to the results based on whether or not participants could choose their own AR device. This is especially important for this work because this comparison helps to provide evidence of the types of behaviors commonly exhibited among participants when they do and do not have the option to choose different devices.

5.1. Descriptive

Descriptive behaviors were identified as those when participants simply stated what something was in the AR space. For example, participants would often describe what they were looking at as a preface to a subsequent evaluative behavior. Descriptive behavior was coded as a time-based behavior to allow researchers to understand the extent to which people are engaged in this behavior during the design review sessions.

This behavior was observed in this student-based work with all devices that were tested. Similarly, it was also observed in the prior industry-based work using all devices tested. Based on the data collected, there is no evidence to suggest a major difference in descriptive behaviors among users in either study using any devices.

Admittedly, both studies indicated that users described their environment the least when using the one-eyed see-through AR glasses. While this might initially suggest that head-mounted displays (HMDs) do not support descriptive discussion among participants as much as handheld devices, the other HMDs observed elicited amounts of descriptive behavior comparable to the handheld devices. Therefore, it may be more likely that the one-eyed see-through AR glasses, specifically, were not well suited to this behavior.

5.2. Explanative

Unlike descriptive behavior that simply aims to describe the “who,” “what,” “when,” and “where” information related to a situation, explanative behaviors are identified when participants explain why something is the case. For example, users might explore the AR environment and explain why they placed the office desk in the corner away from where the office door swings inward. Explanative behavior was coded as a time-based behavior, which allowed researchers to understand the extent to which it occurred.

This behavior was observed in all sessions with student participants using all devices. However, it was not observed in the prior industry-based study when participants used handheld devices with screens smaller than 4″; two-eyed see-through glasses; or one-eyed see-through glasses. This suggests that participants would likely prefer not to use these types of devices when explaining an attribute of a design. This preference for devices with a larger or more intuitive display may seem largely expected. What is more noteworthy about this comparison is the observation that, when not provided with the option to use a more preferable device, users are still able to explain their thoughts about a particular situation using any of the AR devices tested.

5.3. Evaluative

Evaluative behaviors are those that include an assessment of the extent to which a certain design element will meet the needs of the team. For example, participants might state that a certain desk placement did not provide adequate room for them to swivel in their office chair, which would not be conducive to working in the space. This type of behavior was coded as a time-based behavior as there are not clear delineations between evaluative comments within a single statement or series of statements by participants.

This behavior was observed in all student-based sessions with all devices tested. It was not observed in the prior study when participants used small handheld mobile devices with screens smaller than 4″; one-eyed see-through AR glasses; or two-eyed see-through glasses. Similar to the explanative behaviors, this suggests that when participants are forced to use a certain AR device, they will be able to evaluate a design. However, unlike the descriptive findings, evaluation may be a more cognitively challenging task that requires more consideration from users. It is worth noting that in both the student-based and industry-based sessions, there appear to be substantial differences in the extent to which participants demonstrate evaluation in AR. Tablet-based devices consistently led to higher amounts of evaluation among participants. Conversely, HMDs seemed to consistently elicit less (or no) evaluation among participants. This suggests that not only do participants seem to avoid using HMDs for design evaluation, but even when they are not given a choice of device, they do not evaluate attributes of the design as often.

5.4. Predictive

Predictive behavior occurs when participants discuss what they believe would be potential impacts of a particular design change. For example, they might discuss cost implications of using a larger office desk option in their space and how that could potentially impact the effectiveness of a given layout. Similar to the prior behavior codes, predictive behavior was coded as a time-based behavior as there often was not a clear end point or number of predictions made within a particular statement of a participant.

This behavior was observed in all student-based implementations with all devices except the two-eyed see-through glasses. Conversely, it was not observed in any of the discussion among the industry members. This may imply that the industry participants felt that certain impacts of design changes were obvious because of their collective experience, but it could also be due to slight differences in the design activities that were compared. For example, in the student-based activity, participants had to fit all design content within the confines of a single, existing space. In the industry-based study, the team was not given specific room constraints for their design challenge. Therefore, the researchers do not conclude that certain devices are likely to enable or inhibit specific predictive behaviors among participants.

5.5. Alternative Design Formulations

While predictive behaviors describe the impact envisioned from a potential design change, alternative design formulation behaviors simply explore different designs for a space. Because this study leveraged marker-based AR, this was coded as an event data point. Anytime a participant moved a printed fiducial marker in their space, this was coded as an alternate design that they explored. For example, when experimenting with placement of the required office desk, participants may move the printed fiducial marker several times to explore different layout options. Each movement was counted as an alternative design formulation.

This behavior was observed in all student-based implementations using all AR devices tested. In the industry-based study, however, it was not observed with every device: specifically, the prior work did not observe this behavior when participants used the VR box or the smallest smartphone (screen smaller than 4″). The design activity incorporated in both the student- and industry-based events required participants to lay out a particular space within certain constraints. In all cases, participants were initially given a stack of printed markers that were not placed anywhere. This forced them to determine an initial design on their own. While this method allows participants to demonstrate evolution in their design considerations, it almost guarantees that they will demonstrate the behavior of design alternative formulation. The only exceptions observed were in the collaborative industry-based session where users could choose their own device. For example, if a user wearing the VR box, which presented video pass-through-based AR, wanted to explore a different design, they would occasionally ask another participant to move a marker to explore a different option. Therefore, while participants did not technically display this behavior in all sessions, it seems reasonable to claim that all devices tested would support either direct formulation of alternatives or at least considerations related to design alternatives.

5.6. Negotiating

Negotiating behaviors relate to discussion about who will handle certain responsibilities based on an outcome of the design review session. For example, if a participant were to ask or instruct another to complete a cost estimate or model change based on a decision made during the design review, this could constitute a negotiation activity. This behavior was not observed in either the student-based or industry-based study. This is likely a result of the implementation strategy. During both implementations, participants were asked to complete a single design review of a space and were not asked to complete a follow-up review or complete design tasks after the event. It is likely that negotiating behaviors would have been observed if this mode of visualization were tested on an actual project where stakeholders would be required to plan around the outcomes of the design review meeting, but the authors of this study cannot make claims about this behavior based on the data collection approach used.

5.7. Decision Making

Decision-making behaviors are considered to be those where participants come to an agreement about a particular design element. For coding this type of behavior, the authors identified instances when a participant proposed a design alternative and when another confirmed that the alternative would meet the needs of the project. Because these behaviors have a clear proposition of a design concept and acceptance from another participant, they were coded as event-based data.

In the student-based study, decisions were made by participants using all devices. In the industry-based study, decisions were not made with either the handheld tablet or the tablet mounted on a stand, nor were they made with small handheld devices with screens smaller than 4″. This suggests that all devices may facilitate decision making to some extent if participants are not given a choice about what device they use.

5.8. Navigating the Design

Navigation behavior was coded specifically for AR-based design reviews. For traditional design review sessions that might involve traditional plans and architectural renderings, it is difficult or impossible to know the extent to which participants navigate a design. However, in AR, users physically explore the space to see the design from different perspectives. Therefore, this behavior was coded as a time-based behavior to determine which devices elicited the greatest and least amount of physical exploration through a space.

This behavior was observed in all student-based sessions with all AR devices tested. It was also observed in all industry-based sessions using all devices provided. This seems to indicate a fundamental performance attribute of AR: it enables physical exploration of a space. While this may not be beneficial in all situations, the building industry is unique among industries in that the final built product often cannot be developed through iterative testing. Instead, buildings must be completed correctly the first time they are made. Therefore, the ability of AR to support physical exploration of a space seems to provide potential advantages for project stakeholders exploring building design concepts.

5.9. Discussing the Design with Others

This behavior included any of the verbal behaviors mentioned earlier that involve more than one participant. In other words, if a participant was thinking aloud to him or herself, this was not considered to be discussion with other participants, but anytime a statement was made to inform or engage with another participant, this was considered as time spent discussing with others. To further understand how the different devices support engagement with the model or engagement with other participants, this coding category was separated into time spent discussing the design when looking at other participants and time spent discussing the design when looking at the design.

This behavior was observed in all student-based implementation sessions with all devices tested. However, in the prior study, it was only observed when using tablet-based devices (either handheld or mounted on a stand). This initially seems to suggest that head-mounted displays might be less conducive to supporting discussion among participants. However, the student data show clear evidence of discussion occurring both while students looked at each other and while they looked at the model content in AR. This seems to indicate that while participants may not always prefer HMD-based devices for discussing a design, these devices can still elicit group discussion in the absence of device choice.

5.10. Problem Solving

Problem solving was defined in this work as any event where a decision (as previously coded) specifically followed the identification of a problem related to the design. This was collected to try to identify the decisions that may have required more cognitive effort from participants. In other words, in both design activities, there may be certain design attributes that do not require substantial consideration from the participants. Considering only decisions made after a problem was identified helps to illustrate the specific cases where reaching agreement on a design approach required actual consideration of the needs of the design task. Similar to the coding approach used for decision making, this event has a clear start point (proposing a design alternative) and end point (acceptance of the proposition). Therefore, these were categorized as event data.
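The rule "a coded decision that follows a coded problem identification" can be made operational in a few lines. The sketch below is one hypothetical reading of that rule; the time window used to pair a problem with its resolving decision is our assumption, not something the paper specifies.

```python
# Hypothetical coded transcript: each entry is (timestamp in minutes, code).
transcript = [
    (2.0, "problem_identified"),  # e.g., desk blocks the door swing
    (2.5, "deciding"),            # decision resolving that problem
    (7.0, "deciding"),            # decision with no preceding problem
]

def count_problem_solving(events, window_min=2.0):
    """Count decisions that follow a problem identification within a window.

    The pairing rule follows the paper's definition of problem solving; the
    time window is an assumption added here to make the rule operational.
    """
    count = 0
    last_problem_time = None
    for t, code in events:
        if code == "problem_identified":
            last_problem_time = t
        elif code == "deciding":
            if last_problem_time is not None and t - last_problem_time <= window_min:
                count += 1
                last_problem_time = None  # each problem pairs with one decision
    return count

print(count_problem_solving(transcript))  # 1
```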

This behavior was observed in all student-based sessions using all devices tested. It was not observed with most of the devices in the industry-based session. It is possible that this is because the initial design alternatives explored by the industry participants were guided by their prior experience and therefore less likely to lead to problematic issues that would require a subsequent problem-solving decision. It is also possible that this had to do with the design challenge given to the different groups that may have been more or less likely to lead to problems that would need to be addressed. Because all student groups using all devices still indicated problem-solving behaviors, the authors conclude that all devices could support this type of behavior among participants.

6. General Discussion

The findings from this study indicate a few trends when explored in conjunction with one another. First, the study demonstrates that while users often prefer to leverage certain technologies more than others, this does not necessarily mean that a device specifically enables or inhibits certain behaviors. For example, when looking at explanation, evaluation, alternative design formulation, decision making, and discussion among team members, these behaviors were frequently not observed with certain devices in the industry-based design review session, but they were often observed in the student-based sessions. This demonstrates that while users may prefer a certain type of device, they may still be able to engage in a given behavior if they do not have the choice to select a more preferable device.

Similar to the time-based data, it is worth noting the similarities between high-occurrence events that were observed through larger tablets and the comparatively low-occurrence events that were observed on the head-mounted displays. This is noteworthy because a head-mounted display-based AR environment might initially seem to offer advantages for marker-based AR because it frees the users’ hands for moving markers and exploring designs. Based on the findings of this work, this assumption does not always seem to be the case. It is possible that this indicates that head-mounted AR does not support these behaviors as much as handheld, but it could also be related to the fact that the devices used a video pass-through method for realizing AR. This would offer an opportunity for a future study to explore the extent to which these behaviors are observable using a newer head-mounted AR device that does not use a video pass-through system (for example: Microsoft HoloLens).

Another noteworthy finding between the prior study and this one relates to the behaviors exhibited while using the handheld mobile computing devices. In the prior study, few if any design alternatives were explored when using smartphone-sized devices. Conversely, in this study, design alternatives appear to be explored at approximately the same rate between all handheld devices. This seems to indicate that while users may prefer to view AR on larger tablets or “phablets,” in the absence of choice, all handheld devices seemed to elicit similar levels of exploration of design alternatives. This is especially important for design and constructability review sessions. One of the core functions of these review sessions is to enable team members to identify potential issues with a concept and explore potential alternatives. The comparison of these findings suggests that while users may gravitate toward using larger devices for exploring design alternatives, similar frequencies of this behavior may be observed through the use of smaller handheld devices as well.

7. Research Limitations

The authors of this study intentionally developed the exact same technical AR experience for all devices tested. While the same environment was deployed to each device, the computing devices differed somewhat in terms of hardware. This means that some devices may have run the application more smoothly than others. The authors were interested in understanding the behaviors elicited or inhibited by the different technologies, so any hardware limitations that may have influenced user behavior in this study would still be relevant for future researchers using similar hardware. However, as technologies continue to evolve, it is likely that new devices will enable interactions with the AR environment in fundamentally different ways than those tested in this study. For example, new and emerging holographic AR displays (e.g., Microsoft HoloLens or Magic Leap) do not generally rely on marker-based approaches to content tracking and may provide different capabilities than the HMDs tested in this research. As a result, the authors elected to focus on testing different technologies that could all run the same technical marker-based AR environment, and they do not make claims about emerging technologies that function in fundamentally different ways than those tested.

8. Conclusion

This study aimed to understand how different mobile computing technologies presenting the same technical AR environment may enable or hinder different user behaviors in a design review context, based on whether or not users had a choice to select a preferred AR device. By comparing two groups of participants with and without the ability to choose AR devices, the authors were able to identify several noteworthy trends about how different devices enabled or inhibited behaviors.

In some cases, there appeared to be differences in behaviors enabled through different types of AR devices, regardless of whether participants could choose their device or not. For example, regardless of choice, HMDs did not generally elicit evaluative behaviors. Typically, these types of behaviors involve input from various team members. Gaining input from a team member while wearing an HMD would involve a user observing his or her team members through the device. In the marker-based HMDs tested, this means that participants would see their team members through a video screen. It is possible that this type of interaction influenced users while evaluating the design with the HMDs. It is also possible that emerging HMDs that move away from video pass-through AR may better support evaluative behaviors.

For other behaviors studied, all AR devices seemed to elicit certain behaviors, whether or not users had the choice to select their own device. For example, in both studies, users engaged in alternative design formulation; navigation of design; and problem solving. These findings largely aligned with researcher expectations. Creating a marker-based application is likely to encourage alternative design formulation regardless of the device chosen, and the task of agreeing on a design concept among a team of individuals is likely to require problem solving. Furthermore, physical navigation of the design is one of the core benefits afforded by AR, as compared to reviewing a static document or virtual model. Therefore, while partially intuitive, these findings indicate core affordances that all AR devices seem to enable, regardless of whether users may select their own device.

Perhaps the most noteworthy conclusions of this work relate to the observations that differed between groups based on whether or not participants had a choice in selecting their AR device to use. For several behaviors on certain devices, participants who had a choice to select certain devices did not exhibit certain behaviors. However, when not provided with a choice, the same behaviors were exhibited on all devices. These behaviors included explanative; decision making; and discussing with others. These findings are noteworthy because researchers and practitioners who conduct design review sessions with a new visualization technology, such as AR, are unlikely to provide multiple devices that deliver the same technical experience. Instead, they are more likely to own or purchase a particular type of device that they would implement for the session. The novelty of this work is in providing evidence that for certain behaviors, exact AR device types may not matter for supporting specific behaviors. This may help to guide researchers and practitioners when planning for what computing devices to purchase for AR design review sessions, based on what may offer the most range of effective uses for their needs.

Data Availability

The authors of this work will share aggregated results from this research but cannot release individual data points from human subjects, in accordance with their institutional review board’s requirements. Furthermore, the authors will also share models that were used for the AR design review sessions described in this work. This will allow other researchers to replicate the activity if it is of interest. If a reader would like to access any available data, please contact the authors directly with their request for information.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank all the participants who completed the AR design review activities that made this research possible. This material is based upon work supported by the National Science Foundation under grant no. IIS-1566274.