Abstract

Recent years have seen a surge of interest in the multifaceted topic of human-computer interaction (HCI). Since the advent of the Fourth Industrial Revolution, the significance of human-computer interaction in safety risk management has only grown, yet little attention has been paid to developing human-computer interaction for identifying potential hazards in buildings. After conducting a comprehensive literature review, we developed a study framework for the use of human-computer interaction in the identification of construction-related hazards (CHR-HCI). Future studies will focus on the intersection of computer vision, VR, and ergonomics. In this research, we built a theoretical foundation that connects the findings of past studies and offered concrete recommendations for the future improvement of HCI in hazard identification. Moreover, we analyzed two case studies from the domain of CHR-HCI: a wearable vibration-based system and a context-aware navigation system.

1. Introduction

The importance of efficient human-computer interaction has grown with the prevalence of computers. Human-computer interaction (HCI) is the study of how humans and computers work together, and specifically of how well computers are designed to work with humans. The use of computers has always raised the question of how humans should interface with them. Humans’ means of communicating with computers have progressed considerably over the years; while we have come a long way in the previous several decades, we still have a long way to go. New technologies and system designs emerge every day, and research into this field has exploded. Not only has the quality of communication between humans and computers improved, but the human-computer interaction (HCI) discipline has also diversified over time. Different areas of study have paid more attention to the ideas of multimodality and adaptable user interfaces than to the design of traditional command- and action-oriented user interfaces.

In the discipline of civil engineering, “hazard” is frequently defined as a source of energy that, if released and exposure results, might cause injury or death [1]. Because of construction’s unique challenges, the industry as a whole has a comparatively low hazard identification rate (66.5%) when compared with other sectors. Individually, even among construction employees with more than ten years of experience, the hazard identification rate is below 80% [2]. To lower the accident rate and guarantee the safety of construction workers, it is crucial to effectively recognise possible risks. However, the current state of the art in hazard identification is monomodal and places too much weight on human intuition [3]. One key reason why the worldwide number of deaths in the construction industry has not yet clearly decreased is that hazard detection technology has evolved slowly and has failed to meet the demands of the construction industry’s development to date. Today, both worker safety and the long-term viability of the construction sector rely on the ability to accurately identify possible dangers [4].

Therefore, the rapid pace of the Fourth Industrial Revolution is pushing the widespread use of human-computer interaction technologies in the construction sector, which in turn is propelling developments in hazard identification tools. For example, scholars such as Schulte et al. [5] are working to model, measure, and improve the efficacy of various types of interfaces between computer applications and construction workers, as well as to maximise the accuracy with which data are mapped from one modality to another. It follows that there is both a robust body of academic literature and substantial room for growth in the field of human-computer interaction technology as it pertains to hazard detection in the built environment [3].

Here, we use the term “CHR-HCI” to refer to studies that investigate the intersection of HCI and hazard recognition in the built environment. While the work of a select few researchers has been extensive, not nearly enough attention has been paid to establishing a broad context for these investigations [3]. Therefore, this study aims to do the following: (1) review the related work presented in the literature of CHR and HCI; (2) analyze two case studies related to this field; (3) identify directions for future research.

2. Literature Review

In the following sections, this study gives a detailed analysis of the reviewed works related to human-computer interaction design approaches by synthesising previous work.

2.1. Overview of Human-Computer Interaction Design

Human-computer interaction is the study of how to create efficient computer systems via assessment, design, and implementation [6, 7]. Human-computer interaction (HCI) is the most crucial step in the creation of any kind of computer system because it is a central aspect of “man-machine systems” [8], in which participation concerns not only the work at hand but also the mutual understanding that can result from sharing the same space [9], and because it facilitates “creating input and output modalities of information” [10] as a means of comprehending human interaction with robots. Any interface’s success depends on how well it facilitates “human and computer system communication” in its entirety [11]. In a similar vein, Sumak et al. [12] emphasised that an efficient user interface is one that achieves faultless and harmonious interactions between humans and computer systems, as this is the only way to fundamentally reduce people’s mental loads and enhance their “operational abilities” [6].

2.2. Methods for HCI Design

“Human and computer interaction” is the process by which data and information are entered into and extracted from a computer [13] through a specialised user interface: users give their instructions to the system, which examines those inputs, computes and processes them, and then returns the results to the users through the same interface [14]. In the modern day, information is exchanged between humans and machines through a variety of channels, such as data communications, numerical and symbolic interaction, voice interaction, and intelligent interaction [11]. Tosi [6] and Jeon et al. [9] have proposed subclassifications within the three main parts of the interface design process: interactive design, structural design, and visual design [15]. For instance, interactive design, which is concerned with people’s interactions with systems, can be further categorised by “the types of interactions” and “how the interactions take place” [7]. When creating an interactive interface, it is crucial to keep in mind factors such as “people’s orientation, consistency, users’ operation ability, shortcuts, assistance, and feedback,” as emphasised by Esposito et al. [16, 17]. Structural design, in turn, may be broken down into three subcategories that focus on analysing individual requirements, the rationale for carrying out the work, and the way in which the task was designed [18, 19]. Finally, “visual design,” which involves combining “complexity and imagery,” aims to make users pleased with the interface [19], whatever other research studies have revealed [20]. Discussion about how best to design cutting-edge IT (emerging technologies) has spread to the “HCI discourse” during the last decade and has regularly urged a reevaluation of current practices in interface creation [19]. Information system experts are increasingly interested in learning about HCI methods of development; the question of HCI interface standards for new technologies has therefore become a hot topic of debate [13].

3. Research Methodology

3.1. Paper Retrieval

First, the information-gathering tools were located. The databases used for the literature search were Scopus, ACM Digital Library, Web of Science, and Google Scholar, chosen after much deliberation and comparison.

Second, the types of literature to include were chosen. Journal articles focusing on HCI technology and risk assessment serve as the primary literature foundation for this investigation. Academic conferences are an essential route for academics to share research results and address the scientific difficulties faced in this field, so conference papers were also a crucial element of the literature resources for studying hazard recognition and HCI [3].

Finally, constraints were applied to guide the literature search. To retrieve papers, researchers must be very specific about what they are looking for and the time period of interest. Synonyms and near-synonyms were identified for the terms “construction,” “hazard,” “recognition,” “human-computer,” and “interaction.” The following procedure was used to guarantee that the literature search was thorough and exhaustive [3]: the synonyms and near-synonyms were linked using Boolean operators, and the resulting queries were run against the various databases. Using the most relevant keywords, abstracts, and publications from the search results, we then added the missing synonyms and near-synonyms to the vocabulary.
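As an illustration of this retrieval step, the following minimal sketch shows how such a Boolean query string could be assembled from synonym groups; the specific synonym lists and quoting style are assumptions for illustration, not the exact strings submitted to the databases.

```python
# Sketch: assembling a Boolean search string from synonym groups, as
# described above. The synonym lists below are illustrative, not the
# exact vocabulary used in the study.
SYNONYM_GROUPS = [
    ["construction", "building site", "civil engineering"],
    ["hazard", "danger", "risk"],
    ["recognition", "identification", "detection"],
    ["human-computer interaction", "HCI", "human-machine interaction"],
]

def build_query(groups):
    """OR within a synonym group, AND across groups."""
    clauses = ("(" + " OR ".join(f'"{t}"' for t in group) + ")"
               for group in groups)
    return " AND ".join(clauses)

print(build_query(SYNONYM_GROUPS))
```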

3.2. Bibliometric Analysis Method

After extracting data from the four databases, the study team compared titles to create a deduplicated initial literature list. Second, the titles of the publications were examined and the abstracts were verified to remove any duplicates or irrelevant studies. As a third stage, the broad subject matter of the literature was studied to further weed out noncompliant material based on the results of stages one and two. In the end, 274 publications met all of the criteria and were included in the analysis [3].

CiteSpace and VOSviewer were used for the bibliometric analysis once the sample was selected. Basic information analysis, cluster analysis, and keyword co-occurrence analysis were used to thoroughly identify the current state of research and the future development trends in this field, as represented by abstracts and keywords.

3.3. Basic Information Analysis

The underlying data from the 274 publications were analysed once the sample was determined. The major purpose of this section, like the descriptive statistics in experimental research, is to give readers the basics, such as the number of annual publications and the make-up of literature types in this field. Examining the distribution of publication types (journals, conferences, and reviews) over time sheds light on the development of knowledge and may provide clues as to the future of CHR-HCI.

3.4. Number of Annual Publications

Figure 1 displays the trend in annual publications from 2000 to 2021 [3]. Most years before 2009 saw relatively few relevant articles published. Publications have been on the rise since 2011, and especially since 2015, increasing from nine articles in 2015 to 59 in 2021. Notably, output continued to grow despite the influence of the COVID-19 pandemic.

Furthermore, a regression model was fitted using the least-squares approach, with the number of publications as the dependent variable and the year as the independent variable; the resulting slope is positive, as illustrated by the dashed line in Figure 1. In addition, the Price index was determined as the ratio of publications from the most recent five years (2017–2021; 2022 was excluded because it had not yet concluded) to all publications in the study window (2000–2021). The reported Price index of 0.068 suggests that studies in this area continue to accumulate over time rather than go stale. It can therefore be concluded that CHR-HCI research has garnered considerable interest and has been a rapidly expanding field of study in recent years, as evidenced by the increasing volume of annual publications [3].
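For concreteness, the sketch below reproduces both calculations, the least-squares trend and the Price index, on placeholder annual counts; the numbers are illustrative, not the study’s data.

```python
# Sketch: least-squares trend and Price index for annual publication
# counts. The counts are placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2022)
counts = rng.poisson(np.linspace(2, 50, years.size))   # placeholder data

slope, intercept = np.polyfit(years, counts, deg=1)    # least-squares line
price_index = counts[-5:].sum() / counts.sum()         # 2017-2021 / 2000-2021

print(f"slope = {slope:.2f} papers/year, Price index = {price_index:.3f}")
```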

3.5. Composition of Literature Types

As shown in Figure 2, journal articles accounted for 56% of all publications, followed by conference papers (42%) and review articles (2%) [3].

3.6. Keyword Co-Occurrence Network

The scientific knowledge graph shown in Figure 3 of [3] reveals the development of CHR-HCI research through keyword co-occurrence analysis. Figure 3 depicts the term co-occurrence network, which can be used directly in cluster analysis; the cluster analysis of this network yielded a mean silhouette value of 0.7533 and a modularity (Q) value of 0.796, both of which indicate a credible clustering. Finally, the research terms in Figure 3 can be classified into lower-level and higher-level concepts, depending on their frequency of occurrence. At the top is the overarching research question, followed by a tier of keywords related to human-computer interaction and a tier of keywords concerning construction safety and hazard recognition.
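For readers wishing to reproduce this kind of analysis, the sketch below builds a small keyword co-occurrence network with networkx; the records are invented placeholders rather than the retrieved publications.

```python
# Sketch: building a keyword co-occurrence network of the kind shown
# in Figure 3, using networkx. The records below are invented
# placeholders, not the keyword lists of the retrieved publications.
from itertools import combinations
import networkx as nx

records = [
    ["computer vision", "hazard recognition", "deep learning"],
    ["virtual reality", "safety training", "hazard recognition"],
    ["computer vision", "construction safety", "hazard recognition"],
]

G = nx.Graph()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # co-occurrence count
        else:
            G.add_edge(a, b, weight=1)

# Weighted degree separates high-frequency (upper-tier) terms from
# low-frequency (lower-tier) ones.
print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1]))
```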

3.6.1. Terms Related to Human-Computer Interaction

The term “human-computer interaction” describes the dynamic in which human beings and computer-related machinery work together during the execution of a predetermined automated task, and it has driven a dramatic improvement in hazard detection. Current HCI research on hazard recognition can be sorted into three main categories: key technologies, typical products, and product performance.

Technologies that are “key” to the development of HCI-related products for use in hazard recognition can be either fundamental or ground-breaking. Sensor technology, positioning and map construction, robot operating systems, 3D modeling, and virtual simulation are all examples of basic technologies; breakthrough technologies include computer vision, computer simulation, neural networks, and high-performance material manufacturing. Figure 3 [3] displays how the researchers’ use of terms such as “virtual reality,” “three-dimensional computer graphics,” “computer simulation,” and “computer vision” demonstrates their interest in technology.

Construction robots for narrow scenarios and automated construction systems for broad integration are just two examples of the types of typical HCI products that have been developed with specific hazard recognition functions so far. Excavation robots, handling robots, and painting robots are all examples of scene-specific robots that can recognise hazards and perform the same tasks repeatedly. ABCS systems and SMART systems with more comprehensive hazard recognition functions are two examples of automated construction systems used in integrated scenarios, and both have the ability to integrate multiple single-task robots [21].

The product performance of HCI in the context of hazard recognition covers attributes such as product cost, operation efficiency, operation quality, and operation safety [3]. Performance can be assessed through horizontal and vertical comparisons of human resources, building material consumption, machine quality, machine power, machine load, movement speed, operation accuracy, and so on, as well as through comparisons between typical HCI products and traditional operation methods [22]. Integration of design and construction, increased mobility in humanoid robots, and improved load capacity and positioning accuracy in intelligent machinery are the areas where HCI products applied to hazard recognition are expected to focus in the future.

3.6.2. Terms Related to Construction Safety and Hazard Recognition

There has been a significant paradigm shift in the area of CHR-HCI research over the last 21 years, with the emphasis moving from accident investigation to hazard prediction and prevention [23]. Forecasting is the key word in Figure 3 that illustrates this change [3]. As opposed to looking at accidents after they have already happened, the focus of accident prevention and hazard prediction is on making sure workers in the construction industry are aware of and prepared for any prospective dangers [3]. Because of this shift in philosophy, terms such as “risk perception” and “risk analysis” have emerged as vital tools for helping construction workers see potential dangers in high-stakes settings [24].

Earthquakes, a significant natural hazard, have also garnered scientists’ ongoing interest in the study of risk prediction and hazard awareness. Researchers have begun promising new inquiries from the vantage points of earthquake design, urban planning, and cutting-edge materials [25]. The evolution of this field is reflected in the vocabulary of the field itself: terms such as “earthquakes,” “seismic design,” “seismology,” “architectural design,” and “reinforced concrete” are all part of the study of earthquakes and their effects [26].

Alterations in management structure in this area are also reflected in the keyword co-occurrence network. To make accident prevention and hazard prediction a reality, revolutionary changes in organisational management and safety technology are essential [27]. Because of the inextricable link between management and construction safety, experts are continually looking for new ways to improve the industry’s safety record. The evolution of this field is reflected in the rise of concepts such as decision-making, monitoring, safety training, and risk management. After 21 years of study, scholars such as Yeo et al. [28] consider risk management, risk decision-making, engineering structural health, and safety training in engineering construction to be significant areas of inquiry.

3.7. Cluster Analysis

Cluster analysis was used to describe the most important developments in the field of CHR-HCI [3]. Cluster analysis is a statistical technique that can be applied to text data to uncover interesting research topics. In this investigation, VOSviewer and CiteSpace were used for cluster analysis, with CiteSpace being used to fine-tune the results obtained from VOSviewer. Log-likelihood ratio, mutual information, and highest word frequency are the three most commonly used approaches to naming modules in CiteSpace [29, 30]. Because it yields descriptive module names, we settled on the highest-word-frequency technique for naming the clusters.

Figure 4 [30] shows the results of the analysis and optimisation, which yielded four largely independent modules: computer vision, ergonomics, computer simulation, and virtual reality [3].

3.7.1. Cluster 1: Computer Vision

Out of a total of 251 articles found, 177 were directly relevant to this keyword [3], highlighting the important role that computer vision plays in hazard identification studies. Recent developments in computer vision technology rest on the constant refinement of deep learning techniques such as convolutional neural networks, stacked autoencoder network models, and deep belief networks. Topics such as content-based image retrieval, posture assessment, multimodal data identification, autonomous motion, image tracking, scene reconstruction, image recovery, and system integration are crucial areas of study. There are two main lines of inquiry in computer vision related to hazard recognition [3]; for example, Luo et al. [31] have developed models and analysed cognitive connections.
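As a simplified illustration of the convolutional models this cluster builds on, the PyTorch sketch below defines a toy hazard/no-hazard image classifier; the architecture and the two-class output are assumptions made for illustration, not a model from the reviewed papers.

```python
# Sketch: a minimal convolutional classifier of the kind the cluster's
# papers build on, in PyTorch. Layer sizes and the two-class output
# (hazard / no hazard) are illustrative assumptions.
import torch
import torch.nn as nn

class HazardCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global average pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = HazardCNN()(torch.randn(1, 3, 224, 224))  # one RGB site image
print(logits.shape)  # torch.Size([1, 2])
```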

3.7.2. Cluster 2: Ergonomics

Since 2015, CHR-HCI has been strongly tied to ergonomics, which has progressed toward greater diversity, humanization, and intelligence [3]. To enhance the efficiency of hazard identification, scientists are now using physiological and psychometric methods to investigate the rational coordination between the structural-functional, psychological, and mechanical components of the human body and computers [32]. Sixty-five of the 251 papers retrieved were associated with this keyword, demonstrating that the link between construction hazard identification and ergonomics is substantial and that a large number of researchers have carefully studied the relevant technological methods [3]. Task assessment and quantification, brain-computer interfaces, and experimental paradigms in engineering psychology are now at the centre of this field’s investigation [3].

3.7.3. Cluster 3: Computer Simulation

A computer simulation, often called an “emulation,” is software designed to mimic the behaviour of a model of a system in order to learn more about that system [33]. Of the 251 articles found, 97 were directly connected to this search term [3]. With the goal of simulating hazards in construction scenarios through simulation software and external parameters, current hazard recognition research in computer simulation focuses on discrete simulation, analogue simulation, simulation based on probe elements, and simulation of stochastic processes or deterministic models [3]. Creating new code and improving preexisting systems are both vital parts of this work. Discrete-event simulation languages such as GPSS, SIMSCRIPT, GASP, CSL, and SIMULA and continuous-system simulation languages such as DARE, ACSL, CSS, and CSSL have been continuously optimised by a large number of researchers, laying a firm groundwork for human-computer interaction technology and fostering the growth of hazard recognition [34].
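The sketch below illustrates, in plain Python, the core mechanism these discrete-event languages share: an event queue advanced in time order. The Poisson hazard-exposure process is an illustrative assumption, not a system from the reviewed literature.

```python
# Sketch: the core loop of a discrete-event simulation, the paradigm
# implemented by languages such as GPSS and SIMULA. Hazard-exposure
# events on a site are scheduled on a priority queue; the arrival
# process and horizon are illustrative.
import heapq
import random

def simulate(horizon=8.0, rate=0.5, seed=1):
    random.seed(seed)
    events, exposures = [], 0
    heapq.heappush(events, (random.expovariate(rate), "exposure"))
    while events:
        clock, kind = heapq.heappop(events)   # next event in time order
        if clock > horizon:
            break
        exposures += 1                        # record the exposure event
        # schedule the next arrival (Poisson process)
        heapq.heappush(events, (clock + random.expovariate(rate), "exposure"))
    return exposures

print(simulate())  # exposures observed in an 8-hour shift
```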

3.7.4. Cluster 4: Virtual Reality

Virtual reality (VR) technology aims to allow people to experience a computer-generated environment with all their senses [35]. Fifty-two of the 251 retrieved articles were associated with this keyword, showing that the introduction of virtual reality into the area of hazard detection has great potential for future growth [3]. From the standpoint of technological development, scholars are trying to optimise dynamic environment modeling, real-time 3D graphics generation, stereo display and sensor technology, and system integration technology [3]. From an application standpoint, virtual reality technology is primarily developed for construction risk assessment and worker safety training. The high cost of production and the unreliability of the user’s visual experience are two of virtual reality’s key technological drawbacks [36].

4. Case Studies and Analysis

Two case studies are presented here to highlight how HCI research can incorporate human values throughout the process.

4.1. Case Study 1: Wearable Vibration-Based Computer [37]

Information technology is being put to good use in many facets of modern life. Machines have become ever more vital because of the difficulties people face in conveying and processing information. One of the primary goals of speech recognition systems is to permit more widespread use of computer systems that aid people’s work in a variety of professions by allowing users to communicate with them through voice [37].

Humans rely mostly on verbal exchanges for communication [37]. From speech, it is possible to identify the speaker as well as their gender, age, and emotional state [38]. Verbal communication begins in the mind, where a combination of motivation and neuronal activity produces audible speech. Speech is received by the auditory system, which transforms it into neural signals that the brain can interpret [39].

The inability to localise the source of a sound is the primary challenge faced by those with hearing loss. The primary aim of this research [37] was to find a way to help the hearing-impaired identify the source of an incoming sound and move in that direction. The other goal was to ensure that people with hearing loss could still understand who was talking and how loudly they were talking. A voice recognition application’s primary function is to take in speech data and generate an approximate transcription. To do so, the audio captured by the microphone must be converted from analogue to digital, after which the characteristics of the acoustic signal can be extracted and used to identify critical features.

Two characteristics of the sound wave itself are especially important: amplitude and frequency [37]. The treble and bass qualities of a sound are determined by its frequency, while its intensity and energy are determined by its amplitude. Analysis and classification of acoustic signals are useful for sound recognition systems. Real-time tests of the wearable device have also been conducted and the results compared. The device, worn by the user as shown in Figure 5 [37], alerts the deaf wearer in real time through vibrations transmitted through the user’s clothing.
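A minimal sketch of extracting these two properties, amplitude as RMS energy and the dominant frequency via an FFT, from one digitised microphone frame follows; the sample rate, frame length, and synthetic tone are assumptions standing in for real captured audio.

```python
# Sketch: extracting the two signal properties named above, amplitude
# (RMS energy) and dominant frequency (via FFT), from a digitised
# microphone frame. The synthetic tone stands in for real audio.
import numpy as np

fs = 16_000                                  # sample rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)               # one 20 ms frame
frame = 0.5 * np.sin(2 * np.pi * 440 * t)    # synthetic 440 Hz tone

rms = np.sqrt(np.mean(frame ** 2))           # amplitude -> loudness/energy
spectrum = np.abs(np.fft.rfft(frame))
dominant_hz = np.fft.rfftfreq(frame.size, 1 / fs)[spectrum.argmax()]

print(f"RMS = {rms:.3f}, dominant frequency ~ {dominant_hz:.0f} Hz")
```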

The primary goal of this research [37] was to determine whether individuals with hearing loss could detect sounds such as brake or horn noises coming from behind them. People who have trouble hearing may experience distress when noises approach from behind, and the ability to hear brakes and horns is crucial for travelling safely. The goal is to develop a product that people with hearing difficulties can use daily to improve their lives, giving them instantaneous, real-time access to additional perception and decision-making capability.

Ketabdar and Polzehl’s research [40] included creating a smartphone app that analyses sound, detects vibrations, and displays alerts in the event of a loud noise. This programme is helpful for the deaf and anyone with hearing impairments since it alerts them to nearby loud activity. The speech content analysis algorithm uses the mobile phone’s microphone to collect data on the user’s environment, which is then analysed for shifts in the level of background noise. When changes occur or other circumstances arise, the app alerts the user with visual or vibro-tactile cues corresponding to the altered speech content, making the user aware of the event [37]. By studying user actions, this algorithm could be extended to perform even more tasks [40]. As part of their research, Shivakumar and Rajasenathipathi [41] used hardware control techniques, including vibrating gloves, and a screen input application to connect people who are deaf or blind to a computer so that they can use modern computer technology for communication.

The wearable solution underwent preliminary testing and deployment in the field. Incoming data were estimated in real time, and the user was updated instantly through vibrations; as the system reacted and rerouted the user, the wearable device predicted the direction again. This approach was used to determine which of the previously described methods was the most effective, and that method was then put into use. Subjects were played recordings of voices coming from a variety of locations and asked to identify their source. The success of the wearable system was evaluated by comparing these figures with those obtained in the real world [37].

The second step involved connecting the system to a computer and bringing the voices and their instructions into the digital domain. Each time, data gathered from the four separate microphones were stored in a matrix, and this process continued until a sizable data set had been amassed. Preprocessing, feature extraction, and classification were all performed successfully on the resulting data. The results were compared with the live application and discussed in context [37].

Four microphone ports were integrated into the final wearable system (Figure 6 [37]). Four microphones were used to ensure clear audio from all cardinal directions. Initial tests were conducted with only three microphones, but four proved necessary given the system’s low success rates and the fact that there are four main directions. With the help of the HCI, the microphones were positioned to the right, left, front, and back of the user (Figure 6 [37]). Using four microphones rather than three improved accuracy in experiments. The system’s design called for two vibration-motor outlet units, one on each fingertip, to indicate the direction of sound via vibration frequencies; the high concentration of nerves in the fingertips is the primary reason for this choice. Furthermore, vibration motors positioned on the fingers are more user-friendly and cause less disruption [37].
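The sketch below gives one plausible reading of this direction-selection logic: compare the energies of the four microphone channels and map the loudest to a fingertip vibration pattern. The threshold, the patterns, and the stubbed motor control are assumptions, not the authors’ implementation.

```python
# Sketch: picking the incoming-sound direction from four microphone
# energies and mapping it to the two fingertip vibration motors, per
# the scheme described above. Motor control is stubbed out; the
# threshold and patterns are assumptions.
import numpy as np

MICS = ["front", "back", "left", "right"]

def loudest_direction(frames, threshold=0.05):
    """frames: dict direction -> numpy array of one audio window."""
    energy = {d: float(np.sqrt(np.mean(x ** 2))) for d, x in frames.items()}
    direction = max(energy, key=energy.get)
    return direction if energy[direction] > threshold else None

def vibrate(direction):
    # left/right -> matching fingertip; front/back -> patterned bursts
    pattern = {"left": "L", "right": "R",
               "front": "R-L x3 (quick)", "back": "B,R,L x3"}[direction]
    print(f"sound from {direction}: motor pattern {pattern}")

frames = {d: np.random.randn(320) * (0.2 if d == "left" else 0.01)
          for d in MICS}
d = loudest_direction(frames)
if d:
    vibrate(d)
```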

The designed system has four LED outlets; when a sound is detected, the LED of the outlet facing the direction of the vibration is illuminated. The combination of vibration and LED light enhances the user’s ability to identify the correct direction, with the LEDs providing a visible alert. Meanwhile, the possibility of using four distinct LED lights for the four cardinal directions is being studied. The LEDs give the user something to glance at if they are confused by the vibrations. In this investigation, vibration serves to stimulate the sense of touch in those who are deaf or hard of hearing. Hearing-impaired people have a better chance of comprehending and feeling at ease if they can communicate with others via touch [37].

The prototype was built around a 32-bit ARM-based MCU with flash storage [37]. It has a maximum frequency of 72 MHz, a 3.6 V supply, seven timers, two ADCs, and nine communication interfaces. The wearable device runs on rechargeable batteries, which provide roughly 10 hours of run time. Vibration allows the wearer to detect the direction of an incoming sound 20 milliseconds after the vibration is delivered; that is, the listener can recognise an incoming sound within 20 ms [37].

Testing took place over five days with four deaf participants and two individuals with mild hearing loss, using a total of eight directions, and the findings were compared with those of normal-hearing participants. Effectiveness was measured by playing recordings from the left, right, front, and back and identifying the intermediate positions where these directions intersect. In this research, we analysed the data from the four- and eight-direction studies and conducted further tests in both controlled and natural settings [37].

Actual human subjects were employed as sound generators in these real-time studies [37]. For example, an outdoor stroll would be interrupted by a call from behind, and the user’s ability to perceive the voice was measured. The computer system played the audio through a loudspeaker. In this experiment, vibration motors were attached to the participant’s left and right fingertips, and microphones were placed to their right, left, front, and back. A sound coming from the left, for instance, would activate the vibration motor on the left fingertip. The front and back directions were signalled using both motors: for the front, three quick right-to-left vibration pulses were produced, and for the back, the motors cycled through three pulses in the back, right, and left order. The typical time taken for the user to discern the direction was 70 milliseconds. This research also helped classify speakers as loud or quiet so that those with hearing impairments could direct their attention; when someone nearby was making a loud noise, for instance, hearing-impaired users could still comprehend what was going on and act accordingly.

4.2. Case Study 2: Context-Aware Navigation System [42]

In mobile navigation contexts, context awareness is a fascinating issue because of the great degree of application-specific variation. Navigation services take the user’s current circumstances into account not just during development but also in real time while the device is in use. A user’s behaviour and the device’s location are two examples of circumstances that might influence the services a mobile navigation app offers. This article [42] addresses the core problems of context-aware systems: acquiring context, interpreting context, and adapting applications to context. The work proposes a method for strengthening the precision and dependability of context-aware navigation systems through the use of inexpensive sensors in a multilevel fusion strategy. The experiments show that, with the help of context-aware personal navigation systems (PNS), smartphones can be used for outdoor navigation [42].

“Context-aware” applications take external factors, such as the user’s actions, into account when making judgments about the user and/or the environment. While many approaches have been explored for automatic context and environment recognition in context-aware applications (such as healthcare, sports, and social networking), there is still room for improvement. The study presented in [42], for example, is one of the first to apply user activity context to PNS, and more specifically to vision-aided navigation [43]. A new hybrid paradigm was introduced for recognising and using context in PNS applications. When using a navigation app, the user’s current activity (such as walking or driving) and the device’s current location and orientation provide valuable context.

In the field of pervasive computing, Caetano suggested using a hybrid methodology to combine the best features of data-driven and knowledge-driven approaches [44]. Arato et al. proposed a knowledge-driven hybrid method for continuous, real-time activity recognition in smart homes using multisensor data [45]. In that research, ontology-based semantic reasoning and classification are used for activity recognition, and domain knowledge is heavily leveraged throughout the entire process [42].

An activity recognition module was created to determine which sensors and features best support a reliable context detection algorithm. With the help of this module and a battery of experiments, its performance was gauged across a variety of user motions and modes [42]. Data collection for this study was performed using a Samsung Galaxy Note 1 smartphone.

The proposed context-aware model for navigation services uses a client-server software architecture, which allows application logic to be split between the user’s local Android device and a server-side resource with access to more extensive data stores and processing capability. For example, the average value of a window of recorded accelerometer data can be sent from the local Android device to a web server for comparison against a database of context patterns. Wi-Fi allows near-instantaneous data synchronisation with the server. An app was built to capture information from the mobile device and transmit it to the server [42]; this software creates timestamped data that can be used in real time. The main software and the user’s data are stored on remote servers, and end users access the applications through a lightweight mobile application. Automatically, or at the user’s request, all sensor data relevant to detection were preprocessed and sent to the server; the results of the context detection and navigation solution were then returned to the mobile user.

Two men and two women, ranging in age from 26 to 40, participated in the study to provide data on their physical activities [42]. Testing data were collected with the smartphone in a variety of positions, such as in a purse, in a jacket pocket, on a belt, held close to the ear while talking, and at the user’s side while the arm was swung. The only restriction on how the smartphone should be worn is where on the body it is kept. After two minutes, data from each activity with a unique device placement mode were saved to the server’s database (DB). Subjects were asked to mark the beginning and end times of their primary activities in order to construct the reference data [42].
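The sketch below illustrates the client half of such an architecture: a window of accelerometer samples is averaged on the device and posted to the server for matching against stored context patterns. The endpoint URL and payload fields are hypothetical, not the study’s actual API.

```python
# Sketch: client side of the client-server architecture above. A window
# of accelerometer samples is averaged locally and posted to a server.
# The URL and payload fields are hypothetical assumptions.
import json
import time
import urllib.request

def send_window(samples, url="http://example.com/api/context"):
    """samples: list of (ax, ay, az) tuples from one recording window."""
    payload = {
        "timestamp": time.time(),
        "mean_accel": [sum(axis) / len(axis) for axis in zip(*samples)],
    }
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:   # server returns context label
        return json.loads(resp.read())

# window = [(ax, ay, az), ...] collected over e.g. 2 s at 50 Hz
# context = send_window(window)
```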

The sensors that correlate most strongly with the activity classes are the most suitable for activity recognition. Accelerometer sensors have become increasingly popular for detecting motion, and the gyroscope can record the user’s movements and the device’s changing orientation. Orientation determination is a crucial feature when differentiating between groups of on-body device placements and identifying the device’s orientation in each placement [42]. Magnetometer sensors assist with orientation and heading determination and also provide absolute heading information. Device orientation can also be estimated using the orientation software sensor (or soft sensor) made available by the Android API, in which the orientation angles are generated by fusing signals from an accelerometer, a gyroscope, and a magnetometer. The values of these angles characterise the relationship between the device’s coordinate system and the regional navigation reference frame. The orientation soft sensor’s output can stand on its own as a sensor or be used to transform data from one coordinate system (the device’s) to another (the reference navigation system). The context recognition outputs of multiple sensors have been analysed as a whole [42].
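As an illustration of this kind of fusion, the sketch below computes pitch, roll, and a tilt-compensated heading from accelerometer and magnetometer readings alone; gyroscope smoothing is omitted, and the sign conventions are one common choice rather than necessarily those of the Android soft sensor.

```python
# Sketch: fusing accelerometer and magnetometer readings into the
# pitch/roll/heading angles that an orientation soft sensor exposes.
# Gyroscope smoothing is omitted; inputs and conventions are illustrative.
import math

def orientation(ax, ay, az, mx, my, mz):
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # tilt-compensate the magnetometer before taking the heading
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    heading = math.atan2(-myh, mxh)
    return [math.degrees(a) for a in (pitch, roll, heading)]

# device lying flat, magnetic north along +x:
print(orientation(0.0, 0.0, 9.81, 30.0, 0.0, -40.0))  # -> [0.0, 0.0, 0.0]
```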

Calibration and noise reduction are applied to the raw data captured by the sensors, as depicted in Figure 7 [42]. Signal processing algorithms are then applied to the data to extract useful features. Although there is a vast pool of features from which to choose, only a few should be implemented for reliable, real-time context recognition [42]. The feature space can then be classified using classification methods. Tervo et al. [46] noted that there is a wide range of feature extraction and classification methods and that the best method to use is often context-specific.
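A compact sketch of this extract-then-classify stage follows, using scikit-learn; the chosen features, the random-forest classifier, and the toy two-activity data are assumptions, consistent with the observation that the best method is context-specific.

```python
# Sketch: the extract-features-then-classify stage of Figure 7 with
# scikit-learn. Features, classifier, and toy two-activity data are
# assumptions, not the study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window):
    """window: (n_samples, 3) accelerometer array -> feature vector."""
    mag = np.linalg.norm(window, axis=1)
    return np.r_[window.mean(axis=0), window.std(axis=0),
                 mag.mean(), mag.std()]

rng = np.random.default_rng(0)
scales = rng.choice([0.2, 2.0], size=60)        # still vs. walking noise
X = np.array([features(rng.normal(0.0, s, (100, 3))) for s in scales])
y = (scales > 1.0).astype(int)                  # activity labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(f"training accuracy = {clf.score(X, y):.2f}")
```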

5. Conclusion

This paper proposes a framework that categorises the CHR-HCI field into three levels, acknowledging that human-computer interaction is an emerging interdisciplinary field encompassing numerous disciplines and that hazard recognition also requires complex theoretical knowledge and practical techniques. The paper reviewed related work in the field of CHR-HCI and analyzed two related case studies. From a research perspective, hazard identification concerns the construction industry’s practice of finding, perceiving, and recognising dangers and their influencing variables for the sake of risk assessment, accident prevention, foresight, prediction, and intelligent monitoring. The primary shift in engineering safety philosophy over the last 21 years has been from postaccident analysis to preaccident prediction and prevention, made possible by advancements in human-computer interface technology. This is one reason why we advocate the widespread use of HCI methods.

Theoretically speaking, there are two basic components to hazard recognition: theory pertaining to the hazards or risks involved and theory pertaining to the actual act of recognising or identifying the hazards. Theoretical guidance for the implementation of HCI technologies may be found in fields including risk psychology, ergonomics, human factors engineering, behavioural psychology, and sociology. Academics have paid considerable attention to engineering ethics because of its supervisory role in scientific experiments and the growing importance of engineering as science and technology progress. As such, engineering ethics should be taken into account as a fundamental compass for identifying potential dangers [47, 48].

In terms of real-world implementation, hazard recognition should find most use in computer simulation, computer vision, VR/AR, and robotics [49]. The three issues we have highlighted are where we think researchers should focus in the future when studying hazard recognition: first, finding efficient ways to process multimodal data in hazard recognition experiments; second, using these data to create intuitive devices for hazard recognition; and finally, building a user-friendly platform for managing safety measures that uses multimodal data. These three areas of study have already seen some practical application and also point in clear future directions.

Data Availability

The data that support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially funded by Middle East University.