Abstract

Since the beginning of the 21st century, with the development of information technology, researchers in many fields have gradually increased their research on human emotion and behavior. The current research mechanism used in emotion and behavior research is artificial intelligence technology. A literature survey and data analysis in related fields show that human emotions and behaviors are acquired by capturing facial feature points with facial feature algorithms and then combining machine learning for output detection and analysis; the detection process first requires machine learning by the artificial intelligence. This paper first analyzes and summarizes the advantages of Python programs at this stage and completes the preliminary work of system construction by setting platform parameters and installing the environment. In the research process, an existing value-based algorithm is applied to the samples for preliminary tests. The overall detection values in the test data are relatively even, but differences between samples remain. We then compare two detection algorithms according to their output values in machine learning: each algorithm detects some expressions at a high rate but others at a low rate. Finally, given the limitations of the output methods in the mathematical formulas, a new algorithm is proposed that takes a weighted sum, then the logarithm, and then the square root. Statistical analysis shows that the overall average value of the final algorithm is improved, with an overall detection rate of about 80%, and its overall detection frequency fluctuates less than that of the first two algorithms. In the frequency fluctuation data table of the paper, the new algorithm is also superior to the existing algorithms in machine learning, sample testing, and frequency fluctuation. Our next step will be to use the Python main program to perform automatic AI facial emotion detection by combining the new algorithm with the value, DWT, and CNN algorithms for facial recognition features through machine learning.

1. Introduction (Research Significance of Detection Mechanism)

1.1. Emotion Detection Technology

In the 21st century, emotion detection is a hot research topic. Slogans like “How do you feel today?” [1, 2] appear in some IT workplaces, which shows that human emotions are an important part of life. Positive emotions can bring joy to life, and negative emotions can bring crisis. As mentioned in the abstract, long-term depressed employees in an enterprise have a great negative impact on life and work [3], and tragedies may even occur. Human emotional expression can be divided into internal physiological phenomena and external facial expression information; 27 types have been identified, and daily emotions can be defined as 6 types. In general, target sentiment is obtained in either an artificial mode or a machine mode [4]. In the artificial mode, the content of a person’s life trajectory and actions is recorded, statistical tools measure the recorded information [5], and corresponding values obtained through related software are used to judge the target’s emotion. At present, the world is in the development stage of Industry 4.0, and the popularization of intelligence brings unprecedented convenience. Under Industry 4.0, the research, development, and expansion of artificial intelligence technology is becoming more and more popular. Driven by Industry 4.0, smart devices can accurately extract appearance and psychological data of the human body through machine learning. After the device’s emotional learning, it can evaluate the acquired emotional factors and compute the target’s emotional features [6]. Emotion detection technology is now widely used at home and abroad [7]; the specific areas covered include intelligent policing, facial emotion recognition, intelligent driving detection, EEG emotion, multimodal human neural network detection, and heart rate detection. Artificial intelligence has thus ushered in an era of “intelligent detection” for emotion. According to the latest literature, Tiantian and Fan used the AU algorithm of artificial intelligence to realize an emotion recognition system based on distributed edge computing, dividing the original analysis function into multistep execution and subtotals [8]. Wang’s team used the OpenVINO toolkit to detect, through human-computer interaction, the emotional changes of rehabilitation patients during the rehabilitation process [9]. In addition, Huiting and Yi proposed, respectively, a multidimensional information fusion algorithm and an artificial intelligence-based ECG emotion analysis and prediction algorithm, realizing the detection of facial emotion dimensions [10] and psychological emotion under artificial intelligence [11]. These works also provide a theoretical basis for this paper. This paper builds on a preliminary analysis of the existing algorithms of researchers in related fields [12] and adds its own ideas to upgrade the algorithms, laying a theoretical and practical foundation for the next step. Figure 1 shows a technical analysis of human behavior detection.

1.2. Human Behavior Analysis Technology

The analysis of human behavior belongs to the category of human behavioral research. Human behavior arises from the transformation of human emotions into physical manifestations. Methods for human behavior analysis usually employ 3D skeleton detection, human-computer interaction, human posture prediction, human movement trajectories, etc. [13]. The usual research method is to use artificial intelligence technology to extract data such as the coordinates of limbs and skeletal points in the human body structure and to analyze behavior through changes in the bone coordinates and limb angles [14]; this is what we commonly call “sitting,” “standing,” “walking,” and other common features. A review of the relevant literature shows that the main purpose of human behavior research is to understand the transformation methods and interaction behaviors between humans and artificial intelligence. A scientist in Australia once used artificial intelligence to play games with humans and to guide human behavior [15]; after the behavior of the artificial intelligence was set, both positive and negative guidance of the game participants was produced [16]. From this we can see that artificial intelligence technology plays a decisive role in the analysis and prediction of human behavior. Because emotions can stimulate behavior, special methods can be used to obtain concurrent emotional data when analyzing human behavior [17].

1.3. Comprehensive Analysis

For the detection of human emotions and the analysis of behavior, integrated research through the same effective channel will become possible in the future [18–20]. This paper provides an effective machine learning basis for later integrated behavior and emotion research [21]. Through the algorithm implementation process for human emotion detection in this paper, the advantages of the final algorithm are used to identify an effective basis for human behavior analysis and to provide the most reliable technical guarantee for later behavior research. The model diagram is shown in Figure 2.

2. Background

2.1. Research Background in the Field of Python

Python programming is widely used by a large number of information technology researchers and enthusiasts in the 21st century; it now fully covers domestic and foreign universities, enterprises, and related scientific research institutions. The Python program first appeared in the 1990s [22], when it was developed by a Dutch scholar out of the earlier ABC language. Its advantages are its relatively advanced data structures and its support for object-oriented programming. The language itself is open-source and free and can be used by any programming enthusiast. Python has been continuously revised and reformed and has been widely used in scientific measurement and data analysis from its initial stage to the present [23]. Python’s interpreter has a certain extensibility through which it can be combined with the traditional compiled languages C and C++ [24]. Owing to this ease of extension, Python has realized “package” docking with a variety of scientific computing software [25]. For example, the well-known computer vision library OpenCV, the 3D visualization library VTK, the medical image processing library ITK, and other software can connect directly with Python programs. After this direct connection to package data, the Python program can import modules and match data in various fields through its efficient interpretation and compilation functions [26] and analyze and calculate the data results through its unique architecture. In the field of emotion detection, Python programming has gradually been used in the detection and analysis of human emotion [27]. The collected literature shows that the Python program architecture has been used in many fields at home and abroad to capture target facial emotions and analyze emotional factors [28]. For example, in China, several researchers from the Modern College of Shanxi University of Science and Technology realized emotion detection of college students during teaching through a Python client. In that study [29], a camera captures the face picture and transmits the data to the main program; the main program identifies and compares the received facial expressions against an imported, learned training set, thereby judging each student’s real-time emotional state. In the article, the authors go through graphic acquisition, face recognition, expression recognition, and the download and installation of the convolutional neural network module library [30]. The research in this paper is based on writing Python code that captures the facial features in human pictures and judges the emotional characteristics of the pictured persons through facial feature points. This article first uses the Python main program to write code that forms an algorithm and tests its recognition performance in machine learning, as well as the relativity of the sample selection process against the sample training set data under artificial intelligence [31]. Second, the new algorithm is compiled according to the actual detection method, the algorithms are compared under different conditions, the most effective detection method is found, and preliminary data analysis is conducted.
Third, through the optimization of the detection results at the current stage, a more accurate Algorithm 3 is formed; the algorithm output and the analysis of the detection results are performed again, and an overall summary of the detection results is carried out. The overall process is shown in Figure 3.

2.2. Problem Statement

Based on our understanding of the causes of human emotions and behaviors and the related detection methods, the algorithm analysis in the literature survey below identifies the specific contributions and open problems in this research area. The second half of the article discusses how to draw on the advantages of existing algorithms, improve the new algorithm in its details, and give the corresponding research results. Table 1 answers the 3 questions raised in this paper during the research process.

This paper builds on the main advantages and convenience of Python programs and summarizes the algorithmic advantages of emotion detection researchers in the literature of this field. According to the particular approach of this paper, multiple steps, including system function analysis, environment construction, model diagram setting, sample data and training set testing, and algorithm execution, are used to judge emotions in face images [32]. The specific implementation steps follow.

3. System Construction

3.1. Structural Design

The idea of this paper is to obtain static emotions from pictures of human emotion through the current mainstream Python program architecture [33]. The system acquires the face in the picture and judges emotion by extracting the emotional factors of the facial organs from the face data. Following the proposed system’s implementation process, this article adds a training set of sample data to the Python system and conducts data testing to find the right combination of the sample data and the system. In addition, the sample data and the target data to be detected are stored separately for testing [34]. Testing of the target data is done by comparing its sentiment factors against the machine-learned sample database [35]. The initial setting of the program is that if the comparison succeeds, the system displays the target’s emotion; if the comparison fails or is wrong, it returns the original image without displaying a prompt. The system detection concept diagram is shown in Figure 4.
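To make the comparison rule concrete, the following is a minimal sketch of the compare-and-display logic just described. The match similarity function, the 0.8 threshold, and the dictionary-based sample database are illustrative assumptions, not the paper’s actual implementation.

```python
from typing import Dict, List, Optional

def match(a: List[float], b: List[float]) -> float:
    # Toy similarity measure (an assumption): 1 minus the mean absolute difference.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect(target: List[float], sample_db: Dict[str, List[float]],
           threshold: float = 0.8) -> Optional[str]:  # threshold value is an assumption
    best_label, best_score = None, 0.0
    for label, features in sample_db.items():
        score = match(target, features)
        if score > best_score:
            best_label, best_score = label, score
    # Successful comparison: return the emotion; otherwise None (show original image).
    return best_label if best_score >= threshold else None

samples = {"happy": [0.9, 0.8, 0.7], "sad": [0.2, 0.3, 0.1]}
print(detect([0.85, 0.82, 0.68], samples))  # -> happy
```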

3.2. System Installation and Debugging

We choose Python 3.9.0 as the development environment for this research. According to the facial recognition rules, the corresponding module libraries need to be imported. The initially imported libraries include OpenCV, NumPy, dlib, and matplotlib. The installation process is to first install the Python 3.9.0 client on Windows and then install the remaining programs. In the CMD command window, the package manager pip is used: entering pip install downloads and installs the corresponding modules [36], and the system runs once the required emotion plugins and module programs are in place. Table 2 shows the installation process of some modules and the list of software and hardware.
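As a minimal sketch of this setup step, the commands and imports below assume the standard PyPI package names for the libraries mentioned above (opencv-python, numpy, dlib, matplotlib).

```python
# Hypothetical environment check for the module stack listed above.
# Install first from the CMD window, e.g.:
#   pip install opencv-python numpy dlib matplotlib
import cv2                # OpenCV: image I/O and face detection
import numpy as np        # NumPy: array handling for image data
import dlib               # dlib: facial landmark extraction
import matplotlib         # matplotlib: plotting detection results

print("OpenCV", cv2.__version__)
print("NumPy", np.__version__)
print("dlib", dlib.__version__)
print("matplotlib", matplotlib.__version__)
```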

3.3. Algorithm Implementation Process

The algorithm implementation of this paper adopts two modes, preliminary and final. In the preliminary mode, the artificial intelligence equipment performs machine learning after the installation and debugging of the system environment; the learning part is divided into machine algorithm output and direct program control. After machine learning, the device can perform data matching and analysis. When the device fails to recognize the target data, the unrecognized result is returned directly to the main program as a separate control path. In the final mode, after the initial algorithm is implemented successfully, a wearable device is used on the existing basis to measure the target’s built-in psychological data and combine it with the facial data of the traditional model [37]. Figure 5 and Table 3 describe the algorithm implementation process flow and the global algorithm execution process, respectively.

3.4. Process Analysis of Artificial Intelligence in Image Emotion Detection

The image below is one of the target files to be detected. This paper writes a Python-side program and makes the artificial intelligence capture the details of the face in the picture within a given area according to the algorithm’s output. This is divided into 3 steps: first, identify the range of facial features of the human faces in the picture; second, identify and match the emotional organs (eyes, mouth, nose) within the acquired face capture range; finally, match the identified emotion data against the training set data learned by the machine until the detection result is displayed. Figures 6 and 7 show the flow chart of artificial intelligence image emotion detection and an example of picture detection, respectively.
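As an illustration only, the following sketch walks through the same three steps with OpenCV’s bundled Haar cascade and dlib’s 68-point landmark model; the image path and the landmark model file are assumptions, and the final training-set match (step 3) is left as a stub since the paper’s matcher is defined by its own algorithms.

```python
import cv2
import dlib

img = cv2.imread("target.jpg")                       # hypothetical target picture
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 1: identify the facial range of the faces in the picture.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Step 2: within each captured face range, locate the emotional organs
# (eyes, mouth, nose) via facial landmark points.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file
for (x, y, w, h) in faces:
    shape = predictor(gray, dlib.rectangle(int(x), int(y), int(x + w), int(y + h)))
    landmarks = [(p.x, p.y) for p in shape.parts()]
    # Step 3 (stub): match the landmark-derived emotion data against the
    # machine-learned training set and display the detection result.
    print(f"{len(landmarks)} landmarks in face at {(x, y, w, h)}")
```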

3.5. Preliminary Algorithm Output Analysis of Artificial Intelligence in Machine Learning Detection
3.5.1. Detection Algorithm Design

In this paper, we perform static recognition of the emotions expressed in target images. The overall process is implemented in the Python main program: the target data is compared and identified against the sample database, and finally an emotional result is formed. The following is the detection process flow chart and the preliminary simulation algorithm. After subsequent machine learning, the sample training set is matched against the images, and the emotion result is displayed by matching the organs obtained from the target facial expressions. The mathematical function is shown in Figure 8.

In the preliminary algorithm conception, the standard value of the original sample is defined as 1, and the target score is the direct difference between the actual result and the standard value [38]. The smaller the deviation, the closer the calculated value is to 1; otherwise, it approaches 0 [39].
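Read as code, the conception amounts to the small sketch below, a toy formulation under the stated assumption that the standard value is 1; the paper’s exact function is the one shown in Figure 8.

```python
# Toy sketch: score = 1 - |actual - standard|, clamped at 0.
def preliminary_score(actual: float, standard: float = 1.0) -> float:
    deviation = abs(actual - standard)
    return max(0.0, 1.0 - deviation)

print(preliminary_score(0.95))  # 0.95 -> small deviation, close to 1
print(preliminary_score(0.30))  # 0.30 -> large deviation, closer to 0
```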

3.5.2. Analysis of Detection Algorithm Output

According to the concept and process of the preliminary algorithm, the next step is to capture the target’s face area using artificial intelligence. The specific intelligent detection process is tested on the sample training set data according to the above mathematical function. During testing, we define the result and take its value (including the algorithm’s output value) through the specific solution method of the function. We implemented an artificial intelligence device using machine learning in this research. In the machine learning process, the algorithm’s output after the final learning of the sample training data is shown in Figure 8.

4. Preliminary Testing of Python Samples

4.1. Sample Correlation Test
4.1.1. Type Selection

The system needs to select for factors such as gender, age, face size [40], shooting angle, and distance. The system samples select representative data from the various data samples. The selected samples are used in the subsequent training data for data comparison and training set testing and will be updated later based on the actual detection results [41].

4.1.2. Quantity Selection

According to the needs of the sample faces, 4-5 pictures of each emotion type are selected as training set data for each gender. The number of samples selected this time is 24 [42].

4.1.3. Algorithms in Sample Testing

After integrating and screening the human emotion target data, the algorithmic features of emotion research in related fields are combined. This paper uses the following algorithm to carry out a preliminary correlation analysis of the sample data [43]. The specific algorithm is constructed as follows (where E stands for emotion; subsequent algorithm improvements append 1, 2, … after emotion).

Figure 9 is the algorithm output of the above algorithm in AI machine learning.

The idea of this algorithm is to measure the reliability of the selected sample data by finding the standard deviation of the selected targets in the existing recognition database. The parameters appearing in this algorithm are E, x_i, x̄, n, and i [44]. The operator symbols are the square root symbol and the upper- and lower-limit summation operator. In this algorithm [45], E represents the final standard deviation of emotion detection, x_i is the i-th sample picture in the detection data, x̄ is their mean, n is the total number of selected sample reference data, and i indexes the target detection data, tested continuously from 1; the i-th number tested is the current sample position [46].
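For illustration, a sketch of this correlation measure as a plain standard deviation over the per-sample match values is given below; the match values are hypothetical, and the paper’s exact formula is the one shown in its figure.

```python
import math

def emotion_std(samples: list) -> float:
    """Standard deviation E over the n sample match values x_1..x_n."""
    n = len(samples)
    mean = sum(samples) / n                       # x-bar
    return math.sqrt(sum((x - mean) ** 2 for x in samples) / n)

matches = [0.82, 0.79, 0.88, 0.75, 0.86]          # hypothetical per-sample values
print(round(emotion_std(matches), 4))             # small E -> consistent, reliable samples
```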

4.2. Analysis of the Advantages of Sample Algorithms

This algorithm samples each item separately. That is, during the continuous testing of each sample, the samples are independent, and the execution order of the samples does not affect the testing of other samples. The final result takes the square root form, which further narrows the correlation between samples. The test results are shown in Figure 10 and Table 4.

The test results show that, when the selected samples are run through the program, the overall mean value is relatively concentrated at 0.82125, close to the reasonable standard value. From the difference values in the test comparison [47], there is still a certain gap between the maximum and minimum matches, with a specific difference of 0.13. Differences between samples therefore still exist, and the algorithm will be improved in subsequent tests [48].

5. Analysis of Target Data Detection Algorithm

5.1. Comparative Analysis of Facial Algorithms under Artificial Intelligence

The improved new algorithm is tested on unified data to compare the difference between the new algorithm and the existing algorithms. The final difference results are obtained through a three-part comparison (existing algorithm, improved algorithm, and comparison of the two) [49]. The specific algorithms are given, analyzed, and tested, and finally the three comparison results are presented. Algorithm 1, created first, is

Analysis of Algorithm 1 is as follows: the basis for this system’s creation lies in data prediction from face pictures, and the algorithm’s scope of use focuses on capturing the face range. The idea of Algorithm 1 is to test the image target data of one type, in turn, against the corresponding type in the sample trainer [50]. Finally, after calculating the multiple operation values, the final matching value is divided by n (up to the n-th match) and the square root is taken to obtain the final emotional data. Let us now try the second algorithm:

Analysis of Algorithm 2 is as follows: the construction concept of this algorithm is to square and sum the differences between the i-th target data and the matching data in the training set and finally divide by n to obtain the second emotional data judgment. Figures 11 and 12 show the emotion detection of the target image data by the two algorithms and compare the data with the average value of the final result [51]. The specific values are shown in Tables 5 and 6.
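Since the exact formulas of Algorithms 1 and 2 are given only as figures, the following sketch simply mirrors the prose descriptions: Algorithm 1 divides the accumulated matching value by n and takes the square root, while Algorithm 2 averages the squared differences between target and training data. All input values are hypothetical.

```python
import math

def algorithm1(match_values: list) -> float:
    # Accumulate the matching values, divide by n, then take the square root.
    n = len(match_values)
    return math.sqrt(sum(match_values) / n)

def algorithm2(targets: list, training: list) -> float:
    # Square and sum the i-th target/training differences, then divide by n.
    n = len(targets)
    return sum((t - s) ** 2 for t, s in zip(targets, training)) / n

vals = [0.45, 0.48, 0.44]
print(round(algorithm1(vals), 3))                      # ~0.676, inside the 0.65-0.7 range noted below
print(round(algorithm2([0.9, 0.7], [0.8, 0.75]), 3))   # 0.006
```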

The mathematical functions of Algorithms 1 and 2 are subjected to artificial intelligence for target learning. The specific outputs of the two algorithms are shown in the following Figures 11 and 12.

The preceding content is the algorithm output in machine learning through the two emotion modules, where the letter E stands for emotion and is used to distinguish the representations. The value of TE is output by the artificial intelligence system algorithm; we can see that the output is performed by taking the limit. The second method is an updated algorithm whose middle part is the same as that of Algorithm 1, with the final calculation performed using an integral summation. According to the limits in machine learning, the output value range of Algorithm 1 lies between 0 and 1, and the actual calculated values lie between 0.65 and 0.7. The following are the device detection results using Algorithms 1 and 2, which we analyze from Tables 5 and 6.

5.2. Analysis of the Results after the Output of the Target Detection Algorithm

Based on the compilation of the algorithms and their outputs after machine learning, we conducted a test comparison of the two different algorithms. The results of the two algorithms are analyzed in detail through the following tables and figures. Tables 5 and 6 contain the specific detection results of Algorithms 1 and 2. Figure 13 compares the difference between the detection results of Algorithms 1 and 2 and the detection average value. The final statistical analyses of the data for Algorithms 1 and 2 are shown in Table 7.

The data in this part take the form of a scatter plot. Taking the detection results of Algorithms 1 and 2 and the number of samples as references, the advantages of the algorithms are compared. The blue dots represent Algorithm 1, and the red dots represent Algorithm 2. Judging from the average value, the latter matches the training set sample data more authoritatively, and its overall average detection and recognition rate is higher, mainly manifested in the floating index of the up-and-down detection [52]. The advantage of Algorithm 2 is that not only is the recognition rate of its overall detection results higher than that of Algorithm 1 but its overall detection rate is also more concentrated: in the middle part of the chart, the red dots are more compact than the blue dots. The blue dots are discontinuous and interrupted in some parts [53]; the recognition rate of some emotions is higher, while the overall recognition rate and range are slightly lower. In the Algorithm 2 scatter plot, we can see that its algorithm output is significantly improved compared to that of Algorithm 1. In the statistical Table 7, we can also see that the value for the emotion2 method is negative, indicating that some differences remain in the design of that algorithm, causing the value to fall below 0, while the value for emotion3 is above 5.51, a positive distribution, indicating that the target data trend positively in the test identification against the sample training set. The prob values of the two are both within 5%, which shows that the probability of erroneous detection by the two algorithms is relatively small [54].

5.3. Algorithm Improvement Test

According to the data analysis results in the previous section, we continue to optimize the calculation process of Algorithm 2 to gradually reduce the final detection difference [55]. For the new Algorithm 3, on top of the original output we take a weighted square summation, then obtain logarithmic values and output them, which further reduces cases where the target emotion differs too much during detection. At the same time, the mechanism’s detection rate of target facial features is improved [56]. The specific improved mathematical function and algorithm output are shown in Figure 14.

The design concept of the third algorithm is that the earlier sum-and-average values show too large a variance relative to the actual results. In addition, we can continue to make use of the logarithm through the relationship between the sample data and the target detection data [57]: the number of sample individuals identified in the sample training data is used as the base of the logarithm, and the existing average value is used as the argument of the logarithm. Adding the logarithm shrinks the value calculated by the formula between the target data and the sample data [58]. Compared with the existing algorithm, the resulting emotional data reflect the difference between the sample and the target data more accurately [59]. Next, we use the improved Algorithm 3 to perform the next round of result testing and analysis while enlarging the sample training set. The detection output of Algorithm 3 is shown in Figure 15.
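A sketch of this construction under the stated reading is given below: the weighted sum of the match values is taken first, the logarithm is applied with the number of identified training samples as its base and the averaged value as its argument, and the square root is taken last. The uniform weights and percentage-scale match values are assumptions; the paper’s exact function is the one in Figure 14.

```python
import math
from typing import List, Optional

def algorithm3(match_values: List[float], n_samples: int,
               weights: Optional[List[float]] = None) -> float:
    if weights is None:                                  # assumption: uniform weights
        weights = [1.0 / len(match_values)] * len(match_values)
    weighted = sum(w * v for w, v in zip(weights, match_values))
    # Logarithm with the sample count as base compresses the spread between
    # target and sample data; the square root is taken last.
    return math.sqrt(math.log(weighted, n_samples))

rates = [75.0, 68.0, 82.0]   # hypothetical match rates on a percentage scale
print(round(algorithm3(rates, n_samples=24), 3))  # ~1.166
```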

Figure 16 shows the output results of Algorithm 3 together with the previous Algorithms 1 and 2; the recognition rates of the three algorithms for the samples are averaged for comparison in the scatter plot. As shown, the average recognition value of the three algorithms is plotted as the reference on the x-axis, the recognition rate of Algorithm 3 itself serves as the upper comparison value on the y-axis, and the left side shows the sample sequence numbers [60]. The figure shows that the detection results of Algorithm 3 are more concentrated than those of the previous Algorithms 1 and 2, and the band of red dots in the picture is narrower, indicating that the detection rate is relatively even and fluctuates little from sample to sample. In the earlier detection process, we found that Algorithm 1 has the largest detection range, but the deviation in detection rate for some emotional characteristics after applying the mathematical function is also obvious. The frequency chart shows that, during testing, the average distribution frequency of all target image data at the current stage, compared with the sample data in the training set, fluctuates little [61]: the highest is around 75, the lowest around 62, a small up-and-down range. Algorithm 1 fluctuates the most, with a maximum of 87 and a minimum of 51; Algorithm 2 has a maximum of 72 and a minimum of about 58. The target identification frequency of Algorithm 3 on the improved data comparison of the sample data is thus relatively concentrated [62].
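The quoted ranges can be checked directly; the snippet below simply recomputes the fluctuation spans from the maxima and minima read off the frequency chart.

```python
# Max/min detection frequencies as read from the frequency chart above.
ranges = {"Algorithm 1": (87, 51), "Algorithm 2": (72, 58), "Algorithm 3": (75, 62)}
for name, (hi, lo) in ranges.items():
    print(f"{name}: fluctuation span = {hi - lo}")
# Algorithm 3 has the narrowest span (13), i.e., the most stable detection frequency.
```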

5.4. Analysis of Test Results

According to the output method of Algorithm 3 above, the artificial intelligence mechanism performs a preliminary inspection of the facial expressions in the image under the Python architecture. From the outputs of the three algorithms, it can be seen that some image emotions under Algorithms 1 and 2 could not be recognized or failed to be detected. Using the logarithmic calculation on the target in the improved Algorithm 3, the device can successfully identify the facial expression in the picture after machine learning. Figure 17 shows that Algorithm 3 successfully obtained the facial emotion of a girl in a picture; her expression shows an emotional state of sadness. After compiling the Python code, the detection result output by the artificial intelligence under Algorithm 3 is also “sad”. This shows that, using Algorithm 3, the device obtains a correct detection from the training data and completes the target detection; the test result is consistent with the actual emotion.

6. Summary of Preliminary Experiments

6.1. Analysis of Results

In the process of selecting existing algorithms for the experiments, a small amount of sample data was selected according to the sample selection and classification principles adopted in the previous stage of this paper. After preliminary debugging of the system’s modules and libraries and use of the initial training set, most of the preliminary detection of human emotion and behavior data was completed. Because the current stage is the conception stage of this paper, the specific artificial intelligence technology and wearable technology to be used have not yet been deployed, and the detection accuracy is still far from the ideal state [63]. At this stage, the emotional sample data fall into the 8 categories of Table 8 below, and Table 9 gives the detection and recognition rates of the current mechanism.

The above table continues the data test on the 8 existing emotion types according to the distribution of emotions, combined with the column chart used above for centralized measurement; the final result is shown in Table 8. According to the table, after improving the original Algorithms 1 and 2 and keeping the most advantageous core architecture of the original algorithms, the final machine learning mechanism is significantly improved over the original algorithms in acquiring and matching the face data in the 8 images. First, the overall detection rate shows an obvious upward trend. Second, compared with Algorithms 1 and 2, the overall detection results are relatively even across each type of target data, and the overall recognition rate fluctuates around 70%.

6.2. Preliminary Experimental Data Analysis

The data in Table 9 summarize the methodological part through code writing, algorithm calling, and the output of the detection values of the successive stages [64–66]. The work is divided into five stages: machine learning, recognition ratio, sample testing, frequency fluctuation, and variance. The most obvious improvements of the algorithm are in the machine learning and sample testing stages; through the construction of the initial algorithm and the optimization of the final algorithm, the detection accuracy before and after exceeds 70%. The values of the other stages (recognition ratio, sample testing, and variance) also changed significantly during the experiment. Compared with Algorithm 1 in the initial stage, Algorithm 3 further reduces the variance in the detection process.

7. Conclusions and Future Work

This article is based on the architecture of the Python program; the system is installed by downloading and installing emotion detection modules such as OpenCV, NumPy, dlib, and matplotlib. After the sample training data tests, existing algorithms such as taking the limit, the weighted sum, and the square root are examined one by one, a final algorithm combining the square root and the logarithm is proposed, and machine learning is carried out for each to output the algorithms. Finally, the logarithmic algorithm achieves more accurate detection of the sample data and greatly improves the overall detection rate. The experimental data show that, in target emotion detection, the logarithmic algorithm effectively narrows the detection gap compared with the other algorithms and effectively improves the recognition rate of the artificial intelligence. A sufficient algorithmic framework is thus established for later algorithm research on human behavior.

A limitation of this work is that the current detection range covers only expression recognition in images. Although the output of the artificial intelligence machine learning and detection algorithm under the Python code has improved over the previous algorithms, the overall level still needs further improvement. In the next step of research, we will need to further expand the detection field and perform AI detection of facial emotions, which will require more familiarity with deep convolutional neural networks, facial location recognition, and discrete wavelet algorithms. When the conditions are ripe, we will gradually shift the target to wearable technology with built-in emotion recognition, which is also our future challenge.

In the future, while continuing to use this algorithm architecture, this research will seek more target sample data for the system so that it can learn more types of facial features through machine learning and further improve the detection effect. In addition, because human psychological factors must be obtained with the aid of auxiliary devices, future research on wearable technology will use wearable devices to obtain human psychological data such as heartbeat frequency. Through the comprehensive analysis of built-in emotions and facial features, the most accurate behavioral and emotional features of the target will be determined, realizing an integrated mechanism of artificial intelligence behavior analysis and emotion detection under wearable technology.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.