Abstract

With the exponential improvement in the integration capability of electronic hardware, digital images have become an indispensable information carrier, which has driven the development of image recognition, detection, and tracking technologies. At present, image-recognition-based assessment of standardized yoga standing three-dimensional movements remains largely unexplored, leaving considerable room for researchers. Traditional image recognition techniques suffer from missed detections and false detections that cannot be correctly identified and judged. In this paper, the ability to capture the action is improved by using an optimized network to locate the target of the yoga standing stereoscopic action. An optimized linear discriminant analysis (LDA) is then used to reduce the dimensionality of the captured image data, which improves the image recognition rate and reduces the image loss rate. Through the analysis of yoga standing three-dimensional movements, the model based on the improved algorithm is compared with the traditional model; the final image recognition accuracy is higher than that of the network model before the improvement, the image recognition error rate steadily approaches 5%, and the loss rate stays below 2%. With the optimized convolutional neural network, the image position can be captured accurately and the recognition rate is greatly improved, which can provide a reference for future research in other fields.

1. Introduction

With the advancement of science and technology, the ability of computers to process large amounts of data has been demonstrated, which has promoted the development of image recognition technology. Its core techniques have achieved fruitful results against the background of the vigorous development of computing and have been widely used in computer image processing, language processing, the Internet of Things, and other fields.

Through image recognition technology, this work provides standardized, digital assessment methods for yoga balance standing postures and technical guidance for personalized yoga asanas. It identifies the differences between practitioners of different levels in the practice of yoga asanas, points out the shortcomings of yoga beginners and the aspects that need strengthening, and provides a scientific basis for the development of yoga and its application in national fitness. Common applications of image recognition technology include personal photo classification, object recognition, and the detection and recognition of road scenes for intelligent driving. All of these show that image recognition technology will become increasingly important in people's daily lives.

To address the problem of image recognition accuracy, a target localization optimization network is used to fit the mapping relationship between the candidate frame and the predicted frame, thereby improving the localization accuracy of the target detection model. By searching an enlarged candidate frame area, multiple regions of interest are delineated and sent to the target localization optimization network. EMG signals are acquired by placing electrodes at the correct placement points on the target muscles of the human body. Using the collected data on yoga standing poses, the image targets are trained and tested with linear discriminant analysis by inputting the combined features of single and multiple regions.

With the development of the economy and the advancement of science and technology, neural network methods are used to analyze human behavior so that machines can perceive the state of the human body and the intention of its movements. Du proposed a streaming hardware accelerator for image detection using CNNs on Internet of Things devices. Through a unique filter factorization technique, the accelerator can support arbitrary convolution window sizes, thereby increasing throughput [1]. Zeng H proposed a multifeature fusion learning method for nonrigid 3D model retrieval based on convolutional neural networks. This method can learn useful discriminative information for the Heat Kernel Signature (HKS) descriptor and the Wave Kernel Signature (WKS) descriptor. In addition, to further improve the descriptive ability, a cross-connection layer is constructed to combine low-level features with high-level features [2]. Jung H Y introduced a new convolutional neural network (CNN) architecture for the fingerprint liveness detection problem. It provides a more powerful framework for network training and detection than previous methods. The proposed method uses a squared regression error for each receptive field without using fully connected layers [3]. Jafari introduced a scalable, low-power embedded deep convolutional neural network (DCNN) designed to classify multimodal time-series signals. The time-series signals generated by different sensor modalities with different sampling rates are first converted into images, and then the shared features in the images are automatically learned and classified using the DCNN [4]. Niioka constructed and applied a CNN algorithm for automatic identification of cell differentiation in the myogenic C2C12 cell line. Phase-contrast images of cultured C2C12 cells were prepared as the input dataset. During the differentiation of myoblasts into myotubes, the cells were classified by the CNN according to cell shape characteristics and the number of days of culture after induction of differentiation [5]. In an effort to reduce depression, Nompo reviewed articles retrieved from an electronic database. The keywords used were as follows: complementary and alternative therapies of yoga, yoga and depression, effects of yoga, and effects of yoga on mental health. The selected articles came from both qualitative and quantitative research, and the concept obtained in the results concerns the application of complementary yoga therapy for depression [6]. Observations by Babanov on 16 healthy volunteers showed that, under the observed condition, the difference from the usual vertical posture was manifested by a slight decrease in stability and a redistribution of leg muscle activity according to surface electromyography. A more pronounced role of proprioception in body balance is associated with clearer control of the balance of mixed poses in the exoskeleton [7]. Tang proposed a deep learning model to estimate the 3D pose of pedestrians from images. The model can simultaneously predict 16 joint landmarks and 14 pedestrian joint angles for each image with high accuracy; the average prediction errors are 0.54 pixels and 0.06° [8]. Au investigated the effect of exercise training on CALM in healthy men using a retrospective design. The findings suggest that CALM is resistant to transient changes in lifestyle factors, similar to wall thickness in other healthy populations [9]. These studies are instructive to a certain extent, but in some cases, the demonstration is not sufficient or accurate enough and can be further improved.

Because yoga can shape the body and release body and mind, it is loved by the public. With the rapid development of yoga, however, training standards are uneven. In order to standardize the yoga industry, convolutional neural networks are introduced into human action modeling and recognition from serialized signals. Aiming at the problem that the traditional convolutional neural network model cannot accurately locate and identify images, resulting in missed or false detections, the encoding of the feature map containing spatiotemporal information is optimized based on target location recognition and linear discriminant analysis, and human posture actions are analyzed and discriminated. Although this model can identify actions quickly and accurately, it still requires some wearable devices to collect data, a shortcoming we hope to improve in the future.

3. Constructing a Three-Dimensional Action Feature Model of Yoga Standing Based on Image Recognition

3.1. Structural Design of Target Positioning Optimization Network

Object detection is an important research area in the field of computer vision. It is a technique that correlates the two tasks of object classification and localization [10]. The following is an appropriate improvement plan according to the technical target detection process. Current object detection methods can be roughly divided into two steps, first recommending candidate regions, and then classifying the regions. The coordinate position of the recommended area largely determines the position of the final detected target. The overall framework of the target positioning optimization network model is shown in Figure 1. The optimization of the positioning results is achieved through the convolutional neural network, which is called the target positioning optimization network. The target detection model uses a selective search algorithm to recommend candidate regions. It then uses CNN to classify and identify regions for further optimization and adjustment.

This model consists of two convolutional layers, two fully connected layers, and parallel classification and regression layers. The output is five-dimensional: a score indicating whether the frame contains the target, plus the four position parameters of the frame [11]. Because the candidate frame may fit the object too tightly, the network should be able to predict object boundaries that the initial candidate frame does not fully contain; therefore, the candidate frame is expanded by a constant coefficient γ, and the enlarged area is used as the new search region R [12]. In theory, the candidate frame mentioned here can be the region generated by the selective search algorithm or the candidate frame recommended by the region proposal network (RPN).
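As a minimal sketch of this expansion step, the snippet below enlarges a candidate box around its center by a constant coefficient γ and clips the result to the image bounds. The value γ = 1.5 and the (x1, y1, x2, y2) box format are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def expand_candidate_box(box, gamma=1.5, img_w=None, img_h=None):
    """Expand a candidate box (x1, y1, x2, y2) by a constant coefficient gamma
    around its center to form the enlarged search region R, optionally clipped
    to the image bounds."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * gamma, (y2 - y1) * gamma
    rx1, ry1 = cx - w / 2.0, cy - h / 2.0
    rx2, ry2 = cx + w / 2.0, cy + h / 2.0
    if img_w is not None and img_h is not None:
        rx1, ry1 = max(0.0, rx1), max(0.0, ry1)
        rx2, ry2 = min(float(img_w), rx2), min(float(img_h), ry2)
    return np.array([rx1, ry1, rx2, ry2])

# Example: a 100x80 candidate box expanded with gamma = 1.5 in a 640x480 image
print(expand_candidate_box((50, 40, 150, 120), gamma=1.5, img_w=640, img_h=480))
```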

First, an image I of size w × h is input into a convolutional network, for example the ZF network or the VGG-16 network, which generates a feature map, denoted FI, after the convolutional layer conv5-3. The divided regions of interest are sent, together with the feature map extracted by the convolutional network, to the region-of-interest pooling layer and finally classified. The candidate region B obtained by the region recommendation algorithm is then expanded into the search region R, and R is projected onto FI. A feature map of fixed size is obtained by cropping and pooling through the RoI pooling layer; the pooling layer represents the image with higher-level features so that the features become invariant to small shifts, which also prevents overfitting to a certain extent. To facilitate comparison with the features in Faster R–CNN, the size of the feature map is fixed to 6 × 6 in the experiments. Next, this fixed-size feature map is fed into the target localization optimization network designed in this paper. Features are learned through two convolutional layers with linear rectification units, giving the feature map FR for the region R. After two fully connected layers, the feature vector is sent to a multitask classifier, as in Faster R–CNN, which judges the region category and regresses the frame coordinates [13]. The network framework takes Faster R–CNN as the basis for improvement, and the region pooling layer is used to locate and identify the selected target in the feature region.
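The following PyTorch sketch illustrates a head of this kind: two convolutional layers with ReLU, two fully connected layers, and parallel classification and regression outputs operating on 6 × 6 RoI-pooled features. The 512 input channels (matching VGG-16 conv5-3) and the intermediate layer widths are assumptions made for illustration rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class LocalizationRefinementHead(nn.Module):
    """Sketch of a target localization optimization head: two convolutional
    layers with ReLU, two fully connected layers, and parallel classification
    and regression outputs. Channel and layer sizes are illustrative."""
    def __init__(self, in_channels=512, roi_size=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * roi_size * roi_size, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 1024),
            nn.ReLU(inplace=True),
        )
        self.cls_head = nn.Linear(1024, 2)   # target / non-target score
        self.reg_head = nn.Linear(1024, 4)   # refined box parameters

    def forward(self, roi_feat):             # roi_feat: (N, C, 6, 6)
        x = self.fc(self.features(roi_feat))
        return self.cls_head(x), self.reg_head(x)

# A batch of 6x6 RoI-pooled feature maps from a 512-channel backbone
scores, deltas = LocalizationRefinementHead()(torch.randn(8, 512, 6, 6))
```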

During the target detection process, there may be multiple targets to be detected within the same detection frame. For example, the predicted bounding boxes of objects occupying a larger area in the image and the bounding boxes of smaller objects greatly overlap [14]. A typical example is that in a training course scenario, it is desirable to detect all participants. However, as shown in Figure 2, the detection frame of the front row personnel may include the rear row personnel, which will cause missed detection and false detection.

In order to solve the problem of missed detection and false detection, the target detection framework of region segmentation and object localization is used to input different regions for the localization optimization model and classification task [15]. Aiming at the positioning accuracy of target detection in the training course scene, a target positioning optimization network model is proposed to improve the accuracy of target positioning results [16].

A simple linear regression method is used in R–CNN to improve the localization effect. The mapping relationship between the recommended bounding box and the reference bounding box is fitted by a linear equation [17]. R–CNN follows the idea of traditional target detection and also uses extraction boxes: it performs object detection in four steps of candidate box extraction, feature extraction for each box, image classification, and nonmaximum suppression. Assume that the coordinate vector of the recommended border is $T = (t_x, t_y, t_w, t_h)$, where $t_x$, $t_y$, $t_w$, and $t_h$ are the horizontal and vertical coordinates of the center of the border and its width and height, respectively, and that the coordinates of the reference frame representing the actual position of the target are $G = (g_x, g_y, g_w, g_h)$. In the CNN, the relationship between the input recommended bounding box $T$ and the bounding box $F = (f_x, f_y, f_w, f_h)$ predicted by the network is assumed to be

$$f_x = t_w\, d_x(T) + t_x,\qquad f_y = t_h\, d_y(T) + t_y,\qquad f_w = t_w \exp\bigl(d_w(T)\bigr),\qquad f_h = t_h \exp\bigl(d_h(T)\bigr).$$

Among them, $d_*(T) = \mathbf{w}_*^{\mathrm{T}} \phi_5(T)$ is a linear function of the fifth-layer pooling feature $\phi_5(T)$ of the recommendation region $T$, where $*$ denotes $x$, $y$, $w$, or $h$. The vectors $\mathbf{w}_*$ are parameters that can be learned by optimizing the regularized least-squares objective function:

$$\mathbf{w}_* = \arg\min_{\hat{\mathbf{w}}_*} \sum_i \Bigl(v_*^{\,i} - \hat{\mathbf{w}}_*^{\mathrm{T}} \phi_5\bigl(T^{\,i}\bigr)\Bigr)^2 + \lambda \bigl\lVert \hat{\mathbf{w}}_* \bigr\rVert^2 .$$

Among them, $i$ represents the sequence number of the recommended frame. The regression target is defined by the following formula:

$$v_x = \frac{g_x - t_x}{t_w},\qquad v_y = \frac{g_y - t_y}{t_h},\qquad v_w = \log\frac{g_w}{t_w},\qquad v_h = \log\frac{g_h}{t_h}.$$

Four sets of coordinate parameters are established according to the relationship between the recommended frame and the anchor frame and the relationship between the reference frame and the anchor frame. The mapping relationship is similar to the one designed in R–CNN, so the loss function of the border is

$$L_{\mathrm{reg}}(t, t^{*}) = \sum_{* \in \{x, y, w, h\}} \mathrm{smooth}_{L_1}\bigl(t_{*} - t_{*}^{*}\bigr),\qquad \mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5\,x^2, & |x| < 1,\\ |x| - 0.5, & \text{otherwise}, \end{cases}$$

where $t$ and $t^{*}$ are the coordinate parameters of the recommended frame and the reference frame relative to the anchor frame, respectively.
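A small numerical sketch of this regression setup is given below: the targets are encoded from (center, width, height) box parameters as in the formulas above, and the border loss uses the smooth L1 form. The concrete box values are made up purely for illustration.

```python
import numpy as np

def bbox_regression_targets(proposal, reference):
    """Encode the offsets v = (v_x, v_y, v_w, v_h) between a proposal
    T = (t_x, t_y, t_w, t_h) and a reference box G = (g_x, g_y, g_w, g_h),
    both given as (center_x, center_y, width, height)."""
    tx, ty, tw, th = proposal
    gx, gy, gw, gh = reference
    return np.array([(gx - tx) / tw, (gy - ty) / th,
                     np.log(gw / tw), np.log(gh / th)])

def smooth_l1(x):
    """Smooth L1 loss used for the border regression branch."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5).sum()

v = bbox_regression_targets((100, 80, 60, 120), (110, 85, 64, 130))
print(v, smooth_l1(v - np.zeros(4)))   # loss of an all-zero prediction
```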

This regression calculation method used by the R–CNN family of algorithms has achieved good results, but its localization accuracy is still not good enough [18]. The method is a one-step direct regression, which may not fit the mapping relationship between the predicted bounding box and the reference bounding box well. In contrast, the approach in this paper fits the optimization process of candidate region localization by cascading an additional neural network.

3.2. Convolutional Neural Network Based on Weighted Fisher Criterion Feature Extraction

Linear Discriminant Analysis (LDA) is also known as the Fisher classifier. It is mainly used for dimensionality reduction of data. After dimensionality reduction, a good distribution of the original sample data is still maintained, which is convenient for further data classification [19]. Assuming the distribution of two types of data points in a two-dimensional coordinate system, the LDA algorithm is used to project all data points onto the best straight line. At this time, it can be found that in this new space or line, the distribution of data points still maintains the previous characteristics, while the amount of data is simplified.
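As a toy illustration of this dimensionality reduction, the snippet below projects two synthetic Gaussian point clouds onto the single discriminant direction found by classical LDA in scikit-learn; the data stand in for the captured image features and are not from the experiments in this paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy illustration: project 2-D points from two classes onto the single
# discriminant direction found by classical (unweighted) LDA.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, size=(50, 2)),
               rng.normal([3, 2], 0.5, size=(50, 2))])
y = np.repeat([0, 1], 50)

lda = LinearDiscriminantAnalysis(n_components=1)
X_proj = lda.fit_transform(X, y)   # 1-D projection that keeps class separation
print(X_proj.shape, lda.score(X, y))
```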

The goal of the traditional LDA algorithm is to obtain the optimal projection direction so that the sample centers of the different classes are separated from the common sample center as much as possible [20]. However, the optimal projection direction obtained by the traditional LDA algorithm exaggerates the distance between an edge class, which already has good separability, and the other classes. At the same time, it easily causes the remaining classes with small interclass spacing to be close to each other or even to overlap, thus affecting the final classification result. Suppose that four different classes of n-dimensional samples are projected into a two-dimensional space and that the fourth class is an edge class, because its samples lie far from the sample sets of the other three classes, as shown in Figure 3.

To avoid the above problem, in which large interclass distances and edge classes dominate the projection direction, the weighted Fisher criterion assigns different weights to class pairs with different interclass distances. The weighted between-class scatter is calculated as follows:

$$S_b = \sum_{i=1}^{c-1} \sum_{j=i+1}^{c} p_i\, p_j\, \omega(d_{ij})\, (\mu_i - \mu_j)(\mu_i - \mu_j)^{\mathrm{T}},$$

where $d_{ij}$ represents the Mahalanobis distance between samples of class $i$ and class $j$, $p_i$ and $p_j$ are the class priors, $\mu_i$ and $\mu_j$ are the class means, and the weighting function $\omega(\cdot)$ is built from the error function $\mathrm{erf}(\cdot)$. Its expression is as follows:

$$\omega(d_{ij}) = \frac{1}{2\, d_{ij}^{2}}\, \mathrm{erf}\!\left(\frac{d_{ij}}{2\sqrt{2}}\right).$$

In this way, the discriminant projection vector $w$ obtained based on the weighted Fisher criterion is solved from the following characteristic equation:

$$S_b\, w = \lambda\, S_w\, w,$$

where $S_w$ is the within-class scatter matrix and $\lambda$ is the generalized eigenvalue; the projection directions are the eigenvectors associated with the largest eigenvalues.

In order to facilitate calculation and save time, the simple Euclidean distance is used to replace the Mahalanobis distance in the weight function $\omega(d_{ij})$, namely,

$$d_{ij} = \lVert \mu_i - \mu_j \rVert_2 .$$
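The sketch below computes the weighted between-class scatter using the Euclidean distance between class means and the erf-based weight function reconstructed above. The class means, priors, and the presence of one distant "edge" class are synthetic values chosen only to illustrate how the weight suppresses the influence of that class.

```python
import numpy as np
from scipy.special import erf

def weighted_between_class_scatter(class_means, class_priors):
    """Weighted between-class scatter matrix: class pairs that are already far
    apart (large Euclidean distance between means) receive a small weight, so
    edge classes no longer dominate the projection direction."""
    d = class_means.shape[1]
    Sb = np.zeros((d, d))
    for i in range(len(class_means)):
        for j in range(i + 1, len(class_means)):
            diff = class_means[i] - class_means[j]
            dist = np.linalg.norm(diff)                        # Euclidean distance
            w = erf(dist / (2 * np.sqrt(2))) / (2 * dist ** 2) # weight function
            Sb += class_priors[i] * class_priors[j] * w * np.outer(diff, diff)
    return Sb

means = np.array([[0.0, 0.0], [1.0, 0.5], [1.2, 0.8], [8.0, 7.0]])  # 4th class is an edge class
print(weighted_between_class_scatter(means, np.full(4, 0.25)))
```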

The main purpose of the weighted Fisher criterion is to make samples of the same class as close as possible while ensuring that the error between the actual output of the sample and the sample label is minimized. The cost function based on the weighted Fisher criterion can therefore be expressed as

$$J = E + \alpha\, J_w - \beta\, J_b,$$

where $E$ is the mean-squared error between the actual network output and the sample label.

Among them, α and β represent coefficients that weight the terms based on the weighted Fisher criterion. To facilitate classification, the interclass distance metric function and the intraclass distance metric function are defined as

$$J_b = \sum_{i=1}^{c-1} \sum_{j=i+1}^{c} \omega(d_{ij})\, \lVert m_i - m_j \rVert^2, \qquad J_w = \sum_{i=1}^{c} \sum_{y_k \in \text{class } i} \lVert y_k - m_i \rVert^2 .$$

Among them, $m_i$ represents the sample mean of the $i$th class, $m_j$ represents the sample mean of the $j$th class, $\omega(d_{ij})$ represents the weight, and $y_k$ represents the actual output of the output layer.
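A possible PyTorch rendering of such a cost is sketched below: the mean-squared error is combined with an intraclass compactness term and a weighted interclass separation term computed from the network outputs. The coefficient values, the batch layout, and the exact weighting are illustrative assumptions, not the authors' implementation.

```python
import torch

def pair_weight(dist):
    """Weight that down-weights class pairs whose means are already far apart."""
    return torch.erf(dist / (2 * 2 ** 0.5)) / (2 * dist ** 2)

def weighted_fisher_cost(outputs, labels, targets, alpha=0.1, beta=0.1):
    """Sketch of the cost above: mean-squared error plus an intraclass
    compactness term (J_w) minus a weighted interclass separation term (J_b).
    alpha and beta are illustrative coefficients."""
    mse = torch.mean((outputs - targets) ** 2)
    classes = labels.unique()
    means = torch.stack([outputs[labels == c].mean(dim=0) for c in classes])
    # Intraclass compactness: distance of each output to its class mean.
    j_w = torch.mean(torch.stack(
        [((outputs[labels == c] - means[k]) ** 2).sum(dim=1).mean()
         for k, c in enumerate(classes)]))
    # Weighted interclass separation between class means.
    dists = torch.pdist(means)
    j_b = (pair_weight(dists) * dists ** 2).sum()
    return mse + alpha * j_w - beta * j_b

# Illustration with random outputs for three classes
out = torch.randn(12, 3, requires_grad=True)
lab = torch.tensor([0] * 4 + [1] * 4 + [2] * 4)
tgt = torch.nn.functional.one_hot(lab, 3).float()
print(weighted_fisher_cost(out, lab, tgt))
```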

When gradients of this cost function are computed, each iteration increases the interclass spacing between samples, and in that iteration, the weight update is adjusted in a direction more conducive to classification. The choice of projection direction is also very important. It can be seen from Figure 4 that the experimental results on the right are obviously better than those on the left. In the left figure, the distributions of the two classes of data after projection have an obvious overlapping area. Such an area is an error area, and the samples in this area have a high probability of being misclassified. When there is a significant difference in the amount of data between the two classes, the projection is dominated by the class with more samples, and the few samples of the minority class tend to receive the wrong classification results [21].

3.3. Standing Action Recognition Based on Convolutional Neural Network Modeling

The structure of convolutional neural networks is developing in a deeper and deeper direction. As the number of network layers increases, the feature information extracted by the network becomes more accurate, and the mined representations come closer to the essential characteristics of the data. Human action modeling and recognition methods oriented to image data can be divided into methods based on RGB color images, methods based on depth images, methods based on skeleton data, and methods based on multimodal data fusion. Based on this type of visual data, methods for human action recognition have developed along with machine learning methods [22].

This paper summarizes action sequence modeling and recognition based on the detection of human skeleton joint points. The general process is as follows. First, the human skeleton joint points are detected, and then feature extraction is performed. A bag-of-words (visual vocabulary) model is then used to model the serialized action features. Finally, a conventional machine learning method is used for identification, as shown in Figure 5.
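This pipeline can be sketched as follows, under assumptions the paper does not spell out: pairwise joint distances as the per-frame feature, a k-means codebook as the vocabulary model, and an SVM as the conventional classifier. The skeleton sequences and labels below are random stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def joint_frame_features(joints):
    """Per-frame feature: pairwise distances between skeleton joints
    (joints: (n_joints, 3) array of 3-D coordinates)."""
    diffs = joints[:, None, :] - joints[None, :, :]
    iu = np.triu_indices(len(joints), k=1)
    return np.linalg.norm(diffs, axis=-1)[iu]

def bag_of_words_histogram(frame_features, codebook):
    """Quantize frame features against a learned codebook and return the
    normalized word histogram describing the whole action sequence."""
    words = codebook.predict(frame_features)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Illustrative flow on random data: 40 sequences x 30 frames x 20 joints
rng = np.random.default_rng(0)
sequences = rng.normal(size=(40, 30, 20, 3))
labels = np.array([0, 1] * 20)
frames = np.array([[joint_frame_features(f) for f in seq] for seq in sequences])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    frames.reshape(-1, frames.shape[-1]))
X = np.array([bag_of_words_histogram(seq, codebook) for seq in frames])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```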

With the popularity of wearable devices such as mobile phones and smart bracelets, it is very convenient to use their sensors, such as acceleration sensors, gyroscopes, and inertial sensors, to obtain human motion data. This makes it possible to use these motion data for human gait recognition, which has broad application prospects [23]. The main purpose of image recognition here is to explore the kinematics and dynamics of yoga standing balance poses. Practitioners need to concentrate highly and have good muscle strength and coordination during yoga standing balance asanas in order to complete them with quality. Therefore, according to the step-by-step principle of yoga asana practice and expert opinions, after screening this type of yoga asana, the standing balance postures that move from two-foot support to single-foot support were chosen, and the tree pose and the warrior third pose were selected as the tested asana movements. The specific names and action characteristics of each asana are as follows:
(1) Tree Pose. The practitioner prepares in mountain pose, bends the right leg, and brings the heel of the right foot close to the root of the left thigh, resting the sole of the foot against the left thigh with the toes pointing down while balancing on the left leg. After the palms are joined, the arms are straightened above the head; the pose is held stably for 2–3 breaths, then the hands return to the sides of the body, the right leg is straightened, and the practitioner returns to the ready posture, standing in mountain pose. After resting for 2–3 breaths, the practitioner switches to the opposite side and repeats the action.
(2) Warrior Third Pose. The practitioner prepares in mountain pose, slowly raises both hands above the head through the sides of the body, swings the left leg and extends the toes backward, and keeps the right leg supported and upright. The trunk then slowly bends forward until the upper body is parallel to the ground and at a right angle to the supporting leg. The arms are straightened forward with the palms together so that the torso, arms, and swinging leg are in one line parallel to the ground, with the gaze directed downward. Finally, the pose is held for 2–3 breaths, the practitioner slowly returns to mountain pose, rests for 2–3 breaths, then switches to the opposite side and repeats the action.

4. Yoga Standing Pose Based on Convolutional Neural Network

4.1. Preparations for the Yoga Standing Three-Dimensional Experiment

According to existing research on yoga practitioners (instructors), Chinese yoga instructors are characteristically female and young, with teaching experience ranging from 1 to 10 years. As far as yoga practitioners are concerned, female participants also make up a clear majority.

Based on the above facts, the subject selection criteria are as follows. (1) Age is between 18 and 35 years. (2) According to the research purpose, teaching experience is divided into two groups: 3 months to 1 year (junior coach) and 5–10 years (senior coach). (3) Only female coaches are included; male subjects are excluded. (4) Subjects have had no physical injury in the past year, are in good physical condition and function, and have practiced or taught yoga for more than three hours per week. (5) Subjects have no background in dance, martial arts, or professional sports training. (6) According to existing sports biomechanics research, the minimum sample size is 8 people in each group, 16 people in total.

According to the inclusion and exclusion criteria, qualified female coaches were recruited, mainly through the Internet, clubs, yoga studios, and other channels. The summary of subject recruitment, exclusion, and inclusion is as follows: 30 subjects were recruited initially. After interviews and investigation, 5 people had backgrounds in dance, martial arts, or professional athletics. During the test period, one person was temporarily unable to participate because of childcare, 2 had a cold and fever, 2 were temporarily unable to participate due to family matters, and 1 withdrew from the experimental test without giving a reason. The total number of exclusions and withdrawals was 11. The data of the remaining participants are complete, including 9 junior coaches and 10 senior coaches, meeting the expected sample size of the experiment. The basic information of the experimental subjects is shown in Table 1.

4.2. Specific Operation Process of the Experiment

Motion capture was based on passive optical motion capture technology, which first tracks the three-dimensional positions of human body landmarks with charge-coupled device (CCD) cameras. The 3D position data are then digitized, denoised, and estimated by close-range photogrammetric analysis using the geometric properties of the central projection observed by multiple cameras. Surface EMG was collected with a surface electromyography system at a sampling frequency of 1000 Hz. With ME6000 data acquisition and Megawin data processing and analysis, supported by biomechanical analysis software, the system can satisfy the kinematic and dynamic biomechanical analysis of yoga standing three-dimensional motion data [24].
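The paper processes the EMG with the ME6000/Megawin toolchain; purely as an illustration of what such conditioning typically involves, the sketch below band-pass filters a raw 1000 Hz surface-EMG signal, rectifies it, and extracts a moving RMS envelope. The filter band and window length are generic defaults, not settings reported in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_rms_envelope(raw, fs=1000, band=(20, 450), win_ms=100):
    """Generic surface-EMG conditioning: band-pass filter the raw signal,
    full-wave rectify it, and compute a moving RMS envelope.
    fs matches the 1000 Hz sampling frequency used in the experiment;
    the band limits and window length are illustrative defaults."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, raw)
    rectified = np.abs(filtered)
    win = int(fs * win_ms / 1000)
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(rectified ** 2, kernel, mode="same"))

envelope = emg_rms_envelope(np.random.randn(5000))   # 5 s of simulated raw EMG
print(envelope.shape)
```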

4.2.1. Preliminary Preparation

Before the formal experiment, a preliminary experiment was conducted to estimate the total test time per person, promote cooperation among experimenters, check the experimental plan, and optimize the experimental process. One week before the formal experiment, all subjects were notified of the specific time of this part of the experiment and told not to perform high-intensity training within that week. The EMG test system was checked and the experimental test plan was established. On the day of the formal experiment, all equipment was calibrated again, including the motion capture system and the surface electromyography test system. A synchronizer was installed, the distances between the lenses of the motion capture system and the lens angles were set, and a calibration frame and a calibration rod were used to calibrate the experimental space in three dimensions, as shown in Figure 6.

4.2.2. Subjects Warm-Up

Subjects wore sports bras and shorts and performed 10–15 minutes of dynamic warm-up and 5–10 minutes of static stretching indoors. The experimenter explained the essentials and purpose of the movements, and the specific warm-up time was based mainly on the subjects' own feelings.

4.2.3. Placement of EMG Electrodes
(1) Locating the electrodes: the correct electrode placement point for each target muscle is found.
(2) Skin treatment: the area is shaved, hair is removed, the stratum corneum is lightly abraded, the skin is cleaned with alcohol, and it is allowed to dry naturally.
(3) Placing the electrodes: the orientation of each electrode is determined according to the direction of the muscle fibers at the placement point. The reference electrode is preferably placed on a bony area close to the recording electrode, and the interelectrode distance is 2–3 cm.
(4) Connection and fixation: the wires are connected and fixed securely without affecting the subject's movement, and the data acquisition unit is fixed in place.
(5) Signal detection: the subjects sit and stand naturally while the electrode signal strength and signal quality are checked, and any problems are resolved in a timely manner.
A total of 24 infrared-reflective marker balls are used to define the 10 segments of the whole body; the details of the attachment positions are shown in Figure 7.
4.2.4. Data Collection and Processing of Selected Yoga Poses for Testing: Tree Pose and Warrior Third Pose

According to the characteristics of the yoga asana practice process, the practice process is divided into stages, and the preparatory posture before entering each asana is stipulated.

According to the characteristics of practicing yoga asanas, the preparatory posture for each asana is prescribed. Combined with the results of the preexperiment, the joint angles of the swinging leg are studied and analyzed. Regarding the changes of the center of gravity along the X-axis (left-right direction) and the Y-axis (front-rear direction) in the warrior third pose, the degree of center-of-gravity shift of the practitioners in group A is relatively balanced regardless of whether the supporting leg is the left or the right. In contrast, the center of gravity of the practitioners in group B moved from the negative region of the X-axis to the positive region of the Y-axis and remained stable in the positive region of the X-axis.

4.3. Sports Biomechanics of Yoga Standing Stereoscopic Movement

For the changes of the center of gravity in the tree pose along the X-axis (left-right direction) and the Y-axis (front-rear direction), the results show that, compared with the practitioners of group B, the movement trajectory of the center of gravity in the X-Y plane and the offset of the center of gravity during the tree pose entering stage are relatively balanced for group A, and the degree of dispersion is small.

The maximum and minimum joint angles of the entering stage are determined; the difference between the two is the range of variation of the joint angle, that is, how much the joint angle changes while the practitioner performs the action [25]. The joint-angle results for the tree pose entering and standing-hold phases are taken at the last moment the subject enters the phase. The research results are shown in Tables 2–5.

The results showed that there was no significant difference in the range of motion of the hip joint between groups A and B (P > 0.05). The range of motion of the knee joint was significantly different (P < 0.05). The range of motion of the ankle joint was significantly different (P < 0.01).

The results showed that there was no significant difference in the range of motion of the hip joint between groups A and B (P > 0.05). The range of motion of the knee joint and ankle joint was significantly different (P < 0.01).

The results showed that there was no significant difference in the range of motion of the hip joint between groups A and B (P > 0.05). The range of motion of the knee joint was significantly different (P < 0.05). There was a significant difference in the range of motion of the ankle joint (P < 0.01).

The results showed that there was no significant difference in the range of motion of the hip joint between groups A and B (P > 0.05). The range of motion of knee joint and ankle joint was significantly different (P < 0.05).

Comparing the data in Tables 2 and 3 at the last moment of the entry stage, that is, entering the static holding stage, it was found that there was no significant difference in the range of motion of the hip joint of the left swing leg between groups A and B. The range of motion of the knee joint varied significantly. The range of motion of the ankle joint varied significantly.

Comparing the research results in Tables 4 and 5 for the maintenance phase, it can be seen that when the action leg is on the left side, there is no significant difference in the range of motion of the hip joint between groups A and B, whereas the ranges of motion of the knee joint and the ankle joint varied significantly. When the action leg was on the right side, there was likewise no significant difference in the range of motion of the hip joint between groups A and B, and the ranges of motion of the knee joint and ankle joint were significantly different.

4.4. Comparison between the Three-Dimensional Model of Yoga Standing Based on Convolutional Neural Network and the Traditional Model

Based on the yoga standing stereoscopic action images and the EMG electrode data collected in the experiment, the sample space optimized by target positioning is input into the convolutional neural network model as input data. This can greatly simplify the calculation, improve the calculation speed of the network, and further improve the accuracy [26]. The experimental flowchart is shown in Figure 8.

The original sample space is not processed, and the image quality of the samples is uneven; preprocessing improves the image quality. Starting from the nature of the image and treating the image as a whole, the difference in overall integrity before and after projection is compared, so that the LDA algorithm works better. The improved sample space is then tested with the traditional LDA algorithm and the optimized algorithm, respectively, and the experimental results are compared and analyzed.
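A schematic version of this comparison is sketched below: flattened, preprocessed image samples are reduced with LDA and then classified. Standard scikit-learn LDA stands in for the improved weighted variant, a logistic-regression classifier stands in for the convolutional network, and the data are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Stand-in data: 200 flattened, preprocessed pose images (32x32) in 2 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32 * 32))
y = rng.integers(0, 2, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Dimensionality reduction of the sample space before the recognition model.
# Standard LDA is used here in place of the improved weighted variant, and a
# logistic-regression classifier stands in for the convolutional network.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                      LogisticRegression())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```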

At the same time, the training time of the network model under the optimized algorithm is also reduced, which further lowers the training cost of the network model and greatly facilitates later fine-tuning and modification [27]. The optimized algorithm is tested against the traditional network model. The final image recognition accuracy is improved compared with the network model before the improvement: the image recognition error rate steadily approaches 5%, and the loss rate stays below 2%, as shown in Figure 9.

Through the above analysis of the experimental results, it can be seen that the optimization algorithm based on target recognition and LDA can map the data in the original sample space. The new sample space has a more distinct data distribution. This makes the data in the originally chaotic sample space preclassified, which greatly reduces the difficulty and time of model training. At the same time, the new evaluation strategy proposed in the optimization algorithm is mainly aimed at the optimization of the sample space of the image type. Therefore, in most image recognition tasks, this algorithm can optimize the sample space in advance before model training, which can greatly improve the training accuracy of the model.

5. Discussion

In order to effectively analyze the sports biomechanics of yoga standing three-dimensional movements, target localization and the Fisher classifier were further optimized within the convolutional neural network model for image recognition. Starting from the original data, the original data information is fully analyzed, the traditional LDA dimensionality reduction algorithm is improved, and a more accurate similarity measurement benchmark is adopted. On this benchmark, the problem is transformed into a repartitioning of the sample space: the projection matrix is obtained by optimizing the objective function, the projection of the original sample data in the new sample feature space is computed, and the new sample space is used as the input space of the network model for experimental analysis.

The yoga standing three-dimensional movements are collected through EMG electrodes, and the collected images are input into the optimized convolutional neural network to analyze the experimental simulation results. The optimized sample space has a better data distribution, which can effectively speed up the convergence of the network model and further reduce the training cost. Since the optimized sample feature space mines the deep information between the data, the accuracy of the model is also improved to a certain extent.

6. Conclusion

With the development of society, more and more people are beginning to regulate and release stress through yoga. There are many schools of yoga and asanas, each with its own unique characteristics. Introducing relevant imaging equipment into yoga asana research allows more experimental equipment to be synchronized to collect indicators, adds more data about yoga asanas, and supports more comprehensive and detailed research on them. It is hoped that more scholars will conduct deeper research on yoga so that yoga asanas can be better integrated with modern scientific theory. This makes the practitioner's theoretical understanding of yoga asanas more intuitive and clear and further improves the safety and effectiveness of yoga asana practice, providing a scientific theoretical basis for achieving the ultimate goals of yoga practice and injury prevention.

Data Availability

The data used to support the findings of the study can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.