Computer Vision and Image Processing in Mobile Devices
Feature Extraction Method of Snowboard Starting Action Using Vision Sensor Image Processing
There is substantial noise in snowboard starting action images, which lowers the accuracy of starting action feature extraction. We propose a snowboard starting action feature extraction method based on vision sensor image processing. First, the overlapping images are separated by laser fringe technology; after separation, the middle point of the image is taken as the feature point, and the interference factors are filtered by laser. Second, a three-dimensional model is established using visual sensing image technology, the action feature images are input in the order of recognition, and all actions are reconstructed and assembled to complete snowboard action feature extraction. The interference factors are filtered by laser, the middle part of the action image is extracted according to the common features of multiple images, and its definition is described. Movement change and moving distance are used to count the feature quantity and clarity. Finally, the edge recognition effect on snowboard starting action images and the action recognition effect under multiple complex images are taken as experimental indexes. The results show that the method extracts image edges well, with an extraction rate as high as 95% and an accuracy of 2.1%, the highest among the compared methods. In addition, under multiple complex images, the action feature recognition rate is also high, which shows that the studied method is more accurate in snowboard starting action feature extraction.
In order to improve the professionalism of snowboard athletes, it is necessary to analyze the athletes' starting action and side rotation turnover action and find deficiencies in time . When analyzing athletes' snowboard starting action, it is difficult to accurately analyze the features of such action with the naked eye alone [2, 3].
Physical training and special training for the starting stage are two necessary links in the training of snowboard team athletes. At present, snowboard physical fitness training is still dominated by barbells and other traditional instruments [4, 5]. However, because the counterweight's inertia participates in the exercise, the resistance during training is extremely unbalanced: the greater the speed, the more unbalanced the inertia, and the instantaneous impulse can easily increase the risk of sports injury. Building on feature extraction from traditional snowboard starting action images, this paper proposes a method to extract features of the snowboard starting action based on visual sensing image processing. A laser is then used to filter the interference factors, and the Hough method is adopted to obtain several parameters. Experiments conducted on two real data sets show that the proposed method outperforms other literature methods in stability, error rate, and accuracy.
The main contributions of this paper are as follows: (1) The overlapping images are segmented by laser fringe technology. After segmentation, the middle point of the image is taken as the feature point, and the interference factors are filtered by laser. (2) A three-dimensional model based on visual sensing image technology is proposed. (3) The three-dimensional model of visual sensing tracking is used to input the action feature images in the recognized order, reconstruct and assemble all the actions, carefully enlarge and capture the edge and center of the image, extract the feature points of the starting action, and complete the spatial construction of action features.
2. Related Works
Many scholars have carried out relevant research, and there are many existing methods to extract the starting action features of snowboard [6, 7]. Literature  proposed a human action image recognition method under high-intensity motion. This method segments the image in advance and then extracts the action features, so as to obtain a Gaussian distribution model of the action image background and realize the extraction of human actions, but its feature extraction clustering is poor. Literature  proposed a human motion recognition method based on 3D CNN, which encodes the motion information to recognize and extract the motion; however, the stability of its motion feature extraction is low. Literature  proposed a human skeleton behavior recognition method based on spatiotemporally weighted pose motion features. This method extracts the spatial information of different joint points of the human body to represent the action sequence and then realizes the extraction of motion features; however, its feature extraction error is large. Literature  studied a feature extraction method for football players' foul actions based on machine vision. Machine vision, a branch of artificial intelligence, uses machines instead of human eyes to judge and measure: it converts the target into image signals through machine vision products, transmits them to a dedicated image processing system, converts the color, brightness, and distribution of image pixels into digital signals, and extracts the target feature information by computation. However, the accuracy of its foul action feature extraction is low. Literature  proposed a data feature extraction method based on deep learning.
First, based on a flexibly configured convolution kernel, a branch structure is introduced into the neural network to extract deep features of the original data at multiple scales; then, the features obtained from each branch are fused and used as the input of the next convolution layer. However, this method takes a long time. Although these methods can extract the features of the snowboard starting action, they suffer from problems such as low accuracy. Visual sensing technology, one of the most commonly used image detection technologies, can segment the target into several subimages and then detect and segment them one by one . The technology therefore has a good recognition effect and is applied here to extract features of the snowboard starting action. Visual sensing image processing is used to recognize and detect the features, which provides a basis for improving the action standard of Chinese athletes.
In Literature , Yu et al. proposed a human motion capture method based on a single RGB-D sensor. This method refines motion tracking at the body part level through a semantic tracking loss, which improves tracking accuracy under severe occlusion and fast motion; however, its motion accuracy is low. In Literature , Maruyama et al. proposed a motion capture (MoCap) method based on inertial measurement units. This method needs no optical equipment and can measure motion posture even outdoors; however, its motion delay is too long. Literature  proposed using a series of precise physiological and biochemical instruments to monitor athletes' physical function, so as to master the changes in athletes' mechanical energy during sports training, scientifically organize and arrange snowboarding skill training on U-shaped (halfpipe) venues, and reasonably guide athletes' diet. This can effectively prevent sports injury, prolong sports careers, and improve sports ability. However, because this method loses motion information, it cannot accurately capture snowboarders' skiing motion.
3. Preprocessing of Extracting the Starting Action of Snowboard
In the preprocessing of snowboard starting action feature extraction, this paper mainly carries out image segmentation and image filtering to complete the feature extraction process.
3.1. Image Segmentation
Before extracting the action features, the laser fringe technique is used to separate the overlapping images; the technique is simple and practical, but segmentation is delicate, and a slight error will cause it to fail, so the algorithm is generally used for laser segmentation. Suppose there exists a space that includes multiple simulation points , and the segmentation points are and ; the segmentation form of a point is then

where is the coefficient and it meets the condition , so it can be seen from the area that the displacement of the two points and will affect the feature extraction: the shorter the distance, the more accurate the feature recognition. Considering the changes of the distance, the distance of the algorithm can be expressed as

where refers to the coordinate of the single feature and refers to the square matrix, i.e., . To obtain the gray area in the center of the image, it is necessary to correctly distinguish the area and side length of the gray part  and take the position of the center as the coordinate center, taken as , so the formula of the central position is

where is the gray index of the pixel point , and are points in two directions of the region, and is the coefficient. This method not only accurately identifies the central area of the image but is also undisturbed by other factors.
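The central-position formula above did not survive extraction, but the idea of locating the center of the gray region can be sketched with a standard intensity-weighted centroid. This pure-Python sketch is illustrative, not the paper's exact formula:

```python
def gray_centroid(image):
    """Intensity-weighted centroid of a grayscale image.

    `image` is a list of rows of pixel intensities. This is the standard
    centroid computation, used here as a stand-in for the paper's
    central-position formula.
    """
    total = cx = cy = 0.0
    for y, row in enumerate(image):
        for x, g in enumerate(row):
            total += g          # accumulate total intensity
            cx += x * g         # intensity-weighted column sum
            cy += y * g         # intensity-weighted row sum
    if total == 0:
        raise ValueError("image has no intensity")
    return cx / total, cy / total

# A bright 2x2 block in the lower-right corner pulls the centroid there.
img = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(gray_centroid(img))  # (2.5, 2.5)
```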
In the segmentation of overlapping images [18, 19], changes of the laser should be used to determine the number of overlapping images. All images are segmented into multiple modules along the direction of the initial image, and the matrix is established as

where refers to the fuzzy function, refers to the pixel value of all points, refers to the feature distribution sequence, refers to the action function, and refers to the action angle.
3.2. Image Filtering Processing
Before extracting the image feature points, we must first ensure that the extraction process is not disturbed by other factors, so as to maintain the stability of the sensing image. Therefore, the middle point of the image is taken as the feature point, the interference factors are filtered by laser [20, 21], and several parameters are obtained by the Hough method. Suppose there are two critical points in the graph, i.e., and ; the image's center coordinates are substituted into the line equation

where refers to the horizontal axis coordinate and represents the vertical axis coordinate. The size of the graphic structure area is the specific form of system tracking, and the formula of the area range is

where refers to the feature set of correlation and refers to the analytic factor. The above processing can be completed quickly, and the information left by the laser can be detected accurately, which provides a basis for starting action feature extraction.
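The Hough step can be sketched as a plain voting accumulator over the usual line parameterization rho = x·cos θ + y·sin θ. All parameter names and bin sizes below are illustrative assumptions, not values from the paper:

```python
import math

def hough_line_peak(points, n_theta=180, rho_step=1.0, rho_max=100.0):
    """Minimal Hough voting for the dominant line rho = x*cos(t) + y*sin(t)."""
    n_rho = int(2 * rho_max / rho_step) + 1
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + rho_max) / rho_step))  # shift rho into bin index
            if 0 <= r < n_rho:
                acc[(i, r)] = acc.get((i, r), 0) + 1    # one vote per point
    (ti, ri), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * ti / n_theta, ri * rho_step - rho_max, votes

# Collinear points on the vertical line x = 5 (theta = 0, rho = 5).
pts = [(5, y) for y in range(10)]
theta, rho, votes = hough_line_peak(pts)
print(votes)  # 10 — all ten points vote for the same line
```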
4. Action Feature Extraction Using Visual Sensing Image Processing
Based on the above processing, the starting action features of snowboard are extracted; building on the captured athlete action images, visual sensing and tracking technology is used to collect the features [22, 23]. Because the action is not standard, its presentation on the visual sensing image will be uneven, and there will be some errors in laser detection. When the camera is far away, less laser light is available, the laser transmitted to the camera cannot achieve the expected effect, so the image is unclear and impurities appear. The specified detection principle is "when the action feature points are not detected, the search will continue in a certain area" . If fuzzy pixel points are identified, they are impurity points in the image; such pixels are ignored, and identification continues within the range until completion. The removal process of inactive feature points is shown in Figure 1.
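The stated detection principle, keep searching in a neighborhood and skip fuzzy pixels, can be sketched as a windowed scan with an intensity threshold. The threshold and window size here are illustrative assumptions, not the paper's values:

```python
def collect_feature_points(image, seed, radius, min_intensity):
    """Search a square window around `seed`, skipping low-intensity
    ("fuzzy") pixels, which the section treats as impurity points.
    """
    sy, sx = seed
    h, w = len(image), len(image[0])
    points = []
    for y in range(max(0, sy - radius), min(h, sy + radius + 1)):
        for x in range(max(0, sx - radius), min(w, sx + radius + 1)):
            if image[y][x] >= min_intensity:   # keep sharp pixels only
                points.append((y, x))
    return points

img = [
    [0, 0, 0, 0],
    [0, 7, 2, 0],
    [0, 3, 9, 0],
    [0, 0, 0, 0],
]
# Only the two pixels at or above the threshold survive the scan.
print(collect_feature_points(img, seed=(1, 1), radius=1, min_intensity=5))
# [(1, 1), (2, 2)]
```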
On this basis, the contour is extracted. Generally, the center of the extracted action feature image is clear and the edge is fuzzy. Therefore, the length of the edge feature of the action image is

where refers to the length range of the image edge, is the action fitting parameter, is the speed, is the length, is the fitting index, is the rotation angle, and is the coefficient. The middle part of the action image is extracted using the common features of multiple images:

where is the distance, is the gray coefficient of the image, and is a natural number. Then, the image is resized into the mode, and image recognition is conducted block by block to get the action feature data :

where refers to the typical feature set of the image and refers to the fusion information of the edge gradient. The tracked action feature set conforms to the normal distribution of the image pixel set , so the intensity values of typical features of the image are

where refers to the balance factor, refers to the feature amount of the action, refers to the edge pixel coefficient of the dynamic image, represents fuzzy correlation, represents the fusion degree of feature extraction, and and form the 3D model of visual sensor tracking. The action feature images are input in the recognition order; all actions are reconstructed and assembled; the edge and center of the image are carefully enlarged and captured; the feature points of the starting action are extracted; and the spatial construction of action features is completed :

where and refer to the 3D model index, refers to time, and refers to the feature parameter.
Suppose there is a point in the image; the pixel value of the image at this moment can be expressed by , and the pixel difference of the action feature of the image can be expressed by :

where refers to the parameter distribution set. When the pixel value of the image's central area is smaller than the gray value, the image does not achieve the predicted feature extraction. The fuzzy pixel values are compared to mark the damaged points on the edge, and the marking range is expressed as

where refers to the image area. According to the marking results of the above formula, the feature quantity and clarity are counted with the change of action and the distance of movement .
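The pixel-difference idea can be illustrated with ordinary frame differencing between consecutive action images. This generic sketch stands in for the paper's difference formula, which did not survive extraction; the threshold is an illustrative assumption:

```python
def frame_difference(prev, curr, threshold):
    """Per-pixel absolute difference between consecutive frames,
    thresholded to mark changed (action) pixels as 1 and static pixels as 0.
    """
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

# One pixel brightens sharply between frames and is marked as action.
prev = [[10, 10], [10, 10]]
curr = [[10, 80], [10, 10]]
print(frame_difference(prev, curr, threshold=20))  # [[0, 1], [0, 0]]
```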
The edge feature threshold of the snowboard starting action at moment is , and the specific pixel value of the feature is
When , the total amount of features is , and is the variant. Suppose contains all the parameters of the four angles in the image, and the function is . The difference value of the two images is obtained, and they are drawn in imaging order; the resulting pixel difference is expressed as

where and are two different vectors. The expression of the maximum gray value in the sensing area is

where is the minimum gray value. Connecting the different points forms a waveform data map; the details of the four corners are marked, the final gray value is compared with the initial gray value, and the image is evaluated. The transformation displacement is
Finally, after the clear edge contour is obtained, the typical feature information is analyzed as :

where is the action flip angle, and are two different sensors, is the fitting index, and is the area.
After the feature information is obtained, find the target image by relying on the original image, compare it with the image according to its typical features, confirm the usable action image, and finally determine the end-point pixel features.
The image segmentation method is adopted to extract the starting action of snowboard. Noise reduction is applied to the collected action features, the correlation spectrum feature quantity of the action is extracted, and the variance fusion model is obtained. The signal sampling node is defined as shown in Figure 2. Input: from the test and training sample sets of covariance array data, the clutter scatterer defining snowboard action features is obtained. Output: the fusion result meeting the minimization target parameters.
The set of the first element is obtained; that is, the clutter parameters of the action characteristics are as follows: (1) the correlation spectrum's feature quantity of the output layer is analyzed, and the multithreshold decision and threshold detection method is adopted to obtain the estimated value of the maximum covariance parameter . (2) The upper limit of noise reduction processing is estimated. (3) Combined with the image segmentation method, snowboarding action feature fusion is realized to improve the recognition accuracy of action feature extraction.
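Step (1)'s multithreshold decision can be sketched as banding values against a sorted threshold list: each value is assigned the number of thresholds it meets or exceeds. The band boundaries below are illustrative assumptions, not the paper's:

```python
def multithreshold(values, thresholds):
    """Assign each value to a band defined by sorted thresholds —
    a minimal multi-threshold decision.
    """
    bands = []
    for v in values:
        band = 0
        for t in sorted(thresholds):
            if v >= t:          # each threshold met raises the band by one
                band += 1
        bands.append(band)
    return bands

# Four sample intensities fall into four distinct bands.
print(multithreshold([3, 45, 120, 250], thresholds=[32, 96, 160]))
# [0, 1, 2, 3]
```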
5. Experimental Analysis and Results
In order to verify the application performance of this method in the feature extraction of snowboarding starting action, the experimental test and analysis are carried out.
5.1. Experimental Environment and Data Set
To ensure the applicability of the test results, an action feature extraction test room is set up, three cameras are installed to obtain the action images, and the machine vision system uses an OV7670 camera.
This paper presents experiments on two data sets (KTH and HMDB). KTH data set: one of the most heavily used data sets in this line of work. It contains 2391 clips covering 6 actions, each performed by 25 subjects in 4 different scenarios, giving 600 video sequences, and each video can be divided into 4 subsequences. The actions in the KTH data set are relatively standardized and shot with fixed cameras, and the data volume is rich enough for current model training, so it is a very useful data set for simple action recognition tasks. HMDB data set: it contains 51 categories, with an average of 100–200 clips per category. The data are rich in both volume and category, but the set is mainly composed of movie footage and everyday camera videos, so the backgrounds are relatively complex, and some videos involve moving or switching shots. This data set is therefore better suited to target recognition and target detection.
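A person-disjoint split is the usual evaluation protocol for KTH-style data, so that no performer appears in both training and test sets. The sketch below assumes a simple clip record with a `person` field, which is an illustrative structure rather than the paper's:

```python
def person_split(clips, test_persons):
    """Split video clips by performer so no person appears in both sets."""
    train = [c for c in clips if c["person"] not in test_persons]
    test = [c for c in clips if c["person"] in test_persons]
    return train, test

# Four performers, two actions each; hold out performer 3 for testing.
clips = [{"person": p, "action": a} for p in range(4) for a in ("run", "wave")]
train, test = person_split(clips, test_persons={3})
print(len(train), len(test))  # 6 2
```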
In this study, the training and testing data in each task strictly follow the standards. The above data come from the KTH and HMDB data sets. In the training process, 36000 tasks are randomly selected for testing, as shown in Table 1.
5.2. Experimental Indicators
5.2.1. Clustering Quality
Clustering analysis can handle classification determined by multiple variables, eliminate the classification of variables, and improve the quality of feature extraction. To verify the performance of the proposed method, feature extraction clustering is selected as an index. The clustering calculation formula is

where refers to clustering feature decomposition and refers to clustering feature integration.
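The clustering formula itself did not survive extraction; as a generic stand-in, a between- to within-cluster scatter ratio measures the same notion of clustering quality for one-dimensional features (higher means tighter, better-separated clusters). This sketch is illustrative, not the paper's index:

```python
def cluster_quality(clusters):
    """Ratio of between-cluster to within-cluster scatter for 1-D features."""
    all_points = [x for c in clusters for x in c]
    grand = sum(all_points) / len(all_points)
    # scatter of points around their own cluster mean
    within = sum((x - sum(c) / len(c)) ** 2 for c in clusters for x in c)
    # scatter of cluster means around the grand mean, weighted by size
    between = sum(len(c) * (sum(c) / len(c) - grand) ** 2 for c in clusters)
    return between / within if within else float("inf")

tight = [[1.0, 1.1], [9.0, 9.1]]   # compact, well-separated clusters
loose = [[1.0, 5.0], [6.0, 9.0]]   # spread-out, overlapping clusters
print(cluster_quality(tight) > cluster_quality(loose))  # True
```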
5.2.2. Stability
Stability ensures that the extraction process is not disturbed by other factors, so as to maintain the stability of the sensing image and improve the quality of action feature extraction.
5.2.3. Error Rate
The action standard is presented on the visual sensing image, and the error rate is measured via laser detection. The smaller the error, the clearer the extracted action features.
5.2.4. Accuracy
According to the accuracy comparison of the methods, the higher the accuracy, the stronger the applicability.
5.2.5. Time Consumption
From the time-consumption comparison of the methods, the shorter the time, the more efficient the feature extraction.
5.3. Results and Discussion
To verify the effectiveness of the proposed visual-sensing-based method for extracting snowboard starting action features, an experimental analysis is carried out. The purpose is to verify the accuracy of action feature extraction through two experiments. In the first, edge features are extracted from a specified snowboard action image and the edge feature extraction effects are compared; the second verifies the effect of snowboard action extraction under multiple complex images. To make the results more illustrative, the methods of literatures , , , , and  are compared with the proposed method. The experimental results are calculated by formula (5) and shown in Figure 3.
According to Figure 3, under multiple complex images, the recognition rates of the feature extraction methods in literatures - are low, verifying that the action extraction effects of these methods are poor and their application effect is not as good as that of the studied method. The proposed method extracts the snowboard starting action features well, with a recognition rate above 90%. This proves that the proposed feature extraction method is less affected by the detection background and can effectively remove image noise with good clustering, ensuring the accuracy of snowboard action features. The stability test results of snowboard feature extraction are shown in Figure 4.
According to Figure 4, the feature extraction stability of the methods in literatures , , , , and  is 0.41, 0.43, 0.47, 0.38, and 0.32, respectively, while that of the proposed method is 0.49. Snowboard action feature extraction using the proposed method therefore has better stability and practicability.
According to Figure 5, the feature extraction error rates of the methods in literatures , , , , and  are 3.2%, 2.5%, 6.3%, 5.6%, and 6.1%, respectively, while that of the proposed method is 0.49%. The snowboard action feature extraction error rate of the proposed method is thus lower, and the extraction level is higher.
According to Figure 6, the feature extraction accuracies of the methods in literatures , , , , and  are 1%, 0.62%, 0.29%, 0.99%, and 1.19%, respectively, while that of the proposed method is 2.1%. The accuracy of snowboard action feature extraction using the proposed method is therefore higher.
According to Figure 7, the feature extraction times of the methods in literatures , , , , and  are 59, 65, 80, 108, and 110, respectively, while that of the proposed method is 33. Snowboard action feature extraction with the proposed method therefore takes less time.
6. Conclusions and Future Works
To sum up, because of the large amount of noise in snowboard starting action images, snowboard starting action feature extraction has been inaccurate, with large errors and long extraction times. This paper therefore proposes a snowboard starting action feature extraction method based on visual sensing image processing. The innovation of the method is that it uses visual sensing image processing to remove redundant images and can classify, extract, and segment the same features, so each part of the image can be analyzed in more detail; the interference factors in the image are filtered by laser, and the middle part of the action image is extracted according to the common features of multiple images to obtain its definition description. Movement change and moving distance are used to count the feature quantity and clarity. In the future, we will conduct further research on snowboard starting action feature extraction, so as to further improve the accuracy and effect of action feature extraction.
The data used to support the findings of this study are included within the article. Readers can access the supporting data from the KTH and HMDB data sets.
Conflicts of Interest
The authors declare that there are no conflicts of interest with any financial organizations regarding the material reported in this manuscript.
This work was supported by the Foundation of General Project of Higher Education and Teaching Reform in Heilongjiang Province, China (SJGY20200406); General Project of the China University Sports Association (202013508), Research on the Construction of the Ski Competition System for College Students in China; Discipline Echelon Project of Harbin Institute of Physical Education (XKB04), Research on the Cultivation of Student Sports Core Accomplishment from the Perspective of Sports and Education Integration; and Innovative Research and Practice of Skills Training Methods for Normal Students in Normal Universities (20-XJ21009).
C. Ding, K. Liu, G. Li, L. Yan, B.-Y. Chen, and Y.-M. Zhong, "Spatio-temporal weighted posture motion features for human skeleton action recognition research," Chinese Journal of Computers, vol. 43, no. 1, pp. 29–40, 2020.
S. Memon, M. Ahmed, S. Narejo, U. Ahmed Baig, B. Shankar Chowdry, and M. Rizwan Anjum, "Self-driving car using lidar sensing and image processing," International Journal of Grid and Distributed Computing, vol. 13, no. 2, pp. 77–88, 2021.
D. Kim, "On-street parking vacancy and vehicle detection in video using computer vision technique," International Journal of Transportation, vol. 7, no. 2, pp. 11–18, 2019.
M. Ghavidel, P. Bayat, and M. E. Farashiani, "Evaluation of Buxus destruction using satellite image processing techniques in the northern forests of Iran," International Journal of Future Generation Communication and Networking, vol. 13, no. 4, pp. 11–22, 2020.
F. Lu, "Adaptive recognition method of aerobics decomposition action image based on feature extraction," Science Technology and Engineering, vol. 19, no. 7, pp. 148–153, 2019.
B. D. Da Silva, P. C. Bernardes, P. F. Pinheiro, E. Fantuzzi, and C. D. Roberto, "Chemical composition, extraction sources and action mechanisms of essential oils: natural preservative and limitations of use in meat products," Meat Science, vol. 176, no. 3, Article ID 108463, 2021.
C. Yi, B. Chen, S. Yuan, and B. Xu, "A method for multi-person motion capture based on multi-mode in virtual geographic environments," Journal of Geo-Information Science, vol. 21, no. 3, pp. 305–314, 2019.
L. Brea Alejo, J. Gil-Cabrera, A. Montalvo-Pérez, D. Barranco-Gil, J. Hortal-Fondón, and A. Navandar, "Performance parameters in competitive alpine skiing disciplines of slalom, giant slalom and super-giant slalom," International Journal of Environmental Research and Public Health, vol. 18, no. 5, p. 2628, 2021.
K. Gurucharan, S. S. Kiran, K. Babburu, and L. Vadda, "Computer vision based fruit recognition and classification system," International Journal of Future Generation Communication and Networking, vol. 13, no. 3, pp. 1–14, 2020.
D. Kim and Y. Chang, "Traffic vision analysis with convolutional neural network model," International Journal of Transportation, vol. 8, no. 1, pp. 37–42, 2020.
V. Rosso, V. Linnamo, W. Rapp et al., "Simulated skiing as a measurement tool for performance in cross-country sit-skiing," Proceedings of the Institution of Mechanical Engineers - Part P: Journal of Sports Engineering and Technology, vol. 233, no. 4, pp. 455–466, 2019.
K. S. Ananda Kumar, R. Balakrishna, A. Y. Prasad, B. Worku, and K. Salih Siraj, "Development of integrated IoT application on vehicle tracking, traffic monitoring and vehicle theft," International Journal of Future Generation Communication and Networking, vol. 13, no. 4, pp. 1–10, 2020.