High-Performance Computing and Automatic Face Recognition
Evaluation of Physical Education Teaching Effect Based on Action Skill Recognition
In order to improve the effect of physical education teaching in modern colleges and universities, this paper combines a movement skill recognition algorithm to construct a physical education teaching effect evaluation system and studies the multispectral image reproduction system in practical application. Moreover, this paper gives a detailed description of the more advanced functional modules in the system, such as spectral acquisition, spectral reflectance reconstruction, and spectral color correction. In addition, this paper focuses on the algorithm design of the color appearance matching module. By adding a “color appearance transformation” on the source side and a “color appearance inversion” on the reproduction side, the source image and the reproduced image can achieve color appearance matching in an observation-condition-independent space. Simulation experiments verify that the physical education effect evaluation system based on action skill recognition meets the actual needs of physical education and effectively promotes both the recognition of physical skills and the improvement of the physical education effect.
In the process of physical education teaching innovation, teachers should learn more ways to develop students’ creativity, create a good educational situation for students, enable students to put forward their own views on the corresponding issues relatively independently, and ensure a good atmosphere for physical education teaching. In this way, students’ creative awareness will improve, and they will be able to carry out sports learning more flexibly. A successful physical education class is composed of many elements, such as experienced physical education teachers, students with strong curiosity, a scientific teaching mode, and sports equipment. These classroom elements are interrelated, but sometimes they stand in tension with one another to a certain extent. A change in one element may lead to a change in another, and the quality of physical education teaching may also be affected. In this case, in order for the innovation and cultivation of physical education to succeed, it is necessary to think from a global perspective and carefully consider the connections between the different elements. Only in this way can innovative sports teaching be carried out better and better results be achieved.
Physical education space is not a purely material concept but the real life and living conditions shared by teachers and students in the process of physical education. It is a humanistic structure full of life and cultural significance. Moreover, the physical education space contains material, spiritual, and social factors such as political power, social relations, cultural dependence, and psychological activities. Materiality is the space that can be directly observed, spirituality is the space constructed by language, and sociality is the relational space between teachers and students, teaching and learning, and teaching and the environment. What is extraordinary about physical education teaching space is that it is not a space of simple language construction but a space for the artistic reproduction of body language, a practical space for displaying sports technique. Moreover, it is a spatial concept that organically integrates materiality, spirituality, and sociality through the practice of physical education, so it also has greater vitality and cultural significance.
The practical activities of physical education teachers and students and their discourse tendencies are first situated in space, because all the elements of the physical education process, including physical education teachers, students, physical education concepts, physical education methods, and physical education effects, are presented in a specific space. “Space is the mediator of social action and actively affects social action. Space is the raw material of the social process and also the product of the social process.” In the special environment of physical education teaching, spatial behavior is the result of the mutual penetration and interaction of the physical education teacher’s ability, the students’ quality, the spatial situation, and the social culture. Physical education teachers’ grasp of space theory and its value directly affects the effect of physical education teaching. Every demonstration action of a physical education teacher is a spatial artistic reproduction of body language. However, in reality, physical education teachers are constrained by the cultural environment, the background of the times, cognitive ability, and living customs, and they do not realize the priority of space in physical education teaching. In teaching practice, the measure of time conceals the priority of space. The weak concept of space does not exist only among physical education teachers. Students ignore the self-constructed space in the practice of physical activities and pursue so-called autonomy. The basis of students’ autonomous sports practice is the mastery of individual motor skills, and, needless to say, it takes time for students to master motor skills.
This paper combines the movement skill recognition algorithm to construct a physical education teaching effect evaluation system and thereby promote the progress and development of modern physical education reform.
2. Related Work
One line of work proposes a new image stitching algorithm based on contour features. In the feature extraction stage, the convolution map is enhanced and the region-growing method is used for auxiliary correction, which improves the contour extraction effect; for feature representation, a shape signature is used instead of a chain code to describe the contour, which improves calculation speed and reduces the effects of noise interference and lens distortion. Aiming at the continuous stitching of sequence images under translation, rotation, and scaling transformations, a stitching algorithm combining region-feature-based registration with registration based on grayscale cross-correlation has been proposed. The algorithm extracts regions with an iterative threshold segmentation algorithm, uses region features for registration to establish an initial pair of regions with the same name, then uses the centroid points of that pair as feature points, and selects the cross-correlation criterion as a metric based on the grayscale information of the image. Finally, the accurate transformation relationship between the images is obtained, and the stitching of sequence images is realized. An image feature matching algorithm based on the Laplacian matrix has also been proposed: first, the Laplacian matrices of the feature point sets of the two images are constructed, singular value decomposition (SVD) is performed on the two matrices, and the decomposition result is then used to construct a relationship matrix that reflects the matching degree between the feature points, thereby matching the features of the two images. The traditional template matching algorithm has also been studied and analyzed, and a new fast projection-based template matching algorithm has been proposed.
The two-dimensional image is projected to one dimension, and the one-dimensional projection values are further differentially quantized to obtain a set of character strings consisting of 0s and 1s that describe the image and the template. The KMP fast string matching algorithm is then introduced to compare the image and template directly. A two-stage correlation matching improvement has been proposed, yielding a fast hierarchical pyramid matching algorithm. Based on the principle of template matching, a new automatic color image stitching method has also been proposed. For two color images with overlapping areas, the method first uses the image feature information to automatically find a small template image in the overlapping area of one image and then searches the overlapping area of the other image according to the maximum similarity criterion. When the best registration point is found, a final data fusion operation is performed on the overlapping area of the two images using a smoothing factor, which realizes the fast and automatic stitching of color images. Two similarity measurement methods, sequential similarity detection (SSDA) and normalized product correlation, have been used to establish the similarity between the template image and the input image, with a simulated annealing algorithm then randomly searching for the optimal solution, that is, the best match, quickly and accurately. On the other hand, stitching in the frequency and phase domains has become one of the new research hotspots. Its stability and its insensitivity to changes in image grayscale have attracted the attention of relevant researchers, and a great deal of research work has been done in this area. Using a compactly supported wavelet with orthogonality and symmetry, two images have been mosaicked and spliced.
Because the wavelet transform has the property of a band-pass filter, the wavelet components at different scales actually occupy certain bandwidths; the larger the scale value, the higher the frequency of the component, and the bandwidth of each wavelet component is not large. The two images to be spliced are first decomposed into wavelet components of different frequencies according to the wavelet decomposition method; then, at different scales and with different stitching widths, the two images are stitched together according to the wavelet components of each scale, and finally a restoration program is used to restore the entire image. Other research expands the application scope of the image registration method based on the Fourier–Mellin transform in two directions: the first is the stitching of panoramic images, and the other is the matching of image curves. The image curves are converted into binary images, and the Fourier–Mellin transform is then applied to register these binary images, thereby matching the two curves.
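The similarity measures that recur in the area-based methods above, such as normalized product correlation, can be sketched in a few lines. The following brute-force Python/NumPy implementation is illustrative only and is not the code of any of the cited works; the function name and interface are our own:

```python
import numpy as np

def ncc_match(image, template):
    """Slide the template over the image and return the top-left corner
    with the highest normalized cross-correlation score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:  # flat window carries no correlation information
                continue
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

Because both the window and the template are mean-centered and normalized, the score is invariant to linear grayscale changes, which is the property the frequency-domain methods above also exploit.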
The grayscale-based segmentation method is intuitive and easy to operate and is widely used in practice. However, due to image grayscale distortion and various environmental factors, it is prone to mis-segmentation. Many scholars have devoted great effort to this problem and designed various algorithms to solve it. However, due to the influence of various factors, grayscale image segmentation still cannot completely eliminate incorrect segmentation and excessive segmentation error. By analyzing the causes of oversegmentation in the watershed algorithm, simplifying the grayscale transformation, and exploiting the influence of different mathematical-morphology structural elements on the distance transform, an improved algorithm combining the watershed and Livewire has been proposed, which reduces oversegmentation and makes the calculation more efficient. Aiming at the two defects of the watershed segmentation algorithm, time consumption and oversegmentation, a multiresolution watershed image segmentation algorithm has been studied. This algorithm performs watershed segmentation on low-resolution images, improving the speed of segmentation. When returning from the low-resolution image to the high-resolution image, a merging function based on edge information is used to avoid the loss of edge information and ensure the accuracy of segmentation. A noise suppression method based on gradient images has also been designed, which suppresses the influence of Gaussian noise on gradient images and effectively avoids oversegmentation.
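As an illustration of the grayscale thresholding family discussed here, the classic iterative (intermeans) threshold selection can be sketched as follows. This is a generic textbook version, not any of the specific improved algorithms cited above:

```python
import numpy as np

def iterative_threshold(gray, tol=0.5):
    """Classic intermeans iteration: split the gray values at T,
    then reset T to the midpoint of the two class means, until stable."""
    t = gray.mean()  # initial guess: global mean
    while True:
        low = gray[gray <= t]
        high = gray[gray > t]
        if low.size == 0 or high.size == 0:
            return t  # degenerate split; keep current threshold
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

On a clearly bimodal histogram the iteration settles between the two modes; on distorted grayscale distributions it can land poorly, which is exactly the mis-segmentation risk the paragraph above describes.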
3. Action Recognition Algorithms
The multispectral image reproduction process in practical application is shown in Figure 1. Among them, the spectral image input device and the illumination light detection device are used for spectrum acquisition to obtain the reflection spectrum and the illumination light spectrum of the scene object and to provide a data source for computer processing. Computers are used for operations such as spectral reflectance reconstruction. When the spectral reproduction is remote reproduction, the network environment can realize the storage and transmission of multispectral images and related parameters. At the spectral reproduction end, the computer performs operations such as observational ambient light processing, spectral image processing, and spectral color correction to prepare the output data for the spectral output device. In addition, the spectral output device is used to reproduce the spectral image, giving an image reproduction that meets the application requirements. The reproduced image can be either a spectrally matched reproduction or a color reproduction that is consistent in color appearance under different viewing conditions.
As can be seen from Figure 1, if only the reproduction of spectral matching is pursued, the red apple image at the source end may be reproduced as a purple apple, so a processing unit for color appearance matching needs to be added in the reproduction process. The multispectral image color appearance reproduction process is shown in Figure 2, and each functional module in the figure is discussed in the following sections.
The image output by the multichannel digital camera integrates various types of information such as the ambient illumination of the scene, the spectral reflection characteristics of the scene, the spectral sensitivity of the camera, and the spectral transmittance of the camera filter. To obtain reflectance data that only reflects the spectral reflection characteristics of the scene, it is necessary to use mathematical methods to estimate based on the multichannel images output by the camera, as well as the scene illumination information and the spectral characteristics of the camera. This process is called spectral reflectance reconstruction.
The process of shooting a scene by a spectral imaging system and outputting a multichannel image related to the color space of the camera can be expressed as

dₖ = ∫ s(λ)l(λ)tₖ(λ)r(λ)dλ, k = 1, 2, …, K. (1)

Among them, k represents the channel, dₖ represents the output image of the kth channel of the system, and s(λ) represents the spectral sensitivity of the camera. l(λ) represents the spectral power distribution of the illumination, tₖ(λ) represents the spectral transmittance of each channel filter, and r(λ) represents the spectral reflectance of the scene. Since the spectral sensitivity, the illumination spectral power distribution, and the filter spectral transmittance are all sampled in the visible wavelength range, formula (1) can also be represented by the following discrete matrix form:
D = MR. (2)

Among them, D is the multichannel image output by the camera, R is the spectral reflectance of the scene, S is the spectral sensitivity of the camera, L is the spectral power distribution of the illumination, T is the spectral transmittance of the filter, and M = Tᵀdiag(S)diag(L) is the spectral characteristic matrix of the imaging system. We can get

D = Tᵀdiag(S)diag(L)R. (3)
Formulas (2) and (3) are called forward models of the spectral imaging system.
Spectral reflectance reconstruction is to estimate the spectral reflectance of the scene when the abovementioned relevant characteristics and the multichannel image output by the camera are known, that is, to obtain the inverse process of the model through the inverse transformation Q, so that

R̂ = QD. (4)
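As a minimal sketch of the inverse transformation Q, the Moore–Penrose pseudoinverse of the system matrix can be used when the spectral characteristics of the imaging system are known. The band count, channel count, and random data below are illustrative assumptions, not measured system parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_channels = 31, 8   # e.g. 400-700 nm at 10 nm; 8-channel camera (assumed)

# Forward model D = M R: M combines sensitivity, illumination, and filters.
M = rng.random((n_channels, n_bands))
R = rng.random((n_bands, 5))  # reflectance spectra of 5 pixels
D = M @ R

# Inverse transform Q taken as the Moore-Penrose pseudoinverse of M.
Q = np.linalg.pinv(M)
R_hat = Q @ D
```

Because there are fewer channels than wavelength bands, the system is underdetermined and `R_hat` is only the minimum-norm solution consistent with the camera responses; this limitation is what motivates the learning-based reconstruction methods discussed next.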
The learning-based reconstruction method does not require the spectral characteristic matrix of the imaging system in the reconstruction process. It first establishes a set of training samples and learns from them the transformation between the multichannel values output by the system and the spectral reflectance; the R-matrix method is one such approach. Among them, the R-matrix method takes into account both the spectral information and the chromaticity information contained in the device-dependent image, so it can achieve better spectral reconstruction results than other methods.
The color space value of the recognition system is c, and, after identification, the spectral reflectance value measured with the spectrophotometer is s; then,

s = f(c).
Here, f represents the nonlinear mapping relationship from the recognition system color space to the spectral space, C represents the recognition system color space, and S represents the spectral reflectance space. Correspondingly, a spectral reflectance value that can be reproduced by the recognition system can be transformed into the color space value of the recognition system by the inverse mapping f⁻¹, namely, c = f⁻¹(s). Ω represents the range of spectral values that the recognition system can reproduce, that is, the spectral domain of the recognition system. It is defined as follows:

Ω = {s ∣ s = f(c), c ∈ C}.
In essence, the spectral color correction of the recognition system is to obtain the forward mapping f and the inverse mapping f⁻¹ and thus realize the correction transformation from the recognition system color space to the spectral space and from the spectral space back to the recognition system color space. There are two main ways to find the forward and inverse transformations f and f⁻¹. One is to establish an analytical mathematical model of the recognition system, and the other is to use the lookup table method.
Color appearance matching needs to be implemented in chromaticity space, so its process can be shown in Figure 3. First, the source multispectral image is converted from the spectral color space to the CIE chromaticity space (CIEXYZ space). Since the CIE chromaticity space can only be strictly applied when the source and destination share the same observation conditions, and it is still an observation-condition-dependent space, it is necessary to transform the color appearance according to the source observation conditions and convert it into an observation-condition-independent space. Then, according to the reproduction observation conditions, the inverse color appearance transformation is performed to obtain the chromaticity values with the same color appearance under the reproduction observation conditions, and finally, the chromaticity inversion is performed to obtain the reproduced spectral reflectance image.
The chromaticity transformation only needs to integrate the source multispectral image data, illumination information, and the color vision characteristics of the observer. The method of color appearance transformation will not be repeated here. The process of chrominance inversion is more complicated than chrominance transformation, which includes high-dimensional spectral estimation and spectral modulation.
Let t represent the chromaticity value obtained after the color appearance inversion, and let s represent the spectral reflectance estimated from the tristimulus value t; then, the relationship between t and s can be expressed as

t = kΦᵀEs.
Here, Φ = [x̄, ȳ, z̄] represents the standard observer color matching functions, and E represents a diagonal matrix with the spectral power distribution of the reproduced illumination as its diagonal elements.
Here, ones(1) represents an all-ones vector. To make the Y value of white light equal 100, we set the normalization coefficient k = 100/(ȳᵀE · ones(1)). c is the tristimulus value with each component normalized to the [0, 1] interval, namely, c = t/100.
Through formulas (9) to (13), the transformation between the estimated spectral reflectance s and the normalized tristimulus value c can be established:

c = Aᵀs, where A = (k/100)EΦ.
For a set of n spectral reflectances S = [s₁, s₂, …, sₙ], formula (14) can be generalized as

C = AᵀS.
This section needs to solve the inverse problem of (14), that is, to estimate the spectral reflectance s from the normalized tristimulus value c. With H as the estimation matrix, the high-dimensional spectral estimation formula is

ŝ = Hc.
For the chromaticity image after color appearance inversion, this can be generalized as

Ŝ = HC.
To calculate the matrix H, first, we select a set of samples with known spectral reflectance (this paper selects the standard target IT8.7/3 as the sample set, with 928 samples), denoted as S̃. According to formula (16), the normalized tristimulus value set of the sample set under the reproduction observation environment can be obtained, namely,

D = AᵀS̃.
Second, the transformation from the normalized tristimulus value set D to the spectral reflectance sample set S̃ can be established through the estimation matrix H:

S̃ = HD.
It can be seen from the previous formula that H is the generalized inverse matrix of matrix D, which can be obtained by calculating DDᵀ and (DDᵀ)⁻¹. Multiplying both sides of formula (20) by Dᵀ, we have

S̃Dᵀ = HDDᵀ.
From this, it can be obtained that

H = S̃Dᵀ(DDᵀ)⁻¹.
After calculating the matrix H and substituting it into formula (18), the high-dimensional spectrum S can be estimated from the chromaticity space image after the color appearance inversion.
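The computation of the estimation matrix H and its use for high-dimensional spectral estimation can be sketched as follows. The sample reflectances and the illumination-observer matrix A below are random stand-ins, not the actual IT8.7/3 measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_samples = 31, 928               # e.g. the IT8.7/3 target (931 nominal patches)

S_tilde = rng.random((n_bands, n_samples))  # sample reflectance set (stand-in data)
A = rng.random((n_bands, 3))                # illumination x observer matrix (assumed)

# Tristimulus set of the samples under the reproduction observation environment.
D = A.T @ S_tilde

# Estimation matrix: H = S~ D^T (D D^T)^-1, the generalized inverse relation above.
H = S_tilde @ D.T @ np.linalg.inv(D @ D.T)

# Estimating a high-dimensional spectrum from one tristimulus value.
c = A.T @ rng.random((n_bands, 1))
s_hat = H @ c
```

A useful sanity check on this construction is that the estimated spectrum reproduces its input colorimetry exactly: since AᵀH = DDᵀ(DDᵀ)⁻¹ = I, the tristimulus value of `s_hat` equals `c`.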
Due to the existence of metamerism, there are infinitely many spectral reflectances corresponding to a given tristimulus value. In formula (18), H is the generalized inverse matrix of D, which means that the spectral reflectance ŝ estimated from the normalized tristimulus value c through (18) is only one of the infinitely many metamers under the reproduction observation environment. It achieves color appearance matching with the source image spectrum before the chromaticity transformation, but there is a large spectral error. Since both spectral matching and color appearance matching are critical to the reproduction quality of an image, in order to achieve (or approximate) spectral matching with the source spectral data, it is necessary to correct the estimated spectrum ŝ to obtain the final reproduced spectrum. The corrected spectrum remains a metamer of ŝ under the reproduction observation conditions, but its spectral error with respect to the source spectrum is smaller; this process is called spectral modulation, denoted as

ŝᵣ = g(ŝ).
Here, g represents the spectral modulation function.
In order to realize the spectral modulation function, the estimated spectrum can be decomposed using R-matrix theory. The R matrix is essentially an orthogonal projection operator, which is defined as

R = A(AᵀA)⁻¹Aᵀ.
Here, A is the combined action matrix of the reproduced illumination spectrum and the standard observer, which can be expressed as

A = (k/100)EΦ.
The fundamental stimulus s* of a spectrum s is the projection of s through R, that is,

s* = Rs.
The remaining component, the metameric black (I − R)s, is invisible under the reproduction illumination. Here, I represents the identity matrix. Therefore, the final reproduced spectrum should be

ŝᵣ = Rŝ + (I − R)s₀,

where ŝ is the spectrum estimated after the color appearance inversion and s₀ is the source reconstructed spectral reflectance.
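The spectral modulation step can be sketched with the R-matrix projector. The spectra and the matrix A below are random stand-ins; the reproduced spectrum keeps the fundamental stimulus of the estimated spectrum while borrowing the metameric black of the source:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands = 31
A = rng.random((n_bands, 3))   # illumination x observer matrix (assumed)

# Orthogonal projector of Cohen's matrix-R theory: R = A (A^T A)^-1 A^T.
R = A @ np.linalg.inv(A.T @ A) @ A.T
I = np.eye(n_bands)

s_hat = rng.random(n_bands)    # spectrum estimated after color appearance inversion
s_src = rng.random(n_bands)    # source reconstructed spectral reflectance

# Fundamental stimulus of s_hat plus metameric black of s_src.
s_rep = R @ s_hat + (I - R) @ s_src
```

Because Aᵀ(I − R) = 0, the modulated spectrum `s_rep` has exactly the tristimulus values of `s_hat` (the color appearance match is preserved), while its deviation from the source, s_rep − s_src = R(ŝ − s_src), is a projection and therefore never larger than the deviation of the unmodulated estimate.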
CMM is the core of color processing, and the working principle of spectral color appearance reproduction CMM is shown in Figure 4.
For any spectral image acquired by the spectral imaging system, the CMM uses the observed ambient illumination provided in the source spectrum profile and the characteristic parameters of the spectral imaging system to reconstruct the spectral reflectance to obtain the spectral reflectance vector value of each pixel of the spectral image. Then, it uses the observation condition information provided in the source and destination profiles to perform color appearance matching and to obtain a spectrally approximated reproduced spectral reflectance image that matches the color appearance of the source spectral reflectance image. When the image is reproduced, according to the characteristic parameters of the output device provided in the target spectrum profile, the reproduced spectral reflectance data is transformed into the color space of the output device and then the reproduced image is obtained through the output device.
As can be seen from Figure 4, the spectral color appearance reproduction CMM mainly includes three independent real-time calculation modules: spectral reflectance reconstruction, color appearance matching, and spectral color correction. Among them, spectral reflectance reconstruction already has relatively mature algorithms, and experiments show that the learning-based R-matrix method can obtain higher reconstruction accuracy. Spectral color correction can use the two methods given in the literature: color correction based on an analytical spectral model of the recognition system and color correction based on a lookup table. When performing color appearance matching, the CMM first reads the ambient illumination information from the source spectrum profile and obtains the chromaticity values of the source spectral image. Then, it reads the source and destination observation condition information from the source profile and the destination profile, respectively, and performs the color appearance transformation and inversion. After that, it calculates the estimation matrix H from the spectral reflectance sample set and reestimates the high-dimensional spectrum. Finally, it uses the source reconstructed spectral reflectance data as the standard to spectrally modulate the estimated spectrum and obtain the reproduced spectral reflectance. The process of color appearance matching performed by the CMM is shown in Figure 5.
4. Evaluation of Physical Education Teaching Effect Based on Action Skill Recognition
The flow of the physical education effect evaluation system based on action skill recognition is shown in Figure 6.
In this process, the initial camera parameter calculation module adopts a method basically similar to that used for the court. Since the camera covering the court is not fixed, the camera parameters need to be calculated in real time, which means that, in addition to video fusion and moving object segmentation, the system overhead caused by calculating the perspective transformation matrix must also be considered. In order to meet the real-time requirements, we designed a method using feature point estimation and direct line fitting according to the distribution characteristics of feature points in consecutive video frames. Figure 7 shows the digitized image for action recognition.
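The direct line-fitting step can be illustrated with an ordinary least-squares fit to the detected feature points; the paper does not specify the exact fitting procedure, so the following is a generic sketch with an assumed function name:

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = a*x + b to court feature points.

    points: iterable of (x, y) coordinates of feature points lying
    near one court line in the current frame.
    """
    pts = np.asarray(points, dtype=float)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)  # degree-1 polynomial fit
    return a, b
```

Fitting each court line this way in every frame keeps the per-frame cost low, which matters here because the perspective transformation matrix must also be recomputed in real time.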
This paper analyzes the proposed physical education effect evaluation system based on action skill recognition through multiple sets of experiments, measuring the system’s action recognition accuracy and its physical education teaching effect. The test results shown in Table 1 were obtained.
The above test results verify that the physical education effect evaluation system based on action skill recognition meets the actual needs of physical education and effectively promotes both the recognition of physical skills and the improvement of physical education effects.
Physical education is one of the basic forms of teaching organization in school education and an important path for cultivating talents with comprehensive moral, intellectual, physical, and aesthetic development. Moreover, physical education is a bilateral teaching activity in which teachers and students participate and cooperate under the guidance of physical education teachers. The quality of physical education teaching is directly related to the physical and mental health of students as well as to the professional growth and development of teachers; it is an important educational practice link in the school education system, in which both the implementers of and participants in teaching activities are driven by the power of teaching to promote the development of physical education. This paper combines the action skill recognition algorithm to construct a physical education teaching effect evaluation system. The simulation experiment verifies that the physical education effect evaluation system based on action skill recognition meets the actual needs of physical education and effectively promotes both the recognition of physical skills and the improvement of the physical education effect.
Data Availability

The labeled dataset used to support the findings of this study is available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments

This research was supported by the Chongqing Education Science Planning Project under “Study on the Construction of Students’ Physical Health Monitoring and Intervention Model Based on Big Data” (2019GX025) and “Research on Health Sports Behavior Cultivation of Children in Chongqing under Health Big Data Platform” (2020GX141).
References

M. Li, Z. Zhou, and X. Liu, “Multi-person pose estimation using bounding box constraint and LSTM,” IEEE Transactions on Multimedia, vol. 21, no. 10, pp. 2653–2663, 2019.
J. Xu, K. Tasaka, and M. Yamaguchi, “[Invited paper] Fast and accurate whole-body pose estimation in the wild and its applications,” ITE Transactions on Media Technology and Applications, vol. 9, no. 1, pp. 63–70, 2021.
G. Szűcs and B. Tamás, “Body part extraction and pose estimation method in rowing videos,” Journal of Computing and Information Technology, vol. 26, no. 1, pp. 29–43, 2018.
R. Gu, G. Wang, Z. Jiang, and J. N. Hwang, “Multi-person hierarchical 3D pose estimation in natural videos,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 11, pp. 4245–4257, 2020.
M. Nasr, H. Ayman, N. Ebrahim, R. Osama, N. Mosaad, and A. Mounir, “Realtime multi-person 2D pose estimation,” International Journal of Advanced Networking and Applications, vol. 11, no. 6, pp. 4501–4508, 2020.
N. T. Thành and L. V. Hùng, “An evaluation of pose estimation in video of traditional martial arts presentation,” Journal of Research and Development on Information and Communication Technology, vol. 2019, no. 2, pp. 114–126, 2019.
I. Petrov, V. Shakhuro, and A. Konushin, “Deep probabilistic human pose estimation,” IET Computer Vision, vol. 12, no. 5, pp. 578–585, 2018.
G. Hua, L. Li, and S. Liu, “Multipath affinage stacked-hourglass networks for human pose estimation,” Frontiers of Computer Science, vol. 14, no. 4, pp. 144701–144712, 2020.
K. Aso, D. H. Hwang, and H. Koike, “Portable 3D human pose estimation for human-human interaction using a chest-mounted fisheye camera,” in Proceedings of the Augmented Humans Conference 2021, pp. 116–120, New York, NY, USA, February 2021.
D. Mehta, S. Sridhar, O. Sotnychenko et al., “VNect: real-time 3D human pose estimation with a single RGB camera,” ACM Transactions on Graphics, vol. 36, no. 4, pp. 1–14, 2017.
S. Liu, Y. Li, and G. Hua, “Human pose estimation in video via structured space learning and halfway temporal evaluation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 7, pp. 2029–2038, 2019.
S. Ershadi-Nasab, E. Noury, S. Kasaei, and E. Sanaei, “Multiple human 3D pose estimation from multiview images,” Multimedia Tools and Applications, vol. 77, no. 12, pp. 15573–15601, 2018.
X. Nie, J. Feng, J. Xing, S. Xiao, and S. Yan, “Hierarchical contextual refinement networks for human pose estimation,” IEEE Transactions on Image Processing, vol. 28, no. 2, pp. 924–936, 2019.
Y. Nie, J. Lee, S. Yoon, and D. S. Park, “A multi-stage convolution machine with scaling and dilation for human pose estimation,” KSII Transactions on Internet and Information Systems (TIIS), vol. 13, no. 6, pp. 3182–3198, 2019.
A. Zarkeshev and C. Csiszár, “Rescue method based on V2X communication and human pose estimation,” Periodica Polytechnica: Civil Engineering, vol. 63, no. 4, pp. 1139–1146, 2019.
W. McNally, A. Wong, and J. McPhee, “Action recognition using deep convolutional neural networks and compressed spatio-temporal pose encodings,” Journal of Computational Vision and Imaging Systems, vol. 4, no. 1, 3 pages, 2018.
R. G. Díaz, F. Laamarti, and A. El Saddik, “DTCoach: your digital twin coach on the edge during COVID-19 and beyond,” IEEE Instrumentation and Measurement Magazine, vol. 24, no. 6, pp. 22–28, 2021.
A. Bakshi, D. Sheikh, Y. Ansari, C. Sharma, and H. Naik, “Pose estimate based yoga instructor,” International Journal of Recent Advances in Multidisciplinary Topics, vol. 2, no. 2, pp. 70–73, 2021.
S. L. Colyer, M. Evans, D. P. Cosker, and A. I. T. Salo, “A review of the evolution of vision-based motion analysis and the integration of advanced computer vision methods towards developing a markerless system,” Sports Medicine - Open, vol. 4, no. 1, article 24, 2018.
I. Sárándi, T. Linder, K. O. Arras, and B. Leibe, “MeTRAbs: metric-scale truncation-robust heatmaps for absolute 3D human pose estimation,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 3, no. 1, pp. 16–30, 2021.
A. Azhand, S. Rabe, S. Müller, I. Sattler, and A. Heimann-Steinert, “Algorithm based on one monocular video delivers highly valid and reliable gait parameters,” Scientific Reports, vol. 11, no. 1, pp. 14065–14074, 2021.