Abstract

Mental health problems among college students are becoming increasingly prominent, negative incidents caused by psychological problems are growing more frequent, and all sectors of society have paid close attention to this issue. Given these new situations and problems, reforming and innovating the content, methods, and paths of psychological education in colleges and universities in step with the times is an important task of ideological and political education. Because fine-grained category information provides rich semantic clues, fine-grained parallel computing techniques are widely used in tasks such as sensitive feature filtering, medical image classification, and dangerous goods detection. In this study, we adopt a fine-grained parallel computing programming approach and propose a multiobjective matrix regularization optimization algorithm that simultaneously performs joint square-root, low-rank, and sparse regularization for bilinear visual features. The algorithm stabilizes the higher-order semantic information in bilinear features and improves their generalization ability, and we apply it to the construction of a mental health education model for college students to promote the construction of mental health education bases, improve the mental health education network platform, and strengthen the construction of the mental health education data platform. The saliency-guided data augmentation method in this study improves on random data augmentation by reducing the randomness of the augmentation process, and it significantly improves the results: the best result belongs to SCutMix data augmentation, which improves accuracy by 1.9% compared with the baseline network.

1. Introduction

With the rapid development of society and the increase in individual survival pressure, students' thoughts and psychology have changed drastically and various psychological problems have emerged, so traditional ideological and political education struggles to resolve students' inner psychological confusion effectively. In recent years, with the change of the talent cultivation model in colleges and universities, ideological and political education has focused on expanding into other fields of work. Psychological education meets the needs of talent cultivation in higher education and provides a new vision for the theoretical research of ideological and political education. There is a content crossover between psychological education and ideological and political education, which conforms to the law of ideological and political work in colleges and universities and helps to further innovate its theoretical research [1]. In this study, to obtain ideal optimization results, a two-layer parallel algorithm is proposed under two architectural modes, single-computer multicore and networked multicomputer; research is carried out on how to improve computational efficiency; and a model system for college students' mental health education is constructed through a multiobjective matrix regularization optimization algorithm.

Fine-grained recognition mainly distinguishes subcategories under the same broad category. Collecting fine-grained feature data is not simple because fine-grained labeling requires specialized knowledge. Fine-grained recognition must consider not only global information but also local information, which is often even more important than the global information [2]. Classical convolutional neural networks, however, have a limited ability to extract local information. In traditional deep learning algorithms, to improve the classification accuracy of fine-grained features, additional annotations such as manual bounding boxes or key point annotations are introduced in addition to category labels; however, such annotations are time-consuming and labor intensive, leading to poor usability. In recent years, fine-grained feature recognition has made significant progress due to the emergence and development of attention mechanisms, whose core idea is to add attention modules to convolutional neural networks, enabling the networks to focus on discriminative detail information and extract features in key regions for learning, effectively filtering out useless information and greatly reducing research costs [3]. The goal of fine-grained feature recognition is to classify different subclasses under the same category, and accurate classification depends heavily on information about the discriminative regions of the target. In recent years, fine-grained feature recognition has received wide attention, and various deep learning recognition algorithms have emerged. More powerful feature extraction algorithms have a significant impact on fine-grained feature recognition algorithms, and the emergence of deep convolutional neural networks confirms this conclusion: relying on powerful feature extraction capabilities, convolutional neural networks achieve accuracy rates that far exceed those of fine-grained recognition methods based on manual features. The main principles of these algorithms are the same: first, locate the discriminative regions in the image; then extract the discriminative features and global features and fuse them; and finally, use the fused features for classification. According to the involvement of annotation information, fine-grained feature recognition is generally divided into methods based on strongly supervised learning and methods based on weakly supervised learning [4].
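
As a concrete illustration of the attention idea described above, the following is a minimal sketch of a sigmoid-gated spatial attention block, assuming PyTorch; the module and tensor shapes are illustrative and are not the specific networks used in this study.

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Minimal spatial attention block: a 1x1 convolution produces a
    single-channel saliency map, a sigmoid maps it to [0, 1], and the
    input feature map is re-weighted so discriminative regions dominate."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.score(x))   # (N, 1, H, W) attention map
        return x * attn                       # re-weighted features

feats = torch.randn(2, 256, 14, 14)           # toy CNN feature map
print(SpatialAttention(256)(feats).shape)     # torch.Size([2, 256, 14, 14])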

Fine-grained feature recognition is itself a very challenging task. One of its major difficulties is small interclass differences combined with large intraclass differences, which requires algorithms with extremely strong representation learning capabilities that can capture very fine-grained feature cues [5]. The literature [6] proposes the Part-based R-CNN algorithm specifically for fine-grained recognition, extending R-CNN to object detection and object part localization with geometric prior conditions. In such methods, two detectors are typically used simultaneously, one for detecting the object as a whole and the other for detecting object parts, with geometric constraints imposed between them; they are trained with previously computed deep convolutional features to obtain a pose-normalized feature representation for prediction. Despite the great progress made by Part-based R-CNN, it cannot be trained end-to-end. For this reason, an end-to-end fine-grained feature recognition model called Part-stacked R-CNN is proposed in the literature [7]. Its structure contains a local feature extraction part and a recognition classifier. Unlike the R-CNN structure, Part-stacked R-CNN uses a fully convolutional network to directly generate local feature maps and simultaneously considers the correlation between different local parts, which ultimately improves fine-grained feature recognition accuracy [8]. However, in many practical scenarios, it is difficult to gather enough experts for fine-grained labeling to locate which regions in the features are important for judgment. Moreover, even with access to a large amount of data and manual annotation, it is difficult for a model to cover all feature scenarios, so the machine may encounter scenarios it has not seen before [9]. These problems mean that existing strongly supervised fine-grained feature recognition algorithms cannot achieve satisfactory results. For SIFT algorithms extended to 3D or higher dimensions, parallel acceleration has been studied less; the literature [10] improved the N-SIFT algorithm and developed a parallel GPU implementation, parallelizing the first two phases of the algorithm with CUDA (constructing the Gaussian pyramid and difference-of-Gaussian pyramid, and localizing extreme points) and achieving a speedup of more than 200 times over the serial CPU version. However, the improved algorithm does not use the feature generation step of the original N-SIFT algorithm and, therefore, does not parallelize the full N-SIFT algorithm.

In combining fine-grained parallel computing programming with the construction of a model of college students' mental health education, we also examined related research in the field of mental health education. According to the literature [11], psychological education refers to infiltrating psychological principles and methods into the whole process of education in colleges and universities, under theoretical guidance concerning the comprehensive development of human beings and on the basis of respecting the laws of students' growth and psychological development, so as to realize the organic integration of "cultivating the mind" and "cultivating morality" and finally cultivate new people who can integrate into society. The literature [12] suggests that psychological education is a systematic process; human psychology is mainly composed of psychological processes such as cognition, emotion, and will, together with personality psychology; therefore, to achieve the purpose of psychological education, it is necessary to cultivate people to form correct cognition, positive emotion, strong will, and a sound personality. It can be seen that scholars' research has enriched the connotation of psychological education, which divides into two directions: one equates psychological education with mental health education; the other holds that psychological education refers to the use of the laws of psychology to teach and realize efficient education. In exploring the influence of mental health education on the reform and development of ideological and political education in colleges and universities over the past 30 years, it is believed that when understanding the value of psychological education to ideological and political education, attention must be paid to preventing the tendency to psychologize ideological and political education and to avoiding overexalting the status of mental health education [13].

Synthesizing the above literature, this study proposes a multiobjective matrix regularization optimization algorithm based on a fine-grained feature recognition method with weakly supervised learning, using parallel computing technology; the algorithm can simultaneously perform joint square-root, low-rank, and sparse regularization optimization for bilinear visual features. Meanwhile, from the perspective of the intersection of ideological and political education and psychology, the current situation of psychological education work in colleges and universities is studied, the effectiveness of education is summarized, the problems in the education process are explored, and a path to improving the quality of psychological education work is proposed, providing new perspectives and ideas for psychological education in colleges and universities.

2. Construction of a Mental Health Education Model Based on Fine-Grained Parallel Computing Programming

2.1. Analysis of Fine-Grained Parallel Computing Algorithms

Fine-grained feature recognition technology has a long history of development. Early fine-grained recognition algorithms based on handcrafted features mainly used manually constructed operators, such as POOFs and SIFT, to extract features. These methods improve fine-grained recognition accuracy by enhancing the feature extraction capability of target recognition algorithms [14]. It follows that more powerful feature extraction algorithms have a significant impact on fine-grained feature recognition, and the emergence of deep convolutional neural networks confirms this conclusion: relying on powerful feature extraction capabilities, convolutional neural networks achieve accuracy rates that far exceed those of fine-grained recognition methods based on handcrafted features. According to the strength of the annotation information in the fine-grained dataset, fine-grained recognition methods based on deep convolutional neural networks can be divided into methods based on strongly supervised information and methods based on weakly supervised information.

This study adopts a fine-grained feature recognition method based on strongly supervised labeling information. When training the convolutional neural network, such methods require not only the category labels in the training dataset but also bounding-box or part-point annotations. With the help of bounding boxes, methods based on strongly supervised labeling information can detect foreground targets and exclude background interference, and key discriminative part information or target pose alignment can be further extracted from the part annotation points.

As shown in Figure 1, the DeepLAC model integrates the part localization, alignment, and recognition subnetworks for fine-grained features into one network and proposes a valve linkage function to optimally connect the localization and recognition subnetworks, which effectively reduces the errors arising from recognition and alignment and ensures recognition accuracy [15]. The localization subnetwork consists of 5 convolutional layers and 3 fully connected layers, with the last fully connected layer regressing the coordinates of the upper-left and lower-right corners of the target box. The alignment subnetwork receives the target position from the localization subnetwork, performs template alignment, and feeds the aligned part features into the recognition subnetwork. During alignment, the subnetwork applies basic operations such as offset, translation, scaling, and rotation to generate pose-aligned part regions, which are very important for recognition. In addition to performing pose alignment, the alignment subnetwork uses the recognition and alignment results to further refine the localization during back-propagation of the DeepLAC model. The recognition subnetwork is the final module: it takes the pose-aligned parts as input and outputs the fine-grained class of the target. In Mask-CNN, the minimum enclosing regions of the annotated part points are used as the ground-truth masks, with everything else treated as background. A fully convolutional network then generates masks for part localization and selects useful deep feature descriptors. The two part masks, obtained without any annotation information at test time, are combined to form the foreground target [16]. For these part features, Mask-CNN constructs a three-branch convolutional neural network that extracts features from the original input, the head, and the torso, respectively, training on and aggregating foreground and part-level cues. Mask-CNN is an end-to-end fully convolutional neural network with selective deep convolutional feature description, and with the help of part labels and category labels, it achieves 85.5% recognition accuracy on the fine-grained bird dataset.

Multisample hybrid augmentation algorithms are classified into feature-level and feature-space hybrid augmentation algorithms according to the dimension in which mixing occurs, with Mixup and CutMix the most representative algorithms of the two categories, respectively. One of the key issues of ABP is how to use the CNN structure to compute bilinear features efficiently. Using the original bilinear pooling operation on two sets of local descriptors $f_A(l)$ and $f_B(l)$ over the set of locations $L$, the pooled feature can be expressed as

$$B = \sum_{l \in L} f_A(l)\, f_B(l)^{\top}.$$
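
A minimal sketch of this sum-pooled bilinear operation, assuming PyTorch; the signed square root and L2 normalization at the end reflect the square-root regularization mentioned in the abstract, and all shapes are illustrative.

import torch

def bilinear_pool(fa: torch.Tensor, fb: torch.Tensor) -> torch.Tensor:
    """Sum-pooled bilinear features: fa, fb are (C_a, N) and (C_b, N)
    matrices of local descriptors over N spatial locations; the result
    is the (C_a x C_b) matrix of all pairwise channel interactions."""
    return fa @ fb.t()                      # equivalent to sum_l fa(l) fb(l)^T

fa = torch.randn(512, 28 * 28)              # e.g. conv feature map, flattened
fb = torch.randn(512, 28 * 28)
b = bilinear_pool(fa, fb)                   # (512, 512) bilinear descriptor
# common post-processing: signed square root and L2 normalization
b = torch.sign(b) * torch.sqrt(b.abs() + 1e-12)
b = b / b.norm()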

Compact bilinear pooling employs a kernel function to approximate the matrix multiplication operation, but the matrix combination law cannot be applied to the kernel function operation. Thus, the kernel version of the bilinear comparison of two descriptor sets $\mathcal{X} = \{x_i\}$ and $\mathcal{Y} = \{y_j\}$ can be expressed as

$$\langle B(\mathcal{X}), B(\mathcal{Y}) \rangle = \sum_{i} \sum_{j} \langle x_i, y_j \rangle^{2}.$$

Since bilinear pooling and kernel-function pooling are very different, this study hopes that the proposed adaptive pooling can be compatible with both feature interaction operations, which can improve the description of bilinear pooled features when two different feature extractors are used. S-Net weighs each region of a feature by generating a spatial attention map, which is then passed through a ReLU layer to eliminate the impact of negative values.

Next, ABP uses the sigmoid function to map the values into [0, 1]. The intuition for using the sigmoid here is that it ensures only some regions have large responses, i.e., only the most discriminative local features are selected. The mixing algorithm aims to increase the difficulty of data identification and thus the model's ability to mine features. Therefore, in pairwise target selection, the least similar features within the same class are mixed and the most similar features across different classes are mixed, increasing the difficulty of mining and learning the target features and motivating the model to continuously optimize its parameters to find the most discriminative features, as sketched below.
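
For reference, the box-mixing step itself follows the standard CutMix recipe; the sketch below, assuming NumPy, shows that step only, while the similarity-based pairing policy described above decides which two samples are passed in.

import numpy as np

def cutmix(img_a, img_b, label_a, label_b, alpha=1.0, rng=np.random):
    """Standard CutMix: paste a random rectangle from img_b into img_a and
    mix the labels in proportion to the pasted area. The pairing policy
    (least-similar same-class / most-similar cross-class) is not shown."""
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)                 # mixing ratio
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.randint(h), rng.randint(w)      # box center
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)    # exact area-based ratio
    return mixed, lam * label_a + (1 - lam) * label_b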

Many complex problems have two or more parallelizable factors, and if two parallel computations can be developed at the level of software computational thinking from two parallel factors at the same point in time, a two-tier parallel computing architecture for the problem can be established. If the basic hardware requirement for a parallel computing model is at least two processor units, then more than four processing units are desirable to implement a balanced two-tier parallel computing model [17]. The implementation of two-tier parallel computing therefore requires adequate hardware support. For example, considering the two-tier architecture at the hardware and memory level, a single computer with 4 cores can suffice; Figure 2 shows the basic hardware architecture for achieving parallel computing.
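
A minimal single-machine sketch of this two-tier idea, assuming Python's standard library: an outer process pool plays the role of the coarse parallel layer, and each worker fans out to a thread pool for the fine-grained layer; the task contents are placeholders.

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def fine_grained_task(block):
    # inner, fine-grained layer: per-block work (placeholder computation)
    return sum(x * x for x in block)

def coarse_grained_task(blocks):
    # outer tasks can themselves fan out to threads for fine-grained work
    with ThreadPoolExecutor(max_workers=2) as pool:
        return sum(pool.map(fine_grained_task, blocks))

if __name__ == "__main__":
    # coarse layer: one process per core of a single 4-core machine; a
    # networked multicomputer layer (e.g., MPI across nodes) would
    # replicate this same pattern one level up.
    data = [[list(range(i, i + 100)) for i in range(4)] for _ in range(8)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(coarse_grained_task, data))
    print(sum(results))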

Parallel performance evaluation is an important step after parallel computing is completed; it reveals the gap between the actual running time of the parallel computation and the ideal time, and the parallel computing mode of the model and algorithm can be further optimized accordingly. The more commonly used parallel performance evaluation indexes are given below. If the granularity is too fine, the communication overhead caused by data transfer increases and the computation time consumed naturally increases, so parallel performance is reduced; if the granularity is too coarse, the tasks become uneven, with some individual tasks being large and taking too long, so the overall computation time is not reduced to the ideal state and parallel performance again suffers. Therefore, to achieve the best parallel performance, it is important to determine a suitable parallel granularity. Scalability evaluates how parallel performance varies as the number of processing units in the system changes; factors such as computation speed, communication speed, problem size, and platform storage all influence it. Parallel execution time, a key metric for evaluating parallel performance, can be measured on a multiprocessor platform that can regulate the number of participating processors, or compared with the serial computation time in serial mode.
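
For concreteness, the standard definitions of the two most common indexes, speedup and efficiency, for $p$ processing units are as follows, where $T_1$ is the serial execution time and $T_p$ the parallel execution time:

$$S_p = \frac{T_1}{T_p} \quad \text{(speedup)}, \qquad E_p = \frac{S_p}{p} \quad \text{(efficiency)}.$$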

2.2. Application of Fine-Grained Parallel Computing in the Construction of Mental Health Education Models

Psychoeducation has been accumulating practical experience while psychological knowledge has been widely disseminated and psychological activities have developed rapidly, and the psychoeducational model of applying psychological laws to regulate human behavior has been generally accepted. In this process, different definitions and descriptions of mental health have emerged owing to the different focuses, processes, and cultural and social backgrounds of psychoeducational research in national academies. For example, some scholars believe that psychological health is a positive adaptive state, while others believe it is a state in which people attain maximum efficiency and satisfaction and accept their environment and each other.

The mental health education model contains the following functions: prevention function, which means preventing students’ possible psychological problems and intervening in time; diagnosis and evaluation function, which screens students with psychological problems through psychological screening and other means and gives disposition advice; intervention function, which means carrying out psychological and pedagogical interventions based on the results of data collection and analysis; and finally, guidance function, which means providing all-round guidance [18]. Among them, ideological and political education mainly instills and educates college students in external passive forms such as classes, lectures, and symposiums, while psychological education focuses more on inner development, mainly through active consultation and communication to solve their psychological confusion.

The HDFS (Hadoop Distributed File System) is one of the two cores of the Hadoop framework. It is a distributed file system built on top of the local file systems of different operating systems, such as EXT4, F2FS, and NTFS, and its main role is to realize the distributed storage of massive data. HDFS originated from a paper published by Google in 2003; based on the idea of the GFS system, its core concept of dividing the whole into parts is to split large-scale data into blocks of the same size (the final block may be smaller) and then store them redundantly. When users need to operate on these data, the metadata nodes and data nodes can query, retrieve, and delete the corresponding data through the metadata information. The whole process takes place at the bottom of the system, and users can operate on the data as easily as on files in the local system, greatly reducing the learning and usage costs of the system.

The HDFS is often composed of a system metadata node and multiple data nodes. The system metadata node is the management node of the HDFS distributed file system, mainly responsible for the management and maintenance of the system namespace and external client access operations, and the metadata node determines how files are mapped to data blocks on the data node. The core architecture of HDFS is shown in Figure 3.

As can be seen in Figure 3, unlike traditional data access methods, in the HDFS distributed file system the client first communicates with the metadata node, obtaining an overview of the file distribution on the data nodes by querying the metadata information, and then completes file operations directly with the data nodes. The process is divided into the following parts [19].
(1) The client sends an I/O request, establishes RPC communication with the metadata node, and passes the file request to it.
(2) Based on the client request, the metadata node queries the local metadata information base to find the block-to-file-name mapping table for the requested data and returns the addresses of the corresponding data nodes.
(3) After receiving the information returned by the metadata node, the client establishes a new RPC communication with the corresponding data node and directly performs the data operation. During this period, the metadata node does not participate in the data operation but only records the information of the data node corresponding to each block; the data node is responsible for the whole data operation process.
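
A minimal sketch of this client-side flow, assuming the third-party `hdfs` (hdfscli/WebHDFS) Python package; the host, port, user, and file path are placeholders.

from hdfs import InsecureClient

# step (1): the client contacts the metadata node (NameNode) over HTTP/RPC
client = InsecureClient("http://namenode:9870", user="hadoop")

# steps (2)-(3): the NameNode resolves the block-to-DataNode mapping, and
# the client then streams the blocks directly from the DataNodes
print(client.list("/mental_health"))              # browse the namespace
with client.read("/mental_health/scores.csv", encoding="utf-8") as reader:
    data = reader.read()                          # data flows from DataNodes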

The K-means algorithm begins clustering the original mental health dataset after receiving the mental health data center points provided by the canopy algorithm. It first traverses the points in the dataset, calculates their distances to each centroid, and assigns each point to the cluster of the nearest centroid [20]. Subsequently, the algorithm decides whether to iterate again according to the preset number of iterations and the threshold: if the change falls below the set threshold or the iteration limit is reached, the clustering result is output; otherwise, the centroid of each cluster is updated and recalculated, and another iteration begins. This process is repeated until the clustering result meets the threshold; the SSE is given in equations (4) and (5).

The SSE is defined as

$$\mathrm{SSE} = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^{2},$$

where $\lVert x - \mu_i \rVert$ denotes the distance between a sample point and its cluster centroid $\mu_i$. The smaller the SSE, the tighter the points in each cluster and the better the clustering. When the change in SSE converges to 0, the sum of squared errors of each cluster tends to be stable, which means that the clusters converge, no further changes occur, and the clustering results are stable. The larger the signal-to-noise ratio, the higher the quality of the extracted edges. The signal-to-noise ratio is defined as

$$\mathrm{SNR} = \frac{\left| \int_{-W}^{W} G(-x)\, h(x)\, dx \right|}{\sigma \sqrt{\int_{-W}^{W} h^{2}(x)\, dx}},$$

where $G(x)$ denotes the edge function, $h(x)$ denotes the impulse response of the filter, and $\sigma$ denotes the mean square deviation of the Gaussian noise. The fuzzy C-means clustering method clusters the dataset by an iterative algorithm based on the membership function, given a known number of clusters. To remove false detections and obtain more accurate feature collocation regions, this study uses the fuzzy C-means clustering method to cluster the mental health feature values.
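
A compact sketch of this canopy-seeded K-means loop with an SSE-based stopping rule, assuming NumPy; the toy data and the three initial centers are placeholders for the canopy-provided center points.

import numpy as np

def kmeans(points, centers, max_iter=100, tol=1e-4):
    """Plain K-means over canopy-provided initial centers: assign each
    point to its nearest centroid, recompute centroids, and stop when
    the drop in SSE falls below `tol` or `max_iter` is reached."""
    prev_sse = np.inf
    for _ in range(max_iter):
        # distances of every point to every centroid, shape (n, k)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        sse = ((points - centers[labels]) ** 2).sum()
        if prev_sse - sse < tol:              # SSE change has converged
            break
        prev_sse = sse
        for j in range(len(centers)):         # update each cluster centroid
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers, sse

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))               # toy "mental health" features
labels, centers, sse = kmeans(pts, pts[:3].copy())
print(sse)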

After obtaining the data on college students' mental health, college students' mental health education can be guided through qualitative evaluation methods. The qualitative evaluation method identifies and judges the effect of psychological education by analyzing and synthesizing the whole and the nature of the assessment object, mainly through observation, analysis, induction, and description: by observing the behavioral performance of college students in daily study and life, their interpersonal interactions, and their mental outlook, the current mental health condition of college students is comprehensively analyzed and then summarized to reveal the causes of their problems, so that these problems can be solved in a targeted way. Most college students encounter, for example, adaptation problems such as anxiety, isolation, and interpersonal difficulties, and developmental problems such as love, career selection, and personal development. The changing psychological situation of college students requires the introduction of qualitative and developmental evaluation methods. In the process of psychological education, the first thing is to view the psychological changes of college students from a developmental perspective, establish a developmental education concept with prevention as the main focus and treatment as a supplement, and flexibly use observation, analysis, induction, and description for evaluation.

The mental health model obtained from fine-grained parallel computation can also be combined with the law of market economy development and with the growth characteristics and needs of students from freshman to senior year as a way to improve the talent training program. For example, for first-year students who have just entered university, the training of adaptability and independent learning ability should be strengthened, self-awareness and social consciousness should be enhanced, and mental health education should favor topics such as interpersonal communication and adaptation to university life. In the second and third years, the main focus is developing interpersonal skills and communication skills, which are essential for a "prospective adult." For graduating seniors, we guide those seeking employment to establish a correct concept of employment and encourage them to participate in practical activities such as professional internships as much as possible, to exercise their abilities, accumulate experience, and continuously build self-confidence. For students pursuing master's and doctoral degrees, we focus on cultivating stress resistance and psychological adjustment abilities, provide timely psychological guidance, and guide students to make proper plans for the future.

2.3. Experimental Design

A key point in weakly annotated fine-grained feature recognition is how to generate strongly distinguishable visual features for fine-grained targets using only category labels. In recent years, bilinear feature interactions have proved effective in enhancing the distinguishability of fine-grained visual features without requiring additional annotation knowledge. However, existing methods use a fixed feature interaction strategy for all samples, ignoring the heterogeneity of different features and of different regions within the same feature in the dataset. To verify whether fine-grained parallel algorithms can accurately achieve mental health state discovery, four different experimental datasets were used; 1,000 reviews were sampled in the same way from each dataset, and each item was manually labeled. To estimate the CFA interpolation coefficients, the test features need to be partitioned into regular, non-overlapping blocks, as sketched below. The block size should be chosen moderately: if it is too large, the CFA interpolation coefficients cannot be estimated accurately, which may affect the final detection results for feature stitching regions; conversely, if it is too small, the computational complexity of the proposed method increases. Therefore, in fine-grained parallel computing, we experimentally obtain the detection results of fine-grained stitching regions for feature blocks of different sizes, and Figure 4 shows the parallel detection strategy for multi-GPU collaborative computing.
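
The non-overlapping block partition mentioned above can be sketched as follows, assuming NumPy; the 64 × 64 map and the 16-pixel block size are illustrative.

import numpy as np

def to_blocks(feature_map: np.ndarray, b: int) -> np.ndarray:
    """Partition a 2-D feature map into non-overlapping b x b blocks,
    cropping any remainder so every block has the same size. Returns an
    array of shape (n_blocks, b, b)."""
    h, w = feature_map.shape
    fm = feature_map[: h - h % b, : w - w % b]      # drop ragged edges
    blocks = fm.reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return blocks.reshape(-1, b, b)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
print(to_blocks(img, 16).shape)                      # (16, 16, 16)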

To perform fine-grained feature stitching region detection, we need to find the best threshold th to classify the texture intensity features of the coarse-grained blocks, where c is the number of coarse-grained blocks. We use the Otsu method to estimate th. Otsu's method, proposed in 1979, is an adaptive threshold determination method and an effective binary classification method: a candidate element $t$ is chosen and used to divide the set of texture intensity values into two categories, denoted as

$$C_0 = \{x : x \le t\}, \qquad C_1 = \{x : x > t\},$$

and the optimal threshold is the $t$ that maximizes the between-class variance of $C_0$ and $C_1$.
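
A self-contained sketch of Otsu's threshold search over a histogram of texture intensity values, assuming NumPy; the two-mode toy data are illustrative.

import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Otsu's method: pick the threshold that maximizes the between-class
    variance w0*w1*(mu0 - mu1)^2 of the two resulting classes."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()             # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                               # P(class 0) per cut
    mu = np.cumsum(p * centers)                     # cumulative first moment
    mu_t = mu[-1]
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    # between-class variance for every candidate threshold
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0 - mu)[valid] ** 2 / (w0 * w1)[valid]
    return centers[sigma_b.argmax()]

tex = np.concatenate([np.random.normal(0.2, 0.05, 500),
                      np.random.normal(0.8, 0.05, 500)])
print(otsu_threshold(tex))                          # roughly 0.5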

The FNR increases and the FPR decreases as the parameter increases; at T = 15.0, the FNR and FPR are both moderate, so we define T = 15.0 as the optimal classification threshold. The coarse-grained splicing region detection results are obtained by constructing feature forensic features through the Laplacian algorithm. Subsequently, the coarse-grained results are refined by extracting texture intensity features to remove false detections, yielding the fine-grained splicing region detection results. On this basis, a superpixel segmentation algorithm is used to smooth the edges of the detected splice regions to obtain more accurate localization results. To merge the suspicious spliced connected regions, we need to choose an ideal merging threshold, which is taken as T0 = 0.008.

3. Results and Analysis

3.1. Calculation Accuracy Verification

Figure 5 illustrates the effect of feature-matching threshold selection on determining correct and incorrect matches. The sets of correct and incorrect matches can be classified according to the predefined deformation field of each experimental feature sequence: the ideal position of a matching point in the target data can be inferred from its coordinates in the reference data and the predefined deformation field. If the error between the corresponding point of the matching point in the target data and the ideal position is within 3 pixels, it is considered a correct match; otherwise, it is treated as a wrong match. When rejecting all matches with a distance ratio greater than 0.8, less than 10% of correct matches are discarded, while more than 96% of incorrect matches are effectively filtered. This indicates that the threshold obtained from conventional data is equally valid for the scattered points measured by DIC. Therefore, subsequent experiments in this study use this threshold.
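
The distance-ratio filtering described above corresponds to the classic ratio test; a sketch assuming OpenCV with SIFT available is given below, with placeholder image paths and the 0.8 ratio used in this section.

import cv2

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# brute-force matcher: for each descriptor keep its two nearest neighbors
matcher = cv2.BFMatcher()
pairs = matcher.knnMatch(des1, des2, k=2)

# ratio test: reject matches whose best/second-best distance ratio
# exceeds 0.8, filtering out most incorrect correspondences
good = [m for m, n in pairs if m.distance < 0.8 * n.distance]
print(f"{len(good)} matches kept of {len(pairs)}")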

Figure 6 compares the final calculation results of the SIFT PiDIC algorithm and the FFT-CC PiDIC algorithm. Since the displacement field is a function of the x-coordinate, the POIs with the same x-coordinate are grouped, and the average error and standard deviation of their calculation results are counted. The deformation field is sinusoidal; the algorithm combined with SIFT obtains accurate, high-precision results in regions with different deformation amplitudes, while the FFT-CC-based method obtains completely wrong results in regions with large deformations because it cannot handle large deformations. Owing to the use of linear shape functions, the average error is small in regions where the deformation field is approximately linear, while nonlinear regions have relatively large errors.

3.2. Parallel Accelerated Evaluation

To evaluate the speedup of the parallel implementation, experiments are performed on data sequences (a)-(d), and five runs are performed on each target image in the feature sequence to measure the computation time. The GPU implementation of SIFT PiDIC has significantly better computational efficiency than the multicore CPU implementation, achieving an overall speedup of 10.6 times. The GPU implementation processes each feature pair in less than 30 ms on average, which meets the demand for high-precision, high-resolution, real-time processing of regular video streams (30 fps). The speedup of the GPU implementation over the multicore CPU implementation varies across computational stages: the fine-grained parallelism adopted by the GPU in the feature extraction and IC-GN alignment stages, which are the most time-consuming in the CPU implementation, yields a speedup of 41.1 times. In the feature-matching phase, which takes the longest in the GPU implementation, the GPU achieves a speedup of 3.4 times despite using a brute-force search mechanism against the more complex search implementation used by the CPU. The parallel GPU implementation of the FFT-CC PiDIC algorithm is also used for comparison, since sequence (a) is a small deformation field at the subpixel level. Compared with the FFT-CC method, the SIFT-based initial value estimation imposes an additional computational burden in the feature extraction and matching stages, and the higher algorithmic complexity leads to higher time overhead. SIFT PiDIC has a more accurate initial estimate and thus a lower average number of iterations (3.13) in the IC-GN alignment phase, while FFT-CC PiDIC requires an average of 3.59 iterations. However, this is not fully reflected in the computation time, which is slightly lower for FFT-CC PiDIC in the IC-GN step than for SIFT PiDIC. This is because, in the first IC-GN iteration of FFT-CC PiDIC, the features of the target subregion are located at whole-pixel positions, avoiding the interpolation construction of one subregion.

Figure 7 illustrates the effect of the number of threads on the computation time and speed of the CPU implementation. As the number of threads increases from 1 to 10 (the number of physical CPU cores), the computation time decreases significantly, and the slope of the curve flattens in the 10-20 interval. When the number of threads exceeds 20 (the number of logical CPU cores), the computation time fluctuates slightly with no significant drop or increase. The computational speed (average number of POIs processed per second) increases from 4,350 for a single thread to 14,679 for 10 threads, reaching a maximum of 16,786 at 20 threads. Beyond a certain thread count the computational demand is already satisfied, so some of the cores remain idle.

Figure 8 shows the distribution of the time share of each computational phase of SIFT PiDIC when processing feature sequences (a) (512 × 512) and (c) (2000 × 1000). Feature matching occupies most of the computational time in the GPU implementation, accounting for 51.2% in sequence (a) and reaching 74.8% in the nonuniformly deformed sequence (c). This explains why the computation time shows only a weak correlation with the number of POIs: feature matching is unrelated to the number of POIs and depends only on the number of features. Considering that brute-force matching is used here, there is room for further optimization. The CPU implementation, despite a slightly different distribution, also spends about 60% of its computation time in phases independent of the number of POIs.

To verify the superiority of the data augmentation method in this study, weakly supervised attention learning is chosen as the benchmark in this section, consistent with WS-DAN; the loss function is the cross-entropy loss, and experiments are conducted on CUB-200-2011 to compare with other data augmentation methods, with the results shown in Figure 9. The data show that the effect of random data augmentation methods is not obvious: none exceeds the benchmark by more than 0.5%. In contrast, the saliency-guided data augmentation method in this study, although built on random data augmentation, reduces the randomness of the augmentation process and significantly improves the effect. The best result belongs to SCutMix data augmentation, which improves accuracy by 1.9% compared with the baseline network. Moreover, the data augmentation methods in this study outperform the original WS-DAN because they compensate for the shortcomings of attention-guided data augmentation. To verify the effectiveness of the regularization term, the experimental results of unregularized part-aware dropping are also shown: compared with it, regularized part-aware dropping improves accuracy by 0.5%.

4. Conclusions

In this study, we propose a mental health education model for college students based on fine-grained parallel computation, with a feature extraction network built on a cross-attention mechanism that performs redundancy elimination in the spatial dimension and power-exponential enhancement at the sample level for emotional features, respectively. These two feature enhancement mechanisms are arranged in a cross-attentive manner to improve feature complementarity, and the features are finally mapped into the second-order covariance space for fusion by a bilinear pooling operation. However, the bilinear pooling operation amplifies feature noise and, owing to its excessive dimensionality, weakens generalization ability. For this reason, this study further proposes an efficient multiobjective matrix regularization algorithm that can perform joint regularization optimization of features in the second-order matrix space along multiple dimensions, resulting in more compact and cleaner features. In follow-up research, the monitoring model of college students' mental health needs to be further refined and integrated with actual conditions to propose more practical solutions. In this study, we improve the quality of pseudo-labeling at two levels: data distillation augments unlabeled features from multiple perspectives, and model distillation trains multiple models to machine-label the same sentiment features and then fuses their labeling results. After obtaining the model of college students' emotional characteristics, the theme of psychological education within college ideological and political education is taken as the basis of the research, with the relevant theoretical content and realistic situations as its focus, to suggest ways to improve the effectiveness of psychological education in college ideological and political education.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The study was supported by the National Social Science Fund of China, "Spatiotemporal Evolution Mechanism and Common Track Spillover Effect of Dual Cross-Border FDI Driving Chinese Innovative Development" (grant no. 19BJL076).