Abstract

This study develops a novel enhanced genetic neural network algorithm with link switches (EGA-NNLS) to model the evaluation system for professional university courses. Evaluating the learning effect of a professional course comprehensively and objectively requires many indices, a goal that traditional manual evaluation methods cannot achieve. The presented data-driven modeling method, EGA-NNLS, combines a neural network with link switches (NN-LS) with an enhanced genetic algorithm (EGA) and the Levenberg–Marquardt (LM) algorithm. It employs the EGA to optimize the structure and connection weights of the NN-LS, learns the relationships between the system's inputs and outputs from historical data, and exploits the network's gradient information via the LM algorithm. Compared with the traditional backpropagation neural network (BPNN), EGA-NNLS achieves a faster convergence speed and higher evaluation precision. To verify its efficiency, EGA-NNLS is applied to a collection of experimental data for modeling the professional course evaluation system.

1. Introduction

The quality of education in colleges and universities is an essential issue. The learning effect of courses has become an important criterion for evaluating students' mastery of knowledge, and it also reflects teachers' teaching achievements and the quality of school management. However, the interference of objective and subjective factors in the real world makes it challenging to develop a mathematical model for evaluating the course learning effect. In recent years, scholars in academia and education have presented ideas for constructing evaluation systems for courses in colleges and universities. Meanwhile, several methods have been proposed for evaluating the learning effect of courses with specific attributes, including the analytic hierarchy process, cluster analysis [1–3], the fuzzy comprehensive evaluation method [4–8], and multiple regression analysis [9–12]. However, since different colleges and universities face different situations, a widely recognized and ideal learning effect evaluation system has not yet been constructed. Since it is challenging to obtain the course evaluation effect through a rigorous mathematical model, it is crucial to establish an objective, effective, and easy-to-operate course learning effect evaluation model.

Recently, artificial neural networks (ANNs) have developed rapidly [13–15]. Since this method originates from the simulation of the brain's nervous system, it has strong adaptability and self-learning ability in complex environments. Another important feature of neural networks is that they can approximate any nonlinear continuous function with arbitrary precision and thus simulate actual systems realistically. The structure of a neural network can be regarded as a mapping of an actual system. Due to these characteristics, neural networks have been widely applied in various fields, including automatic control [16, 17], artificial intelligence [18], and fault diagnosis [19]. Different neural networks can be formed according to the neurons' topological structure. At present, typical network models include the backpropagation (BP) network, the perceptron, the radial basis function (RBF) network, the Hopfield network, the Boltzmann machine, and self-organizing networks. These network models can achieve various goals such as pattern recognition, data clustering, function approximation, and prediction and optimization.

Because of the prevalence of the BP neural network, the backpropagation neural network (BPNN) learning approach has been utilized in most of the previous ANN literature. Nevertheless, the network structure is fixed in almost all of the previous studies. Such a fixed structure cannot provide optimal network performance: a large network may incur unnecessary implementation costs, while a small network can hardly attain satisfactory accuracy. For this reason, various techniques have been proposed for the simultaneous optimization of the network's framework and connection weights, such as a quantum-based algorithm [20], an improved genetic algorithm [21], a hybrid Taguchi-genetic algorithm [22], an evolutionary program called the generalized acquisition of recurrent links (GNARL) [23], and the simulated annealing and Tabu search algorithms [24].

Besides, the BP algorithm may fall into a local minimum, especially for large and irregular data sets. Genetic backpropagation neural networks (GA-BPNN) can be applied to solve this problem. GA-BPNN, which employs the parallel searching capability of genetic algorithms to improve the weight-learning ability of BP neural networks [25], has been widely utilized in the relevant literature [26–28].

This paper constructs the course learning effect evaluation model via an enhanced genetic BP neural network with link switches (EGA-NNLS), which depends only on historical I/O data during the evaluation process and significantly reduces the cost and complexity compared with the first principles-based modeling methods.

EGA-NNLS, which combines a neural network with link switches (NN-LS) with an enhanced genetic algorithm (EGA) and the Levenberg–Marquardt (LM) algorithm, exhibits the following properties:
(1) The network's framework and connection weights can be adjusted simultaneously.
(2) Compared with the standard GA (SGA), the proposed EGA, tailored to the characteristics of the course evaluating process, can achieve a faster convergence rate.
(3) It fully employs the network's gradient information.

There are three stages in applying the EGA-NNLS. The first stage analyzes the influencing factors and the main contents of the evaluation system of the course learning effect in colleges and universities; the evaluation results are then quantified and divided into several grades. In the second stage, an initial NN-LS is built and is learned and updated by the EGA to simultaneously optimize the input-output relationship and the network framework of the NN-LS. The following three enhancement approaches are employed to implement the EGA: the triple selection operation (TSO), which protects high-fitness individuals from being randomly varied by the mutation operator and provides performance superior to the traditional "double selection" [29]; the self-tuning crossover operation (STCO), which is extended from the typical arithmetic crossover operation but is more appropriate for the course evaluating problem; and the multispecies approach, which is utilized to partially solve the premature convergence problem. The third stage adopts the Levenberg–Marquardt (LM) algorithm to tune the partially connected network obtained in the previous stage, making use of the network's gradient information.

The EGA-NNLS method avoids the impact of experts' subjective factors on the evaluation results, which affects the traditional evaluation methods, and provides a general model of the course learning effect evaluation system. Therefore, this method has significance in both theory and practice. The actual data of a university course are selected to construct the EGA-NNLS, and its efficiency is demonstrated through simulations.

The organization of the paper is as follows. Section 2 briefly describes the problems of the evaluation system of the course learning effect. In Section 3, according to the characteristics of the evaluation system, the EGA-NNLS is constructed to investigate the course learning effect. Section 4 presents and analyzes the simulations. In Section 5, the conclusions of this paper are drawn.

2. Description of the Evaluation System of the Course Learning Effect

2.1. Connotation and Significance of the System

Theoretically, the evaluation of the course learning effect in colleges and universities shall make a comprehensive, objective, and fair judgment on the courses, the teachers, and the students by using the theories, methods, and techniques of educational evaluation and teaching, according to the policies, regulations, talent training objectives, and school requirements. It provides valuable information for educational decision-making to maximize the development of courses with the ideal effect. The effect evaluation system performs evaluation activities involving many contents and aspects. It is crucial for implementing national education guidelines and policies and for the micro-level teaching management of the school, and it plays the following roles:
(1) Stimulating teachers' enthusiasm and improving their quality. The evaluation of the course learning effect is an essential means to stimulate teachers' teaching enthusiasm. It feeds back scientific and objective evaluation results to teachers so that they can play to their strengths more actively and provide more knowledge, inspiration for thinking, and innovation guidance to students.
(2) Improving the teaching quality. Implementing a teaching evaluation system based on modern theories and methods for course learning effect evaluation can provide scientific standards for teachers' teaching quality as well as reference standards for students' acceptance of knowledge and the rationality of the courses. Teachers play an essential role in the teaching process, dialectically unified with students' dominant role in learning. The learning effect evaluation can prompt teachers to clarify their teaching objectives and tasks and to reflect on and enhance the teaching impact and their personal teaching quality.
(3) Strengthening the scientific construction and management of colleges and universities. By evaluating the learning effect of each course, the structure, quality, and working conditions of the whole teaching staff can be understood. It also becomes possible to identify problems and adjust the teaching team accordingly. Based on a scientific index system and evaluation standard for the course learning effect, together with the guiding ideology, principles, methods, and procedures of modern educational evaluation, objective and fair conclusions can be drawn to provide a reliable basis and objective standard for school leaders to improve the structure of the teaching staff and implement objective management.

2.2. Description of the Problems of the Learning Effect Evaluation System

Some of the courses in colleges and universities are highly theoretical or practical. Since there are also courses about tool usage, ideology, and methodology, the evaluation indicators should be selected from the students' learning process. From the perspective of process management, multifactor interactions and multiple links are integrated into the whole teaching process. Thus, it is not easy to classify different disciplines and compare the learning effects of various courses, learning links, and learning objects.

Accordingly, the primary factors that can directly reflect the learning effect and have common features should be employed to design the evaluation system, enhancing the system's practical operability. The following elements are employed in most existing course learning effect evaluation systems:
(1) Learning attitude: whether the learning is taken seriously, whether the preview is done in time, and whether the homework is finished carefully.
(2) Learning content: whether the difficulty of the content is appropriate, whether the content builds closely on basic knowledge, and whether much cross-disciplinary knowledge is involved.
(3) Learning ability: whether the students have a solid foundation of knowledge, including the ability to understand, to look for information, to practice, to connect theory with practice, and to understand all individual parts.
(4) Learning methods: whether students can flexibly choose methods according to their own strengths, whether they pay attention to the cultivation of creativity, and whether they often consult teachers and interact with other students.
(5) Learning purpose: whether the purpose is to master skills for themselves or merely to satisfy the requirements of teachers, parents, or examinations.

Based on the above five aspects, nine evaluation indices are obtained, as shown in Figure 1. These nine evaluation indices serve as the measurement indexes in the proposed learning effect evaluation system.

Based on the analysis in Section 2, the final learning effect of a course can be evaluated according to the comprehensive output of the nine indices determined through the learning process, effect, and ability, which can be described as follows:
$x = \left[x_{1}, x_{2}, \ldots, x_{9}\right]^{T},$  (1)
where $x_{i}$ ($i = 1, \ldots, 9$) denotes the $i$-th evaluation index.

Meanwhile, the final evaluation levels can be divided into five levels of A–F, from good to bad. For the convenience of computer simulation, the five output levels are denoted as 3-bit binary codes. Table 1 shows the correspondence between the levels and binary codes.
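To make the coding scheme concrete, the following Python sketch shows one way such a table could be represented and queried. The specific bit patterns are illustrative assumptions (Table 1 is not reproduced here); only the idea of a fixed grade-to-code mapping is taken from the text.

```python
# Minimal sketch of the grade <-> 3-bit code mapping described in Table 1.
# The concrete bit patterns below are illustrative assumptions; the actual
# assignments are those given in Table 1 of the paper.
GRADE_TO_CODE = {
    "A": (0, 0, 1),
    "B": (0, 1, 0),
    "C": (0, 1, 1),
    "D": (1, 0, 0),
    "F": (1, 0, 1),
}
CODE_TO_GRADE = {code: grade for grade, code in GRADE_TO_CODE.items()}

def encode(grade: str) -> tuple:
    """Return the 3-bit binary code used as the network's target output."""
    return GRADE_TO_CODE[grade]

def decode(code) -> str:
    """Map a 3-bit code back to the evaluation level."""
    return CODE_TO_GRADE[tuple(code)]

if __name__ == "__main__":
    print(encode("B"))          # (0, 1, 0)
    print(decode((1, 0, 0)))    # 'D'
```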

In this paper, the 3-bit binary number of the evaluation level is taken as the network output, written as
$y = \left[y_{1}, y_{2}, y_{3}\right]^{T}, \qquad y_{k} \in \{0, 1\}, \quad k = 1, 2, 3.$  (2)

Thus, the professional course evaluating process is represented as
$y = f(x),$
where $x$ and $y$ are denoted as in equations (1) and (2), respectively.

In the traditional method, experts evaluate the course according to the above indices. However, in some specific cases, the experts' subjective factors can affect the results of this evaluation method, leading to inaccurate judgments. Therefore, the proposed data-based course evaluation system employs massive historical data to overcome the defects of conventional evaluation approaches and obtain more objective evaluation results.

To attain this goal, the EGA-NNLS method is presented. With this method, it is not necessary to directly analyze the input-output mechanism of the course evaluating model; instead, a data-driven approach is utilized for the modeling procedure.

3. Construction of the EGA-NNLS

EGA-NNLS comprises three essential components. As described in Section 3.1, the first component is an NN-LS, which can produce a partially connected network. The second component is an EGA, which is adopted to find the NN-LS's optimal weights; the EGA is a generalized form of the SGA and is described in Section 3.2. As the final component, the LM algorithm further updates the NN-LS, as illustrated in Section 3.3.

3.1. Building NN with Link Switches (NN-LS)

Conventional NNs generally have a fixed framework. While a large network with many nodes and links may cause unnecessarily high implementation costs, a small network cannot provide satisfactory accuracy. Thus, a multi-input multi-output (MIMO) three-layer neural network with an adjustable number of links is employed.

Figure 2 shows a standard network with link switches, in which the switch function can be described as
$s = \begin{cases} 1, & \text{if the corresponding link exists}, \\ 0, & \text{if the corresponding link is absent}. \end{cases}$

The connection weights of the NN-LS are described as follows:
$\tilde{w}_{ij} = s^{w}_{ij}\, w_{ij}, \qquad \tilde{v}_{jk} = s^{v}_{jk}\, v_{jk},$
where $i = 1, \ldots, n_{in}$, $j = 1, \ldots, n_{h}$, and $k = 1, \ldots, n_{out}$, and $n_{in}$, $n_{h}$, and $n_{out}$ stand for the number of input, hidden, and output nodes, respectively. $w_{ij}$ describes the connection weight between input node $i$ and hidden node $j$, and $v_{jk}$ stands for the connection weight between hidden node $j$ and output node $k$. $s^{w}_{ij}$ and $s^{v}_{jk}$ describe the link switches, indicating the absence or existence of the corresponding link. For an entirely connected network, in which all link switches are present ($s^{w}_{ij} = s^{v}_{jk} = 1$), $\tilde{w}_{ij}$ and $\tilde{v}_{jk}$ become the standard connection weights $w_{ij}$ and $v_{jk}$, respectively, similar to the corresponding ones in a standard NN without link switches. For the university professional course evaluating process, $n_{in}$ and $n_{out}$ are chosen as 9 and 3, respectively.

The network's input and output vectors are denoted as in equations (1) and (2), respectively. For the proposed NN-LS, the hidden layer output is described as
$h = \sigma\left(W_{s}\, x + b_{h}\right).$

Based on Figure 2, the input-output relationship of the proposed NN-LS is given by
$y = V_{s}\, h + b_{o} = V_{s}\,\sigma\left(W_{s}\, x + b_{h}\right) + b_{o},$
where $W_{s} = [\tilde{w}_{ij}]$ stands for the weight matrix of the links between the input and hidden layers, $V_{s} = [\tilde{v}_{jk}]$ stands for the weight matrix of the links between the hidden and output layers, $b_{h}$ and $b_{o}$ describe the biases of the hidden and output layers, respectively, and $\sigma(\cdot)$ stands for the tangent sigmoid function defined as follows:
$\sigma(z) = \tanh(z) = \dfrac{e^{z} - e^{-z}}{e^{z} + e^{-z}}.$  (6)
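As a concrete illustration of the switched forward pass described above, the following Python/NumPy sketch masks the weight matrices with 0/1 switch matrices before computing the tanh hidden layer and a linear output layer. The matrix names (W, Sw, V, Sv), the hidden-layer size, and the linear output layer are assumptions consistent with the description, not the paper's reference implementation:

```python
import numpy as np

def nnls_forward(x, W, Sw, b_h, V, Sv, b_o):
    """Forward pass of a three-layer NN with link switches (NN-LS).

    x  : (n_in,)        input vector (the 9 evaluation indices)
    W  : (n_h, n_in)    input-to-hidden weights;  Sw: same shape, entries in {0, 1}
    V  : (n_out, n_h)   hidden-to-output weights; Sv: same shape, entries in {0, 1}
    b_h, b_o            hidden and output biases
    """
    h = np.tanh((Sw * W) @ x + b_h)      # switched weights: absent links contribute nothing
    y = (Sv * V) @ h + b_o               # linear output layer (assumed)
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_h, n_out = 9, 4, 3           # 9 indices in, 3-bit code out; n_h = 4 is illustrative
    W = rng.normal(size=(n_h, n_in));  Sw = rng.integers(0, 2, size=(n_h, n_in))
    V = rng.normal(size=(n_out, n_h)); Sv = rng.integers(0, 2, size=(n_out, n_h))
    b_h = rng.normal(size=n_h);        b_o = rng.normal(size=n_out)
    x = rng.random(n_in)                 # normalized evaluation indices
    print(nnls_forward(x, W, Sw, b_h, V, Sv, b_o))
```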

Remark 1. The transfer functions commonly used in neural networks include Purelin, Log-Sigmoid, and Tan-Sigmoid. Among them, Purelin is only applicable to samples with a linear mapping relationship. Compared with Purelin, Log-Sigmoid has a nonlinear mapping ability, but its gradient tends to vanish during backpropagation, and its output mean cannot be zero. Tan-Sigmoid avoids the problems of the above two functions and offers a fast convergence speed and high precision. Therefore, Tan-Sigmoid is selected as the transfer function in equation (6).

3.2. Adjusting the NN’s Framework by EGA

Generally, the SGA can be applied to learn the network's input-output relationship in most cases. In the current study, however, the efficiency of the SGA is degraded by the large number of optimization variables induced by the NN-LS's link switches and connection weights. Besides, since the characteristics of the course evaluation process can reduce the diversity of individuals in the population, premature convergence can occur during the evolution process, further reducing the SGA's performance.

For these reasons, the EGA, an extension of the SGA, is proposed, which aims to effectively optimize the network framework and connection weights and to ameliorate the premature convergence problem.

To evaluate the algorithm performance, a fitness function (9) to be maximized by the EGA was selected; it is a decreasing function of the mean square error (MSE)
$\text{MSE} = \dfrac{1}{N}\sum_{t=1}^{N}\left\| \hat{y}(t) - y(t) \right\|^{2},$
where $N$ is the training set size, $\hat{y}(t)$ is the network output, and $y(t)$ denotes the measured output at sampling time $t$, defined as $y(t) = \left[y_{1}(t), y_{2}(t), y_{3}(t)\right]^{T}$.

For applying the EGA, all the connection weights and link switches of the constructed NN-LS presented in Figure 2 are transformed into the following chromosome:
$c = \left[ s^{w}_{11}, \ldots, s^{w}_{n_{in} n_{h}},\ s^{v}_{11}, \ldots, s^{v}_{n_{h} n_{out}},\ w_{11}, \ldots, w_{n_{in} n_{h}},\ v_{11}, \ldots, v_{n_{h} n_{out}},\ b_{h,1}, \ldots, b_{h,n_{h}},\ b_{o,1}, \ldots, b_{o,n_{out}} \right].$  (12)

Then, the EGA aims to find an optimal chromosome (12) that maximizes the fitness (9). Sections 3.2.1–3.2.3 describe the three enhancement approaches adopted in the presented EGA.

Remark 2. The Mean Square Error (MSE) is a widely used evaluation index that avoids the cancellation of positive and negative errors. In addition, because the errors are squared, large errors carry more weight in the index, which improves its sensitivity. Of course, other indicators could also be used, for example, the Root Mean Square Error (RMSE), the Mean Absolute Percentage Error (MAPE), and the Symmetric Mean Absolute Percentage Error (SMAPE).
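As a sketch of how an MSE-based fitness over the training set could be computed, the snippet below uses the transform 1/(1 + MSE); this particular transform is an assumption standing in for the exact definition of (9), which is not reproduced above:

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean square error over the training set (rows = samples)."""
    y_pred, y_true = np.asarray(y_pred), np.asarray(y_true)
    return np.mean(np.sum((y_pred - y_true) ** 2, axis=1))

def fitness(y_pred, y_true):
    """Fitness to be maximized by the EGA.

    The transform 1 / (1 + MSE) is an assumed example; any strictly
    decreasing function of the MSE would play the same role.
    """
    return 1.0 / (1.0 + mse(y_pred, y_true))

if __name__ == "__main__":
    y_true = np.array([[0, 1, 0], [1, 0, 0]], dtype=float)
    y_pred = np.array([[0.1, 0.9, -0.1], [0.8, 0.2, 0.1]])
    print(round(fitness(y_pred, y_true), 4))
```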

3.2.1. Triple Selection Operation

Based on (12), the number of genes of a single chromosome, namely, the dimension of one possible solution in the EGA, is calculated as
$L = 2\left(n_{in} n_{h} + n_{h} n_{out}\right) + n_{h} + n_{out}.$

The number of genes $L$ in a single chromosome, composed of all switches, connection weights, and biases, is relatively high even when there are only a few hidden nodes in the net; for instance, the length of a chromosome can reach 100 with only a handful of hidden nodes. An individual's fitness becomes increasingly sensitive to single-gene changes as $L$ grows. In such circumstances, high-fitness individuals are mostly chosen by the selection, but they can also be randomly varied through the mutation operation. In this regard, the first approach, named the triple selection operation (TSO), is employed to partially solve the problems induced by the chromosome's high dimension.

The schematic diagram of the triple selection operation is shown in Figure 3, in which $N_{p}$ stands for the size of the initial population. The first selection of each generation starts from Group 1 with the $N_{p}$ initial individuals. Group 2 is then formed by employing the standard "roulette selection" approach with the reproduction probability
$p_{i} = \dfrac{f_{i}}{\sum_{l=1}^{N_{p}} f_{l}},$
where $f_{i}$ stands for the fitness of the $i$-th chromosome. Group 3 incorporates Group 1 and Group 2, and the second selection, crossover, and mutation operations are then applied to Group 3 to generate Group 4. Rather than being taken as the final group, Group 4 is combined with the initial group again, and the third selection is then applied to the resultant group (Group 5). At last, Group 6 is obtained as the group employed in the next generation.
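A minimal sketch of the roulette selection used to form Group 2, assuming nonnegative fitness values, is given below; the full TSO pipeline of Figure 3 is only outlined in the comments:

```python
import numpy as np

def roulette_select(population, fitnesses, n_select, rng):
    """Sample n_select individuals with probability p_i = f_i / sum(f)."""
    f = np.asarray(fitnesses, dtype=float)
    p = f / f.sum()                       # assumes nonnegative fitness
    idx = rng.choice(len(population), size=n_select, replace=True, p=p)
    return [population[i] for i in idx]

# Triple selection, per the description above (sketch):
#   Group 1 = current population (N_p individuals)
#   Group 2 = roulette_select(Group 1)
#   Group 3 = Group 1 combined with Group 2
#   Group 4 = second selection + crossover + mutation applied to Group 3
#   Group 5 = Group 4 combined with the initial group
#   Group 6 = third selection applied to Group 5 -> next generation

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pop = [f"ind{i}" for i in range(5)]
    fit = [0.1, 0.4, 0.2, 0.9, 0.3]
    print(roulette_select(pop, fit, 5, rng))
```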

Remark 3. The "double selection" approach proposed in [29] may not be able to protect high-fitness individuals sufficiently because of the high dimension of the studied chromosome. In "double selection," Group 1 directly generates Group 4, whereas TSO employs an additional selection operation on the way from Group 1 to Group 4. Compared with "double selection," this additional selection increases the probability that the final group inherits superior genes. Besides, various experiments show that when the number of selections exceeds three, the EGA's performance is hardly enhanced further, while the computational load increases.

3.2.2. Self-Tuning Crossover Operation and Mutation

The selection operation is employed in every generation to steer the search toward the best individuals. Nevertheless, it does not generate any new individuals. To enhance the population's diversity, the crossover operation is therefore utilized in the genetic algorithm to interchange genes between the parents obtained in the selection process. The crossover operation acts on the selected parent pairs and adds new individuals to the population according to the crossover rate $p_{c}$.

The second approach is called the self-tuning crossover operation (STCO), which is based on the standard arithmetic crossover operation and is more appropriate for the problem studied in this paper. STCO creates three candidate offspring, which compete with each other, and takes the two with the highest fitness as the final offspring. Accordingly, the parents' information can be fully employed.

The following three steps are required to perform the STCO. Consider that the two selected parents are denoted by $c^{a}$ and $c^{b}$, where the superscripts $a$ and $b$ stand for the indices of the parents used to produce offspring.

Step C1. Calculate the two artificial chromosomes
$c_{\max} = \left[\max_{p} c^{p}_{1}, \max_{p} c^{p}_{2}, \ldots, \max_{p} c^{p}_{L}\right], \qquad c_{\min} = \left[\min_{p} c^{p}_{1}, \min_{p} c^{p}_{2}, \ldots, \min_{p} c^{p}_{L}\right],$
where the superscript $p$ is the index of the individual in the population; $c_{\max}$ and $c_{\min}$ involve the genes with the maximum and minimum values within the total population of this generation, respectively.
For the chosen parents, obtain the corresponding elementwise maximum and minimum chromosomes, $\max(c^{a}, c^{b})$ and $\min(c^{a}, c^{b})$.

Step C2. Create the following three children, denoted $o^{1}$, $o^{2}$, and $o^{3}$, from the parents and the chromosomes obtained in Step C1, where the crossover parameters are randomly chosen for each generation's evolution.

Step C3. Compute the fitness values of $o^{1}$, $o^{2}$, and $o^{3}$, and select the two of them with the highest fitness as the offspring $o^{a}$ and $o^{b}$, respectively.
If both offspring have higher fitness than the corresponding parents, replace $c^{a}$ and $c^{b}$ with $o^{a}$ and $o^{b}$, respectively.
If both offspring have lower fitness than the corresponding parents, keep $c^{a}$ and $c^{b}$ unchanged.
Else, replace the parent with the lower fitness by the offspring with the higher fitness.
Figure 4 presents an example to illustrate the concept of STCO, where $o^{1}$ is obtained from $c^{a}$ (blue line) and $c^{b}$ (red line) using an approach similar to the standard arithmetic crossover operation. The main difference is that STCO generalizes the scope of the crossover operation from the region between the two parents to a broader domain spanned by $c_{\min}$, the parents, and $c_{\max}$. In the regions near $c_{\min}$ and $c_{\max}$, two more children, $o^{2}$ and $o^{3}$, are generated, as represented in (17) and (19), respectively. Thus, STCO makes fuller use of the parents' information than the standard arithmetic crossover operation. Finally, the two STCO offspring with the highest fitness are selected, instead of the single offspring produced by the regular crossover operation.
After the crossover operation, the mutation operation is applied to the population, by which the value of a gene is randomly varied to incorporate new genetic material into the population. The proposed EGA adopts the nonuniform mutation operation, which is widely utilized in genetic algorithms.
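The following sketch mimics the spirit of STCO and the nonuniform mutation: it builds three candidate children from the two parents and the population-wide gene bounds, keeps the two fittest, and then perturbs genes with a shrinking step. The specific blending formulas are assumed stand-ins for the paper's crossover equations (cf. (17) and (19)), which are not reproduced above:

```python
import numpy as np

def stco(parent_a, parent_b, c_min, c_max, fitness_fn, rng):
    """Self-tuning crossover (sketch). The three blending rules below are
    assumed stand-ins for the paper's equations; only the overall scheme
    (three candidates, keep the two fittest) follows the text."""
    w = rng.random()                                              # random crossover parameter
    child1 = 0.5 * (parent_a + parent_b)                          # candidate between the parents
    child2 = (1 - w) * c_max + w * np.maximum(parent_a, parent_b)  # toward the upper bound
    child3 = (1 - w) * c_min + w * np.minimum(parent_a, parent_b)  # toward the lower bound
    candidates = [child1, child2, child3]
    candidates.sort(key=fitness_fn, reverse=True)
    return candidates[0], candidates[1]                           # two fittest candidates

def nonuniform_mutation(chrom, lo, hi, gen, max_gen, p_m, rng, b=3.0):
    """Standard nonuniform mutation: perturbations shrink as gen -> max_gen."""
    out = chrom.copy()
    for i in range(len(out)):
        if rng.random() < p_m:
            shrink = (1.0 - gen / max_gen) ** b
            if rng.random() < 0.5:
                out[i] += (hi[i] - out[i]) * rng.random() * shrink
            else:
                out[i] -= (out[i] - lo[i]) * rng.random() * shrink
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a, b_ = rng.uniform(-1, 1, 4), rng.uniform(-1, 1, 4)
    lo, hi = np.full(4, -1.0), np.full(4, 1.0)
    fit = lambda c: -np.sum(c ** 2)            # toy fitness: closer to 0 is better
    o1, o2 = stco(a, b_, lo, hi, fit, rng)
    print(nonuniform_mutation(o1, lo, hi, gen=10, max_gen=80, p_m=0.06, rng=rng))
```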

3.2.3. Multiple Species, Migration, and Real Coding

The last improvement approach of the proposed EGA concentrates on dividing the population into several species. In most cases, the premature convergence problem, which reflects a trade-off between exploration and exploitation, occurs when the SGA is employed. The multiple-species approach is utilized to alleviate this problem. With this approach, the search procedures of several species run simultaneously, so the algorithm avoids falling into a local minimum, which is easily induced by a single species. Specifically, three species are employed in the presented EGA. Species 1 and Species 2 perform the search individually, each concentrating on a local but refined scope rather than a broad one. Species 3 is obtained by merging Species 1 and 2 after their separate evolution is completed.

The migration operation randomly swaps individuals between Species 1 and 2 in each generation. The migration rate adjusts the number of individuals moving between species; generally speaking, a migration rate of about 5% per generation is a good choice.

There are various encoding schemes, such as binary encoding and real encoding. Binary coding employs a simple on/off mechanism; however, applying binary encoding to problems represented by real numbers may cause the "Hamming cliff" phenomenon. In such circumstances, real encoding should be employed, as is done in the current paper.
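A compact sketch of how two real-coded species could exchange a small fraction of individuals each generation is shown below; the 5% migration rate follows the text, while the data structures and the swap rule are assumptions:

```python
import numpy as np

def migrate(species_1, species_2, rate, rng):
    """Randomly swap about `rate` of the individuals between two species
    (each species is an array of shape (pop_size, n_genes))."""
    n_swap = max(1, int(rate * len(species_1)))
    idx1 = rng.choice(len(species_1), size=n_swap, replace=False)
    idx2 = rng.choice(len(species_2), size=n_swap, replace=False)
    species_1[idx1], species_2[idx2] = species_2[idx2].copy(), species_1[idx1].copy()
    return species_1, species_2

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    s1 = rng.uniform(-1, 1, size=(100, 10))    # real-coded Species 1
    s2 = rng.uniform(-1, 1, size=(100, 10))    # real-coded Species 2
    s1, s2 = migrate(s1, s2, rate=0.05, rng=rng)   # ~5% migration per generation
    merged = np.vstack([s1, s2])               # Species 3 after the separate evolution
    print(merged.shape)
```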

3.2.4. Layout of EGA and Benchmark Tests

Figure 5 describes the primary layout of the EGA. The fitness function (9) and the population size are used to produce the initial population randomly. By virtue of the multiple-species technique presented above, the original population is randomly divided into two species. These two species evolve separately using the presented TSO and STCO for a given number of generations and are then merged into Species 3. Moreover, in this study, different crossover probabilities (denoted by $p_{c1}$ and $p_{c2}$) and mutation probabilities (denoted by $p_{m1}$ and $p_{m2}$) are adopted for the two species in every generation.

The efficiency of the EGA is verified on four benchmark test functions before it is applied in the next section. The benchmark functions are presented as follows:

Function 1:
where the maximum value is at .

The fitness function: .

Function 2 (De Jong function):
$f_{2}(x) = \sum_{i=1}^{n} x_{i}^{2},$
where $n = 3$ and the minimum value $f_{2} = 0$ occurs at $x_{i} = 0$, $i = 1, 2, 3$.

The fitness function: .

Function 3:
where n = 2 and the minimum value is at .

The fitness function: .

Function 4 (Shubert function):
$f_{4}(x) = \prod_{i=1}^{n}\left(\sum_{j=1}^{5} j\cos\left[(j + 1)x_{i} + j\right]\right),$
where $n = 2$ and the global minimum value of about $-186.73$ is attained at multiple points.

The fitness function: .
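To make the benchmark setup reproducible in spirit, the sketch below implements the two named test functions (De Jong and Shubert); the simple negation used to turn a minimization objective into a fitness is an assumption, since the original fitness definitions are not reproduced above:

```python
import numpy as np

def de_jong(x):
    """De Jong (sphere) function: sum of squares; minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)

def shubert(x):
    """Two-dimensional Shubert function (global minimum about -186.73)."""
    x = np.asarray(x, dtype=float)
    j = np.arange(1, 6)
    return np.prod([np.sum(j * np.cos((j + 1) * xi + j)) for xi in x])

def fitness_from_min(f_value):
    """Assumed wrapper: for minimization problems the EGA can simply
    maximize the negated objective (the paper's wrappers are not shown)."""
    return -f_value

if __name__ == "__main__":
    print(de_jong([0.0, 0.0, 0.0]))                   # 0.0
    print(round(shubert([-1.42513, -0.80032]), 3))    # near the global minimum
    print(fitness_from_min(de_jong([1.0, 2.0, 3.0])))
```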

The EGA is evaluated on the four test functions above, and the results are compared with those obtained by the SGA with arithmetic crossover and nonuniform mutation. For every test function, the population size is 200 for both EGA and SGA, and each algorithm runs for 80 generations. The crossover probability is chosen as 0.6 for the SGA and as 0.6 and 0.8 for Species 1 and 2 in the EGA, respectively. The mutation probability is set at 0.04 for the SGA and at 0.04 and 0.06 for Species 1 and 2 in the EGA, respectively. Figure 6 presents the population's average fitness obtained with EGA and SGA. Generally, it can be found that the population's average fitness of the EGA (solid black line) is better than that of the SGA (red dashed line).

3.2.5. Tuning the NN-LS’s Framework and Connection Weights through EGA

With the EGA described in Sections 3.2.1–3.2.4, it can now be used to adjust the framework and connection weights of the proposed NN-LS.

In each generation of the EGA, the best chromosome of the current population, i.e., the one with the highest fitness, is chosen, and all genes of this chromosome are extracted. Specifically, the values of the link switches $s^{w}_{ij}$ and $s^{v}_{jk}$, which are available for every $i$, $j$, and $k$, take the value 0 or 1 according to the corresponding genes. Accordingly, the existence of the links associated with $w_{ij}$ and $v_{jk}$ can be determined. In this way, the framework of the NN-LS may change generation by generation until the optimum solution is attained.
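The sketch below shows one plausible way to unpack a real-coded chromosome into switch matrices and weight matrices, thresholding the switch genes at 0.5; the gene layout and the threshold are assumptions consistent with the description above:

```python
import numpy as np

def decode_chromosome(chrom, n_in, n_h, n_out):
    """Unpack a flat chromosome into (Sw, W, Sv, V, b_h, b_o).

    Assumed gene layout: [switch genes | weight genes | hidden biases | output biases];
    switch genes are thresholded at 0.5 to obtain 0/1 link switches."""
    chrom = np.asarray(chrom, dtype=float)
    n_w, n_v = n_in * n_h, n_h * n_out
    s = (chrom[: n_w + n_v] > 0.5).astype(float)        # link switches
    Sw, Sv = s[:n_w].reshape(n_h, n_in), s[n_w:].reshape(n_out, n_h)
    w = chrom[n_w + n_v : 2 * (n_w + n_v)]
    W, V = w[:n_w].reshape(n_h, n_in), w[n_w:].reshape(n_out, n_h)
    b_h = chrom[2 * (n_w + n_v) : 2 * (n_w + n_v) + n_h]
    b_o = chrom[2 * (n_w + n_v) + n_h :]
    return Sw, W, Sv, V, b_h, b_o

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n_in, n_h, n_out = 9, 4, 3
    length = 2 * (n_in * n_h + n_h * n_out) + n_h + n_out
    Sw, W, Sv, V, b_h, b_o = decode_chromosome(rng.random(length), n_in, n_h, n_out)
    print(Sw.shape, W.shape, Sv.shape, V.shape, b_h.shape, b_o.shape)
```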

As presented in Figure 7, the originally fully connected feed-forward NN becomes a partially connected one after learning by the EGA. Accordingly, the NN's implementation cost, in terms of computing time, is reduced.

3.3. Further Tuning the Connection Weights via the Levenberg–Marquardt Algorithm

As discussed in Subsection 3.2, the NN-LS framework and connection weights are tuned simultaneously by the presented EGA. Various tests indicate that higher accuracy can be obtained by the NN-LS incorporated with the EGA instead of the SGA. However, after EGA learning, the solution that maximizes the fitness function (9) may only lie in the vicinity of the global optimum; therefore, to obtain the optimal solution, it is necessary to adopt a gradient-based algorithm to further update the weights in this neighborhood. The LM algorithm is a widely used algorithm with local search ability, which can effectively reduce the search space of the GA and improve its stability [30, 31]. Compared with other methods, the LM algorithm offers faster convergence and higher accuracy and has been used to solve practical problems such as cyber-physical systems [32] and concrete compressive strength prediction [33]. To attain higher efficiency, the active connection weights are further tuned, as introduced in this subsection.

The steps of the L-M algorithm are listed in the following:

Step L1. Build a new connection weight vector $\theta$ as in (24), which collects all the active (nonzero) connection weights $w_{ij}$ (with $s^{w}_{ij} = 1$) and $v_{jk}$ (with $s^{v}_{jk} = 1$) together with the biases $b_{h}$ and $b_{o}$. With the EGA, the nonzero initial values of these parameters are obtained.
Calculate the following performance index, which evaluates the squared error between the net's output and the measured output:
$E(\theta) = \sum_{t=1}^{N}\left\| \hat{y}(t, \theta) - y(t) \right\|^{2},$
where $N$ stands for the size of the training set, $E(\theta)$ can be regarded as a function of the new weight vector $\theta$ defined by (24), and $y(t)$ represents the measured output vector of the training set.
Here, the LM algorithm is updated in batch mode. Define the following new error vector:
$e(\theta) = \left[ e_{1}(\theta), e_{2}(\theta), \ldots, e_{M}(\theta) \right]^{T},$
where $e_{1}(\theta), \ldots, e_{M}(\theta)$ denote all elements of the errors $\hat{y}(t, \theta) - y(t)$, $t = 1, \ldots, N$, stacked into one vector (so that $M = n_{out} N$).

Step L2. Noticing that the error vector $e$ is also a function of $\theta$, calculate the Jacobian matrix as
$J(\theta) = \dfrac{\partial e(\theta)}{\partial \theta}.$
Let
$\Delta\theta = -\left[ J^{T}(\theta) J(\theta) + \mu I \right]^{-1} J^{T}(\theta)\, e(\theta),$  (27)
where $\mu$ denotes a parameter to be tuned, and then update the connection weights as
$\theta_{\text{new}} = \theta + \Delta\theta.$

Step L3. Tune the parameter $\mu$ in (27) as follows:
If $E(\theta_{\text{new}}) < E(\theta)$, then $\mu \leftarrow \mu/\beta$.
If $E(\theta_{\text{new}}) \geq E(\theta)$, then $\mu \leftarrow \mu\beta$, where $\beta > 1$ describes a positive constant.

Step L4. Stop if the performance index $E(\theta)$ falls below a prescribed threshold.
For the calculation of the Jacobian matrix $J$ in Step L2, the chain rule can be employed, by which the required derivatives can be computed by the standard BP method.
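The core LM update of Steps L2 and L3 can be sketched as follows; a finite-difference Jacobian is used here so that the example runs stand-alone, whereas the paper obtains the Jacobian by backpropagation, and the damping factor beta is an assumed value:

```python
import numpy as np

def lm_step(theta, residual_fn, mu):
    """One Levenberg-Marquardt update: dtheta = -(J^T J + mu I)^{-1} J^T e.
    The Jacobian is approximated by finite differences for simplicity;
    the paper computes it with the chain rule / standard BP."""
    e = residual_fn(theta)
    eps = 1e-6
    J = np.zeros((e.size, theta.size))
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        J[:, i] = (residual_fn(t) - e) / eps
    dtheta = -np.linalg.solve(J.T @ J + mu * np.eye(theta.size), J.T @ e)
    return theta + dtheta

def lm_train(theta, residual_fn, mu=1e-2, beta=10.0, tol=1e-6, max_iter=100):
    """Iterate lm_step, shrinking mu after a successful step and growing it
    otherwise (Step L3), until the squared error falls below `tol` (Step L4)."""
    cost = lambda th: float(np.sum(residual_fn(th) ** 2))
    for _ in range(max_iter):
        theta_new = lm_step(theta, residual_fn, mu)
        if cost(theta_new) < cost(theta):
            theta, mu = theta_new, mu / beta
        else:
            mu *= beta
        if cost(theta) < tol:
            break
    return theta

if __name__ == "__main__":
    # Toy problem: fit y = a*x + b to noisy data.
    rng = np.random.default_rng(5)
    x = np.linspace(0, 1, 20)
    y = 2.0 * x - 1.0 + 0.01 * rng.normal(size=x.size)
    residual = lambda th: th[0] * x + th[1] - y
    print(lm_train(np.zeros(2), residual, tol=1e-2))
```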

3.4. EGA-NNLS

By synthesizing the NN-LS, the EGA, and the LM algorithm of Subsections 3.1–3.3, the EGA-NNLS algorithm can be described by the following two stages:

Stage 1. Tuning the framework and weights of NN-LS simultaneously via EGA.
Determine $n_{in}$, $n_{h}$, and $n_{out}$ to construct a three-layer NN-LS. Then, all switches, weights, and biases within the NN-LS are mapped into a chromosome as in (12), with the fitness function defined as in (9). Accordingly, an initial population with a fixed number of individuals is randomly generated. The proposed EGA is then adopted to adjust the framework and weights of the NN-LS simultaneously until the population's average fitness converges.

Stage 2. Further updating the nonzero connection weights.
When the first stage finishes, the structure of NN-LS is determined. In order to further search for the optimal nonzero weights, the L-M algorithm is applied.

Remark 4. For many problems, the nonzero connection weights tuned by the EGA with the fitness (9) can be directly regarded as the final values of the network. However, for the problem studied here, the second stage, in which EGA-NNLS is combined with the LM algorithm, is necessary.
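As a data-flow sketch of the two stages, the snippet below decodes a chromosome, evaluates the NN-LS on the training data, and searches for the fittest chromosome; a crude random search stands in for the EGA of Stage 1, and Stage 2 (LM refinement of the active weights) is indicated only by a comment. All function names and the fitness transform are assumptions:

```python
import numpy as np

def nnls_from_chromosome(chrom, n_in, n_h, n_out):
    """Decode a chromosome into a switched network and return a predict function."""
    n_w, n_v = n_in * n_h, n_h * n_out
    s = (np.asarray(chrom[: n_w + n_v]) > 0.5).astype(float)
    Sw, Sv = s[:n_w].reshape(n_h, n_in), s[n_w:].reshape(n_out, n_h)
    w = np.asarray(chrom[n_w + n_v: 2 * (n_w + n_v)])
    W, V = w[:n_w].reshape(n_h, n_in), w[n_w:].reshape(n_out, n_h)
    b_h, b_o = chrom[2 * (n_w + n_v): -n_out], chrom[-n_out:]
    return lambda X: (Sv * V) @ np.tanh((Sw * W) @ X.T + b_h[:, None]) + b_o[:, None]

def stage1_search(X, Y, n_h=4, n_trials=500, rng=None):
    """Stage 1 (EGA stand-in): pick the chromosome with the highest 1/(1+MSE)."""
    rng = rng or np.random.default_rng(0)
    n_in, n_out = X.shape[1], Y.shape[1]
    length = 2 * (n_in * n_h + n_h * n_out) + n_h + n_out
    best, best_fit = None, -np.inf
    for _ in range(n_trials):
        c = rng.uniform(-1, 1, length)
        mse = np.mean((nnls_from_chromosome(c, n_in, n_h, n_out)(X).T - Y) ** 2)
        if 1 / (1 + mse) > best_fit:
            best, best_fit = c, 1 / (1 + mse)
    return best   # Stage 2 would now refine the active weights of `best` with LM

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    X = rng.random((30, 9))                        # 30 training samples, 9 indices
    Y = rng.integers(0, 2, (30, 3)).astype(float)  # 3-bit target codes
    print(stage1_search(X, Y, rng=rng)[:5])
```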

4. Simulations

To verify the EGA-NNLS, this section performs simulations using the actual evaluation data of a particular university course. In the simulation process, the units of the evaluation indices are unified, and the data are normalized to avoid the influence of different data dimensions. This paper employs 40 groups of data samples, of which 30 groups form the training set and the other ten form the evaluation set. Table 2 lists the ten sets of test data used to verify the effectiveness of the EGA-NNLS in predicting the learning effect. Based on the above data, the EGA-NNLS is established, and the simulation parameters are listed in Table 3.

After setting the simulation parameters, the EGA-NNLS is established. The network output is a real number between −1.5 and 1.5, and the binary code result is obtained by rounding the absolute value of each network output.
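The decoding rule described above (outputs in [−1.5, 1.5], with the absolute value of each component rounded to obtain the 3-bit code) can be sketched as follows:

```python
import numpy as np

def output_to_code(y):
    """Map raw network outputs in [-1.5, 1.5] to a 3-bit binary code by
    rounding the absolute value of each component, as described above."""
    y = np.clip(np.asarray(y, dtype=float), -1.5, 1.5)
    return np.rint(np.abs(y)).astype(int)

if __name__ == "__main__":
    print(output_to_code([0.12, 0.93, -0.28]))   # -> [0 1 0]
    print(output_to_code([1.31, -0.05, 0.74]))   # -> [1 0 1]
```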

After the training of the EGA-NNLS is completed, the ten groups of test data listed in Table 2 are utilized to evaluate its efficiency, and Table 4 lists the final results. It can be seen that, for all ten groups of simulation data, the rounded network output values are consistent with the actual binary code values.

Figure 8 illustrates the variation of the performance function during the training process. Two weight-updating algorithms are utilized in this paper: the proposed EGA-NNLS algorithm and the traditional BP algorithm. As shown in Figure 8, the performance functions of both algorithms gradually decrease during training. However, compared with the conventional BP algorithm, the EGA-NNLS achieves the desired training accuracy after the second iteration (goal = 10^{-2}, as shown in Table 3). Therefore, the superiority of the EGA-NNLS algorithm over the traditional BP algorithm is demonstrated in terms of efficiency and convergence speed.

The above simulations show that EGA-NNLS can evaluate the learning effect. Compared with the traditional expert evaluation method, EGA-NNLS avoids the influence of subjective factors on the evaluation results. Meanwhile, the performance function of the EGA-NNLS network converges faster during training than that of the conventional steepest-descent BP neural network. Besides, EGA-NNLS can eliminate the adverse effect of the gradient amplitude on the evaluation results during network training, achieving an accurate course evaluation. This demonstrates the excellent practicability of the proposed method.

5. Conclusions

The current work studies the modeling of the evaluation system of the course learning effect in colleges and universities. The characteristics of the existing evaluation system are employed to propose the EGA-NNLS evaluation and analysis system. A particular NN-LS is presented, and three enhancement approaches are utilized to combine it with the proposed EGA. Besides, after the EGA adjusts the NN-LS's link switches, the LM algorithm further updates the network. Therefore, EGA-NNLS can learn the input-output relationships and the network framework simultaneously to model the course evaluation system, while the network's gradient information is also fully employed. Finally, the efficiency of EGA-NNLS is demonstrated through simulations on actual data.

The proposed model is general, and the employed algorithm is based on the system data. Compared with existing evaluation approaches such as the analytic hierarchy process, the fuzzy comprehensive evaluation approach, the clustering method, and multiple regression analysis, the proposed model relies mainly on the data rather than on the model mechanism. This demonstrates the efficiency and application value of this study.

Data Availability

The data used to support the findings of this study are included within the paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation (NNSF) of China under Grant 61903004, Beijing Municipal Natural Science Foundation under Grant 4212035, North China University of Technology YuYou Talent Training Program, North China University of Technology Scientific Research Foundation, and Beijing Municipal Great Wall Scholar Program under Grant CIT&TCD20190304.