Recent advances in science and information technology have made criteria-based course selection mechanisms easier and more effective. This paper takes an innovative perspective and introduces swarm intelligence into the intentional choice mechanism of course selection, using English course selection as the example. A swarm intelligence algorithm and integrative course selection were combined with a recommendation algorithm and students' course-selection intent to discuss the relevant decision mechanism. First, the comprehensive selection-intention recommendation algorithm and the particle swarm optimization (PSO) algorithm were introduced and initialized. Second, the operation process was described in detail and the application process analyzed. The approach was then applied to the English elective course process. Finally, the experimental results show that the PSO algorithm achieves higher accuracy and better judges individual behaviors, which contributes to establishing the course-choice mechanism. The effectiveness of the study was demonstrated through experiments.

1. Introduction

With the continuous deepening of economic globalization, society has become increasingly demanding of college students' English proficiency. Students are required not only to have a solid foundation of basic English knowledge but also strong comprehensive English ability and cross-cultural communication skills [1]. This puts forward new requirements for college English teaching reform. At present, college English is influenced by traditional utilitarianism: students' motivation to learn English deviates greatly, and English has become a "tool" for employment [2]. In our country, higher education adopts both compulsory and elective methods in English education. In a certain sense, the quality of elective courses better reflects the maturity of the credit system, because the credit system is based on the elective system [3]. However, with the curriculum reform in our country, the college entrance examination has removed English as a subject, and higher education still faces a crisis of insufficient elective intention. Originally compulsory high school English has also become an elective course. The offering of English courses lacks standardization and curricular diversity, and students lack guidance for elective courses. Meanwhile, elective course teaching and students' incorrect understanding of the relationship between elective courses and the college entrance examination need to be improved [4]. Therefore, studying students' intention toward English elective courses can help strengthen their enthusiasm for learning English. In-depth understanding of what influences students' choice of a particular course remains inadequate, and this imperfect understanding of modality choice has significant implications for institutions and students.

The swarm intelligence evolutionary algorithm, a kind of optimization algorithm, has attracted increasing attention from researchers. Artificial life and evolutionary algorithms are intensively correlated in evolutionary strategy, especially in the domain of genetic algorithms [5]. There are mainly two kinds of algorithms in the field of swarm optimization theory: the ant colony algorithm and particle swarm optimization (PSO). PSO, proposed by scholars in recent years [7], originated from the simulation of simple social systems; it was originally a model of flocks foraging, but it was later found to be a good optimization tool [6]. The PSO algorithm resembles the simulated annealing algorithm in that both are evolutionary search methods. PSO starts the iteration from a random solution and then loops until it obtains the optimal solution. It uses a fitness function to assess the quality of each solution, and since there are no operations such as "crossover" and "mutation," its rules are simpler than those of the genetic algorithm. Crucially, it approaches the global optimum by following the best value found in the current search. For its advantages of easy implementation, high precision, and fast convergence, the algorithm has attracted much attention in the academic community and has demonstrated its superiority in solving practical problems [8]. PSO can also be computed in parallel. With the continuous progress of innovation and reform in English-major teaching, the traditional single teaching model has long been unsuitable for today's students. Boring, old-fashioned presentation methods make students feel they can only passively accept material, far from interactive fun [9]. Therefore, improving interactivity and understanding individual conditions are important in teaching reform and innovation.
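The loop described above (random initialization, fitness evaluation, and following the best value found so far, with no crossover or mutation) can be sketched in a few lines. This is a minimal illustration of the canonical PSO update, not the paper's exact implementation; the parameter values (inertia weight `w`, acceleration coefficients `c1` and `c2`, swarm size, bounds) are illustrative defaults:

```python
import random

def pso(fitness, dim, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    lo, hi = bounds
    # Start from random solutions, as the text describes
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position
    pbest_val = [fitness(p) for p in pos]         # and its fitness
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best found so far

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:                # improve personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # improve global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimizing the 2-D sphere function converges near the origin
best, val = pso(lambda p: sum(x * x for x in p), dim=2)
```

Note that the only per-iteration bookkeeping is the personal and global bests, which is why the text calls PSO simpler than GA.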

The contribution of the proposed research is to devise swarm intelligence for the intentional choice mechanism of course selection, taking English course selection as the example of study. A swarm intelligence algorithm and integrative course selection were integrated with a recommendation algorithm and students' course-selection intent to discuss the applicable decision mechanism. The effectiveness of the study was demonstrated through experiments.

2. Methodology

2.1. Particle Swarm Algorithm Content Interpretation

The locations of the Locators in the space are shown as the dark rectangular positions in the figure, with coordinates $(x_0, y_0)$, $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$. During the experiment, the four Locators emitted ultrasonic waves toward the Tag; when an ultrasonic wave reached the Tag, it was reflected back to the Locator. The propagation time of the ultrasonic wave was monitored, and the time-of-arrival (TOA) method was used to calculate the distance from each Locator to the Tag. Based on this initial condition, the PSO algorithm is introduced as follows [10]. Within the problem space a particle group is placed, consisting of four particles $P_1$, $P_2$, $P_3$, and $P_4$, whose initial positions are random. The procedure is as follows. First, ultrasonic sensors and communication between tags were used to establish the ultrasonic transmission cycle. Second, the TOA method was used to find the distance between each Locator and the Tag [11]: the distances from Locators L0, L1, L2, and L3 to the Tag are denoted $d_0$, $d_1$, $d_2$, and $d_3$, respectively. Under the initial conditions, the coordinates of the four Locators and the four particles are known, so the distance between each Locator and each particle can be calculated. For example, the distance between Locator L0 and particle $P_1$ [12] is

$$d_{0,1} = \sqrt{(x_0 - x_{P_1})^2 + (y_0 - y_{P_1})^2}.$$

The distance between Locator L1 and particle $P_1$ can be calculated as follows:

$$d_{1,1} = \sqrt{(x_1 - x_{P_1})^2 + (y_1 - y_{P_1})^2}.$$

The distance between Locator L2 and particle $P_1$ can be calculated as follows:

$$d_{2,1} = \sqrt{(x_2 - x_{P_1})^2 + (y_2 - y_{P_1})^2}.$$

The distance between Locator L3 and particle $P_1$ can be calculated as follows:

$$d_{3,1} = \sqrt{(x_3 - x_{P_1})^2 + (y_3 - y_{P_1})^2}.$$

By the same formula, the distances between the Locators and the other three particles can be expressed as $d_{i,j}$, the distance between Locator Li and particle $P_j$ ($i = 0, \dots, 3$; $j = 1, \dots, 4$). With both the Tag-to-Locator distances and the particle-to-Locator distances known, the following formula identifies which of the four particles is closest to the Tag [13]:

$$F_j = \sum_{i=0}^{3} \left| d_{i,j} - d_i \right|, \quad j = 1, \dots, 4.$$

Here the $d_{i,j}$ represent the distances between the particles and the Locators, and the $d_i$ represent the distances from the Tag to the Locators. The quantities $F_1$, $F_2$, $F_3$, and $F_4$ are called distance degrees, whose size indicates how far a particle is from the Tag: the smaller the distance degree, the smaller the distance between the particle and the Tag [14]. Among the four particles, the one with the smallest distance degree is therefore closest to the Tag; for example, if $F_2$ is the smallest, particle $P_2$ is closest. This resolves the first key point of the PSO algorithm: clearly determining which particle in the swarm is closest to the Tag [15].
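The distance-degree comparison can be illustrated concretely. The original formula is not fully legible, so the sketch below assumes one natural reading: each particle's distance degree is the sum, over the four Locators, of the absolute difference between that particle's Locator distances and the TOA-measured Tag distances. All coordinates are hypothetical:

```python
import math

locators = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]  # L0..L3 (illustrative)
tag = (3.0, 4.0)                                                  # true Tag position (illustrative)
particles = [(1.0, 1.0), (3.5, 4.5), (8.0, 2.0), (6.0, 9.0)]      # P1..P4, random initial positions

def dist(a, b):
    # Euclidean distance between two planar points
    return math.hypot(a[0] - b[0], a[1] - b[1])

# d_i: the TOA-measured distance from each Locator to the Tag
d_tag = [dist(L, tag) for L in locators]

def distance_degree(p):
    # Assumed form: total deviation of the particle's Locator distances
    # from the Tag's measured Locator distances
    return sum(abs(dist(L, p) - d) for L, d in zip(locators, d_tag))

F = [distance_degree(p) for p in particles]
closest = F.index(min(F))  # smallest distance degree -> particle nearest the Tag
```

With these sample positions, the particle at (3.5, 4.5) sits nearest the Tag and receives the smallest distance degree, matching the selection rule described in the text.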

2.2. Spark Cluster Iteration Calculation

Having described the particle swarm algorithm, this paper uses a distributed platform: particle swarm optimization on a Spark cluster. With its speed, ease of use, and sophisticated analytics, Apache Spark is a computation platform well suited to handling big data. The project started in 2009 and was open-sourced in 2010. Spark extends the MapReduce model to support various kinds of computation. Speed matters on large data sets when processing interactive queries and streaming data, and Spark computes in memory [16]; it can also perform complex computation on disk. In general, Spark was designed to handle various computation scenarios, such as batch-processing applications, iterative algorithms, interactive queries, and streaming [17]. Spark's versatility not only enables simple, convenient processing in different application scenarios but also reduces the administrative burden. Spark provides interfaces for Python, Java, Scala, and SQL, ships with a rich set of default tool libraries, and can be combined with other tools. The driver program running on the master node controls the critical flow of the program [18] and defines operations such as map, reduce, and filter. Figure 1 is a working principle diagram [19].

We now elaborate on the specific implementation of the algorithm in the program. From the analysis above, in the Spark application the data file is first read from a document teaching platform (such as HDFS). Second, a resilient distributed dataset (RDD) is set up. Third, the driver program parallelizes the RDD and assigns its partitions to the various nodes. If an RDD is frequently reused in the application, caching it improves performance. Once the RDD is created, parallel operations can be performed on it. The new algorithm operations used in the Spark cluster environment are shown in Table 1 [20].
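In PySpark, the driver would express this as a pipeline over an RDD, roughly `sc.parallelize(records).map(...).filter(...).reduce(...)`, with `.cache()` on any reused RDD. To keep the sketch runnable without a Spark cluster, the plain-Python equivalent below shows the same map → filter → reduce flow over hypothetical check-in records (the record layout and course names are assumptions for illustration):

```python
from functools import reduce

# Hypothetical check-in records: (user_id, course_id, rating)
checkins = [
    ("u1", "english_writing", 4.0),
    ("u2", "english_writing", 5.0),
    ("u1", "business_english", 2.0),
    ("u3", "english_writing", 3.0),
]

# map: project each record to (course_id, rating)   -- rdd.map(...)
mapped = map(lambda r: (r[1], r[2]), checkins)

# filter: keep one course of interest               -- rdd.filter(...)
filtered = list(filter(lambda r: r[0] == "english_writing", mapped))

# reduce: sum the ratings for that course           -- rdd.reduce(...)
total = reduce(lambda a, b: a + b, (r[1] for r in filtered))
avg = total / len(filtered)
```

On a Spark cluster, each stage of this pipeline runs in parallel across the RDD's partitions; the driver only defines the operations and collects the result.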

As the table shows, Spark provides rich functionality. The Spark ecosystem consists of a general execution module, a structured data module, a stream analysis module, a machine learning module, and a graph computation module [21]. The first is the platform's execution system and the core of its functionality: Spark Core supports caching, a common execution model, and application programming interfaces for Java, Scala, and Python, which allows Spark to compute efficiently and serve a wide range of applications. Spark SQL processes structured data: it provides the DataFrame programming abstraction and acts as a distributed SQL query engine. Spark SQL enables native Hive queries in Hadoop clusters to run up to 100 times faster than existing configurations and datasets, and it integrates strongly with the other modules in the Spark ecosystem. The stream analysis module supports highly interactive analysis applications over streaming and historical data while retaining Spark's ease of use and fault tolerance; Spark Streaming integrates easily with all common types of data sources. The machine learning module (MLlib) is a scalable machine learning library that provides high-quality, efficient algorithms. The MLlib library can be used as part of Spark applications in languages such as Java, Scala, and Python.

3. Results and Discussion

3.1. Particle Swarm Algorithm Validity Test

After formulating the three course-selection ideas for English courses with the particle swarm algorithm, the three curriculum-selection intentions were deployed to a Spark cluster environment to improve the efficiency and scalability of the elective recommendation. These experiments identify the performance of our proposal. All experiments were done in a lab Spark cluster environment. The data source is the original Foursquare check-in data set, and data sets of the required sizes were obtained by replication; the experiments verifying efficiency and scalability under large data volumes were performed on these [22].

By repeating these experiments and adjusting the parameters of the influence factors in the linear combination and probability fusion, we found that, for the linear method, the selection-intention preference recommendation performs best when the weights are set to 0.4 and 0.5. For the second method, the order of influence follows social factors, time factors, and geographical factors. Moreover, applying the coarse-then-fine-grained ordering yields the highest selection-intention preference effect.

In the probabilistic fusion recommendation method, the selection-intention preference recommendation performs best when the two fusion parameters are set to 0.2 and 0.4, respectively. Among the three factors influencing course-selection intention, geographical and time factors have a greater degree of influence than social factors, and geographical factors are stronger than time factors. We then compare the results of our proposal against others to validate its advantages, as shown in Figure 2.
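The two fusion schemes discussed above can be sketched as simple score combiners over per-factor scores. The original text reports the best parameter values (0.4/0.5 for the linear method, 0.2/0.4 for the probabilistic method) but does not name which weight belongs to which factor, so the mapping below, the score ranges, and the function shapes are all assumptions for illustration:

```python
def linear_fusion(geo, time, social, w_geo=0.4, w_time=0.5):
    # Linear weighting: the remaining weight goes to the social factor.
    # Which reported weight maps to which factor is an assumption.
    w_social = 1.0 - w_geo - w_time
    return w_geo * geo + w_time * time + w_social * social

def probabilistic_fusion(geo, time, social, a=0.2, b=0.4):
    # Probabilistic fusion as a mixture of the three factor scores,
    # with the reported parameters 0.2 and 0.4 (mapping assumed).
    return a * social + b * time + (1.0 - a - b) * geo

# Hypothetical per-factor scores for one (student, course) pair, in [0, 1]
linear_score = linear_fusion(geo=0.8, time=0.6, social=0.5)
prob_score = probabilistic_fusion(geo=0.8, time=0.6, social=0.5)
```

Courses would then be ranked by the fused score, with the top-ranked ones recommended as elective-intention candidates.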

The experimental results show that introducing social factors, geographical factors, and time factors each improves the recommendation of elective-course intention, which verifies the conclusions of existing research. The three integrated methods proposed in this paper further improve the effectiveness of selection-intention recommendation. Specifically, we found that the linear-weighted recommendation outperformed GT, indicating that social factors also contribute to the recommendation outcome, while GT outperformed T and G alone, indicating that geographical and time factors likewise enhance the recommendation effect. Analyzing all the methods, the results show that the linear weighting method achieves the highest F1_measure and provides the best overall recommendation effect. In addition, the recall of the EL combination method is relatively low while its accuracy is the highest among all methods, making it suitable for applications requiring high accuracy. The results of the probabilistic fusion method are second only to the linear weighting method, and all our results are better than the baselines.

3.2. Verifying the Preference of Particle Swarm Optimization

In this experiment, the efficiency of the particle swarm algorithm was compared between a stand-alone environment and the Spark cluster environment. The extended Foursquare check-in data set was used, gradually increasing the data volume from 1G to 32G, and the execution time was observed in the two environments. The comparison of the three comprehensive recommendation methods is shown for the linear weighting case: the particle swarm algorithm performs best on Spark, and the larger the data set, the more significant the difference. The results are shown in Figure 3 and Table 2.

We then verified the scalability of the recommended method. Keeping the available memory of each executor at the default value of 1G, we changed the number of executors in the Spark cluster environment by changing the total number of cores available to all executors via --total-executor-cores, observing how the execution time of our proposal changes as the number of available cores and memory changes. The cached data sets are the 1G, 2G, and 4G enlarged Foursquare check-in sets. The scalability results of the three particle swarm optimization variants are as before: with more executors, as the number of available cores and memory increases, the execution time decreases roughly linearly. In addition, the stability of the algorithm was tested in two experiments; the results show that the algorithm has good convergence and stability. The convergence of the algorithm in the two experiments is shown in Figure 4.

The above experiments verify the performance of the particle swarm algorithm. The experiments on the validity of the integrated methods show that the linear method is the best; the second performs well on accuracy; and the third ranks just behind the linear-weighted method and performs well on sparsity problems. The efficiency experiments show that the particle swarm algorithm is more efficient in the Spark cluster environment than in the stand-alone environment, and the efficiency advantage becomes even more significant as the data set scales up. As the number of available cores and memory in the cluster increases, the execution time decreases linearly within a certain range.

4. Conclusion

At present, computer and Internet technology is developing rapidly, and modern education is combining ever more closely with computing. An effective and efficient way is needed to consider the intentional choice mechanism of course selection precisely. To achieve the aim of the proposed study, a recommendation model and algorithm for English course-selection intention were constructed. Existing research on course-selection intention recommendation confirms that various reinforcement factors can indeed improve recommendation quality, but it does not consider methods that combine the three influencing factors. In our proposal, building on the point-of-interest recommendation of existing research, the three influencing factors were integrated by three methods to improve the elective-intention results. On the one hand, the scalability problem of collaborative filtering was solved; on the other hand, efficient and easily scalable solutions for point-of-interest recommendation were provided. The advantages and disadvantages of the three integrated methods were discussed and verified, and conclusions were drawn. The performance of the course-selection intention methods was also verified, confirming that the linear method is the best choice. The experimental results demonstrate the effectiveness of the proposed study.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.