College English translation instruction is an important part of developing students’ English application skills. In this paper, the generation network of a GAN (generative adversarial network) is combined with reinforcement learning to create a basic text generation model that overcomes the original GAN’s inability to handle discrete data. The accuracy of students’ English translations is then analyzed with a neural network model trained by PSO (particle swarm optimization), which helps teachers estimate students’ translation ability and provides a reference for subsequent teaching. The results show that the proposed model’s accuracy is clearly higher than the comparison model’s, with a maximum accuracy above 85%. The findings indicate that this research model has the potential to improve the quality of English translation instruction.

1. Introduction

The methods and contents of translation education in China currently do not meet the needs of modern society, and trained professionals often lack strong translation abilities in practical work and life [1, 2]. As science and technology have progressed and the Internet age has arrived, the means of spreading knowledge and information have gradually shifted from traditional paper media to online media. People obtain the information resources they need through networked information technology, and the transmission channel has shifted over time from the computer platform to the mobile client. As a result, the traditional teacher-centered mode of college English translation teaching is gradually giving way to a student-centered mode. Under the new teacher-student relationship, students’ awareness of autonomous learning improves, which stimulates their enthusiasm and initiative in English translation learning and gradually cultivates their translation ability, learning ability, and thinking ability.

Zhao and Jiang improved the traditional rule-based machine translation model by employing an English semantic network machine translation model [3]. To analyze machine translation, Ban and Ning moved beyond pipeline-style layer-by-layer analysis and compared segmented phrase words with a phrase corpus to analyze part of speech and syntax [4]. Reference [5] starts from the definition of student-centered college English translation teaching, examines the current neglect of student-centered college English translation teaching, and proposes innovative student-centered teaching strategies. Chai uses a statistical algorithm to determine the collinearity and correlation of words in documents based on the word-document-case attribute degree.

The goal of creating a corpus is to identify the parts of speech in sentences in two languages and assign basic functions to these sentences in order to improve automatic recognition accuracy and speed. English is a universally understood language, and English translation teaching reform should seize this development opportunity to form a standardized English teaching system based on network information technology as part of the national education standardization reform process. As a result, this paper applies GAN (generative adversarial network) [6] to college English translation instruction, taking into account all aspects of English words and semantics and improving the accuracy of English sentence translation. The specific innovative contributions of this paper are as follows:

(1) Aiming at the shortcomings of existing research, this paper improves the framework combining GAN text generation with reinforcement learning and introduces hierarchical reinforcement learning into this framework. The resulting signal transmission structure between the discriminator and the generator can effectively alleviate the problem of feedback signal differences in reinforcement learning.

(2) Based on the mathematical model and algorithm flow of the PSO algorithm and the basic principles of the artificial neural network model, a learning-ability analysis model is proposed, and the topological structure of the neural network and the number of nodes in the network are determined. This research mode can promote the improvement of English translation teaching quality and mutual benefit in teaching.

The following are the main contents of each of the five sections in this paper: the first section discusses the research’s background and significance before moving on to the paper’s main work. The second section focuses on GAN-related technologies, while the third section discusses the research’s specific methods and implementation. The superiority and feasibility of this research model are confirmed in the fourth section. The summary of the full text appears in the fifth section.

2. Related Work

2.1. GAN Research

The work of Tang et al. proves that GAN has always shown strong practicability in the image [7]. Hu et al. put forward the image translation structure Pix2Pix [8] based on conditional GAN, which can get good results under the condition of supervised learning. Shin and Lee added an enhanced network on the basis of the traditional GAN structure to enhance the realistic connection between input and output and thus proposed a new GAN network structure [9]. Xu et al. proposed a new GAN algorithm: multiscale discrimination of GAN, which can improve the defects of the original GAN [10].

As a latent-variable model, GAN differs from the autoencoder. The autoencoder is composed of an encoder and a decoder, while GAN is composed of a generator and a discriminator. The generator plays a role similar to the decoder in an autoencoder: it takes the latent variable Z as input and outputs the generated text. Cheng et al. put forward TextGAN, a text generation model based on GAN, which uses LSTM (long short-term memory network) as the generator and a convolutional network as the discriminator [11]. Zhang et al. modified the generator because the binary output of the traditional GAN discriminator cannot transmit enough meaningful information to the generator, so it cannot generate long text well [12]. Deng et al.’s research shows that deep generative models have a more comprehensive feature expression ability and take very diverse forms [13]. Motamed et al. skillfully used supervised network structures and training methods to fit data distributions in an unsupervised way [14]. A deep encoder can learn a low-dimensional representation of high-dimensional text data, and the decoder can then be used as a text generation model.

2.2. Research on Intelligent Translation Algorithm

Jiang evaluates the quality and writing style of articles based on the shallow text feature analysis methods of statistical technology. Because the system adopts shallow text feature analysis, its main characteristic is that it pays more attention to the surface linguistic structure of the composition while ignoring its content, and it does not address the composition’s content, organizational structure, or genre [15]. Wu et al. combined dictionaries, semantic analysis, other analysis tools, and various resources in their system, which improved its performance to a certain extent, avoided over-reliance on the surface features of the composition, and greatly improved the scoring effect [16]. Cao et al. adopted information retrieval technology, screened out the indicators that influence writing level, and calculated the proportional influence of each indicator [17]. Ullman used LSA (latent semantic analysis) to analyze the semantics of compositions and complete the grading work; as its name suggests, LSA breaks through the surface features of language and goes deep into the semantic level [18].

Dai combines the latent semantic analysis of deep semantic structure, focuses on shallow text expressions, makes linear regression with the help of singular value decomposition technology in LSA, and discusses the coincidence rate of various text features on English composition performance prediction [19]. Chen et al. selected some scoring items with a high linear fitting rate with the composition scores, trained the scoring engine, and calculated the distance between students’ compositions and the standard corpus [20]. Pang et al. represented the semantic lexical structure by a matrix through latent semantic analysis, and after singular value decomposition of this matrix, three different matrices were obtained, and then the three matrices were correlated to reconstruct the matrix and reduce the dimensions, thus better representing the semantic structure and, more importantly, filtering out the interference of multiword synonymy and polysemy [21].

3. Methodology

3.1. GAN Overview

GAN is made up of a generator and a discriminator: the generator models the potential distribution of the data and produces new samples, and the discriminator separates real samples from generated ones. Its modeling capacity is, in principle, unlimited; it can approximate any real sample distribution and therefore has a very broad application range. Numerous research papers show that GAN can generate more realistic samples than other generative models. By alternately optimizing the generator and the discriminator many times, the final generator can fit the distribution of the training data, at which point the discriminator network cannot distinguish whether its input comes from the generator or from the real data. The structure of GAN is shown in Figure 1.

During training, the generator and the discriminator are trained alternately and play against each other. When a Nash equilibrium is reached, that is, when the discriminator cannot distinguish whether a sample is real or fake, both the generator and the discriminator are considered to have achieved the ideal effect. This process is expressed by the following minimax objective:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]

where p_data(x) is the distribution of the real samples and p_z(z) is the distribution of the input noise, usually Gaussian, with G representing the generator and D representing the discriminator. Maximizing V(D, G) over D requires D(x) to be as large as possible and D(G(z)) to be as small as possible.
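As an illustration, the expectations in the minimax objective can be estimated by Monte-Carlo sampling. The sketch below uses a hypothetical fixed discriminator and generator on toy one-dimensional Gaussian data; the function shapes are illustrative assumptions, not the paper’s actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x):
    # Hypothetical discriminator: sigmoid of a simple scalar score.
    return 1.0 / (1.0 + np.exp(-x))

def generator(z):
    # Hypothetical generator: shifts Gaussian noise toward the data mean.
    return z + 2.0

# Real samples x ~ p_data (Gaussian around 2) and noise z ~ p_z (standard Gaussian).
x_real = rng.normal(loc=2.0, scale=1.0, size=100_000)
z = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Monte-Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
v = np.mean(np.log(discriminator(x_real))) + \
    np.mean(np.log(1.0 - discriminator(generator(z))))
print(v)
```

Both expectation terms are logarithms of probabilities, so the estimate is always negative; training adjusts G to raise the second term and D to lower it.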

Although GAN has solved some problems of generative models and inspired the development of other methods, it is not without flaws, and it has introduced new issues while addressing old ones. When the game between the generator and the discriminator reaches a Nash equilibrium, the generator produces completely realistic samples and the discriminator cannot tell generated samples from real ones. This equilibrium is difficult to reach because GAN’s objective is a minimax problem, not a simple minimization problem.

3.2. Text Generation Model

College students today are frequently influenced by the logical structure of Chinese sentences in English translation learning, and they lack the ability to translate whole sentences, showing that they cannot use the various voices and tenses of English; grammar and vocabulary errors are common. As a result, universities should make use of the modern network technology environment, innovate teaching methods, cultivate students’ English translation thinking, gradually transition students from short-sentence to long-sentence translation, and improve the accuracy of their English sentences. It is clear that current college English translation instruction not only disregards students’ subjective initiative but also lacks practicality in application, making it difficult for students to improve their English level. Interest is the best teacher: it naturally stimulates active learning motivation in a pleasant learning environment. As a result, college English translation instruction should place a strong emphasis on students, encouraging them to take charge of their own learning, to become masters of the English translation course content, and to achieve learning autonomy.

Text generation is a long-standing topic in artificial intelligence, closely linked to other natural language processing technologies such as text understanding. Machine learning can also summarize text templates, but the versatility and scalability of such templates are limited, and purely statistics-driven text generation has not yet yielded convincing results. The network can only give a broad assessment of each generated trial-and-error text, with no recommendation for how to improve the process, so further optimization is possible only through more random sampling. In contrast to RNN (recurrent neural network), which calculates and extracts features sequentially, CNN (convolutional neural network) completes feature extraction in text processing through local computation and high-level combination, as shown in Figure 2.

To extract features, CNN typically employs multiple convolution kernels of varying scales, with multiple convolution kernels for each scale. The feature graph obtained through convolution usually uses the maximum pool in the time series dimension to reduce the dimension of features, and after splicing the reduced dimension features, it enters the structure of a higher-level network for further processing. This network structure is used to base the generator enhancement in GAN in this document. Because the reinforcement learning model used in this paper requires a lot of sampling and trial and error during the training process, increasing the discriminant network’s execution speed will significantly improve the model’s overall speed. Furthermore, the high-order dependency relationship between characters is rarely necessary for a preliminary judgment of a text’s authenticity. The generating network has no way of knowing which part of the sequence results in more reward or punishment, but the discriminating network does. The advanced features extracted from the penultimate layer are used in the classifier network’s last layer to determine the likelihood that the text is real.
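The convolution, max-over-time pooling, and splicing steps described above can be sketched in plain numpy. The kernel widths and counts below are illustrative assumptions, not the paper’s configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_valid(x, kernel):
    """Valid 1-D convolution over the time axis.
    x: (seq_len, emb_dim), kernel: (width, emb_dim) -> (seq_len - width + 1,)"""
    width = kernel.shape[0]
    return np.array([np.sum(x[t:t + width] * kernel)
                     for t in range(x.shape[0] - width + 1)])

def extract_features(x, kernels):
    """Max-over-time pooling per kernel, then splicing into one feature vector."""
    return np.array([conv1d_valid(x, k).max() for k in kernels])

seq_len, emb_dim = 15, 8
x = rng.normal(size=(seq_len, emb_dim))          # embedded text (random stand-in)
# Hypothetical filter bank: widths 2, 3, 4 with two kernels each.
kernels = [rng.normal(size=(w, emb_dim)) for w in (2, 3, 4) for _ in range(2)]

features = extract_features(x, kernels)          # one pooled value per kernel
print(features.shape)
```

The pooled vector has one entry per kernel regardless of sequence length, which is what lets the spliced features feed a fixed-size higher-level network.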

Suppose that the discriminator network uses the structure of a word embedding layer, convolution layers, and a fully connected layer, and that the last output layer uses a sigmoid function to obtain the probability that a sample is real. The sigmoid function has the form

σ(x) = 1 / (1 + e^(−x))

Then such a discriminator network structure can be expressed in the form

D(x) = σ(w · F(x; θ_F))

where D represents the whole discriminator network; {w, θ_F} represents the parameters in the discriminator network; w represents the parameters of the last layer of the discriminator network, specifically a weight vector; and θ_F represents the parameters of the discriminator network except the last layer. F is equivalent to a feature extraction network, because all layers except the output layer can be regarded as feature abstraction layers.

The output of this feature extraction network can be denoted as f, and this process can be expressed by formula (4) alone:

f = F(x; θ_F)

On the one hand, the high-order feature of the text can be input into the output layer of the discriminator as an intermediate result, and on the other hand, it can also be input into the generator network as a guide signal for reinforcement learning.

The generation model used in this paper is an autoencoder based on the sequence-to-sequence framework, composed mainly of an encoder and a decoder. The purpose of the encoder is to fully extract the features other than emotional style from the content. The network structure unit used here is the GRU (gated recurrent unit). Because the encoder is bidirectional, the state of each word in a sentence is related not only to the previous words but also retains the information of the following words. The current state of the GRU unit is

h_t = [h_t(forward) ; h_t(backward)]

where [· ; ·] represents the concatenation of the two hidden state vectors; if both are d-dimensional, the concatenation is 2d-dimensional.
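A single GRU step and the bidirectional concatenation can be sketched as follows. The weight shapes and the simplification of running the backward cell on the same input (rather than over the reversed sequence) are illustrative assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(x, h, Wz, Wr, Wh):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    hx = np.concatenate([h, x])
    z = sigmoid(Wz @ hx)                      # update gate
    r = sigmoid(Wr @ hx)                      # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(2)
d, emb = 4, 3                                 # hidden size d, embedding size
Wz, Wr, Wh = (rng.normal(size=(d, d + emb)) for _ in range(3))

x_t = rng.normal(size=emb)                    # one embedded word
h_fwd = gru_cell(x_t, np.zeros(d), Wz, Wr, Wh)  # forward-direction state
h_bwd = gru_cell(x_t, np.zeros(d), Wz, Wr, Wh)  # backward-direction state (sketch)
h_t = np.concatenate([h_fwd, h_bwd])          # concatenation: 2d dimensions
print(h_t.shape)
```

With d = 4 per direction, the concatenated state is 8-dimensional, matching the 2d rule stated above.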

For the decoder, a single-layer unidirectional GRU network is used. Its inputs are the encoding z and the target emotion c, and each hidden state s_t has the abstract representation

s_t = GRU(s_{t−1}, [y_{t−1}; z; c])

where the current input word y_{t−1} differs according to the target emotion. When the target emotion differs from the input emotion, the output of each unit must depend on the output and hidden state of the previous moment.

The network loss function consists of two parts. First, the adversarial loss ensures the authenticity of the generated samples. Because a set of discriminators is designed to judge the authenticity of different types of samples, authenticity can be judged according to the domain to which the real samples belong and the construction time of the generated samples, with the fake samples fed into the corresponding discriminators, respectively. The final global loss function is

L = L_adv + λ L_rec

where λ controls the ratio of the generator’s adversarial loss L_adv to the reconstruction loss L_rec and can be adjusted to obtain the best generation effect; its value was chosen empirically in the experiment.

Through pretraining, the model can generate texts with high return value in a very short time, reaching the state of a balanced game between the generator network and the discriminator network. The overall structure of the improved model is shown in Figure 3.

The improved model continues to use the GAN framework and is divided into two networks, a generator and a discriminator. The generator is made up of two RNNs, one for feature conversion and the other for text generation. Before the entire system begins training, the text generation module is pretrained on real samples alone, with proper label supervision. After the dimension transformation of the linear layer, the latent factor vector determines, together with the text generation network, the distribution of the next character. The partially generated text is extended by splicing each sampled character onto the partially generated text sequence, and this is repeated until the desired text length is reached.
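The sample-and-splice loop above can be sketched as follows. The character vocabulary is a toy stand-in, and the uniform distribution is a placeholder assumption for what the text generation network would actually output.

```python
import random

random.seed(0)
VOCAB = list("abcdefg ")  # toy character vocabulary

def next_char_distribution(prefix):
    # Placeholder for the text-generation network: a uniform distribution
    # stands in for the conditional distribution over the next character.
    return [1.0 / len(VOCAB)] * len(VOCAB)

def generate(max_len):
    text = []
    while len(text) < max_len:                         # repeat to desired length
        probs = next_char_distribution(text)
        ch = random.choices(VOCAB, weights=probs, k=1)[0]  # sample next character
        text.append(ch)                                # splice onto partial text
    return "".join(text)

sample = generate(15)
print(len(sample))
```

In the real model the distribution at each step would be conditioned on the latent factor vector and the partial text, but the control flow of the loop is the same.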

3.3. English Translation Application Model

The student-centered college English translation teaching mode reforms the traditional teaching mode, develops translation teaching in the direction of active and personalized learning, stimulates students’ interest in learning, and cultivates high-quality translation talents. Therefore, teachers can try various teaching methods, for example, teaching by major and by level: the appropriate translation skills and translation practice differ for students of different majors, so tailoring them accordingly truly achieves the purpose of teaching. Different teaching contents are adopted for students with different foundations. After studying students’ personalities, cognitive types, learning motivation, and so on, clear stratification is carried out to truly achieve the purpose of personalized teaching.

In this process, the teacher is no longer the leader of the whole teaching process but guides and encourages students to carry out inquiry learning inside and outside the classroom. According to different students, different annotations can be made and saved to better suit their different teaching purposes. Before class, teachers can organize according to their own teaching tasks. They can publish the searched corpus on the teaching platform in advance through the campus network for students to view or sort out the original corpus for students in the form of practice. In the process of English translation teaching, it is very important to obtain objective data and use them for correct analysis. Therefore, an application model is proposed to analyze students’ learning ability in the process of English translation teaching, that is, the learning ability analysis model, as shown in Figure 4.

The purpose of the learning ability analysis model is to analyze learning-related characteristics of students in the process of learning English translation, obtain information about students’ learning situation through the analysis, and use the results to formulate specific teaching tasks for students so as to promote the development of English translation teaching. In the data extraction stage, the original data must be preprocessed to eliminate the interference of useless data in the analysis. Because the original data may be partially missing or incomplete, missing values must be filled in according to certain standards before the processed data are input into the neural network for analysis.

Determine the topology of the neural network model, as well as the number of nodes in the input, output, and hidden layers. The hidden layer node calculation formula is

l = sqrt(m + n) + α

where l is the number of hidden layer nodes, n is the number of nodes in the output layer, m is the number of input layer nodes, and α is an adjustment constant, typically between 1 and 10.
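One widely used empirical rule of thumb for sizing the hidden layer is l = sqrt(m + n) + α; the exact constant and the rounding below are assumptions for illustration, since the choice is ultimately tuned per task.

```python
import math

def hidden_nodes(n_input, n_output, alpha=5):
    """Empirical rule of thumb: l = sqrt(m + n) + alpha, alpha in [1, 10]."""
    return round(math.sqrt(n_input + n_output) + alpha)

# e.g. 9 input features describing learning characteristics, 1 output score
print(hidden_nodes(9, 1))   # sqrt(10) + 5 ≈ 8.16, rounded to 8
```

The result is only a starting point; the node count is then refined by validating the trained network.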

For the similarity of concept words between sentences, on the basis of related research [16], the similarity calculation method for concept words is redefined so that the semantic similarity value between two concept words lies in the interval [0, 1], with the relationship given by the following formula:

sim(w1, w2) = α·h / (d + α·h)

where d is the shortest path between the concept words w1 and w2, h is the depth of the concept words’ common hypernym in the hypernym hierarchy, and α is a constant.

From formula (9), it can be seen that when the shortest path between two concepts is smaller and the depth of their common hypernym is larger, the semantic similarity is greater.
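A minimal sketch of a similarity function with the properties just described (shorter path and deeper common hypernym both raise the score, values bounded in [0, 1]) follows; the functional form and the constant are assumptions consistent with those properties rather than the paper’s exact formula.

```python
def concept_similarity(dist, depth, alpha=1.0):
    """Semantic similarity in [0, 1]: rises as the shortest path `dist`
    shrinks and as the common-hypernym `depth` grows."""
    return (alpha * depth) / (dist + alpha * depth)

# Shorter path between concepts -> higher similarity.
print(concept_similarity(1, 3))   # 3 / (1 + 3) = 0.75
print(concept_similarity(4, 3))   # 3 / (4 + 3) ≈ 0.43
```

Identical concepts (dist = 0) score exactly 1, and the score decays smoothly toward 0 as the path lengthens.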

In the vectorized representation of English sentences, the two sentences are first represented by vectors of equal length. For example, for sentences S1 and S2, all the words of the two sentences are collected into a joint word set T, as shown in the following formula:

T = T1 ∪ T2

Duplicate words in T are removed to ensure that the elements of the joint word set T are distinct, where T1 is the set of individual words in sentence S1 and T2 is the word set of sentence S2.
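Building the deduplicated joint word set can be sketched in a few lines; the order-preserving deduplication below is one reasonable choice so the set can later index equal-length sentence vectors.

```python
def joint_word_set(s1, s2):
    """Union of the word sets of two sentences, duplicates removed,
    preserving first-seen order so the set can index sentence vectors."""
    t1, t2 = s1.lower().split(), s2.lower().split()
    return list(dict.fromkeys(t1 + t2))

t = joint_word_set("the cat sat", "the dog sat down")
print(t)   # ['the', 'cat', 'sat', 'dog', 'down']
```

Each sentence can then be mapped onto a vector of length len(t), making the two representations directly comparable.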

4. Experiment and Results

4.1. Experimental Environment and Data Set

The specific experimental environment of this experiment is as follows:

Experimental platform: cloud server with GPU acceleration

Verification platform: MacBook Pro, 16 GB memory, 2.7 GHz Intel Core i7 processor

Training parameter settings: the maximum number of iteration steps is 30,000, and the learning rate is 0.0002

Proportion of data: training set 0.8, test set 0.1, and validation set 0.1

For the data set, this paper uses the Yelp restaurant review data set, the most widely used data set for text style transfer, which aggregates Yelp customers’ restaurant reviews. The original file is stored in JSON format, and each review carries a customer rating from 1 to 5 that expresses the customer’s emotional information. In addition, note that this paper assumes every individual review expresses a single consistent emotion throughout, which is usually not true in reality, especially in reviews. Therefore, this paper keeps only sentences with a length of less than 15 words.
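The JSON loading and length filtering described above can be sketched as follows. The `stars` and `text` field names follow Yelp’s review format, but the inline sample records and the star-to-emotion threshold are illustrative assumptions.

```python
import json

# Hypothetical raw records in the Yelp-review JSON style described above.
raw = [
    json.dumps({"stars": 5, "text": "great food and friendly staff"}),
    json.dumps({"stars": 1, "text": " ".join(["word"] * 20)}),  # too long: dropped
    json.dumps({"stars": 2, "text": "slow service"}),
]

def load_short_reviews(lines, max_words=15):
    """Keep only reviews shorter than max_words; map stars to an emotion label."""
    kept = []
    for line in lines:
        rec = json.loads(line)
        if len(rec["text"].split()) < max_words:
            label = 1 if rec["stars"] >= 4 else 0   # positive vs. negative
            kept.append((label, rec["text"]))
    return kept

data = load_short_reviews(raw)
print(len(data))
```

The single-emotion-per-review assumption enters here: each kept sentence inherits one label from the whole review’s star rating.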

4.2. Experimental Result Analysis

For a feature transformation network, it is necessary to generate potential factor vectors after receiving high-order text features. In this experiment, the length of the latent factor vector is 16, while the length of the high-order feature vector of the text is determined dynamically by the structure of the discriminator network. Figure 5 below shows the experimental results of using the improved GAN-based model and hierarchical reinforcement learning on the Yelp restaurant review data set.

As shown in Figure 5, because it is difficult to complete the task of text generation in the Yelp restaurant review data set, the performance difference between models is more obvious in this experiment. The model performance of the enhanced GAN-based network and hierarchical reinforcement learning is obviously better than that of the pure RNN model and GAN model.

Against the GAN and RNN baselines, the improved models based on GAN and hierarchical reinforcement learning achieve the highest similarity scores, 0.217 and 0.387, respectively. The improved model is a substantial improvement within the GAN framework. Because of its open structure, the text generation model based on the GAN framework offers many directions for improvement. By randomly pooling the input data of the classifier network, the original binary classifier is replaced by a classification model based on cosine similarity, and the classification score is output as a feedback signal so that the classifier’s feedback is more continuous. The open-source code of the RankGAN project is used for comparison on the two data sets in this paper, as shown in Figure 6.

As shown in Figure 6, the text generation quality of the RankGAN model is significantly higher than that of the original reinforcement learning model, and its convergence rate is faster. However, on both data sets, RankGAN’s text generation quality is still inferior to the improved model based on CNN and hierarchical reinforcement learning presented in this paper, demonstrating the improved model’s efficiency in text generation. Table 1 shows the average execution time per training epoch for this paper’s three main models: the basic model is the fastest, and the enhanced model is the slowest. At the same time, considering the model structure together with data set characteristics, it is found that the RNN time bottleneck is primarily caused by data set size, as shown in Table 1.

Based on the above experiments, the improved model obtained by using CNN to enhance the discriminator network and then using hierarchical reinforcement learning to enhance the generator network is obviously superior to the previous model and RNN-based model in terms of text generation quality. In terms of the convergence speed of the model, the improved model has also been greatly improved. Experiments on data sets also show that the improved model, with or without pretraining, can approach the upper limit of model performance.

Therefore, this paper uses a pretrained sentiment classifier for testing: the texts generated by the model of reference [18] and by this model are fed into the classifier to obtain the accuracy. The relevant data are shown in Figure 7.

It can be found that the accuracy of the two models increases steadily in the first 10 epochs, after which the growth slows and stabilizes. The accuracy of the text style transfer model proposed in this paper is clearly higher than that of reference [18], and its maximum accuracy exceeds 85%. The superior performance of this model is very prominent, and it better fits the distribution of reviews with two different opinions. In addition to the accuracy of the generated text’s emotion, the degree to which the main content of the input text is retained in the output is also an important indicator. The evaluation results for content retention are shown in Table 2.

As can be seen from Table 2, for content retention, both our model and the comparison model get good scores because they use similar methods to separate the content and style of hidden vectors.

Because the cost change distance term in the loss function of the generative model’s training is related both to the accuracy of the generated text and to the degree of content retention, 20,000 random texts from the style test sets are sent to the detection model after 20 epochs, as shown in Figure 8.

As can be seen from Figure 8, as the parameter increases, the accuracy of the generated text with respect to the target emotional style also increases. This paper holds that this is because the original intention of the cost change distance loss term is to keep the generated text distribution consistent with the target real emotional text distribution. When its weight increases, the generative model makes that loss smaller, and the weights of the emotion classification and reconstruction loss terms, which matter more for preserving the generated text’s content, are relatively reduced. In this way, the generated text achieves high accuracy of emotional style while keeping the content retention score as high as possible.

The effect of English translation teaching is verified using the PSO (particle swarm optimization) neural network algorithm model. First, data on students’ English translation learning characteristics are collected; the work then proceeds in two steps. The first step is to train the neural network using the PSO algorithm; the second step is to evaluate and test the model’s effectiveness. When using the PSO algorithm to train neural networks, the existing data must be processed and some parameters set. The advance factor is set to 0.01, and the remaining parameters are generated at random. As shown in Figure 9, different particle swarm sizes are set according to the usual conventions for selecting swarm size.
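As a sketch of this training step, the following minimal example evolves a swarm of weight vectors for a tiny feedforward network on toy data, using the standard PSO velocity update. The network size, swarm parameters, and regression task are all illustrative assumptions, not the paper’s actual configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(w, X, n_in, n_hid):
    """Tiny one-hidden-layer network; w packs both weight matrices."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, 1)
    return np.tanh(X @ W1) @ W2

def mse(w, X, y, n_in, n_hid):
    return float(np.mean((forward(w, X, n_in, n_hid).ravel() - y) ** 2))

# Toy regression task standing in for the translation-ability data.
n_in, n_hid = 2, 4
X = rng.normal(size=(64, n_in))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1]

dim = n_in * n_hid + n_hid
n_particles, w_inertia, c1, c2 = 20, 0.7, 1.5, 1.5   # assumed swarm settings

pos = rng.normal(size=(n_particles, dim))            # particle = weight vector
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()
start_err = pbest_err.min()

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([mse(p, X, y, n_in, n_hid) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print(start_err, pbest_err.min())
```

Because PSO needs only fitness evaluations, no gradients, the same loop applies unchanged when the fitness is the network’s error on students’ translation-feature data.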

When combined with the practice of English translation, Figure 9 shows that training neural networks with the PSO algorithm can yield optimal solutions for various particle populations with a very low error value. The sample training model can be used to assess the accuracy of students’ English translation abilities, as well as to assist teachers in estimating students’ translation levels.

5. Conclusions

The traditional education and teaching mode can no longer adapt to the modern education environment, nor can it meet the strong demand of students for the specialization of learning resources, as the new curriculum reform and network informatization accelerate. A method based on CNN and hierarchical reinforcement learning is proposed in this paper to improve the GAN model by increasing the information interaction between the discriminator and generator networks. The discriminator network sends the high-order features of the hidden layer text to the generator network using CNN’s modified text feature extraction method. With an accuracy rate of over 85%, the model proposed in this paper is clearly superior to the comparison model. The model performs exceptionally well and can better fit the distribution of comments from two different emotions. This method can help teachers estimate students’ translation ability and provide a reference for students to improve their English translation level, according to the results of the experiments.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.