Abstract

In the era of the Internet of Everything, English teaching has evolved continuously and undergone radical changes. Informatization and modern technologies are continuously promoting the reform of education, and the application of these technological tools makes information sharing a reality. Timely information sharing contributes to the development of English teaching as a part of curriculum requirements. As a consequence of this rapid progress, theoretical and practical research on IoT-based English teaching sharing systems is also increasing. In this paper, we first conduct an exhaustive survey of popular methodologies and then adopt an LSTM network to process management platform data, addressing the memory-mechanism shortcomings of the current teaching system. At the outset, the standard English curriculum is discussed to establish the background; then, data fusion among various types of information is considered. This is done by fully utilizing the independent and scattered heterogeneous data in the information platform without changing, or while minimally changing, the information available in the shared system. The information fusion problem is then thoroughly analyzed, resulting in the proposal of a multisource and multigranularity information fusion method based on a neural network. The method is studied on multisource heterogeneous data, followed by functional design and process design. The paper also studies CNN-based multisource information association and designs information association algorithms to customize the dataset. Finally, multigranularity feature fusion based on CNN and LSTM is implemented: a granularity calculation algorithm is designed to customize the dataset, and the multigranularity features are fused by integrating CNN and LSTM.
The experimental results show that the proposed method effectively integrates the scattered teaching information in the system, which is conducive to improving the utilization rate of teaching resources and simultaneously promoting the development of teaching information.

1. Introduction

As an important part of the future Internet, the task of the Internet of Things is to build the infrastructure for the extensive connection and perception of everything and to construct a highly connected network composed of networks, information, services, and objective things. Therefore, the essential feature of the Internet of Things is large-scale interconnection, that is, universality. Due to this universality, Internet of Things data has its own characteristics. In terms of volume, because the Internet of Things collects information as data streams indexed by time, the amount of data keeps increasing over time, and multiangle perception gives the data higher attribute dimensions. In the Internet of Things, sensing devices deployed at all levels store and maintain data in their corresponding data centers.

The Internet of Things is transitioning from the “physical network” stage of monitoring objective things based on various intelligent sensing devices and their communication networks to the “virtual resource network” stage of reflecting objective things and states centered on data information [1]. Therefore, such perceptual data information is considered to be one of the key points to realize the goal of the Internet of Things—“wide connectivity, thorough information perception, integrated intelligent services” [2, 3].

With the development of online learning and communication, the environment of English language teaching and learning has changed greatly. As the core of online learning, Internet of Things learning resources have significant advantages over traditional teaching resources. First, the interactive and friendly man-machine interface reflects a people-oriented ideology. Second, Internet of Things resource collections combine pictures, text, sound, video, and images in one medium, creating audiovisual stimulation and psychological impact for learners. Third, the hypertext application structure of the Internet of Things conforms to the characteristics of human thinking and reading habits. Fourth, the rational use of a variety of network design elements greatly enhances both practical and aesthetic value [4]. Therefore, how to effectively organize and manage these complicated Internet of Things resources and provide learners with sufficient, accurate, and rich resources has become a key point in developing online learning platforms, and how to make better use of instructional design ideas to efficiently develop online courses has become the primary task facing instructional designers. Based on our own development practice and exploration experience, this paper designs the overall structure of the English language information sharing platform, as shown in Figure 1.

In the current construction of teaching informatization, teaching administrative departments and teaching institutions at all levels usually have their own teaching information management systems, which store a large number of data resources that objectively reflect local teaching characteristics, the situation of students and teachers, and the operation of the teaching system [5]. However, because of the lack of overall planning, these information management systems are physically isolated from one another: different developers used different development tools and database management systems, and the systems differ in data structures and data types, producing semantic differences and inconsistent field names. As a result, the data resources of the various systems are, at the macro level, in disorder. Therefore, information sharing and overall decision-making among teaching departments, teaching institutions, and schools cannot be implemented, which seriously hinders the construction of national teaching informatization.

To address the problem of Internet of Things information sharing in the context of English language teaching, this paper studies multisource, multigranularity information fusion for English teaching based on deep learning. It integrates existing data and makes full use of isolated, distributed data resources without changing, or while changing as little as possible, the original systems and their business processes. On the basis of the reasonable utilization of existing data resources, it improves the efficiency of data resource use, strengthens independent data communication between departments and agencies, and accelerates the informatization of English teaching. In this way, departments and agencies can jointly implement large-scale data exchange between teaching management information systems [6], improve the degree of teaching informatization, and actively respond to national informatization development policy.

2.1. Research Status of the Internet of Things

In the continuous exploration of the Research on the Internet of Things, researchers at different stages come to understand the Internet of Things from different perspectives, as shown in Figure 2.

Initially, the usual perception was that networks using EPC and RFID to complete the identification, positioning, and tracking of objects could be implemented for IoT application development. The rapid development of embedded technology, especially sensor equipment and sensor network technology, has made IoT-based applications extremely popular across all verticals of society [7]. The application of semantic technology in the IoT separates the raw data sent by intelligent sensing devices, and the resources accessible through the web interface, from the meanings expressed by this data, and then integrates and fuses those meanings. On this basis, a strict virtual mapping of "things" in the objective world and their situations is constructed, forming an intelligent space of the Internet of Things. The application of semantic technologies in the Internet of Things not only enables an application-oriented resource network but also enables machines to begin to understand the meaning of data. In this way, machines can process information automatically without human intervention, and machines can understand each other. The fundamental task of the "semantic" Internet of Things is to separate the actual raw data from the meaning expressed by the data and to manage and process the data and information in this huge network of data resources [8]. Its fundamental purpose is to realize a truly widely connected resource network that can collaborate freely, so that people and things, and things and things, can fully communicate and share information, making it possible for data and information to flow freely in the Internet of Things.

Many scientific research institutions have carried out relevant research. Xu summarized the key technologies involved in the Internet of Things and focused on common problems in the application of the Internet of Things to industrial automation and the related solutions. Chen discussed the QoS, energy efficiency, and security issues involved in indoor Internet of Things applications and smart home scenarios from the perspective of M2M and proposed a solution that jointly considers performance and adaptability [9]. Gubbi proposed a cloud service-oriented Internet of Things model from the perspective of cloud computing, discussed the problems to be solved in data privacy, realized the classification and processing of private data in the Internet of Things based on the concepts of public and private clouds, and used the Aneka platform to build a prototype system for a cloud computing-oriented Internet of Things. Miorandi summarized the key technologies, application fields, and related security issues of the Internet of Things. Sen discussed the problems and difficulties in formulating relevant standards for the Internet of Things and presented a complete framework of Internet of Things standards that the industry needs to formulate. By comparing Internet of Things applications in different scenarios, Zorzi and Gluhak summarized nine unique constraints on Internet of Things applications: scale, heterogeneity, repeatability, connectivity, concurrency, experimental environment, mobility, user interaction, and scenario limitations.

Sharing among different users gave rise to the concept of the Web of Things [10]. Guinard developed Internet of Things application systems for different scenarios based on the existing RESTful architectural style of distributed Web systems, making the systems more scalable and narrowing the distance between the Internet of Things and end users. Zhong proposed the concept of the Wisdom Web of Things from the perspective of the harmonious coexistence of human beings, computers, and intelligent devices in the future information world [11]. Li mainly discussed how WoT can use existing standards in an open Web environment to better exploit the properties of smart objects, proposed a general framework for developing WoT-related applications, and implemented a prototype system.

2.2. Research Status of Teaching Information Sharing Methods

The new era has brought about a new society and significant breakthrough changes for its development. In this new historical journey, the computer is changing our way of living, working, and exchanging information. These new changes and forms also confront the traditional way of learning with new opportunities and challenges [12]. Multimedia computers and network communication technology, as new media for knowledge dissemination, effectively promote people's cognitive development, allowing thousands of people from different regions to gather together and learn through online education. The Internet-plus-education mode gives students more and more learning resources and gives teachers more and richer teaching modes. The field of basic education is building an open learning ecology with Internet information technology and promoting the integration of the open characteristics of informatized development into courses, teaching, evaluation, and other links of the education and teaching system. An online teaching information sharing platform can break time and geographical constraints and enable everyone to achieve an efficient learning effect in the best way [13]. Traditional education methods, limited by region, time, and information, cannot meet learners' desires for independent learning, personality pursuit, psychological needs, culture and art, and spiritual needs. Therefore, online learning shows strong vitality and is gradually expanding its influence around the world.

The teaching information sharing platform is a product of rapidly changing network technology. It is aimed at achieving the functions of student information management, teacher information management, course resource management, online teaching, and teaching forum management through website technology. Through the planning of these functions, students can better break through restrictions and carry out independent learning. After fully studying the present situation and development trend of network course systems, this paper puts forward the direction and characteristics of the research and design of a network teaching communication system and combines computer software technology with computer database management technology to realize the functional planning of the system. The implementation of the teaching information sharing platform is designed to achieve openness, richness, classroom efficiency, and the development of the teaching system: students register accounts online through the platform, receive online teaching, share knowledge through the teaching forum, post freely on the forum, and ask questions online, while teachers answer questions and provide teaching guidance online. Students can also buy paid courses online, collect learning courses, and prepare courses to learn, continuing a series of new learning modes under Internet plus education [14]. We believe that, consistent with the basic direction of national education reform and development, realizing the support of Internet plus education will play a leading role in reform in the field of basic education and in the construction of diversified, high-quality digital courses that serve learners.

Stepping into the 21st century, we face a brand new era. Information technology with the Internet of Things at its core is bringing great changes to human society. It is changing the way humans work, live, run the economy, and exchange information, giving a new look to today's era, and these changes also confront the traditional way of learning with new opportunities and challenges. Multimedia computers and network communication technology, as ideal cognitive tools, can effectively promote learners' cognitive development, enabling thousands of people from different regions, different classes, and different learning styles to break the limits of time and space and obtain the best learning effect at the lowest cost [15]. At the same time, with the continuous improvement of living standards, people have a growing need for self-development. Traditional education methods are limited by region, time, and information and cannot meet their desires for independent learning, personality pursuit, psychological needs, culture and art, and spiritual needs.

Therefore, online teaching information exchange is increasingly showing its strong vitality and gradually expanding its influence around the world. Some universities, and even some information technology companies at home and abroad, have designed network courseware for teaching or training; they have developed various network courseware databases to manage teaching and various tools to organize teaching activities. The rapid development and popularization of network teaching have promoted the development and refinement of teaching design theory, so network teaching must be the direction of reform and development [16]. At present, network teaching has received close attention at home and abroad, and the construction of network courses, online learning tools, and environments has been deeply studied. Network teaching depends to a great extent on students' independent learning, so the basic functional structure of network courseware should be designed according to the requirements of students' independent learning.

2.3. Research Status of Information Fusion

Information fusion technology based on Bayes theory updates the probability of event occurrence according to observed evidence; that is, new information is used to update the prior probability of an event, thereby realizing information fusion. A Bayes network is an extension of the Bayes method: it uses a directed graph model based on a network structure to represent the joint probability distribution of uncertain variables and to reflect possible dependencies between variables. The multisensor system is modeled as a Bayes network; the posterior probability is then calculated by Bayes' rule, and the hypothesis with the highest posterior probability is selected as true. For many real-time application fields, such as military applications, which are limited by time and resources and need to make fast decisions, active information fusion is studied based on dynamic Bayes networks [17]. Active information fusion can not only select the information source with the greatest information value for the problem being solved but also ensure the minimum fusion cost (including computational complexity and the resources needed to obtain the information).
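The sequential Bayesian update described above can be sketched in a few lines of Python. The sensor likelihoods below are illustrative values rather than figures from this paper: each observation updates the running prior, and the final posterior is the fused estimate of the event probability.

```python
# Sketch of Bayesian information fusion: two sensors sequentially update
# the probability of an event E. Likelihood values are illustrative only.

def bayes_update(prior, p_obs_given_e, p_obs_given_not_e):
    """Posterior P(E | obs) via Bayes' rule."""
    evidence = p_obs_given_e * prior + p_obs_given_not_e * (1 - prior)
    return p_obs_given_e * prior / evidence

prior = 0.5                              # P(E) before any observation
post1 = bayes_update(prior, 0.9, 0.2)    # sensor 1 reports a detection
post2 = bayes_update(post1, 0.8, 0.3)    # sensor 2 confirms; fused estimate
print(round(post2, 4))                   # → 0.9231
```

Each update treats the previous posterior as the new prior, which is exactly the "use new information to update the prior probability" idea; the hypothesis with the higher posterior (here, that E occurred) would be selected as true.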

In a multisensor system, uncertainty arises from the precision errors of the sensors in the target sensing data, the internal structure and operating factors of the system, external environmental conditions, and the reliability of data transmission, among other factors. In information fusion based on D-S evidential reasoning, the information collected by each sensor is taken as evidence, and a corresponding basic probability assignment function (or belief function) is established. Under the same frame of discernment, different pieces of evidence are combined into new evidence using the combination formula of evidence theory, and a decision is then made according to the decision rule [18]. Fuzzy set theory is an extension of general set theory; it is mainly used to describe inaccurate and fuzzy concepts and has been successfully applied in the field of information fusion.
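Dempster's rule of combination, the combination formula referred to above, can be illustrated with a small Python sketch. The two-hypothesis frame of discernment {A, B} and all mass values here are invented for illustration: each sensor's evidence assigns mass to subsets of the frame, intersecting subsets reinforce each other, and conflicting mass is normalized away.

```python
# Dempster's rule of combination for two pieces of evidence over the
# frame {A, B}. Mass assignments are illustrative, not from the paper.

def dempster_combine(m1, m2):
    """Combine two mass functions; keys are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2            # mass assigned to disjoint sets
    # Normalize by the non-conflicting mass (1 - K)
    return {s: v / (1 - conflict) for s, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
theta = A | B                                  # total ignorance
m1 = {A: 0.6, B: 0.1, theta: 0.3}              # evidence from sensor 1
m2 = {A: 0.5, B: 0.2, theta: 0.3}              # evidence from sensor 2
fused = dempster_combine(m1, m2)
best = max(fused, key=fused.get)               # decision rule: largest mass
print(sorted(best))                            # → ['A']
```

Because both sensors lean toward A, the combined mass on A rises above either individual assignment, which is the reinforcing behavior that makes evidential reasoning attractive for multisensor fusion.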

In sensor data fusion based on SVM, input feature vectors are first formed from multiple sources; because information from multiple sources may conflict or be inconsistent or incomplete, some components cannot be obtained directly, and the input vectors must be revised for completeness without modifying the underlying data. The revised feature vectors are then classified by the SVM classifier. In addition, a genetic algorithm can be used to optimize the parameters and select the features of the information fusion system, mainly using the nonlinear integral generated by a nonadditive set function. A fuzzy-genetic information fusion method uses a fuzzy integral function to reason over and combine information, with the operator parameters obtained by a genetic algorithm. The information fusion method based on rough set theory is a mathematical tool for dealing with uncertainty problems [19]. An artificial neural network simulates the structure and function of the human nervous system to complete the conversion from signal to information. The information fusion method based on neural networks mainly exploits the network's powerful classification learning ability; in effect, it is a fusion classifier that acquires its classification ability through learning. The transfer function of the neurons is the sigmoid (S-shaped) function, and the network is trained with the error back-propagation algorithm: the mean squared error between the expected and actual outputs at the output nodes is propagated back toward the input layer step by step, and the connection weights are adjusted by gradient descent until the mean squared error is minimized.
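As a minimal illustration of the back-propagation procedure just described (sigmoid transfer functions, gradient descent on the mean squared error), the following NumPy sketch trains a one-hidden-layer fusion classifier on a toy logical-OR task. The data, layer sizes, and learning rate are illustrative choices, not this paper's configuration.

```python
import numpy as np

# One-hidden-layer network with sigmoid (S-shaped) activations, trained
# by gradient-descent back-propagation to minimize mean squared error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # two toy "sources"
y = np.array([[0], [1], [1], [1]], float)              # fused decision (OR)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 8))    # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))    # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)     # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                # derivative of MSE w.r.t. the output
    d2 = err * out * (1 - out)   # back-propagate through the S function
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2          # gradient-descent weight updates
    b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1
    b1 -= lr * d1.sum(axis=0)

mse = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
print(round(mse, 4))
```

The error signal `d2` at the output layer is pushed back through `W2` to obtain `d1` at the hidden layer, exactly the step-by-step back-propagation toward the input layer described above.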

3. Algorithm Design

3.1. Overall Design

In line with the principle of doing great things with less money, making full use of the limited investment, and selecting the equipment with the best performance and price under the premise of ensuring the advanced nature of the network, we believe that the construction of teaching information sharing platform should follow the following principles: (1) advanced nature, (2) standardization and openness, (3) reliability and availability, (4) flexibility and compatibility, (5) practicality and economy, and (6) security and confidentiality.

In view of the different purposes of multisource information, different system environments, and different data characteristics, information association faces different problems and challenges [20]. The data processed by information association methods may have different characteristics, among which the biggest challenge is the imperfection of information, mainly manifested as uncertainty, fuzziness, mutual exclusion, ambiguity, and so on. Combined with the existing data-processing literature, several problems need to be considered in information association and fusion.

First, the uncertainty of data: there are many different definitions of uncertainty, which can generally be understood from two aspects. One is the randomness of event changes, that is, the objective state of event occurrence; the other is the lack of absolute certainty about the source of the objective data; this system mainly concerns the former. In fuzziness, the data, or the name of the thing it describes, is generally not a single value but an interval or set, so some fuzzy and ambiguous data will be generated; in addition, incomplete information or data increases the difficulty of data interpretation and reasoning. In mutual exclusion, due to the objective factors of heterogeneous data, conflicting interpretations of the same thing are very likely, and network conditions and acquisition accuracy can also make the data inaccurate. In ambiguity, multiple source data acquisition nodes may be affected by the same noise, or problems such as double counting may arise during data transmission, association, and fusion. If the problem of data correlation is not solved, the association-fusion algorithm may produce less accurate estimates [21]. Against this background, this paper uses a multisource information association method based on CNN. The main idea is to use a CNN to model, automatically extract features from, and learn the correlations between multisource heterogeneous data; the CNN model learns automatically and, by learning new association rules, continuously improves the robustness of the system.

The training of the CNN model requires large-scale datasets. Through the information association algorithm designed in this section, database information such as the association relations, field names, attributes, and primary keys of each source database can be abstracted, data labels can be extracted, and datasets for the convolutional neural network can be constructed. The algorithm flow chart is shown in Figure 3.
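A hypothetical sketch of this dataset-construction step is shown below. The table names, field names, types, and primary-key flags are all invented here for illustration; the actual algorithm abstracts this metadata from each source database and attaches labels so the samples can later be consumed by the CNN.

```python
# Hypothetical sketch: database metadata (field name, type, primary-key
# flag) is flattened into labeled text samples. Names are invented.

tables = {
    "student": [("stu_id", "INT", True), ("stu_name", "VARCHAR", False)],
    "course":  [("course_id", "INT", True), ("course_name", "VARCHAR", False)],
}

def make_dataset(tables):
    samples = []
    for table, fields in tables.items():
        for name, dtype, is_pk in fields:
            text = f"{table} {name} {dtype}" + (" PK" if is_pk else "")
            samples.append((text, table))   # label = source table
    return samples

dataset = make_dataset(tables)
print(len(dataset))   # → 4
```

Each (text, label) pair plays the role of one training example; in the real system the label would encode the association relation the network is meant to learn.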

3.2. Model Design

This model is a variant of the CNN proposed by Collobert in 2011, as described in Figure 4. Among them, the random CNN model is the parent model, in which all word vectors are randomly initialized. The initialized word vectors are not fixed; they are gradually modified by the BP algorithm during subsequent training. The static CNN model is a variant of the random CNN model whose initial vectors are obtained by Word2Vec character embedding. In this model, all word vectors are static, and the learning process of the CNN modifies only the other parameters of the model [22]. The nonstatic CNN model is the same as the static CNN model, except that the pretrained vectors are fine-tuned for each task. The multichannel CNN model has two sets of word vectors, called channels, both obtained by Word2Vec character embedding. Each convolution kernel in the network is applied to both channels, but the gradient propagated back through the BP algorithm affects only one channel.
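To make the convolution-over-word-vectors idea concrete, the following NumPy sketch runs a simplified single-filter, single-channel forward pass with random stand-ins for the Word2Vec embeddings; all sizes are illustrative. In the static variant the embedding table `E` stays frozen during training, while the nonstatic variant would also update it through back-propagation.

```python
import numpy as np

# Simplified text-CNN forward pass: embed a token sequence, slide one
# convolution filter of width 3 over it, then max-pool over time.
rng = np.random.default_rng(0)
vocab, dim, sent_len, width = 100, 16, 10, 3
E = rng.normal(0, 0.1, (vocab, dim))       # embedding table (one channel)
W = rng.normal(0, 0.1, (width * dim,))     # one convolution filter
b = 0.0

tokens = rng.integers(0, vocab, sent_len)  # a toy token-id sequence
X = E[tokens]                              # sent_len x dim matrix

# One activation per window of `width` consecutive words, then max-pool.
feats = [np.tanh(X[i:i + width].ravel() @ W + b)
         for i in range(sent_len - width + 1)]
feature = max(feats)                       # max-over-time pooled feature
print(len(feats))                          # → 8
```

A real model would use many filters of several widths, and the pooled features would feed a fully connected softmax layer; this sketch only shows how one filter produces one pooled feature.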

In the forward propagation, the training data is transmitted from the bottom layer of the network to the upper layer of the network. In the process of back propagation, if the prediction results obtained in the forward propagation stage are found to be inconsistent with the label of the dataset, back propagation will be carried out to modify the network parameters to realize the training of the neural network. This training process is called the learning process.

For English text data, affected by different processing granularities and the many possible choices of neural network model, any single text classification method has limitations. The convolutional neural network works on the basis of feature extraction: it extracts local features of the data, yielding enhanced performance when processing normalized input data, and by emphasizing local information it reduces calculation time. The Bidirectional Gated Recurrent Unit network (BGRU) processes data with time-series attributes; its basic concept is that an input sequence is passed through both a forward and a backward recurrent network, and the outputs of the two are fed into the same output layer. A CNN+BGRU structure model has been proposed in the literature; it uses character-level granularity as the model input and performs better than traditional deep learning classification [23]. Here, we choose two granularities, namely, phrase (word) granularity and single-character granularity. The amount of semantic information varies with the choice of granularity: character granularity is the smallest unit of text information processing, while word granularity can extract richer information in text processing. In this system, the CNN extracts text features through multiple convolution kernels and hidden layers and feeds them into the classifier through pooling and fully connected layers, while the LSTM processes data sequentially and learns long-term dependencies.
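The two granularities can be illustrated with a trivial Python sketch; the sample string is invented. Word granularity keeps whole tokens, while single-character granularity decomposes them further, trading semantic richness for finer coverage.

```python
# The same field text split at the two granularities discussed above.
text = "student name"   # an illustrative field value

word_tokens = text.split()                      # word granularity
char_tokens = [c for c in text if c != " "]     # single-character granularity

print(word_tokens)       # → ['student', 'name']
print(len(char_tokens))  # → 11
```

Each token stream is then embedded separately (e.g., with Word2Vec) and fed to its own CNN or LSTM branch, which is what makes the later multigranularity fusion possible.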

The learning process of LSTM is shown in Figure 5. The Long Short-Term Memory network, abbreviated LSTM, is a special type of recurrent neural network (RNN) capable of learning long-term dependencies. The model was initially introduced by Hochreiter and Schmidhuber in 1997, and LSTMs help resolve problems related to long-term dependencies. In an ordinary recurrent neural network, a chain of repeating neural network modules is formed, each with an extremely simple structure such as a single tanh layer. LSTMs have a similar chain structure, but instead of a single neural network layer, each repeating module consists of four interacting neural network layers. LSTM works on the basis of feedback connections, so the recurrent network can process not only single data points but also sequences of data such as speech or video. The common structure of an LSTM comprises a cell, an input gate, an output gate, and a forget gate; the cell remembers values over arbitrary time intervals, and the three gates regulate the flow of information into and out of the cell. LSTM networks work best for classification, processing, and prediction based on time-series data. LSTM has been used in various applications, including the teaching sector. For example, the study in [24] presented a novel teaching-and-learning optimization model that implemented LSTM-based sentiment analysis for stock price prediction using Twitter data; the LSTM network helped classify tweets by positive and negative sentiment relevant to stock prices, and the contents of the tweets were correlated with the stock prices. The study in [25] used LSTM to predict students' learning behavior based on the analysis of eye movement, which helped in redesigning the curriculum and related resources in a flipped classroom.
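The gating behavior of the cell, input gate, output gate, and forget gate described above is conventionally written as follows. This is the standard LSTM formulation rather than a configuration specific to this paper; \(\sigma\) denotes the logistic function, \(\odot\) the elementwise product, and \(W_\ast, U_\ast, b_\ast\) learned parameters.

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) &&\text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) &&\text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) &&\text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) &&\text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&\text{(cell-state update)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(hidden output)}
\end{aligned}
```

The forget gate \(f_t\) lets the cell state discard information and the input gate \(i_t\) lets it be updated, which is exactly the "forget and update information according to the cell state" behavior exploited in this paper.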
The studies in [26-28] suggested the use of AI in the English education system to analyze the various factors pertaining to learning adaptability; their results enabled academicians to propose strategies that improved students' English education. CNN is good at extracting local characteristics of data, while LSTM tends to understand semantics more globally and is more suitable for processing English text data because it can remember values over sequences of indefinite length. Considering that the LSTM network model can learn through its memory units and can also forget and update information according to the cell state, we use LSTM to perform the word segmentation task.

In the database, the fields or records in the data tables of the source data are divided into words or single characters according to word granularity and character granularity, respectively, and Word2Vec is applied to the two resulting corpora for embedding. Finally, the outputs of the word-granularity and character-granularity CNN/LSTM models are fused according to certain weights, so as to obtain higher-precision text classification results and identify data features effectively [29]. The purpose of multigranularity feature fusion of text data is to improve the accuracy of text classification.

The word segmentation process is transformed into the process of annotating each character in a sequence of text. In English text, since each character occupies a certain position within a word, word segmentation can be regarded as a machine learning process that learns the positional information of characters within words. Therefore, the fields or records in the data tables of the database are represented by pretrained word embedding vectors and character embedding vectors, so that each field is transformed into text vectors of different granularities [30]. Data classification was carried out with the word-granularity CNN, word-granularity LSTM, character-granularity CNN, and character-granularity LSTM, respectively. Finally, the outputs of the four models were combined for feature fusion according to different weights (based on loss function and accuracy values), yielding accurate data classification results. The algorithm flow chart is shown in Figure 6.
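The final weighted-fusion step can be sketched as follows. The per-model class-probability outputs and accuracies are illustrative numbers, and weighting each model by its normalized test accuracy is one plausible reading of "calculated according to different weights"; the paper's exact weighting may differ.

```python
import numpy as np

# Fuse the class-probability outputs of the four granularity models with
# weights proportional to each model's test accuracy (illustrative data).
probs = np.array([
    [0.7, 0.2, 0.1],   # char-granularity CNN
    [0.6, 0.3, 0.1],   # char-granularity LSTM
    [0.5, 0.4, 0.1],   # word-granularity CNN
    [0.8, 0.1, 0.1],   # word-granularity LSTM
])
acc = np.array([0.90, 0.88, 0.85, 0.92])   # per-model test accuracies

w = acc / acc.sum()        # normalize accuracies into fusion weights
fused = w @ probs          # weighted average of the four distributions
pred = int(np.argmax(fused))
print(pred)                # → 0
```

Because the fused vector is a convex combination of probability distributions, it remains a valid distribution, and more accurate models pull the final decision more strongly.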

4. Experiments

4.1. Dataset and Model Training

This paper conducts experiments on Google's open-source deep learning framework TensorFlow, using Python as the development language. We use a minibatch Adadelta optimization method to train our model. The batch size is 256, the number of epochs is 100, the dropout rate is 0.4, and the learning rate is 0.01 (CPU: 2x Intel Xeon E5-2620, GPU: 2x Nvidia Tesla K20, 32 GB memory). For evaluating English part-of-speech performance, precision, recall, and F-measure, which are commonly used in word segmentation evaluation, are adopted. Cross validation is adopted to divide the dataset, with the following basic idea:
(1) The whole training set is divided evenly into 5 mutually exclusive subsets covering a total of 30 categories, with 150,000 samples in total, so each subset contains 30,000 samples; the subsets are denoted train01 to train05.
(2) One subset (randomly selected; train05 in this experiment) is used as the test set, and the remaining data as the training set. The resulting sizes are as follows: training set, 120,000 samples; test set, 30,000 samples.
(3) Model training is carried out four times according to the training set divided in Steps 1 and 2, and test accuracy is obtained on the test set.
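The split in steps (1) and (2) can be sketched as follows, with integer indices standing in for the 150,000 samples; the random seed and fold choice are illustrative.

```python
import random

# Divide 150,000 samples into five equal, mutually exclusive subsets;
# one subset is the test set and the other four form the training set.
random.seed(0)
indices = list(range(150_000))
random.shuffle(indices)

folds = [indices[i * 30_000:(i + 1) * 30_000] for i in range(5)]
test_fold = 4                     # e.g., the subset called train05
test_set = folds[test_fold]
train_set = [i for k, f in enumerate(folds) if k != test_fold for i in f]

print(len(train_set), len(test_set))   # → 120000 30000
```

Repeating this with each fold in turn as the test set would give the full 5-fold cross-validation estimate; the experiment here fixes one fold as the held-out test set.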

4.2. Experiment Results and Analysis

In this system, four kinds of neural network models (character-granularity CNN, character-granularity LSTM, word-granularity CNN, and word-granularity LSTM) are trained first. Then, using the dataset division described above, the training set is split into four parts and each neural network is trained on them in turn, yielding four trained instances per network. The equal-weight fusion of the four test results is taken as the result of a single neural network. Finally, the test results of the four neural network models are fused according to their corresponding weights, where each weight is the average accuracy of that neural network over the four test runs.
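The accuracy-weighted fusion described above can be sketched as follows. This is a minimal illustration assuming each model outputs class probabilities; the accuracy values and the 3-class setup are made up for the example (the system itself has 30 categories):

```python
import numpy as np

# Per-model class-probability outputs for one sample (4 models, 3 classes).
probs = np.array([
    [0.6, 0.3, 0.1],   # character-granularity CNN
    [0.5, 0.4, 0.1],   # character-granularity LSTM
    [0.2, 0.7, 0.1],   # word-granularity CNN
    [0.3, 0.6, 0.1],   # word-granularity LSTM
])

# Average test-set accuracy of each model (illustrative values).
acc = np.array([0.80, 0.82, 0.88, 0.90])
weights = acc / acc.sum()   # normalize so the weights sum to 1

fused = weights @ probs     # weighted sum of the probability vectors
predicted_class = int(np.argmax(fused))
print(predicted_class)      # 1
```

More accurate models thus contribute proportionally more to the fused prediction; with equal `acc` values this reduces to the equal-weight fusion used for a single network's four instances.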

The experiment was carried out in three stages. In stage 1, convolutional neural network and LSTM models with different network depths were tested to compare the influence of the number of network layers on the experimental results. In stage 2, character granularity and word granularity were selected in turn, so as to compare the effects of different granularities on the results. In stage 3, feature fusion was performed on the text data to verify the effectiveness of multigranularity feature fusion. Network depth has a great influence on experimental results, and different network depths often produce different classification results; therefore, it is very important to select a network layer structure suitable for the dataset. In the first stage, character granularity was selected as the basic unit for feature processing in order to test the performance (accuracy) of networks with different numbers of layers. The experimental results are shown in Table 1, where the number before CNN or LSTM in the "Model" column represents the number of network layers; for example, 3-LSTM denotes a 3-layer LSTM network. Notably, neither the LSTM model nor the CNN model requires manually engineered text features. In addition, the LSTM model can mine more features than the CNN model and has a more significant effect on improving classification accuracy.
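To make the LSTM memory mechanism referred to above concrete, the sketch below implements a single LSTM cell step in NumPy with the standard gate equations. The weights are randomly initialized; this is a didactic illustration, not one of the trained models from Table 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

HIDDEN, INPUT = 4, 8
# One weight matrix per gate: forget (f), input (i), output (o), candidate (c).
W = {g: rng.standard_normal((HIDDEN, INPUT + HIDDEN)) * 0.1 for g in "fioc"}
b = {g: np.zeros(HIDDEN) for g in "fioc"}

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate: what to erase
    i = sigmoid(W["i"] @ z + b["i"])        # input gate: what new info to store
    o = sigmoid(W["o"] @ z + b["o"])        # output gate: what to expose
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate cell content
    c = f * c_prev + i * c_tilde            # memory cell carries long-range state
    h = o * np.tanh(c)
    return h, c

h = c = np.zeros(HIDDEN)
for x in rng.standard_normal((5, INPUT)):   # run over a 5-step sequence
    h, c = lstm_step(x, h, c)

print(h.shape)  # (4,)
```

The gated cell state `c` is what lets the LSTM retain features over long spans of the sequence, which is the property credited above for its higher classification accuracy.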

In addition, the classification accuracies are compared in Figure 7. With the same number of neural network layers, the LSTM model is more accurate than the CNN model. It was originally expected that a 5-layer deep convolutional neural network would classify phrases better than a 3-layer one, but the experimental results show otherwise. The author believes the reason is that the 5-CNN model overfits during training, which weakens the generalization ability of the model.

In the second stage of the experiment, the two models with the best performance at character granularity are selected for a comparative experiment at word granularity. After word segmentation of the text data, the word vector corresponding to each word is generated through Word2Vec to construct the text word-vector matrix. The same experiment is then carried out on the 3-CNN and 5-LSTM models. The comparison of different text granularities is shown in Table 2, where the "Model" column lists the neural network models and the "Character granularity" and "Word granularity" columns report the cross-validated training and test accuracies under the two granularities. Because word granularity contains more semantic information than character granularity, the feature fusion effect of the CNN and LSTM models at word granularity is better than at character granularity.
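The word-vector matrix construction mentioned above can be sketched as follows, using a toy embedding lookup with zero-padding to a fixed length; the random table stands in for the Word2Vec vectors used in the experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

EMB_DIM, MAX_LEN = 8, 6
vocab = {"data": 0, "fusion": 1, "improves": 2, "teaching": 3}
embeddings = rng.standard_normal((len(vocab), EMB_DIM))  # stand-in for Word2Vec

def text_to_matrix(words):
    """Stack word vectors into a MAX_LEN x EMB_DIM matrix, zero-padded."""
    mat = np.zeros((MAX_LEN, EMB_DIM))
    for row, w in enumerate(words[:MAX_LEN]):  # truncate overly long texts
        if w in vocab:
            mat[row] = embeddings[vocab[w]]
    return mat

m = text_to_matrix("data fusion improves teaching".split())
print(m.shape)             # (6, 8)
print(bool(np.all(m[4:] == 0)))  # padded rows stay zero -> True
```

Fixing the matrix shape this way is what allows texts of different lengths to be batched into the 3-CNN and 5-LSTM models.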

The accuracy of 3-CNN at character granularity and word granularity is shown in Figure 8. The reason for the difference is that the word-granularity model already encodes the linguistic information necessary for the modeling task, compared with the character-granularity model. Word segmentation has its own problems, however; one of the key problems is polysemy, which word-level units alleviate to some extent.

From stages one and two of the experiment, we can draw the following conclusion: the choice of neural network model, the number of network layers, and the classification granularity all lead to changes in text classification accuracy; thus, changing a single aspect of a neural network model is often insufficient for complete text feature extraction. In view of this conclusion, and in order to obtain a better feature fusion effect, different neural network models are mixed in the third stage of the experiment, so as to comprehensively consider different features of the text. Table 3 shows the experimental results of feature fusion across neural network models. As can be seen from Table 3, after fusing character-granularity and word-granularity features, the CNN and LSTM models can better focus on the interaction between characters and words and better capture the semantic connections between units. The accuracies of the 5-LSTM and 3-CNN models are both improved by about 2%, and the accuracy of their fusion is also improved to a certain extent. The experimental results show that combining CNN and LSTM is very effective for feature fusion.
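The CNN+LSTM combination referred to above can be sketched as follows: a 1D-convolution feature and a recurrent sequence feature are extracted from the same embedded text and concatenated before classification. The weights are random and the recurrent branch is a simple tanh recurrence standing in for the LSTM, so this illustrates only the fusion structure, not the trained system:

```python
import numpy as np

rng = np.random.default_rng(3)
SEQ_LEN, EMB_DIM, FILTERS, HIDDEN, CLASSES = 10, 8, 4, 4, 3

x = rng.standard_normal((SEQ_LEN, EMB_DIM))  # one embedded text

# CNN branch: window-3 filters + max-over-time pooling.
conv_w = rng.standard_normal((FILTERS, 3 * EMB_DIM)) * 0.1
windows = np.stack([x[i:i + 3].ravel() for i in range(SEQ_LEN - 2)])
conv_feat = np.max(np.tanh(windows @ conv_w.T), axis=0)  # (FILTERS,)

# Recurrent branch: last hidden state summarizes the whole sequence.
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_x = rng.standard_normal((HIDDEN, EMB_DIM)) * 0.1
h = np.zeros(HIDDEN)
for step in x:
    h = np.tanh(W_h @ h + W_x @ step)

# Fusion: concatenate both feature vectors, then classify.
fused = np.concatenate([conv_feat, h])  # (FILTERS + HIDDEN,)
logits = rng.standard_normal((CLASSES, FILTERS + HIDDEN)) @ fused
print(fused.shape)  # (8,)
```

The concatenated vector carries both the local n-gram patterns from the convolutional branch and the long-range sequential context from the recurrent branch, which is the complementarity the fusion scheme exploits.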

5. Conclusions

This paper takes the Internet of Things as the basic technology for developing the proposed framework. The study first investigates the basic theories and key technologies of IoT information fusion in combination with neural networks, and on this basis constructs an English-teaching information sharing method. The sharing platform is designed around the concepts of application, openness, and socialization, which improves teaching efficiency and competitiveness and helps schools build an education platform that combines formal learning, informal learning, online learning, and offline learning. Information fusion is used to construct the neural network model. The data produced by the multigranularity classification of the two neural network models supported the feature fusion operation, and experiments were designed to verify the effectiveness of the multimodel, multigranularity fusion scheme proposed in this paper. Cross-validation was used to train the models and perform multigranularity feature fusion, and the combined CNN+LSTM model yielded higher accuracy than the single-granularity models. This method provides diversified learning forms and multichannel information sharing methods for English language teaching, helping teachers guide students to actively learn English language knowledge.

Data Availability

The datasets used during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The author declares that he has no conflict of interest.