Abstract

Data Envelopment Analysis (DEA) is a technique for assessing the efficiency of entities, such as programs and organizations, that are responsible for converting available resources into outputs of interest, and it has been applied to the analysis of a wide variety of activities. DEA is a fractional programming model that can accommodate multiple outputs and inputs without requiring prior weights or an explicit functional relationship between inputs and outputs. It computes a scalar measure of efficiency and determines efficient levels for both the inputs and outputs of the organization under evaluation. The assessment of the quality of English instruction in colleges and universities is primarily plagued by two issues: first, the evaluation index system lacks coverage, and second, the evaluation model struggles when confronted with highly nuanced indicators. To address these issues, this work investigates the evaluation of the quality of English instruction in colleges and universities and develops an evaluation model based on the DEA fusion algorithm. The model takes into account both quantitative and qualitative research findings. The implementation results suggest that the proposed DEA fusion algorithm can successfully assess the quality of English instruction in colleges and universities, outperforming the traditional Multilayer Perceptron (MLP) algorithm and the Decision Tree (DT) algorithm. These findings are helpful in promoting the enhancement of the quality of English instruction in colleges.

1. Introduction

DEA is a method for calculating the product input-output ratio by solving linear programming models [16]; it is an evaluation method for data efficiency. Since the approach was first proposed in 1978, the volume of research on DEA has grown steadily. As the level of domestic research gradually rises, DEA research in China is leaning more and more toward the field of economic management. In the modern age, there has been a paucity of research on the specific evolution of this topic, as well as on its most recent development trends both domestically and internationally. DEA is a technique for determining how efficient something is. Some academics have used only efficiency as a keyword and conducted a visual examination of the efficiency of domestic and international knowledge graphs while restricted to a limited number of subjects; such research may be only the tip of the iceberg. A literature search revealed only one study that uses the term DEA as its primary keyword and focuses on the application of the DEA approach to the management sector. The DEA approach to performance evaluation is a non-parametric production frontier method that accommodates variable returns to scale assumptions, which allows the acquisition of purely technical efficiencies unaffected by scale effects. Because of its effectiveness, it has found widespread use in disciplines such as medicine, sports, education, and finance.

DEA is capable of managing multiple inputs and multiple outputs simultaneously, and it accounts for the different inputs and outputs under consideration. In addition, this nonparametric method does not employ subjective weights, which provides a great deal of flexibility when designing algorithms for performance prediction. The vast majority of DEA models are built for ex-post efficiency analysis using pre-specified input and output data, and very little research has examined the possibility of predicting future performance. Data-driven proposals highly value the data itself, and how to fully mine the information hidden within big data [7-10] has gradually become a research hotspot. Such evaluation considers everything from the large volume of data itself to the knowledge and information that lies behind it.

At present, higher education places significant emphasis on the all-round, high-quality training of senior personnel [11-13]. In nations where English is not the native language, it is important to cultivate college students who not only have strong professional knowledge and abilities but also good college English proficiency, in order to foster the interchange and development of senior talent. For this purpose, it is of the utmost importance to enhance the quality of English instruction offered in higher education institutions and to foster high-level talent that is fluent in English. However, English instruction in secondary schools, colleges, and universities is frequently influenced by a variety of considerations, including English teachers, English instruction, scientific research, English teaching management systems, English teaching mechanisms, teaching philosophy, teaching methods, and social, human, and material factors. This complexity makes enhancing and evaluating the quality of English instruction at colleges and universities more challenging. In light of this, a number of researchers have carried out complementary studies on the enhancement and evaluation of the quality of basic English classroom instruction.

The quality of English instruction has been discussed by several academics, who have considered novel instructional strategies for engineering English [14]. A number of researchers have investigated the quality assessment of medical English instruction based on requirements analysis [15]. Several academics have investigated the construction of a SPOC-based college English teaching quality evaluation index system and presented their findings. Some academics have presented a methodology for evaluating the quality of English instruction based on an RBF neural network tuned with a genetic algorithm. A fuzzy evaluation approach to English teaching quality based on the bat algorithm has also been discussed [16].

The digital English teaching mode is a novel style of classroom instruction promoted by the Ministry of Education. It not only enriches the resources and methods used for teaching English but also reintegrates and makes full use of the advantages of teaching, provides an information-based teaching environment, and alters the conventional mode of teaching. It satisfies the hardware requirements for the ongoing construction of "golden courses" and encourages the balanced growth of the university's educational ecosystem. The "software" construction of golden courses refers to the enhancement of the quality of classroom instruction [17]. Among other things, the development of a scientific and efficient course quality evaluation system is essential to ensure that a course meets a high standard. Currently, the majority of classroom teaching quality evaluations at domestic colleges and universities are coordinated by an educational administration agency; this coordination includes the construction of an evaluation system as well as the publication of evaluation data. Many colleges and universities do not differentiate the nature of the courses they offer and instead use unified evaluation indicators that cover not only teaching attitude and content but also teaching methods and effects; this is especially true in the setting of evaluation indicators. The advantage is that it guarantees the evaluation of common difficulties; the downside is that it ignores the qualities unique to each course [18]. The flaws, lags, and limits of this teaching evaluation method become more apparent over time with the introduction of new notions in educational evaluation. Consequently, developing an evaluation system that is compatible with the new teaching reform model of the digital age and conforms to the new teaching concept has become one of the most essential tasks in the reform of college English instruction in universities and colleges.

Evaluation of the quality of classroom teaching is the foundational component of education quality evaluation. An efficient model for evaluating teaching needs to be supported by efficient evaluation indicators in order to generate scientifically sound results. Over the past few years, the academic community has made significant efforts to research the evaluation of digital classroom education. Planning and preparation, classroom instruction, second-classroom extension, and teaching responsibility are the four components suggested as the basis for an evaluation model for second language teachers by various academics [19]. Some academics have developed evaluation indicators to assess the teaching ability of online teachers, among other things. Nevertheless, these assessment model studies focus on the validity of evaluation indicators, whereas research on evaluation methods still needs to be carried out in greater depth.

The evaluation of the quality of English instruction in colleges and universities is primarily plagued by two issues: first, the evaluation index system lacks coverage, and second, the evaluation model struggles when confronted with highly nuanced indicators. The key contributions can be listed as follows.
(i) To address these issues, this work investigates the evaluation of the quality of English instruction in colleges and universities and develops a model based on the DEA fusion algorithm.
(ii) To evaluate the quality of English instruction provided by colleges and universities, a model has been developed that takes into account both quantitative and qualitative research findings; this model's primary focus is on the indicators.
(iii) The implementation findings suggest that the proposed DEA fusion algorithm is capable of successfully evaluating the quality of English instruction provided in colleges and universities.

As a result, we developed a system for evaluating the efficacy of English instruction at higher education institutions based on the DEA fusion algorithm. The remainder of this paper is organized as follows. Recent relevant work is covered in Section 2. Section 3 introduces the algorithm model. Section 4 compares and analyzes the results of our simulations. Section 5 discusses the obtained outcomes. Section 6 concludes the paper and outlines future work.

2. Related Work

DEA is a strategy for making decisions based on multiple criteria using objective facts. It is a research field that crosses the boundaries of management science and operations research, and it is used extensively in efficiency analysis across a variety of fields. DEA was initially conceived by the well-known operations researchers Charnes et al. [20], and its primary function is to assess the comparative efficacy of decision-making units (DMUs) that are comparable to one another. To quantify the productivity of each decision-making unit in a production system with multiple inputs and multiple outputs, it is necessary to compare the production efficiency of each unit objectively and efficiently. DEA does not need to carry out dimensional processing on the data that each decision-making unit inputs and outputs. It is non-parametric, so there is no need to specify weights or set parameters. It should be noted that DEA is unique in multi-output, multi-input analysis for evaluating complex systems because it takes into account every possible input and output combination for the decision-making unit itself, which allows it to more accurately reflect the information and features of the evaluation object. At present, DEA is widely used in validity studies of public utilities, banking, the service industry, transportation, and project evaluation.

DEA can be represented mathematically using a linear model, and the DEA problem can be solved using linear programming, that is, as a constrained optimization problem. Linear programming is currently considered the standard way of solving DEA problems. In this method, the DEA model to be optimized is first structured as a typical linear programming problem, and the problem is then solved with the assistance of linear programming software and toolkits such as DEAP 2.1, LINGO, and CPLEX. However, these programs require a high level of specialization, and the procedure as a whole is complex, so it is difficult for non-specialists to use them. In addition, Excel can be used to solve DEA problems, but the solution process is laborious, since the input and output constraints must be specified one at a time. The amount of work is substantial, and this approach is not suited to increasingly intricate DEA problems.
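As an illustration of this standard workflow, the following is a minimal sketch, assuming Python with NumPy and SciPy, of how a CCR-type DEA model (introduced formally in Section 3) can be posed as a linear program and solved without a specialized toolkit. The toy data and the function name ccr_efficiency are our own illustrative assumptions, not part of the original study.

```python
# A minimal sketch: the input-oriented CCR model for one DMU, rewritten as a
# linear program via the Charnes-Cooper transformation and solved with SciPy.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, p):
    """Input-oriented CCR efficiency of DMU p.

    X: (M, N) input matrix, Y: (M, R) output matrix.
    Decision variables are the output weights u (R) and input weights v (N).
    """
    M, N = X.shape
    R = Y.shape[1]
    # Maximize u . Y[p]  ==  minimize -u . Y[p]
    c = np.concatenate([-Y[p], np.zeros(N)])
    # Normalization constraint from the Charnes-Cooper transform: v . X[p] = 1
    A_eq = np.concatenate([np.zeros(R), X[p]]).reshape(1, -1)
    b_eq = [1.0]
    # Efficiency of every DMU is at most 1: u . Y[m] - v . X[m] <= 0
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(M)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (R + N))
    return -res.fun  # optimal objective = efficiency score in (0, 1]

# Toy data: 5 DMUs, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 3.0], [5.0, 4.0], [2.0, 5.0]])
Y = np.array([[1.0], [1.5], [1.2], [1.4], [0.9]])
scores = [ccr_efficiency(X, Y, p) for p in range(len(X))]
print(np.round(scores, 3))
```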

Since the DEA problem is, at its core, an optimization problem with several constraints, it can be solved using optimization techniques. Optimization methods currently in widespread use include heuristic algorithms such as the particle swarm optimization algorithm [21, 22], the genetic algorithm [23-25], the simulated annealing algorithm [26-28], and other optimization algorithms. Many people turn to these heuristic algorithms to solve a wide variety of optimization problems, since they are not only straightforward to comprehend but also efficient. However, these conventional optimization methods lack a strict mathematical theoretical basis, and their effectiveness depends on the parameters being specified reasonably and effectively. Since different optimization problems require parameters with different values, and parameter selection depends on the user's expertise, these methods cannot be broadly applied to many optimization problems at once.

College English classes are particularly vital to the functioning of the nation's higher education system because of the breadth and scope of their coverage. In light of the effects of the pandemic, it is quite clear that the rapid transition of traditional classes to online instruction did not include enough preparation for college-level English instruction; this was a problem for many other courses as well. There are significant distinctions between teaching in a regular classroom and teaching online. The teaching and learning that take place in a traditional classroom are integrated, whereas in an online classroom these two components are kept distinct. The way one teaches does not necessarily affect the way one learns, and there are numerous elements that cannot be controlled. The high quality of instruction cannot be guaranteed, and teaching activities become challenging. At this point, online teaching appears to have evolved into a kind of solo dance for instructors. Traditional educators are used to employing a variety of strategies for maintaining order in the classroom, instructing students, and motivating them to do better; these strategies are less useful online. How can this tendency be reversed, the quality of course instruction be improved, and the satisfaction of teachers be increased? The knowledge system needs to be reshaped according to the features of the field, the teaching objects, and the teaching content, and the design needs to be based entirely on the peculiarities of the online environment. Teaching materials, gamified examinations that encourage and test students, scientific and technological means of obtaining feedback on teaching impact, and team-based approaches to online teaching are all important aspects of effective education today. At present, university teaching is limited by the teaching platform, large class sizes, outdated technical means, and other problems. However, it is entirely possible to gradually improve college English online teaching from the perspectives of course resources, interactive feedback, and the improvement of tests, and thus to gradually improve course satisfaction. The remaining problem is that it is difficult to customize learning according to the needs of individual students.

The existing research on college English teaching primarily focuses on the current situation of college English teaching [29, 30] and teaching reform, the cultivation of autonomous learning ability and strategies, the application of flipped classrooms and MOOCs in college English, and the development of college English teachers. Few researchers concentrate on the development of online resources for college English classes and online instruction. Some academics, based on their experience in the construction of college English online courses, have pointed out that the construction of foreign language online courses must first address eight issues. At the core of these issues are whether the cognitive psychology of college students in the new era is understood, how to solve the problem of homogenized course construction, and how to grasp the pain points of English learning. At the same time, eight principles for the construction of college English online courses have been proposed: the design should be fresh, the navigation should be astute, the topic selection should be stringent, the excavation should be deep, the expansion should be wide, the interest should be robust, the application should be extensive, and the impact should be immediate. Researchers have emphasized that the development of high-quality online classes should involve multiple academic institutions, multiple schools, and multiple software platforms, with co-creation, sharing, and mutual advantage.

3. Research Design

3.1. Data Processing
3.1.1. Data Sources

The data that we used to compile this report were taken from an evaluation of English instruction at a domestic university. In order to train our suggested DEA fusion model, we make use of 75 percent of the data as the training set, and we use the remaining 25 percent of the data as the test set. Figure 1 presents a block diagram of the DEA evaluation framework.
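A minimal sketch of this 75/25 split follows, assuming the evaluation records have been loaded as a feature matrix and target vector; the placeholder data, variable names, and fixed random seed are our own illustrative choices.

```python
# A minimal sketch of the 75/25 train/test split described above.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 6)   # placeholder: 200 records, 6 indicators
y = np.random.rand(200)      # placeholder: teaching-quality scores

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
print(X_train.shape, X_test.shape)  # (150, 6) (50, 6)
```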

3.1.2. DEA-Based Data Preprocessing

The first step in data preparation is to identify variable indicators. In this article, the input and output variables are determined by selecting the pertinent variables. The comparative efficiency of each DMU is computed and subsequently rated by the proposed DEA model based on the input and output data provided, and any data that is incomplete or redundant is removed from the analysis. In this study, the efficiency value of each DMU is determined using the classical CCR model.

Assume that there are $M$ DMUs of the same category, and that each DMU has $N$ input indicators and $R$ output indicators. We also assume that the classical CCR model is used to estimate the efficiencies of these DMUs. The input indicator data of the decision-making units can be represented by the matrix $Q = (q_{mn})_{M \times N}$, where $q_{mn}$ denotes the $n$th input indicator of the $m$th DMU. Similarly, the output indicator data are represented by the matrix $Z = (z_{mr})_{M \times R}$, where $z_{mr}$ denotes the $r$th output indicator of the $m$th DMU. By convention, the efficiency of a DMU is measured as the ratio of the linear weighted combination of its output indicators to the linear weighted combination of its input indicators, as given in

$$h_p = \frac{\sum_{r=1}^{R} u_r z_{pr}}{\sum_{n=1}^{N} v_n q_{pn}}. \tag{1}$$

In (1), $h_p$ stands for the efficiency evaluation index of the $p$th DMU, $v = (v_1, \dots, v_N)^T$ is the input indicator weight vector, and $u = (u_1, \dots, u_R)^T$ is the output indicator weight vector.

The standard CCR model solves the following problem for every DMU $p$ ($1 \le p \le M$), as illustrated in

$$\max_{u, v} \; h_p = \frac{\sum_{r=1}^{R} u_r z_{pr}}{\sum_{n=1}^{N} v_n q_{pn}} \quad \text{s.t.} \quad \frac{\sum_{r=1}^{R} u_r z_{mr}}{\sum_{n=1}^{N} v_n q_{mn}} \le 1, \; m = 1, \dots, M, \quad u \ge 0, \; v \ge 0. \tag{2}$$

The DEA determines the frontier formed by the optimal decision-making units, and the comparative efficiency of the other decision-making units is determined by their proximity to this frontier. All of the DMUs tested produce efficiency values less than or equal to 1, and the DMUs used to determine the optimal frontier surface produce an efficiency value of 1. When calculating the efficiency value, the classic CCR model compares the prevailing DMU with the DMUs currently on the frontier, which allows the model to effectively differentiate between efficient and inefficient DMUs. Within this article, all DMUs with an efficiency value of 1 are selected and used as the most efficient level for further training and testing.
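A minimal sketch of this filtering step follows, assuming the CCR efficiency scores have already been computed (for example, with a routine like the ccr_efficiency sketch in Section 2). The score values and the tolerance, which absorbs solver round-off, are our own assumptions.

```python
# A minimal sketch of selecting the frontier (efficiency == 1) DMUs.
import numpy as np

scores = np.array([1.0, 0.999999, 0.83, 0.71, 1.0])  # illustrative values
tol = 1e-6
frontier_mask = scores >= 1.0 - tol   # DMUs judged efficient
frontier_idx = np.flatnonzero(frontier_mask)
print(frontier_idx)                   # indices of DMUs kept for training/testing
```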

3.2. The Building Process for the DEA Model

To build the RBF model, the DEA-preprocessed dataset is put to use. The RBF model randomly selects an adequate quantity of data for learning, and the remaining data is used to evaluate the model's accuracy. If the accuracy does not meet the requirements, the model can be refined by modifying its parameters. RBF is a powerful technique for nonlinear modeling that has high learning efficiency and can deal with datasets in which the link between input and output variables is ambiguous. At its core, the algorithm is an interpolation method that can process a vast quantity of discrete data and construct a function through the discrete sampling points in order to predict all unknown points; its primary role is to forecast unknown points.

Assume that there are $L$ units in the sample, and that each unit contains $N$ input variables and $R$ output variables. The sample dataset can then be characterized as the set $D = \{(x_i, y_i) \mid i = 1, \dots, L\}$, where $x_i$ denotes the input, $\hat{y}_i$ denotes the predicted output, and $y_i$ denotes the actual output. The RBF (radial basis function) model is expressed mathematically as

$$f(x) = \sum_{i=1}^{L} w_i \, \varphi(\lVert x - x_i \rVert). \tag{3}$$

It should be noted that the above model must fulfill the interpolation condition given by

$$f(x_j) = y_j, \quad j = 1, \dots, L, \tag{4}$$

where $w_i$ denotes the weight of the $i$th node, which is produced from the interpolation condition through a matrix solution. Furthermore, $\varphi(\cdot)$ is used as the basis function, and the Gaussian function is frequently chosen. The Gaussian function is expressed mathematically as

$$\varphi(\lVert x - c_i \rVert) = \exp\!\left(-\frac{\lVert x - c_i \rVert^2}{2\sigma_i^2}\right). \tag{5}$$

In formula (5), $c_i$ characterizes the center vector of the $i$th hidden node, while $\sigma_i$ represents the width (or variance) of the radial basis function, which is used to adjust the sensitivity of the model.
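As a minimal sketch of the Gaussian basis function in (5), assuming NumPy; the example point, center, and width are illustrative values:

```python
# A minimal sketch of the Gaussian basis function in (5).
import numpy as np

def gaussian_rbf(x, c, sigma):
    """phi(||x - c||) = exp(-||x - c||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

x = np.array([0.2, 0.5])
c = np.array([0.0, 0.0])       # center vector c_i of a hidden node
print(gaussian_rbf(x, c, sigma=1.0))
```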

In the next phase, substituting the interpolation condition (4) into (3) yields the simultaneous system of equations

$$\Phi w = y, \qquad \Phi_{ji} = \varphi(\lVert x_j - x_i \rVert), \quad i, j = 1, \dots, L. \tag{6}$$

When $\Phi$ is invertible, this reveals the unique solution to the problem:

$$w = \Phi^{-1} y. \tag{7}$$

The RBF model uses the training data to determine the relevant weights and then adjusts those weights to optimize the regression prediction. It then applies the test data to the adjusted model to obtain predicted values, and finally evaluates the model by calculating the error between the true values and the predicted values. If the model accuracy does not reach a predefined threshold, the model can be corrected by continuously adjusting its parameters until the accuracy fulfills the requirement.
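The following is a minimal sketch of the workflow in (3)-(7): build the Gaussian design matrix on the training points, solve the linear system for the weights, predict on held-out points, and report the error. The toy data and the single shared width sigma are our own assumptions; the paper's actual dataset and per-node widths may differ.

```python
# A minimal sketch of RBF interpolation: fit weights from (6)-(7), predict,
# and compute the test error, all on toy data.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(30, 2))          # centers = training inputs
y_train = np.sin(X_train[:, 0]) + X_train[:, 1]**2  # toy target
X_test = rng.uniform(-1, 1, size=(10, 2))
y_test = np.sin(X_test[:, 0]) + X_test[:, 1]**2

sigma = 0.5  # assumed shared width for all hidden nodes

def phi_matrix(A, B):
    # Pairwise Gaussian responses: Phi[j, i] = exp(-||a_j - b_i||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

Phi = phi_matrix(X_train, X_train)      # (L, L) interpolation matrix from (6)
w = np.linalg.solve(Phi, y_train)       # weights, as in (7)
y_pred = phi_matrix(X_test, X_train) @ w
print("test RMSE:", np.sqrt(np.mean((y_pred - y_test) ** 2)))
```

For distinct training points, the Gaussian interpolation matrix is positive definite, so the solve step is well posed; in practice a small ridge term is often added to the diagonal for numerical stability.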

The DEA-RBF approach combines the benefits of the DEA and RBF methodologies, and it is able to handle large datasets whose variable properties are unknown. In most cases, the length of the RBF training process is affected by the data volume: the cost of storing and operating on the data increases rapidly with the volume, and an excessive data volume may even cause the system to fail due to the strain it places on resources. Therefore, to reduce the time required for RBF training and to increase its operational efficiency in the face of large amounts of data, the dataset must be condensed to some degree. Nevertheless, the accuracy of RBF training may vary as a result of reducing the dataset. The traditional method of data sampling involves only simple sampling; it potentially does not take into consideration the associations among the variables within the dataset, and it cannot ensure that the reduced (filtered) dataset still preserves the features of the original (non-reduced) data, which may have damaging effects on experiments that use the dataset for prediction. DEA is able to remove unnecessary data, as well as data that interferes with other data, without altering the generality of the data, which allows it to filter out the most useful data layer in the data collection.

Because the sizes of the data represented by the various variables are not identical, this paper standardizes the selected valid dataset between two predefined thresholds in order to avoid data distortion and prevent small values from being swallowed up by large ones, thereby achieving a more accurate prediction effect. The min-max normalization formula is given by

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}. \tag{8}$$

In (8), $x$ stands for the original data, while $x_{\max}$ and $x_{\min}$ refer to the highest and lowest values in the original data, respectively.
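A minimal sketch of the normalization in (8), applied column-wise so each indicator is scaled independently; the toy matrix is an illustrative assumption:

```python
# A minimal sketch of column-wise min-max normalization per (8).
import numpy as np

data = np.array([[2.0, 30.0],
                 [4.0, 10.0],
                 [3.0, 20.0]])
x_min = data.min(axis=0)
x_max = data.max(axis=0)
normalized = (data - x_min) / (x_max - x_min)  # every column mapped into [0, 1]
print(normalized)
```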

The reduced (filtered) dataset is then separated into a training dataset and a test dataset, and the efficiency of the hybrid technique is evaluated based on how well the multi-output RBF model is learned and tested. The model proposed in this study has the potential to increase prediction accuracy while simultaneously reducing the amount of data, significantly reducing training time, and improving data-processing efficiency.

3.3. Evaluation Indicators

We introduce the confusion matrix, which is illustrated in Table 1. The formulas for calculating precision and recall are, respectively,

$$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN},$$

where TP stands for true positives and FP represents false positives between the predicted and actual outcomes; similarly, FN denotes false negatives.
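A minimal sketch of computing these two metrics from prediction vectors follows; the label vectors are illustrative assumptions.

```python
# A minimal sketch of precision and recall from a confusion-matrix count.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.3f}, recall={recall:.3f}")
```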

4. Results

Hereafter we refer to the proposed DEA method as the OUR algorithm, and we contrast it with the Multilayer Perceptron (MLP) algorithm and the Decision Tree (DT) algorithm. A comparison of the three algorithms demonstrates, from a variety of perspectives, that the OUR algorithm is superior to the other two.

Figure 2 depicts the accuracy trends of the OUR method, the MLP algorithm, and the DT algorithm as the number of iterations increases. It is plain to see that the accuracy of all three algorithms gradually improves with the number of iterations. Although the accuracy rate of the OUR algorithm is the lowest at the beginning of the process, it has the fastest rate of increase in the early stage, and its overall accuracy rate is superior to that of the MLP algorithm and the DT algorithm.

The evolution of the loss functions of the three algorithms over the number of iterations is depicted in Figure 3. It is clear that as the total number of iterations grows, the loss of each of the three algorithms steadily decreases, with the OUR algorithm ending with the smallest loss. The loss of the MLP algorithm is second, while the DT algorithm has the largest loss of the three.

Figure 4 presents a comparison of the precision and recall rates achieved by the three methods. Although the precision of the MLP algorithm is only 0.14 percent lower than that of the OUR method, the OUR algorithm clearly has the highest level of precision, while the DT algorithm has the worst. In terms of recall, the OUR algorithm is superior to the DT algorithm and achieves better results overall, and the recall rate achieved by the DT method is significantly higher than that of the MLP algorithm.

In terms of accuracy, precision, recall, and loss, we contrasted the OUR algorithm with the MLP and DT algorithms. The attained outcomes make it evident that the generalization performance of the OUR model is exceptional, as the OUR algorithm achieves the highest accuracy across the board.

5. Discussion

Although the Chinese government invests relatively little money in education, the country's education system is intended to be exceedingly comprehensive. School administrators should pay attention to school-running efficiency, seek to increase it, harness potential from within the school, sensibly distribute limited educational resources, make maximum use of those resources, and pursue ongoing development. Evaluation must be taken seriously if schools are to be operated more efficiently. This study uses the DEA fusion algorithm to evaluate the teaching quality of college English teachers; it combines the analytic hierarchy process, establishes a scientific model, and reflects the reality of the widespread application of information technology in the digital age. It comprehensively considers the characteristics of multi-index, multi-factor, and multi-objective evaluation. Because the multi-factor problem is incorporated into a single factor, it better solves the problem that a single assessment index cannot adequately reflect the comprehensive level of teachers' teaching. At the same time, the grades are more precise, rigorous, and scientific, which eliminates, to a certain extent, the influence of correlations among the evaluation indices on the evaluation results. Additionally, this eliminates inaccuracies in the evaluation results caused by the subjective factors of a single evaluation model.

The findings demonstrate that the evaluation procedure is exhaustive and highly operable, the evaluation results are accurate and trustworthy, the calculation method and process are efficient, and they satisfy the software requirements for golden course construction. In the future, we plan to conduct in-depth research on a greater number of universities, teachers, and courses; to promote and apply the quality evaluation model for English classroom teaching proposed in this study; and, based on the results of the evaluation, to solve the issues currently present in English classroom teaching in the digital environment in a targeted manner, reframing each problem as an opportunity for educational reform. By deeply integrating advanced information technology with college English education, exploring diverse online and offline innovative teaching modes, and creating more learning space, we aim to stimulate college students to learn English with initiative and enthusiasm and to form their ability to learn English autonomously. In addition, we will continue to revise and improve the evaluation model in future course practice in accordance with the standards for the construction of golden courses, in order to improve the scientific rigor and effectiveness of the evaluation model for the quality of college English instruction.

6. Conclusions and Future Work

This study examines the evaluation index system used for English instruction in colleges and universities from the perspectives of teaching, learning, management, innovation factors, integration factors, and the specific results of the college English instruction process. Based on the findings of this investigation, a more systematic and comprehensive evaluation index system is proposed. The newly developed comprehensive approach for evaluating the quality of English instruction in colleges and universities has substantial implications for guiding students in the right direction. A fuzzy evaluation model of college English teaching quality has been established on the basis of the DEA method and grey system theory. This model not only clarifies the physical meaning of college English teaching quality evaluation but also ensures that the calculation is both simple and reliable. The proposed algorithm outperformed the MLP and DT algorithms.

The proposed system provides a means of quantitative analysis, as well as a method for evaluating the quality of English instruction in colleges and universities. Relevant strategies to improve the quality of college English teaching are initially discussed based on the DEA fusion algorithm and the evaluation model of college English teaching quality; this has a certain reference value for improving the quality of college teaching as a whole. In the future, we will continue developing more sophisticated algorithms and will integrate deep learning and big data technology to improve prediction precision and computational time. Similarly, the training of the machine learning model can be made fast enough to improve system response time using big data technologies such as cloud and edge computing. We intend to propose a framework that will improve the prediction times of the proposed system. Finally, we will compare the proposed system with its closest rivals.

Data Availability

The data used to support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares that he has no conflicts of interest.

Acknowledgments

The study was supported by the Shandong Social Science and Research Foundation of China (Grant nos. 21CYYJ10 and 11CWZJ39) and the Teaching Reform Project of Shandong Technology and Business University (Project no. 11688202080).