Abstract

The accurate prediction of the transaction volume of the core accounting system is of great significance to the stable operation of commercial banks. After a thorough investigation of the transaction volume history of the core accounting system, which revealed the distinctive time-series attributes of the data, this study proposes a transaction volume prediction model for the core accounting system of commercial banks based on an improved bat algorithm and an optimized gated recurrent unit (GRU) neural network. Chaotic mapping, reverse learning, and evolution and search mechanisms are introduced to improve the search efficiency of the global optimization problem, to overcome shortcomings of the original bat algorithm such as premature convergence and easily falling into local optima, and to enhance the algorithm's optimization ability and precision. The optimization ability of the improved bat algorithm is then used to determine the optimal hyperparameters of the GRU model, such as the number of network layers and neural units. Finally, the effectiveness of the combined model is evaluated using historical transaction data from the core accounting system stored in a bank data warehouse. The experimental results show that the improved combined GRU model performs well in mean squared error, root mean squared error, and mean absolute error and is superior to the original GRU model and the traditional time-series forecasting ARMA model. The proposed combined model can be effectively applied to transaction volume prediction for the core accounting system of commercial banks.

1. Introduction

With the continuing worldwide spread of the epidemic, more and more transactions are moving from offline to online, and the demand for online financial services from commercial banks in various industries has surged. According to the China Banking Service Report 2021 released by the China Banking Association, the banking industry has achieved a multidimensional, digital, and intelligent service transformation through iterative product and service processes centered on the development concept of technology-enabled business innovation, assisting enterprises in resuming production as soon as possible, supporting the development of the real economy, and preventing financial risks [1].

At present, commercial banks offer customers a variety of service channels through a combination of online and offline operation modes. Customers can conduct business online via Internet banking, mobile banking, and banking apps, or they can complete the corresponding tasks manually at a counter. The development of these service channels tests the transaction capacity of the core accounting systems of commercial banks [2]. To avoid hardware failure caused by a rise in transaction volume, the service load is often absorbed through hardware redundancy. This strategy, however, makes equipment operation and maintenance more complicated and therefore prolongs maintenance time. It also complicates the management of software service interfaces and may even waste service resources and computing power. Therefore, research on the accurate prediction of system transaction volume is of great significance for reducing the redundancy of commercial bank systems [3].

Machine learning, a hot topic in current forecasting research, has the advantages of mining the implicit relationships within historical data, analyzing the data hierarchy, and automatically feeding back data features through learning [4]. Because of these advantages, machine learning is increasingly applied in different fields [5]. Wang [6] used the gated recurrent unit (GRU) neural network model to analyze big meteorological data, mine the correlations between meteorological elements, and estimate each meteorological element. Tong [7] adopted a GRU model based on social, economic, and epidemiological characteristic data to study the COVID-19 epidemic situation and predict its changing trend. Aiming at the problem of network intrusion security, Liang [8] proposed to mine network data information deeply by summarizing the characteristics of network activities and used a GRU model to classify and identify the information, providing a brand-new idea for handling network intrusion detection. In particular, the GRU model has high prediction accuracy for time-series data. However, its complicated internal structure leads to many hyperparameters, which have a large influence on prediction accuracy [9, 10].

Presently, the bat algorithm (BA) has been widely used in wind energy development, model prediction [11], UAV path planning [12], and economic development [13]. Some scholars have made corresponding improvements to BA. Cai et al. [14] introduced Gaussian disturbance into BA and adopted different speed-updating strategies to improve the global exploration ability. The authors in [15] combined the tone fine-tuning operator of harmony search to improve the convergence accuracy and speed of BA. Huang et al. [16] proposed an improved binary BA using neighborhood search and a dynamic inertia weight strategy. The authors in [17, 18] used the ergodicity of chaos to study the optimization efficiency of different chaotic mappings for BA. In [19], the chaos of elite individuals is optimized by logistic mapping, and the search space is dynamically shrunk to prevent BA from falling into a local optimum prematurely. Many scholars further simulate bat behavior from a biological point of view, combined with habitat selection and adaptive compensation of the Doppler effect, and apply it to practical engineering problems [20–23]. He et al. [24] integrated the simulated annealing algorithm and Gaussian mutation into BA, which not only enhances the exploration ability of BA but also accelerates the convergence speed. Although BA has the advantages of rapid convergence, few parameters, and strong robustness, like other heuristic optimization algorithms it also has the disadvantages of easily falling into local optima and low solution accuracy.

Therefore, this study proposes a combined ESCBA-GRU neural network model (a GRU optimized by an improved bat algorithm) to predict the transaction volume of the core accounting system of commercial banks. The major contributions of this study are as follows:
(i) Fuch mapping is introduced into the bat algorithm to produce uniformly distributed individuals. Improving the quality of the initial population improves the search efficiency of the global optimization problem.
(ii) An evolutionary mechanism and a search mechanism are introduced to enhance the ability to escape local extrema and to make full use of the guiding information of the optimal individual.
(iii) A GRU combination model based on the bat algorithm is proposed, which gives full play to the optimization ability of the bat algorithm to optimize the hyperparameters of the GRU neural network.
(iv) Aiming at the unique time-series attributes of transaction volume data in the core accounting system of commercial banks, a GRU neural network model optimized by an improved bat algorithm is proposed, which overcomes the low precision and large error of traditional time-series data prediction.

The rest of the paper is organized as follows: Section 2 provides an overview of the GRU neural network. Section 3 provides a detailed description of the bat algorithm and different optimization techniques. Section 4 describes the proposed ESCBA-GRU model. In Section 5, experimental analysis is presented, and Section 6 is about the conclusion.

2. GRU Network

The GRU neural network is a variant neural network model proposed to solve the gradient vanishing and explosion problems of the recurrent neural network (RNN) model. The core of this algorithm is a combination of a sigmoid function and some relational operations, which together constitute the GRU control and information selection mechanism. Using the sigmoid function, the inherited memory state and the current input data are calculated and mapped between 0 and 1 to simulate the memorizing and discarding of data. Because the GRU model evolved from the long short-term memory (LSTM) model [25], its gating mechanism is similar to the three control gates of LSTM, but GRU has only two control units, the reset gate and the update gate; the two-gate structure is therefore a simplification of the three-gate structure. The input-output structure of the GRU model is shown in Figure 1.

Here, h indicates the information transmitted between neural units, x specifies the input information, and y is the output information of the current neural unit. At every moment, and in the final fully connected calculation, GRU continuously updates itself. The update is based on the inherited state combined with the current candidate state. Inside the neural unit, this inherited information is processed and calculated. In this way, such neural units are connected in series to form a network-like model with a complex structure and direct or indirect connections between neural units [26]. The internal structure of the model is shown in Figure 2.

2.1. Update Gate

$z_t$ indicates the update gate of GRU at the current time t, which is responsible for determining how much information content is transmitted to the next neural unit. If $z_t$ is infinitely close to 1, $h_t$ will be equal to $h_{t-1}$, which means that the received information is passed directly to the next unit without being processed or forgotten. On the contrary, if $z_t$ is infinitely close to 0, $h_t$ is not equal to $h_{t-1}$, which means that the received information is forgotten through further processing and is not passed on to the next unit [27]. The update gate is expressed as

$$z_t = \sigma\left(W_z \cdot [h_{t-1}, x_t]\right), \tag{1}$$

where $\sigma$ indicates the sigmoid function, $W_z$ is the update gate weight applied to $[h_{t-1}, x_t]$ when solving $z_t$, and $x_t$ indicates the currently entered information.

2.2. Reset Gate

$r_t$ indicates the reset gate at the current time t and determines the influence of $h_{t-1}$ on $\tilde{h}_t$. When $r_t$ is approximately equal to 0, $h_{t-1}$ does not pass its information to $\tilde{h}_t$. When $r_t$ is approximately equal to 1, $h_{t-1}$ passes its information to $\tilde{h}_t$. The reset gate operation is expressed as

$$r_t = \sigma\left(W_r \cdot [h_{t-1}, x_t]\right), \tag{2}$$

where $W_r$ indicates the reset gate weight applied to $[h_{t-1}, x_t]$ when solving $r_t$.

2.3. New Memory

At the current time, the reset gate determines how much information from the previous moment is included when computing the current result. $\tilde{h}_t$ indicates the comprehensive calculation result of the current input information and the hidden-layer information transmitted from the previous moment, so it contains both the newly input information and the historical information. The new memory is computed as

$$\tilde{h}_t = \tanh\left(W \cdot [r_t \ast h_{t-1}, x_t]\right). \tag{3}$$

2.4. Hidden Layer

The hidden layer state $h_t$ of GRU at the current moment is calculated from $\tilde{h}_t$ and $h_{t-1}$ at the previous moment, and the weight relationship between them is controlled by $z_t$. The information transmission and calculation process is as follows:

$$h_t = z_t \ast h_{t-1} + (1 - z_t) \ast \tilde{h}_t. \tag{4}$$

2.5. Results

After determining $h_t$ at the current moment, the output of this neural unit can be computed, and finally, the data $y_t$ is obtained. The output process is as follows:

$$y_t = \sigma\left(W_o \cdot h_t\right). \tag{5}$$

Because of this structure, each unit combines new input information with inherited information from the previous moment, which gives the network the ability to handle nonlinear feature sequences.
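To make the gate computations above concrete, the following is a minimal NumPy sketch of a single GRU forward step following Sections 2.1–2.5. The weight names (W_z, W_r, W_h, W_o), the concatenated-input form, and the dimensions are assumptions consistent with the standard GRU formulation, not the exact implementation used in this study.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell_step(x_t, h_prev, W_z, W_r, W_h, W_o):
    """One GRU forward step following the equations in Section 2.

    x_t    : current input vector, shape (input_dim,)
    h_prev : hidden state from the previous moment, shape (hidden_dim,)
    W_*    : weight matrices acting on the concatenated [h_prev, x_t]
    """
    concat = np.concatenate([h_prev, x_t])
    z_t = sigmoid(W_z @ concat)                                    # update gate
    r_t = sigmoid(W_r @ concat)                                    # reset gate
    h_tilde = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))   # new memory
    h_t = z_t * h_prev + (1.0 - z_t) * h_tilde                     # hidden state (convention used in the text)
    y_t = sigmoid(W_o @ h_t)                                       # unit output
    return h_t, y_t

# Tiny usage example with random weights (hypothetical dimensions).
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8
W_z = rng.normal(size=(hidden_dim, hidden_dim + input_dim))
W_r = rng.normal(size=(hidden_dim, hidden_dim + input_dim))
W_h = rng.normal(size=(hidden_dim, hidden_dim + input_dim))
W_o = rng.normal(size=(1, hidden_dim))
h, y = gru_cell_step(rng.normal(size=input_dim), np.zeros(hidden_dim), W_z, W_r, W_h, W_o)
```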

Studies have shown that the GRU's number of hidden layers H, number of model iterations N, learning rate lr, and learning rate decay factor have a great influence on the prediction results, while the bat optimization algorithm performs better than other swarm intelligence algorithms in the overall optimization process [28]. Therefore, the bat algorithm is used to optimize the relevant hyperparameters of the GRU.

3. Bat Algorithm (BA)

The BA is a new heuristic optimization algorithm based on swarm intelligence. It is developed according to bats’ characteristics of using echolocation to search for prey and avoid obstacles.

3.1. Basic Bat Algorithm

In BA, $x_i^t$ and $v_i^t$ indicate the position and velocity of the i-th bat in the t-th generation, $f_i$ and $A_i$ denote the frequency and loudness of the bat, respectively, and $r_i$ is the pulse emission rate. The position of a bat represents a feasible solution. Because the bats do not initially know the location of the prey, their positions are randomly initialized by the following equation:

$$x_{i,k} = x_{\min,k} + \mathrm{rand}(0,1)\,(x_{\max,k} - x_{\min,k}), \tag{6}$$

where $x_{\min,k}$ and $x_{\max,k}$ are the lower and upper bounds of the k-th dimension.

After the initial setting of each parameter is completed, the algorithm enters the following three main iterative processes:

(i) Random flight: the spatial position of the bats is updated by

$$f_i = f_{\min} + (f_{\max} - f_{\min})\,\beta,$$
$$v_i^t = v_i^{t-1} + (x_i^{t-1} - x_*)\,f_i,$$
$$x_i^t = x_i^{t-1} + v_i^t, \tag{7}$$

where $f_{\min}$ and $f_{\max}$ are the minimum and maximum values of the frequency, $\beta \in [0, 1]$ is a random number that obeys a uniform distribution, and $x_*$ indicates the global optimal solution found so far.

(ii) Local search: once the optimal solution is generated, a local search is performed in its neighborhood by a random walk:

$$x_{new} = x_{old} + \varepsilon A^t, \tag{8}$$

where $\varepsilon \in [0, 1]$ is a random vector and $A^t$ is the average loudness of the population.

(iii) Update of control parameters: when a bat accepts a new solution, its loudness and pulse emission rate are updated by

$$A_i^{t+1} = \alpha A_i^t, \qquad r_i^{t+1} = r_i^0\,\left[1 - \exp(-\gamma t)\right], \tag{9}$$

where $\alpha$ and $\gamma$ are both constants. As the iteration progresses and the bat approaches the optimal solution, the loudness gradually decreases and the pulse emission rate gradually increases.
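For illustration, the following is a compact NumPy sketch of the basic BA iteration described by equations (6)–(9), minimizing an arbitrary objective. The parameter values and helper names are illustrative assumptions, not the exact implementation used in the comparative experiments.

```python
import numpy as np

def basic_bat_algorithm(obj, lb, ub, n_bats=40, max_it=500,
                        f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Basic BA loop following equations (6)-(9); a sketch, not the paper's exact code."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    # Equation (6): random initialization of positions.
    x = lb + rng.random((n_bats, d)) * (ub - lb)
    v = np.zeros((n_bats, d))
    A = np.full(n_bats, 0.9)          # loudness
    r = np.full(n_bats, 0.1)          # pulse emission rate
    r0 = r.copy()
    fit = np.apply_along_axis(obj, 1, x)
    best, best_fit = x[np.argmin(fit)].copy(), fit.min()
    for t in range(1, max_it + 1):
        for i in range(n_bats):
            # Equation (7): frequency, velocity, and position update (random flight).
            f_i = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * f_i
            x_new = np.clip(x[i] + v[i], lb, ub)
            # Equation (8): local random walk around the current best solution.
            if rng.random() > r[i]:
                x_new = np.clip(best + rng.uniform(0, 1, d) * A.mean(), lb, ub)
            f_new = obj(x_new)
            # Accept the new solution with a probability related to loudness.
            if f_new < fit[i] and rng.random() < A[i]:
                x[i], fit[i] = x_new, f_new
                # Equation (9): loudness decreases, pulse emission rate increases.
                A[i] *= alpha
                r[i] = r0[i] * (1.0 - np.exp(-gamma * t))
            if fit[i] < best_fit:
                best, best_fit = x[i].copy(), fit[i]
    return best, best_fit

# Example: minimize the sphere function on [-10, 10]^5.
best, best_fit = basic_bat_algorithm(lambda z: np.sum(z**2), [-10]*5, [10]*5)
```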

From the characteristics of the BA, we can see that its main problems are summarized as follows.

The BA generates the initial population solution through random initialization as given in equation (6). If the random solution is close to the optimal solution, it will converge at a faster speed. Once the random solution is far from the optimal solution, it may converge slowly, which has a great influence on algorithm optimization [29].

Equation (7) performs global exploration. At this stage, bats fly towards the best individual of the population; if the best individual is far from the global optimum, the population easily falls into a local optimum. Equation (8) performs local development. As the iteration proceeds, $r_i$ increases, so the probability of performing the local random walk decreases. Hence, the algorithm performs development in the early stage and exploration in the later stage. Too much development with too little exploration leads to premature convergence, while too much exploration with too little development makes it difficult to converge to the optimal solution; both reduce the overall search performance.

Whether in exploration or development, the search process depends entirely on random walks, and the method is single, so individual bats lack a flexible mutation ability during iteration. There is no effective mechanism for jumping out of local extrema, so rapid convergence to the optimal solution cannot be guaranteed.

Therefore, the improvement of the BA in this study is embodied in three aspects: generating a suitable initial population, providing the ability to evolve continuously and escape premature convergence, and updating the adjustment parameters.

3.2. Fuch Chaotic Mapping

Chaos is a characteristic of nonlinear systems and is mathematically defined as randomness generated by a simple deterministic function. When chaos is integrated into a metaheuristic optimization algorithm, the core idea is to use the randomness, ergodicity, and irregularity of chaotic variables, instead of random variables that obey a standard probability distribution, to search the whole solution space, generate evenly distributed individuals, and improve the quality of the initial population [30].

At present, the chaotic mappings widely used in the literature are the logistic map and the tent map. The logistic map is sensitive to the initial value; only when the control parameter equals 4 does it fully enter the chaotic state, and it takes values at the two ends of the interval with high probability, so its traversal is uneven. The tent map has better ergodic uniformity, but it has a small period and fixed points, which make it easy to fall into local cycles and reduce the efficiency of chaos optimization. The Fuch map has stronger chaotic characteristics than the logistic and tent maps [31]. It is insensitive to the initial value, has balanced ergodicity and faster convergence, and generates chaos for any nonzero initial value. Based on the above analysis, this paper uses the Fuch map as the mapping function for the chaotic search, which is defined as

$$cx_{n+1} = \cos\!\left(\frac{1}{cx_n^{2}}\right), \tag{10}$$

where $cx_n \in (-1, 1)$ and $cx_n \neq 0$.

Since the value range of the variables generated by the logistic and tent maps is [0, 1], when mapping a decision variable into a chaotic vector for chaotic search, the chaotic initial value of each dimension must first be limited to [0, 1] by

$$cx_k = \frac{x_k - x_{\min,k}}{x_{\max,k} - x_{\min,k}}. \tag{11}$$

Then, the carrier operation is applied to the generated chaotic initial vector through the logistic or tent mapping function to generate the chaotic vector $cx = (cx_1, cx_2, \dots, cx_D)$. Finally, it is transformed back into the original solution space by

$$x'_k = x_{\min,k} + cx_k\,(x_{\max,k} - x_{\min,k}). \tag{12}$$

The Fuch map does not depend on the initial value; the chaotic sequences generated from slightly different initial values are completely different and irregular [32]. Therefore, when performing the chaotic search, it is not necessary to first transform the decision variable into the chaotic variable interval by equation (11). The chaotic vector can be generated directly by equation (10) and, after the carrier operation, restored to the original solution space.
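The following NumPy sketch illustrates generating a Fuch chaotic sequence and carrying it into the solution space. It assumes the cos(1/x²) form of equation (10) given above; since that form produces values in [-1, 1], the rescaling to [0, 1] before the carrier operation is an additional assumption made for illustration.

```python
import numpy as np

def fuch_sequence(x0, length):
    """Generate a Fuch chaotic sequence (equation (10)); assumes the cos(1/x^2) form."""
    seq = np.empty(length)
    x = x0
    for n in range(length):
        x = np.cos(1.0 / (x * x))   # Fuch iteration; x must stay nonzero
        seq[n] = x
    return seq

def carrier_to_solution_space(cx, lb, ub):
    """Map chaotic values in [-1, 1] onto [lb, ub] (carrier operation, cf. equation (12))."""
    cx01 = (np.asarray(cx) + 1.0) / 2.0          # rescale [-1, 1] -> [0, 1] (assumption)
    return np.asarray(lb) + cx01 * (np.asarray(ub) - np.asarray(lb))

cx = fuch_sequence(x0=0.3, length=5)
candidate = carrier_to_solution_space(cx, lb=np.zeros(5), ub=np.ones(5) * 10)
```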

3.3. Fuch Chaotic Search

Assume that $X = (x_1, x_2, \dots, x_D)$ indicates the solution vector to be optimized, with D indicating the number of dimensions and $x_k \in [x_{\min,k}, x_{\max,k}]$, where $x_{\min,k}$ and $x_{\max,k}$ indicate the minimum and maximum values of the k-th dimension of the solution vector, respectively. Let MaxCt be the maximum number of chaotic search iterations and $f(\cdot)$ the objective function. The steps of the Fuch chaotic search proposed in this paper are as follows:

Step 1. Initialize the initial values of the chaotic vector iteration $cx_{i,k}$, where i = 1, 2, …, MaxCt and k = 1, 2, …, D.
Step 2. Substitute them into equation (10) to perform the Fuch mapping iteration and generate the chaotic vector.
Step 3. Carry the chaotic vector into the original solution space to generate a new solution by

$$x'_k = x_{\min,k} + \frac{1 + cx_k}{2}\,(x_{\max,k} - x_{\min,k}). \tag{13}$$

Step 4. Repeat Steps 2 and 3 until the maximum number of chaotic searches MaxCt is reached, and keep the solution with the best fitness value.
Step 5. Compare the new solution with the solution to be optimized, and keep the better one.
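Continuing the NumPy sketches above (and reusing the carrier_to_solution_space helper), the following is a minimal sketch of the Fuch chaotic search steps; the acceptance rule simply keeps the better of the chaotic candidate and the incumbent solution.

```python
def fuch_chaotic_search(x, obj, lb, ub, max_ct=100, seed=0):
    """Fuch chaotic search around a solution x (Section 3.3); a sketch, not the paper's exact code."""
    rng = np.random.default_rng(seed)
    best_x, best_f = np.asarray(x, float).copy(), obj(x)
    cx = rng.uniform(-1.0, 1.0, size=len(best_x))        # Step 1: chaotic initial values (nonzero almost surely)
    for _ in range(max_ct):
        cx = np.cos(1.0 / (cx * cx))                      # Step 2: Fuch iteration, equation (10)
        cand = carrier_to_solution_space(cx, lb, ub)      # Step 3: carrier operation, equation (13)
        f = obj(cand)
        if f < best_f:                                    # Steps 4-5: keep the best solution found
            best_x, best_f = cand, f
    return best_x, best_f
```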

3.4. Fuch Chaotic Reverse Initialization Strategy

To overcome the defects caused by random population initialization in the BA, this study combines Fuch chaotic initialization with reverse learning initialization and proposes a Fuch chaotic reverse learning initialization strategy. In fact, according to probability theory, the probability that the current solution is farther from the optimal solution than its corresponding inverse solution is 50%, so searching the current solution and the inverse solution at the same time and keeping the better one allows the population to be distributed as evenly as possible in the search space.

When the upper and lower bounds of the optimization variables are symmetric and the benchmark function contains a square or absolute-value term, the general reverse learning strategy produces an initial solution and a reverse solution that differ only in sign, so their fitness values are identical. In that case, when selecting the initial population, the same fitness value corresponds to multiple solutions, which reduces the diversity of population fitness values and often produces larger average fitness values. To reduce this phenomenon, this study proposes a new equation for generating the inverse solution:

Each dimension of the initial solution is mapped to the vicinity of the interval center, and each dimension of the generated inverse solution is no longer simply the opposite number of the corresponding initial solution, so the inverse solution has a different fitness value. This further improves the diversity of population fitness values and the quality of the initial solution.

Let NP be the number of bat individuals; the initialization steps are as follows:

Step 1. Iterate equation (10) to generate the chaotic vector sequence.
Step 2. Carry the chaotic vectors into the original solution space by equation (13) to generate the initial solutions $X_i$ (i = 1, 2, …, NP).
Step 3. Obtain the corresponding inverse solutions from the initial solutions according to equation (14).
Step 4. Sort the initial solutions and the reverse solutions by fitness value, and select the top NP solutions as the initial population.
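Continuing the sketches above, the following illustrates the initialization steps. Note that the paper's own inverse-solution formula (equation (14)) is not reproduced in this text, so the sketch uses the standard opposition-based form x̂ = x_min + x_max − x purely as a labeled placeholder.

```python
def fuch_reverse_initialization(obj, lb, ub, n_pop=40, seed=0):
    """Fuch chaotic reverse-learning initialization (Section 3.4), sketched with the
    standard opposition formula x_hat = lb + ub - x as a placeholder for equation (14)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    # Steps 1-2: Fuch chaotic candidates carried into the solution space.
    cx = rng.uniform(-1.0, 1.0, size=(n_pop, d))
    cx = np.cos(1.0 / (cx * cx))
    pop = lb + (cx + 1.0) / 2.0 * (ub - lb)
    # Step 3: reverse solutions (placeholder opposition formula, not the paper's equation (14)).
    reverse = lb + ub - pop
    # Step 4: keep the best n_pop individuals of the combined set.
    union = np.vstack([pop, reverse])
    fit = np.apply_along_axis(obj, 1, union)
    idx = np.argsort(fit)[:n_pop]
    return union[idx], fit[idx]
```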

3.5. Local Search for the Best Individual Evolution Guidance

In the BA, the local search relies on a random walk in the neighborhood of the current optimal solution, so the population lacks a mutation mechanism; a local search that simply mutates randomly selected individuals ignores the optimal individual of the population and cannot make full use of the position information it carries [32]. Therefore, we borrow the differential evolution operator, which has a strong local development ability, take the best individual of the population as the reference position, and introduce mutation and crossover operations to guide the movement of the population. The mutation operator is as follows:

$$v_{i,j} = x_{best,j} + \left(x_{r1,j} - x_{r2,j}\right), \tag{15}$$

where $r1, r2 \in \{1, 2, \dots, NP\}$ are randomly selected indices with $r1 \neq r2 \neq i$, and j ∈ {1, 2, …, d}.

The difference between $x_{r1}$ and $x_{r2}$ determines the disturbance applied to the optimal solution $x_{best}$: the smaller the difference vector, the smaller the disturbance. At the initial stage of iteration, the differences between individuals are large, so the disturbance is large and the search is performed over a wide range around the optimal solution. As individuals gradually approach the optimal solution in the later stage of iteration, the disturbance becomes small, and the search range near the optimal solution shrinks, which enhances the local search ability and improves the convergence performance. In equation (15), the new solution exploits a new search area near the optimal solution obtained in the last iteration, and two randomly selected individuals are differenced to guide the bats to search the optimal area as far as possible, improving their development ability. The crossover operation on the individuals generated by mutation is performed according to

$$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } rand_j \leq Cr \text{ or } j = j_{rand}, \\ x_{i,j}, & \text{otherwise}, \end{cases} \tag{16}$$

where Cr is the crossover probability with a value within [0, 1], $rand_j$ is a uniform random number in [0, 1], and $j_{rand}$ is an integer randomly selected from [1, D]. The crossover ensures that the value of at least one dimension is taken from the mutant individual, so the trial individual differs from both the current optimal individual and the mutant individual, avoiding an invalid crossover [25]. Otherwise, no new individuals would be produced, the population would not evolve, and the search would stall.
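Continuing the NumPy sketches above, the following illustrates the best-guided mutation and binomial crossover of equations (15) and (16) as reconstructed here (with no explicit scaling factor in the mutation). The function and parameter names are illustrative assumptions.

```python
def best_guided_mutation_crossover(pop, best, i, cr, rng):
    """Best-guided mutation (eq. (15)) and binomial crossover (eq. (16)); a sketch."""
    n_pop, d = pop.shape
    # Mutation: disturb the best individual with the difference of two other random individuals.
    r1, r2 = rng.choice([k for k in range(n_pop) if k != i], size=2, replace=False)
    mutant = best + (pop[r1] - pop[r2])
    # Binomial crossover: take at least one dimension (j_rand) from the mutant individual.
    j_rand = rng.integers(d)
    mask = rng.random(d) <= cr
    mask[j_rand] = True
    trial = np.where(mask, mutant, pop[i])
    return trial

# Hypothetical call inside the ESCBA loop:
# trial = best_guided_mutation_crossover(pop, best, i, cr=0.75, rng=np.random.default_rng(1))
```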

3.6. Chaotic Global Search with a Scouting Mechanism

In the artificial bee colony algorithm, if a food source has not been improved for L consecutive iterations, the corresponding employed bee abandons it and becomes a scout bee searching for a new food source, which helps the algorithm escape local optima and enhances global optimization performance. Inspired by this idea, bats trapped in a local optimum are defined as scout bats in the bat algorithm, and new positions are searched in the whole search space to replace the old ones [25]. However, the scout bats do not simply abandon the old position and search randomly as scout bees do; instead, they search for a new position with two chaotic search strategies. For each scout bat, based on the current position and a random position, an intermediate solution is generated by a fixed number of chaotic search iterations using the Fuch map, which has good ergodic balance. After comparison with the original position, the better solution is kept as the current position. In this way, bats continually jump out of local extrema.

For a scout bat $x_i$ in the t-th iteration, let $c_k$ denote the Fuch chaotic numbers generated according to equation (10). The chaotic search based on the current position is

$$x'_{new,k} = x_{i,k} + c_k\,\left(x_{j,k} - x_{i,k}\right), \tag{17}$$

where k ∈ {1, 2, …, d}, i, j ∈ {1, 2, …, NP}, and i ≠ j. The chaotic search based on a random position is

$$x''_{new,k} = x_{\min,k} + \frac{1 + c_k}{2}\,\left(x_{\max,k} - x_{\min,k}\right), \tag{18}$$

where $x_{\min,k}$ and $x_{\max,k}$ indicate the minimum and maximum values of the k-th dimension of the solution vector, respectively. The two intermediate solutions are each refined by a fixed number of chaotic search iterations following the steps described above, and finally, the better solution is selected.

For the first strategy (equation (17)), flying from the current position toward another randomly selected position certainly improves exploration, but the selected position may not be better, so the flight may lead to a worse position; this method also limits the search to the neighborhood of the current position [33]. Although nearby areas are exploited effectively, the whole solution space cannot be searched well. The second strategy (equation (18)) further expands the search scope and, combined with chaotic mapping, can traverse the whole solution space effectively, but it cannot carry out a fine search and ignores the information around the current individual. Therefore, the two search strategies are used as independent ways to generate intermediate solutions; their respective advantages are combined while their shortcomings are avoided, so that individuals trapped in a local optimum can independently choose a better new position to replace the original one. The combination of these two strategies continually frees the search from local extrema and enhances the global exploration ability.
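Continuing the NumPy sketches above, the following illustrates the scout-bat chaotic global search using the reconstructed forms of equations (17) and (18); since those equations are reconstructions, the exact candidate formulas here are assumptions.

```python
def scout_bat_search(pop, i, obj, lb, ub, max_ct=100, seed=0):
    """Scout-bat chaotic global search (Section 3.6); a sketch using the reconstructed
    equations (17)-(18): one candidate near the current position, one in the whole space."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    j = rng.choice([k for k in range(len(pop)) if k != i])
    cx = rng.uniform(-1.0, 1.0, size=d)
    best_x, best_f = pop[i].copy(), obj(pop[i])
    for _ in range(max_ct):
        cx = np.cos(1.0 / (cx * cx))                          # Fuch iteration, equation (10)
        cand_local = pop[i] + cx * (pop[j] - pop[i])          # equation (17): around the current position
        cand_global = lb + (cx + 1.0) / 2.0 * (ub - lb)       # equation (18): whole search space
        for cand in (np.clip(cand_local, lb, ub), cand_global):
            f = obj(cand)
            if f < best_f:
                best_x, best_f = cand, f
    return best_x, best_f
```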

3.7. Adaptive Chaotic Updating of Parameters

In the basic BA, the pulse emission rate r controls the switching between the global search and the local search, and the loudness A determines whether a new solution is accepted, which affects the optimization efficiency to some extent. Therefore, this study proposes a strategy that adaptively updates the parameters with the number of iterations and a chaotic sequence, as follows:

$$A^{t} = \left[A_0 - (A_0 - A_{end})\,\frac{t}{T_{\max}}\right] c_t, \qquad
r^{t} = \left[r_0 + (r_{end} - r_0)\,\frac{t}{T_{\max}}\right] c_t, \tag{19}$$

where $A_0$ and $A_{end}$ are the initial and final values of the loudness A, $r_0$ and $r_{end}$ are the initial and final values of the pulse emission rate, t is the current iteration number, $T_{\max}$ is the maximum iteration number, and $c_t$ is the corresponding value of the chaotic sequence. The update equation of A is a linearly decreasing function multiplied by a chaotic number, and the update equation of r is a linearly increasing function multiplied by a chaotic number. The linear function accelerates the update of the parameters, and the chaotic sequence ensures that the values traverse the whole interval, thus controlling the balance between development and exploration.
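Continuing the sketches above, the following illustrates equation (19) as reconstructed. The default initial and final values follow the loudness range [0.6, 0.9] and pulse emission rate range [0.1, 0.7] reported in Section 5.5; rescaling the Fuch chaotic value from [-1, 1] to [0, 1] is an assumption.

```python
def adaptive_parameters(t, t_max, c_t, a0=0.9, a_end=0.6, r0=0.1, r_end=0.7):
    """Adaptive chaotic update of loudness A and pulse rate r (equation (19) as reconstructed).
    c_t is a Fuch chaotic value in [-1, 1]; rescaling it to [0, 1] is an assumption."""
    c01 = (c_t + 1.0) / 2.0
    A = (a0 - (a0 - a_end) * t / t_max) * c01       # linearly decreasing, modulated by chaos
    r = (r0 + (r_end - r0) * t / t_max) * c01       # linearly increasing, modulated by chaos
    return A, r
```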

3.8. Chaotic Bat Algorithm with Evolution and Search Mechanism (ESCBA) Flow

To sum up, the implementation process of the chaotic bat algorithm with evolution and search mechanism (ESCBA) is as follows:

Step 1. Initialize the algorithm-related parameters, including the number of bat individuals NP, the variable dimension D, the maximum iteration number MaxIt, the search threshold L, the number of chaotic searches MaxCt, and the individual update flag T.
Step 2. Initialize the population with the chaotic reverse learning strategy described above, calculate the fitness values, and find the optimal individual.
Step 3. Traverse all individuals, update the frequency, velocity, and position information according to equation (7), and generate a new individual $x_{new}$.
Step 4. Generate a random number. If it is larger than $r_i$, perform the mutation and crossover operations according to equations (15) and (16), and assign the evolved individual to $x_{new}$.
Step 5. Generate a random number. If the fitness of $x_{new}$ is better than that of the current individual and the random number is less than $A_i$, accept the new individual, reset T to 0, update A and r according to equation (19), and go to Step 7; otherwise, go to Step 6.
Step 6. Increase T by 1. If T is greater than or equal to L, carry out MaxCt chaotic searches according to equations (17) and (18), respectively, to generate the intermediate individuals; select the better intermediate value, compare it with the current individual, and keep the better one as the new position of the current individual. If T is less than L, go to Step 7.
Step 7. Compare the obtained new individual with the optimal individual in the population and update the optimal individual. If the number of iterations has reached MaxIt, output the optimal solution and the corresponding fitness value and exit; otherwise, go to Step 3.
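For illustration, the following skeleton wires the earlier helper sketches (initialization, BA flight, mutation/crossover, scout search, adaptive parameters) together following Steps 1–7. Control-flow details such as treating A and r as shared scalars and the clipping to bounds are simplifying assumptions, not the paper's exact implementation.

```python
def escba(obj, lb, ub, n_pop=40, max_it=500, L=25, max_ct=100, cr=0.75,
          f_min=0.0, f_max=2.0, seed=0):
    """ESCBA skeleton (Steps 1-7); a sketch wiring the earlier helper sketches together."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    pop, fit = fuch_reverse_initialization(obj, lb, ub, n_pop, seed)    # Step 2
    v = np.zeros((n_pop, d))
    stagnation = np.zeros(n_pop, dtype=int)                             # update flag T per individual
    best, best_fit = pop[np.argmin(fit)].copy(), fit.min()
    A, r = 0.9, 0.1
    cx = rng.uniform(-1.0, 1.0)
    for t in range(1, max_it + 1):
        cx = np.cos(1.0 / (cx * cx))                                    # chaotic number for equation (19)
        for i in range(n_pop):
            f_i = f_min + (f_max - f_min) * rng.random()                # Step 3: BA flight, equation (7)
            v[i] += (pop[i] - best) * f_i
            x_new = np.clip(pop[i] + v[i], lb, ub)
            if rng.random() > r:                                        # Step 4: evolution, equations (15)-(16)
                x_new = np.clip(best_guided_mutation_crossover(pop, best, i, cr, rng), lb, ub)
            f_new = obj(x_new)
            if f_new < fit[i] and rng.random() < A:                     # Step 5: accept, update A and r
                pop[i], fit[i] = x_new, f_new
                stagnation[i] = 0
                A, r = adaptive_parameters(t, max_it, cx)
            else:                                                       # Step 6: scout-bat chaotic search
                stagnation[i] += 1
                if stagnation[i] >= L:
                    pop[i], fit[i] = scout_bat_search(pop, i, obj, lb, ub, max_ct, seed + t)
                    stagnation[i] = 0
            if fit[i] < best_fit:                                       # Step 7: update the global best
                best, best_fit = pop[i].copy(), fit[i]
    return best, best_fit
```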

4. ESCBA-GRU Combined Model

4.1. GRU Hyperparameter Optimization Strategy

Since the parameter adjustment process of the GRU neural network is blind and random, the bat algorithm is introduced to complete the optimization [34]. First, the number m of parameters that the GRU neural network needs to adjust is determined, and these parameters are regarded as an m-dimensional vector to be processed by the bat algorithm. Second, the value range of this m-dimensional vector is determined; that is, the vector space to be optimized by the bat algorithm is fixed. Then, the GRU neural network is regarded as a function that is called and executed by the bat algorithm; each time, the m-dimensional vector is used as the hyperparameters of the GRU to train the data model. Finally, the trained network model is used to predict the test data, and the error obtained by comparing the predicted values with the real values is returned to the bat algorithm as the evaluation index of the optimization. To clearly express the error between the predicted data and the true values, the mean squared error (MSE) is used as the LOSS function; in plain terms, it is the average of the squared distances between the predicted data and the original data at corresponding points. The evaluation function is expressed as follows:

$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^{2}, \tag{20}$$

where N indicates the number of samples, $y_i$ is the true value of the sample at the i-th position, and $\hat{y}_i$ indicates the value at the i-th position predicted by the model. Geometrically, each term measures the distance between the true value and the predicted value at the same point, which is used here to quantify the gap between prediction and truth. When the gap between the predicted value and the true value is large, whether positive or negative, the squared term is large; that is, the prediction is far from the true value. In this algorithm, the MSE value is used as the evaluation function and is fed back to the improved bat algorithm to evaluate the current optimization result. The optimization algorithm judges whether the current position is better than the historical one according to this evaluation result and decides whether to update the current point.
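The following is a sketch of the fitness function the bat algorithm would call, assuming a Keras/TensorFlow GRU implementation. The hyperparameter vector layout [number of layers, number of units, learning rate, decay], the library choice, and the training settings (epochs, batch size) are assumptions for illustration, not the paper's stated implementation.

```python
import numpy as np
import tensorflow as tf

def gru_fitness(theta, x_train, y_train, x_test, y_test):
    """Train a GRU with the hyperparameter vector theta and return the test MSE (equation (20)).
    theta = [n_layers, n_units, learning_rate, decay]; layout and library are assumptions."""
    n_layers, n_units = int(round(theta[0])), int(round(theta[1]))
    lr, decay = float(theta[2]), float(theta[3])
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=x_train.shape[1:]))               # (timesteps, features)
    for k in range(n_layers):
        model.add(tf.keras.layers.GRU(n_units, return_sequences=(k < n_layers - 1)))
    model.add(tf.keras.layers.Dense(1))
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(lr, decay_steps=1000, decay_rate=decay)
    model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse")
    model.fit(x_train, y_train, epochs=10, batch_size=256, verbose=0)
    y_pred = model.predict(x_test, verbose=0).ravel()
    return float(np.mean((y_pred - y_test) ** 2))   # MSE fed back to ESCBA as the evaluation index
```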

4.2. Execution Process of the Combined Model

The purpose of constructing the combined GRU model based on the optimized bat algorithm is to improve the neural network's data prediction results and reduce the error between the predicted value and the true value. According to the description of the combined neural network model in the previous section, the execution steps of the model are as follows:

Step 1. Initialize the parameters, including the population size N of the algorithm, the dimension D of the problem variable, and the maximum number of iterations of the algorithm.
Step 2. Determine the fitness function (the LOSS function after GRU training), and calculate the mean squared error to obtain the evaluation index.
Step 3. Search for the best GRU parameters in the parameter vector space.
Step 4. Complete the optimization process of the ESCBA and obtain the optimal solution. Import the result into the GRU neural network as parameters to complete the data prediction.
Step 5. According to the prediction result, calculate the error value and return it to the ESCBA as the evaluation index; then update the position of the optimal solution according to the evaluation index.
Step 6. Iterate Steps 3, 4, and 5 until the iteration requirements are met or the maximum number of iterations is reached.
Step 7. Obtain the network parameters of the optimal solution, import them into the neural network, and complete the model training.
Step 8. Use the trained model to predict and analyze the test data.
Step 9. Obtain the prediction results.
Step 10. End.
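As a hypothetical wiring of the earlier sketches, the snippet below shows how the ESCBA driver could search the GRU hyperparameter space using the fitness function above; the bounds and data arrays (x_train, y_train, x_test, y_test) are illustrative assumptions.

```python
# Hypothetical wiring: ESCBA searches [n_layers, n_units, learning_rate, decay]
# and gru_fitness returns the test MSE as the fitness value.
bounds_low  = [1,  16, 1e-4, 0.80]
bounds_high = [4, 256, 1e-2, 0.99]
fitness = lambda theta: gru_fitness(theta, x_train, y_train, x_test, y_test)
best_theta, best_mse = escba(fitness, bounds_low, bounds_high, n_pop=10, max_it=20)
```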

5. Experimental Analysis

5.1. Experimental Data

The experimental data of this study was from a bank’s data warehouse, and the target data collected were the transaction volume data of the bank’s core system. The data collection time was from July 20, 2018, to July 1, 2020. The data were summarized in minutes, with a total of 1,026,721 samples of data. Among them, the data from July 20, 2018, to January 30, 2020, were selected as model training data, including 763,349 samples of data. The data from January 30, 2020, to July 1, 2020, were selected as test data to test the fitting effect of the model, including 263,372 samples of data.

5.2. Data Processing

For abnormalities in the collected data, following common big-data cleaning practice, the missing part of the collected data was filled with the average value of the data observed in the same period at that point. After cleaning, the experimental data were normalized to the range [0, 1]. After cleaning and processing, the data were well comparable and all at the same scale.
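For illustration, the following pandas snippet sketches the cleaning and normalization steps described above. The file name, the column names ("timestamp", "tx_volume"), and the interpretation of "same period" as the same minute of the day are assumptions.

```python
import pandas as pd

# Illustrative cleaning and normalization; file and column names are hypothetical.
df = pd.read_csv("core_system_tx_per_minute.csv", parse_dates=["timestamp"])
df["minute_of_day"] = df["timestamp"].dt.hour * 60 + df["timestamp"].dt.minute

# Fill each missing value with the mean of the same minute of the day across the history.
df["tx_volume"] = df.groupby("minute_of_day")["tx_volume"].transform(
    lambda s: s.fillna(s.mean()))

# Min-max normalization to [0, 1].
v_min, v_max = df["tx_volume"].min(), df["tx_volume"].max()
df["tx_volume_norm"] = (df["tx_volume"] - v_min) / (v_max - v_min)
```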

5.3. Experimental Analysis of Bat Algorithm

Nine benchmark functions with different characteristics were selected to test the performance of the bat algorithm and make a full investigation of the optimization ability of the algorithm. The function name, value range, global minimum, and characteristics are shown in Table 1. Among them, f4, f5, f8, and f9 are unimodal functions, mainly testing the optimization accuracy and convergence speed of the algorithm. f1, f2, f3, f6, and f7 are multimodal functions, mainly testing the global search of the algorithm and its ability to jump out of the local extremum to avoid premature convergence.

5.4. ESCBA Initialization Performance Analysis

In this paper, random initialization, general reverse learning initialization, and Fuch chaotic reverse initialization are used, respectively, and the performance of the different initialization strategies is compared by calculating the mean value of the initial population fitness (Fit) and diversity (Diver). The population diversity is computed as follows:

$$Diver = \frac{1}{NP}\sum_{i=1}^{NP}\sqrt{\sum_{k=1}^{D}\left(x_{i,k} - \bar{x}_k\right)^{2}}, \tag{21}$$

where NP indicates the population size and $\bar{x}$ indicates the mean of the initial solutions. The greater the diversity, the more dispersed the population distribution; otherwise, it is dense. The smaller the fitness, the higher the quality of the initial population solution.
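Continuing the NumPy sketches above, a small helper computing this diversity measure (in the reconstructed form of equation (21), i.e., the mean Euclidean distance from the population centroid) could look as follows.

```python
def population_diversity(pop):
    """Mean Euclidean distance of individuals from the population centroid
    (the diversity measure as reconstructed in equation (21))."""
    centroid = pop.mean(axis=0)
    return float(np.mean(np.linalg.norm(pop - centroid, axis=1)))
```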

In the experiment, all nine benchmark functions use 50 dimensions, the maximum number of iterations is 500, and the population size is 40; all runs are repeated independently 50 times. As shown in Table 2, in the test of each function, the average fitness and average diversity of the population initialized by the Fuch chaotic reverse learning strategy are better than those of the other two initialization strategies, showing that the Fuch chaotic reverse strategy can expand the diversity of the population to a certain extent and improve the quality of the initial solution.

5.5. ESCBA Optimization Accuracy Analysis

To verify the performance of the ESCBA, this study selected the basic BA [1], BAGW [6], HSABA [17], dBA [18], and SABA [19] for the comparative experiment. In the experiment, the number of bats NP of all algorithms was set to 40, and the maximum number of iterations (Imax) was 500. Otherwise, the parameter settings of the comparison algorithms were consistent with their respective literature. The specific parameter settings of the ESCBA were as follows: the maximum pulse frequency fmax = 2, the minimum pulse frequency fmin = 0, the crossover probability Cr = 0.75, the search threshold L = 25, the loudness range A ∈ [0.6, 0.9], the pulse emission rate range r ∈ [0.1, 0.7], and the number of chaotic searches Cmax = 100. After each algorithm was run independently 50 times on the 9 benchmark functions in 30, 50, and 100 dimensions, the best, worst, mean, and standard deviation (Std) of the optimization results were calculated, as shown in Tables 3–5.

From the results in Tables 3–5, the accuracy of each algorithm decreases as the dimension increases. On the whole, the optimization effects of ESCBA and SABA are better than those of the other comparison algorithms. Among them, for f2, the mean and variance of SABA are not as good as those of BAGW, dBA, and ESCBA in 30 dimensions, and not as good as those of dBA and ESCBA in 50 dimensions. Except for f2 and f3 in 30 and 50 dimensions and f6 and f7 in 100 dimensions, dBA's solution performance is better than that of HSABA, BAGW, and BA, and BA has the poorest optimization performance. Whether for unimodal or multimodal functions, ESCBA performs better than the other algorithms in different dimensions, and its advantage is more obvious on the 100-dimensional functions. For the multimodal functions f2, f3, f6, and f7, the accuracy of ESCBA in each dimension is improved compared with SABA. For f6, f7, f8, and f9, ESCBA can find the theoretical optimal value in different dimensions, which indicates that ESCBA has a stronger global searching ability. Moreover, for f8, as the dimension increases, the optimization value of SABA deviates greatly and its variance further increases, while ESCBA still keeps a good optimization result. In addition, for f9 in 30 and 50 dimensions, ESCBA and SABA both reach the theoretical optimal value. For f3 in 50 dimensions, SABA obtains the best single solution, but its mean and variance are clearly worse than those of ESCBA. Except for f9 in 50 and 100 dimensions, ESCBA has the smallest standard deviation for the other functions and dimensions, which shows that ESCBA has good overall stability and robustness.

5.6. GRU Combination Model Transaction Volume Prediction Experiment

To verify the ability of the GRU combination model to predict the transaction volume data of the core accounting system of commercial banks, the GRU model was trained on the training data set to generate a data model, and the test data were then used to test the predictive ability of the model. In the experiment, to verify the predictive ability of the combined GRU model, the original GRU model and the ARMA model were introduced to predict the experimental data. The experimental results are shown in Table 6, and the curves of the predicted data are shown in Figure 3.

As seen from the experimental results, the prediction error (MSE) of the improved combined neural network model is 17% lower than that of the standard GRU neural network model, the RMSE is 8% lower, and the MAE is 17% lower. The prediction effect of the combined neural network is therefore better than that of the standard GRU neural network.

Figure 3 shows that the predicted-value curve of the combined GRU neural network is closer to the true-value curve, and many points coincide with the true values. This fitting effect has a certain guiding significance for data-driven decision-making.

6. Conclusion

Commercial banks are extremely important for social and economic growth. Predicting the transaction volume of the core accounting system is therefore of both theoretical and practical significance. In this study, the theoretical basis and optimization process of the bat algorithm are systematically studied, and the characteristics and shortcomings of the algorithm are analyzed in depth. While keeping the bat algorithm's simulation of the hunting behavior of bat populations, a chaotic bat algorithm with evolution and search mechanism (ESCBA) is proposed, which introduces a Fuch chaotic reverse learning strategy to improve the initial population diversity. The evolutionary mechanism ensures that the population can make full use of the guidance of the optimal individual's information, and the search mechanism further enhances the ability to escape local extrema. The control parameters A and r are adaptively updated so that the algorithm tends toward stable convergence and keeps a balance between exploration and development. Finally, a model combining ESCBA with the GRU neural network is proposed, and the hyperparameters of the GRU neural network are optimized by giving full play to the advantages of the optimization algorithm. The results show that the improved combined GRU model performs well in MSE, RMSE, and MAE and is superior to the original GRU model and the traditional time-series forecasting ARMA model. Future work will focus on further exploring the functionality of the combined GRU model, realizing automatic deployment and startup management of application systems according to the predicted data, and building an intelligent and dynamic application management and early-warning system.

Data Availability

The data used in this research are confidential and not open to the public.

Disclosure

This article was originally published in the proceedings of the International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery.

Conflicts of Interest

The authors declare that they have no conflicts of interest.