Abstract
Aiming at the problems that, in the traditional interacting multiple model (IMM) algorithm, the probability transition matrix is set by prior information and the filtering parameters are fixed, which lead to model probability lag during model switching and low filtering and tracking accuracy, an interacting multiple model filter tracking algorithm with improved parameter adaptation and bare bones sine cosine optimization (IABBSCA-IMM) is proposed. First, the Markov probability transition matrix is dynamically adjusted through a parameter adaptation method, and limiting conditions are added; then, the filtering parameters Q and R are optimized by the bare bones sine cosine algorithm (BBSCA); finally, three motion models, CV (uniform velocity motion), CA (uniform acceleration motion), and CT (uniform turning motion), are used to conduct filtering and tracking experiments on the target. The simulation results show that, compared with the IMM, AMP-IMM, and IASCA-IMM algorithms, the IABBSCA-IMM algorithm proposed in this study has the smallest position and velocity root mean square error (RMSE) in the X and Y directions and better accuracy.
1. Introduction
In recent years, with the vigorous development of electronic information technology and modern control technology [1], target filtering and tracking technology has been widely used in automatic driving [2], smart security [3], traffic planning [4], and navigation and positioning [5], where it plays a vital role. In essence, it uses the state information of the target at the current moment and the measurement information obtained by the sensor to estimate the state information at the next moment [6]. The interacting multiple model algorithm (IMM) [7] is a very effective target filtering and tracking algorithm. It adopts multiple fixed parallel models to describe the motion pattern of the target and introduces the Markov probability transition matrix so that the input states of the motion models interact with each other. The commonly used motion models include the constant velocity model (CV), the constant velocity turning model (CT), the constant acceleration model (CA), and the Singer model [8]. The optimal state estimate is obtained by weighted fusion of the filtering results of each filter. Although the IMM algorithm overcomes the limitations of a single fixed model, the elements of the Markov probability transition matrix are preset and the filtering parameters Q and R are fixed, which greatly affects the filtering and tracking accuracy.
The prior setting of the probability transition matrix leads to the model probability lag problem during model switching. Rongchun Z et al. proposed the AMP-IFIMM algorithm, defined the concept of the error compression ratio, and adaptively adjusted the Markov probability transition matrix through the ratio of the error compression ratios to increase the probability of matching models and reduce the probability of mismatching models; however, this method is only suitable for two-model systems and cannot be generalized to multiple models [9]. Puwen et al. proposed a Markov-matrix-modified IMM filter tracking algorithm (AMP-IMM), which extended the method of literature [9] to three or more models, but some necessary conditions still need to be met during adaptive adjustment, and there are also certain restrictions on the use of the models [10]. Biao et al. proposed to use the current measurement information to adaptively update the model probabilities of the target and to use the updated model probabilities to make a hybrid prediction of the target state, which improved the filtering performance of the algorithm to a certain extent [11]. YE et al. defined a new correction factor and used the posterior information to correct the probability transition matrix in real time, which increased the probability of the matching model and improved the filtering and tracking accuracy to a certain extent [12]. The above algorithms modify the probability transition matrix and alleviate the model probability lag problem during model switching but do not solve the problem of fixed filtering parameters Q and R.
Aiming at the problem of low filtering and tracking accuracy caused by fixed filtering parameters, Lijun Y et al. used an adaptive innovation Kalman filter to estimate the process noise covariance matrix Q and the measurement noise covariance matrix R, which improved the prediction accuracy, but the method has not been extended to the multimodel case [13]. Ullah I et al. adjusted the noise covariance matrices through an artificial neural network, which improved the filtering performance of the algorithm to a certain extent, but the computational load was large and the model design was complicated [14]. Zhao C et al. used a fuzzy control method to adjust the weight of the matching model and automatically adjusted the process noise level through a fuzzy inference mechanism, which enhanced the adaptivity of the algorithm [15]. Liu H et al. adaptively adjusted the error covariance matrix of the dynamic model through a synchronous optimization feedback learning algorithm for maneuvering target tracking based on an Elman neural network (ENN), optimized the final state estimate, and improved the estimation accuracy of the algorithm [16]. The above algorithms adjust the filtering parameters Q and R and improve the filtering accuracy to a certain extent, but they do not study the model probability lag problem caused by the prior setting of the probability transition matrix.
Based on the IMM algorithm, this study uses the parameter adaptation method to dynamically adjust the Markov probability transition matrix after each model probability update, which increases the switching probability to the matching model. The BBSCA is used to optimize the filter parameters Q and R, which improves the filter tracking accuracy. Then, the IABBSCA-IMM algorithm is proposed, and the effectiveness of the proposed algorithm is verified by simulation experiments.
2. Principle of Interactive Multimodel (IMM) Algorithm
When filtering and tracking a target, a single model may cause large errors or even loss of the target because the target may move through a variety of motion states. The IMM algorithm first sets a model set M = {m1, m2, ..., mr} containing multiple possible motion models; the models in M are then filtered in parallel so that a model is matched to the current motion state; the optimal estimates of the previous moment are mixed to obtain the initial conditions of the filter matching each model, and filtering is performed; finally, the likelihood function is used to update the model probabilities, and the optimal state estimate is obtained by weighted fusion.
If the target has r motion states, corresponding to r models and state transition equations, then the target state equation represented by the jth model is as follows:

$X_j(k+1) = \Phi_j(k+1, k)\,X_j(k) + G_j(k)\,W_j(k)$  (1)

The measurement equation is as follows:

$Z_j(k+1) = H(k+1)\,X_j(k+1) + V(k+1)$  (2)

where $X_j(k)$ is the state vector of model j at time k, $Z_j(k)$ is the measurement vector of model j at time k, $\Phi_j(k+1, k)$ and $G_j(k)$ are the state transition matrix and the noise coefficient matrix of model j at time k, respectively, and $H(k+1)$ is the measurement matrix. The process noise $W_j(k)$ and the measurement noise $V(k)$ of model j are both zero-mean Gaussian white noise with covariance matrices $Q_j$ and $R_j$, respectively. The switching between models is determined by the Markov probability transition matrix; $p_{ij}$ represents the probability of transferring from the ith motion model to the jth motion model, and the probability transition matrix is as follows:

$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1r} \\ p_{21} & p_{22} & \cdots & p_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ p_{r1} & p_{r2} & \cdots & p_{rr} \end{bmatrix}$  (3)
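To make the model equations concrete, the following Python sketch builds the discrete-time transition matrix Φ and measurement matrix H for the textbook CV, CA, and CT kinematic models with sampling interval T (one-dimensional for CV/CA, planar for CT). It is a minimal illustrative sketch, not necessarily the exact matrices used later in the experiments.

```python
import numpy as np

def cv_model(T):
    """Constant velocity model: state [x, vx]."""
    Phi = np.array([[1.0, T],
                    [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])                 # position-only measurement
    return Phi, H

def ca_model(T):
    """Constant acceleration model: state [x, vx, ax]."""
    Phi = np.array([[1.0, T,   0.5 * T**2],
                    [0.0, 1.0, T],
                    [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])
    return Phi, H

def ct_model(T, omega):
    """Constant turn model in the x-y plane: state [x, vx, y, vy], turn rate omega."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    Phi = np.array([[1.0, s / omega,           0.0, -(1.0 - c) / omega],
                    [0.0, c,                   0.0, -s],
                    [0.0, (1.0 - c) / omega,   1.0, s / omega],
                    [0.0, s,                   0.0, c]])
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])       # measure x and y positions
    return Phi, H
```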
The IMM algorithm is mainly divided into four steps: input interaction, parallel filtering, probability update, and output interaction. The specific analysis is as follows:

Step 1. Input interaction. The mixed estimate and covariance are calculated from the state estimates at the previous moment and the probability of each model, and the mixed estimate is used as the initial state of this cycle. After normalization, the predicted probability of model j is as follows:

$\bar{c}_j = \sum_{i=1}^{r} p_{ij}\,\mu_i(k-1)$  (4)

The mixing probability of model i to model j is as follows:

$\mu_{i|j}(k-1) = \frac{1}{\bar{c}_j}\,p_{ij}\,\mu_i(k-1)$  (5)

The mixed state estimate of model j is as follows:

$\hat{X}_{0j}(k-1) = \sum_{i=1}^{r} \mu_{i|j}(k-1)\,\hat{X}_i(k-1)$  (6)

The mixed covariance estimate of model j is as follows:

$P_{0j}(k-1) = \sum_{i=1}^{r} \mu_{i|j}(k-1)\left[P_i(k-1) + \left(\hat{X}_i(k-1)-\hat{X}_{0j}(k-1)\right)\left(\hat{X}_i(k-1)-\hat{X}_{0j}(k-1)\right)^{T}\right]$  (7)

where $p_{ij}$ is the transition probability from model i to model j and $\mu_j(k-1)$ is the probability of model j at time k-1.

Step 2. Carry out the filtering process. Kalman filtering is performed with $\hat{X}_{0j}(k-1)$, $P_{0j}(k-1)$, and $Z_j(k)$ as input, and the predicted state and filter covariance are updated. The predicted state is as follows:

$\hat{X}_j(k|k-1) = \Phi_j(k, k-1)\,\hat{X}_{0j}(k-1)$  (8)

The prediction error covariance is as follows:

$P_j(k|k-1) = \Phi_j(k, k-1)\,P_{0j}(k-1)\,\Phi_j^{T}(k, k-1) + G_j(k-1)\,Q_j\,G_j^{T}(k-1)$  (9)

The gain is as follows:

$K_j(k) = P_j(k|k-1)\,H^{T}(k)\left[H(k)\,P_j(k|k-1)\,H^{T}(k) + R_j\right]^{-1}$  (10)

The result of filtering is as follows:

$\hat{X}_j(k) = \hat{X}_j(k|k-1) + K_j(k)\left[Z_j(k) - H(k)\,\hat{X}_j(k|k-1)\right]$  (11)

The filter covariance is as follows:

$P_j(k) = \left[I - K_j(k)\,H(k)\right]P_j(k|k-1)$  (12)

Step 3. Update the model probability. The model probability is updated using the likelihood function, and the likelihood function of model j is as follows:

$\Lambda_j(k) = \frac{1}{\sqrt{\left|2\pi S_j(k)\right|}}\exp\left(-\frac{1}{2}\,v_j^{T}(k)\,S_j^{-1}(k)\,v_j(k)\right)$  (13)

In the above equation,

$v_j(k) = Z_j(k) - H(k)\,\hat{X}_j(k|k-1), \quad S_j(k) = H(k)\,P_j(k|k-1)\,H^{T}(k) + R_j$  (14)

Then, the model probability is as follows:

$\mu_j(k) = \frac{1}{c}\,\Lambda_j(k)\,\bar{c}_j$  (15)

where c is the normalization constant:

$c = \sum_{j=1}^{r}\Lambda_j(k)\,\bar{c}_j$  (16)

Step 4. Obtain the output.
According to the model probabilities calculated in the previous step, the results of each filter are weighted and fused to calculate the total state estimate $\hat{X}(k)$ and the total covariance estimate $P(k)$. The total state estimate is as follows:

$\hat{X}(k) = \sum_{j=1}^{r}\mu_j(k)\,\hat{X}_j(k)$  (17)

The total covariance estimate is as follows:

$P(k) = \sum_{j=1}^{r}\mu_j(k)\left[P_j(k) + \left(\hat{X}_j(k)-\hat{X}(k)\right)\left(\hat{X}_j(k)-\hat{X}(k)\right)^{T}\right]$  (18)
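To make the recursion concrete, the following Python sketch implements one IMM cycle following equations (4)-(18), under the simplifying assumptions that all models share the same state dimension and the same measurement matrix H and noise R. It is an illustrative sketch, not the exact implementation used in the experiments.

```python
import numpy as np
from scipy.stats import multivariate_normal

def imm_step(x, P, mu, Pij, Phis, Gs, Qs, H, R, z):
    """One IMM recursion.
    x, P: lists of r per-model state estimates and covariances;
    mu: model probabilities (r,); Pij: Markov transition matrix (r, r);
    Phis/Gs/Qs: per-model system matrices; H, R: shared measurement model; z: measurement."""
    r = len(x)
    # Step 1: input interaction (eqs (4)-(7))
    c_bar = Pij.T @ mu                                    # predicted model probabilities
    mix = (Pij * mu[:, None]) / c_bar[None, :]            # mixing probabilities mu_{i|j}
    x0 = [sum(mix[i, j] * x[i] for i in range(r)) for j in range(r)]
    P0 = []
    for j in range(r):
        Pj = np.zeros_like(P[0])
        for i in range(r):
            d = (x[i] - x0[j])[:, None]
            Pj += mix[i, j] * (P[i] + d @ d.T)
        P0.append(Pj)
    # Step 2: parallel Kalman filtering (eqs (8)-(12)) and likelihoods (eqs (13)-(14))
    x_new, P_new, lik = [], [], np.zeros(r)
    for j in range(r):
        x_pred = Phis[j] @ x0[j]
        P_pred = Phis[j] @ P0[j] @ Phis[j].T + Gs[j] @ Qs[j] @ Gs[j].T
        v = z - H @ x_pred                                # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new.append(x_pred + K @ v)
        P_new.append((np.eye(len(x_pred)) - K @ H) @ P_pred)
        lik[j] = multivariate_normal.pdf(v, mean=np.zeros_like(v), cov=S)
    # Step 3: model probability update (eqs (15)-(16))
    mu_new = lik * c_bar
    mu_new /= mu_new.sum()
    # Step 4: output interaction (eqs (17)-(18))
    x_out = sum(mu_new[j] * x_new[j] for j in range(r))
    P_out = np.zeros_like(P[0])
    for j in range(r):
        d = (x_new[j] - x_out)[:, None]
        P_out += mu_new[j] * (P_new[j] + d @ d.T)
    return x_new, P_new, mu_new, x_out, P_out
```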
3. IABBSCA-IMM Algorithm
In this study, an IABBSCA-IMM target filter tracking algorithm is proposed. First, the Markov transition matrix is dynamically adjusted and limited conditions are added to increase the switching probability to the matching model; then, the bare bones sine cosine algorithm (BBSCA) improved on the basis of the SCA is used to optimize the filtering parameters Q and R to improve the filtering and tracking accuracy; finally, the detailed flow of the IABBSCA-IMM algorithm is given and verified by simulation.
3.1. Dynamic Adjustment of Markov Transition Probability
Analyzing the error compression ratios proposed in literature [9] and [10], it can be seen that the denominator parts are both the Cj defined by equation (4), while the compression ratio defined by the AMP-IMM algorithm of the latter has fewer numerator terms. According to literature [17], for a system containing multiple motion models, the expression of the numerator part is no longer the same, so the error compression ratio of each model is also different, and the ratio of the error compression ratios needs to be redefined; dij denotes the ratio of the error compression ratios for switching from model i to model j. The Markov transition matrix adjusted with dij is given by equation (19), in which both the diagonal and off-diagonal elements are updated. Taking the three models of this study as an example, the ratio of the error compression ratios is given by equation (21).
Comparing the model error compression ratio defined in literature [9], it can be found that equation (21) is a generalization to three or more models, and the Markov transition matrix elements of the multimodel system no longer satisfy the relation p11 = 1 − p12. The necessary conditions for adaptive adjustment in this case are given in equation (22).
For a system that satisfies equation (22), the Markov transition matrix can be adaptively adjusted by equation (19). However, after multiple adaptive adjustments, the main diagonal elements of the Markov transition matrix may no longer occupy a large proportion, and a transition probability pii < 0.5 corresponding to a model with a smaller μi may appear, which contradicts the physical meaning of the Markov transition matrix. pii indicates that the target maintains its current motion mode; if its value is less than 0.5, then that mode is an instantaneous switching state rather than a stable motion mode, and the covariance matrix may also become nonpositive definite. Therefore, this study adds a restriction for this situation and inserts a judgment step into the algorithm iteration: if every main diagonal element of the adjusted Markov transition matrix is greater than 0.6, then the matrix elements are updated; otherwise, they are not updated.
To sum up, the specific steps of the dynamic adjustment of the Markov transition probability are as follows:

Step 1. Complete the input interaction, filtering process, and model probability update according to equations (4)-(16).

Step 2. Determine whether the model switching probability satisfies equation (22); if so, adjust each element in the probability transition matrix according to equation (19).

Step 3. Check the elements of the adjusted Markov transition matrix: if the main diagonal elements are all greater than 0.6, then update the probability transition matrix; otherwise, do not update it.

Step 4. According to equations (17) and (18), weight and fuse the results of each filter, and calculate the total state estimate and the total covariance matrix.
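A minimal Python sketch of this adjustment logic is given below. Since equations (19)-(22) are not reproduced here, the sketch assumes a common multiplicative form, p'_ij ∝ p_ij · d_ij followed by row normalization, purely for illustration; the 0.6 diagonal check mirrors the restriction described above.

```python
import numpy as np

def adjust_transition_matrix(P, d, diag_threshold=0.6):
    """Adaptively adjust the Markov transition matrix P using the ratios of
    error compression ratios d (both r x r arrays).

    The multiplicative update p'_ij = p_ij * d_ij with row normalization is an
    assumed illustrative form, not the paper's exact equation (19)."""
    P_new = P * d
    P_new /= P_new.sum(axis=1, keepdims=True)      # keep each row a probability distribution
    # Restriction: accept the update only if every diagonal element stays above the
    # threshold, so that each model remains a "stable" motion mode.
    if np.all(np.diag(P_new) > diag_threshold):
        return P_new
    return P                                        # otherwise keep the previous matrix
```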
3.2. BBSCA Optimizes Filter Parameters Q and R
Through the dynamic adjustment of the Markov transition probability, the problems of a fixed probability transition matrix and slow adjustment of the model probabilities are solved. However, the IMM algorithm still has the disadvantage of fixed model noise: the noise covariance should reflect the maneuvering intensity of the target, being larger when the maneuvering is stronger and smaller when it is weaker. In this study, the BBSCA, proposed on the basis of the SCA, is used to optimize the filtering parameters Q and R, and the optimized parameters are used as the input of a new round of filtering to optimize the filtering process and improve the target filtering and tracking accuracy.
3.2.1. Basic Sine Cosine Algorithm (SCA)
The SCA is a relatively new swarm intelligence optimization algorithm [18] with high flexibility and fast convergence. The optimization process can be divided into two phases: exploration and exploitation. In the first phase, large random fluctuations are applied to the candidate solutions to quickly locate feasible regions of the search space; in the second phase, the random changes become gradually smaller than in the first phase, so the solutions are refined. The algorithm starts from a set of random solutions, updates them by combining sine and cosine functions with random factors, and continuously approaches the global optimal solution. The update equation is as follows:

$X_i^{t+1} = \begin{cases} X_i^{t} + r_1 \sin(r_2)\left|r_3 P_i^{t} - X_i^{t}\right|, & r_4 < 0.5 \\ X_i^{t} + r_1 \cos(r_2)\left|r_3 P_i^{t} - X_i^{t}\right|, & r_4 \ge 0.5 \end{cases}$  (23)

where $X_i^{t}$ is the position of the current individual in the ith dimension of the search space at the tth iteration; r1 is a linearly decreasing function whose expression is as follows:

$r_1 = a - a\,\frac{t}{T}$  (24)

where a is a constant (taken as 2) and T is the maximum number of iterations; r2 is a random number in [0, 2π]; r3 is a random number in [0, 2]; r4 is a random number in [0, 1]; and $P_i^{t}$ is the ith dimension of the optimal individual position at the tth iteration. The parameter r1 determines the location area (or moving direction) of the next solution and controls whether the algorithm performs global exploration or local exploitation. The parameter r2 determines how far the current solution moves toward or away from the target solution. The parameter r3 is a weight applied to the target solution, whose purpose is to randomly strengthen (r3 > 1) or weaken (r3 < 1) the influence of the target solution when defining the distance to the candidate solution. The function of the parameter r4 is to switch between the sine and cosine components with equal probability.
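As a compact illustration, the following Python sketch implements the standard SCA position update of equations (23) and (24); variable names are illustrative.

```python
import numpy as np

def sca_update(X, best, t, T, a=2.0):
    """One SCA iteration. X: population positions (N, D); best: best position found (D,)."""
    r1 = a - a * t / T                                # eq (24): linearly decreasing control parameter
    N, D = X.shape
    r2 = np.random.uniform(0.0, 2.0 * np.pi, (N, D))
    r3 = np.random.uniform(0.0, 2.0, (N, D))
    r4 = np.random.uniform(0.0, 1.0, (N, D))
    step = np.abs(r3 * best - X)
    sin_move = X + r1 * np.sin(r2) * step             # eq (23), sine branch (r4 < 0.5)
    cos_move = X + r1 * np.cos(r2) * step             # eq (23), cosine branch (r4 >= 0.5)
    return np.where(r4 < 0.5, sin_move, cos_move)
```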
3.2.2. Bare Bones Sine Cosine Algorithm (BBSCA)
In the SCA, the direction of particle movement is determined only by the parameter r1, which can cause the algorithm to converge prematurely and fall into a local optimum. The BBSCA improves the SCA in three aspects [19]. First, the direction control parameter is selected reasonably, and a nonlinear decreasing control strategy based on an exponential function is adopted; the newly proposed update mechanism for r1 is given in equation (25).
The parameter a is a constant, t is the current iteration, and Tmax represents the maximum number of iterations. As r1 changes, the step taken when the position is updated also changes; the position update expression is similar to equation (23) of the SCA, where r2, r3, and r4 are consistent with the standard SCA and r1 is the newly proposed parameter of equation (25). Second, the BBSCA uses the Gaussian bare bones search equation to revise the position update equation and exploits the useful information contained in the global optimal solution to improve the convergence speed of the SCA; in this equation, the jth component of the new position is generated from the jth component of the optimal solution at the tth iteration and that of a randomly selected individual, and random selection is used because it is fast, easy to compute, and maintains diversity. Finally, in order to improve the local search ability, the greedy selection strategy of the bare bones optimization framework is integrated into the optimal solution search equation so that the best fitness value found by an individual is retained in the next generation, where fit denotes the fitness function; this selection reduces the possibility of search stagnation. The overall search equation combines the above mechanisms and uses a random number r5 between 0 and 1 to switch between them.
To sum up, the BBSCA retains the simple principle and ease of use of the SCA. It obtains the useful information contained in the optimal solution through the Gaussian search equation and uses the exponential decrease of r1 and the fitness-based greedy selection strategy to guide the search in a better direction.
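To make this structure concrete, the sketch below combines the three mechanisms described above (an exponentially decreasing r1, a Gaussian bare bones step around the global best, and fitness-based greedy selection) into one iteration. The specific forms of the exponential schedule and the Gaussian sampling are illustrative assumptions following the common bare bones convention, since equations (25)-(29) are not reproduced here; they are not the paper's exact formulas.

```python
import numpy as np

def bbsca_step(X, fitness, best, t, T_max, fit_fn, a=2.0):
    """One illustrative BBSCA iteration.
    X: positions (N, D); fitness: current fitness values (N,);
    best: global best position (D,); fit_fn: objective function."""
    N, D = X.shape
    r1 = a * np.exp(-t / T_max)                        # assumed exponential decrease of r1
    r2 = np.random.uniform(0.0, 2.0 * np.pi, (N, D))
    r3 = np.random.uniform(0.0, 2.0, (N, D))
    r4 = np.random.uniform(0.0, 1.0, (N, D))
    r5 = np.random.uniform(0.0, 1.0, (N, 1))

    # Sine cosine move toward the global best (as in the standard SCA).
    step = np.abs(r3 * best - X)
    sca_move = np.where(r4 < 0.5,
                        X + r1 * np.sin(r2) * step,
                        X + r1 * np.cos(r2) * step)

    # Gaussian bare bones move: sample around the midpoint of the global best and a
    # randomly selected individual (assumed form of the bare bones search equation).
    partner = X[np.random.randint(N, size=N)]
    bones_move = np.random.normal((best + partner) / 2.0, np.abs(best - partner) + 1e-12)

    # r5 switches between the two search equations for each individual.
    X_new = np.where(r5 < 0.5, bones_move, sca_move)

    # Greedy selection: keep the new position only if its fitness improves.
    new_fit = np.array([fit_fn(x) for x in X_new])
    improved = new_fit < fitness
    return np.where(improved[:, None], X_new, X), np.where(improved, new_fit, fitness)
```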
3.2.3. Filter Parameter Optimization
In this study, the BBSCA is used to optimize the parameters Q and R, and the actual mean square error of the innovations is used as the objective function. In this objective function, the innovation sequence is the $v_j(k) = Z_j(k) - H(k)\,\hat{X}_j(k|k-1)$ defined in equation (14), the fitness value of the ith particle is the actual mean square error of the innovations of the filters, and j denotes the filter corresponding to the jth model. Since three motion models are selected, N = 3.
First, the innovation sequence of each filter is calculated to obtain the actual mean square error of the state information, and then the BBSCA is introduced to optimize the filter parameters online. The parameter Q is a diagonal matrix, and R is a constant matrix [20]. The specific optimization process is as follows (a simplified sketch is given after this list):

Step 1. After the adjustment of the Markov transition probability parameters, initialize each filter and the corresponding parameters, and establish the filter model.

Step 2. Set the basic parameters. Since a total of 6 parameters in the 3 models are to be optimized, the problem dimension D is set to 6, the number of individuals N is set to 10, the maximum number of optimization generations itmax is 100, the BBSCA control parameters r1-r5 and the related control equations are initialized, and the parameters Q and R contained in the three filters are optimized group by group.

Step 3. According to the parameter optimization ranges, randomly initialize the position information of the particle swarm in the search area, and limit it to the boundary.

Step 4. Set the fitness function as the actual mean square error of the state information, substitute the parameters of each particle into the algorithm to establish a filter model, and obtain the fitness value of each particle by calculation.

Step 5. Compare the fitness values of the 10 particles in the first group and find the particle with the smallest value; take its value my_bbi1(j) as the optimal Q1 value of the jth-generation result, and record its location my_bbg1(j) as the optimal position of the jth-generation group. If this is the first-generation optimization result, then bbg1(j) is used as the optimal position (i.e., the optimal parameter) of the current group; the same applies to the other groups.

Step 6. Compare the minimum fitness value my_bbi1(j) obtained by the jth generation of particles with the minimum fitness value obtained by the previous generations. If the minimum fitness value of the jth generation is smaller, then replace the values stored in my_bbi1(j-1) and my_bbg1(j-1) with the minimum fitness value and the optimal position of the jth generation; otherwise, leave them unchanged. The same applies to the other groups. After the jth-generation optimization, the group parameter with the smallest fitness value is the optimal position of the current group.

Step 7. Adjust the particle control parameter r1 according to equation (25), and regenerate new random numbers r2-r5 within their ranges.

Step 8. Determine whether the end condition is reached; if not, go to Step 5; otherwise, output the optimal value, and the optimization ends. The end condition of the optimization process is that the number of iterations reaches the predetermined value or the fitness function reaches zero. After the optimization ends, the obtained optimal positions (Q1, R1, Q2, R2, Q3, and R3) are passed to the IMM filter tracking algorithm with dynamically adjusted Markov transition probability, and the newly optimized Q and R are applied to the next state estimation model.
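The following Python sketch outlines this online optimization loop under simplifying assumptions: the fitness of a candidate (Q, R) encoding is taken to be the mean squared innovation obtained by running the corresponding filters over a short window of recent measurements, and the bbsca_step helper from the previous sketch is reused. Names such as run_filter_window are illustrative, not part of the authors' implementation.

```python
import numpy as np

def innovation_mse(params, run_filter_window):
    """Fitness of one particle: mean squared innovation of the filters built from the
    candidate (q1, r1, q2, r2, q3, r3) values (assumed encoding of Q and R)."""
    innovations = run_filter_window(params)    # innovation vectors of the three filters
    return float(np.mean([v @ v for v in innovations]))

def optimize_qr(run_filter_window, lo, hi, n_particles=10, dim=6, it_max=100):
    """BBSCA-based online optimization of the filter parameters Q and R
    (boundary handling and per-group bookkeeping omitted for brevity)."""
    fit_fn = lambda x: innovation_mse(x, run_filter_window)
    X = np.random.uniform(lo, hi, size=(n_particles, dim))    # Step 3: random initialization
    fit = np.array([fit_fn(x) for x in X])                    # Step 4: fitness evaluation
    for t in range(it_max):                                   # Steps 5-8: iterate to the end condition
        best = X[np.argmin(fit)]
        X, fit = bbsca_step(X, fit, best, t, it_max, fit_fn=fit_fn)
        if fit.min() == 0.0:
            break
    return X[np.argmin(fit)]     # optimal (Q1, R1, Q2, R2, Q3, R3) passed back to the IMM filters
```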
3.3. IABBSCA-IMM Algorithm Flow
Figure 1 shows the schematic flowchart of the IABBSCA-IMM algorithm. First, the initial Q and R values are set, the filter models are initialized, the state estimation model is established, and the input interaction, time update, and measurement update are performed. Second, the state update and covariance update are completed according to the new measurement, the model probabilities are calculated, the transition probability matrix is dynamically adjusted, and the weighted fusion output of the filtering results is completed. Then, the fitness function is calculated, and the BBSCA is used to optimize the filtering parameters, yielding the new filtering parameters Q′ and R′ for the next round. Finally, the next moment is filtered and tracked with the new filtering parameters and transition probability matrix, and the loop continues until the algorithm ends.
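A compact, high-level sketch of this loop, reusing the imm_step, adjust_transition_matrix, and optimize_qr helpers defined above, could look as follows. The models container, its error_compression_ratios helper (standing in for equations (20)-(21)), and the measurement stream are assumed for illustration; this is not the authors' exact implementation.

```python
def iabbsca_imm_track(measurements, models, P_markov, x0, P0, mu0, qr_bounds):
    """Illustrative IABBSCA-IMM tracking loop (hypothetical helper objects)."""
    x, P, mu = x0, P0, mu0
    estimates = []
    for z in measurements:
        # 1) IMM cycle with the current transition matrix and noise parameters.
        x, P, mu, x_fused, P_fused = imm_step(
            x, P, mu, P_markov,
            models.Phis, models.Gs, models.Qs, models.H, models.R, z)
        estimates.append(x_fused)
        # 2) Dynamically adjust the Markov transition matrix (restricted by the 0.6 check).
        d = models.error_compression_ratios(mu, z)     # assumed helper for eqs (20)-(21)
        P_markov = adjust_transition_matrix(P_markov, d)
        # 3) Optimize Q and R online with the BBSCA and feed them into the next cycle.
        best_qr = optimize_qr(models.run_filter_window, *qr_bounds)
        models.set_noise_parameters(best_qr)
    return estimates
```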

4. Experimental Simulation and Process Analysis
Experiments were performed using MATLAB R2018b on an Intel(R) Core(TM) i5-1135G7 CPU @2.40 GHz with 16.00 GB RAM under Windows 10. The target maneuvers in a two-dimensional plane, and the state variables represent the position, velocity, and acceleration components of the target in the x and y directions; the initial state of the target is set accordingly. The observations are the position and velocity information of the target. The sampling interval is T = 0.1 s, and the simulation time is t = 43 s. The target performs uniform linear motion between 0 and 10 seconds with vx = 0 m/s and vy = −5 m/s; uniformly accelerated turning motion between 10 and 14 seconds with constant accelerations ax = 0.25 m/s2 and ay = 1 m/s2, ending with vx = 1 m/s and vy = −1 m/s; uniform turning motion between 14 and 34 seconds, starting from the end state of the previous phase; uniformly accelerated motion between 34 and 38 seconds, with constant accelerations applied in the x and y directions so that the final velocities are vx = 0 m/s and vy = 8 m/s; and uniform linear motion for the rest of the time. The system noise and observation noise are independent of each other. The process noise covariance matrices of the CV, CA, and CT models and the measurement noise covariance matrices of the three models (in m3/s6) are set separately for each model. The target filter tracking algorithm uses the three models CV (uniform motion), CA (uniform acceleration motion), and CT (uniform turning motion) for tracking. The initial probability of each model and the initial Markov probability transition matrix are specified before filtering.
First, the error compression ratio is introduced and the Markov transition probability is dynamically adjusted. Then, the filter parameters Q and R are optimized: the problem dimension D is set to 6, the number of individuals N is 10, and the maximum number of optimization generations itmax is 100. The SCA and BBSCA are used for parameter optimization. In this experiment, filter tracking of the state information starts from the 2nd second (i.e., the 20th sampling point). Since parameter optimization is performed for each filtering step, the convergence curves of the fitness function for the first filtering can be obtained as shown in Figures 2(a) and 2(b), where the horizontal axis is the number of iterations and the vertical axis is the fitness value, that is, the actual mean square error of the innovation.
By optimizing with the standard sine cosine algorithm (SCA) and the bare bones sine cosine algorithm (BBSCA), respectively, new filter parameters Q and R are obtained for each of the CV, CA, and CT motion models. Using them for filter tracking at the next moment can further improve the filtering accuracy. The time consumed by the two algorithms and the optimized parameters of each motion model are listed in Table 1.
Figure 3 compares the optimization results and time consumption of the two algorithms. The time consumed by the two algorithms is basically the same, but the mean square error achieved by the BBSCA is smaller, its optimization results for the filtering parameters are significantly better than those of the standard SCA, and its optimization accuracy is higher. Therefore, combining the bare bones sine cosine algorithm with the dynamic adjustment of the Markov transition probability to form the IABBSCA-IMM filter tracking model is effective and feasible; its convergence is better, its optimization accuracy is higher, and its stability and robustness are stronger.

After parameter adjustment, 50 Monte Carlo simulations are performed for the IABBSCA-IMM, IMM, AMP-IMM, and IASCA-IMM algorithms. The AMP-IFIMM algorithm of literature [9] is not considered, since it is only applicable to two-model systems. The root mean square error (RMSE) is used to evaluate the algorithm performance. The results of filter tracking of the target motion trajectory are shown in Figure 4(a), where the X axis is the lateral displacement, the Y axis is the longitudinal displacement, and the vertical axis t denotes time; Figure 4(b) shows a local zoom-in.
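For reference, the RMSE for a single state component is commonly computed over the Monte Carlo runs as sketched below (a standard definition; variable names are illustrative).

```python
import numpy as np

def rmse_over_runs(estimates, truth):
    """Root mean square error per time step over Monte Carlo runs.
    estimates: array (runs, steps) with the estimated component (e.g., x position);
    truth: array (steps,) with the true value of that component."""
    err = estimates - truth[None, :]
    return np.sqrt(np.mean(err ** 2, axis=0))   # RMSE at each time step, averaged over runs
```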
It can be seen from Figure 4 that in the process of filtering and tracking the target state information, the trajectory of the IABBSCA-IMM algorithm is closer to the true value. The IASCA-IMM algorithm, the AMP-IMM algorithm, and the traditional IMM algorithm are less effective.
In the process of filter tracking by the IABBSCA-IMM algorithm, the three motion models of CV, CA, and CT are switched automatically. Under the conditions set in this experiment, the model probability change curves are shown in Figure 5.

The probability of the CV model prevails between 0 and 10 seconds, which corresponds to the target's uniform linear motion. The probability of the CA model prevails between 10 and 14 seconds, which corresponds to the target's uniformly accelerated turning motion. The probability of the CT model prevails between 14 and 34 seconds, which corresponds to the target's uniform turning motion. The probability of the CA model prevails between 34 and 38 seconds, which corresponds to the target's uniformly accelerated motion. The probability of the CV model prevails for the rest of the time, corresponding to the target's uniform linear motion. This process is consistent with the experimental setting, which indicates that the model probability curves switch correctly: the matching model has the dominant probability in each specific motion mode, and the switching is rapid.
The variation curves of position and velocity RMSE with time for each algorithm in X and Y directions during filter tracking are shown in Figure 6.
The mean values of the specific RMSE values for each algorithm in the process of target filtering tracking are listed in Table 2.
It can be seen that compared with the traditional IMM algorithm, the IABBSCA-IMM algorithm proposed in this study reduces the position RMSE by 19.36% and the velocity RMSE by 30.52% in the X direction and reduces the position RMSE by 37.61% and the velocity RMSE by 35.99% in the Y direction. Compared with the AMP-IMM algorithm and the IASCA-IMM algorithm, the RMSE of this method is also smaller and the accuracy is better.
5. Conclusion
After the dynamic adjustment of the Markov transition probability and the optimization of the filtering parameters, the IABBSCA-IMM algorithm proposed in this study speeds up the switching to the matching model and improves the filtering and tracking accuracy. It can be applied to automatic driving, collision avoidance warning, traffic monitoring, and other fields. The main conclusions are as follows:

(1) By dynamically adjusting the Markov probability transition matrix and adding constraints, the switching probability to the matching model is increased, the influence of nonmatching models is reduced, the speed of model probability switching is accelerated, and fast and accurate switching of the system motion model is realized.

(2) The improved bare bones sine cosine algorithm (BBSCA) is used to optimize the filtering parameters Q and R, and they are used as new inputs to optimize the next round of the filtering process, which improves the filtering and tracking accuracy of the target.

(3) Compared with the IMM algorithm, the AMP-IMM algorithm, and the IASCA-IMM algorithm, the position and velocity RMSE of the IABBSCA-IMM algorithm proposed in this study are reduced by 19.36%-30.52% in the X direction and by 35.99%-37.61% in the Y direction, and the accuracy is better.
In addition, with the continuous maturation and improvement of intelligent algorithms such as the vulture search algorithm, the artificial neural network algorithm, and the ant colony flower algorithm, these are expected to be combined with the interactive multimodel algorithm to further improve the tracking accuracy.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the Program of National Natural Science Foundation of China (no. 61871318), the Key Scientific Research Program of Shaanxi Provincial Education Department (no. 20JY046), the Key Research Program of Shaanxi Province (no. 2021GY-259), and the Key Science and Technology Program of Xi'an City (no. 21XJZZ0044).