Abstract

There is often an interest in knowing evolving variables that are not directly observable; this is the case in aerospace, engineering control, medical imaging, and data assimilation. What is at hand, though, are time-varying measured data, a model connecting the data to the variables of interest, and a model of how the variables evolve over time. However, both models are only approximations, and the observed data are tainted with noise. This is an ill-posed inverse problem. Methods such as the Kalman filter (KF) have been devised to extract the time-varying quantities of interest. Applied to this inverse problem, however, these methods are computationally slow since they require large matrix-matrix multiplications and even matrix inversions. Furthermore, these methods are usually not well suited to imposing constraints. This article introduces a new iterative filtering algorithm based on alternating projections. Experiments were run with simulated moving projectiles, and the results were compared with those obtained using KF. The new optimization algorithm proves to be slightly more accurate than KF, but, more to the point, it is much faster in terms of CPU time.

1. Introduction

Tracking moving objects or substances is a common optimization and control problem [1] in various fields such as aerospace [2, 3], medical imaging [4–6], data assimilation [7–9], and video surveillance [10–12]. Methods such as the Kalman filter have been devised to solve this problem. However, for large systems, these methods come at the price of slow computation, since they involve matrix-matrix multiplications and even matrix inversions. Remedies, such as the Expectation Maximization Filter (EMF) algorithm, have been proposed, including for the case when nonnegative results are desired [13–16]. Yet another remedy is introduced here.

The novel filtering algorithm presented in this article aims, as does KF, to find an estimate $\hat{x}_k$, $k = 1, \dots, T$, of the unknown $x_k$ of the problem modeled by the two linear state-space equations
$$x_k = A_k x_{k-1} + w_k, \qquad z_k = H_k x_k + v_k.$$
The error vector $w_k$ has $E(w_k) = 0$ and $\operatorname{cov}(w_k) = Q_k$; the latter is the covariance of the error in modeling the transition from $x_{k-1}$ to $x_k$. $0$ and $R_k$ are the mean and covariance, respectively, of the noise vector $v_k$. The assumption is that the error and noise are white. Throughout, $I$ denotes the identity matrix with order known from context; $\operatorname{diag}(a)$ denotes the square matrix that has the entries $a_j$ in its main diagonal and $0$ otherwise. Both matrices $Q_k$ and $R_k$ are assumed to be diagonal. The new algorithm is then numerically tested to solve a reconstruction problem arising in object tracking.
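
To fix ideas, here is a minimal simulation sketch of these two equations; NumPy and all sizes and matrices below are illustrative assumptions, not values taken from this article.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and system matrices, for illustration only.
n, m, T = 6, 2, 50                                # unknowns, detectors, time steps
A = np.eye(n)                                     # transition matrix A_k (constant here)
H = np.hstack([np.eye(m), np.zeros((m, n - m))])  # observation matrix H_k
Q = 0.01 * np.eye(n)                              # diagonal covariance of the error w_k
R = 0.10 * np.eye(m)                              # diagonal covariance of the noise v_k

x = np.zeros(n)                                   # initial state x_0
for k in range(1, T + 1):
    w = rng.multivariate_normal(np.zeros(n), Q)   # white error, E(w_k) = 0, cov Q_k
    v = rng.multivariate_normal(np.zeros(m), R)   # white noise, E(v_k) = 0, cov R_k
    x = A @ x + w                                 # evolution equation x_k = A_k x_{k-1} + w_k
    z = H @ x + v                                 # observation equation z_k = H_k x_k + v_k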

Tracking a moving object (or moving objects) with large unknown variables and few given data, as in aerospace, medical imaging, or data assimilation, is an ill-posed inverse problem. This ill-posedness is further inflated by physical degradation of the acquired data causing noise. Filtering algorithms, as the one presented in this article, are therefore very suitable tools.

The remainder of the paper is organized as follows. Section 2 describes the stochastic modeling of the state evolution and of the projection in space. The state evolution and observation linear models are the basis for applying the new method. Section 3 then reviews KF, and Section 4 derives the new filtering algorithm. The application of the new filter to tracking moving objects is covered in Section 5. Section 6 details the simulations and numerical experiments of tracking moving objects using both KF and the new filter; these corroborate the effectiveness of the new algorithm in terms of convergence and CPU time. Section 7 closes the paper by summarizing the findings.

2. Problem Formulation

2.1. A Stochastic Model of Moving Objects

A new filtering algorithm is introduced. Without loss of generality, it is illustrated with an example of tracking moving objects. It could be applied to any instance where KF was and is used, as long as the problem is modeled by the two linear state-space equations (3) and (9). For instance, the number of photons in [4, 6] is the unknown of both these equations.

To illustrate the new algorithm, tracking moving objects is studied here, where detectors, such as radars or cameras, measure their positions over time. The goal is to estimate their positions, velocities, and/or accelerations, all at once, given a few noisy measurements of their positions. The Kalman filter can average out the noise of the measured data and gives somewhat smooth tracks of these objects as they move through space over time [11, 17]. Filtering is a widely used method in engineering, where a filtering algorithm, such as KF, reduces the noise from signals while keeping the useful information. As in the case of KF, both a process/evolution model and a measurement equation are used to estimate the unknowns. Note that the unknown vectors of both state-space equations (3) and (9) in this illustrating example include the positions, velocities, and accelerations of the moving objects.

Velocity and acceleration can easily be derived from position. One could be interested in finding only the positions and could derive the velocities and accelerations therefrom; however, having velocities and accelerations in the state helps to build the state evolution of (3). In addition, having more than the positions in the unknown adds more components, thereby increasing the size of the problem used to illustrate the algorithm.

Detectors are exploited to register the positions of the moving objects. The problem is then modeled as follows. Let $k = 1, \dots, T$ be an index of a sequence of acquisition times, $n$ the total number of unknowns, and $m$ the total number of detectors during the time $k$. The variables $x_k \in \mathbb{R}^n$ and $z_k \in \mathbb{R}^m$ denote the state vector containing the unknowns of the moving objects and the measured data during the time $k$, respectively. The unknowns include the 2D or 3D components of positions, velocities, and/or accelerations. The entry $x_k(j)$ in the vector $x_k$ is a component of either the position, velocity, or acceleration during the time $k$. The observations $z_k$ are random vectors. The entry $z_k(i)$ in the vector $z_k$ measures the component of position of a moving object registered at the $i$th detector during the time $k$. Furthermore, each observation $z_k$ depends on $x_k$ only. The sequence $(x_k)$ satisfies the Markov property with unknown time-varying transition matrix $A_k$. That is to say, the present state depends only on the previous one according to the following linear equation:
$$x_k = A_k x_{k-1} + w_k, \quad (3)$$
where $w_k$ is the error random vector in modeling the transition from $x_{k-1}$ to $x_k$, with $E(w_k)$ equal to zero and covariance matrix $Q_k$.

2.2. Example

To make sense of this linear evolution model, consider the following basic 2D example with a constant time increment $\Delta t$. Assume that the state of a moving object is the vector $x = (p_x, p_y, v_x, v_y, a_x, a_y)^T$, where $p_x$ and $p_y$ are the horizontal and vertical components of position, respectively, with corresponding velocity components $v_x$ and $v_y$ and acceleration components $a_x$ and $a_y$. That is, $p = (p_x, p_y)^T$, $v = (v_x, v_y)^T$, and $a = (a_x, a_y)^T$. The basic kinematic equations with constant acceleration are
$$p_k = p_{k-1} + v_{k-1}\,\Delta t + \tfrac{1}{2}\, a_{k-1}\,\Delta t^2, \qquad v_k = v_{k-1} + a_{k-1}\,\Delta t, \qquad a_k = a_{k-1}. \quad (5)\text{-}(7)$$
Equations (5)-(7) can be written in the matrix form $x_k = A\, x_{k-1}$ with
$$A = \begin{pmatrix} 1 & 0 & \Delta t & 0 & \tfrac{1}{2}\Delta t^2 & 0 \\ 0 & 1 & 0 & \Delta t & 0 & \tfrac{1}{2}\Delta t^2 \\ 0 & 0 & 1 & 0 & \Delta t & 0 \\ 0 & 0 & 0 & 1 & 0 & \Delta t \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}. \quad (8)$$
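
As a quick check of (8), the following sketch builds this transition matrix and advances the state one step; the value of $\Delta t$ and the initial state are arbitrary.

import numpy as np

dt = 0.1  # arbitrary constant time increment

# Transition matrix of (8) for the state (px, py, vx, vy, ax, ay).
A = np.array([
    [1, 0, dt,  0, dt**2 / 2, 0],
    [0, 1,  0, dt, 0, dt**2 / 2],
    [0, 0,  1,  0, dt, 0],
    [0, 0,  0,  1,  0, dt],
    [0, 0,  0,  0,  1,  0],
    [0, 0,  0,  0,  0,  1],
])

x = np.array([0.0, 0.0, 1.0, 2.0, 0.5, -0.5])  # arbitrary initial state
x_next = A @ x  # reproduces (5)-(7): p + v*dt + a*dt^2/2, v + a*dt, a unchanged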

2.3. Assumptions

Let $h_{ij}^k$ be the quantity that links the unknown in position $j$ during the acquisition time $k$ to detector $i$. The time-varying matrix $H_k$ defined by $H_k = (h_{ij}^k)$ is called the projection or observation matrix. This matrix accounts for the relationship between the measurement vector $z_k$ and the state vector $x_k$. The observations/measurements and state vectors are related by the linear equation
$$z_k = H_k x_k + v_k, \quad (9)$$
where $0$ and $R_k$ are the mean and covariance, respectively, of the noise vector $v_k$.

For instance, consider example 2.2, where the state vector is of size 6. Two noisy measurements are known, namely, the horizontal and vertical components of the position. Then the matrix $H_k$ is
$$H_k = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
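
In code, this observation matrix simply extracts the two position components of the state; the numerical state below is arbitrary.

import numpy as np

H = np.array([[1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0]], dtype=float)  # 2 x 6 observation matrix

x = np.array([3.0, 4.0, 1.0, 2.0, 0.5, -0.5])    # (px, py, vx, vy, ax, ay)
print(H @ x)                                     # [3. 4.]: noiseless position data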

Note that this is just an illustrating example with very small sizes; $x_k$, $z_k$, and $H_k$ have sizes $6 \times 1$, $2 \times 1$, and $2 \times 6$, respectively. In practice, the sizes are in the hundreds, thousands, or even more.

The optimal $x_k$ is to be found given the data $z_k$, the evolution matrix $A_k$, and the observation system matrix $H_k$ at each time step $k$. Equations (3) and (9) are the state-space form of a particular case of a more general filtering problem [18, 19]. The actual model is a linear dynamic system for which the analytic filtering solution is given by the KF [20].

The covariance matrices $Q_k$ and $R_k$ being diagonal and the error as well as the noise being white are the three assumptions adopted in this paper. In case one or more of these assumptions are not satisfied, a prewhitening process [13] can be used to bring a general setting to the one of this paper.

3. Filtering and Reconstruction

The index $k$ is dropped whenever convenient. For instance, at each time instant $k$, the state vector $x_k$, the data vector $z_k$, and the matrix $H_k$ are referred to as just $x$, $z$, and $H$, respectively. The moving objects problem amounts to finding a solution of the linear equation
$$z = Hx + v,$$
where $z$ and $H$ are the given observation data vector and the observation system matrix, respectively. The vector $v$ represents the additive noise in recording the data $z$. A new method, based on alternating projections over convex sets and compared to KF, is introduced, but, first, a review of the KF follows.

3.1. Kalman Filter

The KF update [13, 20] is the following. Given an unbiased estimate $\hat{x}_{k-1}$ of the state vector $x_{k-1}$, the prior estimate of $x_k$ based solely on the dynamics is
$$\bar{x}_k = A_k\, \hat{x}_{k-1}. \quad (12)$$
The estimate $\hat{x}_k$ will have the form (refer, for instance, to [6]) $\hat{x}_k = \bar{x}_k + K_k\,(z_k - H_k \bar{x}_k)$, with
$$\bar{P}_k = A_k\, P_{k-1}\, A_k^T + Q_k, \quad (13)$$
$$K_k = \bar{P}_k\, H_k^T \big(H_k \bar{P}_k H_k^T + R_k\big)^{-1}, \qquad P_k = (I - K_k H_k)\,\bar{P}_k. \quad (14)$$
$P_{k-1}$ and $\bar{P}_k$ in (13) are the covariances of the estimated state at times $k-1$ and $k$, respectively. As mentioned before, the sizes of the matrices involved in (13) and (14) are of the order of hundreds, thousands, or even more. Therefore, a major drawback of KF is the matrix-matrix multiplications involved in (13) and (14) and the matrix inversion involved in (14). Attempts have been made to rectify these two shortcomings; see, for instance, [18, 19] for more details. The goal of the new filter is manifold, but without imposing nonnegativity as in [13, 14]. It is intended to be a substitute algorithm for KF that filters out the errors from modeling the dynamical system and the noise from the data. It will ensure temporal regularization and will be an optimal recursive estimate. In addition, it does not require the storage of past measurement data, does not use matrix-matrix multiplications, and does not necessitate any matrix inversion. Furthermore, it does not need to calculate, update, or store any covariance matrix. The aim is then to keep the properties of KF while ameliorating it by requesting a few more. Each recursive step in the new approach is an iterative operation that involves only matrix-vector multiplication. This should then handle problems with a huge number of variables, such as is the case in aerospace.
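
For reference, a minimal textbook sketch of one KF recursion follows; this is not the authors' code, and all names are illustrative. It makes the matrix-matrix products in (13) and the inversion in (14) explicit.

import numpy as np

def kalman_step(x_hat, P, z, A, H, Q, R):
    # Prediction based solely on the dynamics, (12)-(13).
    x_bar = A @ x_hat
    P_bar = A @ P @ A.T + Q                    # matrix-matrix multiplications
    # Measurement update via the gain, (14).
    S = H @ P_bar @ H.T + R
    K = P_bar @ H.T @ np.linalg.inv(S)         # matrix inversion
    x_hat_new = x_bar + K @ (z - H @ x_bar)
    P_new = (np.eye(len(x_bar)) - K @ H) @ P_bar
    return x_hat_new, P_new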

4. Alternating Projections Minimization

The approach here to solving both linear state-space equations (3) and (9) is to first solve, in this section, a regularization problem (16) that involves a regularization parameter $\alpha$. Once the scheme for solving problem (16) is developed, both (3) and (9) are put together, at each time step $k$, into one regularization problem (30) similar to problem (16), using a regularization parameter $\alpha_k$ updated at each time step $k$. As for the variable $p$ of problem (16), it is substituted by the predicted state vector $\bar{x}_k = A_k \hat{x}_{k-1}$ of (3) and (12). This process of solving (3) and (9) using the scheme developed in this section is carried out in Section 5. Thus, a regularization problem is introduced and solved next.

The authors in [13, 14, 21] studied a regularization problem. This paper considers the following similar problem, for $0 < \alpha < 1$, using the Euclidean distance instead:
$$\min_{x \in \mathbb{R}^n} \; F(x) = \alpha\, \|x - p\|_2^2 + (1 - \alpha)\, \|z - Hx\|_2^2, \quad (16)$$
where $p$ is a given prior vector. The new filtering algorithm takes care of both the underdetermined and overdetermined cases. Moreover, an iterative method, within the framework of alternating projections, is developed.

First, two convex sets of $mn$-dimensional vectors are defined, similar to what is done in [21–24], as
$$\mathcal{R} = \Big\{ r \in \mathbb{R}^{mn} : \sum_{j=1}^{n} r_{ij} = z_i, \ i = 1, \dots, m \Big\}, \qquad \mathcal{Q} = \big\{ q(x) \in \mathbb{R}^{mn} : q(x)_{ij} = h_{ij}\, x_j, \ x \in \mathbb{R}^n \big\}. \quad (17)$$
The sets $\mathcal{R}$ and $\mathcal{Q}$ are nonempty, closed, and convex in $\mathbb{R}^{mn}$. Recall the function $F$ in (16); the auxiliary function $G$ is also defined as
$$G(r, x) = \alpha\, \|x - p\|_2^2 + (1 - \alpha) \sum_{i=1}^{m} \sum_{j=1}^{n} \big(r_{ij} - h_{ij}\, x_j\big)^2. \quad (18)$$
Expand the rightmost side of expression (18) to obtain the coordinatewise form (19) used below. The alternating projections minimization method works as follows. First start with an initial guess $x^0$. Then, having obtained the $t$th iterate $x^t$, set $q^t = q(x^t)$. Afterwards perform the following two steps:

Step 1. Minimize $G(r, x^t)$ with respect to $r$ to get $r^{t+1}$, which is in the convex set $\mathcal{R}$.

Step 2. Minimize $G(r^{t+1}, x)$ with respect to $x$ to get $x^{t+1}$. The corresponding $q(x^{t+1})$ is in the convex set $\mathcal{Q}$.
The sequence $(x^t)$ converges to a limit $x^\infty$, which is the estimate we are looking for. This iterative scheme is applied to the problem at hand while performing Step 1 using (19): the minimizer over $r \in \mathcal{R}$ redistributes each data residual $z_i - (Hx^t)_i$, which leads to the update expression
$$r_{ij}^{t+1} = h_{ij}\, x_j^t + \frac{1}{n}\big(z_i - (Hx^t)_i\big). \quad (22)$$
Now, Step 2 is performed using (19): setting the gradient of $G(r^{t+1}, x)$ with respect to $x$ to zero gives
$$x_j \Big(\alpha + (1 - \alpha)\sum_i h_{ij}^2\Big) = \alpha\, p_j + (1 - \alpha)\sum_i h_{ij}\, r_{ij}^{t+1}.$$
This leads to the update formula
$$x_j^{t+1} = \frac{\alpha\, p_j + (1 - \alpha)\sum_i h_{ij}\, r_{ij}^{t+1}}{\alpha + (1 - \alpha)\sum_i h_{ij}^2}. \quad (26)$$
Both update formulas (22) and (26) can be combined into only one:
$$x_j^{t+1} = \frac{\alpha\, p_j + (1 - \alpha)\Big[c_j\, x_j^t + \frac{1}{n}\sum_i h_{ij}\big(z_i - (Hx^t)_i\big)\Big]}{\alpha + (1 - \alpha)\, c_j}, \quad (27)$$
where $c_j = \sum_i h_{ij}^2$. Starting with a guess $x^0$, the sequence $(x^t)$ converges to a limit $x^\infty$, which is the desired estimate $\hat{x}$. Let us apply the above update formula (27) to the tracking moving objects problem.
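
The sketch below transcribes the combined update (27), as reconstructed above, into NumPy; the function name apm_update and the fixed iteration count are assumptions.

import numpy as np

def apm_update(x, p, z, H, alpha, n_iter=100):
    # Combined alternating projections update (27): a ratio of two convex
    # combinations; only matrix-vector products are used.
    n = H.shape[1]
    c = (H ** 2).sum(axis=0)              # c_j = sum_i h_ij^2 (assumed nonzero)
    denom = alpha + (1 - alpha) * c       # denominator of (27)
    for _ in range(n_iter):
        residual = z - H @ x
        x = (alpha * p + (1 - alpha) * (c * x + (H.T @ residual) / n)) / denom
    return x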

5. Application to Moving Objects

The goal here is to solve both linear state-space equations (3) and (9) by using the findings of Section 4. It goes as follows. The error and noise covariance matrices in the tracking moving objects problem are modeled as diagonal matrices with nonnegative entries; refer to Section 2.1. That is,
$$Q_k = \sigma_k^2\, I, \quad (28)$$
$$R_k = \rho_k^2\, I, \quad (29)$$
where $I$ is the identity matrix, its dimension known from context, and $\sigma_k^2, \rho_k^2 \ge 0$. In case $\sigma_k^2 \neq \rho_k^2$, it suffices, for instance, to multiply the matrix $H_k$ and the data $z_k$ by $\sigma_k/\rho_k$ and use $\sigma_k^2 I$ instead of $\rho_k^2 I$. In case $\rho_k = 0$ in expression (29), a small $\epsilon > 0$ is used instead of $\rho_k$.

Recall that the aim is to find an estimate $\hat{x}_k$, $k = 1, \dots, T$, of the unknown $x_k$ of the problem modeled by the two linear state-space equations (3) and (9).

The functional to be minimized at each recursion step $k$ is
$$F_k(x) = \alpha_k\, \|x - \bar{x}_k\|_2^2 + (1 - \alpha_k)\, \|z_k - H_k x\|_2^2, \quad (30)$$
where $\alpha_k$ is the temporal regularization parameter, set according to the relative trust placed in the two state-space equations (3) and (9). Note that since $0 < \alpha_k < 1$, (30) is the same functional to be minimized as in (16).

It suffices now to use the change of variables given in step 2 of the Alternating Projections Filter (APF) algorithm 5.1 that follows, and to recall that the predicted state vector is $\bar{x}_k = A_k \hat{x}_{k-1}$. The iterative method, based on alternating projections and given by the update formulas (22) and (26), or the combined formula (27), is used at each time step $k$. The clustering point $x^\infty$ will be the estimated state $\hat{x}_k$ that is solved for using (3) and (9). Thus the following APF algorithm is obtained.

5.1. Alternating Projections Filter Algorithm

(1) To solve the linear state-space problem given by (3) and (9), start with an initial estimate $\hat{x}_0$ and set $k = 1$. For $k = 1, \dots, T$ execute the following steps.
(2) Assume the recursive steps up to time $k - 1$ are done; carry out the change of variables $p = \bar{x}_k = A_k \hat{x}_{k-1}$.
(3) To get $\hat{x}_k$, start with $x^0 = \bar{x}_k$.
(4) Make $c_j = \sum_i h_{ij}^2$ for $j = 1, \dots, n$.
(5) Do
$$r_{ij}^{t+1} = h_{ij}\, x_j^t + \frac{1}{n}\big(z_i - (Hx^t)_i\big). \quad (32)$$
(6) Compute
$$x_j^{t+1} = \frac{\alpha_k\, \bar{x}_k(j) + (1 - \alpha_k)\sum_i h_{ij}\, r_{ij}^{t+1}}{\alpha_k + (1 - \alpha_k)\, c_j}. \quad (33)$$
(7) The update formula for the next estimate is $\hat{x}_k = x^\infty$, where $x^\infty$ is the cluster point of the sequence $(x^t)$ generated by repeating steps 5 and 6.
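
A compact sketch of the APF algorithm 5.1 follows, reusing the hypothetical apm_update function of Section 4; the sequence containers and the per-step iteration count are assumptions.

import numpy as np

def apf(z_seq, A_seq, H_seq, alpha, x0, n_iter=100):
    # Alternating Projections Filter: one apm_update call per time step k;
    # no covariance matrix is computed, updated, or stored.
    x_hat = np.asarray(x0, dtype=float)
    estimates = []
    for z, A, H in zip(z_seq, A_seq, H_seq):
        x_bar = A @ x_hat                                      # step 2: predicted state
        x_hat = apm_update(x_bar, x_bar, z, H, alpha, n_iter)  # steps 3-6
        estimates.append(x_hat)                                # step 7: cluster point
    return estimates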

Observe that the two steps 5 and 6 could be combined into one step, as was done before in (27). Note how the iterate in (33) is formed as a ratio of two convex combinations. The numerator is a convex combination of the predicted $\bar{x}_k$, associated with the same coefficient $\alpha_k$ as in problem (16) and relying only on the evolution model, and of the calculated APF iterate, associated with the coefficient $1 - \alpha_k$ as in problem (16) as well and relying only on the observation model. The denominator is also a convex combination; it is a weighting factor for the updated vector estimate. The APF does not involve any matrix-matrix multiplication or any matrix inversion. It is iterative, involves only matrix-vector multiplication, and does not need to calculate, update, or store any covariance matrix.

The temporal regularization parameter $\alpha_k$ is well defined and takes values between 0 and 1. If $\alpha_k = 1$ at each time step $k$, then $x_j^{t+1} = \bar{x}_k(j)$ in (33), so that $\hat{x}_k = \bar{x}_k$. That is, the observations are discarded completely to the extent that only the evolution model is relied on. This defeats the purpose of using the full power of the detectors. When $\alpha_k = 0$ at each time step $k$, the predicted $\bar{x}_k$ of step 2 is not needed in step 6. Indeed, using (33),
$$x_j^{t+1} = x_j^t + \frac{1}{n\, c_j} \sum_i h_{ij}\big(z_i - (Hx^t)_i\big). \quad (34)$$
Thus, with the above update (34), the Landweber iteration [25] is retrieved in the static case. Surely, choosing $\alpha_k = 0$ means that $\sigma_k^2$ is very large, or simply that the covariance matrix $Q_k$ in the transition equation (3) is very huge, which implies that there is no confidence at all in the evolution model. In other words, the evolutionary state of the variable is discarded. Only the observations are meaningful in finding the estimate $\hat{x}_k$, which is then a stationary state, as it should be.
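
As a sanity check of this limiting case, again with the hypothetical apm_update sketch above, the $\alpha_k = 0$ iteration recovers the solution of a consistent overdetermined system from the observations alone, as a Landweber-type iteration should.

import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(20, 6))
x_true = rng.normal(size=6)
z = H @ x_true                        # consistent, noise-free observations

# alpha = 0: the prediction is ignored and (33) reduces to (34).
x = apm_update(np.zeros(6), np.zeros(6), z, H, alpha=0.0, n_iter=20000)
print(np.allclose(x, x_true))         # True: the data alone determine the estimate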

6. Numerical Experiment

The simulated data used to support the findings of this study are described as follows. The APF algorithm 5.1, presented here in the linear case, is validated with moving objects while modifying the setting of example 2.2 to obtain a fairly large size problem. To achieve this goal, the APF algorithm is applied to an example where different moving objects start at the same position with the same velocity, which varies over time.

The experiment is done with different large values of $n$, the number of unknowns in the state vector $x_k$, to compare results to KF for accuracy and efficiency. Indeed, the goals of this comparison are twofold. First, it makes sure that the APF algorithm gives results similar to those of KF; this serves to compare the quality of the outputs of both algorithms for accuracy. Second, it compares the CPU running times of both APF and KF for efficiency. The KF algorithm is used here in its conventional/classical formulation. Furthermore, the matrices involved in this illustrative example are quite sparse. Both algorithms, APF and KF, as they are used here, do not exploit this sparsity property to decrease the running times. Thus, this approach shows and compares the capacity of both algorithms in a more general case, including when matrices might not be sparse.

The time step $\Delta t$ is set and the test is done with different values of $n$. The simulated camera detectors measure only the positions with some error, but not the velocity. The evolution matrices $A_k$ are taken to be a constant matrix $A$ of size $n \times n$. This matrix is the identity, except that each position row is coupled to the velocity through $\Delta t$ and the last diagonal entry differs slightly from one:
$$A = \begin{pmatrix} 1 & & & \Delta t \\ & \ddots & & \vdots \\ & & 1 & \Delta t \\ & & & 1.001 \end{pmatrix}.$$
The value $1.001$ in the $(n, n)$ position of the matrix $A$ is arbitrary; its sole purpose is to make sure that the velocity is changing over time. The unknown state vector is represented as $x_k = (p_1, \dots, p_{n-1}, v)^T$, the $n - 1$ positions followed by the common velocity, and the projection/measurement matrices $H_k$, where $H$ is of size $(n-1) \times n$, are
$$H = \begin{pmatrix} I_{n-1} & 0 \end{pmatrix}.$$
The experiment is run with an initial state vector of size $n$ whose positions (in m) and velocity (in m/s) are both set to arbitrary values. The measured data are simulated with the Gaussian noise process ProcessNoise in (39). It accounts for a constant and arbitrary acceleration, accel $= 0.6$ m/s$^2$, to be added to each position measurement, so that
$$\text{ProcessNoise} = \tfrac{1}{2}\,\text{accel}\,\Delta t^2\, \text{randn1}, \quad (39)$$
where randn1 is a random vector drawn from the standard normal distribution. The current velocity is updated from the previous one as
$$v_k = v_{k-1} + \text{accel}\,\Delta t. \quad (40)$$
The current state is then updated according to
$$x_k = A\, x_{k-1} + \text{ProcessNoise}. \quad (41)$$
The state vector in (41) was updated both with ProcessNoise as defined in (39) and with ProcessNoise equal to zero; no significant change was noticed. The data or measurements at time $k$ are simulated with the state vector, without the last component, which contains the velocity, with an added Gaussian noise according to
$$z_k = H\, x_k + \text{DataNoise}\cdot\text{randn2}, \quad (42)$$
where DataNoise is some arbitrary value and randn2 is a random vector drawn from the standard normal distribution. As for KF, the covariance matrices are set equal to $Q = \sigma^2 I$ and $R = \rho^2 I$ for tuned values of $\sigma^2$ and $\rho^2$; a slight improvement for KF in terms of accuracy is then noticed. Recall that the regularization parameter $\alpha_k$ is set to a fixed value; choosing this parameter depends on how much trust is put in each of the two state-space equations. The parameter values used for the example, including $\alpha_k$ and $n$, are summarized in Table 1. The results presented in Figure 1 over 100 s are obtained using these parameters.
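
The sketch below reproduces the spirit of this experiment using the hypothetical apm_update above; every numeric value except accel $= 0.6$ m/s$^2$ is a placeholder rather than a value from Table 1.

import numpy as np

rng = np.random.default_rng(2)

n, dt, T = 101, 0.1, 1000                  # placeholders: n - 1 positions + 1 velocity
accel, data_noise, alpha = 0.6, 0.5, 0.5   # only accel = 0.6 m/s^2 is from the text

A = np.eye(n)
A[:-1, -1] = dt                            # each position advances by v * dt
A[-1, -1] = 1.001                          # arbitrary entry so the velocity changes
H = np.hstack([np.eye(n - 1), np.zeros((n - 1, 1))])  # positions measured, not velocity

x = np.zeros(n)
x[:-1], x[-1] = 10.0, 1.0                  # arbitrary initial positions (m), velocity (m/s)

x_hat, history = x.copy(), []
for k in range(T):
    process_noise = 0.5 * accel * dt ** 2 * rng.standard_normal(n)  # as in (39)
    x = A @ x + process_noise                                       # state update (41)
    x[-1] += accel * dt                                             # velocity update (40)
    z = H @ x + data_noise * rng.standard_normal(n - 1)             # simulated data (42)
    x_bar = A @ x_hat                                               # APF prediction
    x_hat = apm_update(x_bar, x_bar, z, H, alpha, n_iter=50)        # APF recursion
    history.append(x_hat)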

The three curves in Figure 1 are averages of their corresponding positions. The latter represent the simulated moving objects, the reconstructed positions using KF, and the reconstructed positions using the APF algorithm 5.1. Taking averages of the positions is for plotting and error-comparison purposes only. Averages are not taken over the time interval; rather, they are averages of all the positions at every time step $k$. The sampling interval has no impact on the resulting averages.

The relative error between the simulated curve and both reconstructed curves has a median of around 30%. The CPU running times of both algorithms depend on the size $n$ and on the size of the time step $\Delta t$. As $n$ increases and/or $\Delta t$ decreases, the running time of both APF and KF increases; however, APF is much faster than KF. Indeed, APF is over 25 times faster than KF for the largest tested size of the unknown state vector and the smallest time step $\Delta t$, over 100 s.

7. Conclusion

A novel filtering algorithm, APF, applicable to the linear case, was presented; it could be implemented in aerospace and other fields. It could also be applied, for instance, anytime tracking of moving objects is desired, such as in medical imaging and data assimilation, where large systems are involved. Experiments were run comparing APF and KF in terms of accuracy and computer speed. The quality of the reconstructed curves is about the same for both algorithms, although APF performs slightly better than KF. More importantly, APF is up to 25 times faster than KF: seconds instead of minutes. As is the case with KF, the APF algorithm filters out the errors from modeling the dynamical system and the noise from the data. Both algorithms ensure temporal regularization and output an optimal recursive estimate. However, APF does not use any matrix-matrix multiplication and does not necessitate any matrix inversion. Furthermore, APF does not need to calculate, update, or store any covariance matrix. This is not the case for KF regarding these last three properties. Indeed, these three properties are at the heart of making APF take much less CPU time than KF, so that APF is very suitable for large scale systems such as the ones in aerospace. APF could be used in any discipline where, for instance, KF has been used, or in any field that is interested in time-varying variables, such as financial risk assessment/evaluation, forecasting, or control. The results substantiate the efficiency of this novel APF algorithm.

Nomenclature

$k$: Time parameter
$T$: Maximum value of $k$
$x_k$: Position vector of a moving object at time $k$
$A_k$: Evolution matrix from position $x_{k-1}$ to position $x_k$
$z_k$: Measured data vector at time $k$
$H_k$: System matrix relating $z_k$ to $x_k$
$E(\cdot)$: Expected value of a vector
$\operatorname{cov}(\cdot)$: Covariance matrix of a vector
$w_k$: Error vector in modeling the transition from $x_{k-1}$ to $x_k$
$Q_k$: Covariance of $w_k$
$v_k$: Noise vector of the measured $z_k$
$R_k$: Covariance of $v_k$.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The author declares no conflicts of interest.