Research Article  Open Access
Bukeun Byeon, HoKyoung Lee, DoSik Yoo, "Direct Signal Space Construction Luenberger Tracker with Quadratic Least Square Regression for Computationally Efficient DoA Tracking", Mathematical Problems in Engineering, vol. 2020, Article ID 4697381, 17 pages, 2020. https://doi.org/10.1155/2020/4697381
Direct Signal Space Construction Luenberger Tracker with Quadratic Least Square Regression for Computationally Efficient DoA Tracking
Abstract
In this paper, we propose a computationally efficient direction-of-arrival (DoA) tracking scheme called the direct signal space construction Luenberger tracker (DSPCLT) with quadratic least square (QLS) regression. We also study analytically how the choice of observer gain affects the algorithm performance and illustrate how the theoretical results can be used to determine an optimal observer gain value. The proposed scheme (DSPCLT) has several distinct features compared with existing algorithms. First, it requires only a fraction of the computational complexity of other schemes. Secondly, it maintains robustness by separately treating the special case of object overlap, in which subspace-based algorithms often suffer from a lack of resolvability. Thirdly, the proposed scheme achieves enhanced performance by a method of delay compensation, which accounts for observation delay. Through numerical analysis, we show that DSPCLT achieves performance similar or superior to existing algorithms with only a fraction of the computational requirement.
1. Introduction
In this paper, we propose a computationally efficient direction-of-arrival (DoA) tracking scheme called the direct signal space construction Luenberger tracker (DSPCLT) with quadratic least square (QLS) regression. During the last several decades, various subspace-based schemes have been proposed for DoA estimation, such as the multiple signal classification (MUSIC) algorithm [1], estimation of signal parameters via rotational invariance techniques (ESPRIT) [2], and the root-MUSIC method [3]. Recently, because of their high resolution, these algorithms have been studied in various applications such as nonlinear arrays [4–6], 2-dimensional arrays [7, 8], and MIMO radar [9]. However, early subspace-based algorithms generally suffer from large computational complexity, typically due to the need for eigenvalue decomposition (ED) or singular-value decomposition (SVD). For this reason, various schemes have been proposed to reduce the system complexity, such as the propagator method (PM) [10], orthogonal propagator method (OPM) [11], subspace method without eigendecomposition (SWEDE) [12], direct signal space construction method (DSPCM) [13–15], and projection approximation subspace tracking-based deflation (PASTd) algorithm [16].
However, these algorithms have difficulty estimating the DoAs when one or more signals suddenly disappear or when DoAs overlap [17]. As a means to overcome such issues, a number of DoA tracking schemes have been proposed in [18–20]. Recently, the authors of [21, 22] proposed combining existing tracking techniques with computationally efficient subspace techniques to avoid the issues of overlapping or suddenly disappearing targets while keeping the complexity minimal. For example, the authors of [21] proposed an algorithm combining PASTd and Kalman filtering [23]. We shall refer to this algorithm as PASTd Kalman in this paper. PASTd Kalman reduces the computational complexity of constructing the subspace by avoiding ED and SVD through a recursive least square (RLS) technique. However, the algorithm still requires relatively high complexity due to its adoption of Kalman filtering. More recently, the authors of [22] proposed an algorithm called the adaptive method of estimating DoA (AMEND), which replaces the Kalman filter with a Luenberger state observer [24]. However, the complexity of AMEND is not smaller than that of PASTd Kalman because it employs, for subspace construction, a relatively expensive algorithm called the subspace-based method without eigendecomposition (SUMWE) [25].
In this paper, we propose DSPCLT, which achieves similar or better performance with computational complexity significantly lower than existing algorithms. The proposed scheme has several distinct features compared with existing schemes. First, it achieves very low complexity by employing DSPCM [14, 15] for subspace construction and a Luenberger observer in place of the Kalman filter for estimation and filtering. Secondly, it achieves robustness by separately treating the special case of object overlap, in which subspace-based algorithms suffer from a lack of resolvability. Thirdly, the proposed scheme achieves enhanced performance by a method of delay compensation, which takes into account the fact that the DoA estimation at the present time is performed with data observed previously. We show, through numerical analysis, that the proposed scheme achieves performance similar or superior to existing algorithms with only a fraction of the computational requirement.
Another important topic studied in this paper is the choice of the observer gain used to implement the Luenberger observer. In most existing schemes employing a Luenberger observer, optimal observer gain values are determined through simulations. For example, a certain observer gain value was suggested in [22] after simulations. However, we note that there is an infinite variety of situations depending on the number of antenna sensor elements, the number of objects, the speeds of movement, the signal-to-noise ratios of the targets, the shape of the object trajectory, and so on. Hence, for the selection of a certain observer gain value to be accepted as a valid choice, there must be some degree of guarantee that this particular value will provide optimal or near-optimal performance in other situations, which is generally very difficult to establish with pure simulations. For this reason, we study analytically how the choice of observer gain affects the algorithm performance and how we can choose an optimal observer gain value.
This paper is organized as follows. In Section 2, the system model and basic assumptions are described together with the notational definitions used throughout this paper. In Sections 3–5, the proposed scheme (DSPCLT) is delineated; it consists of three stages called initialization, prediction, and estimation. The initialization and prediction stages are introduced in Section 3, and the estimation stage is presented in Sections 4 and 5, depending on whether the objects are resolvable by the subspace-based algorithm employed. In Section 6, we study theoretically how the selection of the observer gain value affects the algorithm performance and describe how we can make an optimal observer gain selection using the analytical results. Next, we show, in Section 7, that the proposed scheme achieves similar or better performance with only a fraction of the computational requirements of other algorithms. Finally, we draw conclusions in Section 8.
The following notations are used throughout this paper. Boldfaced letters are used for matrices and vectors. The complex conjugation, the transposition, and the Hermitian transposition of a matrix are denoted, respectively, by (·)*, (·)^T, and (·)^H. The zero matrix and the identity matrix are denoted, respectively, by 0 and I.
2. System Model and Basic Assumptions
We consider a uniform linear array (ULA) consisting of sensors with intersensor spacing and assume that narrowband far-field signal waves of wavelength impinge on the array at distinct directions, respectively, making angles with the linear array. We denote, by , the received noisy signals collected at the sensors. Then, if we define the steering vector by with , the system can be modeled with the following equation, where , , , and is the vector consisting of the noises added at the sensors. In this paper, we make the following basic assumptions on the system model:
(1) The signal wavelength is known a priori, and the intersensor spacing is chosen to be . The number of impinging signals is predetermined by appropriate methods such as the one in [26].
(2) The vectors and are statistically independent complex random vector processes with zero mean satisfying , where is the Kronecker delta function. We further assume that is a diagonal matrix with positive real numbers on its diagonal and that the value of is estimated accurately by the receiver through long-term observation of the noise.
(3) The system tracks the DoAs by obtaining angle estimates at times , , where is a predefined positive real number. We further assume that is chosen small enough that the DoAs are essentially constant during the time duration for each .
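As a concrete illustration of the ULA model above, the following Python sketch builds a steering vector and a noisy snapshot. Since the paper's symbols are not reproduced here, all names (`steering_vector`, `received_snapshot`, the half-wavelength default spacing, the angle convention measured from the array axis) are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def steering_vector(theta_deg, M, d_over_lambda=0.5):
    """ULA steering vector for a source at angle theta (degrees).

    Assumes half-wavelength spacing by default; sensor m contributes a
    phase of 2*pi*(d/lambda)*m*cos(theta), with theta measured from the
    array axis (an assumed convention)."""
    theta = np.deg2rad(theta_deg)
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * m * np.cos(theta))

def received_snapshot(thetas_deg, s, M, noise_std=0.1, rng=None):
    """One noisy snapshot x = A s + n for sources at the given angles."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.column_stack([steering_vector(t, M) for t in thetas_deg])
    n = noise_std * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    return A @ s + n
```

Collecting many such snapshots per observation interval is the raw input to the subspace construction discussed later.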
3. Initialization and Prediction
In this paper, we propose a computationally efficient DoA tracking scheme called the direct signal space construction Luenberger tracker (DSPCLT) with quadratic least square regression. As in AMEND, the DoA is tracked by the sequential update of a state that is defined as a three-dimensional vector consisting of the direction-of-arrival and its first and second derivatives. The update process is carried out in two stages, which we call prediction and estimation. In the prediction stage, the state is tentatively updated purely on the basis of the previous state. Then, a more accurate updated state is obtained in the estimation stage, which is the major part of the proposed scheme.
As in AMEND, we propose to use, in the estimation stage, a Luenberger observer together with a subspace-based algorithm. In particular, we use the Newton iteration method to obtain the improved estimate of the DoA for use in the Luenberger observer. While such subspace-based tracking schemes provide excellent performance in general, we need to take special care when the DoAs of objects overlap, in which case the Newton iteration may not provide a robust estimate of the DoA. Consequently, it is wise to treat such a special case separately.
Since it is rather complicated to describe the estimation stage, we describe only the initialization stage and the prediction stage in this section. Then, in the next two sections, we consider the estimation stage in the nonresolvable and resolvable situations, respectively. The proposed scheme employs, for each object, a 3-dimensional state vector whose components represent the direction-of-arrival and its first and second derivatives, respectively. At the initialization stage, the state vector is assumed to be obtained by a certain existing method such as MUSIC [1]. After the initialization stage, the state vector is updated at a regular interval in the two stages called prediction and estimation described in the following.
Before proceeding, let us clarify the notations used in the following. First, we denote by the state vector at time and by the first component of . We shall regard as the estimate of the DoA of object at time . Next, we denote by the predicted state vector obtained in the prediction stage at time and by the first component of .
3.1. Initialization Stage
The initialization stage begins by obtaining the estimates of , , and for , through a certain DoA measurement scheme such as MUSIC [1], ESPRIT [2], or DSPCM [14, 15]. The estimates , , and shall be denoted, respectively, by , and . Then, the initial state vector is computed by
We note that the third component of is set to zero in AMEND [22] and PASTd Kalman [21]. According to our simulations, this zero-acceleration initialization also provides better performance in the proposed scheme, as it does in AMEND and PASTd Kalman. Consequently, we shall use this method of initialization, previously used in AMEND and PASTd Kalman, when we evaluate the performance in a later section. However, we shall use initialization (3) for the proposed scheme in the analysis despite its slightly worse performance, because the mathematical analysis in later sections is slightly simpler with (3).
3.2. Prediction Stage
After initialization, the state vectors are updated periodically, with period , in two stages called prediction and estimation. We assume that the state vector at the previous time is given, where . We assume that has been obtained in the initialization stage if and in the previous estimation stage if . In this stage, namely, the prediction stage, rough estimates of the next state, which we denote by , are obtained by the following dynamic matrix equation, where
Here, the first, second, and third components of the vector are interpreted as predicted values, based on the previous state vector, of , , and , respectively. After the prediction stage, the distance is computed for . The object is regarded as being in the nonresolvable situation if is less than or equal to a certain predetermined threshold value for some with . Naturally, the object is regarded as resolvable if it is not in the nonresolvable situation.
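The prediction stage and the resolvability test above can be sketched as follows. The constant-acceleration transition matrix and the function names are assumptions; the threshold is the predetermined value mentioned in the text, here passed as a parameter.

```python
import numpy as np

def predict_state(x_prev, T):
    """Prediction stage: propagate [theta, theta', theta''] over one
    interval T with a constant-acceleration model (assumed form)."""
    F = np.array([[1.0, T, 0.5 * T**2],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    return F @ x_prev

def resolvable(pred_states, threshold):
    """Flag each object as nonresolvable when another object's predicted
    DoA (first state component) lies within `threshold` of its own."""
    doas = np.array([x[0] for x in pred_states])
    flags = []
    for k, th in enumerate(doas):
        others = np.delete(doas, k)
        flags.append(bool(np.all(np.abs(others - th) > threshold)))
    return flags
```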
4. Estimation Stage in the Nonresolvable Situation
We note that the nonresolvable situation becomes less likely as the number of antenna elements increases. In other words, if the number of antenna elements is large enough, the nonresolvable situation does not occur very often and does not last very long even when it does occur. Although the nonresolvable situation may not happen very often and the algorithm used in this situation is not the major contribution of this paper, we consider this situation first because the proposed algorithm for the resolvable situation is much more complicated to describe.
In the nonresolvable situation, the estimated state vector is obtained by applying quadratic least square regression to the past estimated DoAs , . To be more specific, we assume that the true DoA satisfies and attempt to obtain the constants and by the least square method with the previously estimated DoAs , . We note that the resultant constants and are given by [27]
Once the constants and are computed by (8), the estimated state vector is obtained by
We note that the second and third components of can be obtained as the first and second derivatives of at .
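The quadratic regression step above can be sketched in Python: fit a quadratic to the past DoA estimates and read the state off the fitted polynomial and its derivatives at the current time. The function name and argument layout are assumptions, since the paper's closed-form coefficient expressions are not reproduced here.

```python
import numpy as np

def qls_state(theta_hist, t_hist, t_now):
    """Quadratic least-squares fit theta(t) ~ a t^2 + b t + c over past
    estimates; return the state [theta, theta', theta''] at t_now."""
    a, b, c = np.polyfit(t_hist, theta_hist, deg=2)
    return np.array([a * t_now**2 + b * t_now + c,  # fitted DoA
                     2 * a * t_now + b,             # first derivative
                     2 * a])                        # second derivative
```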
5. Estimation Stage in the Resolvable Situation
In this section, we describe how we obtain the estimated state vector in the resolvable situation. First, we recall that the predicted state vector is obtained, in the prediction stage, by simply multiplying the previous estimated state vector by a constant matrix without reflecting the data collected between times and . While the data captured at the antenna are still not used to obtain the updated estimated state vector in the nonresolvable situation, they are used in the resolvable situation. In the following, we describe how the data are used to obtain in the resolvable situation.
In particular, the updated estimated state vector is obtained, using the Luenberger observer, by, where denotes a 3-dimensional column vector called the observer gain [24]. Here, the quantity is an estimate of obtained by a certain algorithm using the data collected at the antenna. In the following, we describe how we obtain in the proposed scheme. Before proceeding, we note that the performance of the system can depend on the choice of the observer gain . In many cases, as in AMEND, no serious discussion is provided regarding how the choice of the observer gain affects the performance or how we can achieve optimal or near-optimal performance. We discuss this issue in the next section. Hence, we focus, in this section, solely on how we obtain the quantity , which we call the delay-compensated direct signal space construction method Newton estimate (DC-DSPCM-NE) of . The DC-DSPCM-NE is obtained in two steps. In the first step, we obtain a quantity called the direct signal space construction method Newton estimate (DSPCM-NE). Then, in the second step, we obtain a more accurate estimate by a method of delay compensation.
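The Luenberger correction described above can be sketched in a few lines. The specific innovation form (measured DoA minus the first component of the predicted state) is an assumption consistent with the scalar DoA measurement; the paper's equation (10) is not reproduced here.

```python
import numpy as np

def luenberger_update(x_pred, y_meas, L):
    """Estimation stage: correct the predicted state with the scalar DoA
    measurement y_meas; L is the 3-vector observer gain."""
    innovation = y_meas - x_pred[0]   # measurement minus predicted DoA
    return x_pred + L * innovation
```

The choice of the gain `L` governs how aggressively measurements override the prediction, which is exactly the trade-off analyzed in Section 6.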
5.1. Direct Signal Space Construction Method Newton Estimate
Let us first discuss how we obtain the DSPCM-NE . In many subspace-based DoA estimation schemes, the DoAs are estimated by minimizing a certain cost function defined by, where is the orthogonal projector onto the noise subspace. One major source of computational complexity in subspace-based schemes is the construction of the orthogonal projector . In this paper, we propose to use the direct signal space construction method [14, 15] to reduce the computational complexity significantly. In Appendix A.1, we describe how the orthogonal projector is constructed in the proposed scheme.
Since the minimum search process also incurs nonnegligible computational complexity, various methods have been proposed to reduce the burden. One such method, which is particularly useful when some coarse estimate of the DoA is available, is to employ Newton's iteration method of searching for zeroes of [11, 12, 22, 25]. We also adopt Newton's iteration method in this paper. Let denote the coarse estimate of a zero of . Then, according to Newton's method, a refined zero is given by, where denotes the real part of . Here, we note that the first term of the denominator is much smaller than the second term since . For this reason, the first term of the denominator is often dropped in implementations.
Using (12) and neglecting the first term of the denominator, we obtain the DSPCM-NE by, where and are vectors defined, respectively, by and
Also, the matrix in (13) denotes an estimate of the propagator . We discuss, in Appendix A.2, how the estimate is constructed in the proposed scheme.
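A minimal numerical sketch of the Newton refinement idea follows. It assumes a MUSIC-style cost built from a noise-subspace projector and approximates the derivatives by central differences; the paper instead uses closed-form derivatives of the steering vector and drops the small first term of the denominator, so this is a stand-in illustration, not the paper's exact update.

```python
import numpy as np

def ula_steering(theta_deg, M, d_over_lambda=0.5):
    # assumed ULA convention: phase 2*pi*(d/lambda)*m*cos(theta)
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * m
                  * np.cos(np.deg2rad(theta_deg)))

def newton_refine(theta0_deg, noise_proj, M, n_iter=5, h=1e-3):
    """Refine a coarse DoA estimate by Newton iteration on the cost
    f(theta) = a(theta)^H P a(theta), P being the noise-subspace
    projector; derivatives via central differences."""
    def cost(th):
        a = ula_steering(th, M)
        return float(np.real(a.conj() @ noise_proj @ a))
    th = float(theta0_deg)
    for _ in range(n_iter):
        d1 = (cost(th + h) - cost(th - h)) / (2 * h)
        d2 = (cost(th + h) - 2 * cost(th) + cost(th - h)) / h**2
        th -= d1 / d2   # Newton step toward the cost minimum
    return th
```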
5.2. Delay Compensated Direct Signal Space Construction Method Newton Estimate
We note that the orthogonal projector is obtained using the data collected during the time interval . Hence, while the DSPCM-NE is an estimate of , it is not exactly the estimate at time . In this section, we regard as an estimate of at a certain time () and obtain the DC-DSPCM-NE by compensating for the movement during the delay . For this purpose, we first need to assess the value of and then estimate the amount of change in during this delay.
Before proceeding, we shall assume that and are small enough that does not change appreciably over the time period . However, we note that, even with this simplifying assumption, it is very difficult to determine a theoretically justifiable value for since the process of obtaining does not depend linearly on the observation times. As described in Appendix A.2, the data are collected at times , and a forgetting factor is used to give more weight to more recent data. In the special case in which and , we note that the data are collected at times and all the data are equally weighted since . Consequently, we observe that the data are collected symmetrically around time and are used equally. While it is theoretically difficult to justify its validity, we shall assess the value of as in this special case.
The assessment of the value of is more difficult in the general case. In the general case, considering the fact that the data are collected at and that data collected at time are weighted by in the computation of in (A.3) and (A.4), we shall regard the weighted time average as the estimated observation time .
Next, we note that accounts for the angle change between times and . The change between times and can be estimated as, assuming that the angular velocity does not change over . Using this approximation, we obtain the DC-DSPCM-NE by the following equation:
Before proceeding, we note that (20) reduces to in the special case of .
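The delay compensation steps above can be sketched as follows: compute the forgetting-factor-weighted average of the snapshot times as the effective observation time, then shift the estimate forward using the predicted angular velocity. All names and the exact weighting convention are assumptions, since the paper's (A.3)/(A.4) are not reproduced here.

```python
import numpy as np

def delay_compensate(theta_meas, omega_pred, t_samples, gamma, t_now):
    """Shift a delayed subspace DoA estimate to the current time.

    theta_meas: DSPCM-NE-style estimate (effectively valid at t_obs)
    omega_pred: predicted angular velocity
    gamma:      forgetting factor (gamma = 1 weights all snapshots equally)
    """
    t_samples = np.asarray(t_samples, dtype=float)
    w = gamma ** (t_now - t_samples)           # more weight on recent data
    t_obs = np.sum(w * t_samples) / np.sum(w)  # weighted observation time
    return theta_meas + omega_pred * (t_now - t_obs)
```

With `gamma = 1` the effective observation time is the midpoint of the snapshot window, matching the symmetric special case discussed above.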
6. Selection of Observer Gain
In Section 5, we discussed how the updated estimated state vector is obtained in the normal situation, namely, the resolvable situation. In particular, the estimated state vector is updated, in the proposed scheme, using the Luenberger observer as described in (10). One important issue that was not discussed in Section 5 is the selection of the observer gain . If certain conditions are met, the estimation errors approach zero as time progresses. For example, in the case of the algorithm in [22], the state error vectors converge to zero if and only if all the eigenvalues of the matrix have magnitudes strictly less than one. However, little information is available about the effect of observer gain selection beyond such theoretical basics. Hence, observer gains are generally selected through simulations. However, the question arises whether an observer gain selected to be optimal through simulations in certain situations will lead to reasonably good performance in other situations. In this section, we study the effect of observer gain selection through theoretical and numerical analysis and describe how we can obtain reasonably optimal observer gains.
6.1. State Estimation Error
We first define the state estimation error by, where denotes the state vector defined as . Also, we define a vector , which we call the state evolution mismatch at time , by
As discussed in [22], this vector equals the zero vector if the angular acceleration does not change between times and . However, the angular acceleration may change, although the amount of change may be small, in which case becomes a small but nonzero vector. We define the angle measurement error by
In the next proposition, we obtain a recurrence relation between and .
Proposition 1. The state estimation errors satisfy the following recurrence relation:
Proof. We first note, from (10), (4), and (24), that

Next, from (22), (23), and (26), it follows that

Now, since and , we next obtain

Finally, we obtain (25) by rearranging (29). □
6.2. Root Mean Squared Error and Observer Gain
In this section, we attempt to study theoretically how the observer gain affects the performance of angle estimation. In particular, we study how the selection of the observer gain is related to the root mean squared error (RMSE) of the DoA of the object, which is defined by, where is the total number of observation intervals. Before proceeding, we note that can be expressed, using , as, since
First, we diagonalize the matrix into , where is a diagonal matrix whose diagonal elements are the eigenvalues of and is a matrix whose column vectors are the corresponding eigenvectors. Here, we note that it is possible to represent in terms of and , as described in the following lemma.
Lemma 1.
Proof. Since , it follows that

Here, we note that is simply the component of the matrix , which is 1. Consequently, we obtain

from which (33) readily follows. □
Moreover, the observer gain vector is determined solely by the eigenvalues, namely, the diagonal elements of , as described in the following lemma.
Lemma 2. Let , , and denote the diagonal elements of , where and are nonnegative real numbers and is a real number. (Since the characteristic equation of is a third-order equation with real coefficients, at least one of the three eigenvalues must be real-valued and the other two must form a complex conjugate pair.) Then, where denotes the element of .
Proof. We note that the characteristic equation of is given by

Consequently, for , , and to be solutions to (37), the identity

must hold. Now, comparing the coefficients of the left- and right-hand sides of (38) and solving for , we obtain the results. □
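The coefficient-matching idea behind Lemma 2 can be illustrated numerically. Since the paper's closed-loop matrix and closed-form gain expressions are not reproduced here, the sketch below places the eigenvalues of the assumed matrix F − LC (constant-acceleration transition F, scalar DoA output C = [1, 0, 0]) by matching characteristic polynomials; it is a stand-in under those assumptions, not the paper's Lemma 2 formulas.

```python
import numpy as np

def observer_gain(eigs, T):
    """Gain L placing the eigenvalues of F - L C at `eigs`.

    Working in w = z - 1, the characteristic polynomial of F - L C is
    w^3 + l1 w^2 + (T l2 + T^2 l3 / 2) w + T^2 l3, so matching it to the
    desired polynomial gives closed-form gains."""
    m = [e - 1.0 for e in eigs]                 # shifted desired roots
    S1 = m[0] + m[1] + m[2]                     # elementary symmetric sums
    S2 = m[0]*m[1] + m[0]*m[2] + m[1]*m[2]
    S3 = m[0]*m[1]*m[2]
    l1 = -S1
    l3 = -S3 / T**2
    l2 = S2 / T + S3 / (2.0 * T)
    return np.real(np.array([l1, l2, l3]))
```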
Although the result in Lemma 1 is simple, it is important in that we can construct the observer gain vector by suitably choosing the eigenvectors and eigenvalues that directly affect the system performance. Before proceeding with the performance analysis, let us rewrite (25) as by using (33). By repeatedly using (39), we can represent the state estimation error in terms of its initial value , the angle measurement error , and the state evolution mismatch , as described in the following:
We note that the second and third terms of (40) involve summations from to . Consequently, the second and third terms may grow larger as grows. However, if we choose the magnitudes of all the diagonal elements of to be sufficiently less than one, only the error and mismatch with close to affect the value of , since converges to the zero matrix as grows. Moreover, we note that the magnitude of is orders of magnitude smaller than that of unless the observation time interval is chosen to be very large, in which case the tracking scheme in this paper does not work properly. Consequently, if we choose the magnitudes of all the diagonal elements of to be small enough compared with 1, the third term of (40) is negligibly small in usual situations. Hence, in the following, we shall assume that can be approximated as and attempt to compute using (31). To obtain a reasonably simple and useful result for , we shall assume that the angle measurement error has zero mean and that and are uncorrelated if , which are reasonably accurate assumptions in real situations. In the following theorem, we summarize the result.
Theorem 1. Assume that and that
Then, where
Proof. See Appendix B. □
Corollary 1. If we further assume, in Theorem 1, that there exists some nonnegative real number such that for all , then, where
According to our tests with various simulation environments, assumptions (42) and (43) hold very accurately. However, assumption (46) does not hold, and the value of usually fluctuates between and 2 times the average value. Despite the invalidity of this assumption, the result in Corollary 1 is very useful. First, we note the simplicity of (47) compared with (44). Consequently, it is much easier, with (47), to gain intuition about the behavior of in relation to the choice of . In particular, no further information is required about the variation of over , and is represented in units of in (47). Secondly, we can regard (47) with as a simple and reasonable approximation of (44), since there is no reason to believe that will fluctuate extremely over .
Now, we shall show how we can select eigenvalues for optimal or near-optimal performance with (47), which is not environment dependent. Later in this section, we shall also discuss how much the results with (47) differ from those with (44) using a number of simulation environment setups. We first choose eigenvalues , , and and compute the right-hand side of (47). Apparently, we need to obtain for this purpose. However, we can compute , which equals , using Lemma 2. We further note that the right-hand side of (44) can be evaluated once we obtain .
We recall that converges to zero as grows if and only if both and are smaller than 1. We consider as candidate values for and and as the values for . Consequently, we consider a total of eigenvalue set combinations. Out of all considered values, the case of , , and provides the lowest value, which is . In the following, we shall express in units of for simplicity. In Figure 1, we illustrate the values as functions of and for the case of . In this figure, we note that the value rises as moves away from . In particular, we note that the value rises relatively mildly if or is decreased. However, the value rises significantly as and approach 1. We found that the values generally rise slowly as the value moves away from . Although it is not possible to include all results with different values, we include the results with and in Figures 2 and 3 to illustrate how the value changes with the value.
So far, we have discussed how we can choose the optimal observer gain vector with (47), assuming that is a constant independent of . Hence, the validity of the proposed method of obtaining the observer gain depends on how much the variations in affect the value of . According to our extensive tests, the effect of the variation of is negligible except in extreme situations that do not affect the optimal selection of the observer gain. To illustrate this in a space-economical way, we consider a situation in which there are five objects with the DoA trajectories illustrated in Figure 4. In realistic environments, the DoA trajectories may not change as fast as in Figure 4. However, if we chose a slowly moving environment, the value would not change very much, and hence the effect would also be negligible. For this reason, we have chosen the very fast changing environment shown in Figure 4. To obtain , we assume that only object moves following the trajectory and perform simulations with the proposed scheme (DSPCLT). We recall that we divided the situations into resolvable and nonresolvable ones to avoid overlapping situations. For this reason, we assume that only object exists in computing .
For the numerical evaluations, we assume that the number of sensor elements is 12, the total number of observation intervals is 60, and the signal-to-noise ratio is 5 dB. With simulation run trials, we computed , which is illustrated in Figure 5. In this figure, we note that the value of is generally related to the angular acceleration. After obtaining the estimate of , we computed using (44) in units of . To compare the results with the values computed using (47), we evaluated the ratio obtained by dividing the value computed using (44) by that computed using (47) for each set of , , and values. Some results of the evaluations are summarized in Table 1. In this table, the number pair gives the minimum and maximum values of the ratios over all and values for the given value and sequence . For example, for and , the ratio falls between 0.965 and 1.141. With this set of numbers, we can see how different, at the extremes, the two values computed by (44) and (47) are. However, the extreme values do not tell the whole story, since these extremes, namely, the maximum and minimum ratio values, occur at and values close to 1, for which the value is very large. In other words, the extreme values occur with eigenvalue selections for which does not converge to zero very fast.
To understand what happens, let us first consider the case of and , in which the minimum value is particularly small. In Figure 6, we plot the ratios as a function of and for this case, namely, for and . We observe that the minimum value 0.799 occurs near the region and , for which the values are very large. In contrast, the ratio is very close to 1.0 except in the region in which remains acceptably small. Moreover, we recall that the results in Figure 6 correspond to an extreme case in which the ratio fluctuates a lot. As a more typical case, we depict, in Figure 7, the ratio for and . We note that the ratio is again very close to 1.0 except in the region in which and . Consequently, we observe that the value computed with (47) is very close to that computed with (44) except at , , and values for which does not converge to zero fast enough and is very large.
7. System Performance and Computational Complexity
In this section, we compare the performance and the computational complexity of the proposed scheme with those of two existing algorithms, namely, AMEND and PASTd Kalman. Most importantly, we observe that the proposed scheme (DSPCLT) exhibits excellent performance with significantly lower computational complexity than AMEND and PASTd Kalman.
7.1. Performance Evaluation of Algorithms
First, let us compare the performance of DSPCLT with that of AMEND and PASTd Kalman. As measures of performance, we use the following quantities, where is the estimated DoA of and is the total number of observation intervals. We note that is the RMSE of the DoA of the object at time and that represents the total RMSE over all objects and observation intervals. We recall that we regard as in the proposed scheme.
However, we need to be very careful in assessing the performance with RMSE. This is because there is some possibility of tracking the wrong object after passing through a crossover point, or of tracking divergence due to a lack of sensitivity to object movements, either of which can result in huge estimation errors. The value of the RMSE can be meaningless when such catastrophic tracking failures happen. To make the RMSE more meaningful, we shall regard a situation as a tracking failure if and only if the estimated DoA of any of the signals is more than away from the actual DoA, and compute the RMSE disregarding such cases of tracking failure. (In this paper, we measure DoAs and RMSE in degrees rather than radians.) Since the RMSE values do not account for the cases of tracking failure, we shall also consider, in addition to the RMSE, the probability of tracking failure as a performance measure.
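The failure-aware RMSE evaluation described above can be sketched as follows. The array layout and the threshold parameter name are assumptions; the paper's actual failure threshold value is not reproduced here.

```python
import numpy as np

def rmse_and_failures(theta_est, theta_true, fail_thresh_deg=10.0):
    """Total RMSE over objects and intervals, excluding failed trials.

    A trial counts as a tracking failure when any estimate strays more
    than `fail_thresh_deg` (hypothetical value) from the truth.
    theta_est, theta_true: shape (trials, objects, intervals), degrees."""
    err = theta_est - theta_true
    failed = np.any(np.abs(err) > fail_thresh_deg, axis=(1, 2))
    ok = ~failed
    rmse = np.sqrt(np.mean(err[ok] ** 2)) if ok.any() else np.nan
    return rmse, failed.mean()   # RMSE over good trials, failure rate
```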
For performance evaluations, we first consider a situation in which signals impinge upon a ULA with sensors separated by halfwavelength from timevarying directions , , and , , , illustrated in Figure 8. We assume that the signaltonoise ratio of each signal at each sensor is given as 5 dB. During each time interval , total data snapshots are uniformly captured at each sensor with .
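A simulation setup of this kind can be sketched as follows. This is a generic narrowband snapshot model for a half-wavelength ULA, assuming unit-power complex Gaussian sources with the per-sensor SNR set through the noise variance; the paper's exact signal model details are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def ula_snapshots(doas_deg, m_sensors, n_snap, snr_db):
    """Simulate one interval of snapshots at a half-wavelength-spaced ULA.

    doas_deg: DoAs (degrees) of the K sources during the interval.
    Returns an (m_sensors, n_snap) complex data matrix X = A s + n.
    """
    theta = np.deg2rad(np.asarray(doas_deg))
    k = np.arange(m_sensors)[:, None]                  # sensor index column
    A = np.exp(1j * np.pi * k * np.sin(theta))         # steering matrix (M, K)
    K = len(doas_deg)
    # unit-power circularly symmetric complex Gaussian source waveforms
    s = (rng.standard_normal((K, n_snap))
         + 1j * rng.standard_normal((K, n_snap))) / np.sqrt(2)
    sigma2 = 10 ** (-snr_db / 10)                      # noise power giving snr_db per sensor
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal((m_sensors, n_snap))
                               + 1j * rng.standard_normal((m_sensors, n_snap)))
    return A @ s + n

# Example: 200 snapshots of 3 sources at 5 dB on an 8-element ULA
X = ula_snapshots([-30.0, 0.0, 25.0], m_sensors=8, n_snap=200, snr_db=5.0)
```

Each tracker in the comparison would then be fed such a data matrix per observation interval, with the DoAs updated between intervals along the trajectories of Figure 8.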
We note that the choice of forgetting factor affects the tracking performance. Hence, we compare the performance with various values from 0.91 to 0.99. In the case of PASTd Kalman, we also consider, instead of , the forgetting coefficient , which was used in [21]. For AMEND, we use the observer gain vector obtained with the eigenvalues , and as suggested in [22]. However, we select the observer gain vector obtained, using Lemma 2, with eigenvalues and for DSPCLT. We recall that observer gain is not used in PASTd Kalman.
As the value of is used to compute in (A.3) and (A.4), we use since the RMSE performance improves rapidly as increases from 1 to and then becomes virtually invariant as it grows further. Also, we separate the resolvable and nonresolvable situations with threshold values , and for , and 20, respectively.
Simulations are performed with trials. The failure probability and obtained through the simulations are summarized in Table 2. We may say that AMEND exhibits the most reliable performance in that it does not experience failure for any choice of value. However, AMEND is the worst in terms of value. In particular, we note that both DSPCLT and PASTd Kalman not only exhibit better values than AMEND but also achieve zero failures. The best value for each scheme is marked in boldface, and we note that DSPCLT achieves the lowest value among the three schemes. We remark that the value 0.6160 listed in the last row for PASTd Kalman corresponds to the forgetting coefficient suggested in [21].

To illustrate the performance variation over time, is plotted in Figure 9. In this figure, we used for AMEND and 0.97 for DSPCLT and PASTd Kalman. In Table 2, we note that AMEND exhibits the worst values. Also, we note that DSPCLT achieves a slightly better value than PASTd Kalman. In Figure 9, we observe that DSPCLT exhibits the best overall performance except in the region where the DoAs of objects overlap. In particular, we note that the performance of DSPCLT is significantly better than that of the others for the first object during . From Figure 8, we note that the DoA of the first object changes very fast over . Consequently, we can conclude that the delay compensation algorithm introduced in Section 5.2 indeed provides improved performance in fast-changing environments. We also observe that DSPCLT is less susceptible to object crossover than AMEND. This is because the nonresolvable situation is treated separately, in the proposed scheme, using the algorithm described in Section 4.
One of the main reasons for the overall performance superiority of the proposed scheme is its comparatively larger effective aperture size with the same number of sensor elements. To illustrate the effect of the effective aperture size, we compare, in Figure 10, realizations of the function , defined in (11), at time index . In this figure, we observe that is much sharper for DSPCLT than for AMEND and PASTd Kalman. For this reason, DSPCLT provides better overall RMSE performance than AMEND, rivaling PASTd Kalman, which employs a much more sophisticated Kalman filtering algorithm.
We next study how the performance varies as the number of objects increases. For illustration, we consider the situation in which there are objects with the trajectories illustrated in Figure 4. Here, we note that AMEND can track only objects satisfying , while DSPCLT and PASTd Kalman can track up to objects. Consequently, at least sensor elements are needed for AMEND to be able to track objects. To see the effect of the number of sensor elements, we consider three values , and 20 as the number of antenna elements of the ULA. Again, the received SNR of each of the 5 objects is set to 5 dB, the number of snapshots is chosen to be 200 with , and the value of is chosen to be the same as .
The simulations are performed with runs again, and the results are summarized in Table 3 and Figure 11. As in the case of , we obtained simulation results with various forgetting factor values and included the best results in the table and the figure. In particular, the listed value indicates the forgetting factor that leads to the best value. In Table 3 and Figure 11, we observe that AMEND exhibits the worst overall performance in almost every aspect. We observe that both AMEND and DSPCLT exhibit nonzero failure probabilities, which decrease as the number of sensor elements increases. Consequently, we can conclude that PASTd Kalman achieves the most reliable performance among the three considered schemes. However, we note that DSPCLT performs better than PASTd Kalman with a sufficiently large number of sensor elements. We also note that DSPCLT, compared with AMEND, not only achieves a lower failure probability but also exhibits a lower . In particular, we observe that the value of DSPCLT is significantly lower than that of AMEND despite the fact that is computed excluding the cases in which tracking fails. Next, in Figure 11, the sequence of values for each of the five objects is depicted for the case of . As in the previous case of objects, AMEND exhibits the worst performance, and DSPCLT achieves similar or better performance, compared with PASTd Kalman, except around crossover points.
