Abstract

Existing observation error estimation methods for sparse imaging usually estimate the error of each observation position by substituting the error parameters into the iterative reconstruction process, which carries a huge computational cost. In this paper, by analysing the relationship between the imaging result of single-observation sampling data and the error parameters, a SAR observation error estimation method based on maximum relative projection matching is proposed. First, the method estimates the precise position parameters of a reference position by the sparse reconstruction method with joint error parameters. Second, a relative error estimation model is constructed based on the maximum correlation of base-space projections. Finally, the accurate error parameters are estimated by the Broyden–Fletcher–Goldfarb–Shanno method. Results on simulated data and on measured data from a microwave anechoic chamber show that, compared with existing methods, the proposed method achieves higher estimation accuracy, lower noise sensitivity, and higher computational efficiency.

1. Introduction

Synthetic aperture radars (SARs) have been widely used in military and civil fields due to their all-weather, day-and-night, long-range, and high-resolution capabilities. High resolution requires a large bandwidth and a large synthetic aperture, which means a high sampling rate and a large data volume for traditional SAR imaging methods. Because traditional imaging methods follow broadly similar processing chains, further gains in imaging resolution are difficult to achieve with them. Sparse signal reconstruction and compressive sensing technology have therefore begun to be widely used in SAR imaging in recent years. For an echo signal obtained through sparse sampling or sparse representation, the imaging scene can be accurately reconstructed by an optimisation-based reconstruction algorithm. Sparse SAR imaging methods can obtain higher resolution than traditional methods [1–13].

Radar imaging depends on the geometric relationship between the radar and the target. In some cases, such as motion errors of the radar platform, mechanical jitter, and missing data acquisitions, the observation position of the radar sensor is changed [14–16]. Autofocusing methods can be used to image echo data with observation position errors, but these methods do not apply to sparsely sampled data, and their computational efficiency is low [17–19]. For sparse observation radar imaging and compressive sensing radar imaging, a base-space matrix has to be constructed to model the radar imaging process; thus, many scholars have compensated observation errors by optimising the base-space matrix [20–28]. Reference [24] proposed a sparse reconstruction method with joint error parameters, which builds a base-space matrix containing error parameters to reconstruct the signal. This method can estimate each observation position accurately, but its computational complexity is extremely high. References [25, 26] improved the sparse reconstruction method with joint error parameters by dividing the data into several subapertures and conducting a unified position estimation and error compensation for each subaperture. Although these subaperture methods improve efficiency, they ignore the error variation within each subaperture and are unsuitable for cases with large spatial variation of the position error. References [27, 28] improved the sparse reconstruction method with joint error parameters by phase error correction for approximated observation; this approach likewise improves efficiency, but the approximated observation model causes a decline in image quality.

This study constructs an imaging model based on a sparse representation of the target imaging scene. By analysing the relationship between the observation error at each observation position and the base-space projection, a SAR observation error estimation method based on the maximum relative matching degree of base-space projections is proposed. First, the method estimates the precise position parameters of the reference position through the sparse reconstruction method with joint error parameters. Then, the maximum matching degree of the base-space projections between the observation position being estimated and the reference position is taken as the evaluation criterion, and the difference in position error between the observation position being estimated and the reference position is estimated by the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. Finally, the precise observation error parameters are obtained by adding the relative errors to the reference position error. Simulation results show that the proposed method is more accurate, more efficient, and more robust in a noisy environment than the existing algorithms. Furthermore, by building a test platform in a microwave anechoic chamber, we verified the effectiveness of the proposed method on data with unknown missing locations and the feasibility of applying the method in engineering.

2. Imaging Model

Radar imaging relies on the geometric relationship between the radar and the target. This study mainly analyses the influence of changes in the radar observation position on imaging and assumes that the target is stationary relative to the reference coordinate system during the imaging process. In general, the radar observation position is known; however, due to motion errors or missing data acquisitions, the precise observation position parameters of the echo data usually cannot be determined. Sparse radar imaging and compressive sensing radar imaging require an accurate sparse representation of the target scene. Therefore, accurately estimating the position of the radar at each observation sampling point is necessary to obtain accurate reconstructed images.

In this study, two-dimensional (2D) imaging is taken as the model for analysis. The geometric model of the radar imaging process is shown in Figure 1. Let x, y, and z denote the radar observation position coordinates, where z is the vertical distance from the radar track to the contour plane of the target and x is the perpendicular distance from the centre of the target scene to the y-axis. Let (x₀, y₀) denote a target scattering point's coordinates in the 2D imaging scene. Under the ideal observation condition of radar imaging, the radar motion path is a straight line, so x and z are constant values. The distance from the radar to a target scattering point can be expressed as
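
As a sketch under the notation assumed above (radar at (x, y, z), target scattering point at (x₀, y₀) in the imaging plane), the slant range takes the standard Euclidean form:

```latex
% Sketch of the slant range under the assumed notation
R(x_0, y_0) = \sqrt{\left(x - x_0\right)^2 + \left(y - y_0\right)^2 + z^2}
```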

In this study, the transmitted signal is a stepped-frequency electromagnetic signal. Let k denote the wavenumber of the transmitted signal, which is a variable; let c denote the velocity of light, f the frequency of the transmitted signal, and f_min and f_max the minimum and maximum frequencies of the transmitted signal. Let Ω denote the imaging scene; the echo signal of all target scattering points in Ω can be expressed as in equation (2), where σ(x₀, y₀) is the scattering intensity of the scattering point with coordinates (x₀, y₀). To express equation (2) as a matrix operation, the target imaging scene is discretised and meshed. The scattering intensity distribution in the imaging scene is expressed as a matrix σ, and the echo signal is collected in a matrix s. To facilitate matrix operations, σ and s are stretched column-wise into one-dimensional vectors. Ignoring the noise in the signal, s can then be expressed as in equation (3), where A is the base-space matrix. Under a radar observation condition with error, the radar path is not a fixed straight line but a curve around the ideal straight line; therefore, the coordinates x, y, and z become variables. Let ε denote the observation error, and let the coordinates of each observation position with error ε be the correspondingly perturbed values of x, y, and z; the distance between the radar and the target is then changed into
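
As an illustrative sketch of the discretised model of equation (3) (the function and variable names here are our own), the base-space matrix A can be assembled from the phase terms exp(−j2kR) of every observation position; the same routine yields A(ε) when the position errors are added before the matrix is built:

```python
import numpy as np

def basis_matrix(freqs, positions, grid_x, grid_y, c=3e8):
    """Sketch: base-space matrix A for a stepped-frequency radar.

    freqs     : (K,) transmitted frequencies of one observation position
    positions : (M, 3) radar observation coordinates, one row per position
                (add the position errors here to obtain A(eps))
    grid_x/y  : (N,) flattened coordinates of the discretised imaging scene
    Returns A with shape (M*K, N), N being the number of scene grid cells.
    """
    k = 2 * np.pi * np.asarray(freqs) / c                 # wavenumbers
    blocks = []
    for px, py, pz in np.asarray(positions, dtype=float):
        # slant range from this observation position to every grid cell
        R = np.sqrt((px - grid_x) ** 2 + (py - grid_y) ** 2 + pz ** 2)
        blocks.append(np.exp(-2j * np.outer(k, R)))       # e^{-j 2 k R}
    return np.vstack(blocks)

# Noise-free echo of a scene vector sigma (usage sketch):
# s = basis_matrix(freqs, positions, grid_x, grid_y) @ sigma
```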

Then, the echo signal with radar observation error can be expressed as

Assume that the electromagnetic scattering intensity of the target is isotropic, so the scattering intensity distribution in the imaging scene remains σ in this radar imaging system. Let s(ε) denote the echo data matrix with observation error, and construct the base-space matrix A(ε) with error parameter ε; then s(ε) can be expressed as

3. Observation Error Estimation and Sparse Imaging

3.1. Estimation Method Based on Joint Parameter Sparse Reconstruction

Sparse radar imaging can be regarded as a process of optimisation-based reconstruction through equation (3). Let σ̂ represent the estimated backscattering coefficients; the reconstruction model can be expressed as
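
One common concrete form of this reconstruction model, written with the notation assumed above, is the ℓ1-regularised least-squares problem (a sketch; λ is a regularisation parameter introduced here for illustration):

```latex
% Sketch of the sparse reconstruction model under the assumed notation
\hat{\boldsymbol{\sigma}} \;=\; \arg\min_{\boldsymbol{\sigma}}
\;\frac{1}{2}\,\left\| \mathbf{s} - \mathbf{A}\,\boldsymbol{\sigma} \right\|_2^2
\;+\; \lambda \left\| \boldsymbol{\sigma} \right\|_1
```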

It can be seen from equation (6) that, to calculate σ̂ accurately, the base-space matrix must be constructed with accurate error parameters. Therefore, for each observation position, the base-space matrix is constructed according to the error parameters to be estimated, and the optimal solution for the backscattering coefficients and the error is obtained by solving the problem in equation (8), where ε is the set of error parameters to be estimated. As shown in equation (8), the sparse reconstruction method with joint error parameters can estimate the error parameters accurately and obtain better imaging results. However, this method has two problems. First, the calculation cost is too high: each iteration requires a complete sparse reconstruction, and each observation position requires many iterations, so the overall computational cost is huge. Second, because the amount of data at each observation position is small, the estimation accuracy is difficult to guarantee. The subaperture estimation method can increase the calculation speed and improve the overall error estimation accuracy of each subaperture, but the error variation within each subaperture is ignored.
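
Under the assumed notation, the joint problem of equation (8) can be sketched as a simultaneous minimisation over the scene and the error parameters:

```latex
% Sketch of the joint-parameter reconstruction; s_meas denotes the recorded
% echo data containing the (unknown) true observation error
\bigl(\hat{\boldsymbol{\sigma}},\,\hat{\boldsymbol{\varepsilon}}\bigr) \;=\;
\arg\min_{\boldsymbol{\sigma},\,\boldsymbol{\varepsilon}}
\;\frac{1}{2}\,\left\| \mathbf{s}_{\mathrm{meas}} -
\mathbf{A}(\boldsymbol{\varepsilon})\,\boldsymbol{\sigma} \right\|_2^2
\;+\; \lambda \left\| \boldsymbol{\sigma} \right\|_1
```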

3.2. Estimation Method Based on Maximum Matching Degree of Base-Space Projection

Time-frequency analysis methods are commonly used in manoeuvring target imaging and spinning target imaging. These methods can quickly calculate the relative shift of the target at different sampling times by analysing the Doppler frequency of the signal. The present study assumes that the target is relatively static and that the radar has shifted from its ideal trajectory; thus, the position change of the radar could, in principle, be estimated using time-frequency methods. However, these methods have certain problems. First, their accuracy is insufficient for accurate sparse reconstruction: for the short-time Fourier transform, the window function makes it difficult to obtain high time resolution and high frequency resolution at the same time, and the Wigner–Ville distribution (WVD) method is susceptible to cross-term interference when multiple scattering points are present. Second, for sparse or compressive sensing (CS) imaging, if the echo data are sparsely sampled or partially missing, time-frequency analysis methods often fail.

Time-frequency analysis methods deduce the change in spatial position from the change in frequency. Because sparse reconstruction depends on the geometric relationship between the target and the radar, a change in the radar observation sampling position leads to a change in the reconstructed target position. By calculating the reconstructed target position from the signal of each observation sampling position, the change in the radar observation sampling position can likewise be deduced in reverse.

Because a single observation contains no Doppler information, only a one-dimensional (1D) range image can be obtained from the echo data of a single observation position, and from this 1D range image only the range coordinates of the target can be obtained. When the radar is at an ideal observation position, the parameters x, y, and z of the observation position are known, and the base-space matrix of the single observation position is constructed from these parameters, the range coordinates of the estimated results are as follows:

When the real radar observation position has changed and the real observation position parameters are unknown, if the ideal position parameters are still used in the base-space matrix, the range coordinates of the estimated results contain an error, as follows:

If the base-space matrix is constructed with the ideal parameters, each pulse in the 1D range image of the nonideal echo data is translated relative to the 1D range image of the ideal echo data, as shown in Figure 2. Therefore, by minimising the discrepancy between the imaging results obtained with trial error parameters and the ideal imaging results, the accurate error parameters can be calculated iteratively.

A certain observation position is selected as the reference position, its parameters are estimated accurately, and the corresponding ideal imaging result is obtained. Then, by estimating the error parameters of all other observation positions, the accurate radar motion path can be obtained, as shown in Figure 3.

Using the sparse reconstruction method with joint error parameters for accurate reconstruction has two problems. The first is that the time cost is extremely high; the second is that the results of sparse reconstruction are generally discrete and sparse, which makes it difficult to find the maximum matching value through an optimisation search such as the gradient descent algorithm. To solve these problems, the reconstruction model must be modified. Equation (7) can be considered a least-squares estimation model, and the resulting estimator can be expressed as
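
In the assumed notation, this least-squares estimator takes the familiar normal-equation form (a sketch):

```latex
% Sketch of the least-squares estimator under the assumed notation
\hat{\boldsymbol{\sigma}}_{\mathrm{LS}} \;=\;
\bigl(\mathbf{A}^{H}\mathbf{A}\bigr)^{-1}\mathbf{A}^{H}\,\mathbf{s}
```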

To use equation (11) directly to estimate the scene, the matrix A^H A must be invertible. However, A^H A is usually not invertible; thus, equation (11) is an ill-posed equation that cannot be solved directly. As the inverse of A^H A does not exist, to solve equation (11) approximately, we can use A^H A to cancel out the ill-posed term, namely, by multiplying both sides of equation (11) by A^H A [29], as shown in the following equation:

The solution obtained by this method is

As shown in equation (13), the resulting estimate is equivalent to the projection of the echo signal onto the base-space matrix. Therefore, through this base-space projection, an approximate solution for the backscattering coefficients of the target scattering points can be obtained. A^H A is a matrix with a sinc response [29]; thus, the base-space projection is a reconstruction result with a sinc response. Sparse reconstruction is therefore replaced by the base-space projection matrix, expressed as
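
In the assumed notation, this projection-based substitute for the full sparse reconstruction is simply (a sketch):

```latex
% Sketch of the base-space projection; A^H A has a sinc-shaped response,
% so p is a sinc-blurred image of sigma
\mathbf{p} \;=\; \mathbf{A}^{H}\mathbf{s}
\;=\; \bigl(\mathbf{A}^{H}\mathbf{A}\bigr)\,\boldsymbol{\sigma}
```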

Although the reconstruction result of equation (14) is not an exact solution, its peaks indicate the same range-direction delays as the exact solution.

The preceding analysis shows that the deviation of the signals reconstructed from different sampling positions depends directly on the errors of those sampling positions. However, the accurate positions of the scattering points within the imaging scene are unknown in the real imaging process, so the offset cannot be determined directly. First, the accurate error of the reference position is calculated through the reconstruction method with joint error parameters. Then, taking this result as the benchmark, the accurate errors of the other observation sampling positions are determined from the offsets of their imaging results.

Taking the first observation position as the reference position, its precise observation position parameters are substituted into equation (14) to obtain the base-space projection matrix of the reference position. Then, the matching degree between the base-space projection result at the observation position being estimated and the base-space projection result at the reference position is
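
One natural concrete form of such a matching degree, written with notation assumed here (p_m for the projection at the position being estimated with trial error ε_m, p_ref for the reference projection), is the scene-wise correlation of the two projection magnitudes (a sketch of one possible definition):

```latex
% Sketch of the matching degree between the two base-space projections
C(\boldsymbol{\varepsilon}_m) \;=\;
\sum_{(x_0,\,y_0)\,\in\,\Omega}
\bigl\lvert p_m(x_0, y_0;\,\boldsymbol{\varepsilon}_m)\bigr\rvert \,
\bigl\lvert p_{\mathrm{ref}}(x_0, y_0)\bigr\rvert
```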

As the base-space projection matrix has a sinc response, its sidelobes cause two problems. First, the sidelobes reduce the convergence speed of the optimisation algorithm. Second, when several scattering points are close to one another, their sidelobes tend to overlap and accumulate, resulting in sidelobes with higher energy, as shown in Figure 4. As can be seen from equation (15), the matching degree is the superposition of the energies of the reference and iterative results. If a scattering point of another reconstruction result happens to fall on a high sidelobe, the calculated matching degree will contain errors that affect the estimation result. Therefore, to further improve the estimation speed and accuracy, the reference result is Gaussian weighted using the accurate reconstruction result of equation (8), as shown in equation (16), where the weighting function is built from the normalised accurate reconstruction result, Ω represents the imaging scene, and the remaining parameter is a constant. The reconstruction result of the reference position after Gaussian weighting is
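
As an illustrative sketch of this weighting step (our own interpretation; the function names, threshold, and window width are assumptions, not values from the original), the reference projection can be multiplied by Gaussian windows centred on the accurately reconstructed scatterers; the result is reused in the optimisation sketch below:

```python
import numpy as np

def gaussian_weighted_reference(p_ref, sigma_hat, grid_x, grid_y, width=0.1, thresh=0.1):
    """Sketch: suppress sidelobes of the reference base-space projection by
    Gaussian weighting built from the accurate reconstruction sigma_hat."""
    p_ref = np.abs(p_ref) / np.max(np.abs(p_ref))            # normalised projection
    sigma_n = np.abs(sigma_hat) / np.max(np.abs(sigma_hat))  # normalised reconstruction
    centres = np.flatnonzero(sigma_n > thresh)               # cells holding true scatterers
    weight = np.zeros_like(p_ref)
    for c in centres:
        d2 = (grid_x - grid_x[c]) ** 2 + (grid_y - grid_y[c]) ** 2
        weight = np.maximum(weight, np.exp(-d2 / (2.0 * width ** 2)))
    return p_ref * weight                                    # weighted reference projection
```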

The sidelobes in the Gaussian-weighted base-space projection matrix are suppressed, which eliminates the influence of false scattering points on the position estimation. To solve for the precise observation sampling position, the maximum matching degree is used as the criterion, and the optimisation estimation model can be expressed as

The optimisation function of equation (18) is illustrated in Figure 5. Available solution methods include trial-point, Newton, and quasi-Newton methods. The convergence rate of heuristic methods is slow, while Newton's method is computationally complex and its optimisation stability is not high. In this study, the BFGS quasi-Newton method is used to calculate the optimal solution of equation (18). Starting from the current iteration position, the next iteration position is given by equation (19), where the step size must satisfy the Wolfe–Powell criterion to ensure that the optimal solution is not excluded, and the search direction is determined by a positive definite matrix that approximates the Hessian and is updated at each iteration from the most recent changes in the position and in the gradient.
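
As a minimal end-to-end sketch of this estimation step (not the authors' implementation; it reuses the illustrative basis_matrix and gaussian_weighted_reference helpers above and SciPy's BFGS optimiser, whose line search enforces Wolfe-type conditions), the relative error of one observation position can be found by maximising the matching degree:

```python
import numpy as np
from scipy.optimize import minimize

def estimate_relative_error(s_m, freqs, ideal_position, p_ref_weighted, grid_x, grid_y):
    """Sketch: estimate the 3D position error of one observation position by
    maximising the matching degree with the Gaussian-weighted reference
    projection p_ref_weighted."""
    ideal_position = np.asarray(ideal_position, dtype=float)

    def negative_matching(eps):
        # single-observation base-space matrix at the trial (error-shifted) position
        A_m = basis_matrix(freqs, (ideal_position + eps)[None, :], grid_x, grid_y)
        p_m = np.abs(A_m.conj().T @ s_m)         # base-space projection magnitude
        return -np.dot(p_m, p_ref_weighted)      # negate so that BFGS maximises C

    result = minimize(negative_matching, x0=np.zeros(3), method="BFGS")
    return result.x                              # estimated (dx, dy, dz) relative error
```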

The relative error parameters of each observation sampling position are solved by the BFGS method. Taking the first observation sampling position as the reference position, the accurate estimate for each observation sampling position is obtained by adding its relative error to the reference position, where the reference position is the precise observation sampling location solved by the sparse reconstruction method with joint error parameters.
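
In the assumed notation, with ε̂_ref the reference-position error obtained from the joint-parameter reconstruction and Δε̂_m the relative error returned by the BFGS search, this amounts to (a sketch):

```latex
% Sketch of the final error estimate for observation positions m = 2, ..., M
\hat{\boldsymbol{\varepsilon}}_m \;=\;
\hat{\boldsymbol{\varepsilon}}_{\mathrm{ref}} \;+\; \Delta\hat{\boldsymbol{\varepsilon}}_m ,
\qquad m = 2, \dots, M
```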

4. Experiment Simulation and Verification

4.1. Experiment and Analysis of Computer Simulation Target

The 2D geometric distribution of the simulated target is shown in Figure 6; the number of scattering points is 28. The size of the imaging scene is 5 × 5 m, and the ideal experimental parameters are set as follows: the carrier frequency of the transmitted signal is 15 GHz, the bandwidth is 3 GHz, the synthetic aperture length is 10 m, the distance between the centre of the imaging scene and the radar moving track is 100 m, and the radar antenna height is 0 m. In generating the simulation data, certain errors were set for all three coordinate components, and a search interval of 1 m was used for all three coordinates in the experiment. Figure 7 shows the estimation results of the proposed method for the three position coordinates. The blue line indicates the theoretical coordinate value, the black line the actual coordinate value containing errors, and the red line the estimated result. The figure shows that the proposed method can accurately estimate the precise coordinates of the radar at each observation position.

Figure 8 compares the orthogonal matching pursuit (OMP) reconstructed images obtained with different methods. Figure 8(a) presents the reconstruction result of the base-space matrix constructed with the ideal experimental parameters. Figure 8(b) uses the estimated observation position parameters, with which the target scattering points are accurately reconstructed. To compare the proposed method with other mature methods, Figures 8(c)–8(f), respectively, adopt the point-by-point estimation method with joint error parameters (denoted the point-by-point method), the subaperture estimation method with joint error parameters (denoted the subaperture method), the phase error correction method for approximated-observation-based compressed sensing radar imaging (denoted the phase error correction method), and the proposed method to process the simulation data through the OMP algorithm. Under the error condition set in this study, the point-by-point method and the proposed method reconstruct the target effectively; the subaperture method and the phase error correction method can also reconstruct the target, but some scattering points show a slight position deviation in their imaging results.

To compare the imaging performance of the four methods quantitatively, their processing results are analysed in the first and second rows of Table 1. The table indicates that the estimation accuracy of the proposed method is slightly lower than that of the point-by-point method but higher than that of the subaperture method and the phase error correction method. The OMP reconstruction quality of the proposed method is essentially the same as that of the point-by-point method and better than that of the subaperture method and the phase error correction method.

To compare the computational complexity of the four methods, the comparison is expressed in terms of the size of the observation matrix, the number of observation positions, the number of subapertures, the complexity of one sparse reconstruction, and the complexity of the search algorithm, which is assumed to be the same in all four methods; the resulting computational complexity is reported in the third row of Table 1. The point-by-point method has the highest computational complexity; through approximation, the subaperture method and the phase error correction method achieve lower complexity than the point-by-point method. The proposed method has the lowest computational complexity, because base-space projection matching replaces the sparse reconstruction in the error estimation, which greatly reduces the amount of computation.

To further compare the speed of the four methods, their calculation times are reported in the fourth row of Table 1. In this experiment, the size of the simulation data is 256 × 256, the grid size of the imaging scene is 101 × 101, the number of samples in each of the three position coordinates to be searched is 51, and the computer processor used for data processing is an Intel Core i7-8700K. In terms of data processing time, the point-by-point method takes about 20 times as long as the proposed method, the subaperture method about 5 times, and the phase error correction method about 8 times. These three methods are slow because they all require an accurate sparse reconstruction in each iterative calculation.

To test the four methods in a noisy environment, Gaussian white noise of different levels was added to the simulation data. The estimation errors of the four methods vary with the noise level, as shown in Figure 9. As the figure shows, the estimation errors of all four methods change little between 0 dB and –10 dB, although the increases for the point-by-point and subaperture methods are more noticeable than that of the proposed method. Between –10 dB and –15 dB, the estimation errors of all four methods start to increase markedly, but the increase for the proposed method is still smaller than those of the point-by-point, subaperture, and phase error correction methods. Figure 10 shows the OMP image reconstruction with –5 dB Gaussian white noise added; at this level, the results of the subaperture, point-by-point, and phase error correction methods are not significantly different. Figure 11 shows the OMP image reconstruction with –10 dB Gaussian white noise added; at this level, the performance of the point-by-point, subaperture, and phase error correction methods degrades considerably, whereas the proposed method shows only a small change, which indicates that the proposed method is highly robust in a noisy environment. The reason is that the point-by-point, subaperture, and phase error correction methods rely on a complete sparse reconstruction, which is sensitive to noise, while the base-space projection used in the proposed method is less sensitive to noise.

4.2. Experiment and Analysis of Measured Data in Microwave Anechoic Chamber

To verify the engineering feasibility of the proposed method, we built a SAR semi-physical simulation system in a microwave anechoic chamber. Figure 12 shows the overall system framework, including the sampling frame, the vector network analyser, the transmitting and receiving antennas, and the target scene. The vector network analyser and the antennas move along a guide rail to synthesise the radar aperture, and the target consists of five metal balls. The experimental parameters are reported in Table 2.

Owing to the ideal test conditions of the anechoic chamber, no position error occurs during the antenna movement. To test the proposed method, the known ideal parameters are not used in the data processing; instead, a certain parameter range is set for the estimation. The search range of the x coordinate is set to 1.95–2.05 m, that of the y coordinate to −0.5 to 0.5 m, and that of the z coordinate to −0.1 to 0.1 m. Unlike the computer-simulated echo data, the azimuthal sampling of the anechoic chamber data contains multiple synthetic apertures. Therefore, aperture division is adopted in the processing, and a reference location is selected and estimated for each aperture. Figure 13 shows the position estimation results for the three coordinates. Although small local fluctuations are observed, the estimated results are basically consistent with the actual positions. Figure 14(a) shows the OMP reconstruction result with the ideal parameters, and Figure 14(b) shows the OMP reconstruction result with the estimated parameters.

If part of the echo data is missing and the locations of the missing data are unknown, the proposed method can estimate those unknown locations. The real missing locations and the missing locations estimated by the proposed method are shown in Figure 15; the proposed method estimates them accurately. To construct the base-space matrix for data with unknown missing locations, it is first assumed that the data are sampled at equal intervals between −0.5 and 0.5 m; the corresponding OMP reconstruction results are shown in Figure 16(a), in which the scattering points of the reconstructed image are defocused. To estimate the exact locations of the missing data, the azimuth positions (y coordinates) of the echo signal are estimated and then compared with the ideal azimuth position parameters. The experimental results show that the locations of the missing data estimated by the proposed method are the same as those in the experimental setting. The OMP reconstruction results are presented in Figure 16(b), which shows that the positions of the five metal balls are all accurately reconstructed.

5. Conclusion

In this study, a reference position and relative error strategy is used to estimate the observation positions. The base-space projection of each observation position is calculated by constructing a single-azimuth observation matrix with error parameters, and the matching degree between the reference position and the position being estimated is used as the criterion for calculating the relative error. The parameters of each observation position are then obtained by adding the relative error to the reference position. While maintaining high estimation accuracy, the proposed method greatly improves computational efficiency compared with existing methods and has better antinoise performance. The validity and engineering feasibility of the proposed method are verified with data from computer simulations and microwave anechoic chamber experiments.

Data Availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the NSFC under Contract 61472324.