This paper presents a deep learning model that simultaneously estimates target and wall parameters in through-the-wall radar (TWR). TWR faces many challenges owing to the complexity of the environments in which it operates: radar signals propagating through walls are delayed and attenuated more than in free space, so targets are harder to detect and their images are distorted and defocused. To address these challenges, two modes are considered in this work: a single target and two targets. In both cases, the wall permittivity and thickness are estimated along with each target's two-dimensional center and permittivity; the model therefore estimates five values in the single-target case and eight values in the two-target case. Because a deep neural network is used to solve the target-locating problem, including more parameters in the problem (such as the wall permittivity and thickness) gives the model a better chance of learning the underlying relationships and increases its accuracy; indeed, locating accuracy improved when the two wall parameters were included. The proposed deep neural network estimates wall permittivity and thickness, together with the two-dimensional coordinates and permittivity of the targets, with 99% accuracy in both single-target and two-target modes.

1. Introduction

Recently, through-the-wall radar imaging (TWRI) has become one of the most attractive fields of research, with applications in locating, identifying, and classifying different targets [1–3]. Through-the-wall radar (TWR) faces many challenges due to the complex nature of the environments in which it operates. Compared with propagation in free space, radar returns passing through walls are more delayed and attenuated, which degrades target detection and leaves target images distorted and defocused [4, 5]. Different methods and techniques have been employed to overcome these challenges [6–9], and machine learning algorithms can also address them [10, 11].

Recent advances in machine learning, particularly deep learning, and their penetration into other sciences have solved novel problems in various fields. The impact of machine learning across disciplines demonstrates its flexibility and its ability to improve on previous results and solve new problems [12–19]. The capacity of machine learning, and deep learning in particular, to discover hidden signal patterns makes it an excellent tool for analyzing radar signals. Among radar applications of machine learning, we focus on TWR and investigate how it can solve problems in this field that conventional methods cannot [20, 21].

The estimation of wall parameters and the locating of targets are two important applications of TWR that are gaining much attention, and each faces its own challenges. Walls and other objects introduce ambiguity and interference into the received signal used for target locating [22]. Furthermore, estimating the wall parameters in order to locate the targets, which helps provide a more complete picture of the target, is not a straightforward process [23, 24].

Generally, there are two types of methods for estimating wall and target parameters: conventional methods and machine learning-based methods. Time-delay estimation [25], filter-based methods [26], M-sequence sensors, and continuous basis estimators [27] are conventional methods for estimating wall parameters. The time-delay approach estimates the parameters by analyzing the time delay between different antenna positions, but it requires at least two experiments, which is time-consuming. In the filter-based method, once the wall effects are removed, filters are constructed in both the echo domain and the image domain, and an estimation procedure is used to obtain the best focusing parameters; this method can only estimate the wall's thickness and permittivity. The M-sequence sensor approach uses a metallic wall as the target and computes the echo time delay by compressive sensing. Selecting a metallic wall as the target is impractical in realistic scenarios, and the associated procedures for estimating wall thickness and permittivity are complex.

Machine learning-based methods can be classified into two categories: those that utilize conventional machine learning algorithms such as SVM, and those based on deep learning.

For the estimation of wall parameters, Zhang et al. [28, 29] attempted an SVM-based method. In the scenarios considered, it works only when the target is unchanged: because the location of the object is fixed, the generalization of the model is low, and it can only estimate the wall parameters when the target is fixed at a specific location. Wood et al. [20] used machine learning methods to reconstruct target material properties.

For target locating, Zhang et al. [30] developed an SVM-based method for two-dimensional locating behind a homogeneous wall with a circular metal cylinder as the object. In [22], Zhang et al. presented a 3D positioning method for a spherical metallic object behind a homogeneous wall using an extreme learning machine. That method applies to a single metallic target and has not been evaluated for targets whose permittivity is close to that of the human body, or for multitarget applications. Wood et al. [20] investigated a machine learning (ML) approach for predicting target locations, performing two-dimensional positioning of a circular object behind a homogeneous nonmagnetic wall using the K-Nearest Neighbors (KNN) algorithm.

In TWR, conventional methods locate targets and estimate wall parameters independently. In some of these methods, the wall effect must be removed before locating targets, because ambiguities in the wall parameters distort the image and shift the apparent target location. Moreover, conventional methods concentrate exclusively on target location and cannot estimate target characteristics. In previous studies, estimating the target parameters was time-consuming and complicated, worked only when the target position behind the wall was fixed, and handled only a single target. Such methods are therefore limited to single-target mode when estimating target location and properties.

In this work, we present a model based on a deep learning approach for simultaneously estimating the wall permittivity and thickness, as well as the two-dimensional location and permittivity of the targets. In [21], we proposed two-dimensional positioning with three deep learning models for the case in which the wall is modeled as a complex electromagnetic medium. Here, the wall is assumed to be a perfect nonmagnetic dielectric, and we estimate the wall parameters (permittivity and thickness) and the target parameters (location and permittivity) simultaneously using a deep neural network.

2. Methodologies

2.1. Deep Learning

Deep learning is a subset of machine learning and artificial intelligence that loosely mimics the process by which the human mind acquires knowledge. It is central to data science, which also encompasses statistics and predictive modeling, and is highly beneficial for data scientists who collect, analyze, and interpret large amounts of data, as it speeds up and simplifies the process. Deep learning means learning through neural networks with many hidden layers; the deeper these layers go, the more complex and expressive the models become. Deep learning is distinguished by its approach to solving problems. Conventional machine learning algorithms such as SVM begin with manual feature extraction, after which a model is built on those features. Deep learning, by contrast, detects and extracts the relevant features automatically and employs end-to-end learning: raw data is fed into the neural network along with a task, such as classification, and the network learns how to perform it. Deep learning models are trained on large sets of labeled data using neural network architectures that learn features directly from the data, with no manual feature extraction. A neural network is composed of multiple layers of neurons, typically organized into an input layer, hidden layers, and an output layer. As the number of hidden layers and of neurons per layer increases, the model becomes more complex; a network with many hidden layers is a deep neural network, and training such networks is deep learning.

Figure 1 shows an overview of an artificial neuron. The inputs (input neurons) are x1, x2, …, xn. Each input xi has an associated weight wi, so each input is weighted independently. The sum function (sigma) adds the products of the inputs X and weights W, and the activation function computes the output from this sum. If the activation function is denoted by f and b is the bias value, the neuron output y can be described as follows:

y = f(w1 x1 + w2 x2 + … + wn xn + b).
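The neuron computation above can be sketched in NumPy (the input, weight, and bias values below are illustrative, not taken from the paper):

```python
import numpy as np

def neuron_output(x, w, b, activation):
    """Single artificial neuron: weighted sum of inputs plus bias,
    passed through an activation function."""
    return activation(np.dot(w, x) + b)

# ReLU activation, the same function used later for the hidden layers.
relu = lambda z: np.maximum(0.0, z)

x = np.array([1.0, 2.0, 3.0])   # inputs x1..x3 (illustrative)
w = np.array([0.5, -0.2, 0.1])  # weights w1..w3 (illustrative)
b = 0.4                         # bias

y = neuron_output(x, w, b, relu)  # → 0.8
```
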

2.2. Transfer Learning

Transfer learning is the application of knowledge from a pretrained model to a different but related problem. It allows deep neural networks to be trained with less data, which is valuable both because it requires fewer hardware resources and because, in many practical problems, little training data is available. Transfer learning lets us extend knowledge acquired in one area to other problems: instead of training from scratch, we reuse the patterns learned on a related problem to solve the new one more efficiently. Typically, the first and middle (hidden) layers of a pretrained neural network are kept, the output layer is replaced with a new layer, and the whole network is then retrained on the new problem's data. The main advantages of transfer learning are, first, reduced training time and hardware requirements, and second, easier generalization of the model to related problems.

2.3. FDTD

The finite-difference time-domain (FDTD) method is a technique for solving Maxwell's equations. Ampere's Law and Faraday's Law can be written as follows:

∇ × H = ∂D/∂t + J,

∇ × E = −∂B/∂t.

Using the constitutive relations D = εE and B = μH in a source-free region, Ampere's Law and Faraday's Law can be rewritten as [31]

∂H/∂t = −(1/μ) ∇ × E, (3)

∂E/∂t = (1/ε) ∇ × H. (4)

Under TMz polarization, the scalar equations for Hx, Hy, and Ez are obtained from (3) and (4):

∂Hx/∂t = −(1/μ) ∂Ez/∂y, (5)

∂Hy/∂t = (1/μ) ∂Ez/∂x, (6)

∂Ez/∂t = (1/ε) (∂Hy/∂x − ∂Hx/∂y). (7)

Equations (5)–(7) can be written in finite-difference form, and future fields can then be expressed in terms of past fields thanks to the space-time discretization. The indexes m and n denote the spatial steps in the x and y directions, respectively, and the index q corresponds to the temporal step. The spatial step sizes are Δx and Δy in the x and y directions, respectively. The finite-difference approximation of (5) is expanded about the space-time point (m, n + 1/2, q). The resulting equation is

[Hx^(q+1/2)(m, n+1/2) − Hx^(q−1/2)(m, n+1/2)] / Δt = −(1/μ) [Ez^q(m, n+1) − Ez^q(m, n)] / Δy.

This equation can be rewritten to give the future value in terms of past values:

Hx^(q+1/2)(m, n+1/2) = Hx^(q−1/2)(m, n+1/2) − (Δt/(μΔy)) [Ez^q(m, n+1) − Ez^q(m, n)].

Similarly, the update equations for (6) and (7), expanded about the space-time points (m + 1/2, n, q) and (m, n, q + 1/2), respectively, are

Hy^(q+1/2)(m+1/2, n) = Hy^(q−1/2)(m+1/2, n) + (Δt/(μΔx)) [Ez^q(m+1, n) − Ez^q(m, n)],

Ez^(q+1)(m, n) = Ez^q(m, n) + (Δt/ε) { [Hy^(q+1/2)(m+1/2, n) − Hy^(q+1/2)(m−1/2, n)]/Δx − [Hx^(q+1/2)(m, n+1/2) − Hx^(q+1/2)(m, n−1/2)]/Δy }.
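The update equations above can be sketched as a minimal 2D TMz FDTD loop in NumPy. This is a vacuum simulation in normalized units with a hard sinusoidal point source; the grid size, step count, and source placement are illustrative and are not the paper's simulation parameters:

```python
import numpy as np

nx, ny, steps = 100, 100, 120
imp0 = 377.0                  # free-space impedance (normalized units)
Sc = 1.0 / np.sqrt(2.0)       # Courant number at the 2D stability limit

ez = np.zeros((nx, ny))       # Ez at integer grid points (m, n)
hx = np.zeros((nx, ny - 1))   # Hx staggered at (m, n + 1/2)
hy = np.zeros((nx - 1, ny))   # Hy staggered at (m + 1/2, n)

for q in range(steps):
    # Magnetic-field updates: future H from past H and the differences of Ez
    hx -= (Sc / imp0) * (ez[:, 1:] - ez[:, :-1])   # -dEz/dy
    hy += (Sc / imp0) * (ez[1:, :] - ez[:-1, :])   # +dEz/dx
    # Electric-field update on interior nodes: curl of H
    ez[1:-1, 1:-1] += (Sc * imp0) * (
        (hy[1:, 1:-1] - hy[:-1, 1:-1]) - (hx[1:-1, 1:] - hx[1:-1, :-1])
    )
    # Hard sinusoidal source at the grid center
    ez[nx // 2, ny // 2] = np.sin(2 * np.pi * q / 20.0)
```

The staggered array shapes encode the half-step offsets of the Yee grid, so each slice difference lands exactly on the space-time points used in the derivation above.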

2.4. Data Gathering

The FDTD library (https://github.com/flaport/fdtd) in Python is used to simulate the two-dimensional TWR problem. A 30 cm square is considered as the target. A 3 GHz plane wave is generated with the FDTD library using a line source. TMz polarization is assumed, which implies that Ez, Hx, and Hy are nonzero. The wave emitted by the source hits the wall; part of it is reflected, while the remainder passes through the wall, hits the target, and scatters. Finally, the scattered wave is received by the detector, and the Ez field values are retrieved and used to create the required dataset. A homogeneous nonmagnetic wall whose permittivity varies from 3 to 9 is used for data generation. To create the single-target dataset, the target is moved within the two-dimensional region specified in Figure 2 while the target permittivity is varied from 5 to 85, the wall permittivity from 3 to 9, and the wall thickness from 10 to 20 cm. As a result, 16,200 samples were generated in this mode. The two-target mode is similar, except that there are two objects instead of one, which introduces three additional parameters: the second target's two-dimensional location and permittivity. In this case, 58,320 samples were generated. Table 1 summarizes the parameters used for generating the dataset and their ranges. The dataset was divided into training, validation, and test subsets.

3. Numerical and Experimental Results

In this work, we use a deep neural network (DNN) to estimate the object and wall parameters concurrently. The DNN is implemented in Python with the TensorFlow and Keras frameworks (https://keras.io). We present a model in which the network input and backbone, the hidden layers of the neural network, are the same for the single-target and two-target modes; only the network output differs. In the single-target mode, we estimate five parameters, the target's two-dimensional location, the target permittivity, the wall permittivity, and the wall thickness, so the last layer has five neurons. In the two-target case, we estimate three additional parameters, the 2D location and permittivity of the second target, so the last layer has eight neurons. We use the ReLU activation function for the first and hidden layers and the linear activation function for the final layer. Equations (11) and (12) give the ReLU and linear activation functions:

ReLU(x) = max(0, x), (11)

f(x) = x. (12)
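A minimal Keras sketch of this shared-backbone architecture follows. The layer widths, dropout rate, and input length are assumptions made for illustration; only the Dense-plus-Dropout backbone, the ReLU/linear activations, and the 5- versus 8-neuron output heads follow the description above:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_inputs=1024, n_outputs=5):
    """n_outputs=5 for single-target mode, 8 for two-target mode.
    n_inputs is an assumed length for the received-signal vector."""
    model = keras.Sequential([
        layers.Input(shape=(n_inputs,)),       # received-signal samples
        layers.Dense(256, activation="relu"),  # backbone (widths assumed)
        layers.Dropout(0.2),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(n_outputs, activation="linear"),  # regression head
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="mean_squared_logarithmic_error")
    return model

single_target = build_model(n_outputs=5)
two_target = build_model(n_outputs=8)
```

Because only `n_outputs` changes between modes, the same backbone definition serves both the single-target and two-target networks.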

We trained the network with a batch size of 20, a learning rate of 0.001, the Adam optimizer, and the Mean Squared Logarithmic Error (MSLE) loss function, defined as

MSLE = (1/N) Σ_{i=1}^{N} (log(1 + y_i) − log(1 + ŷ_i))²,

where y_i is the actual value, ŷ_i is the estimated value, and N is the total number of samples. MSLE is the mean of the squared differences between the actual and estimated values after a log transformation. In the network's backbone, Dense and Dropout layers are combined sequentially to achieve higher accuracy and lower loss. The network is trained for 200 epochs; the loss curves for the training and validation datasets in both single-target and two-target modes are shown in Figure 3. Table 2 reports the accuracy and loss obtained on the validation and test datasets.
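The MSLE loss above can be written directly in NumPy:

```python
import numpy as np

def msle(y_true, y_pred):
    """Mean Squared Logarithmic Error: mean of squared differences
    between log(1 + y) and log(1 + y_hat)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

print(msle([1.0, 2.0], [1.0, 2.0]))  # → 0.0
```

The log transformation makes the loss penalize relative rather than absolute errors, which suits targets whose permittivities span roughly an order of magnitude (5 to 85).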

By including some target and wall specifications in the locating problem, we improved the locating accuracy while also accurately estimating the wall and target material parameters. Every problem involves many parameters; when solving it with machine learning, feeding the model all of the parameters involved lets it learn the relationship between inputs and outputs better, thereby increasing the accuracy of the solution. For two-dimensional positioning, we observed that including the additional parameters encoded in the received signal is sufficient to achieve high positioning accuracy. By including the target permittivity, the wall permittivity, and the wall thickness in the problem, the proposed model not only improved target-locating accuracy but also integrated critical parameters that were previously estimated separately with different algorithms; here, the same deep learning model estimates these parameters while locating the targets. The training time, inference time, model size, and number of deep neural network parameters are given in Table 3. The results were obtained on Google Colab with a fixed GPU, a Tesla K80 with 11 GB of RAM.

As described in this work, we use the same backbone for one or two targets; as Figure 4 shows, the network input and backbone are identical in both cases. This property lets us generalize the proposed algorithm to other modes using transfer learning: as discussed in the Transfer Learning section, the first and hidden layers trained in one scenario can be reused, and the model can be retrained in another scenario with less data. Concretely, we first train the model with all available single-target data. The last layer (five neurons, estimating the single-target parameters in Figure 4) is then removed and replaced with an eight-neuron layer that estimates the two-target parameters in Figure 4. We then take 14,000 of the 58,320 two-target samples (about a quarter of the total, chosen through trial and error) and, after replacing the last layer of the trained single-target model, train the model on these 14,000 samples for the two-target mode.
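The transfer-learning step described above can be sketched in Keras: keep the trained single-target backbone, replace the 5-neuron head with an 8-neuron head, and retrain on the smaller two-target dataset. Layer widths and the input length are assumptions for illustration:

```python
from tensorflow import keras
from tensorflow.keras import layers

def single_target_model(n_inputs=1024):
    """Single-target network: shared backbone plus a 5-neuron head."""
    return keras.Sequential([
        layers.Input(shape=(n_inputs,)),
        layers.Dense(256, activation="relu"),  # backbone (widths assumed)
        layers.Dropout(0.2),
        layers.Dense(128, activation="relu"),
        layers.Dense(5, activation="linear"),  # single-target head
    ])

pretrained = single_target_model()
# (pretrained.fit(...) on the full single-target dataset would run here.)

# Reuse every layer except the old head, then attach a new 8-neuron head.
two_target = keras.Sequential(
    pretrained.layers[:-1] + [layers.Dense(8, activation="linear")]
)
two_target.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                   loss="mean_squared_logarithmic_error")
# two_target.fit(...) on the 14,000-sample two-target subset would run here.
```

Because the backbone layers are reused as objects, their single-target weights carry over, and only the new head starts from random initialization.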

The training results for this case are given in Figure 5 and Table 3. With this technique, we generalized the model trained for the single-target mode to the two-target mode with less data. The model hyperparameters, such as the learning rate and the loss function, were kept the same as in the previous mode, and training was done for 100 epochs.

Noise can also be added to the signal to bring it closer to reality. To measure the performance of the model in the presence of noise, we added Additive White Gaussian Noise (AWGN) at different SNRs. Figure 6 shows the accuracy and loss on the test dataset for the single-target and two-target modes. Table 4 compares the results of this study with previous work.
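A sketch of the AWGN step: scale zero-mean Gaussian noise so the corrupted signal has the requested SNR relative to the clean signal's power (the test signal below is illustrative):

```python
import numpy as np

def add_awgn(signal, snr_db, seed=None):
    """Add white Gaussian noise so the result has the requested SNR (dB)
    relative to the signal's average power."""
    rng = np.random.default_rng(seed)
    signal = np.asarray(signal, dtype=float)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

t = np.linspace(0.0, 1.0, 10_000)
clean = np.sin(2 * np.pi * 50 * t)      # illustrative test signal
noisy = add_awgn(clean, snr_db=10.0, seed=0)
```

Sweeping `snr_db` over a range of values and re-evaluating the trained model on the noisy test set reproduces the kind of robustness study summarized in Figure 6.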

4. Conclusions and Discussion

Estimating wall and object parameters has many benefits. In TWRI, the ambiguity in the wall's characteristics makes it challenging to obtain a clear image through it. Because the electromagnetic properties of different objects, such as their permittivities, can be close, estimating objects and targets behind the wall is always challenging. Most furniture in a room has a permittivity of 5 to 15, whereas the human body's permittivity ranges from 80 to 90, yet it is usually difficult for radars to distinguish between such targets; occasionally they are separated by radar cross-section (RCS) [32]. The presented model addresses these challenges: it separates such targets by their permittivity while simultaneously estimating the wall parameters. Moreover, the presented deep learning model is very small and can perform estimation in real time at high speed, which makes it a good choice for TWRI applications.

This paper presented a model for simultaneously estimating target and wall parameters using a deep neural network. The target parameters include each target's two-dimensional location and permittivity, and the wall parameters include its thickness and permittivity. Two modes were considered, single-target and two-target, requiring two parameters for the wall specification plus three for each target. The dataset was generated by varying the parameters involved in the problem, and a deep learning model was presented that estimates the parameters for different numbers of targets by changing only the model's final layer. By incorporating the parameters embedded in the received signal, the locating accuracy was improved to 99% while the wall thickness and the permittivities of the target and the wall were estimated simultaneously.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


An earlier version of this paper was posted as a preprint at the following link: https://arxiv.org/abs/2111.04568.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

F.G. and H.S. conceived the idea. F.G. set up the DNN model and conducted the simulations for the dataset. F.G. wrote the manuscript based on input from all authors. H.S. supervised the project and reviewed the manuscript.