Research Article  Open Access
Paul Fritsche, Bernardo Wagner, "Evaluation of a Novel Radar Based Scanning Method", Journal of Sensors, vol. 2016, Article ID 6952075, 10 pages, 2016. https://doi.org/10.1155/2016/6952075
Evaluation of a Novel Radar Based Scanning Method
Abstract
The following paper introduces a novel scanning method for mapping and localization purposes in mobile robotics. Our method is based on a rotating monostatic radar network, which determines the positions of objects around the scanner via a continuously running lateration algorithm. The estimation of surfaces with ultra-wideband radar networks has been studied experimentally in lab environments, especially with lateration, envelopes-of-spheres, and SEABED algorithms. But we do not see a link to the field of mapping and localization of mobile robots, where laser scanners dominate. Indeed, only a few research groups use radar for mapping and localization, and their applied sensor principle is based on a rotating focused radar beam. Consequently, only 2D radar scanners are known in the robotics world, and methods for 3D scanning with radar need to be investigated. This paper derives the theoretical background of the sensor principle, which is based on a radar network on a rotating joint, and discusses its error influences. We performed first scans of standard geometries and derived a model in order to compare theoretical and experimental measurement results. Furthermore, we present first mapping approaches and a simulation of a scanner with multiple sensors.
1. Introduction
Obtaining all-around range information of an environment is essential in many areas of mobile robotics. Popular sensors like laser scanners, sonar sensors, and stereo cameras have established themselves as state-of-the-art for most tasks in mobile robotics. Nevertheless, radar sensors frequently appear in field robotics but are seldom used to perform tasks like mapping and localization. Radar can penetrate certain materials, essentially non-conductors, which provides advantages in dusty, foggy, rainy, or other harsh environments. But limited resolution, noisy data, and the influence of optical effects like refraction, reflection, and absorption make the application in mobile robotics challenging.
The use of radar sensors in mobile robotics is challenging but not impossible. The first appearance of radar sensors in the robotics community traces back to the Australian Centre for Field Robotics in the early nineties, where fundamental work on probabilistic SLAM algorithms in combination with radar was developed (Clark and Whyte [1]). Because of their limited resolution and other aforementioned drawbacks, radar sensors are not well suited to indoor environments. Nevertheless, Detlefsen et al. [2] investigated the use of radar sensors in an industrial environment and Marck et al. [3] in an office. As far as we can see, all radar sensor principles in mobile robotics are based on mechanical beamforming: usually, the radar beam is focused via a parabolic antenna and panned mechanically over the environment. Electrical beamforming through phased-array antennas is rarely seen in mobile robotics, but rather in automotive systems of the car industry.
Besides beamforming techniques, position estimation can be achieved through lateration, which is a common technique in radar networks for aircraft surveillance. Lateration is a measurement method where the position of a point target is calculated from distance information of sensors with known locations. The term trilateration refers to the measurement of three distances to define the position of an object (in contrast to triangulation, where three angles are used to calculate an object’s position). There exist two types of radar networks. In a monostatic radar network, the transmitter and receiver of the radar signal are collocated at the same location and can only receive signals that have been emitted by themselves. Multistatic radar networks consist of transmitters and receivers at different locations and can receive other sensors’ signals after they have been reflected from an object.
In this paper, we introduce a scanning method which is based on a rotating monostatic radar network. We use frequency modulated continuous wave (FMCW) radar sensors, which provide distance but no angle information of objects inside the observation area. The sensors work in the 24 GHz ISM band and are accordingly limited in Germany to a bandwidth B of 250 MHz, which corresponds to a theoretical distance resolution of 0.6 m (see (1), where c is the speed of light). But the real distance resolution of most radar sensors is larger by a factor of two or three. The availability of sensors with a high resolution depends on national and international bandwidth regulations. An ultra-wideband (UWB) channel between 22 GHz and 26.65 GHz was closed in 2013, but it has recently been moved to 79 GHz for automotive purposes ([4, p. 20]):

ΔR = c / (2 B). (1)
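The relation in (1) can be checked numerically; the following is a minimal sketch (the function name is our own choice):

```python
# Theoretical range resolution of an FMCW radar, delta_R = c / (2 * B).
C = 299_792_458.0  # speed of light in m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Return the theoretical range resolution in metres for a given bandwidth."""
    return C / (2.0 * bandwidth_hz)

# 250 MHz (the German 24 GHz ISM band limit) yields roughly 0.6 m:
print(round(range_resolution(250e6), 2))  # 0.6
```

Doubling the bandwidth halves the resolution cell, which is why the regulatory limit directly caps what the sensor can distinguish.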
The resolution of a radar sensor is equal to its minimum detection range. A radar’s resolution is its capability to distinguish objects: if the difference between the radial distances of two or more objects to the sensor is less than its resolution, the sensor merges their distance information into one. Additionally, the detection of objects depends on their radar cross-section (RCS) and the background noise of the environment.
This paper is organized as follows. In Section 2.1, we present a short overview of how position estimation via lateration in radar networks is commonly solved. In Section 2.2, we describe a problem that arises from data association and how to apply the bottom-up data association method of Fölster and Rohling [5] to resolve it. Section 2.3 describes the influence of errors in a radar network. Our first experiment is described in Section 3, and its results are presented and discussed in Section 4.
2. Materials and Methods
Estimating the position of an object with a radar network can be solved by standard lateration methods. For example, in order to define an object’s position in two-dimensional space, at least two sensors are necessary. Two radii narrow the object’s position down to two possible locations; usually, only one location is plausible due to the antenna’s direction. Geometrically, ghost objects can appear in lateration networks, which represent a wrong data association of the sensors’ object lists. A precise derivation of how ghost objects appear is given by Rabe et al. [6].
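For the two-sensor case, the intersection of the two range circles can be computed in closed form. The following is our own illustrative sketch, not the authors' code:

```python
import math

def laterate_2d(s1, s2, d1, d2):
    """Intersect two range circles around sensors s1 and s2 (2-D lateration).
    Returns the two candidate positions; the antenna direction is what
    disambiguates them in practice."""
    (x1, y1), (x2, y2) = s1, s2
    dx, dy = x2 - x1, y2 - y1
    base = math.hypot(dx, dy)            # distance between the sensors
    if base > d1 + d2 or base < abs(d1 - d2) or base == 0.0:
        return []                        # circles do not intersect
    a = (d1**2 - d2**2 + base**2) / (2 * base)
    h = math.sqrt(max(d1**2 - a**2, 0.0))
    mx, my = x1 + a * dx / base, y1 + a * dy / base
    ux, uy = -dy / base, dx / base       # unit normal to the sensor baseline
    return [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]
```

With sensors at (0, 0) and (1, 0) and equal ranges to a target at (0.5, 1), the two candidates are mirror images across the baseline, matching the ambiguity described above.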
Besides lateration techniques, the envelopes-of-spheres and SEABED methods exist. These methods have been studied by Sakamoto and Kidera et al. [7, 8], who use them as surface estimation techniques for three-dimensional imaging with a high-resolution UWB pulse radar system. Like Sakamoto and Kidera, we assume boundary scattering on metallic surfaces to describe the model of our scanner.
2.1. Principle and Derivation
For terminology, we define the number of sensors in a radar network as n and the sensor index as i = 1, …, n. Accordingly, S_i are the sensors with the coordinates x_i and y_i. Every sensor outputs an object list O_i, which contains distance information d_i from the sensor to an object.
For a radar network in two-dimensional space and two sensors (see Figure 1), we obtain two equations:

d_1^2 = (x - x_1)^2 + (y - y_1)^2,
d_2^2 = (x - x_2)^2 + (y - y_2)^2. (2)
In case the number of sensors is higher than the dimension of the space plus one (e.g., three sensors in two-dimensional space), the system of equations becomes overdetermined. Due to errors in each sensor measurement, the overdetermined system of equations has no exact solution; hence a regression has to be found. The common way to solve this problem is the minimum mean square method. A detailed derivation and example of the method can be seen in the dissertation of Schneider [9, p. 11–14]. A general representation of the system of equations for a sensor network in two-dimensional space is given in (3); an expansion to three dimensions is self-explanatory. In order to estimate the object’s location, the system needs to be solved for the object position (x, y):

d_i^2 = (x - x_i)^2 + (y - y_i)^2, i = 1, …, n. (3)
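The minimum-mean-square solution can be sketched by linearizing the range equations against a reference sensor and solving the resulting overdetermined linear system. This particular linearization is a common textbook route and not necessarily the exact derivation in [9]:

```python
import numpy as np

def laterate_lsq(sensors, dists):
    """Least-squares position estimate from n >= 3 sensors in 2-D.
    Subtracting the first sensor's range equation from the others cancels
    the quadratic terms, leaving a linear system A p = b that numpy's
    least-squares solver handles even when it is overdetermined."""
    s = np.asarray(sensors, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (s[1:] - s[0])                          # (n-1) x 2 design matrix
    b = (d[0]**2 - d[1:]**2
         + np.sum(s[1:]**2, axis=1) - np.sum(s[0]**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p                                          # estimated (x, y)
```

With noisy ranges, the residual of this fit is exactly the regression error the overdetermined system calls for; with exact ranges it recovers the true position.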
2.2. Ghost Objects: A Data Association Problem
If more than one object is located inside the observation area of the radar network and the distance of the sensors to each other is larger than two times the sensor’s resolution, then so-called ghost objects can appear (see Figure 2). Ghost objects represent a wrong data association of the distance information from the object lists due to geometrical ambiguity. For our experiment, the ghost object issue is not relevant because we chose a distance between the sensors smaller than 1.2 m. But if the experiments were performed with UWB radars, then ghost objects would be an issue.
Fölster and Rohling [5] present a method called bottom-up data association. In two-dimensional space, at least three sensors are necessary for this method. In order to distinguish ghost objects from real objects, the observation area in front of the sensor network is discretized into cells, which represent a finite set of possible object positions. Then, a simple minimum distance calculation is done. Each cell contains an error value e_c, which is the squared difference between the distance of the cell to sensor S_i and the measured distance between object and sensor, minimized over the sensor’s object list O_i and summed over all sensors (see (4)). This calculation results in the lowest error values in the cells that are closest to the real objects:

e_c = Σ_{i=1}^{n} min_{d ∈ O_i} ( ‖p_c - S_i‖ - d )^2, (4)

where p_c is the position of cell c.
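The per-cell error can be sketched as follows; the grid layout and function name are our choices, and the cell error follows the description of (4):

```python
import numpy as np

def ghost_filter_grid(sensors, object_lists, xs, ys):
    """Bottom-up data association sketch after Fölster and Rohling [5].
    For every grid cell, take the smallest absolute difference between the
    cell-to-sensor distance and the ranges in that sensor's object list,
    square it, and sum over all sensors. Cells near real objects collect
    the lowest error values; ghost positions do not."""
    gx, gy = np.meshgrid(xs, ys)                  # candidate object positions
    err = np.zeros_like(gx)
    for (sx, sy), ranges in zip(sensors, object_lists):
        cell_dist = np.hypot(gx - sx, gy - sy)
        diff = np.min(np.abs(cell_dist[..., None] - np.asarray(ranges)),
                      axis=-1)                    # nearest object-list entry
        err += diff**2
    return err
```

For two sensors at (0, 0) and (4, 0) and a single object at (2, 2), the cell at (2, 2) receives an error of exactly zero, while every other cell accumulates a positive residual.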
The grid can be represented in Cartesian or polar coordinates. The grid size should be chosen under consideration of the radar network’s range resolution and azimuth-angle accuracy [5].
2.3. Considerations on Erroneous Influences on Position Estimation in Monostatic Radar Networks
Understanding and analyzing the errors that influence our sensor principle is essential to evaluate and discuss the feasibility and the experimental results. Estimating the position of objects with lateration requires sensors with very high range accuracy. Nevertheless, every sensor has a measurement error. In case of lateration, the maximum position measurement error e_p can be approximated from the maximal range measurement error e_r of all sensors and the angle α (see (5)). Figure 3 clarifies the relation between the range measurement error e_r, the angle α, and the position measurement error e_p graphically. The figure displays the assumed case that e_r is constant. A closer look at e_r follows below and shows that it cannot be assumed constant; indeed, it is impossible to predict. From (5) and Figure 3, it can be seen that the accuracy degrades strongly at the sides of the sensor network, where α approaches zero. An enhancement of the accuracy can be achieved by a larger distance between the sensors:

e_p ≈ e_r / sin(α). (5)
The range measurement accuracy is defined by the root-mean-square (rms) measurement error σ_R [10, p. 167]. According to (6), the range measurement error is formed from the root-sum-square of three error components. The dominating component is the S/N-dependent random range measurement error σ_RN; its standard deviation is given by (7). The fixed random range error σ_RF is the error that remains when the S/N ratio becomes very high; it represents the error caused by the sensor and sensor electronics architecture in case of a perfect S/N ratio. Range bias errors σ_RB are constant over measurements. In case of our sensor principle, the bias error neither affects the function nor needs to be considered in lateration algorithms, because it only results in a scaling that can be calibrated. In general, the accuracy of the sensor’s range measurement can be enhanced by increasing the number of measurements n. Whether measurements recorded during a scan of one object can be averaged will be discussed in Section 4. Below, it is explained that the standard deviation is not constant and depends on unpredictable influences. Mathematically, if different groups of random variables with different standard deviations have the same expected value, then together they still have the same expected value and a weighted standard deviation. This rule makes it legitimate to average over measurement cycles if the radar network and the object are static. In case of a dynamic radar network, we need to consider the non-point-target case (NPTC) in order to clarify the characteristics of the expected value in dependency on the angle of the radar network. The NPTC is based on the fact that a radar does not measure the distance to the same point of an object if the sensor is placed at different locations.
Besides obtaining measurements from different measurement cycles, averaging can also be achieved by increasing the number of sensors, since it was shown in Section 2.1 that overdetermined systems of equations are solved by a regression. A normal distribution of the range measurement values of our sensors has been investigated and confirmed, but will not be explained in further detail in this paper:

σ_R = sqrt(σ_RN^2 + σ_RF^2 + σ_RB^2), (6)

σ_RN = ΔR / (k_R · sqrt(2 · (S/N) · n)). (7)
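For orientation, the root-sum-square combination of the three components and a frequently used textbook form of the S/N-dependent term can be evaluated numerically. The exact form of the S/N term and the constant k are assumptions on our part, following the modeling style of Curry [10]:

```python
import math

def rms_range_error(sigma_sn, sigma_fixed, sigma_bias):
    """Root-sum-square combination of the three range error components, cf. (6)."""
    return math.sqrt(sigma_sn**2 + sigma_fixed**2 + sigma_bias**2)

def sn_dependent_sigma(delta_r, snr, n_meas, k=1.0):
    """S/N-dependent random range error; one common textbook form is
    sigma_RN = delta_R / (k * sqrt(2 * (S/N) * n)) -- assumed here."""
    return delta_r / (k * math.sqrt(2.0 * snr * n_meas))

# With 0.6 m resolution, S/N = 100 and one measurement, the random component
# is a few centimetres; averaging n measurements shrinks it by sqrt(n).
sigma_rn = sn_dependent_sigma(0.6, 100.0, 1)
print(round(rms_range_error(sigma_rn, 0.01, 0.0), 4))
```

The sqrt(n) factor is exactly the averaging gain the text appeals to when it argues for accumulating measurement cycles of a static scene.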
The dominating component of (6) is the S/N-dependent random range error σ_RN. Interestingly, the sensor’s bandwidth influences both the sensor’s resolution and its accuracy. So far, estimating the radar’s accuracy seems feasible. But a closer look at the signal-to-noise ratio leads to a complex relation, which is impossible to resolve in our experimental setup. Without going into the details of the derivation of the S/N ratio, the proportionality in (8) is sufficient for our experiment:

S/N ∝ (σ · λ^2) / (R^4 · T). (8)
From the proportionality in (8), we can summarize the following facts. First, the S/N ratio is higher, and accordingly results in a better accuracy, if the RCS (σ) of an object is high. Consequently, our radar principle produces better position estimates for objects with a high RCS. But objects with a high RCS enter the observation area from the sides earlier than objects with a lower RCS; hence an object with a high RCS suffers more from the geometrical issue described in Figure 3(a). Another problem is that the RCS cannot be assumed to be constant for an object. Even for standard geometries, for example corner reflectors, σ differs with the aspect of view. From a historical point of view, research on the RCS and its dependence on the aspect angle was already performed in the year 1946 by Robertson [11]. But there is not only a change of the RCS because of the geometric shape. There is also a fluctuation of the RCS, which can be explained by the Swerling models. Ludloff explains in [12, p. 3–14] how the fluctuation can be modelled. The model is based on the idea that one radar target consists of multiple reflector elements, which are distributed over the volume of the target. The model assumes the reflector elements to be isotropic and of equal RCS and neglects the effects of reflection or shadowing among them. Through the overlapping of radar waves reflected from these multiple isotropic reflector elements, phase differences result in complex interference. This model explains the appearance of high fluctuations of the RCS, even if the aspect angle is changed only slightly. To sum up, an exact estimation of the RCS, even of standard geometries, is not possible in the real world, and consequently it needs to be represented by a probability.
In addition to the other influencing parameters, like the wavelength λ of the radar’s center frequency, the object’s distance R to the sensor, and the thermal noise T (see (8)), this leads us to the conclusion that it is not possible to give an exact estimation of the range measurement error σ_RN.
After having clarified that the position estimation error depends on an unpredictable component, namely, the objects in the sensor’s observation area, we want to call attention to another source of error, which is caused by the sensor principle itself. We can see from the approximation in (5) that it is possible to lower the position estimation error by increasing the distance between the sensors. But, with a larger sensor distance, the influence of the NPTC rises.
The NPTC occurs due to the fact that in most environments we cannot assume only point objects, whose dimensions are much smaller than the radar’s resolution. Normally, objects exceed the resolution cells of the sensors; in other words, we need to be able to handle surfaces with our sensor principle. That boundary surfaces can be estimated with UWB radars in combination with lateration algorithms was shown by Mirbach and Menzel [13]. But, as can be seen in Figure 4, the estimation of the object position is wrong if the sensors measure the distance to different points of the object.
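The effect in Figure 4 can be reproduced numerically: if each monostatic sensor ranges to its own perpendicular foot point on a wall rather than to a common point, two-circle lateration returns a biased position. The geometry below (wall at y = 2, sensor baseline 1 m) is assumed for illustration:

```python
import math

# NPTC sketch: a wall lies along y = 2. Each monostatic sensor reports the
# shortest (perpendicular) distance to the wall, i.e. to its own foot point,
# not the range to one common scattering point.
sensors = [(-0.5, 0.0), (0.5, 0.0)]
d1 = d2 = 2.0                 # both sensors report the perpendicular range
base = 1.0                    # distance between the two sensors

# Standard two-circle lateration then places the "object" on the mid axis:
a = (d1**2 - d2**2 + base**2) / (2 * base)
h = math.sqrt(d1**2 - a**2)
est = (sensors[0][0] + a, h)
print(est)  # the y-coordinate falls short of the true wall distance of 2.0
```

The estimate lands at roughly (0.0, 1.94) instead of on the wall at y = 2.0: the wider the baseline, the larger this systematic NPTC bias, which is the trade-off against the accuracy gain promised by (5).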
3. Experiments
In order to evaluate the sensor principle, we performed first scans of standard objects in an indoor environment. The goal of the first experiment is to find out about error influences in our sensor principle. As mentioned before, a limited resolution can be problematic in an indoor environment, for example, an office. There might be metallic radiators, steel beams behind the walls, computer towers, and many other objects with an RCS large enough to be detected by our radar sensor.
Accordingly, the probability that two objects differ in their radial distances to the sensor by less than the radar’s resolution is high; hence we can rarely trust our scan results if they are performed in an indoor environment. For fundamental research, our radar sensors with a resolution of approximately 0.6 m are sufficient. Of course, better sensors would achieve better results.
For our first experiment, we performed two test series, in which we placed a standard geometry far away (at least two times the resolution) from disturbing objects as mentioned before. During experiment A we used a planar metallic board, and during experiment B a corner reflector. Both experiments are explained in the following section.
3.1. Scan in front of a Metallic Planar Board and a Corner Reflector
During experiment A, we scanned an indoor environment with our sensor unit. We placed a planar metallic surface perpendicular to the sensor unit, at the same height as the sensor unit, in order to perform a 2D experiment in a 3D environment. Each measurement contains the accumulation of ten 360° scans with a step size of 0.7°. Not every measurement cycle leads to a successful position estimation; a position estimation can only be processed if both sensors detect an object. Experiment B was performed analogously to experiment A with a corner reflector. All relevant details of the experiments can be seen in Table 1.

In Figure 6, the results of experiments A and B are displayed; further discussion of the scan results is given in Section 4. A picture of the experimental environment is shown in Figure 5.
3.2. Scan of a Hallway
In the second experiment, we want to find out whether our sensor principle is suitable for robotic mapping. Therefore, we recorded raw data for a map of our hallway. To avoid influences of control and odometry errors of our robots, we performed scans at known poses (see Figure 7). The walls of the corridor are approximately 2 m apart.
4. Results and Discussion
In order to interpret our scan results correctly, two models have been derived, one for the board and one for the corner reflector. A model can be helpful when designing the parameters of a system or understanding effects.
4.1. Model of the Scanned Objects
As in most physical models, we make simplifications in order to reduce the complexity of the problem. We assume that all radar waves are reflected from the object’s boundary surface; hence penetration of the material is neglected. Furthermore, we assume that a single sensor only measures distances to surfaces which are inside its observation area and perpendicular to the sensor itself. Double reflections, like those that can appear between two parallel walls, are neglected. In case of orthogonal lines or areas, we assume a distance measurement to the corner point.
Figures 8 and 9 demonstrate a simplified understanding of the experimental setup. The value r represents the distance between the center of rotation of the sensor unit and the object. In case of the planar metallic surface (Figure 8), we cannot expect each sensor to measure the distance to the same point, due to the NPTC. According to our model, the distances can be calculated with (9), where N(0, σ_R) represents a normally distributed random variable with standard deviation σ_R, which is added to the geometric part of the equation:

d_i = g_i + N(0, σ_R), (9)

where g_i is the geometric distance between sensor S_i and its reflection point on the surface.
Due to the influence of the NPTC, it is not legitimate to average the measurement points of the planar metallic surface, because the expected value resulting from the lateration algorithm is not constant and depends on the aspect angle.
In case of a corner reflector, we can expect the distance to be measured always to the same point (the corner point). Consequently, averaging is legitimate. A measurement cycle in front of a corner reflector can be represented by the following equation:

d_i = sqrt((x_c - x_i)^2 + (y_c - y_i)^2) + N(0, σ_R), (10)

where (x_c, y_c) is the position of the corner point.
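The corner-reflector model can be simulated directly: every range is the true distance to the corner point plus zero-mean Gaussian noise, so cycle averages converge to the geometric distances. The geometry and noise level below are assumed for illustration only:

```python
import math
import random

# Model sketch for experiment B (corner reflector): every sensor ranges to
# the same corner point, so each measurement is the true geometric distance
# plus zero-mean Gaussian noise, and averaging over cycles is legitimate.
random.seed(0)
corner = (0.0, 3.0)                         # assumed corner-point position
sensors = [(-0.3, 0.0), (0.3, 0.0)]         # assumed sensor positions
sigma = 0.05                                # assumed range noise (m)

def simulate_ranges():
    """One measurement cycle: noisy distances from both sensors to the corner."""
    return [math.dist(s, corner) + random.gauss(0.0, sigma) for s in sensors]

cycles = [simulate_ranges() for _ in range(1000)]
mean_d = [sum(c[i] for c in cycles) / len(cycles) for i in range(2)]
true_d = [math.dist(s, corner) for s in sensors]
print([round(abs(m - t), 3) for m, t in zip(mean_d, true_d)])
```

After 1000 cycles the residuals between averaged and true distances are on the millimetre level, which is exactly why averaging is legitimate here but not for the planar board.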
Figure 10 presents a comparison between the model of the planar surface and the measurements of experiment A. We cannot confirm our model to be fully correct, because of side effects and a non-perfectly planar surface. Nevertheless, we learn that the largest error influence of our sensor unit is not the range measurement error σ_R. The biggest problem of our principle is the NPTC, which results in a wide spread of the measurement values. Figure 6 displays scan results with KDE post-processing, which yields a very high probability at our scanned objects’ locations. Accordingly, we obtain probabilities which describe areas where an object could be located.
Figure 11 presents the measurement results of the corner reflector, which fit our model very well. We can assume almost no influence of the NPTC. The remaining spread of the measurement values is caused by the range measurement error of the sensors.
4.2. Mapping with Known Poses
Several mapping algorithms exist; an overview is given by Thrun in [14, p. 7]. Thrun introduces algorithms suitable for mapping with unknown robot poses, known as simultaneous localization and mapping (SLAM). In this paper, we focus on mapping with known poses, which is simpler but leads to more promising results, because odometry and control errors do not influence the map. Occupancy grid mapping with a Bayes filter might be the most popular probabilistic representation of a map and will be the next step to combine with our sensor principle.
Figure 12 presents two simple approaches to grid mapping with our sensor principle. Figure 12(a) presents the measurement values of the hallway in a two-dimensional histogram. In general, double reflections between parallel walls or from the ground can occur and cause wrong position estimations. Since we can expect more measurements of true objects than wrong detections, the histogram accordingly accumulates true object locations; hence the hallway’s shape is visible. The histogram representation can be fine-tuned by changing the bin size. Our second approach for a further grid mapping algorithm is shown in Figure 12(b): we applied a Gaussian KDE to every 360° scan and normalized the values; afterwards, we summed all normalized kernel-estimated scans to obtain the map. This method can be fine-tuned with the kernel size of the estimator. In general, this approach leads to a blurry representation of the map. We realized that we can build a dynamic inverse sensor model with both approaches, which will be the focus of further investigation.
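Both mapping approaches can be sketched in a few lines; `histogram_map` mirrors the binning idea of Figure 12(a) and `kde_map` the per-scan Gaussian KDE of Figure 12(b). The function names and the isotropic Gaussian kernel are our own choices:

```python
import numpy as np

def histogram_map(points, x_edges, y_edges):
    """Approach (a): accumulate lateration results in a 2-D histogram;
    true object locations collect more hits than ghosts or double
    reflections, so the hallway shape emerges. Bin size is the tuning knob."""
    h, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=[x_edges, y_edges])
    return h

def kde_map(scans, xs, ys, kernel_sigma=0.2):
    """Approach (b): per-360-degree-scan Gaussian KDE, normalised per scan,
    then summed over all scans. kernel_sigma tunes the blurriness."""
    gx, gy = np.meshgrid(xs, ys)
    total = np.zeros_like(gx)
    for scan in scans:                        # scan: array of (x, y) estimates
        dens = np.zeros_like(gx)
        for px, py in scan:
            dens += np.exp(-((gx - px)**2 + (gy - py)**2)
                           / (2.0 * kernel_sigma**2))
        if dens.max() > 0:
            dens /= dens.max()                # normalise each scan
        total += dens
    return total
```

Normalising each scan before summation keeps a single dense scan from dominating the map, which is the reason for the per-scan normalisation step described above.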
4.3. 3D Scan with Multiple Sensors
We simulated a scenario with six point targets and a scanner equipped with nine sensors. We used the simulation software V-REP from Coppelia Robotics, which allows the simulation of sonar sensors, whose behaviour is similar to that of radar sensors. We added Gaussian noise to every distance measurement and solved the system of equations from (3) via the minimum mean square method. Furthermore, the simulation assumes sensors with a very high resolution; accordingly, the simulation is used only to demonstrate the influence of the NPTC effect, without influences of low resolution, when detecting multiple point targets with one sensor. The setup of the scenario can be seen in Figure 13.
Due to the NPTC, which can be seen in the upper picture of Figure 13, the lateration algorithm produces many wrong position estimations, which appear as a spread of the estimated points. Therefore, we defined a criterion to filter the measurements, where d̄ is the distance between the average position of all sensors and the estimated object position p:

d̄ = ‖ p - (1/n) Σ_{i=1}^{n} S_i ‖. (11)
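The filter can be sketched as a consistency check between this distance and the measured ranges; the exact threshold form and the function name are our assumptions, since the original criterion is only described verbally:

```python
import numpy as np

def passes_filter(p_est, sensor_positions, measured_ranges, eps):
    """Sketch of the filter criterion from Section 4.3 (threshold form
    assumed): accept an estimated position only if the distance between
    the sensors' mean position and the estimate agrees, within eps, with
    the mean of the measured ranges. NPTC-corrupted estimates violate
    this consistency and are rejected."""
    centre = np.mean(np.asarray(sensor_positions, dtype=float), axis=0)
    d_bar = float(np.linalg.norm(np.asarray(p_est, dtype=float) - centre))
    return abs(d_bar - float(np.mean(measured_ranges))) < eps
```

Tightening eps rejects more NPTC-biased estimates at the cost of discarding some valid ones, which matches the filtered-versus-unfiltered comparison in Figure 13.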
The filter criterion enhances the result significantly. A comparison of the result with and without filter is shown at the bottom of Figure 13.
5. Conclusion
Robust localization and navigation in hazardous and tough environments are still a difficult issue in field robotics research. Dust, rain, fog, and inadequate illumination are conditions that make popular sensors, such as laser scanners or cameras, unsuitable. Radar overcomes these difficulties.
In this paper, we investigated a new scanning method and took a closer look at error influences in order to judge the principle’s feasibility. We focused on two influences: first, the range measurement error of the sensor itself, and second, wrong position estimation due to the lateration principle (NPTC). We showed that the influence of the NPTC needs to be considered. Only objects which are similar to corner reflectors do not suffer from this effect.
The lateration technique focuses on the estimation of points inside the observation area. A comparison with surface estimation algorithms like envelopes of spheres or SEABED might be an interesting subject of investigation.
We showed that our sensor principle is suitable for robotic mapping. An investigation of a dynamic inverse sensor model obtained from kernel density estimations will be the focus of our future research.
To sum up, our proposed principle is an alternative to standard radar-based scanning methods. Mechanical beamforming techniques require an antenna construction, and electrical beamforming techniques need phased-array radars, which are commonly more expensive. Although no antenna construction is required, our principle needs more than one sensor. From a single 360° scan obtained through mechanical beamforming, we can expect distance information at each incremental angle step, which results in more measurement points than our principle, which detects only the objects with the highest RCS due to its non-focused observation area. But our principle records more than one measurement of an object during one scan rotation, which raises the probability of a correct detection of an object. An advantage over traditional rotating mechanical beamforming techniques is the possibility to perform 3D scans as well, which would be mechanically complicated with mechanical beamforming and is otherwise only known in combination with electrical beamforming radars. But our principle suffers more from the NPTC, poor accuracy and resolution, and wrong calibration or asynchronism of measurements than traditional techniques and is accordingly limited.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] S. Clark and H. D. Whyte, “The design of a high performance mmw radar system for autonomous land vehicle navigation,” in Field and Service Robotics, pp. 281–288, Springer, London, UK, 1998.
[2] J. Detlefsen, M. Rozmann, and M. Lange, “94 GHz 3D imaging radar sensor for industrial environments,” EARSeL Advances in Remote Sensing, vol. 2, no. 1, 1993.
[3] J. W. Marck, A. Mohamoud, E. van der Houwen, and R. van Heijster, “Indoor radar SLAM: a radar application for vision and GPS denied environments,” in Proceedings of the 10th European Radar Conference (EuRAD '13), October 2013.
[4] G. Schmid and G. Neubauer, “Bestimmung der Exposition durch Ultra-Wideband-Technologien,” Tech. Rep., Bundesamt für Strahlenschutz, 2007.
[5] F. Fölster and H. Rohling, “Data association and tracking for automotive radar networks,” IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 4, pp. 370–377, 2005.
[6] H. Rabe, E. Denicke, G. Armbrecht, T. Musch, and I. Rolfes, “Considerations on radar localization in multitarget environments,” Advances in Radio Science, vol. 7, pp. 5–10, 2009.
[7] T. Sakamoto, “A fast algorithm for 3-dimensional imaging with UWB pulse radar systems,” IEICE Transactions on Communications, vol. 90, no. 3, pp. 636–644, 2007.
[8] S. Kidera, T. Sakamoto, and T. Sato, “High-resolution and real-time three-dimensional imaging algorithm with envelopes of spheres for UWB radars,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 11, pp. 3503–3513, 2008.
[9] M. Schneider, LSB-Methode: Bestimmung von Distanzunterschieden mittels parametrierter Schwebungen [Ph.D. thesis], Universität Rostock, 2013.
[10] G. R. Curry, Radar System Performance Modeling, Artech House, 2005.
[11] S. D. Robertson, “Target for microwave radar navigation,” Bell System Technical Journal, pp. 852–869, 1947.
[12] A. Ludloff, Praxiswissen Radar und Radarsignalverarbeitung, Vieweg, 1998.
[13] M. Mirbach and W. Menzel, “A simple surface estimation algorithm for UWB pulse radars based on trilateration,” in Proceedings of the IEEE International Conference on Ultra-Wideband (ICUWB '11), pp. 273–277, Bologna, Italy, September 2011.
[14] S. Thrun, Robotic Mapping: A Survey, 2002.
Copyright
Copyright © 2016 Paul Fritsche and Bernardo Wagner. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.