Abstract

This paper focuses on a hardware-in-the-loop facility aimed at real-time testing of architectures and algorithms of multisensor sense and avoid systems. It was developed within a research project aimed at the flight demonstration of autonomous non-cooperative collision avoidance for Unmanned Aircraft Systems. In this framework, an optionally piloted Very Light Aircraft was used as the experimental platform. The flight system is based on multisensor data integration and includes a Ka-band radar, four electro-optical sensors, and two dedicated processing units. The laboratory test system was developed with the primary aim of prototype validation before multisensor tracking and collision avoidance flight tests. System concept, hardware/software components, and operating modes are described in the paper. The facility has been built with a modular approach, including both flight hardware and simulated systems, and can work on the basis of experimentally recorded or synthetically generated scenarios. Hybrid operating modes are also foreseen, which enable performance assessment for alternative sensing architectures and for flight scenarios that are hard to reproduce in flight tests. Real-time multisensor tracking results based on flight data are reported, which demonstrate the reliability of the laboratory simulation while also showing the effectiveness of radar/electro-optical fusion in a non-cooperative collision avoidance architecture.

1. Introduction

Following the most important guidelines about Unmanned Aircraft Systems (UAS) integration into civil airspace, the onboard avionics of these aircraft has to include a Sense and Avoid (S&A) system capable of replacing the human pilot in performing visual collision threat detection and avoidance [1–3]. Therefore, several research efforts have been carried out worldwide in order to develop a system that can perform this function.

In all these projects, the configuration comprises two subsystems: (1) the Obstacle Detection and Tracking subsystem, which detects flying intruders in a selected Field of Regard and estimates their motion; (2) the Collision Avoidance subsystem, which provides Conflict Detection and Resolution capabilities.

Regarding the “Sense” function, several solutions have been proposed depending on available budget and onboard resources, ranging from standalone electro-optical (EO) sensors [4–7] to integrated architectures which comprise airborne radars and cooperative systems based on broadcast information, such as TCAS and ADS-B [8–10].

Within the research project named TECVOL, the Italian Aerospace Research Center (CIRA) and the University of Naples developed a prototype S&A system which relies on an integrated radar/electro-optical (EO) configuration. In particular, the sensing system is composed of a pulsed radar, four EO sensors, and two processing units for image processing and real-time multisensor tracking. A hierarchical data fusion model is selected, in which the radar is the main sensor that must perform initial detection and tracking and the EO sensors are used as auxiliary information sources to increase tracking accuracy and measurement rate [10].

The prototype hardware/software system has been installed for flight demonstration and performance assessment onboard a customized optionally piloted Very Light Aircraft (VLA) named Flying Laboratory for Aeronautical Research (FLARE). FLARE is a modified version of the TECNAM P92 plane and represents a cost-effective platform to test innovative flight technologies. Flight tests successfully demonstrated radar and electro-optical detection capabilities [11], radar-based intruder tracking [12], and autonomous non-cooperative collision avoidance [13], while multisensor-based autonomous avoidance tests are currently in progress and will be described in future works.

Within the system development, a key role was played by a purposely developed hardware-in-the-loop (HWIL) facility aimed at testing the real-time operation of image processing and data fusion algorithms. In fact, the multisensor tracking algorithm had been extensively validated in off-line simulations [10], and a similar validation was needed for the real-time software and processing units.

Indeed, possible applications of the test system go well beyond flight prototype validation, as it allows real-time performance assessment in several scenarios (including those that would be hard to replicate in flight) and with different simulated sensors and/or processing architectures.

This paper focuses on this laboratory test facility, which comprises the obstacle detection and tracking processing units (with the flight software installed), a simulator of the flight control computer, a radar simulator (both simulators use the same hardware and protocols of onboard systems), a synchronization computer, and a scenario displayer that generates synthetic flight scenes on a monitor which is imaged by an electro-optical camera (the same used in flight tests).

The paper is organized as follows. First, the flight system installed onboard FLARE is briefly recalled. The reader is referred to [10–14] for more details on the flight-tested prototype sensing system, which is not the primary focus of this paper. Laboratory system concepts, hardware/software components, and operating modes are instead thoroughly described in later sections. Finally, real-time multisensor tracking results based on flight data are reported and analyzed in detail.

2. Flight System Setup

The non-cooperative sensor suite comprises a pulsed radar, the AI-130 Obstacle Awareness System manufactured by former Amphitech, two visible cameras, panchromatic and color (Allied Vision Technologies Marlin), and two infrared (IR) cameras (FLIR A40V). The whole sensor package is placed on top of the aircraft wing, with the radar in the central position. The two visible cameras are installed parallel to the aircraft longitudinal axis. It is therefore possible to simultaneously acquire color and panchromatic high-resolution images of the same region. The IR cameras are pointed slightly off-axis to get an azimuth Field of View (FOV) comparable to the visible cameras.

The radar operates with a 35 GHz carrier frequency, which constitutes a good compromise between antenna dimensions, angular accuracy, and sensitivity to rain and fog. The visible cameras have a FOV of approximately 49.8° × 38.9°, and they work at the maximum resolution of 1280 × 960 pixels at a frame rate of 7.5 Hz. The panchromatic camera is used for data fusion with radar echoes, while the color camera is intended for obstacle identification, which will be implemented in the future. The IR cameras are aimed at improving situational awareness in low-light conditions and have a smaller FOV of 24° × 18°, with a maximum resolution of 320 × 240 pixels.

Two processing units complete the system: a CPU devoted to Real-Time Tracking by sensor data fusion (RTT-CPU) and a CPU devoted to Image Processing (IP-CPU) and vision-based obstacle detection. The first one is based on a deterministic Operating System (OS), that is, Microsoft Windows CE version 5.0, and it is directly connected to the radar via an Ethernet link and the Transmission Control Protocol/Internet Protocol (TCP/IP). It runs the tracking algorithm and performs data exchange with the Guidance, Navigation, and Control (GNC) system by means of the deterministic Controller Area Network (CAN) bus. The other computer is connected to the EO sensors via a Firewire link. It is based on a conventional OS, that is, Microsoft Windows XP Embedded, and it is dedicated to the processing of visible and IR images. From a hardware point of view, the processing units communicate through an Ethernet link by the User Datagram Protocol (UDP). The flight system hardware architecture is depicted in Figure 1.

From the logical point of view, a hierarchical central-level fusion architecture has been selected, where the radar is the main sensor that must perform initial detection and tracking, and the EO sensors are used as auxiliary information sources to increase accuracy and data rate. In particular, tracking estimates are effective in reducing the computation time and the false alarm rate of EO image processing, since the EO obstacle detection process is applied to properly defined search windows with relatively small dimensions in pixels [15].

3. Laboratory System Setup

The overall architecture of the developed HWIL system is depicted in Figure 2. Within this architecture, some elements exactly replicate the flight system, that is, the visible panchromatic camera, the IP-CPU, the RTT-CPU, and the hardware connections and protocols (CAN bus, Ethernet link, UDP and TCP/IP protocols). It is worth noting that, since the laboratory test system was developed with a primary focus on multisensor tracking, only the panchromatic camera is present. Some other elements, such as the GNC System simulator and the radar simulator, emulate the behavior of relevant flight system components. Finally, the scenario activation computer and the scenario displayer are necessary support tools for HWIL simulations.

Regarding the EO section of the setup, the camera is fixed to an optical bench and processes images projected on an LCD display set in front of it. A suitable collimation lens has been sized and installed between the camera and the monitor to guarantee better uniformity conditions. These components are enclosed in a black box so that stray light effects can be neglected. Figure 3 shows the camera, the monitor, and the collimator on the optical bench, while Figure 4 shows the black box which contains all the above-mentioned components. The Scenario Displayer Computer runs a synthetic video relevant to a predefined flight scenario.

System operating modes and elements are described in the following section, along with a brief discussion of the flight software aimed at EO obstacle detection and data fusion.

3.1. Operating Modes

The indoor facility can operate in two basic operating modes, namely, prerecorded and simulated flight scenarios. While the different setup elements always operate in the same way, the main difference between the two operating modes lies in how the data files used for HWIL simulation are generated.

In the case of prerecorded flight scenarios, purposely designed off-line software tools allow isolating given flight segments to be replicated in laboratory simulations. In this framework, data interpolation must be carried out with care. For example, in data collection flights the intruder position is known at a frequency of 1 Hz (GPS update rate), so the scenario displayer computer has to interpolate intruder range, azimuth, and elevation in order to generate images at a frequency of 10 or 20 Hz. In this case, interpolation is carried out in North-East-Down (NED) coordinates, and attitude measurements (stored at 10 Hz) are used only afterwards for conversion into the body reference frame (BRF), in order to avoid introducing additional errors due to FLARE's noisy attitude dynamics.
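As an illustration of this step, the following sketch (with hypothetical function and variable names) interpolates the 1 Hz intruder positions in NED and only then rotates the relative vector into the body frame using a standard 3-2-1 Euler sequence; the actual off-line tools may differ in the details.

```python
import numpy as np

def interpolate_intruder_ned(t_gps, pos_ned_1hz, t_out):
    """Linearly interpolate intruder NED position from 1 Hz GPS samples
    to the display rate (e.g., 10 or 20 Hz)."""
    return np.column_stack(
        [np.interp(t_out, t_gps, pos_ned_1hz[:, k]) for k in range(3)])

def ned_to_brf(rel_ned, roll, pitch, yaw):
    """Rotate a relative NED vector into the body reference frame (BRF)
    using a standard 3-2-1 (yaw-pitch-roll) Euler sequence."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Direction cosine matrix from NED to body axes
    dcm = np.array([
        [cp * cy,                cp * sy,               -sp],
        [sr * sp * cy - cr * sy, sr * sp * sy + cr * cy, sr * cp],
        [cr * sp * cy + sr * sy, cr * sp * sy - sr * cy, cr * cp],
    ])
    return dcm @ rel_ned
```

Performing the interpolation in NED means that the (noisier) 10 Hz attitude samples enter only in this final rotation step.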

In the case of simulated scenarios, data files are generated in the same format but derive from off-line simulations of relevant physical processes. For example, synthetic radar data can be generated by simulating the target detection process by a Monte Carlo approach, taking into account radar parameters, environmental conditions, and intruder characteristics, such as mean radar cross-section and Swerling type [16].
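The following minimal Monte Carlo sketch illustrates one possible way to do this for a Swerling I target (exponentially distributed radar cross-section, constant within a scan); the reference SNR, detection threshold, and simplified radar-equation scaling are illustrative assumptions, not the parameters of the actual simulator.

```python
import numpy as np

rng = np.random.default_rng()

def detected(range_m, mean_rcs_m2, snr_ref_db=13.0, range_ref_m=2000.0,
             rcs_ref_m2=1.0, threshold_db=12.0):
    """Single-scan Monte Carlo detection test for a Swerling I target.

    The RCS is drawn from an exponential distribution and kept constant
    within the scan; the mean SNR scales with RCS and with 1/R^4.
    """
    rcs = rng.exponential(mean_rcs_m2)                    # Swerling I fluctuation
    snr_db = (snr_ref_db + 10 * np.log10(rcs / rcs_ref_m2)
              + 40 * np.log10(range_ref_m / range_m))     # radar-equation scaling
    return snr_db > threshold_db

# Estimated probability of detection over many scans at 2.5 km
pd = np.mean([detected(2500.0, 1.0) for _ in range(10000)])
```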

Hybrid operating modes are also possible: for example, relative dynamics from flight-tested near-collision geometries can be combined with simulated radar echoes, in order to quickly test the effect of alternative sensor choices. Furthermore, while synthetic flight images allow testing the IP-CPU software and camera in the flight configuration, real EO images gathered in flight tests can be used within an image processing software that almost completely replicates the flight version (apart from image acquisition from the camera), in order to evaluate, in real-time simulations, performance parameters such as EO detection range and false alarm rate.

3.2. Scenario Activation

Basically, all the data files used for HWIL testing comprise a number of time-referenced sensor measurements, where time referencing is obtained by associating a proper GPS time of day with all the data. System operation requires that the radar simulator, the GNC simulator, and the scenario displayer run with very accurate relative synchronization. This is obtained by a synchronization signal that starts the simulation, sent by the scenario activation computer as a broadcast message over the Ethernet network. From the implementation point of view, after initialization procedures, the three computers initially wait for the start signal on a blocking UDP socket [17]. It is worth noting that, since it is possible to simulate flight scenarios of variable duration (isolating proper flight segments of interest), the effect of relative drift among the CPU clocks is negligible. In fact, non-cooperative near-collision scenarios usually have a duration of tens of seconds or a few minutes, at most.
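A minimal Python sketch of this synchronization mechanism is shown below; the port number and message format are illustrative assumptions, while the blocking-socket scheme reflects the description above.

```python
import socket

PORT = 5005  # illustrative port, not the one used in the facility

def send_start_signal(start_gps_time):
    """Scenario activation computer: broadcast the start message."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(f"START {start_gps_time:.3f}".encode(), ("<broadcast>", PORT))
    sock.close()

def wait_for_start_signal():
    """Radar/GNC/scenario computers: block until the start message arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    data, _ = sock.recvfrom(1024)      # blocking receive on the UDP socket
    sock.close()
    return float(data.decode().split()[1])
```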

3.3. GNC Simulator

The GNC simulator replicates the behavior of the flight control computer. In particular, the following navigation data are sent on the CAN bus:
(i) GPS-based aircraft position (latitude, longitude, and WGS-84 altitude) and velocity (NED components);
(ii) attitude angles as estimated by the Attitude and Heading Reference System (AHRS);
(iii) AHRS-based angular velocity components along body axes;
(iv) AHRS-based accelerometer measurements along body axes.

As anticipated above, the input data files comprise all these measurements with GPS-based time tagging. After the activation signal (which corresponds to a given GPS time), the CPU clock is used to send navigation measurements with the correct timing.
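The playback timing can be sketched as follows (hypothetical names): each time-tagged message is emitted when the local CPU clock, referenced to the activation instant, reaches the corresponding GPS-time offset.

```python
import time

def play_back(messages, send_on_can):
    """Replay time-tagged navigation messages after the activation signal.

    `messages` is a list of (gps_time, payload) tuples sorted by time;
    `send_on_can` is the function that writes one frame on the CAN bus.
    """
    t0_gps = messages[0][0]          # GPS time of the first sample
    t0_cpu = time.monotonic()        # CPU clock at activation
    for gps_time, payload in messages:
        deadline = t0_cpu + (gps_time - t0_gps)
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)        # wait until the correct emission instant
        send_on_can(payload)
```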

At the same time, the GNC simulator monitors and stores the RTT-CPU output, thus enabling end-to-end analysis of the operation of the multisensor tracking system.

3.4. Radar Simulator

The radar simulator provides on the Ethernet TCP/IP link the same data generated by the airborne radar, as follows:
(i) number of targets detected in every radar scan;
(ii) for each detected target:
(a) range;
(b) azimuth;
(c) elevation;
(d) delay in ms (all the targets detected in a radar pass are sent within a single message as the pass is completed, so the delay information is crucial for precise time referencing of detected echoes);
(e) intensity.
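To give a concrete flavor of such a per-scan message, the sketch below packs the listed fields into a single binary frame; the actual AI-130 protocol layout is not reproduced here, so field order and data types are purely illustrative.

```python
import struct

def pack_radar_scan(targets):
    """Pack one radar scan into a single binary message (illustrative layout).

    `targets` is a list of dicts with keys: range_m, azimuth_deg,
    elevation_deg, delay_ms, intensity.
    """
    msg = struct.pack("<H", len(targets))            # number of detected targets
    for t in targets:
        msg += struct.pack("<fffff", t["range_m"], t["azimuth_deg"],
                           t["elevation_deg"], t["delay_ms"], t["intensity"])
    return msg
```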

As in the case of the GNC simulator, correct operation timing is based on the CPU clock and the activation signal.

3.5. Multisensor Tracking

The RTT-CPU runs the flight software for multisensor tracking. The algorithm is based on an Extended Kalman Filter (EKF) with a linear dynamic model and nonlinear measurement equations. In particular, the state vector is composed of nine components, which are the target obstacle coordinates in NED (with origin in the aircraft center of mass) and their first and second time derivatives. The adopted dynamic model assumes that every component of the target acceleration evolves as a correlated noise process with given time constant and instantaneous variance [16].
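Under these assumptions, the correlated-acceleration (Singer-like) model for each NED axis can be written in continuous time as follows; this is only a compact sketch, while the exact discretized form used by the flight filter is the one described in [16]:

\[
\mathbf{x} = \begin{bmatrix} x & y & z & \dot{x} & \dot{y} & \dot{z} & \ddot{x} & \ddot{y} & \ddot{z} \end{bmatrix}^{T},
\qquad
\frac{d}{dt}\,\ddot{s}(t) = -\frac{1}{\tau}\,\ddot{s}(t) + w_{s}(t),
\qquad
E\!\left[w_{s}(t)\,w_{s}(t')\right] = \frac{2\sigma_{a}^{2}}{\tau}\,\delta(t-t'),
\]

where \(s \in \{x, y, z\}\), \(\tau\) is the acceleration correlation time constant, and \(\sigma_{a}^{2}\) is the instantaneous acceleration variance.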

Navigation data are used by the algorithm at the frequency of 10 Hz so that own vehicle dynamics are easily tracked. In particular, GPS data are used in the track initialization phase, while acceleration and attitude measurements are used in all tracking phases. Angular velocity estimates are used to correct lever arm effects on acceleration measurements. The output data rate is 10 Hz. Issues relevant to real-time implementation, such as variable sensor latency and ground clutter filtering, are handled by dedicated techniques [11, 12]. As soon as a firm track is generated, range and angular estimates (converted into BRF) are sent (at the maximum frequency of 10 Hz) to the IP-CPU that has to carry out EO obstacle detection.
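For reference, a standard rigid-body form of this lever-arm correction is the following (the details of the actual implementation are given in [11, 12]):

\[
\mathbf{a}_{\mathrm{cm}} = \mathbf{a}_{\mathrm{meas}} - \dot{\boldsymbol{\omega}} \times \mathbf{r} - \boldsymbol{\omega} \times \left(\boldsymbol{\omega} \times \mathbf{r}\right),
\]

where \(\mathbf{r}\) is the accelerometer position relative to the center of mass, \(\boldsymbol{\omega}\) is the measured angular velocity, and \(\dot{\boldsymbol{\omega}}\) is its numerical derivative.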

3.6. Image Processing and EO Obstacle Detection

As anticipated above, the panchromatic camera works as an auxiliary sensor to increase angular accuracy and measurement rate. In particular, the obstacle detection software built for the panchromatic camera is activated as soon as the IP-CPU receives (radar-based) estimates from the tracking module. Obstacle detection is carried out if the intruder aircraft is enclosed in the camera FOV. Azimuth and elevation predictions are converted from the Body Reference Frame (BRF) to the Camera Reference Frame (CRF), and a search window centered on these predictions is built, whose size depends on the estimated range and on the measurement uncertainty. The image processing algorithm is run only in the search window, thus enabling a significant reduction of the computational burden while ensuring more accurate and frequent angular measurements. The adopted image processing technique is based on a customized edge detection-labeling approach [15]. In case of intruder detection, CRF angular estimates are converted back to the BRF and transmitted to the RTT-CPU.
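A minimal sketch of this search-window construction is shown below, using a pinhole camera model and hypothetical parameter names; the actual flight software uses the calibrated camera model and the tuning criteria described in [15].

```python
import numpy as np

def search_window(az_pred, el_pred, range_m, sigma_ang, intruder_span_m,
                  f_px, cx, cy, n_sigma=3.0):
    """Build a pixel search window centered on the predicted line of sight.

    az_pred, el_pred : predicted azimuth/elevation in the camera frame [rad]
    sigma_ang        : 1-sigma angular prediction uncertainty [rad]
    f_px             : focal length in pixels; (cx, cy) principal point
    """
    # Window center from the predicted line of sight (pinhole projection)
    u0 = cx + f_px * np.tan(az_pred)
    v0 = cy - f_px * np.tan(el_pred)
    # Half-size: n-sigma prediction uncertainty plus the intruder apparent size
    half = n_sigma * sigma_ang * f_px + 0.5 * f_px * intruder_span_m / range_m
    return (u0 - half, v0 - half, u0 + half, v0 + half)
```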

3.7. Synthetic Scenarios Representation

Synthetic scenarios are depicted on an LCD monitor whose dimensions and pixel resolution are such that, taking the distance between the monitor and the camera into account, the angular error in intruder positioning is of the order of 0.04°. This uncertainty corresponds to the camera instantaneous field of view (IFOV) and has therefore been considered acceptable. In terms of refresh time, the monitor exhibits good performance also at a frequency of 20 Hz, which ensures large oversampling with respect to the standard camera update rate.
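The 0.04° figure is consistent with the camera parameters given in Section 2, as the following check shows; the pixel pitch and viewing distance in the second step are illustrative values, not the actual facility dimensions.

```python
import math

# Camera instantaneous field of view (Section 2: 49.8 deg horizontal FOV, 1280 pixels)
ifov_deg = 49.8 / 1280                                               # ~0.039 deg per pixel

# Angle subtended by one monitor pixel, for illustrative pitch/distance values
pitch_m, distance_m = 0.28e-3, 0.40
monitor_pixel_deg = math.degrees(math.atan(pitch_m / distance_m))    # ~0.04 deg
```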

Regarding representation of flight scenarios, the objective is to have a realistic representation of the type of images that would be processed by the EO system in a real flight, although these synthetic images are not aimed at estimating optical performance, in terms of detection range and sensitivity to weather conditions, for instance. These performance parameters were accurately estimated in off-line analyses based on all the EO data gathered in data collection flights [15].

Synthetic images are produced by simulating the background (both above and below the horizon), estimating the horizon line position coherently with navigation state and generating a synthetic view of the intruder aircraft. The background replicates the different possible illumination conditions that have been encountered in flight in terms of mean and standard deviation of pixel intensities. Due to the relatively narrow camera FOV, the horizon is simulated by taking aircraft flying altitude and attitude angles (pitch and roll) into account as it is done in the attitude indicator included in aircraft cockpit instrumentation [18]. This procedure assumes that the variation in terrain morphology is negligible and enables calculating the line-of-sight of horizon points in the camera reference frame. Then, the actual position of the horizon line on the LCD display is found by considering the intrinsic camera parameters (such as focal length, skew coefficient, and distortion coefficient [19]) and the external calibration between camera and monitor which allows finding the connection between the CRF and the Display Reference Frame (DRF; see Figure 5).
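A simplified sketch of this computation is shown below: horizon points are generated as zero-elevation lines of sight in NED, rotated into the body/camera frame using pitch and roll, and projected with a pinhole model. The flat-terrain assumption, the alignment of camera and body axes, and the function names are illustrative simplifications; the actual implementation also applies the full intrinsic parameters and the camera-to-display calibration.

```python
import numpy as np

def horizon_pixels(pitch, roll, f_px, cx, cy, n_points=50, half_fov=np.radians(25.0)):
    """Project the horizon line into the image for a camera aligned with the body axes.

    Assumes negligible horizon dip (flat terrain), a pinhole camera, and
    pixel axes with u to the right and v downwards.
    """
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # NED -> body rotation; yaw does not change the horizon geometry and is omitted
    R = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]]) @ \
        np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    pts = []
    for az in np.linspace(-half_fov, half_fov, n_points):
        los_ned = np.array([np.cos(az), np.sin(az), 0.0])  # zero-elevation line of sight
        los_body = R @ los_ned
        if los_body[0] > 0:                                 # in front of the camera
            u = cx + f_px * los_body[1] / los_body[0]
            v = cy + f_px * los_body[2] / los_body[0]
            pts.append((u, v))
    return pts
```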

The last simulated image feature is the intruder aircraft. It is represented by the shape shown in Figure 6, which reproduces the geometric invariants of a real intruder aircraft as estimated from flight images. Moreover, its intensity and contrast with respect to the background are simulated depending on the actual range and on the simulated illumination conditions, and its dimensions are computed based on range and actual intruder dimensions. The intruder is represented on the monitor if it is enclosed in the camera FOV.
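For illustration, the apparent intruder size in pixels follows directly from a pinhole model and the camera parameters of Section 2; the wingspan value below is an assumption, while the actual geometric and contrast models are those calibrated on flight imagery.

```python
import math

def apparent_size_px(range_m, span_m=9.0, h_fov_deg=49.8, h_res_px=1280):
    """Apparent intruder size in pixels versus range (simple pinhole model).

    span_m is an assumed intruder wingspan, not a value from the paper.
    """
    f_px = (h_res_px / 2) / math.tan(math.radians(h_fov_deg) / 2)  # focal length in pixels
    return f_px * span_m / range_m

# e.g., apparent_size_px(2000.0) gives roughly 6 pixels at 2 km
```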

In the current facility implementation, intruder intensity and contrast with respect to the background are simulated heuristically, by replicating conditions encountered in electro-optical data collection flights. In fact, these flights were carried out in different weather and illumination conditions, so that a large experimental dataset was acquired. This is consistent with the objective of synthetic image generation within real-time hardware-in-the-loop simulations, which, as stated above, is to have a realistic representation of the type of images processed in flight, rather than to estimate EO detection performance in terms of detection range and false alarm rate.

However, the facility architecture enables the usage of synthetic images generated in a more general way, also considering conditions that were not encountered in data collection flights. To this aim, for example, the model proposed by Dey et al. in [20], which is an atmospheric model for predicting aircraft appearance under wide-ranging visual flight rules conditions, could be used to generate synthetic images of intruder aircraft.

4. Results from Hardware-in-the-Loop Tests Based on Flight Data

Several HWIL tests have been carried out in order to tune the multisensor tracking algorithm and the obstacle detection software and to evaluate their processing latency and consequent reliability. Results relevant to tests based on flight data are reported in this section. In particular, a near-collision encounter between FLARE and the intruder aircraft is analyzed. Radar and GNC measurements are real sensor outputs gathered in flight and used within the real-time simulation environment, while the EO obstacle detection process is applied to synthetic images consistent with the flight scenario. In the considered case, the radar field of view is set to 90° (azimuth) by 20° (elevation). The EO system is interrogated at a frequency of 5 Hz. FLARE and intruder GPS data gathered in flight are used as reference for computing tracking performance. It is worth noting that GPS-based estimates of intruder relative motion in NED are very accurate as they are not affected by attitude measurement uncertainties: the residual errors are due to GPS accuracy (differential GPS for FLARE, standalone GPS for the intruder) and residual synchronization uncertainty.

In the considered scenario, the intruder is detected by the radar for the first time at a range of about 2650 m (Figure 7). After three consistent detections, firm tracking is entered at a range of about 2450 m. As soon as the track is declared, relevant information is passed to the EO obstacle detection system to increase accuracy and data rate. An example of a synthetic image also comprising the EO search window is shown in Figure 8. Range estimation by the tracking system is very accurate in the whole encounter, as a result of the radar range error, which is of the order of a few meters (Figure 7).

When considering azimuth and elevation angles in NED (often named “stabilized azimuth” and “stabilized elevation” in the literature [16]), the EO effect on tracking performance is significant, with a dramatic improvement of angular accuracy and data rate (Figures 9 and 10) with respect to the initial radar-only tracking estimates. In fact, several highly accurate EO measurements (error standard deviation about one order of magnitude smaller than the radar one) are generated starting from a range of about 2 km, until the intruder flies outside the camera FOV in azimuth (Figure 11). Furthermore, EO detections are provided with a very small latency, of the order of 0.1 s (Figure 12), thus enabling effective latency compensation and real-time fusion with radar data.

Although EO measurements are no longer available in the final part of the encounter, several radar detections are still provided thanks to the larger sensor FOV: the system switches back to radar-only tracking, and the increase of angular uncertainties can be clearly observed.

The effectiveness of radar/EO fusion is even more evident when analyzing intruder relative velocity. While range rate is accurately estimated both in radar-only and in radar/EO tracking modes (Figure 13), the error on angular derivatives is greatly reduced when both sensors are integrated in the tracking algorithm (Figures 14 and 15) and increases again when the intruder falls outside the camera FOV.

Multisensor fusion effect on the improvement of autonomous situational awareness capabilities is shown in Figure 16, which depicts the estimated distance at closest point of approach (DCPA) between the two aircraft. Indeed, this is the most important variable for reliable collision detection [10, 12]. As before, GPS data are used as reference. In the first part of the considered encounter, the radar-only tracker exhibits a significant overestimation of the DCPA because of the impact of relatively large errors on angular derivatives (of the order of 0.5°/s on both azimuth rate and elevation rate, even a little worse than average radar-only tracking performance [12]), combined with the relatively large range (about 2 km) and time to closest point of approach. Then, the improvement in angular derivative estimation due to radar/EO fusion immediately improves DCPA accuracy. When the intruder flies outside of the camera FOV and the tracking system switches to radar-only tracking, angular performance worsens again, but the impact on the DCPA estimate is limited because of the smaller range and time to collision.
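For reference, under a constant relative-velocity assumption the distance at closest point of approach follows from the estimated relative position \(\mathbf{r}\) and relative velocity \(\mathbf{v}\) as

\[
t_{\mathrm{CPA}} = -\frac{\mathbf{r}\cdot\mathbf{v}}{\left\|\mathbf{v}\right\|^{2}},
\qquad
d_{\mathrm{CPA}} = \left\|\mathbf{r} + t_{\mathrm{CPA}}\,\mathbf{v}\right\|,
\]

which makes explicit why errors in the angular derivatives, and thus in \(\mathbf{v}\), have a larger impact at large range and large time to closest point of approach (the collision detection logic actually implemented is described in [10, 12]).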

Besides the increase of tracking accuracy enabled by radar/EO fusion, it is also important to underline the effectiveness of the designed multisensor architecture in significantly reducing false alarm rate, which is a known issue of standalone EO systems.

In fact, as it has also been demonstrated by flight test results [12], the radar is in general a reliable source of information with large detection range and low false alarm rate, if ground clutter echoes are properly filtered.

Then, the implemented sensor fusion architecture, which is based on a hierarchical concept and on the exchange of information at sensor level (cross-sensor cueing), is effective in increasing EO reliability: EO detection is activated only for confirmed tracks, and the EO detection process takes advantage of tracking-based estimates in different ways [15], such as search window definition based on coarse estimation of the intruder line of sight, range-based selection of search window dimensions, range-dependent definition of the edge detection threshold, and adoption of range-dependent criteria for selecting valid edges in output.

In general, even in this architecture there is a nonzero probability of EO false alarms, which can have negative consequences on tracking accuracy and robustness because of the small measurement covariance associated with optical sensors.

In order to reduce the risk that EO false positives are assimilated by the filter, when the image processing system declares an intruder detection, the EO measurement is used only if it satisfies the gating process [16]. It has to be considered that during radar/EO tracking the track covariance is reduced with respect to radar-only tracking, so that false alarms are likely to fall outside the gate; in that case they are discarded and do not affect track accuracy.
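A minimal sketch of such a gating test is given below, using the standard Mahalanobis-distance gate against a chi-square threshold; the gate probability and variable names are illustrative, while the gating procedure actually implemented follows [16].

```python
import numpy as np
from scipy.stats import chi2

def passes_gate(z, z_pred, S, gate_prob=0.997):
    """Accept an EO measurement only if its normalized innovation lies inside the gate.

    z, z_pred : measured and predicted azimuth/elevation (2-vector)
    S         : innovation covariance from the EKF
    """
    nu = z - z_pred                                   # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))           # squared Mahalanobis distance
    return d2 <= chi2.ppf(gate_prob, df=len(z))       # chi-square gate threshold
```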

Furthermore, a conservative approach has been used for tuning the EO obstacle detection software, in order to obtain a very small false alarm rate at the cost of a slight increase in missed detections [15].

Indeed, no EO false alarms have been erroneously accepted by the tracking algorithm in either hardware-in-the-loop or flight experiments.

Overall, the quality of the obtained results confirms the good reliability of HWIL testing, with estimated system performance in good agreement with theoretical predictions. In fact, these laboratory tests, based in part on flight data, confirm that proper integration of radar and optical measurements can provide a significant improvement in collision detection reliability, even with relatively low-cost sensing architectures based on commercial-off-the-shelf (COTS) components.

In order to obtain this level of performance in flight, additional issues such as accurate relative sensor alignment and reliable EO obstacle detection in variable illumination conditions have to be dealt with. These topics will be discussed in detail in future works that will also report results from multisensor tracking flight tests with online processing of EO images.

5. Conclusions

This paper focused on an HWIL system designed to support the development of a multisensor sense and avoid flight system. The facility is based on a modular approach and can work in different operating modes, enabling different combinations of simulated and experimental sensor data.

After a description of the system logic and the relevant components, results from real-time HWIL tests, comprising radar and navigation measurements gathered in flight tests, were reported and analyzed. Besides confirming the reliability of the developed HWIL architecture, these results clearly show the potential of radar/electro-optical fusion for non-cooperative UAS collision avoidance.

While the described system is a custom simulation environment, modularity and combination of flight measurements and synthetic data represent general concepts that can be usefully applied in the development of innovative sensing systems for UAS. Moreover, while flight tests usually cover a limited number of experimental conditions, in general real-time hardware-in-the-loop testing can provide performance assessment in a much wider set of operative conditions.

Future developments of the laboratory facility foresee real-time testing of (simulated) prototype radar sensors and of multiple-intruder scenarios, as well as the integration of simulators of cooperative systems such as ADS-B.