Abstract

This work proposes a fault detection architecture for vehicle embedded sensors, able to cope with both system nonlinearity and environmental disturbances and degradations. The proposed method uses analytical redundancy and a nonlinear transformation to generate the residual value from which faults are detected. A strategy dedicated to the optimization of the choice of the detection parameters is also developed.

1. Introduction

Safety in intelligent vehicles is a key issue. To ensure driver safety, it is critically important that the information given by embedded sensors be reliable. Indeed, a large number of applications are based on the fusion of information coming from several sensors, especially in vehicle localization applications, as depicted in [1–3].

In the aerospace domain, physical redundancy and a voting system are often used, consisting in the direct comparison of the information provided by at least three identical sensors or systems and the subsequent validation of the recorded data.

However, in industrial fields such as the automotive industry, duplicating sensors amounts to a loss of profit, so solutions that verify sensor confidence without any supplementary sensors have to be developed.

A large number of fault detectors have been developed during the past decades to deal with complex systems [4–14]. Many of them are model-based [15–17], comparing the predicted behavior of the system with the information generated by the sensors in order to determine the current state of the system. This kind of method needs an accurate model of the system behavior to work efficiently. In the studied case, vehicle manoeuvers can present strong nonlinearities, which adds complexity to the system modelling. Some solutions have been proposed to deal with these problems [18–20], but in an automotive application strong and unpredictable environmental interactions add to the behavioral nonlinearity, making model-based solutions less efficient.

Considering this context, a solution using analytical redundancy seems a valuable alternative. Analytical redundancy consists in comparing the estimation of a chosen metric obtained from sensors of different types in order to detect and identify deviant behavior [21]. Huang and Su proposed such a solution in [22], using a set of extended Kalman filters to compare the estimated states of an ego-vehicle from different parallel filters, but this solution still needs a system model to work optimally.

Our proposed solution is based on analytical redundancy and uses nonlinear transformations to generate residual signals from which sensor faults are detected; the nonlinear transformations improve detection robustness. The paper is organized as follows: Section 2 presents the architecture and some generalities, Section 3 studies the nonlinear transformation used, and Section 4 discusses the decision process. Simulation results are presented in Section 5, and Section 6 concludes the paper and outlines future work.

2. Generalities and Architecture

2.1. Architecture

The proposed architecture (Figure 1) is divided into two consecutive transformations dedicated to the generation of the residual signal. The first transformation converts each sensor's data into a common measurement and applies the nonlinear transformation (TNL), giving the residual quantity. The second transformation is the decision process. The architecture can thus be generalized to any type of sensor, provided the measurement under test can be estimated from every sensor.

The decision values indicate, for each sensor, whether its data are taken into account in the global value estimation: a value of 1 means the sensor presents a faulty behavior (0 corresponding to nominal behavior). The first necessary step to apply this method is to define the measurements used and to apply the transformation ensuring the analytical redundancy. If more than one measurement is chosen for the fault detection, the architecture is applied to every measurement used, and a sensor is considered faulty if at least one of its Boolean decision values (one per tested measurement m) equals 1. It is therefore essential to define which measurements have to be tested according to the monitored sensors. Measurements have to be generated by at least three different sensors or sets of sensors and need to ensure the observability of faults. These two requirements are discussed in the two following sections for a specific example.
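The per-sensor decision rule described above, a logical OR over the Boolean decisions obtained for each tested measurement, can be sketched as follows (function and variable names are illustrative, not from the original):

```python
def sensor_is_faulty(decisions):
    """A sensor is flagged as faulty (1) if at least one of its
    per-measurement Boolean decision values equals 1; otherwise it is
    considered nominal (0). This is a logical OR over all measurements."""
    return 1 if any(decisions) else 0

# Hypothetical example: a sensor tested on two measurements
# (e.g., yaw rate and longitudinal acceleration).
print(sensor_is_faulty([0, 1]))  # faulty on one measurement -> 1
print(sensor_is_faulty([0, 0]))  # nominal on both -> 0
```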

2.2. Analytical Redundancy

To implement this method, we first need to determine the measurements which will allow the comparison used to test a sensor. In our case, we chose to study proprioceptive sensors, usually used to predict the positioning state of an ego-vehicle: an Inertial Navigation System (INS) providing inertial information, and odometers providing the data needed to obtain the vehicle speed (longitudinal or wheel speed). The odometers measure each wheel's speed and travelled distance, while the INS measures the vehicle accelerations and yaw rates on the 3 axes (see Figure 2).

Using only this set of sensors, it is possible to determine the vehicle yaw rate and the longitudinal acceleration. This information is directly given by the INS sensor, while it has to be deduced from the odometric speeds and distances. Using each left-right pair of odometers independently, both measurements can be obtained from the front and from the back couple of sensors. The longitudinal acceleration is obtained by differentiating, over one sampling period, the vehicle speed computed with (3) from the left and right odometric speeds, while the yaw rate is given by (5). Using the odometric distances, the yaw rate can also be approximated through the differential distance, itself approximated as the difference between the right and left travelled distances. In Figure 3, the distance between the two wheels, usually known as it depends on the vehicle characteristics, is also represented.

It is then possible to determine the yaw angle using (4) and then the yaw rate using (5).
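The odometric estimates described above can be sketched in one update step; variable names and the first-order difference scheme are a plausible reading of equations (3)-(5), not a verbatim transcription:

```python
def odometric_estimates(v_l, v_r, d_l, d_r, v_prev, theta_prev, track, T):
    """One-step odometric estimates for a left-right pair of odometers:
    - vehicle speed as the mean of the left/right wheel speeds,
    - longitudinal acceleration as its first-order difference over T,
    - yaw angle from the differential travelled distance divided by the
      track width (distance between the two wheels),
    - yaw rate as the time derivative of the yaw angle."""
    v = 0.5 * (v_l + v_r)                 # vehicle speed
    acc_x = (v - v_prev) / T              # longitudinal acceleration
    theta = (d_r - d_l) / track           # yaw angle from differential distance
    v_theta_z = (theta - theta_prev) / T  # yaw rate
    return v, acc_x, theta, v_theta_z

# Straight-line, constant-speed case: acceleration and yaw rate are zero.
print(odometric_estimates(10.0, 10.0, 1.0, 1.0, 10.0, 0.0, 1.5, 0.1))
```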

2.3. Observability

In order to be efficient, the transformation stage has to ensure that a fault on one sensor can still be noticed on the transformed measurement. Basically, it consists in verifying that the measurement during a faulty behavior differs from its nominal counterpart. In order to analyze the observability capability, we use the fault models presented in Qi et al. [23], which consider that sensor noises and errors can be classified into four categories:
(i) an additive bias;
(ii) a scale factor;
(iii) an aberrant error;
(iv) a total loss, which corresponds to the sensor output stuck at 0 or registering only a stochastic process.
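For simulation purposes, the four fault categories can be injected on a nominal sample as follows (the bias, gain, and outlier magnitudes are illustrative assumptions, not values from the paper):

```python
def inject_fault(x, kind, bias=0.5, gain=1.2):
    """Apply one of the four fault models of Qi et al. [23] to a nominal
    sample x. Parameter values are illustrative."""
    if kind == "bias":      # (i) additive bias
        return x + bias
    if kind == "scale":     # (ii) scale factor
        return gain * x
    if kind == "aberrant":  # (iii) short, high-amplitude outlier
        return x + 10.0 * max(abs(x), 1.0)
    if kind == "loss":      # (iv) total loss: output stuck at 0
        return 0.0
    return x                # no fault

print(inject_fault(2.0, "bias"))   # 2.5
print(inject_fault(2.0, "loss"))   # 0.0
```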

Knowing these four fault categories and the transformation applied to the sensor data, it is now possible to determine the efficiency of the proposed architecture. Concerning the INS, every type of fault is observable as there is no transformation, but observability has to be established for the odometers.

First, a bias applied on only one odometric distance (or speed) will be mostly masked by the estimation of the acceleration. If the bias is a constant value, it generates no acceleration; however, at the appearance of the fault, a large instantaneous acceleration occurs, which can be seen as a Dirac impulse. A bias on the odometric speed will therefore generate an aberrant, punctual behavior on the acceleration. It will also generate a bias on the yaw rate Vθz (positive or negative depending on the affected odometer), which is easier to detect. Here, the bias is applied on the right odometer measurement. In the following equations, the nonfaulty measurements are noted VθzNF and AccxNF while the faulty measurements are noted VθzF and AccxF, respectively, for the yaw rate and the longitudinal acceleration. Our objective here is to extract the nonfaulty value and observe the deviation introduced by the injected fault. Applying a scale factor to the right odometric speed affects the acceleration (10), the resulting fault appearing as a gain and a bias which depends on the nonfaulty value from the other odometer; the yaw rate is affected in the same manner (11). A total loss, usually represented by the sensor output stuck at 0, can be noticed on both the acceleration (12) and the yaw rate (13): the acceleration gains an additive term corresponding to the real acceleration of the affected wheel divided by two (with the real right wheel speed), and the yaw rate gains an additive term proportional to the real distance travelled by the affected wheel. Finally, an aberrant error will also lead to an aberrant error on both measurements, that is, high measurement values during a brief time interval.

3. Nonlinear Transformation

Once the first transformation is done, it is possible to generate the residual value using a nonlinear transformation. The attributes of a residual value are generally as described in [24]: zero-centered during nominal behavior, and presenting a noncentered distribution when a fault appears on the corresponding measurement. In order to distinguish the faulty source efficiently, the residual has to be insensitive to fault appearance on the other measurements. The chosen transformation (Figure 4) consists in a nonsymmetrical Gaussian transformation centered on an estimation of the measurement global value; its sigma value permits the adjustment of the sensitivity of the transformation. In order to estimate the measurement global value, we need an estimation method which uses all the input measurements but remains insensitive to faults. The chosen method consists in a weighted mean value (15), where each normalized weight depends on the previous variation, as described in (16) to (19), for the total number of measurements. The weights are normalized so that their sum equals 1, and each original weight is calculated using a parameter reflecting the past and current deviation between the corresponding measurement and the estimated global value, with a coefficient giving more importance to the past values rather than to the current deviation. Replacing this parameter by the term ε, it is possible to generalize the calculation using the initial deviation. This estimation method will be evaluated and compared to two other estimations in the simulation section, and the residual generation will also be evaluated in Section 5.
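A minimal sketch of these two stages is given below. It assumes a particular weight shape (inverse of the accumulated deviation) and a symmetric Gaussian residual, since the exact weight formulas (16)-(19) and the nonsymmetrical shape of the transformation are not reproduced here; all names and constants are illustrative:

```python
import math

def global_estimate(x, eps_prev, alpha=0.9):
    """Weighted mean of the redundant measurements x[i]. Each raw weight
    shrinks as the accumulated deviation eps[i] between measurement i and
    the global value grows; alpha acts as a forgetting factor giving more
    importance to past deviations than to the current one."""
    x_mean = sum(x) / len(x)                       # crude previous estimate
    eps = [alpha * e + (1.0 - alpha) * abs(xi - x_mean)
           for e, xi in zip(eps_prev, x)]
    raw = [1.0 / (1e-9 + e) for e in eps]          # deviant measurements weigh less
    total = sum(raw)
    w = [r / total for r in raw]                   # normalized: sum(w) == 1
    x_hat = sum(wi * xi for wi, xi in zip(w, x))
    return x_hat, eps

def residual(xi, x_hat, sigma):
    """Gaussian-shaped residual: close to 0 when xi agrees with the global
    estimate x_hat, approaching 1 as it deviates; sigma sets sensitivity."""
    return 1.0 - math.exp(-((xi - x_hat) ** 2) / (2.0 * sigma ** 2))

# The outlier (5.0) is down-weighted, pulling the estimate toward 1.0.
x_hat, eps = global_estimate([1.0, 1.0, 5.0], [0.0, 0.0, 0.0])
print(x_hat)                      # ~1.8, below the arithmetic mean ~2.33
print(residual(5.0, x_hat, 0.2))  # close to 1: deviant measurement flagged
```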

4. Decision Process

The decision process is done by comparison with a threshold which has to be defined. Usually, the threshold value can be optimally determined using statistical tools [25, 26], given information about signal characteristics, a priori probabilities, and so forth, but the case studied here does not provide all the needed information. Alternative solutions such as the Neyman-Pearson criterion have been developed to work with only part of this information [27], but they still do not allow optimizing decisions over several fault natures.

The sensitivity of the nonlinear transformation also needs to be adjusted in order to optimize the detection. The quality of detection is usually assessed using the false alarm and missed detection rates, but other parameters can be used to evaluate the test: for instance, the maximum error during a missed detection, or a cost depending on that value and on the probability of appearance, can also serve as quality criteria. In order to optimize the decision process, we first need to run simulations representing nominal behavior; failures are then virtually added according to the description made in Section 2.1. The simulation process is presented in the next section.

Using this database, the detection process is run (as described in Figure 5), varying both the sensitivity and the threshold in order to compare the results for different cases (fault nature and magnitude, sensor affected, etc.). Quality criteria then make the parameter determination possible. The developed method to optimize the decision process consists in the choice of a first-priority criterion, adjusted by the user: every configuration meeting this criterion is kept and every other configuration is removed. The second stage consists in choosing a second criterion, for which the optimal value is found, so that the optimal parameter set can be defined.

For example, it is possible to limit the false alarm rate to 5% as a first-priority constraint and then optimize the missed detection rate, obtaining the optimal parameter set for the chosen constraints. This method is thus dependent on the chosen strategy, and different cases will be presented in the simulation section.
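The two-stage selection above can be sketched as a filter-then-minimize pass over the evaluated configurations (the dictionary keys "pfa", "pmd", "sigma", and "threshold" are illustrative names, not from the paper):

```python
def select_parameters(configs, max_false_alarm=0.05):
    """Two-stage parameter selection sketch: keep only the
    (sensitivity, threshold) configurations meeting the first-priority
    constraint (false alarm rate 'pfa' at most max_false_alarm), then
    return the one minimizing the missed detection rate 'pmd'."""
    feasible = [c for c in configs if c["pfa"] <= max_false_alarm]
    if not feasible:
        return None          # no configuration meets the constraint
    return min(feasible, key=lambda c: c["pmd"])

# Hypothetical evaluation results for three (sigma, threshold) couples.
configs = [
    {"sigma": 0.1, "threshold": 0.3, "pfa": 0.02, "pmd": 0.30},
    {"sigma": 0.1, "threshold": 0.5, "pfa": 0.04, "pmd": 0.20},
    {"sigma": 0.2, "threshold": 0.3, "pfa": 0.10, "pmd": 0.05},  # rejected
]
print(select_parameters(configs))  # the second configuration
```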

5. Simulations Tests, Evaluation, and Validation

5.1. Transformations Stages

Simulations have been run in two steps. First, the nominal behaviors of the vehicle and sensors have been simulated using the industrial version of the pro-SiVIC simulation platform, which allows the generation of driving scenarios on different tracks, with speed and direction variations, and gives feedback on the vehicle state taking its dynamics into account. The research version of pro-SiVIC has already been used in the development of different ADAS systems.

This software also models the sensors' behavior. In order to validate the proposed method, it is essential to ensure the failure detection function whatever the vehicle dynamics, so the proposed scenario presents a complex trajectory with various dynamic cases: speed changes both in curves and straight lines, as well as constant speed periods. All these driving conditions and dynamic states make the results representative of a classic driving scenario involving only one vehicle. As we focus on the longitudinal acceleration and the yaw rate, both are represented in Figure 6 for the complete scenario.

As the method works identically for both measurements, we will first focus on the acceleration. First of all, we evaluate the proposed global value estimation. The results are compared to two other estimation methods: a simple arithmetic mean value calculation (called method 1 in the results of Figure 7) and an estimation by Kalman filtering (method 2); the proposed approach is referred to as method 3. First, the global value is directly estimated from the three measurements (INS and both front and back odometers), using the real acceleration as a reference (Figure 7).

The three estimation methods seem to work efficiently on a nominal case of study. The next step is to virtually add faults on measurements. Figure 8 represents estimations with one of the three measurements directly affected by a bias (left), a scale factor (center), and a punctual loss, represented by the measurement blocked at the previous registered value (right). The time interval affected by the default is highlighted in orange.

The estimation is visibly affected by the injected errors, but this perturbation depends on the fault nature and value. Table 1 presents the mean quadratic error value during the exposure time for the three types of errors and for a set of both bias and gain fault values.

Except for a small scale factor, the proposed method (method 3), consisting in a weighted mean value calculation, is always equal to or better than the two other studied methods. Once the estimation method is validated, the nonlinear transformation can be evaluated.

Using the pro-SiVIC data presented earlier, we first compute the nonlinear transformation on the three measurements as described in Section 2, in a nominal behavior, for two different sensitivity values, 0.1 and 0.2. We expect to observe a zero-centered signal for each measurement, with a standard deviation depending on the noise level and the configured sensitivity.

The results obtained match the expected behavior perfectly (Figure 9). The sigma value has to be set so as to make the method more robust, keeping in mind that it has to remain small in order to detect the smallest fault values. All the fault models presented in Section 2 have been virtually added to the measurements in order to observe their impact on the TNL results, presented in Figures 10–13. In all the figures, the appearance of a fault is represented by an orange line and its disappearance by a green line on the related sensor. When a bias is added on the odometers, it is added on the speed measurement and not directly on the acceleration estimation. The aberrant error is simulated by the addition of an impulse of significant value (at least ten times the current measurement value) during one sampling period, while a total loss is simulated by a value blocked at zero. Figures 10, 11, 12, and 13 present, respectively, the results obtained with the addition of a bias, a scale factor, an aberrant error, and a total loss on the measurements.

As expected for a bias or a scale factor on the odometric measurement, the acceleration measurement shows a punctual perturbation during the appearance (and disappearance) event but a nonprominent one for the duration of the disturbance. The yaw rate measurement will be more effective for this type of fault.

An aberrant error will generate a high intensity perturbation on the nonlinear transformation.

These simulations allow visual validation of the impact of each kind of fault on the nonlinear transformation, but it is also possible to generalize the results by considering a perturbation, whatever the fault nature, virtually added to the measurement under test (20), the faulty measurement being the sum of the nonfaulty one and the perturbation. Considering the same constant value as input for all three measurements, with the addition of noncorrelated stochastic processes on each of them to simulate measurement noise, we first compute the TNL, which presents the behavior observed in Figure 9. Then the exact same conditions are used with the addition of a fault on one measurement. The simulation is repeated for different sensitivities and fault values, and for each configuration the residual mean value is computed and the nonfaulty mean value is subtracted from the faulty one. The resulting value can be seen as the transformation output sensitivity to a fault for the given sigma value. Figure 14 illustrates the results obtained according to the fault and sigma values. As expected, the highest output sensitivity is observed for the smallest sigma values and the strongest faults.

Having verified that the proposed transformations work efficiently, the focus is now put on the decision process optimization.

5.2. Decision Process

As depicted earlier, the decision is made by comparing the residual with a threshold: a sensor is declared faulty if the absolute value of the corresponding residual is higher than the threshold. The sensor is still tested after the initial decision, but the corresponding measurement is not used in the global value estimation as long as the residual exceeds the threshold. Recovery is still possible as the sensor remains under test: if its behavior leads its residual to decrease below the threshold value, the decision value returns to its initial state 0.
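This isolation-with-recovery rule can be sketched as a per-sample threshold test over a residual sequence (names are illustrative):

```python
def run_decisions(residuals, threshold):
    """Per-sample decision with recovery: the sensor is isolated
    (decision 1) while |residual| exceeds the threshold, and its decision
    returns to 0 as soon as the residual falls back below the threshold.
    While isolated, the corresponding measurement would be excluded from
    the global value estimation."""
    return [1 if abs(r) > threshold else 0 for r in residuals]

# The sensor is isolated during the fault and recovered afterwards.
print(run_decisions([0.1, 0.9, 0.8, 0.2], 0.5))  # [0, 1, 1, 0]
```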

Following this strategy, optimizing the decision process first requires running simulations using the previous results for faulty and nonfaulty measurements and applying the decision, varying both the TNL sensitivity and the threshold values to establish ROC curves.

ROC (Receiver Operating Characteristic) curves allow evaluating the quality of a hypothesis test by comparing the good detection rate to the false alarm rate [28]. An example is given in Figure 15 and Table 2 for a bias on the INS acceleration measurement. The different colors correspond to the injected fault values, on which the decision results depend. As the sensitivity also plays an important role in the decision process, the results are presented for three different sensitivity values.
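One ROC point for a given (sensitivity, threshold) couple can be computed from the decision sequence and the known fault ground truth, as in this sketch (names are illustrative):

```python
def roc_point(decisions, ground_truth):
    """One point of a ROC curve: the good detection rate (decisions of 1
    on faulty samples) versus the false alarm rate (decisions of 1 on
    healthy samples), for one (sensitivity, threshold) configuration."""
    faulty = [d for d, g in zip(decisions, ground_truth) if g == 1]
    healthy = [d for d, g in zip(decisions, ground_truth) if g == 0]
    pd = sum(faulty) / len(faulty) if faulty else 0.0
    pfa = sum(healthy) / len(healthy) if healthy else 0.0
    return pd, pfa

# Two of three faulty samples detected, no false alarm on the healthy one.
print(roc_point([1, 1, 0, 0], [1, 1, 1, 0]))
```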

The simulation is done for every kind of fault, varying the threshold, the sensitivity, and the fault values. For simplicity, the example of the acceleration measurement for the INS sensor is presented, but the proposed method can easily be applied to every sensor and measurement. As the acceleration measurement presents a large number of values around 0, a scale factor or a total loss represented by a measurement stuck at zero will show a high missed detection rate, the faulty measurement presenting values similar to the real ones. This kind of error does not present a high risk for the system function, as the perturbed measurements remain close to the real measurement values.

With the obtained information, the optimal parameters can be determined. The objective of the decision process optimization is to determine the sensitivity and threshold values giving the best performances according to the chosen criteria. As described in Section 4, a method has been developed to choose these two parameters. Since the false alarm and missed detection rates have been determined for each parameter couple, along with the maximal measurement error resulting from each missed detection, the optimization process can be realized. It is possible to first choose the characteristics giving the lowest error rate, but in the present situation it is more important to limit the maximum error resulting from a wrong decision, so the results of different strategies can be compared to find the best solution. The first chosen strategy is as follows: 1st priority, false alarm rate limited to 5%; 2nd priority, maximum error minimization.

In order to realize this optimization strategy, the maximum error has to be determined: for every wrong decision, the absolute value of the error between the faulty measurement and the real measurement is computed (21); the maximum error value is then associated with each configuration, and the optimization method can be applied. As the simulation is realized for every kind of fault, the error is determined for each fault nature; since only one set of parameters has to be configured, the error corresponding to each sensitivity/threshold couple is taken as the mean maximum error value. Using these two criteria, the results reported in Tables 3 and 4 are obtained, taking into account every fault nature with the same occurrence probability.

The missed detection rate is high, as predicted, but the maximal error encountered is lower than 0.1 m/s². The second strategy is as follows: 1st priority, maximum acceptable error set at 0.05 m/s²; 2nd priority, false alarm rate minimization.

This second strategy takes into account the same parameters but changing the priorities. In a system where we want the smallest error possible between the measurement and the real value, this will probably be the best alternative.

Here the false alarm rate is higher (around 17%), meaning sensors will frequently be isolated. In a system where measurements can be analytically generated with the help of other sensors, this temporary isolation is not a major inconvenience. This remains a strategic choice to be made by the user, depending on the desired characteristics.

6. Conclusion

In order to deal with strong system nonlinearity and environmental perturbations, a sensor fault detection algorithm has been developed, using analytical redundancy and a nonlinear transformation. After the presentation of the concept and architecture, the residual generation, consisting in the nonlinear transformation (TNL) of a common measurement, has been depicted, and a global value estimation has been proposed in order to remove the common part of the measurements. A strategy for the optimization of the decision process has then been discussed.

The different parts of the proposed algorithm have then been evaluated through several simulations, using data from the pro-SiVIC software, which simulates the dynamic behavior of a vehicle and the responses of its embedded sensors. First, the global value estimation has been compared to two other estimation methods and shows better results for almost all the tested configurations. Then the behavior of the nonlinear transformation has been studied according to different fault natures and sensitivity configurations. Finally, since this sensitivity parameter and the threshold value have to be determined in order to ensure a quality decision, the proposed method for choosing these parameters has been evaluated: two different strategies have been compared, according to the user preferences, allowing limiting or minimizing chosen characteristics of the test results.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work is part of CooPerCom, a 3-year international research project (Canada-France). The authors would like to thank the National Science and Engineering Research Council (NSERC) of Canada and the Agence Nationale de la Recherche (ANR) in France for supporting the Project STP 397739-10. This work is done in the LIV research group (Laboratory of Intelligent Vehicles), in the Electrical and Computer Engineering Department of Sherbrooke University, (http://www.gel.usherbrooke.ca/LIV/public/). The authors want to thank the other researchers of the LIV for their cooperation.