Abstract

Modern military ranging, tracking, and classification systems are capable of generating large quantities of data. Conventional “brute-force” computational techniques, even with Moore’s law for processors, present a prohibitive computational challenge; often, the system either fails to “lock onto” a target of interest within the available duty cycle, or the data stream is simply discarded because the system runs out of processing power or time. In searching for high-fidelity convergence, researchers have experimented with various reduction techniques, often using logic diagrams to make inferences from related signal data. Conventional Boolean and fuzzy logic systems generate a very large number of rules, which are often difficult to handle due to limitations in the processors. Published research has shown that reasonable approximations of the target are preferred over incomplete computations. This paper gives a figure of merit for comparing various logic analysis methods and presents results for a hypothetical target classification scenario. Novel multiquantization Boolean approaches also reduce the complexity of these multivariate analyses, making it possible to better use the available data to approximate target classification. This paper shows how such preprocessing can reasonably preserve result confidence and compares results among the Boolean, multiquantization Boolean, and fuzzy techniques.

1. Introduction

There are anecdotes that, during the Gulf War era, many pilots would turn off their Radar Warning Receivers because the receivers picked up too many false positives. Likely, owing to the limited onboard processing capabilities and technology of the 1990s, many systems could not complete the target classification processing assignment within the given radar duty cycle. Certainly, many advancements have been made since the 1990s, and, as evidenced by Moore’s law for processors, what was possible then is eclipsed by several orders of magnitude in today’s onboard processors. However, many applications [1], like airborne or other platform-based surveillance, routinely pipe sensor data to a command, control, and communications facility, where the data is analyzed and decisions are rendered. With the sophistication of today’s military theater scenarios, such decision delay can be costly, so improvements in onboard data processing are being implemented. This paper examines the result confidence achieved using various logic and reduction techniques.

To start, a simple range and track problem is used [2]. The object is to find the distance to a target using a time-of-flight (TOF) approach. The sensor sends a pulse to the target and, using the time delay, calculates the distance as speed multiplied by time (with the round-trip time halved). This is illustrated in a simple Boolean table (Table 1), which shows that we can get inaccurate results if some of the key assumptions or input data are wrong. For instance, if it is an acoustic sensor, then “speed” is affected by atmospheric density, and “time” accuracy is limited by the quantization in the electronics. Depending on the accuracies required, the researcher [3] can invest in better control (or modeling) of the environment or electronics, that is, more sensors and thus more data, to improve the calculated distance accuracy. Table 2 compares the Boolean and fuzzy truth tables for such a TOF sensor.
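As an aside, the TOF arithmetic and its timer-quantization limit can be sketched in a few lines of Python. The sketch below is illustrative only; the acoustic-sensor assumption, the nominal speed of sound, and the timer resolution are assumptions for the sketch, not values taken from this paper.

```python
# Minimal sketch of a time-of-flight (TOF) range calculation.
# Assumptions (not from the paper): an acoustic sensor, nominal
# speed of sound 343.0 m/s, and a timer quantized to 1 microsecond.

SPEED_OF_SOUND = 343.0   # m/s; varies with atmospheric density
TIMER_TICK = 1e-6        # s; quantization step of the electronics

def tof_distance(round_trip_time_s: float) -> float:
    """Distance = speed * time; the round trip covers twice the range."""
    # Quantize the measured time to the timer resolution first,
    # mimicking the accuracy limit imposed by the electronics.
    ticks = round(round_trip_time_s / TIMER_TICK)
    quantized_time = ticks * TIMER_TICK
    return SPEED_OF_SOUND * quantized_time / 2.0

# A 10 ms round trip corresponds to roughly 1.7 m of range.
print(tof_distance(0.010))
```

Any error in the assumed speed (atmospheric density) or in the timer tick propagates directly into the computed range, which is the accuracy trade-off the text describes.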

In this paper, we will expand the simple ranging problem to that of target recognition in a cluttered visual scene [4]. The problem is similar, in that the algorithms extract key parameters from the data and use them to determine the target type. The specifics of the feature extraction algorithm are not pertinent, nor is the actual dataset used for illustration. What we show in this paper is that, using binary quantization techniques, it is possible to converge to an approximate answer and that such convergence reasonably preserves the answer fidelity. The technique is illustrated with a full-fuzzy solution of the data, followed by a 5-level, a 3-level, and a true Boolean reduction of the data.

2. Logic Analysis

In a typical ranging system, a stream of pulses is sent to the target of interest. The sensor gets a discrete (temporally separated) sequence of detections (“hits”) on a given target. Each detection is of a finite, typically quite short, duration. From each detection a given collection of attributes about that hit is extracted. The attributes typically include detection time, position, and reflected intensity, as well as various statistical moments and other mathematical features of the detection. The goal of these detections is typically to identify (detect and classify) the target of interest and its trajectory. By successfully tracking the target in motion, knowledge of the target can usually be substantially improved, since repeated detections of the moving target provide a greater volume of information and allow aggregate, track-based statistics to also be used. The target classification problem is to match characteristics of the target against a set of known target signatures. The correlation calculation then yields the proper classification.

With this as motivation, let us return to the simple problem of measuring the distance to a fixed object. A Boolean decision can be derived from a truth table, and, for the simple problem, such a truth table was shown above in Table 1. Basically, if there is accurate time and speed information, then the calculated distance is accurate. In this section we examine how accurate “accurate” is and what can be done to improve confidence that the calculated distance is accurate. We look at the problem of determining the time and the factors affecting accuracy in TOF algorithms, and at the speed and the factors affecting the assumption that the speed is accurate.

The likelihood of accurate speed data and likelihood of accurate time data can be thought of as a distribution, and this distribution can be used in a fuzzy truth table to calculate distance. The details are not presented herein [6], but are illustrated in a comparative truth table.

Using a modern processor, given the speed and time information, the Boolean computation time is in microseconds. If we do the same calculation using the MATLAB fuzzy logic toolkit, with appropriate Gaussian probability functions, again given the speed and time information and likelihoods, the computation times are comparable.
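For illustration, a minimal fuzzy version of the distance-confidence calculation can be sketched outside MATLAB. The Gaussian membership widths and the use of the minimum as the fuzzy AND are assumptions for this sketch, not the paper's toolkit configuration.

```python
import math

# Hedged sketch: Gaussian membership functions express the likelihood
# that the measured speed and time are "accurate"; the fuzzy AND
# (minimum) combines them into a confidence for the calculated distance.

def gaussian_membership(x: float, mean: float, sigma: float) -> float:
    """Degree of membership in [0, 1] for a Gaussian fuzzy set."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

def distance_confidence(speed_err: float, time_err: float) -> float:
    # Membership peaks at zero measurement error; the widths (sigma = 1,
    # in normalized error units) are illustrative assumptions.
    mu_speed = gaussian_membership(speed_err, 0.0, 1.0)
    mu_time = gaussian_membership(time_err, 0.0, 1.0)
    return min(mu_speed, mu_time)  # fuzzy conjunction

# Perfect inputs give full confidence; larger errors degrade it.
print(distance_confidence(0.0, 0.0), distance_confidence(2.0, 0.5))
```

The computation is a handful of exponentials and comparisons per inference, which is consistent with the comparable runtimes noted above.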

Now we expand the problem to a multidimensional target identification problem. Consider an example of a cluttered video image in which we need to find and classify the target in the scene. The image attributes include distance (to target), aspect ratio of the target, vertical pixel height of the target, area (in pixels) covered by the target, target luminosity, dark area in the image, the surrounding luminosity, and edge pixels. The object is to classify the target (type) and assess the time taken to complete this chore. Table 3, reproduced from [5], shows rows of such data. The data is used for illustration purposes only and enters the calculations only to compare full neurofuzzy analysis against the target classification confidence we can achieve using Boolean or fuzzy techniques. The details of neurofuzzy techniques are not reproduced here [7], but, as background, the ANFIS toolkit in MATLAB and Varimax analysis in Minitab are employed. (ANFIS is the adaptive neurofuzzy inference system. Deployed in MATLAB is the Sugeno fuzzy model, where all output membership functions are singleton spikes, and the implication and aggregation methods are fixed. The implication method is simply multiplication, and the aggregation operator just includes all of the singletons. The Sugeno method is ideal for acting as an interpolative supervisor of multiple linear controllers that apply under different operating conditions of a dynamic nonlinear system. A Sugeno system is also suited for modeling nonlinear systems by interpolating multiple linear models.) As we know [7], the Boolean and fuzzy approaches increase the number of linguistic rules as the number of input vectors increases. A neurofuzzy approach allows a more compact and computationally efficient representation and lends itself to adaptive schemes. And, where there is no intuitive knowledge of input vector behaviors or relationships to the output, these adaptive techniques in turn help create the entire fuzzy network for us [8].

Looking at Table 3 for this example, we quickly see that there are 9 target types and that there is little intuitive correlation between the input vectors and the target type or the time taken to classify the target. Heuristic methods and expert knowledge (described in [9]) exploit a priori knowledge to make inferences, helping cluster the vectors and reduce system size by eliminating some inputs.

Using Varimax analysis in Minitab, we calculate, in Table 4, the correlations between vectors and see that the vert (number of vertical pixels) and area (area covered by the target in pixels) data are strongly correlated.
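The correlation-screening step can be sketched with a plain Pearson correlation matrix (the paper uses Minitab's factor tools; NumPy is substituted here). The synthetic vert, area, and luminosity values below are invented for illustration and are not the Table 3 data.

```python
import numpy as np

# Sketch of correlation screening across input vectors.
# The data values are made up purely for illustration: "area" is
# constructed to be nearly proportional to "vert", while the
# luminosity vector is independent of both.
rng = np.random.default_rng(0)
vert = rng.uniform(10, 100, size=50)          # vertical pixel height
area = vert * rng.uniform(0.9, 1.1, size=50)  # pixel area, near-proportional
lum = rng.uniform(0, 1, size=50)              # luminosity, independent

data = np.column_stack([vert, area, lum])
corr = np.corrcoef(data, rowvar=False)        # 3x3 correlation matrix

# Flag strongly correlated vector pairs (the 0.8 threshold is an
# assumption for this sketch, not a value from the paper's tables).
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) > 0.8:
            print(f"vectors {i} and {j} correlate: {corr[i, j]:.2f}")
```

Pairs flagged this way are the candidates for removal when reducing the input matrix, as done in the baseline analysis below.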

There are other inferences that can also be made [9], but none that allows a simple determination of logical connections. Employing the neurofuzzy ANFIS function from the MATLAB toolkit, we see in Table 5 that, for this example, it is possible to get up to 87% confidence in target classification [10] when relying on the full dataset.

The terms as used by Minitab are as follows.
CORR stands for the correlation between the original output and the estimated output from the fuzzy neural system using the data from each method.
TRMS stands for the total root mean square of the distances between the original output and the estimated output using the same testing data through the fuzzy neural system, where ŷᵢ is the estimated value and yᵢ is the original output value.
STD stands for the standard deviation of the distances between the original output and the estimated output using the same testing data through the fuzzy neural system.
MAD is the mean of the absolute distances between the original output and the estimated output using the same testing data through the fuzzy neural system.
EWI is the index value from the summation of the values obtained by multiplying each statistical estimation value by its equally weighted potential value for each field.
ERR stands for the error rate, expressed in terms of n, the number of testing data, ŷᵢ, the estimated output, and yᵢ, the actual output.
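Four of these statistics can be sketched under their standard definitions (the paper's exact formulas, typeset as equations in the original, are not reproduced here, and the output values below are made up for illustration).

```python
import numpy as np

# Hedged sketch of the evaluation statistics, using standard
# definitions. y is the original output, y_hat the estimated output
# from the fuzzy neural system; both vectors are illustrative.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

d = y_hat - y                            # per-sample distances
CORR = np.corrcoef(y, y_hat)[0, 1]       # correlation of the outputs
TRMS = np.sqrt(np.mean(d ** 2))          # root mean square distance
STD = np.std(d)                          # spread of the distances
MAD = np.mean(np.abs(d))                 # mean absolute distance

print(f"CORR={CORR:.3f} TRMS={TRMS:.3f} STD={STD:.3f} MAD={MAD:.3f}")
```

EWI and ERR depend on Minitab-specific weighting and are omitted from the sketch.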

Further analysis of the data in Table 3, using the Factor toolkit in Minitab, shows that this confidence (correlation) degrades as we reduce the input matrix by selectively removing high-correlation vectors. This forms our baseline.

3. Boolean Reduction Techniques

To simplify the classification computations, we wanted to see if reducing the raw data to Boolean would degrade data quality and, if so, to what level. We started with a 5-level quantization of the raw data (Table 6) and evaluated the correlation factors (Table 7). Then, we looked at a 3-level quantization of the raw data (Table 8) and the correlation factors (Table 9). Finally, we did a true Boolean operation on the raw data (Table 10) and looked at the correlation factors (Table 11). In all instances, for this example, the basic data properties were reasonably preserved. The various data tables are listed below.
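The successive quantization step can be sketched as follows: each raw vector is uniformly binned into 5, 3, and finally 2 levels, and the pairwise correlation is re-checked at each stage. The uniform-binning rule and the synthetic data are assumptions for this sketch, not the Table 6–11 procedure or values.

```python
import numpy as np

def quantize(x: np.ndarray, levels: int) -> np.ndarray:
    """Uniformly bin x into the given number of integer levels."""
    lo, hi = x.min(), x.max()
    edges = np.linspace(lo, hi, levels + 1)[1:-1]  # interior bin edges
    return np.digitize(x, edges)                   # values in 0..levels-1

# Illustrative data: two strongly correlated vectors, standing in for
# the vert/area pair (not the actual Table 3 data).
rng = np.random.default_rng(1)
vert = rng.uniform(10, 100, size=200)
area = vert + rng.normal(0, 5, size=200)

for levels in (5, 3, 2):
    q_corr = np.corrcoef(quantize(vert, levels),
                         quantize(area, levels))[0, 1]
    print(f"{levels}-level quantization: correlation {q_corr:.2f}")
```

Even the 2-level (Boolean) pass retains most of the pairwise correlation, mirroring the behavior reported for Tables 6–11.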

Table 11 shows the Varimax analysis when we employ Boolean relationships to determine the correlation functions.

In all instances, the general correlation similarities between vectors are approximately preserved. And, again, the vert and area vectors (highlighted) have high correlation. Figure 1 shows the correlation value as the raw data is successively quantized and how, even at binary quantization, there is still 80% correlation between these two data values.

Clearly, many of the correlations are preserved, and the data is easier to handle in embedded processors. Thus, for an initial start, simple Boolean quantization of the raw data provides a reasonable estimate as compared to the full dataset fuzzy inferences.

4. Proposed Algorithm

Figure 2 shows the proposed algorithm for implementing the reduced-computation schema, which consists of the following steps.

Assume an n × m data matrix with m input variables and one output.
(1) Calculate the correlation matrix, R, between the variables of the dataset and the inverse of the correlation matrix, R⁻¹. This serves two purposes: first, to establish a baseline against which to evaluate the fidelity of the Boolean reduction technique, and second, to identify high-correlation variables for reduction.
(2) If it is possible to reduce the data matrix, then select the number of reduced factors, k, for k < m, using step (1), such that the accumulated variance is greater than 0.9 (arbitrary).
(3) Recalculate the correlation matrix to baseline the new, reduced matrix.
(4) Iteratively quantize the dataset and calculate the correlation to verify data fidelity. In this example, 5-level, 3-level, and binary quantization were chosen, but the reduction technique can be applied at any quantization level, including directly reducing the dataset to binary.
(5) Evaluate the degradation and decide whether the quantization is of adequate fidelity.
(6) Use Boolean decision diagrams or other similar Boolean techniques to calculate the solution.
The selection of 0.9 as the value determining adequate correlation is based on experience. For this example, the small variety of targets and the adequate orthogonality in the sensor data stream allow setting a low cross-correlation number for dataset reduction. Similarly, for determining whether adequate data fidelity remains after quantization, the value of 0.8 is selected based on experience. Users can certainly tweak these parameters to suit the specifics of the application.
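The steps above can be sketched end to end in Python. The 0.9 and 0.8 thresholds come from the text; the synthetic data, the simple pair-dropping reduction (in place of full factor analysis), and the per-column fidelity check are assumptions made for this sketch.

```python
import numpy as np

def quantize_column(x: np.ndarray, levels: int) -> np.ndarray:
    """Uniformly bin one data column into integer levels."""
    edges = np.linspace(x.min(), x.max(), levels + 1)[1:-1]
    return np.digitize(x, edges)

def reduce_and_quantize(X, corr_cut=0.9, fidelity_min=0.8, levels=2):
    # Step (1): correlation matrix of the raw input variables (baseline).
    R = np.corrcoef(X, rowvar=False)
    # Step (2): drop one variable of each highly correlated pair
    # (a simplified stand-in for the factor-based reduction).
    drop = set()
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            if j not in drop and abs(R[i, j]) > corr_cut:
                drop.add(j)
    keep = [k for k in range(X.shape[1]) if k not in drop]
    # Steps (3)-(4): quantize the reduced data and re-check correlation
    # of each quantized column against its raw original.
    Q = np.column_stack([quantize_column(X[:, k], levels) for k in keep])
    fidelity = [np.corrcoef(X[:, k], Q[:, c])[0, 1]
                for c, k in enumerate(keep)]
    # Step (5): decide whether adequate fidelity remains.
    ok = all(f >= fidelity_min for f in fidelity)
    return Q, keep, ok  # step (6), Boolean solving, is out of scope here

# Illustrative data: columns 0 and 1 are near-duplicates; column 2
# is independent. Column 1 should be dropped by the reduction.
rng = np.random.default_rng(2)
a = rng.uniform(0, 1, 100)
X = np.column_stack([a,
                     a * 1.01 + rng.normal(0, 0.001, 100),
                     rng.uniform(0, 1, 100)])
Q, keep, ok = reduce_and_quantize(X)
print("kept columns:", keep, "fidelity ok:", ok)
```

Step (6), solving the resulting Boolean dataset with binary decision diagrams, is the subject of the companion paper and is not sketched here.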

5. Conclusion

Target detection and target classification in cluttered battlefield images and in computationally limited systems are a growing problem. As more sensors are deployed, more data becomes available, and this strains the available computation capacity, especially in embedded processor applications. In general, neural network algorithms are capable of solving the multivariate problem if sufficient compute capability and/or time is available. These neural networks recognize the patterns of the system by adapting and training themselves under the given conditions. Fuzzy inference systems are generated from a human knowledge database, using membership functions for decision making. The integration of the two techniques produces one optimized method, with each compensating for the other, utilizing the human knowledge database and the if-then rules of fuzzy logic. That is, the neurofuzzy technique [11, 12] applies the learning algorithms of neural networks to the parameter identification of fuzzy models. Lacking such capability, developers seek either a reduction of the data size and/or other computational schemes that simplify the problem. In this paper, we showed that, using a simple cross-correlation calculation, it is easy to identify data orthogonality and thereby reduce the matrix order. A rule-of-thumb guide can be used to evaluate when such reduction is affecting data integrity. Then, by successive quantization, one can arrive at a Boolean dataset that still approximates the original data, with a known degradation in fidelity. This Boolean data can then be solved using binary decision diagrams to yield a set of solutions approximating the original full-dataset solution. That result is presented in a companion paper. This approach has many applications where approximate solutions are adequate and where compute resources (either processor capability or available time) are limited.