
International Journal of Distributed Sensor Networks

Volume 2013 (2013), Article ID 972641, 7 pages

http://dx.doi.org/10.1155/2013/972641

## A Method to Determine Basic Probability Assignment in Context Awareness of a Moving Object

Donghyok Suh^{1} and Juhye Yook^{2}

^{1}Department of Multimedia Communication, Far East University, 76-32 Daehak-Gil, Gamkok-Myeon, Eumsung-gun, Chungbuk 369-700, Republic of Korea

^{2}Department of Rehabilitation Technology, Korea Nazarene University, 48 Wolbong-Ro, Seobuk-Ku, Cheonan City, Chungnam 331-718, Republic of Korea

Received 22 March 2013; Accepted 16 July 2013

Academic Editor: Tai-hoon Kim

Copyright © 2013 Donghyok Suh and Juhye Yook. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Determining basic probability assignment (BPA) is essential in multi-sensor data fusion using Fuzzy Theory or Dempster-Shafer Theory (DST). This study presents a method to determine BPA from the data reported by sensors alone, without depending on information modeled prior to actual events. The method was used to determine BPA for multi-sensor data fusion so that a walking, moving pedestrian could recognize a moving object. The method evaluates how each sensor's measurements change over time. The BPA of each focal element was normalized so that the evaluated change patterns satisfied the basic properties of BPA in DST: after evaluating the sensor data, the BPA of each focal element lies between 0 and 1, and the BPAs of all focal elements sum to 1. The study showed that a pedestrian could recognize a moving object with this method of determining BPA through multi-sensor data fusion.

#### 1. Introduction

Multi-sensor data fusion makes it possible to extract meaningful, higher-level information from the fragmentary information acquired by many basic information sources. It has been applied to flight, space, robotics, image processing, geographic information, biometric identification, and so forth. Fuzzy Theory and Dempster-Shafer Theory (DST) are used for multi-sensor data fusion; these theories provide a way to express the uncertain and ambiguous real world with mathematical logic. Today, ubiquitous sensor networks have drawn much attention, and research is actively in progress on recognizing contexts, and inferring their causes, by fusing the data of networks of heterogeneous sensors built to acquire richer context information. Basic probability assignment (BPA) plays a key role when Fuzzy Theory and DST are used in multi-sensor data fusion. The fusion process based on DST depends on BPA, and therefore data fusion and context inference are difficult to carry out without it. BPA, also called the mass function, is not itself a probability; it is the basis for expressing stochastically whether each hypothesis, called a "focal element" in Evidence Theory, is true, and what the degree of uncertainty is. The belief, plausibility, and uncertainty of each focal element, or hypothesis, can be calculated from the BPA. This study presents a method to determine BPA in the context of recognizing a moving object. Context awareness from multiple sensors is usually achieved by modeling data acquired prior to events. However, it is difficult, if not impossible, to have information for all contexts before they occur in the real world, so a method is needed to determine BPA for context awareness from sensor data acquired on site, without prior information.
Therefore, the purpose of this study was for a moving observer to recognize a moving object using only sensor data, without any prior information about the environment.

Most studies on acquiring information about moving objects in the real world have estimated the objects' locations, with the observer and the sensors and equipment measuring the objects staying in fixed positions. Such studies, for example smart walk systems providing route guidance or geographic features, have focused on static information. However, a walking pedestrian or a small vehicle running at low speed needs information about moving objects in the environment, such as single-driver electric cars, motorcycles, and bicycles, as well as static information about geographic features and transportation. Acquiring and using information about moving objects that share the road or are encountered along it, in addition to static features such as stairs and bumps, is important so that a pedestrian or a person in a small vehicle does not collide with or contact them. High-performance sensors and radars that capture such advanced information are available for expensive cars, but not for pedestrians and small, low-speed vehicles. Multi-sensor data fusion by DST can provide valuable results, acquiring advanced information about moving objects from a combination of low-priced sensors, and the method of determining BPA, a key aspect of this setting, would be widely useful.

The paper is organized as follows. Related studies are analyzed in Section 2, and a new way of determining BPA is introduced in Section 3. Experiments and evaluation are presented in Section 4, and a conclusion is drawn in Section 5.

#### 2. Related Research

*Dempster-Shafer Evidence Theory and Basic Probability Assignment*.

Dempster-Shafer Theory (DST) is a theory for combining evidence once independent basic probability assignments (BPAs) have been defined. Let $\Theta$ be the frame of discernment, a universal set consisting of mutually exclusive propositions, and let $2^{\Theta}$ be its power set, incorporating all possible combinations of the propositions. A BPA is a function $m: 2^{\Theta} \to [0,1]$ with the following properties:

$$0 \le m(A) \le 1 \ \text{for all}\ A \subseteq \Theta, \qquad m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1.$$

Each sensor $s_i$ contributes its observation by assigning its beliefs over $\Theta$. This assignment is called the "basic probability assignment" of sensor $s_i$, denoted by $m_i$. According to sensor $s_i$'s observation, the probability of a proposition $A$ is indicated by a "confidence interval" $[\mathrm{Bel}_i(A), \mathrm{Pl}_i(A)]$.

The lower bound of the confidence interval is the belief, which accounts for all evidence supporting the given proposition:

$$\mathrm{Bel}_i(A) = \sum_{B \subseteq A} m_i(B).$$

The upper bound of the confidence interval is the plausibility, which accounts for all the observations that do not rule out the given proposition:

$$\mathrm{Pl}_i(A) = \sum_{B \cap A \neq \emptyset} m_i(B).$$

For each possible proposition $A$, DST gives a rule for combining sensor $s_i$'s observation $m_i$ and sensor $s_j$'s observation $m_j$:

$$(m_i \oplus m_j)(A) = \frac{\sum_{B \cap C = A} m_i(B)\, m_j(C)}{1 - \sum_{B \cap C = \emptyset} m_i(B)\, m_j(C)}.$$

The difference $\mathrm{Pl}(A) - \mathrm{Bel}(A)$ describes the evidential interval range, which represents the uncertainty [1, 2].
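As a concrete illustration, the belief, plausibility, and combination rule above can be sketched in Python. The focal elements, sensor names, and mass values here are hypothetical, not taken from the paper's experiment.

```python
# Sketch of the DST quantities above: focal elements are frozensets of
# propositions; a BPA is a dict mapping focal element -> mass.

def belief(m, A):
    # Bel(A): total mass of focal elements contained in A
    return sum(v for B, v in m.items() if B <= A)

def plausibility(m, A):
    # Pl(A): total mass of focal elements that intersect A
    return sum(v for B, v in m.items() if B & A)

def dempster_combine(m1, m2):
    # Dempster's rule: renormalize the non-conflicting product masses
    combined, conflict = {}, 0.0
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

APPROACH = frozenset({"approach"})
RECEDE = frozenset({"recede"})
THETA = APPROACH | RECEDE

# hypothetical BPAs from two sensors over THETA = {approach, recede}
m_sound = {APPROACH: 0.6, THETA: 0.4}
m_ultra = {APPROACH: 0.5, RECEDE: 0.2, THETA: 0.3}

fused = dempster_combine(m_sound, m_ultra)
bel = belief(fused, APPROACH)
unc = plausibility(fused, APPROACH) - bel  # evidential interval width
```

Note that the fused masses again sum to 1, and the mass left on the whole frame THETA is exactly what keeps plausibility above belief, i.e., the uncertainty.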

##### 2.1. Cases of BPA Determination

Bentabet and Jiang suggested a novel way to determine mass functions in DST and applied it to image segmentation with iterative Markovian estimation [3]. Ben Chaabane et al. suggested a method for estimating mass functions in DST and applied it to color image segmentation [4]. Ali and Dutta suggested a way to determine BPA when the operators of minimum, maximum, and approximate values are known [5]. Jiang et al. showed a way to determine BPA by measuring the distance between subject properties and tested sample data [6]. Zuo et al. introduced a method to determine BPA using rough set theory based on random set theory and BP neural networks [7]. Boudraa et al. showed a method to determine BPA for image segmentation using fuzzy membership functions of pixels derived from image histograms [8].

The weakness of the methods in these previous studies is that they required a large amount of learning data. The subjects and results of data fusion were also limited to static settings.

#### 3. BPA Determination to Recognize Moving Objects

The method to determine BPA in this study acquires information about a moving object while the observer is also moving, so as to recognize the relative context and relations between the moving observer and the moving object, that is, the relative situation between moving objects on the road. Context awareness is usually achieved by matching acquired sensor data against models built from information saved prior to events; this study supplements that approach for the cases where information from the real world goes beyond the range of the preset data models.

A moving observer was equipped with three different sensors, and multi-sensor data fusion using DST was carried out on the data these sensors acquired from a moving object encountered on the road; a new method to determine BPA for this case is presented. In DST-based multi-sensor data fusion, the belief and plausibility of each focal element are calculated after its BPA is determined, the uncertainty is obtained as the difference between belief and plausibility, and situations are inferred by comparing the belief and uncertainty of each focal element. A correct evaluation of the sensor data perceived from moving objects should therefore be reflected in the BPA determination.

##### 3.1. Measurement and Context Awareness

The study environment was dynamic, as most of the real world is. Time progress is considered in this dynamic situation, unlike biometrics or geographic information based on image processing of static scenes. The sensor data are also measured values, scalar quantities without vector components. The study acquires directional context information about a moving object by analyzing these scalar quantities in combination with time progress.

##### 3.2. Time Division and Average Rate of Change in Time Intervals

There are particular cases in which the absolute values of sensor measurements carry the important meaning, but here an analysis of how the measurements change is more useful for acquiring information. Changing patterns of the sensor measurements over time are evaluated by calculating the average rates of change in time intervals. The Fuzzy C-Means (FCM) algorithm serves as the reference: in FCM, acquired and saved data are clustered into K clusters, each cluster defines its central point, and the data in the clusters are evaluated by the Euclidean distance to the clusters' central points.

In this study, clustering by sensor type existed naturally. The rates of change in fixed time intervals were calculated in the way the clusters' central points are. However, unlike FCM, individual data points were not evaluated by comparison with average values; the study focused on comparing the amounts of change between previously measured values and those in the next time intervals. Increases and decreases in the measured values across time intervals were meaningful results for the observer carrying the sensors.
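The interval-wise comparison described above amounts to computing an average rate of change per fixed time step. A minimal sketch, with hypothetical sensor readings:

```python
def interval_rates(values, dt=1.0):
    # average rate of change over each fixed time interval: delta / dt
    return [(b - a) / dt for a, b in zip(values, values[1:])]

# hypothetical sound-level readings, one per second
sound = [42.0, 44.5, 47.0, 51.0, 56.0]
rates = interval_rates(sound)
# growing successive rates suggest the sound source is closing in
```

Comparing each rate with the one from the previous interval, rather than with a global average, is what distinguishes this evaluation from the FCM-style comparison against cluster centers.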

An issue could be raised here: a single sensor could determine whether a moving object is approaching or going away faster and more simply, by the same comparison of the amounts of change between successive time intervals, than multiple sensors with data fusion could. High-priced cars can be equipped with high-performance sensors and radars, but this study focused on pedestrians and small, lightweight vehicles without expensive or heavy equipment, and a single sensor has limitations for context awareness. A sound sensor has difficulty differentiating ambient background noise from a moving object's sound. An ultrasonic sensor is good at measuring the distance to a moving object but makes errors for objects with low reflection. An infrared light sensor has high recognition rates for humans and animals, which reflect sound waves poorly, but its recognition distance is relatively short. Processing heterogeneous sensors' data by multi-sensor data fusion can therefore yield high recognition rates and advanced context information.

##### 3.3. Evaluating Signal Variation Rates

The functional formulae of the sound, ultrasonic, and infrared light sensors were defined as follows: the sound function was $s(t)$, the distance function measured by ultrasonic waves was $u(t)$, and the short-distance measurement function using the infrared light sensor was $r(t)$.

The variation rate in a time interval of the sound sensor was calculated as the change in the measured value over the interval:

$$\Delta s_k = s(t_k) - s(t_{k-1}).$$

The variation rate of distance was fixed at 0, so as not to contribute to the risk factors, whenever the measured distance value reported by the distance sensors was increasing, because an increasing distance means the moving object is going away. The time from 1 to 30 seconds was measured once per second.

To obtain the BPA based on the measurement values of the sound sensor, among the three types of sensors, the variation rate of the slope over each interval of the sound measurements was defined from the delta value as below.

The slope angle was expressed as $\arctan(\Delta s)$ from the sound delta value $\Delta s$. The total absolute delta value, regardless of the sign of the variation rate, was divided by the total of the delta values and multiplied by $(t/T)^2$, where $T$ is the total time. The resulting value was high when the sound level increased as time went by; it was set to 1 if it exceeded 1 and to 0 if it fell below 0, which fixes the value within the range when the sound decreased rapidly, because the BPA of each focal element must lie between 0 and 1. This is the way to calculate BPA from the variation rates, by time interval, of the values measured by the sound sensor. The same procedure calculates BPA from the variation rates of the distance values $u(t)$ reported by the ultrasonic sensor and of the short-range distance values $r(t)$ reported by the infrared light sensor; the process of calculating BPA using $u(t)$ and $r(t)$ is identical to that using the sound value $s(t)$.

The BPA from the distance values reported by the ultrasonic and infrared light sensors was 0 when the measured distance was increasing, meaning the object was going away. When the variation rates from both the sound sensor and the ultrasonic sensor had to be considered, the BPA of the sound sensor and that of the ultrasonic sensor were added. When the ultrasonic and infrared light sensors generated consecutive events and the sound sensor did not, the BPAs based on the variation rates of those two sensors were added.

The variation patterns of sensor values gave a base to evaluate the relations with situations. The results from recognizing and evaluating change patterns of sensor values were reflected on BPA determination for multi-sensor data fusion by using DST.
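One possible reading of the normalization described in this section can be sketched as follows. The exact scaling used in the paper is not fully recoverable from the text, so this is an interpretation: the only properties taken as given are that the value grows with the rate of increase of the signal and is clipped to the interval [0, 1].

```python
import math

def signal_bpa(deltas, total_time):
    # Interpretation sketch (not the paper's exact formula): average the
    # slope angles arctan(delta) of the interval-wise deltas, scale by
    # the elapsed fraction of the total time, and clip to [0, 1] as DST
    # requires of a mass value.
    if not deltas:
        return 0.0
    mean_angle = sum(math.atan(d) for d in deltas) / len(deltas)
    raw = (mean_angle / (math.pi / 2)) * (len(deltas) / total_time) ** 2
    return min(1.0, max(0.0, raw))  # 1 if above 1, 0 if the signal decreased
```

The clipping step is the part the text states explicitly: a rapidly decreasing signal yields 0, and the result always satisfies the basic BPA property of lying between 0 and 1.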

#### 4. Experiment and Evaluation

##### 4.1. Experiment

In this section, multi-sensor data fusion using the BPA determination method from Section 3 was applied to investigate whether a moving object would be a risk factor to the moving observer. Cars, motorcycles, bicycles, electric cars, wheelchairs, pedestrians, and so on can be considered moving objects on a real road. Cars and two-wheeled motor vehicles with internal combustion engines using fossil fuel make loud noise and run at high speed. Electric cars, wheelchairs, and pedestrians make relatively less noise and move at low speed.

The experiment in this section was that a pedestrian, the moving observer, recognized the moving object to avoid possible collision or contact on the road.

*Sensors Used*: sound sensor, ultrasonic sensor, and infrared light sensor.

Sensors' specifications:
- Sound sensor: sensitivity −403 dB; impedance max 2.2x; directivity omnidirectional.
- Ultrasonic sensor: detectable range 0.03–6 m; nominal frequency 40 kHz; resolution ±3 mm.
- Infrared light sensor: field of view 138°; operating voltage 3–10 V DC.
- O/S: TinyOS 2.0.

*Experiment Specification*. The sound sensor reported the sound made by the moving object; it then additionally reported the changes in that sound, to detect whether the moving object was approaching or going away from the moving observer. When the sound sensor found and reported the moving object, the ultrasonic sensor began to operate and reported whether the object was approaching; it detected and reported whether the object was approaching or going away at regular time intervals. To supplement the ultrasonic sensor's detection capability, the infrared light sensor additionally operated at close distance to the moving object.
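The triggering cascade in the experiment specification can be sketched as a small decision function; the threshold values here are assumptions for illustration, not taken from the paper.

```python
def active_sensors(sound_db, distance_m,
                   sound_threshold=50.0, close_range_m=1.0):
    # The sound sensor runs continuously; hearing a moving object starts
    # the ultrasonic sensor; at close range the infrared sensor is added
    # to cover targets that reflect ultrasound poorly.
    # sound_threshold and close_range_m are assumed values.
    active = ["sound"]
    if sound_db >= sound_threshold:
        active.append("ultrasonic")
        if distance_m is not None and distance_m <= close_range_m:
            active.append("infrared")
    return active
```

Staging the sensors this way keeps the low-powered infrared sensor idle until the ultrasonic distance falls into its short recognition range.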

The change patterns of the reported values were meaningful in showing gradual increases or decreases. The purpose was to detect, using the sensor system, whether the moving object was approaching or going away from the moving observer. The detected and reported signals could be evaluated by continuously comparing the differences between sensing values as time progressed.

Time in Table 1 indicates the moment of detection; that is, 1 was the first detection and 2 the second. The loudness of the sound in dB detected by the sound sensor was recorded. Distance 1 was the distance measured by the ultrasonic sensor, and Distance 2 the distance measured by the infrared light sensor.

The experiment verified whether inference using multi-sensor data fusion could determine whether moving objects around pedestrians were risk factors, that is, whether the sound, ultrasonic, and infrared light sensors together could detect and report events related to pedestrians' safety from objects emerging, moving, or approaching in the pedestrians' path. The sound sensor detected moving objects by their noise and then determined whether they were approaching or going away by sensing the changes in the loudness of that noise. The variation rates of the loudness in each interval were the basis for calculating the BPA: the acquired signals were converted into variation rates, the change patterns were evaluated, and the degrees of the moving object's approach are displayed in Figure 1.

In Table 2, the BPA of each focal element was calculated, using the method presented in this study, after the average variation rates of each focal element between successive detection times were evaluated. The *belief* and *uncertainty* values of each focal element obtained from the BPA are presented; they were calculated by multi-sensor data fusion based on DST. Table 2 shows that BPA could be calculated from the variation rates in time intervals and used for calculating *belief* and *uncertainty* in each focal element, as a result of data fusion for context inference.

Table 3 presents the *belief* values after calculating the BPA of each focal element over each of the successive time intervals in the experiment.

The risk increased as the moving object came closer, as detected by the ultrasonic sensor. The signal detected and reported by the infrared light sensor also produced a rapid increase in the evaluated risk. These results show that when BPA was determined by evaluating and reflecting the variation rates of the signal data reported by the sensors, without prior information, the risk increased as the moving object approached.

The study shows that BPA determined by evaluating signal variation rates can be used to infer risk factors by calculating *belief* and *uncertainty* for each focal element. Final context inference is based on both *belief* and *uncertainty*, but in this study whether the moving object was approaching, or was a risk factor, was inferred from the changes in the *belief* values alone, because the *belief* values express the support for the hypothesis of a focal element, that is, the probability of the definite occurrence of its events.

##### 4.2. Evaluation

The BPA calculation method is intended for application to a smart cane for the blind. Walking support systems for the blind have so far been developed mainly to detect static factors such as stairs, roads, and barriers. The BPA calculation method presented in this study detects actual dangers from the risky activities and approaches of moving objects, supporting individuals with visual disability in walking outside. A walking support system for people with visual disability using this BPA calculation method can be examined in two respects.

*Evaluation 1: Comparing Existing and Present Systems*. Table 4 compares the prior walking support systems and products for the blind with the present system using the BPA calculation method. The systems divide into two types, Electronic Travel Aids (ETA) and Robotic Travel Aids (RTA). Guido, Care-O-Bot, Nursebot, PAWSS, Adaptive Walker, Robotic Walker, and S.J.L are examples of RTA, and L.L.L.S, Smart Wand, and A.S are ETA. The evaluation items were designed to see (a) whether the systems recognized fixed barriers such as upward and downward stairs and projecting objects, (b) whether they had functions to avoid or detour around detected barriers, (c) whether they were aware of moving objects, and (d) whether they detected pedestrians or big animals. Most of the RTA systems, and the ETA systems using ultrasonic sensors, detected and avoided barriers and had various additional features, but they did not recognize moving objects such as cars, two-wheeled vehicles, and bicycles on the road; nor did they notice people riding bicycles, running, or walking, or big animals such as dogs.

On the other hand, the walking support system for the blind using multi-sensor data fusion based on the BPA calculation method presented in this study could recognize moving objects on the road and detect whether they were threats, in addition to noticing fixed barriers. The presented method could also recognize living bodies, which ultrasonic sensors cannot easily detect.

*Evaluation 2: Comparing Sensors' Components*. Most of the existing ETA systems used only ultrasonic sensors, or used them as the main sensor. The study verified that using multiple sensors gives more reliable results than a single sensor when a walking support system for persons with visual disability detects contact risks with moving objects, other pedestrians, and other risk factors on the road. As the experimental results in Table 3 show, the belief values from each single sensor alone, the sound sensor, the ultrasonic sensor, or the infrared light sensor, were weak and did not define changes clearly enough to determine whether the moving objects were threats. The approaches of the moving objects were not detected even when the sound sensor and the ultrasonic sensor were used together. However, the belief values of focal elements based on events detected and reported by the sound and infrared light sensors together, or by the ultrasonic and infrared light sensors together, showed noticeable change patterns.

These results, shown in Table 5, indicate that it was difficult for the sound sensor alone to detect the approach of moving objects, because their sound blended with the background noise and because bicycles and pedestrians did not make a significant sound. The ultrasonic sensor did not clearly recognize objects with irregular surfaces that reflected inaccurately, or living bodies such as pedestrians and big animals, which reflect poorly on the road. The infrared light sensor alone did not detect well moving objects that were far away or that did not use a heat-generating internal combustion engine.

#### 5. Conclusion

BPA determination plays a key role in multi-sensor data fusion using Fuzzy Theory and DST. The study provided a method to determine BPA for multi-sensor data fusion, so that the contexts of a moving observer and a moving object could be recognized by evaluating sensor data, without depending on information modeled prior to actual events. In this method, the changes of the values measured by each sensor over time were evaluated and normalized to assess their meaning; the BPA of each focal element lay between 0 and 1, and the BPAs of all focal elements summed to 1. The study showed that context awareness was possible: multi-sensor data fusion using this BPA determination detected whether the moving observer recognized the risk of a moving object with the sound, ultrasonic, and infrared light sensors. Future study is needed to interlink context data sets modeled from prior information into the sensor data fusion system.

#### References

1. A. P. Dempster, "New methods for reasoning towards posterior distributions based on sample data," *The Annals of Mathematical Statistics*, vol. 37, no. 2, pp. 355–374, 1966.
2. G. Shafer, *A Mathematical Theory of Evidence*, Princeton University Press, Princeton, NJ, USA, 1976.
3. L. Bentabet and M. Jiang, "Iterative Markovian estimation of mass functions in Dempster Shafer evidence theory: application to multisensor image segmentation," in *Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning*, vol. 6064 of *SPIE Proceedings*, January 2006.
4. S. Ben Chaabane, M. Sayadi, F. Fnaiech, et al., "A new method for the estimation of mass functions in the Dempster-Shafer's evidence theory: application to colour image segmentation," *Circuits, Systems, and Signal Processing*, vol. 30, no. 1, pp. 55–71, 2011.
5. T. Ali and P. Dutta, "Methods to obtain basic probability assignment in evidence theory," *International Journal of Computer Applications*, vol. 38, no. 4, 2012.
6. W. Jiang, Y. Deng, and J. Peng, "A new method to determine BPA in evidence theory," *Journal of Computers*, vol. 6, no. 6, pp. 1162–1167, 2011.
7. Z. Zuo, Y. Xu, and G. Chen, "A new method of obtaining BPA and application to the bearing fault diagnosis of wind turbine," in *Proceedings of the International Symposium on Information Processing (ISIP '09)*, pp. 368–371, August 2009.
8. A. O. Boudraa, A. Bentabet, F. Salzenstein, and L. Guillon, "Dempster-Shafer's basic probability assignment based on fuzzy membership functions," *Electronic Letters on Computer Vision and Image Analysis*, vol. 4, no. 1, pp. 1–9, 2004.