
International Journal of Distributed Sensor Networks

Volume 2012 (2012), Article ID 871596, 11 pages

http://dx.doi.org/10.1155/2012/871596

## Linear Decision Fusion under the Control of Constrained PSO for WSNs

^{1}College of Information Science and Technology, Beijing Normal University, Beijing 100875, China
^{2}Key Laboratory of Inertial Technology for National Defense, Beihang University, Beijing 100191, China

Received 31 May 2011; Revised 8 September 2011; Accepted 26 September 2011

Academic Editor: Don Sofge

Copyright © 2012 Sisi Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A major application of a distributed WSN (wireless sensor network) is to monitor a specific area in order to detect events such as disasters or intrusions. To achieve this, each sensor in the network collects local observations, which are probably corrupted by noise, makes a local decision regarding the presence or absence of an event, and then sends this local decision to a fusion center. The fusion center makes the final decision based on these local decisions and a decision fusion rule, so an efficient decision fusion rule is critical. Since the decision-making capability of each node differs owing to dissimilar signal-to-noise ratios and other factors, a specific sensor's contribution to the global decision should be weighted by that sensor's decision-making capability. Based on this idea, we establish a novel linear decision fusion model for WSNs. Moreover, the constrained particle swarm optimization (constrained PSO) algorithm is employed to control the parameters of this model, and a typical penalty function is applied to solve the constrained PSO problem. The simulation results indicate that our design is capable of achieving very high accuracy.

#### 1. Introduction

Owing to the low cost and ease of operation, wireless sensor networks are ideal for a wide variety of applications such as environmental monitoring, smart factory instrumentation, intelligent transportation, and remote surveillance [1–3]. When a WSN is used to monitor an area, each sensor node in the network should collect local observations and send a summary (compressed or partially processed data) to the fusion center. Then the fusion center uses these summaries and a specific decision fusion rule to make the final global decision, as shown in Figure 1.

Actually, WSN issues such as node deployment, localization, energy-aware clustering, and data aggregation are often cast as optimization problems. An optimization method that requires moderate memory and computational resources and yet produces good results is desirable, especially for implementation on an individual sensor node. Bio-inspired optimization methods are computationally efficient alternatives to analytical methods. Particle swarm optimization (PSO) [4] is a popular multidimensional optimization technique whose advantages include ease of implementation, high solution quality, computational efficiency, and fast convergence. In [5–9], PSO is used to discover the optimal WSN deployment. Static deployment is a one-time process in which solution quality is more important than fast convergence, so PSO suits centralized static deployment; dynamic deployment, by contrast, requires fast PSO variants, otherwise the optimization limits network scalability. The works in [10–14] apply PSO to WSN localization. Owing to energy constraints, distributed localization is desirable; though PSO is appropriate for distributed localization, the choice is influenced by the memory available on the nodes. PSO is also used for energy-aware clustering in [15–17], where clustering is a centralized optimization carried out at a resource-rich base station and optimal clustering strongly influences WSN performance. Recently, some researchers have begun to use PSO for WSN data aggregation. As discussed in [18], PSO has been applied to several aspects of data aggregation, as follows.

(a) In [19], Wimalajeewa and Jayaweera address the problem of optimal power allocation through constrained PSO. Their algorithm *PSO-Opt-Alloc* uses PSO to determine the optimal power allocation for both independent and correlated observations; the objective is to minimize the energy expenditure while keeping the fusion-error probability under a required threshold.

(b) In [20], Veeramachaneni and Osadciw present a hybrid of ant-based control and PSO (*ABC-PSO*) for hierarchy and threshold management in decentralized serial sensor networks. PSO determines the optimal thresholds and fusion rules for the sensors, while the ant colony optimization algorithm determines the hierarchy of sensor decision communication.

(c) In [21], Veeramachaneni and Osadciw present a binary multiobjective PSO (*BMPSO*) for the optimal configuration of multiple sensor networks. PSO is modified to optimize two objectives, accuracy and time; the output of the algorithm is the choice of sensors, the individual sensor thresholds, and the optimal decision fusion rule.

As mentioned above, data fusion is a distributed repetitive process which is moderately suitable for PSO. Effective data fusion influences overall WSN performance and demands quick-convergence optimization techniques that assure high-quality solutions. Therefore, it is reasonable for us to choose PSO to control the parameters of our fusion rule.

In this paper, we design a linear decision fusion rule. In this rule, a linear equation computes the integrated contribution of all the local decisions: each local decision is multiplied by a weight that reflects the corresponding sensor's decision-making capability, and the result of the equation is the sum of all the local sensors' contributions. By comparing the result of the linear equation with a threshold, the final decision is made in the fusion center. In order to obtain the smallest error probability, constrained PSO is employed to find the optimal local decision weights and the threshold.

The rest of the paper is organized as follows. Section 2 discusses the constrained-PSO algorithm briefly. Section 3 describes our linear decision fusion rule in detail. The performance of our fusion rule is evaluated in Section 4, and Section 5 concludes this paper.

#### 2. Brief Introduction of Constrained PSO Algorithm

The particle swarm optimization (PSO) algorithm is a population-based evolutionary computation technique originally developed by Kennedy and Eberhart [4]. The algorithm performs well in highly multimodal search spaces, producing an optimized solution to an *n*-dimensional problem. PSO, originally introduced in terms of social and cognitive behavior, is widely used as a bottom-up problem-solving method in engineering and computer science.

In the basic PSO algorithm, a swarm consists of particles flying around in an *n*-dimensional search space. The position of the *i*th particle at the *t*th iteration is used to evaluate the particle and represents a candidate solution for the optimization problem. It can be written as $X_i(t) = (x_{i1}, x_{i2}, \ldots, x_{in})$, where $x_{id}$ is the position value of the *i*th particle with respect to the *d*th dimension ($d = 1, \ldots, n$). During the search process, the position of a particle is affected by two factors: the best position it has visited so far, denoted $pbest_i$, and the best position found by the swarm so far, denoted $gbest$. The new velocity $V_i(t+1)$ and the position of particle $i$ at the next iteration are calculated as

$$V_i(t+1) = \omega V_i(t) + c_1 r_1 \left(pbest_i - X_i(t)\right) + c_2 r_2 \left(gbest - X_i(t)\right), \qquad X_i(t+1) = X_i(t) + V_i(t+1), \quad (1)$$

where $\omega$ is called the inertia parameter, $c_1$ and $c_2$ are, respectively, the cognitive and social learning parameters, and $r_1$, $r_2$ are random numbers drawn uniformly from (0, 1). Relying on expressions (1), the particles fly through the search space toward $pbest_i$ and $gbest$ in a guided way while still exploring new areas through the stochastic mechanism, which helps them escape local optima. The process by which a particle updates its location is shown in Figure 2.
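As a concrete illustration, the update rules in expressions (1) can be sketched in a few lines of Python. This is a minimal sketch only: the sphere test function, the search bounds, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import random

def pso(f, dim, n_particles=25, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over [-10, 10]^dim with the basic (unconstrained) PSO."""
    X = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                     # best position seen by each particle
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # expressions (1): inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# minimise the 2-D sphere function as a smoke test
best_x, best_f = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

On a smooth convex function like the sphere, the swarm collapses onto the minimum well within the 200 iterations used here.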

Further, a constrained PSO problem can be described as

$$\min_{X} f(X) \quad \text{subject to} \quad g(X) \le 0, \; h(X) \le 0, \quad (2)$$

where $f(X)$ is the objective function, $g(X)$ is the nonlinear constraint, and $h(X)$ is the linear constraint. In this paper, we discuss only the situation with a linear constraint. The most common approach to solving constrained PSO problems is to use a penalty function: the constrained problem is transformed into an unconstrained one by penalizing constraint violations and building a single objective function, which in turn is minimized by an unconstrained optimization algorithm [22–24].

Generally, penalty functions fall into two main classes: stationary and nonstationary. Stationary penalty functions use fixed penalty values throughout the minimization, whereas nonstationary penalty functions modify the penalty values dynamically. In this paper, we use a stationary penalty function, generally defined as [25]

$$F(X) = f(X) + h \, H(X), \quad (3)$$

where $f(X)$ is the original objective function of the constrained PSO problem, $h$ is a fixed penalty value, and $H(X)$ is a penalty factor that is positive when the constraints are violated and zero otherwise. In this way, the constrained problem becomes unconstrained, and we can use the basic PSO to find the smallest $F(X)$.
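Concretely, a stationary penalty of this kind can be built as a thin wrapper around the original objective. The sketch below is an assumption-laden illustration: the quadratic penalty shape and the value `h = 1000` are illustrative choices, not values from the paper.

```python
def penalised(f, constraints, h=1000.0):
    """Stationary penalty: F(x) = f(x) + h * sum(max(0, g(x))^2).

    `constraints` is a list of functions g with feasibility meaning g(x) <= 0;
    h is the fixed penalty value (an illustrative choice here).
    """
    def F(x):
        # penalty factor: positive only where some constraint is violated
        penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + h * penalty
    return F

# example: minimise x^2 subject to 1 - x <= 0 (i.e. x >= 1)
F = penalised(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])
```

At feasible points the penalty vanishes and `F` coincides with the original objective; infeasible points are pushed up steeply, so an unconstrained minimizer such as basic PSO is driven back into the feasible region.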

Building on this knowledge of constrained PSO, we employ it to find the optimal solution for the parameters of our decision fusion rule, as discussed in detail in the next section.

#### 3. The Linear Decision Fusion Rule

In this section, we discuss the topic in three parts. In Section 3.1, we formulate the decision fusion problem for a binary hypothesis situation and derive the expressions of some crucial parameters, including the local error probabilities $p_{f_i}$, $p_{m_i}$ and the local threshold $\lambda_i$. In Section 3.2, our linear decision fusion rule is introduced and, with the expressions given in Section 3.1, the expressions of the global error probabilities $P_F$, $P_M$ and the total error probability $P_E$ are derived. From Section 3.2, it can be seen that the total error probability mainly depends on the weights $w_i$ and the threshold $T$, so in Section 3.3 we discuss how to use PSO to optimize these parameters of our fusion rule.

##### 3.1. Decision Fusion Problem Formulation

We consider a binary hypothesis testing problem in an *n*-node distributed wireless network. The *i*th sensor observation under the two hypotheses is given by

$$H_0: \; y_i = n_i, \qquad H_1: \; y_i = s + n_i, \qquad i = 1, \ldots, n,$$

where $n_i$ is zero-mean Gaussian observation noise with variance $\sigma_i^2$, $s$ is the signal to be detected, and $u_i$ is the local decision to be made. In this paper, we consider the detection of a constant signal (e.g., $s = 1$). As a brief review, the problem of radar detection can be formulated as a hypothesis testing problem where the two hypotheses are

$$H_0: \; \text{target absent}, \qquad H_1: \; \text{target present}.$$

The conditional probability density functions are $p(y_i \mid H_0)$ and $p(y_i \mid H_1)$, so the decision can be made based on the following likelihood ratio test:

$$\Lambda(y_i) = \frac{p(y_i \mid H_1)}{p(y_i \mid H_0)} \; \underset{H_0}{\overset{H_1}{\gtrless}} \; \eta,$$

where $\eta$ is a threshold computed according to the Bayesian framework as

$$\eta = \frac{P_0 \, (C_{10} - C_{00})}{P_1 \, (C_{01} - C_{11})}.$$

Here $C_{01}$ is the error cost of deciding $H_0$ when $H_1$ is true (we consider there is no enemy but actually the enemy exists in the network), $C_{10}$ is the cost of deciding $H_1$ when $H_0$ is true (we consider the enemy exists but actually there is no enemy in the network), and $C_{00}$ and $C_{11}$ are the costs when we judge the existence of the enemy correctly. The prior probabilities of the two hypotheses $H_0$ and $H_1$ are denoted by $P_0$ and $P_1$, respectively. In real applications, the cost values depend on which kind of error would cause the more serious effect. Taking the radar detection mentioned above as an example, in the real world the error of missing an enemy is much more serious than the error of mistaking something else for an enemy, so in this example $C_{01} > C_{10}$. In this paper, we set $C_{00} = C_{11} = 0$ and $C_{01} = C_{10} = 1$, so the threshold becomes

$$\eta = \frac{P_0}{P_1}.$$

The values of the prior probabilities $P_0$ and $P_1$ depend on prior experience, so we assume that they are known before applying this decision fusion rule. Consider again the detection of a constant signal $s$ in zero-mean Gaussian observation noise with variance $\sigma_i^2$; then, for all $i$, the *i*th sensor observation obeys the Gaussian distribution

$$y_i \mid H_0 \sim \mathcal{N}(0, \sigma_i^2), \qquad y_i \mid H_1 \sim \mathcal{N}(s, \sigma_i^2).$$

Consequently, the conditional probability density functions can be calculated as

$$p(y_i \mid H_0) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{y_i^2}{2\sigma_i^2}\right), \qquad p(y_i \mid H_1) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{(y_i - s)^2}{2\sigma_i^2}\right),$$

and the decision rule of the *i*th sensor follows from the likelihood ratio test: decide $u_i = 1$ ($H_1$) if $y_i \ge \lambda_i$ and $u_i = 0$ ($H_0$) otherwise, where the threshold of the *i*th sensor is

$$\lambda_i = \frac{s}{2} + \frac{\sigma_i^2}{s} \ln \frac{P_0}{P_1}.$$

The threshold of each sensor differs owing to the different $\sigma_i^2$, which represents the noise character of that sensor. Accordingly, the local error probabilities $p_{f_i} = P(u_i = 1 \mid H_0)$ and $p_{m_i} = P(u_i = 0 \mid H_1)$ can be obtained as

$$p_{f_i} = Q\!\left(\frac{\lambda_i}{\sigma_i}\right), \qquad p_{m_i} = 1 - Q\!\left(\frac{\lambda_i - s}{\sigma_i}\right),$$

where $Q(x)$ denotes the tail probability of the standard Gaussian distribution.
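Under the Gaussian model of this subsection, the per-sensor threshold and local error probabilities can be computed directly. The sketch below uses `math.erfc` for the standard Gaussian tail function $Q(\cdot)$; the parameter values in the usage line are illustrative assumptions.

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def local_threshold(s, sigma, p0, p1):
    """Bayesian threshold lambda_i: decide H1 when y_i >= lambda_i."""
    return s / 2.0 + (sigma ** 2 / s) * math.log(p0 / p1)

def local_error_probs(s, sigma, p0, p1):
    """Return (p_f, p_m): local false-alarm and miss probabilities."""
    lam = local_threshold(s, sigma, p0, p1)
    p_false_alarm = q(lam / sigma)          # P(decide H1 | H0)
    p_miss = 1.0 - q((lam - s) / sigma)     # P(decide H0 | H1)
    return p_false_alarm, p_miss

# illustrative values: unit signal, noise std 0.5, equal priors
pf, pm = local_error_probs(s=1.0, sigma=0.5, p0=0.5, p1=0.5)
```

With equal priors the threshold sits at $s/2$, so the false-alarm and miss probabilities coincide; unequal priors shift $\lambda_i$ and trade one error against the other.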

##### 3.2. Our Linear Decision Fusion Rule

In this paper, we design a novel decision fusion model in which a linear equation denotes the integrated contribution of all the local decisions:

$$Z = \sum_{i=1}^{n} w_i u_i,$$

where $u_i \in \{0, 1\}$ is the local decision of the *i*th sensor and $w_i$ is the corresponding weight, which indicates the *i*th sensor's decision-making capability. To understand this model better, consider a group of workers who check an event together. First, each worker tries his best to draw a conclusion about the event and sends his decision to the leader. The leader makes a final decision based on all the conclusions given by the workers, and during this process the decisions made by the more capable workers receive more attention from the leader. In this analogy, the workers correspond to the sensors in the network, the leader corresponds to the fusion center, and the ability of a worker corresponds to the weight $w_i$. When a sensor's decision-making capability is better, its weight should be greater; in other words, if a sensor can make decisions more accurately, its contribution to $Z$ is considered bigger, so it is reasonable to use $Z$ to denote the integrated contribution of the local decisions in the fusion center. Then $Z$ is compared with a threshold $T$, and the final decision $u_0$ is made according to

$$u_0 = \begin{cases} 1 \; (H_1), & Z \ge T, \\ 0 \; (H_0), & Z < T. \end{cases}$$

Suppose all the local decisions are independent of each other; then $P(u_1, \ldots, u_n \mid H_0) = \prod_{i=1}^{n} P(u_i \mid H_0)$ and $P(u_1, \ldots, u_n \mid H_1) = \prod_{i=1}^{n} P(u_i \mid H_1)$. Besides, the global error probabilities $P_F$ and $P_M$ take all $2^n$ combinations of the local decisions $(u_1, \ldots, u_n)$ into account. Therefore, the two kinds of errors that the fusion center may make can be formulated as

$$P_F = \sum_{(u_1, \ldots, u_n):\; Z \ge T} \; \prod_{i=1}^{n} P(u_i \mid H_0), \qquad P_M = \sum_{(u_1, \ldots, u_n):\; Z < T} \; \prod_{i=1}^{n} P(u_i \mid H_1),$$

where $P(u_i = 1 \mid H_0) = p_{f_i}$, $P(u_i = 0 \mid H_0) = 1 - p_{f_i}$, $P(u_i = 0 \mid H_1) = p_{m_i}$, and $P(u_i = 1 \mid H_1) = 1 - p_{m_i}$. With the local error probabilities derived in Section 3.1, the total error cost can be calculated as

$$P_E = P_0 P_F + P_1 P_M.$$

To understand this model better, take the two-sensor case as an example. Suppose there are only two sensors in the network and their local decisions are $u_1$ and $u_2$, respectively. Then, according to the linear decision fusion model, the integrated contribution of the two local decisions is $Z = w_1 u_1 + w_2 u_2$.

It is obvious that there are four combinations of $u_1$, $u_2$, and the resulting $Z$, as shown in Table 1.

Then, referring to the expressions for $P_F$ and $P_M$ above, for the two-sensor case we have

$$P_F = \sum_{(u_1, u_2):\; w_1 u_1 + w_2 u_2 \ge T} P(u_1 \mid H_0)\, P(u_2 \mid H_0), \qquad P_M = \sum_{(u_1, u_2):\; w_1 u_1 + w_2 u_2 < T} P(u_1 \mid H_1)\, P(u_2 \mid H_1),$$

and hence the total error probability $P_E = P_0 P_F + P_1 P_M$. Here it can be seen that the value of the total error probability depends upon $w_1$, $w_2$, and $T$, so if the optimal $w_1$, $w_2$, and $T$ can be found, the smallest error probability is achieved. In the following, let us discuss how to obtain the optimal weights and threshold with the help of constrained PSO.
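The enumeration over decision combinations (Table 1 in the two-sensor case) generalises to any number of sensors. The sketch below assumes local decisions in {0, 1} and independence, as in this section; the AND-rule weights and error rates in the usage example are chosen purely for illustration.

```python
from itertools import product

def global_error_probs(weights, T, pf, pm):
    """Exhaustively enumerate local-decision vectors u in {0,1}^n.

    pf[i] = P(u_i = 1 | H0), pm[i] = P(u_i = 0 | H1); decisions independent.
    The fusion center decides H1 when sum(w_i * u_i) >= T.
    """
    n = len(weights)
    P_F = P_M = 0.0
    for u in product((0, 1), repeat=n):
        p_h0 = p_h1 = 1.0
        for i, ui in enumerate(u):
            p_h0 *= pf[i] if ui == 1 else 1.0 - pf[i]
            p_h1 *= 1.0 - pm[i] if ui == 1 else pm[i]
        z = sum(w * ui for w, ui in zip(weights, u))
        if z >= T:
            P_F += p_h0   # fusion says H1 under H0: false alarm
        else:
            P_M += p_h1   # fusion says H0 under H1: miss
    return P_F, P_M

# illustrative two-sensor AND rule: both sensors must say H1
P_F, P_M = global_error_probs([1.0, 1.0], 2.0, [0.1, 0.1], [0.1, 0.1])
```

For the AND rule, a false alarm needs both sensors to err under $H_0$ (probability $0.1 \times 0.1 = 0.01$), while a miss occurs whenever at least one sensor errs under $H_1$ ($1 - 0.9^2 = 0.19$), matching the enumeration.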

##### 3.3. Using Constrained PSO to Control the Linear Decision Fusion Rule

It is obvious that if all the local decisions are 1 ($H_1$), the final decision must be 1 ($H_1$), so we require $\sum_{i=1}^{n} w_i \ge T$. The task then becomes a constrained PSO problem, which can be denoted as

$$\min_{w_1, \ldots, w_n, T} \; P_E(w_1, \ldots, w_n, T) \quad \text{subject to} \quad T - \sum_{i=1}^{n} w_i \le 0.$$

Here $P_E$ is the initial objective function and $T - \sum_{i=1}^{n} w_i \le 0$ is the linear constraint. The penalty function approach is used to solve this problem. According to formulation (3), the penalty function can be established as

$$F(w_1, \ldots, w_n, T) = P_E(w_1, \ldots, w_n, T) + h \, H(w_1, \ldots, w_n, T),$$

where $h$ is the fixed penalty value and $H(\cdot)$ is the penalty factor, positive when the constraint is violated and zero otherwise. We then find the smallest $F$ by taking advantage of the basic PSO.

Based on the analysis above, we can map the basic PSO onto the linear decision fusion rule, as shown in Table 2.

After that, the PSO is carried out as follows.

(1) First, the locations of all the particles are initialized, and each particle's current location is taken as its initial $pbest_i$.
(2) Second, the velocity and the location of each particle are updated in accordance with expressions (1), and the fitness is evaluated with the penalty function (3).
(3) Third, the new fitness values are computed, and the new $pbest_i$ and $gbest$ are generated.
(4) Finally, if the number of iterations reaches the maximum, stop; otherwise, return to step (2).

In this way, the optimal $gbest$ is discovered, and $F(gbest)$ is the smallest fitness value. Since the penalty term vanishes at feasible solutions, the smallest total error probability is $P_E = F(gbest)$.
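Putting steps (1)-(4) together, the whole optimisation can be sketched end to end in Python. Everything concrete below is an illustrative assumption rather than a value from the paper: the three-sensor error statistics, the equal priors, the penalty value, the search bounds, and the PSO parameters.

```python
import math
import random
from itertools import product

# local decision statistics for a hypothetical 3-sensor network
pf = [0.10, 0.15, 0.20]   # P(u_i = 1 | H0)
pm = [0.10, 0.15, 0.20]   # P(u_i = 0 | H1)
p0 = p1 = 0.5             # prior probabilities

def total_error(params):
    """P_E = P0*P_F + P1*P_M for weights params[:-1] and threshold params[-1]."""
    w, T = params[:-1], params[-1]
    P_F = P_M = 0.0
    for u in product((0, 1), repeat=len(w)):
        ph0 = math.prod(pf[i] if ui else 1 - pf[i] for i, ui in enumerate(u))
        ph1 = math.prod(1 - pm[i] if ui else pm[i] for i, ui in enumerate(u))
        if sum(wi * ui for wi, ui in zip(w, u)) >= T:
            P_F += ph0
        else:
            P_M += ph1
    return p0 * P_F + p1 * P_M

def fitness(params):
    """Penalised objective: feasibility requires sum(w) >= T."""
    violation = max(0.0, params[-1] - sum(params[:-1]))
    return total_error(params) + 1000.0 * violation ** 2

# basic PSO over (w_1, ..., w_n, T), following steps (1)-(4)
dim, swarm, iters = len(pf) + 1, 25, 300
X = [[random.uniform(0, 5) for _ in range(dim)] for _ in range(swarm)]
V = [[0.0] * dim for _ in range(swarm)]
pbest = [x[:] for x in X]
pbest_f = [fitness(x) for x in X]
g = min(range(swarm), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g][:], pbest_f[g]
for t in range(iters):
    w_inertia = 0.9 - 0.5 * t / iters        # time-varying inertia (cf. Section 4.1)
    for i in range(swarm):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            V[i][d] = (w_inertia * V[i][d]
                       + 2.0 * r1 * (pbest[i][d] - X[i][d])
                       + 2.0 * r2 * (gbest[d] - X[i][d]))
            X[i][d] += V[i][d]
        f = fitness(X[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = X[i][:], f
            if f < gbest_f:
                gbest, gbest_f = X[i][:], f
```

Because the surface is piecewise constant in the weights (only the induced decision rule matters), many parameter vectors achieve the same error; the swarm only needs to land in a good region, which a modest swarm does reliably here.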

#### 4. Performance Evaluation

In this section, we demonstrate the performance of our decision fusion rule. The basic PSO algorithm is implemented on the Matlab platform, with the penalty function $F$ as the objective. In order to obtain the smallest $F$, a swarm of 25 particles flies around in an $(n+1)$-dimensional search space consisting of the $n$ weights and the threshold $T$, where $n$ is the number of network sensors. At the end of the searching process, $F(gbest)$ is the smallest fitness value, from which the smallest error probability for our fusion rule is obtained. Consequently, the corresponding weights $w_i$ and threshold $T$ are the optimal parameters of the fusion rule.

First, the effect of the PSO parameters on the performance of our decision fusion rule is demonstrated in Section 4.1. Then, Section 4.2 shows what happens to the experimental results as the number of sensors increases.

##### 4.1. Choosing PSO Parameters $\omega$, $c_1$, and $c_2$

It can be seen from expression (1) that the performance of PSO depends on the values of the parameters $\omega$, $c_1$, and $c_2$. The inertia parameter $\omega$ controls the momentum of each particle: a larger $\omega$ increases the global exploration ability, while a smaller $\omega$ tends to improve the local exploration ability of the particles. The cognitive learning parameter $c_1$ scales the contribution of a particle's own best position $pbest_i$, while the social learning parameter $c_2$ scales the contribution of the swarm's best position $gbest$.

First, let us compare the performance of our fusion rule when the inertia parameter $\omega$ is fixed with its performance when $\omega$ changes over time. In the first case, $\omega$ is set to a constant, while in the second case we set $\omega = \omega_{\max} - (\omega_{\max} - \omega_{\min}) \, t / t_{\max}$, where $t$ and $t_{\max}$ are, respectively, the current and maximum number of iterations, and $\omega_{\max}$ and $\omega_{\min}$ are, respectively, the upper and lower bounds of $\omega$. The inputs and outputs of this simulation, including the chosen parameter values, are shown in Table 3.

From Table 3 and Figure 3, we can see that when the inertia parameter $\omega$ changes over time, the total error probability is lower and the convergence time is shorter than when $\omega$ is fixed. The reason is that, with a time-varying inertia parameter, the particles have good global exploration ability in the early phase of the search and good local exploration ability as the iteration count increases. We therefore choose the time-varying inertia parameter $\omega$.

Next, let us discuss the learning parameters $c_1$ and $c_2$. Setting $c_1 = 0$ or $c_2 = 0$ means that the cognitive or the social learning process, respectively, is not taken into account. The simulation results for these situations are shown in Table 4.

From Table 4 and Figure 4, it can be seen that when the cognitive learning parameter $c_1$ is not taken into account, PSO still converges, but the total error probability becomes larger and the convergence time longer compared with the situation when $c_1 \ne 0$. When the social learning parameter $c_2$ is not considered, PSO does not converge at all. The reason is that each particle then only has knowledge about itself, so the swarm can never evolve in a common direction.

In sum, the analysis above shows that the time-varying inertia parameter is better than the fixed one, and that taking both learning parameters $c_1$ and $c_2$ into account is better than setting $c_1 = 0$ or $c_2 = 0$. Therefore, in the following simulations, a time-varying $\omega$ and nonzero $c_1$ and $c_2$ are used.

##### 4.2. The Performance of Linear Fusion Rule with Different Sensor Numbers

Here, the number of sensors $n$ in the network is set to three increasing values in turn. According to the experimental results, our decision fusion rule can indeed achieve very high accuracy. The inputs and outputs of the simulation are shown in Table 5 and Figure 5.

In the first setting, with the optimized weights and threshold, the total error probability is very small. After about 50 iterations, the PSO has already converged, so the time consumed is very short.

Similarly, from Table 5 and Figure 5(b), in the second setting the total error probability becomes smaller still, and the PSO converges after about 120 iterations. In the third setting, the total error probability is close to zero, and the PSO converges after about 150 iterations.

As shown in Figure 6, the error cost becomes smaller and smaller as $n$ increases. This is easy to understand: the decision result is more precise when more sensors are working in the network. However, as $n$ increases, the time consumed by PSO to find the optimized parameters becomes longer, so when this fusion rule is used, a balance should be struck between accuracy and time.

In brief, this simulation indicates that our linear fusion rule achieves a very small total error probability under the control of the constrained PSO, and that PSO performs very well for the parameter optimization of the linear decision fusion rule.

#### 5. Conclusion

In this paper, we present a linear decision fusion model and propose a way of controlling the parameters of the model by taking advantage of constrained PSO. In the model, the integrated contribution of the local decisions is computed with a linear equation composed of the local decision weights and the local decisions, and this integrated contribution is compared with a threshold in the fusion center; according to the comparison result, the final decision is made. Furthermore, constrained PSO is employed to discover the weights and the threshold. The simulation results show that our linear decision rule and the parameter optimization method achieve very high accuracy.

#### References

1. J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. J. Goldsmith, "Joint estimation in sensor networks under energy constraints," in *Proceedings of the 1st Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (IEEE SECON '04)*, pp. 264–271, October 2004.
2. L. Snidaro, R. Niu, P. K. Varshney, and G. L. Foresti, "Sensor fusion for video surveillance," in *Proceedings of the 7th International Conference on Information Fusion (FUSION '04)*, pp. 564–571, Stockholm, Sweden, June 2004.
3. A. Tiwari, F. L. Lewis, and S. S. Ge, "Wireless sensor network for machine condition based maintenance," in *Proceedings of the 8th International Conference on Control, Automation, Robotics and Vision (ICARCV '04)*, vol. 1, pp. 461–467, Bangkok, Thailand, December 2004.
4. J. Kennedy and R. Eberhart, "Particle swarm optimization," in *Proceedings of the IEEE International Conference on Neural Networks*, pp. 1942–1948, Perth, Australia, December 1995.
5. N. A. B. A. Aziz, A. W. Mohemmed, and B. S. Daya Sagar, "Particle swarm optimization and Voronoi diagram for wireless sensor networks coverage optimization," in *Proceedings of the International Conference on Intelligent and Advanced Systems (ICIAS '07)*, pp. 961–965, November 2007.
6. J. Hu, J. Song, M. Zhang, and X. Kang, "Topology optimization for urban traffic sensor network," *Tsinghua Science and Technology*, vol. 13, no. 2, pp. 229–236, 2008.
7. J. Li, K. Li, and W. Zhu, "Improving sensing coverage of wireless sensor networks by employing mobile robots," in *Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07)*, pp. 899–903, Sanya, China, December 2007.
8. X. Wang, S. Wang, and J. J. Ma, "An improved co-evolutionary particle swarm optimization for wireless sensor networks with dynamic deployment," *Sensors*, vol. 7, no. 3, pp. 354–370, 2007.
9. T. P. Hong and G. N. Shiu, "Allocating multiple base stations under general power consumption by the particle swarm optimization," in *Proceedings of the IEEE Swarm Intelligence Symposium (SIS '07)*, pp. 23–28, April 2007.
10. A. Gopakumar and L. Jacob, "Localization in wireless sensor networks using particle swarm optimization," in *Proceedings of the IET Conference on Wireless, Mobile and Multimedia Networks*, pp. 227–230, January 2008.
11. R. V. Kulkarni, G. K. Venayagamoorthy, and M. X. Cheng, "Bio-inspired node localization in wireless sensor networks," in *Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC '09)*, pp. 205–210, San Antonio, Tex, USA, October 2009.
12. K. S. Low, H. A. Nguyen, and H. Guo, "A particle swarm optimization approach for the localization of a wireless sensor network," in *Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '08)*, pp. 1820–1825, July 2008.
13. H. Guo, K.-S. Low, and H.-A. Nguyen, "Optimizing the localization of a wireless sensor network in real time based on a low-cost microcontroller," *IEEE Transactions on Industrial Electronics*, vol. 58, no. 3, pp. 741–749, 2011.
14. K. S. Low, H. A. Nguyen, and H. Guo, "Optimization of sensor node locations in a wireless sensor network," in *Proceedings of the 4th International Conference on Natural Computation (ICNC '08)*, pp. 286–290, October 2008.
15. S. M. Guru, S. K. Halgamuge, and S. Fernando, "Particle swarm optimisers for cluster formation in wireless sensor networks," in *Proceedings of the Intelligent Sensors, Sensor Networks and Information Processing Conference*, pp. 319–324, December 2005.
16. N. M. A. Latiff, C. C. Tsimenidis, and B. S. Sharif, "Energy-aware clustering for wireless sensor networks using particle swarm optimization," in *Proceedings of the 18th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '07)*, pp. 1–5, September 2007.
17. X. Cao, H. Zhang, J. Shi, and G. Cui, "Cluster heads election analysis for multi-hop wireless sensor networks based on weighted graph and particle swarm optimization," in *Proceedings of the 4th International Conference on Natural Computation (ICNC '08)*, vol. 7, pp. 599–603, October 2008.
18. R. V. Kulkarni and G. K. Venayagamoorthy, "Particle swarm optimization in wireless-sensor networks: a brief survey," *IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews*, vol. 41, no. 2, pp. 262–267, 2011.
19. T. Wimalajeewa and S. K. Jayaweera, "Optimal power scheduling for correlated data fusion in wireless sensor networks via constrained PSO," *IEEE Transactions on Wireless Communications*, vol. 7, no. 9, pp. 3608–3618, 2008.
20. K. Veeramachaneni and L. Osadciw, "Swarm intelligence based optimization and control of decentralized serial sensor networks," in *Proceedings of the IEEE Swarm Intelligence Symposium (SIS '08)*, pp. 1–8, September 2008.
21. K. Veeramachaneni and L. A. Osadciw, "Dynamic sensor management using multi objective particle swarm optimizer," in *Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications*, vol. 5434 of *Proceedings of SPIE*, pp. 1–12, Orlando, Fla, USA, April 2004.
22. C. A. Floudas and P. M. Pardalos, "A collection of test problems for constrained global optimization algorithms," in *Lecture Notes in Computer Science*, vol. 455, Springer, Berlin, Germany, 1987.
23. D. M. Himmelblau, *Applied Nonlinear Programming*, McGraw-Hill, New York, NY, USA, 1972.
24. S. S. Rao, *Optimization: Theory and Applications*, Wiley Eastern Limited, 1977.
25. J.-M. Yang, Y.-P. Chen, J.-T. Horng, and C.-Y. Kao, "Applying family competition to evolution strategies for constrained optimization," in *Lecture Notes in Computer Science*, vol. 1213, pp. 201–211, Springer, Berlin, Germany, 1997.