Abstract

Due to limited communication resources and power, it is usually infeasible for sensor networks to gather all data at a central processing node. Distributed algorithms are an efficient way to resolve this problem: each sensor node processes its own input data and transmits the local results to its neighbors, and each node then fuses the information from its neighbors with its own to obtain the final result. Different from existing work, in this paper we present an approach to distributed parameter estimation in wireless sensor networks based on the use of memory. The proposed approach modifies the cost function by adding extra statistical information. A distributed least-mean-squares (d-LMS) algorithm, called memory d-LMS, is then derived from this cost function, and its mean and mean-square performance are analyzed theoretically. Moreover, simulation results show that the proposed algorithm outperforms the traditional d-LMS algorithm in terms of convergence rate and mean square error (MSE) performance.

1. Introduction

Wireless Sensor Networks (WSNs) consist of a large number of sensor nodes that can be deployed for monitoring inaccessible areas, such as deep oceans, forest fires, and air pollution [1–4]. Since parameters of interest must be estimated from data collected at nodes distributed over a geographic area, the design and analysis of estimation algorithms for such networks is becoming increasingly important.

Generally, there are two kinds of estimation algorithms: centralized estimation [5–8] and distributed estimation [9–12]. In the former, all nodes transmit their measurements to a central fusion center for processing, and the final estimate is sent back to the nodes. This method obtains the global solution, but it requires a large amount of energy and communication. Moreover, it is not robust: the scheme fails if the fusion center breaks down. In the latter, each node communicates only with its immediate neighbors, and the signals are processed locally. Compared with centralized estimation, distributed estimation can achieve good estimation performance while reducing complexity.

Owing to their scalability, robustness, and low cost, distributed estimation algorithms have been receiving more and more attention. The work in [10] proposes a diffusion LMS algorithm for distributed estimation over networks. The work in [11] proposes a distributed incremental RLS solution. The works in [12, 13] propose LMS algorithms for censored data. To the authors' knowledge, no existing work exploits the nodes' past information for distributed estimation. Motivated by this fact, in this paper we formulate a new cost function that incorporates the past information, and by optimizing it we derive a memory d-LMS algorithm.

This paper is organized as follows. Section 2 describes the mathematical problem of the network and the memory d-LMS algorithm. In Section 3, the mean and mean-square performance of the algorithm are analyzed. Simulation results are provided in Section 4. Finally, we conclude the paper in Section 5.

Notation. We use boldface uppercase letters to denote matrices and boldface lowercase letters to denote vectors. We use $E[\cdot]$ to denote the expectation operator, $\mathrm{var}(\cdot)$ for variance, $(\cdot)^*$ for complex conjugate transposition, $\mathrm{Tr}(\cdot)$ for the trace of a matrix, and $\otimes$ for the Kronecker product. The notation $\mathrm{col}\{\cdot\}$ stands for a vector obtained by stacking the specified vectors, and $\mathrm{vec}(\cdot)$ stands for the linear transformation that converts a matrix into a column vector by stacking its columns. Similarly, we use $\mathrm{diag}\{\cdot\}$ to denote the (block) diagonal matrix consisting of the specified vectors or matrices. Other notation will be introduced as necessary.
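As a small, self-contained illustration of these operators (purely supplementary; the NumPy/SciPy calls below are standard library functions, and the example values are arbitrary):

```python
import numpy as np
from scipy.linalg import block_diag

# Arbitrary example matrices used only to illustrate the notation.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)

trace_A = np.trace(A)                          # Tr(A)
kron_AB = np.kron(A, B)                        # Kronecker product A ⊗ B
vec_A   = A.flatten(order="F")                 # vec(A): stack the columns of A
col_ab  = np.concatenate([A[:, 0], B[:, 1]])   # col{a, b}: stack the given vectors
diag_AB = block_diag(A, B)                     # diag{A, B}: block-diagonal matrix
```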

2. The Mathematical Problem of the Network and Memory d-LMS

We consider a sensor network composed of sensor nodes distributed over a geographic region and used to monitor a physical phenomenon, as shown in Figure 1. At every time instant, each node obtains a scalar measurement, which can be expressed as the linear combination in (1) of a known regression vector (a realization of a stochastic process) with a nonrandom unknown parameter vector to be estimated, corrupted by a random observation error that is assumed to be spatially and temporally independent. Obtaining a global solution for the unknown parameter requires access to information across all nodes in the network; therefore, we explore a distributed algorithm in this section.
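For concreteness, the data model described above can be written in a standard form; the symbols below ($d_k(i)$, $\mathbf{u}_{k,i}$, $\mathbf{w}^o$, $v_k(i)$, $N$, and $M$) are illustrative notation of our own, chosen to match common diffusion-LMS conventions rather than taken from (1):

```latex
% Illustrative notation only: the standard linear data model assumed here.
d_k(i) \;=\; \mathbf{u}_{k,i}\,\mathbf{w}^o \;+\; v_k(i),
\qquad k = 1,\ldots,N,\quad i = 0,1,2,\ldots
```

Here $d_k(i)$ is the scalar measurement of node $k$ at time $i$, $\mathbf{u}_{k,i}$ is the $1\times M$ known regression vector, $\mathbf{w}^o$ is the $M\times 1$ unknown parameter, and $v_k(i)$ is zero-mean observation noise, independent over space and time.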

2.1. Traditional Cost Function and d-LMS

Generally, the d-LMS algorithms [10–12] are derived from the global cost function in (2), which is a weighted combination of the local cost functions of the individual nodes. The weights form a set of nonnegative coefficients between pairs of nodes and satisfy the normalization condition in (3); a coefficient equals zero when the two nodes are not connected.

By minimizing (2), one can obtain the Adapt-then-Combine (ATC) diffusion LMS recursion [10], in which a step size and a second set of nonnegative combination coefficients between neighboring nodes appear; these combination coefficients satisfy their own normalization condition.

Here, the combination coefficients can be collected into a matrix whose individual entries are the pairwise weights.

The ATC d-LMS can be described as follows. First, at each time instant, every node obtains a local intermediate estimate by exchanging measurements with its neighbor nodes. Then, every node combines this intermediate estimate with those of its neighbors. The traditional d-LMS algorithms obtain their estimate at a given time instant by considering the measurements across the neighbor nodes only at that instant. Rather than relying solely on current data and on the data shared with the neighbors at a particular time instant [9–12], we propose a distributed least-mean-squares algorithm, called memory d-LMS, in the following subsection.
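As a point of reference before the memory variant is introduced, the following is a minimal sketch of one standard ATC diffusion-LMS run, in which the adapt step uses only each node's own data for simplicity; the function name, array shapes, and variables are illustrative assumptions, not the paper's code.

```python
import numpy as np

def atc_diffusion_lms(d, U, A, mu):
    """Minimal ATC diffusion-LMS sketch (illustrative only).

    d  : (T, N) array of measurements, one column per node
    U  : (T, N, M) array of regression vectors
    A  : (N, N) combination matrix; A[l, k] >= 0, each column sums to one,
         and A[l, k] = 0 when nodes l and k are not connected
    mu : step size
    """
    T, N, M = U.shape
    w = np.zeros((N, M))                      # current estimate at every node
    for i in range(T):
        # Adapt: each node takes an LMS step using its own current data.
        psi = np.empty_like(w)
        for k in range(N):
            u = U[i, k]
            e = d[i, k] - u @ w[k]            # instantaneous estimation error
            psi[k] = w[k] + mu * e * u
        # Combine: each node averages the intermediate estimates of its neighbors.
        for k in range(N):
            w[k] = sum(A[l, k] * psi[l] for l in range(N))
    return w
```

In this simplified sketch the adapt step is driven by each node's own measurement only; the general formulation also allows measurement exchange among neighbors through the first set of weights.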

2.2. Cost Function and Memory d-LMS with Past Measurement Information

To make fuller use of the past measurements, we first propose cost functions that incorporate past information. At each time instant, the global cost function with past information is given by (6), in which the local cost function with past information of each node is defined accordingly. As before, a set of nonnegative weight coefficients couples each pair of nodes and satisfies the associated normalization condition.

Here, a coefficient equals zero when the corresponding nodes are not connected. The remaining constant is a memory factor to be designed according to the requirements of the network; (6) reduces to (2) when this constant is chosen so that past measurements receive no weight.
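To make the role of the memory explicit, the following is a minimal sketch of one plausible cost of this type, assuming an exponential weighting of past instants by a memory factor $\gamma$ and combination weights $c_{l,k}$; both the symbols and the exponential form are our assumptions and need not coincide with the exact expression in (6).

```latex
% Sketch only: a memory-weighted local cost, assuming exponential weighting.
J_k^{\mathrm{loc}}(\mathbf{w}) \;=\; \sum_{l \in \mathcal{N}_k} c_{l,k}
  \sum_{m=0}^{i} \gamma^{\,i-m}\,
  E\bigl| d_l(m) - \mathbf{u}_{l,m}\,\mathbf{w} \bigr|^{2},
\qquad
J^{\mathrm{glob}}(\mathbf{w}) \;=\; \sum_{k=1}^{N} J_k^{\mathrm{loc}}(\mathbf{w}),
\qquad 0 \le \gamma < 1.
```

With $\gamma = 0$ only the current instant $m = i$ survives in this sketch, which is consistent with the remark that (6) reduces to (2) for a degenerate choice of the memory constant.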

By directly minimizing the cost in (6), we obtain the global LMS recursion in (9), which involves the second-order moments of the data and a step size. These moments are usually unavailable in practice; thus, we replace them by their instantaneous approximations [13].

Equation (9) is not distributed because it requires access to measurements across the whole network.

Now, we focus on a distributed solution obtained by minimizing a local cost function with past information. Notice that (6) can be rewritten as (10), in which each local term can in turn be expressed as in (11) through a weighted vector norm involving the local estimate of each node.

Substituting (11) into (10) leads to (12).

Minimizing (12) with respect to the unknown parameter, we obtain (13).

Notice that estimation based on (13) still requires access to information across all nodes in the network. To address this, we replace the original weighting coefficients by another kind of weighting coefficient, defined below.

Then, we have

Hence, we obtain an iterative steepest-descent recursion, (15), for the unknown parameter, in which two step-size parameters appear.

We then replace the quantity that is not locally available by its counterpart available at each node at the current time, and (15) is rewritten in a form that uses only local information.

Now, we use a two-step estimation procedure, where the last step of (19) follows from a change of variables.

From (13) and (20), the two estimates coincide under the corresponding condition on the weighting coefficients.

Furthermore, the combination relation follows directly from (20).

To obtain an adaptive implementation, we replace the second-order moments by instantaneous values in (18): that is,

Now, we obtain a novel distributed algorithm, called memory d-LMS, given by (22) and (23), with the weighting coefficients vanishing for unconnected nodes.

2.3. Implementation of the Proposed Memory d-LMS Algorithm

According to the memory d-LMS derived above, we implement the algorithm in three steps: time diffusion, adaptation, and node diffusion. These steps are described in detail as follows.

For every node, start from an initial estimate, select an appropriate value for the step size, and set the nonnegative coefficients satisfying (24); then, for each time instant, repeat the following steps.

Step 1. Time diffusion step with respect to past measurements. In this step, at each iteration, every node accumulates the cross-correlations between its measurements and regressors and the autocorrelations of its regressors over the past time interval.

Step 2. Adaptation step. Each node uses the accumulated statistics and the step size to update its intermediate (pre-)estimate.

Step 3. Node diffusion step with respect to the pre-estimates of the nodes. In this step, every node combines its own pre-estimate from Step 2 with the pre-estimates shared by its neighbor nodes to obtain its final estimate.
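To make the three steps concrete, the following is a minimal sketch of one memory d-LMS iteration, assuming an exponentially weighted accumulation of the cross- and autocorrelations with a memory factor gamma and a gradient-type adaptation; the function, its arguments, and this particular weighting are our assumptions and may differ in detail from the exact recursions of Steps 1–3.

```python
import numpy as np

def memory_dlms_step(w, P, r, d_i, U_i, A, mu, gamma):
    """One illustrative memory d-LMS iteration (sketch only).

    w     : (N, M) current estimates, one row per node
    P     : (N, M, M) accumulated regressor autocorrelations per node
    r     : (N, M) accumulated cross-correlations per node
    d_i   : (N,) measurements at the current instant
    U_i   : (N, M) regression vectors at the current instant
    A     : (N, N) combination matrix (columns sum to one, zero for non-neighbors)
    mu    : step size
    gamma : memory factor weighting past statistics (assumed form)
    """
    N, M = w.shape
    # Step 1 (time diffusion): fold the new sample into the running statistics.
    for k in range(N):
        u = U_i[k][:, None]                        # column vector
        P[k] = gamma * P[k] + u @ u.T              # autocorrelation accumulation
        r[k] = gamma * r[k] + d_i[k] * U_i[k]      # cross-correlation accumulation
    # Step 2 (adaptation): gradient-type update with the accumulated moments.
    psi = np.empty_like(w)
    for k in range(N):
        psi[k] = w[k] + mu * (r[k] - P[k] @ w[k])
    # Step 3 (node diffusion): combine neighbors' pre-estimates.
    w_new = np.empty_like(w)
    for k in range(N):
        w_new[k] = sum(A[l, k] * psi[l] for l in range(N))
    return w_new, P, r
```

Setting gamma = 0 in this sketch discards the accumulated statistics after one step, so the adaptation falls back to the memoryless LMS update driven only by current data.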

3. Performance Analysis

In this section, we analyze the asymptotic performance of the proposed algorithm.

Denote the following quantities.

Then, from (22) and (23), we have

Furthermore, the above expressions can be rewritten as

Inserting (31) into (32), we have

In the remainder of this section, we present the asymptotic performance of the proposed algorithm and prove it under some assumptions.

3.1. Convergence Analysis of the Mean

Assumption 1. All regressors are independent of the weight-error vectors, for all nodes and all time instants.

Lemma 2 (see [10]). Let two arbitrary matrices of compatible dimensions be given, subject to the conditions stated in [10]. Then their product matrix is stable for any admissible choice of the first matrix if and only if the second matrix is stable.

Theorem 3. Under Assumption 1, the mean of the memory d-LMS estimate based on the past information converges to the true value, provided that the algorithm parameters satisfy the stated condition.

Proof. Taking the expectation of both sides of (34) yields (35). The right-hand side of (35) converges to zero if the corresponding coefficient matrix is stable. This requires the associated block matrix to be stable or, equivalently, the condition of the theorem to hold. Theorem 3 shows that the mean of the estimate converges to the true value when a proper step size is chosen.
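Since both Lemma 2 and Theorem 3 hinge on matrix stability, it may help to recall that a matrix is stable when its spectral radius is strictly less than one. A small numerical check of this property (illustrative values only, not the paper's actual matrices) is:

```python
import numpy as np

def is_stable(B):
    """Return True when the spectral radius of B is strictly below one."""
    return np.max(np.abs(np.linalg.eigvals(B))) < 1.0

# Example: mean-stability of an update of the form (I - mu * R) for a given
# step size mu and a positive semidefinite moment matrix R (values illustrative).
R = np.array([[2.0, 0.3],
              [0.3, 1.0]])
mu = 0.1
print(is_stable(np.eye(2) - mu * R))   # True for this choice of mu
```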

3.2. Mean-Square Error Performance

Now, we study the mean-square error of the memory d-LMS. Starting from (37) and using the weighted-norm property, we rewrite (37) in the form of (39). For a sufficiently large time index and a stable coefficient matrix, (39) can be rewritten as (41). Substituting the appropriate selection weighting into (41), we obtain the mean square deviation (MSD), where the selection vector has a single entry equal to 1 at the position of interest and zeros elsewhere.

4. Simulations

In what follows, we present computer simulations to evaluate the performance of the proposed memory d-LMS algorithm and to verify the theoretical results introduced in Section 3.

Figure 2 shows the network topology considered in this section. The noise power levels vary across the nodes, and the corresponding SNRs are shown in Figure 3. The network data are generated according to model (1) with a fixed unknown vector. The two sets of weight coefficients are defined according to the relative-degree rule and the Metropolis rule [10, 13], where the degree of a node denotes its number of neighbors.
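The Metropolis rule referenced above is a standard construction. As a minimal sketch (assuming a symmetric 0/1 adjacency matrix as input; the function and variable names are our own), the combination matrix can be built as follows:

```python
import numpy as np

def metropolis_weights(adj):
    """Combination matrix from the standard Metropolis rule.

    adj : (N, N) symmetric 0/1 adjacency matrix without self-loops.
    Returns A with A[l, k] = 1 / max(deg_l, deg_k) for neighbors l != k,
    A[k, k] = 1 - sum of the other entries in column k, and zeros elsewhere.
    """
    N = adj.shape[0]
    deg = adj.sum(axis=1)                # node degrees
    A = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if l != k and adj[l, k]:
                A[l, k] = 1.0 / max(deg[l], deg[k])
        A[k, k] = 1.0 - A[:, k].sum()    # make each column sum to one
    return A
```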

Figure 4 shows the transient network mean square deviation (MSD) for different step sizes. It illustrates that the estimate is convergent. As observed, the smaller the step size, the more accurate the estimate, at the cost of a slower convergence rate.
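The network MSD curves reported in this section follow the common definition of the squared deviation from the true parameter, averaged over the nodes and over independent runs; the sketch below assumes that definition and uses illustrative array shapes.

```python
import numpy as np

def network_msd_db(w_hist, w_true):
    """Empirical network MSD curve in dB (common definition, assumed here).

    w_hist : (R, T, N, M) estimates over R runs, T iterations, N nodes
    w_true : (M,) true parameter vector
    Returns an array of length T with the squared deviation averaged over
    nodes and independent runs, expressed in dB.
    """
    err = w_hist - w_true                                   # broadcast over runs/iters/nodes
    msd = np.mean(np.sum(err ** 2, axis=-1), axis=(0, 2))   # average over runs and nodes
    return 10.0 * np.log10(msd)
```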

Figure 5 demonstrates the network transient behavior in terms of MSD for the proposed memory d-LMS algorithm, the d-LMS algorithm of [9], and the memory global LMS. The values are obtained by averaging over 200 independent experiments. As the results indicate, the performance of the proposed memory d-LMS is close to that of the global solution. We also observe that the proposed algorithm outperforms the d-LMS of [9] by more than 4 dB.

Figure 6 shows tracking results for changes in the unknown parameter for the memory d-LMS with different memory factors and for the traditional d-LMS. The step size of the memory d-LMS is 0.08 divided by the corresponding memory-related constant, and the step size of the d-LMS is 0.08. The unknown parameter takes one value at the beginning and is switched to another value after 200 iterations. It can be observed that the tracking performance of the memory d-LMS outperforms that of the traditional d-LMS for both tested memory factors. It also shows that the larger the memory factor, the more accurate the estimate, because more statistical information is incorporated into the estimation algorithm.

As observed, the smaller the step size, the more accurate the estimate and the slower the convergence rate. Hence, for systems that have strict requirements on estimation accuracy but are insensitive to convergence rate, one should choose a small step size and a large memory factor in accordance with (34); otherwise, one should choose a large step size and a small memory factor in accordance with (34).

5. Conclusion

In this paper, a distributed estimation algorithm called memory d-LMS is derived and analyzed. The proposed algorithm consists of three steps. First, at each time instant, for each node, its neighbor nodes collect and process their local information over the past time interval to obtain their pre-estimates. Second, all nodes transmit their pre-estimates to their neighbors. Finally, each node combines the collected information with its own pre-estimate to generate an improved estimate. Simulation results show that the proposed algorithm improves the convergence rate and reduces the MSE effectively.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported in part by the Natural Science Foundation of Zhejiang Province (LQ15F010004) and by the National Natural Science Foundation of China (61501158).