ISRN Sensor Networks
Volume 2012 (2012), Article ID 342514, 8 pages
Detection of Node Failure in Wireless Image Sensor Networks
Department of Computer Science and Engineering, National Institute of Technology, Rourkela 769008, India
Received 6 December 2011; Accepted 21 December 2011
Academic Editors: J.-F. Myoupo and W. Xiao
Copyright © 2012 Arunanshu Mahapatro and Pabitra Mohan Khilar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A sequenced process of fault detection followed by dissemination of the decision made at each node characterizes the sustained operation of a fault-tolerant wireless image sensor network (WISN). This paper presents a distributed self-fault diagnosis model for WISNs in which fault diagnosis is achieved by disseminating the decision made at each node. An architecture for fault-tolerant wireless image sensor nodes is presented. Simulation results show that sensor nodes with hard and soft faults are identified with high accuracy over a wide range of fault rates. Both the time and message complexity of the proposed algorithm are O(n) for an n-node WISN.
WISNs are emerging as a promising solution for a variety of remote sensing applications such as battlefield surveillance, environmental monitoring, intruder detection, intelligent infrastructure monitoring, and scientific data collection [1]. Irrespective of their purpose, all sensor networks are characterized by requirements for energy efficiency, scalability, and fault tolerance. These requirements are particularly crucial in image sensor networks. Two issues must be addressed for the sustained operation of a WISN: (1) image sensor nodes may be deployed in unattended and possibly hostile environments, which increases the probability of node failure, and (2) unlike conventional sensor nodes, image sensor nodes generate large amounts of data that are routed to the sink node. Erroneous data generated by faulty sensor nodes must be prevented from entering the network for effective bandwidth and energy utilization. These issues motivate the exploration of distributed self-fault diagnosis for WISNs.
In this work, a distributed diagnosis algorithm is proposed that detects both hard and soft faults in the network. Each sensor node makes a decision based on comparisons between its own reading and the readings of its 1-hop neighbors: a sensor node is deemed fault-free if its reading agrees with the readings of more than θ neighbors, where θ is a threshold. A timeout mechanism is used to detect hard faults, whereby an unreported node is detected as hard faulty. All local diagnostic information is finally disseminated in the network so that each node has a global view of the network fault status, that is, each fault-free node correctly diagnoses the state of all nodes in the system. A spanning tree (ST) spanning all fault-free sensor nodes is used to disseminate the local diagnostics.
The proposed image sensor node architecture (Figure 1) is simple and can be implemented with limited additional hardware complexity by extending the architecture proposed in [2, 3]. Each block is subject to failure, which in turn results in system failure. A node is detected as soft faulty when the CMOS camera, the image processing module, or the embedded processor is faulty. A node is detected as hard faulty for any of the following reasons: (i) the communication subsystem is faulty, (ii) the battery is drained, or (iii) the node is completely damaged.
The process of local detection and global diagnosis from a given fault instance is a multifaceted problem. The main contributions of this paper are as follows. (1) An architecture for image sensor nodes in fault-tolerant WISNs is proposed. (2) Sensor nodes with hard and soft faults are identified with high accuracy over a wide range of fault rates while maintaining low time and message complexity.
The remainder of the paper is organized as follows. Section 2 presents related work. Section 3 presents the system model. The distributed diagnosis scheme is described in Section 4. The performance of the proposed work is evaluated in Section 5, and conclusions and future work are given in Section 6.
2. Related Works
System-level fault diagnosis was introduced by Preparata et al. in 1967 [4] as a technique for diagnosing faults in a wired interconnected system. Comparison-based diagnosis is an effective approach to system-level fault diagnosis. The first comparison-based models, proposed by Malek [5] (the asymmetric comparison model) and by Chwa and Hakimi [6] (the symmetric comparison model), assume the existence of a central arbiter that gathers the comparison outcomes; this comparison syndrome is then used to diagnose the system. Previously developed distributed diagnosis algorithms were designed for wired networks [4–10] and hence are not well suited to wireless networks.
The problem of fault detection and diagnosis in wireless sensor networks has been studied extensively [11–17]. The problem of identifying (crashed) faulty nodes in WSNs has been studied in [11], which proposes the WINdiag diagnosis protocol, creating an ST for the dissemination of diagnostic information. A fault-tolerant detection scheme has been proposed in [12] that explicitly introduces the sensor fault probability into the optimal event detection process, where the optimal detection error decreases exponentially with the neighborhood size. Elhadef et al. have proposed a distributed fault identification protocol called Dynamic-DSDP for MANETs, which uses an ST and a gossip-style dissemination strategy [13]. In [14], a localized fault diagnosis model for WSNs is proposed that executes in tree-like networks; the approach is based on local comparisons of sensed data and dissemination of the test results to the remaining sensors.
In [15], the authors present a distributed fault detection model for wireless sensor networks in which each sensor node identifies its own state based on local comparisons of sensed data against thresholds and on dissemination of the test results. Krishnamachari and Iyengar have presented a Bayesian fault recognition model to solve the fault-event disambiguation problem in sensor networks [16]. A distributed fault detection scheme for sensor networks has been proposed in [17]; it uses local comparisons with a modified majority vote, where each sensor node makes a decision based on comparisons between its own sensing data and its neighbors' data while considering the confidence level of its neighbors.
Most of the existing literature addresses fault detection and diagnosis in WSNs by considering sensor nodes as temperature, humidity, or pressure sensors. To the authors' knowledge, there has been little work on the design of a fault diagnosis model for WISNs. Although there is a considerable amount of research on fault detection and diagnosis in WSNs, the current approaches may not be suitable for WISNs because of the associated processing and communication costs. Czarlinska and Kundur [18] have investigated the event acquisition properties of WISNs; their techniques include lightweight image processing, decisions from sensors with or without cluster-head fault, and attack detection. In [19], the authors investigate the problem of image transport over error-prone wireless sensor networks using a two-state Markov model of node transitions between an on and an off state, but they do not investigate any node failure detection scheme. In [20], an improved distributed fault detection scheme is proposed that performs better in terms of detection accuracy but needs more message exchanges and is thus not energy efficient. In [21], the authors propose FIND, a method to detect nodes with data faults, in which nodes are ranked based on their sensing readings as well as their physical distances from the event; a node is considered faulty if there is a significant mismatch between its sensor data rank and its distance rank.
It is worth discussing why a fault detection model for image sensor nodes is indispensable. First, image data requires transmission bandwidth that is orders of magnitude higher than that supported by currently available sensors. Second, image compression models require complex hardware and make the energy consumption for computation comparable to the communication energy dissipation. If a faulty image sensor node is allowed to participate in network activity, the data it generates will be routed to the sink node, and all intermediate nodes will dissipate energy relaying this faulty information. For a high rate of node failure, this leads to a severe decrease in network lifetime and a waste of network bandwidth.
3. System Model
The proposed model considers a densely deployed wireless sensor network that includes camera-equipped nodes. It is assumed that n sensor nodes are nonuniformly distributed in a square area of side L, where L is much larger than the communication range of the sensors. Every camera-equipped node is a full-function device (FFD). A node responds to an image query by capturing a raw image within its sensing area, compressing the raw image, and then applying a forward error correcting (FEC) code before transmission; this is the general process of image transport in a WISN.
The proposed model considers both hard and soft faults [22]. In a hard-fault situation the sensor node is unable to communicate with the rest of the network, whereas a node with a soft fault continues to operate and communicate with altered behavior. These malfunctioning (soft faulty) sensors may still participate in network activities, since they remain capable of routing information. The proposed model assumes that the sensor fault probability p is uncorrelated and symmetric; that is, the probability that the sensed image data deviates from the actual image data is p, independent across nodes and the same in both directions.
3.1. Architecture of Proposed Wireless Image Sensor Nodes
In this section, the architecture of the proposed image sensor node is described in detail (Figure 1).
CMOS image sensors have received greater attention over the last few decades because their performance is very promising compared to CCDs [2, 3]. However, remote and dangerous environments put more stress on the image sensing system (from radiation, heat, or pressure), possibly leading to pixel failure while making the replacement of faulty systems difficult. A fault-tolerant architecture for the CMOS camera [23] can be adopted that effectively combines hardware redundancy in the active pixel sensor (APS) cells with software correction techniques. However, this fault-tolerant architecture can tolerate pixel failures only up to a certain pixel failure rate, beyond which the quality reduction of a corrected image may not be tolerable and the CMOS camera is detected as faulty.
Uncompressed raw image data requires excessive bandwidth in a multihop wireless environment. Conventional image compression models [24] are not suitable for resource-constrained wireless sensor networks because they require complex hardware and make the energy consumption for computation comparable to the communication energy dissipation. The proposed architecture uses the compression technique suggested in [25].
Forward error correction coding is required to achieve reliable transmission. The proposed architecture uses Reed-Solomon (RS) codes to identify and correct errors in transmission; the coding redundancy determines the error correction capability of an RS code. A self-checking RS encoder [26] is used by the proposed architecture. Wireless connection to other motes in the network can be established through a Texas Instruments CC2420 2.4 GHz IEEE 802.15.4/ZigBee-ready RF transceiver. Each ZigBee device holds information about the devices located within its transmission range in a table called the neighbor table. As in earlier node designs, Samsung's S3C44B0X is adopted as the embedded processor of the image sensor node.
4. Distributed Fault Diagnosis Scheme
This section describes the novel model for energy-efficient diagnosis of WISNs. The proposed diagnosis scheme has two main phases: (i) detection phase and (ii) dissemination phase.
4.1. The Detection Phase
In this phase, the node enters normal mode (the S3C44B0X supports four modes: normal, slow, idle, and stop). Normal mode supplies clocks to the CPU as well as all peripherals of the S3C44B0X. The CPU wakes the image sensor and the image processing module from power-down mode, and the image sensor starts to capture an image. In spite of the fault-tolerant architecture described in Section 3.1, an image produced by the image sensor may not be acceptable if the pixel failure rate is high. Thus, the CPU calculates the quality reduction of the corrected image using the methods suggested in [23] and decides whether or not to discard the image reading by comparing this quality reduction against a threshold; the node is set soft faulty if the threshold is exceeded. The RS-encoder fault status is mapped from the parity checker [26] output: the encoder is marked soft faulty when the parity checker flags an error and fault-free otherwise.
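The camera soft-fault test above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the quality-reduction model, the correctable-rate constant, and `QUALITY_THRESHOLD` are all hypothetical stand-ins for the quantities computed by the methods of [23].

```python
# Hedged sketch: soft-fault decision for the CMOS camera based on the
# pixel failure rate. All numeric values below are illustrative.

def quality_reduction(pixel_failure_rate: float) -> float:
    """Toy model: quality loss grows with the fraction of failed
    pixels that the APS redundancy cannot absorb."""
    correctable = 0.01  # assumed rate absorbed by hardware redundancy
    return max(0.0, pixel_failure_rate - correctable)

QUALITY_THRESHOLD = 0.05  # hypothetical tolerable quality reduction

def camera_is_soft_faulty(pixel_failure_rate: float) -> bool:
    # Mark the camera soft faulty once quality loss exceeds the threshold.
    return quality_reduction(pixel_failure_rate) > QUALITY_THRESHOLD

print(camera_is_soft_faulty(0.02))  # low failure rate -> False
print(camera_is_soft_faulty(0.10))  # high failure rate -> True
```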
The image processing module fetches the test image stored in shared memory. The test image is processed, and the generated coded bit stream is sent to the embedded processor. The processed image is then packed into the diagnosis packet format required by the network protocol. The CPU configures the CC2420 into transmission mode; packets are broadcast by the CC2420, and the node returns to the receive state. For each fault-free sensor node, its neighboring fault-free sensor nodes broadcast similar coded information. Let v_j be a neighbor of v_i, and let c_j denote the coded information at node v_j. Node v_i agrees with v_j only when the Hamming distance between c_i and c_j is at most t, where the Hamming distance is the number of ones in c_i XOR c_j and t is the maximum number of symbol errors the Reed-Solomon decoder can correct, which equals half the number of parity symbols in the codeword. An arbitrary node v_i receives the sensor readings from its neighboring nodes and forms the set S_i of nodes with readings similar to its own. Node v_i then compares its own reading and makes a decision on the basis of agreements and disagreements: in this phase, each sensor node decides whether or not to discard its own sensor reading in the face of this evidence. A formal description of this phase is presented in Algorithm 1. The value of the threshold used in this test is derived in the Appendix.
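The local comparison step can be sketched as below. The bit-string readings, the correction capability `t`, and the threshold `theta` are illustrative values, not the paper's parameters; the logic mirrors the rule that a node counts neighbors whose coded readings lie within Hamming distance t of its own and declares itself fault-free when that count exceeds the threshold.

```python
# Minimal sketch of the detection-phase comparison test.

def hamming(a: int, b: int) -> int:
    # Hamming distance = number of ones in the XOR of the two readings.
    return bin(a ^ b).count("1")

def is_fault_free(own_reading: int, neighbor_readings: list[int],
                  t: int, theta: int) -> bool:
    # Count neighbors whose reading agrees within the decoder's
    # correction capability t; pass if agreements exceed theta.
    agreements = sum(1 for r in neighbor_readings
                     if hamming(own_reading, r) <= t)
    return agreements > theta

neighbors = [0b10110010, 0b10110011, 0b10100010, 0b01001101]
print(is_fault_free(0b10110010, neighbors, t=2, theta=2))  # True
```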
The detection algorithm uses a timeout mechanism to detect hard faulty nodes: node v_i declares node v_j as hard faulty if v_i does not receive the sensor reading from v_j before the timeout expires. Node v_j cannot report to v_i if the transceiver of v_j is faulty, its battery is drained, or the node is completely damaged. At the end of the detection phase, every fault-free node in the network has a local diagnostic view.
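The timeout mechanism can be sketched as follows; the timeout value and the timestamp records are illustrative, not values from the paper.

```python
# Sketch: a neighbor that has not reported within T_OUT is declared
# hard faulty. Timestamps are in seconds and purely illustrative.

T_OUT = 5.0  # assumed timeout

def hard_faulty(last_report: dict[str, float], now: float) -> set[str]:
    """Return IDs of neighbors whose last report is older than T_OUT."""
    return {nid for nid, ts in last_report.items() if now - ts > T_OUT}

reports = {"n1": 10.0, "n2": 3.0, "n3": 9.5}
print(sorted(hard_faulty(reports, now=12.0)))  # ['n2']
```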
4.2. Dissemination Phase
The local diagnostic snapshots are disseminated to obtain a global diagnostic view of the network. The local diagnostic views are disseminated using an ST, which is constructed immediately after the deployment of the network. This work uses the UDG-NNT algorithm [27] to construct the ST, in which each node is assigned a rank and the sink node has the highest rank in the network. Each node except the sink selects the nearest node among its neighbors with a higher rank and sends a connect message to that node to inform it that the link between them is an edge in the ST. To maintain a connected ST, immediately after the detection phase nodes check whether they are still connected to the ST. If a node notices that its parent is faulty, it sends a connect message to the nearest fault-free node with a higher rank.
All leaves of the ST send their local diagnostic views to their parents. Each parent waits until it has collected the diagnostics from each of its children, then combines them with its own local diagnostic view and updates its fault table. The aggregated diagnostic message is then transmitted to its parent in the ST, and the process continues until the sink node has collected all the local diagnostics. Once the sink node has the global diagnostic view, it disseminates it down the tree to all nodes. The proposed model can now identify the set of faulty nodes present in the network. Let F(t) denote the true set of faulty nodes present in the network at time t, and let F'(t) denote the set of faulty nodes inferred by the model; the difference between F(t) and F'(t) is the diagnosis error.
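The bottom-up aggregation along the ST can be sketched as a post-order merge; the tree shape and the per-node fault sets are illustrative.

```python
# Sketch of the dissemination phase: each parent merges its children's
# local diagnostic views with its own, so the sink ends up with the
# global fault set. Tree and fault sets are hypothetical.

def aggregate(tree: dict[str, list[str]],
              local: dict[str, set[str]], node: str) -> set[str]:
    """Post-order merge of local fault views rooted at `node`."""
    view = set(local.get(node, set()))
    for child in tree.get(node, []):
        view |= aggregate(tree, local, child)
    return view

tree = {"sink": ["p1", "p2"], "p1": ["leaf1"], "p2": []}
local = {"leaf1": {"n7"}, "p1": {"n3"}, "p2": set(), "sink": set()}
print(sorted(aggregate(tree, local, "sink")))  # ['n3', 'n7']
```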
5. Performance Evaluation
Four performance metrics, namely, diagnosis latency, message complexity, detection accuracy (DA), and false detection rate (FDR), are used to evaluate the proposed algorithm. DA is defined as the ratio of the number of faulty sensor nodes detected to the total number of faulty sensor nodes in the network. FDR is defined as the ratio of the number of fault-free sensor nodes detected as faulty to the total number of fault-free nodes in the network. The upper bound on the time complexity is expressed in terms of the following bounds: (i) T_p, an upper bound on the time needed to propagate a message between sensor nodes; (ii) T_e, an upper bound on the time required to encode (compression and RS encoding) the image.
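The two accuracy metrics follow directly from their definitions and can be computed as below; the node sets are illustrative.

```python
# Sketch: DA = fraction of truly faulty nodes that were detected,
# FDR = fraction of fault-free nodes wrongly flagged as faulty.

def da_fdr(all_nodes: set, true_faulty: set, detected: set):
    fault_free = all_nodes - true_faulty
    da = len(detected & true_faulty) / len(true_faulty)
    fdr = len(detected & fault_free) / len(fault_free)
    return da, fdr

nodes = set(range(10))
truth = {1, 4, 7, 9}
found = {1, 4, 7, 2}  # missed node 9, wrongly flagged node 2
da, fdr = da_fdr(nodes, truth, found)
print(round(da, 2), round(fdr, 3))  # 0.75 0.167
```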
Lemma 1. The proposed diagnosis model terminates in at most O(T_e + d T_p) time, where d is the depth of the spanning tree, T_p bounds the per-hop message propagation time, and T_e bounds the image encoding time.
Proof. The detection phase takes a bounded time, on the order of T_e + T_p, for a node to detect its own status and obtain the IDs of hard-faulty 1-hop neighbors. In the ST maintenance phase, a node with a faulty parent needs at most a constant number of T_p intervals to reconnect to the ST. In at most d T_p, the sink node obtains the global diagnostic view of the network; the sink node then disseminates this view, which reaches the farthest node in at most a further d T_p. In the worst case, d = n − 1, so the upper bound on the time complexity is O(n).
The total number of messages exchanged by the nodes to establish a complete and correct diagnosis is termed the message complexity.
Lemma 2. The proposed model has a worst-case message complexity of O(n) in the network.
Proof. The diagnosis starts at each node by sending the coded message to its neighbors, costing one message per node, that is, n messages in the network. In the ST maintenance phase, a node with a faulty parent needs three message exchanges to reconnect to the ST; in the worst case, all nodes except the sink need to find a new parent, that is, 3(n − 1) messages are exchanged in the network to maintain the ST. Each node, excluding the sink, sends one local diagnostic message, and each node, excluding the leaves, sends one global diagnostic message, so the message cost for disseminating diagnostics is at most 2(n − 1). Thus, the total number of exchanged messages is at most n + 3(n − 1) + 2(n − 1) = O(n).
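The accounting in the proof can be tallied in a short sketch; the per-phase counts mirror the proof's terms (with the global-diagnostic count taken at its n − 1 upper bound), so the total grows linearly in n.

```python
# Sketch of the worst-case message count from Lemma 2's proof.

def worst_case_messages(n: int) -> int:
    detection = n            # one coded broadcast per node
    st_repair = 3 * (n - 1)  # three messages per reconnecting node
    local = n - 1            # one local diagnostic per non-sink node
    global_ = n - 1          # upper bound: one global msg per non-leaf
    return detection + st_repair + local + global_

for n in (10, 100, 1000):
    # The messages-per-node ratio stays bounded (approaches 6): O(n).
    print(n, worst_case_messages(n), worst_case_messages(n) / n)
```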
5.1. Simulation Results
The performance of the proposed scheme is evaluated via simulations in this section. This work uses OMNET++ as the simulation tool, and all simulations are conducted on networks using IEEE 802.15.4 at the MAC layer. The free-space physical layer model is adopted, where all nodes within the transmission range of a transmitting node receive a transmitted packet after a very short propagation delay. The set of simulation parameters is summarized in Table 1.
The RS code is used with s = 8 bits per symbol. For the RS encoder, the time cost to encode the bit stream of the test image is 1.02 msec, and the time consumed in compression is 4.08 msec [25]. The quality-reduction threshold is specified as a pixel failure rate. The test image used is a block of the Lena image. Every result shown is the average of 100 experiments, each using a different randomly generated topology.
5.1.1. Experiment 1
In this experiment, the two performance metrics DA and FDR of the proposed work are compared with the schemes proposed in [15, 16] for varying node failure rates and average numbers of neighbor nodes (N). Sensor nodes are assumed to be faulty with probabilities of 0.05, 0.10, 0.15, 0.20, 0.25, and 0.30, and both hard and soft faulty nodes are randomly deployed in the network. The simulation result for a low average number of neighbor nodes is shown in Figure 2.
The main reason for not achieving extremely high performance is that, for a low N, fault-free sensor nodes are unlikely to pass the threshold test. The detection accuracy of the proposed work outperforms one of the compared schemes, while the other shows a marginal improvement over ours: with a small neighborhood there is a probability that a faulty node with many faulty neighbors is detected as fault-free, whereas the compared scheme assumes a larger neighborhood, for which the probability of a node having that many faulty neighbors is very small. Furthermore, that scheme needs more message exchanges in the network to achieve this marginal improvement. However, the proposed work shows better performance in terms of FDR. Put in context, since the proposed scheme is intended for WISNs, which are known to be resource constrained, it is preferable to maintain a lower FDR and to be communication efficient; in other words, it is better to achieve high network reliability while maintaining a high level (>95%) of detection accuracy, which is what the proposed work achieves.
DA and FDR for larger average numbers of neighbor nodes are plotted in Figures 3 and 4, respectively. The key conclusion from these plots is that the performance of the detection model increases with N. For large N, the DA of the proposed work is very close to that of the compared scheme while maintaining a low FDR. Due to the expected high node degree in wireless sensor networks, the proposed fault diagnosis scheme is robust.
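The improvement with neighborhood size has a simple probabilistic reading, sketched below with a plain binomial tail: the probability that a fault-free node sees more than half of its N neighbors fault-free (each fault-free with probability 1 − p) rises with N. The majority threshold and the values of p and N here are illustrative, not the paper's exact threshold.

```python
# Sketch: P(more than half of N neighbors are fault-free) as a
# binomial tail, assuming independent faults with probability p.
from math import comb

def pass_prob(n_neighbors: int, p_fault: float) -> float:
    q = 1.0 - p_fault
    theta = n_neighbors // 2  # illustrative majority threshold
    return sum(comb(n_neighbors, k) * q**k * p_fault**(n_neighbors - k)
               for k in range(theta + 1, n_neighbors + 1))

for n in (4, 7, 10, 15):
    # The tail probability grows with the neighborhood size.
    print(n, round(pass_prob(n, 0.2), 4))
```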
5.1.2. Experiment 2
In this experiment, the average and worst-case latency of isolating faulty nodes is analyzed for varying node failure rates and N. Figures 5 and 6 show the diagnosis latency of the proposed work. From Lemma 1, it is evident that the dissemination of diagnostics contributes most of the change in diagnosis latency with respect to node density: the depth of the ST, which is used to disseminate diagnostics, determines the variation in diagnosis latency. Thus, as expected and as depicted in Figure 6, the time required to diagnose the WISN remains almost constant as the fault rate changes.
6. Conclusions and Future Work
This paper presents a distributed model that addresses the fundamental problem of identifying faulty (soft and hard) nodes in a WISN. The model is simple and detects faulty sensor nodes with high accuracy over a wide range of fault probabilities while maintaining low message overhead. The message and time complexity of the proposed model is O(n), which is significantly lower than present state-of-the-art approaches. Owing to its low message and time complexity, the model could be integrated into error-resilient image transport protocols for wireless sensor networks. A natural extension of the model is to address transient and intermittent faults; work is ongoing to develop a model that identifies transient and intermittent faults with lower message cost and the same or lower latency.
Appendix
In this appendix, we formulate the threshold θ.
Theorem 3. There exists an optimum value of the threshold θ that minimizes the detection error.
Proof. The proof closely follows a similar proof in [16]. The real situation at a sensor node is modeled by two variables: the sensor reading and the actual reading. Let E_k be the event that k out of the N 1-hop neighbors of a node report the same sensor reading as the node. The objective is to determine the fault detection estimate (DE) after obtaining information about the sensor readings of neighboring nodes. The possible values of DE are fault-free and faulty. The quantity of interest is the probability that the detection estimate is fault-free given that k of the neighboring sensors report the same reading as the node, P(DE = fault-free | E_k).
Let P(k) denote the probability that k out of the N neighbors of a node are fault-free. Since faults are assumed independent with probability p, this probability is binomial: P(k) = C(N, k) (1 − p)^k p^(N − k).
The correctness of the proposed algorithm can be analyzed through the conditional probabilities corresponding to the combinations of sensed reading, actual reading, and detection estimate. From these combinations we can calculate, by using marginal probability, the probability that the algorithm estimates the node as faulty even though the sensed and actual readings are the same. In a similar manner, we can calculate the probability that the algorithm estimates the node as fault-free even though the sensor reading does not agree with the actual reading.
Since each block in the proposed architecture is assumed to fail or function independently of the other blocks, the node failure probability equals the individual block failure probability p. From this follow the probability that at least one block is faulty when the source encoder is detected as fault-free, and the probability that at least one block is faulty when the source encoder is detected as faulty.
Equations (A.4) and (A.5) suffice to calculate the probability that the detection algorithm declares a fault-free node as faulty.
Similarly, (A.4) and (A.6) suffice to calculate the probability that the detection algorithm declares a faulty node as fault-free. In the proposed algorithm, the detection estimate is fault-free only when k exceeds the threshold of Algorithm 1; thus, (A.1) can be rewritten accordingly.
The error probability of the proposed algorithm in detecting the status of a node then follows.
Substituting into (A.10), the summand of (A.10) can be written in the form of (A.11).
For values of k below the optimum threshold, (A.11) is negative; it is zero at the optimum and positive above it. Additional negative contributions are produced by decreasing the threshold one at a time from N while the summand remains negative, and positive contributions appear once it becomes positive. It follows that the error probability achieves its minimum at the optimum threshold.
- I. F. Akyildiz, T. Melodia, and K. R. Chowdhury, “A survey on wireless multimedia sensor networks,” Computer Networks, vol. 51, no. 4, pp. 921–960, 2007.
- Z.-Y. Cao, Z.-Z. Ji, and M.-Z. Hu, “An image sensor node for wireless sensor networks,” in Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC '05), vol. 2, pp. 740–745, Las Vegas, Nev, USA, April 2005.
- S. Hengstler, D. Prashanth, S. Fong, and H. Aghajan, “Mesheye: a hybrid-resolution smart camera mote for applications in distributed intelligent surveillance,” in Proceedings of the 6th International Symposium on Information Processing in Sensor Networks (IPSN '07), pp. 360–369, Stanford University, Stanford, Calif, USA, April 2007.
- F. P. Preparata, G. Metze, and R. T. Chien, “On the connection assignment problem of diagnosable systems,” IEEE Transactions on Electronic Computers, vol. 16, no. 6, pp. 848–854, 1967.
- M. Malek, “A comparison connection assignment for diagnosis of multiprocessor systems,” in Proceedings of the 7th Annual Symposium on Computer Architecture (ISCA '80), pp. 31–36, ACM, La Baule, France, 1980.
- K.-Y. Chwa and S. L. Hakimi, “Schemes for fault-tolerant computing: a comparison of modularly redundant and t-diagnosable systems,” Information and Control, vol. 49, no. 3, pp. 212–238, 1981.
- D. M. Blough and H. W. Brown, “The broadcast comparison model for on-line fault diagnosis in multicomputer systems: theory and implementation,” IEEE Transactions on Computers, vol. 48, no. 5, pp. 470–493, 1999.
- S.-Y. Hsieh and Y.-S. Chen, “Strongly diagnosable systems under the comparison diagnosis model,” IEEE Transactions on Computers, vol. 57, no. 12, pp. 1720–1725, 2008.
- E. P. Duarte Jr. and T. Nanya, “A hierarchical adaptive distributed system-level diagnosis algorithm,” IEEE Transactions on Computers, vol. 47, no. 1, pp. 34–45, 1998.
- A. Subbiah and D. M. Blough, “Distributed diagnosis in dynamic fault environments,” IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 5, pp. 453–467, 2004.
- S. Chessa and P. Santi, “Crash faults identification in wireless sensor networks,” Computer Communications, vol. 25, no. 14, pp. 1273–1282, 2002.
- X. Luo, M. Dong, and Y. Huang, “On distributed fault-tolerant detection in wireless sensor networks,” IEEE Transactions on Computers, vol. 55, no. 1, pp. 58–70, 2006.
- M. Elhadef, A. Boukerche, and H. Elkadiki, “A distributed fault identification protocol for wireless and mobile ad hoc networks,” Journal of Parallel and Distributed Computing, vol. 68, no. 3, pp. 321–335, 2008.
- X. Xu, W. Chen, J. Wan, and R. Yu, “Distributed fault diagnosis of wireless sensor networks,” in Proceedings of the 11th IEEE International Conference on Communication Technology (ICCT '08), pp. 148–151, Hangzhou, China, November 2008.
- M.-H. Lee and Y.-H. Choi, “Fault detection of wireless sensor networks,” Computer Communications, vol. 31, no. 14, pp. 3469–3475, 2008.
- B. Krishnamachari and S. Iyengar, “Distributed bayesian algorithms for fault-tolerant event region detection in wireless sensor networks,” IEEE Transactions on Computers, vol. 53, no. 3, pp. 241–250, 2004.
- J. Chen, S. Kher, and A. Somani, “Distributed fault detection of wireless sensor networks,” in Proceedings of the Workshop on Dependability Issues in Wireless Ad Hoc Networks and Sensor Networks (DIWANS ’06), pp. 65–72, ACM, New York, NY, USA, 2006.
- A. Czarlinska and D. Kundur, “Wireless image sensor networks: event acquisition in attack-prone and uncertain environments,” Multidimensional Systems and Signal Processing, vol. 20, no. 2, pp. 135–164, 2009.
- H. Wu and A. A. Abouzeid, “Error resilient image transport in wireless sensor networks,” Computer Networks, vol. 50, no. 15, pp. 2873–2887, 2006.
- P. Jiang, “A new method for node fault detection in wireless sensor networks,” Sensors, vol. 9, no. 2, pp. 1282–1294, 2009.
- S. Guo, Z. Zhong, and T. He, “Find: faulty node detection for wireless sensor networks,” in Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys '09), pp. 253–266, ACM, Berkeley, Calif, USA, 2009.
- M. Barborak, A. Dahbura, and M. Malek, “The consensus problem in fault-tolerant computing,” ACM Computing Surveys (CSUR), vol. 25, no. 2, pp. 171–220, 1993.
- G. H. Chapman, S. Djaja, D. Y. H. Cheung, Y. Audet, I. Koren, and Z. Koren, “A self-correcting active pixel sensor using hardware and software correction,” IEEE Design & Test of Computers, vol. 21, no. 6, pp. 544–551, 2004.
- L. W. Chew, L.-M. Ang, and K. P. Seng, “Survey of image compression algorithms in wireless sensor networks,” in Proceedings of the International Symposium on Information Technology (ITSim '08), vol. 4, pp. 1–9, Kuala Lumpur, Malaysia, August 2008.
- D.-U. Lee, H. Kim, M. Rahimi, D. Estrin, and J. D. Villasenor, “Energy-efficient image compression for resource-constrained platforms,” IEEE Transactions on Image Processing, vol. 18, no. 9, pp. 2100–2113, 2009.
- G. C. Cardarilli, S. Pontarelli, M. Re, and A. Salsano, “Concurrent error detection in Reed–Solomon encoders and decoders,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 15, no. 7, pp. 842–846, 2007.
- M. Khan, G. Pandurangan, and V. S. Anil Kumar, “Distributed algorithms for constructing approximate minimum spanning trees in wireless sensor networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 20, no. 1, pp. 124–139, 2009.