International Journal of Distributed Sensor Networks
Volume 2013 (2013), Article ID 484359, 7 pages
Research Article

Dynamic Reconfigurable Hub as a Stationary Node in a Hybrid Sensor Network

School of Electronics Engineering, College of IT Engineering, Kyungpook National University, 80 Daehakro, Buk-gu, Daegu 702-701, Republic of Korea

Received 15 March 2013; Revised 12 June 2013; Accepted 13 June 2013

Academic Editor: Tai-hoon Kim

Copyright © 2013 Seol Young Jeong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Sensor network systems are expanding with advances in low-power chip design and in technologies for operating sensor networks. However, it is difficult to modify or replace the application of a stationary sensor node after installation in the field. Furthermore, if the software and hardware of an existing sensor node must be reconfigured, there is little alternative to developing, testing, and installing a new node. Therefore, dynamic hardware reconfiguration is needed in addition to software reconfiguration to accommodate changing requirements such as sensor type, communication bandwidth, quality of service, and power consumption, which vary with the environment and with users' needs. This paper proposes a specially designed stationary node, called a SMART (system management architecture for reconfigurable technology) node, which supports dynamic reconfiguration of its sensing functions and communication protocols. The SMART node can provide suitable services and change its communication protocol according to its surroundings and user requirements.

1. Introduction

An onsite stationary sensor node collects information about its surroundings to provide various location-aware services [1]. During software and hardware design, stationary sensor nodes are generally optimized for the functions required by their target environment. Once such nodes have been developed and installed in the field, however, modifying their software or hardware functions is not easy [2–4]. Moreover, when many mobile nodes suddenly connect and begin transmitting to a stationary node in a particular area, the system has to select a network that can instantly guarantee enough bandwidth for all services [5]. It must also select a network that guarantees stable data transfer rates even if the existing networks are damaged by fire, flood, or other natural disasters [6]. If the initially installed network is provisioned with enormous bandwidth to guarantee the required services, it is clearly inefficient in terms of installation and maintenance cost [7].

This paper proposes a dynamically reconfigurable stationary hub node, called a SMART (system management architecture for reconfigurable technology) node, that combines a host board with device modules. A device module provides functions such as environmental data gathering, network data transactions, and plug-and-play operation. The host board can accommodate many kinds of device modules, so entirely reinstalling a sensor node is unnecessary: the SMART node adapts its behavior dynamically to a change of device module.

The remainder of the paper is structured as follows. Related works are discussed in Section 2. Section 3 introduces the system requirements and an outline of the system. Section 4 describes the proposed reconfigurable system design and its hardware and software implementation. Section 5 presents a performance evaluation on a real test bed. Finally, conclusions are drawn in Section 6.

2. Related Works

The Narada [8] wireless sensor node uses low-voltage, power-amplified radios to achieve long communication ranges. Sensing data are transmitted over the 2.4 GHz IEEE 802.15.4 radio standard using the TI CC2420 transceiver. In the proposed structural monitoring system for short- and medium-span highway bridges, the wireless sensors can be quickly removed and reinstalled in new locations. However, structural monitoring systems demand different types of sensors and communication protocols depending on the environment.

Another study proposes a framework based on a service-oriented architecture that is modular, reusable, and extensible for structural health monitoring with networks of wireless smart sensors [9]. The wireless sensor platform used in that research is the Imote2, a commercially available smart sensor platform. The Imote2 runs TinyOS [10], an operating system designed for sensor networks. TinyOS updates the kernel and application code via crossbow network programming (XNP) to support dynamic reconfiguration. However, this method incurs high communication cost and long update times because the entire program is transmitted over the sensor network.

Mate [11] addresses these problems of TinyOS. Reprogramming the network merely requires a sensor node to inject a new propagating capsule, a small unit of VM-specific binary code. However, Mate has the disadvantages that running the VM in the background imposes overhead on every node and that its applicability is limited, because capsules cannot cover all applications. SOS [12] is an event-driven architecture that supports scheduling for sensor networks, but downloading modules remains expensive in terms of communication cost.

RUNES [13], a European Union project, is a lightweight middleware framework whose goal is to develop applications that are not affected by the underlying platform. However, under RUNES, sensor nodes must be rebuilt and reinstalled whenever new hardware functions are added.

To solve these problems, we propose the SMART node, which overcomes both the limited applicability and the inefficiency of pure software updates. It can be dynamically reconfigured when hardware or software changes and adapts its behavior to the surrounding environment through a change of device module.

3. System Requirement and Proposed System Outline

The SMART node proposed in this paper meets the following requirements.

3.1. Connection Management in Every Module

First, the master module, or host board (HB), detects the connection of each module and decides which communication interface to use. Second, the HB communicates with every slave module, or device module (DM), to initialize the device drivers for communication. The HB requests configuration data, such as the number, type, and ID of the sensors, from the DM. When the DM responds, the HB stores the received configuration data in a repository and then manages the DM based on these data. Finally, when a DM disconnects from the HB, the HB unloads the corresponding device drivers, releases the designated resources, and reports the result to the server.
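The connection lifecycle above can be sketched as follows. This is an illustrative simulation, not the paper's implementation; the class and method names (`HostBoard`, `request_config`, `notify_disconnect`) are hypothetical.

```python
# Hypothetical sketch of the HB-side connection lifecycle: query the DM's
# configuration on connect, store it in a repository, and release it (with a
# report to the server) on disconnect. All names here are illustrative.

class HostBoard:
    def __init__(self):
        self.repository = {}  # configuration data kept per port

    def on_connect(self, port, device_module):
        """Query the DM for its configuration and register it."""
        config = device_module.request_config()  # number, type, and ID of sensors
        self.repository[port] = config           # manage the DM based on these data
        return config

    def on_disconnect(self, port, server):
        """Release the DM's resources and report the result to the server."""
        config = self.repository.pop(port, None)
        if config is not None:
            server.notify_disconnect(port, config)

# Minimal stand-ins to exercise the sketch.
class FakeDM:
    def request_config(self):
        return {"count": 2, "type": "temperature", "id": 23}

class FakeServer:
    def __init__(self):
        self.events = []
    def notify_disconnect(self, port, config):
        self.events.append((port, config["id"]))

hb = HostBoard()
cfg = hb.on_connect(1, FakeDM())
server = FakeServer()
hb.on_disconnect(1, server)
```

The repository plays the role described in the text: the HB manages each DM purely through the configuration data the DM reported at connect time.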

3.2. Selection of Communication Protocols

The sensor node has to transmit data to the server or other units. For this transmission, it can use either a DM equipped with a communication interface or the Ethernet port on the HB.

3.3. A Variety of Communication Interfaces

When the HB establishes a communication interface with a module, it can select one of the supported communication protocols. The HB has to select the proper communication type based on the type and characteristics of the DM, such as the size of the transmitted and response data, the required data rate, and development cost.

The SMART node is configured from the HB and the DMs that perform the functions required on demand. As shown in Figure 1, the SMART node can be configured as a sensor node supporting a variety of functions and communications by combining modules that detect environmental data such as fire, movement, gas, temperature, humidity, and smoke. The DMs can communicate through ZigBee or WiFi.

Figure 1: The configuration of the SMART node.

4. Detailed System Design and Implementation

The HB is based on a low-power 32-bit MCU and is composed of three layers. As shown in Figure 2, Layer 1 contains an Ethernet port, internal peripheral ports (UART, SPI, USB, I2C), and the protocols used to connect device modules. Layer 2 connects Layer 1 and Layer 3; it consists of four ports, with two communication interfaces per port, and the protocols supported by Layer 1. Layer 3 exchanges data with Layer 1 using a protocol and communication interface supported by Layer 2 and contains an MCU, various kinds of sensors, and a communication module.

Figure 2: The hardware architecture of the SMART node.

The DM has an 8-bit MCU equipped with a port interface. The port interface selects one or more internal communication protocols according to the type of device module. There are also various types of DM, depending on the type of sensor.

The SMART node's features are as follows: it recognizes whether a DM is connected and handles data transmission to and from the DM. It also transmits the data required to select an available network protocol according to the purpose or policy of the service.

The software of the SMART node consists of three parts according to device functionality. As shown in Figure 3, work management stores work requested by the application and divides it into internal and external work. Work transport delivers the required work to the internal or external repository.

Figure 3: The software architecture of the SMART node.

Work is classified according to the DM data stored in the queue designated to each port. Work management transmits the work stored in work transport to the target DM. Port management handles the four ports and detects whether a DM is connected. The connection process between a device module and the host board in the port register is divided into three phases; the tasks handled in each phase are as follows.
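The internal/external split and the per-port queues can be sketched as below. The classification rule (`target == "device"`) and the dictionary shapes are assumptions for illustration; the paper does not specify the work record format.

```python
# Illustrative sketch of the work-management flow: work bound for a locally
# connected DM is internal and goes to that port's queue; work bound for the
# server or other units is external. Field names are hypothetical.
from collections import deque

INTERNAL, EXTERNAL = "internal", "external"

def classify(work):
    # Assumed rule: work addressed to a local device module is internal.
    return INTERNAL if work["target"] == "device" else EXTERNAL

port_queues = {p: deque() for p in range(4)}  # one queue per HB port
external_queue = deque()                      # work transmitted outside the node

def dispatch(work):
    if classify(work) == INTERNAL:
        port_queues[work["port"]].append(work)  # queued for the DM on that port
    else:
        external_queue.append(work)             # queued for external transmission

dispatch({"target": "device", "port": 2, "cmd": "read_sensor"})
dispatch({"target": "server", "port": 0, "cmd": "report"})
```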

4.1. Detect Step

The port register detects whether a device module is connected through the PIO pin of the 20-pin connector. The detect phase of the port register proceeds in sequence: connection between modules, detection of the communication protocol, and disconnection between modules.

Figure 4 illustrates the interrupt that occurs when the modules are connected. The PIO of the HB is first configured with a pull-up, so its initial state is high. When a DM is connected, the PIO pin changes from high to low and stays low. When the DM is disconnected, the pin floats and returns to the high state set by the HB's initial pull-up configuration. At this point, the SMART node detects the interrupt, releases the pertinent memory, and removes the tasks for the DM.

Figure 4: The process of detecting the device module.
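The pull-up detection logic can be modeled as a tiny state machine. This is a simulation of the behavior described around Figure 4, not firmware; the class and counter names are illustrative.

```python
# Sketch of the pull-up detection logic: the HB pin idles HIGH, goes LOW while
# a DM is attached, and returns HIGH when the DM is removed, at which point the
# node releases memory and tasks for that DM (modeled here as a cleanup count).

HIGH, LOW = 1, 0

class PortPin:
    def __init__(self):
        self.level = HIGH      # pull-up: initial state is HIGH
        self.connected = False
        self.cleanups = 0      # times resources were released on disconnect

    def on_edge(self, new_level):
        if self.level == HIGH and new_level == LOW:
            self.connected = True        # DM plugged in
        elif self.level == LOW and new_level == HIGH:
            self.connected = False       # DM removed: release resources
            self.cleanups += 1
        self.level = new_level

pin = PortPin()
pin.on_edge(LOW)    # connection interrupt: high -> low
pin.on_edge(HIGH)   # disconnection interrupt: low -> floating -> high
```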

The process for detecting the communication protocol type of a DM is shown in Figure 5. The HB counts the interrupt signals on the connector pin for a fixed period of time. This count indicates the communication protocol used between the two modules. For example, one interrupt within 1 second indicates the UART type, after which the HB loads the related device drivers.

Figure 5: The process of detecting the type of communication protocols.
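The count-to-protocol mapping can be sketched as a lookup table. The paper only states that a count of one means UART; the mappings for the other interfaces below (SPI, I2C, USB) are assumptions for illustration.

```python
# Map the number of interrupts seen in the ~1 s detection window to an
# interface type. Only count 1 -> UART is stated in the text; the other
# entries are hypothetical placeholders.

PROTOCOL_BY_COUNT = {1: "UART", 2: "SPI", 3: "I2C", 4: "USB"}

def detect_protocol(interrupt_count):
    """Return the interface implied by the interrupt count, or None if unknown."""
    return PROTOCOL_BY_COUNT.get(interrupt_count)

proto = detect_protocol(1)  # one interrupt in the window -> UART per the paper
```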

4.2. Init Step

In this step, all data of the connected DM are stored in the HB once the Detect step is finished. First, the SMART node registers the number of the detected port and loads the device driver that fits the communication interface. Second, it creates a task that handles data transmission. Finally, all of the DM's data are stored as an internal service in port management.

4.3. Notify Step

The previously mentioned data are transmitted to the server or other units by work transport in the SMART node. This is handled as external service because data are transmitted outside the SMART node.

The SMART node executes many jobs simultaneously according to priority. Thus, the system needs an OS that can preferentially handle many tasks on limited system resources according to their priorities. The system is therefore built on Ubinos [14], a multithreaded operating system for resource-limited embedded systems in ubiquitous computing environments and sensor networks. Each Ubinos thread has a priority and is scheduled preemptively according to that priority.
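The scheduling policy can be illustrated generically. This sketch does not use the real Ubinos API; it only shows the priority-preemptive principle: the ready thread with the highest priority is always dispatched first.

```python
# Generic sketch of priority-based dispatch (not the Ubinos API): threads are
# made ready with a priority, and the highest-priority ready thread runs first.
import heapq

class Scheduler:
    def __init__(self):
        self._ready = []  # min-heap of (-priority, seq, name): highest priority pops first
        self._seq = 0     # tie-breaker preserving ready order at equal priority

    def make_ready(self, priority, name):
        heapq.heappush(self._ready, (-priority, self._seq, name))
        self._seq += 1

    def pick_next(self):
        """Return the highest-priority ready thread, or None if idle."""
        return heapq.heappop(self._ready)[2] if self._ready else None

sched = Scheduler()
sched.make_ready(1, "logger")
sched.make_ready(5, "sensor_read")  # highest priority: dispatched first
sched.make_ready(3, "network_tx")
order = [sched.pick_next() for _ in range(3)]
```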

5. Evaluation

5.1. Test Bed

The test bed for the experiment consists of several SMART nodes and a server, as shown in Figure 6. Nodes 1, 2, and 3 form PAN 1 and are built from a sensor DM gathering environmental data and a ZigBee DM transmitting the data. Data transactions between the internal layers of a node use communication protocols such as SPI, UART, I2C, and USB. The data of each SMART node are transmitted to a SMART node acting as a gateway, which forwards them to the server. PAN 2, which includes Node 4, transmits the environmental data of its SMART nodes over Ethernet. Node 5 serves as a WiFi AP, and PAN 3, which includes Node 5, is reachable over WiFi from a smartphone.

Figure 6: The configuration of the test bed.

The features of the server program are as follows. When a SMART node connects, the server program creates an icon for it. It also displays status information such as the sensor type, current value, and critical value, as well as whether each device module is connected. Finally, the server program transmits control commands and stores a packet log file.

5.2. Initialization Time When Multiple Devices Are Simultaneously Connected

We measured the total time needed to initialize the connected DMs when DMs are attached to the three ports of the SMART node (excluding the debug port) and the node is powered on. When a single device is connected, the initialization times are as follows. The Detect step took about 120 ms, as shown in Figure 7; of this, approximately 100 ms is spent counting the interrupt signals that determine the type of communication module. The Init step then took an average of 4.88 ms, and the Notify step about 26 ms. Thus, the average time to report to the server after a DM is connected is about 131.1 ms.

Figure 7: The initialization time when three devices are simultaneously connected.

When several devices are connected to the SMART node at the same time, the initialization times are as follows: an average of 121 ms in the Detect step, an average of 7.5 ms in the Init step, and about 72.5 ms in the Notify step. These results show that the processing time exceeds the waiting time, because the signals arriving at the respective ports are handled sequentially. Thus, the SMART node works without problems even when many device modules are connected and initialized simultaneously.

5.3. Processing Time of Internal/External Working

Figure 8 shows the measured processing times for the internal and external work of the SMART node application. Handling internal work, namely, reading a sensor value, took an average of 2.4 ms, and transmitting the data to the server took an average of 26.3 ms. All tests were repeated 10 times.

Figure 8: The processing time of internal or external working.

5.4. Transmission Protocol Selection Policy

Figure 9 shows the results of the external work policy, which considers data size and transmission time when a node can use Ethernet and ZigBee at the same time. Transmitting data by ZigBee takes about 6 ms per byte, whereas the processing time for Ethernet is about 13 ms per byte. In the figure, cost is calculated from the data size, the per-byte transmission time, and the processing time. Ethernet also incurs a penalty due to its wake-up time.

Figure 9: The transmission cost and time.

Figure 9(a) shows the cost as it changes with data size: if more than 17 KB of data is transmitted, it is more efficient to use Ethernet. Figure 9(b) shows the results for data transmission time; in this case, Ethernet is more efficient for data larger than 3.6 KB.
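The selection policy amounts to comparing a fixed-penalty-plus-per-byte cost for each link. The sketch below illustrates the crossover structure only; the per-byte rates and wake-up penalty are invented for illustration and are not the paper's measured figures.

```python
# Rough cost model for the transmission-selection policy: each link has a
# per-byte time plus an optional fixed penalty (Ethernet wake-up). Parameter
# values below are illustrative assumptions, chosen only so that a crossover
# exists where Ethernet becomes the cheaper link for large payloads.

def tx_cost(size_bytes, per_byte_ms, penalty_ms=0.0):
    """Total transmission cost in ms for a payload of size_bytes."""
    return penalty_ms + per_byte_ms * size_bytes

def cheaper_link(size_bytes, zb_per_byte=6.0, eth_per_byte=0.5,
                 eth_penalty_ms=100_000.0):
    zb = tx_cost(size_bytes, zb_per_byte)
    eth = tx_cost(size_bytes, eth_per_byte, eth_penalty_ms)
    return "ethernet" if eth < zb else "zigbee"

small = cheaper_link(1024)       # small payload: ZigBee avoids the wake-up penalty
large = cheaper_link(32 * 1024)  # large payload: Ethernet's lower per-byte rate wins
```

With any parameters of this shape, the crossover size is `penalty / (zb_per_byte - eth_per_byte)`, which is the quantity Figure 9 locates empirically at about 17 KB for cost and 3.6 KB for time.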

6. Conclusion

In this paper, we propose an improved sensor node, called the SMART node, which enables dynamic reconfiguration of both the node's functions and its network communication protocols. The SMART node can perform required tasks without modifying the functions of installed sensor nodes because it can reconfigure the entire system by changing specific device modules according to the changing environment and user needs. The SMART node therefore reduces the time and cost required to produce a sensor node and helps maintain data transmission when traffic momentarily increases.

Recently, demand for services using sensor nodes, such as location awareness, location tracking, multimedia data transactions, and data transmission across heterogeneous networks, has gradually increased. The proposed SMART node is expected to be effective in solving problems on such networks and in handling these various services.


Acknowledgment

This work was supported by the IT R&D program of MOTIE/KEIT (10041145, Self-Organized Software Platform (SOSP) for Welfare Devices).


References

  1. S. Raazi and S. Lee, “A survey on key management strategies for different applications of wireless sensor networks,” Journal of Computing Science and Engineering, vol. 4, no. 1, pp. 23–51, 2010.
  2. T. Kim and S. Hong, “State machine based operating system architecture for wireless sensor networks,” in Proceedings of the 5th International Conference on Parallel and Distributed Computing: Applications and Technologies (PDCAT '04), pp. 803–806, Springer, Heidelberg, Germany, December 2004.
  3. P. Grace, G. Coulson, G. Blair, B. Porter, and D. Hughes, “Dynamic reconfiguration in sensor middleware,” in Proceedings of the 7th International Workshop on Middleware for Sensor Networks (MidSens '06), pp. 1–6, Melbourne, Australia, November 2006.
  4. I.-Y. Chen and C.-C. Huang, “A reconfigurable software distribution framework for smart living environments,” International Journal of Smart Home, vol. 1, no. 2, pp. 155–170, 2007.
  5. D.-K. Lee, B.-C. Kim, S.-H. Park, and S.-J. Kang, “The pumping node architecture to solve the traffic congestion problem due to the crowds of mobile nodes in wireless sensor networks,” Journal of Korea Information and Communications Society, vol. 34, no. 8, pp. 777–785, 2009.
  6. R. Kawano and T. Miyazaki, “Simultaneous optimization for dynamic sensor function allocation and effective sensed data aggregation in wireless sensor networks,” International Journal of Future Generation Communication and Networking, vol. 2, no. 4, pp. 15–27, 2009.
  7. A. Ben Letaifa and S. T. Mediatron, “Dynamic reconfiguration of radio mobile multimedia services platforms of 3rd and 4th generations,” International Journal of Advanced Science and Technology, vol. 12, pp. 11–24, 2009.
  8. J. Kim, R. A. Swartz, J. P. Lynch, J. Lee, and C. Lee, “Rapid-to-deploy reconfigurable wireless structural monitoring systems using extended-range wireless sensors,” Smart Structures and Systems, vol. 6, no. 5-6, pp. 505–524, 2010.
  9. J. A. Rice, K. A. Mechitov, S. H. Sim, B. F. Spencer Jr., and G. A. Agha, “Enabling framework for structural health monitoring using smart sensors,” Structural Control and Health Monitoring, vol. 18, no. 5, pp. 574–587, 2011.
  10. J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister, “System architecture directions for networked sensors,” in Proceedings of the 9th International Conference on Architectural Support for Programming Languages and Operating Systems, Vancouver, Canada, June 2000.
  11. P. Levis and D. Culler, “Mate: a tiny virtual machine for sensor networks,” in Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 85–95, San Jose, Calif, USA, October 2002.
  12. C. C. Han, R. K. Rengaswamy, R. Shea, E. Kohler, and M. Srivastava, “SOS: a dynamic operating system for sensor networks,” in Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, Seattle, Wash, USA, June 2005.
  13. P. Costa, G. Coulson, C. Mascolo, L. Mottola, G. P. Picco, and S. Zachariadis, “Reconfigurable component-based middleware for networked embedded systems,” International Journal of Wireless Information Networks, vol. 14, no. 2, pp. 149–162, 2007.
  14. S. H. Park, D. K. Lee, and S. J. Kang, “Compiler-assisted maximum stack usage measurement technique for efficient multi-threading in memory-limited embedded systems,” Studies in Computational Intelligence, vol. 365, pp. 113–129, 2011.