International Journal of Distributed Sensor Networks
Volume 2013 (2013), Article ID 490489, 11 pages
http://dx.doi.org/10.1155/2013/490489
Research Article

Modeling and Verification of a Heterogeneous Sky Surveillance Visual Sensor Network

Division of Electronics Design, Mid Sweden University, Holmgatan 10, 851 70 Sundsvall, Sweden

Received 19 April 2013; Accepted 29 July 2013

Academic Editor: Ivan Lee

Copyright © 2013 Naeem Ahmad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A visual sensor network (VSN) is a distributed system of a large number of camera nodes and has useful applications in many areas. The primary difference between a VSN and an ordinary scalar sensor network is the nature and volume of the information. In contrast to scalar sensor networks, a VSN generates two-dimensional data in the form of images. In this paper, we design a heterogeneous VSN to reduce the implementation cost required for the surveillance of a given area between two altitude limits. The VSN is designed by combining three sub-VSNs, which results in a heterogeneous VSN. Measurements are performed to verify full coverage and the minimum achieved object image resolution at the lower and higher altitudes, respectively, for each sub-VSN. Verification of the sub-VSNs also verifies the full coverage of the heterogeneous VSN between the given altitude limits. Results show that the heterogeneous VSN is very effective in decreasing the implementation cost required for the coverage of a given area. A decrease in cost of more than 70% is achieved by using a heterogeneous VSN to cover a given area, in comparison to a homogeneous VSN.

1. Introduction

Visual sensor networks (VSNs) are distributed systems based on image sensors. They consist of a large number of low-power camera nodes which collect image data from the environment and perform distributed and collaborative processing on this data [1, 2]. These nodes extract useful information from collected images by processing the image data locally and also collaborate with other neighboring nodes to create useful information about the captured events. The large amount of image data produced by camera nodes and the limited network resources demand the exploration of new techniques and methods for sensor management and for the processing and communication of large data volumes. This demands an interdisciplinary approach spanning disciplines such as vision processing, communication, networking, and embedded processing [3]. Camera sensor nodes of different types and costs are available on the market today. These nodes have different optical and mechanical properties, and varied ranges of computation capabilities, communication power, and energy requirements. The choice of camera nodes to design a VSN depends on the requirements and constraints of a given application. Depending on the type of camera nodes, VSNs can be divided into two major categories, homogeneous and heterogeneous VSNs [4]. A homogeneous VSN uses similar types of nodes while a heterogeneous VSN uses different types of nodes.

VSNs have a large number of potential applications. They can be used in surveillance applications for intrusion detection, building monitoring, home security, and so forth. These applications capture images of the surroundings and process them for recognition and classification of suspicious objects [5, 6]. Visual monitoring is being used in retail stores and in homes for security purposes [7]. Modern VSNs can provide surveillance coverage across a wide area, ensuring object visibility over a large range of depths [8, 9]. They can be used in traffic monitoring applications [10–14] for vehicle counting, speed measurements, vehicle classification, travel times of city links, and so forth. VSNs have a number of applications in the sports and gaming fields [15, 16]. They can collect statistics about the players or the game to design more specific training suited to individual players. VSNs have applications in environment monitoring to collect data about animals and the environment, which can be helpful in addressing nature conservation issues [17, 18]. Video monitoring has important applications in searching for empty parking spaces [19]. The PSF application [20] is designed to locate and direct a driver to the available parking spaces near a desired destination. Video tracking techniques are used for people tracking [21].

A homogeneous VSN is formed by combining similar camera nodes. In contrast, a heterogeneous VSN contains different types of nodes [4, 22]. A heterogeneous VSN provides many advantages over a homogeneous VSN, such as resolution of conflicting design issues, reduction in energy consumption without degrading surveillance accuracy, higher coverage, lower cost, and reliability. The task allocation strategy in heterogeneous VSNs is designed such that simpler tasks are assigned to resource-constrained nodes while more complex tasks are assigned to high-performance nodes. For example, in a surveillance application, a motion detection task can be assigned to lower-resolution nodes while an object recognition task can be assigned to high-resolution nodes. This strategy results in optimized use of node resources. This optimization in heterogeneous VSNs results in maximization of network lifetime as compared to homogeneous VSNs. In designing homogeneous VSNs, the sensor selection and node design are finalized on the basis of the most demanding application tasks. Assigning such a node to simpler tasks wastes precious node resources. A homogeneous VSN design is unable to provide all the desired features, such as higher coverage, low cost, reliability, and energy reduction, at the same time. It optimizes one or more parameters at the expense of other parameters [9, 23].

A single camera has a limited field of view and is unable to provide coverage of a large area. To cope with this problem, arrays of cameras are used. The decreasing cost and increasing quality of cameras have made visual surveillance of an area with a network of cameras common [24]. An important research area is the placement of visual sensors to achieve a specific goal. A common goal is to place the image sensors in arrays for complete coverage of a given area. Another goal can be to minimize the cost of the visual sensor arrays while maintaining the required resolution. Currently, designers place cameras by hand due to the lack of theoretical research on planning visual sensor placement. In the future, the number of cameras in smart surveillance applications is expected to increase to hundreds or even thousands. Thus, it is extremely important to develop camera placement strategies [25, 26]. In recent years, a large number of smart camera networks have been deployed for a variety of applications. An important design issue in these distributed environments is the proper placement of cameras. Optimum camera placement increases the surveillance coverage and also improves the appearance of objects in the cameras [24].

This paper discusses the design of a heterogeneous VSN to be used for the application of monitoring large birds, such as eagles, in the sky. The heterogeneous VSN will be able to provide coverage of each point in the area with the required minimum resolution. The heterogeneous VSN is designed by combining three sub-VSNs for the surveillance of an area between two altitude limits. The VSN will be implemented with real monitoring nodes, which will be designed by following the model. Images will be captured and measurements will be performed to verify the full coverage at the given lower altitude and the minimum achieved resolution at the given higher altitude. The heterogeneous VSN will reduce the implementation cost in comparison to a homogeneous VSN.

The remainder of this paper is organized as follows. Section 2 describes the necessary theory for designing homogeneous and heterogeneous VSNs. Section 3 discusses the camera models and optics used in the experiment. Section 4 describes the heterogeneous VSN coverage model. Section 5 discusses the experimental measurements performed to verify the VSN parameters. Section 6 discusses the results, and finally Section 7 concludes the paper.

2. Theory

This section briefly describes the theory of designing VSNs and discusses the homogeneous and heterogeneous VSN models. A heterogeneous VSN is a combination of a number of homogeneous VSNs and is effective in reducing the implementation cost for the surveillance of an area. An example of a heterogeneous VSN is presented, and a cost comparison between homogeneous and heterogeneous VSNs is made to demonstrate the cost reduction capability of the heterogeneous VSN.

2.1. Homogeneous VSN Model

The design of a VSN for the surveillance of golden eagles is presented in [27]. The VSN is formed with a number of cameras. A 3D visualization of the coverage with a matrix of cameras is shown in Figure 1. A sharp tip at the bottom of the figure represents a camera node. A number of such nodes form a matrix which is used to monitor eagles in the sky between two altitude limits above the ground, the higher altitude h_high and the lower altitude h_low. The area covered by the VSN increases as the altitude above the camera nodes increases. The area coverage increases up to the lower altitude h_low, where the VSN is able to fully cover a given area. Above altitude h_low the coverage starts to overlap. This overlap is used to apply a triangulation technique to find the exact size and altitude of an eagle.

Figure 1: 3D visualization of coverage with a matrix of cameras.

As the altitude of an eagle above the camera nodes increases, the resolution of its image on the camera sensor decreases. A minimum image resolution is necessary to recognize an object moving at altitude h_high and is assumed to be 10 pixels per meter. For altitudes higher than h_high, the resolution of the object image drops below the minimum required criterion, so the VSN coverage is assumed to be zero above this altitude. The dark surface at the top of Figure 1 represents the altitude h_high.

The first step in designing a VSN is the selection of a camera sensor. A number of camera sensors are used in this study, and the details about their type, resolution, and size are given in Table 1. The second step of VSN design is the calculation of the focal length f of the lens to be used with the chosen camera, which ensures the minimum required resolution at altitude h_high and can be calculated by the following equation:

f = max( (p_w · h_high · s_h) / (w · r_h), (p_l · h_high · s_v) / (l · r_v) )     (1)

where p_w and p_l are the minimum numbers of pixels of the object image along its width and length, w and l are the width and length of the object, h_high is the higher altitude, s_h and s_v are the image sensor lengths, and r_h and r_v are the image sensor resolutions along the horizontal and vertical directions, respectively. The angle of view (AoV) of a node, using a particular image sensor and a lens of specific focal length, can be calculated by the following equation:

AoV = 2 · arctan( s / (2f) )     (2)

where s is the chosen length (horizontal, vertical, or diagonal) of the camera sensor and f is the lens focal length. The combination of camera and lens forms an individual node, and a number of such nodes are required to surveil a given area.

Table 1: Camera sensors used in study.
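The two design equations above can be checked numerically. The following sketch assumes pixel requirements of p_w ≈ 20 and p_l ≈ 10 (values chosen so that the result matches the 12 mm lens used later in Section 2.3; they are not stated explicitly in this section):

```python
from math import atan, degrees

def min_focal_length(p_w, p_l, w, l, h_high, s_h, s_v, r_h, r_v):
    """Smallest focal length giving at least p_w (p_l) pixels across the
    object's width (length) at altitude h_high; eq. (1). Lengths in mm."""
    f_w = p_w * h_high * s_h / (w * r_h)   # constraint along the width
    f_l = p_l * h_high * s_v / (l * r_v)   # constraint along the length
    return max(f_w, f_l)                   # both constraints must hold

def angle_of_view(s, f):
    """AoV in degrees for sensor length s and focal length f; eq. (2)."""
    return degrees(2 * atan(s / (2 * f)))

# 5 Mp sensor (5.632 x 4.224 mm, 2560 x 1920 px) observing the scaled
# 10.8 x 5.4 mm bird at 2930 mm
f = min_focal_length(20, 10, 10.8, 5.4, 2930, 5.632, 4.224, 2560, 1920)
print(round(f, 1))                      # close to the 12 mm lens of Sec. 2.3
print(round(angle_of_view(5.632, 12), 1))  # horizontal AoV with that lens
```

The same functions are reused implicitly throughout the example in Section 2.3.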

The third step of VSN design is the placement of the nodes, which ensures full coverage at the given lower altitude h_low. The placement is determined by calculating the distances d_x and d_y between two neighboring nodes, along the x and y directions, respectively. These distances can be calculated by the following equations:

d_x = h_low · s_h / f,     d_y = h_low · s_v / f     (3)

The fourth step of VSN design is to calculate the total number of nodes N (called the cost) required to cover a given area. The cost to cover a given area A = X × Y can be calculated by the following equation:

N = ⌈X / d_x⌉ · ⌈Y / d_y⌉     (4)

where X and Y are the side lengths of the area and d_x, d_y, h_low, s_h, s_v, and f are as defined above.
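Equations (3) and (4) can be sketched as below; the rectangular grid count in `vsn_cost` is a simplifying assumption (one node per d_x × d_y cell), and the 2086 mm lower altitude is the VSN1 value from the later example:

```python
from math import ceil

def node_spacing(h_low, s, f):
    """Ground footprint of one node at the lower altitude h_low;
    neighboring nodes are placed one footprint apart, eq. (3)."""
    return h_low * s / f

def vsn_cost(area_x, area_y, h_low, s_h, s_v, f):
    """Number of nodes to tile an area_x x area_y region, eq. (4),
    assuming a simple rectangular grid of nodes."""
    d_x = node_spacing(h_low, s_h, f)
    d_y = node_spacing(h_low, s_v, f)
    return ceil(area_x / d_x) * ceil(area_y / d_y)

# 5 Mp sensor (horizontal length 5.632 mm) with the 12 mm lens,
# full coverage required at 2086 mm
d_x = node_spacing(2086, 5.632, 12)
print(round(d_x / 10, 1))   # cm; matches the 97.9 cm spacing in Sec. 2.3
```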

By using the camera sensors given in Table 1 and equations (1) to (4), a VSN can be designed to cover a given area in the given altitude range. The desired VSN design objective is to increase the coverage of an area while decreasing its implementation cost. However, in actual practice, the cost of a VSN is greatly increased when the covered area is increased. For example, to cover an area of 1 km2 between altitude limits from 3000 to 5000 m above the ground, a homogeneous VSN requires a cost of 20 nodes. This cost calculation assumes a minimum resolution of 10 pixels per meter and a 14 Mp camera sensor. If the coverage of the 1 km2 area is increased by extending the altitude limits from 500 to 5000 m, the cost of the VSN increases to 694 nodes. In this example, the increase in the covered altitude range is about 2.25 times, but the corresponding increase in cost is about 35 times. This result indicates the need for a new technique which is able to increase the area coverage but at the same time decrease the implementation cost. In the following paragraphs the design of a heterogeneous VSN is presented, which fulfills this purpose.

2.2. Heterogeneous VSN Model

The input parameters to design a heterogeneous VSN include the area to be covered, the lower altitude h_low, the higher altitude h_high, and the size, w and l, of the object to be monitored. The VSN is designed with three sub-VSNs: VSN1, VSN2, and VSN3, as shown in Figure 2. VSN1 covers the topmost part of the given altitude range, VSN2 covers the middle part, and VSN3 covers the lower part; each sub-VSN is designed for its own lower and higher altitude limits.

Figure 2: Heterogeneous VSN model.

The given higher altitude h_high is treated as the higher altitude for VSN1. Suppose an image sensor S1, having parameters s_h1, s_v1, r_h1, and r_v1, is chosen to implement the nodes for VSN1 along with a lens of focal length f1. The lens focal length f1 must ensure the minimum required resolution for the given object when it moves at the altitude h_high. The value of f1 can be calculated by using (1) for sensor S1 and altitude h_high.

VSN2 is designed to cover an area between its higher altitude h_x and its lower altitude h_low2. Suppose an image sensor S2, having parameters s_h2, s_v2, r_h2, and r_v2, is chosen to implement the nodes for VSN2 along with a lens of focal length f2. The value of the altitude h_x, where the combination of the sensor S2 and the lens of focal length f2 can still provide the minimum required resolution for the given object, can be derived from (1) and is given below:

h_x = min( (f2 · w · r_h2) / (p_w · s_h2), (f2 · l · r_v2) / (p_l · s_v2) )     (5)

VSN2 will cover the altitude range from h_x and below, and VSN1 must ensure the coverage of the altitude range from h_high down to h_x. The higher altitude h_x for VSN2 thus serves as the lower altitude for VSN1, as shown in Figure 2. The value of h_x is used to calculate the distance d_x between VSN1 nodes by using (3), for focal length f1 and sensor S1. Suppose the same distance d_x is set between the nodes of VSN2, such that the nodes of VSN2 are placed at the midpoints between the nodes of VSN1. The value of the lower altitude h_low2 for VSN2 is calculated by solving the triangles connected to VSN1 and VSN2, and can be calculated by using the following equation:

h_low2 = d_x / (s_h1/f1 + s_h2/f2)     (6)

If the same camera sensor (s_h1 = s_h2 = s_h) is used for both VSN1 and VSN2, then the above equation reduces to the following equation:

h_low2 = (d_x · f1 · f2) / (s_h · (f1 + f2))     (7)
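Equations (5)–(7) can be checked numerically with the Section 2.3 values (5 Mp sensor, 12 mm and 8.5 mm lenses, ~20 pixels across the 10.8 mm wingspan assumed):

```python
def resolution_altitude(f, w, r_h, p_w, s_h):
    """Highest altitude at which a sensor/lens pair still yields p_w
    pixels across an object of width w; one term of eq. (5). In mm."""
    return f * w * r_h / (p_w * s_h)

def vsn2_lower_altitude(d_x, s1, f1, s2, f2):
    """Altitude below which VSN1 nodes and interleaved VSN2 nodes leave
    no coverage gap, eq. (6); with one sensor type this equals eq. (7)."""
    return d_x / (s1 / f1 + s2 / f2)

h_x = resolution_altitude(8.5, 10.8, 2560, 20, 5.632)       # mm
print(round(h_x / 10, 1))                                   # 208.6 cm
print(round(vsn2_lower_altitude(979, 5.632, 12, 5.632, 8.5) / 10, 1))  # 86.5 cm
```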

VSN3 is designed to cover the altitude range from h_low2 down to the given lower altitude h_low. The value h_low2 of VSN2 serves as the higher altitude h_y, and the given value of the lower altitude h_low serves as the lower altitude for VSN3, as shown in Figure 2. Suppose an image sensor S3, having parameters s_h3, s_v3, r_h3, and r_v3, is chosen to implement the nodes for VSN3 along with a lens of focal length f3. The lens focal length f3 must ensure the minimum required resolution for the given object when it moves at the altitude h_y. The value of f3 can be calculated by using (1) for sensor S3 and altitude h_y. To verify whether VSN3 has the same lower altitude as that given in the specification, the following equation can be used:

h_low = d_x / (s_h1/f1 + 2·s_h3/f3 + s_h2/f2)     (8)

where S1, S2, and S3 are the sensors and f1, f2, and f3 are the lens focal lengths for VSN1, VSN2, and VSN3, respectively.

The lower altitude value h_low can be used to calculate the placement of the VSN3 nodes with respect to the nodes of VSN1 and VSN2. The proper placement of VSN3 will ensure the full coverage of the area at the given lower altitude value. If d1 and d2 are the distances of a VSN3 node from its neighboring VSN1 and VSN2 nodes (Figure 6), respectively, their values can be calculated by using the following equations:

d1 = h_low · (s_h1/f1 + s_h3/f3) / 2     (9)

d2 = h_low · (s_h2/f2 + s_h3/f3) / 2     (10)
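Equations (8)–(10) can be checked in the same way; the sketch below assumes the Section 2.3 sensors and lenses (5 Mp with 12 mm and 8.5 mm, WVGA with 9.6 mm, d_x = 979 mm):

```python
def vsn3_lower_altitude(d_x, s1, f1, s2, f2, s3, f3):
    """Altitude at which a VSN3 node placed between a VSN1 and a VSN2
    node closes the remaining coverage gap, eq. (8). In mm."""
    return d_x / (s1 / f1 + 2 * s3 / f3 + s2 / f2)

def vsn3_offsets(h_low, s1, f1, s2, f2, s3, f3):
    """Distances of a VSN3 node from its VSN1 and VSN2 neighbors,
    eqs. (9) and (10): adjacent footprints must just touch."""
    d1 = h_low * (s1 / f1 + s3 / f3) / 2
    d2 = h_low * (s2 / f2 + s3 / f3) / 2
    return d1, d2

h_b = vsn3_lower_altitude(979, 5.632, 12, 5.632, 8.5, 4.512, 9.6)
print(round(h_b / 10, 1))      # ~47.3 cm, matching the specified h_low
```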

After describing the theory for designing homogeneous and heterogeneous VSNs, an example of a heterogeneous VSN is designed in the following paragraphs, using the concepts described above.

2.3. Example of Heterogeneous VSN

Suppose coverage of an area is required over an altitude range from 293 down to 47.3 cm. Thus, the values of h_high and h_low for the desired heterogeneous VSN are 293 and 47.3 cm, respectively. Suppose this VSN is designed to monitor an object having a length and width of 0.54 and 1.08 cm, respectively, shown in Figure 3. First, the design of the network VSN1 is presented. The higher altitude of VSN1 is equal to the given higher altitude h_high, 293 cm. Suppose the sensor S1 chosen to implement the nodes of this VSN is the 5 Mp sensor. The parameters of this sensor are given in Table 1. The value of f1 to be used with sensor S1 is calculated by using (1) and is found to be 12 mm.

Figure 3: Bird to be monitored by VSN.

Suppose the components chosen to implement the nodes for VSN2 are the 5 Mp image sensor and a lens with focal length 8.5 mm. The value of the altitude h_x, where this combination of sensor and lens can provide the minimum required resolution for the given object, is calculated by using (5) and is found to be 208.6 cm. This higher altitude value for VSN2 also serves as the lower altitude value for VSN1. VSN1 must ensure coverage of the altitude range from 293 to 208.6 cm. The value of 208.6 cm is used to calculate the distance d_x between VSN1 nodes, along the x side, by using (3). The values of focal length and sensor used to calculate d_x are the same as for VSN1, that is, the 12 mm focal length and the 5 Mp image sensor. The value of d_x is found to be 97.9 cm. Similarly, the value of d_y can be found along the y side. The same distance of 97.9 cm is set between the nodes of VSN2 along the x side, as discussed before. The lower altitude value h_low2 for VSN2 is calculated by using (7) and is found to be 86.5 cm.

After discussing the design of VSN2, the design of the network VSN3 is considered. This network is designed to cover the altitude range from h_low2 down to the given lower altitude h_low. The value h_low2 of VSN2 serves as the higher altitude h_y for VSN3 while the given value of the lower altitude serves as the lower altitude. VSN3 must ensure coverage of the altitude range from 86.5 to 47.3 cm. Suppose the WVGA image sensor is chosen to implement the nodes for VSN3. The value of f3 to be used with the WVGA sensor, to ensure the minimum required image resolution for the given object when it moves at altitude h_y, is calculated by using (1) and is found to be 9.6 mm. The placement of the VSN3 nodes with respect to the VSN1 and VSN2 nodes is finalized by calculating the distances d1 and d2, by using (9) and (10), which are found to be 22.19 and 26.76 cm, respectively.

The summary of the sub-VSNs parameters, discussed above, is given in Table 2.

Table 2: Summary of sub-VSNs.
2.4. Cost Reduction by Heterogeneous VSN

Suppose an area of 1000 cm by 1000 cm is required to be covered between altitude limits of 293 and 47.3 cm. The area is covered by using two different types of VSNs: a homogeneous VSN and a heterogeneous VSN. Suppose the 5 Mp image sensor is used to implement both types of VSNs. The cost required to implement the homogeneous VSN is calculated by using (4) and is found to be 793 nodes. In the case of the heterogeneous VSN, the cost required to implement VSN1 for the altitude range from 293 to 208.6 cm is 40 nodes, for VSN2 from 208.6 to 86.5 cm it is 117 nodes, and for VSN3 from 86.5 to 47.3 cm it is 69 nodes. The total cost required to implement the complete heterogeneous VSN is therefore 226 nodes. Comparing the costs of the homogeneous and heterogeneous VSNs, it is obvious that the heterogeneous VSN offers more than a 70% decrease in cost. Similar results can be verified by using the WVGA image sensor in place of the 5 Mp sensor. Although the WVGA sensor requires a higher cost to cover the same altitude ranges, the decrease in cost when using the heterogeneous VSN remains more than 70%.
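The node counts above give the claimed saving directly:

```python
# Cost comparison from Section 2.4: the homogeneous VSN needs 793 nodes;
# the heterogeneous VSN needs the sum of its three sub-VSN costs.
homogeneous = 793
heterogeneous = 40 + 117 + 69
saving = 1 - heterogeneous / homogeneous
print(heterogeneous)    # 226 nodes
print(f"{saving:.1%}")  # 71.5%, i.e., more than a 70% decrease in cost
```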

In the remainder of this paper, the heterogeneous VSN described in the above example is implemented by using actual cameras and optics.

3. Camera Models and Optics

The image sensors chosen to implement the monitoring nodes for the sub-VSNs in the above example are the 5 Mp and WVGA sensors. The 5 Mp image sensor MT9P031 is used in the camera model UI-5480CP-M from UEye, shown in Figure 4. The horizontal length s_h and the vertical length s_v of this sensor are 5.632 and 4.224 mm, respectively. Also, the horizontal resolution r_h and the vertical resolution r_v of this sensor are 2560 and 1920 pixels, respectively. These parameters are shown in Table 3. Power is applied to this camera module through a Hirose connector, while the data is received via an Ethernet link.

Table 3: Parameters of camera sensors.
Figure 4: Cameras and lenses used in VSN design.

The WVGA image sensor MT9V032 is used in the camera model UI-1220SE-M from UEye, shown in Figure 4. The horizontal length and the vertical length of this sensor are 4.512 and 2.880 mm, respectively. Also, the horizontal resolution and vertical resolution of this sensor are 752 and 480 pixels, respectively. These parameters are shown in Table 3. This camera uses a USB interface, which both supplies power and receives data from the camera module.

The cameras and the accompanying optics used to implement the heterogeneous VSN are shown in Figure 4. The combination of a camera and a lens of the relevant focal length forms a monitoring node. An individual laptop is used with each monitoring node as the data gathering and analysis platform. The 5 Mp camera module is connected to the laptop by a CAT 6 Ethernet cable, and power is supplied to it through a 6-pin Hirose connector. The camera acts as a client while the laptop is used as the server. The WVGA camera is connected to the laptop by a USB cable. Power to a camera node and its laptop is supplied by a 12 V rechargeable battery. A complete node is shown in Figure 5. The images of the bird are captured with the UEye cockpit software, the camera interface used to capture images/videos and to change the controlling parameters of the camera.

Figure 5: A complete camera node.
Figure 6: VSN model.

4. Coverage Model

The camera coverage model based on all the above design parameters is shown in Figure 6. The model contains five altitude lines: A, B, C, D, and E. The altitude line A represents the ground plane, and the monitoring nodes are placed on this line. This line contains seven points, A1 to A7. The camera nodes of VSN1 are placed at points A2 and A6, the camera node of VSN2 is placed at point A4, and the camera nodes of VSN3 are placed at points A3 and A5. The second altitude line is B, which contains eight points, B1 to B8. The network VSN3 provides full coverage on this line by making intersections with VSN1 and VSN2. The third altitude line is C, which contains ten points, C1 to C10. VSN2 provides full coverage on this line by making intersections with VSN1. This altitude line also acts as the highest altitude boundary for VSN3, because VSN3 is not able to provide the required resolution above this altitude. The fourth altitude line is D, which contains seven points, D1 to D7. This altitude line acts as the highest altitude boundary for VSN2; VSN2 is not able to provide the required resolution above this altitude. The last altitude line is E, which contains four points, E1 to E4. This altitude line represents the highest altitude for VSN1, as well as for the entire heterogeneous VSN, at which an object can be monitored with the required minimum resolution.

To perform coverage and resolution measurements, it is necessary to find the altitudes of the lines above the ground and the distances of the points from the edge of the relevant lines. The altitude line A represents the ground, so it has altitude 0 cm. The altitude line B is the lower altitude for VSN3; it is at an altitude of 47.3 cm above the ground, as calculated before. The altitude line C is the higher altitude for VSN3 and the lower altitude for VSN2; thus, it is at an altitude of 86.5 cm above the ground. The altitude line D is the higher altitude for VSN2 and the lower altitude for VSN1; thus, it is at an altitude of 208.6 cm above the ground. The altitude line E represents the higher altitude for VSN1; thus, it is at an altitude of 293 cm above the ground. All these altitudes are summarized in Figure 7.

Figure 7: VSN altitudes.

After finding the altitudes of the lines, the distances of the points from the edge of the respective altitude lines are calculated. For altitude line A, the distance of point A2 from point A1 can be calculated by solving the triangle in Figure 6; it is found to be 68.8 cm. The distance between points A2 and A6 is d_x, which is 97.9 cm. The distance of point A6 from point A1 is calculated by adding the distance of point A2 from A1 and the distance of point A6 from A2 and is found to be 166.6 cm. The distance of point A7 from A1 is 235.4 cm. The point A4 is d_x/2 away from A2; thus, the point A4 is 117.7 cm away from A1. The point A3 is at a distance of 22.19 cm from A2, and the point A5 is at the same 22.19 cm distance from A6. Thus, the points A3 and A5 are at distances of 91.0 and 144.5 cm from A1, respectively.

Similarly, the distances of the points on the other altitude lines are calculated. For altitude line B, the distances of the points B2, B3, B4, B5, B6, B7, and B8 from B1 are 57.7, 79.9, 102.1, 133.3, 155.5, 177.7, and 235.4 cm, respectively. For altitude line C, the distances of the points C2, C3, C4, C5, C6, C7, C8, C9, and C10 from C1 are 48.5, 70.7, 89.1, 111.3, 124.2, 146.3, 164.8, 186.9, and 235.4 cm, respectively. For altitude line D, the distances of the points D2, D3, D4, D5, D6, and D7 from D1 are 19.8, 48.6, 117.7, 186.8, 215.6, and 235.4 cm, respectively. For altitude line E, the distances of the points E2, E3, and E4 from E1 are 97.9, 137.5, and 235.4 cm, respectively. The values of all these points are given in Table 4.

Table 4: Values of the points in VSN model.
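The node positions on the ground line can be reproduced from the spacing d_x and the offset d1 alone; the small differences from the values above come from rounded inputs:

```python
# Positions of the node points on the ground line (altitude line A),
# measured from the edge point A1, using the Section 2.3 values (cm).
d_x = 97.9          # spacing between the two VSN1 nodes
d1 = 22.19          # offset of a VSN3 node from its VSN1 neighbor
a2 = 68.8           # first VSN1 node (solved from the Figure 6 triangle)
a6 = a2 + d_x       # second VSN1 node
a4 = a2 + d_x / 2   # VSN2 node, midway between the VSN1 nodes
a3 = a2 + d1        # first VSN3 node
a5 = a6 - d1        # second VSN3 node
print(round(a6, 1), round(a4, 2), round(a3, 2), round(a5, 2))
# close to the 166.6, 117.7, 91.0, and 144.5 cm values listed above
```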

5. Measurements of VSN Parameters

Two important parameters of a VSN which must be ensured are full area coverage at the given lower altitude and the minimum required image resolution for an object moving at the given higher altitude. The remaining part of the paper describes the measurements performed to verify these parameters. The measurements are performed in a room. To measure all the distances, a 240 cm long scale is constructed on a long strip of paper, with markings after every 5 cm. The scale is glued to a wall of the room, and the wall is treated as the sky for this experiment. A bird of size 1.08 × 0.54 cm, shown in Figure 3, is printed on cardboard and is used to measure the minimum required image resolution.

5.1. Coverage Measurements

This section measures the coverage at the lower altitude of each sub-VSN to verify whether the network provides full coverage. The measurements are performed for altitude lines D, C, and B, which are the lower altitudes for the networks VSN1, VSN2, and VSN3, respectively. For the coverage measurement, images are captured at the respective altitude line with the concerned camera nodes. The coverage of each node is measured and compared with the calculated coverage for that node to verify whether the nodes cover the complete required area or there is a gap in the coverage.

5.1.1. Altitude D

Altitude line D is the lower altitude of VSN1, where full coverage of this network is assumed. For clearer visualization of this altitude, a separate model extracted from the heterogeneous VSN model (Figure 6) is shown in Figure 8(a). To perform measurements for coverage verification at the lower altitude of VSN1, the wall is treated as altitude line D. The ground line of length 235.4 cm is drawn parallel to the wall, 208.6 cm away from it. The points A2 and A6 are marked at distances of 68.8 cm and 166.6 cm from the edge of the line, according to Figure 8(a). Node 1 is placed at point A2 while node 2 is placed at point A6. Both nodes use similar 5 Mp image sensors and lenses of focal length 12 mm, as discussed before. The complete setup is shown in Figure 9. The calculated distances expected to be covered by nodes 1 and 2 are from point D2 (19.8 cm) to D4 (117.7 cm) and from point D4 (117.7 cm) to D6 (215.6 cm), respectively. Images are captured with both monitoring nodes, and pixels are counted to measure the distances covered by these nodes.

Figure 8: VSN models, (a) VSN1, (b) VSN2, and (c) VSN3.
Figure 9: VSN installation.

The image captured by node 1 is shown in Figure 10(a). The pixel measurements in this figure show that about 131 pixels represent a 5 cm distance. There are about 145 pixels from the left edge of the figure to the 25 cm mark, which represents a distance of about 5.5 cm. Thus, point D2 has a value of 19.5 cm. Similarly, there are about 84 pixels from the right edge of the figure to the 115 cm mark, which represents a distance of about 3.2 cm. Thus, point D4 has a value of 118.2 cm.

Figure 10: VSN measurements at different altitudes with different nodes: (a–c) VSN1: altitude D, node 1; altitude D, node 2; and altitude E, node 1; (d–f) VSN2: altitude C, node 1; altitude C, node 2; and altitude D, node 1; (g–j) VSN3: altitude B, nodes 1, 3, and 2; and altitude C, node 1.

The image captured by node 2 is shown in Figure 10(b). The pixel measurements in this figure show that about 131 pixels represent a 5 cm distance. There are about 66 pixels from the left edge of the figure to the 120 cm mark, which represents a distance of about 2.5 cm. Thus, point D4 has a value of 117.5 cm. Similarly, there are about 146 pixels from the right edge of the figure to the 210 cm mark, which represents a distance of about 5.6 cm. Thus, point D6 has a value of 215.6 cm. All these values are shown in Table 5.

Table 5: Coverage measurements details.
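The pixel-to-distance conversion used in these measurements is a simple proportion against the 5 cm scale markings; a sketch (the function name is illustrative, not from the paper):

```python
def edge_position(ref_mark_cm, px_to_mark, px_per_5cm, left_edge=True):
    """Convert a pixel count between an image edge and the nearest
    scale marking into the ground position (cm) of that edge.
    For the left edge the marking lies inside the image, so the edge
    sits before the marking; for the right edge it sits after it."""
    offset_cm = px_to_mark / px_per_5cm * 5.0
    return ref_mark_cm - offset_cm if left_edge else ref_mark_cm + offset_cm

# Node 1 at altitude D: 145 px from the left edge to the 25 cm mark,
# at a scale of 131 px per 5 cm
print(round(edge_position(25, 145, 131), 1))                    # 19.5 (point D2)
print(round(edge_position(115, 84, 131, left_edge=False), 1))   # 118.2 (point D4)
```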
5.1.2. Altitude C

Altitude line C is the lower altitude of VSN2, where full coverage of this network is assumed. For clearer visualization of this altitude, a separate model extracted from the heterogeneous VSN model is shown in Figure 8(b). To perform measurements for coverage verification at the lower altitude of VSN2, the wall is treated as altitude line C. The ground line of length 235.4 cm is drawn parallel to the wall at a distance of 86.5 cm. The points A2 and A4 are marked at distances of 68.8 and 117.7 cm from the edge of the line, according to Figure 8(b). Camera node 1 is placed at point A2 while camera node 2 is placed at point A4. Both camera nodes 1 and 2 have similar 5 Mp image sensors but different lens focal lengths, 12 mm and 8.5 mm, respectively. The calculated distances expected to be covered by nodes 1 and 2 are from point C2 (48.5 cm) to C4 (89.1 cm) and from point C4 (89.1 cm) to C7 (146.3 cm), respectively. Images are captured with both nodes, and pixels are counted to measure the distances covered by these nodes.

The image captured by node 1 is shown in Figure 10(d). The pixel measurements in this figure show that about 317 pixels represent a 5 cm distance. There are about 108 pixels from the left edge of the figure to the 50 cm mark, which represents a distance of about 1.7 cm. Thus, point C2 has a value of 48.3 cm. Similarly, there are about 279 pixels from the right edge of the figure to the 85 cm mark, which represents a distance of about 4.4 cm. Thus, point C4 has a value of 89.4 cm.

The image captured by node 2 is shown in Figure 10(e). The pixel measurements in this figure show that about 220 pixels represent a 5 cm distance. There are about 264 pixels from the left edge of the figure to the 95 cm mark, which represents a distance of about 6.0 cm. Thus, point C4 has a value of 89.0 cm. Similarly, there are about 286 pixels from the right edge of the figure to the 140 cm mark, which represents a distance of about 6.5 cm. Thus, point C7 has a value of 146.5 cm. All these values are given in Table 5.

5.1.3. Altitude B

Altitude B is the lower altitude of VSN3, at which full coverage of this network is assumed. For clearer visualization of this altitude, a separate model, extracted from the heterogeneous VSN model, is shown in Figure 8(c). To perform measurements for coverage verification at the lower altitude of VSN3, the wall is treated as altitude line B. A ground line of length 235.4 cm is drawn parallel to the wall at a distance of 47.3 cm from the wall. Three points are marked on this line at distances of 68.8, 91.0, and 117.7 cm from its edge, according to Figure 8(c). Camera node 1 is fixed at the 68.8 cm point, camera node 3 at the 91.0 cm point, and camera node 2 at the 117.7 cm point. Camera nodes 1 and 2 have the same 5 Mp image sensor but different lens focal lengths, 12 mm and 8.5 mm, respectively. Camera node 3 has a WVGA sensor and a lens of focal length 9.6 mm. The calculated intervals expected to be covered by nodes 1, 3, and 2 run from 57.7 cm to 79.9 cm, from 79.9 cm to 102.1 cm, and from 102.1 cm to 133.3 cm, respectively.

The image captured by node 1 is shown in Figure 10(g). The pixel measurements in this figure show that about 573 pixels represent a 5 cm distance. There are about 276 pixels from the left edge of the figure to the 60 cm point, which corresponds to about 2.4 cm. Thus, the left boundary of the covered interval lies at 57.6 cm. Similarly, about 574 pixels from the right edge of the figure to the 75 cm point correspond to about 5.0 cm, so the right boundary lies at 80.0 cm.

The image captured by node 3 is shown in Figure 10(h). The pixel measurements in this figure show that about 149 pixels represent a 5 cm distance. About 9 pixels from the left edge of the figure to the 80 cm point correspond to about 0.3 cm, so the left boundary of the covered interval lies at 79.7 cm. Similarly, about 66 pixels from the right edge of the figure to the 100 cm point correspond to about 2.2 cm, so the right boundary lies at 102.2 cm.

The image captured by node 2 is shown in Figure 10(i). The pixel measurements in this figure show that about 403 pixels represent a 5 cm distance. About 250 pixels from the left edge of the figure to the 105 cm point correspond to about 3.1 cm, so the left boundary of the covered interval lies at 101.9 cm. Similarly, about 290 pixels from the right edge of the figure to the 130 cm point correspond to about 3.6 cm, so the right boundary lies at 133.6 cm. All these values are given in Table 5.

5.2. Resolution Measurements

After measuring the coverage at the lower altitude of each sub-VSN, the minimum achieved image resolution is measured for the given object moving at the higher altitude of the respective sub-VSN, to verify that this resolution fulfills the minimum resolution criterion. The measurements are performed for altitudes E, D, and C, which are the higher altitudes of VSN1, VSN2, and VSN3, respectively. For resolution measurement, images are captured at the respective altitude line with the relevant camera nodes, and the resolution of the object image obtained by each node is measured for verification.

5.2.1. Altitude E

Altitude E is the higher altitude of VSN1 (Figure 8(a)), at which the minimum required object image resolution is assumed. To perform measurements for resolution verification for VSN1, the wall is treated as altitude line E and the bird image of Figure 3 is fixed on the wall. A ground line of length 235.4 cm is drawn parallel to the wall at a distance of 293 cm from the wall. Nodes 1 and 2 are placed in the same way as for the coverage measurement, and images are captured by these nodes. One such image, captured by node 1, is shown in Figure 10(c). For clearer observation, the small portion of Figure 10(c) that contains the bird image is shown magnified in Figure 11(a). The bird's pixels are counted across its width using segmentation, giving 19 pixels, which is very close to the calculated value of 20.
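The calculated object resolution can be sketched from the same pinhole model: the object spans a fraction of the camera footprint, and the same fraction of the sensor's horizontal pixel count. In the sketch below, the 12 mm lens and the 293 cm wall distance are from the text; the sensor width, horizontal pixel count, and object width are all assumptions chosen for illustration.

```python
def object_pixels(object_width_cm, distance_cm, focal_mm,
                  sensor_width_mm, sensor_pixels_h):
    """Pixels spanned by an object of a given width at a given distance,
    under a pinhole camera model."""
    # Footprint width on the object plane: distance * sensor_width / focal.
    footprint_cm = distance_cm * sensor_width_mm / focal_mm
    return sensor_pixels_h * object_width_cm / footprint_cm

# Assumed values: 5.63 mm sensor width, 2592 horizontal pixels (typical for
# a 5 Mp sensor), and ~1.06 cm object width; none are stated in the paper.
px = object_pixels(object_width_cm=1.06, distance_cm=293.0,
                   focal_mm=12.0, sensor_width_mm=5.63, sensor_pixels_h=2592)
```

With these assumptions the result lands near the paper's calculated value of 20 pixels, illustrating the form of the criterion rather than reproducing the authors' exact inputs.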

Figure 11: Bird resolution at the highest altitude: (a) VSN1, (b) VSN2, and (c) VSN3.
5.2.2. Altitude D

Altitude D is the higher altitude of VSN2 (Figure 8(b)), at which the minimum required object image resolution is assumed. To perform measurements for resolution verification for VSN2, the wall is treated as altitude line D. A ground line of length 235.4 cm is drawn parallel to the wall at a distance of 208.6 cm from the wall. Nodes 1 and 2 are placed in the same way as for the coverage measurement, and images are captured by these nodes. One such image, captured by node 2, is shown in Figure 10(f). For clearer observation, the small portion of Figure 10(f) that contains the bird image is shown magnified in Figure 11(b). The bird's pixels are counted across its width using segmentation, giving 19 pixels, which is very close to the calculated value of 20.

5.2.3. Altitude C

Altitude C is the higher altitude of VSN3 (Figure 8(c)), at which the minimum required object image resolution is assumed. To perform measurements for resolution verification for VSN3, the wall is treated as altitude line C. A ground line of length 235.4 cm is drawn parallel to the wall at a distance of 86.5 cm from the wall. Nodes 1, 3, and 2 are placed in the same way as for the coverage measurement, and images are captured by these nodes. One such image, captured by node 3, is shown in Figure 10(j). For clearer observation, the small portion of Figure 10(j) that contains the bird image is shown magnified in Figure 11(c). The bird's pixels are counted across its width using segmentation, giving 19 pixels, which is very close to the calculated value of 20.

6. Results

The coverage measurement results in Table 5 show that the measured values for the different points agree with the values calculated by the heterogeneous VSN model; in fact, the measured coverage is slightly broader than calculated. For example, the first two rows of Table 5, which give the range covered by node 1, show that the calculated start of coverage is 19.8 cm while the measured value is 19.5 cm, so the node begins providing coverage before the calculated boundary. Similarly, the calculated end of coverage is 117.7 cm while the measured value is 118.2 cm, so the node continues providing coverage beyond the calculated boundary. The node thus provides reliable coverage of the required area, and the same holds for the other values in the table. Moreover, the resolution measurement results also agree with the calculated values. The implementation cost comparison shows that the heterogeneous VSN provides more than a 70% decrease in cost compared to a homogeneous VSN.
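The reliability argument above reduces to a simple interval check: a node's coverage is reliable when the measured interval contains the calculated one. The function name is illustrative; the numbers are those quoted from the first two rows of Table 5.

```python
def covers_reliably(calc_start, calc_end, meas_start, meas_end):
    """True if the measured coverage interval contains the calculated one,
    i.e. coverage begins before and ends after the calculated boundaries."""
    return meas_start <= calc_start and meas_end >= calc_end

# Node 1: calculated 19.8-117.7 cm, measured 19.5-118.2 cm.
ok = covers_reliably(19.8, 117.7, 19.5, 118.2)  # True
```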

7. Conclusion

This paper discusses the design of a heterogeneous VSN for the surveillance of a volume between two altitude limits. Images are captured and measurements are performed to verify full coverage and the minimum achieved resolution at the lower and higher altitudes, respectively. The measurements verify that the heterogeneous VSN provides full volume coverage between the given altitude limits. The core advantage of the heterogeneous VSN over a homogeneous VSN is the substantial cost reduction, more than 70% for the given area.

References

  1. K. Obraczka, R. Manduchi, and J. Garcia-Luna-Aceves, “Managing the information flow in visual sensor networks,” in Proceedings of the 5th International Symposium on Wireless Personal Multimedia Communication, 2002.
  2. H. Medeiros, J. Park, and A. Kak, “A light-weight event-driven protocol for sensor clustering in wireless camera networks,” in Proceedings of the 1st ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC '07), pp. 203–210, Vienna, Austria, September 2007.
  3. S. Soro and W. Heinzelman, “A survey of visual sensor networks,” Advances in Multimedia, vol. 2009, Article ID 640386, 21 pages, 2009.
  4. Y. Charfi, N. Wakamiya, and M. Murata, “Challenging issues in visual sensor networks,” IEEE Wireless Communications, vol. 16, no. 2, pp. 44–49, 2009.
  5. M. Chitnis, Y. Liang, J. Y. Zheng, P. Pagano, and G. Lipari, “Wireless line sensor network for distributed visual surveillance,” in Proceedings of the 6th ACM International Symposium on Performance Evaluation of Wireless Ad-Hoc, Sensor, and Ubiquitous Networks (PE-WASUN '09), pp. 71–78, October 2009.
  6. Y.-C. Tseng, Y.-C. Wang, K.-Y. Cheng, and Y.-Y. Hsieh, “iMouse: an integrated mobile surveillance and wireless sensor system,” Computer, vol. 40, no. 6, pp. 60–66, 2007.
  7. T. Brodsky, R. Cohen, E. Cohen-Solal, et al., “Visual surveillance in retail stores and in the home,” in Advanced Video-Based Surveillance Systems, pp. 50–61, Kluwer Academic, Boston, Mass, USA, 2001.
  8. G. L. Foresti, C. Micheloni, L. Snidaro, P. Remagnino, and T. Ellis, “Active video-based surveillance system,” IEEE Signal Processing Magazine, vol. 22, no. 2, pp. 25–37, 2005.
  9. P. Kulkarni, D. Ganesan, and P. Shenoy, “The case for multi-tier camera sensor networks,” in Proceedings of the 15th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV '05), pp. 141–146, June 2005.
  10. D. Beymer, P. McLauchlan, B. Coifman, and J. Malik, “Real-time computer vision system for measuring traffic parameters,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 495–501, June 1997.
  11. K. H. Lim, L. M. Ang, K. P. Seng, and S. W. Chin, “Lane-vehicle detection and tracking,” in Proceedings of the International MultiConference of Engineers and Computer Scientists, vol. 2, 2009.
  12. J. M. Ferryman, S. J. Maybank, and A. D. Worrall, “Visual surveillance for moving vehicles,” in Proceedings of the IEEE Workshop on Visual Surveillance (ICCV '98), pp. 73–80, 1998.
  13. D. Koller, J. Weber, and J. Malik, “Robust multiple car tracking with occlusion reasoning,” in Proceedings of the European Conference on Computer Vision, pp. 189–196, 1994.
  14. D. Koller, J. Weber, T. Huang et al., “Towards robust automatic traffic scene analysis in real-time,” in Proceedings of the International Conference on Pattern Recognition, 1994.
  15. M. Xu, J. Orwell, L. Lowey, and D. Thirde, “Architecture and algorithms for tracking football players with multiple cameras,” IEE Proceedings—Vision, Image and Signal Processing, vol. 152, no. 2, pp. 232–241, 2005.
  16. C. J. Needham and R. D. Boyle, “Tracking multiple sports players through occlusion, congestion and scale,” in Proceedings of the British Machine Vision Conference, vol. 1, no. 2, pp. 93–102, 2001.
  17. R. Kays, S. Tilak, B. Kranstauber et al., “Camera traps as sensor networks for monitoring animal communities,” International Journal of Research and Review in Wireless Sensor Networks, vol. 1, pp. 19–29, 2011.
  18. S. Uchiyama, H. Yamamoto, M. Yamamoto, K. Nakamura, and K. Yamazaki, “Sensor network for observation of seabirds in Awashima island,” in Proceedings of the International Conference on Information Networking (ICOIN '11), pp. 64–67, Barcelona, Spain, January 2011.
  19. C. Micheloni, G. L. Foresti, and L. Snidaro, “A co-operative multicamera system for video-surveillance of parking lots,” in Proceedings of the IEE Workshop on Intelligent Distributed Surveillance Systems, pp. 21–24, 2003.
  20. J. Campbell, P. B. Gibbons, and S. Nath, “IrisNet: an internet-scale architecture for multimedia sensors,” in Proceedings of the ACM Multimedia, 2005.
  21. J. Krumm, S. Harris, B. Meyers, B. Brumitt, M. Hale, and S. Shafer, “Multi-camera multi-person tracking for EasyLiving,” in Proceedings of the 3rd IEEE International Workshop on Visual Surveillance, pp. 3–10, 2000.
  22. J. Wang and B. Yan, “A framework of heterogeneous real-time video surveillance network management system,” in Proceedings of the International Conference on Information Technology, Computer Engineering and Management Sciences (ICM '11), pp. 218–221, Nanjing, China, September 2011.
  23. P. Kulkarni, D. Ganesan, and P. Shenoy, “The case for multi-tier camera sensor networks,” in Proceedings of the 15th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV '05), pp. 141–146, June 2005.
  24. H. Aghajan and A. Cavallaro, Multi-Camera Networks, Principles and Applications, Academic Press, 2009.
  25. E. Hörster and R. Lienhart, “Approximating optimal visual sensor placement,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '06), pp. 1257–1260, Toronto, Canada, July 2006.
  26. E. Hörster and R. Lienhart, “On the optimal placement of multiple visual sensors,” in Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, 2006.
  27. N. Ahmad, N. Lawal, M. O'Nils, B. Oelmann, M. Imran, and K. Khursheed, “Model and placement optimization of a sky surveillance visual sensor network,” in Proceedings of the 6th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA '11), pp. 357–362, Barcelona, Spain, October 2011.