Journal of Electrical and Computer Engineering
Volume 2016, Article ID 9312439, 14 pages
http://dx.doi.org/10.1155/2016/9312439
Research Article

Assessing Availability in Wireless Visual Sensor Networks Based on Targets’ Perimeters Coverage

1State University of Feira de Santana, Feira de Santana, BA, Brazil
2University of the Bío-Bío, Concepción, Chile

Received 2 July 2016; Accepted 20 September 2016

Academic Editor: Stefano Basagni

Copyright © 2016 Daniel G. Costa and Cristian Duran-Faundez. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Availability in wireless visual sensor networks is a major design issue that is directly related to the monitoring quality of applications. For target monitoring, visual sensors may be deployed to cover most or all targets, and monitoring quality may be focused on how well a set of targets is being covered. However, targets may have different dimensions, and large targets may be only partially viewed by source nodes, which may affect coverage quality and lead to a state of unavailability. In this context, this article analyzes the effect of targets’ sizes on effective coverage in wireless visual sensor networks. A new coverage metric, the Effective Target Viewing (ETV), is proposed to measure monitoring quality over a set of targets and is exploited as a fundamental parameter for availability assessment. Results show that ETV can be used as a practical coverage metric when assessing availability in wireless visual sensor networks.

1. Introduction

An increasing demand for autonomous surveillance and control applications has fostered the development of new monitoring technologies, placing sensor networks in a central position. Many sensing applications in military, industrial, residential, health care, and smart city scenarios may be designed exploiting the flexibility of sensor networks [1]. In those networks, when sensor nodes are equipped with a low-power camera, visual information can be retrieved from the monitored field [2, 3], opening new opportunities for monitoring in Internet of Things scenarios. In general, image snapshots, infrared images, and video streams with different coding qualities and resolutions can provide valuable information for countless monitoring applications.

In general, visual sensors have a viewing orientation, and thus a directional sensing model can be defined. Unlike scalar sensors, which are designed to retrieve scalar data such as temperature, pressure, and humidity, visual sensors may view distant or close objects or scenes according to their Field of View (FoV) [4, 5]. For target monitoring, satisfactory sensing coverage is achieved when one or more targets are being viewed by deployed sensors, which means that they are partially or completely inside the area defined by the sensors’ FoV.

In practice, targets may have different dimensions, potentially impacting target monitoring quality. While small targets may be more likely to be fully viewed, large targets may not be satisfactorily covered by deployed visual sensors. When covering a set of targets, it is usually required that every target be viewed by at least one visual sensor, but some parts of targets may remain unviewed. For some applications, targets have to be viewed from all possible perspectives, and monitoring quality should account for all covered perspectives. As an example, visual sensors may view the front or back side of a target, providing different information for monitoring applications. For another group of applications, however, viewing perspectives may not be an issue, as long as enough parts of the targets are being viewed.

A system can be assumed available when the expected services can be provided when requested. While some network environments can tolerate states of unavailability, critical monitoring applications may be severely impaired by them. Therefore, a central issue in Wireless Visual Sensor Networks (WVSN) is availability assessment, since we want to determine whether a particular application can be assumed available over time. Generally, availability will be affected by hardware and coverage failures, but different availability metrics, concerned with different availability issues, may be defined to support the overall process of availability assessment [6].

Frequently, visual sensors may be deployed over a region of interest with many fixed or moving targets, where source nodes may view more than one target at a time. In this scenario, it is worth estimating the coverage quality for different configurations of visual sensors, potentially supporting efficient design and deployment of visual sensor networks. Evaluating the effect of different target parameters on visual sensing coverage may then be beneficial for WVSN. Particularly, assessing availability for the monitoring of small or large targets may be of paramount importance, especially for critical applications such as automatic traffic control, industrial automation, public security, and rescue operations, just to cite a few.

This article addresses the problem of availability assessment in wireless visual sensor networks. For that, a geometrical model is defined to compute target viewing by visual sensors, for targets of any size modelled as circumferences. Based on this model, a new coverage metric is defined to compute the viewed perimeter of targets, referred to as the Effective Target Viewing (ETV). This metric indicates the average percentage of the viewed perimeter over all considered targets. Monitoring availability can then be assessed based on ETV, along with the monitoring requirements of applications, directly indicating whether an application may be assumed available or not. To the best of our knowledge, the contributions of this article have not been proposed before.

The remainder of this article is organized as follows. Section 2 presents some related works. Section 3 brings the statements and definitions of targets coverage. The proposed coverage metric and availability assessment approach are defined in Section 4. Section 5 presents numerical results, followed by conclusions and references.

2. Related Works

For wireless visual sensor networks, monitoring applications may require that a minimum number of targets be viewed. The monitoring quality may then be associated with a percentage of coverage, which might guide deployment [7] and coverage optimization algorithms [8, 9]. From a different perspective, target viewing may be related to network availability [6], exploiting visual sensing redundancy to compensate for failures in sensor nodes. Actually, sensing redundancy in WVSN is not straightforward, and there are relevant issues that should be properly considered [6, 10], as the perception of redundancy depends on applications’ monitoring requirements [11]. Target viewing may also be maximized when adjustable visual sensors are deployed, and the monitoring quality will be a function of visual redundancy over targets [12]. In all these cases, target viewing may be performed in different ways and with different objectives in wireless visual sensor networks.

Efficient sensing coverage is deeply related to the way sensors are deployed. In deterministic deployment, sensors are carefully placed to achieve optimized coverage, and many works have been concerned with minimizing the number of sensors required to cover a monitored field [13, 14]. On the other hand, for many monitoring scenarios, sensors are expected to be randomly deployed, bringing particular coverage problems [3, 9]. In general, node placement optimization is a relevant problem for scalar and visual sensor networks [4, 15, 16].

In general, visual sensors will be deployed for area, target, or barrier coverage [17]. After random deployment, camera-enabled sensors may be scattered over a monitored field, with unpredictable positions and orientations. For such sensors, coverage metrics are desired when assessing the sensing quality of wireless sensor networks. The work in [18] proposes a metric to measure the coverage quality of wireless visual sensor networks, computing the probability of a randomly deployed network being k-covered, where every point is covered by at least k sensors. For higher values of k, more visual sensors will be viewing the same area of a monitored field. In a different way, a metric is proposed in [19] to compute the coverage quality for target sensing. The impact of sensor deployment on visual sensing coverage is discussed in [7]. In [4], different issues for coverage estimation and enhancement are addressed.

When sensors may adjust the viewed area, sensing coverage may be optimized [20, 21]. The work in [22] computes an optimal configuration for visual sensors with changeable orientations, where visual coverage is based on the definition of nondisjoint cover sets. The work in [12] adjusts the sensors’ FoV to optimize the network coverage, achieving maximized viewing of a monitored field: sensors are reconfigured to increase sensing redundancy over defined targets. Optimal coverage is a relevant problem that has driven many research efforts in wireless visual sensor networks, but visual monitoring availability is also concerned with other relevant issues in these networks.

A core element of availability is sensing redundancy. In general, sensing redundancy is based on the overlapping of sensing areas, but the way such overlapping is considered when defining redundancy will depend on the monitoring requirements of applications [6, 10]. Sensing redundancy may be exploited to extend the network lifetime, when redundant nodes are deactivated, but redundancy selection is still a challenging issue in wireless visual sensor networks. In [23], algorithms for redundancy selection in WVSN were proposed. In a similar way, the work in [24] also addressed redundancy selection for availability enhancement, but it considers the targets’ perspectives when defining whether sensors viewing the same target can be assumed redundant. Sensing redundancy is also exploited in [25] when assessing availability for target coverage.

Besides redundancy, availability may also be concerned with the way targets are being viewed. Different parts of targets’ contours may have different relevance for applications. The work in [26] assigns source priorities to cameras according to the viewed parts of targets. In a different way, for large targets, it may be desired that the entire perimeter of each target be viewed by a set of cameras, as proposed in [27, 28]. In those works, scalar sensors (with circular sensing areas) are considered to cover targets, and the network is optimized to find the minimum number of sensors that cover the targets’ perimeters. Table 1 summarizes the discussed papers, classifying them according to their contributions to visual coverage and to availability enhancement and assessment.

Table 1: Visual sensing coverage in wireless sensor networks.

Previous works have addressed the problem of target coverage from different perspectives, for scalar and visual sensor networks, and some of them brought contributions to targets’ perimeters coverage. However, availability assessment for target coverage is still an open issue, especially for the monitoring of large targets, fostering the definition of new availability assessment metrics.

3. Targets’ Perimeters Coverage

Visual sensors may be deployed for different tasks in a large set of monitoring and control applications. Such sensors may be expected to retrieve visual information of targets or scenes, with different particularities. For the case of target viewing, fundamental concepts have to be defined to allow proper modelling, as discussed in this section.

3.1. Sensors’ Field of View

A typical wireless visual sensor network may be composed of scalar sensors, visual sensors, actuators, and sinks. For visual monitoring tasks, one must be concerned with visual sensors and the way they view a monitored field.

In general, it is expected that a WVSN will be composed of n visual sensors, which may be randomly or deterministically deployed over an area of interest. Each sensor s_j, j = 1, …, n, has a location (x_j, y_j) for 2D modelling. For randomly deployed sensors, their locations after deployment may be discovered using some localization mechanism [22]. Whatever the case, it is assumed herein that sensors are static and their configurations do not change after deployment, but the proposed approach is also valid for dynamic networks.

Each visual sensor is expected to be equipped with a low-power camera, with a viewing angle α and an orientation θ. The embedded camera also defines a sensing radius R that is an approximation of the camera’s Depth of Field (DoF) [3], which is the area between the nearest and farthest points that can be sharply sensed. For simplification, the Field of View of any visual sensor is defined as the area of an isosceles triangle composed of three vertices, A, B, and C. Vertex A is assumed to be the visual sensor position [18], (x_j, y_j), while the other vertices are computed considering the values of α, θ, and R.

Figure 1 shows a graphical representation of a typical sensor’s FoV.

Figure 1: Field of View of a visual sensor.

One can compute the area of any sensor’s FoV, as expressed in (1), whenever the sensing parameters of the camera are known.

Basic formulations of trigonometry are used to compute vertices B and C for any sensor s_j, as expressed in (2).
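As an illustration, the trigonometric vertex computation described above can be sketched as follows. This is a minimal sketch under the assumption that R is the length of the triangle’s two equal sides (A–B and A–C) and that angles are given in radians; the function name is hypothetical and the paper’s exact equation is not reproduced here.

```python
import math

def fov_vertices(xj, yj, theta, alpha, R):
    """Vertices B and C of the isosceles FoV triangle of a sensor at (xj, yj).

    theta is the orientation of the FoV's axis, alpha the viewing angle,
    and R the assumed length of the two equal sides, all in radians/meters.
    """
    # B and C lie at angular offsets of -alpha/2 and +alpha/2 from the axis.
    bx = xj + R * math.cos(theta - alpha / 2.0)
    by = yj + R * math.sin(theta - alpha / 2.0)
    cx = xj + R * math.cos(theta + alpha / 2.0)
    cy = yj + R * math.sin(theta + alpha / 2.0)
    return (bx, by), (cx, cy)
```

For example, a sensor at the origin oriented along the positive x-axis with a 90° viewing angle has its B and C vertices at 45° below and above that axis.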

3.2. Defining Targets

When wireless visual sensor networks are deployed for target viewing, it is desired that the maximum number of targets be visually covered by source sensors. In general, a target is any moving or static object that is expected to be viewed by visual sensors. Moreover, in real applications, targets may have different shapes and sizes, but visual sensors may view just small parts of them.

A target is defined as a generic element located at a 2D position, although 3D modelling may also be considered. For a total of m targets, a target t_i, i = 1, …, m, has position (x_i, y_i) as its center and thus, for simplification, a target is defined as a circumference with radius r_i and center (x_i, y_i). The value of r_i is computed taking the greatest distance from the center of the target to its border, assuming a top-down view (observer above the monitored field). Figure 2 shows examples of generic representations of targets.

Figure 2: Example of targets.

The camera’s FoV will view only part of the defined circumference, resulting in a viewed perimeter lower than or equal to πr_i, which is half the perimeter of the circumference. Moreover, we do not consider occlusion of targets, but it could be assumed in 3D modelling.

3.3. Computing Targets Viewing

The FoV’s triangle may intersect a target’s circumference in different ways. The portion viewed by a sensor s_j will be an arc of the target’s circumference and thus is defined by a pair of intersection points, p1 and p2. These points are computed according to the way the FoV intersects the circumference, as exemplified in Figure 3. Obviously, the basic condition for target viewing is that the Euclidean distance between the considered target’s center and the visual sensor position is lower than or equal to R + r_i.

Figure 3: Targets coverage.

The points p1 and p2 can be computed considering the intersection of the lines defined by the vertices of the FoV’s triangle. More specifically, we want to compute the intersections of lines AB and AC with the target’s circumference. A generic line may have three different configurations concerning a circumference: it may not intersect it, it may intersect it at a single point (tangent line), or it may intersect it at two points (secant line). Through geometry, the formulation in (3) can be considered when checking how a line intersects a circumference, based on a discriminant Δ. Note that the formulation in (3) is written for line AB, but line AC can be handled just by taking the coordinates of vertex C. The following conditions are found:
If Δ < 0, there is no intersection.
If Δ = 0, there is a tangent line.
If Δ > 0, there is a secant line.
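The discriminant test of (3) and the subsequent point computation can be sketched together as follows. This is an illustrative implementation of the standard line–circle intersection with hypothetical names, not the paper’s exact formulation; the line is treated as infinite, matching the three cases above.

```python
import math

def line_circle_intersections(p, q, center, r):
    """Intersections of the (infinite) line through p and q with a circle.

    Returns [] (no intersection), one point (tangent line), or two points
    (secant line), following the discriminant conditions in the text.
    """
    (px, py), (qx, qy) = p, q
    cx, cy = center
    dx, dy = qx - px, qy - py          # line direction
    fx, fy = px - cx, py - cy          # p relative to the circle's center
    a = dx * dx + dy * dy
    b = 2.0 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4.0 * a * c         # < 0: none, = 0: tangent, > 0: secant
    if disc < 0:
        return []
    if disc == 0:
        t = -b / (2.0 * a)
        return [(px + t * dx, py + t * dy)]
    s = math.sqrt(disc)
    ts = [(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)]
    return [(px + t * dx, py + t * dy) for t in ts]
```

When both AB and AC are secant, this routine would be called once per line; as the text notes for opaque targets, only the two resulting points closest to vertex A would then be kept.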

If both AB and AC are secant to a considered target’s circumference, four intersection points will be computed, but only the two closest to vertex A have to be taken. This is due to the fact that visual sensors are not expected to see through targets in this work (opaque targets). If either of those two lines is tangent, the intersection point is the point of tangency. The formulation in (4) computes all possible intersection points for tangent and secant lines; if AB (or AC) is a secant line, two different candidate points may be found, but only one point is computed for a tangent line.

A special formulation has to be defined when line AB or AC, or both, do not intersect the target’s circumference, as depicted in Figures 3(a) and 3(d). In these cases, one or two projection lines are drawn from vertex A toward line BC, and these projections are tangent to the target’s circumference. Actually, a tangent line is perpendicular to the radius of the target’s circumference at the point of tangency, and thus a right triangle can be created, as presented in Figure 4.

Figure 4: Computing tangent vertex.

There are two possibilities for the tangent line, whose length between vertex A of the considered visual sensor and the tangent point is defined as d_t. If d_t is greater than the height of the FoV’s (isosceles) triangle, defined as h, the tangent point must not be considered as an intersection point. Otherwise, the tangent point is an intersection point to be considered when computing the viewed arc. The distance between vertex A and the center of the target’s circumference, defined as d_c, is the hypotenuse of the created right triangle, whose legs are d_t and r_i. The value of h can be found through trigonometry from the other parameters of the FoV’s triangle.
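Under the right-triangle relation described above, the tangent length follows directly from the Pythagorean theorem. The sketch below uses the same d_c and r_i roles as the text; the function name is an illustrative assumption.

```python
import math

def tangent_length(ax, ay, cx, cy, r):
    """Length d_t from vertex A at (ax, ay) to the point of tangency on the
    circle of radius r centered at (cx, cy).

    d_c (A to center) is the hypotenuse; d_t and r are the legs, so
    d_t = sqrt(d_c^2 - r^2). Returns None if A lies inside the circle,
    where no tangent from A exists.
    """
    dc = math.hypot(cx - ax, cy - ay)
    if dc < r:
        return None
    return math.sqrt(dc * dc - r * r)
```

The returned d_t would then be compared against the FoV triangle’s height h to decide whether the tangent point counts as an intersection point.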

When d_t > h, we have to compute the intersection of line BC with the circumference, and this can be done just by adjusting (3) and (4). In this case, of course, all intersection points (one or two) must be considered. When d_t ≤ h, two possible tangent points will be found. For that, we take the intersection points of the target’s circumference with a circumference centered at vertex A and with radius d_t. Equation (5) can be used to compute those intersection points.
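The circle–circle intersection used to locate the tangency points (the role played by (5)) can be sketched with the standard construction below. The helper name is hypothetical; it would be called with vertex A and radius d_t against the target’s circumference.

```python
import math

def circle_circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles, given centers and radii.

    Returns [] when the circles do not intersect (or are concentric);
    otherwise returns the two intersection points (coincident for the
    externally tangent case).
    """
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d > r1 + r2 or d < abs(r1 - r2) or d == 0:
        return []
    # a: distance from c1 to the chord's midpoint; h: half-chord length.
    a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    ox, oy = -(y2 - y1) * h / d, (x2 - x1) * h / d
    return [(mx + ox, my + oy), (mx - ox, my - oy)]
```

As the text notes, of the two returned points only the one lying inside the FoV’s triangle would be kept as an intersection point.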

Two different points of tangency can be found when applying the formulation in (5). However, of the two computed points, only the one inside the FoV’s triangle is taken as an intersection point.

4. Proposed Availability Assessment

The availability level of monitoring applications will depend on the visual and hardware characteristics of deployed sensors, as well as the network topology of the considered wireless visual sensor network. Visual monitoring applications will typically experience different levels of hardware failures and coverage failures [6]. While a hardware failure may result from energy depletion, sensor damage, connection problems, or faulty conditions [25, 29], among other factors, coverage failures happen when visual sensors cannot provide the minimal acceptable information for application functions. For example, if an application expects to view at least 70% of all targets’ perimeters, it is only assumed available if this constraint is respected (indicating that no coverage failure happened). A practical coverage metric associated with target viewing is then highly desired, since it can be exploited for availability assessment.

We propose the Effective Target Viewing (ETV), a metric of the coverage quality over a set of targets. ETV indicates the percentage of viewed parts of targets’ perimeters. This metric is derived from ETV(t_i), which indicates the percentage of the viewed perimeter of target t_i, while the ETV metric is the average of ETV(t_i) over all targets t_i, i = 1, …, m.

ETV is a coverage metric. However, it can be exploited to assess the availability of visual monitoring applications. In fact, ETV can be associated with an availability state, which may be “yes” (available) or “no” (unavailable). When assessing availability, monitoring applications will define the minimum acceptable ETV for the deployed visual sensors. We define M-ETV as the minimum acceptable value for the ETV of the network, while M-ETV(t_i) is the minimum acceptable ETV(t_i), for any considered target. For example, if M-ETV is 50%, that is the minimum acceptable average coverage of targets’ perimeters. However, if we define M-ETV(t_i) as 50%, at least 50% of each target’s perimeter must be viewed by visual sensors. As average results may hide the existence of targets that are not being satisfactorily viewed, M-ETV(t_i) may associate availability with uniform viewing over targets.

Actually, M-ETV and M-ETV(t_i) are parameters of applications, independent of the deployed visual sensors and targets. In other words, as coverage failures depend on monitoring requirements [6], different applications may have different availability conditions even for the same network.
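The availability decision described above can be sketched as a simple check. The function name and argument order are illustrative assumptions; values are fractions in [0, 1], and the per-target minimum is optional, mirroring applications that define only M-ETV.

```python
def is_available(etv, lowest_etv_t, m_etv, m_etv_t=None):
    """Availability decision for a visual monitoring application.

    The application is available when the network ETV meets M-ETV and,
    if a per-target minimum M-ETV(t_i) is defined, when the lowest
    ETV(t_i) also meets it.
    """
    if etv < m_etv:
        return False
    if m_etv_t is not None and lowest_etv_t < m_etv_t:
        return False
    return True
```

For instance, a network with ETV = 60% but one uncovered target satisfies an application requiring only M-ETV = 50%, yet fails one that also requires M-ETV(t_i) = 50%.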

The ETV metric is computed taking the viewed parts of targets, considering all nearby cameras. Every visual sensor may view a percentage of any target’s perimeter, depending on the considered parameters. It is defined that a visual sensor s_j may view a target within an interval of angles, defined as γ_j, which is represented by a sector of the circumference with radius r_i. The viewed arc is defined by the pair of intersection points, which can be used to compute an angular distance, as specified in (6). The formulation in (6) comes from the fact that the two intersection points and the center of the circumference create an isosceles triangle with the chord between p1 and p2 as one of its sides. The law of cosines is then employed to compute γ_j, which is the central angle of target t_i that determines the arc between p1 and p2. This “view” will then cover γ_j/360° of the considered target’s circumference.
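The law-of-cosines step described above can be sketched as follows, assuming p1 and p2 are the intersection points and r the target’s radius; the function names are illustrative. In the isosceles triangle (center, p1, p2), both radius sides have length r and the chord is the third side, so cos γ = 1 − |p1p2|²/(2r²).

```python
import math

def central_angle(p1, p2, r):
    """Central angle (radians) of the arc p1-p2 on a circle of radius r,
    via the law of cosines on the isosceles triangle (center, p1, p2)."""
    chord2 = (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2
    cos_g = 1.0 - chord2 / (2.0 * r * r)
    # Clamp to guard against floating-point values slightly outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, cos_g)))

def viewed_fraction(p1, p2, r):
    """Fraction of the target's perimeter covered by the arc p1-p2."""
    return central_angle(p1, p2, r) / (2.0 * math.pi)
```

For a unit circle, the arc from (1, 0) to (0, 1) subtends a 90° central angle, that is, a quarter of the perimeter.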

A simple way to compute the viewed perimeters of all targets would be to compute an average result over the sum of all viewed arcs of each target. However, this would count redundant views of the same target, which may be relevant when replacing faulty nodes [6, 10]; as we are computing the percentage of targets’ perimeters being viewed, redundant coverage must not be counted. In this way, the proposed ETV metric does not consider redundant views, and thus its highest value for the viewing of any target is 100%. If the angular distances of all values of γ_j were simply summed, redundant views of a target might be (erroneously) counted, which would not correspond to the expected value of ETV. In order to avoid that problem, an algorithm was designed to remove redundant views from the viewed arcs of the targets.
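The redundancy-removal idea can be sketched as a merge of angular intervals, a simplified stand-in for the paper’s algorithm. Assumptions: each sensor’s viewed arc is given as a (start, end) angle interval in degrees along the target’s circumference, and arcs crossing 0° have been pre-split by the caller, e.g. (350, 10) into (350, 360) and (0, 10).

```python
def etv_target(arcs):
    """ETV(t_i) as a fraction in [0, 1], from per-sensor viewed arcs.

    Overlapping (redundant) views are merged so each portion of the
    perimeter is counted at most once.
    """
    merged = []
    for start, end in sorted(arcs):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous merged arc: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return sum(end - start for start, end in merged) / 360.0

def etv(all_targets_arcs):
    """Network-wide ETV: average of ETV(t_i) over all targets."""
    return sum(etv_target(a) for a in all_targets_arcs) / len(all_targets_arcs)
```

For example, arcs (0°, 90°) and (45°, 180°) merge into (0°, 180°), yielding ETV(t_i) = 50% rather than the naive 37.5% + 37.5%.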

Let us define P as a vector containing all points p1 and p2, for all nodes s_j, sorted by their order of appearance in a counterclockwise or clockwise tour of the perimeter of the circumference defining the target t_i. Let us define Φ as a vector containing the angles defined by each pair of consecutive points of P and the center of t_i. We also define Ω as a numerical constant defining the maximum possible sum of the magnitudes, that is, 360°. Then, the contribution of the segment defined by a pair of consecutive points to ETV is defined as presented in (7).

In (7), s(P[k]) is the visual sensor node associated with point P[k], and arc(s_j) is the arc defined by the two points p1 and p2 of sensor s_j. The four conditions in (7) are used to decide whether the arcs confined by consecutive points of P are parts of the area visualized by some sensor node. The first condition evaluates whether both points were projected by the same sensor, which is possible if (a) there is an entire (nonoverlapped) arc exclusively viewed by that sensor or (b) the target’s arc viewed by that sensor is also viewed by another sensor. The second condition evaluates whether the arcs captured by the sensors projecting the two consecutive points overlap. The third condition evaluates the case where the target regions viewed by those two sensors do not overlap each other but both overlap the captured arc of a common third sensor. At last, the fourth condition marks a nonviewed portion of the circumference.

Finally, ETV(t_i) and ETV can be computed as expressed in (8).

A graphical example of computing the intersection points using the defined formulation is presented in Figure 5. The computed ETV() for this example is .

Figure 5: Visual representation of intersection points. (a) Original target; (b) intersection points.

Algorithm 1 computes ETV and ETV(t_i) for all targets, considering the equations previously presented. Most of the computation is performed in line (), using the proposed geometrical model.

Algorithm 1: ETV computing.

5. Numerical Results

The proposed metrics can be used to assess availability in wireless visual sensor networks. We then defined some mathematical experiments for different parameters of visual sensors and targets, computing ETV and ETV(t_i). Algorithm 1 was implemented in Matlab, along with the defined mathematical formulations. The next subsection presents the numerical results when computing those metrics.

5.1. Computing ETV and ETV(t_i)

Different configurations of visual sensors and targets were considered to compute ETV, assuming sensors randomly deployed and also sensors deterministically positioned in a grid-like topology. Initially, visual sensors and targets were virtually positioned at random, and their parameters were used in the defined mathematical equations. For this verification, visual sensors have α, θ, R, and (x_j, y_j) with random values, while targets have random values for (x_i, y_i) and r_i.

A 300 m × 600 m monitoring field is considered for computing the value of ETV for different random network configurations, as presented in Figure 6. Five different targets are randomly positioned in the monitored field for each test, taking two different fixed values for the radius r_i of all targets: 20 m and 50 m. As random parameters are calculated, every verification is executed 10 times and only the average results are considered.

Figure 6: ETV after random deployment: (a) 5 targets with r_i = 20 m; (b) 5 targets with r_i = 50 m.

As random parameters are being considered, there is no uniform distribution for ETV in Figure 6. But, in general, ETV increases for a higher sensing radius. However, as can be seen in Figure 6(b), large targets are harder to view completely on average, which reduces the value of ETV.

Visual sensors were also considered in planned positions. For the next experiment, a 40-sensor network with 2 columns of sensors with 20 rows each was considered, simulating a more realistic network. In that scenario, targets are located between the columns, as may happen when cars are being monitored on a road. Figure 7 presents a graphical example of how sensors and targets are considered for this evaluation phase, disregarding the effect of occlusion.

Figure 7: Example of sensors deployment. The ETV in this example is .

ETV was computed when five targets are deployed in random positions (between the two columns of sensors), considering two fixed target radius configurations. We also considered different values for the sensing angle (α) and sensing radius (R) of all visual sensors. As visual sensors are deployed with random orientations, every verification is also executed 10 times and only the average results are considered. The results for this verification are presented in Figure 8.

Figure 8: ETV for a specific scenario: (a) 5 targets with ; (b) 5 targets with .

The value of ETV varies according to the parameters of the visual sensors. In general, higher values for the sensing radius (R) of visual sensors increase ETV for the considered deployment scenario, but higher viewing angles may decrease it. In fact, for low values of R, the ETV was very low, since only targets closer to the border of the simulated road were viewed.

For this same scenario, more targets can be considered when assessing ETV. Figure 9(a) presents the results when 20 large targets have to be viewed. For more targets, the ETV is almost the same when taking the same parameters, since the targets are being covered in the same way on average. Moreover, larger targets may be harder to view completely, and thus the ETV may be lower.

Figure 9: ETV for large targets: (a) 20 targets with ; (b) 5 targets with .

At last, Figure 9(b) computes ETV for 20 targets with different sizes, assuming for all visual sensors. In this verification, the value of ETV increases for higher values of and .

Sometimes, it may be desired to compute the lowest ETV(t_i) for a monitoring application, which indicates the worst target coverage among all targets in the considered scenario. As ETV is an average value, it may hide the fact that some targets are being badly covered or even not covered at all. Figure 10 presents the computed ETV and ETV(t_i) for the monitoring scenario of Figure 7, with visual sensors deployed in two uniform columns and targets randomly positioned between those columns. For this evaluation, all visual sensors have and  m, with random orientations (average results after 10 consecutive tests are considered).

Figure 10: ETV and ETV(). (a) Targets with . (b) Targets with .

Results in Figures 10(a) and 10(b) present ETV with similar values, indicating that, on average, the targets are being viewed with almost the same “quality,” even for larger targets. However, when we consider the lowest achieved ETV(t_i), results for smaller targets in Figure 10(a) show that at least one of the targets was not covered by any of the visual sensors, which may not be acceptable for some applications.

The next subsection discusses how ETV and ETV(t_i) can be used when assessing availability.

5.2. Assessing Availability

In general, availability is a characteristic of applications rather than networks. As different applications will have different requirements concerning visual coverage and dependability [6], any availability metric must account for the characteristics of each visual monitoring application.

Considering the average results presented in Figure 10(a), availability requirements of a set of hypothetical visual monitoring applications were defined. We considered that such applications define values for M-ETV and, sometimes, for M-ETV(t_i) (“—” means it is not relevant for the application), directly indicating the minimal conditions for availability. The results are presented in Table 2, where an application is assumed available when M-ETV ≤ ETV and M-ETV(t_i) ≤ lowest ETV(t_i).

Table 2: Availability requirements and attainable availability of some visual monitoring applications, for the computed ETV and lowest ETV(t_i) in Figure 10(a).

As can be seen in Table 2, network and target configurations are not enough to determine the availability of a particular visual monitoring application, since its minimum expected level of target coverage must be respected. This holds even for identical network configurations, as with Applications 3 and 4 in Table 2.

Availability was also assessed for a more practical application, considering targets that move through an area covered by fixed visual sensors. That scenario emulates visual monitoring of moving cars on a road, where cars may have different dimensions. Initially, the scenario is composed of six visual sensors deployed along two imaginary parallel lines, with three cameras positioned on each of these lines, as presented in Figure 11. For the performed verifications, all visual sensors have  m and .

Figure 11: Monitoring scenario for a road with moving cars.

We consider that cars move in a single direction, straight from left to right in Figure 11, keeping to the center of the road. Three configurations of targets are considered for the tests:  m,  m, and  m. For this verification, ETV is computed at different “instants” of movement, which means that ETV is computed according to predefined positions. Figure 12 graphically presents an example of a target with  m, which is considered in fixed positions for ETV computation, at different instants. One should note that cars move only along the x-axis.

Figure 12: Cars moving through the considered scenario.

The computed values for ETV are presented in Figure 13, for a single target that moves 500 m from left to right. Assuming a coordinate system where position () is at the top left corner of the road, targets move from position () to position (), and the value of ETV in this scenario, with carefully positioned fixed cameras, depends on the position and size of the target. In fact, all graphs in Figure 13 present results for the same scenario and the same movement behaviour, varying only the number of measurement instants. In other words, with more measurement instants, the proposed algorithm is applied more times, changing only the target's position (, ). Finally, the monitoring application is considered to define M-ETV = 45%.
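The sampling procedure described above can be sketched as follows. The function `compute_etv` is a hypothetical stand-in for the article's ETV algorithm (not reproduced here); the road length and step sizes mirror the experiment of Figure 13, and all names are illustrative assumptions.

```python
def sample_positions(road_length_m, step_m):
    """Yield the x-axis positions at which ETV is recomputed; smaller
    steps give a finer-grained picture of coverage along the road."""
    x = 0.0
    while x <= road_length_m:
        yield x
        x += step_m

def etv_along_road(compute_etv, road_length_m, step_m):
    """Evaluate ETV at each sampled target position (one 'instant' each)."""
    return [(x, compute_etv(x)) for x in sample_positions(road_length_m, step_m)]

# With a 500 m road and 50 m steps, ETV is evaluated 11 times (0, 50, ..., 500).
dummy_etv = lambda x: 50.0  # placeholder for the real ETV computation
print(len(etv_along_road(dummy_etv, 500.0, 50.0)))  # 11
```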

Figure 13: ETV when targets move through the covered area. (a) Targets move 50 m for each measure. (b) Targets move 25 m for each measure. (c) Targets move 10 m for each measure. (d) Targets move 1 m for each measure.

For the considered scenario, one can easily note that, on average, smaller targets are more completely covered by visual sensors, resulting in higher ETV. Another important conclusion is that the application will not be available when targets are at certain positions, since the computed ETV will be lower than the defined M-ETV (45%). It is also interesting to note that the smallest target may sometimes have the lowest ETV in the experiments, because it can "fall" entirely within areas of low coverage, which is less likely for larger targets.

The proposed algorithm to compute ETV is significant because it allows the identification of parts of the network with poor coverage, which may lead to states of unavailability. This information may be exploited to change the configuration of the network, for example, by rotating cameras or deploying more visual sensors. To test this possibility, we extended the monitoring scenario of Figure 11 by deploying four additional cameras, as depicted in Figure 14.
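Identifying the poorly covered stretches of the road, as discussed above, could be sketched as follows. The input format (a list of position/ETV pairs) and the function name are assumptions for illustration; the numeric values below are invented, not the article's measurements.

```python
def unavailable_intervals(samples, m_etv):
    """Group consecutive under-covered sample positions (ETV < M-ETV)
    into (start, end) intervals that could guide camera rotation or
    additional sensor deployments."""
    intervals, start, prev = [], None, None
    for x, etv in samples:
        if etv < m_etv:
            if start is None:
                start = x
            prev = x
        elif start is not None:
            intervals.append((start, prev))
            start = None
    if start is not None:  # close an interval still open at the road's end
        intervals.append((start, prev))
    return intervals

# Toy samples of (position in m, ETV in %) with M-ETV = 45%.
samples = [(0, 30), (10, 50), (20, 40), (30, 42), (40, 60), (50, 20)]
print(unavailable_intervals(samples, 45))  # [(0, 0), (20, 30), (50, 50)]
```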

Figure 14: Monitoring scenario with the addition of four new visual sensors.

The ETV was recomputed for this new scenario, as presented in Figure 15, with ETV computed after each 10 m (Figure 15(a)) or 1 m (Figure 15(b)) of target movement.

Figure 15: ETV after deployment of four more visual sensors. (a) Targets move 10 m for each measure. (b) Targets move 1 m for each measure.

In general, ETV improved for the three tested target sizes, especially for the larger targets. In fact, for  m, the application was always unavailable in the scenario of Figure 11. However, in the scenario with 10 visual sensors in Figure 14, the application monitoring the largest target was available when the target was between 125 m and 400 m.

With the performed verifications, the ETV of the defined scenarios could be assessed. Using the proposed mathematical formulations, one can estimate how targets will be covered, which can guide the adjustment of the deployed visual sensors or even trigger new deployments. We expect that this methodology can bring valuable results for the deployment, configuration, and operation of wireless visual sensor networks.

5.3. Availability and Communication in WVSN

Availability in wireless visual sensor networks is strongly related to communication issues. In fact, the level of availability indicates how well a deployed network retrieves data according to the monitoring requirements of the considered application; thus, states of unavailability may indicate that something is wrong or not operating as expected. The causes of such problems are diverse.

A transient fault in a wireless visual sensor network will directly impact packet transmission, requiring proper mechanisms to assure some level of reliability. Permanent faults, on the other hand, may render part of or even the entire network unavailable when the visual coverage area is reduced. Moreover, if transmission paths face long periods of congestion, the network may become unavailable even if enough targets are being properly viewed, since packets are not being received at the sink side. High packet error rates may also impact the overall availability level of a WVSN. Therefore, availability is a broader concept that comprises different levels of hardware and coverage failures [6], including communication issues.

The proposed Effective Target Viewing is a relevant metric to assess how well targets' perimeters are being viewed, but ETV should also be considered along with other parameters to measure the availability of wireless visual sensor network applications more completely. As hardware failures and connectivity problems may disconnect visual sensors, ETV may be dynamically affected by the network condition: disconnected visual sensors may not be considered when computing ETV. Thus, ETV may even be used as a QoS metric, since its value may be impacted by the network.
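The idea of excluding disconnected sensors from the ETV computation can be sketched as follows. This is an assumed interface, not the article's implementation: `compute_etv`, the sensor representation, and the connectivity predicate are all illustrative placeholders.

```python
def effective_sensors(sensors, is_connected):
    """Keep only sensors that can still deliver packets to the sink."""
    return [s for s in sensors if is_connected(s)]

def dynamic_etv(sensors, is_connected, compute_etv):
    """Recompute ETV over connected sensors only, so the coverage metric
    also reflects the current communication state of the network."""
    return compute_etv(effective_sensors(sensors, is_connected))

# Toy example: ETV crudely modelled as 10% of coverage per connected sensor.
sensors = ["s1", "s2", "s3", "s4"]
alive = {"s1", "s3", "s4"}  # s2 is assumed disconnected
print(dynamic_etv(sensors, lambda s: s in alive, lambda ss: 10 * len(ss)))  # 30
```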

Therefore, although ETV is computed considering only visual sensing parameters, sensor communication may also play a relevant role when computing ETV and enhancing availability in wireless visual sensor networks.

6. Conclusions

Target monitoring in wireless visual sensor networks is a relevant research topic that still presents challenging issues, fostering investigation in this area. As targets may have different shapes and sizes, it is important to define mathematical mechanisms to assess how such targets will be viewed, which can then guide real WVSN operation. For example, a low value of ETV may trigger the repositioning of rotatable cameras or even suggest new deployments of visual sensors. Either way, availability assessment based on targets' perimeters can bring valuable results for wireless visual sensor networks.

As target size is central to the proposed approach, the way targets are modelled is extremely relevant. In this article we considered circumferences to represent targets, providing a feasible and computationally viable solution. However, as future work, we will pursue more realistic modelling, considering convex polygons and grids of lines to represent targets, which may bring more realistic results. Moreover, real snapshots will be considered as a reference to identify the borders of targets, allowing even more complex mathematical models. Finally, 3D modelling will also be considered in future work.

Competing Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Brazilian Research Agency CNPq under Grant no. 441459/2014-5 and by the University of the Bío-Bío, under Grants DIUBB 161610 2/R and GI 160210/EF.

References

  1. J. Yick, B. Mukherjee, and D. Ghosal, “Wireless sensor network survey,” Computer Networks, vol. 52, no. 12, pp. 2292–2330, 2008.
  2. Y. Charfi, N. Wakamiya, and M. Murata, “Challenging issues in visual sensor networks,” IEEE Wireless Communications, vol. 16, no. 2, pp. 44–49, 2009.
  3. I. T. Almalkawi, M. G. Zapata, J. N. Al-Karaki, and J. Morillo-Pozo, “Wireless multimedia sensor networks: current trends and future directions,” Sensors, vol. 10, no. 7, pp. 6662–6717, 2010.
  4. D. G. Costa and L. A. Guedes, “The coverage problem in video-based wireless sensor networks: a survey,” Sensors, vol. 10, no. 9, pp. 8215–8247, 2010.
  5. D. G. Costa, L. A. Guedes, F. Vasques, and P. Portugal, “Adaptive monitoring relevance in camera networks for critical surveillance applications,” International Journal of Distributed Sensor Networks, vol. 2013, Article ID 836721, 14 pages, 2013.
  6. D. G. Costa, I. Silva, L. A. Guedes, F. Vasques, and P. Portugal, “Availability issues in wireless visual sensor networks,” Sensors, vol. 14, no. 2, pp. 2795–2821, 2014.
  7. D. Pescaru, V. Gui, C. Toma, and D. Fuiorea, “Analysis of post-deployment sensing coverage for video wireless sensor networks,” in Proceedings of the 6th Roedunet International Conference (RoEduNet '07), Craiova, Romania, November 2007.
  8. A. Mavrinac and X. Chen, “Modeling coverage in camera networks: a survey,” International Journal of Computer Vision, vol. 101, no. 1, pp. 205–226, 2013.
  9. J. Ai and A. A. Abouzeid, “Coverage by directional sensors in randomly deployed wireless sensor networks,” Journal of Combinatorial Optimization, vol. 11, no. 1, pp. 21–41, 2006.
  10. D. Costa, I. Silva, L. Guedes, F. Vasques, and P. Portugal, “Availability assessment of wireless visual sensor networks for target coverage,” in Proceedings of the IEEE International Conference on Emerging Technologies in Factory Automation (ETFA '14), pp. 1–8, Barcelona, Spain, 2014.
  11. D. Costa, I. Silva, L. Guedes, F. Vasques, and P. Portugal, “Selecting redundant nodes when addressing availability in wireless visual sensor networks,” in Proceedings of the IEEE International Conference on Industrial Informatics, pp. 424–448, Porto Alegre, Brazil, July 2014.
  12. D. G. Costa, I. Silva, L. A. Guedes, P. Portugal, and F. Vasques, “Enhancing redundancy in wireless visual sensor networks for target coverage,” in Proceedings of the 20th Brazilian Symposium on Multimedia and the Web (WebMedia '14), pp. 31–38, November 2014.
  13. M. Younis and K. Akkaya, “Strategies and techniques for node placement in wireless sensor networks: a survey,” Ad Hoc Networks, vol. 6, no. 4, pp. 621–655, 2008.
  14. Y. E. Osais, M. St-Hilaire, and F. R. Yu, “Directional sensor placement with optimal sensing range, field of view and orientation,” Mobile Networks and Applications, vol. 15, no. 2, pp. 216–225, 2010.
  15. X. Sun, Y. Zhang, X. Ren, and K. Chen, “Optimization deployment of wireless sensor networks based on culture—ant colony algorithm,” Applied Mathematics and Computation, vol. 250, pp. 58–70, 2015.
  16. H.-H. Yen, “Optimization-based visual sensor deployment algorithm in PTZ wireless visual sensor networks,” in Proceedings of the 7th International Conference on Ubiquitous and Future Networks (ICUFN '15), pp. 734–739, IEEE, Sapporo, Japan, July 2015.
  17. T. Sauter, “Energy-efficient coverage problems in wireless ad hoc sensor networks,” Computer Communications, vol. 29, no. 4, pp. 413–420, 2006.
  18. L. Liu, H. Ma, and X. Zhang, “On directional K-coverage analysis of randomly deployed camera sensor networks,” in Proceedings of the IEEE International Conference on Communications (ICC '08), pp. 2707–2711, Beijing, China, May 2008.
  19. M. Alaei and J. M. Barcelo-Ordinas, “Node clustering based on overlapping FoVs for wireless multimedia sensor networks,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '10), pp. 1–6, Sydney, Australia, April 2010.
  20. M. Rahimi, S. Ahmadian, D. Zats, R. Laufer, and D. Estrin, “Magic numbers in networks of wireless image sensors,” in Proceedings of the Workshop on Distributed Smart Cameras (DSC '06), Boulder, Colo, USA, October 2006.
  21. D. Devarajan, R. J. Radke, and H. Chung, “Distributed metric calibration of Ad hoc camera networks,” ACM Transactions on Sensor Networks, vol. 2, no. 3, pp. 380–403, 2006.
  22. Y. Cai, W. Lou, M. Li, and X.-Y. Li, “Target-oriented scheduling in directional sensor networks,” in Proceedings of the 26th IEEE International Conference on Computer Communications (INFOCOM '07), pp. 1550–1558, Barcelona, Spain, May 2007.
  23. D. G. Costa, I. Silva, L. A. Guedes, F. Vasques, and P. Portugal, “Selecting redundant nodes when addressing availability in wireless visual sensor networks,” in Proceedings of the 12th IEEE International Conference on Industrial Informatics (INDIN '14), pp. 130–135, IEEE, Porto Alegre, Brazil, July 2014.
  24. D. G. Costa, I. Silva, L. A. Guedes, F. Vasques, and P. Portugal, “Optimal sensing redundancy for multiple perspectives of targets in wireless visual sensor networks,” in Proceedings of the 13th International Conference on Industrial Informatics (INDIN '15), pp. 185–190, Cambridge, UK, July 2015.
  25. D. G. Costa, I. Silva, L. A. Guedes, P. Portugal, and F. Vasques, “Availability assessment of wireless visual sensor networks for target coverage,” in Proceedings of the 19th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA '14), pp. 1–8, Barcelona, Spain, September 2014.
  26. C. Duran-Faundez, D. G. Costa, V. Lecuire, and F. Vasques, “A geometrical approach to compute source prioritization based on target viewing in wireless visual sensor networks,” in Proceedings of the IEEE World Conference on Factory Communication Systems (WFCS '16), pp. 1–7, IEEE, Aveiro, Portugal, May 2016.
  27. K.-S. Hung and K.-S. Lui, “Perimeter coverage scheduling in wireless sensor networks using sensors with a single continuous cover range,” EURASIP Journal on Wireless Communications and Networking, vol. 2010, Article ID 926075, 17 pages, 2010.
  28. K.-S. Hung and K.-S. Lui, “Perimeter coverage made practical in wireless sensor networks,” in Proceedings of the 9th International Symposium on Communications and Information Technology (ISCIT '09), pp. 87–92, Icheon, South Korea, September 2009.
  29. I. Silva, L. A. Guedes, P. Portugal, and F. Vasques, “Reliability and availability evaluation of wireless sensor networks for industrial applications,” Sensors, vol. 12, no. 1, pp. 806–838, 2012.
  30. X. Bai, L. Ding, J. Teng, S. Chellappan, C. Xu, and D. Xuan, “Directed coverage in wireless sensor networks: concept and quality,” in Proceedings of the IEEE 6th International Conference on Mobile Adhoc and Sensor Systems (MASS '09), pp. 476–485, Macau, China, October 2009.
  31. A. Neishaboori, A. Saeed, K. A. Harras, and A. Mohamed, “On target coverage in mobile visual sensor networks,” in Proceedings of the 12th ACM International Symposium on Mobility Management and Wireless Access (MobiWac '14), pp. 39–46, ACM, Montreal, Canada, September 2014.