Wireless Sensor Network Coverage Optimization: Comparison of Local Search-Based Heuristics
The Maximum Lifetime Coverage Problem (MLCP) requires heuristic optimization methods due to its complexity. A real-world problem model determines how a solution is represented and the operators applied in these heuristics. Our paper describes adapting a local search scheme and its operators to MLCP optimization. The operators originate from three local search algorithms we proposed earlier: LSHMA, LSCAIA, and LSRFTA. Two steps of the LS scheme's main loop can be executed in three different ways each. Hence, nine versions of the LS approach can be obtained. In experimental research, we verified their effectiveness. Test cases come from three benchmarks: SCP1, proposed and used in our earlier research on the three LS algorithms mentioned above, and two others found in the literature. The results obtained with SCP1 showed that the algorithm based on the hypergraph model approach (HMA) is the most effective. The results divide the remaining algorithms into two groups: effective ones and weak ones. However, the other benchmarks showed that the more redundant the coverage of points of interest (POIs) by sensors, the more effective the perturbation method from the approach inspired by cellular automata (CAIA). The findings expose the strengths and weaknesses of the problem-specific steps applied in the LS algorithms.
Wireless sensor networks (WSNs) are essential parts of IT solutions in many applications: military ones, like battlefield surveillance, and civil ones, including forecast systems, environment observation, or habitat monitoring. Sensors can be integrated into numerous electronic devices and machines. Moreover, due to the advances in semiconductor and microelectromechanical technologies and the miniaturization of computing and sensing technologies, sensors and microcontrollers are tiny, consume little power, and are inexpensive. Thus, WSNs may consist of large numbers of small yet powerful devices cooperating in large areas. WSNs aim to monitor a region or a set of targets to collect information valuable for modeling and forecasting situations in the area or for controlling the usage of resources. In WSNs consisting of miniature devices, monitoring quality becomes an energy efficiency issue due to limited battery capacities.
WSN lifetime maximization techniques depend on two main components of a problem model. The first is the objective, e.g., network lifetime optimization, coverage, connectivity, or transmission parameters. The second represents WSN design constraints, for example, the communication medium, resource limits, fault tolerance and self-organization, QoS requirements, or mobility and deployment. In our research, we focus on network lifetime maximization, that is, the maximization of an uninterrupted interval during which the network keeps the level of coverage above a certain threshold under some resource-limited constraints. Our research concerns a simplified network model, where sensors remain immobile once deployed and their connectivity is always guaranteed. We assume that individual sensor placement is infeasible due to environmental conditions, such as monitoring a disaster area or a battlefield. Therefore, sensors are randomly scattered over the monitored field. Sensors have a finite battery capacity; thus, after the drain, they should be replaced or recharged. However, we assume that neither of these options is applicable due to the operating conditions. So, once the energy reserve is depleted, the sensor is regarded as irretrievably lost to the network. The network is equipped with an external computational unit responsible for optimally scheduling and distributing tasks among the network devices.
We assume that sensors have uniform batteries and sensing ranges and have to monitor a set of targets, also called points of interest (POIs). When the number of sensors is large and their monitoring ranges overlap, some sensors can be off while the required level of coverage is still satisfied. In this case, the optimization aims to find a sensor activity schedule with the longest makespan that still guarantees a sufficient coverage level. This class of problems is called the Maximum Lifetime Coverage Problem (MLCP). The approaches to solving MLCP are based on different models of real-world circumstances; thus, they differ in the structure of the solution and the complexity of the problem.
In this paper, we develop local search strategies enriched by problem-specific procedures originating from three algorithms we proposed earlier: LSHMA, LSCAIA, and LSRFTA. The strategies share the same execution scheme. In the beginning, they find a first feasible schedule that becomes a preliminary solution to the problem. Then, iteratively, they generate its neighbor and try to refine it. The new solution takes the place of its ancestor when we gain an improvement. The advantage of these strategies lies in the neighbor generation (so-called perturbation) and refinement procedures, which take advantage of some model-specific properties.
In recent years, many heuristic approaches have been proposed; thus, quite naturally, an idea arose to verify the efficiency of hybrid heuristic algorithms composed of components of various origins. Unfortunately, many approaches do not share the same model of the real-world problem and the same set of constraints. Therefore, their integration into one method is a nontrivial task and sometimes even questionable. However, this is not the case for LSHMA, LSCAIA, and LSRFTA, which share the same model of the real-world problem and the same structure of the solution representation. Hence, one can easily use selected problem-specific steps as exchangeable building blocks. Eventually, we constructed nine local search strategies by swapping two steps, perturbation and refinement, among the three approaches. Then, we evaluated their performance experimentally.
The main contribution of this paper lies in generating nine heuristics based on building blocks originating from three existing approaches and experimentally verifying their efficiency. We used three benchmarks for testing the performance of the heuristics: SCP1, provided in our earlier publications [3–6]; the benchmark from [7, 8]; and the benchmark proposed by Manju et al.
The paper is organized as follows. Related work is briefly discussed in Section 2. Section 3 formally defines the Maximum Lifetime Coverage Problem (MLCP). The local search approach is introduced in Section 4. Section 5 describes our experiments with LS algorithms for MLCP. Our conclusions are given in Section 6.
2. Related Work
The majority of MLCP solving methods are based on two-stage approaches. In the first stage, we solve the Target Coverage Problem by finding the maximum number of sensor sets such that every set can perform the coverage task individually. In the second stage, we solve the Set Coverage Problem by finding the optimal scheduling for the sets of covers obtained in the first stage. The constraint that every set must fully cover the POIs, proposed in the first publications, was later relaxed by introducing a required minimum percentage of coverage.
The sets of covers are of minimum cardinality and can be disjoint or non-disjoint. In the disjoint case, every sensor can be included in at most one of the covers, whereas in the non-disjoint case, there is no such restriction. The Disjoint Set Cover (DSC) problem, that is, the problem of finding the maximum number of disjoint covers, where every cover is a set of sensors that together monitor all the POIs, was introduced together with a proof of its NP-completeness. The complexity of the case with non-disjoint sets of covers was analyzed through the Maximum Set Cover (MSC) problem: determine p non-disjoint covers and their activity times so as to maximize the network lifetime T = t_1 + t_2 + ... + t_p, where t_i is the time interval during which the i-th cover is active. The MSC problem was also proved to be NP-complete.
Selected papers on MLCP optimization with Disjoint and Non-Disjoint Set Cover based approaches are presented in Table 1 (DSC) and Tables 2 and 3 (NDSC). For information about sensor activity scheduling strategies for other WSN lifetime optimization definitions, the reader is referred to surveys and monographs, for example, [1, 2, 35, 36].
3. Maximum Lifetime Coverage Problem (MLCP)
3.1. Model of the Real-World Sensor Network
The subject of our research is a network of immobile homogeneous sensors monitoring a number of points of interest (POIs). These sensors are randomly distributed over a monitored area. The sensors' batteries have limited capacity. For energy saving purposes, some sensors can be turned off from time to time. We propose a model of such network activity in which we assume that time is discrete. During every time slot, a sensor can be on or off. When a sensor is off, its energy consumption is negligible. When a sensor is active, it consumes one unit of energy during every time slot. The number of time slots during which a sensor can be active, that is, the initial battery load of a sensor, is denoted by b.
The model of a sensor considered here is simplified. In real life, the working time of a battery depends on the surrounding temperature and on how the battery is used: when we turn it on and off frequently, it discharges more than when we keep it on or off for extended periods. We ignore such effects in this research.
We assume that every POI can be covered by at least one sensor. Some POIs can be within the sensing range of multiple sensors. Since an active sensor covers every POI in its sensing range, not all sensors must be active all the time. Moreover, many applications do not need to monitor all POIs all the time; it is often sufficient to monitor 80 or 90% of them at any time. We call this value the required level of coverage and denote it by cov. We want to maintain this level of coverage all the time but, to save sensor batteries, keep the coverage level as close to cov as possible. It should not exceed cov by more than the tolerance factor (usually 2–5%).
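To make the coverage notion concrete, the level of coverage of a set of active sensors can be computed as the fraction of POIs lying within the sensing range of at least one active sensor. A minimal Python sketch (the function name and the toy data are our illustration, not taken from the paper):

```python
import math

def coverage_level(active_sensors, pois, sensing_range):
    """Fraction of POIs within sensing_range of at least one active sensor."""
    covered = 0
    for px, py in pois:
        if any(math.hypot(px - sx, py - sy) <= sensing_range
               for sx, sy in active_sensors):
            covered += 1
    return covered / len(pois)

# Hypothetical toy data: three POIs, two active sensors, range 1.0.
pois = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
sensors = [(0.5, 0.0), (1.2, 0.1)]
print(coverage_level(sensors, pois, 1.0))  # two of three POIs covered
```

With cov = 0.8, this active set would be insufficient, since only two thirds of the POIs are covered.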
In the real world, some sensors not needed for monitoring may be necessary to assure communication within the network. Our simplified model assumes steady sensor connectivity over the network lifetime and negligible energy spent on communication.
3.2. Formal Description
Let us formulate the network makespan maximization as a scheduling problem. We consider m parallel machines M_1, ..., M_m, which represent the sensors, and n POIs. Each machine M_i has an assigned task A_i consisting of a subset of POIs, namely the POIs located in the monitoring range of the i-th sensor. A j-th job J_j, where j = 1, ..., q, consists of some tasks scheduled for simultaneous execution in a single time slot. The tasks included in a job are not chosen randomly: they identify the set of machines active when this job is executed, that is, a set of sensors selected so as to cover the requested number of POIs. Each slot has its duration time τ_j; hence, the schedule makespan is a sum of the jobs' duration times: T = τ_1 + τ_2 + ... + τ_q. In further considerations, for simplicity, we assume that τ_j equals one time unit, and thus the schedule makespan equals the total number q of jobs in the schedule.
In our experiments, a network activity schedule is represented as a 0–1 matrix with rows corresponding to machines and columns corresponding to jobs. The element h_{ij} in row i and column j equals 1 (resp., 0) when machine M_i is on (resp., off) during job J_j.
The required level of coverage constraint says that the cardinality of the union of POIs in the tasks assigned to the machines active in a job should be equal to or exceed cov · n; that is, |A_{i1} ∪ A_{i2} ∪ ... ∪ A_{ik}| ≥ cov · n, where i1, i2, ..., ik are the indices of the machines active in job J_j.
Moreover, for every machine, the total processing time should not exceed the battery limit b; with unit slot durations, this means that machine M_i may be active in at most b jobs: h_{i1} + h_{i2} + ... + h_{iq} ≤ b for i = 1, ..., m.
The objective is the maximization of the schedule makespan T subject to the required level of coverage cov and without exceeding the maximum processing time b of any machine.
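Both constraints can be checked directly on a schedule. A hedged sketch with our own data layout (not the authors' code): a schedule is a list of slots, each slot a set of active sensor indices; tasks[i] is the set of POI indices in range of sensor i; b is the battery limit in slots.

```python
def is_feasible(schedule, tasks, n_pois, cov, b):
    """Check the coverage and battery constraints of a schedule.

    schedule: list of slots; each slot is a set of active sensor indices.
    tasks: tasks[i] is the set of POI indices in range of sensor i.
    cov: required level of coverage, as a fraction of n_pois.
    b: battery capacity, i.e. the maximum number of active slots per sensor.
    """
    # Coverage constraint: every slot must cover at least cov * n_pois POIs.
    for slot in schedule:
        covered = set().union(*(tasks[i] for i in slot)) if slot else set()
        if len(covered) < cov * n_pois:
            return False
    # Battery constraint: no sensor may be active in more than b slots.
    activity = {}
    for slot in schedule:
        for i in slot:
            activity[i] = activity.get(i, 0) + 1
    return all(count <= b for count in activity.values())

# Toy example: 4 POIs, 3 sensors, cov = 0.75, batteries of 2 slots.
tasks = [{0, 1}, {1, 2}, {2, 3}]
sched = [{0, 1, 2}, {0, 2}]
print(is_feasible(sched, tasks, 4, 0.75, 2))  # True
```

Both slots in the toy schedule cover all four POIs, and no sensor is active more than twice, so the schedule satisfies both constraints.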
4. General Scheme of the Local Search
In our earlier papers [4–6], we proposed three heuristic algorithms for solving MLCP. They all follow the general approach of local search presented in Algorithm 1. Each of them uses different methods in the three steps of local search: initialization (Step #1: line 1), perturbation (Step #2: line 3), and refinement (Step #3: line 4).
The methods used at the initialization stage of the proposed algorithms have the following names: the random and fine-tuning approach (RFTA), the cellular automata inspired approach (CAIA), and the hypergraph model approach (HMA). Thus, we denote the LS algorithms proposed in the corresponding papers, respectively, by LSRFTA, LSCAIA, and LSHMA. This notation appeared for the first time in [3], not in the original papers [4–6]. In all initialization step methods, we start with an empty schedule and add new slots to it, one by one; only the methods used to obtain a new slot differ. Adding a new slot to a schedule decreases the battery levels of the sensors active in this slot.
When the initialization step terminates (Step #1 in Algorithm 1), some sensors are usually left with non-empty batteries. However, even turning on all these sensors does not guarantee a sufficient coverage level. Thus, we cannot regard this set as a configuration for yet another slot. These sensors contribute to the perturbation and refinement steps (Steps #2 and #3 in Algorithm 1). In the perturbation step, we either remove some slots or modify the sets of active sensors in some slots, adding sensors from the set mentioned above and removing others. When we remove an entire slot from the schedule or turn off one or more sensors in a slot, we recover energy. This way, the set of sensors with non-empty batteries grows, and we hope to obtain one or more new slots from this set. In the refinement step, we employ one of the methods used in the initialization step to create new slots and add them to the modified schedule. If the new schedule is longer than the initial one, it replaces the initial one. The perturbation and refinement steps are repeated until some termination condition is met.
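The scheme of Algorithm 1 can be sketched as a generic loop with pluggable steps, which is precisely what makes the nine variants possible. A toy Python illustration (all names are ours, and the stand-in problem below is deliberately trivial: a "schedule" is reduced to its length):

```python
def local_search(init, perturb, refine, makespan, iterations=500):
    """Generic local search scheme: init, perturb, and refine are
    interchangeable problem-specific steps, as in the nine LS variants."""
    best = init()                            # Step #1: initial feasible schedule
    for _ in range(iterations):
        candidate = refine(perturb(best))    # Steps #2 and #3
        if makespan(candidate) > makespan(best):
            best = candidate                 # keep the neighbor only on improvement
    return best

# Toy stand-in: each perturbation drops one slot, each refinement adds two,
# so every pass of the loop gains one slot.
print(local_search(lambda: 3, lambda n: n - 1, lambda n: n + 2, lambda n: n,
                   iterations=5))  # 8
```

Swapping the perturb and refine arguments while keeping the loop fixed mirrors how the [perturbation, refinement] combinations are formed.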
Removing some slots from the schedule in the perturbation step can be done in two ways. In LSRFTA, such slots are selected randomly. In LSCAIA, we first try to turn off every active sensor in every slot; the decision to turn a sensor off is made with a low probability threshold. Next, if removing an active sensor from a slot makes this slot infeasible, the whole slot is deleted. Our experiments show that more than one slot is removed even for a very low probability, such as 0.0005. Thus, this method of perturbation is stronger than the former one.
In LSHMA, we used a completely different perturbation approach. We add sensors from the set with non-empty batteries to randomly selected slots, but only when adding a sensor increases the coverage level; if this is not the case, we must choose another slot for this sensor. When we use up all the sensors, we attempt to remove other sensors from the modified slots to make their coverage level as close to cov as possible.
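The probabilistic turn-off perturbation described above can be sketched as follows (the names, data layout, and injectable random source are our assumptions, not the authors' code; a slot is a set of active sensor indices and tasks[i] is the POI set of sensor i):

```python
import random

def probabilistic_perturb(schedule, tasks, n_pois, cov, p=0.0005, rng=random):
    """Turn-off perturbation sketch: switch each active sensor off with
    probability p; delete any slot that becomes infeasible as a result.
    Returns the modified schedule and the list of freed (recharged) sensors."""
    new_schedule, freed = [], []
    for slot in schedule:
        kept = {i for i in slot if rng.random() >= p}
        covered = set().union(*(tasks[i] for i in kept)) if kept else set()
        if len(covered) >= cov * n_pois:
            freed.extend(slot - kept)   # energy recovered from turned-off sensors
            new_schedule.append(kept)
        else:
            freed.extend(slot)          # whole slot deleted, all its energy recovered
    return new_schedule, freed

class _Never:
    """Stub RNG that never turns a sensor off, for a deterministic demo."""
    def random(self):
        return 1.0

tasks = [{0, 1}, {1, 2}, {2, 3}]
sched, freed = probabilistic_perturb([{0, 1, 2}], tasks, 4, 0.75, rng=_Never())
print(sched, freed)  # the schedule is unchanged and no sensor is freed
```

The freed sensors are exactly the pool from which the subsequent refinement step tries to build new slots.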
Thus, for each of the three steps of local search, we have three different methods of proceeding. By selecting one of these methods at every step, we get 27 different variants. We name these algorithms by the origins of every step. For example, [HMA, RFTA, CAIA] denotes an algorithm where the initialization procedure from LSHMA generates the initial schedule, the perturbation step originates from LSRFTA, and the refinement step originates from LSCAIA. [RFTA, RFTA, RFTA] is identical to the LSRFTA algorithm because all three problem-specific steps come from this algorithm.
To assess the performance of the new algorithms, we conducted experiments with some of them. We decided to always use the same method to generate an initial schedule. This way, the set of 27 algorithms described in Section 4 has been reduced to 9 because our algorithms then differ only in the steps of the main loop in Algorithm 1. For the initialization step, a method producing schedules of moderate quality is appropriate: it gives the main loop more room to improve these schedules and thus highlights performance differences between the algorithms. Since HMA usually generates the longest schedules, which are often hard to improve, we chose between the remaining two methods, RFTA and CAIA. We decided to use CAIA, and this choice was somewhat arbitrary. Eventually, we experimentally compared nine versions of LS algorithms of the form [CAIA, X, Y], where X and Y each stand for one of HMA, CAIA, and RFTA. Consequently, for a given problem instance, all versions of the main loop start with the same initial schedule. The termination condition of the loop is a limit of 500 iterations.
From now on, we skip the initial CAIA in the notation introduced at the end of Section 4 and refer to the tested algorithms by giving only the origins of the perturbation step and the refinement step. The nine compared algorithms are [HMA, HMA], [HMA, RFTA], [HMA, CAIA], [RFTA, HMA], [RFTA, RFTA], [RFTA, CAIA], [CAIA, HMA], [CAIA, RFTA], and [CAIA, CAIA].
In the experimental part, we used the benchmark SCP1 proposed in our earlier publications, but we also ran tests with two external benchmarks: one from [7, 8] and the other proposed by Manju et al., selected due to compatible problem definitions and optimization criteria but differently defined test cases.
Experiments were conducted on an HP Z2 G4 SFF workstation with an Intel® Core™ i7-8700 CPU @ 3.20 GHz, 16 GB RAM, and Windows 10 Pro. The simulation application was implemented in C++ in MS Visual Studio.
5.1. Measurement Methodology
The best measure of the efficiency of our LS algorithms would be a comparison of the obtained problem solution with an optimal solution. In this case, we would compare the length of the schedule returned by an LS algorithm with the length of an optimal schedule for the given problem instance. Unfortunately, we do not know optimal solutions because it is impossible to compute them in a reasonable time due to the problem’s computational complexity. Therefore, we decided to compare our solutions to the best-obtained suboptimal solution of the problem instance. These best-obtained suboptimal solutions were produced by LS algorithms using HMA to get initial schedules. Since initial schedules produced by HMA are longer than those obtained by other approaches, we hope the resulting schedules are genuinely close to optimal after applying LS to improve them. The lengths of best-obtained suboptimal schedules were used as reference values to evaluate the percentage quality of the schedules produced by our nine LS algorithms.
Moreover, we measured the mean percentage improvement of the initial schedule’s length, obtained in the main loop of Algorithm 1. It was calculated by subtracting the length of the initial schedule from the length of the final schedule and dividing the difference by the length of the initial schedule. For every class of problems, the obtained values were averaged over the number of problem instances.
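The two measures, percentage quality (relative to the best-known suboptimal schedule) and percentage improvement (relative to the initial schedule), reduce to simple ratios. A sketch with hypothetical schedule lengths:

```python
def percentage_improvement(initial_len, final_len):
    """Percentage improvement of the initial schedule's length by the LS loop."""
    return 100.0 * (final_len - initial_len) / initial_len

def percentage_quality(length, best_known_len):
    """Percentage quality relative to the best-known suboptimal schedule."""
    return 100.0 * length / best_known_len

# Hypothetical values: an initial schedule of 40 slots refined to 50,
# evaluated against a best-known schedule of 50 slots.
print(percentage_improvement(40, 50))  # 25.0
print(percentage_quality(45, 50))      # 90.0
```

Per-class figures reported in the tables are these percentages averaged over all instances of the class.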
5.2. Performance Comparisons Using Benchmark SCP1
5.2.1. Benchmark SCP1
SCP1 consists of eight classes of problems. Each of them contains 2000 sensors with a sensing range of one abstract unit. The monitored area is a square whose side size varies from 13 to 28 abstract units (possible values: 13, 16, 19, 22, 25, and 28). POIs are placed in the nodes of a rectangular or a triangular grid. Since the distance between grid nodes grows together with the area side size, the number of nodes is similar for all classes of SCP1. However, only about 80% of grid nodes host a POI: a POI is located in a node only if a randomly generated number between 0 and 1 is less than 0.8.
Consequently, instances of the same test case can have different numbers of POIs. For the triangular grid, this number is between 199 and 240, while for the rectangular grid, it is between 166 and 221. Coordinates of sensor locations are obtained using either a random generator or a Halton generator. Eventually, each of the eight test classes combines a POI grid type (triangular or rectangular) with a sensor location generator (Halton or random). We generated 40 instances of every class.
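The POI placement rule can be reproduced in a few lines. A sketch of an SCP1-style generator under our own simplifications (rectangular grid only, uniform sensor generator, and a hypothetical unit grid step; the real SCP1 grid step varies with the area side size to keep the node count similar):

```python
import random

def generate_instance(side, grid_step=1.0, n_sensors=2000, p_poi=0.8, seed=0):
    """Sketch of an SCP1-style instance: a POI sits on a rectangular grid node
    only if a uniform draw is below p_poi; sensors are scattered uniformly."""
    rng = random.Random(seed)
    pois = [(x * grid_step, y * grid_step)
            for x in range(int(side / grid_step) + 1)
            for y in range(int(side / grid_step) + 1)
            if rng.random() < p_poi]
    sensors = [(rng.uniform(0, side), rng.uniform(0, side))
               for _ in range(n_sensors)]
    return pois, sensors

pois, sensors = generate_instance(13, grid_step=1.0, n_sensors=2000)
print(len(sensors))  # 2000
```

Because each node hosts a POI with probability 0.8, repeated calls with different seeds yield different POI counts, exactly the instance-to-instance variation described above.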
Figure 1 depicts boxplots (minimum, lower quartile, median, upper quartile, and maximum) of Maximum–Minimum Distance (MMD) values for the sensors and POIs in the instance sets. Precisely, the boxplots show the MMD evenness measure, calculated as follows: MMD = max_{v ∈ V} min_{s ∈ S} d(s, v), where d(s, v) is the Euclidean distance between s and v, S is the set of sensors' locations, and V is the set of vertices of the Voronoi polygons built for the set of POI locations. The values in the diagram grow as the side size of the monitored area grows, which is reasonable. Moreover, for areas of the same side size, the values for the cases where POIs are located on the nodes of the triangular grid are smaller than those for the cases applying a rectangular grid for the POI distribution.
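As we read the definition, the MMD takes, for every Voronoi vertex, the distance to its nearest sensor and returns the maximum of these minima. A sketch with toy vertices supplied by hand (in practice, e.g., scipy.spatial.Voronoi could compute the vertex set from the POI locations):

```python
import math

def mmd(sensor_locations, voronoi_vertices):
    """Maximum-Minimum Distance evenness measure: for each Voronoi vertex,
    take the distance to its nearest sensor; return the largest such value."""
    return max(min(math.dist(s, v) for s in sensor_locations)
               for v in voronoi_vertices)

# Toy data: two sensors and three hypothetical polygon vertices.
print(mmd([(0, 0), (4, 0)], [(1, 0), (2, 2), (5, 1)]))
```

A lower MMD indicates that no vertex of the POI-induced Voronoi diagram lies far from every sensor, i.e., a more even deployment.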
Figure 2 shows the mean numbers of sensors covering 0, 1, 2, 3, 4, 5, and more than 5 POIs for the eight test cases of SCP1. One can see that the number of sensors covering a larger number of POIs decreases as the side size of the monitored area grows. As the side size increases, the overlapping sections of neighboring sensors' monitoring areas shrink, and some even disappear, so the numbers of shared POIs decrease. In the last class of problems, almost 75% of the sensors cover only one POI.
In our experiments, we assumed the required level of coverage cov to be either 80 or 90%, while the tolerance factor was 5%. The same set of experiments was repeated for five different values of the battery capacity b: 10, 15, 20, 25, and 30.
5.2.2. Mean Normalized Lengths of Schedules and Percentage Quality of Schedules
The lengths of the schedules returned by the LS algorithms in question are the natural outcome of our experiments. However, the average length returned for all problem instances from a particular class may not measure the algorithm quality well: the optimal schedule lengths may differ between instances and values of b. To make the output values comparable, we normalized the schedule lengths. We assumed that the lifetime of the batteries is the same, but different numbers of sensor activity intervals, represented by b, can be available for scheduling. We assumed that, for b = 30, the battery lifetime is divided into 30 intervals and a slot takes one time unit; thus, the schedule makespan equals exactly the number of slots in a schedule. Consequently, for b = 10, one slot takes three time units; for b = 15, two units; for b = 20, 1.5 units; and for b = 25, 1.2 units. Eventually, the normalized lengths of schedules represent the schedule makespans, that is, the total number of slots multiplied by the number of time units per slot.
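The normalization rule reduces to a one-line function: the battery lifetime is fixed and divided into b activity intervals, so one slot lasts 30/b time units under the baseline of 30 intervals. A sketch with our own naming:

```python
def normalized_makespan(n_slots, b, base=30):
    """Normalized schedule length: with a fixed battery lifetime split into
    b activity intervals, one slot lasts base / b time units."""
    return n_slots * (base / b)

# For b = 30 a slot lasts one unit; for b = 10 it lasts three units.
print(normalized_makespan(20, 10))  # 60.0
print(normalized_makespan(20, 25))
```

This makes a 20-slot schedule at b = 10 directly comparable to a 60-slot schedule at b = 30.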
Another measure is to compute for every result its percentage quality with respect to the best-known suboptimal schedule and then average these percentages over all the problem instances in a class.
Table 4 presents the mean makespans of schedules for particular classes of test cases. The mean makespan is computed as the sum of the makespans obtained for all instances and all values of b divided by 200 (the number of instances in a class of problems multiplied by the number of different values of b). Table 5 shows, for each of the eight classes of SCP1, the mean percentage qualities of the best-found schedules returned by the LS algorithms (the top table). It also shows the mean percentage improvement of the lengths of schedules returned by the LS phase with respect to the lengths of schedules returned by the initialization phase (the bottom table). Tables 4 and 5 present results for cov = 80%. Tables 6 and 7 show the same types of results as Tables 4 and 5 but assuming cov = 90%.
One can see from these results that [HMA, HMA] is usually the best version of LS. Three other LS algorithms, [HMA, RFTA], [CAIA, HMA], and [HMA, CAIA], are also effective, while the remaining five LS versions give much worse results; we will call them weak approaches. However, for one of the tested values of cov, [RFTA, HMA] is better than the other weak approaches but not as good as the effective ones. Thus, all LS algorithms with the perturbation method from LSHMA produce a relatively good schedule, regardless of the method used to refine the schedule obtained after the perturbation.
However, our results also show that [HMA, HMA] is not always the best method. In Table 4, in three cases, the values for [HMA, RFTA] are slightly better than those for [HMA, HMA]. Moreover, in the first three lines of Tables 4 and 5, the values for [CAIA, HMA] are better than the corresponding values for [HMA, RFTA]. In Tables 6 and 7, the values for [HMA, CAIA] are always better than those for [CAIA, HMA] and, in three cases, better than the corresponding values for [HMA, HMA]. Moreover, for the last five classes of SCP1, the values for [HMA, RFTA] are better than those for [HMA, HMA]. From these observations, we can only say that one group of algorithms is better than the other group, but we cannot claim that one specific algorithm is always better than another one.
Comparing the results produced by [CAIA, HMA] and [RFTA, HMA], one could ask why the first approach belongs to the group of effective ones, whereas the second one can be weak. A possible explanation is that the perturbation method from LSCAIA is much stronger than the one from LSRFTA in terms of the number of slots deleted from the original schedule. Thus, in the case of [CAIA, HMA], the refinement step begins with higher energy levels in the batteries of the available sensors. This allows the efficient HMA method to improve a shorter input schedule much more than it could improve a longer input schedule with less energy left in the batteries of the available sensors.
Figures 3 and 4 present boxplots of the makespans of schedules returned by the four good LS algorithms for the SCP1 benchmark with cov of 80% and 90%, respectively. These graphs show that all statistical parameter values (minimum, lower quartile, median, upper quartile, maximum) for [HMA, HMA] are usually better than the corresponding values for the three other algorithms.
In Figure 3, only for class 7 are the values of the minimum, maximum, and quartiles for [HMA, RFTA] better than the ones for [HMA, HMA]. For class 8, the values of the minimum, the lower quartile, and the median for [HMA, RFTA] are better than the ones for [HMA, HMA], while the values of the upper quartile and the maximum are equal. For class 4, the median and the upper quartile for [HMA, RFTA] are better than the ones for [HMA, HMA], but the minimum, the lower quartile, and the maximum are equal. Thus, neither of these two algorithms can produce a schedule longer than the best schedule given by the other, but [HMA, RFTA] more often returns better results. For classes 1–3, the maximum for [CAIA, HMA] is better than the maximum for [HMA, HMA], but the other boxplot parameters for [CAIA, HMA] are worse than those for [HMA, HMA]. [CAIA, HMA] has a wider interquartile range and a broader distribution of results. This is due to the properties of the perturbation method from LSCAIA, which is more random than the method used by LSHMA. Hence, it can sometimes produce better usage of the sensors than the more systematic approach of LSHMA.
In Figure 4, for classes 4–8, the values of the minimum, maximum, and quartiles for [HMA, RFTA] are better than the ones for [HMA, HMA]. Moreover, for classes 4 and 8, the values of almost all parameters for [HMA, CAIA] are better than the ones for [HMA, HMA] (the only exception is the maximum in class 8). For class 8, [HMA, CAIA] has a smaller interquartile range than [HMA, HMA]. For class 7, the values of the corresponding parameters for [HMA, CAIA] and [HMA, HMA] are almost equal. Thus, the boxplot graphs confirm the conclusions from the mean values.
5.3. Performance Comparisons Using Two Other Benchmarks
To validate our findings from Section 5.2, we decided to conduct additional experiments using benchmarks provided by other authors. We selected the same two benchmarks we had used for experiments with our original algorithms LSHMA, LSCAIA, and LSRFTA.
We selected four classes of problems proposed in [7, 8]. In all cases, the monitored area is a square with a side size of 100 abstract units. In three of these classes, there are 100 POIs, while the number of sensors is 100, 200, and 300 (cases #1, #2, and #3). In the fourth class, we have 400 POIs and 100 sensors (case #4). The remaining parameters also have the same values as in [7, 8]; in particular, the sensing range is 20 abstract units. Table 8 presents the mean measured lengths of schedules. Table 9 shows the mean percentage qualities of the best-found schedules returned by our LS algorithms (the top table) and the mean percentage improvements of the lengths of schedules returned by the LS phase with respect to the lengths of schedules returned by the initialization phase (the bottom table) for each of the four classes of the benchmark in question, assuming cov = 90%.
This set of experimental results is similar to our earlier results with the benchmark SCP1. [HMA, HMA] always gives the longest schedules. The algorithms [HMA, RFTA], [CAIA, HMA], and [HMA, CAIA] are worse than [HMA, HMA] but significantly better than the remaining five approaches.
Since in the experiments with SCP1 we performed computations for two values of cov, 80% and 90%, we did the same for the test cases from [7, 8]. Tables 10 and 11 show a set of results similar to the ones in Tables 8 and 9 but obtained for cov = 80%.
For this set of benchmarks, lowering cov from 90% to 80% has not changed the relative performance of the three groups of our LS algorithms. However, the relative performance of the three effective algorithms has changed.
Figures 5 and 6 show boxplots of the lengths of schedules returned by the four good LS algorithms for the benchmark proposed in [7, 8] with cov of 90% and 80%, respectively. In Figure 5, one can see that the minimum for [HMA, HMA] is always greater than or equal to the maximum for the other three algorithms. Moreover, the results for [HMA, HMA] have a much smaller interquartile range than those for the other algorithms. The schedules' lengths are less dispersed, proving the higher stability of the [HMA, HMA] variant for cov = 90%. In Figure 6, the values of all parameters for [HMA, HMA] are better than those of the corresponding parameters for the other algorithms. However, for three out of four classes of the benchmark, the results for [CAIA, HMA] have the smallest interquartile range. The less dispersed schedule lengths prove the higher stability of the [CAIA, HMA] variant for cov = 80%. Again, the boxplot graphs confirm the conclusions from the mean values.
5.3.2. The Benchmark Proposed by Manju et al.
The authors of the minimal-heuristic approach proposed the following set of benchmarks for experiments with algorithms solving MLCP. The monitored area is a square with a side size of 150 abstract units. POIs and sensors are distributed randomly over this area. In the first five test cases (1.1–1.5), we have 100 POIs, while the sensors' numbers are 50, 100, 150, 200, and 250, respectively. In the next five test cases (2.1–2.5), there are 150 sensors, while the numbers of POIs are 20, 40, 60, 80, and 100. As in the original paper, the sensing range is 70 abstract units and cov = 100%. Requiring full coverage is reasonable because this set of benchmarks gives much more redundant coverage of POIs by sensors; that is, larger numbers of sensors cover individual POIs than in SCP1 and the benchmark from [7, 8] (see the next subsection for details). We used the same battery capacity b as in the previous experiments.
Table 12 gives the results of our LS strategies for the benchmarks from . The top part of Table 13 shows the mean percentage qualities of the best-found schedules returned by the LS algorithms. In the bottom part of Table 13, we present mean percentage improvements of the lengths of schedules produced by the LS phase with respect to the lengths of schedules returned by the initialization phase.
One can see from these results that [HMA, HMA] is still the winner. [CAIA, HMA] is the second best LS approach in 9 out of 10 cases. As the number of sensors grows (which means that the redundancy of coverage also increases), [CAIA, RFTA] becomes better than [HMA, RFTA].
Since in our experiments with the benchmark set SCP1 and with the benchmarks from [7, 8] we used coverage levels of 80% and 90%, we also performed tests with the data sets from [9] using the same values. For these test cases, the number of alternative sensor activity configurations satisfying the coverage level is relatively high due to the high redundancy in POI coverage. Lowering the required coverage level makes the number of these configurations even greater. The results for these two coverage levels are given in Tables 14 and 15 and in Tables 16 and 17.
In this set of experiments, [CAIA, HMA] is the best LS algorithm in all cases. For one of these coverage levels, [HMA, HMA] is the second best in test Case 2.1. In all other test cases, for coverage levels of both 80% and 90%, the second best is [CAIA, RFTA]. The third is [CAIA, CAIA]. Thus, for a coverage level of 80% or 90%, the perturbation method used in CAIA always gives better results than the other two methods. Interestingly, the remaining six LS algorithms (including [HMA, HMA]) give a relative improvement of less than 1%.
Figures 7, 8, and 9 show boxplots of the lengths of schedules returned by the four good LS algorithms for the benchmark proposed in [9] with coverage levels of 100%, 90%, and 80%, respectively. In Figure 7, [HMA, HMA] always gives the largest values of the minimum, maximum, and quartiles. [CAIA, HMA] and [HMA, CAIA] usually have smaller interquartile ranges than [HMA, HMA] and [HMA, RFTA]. The small interquartile range for [HMA, CAIA], combined with a mean value lower than those of the other algorithms, means that [HMA, CAIA] never gives good results for this benchmark when full coverage is required. In Figure 8, [CAIA, HMA] not only has the highest values of all parameters but also almost always has the smallest interquartile range (only for class 1.1 does [HMA, RFTA] have a smaller one). Decreasing the coverage level from 100% to 90% changed the winner: the more random perturbation method turned out to be better in this case. Finally, in Figure 9, [CAIA, HMA] again has the highest values of all parameters and often has the smallest interquartile range. However, in this case, the interquartile ranges of all four algorithms are close to each other. Once more, the boxplots confirm the conclusions drawn from the mean values.
5.4. What Affects Algorithms’ Effectiveness?
5.4.1. The Redundancy in Coverage of POIs by Sensors in the Benchmarks
To explain differences in the relative performance of our nine LS approaches on various benchmarks, we investigated the numbers of sensors able to control particular POIs in SCP1 and the benchmarks from [7–9]. Our findings are presented in Figures 10–12.
Figures 10 and 12 show that in the test cases from SCP1 and from [7, 8], every POI can be monitored by at most 50 sensors and, in some of these cases, even by at most 20 sensors. On the other hand, Figure 11 shows that in the test cases from [9], many POIs can be monitored by more than 100 sensors. This high monitoring redundancy makes finding a cover for the POIs much easier, even for a required coverage of 100%.
We conclude from the above considerations that the perturbation method used in LSCAIA works better than the other two methods when there are many possibilities of forming a required cover for a given set of POIs. When the number of such options decreases, due to a higher required coverage level or a smaller number of sensors, the perturbation method from LSHMA becomes better. The advantage of the perturbation method from LSCAIA over the one from LSRFTA when there is much redundancy in the coverage of POIs probably lies in the fact that the former removes more slots from the original schedule than the latter. When the number of similar legal covers becomes large, there are many possibilities of creating a valid schedule. Removing more slots and restoring more power to sensor batteries gives the refinement method more room to show its effectiveness and improve the original schedule. On the other hand, when the redundancy in the coverage of POIs decreases, the more systematic approach of HMA works better.
Let us also note that too much redundancy in the coverage of POIs by sensors is undesirable in real life due to increased deployment costs. Of course, the acceptable redundancy level depends mainly on the specific application.
5.4.2. Strengths and Weaknesses of the Perturbation Operators
The good performance of the perturbation method from LSHMA comes from the fact that this method does not remove a single slot from the original schedule. Instead, it only changes the sets of active sensors in selected slots. Consequently, the set of sensors with non-empty batteries changes as well. This opens up new possibilities for producing one additional slot in the refinement step, which is sufficient to obtain a longer schedule. On the other hand, the perturbation methods from LSCAIA and LSRFTA delete slots from the initial schedule. This means that during the refinement step, the minimum goal is to create at least as many new slots as were removed during the perturbation. When we succeed, we can go further and try to generate more slots to make the new schedule longer. It can happen, nevertheless, that we are unable to obtain as many additional slots as we removed; despite executing the refinement step, we end up with a shorter schedule. Slots, or sensors within slots, are selected for removal randomly, so success depends on luck. As mentioned above, when there is much redundancy in the coverage of POIs by sensors and we remove many slots from the original schedule, the chances of removing the right slots and eventually obtaining a longer schedule grow.
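The two perturbation styles discussed above can be contrasted in a short sketch. This is not the authors' implementation: slot representation, the `alt_cover` helper, and the unit energy cost per active slot are illustrative assumptions.

```python
import random

def swap_perturbation(schedule, alt_cover, rng, k=1):
    """Perturbation in the style that keeps every slot: the active-sensor
    sets of k randomly chosen slots are replaced by alternative legal
    covers. `alt_cover(slot, rng)` is an assumed problem-specific helper
    returning another sensor set that still satisfies the coverage
    requirement."""
    schedule = [set(s) for s in schedule]
    for i in rng.sample(range(len(schedule)), min(k, len(schedule))):
        schedule[i] = alt_cover(schedule[i], rng)
    return schedule

def removal_perturbation(schedule, batteries, rng, k=1):
    """Perturbation in the slot-removing style: delete k randomly chosen
    slots and restore the energy their sensors spent (one unit per active
    slot here, an illustrative assumption). Refinement must then rebuild
    at least k slots to avoid shortening the schedule."""
    schedule = [set(s) for s in schedule]
    for _ in range(min(k, len(schedule))):
        slot = schedule.pop(rng.randrange(len(schedule)))
        for s in slot:
            batteries[s] += 1  # energy returned to the sensor's battery
    return schedule, batteries
```

The first operator never shortens the schedule, so any new slot found by refinement is pure gain; the second trades a guaranteed length for energy returned to the batteries, which pays off only when many alternative covers exist.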
6. Conclusions
In this paper, we proposed 27 local search algorithms solving MLCP and studied the relative performance of 9 of them. The starting point for this research was the three LS algorithms presented in our earlier papers: LSHMA, LSCAIA, and LSRFTA.
The local search approach consists of two major steps: generating an initial solution to the problem and searching for its neighbor. The second step can be divided into two substeps executed iteratively: perturbation of the original solution and refinement of the perturbation's result. In this way, we have three problem-specific steps in our LS algorithms for MLCP. By swapping these three steps among the three basic algorithms, we can obtain 27 different versions of LS solving MLCP.
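The scheme just summarized can be sketched generically as follows. This is a minimal illustration, not the paper's implementation: the acceptance rule (keep a candidate at least as long as the incumbent) and the iteration budget are assumptions, and the three callables stand in for the problem-specific steps taken from the basic algorithms.

```python
import random

def local_search(init, perturb, refine, evaluate, iters=100, seed=0):
    """Generic LS scheme: build an initial solution, then iteratively
    perturb and refine it, keeping the candidate when it is at least as
    good as the incumbent. `init`, `perturb`, and `refine` are the three
    problem-specific steps (e.g., initialization from CAIA, perturbation
    and refinement from HMA)."""
    rng = random.Random(seed)
    best = refine(init(rng), rng)
    for _ in range(iters):
        cand = refine(perturb(best, rng), rng)
        if evaluate(cand) >= evaluate(best):  # longer schedule is better
            best = cand
    return best
```

With the initialization fixed and three choices for each of the two remaining steps, the nine studied variants correspond to the nine (perturbation, refinement) pairs.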
Our research studied the relative performance of just nine of these versions: the initial problem solution was always generated using the method from CAIA. For the experimental research, we used the benchmark SCP1 (proposed in our earlier papers) and the benchmarks proposed in [7–9].
We computed the mean lengths of schedules returned by the LS algorithms, mean percentage qualities of the best-found schedules returned by the LS algorithms, and mean percentage improvements of the lengths of schedules produced by the LS phase with respect to the lengths of schedules returned by the initialization phase. Moreover, we analyzed boxplots of lengths of schedules returned by the four best LS algorithms.
The results of our experiments show that for the SCP1 benchmark and the benchmarks from [7, 8], the best pair of perturbation and refinement methods is usually the one used in LSHMA, i.e., [HMA, HMA]. The approaches [HMA, RFTA], [CAIA, HMA], and [HMA, CAIA] are also effective, while the remaining combinations give much worse results. However, experiments with the benchmarks from [9] gave different results. When full coverage is required, [HMA, HMA] is still the best approach, but for coverage levels of 80% or 90%, the perturbation method used in LSCAIA gives better results. We conclude that the more redundant the coverage of POIs by sensors and the lower the required coverage level, the more effective the perturbation method from LSCAIA. When this redundancy is lower or the required coverage level is higher, the perturbation method used in LSHMA becomes better. One could ask about a threshold coverage level at which the perturbation method from LSCAIA becomes better than the method from LSHMA. The results show that this value is problem-dependent: it depends on the redundancy in the coverage of POIs by sensors. However, no rule defining this threshold can be derived from the presented experiments; it can be determined only by conducting experiments with a given class of problems.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
References
W. Dargie and C. Poellabauer, Fundamentals of Wireless Sensor Networks: Theory and Practice, Wiley, Hoboken, NJ, USA, 2010.
H. Yetgin, T. K. Cheung, M. El-Hajjar, and L. Hanzo, "A survey of network lifetime maximization techniques in wireless sensor networks," IEEE Communications Surveys and Tutorials, vol. 19, no. 2, pp. 828–854, 2017.
K. Trojanowski, A. Mikitiuk, F. Guinand, and M. Wypych, "Heuristic optimization of a sensor network lifetime under coverage constraint," in Proceedings of the Computational Collective Intelligence: 9th International Conference, Springer International Publishing, Nicosia, Cyprus, 2017.
K. Trojanowski, A. Mikitiuk, and M. Kowalczyk, "Sensor network coverage problem: a hypergraph model approach," in Proceedings of the Computational Collective Intelligence: 9th International Conference, Springer International Publishing, Nicosia, Cyprus, 2017.
K. Trojanowski, A. Mikitiuk, and K. J. M. Napiorkowski, "Application of local search with perturbation inspired by cellular automata for heuristic optimization of sensor network coverage problem," in Proceedings of the Parallel Processing and Applied Mathematics, Springer International Publishing, Berlin, Germany, 2018.
A. Tretyakova and F. Seredynski, "Simulated annealing application to maximum lifetime coverage problem in wireless sensor networks," Global Conference on Artificial Intelligence (GCAI), vol. 36, pp. 296–311, 2015.
Manju, D. Singh, S. Chand, and B. Kumar, "Target coverage heuristics in wireless sensor networks," Advanced Computing and Communication Technologies: Proceedings of the 10th ICACCT, vol. 562, pp. 265–273, 2016.
S. Slijepcevic and M. Potkonjak, "Power efficient organization of wireless sensor networks," in Proceedings of the IEEE International Conference on Communications, IEEE, Helsinki, Finland, 2001.
Z. Abrams, A. Goel, and S. Plotkin, "Set k-cover algorithms for energy efficient monitoring in wireless sensor networks," in Proceedings of the Third International Symposium on Information Processing in Sensor Networks, ACM Press, New York, NY, USA, 2004.
M. Cardei, M. T. Thai, Y. Li, and W. Wu, "Energy-efficient target coverage in wireless sensor networks," in Proceedings of INFOCOM 2005, 24th Annual Joint Conference of the IEEE Computer and Communications Societies, Miami, FL, USA, 2005.
C.-C. Lai, C.-K. Ting, and R.-S. Ko, "An effective genetic algorithm to improve wireless sensor network lifetime for large-scale surveillance applications," in Proceedings of the IEEE Congress on Evolutionary Computation, IEEE, Singapore, 2007.
N. Ahn and S. Park, "A new mathematical formulation and a heuristic for the maximum disjoint set covers problem to improve the lifetime of the wireless sensor network," Ad Hoc and Sensor Wireless Networks, vol. 13, no. 3-4, 2011.
Y. Lin, J. Zhang, H. S.-H. Chung, W. H. Ip, Y. Li, and Y. H. Shi, "An ant colony optimization approach for maximizing the lifetime of heterogeneous wireless sensor networks," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 3, pp. 408–420, 2012.
P. Berman, G. Calinescu, C. Shah, and A. Zelikovsky, "Power efficient monitoring management in sensor networks," in Proceedings of the IEEE Wireless Communications and Networking Conference (IEEE Cat. No. 04TH8733), IEEE, Atlanta, GA, USA, 2004.
K. Deschinkel, "A column generation based heuristic for maximum lifetime coverage in wireless sensor networks," in Proceedings of SENSORCOMM 2011: The Fifth International Conference on Sensor Technologies and Applications, Curran Associates, Inc., Red Hook, NY, USA, 2011.
A. Tretyakova and F. Seredynski, "Application of evolutionary algorithms to maximum lifetime coverage problem in wireless sensor networks," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops, IEEE, Cambridge, MA, USA, 2013.
Y. E. E. Ahmed, K. H. Adjallah, and S. F. Babikier, "Non disjoint set covers approach for wireless sensor networks lifetime optimization," in Proceedings of the International Symposium on Wireless Systems within the Conferences on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS-SWS), IEEE, Offenburg, Germany, 2016.
A. Tretyakova, F. Seredynski, and F. Guinand, "Heuristic and meta-heuristic approaches for energy-efficient coverage-preserving protocols in wireless sensor networks," in Proceedings of the 13th ACM Symposium on QoS and Security for Wireless and Mobile Networks (Q2SWinet), ACM Press, New York, NY, USA, 2017.
Y. E. E. Ahmed, K. H. Adjallah, R. Stock, I. Kacem, and S. F. Babiker, "NDSC based methods for maximizing the lifespan of randomly deployed wireless sensor networks for infrastructures monitoring," Computers & Industrial Engineering, vol. 115, pp. 17–25, 2018.
M. Cardei, "Coverage problems in sensor networks," Springer, Berlin, Germany, 2013.
B. Wang, Coverage Control in Sensor Networks, Computer Communications and Networks, Springer, Berlin, Germany, 2010.
Y. E. E. Ahmed, Modeling, Scheduling and Optimization of Wireless Sensor Networks Lifetime, Université de Lorraine, Nancy, France, 2016.
X. Shen, "Evenness evaluation in ad-hoc sensor networks," in Proceedings of the First International Conference on Networking and Distributed Computing, IEEE, Hangzhou, China, 2010.