Abstract

Communication infrastructure planning is a critical design task that typically involves handling complex networking concepts aimed at optimizing performance and resources, thus demanding high analytical and problem-solving skills from engineers. To help close this gap, this paper describes an optimization algorithm, based on an evolutionary strategy, created as an aid for decision-making prior to the real deployment of wireless LANs. The developed algorithm automates the design process, traditionally carried out by hand by network technicians, in order to save time and cost by improving the WLAN arrangement. To this end, we implemented a multiobjective genetic algorithm (MOGA) with the purpose of meeting two simultaneous design objectives, namely, minimizing the number of APs while maximizing the signal coverage over the whole planning area. Such an approach provides efficient and scalable solutions close to the best network design, so we integrated the developed algorithm into an engineering tool aimed at modelling the behavior of WLANs in ICT infrastructures. Called WiFiSim, it allows the investigation of various complex issues concerning the design of IEEE 802.11-based WLANs, thereby facilitating the study, design, and optimal deployment of wireless LANs through complete modelling software. Finally, we comparatively evaluated the approach against a previously developed monoobjective genetic algorithm in three target applications considering small, medium, and large scenarios.

1. Introduction

Implementing Wi-Fi networks with the best use of resources while offering the best service to users requires careful planning. WLANs can range from relatively simple installations to very complex and intricate designs, so a well-documented plan must be outlined before a wireless infrastructure can be deployed [1]. In practice, network technicians must survey the space before deciding on the WLAN arrangement, a process that involves gauging the location of APs, clients, and obstacles. This is traditionally reduced to achieving maximum Wi-Fi coverage as the decision criterion, which translates into varying AP placements based on signal strength measurements. Nevertheless, it is not merely a matter of drawing coverage circles on a plan, and the difficulty increases when a realistic view of the whole problem is required [2]. This includes several factors such as the area morphology (i.e., verticality and horizontality of planes), the number of clients populating the WLAN (i.e., distribution, density), the type of IEEE 802.11 technology (i.e., modulation scheme and frame management), the effective isotropic radiated power (EIRP), and the physical distance and obstacle materials (i.e., water, glass, plastic, metal, wood, and concrete). The traditional deployment process is consequently not effective for practitioners because of its time and cost requirements, and design-related operability problems (e.g., interference and frame collisions due to home devices, or the hidden node problem) may drastically reduce WLAN performance and usability [3].

To help close this gap, this paper presents WiFiSim, an engineering modelling software being developed as part of an ongoing research project for the study and design of WLANs [4–6]. The purpose of this software is to model the behavior and performance of communication networks based on the IEEE 802.11 standard. Its interest lies in the realism of the WiFiSim simulations, which provide a high level of interactivity and visual information with easy-to-interpret results through a configurable and intuitive GUI. In this paper we propose a further development beyond the study and design of WLANs: the problem of optimally deploying APs to cover Wi-Fi clients within a complex space. This manuscript thus makes three significant contributions: it presents a new intelligent modelling approach for WLAN deployment based on a multiobjective genetic algorithm; it examines its applicability to small, medium, and large scenarios; and it integrates the algorithm with the previous developments to provide a complete decision-making software for ICT technicians. To achieve this goal, we propose an evolutionary genetic strategy based on the nondominated sorting genetic algorithm II (NSGA-II), in which the optimization approach is applied to a demanding WLAN infrastructure.

The paper is organized as follows. Section 2 introduces the tools and approaches for WLAN deployment existing in the literature. Section 3 describes the evolutionary genetic strategy followed to develop the optimization algorithm. Section 4 presents the experimentation conducted in three representative case studies. Finally, the conclusions and future works are provided.

2. State of the Art

Typically, the location of APs in Wi-Fi networks is manually estimated from their transmission power. From this parameter, APs are placed at regular intervals throughout the space with a predefined distance between adjacent APs. However, this approach idealizes the signal coverage, which in reality often encounters complex environments with a wide variety of obstacles and materials. This can lead to a deficient configuration of APs due to poor or excessive coverage and, when applied to large scenarios, to a significant cost overrun. To address these shortcomings, site surveys must be combined with software modelling tools that help improve and simplify the manual process [7]. However, these tools need a high degree of realism and simulation capability to be really helpful (e.g., modelling of physical spaces and of network behavior in Layer 1 and Layer 2 of the OSI model). For this reason, choosing modelling software to study and design optimal WLANs can become hard due to the large number of existing tools [8].

2.1. Modelling Software for WLAN Deployment

At present, there is a large number of applications for designing and/or planning Wi-Fi networks. Examples of research in the scientific literature using these applications include NetStumbler®, a site-survey tool that facilitates the detection of WLANs using the IEEE 802.11 protocol [9]; Wi-Fi Analytics Tool™, a software that provides advanced signal strength graphs and analyzes Wi-Fi channels to optimize the Wi-Fi network setup [10]; Wi-Spy DBx, an RF spectrum analyzer designed for troubleshooting Wi-Fi issues with nearby interfering devices in the 2.4 and 5 GHz bands [11]; Ekahau HeatMapper™, an auditing tool and Wi-Fi site-survey software for home use [12]; NetSpot, a site-survey and analysis tool that helps improve the Wi-Fi signal strength and boost the network speed [13]; Acrylic® WiFi Heatmaps, a site-survey and audit software aimed at generating Wi-Fi heatmaps and editable reports on the RF spectrum coverage [14]; Wolf WiFi Pro®, a Wi-Fi device management software and predeployment toolkit for wireless professionals able to detect failure scenarios [15]; Bat-Planner, a basic planning suite for IEEE 802.11-based wireless networks [16]; TamoGraph®, a site-survey tool for collecting, visualizing, analyzing, and reporting Wi-Fi data [17]; VisualRF Plan, a wireless management suite that helps to model the RF environment and the underlying wired topology in a visual way [18]; RF3D Wifiplanner2, a modelling tool for planning and upgrading WLANs based on the study of RF signals [19]; AirMagnet Survey®, an accurate and flexible solution for planning indoor and outdoor WLANs [20]; WiTuners™, a Wi-Fi tool for site survey, automated deployment, and auditing [21]; and Ekahau Site Survey™, a professional software toolkit for site survey, spectrum analysis, and Wi-Fi planning [22], to name a few.

In general, all the above tools exemplify design and inspection software for wardriving, site survey, data collection, and planning. Among the drawbacks, these applications (i) are mostly commercial software with a pay-per-use license agreement, (ii) are often natively available only for Windows platforms, (iii) do not always include structures or materials for obstacle modelling, (iv) only allow multifloor design through 2D maps, (v) do not habitually integrate algorithms for optimal WLAN deployment, and (vi) base their designs on a two-step process (i.e., definition of site requirements and planning based on RF propagation). Among them, AirMagnet Survey, WiTuners, and Ekahau Site Survey are remarkable exceptions. On the one hand, they automatically plan the AP positions and the quantity needed to ensure a minimum coverage. On the other hand, they validate their designs with data collected by auditing the real environment.

As a main disadvantage, these tools use proprietary (i.e., patent-protected) algorithms for WLAN deployment based on RF planning. This implies charging a licensing fee that can be fairly expensive (especially in comparison with open-source software), makes the owner too dependent on the developer (and possibly less adaptable to the constantly changing needs of users), and makes the mechanics of the algorithms opaque to inspection (i.e., there is no testable information) [23]. The importance of utilizing evolutionary approaches, instead of providing a single solution as in the case of RF-based algorithms, lies in procuring a set of optimal solutions that facilitates broader decision-making for users. Moreover, the above tools lack a modelling block to assist engineers in the study of wireless traffic in Layer 2. Both capabilities combined are critical when troubleshooting wireless communication issues to deploy efficient and useful WLANs. In this way, the design process is improved by a much more creative three-step cycle based on generating alternative solutions, analyzing them, and selecting the one considered most fitting.

To introduce the developed modelling software, the main features and properties of WiFiSim are compared with the above-mentioned tools in Table 1. In sum, the advantages of WiFiSim are (i) facilitating WLAN deployment for engineers by providing a set of optimal solutions close to the best network design and (ii) allowing the study of WLANs in both Layer 1 and Layer 2 so as to avoid several communication issues concerning the PHY and MAC layers (e.g., throughput, channel utilization, frame collisions, frame delay, queue length and delay, medium access delay, jitter, and the hidden node problem). As far as we know, these are the main differentiating capabilities with respect to other tools, and this modelling process allows practitioners to improve complex network designs. As a result, WiFiSim has been utilized by ICT professionals and by students and teachers at the University of Huelva to improve the learning and teaching in computer networking degrees over the past seven years (i.e., 10 professionals, 110 students, and 4 teachers since the 2011/12 academic year).

2.2. Approaches for WLAN Optimization

Early approaches based on computer models consisted of exhaustive searches of AP positions under strong restrictions, which prevented them from being useful in more general situations [24]. Traditional forms of infrastructure planning, as used for mobile cellular networks, produce acceptable results but are generally considered too costly for wireless networks [25]. In this field, optimization techniques range from hill climbing, random walk (RW), simulated annealing (SA), and Tabu search (TS) to genetic algorithms (GA). Hill climbing, a mathematical optimization technique within local search methods, has also been applied together with metaheuristic neighborhood search algorithms (e.g., SA or TS) to avoid being trapped in local optima. However, mathematical programming techniques are usually less preferred than metaheuristic approaches because of the difficulty of adapting mathematical expressions to generic case studies. Moreover, TS and GA proved to be techniques with better performance than other local search approaches such as RW or SA. However, TS suffers from scalability issues and evolutionary genetic strategies depend largely on parameter tuning for the application [26]. Both issues, scalability and parameter tuning, were effectively addressed through an agent-based optimization approach [27] and a combination of pruning and neighborhood search algorithms [28], respectively.

Initial investigations on GAs led to deploying a single AP through a simple implementation [29]. Subsequent works focused on modelling WLAN scenarios with an increasing number of APs [30]. GAs then evolved to include new factors such as the effect of different obstacles on the coverage performance [31]. In general, monoobjective strategies centered on optimizing a single target function are more popular because of their flexibility, robustness, and ease of implementation, but they may not be realistic enough when several design goals must be met (e.g., number of APs, overall SNR, or throughput). For this reason, most real-world optimization problems are multiobjective in nature, often having two or more target functions that must be satisfied simultaneously and that may conflict with each other. This yields a set of optimal solutions (i.e., Pareto solutions) instead of a single outcome. Monoobjective algorithms, on the other hand, may need to be run repeatedly to obtain multiple solutions. These drawbacks were addressed in this field thanks to multiobjective genetic algorithm (MOGA) approaches. Such is the case of an improved adaptive genetic algorithm (IAGA) for WLAN deployment based on keeping the AP quantity and signal route wastage as low as possible [32]. Analogously, a multiobjective strategy was proposed to obtain various optimal placement configurations for different numbers of APs based on the SNR region [33]. Although closer to the work being proposed, the main disadvantage of these algorithms is that they were not provided as modelling software for the complete study of the WLAN operation in Layer 1 and Layer 2 (i.e., they are neither interactive nor exhaustive).

In this paper, we specifically used NSGA-II, one of the most representative MOGA approaches thanks to its ability to find multiple Pareto solutions in a single execution [34]. As a main advantage, this approach easily deals with concave and discontinuous Pareto boundaries and, combined with a crowding operator, obtains a wider set of optimal solutions than other first- and second-generation GAs such as the niched Pareto genetic algorithm (NPGA) [35], the nondominated sorting genetic algorithm (NSGA) [36], the strength Pareto evolutionary algorithm (SPEA) [37], the Pareto archived evolution strategy (PAES) [38], the Pareto envelope-based selection algorithm (PESA) [39], the microgenetic algorithm (Micro-GA) [40], the improved niched Pareto genetic algorithm (NPGA2) [41], the improved envelope-based selection algorithm (PESA-II) [42], and the improved strength Pareto evolutionary algorithm (SPEA2) [43], among others.

3. Modelling System

The first implementation of WiFiSim (an acronym for Wireless Fidelity Simulator) consisted of an intelligent modelling software developed in Java™ with the Eclipse framework. WiFiSim allows modelling various parameters of the PHY and MAC layers of the OSI model with support for the IEEE 802.11a/b/g/n standards, including stand-alone configuration of APs and wireless clients (e.g., interbeacon frame, rate and sensitivity, Cartesian coordinates, transmission power, packet size, RTS threshold, and packet load distribution), support for a customized library of materials, and control of the medium access (i.e., the CSMA/CA mechanism, the RTS/CTS mechanism, and the back-off algorithm). The validation of models in WiFiSim is done similarly to AirMagnet Survey, WiTuners, and Ekahau Site Survey but using complementary tools such as WiFi Analyzer. This application allows us to manually conduct real WLAN tests, such as measuring the signal sensitivity, aiming antennas, or detecting APs [44]. Thus, once nodes and obstacles are added to a wireless scenario, we can simulate network models with a high degree of realism and study signal loss-related problems. Further details are described in [4, 5].

In order to facilitate the graphical deployment of WLANs, WiFiSim was first extended with a steady-state genetic algorithm (SSGA). This allowed users to dynamically work on 3D maps and specify whether the APs must be automatically arranged anywhere inside the buildings or fixed on the walls according to common ICT installation requirements. The main advantage of using SSGA is that multiple optimal placement solutions can be obtained for the same configuration of APs (i.e., from 1 AP to 8 APs). This provides more alternatives to the network designer than automatic RF-based algorithms. Moreover, SSGA is able to automatically plan APs in both empty spaces (i.e., when the nodes are not known a priori) and populated environments (i.e., when the nodes are fixed beforehand). Hence, SSGA seeks to optimize the coverage when designing empty planes or to maximize it at customer locations according to the user requirements. Briefly, SSGA consists of the following steps: start with a set of candidate solutions (i.e., positions for a specific number of APs); evaluate the solutions according to the maximum coverage objective; evolve these solutions through parent selection, crossover, mutation, and replacement over successive generations until the best solution stops improving; and provide those best solutions to the user. Additional details can be found in [6].
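To make this workflow concrete, the following is a minimal sketch of such a steady-state loop under simplified assumptions (a flat grid, a fixed Chebyshev coverage radius, and illustrative class and method names that are not taken from the WiFiSim code base):

```java
import java.util.*;

/** Minimal steady-state GA sketch with illustrative names (not WiFiSim code).
 *  Individuals are AP cell indices on a W x H grid; fitness is the fraction of
 *  cells within a fixed coverage radius of any AP. */
public class SsgaSketch {
    static final int W = 32, H = 32, RADIUS = 8, NUM_APS = 2;
    static final Random RNG = new Random(42);

    static double coverage(int[] aps) {
        int covered = 0;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                for (int ap : aps) {
                    if (Math.max(Math.abs(ap % W - x), Math.abs(ap / W - y)) <= RADIUS) { covered++; break; }
                }
        return covered / (double) (W * H);
    }

    public static void main(String[] args) {
        int popSize = 50;
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) {
            int[] ind = new int[NUM_APS];
            for (int j = 0; j < NUM_APS; j++) ind[j] = RNG.nextInt(W * H);
            pop.add(ind);
        }
        double best = pop.stream().mapToDouble(SsgaSketch::coverage).max().orElse(0);
        int stagnation = 0;
        while (stagnation < 500) {                            // stop when the best solution stops improving
            int[] p1 = pop.get(RNG.nextInt(popSize)), p2 = pop.get(RNG.nextInt(popSize));
            int[] child = new int[NUM_APS];
            for (int j = 0; j < NUM_APS; j++) child[j] = RNG.nextBoolean() ? p1[j] : p2[j]; // crossover
            if (RNG.nextDouble() < 0.03) child[RNG.nextInt(NUM_APS)] = RNG.nextInt(W * H);  // mutation
            pop.sort(Comparator.comparingDouble(SsgaSketch::coverage));
            pop.set(0, child);                                // steady-state replacement of the worst
            double childFitness = coverage(child);
            if (childFitness > best) { best = childFitness; stagnation = 0; } else stagnation++;
        }
        System.out.printf("Best coverage found: %.2f%n", best);
    }
}
```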

3.1. Multiobjective Genetic Algorithm

As a complementary strategy to SSGA, a MOGA module has been integrated into WiFiSim. The major improvement of using MOGA compared to SSGA is that multiple optimal placement configurations consisting of different numbers of APs can be simultaneously obtained from a single run. In this way, the functionalities of WiFiSim were extended: not only can up to 8 APs be selected with SSGA, but 32 simultaneous solutions per WLAN design can also be collected with NSGA-II. This gives users a greater range of optimization and experimentation, as shown in Figure 1.

The modelling technique was developed using jMetal, a Java framework for developing metaheuristic algorithms [45]. NSGA-II basically consists of a parent population $P_t$ and an offspring population $Q_t$, both of size $N$, combined to form a new population $R_t = P_t \cup Q_t$ of size $2N$. While $P_0$ is randomly created by an initialization method, $Q_t$ is generated from $P_t$ through tournament selection, crossover, and mutation. Then $R_t$ is classified into Pareto fronts $F_1, F_2, \ldots$ according to a nondominated sorting process, assigning each solution a fitness equal to its nondomination level (so that $F_1$ is the best front). Instead of randomly choosing Pareto fronts to form the next generation $P_{t+1}$ of size $N$, NSGA-II takes the best solution fronts and discards the worst ones. This is done efficiently by computing the crowding distance of their solutions, that is, the extent of the search space around a solution that is not occupied by another solution in the population:

$$d_i = \sum_{m=1}^{M} \frac{f_m(i+1) - f_m(i-1)}{f_m^{\max} - f_m^{\min}}, \tag{1}$$

where $f_m$ stands for each of the $M$ objective functions and $i$ denotes the solution taken from the Pareto front, with $i-1$ and $i+1$ standing for its neighboring solutions. As a result, NSGA-II ensures the best and most diverse results, since the solutions are kept distant from each other as the population converges towards the optimal Pareto front over successive generational cycles. The problem formulation and the genetic operators used are discussed in the following sections.
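As an illustration of (1), a minimal sketch of the crowding-distance computation is given below, assuming solutions are stored as plain vectors of objective values; jMetal provides its own crowding-distance implementation, and the names used here are only illustrative.

```java
import java.util.*;

/** Crowding distance of NSGA-II for a single nondominated front, following (1).
 *  Each solution is a double[] of objective values; names are illustrative. */
public class CrowdingDistanceSketch {
    static double[] crowdingDistance(List<double[]> front) {
        int n = front.size(), m = front.get(0).length;
        double[] dist = new double[n];
        Integer[] idx = new Integer[n];
        for (int obj = 0; obj < m; obj++) {
            final int o = obj;
            for (int i = 0; i < n; i++) idx[i] = i;
            Arrays.sort(idx, Comparator.comparingDouble(i -> front.get(i)[o]));
            double min = front.get(idx[0])[o], max = front.get(idx[n - 1])[o];
            dist[idx[0]] = dist[idx[n - 1]] = Double.POSITIVE_INFINITY; // boundary solutions always kept
            if (max == min) continue;
            for (int i = 1; i < n - 1; i++)                            // interior solutions per (1)
                dist[idx[i]] += (front.get(idx[i + 1])[o] - front.get(idx[i - 1])[o]) / (max - min);
        }
        return dist;
    }

    public static void main(String[] args) {
        // Two objectives to minimize: number of APs and number of uncovered cells.
        List<double[]> front = Arrays.asList(
            new double[]{1, 260}, new double[]{2, 90}, new double[]{3, 20}, new double[]{4, 0});
        System.out.println(Arrays.toString(crowdingDistance(front)));
    }
}
```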

3.2. Map Structure and Objectives

The floor maps were codified in NSGA-II by means of a cell division method. In the case of SSGA, maps were divided, depending on the floor shape and the obstacles available, into square cells, rectangular cells, and mini-cells with a resolution of up to  m2 (i.e., 32 × 32 pixels by default). The NSGA-II approach improves on this by treating the cells in a manner closer to reality, using not only standard cells but also triangular and polymorphic shapes. This results in a more efficient cell division method, since it completely covers the whole useful space (Figure 2).

There were two optimization objectives in this problem: minimizing the number of APs and maximizing the number of cells covered over the whole planning area. These objectives are in conflict, since reducing the number of APs inevitably decreases the average signal as well, and vice versa. The first decision criterion was to minimize the number of APs so as to reduce the infrastructure cost as much as possible. As the second decision criterion, NSGA-II computes the signal attenuation due to distance and obstacles at each cell for every AP configuration (i.e., from 1 to 32 APs). The free space path loss (FSPL) is calculated as a function of distance and frequency, and the amount of signal loss in decibels is given by

$$\mathrm{FSPL} = 20\log_{10}(d) + 20\log_{10}(f) + 32.44, \tag{2}$$

where $d$ is the distance from the transmitter (in km) and $f$ is the frequency (in MHz). The signal loss due to obstacles is then subtracted from (2), and the total power received at each cell is modelled as

$$P_r = P_t - \mathrm{FSPL} - L_{obs}, \tag{3}$$

where $P_r$ is the power received at each cell from every AP, $P_t$ is the power emitted by each AP, and $L_{obs}$ is the signal loss due to the presence of obstacles. Once the power signal is calculated at each map cell, two specific user constraints defined in the algorithm options are evaluated: the maximum bandwidth and the Wi-Fi protocol. Since the rate and modulation scheme used in the WLAN set the medium sensitivity (i.e., rate in Mbps versus power in dBm), this serves to filter the solutions calculated by the algorithm using (3) and to satisfy the design constraints. Therefore, the planning of the AP locations plays a key role in maximizing the WLAN transmission coverage and communication throughput.
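As a worked illustration of (2) and (3), the sketch below computes the power received at a cell, assuming an isotropic transmitter and a precomputed per-cell obstacle loss; the constant 32.44 applies when the distance is expressed in km and the frequency in MHz, and the names are not taken from WiFiSim.

```java
/** Received power at a map cell following (2) and (3). Names are illustrative. */
public class LinkBudgetSketch {
    /** Free space path loss in dB, with distance in km and frequency in MHz, as in (2). */
    static double fspl(double distanceKm, double frequencyMHz) {
        return 20 * Math.log10(distanceKm) + 20 * Math.log10(frequencyMHz) + 32.44;
    }

    /** Received power (dBm) = transmit power (dBm) - FSPL (dB) - obstacle losses (dB), as in (3). */
    static double receivedPower(double txPowerDbm, double distanceKm,
                                double frequencyMHz, double obstacleLossDb) {
        return txPowerDbm - fspl(distanceKm, frequencyMHz) - obstacleLossDb;
    }

    public static void main(String[] args) {
        // Example: 20 dBm AP at 2,437 MHz (channel 6), 15 m away, behind one concrete wall (18 dB loss).
        double pr = receivedPower(20.0, 0.015, 2437.0, 18.0);
        System.out.printf("Received power: %.1f dBm%n", pr); // about -61.7 dBm
    }
}
```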

3.3. Coding Scheme and Chromosome Selection

The number of APs and their locations were concatenated to form a chromosome (Figure 3). We modified jMetal so that NSGA-II uses a double chromosome structure consisting of a binary part and an integer part, which together codify the two decision variables. The first part comprises 5 bits whose equivalent decimal value indicates the number of APs to be evaluated in the integer part of that chromosome. The second part contains 32 values that identify the cell numbers where the APs will be located. The number of cells depends on the map size, the number of floors, and the obstacle shapes.
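A minimal sketch of how such a double chromosome could be represented and decoded is shown below; jMetal's actual solution types differ, and the mapping of the 5-bit value to the AP count is an assumption made only for illustration.

```java
import java.util.Arrays;

/** Illustrative double chromosome: 5 bits encoding the AP count plus 32
 *  integers holding candidate cell positions (not WiFiSim's actual classes). */
public class ChromosomeSketch {
    final boolean[] apCountBits = new boolean[5]; // binary part: encodes the number of APs
    final int[] apCells = new int[32];            // integer part: one cell index per potential AP

    /** Decimal value of the binary part; mapping value 0..31 to 1..32 APs is assumed here. */
    int activeAps() {
        int value = 0;
        for (int i = 0; i < apCountBits.length; i++)
            if (apCountBits[i]) value |= 1 << i;
        return value + 1;
    }

    /** Cell positions actually evaluated for this chromosome. */
    int[] activeCells() {
        return Arrays.copyOf(apCells, activeAps());
    }

    public static void main(String[] args) {
        ChromosomeSketch c = new ChromosomeSketch();
        c.apCountBits[1] = true;                  // binary value 2 -> 3 active APs (assumed mapping)
        c.apCells[0] = 17; c.apCells[1] = 204; c.apCells[2] = 311;
        System.out.println(c.activeAps() + " APs at cells " + Arrays.toString(c.activeCells()));
    }
}
```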

The chromosome selection is based on a fitness function and stochastic universal sampling, where each individual of the population is assigned a nondominated Pareto rank after a selection process, and the chromosomes with the lower rank values become part of the next generation [46]. For individuals with the same rank value, a crowding operator based on (1) is applied to discriminate between them. The higher the value of the crowding distance, the greater the diversity that a chromosome contributes, and therefore the more likely it is to be preserved.
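The sketch below illustrates the resulting ordering, assuming each individual already carries its Pareto rank and crowding distance; a lower rank wins and ties are broken in favor of the larger crowding distance. The stochastic universal sampling step itself is omitted, and the class and field names are illustrative.

```java
import java.util.Comparator;

/** Illustrative rank-and-crowding comparison used when selecting parents. */
public class RankCrowdingSketch {
    record Ranked(int rank, double crowdingDistance) {}

    // Lower Pareto rank first; within the same rank, larger crowding distance first.
    static final Comparator<Ranked> NSGA2_ORDER =
        Comparator.comparingInt(Ranked::rank)
                  .thenComparing(Comparator.comparingDouble(Ranked::crowdingDistance).reversed());

    public static void main(String[] args) {
        Ranked a = new Ranked(0, 0.8), b = new Ranked(0, 1.5), c = new Ranked(1, 9.0);
        // b precedes a (same rank, more diverse); both precede c (worse rank).
        System.out.println(NSGA2_ORDER.compare(b, a) < 0 && NSGA2_ORDER.compare(a, c) < 0);
    }
}
```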

3.4. Mutation and Crossover Operators

The mutation process is applied independently to the binary and integer parts of the chromosomes according to probability values that are initially chosen very low (e.g., 3%). Specifically, we used the integer-flip mutation method for the integer part, which takes random positions of a chromosome and changes their values. If a chromosome is generated with more than one AP in the same cell position, the algorithm considers them the same AP and filters out this solution in favor of others presenting the same number of APs but in different map cells. As a result, the algorithm discards those anomalous chromosomes before passing to the next generation. Regarding the binary part, we used a bit-flip method that mutates in a similar way to the integer-flip technique but within the range of the binary part.
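A minimal sketch of both mutation operators under the assumptions above (3% probability, illustrative names; jMetal ships its own mutation operators) is given below.

```java
import java.util.Random;

/** Illustrative integer-flip and bit-flip mutation for the double chromosome. */
public class MutationSketch {
    static final Random RNG = new Random();

    /** Integer part: with probability p, replace a randomly chosen AP cell by a random cell. */
    static void integerFlip(int[] apCells, int numCells, double p) {
        if (RNG.nextDouble() < p)
            apCells[RNG.nextInt(apCells.length)] = RNG.nextInt(numCells);
    }

    /** Binary part: with probability p, flip a randomly chosen bit of the AP-count field. */
    static void bitFlip(boolean[] apCountBits, double p) {
        if (RNG.nextDouble() < p) {
            int i = RNG.nextInt(apCountBits.length);
            apCountBits[i] = !apCountBits[i];
        }
    }

    public static void main(String[] args) {
        int[] cells = {17, 204, 311};
        boolean[] bits = {true, false, false, false, false};
        integerFlip(cells, 1024, 0.03);  // 3% mutation probability, as in the text
        bitFlip(bits, 0.03);
        System.out.println(java.util.Arrays.toString(cells) + " " + java.util.Arrays.toString(bits));
    }
}
```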

The crossover process is also applied to the integer and binary parts separately. In particular, a uniform crossover scheme (UX) was applied to the integer part, in which the values of two chromosomes are compared position by position and swapped with a fixed probability, typically 0.5. Regarding the binary part, we used a one-point crossover operator in which all the values from a randomly chosen point of one chromosome are interchanged with the corresponding values of the other chromosome. Both crossover operators were entirely implemented within jMetal for this project.
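A minimal sketch of the two crossover operators is shown below; the real operators were implemented inside jMetal, so the names and structure here are only illustrative.

```java
import java.util.Random;

/** Illustrative uniform crossover (integer part) and one-point crossover (binary part). */
public class CrossoverSketch {
    static final Random RNG = new Random();

    /** Uniform crossover: each position is swapped between parents with probability p (typically 0.5). */
    static void uniformCrossover(int[] parentA, int[] parentB, double p) {
        for (int i = 0; i < parentA.length; i++)
            if (RNG.nextDouble() < p) { int tmp = parentA[i]; parentA[i] = parentB[i]; parentB[i] = tmp; }
    }

    /** One-point crossover: exchange every bit from a random cut point onwards. */
    static void onePointCrossover(boolean[] parentA, boolean[] parentB) {
        int cut = RNG.nextInt(parentA.length);
        for (int i = cut; i < parentA.length; i++) {
            boolean tmp = parentA[i]; parentA[i] = parentB[i]; parentB[i] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30, 40}, b = {50, 60, 70, 80};
        uniformCrossover(a, b, 0.5);
        boolean[] ba = {true, true, true, true, true}, bb = new boolean[5];
        onePointCrossover(ba, bb);
        System.out.println(java.util.Arrays.toString(a) + " " + java.util.Arrays.toString(ba));
    }
}
```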

3.5. Modelling Improvements

Beyond being an evolutionary genetic algorithm, NSGA-II was conceived here to exploit its full capability within an intelligent modelling software. To this end, NSGA-II was integrated into WiFiSim so that the parameters of the multiobjective algorithm can be configured. First, because the generation, mutation, crossover, and selection of the population are processes performed cyclically by NSGA-II, the algorithm execution ends when a maximum number of evaluations is reached. This value, fixed to 5,000 by default as depicted in Figure 1, can be customized by users to experiment with different results. Once the algorithm finishes, it ranks the solutions, filters out the anomalous ones, and sorts them according to the number of APs that compose them. Hence, users can move in an orderly way from one solution to another, by means of a cursor, to study arrangements of up to 32 access points. Conversely, in SSGA, when a replacement occurs after an iteration, the algorithm reevaluates the population to determine whether it improves or not; if it remains stuck for 500 iterations, SSGA ends.
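The following sketch illustrates this post-processing step, discarding anomalous chromosomes and ordering the remaining solutions by AP count so that users can browse them; the names are illustrative and not taken from WiFiSim.

```java
import java.util.*;
import java.util.stream.Collectors;

/** Illustrative post-processing of the final population: discard anomalous
 *  chromosomes (two APs on the same cell) and sort solutions by AP count. */
public class SolutionBrowserSketch {
    static boolean isAnomalous(int[] apCells) {
        return Arrays.stream(apCells).distinct().count() < apCells.length;
    }

    static List<int[]> rankForBrowsing(List<int[]> finalFront) {
        return finalFront.stream()
                .filter(s -> !isAnomalous(s))
                .sorted(Comparator.comparingInt((int[] s) -> s.length))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<int[]> front = Arrays.asList(
                new int[]{4, 4},                      // anomalous: duplicated cell, filtered out
                new int[]{12, 80, 200},
                new int[]{7});
        rankForBrowsing(front).forEach(s -> System.out.println(Arrays.toString(s)));
    }
}
```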

Secondly, NSGA-II allows searching for optimal solutions that meet a minimum percentage of allowable coverage in the WLAN infrastructure according to a given Wi-Fi bandwidth and technology (e.g., 1, 6, 11, 54, 108, 130, 150, or 300 Mbps for IEEE 802.11n). In this way, NSGA-II models the relationship between the transmission rate (Mbps) and the power (−dBm) for each AP technology (i.e., its sensitivity to the environment). This is an additional improvement that gives users a greater degree of optimization and experimentation compared to other WLAN planning tools or MOGA approaches.
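The sketch below illustrates this filtering idea; the dBm thresholds are placeholders chosen only for illustration and do not correspond to WiFiSim's actual sensitivity library.

```java
import java.util.*;

/** Illustrative rate-vs-sensitivity filter: a cell counts as covered only if the
 *  received power meets the threshold of the requested rate. The thresholds
 *  below are placeholder values, not WiFiSim's vendor data. */
public class RateFilterSketch {
    static final NavigableMap<Integer, Double> MIN_POWER_DBM_FOR_RATE = new TreeMap<>(Map.of(
        1, -94.0, 6, -90.0, 54, -79.0, 150, -70.0, 300, -64.0));

    static boolean cellCovered(double receivedPowerDbm, int requiredRateMbps) {
        Double threshold = MIN_POWER_DBM_FOR_RATE.get(requiredRateMbps);
        return threshold != null && receivedPowerDbm >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(cellCovered(-61.7, 150)); // true: above the -70 dBm placeholder threshold
        System.out.println(cellCovered(-75.0, 150)); // false: too weak for the requested rate
    }
}
```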

Moreover, we modified NSGA-II to develop a multithreaded version that parallelizes the effort of processing the algorithm's objectives. In this sense, we observed that most of the processing time was spent evaluating the signal strength at each map location for each AP configuration. The complexity increases proportionally with the number of resulting cells, which in turn grows with the number of building floors, the map scale, and the obstacles included. To this end, WiFiSim was programmed to use up to 8 simultaneous threads (4 by default).
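A minimal sketch of this parallelization using a fixed thread pool is given below; the evaluation body is omitted and the names are illustrative, not WiFiSim's actual classes.

```java
import java.util.*;
import java.util.concurrent.*;

/** Illustrative parallel fitness evaluation: the per-cell signal computation of
 *  each candidate AP configuration is submitted to a fixed thread pool
 *  (4 threads by default, up to 8, mirroring WiFiSim's settings). */
public class ParallelEvaluationSketch {
    static double evaluateCoverage(int[] apCells) {
        // ... compute received power at every map cell for this configuration (omitted) ...
        return apCells.length; // placeholder fitness for the sketch
    }

    public static void main(String[] args) throws Exception {
        List<int[]> population = List.of(new int[]{3, 77}, new int[]{12}, new int[]{5, 9, 240});
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Double>> results = new ArrayList<>();
        for (int[] individual : population)
            results.add(pool.submit(() -> evaluateCoverage(individual)));
        for (Future<Double> r : results) System.out.println(r.get());
        pool.shutdown();
    }
}
```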

4. Experimentation

This section presents a comparative study of the performance and scalability of the SSGA and NSGA-II approaches. To this end, three target applications were considered: small, medium, and large scenarios. Experimenting with large environments raises serious concerns compared with small and medium scenarios: larger areas tend to contain more rooms and obstacles, forming a more complex morphology and thus resulting in a finer division of the map cells. This implies larger search spaces to compute, with a significant impact on uncertainty and execution time. In order to evaluate the tests, the parameter settings for both algorithms are summarized in Tables 2 and 3.

4.1. WLAN Design on Small Scenarios

This case study consisted of fixing two wireless clients in a small symmetric scenario where one and two APs were deployed anywhere on the map. The map was set with a scale of 1 : 65 meters per pixel, which is equivalent to 29.28 m2. This case study served to demonstrate how NSGA-II and SSGA managed their solutions to optimally deploy the WLANs. We observed that a solution with one AP did not cover the whole map (74% in Figure 4(a) and 82% in Figure 4(c)), whereas a solution considering two APs covered the complete scenario (Figures 4(b) and 4(d)). However, both algorithms successfully found similar solutions to optimally reach the two wireless clients at their locations.

4.2. WLAN Design on Medium Scenarios

For this test we used a real house map with two bedrooms, two bathrooms, a dining room, a living room, and two terraces on a floor of 237.91 m2 (Figure 5). The map was set with a scale of 1 : 32 meters per pixel. The scenario was modelled using a special contour-type material to form the outlines of the building (thick black lines with −95 dB), concrete walls for thin inner walls (green lines with −18 dB), construction beams for internal walls (thin black lines with −10 dB), a special "not evaluable" material for areas like closets (black lines with −80 dB), and a glass material for the doors accessing the terraces (blue lines with −2 dB).

The case study consisted of deploying one AP covering one client, two APs covering two clients, and two APs covering three clients. To this end, we placed the wireless clients in the living room (node 0), a bedroom (node 1), and a terrace (node 2). Although SSGA and NSGA-II achieved similar coverage percentages on the map (between 64% and 91% for the three examples), we observed that SSGA provided a distribution more centered on the nodes than NSGA-II when the number of nodes was higher than the number of APs. In contrast, NSGA-II tended to provide a more global and diverse (i.e., more balanced) coverage of the map.

4.3. WLAN Design on Large Scenarios

The following experiment was conducted in a typical office environment consisting of a 73 × 40 m2 building with rooms, corridors, and open spaces with a scale of 1 : 10 meters per pixel corresponding to 2920 m2 (Figure 6). Because of this structure, we created an exclusion zone around a landscaped courtyard at the center of the building using the “not evaluable” special obstacle. The goal was to prevent the algorithms from including APs in uninteresting areas and considering only working areas to look for the optimal solutions. In addition, the environment was modelled using a contour line for the building perimeter (black lines with −95 dB), brick walls to separate the office rooms (blue lines with −3 dB), and a “roof” obstacle with 7 dB of attenuation to separate the two building floors.

The case study consisted of observing how the algorithms deployed one AP for two nodes, two APs for two nodes, and two APs on two floors located at the building walls. From the results, we obtained similar wireless coverage in the three examples (67%, 89%, and 99%, respectively). Moreover, we observed that SSGA and NSGA-II found similar AP distributions with only slight differences. This suggests high accuracy and repeatability in the algorithm solutions, thus providing high stability regardless of the strategy chosen.

4.4. Time Analysis

In order to evaluate the complexity of the SSGA and NSGA-II strategies, we extended the above case studies with additional configurations and conducted a comprehensive analysis of the time cost. The methodology consisted of computing the average times and standard deviations of the algorithm solutions over a series of 10 records per setting. To this end, we captured the calls to the Java methods of SSGA and NSGA-II with VisualVM 1.3.9. The maps were modelled with 8, 36, and 120 building structures for the small, medium, and large scenarios, respectively, in a floor area of 800 × 600 pixels of resolution. The experiments were performed on an Intel® Core™ i7 (2.6 GHz, 16-GB RAM).

From the results, we found in general that SSGA execution times increased with the map resolution, the number of floor obstacles, and the number of target nodes or APs to search for (Tables 4, 6, and 8). In addition, we found no evidence that the position of the APs within the map (i.e., at the walls or anywhere) influenced the execution times beyond the number of solutions computed. These observations are also valid for NSGA-II, except that its execution times in medium and large environments improved further as more nodes were deployed on the map (Tables 5, 7, and 9). This suggests that the uncertainty decreases as more clients are covered on the map. In other words, covering a single node requires computing more possible AP solutions (i.e., more processing time) than covering several clients at the same time (i.e., more fitted solutions).

In relation to the size of the scenarios, we encountered significant differences in the execution times. In this sense, NSGA-II performed better than SSGA, the difference being most remarkable for the medium scenario. Moreover, it should also be mentioned that while SSGA requires one run per optimal solution sought (for one AP, two APs, etc.), NSGA-II computes a set of 32 optimal solutions in a single execution. This makes the NSGA-II approach significantly more efficient than the SSGA strategy in global terms.

With respect to the map resolutions, we attained times ranging from 0.54 ± 0.06 s to 2.25 ± 0.14 s for the small scenarios, from 27.76 ± 2.87 s to 40.06 ± 5.95 s for the medium scenarios, and from 11.98 ± 1.59 minutes to 17.15 ± 1.78 minutes for the large scenarios. This means that the algorithms are very sensitive to the map resolution, with processing times that are suitable for small and medium scenarios but not so scalable for large environments due to the larger search spaces to compute. Nonetheless, these times are in accordance with the times typically achieved by implementations based on genetic algorithms, as described in [47–49]. As a solution, we enlarged the workspace of WiFiSim to reduce the resolution when simulating large scenarios. With this goal, the map resolution was changed from 1 : 10 meters per pixel in a workspace of 800 × 600 pixels to 1 : 20 meters per pixel in a workspace of 1600 × 1200 pixels. Considering the large scenario and the worst execution times from Tables 8 and 9, we obtained a reduction from 39.08 min to 15.65 min for SSGA and from 16.37 min to 6.93 min for NSGA-II. This reduces the processing cost to more manageable times, with the new values amounting to 40.05% and 42.34% of the original ones, respectively. As a result, this test suggests that larger workspaces should be utilized to simulate larger environments, which is also more appropriate for drawing small details in large scenarios thanks to the larger view.

5. Conclusions and Future Works

WLAN design and planning is a complex task that demands high analytical and troubleshooting skills from engineers in order to offer the best network performance and usage. While hand-operated traditional methodologies are not effective for practitioners due to cost and time, computer-aided systems must provide a high degree of realism and modelling capability to be useful. To contribute to this field, this paper presented an engineering tool, called WiFiSim, developed to facilitate the automatic WLAN planning of complex environments and to assist in decision-making prior to real deployment. This tool, which affords complete behavior modelling of Wi-Fi networks in Layers 1 and 2 of the OSI model, has been improved with an optimization algorithm based on an evolutionary genetic strategy. The goal was to procure a set of near-optimal solutions, very close to the best, for the AP location problem considering the largest covered area and the maximum signal strength as decision criteria.

With this purpose, a previous approach based on a monoobjective genetic algorithm (i.e., SSGA) was extended with a multiobjective genetic algorithm (i.e., NSGA-II). Thus, in addition to obtaining several AP locations for up to 8 different selectable configurations with SSGA, the main advantage of using NSGA-II was the capability to attain 32 solutions at once covering several AP configurations simultaneously. This approach was also designed to satisfy the user design constraints concerning the transmission rate and the Wi-Fi technology used in the WLAN.

To assess both approaches and highlight the advantages of the second algorithm, several scenarios with small, medium, and large areas were used to simulate typical Wi-Fi environments embodying an office, a house, and a campus. The aim of the various tests was to evaluate the consistency of the algorithms, and how they computed their optimal solutions, across different scenarios and map resolutions in relation to the structural complexity and the search spaces. From the results, we found that the execution times increased with the map resolution, the number of map obstacles, and the number of targets (i.e., nodes or APs). Moreover, we found no significant difference in the execution times due to the position of the APs within the map (i.e., at the walls or anywhere). Concerning the map size, the algorithms achieved very affordable processing times for small and medium scenarios, but they were not so scalable for large environments due to the larger search spaces. Although the times were in accordance with the typical requirements of GA-based implementations, the execution cost can be reduced to more manageable values (down to 40.05% and 42.34% of the original times) by using larger workspaces to model larger environments (e.g., going from a map resolution of 1 : 10 meters per pixel in a workspace of 800 × 600 pixels to 1 : 20 meters per pixel in a workspace of 1600 × 1200 pixels). As a result, we conclude that the NSGA-II approach achieved higher performance in general terms compared with the SSGA strategy.

We are currently working on connecting WiFiSim with an eLearning tool for the programming, study, and distribution of wireless protocols based on an institutional web repository. In addition, future developments are focused on improving several functional and technical capabilities of WiFiSim. In this sense, we are working to extend the features of the PHY and MAC layers to enhance the software realism. Regarding the PHY layer, this comprises new signal measurements given by the BER, EIRP, and RSSI, antenna modelling, support for IEEE 802.11ac/ad, a larger Wi-Fi vendor library, and the cost of the resulting infrastructures. As for the MAC layer, this consists of providing the complete IEEE 802.11 frame format, including new frames such as the RIFS, EIFS, and PS-Poll frames, new connection procedures such as association, reassociation, authentication, and deauthentication, the protection mode for mixed scenarios with DSSS and OFDM, and power management through the TIM and DTIM mechanisms.

Finally, a set of selected videos and the modelling simulation software are offered for free use and evaluation at http://www.uhu.es/tomas.mateo/wifisim.

Abbreviations

AP:Access point
BER:Bit error rate
CA:Collision avoidance
CSMA:Carrier sense multiple access
CTS:Clear to send
DSSS:Direct sequence spread spectrum
DTIM:Delivery traffic indication map
EIFS:Extended interframe space
EIRP:Effective isotropic radiated power
FSPL:Free space path loss
GA:Genetic algorithm
GUI:Graphical user interface
IEEE:Institute of Electrical & Electronics Engineers
IAGA:Improved adaptive genetic algorithm
ICT:Information and communication technologies
LAN:Local area network
MAC:Media access control
MICRO-GA:Microgenetic algorithm
MOGA:Multiobjective genetic algorithm
NPGA:Niched Pareto genetic algorithm
NPGA2:Improved niched Pareto genetic algorithm
NSGA-II:Nondominated sorting genetic algorithm II
OFDM:Orthogonal frequency division multiplexing
OSI:Open system interconnection
PAES:Pareto archived evolution strategy
PESA:Pareto envelope-based selection algorithm
PHY:Physical
PS-POLL:Power save poll
RF:Radio frequency
RIFS:Reduced interframe space
RSSI:Received signal strength indicator
RTS:Request to send
RW:Random walk
SA:Simulated annealing
SNR:Signal to noise ratio
SPEA:Strength Pareto evolutionary algorithm
SSGA:Steady-state genetic algorithm
TIM:Traffic indication map
TS:Tabu search
WI-FI:Wireless Fidelity
WIFISIM:Wireless Fidelity Simulator
WLAN:Wireless local area network.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to express their very great appreciation to F. A. Márquez Hernández, D. Ortiz Fuentes, B. Moriña Arias, and M. E. Pedraza Escudero for their valuable and constructive work that helped to improve this research.