Abstract

Blind spots (or bad sampling points) in indoor areas are positions where no signal exists (or the signal is too weak), and the presence of a receiver within a blind spot degrades the performance of the communication system. Eliminating blind spots from the indoor area and obtaining maximum coverage is therefore a fundamental requirement when designing wireless networks. In this regard, this paper combines ray-tracing (RT), a genetic algorithm (GA), depth-first search (DFS), and the branch-and-bound method into a new technique that guarantees the removal of blind spots and subsequently determines the optimal wireless coverage using the minimum number of transmitters. The proposed system outperforms the existing techniques in terms of algorithmic complexity and demonstrates that the computation time can be reduced by as much as 99% and 75%, respectively, compared with two existing algorithms. Moreover, in the experimental analysis, the coverage prediction reaches 99%, and thus the proposed coverage model effectively guarantees the removal of blind spots.

1. Introduction

Recent trends show that removing blind spots from a user-specified environment while installing an optimal communication system is one of the greatest challenges in designing wireless networks. A blind spot [1] is a position that the signal cannot reach, or where it is too weak to be considered significant, which affects the overall performance of the communication system. Few methods [2, 3] are available in the area of radio signal propagation prediction for optimizing indoor wireless coverage. In [2], a coverage prediction model is presented that uses the ray-tracing (RT) technique for field prediction, combined with a genetic algorithm (GA) to optimize the base station antenna location in an indoor environment. The model is based on the image method, so its computational time increases with the number of objects in the indoor environment. The GA is used to find the optimal antenna location that maximizes the signal strength calculated over the region of interest. The GA starts with a set of solutions, known as a population, and solutions from a population are used to generate a new population. The solutions that generate the new population are selected based on their fitness function. This procedure is repeated until a certain condition is satisfied; if the condition is not attained, genetic operators are applied to form a new population. The generation of a new population depends entirely on the linear combination of the variables of each point in the optimization space in a given generation. Implementing this type of coverage prediction model is therefore troublesome and computationally inefficient. Yun's algorithm [3] likewise combines GA and RT to determine the minimum number of transmitting antennas, as well as their appropriate locations, to provide optimized wireless coverage for an indoor environment. According to Yun et al. [3], all existing coverage models fail to guarantee optimum wireless coverage by completely eliminating blind spots from the indoor environment. However, it may be impractical to simulate complex environments using Yun's algorithm [3] because its computational cost can exceed the capacity of contemporary computers.

Therefore, the main research objectives are as follows: (i) to determine the best transmitter locations for optimum network deployment, (ii) to determine the number of transmitters required to ensure optimal coverage, and (iii) to remove the blind spots from the indoor area and obtain maximum wireless coverage using the minimum number of transmitters (i.e., to find a solution in which the number of transmitters is minimum and all receivers are covered by those transmitters, so that no uncovered receiver or blind spot remains indoors). To address these challenges, this study exploits RT [4–11] and GA [12–16] together with depth-first search (DFS) [17] (DFS is a last-in, first-out search over live nodes, where the list of live nodes forms a stack) and the branch-and-bound method as a new technique that guarantees the removal of blind spots and subsequently determines the optimal wireless coverage. RT is used here to follow the trajectory from the transmitter to the receiver; for a detailed explanation of the RT model, the reader may refer to [6]. In the GA, each chromosome is represented by a binary pattern that keeps the coverage information of a transmitter, and recombination of these patterns yields the optimal solution. DFS is used to search for the transmitters required to cover all the receivers (a generated node whose child nodes have not yet been explored is called a live node, and the node currently being explored is called the E-node [12]). While searching, the recombination principle of the GA is applied to the coverage patterns of the transmitters. The branch-and-bound method is a backtracking procedure [12] in which bounding functions are applied to avoid generating subtrees that contain no answer node (avoiding unnecessary searching and thus making the algorithm faster).
In contrast to [12], the proposed algorithm requires less memory because it stores only a stack of nodes representing a single path. If a solution is found without exploring unnecessary nodes, thanks to the bounding functions, both space and time complexities can be reduced further. The superiority of the proposed system is verified in the subsequent discussions.

2. Proposed Coverage Algorithm

The working principle of the proposed algorithm, based on DFS, is as follows.
(i) Explore the root to generate its children.
(ii) Visit a child node, then its children, and so on, until a leaf node is found.
(iii) Step back to the second child of the previous root, and so on.

For simplicity, the following notations are used in the subsequent discussions.
(i) If the number of sampling points is N, T_i is the i-th transmitter, where 1 <= i <= N.
(ii) The coverage pattern of a transmitter T_i is P_i.
(iii) The numbers of good and bad sampling points of a coverage pattern P_i are G_i and B_i, respectively.
(iv) Let e_i be an edge that connects the root and an immediate child of the root node. The edge e_i has a real-valued weight w(e_i) = sum_k b_k, where b_k is the k-th bit of coverage pattern P_i. Each node also has a weight W. The weight of the root node is 0, the weight of a child node at level 1 is w(e_1), and a child node at level 2 along the same path has weight w(e_1) + w(e_2). Therefore, the weight of each node can be expressed as W = sum_{j=1}^{d} w(e_j), where d and w(e_j) represent the depth in the tree and the weight of an edge, respectively.
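The edge and node weights defined above can be sketched in a few lines of Python (an illustrative sketch, not the paper's C# implementation; `edge_weight` and `node_weight` are hypothetical names, and patterns are assumed to be stored as bit strings):

```python
def edge_weight(pattern: str) -> int:
    """w(e_i): the number of good sampling points ('1' bits) in pattern P_i."""
    return pattern.count("1")

def node_weight(path_patterns: list) -> int:
    """W = sum of w(e_j) over the edges on the path from the root to the node."""
    return sum(edge_weight(p) for p in path_patterns)

# The root has weight 0; a node at level 2 reached via edges e1 and e2
# has weight w(e1) + w(e2).
```

The weight therefore grows with the total number of good sampling points accumulated along a root-to-node path, which is what the third bounding function later compares against N.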

In this study, the coverage information of a transmitter is described by a chromosome, which consists of a coverage pattern. The coverage pattern works as follows. Assume an indoor area in which N sampling points are covered by transmitters T_i, 1 <= i <= N. Each transmitter T_i has a coverage pattern P_i = (b_{i1} b_{i2} ... b_{iN}), where b_{ij} = "0" or "1" for 1 <= j <= N. The value b_{ij} = 1 indicates that the j-th sampling point is a good sampling point covered by the i-th transmitter, and b_{ij} = 0 specifies that the j-th sampling point is a blind spot. Two patterns P_i and P_j can generate a resultant pattern P_r through the union (bitwise OR) operation:

P_r = P_i ∪ P_j, where the k-th bit of P_r is b_{ik} OR b_{jk}.

In the best case, the resultant coverage pattern is P_r = (1 1 ... 1), where every bit b_k = 1, making the summation sum_k b_k equal to N, which indicates 100% coverage. Thus, if the set of unique coverage patterns is S = {P_1, P_2, ..., P_m}, the purpose is to find a subset S' ⊆ S for which the number of covered sampling points is N, that is, the union of the patterns in S' satisfies sum_k b_k = N.

For illustration, suppose there are 8 sampling points in an indoor area. If two transmitters T_1 and T_2 cover the sampling points 1, 3, 5, 8 and 2, 3, 4, respectively, their coverage patterns (P_1 = 10101001 and P_2 = 01110000) and the resultant pattern P_r = 11111001 are as shown in Table 1.

Here, the resultant pattern is created by a logical "OR" operation (the result is "1" if either bit or both bits are "1"; otherwise, the result is "0"). The resultant pattern contains six good sampling points (1, 2, 3, 4, 5, and 8) and two blind spots (6 and 7).
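The Table 1 example can be reproduced with a short Python sketch (illustrative only; the bit strings follow the pattern convention defined above, with bit j set to "1" when sampling point j is covered):

```python
def union(p1: str, p2: str) -> str:
    """Resultant pattern: '1' wherever either pattern covers the point."""
    return "".join("1" if a == "1" or b == "1" else "0"
                   for a, b in zip(p1, p2))

p1 = "10101001"  # transmitter covering sampling points 1, 3, 5, 8
p2 = "01110000"  # transmitter covering sampling points 2, 3, 4
pr = union(p1, p2)

good = [i + 1 for i, b in enumerate(pr) if b == "1"]   # covered points
blind = [i + 1 for i, b in enumerate(pr) if b == "0"]  # blind spots
# good  -> [1, 2, 3, 4, 5, 8]
# blind -> [6, 7]
```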

Figure 1 also illustrates good sampling points and blind spots, with 40 receivers deployed. In Figure 1(a), an area surrounded by the dotted pentagon has been chosen as the transmitter position. Here, 21 receivers (small filled circles) receive signals from the transmitter, and the other 19 receivers (small filled circles surrounded by triangles) do not receive any signal. The blue lines are the rays emanating from the transmitter, while the red lines represent the specular reflections and transmissions of waves between the objects (obstacles). Therefore, with a single transmitter in Figure 1(a), 21 good sampling points and 19 blind spots are found. Similarly, with two transmitters in Figure 1(b), 6 blind spots still exist. Finally, in Figure 1(c), where 3 transmitters are used, no blind spot remains, which indicates that the optimal indoor wireless coverage has been achieved and that the three transmitting positions (surrounded by dotted pentagons) are at their optimized locations.

In this study, the DFS uses branch-and-bound terminology while exploring the search tree. For an indoor environment having N sampling points, the proposed bounding functions are as follows.
(i) If the set of transmitters representing the path to the E-node is S_E, then a child node labeled with transmitter T_j is generated only when its coverage pattern P_j is not already covered by the resultant pattern P_r of S_E, that is, P_j ⊄ P_r.
(ii) The number of blind spots B_r in the resultant pattern generated from the set of transmitters along the path to the E-node must be less than or equal to the summation of the good sampling points of the subsequent coverage patterns that correspond to the subtree rooted at the E-node, that is, B_r <= sum_j G_j.
(iii) A new node is generated from the current E-node only if the weight W = sum_{j=1}^{d} w(e_j) of the current E-node at level d is less than or equal to the number of sampling points N.
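A simplified Python sketch of a DFS applying these three bounds is given below. This is an illustration under stated assumptions, not the paper's implementation: `find_cover` is a hypothetical name, patterns are bitmask integers, a best-so-far cutoff is added on top of the paper's bounds, and bounds (i)-(iii) are applied literally as stated above.

```python
def find_cover(patterns, n_points):
    """Minimal subset of coverage patterns (bitmask ints) covering all points."""
    full = (1 << n_points) - 1
    best = None

    def good(p):
        return bin(p).count("1")      # G_i: good sampling points of a pattern

    def dfs(start, chosen, covered, weight):
        nonlocal best
        if covered == full:           # answer node: every sampling point covered
            if best is None or len(chosen) < len(best):
                best = chosen[:]
            return
        if best is not None and len(chosen) >= len(best):
            return                    # cannot improve on the best cover found
        # Bound (ii): remaining blind spots must not exceed the good points
        # available in the subtree's remaining patterns.
        blind = n_points - good(covered)
        if blind > sum(good(p) for p in patterns[start:]):
            return
        for i in range(start, len(patterns)):
            p = patterns[i]
            if p & ~covered == 0:
                continue              # bound (i): pattern adds no new point
            if weight + good(p) > n_points:
                continue              # bound (iii): path weight W must stay <= N
            chosen.append(p)
            dfs(i + 1, chosen, covered | p, weight + good(p))
            chosen.pop()              # step back (backtracking)

    dfs(0, [], 0, 0)
    return best
```

Note that bound (iii) is applied exactly as stated, comparing the accumulated path weight against N; the extra best-so-far cutoff is a common branch-and-bound addition rather than one of the paper's bounds.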

Consider a floor plan of a building with six sampling points, as presented in [3]. It is assumed that the transmitters are placed slightly to the left of the sampling points, at an arbitrary distance of 36 cm [3]. The reason is that if the positions of a transmitter and a receiver were identical, the proposed algorithm would skip the corresponding sampling point while calculating the received power, because the transmitter and sampling point positions would overlap. The ray tracer is then executed for each of the six sampling positions to calculate the field distribution among the sampling points, and the generated coverage patterns (P_1 to P_6) of the six transmitters (T_1 to T_6) are given in Table 2. Each row is a coverage pattern, and each binary value in a column indicates whether the corresponding sampling point is good or a blind spot for that transmitter.

From Table 2, the cost functions of the coverage patterns (P_1 to P_6) are 3, 3, 3, 4, 3, and 5, respectively. Here, the cost function (effectiveness) of a coverage pattern is calculated from the blind spots; that is, the lower the number of blind spots, the higher the effectiveness. It is noticed that two of the patterns are the same (i.e., the sampling points covered by one are already covered by the other). Hence, the latter is ignored and marked as a duplicate pattern. The proposed algorithm thus continues with the remaining 5 patterns to select an optimal subset of them.
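The cost function and the duplicate-pattern filtering described above can be sketched as follows (illustrative names; patterns are bit strings as in Table 2, and a pattern is treated as a duplicate when every point it covers is already covered by another pattern):

```python
def cost(pattern):
    """Cost of a coverage pattern: its number of blind spots ('0' bits)."""
    return pattern.count("0")

def subsumed(p, q):
    """True if every sampling point covered by p is also covered by q."""
    return all(b2 == "1" for b1, b2 in zip(p, q) if b1 == "1")

def drop_duplicates(patterns):
    """Drop patterns already covered by another; keep the first of equals."""
    kept = []
    for i, p in enumerate(patterns):
        dominated = any(
            subsumed(p, q) and (p != q or j < i)
            for j, q in enumerate(patterns) if j != i
        )
        if not dominated:
            kept.append(p)
    return kept
```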

Figure 2 shows the DFS tree generated using those 5 transmitters to achieve the optimal coverage. Each edge between two nodes represents a transmitter. According to the proposed algorithm, the exploration of a node is suspended as soon as a new unexplored node is generated; exploration of the new node then begins immediately.

The algorithm starts from the root node 1 at level 0, which indicates that there is no transmitter in the area. The root is explored, and its first child, node 2, becomes the next E-node at level 1, where the coverage is based on a single transmitter. Node 2 generates its first child, node 3, at level 2, which would normally be expanded to generate nodes 4 and 5 at level 3. However, these cannot be generated because of the third bounding function: the weight of each node at level 3 would exceed the number of sampling points, that is, W > N. As node 3 is therefore a leaf node, the algorithm switches back to its parent, node 2. The second child of node 2, node 4, is generated, and the algorithm moves to level 2 again. The children of node 4 would correspond to further transmitters, but they cannot be generated because of the first, second, and third bounding functions, respectively. The set of transmitters along the path to the E-node 4 yields a resultant coverage pattern obtained by the union (bitwise OR) of the corresponding patterns, as given in (4).

The resultant pattern already covers the candidate pattern, which violates the first bounding function. The resultant pattern has 2 blind spots. For the set of transmitters that forms the subtree of the E-node, the second bounding function requires that the number of blind spots not exceed the sum of the good sampling points of those patterns, as expressed in (5).

As the condition in (5) does not hold, the algorithm does not generate that child node. Furthermore, if node 5 were added, the weight of node 5 at level 3 would exceed the number of sampling points, that is, W > N, which violates the third bounding function; hence, the generation of node 5 is prevented. If the algorithm continues, the tree in Figure 2 is constructed, where the node marked with a cross (X) has a duplicate coverage pattern. In Figure 2, the optimal coverage is formed by three transmitters, and the solution path consists of the nodes 1, 6, 8, and 9, respectively.

Figure 3 shows some sample simulations with different numbers of sampling points, generated by the proposed coverage algorithm. The small filled circles represent the sampling points Rx (receiving points); the solid circles within hollow circles represent the optimized positions of the transmitters Tx, which cover all the sampling points and eliminate all the blind spots; and the rectangles are objects (acting as obstacles).

3. Results and Discussion

In this section, both the time and space complexities of the proposed algorithm are derived and compared with the existing Algorithms 1 and 2, reported in [3] and [12], respectively. Both the time and space complexities of the existing Algorithm 1 [3] were derived in [12], where they were observed to be the same:

O(2^N).  (6)

Here, N represents the number of sampling points. As the time complexity depends on the number of nodes generated or expanded until the required solution is found, the time complexity of the existing Algorithm 2 can be expressed as follows by modifying (6):

O(2^(N−D) − U).  (7)

Here, D and U are the numbers of rejected coverage patterns and unexplored nodes, respectively.

In [12], it was already proven that the time complexity is O(2^(N−D) − U) under the bounding function, where a BFS (breadth-first search) algorithm is used. This paper is an enhanced version of that work; hence, the time complexity of the DFS algorithm is O(2^(N−D) − U) as well, because the time complexity of DFS is generally the same as that of BFS. However, the proposed method uses new bounding functions and additional criteria to limit the search-space tree. According to the third bounding function, the solution must be found within a path length d < N for the proposed algorithm. Therefore, the improved time complexity of the proposed method is obtained by replacing N with d:

O(2^(d−D) − U).  (8)

As only the single path from the root to a leaf node is stored in the memory stack, the space complexity of the proposed algorithm can be expressed as

O(N − U_s),  (9)

where U_s is the number of unexplored nodes in a single path. The proposed algorithm is compared with the existing Algorithm 1 [3] and the existing Algorithm 2 [12] in Table 3. Based on this table, it can be deduced that the time complexity of the proposed algorithm is much better than that of the existing Algorithms 1 and 2: the time complexity of both existing algorithms increases exponentially, while it remains almost constant for the proposed algorithm. This is further demonstrated in Figure 4. In this figure, the values of D and U are assigned to 1 and 2, respectively; that is, the number of rejected duplicate patterns is D = 1 and the number of unexplored nodes is U = 2. According to Figure 4, as the number of sampling points increases, the complexity difference rises, which indicates the better performance of the proposed algorithm. In addition, Figure 5 and Table 4 show that both existing Algorithms 1 and 2 experience an exponential increase in space complexity, while the proposed algorithm exhibits only a linear increase.
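The gap between the exponential and linear behavior can be illustrated numerically. This is a sketch under stated assumptions: the exhaustive search touches on the order of 2^N nodes, the pruned search discounts D rejected patterns and U unexplored nodes (D = 1 and U = 2, the sample values used for Figure 4), and the proposed DFS stores only one root-to-leaf path, which is linear in N. The function names and exact closed forms are illustrative.

```python
def nodes_alg1(n):
    return 2 ** n                 # exhaustive search: exponential in N

def nodes_alg2(n, d=1, u=2):
    return 2 ** (n - d) - u       # pruned search: still exponential

def path_storage(n, u_s=1):
    return n - u_s                # single stored path: linear in N

# The difference between the exponential curves grows rapidly with N,
# while the path storage stays linear.
for n in (5, 10, 15, 20):
    print(n, nodes_alg1(n), nodes_alg2(n), path_storage(n))
```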

Let the space complexities of the existing Algorithm 1 and the proposed algorithm be S_1 and S_p, respectively.

Now,

S_1 − S_p = O(2^N) − O(N − U_s) > 0 for all N >= 1.

Thus, it can be written that S_1 > S_p, which indicates that the proposed algorithm has lower space complexity than the existing Algorithm 1. Again, suppose that the space complexities of the existing Algorithm 2 and the proposed algorithm are S_2 and S_p, respectively.

Now,

S_2 − S_p = O(2^(N−D) − U) − O(N − U_s) > 0 for sufficiently large N.

Therefore, it can also be written that S_2 > S_p, which indicates that the proposed algorithm has lower space complexity than the existing Algorithm 2. The space complexity of the proposed algorithm is thus much less than that of both existing Algorithms 1 and 2, as best described in tabular form in Table 4, where the values of D, U, and U_s are again assigned to 1, 2, and 1, respectively; that is, the number of rejected duplicate patterns is D = 1, the number of unexplored nodes in the whole search space is U = 2, and the number of unexplored nodes in a single path of the DFS tree is U_s = 1. From Table 4 and Figure 5, it can be concluded that the space complexity of both existing Algorithms 1 and 2 increases exponentially while that of the proposed algorithm increases linearly, which indicates the outstanding performance of the proposed algorithm.

The comparison of computation time among the proposed algorithm and the existing Algorithms 1 and 2 is presented in Table 5, and the reduction in computation time of the proposed algorithm relative to the existing Algorithms 1 and 2 is shown graphically in Figure 6 for better understanding. Here, 10 different scenarios containing different numbers of sampling points are considered. The number of transmitters in Table 5 indicates how many transmitters are involved in covering all the sampling points and removing all the blind spots from the corresponding propagation area. Table 5 reveals that the execution time of the proposed algorithm is always much less than that of both existing Algorithms 1 and 2. It also establishes that the proposed algorithm achieves a 75% average reduction in execution time compared with the existing Algorithm 2, and above a 99% average reduction compared with the existing Algorithm 1.

In this study, specular reflections and transmissions of waves between the walls and furniture are considered, because the RT model used in this paper is a 2D ray tracer. Indoor measurements are performed at a carrier frequency of 2.4 GHz. The transmitter (Tx) used in this experiment is an R&S SMU200A Vector Signal Generator, and the receiver (Rx) is an R&S FSV Signal and Spectrum Analyzer. A semispherical antenna is used for both Tx and Rx to receive signals from both directions. The obstacles are composed of various materials: the walls of the room are made of brick, the doors of wood, and the windows of glass. The furniture inside the room is made of wood and plastic board, and some partitions are made of plastic board and glass. The relative permittivities and refractive indexes of the different materials in the testing room, such as brick, glass, wood, and plastic board, are chosen as standard values [18, 19]. It should be noted that sophisticated simulation software providing a Graphical User Interface (GUI) was developed for this work using Visual Studio 2008 (C# code). This software allows the properties and thicknesses of particular objects to be customized by a mouse click. In this study, all walls are assumed to have the same thickness and the same properties, except where specified.

Besides, detailed parameters, such as the thickness, dielectric constant, and conductivity of particular objects, are used for realistic field calculations. When the properties of an object are modified, for example when the permittivity of a wall is changed, the received signal strength also changes, because the received signal is directly related to the reflection loss, which in turn depends on the permittivity of the materials. Likewise, when a partition is replaced by another material, for instance a brick partition replaced by a glass sheet, the overall predicted signal also changes, which affects the number of receivers covered by a particular transmitter; the number of blind spots therefore changes as well. Figure 7 shows that the number of blind spots changes with the material used in the indoor environment.

In Figure 7(a), one transmitter and four receivers are placed at different locations. After rays are launched from the transmitter, two receivers remain uncovered, which means that two blind spots exist within the coverage area, because of the brick partition in the dotted squared area. On the other hand, all four receivers are covered by the transmitter when the partition walls (inside the dotted rectangle in Figure 7(b)) are replaced by glass sheets; therefore, no blind spot exists in the coverage area. In summary, when the properties of objects used in the simulation environment are changed, the coverage pattern of the transmitter also changes.

In addition, real measurements of the scenario in Figure 1 are carried out to account for the strength of the signal, because providing coverage without assessing the intensity of the signal is not sufficient to claim that there are no blind spots. A sampling point is classified as bad when the received power from every transmitter (Tx) is less than −50 dB [3]. Figure 8 shows the relationship between the received power and the coverage for different numbers of Tx(s), where the transmitted power is held constant and the number of Tx(s) is increased to obtain the maximum coverage. From Figure 8, it is observed that a single Tx in the environment covers a smaller number of Rx(s), so the total power is also lower. For two Tx(s), the number of covered Rx(s) increases, which also increases the received power. For three Tx(s), almost all Rx(s) are covered, and the received power is higher still.
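The −50 dB blind-spot criterion can be expressed as a small check (a sketch; only the threshold comes from the text, and the power values below are made up for illustration):

```python
THRESHOLD_DB = -50.0  # blind-spot criterion from [3]

def blind_spots(received_power):
    """received_power[j][i]: power at receiver j from transmitter i.
    A receiver is a blind spot when every transmitter's power is below
    the threshold."""
    return [j for j, powers in enumerate(received_power)
            if all(p < THRESHOLD_DB for p in powers)]

powers = [[-42.0, -61.0],   # Rx 0: covered by Tx 0
          [-55.0, -58.0],   # Rx 1: blind spot (all below threshold)
          [-70.0, -47.5]]   # Rx 2: covered by Tx 1
# blind_spots(powers) -> [1]
```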

As the figures indicate, the Rx power comes from all of the Tx(s) in use. With a single Tx, the Rx power comes from that transmitter alone. With two Tx(s), the Rx(s) receive power from both transmitters, and the average of the two received powers is shown. With three Tx(s), the Rx(s) receive power from all three, and again the average received power is shown in the figure. During this experiment, the Tx power was kept constant at −30 dBm, although the experiment can be repeated for any range of Tx power. From Figure 8, it is observed that the percentage of coverage increases with the received power, which guarantees coverage of the receiving points (i.e., there are no blind spots). The received power also increases as the number of Tx(s) increases. When the number of Tx(s) is 3, the percentage of coverage is almost 99% (almost all sampling points are covered).

4. Conclusion

An efficient and novel optimization algorithm for removing blind spots from an indoor area has been proposed in this study. The advantage of the algorithm is that its memory requirement is only linear, in contrast to other existing algorithms [3, 12], which require more space; the algorithm needs to store only a stack of nodes on the path from the root to the current node. As mentioned earlier, if the proposed algorithm finds the solution without exploring unnecessary nodes in a path, the space and time it takes are much less. Thus, it has lower average time and space complexities, and the greater the number of sampling points in the indoor environment, the greater the complexity difference between the proposed and the existing algorithms. The average execution time can be reduced by as much as 99% and 75%, respectively, owing to the effective bounding functions and the binary coverage pattern concept. Therefore, it can be concluded that the proposed algorithm excels in both algorithmic complexity and computation time.

Conversely, the wireless coverage model of [3] repeats the ray tracer for every generated chromosome. As a result, there is a high probability of running the RT program multiple times for the same transmitter position, which consumes a significant amount of time. The model in [3] also cannot reuse the information of a transmitter location and therefore runs the ray tracer repeatedly. The proposed algorithm never runs the ray tracer more than once for a single transmitting position. The proposed coverage model guarantees the complete removal of blind spots from the indoor area and continues recursively until it covers all the sampling (receiving) points; theoretically, the coverage algorithm always covers all the receiving points to offer 100% coverage. Moreover, the performance and accuracy of the proposed coverage model are verified by comparing simulation results with measured data. It is observed that the percentage of coverage increases with the received power, which guarantees coverage of the receiving points, and that the received power increases with the number of transmitters. In terms of measurement results, the obtained coverage is 99%, and almost all sampling points are successfully covered in the given area. Hence, the proposed algorithm can be a strong competitor to other optimization techniques, and the outcome of this study can readily be applied to real system engineering.