Abstract

Although ray tracing based propagation prediction models are popular for indoor radio wave propagation characterization, most of them do not provide an integrated approach for achieving optimum coverage, which is a key part of wireless network design. In this paper, an accelerated three-dimensional ray tracing technique is presented, in which rough surface scattering is included to make the ray tracing more accurate. Here, the rough surface scattering is represented by microfacets, which makes it possible to compute the scattered field in all possible directions. New optimization techniques, namely dual quadrant skipping (DQS) and the closest object finder (COF), are implemented for fast characterization of wireless communications and for making the ray tracing technique more efficient. In conjunction with the ray tracing technique, a probability based coverage optimization algorithm is integrated to form a compact solution for indoor propagation prediction. The proposed technique decreases the ray tracing time by omitting unnecessary objects using the DQS technique and by decreasing the ray-object intersection time using the COF technique. The coverage optimization algorithm, based on probability theory, finds the minimum number of transmitters and their corresponding positions required to achieve optimal indoor wireless coverage. Both the space and time complexities of the proposed algorithm are lower than those of the existing algorithms. For verification of the proposed ray tracing technique and coverage algorithm, detailed simulation results for different scattering factors, antenna types, and operating frequencies are presented. Furthermore, the proposed technique is verified by experimental results.

1. Introduction

Nowadays, indoor wireless communication is becoming more and more popular in the communication field. Because of the increasing demand in this field, an effective propagation prediction technique and an optimized coverage algorithm are required in order to support the demand by using the minimum number of transmitters (Txs) while achieving the maximum indoor wireless coverage. Though there are a number of existing research works based on ray tracing for propagation prediction [1-7], most of them do not address coverage. Therefore, researchers are still in need of an efficient, integrated method that can serve both propagation prediction and coverage optimization.

The main problem for a ray tracing based propagation prediction model is the ray-object intersection test. This test consumes the most time and resources in a ray tracing method [8, 9]. The intersection test is performed every time a new ray is generated, to determine whether there is a ray-object intersection or not. Hence, if all objects participate in this test, the ray tracing time consumed will be extremely high. To accelerate the ray tracing technique, various methods, such as angular sectoring [10], the KD-tree, octree, and quadtree [4, 5, 11], and a preprocessing method [8], have been proposed. However, the existing models, such as the shooting and bouncing ray (SBR) [4], bidirectional path tracing (BDPT) [12], brick tracing (BT) [13], ray frustums (RF) [14], prior distance measure (PDM) [8], and space division (SD) [15] techniques, require higher prediction times due to the complex algorithms used and the limitations of these techniques; moreover, their prediction accuracy is not very high. In the SBR technique, a double ray counting error occurs for receivers (Rxs) situated in two successive ray cone areas. The BDPT technique shows incorrect results for single-floor multiple-room environments and takes a long time to create the ray paths. The BT technique is inaccurate for corner bricks because of the truncation of the slab, which results in erroneous analytic reflection and transmission coefficients. The RF technique requires a large computer memory in complex environments to store the huge number of triangular frustums, which results in slow performance. In the SD technique, all the IDs of the unique cells are stored in a single list and the full list has to be searched to find a specific cell during simulation, which consumes a lot of time and increases the execution time. The PDM method needs a preprocessing operation for the environment, which makes the overall process more complex. None of the above techniques is accompanied by a coverage optimization technique.

Considering all the drawbacks of the existing techniques, this paper introduces a new method that includes rough surface scattering. The proposed ray tracing method is based on the Adelson-Velskii and Landis (AVL) tree data structure, the dual quadrant skipping (DQS) technique, and the closest object finder (COF). The AVL tree has a low data searching time and is used for efficiently handling the different information related to the objects and the environment. Both the DQS and COF techniques are newly introduced and described here; they reduce the ray tracing time by eliminating unnecessary objects and accelerating the ray-object intersection test. Furthermore, the proposed microfacet based scattering method does not depend on a specific environment and is able to compute the scattering field in all possible directions, which makes the proposed ray tracing method more accurate.

Along with the ray tracing technique, a new coverage optimization algorithm is also introduced here, where probability is used to find the most suitable Tx to be selected in order to achieve the optimum solution for the indoor wireless coverage area. The probability of each Tx is directly affected by its sampling pattern. The proposed algorithm introduces two types of probability that need to be taken into consideration, namely, intraprobability and interprobability; these concepts are explained in more detail in the next section. To support the probability concept, a multilevel technique is applied in the proposed algorithm, where each sampling pattern is viewed level-by-level instead of Tx-by-Tx. Applying the multilevel technique provides faster computation, since fewer Txs take part in the probability calculation for a certain number of sampling points. For achieving better performance, a genetic algorithm (GA) and depth first search (DFS) are used along with the probability theory for finding the optimum wireless coverage. GA is a widely used algorithm [16-19] for optimizing different electromagnetic problems; here, it is used to optimize the number of Txs needed for covering the whole area as well as the positions of the necessary Txs. In DFS, a node generated from the E-node is called a live node, where the E-node refers to the node whose children are currently being generated. The E-node is selected from the various live nodes in the same level based on the probability: the Tx having the highest probability is selected as the E-node, and during each DFS process, every Tx can be selected as the E-node at least once. To minimize the required computation time (time complexity) of the proposed coverage algorithm, changes have been made to the existing bounding and termination concepts proposed by the existing algorithm [19]. Basically, the bounding function is used for updating the latest probability of each new subbranch, hence improving the accuracy in determining the number of Txs required at the corresponding positions. Besides, a termination criterion is used to avoid repeatedly selecting nodes that have already been selected as the E-node. The proposed algorithm generates fewer nodes in the search tree and further reduces the computation time by using this bounding function and these termination criteria. Analyses and comparisons have been made, and the results prove that the proposed algorithm is more efficient than the existing algorithms [19, 20] in terms of the space tree generated and also the time complexity.

2. Proposed Ray Tracing Technique

2.1. Technique for Achieving Balanced AVL Tree

For data storing and retrieving purposes, a dynamically height-balanced binary search tree, named the AVL tree, is used. An AVL tree has two basic properties: the heights of the two subtrees of every node differ by at most one, and each subtree is itself an AVL tree. An AVL tree maintains an O(log n) search time, while the insertion and deletion operations also take O(log n) time (where n is the number of objects). This timing is almost the same as that of another self-balancing tree, namely, the red-black tree. However, the difference between them is the limiting height. For a tree of n objects, the maximum height h of an AVL tree is strictly less than [21]
$$h < \log_{\varphi}\!\big(\sqrt{5}\,(n+2)\big) - 2 \approx 1.44\,\log_{2}(n+2),$$
where $\varphi \approx 1.618$ is the golden ratio.

At the same time, the maximum height of a red-black tree of n objects is [22]
$$h \le 2\,\log_{2}(n+1).$$
Therefore, AVL trees are more rigidly balanced than red-black trees, and for this reason the AVL tree data structure is chosen.

The data insertion technique in an AVL tree is identical to that of a binary search tree, where insertion is done by expanding a leaf (peripheral) node. For maintaining balance, the balance factor has to be stored in every node; it keeps the tree balanced efficiently after each insertion or deletion operation. The balance factor, BF, of a node can be represented as
$$BF = \text{height(left subtree)} - \text{height(right subtree)}.$$
This indicates whether the two subtrees have the same height or not. The BF of a height-balanced binary tree can only take the values -1, 0, or +1. An AVL node is called left heavy when BF = +1, equal heavy when BF = 0, and right heavy when BF = -1. After insertion of each new node, the balance factors have to be updated. If a balance factor becomes less than -1 or greater than +1, the tree becomes unbalanced and a rotation process is undertaken to restore a balanced AVL tree.

Here, Figure 1 represents a sample environment of 10 objects. The creation of the AVL tree from this sample environment is presented in Figures 2 and 3. Figure 2 shows a single rotation for rebalancing after insertion of the 6th object, and Figure 3 shows rebalancing with a double rotation after insertion of the 10th object of the sample environment of Figure 1. Further objects can be inserted by following the same technique.
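For illustration, the following minimal Python sketch (not the authors' implementation; the integer object IDs and the Node structure are assumed here purely for the example) inserts object keys into an AVL tree, updates the balance factors, and applies the single and double rotations described above, mirroring the process of Figures 2 and 3.

```python
class Node:
    """AVL node holding an object ID (the search key)."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1                    # height of the subtree rooted here

def height(node):
    return node.height if node else 0

def balance_factor(node):
    # BF = height(left subtree) - height(right subtree)
    return height(node.left) - height(node.right)

def update(node):
    node.height = 1 + max(height(node.left), height(node.right))

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(root, key):
    """Standard BST insertion followed by rebalancing (single/double rotation)."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    update(root)
    bf = balance_factor(root)
    if bf > 1 and key < root.left.key:     # left-left case: single right rotation
        return rotate_right(root)
    if bf < -1 and key > root.right.key:   # right-right case: single left rotation
        return rotate_left(root)
    if bf > 1 and key > root.left.key:     # left-right case: double rotation
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if bf < -1 and key < root.right.key:   # right-left case: double rotation
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root

# Insert the 10 objects of the sample environment (IDs assumed to be 1..10).
root = None
for obj_id in range(1, 11):
    root = insert(root, obj_id)
```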

2.2. Microfacet Based Scattering Model

In this section, the computational model for computing the scattering field of a rough surface is presented. The model is based on the well-known Kirchhoff approximation (KA) [18, 23]. Here, the rough surface is decomposed into small planes that are nearly tangent to the roughness; these are called microfacets. Figure 4 represents the profile of a rough surface, a random tangent plane on it, and the notations that will be used for the incident and scattered fields.

An incident plane wave irradiates the decomposed rough profile. Thus, the same amount of incident plane wave is received by each microfacet, and each facet reflects it towards its own specular direction. As with a smooth surface, this specular direction is defined by the incident angle and the orientation of the local normal of the microfacet.

Therefore, the scattering field can be computed as
$$E_{s}^{\parallel,\perp} = E_{i}^{\parallel,\perp}\, R^{\parallel,\perp}\, e^{-jk(r_{1}+r_{2})}\, e^{j\Delta\varphi},$$
where $E_{s}$ is the scattered field and $E_{i}$ is the incident field in both polarizations, $R^{\parallel,\perp}$ is the Fresnel reflection coefficient between the incident and scattered directions, and $e^{-jk(r_{1}+r_{2})}$ is the phase shift caused by the free space propagation distances after and before the microfacet. The term $e^{j\Delta\varphi}$ represents the phase shift due to the height $\zeta$ of the microfacet with respect to the global mean value, which is usually set at $z = 0$ [12]. It can be written as
$$\Delta\varphi = k\,\zeta\,(\cos\theta_{i} + \cos\theta_{s}).$$

Now, a coherent sum of all the contributions of the scattered fields collected in a small solid angle around a definite direction has to be formulated. In reality, the orientation of many microfacets can be the same but, depending on the global mean value, their heights are not necessarily the same. If $p$ microfacets among the $N$ facets are well oriented towards the considered direction, the coherent sum is equivalent to a vector sum over each component of the scattered fields [12]:
$$E_{s}^{\parallel,\perp}(\theta_{s},\phi_{s}) = \sum_{n=1}^{p} E_{s,n}^{\parallel,\perp},$$
where $E_{s,n}$ is the scattering field for both polarizations of the $n$th microfacet.

As a result of the plane wave propagation condition, the simplified scattering field for the $n$th microfacet can be written as
$$E_{s,n}^{\parallel,\perp} = E_{i}^{\parallel,\perp}\, R^{\parallel,\perp}\, e^{j\Delta\varphi_{n}}.$$
Now, the whole scattering field around a particular direction will be expressed for the parallel component; the same derivation applies to the perpendicular polarization. So, the scattering field of the $n$th microfacet in parallel polarization can be expressed as
$$E_{s,n}^{\parallel} = E_{i}^{\parallel}\, R^{\parallel}\, e^{j\Delta\varphi_{n}}.$$
Then, the vector sum around the direction $(\theta_{s},\phi_{s})$ for the $p$ contributions becomes
$$E_{s}^{\parallel}(\theta_{s},\phi_{s}) = E_{i}^{\parallel}\, R^{\parallel} \sum_{n=1}^{p} e^{j\Delta\varphi_{n}}.$$
If we set up the ratio $p/N$ in (9), the scattering field will become
$$E_{s}^{\parallel}(\theta_{s},\phi_{s}) = N\,\frac{p}{N}\, E_{i}^{\parallel}\, R^{\parallel}\, \big\langle e^{j\Delta\varphi} \big\rangle,$$
where the ratio $p/N$ signifies the probability of having $p$ well-oriented microfacets out of the $N$ possible ones in the direction $(\theta_{s},\phi_{s})$ and $\langle e^{j\Delta\varphi} \rangle$ represents the mean attenuation due to the phase shifts. Therefore, the scattering field of (10) can be written with the replacement of $p/N$ by the probability density function $P(\theta_{s},\phi_{s})$ as
$$E_{s}^{\parallel}(\theta_{s},\phi_{s}) = N\, P(\theta_{s},\phi_{s})\, E_{i}^{\parallel}\, R^{\parallel}\, \big\langle e^{j\Delta\varphi} \big\rangle.$$
The scattering coefficient can be deduced from (11) for both polarizations, which provides the ratio between the scattered power in a solid angle around $(\theta_{s},\phi_{s})$ and the incident power:
$$\sigma^{\parallel,\perp}(\theta_{s},\phi_{s}) = \frac{\big|E_{s}^{\parallel,\perp}(\theta_{s},\phi_{s})\big|^{2}}{\big|E_{i}^{\parallel,\perp}\big|^{2}}.$$
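As a rough numerical illustration of the coherent microfacet summation described above, the following Python sketch evaluates the scattered field toward one direction under the stated assumptions (the Kirchhoff-approximation phase term, a constant Fresnel coefficient over the well-oriented facets, and the Gaussian facet heights are all illustrative placeholders rather than values from the paper).

```python
import numpy as np

def scattered_field(E_i, R, heights, theta_i, theta_s, wavelength):
    """
    Coherent sum of microfacet contributions toward one scattering direction.
    E_i      : complex incident field amplitude (one polarization)
    R        : Fresnel reflection coefficient for that polarization (assumed
               constant over the well-oriented facets)
    heights  : array of facet heights about the global mean plane (z = 0)
    theta_i, theta_s : incidence and scattering angles in radians
    """
    k = 2.0 * np.pi / wavelength
    # Phase shift of each facet due to its height (assumed KA form).
    dphi = k * heights * (np.cos(theta_i) + np.cos(theta_s))
    # Vector (coherent) sum of the well-oriented facet contributions.
    return E_i * R * np.sum(np.exp(1j * dphi))

# Example: 200 facets with 2 mm Gaussian height deviation at 2.4 GHz.
rng = np.random.default_rng(0)
heights = rng.normal(0.0, 2e-3, 200)
E_s = scattered_field(E_i=1.0, R=0.6, heights=heights,
                      theta_i=np.radians(30), theta_s=np.radians(30),
                      wavelength=3e8 / 2.4e9)
print(abs(E_s))
```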

2.3. Proposed Optimization Techniques for Ray Tracing

After creating the data structure tree, it is necessary to find the objects that take part in the intersection test. The necessary objects are sorted out using two different techniques. First, the proposed DQS technique finds a group of objects according to the ray direction. Then, the COF finds the nearest object from that particular group of objects, and only that nearest object takes part in the intersection test. These two acceleration techniques reduce the intersection test time by finding the exact object.

The DQS technique reduces the prediction time by reducing the number of objects considered during each intersection test. The DQS technique can be described easily from Figure 5. Suppose a ray is generated from the Tx and intersects with an object at a certain position (the first intersection point). According to the positions of the objects in the environment with reference to this point, the environment is divided into four quadrants: I, II, III, and IV. The distribution of the objects into the different quadrants is illustrated in Figure 5.

Now assume the object is nontransparent and no diffraction occurs. Hence, the ray will undergo normal reflection or scattered reflection. Therefore, there is no possibility for the reflected ray to go behind the object, which means that, for the next intersection test, the objects of quadrants I and IV need not be considered. Thus, all of the objects in regions I and IV are skipped for the next intersection test. This saves almost half of the prediction time for the next intersection test. Now assume that, after reflection at the first intersection point, the ray intersects with another object at a second position. Again, the whole environment is divided into four quadrants based on this new point. After reflecting there, the ray travels in front of that object; again, the shaded portion on the back side of that object is skipped, which means quadrants II and III are skipped. If refraction (for a transparent object) or diffraction occurs instead of reflection, the opposite quadrants (quadrants II and III in the first case) are skipped, which again reduces the prediction time.
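The quadrant bookkeeping of the DQS step could be sketched as follows (illustrative Python only, not the authors' code; the object dictionaries, quadrant labels, and the choice of skipped quadrants are assumptions made for the example).

```python
def quadrant(point, origin):
    """Quadrant (I..IV) of an object's reference point relative to the current
    intersection point, using the x-y plane convention of Figure 5."""
    dx = point[0] - origin[0]
    dy = point[1] - origin[1]
    if dx >= 0 and dy >= 0:
        return "I"
    if dx < 0 and dy >= 0:
        return "II"
    if dx < 0 and dy < 0:
        return "III"
    return "IV"

def dqs_filter(objects, hit_point, skip_quadrants):
    """Keep only the objects lying in quadrants the reflected/refracted ray can
    actually reach; e.g. skip_quadrants = {"I", "IV"} after a front-face
    reflection, or {"II", "III"} for the opposite case."""
    return [obj for obj in objects
            if quadrant(obj["center"], hit_point) not in skip_quadrants]

# Example: two objects, reflection at the origin travelling toward negative x.
objects = [{"id": 39, "center": (-2.0, 1.0, 0.0)},
           {"id": 12, "center": (3.0, 2.0, 0.0)}]
candidates = dqs_filter(objects, hit_point=(0.0, 0.0, 0.0),
                        skip_quadrants={"I", "IV"})
print([obj["id"] for obj in candidates])   # -> [39]
```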

By applying the DQS technique, a group of possible intersecting objects is found and, among them, some objects (e.g., the 39th, 40th, and 41st) are parallel. Not all of them will intersect the ray; only the nearest object needs to take part in the intersection test. To find the nearest object, we have introduced the COF. The COF is composed of two phases: first, the creation of an effective artificial surface (EAS) inside each of the possible intersecting objects found by DQS and, second, finding the distances between the ray source and the different EASs by using the Pythagoras theorem.

The EAS, as shown in Figure 6(a), is defined as the effective surface within an object that is used for calculating the ray-object intersection. In this concept, a new invisible or imaginary surface is created inside the 3D object at the point where the ray contacts the object. This is a 2D surface created at the middle of the object. It is introduced with the aim of simplifying the algorithm and, hence, reducing the computational complexity. Using the vertex points of the cuboid, the software developed in this work calculates the coordinates of the EAS.

Assuming the four vertices of the EAS and the eight vertices of the cube or cuboid are known, the x-coordinate of each EAS vertex can be found as the average of the x-coordinates of the corresponding pair of cuboid vertices, and the y- and z-coordinates are determined in the same manner as the x-coordinate.

Now, if two opposite vertices of the EAS have coordinates $(x_{1}, y_{1}, z_{1})$ and $(x_{2}, y_{2}, z_{2})$, then the coordinate of the middle point of the face will be
$$M = \left(\frac{x_{1}+x_{2}}{2},\; \frac{y_{1}+y_{2}}{2},\; \frac{z_{1}+z_{2}}{2}\right).$$
Now, suppose the middle points of the EAS of the 39th, 40th, and 41st objects are $M_{39}$, $M_{40}$, and $M_{41}$, respectively, and $S$ is the coordinate of the ray source point. These points will be used for calculating the distances between the objects and the ray source point by using the Pythagoras theorem.

By extending the points $S$ and $M_{39}$, a right-angled triangle is formed (Figure 6(a)), whose third vertex takes one coordinate from each of these two points. Now, by applying the Pythagoras theorem to this right-angled triangle, we find
$$d^{2} = (\Delta x)^{2} + (\Delta y)^{2}.$$
If the distance between $S$ and $M_{39}$ is $d_{39}$, then, from (15), we find
$$d_{39} = \sqrt{(\Delta x)^{2} + (\Delta y)^{2}},$$
where $\Delta x$ is the difference between the $x$-coordinates of $S$ and $M_{39}$ and $\Delta y$ is the difference between their $y$-coordinates.

In this way, the distances between $S$ and the other middle points ($M_{40}$ and $M_{41}$ in the present case) are calculated, giving $d_{40}$ and $d_{41}$. Then, after comparing the distances, the nearest object is chosen; in the above case, the 40th object is found to be the nearest. This object is now used in an intersection test, which is done to get the exact ray-object intersection point. Based on this intersection point, the next ray shooting decision is taken from this point, and the occurrence of reflection or refraction also depends on this point.
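A minimal sketch of the COF step is given below (illustrative Python; it assumes axis-aligned cuboids described by their minimum and maximum corners, and takes the EAS middle point as the cuboid centre, which is one possible reading of the construction above).

```python
import math

def eas_midpoint(cuboid_min, cuboid_max):
    """Middle point of the effective artificial surface (EAS) created inside a
    cuboid; for an axis-aligned cuboid this coincides with the cuboid centre."""
    return tuple((a + b) / 2.0 for a, b in zip(cuboid_min, cuboid_max))

def distance(p, q):
    """Straight-line distance obtained from the Pythagoras theorem."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def closest_object(ray_source, candidate_objects):
    """Return the candidate whose EAS midpoint is nearest to the ray source."""
    return min(candidate_objects,
               key=lambda obj: distance(ray_source,
                                        eas_midpoint(obj["min"], obj["max"])))

# Example with three parallel candidates found by DQS (IDs 39, 40, 41 assumed).
candidates = [{"id": 39, "min": (4.0, 1.0, 0.0), "max": (4.2, 2.0, 3.0)},
              {"id": 40, "min": (2.0, 1.0, 0.0), "max": (2.2, 2.0, 3.0)},
              {"id": 41, "min": (6.0, 1.0, 0.0), "max": (6.2, 2.0, 3.0)}]
nearest = closest_object(ray_source=(0.0, 1.5, 1.5), candidate_objects=candidates)
print(nearest["id"])   # -> 40 (the nearest of the three parallel objects)
```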

As 3D cuboids are used to represent the objects, the ray may intersect any of the six faces of a cuboid. Depending on the position of the object with respect to the ray shooting point, the correct face is chosen for finding the intersection point. As an example, if an object is in the first quadrant, the ray can possibly hit only the left or back surface of that object, and in that case only these surfaces are used for finding the intersection point, as demonstrated in Figure 6(b). Similarly, the right or back surface is used for the second quadrant, the right or front surface for the third quadrant, and the left or front surface for the fourth quadrant, respectively. In the case of the 34th object of Figure 5, the right or back surface has to be tested for finding the intersection point, as shown in Figure 6(b). In vector notation, the plane is expressed as the set of points $\mathbf{p}$ for which
$$(\mathbf{p} - \mathbf{p}_{0}) \cdot \mathbf{n} = 0,$$
where $\mathbf{n}$ is a normal vector to the plane and $\mathbf{p}_{0}$ is a point on the plane.

According to vector algebra, the line of the ray can be expressed as
$$\mathbf{p} = \mathbf{l}_{0} + t\,\mathbf{l},$$
where $\mathbf{l}$ is a vector in the direction of the line, $\mathbf{l}_{0}$ is a point on the line, and $t$ is a scalar in the real number domain.

Now, by substituting (18) into (17), we get
$$(\mathbf{l}_{0} + t\,\mathbf{l} - \mathbf{p}_{0}) \cdot \mathbf{n} = 0.$$
By distributing the dot product, (19) becomes
$$t\,(\mathbf{l} \cdot \mathbf{n}) + (\mathbf{l}_{0} - \mathbf{p}_{0}) \cdot \mathbf{n} = 0.$$
Now, by solving (20), we get the value of $t$:
$$t = \frac{(\mathbf{p}_{0} - \mathbf{l}_{0}) \cdot \mathbf{n}}{\mathbf{l} \cdot \mathbf{n}}.$$
No intersection exists if the line is parallel to the plane and starts outside the plane; in this case, the denominator of (21) is zero and the numerator is nonzero. The line intersects the plane everywhere if it is parallel to the plane and starts inside the plane; in this case, both the numerator and the denominator of (21) are zero. In all other cases, the line intersects the plane once, and $t$ represents the distance of the intersection along the line from $\mathbf{l}_{0}$; that is, the intersection point is $\mathbf{p} = \mathbf{l}_{0} + t\,\mathbf{l}$. Now, this intersection point acts as the source for that particular ray; reflection, refraction, diffraction, or scattering occurs at that point according to the object property, and the ray proceeds in a particular direction. Based on that direction, DQS is applied again at the intersection point and the next object is found for a ray-object intersection by applying the COF technique. This process continues and, at the end, the ray is counted either as a valid signal received by the Rx or as an invalid signal.
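The parametric ray-plane test described above can be written compactly as follows (a minimal Python sketch assuming a plane given by a point and a normal and a ray given by an origin and a direction; the tolerance and the behind-the-origin check are added for robustness and are not part of the derivation above).

```python
import numpy as np

def ray_plane_intersection(l0, l, p0, n, eps=1e-9):
    """
    Solve (l0 + t*l - p0) . n = 0 for t.
    Returns the intersection point, None if the ray is parallel and off the
    plane (or the plane lies behind the origin), or l0 itself if the ray lies
    in the plane.
    """
    l0, l, p0, n = map(np.asarray, (l0, l, p0, n))
    denom = np.dot(l, n)                  # (l . n)
    numer = np.dot(p0 - l0, n)            # ((p0 - l0) . n)
    if abs(denom) < eps:
        return None if abs(numer) > eps else l0   # parallel: no hit / in-plane
    t = numer / denom
    if t < 0:
        return None                       # plane is behind the ray origin
    return l0 + t * l

# Example: ray from the origin along +x hitting the plane x = 2 (a back face).
hit = ray_plane_intersection(l0=(0, 0, 0), l=(1, 0, 0), p0=(2, 0, 0), n=(1, 0, 0))
print(hit)   # -> [2. 0. 0.]
```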

2.4. Formulation of Time Complexity and Received Power

As mentioned earlier, for n objects, the search operation of an AVL tree can be implemented in O(log n) time. Based on (1), the time complexity of the proposed technique can be described as below.

Let $n$ be the number of objects, let $S$ be the number of surfaces of each 3D object, and let $t$ be the time of a single intersection test in the proposed method. If $I$ intersections are required to predict each significant ray, then the total intersection testing time $T$ can be calculated by the following equation:
$$T = I \times S \times t \times \log_{2} n.$$
Moreover, according to DQS, the proposed method can omit a significant number of objects during each intersection test. Let $n_{d}$ be the average number of omitted objects due to DQS. Now, (22) becomes
$$T = I \times S \times t \times \log_{2}(n - n_{d}).$$
Furthermore, the COF technique also skips objects during the intersection test. Suppose $n_{c}$ is the number of objects skipped by the COF technique. Thus, the equation for the intersection time becomes
$$T = I \times S \times t \times \log_{2}(n - n_{d} - n_{c}).$$
For similar cases, the intersection testing time of the existing techniques [12, 24, 25] can be found as
$$T = I \times S \times t \times n.$$
Accordingly, the time complexity of the SBR technique is [16]
$$T_{\mathrm{SBR}} = I \times S \times t \times (n - n_{m}),$$
where $n_{m}$ is the number of objects skipped due to the mailbox technique.

The time complexity of the BT and BDPT techniques can be found as [16]
$$T_{\mathrm{BT/BDPT}} = I \times S \times t \times n.$$
The time complexity of the RF technique is [16]
$$T_{\mathrm{RF}} = I \times S \times t \times \frac{n}{4^{q}},$$
where $q$ is the order of the quadtree.

The time complexity of the PDM and SD techniques is [16]
$$T_{\mathrm{PDM}} = I \times S \times t \times (n - n_{p} - n_{b}),$$
where $n_{p}$ and $n_{b}$ are the numbers of objects skipped due to the prior distance measure and the bounding spheres method, respectively,

and
$$T_{\mathrm{SD}} = I \times S \times t \times (n - n_{s}),$$
where $n_{s}$ is the number of objects skipped by space division.
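To give a feeling for the scale of the claimed saving, the following illustrative calculation (not measured data; it simply contrasts a linear scan of the remaining objects with a balanced-tree search after some objects have been skipped) compares rough candidate-check counts per ray.

```python
import math

def tests_per_ray(n_objects, hops, skipped=0, use_tree=False):
    """Rough count of ray-object candidate checks for one significant ray:
    'hops' intersections, each either scanning the remaining objects linearly
    or locating a candidate through a balanced-tree search (illustrative model
    only, not a measured result)."""
    remaining = max(n_objects - skipped, 1)
    per_hop = math.log2(remaining) if use_tree else remaining
    return hops * per_hop

n, hops = 200, 5
print(tests_per_ray(n, hops))                              # linear scan: 1000.0
print(tests_per_ray(n, hops, skipped=120, use_tree=True))  # DQS+COF+AVL: ~31.6
```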

The received power at an observation point is calculated with the Friis transmission formula. For three-dimensional modeling, three-dimensional directivity data of the transmitting and receiving antennas are required, which can be interpolated from the measurement data along the E- and H-planes [26].

If an observation point is included in a ray frustum, a ray path is formed, which traces back to the vertex of the frustum. If the ray hits the reflecting triangle or the diffracting edge that generates the frustum, the ray continues to trace back to the apex of the parent ray frustum. The process is recursively carried out until the ray hits the transmitting antenna. If the ray is intercepted by obstacles in the course of the ray tracing, the received power at the point is assumed to be zero. When the back-traced ray hits the source, the directivity of the transmitting antenna along the direction of the ray is utilized and the radiated electric field is obtained by [14].

The radiated field is expressed in terms of the transmitted power $P_{t}$ and the wave impedance of free space $\eta_{0}$; the gain along the ray direction is $G_{t}(\theta, \phi)$. Unit vectors along the elevation and azimuth directions, together with the corresponding polarization components $p_{\theta}$ and $p_{\phi}$, define the orientation of the radiated field.

The incident electric field generated from the transmitting antenna undergoes reflection, transmission, and diffraction. The received electric field of a ray that arrives at the receiving antenna after many combinations of reflections, diffractions, and transmissions is obtained by applying, for each interaction along the path, the divergence factor that accounts for the magnitude change after that scattering together with the corresponding dyadic reflection, transmission, or diffraction coefficient.

The received voltage is the sum of the multipath signals that have propagated along the actual ray paths. The voltage is obtained from the arriving fields by using the receiving antenna gain $G_{r}(\theta, \phi)$ along the ray direction, the polarization components $p_{\theta}$ and $p_{\phi}$, and the characteristic impedance $Z_{c}$ of the receiver; the elevation and azimuth unit vectors are taken in the receiving antenna coordinates. The received power is then obtained from the squared magnitude of this total voltage and the receiver impedance.
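As a simplified numerical illustration of summing multipath contributions into a received power, the sketch below uses the free-space Friis relation per path with a phase term and a single lumped interaction gain per path; it does not implement the dyadic reflection/transmission/diffraction chain or the antenna polarization handling described above, and all numeric values are placeholders.

```python
import numpy as np

def received_power_dbm(pt_dbm, gt_dbi, gr_dbi, path_lengths_m, path_gains_lin, freq_hz):
    """
    Coherently sum multipath contributions: each path carries a Friis free-space
    amplitude scaled by the product of its interaction coefficients (collapsed
    into one linear gain here) and a phase proportional to its travelled length.
    """
    lam = 3e8 / freq_hz
    k = 2 * np.pi / lam
    pt_w = 1e-3 * 10 ** (pt_dbm / 10.0)
    gt = 10 ** (gt_dbi / 10.0)
    gr = 10 ** (gr_dbi / 10.0)
    # Complex voltage-like amplitude of each path.
    amps = [np.sqrt(pt_w * gt * gr) * (lam / (4 * np.pi * d)) * g * np.exp(-1j * k * d)
            for d, g in zip(path_lengths_m, path_gains_lin)]
    pr_w = abs(sum(amps)) ** 2
    return 10 * np.log10(pr_w / 1e-3)

# Example: a direct path plus two reflected paths at 2.4 GHz, 10 dBm transmit power.
print(received_power_dbm(10, 2.15, 2.15, [8.0, 11.5, 14.2], [1.0, 0.5, 0.3], 2.4e9))
```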

3. Proposed Coverage Algorithm

The basic idea of the proposed algorithm is to obtain the minimum number of Txs required to achieve the optimum wireless coverage. To achieve this goal, the concept of each Tx's probability of being selected for each sampling point has been introduced and incorporated with GA and DFS to optimize the indoor wireless coverage. Here, a sampling point is a point that collects the signals coming from the surrounding Txs. Ray tracing is used to generate the different coverage patterns.

In this study, the DFS progresses by expanding the first level of sampling point nodes of the tree, finding the node with the highest probability, and searching from level to level until all the sampling points are covered; then, the searching process stops. The key concept of the proposed algorithm is the probability. For ease of understanding, the notations used are given here.

(i) Tx_i is the ith Tx. Assume each of the Txs covers only one of the sampling points. Therefore, the number of Txs required to cover all the sampling points should be equal to or less than the total number of sampling points; hence, the number of Txs is at most N, if the number of sampling points in the indoor area is N.
(ii) CP_i is the coverage pattern of Tx_i.
(iii) SP_j is the sampling point level for level j.
(iv) P_ij is the probability of Tx_i to be selected for SP_j.
(v) TP_i is the total probability of Tx_i to be selected.
(vi) Tx_r is the Tx to be removed.

If CP_1 and CP_2 are two coverage patterns, then the resultant pattern is
$$\mathrm{CP}_{\mathrm{result}} = \mathrm{CP}_{1} \;\mathrm{OR}\; \mathrm{CP}_{2}.$$
As an example, if 8 sampling points are numbered from 1 to 8, one Tx covers the sampling points 1, 2, 5, and 7 and another Tx covers the sampling points 2, 3, 7, and 8, then the coverage patterns are as in Table 1. Here, the resultant pattern is created by merging both CP_1 and CP_2 based on the logical inclusive "OR" operation, as shown in Table 1, where the result is "1" if the first bit is "1" OR the second bit is "1" or both bits are "1"; otherwise, the result is "0."
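The coverage-pattern merge of Table 1 can be expressed as a bitwise inclusive OR, as in the following illustrative Python fragment (the list encoding of a coverage pattern is an assumption made for the example).

```python
def merge_patterns(cp_a, cp_b):
    """Resultant coverage pattern: a sampling point is covered ('1') if either
    transmitter covers it (logical inclusive OR, as in Table 1)."""
    return [a | b for a, b in zip(cp_a, cp_b)]

# Sampling points 1..8; Tx A covers 1, 2, 5, 7 and Tx B covers 2, 3, 7, 8.
cp_a = [1, 1, 0, 0, 1, 0, 1, 0]
cp_b = [0, 1, 1, 0, 0, 0, 1, 1]
print(merge_patterns(cp_a, cp_b))   # -> [1, 1, 1, 0, 1, 0, 1, 1]
```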

The probability of a Tx is based on a few principles (a computational sketch follows the worked example below).

(i) If the number of sampling points covered by the Tx becomes higher, then the number of terms in the summation increases; therefore, the probability of the Tx to be selected becomes higher. This is named the intraprobability. The intraprobability for Tx_i is
$$\mathrm{Intra}_{i} = \sum_{j \,:\, \mathrm{SP}_{j}\ \text{covered by}\ \mathrm{Tx}_{i}} P_{ij}.$$
(ii) If the number of Txs that cover the same sampling point becomes smaller, then the probability of a Tx to be selected for that particular sampling point level becomes higher. This is named the interprobability. The interprobability for each sampling point level SP_j is
$$P_{ij} = \frac{1}{N_{j}},$$
where N_j is the number of Txs covering SP_j.
(iii) The probability for a Tx can be equal to 1 if and only if the Tx covers a sampling point that is not covered by any other Tx; otherwise, the probability of the Tx is always smaller than 1.
(iv) If the probability is equal to or greater than 1, the integer part shows the number of sampling points that are covered only by this Tx and by no other Tx.
(v) Only the maximum probability of each Tx will be taken.

For example, if there are 6 sampling points numbered from 1 to 6, the coverage patterns of the Txs over these six sampling points are as shown in Table 2.

The first sampling point level, SP_1, is covered by only three Txs; therefore, the interprobability for SP_1 is 1/3. By using the same principle, the probabilities of the other sampling point levels are obtained, as shown in Table 2. Using these data, the total probability of each Tx is found; for example, one Tx has a total probability of 0.74074 and another of 1.3333.
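The intra- and interprobability calculation sketched above can be written as follows (illustrative Python; the small coverage table is hypothetical and does not reproduce Table 2).

```python
def interprobability(patterns, sp):
    """Interprobability of sampling point 'sp': 1 / (number of Txs covering it)."""
    covering = sum(1 for cp in patterns.values() if cp[sp])
    return 1.0 / covering if covering else 0.0

def intraprobability(patterns, tx):
    """Intraprobability of a Tx: sum of the interprobabilities of every sampling
    point that this Tx covers (a value >= 1 means it covers at least one
    sampling point that no other Tx covers)."""
    cp = patterns[tx]
    return sum(interprobability(patterns, sp) for sp, bit in enumerate(cp) if bit)

# Hypothetical coverage patterns for 3 Txs over 4 sampling points.
patterns = {"Tx1": [1, 1, 0, 0],
            "Tx2": [1, 0, 1, 0],
            "Tx3": [0, 0, 1, 1]}
for tx in patterns:
    print(tx, round(intraprobability(patterns, tx), 4))
# Tx1 and Tx3 each score 1.5 because each covers a point no other Tx covers.
```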

Some bounding functions as well as termination criteria are applied while going deeper into the search space. For N sampling points in the indoor environment, the proposed bounding functions are as follows.
(i) The probability of each existing and nonselected Tx for each sampling point level is recalculated, where the calculation starts from the current sampling point level up to the Nth sampling point level. Therefore, the probability of each node is reduced as the search moves from one sampling point level to the next.
(ii) The process needs to be rerun repeatedly, removing one of the Txs each time. This step makes sure that the chosen Txs are the minimum number of Txs required to achieve the optimum wireless coverage. Besides, this step is also able to obtain alternative choices of Txs without degrading the coverage.

For making the proposed algorithm complete, the following termination criteria are used. Termination happens if one of the following conditions becomes true.
(i) The Tx existing in the current sampling point level is one of the Txs already chosen in a previous sampling point level. In this case, the current sampling point level is terminated and the algorithm proceeds to the next level.
(ii) The algorithm terminates when no live node exists in the solution space to be explored.

To illustrate the proposed method, suppose there are 6 sampling points in the indoor propagation area with the coverage pattern shown in Table 2. Figure 7 shows the state space search tree, generated based on the sampling point levels, for achieving the first optimal solution.

Initially, there is only one live node, node 0. This represents the case in which no Tx has been placed in the propagation area. This node becomes the E-node at the initial state. It is expanded based on the sampling point level; therefore, for the first sampling point level, its child nodes 1, 2, and 3 are generated. These nodes represent the solution space where only one Tx is considered at a time. By calculating the probabilities of the corresponding Txs as shown in Table 2, the Tx of node 1 has the highest probability; therefore, the next E-node will be node 1 and the algorithm switches to the next sampling point level. Since this Tx appears again in sampling point levels 2, 3, and 4, according to the termination criteria, all these levels are terminated and the E-node remains node 1.

For the 5th sampling point level, only the 3rd, 4th, and 5th Txs exist; by applying the first bounding function, the probabilities of these Txs are recalculated, starting from the current sampling point level. Since the 6th sampling point level is covered only by the 4th Tx, according to the third probability principle, the probability of the 4th Tx is equal to 1. Since this Tx appears again in the last sampling point level, the termination criterion terminates that level and the E-node remains node 5. The resultant coverage pattern, obtained by combining the coverage patterns of the chosen Txs, then covers all six sampling points. This pattern shows that the primary optimum solution is formed by the 1st and 4th Txs. By applying the second bounding function, the proposed algorithm is rerun, removing one of the Txs each time; therefore, the algorithm needs to run 7 times in order to confirm the correct optimum solution by comparing the latest coverage pattern with the primary optimum solution obtained and also the number of Txs required. Figure 7 is the first step and Figure 8 represents the rest of the steps.
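A simplified, greedy rendering of the probability-guided selection is sketched below (illustrative Python only; it omits the GA, the full DFS with bounding functions and termination criteria, and the rerun-with-one-Tx-removed step, and the coverage table is hypothetical). It nevertheless shows how the intraprobability scores drive the level-by-level choice of Txs.

```python
def select_transmitters(patterns):
    """Repeatedly pick the Tx with the highest intraprobability over the still
    uncovered sampling points until everything is covered (greedy sketch of
    the probability-guided selection)."""
    n_points = len(next(iter(patterns.values())))
    uncovered = set(range(n_points))
    chosen = []
    while uncovered:
        def score(tx):
            # Interprobability of each uncovered point, summed over the points
            # this Tx covers (its intraprobability restricted to uncovered points).
            covering = {sp: sum(1 for cp in patterns.values() if cp[sp])
                        for sp in uncovered}
            return sum(1.0 / covering[sp] for sp in uncovered if patterns[tx][sp])
        best = max(patterns, key=score)
        if score(best) == 0:
            break                          # remaining points cannot be covered
        chosen.append(best)
        uncovered -= {sp for sp in uncovered if patterns[best][sp]}
    return chosen

# Hypothetical 5-Tx, 6-sampling-point coverage table.
patterns = {"Tx1": [1, 1, 1, 1, 0, 0],
            "Tx2": [1, 1, 0, 0, 0, 0],
            "Tx3": [0, 0, 1, 0, 1, 0],
            "Tx4": [0, 0, 0, 0, 1, 1],
            "Tx5": [0, 0, 0, 1, 1, 0]}
print(select_transmitters(patterns))   # -> ['Tx1', 'Tx4'] (a two-Tx solution)
```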

3.1. Complexity Analysis

In this section, a comparison between the proposed and the existing algorithms [19, 20] is presented in terms of both space and time complexities. Here, the received signal at each sampling point due to one or more Txs is calculated by using the ray tracing algorithm. Initially, for the worst case, each Tx covers one sampling point; therefore, N Txs are needed for N sampling points. The ray tracing algorithm runs for every sampling point from the location of each Tx to generate the coverage pattern of that Tx. At the beginning, the signal is transmitted from a particular Tx situated at a certain location to each of the sampling points. If the signal can be detected by another Tx situated at another particular sampling point, then, by reciprocity, the signal generated by the Tx at that second location will also be able to reach the Tx at the first location. Therefore, the number of ray tracing runs needed decreases for each successive Tx. For example, if there are N sampling points, then the 1st Tx requires the ray tracer to run N - 1 times, the 2nd Tx N - 2 times, and so on. Therefore, the total number of ray tracing runs for N Txs is
$$(N-1) + (N-2) + \cdots + 1 = \frac{N(N-1)}{2}.$$
The proposed algorithm requires all sampling pattern information in order to perform the probability calculation. Hence, before proceeding to the proposed algorithm, the ray tracing algorithm needs to run completely to generate the sampling patterns for all Txs. Based on this algorithm, for a typical indoor environment having 5 sampling points, the worst case search tree can be generated, as shown in Figure 9.

Node 0 is only the root node of the tree, so it does not refer to any Tx; therefore, node 0 can be referred to as a dummy node. By referring to the tree organization of Figure 9, the total space tree required is
$$S = 1 + 2 + 3 + \cdots + N.$$
It can be simplified by writing the sum of the natural numbers up to N in two ways:
$$S = 1 + 2 + \cdots + (N-1) + N,$$
$$S = N + (N-1) + \cdots + 2 + 1.$$
Adding these two series,
$$2S = (N+1) + (N+1) + \cdots + (N+1).$$
There are N of these (N+1)'s, so
$$2S = N(N+1).$$
Hence, the sum of the natural numbers from 1 to N is half the product of the first term plus the last one, multiplied by the number of terms. Thus, in general, the total number of nodes generated by the tree (including the dummy node) can be calculated by the following formula:
$$T(N) = \frac{N(N+1)}{2} + 1,$$
where N is the number of sampling points in the selected indoor environment. As all the generated nodes stay in memory, the space complexity is the same as the time complexity. The space complexity refers to the number of nodes generated until the deepest level, and the time complexity depends on the number of nodes generated until the required solution has been found [19].

The proposed algorithm is now compared with the existing algorithms [19, 20]. According to the existing algorithm [20], the worst-case time complexity grows exponentially with the number of sampling points N in the indoor environment and also scales with the number of rows m of its coverage table.

The time complexity of the existing algorithm [19] is obtained by modifying that of the existing algorithm [20]: it is reduced by the number of coverage patterns rejected because of duplication and by the number of nodes left unexplored, where N is again the number of sampling points in the selected indoor environment. In the worst case, both of these reductions are 0.

From Table 3, it is clear that both the time and space complexities of the proposed algorithm are better than those of the existing algorithms [19, 20]. From the table, it is also apparent that the time and space complexities of the existing algorithms increase exponentially, while those of the proposed algorithm increase only polynomially with the number of sampling points. Therefore, we can say that the proposed algorithm will take significantly less execution time than the existing algorithms.

4. Results and Discussion

For a fair comparison, the same environments are used and all experimental settings are kept equivalent. All data in this section have been taken with the same simulation setup, and all of the comparable techniques (implemented in the form that gives their best possible outcome) are implemented in the same simulation software. All simulations are carried out on a PC with an Intel(R) Core(TM) 2 Duo CPU E8400 @ 3.00 GHz and 3.23 GB of RAM, running the Microsoft Windows XP operating system. Thus, there is no bias between the techniques. Though the proposed technique is applicable to any frequency, in most cases a 2.4 GHz operating frequency and a half-wave dipole antenna with 2.15 dBi gain are used. The objects of the simulation software are made using cuboids. The properties of different materials are used to make the simulation environment match realistic environments; the material properties are shown in Table 4.

To evaluate the performance of the proposed technique, a comparison is made with the existing methods: the SBR, BT, BDPT, RF, PDM, and SD techniques. The drawbacks of these existing techniques have been described in Section 1. For a proper comparison, five different environments are chosen (one of them is shown in Figure 5). The environments differ in the number of objects; some are highly complex and some are moderate. Measurements are taken at 10 different sampling points for each environment by changing the Tx and Rx positions. The results obtained from the 10 sampling points of Figure 5 are represented graphically in Figure 10, and Table 5 presents the overall results for all five environments.

According to Figure 10, the proposed algorithm shows lower time consumption for the environment shown in Figure 5. Results for five such different environments are given in Table 5. From the results, we observe that the proposed algorithm consumes 69.93% less time than the SBR algorithm, 63.67% less than BT, 82.44% less than BDPT, 66.03% less than the RF technique, 83.36% less than the PDM method, and 82.12% less than the SD technique. This is because DQS decreases the time by skipping objects during simulation, the COF technique minimizes the ray-object intersection time, and the AVL tree data structure minimizes the data searching time.

To justify the inclusion of rough surface scattering and the different optimization techniques, this part presents their effects on the prediction time. These effects are presented graphically in Figure 11 for different scattering factors (SF). The SF is the key feature that affects the scattering simulation: it has a direct impact on the scattering angle. When the SF increases, the scattering angle also increases; thus, the chance of ray-object interaction increases, which increases the prediction time. Here, a comparative study is presented to show the improvement of the proposed technique step-by-step. For the first SF value (Figure 11(a)), after including the scattering in the ray tracing technique, the time decreases by 4.65% on average compared to the algorithm that does not consider scattering in ray tracing. Furthermore, an 11.68% time reduction is achieved with the inclusion of the DQS optimization technique and a 31.55% reduction with the inclusion of both the DQS and COF techniques. For the other SF values (Figures 11(b) and 11(c)), the time also decreases step-by-step after including the scattering and the different optimization techniques. These results demonstrate the strong effects of scattering and the optimization techniques on the prediction time.

As is known, a number of antenna types can be used as the Tx and Rx. For different types of antennas, the received power and electric field strength will be different due to the different antenna characteristics (i.e., antenna gain). Figure 12 shows three different antennas and the electric field strength at the receiver side. Here, the antennas used are a half-wave dipole antenna with 2.15 dBi gain, a bow-tie antenna with 2.3 dBi gain, and a hemispherical antenna with 7 dBi gain. In all cases, a 2.4 GHz operating frequency is used for data collection. In the figures, almost all of the techniques show nearly similar field strengths at the receiver end. The small differences between the techniques result from multipath propagation, which is represented by different approaches in the different algorithms, and from the different optimization techniques used in the algorithms; because of these, the amount of received signal varies and, thus, the electric field strength also varies. This close agreement between the different techniques verifies the validity of the proposed technique.

In the above comparisons, verification is done with data taken using different antennas at the same operating frequency. In this part, the same comparison is made by changing the operating frequency for the same antenna, to observe the effect of different frequencies. As the operating frequency has an impact on the path loss calculation [27], Figure 13 presents the path loss data for the half-wave dipole antenna at two different frequencies: 900 MHz in Figure 13(a) and 2.4 GHz in Figure 13(b). For 900 MHz, the path loss varies from 38.8 dB to 45 dB and, for 2.4 GHz, it varies from 53.1 dB to 59 dB for the different techniques. This indicates that, as the operating frequency increases, the path loss also increases.

In the previous discussions, the validity and superiority of the proposed ray tracing technique are established with simulation results. In this part, a comparison between the simulated and the experimental data is shown in order to further evaluate the validity of the proposed technique. In real cases, the transmitted power is constant while the received power varies for different Rxs depending on the Tx-Rx separation, the number of obstacles, and so forth. Based on the received power, the signal strength at the Rx changes, which makes the received signal a "good" signal or a "bad" signal. For this reason, the received power is significant in ray prediction and has been chosen as the comparison parameter. Figure 14 shows the comparison between the simulated and experimental received power for different scenarios. During the experiment and the simulation, the transmitted power is kept constant. From the figure, it is observed that the simulation results closely match the experimental results, which validates the proposed algorithm. The average difference between the simulation and experimental data is very small, at 1.65%. These differences are the effects of different environmental parameters, object properties, and the existence of other electromagnetic signals in the practical environment.

It has been shown so far that the proposed ray tracing technique is valid and more efficient than the existing ray tracing techniques. Here, the proposed coverage algorithm is validated by comparing the simulated and experimental results for different transmitted powers. The coverage for a specific environment changes with the transmitted power. If the transmitted power is constant, the coverage depends on the obstacles; if the number or position of the obstacles changes, the coverage also changes. Therefore, for constant transmitted power, the coverage remains almost the same for the same environment. For this reason, we have verified our algorithm with different transmitted powers for the same environment and achieved approximately 99% and 98% maximum coverage in simulation and experiment, respectively, for 10 dBm transmitted power (refer to Figure 15). Also, from Figure 15, we observe close agreement between the simulation and experimental results of the coverage algorithm; the average difference between the simulation and experimental data is 0.3%. This difference is the result of different environmental effects and trivial mismatches between the simulation and experimental setups. These matching results confirm the accuracy of the proposed coverage algorithm.

5. Conclusion

A new propagation prediction technique for indoor environments is proposed in this paper, where an AVL tree is used for data storing and retrieving and the DQS and COF techniques are used for accelerating the overall ray tracing process. This paper also proposes a new algorithm for indoor wireless coverage, which is based on probability theory and optimized by GA and DFS. Analysis of the proposed method against the existing methods proves that the proposed method has lower time and space complexities. The obtained results reveal that the proposed technique reduces the computational time by up to 83.36%. In the case of the coverage algorithm, the space and time complexities are greatly improved because of the strong bounding functions and termination criteria as well as the multilevel technique for finding the suitable Txs from the coverage patterns based on probability. The similarities between the simulated and experimental results (a very high matching accuracy of more than 98% with respect to received power and coverage) confirm the validity of the proposed technique.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This research work is supported by the University of Malaya High Impact Research (HIR) Grant (UM.C/628/HIR/ENG/51) sponsored by the Ministry of Higher Education (MOHE), Malaysia.