Bioinspired Computation and Its Applications in Operation Management
Lianbo Ma, Hanning Chen, Kunyuan Hu, Yunlong Zhu, "Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization", The Scientific World Journal, vol. 2014, Article ID 941532, 21 pages, 2014. https://doi.org/10.1155/2014/941532
Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization
Abstract
This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. In the bottom level, each subpopulation employs the canonical ABC method to search for the part-dimensional optimum in parallel; these partial solutions are then assembled into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC achieves remarkable performance on most of the chosen benchmark functions compared with several successful swarm intelligence and evolutionary algorithms. HABC is then used to solve the real-world RNP problem on two instances of different scales. Simulation results show that the proposed algorithm is superior for solving RNP in terms of optimization accuracy and computation robustness.
1. Introduction
In recent years, radio frequency identification (RFID) technology, a new inventory tracking technology, has shown great promise for diversified use across many industries, with numerous practical applications already realized and many more being explored. In many real-world RFID applications, such as production, logistics, supply chain management, and asset tracking, a sufficient number of readers must be deployed in order to provide complete coverage of all the tags in the given area [1, 2]. In particular, over the last ten years, RFID has been used to build up an “Internet of Things” (IoT), a network that connects physical things to the Internet, making it possible to access remote sensor data and to control the physical world from a distance [3]. This raises several questions in the deployment of an RFID network for the operation and management of large-scale Internet of Things applications, such as optimal tag coverage, quality of service (QoS), and cost efficiency [4].
Due to the limited recognition range of a single reader, many readers and tags need to be deployed in some arrangement to construct the RFID system in the scenario area. This raises several questions that must be considered while avoiding reader collision: how many readers are needed, where should the readers be placed, and what is the efficient parameter setting for each reader [5, 6]. In addition, for a cost-efficient RFID system, the network should cover the tagged items with the minimum number of readers and the maximum tag coverage. Thus, the RFID network planning (RNP) problem is a difficult NP problem [7, 8]. In general, RNP aims to optimize a set of objectives (tag coverage, load balance, economic efficiency, interference between readers, etc.) by adjusting the control variables of the system (the coordinates of the readers, the number of readers, the antenna parameters, etc.) [8].
In the past two decades, evolutionary computation (EC) and swarm intelligence (SI) techniques for solving the RNP problem have gained increasing attention, such as genetic algorithms (GA) [9, 10], evolution strategies (ES) [11], differential evolution (DE) [12], particle swarm optimization (PSO) algorithms [6, 11, 13], and bacterial foraging algorithms (BFA) [14, 15]. In particular, in [6] we presented a multispecies particle swarm optimization model for solving the RNP problem, achieving significant positioning accuracy. Some scholars have also proposed other methods to address similar problems, such as bipartite graphs [16, 17]. However, as the number of deployed readers and tags grows in large-scale RFID deployment environments, the complexity of the RNP optimization increases exponentially, and the previous methods become inadequate, being prone to premature convergence [13].
A natural approach to tackling high-dimensional optimization problems is cooperative coevolution based on a divide-and-conquer strategy. An early work on a cooperative coevolutionary algorithm (CCEA) [18] provides a promising approach for decomposing a high-dimensional problem, and recent studies [19–23] have incorporated improved decomposition strategies into the PSO algorithm. Inspired by these recent works, we propose a novel hierarchical coevolving scheme that extends the canonical artificial bee colony (ABC) algorithm framework from a nonhierarchical to a hierarchical one, called the hierarchical artificial bee colony algorithm (HABC). Our HABC model is inherently different from others in the following aspects.
Firstly, a cooperative coevolving approach based on the divide-and-conquer strategy with random grouping is adopted in HABC, under which high-dimensional vectors are decomposed into smaller subcomponents assigned to the lower hierarchy. This enhances the local search ability (exploitation).
Secondly, the traditional evolution operators crossover and mutation are applied to the interaction of multiple species rather than within a single species, improving the information exchange between populations. Under this new development, neighbor bees with higher fitness can be chosen for crossover and mutation, which effectively enhances the global search and convergence toward the global best solution as the dimension increases. This maintains population diversity and enhances the global search ability (exploration).
By incorporating this new degree of complexity, HABC holds considerable potential for solving more complex problems. Here we provide some initial insights into this potential by evaluating HABC on both mathematical benchmark functions and a real-world RNP case, which focuses on minimizing four specific objective functions of a 30-reader RFID network and a 50-reader RFID network. The simulation results, compared with other state-of-the-art methods, show the superiority of the proposed algorithm.
The rest of the paper is organized as follows. In Section 2, RFID system models and the RNP problem definitions are presented. Section 3 first reviews the canonical ABC algorithm and then proposes the novel HABC algorithm. Section 4 tests the algorithm on the ten benchmarks and illustrates the results. Section 5 describes the implementation of the proposed HABC-based approach on two instances, Cd100 and Rd500, and analyzes the simulation results. Finally, Section 6 outlines the conclusions.
2. Problem Formulation on RNP
An RFID system consists of four types of important components (see Figure 1): RFID tags, each placed on an object and consisting of a microchip and an embedded antenna containing a unique identity called the Electronic Product Code (EPC); RFID readers, each of which has one or more antennas and is responsible for sending data to and receiving data from tags via radio frequency waves; RFID middleware, which manages the readers and filters and formats the raw RFID tag data; and the RFID database, which records the raw tag data containing information such as reading time, location, and tag EPC. In this section, a mathematical optimization model for the RNP problem based on RFID middleware is proposed.
The model is constructed from several different aspects. The deployment region of hotspots is assumed to be a two-dimensional square domain. The tags here are passive and based on the Class-1 Generation-2 UHF standard specification [6, 9, 12], which means they can only be powered by radio frequency energy from readers. The proposed RNP model aims to improve the QoS of RFID networks by optimizing objectives including coverage, interference, load balance, and aggregate efficiency via regulating the parameters of the RFID network, including the number, locations, and radiated power of the readers. The problem is generally formulated as follows.
2.1. Optimal Tag Coverage
The first objective function represents the level of coverage, which is the most important in an RFID system. In this paper, if the radio signal received at a tag is higher than the threshold power (in dBm), the communication between reader and tag can be established. The function is then formulated as the sum, over each tag in the interrogation region of a reader, of the difference between the desired power level and the actual received power, where the tag set and the reader set are those deployed in the working area and each tag is associated with the set of readers that have it in their interrogation region. This objective function ensures that the received power at a tag from its reader, which is mainly determined by the relative distance and the radiated power of the reader, is higher than the threshold, which guarantees that the tag is activated. That is, by regulating the locations and radiated power of the readers, the optimization algorithm should locate the RFID readers close to the regions where the desired coverage level is higher, while areas requiring lower coverage are accounted for by proper increases of the readers' radiated power.
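The coverage objective above can be sketched in code. This is an illustrative, minimal version, not the paper's exact formulation: the log-distance path-loss model, the threshold value, and all names here are assumptions introduced for the example.

```python
import math

# Assumed required tag power threshold in dBm (illustrative value).
P_THRESHOLD = -10.0

def received_power(reader, tag):
    """Toy log-distance path loss: radiated power minus 20*log10(distance)."""
    (rx, ry, p_rad), (tx, ty) = reader, tag
    d = max(math.hypot(rx - tx, ry - ty), 1e-9)  # avoid log of zero
    return p_rad - 20.0 * math.log10(d)

def coverage_objective(readers, tags):
    """Sum of power shortfalls over all tags (0 when every tag is covered)."""
    total = 0.0
    for tag in tags:
        # Best-serving reader is the one delivering the highest power.
        best = max(received_power(r, tag) for r in readers)
        total += max(0.0, P_THRESHOLD - best)
    return total
```

Minimizing this sum drives readers toward under-covered tags, matching the behavior described in the text.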
2.2. Reader Interference
Reader collision mainly occurs in a dense reader environment, where several readers try to interrogate tags at the same time in the same area, resulting in an unacceptable level of misreads. The main feature of our approach is that interference is handled not by traditional means, such as frequency assignment [24] or reader scheduling [6], but in a more precautionary way. This objective function is formulated over the tag set in the interrogation region of each reader: for each tag, all readers except the best-serving one are considered interfering sources. That is, by changing reader positions and powers according to this functional, the algorithm tries to locate the readers far from each other to reduce interference.
2.3. Economic Efficiency
This aspect can be approached from various points of view. For example, due to stochastic noise, the multipath effect, and attenuation in the propagation channel, readers should be located close to the centers of tag clusters in the hotspots. From this perspective, this objective can be reached by weighing the distance of each tag-cluster center from its best-served reader; here we employ the k-means clustering algorithm to find the tag clusters. The objective is defined in terms of the distance between each cluster center and the position of its best-served reader. In this way the algorithm tries to reduce the distance from the readers to the regions with high tag densities.
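A minimal sketch of this efficiency term, assuming the tag-cluster centers (e.g., from k-means) are already computed; the function and variable names are illustrative, not from the paper.

```python
import math

def efficiency_objective(cluster_centers, readers):
    """Sum over tag-cluster centers of the distance to the closest reader.

    cluster_centers: list of (x, y) cluster-center coordinates
    readers: list of (x, y) reader coordinates
    """
    total = 0.0
    for (cx, cy) in cluster_centers:
        # The "best-served" reader is taken to be the nearest one.
        total += min(math.hypot(cx - rx, cy - ry) for (rx, ry) in readers)
    return total
```

Minimizing this quantity pulls readers toward high-density tag regions, as the text describes.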
2.4. Load Balance
A network with a homogeneous distribution of reader cost gives a better performance than an unbalanced configuration [25]. Thus, in a large-scale RFID system, the set of tags to be monitored needs to be properly balanced among all readers. This objective function is formulated in terms of the number of tags assigned to each reader and the maximum number of tags that the reader can read in unit time (its capacity). It should be noted that the capacity takes different values according to the different types of readers used in the network. This objective aims to minimize the variance of the load conditions by changing the locations and radiated power of the readers.
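The load-balance term can be sketched as follows. This assumes one concrete form of "minimize the variance of load conditions": each reader's load is its assigned-tag count divided by its capacity, and the objective is the variance of those ratios. Names are illustrative.

```python
def load_balance_objective(assigned_counts, capacities):
    """Variance of per-reader load ratios (0 when perfectly balanced).

    assigned_counts: number of tags assigned to each reader
    capacities: maximum tags each reader can read in unit time
    """
    ratios = [n / c for n, c in zip(assigned_counts, capacities)]
    mean = sum(ratios) / len(ratios)
    return sum((r - mean) ** 2 for r in ratios) / len(ratios)
```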
2.5. Combined Measure
In this paper, the overall optimal solution for RNP is represented by a linear combination of the four objective functions, where each objective function is normalized to its maximum value; the normalization is necessary because the four objectives represent nonhomogeneous quantities with very different magnitudes.
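The combined measure above reduces to a weighted sum of normalized objectives. This sketch assumes precomputed objective values and their maxima; the weights are the user-chosen coefficients mentioned in the text.

```python
def combined_objective(f, f_max, w):
    """Weighted sum of objectives, each normalized to its maximum value.

    f: list of raw objective values
    f_max: list of maximum (normalizing) values, one per objective
    w: list of weight coefficients, one per objective
    """
    return sum(wi * (fi / mi) for wi, fi, mi in zip(w, f, f_max))
```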
2.6. Objective Constraint
All the tags in the working area must be covered by a reader. This constraint can be expressed formally with a binary variable that equals 1 if a tag is covered by a reader and 0 otherwise. This constraint maintains the power efficiency of the network and ensures a complete-coverage deployment.
3. Hierarchical Artificial Bee Colony Algorithm
3.1. Canonical Artificial Bee Algorithm
The ABC algorithm is a relatively new SI algorithm that simulates the foraging behavior of a honey bee swarm; it was initially proposed by Karaboga and further developed by Basturk and Akay [26–28]. In ABC, the colony of artificial bees contains three groups of individuals: employed bees, onlookers, and scouts. Employed bees exploit specific food sources and pass quality information to the onlooker bees. Onlooker bees receive information about the food sources and choose a food source to exploit based on this quality information. An employed bee whose food source has been abandoned becomes a scout and starts to search for a new food source. The fundamental mathematical representations are listed as follows.
In the initialization phase, the algorithm generates a group of food sources that correspond to solutions in the search space. The food sources are produced randomly within the boundaries of the variables: x_{ij} = lb_j + rand(0, 1)(ub_j − lb_j), where i = 1, …, SN and j = 1, …, D. SN is the number of food sources and equals half of the colony size. D is the dimension of the problem, representing the number of parameters to be optimized. lb_j and ub_j are the lower and upper bounds of the j-th parameter, respectively. Additionally, the counters that store the number of trials of each bee are set to 0 in this phase.
In the employed bees’ phase, each employed bee is sent to the food source in its memory and finds a neighboring food source. The neighboring food source is produced according to (8): v_{ij} = x_{ij} + φ_{ij}(x_{ij} − x_{kj}), where x_k is a randomly selected food source different from x_i, j is a randomly selected dimension, and φ_{ij} is a random number uniformly distributed in [−1, 1]. The new food source v_i is determined by changing one dimension of x_i. If the value produced in this dimension exceeds its predetermined boundaries, it is reset to the boundary value.
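The neighborhood move of (8) can be sketched directly; the helper name and parameter layout are illustrative.

```python
import random

def neighbor_source(x, x_k, lower, upper, rng=random):
    """Perturb one random dimension of x toward/away from partner x_k (eq. (8)).

    x, x_k: candidate solutions (lists of floats)
    lower, upper: per-dimension bounds
    """
    v = list(x)
    j = rng.randrange(len(x))          # randomly selected dimension
    phi = rng.uniform(-1.0, 1.0)       # uniform random factor in [-1, 1]
    v[j] = x[j] + phi * (x[j] - x_k[j])
    v[j] = min(max(v[j], lower[j]), upper[j])  # reset to boundary if exceeded
    return v
```

A greedy selection between `x` and the returned `v` then keeps the better of the two, as the next paragraph describes.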
The new food source is then evaluated. A greedy selection is applied on the original food source and the new one. The better one will be kept in the memory. The trials counter of this food will be reset to zero if the food source is improved; otherwise, its value will be incremented by one.
In the onlooker bees’ phase, the onlookers receive the information about the food sources shared by the employed bees. They then choose a food source to exploit with a probability related to the nectar amount (fitness value) of the food source; that is, more than one onlooker bee may choose the same food source if it has higher fitness. The probability is calculated according to (9): p_i = fit_i / Σ_{n=1}^{SN} fit_n.
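The fitness-proportional probabilities of (9) amount to a simple normalization; this sketch assumes strictly positive fitness values.

```python
def selection_probabilities(fitness):
    """Probability of each food source being chosen by an onlooker (eq. (9))."""
    total = sum(fitness)
    return [f / total for f in fitness]
```

Onlookers can then sample food sources from this distribution (e.g., with `random.choices(sources, weights=fitness)`).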
After food sources have been chosen, each onlooker bee finds a new food source in its neighborhood following (8), just like the employed bee does. A greedy selection is also applied on the new and original food sources.
In the scout bees’ phase, if a food source has not been improved for a predetermined number of cycles, controlled by a parameter called “limit,” the food source is abandoned and its bee becomes a scout. A new food source is then produced randomly in the search space using (7), as in the initialization phase.
The employed, onlooker, and scout bees’ phases are repeated until the termination condition is met. The best food source, representing the best solution, is then output. The pseudocode of the original ABC algorithm is illustrated in Algorithm 1.

3.2. The Hierarchical Artificial Bee Colony Algorithm
HABC integrates a two-level hierarchical coevolution scheme inspired by the concept and main ideas of the multipopulation coevolution strategy and crossover and mutation operations. The flowchart of HABC is shown in Figure 1. It includes four important strategies: the variables decomposing approach, random grouping of variables, the background vector calculating approach, and the crossover and mutation operation, which are presented as follows.
3.2.1. Hierarchical Multipopulation Optimization Model
As described in Section 3.1, the new food source is produced by a perturbation applied to a single randomly chosen dimension of a randomly chosen bee. As a result, an individual may have discovered some good dimensions, while the other individuals that follow this bee are likely to choose worse vectors in those dimensions and abandon the good ones. Moreover, when solving complex problems, the canonical ABC algorithm, based on a single population, suffers from the following drawback: as the population evolves, all individuals converge prematurely to a local optimum in the first generations, leading to low population diversity and stagnation in successive generations.
Hence, HABC contains two levels, namely, the bottom level and the top level, to balance exploration and exploitation. As shown in Figure 2, in the bottom level, using the variables decomposing strategy, each subpopulation employs the canonical ABC method to search for the part-dimensional optimum in parallel. That is, in each iteration, the subpopulations in the bottom level generate their best solutions, which are concatenated into a complete solution and passed up to the species in the top level. In the top level, the multispecies community adopts an information exchange mechanism based on the crossover operator, by which each species can learn from its neighbors in a specific topology. The vector decomposing strategy and the information exchange (i.e., crossover) operator are described in detail as follows.
3.2.2. Variables Decomposing Approach
The purpose of this approach, inspired by divide and conquer, is to obtain a finer local search in single dimensions. Two aspects must be analyzed: how to decompose the whole solution vector, and how to calculate the fitness of each individual of each subpopulation. The detailed procedure is presented as follows.
Step 1. The simplest grouping method permits a D-dimensional vector to be split into m subcomponents, each corresponding to a subpopulation of s dimensions with M individuals (where D = m × s). The i-th subpopulation is denoted P_i, i = 1, …, m.
Step 2. Construct the complete evolving solution G, which is the concatenation of the best solutions of all subcomponents, where the contribution of the i-th subpopulation is its personal best solution.
Step 3. For each subcomponent P_i, i = 1, …, m, do the following: (a) in the employed bees’ phase, for each individual j, j = 1, …, M, replace the i-th component of G with the i-th component of individual j and calculate the fitness of the new solution; if the fitness improves, the corresponding component of G is replaced; (b) update positions using (8); (c) in the onlooker bees’ phase, repeat (a) and (b).
Step 4. Memorize the best solution achieved so far; compare it with G and memorize the better one.
Under this method, high-dimensional objective function vectors are decomposed into smaller subcomponents that evolve separately. This multipopulation parallel-processing approach enhances the local search ability.
Random Grouping of Variables. To increase the probability that two interacting variables are allocated to the same subcomponent, without assuming any prior knowledge of the problem, we adopt a random grouping scheme with dynamically changing group size, similar to that proposed in [20, 21]. For example, for a problem of 100 dimensions, the group size can be drawn from a predefined set of divisors of 100.
Here, if we randomly decompose the D-dimensional object vector into m subcomponents in each iteration (i.e., we construct each subcomponent by randomly selecting s dimensions from the D-dimensional object vector), the probability of placing two interacting variables into the same subcomponent becomes higher over an increasing number of iterations.
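The random-grouping step above can be sketched as follows. The candidate set of group sizes is an assumed example (divisors of 100); only the shuffling-and-slicing scheme reflects the text.

```python
import random

def random_grouping(D, sizes=(2, 5, 10, 25, 50), rng=random):
    """Randomly partition D dimension indices into equal-size groups.

    Each call picks a group size s from `sizes` (restricted to divisors
    of D), shuffles the dimension indices, and slices them into D/s groups.
    """
    s = rng.choice([z for z in sizes if D % z == 0])
    idx = list(range(D))
    rng.shuffle(idx)                   # random assignment of dimensions
    return [idx[i:i + s] for i in range(0, D, s)]
```

Calling this once per iteration re-shuffles which dimensions evolve together, which is what raises the chance of interacting variables eventually landing in the same subcomponent.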
3.2.3. The Information Exchange Mechanism Based on Crossover Operator between Multispecies
In the top level, we adopt crossover operator with a specific topology to enhance the information exchange between species, in which each species can learn from its symbiotic partner in the neighborhood. The key operations of this crossover procedure are described in Figure 3.
Step 1 (select elites into the best-performing list (BPL)). First, a set of competent individuals from the neighborhood of the current species (i.e., a ring topology) is selected to construct the best-performing list (BPL); individuals with higher fitness have a larger probability of being selected. The size of the BPL is equal to the size of the current species. The individuals of the BPL are regarded as elites. The selection operation mimics the maturing phenomenon in nature, where the generated offspring become better suited to the environment by using these elites as parents.
Step 2 (crossover and mutation between species). To produce well-performing individuals, parents are selected only from the BPL’s elites for the crossover operation. Parents are selected with a tournament scheme: two elites are picked at random, their fitness values are compared, and the one with the better fitness is taken as a parent; a second parent is selected in the same way. Two offspring are created by performing crossover on the selected parents. This paper adopts the arithmetic crossover method, in which the offspring is produced as a random convex combination of two parents selected from the BPL.
Step 3 (update with greedy selection strategy). Not all individuals of the current species are replaced by elites from the BPL; a selection rate CR determines how many individuals are replaced: for a species of size M, the number of replaced individuals is M × CR. Each newly produced offspring is compared with the individual it would replace, applying a greedy selection mechanism in which the better one is retained. Four selection approaches are possible: replacing the best individuals, a medium level of individuals, the worst individuals, or random individuals; hence there are several HABC variants according to the selection approach. Here we choose the simplest approach, replacing the worst individuals.
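Steps 1–2 of the top-level interaction can be sketched as tournament selection plus arithmetic crossover. The random mixing coefficient and function names are assumptions; the paper gives the crossover in (12).

```python
import random

def tournament(bpl, fitness, rng=random):
    """Binary tournament: pick two elites at random, keep the fitter one."""
    i, j = rng.randrange(len(bpl)), rng.randrange(len(bpl))
    return bpl[i] if fitness[i] >= fitness[j] else bpl[j]

def arithmetic_crossover(p1, p2, rng=random):
    """Offspring as a random convex combination a*p1 + (1-a)*p2."""
    a = rng.random()
    return [a * u + (1.0 - a) * v for u, v in zip(p1, p2)]
```

A greedy comparison between each offspring and the species member it would replace (Step 3) then completes the update.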
4. Benchmark Test
In the experimental studies, following the no free lunch (NFL) theorem [29], a set of 10 benchmark functions, listed in the Appendix, is employed to evaluate the performance of the HABC algorithm fully and without a conclusion biased toward particular problems. To compare the different algorithms fairly, we use the number of function evaluations (FEs) as the time measure instead of the number of iterations, because the algorithms do differing amounts of work in their inner loops.
4.1. Experimental Settings
Experiments are conducted with six variants of HABC according to the different CR values. The proposed HABC is compared with six successful EA and SI algorithms: artificial bee colony algorithm (ABC) [26], cooperative coevolutionary algorithm (CCEA) [18], canonical PSO with constriction factor (PSO) [30], cooperative PSO (CPSO) [19], genetic algorithm with elitism (EGA) [31], and covariance matrix adaptation evolution strategy (CMAES) [32].
ABC is a recently developed SI paradigm simulating foraging behavior of bees [26]. CCEA is the earliest cooperative coevolutionary algorithm which applied the divideandconquer approach by Potter and de Jong [18]. CPSO is a cooperative PSO model, cooperatively coevolving multiple PSO subpopulations [19]. EGA is the classical genetic algorithm with elitist selection scheme [31]; the underlying idea of CMAES is to gather information about successful search steps and to use that information to modify the covariance matrix of the mutation distribution in a goal directed, derandomized fashion [32].
In all experiments in this section, the values of the common parameters used in each algorithm, such as population size and total generation number, are chosen to be the same. The population size is set to 50 and the maximum number of evaluations to 100000. For the continuous test functions used in this paper, the dimensions are all set to the same value.
All control parameters of the EA and SI algorithms are set to the defaults of their original publications: the initialization conditions of CMAES are the same as in [32], and the number of offspring candidate solutions generated per time step also follows [32]; the limit parameter of ABC is set to SN × D, where D is the dimension of the problem and SN is the number of employed bees; the split factor for CCEA and CPSO is equal to the number of dimensions [18, 19]; for canonical PSO and CPSO, the learning rates c1 and c2 are both set to 2.05 with the corresponding constriction factor; for EGA, an intermediate crossover rate of 0.8, a Gaussian mutation rate of 0.01, and a global elite operation with a rate of 0.06 are adopted [31]. The parameter settings of all algorithms are listed in Table 1. For the proposed HABC, the species number, the split factor (i.e., the subpopulation number), and the selection rate are tuned first in the next section.

4.2. Sensitivity in Relation to Parameters of HABC
4.2.1. Effects of Species Number
The number of species in the top level of HABC needs to be tuned. Three benchmarks (Sphere, Rosenbrock, and Schwefel) are used to investigate the impact of this parameter. The selection rate is set to CR = 1, and each of the involved benchmark functions is run 50 times. From Figure 5, we observe that increasing the species number yields faster convergence and better results on all test functions; however, the improvement is no longer evident beyond 10 species for most test functions. Thus, in our experiments, the species number for HABC is set to 10 for all test functions.
4.2.2. Choices of CR
The basic benchmark functions are adopted to evaluate the performance of the HABC variants with different CR values. From Table 2, we find that the HABC variant with CR equal to 1 performed best on four of the five functions, while CR equal to 0.05 obtained the best result on one function. Based on these results, we chose CR equal to 1 for the subsequent experiments.

4.2.3. Effects of Dynamically Changing Group Size
Obviously, the choice of the split factor (i.e., the subpopulation number) has a significant impact on the performance of the proposed algorithm. To vary it during a run, the split factor is chosen uniformly at random from a predefined set of group sizes. The variant of HABC with a dynamically changing split factor is then compared with variants using fixed split numbers on four benchmark functions over 50 sample runs. From the results listed in Table 3, we observe that the performance is sensitive to the predefined value, and HABC with a dynamically changing split factor consistently gave better performance than almost all of the fixed settings. Moreover, in most real-world problems we have no prior knowledge of the optimal split value, so the random grouping scheme is a suitable choice.

4.3. Comparing HABC with Other StateoftheArt Algorithms on CEC Benchmark Functions
To verify the effectiveness of the proposed algorithm, the CEC2005 functions are adopted to evaluate the performance of all algorithms. Following Section 4.2, we use the optimal parameter settings of HABC determined there and compare against the CCEA, CPSO, CMAES, ABC, PSO, and EGA algorithms. Table 4 shows the experimental results (i.e., the means and standard deviations of the error values found in 50 sample runs) for each algorithm, and Figure 6 shows the search progress of the average values for all algorithms.

The five shifted and rotated functions mentioned are regarded as the most difficult functions to optimize. HABC outperformed CMAES on three of the five functions. HABC can find the global optimum of one of them within 10000 FEs, because the adopted variables decomposing approach enhances the local search, a key contributing factor for handling high-dimensional problems. On the other hand, CMAES converges extremely fast; however, it either converged very well or tended to stagnate very quickly, especially on the multimodal functions. From the rank values presented in Table 4, the algorithms tested here are ordered as HABC > CMAES > ABC > CCEA > CPSO > PSO > EGA.
5. RFID Network Planning Based on the HABC Algorithm
In this section, the details of proposed approach to solve the RNP problem are described.
5.1. Solution Representation of RNP Problem
In this work, the task of RFID network planning is to deploy several RFID readers in the working area so as to achieve the goals described in Section 2. Figure 7 shows an example of a working area containing 100 RFID tags and 1 RFID reader, where the following three decision variables are chosen: the x-axis coordinate of the RFID reader; the y-axis coordinate of the RFID reader; and the read range (i.e., radiated power level) of the RFID reader.
These variables are encoded into the solution representation shown in Figure 7. We employ a representation in which each solution is a 3N-dimensional real-number vector, where N is the total number of readers deployed in the network. In this representation, 2N dimensions indicate the coordinates of the readers in the 2-dimensional working area, and the other N dimensions denote the interrogation range of each reader (which is determined by the radiated power).
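The 3N-vector representation can be sketched with a pair of encode/decode helpers. The exact ordering of coordinates within the vector is an assumption for illustration; only the 3N layout (2N position values plus N power values) comes from the text.

```python
def encode(readers):
    """Flatten a list of (x, y, power) reader tuples into a 3N vector:
    all x coordinates, then all y coordinates, then all power values."""
    xs = [r[0] for r in readers]
    ys = [r[1] for r in readers]
    ps = [r[2] for r in readers]
    return xs + ys + ps

def decode(vector):
    """Recover the list of (x, y, power) tuples from a 3N vector."""
    n = len(vector) // 3
    return [(vector[i], vector[n + i], vector[2 * n + i]) for i in range(n)]
```

Any candidate solution produced by HABC can then be decoded back into reader placements and power levels for fitness evaluation.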
5.2. Implementation of the HABC Algorithm for the RNP Problem
To apply the HABC algorithms to solve the RNP problem, the following steps should be taken and repeated (Figure 4).
Step 1 (RFID deployment parameter initialization). The deployment parameters consist of the reader control variables and the RFID network topology. The former include the adjustable radiated power range, the corresponding recognition scope (the distance up to which a tag can be read by the reader), and the interference range (the distance within which reader collision mainly occurs).
The network topology includes the shape and dimensions of the region, the number of RFID tags to be used, the tag distribution (i.e., the tag positions) in the working area, and the tag power threshold, the minimum received power level at the tag above which communication between reader and tag can be established.
Step 2 (encoding). The readers' variables, consisting of the positions and radiated power ranges, are encoded into the individual representation of the algorithm as in Table 5. The boundary limits of a solution are defined by the network topology.

Step 3 (population generation). Produce the initial HABC population. Initialize the species; each is divided into subpopulations, and each subpopulation possesses a set of individuals. Individuals over the full-dimensional objective are then randomly generated as shown in Figure 17, where each element is the position of a given state variable in a given individual of a given subpopulation of a given species. The group number is obtained by dividing the dimensions into subcomponents; note that the group number is dynamically changed by the random grouping approach of (11).
Note that each individual has a dimension equal to 3N, where N is the number of RFID readers used in this case: 2N dimensions for the coordinates of the reader positions and N dimensions for the radiated powers of the readers.
Step 4 (construct the complete evolving solution G). G is the concatenation of the best solutions of the subcomponents.
Set .
Step 5 (optimization procedure). Loop over each species; for each subpopulation, do the following:
(1) Fitness calculation: for each individual, replace the corresponding component of the complete solution with that individual's component and calculate the new solution's fitness using (1)–(5). If the new fitness is better, the stored component is replaced.
(2) Update positions using (8).
(3) Memorize the best solution achieved so far; compare it with the complete evolving solution and memorize the better one.
(4) Select elites into the best-performing list (BPL) from the current species' neighborhood.
(5) Apply crossover and mutation between species by (12).
(6) Update the species with the greedy selection strategy and increment the cycle counter.
Step 6 (termination condition). If the cycle count exceeds the predefined limit, stop the procedure; otherwise, go to Step 5.
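Steps 4–6 follow a cooperative-coevolution pattern: each subpopulation optimizes one randomly regrouped block of dimensions while the remaining dimensions are held fixed in a context vector. The sketch below captures only that skeleton, with a simple random perturbation standing in for the paper's full ABC update, crossover, and mutation operators; all names and constants are assumptions.

```python
import random

def habc_sketch(fitness, dim, group_size, pop_size=10, max_cycles=50, bounds=(0.0, 1.0)):
    """Skeleton of Steps 4-6: subpopulations evolve dimension blocks, and the
    context vector G concatenates the best subcomponent solutions."""
    lo, hi = bounds
    G = [random.uniform(lo, hi) for _ in range(dim)]  # complete evolving solution
    best_f = fitness(G)
    for _cycle in range(max_cycles):
        # Random grouping: shuffle the dimension indices each cycle (cf. (11))
        idx = list(range(dim))
        random.shuffle(idx)
        groups = [idx[i:i + group_size] for i in range(0, dim, group_size)]
        for grp in groups:                     # one subpopulation per group
            for _ in range(pop_size):
                cand = G[:]                    # plug trial subcomponent into G
                for d in grp:
                    cand[d] = min(hi, max(lo, G[d] + random.uniform(-0.1, 0.1)))
                f = fitness(cand)
                if f < best_f:                 # greedy selection (minimization)
                    best_f, G = f, cand
    return G, best_f
```

Evaluating each trial subcomponent inside the full context vector is what lets the decomposed subproblems cooperate toward one complete RNP solution.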
5.3. Simulation Configurations
The readers used here are mobile and the tags are passive. According to [6, 33–37], the related RFID reader parameters can be set as in Table 6. Here the interrogation range corresponding to the reader radiated power is computed as in [38]. The proposed algorithm is evaluated on two RNP instances of different scales, namely, Cd100 and Rd500. The Cd100 instance is tested on a 30 m × 30 m working space with 100 clustered distributed tags. The other instance, Rd500, contains 500 randomly distributed tags in a 150 m × 150 m working space (shown in Figure 8). In this experiment, the parameter settings for HABC, ABC, PSO, EGA, and CCEA are the same as in Section 4.1. In particular, the PS^{2}O algorithm proposed by us in [6], an effective approach for solving RNP, is employed for comparison with the proposed HABC. For PS^{2}O, the number of swarms, the swarm size, the constriction factor, and the learning rates are set as in [6].

5.4. Results of the RNP with Cd100
In this section, an RNP instance, called Cd100, in which 10 readers are deployed in a working space with 100 clustered distributed tags, is employed; it can be considered as a continuous optimization problem with 30 dimensions, shown in Figure 8(a). All algorithms are first tested on each of the four single objective functions. In single objective optimization, the result provides an optimal solution for one objective without taking the others into account. After that, the test of the combined objective function is implemented. The weighted coefficients used in this instance are fixed in advance and can be varied according to the demands of different networks. The results, consisting of the best, worst, mean, and standard deviation of the optimal solutions over 50 sample runs, are listed in Table 7.
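The combined objective used in these tests is a weighted sum of the single objectives. A generic sketch of this aggregation follows; the weights and the dummy objective functions in the example are placeholders, not the paper's actual values.

```python
def combined_objective(weights, objectives, solution):
    """Weighted-sum aggregation of single objectives into one scalar."""
    return sum(w * f(solution) for w, f in zip(weights, objectives))

# Example with dummy stand-in objectives on a toy solution vector
f_a = lambda s: 1.0 - sum(s) / len(s)   # e.g., an uncovered-tags penalty
f_b = lambda s: max(s)                  # e.g., an interference penalty
value = combined_objective([0.7, 0.3], [f_a, f_b], [0.2, 0.4, 0.6])
```

Tuning the weights trades the objectives off against one another, which is why the coefficients can be varied to suit different network demands.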

From Table 7, it is observed that HABC obtains better results on most of the proposed objective functions in comparison with the other algorithms (PS^{2}O, EGA, CCEA, and PSO), with one exception. In particular, the good performance on the combined objective function suggests that the proposed HABC outperforms the other algorithms in optimizing the models presented in this paper.
Figure 9 illustrates the result when only the coverage of readers is considered. Figure 9(a) gives the convergence of the average values obtained by HABC and the other algorithms over 50 sample runs on the coverage objective function. The corresponding reader locations and the distribution of their radiated power optimized by HABC are shown in Figure 9(b). In this case, to meet the demand of higher tag coverage, the algorithms adjust the power and balance the deployment of readers in the working area. From Figure 9(a), HABC converges faster and obtains better results than the other algorithms. From Figure 9(b), it can be clearly observed that HABC schedules the reader network to achieve higher tag coverage.
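The coverage objective can be sketched as the fraction of tags whose received power from at least one reader exceeds the tag power threshold. The inverse-power path-loss model below is a simplification for illustration only; the paper computes the interrogation range from the reader radiated power as in [38].

```python
import math

def coverage(readers, tags, threshold, path_loss_exp=2.0):
    """Fraction of tags covered by at least one reader.
    readers: list of (x, y, power); tags: list of (x, y)."""
    covered = 0
    for tx, ty in tags:
        for rx, ry, p in readers:
            d = max(math.hypot(tx - rx, ty - ty + (ty - ry)), 1e-9)  # Euclidean distance, floored to avoid div-by-zero
            if p / d ** path_loss_exp >= threshold:  # simple inverse-power received-signal model
                covered += 1
                break                                # tag already covered; next tag
    return covered / len(tags)
```

Maximizing this fraction is what drives the algorithms to spread the readers across the tag distribution in Figure 9(b).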
Note that the visual results of the RNP with the other single objectives show similar trends, as shown in Figures 10–14. Figure 10 illustrates the result obtained by all algorithms when only the interference between readers is considered. In this optimization mode, the algorithms aim to maintain sufficient distances between RFID readers. We can observe from Figure 10(b) that the network operating condition deteriorates: complete tag coverage is no longer guaranteed even though the interference objective is perfectly optimized, because the readers are driven away from each other and thus end up far from high-traffic areas in order to minimize reader interference.
Similarly, Figures 11 and 12 show the results considering economic efficiency and load balance, respectively. In the optimization mode that considers only economic efficiency, the algorithms aim to locate the readers close to the cluster centers of the tag-dense areas. From Figures 11(a) and 11(b), it is clear that the HABC algorithm has a significant advantage in convergence and solution accuracy. From Figure 12, HABC outperforms all the other algorithms in the optimization mode that balances the number of tags served by each reader against the reader's radiated power.
Figure 13(a) shows the convergence of all algorithms on the combined objective function when all the requirements are considered. We can observe from Figure 13(b) that HABC still finds the best solution, which is a reasonable compromise between the different demands. Moreover, the convergence curves of the five objective functions show a similar tendency, indicating that the proposed HABC has a faster convergence rate than the other algorithms.
5.5. Results of the RNP with Rd500
Apparently, with an increasing number of deployed readers and tags in the working area, the complexity of solving the RNP problem increases exponentially. Therefore, to further verify the efficiency of the proposed algorithm, a larger-scale instance, namely, Rd500, is also employed: 50 readers are deployed in a working space with 500 randomly distributed tags, which can be considered as a continuous optimization problem with 150 dimensions, shown in Figure 8(b). Similar to the test in Section 5.4, HABC and the other algorithms are first tested on the four single objectives presented in (1)–(5). Then the test of the combined objective function is implemented, with the same weighted coefficients as in Section 5.4. All testing results over 50 sample runs are listed in Table 8. As shown in Table 8, HABC obtains superior best, worst, and mean values compared with the other algorithms on all five objective functions, improving on its performance on the Cd100 instance, where it outperformed the other algorithms on three of the five objective functions. As expected, by employing the decomposing strategy of HABC, the network problem can be divided into several smaller ones, reducing the computational complexity and achieving better results.
