Abstract
The design of wireless sensor networks (WSNs) in the Internet of Things (IoT) faces many new challenges that must be addressed through the optimization of multiple design objectives. Therefore, multiobjective optimization is an important research topic in this field. In this paper, we develop a new efficient multiobjective optimization algorithm based on the chaotic ant swarm (CAS). Unlike the ant colony optimization (ACO) algorithm, CAS takes advantage of both the chaotic behavior of a single ant and the self-organization behavior of the ant colony. We first describe the CAS and its nonlinear dynamic model and then extend it to a multiobjective optimizer. Specifically, we first adopt the concepts of "nondominated sorting" and "crowding distance" to allow the algorithm to obtain the true or near-optimal Pareto front. Next, we redefine the rule of "neighbor" selection for each individual (ant) to enable the algorithm to converge and to distribute the solutions evenly. Also, we collect the current best individuals within each generation and employ the "archive-based" approach to expedite the convergence of the algorithm. The numerical experiments show that the proposed algorithm outperforms two leading algorithms on most well-known test instances in terms of Generational Distance, Error Ratio, and Spacing.
1. Introduction
The Internet of Things (IoT) is an emerging paradigm for information collection, communication, processing, and application that is rapidly gaining ground in a wide variety of fields, including manufacturing, transportation, healthcare, environmental surveillance, and many other areas. The basic idea of this new networking and computing paradigm is the pervasive presence of a variety of objects (things), such as sensors, actuators, mobile devices, and RFID tags, which are able to interact with each other and communicate with the Internet infrastructure. Wireless sensor networks (WSNs) form an indispensable ingredient of the IoT for data collection and transmission, thus having a significant impact on the overall service performance of the IoT. Therefore, optimization of the WSN design plays a crucial role in constructing a high-performance IoT that meets the requirements of various applications.
Designing a large-scale WSN for supporting the IoT faces many challenges. These challenges are mainly caused by the limited resources available in WSNs, including battery lifetime, computing capacity, and memory space at sensor nodes, and also by the dynamic network topology, especially in ad hoc sensor networks. On the other hand, due to the wide spectrum of applications that may be supported by the IoT in the future, WSN design faces various design goals that often conflict with each other, for example, short delay, high throughput, minimal energy consumption, and low cost. Network design in such application scenarios must consider multiple factors in order to achieve trade-offs among multiple objectives and optimal overall performance for the entire network. Therefore, multiobjective optimization algorithms (MOAs) are fundamentally important to WSN design in the IoT development.
MOAs may be employed to address various key problems in WSN design. For example, the sensor node placement problem involves trade-offs between multiple objectives such as maximum coverage and minimum energy consumption. When sensor nodes are organized in clusters, the selection of cluster heads is also a challenging issue that must be tackled from a multiobjective optimization perspective. Routing strategy is another fairly important problem in WSN design [1], which may have a direct impact on multiple aspects of network performance, including data transmission delay, throughput, and the lifetime of individual nodes as well as the entire network. Mobile agents are often used in WSNs to visit a sequence of sensors and fuse important data. Optimal agent routes also often need to meet multiple objectives such as minimizing total path delay, loss, and energy consumption [2]. All of these are challenging multiobjective problems that require sophisticated algorithms. On the other hand, special features of WSNs, in particular limited computing power, storage space, and battery capacity, bring new requirements on the time and space efficiency of the MOAs that can be applied in such an environment. This motivates research on devising new algorithms for multiobjective optimization.
During the last twenty years, many MOAs have been proposed [3, 4]. In particular, multiobjective evolutionary algorithms have been extensively studied [5–14]. For instance, the nondominated sorting genetic algorithm II (NSGA-II) [12] and the strength Pareto evolutionary algorithm (SPEA2) [10] are among the most widely used in the fields of engineering, economics, and business management [15–17]. On the other hand, some other MOAs have also been proposed with the development of bio-inspired heuristics [18–21]. The representative one in this line of research is multiobjective particle swarm optimization (MOPSO), a multiobjective optimizer extended from particle swarm optimization (PSO). In addition, Coello et al. [21] showed that MOPSO can achieve better performance than both NSGA-II and SPEA2 on some well-known test instances.
Inspired by a biological experiment on ants' chaotic behavior, a chaotic ant swarm (CAS) optimization algorithm that models ants' chaotic behaviors mathematically was proposed in [22]. Specifically, CAS is a derivative-free method; the individuals of the ant colony exchange information and benefit from either their own experiences or the experiences of other individuals while exploring promising areas of the search space. The model and mechanism of CAS are different from those of ant colony optimization (ACO) [23]. The CAS algorithm has been successfully applied to various areas such as fuzzy system identification [24], economic dispatch [25], computation of multiple global optima [26], data clustering, and parameter identification [27].
In this paper, we mainly focus on extending the CAS algorithm to a multiobjective optimizer. The major contributions of this paper are summarized as follows.
(i) We employ the basic concepts of "nondominated sorting" and "crowding distance" introduced by NSGA-II to allow the proposed algorithm to obtain the true or near Pareto optima.
(ii) We redefine the rule of "neighbor" selection for each individual (ant) to enable the algorithm to converge and to distribute the solutions evenly. Also, we allow CAS to collect the current best individuals within each generation, and we employ the "archive-based" approach to speed up the convergence of the algorithm.
(iii) We conduct thorough comparisons between CAS and two leading multiobjective optimization algorithms (MOPSO and NSGA-II) over representative benchmarks. The results show that the proposed algorithm outperforms MOPSO and NSGA-II on most of the test instances in terms of Generational Distance, Error Ratio, and Spacing.
The remainder of this paper is organized as follows. Section 2 presents notations and basic concepts of multiobjective optimization. Section 3 briefly describes the CAS algorithm. The multiobjective version of CAS and its related analysis are proposed in Section 4. Section 5 shows the experimental results and compares the average performance of the proposed algorithm against that of both MOPSO and NSGA-II on benchmark test problems. Section 6 concludes the paper.
2. Notations and Basic Concepts
In this section, we introduce the basic concepts of multiobjective optimization. Terminology is consistent with that in [12, 13, 21].
Definition 1 (global minimum). Given a function $f : A \subseteq \mathbb{R}^n \to \mathbb{R}$, $A \neq \emptyset$, the value $f^* \triangleq f(x^*) > -\infty$ is called a global minimum if and only if $\forall x \in A : f(x^*) \le f(x)$. In this case, $x^*$ is the minimum solution, $f$ is the objective function, the set $A$ is the feasible region ($A \subseteq S$), $\emptyset$ is the empty set, and $S = \mathbb{R}^n$ represents the whole search space.
Definition 2 (multiobjective optimization problem). The multiobjective optimization problem is to minimize $F(x) = (f_1(x), \ldots, f_m(x))$ subject to $x \in \Omega$, where $\Omega$ is the decision (variable) space, $\mathbb{R}^m$ is the objective space, and $F : \Omega \to \mathbb{R}^m$ consists of $m$ real-valued objective functions.
Definition 3 (dominate, Pareto set, Pareto front). Let $u = (u_1, \ldots, u_m)$, $v = (v_1, \ldots, v_m)$ be two vectors; $u$ is said to dominate $v$ if $u_i \le v_i$ for all $i = 1, \ldots, m$, and $u \neq v$. A point $x^* \in \Omega$ is called (globally) Pareto optimal if there is no $x \in \Omega$ such that $F(x)$ dominates $F(x^*)$. The set of all Pareto optimal points, denoted by $PS$, is called the Pareto set. The set of all Pareto objective vectors, $PF = \{F(x) \mid x \in PS\}$, is called the Pareto front.
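The dominance relation of Definition 3 is easy to state in code. The following sketch (minimization assumed; function names are ours) checks dominance between two objective vectors and extracts the nondominated subset of a set of vectors:

```python
def dominates(u, v):
    """Return True if objective vector u dominates v under minimization:
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points):
    """Return the subset of objective vectors not dominated by any other."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, `(1, 2)` dominates `(2, 3)`, while `(1, 3)` and `(2, 2)` are mutually nondominated.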
3. Chaotic Ant Swarm Algorithm
The phenomena and behaviors of social insect societies such as ant colonies have fascinated many researchers in recent years. What particularly strikes occasional observers as well as scientists is the high degree of societal organization of ants; that is, ant colonies can accomplish complex tasks in spite of the limited capabilities of individual ants. Through information exchange, the global self-organization behavior of social ants emerges. Such emergent behavior was reported in Leptothorax ant colonies and was related to chaotic dynamics. Global oscillations of colony activity were also reported, together with the observation that individual ant behavior can be characterized by low-dimensional strange attractors [28]. Meanwhile, the study of ant chaotic behaviors and of their self-organizing capacities has greatly interested computer scientists in developing models of distributed organization, which are useful in solving difficult optimization and distributed control problems. In the following, we give the detailed chaotic ant swarm algorithm based on biological observations and investigations of the chaotic and self-organizing behaviors of ants.
3.1. Chaotic and SelfOrganizing Behaviors of Ants
The chaotic behavior of insects was first presented by Cole in his experimental studies on activity cycles of Leptothorax ants [28]. Cole used a solid-state automatic digital camera to examine the dynamical behaviors of isolated individual ants and of the ant colony. He found some interesting results: "the ant behavior may be chaotic. The attractor of the movement activity of single, isolated Leptothorax longispinosus ants has a small, noninteger dimension characteristic of low-dimensional chaos. The activity of entire colonies of ants yields an integer dimension that is consistent with periodicity in activity." He even speculated that "The existence of chaos in animal behavior can have several important implications. Variation in the temporal component of individual behavior may not be due simply to chance variations in the stochastic world, but to deterministic processes that depend on initial conditions." Inspired by Cole's observation, we give a nonlinear dynamic model called CAS to describe the chaotic and self-organizing behaviors of ants.
3.2. Nonlinear Dynamic Model of Chaotic Ant Swarm
ACO algorithms and the CAS algorithm are both inspired by ant behaviors, but the fundamental principles of these two algorithms are quite different. ACO algorithms, which are based on probability theory, explain how ants find the shortest paths between food sources and their nests by depositing pheromone. However, ACO algorithms do not consider any chaotic behaviors of single ants. The chaotic ant swarm algorithm, on the other hand, is based on a chaotic search strategy and the self-organization of the ant colony. We now give the nonlinear dynamic model of CAS as follows.
Consider an ant colony of $N$ ants located in an $n$-dimensional decision (search) space, trying to minimize a function $f$. The position of ant $i$ is denoted by $z_i = (z_{i1}, z_{i2}, \ldots, z_{in})$, where $i = 1, 2, \ldots, N$.
In the initial stage, we employ a chaotic system to perform the chaotic search. The underlying chaotic system is adapted from the model described by Sole et al. in [29] (for more details please see our previous work [22–28, 30]). Each ant is influenced by its current position, the best position found so far, its neighbors, and the organization process of the swarm. The individual ant's chaotic behavior is adjusted by introducing a successive decrement of the organization variable $y_i(t)$. Such an organization variable can lead the individual to move to a new site and then acquire the best fitness. In addition, to achieve the goals of information exchange between individuals and of movement toward the site of best fitness, we further introduce a quantity $p_{id}(t)$. The term $p_{id}(t)$ is selected based on fitness theory, which has been substantially developed in optimization theory. Thus, we obtain the following dynamical optimization system of the chaotic ant swarm:

$$y_i(t) = y_i(t-1)^{(1+r_i)},$$
$$z_{id}(t) = z_{id}(t-1)\, e^{\left(1 - e^{-a y_i(t)}\right)\left(3 - \psi_d z_{id}(t-1)\right)} + e^{-2a y_i(t) + b}\left(p_{id}(t-1) - z_{id}(t-1)\right), \tag{2}$$

where $a$ is a sufficiently large positive constant, $b$ is a constant, $\psi_d$ determines the selection of the search range of the $d$th element of the variable in the search space, $r_i$ is termed the organization factor of ant $i$, and $z_{id}(t)$ is the current state of the $d$th dimension of the individual ant $i$, where $d = 1, 2, \ldots, n$. $p_{id}(t)$ is the best position found by the $i$th ant and its neighbors within $t$ steps, $y_i(t)$ is the current state of the organization variable, $t$ denotes the current time step, and $t-1$ is the previous step.
$a$ and $b$ control the convergence of (2); (2) may converge faster as $b$ gets larger. The organization factor $r_i = 0$ means the ant swarm is not organized. Under this condition, $e^{-a y_i(t)}$ and $e^{-2a y_i(t)+b}$ are approximately equal to zero, and (2) can be rewritten as $z_{id}(t) = z_{id}(t-1)\, e^{3 - \psi_d z_{id}(t-1)}$. If $r_i$ is very large, then the total time of the "chaotic" search is small and the system converges very quickly; however, the desired optima (or near optima) cannot be obtained in this case. On the other hand, if $r_i$ is small, then the time of the chaotic search is large, the system converges slowly, and the running time will be longer. The value of $r_i$ is thus chosen as a small positive constant. To give each ant a different $r_i$, we can, for example, set it using a uniformly distributed random number. After the chaotic search, $y_i(t)$ approximately equals zero, and the convergence of (2) will be mainly determined by the term $e^{-2a y_i(t)+b}(p_{id}(t-1) - z_{id}(t-1))$. Under this condition, when $t \to \infty$, the state $z_{id}(t)$ of (2) will converge to $p_{id}$.
We found that the above model of the chaotic ant swarm, (2), searches for optima in constrained positive or negative intervals. Specifically, if the initial state satisfies $z_{id}(0) > 0$, (2) realizes the search in intervals in which all $z_{id} > 0$, and if $z_{id}(0) < 0$, (2) realizes the search in intervals in which all $z_{id} < 0$. We therefore introduce a shift $v_i$ and give the following version of the CAS model:

$$y_i(t) = y_i(t-1)^{(1+r_i)},$$
$$z_{id}(t) = \left(z_{id}(t-1) + v_i\right) e^{\left(1 - e^{-a y_i(t)}\right)\left(3 - \psi_d \left(z_{id}(t-1) + v_i\right)\right)} - v_i + e^{-2a y_i(t) + b}\left(p_{id}(t-1) - z_{id}(t-1)\right), \tag{3}$$

where $v_i$ determines the search region of ant $i$ and offers the advantage that ants can search diverse regions of the problem space. If $v_i > 0$, the chaotic attractor of ant $i$ moves toward the negative orientation compared to the chaotic attractor of (2). The values of $v_i$ should be suitably selected according to the actual optimization problem. We call (3) the general algorithmic model of the chaotic ant swarm. In this model, the initial position of an individual ant can be selected randomly within the search region determined by $\psi_d$ and $v_i$. More details about CAS are given in [22].
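As an illustration only, a CAS-style update for one dimension of one ant can be sketched as follows. The functional form and all parameter values here are our assumptions based on the description above (a decaying organization variable, a chaotic term whose influence fades, and an attraction term toward the best known position); this is not the authors' reference implementation:

```python
import math

def cas_step(z, y, p, r, psi, v, a=200.0, b=0.5):
    """One CAS-style update of a single dimension of one ant (a sketch).

    z: current position, y: organization variable, p: best known position,
    r: organization factor, psi: search-range factor, v: search-region shift,
    a, b: illustrative constants (values are our assumptions)."""
    # The organization variable decays toward 0 over time (0 < y < 1).
    y_new = y ** (1.0 + r)
    # Chaotic term: dominates while y is still large (exp(-a*y) ~ 0).
    chaos = (z + v) * math.exp((1.0 - math.exp(-a * y_new))
                               * (3.0 - psi * (z + v))) - v
    # Attraction toward the best known position: dominates once y ~ 0.
    follow = math.exp(-2.0 * a * y_new + b) * (p - z)
    return chaos + follow, y_new
```

Iterating this map, the ant wanders chaotically at first and then settles on the best known position `p` once the organization variable has decayed, matching the qualitative behavior described in the text.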
4. Multiobjective Chaotic Ant Swarm Algorithm
4.1. From SingleObjective to Multiobjective
Equation (3) enables the CAS algorithm to obtain the optima or near optima with "satisfactory" precision when handling single-objective optimization problems [22–28, 30]. In a naïve way, we directly extend CAS to a multiobjective CAS by employing the basic concepts of "nondominated sorting" and "crowding distance" [12]. Unfortunately, this extended CAS algorithm cannot achieve our desired goal. Figure 1 presents the result obtained by such an algorithm when solving the well-known multiobjective optimization problem ZDT6 [12]:

$$\min F(x) = (f_1(x), f_2(x)),$$

where

$$f_1(x) = 1 - e^{-4x_1} \sin^6(6\pi x_1), \qquad f_2(x) = g(x)\left[1 - \left(\frac{f_1(x)}{g(x)}\right)^2\right],$$
$$g(x) = 1 + 9\left[\frac{\sum_{i=2}^{10} x_i}{9}\right]^{0.25},$$

subject to $0 \le x_i \le 1$, $i = 1, 2, \ldots, 10$.
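For reference, ZDT6 is straightforward to implement. This sketch assumes the standard 10-variable formulation of the ZDT suite (the function name is ours):

```python
import math

def zdt6(x):
    """ZDT6 test problem: x is a list of 10 decision variables in [0, 1].
    Returns the objective pair (f1, f2), both to be minimized."""
    f1 = 1.0 - math.exp(-4.0 * x[0]) * math.sin(6.0 * math.pi * x[0]) ** 6
    g = 1.0 + 9.0 * (sum(x[1:]) / (len(x) - 1)) ** 0.25
    f2 = g * (1.0 - (f1 / g) ** 2)
    return f1, f2
```

On the Pareto set (where all of $x_2, \ldots, x_{10}$ are zero), $g(x) = 1$ and the front satisfies $f_2 = 1 - f_1^2$, which makes ZDT6 a useful check of both convergence and spread.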
As shown in Figure 1, the obtained results are far from the Pareto set; that is, this naively extended algorithm fails to explore the optima. This failure motivates us to reconsider the behaviors of both individual ants and ant colony.
Practically, in the real world, an individual ant performs the chaotic search at the initial time. If a pioneer ant has found a better position, the neighboring ants will follow it by exchanging information. As the search evolves, all of the ants will eventually reach the source of food. This is the basic idea behind the single-objective CAS algorithm. However, the organization of the ant colony for multiobjective optimization is quite different. In single-objective optimization, the algorithm can find the optima based on the assumption that a "better" spatial position of a neighbor leads to a "better" objective value. But this assumption does not hold in multiobjective optimization. The neighbor should instead be redefined as an individual ant in the current first front to enable the algorithm to converge. On the other hand, individual ants that are in the current first front travel toward each other via information exchange. For example, if ant $i$ and ant $j$ are two neighboring ants in the current front, then ant $i$ updates its position toward that of ant $j$ by exchanging information with ant $j$, since ant $i$ supposes ant $j$ is better than itself. In the same way, ant $j$ updates its position toward that of ant $i$.
4.2. Multiobjective Chaotic Ant Swarm (MOCAS) Algorithm
Besides the differences mentioned above, an archive-based approach is also employed in MOCAS to store nondominated solutions generated in the past. The use of the redefined concept of neighbor selection, together with the historical record of previously found nondominated vectors, allows the MOCAS algorithm to converge quickly to the globally nondominated solutions.
The detailed steps of the MOCAS algorithm are given in Algorithm 1.

Another desired goal of MOCAS is to distribute the obtained solutions evenly on the Pareto front. In the proposed MOCAS, we use the crowding distance mechanism to guide the algorithm toward a well-distributed Pareto front. Specifically, in the neighbor-selection step, the algorithm first sorts the current first front according to the crowding distance. When the individual with the largest crowding distance selects a neighbor, it deterministically selects the one with the smallest crowding distance in the same front. This idea is also inspired by ant behavior in the real world: when ants find several sources of food, the ants at a crowded source move toward the uncrowded regions. We will validate the effectiveness of this idea in the next section.
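The crowding-distance computation used in this selection rule can be sketched in NSGA-II style (a simplified illustration; names are ours). Boundary points of the front receive infinite distance; interior points accumulate, per objective, the normalized gap between their two neighbors:

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in a front.
    Boundary points get infinite distance; larger means less crowded."""
    n = len(front)
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # Sort indices by the k-th objective.
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            if dist[order[j]] != float("inf"):
                dist[order[j]] += (front[order[j + 1]][k]
                                   - front[order[j - 1]][k]) / span
    return dist
```

Sorting the front by these values, the least crowded individual can then be paired with the most crowded one, as described above.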
Now we analyze the time complexity of one iteration of the entire algorithm. The initialization step sets up the whole algorithm, including the ant colony and the related parameters; it takes $O(nN)$ time, where $N$ is the population size, $n$ is the dimension of the decision variables, and $m$ is the number of objectives.
The main loop first evaluates the objectives for each individual; this procedure takes $O(mN)$ time. The next step classifies the population by nondominated sorting and takes $O(mN^2)$ in the worst case. Then the crowding distance is calculated for the individuals in the current front; this step takes $O(mN \log N)$. The archive update is performed by nondominated sorting, so it has the same time complexity as the classification step. Finally, a new population is generated by (3); this takes $O(nN)$. Hence, the time complexity of one iteration of MOCAS in the worst case is $O(mN^2)$.
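The nondominated-sorting step with the stated $O(mN^2)$ worst-case cost can be sketched as NSGA-II's fast nondominated sort (our illustration, not the paper's code). Each point records the set of points it dominates and a count of how many points dominate it; peeling off the points with count zero yields the successive fronts:

```python
def fast_nondominated_sort(points):
    """Classify objective vectors into fronts F1, F2, ... (minimization).
    Returns a list of fronts, each a list of indices into `points`."""
    def dom(u, v):
        return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

    n = len(points)
    dominated_by_me = [[] for _ in range(n)]  # indices each point dominates
    cnt = [0] * n                             # how many points dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dom(points[i], points[j]):
                dominated_by_me[i].append(j)
            elif dom(points[j], points[i]):
                cnt[i] += 1
        if cnt[i] == 0:
            fronts[0].append(i)               # first front: nondominated
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by_me[i]:
                cnt[j] -= 1
                if cnt[j] == 0:
                    nxt.append(j)             # joins the next front
        fronts.append(nxt)
        k += 1
    return fronts[:-1]
```

The double loop over all pairs with an $O(m)$ comparison gives the $O(mN^2)$ bound quoted above.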
5. Numeric Simulations
In this section, we test the MOCAS algorithm on several well-known problems and compare its performance against that of the state-of-the-art multiobjective optimization algorithms NSGA-II (the code is available at http://www.iitk.ac.in/kangal/codes.shtml) and MOPSO (the code is available at http://www.cs.cinvestav.mx/~EVOCINV/software.html). Another important algorithm, SPEA2, is not compared here since its performance is worse than that of NSGA-II in most cases [12]. We use the typical parameter settings for these three algorithms, except that the population size and the number of generations are set to 200 and 2000, respectively. We run each test instance 30 times to statistically evaluate the performance of the three algorithms.
5.1. Simulation Results
The test problems are listed in Table 1; they are among the most frequently used test instances in multiobjective optimization. All problems have two objective functions, and none of them have any constraints. Table 1 also describes the number of variables, their boundaries, and the optimal solutions for each problem.
Figures 2–9 depict the nondominated solutions produced by the MOCAS, MOPSO, and NSGA-II algorithms, respectively, when applied to each of the problems in Table 1. The red curve in these figures represents the true Pareto front of the corresponding test problem. These figures show that, on every test case, the proposed MOCAS algorithm either clearly outperforms both MOPSO and NSGA-II, outperforms one of the two while performing roughly the same as the other, or performs roughly the same as both. This suggests an average superiority of the MOCAS algorithm over the MOPSO and NSGA-II algorithms.
Specifically, Figures 5 and 6 show that the output of the MOCAS algorithm converges to the true Pareto front for test problems P4 and P5, while neither the MOPSO algorithm nor the NSGA-II algorithm is able to do so for either of the two problems. Figure 4 shows the comparison on a noncontinuous test function. The results demonstrate that the performance of the MOCAS algorithm is roughly equivalent to that of MOPSO but superior to that of NSGA-II for test problem P3, and Figure 7 demonstrates that the performance of the MOCAS algorithm is roughly equivalent to that of NSGA-II but superior to that of MOPSO for test problem P6. Figures 2, 3, 8, and 9 all suggest that the performance of the MOCAS algorithm is closely equivalent to that of both MOPSO and NSGA-II for test problems P1, P2, P7, and P8, respectively. A detailed quantitative performance analysis is given in the next subsection to compare the three algorithms precisely.
5.2. Performance Evaluations
In order to further evaluate the performance of these three algorithms quantitatively, we adopt the following three performance metrics as suggested in [21, 31–33].
5.2.1. Generational Distance (GD)
This metric measures how far the obtained nondominated solutions are from the true Pareto front. It is defined as

$$GD = \frac{\sqrt{\sum_{i=1}^{n} d_i^2}}{n},$$

where $n$ is the number of obtained nondominated solutions and $d_i$ is the Euclidean distance (measured in objective space) between the $i$th nondominated solution and the nearest member of the Pareto front.
5.2.2. Error Ratio (ER)
This metric was originally proposed by Van Veldhuizen [32] to measure the percentage of obtained nondominated solutions that are not on the Pareto front:

$$ER = \frac{\sum_{i=1}^{n} e_i}{n},$$

where $n$ is the number of obtained nondominated solutions, $e_i = 0$ means the $i$th solution is a member of the Pareto set, and $e_i = 1$ otherwise.
5.2.3. Spacing (SP)
This metric measures the extent of spread of the obtained nondominated solutions. It is defined as

$$SP = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\bar{d} - d_i\right)^2},$$

where $d_i = \min_{j}\left(\left|f_1^i(x) - f_1^j(x)\right| + \left|f_2^i(x) - f_2^j(x)\right|\right)$, $i, j = 1, \ldots, n$, $\bar{d}$ is the mean of all $d_i$, and $n$ is the number of obtained nondominated solutions.
It should be noted that the metrics GD and ER describe the first goal of multiobjective optimization algorithms, as mentioned previously in the design of MOCAS, while SP reflects the second goal. In the following, we present the comparison of the three algorithms with respect to the above performance metrics.
Tables 2, 3, 4, 5, 6, 7, 8, and 9 show the comparison of the results from the three algorithms with respect to GD, ER, and SP, where the boldfaced numbers indicate the best average GD (or ER or SP) across the three algorithms. It can be seen that the average performance of MOCAS is the best when the three algorithms are run on the test problems P2, P3, P4, P5, P6, and P8. For the test problems P1 and P7, however, the tables show that NSGA-II has the best GD and ER, while MOCAS has the best SP. By zooming in on the corresponding figures (Figures 2 and 8) and inspecting the details, we see that MOCAS does not fully converge to the true Pareto front but has a fairly good distribution of the obtained solutions. This observation matches the quantitative performance evaluations.
6. Conclusion
In this paper, we proposed a novel bio-inspired multiobjective optimization algorithm named MOCAS for designing wireless sensor networks in the Internet of Things by extending previous work on chaotic ant swarms. A straightforward extension from the single-objective CAS model to a multiobjective CAS model was initially attempted by introducing the "nondominated sorting" and "crowding distance" notions into the single-objective CAS model. This approach, however, turns out to be a failure, since its outcomes are not even close to the true Pareto front. We then redefined the concept of "neighbors" and the "neighbor-selecting" rules and incorporated an "archive-based" approach into the algorithm, allowing the resulting MOCAS algorithm to converge quickly to the true Pareto front with an evenly distributed set of solutions. By testing MOCAS on well-known multiobjective optimization problems and comparing the results with those produced by the state-of-the-art peer algorithms MOPSO and NSGA-II, we have seen that the proposed MOCAS algorithm outperforms the other two algorithms on average, which evidently shows the competitiveness of the MOCAS algorithm in dealing with multiobjective optimization problems.
Since the variables of MOCAS are bounded by symmetric intervals in this paper, one issue that we would like to explore in the future is how to set the search-region parameter $v_i$ so that the algorithm covers a wider range of multiobjective optimization problems. Another interesting issue that we would also like to study is the further extension of MOCAS to handle constrained multiobjective optimization problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.