Abstract

Recently, nature-inspired techniques have become valuable to many intelligent systems in different fields of technology and science. Among these techniques, the Ant System (AS) has proven particularly useful. AS is a computational system inspired by the foraging behavior of ants and intended to solve practical optimization problems. In this paper, we introduce the AntStar algorithm, a swarm-intelligence-based method that enhances the optimization capability and performance of AS by integrating AS with the A* algorithm. To verify the efficiency of the proposed algorithm, we applied AntStar to the single-source shortest-path problem. The experimental results illustrate the robustness and accuracy of the AntStar algorithm.

1. Introduction

Natural swarms have inspired swarm intelligence. Many algorithms and techniques based on swarm intelligence have been developed to solve optimization problems, such as the Ant System (AS) [1, 2], particle swarm optimization (PSO) [3], the artificial bee colony (ABC) [4, 5], the Firefly Algorithm (FA) [6], and Intelligent Water Drops (IWD) [7, 8]. Dorigo et al. introduced AS in the 1990s [1, 2]. AS is an optimization algorithm inspired by the foraging behavior of natural ant colonies: during foraging, ants manage to establish the shortest paths from their nests to food sources by depositing pheromones on the ground as they move, and AS uses this same idea. Kennedy introduced PSO, which is inspired by bird flocks and fish schools and simulates the behavior of bird flocking [3]. ABC was introduced by Karaboga in 2005 [4, 5]. ABC simulates the natural food-foraging behavior of real honeybees, divided into three groups: employed bees, scouts, and onlookers. In ABC, employed bees go to food sources to determine the amount of nectar present there; ABC then calculates the probability value of each food source, the onlookers select the preferred sources, and the scouts are sent out to discover new food sources. The Firefly Algorithm, introduced by Yang in 2009, was inspired by the flashing behavior of fireflies [6]. Shah-Hosseini proposed the IWD algorithm in 2007 [7, 8]; it simulates natural river systems and the interaction between water drops and their environment. During movement, an IWD's velocity is increased nonlinearly, proportionally to the inverse of the amount of soil between the two locations. The soil carried by an IWD is increased by removing some soil from the path between the two locations; this increase is inversely proportional to the time the IWD needs to pass from its current location to the next one, which in turn is inversely proportional to the velocity of the IWD and proportional to the distance between the two locations.

Many swarm intelligence systems have been used for path-planning problems. The advantages of such systems include the possibility of adding expert knowledge to the search operation and the ability to work with several candidate solutions simultaneously rather than exploring only a single alternative. Głąbowski et al. used Ant Colony Optimization (ACO) to solve the shortest-path problem [9]. Hsiao et al. used ACO to search for the best path on a map [10]. Tan et al. used ACO for real-time planning of the globally optimal path for mobile robots [11]. Montiel et al. solved the path-planning problem of mobile robots using the Bacterial Potential Field (BPF) approach [12]. Contreras-Cruz et al. solved the mobile-robot path-planning problem by combining an artificial bee colony algorithm as a local search procedure with an evolutionary programming algorithm that refines the feasible paths found by the local procedures [13]. Mo and Xu combined biogeography-based optimization (BBO), PSO, and an Approximate Voronoi Boundary Network (AVBN) to plan the global path of mobile robots in a static environment [14]. Many swarm intelligence algorithms, such as AS, ACO, ABC, and FA, have been used to solve path-planning problems in the field of robotics [15].

In this paper, we propose AntStar, a technique based on swarm intelligence. AntStar enhances the performance of AS by inserting the evaluation function of the A* algorithm [16] into the transition probability function of AS. This guides the random movements of the ants so that they perform an admissible search from the first iteration. Therefore, AntStar reaches a good, near-optimal solution early.

This paper is organized as follows. In Section 2, background is presented. The AntStar algorithm is explained in Section 3. Section 4 explains the application of the AntStar algorithm. The application of AntStar to a single-source, shortest-path problem is presented in Section 5; Section 6 explains the experiments with and performance of AntStar. Applying the AS to a single-source, shortest-path problem is presented in Section 7 in order to compare AntStar with AS in Section 8. Section 9 presents the discussion, and the conclusion is given in Section 10.

2. Background

The proposed AntStar algorithm integrates AS and the A* algorithm. Therefore, this section discusses the Natural Ant System, the Artificial Ant System, and the A* algorithm.

2.1. Natural Ant System

In nature, ants work in colonies that differ in size. The main work of ants is to forage for food. Many researchers have studied ant behavior [1, 17–19]. Understanding the foraging behavior of natural ant colonies was one of the main problems studied by ethologists. Ants exhibit trail-laying and trail-following behavior when foraging [20]. During foraging, ants manage to establish the shortest paths from their nests to food sources and back. They do so by depositing a chemical substance called pheromone in varying quantities on the ground, which serves as a communication medium between individuals. Each ant lays pheromone along its path and decides where to go according to the pheromone trails it encounters. The foraging behavior of natural ants can be described as follows:
(1) At first, ants move randomly and lay down pheromone on the ground.
(2) If a food source is discovered, the ants return to the nest and lay down a pheromone trail.
(3) If a pheromone trail is discovered during movement, the probability of following that trail increases.
(4) If an ant reaches the nest, it goes out again to search for a food source.

The collective behavior of ants is autocatalytic: the more the ants follow a trail, the more attractive the trail becomes to follow [21]. This operation can be characterized as positive feedback, where the probability of choosing a path increases with the number of ants that have previously chosen the same path [21]. Consider Figure 1(a) [21]. The ants walk from nest A to food source E. The path the ants follow when it is free of obstacles is shown in Figure 1(a). Suppose an obstacle suddenly appears and breaks the path, as in Figure 1(b). The ants at place B (walking from nest A to food source E) or at place D (walking from food source E to nest A in the opposite direction) must then decide whether to go left or right. This decision is influenced by the amount of pheromone left by the preceding ants.

As we can see, more pheromone on a route gives an ant stronger motivation and a higher probability of choosing that route. At the beginning, the first ant at place B (or D) in Figure 1(b) moves randomly, laying down pheromone despite the lack of any previous pheromone. An ant that uses the route BCD will reach D before an ant on the route BHD (Figure 1(c)), since BCD is shorter than BHD. As a result, more ants returning from the food source E to nest A via D will select DCB, because it has stronger, less-decayed pheromone than DHB. This in turn causes the amount of pheromone on BCD (the shorter path) to grow faster than on BHD (the longer one).

2.2. Artificial Ant System

The artificial AS simulates the foraging behavior of real ant colonies as follows.

Consider an ant colony looking for food as analogous to solving a problem, and consider the ants' tours as candidate solutions: in each tour, each ant looks for food and returns to the nest, representing one solution. The artificial AS is derived from the Natural Ant System in this way, with the following major differences [21]:
(1) Artificial ants are designed with memory capacity.
(2) Artificial ants are not completely blind, whereas real ants are.
(3) Working time is discrete.

AS uses a swarm of computer agents (a swarm of ants), each agent simulating a real ant. During the movement (search) of each agent, the ants manage to establish the shortest paths from their nest to the food source (source to destination) using the transition probability and the trail intensity. The transition probability of an artificial ant (agent) moving from node i to node j for the k-th ant is given by (1) below [1] and in Figure 2:

p_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{l ∈ allowed_k} [τ_il(t)]^α [η_il]^β   if j ∈ allowed_k,  and  p_ij^k(t) = 0 otherwise,   (1)

where η_ij is a heuristic value (visibility) on edge (i, j), τ_ij(t) is the intensity of the trail on edge (i, j) at time t, and α and β are parameters that control the importance of the pheromone versus the heuristic. The trail intensity equations of edge (i, j) are [1]

τ_ij(t + n) = ρ · τ_ij(t) + Δτ_ij(t, t + n),   (2a)
Δτ_ij(t, t + n) = Σ_{k=1}^{m} Δτ_ij^k(t, t + n),   (2b)
Δτ_ij^k(t, t + n) = Q / L_k   if the k-th ant uses edge (i, j) in its tour,   (2c)
or
Δτ_ij^k(t, t + n) = 0   otherwise,   (2d)

where ρ is an evaporation coefficient of the trail between time t and t + n and Δτ_ij^k(t, t + n) is the quantity of pheromone laid on edge (i, j) by the k-th ant over the period (t, t + n). Moreover, Q is a constant, and L_k is the tour length of the k-th ant. This updating and evaporation of the pheromone allows an indirect form of communication called stigmergy. Dorigo et al. [1, 21] tested the artificial AS with the well-known travelling salesman problem (TSP) [22]. Given a graph G = (N, E), with a set of cities N and a set of edges E, the TSP is defined as the problem of finding a tour of minimal length that visits each city once. The distance between cities i and j is given by the Euclidean distance d_ij = sqrt((x_i − x_j)^2 + (y_i − y_j)^2). Let b_i(t) be the number of ants in town i at time t, with i = 1 to n, where n is the number of towns; m = Σ_{i=1}^{n} b_i(t) is the total number of ants.

As we said before, each ant is considered an agent with the following properties (a minimal code sketch of the transition rule (1) and the trail update (2a)–(2d) follows this list):
(A) An agent selects the town to move to using a probability function (see (1)), which depends on the amount of trail τ_ij(t) on the edge and on the town distance d_ij.
(B) Going back to already visited towns is disallowed by means of a tabu list.
(C) After completing a tour, an agent lays a substance called trail on each edge of its trip using (2a), (2b), (2c), and (2d).
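To make the transition rule (1) and the trail update (2a)–(2d) concrete, the following minimal Python sketch implements them for a generic graph whose pheromone and visibility values are stored in dictionaries keyed by edges. The function names and data layout are illustrative assumptions, not taken from the original AS implementation.

```python
import random

def transition_probabilities(i, allowed, tau, eta, alpha, beta):
    """Equation (1): p_ij^k is proportional to tau_ij^alpha * eta_ij^beta
    over the neighbors j that are still allowed (not in the tabu list)."""
    weights = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_next_town(i, allowed, tau, eta, alpha, beta):
    """Sample the next town according to the transition probabilities."""
    probs = transition_probabilities(i, allowed, tau, eta, alpha, beta)
    towns, p = zip(*probs.items())
    return random.choices(towns, weights=p, k=1)[0]

def update_pheromones(tau, tours, lengths, rho, Q):
    """Equations (2a)-(2d): evaporate the old trail and deposit Q / L_k
    on every edge used by the k-th ant's tour."""
    deposit = {edge: 0.0 for edge in tau}
    for tour, L in zip(tours, lengths):
        for edge in zip(tour, tour[1:]):
            deposit[edge] += Q / L
    for edge in tau:
        tau[edge] = rho * tau[edge] + deposit[edge]
```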

2.3. A* Algorithm

The A* algorithm, described in 1968 by Hart et al. [16], is an extension of Dijkstra's algorithm [23]. It is a search algorithm that incorporates heuristic information about the problem domain into a formal mathematical theory of graph searching in order to find a good (though perhaps not the best) path from point to point. A* uses the admissible-search property to expand the fewest nodes while searching for the optimal path. A* evaluates every accessible node using the evaluation function f(n) to decide which one should be expanded next [16]. The evaluation function is defined as

f(n) = g(n) + h(n),

where g(n) is the past path-cost function (actual cost) of an optimal path from the source to the current node n and h(n) is the future path-cost function (heuristic cost) of an optimal path from the current node n to the destination [16]. To maintain the admissibility of A*, the estimates of h(n) must not be higher than the lowest possible cost [16].

The A* algorithm uses two lists (OPEN and CLOSED) to maintain the search. The OPEN list stores all available nodes that have not yet been visited, and the CLOSED list stores all visited nodes. The OPEN list is sorted according to f(n), and A* uses it to select the next node to visit, namely, the one with the minimum cost f(n). Gradually, A* finds the shortest available path from the source to the destination. Let us consider a connected subgraph with a start node s and a set of goal nodes T. The algorithm is summarized below.

The A* search algorithm [16] is as follows:
(1) Insert the start node s into the open list Open_List.
(2) Calculate the evaluation function f(n) = g(n) + h(n) for the nodes in the open list.
(3) Select the node n with the minimum f(n) from the Open_List.
(4) If n is a goal node (n ∈ T), insert n into the closed list Closed_List and terminate the algorithm.
(5) Else, insert n into the Closed_List, calculate the evaluation function for each successor of n and insert the successors into the open list, and go to Step (2).
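As an illustration of the OPEN/CLOSED bookkeeping described above, the sketch below implements A* over a simple adjacency-list graph in Python. The dictionary-based graph format and the function name a_star are assumptions made for this example, not part of the original paper.

```python
import heapq
import itertools

def a_star(graph, h, start, goals):
    """Minimal A* sketch. graph: {node: [(neighbor, edge_cost), ...]};
    h(n) must never overestimate the true remaining cost (admissibility)."""
    counter = itertools.count()                      # tie-breaker for equal f values
    open_list = [(h(start), next(counter), start)]   # OPEN, ordered by f = g + h
    g = {start: 0.0}
    parent = {start: None}
    closed = set()                                   # CLOSED: already-expanded nodes
    while open_list:
        _, _, n = heapq.heappop(open_list)
        if n in closed:
            continue
        closed.add(n)
        if n in goals:                               # goal reached: rebuild the path
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        for m, cost in graph[n]:                     # expand the successors of n
            tentative = g[n] + cost
            if m not in closed and (m not in g or tentative < g[m]):
                g[m] = tentative
                parent[m] = n
                heapq.heappush(open_list, (tentative + h(m), next(counter), m))
    return None                                      # no path from start to any goal
```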

3. Proposed AntStar Algorithm

AntStar is a proposed algorithm that enhances the optimization capability and the performance of AS by integrating AS with the A* algorithm, gaining advantages from both. AntStar inserts the evaluation function of A* into the transition probability function of AS. Therefore, AntStar is able to improve on the performance of AS by combining the collective behavior and self-organization of AS with the evaluation function of the A* algorithm. At the beginning (the first iteration) of the AntStar algorithm, no pheromones are available in the search area. However, the evaluation function of AntStar guides the random movements of the ants so that they perform an admissible search from the first iteration, whereas in AS each ant moves randomly, guided only by the heuristic function of AS. This integration gives AntStar the following properties:
(1) Multiagents. AntStar takes advantage of the multiagent approach of AS.
(2) Stochastic Best-First Search. AntStar takes advantage of the best-first search of A* through the evaluation function f(n). In addition, AntStar inherits the probabilistic approach to solving computational problems from AS.
(3) Collective Behavior and Self-Organization. AntStar takes advantage of the swarm intelligence of AS.

The AntStar algorithm works as follows.

During the movement (search) of each artificial ant (agent) in AntStar, the agents manage to establish the shortest paths from their nest to the food source (source to destination) using the transition probability and the trail intensity equation. AntStar builds its solution by repeatedly applying a stochastic best-first-search rule through the transition probability rule. Agents prefer to move to nodes that are connected by a short edge and that have a larger amount of pheromone. The transition probability of an agent at node i moving to node j for the k-th ant is obtained by integrating the evaluation function of A* into the transition probability of AS, as in (7) below and in Figure 3:

p_ij^k(t) = [τ_ij(t)]^α [η_ij(t)]^β / Σ_{l ∈ allowed_k} [τ_il(t)]^α [η_il(t)]^β   if j ∈ allowed_k,  and  p_ij^k(t) = 0 otherwise,   (7)

where

η_ij(t) = 1 / f_ij(t),  with  f_ij(t) = g_ij(t) + h_ij(t),

where g_ij(t) is the past path-cost function (actual cost) of an optimal path from the source to the current node at time t, h_ij(t) is the future path-cost function (heuristic cost) of an optimal path from the current node to the destination, and τ_ij(t) is the intensity of the trail on edge (i, j) at time t. τ_ij(t) provides evidence of how many ants have chosen that same edge in the past. α and β are parameters that control the importance of the pheromone versus the heuristic. Once all agents have completed their tours and reached their targets, the pheromones are updated on all edges according to the trail intensity given by (2a), (2b), (2c), and (2d) above [1].
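The following minimal Python sketch shows one way to realize the rule in (7) as reconstructed above: the A*-style evaluation f = g + h replaces the plain visibility heuristic of AS, so candidate nodes with a smaller estimated total cost receive a larger probability. The dictionary layout (g keyed by edges, h keyed by candidate nodes) and the function names are assumptions for this example.

```python
import random

def antstar_probabilities(i, allowed, tau, g, h, alpha, beta):
    """Transition rule (7) as reconstructed here: the A*-style evaluation
    f = g + h replaces the plain visibility heuristic of AS, so a smaller
    estimated total cost makes a neighbor more attractive.
    g is keyed by edges (i, j), h by candidate nodes j; f is assumed > 0."""
    f = {j: g[(i, j)] + h[j] for j in allowed}
    weights = {j: (tau[(i, j)] ** alpha) * ((1.0 / f[j]) ** beta) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def antstar_step(i, allowed, tau, g, h, alpha, beta):
    """Stochastically pick the next node according to the probabilities above."""
    probs = antstar_probabilities(i, allowed, tau, g, h, alpha, beta)
    nodes, p = zip(*probs.items())
    return random.choices(nodes, weights=p, k=1)[0]
```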

4. Applications of the AntStar Algorithm

AntStar is a swarm intelligence algorithm applicable to any problem that can be addressed by such algorithms (e.g., AS, ACO, ABC, PSO, and IWD) or by the A* algorithm, including the single-source shortest-path problem [24], the travelling salesman problem (TSP) [22], the N-Queen Puzzle [25], and the Multiple Knapsack Problem (MKP) [26]. Any problem whose state space can be represented as a graph may benefit from the AntStar algorithm.

In fact, AntStar can be adapted to other kinds of problems. In the single-source shortest-path problem and the TSP, AntStar uses the distances between vertices (cities) in the evaluation function of the transition probability (7). In the N-Queen Puzzle, on the other hand, AntStar's evaluation function in the transition probability represents not the distance between two vertices but rather the positions on the chessboard where queens would attack each other if placed. In the MKP, AntStar uses the benefit and the capacity values in the evaluation function of the transition probability (7), and the allowed set becomes the set of remaining feasible items. The AntStar algorithm can thus be adapted to a wide range of problems through the proper choice of the evaluation function, as the sketch below illustrates.
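As a rough illustration of how the evaluation function could be specialized per problem, the hypothetical Python functions below sketch evaluation terms for the three problem families mentioned above. None of these definitions comes from the paper; they are assumptions meant only to show that the same probabilistic transition rule can rank candidates by distance, by conflicts, or by profit density.

```python
import math
from dataclasses import dataclass

def euclidean(a, b):
    """Straight-line distance between two 2D points."""
    return math.dist(a, b)

def f_shortest_path(g_cost, current, target):
    """Shortest path / TSP: g is the distance traveled so far,
    h is the straight-line distance to the target."""
    return g_cost + euclidean(current, target)

def attacks(q, square):
    """Two squares conflict if they share a row, a column, or a diagonal."""
    (r1, c1), (r2, c2) = q, square
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def f_n_queens(placed_queens, candidate_square):
    """N-Queen Puzzle: count the conflicts the candidate square would create
    with the queens already placed (lower is better)."""
    return sum(attacks(q, candidate_square) for q in placed_queens)

@dataclass
class Item:
    weight: float
    profit: float

def f_knapsack(remaining_capacity, item):
    """MKP: prefer items with high profit per unit weight that still fit;
    an item that does not fit gets an infinite cost."""
    if item.weight > remaining_capacity:
        return float("inf")
    return item.weight / item.profit
```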

As detailed in Section 5, we experimented with AntStar on the single-source shortest-path problem. The TSP is represented by a graph G = (N, E) with nodes N and edges E [22]; the AntStar algorithm is applied by considering the cities as the nodes of the graph and the links between cities as the edges. Each link has a distance value, and each city has a specific location. Using this information, the AntStar algorithm can be directly applied to the TSP.

Another example application is the N-Queen Puzzle [25]. Khan et al. [27] and Shah-Hosseini [7] introduced ACO and IWD, respectively, as methods to solve the N-Queen Puzzle with a reduced search space. The AntStar algorithm, IWD, and ACO share the same working principle, making it possible for AntStar to solve the N-Queen Puzzle in a similar manner.

In the knapsack problem, there are several items, each with a weight and a profit, and a knapsack of limited capacity [26]. The aim is to put some of the items into the knapsack for sale so as to obtain the maximum profit while keeping the total weight less than or equal to the given limit. In the Multiple Knapsack Problem (MKP), there are many items and multiple knapsacks, and the aim is to distribute some of the items among the knapsacks for sale so as to obtain the maximum profit with the condition that none of the knapsacks overflows. Many algorithms have been developed to solve the MKP, such as [28], where the MKP is represented as a graph G = (V, E) with vertices V and edges E: the vertices denote the items, while the edges E represent the arcs (paths) between the items. Since AntStar and IWD share the same working principle, it may be deduced that AntStar is applicable to the MKP as well.

5. Applying AntStar to the Single-Source Shortest-Path Problem

As mentioned above, AntStar is a search algorithm that integrates the A* algorithm with AS. In this section, we apply the AntStar algorithm to the single-source shortest-path problem. In graph theory, a connected, weighted graph G = (V, E) is a directed graph with a vertex set V, an edge set E, and a weight function w: E → R. Each path p = ⟨v_0, v_1, …, v_k⟩ has a weight that is the sum of the weights of its edges [24]:

w(p) = Σ_{i=1}^{k} w(v_{i−1}, v_i).

The shortest-path weight from u to v is defined by

δ(u, v) = min{ w(p) : p is a path from u to v }   if such a path exists, and δ(u, v) = ∞ otherwise.

The shortest path between vertex u and vertex v is defined as any path p with weight w(p) = δ(u, v). The application of the AntStar algorithm to the single-source shortest-path problem is described by the flowchart in Figure 4.
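As a small worked example of the definitions above, the Python snippet below computes the weight of a path in a three-node graph whose edge weights are made up for illustration.

```python
# Edge weights of a small made-up directed graph.
w = {("a", "b"): 2.0, ("b", "c"): 3.0, ("a", "c"): 7.0}

def path_weight(path, w):
    """w(p) = sum of the weights of the consecutive edges in the path."""
    return sum(w[(u, v)] for u, v in zip(path, path[1:]))

print(path_weight(["a", "b", "c"], w))   # 5.0 < 7.0 (the direct edge),
# so delta(a, c) = 5.0 and <a, b, c> is a shortest path from a to c.
```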

Algorithm 1 details the steps of the AntStar algorithm with the shortest-path problem.

(1) Initialize:
      Set t := 0                              // t is the time counter
      Set NC := 0                             // NC is the cycle counter
      For every edge (i, j) set an initial value τ_ij(t) = c for the
      trail intensity and set Δτ_ij := 0
(2) Place the m ants on the source node
(3) Set s := 1                                // s is the tabu-list index
      For k := 1 to m do
         Place the starting town of the k-th ant in tabu_k(s)
(4) Repeat until each ant reaches the target or is stuck in a local minimum
      Set s := s + 1
      For k := 1 to m do
         // at time t the k-th ant is in town i = tabu_k(s − 1)
         Calculate the evaluation function of each neighbor j:
            f_ij(t) = g_ij(t) + h_ij(t)
         Choose the town j to move to, with probability p_ij^k(t) given by (7)
         Move the k-th ant to the selected town j
         Insert town j in tabu_k(s)
(5) For k := 1 to m do
      Compute the length L_k of the tour described by tabu_k
      Update the shortest tour found
    For every edge (i, j)
      For k := 1 to m do
         Δτ_ij := Δτ_ij + Δτ_ij^k, where Δτ_ij^k = Q / L_k if (i, j) ∈ tabu_k and 0 otherwise
(6) For every edge (i, j)
      Compute τ_ij(t + n) according to (2a)
(7) Set t := t + n
    Set NC := NC + 1
    For every edge (i, j)
      Set Δτ_ij := 0
(8) If (NC < NC_MAX) and (not stagnation behavior) then
      Empty all tabu lists
      Go to step (2)
    Else
      Print shortest tour
      Stop
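For readers who prefer running code, the following self-contained Python sketch follows the structure of Algorithm 1 on a weighted graph with node coordinates. The parameter defaults, the straight-line heuristic, and the data layout are assumptions made for this sketch; it is not the authors' implementation.

```python
import math
import random

def antstar_shortest_path(graph, coords, source, target,
                          n_ants=10, n_cycles=10,
                          alpha=1.0, beta=2.0, rho=0.5, Q=100.0):
    """Sketch of Algorithm 1. graph: {node: {neighbor: edge_length}};
    coords: {node: (x, y)} used for the straight-line heuristic.
    The default parameter values are assumptions, not the paper's settings."""
    h = {n: math.dist(coords[n], coords[target]) for n in graph}  # heuristic h_j
    tau = {(i, j): 1.0 for i in graph for j in graph[i]}          # initial trail c = 1
    best_path, best_len = None, float("inf")

    for _ in range(n_cycles):                                     # cycle loop, step (8)
        tours, lengths = [], []
        for _ in range(n_ants):                                   # steps (2)-(4)
            node, path, g_cost = source, [source], 0.0
            while node != target:
                allowed = [j for j in graph[node] if j not in path]   # tabu list
                if not allowed:                                       # dead end: drop ant
                    break
                # Transition rule (7): pheromone versus A*-style evaluation f = g + h.
                weights = [(tau[(node, j)] ** alpha) *
                           ((1.0 / (g_cost + graph[node][j] + h[j])) ** beta)
                           for j in allowed]
                nxt = random.choices(allowed, weights=weights, k=1)[0]
                g_cost += graph[node][nxt]
                path.append(nxt)
                node = nxt
            if node == target:                                    # step (5)
                tours.append(path)
                lengths.append(g_cost)
                if g_cost < best_len:
                    best_path, best_len = path, g_cost
        deposit = {e: 0.0 for e in tau}                           # steps (5)-(7)
        for tour, L in zip(tours, lengths):
            for e in zip(tour, tour[1:]):
                deposit[e] += Q / L
        for e in tau:
            tau[e] = rho * tau[e] + deposit[e]                    # trail update (2a)
    return best_path, best_len
```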

6. Experimentation Processes

Various scenarios were devised to test the performance of the proposed AntStar algorithm on the shortest-path problem. These experiments were used to assess the robustness, accuracy, adaptability, and efficiency of the proposed algorithm. The proposed algorithm was tested successfully many times in search spaces with different configurations that were as big as 100 square meters (10 m × 10 m). We divided the search space into 100 nodes, each covering one square meter, so that each side of a node's cell is one meter long, as illustrated in Figure 5.
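The helper below sketches one plausible way to build such a grid in Python: one node per 1 m × 1 m cell, with neighbors connected by edges of length 1 m (horizontal or vertical) or √2 m (diagonal). The 8-connectivity and the optional blocked set of obstacle cells are assumptions for this sketch; the paper does not specify the exact connectivity.

```python
import math

def build_grid(width=10, height=10, blocked=frozenset()):
    """Hypothetical reconstruction of the 10 m x 10 m search space: one node
    per 1 m x 1 m cell, 8-connected, with 1 m straight and sqrt(2) m diagonal edges."""
    coords = {(x, y): (float(x), float(y))
              for x in range(width) for y in range(height)
              if (x, y) not in blocked}
    graph = {n: {} for n in coords}
    for (x, y) in coords:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                m = (x + dx, y + dy)
                if (dx, dy) != (0, 0) and m in coords:
                    graph[(x, y)][m] = math.hypot(dx, dy)
    return graph, coords

graph, coords = build_grid()
print(len(graph), "nodes")   # 100 nodes for the empty 10 x 10 grid
```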

6.1. Selection of Parameters

Parameter values indirectly affect the performance of AS, as well as that of the proposed AntStar. In this section, we discuss the effectiveness and sensitivity of tuning the parameters α and β of the probability equation (7). We tested several values for each parameter. First, we fixed one of the two parameters based on many studies [1, 2, 21] and then tested AntStar with several values of the other (Table 1).

As shown in Table 1, the length of the path decreases as the swept parameter grows, up to a certain value, after which the length becomes fixed. Similarly, we then fixed that parameter and tested AntStar with several values of the remaining one (Table 2).

As shown in Table 2, the length of the path remains fixed at the optimal value up to a certain parameter value, after which the length settles at a higher value (13.071 m). Therefore, in our investigation in Sections 6.2 and 7, we set α and β to the values selected in this way.
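The kind of one-parameter-at-a-time study described above could be scripted as in the sketch below, where search_fn stands for any routine (such as the AntStar sketch after Algorithm 1) that returns a (path, length) pair; the value grid in the commented usage is purely hypothetical.

```python
def sweep(search_fn, param_name, values, fixed_kwargs):
    """Run the search once per candidate value of one parameter, holding the
    others fixed, and record the best path length that was found.
    `search_fn` is assumed to return a (path, length) pair."""
    results = {}
    for v in values:
        _, length = search_fn(**{**fixed_kwargs, param_name: v})
        results[v] = length
    return results

# Hypothetical usage with the AntStar sketch above and a made-up value grid:
# table = sweep(lambda **kw: antstar_shortest_path(graph, coords, src, dst, **kw),
#               "beta", [1, 2, 5, 10], {"alpha": 1.0})
```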

6.2. Experimental Results

The parameters of AntStar (α, β, the evaporation coefficient ρ, and the constant Q) were set based on our experiments and many studies [1, 2, 21]. We used 10 ants and 10 iterations to test the AntStar algorithm on the shortest-path problem. As mentioned, the movement of each ant from node to node depends on the transition probability function (see (7)), which is influenced by the actual cost, the heuristic cost, and the pheromone trail values. Two scenarios using AntStar are discussed in this section. The source node of the first scenario is at (1 m, 8 m), with the target node at (10 m, 10 m). The source node of the second scenario is at (5 m, 5 m), with the target node at (7 m, 5 m). The probability values of the movements of one ant in the first scenario are illustrated in Figure 6. Figure 6(a) illustrates the probability values of the first movement. As can be seen, there are three different probability values for the movement, and the ant decides to move toward the node with the highest value (0.424). As we will see, the ant does not always move toward the node with the highest value. Figure 6(b) illustrates the probability values for the second movement. As can be seen, there are four different probability values for the movement; again, the ant decides to move toward the node with the highest value (0.3689). As can be seen in Figures 6(c) and 6(d), the ant decided not to move toward the node with the highest probability value because of the stochastic movement, which helps AntStar to explore the search space. The process continues in this manner until the ant reaches the target.

A snapshot of the resulting paths in the experiment of the first scenario with AntStar is displayed in Figure 7, which illustrates the output paths of the first scenario using the AntStar algorithm on the shortest-path problem. It displays four different paths with lengths of (1) 16.49 m, (2) 13.07 m, (3) 13.07 m, and (4) 12.485 m, the last of which is the best. These paths are the best paths of the first four iterations of the algorithm. The shortest path found using the AntStar algorithm in this scenario is plotted in Figure 8.

As AntStar executed with 10 ants and 10 iterations, 10 different paths were generated at each iteration, one path per ant. The best path of each iteration is a candidate for the overall best path. Therefore, Figure 7 shows only the best path of each iteration. Figure 9 illustrates the length of the shortest path of each iteration in this scenario.

At the beginning, no pheromones are available in the search area. Therefore, in the first iteration of AntStar, each ant moves randomly, constrained by the evaluation function in (7). As we can see in Figures 7 and 9, the best path of the first iteration had a length of 16.49 m. The best path of the second and third iterations improved to a length of 13.07 m. We had executed the algorithm repeatedly on the same search space, configuration, start point, and target point to determine the shortest path of the search space; therefore, we know the shortest path of this configuration. On the fourth iteration, AntStar found the best path in the search space, of length 12.485 m. This quick improvement was due to the collective behavior, the self-organization, and the evaluation function of the AntStar algorithm. Over the remaining iterations, there was no change in the length of the found path, as we can see in Figure 9. The time to find the best path of each iteration is illustrated in Figure 10. The output paths of the second scenario using the AntStar algorithm on the shortest-path problem are illustrated in Figure 11. As we can see, three different paths are displayed, with lengths of (1) 16 m, (2) 13.2 m, and (3) 10.24 m, the last being the shortest path. These paths are the best paths over all iterations of the algorithm. Recall that AntStar executes with 10 ants and 10 iterations, so at each iteration 10 different paths are generated, one path per ant. Figure 12 illustrates the length of the shortest path of each iteration in this scenario. The time to find the best path of each iteration is illustrated in Figure 13.

7. AS Applied to the Shortest-Path Problem

In this section, we apply the AS algorithm to the shortest-path problem on the same 100-square-meter (10 m × 10 m) search space and configuration as used with AntStar, setting the parameters of AS (α, β, ρ, and Q) to the same values as for AntStar. We used 10 ants and 10 iterations to test the AS on the shortest-path problem. A snapshot of the results captured from some of these experiments is displayed in Figure 14, illustrating the output paths of the AS applied to the shortest-path problem over 10 iterations. As we can see, it displays several different paths, with lengths of (1) 17 m, (2) 16 m, and (3) 14 m, the last being the best in this experiment. These paths are the best from the 10 iterations of the AS. Figure 15 illustrates the length of the best path of each of the 10 iterations. The time cost of each iteration is illustrated in Figure 16.

8. Comparison

In this section, we compare the proposed AntStar algorithm with AS and ACO as applied to the shortest-path problem using the same search space and configuration. The comparison is made from an experimental point of view, so it covers the traveled distance in meters, the used path, and the execution time. Plots of the used paths and traveled distances of the AntStar algorithm and AS are shown in Figure 17. We executed the AntStar algorithm and AS with the same start point (1 m, 8 m) and target point (10 m, 10 m). As can be seen, the traveled distance of AntStar is 12.485 m for the given search space and configuration, while AS and ACO produced other paths and traveled distances (14 m and 13.31 m, respectively). We had executed the algorithm repeatedly on the same search space, configuration, start point, and target point to determine the shortest path; therefore, we know the shortest path of this configuration.

AntStar, AS, and ACO are all able to find a path, but in terms of execution time and traveled distance AntStar prevailed in the comparison. Recall that AntStar, AS, and ACO were each executed for ten iterations with ten ants. The best (minimum) time was that of AntStar (1.85 sec), while the best time of AS was 3.7 sec and that of ACO was 1.9 sec. The best average time was that of AntStar (2.241 sec), while the average time of AS was 3.932 sec and that of ACO was 3.03 sec. The best traveled distance was that of AntStar (12.485 m), while the best traveled distance of AS was 14 m and that of ACO was 13.31 m. The best average traveled distance was that of AntStar (13.05 m), while the average traveled distance of AS was 17.3 m and that of ACO was 14.36 m. AntStar, AS, and ACO had different execution times, traveled distances, and paths at each iteration, as illustrated in Figures 18 and 19. Detailed performance indexes are listed in Table 3.

9. Discussion

From the obtained experimental results of the AntStar and AS applied to the shortest-path problem, we focus on three performance indexes: the traveled distance, the used path, and, finally, the time to reach the target.

Used Path and Traveled Distance. As illustrated in Figure 17 and in Table 3, AntStar, AS, and ACO all succeeded in reaching the target from the initial point. In terms of execution time and traveled distance, however, AntStar prevailed in the comparison. Recall that AntStar, AS, and ACO were each executed for ten iterations with ten ants. The minimum traveled distance was AntStar's (12.485 m), compared to 14 m for AS and 13.31 m for ACO. As can be seen in Figure 18, on the first iteration the best path of all ants using AntStar was 16.49 m long, while the best path of AS was 23 m long and that of ACO was 18.14 m long. The best path on the second and third iterations using AntStar improved to a length of 13.07 m, while the best-path length of AS on the second and third iterations was 22 m and that of ACO was 16.14 m. On the fourth iteration, AntStar found the best path in the search space, of length 12.485 m, while AS found its best path (which is not the best for the given search space) on the seventh iteration and ACO found its best path on the fourth iteration. This quick improvement of AntStar was due to the collective behavior, the self-organization, and the evaluation function of the AntStar algorithm. At the beginning (on the first iteration) of AntStar, AS, and ACO, no pheromones were available in the search area, but the evaluation function of AntStar guides the random movements of the ants to perform an admissible search from the first iteration, whereas in AS and ACO each ant moves randomly, guided only by the heuristic function.

Time to Arrive at Target. Traveled distance is linearly related to time for any fixed speed. In all experiments, the ants of AntStar achieved the best time (1.85 sec) to reach the target, as the probability function, which includes the evaluation function, guided the ants better and provided a more effective control strategy.

10. Conclusion

In this paper, we proposed the AntStar algorithm, which enhances the optimization capability and the performance of AS by integrating AS with the A* algorithm. As we have seen, applying the AntStar algorithm to the single-source shortest-path problem demonstrated its efficiency. The experimental results showed the quick improvement achieved by AntStar, which is due to the evaluation function, the collective behavior, and the self-organization of the AntStar algorithm. Furthermore, these results illustrate the robustness and accuracy of the AntStar algorithm on the single-source shortest-path problem, which encourages us to apply AntStar to other problems, such as the TSP, the N-Queen Puzzle, and the MKP.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This project was funded by the National Plan for Science, Technology and Innovation (MAARIFAH), King Abdulaziz City for Science and Technology, Kingdom of Saudi Arabia (Award no. 12-MED2474-02).