Computational Intelligence and Neuroscience
Volume 2018, Article ID 5671709, 12 pages
https://doi.org/10.1155/2018/5671709
Research Article

A Novel Teaching-Learning-Based Optimization with Error Correction and Cauchy Distribution for Path Planning of Unmanned Air Vehicle

College of Mechanical and Equipment Engineering, Hebei University of Engineering, Handan, Hebei 056038, China

Correspondence should be addressed to Zhibo Zhai; zhaizhibo@tom.com

Received 9 May 2018; Accepted 25 June 2018; Published 1 August 2018

Academic Editor: David M. Powers

Copyright © 2018 Zhibo Zhai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Teaching-learning-based optimization (TLBO) is a heuristic method that simulates the teaching-learning process of a classroom. However, in the later period of evolution the TLBO algorithm suffers from low exploitation ability and a shrinking search scope, which leads to poor results. To address this issue, this paper proposes a novel version of TLBO augmented with an error correction strategy and the Cauchy distribution (ECTLBO), in which the Cauchy distribution is used to expand the search space and error correction to avoid detours and obtain more accurate solutions. The experimental results verify that the ECTLBO algorithm has overall better performance than various versions of TLBO and is very competitive with nine other original intelligent optimization algorithms. Finally, the ECTLBO algorithm is also applied to path planning of an unmanned aerial vehicle (UAV), and the promising results show its applicability to practical problem solving.

1. Introduction

Global optimization is a universal issue to the entire scientific community. It has been applied widely in many different fields such as chemical engineering [1], molecular biology [2], the training of neural networks [3], job shop scheduling [4], and network design [5]. However, in most cases, global optimization problems are nonlinear and nondifferentiable and, hence, gradient-based methods cannot be used. In recent years, a lot of effective optimization algorithms have been developed and used to successfully solve global optimization problems that are nonlinear and nondifferentiable. Typical algorithms include particle swarm optimization (PSO) [6] proposed by Kennedy and Eberhart in 1995 and inspired by swarm behavior of fish schooling and bird flocking, differential evolution (DE) [7] which mimics Darwinian evolution, group search optimizer (GSO) [8] which is inspired by animal searching behaviors, artificial bee colony (ABC) [9] which simulates the foraging behavior of honey bees, water cycle algorithm (WCA) [10] which is based on the observation of water and cycle processes and how rivers and streams flow to the sea in the real world, cuckoo search (CS) [11] which mimics the brooding behavior of some cuckoo species, backtracking search algorithm (BSA) [12] which is developed from the differential evolution algorithm, differential search algorithm (DSA) [13] which is inspired by the migration of superorganisms utilizing the concept of stable motion, and interior search algorithm (ISA) [14] which is inspired by interior design and decoration.

Recently, Rao et al. [15] proposed the teaching-learning-based optimization (TLBO) algorithm, inspired by the teaching-learning process in a classroom. The algorithm simulates two fundamental phases of learning, the “Teacher Phase” and the “Learner Phase.” One remarkable advantage of the TLBO algorithm is its computational simplicity. Another important advantage is that it requires no algorithm-specific control parameters (crossover and mutation probabilities, etc.) beyond the common control parameters (population size and problem dimension), which makes the TLBO algorithm easy to implement and gives it a fast convergence speed. Hence, it has been extended to engineering optimization [15], physics-biotechnology optimization [16], multiobjective optimization [17], heat exchanger design [18], dynamic economic emission dispatch [19], and so on.

Although the TLBO algorithm has these advantages, it also has some undesirable dynamical properties that degrade its search ability. One of the most important issues is its low exploitation ability and shrinking search scope in the later stages of evolution. Another issue concerns the ability of the TLBO algorithm to balance exploration and exploitation [20]. Exploration is the ability to survey the global solution space, while exploitation is the ability to refine approximately optimal solutions within a local region of the solution space. Overemphasizing exploration prevents the population from converging, whereas too much emphasis on exploitation tends to cause premature convergence. In practice, the two processes conflict with each other, and good solutions require a proper trade-off between them. To improve the performance of TLBO, modified or improved algorithms have been proposed in recent years, such as the elitist teaching-learning-based optimization (ETLBO) algorithm [21], teaching-learning-based optimization with neighborhood search (NSTLBO) [22], and teaching-learning-based optimization with dynamic group strategy (DGSTLBO) [20].

Although these modified TLBO algorithms outperform the original TLBO on some classical problems, they still do not address important issues such as the low exploitation ability and shrinking search scope in the later stages of evolution. To address these issues, this paper proposes a novel version of TLBO augmented with error correction and the Cauchy distribution (ECTLBO), in which the Cauchy distribution is used to expand the search space and error correction to avoid detours and obtain more accurate solutions.

The rest of this paper is organized as follows. Section 2 briefly introduces the TLBO algorithm and the details of its implementation. Section 3 presents TLBO with error correction and Cauchy distribution (ECTLBO). Section 4 analyzes the results of ECTLBO and several related optimization algorithms via a comparative study. Section 5 applies the ECTLBO algorithm to path planning of an unmanned aerial vehicle (UAV). Finally, the work is summarized in Section 6.

2. Teaching-Learning-Based Optimization

The teaching-learning-based optimization algorithm is a nature-inspired algorithm analogous to the teaching-learning process in a class between a teacher and learners. The process of implementing TLBO consists of two phases, “Teacher Phase” and “Learner Phase.” The “Teacher Phase” stands for learning from the teacher while the “Learner Phase” denotes learning through the interaction between learners.

2.1. Teacher Phase

During the Teacher Phase, each learner $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, $i = 1, 2, \ldots, N$ (where $N$ is the number of learners and $D$ is the number of subjects a learner studies, such as literature, mathematics, and English), is updated according to

$X_i^{\text{new}} = X_i^{\text{old}} + r \cdot (X_{\text{teacher}} - T_F \cdot X_{\text{mean}}),$

where $X_i^{\text{new}}$ is the newly generated individual based on $X_i^{\text{old}}$, $X_{\text{teacher}}$ is the best individual of the current population, $X_{\text{mean}}$ is the mean of all individuals in the current population, $r$ is a vector whose elements are random numbers uniformly distributed in $[0, 1]$, and $T_F$ is a teaching factor deciding how strongly the population mean is to be changed. The value of $T_F$ is either 1 or 2, indicating that the learner learns something or nothing, respectively, from the teacher, and is decided randomly with equal probability:

$T_F = \text{round}[1 + \text{rand}(0, 1)].$
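For clarity, the Teacher Phase can be sketched as follows. This is a minimal Python/NumPy illustration of the standard TLBO update above, assuming a minimization problem and greedy acceptance; it is not the authors' implementation.

```python
import numpy as np

def teacher_phase(X, f):
    """One Teacher Phase pass of standard TLBO (minimization).

    X : (N, D) array of learners; f : callable mapping a D-vector to a scalar.
    Returns the population after greedy acceptance of improved learners.
    """
    N, D = X.shape
    fitness = np.apply_along_axis(f, 1, X)
    teacher = X[np.argmin(fitness)]      # best individual of the current population
    mean = X.mean(axis=0)                # mean learner over all subjects
    for i in range(N):
        TF = np.random.randint(1, 3)     # teaching factor: 1 or 2 with equal probability
        r = np.random.rand(D)            # random vector in [0, 1]^D
        X_new = X[i] + r * (teacher - TF * mean)
        if f(X_new) < fitness[i]:        # keep the new learner only if it improves
            X[i] = X_new
    return X
```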

2.2. Learner Phase

During the Learner Phase, each learner interacts with other learners to improve his or her knowledge. A learner $X_i$ learns something new if another randomly chosen learner $X_j$ ($j \neq i$) has more knowledge. Let $f(X_i)$ denote the overall score of the $i$th learner over all subjects; for a minimization problem, the updating formula for learner $X_i$ is

$X_i^{\text{new}} = \begin{cases} X_i^{\text{old}} + r \cdot (X_i - X_j), & \text{if } f(X_i) < f(X_j), \\ X_i^{\text{old}} + r \cdot (X_j - X_i), & \text{otherwise}. \end{cases}$
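Under the same assumptions as before (Python/NumPy, minimization, greedy acceptance), the Learner Phase can be sketched as:

```python
import numpy as np

def learner_phase(X, f):
    """One Learner Phase pass of standard TLBO (minimization)."""
    N, D = X.shape
    for i in range(N):
        j = np.random.choice([k for k in range(N) if k != i])  # a different learner
        r = np.random.rand(D)
        if f(X[i]) < f(X[j]):            # X_i already knows more: move away from X_j
            X_new = X[i] + r * (X[i] - X[j])
        else:                            # learn from the better learner X_j
            X_new = X[i] + r * (X[j] - X[i])
        if f(X_new) < f(X[i]):           # greedy acceptance
            X[i] = X_new
    return X
```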

3. A Novel Teaching-Learning-Based Optimization

In this study, a novel version of TLBO that is augmented with an error correction strategy and Cauchy distribution (ECTLBO) is proposed.

3.1. Error Correction Strategy

Some learners obtain poor results because of a bad study method and should be guided back onto the right track. When a learner's study method leads it away from the teacher and this is not corrected in time, a detour occurs: even though the learner spends a lot of effort, the progress is negligible or even opposite to what is intended. A correction mechanism is therefore needed so that a learner showing such backward movement is corrected in a timely manner, which avoids detours and yields faster convergence and higher optimization precision. The updating equation of the Teacher Phase is modified accordingly.
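The paper's exact corrected Teacher Phase equation is not reproduced here. As one hedged reading of the strategy (an assumption for illustration, not the authors' formula), a learner whose update step makes it worse can have the step direction reversed before greedy acceptance:

```python
import numpy as np

def teacher_phase_with_correction(X, f):
    """Teacher Phase with a simple error-correction heuristic (illustrative only).

    If the usual step degrades a learner (a 'detour'), the step direction is
    reversed; this is an assumed interpretation, not the paper's exact equation.
    """
    N, D = X.shape
    fitness = np.apply_along_axis(f, 1, X)
    teacher = X[np.argmin(fitness)]
    mean = X.mean(axis=0)
    for i in range(N):
        TF = np.random.randint(1, 3)
        r = np.random.rand(D)
        step = r * (teacher - TF * mean)
        X_new = X[i] + step
        if f(X_new) >= fitness[i]:       # detour detected: correct the direction
            X_new = X[i] - step
        if f(X_new) < fitness[i]:        # keep only genuine improvements
            X[i] = X_new
    return X
```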

3.2. Cauchy Distribution

The Cauchy distribution is a common distribution in probability theory and mathematical statistics. Its one-dimensional probability density function is

$f(x; t) = \frac{1}{\pi} \cdot \frac{t}{t^{2} + x^{2}}, \quad -\infty < x < \infty, \; t > 0.$

It is the standard Cauchy distribution when the parameter $t$ equals 1. Figure 1 shows the probability density curves of the standard Gauss, standard Cauchy, and standard uniform distributions. As can be seen from Figure 1, the peak of the Cauchy distribution at the origin is the smallest of the three, while its long, flat tails approach zero the most slowly. Therefore, if a Cauchy mutation strategy is used in the Teacher Phase and Learner Phase, its disturbance (self-adjustment) ability is the strongest of the three distributions, and the basic TLBO algorithm is more likely to jump out of local optima and to search faster. The updating equation of the Learner Phase is modified accordingly.

Figure 1: Probability density curves of standard Gauss, Cauchy, and uniform distributions.
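As an illustration of the heavy-tailed perturbation described above, a standard Cauchy step can be drawn with NumPy. This is a sketch only; the paper's exact Cauchy-based update equations are not reproduced here, and the `scale` parameter is an assumption.

```python
import numpy as np

def cauchy_mutate(x, scale=1.0):
    """Perturb a solution vector with standard Cauchy noise (t = 1).

    The heavy tails occasionally produce large jumps, which helps the search
    escape local optima; `scale` is an assumed tuning parameter.
    """
    return x + scale * np.random.standard_cauchy(size=x.shape)
```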
3.3. Flowchart of the ECTLBO Algorithm

As explained above, the flowchart of the ECTLBO algorithm (TLBO with the error correction strategy and Cauchy distribution) is shown in Figure 2.

Figure 2: Flowchart of ECTLBO algorithm.

4. Experimental Studies

4.1. Test Benchmark Functions

To evaluate the performance of the ECTLBO algorithm, 6 benchmark functions from the CEC 2005 competition [23] are used in a set of experimental studies. The definitions of these functions are given in Table 1.

Table 1: Definition and optimum of benchmark functions used to evaluate optimization algorithms.
4.2. Experimental Platform and Termination Criterion

All experiments are conducted on the same computer with a Celeron 2.26 GHz CPU, 2 GB of memory, and the Windows XP operating system, using MATLAB 7.9. To reduce statistical error, every experiment is repeated 25 times for each of the 6 test functions in 30 dimensions, and 300,000 function evaluations (FEs) [24] are used as the stopping criterion.

4.3. Performance Metric

The mean of the function error value $f(X_{\text{best}}) - f(X^{*})$ over all runs is recorded to evaluate the performance of each algorithm, where $f(X_{\text{best}})$ and $f(X^{*})$ denote the best fitness value found and the true global optimum of the test problem, respectively. The standard deviation (SD) indicates the robustness of the various optimization algorithms on the F1–F6 test functions in 30 dimensions. To verify whether the overall optimization performance of the algorithms differs significantly, a statistical comparison of the results obtained on the same problems is performed using Wilcoxon's rank sum test [25] at the 0.05 significance level, which assesses whether the mean function errors of any two algorithms are statistically different from each other.
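For readers who wish to reproduce this kind of comparison, a minimal example of applying the rank sum test is given below. It assumes SciPy's `ranksums`; the error samples are synthetic placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

# errors_a, errors_b: final function-error values from 25 independent runs
# of two algorithms on the same test function (illustrative random data here).
errors_a = np.abs(np.random.normal(1e-8, 1e-9, 25))
errors_b = np.abs(np.random.normal(1e-5, 1e-6, 25))

stat, p_value = ranksums(errors_a, errors_b)
print(f"p = {p_value:.3g}; significant at 0.05: {p_value < 0.05}")
```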

4.4. Comparison of ECTLBO with Relevant TLBO Algorithms

The ECTLBO algorithm is compared with four relevant TLBO variants: TLBO, ETLBO, NSTLBO, and DGSTLBO. The parameters for the four variants are taken from the references listed above. Each algorithm is run independently 25 times, and the statistical results (mean function error and SD) are provided in Table 2, the last three rows of which summarize the statistical comparison; the best results are shown in bold. The evolution plots of NSTLBO, TLBO, ECTLBO, ETLBO, and DGSTLBO are illustrated in Figure 3. In addition, semilogarithmic convergence plots are used to show how the mean function errors evolve.

Table 2: Results of five algorithms for 25 independent runs on 6 test functions of 30 dimensions.
Figure 3: Evolution of mean function error values derived from five algorithms: (a) F1, (b) F2, (c) F3, (d) F4, (e) F5, and (f) F6.

In this section, the ECTLBO algorithm is compared with the four relevant TLBO algorithms. The mean function error values in Table 2 show that the overall performance of the ECTLBO algorithm is significantly better than that of the other algorithms. ECTLBO outperforms NSTLBO, TLBO, ETLBO, and DGSTLBO on six, six, five, and six of the six test functions, respectively. In terms of mean error, ECTLBO is better than the other four algorithms on functions F1, F2, F4, F5, and F6, and on function F3 it performs the same as ETLBO. Overall, the ECTLBO algorithm is significantly better than the original TLBO, ETLBO, NSTLBO, and DGSTLBO algorithms.

The main reason is that the Cauchy distribution expands the search space while the error correction avoids detours and yields more accurate solutions, which together help identify more promising solutions. In other words, exploration and exploitation are better balanced in the ECTLBO algorithm. Therefore, it can be concluded that the ECTLBO algorithm is the most accurate of the five TLBO variants considered.

4.5. Comparison of ECTLBO Algorithm with Nine Original Intelligence Optimization Algorithms

In this section, the ECTLBO algorithm is compared with nine original intelligent optimization algorithms: PSO [6], DE [7], GSO [8], ABC [9], WCA [10], CS [11], BSA [12], DSA [13], and ISA [14]. The mean function error values in Table 3, together with the Wilcoxon rank sum test results, show that the ECTLBO algorithm performs better than the nine other algorithms. More specifically, ECTLBO outperforms PSO, DE, ABC, CS, GSO, WCA, DSA, BSA, and ISA on five, six, three, six, three, five, four, three, and four of the six test functions, respectively. For the unimodal functions F1 and F2 in particular, ECTLBO outperforms all nine algorithms. The standard deviations (SD) in Table 3 show that ECTLBO is better than the nine other algorithms on functions F1, F2, and F6, indicating that ECTLBO is more robust on these functions.

Table 3: Results of ten algorithms over 25 independent runs on 6 test functions of 30 dimensions with 300,000 FEs.

5. Application of ECTLBO to UAV Path Planning

5.1. The Path Planning Problem of Unmanned Aerial Vehicles (UAVs)

UAV path planning is a rather complicated global optimization problem in mission planning. The aim is to find an optimal or suboptimal flight route from a starting point to a target in time, under a specific, complex battlefield environment [26]. As Figure 4 shows, the problem is in essence a D-dimensional function optimization problem. The original coordinate system Oxy is converted to a new coordinate system Ox′y′ whose x′ axis runs from the starting point to the target. The x′ axis is divided equally into D parts, the y′ coordinate of the node on each vertical line is optimized, and a set of D vertical coordinates is obtained. Connecting these points in sequence yields a path from the starting point to the destination.

Figure 4: Typical UAV battle field model.
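A sketch of this path encoding is given below, under the assumption that the x′ axis is the straight line from the start to the target and that a candidate solution is the vector of D y′ offsets; the function name and details are illustrative, not the authors' code.

```python
import numpy as np

def decode_path(y_offsets, start, target):
    """Turn a D-dimensional solution (y' offsets) into waypoints in the
    original Oxy frame. `start` and `target` are (x, y) pairs."""
    start, target = np.asarray(start, float), np.asarray(target, float)
    D = len(y_offsets)
    direction = target - start
    L = np.linalg.norm(direction)
    ux = direction / L                     # unit vector along the x' axis
    uy = np.array([-ux[1], ux[0]])         # unit vector along the y' axis
    xs = np.linspace(0, L, D + 2)[1:-1]    # D equally spaced x' coordinates
    points = [start] + [start + x * ux + y * uy
                        for x, y in zip(xs, y_offsets)] + [target]
    return np.array(points)
```

With this encoding, any D-dimensional optimizer such as ECTLBO can evaluate a candidate by decoding it into waypoints and summing the edge costs defined next.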

There are two main goals of UAV path planning: avoiding threats and minimizing fuel cost. Therefore, before optimizing the UAV route, the performance index of each path must be defined. The following cost function [27] combines the minimum threat cost and the minimum fuel cost:

$\min J = k \int_{0}^{L} w_t \, dl + (1 - k) \int_{0}^{L} w_f \, dl,$

where $L$ is the length of the route, $J$ is the generalized cost, $w_t$ is the threat cost per unit length, $w_f$ is the fuel (oil consumption) cost per unit length, and the coefficient $k \in [0, 1]$ is the trade-off factor between the threat cost and the fuel consumption performance.

According to this path planning performance index, the cost weight of each edge in the network diagram can be calculated. For the $i$th feasible segment of the UAV route, the cost weight can be expressed as

$w_i = k\, w_{t,i} + (1 - k)\, w_{f,i},$

where $w_{t,i}$ and $w_{f,i}$ are the threat cost and the fuel cost of segment $i$, respectively.

It is assumed that all radars in the enemy defense area are identical and not interconnected. The radar threat model is simplified so that the radar signal strength is proportional to $1/d^{4}$, where $d$ is the distance from the UAV to the radar or missile threat position. The threat cost incurred between two nodes when the UAV flies along edge $i$ of the network map is therefore taken to be proportional to the integral of $1/d^{4}$ along this edge (as shown in Figure 5). In simulation studies, this integral is usually approximated by sampling each edge that lies within a threat range at five points, located at 0.1, 0.3, 0.5, 0.7, and 0.9 of the edge length:

$w_{t,i} = \frac{L_i}{5} \sum_{k=1}^{N_t} t_k \left( \frac{1}{d_{0.1,k}^{4}} + \frac{1}{d_{0.3,k}^{4}} + \frac{1}{d_{0.5,k}^{4}} + \frac{1}{d_{0.7,k}^{4}} + \frac{1}{d_{0.9,k}^{4}} \right),$

where $L_i$ is the length of the edge connecting the two points; $d_{0.1,k}, \ldots, d_{0.9,k}$ are the distances from the sample points at 0.1, 0.3, 0.5, 0.7, and 0.9 of the edge to the center of threat source $k$; $t_k$ is the threat weight of threat $k$; and $N_t$ is the number of threat positions.

Figure 5: Threat cost model.
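Combining the fuel and threat terms above, a minimal sketch of the per-edge cost evaluation follows. The variable names and the simple five-point rectangle-rule approximation are assumptions consistent with the description, not the authors' implementation.

```python
import numpy as np

def edge_cost(p0, p1, threats, k=0.5):
    """Cost of one path edge: k * threat cost + (1 - k) * fuel cost.

    `threats` is a list of (center_xy, weight) pairs; the fuel cost is taken
    as the edge length, and the threat integral of 1/d^4 is approximated at
    the five points located at 0.1, 0.3, 0.5, 0.7, and 0.9 of the edge.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    threat = 0.0
    for center, weight in threats:
        d4 = [np.linalg.norm(p0 + frac * (p1 - p0) - np.asarray(center, float)) ** 4
              for frac in (0.1, 0.3, 0.5, 0.7, 0.9)]
        threat += weight * (length / 5.0) * sum(1.0 / d for d in d4)
    return k * threat + (1.0 - k) * length
```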
5.2. Analysis of Simulation Results and Comparisons

Table 4 gives the threat points of the UAV scenario and the coordinates of the starting point and the target point. With appropriate parameters, ECTLBO and the classical algorithms TLBO, NSTLBO, ETLBO, DGSTLBO, PSO, and DE are applied to the UAV path planning problem, and the simulation results are shown in Figure 6. As can be seen from Figure 6, TLBO, NSTLBO, ETLBO, and DGSTLBO all fall into local optima. Although PSO and DE do not, the optimal routes generated by these two algorithms are clearly longer than that of the ECTLBO algorithm. Figure 6 also shows that the UAV route obtained by the ECTLBO algorithm successfully avoids all threat sources and reaches the mission end point. Compared with the TLBO, NSTLBO, ETLBO, DGSTLBO, PSO, and DE algorithms, the experimental results show that the ECTLBO algorithm produces a higher-quality flight path, avoids the threats better, and converges faster than the other classical algorithms.

Table 4: The parameters of the threat environment.
Figure 6: Optimal flight routes and evolutionary curves derived from seven algorithms.

6. Summary and Conclusions

This paper presents a new version of the TLBO algorithm (ECTLBO) in which error correction and the Cauchy distribution are introduced. The performance of the ECTLBO algorithm is evaluated against other TLBO variants and nine original intelligent optimization algorithms. The experimental results verify that the ECTLBO algorithm has overall better performance than the other TLBO variants and is very competitive with the other algorithms. Besides that, we also applied it to UAV path planning, and the simulation results show that the paths produced by the proposed approach are smoother, shorter, and closer to optimal, and are obtained more stably, than those produced by other well-known algorithms.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors wish to acknowledge the financial support for this work from the National Natural Science Foundation of China (51575442 and 61402361), Research Fund for the Doctoral Program of Higher Education of China (17129033029), and Natural Science Foundation of Hebei Province (F2017402069).

References

1. M. Srinivas and G. P. Rangaiah, “Implementation and evaluation of random tunneling algorithm for chemical engineering applications,” Computers and Chemical Engineering, vol. 30, no. 9, pp. 1400–1415, 2006.
2. B. K. F. Lima, “Challenges of continuous global optimization in molecular structure prediction,” European Journal of Operational Research, vol. 181, no. 3, pp. 1198–1213, 2007.
3. S. Kulluk, L. Ozbakir, and A. Baykasoglu, “Self-adaptive global best harmony search algorithm for training neural networks,” Procedia Computer Science, vol. 3, no. 1, pp. 282–286, 2011.
4. L. Liu and H. Zhou, “Hybridization of harmony search with variable neighborhood search for restrictive single-machine earliness/tardiness problem,” Information Sciences, vol. 226, no. 3, pp. 68–92, 2013.
5. O. Baskan, “Harmony search algorithm for continuous network design problem with link capacity expansions,” KSCE Journal of Civil Engineering, vol. 18, no. 1, pp. 273–283, 2014.
6. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 107, no. 12, pp. 1–4, Perth, WA, Australia, November–December 1995.
7. R. Storn and K. Price, “Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
8. S. He, Q. H. Wu, and J. R. Saunders, “Group search optimizer: an optimization algorithm inspired by animal searching behavior,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 973–990, 2009.
9. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
10. H. Eskandar, A. Sadollah, A. Bahreininejad, and M. Hamdi, “Water cycle algorithm–a novel metaheuristic optimization method for solving constrained engineering optimization problems,” Computers and Structures, vol. 110-111, pp. 151–166, 2012.
11. X. S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proceedings of the World Congress on Nature and Biologically Inspired Computing (NaBIC), pp. 210–214, Coimbatore, India, December 2009.
12. P. Civicioglu, “Backtracking search optimization algorithm for numerical optimization problems,” Applied Mathematics and Computation, vol. 219, no. 15, pp. 8121–8144, 2013.
13. P. Civicioglu, “Transforming geocentric Cartesian coordinates to geodetic coordinates by using differential search algorithm,” Computers and Geosciences, vol. 46, no. 3, pp. 229–247, 2012.
14. A. H. Gandomi, “Interior search algorithm (ISA): a novel approach for global optimization,” ISA Transactions, vol. 53, no. 4, pp. 1168–1183, 2014.
15. R. V. Rao, V. D. Kalyankar, and G. Waghmare, “Parameters optimization of selected casting processes using teaching-learning-based optimization algorithm,” Applied Mathematical Modelling, vol. 38, no. 23, pp. 5592–5608, 2014.
16. R. V. Rao, Teaching Learning Based Optimization Algorithm: And Its Engineering Applications, Springer, New York, NY, USA, 2015.
17. R. V. Rao and G. G. Waghmare, “Multi-objective design optimization of a plate-fin heat sink using a teaching-learning-based optimization algorithm,” Applied Thermal Engineering, vol. 76, pp. 521–529, 2015.
18. R. V. Rao and V. Patel, “Multi-objective optimization of heat exchangers using a modified teaching-learning-based optimization algorithm,” Applied Mathematical Modelling, vol. 37, no. 3, pp. 1147–1162, 2013.
19. T. Niknam, F. Golestaneh, and M. S. Sadeghi, “θ-multiobjective teaching-learning-based optimization for dynamic economic emission dispatch,” IEEE Systems Journal, vol. 6, no. 2, pp. 341–352, 2012.
20. F. Zou, L. Wang, X. Hei, D. Chen, and D. Yang, “Teaching-learning-based optimization with dynamic group strategy for global optimization,” Information Sciences, vol. 273, no. 8, pp. 112–131, 2014.
21. R. V. Rao and V. Patel, “An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems,” International Journal of Industrial Engineering Computations, vol. 3, no. 4, pp. 535–560, 2012.
22. L. Wang, F. Zou, X. Hei, D. Yang, D. Chen, and Q. Jiang, “An improved teaching-learning-based optimization with neighborhood search for applications of ANN,” Neurocomputing, vol. 143, pp. 231–247, 2014.
23. P. N. Suganthan, N. Hansen, J. J. Liang et al., “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” Technical Report, Nanyang Technological University, Singapore, 2005.
24. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
25. Y. Wang, Z. Cai, and Q. Zhang, “Differential evolution with composite trial vector generation strategies and control parameters,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 55–66, 2011.
26. Y. E. Wen and H. Fan, “Research on mission planning system key techniques of UCAV,” Journal of Naval Aeronautical Engineering Institute, 2007.
27. Z. Cheng, Y. Sun, and Y. Liu, “Path planning based on immune genetic algorithm for UAV,” in Proceedings of the International Conference on Electric Information and Control Engineering, pp. 590–593, Wuhan, China, March 2011.