Complexity / 2020 / Research Article | Open Access
Special Issue: Open Challenges on the Stability of Complex Systems: Insights of Nonlinear Phenomena with or without Delay 2020
Volume 2020 | Article ID 6105952 | https://doi.org/10.1155/2020/6105952

M. A. Elsisy, D. A. Hammad, M. A. El-Shorbagy, "Solving Interval Quadratic Programming Problems by Using the Numerical Method and Swarm Algorithms", Complexity, vol. 2020, Article ID 6105952, 11 pages, 2020. https://doi.org/10.1155/2020/6105952

Solving Interval Quadratic Programming Problems by Using the Numerical Method and Swarm Algorithms

Guest Editor: Edgar Cristian Díaz González
Received: 20 May 2020
Revised: 21 Jul 2020
Accepted: 05 Aug 2020
Published: 30 Sep 2020

Abstract

In this paper, we present a new approach based on numerical solutions and swarm algorithms (SAs) to solve the interval quadratic programming problem (IQPP). Numerical solutions are used to improve the performance of the SAs. Our approach replaces all intervals in IQPP by additional variables; the resulting form is called the modified quadratic programming problem (MQPP). The Karush–Kuhn–Tucker (KKT) conditions for MQPP are obtained and solved by a numerical method, and the solutions are functions of the additional variables. They also provide the boundaries of the basic variables, which are used as starting points for the SAs. Chaotic particle swarm optimization (CPSO) and the chaotic firefly algorithm (CFA) are presented. In addition, the solution of the dual MQPP is used to improve the behavior of the SAs and as a stopping criterion for them. Finally, the comparison and relations between the numerical solutions and the SAs are shown on some well-known examples.

1. Introduction

Nonlinear programming appears in the solution of many real-world problems, and solving interval programming problems is an active research topic. Interval programming problems are divided into interval linear programming and interval nonlinear programming [1–12].

Interval nonlinear programming problems are used in modeling and solving many real applications, such as the planning of waste management activities [13]. The mathematical model and the proofs of interval analysis can be found in [14]. Many researchers have solved interval nonlinear programming problems by different methods [15–18], but all of these methods obtain the optimal solution only under specific conditions. For example, in [8, 9], Hladík divided the problem into subclasses which can be reduced to easy problems, under the condition that they are convex quadratic programs. Jiang et al. [11] suggested a method to solve the nonlinear interval number programming problem with uncertain coefficients in both the nonlinear objective function and the nonlinear constraints. Liu and Wang [17] presented a numerical method for interval quadratic programming. Li and Tian [18] generalized Liu and Wang's method [17] to solve general interval quadratic programming; their method requires less computation than Liu and Wang's.

As mentioned above, there are many approaches for solving IQPP, but the most common one divides the interval problem into two problems. The first problem finds the optimal solution of the lower objective function on the largest feasible region, while the second finds the optimal solution of the upper objective function on the smallest feasible region. The optimal value of the interval problem then lies between the values of the lower and upper objective functions. As is known, this process is very difficult in many applications, which makes it hard to reach the lowest value of the objective function of the problem.
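As an illustration of this two-problem decomposition, the following sketch works through a hypothetical one-dimensional interval QP; the coefficients, bounds, and function names are invented for the example, not taken from the paper:

```python
# Hypothetical 1-D interval QP:  minimize [1, 2]*x^2 + [-4, -2]*x  subject to 1 <= x <= 3.
# For x >= 0, the lower problem picks the coefficients that minimize f pointwise
# (a = 1, b = -4) and the upper problem the ones that maximize it (a = 2, b = -2).

def f(x, a, b):
    return a * x * x + b * x

def minimize_on_grid(a, b, lo=1.0, hi=3.0, n=100001):
    # brute-force minimization over a fine grid (sufficient for this illustration)
    return min(f(lo + (hi - lo) * i / (n - 1), a, b) for i in range(n))

f_lower = minimize_on_grid(1.0, -4.0)  # best case over the interval coefficients
f_upper = minimize_on_grid(2.0, -2.0)  # worst case over the interval coefficients

print(f_lower, f_upper)  # -4.0 0.0
```

Any optimal value of the interval problem lies in [f_lower, f_upper], which is exactly the bracket this two-problem approach produces.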

KKT conditions are first-order necessary conditions for solving quadratic programming problems. Optimality conditions for optimization problems with interval-valued objective functions and real-valued constraint functions are investigated and discussed in [19–22]. Wolfe's duality theorems, strong duality theorems, and the duality gap in interval-valued optimization problems are treated from a rigorous mathematical viewpoint in [12]. Chalco-Cano et al. [19] introduced a new concept of a stationary point for an interval-valued function based on the gH derivative.
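For a concrete picture of what the KKT conditions check, the sketch below verifies them at the known optimum of a toy QP; the problem, its solution, and the helper name are illustrative assumptions, not taken from the paper:

```python
# Toy QP:  minimize x1^2 + x2^2  subject to  x1 + x2 >= 1  (made-up example).
# KKT at a point (x, lam): stationarity, primal/dual feasibility, complementary slackness.

def kkt_residuals(x, lam):
    # constraint written as g(x) = 1 - x1 - x2 <= 0
    stat = [2 * x[0] - lam, 2 * x[1] - lam]  # grad f + lam * grad g = 0
    g = 1.0 - x[0] - x[1]
    comp = lam * g                # complementary slackness: lam * g = 0
    feas = max(g, 0.0)            # primal feasibility violation
    dual = max(-lam, 0.0)         # dual feasibility violation (lam >= 0)
    return stat, comp, feas, dual

# The optimum is x = (0.5, 0.5) with multiplier lam = 1; all residuals vanish there.
stat, comp, feas, dual = kkt_residuals([0.5, 0.5], 1.0)
print(stat, comp, feas, dual)  # [0.0, 0.0] 0.0 0.0 0.0
```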

SAs are an important concept in computer science [23, 24]. SAs can be described as a population of agents or individuals interacting with each other and with their environment and working under very few rules. The inspiration often comes from nature, especially biological systems. They are successfully applied in real-life applications. Examples of SAs are ant colony optimization [24, 25], particle swarm optimization (PSO) [26, 27], firefly algorithm (FA) [28], glowworm algorithm [29], krill herd algorithm [30], monkey algorithm [31], and grasshopper optimization algorithm [32].

On the other hand, many researchers have proposed hybrid algorithms to improve solution quality, to benefit from the advantages of each component, and to overcome their deficiencies. For example, a gaining–sharing knowledge-based algorithm for solving optimization problems over the continuous space was proposed in [33]. Cao et al. [34] presented a comprehensive learning particle swarm optimizer (CLPSO) embedded with local search (LS), which combines the strong global search capability of CLPSO with the fast convergence of LS. In [35], the authors presented an adaptive particle swarm optimization with supervised learning and control (APSO-SLC) for the parameter settings and diversity maintenance of particle swarm optimization (PSO), which chooses parameters adaptively while improving exploration. A hybrid PSO algorithm that introduces opposition-based learning (OBL) into PSO variants to improve their performance is proposed in [36]. In [37], a surrogate-assisted PSO with Pareto active learning was proposed to solve multiobjective optimization problems with high computational cost. Finally, the historical memory-based PSO (HMPSO) proposed in [38] uses an estimation of distribution algorithm to estimate and preserve the distribution information of the particles' historical promising pbests.

In this paper, a new approach is suggested to solve the interval quadratic programming problem (IQPP). IQPP is converted into the modified quadratic programming problem (MQPP) by replacing all intervals by additional variables. The KKT conditions of MQPP are derived and solved by a numerical method, which provides the boundaries of the basic variables used as starting points in the SAs. The solutions of the KKT conditions are obtained using the Mathematica program. CPSO and CFA are used to solve these problems, giving the decision maker (DM) a fast view of the position of the optimal solution in the intervals. The dual of MQPP is also discussed; its solutions are used to improve the behavior of the proposed approach and serve as its stopping criterion.

2. Interval Quadratic Programming Problem (IQPP)

The interval quadratic programming problem (IQPP) is an interval nonlinear programming problem [9–11] in which the interval-valued objective function is quadratic and the constraints are linear functions with interval coefficients. The feasible region is supposed to be nonempty and fixed.

The optimal solution of an interval programming problem cannot be defined exactly, because each value belonging to the interval coefficients in the objective function and/or the constraints may yield a new optimal solution; hence, no single exact optimal solution can be defined. Many researchers have described the optimal solution of IQPP through the objective function values. In [4, 15], the authors defined the optimal solution for the interval linear programming problem. For example, Garajová and Hladík [4] defined the optimal set of the interval linear programming problem and examined sufficient conditions for its closedness, boundedness, connectedness, and convexity. We therefore define the optimal solution as the union of all optimal solutions of IQPP and explore the whole feasible region to obtain all possible optimal solutions. In our approach, the numerical methods aim to obtain all optimal solutions in the feasible region, while the SAs find the best objective value over the whole interval.

3. The Solution of IQPP

The idea of our new approach to solving IQPP starts by replacing all intervals by additional variables, converting IQPP into the modified quadratic programming problem (MQPP). The KKT conditions of MQPP are obtained and solved by a numerical method; the resulting solutions are functions of the additional variables and provide the boundaries of the basic variables, which are used as starting points for the SAs. The dual of MQPP is presented and solved. CPSO and CFA are used to solve MQPP and its dual form. Furthermore, the solution of the dual MQPP is used as a stopping criterion for our approach and to improve its performance. The proposed approach explores the whole feasible region to find the optimal solution anywhere in the intervals.

3.1. Numerical Methodology

The KKT conditions form a system of equations that is solved by two methods: with additional variables and with interval coefficients. The Mathematica program is used to solve this system. In the first method, the equations are solved as algebraic equations, so the solutions can be expressed as functions of the additional variables. These solutions are very helpful for the DM if the optimal solution at certain values of the interval coefficients is required. In the second method, we use the Newton method when the boundaries of the variables are needed. The solution of this method is used as the initial stage of CPSO and CFA, improving their ability to find the solution in a shorter time than searching the whole variable space.
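The Newton stage can be sketched as follows. This is a generic Newton iteration with a finite-difference Jacobian applied to a toy KKT system, not the paper's actual interval implementation; the function names and the example system are assumptions for illustration:

```python
# Minimal Newton iteration for a square nonlinear system F(z) = 0,
# with a finite-difference Jacobian and Gaussian elimination for the Newton step.

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            fct = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fct * M[k][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton(F, z, tol=1e-10, max_iter=50, h=1e-7):
    n = len(z)
    for _ in range(max_iter):
        Fz = F(z)
        if max(abs(v) for v in Fz) < tol:
            break
        # finite-difference Jacobian J[i][j] = dF_i / dz_j
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            zp = list(z); zp[j] += h
            Fp = F(zp)
            for i in range(n):
                J[i][j] = (Fp[i] - Fz[i]) / h
        step = gauss_solve(J, [-v for v in Fz])
        z = [zi + di for zi, di in zip(z, step)]
    return z

# Example: KKT system of  min x^2  s.t.  x = 1  ->  F(x, lam) = (2x + lam, x - 1),
# whose solution is x = 1, lam = -2.
sol = newton(lambda z: [2 * z[0] + z[1], z[0] - 1.0], [0.0, 0.0])
print(sol)
```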

The following theorems are used for solving the system of nonlinear equations with interval coefficients. Let MQPP have a continuous function which has a zero in a given subset of , i.e., a vector exists such that , where is the set of real intervals’ vectors. Let be the set of real vectors, be the set of matrices, be an element of an interval vector , be a hull inverse of the interval matrix , be an interior of the interval matrix , be the set , be the interior of an interval , and be a volume-reducing property of the Newton iteration.

Theorem 1 (see [38]). Let be Lipschitz continuous on and let be a regular Lipschitz set on , then(i)For every , the equation has at most one solution (ii)The inverse function defined on the range by is Lipschitz continuous, and is a Lipschitz matrix for on (iii)If , then for every , (iv)If is compact and there is a point such that for all , then , i.e., the equation has a unique solution

Theorem 2 (see [38]). Under the assumption of Theorem 1 above, if , then every satisfying has the following three properties:(i)Every zero of satisfies (ii)If , then contains no zero in (iii)If and , then contains a unique zero in (and hence in )Since , implies , it is natural to consider the general Newton iteration [38]:

With the general Newton operator:

Theorem 3 (see [38]). Let be a strongly regular Lipschitz matrix on for . Let be such that is regular and let be an inverse of . If is regular, then the Newton iteration (6) is strongly convergent for every choice of . Moreover, for all , we have either

Corollary 1 (see [38]). If, for some , is an M-matrix or , then the optimal Newton iteration (6) is strongly convergent for every choice of and the relations (8) and hold.
is defined as

3.2. Swarm Algorithm (SA)

Chaos theory (CT) is used to improve the performance of many SAs [39], where the high randomness of the chaotic sequence improves the convergence and diversity of the solutions. CT describes the irregular behavior that arises in nonlinear systems through chaotic maps. These maps act as particles that move in a small range of a nonlinear dynamic system without a predictable traveling path. Many researchers have proposed combinations of CT and meta-heuristic algorithms to improve solution quality, such as hybrid chaos-PSO [40], the chaotic genetic algorithm [41], an evolutionary algorithm combined with chaos [42], the chaotic whale optimization algorithm [43], and chaotic artificial neural networks [44].
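The chaotic sequences used by such algorithms are typically produced by simple maps; a minimal sketch of the logistic map (the map used later in Section 3.2.3) is:

```python
# Logistic map x_{k+1} = mu * x_k * (1 - x_k); with mu = 4 the sequence is
# chaotic on (0, 1), which is what chaotic swarm variants exploit for diversity.

def logistic_map(x0, mu=4.0, n=5):
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

print(logistic_map(0.7))  # first value is 4 * 0.7 * 0.3 = 0.84
```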

3.2.1. Chaotic Firefly Algorithm (CFA)

FA is an evolutionary computation technique [28] whose main strengths are exploitation and exploration. The improved FA with CT, called the chaotic firefly algorithm (CFA), is applied to solve IQPP. The main steps of CFA are as follows:

Step 1. Initialization. A population of N random fireflies (solutions) is initialized, and T denotes the total number of iterations. The position of the i-th firefly in the n-dimensional search space is represented as a vector.

Step 2. Evaluation. Evaluate the fitness value (the light intensity) of each firefly in the population.

Step 3. Determination of the Best Solution. For minimization problems, the firefly with the minimum light intensity is the best solution.

Step 4. Updating Positions of Fireflies. For every pair of fireflies i and j: if firefly j is brighter than firefly i, the i-th firefly is attracted to firefly j and its position is updated according to the attraction equation, in which the attractiveness at zero distance is the initial attractiveness, γ is the light absorption coefficient, r is the Cartesian distance between fireflies i and j, α is a step size factor controlling the step size, and a random vector is drawn from a Gaussian or other distribution.

Step 4-1. Chaotic repairing of the new position.

Step 4-2. Updating the best solution: if the new position of the i-th firefly is better than the current best solution, it replaces the best solution.

Step 5. Stopping Condition. If a prespecified stopping criterion is satisfied, stop the run; otherwise, go to Step 4.
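The attraction move in Step 4 can be sketched as below, assuming the standard FA update x_i ← x_i + β0·exp(−γr²)·(x_j − x_i) + α·ε with Gaussian noise ε; β0 = 1, γ = 1, and α = 0.95 mirror the CFA parameters in Table 1, while the function name and test positions are illustrative:

```python
import math
import random

# Assumed standard firefly attraction move:
#   x_i <- x_i + beta0 * exp(-gamma * r^2) * (x_j - x_i) + alpha * gaussian_noise

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.95, rng=random.Random(0)):
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared Cartesian distance
    beta = beta0 * math.exp(-gamma * r2)             # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * rng.gauss(0.0, 1.0)
            for a, b in zip(xi, xj)]

print(firefly_move([0.0, 0.0], [1.0, 1.0]))
```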

3.2.2. Chaotic Particle Swarm Optimization Algorithm (CPSO)

PSO can solve many difficult optimization problems and converges faster than some other algorithms [45]. The idea of PSO is that several random particles are placed in the search domain of the optimization problem. Each particle evaluates the objective function at its current location and then determines its direction of movement in the search domain by combining the history of its own current and best locations with those of particles located nearby in the swarm, together with some random perturbation. The next iteration takes place after all particles have been moved. Eventually the swarm, like a flock of birds collectively foraging for food, is likely to move close to an optimum of the fitness function. The i-th particle is described by an n-dimensional position vector together with its velocity vector; the best position the particle has visited is stored in its memory, and the best position in the swarm is also recorded. The steps of the CPSO algorithm are as follows:

Step 1. Initialization.
(a) Initialize the positions of all particles randomly.
(b) Initialize the velocities of all particles randomly.
(c) Set t = 1, where t is the time increment and T is the total number of iterations.

Step 2. Optimization.
(a) Evaluate the objective function value of each particle.
(b) If the current value is better than the particle's personal best, update the personal best position.
(c) If the current value is better than the swarm's global best, update the global best position.
(d) If the stopping criterion is satisfied, go to Step 3.
(e) Update all velocities according to the velocity update equation, which uses an inertia term, positive acceleration constants, and random numbers belonging to (0, 1).
(f) Update all positions according to the position update equation.
(g) Chaotic repairing of the new position.
(h) t = t + 1.
(i) Go to Step 2(a).

Step 3. Termination. If a prespecified stopping criterion is satisfied, stop the run; otherwise, go to Step 2.
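The velocity and position updates in Steps 2(e)-(f) can be sketched as follows, assuming the standard PSO update rule with the parameter values listed in Table 1 (inertia weight 0.6, c1 = 2.8, c2 = 1.3); the function name and test vectors are illustrative:

```python
import random

# Assumed standard PSO update:
#   v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),   x <- x + v

def pso_step(x, v, pbest, gbest, w=0.6, c1=2.8, c2=1.3, rng=random.Random(1)):
    r1, r2 = rng.random(), rng.random()
    v_new = [w * vi + c1 * r1 * (p - xi) + c2 * r2 * (g - xi)
             for xi, vi, p, g in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

print(pso_step([0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [2.0, 2.0]))
```

A sanity check of the rule: a particle already sitting at both its personal best and the global best, with zero velocity, does not move.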

3.2.3. Chaotic Repairing of Infeasible Solution

If the new position is infeasible, it is repaired according to the following equation:

If the position is still infeasible, it is repaired according to a second equation that blends it with any feasible solution in the search space using a chaotic number generated by the logistic map, where the iteration index is the age of the infeasible solution and the control parameter of the map equals 4.
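A possible reading of this repair scheme is sketched below; the blending form, the seed, and the fallback are assumptions based on the description above, with the logistic-map parameter equal to 4 as stated:

```python
# Assumed chaotic repair: repeatedly blend the infeasible point with a known
# feasible point, with the blend weight drawn from the logistic map (mu = 4).

def chaotic_repair(x_infeasible, x_feasible, is_feasible, max_age=100):
    c = 0.7  # chaotic seed in (0, 1), avoiding the fixed points of the map
    x = x_infeasible
    for _ in range(max_age):
        c = 4.0 * c * (1.0 - c)                      # logistic map
        x = [c * xf + (1.0 - c) * xi                 # move toward the feasible point
             for xi, xf in zip(x, x_feasible)]
        if is_feasible(x):
            return x
    return x_feasible  # fall back to the known feasible point

repaired = chaotic_repair([5.0, 5.0], [0.5, 0.5],
                          lambda p: all(0.0 <= v <= 1.0 for v in p))
print(repaired)
```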

3.3. The Proposed Approach

In this section, we discuss the proposed approach, whose steps are as follows:

Step 1. Replace all intervals in IQPP by additional variables; the resulting problem is called MQPP. Obtain the dual form of MQPP.

Step 2. Construct the KKT conditions for MQPP and solve the KKT equations by the numerical algorithm.

Step 3. Use the solutions of the numerical algorithm as starting points for CPSO and CFA.

Step 4. Solve MQPP and its dual form by CPSO and CFA.

Step 5. Compare the objective function values obtained by solving the problem and its dual form with the SA. If the values are the same, the global optimal solution of the problem is found. If they differ, solve the problem and its dual form again until the difference between them is ε, where ε is the difference between the optimal value of the problem and the optimal value of its dual problem. Such a solution is a local optimal solution. This comparison is used as a new stopping criterion.

The suggested method is suitable for convex and nonconvex problems. The steps of the proposed approach are illustrated in Figure 1.
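Step 5 reduces to a duality-gap test; a minimal sketch, with an assumed tolerance of 1e-6 and illustrative names, is:

```python
# Duality-gap stopping rule: stop when |f_primal - f_dual| <= eps.
# A zero gap certifies a global optimum; a small nonzero gap is accepted
# as a (possibly local) optimum, per Step 5 of the proposed approach.

def duality_gap_stop(f_primal, f_dual, eps=1e-6):
    gap = abs(f_primal - f_dual)
    return gap <= eps, gap

stop, gap = duality_gap_stop(-3.5000004, -3.5)
print(stop, gap)
```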

4. Results and Discussion

The proposed algorithm is tested by solving three problems taken from the literature; each problem was independently run 30 times. The proposed algorithm was programmed in MATLAB (R2016b) and implemented on a PC with a 3.00 GHz processor and 1 GB RAM running the Windows 7 operating system. The proposed algorithm, like any nontraditional optimization algorithm, involves a number of parameters that affect its performance. The parameters adopted in the implementation of CFA and CPSO are listed in Table 1.


CFA parameter | Value | CPSO parameter | Value
Swarm size (m) | 20 | Swarm size | 20
Number of iterations (T) | 200 | Number of iterations (T) | 200
Initial attractiveness | 1 | Acceleration coefficient c1 | 2.8
Light absorption coefficient (γ) | 1 | Acceleration coefficient c2 | 1.3
Step size factor | 0.95 | Inertia weight | 0.6
Chaos search repairing iteration (m) | 1E02 | | 1E-6

4.1. Problem 1

This problem is formulated as follows [10]:

By replacing all intervals by additional parameters, the problem becomes the following, where each additional parameter ranges over its corresponding interval.

The dual form of problem (15) is

KKT conditions of problem (15) are

In [8, 9], problem (14) can be divided into two problems. The first problem is

Its solution is and . The second problem is

Its solution is and . The solutions of KKT conditions in (18) can be expressed as

In addition, the numerical solution provides the boundaries of the basic variables. The comparison between different types of SAs in solving Problem 1 is shown in Table 2.


PSO | FA | CPSO | CFA


4.2. Problem 2

This problem is formulated as follows [1]:

By replacing all intervals by additional parameters, the problem becomes the following, where each additional parameter ranges over its corresponding interval.

The dual form of problem (22) is

The KKT conditions of problem (22) are

In [8, 9], problem (21) is divided into two problems. The first problem is

Its solution is and . The second problem is

Its solution is and . The solutions of KKT conditions in (24) can be expressed as

In addition, the numerical solution provides the boundaries of the basic variables. The results of Problem 2 obtained by the SAs are shown in Table 3.


PSO | FA | CPSO | CFA


4.3. Problem 3

This problem is formulated as follows [46]:

By replacing all intervals by additional parameters, the problem becomes the following, where each additional parameter ranges over its corresponding interval.

The dual form of problem (29) is

The KKT conditions of problem (28) are

In [8, 9], problem (28) is divided into two problems. The first problem is

Its solution is and . The second problem is

Its solution is and .

The solutions of KKT conditions in (31) can be expressed as

In addition, the numerical solution provides the boundaries of the basic variables. In Table 4, the results of Problem 3 obtained by the SAs are listed.


PSO | FA | CPSO | CFA


In addition, the statistical results obtained by the original PSO, original FA, CPSO, and CFA over the 30 runs are summarized in Table 5 in terms of CPU time, mean value, standard deviation, and worst and best values.


Algorithm | CPU time (s) | Mean | SD | Worst | Best

Problem 1 (range of [1.025, 74]):
PSO | 5.45 | 1.6783 | 0.0419 | 1.7024 | 1.6209
FA | 4.56 | 1.5337 | 0.0065 | 1.5379 | 1.5252
CPSO | 3.76E−002 | 1.0250 | 0 | 1.0250 | 1.0250
CFA | 2.95E−002 | 1.0250 | 0 | 1.0250 | 1.0250

Problem 2 (range of [−3.5, −0.75]):
PSO | 12.05 | −3.2244 | 0.0458 | −3.1846 | −3.2759
FA | 7.96 | −3.3230 | 0.0066 | −3.3257 | −3.3355
CPSO | 2.54E−002 | −3.4999 | 0 | −3.4999 | −3.4999
CFA | 1.51E−002 | −3.5 | 0 | −3.5 | −3.5

Problem 3 (range of [−3.4922, 0.5217]):
PSO | 22.45 | −1.8795 | 0.0304 | −1.8750 | −1.9297
FA | 10.38 | −3.1498 | 0.0487 | −3.0907 | −3.1872
CPSO | 9.41E−002 | −2.7037 | 0.0013 | −2.7005 | −2.7046
CFA | 8.95E−002 | −3.4922 | 0 | −3.4922 | −3.4922

Furthermore, Figures 2–4 show the convergence curves of the best value obtained so far by the original PSO, original FA, CPSO, and CFA for the three problems.

The results show that the proposed SAs (CFA and CPSO) outperform the original algorithms in terms of optimality. In addition, these results show that the proposed SAs can solve IQPP effectively at low computational cost: their CPU time is less than that of the original algorithms, as shown in Table 5. In other words, the solutions of the test problems obtained by CPSO and CFA are the same as those of previous methods, but they are obtained much faster and with little computational effort. On the other hand, the numerical approach gives the solution as a general formula in the additional variables, from which the solution at any values inside the intervals can be obtained. In addition, the numerical solution provides the boundaries of the basic variables, which are used in the initialization step of the SAs. Finally, our approach, like any SA-based method, is more general and more suitable for real applications than traditional methods.

5. Conclusion

This paper presented a new approach to solve IQPP, aiming to explore the feasible region to find the optimal solution anywhere in the intervals. All intervals were replaced by additional variables; the resulting form is MQPP. The KKT conditions for MQPP were solved numerically to obtain the solutions as functions of the additional variables and to provide the boundaries of the basic variables. These solutions are used as starting points for the SAs. CPSO and CFA are used to solve MQPP and its dual form. The advantages of our procedure are that (1) the solution of the numerical method is more general than previous methods, (2) it gives the decision maker a very fast view of the optimal solution inside the intervals, (3) using the optimal solution of the dual problem as a stopping criterion for SAs is more suitable than other criteria, and (4) its effectiveness is verified by comparison with other studies. We also compared PSO, FA, CPSO, and CFA with each other. Real applications of interval nonlinear programming problems should be addressed in the future. In addition, we plan to use this approach to solve multiobjective linear programming with interval coefficients and to apply the GSK algorithm to IQPP.

Data Availability

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

References

  1. A. K. Bhurjee and G. Panda, “Efficient solution of interval optimization problem,” Mathematical Methods of Operations Research, vol. 76, no. 3, pp. 273–288, 2012. View at: Publisher Site | Google Scholar
  2. M. Chen, S.-G. Wang, P. P. Wang, and X. Ye, “A new equivalent transformation for interval inequality constraints of interval linear programming,” Fuzzy Optimization and Decision Making, vol. 15, no. 2, pp. 155–175, 2016. View at: Publisher Site | Google Scholar
  3. J. W. Chinneck and K. Ramadan, “Linear programming with interval coefficients,” The Journal of the Operational Research Society, vol. 51, no. 2, pp. 209–220, 2000. View at: Publisher Site | Google Scholar
  4. E. Garajová and M. Hladík, “On the optimal solution set in interval linear programming,” Computational Optimization and Applications, vol. 72, no. 1, pp. 269–292, 2019. View at: Publisher Site | Google Scholar
  5. M. Hladík, “Robust optimal solutions in interval linear programming with forall-exists quantifiers,” European Journal of Operational Research, vol. 254, no. 3, pp. 705–714, 2016. View at: Publisher Site | Google Scholar
  6. M. Hladík, “How to determine basis stability in interval linear programming,” Optimization Letters, vol. 8, no. 1, pp. 375–389, 2014. View at: Publisher Site | Google Scholar
  7. M. Hladík, “Weak and strong solvability of interval linear systems of equations and inequalities,” Linear Algebra and Its Applications, vol. 438, no. 11, pp. 4156–4165, 2013. View at: Google Scholar
  8. M. Hladík, “Optimal value bounds in nonlinear programming with interval data,” TOP, vol. 19, no. 1, pp. 93–106, 2011. View at: Publisher Site | Google Scholar
  9. M. Hladík, “Optimal value range in interval linear programming,” Fuzzy Optimization and Decision Making, vol. 8, no. 3, pp. 283–294, 2009. View at: Publisher Site | Google Scholar
  10. C. Jiang, Z. G. Zhang, Q. F. Zhang, X. Han, H. C. Xie, and J. Liu, “A new nonlinear interval programming method for uncertain problems with dependent interval variables,” European Journal of Operational Research, vol. 238, no. 1, pp. 245–253, 2014. View at: Publisher Site | Google Scholar
  11. C. Jiang, X. Han, G. R. Liu, and G. P. Liu, “A nonlinear interval number programming method for uncertain optimization problems,” European Journal of Operational Research, vol. 188, no. 1, pp. 1–13, 2008. View at: Publisher Site | Google Scholar
  12. H.-C. Wu, “On interval-valued nonlinear programming problems,” Journal of Mathematical Analysis and Applications, vol. 338, no. 1, pp. 299–316, 2008. View at: Publisher Site | Google Scholar
  13. X. Y. Wu, G. H. Huang, L. Liu, and J. B. Li, “An interval nonlinear program for the planning of waste management systems with economies-of-scale effects-A case study for the region of Hamilton, Ontario, Canada,” European Journal of Operational Research, vol. 171, no. 2, pp. 349–372, 2006. View at: Publisher Site | Google Scholar
  14. R. E. Moore, Methods and Applications of Interval Analysis, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1979.
  15. M. Allahdadi and H. Mishmast Nehi, “The optimal solution set of the interval linear programming problems,” Optimization Letters, vol. 7, no. 8, pp. 1893–1911, 2013. View at: Publisher Site | Google Scholar
  16. H. Ishibuchi and H. Tanaka, “Multiobjective programming in optimization of the interval objective function,” European Journal of Operational Research, vol. 48, no. 2, pp. 219–225, 1990. View at: Publisher Site | Google Scholar
  17. S.-T. Liu and R.-T. Wang, “A numerical solution method to interval quadratic programming,” Applied Mathematics and Computation, vol. 189, no. 2, pp. 1274–1281, 2007. View at: Publisher Site | Google Scholar
  18. W. Li and X. Tian, “Numerical solution method for general interval quadratic programming,” Applied Mathematics and Computation, vol. 202, no. 2, pp. 589–595, 2008. View at: Publisher Site | Google Scholar
  19. Y. Chalco-Cano, R. Osuna-Gómez, B. Hernández-Jiménez, and H. Román-Flores, A Note on Optimality Conditions to Interval Optimization Problems, Atlantis Press, Paris, France, 2015.
  20. L. Stefanini and B. Bede, “Generalized Hukuhara differentiability of interval-valued functions and interval differential equations,” Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 3-4, pp. 1311–1328, 2009. View at: Publisher Site | Google Scholar
  21. H.-C. Wu, “The Karush-Kuhn-Tucker optimality conditions in an optimization problem with interval-valued objective function,” European Journal of Operational Research, vol. 176, no. 1, pp. 46–59, 2007. View at: Publisher Site | Google Scholar
  22. J. Zhang, S. Liu, L. Li, and Q. Feng, “The KKT optimality conditions in a class of generalized convex optimization problems with an interval-valued objective function,” Optimization Letters, vol. 8, no. 2, pp. 607–631, 2014. View at: Publisher Site | Google Scholar
  23. K. G. Dhal, S. Ray, A. Das, and S. Das, “A survey on nature-inspired optimization algorithms and their application in image enhancement domain,” Archives of Computational Methods in Engineering, vol. 26, pp. 1607–1638, 2018. View at: Google Scholar
  24. M. Dorigo and M. Birattari, “Ant colony optimization,” in Encyclopedia of Machine Learning, C. Sammut and G. I. Webb, Eds., pp. 36–39, Springer, Boston, MA, USA, 2010. View at: Google Scholar
  25. W. F. Abd El-Wahed, A. A. Mousa, and M. A. Elsisy, “Solving economic emissions load dispatch problem by using hybrid ACO-MSM approach,” The Online Journal on Power and Energy Engineering (OJPEE), vol. 1, no. 1, pp. 31–35, 2009. View at: Google Scholar
  26. M. A. El-Shorbagy and A. E. Hassanien, “Particle swarm optimization from theory to applications,” International Journal of Rough Sets and Data Analysis, vol. 5, no. 2, pp. 1–24, 2018. View at: Publisher Site | Google Scholar
  27. M. A. El-Shorbagy, “Hybrid particle swarm algorithm for multi-objective optimization,” Menoufiya University, Al Minufya, Egypt, 2010, M.S. thesis. View at: Google Scholar
  28. S. Verma and V. Mukherjee, “Firefly algorithm for congestion management in deregulated environment,” Engineering Science and Technology, an International Journal, vol. 19, no. 3, pp. 1254–1265, 2016. View at: Publisher Site | Google Scholar
  29. M. Marinaki and Y. Marinakis, “A glowworm swarm optimization algorithm for the vehicle routing problem with stochastic demands,” Expert Systems with Applications, vol. 46, pp. 145–163, 2016. View at: Publisher Site | Google Scholar
  30. A. L. Bolaji, M. A. Al-Betar, M. A. Awadallah, A. T. Khader, and L. M. Abualigah, “A comprehensive review: krill Herd algorithm (KH) and its applications,” Applied Soft Computing, vol. 49, pp. 437–446, 2016. View at: Publisher Site | Google Scholar
  31. Y. Zhou, X. Chen, and G. Zhou, “An improved monkey algorithm for a 0-1 knapsack problem,” Applied Soft Computing, vol. 38, pp. 817–830, 2016. View at: Publisher Site | Google Scholar
  32. S. Saremi, S. Mirjalili, and A. Lewis, “Grasshopper optimisation algorithm: theory and application,” Advances in Engineering Software, vol. 105, pp. 30–47, 2017. View at: Publisher Site | Google Scholar
  33. A. W. Mohamed, A. A. Hadi, and A. K. Mohamed, “Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired algorithm,” International Journal of Machine Learning and Cybernetics, vol. 11, no. 7, pp. 1501–1529, 2020. View at: Publisher Site | Google Scholar
  34. Y. Cao, H. Zhang, W. Li, M. Zhou, Y. Zhang, and W. A. Chaovalitwongse, “Comprehensive learning particle swarm optimization algorithm with local search for multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 718–731, 2018. View at: Google Scholar
  35. W. Dong and M. Zhou, “A supervised learning and control method to improve particle swarm optimization algorithms,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 7, pp. 1135–1148, 2016. View at: Google Scholar
  36. Q. Kang, C. Xiong, M. Zhou, and L. Meng, “Opposition-based hybrid strategy for particle swarm optimization in noisy environments,” IEEE Access, vol. 6, pp. 21888–21900, 2018. View at: Publisher Site | Google Scholar
  37. Z. Lv, L. Wang, Z. Han, J. Zhao, and W. Wang, “Surrogate-assisted particle swarm optimization algorithm with Pareto active learning for expensive multi-objective optimization,” IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 3, pp. 838–849, 2019. View at: Publisher Site | Google Scholar
  38. A. Neumaier, Interval Methods for Systems of Equations, vol. 37, Cambridge University Press, Cambridge, UK, 1990.
  39. D. Yang, G. Li, and G. Cheng, “On the efficiency of chaos optimization algorithms for global optimization,” Chaos, Solitons & Fractals, vol. 34, no. 4, pp. 1366–1375, 2007. View at: Publisher Site | Google Scholar
  40. M. A. E. Shorbagy and A. A. Mousa, “Chaotic particle swarm optimization for imprecise combined economic and emission dispatch problem,” Review of Information Engineering and Applications, vol. 4, no. 1, pp. 20–35, 2017. View at: Publisher Site | Google Scholar
  41. M. A. El-Shorbagy, A. A. Mousa, and S. M. Nasr, “A chaos-based evolutionary algorithm for general nonlinear programming problems,” Chaos, Solitons & Fractals, vol. 85, pp. 8–21, 2016. View at: Publisher Site | Google Scholar
  42. J. Xiao, Improved Quantum Evolutionary Algorithm Combined with Chaos and its Application, Springer, Berlin, Germany, 2009.
  43. D. Oliva, M. Abd El Aziz, and A. Ella Hassanien, “Parameter estimation of photovoltaic cells using an improved chaotic whale optimization algorithm,” Applied Energy, vol. 200, pp. 141–154, 2017. View at: Publisher Site | Google Scholar
  44. Z. Aram, S. Jafari, J. Ma, J. C. Sprott, S. Zendehrouh, and V.-T. Pham, “Using chaotic artificial neural networks to model memory in the brain,” Communications in Nonlinear Science and Numerical Simulation, vol. 44, pp. 449–459, 2017. View at: Publisher Site | Google Scholar
  45. M. A. El-Shorbagy, M. Elhoseny, A. E. Hassanien, and S. H. Ahmed, “A novel PSO algorithm for dynamic wireless sensor network multiobjective optimization problem,” Transactions on Emerging Telecommunications Technologies, vol. 30, no. 11, 2018. View at: Publisher Site | Google Scholar
  46. D. Ghosh, D. Ghosh, S. K. Bhuiya, and L. K. Patra, “A saddle point characterization of efficient solutions for interval optimization problems,” Journal of Applied Mathematics and Computing, vol. 58, no. 1-2, pp. 193–217, 2018. View at: Publisher Site | Google Scholar

Copyright © 2020 M. A. Elsisy et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

