Abstract

From a theoretical analysis of the traditional social cognitive optimization (SCO) algorithm, we see that traditional SCO is not guaranteed to converge to the global optimum with probability one. We therefore propose an improved social cognitive optimizer that is guaranteed to converge to the global optimum. The global convergence of the improved SCO algorithm is guaranteed by a periodic restart strategy in which randomly regenerated points are accepted only after comparison with the current solutions, which helps to avoid premature convergence. We then give a convergence proof for the improved SCO based on the results of Solis and Wets. Finally, simulation results on a set of benchmark problems show that the proposed algorithm has higher optimization efficiency, better global performance, and more stable optimization outcomes than the traditional SCO for nonlinear programming problems (NLPs).

1. Introduction

Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial, studied in the field of artificial intelligence. Swarms are often large groups of small insects in which each member performs a simple role, yet the collective action produces complex behavior as a whole. The emergence of such complex behavior extends beyond swarms: similar complex social structures also occur in higher-order animals and in insects that do not swarm, such as colonies of ants, flocks of birds, packs of wolves, and colonies of bees.

Humans have higher adaptability and social intelligence than insect swarms, and, according to social cognitive theory, human intelligence derives from the interactions of individuals with one another and with the environment in a social world. Xie et al. [1] presented the social cognitive optimization (SCO) algorithm, a novel heuristic swarm intelligence optimization algorithm. SCO is a probabilistic iterative procedure over many learning agents. A learning agent obtains vicarious capability through tournament selection and shares information through a knowledge library with symbolizing capability. Because SCO makes full use of the interactions and information sharing of the entire social swarm, it greatly improves the convergence speed and accuracy of swarm intelligence algorithms and outperforms many other widely used intelligent optimization methods, such as PSO and ACO, in many applications. Such applications include nonlinear programming problems (NLPs), the nonlinear complementarity problem (NCP) [2], fractional programs [3], nonlinear systems of equations [4], engineering design problems [5], and Web service composition selection [6].

Many researchers have improved the traditional SCO. Wang et al. [7] improved the SCO algorithm by incorporating the migration step of the self-organizing migrating algorithm (SOMA) into the SCO process and adding two parameters to solve the SAT problem; the improved SCO achieved a fast convergence rate in the early stage of optimization with only a small effect on the final result. Ma et al. [8] introduced chaos and the Kent mapping function to modify and optimize the conditions of the neighborhood search, obtaining more reasonable and more uniformly distributed knowledge points for solving nonlinear constrained problems. Sun et al. [9] presented a hybrid social cognitive optimization algorithm based on an elitist strategy and chaotic optimization to solve constrained nonlinear programming problems, which partitions the learning agents into three groups in proportion: elite learning agents, chaotic learning agents, and common learning agents. Zhi-zhong et al. [10] improved social cognitive optimization, embedded the improved SCO algorithm in the framework of the cultural algorithm to construct a novel algorithm, cultural social cognitive optimization (C-SCO), and used C-SCO to solve the QoS-aware cloud service composition problem. Sun et al. [11] presented a social cognitive optimization algorithm to generate optimal evidence weight values for the Dempster-Shafer (D-S) evidence model based on historical training data.

These improved algorithms, called hybrid optimization algorithms, are mainly based on empirical analysis of experiments, while the global convergence of the hybrid algorithms has not been studied in theory. Because SCO originates from simulating the social cognitive process and involves sophisticated stochastic behavior, it is hard to analyze theoretically, which results in the lack of a solid theoretical foundation. In particular, the performance of SCO as an optimization technique requires theoretical support. The lack of a theoretical foundation hampers the further development of SCO and blocks its application to problems where rigorous algorithms are required.

In this paper, from a theoretical analysis of the traditional social cognitive optimization (SCO) algorithm, we see that traditional SCO is not guaranteed to converge to the global optimum with probability one. We therefore propose a novel social cognitive optimizer, called stochastic SCO, that is guaranteed to converge to the global optimum. The global convergence of the improved SCO algorithm is guaranteed by a periodic restart strategy in which randomly regenerated points are accepted only after comparison with the current solutions, which helps to avoid premature convergence. We then give a convergence proof for the stochastic SCO based on the results of Solis and Wets [12]. Finally, simulation results on a set of benchmark problems show that the proposed algorithm has higher optimization efficiency, better global performance, and more stable optimization outcomes than the traditional SCO for NLPs.

The remainder of this paper is organized as follows. In Section 2, we review the traditional SCO and analyze the global convergence of the traditional SCO algorithm. In Section 3, our improvement to the traditional SCO is described in detail and a proof of the global convergence of the proposed algorithm is presented. In Section 4, typical experiments are employed to evaluate the performance of the improved SCO, and the conclusions are given in Section 5. The acknowledgments and the appendix follow at the end of the paper.

2. Convergence Analysis of Traditional SCO Algorithm

2.1. Social Cognitive Optimization Algorithm (SCO)

Social cognitive theory (SCT) holds that people learn by observing others, with the environment, behavior, and cognition acting as the chief reciprocal factors influencing development. Human learning includes the abilities to symbolize, learn from others, plan alternative strategies, regulate one’s own behavior, and engage in self-reflection; humans therefore have higher adaptability and social intelligence than insect swarms. By introducing human social intelligence based on SCT into an artificial system, Xie et al. [1] proposed the social cognitive optimization (SCO) algorithm in 2002. In the SCO optimization procedure, a knowledge library with symbolizing capability consists of a number of knowledge points, each denoted by a location in the search space and its fitness value; learning agents, representing human individuals and each holding a knowledge point from the knowledge library, perform observational learning via a neighborhood local search guided by a model selected through tournament selection. The neighborhood local search of one point referring to another finds a new point by drawing each dimension from a uniform distribution, usually generated by the linear congruential method, over an interval determined by the two points. The basic steps of the SCO algorithm are clearly described in [1].
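To make this concrete, the following minimal Python sketch shows one plausible form of the neighborhood local search. Drawing each dimension uniformly from the interval between the reference value and its reflection about the central (better) value, and clipping to the search box, are our assumptions for illustration; the exact rule is the one given in [1].

```python
import random

def neighborhood_search(center, reference, lower, upper):
    """One plausible form of SCO's neighborhood local search (illustrative only).

    Each dimension of the new point is drawn uniformly from the interval
    spanned by the reference value and its reflection about the center value,
    then clipped to the box [lower, upper].  The exact rule is given in [1].
    """
    new_point = []
    for d in range(len(center)):
        a = 2.0 * center[d] - reference[d]          # reflection of the reference about the center
        b = reference[d]
        x = random.uniform(min(a, b), max(a, b))    # uniform sample on the interval
        new_point.append(min(max(x, lower[d]), upper[d]))  # keep the point inside the search box
    return new_point
```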

2.2. Basic Conception and Theory for Global Convergence Theorem

The general global optimization problem (P) used here is defined as
\[
(P)\qquad \min_{x \in S} f(x),
\]
where $x$ is a vector of decision variables, $S$ is an $n$-dimensional feasible region, assumed to be nonempty, that is a subset of $\mathbb{R}^n$, and $f$ is a real-valued function defined from $S$ to $\mathbb{R}$. The goal is to find a value for $x$ contained in $S$ that minimizes $f$. Notice that the feasible region may include both continuous and discrete variables. Denote the global optimal solution to (P) by $x^*$, where $f(x^*) \le f(x)$ for all $x \in S$.

Solis and Wets [12] provide a convergence proof, in probability, to the global minimum for general step size algorithms with conditions on the method of generating the step length and direction.

The conceptual algorithm of [12] is as follows.

Step 0. Find $x_0$ in $S$ and set $k = 0$.

Step 1. Generate $\xi_k$ from the sample space $(\mathbb{R}^n, \mathcal{B}, \mu_k)$.

Step 2. Set $x_{k+1} = D(x_k, \xi_k)$, choose $\mu_{k+1}$, set $k = k + 1$, and return to Step 1.
Here $(\mathbb{R}^n, \mathcal{B}, \mu_k)$ is the probability space at iteration $k$. The $\mu_k$ are probability measures corresponding to distribution functions defined on $\mathbb{R}^n$, possibly as conditional probability measures. $\mathcal{B}$ is the $\sigma$-algebra of Borel subsets of $\mathbb{R}^n$. The map $D$, with domain $S \times \mathbb{R}^n$ and range $S$, satisfies the following condition.
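As an illustration, pure random search is the simplest instance of this conceptual algorithm: every $\mu_k$ is the uniform distribution on $S$, and $D$ keeps the better of $x_k$ and $\xi_k$. The following Python sketch assumes a box-shaped feasible region; the objective function and the bounds are placeholders.

```python
import random

def conceptual_random_search(f, lower, upper, iterations=10000):
    """Pure random search as an instance of the Solis-Wets conceptual algorithm.

    S is assumed to be the box [lower, upper]; mu_k is uniform on S for every k
    (so H2 holds), and D(x, xi) keeps the better point (so H1 holds).
    """
    # Step 0: find x_0 in S and set k = 0.
    x = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
    best = f(x)
    for _ in range(iterations):
        # Step 1: generate xi_k from the sample space (R^n, B, mu_k).
        xi = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        # Step 2: set x_{k+1} = D(x_k, xi_k), keep mu_{k+1} = mu_k, and repeat.
        f_xi = f(xi)
        if f_xi < best:
            x, best = xi, f_xi
    return x, best
```

With these choices both hypotheses below are satisfied, so Theorem 1 applies and the sequence converges in probability to the optimality region.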

Hypothesis 1 (H1). Consider $f(D(x, \xi)) \le f(x)$; if $\xi \in S$, then $f(D(x, \xi)) \le \min\{f(x), f(\xi)\}$.
Clearly, the monotone sequence $\{f(x_k)\}_{k=0}^{\infty}$ converges to the infimum of $f$ on $S$. In order to avoid excluding some pathological situations, we replace the search for the infimum by that for $\alpha$, the essential infimum of $f$ on $S$, defined as follows:
\[
\alpha = \inf\{\, t : v[\{x \in S : f(x) < t\}] > 0 \,\},
\]
where $v$ is a nonnegative measure defined on the (Borel) subsets $B$ of $\mathbb{R}^n$ with $v[S] > 0$. Typically $v[B]$ is simply the $n$-dimensional volume of the set $B$; more generally, $v$ is the Lebesgue measure.

Hypothesis 2 (H2). For any (Borel) subset $A$ of $S$ with $v[A] > 0$, we have
\[
\prod_{k=0}^{\infty} \bigl(1 - \mu_k[A]\bigr) = 0.
\]
This means that, given any subset $A$ of $S$ with positive Lebesgue measure, the probability of repeatedly missing the set $A$ when generating the random samples $\xi_k$ must be zero.

This requires that the sampling strategy determined by the choice of the $\mu_k$ neither relies exclusively on distribution functions concentrated on proper subsets of $\mathbb{R}^n$ of lower dimension (such as discrete distributions) nor consistently ignores a part of $S$ with positive “volume” (with respect to $v$).

Theorem 1 (convergence theorem (global search)). Suppose that $f$ is a measurable function, $S$ is a measurable subset of $\mathbb{R}^n$, and (H1) and (H2) are satisfied. Let $\{x_k\}_{k=0}^{\infty}$ be a sequence generated by the algorithm. Then
\[
\lim_{k \to \infty} P[x_k \in R_{\varepsilon, M}] = 1,
\]
where $P[x_k \in R_{\varepsilon, M}]$ is the probability that, at step $k$, the point $x_k$ generated by the algorithm is in $R_{\varepsilon, M}$, the optimality region, given by $R_{\varepsilon, M} = \{x \in S : f(x) < \alpha + \varepsilon\}$ when $\alpha$ is finite (and by $\{x \in S : f(x) < -M\}$ when $\alpha = -\infty$).

2.3. Convergence Analysis of Traditional SCO Algorithm

Although traditional SCO may outperform other evolutionary algorithms in the early iterations, its performance may not remain competitive as the number of generations increases, and the traditional SCO algorithm is not guaranteed to converge to the global optimal solution. The general convergence proof of Theorem 1 applies only to optimization algorithms that satisfy Hypotheses 1 and 2.

The traditional SCO algorithm always saves the best solution in the knowledge library, so it obviously satisfies Hypothesis 1. However, the agents’ points in the traditional SCO algorithm are never reset to random solutions drawn from the whole search space at the end of a generation. It follows that the algorithm does not satisfy Hypothesis 2 and, according to Theorem 1, is not an optimization algorithm with global search convergence properties. Therefore the traditional SCO algorithm is not guaranteed to converge to the global optimal solution with probability one.

3. Method

3.1. Overview

Swarm intelligence optimization algorithms are also called metaheuristic algorithms because they provide a high-level framework that can be adapted to solve optimization problems. Thus, when swarm intelligence is used to solve a specific problem, it must be modified to fit that problem.

The SCO iterative procedure is rooted in human intelligence as described by social cognitive theory. When a learning agent falls into a local optimum, it learns without observation, instead searching stochastically with full randomness, which helps to increase the global ergodicity of the knowledge library and to avoid premature convergence.

In this paper, a novel stochastic social cognitive optimization (SSCO) algorithm based on periodic partial reinitialization is proposed to solve NLPs and to improve the global convergence of the social cognitive optimization (SCO) algorithm. Simulation results show that the performance of SSCO is clearly better than that of traditional SCO for NLPs.

3.2. Stochastic Social Cognitive Optimization (SSCO) Algorithm

In our study, we incorporate the periodic partial reinitialization of the population into the SCO to enhance the overall performance of the algorithm. Figure 1 shows the flowchart of SSCO. The SSCO is described as follows.

Step 1 (initialization). (a) Set the parameters of SSCO: the number of knowledge points in the knowledge library, the number of learning agents, the maximum number of learning cycles, the tournament width, the maximum iteration count for reinitialization, and the maximum variation of fitness per generation.
(b) Randomly create the knowledge points in the knowledge library, evaluate their fitness values according to the objective function, and save the global optimization point, that is, the point with the best fitness.
(c) Randomly assign a knowledge point from the library to each learning agent, without repetition.

Step 2. For each learning agent, the learning cycle is as follows.
(a) If the variation of the fitness of the global optimization point over the previous generation is less than the threshold, a new point is randomly generated in the search space; if the new point is better than the agent’s current knowledge point, it replaces that point.
(b) Otherwise, we have the following.
(1) Tournament selection: select the best knowledge point from several knowledge points drawn at random from the library, excluding the agent’s own point.
(2) Observational learning: compare the fitness of the selected point with that of the agent’s own point. The neighborhood local search of the better point referring to the worse one generates a new point according to (1); if the new point is better than the agent’s current knowledge point, it replaces that point.
(c) Library refreshment: remove the worst knowledge point from the library and add the new point to the library.

Step 3. Repeat Step 2 until a stop condition is met (e.g., the maximum number of iterations is reached or a satisfactory fitness value is found). The total number of fitness evaluations equals the number of initial knowledge points plus the number of learning agents multiplied by the number of learning cycles.
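For concreteness, the following Python sketch assembles Steps 1–3 for a box-constrained minimization problem. It is a minimal illustration rather than the exact implementation used in the experiments: constraint handling is reduced to box clipping, the default parameter values are placeholders, and neighborhood_search is the hypothetical helper sketched in Section 2.1.

```python
import random

def ssco(f, lower, upper, n_lib=98, n_agents=14, cycles=200, tau=2, eps=1e-8):
    """Minimal SSCO sketch (minimization on a box).  Parameter values are
    illustrative placeholders, not the experimental settings of Section 4."""
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]

    # Step 1: build the knowledge library and hand points to the learning agents.
    library = [(p, f(p)) for p in (rand_point() for _ in range(n_lib))]
    agents = [library[i] for i in random.sample(range(n_lib), n_agents)]
    best = min(library, key=lambda kp: kp[1])
    prev_best = float("inf")

    for _ in range(cycles):
        stagnant = abs(prev_best - best[1]) < eps       # restart trigger, Step 2(a)
        prev_best = best[1]
        for i, (x, fx) in enumerate(agents):
            if stagnant:
                new = rand_point()                      # partial reinitialization
            else:
                # Step 2(b)(1): tournament selection of a model point.
                model = min(random.sample(library, tau), key=lambda kp: kp[1])
                # Step 2(b)(2): search around the better point, referring to the worse.
                center, ref = (x, model[0]) if fx < model[1] else (model[0], x)
                new = neighborhood_search(center, ref, lower, upper)  # hypothetical helper
            new_fit = f(new)
            if new_fit < fx:                            # keep the new point only if better
                agents[i] = (new, new_fit)
                if new_fit < best[1]:
                    best = (new, new_fit)
            # Step 2(c): library refreshment - drop the worst point, add the new one.
            library.remove(max(library, key=lambda kp: kp[1]))
            library.append((new, new_fit))
    return best
```

In the actual SSCO experiments, the constraints of the NLPs are handled by the BCH and ACR rules described in Section 4.1 rather than by simple box clipping.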

3.3. Convergence Analysis of SSCO Algorithm

The traditional SCO algorithm is not guaranteed to converge to the global optimal solution with probability one. The proof presented here casts SSCO into the framework of the global stochastic search algorithm of Theorem 1, thus allowing the use of Theorem 1 to prove convergence. It remains to show that SSCO satisfies both (H1) and (H2).

Let $\{x_k\}_{k=0}^{\infty}$ be a sequence generated by the SSCO algorithm, where $x_k$ is the current best position of the swarm at time step $k$.

Define the function $D$ by
\[
D(x_k, \xi_k) =
\begin{cases}
\xi_k & \text{if } f(\xi_k) < f(x_k), \\
x_k & \text{otherwise.}
\end{cases}
\]
The definition of $D$ above clearly complies with hypothesis (H1), since the sequence $\{f(x_k)\}$ is monotonic by definition: the best solution is always saved in the knowledge library.

For the SSCO algorithm to satisfy hypothesis (H2), the union of the supports of the agents’ sample spaces must cover $S$, so that at time step $k$ we have $S \subseteq \bigcup_{i} M_{i,k}$, where $M_{i,k}$ denotes the support of the sample space of agent $i$. Because every learning agent has a chance to be assigned a random solution drawn uniformly from the whole search space in every generation in which the iteration is static within a certain precision, $M_{i,k} = S$ at those steps. Let $A$ be any Borel subset of $S$ with $v[A] > 0$; then $\mu_k[A] > 0$ at the reinitialization steps and $\prod_{k=0}^{\infty}(1 - \mu_k[A]) = 0$. Thus hypothesis (H2) is satisfied, and by Theorem 1 the SSCO converges to the global best solution with probability one.
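The last step of this argument can be made explicit as follows (a sketch, assuming that the uniform reinitialization on $S$ is triggered at infinitely many steps $k_1 < k_2 < \cdots$):
\[
\mu_{k_j}[A] \;\ge\; \frac{v[A]}{v[S]} \;>\; 0 \quad \text{for every } j,
\qquad\text{so}\qquad
\prod_{k=0}^{\infty}\bigl(1-\mu_k[A]\bigr) \;\le\; \prod_{j=1}^{\infty}\Bigl(1-\frac{v[A]}{v[S]}\Bigr) \;=\; 0 .
\]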

4. Numerical Experiment

4.1. Experimental Settings

Nonlinear programming problems (NLPs) are often nonconvex, highly nonlinear, nondifferentiable, and discontinuous constrained global optimization problems, so solving NLPs with traditional deterministic algorithms is very difficult. Furthermore, constrained global optimization is NP-hard [13] and does not admit efficient deterministic solutions in practice.

In our experiments, the test cases of eight benchmark NLPs from [1] and [13], listed in Table 1, are applied to show how the proposed algorithm works; four further problems are excluded for lack of results in the referenced literature. The eight benchmark problems cover almost all categories of constraints (linear inequalities LI, nonlinear inequalities NI, and nonlinear equalities NE). With respect to constraint handling of the NLPs in the proposed algorithm, there are two effective methods [14]: the basic constraint handling (BCH) rule for common inequality constraints and the adaptive constraints relaxing (ACR) rule for equality constraints. The rules are described in detail in [15].
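As an illustration of the kind of comparison such rules impose, the sketch below implements a generic feasibility-first comparison (a feasible point beats an infeasible one; ties are broken by objective value or by total constraint violation). This is our own paraphrase for illustration only; the exact BCH and ACR rules are those specified in [15].

```python
def better(x, y, objective, constraints):
    """Generic feasibility-first comparison (illustrative; not the exact BCH/ACR rules).

    `constraints` is a list of functions g with g(z) <= 0 when satisfied;
    the objective is minimized.
    """
    violation = lambda z: sum(max(0.0, g(z)) for g in constraints)  # total violation
    vx, vy = violation(x), violation(y)
    if vx == 0.0 and vy == 0.0:
        return objective(x) < objective(y)   # both feasible: compare objective values
    if vx == 0.0 or vy == 0.0:
        return vx == 0.0                     # a feasible point beats an infeasible one
    return vx < vy                           # both infeasible: smaller violation wins
```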

Table 1 lists the parameters of each test case: the number of variables, the type of the objective function, the relative size of the feasible region in the search space (the ratio of the feasible region to the whole search space), the number of constraints of each category (LI, NE, and NI), and the number of active constraints at the optimum.

4.2. Result and Discussion

To evaluate the performance of SSCO, we compare SSCO with a standard gray-coded GA and with the traditional social cognitive optimization. The experimental settings for SSCO (library size, number of agents, learning cycles, and tournament width) are the same as those of SCO in [1]. Each problem is executed 50 times. We compute the best, worst, and mean solutions statistically over the runs of SSCO; the experimental results for the other algorithms are taken from [1].

Table 2 shows the comparison of the test results between SSCO and the other known algorithms. Opt. is the optimum value of each NLP. The results indicate that SSCO is superior to the other algorithms from the viewpoints of the best, worst, and mean solutions. Moreover, the best solutions obtained by SSCO are better than those of the other algorithms and are much closer to the true optimal solutions.

Figures 2, 3, 4, 5, 6, 7, 8, and 9 show the relative fitness value norm obtained by SCO and SSCO versus the generation number for the different benchmark NLPs, respectively. The SSCO, with its periodic partial reinitialization, shows a higher convergence velocity and a more sustainable evolutionary capability during the evolution process than the traditional SCO. Furthermore, each iteration of SSCO requires no more computation than that of traditional SCO.

The only difference between SSCO and SCO is that the SSCO agents include the periodic restart strategy, which guarantees convergence to the global optimum. This has no effect on the total amount of computation. However, because SSCO substitutes static individuals with newly generated random individuals to avoid premature convergence during the learning iterations, SSCO shows higher performance than SCO, PSO, and GA in all eight test cases.

From the above analysis, SSCO reaches near-optimal solutions for NLPs with higher efficiency. Consequently, the experimental results indicate that SSCO has better robustness, effectiveness, and stability than the SCO and GA results reported in the literature.

5. Conclusions

A stochastic social cognitive optimization (SSCO) algorithm with a periodic restart strategy is proposed in this paper for solving NLPs. The periodic restart of static individuals avoids premature convergence, improves the global search performance, and enables the algorithm to obtain stable optimal solutions. A convergence proof for the stochastic SCO is given based on the results of Solis and Wets. The experimental results indicate that the new algorithm has a good capability of finding optimal solutions.

Appendix

The appendix provides the descriptions of the eight benchmark test functions used in the experiments.

Conflict of Interests

The authors (Jia-ze Sun, Shu-yan Wang, and Hao Chen) declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (611721701, 61203311, 61105064, and 61050003), by the Scientific Research Program of Shaanxi Provincial Education Department (12JK0732 and 13JK1183), by the Natural Science Foundation of XUPT (ZL2013-26), and by special funding for Key Discipline Construction of General Institutions of Higher Learning from Shaanxi Province and special funding for course development for Xi’an University of Posts and Telecommunications.