Abstract

A new algorithm, the particle swarm optimization-proximal point (PSO-PPA) algorithm, is presented for solving the nonlinear complementarity problem by combining the particle swarm algorithm with the proximal point algorithm. The algorithm first transforms the nonlinear complementarity problem into an unconstrained optimization problem with a smooth objective using the maximum entropy function and then optimizes this problem with the proximal point algorithm as the outer algorithm and the particle swarm algorithm as the inner algorithm. The numerical results show that the algorithm has a fast convergence speed and good numerical stability, so it is an effective algorithm for solving nonlinear complementarity problems.

1. Introduction

The nonlinear complementarity problem is an important class of nonsmooth optimization problems and is widely used in mechanical, engineering, and economic fields. Research on algorithms for it has attracted great attention from scholars at home and abroad. Algorithms for solving such problems mainly include the Lemke algorithm, the homotopy method, the projection algorithm, the Newton algorithm, and the interior point algorithm [1–10]. These algorithms are mostly based on the gradient method and rely on the selection of an initial point, and selecting an appropriate initial point is a very difficult problem; therefore, in recent years, many scholars at home and abroad have been dedicated to bionic intelligent algorithms that depend on neither the initial point nor gradient information.

In 1995, Kennedy and Eberhart first proposed the particle swarm optimization (PSO) algorithm [11], modeled on the predatory behavior of bird flocks. Like the genetic algorithm, it is an optimization tool based on iteration; its advantages are that it is easy to understand, easy to implement, and requires few parameters to tune, and it was widely recognized by academic circles, which subsequently put forward several improved variants. At present, the algorithm has been successfully applied to function optimization, neural network training, fuzzy system control, and so forth. In 2008, Zhang and Zhou [12] proposed a hybrid algorithm for solving nonlinear minimax problems by combining the particle swarm optimization algorithm with the maximum entropy function method, and Zhang then extended the method to nonlinear L-1 norm minimization problems [13] and nonlinear complementarity problems [14], achieving good results. But these algorithms are essentially random algorithms depending on probability, where the entropy function only plays the role of converting the problem. Sun et al. [15] proposed a social cognitive optimization (SCO) algorithm based on the entropy function for solving nonlinear complementarity problems. Yamashita et al. [16] proposed a new technique that utilizes a sequence generated with the PPA algorithm in 2004. These algorithms cannot guarantee convergence to the global optimal point with one hundred percent certainty even if the entropy function is a convex function.

The proximal point algorithm is a globally convergent algorithm for solving convex optimization problems. Based on this, this paper develops a particle swarm optimization-proximal point algorithm for the discrete nonlinear complementarity problem in which every component function is convex, so that convergence to the global optimal point is guaranteed. First, the nonlinear complementarity problem is transformed into an unconstrained optimization problem with a smooth objective using the maximum entropy function; then we construct the particle swarm optimization-proximal point algorithm by combining the particle swarm optimization algorithm with the proximal point algorithm, effectively exploiting their respective advantages, and optimize the problem with the hybrid algorithm. The numerical results show that the algorithm is an effective new algorithm with a fast convergence rate and good numerical stability.

2. The Smoothing of the Nonlinear Complementarity Problem

Let $F : \mathbb{R}^n \to \mathbb{R}^n$ be continuously differentiable; the nonlinear complementarity problem NCP($F$) is to find an $x \in \mathbb{R}^n$ such that
$$x \ge 0, \qquad F(x) \ge 0, \qquad x^{T}F(x) = 0.$$
When $F(x) = Mx + q$ (where $M$ is an $n \times n$ matrix and $q$ is a constant vector), the NCP becomes the linear complementarity problem, that is, LCP($M, q$).

Definition 1 (see [2]). If $\phi : \mathbb{R}^2 \to \mathbb{R}$ satisfies
$$\phi(a, b) = 0 \Longleftrightarrow a \ge 0,\; b \ge 0,\; ab = 0,$$
then $\phi$ is an NCP function.

Common NCP functions are as follows:
(1) $\phi(a, b) = \sqrt{a^2 + b^2} - a - b$ — the Fischer-Burmeister function;
(2) $\phi_{\lambda}(a, b) = \lambda\big(\sqrt{a^2 + b^2} - a - b\big) - (1 - \lambda)\,a_{+}b_{+}$, $\lambda \in (0, 1)$ — the penalized Fischer-Burmeister function;
(3) $\psi(a, b) = \frac{1}{2}\big(\sqrt{a^2 + b^2} - a - b\big)^2$ — a differentiable NCP function.
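As a quick numerical check (our own illustration; the helper name phi_fb is not from the paper), the Fischer-Burmeister function vanishes exactly at complementary pairs and is nonzero otherwise:

```python
import math

def phi_fb(a, b):
    # Fischer-Burmeister NCP function: zero iff a >= 0, b >= 0, a*b = 0.
    return math.sqrt(a * a + b * b) - a - b

print(phi_fb(3.0, 0.0))   # 0.0: a > 0, b = 0 is a complementary pair
print(phi_fb(0.0, 5.0))   # 0.0: a = 0, b > 0 is a complementary pair
print(phi_fb(2.0, 1.0))   # about -0.76: both positive, complementarity violated
print(phi_fb(-1.0, 2.0))  # about 1.24: a < 0, feasibility violated
```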

Using the Fischer-Burmeister function, we can transform solving NCP($F$) into solving the system of nonlinear equations
$$\Phi(x) = \big(\phi(x_1, F_1(x)), \ldots, \phi(x_n, F_n(x))\big)^{T} = 0. \tag{3}$$
Equation (3) can in turn be transformed into the optimization problem
$$\min_{x \in \mathbb{R}^n}\; \max_{1 \le i \le n}\, \big|\phi\big(x_i, F_i(x)\big)\big|, \tag{4}$$
where each component $\phi(x_i, F_i(x))$ is a smooth function of the vector $x$. However, because of the max operator and the absolute value, problem (4) is a nondifferentiable optimization problem.

We can transform the nonlinear complementarity problem into a smooth optimization problem by using the maximum entropy function method as follows.

Definition 2 (see [9, 10]). Let $F_p(x)$ be the maximum entropy function of the component functions $f_i(x) = |\phi(x_i, F_i(x))|$ in (3); that is,
$$F_p(x) = \frac{1}{p}\ln\sum_{i=1}^{n}\exp\big(p f_i(x)\big), \qquad p > 0. \tag{5}$$

Theorem 3 (see [9, 10]). For any $x$, the function $F_p(x)$ decreases monotonically as the parameter $p$ increases and is bounded below by $F(x) = \max_{1 \le i \le n} f_i(x)$; that is,
$$F(x) \le F_p(x) \le F(x) + \frac{\ln n}{p}, \qquad \lim_{p \to \infty} F_p(x) = F(x). \tag{6}$$
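The two-sided bound in Theorem 3 follows from elementary log-sum-exp estimates; a short derivation (standard, sketched here for completeness) is:

```latex
F_p(x) = \frac{1}{p}\ln\sum_{i=1}^{n} e^{p f_i(x)}
       \;\ge\; \frac{1}{p}\ln e^{p F(x)} = F(x),
\qquad
F_p(x) \;\le\; \frac{1}{p}\ln\!\left(n\, e^{p F(x)}\right) = F(x) + \frac{\ln n}{p},
```

so $F_p(x) \to F(x)$ uniformly as $p \to \infty$.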

The initial ideas of the entropy function method originated from the paper published by Kreisselmeier and Steinhauser [17] in 1979. Engineering and technical personnel at home and abroad like the method because it makes it easy to write general-purpose software for many types of optimization problems, and under certain convexity conditions it can provide solutions with the required accuracy. Since the 1980s, the method has been widely used in structural optimization, engineering design, and other fields. In recent years, the method has yielded good results for constrained and unconstrained minimax problems, linear programming problems, and semi-infinite programming.

The above theorem shows that, for $p$ large enough, we can use the maximum entropy function $F_p(x)$ instead of the objective function $F(x)$; the previous nonsmooth problem then becomes an unconstrained optimization problem with a differentiable objective. As is well known, when $p$ is finite we obtain an approximate solution of the original problem, but an appropriately large $p$ guarantees high precision. A problem with constraints can be converted into an unconstrained one by the penalty function method.
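A minimal Python sketch of this smoothing step (the helper names and the linear test mapping are our own illustration, not the paper's code): the nonsmooth objective $\max_i |\phi(x_i, F_i(x))|$ is replaced by the maximum entropy function, computed with a numerically stable shifted log-sum-exp.

```python
import numpy as np

def phi_fb(a, b):
    # Fischer-Burmeister NCP function (vectorized over components).
    return np.sqrt(a**2 + b**2) - a - b

def entropy_objective(x, F, p=1e3):
    # Maximum entropy smoothing F_p(x) of max_i |phi(x_i, F_i(x))|.
    # Shifting the log-sum-exp by the max term avoids overflow;
    # mathematically nothing changes.
    f = np.abs(phi_fb(x, F(x)))          # component residuals f_i(x)
    m = f.max()
    return m + np.log(np.sum(np.exp(p * (f - m)))) / p

# Hypothetical linear test mapping F(x) = M x + q (an LCP instance).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -3.0])
F = lambda x: M @ x + q

x_star = np.array([1.0, 1.0])            # F(x*) = 0 and x* >= 0: a solution
print(entropy_objective(x_star, F))      # close to ln(n)/p, i.e., almost 0
```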

3. The Idea of the PSO Algorithm

The PSO algorithm [11] is an evolutionary computation technique that originated from the preying behavior of bird flocks. Like the genetic algorithm, it is an optimization tool based on iteration. The system is initialized with a set of random solutions and searches for the optimal value iteratively; there is no crossover or mutation as in the genetic algorithm, but the particles search the solution space by following the optimal particle, and this optimization scheme has been widely recognized by academia. In a $D$-dimensional target search space, $m$ particles form a group, where the $i$th particle in the $k$th iteration is expressed as a vector $X_i^{k} = (x_{i1}^{k}, x_{i2}^{k}, \ldots, x_{iD}^{k})$, $i = 1, 2, \ldots, m$. The position of each particle is a potential solution; the flying speed is correspondingly the $D$-dimensional vector $V_i^{k} = (v_{i1}^{k}, \ldots, v_{iD}^{k})$. In each iteration, $P_i = (p_{i1}, \ldots, p_{iD})$ denotes the optimal position found by the $i$th particle itself so far, and $P_g = (p_{g1}, \ldots, p_{gD})$ denotes the optimal position found by the whole particle swarm so far. In the $(k+1)$th iteration, the speed and position of the $i$th particle are updated by
$$v_{id}^{k+1} = v_{id}^{k} + c_1 r_1 \big(p_{id} - x_{id}^{k}\big) + c_2 r_2 \big(p_{gd} - x_{id}^{k}\big), \tag{8}$$
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}, \tag{9}$$
where $d = 1, 2, \ldots, D$, $c_1$ and $c_2$ are two learning factors, and $r_1$ and $r_2$ are pseudorandom numbers distributed uniformly on $[0, 1]$. The velocity satisfies $v_{id} \in [-v_{\max}, v_{\max}]$, where $v_{\max}$ is a constant set by the user.

Shi and Eberhart presented the inertia-weight PSO [18]:
$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \big(p_{id} - x_{id}^{k}\big) + c_2 r_2 \big(p_{gd} - x_{id}^{k}\big), \tag{10}$$
where $\omega$ in (10) is the inertia weight, controlling the influence of the previous velocity on the current one. When $\omega$ is larger, the influence of the previous velocity is greater and the global search ability of the algorithm is stronger; when $\omega$ is smaller, the influence of the previous velocity is smaller and the local search ability of the algorithm is stronger. The swarm can jump out of local minima by adjusting the size of $\omega$.

The update rule (9) was further improved in [19].

The termination condition, chosen according to the specific problem, is reaching a predetermined minimum fitness threshold, a maximum number of iterations, or a sufficiently good optimal position found by the particle swarm.
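A compact sketch of the inertia-weight PSO described by (9) and (10) (a generic implementation under illustrative parameter choices, not the authors' code):

```python
import numpy as np

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        c1=2.0, c2=2.0, w_max=1.0, w_min=0.4, seed=0):
    # Inertia-weight PSO minimizing f over the box [lo, hi]^dim.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best position
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / iters       # linearly decreasing inertia
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # eq. (10)
        x = np.clip(x + v, lo, hi)                              # eq. (9)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimize the sphere function.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=2)
print(best_x, best_f)
```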

4. The Idea of the PPA Algorithm

In 1970, Martinet [20] proposed the proximal point algorithm, which is a globally convergent algorithm for convex optimization problems; Rockafellar [21] then studied and extended the algorithm.

Consider the following optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x), \tag{12}$$
where $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ is a closed proper convex function.

For solving (12), the proximal point algorithm (PPA) produces the iterative sequence
$$x^{k+1} = \arg\min_{x}\Big\{ f(x) + \frac{1}{2\lambda_k}\,\|x - x^{k}\|^{2} \Big\}, \tag{13}$$
where $\|x - x^{k}\|^{2}$ is the quadratic distance function used by the early proximal point algorithm. In recent years, many scholars have put forward generalized distance functions that preserve convexity, such as the Bregman function and entropy-like distance functions.

The proximal point algorithm is a classical deterministic algorithm. If the optimization problem is a convex optimization problem, the iterative sequence produced by the algorithm converges to the global optimal point. This paper assumes that each component function of the nonlinear complementarity problem is convex; then the corresponding entropy function is a convex function, so we can ensure that the iterative sequence converges to the global optimum with the proximal point algorithm as the outer algorithm.
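As a concrete toy illustration of iteration (13) (our own example, not from the paper), for $f(x) = |x|$ the proximal step has the closed soft-threshold form, and the PPA iterates reach the global minimizer $0$ from any starting point:

```python
def ppa_abs(x0, lam=0.5, iters=10):
    # PPA iterates x_{k+1} = argmin_x { |x| + (1/(2*lam)) * (x - x_k)^2 }.
    # For f(x) = |x| this proximal step is the soft-threshold operator.
    x = x0
    for _ in range(iters):
        x = max(abs(x) - lam, 0.0) * (1 if x > 0 else -1)   # soft threshold
        print(x)
    return x

ppa_abs(3.2)   # decreases by lam each step until it reaches the minimizer 0
```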

5. Particle Swarm Optimization-Proximal Point Algorithm for Nonlinear Complementarity Problems

Combining the particle swarm optimization algorithm with the proximal point algorithm, the particle swarm optimization-proximal point algorithm for nonlinear complementarity problems is as follows.

5.1. The Steps of the Outer Algorithm (See Figure 1)

(1) Take an initial point $x^0$, the parameters $\lambda_k > 0$ and the required precision $\varepsilon > 0$; let $k = 0$.
(2) Apply the PSO algorithm to solve the proximal point subproblem (13) with $f = F_p$, obtaining $x^{k+1}$.
(3) Check whether the required precision has been reached; if it has, the algorithm stops; otherwise, set $k = k + 1$ and go to (2).

5.2. The Inner Particle Swarm Algorithm

(1) Initialize the particle swarm: set the initial position and velocity of each particle.
(2) Calculate the fitness value of each particle.
(3) Compare the fitness value of each particle with that of the best position it has experienced; if better, take the current position as the particle's best position.
(4) Compare the fitness value of each particle with that of the best global position experienced; if better, take the current position as the global best position.
(5) Update the position and speed of each particle by (9) and (10).
(6) If the termination condition is satisfied, output the solution; otherwise, return to (2).
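Putting the pieces together, a minimal sketch of the PSO-PPA loop, reusing the hypothetical entropy_objective and pso helpers sketched above (all parameter values are illustrative, not the paper's settings):

```python
import numpy as np

def pso_ppa(F, dim, x0=None, lam=1.0, p=1e3, outer_iters=5, tol=1e-4):
    # Outer PPA loop (steps (1)-(3)) with PSO as the inner subproblem solver.
    xk = np.zeros(dim) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(outer_iters):
        # Proximal subproblem: smoothed NCP residual plus quadratic term.
        def sub(x):
            x = np.asarray(x, dtype=float)
            return entropy_objective(x, F, p) + np.sum((x - xk) ** 2) / (2.0 * lam)
        x_next, _ = pso(sub, dim, seed=k)        # inner PSO solve (step (2))
        if np.linalg.norm(x_next - xk) < tol:    # precision check (step (3))
            return x_next
        xk = x_next
    return xk

# Solve the hypothetical LCP instance from the smoothing sketch above.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -3.0])
x = pso_ppa(lambda z: M @ z + q, dim=2)
print(x)   # should approach the known solution (1, 1)
```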

6. Numerical Results

We take a nonlinear complementarity problem with four different functions from [2, 3, 5, 7] to verify the validity of the new algorithm; the comparative results with those of [14–16] are as follows.

Example (see [2, 3, 5, 7]). Consider the test problem given in [2, 3, 5, 7], whose solution is known. Here the smoothing parameter $p$, the two learning factors $c_1$ and $c_2$, and the search range are fixed; the inertia weight $\omega$ decreases from 1.0 to 0.4; the group size is 20; and the maximum number of evolutions is 100.

We programmed the algorithm in VC++ 6.0 on Windows XP with a Pentium 4 2.93 GHz CPU and 512 MB of memory; the results are given in Table 1, where the inner algorithm was run 10 times, the worst solution was taken as the initial iteration point of the next step, and the required accuracy is 0.0001.

Data analysis: the algorithms in [2, 3, 5, 7] are all deterministic algorithms; therefore, there is no comparison with "the worst solution" and "success rate of searching" in Table 2. We calculate the maximum error componentwise against the known solution, taking the worst solution over the ten runs as the calculation error. From Table 1, we see that the method needed 5 outer iterations with at most 100 iterations in the inner loop. The method of [14] needs 5000 generations of evolution, so the computing speed of our algorithm is much faster.

For nonmonotone NCP problems, the pure PPA algorithm may fail to show an advantage, as demonstrated in [16]; therefore, we first find an approximate solution with PSO and then solve iteratively with the PSO-PPA algorithm. Compared with the pure PPA algorithm, the PSO-PPA algorithm does not need an initial point, and its optimization speed is faster. The results are given in Table 3.

7. Conclusion

In this paper, we first use the maximum entropy function method for smoothing and then, combining the particle swarm algorithm with the proximal point algorithm, propose a new efficient algorithm for solving nonlinear complementarity problems that requires neither an initial point nor derivative information. This algorithm not only provides a new method for nonlinear complementarity problems but also expands the application range of the particle swarm algorithm. The experimental data show that the algorithm has a fast convergence rate and good numerical stability and is an effective algorithm for nonlinear complementarity problems.

Acknowledgments

This work was supported by the Scientific Research Foundation of the Education Department of Shaanxi Province of China (Grants nos. 11JK0493 and 12JK0887) and the Natural Science Basic Research Plan of Shaanxi Province of China (Grants nos. S2014JC9168 and S2014JC9190).