Abstract

A one-layer recurrent neural network is developed to solve pseudoconvex optimization problems with box constraints. Compared with the existing neural networks for solving pseudoconvex optimization, the proposed neural network has a wider domain for implementation. Based on Lyapunov stability theory, the proposed neural network is proved to be stable in the sense of Lyapunov. By applying Clarke's nonsmooth analysis technique, finite-time convergence of the state to the feasible region defined by the constraints is also established. Illustrative examples further confirm the correctness of the theoretical results.

1. Introduction

It is well known that nonlinear optimization problems arise in a broad variety of scientific and engineering applications, including optimal control, structural design, image and signal processing, and robot control. Many nonlinear programming problems have a time-varying nature and have to be solved in real time. One promising approach to solving nonlinear programming problems in real time is to employ recurrent neural networks based on circuit implementation.

In the past two decades, neural networks for optimization have been studied extensively and many good results have been obtained in the literature; see [1–19] and references therein. In particular, Liang and Wang developed a recurrent neural network for solving nonlinear optimization with a continuously differentiable objective function and bound constraints in [4]. A projection neural network was proposed for solving nondifferentiable nonlinear programming problems by Xia et al. in [20]. In [9, 19], Xue and Bian developed subgradient-based neural networks for solving nonsmooth optimization problems with convex or nonconvex objective functions.

It should be noticed that many nonlinear programming problems can be formulated as nonconvex optimization problems, and among nonconvex programs, pseudoconvex programs, as a special case, are more prevalent than other nonconvex programs. Pseudoconvex optimization problems have many applications in practice, such as fractional programming, computer vision, and production planning. Very recently, Liu et al. presented a one-layer recurrent neural network for solving pseudoconvex optimization subject to linear equality constraints in [1]; Hu and Wang proposed a recurrent neural network for solving pseudoconvex variational inequalities in [10]. Qin et al. proposed a new one-layer recurrent neural network for nonsmooth pseudoconvex optimization in [21].

Motivated by the works above, our objective in this paper is to develop a one-layer recurrent neural network for solving pseudoconvex optimization problems subject to a box set. The proposed network model is an improvement of the neural network model presented in [10]. To the best of our knowledge, few works treat the pseudoconvex optimization problem with a box set constraint.

For convenience, some notations are introduced as follows. $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{n\times m}$ denotes the set of all $n\times m$ real matrices. For any matrix $A$, $A>0$ ($A<0$) means that $A$ is positive definite (negative definite). $A^{-1}$ denotes the inverse of $A$. $A^{T}$ denotes the transpose of $A$. $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$, respectively. Given the vectors $x=(x_1,\ldots,x_n)^T$ and $y=(y_1,\ldots,y_n)^T$, $\langle x,y\rangle = x^{T}y = \sum_{i=1}^{n} x_i y_i$. $\|A\|$ denotes the 2-norm of $A$; that is, $\|A\| = \sqrt{\rho(A^{T}A)}$, where $\rho(\cdot)$ denotes the spectral radius of a matrix. $\dot{x}(t)$ denotes the derivative of $x(t)$.

Given a set $S \subset \mathbb{R}^n$, $\overline{\mathrm{co}}\,(S)$ denotes the closure of the convex hull of $S$.

Let $f:\mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz continuous function. Clarke's generalized gradient of $f$ at $x$ is defined by
$$\partial f(x) = \overline{\mathrm{co}}\Big\{\lim_{k\to\infty}\nabla f(x_k) : x_k \to x,\ x_k \notin \Omega_f,\ x_k \notin N\Big\},$$
where $\Omega_f$ is the set of Lebesgue measure zero on which $\nabla f$ does not exist, and $N$ is an arbitrary set with measure zero. A set-valued map $F$ is said to have a closed (convex, compact) image if, for each $x$, $F(x)$ is closed (convex, compact).

The remainder of this paper is organized as follows. In Section 2, the related preliminary knowledge is given, and the problem formulation and the neural network model are described. In Section 3, the stability in the sense of Lyapunov and the finite-time convergence of the proposed neural network are proved. In Section 4, illustrative examples are given to show the effectiveness and the performance of the proposed neural network. Some conclusions are drawn in Section 5.

2. Model Description and Preliminaries

In this section, a one-layer recurrent neural network model is developed to solve pseudoconvex optimization with box constraints. Some definitions and properties concerning the set-valued map and nonsmooth analysis are also introduced.

Definition 1 (set-valued map). Suppose that to each point $x$ of a set $E \subset \mathbb{R}^n$ there corresponds a nonempty set $F(x) \subset \mathbb{R}^n$. Then $F$ is said to be a set-valued map from $E$ to $\mathbb{R}^n$.

Definition 2 (locally Lipschitz function). A function $f : \mathbb{R}^n \to \mathbb{R}$ is called Lipschitz near $x \in \mathbb{R}^n$ if and only if there exist $\delta > 0$ and $l > 0$ such that, for any $x', x'' \in \mathbb{R}^n$ satisfying $\|x' - x\| < \delta$ and $\|x'' - x\| < \delta$, we have $|f(x') - f(x'')| \le l\,\|x' - x''\|$. The function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be locally Lipschitz in $\mathbb{R}^n$ if it is Lipschitz near any point $x \in \mathbb{R}^n$.

Definition 3 (regularity). A function $f : \mathbb{R}^n \to \mathbb{R}$, which is locally Lipschitz near $x$, is said to be regular at $x$ if, for any direction $v$, there exists the one-sided directional derivative given by $f'(x; v) = \lim_{t \to 0^{+}} \big(f(x + t v) - f(x)\big)/t$, and we have $f'(x; v) = f^{\circ}(x; v)$, where $f^{\circ}(x; v) = \limsup_{y \to x,\, t \to 0^{+}} \big(f(y + t v) - f(y)\big)/t$ is Clarke's generalized directional derivative. The function $f$ is said to be regular in $\mathbb{R}^n$ if it is regular at any $x \in \mathbb{R}^n$.

Definition 4. A regular function $f$ is said to be pseudoconvex on a set $\Omega$ if, for all $x, y \in \Omega$, we have
$$\exists\, \xi \in \partial f(x) : \langle \xi,\, y - x\rangle \ge 0 \ \Longrightarrow\ f(y) \ge f(x).$$

Definition 5. A (set-valued) map $F$ is said to be pseudomonotone on a set $\Omega$ if, for all $x, y \in \Omega$, we have
$$\exists\, \xi \in F(x) : \langle \xi,\, y - x\rangle \ge 0 \ \Longrightarrow\ \langle \eta,\, y - x\rangle \ge 0, \quad \forall\, \eta \in F(y).$$
Consider the following optimization problem with a box set constraint: where , and is nonsingular.
After a substitution of variables, problem (4) can be transformed into the following problem: Let where is defined as Obviously, and .
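Since the displayed formulas for the problem and its reformulation did not survive in the text above, the following is only a plausible reconstruction of their general shape; the symbols $f$, $D$, $c$, $d$, and $y$ are assumed names rather than the authors' original notation:
$$\min_{x\in\mathbb{R}^n} f(x) \ \ \text{s.t.}\ \ c \le Dx \le d \ (D \text{ nonsingular}) \qquad\Longrightarrow\qquad \min_{y\in\mathbb{R}^n} f(D^{-1}y) \ \ \text{s.t.}\ \ y \in \Omega := \{\, y : c \le y \le d \,\},$$
where the second (box-constrained) problem is obtained from the first by the substitution $y = Dx$.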
Throughout this paper, the following assumptions on the optimization problem (4) are made:
(A1) the objective function of problem (4) is pseudoconvex, regular, and locally Lipschitz continuous;
(A2) is bounded; that is, , where is a constant.
In the following, we develop a one-layer recurrent neural network for solving problem (4). The dynamic equation of the proposed neural network model is described by the following differential inclusion system: where is a nonnegative constant, , and is a discontinuous activation function with components defined as . The architecture of the proposed neural network model (9) is depicted in Figure 1.
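Although the exact form of system (9) cannot be recovered from the text above, the following minimal sketch (in Python, with forward Euler integration) illustrates how a differential inclusion of this general type — a negative generalized-gradient term plus a discontinuous penalty term driving the state toward the box — can be simulated numerically. All names here (sigma, lo, hi, grad_f, the activation g) are illustrative assumptions, not the authors' notation.

import numpy as np

def g(x, lo, hi):
    # Discontinuous penalty activation (an assumed, common choice):
    # -1 below the lower bound, +1 above the upper bound, 0 inside the box.
    out = np.zeros_like(x)
    out[x < lo] = -1.0
    out[x > hi] = 1.0
    return out

def simulate(grad_f, x0, lo, hi, sigma, dt=1e-3, steps=20000):
    # Forward-Euler integration of dx/dt = -grad_f(x) - sigma * g(x),
    # an illustrative stand-in for the dynamics of system (9).
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (-grad_f(x) - sigma * g(x, lo, hi))
        traj.append(x.copy())
    return np.array(traj)

For a differentiable objective, grad_f can simply return the gradient; for a nonsmooth objective, any measurable selection of the Clarke generalized gradient can be substituted.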

Definition 6. A point is said to be an equilibrium point of the differential inclusion system (9) if ; that is, there exist such that , where .

Definition 7. A function is said to be a solution of the system (9) with initial condition if it is absolutely continuous on and, for almost all , . Equivalently, there exist measurable functions such that

Definition 8. Suppose that $\Omega \subset \mathbb{R}^n$ is a nonempty closed convex set. The normal cone to the set $\Omega$ at $x \in \Omega$ is defined as $N_{\Omega}(x) = \{\, v \in \mathbb{R}^n : \langle v,\, y - x\rangle \le 0,\ \forall\, y \in \Omega \,\}$.

Lemma 9 (see [22]). If $f_i$ ($i = 1, 2$) is regular at $x$, then $\partial (f_1 + f_2)(x) = \partial f_1(x) + \partial f_2(x)$.

Lemma 10 (see [22]). If $V : \mathbb{R}^n \to \mathbb{R}$ is a regular function at $x(t)$ and $x(t)$ is differentiable at $t$ and Lipschitz near $t$, then $\frac{d}{dt} V(x(t)) = \langle \xi,\, \dot{x}(t)\rangle$, for all $\xi \in \partial V(x(t))$.

Lemma 11 (see [22]). If $\Omega_1, \Omega_2 \subset \mathbb{R}^n$ are closed convex sets and satisfy $\operatorname{int}(\Omega_1) \cap \Omega_2 \neq \emptyset$, then for any $x \in \Omega_1 \cap \Omega_2$, $N_{\Omega_1 \cap \Omega_2}(x) = N_{\Omega_1}(x) + N_{\Omega_2}(x)$.

Lemma 12 (see [22]). If $f$ is locally Lipschitz near $x^{*}$ and attains a minimum over $\Omega$ at $x^{*}$, then $0 \in \partial f(x^{*}) + N_{\Omega}(x^{*})$.

Set . Let ; then there exists a constant such that , where int(·) denotes the interior of a set. It is easy to verify the following lemma.

Lemma 13. For any and , , where , and is the $i$th element of .

3. Main Results

In this section, the main results concerned with the convergence and optimality conditions of the proposed neural network are addressed.

Theorem 14. Suppose that assumptions (A1) and (A2) hold. Let . If , then the solution of the network system (9) with initial condition satisfies .

Proof. Set By Lemma 10 and (15), evaluating the derivative of along the trajectory of the system (9) gives If , it follows directly that . If , according to Lemma 13, one gets that . Thus, we have If , then ; this means that . Otherwise, the state leaves at some time , and when , we have . This implies that , which is a contradiction.
As a result, if , then for any , the state of the network system (9) with initial condition satisfies . This completes the proof.

Theorem 15. Suppose that assumptions (A1) and (A2) hold. If , then the solution of the neural network system (9) with initial condition converges to the feasible region in finite time , where , and stays there thereafter.

Proof. According to the definition of , is a convex function on . By Lemma 10, it follows that Noting that , we can obtain by (19) that, for all , , and such that Since and , at least one of the components of is or . So . Noting that , we have Let . If , then , and Integrating (22) from to , we obtain Let . By (23), ; that is, or . This shows that the state trajectory of neural network (9) with initial condition reaches in finite time .
Next, we prove that when , the trajectory stays in after reaching . If this is not true, then there exists such that the trajectory leaves at , and there exists such that for , .
By integrating (22) from to , it follows that Due to , . By the definition of , for any , which contradicts the result above. The proof is completed.

Theorem 16. Suppose that assumptions (A1) and (A2) hold. If , then the equilibrium point of the neural network system (9) is an optimal solution of the problem (4) and vice versa.

Proof. Let be an equilibrium point of the neural network system (9); then there exist such that , where . By Theorem 15, ; hence, . By (25), . We can obtain the following projection formulation: where , with , , defined as By the well-known projection theorem [17], (26) is equivalent to the following variational inequality: Since is pseudoconvex, is pseudoconvex on . By (28), we can obtain that . This shows that is a minimum point of over .
Next, we prove the converse. Let be an optimal solution of the problem; then . Since is a minimum point of over the feasible region , according to Lemma 12, it follows that From (29), it follows that there exist , , and . Noting that , and at least one or , there exist and such that , and .
In the following, we prove that . We claim that . If not, then . Since , we have Thus . By the condition of Theorem 16, . Hence, , which contradicts . This implies that ; that is, , which means that is an equilibrium point of the neural network system (9). This completes the proof.
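For reference, the componentwise projection onto a box used in the projection formulation (26)–(28) above has the following standard form; the bound symbols $c_i$ and $d_i$ are assumed names for the lower and upper box bounds, since the original notation is not recoverable:
$$\big(P_{\Omega}(x)\big)_i = \begin{cases} c_i, & x_i < c_i,\\ x_i, & c_i \le x_i \le d_i,\\ d_i, & x_i > d_i, \end{cases} \qquad \langle x - P_{\Omega}(x),\, y - P_{\Omega}(x)\rangle \le 0 \quad \forall\, y \in \Omega,$$
where the variational inequality on the right is the projection theorem invoked in the proof.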

Theorem 17. Suppose that assumptions (A1) and (A2) hold. If , then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.

Proof. Let be an equilibrium point of the neural network system (9); that is, By Theorem 16, is an optimal solution of the problem (4); that is, . By Theorem 15, the trajectory with initial condition converges to the feasible region in finite time and remains in thereafter; that is, for all . Let Since is a minimum point of on , we get that , for all .
Consider the following Lyapunov function: Obviously, from (33), , and Evaluating the derivative of along the trajectory of the system (9) gives Since , by (34), we can set . Hence, For any , there exists such that . Since is pseudoconvex on , is pseudomonotone on . From the proof of Theorem 16, for any , . By the definition of , . Hence, . This implies that Equation (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.

4. Numerical Examples

In this section, two examples will be given to illustrate the effectiveness of the proposed approach for solving the pseudoconvex optimization problem.

Example 1. Consider the quadratic fractional optimization problem: where is an matrix, , and . Here, we choose ,

It is easily verified that is symmetric and positive definite, and consequently is pseudoconvex on . The proposed neural network (9) is capable of solving this problem. Obviously, the neural network (9) associated with (38) can be described as where

Let , ; then we have . Moreover, the restricted region with . An upper bound of is estimated as . Then the design parameter is chosen as . Let in the simulation.
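Because the specific matrix and vectors of Example 1 are not recoverable from the text, the sketch below uses small hypothetical placeholder values solely to show how a quadratic fractional objective of the common form $(x^{T}Qx + a^{T}x + a_0)/(c^{T}x + c_0)$ and its gradient could be wired into the Euler simulation sketched in Section 2; Q_EX, a_EX, c_EX, the scalars, and the box bounds are illustrative assumptions, not the paper's data.

import numpy as np

# Hypothetical placeholder data (the paper's actual values are not recoverable).
Q_EX = np.array([[3.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
a_EX = np.array([1.0, -1.0])
a0_EX = 2.0
c_EX = np.array([1.0, 1.0])
c0_EX = 4.0                                  # keeps the denominator positive on the box
LO, HI = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def f_frac(x):
    # Quadratic fractional objective (x'Qx + a'x + a0) / (c'x + c0).
    return (x @ Q_EX @ x + a_EX @ x + a0_EX) / (c_EX @ x + c0_EX)

def grad_f_frac(x):
    # Gradient of the fractional objective via the quotient rule.
    num = x @ Q_EX @ x + a_EX @ x + a0_EX
    den = c_EX @ x + c0_EX
    return ((2.0 * Q_EX @ x + a_EX) * den - c_EX * num) / den ** 2

# Reusing the Euler sketch `simulate` from Section 2 (assumed available):
# traj = simulate(grad_f_frac, x0=np.array([3.0, -2.5]), lo=LO, hi=HI, sigma=5.0)

With these placeholder values the denominator $c^{T}x + c_0$ stays positive on the box, so the resulting trajectory can be compared qualitatively with the finite-time convergence behavior reported in Figures 2–4.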

We have simulated the dynamical behavior of the neural network using mathematical software when . Figures 2, 3, and 4 display the state trajectories of this neural network with different initial values, which show that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectory is stable in the sense of Lyapunov.

Example 2. Consider the following pseudoconvex optimization: where

In this problem, the objective function is pseudoconvex. Thus the proposed neural network is suitable for solving the problem in this case. The neural network (9) associated with (42) can be described as where

Let , ; then we have . Moreover, the restricted region with . An upper bound of is estimated as . Then the design parameter is chosen as . Let in the simulation.
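Since the objective function of Example 2 is likewise not recoverable from the text, a generic numerical sanity check of the pseudoconvexity property in Definition 4 is sketched below: it samples point pairs in the box and tests the implication $\nabla f(x)^{T}(y - x) \ge 0 \Rightarrow f(y) \ge f(x)$. The helper name check_pseudoconvex and the Monte-Carlo approach are assumptions for illustration only; such a test can falsify, but never prove, pseudoconvexity.

import numpy as np

def check_pseudoconvex(f, grad_f, lo, hi, trials=10000, seed=0):
    # Monte-Carlo test of Definition 4 on the box [lo, hi]:
    # whenever grad_f(x)'(y - x) >= 0, we must have f(y) >= f(x).
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for _ in range(trials):
        x = lo + (hi - lo) * rng.random(lo.shape)
        y = lo + (hi - lo) * rng.random(lo.shape)
        if grad_f(x) @ (y - x) >= 0.0 and f(y) < f(x) - 1e-10:
            return False   # counterexample found: not pseudoconvex
    return True            # no violation observed (not a proof)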

Figures 5 and 6 display the state trajectories of this neural network with different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well. This is in accordance with the conclusion of Theorem 15. It can be verified that the trajectory is stable in the sense of Lyapunov.

5. Conclusion

In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network model has been described by a differential inclusion system. The constructed recurrent neural network has been proved to be stable in the sense of Lyapunov, and conditions ensuring finite-time convergence of the state to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems arising in engineering applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of Hebei Province of China (A2011203103) and the Hebei Province Education Foundation of China (2009157).