Mathematical Problems in Engineering
Volume 2014 (2014), Article ID 283092, 8 pages
http://dx.doi.org/10.1155/2014/283092
Research Article

A One-Layer Recurrent Neural Network for Solving Pseudoconvex Optimization with Box Set Constraints

Department of Applied Mathematics, Yanshan University, Qinhuangdao 066001, China

Received 5 December 2013; Accepted 18 January 2014; Published 27 February 2014

Academic Editor: Wei Bian

Copyright © 2014 Huaiqin Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A one-layer recurrent neural network is developed to solve pseudoconvex optimization problems with box constraints. Compared with existing neural networks for solving pseudoconvex optimization, the proposed neural network has a wider domain of implementation. Based on Lyapunov stability theory, the proposed neural network is proved to be stable in the sense of Lyapunov. By applying Clarke’s nonsmooth analysis technique, finite-time convergence of the state to the feasible region defined by the constraint conditions is also established. Illustrative examples further confirm the theoretical results.

1. Introduction

It is well known that nonlinear optimization problems arise in a broad variety of scientific and engineering applications, including optimal control, structural design, image and signal processing, and robot control. Many nonlinear programming problems have a time-varying nature and must be solved in real time. One promising approach to solving nonlinear programming problems in real time is to employ recurrent neural networks based on circuit implementation.

In the past two decades, neural networks for optimization have been studied extensively, and many good results have been obtained in the literature; see [1–19] and the references therein. In particular, Liang and Wang developed a recurrent neural network for solving nonlinear optimization with a continuously differentiable objective function and bound constraints in [4]. A projection neural network was proposed for solving nondifferentiable nonlinear programming problems by Xia et al. in [20]. In [9, 19], Xue and Bian developed subgradient-based neural networks for solving nonsmooth convex optimization problems.

It should be noticed that many nonlinear programming problems can be formulated as nonconvex optimization problems and that, among nonconvex programs, pseudoconvex programs, as a special case, are more prevalent than other classes of nonconvex programs. Pseudoconvex optimization problems have many applications in practice, such as fractional programming, computer vision, and production planning. Very recently, Liu et al. presented a one-layer recurrent neural network for solving pseudoconvex optimization subject to linear equality constraints in [1]; Hu and Wang proposed a recurrent neural network for solving pseudoconvex variational inequalities in [10]. Qin et al. proposed a new one-layer recurrent neural network for nonsmooth pseudoconvex optimization in [21].

Motivated by the works above, our objective in this paper is to develop a one-layer recurrent neural network for solving pseudoconvex optimization problems subject to box set constraints. The proposed network model is an improvement of the neural network model presented in [10]. To the best of our knowledge, there are few works treating the pseudoconvex optimization problem with box set constraints.

For convenience, some notations are introduced as follows. ℝ denotes the set of real numbers, ℝⁿ denotes the n-dimensional Euclidean space, and ℝ^{n×n} denotes the set of all n × n real matrices. For any matrix A ∈ ℝ^{n×n}, A > 0 (A < 0) means that A is positive definite (negative definite). A⁻¹ denotes the inverse of A. Aᵀ denotes the transpose of A. λ_max(A) and λ_min(A) denote the maximum and minimum eigenvalues of A, respectively. Given vectors x = (x₁, …, xₙ)ᵀ and y = (y₁, …, yₙ)ᵀ, ⟨x, y⟩ = Σᵢ xᵢyᵢ. ‖x‖ denotes the 2-norm of x; that is, ‖x‖ = √⟨x, x⟩. ‖A‖ = √(ρ(AᵀA)), where ρ(AᵀA) denotes the spectral radius of AᵀA. ẋ denotes the derivative of x.

Given a set S ⊆ ℝⁿ, cl co(S) denotes the closure of the convex hull of S.

Let f : ℝⁿ → ℝ be a locally Lipschitz continuous function. Clarke’s generalized gradient of f at x is defined by ∂f(x) = co{lim_{k→∞} ∇f(xₖ) : xₖ → x, xₖ ∉ Ω_f, xₖ ∉ N}, where Ω_f ⊂ ℝⁿ is the set of Lebesgue measure zero on which the gradient ∇f does not exist, and N ⊂ ℝⁿ is an arbitrary set with measure zero. The set-valued map x ↦ ∂f(x) is said to have a closed (convex, compact) image if, for each x, ∂f(x) is closed (convex, compact).
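As a one-dimensional illustration of this definition (our own example, not from the paper), consider f(x) = |x|: the ordinary gradient exists everywhere except at 0, the limits of ∇f along sequences approaching 0 are −1 and 1, and Clarke’s generalized gradient ∂f(0) is their convex hull, the interval [−1, 1].

```python
import numpy as np

def grad_abs(x):
    # Ordinary gradient of f(x) = |x|, defined wherever x != 0.
    return np.sign(x)

# Gradients along sequences approaching the nonsmooth point 0 from each side:
left_limit = grad_abs(-1e-9)    # -1.0
right_limit = grad_abs(1e-9)    # +1.0

# Clarke's generalized gradient at 0 is the convex hull of these limits,
# i.e. the interval [-1, 1], represented here by its endpoints.
clarke_at_zero = (float(min(left_limit, right_limit)),
                  float(max(left_limit, right_limit)))
print(clarke_at_zero)  # (-1.0, 1.0)
```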

The remainder of this paper is organized as follows. In Section 2, the related preliminary knowledge is given, and the problem formulation and the neural network model are described. In Section 3, the stability in the sense of Lyapunov and the finite-time convergence of the proposed neural network are proved. In Section 4, illustrative examples are given to show the effectiveness and performance of the proposed neural network. Some conclusions are drawn in Section 5.

2. Model Description and Preliminaries

In this section, a one-layer recurrent neural network model is developed to solve pseudoconvex optimization with box constraints. Some definitions and properties concerning the set-valued map and nonsmooth analysis are also introduced.

Definition 1 (set-valued map). Suppose that to each point x of a set E ⊆ ℝⁿ there corresponds a nonempty set F(x) ⊆ ℝⁿ. Then F is said to be a set-valued map from E to ℝⁿ.

Definition 2 (locally Lipschitz function). A function f : ℝⁿ → ℝ is called Lipschitz near x ∈ ℝⁿ if and only if there exist k > 0 and ε > 0 such that |f(x₁) − f(x₂)| ≤ k‖x₁ − x₂‖ for any x₁, x₂ ∈ B(x, ε), where B(x, ε) = {y ∈ ℝⁿ : ‖y − x‖ < ε}. The function f : ℝⁿ → ℝ is said to be locally Lipschitz in ℝⁿ if it is Lipschitz near any point x ∈ ℝⁿ.

Definition 3 (regularity). A function f : ℝⁿ → ℝ, which is locally Lipschitz near x, is said to be regular at x if, for any direction v ∈ ℝⁿ, there exists the one-sided directional derivative given by f′(x; v) = lim_{t→0⁺} (f(x + tv) − f(x))/t, and we have f′(x; v) = f°(x; v), where f°(x; v) = limsup_{y→x, t→0⁺} (f(y + tv) − f(y))/t is Clarke’s generalized directional derivative. The function f is said to be regular in ℝⁿ if it is regular at any x ∈ ℝⁿ.

Definition 4. A regular function f is said to be pseudoconvex on a set C ⊆ ℝⁿ if, for all x, y ∈ C, we have: if there exists ξ ∈ ∂f(x) such that ⟨ξ, y − x⟩ ≥ 0, then f(y) ≥ f(x).
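Pseudoconvexity can be checked numerically on sampled pairs of points. The sketch below uses a hypothetical smooth one-dimensional example of our own, f(x) = x + x³, which is nonconvex on ℝ (its second derivative changes sign at 0) but strictly increasing and hence pseudoconvex: whenever f′(x)(y − x) ≥ 0, it follows that f(y) ≥ f(x).

```python
import numpy as np

# Hypothetical 1-D example (not from the paper): f(x) = x + x**3.
# f''(x) = 6x changes sign, so f is nonconvex, yet f'(x) = 1 + 3x**2 > 0,
# so f is strictly increasing and therefore pseudoconvex on R.
f = lambda x: x + x**3
df = lambda x: 1.0 + 3.0 * x**2

rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, 1000)
ys = rng.uniform(-2.0, 2.0, 1000)

# Count sampled pairs violating the pseudoconvexity implication.
violations = sum(
    1 for x, y in zip(xs, ys)
    if df(x) * (y - x) >= 0 and f(y) < f(x) - 1e-12
)
print(violations)  # 0: no sampled pair violates the test
```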

Definition 5. A set-valued map F is said to be pseudomonotone on a set C ⊆ ℝⁿ if, for all x, y ∈ C, we have: if there exists ξ ∈ F(x) such that ⟨ξ, y − x⟩ ≥ 0, then ⟨η, y − x⟩ ≥ 0 for all η ∈ F(y).
Consider the following optimization problem with box set constraint: minimize f(x) subject to l ≤ Ax ≤ h, (4) where f : ℝⁿ → ℝ, l, h ∈ ℝⁿ with l < h, and A ∈ ℝ^{n×n} is nonsingular.
Substituting y = Ax, the problem (4) can be transformed into the following problem: minimize f(A⁻¹y) subject to y ∈ Ω = {y ∈ ℝⁿ : l ≤ y ≤ h}. (5) Let G(y) = Σᵢ Gᵢ(yᵢ), where Gᵢ is defined as Gᵢ(s) = max{s − hᵢ, 0} + max{lᵢ − s, 0}. Obviously, G(y) ≥ 0, and G(y) = 0 if and only if y ∈ Ω.
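The feasible region of the transformed problem is the box Ω = {y : l ≤ y ≤ h}, onto which the Euclidean projection acts componentwise. A minimal sketch with hypothetical bounds (the paper’s concrete l and h are not reproduced here):

```python
import numpy as np

def proj_box(x, l, h):
    # Componentwise Euclidean projection of x onto the box {z : l <= z <= h}:
    # clip each coordinate to its interval [l_i, h_i].
    return np.minimum(np.maximum(x, l), h)

# Hypothetical bounds for illustration only.
l = np.array([-1.0, -1.0, -1.0])
h = np.array([ 1.0,  1.0,  1.0])
x = np.array([ 2.0,  0.5, -3.0])
print(proj_box(x, l, h))  # coordinates clipped to [ 1.  0.5 -1.]
```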
Throughout this paper, the following assumptions on the optimization problem (4) are made. (A1) The objective function f of the problem (4) is pseudoconvex, regular, and locally Lipschitz continuous. (A2) ∂f is bounded on Ω; that is, sup{‖ξ‖ : ξ ∈ ∂f(x), x ∈ Ω} ≤ c, where c > 0 is a constant.
In the following, we develop a one-layer recurrent neural network for solving the problem (4). The dynamic equation of the proposed neural network model is described by a differential inclusion system, in which the penalty gain is a nonnegative constant and the activation is a discontinuous function defined componentwise. The architecture of the proposed neural network model (9) is depicted in Figure 1.

Figure 1: Architecture of the neural network model (9).
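As a numerical sketch of how such a model can be simulated, the code below applies a forward-Euler discretization to assumed dynamics of the form ẋ = −η − σ g(x) with η ∈ ∂f(x), where g is a discontinuous box-penalty term; this is our illustrative stand-in, not necessarily the paper’s exact system (9).

```python
import numpy as np

# Sketch only: we ASSUME dynamics dx/dt = -grad_f(x) - sigma * g(x), where g
# is a discontinuous box-penalty term; the paper's system (9) may differ.
def g(x, l, h):
    # Components: +1 above the upper bound, -1 below the lower bound, 0 inside.
    return (x > h).astype(float) - (x < l).astype(float)

def simulate(grad_f, x0, l, h, sigma=5.0, dt=1e-3, steps=20000):
    # Forward-Euler integration of the assumed differential inclusion.
    x = x0.astype(float).copy()
    for _ in range(steps):
        x -= dt * (grad_f(x) + sigma * g(x, l, h))
    return x

# Toy smooth objective (illustrative): f(x) = ||x - c||^2 / 2, with the
# unconstrained minimizer c placed outside the box.
c = np.array([3.0, -3.0])
l, h = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x_final = simulate(lambda x: x - c, np.array([4.0, 4.0]), l, h)
print(x_final)  # approaches [1, -1], the box-constrained minimizer
```

The state first reaches the box in finite time because of the constant-magnitude penalty, then slides along the active bounds, which mirrors the finite-time feasibility behavior proved in Section 3.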

Definition 6. is said to be an equilibrium point of the differential inclusion system (9) if that is, there exist such that where .

Definition 7. A function is said to be a solution of the system (9) with initial condition , if is absolutely continuous on , and for almost all , Equivalently, there exist measurable functions , such that

Definition 8. Suppose that C ⊆ ℝⁿ is a nonempty closed convex set. The normal cone to the set C at x ∈ C is defined as N_C(x) = {v ∈ ℝⁿ : ⟨v, y − x⟩ ≤ 0 for all y ∈ C}.
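For the box Ω = {x : l ≤ x ≤ h} used in this paper, the normal cone admits a simple componentwise description: v ∈ N_Ω(x) exactly when vᵢ ≥ 0 at coordinates where xᵢ = hᵢ, vᵢ ≤ 0 where xᵢ = lᵢ, and vᵢ = 0 at interior coordinates. A small membership check (our own sketch, with hypothetical bounds):

```python
import numpy as np

def in_normal_cone_box(v, x, l, h, tol=1e-12):
    # Componentwise test of v in N_Omega(x) for the box Omega = {z : l <= z <= h}:
    # v_i >= 0 on the upper face, v_i <= 0 on the lower face, v_i = 0 inside.
    ok_upper = np.all(v[np.isclose(x, h)] >= -tol)
    ok_lower = np.all(v[np.isclose(x, l)] <= tol)
    interior = (~np.isclose(x, l)) & (~np.isclose(x, h))
    ok_inner = np.all(np.abs(v[interior]) <= tol)
    return bool(ok_upper and ok_lower and ok_inner)

l, h = np.array([0.0, 0.0]), np.array([1.0, 1.0])
x = np.array([1.0, 0.5])  # on the upper face in coordinate 1, interior in coordinate 2
print(in_normal_cone_box(np.array([2.0, 0.0]), x, l, h))  # True
print(in_normal_cone_box(np.array([0.0, 1.0]), x, l, h))  # False
```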

Lemma 9 (see [22]). If , is regular at , then .

Lemma 10 (see [22]). If is a regular function at and is differentiable at and Lipschitz near , then , for all .

Lemma 11 (see [22]). If are closed convex sets and satisfy , then for any , .

Lemma 12 (see [22]). If f is locally Lipschitz near x and attains a minimum over a closed convex set C at x, then 0 ∈ ∂f(x) + N_C(x).

Set . Let ; then there exists a constant , such that , where denotes the interior of the set , . It is easy to verify the following lemma.

Lemma 13. For any and , , where , and is the th element of .

3. Main Results

In this section, the main results concerned with the convergence and optimality conditions of the proposed neural network are addressed.

Theorem 14. Suppose that assumptions (A1) and (A2) hold. Let . If , then the solution of the network system (9) with initial condition satisfies .

Proof. Set By Lemma 10 and (15), evaluating the derivative of along the trajectory of the system (9) gives If , it follows directly that . If , according to Lemma 13, one gets that . Thus, we have If , then ; this means that . If not, the state leaves at time , and when , we have . This implies that , which is a contradiction.
As a result, if , for any , the state of the network system (9) with initial condition satisfies . This completes the proof.

Theorem 15. Suppose that assumptions (A1) and (A2) hold. If , then the solution of the neural network system (9) with initial condition converges to the feasible region in finite time , where , and stays there thereafter.

Proof. According to the definition of , is a convex function in . By Lemma 10, it follows that Noting that , we can obtain by (19) that for all , , and such that Since and , at least one of the components of is or . So . Noting that , we have Let . If , then , and Integrating (22) from to , we can obtain Let . By (23), ; that is, or . This shows that the state trajectory of neural network (9) with initial condition reaches in finite time .
Next, we prove that when , the trajectory stays in after reaching . If this is not true, then there exists such that the trajectory leaves at , and there exists such that for , .
By integrating (22) from to , it follows that Due to , . By the definition of , for any , which contradicts the result above. The proof is completed.
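The finite-time mechanism established above can be illustrated on the simplest possible model (our own one-dimensional stand-in, not the paper’s system): for ẏ = −σ sign(y), |y(t)| decreases at the constant rate σ, so the origin is reached at exactly t* = |y(0)|/σ, in contrast to the merely asymptotic decay of ẏ = −σ y.

```python
import numpy as np

# Finite-time reaching for dy/dt = -sigma * sign(y): |y| shrinks at the
# constant rate sigma, so the origin is hit at t* = |y0| / sigma.
sigma, y0, dt = 2.0, 1.0, 1e-4
y, t = y0, 0.0
while abs(y) > sigma * dt:          # stop within one Euler step of the origin
    y -= dt * sigma * np.sign(y)
    t += dt
print(round(t, 3))  # ~0.5 = |y0| / sigma
```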

Theorem 16. Suppose that assumptions (A1) and (A2) hold. If , then the equilibrium point of the neural network system (9) is an optimal solution of the problem (4), and vice versa.

Proof. Denote as an equilibrium point of the neural network system (9); then there exist , such that where . By Theorem 15, ; hence, . By (25), . We can get the following projection formulation: where with , , defined as By the well-known projection theorem [17], (26) is equivalent to the following variational inequality: Since is pseudoconvex, is pseudoconvex on . By (28), we can obtain that . This shows that is a minimum point of over .
Next, we prove the converse. Denote as an optimal solution of the problem (4); then . Since is a minimum point of over the feasible region , according to Lemma 12, it follows that From (29), it follows that there exist , , and . Noting that , and at least one or , there exist and such that , and .
In the following, we prove that . We claim that . If not, then . Since , we have Thus . By the condition of Theorem 16, . Hence, , which contradicts . This implies that ; that is, which means that is an equilibrium point of the neural network system (9). This completes the proof.

Theorem 17. Suppose that assumptions (A1) and (A2) hold. If , then the equilibrium point of the neural network system (9) is stable in the sense of Lyapunov.

Proof. Denote as an equilibrium point of the neural network system (9); that is, By Theorem 16, we get that is an optimal solution of the problem (4); that is, . By Theorem 15, the trajectory with initial condition converges to the feasible region in finite time and will remain in forever. That is, for all . Let Since is a minimum point of on , we can get that , for all .
Consider the following Lyapunov function: Obviously, from (33), , and Evaluating the derivative of along the trajectory of the system (9) gives Since , by (34), we can set . Hence, For any , there exists such that . Since is pseudoconvex on , is pseudomonotone on . From the proof of Theorem 16, for any , . By the definition of , . Hence, . This implies that Equation (37) shows that the neural network system (9) is stable in the sense of Lyapunov. The proof is complete.

4. Numerical Examples

In this section, two examples will be given to illustrate the effectiveness of the proposed approach for solving the pseudoconvex optimization problem.

Example 1. Consider the quadratic fractional optimization problem: where is an matrix, , and . Here, we choose ,

It is easily verified that is symmetric and positive definite, and consequently is pseudoconvex on . The proposed neural network (9) is capable of solving this problem. Obviously, the neural network (9) associated with (38) can be described as where

Let , , then we have . Moreover, the restricted region with . An upper bound of is estimated as . Then the designed parameter is estimated as . Let in the simulation.
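Since the concrete data of Example 1 are not reproduced here, the sketch below uses hypothetical Q, a, b, a₀, b₀ of our own to show the standard verification: a quadratic fractional objective f(x) = (xᵀQx + aᵀx + a₀)/(bᵀx + b₀) is pseudoconvex on the half-space {x : bᵀx + b₀ > 0} whenever Q is symmetric positive definite, which can be confirmed by checking symmetry and the smallest eigenvalue.

```python
import numpy as np

# Hypothetical data standing in for the elided Q, a, b of Example 1.
Q = np.array([[ 5.0, -1.0,  2.0],
              [-1.0,  5.0, -1.0],
              [ 2.0, -1.0,  3.0]])
a = np.array([1.0, -2.0, 1.0])
b = np.array([1.0,  0.0, 1.0])
a0, b0 = 1.0, 4.0

def f(x):
    # Quadratic fractional objective, defined where b @ x + b0 > 0.
    return (x @ Q @ x + a @ x + a0) / (b @ x + b0)

# Symmetry plus positive definiteness of Q imply pseudoconvexity of f
# on the half-space {x : b @ x + b0 > 0}.
eigs = np.linalg.eigvalsh(Q)
print(np.allclose(Q, Q.T), eigs.min() > 0)  # True True
```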

We have simulated the dynamical behavior of the neural network using mathematical software when . Figures 2, 3, and 4 display the state trajectories of this neural network with different initial values, which show that the state variables converge to the feasible region in finite time. This is in accordance with the conclusion of Theorem 15. Meanwhile, it can be seen that the trajectory is stable in the sense of Lyapunov.

Figure 2: Time-domain behavior of the state variables , , and with initial point .
Figure 3: Time-domain behavior of the state variables , , and with initial point .
Figure 4: Time-domain behavior of the state variables , , and with initial point .

Example 2. Consider the following pseudoconvex optimization: where

In this problem, the objective function is pseudoconvex. Thus the proposed neural network is suitable for solving the problem in this case. The neural network (9) associated with (42) can be described as where

Let , ; then we have . Moreover, the restricted region with . An upper bound of is estimated as . Then the designed parameter is estimated as . Let in the simulation.

Figures 5 and 6 display the state trajectories of this neural network with different initial values. It can be seen that these trajectories converge to the feasible region in finite time as well. This is in accordance with the conclusion of Theorem 15. It can be verified that the trajectory is stable in the sense of Lyapunov.

Figure 5: Time-domain behavior of the state variables , , and with initial point .
Figure 6: Time-domain behavior of the state variables , , and with initial point .

5. Conclusion

In this paper, a one-layer recurrent neural network has been presented for solving pseudoconvex optimization with box constraints. The neural network model has been described by a differential inclusion system. The constructed recurrent neural network has been proved to be stable in the sense of Lyapunov, and conditions ensuring finite-time convergence of the state to the feasible region have been obtained. The proposed neural network can be applied to a wide variety of optimization problems arising in engineering applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of Hebei Province of China (A2011203103) and the Hebei Province Education Foundation of China (2009157).

References

1. Q. Liu, Z. Guo, and J. Wang, “A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization,” Neural Networks, vol. 26, pp. 99–109, 2012.
2. W. Lu and T. Chen, “Dynamical behaviors of delayed neural network systems with discontinuous activation functions,” Neural Computation, vol. 18, no. 3, pp. 683–708, 2006.
3. Y. Xia and J. Wang, “A recurrent neural network for solving nonlinear convex programs subject to linear constraints,” IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
4. X.-B. Liang and J. Wang, “A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints,” IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251–1262, 2000.
5. X. Xue and W. Bian, “A project neural network for solving degenerate convex quadratic program,” Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
6. Y. Xia, H. Leung, and J. Wang, “A projection neural network and its application to constrained optimization problems,” IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
7. S. Qin and X. Xue, “Global exponential stability and global convergence in finite time of neural networks with discontinuous activations,” Neural Processing Letters, vol. 29, no. 3, pp. 189–204, 2009.
8. S. Effati, A. Ghomashi, and A. R. Nazemi, “Application of projection neural network in solving convex programming problems,” Applied Mathematics and Computation, vol. 188, no. 2, pp. 1103–1114, 2007.
9. X. Xue and W. Bian, “Subgradient-based neural networks for nonsmooth convex optimization problems,” IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
10. X. Hu and J. Wang, “A recurrent neural network for solving a class of general variational inequalities,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 3, pp. 528–539, 2007.
11. X. Hu and J. Wang, “Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1487–1499, 2006.
12. J. Wang, “Analysis and design of a recurrent neural network for linear programming,” IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613–618, 1993.
13. J. Wang, “A deterministic annealing neural network for convex programming,” Neural Networks, vol. 7, no. 4, pp. 629–641, 1994.
14. J. Wang, “Primal and dual assignment networks,” IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 784–790, 1997.
15. J. Wang, “Primal and dual neural networks for shortest-path routing,” IEEE Transactions on Systems, Man, and Cybernetics A, vol. 28, no. 6, pp. 864–869, 1998.
16. W. Bian and X. Xue, “Neural network for solving constrained convex optimization problems with global attractivity,” IEEE Transactions on Circuits and Systems I, vol. 60, no. 3, pp. 710–723, 2013.
17. W. Bian and X. Xue, “A dynamical approach to constrained nonsmooth convex minimization problem coupling with penalty function method in Hilbert space,” Numerical Functional Analysis and Optimization, vol. 31, no. 11, pp. 1221–1253, 2010.
18. Y. Xia and J. Wang, “A one-layer recurrent neural network for support vector machine learning,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 34, no. 2, pp. 1261–1269, 2004.
19. X. Xue and W. Bian, “Subgradient-based neural networks for nonsmooth convex optimization problems,” IEEE Transactions on Circuits and Systems I, vol. 55, no. 8, pp. 2378–2391, 2008.
20. Y. Xia, H. Leung, and J. Wang, “A projection neural network and its application to constrained optimization problems,” IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
21. S. Qin, W. Bian, and X. Xue, “A new one-layer recurrent neural network for non-smooth pseudoconvex optimization,” Neurocomputing, vol. 120, pp. 655–662, 2013.
22. F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.