Abstract

Based on the feature of projection operator under box constraint, by using convex analysis method, this paper proposed three robust linear systems to solve a class of quadratic optimization problems. Utilizing linear matrix inequality (LMI) technique, eigenvalue perturbation theory, Lyapunov-Razumikhin method, and LaSalle’s invariance principle, some stable criteria for the related models are also established. Compared with previous criteria derived in the literature cited herein, the stable criteria established in this paper are less conservative and more practicable. Finally, a numerical simulation example and an application example in compressed sensing problem are also given to illustrate the validity of the criteria established in this paper.

1. Introduction

Quadratic optimization problem is a simple but very important and basic problem in convex optimization theory. It is widely applied in many scientific and engineering applications, such as regression analysis, data fusion, system identification, filter design, and compressed sensing [15]. Among these applications, the real-time solutions of such quadratic optimization problems are often required. In order to solve quadratic optimization problem, many different algorithms are provided in previous reference, such as proximal point algorithm (PPA), extended PPA, and splitting methods [68]. However, in many practical optimization problems, the numbers of decision variables and constraints are usually very large. When a large-scale quadratic optimization problem has to be performed in practical problem, computation complexity becomes more challenging. For such applications, classical optimization techniques may not be competent due to the problem dimension and stringent requirement on computational time [9]. One promising method for solving these problems is to employ artificial recurrent neural networks, since neural network has parallel computing capacity [10]. Mathematically, the optimization problem to be solved is mapped into a dynamical system so that its state output can give the optimal solution and the optimal solution is then obtained by tracking the state trajectory of the designed dynamical system based on the numerical ordinary differential equation technique [11]. From the pioneering work of McCulloch and Pittes, numerous neural network models have been developed. Compared with conventional numerical optimization algorithm, neural network has a low model complexity and parallel computing capacity, it is more suitable for engineering applications, and it has a weaker global convergence condition.

In the recent decades, all kinds of different neural network models were established to solve variant constrained optimization problems. These optimization problems include game theory, linear programming problems, linear complement problems, projection equation problems, variational inequality problems, nonlinear optimization problems, general convex optimization problems, nonconvex optimization problems, and nonsmooth optimization problems.

Neural network for solving linear programming problem perhaps stemmed back from Pyne’s work [12] and Tank and Hopfield’s work [13]. Their seminal work has inspired other researchers to develop recurrent neural networks for nonlinear optimization. Zhang and Constantinides derived a Lagrangian neural network for solving nonlinear convex optimization problems with linear equality constraints in [14]. In [15], Zhang researched the exponential stability of quadratic optimization problem on neural network and established a discrete-time neural network model to solve quadratic optimization problem with convex constraint only. Tan et al. studied the global exponential stability of discrete-time neural network for constrained quadratic optimization problem in [16]. In order to solve more general optimization problem, Yashtini and Malek investigated a discrete-time neural network model for solving nonlinear convex problems with hybrid constraints in [17]. Bouzerdoum and Pattison presented a neural network for solving quadratic convex optimization problems with bounded variables in [18]. Xia proposed primal-dual neural networks for solving linear and quadratic programming problems in [19], researched a dual neural network for solving strictly convex quadratic programming problems in [20], and proposed a Bi-projection neural network for solving constrained quadratic optimization problems in [21]. To solve quadratic minimax optimization problems, Liu and Wang [22] proposed a projection neural network (PNN) for constrained quadratic minimax optimization problem. To solve nonsmooth optimization problems, Liu and Wang [23] proposed a one-layer PNN for solving a class of pseudoconvex and nonsmooth nonlinear optimization problems.

It is worth pointing out that most of above established neural network models are nonlinear forms. And the stability criteria derived in the literature are based on Lyapunov stable theory. However, when the constrained conditions of quadratic optimization are box constrained, nonlinear projection operator satisfies section constraint; in this case, nonlinear projection operator can be expressed by a linear form with uncertain term. This means that nonlinear projection neural network can be rewritten as a robust linear system. Thus, utilizing eigenvalue perturbation theory, we can give out some new system stable criteria. This idea inspires this work. In this paper, using convex analysis tool, we first establish some new robust linear neural network for box constrained quadratic optimization; then, by using LMI technique and eigenvalue perturbation theory, some exponentially stable criteria are established. When time delays are considered, by using Lyapunov-Razumikhin method and LaSalle’s invariance principle, we further derive some asymptotical stability criteria for the established time-delayed linear robust neural network. To illustrate the efficiencies and validity of the derived stability criteria in this paper, a numerical example and an application example in compressed sensing problem are also given.

The remainder of this paper is organized as follows. In Section 2, a constrained quadratic optimization problem and related neural network models are described. In Section 3, the global stability and convergence of the proposed neural network are analyzed. In Section 4, a simulation numerical example and an application to compressed sensing are given. Finally, conclusions are drawn in Section 5.

2. Problem and Neural Network Model

Consider the following constrained quadratic optimization problem:where , is a symmetric nonnegative definite matrix, is a vector, and the superscript denotes the transpose operator. is defined by , where , are constants such that . It is seen that (1) can contain the constrained least square problem as a special case. Many solution methods for solving (1) were presented, including neural networks and numerical optimization algorithms. They all have a solution space being greater than the solution space of (1). As a result, when is large, these solution methods will have a slow convergence rate. However, fast computation of such large optimization problems is often required in practice. Due to low model complexity and parallel computing capacity, neural networks algorithm becomes a popular method to solve problem (1). In what follows, by using projection theory and equivalent transform, some neural network models for solving problem (1) will be first introduced.

2.1. Neural Network without Time Delays

As is well known, without considering time delays, the optimal solution for problem (1) is equivalent to the equilibrium point of the following project neural network:where is an arbitrary constant and is a projection operator defined bywhere denotes norm of . Denote ; since , from [2], the display expression of is as follows:Set and substitute into (2); it yields thatDenote as an equilibrium point of system (5), and let ; system (5) can be rewritten in the following form:where . Notice the expression of in (4); one can obtain thatDenote ; since , it follows thatSetit follows that , .And system (6) can be rewritten in the following linear robust neural network model form:

2.2. Neural Network with Time Delays
2.2.1. Time-Delayed Neural Network Type I

When the time delay is taken into account, a delayed projection neural network, which can be regarded as an improvement form for model (2), can be suggested for solving (1) as follows:where time delay is a constant. Set , and substitute into (12); it yields thatDenote as an equilibrium point of system (13), and let ; system (13) can be rewritten in the following form:where . Notice the expression of in (4); one can obtain thatDenote ; since , it follows thatSetit follows that , .And system (14) can be rewritten in the following time-delayed linear robust neural network form:

2.2.2. Time-Delayed Neural Network Type II

Another time-delayed neural network, which can be regarded as another improvement form for model (2), can be suggested for solving (1) as follows:where time delay is a constant. Set , and substitute into (20); similar to the technique used above, system (20) can be transformed into the following equivalent time-delayed linear robust neural network form:

3. Stability Analysis

Since systems (11), (19), and (21) are all linear differential equations with uncertain term, the stability criteria for these systems can be derived by using eigenvalue perturbation theory, Lyapunov-Razumikhin method, and LMI technique. To derive the stability criteria, we first introduce the following lemma.

Lemma 1 (see [24]). Suppose that , where is a constant matrix satisfying , and is an uncertain matrix satisfying , , , and of appropriate dimensions; the inequalityholds if and only if, for some ,

By using Lemma 1, the following stability criterion can be derived.

Theorem 2. The equilibrium point of system (2) is exponentially stable if there exists a positive constant such that the following linear matrix inequality holds:and the exponential convergence rate is , where is the eigenvalue of matrix , .

Proof. From Section 2.1, it follows that the equilibrium point’s exponential stability of system (2) is equivalent to the exponential stability of the trivial solution of system (11). Notice that system (11) is a linear system structure; thus the exponential stability of the trivial solution of system (11) is equivalent to , where is an arbitrary eigenvalue of matrix .
Set , . Denote as the eigenvalues of matrix satisfying and as the eigenvalues of matrix satisfying . Let be an identity eigenvector belonging to eigenvalue ; namely,It follows thatThusSince is a normal matrix, there exists a unitary matrix such thatthis means thatSet ; one can obtain thatthusNotice that ; it yieldswhich means that . Obviously, if it means that . Because is eigenvalue of matrix and matrix is a symmetric real matrix, this implies that if matrix , then . On the other hand, is equivalent toSince , by Lemma 1, if there is a positive constant such thatthen , , which means that the trivial solution is exponentially stable; this completes the proof.

Remark 3. Obviously, the stability condition established in Theorem 2 is a LMI form. Using Matlab LMI tool box, it can be easily solved. However, when a large-scale quadratic optimization problem has to be performed, the computation complexity becomes challenging. In order to overcome this flaw, by using eigenvalue perturbation theory, another more simple and practical result can be derived as follows. Before continuing, the following lemma is needed.

Lemma 4 (see [25]). Let , ; ; denotes the eigenvalues of matrix ; then for arbitrary , there exists an eigenvalue such thatwhere and .

Theorem 5. The equilibrium point of system (2) is exponentially stable if , and the exponential convergence rate is .

Proof. From the proof of Theorem 2, it follows that the equilibrium point’s exponential stability of system (2) is equivalent to , where is an arbitrary eigenvalue of matrix . Let be an arbitrary eigenvalue of matrix ; by Lemma 4, it yields thatBy the definition of , we haveFrom (36) and (37), it yields thatObviously, means that arbitrary eigenvalue of matrix is negative; this means that the equilibrium point of system (2) is exponentially stable, which completes the proof.

Remark 6. Usually, matrix infinite norm is larger than matrix 2-norm; thus the result established in Theorem 5 can be rewritten by matrix 2-norm form further. And notice the special property of identity matrix ; by using improved eigenvalue perturbation theory, another exponentially stable criterion can be derived as follows. Before continuing, the following lemma is needed.

Lemma 7 (see [26]). Let be normal matrix, are eigenvalues of matrix is an arbitrary matrix, and denotes the eigenvalue of matrix ; then there exists an eigenvalue of matrix such thatwhere denotes matrix 2-norm.

Theorem 8. The equilibrium point of system (2) is exponentially stable if , and the exponential convergence rate is .

Proof. From the proof of Theorem 2, it follows that the equilibrium point’s exponential stability of system (2) is equivalent to , where is an arbitrary eigenvalue of matrix . Let be an arbitrary eigenvalue of matrix ; since is a normal matrix and all of the eigenvalues of matrix are −1, by Lemma 7, it yields thatBy the definition of , we haveFrom (40) and (41), it yields thatObviously, means that arbitrary eigenvalue of matrix is negative; this means that the equilibrium point of system (2) is exponentially stabile, which completes the proof.

Remark 9. As is well known, Lyapunov-Razumikhin method is a powerful stability analysis tool for linear time-delayed system. When time delays are considered, by using Lyapunov-Razumikhin method, a similar asymptotically stable result with Theorem 8 for systems (12) and (19) can be derived as follows. Before continuing, the following Razumikhin condition is needed.

Lemma 10 (see [27] (Razumikhin condition)). The equilibrium point of system (12) is asymptotically stable if there exists a positive definite Lyapunov function satisfying

By using Razumikhin condition, the following asymptotically stable criterion for systems (12) and (19) can be obtained as follows.

Theorem 11. The equilibrium point of system (12) is asymptotically stable if .

Proof. Construct a positive definite Lyapunov function ; it follows thatWhen , it yields , and the derivatives of Lyapunov function along the trajectory of system (19) satisfyBy the definition of , we haveFrom (45), (46), and Lemma 10, we haveObviously, from Razumikhin condition, if , then the equilibrium point of system (12) is asymptotically stable, which completes the proof.

Remark 12. Similar to the proof of Theorem 11, by using Razumikhin condition, a delay-dependent stable result for systems (20) and (21) can be derived as follows.

Theorem 13. The equilibrium point of system (21) is asymptotically stable if .

Proof. Construct a positive definite Lyapunov function ; it follows thatWhen , it yields , and the derivatives of Lyapunov function along the trajectory of system (21) satisfySince , from (49), we haveObviously, from Razumikhin condition, if , then the equilibrium point of system (21) is asymptotically stable, which completes the proof.

Remark 14. If , system (21) degenerates into system (11); Theorem 13 becomes the same as in Theorem 8; thus the criterion derived in Theorem 8 can be regarded as a special case of Theorem 13.

Theorem 15. If the symbol in Theorems 2, 5, 8, and 11 becomes , then, for any initial value , the solutions of systems (2) and (12) converge to a related equilibrium point . In particular, these systems are asymptotically stable when their equilibrium set contains exactly one point.

Proof. Since is a bounded and closed convex set, as is well known, the state vectors of systems (2) and (12) are bounded, and is an invariance set. Utilizing LaSalle’s invariance principle (see [28]), similar to the proofs in [2023], one can easily obtain the results described in Theorem 15; this complete the proof.

Remark 16. Theorems 2, 5, 8, and 11 show that if the inequality signs are strict, then, for any initial whether it is in feasible set or not, the state vectors are exponentially or asymptotically convergent. When the inequality signs are not strict, Theorem 15 shows that the sufficient condition ensuring system being asymptotically stable is . On the other hand, notice that if is sufficiently small, nonstrict inequality sign in Theorems 2, 5, 8, and 11 can be guaranteed; this means that, for arbitrary initial value with appropriate dimension, systems (2) and (12) are asymptotically stable.

4. Numerical Simulation

4.1. Numerical Example

Example 1. Consider the quadratic optimization problem (1) withand . For quadratic optimization (1), the author in [29] constructed a discrete-time neural network to solve this problem and gave out a globally convergent criterion. In order to further illustrate the globally exponential convergence of the neural networks constructed in [29], Tan et al. gave out some new improvement stable criteria in [16]. In [30], the authors constructed a continuous neural network to solve problem (1) and derived a globally exponential convergent criterion. In order to reduce the conservation of the criterion established in [30], literature [31] further gave out an improved stable result as . If we construct model (11) to solve problem (1), by direct computation, it follows that ; from Theorem 8, it yields that the state vector of system (11) is exponentially convergent to the optimization value of problem (1). Simulation result is illustrated in Figure 1 with initial value , from which one can see that the state vector of system (11) is convergent to the equilibrium point exponentially. Obviously, ; this means that the criterion established in Theorem 8 is less conservative than some earlier results.

If we adopt model (19) to solve problem (1), by direct computing, it follows that ; from Theorem 11, it means that the state vector of system (19) is exponentially stable. Simulation result is illustrated in Figure 2 with . If we use model (21) to solve problem (1), from the result established in Theorem 13, one can see that the state vector stable condition is stringently required for time delay. For example, in system (21), let ; simulation result in Figure 3 shows that the state vector of system (21) is divergent. This means that even small time delay can cause the state vector of system (21) to be instable. Hence, when time delays must be considered, model (19) has more superiority than model (21).

4.2. Application to Compressed Sensing

The sparsest solution of an undermined linear system of equations can be found by solving the so-called -norm minimization problem; that is,where , , , and denotes the number of nonzero components in . It is well known that problem (52) is NP-hard. For this problem, an approximation model named “Basic Pursuit Problem” was proposed assince the convex envelope of is , where is the “”-norm of . Further, if is contaminated by small dense noise, then problem (53) can be modified as the following form:where is nonnegative parameter and is the Euclidean norm of . Recently, convex analysis shows that (54) is equivalent to the following Lagrangian version:where is a penalty parameter. Obviously, optimization problem (55) is convex but not differentiable. In order to solve problem (55), Liu and Hu [32] introduced a transform , , and . It follows that , where is the vector consisting of ones. Therefore, problem (55) can be rewritten as follows:Namely,where

Obviously, problem (57) is a typical quadratic optimization problem. From (1) and (2), the optimal solutions of (57) are equivalent to the equilibrium points of the following projective neural network:where is the positive orthant, , and . By direct computing, it follows that

From the analysis in Section 2, system (59) can be rewritten in the following equivalent linear robust neural network form:Letdirect computing shows that ; from Theorem 15, if we adopt initial in feasible set , then the state vector of system (61) converges to optimizing solution of problem (55).

Remark 17. In addition to the application to compressed sensing, projection neural networks can also be applied to the motion generation and control of redundant robot manipulators. In [33], based on control perspective and projection neural networks, the authors researched distributed task allocation problem of multiple robots. Utilizing projection neural networks, literature [34] researches manipulability optimization problem of redundant manipulators. These new applications in robotics extend new application fields for projection neural network. How to use the technique derived in this paper to robotics field and solve related optimization problem will be a meaningful work, and this is also our future work direction.

5. Conclusions

This paper proposed three linear robust systems to solve a class of quadratic optimization problems. Utilizing LMI technique, eigenvalue perturbation theory, Lyapunov-Razumikhin method, and LaSalle’s invariance principle, some stable criteria for related models are also established. Compared with previous criteria derived in the literature cited herein, the stable criteria established in this paper are less conservative and more practicable. Meanwhile, simulation results show that time-delay linear robust system type II is more sensitive to time delay than type I. This means that, in practical engineering problems, when time delay is needed to consider, model (19) has more superiority than model (21). Simulation example and application example in compressed sensing show that the results derived in this paper are valid.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grants 61472093 and 11461082) and Scientific Research Fund Project in Guizhou Provincial Department of Education (Grant KY243).