Research Article | Open Access

# Robust Linear Neural Network for Constrained Quadratic Optimization

**Academic Editor:** Manuel De la Sen

#### Abstract

Based on a feature of the projection operator under box constraints, and using convex analysis methods, this paper proposes three robust linear systems to solve a class of quadratic optimization problems. Utilizing the linear matrix inequality (LMI) technique, eigenvalue perturbation theory, the Lyapunov-Razumikhin method, and LaSalle's invariance principle, stability criteria for the related models are also established. Compared with previous criteria derived in the literature cited herein, the stability criteria established in this paper are less conservative and more practicable. Finally, a numerical simulation example and an application to a compressed sensing problem are given to illustrate the validity of the established criteria.

#### 1. Introduction

The quadratic optimization problem is a simple but fundamental problem in convex optimization theory. It is widely applied in scientific and engineering applications such as regression analysis, data fusion, system identification, filter design, and compressed sensing [1–5]. Among these applications, real-time solutions of such quadratic optimization problems are often required. To solve quadratic optimization problems, many algorithms have been provided in the literature, such as the proximal point algorithm (PPA), extended PPA, and splitting methods [6–8]. However, in many practical optimization problems, the numbers of decision variables and constraints are very large. When a large-scale quadratic optimization problem has to be solved in practice, computational complexity becomes challenging. For such applications, classical optimization techniques may not be competent due to the problem dimension and stringent requirements on computational time [9]. One promising method for solving these problems is to employ artificial recurrent neural networks, since neural networks have parallel computing capacity [10]. Mathematically, the optimization problem to be solved is mapped into a dynamical system whose state output gives the optimal solution; the optimal solution is then obtained by tracking the state trajectory of the designed dynamical system using numerical ordinary differential equation techniques [11]. Since the pioneering work of McCulloch and Pitts, numerous neural network models have been developed. Compared with conventional numerical optimization algorithms, a neural network has low model complexity, parallel computing capacity, and a weaker global convergence condition, which makes it more suitable for engineering applications.

In recent decades, a variety of neural network models have been established to solve different constrained optimization problems, including game-theoretic problems, linear programming problems, linear complementarity problems, projection equation problems, variational inequality problems, nonlinear optimization problems, general convex optimization problems, nonconvex optimization problems, and nonsmooth optimization problems.

Neural networks for solving linear programming problems can perhaps be traced back to Pyne's work [12] and Tank and Hopfield's work [13]. Their seminal work has inspired other researchers to develop recurrent neural networks for nonlinear optimization. Zhang and Constantinides derived a Lagrangian neural network for solving nonlinear convex optimization problems with linear equality constraints in [14]. In [15], Zhang studied the exponential stability of neural networks for quadratic optimization and established a discrete-time neural network model to solve quadratic optimization problems with convex constraints only. Tan et al. studied the global exponential stability of discrete-time neural networks for constrained quadratic optimization in [16]. To solve more general optimization problems, Yashtini and Malek investigated a discrete-time neural network model for solving nonlinear convex problems with hybrid constraints in [17]. Bouzerdoum and Pattison presented a neural network for solving quadratic convex optimization problems with bounded variables in [18]. Xia proposed primal-dual neural networks for solving linear and quadratic programming problems in [19], studied a dual neural network for solving strictly convex quadratic programming problems in [20], and proposed a bi-projection neural network for solving constrained quadratic optimization problems in [21]. Liu and Wang [22] proposed a projection neural network (PNN) for constrained quadratic minimax optimization problems and, to address nonsmooth problems, proposed in [23] a one-layer PNN for solving a class of pseudoconvex, nonsmooth nonlinear optimization problems.

It is worth pointing out that most of the neural network models above are nonlinear, and the stability criteria derived in the literature are based on Lyapunov stability theory. However, when the constraints of the quadratic optimization problem are box constraints, the nonlinear projection operator satisfies a sector condition; in this case, the nonlinear projection operator can be expressed in a linear form with an uncertain term. This means that a nonlinear projection neural network can be rewritten as a robust linear system, so that eigenvalue perturbation theory can be used to obtain new stability criteria. This observation motivates the present work. In this paper, using convex analysis tools, we first establish new robust linear neural networks for box-constrained quadratic optimization; then, using the LMI technique and eigenvalue perturbation theory, some exponential stability criteria are established. When time delays are considered, we further derive asymptotic stability criteria for the established time-delayed robust linear neural networks using the Lyapunov-Razumikhin method and LaSalle's invariance principle. To illustrate the efficiency and validity of the derived stability criteria, a numerical example and an application to a compressed sensing problem are also given.

The remainder of this paper is organized as follows. In Section 2, a constrained quadratic optimization problem and the related neural network models are described. In Section 3, the global stability and convergence of the proposed neural networks are analyzed. In Section 4, a numerical simulation example and an application to compressed sensing are given. Finally, conclusions are drawn in Section 5.

#### 2. Problem and Neural Network Model

Consider the following constrained quadratic optimization problem: where the coefficient matrix is symmetric and nonnegative definite, the linear term is a vector, and the superscript denotes the transpose operator. The constraint set is a box, defined componentwise by lower and upper bound constants. It can be seen that (1) contains the constrained least squares problem as a special case. Many solution methods for solving (1) have been presented, including neural networks and numerical optimization algorithms; they all work in a solution space larger than that of (1). As a result, when the problem dimension is large, these methods have a slow convergence rate, yet fast computation of such large optimization problems is often required in practice. Due to their low model complexity and parallel computing capacity, neural network algorithms have become a popular method for solving problem (1). In what follows, using projection theory and equivalent transformations, some neural network models for solving problem (1) are introduced.
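As a concrete illustration, the projection onto a box constraint set reduces to componentwise clipping. The following minimal Python sketch shows this projection together with a small instance of the quadratic objective; the matrix, vector, and bounds are hypothetical data chosen only for illustration.

```python
import numpy as np

def project_box(x, lower, upper):
    """Projection onto the box {x : lower <= x <= upper} is componentwise clipping."""
    return np.minimum(np.maximum(x, lower), upper)

def quad_obj(x, W, q):
    """Quadratic objective 0.5 * x^T W x + q^T x (W symmetric positive semidefinite)."""
    return 0.5 * x @ W @ x + q @ x

# Hypothetical data for a 2-dimensional instance of problem (1).
W = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, 0.5])
lower = np.array([-1.0, -1.0])
upper = np.array([1.0, 1.0])

print(project_box(np.array([3.0, -2.5]), lower, upper))  # -> [ 1. -1.]
```

Because the projection acts componentwise, each coordinate either passes through unchanged or saturates at its bound, which is exactly the sector-type behavior exploited later in the stability analysis.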

##### 2.1. Neural Network without Time Delays

As is well known, without considering time delays, the optimal solution of problem (1) is equivalent to the equilibrium point of the projection neural network (2), where the gain is an arbitrary positive constant and the projection operator maps a point to its nearest point (in the Euclidean norm) in the constraint set. Since the constraint set is a box, the projection operator has the explicit componentwise clipping expression (4) given in [2]. Substituting (4) into (2) yields system (5). Denote an equilibrium point of system (5) and shift coordinates by this equilibrium; system (5) can then be rewritten in form (6) with a nonlinear residual term. From the expression of the projection operator in (4), this residual can be expressed as a norm-bounded uncertain term, and system (6) can be rewritten in the following linear robust neural network model form:
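The dynamics above can be illustrated numerically. The sketch below integrates a projection neural network of the standard form dx/dt = -x + P_Omega(x - alpha*(W x + q)) with a forward-Euler scheme; this form, the gain alpha, the data W and q, and the bounds are assumptions for illustration rather than the paper's exact equations.

```python
import numpy as np

def project_box(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def simulate_pnn(W, q, lo, hi, x0, alpha=0.5, dt=1e-3, steps=20000):
    """Forward-Euler integration of the assumed projection network
    dx/dt = -x + P_Omega(x - alpha * (W @ x + q))."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x = x + dt * (-x + project_box(x - alpha * (W @ x + q), lo, hi))
    return x

# Hypothetical data: minimize 0.5 x^T W x + q^T x over the box [-1, 1]^2.
W = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-1.0, 1.0])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x_star = simulate_pnn(W, q, lo, hi, np.array([0.8, -0.3]))
print(x_star)  # close to [0.5, -1.0]: x1 is interior-stationary, x2 hits the bound
```

For this instance the constrained minimizer has one coordinate strictly inside the box and one saturated at the lower bound, so the trajectory exhibits both operating regimes of the projection.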

##### 2.2. Neural Network with Time Delays

###### 2.2.1. Time-Delayed Neural Network Type I

When the time delay is taken into account, a delayed projection neural network, which can be regarded as an improved form of model (2), can be suggested for solving (1) as follows: in (12), the time delay is a positive constant. Performing the same substitution as in Section 2.1, system (12) becomes system (13). Denote an equilibrium point of system (13) and shift coordinates by this equilibrium; system (13) can then be rewritten in form (14) with a nonlinear residual term. From the expression of the projection operator in (4), this residual can again be expressed as a norm-bounded uncertain term, and system (14) can be rewritten in the following time-delayed linear robust neural network form:
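A minimal sketch of a delayed variant: assuming (as is common for such Type I delayed projection networks) that the delay enters through the state inside the projection term, a forward-Euler scheme with a constant initial history can be written as follows. The model form, data, and delay value are assumptions for illustration.

```python
import numpy as np

def project_box(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def simulate_delayed_pnn(W, q, lo, hi, x0, tau=0.1, alpha=0.5, dt=1e-3, steps=20000):
    """Forward-Euler scheme for a delayed projection network of the assumed form
    dx/dt = -x(t) + P_Omega(x(t - tau) - alpha * (W @ x(t - tau) + q)),
    with constant initial history x(s) = x0 for s in [-tau, 0]."""
    d = int(round(tau / dt))                       # delay measured in steps
    hist = [x0.astype(float).copy() for _ in range(d + 1)]
    x = x0.astype(float).copy()
    for _ in range(steps):
        x_del = hist[0]                            # x(t - tau)
        x = x + dt * (-x + project_box(x_del - alpha * (W @ x_del + q), lo, hi))
        hist.pop(0)
        hist.append(x.copy())
    return x

# Hypothetical small instance; for a small delay the trajectory still settles
# at the constrained minimizer.
W = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-1.0, 1.0])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x_star = simulate_delayed_pnn(W, q, lo, hi, np.array([0.8, -0.3]))
```

With a sufficiently small delay the equilibrium is unchanged and the convergence merely slows, which is consistent with the delay-robustness the paper claims for this model type.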

###### 2.2.2. Time-Delayed Neural Network Type II

Another time-delayed neural network, which can be regarded as a second improved form of model (2), can be suggested for solving (1) as follows: in (20), the time delay is a positive constant. Performing the same substitution and using the technique above, system (20) can be transformed into the following equivalent time-delayed linear robust neural network form:

#### 3. Stability Analysis

Since systems (11), (19), and (21) are all linear differential equations with uncertain terms, stability criteria for these systems can be derived using eigenvalue perturbation theory, the Lyapunov-Razumikhin method, and the LMI technique. To derive the stability criteria, we first introduce the following lemma.

Lemma 1 (see [24]). *Suppose that , where is a constant matrix satisfying , and is an uncertain matrix satisfying , , , and of appropriate dimensions; then the inequality holds if and only if, for some ,*

By using Lemma 1, the following stability criterion can be derived.

Theorem 2. *The equilibrium point of system (2) is exponentially stable if there exists a positive constant such that the following linear matrix inequality holds: and the exponential convergence rate is , where is an eigenvalue of matrix , .*

*Proof.* From Section 2.1, it follows that the exponential stability of the equilibrium point of system (2) is equivalent to the exponential stability of the trivial solution of system (11). Since system (11) has a linear system structure, the exponential stability of its trivial solution is equivalent to , where is an arbitrary eigenvalue of matrix .

Set , . Denote as the eigenvalues of matrix satisfying and as the eigenvalues of matrix satisfying . Let be a unit eigenvector belonging to eigenvalue ; namely,. It follows that. Thus. Since is a normal matrix, there exists a unitary matrix such that; this means that. Set ; one can obtain that; thus. Notice that ; it yields, which means that . Obviously, if , it means that . Because is an eigenvalue of matrix and that matrix is symmetric and real, this implies that if the matrix is negative definite, then . On the other hand, this is equivalent to. Since , by Lemma 1, if there is a positive constant such that the corresponding LMI holds, then , , which means that the trivial solution is exponentially stable; this completes the proof.
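Such certificates can be checked numerically. The exact LMI of Theorem 2 (which carries the uncertainty bound) is not reproduced here; instead, the sketch below illustrates the underlying Lyapunov certificate for a nominal linear system dx/dt = Ax: A is Hurwitz if and only if the Lyapunov equation admits a symmetric positive definite solution. The matrix A is hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A is Hurwitz iff A^T P + P A = -Q (with Q > 0) admits a symmetric
# positive definite solution P.
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])                        # hypothetical Hurwitz matrix
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)             # solves A^T P + P A = -Q
is_stable = bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
print(is_stable)  # -> True
```

In practice the full robust LMI of Theorem 2 would be posed with a semidefinite programming tool (e.g., the MATLAB LMI toolbox mentioned below), but the positive-definiteness check above is the core of the certificate.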

*Remark 3.* Obviously, the stability condition established in Theorem 2 is in LMI form and can be easily solved using the MATLAB LMI toolbox. However, when a large-scale quadratic optimization problem has to be solved, the computational complexity becomes challenging. To overcome this drawback, a simpler and more practical result can be derived using eigenvalue perturbation theory. Before continuing, the following lemma is needed.

Lemma 4 (see [25]). *Let , ; ; denote the eigenvalues of matrix ; then, for an arbitrary , there exists an eigenvalue such that, where and .*

Theorem 5. *The equilibrium point of system (2) is exponentially stable if , and the exponential convergence rate is .*

*Proof.* From the proof of Theorem 2, it follows that the exponential stability of the equilibrium point of system (2) is equivalent to , where is an arbitrary eigenvalue of matrix . Let be an arbitrary eigenvalue of matrix ; by Lemma 4, it yields that. By the definition of , we have. From (36) and (37), it yields that. Obviously, means that every eigenvalue of matrix is negative; this means that the equilibrium point of system (2) is exponentially stable, which completes the proof.

*Remark 6.* The matrix infinity norm is usually larger than the matrix 2-norm; thus the result established in Theorem 5 can be further sharpened using the matrix 2-norm. Moreover, noticing the special property of the identity matrix and using an improved eigenvalue perturbation result, another exponential stability criterion can be derived. Before continuing, the following lemma is needed.

Lemma 7 (see [26]). *Let be a normal matrix with eigenvalues , let be an arbitrary matrix, and let denote an eigenvalue of the perturbed matrix ; then there exists an eigenvalue of the normal matrix such that, where denotes the matrix 2-norm.*

Theorem 8. *The equilibrium point of system (2) is exponentially stable if , and the exponential convergence rate is .*

*Proof.* From the proof of Theorem 2, it follows that the exponential stability of the equilibrium point of system (2) is equivalent to , where is an arbitrary eigenvalue of matrix . Let be an arbitrary eigenvalue of matrix ; since is a normal matrix and all of its eigenvalues are −1, by Lemma 7, it yields that. By the definition of , we have. From (40) and (41), it yields that. Obviously, means that every eigenvalue of matrix is negative; this means that the equilibrium point of system (2) is exponentially stable, which completes the proof.
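The eigenvalue perturbation argument of Theorems 5 and 8 can be verified numerically. The sketch below builds a matrix of the form −I + Δ with a hypothetical uncertain term Δ of 2-norm less than 1 and checks that, as a Bauer-Fike-type bound (Lemma 7) predicts for the normal matrix −I, every eigenvalue stays within the 2-norm of Δ of −1 and hence has negative real part.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Delta = rng.standard_normal((n, n))                # hypothetical uncertain term
Delta *= 0.9 / np.linalg.norm(Delta, 2)            # scale so ||Delta||_2 = 0.9 < 1

M = -np.eye(n) + Delta
max_re = np.max(np.linalg.eigvals(M).real)

# For the normal matrix -I, every eigenvalue of -I + Delta lies within
# ||Delta||_2 of -1, so ||Delta||_2 < 1 forces all real parts negative.
bound = -1.0 + np.linalg.norm(Delta, 2)
print(max_re <= bound, max_re < 0)  # -> True True
```

This is the practical appeal of Theorem 8: a single spectral-norm computation replaces the LMI feasibility problem of Theorem 2.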

*Remark 9.* As is well known, the Lyapunov-Razumikhin method is a powerful stability analysis tool for linear time-delayed systems. When time delays are considered, an asymptotic stability result similar to Theorem 8 can be derived for systems (12) and (19) using the Lyapunov-Razumikhin method. Before continuing, the following Razumikhin condition is needed.

Lemma 10 (see [27]; Razumikhin condition). *The equilibrium point of system (12) is asymptotically stable if there exists a positive definite Lyapunov function satisfying*

Using the Razumikhin condition, the following asymptotic stability criterion for systems (12) and (19) can be obtained.

Theorem 11. *The equilibrium point of system (12) is asymptotically stable if .*

*Proof.* Construct a positive definite Lyapunov function ; it follows that. When , it yields , and the derivatives of the Lyapunov function along the trajectory of system (19) satisfy. By the definition of , we have. From (45), (46), and Lemma 10, we have. Obviously, from the Razumikhin condition, if , then the equilibrium point of system (12) is asymptotically stable, which completes the proof.

*Remark 12.* Similar to the proof of Theorem 11, a delay-dependent stability result for systems (20) and (21) can be derived using the Razumikhin condition, as follows.

Theorem 13. *The equilibrium point of system (21) is asymptotically stable if .*

*Proof.* Construct a positive definite Lyapunov function ; it follows that. When , it yields , and the derivatives of the Lyapunov function along the trajectory of system (21) satisfy. Since , from (49), we have. Obviously, from the Razumikhin condition, if , then the equilibrium point of system (21) is asymptotically stable, which completes the proof.

*Remark 14.* If , system (21) degenerates into system (11), and Theorem 13 reduces to Theorem 8; thus the criterion derived in Theorem 8 can be regarded as a special case of Theorem 13.

Theorem 15. *If the strict inequality in Theorems 2, 5, 8, and 11 is relaxed to a nonstrict one, then, for any initial value , the solutions of systems (2) and (12) converge to a related equilibrium point . In particular, these systems are asymptotically stable when their equilibrium set contains exactly one point.*

*Proof.* Since is a bounded and closed convex set, the state vectors of systems (2) and (12) are, as is well known, bounded, and the constraint set is invariant. Utilizing LaSalle's invariance principle (see [28]) and arguing as in the proofs in [20–23], one can easily obtain the results described in Theorem 15; this completes the proof.

*Remark 16.* Theorems 2, 5, 8, and 11 show that if the inequality signs are strict, then, for any initial value, whether it lies in the feasible set or not, the state vectors converge exponentially or asymptotically. When the inequality signs are not strict, Theorem 15 gives a sufficient condition ensuring that the system is asymptotically stable. On the other hand, notice that if is sufficiently small, the nonstrict inequality signs in Theorems 2, 5, 8, and 11 can be guaranteed; this means that, for an arbitrary initial value of appropriate dimension, systems (2) and (12) are asymptotically stable.

#### 4. Numerical Simulation

##### 4.1. Numerical Example

*Example 1.* Consider the quadratic optimization problem (1) with the given data. For quadratic optimization problem (1), the author of [29] constructed a discrete-time neural network and derived a global convergence criterion. To further illustrate the global exponential convergence of the neural network constructed in [29], Tan et al. derived improved stability criteria in [16]. In [30], the authors constructed a continuous neural network to solve problem (1) and derived a global exponential convergence criterion. To reduce the conservatism of the criterion established in [30], [31] further derived an improved stability result. If we use model (11) to solve problem (1), direct computation shows that the condition of Theorem 8 is satisfied; hence the state vector of system (11) converges exponentially to the optimal value of problem (1). The simulation result, with the chosen initial value, is illustrated in Figure 1, from which one can see that the state vector of system (11) converges exponentially to the equilibrium point. A direct comparison shows that the criterion established in Theorem 8 is less conservative than these earlier results.

If we adopt model (19) to solve problem (1), direct computation shows that the condition of Theorem 11 is satisfied, which means that the state vector of system (19) is exponentially stable. The simulation result is illustrated in Figure 2. If we use model (21) to solve problem (1), the result established in Theorem 13 shows that the stability condition places a stringent requirement on the time delay. For example, for the chosen delay value, the simulation result in Figure 3 shows that the state vector of system (21) diverges. This means that even a small time delay can cause the state vector of system (21) to become unstable. Hence, when time delays must be considered, model (19) is superior to model (21).

##### 4.2. Application to Compressed Sensing

The sparsest solution of an underdetermined linear system of equations can be found by solving the so-called -norm minimization problem (52), where denotes the number of nonzero components of the decision vector. It is well known that problem (52) is NP-hard. For this problem, an approximation model named the "Basis Pursuit" problem was proposed as (53), since the -norm is the convex envelope of the sparsity measure. Further, if the measurement vector is contaminated by small dense noise, then problem (53) can be modified into the form (54), where is a nonnegative parameter and the Euclidean norm is used. Convex analysis shows that (54) is equivalent to the Lagrangian version (55), where is a penalty parameter. Obviously, optimization problem (55) is convex but not differentiable. To solve problem (55), Liu and Hu [32] introduced a transform that splits the decision vector into its nonnegative positive and negative parts, so that its -norm equals the inner product of the vector of ones with the sum of the two parts. Therefore, problem (55) can be rewritten as (56), namely, as (57), where the quadratic-program data are defined accordingly.

Obviously, problem (57) is a typical quadratic optimization problem. From (1) and (2), the optimal solutions of (57) are equivalent to the equilibrium points of the projection neural network (59), where the constraint set is the nonnegative orthant; the corresponding coefficient matrix and vector follow by direct computation.

From the analysis in Section 2, system (59) can be rewritten in the equivalent linear robust neural network form (61). Direct computation shows that the condition of Theorem 15 holds; hence, if we adopt an initial value in the feasible set, the state vector of system (61) converges to an optimal solution of problem (55).
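The variable-splitting reduction above can be sketched end to end. Assuming the standard ℓ1-regularized least squares form of (55), the code below stacks the two nonnegative parts, forms the equivalent quadratic program over the nonnegative orthant, and runs projected gradient iterations (a discrete-time counterpart of the projection network); the sensing matrix, signal, regularization parameter, and step-size rule are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)       # hypothetical sensing matrix
x_true = np.zeros(n)
x_true[[3, 17, 31]] = [1.5, -2.0, 1.0]             # sparse hypothetical signal
b = A @ x_true
lam = 0.01                                         # hypothetical penalty parameter

# Split x = u - v with u, v >= 0, so ||x||_1 = 1^T (u + v); stack z = [u; v].
B = np.hstack([A, -A])
H = B.T @ B                                        # quadratic term of the QP
c = -B.T @ b + lam                                 # linear term (lam * ones folded in)

# Projected gradient over the nonnegative orthant (projection = clip at 0),
# with step size 1/L, L being the largest eigenvalue of H.
eta = 1.0 / np.linalg.eigvalsh(H).max()
z = np.zeros(2 * n)
for _ in range(5000):
    z = np.maximum(z - eta * (H @ z + c), 0.0)

x_hat = z[:n] - z[n:]
print(np.linalg.norm(A @ x_hat - b))               # residual of the recovered signal
```

The projection here is the clip at zero, exactly the box-type projection analyzed in Section 2 with the upper bound removed, so the stability machinery of Section 3 applies to the continuous-time counterpart.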

*Remark 17.* In addition to compressed sensing, projection neural networks can also be applied to the motion generation and control of redundant robot manipulators. In [33], based on a control perspective and projection neural networks, the authors studied the distributed task allocation problem of multiple robots. Utilizing projection neural networks, [34] studied the manipulability optimization problem of redundant manipulators. These applications in robotics open new application fields for projection neural networks. Applying the techniques derived in this paper to the robotics field to solve related optimization problems will be meaningful work and is our future research direction.

#### 5. Conclusions

This paper proposed three robust linear systems to solve a class of quadratic optimization problems. Utilizing the LMI technique, eigenvalue perturbation theory, the Lyapunov-Razumikhin method, and LaSalle's invariance principle, stability criteria for the related models were established. Compared with previous criteria derived in the literature cited herein, the stability criteria established in this paper are less conservative and more practicable. Meanwhile, simulation results show that the time-delayed robust linear system of type II is more sensitive to time delay than type I; this means that, in practical engineering problems, when time delays must be considered, model (19) is superior to model (21). The simulation example and the application to compressed sensing show that the results derived in this paper are valid.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grants 61472093 and 11461082) and Scientific Research Fund Project in Guizhou Provincial Department of Education (Grant KY243).

#### References

1. L. Bravi, V. Piccialli, and M. Sciandrone, "An optimization-based method for feature ranking in nonlinear regression problems," *IEEE Transactions on Neural Networks and Learning Systems*, vol. 28, no. 4, pp. 1005–1010, 2016.
2. A. Cichocki and R. Unbehauen, *Neural Networks for Optimization and Signal Processing*, Wiley, New York, NY, USA, 1993.
3. A. Cichocki and S. Amari, *Adaptive Blind Signal and Image Processing*, John Wiley & Sons, Ltd, Chichester, UK, 2002.
4. Y. Cheung and L. Xu, "Dual multivariate auto-regressive modeling in state space for temporal signal separation," *IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics*, vol. 33, no. 5, p. 835, 2003.
5. Q. Liu and J. Wang, "L1-minimization algorithms for sparse signal reconstruction based on a projection neural network," *IEEE Transactions on Neural Networks and Learning Systems*, vol. 27, no. 3, pp. 698–707, 2016.
6. B. He, X. Yuan, and W. Zhang, "A customized proximal point algorithm for convex minimization with linear constraints," *Computational Optimization and Applications*, vol. 56, no. 3, pp. 559–572, 2013.
7. B. He and X. Yuan, "Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective," *SIAM Journal on Imaging Sciences*, vol. 5, no. 1, pp. 119–149, 2012.
8. G. Gu, B. He, and X. Yuan, "Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a unified approach," *Computational Optimization and Applications*, vol. 59, no. 1-2, pp. 135–161, 2014.
9. M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," *IEEE Transactions on Circuits and Systems*, vol. 35, no. 5, pp. 554–562, 1988.
10. X. S. Zhang, *Neural Networks in Optimization*, Kluwer Academic Publishers, Norwell, Mass, USA, 2000.
11. C. Yin, S.-m. Zhong, X. Huang, and Y. Cheng, "Robust stability analysis of fractional-order uncertain singular nonlinear system with external disturbance," *Applied Mathematics and Computation*, vol. 269, pp. 351–362, 2015.
12. I. B. Pyne, "Linear programming on an electronic analogue computer," *Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics*, vol. 75, no. 2, pp. 139–143, 1956.
13. D. W. Tank and J. J. Hopfield, "Simple 'neural' optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit," *IEEE Transactions on Circuits and Systems*, vol. 33, no. 5, pp. 533–541, 1986.
14. S. Zhang and A. G. Constantinides, "Lagrange programming neural networks," *IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing*, vol. 39, no. 7, pp. 441–452, 1992.
15. F. M. Zhang, "Exponential stability of quadratic optimization on neural networks," *Journal of Huazhong University of Science and Technology*, vol. 36, no. 5, pp. 57–59, 2008.
16. K. C. Tan, H. J. Tang, and Z. Yi, "Global exponential stability of discrete-time neural networks for constrained quadratic optimization," *Neurocomputing*, vol. 56, no. 1-4, pp. 399–406, 2004.
17. M. Yashtini and A. Malek, "A discrete-time neural network for solving nonlinear convex problems with hybrid constraints," *Applied Mathematics and Computation*, vol. 195, no. 2, pp. 576–584, 2008.
18. A. Bouzerdoum and T. R. Pattison, "Neural network for quadratic optimization with bound constraints," *IEEE Transactions on Neural Networks*, vol. 4, no. 2, pp. 293–304, 1993.
19. Y. Xia, "A new neural network for solving linear and quadratic programming problems," *IEEE Transactions on Neural Networks*, vol. 7, no. 6, pp. 1544–1547, 1996.
20. Y. Xia and J. Wang, "A dual neural network for kinematic control of redundant robot manipulators," *IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics*, vol. 31, no. 1, pp. 147–154, 2001.
21. Y. Xia and J. Wang, "A bi-projection neural network for solving constrained quadratic optimization problems," *IEEE Transactions on Neural Networks and Learning Systems*, vol. 27, no. 2, pp. 214–224, 2016.
22. Q. Liu and J. Wang, "A projection neural network for constrained quadratic minimax optimization," *IEEE Transactions on Neural Networks and Learning Systems*, vol. 26, no. 11, pp. 2891–2900, 2015.
23. Q. Liu and J. Wang, "A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints," *IEEE Transactions on Neural Networks and Learning Systems*, vol. 24, no. 5, pp. 812–824, 2013.
24. S. Zhou and J. Lam, "Robust stabilization of delayed singular systems with linear fractional parametric uncertainties," *Circuits, Systems, and Signal Processing*, vol. 22, no. 6, pp. 579–588, 2003.
25. J. G. Zhang, *Matrix Perturbation Analysis*, Science Press, Beijing, China, 2001.
26. T. Z. Huang, S. M. Zhong, and Z. L. Li, *Matrix Theory*, Higher Education Press, Beijing, China, 2007.
27. N. Toshiki et al., *Differential Equations with Time Lag: Introduction to Functional Differential Equations*, Makino Shoten, Tokyo, Japan, 2002.
28. X. X. Liao, *Theory Methods and Application of Stability*, Huazhong University of Science and Technology Press, Wuhan, China, 2005.
29. M. J. Pérez-Ilzarbe, "Convergence analysis of a discrete-time recurrent neural network to perform quadratic real optimization with bound constraints," *IEEE Transactions on Neural Networks*, vol. 9, no. 6, pp. 1344–1351, 1998.
30. K. Ding and N.-J. Huang, "A new class of interval projection neural networks for solving interval quadratic program," *Chaos, Solitons and Fractals*, vol. 35, no. 4, pp. 718–725, 2008.
31. Z. Liu, S. Lv, and S. Zhong, "Novel exponential stability conditions for a class of interval projection neural networks," *International Journal of Biomathematics*, vol. 2, no. 3, pp. 287–297, 2009.
32. Y. Liu and J. Hu, "A neural network for *ℓ*1-*ℓ*2 minimization based on scaled gradient projection: application to compressed sensing," *Neurocomputing*, vol. 173, pp. 988–993, 2016.
33. L. Jin and S. Li, "Distributed task allocation of multiple robots: a control perspective," *IEEE Transactions on Systems, Man, and Cybernetics: Systems*, 2016.
34. L. Jin, S. Li, H. M. La, and X. Luo, "Manipulability optimization of redundant manipulators using dynamic neural networks," *IEEE Transactions on Industrial Electronics*, vol. 64, no. 6, pp. 4710–4720, 2017.

#### Copyright

Copyright © 2017 Zixin Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.