Journal of Applied Mathematics
Volume 2013 (2013), Article ID 695647, 6 pages
http://dx.doi.org/10.1155/2013/695647
Research Article

Linear Simultaneous Equations’ Neural Solution and Its Application to Convex Quadratic Programming with Equality-Constraint

1Center for Educational Technology, Gannan Normal University, Ganzhou 341000, China
2Research Center for Biomedical and Information Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
3School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, Jiangxi 341000, China

Received 7 August 2013; Accepted 18 September 2013

Academic Editor: Yongkun Li

Copyright © 2013 Yuhuan Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A gradient-based neural network (GNN) is improved and presented for solving linear algebraic equations. This GNN model is then applied to the online solution of convex quadratic programming (QP) problems with equality constraints, by means of the Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions. From the electronic architecture of such a GNN, it follows that the performance of the presented model can be enhanced by adopting different activation function arrays and/or design parameters. Computer simulation results substantiate that the GNN obtains the accurate solution of the QP problem in an effective manner.

1. Introduction

A variety of problems in scientific research and practical applications can be formulated as matrix equation solving [1–6]. For example, the stability and perturbation analysis of a control system can be cast as the solution of a Sylvester matrix equation [1, 2], and the stability and/or robustness properties of a control system can be obtained by solving Lyapunov matrix equations [4–6]. Therefore, the real-time solution of matrix equations plays a fundamental role in numerous fields of science, engineering, and business.

Many numerical algorithms have been proposed for solving matrix equations. In general, the number of arithmetic operations required by such numerical algorithms is proportional to the cube of the dimension of the coefficient matrix, that is, $O(n^{3})$ [7]. To meet low-complexity and real-time requirements, numerous neural networks have recently been exploited with hardware implementation in mind [2, 4, 5, 8–13]. For example, Tank and Hopfield solved linear programming problems by using their Hopfield neural network (HNN) [9], which promoted the development of neural networks for optimization and other application problems. Wang [10] proposed a class of recurrent neural network (RNN) models for the online solution of linear simultaneous equations with parallel-processing circuit implementation. In previous work [2, 4, 5, 11], following Zhang's design method, a new type of RNN model was proposed for the real-time solution of linear matrix-vector equations with time-varying coefficient matrices.

In this paper, based on the Wang neural network [10], we present an improved gradient-based neural model for linear simultaneous equations and then apply this model to quadratic programming with equality constraints. Much investigation and analysis of the Wang neural network has been presented in previous work [10, 12, 13]. To make full use of the Wang neural network, we transform the convex quadratic programming problem into a general linear matrix-vector equation. Moreover, inspired by the design method of Zhang neural networks [2, 4, 5, 11, 12], the gradient-based neural network (GNN), that is, the Wang neural network, is improved, developed, and investigated for the online solution of the convex quadratic programming problem through the Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions. The computer simulation results in Section 5 show that better performance of existing neural network models can also be obtained by improving their structures.

2. Neural Model for Linear Simultaneous Equations

In this section, a gradient-based neural network (GNN) model is presented for the linear simultaneous equations
$$A x = b, \quad (1)$$
where the nonsingular coefficient matrix $A \in \mathbb{R}^{n \times n}$ and the coefficient vector $b \in \mathbb{R}^{n}$ are given constants, and $x \in \mathbb{R}^{n}$ is the unknown vector to be solved for so that (1) holds true.

According to the traditional gradient-based algorithm [8, 10, 12], a scalar-valued norm-based energy function $\varepsilon(x) = \|A x(t) - b\|_2^2 / 2$ is first constructed. Then, evolving along the descent direction of this energy function, we obtain the linear GNN model for the solution of linear algebraic equation (1), that is,
$$\dot{x}(t) = -\gamma A^{T} \left(A x(t) - b\right), \quad (2)$$
where $\gamma > 0$ denotes the constant design parameter (or learning rate) used to scale the convergence rate. To improve the convergence performance of the neural network, inspired by Zhang's neural networks [2, 4, 5, 11, 12], the linear model (2) can be improved and reformulated into the following general nonlinear form:
$$\dot{x}(t) = -\Gamma A^{T} \mathcal{F}\left(A x(t) - b\right), \quad (3)$$
where the design parameter $\Gamma \in \mathbb{R}^{n \times n}$ is a positive-definite matrix used to scale the convergence rate of the solution. For simplicity, we can use $\gamma I$ in place of $\Gamma$ with $\gamma > 0$ [4, 11]. In addition, the activation-function array $\mathcal{F}(\cdot): \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ is a vector-valued mapping in which each scalar-valued processing unit $f(\cdot)$ is a monotonically increasing odd function. In general, four basic types of activation functions, namely, linear, power, sigmoid, and power-sigmoid functions, can be used for the construction of neural solvers [4, 11]. The behavior of these four functions is exhibited in Figure 1, which shows that different convergence performance can be achieved by using different activation functions. Furthermore, new activation functions can readily be generated from these four basic ones. For neural model (3), we have the following theorem.
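To make the model concrete, the following Python sketch (not the authors' implementation; the function names, step size, and the particular activation definitions with $p = 3$ and $\xi = 4$ are illustrative assumptions) integrates the GNN dynamics (3) with a forward-Euler rule for the four basic activation types.

```python
import numpy as np

def power_sigmoid(e, p=3, xi=4.0):
    # One common power-sigmoid form: odd power outside [-1, 1],
    # scaled sigmoid inside; the two branches meet at |e| = 1.
    ec = np.clip(e, -1.0, 1.0)
    sig = ((1 + np.exp(-xi)) / (1 - np.exp(-xi))
           * (1 - np.exp(-xi * ec)) / (1 + np.exp(-xi * ec)))
    return np.where(np.abs(e) >= 1.0, np.sign(e) * np.abs(e) ** p, sig)

ACTIVATIONS = {
    "linear": lambda e: e,
    "power": lambda e: np.sign(e) * np.abs(e) ** 3,   # odd power, p = 3
    "sigmoid": lambda e: np.tanh(2.0 * e),            # a monotone odd sigmoid
    "power-sigmoid": power_sigmoid,
}

def gnn_solve(A, b, gamma=10.0, act="power-sigmoid", T=10.0, dt=1e-4, x0=None):
    """Forward-Euler rollout of dx/dt = -gamma * A^T F(A x - b); returns x(T)."""
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float)
    f = ACTIVATIONS[act]
    for _ in range(int(T / dt)):
        x = x + dt * (-gamma * A.T @ f(A @ x - b))
    return x
```

For instance, gnn_solve(A, b, act="linear") reduces to the linear model (2), while the other entries of ACTIVATIONS give the nonlinear variants compared in Section 5.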

Figure 1: Behavior of the four basic activation functions.

Theorem 1. Consider a constant nonsingular coefficient matrix $A \in \mathbb{R}^{n \times n}$ and a coefficient vector $b \in \mathbb{R}^{n}$. If a monotonically increasing odd activation-function array $\mathcal{F}(\cdot)$ is used, the neural state $x(t)$ of neural model (3), starting from any initial state $x(0)$, converges to the unique solution $x^{*} = A^{-1} b$ of linear equation (1).

Proof. Let the solution error be $\tilde{x}(t) := x(t) - x^{*}$ with $x^{*} = A^{-1} b$. For brevity, the time argument $t$ is omitted hereafter. Then, from (3), we have
$$\dot{\tilde{x}} = -\gamma A^{T} \mathcal{F}(A \tilde{x}), \quad (4)$$
where $\Gamma = \gamma I$ for simplicity. Therefore, its entry form can be written as
$$\dot{\tilde{x}}_{i} = -\gamma \sum_{k=1}^{n} a_{ki} f\Big(\sum_{j=1}^{n} a_{kj} \tilde{x}_{j}\Big), \quad i = 1, \ldots, n. \quad (5)$$
Then, to analyze subsystem (5), we can define a Lyapunov candidate function as $v = \tilde{x}^{T} \tilde{x} / 2 = \|\tilde{x}\|_{2}^{2} / 2$. Obviously, $v > 0$ for $\tilde{x} \neq 0$, and $v = 0$ only for $\tilde{x} = 0$. Thus, the Lyapunov candidate function is nonnegative. Furthermore, combining subsystem (5), we obtain the time derivative of $v$ as follows:
$$\dot{v} = \tilde{x}^{T} \dot{\tilde{x}} = -\gamma \tilde{x}^{T} A^{T} \mathcal{F}(A \tilde{x}) = -\gamma \sum_{i=1}^{n} y_{i} f(y_{i}), \quad (6)$$
where $y := A \tilde{x} = A x - b$. Since $f(\cdot)$ is an odd monotonically increasing function, we have $f(-y_{i}) = -f(y_{i})$ and
$$y_{i} f(y_{i}) \begin{cases} > 0 & \text{if } y_{i} \neq 0, \\ = 0 & \text{if } y_{i} = 0. \end{cases} \quad (7)$$
Therefore, $\dot{v} < 0$ if $\tilde{x} \neq 0$ (equivalently, $y = A \tilde{x} \neq 0$, since $A$ is nonsingular), and $\dot{v} = 0$ if and only if $\tilde{x} = 0$. In other words, the time derivative $\dot{v}$ is nonpositive for any $\tilde{x}$, which guarantees that $\dot{v}$ is negative definite. By Lyapunov theory [14, 15], each entry of the solution error in subsystem (5) converges to zero; that is, $\tilde{x}_{i}(t) \rightarrow 0$ for every $i$. This means that the solution error $\tilde{x}(t) \rightarrow 0$ as time $t \rightarrow \infty$. Therefore, the neural state $x(t)$ of neural model (3) converges to the unique solution $x^{*} = A^{-1} b$ of linear equation (1). The proof of the convergence of neural model (3) is thus completed.
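As a quick numerical illustration of this Lyapunov argument (purely illustrative; the test matrix, activation function, and step size are assumptions, not data from the paper), one can check that the candidate $v = \|\tilde{x}\|_{2}^{2}/2$ is nonincreasing along a simulated trajectory of (3):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)    # well-conditioned nonsingular test matrix
b = rng.standard_normal(5)
x_star = np.linalg.solve(A, b)                        # unique solution of (1)

gamma, dt = 10.0, 1e-4
x = rng.standard_normal(5)                            # random initial state x(0)
v_hist = []
for _ in range(100_000):                              # roughly 10 seconds of model time
    v_hist.append(0.5 * np.dot(x - x_star, x - x_star))
    x = x + dt * (-gamma * A.T @ np.tanh(2.0 * (A @ x - b)))   # sigmoid activation

print("v nonincreasing:", bool(np.all(np.diff(v_hist) <= 1e-12)))
print("final error norm:", np.linalg.norm(x - x_star))
```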

3. Problem Formulation on Quadratic Programming

An optimization problem characterized by a quadratic objective function and linear constraints is called a quadratic programming (QP) problem [16–18]. In this paper, we consider the following quadratic programming problem with equality constraints:
$$\min_{x} \; \frac{1}{2} x^{T} Q x + p^{T} x, \quad \text{s.t. } C x = d, \quad (8)$$
where $Q \in \mathbb{R}^{n \times n}$ is a positive-definite Hessian matrix, coefficients $p \in \mathbb{R}^{n}$ and $d \in \mathbb{R}^{m}$ are vectors, and $C \in \mathbb{R}^{m \times n}$ is a full row-rank matrix. They are the known constant coefficients of QP problem (8) to be solved.

Accordingly, $x \in \mathbb{R}^{n}$ is the unknown vector to be solved for in QP problem (8); in particular, if there is no constraint, (8) is also called a quadratic minimum (QM) problem. Mathematically, the solution of (8) can be written as $x^{*} = \arg\min_{C x = d} \left( x^{T} Q x / 2 + p^{T} x \right)$. For convenience of analysis, let $x^{*}$ denote the theoretical solution of QP problem (8).

To solve QP problem (8), let us first consider the equality-constrained quadratic program in the general form
$$\min_{x} \; \frac{1}{2} x^{T} Q x + p^{T} x, \quad \text{s.t. } C x = d. \quad (9)$$
For (9), a Lagrangian function can be defined as
$$L(x, \lambda) = \frac{1}{2} x^{T} Q x + p^{T} x + \lambda^{T} (C x - d), \quad (10)$$
where $\lambda \in \mathbb{R}^{m}$ denotes the Lagrangian multiplier vector associated with the equality constraint $C x = d$. Furthermore, applying the above Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions, we have
$$\frac{\partial L}{\partial x} = Q x + p + C^{T} \lambda = 0, \qquad \frac{\partial L}{\partial \lambda} = C x - d = 0. \quad (11)$$
Then, (11) can be further formulated in the following matrix-vector form:
$$M y = v, \quad (12)$$
where
$$M := \begin{bmatrix} Q & C^{T} \\ C & 0 \end{bmatrix} \in \mathbb{R}^{(n+m) \times (n+m)}, \qquad y := \begin{bmatrix} x \\ \lambda \end{bmatrix} \in \mathbb{R}^{n+m}, \qquad v := \begin{bmatrix} -p \\ d \end{bmatrix} \in \mathbb{R}^{n+m}.$$
Therefore, we can obtain the solution of (8) by transforming QP problem (8) into matrix-vector equation (12). In other words, to solve (8), QP problem (8) is first transformed into the matrix-vector equation (12), which is a linear matrix-vector equation of the same form as the linear simultaneous equations (1); we can then make full use of the neural solvers presented in Section 2 to solve QP problem (8). Moreover, the first $n$ elements of the solution $y$ of (12) constitute the neural solution $x$ of (8), and the Lagrangian multiplier vector $\lambda$ consists of the last $m$ elements.
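The transformation of (8) into (12) can be sketched in a few lines of Python (an illustrative sketch under the notation assumed above, using randomly generated test data rather than the paper's example):

```python
import numpy as np

def kkt_system(Q, p, C, d):
    """Assemble M = [[Q, C^T], [C, 0]] and v = [-p; d] from the QP data."""
    m = C.shape[0]
    M = np.block([[Q, C.T],
                  [C, np.zeros((m, m))]])
    v = np.concatenate([-p, d])
    return M, v

# Small random strictly convex example (assumed data, for illustration only).
rng = np.random.default_rng(1)
n, m = 4, 2
B = rng.standard_normal((n, n))
Q = B @ B.T + n * np.eye(n)          # positive-definite Hessian
p = rng.standard_normal(n)
C = rng.standard_normal((m, n))      # full row rank with probability 1
d = rng.standard_normal(m)

M, v = kkt_system(Q, p, C, d)
y = np.linalg.solve(M, v)            # y = [x*; lambda*]
x_opt, lam = y[:n], y[n:]
print("constraint residual:", np.linalg.norm(C @ x_opt - d))
```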

4. Application to QP Problem Solving

For convenience of analysis and comparison, let $y^{*}$ denote the theoretical solution of (12). Since QP problem (8) can be formulated in the matrix-vector form (12), we can directly utilize the neural solvers (2) and (3) to solve (12). Specifically, neural solver (2) used to solve (12) can be written in the following linear form:
$$\dot{y}(t) = -\gamma M^{T} \left(M y(t) - v\right). \quad (13)$$
If this linear model is activated by a nonlinear function array, we have
$$\dot{y}(t) = -\gamma M^{T} \mathcal{F}\left(M y(t) - v\right). \quad (14)$$
In addition, from model (14) we can draw its architecture for electronic realization, as illustrated in Figure 2. From model (14) and Figure 2, we readily see that different performance of (14) can be achieved by using different activation function arrays and design parameter $\gamma$. In the next section, the four basic functions mentioned above are used to simulate model (14) and achieve different convergence performance. In addition, from Theorem 1 and [4, 12], we have the following theorem on the convergence performance of GNN model (14).
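Correspondingly, GNN model (14) amounts to running the same gradient dynamics on the pair $(M, v)$; a minimal forward-Euler sketch (again an assumption-laden illustration, not the authors' circuit realization) is given below.

```python
import numpy as np

def gnn_qp(M, v, gamma=10.0, T=10.0, dt=1e-4, f=np.tanh):
    """Integrate dy/dt = -gamma * M^T f(M y - v) from y(0) = 0 and return y(T)."""
    y = np.zeros(M.shape[1])
    for _ in range(int(T / dt)):
        y = y + dt * (-gamma * M.T @ f(M @ y - v))
    return y

# Usage (with M, v, n from the previous sketch):
#   y = gnn_qp(M, v)
#   x, lam = y[:n], y[n:]    # neural QP solution and Lagrange multipliers
```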

Figure 2: Block diagram of the GNN model (14).

Theorem 2. Consider the time-invariant strictly convex quadratic programming problem (8). If a monotonically increasing odd activation-function array $\mathcal{F}(\cdot)$ is used, the neural state $y(t)$ of GNN model (14) globally converges to the theoretical solution $y^{*}$ of the linear matrix-vector equation (12). Note that the first $n$ elements of $y^{*}$ correspond to the theoretical solution $x^{*}$ of QP problem (8), and the last $m$ elements are those of the Lagrangian multiplier vector $\lambda^{*}$.

5. Simulation and Verification

In this section, neural model (14) is applied to solve QP problem (8) in real time for verification. As an illustrative example, consider the QP problem (15). Obviously, we can write the equivalent matrix-vector form (12) of QP problem (15) with the coefficients $M$ and $v$ given in (16). For analysis and comparison, we can utilize the MATLAB routine “quadprog” to obtain the theoretical solution $x^{*}$ of QP (15).
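Since quadprog serves here only as a reference solver, an analogous cross-check can be written, for instance, with SciPy (an illustrative substitute for the MATLAB script; Q, p, C, d are the assumed test coefficients from the earlier sketches, not the paper's example (15)):

```python
import numpy as np
from scipy.optimize import minimize

def qp_reference(Q, p, C, d):
    """Reference equality-constrained QP solution, analogous to MATLAB's quadprog."""
    obj = lambda x: 0.5 * x @ Q @ x + p @ x
    jac = lambda x: Q @ x + p
    cons = [{"type": "eq", "fun": lambda x: C @ x - d, "jac": lambda x: C}]
    res = minimize(obj, np.zeros(Q.shape[0]), jac=jac,
                   constraints=cons, method="SLSQP")
    return res.x

# The result should agree with the first n entries of the KKT solve:
#   np.allclose(qp_reference(Q, p, C, d), np.linalg.solve(M, v)[:Q.shape[0]], atol=1e-6)
```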

According to Figure 2, GNN model (14) is applied to the real-time solution of QP problem (15), together with a power-sigmoid function array and design parameter $\gamma$. As shown in Figure 3, when starting from a randomly generated initial state $y(0)$, the neural state $y(t)$ of GNN model (14) fits well with the theoretical solution $y^{*}$ after about 10 seconds. That is, GNN model (14) achieves the exact solution. Note that the first $n$ elements of the neural solution correspond to the theoretical solution $x^{*}$, while the last $m$ elements are the Lagrangian multiplier vector.

Figure 3: Neural state $y(t)$ of QP problem (15) obtained by GNN model (14) using the power-sigmoid function array and design parameter $\gamma$.

In addition, the residual error $\|M y(t) - v\|_{2}$ can be used to track the solution process. The trajectories of this residual error, shown in Figure 4, are generated by GNN model (14) solving QP problem (15) when activated by the different activation function arrays, that is, the linear, power, sigmoid, and power-sigmoid functions, respectively. Obviously, under the same simulation environment (i.e., the same design parameter $\gamma$ and the same GNN model (14)), different convergence performance is achieved when different activation function arrays are used. As shown in Table 1, we use GNNlinear, GNNpower, GNNsig, and GNNps to denote the residual-error performance obtained by GNN model (14) activated by the linear, power, sigmoid, and power-sigmoid function arrays, respectively, and we have the following simulation results.
(i) When the same design parameter $\gamma$ is used, GNNps performs best, while the residual error of GNNpower is larger.
(ii) When the same activation functions are used, the residual-error performance improves as the value of the design parameter $\gamma$ increases; for example, with linear activation functions, the residual error in Table 1 decreases correspondingly as $\gamma$ is increased.
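A sketch of how such residual-error trajectories can be generated for each activation array is given below (illustrative only; it reuses the assumed ACTIVATIONS dictionary from the sketch in Section 2).

```python
import numpy as np

def residual_trajectory(M, v, f, gamma=10.0, T=10.0, dt=1e-4):
    """Record ||M y(t) - v||_2 along a forward-Euler rollout of GNN model (14)."""
    y, res = np.zeros(M.shape[1]), []
    for _ in range(int(T / dt)):
        e = M @ y - v
        res.append(np.linalg.norm(e))
        y = y + dt * (-gamma * M.T @ f(e))
    return np.array(res)

# e.g. one curve per activation array, as in Figure 4:
#   curves = {name: residual_trajectory(M, v, f) for name, f in ACTIVATIONS.items()}
```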

Table 1: Performance of GNN model (14) using different design parameters $\gamma$ and activation functions.
Figure 4: Online solution of QP problem (15) by GNN model (14) with design parameter $\gamma$ and the four basic activation functions.

Among the four basic activation functions, the best convergence performance is achieved by the power-sigmoid functions under the same conditions. Therefore, GNN model (14) converges best when using the power-sigmoid function, whereas with the power function there remain apparent residual errors between the neural state $y(t)$ and the theoretical solution $y^{*}$. We thus generally use the power-sigmoid activation function to achieve superior convergence performance, as shown in Figure 3.

6. Conclusions

On the basis of the Wang neural network, an improved gradient-based neural network has been presented for the real-time solution of the convex quadratic programming problem with equality constraints. Compared with the other three activation functions, the power-sigmoid function is the best choice for superior convergence performance. Computer simulation results further substantiate that the presented GNN model can solve the convex QP problem accurately and efficiently, and that the best convergence performance is obtained by using the power-sigmoid activation function.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant 61363076 and the Programs for Foundations of Jiangxi Province of China (GJJ13649, GJJ12367, GJJ13435, and 20122BAB211019) and partially supported by the Shenzhen Programs (JC201005270257A and JC201104220255A).

References

1. P. Liu, S. Zhang, and Q. Li, “On the positive definite solutions of a nonlinear matrix equation,” Journal of Applied Mathematics, vol. 2013, Article ID 676978, 6 pages, 2013.
2. Y. Zhang, D. Jiang, and J. Wang, “A recurrent neural network for solving Sylvester equation with time-varying coefficients,” IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1053–1063, 2002.
3. Y. Li, M. Reisslein, and C. Chakrabarti, “Energy-efficient video transmission over a wireless link,” IEEE Transactions on Vehicular Technology, vol. 58, no. 3, pp. 1229–1244, 2009.
4. C. Yi, Y. Chen, and X. Lan, “Comparison on neural solvers for the Lyapunov matrix equation with stationary & nonstationary coefficients,” Applied Mathematical Modelling, vol. 37, no. 4, pp. 2495–2502, 2013.
5. F. Ding and T. Chen, “Gradient based iterative algorithms for solving a class of matrix equations,” IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
6. M. A. Ghorbani, O. Kisi, and M. Aalinezhad, “A probe into the chaotic nature of daily streamflow time series by correlation dimension and largest Lyapunov methods,” Applied Mathematical Modelling, vol. 34, no. 12, pp. 4050–4057, 2010.
7. Y. Zhang and W. E. Leithead, “Exploiting Hessian matrix and trust-region algorithm in hyperparameters estimation of Gaussian process,” Applied Mathematics and Computation, vol. 171, no. 2, pp. 1264–1281, 2005.
8. X. Zou, Y. Tang, S. Bu, Z. Luo, and S. Zhong, “Neural-network-based approach for extracting eigenvectors and eigenvalues of real normal matrices and some extension to real matrices,” Journal of Applied Mathematics, vol. 2013, Article ID 597628, 13 pages, 2013.
9. D. W. Tank and J. J. Hopfield, “Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit,” IEEE Transactions on Circuits and Systems, vol. 33, no. 5, pp. 533–541, 1986.
10. J. Wang, “Electronic realisation of recurrent neural network for solving simultaneous linear equations,” Electronics Letters, vol. 28, no. 5, pp. 493–495, 1992.
11. Y. Zhang, C. Yi, and W. Ma, “Simulation and verification of Zhang neural network for online time-varying matrix inversion,” Simulation Modelling Practice and Theory, vol. 17, no. 10, pp. 1603–1617, 2009.
12. C. Yi and Y. Zhang, “Analogue recurrent neural network for linear algebraic equation solving,” Electronics Letters, vol. 44, no. 18, pp. 1078–1080, 2008.
13. K. Chen, “Robustness analysis of Wang neural network for online linear equation solving,” Electronics Letters, vol. 48, no. 22, pp. 1391–1392, 2012.
14. Y. Zhang, “Dual neural networks: design, analysis, and application to redundant robotics,” in Progress in Neurocomputing Research, pp. 41–81, Nova Science Publishers, New York, NY, USA, 2008.
15. Y. Zhang and J. Wang, “Global exponential stability of recurrent neural networks for synthesizing linear feedback control systems via pole assignment,” IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 633–644, 2002.
16. N. Petrot, “Some existence theorems for nonconvex variational inequalities problems,” Abstract and Applied Analysis, vol. 2010, Article ID 472760, 9 pages, 2010.
17. S. Burer and D. Vandenbussche, “A finite branch-and-bound algorithm for nonconvex quadratic programming via semidefinite relaxations,” Mathematical Programming, vol. 113, no. 2, pp. 259–282, 2008.
18. Z. Dostál and R. Kučera, “An optimal algorithm for minimization of quadratic functions with bounded spectrum subject to separable convex inequality and linear equality constraints,” SIAM Journal on Optimization, vol. 20, no. 6, pp. 2913–2938, 2010.