Abstract

A gradient-based neural network (GNN) is improved and presented for solving linear algebraic equations. Such a GNN model is then used for the online solution of convex quadratic programming (QP) with equality constraints, with the aid of the Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions. According to the electronic architecture of such a GNN, the performance of the presented GNN can be enhanced by adopting different activation-function arrays and/or design parameters. Computer simulation results substantiate that such a GNN can obtain the accurate solution of the QP problem in an effective manner.

1. Introduction

A variety of problems in scientific research and practical applications can be formulated as matrix-equation solving [1-6]. For example, the stability and perturbation analysis of a control system can be viewed as the solution of a Sylvester matrix equation [1, 2], while the stability and/or robustness properties of a control system can be obtained by solving Lyapunov matrix equations [4-6]. Therefore, the real-time solution of matrix equations plays a fundamental role in numerous fields of science, engineering, and business.

As for the solution of matrix equations, many numerical algorithms have been proposed. In general, the minimal number of arithmetic operations of such numerical algorithms is proportional to the cube of the dimension of the coefficient matrix, that is, $O(n^3)$ operations [7]. In order to satisfy low-complexity and real-time requirements, numerous novel neural networks have recently been exploited with hardware implementation in mind [2, 4, 5, 8-13]. For example, Tank and Hopfield solved linear programming problems by using their proposed Hopfield neural network (HNN) [9], which promoted the application of neural networks to optimization and other problems. Wang [10] proposed a class of recurrent neural network (RNN) models for the online solution of linear simultaneous equations via parallel-processing circuit implementation. In previous work [2, 4, 5, 11], by Zhang's design method, a new type of RNN model was proposed for the real-time solution of linear matrix-vector equations with time-varying coefficient matrices.

In this paper, based on the Wang neural network [10], we present an improved gradient-based neural model for linear simultaneous equations, and then this neural model is applied to solve quadratic programming with equality constraints. Much investigation and analysis of the Wang neural network has been presented in previous work [10, 12, 13]. To make full use of the Wang neural network, we transform the convex quadratic programming problem into a general linear matrix-vector equation. Moreover, inspired by the design method of Zhang neural networks [2, 4, 5, 11, 12], the gradient-based neural network (GNN), that is, the Wang neural network, is improved, developed, and investigated for the online solution of convex quadratic programming with the aid of the Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions. In Section 5, computer simulation results show that, by improving their structures, better performance can also be obtained for existing neural network models.

2. Neural Model for Linear Simultaneous Equations

In this section, a gradient-based neural network (GNN) model is presented for the linear simultaneous equations

$Ax = b$,  (1)

where the nonsingular coefficient matrix $A \in \mathbb{R}^{n \times n}$ and the coefficient vector $b \in \mathbb{R}^{n}$ are given as constants, and $x \in \mathbb{R}^{n}$ is the unknown vector to be solved for so as to make (1) hold true.

According to the traditional gradient-based algorithm [8, 10, 12], a scalar-valued norm-based energy function $\varepsilon(x) = \|Ax - b\|_2^2 / 2$ is first constructed; then, evolving along the descent direction of such an energy function, we obtain the linear GNN model for the solution of linear algebraic equation (1), that is,

$\dot{x}(t) = -\gamma \, \partial \varepsilon / \partial x = -\gamma A^{T} (Ax(t) - b)$,  (2)

where $\gamma > 0$ denotes the constant design parameter (or learning rate) used to scale the convergence rate. To improve the convergence performance of neural networks, inspired by Zhang's neural networks [2, 4, 5, 11, 12], the linear model (2) can be improved and reformulated into the following general nonlinear form:

$\dot{x}(t) = -\Gamma A^{T} \mathcal{F}(Ax(t) - b)$,  (3)

where design parameter $\Gamma \in \mathbb{R}^{n \times n}$ is a positive-definite matrix used to scale the convergence rate of the solution. For simplicity, we can use $\Gamma = \gamma I$ with $\gamma > 0$ [4, 11]. In addition, the activation-function array $\mathcal{F}(\cdot): \mathbb{R}^{n} \to \mathbb{R}^{n}$ is a vector-valued mapping in which each scalar-valued processing unit $f(\cdot)$ is a monotonically increasing odd function. In general, four basic types of activation functions (linear, power, sigmoid, and power-sigmoid functions) can be used for the construction of neural solvers [4, 11]. The behavior of these four functions is exhibited in Figure 1, and different convergence performance can be achieved by using different activation functions. Furthermore, new activation functions can also be generated readily based on the above four activation functions. As for neural model (3), we have the following theorem.

Theorem 1. Consider a constant nonsingular coefficient matrix $A \in \mathbb{R}^{n \times n}$ and a coefficient vector $b \in \mathbb{R}^{n}$. If a monotonically increasing odd activation-function array $\mathcal{F}(\cdot)$ is used, then the neural state $x(t)$ of neural model (3), starting from any initial state $x(0)$, converges to the unique solution $x^{*} = A^{-1} b$ of linear equation (1).

Proof. Let the solution error be $e(t) := A x(t) - b$. For brevity, hereafter the argument $t$ is omitted. Then, from (3), we have

$\dot{e} = A \dot{x} = -B \mathcal{F}(e)$,  (4)

where $B := \gamma A A^{T}$ for simplicity. Therefore, its entry form can be written as

$\dot{e}_i = -\sum_{j=1}^{n} b_{ij} f(e_j)$, with $i = 1, 2, \ldots, n$.  (5)

Then, to analyze subsystem (5), we can define a Lyapunov candidate function as

$v(e) := \sum_{i=1}^{n} \int_{0}^{e_i} f(s)\,\mathrm{d}s$.  (6)

Since $f(\cdot)$ is an odd monotonically increasing function, $f(s) > 0$ for $s > 0$ and $f(s) < 0$ for $s < 0$; obviously, $v(e) > 0$ for $e \neq 0$, and $v(e) = 0$ only for $e = 0$. Thus, the Lyapunov candidate function $v(e)$ is a nonnegative function. Furthermore, combining subsystem (5), we can obtain the time derivative of $v(e)$ as follows:

$\dot{v} = \sum_{i=1}^{n} f(e_i)\,\dot{e}_i = -\mathcal{F}^{T}(e)\, B\, \mathcal{F}(e) = -\gamma \|A^{T} \mathcal{F}(e)\|_2^{2} \leq 0$,  (7)

where $\|\cdot\|_2$ denotes the two-norm of a vector. Since $A$ is nonsingular and $f(\cdot)$ is odd and monotonically increasing, $A^{T} \mathcal{F}(e) = 0$ holds if and only if $e = 0$. Therefore, $\dot{v} < 0$ if $e \neq 0$, and $\dot{v} = 0$ if and only if $e = 0$. In other words, the time derivative $\dot{v}$ is negative definite. By Lyapunov theory [14, 15], the solution error of subsystem (5) converges to zero; that is, $e(t) = A x(t) - b \to 0$ as time $t \to \infty$. Since $A$ is nonsingular, it follows that $x(t) \to x^{*} = A^{-1} b$ as $t \to \infty$. Therefore, the neural state $x(t)$ of neural model (3) converges to the unique solution $x^{*}$ of linear equation (1). The proof of the convergence of neural model (3) is thus completed.
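As a complementary numerical illustration of Theorem 1, the following minimal sketch integrates neural model (3) for a small, arbitrarily chosen linear system. The power-sigmoid construction and all parameter values (p, xi, gamma, T) below are assumptions made for illustration only, not the paper's exact settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

def power_sigmoid(e, p=3, xi=4.0):
    # One common power-sigmoid construction (an assumption): power function
    # outside [-1, 1], scaled bipolar sigmoid (written via tanh) inside.
    sigmoid = np.tanh(xi * e / 2.0) / np.tanh(xi / 2.0)
    return np.where(np.abs(e) >= 1.0, e ** p, sigmoid)

def gnn_solve(A, b, gamma=10.0, T=10.0, activation=power_sigmoid, x0=None):
    # Integrate GNN model (3): dx/dt = -gamma * A^T * F(A x - b).
    x0 = np.zeros(A.shape[1]) if x0 is None else x0
    rhs = lambda t, x: -gamma * (A.T @ activation(A @ x - b))
    sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# Illustrative (not the paper's) nonsingular system A x = b.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
print("GNN state x(T):", gnn_solve(A, b))
print("exact A^{-1} b:", np.linalg.solve(A, b))
```

Under these assumptions, the final neural state matches $A^{-1}b$ to within the integration tolerance, consistent with the global convergence stated in Theorem 1.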

3. Problem Formulation on Quadratic Programming

An optimization problem characterized by a quadratic objective function and linear constraints is called a quadratic programming (QP) problem [16-18]. In this paper, we consider the following quadratic programming problem with equality constraints:

minimize $x^{T} W x / 2 + q^{T} x$, subject to $J x = d$,  (8)

where $W \in \mathbb{R}^{n \times n}$ is a positive-definite Hessian matrix, coefficients $q \in \mathbb{R}^{n}$ and $d \in \mathbb{R}^{m}$ are vectors, and $J \in \mathbb{R}^{m \times n}$ is a full row-rank matrix. They are the known constant coefficients of the QP problem (8) to be solved.

Therefore, $x \in \mathbb{R}^{n}$ is the unknown vector to be solved for so as to make QP problem (8) hold true; in particular, if there is no constraint, (8) is also called a quadratic minimum (QM) problem. Mathematically, (8) can be written as $\min_{x} \{x^{T} W x / 2 + q^{T} x \mid J x = d\}$. For analysis convenience, let $x^{*}$ denote the theoretical solution of QP problem (8).

To solve QP problem (8), let us first consider the following general form of quadratic programming:

minimize $f(x)$, subject to $h(x) = 0$.  (9)

As for (9), a Lagrangian function can be defined as

$L(x, \lambda) = f(x) + \lambda^{T} h(x)$,  (10)

where $\lambda \in \mathbb{R}^{m}$ denotes the Lagrangian-multiplier vector and $h(x) = J x - d$ is the equality constraint. Furthermore, by following the previously mentioned Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions applied to QP problem (8), we have

$\partial L / \partial x = W x + q + J^{T} \lambda = 0$ and $\partial L / \partial \lambda = J x - d = 0$.  (11)

Then, (11) can be further formulated as the following matrix-vector form:

$M y = p$,  (12)

where $M := [W, J^{T}; J, 0] \in \mathbb{R}^{(n+m) \times (n+m)}$, $y := [x^{T}, \lambda^{T}]^{T} \in \mathbb{R}^{n+m}$, and $p := [-q^{T}, d^{T}]^{T} \in \mathbb{R}^{n+m}$. Therefore, we can obtain the solution of (8) by transforming QP problem (8) into matrix-vector equation (12). In other words, to get the solution of (8), QP problem (8) is first transformed into the matrix-vector equation (12), which is a linear matrix-vector equation similar to the linear simultaneous equations (1), and then we can make full use of the neural solvers presented in Section 2 to solve QP problem (8). Moreover, the first $n$ elements of the solution $y$ of (12) compose the neural solution $x$ of (8), and the Lagrangian vector $\lambda$ consists of the last $m$ elements.
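As a concrete illustration of transformation (12), the following sketch assembles $M$ and $p$ from the QP coefficients and solves $M y = p$ directly. The coefficient values are hypothetical placeholders introduced here for illustration; they are not the paper's numerical example.

```python
import numpy as np

def build_kkt_system(W, q, J, d):
    """Assemble the augmented matrix-vector equation (12), M y = p, from the
    QP coefficients of (8): minimize x^T W x / 2 + q^T x subject to J x = d."""
    n, m = W.shape[0], J.shape[0]
    M = np.block([[W, J.T],
                  [J, np.zeros((m, m))]])
    p = np.concatenate([-q, d])
    return M, p

# Hypothetical coefficients (for illustration only).
W = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive-definite Hessian
q = np.array([-1.0, -2.0])
J = np.array([[1.0, 1.0]])               # full row-rank constraint matrix
d = np.array([1.0])

M, p = build_kkt_system(W, q, J, d)
y = np.linalg.solve(M, p)                # y = [x; lambda]
x_star, lam = y[:W.shape[0]], y[W.shape[0]:]
print("x* =", x_star, " lambda =", lam)
```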

4. Application to QP Problem Solving

For analysis and comparison convenience, let $y^{*}$ denote the theoretical solution of (12). Since QP problem (8) can be formulated in the matrix-vector form (12), we can directly utilize the neural solvers (2) and (3) to solve (12). Therefore, neural solver (2) used to solve (12) can be written as the following linear form:

$\dot{y}(t) = -\gamma M^{T} (M y(t) - p)$.  (13)

If such a linear model is activated by a nonlinear function array, we have

$\dot{y}(t) = -\gamma M^{T} \mathcal{F}(M y(t) - p)$.  (14)

In addition, according to model (14), we can also draw its architecture for electronic realization, as illustrated in Figure 2. From model (14) and Figure 2, we readily know that different performance of (14) can be achieved by using different activation-function arrays and design parameter $\gamma$. In the next section, the previously mentioned four basic functions are used to simulate model (14) so as to achieve different convergence performance. In addition, from Theorem 1 and [4, 12], we have the following theorem on the convergence performance of GNN model (14).

Theorem 2. Consider the time-invariant strictly convex quadratic programming problem (8). If a monotonically increasing odd activation-function array $\mathcal{F}(\cdot)$ is used, then the neural state $y(t)$ of GNN model (14) globally converges to the theoretical solution $y^{*}$ of the linear matrix-vector form (12). Note that the first $n$ elements of $y^{*}$ correspond to the theoretical solution $x^{*}$ of QP problem (8), and the last $m$ elements are those of the Lagrangian vector $\lambda^{*}$.
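To make the preceding discussion concrete, the sketch below integrates GNN model (13), that is, model (14) with the linear activation array, on the augmented system $M y = p$ and compares its steady state with the direct solution. The coefficients are again hypothetical placeholders; replacing the residual $M y - p$ by $\mathcal{F}(M y - p)$ gives the nonlinearly activated model (14).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical QP coefficients (illustration only, not the paper's example).
W = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
J = np.array([[1.0, 1.0]])
d = np.array([1.0])
n, m = W.shape[0], J.shape[0]

# Augmented system (12): M y = p, with y = [x; lambda].
M = np.block([[W, J.T], [J, np.zeros((m, m))]])
p = np.concatenate([-q, d])

# GNN model (13): dy/dt = -gamma * M^T (M y - p), integrated from y(0) = 0.
gamma = 10.0
rhs = lambda t, y: -gamma * (M.T @ (M @ y - p))
sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(n + m), rtol=1e-8, atol=1e-10)
y_T = sol.y[:, -1]

print("neural solution x(T)      :", y_T[:n])
print("Lagrangian multipliers    :", y_T[n:])
print("direct solution y* of (12):", np.linalg.solve(M, p))
```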

5. Simulation and Verification

In this section, neural model (14) is applied to solve QP problem (8) in real time for verification. As an illustrative example, consider a specific QP problem of the form (8), hereafter referred to as QP problem (15), whose coefficients $W$, $q$, $J$, and $d$ are given numerically. Obviously, we can write its equivalent matrix-vector form (12) with the corresponding coefficients $M$, $y$, and $p$. For analysis and comparison, we can utilize the MATLAB routine "quadprog" to obtain the theoretical solution $x^{*}$ of QP problem (15).
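Outside MATLAB, a comparable reference solution can be obtained with a general-purpose solver such as SciPy's SLSQP routine, which here plays the role of "quadprog" for cross-checking the GNN output. The coefficients below are hypothetical placeholders standing in for those of QP problem (15).

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder coefficients (substitute the actual values of QP problem (15)).
W = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
J = np.array([[1.0, 1.0]])
d = np.array([1.0])

# Equality-constrained QP solved by a general-purpose NLP routine (SLSQP).
res = minimize(fun=lambda x: 0.5 * x @ W @ x + q @ x,
               x0=np.zeros(W.shape[0]),
               jac=lambda x: W @ x + q,
               constraints=[{"type": "eq", "fun": lambda x: J @ x - d}],
               method="SLSQP")
print("x* (SLSQP) =", res.x)
```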

According to Figure 2, GNN model (14) is applied to the real-time solution of QP problem (15), together with the power-sigmoid activation-function array and design parameter $\gamma$. As shown in Figure 3, when starting from a randomly generated initial state $y(0)$, the neural state $y(t)$ of GNN model (14) fits well with the theoretical solution $y^{*}$ after about 10 seconds. That is, GNN model (14) can achieve the exact solution. Note that the first $n$ elements of the neural solution $y(t)$ correspond to the theoretical solution $x^{*}$, while the last $m$ elements constitute the Lagrangian-multiplier vector.

In addition, the residual error $\|M y(t) - p\|_2$ can be used to track the solution process. The trajectories of the residual error are shown in Figure 4, which is generated by GNN model (14) solving QP problem (15) when activated by different activation-function arrays, that is, linear, power, sigmoid, and power-sigmoid functions, respectively. Obviously, under the same simulation environment (i.e., the same design parameter $\gamma$ and the same GNN model (14)), different convergence performance is achieved when different activation-function arrays are used. As shown in Table 1, we use GNNlin, GNNpower, GNNsig, and GNNps to denote the residual-error performance obtained by GNN model (14) activated by the linear, power, sigmoid, and power-sigmoid function arrays, respectively, and we have the following simulative results (see also the sketch after this list).
(i) When the same design parameter $\gamma$ is used, the performance of GNNps is the best, while the residual error of GNNpower is the largest, as listed in Table 1.
(ii) When the same activation functions are used, the residual-error performance becomes better as the value of design parameter $\gamma$ increases. For example, when linear functions are used, the residual error decreases correspondingly as $\gamma$ is increased, as listed in Table 1.
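The comparison in Table 1 and Figure 4 can be reproduced in spirit by integrating GNN model (14) under each of the four activation arrays and recording the final residual error. The activation constructions, their parameters, and the system coefficients below are assumptions made for illustration, not the paper's exact data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Four basic activation arrays (common constructions; parameters are assumptions).
def f_linear(e):          return e
def f_power(e, p=3):      return e ** p
def f_sigmoid(e, xi=4.0): return np.tanh(xi * e / 2.0) / np.tanh(xi / 2.0)
def f_powersig(e, p=3, xi=4.0):
    return np.where(np.abs(e) >= 1.0, e ** p, f_sigmoid(e, xi))

# Placeholder augmented system M y = p (substitute the coefficients of QP (15)).
M = np.array([[4.0, 1.0, 1.0], [1.0, 3.0, 1.0], [1.0, 1.0, 0.0]])
p = np.array([1.0, 2.0, 1.0])
gamma, T = 10.0, 10.0

for name, f in [("linear", f_linear), ("power", f_power),
                ("sigmoid", f_sigmoid), ("power-sigmoid", f_powersig)]:
    # GNN model (14): dy/dt = -gamma * M^T F(M y - p), started from y(0) = 0.
    rhs = lambda t, y, f=f: -gamma * (M.T @ f(M @ y - p))
    sol = solve_ivp(rhs, (0.0, T), np.zeros(M.shape[0]), rtol=1e-8, atol=1e-10)
    residual = np.linalg.norm(M @ sol.y[:, -1] - p)
    print(f"{name:15s} residual ||My - p||_2 = {residual:.2e}")
```

Under these assumptions, the power-activated model retains a visibly larger residual at the end of the integration interval, while the power-sigmoid-activated model reaches the smallest residual, in line with the observations above.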

Among the four basic activation functions, the best convergence performance is achieved by using the power-sigmoid functions under the same conditions. Therefore, GNN model (14) has the best convergence performance when using the power-sigmoid function, whereas, when using the power function, there exist apparent residual errors between the neural state $y(t)$ and the theoretical solution $y^{*}$. We thus generally use the power-sigmoid activation function to achieve superior convergence performance, as shown in Figure 3.

6. Conclusions

On the basis of the Wang neural network, an improved gradient-based neural network has been presented for the real-time solution of the convex quadratic programming problem. Compared with the other three activation functions, the power-sigmoid function is the best choice for superior convergence performance. Computer simulation results further substantiate that the presented GNN model can solve the convex QP problem accurately and efficiently, and the best convergence performance is obtained by using the power-sigmoid activation function.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant 61363076 and the Programs for Foundations of Jiangxi Province of China (GJJ13649, GJJ12367, GJJ13435, and 20122BAB211019) and partially supported by the Shenzhen Programs (JC201005270257A and JC201104220255A).