Abstract

In this paper, a complex-variable neural network model described by a differential inclusion is proposed for solving complex-variable optimization problems. Based on the nonpenalty idea, the constructed algorithm does not require the design of penalty parameters, which makes it easier to apply in practice. Convergence theorems for the proposed model are established under suitable conditions. Finally, two numerical examples are given to illustrate the correctness and effectiveness of the proposed optimization model.

1. Introduction

In practical applications, many engineering problems, such as machine learning, signal processing, and resource allocation [1–4], can be transformed into optimization problems, and many algorithms have been developed in recent years [5–7]. Since many engineering systems can be described by complex signals, and complex-variable optimization problems widely arise in fields such as medical imaging, adaptive filtering, and remote sensing [8–10], the study of complex-variable optimization has also attracted more and more scholars’ attention.

In this paper, we consider the following complex-variable optimization problem: where is the decision variable, is the convex objective function, , is convex, , is a full row-rank matrix, and . , , and is the feasible region of optimization (1). denotes the optimal solution set of (1). , , and represent the interior, the closure, the boundary, and the complement of the set .

The idea of neurodynamic optimization is to establish a correspondence between the solution of the optimization problem and the state of a neurodynamic system and to obtain the optimal solution by tracking the state output of that system. Because of its parallel operation and fast convergence, the neurodynamic optimization method is superior to traditional optimization algorithms in solving optimization problems. Therefore, various neural networks have been proposed for solving optimization problems [11–18]. In [11], the authors proposed a Hopfield-type neural network for solving linear programming. In [12], Kennedy and Chua designed a gradient-based neural network model to solve constrained nonlinear programming problems with equality constraints. In [13], Liu et al. proposed a recursive neural network model with finite-time convergence to solve linear programming problems. In [14], Forti et al. formulated the optimization problem as a neural network model described by differential inclusion for solving nonsmooth optimization. Furthermore, in [15, 16], Bian et al. extended the problem to one with equality and inequality constraints. In [17, 18], based on the nonpenalty idea, Hosseini proposed neural network models based on differential inclusions for solving nonsmooth optimization problems. However, all these works focused on real-variable optimization problems.

Because the properties of complex-variable functions differ from those of real-variable functions, complex-variable optimization problems cannot be solved directly by traditional optimization methods. Most scholars usually decompose complex-variable optimization problems into real-variable ones. The main drawback of this approach is that it breaks the two-dimensional structure of the data. To overcome this difficulty, more and more scholars have begun to pay attention to neurodynamic optimization methods for complex-variable optimization.

With the development of complex-variable dynamic system theory, including asymptotic stability [19, 20], exponential stability [21, 22], robust stability [23, 24], multistability [25, 26], finite-time stability [27], periodicity [28], synchronization [29], and dissipation [30], complex-variable optimization problems have also been studied gradually. Zhang et al. [31] proposed a complex-variable model to solve convex problems with equality constraints. In [32], the authors designed a three-layer projection neural network model to solve complex-variable convex problems. In [33], a complex-variable projection model was proposed to solve an optimization problem with equality and inequality constraints. In fact, the semidefiniteness condition for global convergence of the neural networks in [31, 32] is not valid for nonsmooth problems, and it is difficult to choose the penalty parameters of the neural network designed in [33].

To overcome the above difficulties, this paper designs a complex-variable neural network model described by a differential inclusion for solving the optimization problem. Compared with the models in the existing literature [31–33], the model proposed in this paper has the following characteristics. Firstly, nonsmooth optimization problems can be handled because the model is described by a differential inclusion. Moreover, the model does not need penalty parameters in practical applications because it is designed based on the nonpenalty idea and does not contain any penalty parameters. Finally, the proposed model has a simpler structure and is more convenient to apply to practical problems.

The remainder of this article is organized as follows. In Section 2, we present some preliminaries and lemmas. In Section 3, a model is proposed for solving complex-variable convex problems, and the state of the model is proved to converge under some conditions. In Section 4, typical examples are given to illustrate the effectiveness of the proposed model. Finally, a brief conclusion is given.

2. Preliminaries

In this paper, the notation denotes the identity matrix, and is the imaginary unit. The superscripts , , and represent the transpose, conjugate transpose, and complex conjugate, respectively. denotes the absolute value of a real number or the modulus of a complex number. and denote the 1-norm and 2-norm, respectively. The real and complex fields are represented by and , respectively. The real and imaginary parts of a complex matrix or vector are denoted by Re and Im.

Definition 1 (see [34]). The set-valued map is upper semicontinuous at a point if, for any , there exists such that .
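In standard notation, writing \(F\colon X\rightrightarrows Y\) for the set-valued map and \(\mathbb{B}\) for the unit ball of \(Y\) (symbols chosen here only for illustration), upper semicontinuity at a point \(x_0\) can be written as
\[
\forall\,\varepsilon>0\ \exists\,\delta>0:\quad \|x-x_0\|<\delta\ \Longrightarrow\ F(x)\subseteq F(x_0)+\varepsilon\,\mathbb{B}.
\]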

Definition 2 (see [35]). Let be Lipschitz near a point in a normed space . The directional derivative of in the direction in the Clarke sense is the following limit: A set is the subdifferential of at a point in the Clarke sense if . Denote , and is regular in the Clarke sense if .
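With standard Clarke-calculus notation (our symbols), the generalized directional derivative and the Clarke subdifferential read
\[
f^{\circ}(x;v)=\limsup_{y\to x,\ t\downarrow 0}\frac{f(y+tv)-f(y)}{t},\qquad
\partial f(x)=\bigl\{\xi : f^{\circ}(x;v)\ge\langle\xi,v\rangle\ \text{for all } v\bigr\},
\]
and \(f\) is regular at \(x\) when the one-sided derivative \(f'(x;v)=\lim_{t\downarrow 0}[f(x+tv)-f(x)]/t\) exists and equals \(f^{\circ}(x;v)\) for every direction \(v\).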

Lemma 1 (see [36]). Suppose that is absolutely continuous on each compact interval and is regular in the Clarke sense; then,
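The conclusion usually stated in this chain-rule lemma (with \(x(\cdot)\) the absolutely continuous arc and \(\partial f\) the Clarke subdifferential, notation assumed here) is that \(f(x(t))\) is differentiable for a.e. \(t\) and
\[
\frac{d}{dt}f\bigl(x(t)\bigr)=\langle\xi,\dot x(t)\rangle\quad\text{for every }\xi\in\partial f\bigl(x(t)\bigr).
\]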

Definition 3 (see [35]). Let be a convex set. For any two distinct points and any , is a strictly convex function on if . For the subsequent calculations, the concept of relaxed CR calculus [37, 38] is introduced. First, the auxiliary function is denoted by in which . Note that and , and we have Suppose that and exist. The following formula is the R-derivative of :
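In the usual conventions of CR (Wirtinger) calculus, writing \(z=x+\mathrm{i}y\) with \(x=\operatorname{Re}(z)\) and \(y=\operatorname{Im}(z)\) (symbols assumed), the R-derivative and the conjugate R-derivative of a function \(f\) take the form
\[
\frac{\partial f}{\partial z}=\frac{1}{2}\Bigl(\frac{\partial f}{\partial x}-\mathrm{i}\,\frac{\partial f}{\partial y}\Bigr),\qquad
\frac{\partial f}{\partial z^{*}}=\frac{1}{2}\Bigl(\frac{\partial f}{\partial x}+\mathrm{i}\,\frac{\partial f}{\partial y}\Bigr),
\]
and strict convexity of \(f\) on a convex set means \(f(\lambda u+(1-\lambda)v)<\lambda f(u)+(1-\lambda)f(v)\) for all distinct \(u,v\) in that set and all \(\lambda\in(0,1)\).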

Definition 4 (see [37, 38]). If and exist, the complex gradient of is expressed as If the complex gradient of the convex function does not exist, the generalized complex gradient or subdifferential of is defined as follows.

Definition 5. The generalized subdifferential of at point is expressed as
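A common convention in this literature (assumed here, not necessarily the paper's exact display) is to take the complex gradient of a real-valued \(f\) as \(\nabla_{z} f = 2\,\partial f/\partial z^{*}\), which points in the direction of steepest ascent, and, when \(f\) is convex but nondifferentiable, to define the generalized subdifferential as the corresponding set of conjugate-Wirtinger subgradients,
\[
\partial f(z)=\bigl\{\xi\in\mathbb{C}^{n}: f(z')\ge f(z)+2\operatorname{Re}\bigl(\xi^{H}(z'-z)\bigr)\ \text{for all } z'\bigr\},
\]
which reduces to \(\{\partial f/\partial z^{*}\}\) at points of differentiability.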

Lemma 2. Suppose that is absolutely continuous on each compact interval and is a convex function; then, for a.e. .
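Assuming that \(\partial f\) collects the conjugate R-derivative (sub)gradients \(\partial f/\partial z^{*}\) as above (no factor of 2 absorbed), a chain rule of the following form is commonly used in this setting, as the complex counterpart of Lemma 1: for a.e. \(t\),
\[
\frac{d}{dt}f\bigl(z(t)\bigr)=2\operatorname{Re}\bigl(\xi(t)^{H}\dot z(t)\bigr)\quad\text{for every }\xi(t)\in\partial f\bigl(z(t)\bigr).
\]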

Proof. From (10), one obtains By Lemma 1, one has Denote . According to Definition 5, . Therefore,

Lemma 3 (see [35]). If is convex, for any and , we have
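For a convex function, this is the standard subgradient inequality; written with the complex convention recalled above (our notation), it reads
\[
f(y)-f(x)\ \ge\ 2\operatorname{Re}\bigl(\xi^{H}(y-x)\bigr)\qquad\text{for all } y \text{ and all }\xi\in\partial f(x).
\]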

Lemma 4 (see [35]). Let and be a finite collection of convex functions; we have where .
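A statement of this type that fits the wording (written in our symbols, so it is a reconstruction rather than the paper's exact display) is the subdifferential rule for a pointwise maximum of convex functions: if \(g(x)=\max_{1\le i\le m}f_i(x)\), then
\[
\partial g(x)=\operatorname{conv}\bigl\{\partial f_i(x): i\in I(x)\bigr\},\qquad I(x)=\{i: f_i(x)=g(x)\}.
\]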

Definition 7 (see [34]). An absolutely continuous function is a solution of on interval if , for almost all , where is a set-valued map.
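In symbols (writing the set-valued map as \(F\), our notation), \(x(\cdot)\) is a solution of the differential inclusion \(\dot x\in F(x)\) on an interval \([t_0,t_1]\) if \(x(\cdot)\) is absolutely continuous and
\[
\dot x(t)\in F\bigl(x(t)\bigr)\quad\text{for a.e. } t\in[t_0,t_1].
\]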
Before moving on, we define the following functions: where

3. Neural Network Model

In this section, the following model is proposed to solve complex-variable problem (1): in which , and is the identity matrix.

Remark 1. In [32], using common projection methods, the authors proposed a three-layer model with neurons. In [33], the authors proposed a single-layer model which requires the selection of penalty parameters. In our paper, based on the nonpenalty idea, we propose a single-layer model with neurons which avoids the selection of penalty parameters. Thus, the proposed model has lower dimension and is easier to apply to practical problems.

Lemma 5 (see [17]). is upper semicontinuous. By Lemma 5, the following theorem can be obtained from the result of Theorem 3 in [17].

Theorem 1. For any initial value , the neural network (19) has a local solution. A vector function is a state of model (19) if is its solution. There exist measurable functions , , , and satisfying .

Definition 8. is the equilibrium point for model (19) if there exist , , , and satisfying .

Theorem 2. is the optimal solution of the complex-variable optimization (1) if and only if it is the equilibrium point of model (19).

Proof. Firstly, let us prove the necessity. Let be the optimal solution of the complex-variable optimization (1); based on the Lagrange multiplier theorem in [32], we have We consider the following cases:
(1) If , one has and Multiplying both sides of (21) by , we obtain Taking and , , and That is, there exist , , and such that In combination with , is the equilibrium point of model (19).
(2) If , we have and . Then, Multiplying both sides of (25) by , one has
From and , we obtain that is the equilibrium point for (19).
Secondly, we prove the sufficiency. Let be the equilibrium point of model (19). There exist , , , and satisfying Because is convex, for and , one gets According to (26), we obtain for any . It is clear that . Hence, one obtains , which means . According to the convexity of , we have . Thus, . From (27), we get Therefore, is the optimal solution for complex-variable problem (1).

4. Convergence Analysis

The convergence and stability of the neural network (19) are analyzed and proved in this section. Firstly, it is proved that the state of the neural network (19) will reach in finite time, even if the initial point is chosen outside the feasible region . Secondly, we prove that the state of the neural network (19) converges to the optimal solution of complex-variable optimization problem (1).

Theorem 3. For the neural network (19), given an arbitrary initial point , there exist and such that the state .

Proof. From Theorem 1, there exist three measurable functions , , and satisfying Now, we will prove by contradiction that will reach in finite time. That is, suppose that there exists such that for . The upper bound of , , will be evaluated for .
Define ; then, where represents the minimum eigenvalue of . is a full row-rank matrix because is a full row-rank matrix, that is, . From , we obtain . Hence, one has Integrating both sides of (31) from 0 to , we obtain Therefore, . Hence, an upper bound for is . So, will reach in finite time.
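The skeleton of this estimate, with \(V\) the function defined above, \(\Omega\) the feasible region, and \(c>0\) the constant obtained from the minimum-eigenvalue bound (all notation assumed), is
\[
\frac{d}{dt}V(t)\le -c \ \ \text{while } x(t)\notin\Omega
\quad\Longrightarrow\quad
0\le V(T)\le V(0)-cT
\quad\Longrightarrow\quad
T\le\frac{V(0)}{c},
\]
so the feasible region is reached no later than \(t=V(0)/c\).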
Next, it will be proved that the state will stay in once it reaches . Otherwise, there exists an interval such that for any . Note that (31) still holds for ; thus, which leads to a contradiction. So, the assumption does not hold.
For the convergence of system (19), we define a function , and one has based on Lemma 2 if is a solution of (19).

Lemma 6. Suppose that and is a solution trajectory of the neural network (19); then, for .

Proof. By Theorem 1, we assume that for any . That is, for any , . There exist measurable functions , , , and such that Because , one has Then, we have , since is full rank. Formulation (35) can be written as Hence, where .
We discuss three cases of the state :
Case I: . This implies that , that is,
Case II: . It follows that . We have
Case III: . We have
Combining the three cases, we draw the conclusion.

Remark 2. According to Lemma 6 and Theorem 1, it is obvious that the solution of model (19) is bounded and global for any initial value .

Lemma 7. Suppose that is a compact set, , and is strictly convex on . There exist and satisfying for any .

Proof. From Theorem 3, there exists such that when . One has where the measurable functions , , and .
By (38), we obtain where Firstly, if , there exists such that . From the convexity of , we obtain By the compactness of and the continuity of , there exists satisfying Secondly, if , there exists such that . Suppose the result is not true; then, for , there exist and such that Without loss of generality, let . By the upper semicontinuity of , there exists such that . Let ; from (48), we obtain By (17), when and , we have Otherwise, ; since is strictly convex on , we have This leads to a contradiction.
Consequently, choose ; when , one obtains

Theorem 4. Suppose that is bounded and the conditions of Lemma 7 hold; then, the state of model (19) converges to .

Proof. We argue by contradiction and suppose that the result is not true. Then, for any , . From Lemma 6, exists. Then, there exists such that for any . Otherwise, there exists for , and it holds that By compactness, there exists a sequence satisfying . Therefore, Taking the limit of (55) as , we obtain Taking the limit of (56) as , we get , that is, . This is a contradiction between the above conclusion and the hypothesis. Thus, we obtain the existence of .
We can assume that there exists a constant such that . Define the set and the set .
Thus, is bounded and . According to Lemma 7, there exist and such that Integrating (58) from to , This means . This contradicts the existence of .

Remark 3. From Theorem 4, the solution of neural network (19) is actually convergent to its equilibrium point.

5. Simulations

In this section, we illustrate the effectiveness of the neural network (19) in solving complex-variable nonlinear nonsmooth convex optimization problems by two examples.

Example 1. Consider the following problem: Here, we choose ten arbitrary initial states for model (19). Figures 1 and 2 show that the state of the neural network (19) converges to the optimal solution , and the minimum is .

Example 2. Consider the following quadratic optimization problem with ten arbitrary initial states: From Figure 3, we can observe the convergent behavior of the state variables with arbitrary initial states for Example 2. The optimal solution of Example 2 is . Simulation results show that the minimum is .
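To give a concrete sense of how such trajectories can be reproduced numerically, the following sketch applies a simple forward-Euler discretization to a feasibility-plus-projected-gradient flow for an illustrative complex quadratic objective with an affine equality constraint. It is not the neural network (19) itself; the data a, A, b, the step size, and the iteration count are hypothetical choices made only for illustration.

import numpy as np

# Minimal sketch (illustrative only): forward-Euler simulation of a
# complex-variable flow that (i) drives the state onto the affine set
# A z = b and (ii) follows the projected Wirtinger gradient of
# f(z) = ||z - a||_2^2 inside that set.

rng = np.random.default_rng(0)
n, m = 4, 2
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # hypothetical target point
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))  # full row rank (a.s.)
b = A @ (rng.standard_normal(n) + 1j * rng.standard_normal(n))

AAH = A @ A.conj().T
P = np.eye(n) - A.conj().T @ np.linalg.solve(AAH, A)  # projector onto the null space of A

def grad_f(z):
    # Conjugate Wirtinger gradient of ||z - a||^2 (up to a constant factor).
    return z - a

z = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # arbitrary initial state
dt, steps = 1e-2, 20000
for _ in range(steps):
    feas = A.conj().T @ np.linalg.solve(AAH, A @ z - b)   # pulls the state toward A z = b
    z = z + dt * (-feas - P @ grad_f(z))                  # Euler step of the flow

print("constraint residual:", np.linalg.norm(A @ z - b))
print("objective value    :", np.linalg.norm(z - a) ** 2)

At an equilibrium of this flow, A z = b and z - a lies in the range of A^H, which is exactly the optimality condition of the illustrative problem; this mirrors, in a smooth special case, the role played by the equilibrium points of model (19).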

6. Conclusion

This paper aims to solve complex-variable nonlinear nonsmooth convex optimization problems. We present a nonpenalty neurodynamic model described by a differential inclusion to obtain the optimal solution of the problem. Different from existing algorithms, the proposed neurodynamic model does not need to choose penalty parameters, which makes it easier to apply in practice. Some theorems are given to guarantee convergence. Finally, numerical examples are shown to illustrate the effectiveness of the proposed neural network. In further studies, we will seek a new neurodynamic model with a simpler structure to solve complex-variable nonlinear nonsmooth convex optimization based on the nonpenalty approach.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant no. 2018D01C039) and the National Natural Science Foundation of China (Grant no. U1703262).