Special Issue: Recent Advances in Function Spaces and Its Applications in Fractional Differential Equations (2021)
Research Article | Open Access
Yumin Dong, Xiang Li, Wei Liao, Dong Hou, "Stability Analysis Based on Caputo-Type Fractional-Order Quantum Neural Networks", Journal of Function Spaces, vol. 2021, Article ID 3820092, 11 pages, 2021. https://doi.org/10.1155/2021/3820092
Stability Analysis Based on Caputo-Type Fractional-Order Quantum Neural Networks
In this paper, a quantum neural network with a multilayer activation function is proposed, constructed by superposing multilayer Sigmoid functions and adjusting the quantum intervals with a learning algorithm. On this basis, the quasiuniform stability of fractional-order quantum neural networks with mixed delays is studied. For two cases of the fractional order, sufficient conditions for the quasiuniform stability of the networks are derived using linear-matrix-inequality analysis techniques, and their sufficiency is proved. Finally, the feasibility of the conclusions is verified by experiments.
Fractional calculus extends integer-order calculus to arbitrary order. It offers strong advantages and broad application prospects in physics, chemistry, biology, economics, control, signal processing, and image processing; it has attracted extensive attention from scholars at home and abroad and has become a current research hotspot. In recent years, with the continuous development of fractional differential equations, many researchers have turned their attention to fractional-order theory, and combining fractional-order calculus with neural networks brings the advantages of the fractional order into full play. For example, the works [1–4] combined fractional-order calculus with neural networks to good effect. Among them, Boroomand and Menhaj presented a fractional-order Hopfield neural network model and studied its stability through a quasienergy function. The works [5–7] studied different fractional-order neural networks and explored the influence of different factors on them. The synchronization problem of neural networks is surveyed in [8–12]. Dominik et al. considered discrete fractional-order artificial neural networks. Chaos and chaotic synchronization of fractional-order neural networks have also been investigated. The works [15, 16] explained and analyzed the dynamics of fractional-order neural networks, and fractional-order neural networks have been applied in different fields [17–21]. In recent years, the stability of fractional-order neural network systems has become a research hotspot [22–31]. In one reference, the stability and passivity of a memristor-based fractional-order competitive neural network (MBFOCNN) are analyzed using Caputo's fractional derivative, and the effectiveness of the proposed results is verified using analysis techniques and other computational tools.
In another reference, the robust dissipativity problem of a Hopfield-type complex-valued neural network (HTCVNN) model with time-varying delay and linear fractional uncertainty is studied, and numerical examples are designed to verify the results. In [24, 25], the global asymptotic stability of fractional-order quaternion-valued bidirectional associative memory neural networks (FQVBAMNNs) and fractional-order quaternion-valued memristive neural networks (FOQVMNNs) is studied, and the effectiveness of the results is proved by related methods. In [26, 27], the stability of fractional-order continuous-time quaternion-valued leaky integrator echo state neural networks with multiple time-varying delays is studied, and the feasibility of the method is verified by numerical examples. In a further reference, the uniform stability of a fractional-order leaky integrator echo state neural network (FOESN) with multiple delays is studied, and simulation results show the effectiveness of the method. The works [32, 33] studied time-delay behavior of Caputo fractional-order neural networks. However, there are few studies on the behavior of fractional-order quantum neural networks with mixed delays. In this paper, a quantum neural network model with a multilayer activation function is presented, and the quasiuniform stability of fractional-order quantum neural networks with mixed delays is studied; the results are proved analytically and illustrated by a numerical example.
This article is organized as follows. In the second section, we give the structure of the multilayer activation function of the quantum neural network and, on that basis, propose a fractional-order quantum neural network model with mixed delays. In the third section, we prove, using the corresponding definitions and lemmas, that the fractional-order quantum neural network system with mixed time delays is quasiuniformly stable. In the fourth section, a concrete example is given to verify the validity and applicability of the given results.
2. Model Composition and Preparation
2.1. Quantum Neural Network
The quantum neural network belongs to the feed-forward class of neural networks [34, 35]. Compared with a traditional feed-forward neural network, the neurons in the hidden layer of a quantum neural network draw on the idea of quantum-state superposition from quantum theory and form a linear superposition of several Sigmoid functions, called the multilayer activation function. A traditional activation function can represent only two states and magnitude levels; after quantization, a single hidden-layer neuron can represent more states and magnitude levels.
Each Sigmoid function superimposed has a different quantum interval. By adjusting the quantum interval, the data of different classes can be mapped to different orders of magnitude or steps, so that the classification can have more degrees of freedom. The quantum interval of the quantum neural network can be obtained by training. The uncertainty in the sampled data can be obtained and quantified by a quantum neural network with an appropriate learning algorithm.
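The superposition idea above can be sketched as follows: a minimal, illustrative implementation assuming a common slope `beta` and one shift per quantum interval (the names `thetas` and `beta` are stand-ins, since the paper's own symbols are elided in this copy):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multilevel_activation(x, thetas, beta=5.0):
    """Superposition of shifted sigmoids: one 'step' per quantum interval.

    x      : scalar or array of net inputs
    thetas : quantum intervals (the shift of each sigmoid level)
    beta   : common slope factor
    """
    thetas = np.asarray(thetas, dtype=float)
    return sum(sigmoid(beta * (x - t)) for t in thetas) / len(thetas)
```

With `thetas = [-1, 0, 1]` the output forms a three-step staircase between 0 and 1, so inputs falling between different intervals are mapped to different magnitude levels.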
Figure 1 shows a traditional three-layer feedforward neural network. Assume that the input layer I has nodes, the output layer O has nodes, and the number of nodes in the hidden layer H is . Adjacent layer nodes are fully interconnected, and nodes of the same layer are not connected. The node output function in the hidden layer is
The output function of the node in the output layer is
In the formula, adopts Sigmoid function, and is the connection weight vector between each neuron in the input layer and each neuron in the hidden layer. is the connection weight vector between each neuron in the hidden layer and each neuron in the output layer; is the threshold of the hidden layer, and is the threshold of the unit of the output layer.
Quantum neural networks with multiple excitation functions:
In the formula: , is the network weight vector; is the network input vector; is the slope; is the input excitation of the quantum neuron; is the quantum interval .
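A forward pass through such a three-layer network can be sketched as below; the weight and threshold names (`W_ih`, `b_h`, `W_ho`, `b_o`) and the shared list of quantum intervals are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qnn_forward(x, W_ih, b_h, W_ho, b_o, thetas, beta=5.0):
    """Forward pass of a three-layer network whose hidden neurons use a
    superposition of shifted sigmoids (one shift per quantum interval);
    the output layer uses an ordinary sigmoid."""
    net_h = W_ih @ x - b_h                                   # hidden-layer net input
    # each hidden unit: average of sigmoids shifted by the quantum intervals
    h = np.mean([sigmoid(beta * (net_h - t)) for t in thetas], axis=0)
    return sigmoid(W_ho @ h - b_o)                           # output-layer activation
```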
The learning of a quantum neural network can be divided into two steps: (1) adjusting the weights so that the input data are mapped to different class spaces; (2) adjusting the quantum intervals of the quantum neurons in the hidden layer to reflect the uncertainty of the data. The BP algorithm is used to adjust the weights. Once the network weights are obtained, the quantum intervals can be adjusted by an appropriate algorithm. The idea of the algorithm is to minimize, over sample data of the same class, the variation of the hidden-layer neuron outputs of the quantum neural network.
Assume that for class , the output of the th hidden layer neuron changes as: in the formula: , represents the output of the neuron in the hidden layer when the network input vector is ;
in the formula represents the cardinality of class . It can be seen that is a function of the quantum interval . By taking the derivative of on both sides of Equation (4) and finding the minimum value of , the variation formula of (i.e., layer S of the th neuron in the hidden layer) can be obtained.
In formula (6), is the learning rate; is the number of nodes in the output layer, namely, the total number of classes; is the number of quantum interval layers; : represents all samples belonging to the class.
In the formula: represents the output of the th quantum layer of the th hidden layer neuron when the input vector is .
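The interval-learning idea (minimize the within-class spread of the hidden outputs) can be sketched as a plain gradient step. This is an assumption-laden stand-in for the paper's elided update formula (6), not a reproduction of it:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_intervals(thetas, class_nets, beta=5.0, lr=0.1):
    """One gradient step shrinking the within-class variance of each sigmoid
    level's output. class_nets: list of arrays, each holding the net inputs
    of one class at this hidden neuron (names are illustrative)."""
    thetas = np.array(thetas, dtype=float)
    for r in range(len(thetas)):
        grad = 0.0
        for nets in class_nets:
            o = sigmoid(beta * (nets - thetas[r]))
            v = o - o.mean()                     # deviation from the class mean output
            # dV/d theta, using d o / d theta = -beta * o * (1 - o)
            grad += 2.0 * np.sum(v * (-beta) * o * (1.0 - o)) / len(nets)
        thetas[r] -= lr * grad                   # descend the within-class variance
    return thetas
```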
2.2. Caputo-Type Fractional Derivative Definition
In this definition, $f(t)$ is a continuous function on $[t_0, +\infty)$; for any $t > t_0$, the Caputo-type derivative of order $\alpha$ of $f(t)$, with $n-1 < \alpha < n$, $n \in \mathbb{Z}^{+}$, is defined as
$${}^{C}_{t_0}D^{\alpha}_{t} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_{t_0}^{t} \frac{f^{(n)}(s)}{(t-s)^{\alpha-n+1}}\,ds.$$
The following properties can be drawn:
(1) When ,
(2) When $f(t)$ is a constant function, ${}^{C}D^{\alpha} f(t) = 0$
(3) , ; especially, when , when
(4) If is a one-dimensional function, .
(5) If $\lambda$ and $\mu$ are two constants, then ${}^{C}D^{\alpha}\bigl(\lambda f(t) + \mu g(t)\bigr) = \lambda\,{}^{C}D^{\alpha} f(t) + \mu\,{}^{C}D^{\alpha} g(t)$
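Numerically, the Caputo derivative of order $0 < \alpha < 1$ is commonly approximated by the L1 scheme on a uniform grid; the following sketch is a standard textbook discretization, not taken from the paper:

```python
import math
import numpy as np

def caputo_l1(f_vals, dt, alpha):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1
    at the last grid point, given samples f_vals on a uniform grid of step dt."""
    n = len(f_vals) - 1
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    # L1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
    b = [(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)]
    return c * sum(b[k] * (f_vals[n - k] - f_vals[n - k - 1]) for k in range(n))
```

The scheme is exact for linear $f$; for $f(t) = t$ it reproduces $t^{1-\alpha}/\Gamma(2-\alpha)$, and for a constant it returns zero, matching property (2) above.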
2.3. Fractional-Order Quantum Neural Network Model
Suppose the following two assumptions hold: (1) For a vector and a matrix , we define the Euclidean norm of the vector as . The matrix norm of the matrix is defined as . In this paper, we set , and . (2) The activation functions , , and of the fractional-order quantum neural network with mixed delays all satisfy the Lipschitz condition; that is, for any , there exists a corresponding real number such that
The fractional-order quantum neural network model with mixed time delays is shown below: It is converted to the error system:
Among them, , represents the number of neurons in a fractional quantum neural network with mixed delay, and is the state vector of the neuron at time .
and are the activation functions of the fractional-order quantum neural network; , , , and are constant matrices; represents the rate at which a neuron of the fractional-order quantum neural network returns to its isolated resting state when it is disconnected and without an external additional voltage difference; , , and represent the connection weights between the th neuron and the th neuron; and represent the transmission delays of the th neuron along the axon; and represents the external input and bias of the neuron.
Set the initial conditions of the system, usually assuming , and the norm on is defined as .
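As a hedged illustration of how such a mixed-delay fractional system can be integrated, the following sketch applies an explicit L1 time-stepping scheme to a one-neuron stand-in for system (11), $\,{}^{C}D^{\alpha}x(t) = -a\,x(t) + b\tanh(x(t-\tau)) + I$; all parameter values (`a`, `b`, `tau`, `alpha`) are illustrative assumptions, not the paper's experimental settings:

```python
import math
import numpy as np

def simulate(alpha=0.7, a=1.0, b=0.5, tau=0.1, I_ext=0.0, x0=0.5, dt=0.01, T=5.0):
    """Explicit L1 scheme for the scalar delayed fractional equation
    C_D^alpha x = -a*x(t) + b*tanh(x(t - tau)) + I_ext, with constant
    history x(t) = x0 for t <= 0."""
    n_steps = int(T / dt)
    d = int(round(tau / dt))                 # delay measured in grid steps
    x = np.full(n_steps + 1, x0, dtype=float)
    w = math.gamma(2.0 - alpha) * dt ** alpha
    for n in range(1, n_steps + 1):
        xd = x[n - 1 - d] if n - 1 - d >= 0 else x0
        g = -a * x[n - 1] + b * math.tanh(xd) + I_ext
        # fractional memory term of the L1 discretization
        mem = sum(((k + 1) ** (1 - alpha) - k ** (1 - alpha)) * (x[n - k] - x[n - k - 1])
                  for k in range(1, n))
        x[n] = x[n - 1] - mem + w * g
    return x
```

With `a = 1.0` and `b = 0.5` the only equilibrium is the origin, and the trajectory decays toward it, mirroring the convergence behavior reported in the figures below.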
3. Main Result
Lemma 1. If and then
Lemma 2 (Hölder inequality). Suppose the real numbers $p, q > 1$ satisfy $1/p + 1/q = 1$. If $f \in L^{p}(\Omega)$ and $g \in L^{q}(\Omega)$ are measurable functions on $\Omega$, then $fg$ is also measurable and satisfies
$$\int_{\Omega} \lvert f g\rvert\,d\mu \le \left(\int_{\Omega} \lvert f\rvert^{p}\,d\mu\right)^{1/p} \left(\int_{\Omega} \lvert g\rvert^{q}\,d\mu\right)^{1/q}.$$
In particular, when $p = q = 2$, this is the Cauchy-Schwarz inequality that we usually see. That is,
$$\int_{\Omega} \lvert f g\rvert\,d\mu \le \left(\int_{\Omega} \lvert f\rvert^{2}\,d\mu\right)^{1/2} \left(\int_{\Omega} \lvert g\rvert^{2}\,d\mu\right)^{1/2}.$$
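The inequality can be checked numerically on discrete data, with sums playing the role of integrals; a small sketch:

```python
import numpy as np

# Discrete Hölder check on random nonnegative data.
rng = np.random.default_rng(0)
f = rng.random(100)
g = rng.random(100)
p, q = 3.0, 1.5                      # conjugate exponents: 1/p + 1/q = 1
lhs = np.sum(np.abs(f * g))
rhs = np.sum(np.abs(f) ** p) ** (1 / p) * np.sum(np.abs(g) ** q) ** (1 / q)
assert lhs <= rhs                    # Hölder's inequality
# p = q = 2 gives the Cauchy-Schwarz special case
cs = np.sqrt(np.sum(f ** 2)) * np.sqrt(np.sum(g ** 2))
assert np.sum(f * g) <= cs
```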
Lemma 3. Let be a nonnegative real number, then it can be obtained for any
Lemma 4 (Gronwall inequality). If $u(t)$ is a continuous function on $[t_0, T]$ and satisfies the inequality
$$u(t) \le a(t) + \int_{t_0}^{t} b(s)\,u(s)\,ds,$$
where $b(t) \ge 0$, then we can get
$$u(t) \le a(t) + \int_{t_0}^{t} a(s)\,b(s)\exp\!\left(\int_{s}^{t} b(r)\,dr\right) ds.$$
In the special case where $a(t)$ is nondecreasing, we can get
$$u(t) \le a(t)\exp\!\left(\int_{t_0}^{t} b(s)\,ds\right).$$
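A quick numerical illustration of the Gronwall bound, using the extremal function for which the integral inequality holds with equality (the constants `c` and `lam` are arbitrary choices):

```python
import numpy as np

# If u(t) <= c + lam * int_0^t u(s) ds, Gronwall gives u(t) <= c * exp(lam * t).
c, lam = 1.0, 0.8                    # arbitrary illustrative constants
t = np.linspace(0.0, 2.0, 201)
u = c * np.exp(lam * t)              # extremal case: equality in the Gronwall bound
dt = t[1] - t[0]
# trapezoidal running integral of u
integral = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * dt)))
assert np.all(u <= c + lam * integral + 1e-6)    # the integral inequality holds
assert np.all(u <= c * np.exp(lam * t) + 1e-12)  # the Gronwall bound holds
```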
Definition 5. Let the initial time of the fractional-order quantum neural network system (11) with mixed delays be . The system (11) is said to be quasiuniformly stable if, for any , there exist two constants and such that for any , when , it holds that .
Theorem 6. Suppose the order of the fractional-order quantum neural network system (11) with mixed delays satisfies . If Assumptions (1) and (2) of Section 2.3 hold and holds, where , then the system (11) is quasiuniformly stable.
Proof. Set the initial time of the error system (12) to , with initial condition . The expression of the solution of the error system can be obtained from Lemma 1 as . From Assumptions (1) and (2) and the basic properties of the norm, we can get . According to the Cauchy-Schwarz inequality in Lemma 2, we know . Substituting into Equation (24) gives . In Lemma 3, letting , we can get . Using the Gronwall inequality and letting , we get , so that , that is, . ☐
Theorem 7. Suppose the order of the fractional-order quantum neural network system (11) with mixed delays satisfies . If Assumptions (1) and (2) of Section 2.3 hold and holds, where , then the system (11) is quasiuniformly stable.
Proof. From Theorem 6, we get . Let and ; obviously, . From Hölder's inequality, we can get , because , so .
In Lemma 3, letting , we can get .
Letting , we get ; then, letting , we obtain .
Using the Gronwall inequality and letting , we get , which is . ☐
4. Numerical Simulation
In this part, we give a specific example to verify the validity and applicability of the given results.
The activation functions in the above formula are , .
From , , , and , we infer , , , and .
In this experiment, by controlling the corresponding parameters, we study the influence of one parameter on the trajectories of and under different initial values. We set the parameters . When = 0.7, we find , , , , and , and obtain from the following inequality:
When , we find . We obtain from the following inequality.
Figure 2 shows, for , , and , the trajectories of corresponding to different values of . Figure 3 shows, for , , and , the trajectories of corresponding to different values of . It can be seen that the state trajectories of and converge to the equilibrium point.
Figure 4 shows the trajectory of for , , and the initial value (t) = 0.5, for the different values = 0.05, 0.2, 0.3. Figure 5 shows the trajectory of for , , and the initial value (t) = 0.5, for different values of . It can be seen that the state trajectories of and converge to the equilibrium point.
Figure 6 shows the trajectory of when , , and , with the initial value taking different values. Figure 7 shows the trajectory of when , , and , with the initial value taking different values. It can be seen that the state trajectories of and converge to the equilibrium point.
Figure 8 shows the trajectory of when , , and , with the initial value taking different values. Figure 9 shows the trajectory of when , , and , with the initial value taking different values. It can be seen that the state trajectories of and converge to the equilibrium point.
5. Conclusion
In this paper, the neural network is quantized through the linear superposition of multilayer activation functions and the adjustment of quantum intervals by a learning algorithm, and a quantum neural network model with a multilayer activation function is proposed. On this basis, the quasiuniform stability of fractional-order quantum neural networks with mixed time delays is studied. For the fractional order belonging to different ranges, sufficient conditions for the quasiuniform stability of the fractional-order quantum neural network system with mixed time delays are discussed, and the theoretical results are proved using the corresponding theorems. Finally, numerical simulations verify the feasibility of the conclusions obtained in this paper.
Data Availability
We did not use a data set in the research of this article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work received support from the National Natural Science Foundation of China (Nos. 61772295, 61572270, and 61173056), the PHD Foundation of Chongqing Normal University (No. 19XLB003), the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant no. KJZD-M202000501), and the Chongqing Technology Innovation and Application Development Special General Project (cstc2020jscx-lyjsAX0002).
References
- C. J. Zuñiga Aguilar, J. Gómez-Aguilar, V. Alvarado-Martínez, and H. Romero-Ugalde, “Fractional order neural networks for system identification,” Chaos, Solitons & Fractals, vol. 130, article 109444, 2020.
- D. Sheng, Y. Wei, Y. Chen, and Y. Wang, “Convolutional neural networks with fractional order gradient method,” Neurocomputing, vol. 408, pp. 42–50, 2020.
- J. Wang, Y. Wen, Y. Gou, Z. Ye, and H. Chen, “Fractional-order gradient descent learning of BP neural networks with Caputo derivative,” Neural Networks, vol. 89, pp. 19–30, 2017.
- A. Boroomand and M. B. Menhaj, “Fractional-order Hopfield neural networks,” in Advances in Neuro-Information Processing. ICONIP 2008, M. Köppen, N. Kasabov, and G. Coghill, Eds., vol. 5506 of Lecture Notes in Computer Science, pp. 883–890, Springer, Berlin, Heidelberg, 2008.
- L. Zhang and Y. Yang, “Different impulsive effects on synchronization of fractional-order memristive BAM neural networks,” Nonlinear Dynamics, vol. 93, no. 2, pp. 233–250, 2018.
- Y. Gu, Y. Yu, and H. Wang, “Synchronization-based parameter estimation of fractional-order neural networks,” Physica A: Statistical Mechanics and its Applications, vol. 483, pp. 351–361, 2017.
- L. Chen, C. Liu, R. Wu, Y. He, and Y. Chai, “Finite-time stability criteria for a class of fractional-order neural networks with delay,” Neural Computing and Applications, vol. 27, no. 3, pp. 549–556, 2016.
- Y. Xingyu and J. Lu, “Synchronization of fractional order memristor-based inertial neural networks with time delay,” in 2020 Chinese Control And Decision Conference (CCDC), Hefei, China, 2020.
- W. Zhang, J. Cao, D. Chen, and F. Alsaadi, “Synchronization in fractional-order complex-valued delayed neural networks,” Entropy, vol. 20, no. 1, p. 54, 2018.
- L. Kexue, P. Jigen, and G. Jinghuai, “A comment on "α-stability and α-synchronization for fractional-order neural networks",” Neural Networks, vol. 48, pp. 207-208, 2013.
- H. Liu, S. Li, H. Wang, Y. Huo, and J. Luo, “Adaptive synchronization for a class of uncertain fractional-order neural networks,” Entropy, vol. 17, no. 12, pp. 7185–7200, 2015.
- T. Hu, X. Zhang, and S. Zhong, “Global asymptotic synchronization of nonidentical fractional-order neural networks,” Neurocomputing, vol. 313, pp. 39–46, 2018.
- D. Sierociuk, G. Sarwas, and A. Dzieliński, “Discrete fractional order artificial neural network,” Acta Mechanica et Automatica, vol. 5, pp. 128–132, 2011.
- X. Huang, Z. Zhao, Z. Wang, and Y. Li, “Chaos and hyperchaos in fractional-order cellular neural networks,” Neurocomputing, vol. 94, pp. 13–21, 2012.
- C. Song and J. Cao, “Dynamics in fractional-order neural networks,” Neurocomputing, vol. 142, pp. 494–498, 2014.
- I. Batiha, R. Albadarneh, S. M. Momani, and I. H. Jebril, “Dynamics analysis of fractional-order Hopfield neural networks,” International Journal of Biomathematics, vol. 13, no. 8, article 2050083, 2020.
- M.-R. Chen, B.-P. Chen, G.-Q. Zeng, K.-D. Lu, and P. Chu, “An adaptive fractional-order BP neural network based on extremal optimization for handwritten digits recognition,” Neurocomputing, vol. 391, pp. 260–272, 2020.
- M. Wu, J. Zhang, Z. Huang, X. Li, and Y. Dong, “Numerical solutions of wavelet neural networks for fractional differential equations,” Mathematical Methods in the Applied Sciences, pp. 1–14, 2021.
- C. Lu and X. Ding, “Periodic solutions and stationary distribution for a stochastic predator-prey system with impulsive perturbations,” Applied Mathematics and Computation, vol. 350, pp. 313–322, 2019.