Abstract

A neural network is a model of the brain's cognitive process, with a highly interconnected multiprocessor architecture. Neural networks hold great potential because artificial neural networks inherently have good learning capabilities and can learn different input features. On this basis, this paper proposes a new chaotic neuron model and a new chaotic neural network (CNN) model. The network consists of a linear matrix, a sine function, and three chaotic neurons, one of which is affected by the sine function. The network exhibits rich chaotic dynamics and can produce multiscroll hidden chaotic attractors. This paper studies its dynamic behaviors, including bifurcation behavior, the Lyapunov exponents, the Poincaré surface of section, and the basins of attraction. In analyzing the bifurcation and the basins of attraction, it was found that the network exhibits hidden bifurcation phenomena, and the relevant properties of the basins of attraction were obtained. Thereafter, the chaotic neural network was implemented on an FPGA, and the experiment showed that the theoretical analysis and the FPGA implementation were consistent with each other. Finally, an energy function was constructed for CNN-based optimization, providing a new approach to solving the traveling salesman problem (TSP).

1. Introduction

Undoubtedly, the human brain is the most complex and remarkable information-processing organ. It is the product of long-term natural evolution and contains approximately 100 billion neurons. These neurons transmit information to one another to perform cognitive functions and to control human behavior and thought. The brain is part of the central nervous system (CNS); it is composed of a large number of neuronal cells connected by about 10^15 synapses, forming a complex neural network that transmits information in an orderly and hierarchical manner. McCulloch and Pitts abstracted the biological neuron and built a simple model of it, forming a neural network, namely, the artificial neural network [1]. Artificial neural networks can be divided into three categories: shallow perceptrons, simple artificial neural networks, and deep neural networks [2–13].

The neuron is the basic processor of a neural network. Each neuron has an output, which generally relates to its state and may affect several other neurons, and receives inputs through connections called synapses. Each input is the activation of a presynaptic neuron multiplied by the corresponding synaptic weight, and the neuron's activation is obtained by applying a threshold function, modeled as a nonlinear function, to the weighted sum. When designing a neural network, the most important requirement is to ensure that the dynamic system converges to the desired state. On the other hand, the richer the dynamics, the wider the range of applications. For example, when a neural network model is used as an approximate method to solve combinatorial optimization problems, the transient chaotic behavior provides higher search performance for globally optimal or near-optimal solutions. By taking the spatiotemporal sum of the external input and the feedback input of chaotic neurons, a chaotic neural network can be constructed from chaotic neurons. The study in [14] proposed a new four-dimensional chaotic memory cell neural network, studied its dynamic behaviors, and designed chaotic synchronization based on sliding-mode control; the proposed chaotic memory CNN system can be used for secure communication. The study in [15] investigated the construction of a blind restoration model for superresolution images based on a chaotic neural network: a simplified chaotic neural network model is first constructed, the gray value of the image is used as the input of the network, and the generated Toeplitz matrix is used to calculate the connection weights and bias inputs of the chaotic neural network. This solves the problem that the traditional neural-network-based blind restoration model for superresolution images falls into local minima.
The study in [16] considered the circuit implementation and application of reconfigurable chaotic neural networks for memory. The chaotic neural network has been widely applied in associative memory because of its rich chaotic characteristics. In that paper, not only was the circuit implemented, but autoassociative memory, heteroassociative memory, superimposed pattern separation, many-to-many associative memory, and their application to three-view drawing were also realized through simulation experiments. The study in [17] examined the local synchronization control of chaotic neural networks with saturated actuators and sampled data. The author of [18] studied the global power-rate synchronization of chaotic neural networks with proportional delays based on impulsive control. The study in [19] analyzed observer-based sliding-mode synchronization control of time-delayed chaotic neural networks. The study in [20] proposed a chaotic neural network for encryption. The study in [21] considered the dynamic behaviors of chaotic circuits in neural networks. The study in [22] investigated the chaotic multistability of memristor-based neural networks.

Inspired by previous work, we simulate and study a chaotic neural network consisting of a linear matrix, a sine function, and three chaotic neurons, one of which is affected by the sine function. In this paper, we first propose the new chaotic neural network model, then perform a nonlinear dynamic analysis of it, including bifurcation behavior, the Lyapunov exponent spectrum, the Poincaré surface of section, and the basins of attraction, and finally present an FPGA implementation of the chaotic neural network. Few such studies exist in the chaotic neural network literature; therefore, research on the dynamics of this type of system is both important and meaningful.

2. Chaotic Neural Network Model

In this paper, based on the Hopfield neural network model, we extend the external and internal membrane conductance of the neurons to the linear layer of the neural network model. A new chaotic neural network model is proposed, as shown in the following equation:

C_i (dx_i/dt) = -Σ_{j=1}^{n} S_ij x_j + Σ_{j=1}^{n} W_ij V_j + I_i,  i = 1, 2, ..., n,  (1)

where x_i is the voltage on the capacitor C_i, S_ij is the conductance of the membrane resistance outside and inside the neuron, I_i is the nonlinear external input current, and the matrix W = (W_ij) contains the synaptic weights describing the connection strength between neurons. The activation function V_j of neuron j is defined as

V_j = tanh(x_j),  (2)

and when C_i = 1 and n = 3, the new chaotic neural network model reduces to a three-neuron system, given as equation (3), in which one of the neurons is driven by a sine term with amplitude A. With A = 20, equation (3) can be rewritten as system (4).

Connection of the neural network with three neurons is shown in Figure 1.

The chaotic neural network proposed in this paper can be regarded as a nonlinear associative memory, or content-addressable memory, whose function is to retrieve a stored pattern in response to an incomplete or noisy version of that pattern. The essence of content-addressable memory is to map a basic memory x_i to a stable fixed point of a dynamic system; the stable fixed points of the network's phase space are the basic memories of the network. A specific input pattern can be described as a starting point in the phase space. If the starting point is close to the fixed point representing the memory to be retrieved, the system should converge to the memory itself over time. Therefore, the chaotic neural network is a dynamic system whose phase space contains a set of stable fixed points representing the basic memories of the system.

In this paper, we use the fourth-order Runge–Kutta method to solve system (4), set the initial value to (0.1, 0.1, 0.1), and obtain the phase diagram of system (4). The phase diagram shows that the system can produce a four-scroll chaotic attractor. For details, see Figures 2(a)–2(c).
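
As a concrete illustration of this procedure, the sketch below integrates a three-neuron Hopfield-type network with one sine-driven neuron using a classical fourth-order Runge–Kutta scheme. The weight matrix W and the tanh activation are illustrative assumptions, not the actual coefficients of system (4), which are given by equation (3).

```python
import math

# Hypothetical parameters: the paper's actual weight matrix for system (4)
# is not reproduced here, so W and the tanh activation are illustrative only.
W = [[2.0, -1.2, 0.0],
     [1.9,  1.7, 1.15],
     [-4.7, 0.0, 1.0]]
A = 20.0  # amplitude of the sine drive (value used in the paper)

def cnn_rhs(x):
    """Right-hand side of a three-neuron Hopfield-type CNN with a sine term."""
    v = [math.tanh(xj) for xj in x]          # tanh activation (assumed)
    dx = []
    for i in range(3):
        s = -x[i] + sum(W[i][j] * v[j] for j in range(3))
        if i == 0:                           # one neuron driven by the sine term
            s += A * math.sin(x[0])
        dx.append(s)
    return dx

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

x = [0.1, 0.1, 0.1]          # initial value used in the paper
h = 0.01
trajectory = [x]
for _ in range(5000):
    x = rk4_step(cnn_rhs, x, h)
    trajectory.append(x)
```

With the paper's actual weight matrix substituted for W, the same loop yields the phase trajectories plotted in Figure 2.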

Moreover, we found that the proposed system (4) has infinitely many equilibrium points, and the Lyapunov exponents of system (4) are LE1 = 0.560261, LE2 = -0.001804, and LE3 = -4.056202, so system (4) is a multiscroll hidden-attractor system. The Lyapunov exponents are shown in Figure 3.
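
The largest Lyapunov exponent can be estimated numerically by tracking two nearby trajectories and renormalizing their separation (a Benettin-style scheme). The sketch below applies this to the same illustrative stand-in system; since the weights are assumptions, the resulting value is not the paper's LE1.

```python
import math

# Illustrative stand-in for system (4); the paper's coefficients are not
# reproduced here.
W = [[2.0, -1.2, 0.0], [1.9, 1.7, 1.15], [-4.7, 0.0, 1.0]]
A = 20.0

def rhs(x):
    v = [math.tanh(c) for c in x]
    dx = [-x[i] + sum(W[i][j] * v[j] for j in range(3)) for i in range(3)]
    dx[0] += A * math.sin(x[0])
    return dx

def rk4(x, h):
    k1 = rhs(x); k2 = rhs([a + 0.5 * h * b for a, b in zip(x, k1)])
    k3 = rhs([a + 0.5 * h * b for a, b in zip(x, k2)])
    k4 = rhs([a + h * b for a, b in zip(x, k3)])
    return [a + h / 6 * (p + 2 * q + 2 * r + s)
            for a, p, q, r, s in zip(x, k1, k2, k3, k4)]

def largest_le(x0, h=0.01, steps=20000, d0=1e-8):
    """Benettin estimate: average log-divergence of two nearby trajectories."""
    x = list(x0)
    y = [x0[0] + d0, x0[1], x0[2]]     # perturbed companion trajectory
    acc = 0.0
    for _ in range(steps):
        x, y = rk4(x, h), rk4(y, h)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        acc += math.log(d / d0)
        # renormalize the separation back to d0 along the current direction
        y = [a + d0 * (b - a) / d for a, b in zip(x, y)]
    return acc / (steps * h)

le1 = largest_le([0.1, 0.1, 0.1])
```

A positive estimate from this procedure signals sensitive dependence on initial conditions; the full spectrum in Figure 3 requires the QR-based extension of the same idea.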

3. Analysis of Bifurcation, Lyapunov Exponent, and Poincaré Section

The control parameter A of system (4) is varied from 0 to 22 with a step size of 0.04, and the initial value of system (4) is (0.1, 0.1, 0.1). The resulting bifurcation diagram with respect to A is shown in Figure 4(a). As A increases, system (4) enters chaos through a period-doubling route.
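
The construction of such a bifurcation diagram can be sketched as follows: for each value of A, integrate the system, discard the transient, and record the local maxima of one state variable. The weight matrix below is an illustrative stand-in, not the paper's system (4); the sweep range, step size, and initial value follow the text.

```python
import math

W = [[2.0, -1.2, 0.0], [1.9, 1.7, 1.15], [-4.7, 0.0, 1.0]]  # illustrative

def rhs(x, A):
    v = [math.tanh(c) for c in x]
    dx = [-x[i] + sum(W[i][j] * v[j] for j in range(3)) for i in range(3)]
    dx[0] += A * math.sin(x[0])
    return dx

def rk4(x, h, A):
    k1 = rhs(x, A); k2 = rhs([a + 0.5 * h * b for a, b in zip(x, k1)], A)
    k3 = rhs([a + 0.5 * h * b for a, b in zip(x, k2)], A)
    k4 = rhs([a + h * b for a, b in zip(x, k3)], A)
    return [a + h / 6 * (p + 2 * q + 2 * r + s)
            for a, p, q, r, s in zip(x, k1, k2, k3, k4)]

def local_maxima_of_x1(A, h=0.01, transient=2000, steps=4000):
    """Integrate, drop the transient, and collect the peaks of x1."""
    x = [0.1, 0.1, 0.1]                     # initial value from the paper
    for _ in range(transient):
        x = rk4(x, h, A)
    prev, cur, maxima = None, x[0], []
    for _ in range(steps):
        x = rk4(x, h, A)
        nxt = x[0]
        if prev is not None and cur >= prev and cur > nxt:
            maxima.append(cur)              # local peak of the x1 time series
        prev, cur = cur, nxt
    return maxima

# Sweep A from 0 to 22 (here coarsely subsampled; the paper uses step 0.04).
diagram = {round(a * 0.04, 2): local_maxima_of_x1(a * 0.04)
           for a in range(0, 551, 50)}
```

Plotting each key of `diagram` against its list of maxima reproduces the familiar picture: a few branches in periodic windows, a cloud of points in chaotic ones.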

Dark lines can be observed in the bifurcation diagram. It is generally believed that the solid lines disappear after a bifurcation point; however, those solutions can still be obtained from the algebraic equations, so there is no reason for them to simply stop at the bifurcation point. Why, then, can we not see them? Because after the bifurcation they have become unstable periodic orbits. And why can an unstable periodic orbit not be seen in the bifurcation diagram? Essentially, this is a question of how to track unstable periodic orbits. Because this is a hidden-attractor system, every point on an unstable periodic orbit is unstable: as long as there is a small error, the trajectory deviates further and further from the orbit. The hidden-attractor system itself is sensitive to the initial state. Hidden bifurcation phenomena can also be observed in the bifurcation diagram.

The essence of the positive Lyapunov exponent is that it is the source of the local instability of a chaotic attractor. One of the most basic characteristics of chaos is its high sensitivity to initial conditions: two orbits produced by two very close but different initial values separate exponentially over time. The root cause of this phenomenon is the positive Lyapunov exponent of the chaotic system; the Lyapunov exponent therefore essentially describes the local instability of a chaotic motion. However, if only this local instability were at work, the entire attractor would diverge, while in fact the chaotic attractor exists only in a certain range of the phase space. Therefore, we believe that, in addition to local instability, there should be multistability factors in the hidden chaotic attractor. The hidden chaotic attractor is the result of the interaction of two tendencies, local instability and multistability, which finally forms the fractal structure of the whole chaotic attractor. This fully reflects the fact that the hidden chaotic attractor is a dialectical unity of local instability and multistability. In the next section of this paper, we will use the Lyapunov exponent to describe the basins of attraction of system (4) [23, 24].

Figure 4(b) shows how the positive Lyapunov exponent changes, which suggests that system (4) alternates between the quasiperiodic state and the chaotic state and fully depicts the changes in the local instability and multistability of system (4). This agrees closely with the state changes shown in the bifurcation diagram.

The continuous trajectory in the phase space appears as discrete mapping points on the Poincaré section. If the transient process in the initial stage is ignored and only the steady-state image of the Poincaré section is considered, then, when there is only one fixed point or a few discrete points on the Poincaré section, the motion is periodic; when the Poincaré section presents a closed curve, the motion is quasi-periodic; and when the Poincaré section shows dense points with a hierarchical structure, the motion is chaotic.
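
Extracting such section points amounts to detecting where the trajectory crosses a chosen plane and interpolating between integration steps. The sketch below records upward crossings of the x2 = 0 plane; the system coefficients are illustrative stand-ins, not the paper's system (4).

```python
import math

W = [[2.0, -1.2, 0.0], [1.9, 1.7, 1.15], [-4.7, 0.0, 1.0]]  # illustrative
A = 20.0

def rhs(x):
    v = [math.tanh(c) for c in x]
    dx = [-x[i] + sum(W[i][j] * v[j] for j in range(3)) for i in range(3)]
    dx[0] += A * math.sin(x[0])
    return dx

def rk4(x, h):
    k1 = rhs(x); k2 = rhs([a + 0.5 * h * b for a, b in zip(x, k1)])
    k3 = rhs([a + 0.5 * h * b for a, b in zip(x, k2)])
    k4 = rhs([a + h * b for a, b in zip(x, k3)])
    return [a + h / 6 * (p + 2 * q + 2 * r + s)
            for a, p, q, r, s in zip(x, k1, k2, k3, k4)]

def poincare_xz(steps=20000, h=0.01):
    """Collect (x1, x3) points where the orbit crosses x2 = 0 upward."""
    x = [0.1, 0.1, 0.1]
    section = []
    for _ in range(steps):
        prev, x = x, rk4(x, h)
        if prev[1] < 0.0 <= x[1]:            # upward crossing of x2 = 0
            t = -prev[1] / (x[1] - prev[1])  # linear interpolation fraction
            section.append((prev[0] + t * (x[0] - prev[0]),
                            prev[2] + t * (x[2] - prev[2])))
    return section

points = poincare_xz()
```

Scattering these points gives the section picture: a few isolated dots for periodic motion, a closed curve for quasi-periodic motion, and a structured dense set for chaos.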

The Poincaré map of the x-z plane of system (4) is shown in Figure 5. The Poincaré diagrams of the system on different planes show many dense points, which indicates that the system is chaotic and exhibits the stretching and folding characteristic of chaotic motion.

4. Analysis of Basins of Attraction

For chaotic neural networks, we can analyze the stability of the system by considering its Lyapunov function (energy function). Starting from an initial state, the network moves in the direction in which the Lyapunov function decreases until it reaches a local minimum. The local minimum points of the Lyapunov function represent the stable points of the phase space; in this sense, these points are also called attractors, and each attractor is surrounded by a substantial basin of attraction. These basins of attraction represent stable network states. When a state enters the lowest region of a basin of attraction, the corresponding solution of the network is obtained. The size of a basin of attraction is described by the radius of attraction, which can be defined as the maximum distance between all states contained in the basin, or the maximum distance at which the attractor can attract a state. The number of attractors represents the memory capacity, or storage capacity, of the associative memory network, where the storage capacity is the maximum number of noninterfering memories the network can hold within a certain tolerance of the association error probability. The storage capacity is related to the allowable error of associative memory, the network structure, the learning method, and the network design parameters. In short, the more attractors the network has, the greater its storage capacity. The basin of attraction of an attractor serves as an index of the fault tolerance of the network: the larger the basin of attraction, the better the fault tolerance and the stronger the association ability of the network.

In a dynamic system with multiple attractors, the corresponding basins may have fractal boundaries and an even more complex structure; this means that the coexisting attractors of system (4) have such a complicated basin-boundary structure. The red area represents the basin of attraction of the attractor at infinity, that is, the set of points whose trajectories diverge. The yellow area represents the basin of attraction of the chaotic attractor, which reflects the coexistence of multiple attractors and the fault tolerance of the network: the larger the yellow area, the better the fault tolerance. The blue area is the transition area. The section of the basins of attraction is a series of symmetric filaments, which are unevenly distributed but have a self-similar appearance. From Figure 6, it can be found that system (4) has four coexisting attractors.
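
Such a basin section is computed by sweeping a grid of initial conditions and classifying the fate of each trajectory. The sketch below implements the simplest split, divergent versus bounded, on a small grid in the x1-x2 plane with x3 fixed; the coefficients are illustrative stand-ins for system (4), and a full basin plot would further separate the bounded points by which coexisting attractor they reach.

```python
import math

W = [[2.0, -1.2, 0.0], [1.9, 1.7, 1.15], [-4.7, 0.0, 1.0]]  # illustrative
A = 20.0

def rhs(x):
    v = [math.tanh(c) for c in x]
    dx = [-x[i] + sum(W[i][j] * v[j] for j in range(3)) for i in range(3)]
    dx[0] += A * math.sin(x[0])
    return dx

def rk4(x, h):
    k1 = rhs(x); k2 = rhs([a + 0.5 * h * b for a, b in zip(x, k1)])
    k3 = rhs([a + 0.5 * h * b for a, b in zip(x, k2)])
    k4 = rhs([a + h * b for a, b in zip(x, k3)])
    return [a + h / 6 * (p + 2 * q + 2 * r + s)
            for a, p, q, r, s in zip(x, k1, k2, k3, k4)]

def classify(x0, steps=1500, h=0.01, escape=1e3):
    """Label an initial condition by the fate of its trajectory."""
    x = list(x0)
    for _ in range(steps):
        x = rk4(x, h)
        if any(abs(c) > escape for c in x):
            return "divergent"          # basin of the attractor at infinity
    return "bounded"                    # basin of a bounded attractor

# 7 x 7 grid of initial conditions in the x1-x2 plane, x3 fixed at 0.1
labels = {(i, j): classify([float(i), float(j), 0.1])
          for i in range(-3, 4) for j in range(-3, 4)}
```

Coloring each grid cell by its label, on a much finer grid, produces the red, yellow, and blue regions described above.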

The criteria for the existence of riddled basins of attraction in a dynamic system [25] are as follows:
(1) There is a smooth invariant subspace containing a chaotic attractor.
(2) There is another asymptotic final state outside the invariant subspace (not necessarily a chaotic one).
(3) The transverse Lyapunov exponent of the invariant subspace is negative.
(4) Some unstable periodic orbits embedded in the attractor are transversely unstable, that is, they have positive finite-time transverse exponents.

The study in [25] has proved that the coupled Lorenz system satisfies conditions 1 and 2: there are two invariant (three-dimensional) manifolds in the six-dimensional phase space, and a trajectory starting in either subspace remains there forever, evolving toward the familiar Lorenz attractor of that subspace. For the synchronization attractor of the coupled Lorenz system, [25] proposed using the riddled basin to describe it. In this paper, we use the finite-time transverse Lyapunov exponent for comparison with the transverse exponent of a specific orbit and obtain results similar to those in [25].

We find that the basins of attraction of the dynamic system have the following properties:
(1) The system orbit tends to a fixed point.
(2) The system is periodic or quasi-periodic.
(3) The system exhibits chaotic or hyperchaotic behavior.
(4) The time series of the system tends to infinity in a finite time.

To improve an associative memory network, a fundamental problem must be overcome: in addition to the attractors corresponding to the memory samples, there are also "redundant" stable states (pseudostates). The existence of pseudostates degrades the fault tolerance of the associative memory network. If the basins of attraction of the pseudostates can be reduced or eliminated, the fault tolerance of the associative memory network can be improved and the memory capacity increased.

5. FPGA Implementation

The hardware experiment for system (4) is conducted using fixed-point arithmetic based on FPGA technology. We use a Xilinx Zynq-7000 series XC7Z020 FPGA chip and an AN9767 dual-port parallel 14-bit digital-to-analog conversion module with a maximum conversion rate of 125 MHz, and we adopt Vivado 17.4 together with the System Generator for joint Matlab-FPGA debugging; an oscilloscope is used to visualize the analog output. After analysis, synthesis, and compilation in Vivado, and after confirming that the timing simulation results are correct, we generate the bit file with Vivado, download it to the FPGA development board, convert the FPGA output into an analog signal with the AN9767 digital-to-analog converter, and connect the converter to the oscilloscope to observe the phase diagrams of the attractor of system (4). The phase diagrams displayed by the oscilloscope are shown in Figure 7.
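
As a minimal illustration of the fixed-point representation underlying such a design, the sketch below quantizes a real-valued state variable to a signed fixed-point format with saturation. The 32-bit word length and 16 fractional bits are assumptions chosen for illustration; the section does not state the word length actually used in the XC7Z020 design.

```python
# Assumed format for illustration: signed 32-bit word, 16 fractional bits.
FRAC_BITS = 16
WORD_BITS = 32

def to_fixed(x):
    """Quantize a real value to a signed fixed-point integer (assumed Q15.16)."""
    q = int(round(x * (1 << FRAC_BITS)))
    lo, hi = -(1 << (WORD_BITS - 1)), (1 << (WORD_BITS - 1)) - 1
    return max(lo, min(hi, q))               # saturate on overflow

def from_fixed(q):
    """Recover the real value represented by a fixed-point integer."""
    return q / (1 << FRAC_BITS)

# Round-trip quantization error is below one LSB = 2**-16.
err = abs(from_fixed(to_fixed(0.1)) - 0.1)
```

The word length trades dynamic range against this quantization error; too few fractional bits visibly distorts the attractor on the oscilloscope.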

6. CNN-Based Optimization Calculation of TSP Problem

The traveling salesman problem (TSP) is a classic problem in combinatorial optimization. In a typical TSP scenario, a salesman travels from one city to another to promote his goods and then returns to his starting city. How should he choose the shortest route that passes through all the cities?

In graph-theoretic terms, this problem is, in essence, to find a Hamiltonian cycle of minimum weight in a weighted undirected graph. Given that the feasible solutions are the permutations of all the vertices, the number of candidate tours explodes combinatorially as the number of vertices increases; the problem is therefore NP-hard. Its wide application in transportation, circuit-board and circuit design, and logistics distribution has attracted extensive research. Early research applied various exact algorithms, for example, branch-and-bound, linear programming, and dynamic programming. However, as the problem grows in scale, these methods no longer work. In later research, scholars therefore turned to approximate or heuristic algorithms, mainly including genetic algorithms [26, 27], simulated annealing [28, 29], ant colony algorithms [30, 31], tabu search [29, 32], greedy algorithms [33, 34], and neural networks [35].
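
To make the combinatorial explosion concrete, the sketch below solves a tiny instance exactly by enumerating all (n-1)! tours; the city coordinates are made up for the example. Even for 5 cities this already means 24 candidate tours, and exact enumeration becomes infeasible long before the 28-city case considered later in this paper.

```python
import itertools
import math

# Made-up coordinates for a 5-city example instance.
cities = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 2.0)]

def tour_length(order):
    """Total Euclidean length of the closed tour visiting cities in order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Fix city 0 as the start to avoid counting rotations of the same tour.
best = min(itertools.permutations(range(1, len(cities))),
           key=lambda rest: tour_length((0,) + rest))
best_len = tour_length((0,) + best)
```

Here brute force is instant; at n = 28 the same enumeration would face 27! tours, which is why the approximate methods listed above, including the CNN used here, are needed.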

The chaotic neural network (CNN) considered here is a feedback neural network structured similarly to a control system, with feedback from each output terminal to the corresponding input. Given an input excitation, the state of the loop changes continuously, and the values at the input and output terminals change with it until they become stable; each output represents a state. Therefore, the CNN here is a dynamic system with multiple inputs and outputs. In a dynamic system, the equilibrium state can be understood as the state in which a form of energy of the system has continuously decreased to its minimum. The system can be made to converge to different states by setting different energy functions.

First, the problem is mapped onto the CNN. The problem can be represented by a permutation matrix: for n cities, the travel route is represented by an n × n matrix composed of n^2 neurons. Each row and each column of the permutation matrix has exactly one element equal to 1, with all other elements equal to 0. Such a matrix uniquely identifies a travel route. To make the lowest energy point of the network correspond to the shortest travel route, the energy function must be constructed carefully. Based on system (1), an energy function was constructed to solve the traveling salesman problem. Compared with the Hopfield neural network, the chaotic neural network eliminates shortcomings such as a lower calculation speed, troublesome parameter setting, a higher likelihood of landing in an invalid solution, and greater difficulty in identifying the optimal solution. The expression of the energy function is given in formula (5), and the change of the energy function is shown in Figure 8: when the path optimization results are obtained, the final state of the energy function is close to 0.
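
An energy function of the kind described above can be sketched generically in the Hopfield style: quadratic penalties for violating the row, column, and total constraints of the permutation matrix, plus a term proportional to the tour length. The penalty weights A, B, C, D and the exact form of the terms are illustrative assumptions; the paper's actual energy function (5) is not reproduced in this section.

```python
def tsp_energy(v, dist, A=1.0, B=1.0, C=1.0, D=1.0):
    """Hopfield-style TSP energy on an n-by-n neuron matrix v (sketch)."""
    n = len(v)
    # row constraint: each city occupies exactly one tour position
    row = sum(v[x][i] * v[x][j] for x in range(n)
              for i in range(n) for j in range(n) if i != j)
    # column constraint: each tour position holds exactly one city
    col = sum(v[x][i] * v[y][i] for i in range(n)
              for x in range(n) for y in range(n) if x != y)
    # global constraint: exactly n entries are 1
    tot = (sum(map(sum, v)) - n) ** 2
    # path term: length of the encoded tour (each leg counted twice)
    path = sum(dist[x][y] * v[x][i] * (v[y][(i + 1) % n] + v[y][(i - 1) % n])
               for x in range(n) for y in range(n) if x != y for i in range(n))
    return A / 2 * row + B / 2 * col + C / 2 * tot + D / 2 * path

# A valid 3-city tour 0 -> 1 -> 2 encoded as a permutation matrix.
v = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
E = tsp_energy(v, dist)
```

For a valid permutation matrix the three constraint terms vanish, and the energy reduces to the path term proportional to the tour length, which is what drives the network state toward short routes.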

In this paper, 28 cities are used. Over four optimization runs, we obtain the following results: the optimal value of the energy function is 1.5193, the initial route length is 13.1544, and the shortest route length is 5.3188. The simulation results show that the proposed CNN can solve the traveling salesman problem very well. The results are shown in Figure 9.

7. Conclusion

In this paper, we put forward a new chaotic neuron model and a resulting chaotic neural network (CNN) model. The CNN has rich chaotic dynamic behaviors and can generate multiscroll hidden chaotic attractors. We study its dynamic behaviors, including bifurcation behavior, the Lyapunov exponents, the Poincaré section, and the basins of attraction, and obtain the relevant characteristics of the basins of attraction. Furthermore, we realize the CNN on an FPGA; the experiment shows that the theoretical analysis and the FPGA realization lead to consistent conclusions. Finally, we construct an energy function for CNN-based optimization, providing a new approach to solving the TSP. Since chaotic systems and chaotic neural networks have been widely used in image encryption [36–40] and secure communication [41–44], these two applications will be the focus of our future research.

Data Availability

All data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported in part by the National Key R&D Program of China for International S&T Cooperation Project (2019YFE0118700), the National Natural Science Foundation of China (61973110 and 61561022), Hunan Young Talents Science and Technology Innovation Project (2020RC3048), Natural Science Foundation of Hunan Province (2020JJ4315 and 2019JJ50648), and Education Department of Hunan Province Outstanding Youth Project (20B216).