With the rapid development of science and technology and the continuous expansion of network scale, traditional centralized control and optimization techniques have become inadequate for large-scale complex network problems, and distributed optimization algorithms, which offer greater robustness and flexibility, have attracted increasing attention. Given the irreplaceable advantages of multi-agent systems in distributed computing, many researchers have used them as carriers of distributed optimization for theoretical research and practical application. For the distributed optimization problem in an undirected network environment, this article surveys the existing gradient-based distributed synchronous optimization algorithms and, on that basis, proposes a novel distributed synchronous momentum-accelerated optimization algorithm. Under the premise that the local objective functions are strongly convex and Lipschitz continuous, the algorithm uses heterogeneous step sizes to drive each node to converge asymptotically to the global optimal solution, and the effect of the primal error on the dual error is reduced or compensated by introducing an adjustable symmetric matrix. The distributed resource allocation problem under undirected graphs is also studied, in which the local cost functions of the agents are unknown. Using sinusoidal signal excitation and only the inputs and outputs of the cost functions, first-order and second-order extremum seeking algorithms are designed, respectively. In addition, the algorithm innovatively introduces a dual acceleration mechanism combining the Nesterov gradient method and the Heavy-Ball method to greatly improve the convergence speed.
Based on the interrelationships among the consensus error, the optimality gap, the state difference, and the tracking error, the ranges of the step size and the momentum parameters under which the algorithm converges linearly to the optimal solution are derived using linear matrix inequality techniques.

1. Introduction

With the rapid development of science and technology and the continuous expansion of network scale, big data has become one of the defining features of the information network era. Big data is characterized by large volume, many types, and low value density, and it is often stored across multiple interrelated agents [1]. Therefore, multi-agent systems have become one of the important topics in complexity control science and artificial intelligence research. A multi-agent system is a network system formed by a group of agents with perception, communication, computation, and execution capabilities [2]. With the development of multi-agent system theory and coordination technology, many distributed optimization algorithms can be implemented by means of multi-agent systems; i.e., multi-agent systems can serve as carriers of distributed optimization algorithms [3]. Because the computational power of a single agent is limited, it is difficult for one agent to process large-scale complex tasks effectively, whereas in a multi-agent system a large-scale complex network task can be decomposed into multiple easy-to-process subtasks that are assigned to the individual agents for separate processing [4].

The efficiency of distributed optimization algorithms implemented on multi-agent systems depends not only on the computational power of each agent but also on the collaboration among the agents [5–7]. In a broad sense, an agent can be a human, an aircraft, a vehicle, a computer, or another entity. Multi-agent systems originated from the study of the behavior of biological swarms in nature [8]. In nature, large numbers of individuals often form coordinated, orderly, and even striking collective movements when they gather; for example, birds fly and migrate in flocks, schools of fish gather in different river and sea areas to rest, feed, and reproduce in an orderly manner, ants follow simple rules to plan optimal food delivery paths, and hyenas hunt prey cooperatively [9]. These group phenomena exhibit characteristics of distribution, coordination, self-organization, stability, and intelligence. Optimization theory is an important foundation of operations research and control theory, and a large number of practical problems in engineering and management decision making can be modeled as mathematical optimization problems [10]. With the rapid growth of data sizes, centralized optimization algorithms are limited by the computational bottleneck of a single node and have difficulty handling large-scale optimization problems effectively [11]. In the master-slave computing architecture, the slave nodes only send their local information to the master node (central node), and the master node runs the optimization algorithm on this information and finally feeds the results back to the corresponding slave nodes [12]. This architecture depends heavily on the master node and requires the master node to have strong computational power [13–15].
Distributed optimization algorithms with multinode collaboration can significantly reduce the computational pressure on a single node and are increasingly favored by researchers.

A multi-agent distributed optimization algorithm for continuous- and discrete-time systems under Markov switching topologies has been proposed for the case of changing communication topology [16]. By a linear transformation, the multi-agent distributed optimization problem is converted into a stability analysis problem for the system [17]. A new Lyapunov function is constructed, and the convergence of the algorithm is analyzed using Lyapunov theory to obtain sufficient conditions for asymptotic and exponential convergence [18]. The results have been further extended to distributed optimization problems under Markov switching topologies with partially unknown transition probabilities [19]. Algorithms have also been designed for distributed optimization problems over general strongly connected directed graphs. Most existing multi-agent distributed optimization algorithms are designed for undirected connected graphs or weight-balanced graphs; such algorithms are simple to design and easy to analyze for convergence, but in reality both undirected connected graphs and weight-balanced graphs are difficult to realize. To overcome these drawbacks, a distributed optimization algorithm uses the in-degree and out-degree information of each agent, and its convergence analysis is given via coordinate transformation and the Lyapunov method to obtain sufficient conditions for convergence. The design of multi-agent distributed optimization protocols that account for communication cost and input saturation has also been investigated. Input saturation is a very common phenomenon in controller design, and if it is not considered, the system may fail or even crash.

2. Distributed Optimization Algorithms for Continuous Systems

2.1. Parameter Construction

In practical applications, the sensors of an agent can transmit information only within a limited range. As the agents move, an agent may lose its information link to another agent by leaving its receiving radius and may later regain the link by moving back within range; likewise, agents that had lost contact may reconnect as they approach each other. As a result, the communication topology among the agents changes continuously.

Therefore, in this case, the communication topology of the multi-agent system is not fixed, and it is of greater practical significance to consider multi-agent systems with dynamically switching topologies. Most current designs of multi-agent distributed optimization under switching topologies model the switching process with stochastic probabilities, yet the communication topology of a system is often related to its state at the previous moment. This chapter therefore studies the communication topology and investigates the multi-agent distributed optimization problem under Markov switching topologies. It first designs a feedback-compensation-based distributed optimization algorithm to solve the above problem and, by variable substitution, transforms the original distributed optimization problem into a system stability analysis, as shown in Figure 1. The distributed optimization problem under Markov switching topologies with partially unknown transition probabilities is also discussed.

Consider a multi-agent system composed of n agents whose dynamics are as follows, where xi(t) is the state of agent i and the control law is to be designed. The communication topology of the multi-agent system is defined accordingly, and when the Markov switching process satisfies φt = i, the corresponding topology is active. The n agents cooperate to solve the m-dimensional optimization problem:

Since f(x) is too complex to compute its optimal point directly, it can be decomposed into several local cost functions that are computed separately. Each agent knows only its own local cost function; the agents pass their state information to each other and update their own states by the designed algorithm.

To achieve distributed optimality of the system, the following distributed optimization algorithm under Markov switching topology is proposed for continuous-time systems, where a and b are two positive constants. To simplify the notation without loss of generality, we always let t = 1 in this article. Assume that Assumptions 2.1–2.4 hold and the initial values of ωi satisfy ∑ni=1 ωi(0) = 0. For the system, if the positive parameters a < 1 and b satisfy the following conditions, then the distributed optimization problem under Markov switching topology is solved, in which

λ2 denotes the second smallest eigenvalue of the Laplacian matrix, while μ is the positive constant arising from the bounding step that uses Young’s inequality. The minimum convergence rate of the system is as follows:
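As a rough illustration of such consensus-plus-gradient dynamics, the following sketch Euler-discretizes a standard distributed PI (integral-feedback) gradient flow over a fixed undirected graph. The specific update form, the path graph, the gains a and b, and the quadratic local costs are illustrative assumptions, not the paper's exact algorithm; only the zero-sum initialization of the integral states ωi matches the assumption above.

```python
import numpy as np

# Hypothetical consensus + integral-feedback gradient flow:
#   x' = -a*L@x - b*grad_f(x) - w,   w' = a*L@x
# so sum(w) stays 0 when sum(w(0)) = 0, as assumed in the text.
n = 4
A = np.eye(n, k=1) + np.eye(n, k=-1)      # path-graph adjacency
L = np.diag(A.sum(1)) - A                 # graph Laplacian
c = np.array([1.0, 2.0, 3.0, 4.0])        # f_i(x) = 0.5*(x - c_i)^2
grad = lambda x: x - c                    # local gradients
a, b, h = 2.0, 1.0, 0.01                  # gains and Euler step
x = np.array([0.0, 1.0, -1.0, 2.0])
w = np.zeros(n)                           # sum(w(0)) = 0
for _ in range(20000):
    x_dot = -a * (L @ x) - b * grad(x) - w
    w_dot = a * (L @ x)
    x, w = x + h * x_dot, w + h * w_dot
print(x)  # every node near the global minimizer mean(c) = 2.5
```

At equilibrium the integral state cancels the local gradient mismatch, so all nodes agree on the minimizer of the sum of the local costs rather than of their own cost.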

From the control law, we have the following:

It also means that

Choose the orthogonal matrix Q= [r, R], where r and R satisfy

Transform the coordinates:

Substituting, we obtain the transformed system, where η1 and η2 represent the first-row elements and the remaining elements, respectively, and τ1 and τ2 are defined analogously. The Lyapunov candidate function is chosen as follows:

To verify the effectiveness of the proposed distributed primal-dual algorithm, two sets of simulation examples are given. Both sets consider the cases with and without random link failures so that the numerical results can be compared to further verify the superiority of the algorithm proposed in this chapter. The values of the variables independent of the random link failures are the same in both sets of simulation examples. For a given generator network, the communication weights between the generators form a series of doubly stochastic symmetric adjacency matrices. Simulation Example 1 verifies the performance of the proposed algorithm on a smaller network, with the parameters shown in Table 1. Simulation Example 2 demonstrates the stability of the algorithm on a larger network.

2.2. Simulation Example

To show that the proposed algorithm scales well, we next apply it to an economic dispatch problem in a relatively large network containing 50 generators. Each generator i has a quadratic cost function fi(Pi) = ai + biPi + ciPi², with ai ∈ [6.78, 74.33], bi ∈ [8.3391, 37.6968], and ci ∈ [0.0024, 0.0697], where the coefficients ai, bi, and ci are expressed in the appropriate units. Pi is the electrical energy generated by generator i. We assume that each generator has a different generation capacity, Pi ∈ [Pimin, Pimax], with Pimin ∈ [5, 150] and Pimax ∈ [150, 400]. We also assume that all generators know the total electrical energy demand of the system, P = 6000 MW. The heterogeneous constant step sizes satisfy αi ∈ [0.01, 0.015]. Starting from the topology of the network unaffected by random link failures, we assume that the reliability of each link under random channel effects is p = 0.9, and the time-varying adjacency matrix A(k) is generated as shown in Table 2. The tabulated simulation results make it clear that the proposed algorithm also scales well to large random networks. Similar to Simulation Example 1, when affected by random link failures, the variables of the algorithm fluctuate only at the beginning compared to the failure-free case.
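For quadratic costs without capacity limits, the optimum of this dispatch problem has a closed form via the equal-incremental-cost condition, which is useful as a centralized reference when checking distributed results. The coefficients below are drawn randomly from the stated ranges (the paper's actual 50-generator instance is not reproduced):

```python
import numpy as np

# Centralized reference solution for
#   min sum_i (a_i + b_i*P_i + c_i*P_i^2)  s.t.  sum_i P_i = D
# (capacity limits ignored). KKT gives equal incremental cost:
#   b_i + 2*c_i*P_i = lam  for every generator i.
rng = np.random.default_rng(0)                 # hypothetical instance
n, D = 50, 6000.0
b = rng.uniform(8.3391, 37.6968, n)
c = rng.uniform(0.0024, 0.0697, n)
lam = (D + np.sum(b / (2 * c))) / np.sum(1 / (2 * c))  # common marginal cost
P = (lam - b) / (2 * c)                                 # optimal dispatch
print(P.sum())  # matches the demand D = 6000
```

Any distributed algorithm claiming optimality should drive every generator's marginal cost bi + 2ciPi to this common value lam.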

The resource allocation problem, a special form of distributed optimization, is widely used in smart grids, communication networks, and economic systems. In the resource allocation problem, each agent can exchange allocation variables with other agents through the network, while the sum of the allocation variables of all agents must satisfy a global divisible resource constraint. The optimization goal is to optimize the global performance function while maintaining the global allocation balance. In practice, system models may be insufficiently accurate or the cost functions may even be unknown. Currently, there are few studies on distributed optimization problems with unknown cost functions, so it is urgent to design an algorithm that does not depend on a cost function model.

In this chapter, the economic dispatch problem in a smart grid with unknown cost functions is considered. The network contains n agents, each representing a bus. Each bus can obtain only the input and output of its own generation cost function and its local load. For this problem, a first-order distributed optimization algorithm is designed based on the extremum seeking method, and its convergence is analyzed via averaging analysis and Lyapunov stability theory. Meanwhile, to address the defect that the first-order algorithm oscillates too much near the optimal solution, a second-order optimization algorithm is designed by adding a low-pass filter, and the system is proved to be semiglobally practically asymptotically stable by the same technical means. Finally, the effectiveness of the algorithms is demonstrated by simulation experiments.

Consider n buses cooperating to solve the following economic dispatch problem, where each bus i ∈ {1, 2, …, n} connects a generator and a load. xi ∈ Rm denotes the output of the generator connected to bus i, di ∈ Rm denotes the load of bus i, and x = [x1, …, xn]T is the output vector of the generators for the whole network. Each generator i has its own generation cost function fi(xi), but the form of fi(xi) and the gradient information of the cost function are unknown; only the value of the cost fi(xi) is available. For convenience of the subsequent analysis, assume that m = 1. For this problem, the generators seek, in a distributed cooperative manner, a globally optimal allocation that satisfies the global supply-demand balance constraint ∑ni=1 xi = ∑ni=1 di and minimizes the total generation cost.

The Lagrangian function constructed for the problem is as follows, where D = [d1, …, dn] is the load vector and Z = [z1, …, zn] collects the Lagrange multipliers.
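To make the role of the multipliers concrete, the following sketch runs saddle-point (primal-dual) dynamics on a toy instance of the balance-constrained problem, using a single multiplier z for the aggregate constraint rather than the paper's per-node multipliers; the quadratic costs and loads are illustrative assumptions.

```python
import numpy as np

# Saddle-point dynamics for  min sum_i f_i(x_i)  s.t.  sum_i x_i = sum_i d_i:
#   x' = -grad_x L(x, z),   z' = +grad_z L(x, z)
q = np.array([1.0, 2.0, 3.0, 4.0])     # f_i(x) = 0.5*q_i*x^2
d = np.array([1.0, 1.0, 2.0, 2.0])     # local loads, total demand 6
x, z, h = np.zeros(4), 0.0, 0.01
for _ in range(30000):
    x_dot = -(q * x + z)               # descent on the primal variables
    z_dot = x.sum() - d.sum()          # ascent on the constraint violation
    x, z = x + h * x_dot, z + h * z_dot
print(x.sum())  # supply-demand balance: close to 6
```

At the saddle point the marginal costs qi·xi all equal −z, which is the continuous-time analog of the equal-incremental-cost condition used in dispatch.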

In this section, the proposed first-order and second-order distributed extremum seeking algorithms are simulated, and the first-order algorithm is applied to the actual IEEE 6-bus model to verify its effectiveness. First, numerical simulations of the first-order and second-order distributed extremum seeking algorithms are presented. Consider the economic dispatch problem of a smart grid containing 4 buses, where each bus has a generator without a generation capacity limit and a local load Li, i ∈ {1, 2, 3, 4}. The 4 buses exchange information over the communication network (dashed lines) and transmit power over the physical network (solid lines), as shown in Figure 2.

First, we verify the validity of the first-order algorithm. Choose X0 = [1, 2, 3, 4]T and Z0 = [0, 0, 0, 0]T, with parameters b = 0.02 and k = 5, and a sinusoidal signal with frequency ω = 100. The trajectories of the generators are shown in Figure 3, in which the dashed line indicates the optimal generation allocation scheme. Evidently, the power generation finally stabilizes in a neighborhood of the optimal allocation, indicating the effectiveness of the algorithm. Compared with the corresponding results in Simulation Example 1, the proposed algorithm still converges to the optimal solution of the optimization problem under large-scale stochastic networks with only a slight increase in the number of iterations.
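The core of such a scheme, stripped to one scalar bus, is the classical sinusoidal-dither extremum seeking loop: probe the unknown cost, demodulate the measurement, descend. The cost function and the parameters below (gain k, dither amplitude b, frequency w) are illustrative stand-ins, not the paper's 4-bus setup; note that only cost values are used, never gradients.

```python
import math

# First-order extremum seeking sketch for one unknown scalar cost.
f = lambda x: (x - 2.0) ** 2            # unknown cost, minimizer x* = 2
k, b, w, h = 5.0, 0.1, 50.0, 1e-3       # gain, dither amplitude, frequency, step
xh, t = 0.0, 0.0                        # estimate of the minimizer
for _ in range(100_000):
    u = xh + b * math.sin(w * t)        # perturbed input fed to the plant
    y = f(u)                            # measured cost value only
    xh -= h * k * math.sin(w * t) * y   # demodulation: averages to -k*(b/2)*f'(xh)
    t += h
print(xh)  # settles in a small neighborhood of x* = 2
```

Averaging the demodulated term over one dither period leaves approximately −k(b/2)f′(xh), so the estimate performs gradient descent on a cost it can only sample, which is exactly why the trajectories stabilize in a neighborhood of (rather than exactly at) the optimum.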

2.3. Convergence Test

Next, the validity of the second-order algorithm is verified. The low-pass filter frequency is chosen as ωl = 160, and the other parameters are identical to those of the first-order algorithm. Similarly, the power generation of the generators eventually stabilizes in a neighborhood of the optimal generation allocation scheme, indicating the effectiveness of the algorithm. To highlight the respective advantages of the first-order and second-order algorithms, we compare the generation trajectories of the third generator under both. Observing the figures, both algorithms converge to a neighborhood of the optimal solution. In Figure 4, the trajectory of the third generator under the second-order algorithm oscillates between 3.95 and 4.15, while under the first-order algorithm it oscillates between 3.9 and 4.3; in terms of convergence quality, the second-order algorithm has the smaller oscillation range and the better result. As for convergence time, the second-order algorithm stabilizes in about 500 seconds, whereas the first-order algorithm stabilizes in about 250 seconds, so the first-order algorithm converges faster. To sum up, the first-order algorithm is the better choice when faster convergence is desired, and the second-order algorithm is the better choice when a tighter convergence result is desired.
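The smoothing effect of the added filter can be sketched by low-pass filtering the demodulated signal before it drives the estimate. The cost, gains, and filter frequency below are illustrative (the filter frequency wl is simply kept below the dither frequency w, as the two-time-scale design requires), not the paper's values.

```python
import math

# Extremum seeking with a low-pass filter on the demodulated signal,
# sketching the second-order variant's ripple-reduction idea.
f = lambda x: (x - 2.0) ** 2
k, b, w, wl, h = 5.0, 0.1, 50.0, 10.0, 1e-3
xh, g, t = 0.0, 0.0, 0.0                      # estimate and filter state
for _ in range(100_000):
    y = f(xh + b * math.sin(w * t))           # measured cost only
    g += h * wl * (math.sin(w * t) * y - g)   # low-pass filtered demodulation
    xh -= h * k * g                           # smoother descent direction
    t += h
print(xh)  # same neighborhood of x* = 2, with reduced ripple
```

The filter attenuates the dither-frequency content of the demodulated signal while passing its slow average, which is the mechanism behind the smaller oscillation range observed for the second-order algorithm, at the cost of the extra filter time constant in the loop.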

3. Results and Analysis

3.1. Simulation and Results

It can be seen that the convergence rate of the ASY-NHDA algorithm is related to the step size and momentum parameters. For the step size, we choose α in the range [0.05, 0.95] with β = 0.04 fixed to test the effect of the step size on the convergence performance of the algorithm. For the momentum parameter, we choose β in the range [0, 0.09] with α = 0.4 fixed to test its effect on the convergence performance. For both effects, we plot the Ek curves of the algorithm at the 40,000th iteration for different step sizes and different momentum parameters, respectively. Figure 5 shows that the ASY-NHDA algorithm achieves its best convergence when α = 0.85 and β = 0.085. For synchronous communication, in order to keep the local clocks of the agents aligned with the global clock, the communication channels between the agents must be kept in operation at all times.
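The benefit of tuning momentum on top of a gradient step can be seen even in a single-machine sketch of the double acceleration idea, combining a Nesterov look-ahead with a Heavy-Ball term; this is not the distributed ASY-NHDA update itself, and the objective, parameters, and iteration count are illustrative.

```python
import numpy as np

# Double-momentum sketch on f(x) = 0.5 * x^T A x, minimizer x* = 0.
A = np.diag([1.0, 10.0, 100.0])        # ill-conditioned quadratic
grad = lambda x: A @ x

def run(alpha, beta, gamma, iters=500):
    """Return the final error norm for given step and momentum parameters."""
    x_prev = x = np.ones(3)
    for _ in range(iters):
        y = x + beta * (x - x_prev)                          # Nesterov look-ahead
        x_new = y - alpha * grad(y) + gamma * (x - x_prev)   # Heavy-Ball term
        x_prev, x = x, x_new
    return float(np.linalg.norm(x))

print(run(0.009, 0.0, 0.0))    # plain gradient descent
print(run(0.009, 0.7, 0.1))    # both momentum terms: much smaller final error
```

Sweeping beta and gamma in such a harness mirrors the parameter study in Figure 5: too little momentum leaves the slow eigendirections underdamped, while too much destabilizes the iteration.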

Currently, distributed synchronous optimization algorithms are relatively mature. They can be broadly classified into two categories based on the network: algorithms in an undirected network environment and algorithms in a directed network environment.

It is easy to find that the global cost function f(x)=∑Ni=fi(x) is a convex function. Each intelligence only knows its own local cost function, and the ultimate goal is to achieve state consistency and minimize f(x) by exchanging state information and local state updates for all the intelligences. The initial state of the intelligence xi is chosen to be x(0) = [1, 2, −3, 1, 0]T, taking the coefficients in the theorem as α=β = 1. The trajectory of the smart body over time is shown in Figure 6. The state indices of all the smart bodies converge to the global optimum point x = 0.4218, and the multi-intelligent distributed optimization problem is solved. For the simulation of the discrete system, taking the step size ∈ = 0.4 to verify the theorem, all the intelligences can also converge to the global optimum . In this chapter, we will provide a synchronization algorithm for solving a distributed optimization problem in an undirected network and provide a convergence analysis of the algorithm. Then, we illustrate how to efficiently convert an algorithm based on an undirected network into an algorithm based on a directed network. Finally, we design a simultaneous algorithm for solving distributed optimization problems under directed networks and provide its convergence analysis.
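As a concrete discrete-time instance of such a synchronous scheme over an undirected network, the sketch below runs standard gradient tracking with a constant step on five agents with quadratic local costs; the mixing matrix, step size, and costs are illustrative assumptions, not necessarily the chapter's exact algorithm.

```python
import numpy as np

# Gradient-tracking sketch on a 5-node undirected path graph.
n = 5
A = np.eye(n, k=1) + np.eye(n, k=-1)        # path-graph adjacency
Lap = np.diag(A.sum(1)) - A
W = np.eye(n) - 0.25 * Lap                  # doubly stochastic mixing matrix
c = np.array([1.0, 2.0, -3.0, 1.0, 0.0])    # f_i(x) = 0.5*(x - c_i)^2
grad = lambda x: x - c
alpha = 0.1
x = c.copy()
y = grad(x)                                  # tracks the average gradient
for _ in range(1000):
    x_new = W @ x - alpha * y               # mix, then step along tracked gradient
    y = W @ y + grad(x_new) - grad(x)       # update the gradient tracker
    x = x_new
print(x)  # all agents at the global minimizer mean(c) = 0.2
```

The auxiliary variable y preserves the invariant that its average equals the average local gradient, which is what lets a constant step size achieve exact (not merely neighborhood) convergence.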

The multi-agent distributed optimization problem considers minimizing the global cost function of the agents; the average consensus problem requires the states of all agents to reach the average of their initial states; and containment control requires all agents to converge into the convex hull formed by the leaders. All of these impose clear requirements on the final steady state of the multi-agent system. Therefore, the study of average consensus and containment control is of great help to the study of distributed optimization algorithms for multi-agent systems. Most existing studies use variable substitution to transform the multi-agent system problem into a stability analysis problem, then use Lyapunov’s method to analyze the system stability, and finally obtain the convergence conditions of the system.

As a comparison, the state trajectories of the agents under general strongly connected directed graphs are computed using the algorithm from the literature, with the same parameters as before. Although all the agents reach the same state, the final convergence value is not the global optimal point, so that algorithm cannot solve the distributed optimization problem under general connected directed graphs. As for the “zero-gradient-sum” algorithm under a general strongly connected directed graph, the communication topology of the system and the local cost function of each agent are the same as in the previous example. The initial state of the agents is set to x(0) = [0, 0, 1.455, 0, 0]T, which clearly satisfies ∑ni=1 ∇fi(xi(0)) = 0. The simulation results are shown in Figure 7. All agents converge to the global optimal point, and the distributed optimization problem under the general strongly connected directed graph is solved. For the distributed optimization problem over directed networks, we also propose a distributed synchronous momentum-accelerated optimization algorithm. Under the premise that the local objective functions are strongly convex and Lipschitz continuous, the algorithm uses heterogeneous step sizes and row- and column-stochastic weight matrices to characterize the local communication between nodes, which effectively overcomes the imbalance of directed networks.

The above optimization problems usually require the individual agents to communicate with each other to achieve the global optimization goal. The optimization of such network systems usually cannot be achieved by the local computation of a single agent, because the global cost function of the whole network is related to the local cost function of every agent, and the feasible sets of individual agents may also be coupled through global constraints; hence, the optimization must be solved by all agents communicating cooperatively. The study of multi-agent distributed algorithms can help solve problems of optimization, games, and resource allocation in various kinds of network systems. Compared with centralized control strategies, distributed control strategies are more flexible and fault-tolerant, especially for networked systems with coupling.

For example, in a smart grid scenario, the amount of power shared by the grid is limited, and how to allocate the power to maximize the revenue with limited resources can be achieved by distributed optimization algorithms.

4. Real-Time Scheduling

An undirected graph requires symmetric, bidirectional information transmission between agents, and research on it is mainly conducted under ideal conditions, whereas in real systems many communication methods, such as broadcast communication, are unidirectional and easily disturbed by external factors such as packet loss. It is therefore very meaningful to study distributed optimization algorithms under directed graphs. For directed graphs, a broadcast-based subgradient-push algorithm has been developed using the push-sum protocol with column-stochastic matrices and diminishing step sizes; it converges at rate O(ln k/k) under the assumptions that the subgradients are bounded and the directed graph is strongly connected.
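The mechanics of such a push-sum-based method can be sketched on a small directed graph: each node splits its mass equally over its out-neighbors (making the weight matrix column-stochastic), and the ratio of the two pushed quantities removes the bias that graph imbalance would otherwise introduce. The graph, costs, and step schedule below are illustrative assumptions.

```python
import numpy as np

# Subgradient-push sketch on a 4-node strongly connected directed graph.
n = 4
C = np.zeros((n, n))
C[[0, 1, 2], 0] = 1 / 3          # node 0 pushes to out-neighbors {0, 1, 2}
for j in (1, 2, 3):
    C[j, j] = 0.5                # keep half ...
    C[(j + 1) % n, j] = 0.5      # ... push half along the directed ring
c = np.array([1.0, 2.0, 3.0, 4.0])   # f_i(z) = 0.5*(z - c_i)^2, optimum 2.5
grad = lambda z: z - c
x, y = np.zeros(n), np.ones(n)
for k in range(1, 20000):
    w = C @ x
    y = C @ y                    # scalar weights correcting graph imbalance
    z = w / y                    # de-biased local estimates
    x = w - (1.0 / k) * grad(z)  # diminishing step alpha_k = 1/k
print(z)  # all nodes near the global minimizer 2.5
```

Without the y-correction, nodes with larger in-flow would be over-weighted and the iterates would agree on a skewed average rather than the true optimum.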

A distributed model predictive control scheme has been established to achieve UAV obstacle avoidance based on a prediction error that induces sufficient stability conditions. Parameter estimation and target localization problems arise in sensor networks, and wireless sensor networks are often used in daily life to monitor and control fires; such problems can be cast as convex optimization problems, and various distributed optimization algorithms have been proposed to solve them. The local computation of a single individual cannot solve the global optimization problem of some network systems, because, on the one hand, the global cost function of the network is related to the local cost functions of all network individuals and, on the other hand, the feasible sets of the node state variables may be restricted by global constraints; therefore, the optimization decision problem of the network system must be solved cooperatively by all network individuals using all of their relevant data.

The boundary coverage problem of wireless sensors has been studied in the literature, where a distributed motion coordination algorithm is presented that autonomously forms a sensor barrier between two given landmarks to achieve barrier coverage; the method is important for dealing with critical situations such as fires. With the rapid development of computer networks and communication technologies, distributed optimization of multi-agent systems has attracted the attention of researchers in many fields with its robustness, autonomy, and flexibility, and a number of important research results have been achieved. However, real systems involve many complicating factors and are easily disturbed by the external environment, so some problems in distributed optimization research remain unsolved, and designing low-cost distributed optimization algorithms that match real systems has a far-reaching impact on guiding practical life, as shown in Figure 8. Therefore, continuing to study distributed optimization of multi-agent systems in depth is of great significance.

In a real system, various factors (e.g., limited communication speed, the time required to compute control inputs, and the time required to execute input commands) mean that there is always a time delay in the information transmission process. Overall, the time delay is an important property that reflects the drive, control, communication, and computation of real systems. Because of latency, the performance of the whole system is generally degraded to varying degrees. It is therefore necessary to investigate the impact of latency on system performance and stability so as to more effectively mitigate its adverse effects. In the existing literature, the two most common types of latency are communication delay and input delay. To be precise, assume that the time required for agent i to receive a message from agent j is Tij, the communication delay from agent j to agent i. More intuitively, for the first-order dynamical system, i = 1, …, n, the closed-loop dynamics under a fixed network topology can be expressed as ẋi(t) = ∑j aij(xj(t − Tij) − xi(t)). Agent i always has access to its own instantaneous state information, so the input delay can be viewed equivalently as the sum of the computation time and the execution time, as shown in Figure 9.
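A minimal simulation of the delayed protocol just described, with a uniform delay and an Euler discretization (the graph, delay length, and step are illustrative), shows that consensus can still be reached when each agent combines its own current state with its neighbors' delayed states:

```python
import numpy as np

# Consensus with a uniform communication delay of T steps:
#   x_i' = sum_j a_ij * (x_j(t - T) - x_i(t))
n, T, h = 3, 10, 0.05
A = np.ones((n, n)) - np.eye(n)                # complete graph, a_ij = 1
hist = [np.array([1.0, 2.0, 6.0])] * (T + 1)   # state history buffer
for _ in range(2000):
    x, x_del = hist[-1], hist[0]               # current and T-step-delayed states
    x_dot = A @ x_del - A.sum(1) * x           # delayed neighbors, own current state
    hist.append(x + h * x_dot)
    hist.pop(0)
x = hist[-1]
print(x.max() - x.min())  # disagreement shrinks: consensus despite the delay
```

Note that with delayed neighbor states the agreement value generally depends on the delay and the stored history, so this protocol reaches consensus but not necessarily the initial average.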

Most existing studies on distributed convex optimization algorithms assume that the information interaction between all agents and their neighbors is synchronized; i.e., all agents in the system share a virtual global clock, and under its unified regulation they complete the information exchanges with their neighbors and their own state updates. Although synchronous communication is simple and easy to implement, this mechanism inevitably leads to information blockage when several agents or the communication channels between them fail, which makes synchronized updates impossible to guarantee; the corresponding optimization goal then becomes difficult to achieve.

For energy-limited agents, channel utilization is very low, and the energy consumed by communication is much larger than that consumed by computation, which is clearly not cost-effective. Moreover, in practical applications, the agents in the network may suffer link failures and communication delays, which make it difficult to maintain a consistent global clock, so the update processes of the algorithm executed by the individual agents may not be synchronized. It is therefore evident that studying asynchronous communication and the asynchronous implementation of algorithms is necessary.

5. Conclusion

In this article, based on an undirected communication topology, we study the distributed optimization problem with various constraints. For the set constraints, the projection method is used; for the inequality constraints, the penalty function method is used to construct a sigmoid function, which removes the restriction on the initial conditions of the agents. Based on the primal-dual method and saddle-point dynamics, a distributed optimization algorithm is designed, and Lyapunov stability theory is used to prove that the trajectories of all agents converge to a consistent optimal solution that satisfies the constraints and minimizes the sum of the cost functions. By averaging analysis, it is proved that the averaged system is exponentially stable and that the original system is a perturbation of the averaged system. The performance of the first-order algorithm is then compared with that of the second-order algorithm: the first-order algorithm converges faster, while the second-order algorithm has the smaller oscillation range. Finally, the proposed algorithm is applied to the economic dispatch problem of the smart grid, and simulation experiments demonstrate its effectiveness and applicability. Owing to the limitations of this work, the distributed optimization problem of multi-agent systems is not treated comprehensively, and many shortcomings remain. In the future, average consensus and containment control, which are of great help to the study of distributed optimization algorithms, will be investigated further for multi-agent systems.

Data Availability

The data used to support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares no conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.