Abstract

The brain has the most complex structure and functions among the organs of living organisms, and brain networks provide an effective way to analyze brain function and detect brain disease. Brain networks contain important neural unit modules, which carry many meaningful biological insights, so it is appealing to find these modules and obtain their affiliations. In this study, we present a novel method, abbreviated as UPSO, that integrates the uniform design into particle swarm optimization to find community modules of brain networks. UPSO differs from existing methods in that it is, to our knowledge, the first to apply this combination to community module detection. Several brain networks generated from functional MRI for studying autism are used to verify the proposed algorithm. Experimental results obtained on these brain networks demonstrate that UPSO finds community modules efficiently and outperforms the competing methods in terms of modularity and conductance. Additionally, the comparison between UPSO and PSO shows that the uniform design plays an important role in improving the performance of UPSO.

1. Introduction

Graph theory is a very helpful mathematical tool in the field of brain network analysis [1–3]. A brain can be represented as a modular network [4, 5] composed of important neural unit modules, which provide rich and useful information and exhibit the small-world properties of brain networks [6]. These modules are known as community modules. In brain networks, each vertex denotes a region of interest (ROI) [7], and each edge and its weight represent a connection and its strength, respectively [8–10].

Community detection methods are frequently used to find community modules. Girvan and Newman proposed the concept of modularity [11–13], which is the most widely used and best-known metric. A larger modularity represents a better community partition. Modularity-based community detection methods find the best community modules by seeking the maximum modularity; that is, the methods terminate when the modularity is maximal. Therefore, community detection can be addressed by means of optimization methods. The FastQ [14] community detection method uses greedy optimization to maximize modularity: it repeatedly joins communities together in pairs, choosing at each step the join that yields the greatest increase in modularity. Danon et al.'s [15] community detection method is a modification of FastQ in which communities of different sizes are treated equally. The Louvain [16, 17] community detection method first calculates the gain in modularity obtained by moving a node into the community of each of its neighbors; the node is then moved into the community that yields the maximum gain.

Particle swarm optimization (PSO) [18–20], one of the swarm intelligence optimization algorithms, was first put forward by Eberhart and Kennedy [21, 22]. It simulates the foraging process of birds. Each bird (particle) searches the feasible solution space individually and shares its individual best information with the other particles. The swarm obtains the global optimal solution by comparing the best solutions of all particles in the swarm. PSO can reach an optimal solution quickly; however, it suffers from premature convergence [23].

The uniform design belongs to the category of pseudo-Monte Carlo methods. It can generate solutions scattered uniformly over the vector space, and the solutions are independent of each other [24–26]. The uniform design can be applied to many problems, including bio-inspired intelligent optimization. Zhang et al. [27] combined the uniform design and the artificial bee colony to find the communities of brain networks. Zhang et al. [26] introduced the uniform design into association rule mining and presented a multiobjective association rule mining algorithm based on the attribute index and the uniform design. Leung and Wang [24] integrated the uniform design and the multiobjective genetic algorithm to obtain Pareto optimal solutions distributed uniformly over the Pareto frontier. Zhu et al. [28] combined the uniform design and PAM to find the Pareto optimal solutions of the multiobjective particle swarm optimization. Dai and Wang [29] presented a new decomposition-based evolutionary algorithm with the uniform design. Liu et al. [30] proposed a hybrid genetic algorithm based on variable grouping and the uniform design for global optimization problems. Tan et al. [31] adopted the uniform design to set the aggregation coefficient vectors of the subproblems and proposed a uniform design multiobjective evolutionary algorithm based on decomposition. Feng et al. [32] presented uniform dynamic programming, which alleviates the dimensionality problem of dynamic programming by introducing the uniform design into dynamic programming.

There are only a few reports on community detection in brain networks in the literature. Liao et al. [33] utilized U-Net-based deep convolutional networks to identify and segment brain tumors. Williams et al. [34] applied both the Louvain [16] and Infomap [35] community detection algorithms to identify modules in noisy or incomplete brain networks. Zhang et al. [27] utilized the artificial bee colony with the uniform design to detect community modules of brain networks. Wang et al. [36] used multiview nonnegative matrix factorization to detect modules in multiple biological networks.

This study presents a novel method to find community modules of brain networks by integrating PSO with the uniform design. PSO is used to maximize modularity, while the uniform design is used to alleviate premature convergence of PSO by generating sampled points scattered evenly over the vector space.

The rest of this study is organized as follows: Section 2 describes the preliminaries of UPSO. Section 3 introduces two evaluation metrics. The dataset and the preprocessing method to be used are described in Section 4. The details of UPSO are shown in Section 5. The comparison between UPSO and several competing algorithms is illustrated in Section 6. The conclusion and future work are described in Section 7.

2. Preliminaries

In this section, we describe PSO and the uniform design.

2.1. Particle Swarm Optimization

In a d-dimensional search space, the position and velocity of the i-th particle are represented as $x_i = (x_{i1}, x_{i2}, \ldots, x_{id})$ and $v_i = (v_{i1}, v_{i2}, \ldots, v_{id})$, respectively, where $i = 1, 2, \ldots, N_{pop}$, in which $N_{pop}$ denotes the population size. The optimal solution found by the i-th particle is called the individual optimum, while the optimal solution of the whole swarm is called the global optimum; they are denoted as $p_i$ and $p_g$, respectively. The following formulas are utilized to update the velocity and position of each particle in the swarm [21, 22], respectively:

$$v_i(t+1) = \omega v_i(t) + c_1 r_1 (p_i - x_i(t)) + c_2 r_2 (p_g - x_i(t)), \quad (1)$$

$$x_i(t+1) = x_i(t) + v_i(t+1), \quad (2)$$

where $i = 1, 2, \ldots, N_{pop}$; $\omega$ is called the inertia weight coefficient, reflecting the ability to track the previous velocity; $c_1$ and $c_2$ are called the acceleration coefficients of the individual and the global optimum, respectively, and are commonly set to 2; and $r_1$ and $r_2$ are two random numbers distributed uniformly in (0, 1).

From the theoretical analysis of a PSO algorithm, the trajectory of a particle converges to a weighted mean of $p_i$ and $p_g$. Whenever the particle converges, it "flies" to the individual best position and the global best position [37]. According to formulas (1) and (2), the individual optimum position of each particle gradually moves closer to the global optimum position. Therefore, all the particles may converge to the global optimum position.
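As a concrete illustration, the update rules of formulas (1) and (2) can be sketched in a few lines. This is a generic PSO step under standard parameter choices, not the exact UPSO implementation described later:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    """One velocity/position update (formulas (1) and (2)).

    x, v  : (n_particles, d) current positions and velocities
    pbest : (n_particles, d) individual best positions
    gbest : (d,) global best position
    """
    r1 = rng.random(x.shape)   # uniform random numbers in [0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```

With `w` fixed here for brevity; the proposed algorithm instead decreases the inertia weight linearly over iterations (Section 5.2, Step 7).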

2.2. Uniform Design

The uniform design is an experimental design method. Its main objective is to sample a small set of points from a given set of points such that the sampled points are uniformly scattered.

Let $n$ be the number of factors and $q$ be the number of levels per factor. When n and q are given, the uniform design selects q combinations from all $q^n$ possible combinations such that these combinations are scattered uniformly over the space of all possible combinations. The selected q combinations are expressed as a uniform array $U(n, q) = [U_{l_1, l_2}]_{q \times n}$, where $U_{l_1, l_2}$ is the level of the $l_2$-th factor in the $l_1$-th combination and can be calculated by the following formula [24–26, 28, 29, 38]:

$$U_{l_1, l_2} = (l_1 \sigma^{l_2 - 1} \bmod q) + 1, \quad (3)$$

where $\sigma$ is a parameter given in Table 1.
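A minimal sketch of building the uniform array from formula (3). The pairing σ = 2 with q = 5 used in the test below is purely illustrative and assumed to be a valid entry of Table 1:

```python
import numpy as np

def uniform_array(n, q, sigma):
    """Build the q x n uniform array of formula (3):
    U[l1, l2] = (l1 * sigma**(l2-1) mod q) + 1.

    n     : number of factors
    q     : number of levels per factor (a prime)
    sigma : table-given parameter (Table 1 in the text)
    """
    U = np.empty((q, n), dtype=int)
    for l1 in range(1, q + 1):
        for l2 in range(1, n + 1):
            U[l1 - 1, l2 - 1] = (l1 * sigma ** (l2 - 1)) % q + 1
    return U
```

For a prime q and a suitable σ, each column of the array is a permutation of 1, …, q, which is what makes the selected combinations spread evenly.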

Based on the uniform design, a crossover operator is as follows [24]. It quantizes the solution space defined by two parents into a finite number of points and then applies the uniform design to select a small sample of uniformly scattered points as the potential offspring.

Consider two parents $x_1 = (x_{11}, x_{12}, \ldots, x_{1d})$ and $x_2 = (x_{21}, x_{22}, \ldots, x_{2d})$. The minimal and maximal values of each dimension of $x_1$ and $x_2$ define a new solution space $[l, u]$, denoted as follows:

$$l = \big(\min(x_{11}, x_{21}), \ldots, \min(x_{1d}, x_{2d})\big), \quad u = \big(\max(x_{11}, x_{21}), \ldots, \max(x_{1d}, x_{2d})\big). \quad (4)$$

Each domain $[l_j, u_j]$ of $[l, u]$ is quantized into $Q_1$ levels $\alpha_{j1}, \alpha_{j2}, \ldots, \alpha_{jQ_1}$, where $Q_1$ is a predefined prime number and $\alpha_{ji}$ is given as follows:

$$\alpha_{ji} = l_j + (i - 1)\,\frac{u_j - l_j}{Q_1 - 1}, \quad i = 1, 2, \ldots, Q_1. \quad (5)$$

Then, the uniform design is applied to select a sample of points as the potential offspring. The crossover operator on two parents $x_1$ and $x_2$ thus produces $Q_1$ offspring scattered evenly over the vector space spanned by $x_1$ and $x_2$. More details of the algorithm can be obtained from references [24, 25].
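The quantize-then-sample crossover can be sketched as follows. The pairing Q1 = 5 with σ = 2 is an illustrative assumption, and applying the level formula cyclically when the dimension d exceeds the factors supported by Table 1 is a simplification of the original scheme in [24]:

```python
import numpy as np

def uniform_crossover(p1, p2, q1=5, sigma=2):
    """Uniform-design crossover sketch: quantize the box spanned by
    parents p1 and p2 into q1 levels per dimension (formula (5)), then
    pick q1 offspring whose level indices follow formula (3)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)   # formula (4)
    d = p1.size
    offspring = np.empty((q1, d))
    for l1 in range(1, q1 + 1):
        for l2 in range(1, d + 1):
            level = (l1 * sigma ** (l2 - 1)) % q1 + 1  # level in 1..q1
            step = (hi[l2 - 1] - lo[l2 - 1]) / (q1 - 1)
            offspring[l1 - 1, l2 - 1] = lo[l2 - 1] + (level - 1) * step
    return offspring
```

Each offspring lies inside the box spanned by the two parents, and along each dimension the q1 offspring occupy q1 distinct quantized levels.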

3. Evaluation Metrics

There exist many evaluation metrics for community modules of complex brain networks. In this study, we adopt the following metrics.

3.1. Modularity [27]

The modularity metric is a statistic that quantifies the degree to which a network may be divided into clearly delineated groups [39, 40]. Newman et al. introduced the modularity function and modularity matrix to avoid the influence of random factors and thus obtain better divisions of the community structure [12, 13, 41]. The modularity is the fraction of edges falling within communities minus the expected fraction in an equivalent network with edges placed at random. It can be expressed as follows [42, 43]:

$$Q = \frac{1}{2m} \sum_{i,j} \left( a_{ij} - \frac{k_i k_j}{2m} \right) \delta(i, j), \quad (6)$$

where $\delta(i, j) = 1$ if vertices $i$ and $j$ belong to the same community and 0 otherwise; $m$ denotes the number of edges in the network; $k_i$ is the degree of the vertex $i$; and $a_{ij}$ is the weight in the adjacent matrix $A$. Let $B = [a_{ij} - k_i k_j/(2m)]_{N \times N}$, which is called the modularity matrix. Formula (6) can be rewritten in matrix form as follows:

$$Q = \frac{1}{2m} \operatorname{Tr}(S^{\mathsf{T}} B S), \quad (7)$$

where the assignment matrix $S = [s_{ih}]_{N \times K}$, in which $s_{ih} = 1$ if vertex $i$ belongs to the community $h$ and 0 otherwise. The function $\operatorname{Tr}(\cdot)$ denotes the sum of the diagonal elements of a matrix.

A high modularity indicates a better partitioning of the graph. The search for optimal modularity Q is an NP-hard problem [44, 45] because the space of possible partitions grows faster than any power of system size.
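Formula (6) can be computed directly from the adjacency matrix and a vector of community labels; a dense-matrix sketch:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q (formula (6)) for a symmetric adjacency
    matrix A and one community label per vertex."""
    A = np.asarray(A, float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                  # (weighted) vertex degrees
    two_m = A.sum()                    # 2m: twice the total edge weight
    B = A - np.outer(k, k) / two_m     # modularity matrix
    delta = labels[:, None] == labels[None, :]   # same-community mask
    return (B * delta).sum() / two_m
```

For example, two disjoint 2-vertex cliques labeled as two communities give Q = 0.5, the maximum attainable for that graph.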

3.2. Conductance [27]

The conductance of a cut is a metric that compares the size of a cut (i.e., the number of edges cut) and the number of edges in either of the two subgraphs induced by that cut. The conductance of a graph is the minimum conductance value between all its clusters.

Consider a cut that divides G into K nonoverlapping clusters $C_1, C_2, \ldots, C_K$. The conductance of a given cluster is given by the following formula [43, 46]:

$$\phi(C_i) = \frac{\operatorname{cut}(C_i, \bar{C}_i)}{\min\big(a(C_i), a(\bar{C}_i)\big)}, \quad (8)$$

where K denotes the number of clusters, $a_{ij}$ is the weight in the adjacent matrix, $C_k$ represents the k-th cluster ($k = 1, 2, \ldots, K$), $\operatorname{cut}(C_i, \bar{C}_i)$ is the total weight of the edges crossing the cut, and $a(C_i)$ is the number of edges with at least one endpoint in $C_i$. This represents the cost of one cut that bisects G into the two vertex sets $C_i$ and $\bar{C}_i$ (the complement of $C_i$). Since we want to find K clusters, K − 1 cuts are needed, and the conductance of the whole clustering is the average value of those K − 1 cuts.

The conductance metric evaluates how difficult it is for a random walk to leave a cluster [40]: the harder it is for a random walk to leave a cluster, the more compact the cluster is. A low conductance indicates a better partitioning of the graph. The conductance metric ranges from 0 to 1, where 0 is the optimal score, meaning that each cluster corresponds to a maximal strongly connected component of the network.
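A sketch of cluster-averaged conductance. Here cluster volume is measured by total incident edge weight, one common reading of the definition above; the exact volume convention in the source may differ:

```python
import numpy as np

def conductance(A, labels):
    """Average cluster conductance: for each cluster C,
    phi(C) = cut(C, C_bar) / min(vol(C), vol(C_bar)),
    then average over clusters."""
    A = np.asarray(A, float)
    labels = np.asarray(labels)
    vol_total = A.sum()                # total (doubled) edge weight
    phis = []
    for c in np.unique(labels):
        mask = labels == c
        cut = A[mask][:, ~mask].sum()  # weight of edges leaving C
        vol_c = A[mask].sum()          # weight incident to C
        phis.append(cut / min(vol_c, vol_total - vol_c))
    return float(np.mean(phis))
```

For a 4-vertex path split into its two natural halves, each cluster has one crossing edge against a volume of three edge endpoints, giving an average conductance of 1/3.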

4. Dataset and Preprocessing

4.1. Dataset

A network is a mathematical representation of a real-world complex system and is determined by a collection of nodes (vertices) and links (edges) between pairs of nodes. Brain connectivity datasets comprise networks of brain regions connected by anatomical tracts or by functional associations. Nodes in brain networks usually represent ROIs, while links represent anatomical, functional, or effective connections [40]. A connectivity matrix (CM) is used to store the connectivity strength between all pairs of ROIs in a brain network [47].

The Autism dataset [6] comprises 175 individuals with autism spectrum disorder (ASD) and typically developing (TD) individuals, from which 79 resting-state functional MRI (rsfMRI: 42 ASD and 37 TD) brain networks and 94 diffusion tensor imaging (DTI: 51 ASD and 43 TD) brain networks were acquired. The dataset can be obtained from the UCLA multimodal connectivity database (http://umcd.humanconnectomeproject.org) [47]. Each rsfMRI image is represented by a 264 × 264 connectivity matrix (CM), in which each value is the z-transformed Pearson correlation coefficient (PCC) [6].

In this study, 79 rsfMRI brain networks (42 ASD and 37 TD) are utilized to test the proposed algorithm.

4.2. Data Preprocessing

In this study, we conduct the following preprocessing steps on the above dataset:

(1) Reverse z-transformation is performed on the original CM to acquire the PCC connectivity matrix (PCM) according to the following formula:

$$r = \tanh(z), \quad (9)$$

where $z$ and $r$ denote the original and new values, respectively.

(2) Negative values in the PCM signify negative correlation between vertices. In this study, these negative elements are set to 0 to remove negative correlations. After the above two steps, all values in the PCM lie in [0, 1], and the PCM becomes a symmetric and nonnegative matrix.

(3) To eliminate data noise, this study adopts a thresholding method that removes all edges with weight less than a specified value $\varepsilon$; namely, if $a_{ij} < \varepsilon$, then $a_{ij} = 0$. In the later numerical experiment, $\varepsilon = 0.2$.
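The three preprocessing steps can be sketched as follows, assuming the reverse z-transformation is the inverse Fisher transform, r = tanh(z):

```python
import numpy as np

def preprocess(cm, epsilon=0.2):
    """Preprocess a z-transformed connectivity matrix:
    (1) reverse z-transformation (assumed inverse Fisher: r = tanh(z));
    (2) zero out negative correlations;
    (3) threshold: drop edges with weight below epsilon."""
    pcm = np.tanh(np.asarray(cm, float))   # step (1)
    pcm[pcm < 0] = 0.0                     # step (2)
    pcm[pcm < epsilon] = 0.0               # step (3)
    return pcm
```

The result is a symmetric, nonnegative matrix with all retained weights in [epsilon, 1).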

5. The Proposed Algorithm

In this study, we propose a novel algorithm for finding community modules of brain networks by integrating PSO with the uniform design (abbreviated as UPSO). Its coding and detailed steps are described as follows.

5.1. Coding

A brain network G can be represented as $G = (V, E)$, where $V$ is a set of $N$ vertices and $E$ is a set of weighted edges (arcs) among the $N$ vertices. The adjacent matrix of G is expressed as $A = [a_{ij}]_{N \times N}$, where $a_{ij}$ denotes the weight between vertices $i$ and $j$. From the above-mentioned dataset and data preprocessing, we can see that A is a symmetric and nonnegative matrix. The number of community modules and the centroid of a community module are denoted by $K$ and $CC_k$, respectively. In PSO, the position coding $x_i$ of a particle is expressed as

$$x_i = (CC_1, CC_2, \ldots, CC_K), \quad (10)$$

where $x_i$ is a $(K \times N)$-dimensional row vector and $i = 1, 2, \ldots, N_{pop}$ ($N_{pop}$ is the population size in PSO).
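Under this coding, decoding a particle position into K centroids and assigning each vertex to its nearest centroid (as done later in Step 2) might look like the following sketch, which assumes each centroid is an N-dimensional vector compared against the vertex's adjacency row:

```python
import numpy as np

def decode_and_assign(x, A, K):
    """Split a particle position x (length K*N) into K community
    centroids and assign each vertex (row of A) to the nearest one."""
    N = A.shape[0]
    centroids = x.reshape(K, N)
    # Euclidean distance of every adjacency row to every centroid
    dists = np.linalg.norm(A[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)        # community affiliation IDX
```

The returned label vector is exactly the community affiliation used by the modularity fitness in formula (6).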

5.2. Detailed Steps

The proposed algorithm UPSO utilizes the uniform design to obtain sampled points scattered evenly over the solution space: the initialization method based on the uniform design generates a group of suitable initial particles spread evenly over the solution space, and the crossover operator based on the uniform design produces offspring scattered uniformly over the space spanned by the two crossover parents. UPSO iteratively tries to improve a candidate solution in terms of modularity. By integrating the uniform design and PSO, it not only obviates the shortcoming of premature convergence in PSO but also acquires well-spread solutions. Moreover, it can find community modules in brain networks without knowing the number of community modules in advance. Its flow chart is illustrated in Figure 1.

The detailed steps of the proposed algorithm UPSO are described as follows.

Step 1. (generating a temporary initial swarm). The following operations are performed one after another:

(1) Let K = K + 1, where K denotes the number of community modules; its minimal and maximal values are 1 and N (the number of vertices), respectively. Scanning K from 1 to N allows the fittest number of community modules to be found.

(2) According to the swarm size Npop, the number of subintervals S and the swarm size of a subinterval Q0 are determined such that $S \times Q_0 \geq N_{pop}$, where S can be taken as $2^1$, $2^2$, $2^3$, etc., and Q0 is one of the prime numbers in the first column of Table 1. Any combination satisfying $S \times Q_0 \geq N_{pop}$ can be chosen.

(3) The initial-population generation algorithm based on the uniform design described in reference [28] is implemented to generate a temporary initial swarm Tmp_pop in terms of K, in which each element xi contains K community centroids.

Step 2. (calculating the fitness of the temporary initial swarm). For each particle pi in Tmp_pop, the following operations are performed in sequence:

(1) The K community centroids $CC_k$ are extracted from xi. For each vertex, the distances between its row of the adjacent matrix A described in Section 5.1 and each $CC_k$ are calculated.

(2) Each vertex is assigned to the closest community $C_k$ to obtain its community affiliation IDXi and K community modules.

(3) The modularity Q of the K community modules is calculated using formula (6) or (7) in terms of IDXi and is taken as the fitness of xi.

Step 3. (generating the initial swarm from the temporary initial swarm). According to the acquired fitness of each particle in the temporary initial swarm, the best Npop ones of the particles are selected as the initial swarm pop.

Step 4. (regulating each community module). For each particle position xi in pop, the following operations are performed in sequence.

The centroid of the community $C_k$ is updated according to the following formula:

$$CC_k' = \frac{1}{n_k} \sum_{i \in C_k} A_i, \quad (11)$$

where $k = 1, 2, \ldots, K$; $n_k$ is the number of vertices which belong to the community $C_k$; $A_i$ is the i-th row of the adjacent matrix; and $CC_k'$ is the new community centroid of $C_k$.

The K new community centroids form a new position, marked as $KC_i$, whose fitness and community affiliation are $Q(KC_i)$ and $KC\_IDX_i$, respectively.

If $Q(KC_i) > Q(x_i)$, then $x_i = KC_i$, $Q(x_i) = Q(KC_i)$, and $IDX_i = KC\_IDX_i$.
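The centroid regulation in Step 4 can be sketched as follows, reading the new centroid as the mean of the adjacency rows of the community's member vertices (our interpretation of formula (11)):

```python
import numpy as np

def update_centroids(A, idx, K):
    """Step 4 sketch: recompute each community centroid as the mean of
    the adjacency rows of its member vertices."""
    return np.vstack([A[idx == k].mean(axis=0) for k in range(K)])
```

Empty communities would need special handling (e.g., reseeding a centroid); that bookkeeping is omitted here.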

Step 5. (initializing the velocity si, individual optimal $pbest_i$, and global optimal $gbest$). The velocity si and individual optimal $pbest_i$ of the particle pi are initialized as its position xi, and the fitness of $pbest_i$ is set as $Q(x_i)$. The particle with the maximal fitness among all $pbest_i$ is taken as the global optimal $gbest$, which stores the best xi and Q of the swarm. Its community affiliation is stored into IDX.

Step 6. (increasing iterations and judging terminal conditions). Let t = t + 1, then judge whether terminal conditions are satisfied or not, where t denotes the t-th iteration and its initial value is 0. If K is known, and any of the terminal conditions is satisfied, the algorithm terminates and outputs the optimal solution and its community affiliation; otherwise, the algorithm moves to Step 7. Terminal conditions are described in Section 6.1.

Step 7. (computing the weight coefficient in PSO). The weight coefficient $\omega$ in PSO follows a linearly decreasing strategy [48, 49], indicated in the following formula:

$$\omega(t) = \omega_{max} - (\omega_{max} - \omega_{min}) \cdot \frac{t}{t_{max}}, \quad (12)$$

where $\omega_{max}$ and $\omega_{min}$ are the maximal and minimal values of $\omega$ and $t_{max}$ is the maximal number of iterations. In the later numerical experiment, $\omega_{max} = 0.9$ and $\omega_{min} = 0.4$.
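The linearly decreasing inertia weight is a one-liner; the bounds 0.9 and 0.4 below are the commonly recommended PSO values:

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight:
    w(t) = w_max - (w_max - w_min) * t / t_max."""
    return w_max - (w_max - w_min) * t / t_max
```

Early iterations favor exploration (large w), later iterations favor exploitation (small w).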

Step 8. (updating the velocity and position of each particle). To guide the moving trajectory of a particle by $KC_i$, formula (1) is modified into the following formula:

$$s_i(t+1) = \omega s_i(t) + c_1 r_1 (pbest_i - x_i(t)) + c_2 r_2 (gbest - x_i(t)) + c_3 r_3 (KC_i - x_i(t)), \quad (13)$$

where $c_3$ is the acceleration coefficient of $KC_i$ and $r_3$ is a random number distributed uniformly in (0, 1). The velocity and position of each particle in the pop are updated in terms of formulas (13) and (2), respectively.

Step 9. (calculating the fitness and regulating each community module). The fitness of each particle in the pop is calculated according to the operations in Step 2, and Step 4 is implemented to regulate each community module.

Step 10. (updating $pbest_i$, $gbest$, and IDX). For each particle in the pop, if $Q(x_i) > Q(pbest_i)$, then $pbest_i = x_i$ and $Q(pbest_i) = Q(x_i)$.

If $Q(pbest_i) > Q(gbest)$, then $gbest = pbest_i$, $Q(gbest) = Q(pbest_i)$, and $IDX = IDX_i$.

Step 11. (implementing the crossover operator based on the uniform design). For each particle in the pop, the following operations are performed in sequence:

(1) The crossover operator based on the uniform design is implemented on xi and $pbest_i$ to acquire $Q_1$ offspring scattered uniformly over the space spanned by them, and also on xi and $gbest$ to acquire another $Q_1$ offspring.

(2) The fitness of each offspring is calculated, and the best one is marked as $O_{best}$. The fitness and community affiliation of $O_{best}$ are expressed as $Q(O_{best})$ and $O\_IDX$, respectively.

(3) If $Q(O_{best}) > Q(x_i)$, then $x_i = O_{best}$, $Q(x_i) = Q(O_{best})$, $pbest_i = O_{best}$, and $IDX_i = O\_IDX$.

(4) If $Q(O_{best}) > Q(gbest)$, then $gbest = O_{best}$, $Q(gbest) = Q(O_{best})$, and $IDX = O\_IDX$.

Step 12. The algorithm is returned to Step 6.

Step 13. If K < N, the best Q and community affiliation IDX are saved and the algorithm returns to Step 1; otherwise, the algorithm outputs the optimal solution $gbest$, the community affiliation IDX, and the fittest K.

6. Numerical Results

In this study, we select four competing community detection algorithms to compare with UPSO: the spectral clustering [50], FastQ [14], Danon et al. [15], and Louvain [16] algorithms. FastQ, Danon, and Louvain are three commonly used community detection methods. Among the five algorithms, UPSO and the spectral clustering are stochastic search algorithms, while FastQ, Danon, and Louvain are deterministic search algorithms.

The parameter values of UPSO and the numerical results obtained by UPSO and four competing algorithms are described as follows.

6.1. Parameter Values

In this study, the parameters of UPSO are described as follows.

6.1.1. Parameters for PSO

The minimal and maximal inertia weight coefficients are $\omega_{min} = 0.4$ and $\omega_{max} = 0.9$ (the commonly recommended values in PSO); the acceleration coefficients c1, c2, and c3 are all equal to 2 (the recommended values in PSO); the population size Npop = 100; and the maximal number of iterations tmax = 100.

6.1.2. Parameters for the Uniform Design

As each rsfMRI image mentioned above is a 264 × 264 CM, we set the number of subintervals S to 4 (S can be $2^1$, $2^2$, $2^3$, ...); the number of sample points, i.e., the swarm size of each subinterval, Q0, is set to 31, because Q0 can be any value in Table 1 and the product of Q0 and S must be larger than the population size Npop, namely, $S \times Q_0 = 124 > N_{pop} = 100$. The parameter Q1 is set to 5 so that only 5 offspring are generated in the uniform-design crossover, to decrease time consumption.

6.1.3. Terminal Conditions

(1) The number of iterations t exceeds tmax.

(2) The number of consecutive iterations for which the best fitness remains unchanged, tno, is larger than or equal to 30% of tmax.

When any of the above two terminal conditions is satisfied, the algorithm terminates.

It is worth noting that the above parameter values are not fixed and can be changed according to different datasets. They are only one suitable setting and do not need to be fine-tuned.

As the spectral clustering needs to preestimate the number of community modules, it uses the same number of community modules as UPSO. The FastQ, Danon, and Louvain algorithms do not need to estimate the number of community modules; therefore, they use their default parameters.

6.2. Results
6.2.1. Comparisons of Evaluation Metrics

All 79 rsfMRI brain networks are utilized to test the performance of the five algorithms. Each algorithm was run independently 20 times, and the average values are compared. For the stochastic search algorithms, UPSO and the spectral clustering, we also compare their standard deviations (the values in parentheses in Tables 2 and 3). Tables 2 and 3, respectively, show the modularity and conductance metrics obtained by the five algorithms.

From Table 2, it can be observed that, for all 79 rsfMRI brain networks, the modularity metrics obtained by UPSO are the best among the five algorithms. This demonstrates that the proposed algorithm outperforms the other four competing algorithms in terms of modularity. The main reasons for UPSO's good results are as follows. Firstly, UPSO is a heuristic optimization algorithm, so it can search widely for a good solution. Secondly, as UPSO is a swarm intelligence optimization algorithm, it uses all the individuals in a swarm to search for the optimal solution, while the other four algorithms use only one individual. Last but not least, UPSO uses the uniform design to obtain solutions scattered evenly over the feasible solution space.

Among the five algorithms, the gaps between the results obtained by UPSO and the Louvain algorithm are much smaller than those between UPSO and the other three algorithms; UPSO and the Louvain algorithm even obtain identical results for the ASD90B and ASD95 brain networks. Thus, the Louvain algorithm is the most competitive of the four competing algorithms.

We can also see from Table 2 that the standard deviations obtained by UPSO and the spectral clustering are all very small compared to the average values obtained by them. This demonstrates that UPSO and the spectral clustering are both relatively stable in terms of modularity for 79 rsfMRI brain networks. Meanwhile, we can also observe that, for 65 of 79 brain networks, the standard deviations obtained by UPSO are less than or equal to those obtained by the spectral clustering. This demonstrates that UPSO has higher stability than the spectral clustering in terms of modularity.

From Table 3, we can see that UPSO obtains the best conductance metrics for most brain networks, but not for all 79, because the two metrics evaluate partitions from different perspectives. UPSO obtains the best conductance metrics for 50 brain networks, accounting for about 63% of the 79 brain networks. This shows that UPSO is superior to the other competing algorithms in terms of conductance and that UPSO can acquire better conductance metrics while ensuring the best modularity metrics. The numbers of brain networks for which the spectral clustering, FastQ, Danon, and Louvain algorithms obtain the best conductance metrics are 1, 6, 20, and 2, respectively. We can also observe that the best modularity metrics obtained by UPSO and the Louvain algorithm are relatively close, but the Louvain algorithm obtains the best conductance metric for just 2 brain networks. This demonstrates that UPSO outperforms the Louvain algorithm in terms of conductance.

From Table 3, we can also see that the standard deviations obtained by UPSO and the spectral clustering are all very small compared to the corresponding average values, similar to the data in Table 2. Namely, for the 79 rsfMRI brain networks, UPSO and the spectral clustering are relatively stable in terms of both the modularity and conductance metrics. Meanwhile, for 42 of the 79 brain networks, the standard deviations obtained by UPSO are less than or equal to those obtained by the spectral clustering. This demonstrates that UPSO has higher stability than the spectral clustering in terms of conductance, similar to the conclusion drawn from Table 2.

6.2.2. Comparisons of Other Perspectives

Besides the above-mentioned comparisons, we also evaluate the performances of UPSO from other perspectives, such as influences of the uniform design, comparisons with other heuristic algorithms, and complexity analysis.

To show the benefit of hybridizing the uniform design in PSO, we modify UPSO by removing the uniform design from UPSO. Namely, the initialization (Steps 1, 2, and 3) uses the random initialization method instead of the generation algorithm of the initial population based on the uniform design, and the crossover operator based on the uniform design (Step 11) is not performed. For brevity, the modified algorithm is called PSO. We compare the modularity metrics obtained by UPSO and PSO. The results are shown in Table 4.

To verify the performance of UPSO, we also compare it with ABC (artificial bee colony). Similar to PSO, ABC is also a heuristic algorithm. Table 4 also shows the results obtained by ABC.

From Table 4, we can clearly see that, for 67 of 79 brain networks, the modularity metrics obtained by UPSO are larger than those obtained by PSO. In comparison, there are just 4 brain networks for which the modularity metrics obtained by UPSO are less than those obtained by PSO. This fully demonstrates that the influence of the uniform design on improving the performance of UPSO is significant. Figures 2 and 3 in the next section also obviously illustrate the benefit of the uniform design.

By comparison of the modularity metrics obtained by UPSO and those obtained by ABC, it can be clearly seen from Table 4 that, for 79 brain networks, the modularity metrics of UPSO are all larger than those of ABC. This fully demonstrates that UPSO significantly outperforms ABC in terms of modularity. A comparison of PSO and ABC is the same as the comparison of UPSO and ABC. Namely, for 79 brain networks, the modularity metrics of PSO are all larger than those of ABC. It follows from the above that PSO is also superior to ABC for 79 rsfMRI brain networks even without the uniform design.

By a detailed analysis of the proposed algorithm UPSO, its computational complexity is obtained as follows: if the number of community modules K is pregiven or preestimated, the time complexity of UPSO is O(); otherwise, the time complexity of UPSO is O(), where tmax, Npop, and N, respectively, denote the maximal number of iterations, the population size, and the number of vertices in brain networks. Thus, unless it is absolutely necessary, UPSO often uses the pregiven K or the same K as that of the other methods to decrease its computational complexity.

6.2.3. Representative Brain Networks

According to different cases of the modularity and conductance metrics in Tables 2 and 3, two representative brain networks are chosen to demonstrate the performance of UPSO.

TD86C Brain Network. For the TD86C brain network, the best modularity and conductance metrics are both obtained by UPSO. Figure 4 illustrates the plot of the modularity metrics obtained by UPSO and PSO.

The community plot of the TD86C brain network is illustrated in Figure 2.

From Figure 4, it can be seen that the modularity metrics obtained by UPSO and PSO both converge to a stable state when the number of iterations increases. Meanwhile, we can also clearly see that the plot of UPSO is always above that of PSO after the third iteration. This obviously illustrates that the uniform design plays an important role in improving the performance of UPSO.

ASD104 Brain Network. For the ASD104 brain network, the best modularity is obtained by UPSO, while the best conductance metric is obtained by the Danon algorithm. Figure 3 illustrates the changing process of the modularity metrics obtained by UPSO and PSO with the number of iterations. Figure 5 illustrates the community plot of ASD104 brain networks.

We can clearly observe from Figure 3 that the plots of UPSO and PSO both rise as the number of iterations increases, which shows the process of searching for the optimal solution. However, the plot of UPSO is above or overlapping that of PSO throughout the iterating process. This illustrates that the influence of the uniform design is considerable.

7. Conclusions and Future Work

In this study, we design a particle swarm optimization algorithm with the uniform design (UPSO) for finding the community modules in brain networks. We run UPSO and several competing algorithms on 79 rsfMRI brain networks. The obtained results demonstrate that UPSO can find community modules with maximal modularity and clearly outperforms the other competing methods in terms of modularity. The comparison of UPSO and PSO shows that the uniform design plays an important role in improving the performance of UPSO. The comparison of PSO and ABC shows that PSO is superior to ABC for the 79 rsfMRI brain networks.

The proposed algorithm UPSO does not apply to very high-dimensional problems because it is likely to require a long execution time. To address this limitation, UPSO could be parallelized and deployed on a cloud computing platform. In addition, we plan to improve the algorithm further, for example, by designing a more efficient coding scheme to improve its convergence rate and stability.

Data Availability

The data we used can be publicly available at http://umcd.humanconnectomeproject.org.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Nos. 61841603, 61762087, and 61772552), Guangxi Natural Science Foundation (Nos. 2018JJA170050 and 2018JJA130028), Natural Science and Engineering Research Council of Canada (NSERC), and Open Foundation for Guangxi Colleges and Universities Key Lab of Complex System Optimization and Big Data Processing (No. 2017CSOBDP0301).