Abstract
With the prevalence of online social networks, the potential threat of misinformation has greatly increased. It is therefore important to study how to effectively control the spread of misinformation. Publishing the truth to the public is the most effective approach to containing misinformation, and knowledge popularization and expert education are two complementary ways to achieve it. It has been shown that if these two ways are combined to speed up the release of the truth, the impact caused by the spread of misinformation can be dramatically reduced. However, how to reasonably allocate resources between these two ways so as to achieve a better result at a lower cost remains an open challenge. This paper provides theoretical guidance for designing an effective collaborative resource allocation strategy. First, a novel individual-level misinformation spread model is proposed. It characterizes the collaborative effect of the two truth-publishing ways on the containment of misinformation spread. On this basis, the expected cost of an arbitrary collaborative strategy is evaluated. Second, an optimal control problem is formulated to find effective strategies, with the expected cost as the performance index function and the misinformation spread model as the constraint. Third, to solve the optimal control problem, an optimality system that specifies the necessary conditions of an optimal solution is derived; by solving it, a candidate optimal solution can be obtained. Finally, the effectiveness of the candidate optimal solution is verified by a series of numerical experiments.
1. Introduction
Misinformation refers to false or inaccurate information, especially information spread with a deceptive purpose [1]. The threat level of misinformation is measured by the number of victims: normally, the more people are deceived, the greater the impact on the social order. Because online social networks (such as Weibo, Facebook, and Twitter) have wide coverage and fast information circulation [2], once misinformation appears, it spreads quickly across these networks and affects a large number of victims, causing a huge impact on our society. For instance, a fake tweet in 2013 claiming that Obama had been injured in an explosion at the White House was viewed more than 100 million times in just a few hours and ultimately caused the stock market to suffer a massive loss of 130 billion dollars [3]. Therefore, it is of great significance to study how to effectively control the spread of misinformation [4, 5].
The dissemination of the truth is one of the most effective approaches to controlling the spread of misinformation [6]. Knowledge popularization and expert education are two complementary ways to achieve that. Knowledge popularization means conveying the truth to the public in a simple way so that a large proportion of citizens can gain a basic understanding of it; a common way to do so is to post online leaflets. Expert education refers to teaching the truth in a profound way to a particular group of citizens so that this small set of people can gain a deep understanding of it; common methods include holding guest lectures and offering series of courses. Generally, knowledge popularization is more suitable for increasing the number of people who can reach the truth than for increasing the speed with which it is accepted, whereas expert education is more about the speed than the quantity [7–9].
1.1. Motivation
Given the above discussion, we wonder whether the two ways can be combined to speed up the release of the truth. The answer is yes: in reality, posting online leaflets and holding expert lectures are not in conflict. The challenge, however, is how to reasonably allocate resources between these two ways so as to achieve a better result at a lower cost. This paper calls it the collaborative resource allocation problem of knowledge popularization and expert education, or the collaborative resource allocation (CRA) problem for short. For convenience, this paper refers to a solution of the CRA problem as a CRA strategy and to the optimal solution as the optimal CRA strategy.
Settling the CRA problem is challenging. A reasonable strategy should take full account of the collaborative effect of knowledge popularization and expert education. If too many resources are put into knowledge popularization, then even if the number of people who know the truth increases, the speed of acceptance cannot be guaranteed. Conversely, if too many resources are put into expert education, the truth can be quickly accepted by a small group of people, but the quantity cannot be guaranteed. To design an optimal CRA strategy, it is necessary to deeply understand how the two methods influence each other and then find the balance between them. Unfortunately, as far as we know, there are few theoretical models that evaluate the combined effect of these two truth-publishing methods on collaboratively controlling the spread of misinformation, not to mention theoretical guidance for designing an effective collaboration strategy.
1.2. Contributions
This paper proposes a dynamic resource allocation strategy that combines knowledge popularization and expert education to control the spread of misinformation. Specifically, this paper is committed to solving the CRA problem. The main contributions are as follows:
(1) An individual-level misinformation spread model is proposed. This model describes the comprehensive effect of knowledge popularization and expert education on the collaborative control of misinformation spread. On this basis, the expected cost of any CRA strategy is evaluated, and a continuous-time optimal control model is formulated to find an optimal CRA strategy, with the expected cost as the performance index function.
(2) To solve the optimal control model, an optimality system that specifies the necessary conditions of an optimal solution is derived, and a corresponding numerical iteration algorithm is designed. By solving the optimality system, a candidate optimal solution can be obtained.
(3) The effectiveness of the obtained candidate optimal solution is verified by a series of numerical experiments. Experimental results show that the candidate optimal solution is significantly better than the comparison schemes. Therefore, it can be considered effective and recommended as the optimal CRA strategy.
The remainder of this paper is structured as follows. Section 2 reviews the related work. Section 3 focuses on system modeling and problem formulation. Section 4 discusses solutions. Section 5 presents numerical experiments. Section 6 concludes this paper.
2. Related Work
This section reviews related efforts. First, we investigate some common approaches to publishing the truth. Second, we discuss misinformation spread models.
2.1. TruthPublishing Approaches
In the past decade, how to effectively release the truth to the public has received considerable interest from the academic community. Knowledge popularization and expert education are two commonly used truth-publishing approaches. In knowledge popularization, the truth is released to all the people on a social network [6]. Around this topic, the optimization of the truth spread rate is a key challenge. Wen et al. [10] propose a mathematical model that evaluates the effects of different knowledge popularization methods on the control of misinformation spread. Through optimization techniques, Pan et al. [11] study the optimal spread rate of the truth, which achieves a better performance at a lower cost. Using optimal control theory, Wan et al. and Lin et al. [12, 13] focus on the optimization of dynamic misinformation intervention strategies, Liu and Buss [14] investigate efficient misinformation-impeding policies, and Chen et al. [15] propose a cost-effective anti-rumor message-pushing scheme. In expert education, only a small subset of people (usually ones who have great influence on social networks) are selected to publish the truth to their followers and friends. Hence, the main research direction around this topic is to develop fast, effective algorithms for finding a proper subset of influential people. See [16–19] for some typical literature.
As far as we know, although the research about knowledge popularization and expert education is rich, there are few theoretical models that evaluate the combined effect of these two truthpublishing methods on the collaborative control of misinformation spread, not to mention the theoretical guidance for designing an effective collaboration strategy.
2.2. Misinformation Spread Models
In order to design an effective CRA strategy, it is essential to evaluate the comprehensive effect of knowledge popularization and expert education. To this end, a misinformation spread model is introduced in our work.
Generally, a misinformation spread model refers to a mathematical model that characterizes the process by which a piece of misinformation spreads over a social network under (or without) a certain control measure. Although misinformation is broadly a kind of information, there are differences between the processes of information dissemination and misinformation spread. The biggest difference is that the former emphasizes a unilateral diffusion process: information dissemination models seldom consider the situation in which people refuse to accept information (see [14, 20–22] for examples). The latter emphasizes a competitive process between misinformation and the truth: misinformation spread models usually account for the situation in which some people believe the misinformation while others believe the truth (see [13, 15, 23] for examples). Therefore, information dissemination models may not be well suited to characterizing the effect of countermeasures on containing misinformation spread. In the rest of this section, we only discuss misinformation models.
Commonly used misinformation spread models can be population level, network level, or individual level. In a population-level model, people on a social network are classified into groups according to their opinions on misinformation; see [12, 24–26]. These models assume that the social network is homogeneously mixed, i.e., there is no difference between people on the network. As a result, population-level models can only be applied to homogeneous networks. In a network-level model, people on a social network are classified into groups according to their opinions on misinformation as well as the numbers of their friends on the same network; see [27–30]. These models assume that there is no difference between people with the same number of friends on the network. Hence, network-level models apply only to some special kinds of social networks, e.g., scale-free networks [31]. In an individual-level model, every person on a social network has multiple states that indicate the person's opinion on misinformation; see [11, 14, 15, 32–34]. In these models, every person is treated as a distinct individual. Therefore, individual-level models can be applied to any arbitrary social network.
In this paper, a novel individual-level misinformation spread model is proposed to evaluate the influence of a CRA strategy. It considers the interaction between misinformation and the truth under knowledge popularization and expert education. To the best of our knowledge, this is the first model to characterize the collaborative effect of the two truth-publishing approaches.
3. System Modeling and Problem Formulation
This section discusses how to find an optimal CRA strategy. First, we introduce basic terms and notations and specify the mathematical form of a CRA strategy. Second, we propose a novel individual-level misinformation spread model, which considers the effect of a CRA strategy on controlling misinformation. Third, based on the proposed misinformation spread model, we quantify the expected costs of different CRA strategies and formulate a continuous-time optimal control model, with the expected cost as the performance index, so as to find an optimal CRA strategy.
3.1. Basic Terms and Notations
Suppose that a truth-publishing campaign will last units of time. In this paper, we focus on the time horizon . Consider a social network of individuals. All individuals are denoted by . In practice, any individual at any given moment will either believe the misinformation, believe the truth, or remain neutral. For any time , let , 1, and 2 indicate that the individual is in the neutral state, the misinformation-believing state, and the truth-believing state at time , respectively. By definition, the number of victims is , where , and equals 1 when the event holds true and 0 otherwise.
If misinformation appears and spreads on a social network, a truth-publishing campaign consisting of knowledge popularization and expert education will be carried out on that network in response. As discussed previously, knowledge popularization releases the truth to all individuals on the network, while expert education releases the truth only to a set of specific groups on the network. Without loss of generality, it is assumed that the truth can be released to specific groups by means of expert education. Denote these groups by .
At any time , denote by the instantaneous resource investment rate for disseminating the truth by knowledge popularization to all individuals, and denote by , , the instantaneous resource investment rate for disseminating the truth by expert education to group . Then, a CRA strategy is expressed as a -dimensional function on time .
In practice, a CRA strategy should be as easy to perform as possible, subject to some upper and lower bounds imposed by resource constraints. First, to make a CRA strategy easy to perform, assume that the admissible set of CRA strategies consists of all -dimensional piecewise continuous real-valued functions defined on the time horizon . Such an admissible set is represented by . Then, there is . Second, owing to the limitation of resource flow, the resource investment rate of each part cannot be infinite. Without loss of generality, denote by the upper bound of at any time . Let ; then, for any time , there is . Therefore, the admissible set of CRA strategies can be expressed as
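As a concrete (and purely illustrative) reading of this constraint, a sampled CRA strategy is just an array of nonnegative rates bounded above by the resource-flow limits, and admissibility amounts to a component-wise box constraint. The following sketch uses our own names throughout; none of them come from the paper:

```python
import numpy as np

def project_to_admissible(raw, u_max):
    """Project (K, m+1) strategy samples onto the admissible box
    [0, u_max] component-wise.  By our convention, column 0 holds the
    knowledge-popularization rate and columns 1..m hold the
    expert-education rates for the m groups."""
    return np.clip(raw, 0.0, u_max)

def is_admissible(u, u_max):
    """Check the box constraint 0 <= u <= u_max at every sample."""
    return bool(np.all(u >= 0.0) and np.all(u <= u_max))
```

Piecewise continuity is implicit here: a strategy stored on a finite time grid can represent any piecewise continuous control up to the grid resolution.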
In this paper, CRA strategies are designed to minimize the impact caused by the spread of misinformation at the lowest possible cost. By definition, the cost of a CRA strategy within the time horizon is
Recall that the impact caused by misinformation spread is determined by the number of victims. Given that the cost caused by each victim per unit time is , then, in the time horizon , the total cost caused by misinformation is
For any given time , let , , and . Then, the expected total cost caused by misinformation is
Combining the above discussions, during the time horizon , the expected total cost of a CRA strategy is
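The expected total cost above combines the running cost of the strategy itself with the expected cost caused by victims, integrated over the horizon. A minimal numerical sketch of this evaluation on a uniform time grid; the names and the generic running-cost callable are our own and are not fixed by the paper:

```python
import numpy as np

def expected_total_cost(u, P1, cost_u, c, dt):
    """Riemann-sum approximation of the expected total cost of a CRA
    strategy over the horizon.  Illustrative conventions:
      u      : (K, m+1) strategy samples on a uniform time grid
      P1     : (K, n) expected probabilities of each individual being
               in the misinformation-believing state
      cost_u : callable mapping one strategy sample to its running cost
      c      : expected cost caused by each victim per unit time
      dt     : grid step length
    """
    running = np.array([cost_u(u[k]) for k in range(u.shape[0])])
    victims = c * P1.sum(axis=1)          # expected number of victims
    return float(np.sum((running + victims) * dt))
```

In the experiments later in the paper, the running cost of a strategy is taken as concave or convex in the investment rates; here it is left as an arbitrary callable.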
3.2. Misinformation Spread Model under a CRA Strategy
With the above terms and notations, we now discuss the control effect of a given CRA strategy on the spread of misinformation. To this end, we need to examine how individual states shift over time under a CRA strategy.
Firstly, let us consider the influence of misinformation spread on individual states. Let represent the social relation between individuals and , where represents that and are friends (i.e., they can share ideas with each other); otherwise, . Suppose that, at any time, a neutral individual will transfer to the misinformation-believing state at an average rate of due to the influence of a single misinformation-believing friend. Assume that the influence between friends is linearly cumulative; then, at any time , the neutral individual will transfer to the misinformation-believing state at the average rate of .
Secondly, we consider the influence of truth dissemination on individual states. Suppose that, at any time, due to the influence of a single truth-believing friend, each neutral individual and each misinformation-believing individual transfer to the truth-believing state at average rates of and , respectively, where is the discount factor, which reflects the fact that it is difficult for individuals to change their cognition of things because of preconceived ideas. Owing to the linear accumulation of friends' influences, at any time , the neutral individual and the misinformation-believing individual will transfer to the truth-believing state at the average rates of and , respectively.
Finally, we consider the influence of a CRA strategy on individual states. Suppose that if the instantaneous resource investment rate of knowledge popularization is , each neutral individual and each misinformationbelieving individual will transfer to the truthbelieving state at average rates of and , respectively. Suppose that if the instantaneous resource investment rate of expert education in group is , each neutral individual and each misinformationbelieving individual will transfer to the truthbelieving state at average rates of and , respectively. Let represent the subordinate relation between the individual and the group , where represents that belongs to ; otherwise, . If the influences of knowledge popularization and expert education on individuals are independent, then, at any time , the neutral individual and the misinformationbelieving individual will transfer to the truthbelieving state at the average rates of and , respectively.
According to the modeling idea of individual-level epidemic theory [15, 23, 35], the evolution of individual states over time follows a continuous-time Markov chain. At any time , the transfer rates among the individual's states are as follows:
(1) If is in the neutral state, then transfers to the misinformation-believing state at the total average rate of
(2) If is in the neutral state, then transfers to the truth-believing state at the total average rate of
(3) If is in the misinformation-believing state, then transfers to the truth-believing state at the total average rate of
Because every continuous-time Markov chain admits a Kolmogorov forward equation [36], the expected probability of each individual state evolves over time according to the following differential dynamical system: where represents mathematical expectation. Because , , the above differential dynamical system can be simplified as
Because the above dynamic system depicts the evolution of individual states over time, we call it the individual state evolution model, or simply the evolution model for short. The evolution model is an individuallevel misinformation spread model that considers the cooperation of knowledge popularization and expert education to publish the truth. For writing convenience, let , and let denote the evolution model.
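To make the structure of the evolution model concrete, here is a hedged mean-field sketch of an individual-level model of the kind described above, integrated with the Euler method. All symbols (A, B, beta, gamma, theta, a, b) are our own stand-ins for the paper's rates, and the linear dependence of the control terms on the investment rates is an assumption:

```python
import numpy as np

def simulate_evolution(A, B, u, beta, gamma, theta, a, b, p1_0, p2_0, dt):
    """Euler integration of an illustrative individual-level evolution
    model.  Conventions (ours, not the paper's):
      A     : (n, n) 0/1 adjacency matrix of the social network
      B     : (n, m) 0/1 membership of individuals in the m groups
      u     : (K, m+1) strategy samples; u[:, 0] is knowledge
              popularization, u[:, 1:] is expert education per group
      beta  : misinformation spread rate per believing friend
      gamma : truth spread rate per believing friend
      theta : discount factor in (0, 1) for misinformation believers
      a, b  : efficiencies of the two truth-publishing ways
      p1_0, p2_0 : initial expected probabilities of the
              misinformation-believing and truth-believing states
    Returns the trajectories P1, P2, each of shape (K, n)."""
    K, n = u.shape[0], A.shape[0]
    P1, P2 = np.empty((K, n)), np.empty((K, n))
    p1, p2 = p1_0.astype(float).copy(), p2_0.astype(float).copy()
    for k in range(K):
        P1[k], P2[k] = p1, p2
        p0 = 1.0 - p1 - p2                      # neutral probability
        infect = beta * (A @ p1)                # neutral -> misinformation
        # truth pressure: friends + knowledge popularization + expert education
        cure = gamma * (A @ p2) + a * u[k, 0] + b * (B @ u[k, 1:])
        dp1 = p0 * infect - p1 * theta * cure   # believers convert slower
        dp2 = p0 * cure + p1 * theta * cure
        p1 = np.clip(p1 + dt * dp1, 0.0, 1.0)
        p2 = np.clip(p2 + dt * dp2, 0.0, 1.0)
    return P1, P2
```

With any positive investment rates, the truth-believing probabilities grow while the misinformation-believing probabilities are suppressed, which is the qualitative behavior the model is built to capture.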
3.3. Optimal CRA Strategy Model
Having proposed the evolution model in the previous section, we now need to formulate an optimization problem to design an effective CRA strategy.
Based on previous discussions, we formulate the following optimization problems:
This is a continuoustime optimal control problem. We call it the optimal CRA strategy model.
4. Solution
Having formulated an optimal control model to seek optimal CRA strategies, in this section we discuss its solution. First, we derive a set of necessary conditions for an optimal solution; we refer to the collection of all these necessary conditions as the optimality system. With the optimality system, we can obtain a solution that is most likely to be optimal, called the candidate optimal solution. Second, we give an iteration algorithm to numerically obtain a candidate optimal solution.
4.1. Optimality System
Firstly, the Hamiltonian function for the optimal CRA strategy model (11) is constructed as follows: where denotes the adjoint function of the Hamiltonian.
Let represent an optimal CRA strategy, represent the corresponding expected individual state evolution trajectory, and represent the corresponding adjoint function. According to Pontryagin's principle [37], the following conclusions must hold simultaneously:
(1) The optimal CRA strategy satisfies
(2) The expected individual state evolution trajectory meets where is an initial condition.
(3) The adjoint function satisfies
After direct calculation, we get that the trajectory admits the evolution model (10) and the adjoint function admits the following dynamic system:
Combining the above discussions, the optimality system consists of equation (13), evolution model (10), and dynamical system (16). By solving the optimality system, a candidate optimal CRA strategy can be obtained. Because of the complexity of optimal control model (11), it is difficult to directly obtain a truly optimal CRA strategy. The optimality system therefore provides great convenience for problem solving, or at least narrows the range of optimal solutions. Even if the candidate optimal CRA strategy is not necessarily a truly optimal solution, it can be regarded as optimal if it outperforms most other comparison schemes.
4.2. Numerical Iteration Algorithm
Because of the complexity of the optimality system, it is very difficult to directly obtain the analytical form of its solutions, so a numerical algorithm is needed. Because solving the optimality system is essentially solving a two-point boundary value problem [38], and because the forward-backward sweep method (FBSM) [39] is a practical numerical method for that problem, this paper uses the FBSM to solve the optimality system. The pseudocode is shown in Algorithm 1.

We need to note that it is difficult to prove the convergence of the FBSM, as the literature [40] explains. However, as described in [41], it can achieve good convergence in most practical cases. Therefore, we can still choose it as the numerical method.
In lines 1, 5, and 7 of Algorithm 1, several ordinary differential equations (ODEs) must be solved. Common methods for solving ODEs include the Euler method [42] and the Runge–Kutta method [43]. The Runge–Kutta method is more accurate than the Euler method but also more time-consuming owing to its higher complexity. However, practical experience shows that the accuracy difference between them is usually negligible as long as the discrete time step is small enough. Therefore, the Euler method is used within the FBSM in this paper to solve the ODEs.
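To illustrate the scheme of Algorithm 1, the following is a self-contained forward-backward sweep on a toy linear-quadratic problem of our own choosing (minimize the integral of x² + u² subject to x' = -x + u with a box constraint on u). It is a sketch of the method with Euler sweeps, not the paper's model:

```python
import numpy as np

def fbsm_lq(x0=1.0, T=1.0, K=200, u_max=1.0, tol=1e-8, max_iter=500):
    """Forward-backward sweep for:  min  int_0^T (x^2 + u^2) dt
    s.t.  x' = -x + u,  x(0) = x0,  |u| <= u_max.
    Hamiltonian H = x^2 + u^2 + lam * (-x + u), so the adjoint obeys
    lam' = -dH/dx = -2x + lam with lam(T) = 0, and minimizing H over
    the box gives u = clip(-lam / 2)."""
    dt = T / K
    u = np.zeros(K + 1)
    x, lam = np.empty(K + 1), np.empty(K + 1)
    for _ in range(max_iter):
        # forward sweep: integrate the state with the current control
        x[0] = x0
        for k in range(K):
            x[k + 1] = x[k] + dt * (-x[k] + u[k])
        # backward sweep: integrate the adjoint from the terminal condition
        lam[K] = 0.0
        for k in range(K - 1, -1, -1):
            lam[k] = lam[k + 1] - dt * (-2.0 * x[k + 1] + lam[k + 1])
        # update the control from the minimum principle
        u_new = np.clip(-lam / 2.0, -u_max, u_max)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = 0.5 * (u + u_new)   # relaxation, for stability of the sweep
    return x, u, lam, dt
```

The relaxation step (averaging the old and new controls) is the standard trick for keeping the sweep from oscillating; as noted above, convergence of the FBSM is hard to prove in general but is typically observed in practice.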
5. Numerical Experiments
Having discussed the solution of the optimal CRA strategy model (11), we now present a series of numerical experiments to illustrate the obtained candidate optimal strategies.
5.1. Network Topological Structure
To better show the misinformation spread process and the control effect of a CRA strategy on it, let us consider the 100-node grid social network plotted in Figure 1. Suppose that the truth can be released to three specific groups of individuals by expert education, where (a) group consists of the individuals , , , , , , , , and , (b) group consists of the individuals , , , , , , , , and , and (c) group consists of the individuals , , , , , , , , and . Individuals of different groups are marked with different colors in Figure 1. For convenience, we denote this network by .
Besides, to show the influence of arbitrary network topological structures, the following three networks are also involved in our experiments. The first is a 100-node small-world network obtained from [35, 44], denoted by . The second is a 100-node scale-free network obtained from [35, 44], denoted by . The last is a 100-node Facebook network obtained from [23, 45], denoted by . In each network, the truth can be released to three random groups of nodes by expert education. The topological structures of these three networks are shown in Figure 2.
5.2. Candidate Optimal CRA Strategy
Experiment 1. Consider the social network . Consider a situation, where , , , , is generated within , , , , and . Denote the obtained candidate optimal CRA strategy by and the corresponding individual state evolution trajectory by . Define the corresponding expected network state evolution trajectory as , where and .
Results: the obtained candidate optimal CRA strategy is shown in Figure 3. We can see that the instantaneous resource investment rate of knowledge popularization, i.e., , first remains at its upper bound until time , then gradually decreases to its lower bound, and finally remains there. Each component of the instantaneous resource investment rates of expert education, i.e., , , likewise first remains at its upper bound until time , then gradually decreases to its lower bound, and finally remains there. Figure 4 shows the corresponding expected network state evolution trajectory. We can see that gradually increases from 0.2 until time , reaches the peak 0.7 at time , and then gradually decreases to 0. During time , increases from 0 to 1, quickly at first and then slowly. Figure 5 shows the distribution of the misinformation-believing state at some moments on the network . Combined with the results in Figure 4, we can see that when the average expected probability of being in the misinformation-believing state increases, the expected probabilities of people belonging to the three groups , , and increase more slowly than those of the others; when the average expected probability decreases, theirs decrease more quickly than those of the others. Figure 6 shows the distribution of the truth-believing state at some moments on the network (shown in Figure 1). Combined with the results in Figure 4, we can see that the expected probabilities of being in the truth-believing state of people belonging to the three groups , , and increase more quickly than those of the others.
Reasons: at the beginning of the truth-publishing campaign, misinformation has large coverage over the social network, and therefore the speed of misinformation spreading is remarkable. In this context, the optimal strategy suggests putting the highest amount of resources into publishing the truth, so as to reduce the coverage of misinformation and further contain the speed of its spread. Knowledge popularization helps all people accept the truth at a low rate, slowly expanding the coverage of the truth. Expert education helps people in the three groups accept the truth at a high rate; once they become believers of the truth, they can quickly influence the individuals around them and further expand the coverage of the truth. When the truth has large coverage over the social network, the optimal strategy suggests reducing the resource investment in releasing the truth, so as to reduce the cost. Under the effect of word of mouth, even if the truth is no longer released, it can eventually fill the whole social network.
Experiment 2. Given the parameters, where , , , , is generated within , , , , and , consider the social networks , , and , respectively.
Results: Figure 7 shows the obtained candidate optimal CRA strategies in Experiment 2, and Figure 8 shows the corresponding expected network state evolution trajectories. The results are similar to those of Experiment 1. Therefore, we may conclude that even though the network topological structures differ, the obtained candidate optimal CRA strategies are similar. Furthermore, we examine the state evolution trajectories of nodes with different degrees, as shown in Figure 9. It is seen that, regardless of network topological structure, nodes of higher degree are more sensitive to CRA strategies.
In the above two experiments, the functions and are set as concave. Next, we examine the cases when they are convex.
Experiment 3. Given the parameters, where , , , , is generated within , , , , and , consider the social networks , , , and , respectively.
Results: the obtained candidate optimal CRA strategy is shown in Figure 10. We can see that all instantaneous resource investment rates first remain at their upper bounds, then drop to their lower bounds, and finally remain there. Figure 11 shows the expected network state evolution trajectories of the four networks; their results are similar. On each network, the average probability of the misinformation-believing state, i.e., , first increases, reaches a peak, and finally decreases to 0. The average probability of the truth-believing state, i.e., , increases from 0 to 1 over the whole time horizon, quickly at first and then slowly.
Reasons: at the beginning of the truthpublishing campaign, misinformation has large coverage over the social network, and therefore, the speed of misinformation spreading is remarkable. In this context, the optimal strategy suggests putting the highest amount of resources in publishing the truth, so as to reduce the coverage of misinformation and further contain the speed of misinformation spreading. When the truth has large coverage over the social network, the optimal strategy suggests reducing the resource investment in releasing the truth, so as to reduce the cost. Under the effect of wordofmouth, even if the truth is no longer released, it can eventually fill the whole social network.
5.3. Effectiveness Verification
Experiment 4. Consider network . Consider a situation, where , , , , is generated within , , , , and . Denote the obtained candidate optimal CRA strategy by . Generate a hundred uniformly random CRA strategies and denote them by .
Figure 12 compares the expected costs under the optimal strategy and the uniformly random strategies . We can see that the optimal strategy is better than all 100 random strategies. To rule out contingency, we conducted another 1000 similar experiments and obtained the same conclusion. Hence, we recommend the candidate optimal strategy as an optimal solution.
5.4. Influences of Crucial Factors
Finally, let us investigate the influences of some crucial factors in the evolution model (10), including the misinformation spread rate , the truth spread rate , and the discount factor .
Experiment 5. Given the parameters, where , , , is generated within , , , , , and , consider the social networks , , , and , respectively.
Figure 13 shows the influence of the misinformation spread rate . Even though the four networks have different topological structures, their experiment results are similar. We can see that the minimized expected cost is increasing with the misinformation spread rate. This conclusion suggests that it is important to improve the citizens’ ability to distinguish misinformation so that the misinformation spread rate is reduced.
Experiment 6. Given the parameters, where , , , is generated within , , , , , and , consider the social networks , , , and , respectively.
Figure 14 shows the influence of the truth spread rate . Even though the four networks have different topological structures, their experiment results are similar. We can see that the minimized expected cost is decreasing with the truth spread rate. This conclusion suggests that it is necessary to make the truth more interesting so that the truth spread rate is enhanced.
Experiment 7. Given the parameters, where , , , is generated within , , , , , and , consider the networks , , , and , respectively.
Figure 15 shows the influence of the discount factor . Even though the four networks have different topological structures, their experiment results are similar. We can see that the minimized expected cost is decreasing with the discount factor. This conclusion suggests that it is necessary to help the truth change people’s cognition of things.
6. Conclusion
This paper addresses the CRA problem, i.e., how to reasonably allocate resources between knowledge popularization and expert education so as to mitigate the spread of misinformation with better performance at a lower cost. First, a novel individual-level misinformation spread model is proposed to characterize the collaborative effect of knowledge popularization and expert education on containing the spread of misinformation. Thereafter, based on the proposed model, the CRA problem is reduced to an optimal control problem, and an optimality system for solving it is derived. Finally, through a series of numerical experiments, the obtained candidate solution is verified to be effective and is recommended as the optimal CRA strategy.
Still, some problems remain to be solved in the future. First, in our work, the truth is released to the public continuously, which may not be convenient in practice; it is more common to release the truth at some given moments. If so, a continuous-time optimal control problem may not be very suitable, and we may instead formulate an impulsive optimal control problem [46] to seek effective CRA strategies. Second, in our numerical experiments, the values of the model parameters are set according to related literature and our experience. In future work, they should be estimated from practical data.
Data Availability
All the data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China, under Grant 61572006.