An Opinion Evolution Model Based on Heterogeneous Benefit with Malicious Nodes Added
Individuals with different levels of education differ substantially in their willingness to communicate when malicious nodes are present in a group; thus, the evolution of opinions tends to differ significantly. In this study, malicious nodes driven by game benefits were added to groups of individuals with different levels of education, and a game-theoretic model of group opinion evolution that incorporates malicious nodes was established. The model considers the influence of the proportion of malicious nodes spreading messages, the extent of tampering when malicious nodes spread messages, and the distribution of education levels in the group on the evolution of group opinions. It was found that the rate of opinion evolution declines in groups with higher average education levels. This result can explain the sociological observation that knowledge exchange is sparser in highly educated communities: highly educated individuals suffer greater losses from distorted news when communicating with malicious nodes, which makes them more vigilant and less willing to communicate.
The process of dissemination and evolution of opinions in a group is often affected by the nature of the individuals in it. Communication between individuals is sparser in groups with higher education levels. Through social surveys, several scholars have reported that, in highly educated groups, each individual's communication behavior and willingness are mainly governed by their trust in the group, which leads to sparse communication. The main source of this trust is the benefit an individual gains from communication. Game theory, which assumes that individuals choose their behavior according to their game benefits, can therefore be used to study this phenomenon.
Regarding game theory and the evolution of opinions, many scholars have conducted detailed studies; social dilemmas and the prisoner's dilemma have been examined in [3, 4]. The application of game theory to the evolution of public opinion can be traced back to the seminal work of Abramson and Kuperman, who analyzed the classic prisoner's dilemma game on different networks. Broom subsequently applied game theory to individuals' ability to process information. Mare and Latora then analyzed the Deffuant model of evolution from a game perspective, combining the evolution of group opinions with game theory.
Research applying game theory to the evolution of group opinions focuses on different game mechanisms and on refinements such as game payoffs. In terms of theoretical analysis, Takehara et al. studied pure-strategy Nash equilibria of game behavior in the evolution of opinions. Munagala studied the influence of collaborative games between individuals on the formation of group opinions. Li et al. established the NSO model under consideration of individual heterogeneity. Bauso and Cannon established a repeated game model and studied the conditions under which a group reaches consensus. Nogales and Zazo applied traditional replicator dynamics to the group. These studies analyzed different game modes theoretically and investigated their influence on the evolution of group opinions.
In contrast to the studies cited above, Jiang et al. established a variety of game frameworks under different network types, and Wang et al. extended this research to multilayer networks. Furthermore, to make the game process more realistic, many scholars have refined and extended classic game models. For example, Zinoviev and Duong established a game model in which individual parameters evolve along with opinions and studied the influence of individual types on the evolution of public opinion. Qiu et al. and Yu et al. proposed utility functions to measure an individual's payoff in a game. Cao et al. combined these ideas to establish a game model relating individual payoff and individual type. However, these models considered individual heterogeneity only in the weights of the utility function, not in the distribution of individual parameters, and they did not consider the influence of that distribution on the evolution of opinions. Li et al. and Alberto et al. added fraud to the game, simulating the dissemination of false news during opinion spreading, but their treatment of fraud was not linked to benefits.
In this study, malicious nodes driven by game benefits were added to the group, fraud was introduced into the exchange of group opinions, and the game process of communication between individuals was constructed. By considering the heterogeneity of individuals, a game-theoretic model of group opinion evolution with malicious nodes was established. Combining the conclusions of Fullwood and Rowley and Zhong et al., the level of education, belief in knowledge, and attention to events were set as the factors affecting individual communication behavior in the model. On this basis, an individual's game benefit formula was constructed, and the influence of malicious nodes and of the individual's education level on the evolution of group opinions was studied. Furthermore, the constructed model was applied to simulate and explain the phenomenon of sparse knowledge exchange in highly educated communities, verifying its correctness and practicability.
2. Proposed Model
2.1. Problem Description
Because the degree distribution of a scale-free network follows a power law, which matches real social networks well, a scale-free network was used to simulate a social network in this article. The network contains a certain proportion of malicious nodes. According to their own attributes and those of their neighbors, these nodes choose to propagate distorted messages with a certain probability. According to game theory, the probability of an individual spreading distorted news is determined by its benefit: as in real social networks, individuals typically spread rumors out of self-interest, obtaining higher benefits when spreading distorted messages. Conversely, because malicious nodes exist, a node that receives a message chooses whether to believe it according to the situation. This constitutes a game between malicious nodes and ordinary nodes in spreading and receiving information. To study the evolution of group opinions under this game, combined with the dynamic analysis of opinion evolution, a network opinion evolution model with malicious nodes was constructed based on game theory. The simulation steps are as follows:
(1) Establish a scale-free network with N nodes; distribute the individual parameters over the high, medium, and low intervals according to a given ratio; and assign attribute values to the nodes.
(2) Before the simulation starts, designate malicious nodes in the network according to the malicious-node proportion (initially set to 0.1).
(3) Start the simulation. At the beginning of each simulation step, a node is randomly selected to spread its opinion to all of its neighbors. An ordinary node always transmits the real message; a malicious node, with the probability obtained from the game, transmits a distorted message according to the tampering formula given below.
(4) When a neighboring node receives an opinion, it chooses between two strategies, believe or disbelieve, according to the corresponding benefit. If it believes, the two nodes exchange opinions and attributes; otherwise, they do not communicate. As a trust radius, a node always believes when the difference between the two opinions is less than 0.1.
(5) Repeat steps (3) and (4) until the group's opinions are consistent or the specified number of simulation steps is reached.
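As a concrete illustration, the simulation loop of steps (3)–(5) can be sketched in Python. The helper functions `distort_prob`, `believe_prob`, and `tamper` stand in for the game-derived quantities analyzed in Section 2.3; their names and the averaging update are assumptions of this sketch, not the authors' implementation.

```python
import random

def simulate(G, opinions, malicious, distort_prob, believe_prob, tamper,
             steps=10000, trust_radius=0.1):
    """Sketch of simulation steps (3)-(5). G is an adjacency dict,
    opinions maps node -> opinion value, malicious is a set of node ids."""
    for _ in range(steps):
        i = random.choice(list(G))                      # step (3): random propagator
        for j in G[i]:                                  # spread to all neighbors
            if i in malicious and random.random() < distort_prob(i, j):
                sent = tamper(opinions[i], opinions[j])  # distorted message
            else:
                sent = opinions[i]                       # ordinary nodes send truth
            # step (4): always believe inside the trust radius, else play the game
            if abs(opinions[j] - sent) < trust_radius or random.random() < believe_prob(i, j):
                mean = (sent + opinions[j]) / 2          # exchange opinions
                opinions[i], opinions[j] = mean, mean
        if max(opinions.values()) - min(opinions.values()) < 1e-6:
            break                                        # step (5): consensus reached
    return opinions
```

With no malicious nodes and a receiver that always believes, the loop reduces to repeated pairwise averaging and converges to consensus, matching the behavior reported in Section 3.1.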
The model is based on a scale-free network with N = 1000, in which a certain proportion of malicious nodes is set. At the beginning of each simulation step, a node is randomly selected to communicate with all of its neighbors, and the two parties play the game according to the analysis in Section 2.3. When a malicious node chooses to spread a distorted message, it spreads its opinion value according to the tampering formula, in which x_i is the true opinion value of the propagating node, δ is the extent of tampering when the malicious node propagates the distorted message, x_j is the true opinion value of the receiving node, and x′_i is the distorted opinion value after tampering.
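As an illustration only, one plausible tampering rule consistent with the symbol description (the distorted value x′_i shifts the propagator's true opinion x_i away from the receiver's opinion x_j by the tampering extent δ) can be sketched as follows; the exact functional form is an assumption of this sketch, not the paper's formula.

```python
def tamper(x_i, x_j, delta=0.1):
    """Illustrative tampering rule (assumed form): shift the propagator's
    true opinion x_i away from the receiver's opinion x_j by the tampering
    extent delta, clipped to the opinion interval [0, 1]."""
    direction = 1.0 if x_i >= x_j else -1.0
    distorted = x_i + delta * direction
    return min(1.0, max(0.0, distorted))
```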
2.2. Model Details
2.2.1. Individual Attribute
Through the analysis of the classic rumor formula and the actual situation, we believe that the three parameters of the individual’s education level, the individual’s belief in their own knowledge, and the degree of attention to the event can be combined to measure the importance of an event to the individual and the degree of ambiguity of information. Thus, individual attributes in the network are described by three parameters: level of education L, belief in knowledge B, and attention to event I. As mentioned above, these three attributes are distributed in the interval [0,1]. In this model, to better divide the user groups, the interval [0,1] was divided into three small intervals—[0,0.3], [0.3,0.7], and [0.7,1]—representing the low, medium, and high levels of each parameter, respectively. By adjusting the proportions of these three intervals in the group, the influence of the distribution of the three parameters on the evolution of group opinions can be studied.
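The interval scheme above can be sketched as follows; the 3:4:3 default ratio is taken from Section 3.1, and the function names are illustrative assumptions.

```python
import random

def sample_attribute(ratios=(0.3, 0.4, 0.3)):
    """Draw one attribute value (L, B, or I): pick the low/medium/high
    interval with the given ratios, then sample uniformly inside it."""
    intervals = [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]
    lo, hi = random.choices(intervals, weights=ratios)[0]
    return random.uniform(lo, hi)

def sample_node():
    """Each node carries three independently sampled attributes."""
    return {k: sample_attribute() for k in ("L", "B", "I")}
```

Adjusting `ratios` per attribute reproduces the different group compositions studied in Section 3.2.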
2.2.2. Attribute Evolution
If the receiving node chooses to believe, the two parties exchange attributes. The evolution of the three attributes L, B, and I cannot follow a simple additive rule because of their particular nature; for example, the education levels of the two parties will not immediately increase or decrease after an exchange. Considering this, and following [15, 16], evolution formulas for the three parameters are given, in which L′, B′, and I′ denote the new attribute values of both parties after the exchange. Conversely, when the receiving node chooses not to believe, the two nodes do not exchange attributes.
2.2.3. Divergence of Opinion
To measure the degree of convergence of group opinions, the parameter of opinion divergence is introduced, together with the concept of opinion distance: the opinion distance between two nodes is the absolute value of their opinion difference, d_ij = |x_i − x_j|.

Further, the divergence D is defined as the standard deviation of the opinion distance over every pair of nodes in the group,

D = sqrt( (2 / (N(N − 1))) Σ_{i<j} d_ij² ),

where N is the total number of nodes in the network.
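Under one reading of this definition (the root-mean-square of the pairwise opinion distances over all node pairs), the two quantities can be computed as:

```python
import math
from itertools import combinations

def opinion_distance(x_i, x_j):
    """Opinion distance: absolute value of the opinion difference."""
    return abs(x_i - x_j)

def divergence(opinions):
    """Group opinion divergence, read here as the root-mean-square of the
    opinion distance over every pair of the N nodes (one interpretation of
    'standard deviation of the opinion between every two nodes')."""
    pairs = list(combinations(opinions, 2))
    return math.sqrt(sum(opinion_distance(a, b) ** 2 for a, b in pairs) / len(pairs))
```

The divergence is zero exactly when the group has reached consensus, which is why it is used in Section 3 to track convergence.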
2.3. Analysis of the Game Process of Opinion Exchange
The classic rumor-spreading formula states that the spread of rumors is determined by the vagueness of the information and the importance of the event. From an individual's perspective, these two factors can be determined by the individual's level of education L, belief in knowledge B, and attention to the event I, the three main parameters of the model, all distributed in [0,1]. The product of L and B expresses the individual's overall attitude toward the event.

On the basis of these three parameters, the benefits of the game process are defined. In the analysis below, node i is the propagating node and node j the receiving node.
First, consider malicious nodes, which have two strategies when disseminating messages: s1, disseminate the real message, and s2, disseminate a distorted message. Correspondingly, the receiver also has two strategies: t1, believe, and t2, disbelieve. When the strategy combination is s2t1, the rumor spreads successfully and the malicious node obtains a higher benefit (denoted A); the greater the difference between the two parties' attitudes toward the event, and the higher the receiving node's attention to the event, the higher the benefit of spreading the rumor. When the strategy combination is s2t2, the spread of the rumor fails and the malicious node suffers a loss (denoted C); the higher the spreader's attention to the event, the greater the loss. In the other two cases, the individual obtains only the basic communication benefit, which for convenience is set to zero. The benefit matrix is given in Table 1.
It is evident that when the malicious node sends the real message with probability −C/(A − C) and sends the rumor with probability A/(A − C), this is a Nash equilibrium of the game. In the matrix, π and θ are the weights corresponding to attitude and attention; they are uniformly distributed on [0,1] with π + θ = 1.
It is important to note that when an ordinary node is the propagator, it has only one strategy: spread the true message. As receivers, both ordinary and malicious nodes have two strategies: believe and disbelieve. When the strategy combination is s1t1, that is, the sender spreads the true message and the receiver believes it, the receiver obtains the benefit of the exchange (denoted X). When the combination is s2t1, the receiver accepts the distorted message and suffers a loss (denoted Y); the higher the receiver's attention to the event and the more knowledge it has, the greater the loss. When the receiver chooses t2, the two nodes do not communicate and the benefit is zero. The resulting benefit matrix is presented in Table 2.
Similar to the propagator's game, when the receiver chooses to believe the message with probability X/(X − Y) and chooses not to believe with probability −Y/(X − Y), this is a Nash equilibrium of the game. Further, considering that the main purpose of receiving messages is to enhance the understanding of events, and that additional benefits derive from the internal coupling of knowledge, an exponential function is chosen for the exchange benefit.
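The equilibrium mixing probabilities can be computed directly from the payoff parameters. The sketch below assumes the sign convention A > 0 and X > 0 for gains, C < 0 and Y < 0 for losses, so that each pair of probabilities lies in (0, 1) and sums to one; the function names are illustrative.

```python
def sender_mix(A, C):
    """Malicious sender's equilibrium mix (A > 0 rumor gain, C < 0 loss):
    returns (P[send real message], P[send distorted message])."""
    p_real = -C / (A - C)
    return p_real, 1.0 - p_real

def receiver_mix(X, Y):
    """Receiver's equilibrium mix (X > 0 exchange gain, Y < 0 loss):
    returns (P[believe], P[disbelieve])."""
    p_believe = X / (X - Y)
    return p_believe, 1.0 - p_believe
```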
3. Simulation Analysis
3.1. Analysis of Malicious Node Proportion and Tampering Extent
According to the model and simulation steps described above, the three parameters (L, B, and I) were each divided into three intervals: low [0,0.3], middle [0.3,0.7], and high [0.7,1]. The proportions of the three intervals were initially set to 3:4:3. Within each interval, the individual parameters were uniformly distributed, the corresponding weights were distributed according to the same law, and the simulation length was t = 10,000 steps. The malicious-node ratios selected for the simulation were 0, 0.05, 0.1, and 0.2. The simulation was run under these conditions on a scale-free network with N = 1000 to observe the evolution of group opinions.
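A scale-free network of the stated size can be generated by Barabási–Albert preferential attachment. The minimal pure-Python sketch below returns an adjacency dictionary; the attachment parameter m = 3 is an illustrative assumption, as the paper does not state it.

```python
import random

def barabasi_albert(n, m=3, seed=None):
    """Minimal Barabasi-Albert scale-free graph as an adjacency dict:
    each new node attaches to about m existing nodes, chosen with
    probability proportional to their current degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))      # the first new node links to the m seed nodes
    repeated = []                 # each node appears here once per edge endpoint
    for new in range(m, n):
        for t in set(targets):    # deduplicate, so a node may get < m links
            adj[new].add(t)
            adj[t].add(new)
            repeated.extend([new, t])
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj
```

In practice, a library routine such as `networkx.barabasi_albert_graph(n, m)` serves the same purpose.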
The evolution of the group opinion is shown in Figure 1. Figure 1(a) corresponds to the situation without malicious nodes; Figures 1(b)–1(d) correspond to situations with malicious nodes. In these figures, the horizontal axis represents the simulation step, and the vertical axis represents the opinion values of all individuals. The simulation results show that, without malicious nodes, the group opinion enters an evident stable interval after 2,000 steps and reaches agreement at around 5,000 steps, where all individual opinion values are equal. After malicious nodes are added, the group opinion reaches a stable state after a certain number of steps, the opinions of all individuals fluctuate within a stable range, and no consensus is reached. The more malicious nodes there are, the larger this stable interval: the group's viewpoints are harder to unify, and the final state of the group is more unstable.
Further, to study the influence of the proportion of malicious nodes on opinion evolution, the other parameters were kept unchanged while the evolution of group opinions was simulated under different malicious-node proportions, observing the change in opinion divergence. N = 10,000 was set, and the proportion of malicious nodes was varied from 0 to 0.3 in steps of 0.01, each run simulating 5,000 steps, at which point the divergence of the group's opinion was calculated. For each proportion, 20 simulations were performed and the results averaged. As Figure 2 shows, increasing the proportion of malicious nodes has a significant inhibitory effect on the convergence of group opinions.
On the other hand, to study the influence of the extent of tampering on the evolution of group opinions, the ratio of malicious nodes was fixed at 0.1 and the extent of tampering was varied over 0–0.25 in steps of 0.05. The same simulation was carried out, averaging 20 runs. As Figure 3 shows, the divergence of group opinion increases significantly with the extent of tampering: for a fixed proportion of malicious nodes, the influence of the tampering extent on opinion evolution is very pronounced. Compared with the previous experiment, the extent of tampering has a greater impact on the evolution of group opinions than the proportion of malicious nodes does.
To observe the joint influence of the proportion of malicious nodes and the extent of tampering, the following experiment was conducted. The proportion of malicious nodes was set in the range 0–0.5, and the extent of tampering in the range 0–0.25. For each combination of values, the simulation was repeated 20 times, taking the average divergence after the group opinion had evolved for 5,000 steps. The results, shown in the joint distribution diagram of Figure 4, clearly show that the divergence of group opinion increases as the proportion of malicious nodes and the extent of tampering increase.
As shown in Figure 4, the following conclusions can be drawn. (1) The increase in the proportion of malicious nodes and the extent of tampering will increase the divergence of group opinion, which is not conducive to a consensus of group opinions. (2) The degree of influence of the proportion of malicious nodes on group opinions is less than the degree of influence of the extent of tampering on group opinions.
3.2. Analysis of Individual Attributes
Some social surveys [1, 21] have indicated that, in communities with higher average knowledge levels, communication behaviors among individuals are sparse. Fullwood and Rowley  and Zhong et al.  stated that trust between individuals will affect both the attitude of the individual in the exchange of opinions and their communication behavior.
From the perspective of game benefits, when individuals are highly educated, their loss increases when they are affected by malicious nodes in the exchange of opinions, which leads to lower trust. Therefore, the distribution of individual parameters in the group was adjusted to simulate and study this phenomenon. In the model, individuals have three main attributes: level of education L, attention I, and belief in knowledge B, each set over three intervals, [0,0.3], [0.3,0.7], and [0.7,1], representing low, medium, and high levels. By default, the distribution ratio across the three intervals was 3:4:3. Two extreme situations were then simulated for the distribution of education in the group: in the first, the proportion of individuals with a high education level was zero; in the second, it was 0.7. The evolution of the group's opinion in both cases is shown in Figure 5: Figure 5(a) shows the zero-proportion case and Figure 5(b) the 0.7 case. The horizontal and vertical axes represent, respectively, the simulation step and the opinion values of all individuals. Compared with case (a), case (b), with more highly educated individuals, converges more slowly.
To verify this, the proportion of highly educated individuals in the group was varied over the range 0.1–0.7, and a 1,000-step simulation was performed on a scale-free network with N = 10,000. A communication between two individuals was counted as successful when the recipient chose to believe the other's message. The number of successful communications in each simulation was recorded; its change with the proportion of highly educated individuals is shown in Figure 6.
Evidently, as the number of highly educated individuals increases, the number of successful exchanges of opinions decreases significantly. The simulation results are consistent with the phenomenon reported by Fullwood and Rowley: communication among individuals in groups with higher education levels is relatively sparse. From the perspective of game benefits, the model explains this phenomenon well: when malicious nodes exist in the group, individuals with higher education levels suffer greater losses from the distorted information disseminated by malicious nodes, so such nodes are more vigilant and more likely to choose not to trust the other party's information. Therefore, the number of exchanges within the group decreases as the proportion of highly educated individuals increases.
To verify this further, the experiment was refined: the proportion of highly educated individuals was set to HL = 0.2, 0.3, and 0.7. The model was run for 1,000 simulation steps on a network with N = 1000, observing the change in the group's opinion divergence at each step and averaging over 50 repetitions. Figure 7 shows how the divergence of opinion changes over the course of the simulation in these three situations.
Noticeably, as the proportion of highly educated individuals increased, the downward trend in the divergence of opinions significantly slowed down. This is because when the number of highly educated individuals increases, the number of communications between individuals decreases, so the probability of communication between individuals with large differences in opinions in the group also decreases. Therefore, the divergence of the group’s opinion will also decrease at a lower rate, which implies that the rate of evolution of the group’s opinion has become lower.
On the other hand, Figure 8 shows the final group opinion divergence as the proportion of highly educated individuals varies over 0–0.7, with the simulation run for 2,000 steps and averaged over 20 runs. Although increasing the proportion of highly educated individuals slows the downward trend of the group's opinion divergence, it has little effect on the final result of the evolution. In other words, the attributes of individuals in the group affect only the evolution process, not the evolution result: after sufficient steps, the final evolution of group opinions is largely determined only by the proportion of malicious nodes in the group and the extent of tampering when they spread their opinions.
4. Conclusions
In this study, a model based on game analysis was established, malicious nodes were added to the network, and two parameters, individual opinion distance and group opinion divergence, were introduced to measure the dispersion of group opinions. Based on this model, the influence of the proportion of malicious nodes and the extent of tampering on the evolution of group opinions was analyzed. Furthermore, the evolution of opinions in groups with different education levels was simulated, with particular attention to highly educated groups. Combined with the game benefits in the model, the results explain the sparse communication observed among highly educated groups, verifying the correctness and practicability of the model. The model shows that the proportion of malicious nodes, the extent of tampering when malicious nodes spread messages, and the distribution of individual education levels all affect the evolution of group opinions. The specific findings are as follows:
(1) Increases in the proportion of malicious nodes and the extent of tampering cause the final opinions of the group to scatter and prevent unification; the higher the proportion of malicious nodes and the greater the extent of tampering, the more scattered the final opinions of the group.
(2) The extent of tampering when malicious nodes disseminate messages has a more significant influence on the evolution of group opinions than the proportion of malicious nodes, and the two effects magnify each other.
(3) The higher the education level of individuals in the group, the lower the frequency of individual communication in the group, consistent with the phenomenon observed in sociology.
The reason is that highly educated individuals suffer greater losses from the distorted information transmitted by malicious nodes, so such nodes are more vigilant when exchanging opinions, and the probability of choosing not to believe the other party's information increases. This benefit- and risk-driven reduction in the willingness to communicate lowers the frequency of individual communication in the group. It was also found that highly educated groups have a slower rate of opinion evolution: because the number of exchanges between individuals decreases, the probability of communication between individuals with large opinion differences also decreases. Therefore, the dispersion of group opinions declines at a lower rate, reducing the speed of evolution of group opinions.
Note that there are still some imperfections in the construction of this model. Therefore, more detailed research should be carried out in the following aspects: (1) the refinement and improvement of individual attributes in the game process; (2) the analysis of malicious nodes, such as location, attributes, and number of neighbor nodes; and (3) further study on the punishment mechanism for malicious nodes using this model.
Data Availability
The raw/processed data required to reproduce these findings cannot be shared at this time as those data also form part of an ongoing study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant no. 71974063.
References
A. Mare and V. Latora, "Opinion formation models based on game theory," International Journal of Modern Physics C, vol. 18, pp. 1377–1395, 2006.
R. Takehara, M. Hachimori, and M. Shigeno, "A comment on pure-strategy Nash equilibria in competitive diffusion games," Information Processing Letters, vol. 112, pp. 59–60, 2011.
K. Bhawalkar, S. Gollapudi, and K. Munagala, "Coevolutionary opinion formation games," Theory of Computing, 2013.
J. M. S. Nogales and S. Zazo, "Replicator based on imitation for finite and arbitrary networked communities," Applied Mathematics and Computation, vol. 378, 2020.
D. Zinoviev and V. Duong, "A game theoretical approach to broadcast information diffusion in social networks," in Proceedings of the Spring Simulation Multi-conference, Boston, MA, USA, 2011.
W. Qiu, Y. Wang, and J. Yu, "A game theoretical model of information dissemination in social network," in Proceedings of the International Conference on Complex Systems, Taichung, Taiwan, 2013.
J. Yu, Y. Wang, J. Li, H. Shen, and X. Cheng, "Analysis of competitive information dissemination in social network based on evolutionary game model," in Proceedings of the Second International Conference on Cloud & Green Computing, Boston, MA, USA, 2013.
X. Cao, C. Yan, and C. Jiang, "An evolutionary game-theoretic modeling for heterogeneous information diffusion," in Proceedings of the IEEE Global Conference on Signal & Information Processing, Orlando, FL, USA, 2015.
A. Alberto, S. Angel, and T. Marco, "Cooperation survives and cheating pays in a dynamic network structure with unreliable reputation," Scientific Reports, vol. 6, 2016.
L. Zhong, Z. Wang, and C. Tan, "Influencing factors of user knowledge exchange in virtual academic community," Information Science, vol. 38, no. 3, pp. 137–144, 2020.