Special Issue: Foundations and Applications of Process-based Modeling of Complex Systems

Research Article | Open Access

Chulwook Park, "Role of Recovery in Evolving Protection against Systemic Risk: A Mechanical Perspective in Network-Agent Dynamics", Complexity, vol. 2021, Article ID 4805404, 23 pages, 2021. https://doi.org/10.1155/2021/4805404

Role of Recovery in Evolving Protection against Systemic Risk: A Mechanical Perspective in Network-Agent Dynamics

Academic Editor: Tomas Veloz
Received: 19 Mar 2020
Revised: 22 Aug 2020
Accepted: 25 Sep 2020
Published: 28 Apr 2021

Abstract

We propose a model of evolving protection against systemic risk related to recovery. Using the failure potential in network-agent dynamics, we present a process-based simulation that provides insights into alternative interventions and their mechanical uniqueness. The fundamental operating principle of this model is that computation allows greater emphasis on optimizing the recovery within the general regularity of random network dynamics. The rules and processes that are used here could be regarded as useful techniques in systemic risk measurement relative to numerical failure reduction analyses.

1. Introduction

1.1. Background

Various contemporary studies have argued that systemic risk and abrupt failure events are related to the highly interconnected systems and networked structures created by agents [1]. Based on a simple set of properties, observations derived from several models indicate that the proportion of protection between nodes in the network can be described as a probability. This relates to how systemic risk should be coped with, rather than simply predicted from the probability of failure (e.g., a failed bank in a financial system, asymptomatic transmission of a disease, or cognitive bias in decision-making). The risk of propagation is higher than that of independent failure events, extending to interdependent ones, which we refer to as cascading failures among system components [2]. Many scenarios that arise in simulations should be regarded not as indicating uncertainty or mistakes but rather as the consequences of inappropriate settings and interactions. In particular, proper protection against systemic risk can evolve heuristically through strategy dynamics (social learning and exploration) as a potential means of limiting failure. The present model expands this concept to investigate nonlinear randomness effects due to delayed responses, which may produce sensitivity to small changes that are difficult to prepare for or manage [3]. To assess the implications that this approach may have for our understanding of dynamic behavior, including protection processes, this study investigates the heuristics through which a proper response could mediate risk diffusion before a system fails completely [4], as discussed below.

1.2. Literature Review

By investigating how the complexity of networked structures underpins real-world systemic phenomena, various simulation studies have identified implications for individual robustness, the propagation of systemic risk, protection flow, and collective behavior across networks [5]. A distinguishing feature of such phenomena is that they emerge from the complex interactions among individual elements in a system or from their associations with each other [3]. The effect of context-varying mechanical flux on a system’s risk is highly complex, and the possibility of quantifying such a risk needs to be evaluated in consideration of the distortions and patterns of such effects [4]. These investigations involve a variety of information, e.g., social contacts that are favored as infection and spreading routes, which in turn can be used to infer the characteristics of the underlying networks [6]. As demonstrated by recent history (e.g., the financial crisis of 2008 and the outbreak of COVID-19 in 2019), the reproduction of a hazard (i.e., a failure, bias, or virus) varies greatly from individual to individual, which is generally believed to affect the spreading dynamics significantly [7]. Whether such individual inhomogeneity aggravates the outbreak is a challenging question, and the answer depends on the specific model [8, 9]. In particular, the connectivity patterns of individuals are key to understanding [10] how networks are structured and communicate with each other [11]. Other network properties that have been investigated include the concept of evolutionary dynamics, which helps characterize and understand the architecture of artificial systems in relation to the network properties [12]. As most tools for laying out networks are variants of an algorithm, it is difficult to use them to explore how the conditions of a network affect the network’s dynamics [13].
The assessment process can be used to make macroscale observations for input performance, while approaches for microscale evaluations to simultaneously obtain more detailed insights must be treated within the structure of the network itself [14]. Several studies have reported such structures in terms of both microscale (e.g., individual incentives and relative gain versus effort) and macroscale (e.g., institutional competition and central intervention) behavior. For example, evolutionary explanations of systemic risk demonstrate how optimal decision makers are constrained when creating biased estimates of their capability and show how individuals alter their strategies in response to perceptions of resource value [15]. Standard evolutionary models in complex environments show that potentially different biases in decision-making expose different experimental groups at different transition probabilities [16]. A recent study found that by employing strong mitigation (i.e., social distancing and isolation of confirmed cases as guided by risk diffusion testing) related to the different response strategies, an outbreak can be suppressed to levels below the normal critical care capacity [10]. Although a triggered cascade can evolve over a certain time scale (i.e., days), it can be mitigated with intervention by the central system [17]. Evidence from many nowcasting and forecasting estimates indicates that in the absence of prevention and control measures other than simply isolating the risk cases, the probability of continued transmission with the projected trajectory remains high (exponential growth of the number of infections) [7, 8]. Thus, there is an urgent need to reduce propagation rates and control the growth of this risk to reduce not only the peak demand on the system but also the total number of eventually affected individuals [18].

1.3. Gap Statement

Existing computational modeling techniques provide no bridge between the dynamics of agent nodes (with the vertex as a fundamental element) and the emergent properties of failure in recovery [19] (note that “recovery” in the context of financial systemic risk often refers to the fraction of a loan that is recovered after the default of the counterparty; here, it refers to a different quantity called the “recovery time delay,” similar to the concept of intervention). Most tools for laying out networks are variants of an algorithm and hence cannot easily be used to explore how the conditions affect the dynamics of the network [20], owing to the following factors: (1) Many of them take the form of a theoretical explanatory insight constructed in response to a hypothetical assumption. (2) The type and number of individuals are arbitrary or left undefined. (3) Validation with respect to stylized mechanical parameters cannot explain their potential over-parameterization. (4) There is an extended transient or burn-in phase that is discarded before analysis [21]. (5) Most importantly, the time units of many of these models may have no clear interpretation. To address these issues, we extend the model to fit an estimation of macro-/microscale variables, such as protections and interventions. The assessment process can be used to make macro- or microscale observations of input performance, while approaches for improving the recovery delay and obtaining more detailed insights should be investigated in the structure of the network itself [22]. This requires the combination of large repositories to construct representations of trajectories that can be analyzed at different scales and from different perspectives [23]. Indeed, the mechanisms and serial algorithms that underpin our understanding of systemic risk in networked agents must be evaluated through various means.
Accordingly, we can establish a common ground for the integration of knowledge and methodologies with consistent definitions and reconcile the approaches for studying networks from various fields, which will intuitively enable us to face all the difficulties and pitfalls that are inherent in interdisciplinary work.

1.4. Purpose

This study develops a modeling framework that can account for quantitative measurement in agented networks, allowing us to explore how the recovery time delay affects the risk potential in both macro- and microscale cases. To regard agent dynamics as a random network, this model follows the standard approach in agent-network modeling, where by default, a small event (agent n is hit by a shock at time t) can trigger the initial passage in a risk diffusion process. The mechanism tests the clear implications for different values of the interaction, including interruption, and how protection may be related to a set of interconnectedness with mitigation entities against failure potential, rather than solely focusing on cascading events.

1.5. Value

With the objectives of better risk assessment and effective risk reduction, this model will enable us to not only directly observe the spread of failure in agented network industries but also better understand how protection can be accomplished through intervention. This work is related to the mainstream of research that contributes to the discussion of systemic robustness.

2. Model (Process-Based Methodology)

The concept of network generation in this study is based on random processes, such as those described in graph theory [24]. Although an arbitrary construction cannot fully capture the local characteristics of individuals observed in real-world networks, everyone in the world is connected to everyone else through a chain of mutual acquaintances or even stronger relationships [25]. To examine the systemic risks that may result from failures originating and cascading on such contact networks, together with how the networked agents may be expected to protect themselves against failure cascades, we consider an agent-based systemic-risk model with evolving protection strategies developed by Ulf Dieckmann at the International Institute for Applied Systems Analysis (IIASA). This model enables agent-based simulations [26], beginning with the simple assumption that pairs of nodes can be randomly connected by an edge with a given connection probability (Table 1). Using a parameter to evaluate the impact of risk on the networked agents, we can estimate the influence of primary risk along the structure as a general failure property [27]. Through scaling for the different evolutionary (Table 2) and nonevolutionary components (Table 3), each step computes a new entity and generates a new proportion in relation to the intervention [3] (Table 4).


Table 1: Network parameters.

Parameter                                   Range
Number of individuals (nodes)               (1, ∞)
Connection probability (degrees)            (0, 1)

Table 2: Evolutionary components.

Parameter                                   Range
Imitation probability                       (0, 1)
Selection intensity                         (0, 1)
Exploration probability                     (0, 1)
Normally distributed increment              (0, 1)

Table 3: Nonevolutionary components.

Parameter                                   Range
Maintenance                                 (0, 1)
Propagation probability for each node       (0, 1)
Propagation probability through each link   (0, 1)
Protection maximum                          (0, 1)
Reference point                             (0, 1)

Table 4: Intervention components.

Parameter                                   Range
Time periods                                1–∞
Recovery rate                               1
Recovery time delay                         1–10
Realization                                 1–100
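As a concrete illustration of the two network parameters in Table 1, the sketch below generates a random undirected network in pure Python by linking each pair of nodes with the given connection probability. The function name and the fixed seed are ours, not part of the paper's code; it is a minimal sketch of the stated assumption that pairs of nodes are randomly connected by an edge.

```python
import random

def make_random_network(n, p, seed=0):
    """Build a symmetric adjacency matrix for an undirected random network:
    each pair of nodes is linked independently with connection probability p."""
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for v in range(n):
        for w in range(v + 1, n):
            if rng.random() < p:
                A[v][w] = A[w][v] = 1  # undirected: store both directions
    return A

A = make_random_network(16, 0.25)
degrees = [sum(row) for row in A]  # node degrees follow from the matrix
```

The resulting matrix is symmetric with a zero diagonal (no self-links), matching the undirected-network assumption used throughout the model.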

2.1. Operating Principles

First, each agent is characterized by two values: capital and strategy. An agent that has lost all of its capital is regarded as failed; initially, only one agent is failed. Each agent is assigned to a node of the network, which is specified by a number of nodes N, a connection probability p, and a resultant adjacency matrix A. Second, at each time step t, one unit of payoff is added to each agent’s capital c_i, of which fractions f_m and f_p are spent on maintenance and protection. Measuring capital in units of the payoff, the updated capital is expressed as c_i → c_i + 1 − f_m − f_p. Third, a failure potential can originate at each node with probability p_n and can propagate through each link with probability p_l. The system may reach one of two states after the appearance of a random failure potential (an initially failed node reached through a link) and before a second potential occurs. Each node carrying a failure potential becomes a failed node with probability 1 − π_i, where π_i is its protection as defined below. A failed node loses its capital and remains in a failed state for one time step Δt. Fourth, each agent’s strategy values, x_i and y_i, are updated through social learning and exploration. Social learning represents the process of choosing a random role model with probability p_im and imitating the role model’s strategy. The probability of imitation is 1/(1 + exp(−s Δc)), where Δc is the difference between the role model’s capital and the focal agent’s capital and s is the selection intensity. Then, exploration is performed by altering a randomly chosen strategy value with probability p_ex by a random increment drawn from a normal distribution with mean 0 and standard deviation σ. The protection level for each agent is determined according to the heuristic l_i = x_i + y_i e_i, where e_i is the eigenvector centrality of node i in the graph, which is a measure of the influence of a node in a network. If A denotes the adjacency matrix of the graph, the eigenvector centrality e must satisfy the equation A e = λ e, where the vector e is normalized.
We can normalize this vector by its maximum value, which brings the vector components closer to 1; this normalization is important for the measurement of l_i (note that we do not demonstrate this explicitly because it has been shown in a previous study [1]). The value of l_i must be truncated to the interval (0, 1 − f_m). Finally, the failure potential, which lasts for only one time step (Δt = 1), is controlled by another variable, which we refer to as the recovery time delay (τ). The details of these process-based mechanisms are presented below.
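The eigenvector centrality used in the protection heuristic can be sketched with a simple power iteration, normalized so the largest component equals 1 as described above. This is our own minimal reconstruction, not the paper's code; the shift by the identity (iterating A + I rather than A) is our implementation choice to avoid oscillation on bipartite graphs.

```python
def eigenvector_centrality(A, iters=200):
    """Power iteration for the leading eigenvector of adjacency matrix A,
    normalized so the largest component equals 1."""
    n = len(A)
    e = [1.0] * n
    for _ in range(iters):
        # Shifted iteration (A + I): same eigenvectors as A, but the
        # dominant eigenvalue is unique in magnitude, so iteration converges.
        e_new = [sum(A[v][w] * e[w] for w in range(n)) + e[v] for v in range(n)]
        m = max(e_new)
        e = [x / m for x in e_new]
    return e

# Worked example: a 3-node path 0-1-2; the middle node is the most central.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
e = eigenvector_centrality(A)
```

For the path graph, the exact centralities are (1/√2, 1, 1/√2) after max-normalization, which the iteration reproduces.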

2.2. Process 1. Basic Properties of Created Networks

Individuals in this model are considered as vertices (nodes), and sets of two elements are drawn as a line (edge) connecting two vertices. Data are stored in the nodes, and the edges represent the connections between them, although they can also store data. The edges between the nodes can describe any connection (adjacency) between them (Figure 1(a)). The nodes can contain any amount of data that is assigned to them, and the edges include the data of the strength of the connection they represent.

Connectivity is another essential property of this structure. A disconnected network has some vertices that cannot be reached by the edges from any other vertex (Figure 1(b)).

A disconnected network might have one vertex connected to no edges at all, or it might have two connected networks that have no connection between them. Similarly, a connected network has no disconnected vertices; thus, a metric called connectivity is used to describe a network as a whole, and it depends on the information being presented, usually identified by (n, d[p]). Networks also have additional properties, i.e., edges can have a direction, such that a relationship between two nodes is only one-way and not reciprocal. However, in the present model, we used an undirected network, the edges of which have no direction, because in our case, edges are drawn between two individual nodes who have met; hence, all relationships being represented are reciprocal. Thus, the relationship created (network) is undirected and begins with edges randomly drawn between one pair of nodes at a time. For example, four nodes may have edges between them, as shown in Figure 2.

By contrast, if we take an adjacency matrix, we may consider the rows and columns of the matrix to be labeled by the vertices (nodes), giving us vertices one through four here. We may use any actual labeling; here, we denote our adjacency matrix by A. The defining property of the matrix is that the entry a_vw in row v and column w is equal to 1 if there is an edge between v and w, and 0 otherwise.

Note that the adjacency matrix of a network contains all the information contained in that network. Similarly, note that the presented network is given a random appearance because of the way the computer generates the visuals. If the program were run a second time, a different picture would be generated; however, regardless of how it is run, the same relationship holds between the vertices (nodes) and the edges (lines), resulting in the same degrees.

The resulting adjacency matrix is obtained accordingly, and any node can be randomly linked to any other node. Because the collection of nodes influences the connection probability, the model investigates the distribution of connections in the network (the degree distribution) (Figure 3).
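The adjacency rule above can be made concrete with a small worked example. The four-node edge list below is illustrative (it is not taken from Figure 2); it shows that however the picture is drawn, the same edge list yields the same matrix and therefore the same degrees.

```python
# Hypothetical 4-node example: a_vw = 1 iff an edge joins v and w.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4
A = [[0] * n for _ in range(n)]
for v, w in edges:
    A[v][w] = A[w][v] = 1   # undirected edge: symmetric entries

# Row sums give each node's degree, independent of how the graph is drawn.
degrees = [sum(row) for row in A]
```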

Figure 4 is based on the standard representation in network theory. Note that all vertices have degree d. This d-regular network has degree d for every vertex, and the resulting matrix can be compared with another well-known random network, the Erdös–Rényi network [28], shown in Figure 5, where any node can be randomly exposed to any other node, creating random connections.

Technically, with respect to the collection of nodes influenced by the connection probability, it can be observed that in both cases, nodes with a higher degree (on the right-hand side of the plots) have a higher probability of being connected to an agent. However, for simplicity, the model adopts a random regular structure, in which all vertices have the same degree (the same connection probability). In general, various network types (e.g., institutions, firms, banks, food distributors, and supply chains) have ambiguous effects on individual ties because they allow for different ways of diversifying risk; this is particularly true when they influence each other through the same procedure of originating strategies and distributing security [18]. The literature regards this as strong empirical motivation for a technical comparison with another network (note the lattice and the calculation of eigenvector_centrality (Table 5)) to obtain a simple numerical solution (Figure 6).


Table 5: Eigenvector centrality by node.

Node              0      1      2      3      4      5      6      7      8      9      10     11     12     13     14     15
d = 4 [p ≈ 0.25]  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250  0.250
p = 0.25 [d ≈ 4]  0.330  0.237  0.165  0.207  0.320  0.223  0.150  0.171  0.212  0.100  0.353  0.227  0.382  0.231  0.110  0.339

d = random regular property obtained from Figure 4(a); p = random property obtained from Figure 5(a).

In the following section, analyses are conducted for the case of a random regular network such that agents within the strategy have the same level of diversification [17].

2.3. Process 2. Primary Risk Influence

Next, to observe the propagation process, the model uses an array (vector) to represent the probability of failure [∈ (0, 1)] of the initially influenced nodes j (1 ≤ j ≤ N), denoted by s_j. Each node can be in one of two states: not failed or failed. All nodes are initially without failure.

Extending the above output, given the failure dynamics, we obtain the states matrix S.

Note that this matrix is denoted by S instead of A because it no longer represents the adjacency matrix. S is still labeled by rows and columns with values of 1 and 0; the key difference is that it shows the state of each node (s_j: 1 = failure and 0 = absence of failure) at each time step t (Figure 7).

In relation to the fundamental characteristics of the model, we stipulate that an individual (node) can fail if one of its neighbors is infected with failure through the network (Figure 8). The elementary level of risk (or cascade of failures) depends on the co-occurrence of origination and propagation events at the nodes. This implies that individuals are more strongly influenced by other individuals that are highly linked in their network.

Next, the probability of failure of a node can be determined by the number of links of that node, scaled by the propagation probability p_l. If we keep the individual characteristics constant, the risk (failure probability ∈ (0, 1)) is a function of the connectivity created by the node’s connections. Nodes with fewer (more) links can be expected to have lower (higher) connectivity to their risk. In other words, if we remove nodes from the network, the bias is reduced where the links are fewer, even if the nodes retain their individual characteristics throughout the process.
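One way to formalize the degree-scaled risk described above is to assume each of a node's links independently transmits a failure potential with probability p_l; the node's exposure is then the chance that at least one link fires. The independence assumption and the function name are ours, a sketch rather than the paper's exact equation.

```python
def node_failure_risk(degree, p_l):
    """Probability that at least one of `degree` links transmits a failure
    potential, each link firing independently with probability p_l."""
    return 1.0 - (1.0 - p_l) ** degree
```

Under this sketch, risk increases monotonically with degree, matching the statement that higher-linked nodes face higher connectivity to risk.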

2.4. Process 3. Impose Protection against Systemic Risk

Along with the basic intuitions mentioned above, protection dynamics is also implemented. First, we divide the program into subdynamics (payoff, failure, and strategy). Then, the result of each subdynamics is saved. Each subdynamics is simple in isolation but adds complexity to the overall dynamics. To implement each of these, we use simple equations that combine previously computed variables with newly added or computed variables, for example, a ⟶ store in the table, b ⟶ store in the table, a + b = c ⟶ look up a, b ⟶ compute c, d ⟶ store in the table, a + d = e ⟶ look up a, d ⟶ compute e. In the example given in Figure 9, we use the values already stored in the table to compute new variables. This technique is often referred to as memoization (Figure 9).
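The lookup-table pattern just described can be sketched directly; the table keys mirror the a/b/c/d/e example in the text, and the numeric values are arbitrary illustrations.

```python
# Memoization pattern: store each intermediate result once, then reuse it.
table = {}
table["a"] = 2.0                        # a -> store in the table
table["b"] = 3.0                        # b -> store in the table
table["c"] = table["a"] + table["b"]    # look up a, b -> compute c
table["d"] = 4.0                        # d -> store in the table
table["e"] = table["a"] + table["d"]    # look up a, d -> compute e
```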

2.4.1. Payoff Dynamics

An agent is associated with each node and is characterized by its capital and strategy as follows: at each time step, each agent receives one unit of payoff, which is added to that agent’s capital c_i, of which fractions f_m and f_p are spent on maintenance and protection, respectively; thus, the capital value is updated as c_i → c_i + 1 − f_m − f_p. We used an elementwise computation with arrays for vectorization instead of a loop.

The previous initial random network property (the adjacency matrix) is retained as given.

Thus, the applied output given the payoff dynamics is obtained, where the components of the vector (payoff_dynamics) correspond to the entries of the capital array (Figure 10).
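The vectorized payoff update can be sketched in a few lines of NumPy; the symbols c, f_m, and f_p follow the notation above, while the function name and example values are ours.

```python
import numpy as np

f_m, f_p = 0.2, 0.3          # fractions spent on maintenance and protection
c = np.zeros(5)              # capital of 5 agents, starting from zero

def payoff_step(c, f_m, f_p):
    """Elementwise capital update: each agent receives one unit of payoff,
    minus the fractions spent, so c <- c + 1 - f_m - f_p (no Python loop)."""
    return c + 1.0 - f_m - f_p

c = payoff_step(c, f_m, f_p)
```

A single array expression replaces a per-agent loop, which is the efficiency point made later for the right-hand-side implementation in Figure 15.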

2.4.2. Failure Dynamics

The failure potential can originate at each node with probability p_n, and it also propagates along each link with probability p_l at each time step. The failure potential turns into failure with probability 1 − π_i, depending on the agent’s investment in protection: a possible choice is the saturation function π_i = p_max · l_i c_i/(l_i c_i + R), where π_i is the protection, p_max is a designated protection maximum, R denotes an allocated reference point, and l_i c_i represents the evolutionary protection level multiplied by the updated capital.

The applied output given the failure dynamics follows, where the components of the vector (failure_dynamics) correspond to the node states. In this section, a prewritten function (regular) is used to create a short 1-D array for vectorization instead of using the adjacency matrix directly. This substitution shortens the loop. The failure lasts for one time step (default) and results in the loss of the agent’s capital (Figure 11).
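One time step of the failure dynamics can be sketched as follows. All names and the signature are our own reconstruction, not the paper's code: a potential originates at each node with probability p_n, spreads from failed neighbors through each link with probability p_l, and turns into failure with probability 1 − protection.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_step(A, failed, p_n, p_l, protection, rng):
    """One sketched failure step on adjacency matrix A (0/1 entries).
    `failed` is a boolean state vector; `protection` holds each agent's pi_i."""
    n = len(failed)
    potential = rng.random(n) < p_n                      # spontaneous origination
    # Propagation: node j gains a potential if any failed neighbor's link fires.
    spread = (A * (rng.random(A.shape) < p_l)).T @ failed > 0
    potential |= spread
    new_fail = potential & (rng.random(n) > protection)  # survives protection?
    return failed | new_fail

# Deterministic corner case: two linked nodes, p_l = 1, zero protection,
# so failure is certain to spread from node 0 to node 1.
A = np.array([[0, 1], [1, 0]])
failed = np.array([True, False])
out = failure_step(A, failed, 0.0, 1.0, np.zeros(2), rng)
```

The corner case makes the stochastic rule testable: with certain propagation and no protection, both nodes end up failed.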

2.4.3. Strategy Dynamics

Each agent chooses its protection level according to the heuristic l_i = x_i + y_i e_i, truncated to the interval (0, 1 − f_m).

For initialization of the strategy values, two arrays are added for vectorization (x, y), where x is the vectorized array of the designated strategy values x_i and y is the vectorized array of the designated strategy values y_i, multiplied by the eigenvector centrality e obtained from the random graph, which is a measure of the centrality of the agent’s node, normalized to the interval (0, 1).

The eigenvector centrality for node i is e_i = (1/λ) Σ_j a_ij e_j, where A = (a_ij) is the adjacency matrix of the network with eigenvalue λ. The strategy values x_i and y_i evolve through social learning and strategy exploration as follows: at each time step, each agent randomly chooses another agent as a role model with probability p_im and imitates that agent’s strategy values with probability 1/(1 + exp(−s(c_r − c_f))), where this is the probability of acceptance of the role model for imitation, c_f is the capital of the focal individual, c_r is the capital of the role individual, exp denotes the exponential, and s is the intensity of selection (s < 1 = weak selection; s ≥ 1 = strong selection). The focal individual compares its capital with that of the nearby role individual (large Δc = large capital difference; small Δc = small capital difference); the focal individual then chooses whether to imitate the strategy of the role individual. In relation to this imitation, a temporary matrix is employed to avoid changing and using one matrix within the same loop (Figure 12).
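The Fermi-type acceptance rule described above can be sketched directly; the function name is ours, and the formula is the standard logistic form of the imitation probability.

```python
import math

def imitation_probability(c_focal, c_role, s):
    """Probability that the focal agent copies the role model's strategy:
    1 / (1 + exp(-s * (c_role - c_focal))), with selection intensity s."""
    return 1.0 / (1.0 + math.exp(-s * (c_role - c_focal)))
```

When the capitals are equal, the probability is exactly 1/2; a much richer role model is imitated almost surely, and a much poorer one almost never, with s controlling how sharp this transition is.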

Finally, at each time step, each agent with probability p_ex randomly chooses one of its two strategy values and alters it by a normally distributed increment with mean 0 and standard deviation σ (Figure 13).

2.5. Process 4. Reset Failure (and/or Protection) Potential

A failure lasts for one time step and results in the loss of the agent’s capital, after which the failure potential (and/or the failure itself) is reset: randomly chosen individuals acquire a failure potential with a certain probability, and each individual’s failure potential is reset according to a probability that approaches 0.

By default, this recovery rate is implemented by resetting the failure potential after every time step. At the same time, to control this intervention, we allow the number of time steps before the reset to be controlled by another parameter (τ), which represents the recovery time delay (Figure 14).
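The delayed reset can be sketched with a per-agent countdown: an agent entering failure is given a clock of τ steps and recovers only when the clock reaches zero. The clock representation is our own device for illustrating the recovery time delay, not the paper's implementation.

```python
def recovery_step(fail_clock):
    """Decrement each agent's remaining-failed counter (0 = healthy).
    An agent entered with fail_clock[i] = tau stays failed tau time steps."""
    return [max(0, t - 1) for t in fail_clock]

tau = 3                      # recovery time delay, in time steps
clock = [tau, 1, 0]          # agent 0 just failed; agent 1 is about to recover
clock = recovery_step(clock)
```

With τ = 1 this reduces to the default behavior (reset after every time step); larger τ keeps failed agents exposed longer, which is the intervention the model varies.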

2.6. Mechanical Insight of the Programming

The broader objective of this step-by-step procedure is to show the computerized process underlying the fundamental mechanisms that are used. Tables 1–5 and the matrices mentioned above account for the details of the rules and procedures (see Appendix 6 for details on the coding). In relation to the technical insights of the implementation, we present details that may interest program developers in Figure 15.

The versions on the left- and right-hand sides in Figure 15 represent implementation using a loop and vectorization using an array, respectively. Both versions are used in the development of our program, as can be seen in the code book, which presents simple calculations. Specifically, following the step-by-step procedures of the operation, the version on the left-hand side can be recommended in general; however, it requires more time to simulate many individuals across time steps (see Output in Figure 15). Thus, in some dynamics implementations of the model, we used the version on the right-hand side owing to its efficiency.

3. Performance (Results)

Given the set of features in relation to the dynamics presented according to the model description, the results of the simulation show the fundamental characteristics of risk diffusion in a randomly networked system and present a framework that enables us to examine the assumptions efficiently while imposing realistic protections against failure. In particular, the simulation characterizes the role of the recovery time delay from the observations of applied dynamics over time to indicate how failure spreads.

First, individuals in the model are considered as vertices, and sets of two elements are drawn as edges connecting two vertices in relation to the information given in the graph. This representation involves two parameters: the number of nodes (n) and the probability that an edge is present (the degree parameter d). In network analysis, indicators of centrality identify the most important vertices within a graph, and their applications include the identification of the most influential node(s) in a network. The eigenvector centrality of a node satisfies A e = λ e, where A represents the adjacency matrix of the network with eigenvalue λ. The principal eigenvector has an entry for each of the vertices; the larger the entry for a vertex, the higher its ranking with respect to eigenvector centrality (Figure 16).

Figure 16(a) represents the influence of failures, computed as the frequency of failed nodes in relation to their initial and final states, where the relative frequency of a failure state is its number of occurrences divided by the ensemble size, so that the percentage of occurrences of that failure outcome in the statistical ensemble corresponds to the relative frequency. Figure 16(b) shows a number of such simulations run over time steps, as failure spreads through the network, with a time series of the number of failures for 100 realizations (black lines) until the number of failed nodes stabilizes. This implies that the greater the propagation from a higher rank (with respect to degree or centrality), the greater the area drawn between the failed nodes (red) and the nonfailed nodes (blue). Intuitively, higher-degree agents have greater exposure to cascading failure risk than lower-degree agents, and this increases the potential for cascading failure [29].

3.1. Protection against Failure (Imposing Realistic Dynamics)

In line with our proposition, protection dynamics was applied with reference to the risk of failure. This model allows an agent to make a large investment in protection. Note that we assume that the failure potential from the dynamics becomes failure with probability 1 − π_i, depending on the agent’s investment in protection, and that a failure lasts for one time step (Figure 17). The probabilities of these scenarios can be demonstrated as follows.

First, scenario A’s failure potential is 0.527 because the investment in protection is 0.473, based on the saturation function for certain parameter values. Second, scenario B’s failure potential becomes 0.953 because the investment in protection is only 0.047, based on the same saturation function with different parameter values. As can be seen from the set on the left-hand side of the plot, scenario A yields a better result than the set on the right-hand side, corresponding to the coexistence of failure and the absence of failure over time. The results imply that contagion and systemic risk are likely to be intensified by the parameter setting, resulting in a significant failure cost [30]. Note that the failure potential also changes in relation to the choice of the strategy values x_i and y_i based on social learning (p_im) and exploration (p_ex) because these probabilities can lead to different levels of protection [1].

3.2. Role of Recovery (Imposing Recovery Time Delay)

Here, we present a specific characterization to determine whether an alternative intervention could affect the failure trends observed in coexistence scenario A. The model shown in Figure 18 implies that contagion and systemic risk are also likely to be intensified by the recovery time delay, resulting in a significant failure cost. We observe that immediate or delayed intervention is associated with microscale propagation criteria for each node (Figure 18).

In Figure 18, the individuals can be distinguished clearly. Nodes that recover immediately (plots on the left-hand side) evidently have the potential to protect against the propagation of failure, whereas nodes with a malfunction (plots on the right-hand side) do not. The simulation consisted of repeated trials, and each trial had two possible outcomes: failure (red) and absence of failure (blue). The probability of failure is the same for every trial, as in flipping a coin n times, based on the binomial variable that we defined in the model’s basic structure. The probability of exactly k failures is given by P(k) = C(n, k) p^k (1 − p)^(n − k), where n is the total number of trials, k is the total number of failure events, and p is the probability of failure in a single trial.

In the plots on the left-hand side in Figure 18, the probability of absence of failure is approximately 0.6, and the probability of failure is 0.4 (for Scenario A, this probability is calculated as the number of failures over the time steps). We assume a random variable equal to the number of failures after a certain number of time steps. The first condition for this result is that it consists of a finite number of independent trials: the probability of obtaining failure or absence of failure in each trial is independent of whether failure occurred in a previous trial. Thus, in the case of the plots on the left-hand side, with recovery at every time step (immediate intervention), the simulation consists of independent trials. The second condition is that each trial has exactly one of two discrete outcomes, so that each trial can be classified clearly as showing either failure or absence of failure within a fixed number of trials. The final condition is that the probability of failure and absence of failure is constant for each trial, as already measured for Scenarios A and B.

Proof of the role of the recovery rate. We now examine the frequency of failed agents observed on the left-hand side in Figure 18. Suppose that at a given time, there are failed nodes () and nonfailed nodes (). The number of failed nodes at is given by the expression below, where denotes the protection probability (arbitrarily set to 0.5 for this numerical calculation only); conversely, the number of nonfailed nodes follows accordingly. Failure is propagated through an existing link with probability ; at the same time, a link exists with a certain probability through the created network (,  = random regular graph). Therefore, if we impose the condition that the failure potential can propagate through each link, and the propagation probability is less than 1 (when the original probability  = constant), the equation can be modified simply by replacing with the propagation-adjusted term. For example, when we arbitrarily consider the protection and failure propagation () through a random network () to be sufficiently high (), the protection influenced by the failure propagation becomes small. On the contrary, if failure propagation is weak, such as , the protection influenced by the failure propagation increases. However, in the plots on the right-hand side in Figure 18, the probability is no longer the same but changes from trial to trial. We have the variable (), equal to the number of failures from a designated population. It appears to represent the same operation because each trial can still be classified as either failure or absence of failure over a fixed number of trials (t = 16). At the same time, the probability is not constant for each trial owing to the recovery time delay, so the trials are not independent.
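One way to read the propagation-adjusted replacement described above: when a link transmits failure only with probability p < 1, a node escapes failure either because it is protected (probability q) or because it is unprotected but the transmission simply does not occur. The composite below is our reading of that rule, under these assumptions, not the paper's exact formula:

```python
def effective_protection(q: float, p: float) -> float:
    """Probability that a node avoids failure along one link:
    protected (q), or unprotected but the link does not transmit
    ((1 - q)(1 - p)). Algebraically equal to 1 - p * (1 - q)."""
    return q + (1 - q) * (1 - p)
```

Consistent with the text, strong propagation (p close to 1) leaves the protection essentially unchanged, while weak propagation raises the effective protection toward 1.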
The probability of failure or absence of failure in the first trial would be equal across the two simulation cases. However, the probability in the second trial (and each following trial) would not be the same, because the right-hand-side simulation depends on what happened in the previous trial. In other words, each trial is carried out without replacement, which results in an exponentially large difference between the two cases. Thus, this result does not meet the independence condition: the probability in each trial depends on what happened in the previous one. Because replacement does not take place, the probability of failure for each trial is also not constant, unlike the simulation on the left-hand side, where the probability of failure is constant. Inspired by the plausible scenarios included in the recovery delay for cases A and C in Figure 18, we focus on the comparison of the parameter (with recovery delay constant = 1) over more time steps (=100). This is because the applied function [] is primarily decided by when we consider and the recovery time as constants (Figure 19). This identifies the individuals that interventions can protect against the risk of failure as the failure propagation mechanism influences agents’ decisions.
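The independence contrast above is the textbook binomial-versus-hypergeometric distinction: drawing with replacement keeps the per-trial failure probability constant, whereas drawing without replacement makes each trial depend on the previous ones. A minimal numeric illustration (the population sizes are ours, chosen for clarity):

```python
from math import comb

def hypergeom_pmf(N: int, K: int, n: int, k: int) -> float:
    """P(X = k) when drawing n items without replacement from a
    population of N containing K failures: successive draws are
    dependent, unlike the binomial (with-replacement) case."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# First draw: failure probability K/N, identical to the binomial case.
# Later draws: conditional probabilities shift because no replacement
# occurs, which is the dependence described in the text.
```

As with the binomial case, the probabilities over all feasible counts sum to one, but the per-draw probability is no longer constant.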
Figure 19 shows a scatter plot constructed using (Figure 19(a)) with a clear negative correlation between failure and capital. When constructed with the recovery time delay (Figure 19(b)), strong power-law relationships are still obeyed in cascading failure, even though the capital values remain high (see the averaged values for failure and capital in the lower part of the figure). In these results, the trend between failure and capital is determined by two parameters (); however, their influences do not have the same weight. Note that an intervention applied to recovery may persist for only a short period of time, which we might attribute to bias or bounded rationality.

3.3. Generalize with Stationarity

Let us assume that the system has evolved and reached stationarity, which implies that all variables fluctuate nearly constantly around their mean values. In stationarity, the state of the system can be considered independently of the initial conditions, and it is useful to obtain a relationship among the variables. First, we denote the number and fraction of failed agents by and , respectively. For stationarity, the average number of failed agents remains constant. Consider two successive steps and . Suppose that at time , there are and failed and nonfailed nodes, respectively. Therefore, the number of nonfailed nodes at is , and the number of failed nodes is . Stationarity means that the number of failed (or nonfailed) nodes remains constant (on average), i.e.,

Let us define . The above formula for can then be written simply as . Here, is the average probability of protection among the nonfailed nodes. Note that the protection probability for the failed nodes is zero because they have no capital. In other words, , where represents the protection probability for node . In the second equality, and are defined similarly.

This relationship shows that the stationary value of is bounded in the interval [0, 1] and is a decreasing function of . The limiting values behave as expected: for , i.e., full protection, there are no failed agents; for , i.e., no protection, failed agents persist. Note that at the end of each time step, all the failed agents recover (as an initial setting). If the propagation probability is less than 1, this equation should be modified by replacing as follows:

Again, by defining , we implicitly obtain . To check the validity of these formulae with a simulation, we increase the total simulation time (t = 1,000); for smaller , a longer time is required to reach stationarity (Figure 20).
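Under the simplifying reading that all failed agents recover at the end of each step and each nonfailed node fails with an average probability 1 − q̄, the failed fraction follows the map x → (1 − q̄)(1 − x), whose fixed point is (1 − q̄)/(2 − q̄). The iteration below is our sketch of the stationarity check, not the paper's code:

```python
def stationary_fraction(q_bar: float, x0: float = 0.9, steps: int = 200) -> float:
    """Iterate the failed-fraction map x -> (1 - q_bar)(1 - x) until it
    settles near its fixed point (1 - q_bar) / (2 - q_bar)."""
    x = x0
    for _ in range(steps):
        x = (1 - q_bar) * (1 - x)
    return x
```

For q̄ > 0 the map is a contraction and converges regardless of the initial condition, mirroring the claim that the stationary state is independent of initial conditions; full protection (q̄ = 1) drives the failed fraction to zero, consistent with the limiting case in the text.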

Proof of the stationary state. We calculate the average value of the capital in the stationary state to prove the long-term role of the recovery time delay. First, consider the case . It can easily be seen that the average value of the capital in the stationary state satisfies the following equation, where is the average value of the capital among nonfailed individuals. Combining the above equations, we can obtain and as functions of in the stationary state. However, it is easier to obtain as a function of , such that the protection probability should be replaced by when . Therefore, we obtain the following (note that in the simulation, we use a truncation of to constrain it to the interval [] [see code book]). According to the trajectories (upper part of Figure 20), outcomes such as failure (or capital) reach a stationary state even though the trajectories of the strategies continue to evolve and exhibit different behaviors (see the inset of the upper plots). Therefore, the presence of the detailed dynamic features of the system (capital and failure) indicates the possibility of stationarity in the coevolutionary process. This observation allows us to gain a sense of the qualitatively different nature of propagation (compared to ) within systems, such that failure is not limited to protection or strategy but is due to regulation over time. In other words, another rate continues to increase the bias as the length of the delay causes cascading failure, even where the individuals still have potential (Figure 21).
As indicated by the protection (Figure 21, bottom, green markers), nodes that are controlled by (left-hand plots) seem to lose their ability to protect owing to the propagation of failure. On the contrary, nodes that are controlled by (right-hand plots) seem to maintain their potential, even though they all feature cascading failure. The different levels of potential (between the two parameters [, ]) come from their capital, as marked in blue in Figure 20 (see the bottom plots of the figure).
Thus, to reduce the potential ramifications () of such additional losses to others, an immediate recovery intervention may not only be preferable given the potential damage () from individual failures but may also make it strategically possible for even large insolvent individuals to recover losses to uninsured connectors. Before the failure potential can advance the propagation value, it must be identified, and the recovery value of the individual protection potential (i.e., capital) must be estimated.

4. Discussion

We presented a simple general model to quantify the protection that mitigates systemic risk, with the recovery time delay as a central factor. Using a simple set of properties, the observations from this model indicate how probability describes the proportion of protection, characterized by how systemic risk should be coped with rather than merely predicted from the probability of failure [31].

4.1. Summary of the Model

We proposed the following process-based steps (1–4) to create the model. First, network-agent properties were established: once the basic data structure was constructed, the mechanism began with a specific undirected relationship between agents through agent-based simulations, in a macroscale structure coexisting with individuals interconnected at the microscale level in the network. Next, the primary risk influence was established using a parameter to evaluate the impact of risk on the networked agents; the influence of primary risk was estimated along the structure as a general failure property. Then, protection against systemic risk was implemented by embedding protection dynamics that emphasize the roles of payoff, failure, and strategy. Finally, recovery was considered through scaling of the different evolutionary and nonevolutionary components; this is crucial to the way the system functions, where each step computes a new entity and generates a new proportion in relation to the intervention. The implications of this model are as follows.
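The four steps above can be sketched as a toy simulation loop. This is a heavily simplified stand-in, not the paper's code: a ring lattice replaces the random regular graph (both give every node exactly d neighbors), and the parameter names n, p, q, and rec_delay are ours:

```python
import random

def simulate(n=100, d=4, p=0.4, q=0.6, rec_delay=1, steps=50, seed=1):
    """Toy version of steps 1-4: build a network, seed a failure,
    propagate it with probability p past protection q, and recover
    failed nodes after rec_delay time steps."""
    rng = random.Random(seed)
    # Step 1: network structure. A ring lattice stands in for the
    # random regular graph; every node has exactly d neighbours.
    nbrs = [[(i + k) % n for k in range(-(d // 2), d // 2 + 1) if k != 0]
            for i in range(n)]
    # Step 2: primary risk -- one seed failure at time 0.
    failed_since = {rng.randrange(n): 0}
    for t in range(1, steps + 1):
        # Step 3: protection dynamics -- a neighbour of a failed node
        # fails when the link transmits (prob p) and its protection
        # does not hold (prob 1 - q).
        newly = set()
        for i in failed_since:
            for j in nbrs[i]:
                if j not in failed_since and rng.random() < p and rng.random() > q:
                    newly.add(j)
        # Step 4: recovery -- failed nodes heal once the delay elapses.
        failed_since = {i: t0 for i, t0 in failed_since.items()
                        if t - t0 < rec_delay}
        for j in newly:
            failed_since[j] = t
    return len(failed_since)
```

With full protection (q = 1) the seed failure never spreads and the system empties of failures; with no protection and a long recovery delay, failures persist, which is the qualitative contrast the model explores.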

4.2. Theoretical Implications

A suite of plausible dynamics and decentralized bottom-up mechanisms was constructed by establishing appropriate rules for interaction, within which the system components can self-organize, including mechanisms for ensuring rule compliance (vectorized microscale implementation). This evolutionary heuristic promotes balance with respect to the interactions [32]. For example, when investment in protection is weak (low investment), a pattern of strong systemic risk emerges in the form of agent failure under networked conditions. By contrast, when investment in protection is strong (high investment), a pattern of protection emerges, with little diversification against all challenges (Figure 17). The simulation also reflected a clear correlation between sets of parameter values (between those for capital and failure and between those for the strategies of and ). Thus, the strategy of social learning could be another crucial factor in resource provision that violates expectations and leads to novel trends with high impact. These results shed light on the propagation modes. The observed contagion and persistence patterns should be regarded not as a direct causal link but in relation to an accumulated rationale driven by interconnectedness [33].

More importantly, in this study, the failure potential also reflects the time delay after the official failure of an individual [34]. Newly damaged individuals may affect others or may recover from the next time step onwards (recovery occurs with the rate ). The potential needed for recovery is related to the number of healthy individuals , and if individuals recover at time , the protection potential changes as follows:

Once the capital is exhausted, the systemic risk increases rapidly. This underscores the need for suitable real-time intervention based on short-term anticipation of risk flows [35]. However, because regulation occurs over time, another rate increases the failure potential even while capital remains (note that the recovery rate mainly influences failure and does not carry the same weight for capital). From our observations, we assume that the protection (or robustness) of an agent decreases (or increases) by an amount proportional to the relative exposure of risk sharing . Consistent with the network dynamics described above, the dynamics including the recovery time delay (in time series) can be formalized as follows:
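The balance between new failures and recoveries described above can be illustrated with a mean-field bookkeeping step. This is our hedged sketch in the style of an SIS compartment update, with contact rate beta and recovery rate gamma as assumed symbols (the paper's own rate notation is stripped in this extraction):

```python
def step(healthy: float, failed: float, beta: float, gamma: float):
    """One mean-field step: new failures arise from contact between
    healthy and failed agents at rate beta, failed agents recover at
    rate gamma; the total population is conserved."""
    total = healthy + failed
    new_fail = beta * healthy * failed / total
    recovered = gamma * failed
    return healthy - new_fail + recovered, failed + new_fail - recovered
```

A smaller gamma (slower recovery, i.e., a longer effective delay) leaves more failed agents after each step, mirroring the delayed-intervention effect described in the text.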

For convenience, with respect to the results, the protection potential, defined by , is substituted such that the function determines the extent of the loss caused by the default when risk sharing is fixed. This indicates that the agent defaulted at a previous time, denoted by the time variable . In the simulations, there was no delay during the development of the cascade . However, this dynamic differs across the defaults during recovery [36]. If the system delays the intervention, the cascade tends to be much larger (following the power law) because the defaults of the neighbors of a given agent are statistically dependent on , such that their default is more likely to cause the default of others:

In Table 6, the range [] is determined by failure propagation through each link in the random regular network (), which in turn is determined by the recovery time delay. We showed that the numerical trend of the protection probability defined by cannot be the same as that of [when  = constant (Figure 20)]. Thus, the failure influences in these simulations are not identical, as changes according to these parameters, with increases or decreases following a nonlinear curvature [37].


Table 6

 	1	2	3	4	5	6	7	8	9	10
 	2	1.333	1.2	1.142	1.111	1.090	1.076	1.066	1.058	1.050
 	0.772	0.585	0.563	0.539	0.538	0.561	0.548	0.589	0.533	0.566

4.3. Practical Implications

We propose a modified version of the protection model against systemic risk in terms of its failure-recovery mechanics, as demonstrated above. The mechanisms here are based on a few simple rules: a node fails at a given time with a probability of failure if its failure potential is greater than or equal to that of its nearest neighbors [1], or it fails owing to the interconnected potential, although it recovers spontaneously through external recovery or according to an intervention probability [3]. The consequences of the damage are crucial for systemic risk and controllability; however, they have not been explored systematically thus far [38]. We show how the process of embedding and the related recovery times affect the dynamics of failure processes in a network (see the proof of the role of the recovery rate). Therefore, we propose that the extent of the intervention regime in the system can be a source of evolutionary (in)stability, wherein the immediate recovery of components can mitigate damage and the propagation of failure in dynamic networks [17]. These decentralized management principles could be applied to logistical and production systems, or even to administrative processes and governance [18].

Governments are often reluctant to resolve insolvency in institutions [e.g., banks, firms, supply chains, and virus-infected individuals (e.g., Ebola, SARS, and COVID-19)] and permit them to continue operating despite their negative effects. The length of the attendant delay causes cascading failure regardless of each individual’s capital (i.e., immunity), which then reduces network welfare. These delays may propagate the damage to many other individuals in the network, increasing their fragility and probability of failure [39]. Evidence from governing systems suggests that if individuals’ troubles are estimated before their capital becomes negative, institutions could, based on their risk potential, weed out the inefficient or unfortunate individuals to avoid more serious adverse effects [40]. This indicates the importance of resolving challenges to individuals as quickly as possible and developing more rapid responses that certify protected individuals through immediate intervention. Moreover, prompt corrective action can increase the willingness to supply protection and thereby reduce the chances of systemic risk [41]. Given the evolutionary mechanisms shown in this simulation model, we observed that the evolutionary response often reaches a critical value for a plausible protection potential. We demonstrated that, although structures have high potential for individual protection and a strategy to maintain their capital, the interconnected recovery delay makes them weak amplifiers, where unprofitable intervention shows a high bias.

5. Concluding Remarks

Cascading failure was used to assess how rules and processes can collectively add up to a series of interconnected unsuitable outcomes. In the simulations, systemic risks could emerge even when every object was considered to be doing well individually, because the whole is vast. In other words, even if every single object is highly fit and behaving properly (i.e., has capital), there is no standard solution for proper management for everyone’s benefit. Therefore, limiting the resources needed for recovery requires a strong effort to halt cascades as they begin, when the damage is still small and the problem may not yet be perceived as threatening. These issues can and must be treated with a proper (re)design of the system and the adoption of management principles, as shown by the simulation results [42].

Many real-world disasters result from incorrect thinking and inappropriate system design. To address such risks more completely, a better understanding of proper intervention and resilience is crucial; however, an effective method for calculating networked risk is still lacking. The proposed model facilitates realistic calculation of the interdependence and propagation of risks in a network and of how they can be absorbed, and it can mediate such that both the system components and the plausible systemic interventions and outcomes work well.

We derived a unifying framework for the interplay of observations that embeds realistic dynamics such as payoff, failure, strategy, and recovery in a random regular network. The mechanics showed that the complexity should be described by the essential features of the model’s processes, which capture evolutionary damage spread owing to the coexistence of crucial hypotheses in the system. Thus, it may be possible to develop an account in which protection and failure are not static quantities, and propagation is understood in terms of how frequently the system is in a condition that leads to large diffusion. This availability of mechanisms raises the expectation that predictability and controllability are a simple matter of proper system design and operation [43]. More intuitively, this study provides a better understanding of recovery, whereby real-time management can overcome instabilities caused by delays in feedback or lack of information.

Data Availability

The data used to support this study can be obtained from the corresponding author upon request.

Ethical Approval

This study was approved by the local ethics committee (SNUIRB No.1509/002–002) and conformed to the ethical standards of the 1964 Declaration of Helsinki (Collaborative Institutional Training Initiative Program, report ID 20481572).

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant Number: 2020R1l1A1A01056967, PI: Chulwook Park).

Supplementary Materials

Appendix 1: the results of the entire simulation for protection dynamics against systemic risk (left-hand side of Figure 17). Appendix 2: the results of the entire simulation for protection dynamics against systemic risk (right-hand side of Figure 17). Appendix 3: the results of the entire simulation for protection dynamics against systemic risk (right-hand side of Figure 18). Appendix 4: stationarity controlled by pp,max (nodes = 10 × 10, connection d = 9, time steps = 1,000). Appendix 5: stationarity controlled by rec_t (nodes = 10 × 10, connection d = 9, time steps = 1,000). Appendix 6: code book. (Supplementary Materials)

References

  1. C. Park, “Network and agent dynamics with evolving protection against systemic risk,” Complexity, vol. 2020, Article ID 2989242, 16 pages, 2020. View at: Publisher Site | Google Scholar
  2. S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, and S. Havlin, “Catastrophic cascade of failures in interdependent networks,” Nature, vol. 464, no. 7291, pp. 1025–1028, 2010. View at: Publisher Site | Google Scholar
  3. D. Helbing, “Globally networked risks and how to respond,” Nature, vol. 497, no. 7447, pp. 51–59, 2013. View at: Publisher Site | Google Scholar
  4. P. C. Trimmer, A. I. Houston, J. A. R. Marshall, M. T. Mendl, E. S. Paul, and J. M. McNamara, “Decision-making under uncertainty: biases and Bayesians,” Animal Cognition, vol. 14, no. 4, pp. 465–476, 2011. View at: Publisher Site | Google Scholar
  5. R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, “Epidemic processes in complex networks,” Reviews of Modern Physics, vol. 87, no. 3, pp. 925–979, 2015. View at: Publisher Site | Google Scholar
  6. Å. Brännström, H. Sjödin, and J. Rocklöv, “A Method for Estimating the True Number of Infections from the Reported Number of Deaths with Application to COVID-19. Submitted to Eurosurveillance,” 2020. View at: Google Scholar
  7. K. Sneppen and L. Simonsen, “Impact of Superspreaders on dissemination and mitigation of COVID-19,” 2020. View at: Google Scholar
  8. J. Lloyd-Smith, S. Schreiber, P. Kopp, and W. Getz, “Superspreading and the effect of individual variation on disease emergence,” Nature, vol. 438, no. 7066, pp. 355–359, 2005. View at: Google Scholar
  9. T. Britton, “The Disease-Induced Herd Immunity Level for Covid-19 Is Substantially Lower than the Classical Herd Immunity Level,” 2020. View at: Google Scholar
  10. H. Sjödin, A. Johansson, Å. Brännström et al., “COVID-19 healthcare demand and mortality in Sweden in response to non-pharmaceutical (NPIs) mitigation and suppression scenarios,” Tentatively Accepted for Publication by International Journal of Epidemiology. medRxiv, vol. 2020, 2020. View at: Google Scholar
  11. A.-L. Barabási and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, no. 5439, pp. 509–512, 1999. View at: Publisher Site | Google Scholar
  12. O. Sporns, D. Chialvo, M. Kaiser, and C. Hilgetag, “Organization, development and function of complex brain networks,” Trends in Cognitive Sciences, vol. 8, no. 9, pp. 418–425, 2004. View at: Publisher Site | Google Scholar
  13. M. E. J. Newman, “The structure and function of complex networks,” SIAM Review, vol. 45, no. 2, pp. 167–256, 2003. View at: Publisher Site | Google Scholar
  14. S. Fortunato, “Community detection in graphs,” Physics Reports, vol. 486, no. 3–5, pp. 75–174, 2010. View at: Publisher Site | Google Scholar
  15. J. A. R. Marshall, P. C. Trimmer, A. I. Houston, and J. M. McNamara, “On evolutionary explanations of cognitive biases,” Trends in Ecology & Evolution, vol. 28, no. 8, pp. 469–473, 2013. View at: Publisher Site | Google Scholar
  16. T. W. Fawcett, B. Fallenstein, A. D. Higginson et al., “The evolution of decision rules in complex environments,” Trends in Cognitive Sciences, vol. 18, no. 3, pp. 153–161, 2014. View at: Publisher Site | Google Scholar
  17. S. Battiston, D. Delli Gatti, M. Gallegati, B. Greenwald, and J. E. Stiglitz, “Liaisons dangereuses: increasing connectivity, risk sharing, and systemic risk,” Journal of Economic Dynamics and Control, vol. 36, no. 8, pp. 1121–1141, 2012. View at: Publisher Site | Google Scholar
  18. N. Ferguson, D. Laydon, G. Nedjati Gilani et al., “Report 9: Impact of Non-pharmaceutical Interventions (NPIs) to Reduce COVID19 Mortality and Healthcare Demand,” 2020. View at: Google Scholar
  19. L. Böttcher, M. Luković, J. Nagler, S. Havlin, and H. J. Herrmann, “Failure and recovery in dynamical networks,” Scientific Reports, vol. 7, 2017. View at: Publisher Site | Google Scholar
  20. N. Dehmamy, S. Milanlouei, and A.-L. Barabási, “A structural transition in physical networks,” Nature, vol. 563, no. 7733, pp. 676–680, 2018. View at: Publisher Site | Google Scholar
  21. S. Poledna, M. G. Miess, and C. H. Hommes, “Economic forecasting with an agent-based model,” 2019. View at: Google Scholar
  22. X. Guardiola, R. Guimera, A. Arenas, A. Diaz-Guilera, D. Streib, and L. A. N. Amaral, “Macro-and Micro-structure of Trust Networks,” 2002. View at: Google Scholar
  23. M. A. Di Muro, C. E. La Rocca, H. E. Stanley, S. Havlin, and L. A. Braunstein, “Recovery of interdependent networks,” Scientific Reports, vol. 6, no. 1, 2016. View at: Publisher Site | Google Scholar
  24. N. C. Wormald, “Models of Random Regular Graphs. London Mathematical Society Lecture Note Series,” 1999. View at: Google Scholar
  25. A. Vespignani, “Twenty years of network science,” Nature, vol. 558, no. 10, pp. 528-529, 2018. View at: Publisher Site | Google Scholar
  26. E. Bonabeau, “Agent-based modeling: methods and techniques for simulating human systems,” Proceedings of the National Academy of Sciences, vol. 99, no. 3, pp. 7280–7287, 2002. View at: Publisher Site | Google Scholar
  27. J. M. Pacheco, A. Traulsen, and M. A. Nowak, “Coevolution of strategy and structure in complex networks with dynamical linking,” Physical Review Letters, vol. 97, no. 25, p. 258103, 2006. View at: Publisher Site |