Abstract

In the context of the virtual economy, monitoring and controlling its operation to prevent excessive asset bubbles arising from its inherent volatility, and thereby minimizing the harm to the macroeconomy, is one of the important tasks of economic development. This study applies the error correction model to the analysis of GDP growth factors, improves the algorithm to adapt it to the needs of GDP growth analysis, and constructs a model of the impact of interest rate and virtual financial reforms on GDP growth. Moreover, this study verifies the performance of the proposed model through data analysis. The experimental analysis shows that the error correction model proposed in this study can play an important role in the analysis of GDP growth factors. At the same time, the study verifies that virtual financial reform and interest rate reform have a measurable impact on GDP growth and are correlated with it to a certain degree.

1. Introduction

Since the birth of currency, the development of human society and economy has entered a period of monetization. Money began as barter in physical goods and later took the form of a fixed medium of exchange. At that stage, currency had intrinsic value, and its effectiveness and credibility were guaranteed by that intrinsic value. Later, in pursuit of ever lower transaction costs, currency evolved from physical exchange into paper money with little intrinsic value. At this point, the effectiveness and credibility of currency are guaranteed by government law, and physical currency is transformed into credit currency. However, with the disintegration of the Bretton Woods system and the implementation of noncash paper currency standards in various countries, the issuance of credit currency has become less and less constrained and is mainly determined by the monetary authorities. It is precisely under the current credit currency system that research on currency issuance becomes necessary. How much money should be supplied to make the economy run smoothly without generating economic and financial risks is a question worthy of research [1].

If we want to examine the relationship between the virtual economy and the real economy, we must first define both concepts. A review of the theoretical background shows that the concepts of virtual economy and real economy are based on the discussions of Marx and Hilferding on virtual capital and real capital, Veblen on money capital and industrial capital, and Keynes on finance and industry [2]. Among them, Marx's understanding of virtual capital is the most comprehensive and objectively dialectical. Combining the analysis of Marx and of many domestic and foreign scholars, we describe virtual capital as follows: it uses currency as a tool, relies on the financial system, and, through capitalization pricing, produces psychological expectations that have an important impact on price changes. Moreover, price changes are uncertain, asymmetric, and subject to positive feedback, and virtual capital can realize asset appreciation through market transactions. Its specific forms include debt certificates, ownership certificates, financial derivatives, and certain physical assets (such as real estate) [3].

Real capital is the form of capital corresponding to virtual capital; the common core of both is the pursuit of monetized profit, while virtual capital is an asset that realizes value appreciation through market transactions. Domestic scholars generally accept Marx's view that the concept of the virtual economy is derived from virtual capital, so their understanding of the connotation of the virtual economy is mostly formed on the basis of the concept of virtual capital. Foreign scholars rarely use the term virtual economy; the prevailing understanding treats finance as the main body of the virtual economy and approaches it mainly from a financial perspective. Based on the definition of virtual capital, we define the virtual economy as follows: the concept of the virtual economy is proposed relative to the real economy. It refers to the sum of the economic operation modes and institutional arrangements that take virtual assets as the price carrier and pursue monetary income independently of the real economy. The virtual assets here mainly refer to stocks, bonds, funds, forwards, futures, options, swaps, asset securitization products, and so on. The real economy refers to the mode of economic operation associated with the circular movement of real capital.

Once the virtual economy runs out of control, it causes very serious harm and negative effects on social and economic development and on people's normal lives. In the context of globally integrated financial development, it is already an urgent practical need to study further the intrinsic nature and operating laws of the virtual economy, to analyze in depth the transmission mechanism through which the virtual economy affects the real economy, to explore the statistical accounting of the virtual economy, to strengthen its management, to improve the formulation and implementation of macroeconomic policy, and to realize the stable and coordinated operation of the virtual economy and the real economy. Therefore, the research motivation of this article is based on theoretical research and empirical analysis of the relationship between the virtual economy and the real economy. Drawing on modern economic theories such as development economics, and combining the development status of the virtual economy with the background of economic globalization, a theoretical analysis of the development path of the virtual economy is carried out, and the development effect of the virtual economy is improved through the error correction model.

The innovation of this article is to conduct a comprehensive and in-depth analysis of the structural levels of the real economy and the virtual economy, to identify the intermediary media through which they interact and penetrate each other, and to deduce the transmission mechanism of the virtual economy's influence on the real economy, thereby revealing the relationship between the two and the laws of their development. The thesis also draws on modern economic theories such as financial innovation and new institutional economics to explore objectively and dialectically the essence of the evolution of the virtual economy. Combining the characteristics of the virtual economy and the transmission mechanism that affects the real economy, and referring to the existing national economic accounting system and monetary and financial statistics, the article defines the scope and caliber of virtual economic statistical accounting, designs and proposes corresponding virtual economic accounting rules, and triggers the error correction mechanism in a timely manner, so that existing problems in economic development are corrected and the quality and effectiveness of economic development are improved.

Under such a general background, it is necessary to monitor and control the operation of the virtual economy while promoting the development of the virtual economy to prevent excessive asset bubbles from appearing due to its inherent volatility, thereby minimizing its harm to the macroeconomy. This is of great significance for ensuring the stable development of the entire economy.

2. Related Work

Literature [4] used a revised model framework of the quantity theory of money to deduce that the growth rate of the money supply is a function of the growth rates of the real economy and the virtual economy and, on this basis, analyzed the relationship between the two. Literature [5] further pointed out that the trend of world economic virtualization has made the deviation between the virtual economy and the real economy a normal state; depending on whether the share of the virtual economy and the real economy in currency circulation is proportional to their relative scale, the virtual economy can also bring negative effects while promoting the growth of the real economy. Literature [6] concluded through quantitative analysis that there is no long-term, stable cointegration relationship between the virtual economy and the real economy, that the real economy is not the basis for the development of the virtual economy, and that the two deviate from each other. Literature [7] used a probability model of resource transfer to analyze the relationship between the virtual economy and the real economy and concluded that, for long-term economic growth, waste in either sector is harmful to growth, while in the short term any investment imbalance will also cause large fluctuations in the macroeconomy. Literature [8] researched the time-varying characteristics of the correlation between the virtual economy and the real economy and found that the virtual economy is relatively independent as an economic system relative to the real economy. Literature [9] used the Granger causality test to verify that the interaction between the real economy and the virtual economy is dynamic and not bidirectionally balanced: the influence of the real economy on the virtual economy is getting weaker, while the influence of the virtual economy on the real economy is gradually strengthening. Moreover, the more developed the economy, the weaker the impact of the real economy on the virtual economy, and the greater the independence and virtuality of the virtual economy. The asymmetric influence between the virtual economy and the real economy is an inevitable trend in the process of economic virtualization. Literature [10] proposed the concept of the financial interrelations ratio to observe the degree of deviation between financial assets and the real economy, that is, the ratio of total financial assets to the stock of physical assets (or national wealth). Literature [11] described in detail the values of financial assets and physical assets, showing the gradual increase in the ratio of broadly defined financial assets and the tendency of financial assets to deviate from physical assets. Literature [12] pointed out that in today's world economy less than 2% of daily financial transactions are related to physical transactions; virtual financial assets deviate from the scale of the real economy, and a large volume of virtualized financial assets keeps growing without limit. Literature [13] believed that the development of the financial market and the real economy is increasingly separated and pointed out that in the study of macroeconomics the real economy comes first and finance second. Literature [14] examined the reasons for the separation of finance and the real economy from the perspective of credit expansion.
Its research shows that the expansion of credit policy is a typical factor causing the separation of finance and the real economy. Literature [15] conducted research from the perspective of the independence of financialization and argued that financialization has, to a large extent, shifted from being a booster of economic growth to pursuing its own growth. Literature [16] summarized the phenomenon of separation between financial markets and the real economy and put forward a typical "deviation hypothesis" argumentation model; if the hypothesis holds, the expansion of the virtual economy is no more than a mirror image so long as the real economy still occupies the fundamentally dominant position. In the past, the productive real economy was at the center of economic development, and the virtual economy sector played a role in assisting the development of the real economy. However, with the rapid development of the virtual economy, this relationship has undergone major changes: the financial market has become relatively independent, operating according to its own logic and laws, while the real economy has to adapt itself to the operating laws of the virtual capital market.

3. Application of Error Correction Model in Virtual Financial Data Processing

3.1. Variable Selection of Semi-Parametric Variable Coefficient Partial Linear Measurement Error Model

Traditional research models assume a specific parametric form in advance, but in many cases it is difficult to know or test the specific functional form linking the dependent variable and the covariates. In such cases, if we still use a conventional parametric model to process real high-dimensional data, large model bias and a loss of predictive power will result, and nonparametric regression models are often used instead. A nonparametric model makes no assumptions about the specific form of the unknown function, only about certain of its properties, so it has good robustness. Commonly used methods include local smoothing, spline approximation, and orthogonal series approximation. These methods do have good statistical properties when the covariate is univariate, but when they are extended to multiple variables the so-called "curse of dimensionality" appears and they become difficult to implement: the iterative process requires a large amount of data, the convergence speed is slow, and the estimates are unstable. These methods are therefore not practical for financial data processing, so this article explores the application of error correction models in virtual financial data processing to improve the processing of virtual financial data.

The variable coefficient partial linear model is a direct extension of the classic linear model. It not only retains the advantage of easy interpretation of the parametric model but also promotes the flexibility of the nonparametric model and avoids the curse of dimensionality. Therefore, it has been extensively studied recently.

Y is generated by the varying-coefficient partially linear model
$$Y = X^{T}\beta + Z^{T}\alpha(U) + e,$$
where X and Z are d-dimensional and p-dimensional covariates, respectively, β is the unknown d-dimensional regression coefficient vector, α(·) is the unknown p-dimensional vector of coefficient functions, U is the univariate scalar variable entering the nonparametric part, and e is the model error with mean 0 and finite variance.

Therefore, this article first analyzes the variable selection of the semi-parametric variable coefficient partial linear measurement error model and then analyzes the error correction model on this basis.

Traditional methods such as best subset selection, stepwise regression, ridge regression, principal component regression, partial least squares, and Lasso estimation can each achieve only part of the goal of selecting variables and estimating coefficients simultaneously. The smoothly clipped absolute deviation (SCAD) penalty function is defined through its first derivative as follows [17]:
$$p_{\lambda}'(\theta)=\lambda\left\{I(\theta\le\lambda)+\frac{(a\lambda-\theta)_{+}}{(a-1)\lambda}\,I(\theta>\lambda)\right\},\qquad \theta>0,$$
where $p_{\lambda}(\cdot)$ denotes the penalty function and λ is a threshold (tuning) parameter. The constant a is often taken as 3.7. The shape of the penalty is shown in Figure 1.
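To make the definition concrete, the following is a minimal numpy sketch of the SCAD penalty and its derivative as written above. It is an illustrative implementation rather than the authors' code; the default a = 3.7 follows the convention noted in the text.

```python
import numpy as np

def scad_derivative(theta, lam, a=3.7):
    """First derivative p'_lambda(theta) of the SCAD penalty, theta >= 0."""
    theta = np.abs(np.asarray(theta, dtype=float))
    inner = theta <= lam                          # Lasso-like region
    middle = (theta > lam) & (theta <= a * lam)   # smoothly clipped region
    deriv = np.zeros_like(theta)
    deriv[inner] = lam
    deriv[middle] = (a * lam - theta[middle]) / (a - 1.0)
    return deriv                                  # zero beyond a*lam: no further shrinkage

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty obtained by integrating the derivative piecewise."""
    theta = np.abs(np.asarray(theta, dtype=float))
    return np.where(
        theta <= lam,
        lam * theta,
        np.where(
            theta <= a * lam,
            (2 * a * lam * theta - theta**2 - lam**2) / (2 * (a - 1)),
            (a + 1) * lam**2 / 2,
        ),
    )
```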

This article extends the "correction for attenuation" method, often used in linear measurement error models, to this model in order to eliminate the influence of measurement errors on estimation, and introduces a penalty function so that coefficients are estimated and variables are selected simultaneously. This yields the penalized least-squares criterion of this article. Because the penalized least squares contain an unknown nonparametric part α(·), this study proposes two methods to solve it. The first estimation method uses kernel estimation to convert the problem into ordinary penalized least squares. This method is more straightforward and takes up fewer computing resources; however, when the nonparametric part has a higher dimensionality, the troublesome "curse of dimensionality" appears. The second method uses local linear estimation to fit α(·) locally, substitutes the fitted values into the penalized least-squares criterion, and then transforms the problem into ordinary penalized least squares. As an improvement on Method 1, Estimation Method 2 effectively avoids the curse of dimensionality and shows a better and more robust estimation effect.

To better show the differences among several penalty functions, we consider the case where the design matrix X is orthonormal, set z = X^{T}y, and examine the following penalized least-squares criterion [18]:
$$\frac{1}{2}\|y-X\beta\|^{2}+\sum_{j}p_{\lambda_{j}}(|\beta_{j}|).$$

Here the penalty does not have to be the same for each term; for convenience we write the penalty on the jth coefficient as $p_{\lambda_{j}}(\cdot)$. Because X is orthonormal, minimizing the above criterion is equivalent to minimizing each component separately, that is [19],
$$\min_{\beta_{j}}\left\{\frac{1}{2}\,(z_{j}-\beta_{j})^{2}+p_{\lambda_{j}}(|\beta_{j}|)\right\}.$$

Therefore, we have the following estimation results: (1) Hard thresholding rule (Antoniadis and Fan (1997)):
$$\hat{\beta}_{j}=z_{j}\,I(|z_{j}|>\lambda).$$
(2) Bridge regression: the corresponding penalty function is $p_{\lambda}(|\beta|)=\lambda|\beta|^{q}$. (a) When q = 1, the Lasso estimate is obtained (Tibshirani (1996)):
$$\hat{\beta}_{j}=\operatorname{sgn}(z_{j})\,(|z_{j}|-\lambda)_{+},$$
where sgn is the sign function and $(z)_{+}$ equals z when z > 0 and 0 otherwise. This rule is called soft thresholding and was proposed by Donoho and Johnstone (1994). (b) When q = 2, ridge regression is obtained [20]:
$$\hat{\beta}_{j}=\frac{z_{j}}{1+2\lambda}.$$
(c) When λ = 0, the ordinary least-squares estimate is recovered. (3) SCAD: when $p_{\lambda}(\cdot)$ satisfies (2), the SCAD estimate is
$$\hat{\beta}_{j}=\begin{cases}\operatorname{sgn}(z_{j})\,(|z_{j}|-\lambda)_{+}, & |z_{j}|\le 2\lambda,\\ \dfrac{(a-1)z_{j}-\operatorname{sgn}(z_{j})\,a\lambda}{a-2}, & 2\lambda<|z_{j}|\le a\lambda,\\ z_{j}, & |z_{j}|>a\lambda.\end{cases}$$
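The sketch below evaluates these thresholding rules side by side on a grid of "observed" coefficients; it is an illustrative comparison in numpy, not part of the original study, and the value λ = 1 is an arbitrary example.

```python
import numpy as np

def hard_threshold(z, lam):
    """Hard thresholding: keep z if |z| > lambda, otherwise set it to 0."""
    z = np.asarray(z, dtype=float)
    return z * (np.abs(z) > lam)

def soft_threshold(z, lam):
    """Soft thresholding (Lasso, q = 1): sgn(z) * (|z| - lambda)_+."""
    z = np.asarray(z, dtype=float)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ridge_shrink(z, lam):
    """Ridge (q = 2): proportional shrinkage z / (1 + 2 * lambda)."""
    return np.asarray(z, dtype=float) / (1.0 + 2.0 * lam)

def scad_threshold(z, lam, a=3.7):
    """SCAD rule: soft thresholding near zero, a linear transition, then identity."""
    z = np.asarray(z, dtype=float)
    absz = np.abs(z)
    soft = np.sign(z) * np.maximum(absz - lam, 0.0)
    transition = ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
    return np.where(absz <= 2 * lam, soft, np.where(absz <= a * lam, transition, z))

# Compare the rules on a grid of "observed" coefficients z
z = np.linspace(-4, 4, 9)
print(hard_threshold(z, 1.0))
print(soft_threshold(z, 1.0))
print(ridge_shrink(z, 1.0))
print(scad_threshold(z, 1.0))
```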

From Figure 2, we can see that the purpose of selecting variables can be achieved by adding the three penalty items hard, L1, and SCAD. However, the estimation obtained by adding the hard penalty term is not continuous. This discontinuity may lead to the instability of model selection. That is, small changes in some independent variable data may lead to large changes in the optimal model selected.

A traditional parametric model first specifies the concrete parametric form of the model and then draws statistical inferences about the coefficients from the given data. In practice, however, the parametric form often cannot be determined or tested. In that case, a better approach is not to specify a concrete parametric form but only to assume that the unknown function has certain properties, and to let the data determine the specific functional form. Commonly used nonparametric regression estimates include local smoothing, spline approximation, and orthogonal series approximation; local smoothing methods include kernel estimation and local polynomial estimation.

The data are $(X_{1},Y_{1}),\ldots,(X_{n},Y_{n})$, where each $Y_{i}$ is a real number and each $X_{i}$ is a d-dimensional vector. The regression function of Y on X is [21] as follows:
$$m(x)=E(Y\mid X=x).$$

We want to estimate the value $m(x_{0})$ of the function m(x) at a point $x_{0}$. If a large number of observations were available at $X=x_{0}$, these observations could simply be averaged. When there is no observation exactly at $X=x_{0}$, a natural idea is to approximate $m(x_{0})$ using observations at nearby points in the domain of x. Since m(x) is usually assumed to be smooth, the function values at points closer to $x_{0}$ should be closer to $m(x_{0})$ and are therefore given larger weights, while points farther from $x_{0}$ are given smaller weights. Representing the weights by a kernel function with bounded support yields the kernel estimate.

Here $\{h_{ni}\}$ is a double-indexed array of bandwidths with $h_{ni}>0$ and $h_{ni}\to 0$ as $n\to\infty$, and $K(\cdot)$ is a d-dimensional kernel function.

In particular, when $h_{ni}=h_{n}$, the Nadaraya–Watson estimate is obtained. When d = 1, it takes the following form:
$$\hat{m}_{h}(x_{0})=\frac{\sum_{i=1}^{n}K_{h}(X_{i}-x_{0})\,Y_{i}}{\sum_{i=1}^{n}K_{h}(X_{i}-x_{0})}.$$

Here $K_{h}(u)=K(u/h)/h$, and h is the window width (bandwidth).
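The following is a minimal, self-contained sketch of this univariate Nadaraya–Watson estimator on simulated data; the Gaussian kernel, the simulated regression function, and the bandwidth are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def nadaraya_watson(x0, x, y, h, kernel=gaussian_kernel):
    """Univariate Nadaraya-Watson estimate of m(x0) = E(Y | X = x0)."""
    weights = kernel((x - x0) / h) / h          # K_h(X_i - x0)
    return np.sum(weights * y) / np.sum(weights)

# Illustration with simulated data: Y = sin(X) + noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 300)
y = np.sin(x) + 0.3 * rng.standard_normal(300)
grid = np.linspace(0.5, 5.5, 6)
fitted = [nadaraya_watson(g, x, y, h=0.4) for g in grid]
print(np.round(fitted, 3))  # should roughly track sin(grid)
```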

When d > 1, we have the following:
$$\hat{m}_{H}(x_{0})=\frac{\sum_{i=1}^{n}K_{H}(X_{i}-x_{0})\,Y_{i}}{\sum_{i=1}^{n}K_{H}(X_{i}-x_{0})},\qquad K_{H}(u)=\frac{1}{|H|}\,K(H^{-1}u).$$

Here $K(\cdot)$ is a d-dimensional kernel function with bounded compact support and bounded Hessian matrix, $\int K(u)\,du=1$, and H is a d × d symmetric positive definite bandwidth matrix. In practice, H is often taken to be diagonal and K to be the product of d univariate kernel functions; that is, $H=\operatorname{diag}(h_{1},h_{2},\ldots,h_{d})$ and $K(u)=\prod_{i=1}^{d}k(u^{(i)})$.

The kernel function usually takes one of two types. One is the Gaussian kernel function:
$$K(u)=\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{u^{2}}{2}\right).$$

The other type is the symmetric Beta family of kernel functions:
$$K(u;\gamma)=\frac{1}{\mathrm{Beta}(1/2,\ \gamma+1)}\,\big(1-u^{2}\big)_{+}^{\gamma}.$$

When γ = 0, it is the uniform kernel; when γ = 1, the Epanechnikov kernel; when γ = 2, the biweight kernel; and when γ = 3, the triweight kernel.
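As an illustration, the sketch below evaluates this family for γ = 0, 1, 2, 3; the normalizing constant Beta(1/2, γ + 1) is computed from the gamma function. This is a small numerical check, not part of the original study.

```python
import numpy as np
from math import gamma

def symmetric_beta_kernel(u, g):
    """K(u; g) = (1 - u^2)_+^g / Beta(1/2, g + 1); g = 0, 1, 2, 3 give the
    uniform, Epanechnikov, biweight, and triweight kernels, respectively."""
    norm = gamma(0.5) * gamma(g + 1.0) / gamma(g + 1.5)   # Beta(1/2, g + 1)
    u = np.asarray(u, dtype=float)
    return np.clip(1.0 - u**2, 0.0, None) ** g / norm

u = np.linspace(-1, 1, 5)
for g, name in [(0, "uniform"), (1, "Epanechnikov"), (2, "biweight"), (3, "triweight")]:
    print(name, np.round(symmetric_beta_kernel(u, g), 3))
```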

We consider the model
$$Y=m(X)+\sigma(X)\,e,$$
where E(e) = 0, Var(e) = 1, and X and e are independent.

The sample is $(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots,(X_{n},Y_{n})$. We want to estimate $m(x_{0}),m'(x_{0}),m''(x_{0}),\ldots,m^{(p)}(x_{0})$.

According to least squares, we have the following:

If we assume that m(x) is (p + 1)-times differentiable at the point $x_{0}$, then we have the following Taylor expansion:
$$m(x)\approx\sum_{j=0}^{p}\frac{m^{(j)}(x_{0})}{j!}\,(x-x_{0})^{j}.$$

We also take into account that this local approximation is accurate only for points close to $x_{0}$, so points near $x_{0}$ are given larger weights and points farther away smaller weights. The following weighted least-squares criterion is obtained:
$$\min_{\beta_{0},\ldots,\beta_{p}}\ \sum_{i=1}^{n}\Big\{Y_{i}-\sum_{j=0}^{p}\beta_{j}\,(X_{i}-x_{0})^{j}\Big\}^{2}K_{h}(X_{i}-x_{0}).$$

Here h is the window width and $K(\cdot)$ is a symmetric kernel density function with bounded support.

We define the following:
$$\mathbf{X}=\begin{pmatrix}1 & (X_{1}-x_{0}) & \cdots & (X_{1}-x_{0})^{p}\\ \vdots & \vdots & & \vdots\\ 1 & (X_{n}-x_{0}) & \cdots & (X_{n}-x_{0})^{p}\end{pmatrix},\qquad \mathbf{W}=\operatorname{diag}\{K_{h}(X_{1}-x_{0}),\ldots,K_{h}(X_{n}-x_{0})\}.$$

Moreover, with $y=(Y_{1},\ldots,Y_{n})^{T}$ and $\beta=(\beta_{0},\beta_{1},\ldots,\beta_{p})^{T}$, formula (17) can be written as follows:
$$\min_{\beta}\ (y-\mathbf{X}\beta)^{T}\mathbf{W}\,(y-\mathbf{X}\beta).$$

The corresponding solution is as follows:
$$\hat{\beta}=(\mathbf{X}^{T}\mathbf{W}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{W}\,y.$$
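The weighted least-squares solution above translates directly into a few lines of code. The sketch below is an illustrative local polynomial fit with a Gaussian kernel; the simulated data, bandwidth, and order are hypothetical choices used only to show the mechanics.

```python
import numpy as np
from math import factorial

def local_polynomial_fit(x0, x, y, h, p=1):
    """Local polynomial estimates of m(x0), m'(x0), ..., m^(p)(x0) by
    weighted least squares with Gaussian kernel weights K_h(X_i - x0)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    design = np.vander(x - x0, N=p + 1, increasing=True)   # columns (X_i - x0)^j
    w = np.exp(-0.5 * ((x - x0) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    W = np.diag(w)
    beta_hat = np.linalg.solve(design.T @ W @ design, design.T @ W @ y)
    # beta_j estimates m^(j)(x0)/j!, so rescale by j! to recover the derivatives
    return beta_hat * np.array([factorial(j) for j in range(p + 1)], dtype=float)

# Hypothetical check: for m(x) = x^2, m(0.5) = 0.25 and m'(0.5) = 1.0
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 400)
y = x**2 + 0.2 * rng.standard_normal(400)
print(np.round(local_polynomial_fit(0.5, x, y, h=0.3, p=1), 3))
```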

Usually, under the criterion of minimizing MSE, the optimal kernel function is the Epanechnikov kernel function.

The use of local polynomials involves the following issues: (1) the choice of window width: local polynomials are sensitive to the window width. Too large a window width causes a large estimation bias, while too small a window width causes a large estimation variance. Two window widths are commonly used, the global optimal window width and the local optimal window width. When the estimated function m(x) has similar smoothness over the entire region, the global optimal window width is used; when its smoothness differs greatly across the region, the local optimal window width is used. Because the optimal window width depends on unknown quantities, rule-of-thumb and plug-in (PI) methods are often used in practice, as are data-driven methods such as cross-validation and generalized cross-validation; (2) the choice of order p: for a fixed window width, a large p reduces the bias but increases the variance and the amount of computation, while a small p increases the bias. When estimating the jth derivative, p = j + 1 or p = j + 3 is typically used. In particular, when estimating m(x) itself, the expansion is often taken only to the first order, which gives a special case of the local polynomial method, namely local linear estimation; and (3) the choice of kernel function: the kernel has little effect on the final estimate. Under the criterion of minimizing the MSE, the optimal kernel function is the Epanechnikov kernel. A sketch of data-driven window-width selection by cross-validation is given below.
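This sketch carries out leave-one-out cross-validation for the window width of a Nadaraya–Watson fit; the Gaussian kernel, the candidate grid, and the simulated data are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def cv_bandwidth(x, y, grid):
    """Leave-one-out cross-validation score for each candidate bandwidth."""
    scores = []
    for h in grid:
        errs = []
        for i in range(len(x)):
            mask = np.arange(len(x)) != i          # drop the i-th observation
            pred = nw_estimate(x[i], x[mask], y[mask], h)
            errs.append((y[i] - pred) ** 2)
        scores.append(np.mean(errs))
    return grid[int(np.argmin(scores))], scores

rng = np.random.default_rng(2)
x = rng.uniform(0, 3, 150)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(150)
best_h, scores = cv_bandwidth(x, y, np.linspace(0.05, 0.8, 16))
print("selected bandwidth:", best_h)
```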

We let the observed data be a random sample from the following varying-coefficient partially linear measurement error model (20).

Among them, U and Z are the univariate and p-dimensional covariates, respectively, and are measured without error; X is the d-dimensional covariate that is subject to measurement error and is not directly observed; W is the observed surrogate for X; e is the model error with E(e) = 0; and the measurement error W − X has zero mean and a covariance matrix and is assumed to be independent of (U, Z, X, e).

Setting variable selection aside for the moment, since the model error and the measurement error both have conditional mean zero given U, taking conditional expectations given U in (20) and substituting back, we get the following:

Using common nonparametric smoothing methods, such as the Nadaraya–Watson estimator, we obtain estimates of the conditional expectations involved. At this point, the natural way to estimate β is the least-squares method, as shown in the following formula:

We can get the following:

Because X is measured with error, if the measurement error is ignored and the surrogate W is used directly in place of X, the estimates will inevitably be inconsistent. In linear models, the so-called "correction for attenuation" method is often used to correct the inconsistency of estimates caused by measurement errors. We apply this method to the above model and get the following:

The obtained estimate of β is as follows:
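To illustrate what the correction does, here is a minimal one-dimensional sketch with simulated data: the naive slope is attenuated toward zero, and subtracting the (assumed known) measurement error variance from the surrogate's sample variance restores consistency. The multidimensional version used in this model replaces the scalar variance with the measurement error covariance matrix; this is an illustrative example, not the paper's estimator.

```python
import numpy as np

# Simulated example: Y = beta * X + e, but only the surrogate W = X + u is observed.
rng = np.random.default_rng(3)
n, beta_true, sigma_u = 5000, 2.0, 0.8

x = rng.standard_normal(n)                    # true covariate (unobserved)
w = x + sigma_u * rng.standard_normal(n)      # observed surrogate with measurement error
y = beta_true * x + 0.5 * rng.standard_normal(n)

s_ww = np.var(w)                              # sample variance of the surrogate
s_wy = np.mean((w - w.mean()) * (y - y.mean()))

beta_naive = s_wy / s_ww                           # attenuated toward zero
beta_corrected = s_wy / (s_ww - sigma_u ** 2)      # "correction for attenuation"
print(round(beta_naive, 3), round(beta_corrected, 3))  # roughly 1.2 versus 2.0
```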

From the previous discussion, we know that most variable selection procedures can be implemented by adding different penalty functions, and the corresponding penalized least-squares criterion is as follows:

From the variable selection part of the introduction, we know that the SCAD penalty has good statistical properties among all the penalty functions, so this study chooses the SCAD penalty as the penalty term in (27), with the specific definition given in (2).

Next, we give the theoretical properties of the proposed penalized least-squares estimator. We set the following:

Without loss of generality, we assume that $\beta_{0}=(\beta_{10}^{T},\beta_{20}^{T})^{T}$, where $\beta_{10}$ collects the nonzero components of the true coefficient vector and $\beta_{20}=0$.

If the choice of window width satisfies the conditions of Lemma 1 in the Appendix and $\max\{p_{\lambda}''(|\beta_{j0}|):\beta_{j0}\neq 0\}\to 0$, then Q(β) has a local minimizer $\hat{\beta}$ that satisfies $\|\hat{\beta}-\beta_{0}\|=O_{p}(n^{-1/2}+a_{n})$, where $a_{n}=\max\{p_{\lambda}'(|\beta_{j0}|):\beta_{j0}\neq 0\}$.

This shows that when an appropriate window width and tuning parameter are selected, formula (27) admits a penalized least-squares estimator with the $\sqrt{n}$ convergence rate. We next give the estimator's oracle properties.

If, in addition, the tuning parameter satisfies $\lambda_{n}\to 0$ and $\sqrt{n}\,\lambda_{n}\to\infty$, then with probability tending to 1 the local minimizer $\hat{\beta}=(\hat{\beta}_{1}^{T},\hat{\beta}_{2}^{T})^{T}$ in Theorem 1 satisfies the following: (a) sparsity: $\hat{\beta}_{2}=0$; (b) asymptotic normality: $\sqrt{n}(\hat{\beta}_{1}-\beta_{10})$ converges in distribution to a normal distribution with mean zero.

In the same way, the corresponding estimate of α(·) can be obtained.

Step 1. Since the SCAD penalty function does not have a continuous second derivative, Newton's method cannot be applied directly. Fan and Li proposed the local quadratic approximation (LQA) to solve this problem. It first gives an initial value $\beta^{(0)}$ of β; if a component $\beta_{j}^{(0)}$ is very close to 0, then $\hat{\beta}_{j}$ is set to 0; otherwise, the penalty is approximated by the following formula:
$$p_{\lambda}(|\beta_{j}|)\approx p_{\lambda}(|\beta_{j}^{(0)}|)+\frac{1}{2}\,\frac{p_{\lambda}'(|\beta_{j}^{(0)}|)}{|\beta_{j}^{(0)}|}\,\big(\beta_{j}^{2}-\beta_{j}^{(0)2}\big),\qquad \beta_{j}\approx\beta_{j}^{(0)}.$$
With this approximation, minimizing (27) reduces to minimizing a quadratic form; by minimizing (33), a closed-form solution is obtained, and we use it as the update formula of Newton's iteration, iterating until convergence.
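The sketch below implements one plausible version of these LQA iterations for SCAD-penalized least squares on simulated data. The stopping rule, the threshold eps for deleting near-zero coefficients, and the unpenalized initial value are illustrative choices; this is not the exact algorithm of the paper.

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty (see the definition above)."""
    t = np.abs(t)
    return lam * ((t <= lam) + np.clip(a * lam - t, 0.0, None) / ((a - 1) * lam) * (t > lam))

def lqa_scad(X, y, lam, n_iter=50, eps=1e-6):
    """Local quadratic approximation (LQA) iterations for SCAD-penalized least squares.
    Coefficients whose current value falls below eps are frozen at zero."""
    n, d = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # unpenalized initial value
    for _ in range(n_iter):
        active = np.abs(beta) > eps                    # coefficients not yet set to zero
        if not active.any():
            return np.zeros(d)
        Xa = X[:, active]
        ba = np.abs(beta[active])
        Sigma = np.diag(scad_deriv(ba, lam) / ba)      # LQA weights p'(|b|)/|b|
        beta_new = np.zeros(d)
        beta_new[active] = np.linalg.solve(Xa.T @ Xa + n * Sigma, Xa.T @ y)
        if np.max(np.abs(beta_new - beta)) < 1e-8:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) <= eps] = 0.0
    return beta

# Sparse toy example: only the first two coefficients are truly nonzero
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 8))
y = X @ np.array([3.0, -2.0, 0, 0, 0, 0, 0, 0]) + 0.5 * rng.standard_normal(200)
print(np.round(lqa_scad(X, y, lam=0.2), 3))
```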
From the preliminary discussion above, we know that the choice of window width has a significant impact on the nonparametric estimates of the conditional expectations. If the window width is too small, the estimated variance is too large; if it is too large, a large bias is introduced. The choice of the threshold parameter likewise has a direct impact on the final variable selection: if the threshold parameter is set too large, some significant variables will be excluded from the model, resulting in underfitting; if it is set too small, it will not serve the purpose of selecting variables. This study uses cross-validation to select the window width and threshold parameters, where the estimate of β in each fold is obtained by removing the corresponding subsample from the total sample and running the regression on the remaining observations. When p is small, grid search can be used to find the optimal parameter values. When p is large but the random variables are similar, for example independent and identically distributed, a common window width can be used. When p is large and the random variables differ greatly, minimizing directly over the (p + 2)-dimensional parameter space becomes very difficult.
Although the above method is relatively simple and intuitive, the process of estimating β does not make full use of the information in model (20) and relies on multidimensional nonparametric estimation. As a result, when the dimensionality of Z is high, the curse of dimensionality appears. In this section, the local polynomial method combined with penalized least squares is used to improve the above method.
When α(·) is known and X has no measurement error, we can use the following penalized least-squares method to obtain an estimate of β. When X has measurement error, if the surrogate W is substituted directly for X in the above formula, the estimates become inconsistent. Similarly, applying the "correction for attenuation" method, we get the following: since the specific functional form of α(·) is not given, the above formula cannot be minimized directly with respect to β. We therefore use the local linear method to first estimate $\alpha(U_{i})$, i = 1, ..., n, and then substitute the estimates into equation (37). Since $\alpha^{T}(u)\,E(Z\mid U=u)=E(Y-X^{T}\beta\mid U=u)=E(Y-W^{T}\beta\mid U=u)$, we can use W directly in place of X in this local linear estimation step, and no correction is needed there.
Specifically, for u in a neighborhood of a point $u_{0}$, a local linear expansion $\alpha_{j}(u)\approx a_{j}+b_{j}(u-u_{0})$ is used. We set $a=(a_{1},\ldots,a_{p})^{T}$ and $b=(b_{1},\ldots,b_{p})^{T}$. Minimizing with respect to a, b, and β gives the following local least-squares problem, in which $K(\cdot)$ is the kernel function and $K_{h}(t)=h^{-1}K(t/h)$. The minimizers $\{\hat{a},\hat{b}\}$ of this problem give the local linear estimate $\hat{\alpha}(u_{0})=\hat{a}$. Substituting the estimate of α into (36) yields the following penalized least squares. Specifically, we set $Y=(Y_{1},\ldots,Y_{n})^{T}$, $\theta=(a^{T},b^{T},\beta^{T})^{T}$, and $\Omega=\operatorname{diag}\{K_{h}(U_{1}-u_{0}),\ldots,K_{h}(U_{n}-u_{0})\}$, so that equation (40) can be written in matrix form. The above one-step estimation does not require iteration, so the computation is fast. However, considering that jointly minimizing over a, b, and β involves a rather high dimension, which may make the estimation unstable, the following full iterative algorithm can also be used:

Step 2. The algorithm first gives an initial estimate of β.
With β fixed, minimizing the following formula with respect to a and b, we have the following:

Step 3. With the estimate of α(·) fixed, minimize with respect to β:

Step 4. The algorithm iterates Step 2 and Step 3 until convergence.
Similarly, we use cross-validation to select the window width and threshold parameter values, where the estimate of β in each step is obtained by removing the ith sample from the total sample and running the regression on the remaining observations.

3.2. Error Correction Mechanism

VECM is used for the test. VECM is derived on the basis of the VAR model and is often referred to as the cointegrated VAR model. When economic theory cannot fully reveal the tight internal specification of a multivariable system, the VAR model is often used to describe the multivariable dynamic system and to avoid the problem of simultaneity bias. In fact, a VAR is a linear model with n variables and n equations. In a VAR(p), each variable in the system is expressed as a linear function of its own lags up to order p and the lags up to order p of the other variables in the system. The main uses of VAR are causality tests and impulse response analysis. The VAR model that introduces an error correction mechanism is called the vector error correction model (VECM). VECM is a derivative model of VAR and is mainly used for the correlation analysis of cointegrated sequences. Equation (47) is the general form of VECM:
$$\Delta Y_{t}=\Pi Y_{t-1}+\sum_{i=1}^{p-1}\Gamma_{i}\,\Delta Y_{t-i}+\varepsilon_{t}.$$

If Yt is I(1) and cointegrated, then the dynamic characteristics of Yt can be described by the VECM in equation (48). Now assume Yt is I(1) (i.e., the components y1t, y2t, ..., ynt of Yt are all I(1) sequences). Under this assumption, Yt−1 is I(1), ΔYt is I(0), and ΔYt−i is I(0) for i = 1, 2, ..., p − 1. The left side of equation (48) is therefore an I(0) sequence, so its right side must also be stationary. This requires that Yt−1 become stationary after being multiplied by the matrix Π (i.e., after a certain linear transformation). Therefore, the matrix Π contains a mechanism that can make Yt stationary without differencing.

Π is an n × n matrix. Assume Rank(Π) = r (0 ≤ r ≤ n); then Π can be decomposed into the product of two n × r matrices α and β, that is, Π = αβ′, with Rank(α) = Rank(β) = r. Whether Yt is I(0) or I(1), and, in the I(1) case, whether Yt is cointegrated or not, β′Yt−1 must be I(0); the only difference is that the rank of Π differs across these cases. ① When Yt is I(1) and cointegrated, β is the cointegration matrix of Yt, so β′Yt−1 is I(0) and 0 < Rank(Π) = r < n. Time series are cointegrated because they are driven by the same stochastic trend, so after a simple linear operation the common trend can be eliminated and stationarity obtained. In this case, r is called the cointegration rank of Yt. ② When Yt is I(1) and not cointegrated, Rank(Π) = 0. Non-cointegration of Yt means that it contains no common I(1) trend, so the only way to satisfy β′Yt−1 being I(0) is β = 0, that is, Π = 0, and hence Rank(Π) = 0. In this case equation (48) degenerates into a VAR(p − 1) in differences. ③ When Rank(Π) = n, Yt must be I(0). When Π has full rank, its row vectors and column vectors are linearly independent, and no I(1) trend can be eliminated; therefore β′Yt−1 can be I(0) only if Yt itself is stationary, and the analysis of Yt should then adopt the VAR(p) model in levels. In summary, only the case in which Yt is I(1) and cointegrated is suitable for description by formula (48). Therefore, the empirical test must first determine the stationarity and cointegration of the sequences. VECM can well reflect the long-term and short-term adjustment relationships within the dynamic system. Equation (48) can also be written as follows:

Once the term β′Yt−1 is not equal to zero, ΔYt at subsequent times will receive feedback from this term. When the I(0) sequence β′Yt has mean zero, any value of β′Yt−1 greater or less than zero represents a deviation from the equilibrium state. Since ΔYt does not diverge, once β′Yt−1 deviates from the equilibrium state, the deviation is fed back into ΔYt at a speed governed by α, which reduces the deviation and brings β′Yt at the next moment closer to its equilibrium state. For a sequence whose error correction term has a nonzero mean, the VECM adjustment also removes the influence of the mean over a short time so that β′Yt oscillates near zero. The term β′Yt−1 in equation (48) is often referred to as the VECM error correction mechanism; it represents the deviation of the system from its long-run equilibrium at the previous moment. An important use of VECM is the analysis of causality. From the description above, there are two levels of causality in VECM: long term and short term. (1) Short-term causality: Δyit is affected by the lagged differences of Δy1, Δy2, ..., Δyn from lag 1 to lag p − 1, and these influences feed back into the value of Yt. They can be regarded as a kind of noise that may push Yt toward equilibrium or away from it. (2) Long-term causality: since the cointegration relationship among the components of Yt holds in the long run, once ECt−1 feeds back a deviation from the long-run equilibrium, the VECM adjusts ΔYt toward equilibrium at a speed of α times the deviation. Short-term causality is judged by the joint significance of the coefficients on the lagged differences from 1 to p − 1 in the VECM estimation results, which expresses how ΔYt is affected by the lagged difference terms. Long-term causality is judged by the significance of the coefficient on the error correction term ECt−1, which expresses how ΔYt is affected by Yt−1.
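To make the estimation steps concrete, the following is a minimal, hypothetical sketch using the statsmodels VECM implementation on simulated cointegrated series; the variable names gdp_proxy and virtual_index are placeholders and the data are not the paper's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Simulate two cointegrated I(1) series as stand-ins for a GDP proxy and a
# virtual-economy index (purely illustrative).
rng = np.random.default_rng(5)
n = 300
common_trend = np.cumsum(rng.standard_normal(n))          # shared stochastic trend
data = pd.DataFrame({
    "gdp_proxy": common_trend + 0.5 * rng.standard_normal(n),
    "virtual_index": 0.8 * common_trend + 0.5 * rng.standard_normal(n),
})

# Step 1: Johansen trace test for the cointegration rank r
rank_test = select_coint_rank(data, det_order=0, k_ar_diff=1, method="trace")
print("selected cointegration rank:", rank_test.rank)      # expected to be 1 here

# Step 2: fit the VECM with one cointegration relation
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("adjustment coefficients alpha:\n", res.alpha)        # speed of error correction
print("cointegration vector beta:\n", res.beta)             # long-run equilibrium relation
```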

4. The Impact of Interest Rate and Virtual Financial Reforms on GDP Growth Based on the Error Correction Model

The error correction mechanism of this article is as follows:(1)Improve credit policy support and promote the rational allocation of financial resources(2)Actively promote the marketization of interest rates and comprehensively promote the improvement of capital utilization(3)Speed up the development of multiple financing channels and gradually improve the financial structure dominated by indirect financing(4)Attach importance to the establishment of a credit system and create excellent conditions for the optimal allocation of financial resources(5)Grasp the development of the situation and ensure the stability of market expectations

When the nominal interest rate does not change, the actual interest rate and prices change in the opposite direction; when the actual interest rate does not change, the nominal interest rate changes in the same direction as the price. Based on this, we can construct an interest rate fluctuation mechanism based on changes in price levels, as shown in Figure 3.

The interest rate is determined by the supply and demand of money. The interest rate is the currency loan price when the money supply and demand reach equilibrium, so the change in the interest rate level depends on the change in the money supply and demand. Based on this, we can construct an interest rate fluctuation mechanism based on changes in the money supply, as shown in Figure 4(a). The RMB exchange rate has a significant negative impact on interest rates, while the interest rate has no significant impact on the RMB exchange rate. We can construct an exchange rate-based interest rate fluctuation mechanism, as shown in Figure 4(b). The level of the investment effect of interest rate fluctuations directly affects the sensitivity of investment changes to interest rate fluctuations and has an important impact on the effective implementation of interest rate policies. Keynes’ interest rate transmission theory shows that the change in interest rate is negatively correlated with the change in investment. Based on this, we can construct the formation mechanism of the investment effect of interest rate fluctuation, as shown in Figure 4(c).

Changes in interest rates will inevitably cause consumer behavior and cause changes in the total consumption of the entire society. According to the neo-Keynesian monetary transmission theory, the rise in interest rates will cause a decline in investment and demand for consumer durables. Based on this, we can construct the formation mechanism of the consumption effect of interest rate fluctuations, as shown in Figure 4(d).

In the virtual economy market, while original virtual capital such as stocks and bonds still has a clear connection with the real economy, the trading of financial derivatives such as options and swaps derived from stocks and bonds is a further manifestation of economic virtualization and the most obvious manifestation of the virtual economy. The trading activities of options, swaps, and other derivative products are intertwined with the banking market and the foreign exchange market and have become the main body of the virtual economic system. The emergence of financial derivatives and the development of stocks and bonds in the global market have increased the size of the virtual economy and further strengthened its influence on the real economy.

With the continuous expansion of the form and scope of credit relationships, financial innovation activities have also gained sufficient sources and motivation, and financial derivatives have sprung up into the capital market. The margin system for transactions in the derivatives market enables transactions to have multiple creative capabilities, that is, financial leverage. As a result, the development of the financial derivatives market can be described as leaps and bounds. Figure 5 shows the model of credit creation and financial virtual operation.

The decline in market interest rates, the rise in foreign exchange reserves, and the rise in exchange rates have caused a significant increase in liquidity. Under the conditions of slow technological progress and inadequate financial supervision, excess liquidity flows into the real economy in a small amount and flows into the virtual economy in a large amount, causing the price of virtual assets to rise. Under the circumstances that the macroeconomy is functioning well, the public is actively expecting and the supervision is not in place, the asset price will be promoted to further rise, and this cycle will lead to the formation of a virtual economic bubble. Through the above analysis, the process of the virtual economy bubble can be represented as shown in Figure 6.

After constructing the above model, this study applies it to existing network data and uses the error correction model to analyze the relevance of the impact of interest rate and virtual financial reform on GDP growth. The GDP growth rate and the lagged value of the comprehensive virtual economy index are selected to represent the macroeconomic operating conditions and public expectations, respectively. To quantitatively measure the level of science and technology, we must first distinguish two concepts, namely science and technology in the narrow sense and in the broad sense. Science and technology in the narrow sense refers to natural science and mainly includes the science and technology related to the production and operation activities of microeconomic agents. Science and technology in the broad sense refers to all factors other than input factors that affect output, such as the level of management. The research in this study adopts the broad sense. The parameter A in the commonly used Cobb–Douglas (C-D) production function is also a reflection of the level of science and technology in this broad sense; therefore, the measurement in this article is of broad science and technology. Among them, Figure 7 and Table 1 show the analysis of the relevance between the interest rate reform and GDP growth, and Figure 8 and Table 2 show the analysis of the relevance between the virtual financial reform and GDP growth.
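As background for the broad technology measure mentioned above, the following is a purely hypothetical sketch of how the technology level A of a Cobb–Douglas production function can be recovered by log-linear least squares; the data are simulated and do not come from the paper's experiments.

```python
import numpy as np

# Hypothetical Cobb-Douglas production function Y = A * K^alpha * L^beta.
rng = np.random.default_rng(6)
n = 120
K = rng.uniform(50, 500, n)          # capital input (hypothetical units)
L = rng.uniform(20, 200, n)          # labour input (hypothetical units)
true_A, alpha, beta = 2.5, 0.4, 0.6
Y = true_A * K**alpha * L**beta * np.exp(0.05 * rng.standard_normal(n))

# log Y = log A + alpha * log K + beta * log L + error, estimated by OLS
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print("estimated A:", round(np.exp(coef[0]), 3))        # close to 2.5
print("estimated alpha, beta:", np.round(coef[1:], 3))  # close to [0.4, 0.6]
```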

From the above experimental research, it can be seen that the error correction model proposed in this study can play an important role in the analysis of GDP growth factors and at the same time verifies that virtual financial reform and interest rate reform can have a certain impact on GDP growth and have a certain degree of relevance.

5. Conclusion

With the continuous development and deepening of the market economy, the virtual economy accounts for an increasing proportion of the economic system. In the course of this development, however, the virtual economy, which initially grew on the basis of the real economy and in its service, has acquired a certain degree of independence and has even departed from the real economy to some extent. The difference between the capitalization pricing of virtual assets and the cost- and technology-based pricing of physical assets has become the fundamental reason for the deviation between the virtual economy and the real economy. The capitalization pricing of virtual assets and people's limited ability to predict the market make the prices of virtual assets extremely unstable, which gives the virtual economy its inherent volatility and high risk. In addition, because the scale of the virtual economy keeps increasing, this inherent volatility and high risk can easily cause severe damage to the entire economy. This study uses the error correction model to analyze the relevance of the impact of interest rate and virtual financial reform on GDP growth and concludes that virtual financial reform and interest rate reform have a certain impact on GDP growth.

Data Availability

The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This study was sponsored by Taizhou Vocational College of Science and Technology.