Advances in Civil Engineering

Special Issue: Failure Mechanisms, Prediction, and Risk Assessment of Natural and Engineering Disasters through Machine Learning and Numerical Simulation

Research Article | Open Access

Volume 2021 | Article ID 2015408 | https://doi.org/10.1155/2021/2015408

Fei Yin, Yong Hao, Taoli Xiao, Yan Shao, Man Yuan, "The Prediction of Pile Foundation Buried Depth Based on BP Neural Network Optimized by Quantum Particle Swarm Optimization", Advances in Civil Engineering, vol. 2021, Article ID 2015408, 15 pages, 2021. https://doi.org/10.1155/2021/2015408

The Prediction of Pile Foundation Buried Depth Based on BP Neural Network Optimized by Quantum Particle Swarm Optimization

Academic Editor: Faming Huang
Received: 19 Apr 2021
Revised: 28 May 2021
Accepted: 16 Jun 2021
Published: 24 Jun 2021

Abstract

Because the bearing stratum fluctuates and the soil layers have distinct properties, the buried depth of pile foundations varies from pile to pile. In practical construction, the designed pile length is not necessarily consistent with the actual pile length, so many piles have to be cut off or supplemented, resulting in large cost waste and potential safety hazards. Accordingly, predicting the buried depth of pile foundations is of great significance in construction engineering. In this paper, a nonlinear model relating pile coordinates to buried depth was established with a BP neural network to predict the samples to be evaluated; the results indicated that the BP neural network was easily trapped in local extrema, and the error reached 31%. The QPSO algorithm was then introduced to optimize the weights and thresholds of the BP network, and the minimum error of QPSO-BP was merely 9.4% in predicting the depth of the bearing stratum and 2.9% in predicting the buried depth of the pile foundation. In addition, QPSO-BP was compared with three other robust models, FWA-BP, PSO-BP, and BP, using three statistical tests (RMSE, MAE, and MAPE). The accuracy of the QPSO-BP algorithm was the highest, which demonstrates its superiority in practical engineering.

1. Introduction

Pile foundations are one of the oldest foundation forms. Over time, they have become the most commonly used foundation type for high-rise buildings, important structures, tunnels, bridges, offshore platforms, and other structures on soft ground [1]. A pile, as a member of the foundation structure, is set vertically or at an incline in the soil and has a certain stiffness and bending-shear capacity. It passes through soft compressible soil layers, compacts the weak soil, and transmits part or all of the load from the superstructure to a soil layer or rock below with low compressibility and adequate bearing capacity, thus avoiding excessive settlement of the foundation and improving the bearing capacity of the ground [2]. As one of the most important steps in foundation construction, pile foundation construction is a large-scale undertaking in which material waste is also a problem. In actual engineering, the designed pile length is often found to be much longer or shorter than the required value, as shown in Figure 1.

Under current construction practice, each pile to be bored can be surveyed in advance. There is a technique of "one pile, one investigation" (or "one pile, many investigations") [3], in which each pile to be driven is investigated in advance and the pile length is designed from the pile-bottom elevation estimated by the most unfavorable principle, but this technique is not adopted in every project [4]. In general engineering, only a few survey boreholes are arranged to infer the soil distribution of the whole site.

When the bearing stratum fluctuates irregularly and markedly, there may be a large elevation difference between an undrilled area and the nearby drilled area. In an area where the bearing stratum is relatively shallow, the pile will reach the bearing stratum too early and cannot be driven deeper. The pile head will then protrude too far from the soil and needs to be cut off, as shown in Figure 2. Conversely, in an area where the bearing stratum lies deep, the pile length will be insufficient and needs to be supplemented. This situation arises because the available pile lengths are limited, especially for prefabricated piles. If construction is carried out strictly according to the designed pile length, a large number of piles will have to be cut off or supplemented, resulting in unnecessary waste in pile foundation engineering. Therefore, this paper predicts the buried depth of the pile foundation and the bearing stratum, so that targeted technical schemes and construction deployment can be prepared in practical engineering according to the prediction results.

At present, there are few studies on predicting the buried depth of pile foundations and the fluctuation of the bearing stratum; nevertheless, many scholars have made achievements in the broader field of pile foundation engineering. In static load tests of pile foundations, Qi et al. [5] derived a first-order linear dynamic differential equation by studying the settlement of piles under various loads. On the basis of gray system theory, they established a GM (1, 1) model of the load-settlement relationship of a single pile and used it to predict the ultimate bearing capacity and the complete load-settlement relationship. Although this method gave accurate predictions, the uniformity of the original data must be ensured. Based on measured data, Gao et al. [6] adopted the hyperbolic method to predict the bearing capacity of squeezed branch piles. Although the deviation between the predicted results and the measured curve was not large, the values predicted by this method were generally too high, so it has certain limitations. Deng et al. [7] took the super-long, large-diameter cast-in-place pile foundation of the Sutong Yangtze River Bridge as an example, calculated its settlement according to different specifications, and then proposed a new empirical formula that considers pile compression and modifies the additional stress at the pile tip by comparison with the settlement measured in a large-scale centrifugal model test. The formula was then applied to verify the settlement of a super-large pile group foundation on the Nei-Kun Line, and its results agreed reasonably well with the measured data. However, the formula is not suitable for analyzing single-pile settlement, so it also has certain limitations.

Most of the forecasting methods mentioned above rely on a fixed knowledge framework and can only be adopted under certain preconditions, so they are rigid and not flexible enough. Consequently, an intelligent technology that can handle various problems flexibly and has a self-learning capability is needed. Machine learning, a technology in which computers build models from data to simulate human activities, meets this requirement. It possesses strong generalization ability and has been applied in many areas. Ahmadi et al. [8] used machine learning methods to predict, among other quantities, the solubility of carbon dioxide (CO2) in brines, the porosity and permeability of petroleum reservoirs, the dissolved calcium carbonate concentration in oil field brines, the condensate-to-gas ratio in retrograde condensate gas reservoirs, and the solubility of hydrogen sulfide (H2S) in ionic liquids. The prediction of pile foundation depth is a highly nonlinear problem and can also be analyzed with this approach. The artificial neural network (ANN) is a kind of machine learning; it is a powerful intelligent learning tool with functions [9, 10] such as mapping nonlinear relations, information processing, optimization calculation, classification, and recognition. It has been widely applied in model prediction, content prediction, cost control, fault diagnosis, information processing, construction engineering, mechanical engineering, medicine, and other fields, with good results. Amedi et al. [11] built an ANN model based on the critical pressure (Pc), critical temperature (Tc), and molecular weight of pure ionic liquids to predict the solubility of hydrogen sulfide (H2S) over different temperature, pressure, and concentration ranges. Moosavi et al. established an ANN model based on 214 data records of published CO2-foam injection tests on oil-reservoir cores to predict CO2-foam flooding performance for improving oil recovery. Shang et al. [12] proposed two ANN models to analyze the content and types of heavy metals in soil based on the complex permittivity of the material, so as to determine whether the soil is contaminated: the first model confirms the presence of heavy metals in the site, and the second classifies them. Alias et al. [13] estimated the cost of the structural system of a building project with an ANN model. Sorsa et al. [14] detected and diagnosed faults in a testing process based on three different ANN structures. Jin et al. [15] established an ANN model with water-cement ratio, specimen shape, and section size as input parameters based on the size effect of concrete compressive strength and predicted the compressive strength for different section sizes. Suzuki [16] found it feasible to apply an improved ANN to reduce false positives in computerized detection of pulmonary nodules in low-dose computed tomography images, with good results.

The emergence of ANNs provides a more convenient and intelligent prediction approach for many fields. It no longer requires a large amount of statistical data to forecast future trends but can achieve good prediction accuracy from limited data. Therefore, in this paper, the highly nonlinear problem of predicting pile foundation buried depth is addressed with an ANN.

Among ANNs, the backpropagation (BP) neural network is the most widely used; it is a multilayer feedforward neural network trained with the error backpropagation algorithm and has a certain capacity for generalization. Mathematical analysis has demonstrated its ability to approximate nonlinear mappings and thus to capture complex internal mechanisms [17]. However, the traditional BP network still has several shortcomings. (1) Optimizing the objective function by gradient descent is a slow process, and when the output of a neuron lies close to 0 or 1, the weight error changes only within a small range; training then almost stalls, so the efficiency of the BP neural network is low and its convergence is slow [18]. (2) From a mathematical point of view, the BP algorithm mainly searches a local area and therefore easily falls into a local extremum; the resulting training curve is almost a straight line, and the training of the network fails [19]. (3) The prediction ability of the network is proportional to its learning ability within a certain range, but beyond this range the prediction ability declines as the learning ability improves, which leads to "overfitting." In this case, even if the network has learned a large sample, it cannot correctly reflect the underlying rules [20]. In summary, a BP neural network used alone for prediction has clear deficiencies, so it is necessary to apply optimization algorithms to the BP network and establish a more accurate training model.

In recent years, numerous swarm intelligence optimization algorithms have emerged. This class of algorithms has attracted great attention from scholars because it is simple and efficient compared with other optimization algorithms [21], and such algorithms have been widely adopted in various fields. Swarm intelligence algorithms exploit group behaviors among animals or individuals in a society, such as interaction, heredity, variation, and cooperation, to search for the optimal solution.

The fireworks algorithm (FWA) is a new swarm intelligence optimization algorithm proposed by Tan et al. [22] in recent years, which simulates the explosion and diffusion of fireworks through an explosion operator. It introduces the concentration-suppression idea of the immune algorithm and a distributed information-sharing mechanism, giving it strong global search capability [23]. Compared with traditional algorithms, the population of FWA is more diverse, and these characteristics have attracted the attention of many scholars. However, FWA still has several shortcomings. For example, when the explosion range is large and there are many explosion operators, the points generated by the explosions overlap, resulting in redundant searches. This greatly reduces the optimization efficiency of FWA and is the main cause of its slower convergence and lower search accuracy [24].

The particle swarm optimization (PSO) algorithm [25], one of the most classic optimization algorithms, is inspired by the foraging behavior of birds. It seeks the optimal value among the stochastic solutions of the particle swarm through constant iteration. Compared with FWA, it has the advantages of simple operation and fewer parameters to be adjusted [26]. Ahmadi et al. [27] used a neural network model optimized by the PSO algorithm to predict asphaltene precipitation due to natural depletion. Wang et al. [28] predicted the mechanical properties of hot-rolled strip steel in material processing based on a PSO-BP model. Likewise, a PSO-BP model was applied by Ismail et al. [29] in the field of soil-structure interaction to predict the load-deformation characteristics of axially loaded piles. Shafiei et al. [30] predicted the solubility of hydrogen sulfide over different temperature, pressure, and concentration ranges in the same way. Ahmadi et al. used a PSO-ANN model to predict the dew point pressure of condensate gas reservoirs, and in another paper the same authors estimated the efficiency of chemical flooding in oil reservoirs. Although these studies achieved relatively good prediction results, the PSO algorithm still has many problems, and it has been proved not to be a globally convergent algorithm [31]. It also suffers from premature convergence, lack of dynamic velocity adjustment, a tendency to fall into local extrema, insufficient randomness in particle position changes, an inability to handle discrete and combinatorial optimization effectively, and a limited search space [32]. From the perspective of dynamics, there is a point with a potential field in the search area that attracts the particle swarm, causing the surrounding particles to approach it constantly; when the velocity decreases to 0, the particles converge to this point. Therefore, the motion of each particle in the traditional PSO algorithm follows a fixed orbit, the velocity of the particle is always finite, and the search area for feasible solutions is small [33]. To improve the global optimization capability of PSO, the algorithm needs to be modified; as a result, the quantum-behaved particle swarm optimization (QPSO) algorithm was proposed by Sun [34].

Based on the traditional PSO algorithm, the QPSO algorithm randomizes the velocity of the particle. In quantum space, the state of a particle is no longer represented by position and velocity vectors but by a wave function. Within the feasible region, the probability of a particle appearing at a given position is therefore random, and the particle no longer moves along a fixed orbit; its position at the next iteration has no deterministic relation to the previous position. In other words, the search can cover the whole feasible solution region, which improves the global optimization performance of the particles. Chen et al. [35] took the gear reducer of a belt conveyor as the research object and optimized parameters such as the module, tooth width coefficient, and helix angle of the reducer with the QPSO and PSO algorithms; the results showed that the optimization effect of QPSO was clearly better than that of PSO. Genetic algorithm (GA) with ANN, PSO, and QPSO algorithms were used by Lu et al. [36] to estimate the parameters of a batch fermentation kinetic model, and the results demonstrated that QPSO outperformed the other algorithms in all respects. Therefore, on the basis of the BP neural network, this paper uses the QPSO algorithm to optimize the BP model and then predicts the buried depth of the pile foundation. Finally, three error measures, RMSE, MAE, and MAPE, are employed to analyze its reliability and uncertainty.

Based on the above, this paper makes the following contributions. (1) The ANN approach from machine learning was used to predict the buried depth of pile foundations; since there has been very little research on this topic, the method can be applied as a new tool in practical engineering. (2) Pile samples were collected on site from an engineering example, and the relevant parameters of the piles in this area were sorted and summarized, namely, the X-coordinate, Y-coordinate, Z-coordinate, thickness of miscellaneous fill h1, thickness of silty clay h2, thickness of silt h3, thickness of fine sand h4, and pile buried depth H; some samples were selected to train the models. (3) The steps of optimizing the BP neural network with the QPSO algorithm and then predicting the test objects are described in detail. (4) The QPSO-optimized BP neural network was trained and then used to predict the remaining samples from step (2). The strong global optimization ability of the QPSO algorithm compensates for the tendency of the traditional PSO algorithm to fall into local extrema, and the prediction results were very close to the measured results, indicating that the method achieves a good prediction effect on the research objects. (5) The errors of the QPSO algorithm were compared with those of other robust models, namely, the PSO algorithm, FWA, and the BP neural network; the results showed that QPSO has higher prediction accuracy.

The remainder of this paper is organized as follows. Section 2 introduces the training parameters based on the project example. Section 3 describes the concept of the BP neural network and the optimization methods of the PSO and QPSO algorithms. Section 4 presents the error analysis after using different algorithms to optimize the BP neural network for prediction. Section 5 gives the conclusions drawn from the above work and the analysis of the predicted results. This paper also outlines how to apply the method to engineering examples with similar soil properties.

2. Project Example

2.1. Project Profile

The project is located on the East Campus of Yangtze University in Jingzhou District, Jingzhou City, Hubei Province, where a dormitory and a canteen were to be built. According to the design documents, the investigation site and the pile location layout are shown in Figures 3–5 below.

The distribution of boreholes and piles can be obtained from the figures. Each long black dotted line, such as "11-11′," represents a section (here, section 11-11′) through boreholes K64 to K67. From the section drawings, the soil stratification at each borehole from K64 to K67 can be read. First, the piles located at the boreholes were selected as the data for the network model; the X-coordinate, Y-coordinate, and Z-coordinate of each pile were taken from the coordinate information in the layout drawings, and the depth of the bearing stratum and the buried depth of the pile were obtained from the section drawings. To make the selected data representative, 43 piles were randomly chosen as training samples and 10 piles as prediction samples from the investigated boreholes in the figure. As the geographical location changes, the fluctuation of the bearing stratum of the site also shows a certain trend. Driving a pile into the bearing stratum requires passing through the different soil layers above it, but the thickness of each soil layer at coordinates that were not investigated is unknown. Since the soil layer thicknesses differ, the depth of the bearing stratum changes from place to place, which in turn affects the buried depth of the pile. According to the field data, all piles were driven into the fine sand layer, indicating that the fine sand layer is the bearing stratum. The depth of a sample pile in the first soil layer (miscellaneous fill) is h1, in the second layer (silty clay) is h2, in the third layer (silt) is h3, and in the fourth layer (fine sand, the bearing stratum) is h4. H is the sum of h1, h2, h3, and h4 and represents the buried depth of the pile. The depth of a sample pile in each soil layer can be calculated by combining the geological profile with the pile buried depth H measured in the actual project. On this basis, the X-coordinate, Y-coordinate, and Z-coordinate of each pile were collected as input parameters for model training. In this training, predicting the fluctuation of the bearing stratum means predicting h4, and predicting the buried depth of the pile means predicting H; note that h4 is smaller than the full thickness of the fine sand layer. In addition, H = h1 + h2 + h3 + h4, and H and h4 are the output parameters. The schematic diagram is shown in Figure 6.
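
To make the data organization concrete, the following minimal sketch (in Python, with hypothetical variable names; it is not the authors' code) shows how one record from Table 1 could be assembled into the input vector (X, Y, Z) and the output vector (h4, H), with H computed as the sum of the four layer penetrations.

```python
import numpy as np

# Hypothetical illustration of one borehole-pile record from Table 1.
# Coordinates are site coordinates; layer penetrations h1-h4 are in metres.
record = {"X": 11.7510, "Y": 171.2880, "Z": 31.9300,
          "h1": 0.40, "h2": 12.60, "h3": 3.80, "h4": 6.60}

# Buried depth of the pile is the sum of the four layer penetrations.
H = record["h1"] + record["h2"] + record["h3"] + record["h4"]   # 23.40 m

# Feature vector (network input) and target vector (network output).
x_input = np.array([record["X"], record["Y"], record["Z"]])
y_output = np.array([record["h4"], H])
print(x_input, y_output)
```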

2.2. Geological Overview

The terrain of the site is relatively flat, with absolute ground elevations in the range of 31.5 m–32.88 m, and the site belongs to the first-grade terrace geomorphic unit on the north bank of the Yangtze River. There are no adverse geological processes such as landslides, soil collapse, or debris flow. According to the detailed investigation report, the strata within the investigated depth can be divided, by genetic type and sedimentary age, into an artificial fill layer, Quaternary Holocene alluvium, and Quaternary Upper Pleistocene alluvium and diluvium [37].

According to their properties and composition, the geotechnical layers are classified and distributed as follows:
(1) Artificial fill (Qml): miscellaneous fill, brown, moist, and loose. The main component is clay, containing a small amount of plant rhizomes. This layer is distributed over the whole field, and the soil uniformity is poor. The thickness is 0.4 m–1.7 m.
(2) Quaternary Holocene alluvium: silty clay, yellowish-brown to grayish-brown, soft to plastic, saturated, distributed over the whole field. This layer contains a small amount of ferromanganese nodules and has medium compressibility. The thickness is 4.5 m–14.8 m.
(3) Quaternary Holocene alluvium: silt, gray, slightly dense to medium dense, saturated, distributed over the whole field, medium compressibility. The thickness is 1.9 m–12.4 m.
(4) Quaternary Holocene alluvium: fine sand, gray, medium dense, saturated, distributed over the whole field, mainly composed of quartz and feldspar, low compressibility. The thickness is 3.9 m–16.5 m.
(5) Quaternary Upper Pleistocene alluvium and diluvium: pebbles, gray, white, and other colors, medium dense to dense, low compressibility, distributed over the whole field. The main component is quartzite, with good roundness and poor sorting. The particle size is generally 3–5 cm, the larger particles exceed 7 cm, and particles larger than 2 cm account for about 51%. The filling material between the pebbles is fine silty sand.

It can be seen from the above data that the thickness and uniformity of each layer are greatly different.

3. Optimization Algorithms for Pile Depth Prediction

3.1. Implementation of BP Neural Network Algorithm

As the name suggests, the neural network is an artificial intelligence algorithm to simulate the human brain nervous system, which has a strong self-learning ability and can deal with complex nonlinear models [9, 10]. Through the connections of countless neurons, it can carry out huge parallel processing and analysis on the information of the previous input layer and then pass it to the next layer. A large amount of training can constantly update the weights of the neuronal connections in the front and rear layers, so as to achieve the goal of reducing error and meeting people’s expectations.

The X-coordinate, Y-coordinate, and Z-coordinate of each pile were used as input parameters for training the BP model, and the depth of the bearing stratum h4 and the buried depth of the pile H were the output parameters. The detailed process can be described in the following steps: (1) a training model based on the X-coordinate, Y-coordinate, Z-coordinate, h4, and H of the 43 training samples was established; (2) h4 and H of the 10 remaining samples were predicted; (3) the predicted output values were compared with the measured values; (4) the error between the predicted and measured values was analyzed. There is no explicit functional relationship between the input and output parameters; the mapping is nonlinear, so a three-layer network for nonlinear functions can meet the training requirements of the BP algorithm [17]. The diagram is shown in Figure 7.
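
As an illustration of steps (1)–(4), the sketch below assumes a scikit-learn implementation with the settings described later in this section (one hidden layer, sigmoid activation, gradient descent with an adaptive learning rate). The library choice, function names, and array shapes are assumptions for illustration rather than the authors' original implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# X_train: (43, 3) pile coordinates; Y_train: (43, 2) targets [h4, H].
# X_test:  (10, 3); Y_test: (10, 2) measured values used only for error checking.
def train_and_predict(X_train, Y_train, X_test, Y_test):
    x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()      # scale inputs/outputs to [0, 1]
    Xn, Yn = x_scaler.fit_transform(X_train), y_scaler.fit_transform(Y_train)

    # 3-7-2 feedforward network, sigmoid activation, gradient descent with an
    # adaptive learning rate (loosely mirroring the paper's 'traingdx' setting).
    net = MLPRegressor(hidden_layer_sizes=(7,), activation="logistic",
                       solver="sgd", learning_rate="adaptive",
                       learning_rate_init=0.001, max_iter=1000, random_state=0)
    net.fit(Xn, Yn)                                          # step (1): build the model

    Y_pred = y_scaler.inverse_transform(net.predict(x_scaler.transform(X_test)))  # step (2)
    rel_err = np.abs(Y_pred - Y_test) / Y_test * 100          # steps (3)-(4): relative error (%)
    return Y_pred, rel_err
```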

The S-type (logistic sigmoid) function shown in equation (1) is used as the activation function [38]:

f(x) = 1 / (1 + e^(−x)),  (1)

where xk is the input of the k-th input layer node, oj is the output of the j-th hidden layer node, yk is the output of the k-th output layer node, wij is the connection weight between input layer unit i and hidden layer unit j, and wjk is the connection weight between hidden layer unit j and output layer unit k. The numbers of neurons in the input layer, hidden layer, and output layer are n, m, and l, respectively.

The training process was as follows:
(1) Since the activation function of the neural network is a logarithmic S-type function, convergence problems may arise; that is, extremely large or small values can appear during the calculation. Therefore, the input data (X-, Y-, and Z-coordinates) and the output data (h4 and H) of the samples were normalized first so that their values varied from 0 to 1 (a normalization sketch follows this list).
(2) Values randomly generated in the interval [−1, 1] were assigned to the weights as initial values.
(3) The independent variables of the processed sample data were input at the corresponding nodes of the input layer, and the output values of the BP neural network were calculated at the corresponding nodes of the output layer through the weights and the activation function.
(4) The training outputs of the BP neural network were compared with the expected values, and the error between them was calculated.
(5) The error was propagated back from the output layer, the weights were corrected by the gradient method layer by layer until the first layer was reached, and then step (3) was repeated.
(6) Steps (1)–(5) were repeated until the error function satisfied equation (2):

E = (1/2) Σk (dk − yk)² ≤ ε,  (2)

where dk is the expected value at the k-th output node and ε is the prescribed error tolerance.
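
A minimal sketch of the min-max normalization in step (1) and its inverse is given below, assuming NumPy; the function names are illustrative only.

```python
import numpy as np

def minmax_normalize(data):
    """Scale each column of `data` to the interval [0, 1] (step (1) above)."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo), (lo, hi)

def minmax_denormalize(scaled, bounds):
    """Map network outputs back to physical units (metres)."""
    lo, hi = bounds
    return scaled * (hi - lo) + lo

# Example with the output columns [h4, H] of a few training piles.
Y = np.array([[6.60, 23.40], [5.90, 23.20], [7.60, 24.50]])
Yn, b = minmax_normalize(Y)
assert np.allclose(minmax_denormalize(Yn, b), Y)
```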

After the BP neural network had been trained according to the above steps, the trained network model could be used to predict the samples. Based on the measured data from the engineering project, 43 and 10 piles were selected as the training and test sets, respectively, and each pile was described by three input parameters.

A BP network with a single hidden layer was adopted in this paper. It is difficult to determine the number of neurons in the hidden layer, and this number affects the accuracy to a certain degree. If the number of neurons is too small, the network has almost no ability to learn; conversely, if the number is too large, the training time is extended and the network tends to fall into a local optimum, so reasonable predicted values cannot be obtained. In general, there are three methods to determine the number of hidden layer neurons [39]: (1) Gorman's rule, in which the number of neurons S and the number of input parameters N satisfy S = log2 N; (2) the Kolmogorov theorem, in which S = 2N + 1; (3) an empirical formula relating S to the number of input parameters N and output parameters M, S = sqrt(0.43MN + 0.12N² + 2.54M + 0.77N + 0.35) + 0.51. In this BP training, N = 3 and M = 2; thus, the values of S calculated by the three methods were 1.58, 7, and 3.45, respectively. Since the number of neurons must be an integer, the three candidate values 2, 4, and 7 were used for prediction, and their errors are compared in a later section.
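
The three heuristics can be evaluated directly; the small sketch below (with a hypothetical helper name, and assuming the formulas exactly as quoted above) returns the raw values, which round to the candidate hidden-layer sizes 2, 7, and 4 considered in this paper.

```python
import math

def hidden_layer_candidates(n_inputs, n_outputs):
    """Candidate hidden-layer sizes from the three heuristics quoted above."""
    s_gorman = math.log2(n_inputs)                        # S = log2(N)
    s_kolmogorov = 2 * n_inputs + 1                       # S = 2N + 1
    s_empirical = math.sqrt(0.43 * n_outputs * n_inputs   # S = sqrt(0.43MN + 0.12N^2
                            + 0.12 * n_inputs ** 2        #   + 2.54M + 0.77N + 0.35) + 0.51
                            + 2.54 * n_outputs
                            + 0.77 * n_inputs + 0.35) + 0.51
    return s_gorman, s_kolmogorov, s_empirical

print(hidden_layer_candidates(3, 2))   # rounds to the candidate sizes 2, 7, and 4
```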

The transfer functions of the hidden layer and the output layer were the S-type tangent function and the logarithmic S-type function, respectively. The network training function was "traingdx"; the gradient descent method was used during the learning process, and the learning rate was adaptive.

3.2. Implementation of PSO-BP Algorithm

The process of optimizing the BP algorithm with PSO is as follows [40]:
(1) First, the maximum number of iterations, the number of independent variables required by the objective function, the maximum particle velocity, and the position range of the particles (the search space) were set. The velocities and position coordinates were initialized randomly within the velocity interval and the search space, and each particle was given an initial random flight velocity.
(2) The fitness function was defined. Each particle has an individual extremum, which is the optimal solution found by that particle. A global optimum was then selected from the optimal solutions of all particles and was updated after comparison with the historical global optimum.
(3) The velocity and position were updated according to equations (3) and (4), respectively [41]:

v_id = ω·v_id + C1·random(0, 1)·(P_id − x_id) + C2·random(0, 1)·(P_gd − x_id),  (3)
x_id = x_id + v_id,  (4)

where ω is the inertia factor, a nonnegative value. When ω is large, the ability to find the global optimum is strong but the ability to refine the local optimum is weak; when ω is small, the opposite holds. Therefore, the balance between global and local search can be adjusted through ω. C1 and C2 are learning factors, and current studies [42] have shown that a better solution can be obtained when C1 and C2 are constants; their values lie in [0, 4] and are usually taken as 2. random(0, 1) is a random value on the interval [0, 1]. P_id is the individual extremum of the i-th particle in dimension d, P_gd is the global optimal solution in dimension d, and v_id and x_id are the velocity and position of the i-th particle in dimension d.
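
The updates in equations (3) and (4) can be written compactly; the following sketch (assuming NumPy, with an illustrative velocity clamp v_max) performs one PSO iteration for the whole swarm.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=1.0):
    """One iteration of the velocity/position updates in equations (3) and (4).

    X, V, pbest : arrays of shape (n_particles, n_dims)
    gbest       : array of shape (n_dims,)
    """
    r1 = np.random.rand(*X.shape)          # random(0, 1) for the cognitive term
    r2 = np.random.rand(*X.shape)          # random(0, 1) for the social term
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    V = np.clip(V, -v_max, v_max)          # limit the particle velocity
    return X + V, V
```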

The weights and thresholds optimized by PSO can be assigned as initial values for the training and prediction of the BP algorithm [40]. The detailed process is shown in Figure 8.

The PSO-BP algorithm can be realized in two ways. (1) The powerful global search ability of the PSO algorithm is combined with the local search ability of the BP neural network: the global search of PSO is used to determine the topology, weights, and thresholds of the BP neural network, so as to improve its generalization, training ability, and overall search performance. (2) The BP algorithm is embedded in the PSO algorithm, and the optimization performance of PSO is improved through the strong training and learning ability of the neural network, which reduces the required workload and accelerates the convergence of PSO. In this paper, the first method was adopted: the optimal initial weights and thresholds were obtained through the PSO algorithm and assigned to the BP network to improve its efficiency and accuracy, as sketched below.
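
A minimal sketch of this first method, assuming a 3-7-2 network and NumPy: each particle position is a flat vector that encodes all BP weights and thresholds, and the fitness is the mean squared training error of the decoded network. The helper names and the assumption that the targets are normalized to [0, 1] are illustrative only.

```python
import numpy as np

def unpack(theta, n_in=3, n_hid=7, n_out=2):
    """Split a flat particle position into BP weights and thresholds (3-7-2 network)."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid];                             i += n_hid
    W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2

def fitness(theta, X, Y):
    """Fitness of one particle: mean squared prediction error of the encoded network.

    X: normalized inputs, shape (n_samples, 3); Y: normalized targets, shape (n_samples, 2).
    """
    W1, b1, W2, b2 = unpack(theta)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    Y_hat = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return np.mean((Y_hat - Y) ** 2)

# After the swarm converges, the global best position gbest is decoded with
# unpack(gbest) and used as the initial weights/thresholds of the BP network,
# which is then refined by ordinary gradient-descent training.
```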

However, in the PSO algorithm, the particle converges along an orbit and its maximum velocity is always a finite value, so the search area of PSO is limited: it cannot be guaranteed to cover the whole feasible space, and global convergence is affected [33].

3.3. Implementation of QPSO-BP Algorithm

Based on the traditional PSO algorithm, the QPSO algorithm randomizes the velocity of the particle. In quantum space, the state of a particle is represented not by position and velocity vectors but by a wave function. Owing to the uncertainty principle, the probability of a particle appearing at a position x is expressed by a probability density function, and the particle no longer moves along a fixed orbit; as a result, its position has no deterministic relationship with the previous position [43]. The evolution of each dimension of the particle state is given by equations (5) and (6):

P_id = φ_id·pbest_id + (1 − φ_id)·gbest_d,  (5)
X_id = P_id ± (L_id/2)·ln(1/u_id),  (6)

where P_id is the attractor of the i-th particle in the evolutionary iteration, pbest_id and gbest_d are the individual and global optimal positions, X_id is the current position of the i-th particle, φ_id and u_id are uniformly distributed random numbers on [0, 1], and L_id is the characteristic length of the attractor's potential well, which describes the search range of a particle. L_id is given by equation (7) [43]:

L_id = 2β·|mbest_d − X_id|,  (7)

where mbest is the mean best position of the swarm and β is the compression-expansion factor.

By substituting equation (7) into equation (6), the iterative equation (8) of quantum swarm evolution is obtained:

X_id = P_id ± β·|mbest_d − X_id|·ln(1/u_id).  (8)

The size of the particle swarm is set as M. The specific implementation steps are as follows: (1) initialize the particle swarm and set the maximum number of iterations; (2) determine and initialize the individual optimal extremum of each particle and the global optimal extremum of the swarm; (3) calculate the fitness value of each particle; (4) update the individual optimal extremum of each particle and the global optimal extremum of the swarm; (5) calculate the new positions of the particles according to equation (8) and update the swarm; (6) repeat steps (2)–(5) until the fitness values of the particle swarm meet the convergence condition. The position update of step (5) is sketched below.
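
A minimal sketch of the position update in step (5), assuming NumPy; the fixed value of the compression-expansion factor β is illustrative (it is often decreased over the iterations).

```python
import numpy as np

def qpso_step(X, pbest, gbest, beta=0.75):
    """One QPSO position update following equation (8).

    X, pbest : arrays of shape (M, D); gbest : shape (D,);
    beta is the compression-expansion factor.
    """
    M, D = X.shape
    phi = np.random.rand(M, D)
    u = np.random.rand(M, D)
    P = phi * pbest + (1.0 - phi) * gbest                    # attractor, equation (5)
    mbest = pbest.mean(axis=0)                               # mean best position of the swarm
    sign = np.where(np.random.rand(M, D) < 0.5, 1.0, -1.0)   # random ± in equation (6)
    return P + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)   # equation (8)
```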

The above is the principle and realization of QPSO, and the method of using the QPSO algorithm to optimize the BP neural network is similar to that of PSO. The explanation in Section 3.2 above can be used as a reference.

4. Analysis of Results

4.1. Prediction Results of Different Models

The 43 groups of original training data are shown in Table 1; they form the database used to predict the other 10 groups of samples to be evaluated. The X-coordinate, Y-coordinate, and Z-coordinate are the input parameters, and h4 and H are the output parameters.


Table 1: The original 43 groups of training data.

No. | Pile number | Borehole number | X-coordinate | Y-coordinate | Z-coordinate | h1 (m) | h2 (m) | h3 (m) | h4 (m) | H (m)
1 | 3 | K42 | 11.7510 | 171.2880 | 31.9300 | 0.40 | 12.60 | 3.80 | 6.60 | 23.40
2 | 12 | K43 | 32.2510 | 174.2880 | 31.7000 | 0.70 | 12.10 | 4.50 | 5.90 | 23.20
3 | 27 | K44 | 59.6510 | 174.2880 | 31.9600 | 1.30 | 12.60 | 3.10 | 6.70 | 23.70
4 | 41 | K45 | 82.4510 | 175.6380 | 31.9200 | 0.80 | 14.20 | 2.70 | 6.90 | 24.60
5 | 51 | K46 | 91.7000 | 164.9380 | 31.9700 | 0.90 | 12.70 | 3.30 | 7.60 | 24.50
… | … | … | … | … | … | … | … | … | … | …
39 | 15 | K66 | 53.0730 | 246.5240 | 31.7400 | 1.40 | 12.30 | 2.70 | 6.80 | 23.20
40 | 160 | K56 | 12.7330 | 204.0270 | 31.6100 | 1.30 | 9.00 | 5.50 | 7.60 | 23.40
41 | 151 | K57 | 36.2730 | 204.9410 | 31.5900 | 1.30 | 11.90 | 3.00 | 7.10 | 23.30
42 | 148 | K58 | 53.0730 | 203.5410 | 31.8600 | 1.20 | 13.30 | 2.10 | 6.80 | 23.40
43 | 139 | K59 | 73.5230 | 204.9410 | 31.9400 | 1.30 | 4.50 | 9.90 | 7.70 | 23.40

The original 10 groups of data to be evaluated are shown in Table 2.


Table 2: The original 10 groups of data to be evaluated.

No. | Pile number | Borehole number | X-coordinate | Y-coordinate | Z-coordinate | h1 (m) | h2 (m) | h3 (m) | h4 (m) | H (m)
1 | 314 | K32 | 14.4510 | 136.7880 | 31.9800 | 0.50 | 11.00 | 4.90 | 8.10 | 24.50
2 | 350 | K33 | 25.0510 | 124.4880 | 31.8700 | 1.20 | 10.60 | 5.20 | 7.40 | 24.40
… | … | … | … | … | … | … | … | … | … | …
9 | 306 | K29 | 97.5713 | 103.1992 | 32.0300 | 0.40 | 14.30 | 3.90 | 5.90 | 24.50
10 | 269 | K28 | 74.0510 | 102.5880 | 31.9100 | 1.10 | 13.10 | 5.00 | 5.20 | 24.40

The first results are those of the BP neural network. As mentioned above, the numbers of hidden layer neurons calculated by the three different methods were 2, 4, and 7. Therefore, three different prediction results and error comparisons for h4 and H, corresponding to the different numbers of hidden layer neurons, were obtained, as shown in Figures 9–12.

It can be seen from these figures that the number of hidden layer neurons affects the forecast results. When the number of neurons in the hidden layer was 7, the errors of the BP neural network in predicting h4 and H were smaller than those for the other two settings, which indicates that the second method mentioned in Section 3.1, the Kolmogorov theorem, gave the best forecast. In addition, regardless of the number of neurons, the errors between the BP predictions and the actual values were still very large: for h4, the maximum error was about 41% with 2 neurons and about 31% with 7 neurons. Moreover, Figures 9 and 10 show that the prediction curves of the BP neural network were all relatively flat, with an almost straight-line trend, confirming that the BP network easily falls into a local optimum. Therefore, to compensate for the lack of global search capability of the BP network, the QPSO algorithm was employed to optimize the BP model with 7 neurons in the hidden layer.

A linear fitting formula for predicting h4 and H was also calculated. The results showed that the deviation between the predicted values and the actual values reached 58.8%, which further demonstrates the necessity of using the QPSO-BP algorithm. To observe the error comparison between different algorithms intuitively, the relative error defined in equation (9) was adopted to compare the accuracy of QPSO-BP, PSO-BP, and FWA-BP:

δ = |xp − xa| / xa × 100%,  (9)

where xp is the predicted value and xa is the actual value.

The parameter settings of the BP neural network and the QPSO algorithm were as follows: the number of iterations of the BP network was 1000, the training goal was 0.01, the learning rate was 0.001, the population size of the QPSO algorithm was 20, the dimension of the QPSO algorithm was 30, and the iteration termination error of QPSO was 10⁻⁷. Compared with other models, QPSO is simple to operate and has fewer parameters to set.

The predicted curves of different models are shown in Figures 13 and 14. The predicted error curves of different models are shown in Figures 15 and 16.

According to Figures 15 and 16, the relative error of QPSO-BP was the smallest compared with those of PSO-BP, FWA-BP, and linear fitting. In predicting h4, the minimum relative error was 9.4%, the maximum was only 14.7%, and the errors were basically around 11%; in predicting H, the maximum relative error was merely 2.9%, which confirms the strong prediction accuracy of QPSO. Furthermore, the prediction curve of QPSO-BP fluctuated with the measured values rather than being an almost flat straight line like that of the BP network. This shows that QPSO successfully compensates for the BP network's lack of global search ability and does not easily fall into the loop of searching for a local optimum. After comparison and analysis, the accuracy of the models in descending order was QPSO-BP > PSO-BP > FWA-BP > linear fitting.

Figure 17 shows the convergence curves of QPSO-BP, PSO-BP, and FWA-BP. The fitness curve of QPSO-BP became very smooth and tended to a fixed value after about 25 iterations, and the program stopped when the number of iterations reached about 143, indicating that the optimal value of QPSO-BP had been found. In addition, comparison with the iterative curves of the other two algorithms shows that QPSO had the fastest decline in the early stage and was the first of the three models to converge, demonstrating its fast search and convergence capability.

These results indicate that it is feasible to use the QPSO-BP method in machine learning to predict the buried depth of pile foundations and the fluctuation of the bearing stratum. Particle swarm optimization is already a relatively mature optimization algorithm compared with many others and is not difficult to implement. QPSO-BP has the advantages of fast search and convergence, simple operation, and high precision; therefore, it is reasonable to apply this algorithm in this paper.

4.2. Error Analysis of Different Models

To further verify the prediction accuracy of QPSO, three statistical tests were used [44–46], namely, RMSE, MAE, and MAPE. The three formulas are given in equations (10)–(12), and the error comparison of the different algorithms is listed in Table 3.


Table 3: Error comparison of the different algorithms.

Model | Output | RMSE | MAE | MAPE (%)
BP | h4 | 1.317045 | 1.218072 | 21.6218
BP | H | 3.159338 | 3.047166 | 12.4648
FWA-BP | h4 | 1.403003 | 1.253222 | 20.5951
FWA-BP | H | 1.943137 | 1.739423 | 7.1161
PSO-BP | h4 | 1.030985 | 0.920521 | 15.1887
PSO-BP | H | 0.998472 | 0.93352 | 3.819
QPSO-BP | h4 | 0.662284 | 0.65193 | 11.0622
QPSO-BP | H | 0.478191 | 0.423976 | 1.7347

RMSE is the root mean square error, the square root of the mean squared deviation between the actual and predicted values over the test set. The smaller the RMSE, the smaller the error of the model and the higher its accuracy; when the actual values are completely consistent with the predicted values, the model is perfect:

RMSE = sqrt((1/n) Σi (xai − xpi)²),  (10)

where xai is the actual value, xpi is the predicted value, and n is the number of test samples.

MAE is the mean absolute error, the average of the absolute deviations between the predicted values and the actual values. The smaller the MAE, the smaller the error of the model and the higher its accuracy; as with RMSE, the model is perfect when the actual values are exactly the same as the predicted values:

MAE = (1/n) Σi |xai − xpi|.  (11)

MAPE is the mean absolute percentage error, which measures the average relative error between the predicted values and the actual values on the test set [40]. The smaller the MAPE, the smaller the error of the model and the higher its accuracy; as with RMSE and MAE, the model is ideal when the actual values are consistent with the predicted values:

MAPE = (100%/n) Σi |(xai − xpi) / xai|.  (12)
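
For reference, the three statistics can be computed directly from the measured and predicted values; the sketch below (assuming NumPy, with hypothetical array names) implements equations (10)–(12).

```python
import numpy as np

def rmse(actual, predicted):
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))   # eq. (10)

def mae(actual, predicted):
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))           # eq. (11)

def mape(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0                # eq. (12), in %

# Hypothetical usage with measured and predicted H values of the test piles:
# print(rmse(H_meas, H_pred), mae(H_meas, H_pred), mape(H_meas, H_pred))
```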

According to the statistical tests, the RMSE of QPSO-BP was the smallest, only about 0.48, that of PSO-BP was about 0.99, and that of FWA-BP was about 1.94, while the RMSE of BP reached 3.16. Similarly, the MAE and MAPE values of QPSO were the smallest among the four models, confirming the checking calculation above. Furthermore, comparing the prediction results for h4 and H separately, the accuracy of the models followed the order QPSO-BP > PSO-BP > FWA-BP > BP, echoing the earlier results.

Because of its high accuracy, fast convergence, and few parameters, QPSO successfully demonstrates its advantages in practical application.

4.3. Application of QPSO-BP in Practical Engineering

In a practical project whose geological conditions are similar to those described in Section 2.2, the method proposed in this paper can be adopted to predict the distribution of the bearing stratum and the buried depth of the pile foundation. The specific implementation steps are as follows: (1) the most critical step is for professional personnel to conduct a geotechnical investigation, which determines whether the soil properties of the area meet the prediction conditions; (2) the designers determine the specific location of each pile foundation in AutoCAD based on the pile foundation design drawings and the geotechnical investigation report, and then sort out the X-coordinate, Y-coordinate, and Z-coordinate, which must be referenced to the geodetic origin; (3) the QPSO-BP model is employed to predict the target indicators with a method similar to that used in this paper; (4) according to the prediction results, the lengths of the precast piles in areas with different bearing-stratum depths are determined, the pile numbers and corresponding coordinates are recorded, and finally the piles are driven into the soil one by one in the actual project.

5. Conclusion

It is of great significance to determine the fluctuation of the bearing stratum and the buried depth of the pile foundation before construction, which can effectively reduce the project cost and avoid unnecessary losses. The QPSO-BP model was adopted to deal with this highly nonlinear problem, and the specific work completed in this paper is summarized as follows:
(1) Based on an engineering example, the BP network model was used to predict the fluctuation of the bearing stratum and the buried depth of the pile foundation. When the number of hidden layer neurons S and the number of input parameters N satisfied S = 2N + 1, the prediction error was the smallest.
(2) The prediction results indicated that the BP network easily falls into a local optimum, and the error between the predicted and actual values was quite large: the maximum error reached about 41% with 2 neurons and 31% with 7 neurons. Although the BP predictions could serve as a reference, the algorithm still had shortcomings and needed to be optimized.
(3) The QPSO algorithm was adopted to optimize the BP network, and the resulting QPSO-BP model was no longer trapped in the loop of searching for a local optimum. The relative error was merely 9.4% in predicting h4 and 2.9% in predicting H. Two other optimization algorithms (FWA and PSO) were also used to optimize the BP model, and the results demonstrated the high accuracy of QPSO-BP; its error was the smallest of the three.
(4) Three statistical tests (RMSE, MAE, and MAPE) were further employed to evaluate the accuracy of the models. The calculation results were consistent with the above, and the accuracy followed the order QPSO-BP > PSO-BP > FWA-BP.
(5) All the evidence demonstrates the superiority of the QPSO-BP model in engineering applications.

Data Availability

The case analysis data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was supported by the Science and Technology Project of Jingzhou (2019AC27), China.

References

1. M. F. Randolph, "Science and empiricism in pile foundation design," Géotechnique, vol. 53, no. 10, pp. 847–876, 2003.
2. J. R. Meyer, Analysis and Design of Pile Foundations, ASCE, Reston, VA, USA, 2015.
3. Z. M. Zhang, "Achievements and problems of geotechnical engineering investigation in China," Journal of Zhejiang University-Science, vol. 12, no. 2, pp. 87–102, 2011.
4. W. Fleming, A. J. Weltman, M. F. Randolph, and K. Elson, Piling Engineering, CRC Press, Boca Raton, FL, USA, 3rd edition, 2009.
5. K. J. Qi, M. J. Xu, and J. M. Zai, "Gray prediction of ultimate bearing capacity of single pile," Chinese Journal of Rock Mechanics and Engineering, vol. 23, no. 12, p. 2069, 2004.
6. X. J. Gao and X. R. Zhu, "Forecasting ultimate bearing capacity of single squeezed branch pile by hyperbola method," Rock and Soil Mechanics, vol. 27, no. 9, pp. 1596–1600, 2006.
7. Y. S. Deng, W. M. Gong, and A. M. Yuan, "Research on calculating methods for settlement of extra-long large-diameter pile group," Journal of the China Railway Society, vol. 29, no. 4, pp. 87–90, 2007.
8. M. A. Ahmadi, "Applying a sophisticated approach to predict CO2 solubility in brines: application to CO2 sequestration," International Journal of Low Carbon Technologies, vol. 11, no. 3, pp. 325–332, 2016.
9. E. Másson and Y. J. Wang, "Introduction to computation and learning in artificial neural networks," European Journal of Operational Research, vol. 47, no. 1, pp. 1–28, 2007.
10. M. D. Himmelblau, "Accounts of experiences in the application of artificial neural networks in chemical engineering," Industrial & Engineering Chemistry Research, vol. 47, no. 16, pp. 5782–5796, 2008.
11. H. R. Amedi, A. Baghban, M. A. Ahmadi et al., "Evolving machine learning models to predict hydrogen sulfide solubility in the presence of various ionic liquids," Journal of Molecular Liquids, vol. 216, pp. 411–422, 2016.
12. J. Q. Shang, W. Ding, R. K. Rowe, and L. Josic, "Detecting heavy metal contamination in soil using complex permittivity and artificial neural networks," Canadian Geotechnical Journal, vol. 41, no. 6, pp. 1054–1067, 2004.
13. M. Alias, R. Dhanya, and G. Ramasamy, "Study on factors affecting the performance of construction projects and developing a cost prediction model using," Annals of Forestry, vol. 8, pp. 2189–2194, 2015.
14. T. Sorsa and H. N. Koivo, "Application of artificial neural networks in process fault diagnosis," IFAC Proceedings Volumes, vol. 24, no. 6, pp. 423–428, 1991.
15. L. Jin, R. Zhao, and X. L. Du, "Neural network prediction model for size effect of concrete compressive strength," Journal of Beijing University of Technology, vol. 47, no. 3, pp. 260–268, 2021.
16. K. Suzuki, S. G. Armato, F. Li, S. Sone, and K. Doi, "Massive training artificial neural network (MTANN) for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography," Medical Physics, vol. 30, no. 7, pp. 1602–1617, 2003.
17. S. Ding, H. Li, C. Su, J. Yu, and F. Jin, "Evolutionary artificial neural networks: a review," Artificial Intelligence Review, vol. 39, no. 3, pp. 251–260, 2013.
18. T. He, S. Zheng, P. Zhang, and M. Zou, "Input values function for improving generalization capability of BP neural network," in Proceedings of the Asia-Pacific Conference on Wearable Computing Systems, pp. 228–231, Shenzhen, China, April 2010.
19. H. Ai and S. Guo, "Bridge health evaluation system based on the optimal BP neural network," International Journal of Control & Automation, vol. 7, no. 1, pp. 331–338, 2014.
20. J. H. Wang, J. H. Jiang, and R. Q. Yu, "Robust back propagation algorithm as a chemometric tool to prevent the overfitting to outliers," Chemometrics and Intelligent Laboratory Systems, vol. 34, no. 1, pp. 109–115, 1996.
21. C. C. Shi, Y. Y. Zeng, and S. M. Hou, "Application of swarm intelligence algorithm in image segmentation," Computer Engineering and Applications, vol. 57, no. 8, pp. 36–47, 2021.
22. Y. Tan and Y. C. Zhu, "Fireworks algorithm for optimization," in Advances in Swarm Intelligence, Lecture Notes in Computer Science, vol. 6145, pp. 355–364, 2010.
23. Y. Tan, FWA Application on Non-Negative Matrix Factorization, Springer, Berlin, Germany, 2015.
24. S. Zheng, A. Janecek, and Y. Tan, "Enhanced fireworks algorithm," in Proceedings of the IEEE Congress on Evolutionary Computation, IEEE, Cancun, Mexico, June 2013.
25. J. Kennedy, "Particle swarm optimization," in Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, November 1995.
26. A. Khare and S. Rangnekar, "A review of particle swarm optimization and its applications in solar photovoltaic system," Applied Soft Computing Journal, vol. 13, no. 5, pp. 2997–3006, 2013.
27. M. A. Ahmadi and S. R. Shadizadeh, "New approach for prediction of asphaltene precipitation due to natural depletion by using evolutionary algorithm concept," Fuel, vol. 102, 2012.
28. P. Wang, Z. Y. Huang, M. Y. Zhang, and X.-W. Zhao, "Mechanical property prediction of strip model based on PSO-BP neural network," Journal of Iron & Steel Research International, vol. 15, no. 3, pp. 87–91, 2008.
29. A. Ismail, D.-S. Jeng, and L. L. Zhang, "An optimised product-unit neural network with a novel PSO-BP hybrid training algorithm: applications to load-deformation analysis of axially loaded piles," Engineering Applications of Artificial Intelligence, vol. 26, no. 10, pp. 2305–2314, 2013.
30. A. Shafiei, M. A. Ahmadi, S. H. Zaheri et al., "Estimating hydrogen sulfide solubility in ionic liquids using a machine learning approach," The Journal of Supercritical Fluids, vol. 95, pp. 525–534, 2014.
31. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95 - International Conference on Neural Networks, pp. 1942–1948, Perth, WA, Australia, December 1995.
32. W. Fang, J. Sun, Y. Ding, X. Wu, and W. Xu, "A review of quantum-behaved particle swarm optimization," IETE Technical Review, vol. 27, no. 4, pp. 336–348, 2010.
33. J. Li, J. Q. Zhang, C. J. Jiang, and M. Zhou, "Composite particle swarm optimizer with historical memory for function optimization," IEEE Transactions on Cybernetics, vol. 45, no. 10, pp. 2350–2363, 2017.
34. J. Sun, B. Feng, and W. Xu, "Particle swarm optimization with particles having quantum behavior," in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), pp. 325–331, Portland, OR, USA, June 2004.
35. Y. T. Chen and Q. Zhang, "Optimization design and simulation of belt conveyor gear based on Levy flight quantum particle swarms," Machinery Design & Manufacture, vol. 4, no. 4, pp. 54–57, 2020.
36. K. Lu and R. Wang, "Application of PSO and QPSO algorithm to estimate parameters from kinetic model of glutamic acid batch fermentation," in Proceedings of the 2008 7th World Congress on Intelligent Control and Automation, pp. 8968–8971, Chongqing, China, June 2008.
37. Q. P. Zuo and L. Huang, "Evaluation factors of overburden thickness and site type in Jingzhou urban area," Resources Environment & Engineering, vol. 29, no. 6, pp. 940–944, 2015.
38. Z. B. Xu, R. Zhang, and W. F. Jing, "When does online BP training converge?" IEEE Transactions on Neural Networks, vol. 20, no. 10, pp. 1529–1539, 2009.
39. S. X. Xu and L. Chen, "A novel approach for determining the optimal number of hidden layer neurons for FNN's and its application in data mining," in Proceedings of the 5th International Conference on Information Technology and Applications (ICITA), pp. 683–686, Cairns, Australia, June 2008.
40. L. J. Liu, D. H. Liu, H. Wu, and X. Y. Wang, "The prediction of metro shield construction cost based on a backpropagation neural network improved by quantum particle swarm optimization," Advances in Civil Engineering, vol. 2020, Article ID 6692130, 15 pages, 2020.
41. Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), pp. 69–73, Anchorage, AK, USA, May 1998.
42. H. Li, Q. Zhang, and Y. Zhang, "Improvement and application of particle swarm optimization algorithm based on the parameters and the strategy of co-evolution," Applied Mathematics & Information Sciences, vol. 9, no. 3, pp. 1355–1364, 2015.
43. D. Z. Pan and Y. J. Chen, "Quantum-behaved particle swarm optimization algorithm based on reverse random-weighted mean best position," Journal of China West Normal University (Natural Sciences), vol. 33, no. 3, pp. 281–285, 2012.
44. M. Najafzadeh and G. A. Barani, "Comparison of group method of data handling based genetic programming and back propagation systems to predict scour depth around bridge piers," Scientia Iranica, vol. 18, no. 6, pp. 1207–1213, 2011.
45. M. Najafzadeh and F. Saberi-Movahed, "GMDH-GEP to predict free span expansion rates below pipelines under waves," Marine Georesources & Geotechnology, vol. 37, no. 3, pp. 1–18, 2018.
46. F. Saberi-Movahed, M. Najafzadeh, and A. Mehrpooya, "Receiving more accurate predictions for longitudinal dispersion coefficients in water pipelines: training group method of data handling using extreme learning machine conceptions," Water Resources Management, vol. 34, pp. 529–561, 2020.

Copyright © 2021 Fei Yin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
