Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2016 | Article ID 6510303 | 19 pages | https://doi.org/10.1155/2016/6510303

Particle Swarm Optimization with Double Learning Patterns

Academic Editor: Manuel Graña
Received: 16 Jul 2015
Revised: 11 Oct 2015
Accepted: 15 Oct 2015
Published: 27 Dec 2015

Abstract

Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO often suffers from premature convergence caused by the rapid loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. We then develop a PSO with double learning patterns (PSO-DLP), which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore in order to preserve swarm diversity, while those in the slave swarm learn from the global best particle to refine a promising solution. An interaction mechanism between the two swarms helps the slave swarm jump out of local optima and improves the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains promising performance and outperforms eight PSO variants.

1. Introduction

Particle Swarm Optimization (PSO) [1, 2], first proposed by Kennedy and Eberhart in 1995, was inspired by simplified social behaviors such as fish schooling and bird flocking. Like the genetic algorithm, it is a population-based algorithm, but it has no evolutionary operators such as crossover, mutation, or selection. PSO finds the global best solution by adjusting the trajectory of each particle not only towards its personal best position pbest but also towards the historically global best position gbest [3]. Recently, PSO has been successfully applied to optimization problems in many fields [4–7].

In the basic PSO [1], each particle in the swarm learns from pbest and gbest. During the evolutionary process, gbest is the only information shared by the whole swarm, which eventually drives all particles to converge to the same destination and causes diversity to be lost quickly. If gbest is a local optimum far from the global one, the swarm is easily trapped there. The learning mechanism of the basic PSO yields a fast convergence rate, but it easily leads to premature convergence when solving multimodal optimization problems. To overcome this problem, researchers have proposed many improvement strategies.

An adaptive strategy for the learning parameters [3, 8–18] is an effective way to improve PSO performance. Shi and Eberhart [8] proposed a linearly decreasing inertia weight (LDIW) to balance local and global search. Ratnaweera et al. [3] proposed time-varying acceleration coefficients (TVAC), which enhance the exploration ability of particles in the early evolutionary phase and improve their local search ability in the late phase. In [3], two variants of PSO-TVAC were developed, namely, PSO-TVAC with mutation (MPSO-TVAC) and the self-organizing hierarchical PSO-TVAC (HPSO). Zhan et al. [9] proposed an adaptive PSO in which the learning parameters are adjusted with the changing evolutionary state of the swarm. Kundu et al. [10] proposed nonlinearly time-varying acceleration coefficients and an aging guideline to avoid premature convergence; they also suggested a mean learning strategy to enhance the exploitation search.

To increase swarm diversity, auxiliary techniques have been introduced into the PSO framework, such as genetic operators [3, 12, 13, 19], differential evolution [20], and the artificial bee colony (ABC) [21, 22]. Mahmoodabadi et al. [22] combined a multicrossover operator with a bee colony mechanism to improve the exploration capability of PSO. In [9], an elitist learning strategy, similar to a mutation operation, was developed to help the gbest particle jump out of local optima.

The topological structure of the swarm has a significant effect on the performance of PSO [23–26]. Kennedy [23] pointed out that small neighborhoods suit complex problems, while large neighborhoods suit simple problems. Parsopoulos and Vrahatis [24] integrated the benefits of the global and local PSO models into a unified PSO (UPSO). Mendes et al. [25] proposed the fully informed PSO (FIPSO), in which the velocity update depends on the neighborhood of each particle instead of gbest and pbest. Bratton and Kennedy [26] proposed a standard version of PSO (SPSO) that employs a local ring topology. Experimental results indicated that the local model is more effective than the global model on many test problems.

The design of learning strategies improves the performance of PSO on complex multimodal problems [27–31]. In the basic PSO, each particle learns from gbest; hence, swarm diversity is easily lost early in the evolutionary process. Zhou et al. [27] developed a random position PSO (RPPSO): if a randomly generated number is smaller than the acceptance probability, a random position is used to guide the particle. Liang et al. [28] proposed the comprehensive learning PSO (CLPSO), in which each particle can select its own pbest or another particle's pbest as the learning exemplar according to a given probability. Li et al. [29] developed a self-learning PSO containing four learning strategies: exploitation, jumping out, exploration, and convergence. Huang et al. [30] proposed an example-based learning PSO (ELPSO) that uses multiple global best positions as elite examples to retain swarm diversity. Chen et al. [31] proposed a PSO with an aging leader and challengers (ALC-PSO), in which an aging mechanism promotes a suitable leader to lead the evolution of the swarm.

Multiswarm PSO (MS-PSO) [32–36] was developed to balance the exploration and exploitation searches. In a homogeneous MS-PSO, each swarm adopts a similar learning strategy; in a heterogeneous MS-PSO, by contrast, each swarm uses a different learning strategy to carry out a different search task. Niu et al. [32] presented a multiswarm cooperative optimizer (MCPSO) in which the population consists of a master swarm and several slave swarms: each slave swarm searches for better solutions independently, while the master swarm collects the best particles from the slaves to refine the global optimum. Sun and Li [33] presented a two-swarm cooperative PSO (TCPSO) in which one swarm concentrates around the local optimum to accelerate convergence, while the particles of the other are dispersed over the search interval to maintain diversity.

Local topological structures and dynamic exemplar strategies can preserve swarm diversity and thus efficiently prevent premature convergence, but their convergence rate is slow. The heterogeneous multiswarm method is powerful in balancing local and global search by exploiting the different learning strategies of its subswarms; designing those learning strategies, which directly influence the performance of the algorithm, is therefore crucial. To develop efficient learning strategies, this paper analyzes the motion behavior of the swarm based on the probability characteristics of the learning parameters and points out that these characteristics influence the search space of the particles. We then propose a PSO with double learning patterns (PSO-DLP) to improve both the convergence rate and the accuracy of PSO. PSO-DLP adopts a master swarm and a slave swarm to strike a balance between exploitation and exploration: the master swarm performs the exploration search, while the slave swarm carries out the exploitation search and accelerates convergence. The two swarms fulfill their tasks by adjusting the probability characteristics of their learning parameters. An interaction mechanism between the two swarms is developed, which helps the slave swarm escape premature convergence and improves the convergence precision of the master swarm. Experimental studies on 20 well-known benchmark functions show that PSO-DLP achieves promising performance in terms of both accuracy and convergence speed.

The rest of this paper is organized as follows. Section 2 describes the basic PSO. Section 3 presents the behavior analysis of the basic PSO. Section 4 presents the methodologies of PSO-DLP in detail. Section 5 provides the experimental settings and the results. Finally, Section 6 concludes this work.

2. Basic PSO

PSO is a population-based algorithm consisting of a group of particles. Each particle i is represented by two vectors, namely, a position vector X_i and a velocity vector V_i. The position of each particle in the search space is treated as a potential solution. Each particle updates its velocity and position with the following equations:

  v_i^d(t+1) = ω v_i^d(t) + c1 r1 (pbest_i^d − x_i^d(t)) + c2 r2 (gbest^d − x_i^d(t)),  (1)

  x_i^d(t+1) = x_i^d(t) + v_i^d(t+1),  (2)

where x_i^d(t) and v_i^d(t) represent the dth dimension of the position and the velocity of particle i at the tth iteration, pbest_i is the personal best experience of the ith particle, and gbest is the group best experience found by the whole swarm. ω is an inertia weight; c1 and c2 are acceleration coefficients reflecting the weighting of the stochastic acceleration terms that pull each particle toward pbest and gbest, respectively. The random factors r1 and r2 are two independent random numbers in the range [0, 1].

The first term of (1) (i.e., ω v_i^d(t)) is the previous velocity, which provides the necessary momentum for particles to roam around the search space. The second term (i.e., c1 r1 (pbest_i^d − x_i^d(t))), known as the "cognitive" component, represents the personal thinking of each particle and encourages it to move towards pbest. The third term (i.e., c2 r2 (gbest^d − x_i^d(t))), regarded as the "social" component, expresses the collaborative effect of the particles in finding the global optimal solution and always pulls the particles towards gbest.
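As a concrete illustration, the update rules (1) and (2) can be sketched in a few lines of Python (a minimal sketch only; the inertia weight and acceleration coefficients below are common textbook values, not settings prescribed by this paper):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One velocity/position update for a single particle, per dimension."""
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # independent U[0, 1] factors
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])    # cognitive component
              + c2 * r2 * (gbest[d] - x[d]))   # social component
        new_v.append(vd)
        new_x.append(x[d] + vd)                # position update, eq. (2)
    return new_x, new_v
```

Note that when pbest and gbest coincide with the current position, the update reduces to pure inertia, ω v_i^d(t).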

3. Behavior Analysis of PSO

In the basic PSO, each particle is guided toward the optimal solution by the cognitive and social components; proper control of these two components is therefore very important for finding the optimum accurately and efficiently [3]. Researchers have proposed various strategies to this end, such as linearly varying acceleration coefficients [3], nonlinearly varying acceleration coefficients [9–11], and acceleration coefficients varying with the evolutionary state [16, 17]. In addition, Krohling [14] presented a Gaussian PSO in which the products of the random factors and acceleration coefficients are replaced by positive random numbers generated according to the Gaussian distribution. Similarly, Richer and Blackwell [15] introduced the Lévy distribution to replace the uniform distribution of the random factors. These works show that the setting of the acceleration coefficients and the probability distribution of the random factors affect the performance of PSO.

In order to facilitate the analysis, the velocity updating equation needs to be transformed. Substituting (1) into (2), we obtain

  x_i^d(t+1) = x_i^d(t) + ω v_i^d(t) + c1 r1 (pbest_i^d − x_i^d(t)) + c2 r2 (gbest^d − x_i^d(t)).  (3)

According to (3), we can also write

  x_i^d(t+1) = x_i^d(t) + (c1 r1 + c2 r2) [(c1 r1 pbest_i^d + c2 r2 gbest^d)/(c1 r1 + c2 r2) − x_i^d(t)] + ω v_i^d(t).  (4)

Let φ = c1 r1/(c1 r1 + c2 r2). Then (4) can be simplified as follows:

  x_i^d(t+1) = x_i^d(t) + (c1 r1 + c2 r2) [φ pbest_i^d + (1 − φ) gbest^d − x_i^d(t)] + ω v_i^d(t).  (5)

Let δ = (c1 r1 + c2 r2)/(c1 + c2) and P(φ) = φ pbest_i^d + (1 − φ) gbest^d; then from (5), we get

  x_i^d(t+1) = x_i^d(t) + (c1 + c2) δ [P(φ) − x_i^d(t)] + ω v_i^d(t),  (6)

where φ and δ are functions of the random factors r1 and r2 and the acceleration coefficients. Hence φ and δ are correlative random variables in [0, 1]. From (6), the movement of a particle from the tth iteration to the (t+1)th iteration can be divided into two parts: the particle first enters a search space, defined by the second term of (6), and then makes an inertia motion decided by the third term. P(φ) is a point on the line connecting pbest_i^d and gbest^d. Given a fixed φ, the term (c1 + c2) δ [P(φ) − x_i^d(t)] represents a line segment from x_i^d(t) toward P(φ), namely, the one-step search space. The two factors influence the size of the one-step search space, and δ also decides the distribution of the particle's location within this space. The variable φ is the weighting coefficient, which reflects the relative exploitation of pbest and gbest: when φ = 1, the particle learns only from pbest; when φ = 0, the particle learns only from gbest. We call φ the learning factor and δ the distribution factor.
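The transformation above can be checked numerically: for any draw of the random factors, the factored form (6) reproduces the substituted update (3) exactly. A quick self-contained sketch (the particular coefficient values are arbitrary):

```python
import random

def next_x_standard(x, v, pb, gb, w, c1, c2, r1, r2):
    # eq. (3): the velocity update (1) substituted into the position update (2)
    return x + w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)

def next_x_factored(x, v, pb, gb, w, c1, c2, r1, r2):
    # eq. (6): learning factor phi, distribution factor delta
    phi = c1 * r1 / (c1 * r1 + c2 * r2)
    delta = (c1 * r1 + c2 * r2) / (c1 + c2)
    p = phi * pb + (1.0 - phi) * gb        # point on the pbest-gbest line
    return x + (c1 + c2) * delta * (p - x) + w * v

random.seed(42)
for _ in range(1000):
    args = (random.uniform(-5, 5), random.uniform(-1, 1),
            random.uniform(-5, 5), random.uniform(-5, 5),
            0.729, 2.0, 2.0,
            random.random() + 1e-12, random.random() + 1e-12)
    assert abs(next_x_standard(*args) - next_x_factored(*args)) < 1e-9
```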

In order to analyze the effect of the two factors on the one-step search space, we need to calculate their probabilistic characteristics. Considering a general situation, that is, c1 = c2, we obtain

  φ = r1/(r1 + r2),  δ = (r1 + r2)/2.  (7)

r1 and r2 are two independent uniform random numbers in the range [0, 1]. Hence we calculate the density function f(φ) of the learning factor and the joint density f(φ, δ) of the learning factor and the distribution factor (see the Appendix). The density function and the joint density are given by

  f(φ) = 1/(2(1 − φ)²) for 0 ≤ φ ≤ 1/2,  f(φ) = 1/(2φ²) for 1/2 < φ ≤ 1,  (8)

  f(φ, δ) = 4δ for 0 < δ < 1/(2 max(φ, 1 − φ)), and f(φ, δ) = 0 otherwise.  (9)

Using (8) and (9), we can calculate the conditional probability density of δ given φ:

  f(δ | φ) = 8 max(φ, 1 − φ)² δ,  0 < δ < 1/(2 max(φ, 1 − φ)).  (10)

We can see from (8) that f(φ) is a unimodal function symmetrical about φ = 0.5 (Figure 1(a)). If the interval of the learning factor is divided into three smaller ones, that is, (0, 0.25), (0.25, 0.75), and (0.75, 1), the probability of the learning factor lying in the interval (0, 0.25) is the same as that in the interval (0.75, 1), namely 1/6. When the learning factor is located in the interval (0, 0.25), we consider that the particle emphasizes learning from gbest; when, on the contrary, the learning factor is located in the interval (0.75, 1), the particle pays attention to learning from pbest.

From (10), we can see that the range of the distribution factor depends on the value of the learning factor. The relationship between the learning factor and the distribution factor is shown in Figure 1(b). When the learning factor equals 0.5, the distribution factor ranges from 0 to 1; when the learning factor equals 0 or 1, the distribution factor ranges only from 0 to 0.5. For a fixed φ, f(δ | φ) increases with the rising of the distribution factor, which means that the particles tend toward large values of the distribution factor.

In the basic PSO, the probability characteristics of the learning factor and the distribution factor may bring forth the clustering of the swarm. During an iteration, each particle emphasizes learning from gbest with probability 1/6 (the probability that the learning factor lies in (0, 0.25)); at the same time, the value range of the distribution factor is then restricted because the learning factor is far from 0.5, which means that the one-step search space of the particle is lessened. This situation eventually leads the clustering swarm to move toward gbest. Moreover, the conditional probability density of the distribution factor is an increasing function of the distribution factor, which means that the particle tends to a "long-distance flight" within the one-step space; this flight may accelerate the clustering of the particles. When the learning factor equals 0.5, the distribution factor attains its maximum range from 0 to 1, but the probability of the learning factor lying around 0.5 (in (0.45, 0.55)) is only 0.182. In other words, the one-step space of the particle is shrunk with probability 0.818. All of these effects may bring about the quick clustering of the swarm.
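These probability values are easy to confirm by simulation. The following Monte Carlo sketch assumes the c1 = c2 case of (7), under which the learning factor reduces to r1/(r1 + r2):

```python
import random

random.seed(7)
N = 200_000
phis = []
for _ in range(N):
    r1, r2 = random.random(), random.random()
    if r1 + r2 > 0.0:                  # guard against the degenerate draw
        phis.append(r1 / (r1 + r2))

p_mid = sum(0.45 < p < 0.55 for p in phis) / len(phis)  # near phi = 0.5
p_gb  = sum(p < 0.25 for p in phis) / len(phis)          # gbest-dominated
p_pb  = sum(p > 0.75 for p in phis) / len(phis)          # pbest-dominated
# theory from (8): p_mid ≈ 0.182, p_gb = p_pb = 1/6
```

The empirical frequencies agree with the closed-form density (8) to within Monte Carlo error.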

4. PSO with Double Learning Patterns (PSO-DLP)

In this section, we describe PSO-DLP in detail. According to the analysis of the basic PSO, the probability characteristics of the learning parameters influence the search behavior of the particles. PSO-DLP takes advantage of these characteristics to achieve an effective global search.

4.1. Learning Patterns

In PSO-DLP, we develop two learning patterns, called uniform exploration and enhanced exploitation, and employ two swarms, namely, a master swarm and a slave swarm. The master swarm adopts the uniform exploration pattern to avoid premature convergence, and the slave swarm uses the enhanced exploitation pattern to accelerate convergence.

In the uniform exploration learning pattern, we present three novel strategies. Firstly, the learning factor and the distribution factor are independent. Secondly, the distribution factor follows the uniform distribution on [0, 1], which enlarges the search space of the particles efficiently and is beneficial for preserving the swarm diversity; in the basic PSO, the distribution factor reaches its maximum value of 1 only when the learning factor equals 0.5. Thirdly, the learning factor decreases with the number of fitness evaluations, which helps the particles emphasize exploration in the earlier stage of the evolution and enhances convergence in the later stage. For particle i in the master swarm, the velocity is updated as follows:

  v_i^d(t+1) = ω v_i^d(t) + (c1 + c2) δ [φ pbest_i^d + (1 − φ) Gm^d − x_i^d(t)],  (11)

where δ is a uniform random number in [0, 1]; φ = 1 − fes/FEs_max decreases linearly; fes is the current number of fitness evaluations (FEs); FEs_max is the maximum number of FEs defined by the user; and Gm is the gbest of the master swarm.

The purpose of the enhanced exploitation learning pattern is to focus the search on a region in order to refine a promising solution. In this pattern, the learning factor and the distribution factor are also independent. The learning factor is a uniform random number in [0, 0.5], which concentrates the search around gbest. The distribution factor is likewise a uniform random number generated in the interval [0, 0.5], which shrinks the search space to accelerate convergence.

For particle i in the slave swarm, the velocity is updated as follows:

  v_i^d(t+1) = ω v_i^d(t) + (c1 + c2) δ [φ pbest_i^d + (1 − φ) Gs^d − x_i^d(t)],  (14)

where φ and δ are uniform random numbers in [0, 0.5] and Gs is the gbest of the slave swarm.
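Putting the two patterns side by side, the per-dimension velocity updates can be sketched as follows. This is a sketch under stated assumptions: the linear schedule φ = 1 − fes/fes_max for the master swarm is one plausible reading of the description above, and the coefficient values are illustrative:

```python
import random

def master_velocity(x, v, pbest, gm, fes, fes_max, w=0.729, c=4.0):
    # uniform exploration: phi decreases with evaluations, delta ~ U(0, 1),
    # and the two factors are drawn independently
    phi = 1.0 - fes / fes_max            # assumed linear schedule
    delta = random.random()              # U(0, 1): full one-step search space
    p = phi * pbest + (1.0 - phi) * gm
    return w * v + c * delta * (p - x)

def slave_velocity(x, v, pbest, gs, w=0.729, c=4.0):
    # enhanced exploitation: phi ~ U(0, 0.5) biases the exemplar toward gbest,
    # delta ~ U(0, 0.5) shrinks the one-step search space
    phi = random.uniform(0.0, 0.5)
    delta = random.uniform(0.0, 0.5)
    p = phi * pbest + (1.0 - phi) * gs
    return w * v + c * delta * (p - x)
```

When pbest, the swarm best, and the current position coincide, both updates reduce to the inertia term ω v, matching (11) and (14).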

4.2. Interaction between Swarms

In PSO-DLP, the two learning patterns play different roles in the evolutionary process: the master swarm performs the global search, while the slave swarm performs the local search. However, particles in the slave swarm easily get trapped in local optima. To improve the convergence precision of the slave swarm, interaction between the two swarms is necessary. This interaction is unidirectional: information flows from the master swarm to the slave swarm. Particles in the master swarm do not receive information from the slave swarm, so as to keep their ability to perform the global search. When the best particle Gs of the slave swarm has not improved for K successive iterations, or when Gs is worse than the best particle Gm of the master swarm, the particles in the slave swarm learn from Gm. Neither too small nor too large values of K are desirable: the former tends to weaken the exploitation capability of the particles in the slave swarm, while the latter leads to wasted computational resources (as the slave swarm may suffer from premature convergence). In this study, we set K = 50.
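The trigger condition of the interaction can be expressed as a small helper function (the names are hypothetical, the default K = 50 follows the sensitivity analysis in Section 5.4, and minimization is assumed):

```python
def maybe_inject(gm_fit, gs_fit, stagnation, K=50):
    """Return True when the slave swarm should adopt the master's best.

    Triggered when Gs has not improved for K consecutive iterations, or
    when the master's best Gm is already better (smaller, for minimization)
    than the slave's best Gs.
    """
    return stagnation >= K or gm_fit < gs_fit
```

On a trigger, the slave swarm would replace Gs by Gm and reset the stagnation counter, which is exactly the unidirectional flow described above.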

4.3. PSO-DLP Procedure

PSO-DLP is built from the two learning patterns and the interswarm interaction discussed above. The pseudocode of PSO-DLP is given in Algorithm 1. As no additional operation is introduced, the computational complexity and memory requirements are the same as those of the basic PSO; PSO-DLP thus keeps the simplicity of the basic PSO.

Input:
 Master swarm size (M), slave swarm size (S), the dimensionality of the problem space (D), maximum number of fitness evaluations (iterMax), objective function f(·)
(1) Randomly initialize the position Xm_i and velocity Vm_i of all particles in the master swarm.
(2) Randomly initialize the position Xs_i and velocity Vs_i of all particles in the slave swarm.
(3) Calculate the fitness values of Xm_i and Xs_i.
(4) Set pbm_i and pbs_i to be Xm_i and Xs_i for each
particle of the master swarm and slave swarm, respectively.
(5) Set the particle with the best fitness of the master swarm to be Gm, and set the particle
with the best fitness of the slave swarm to be Gs.
(6) Set generation t = 0, the counter k = 0.
(7) while fes ≤ iterMax do
(8)  for i = 1 to M do
(9)  for d = 1 to D do
(10)    Update the dth dimensional velocity Vm_i^d according to (11), and update the dth dimensional
     position Xm_i^d according to (2).
(11)  end for
(12)  Calculate the fitness value f(Xm_i);
(13)  if f(Xm_i) < f(pbm_i) then
(14)      pbm_i = Xm_i;
(15)      if f(Xm_i) < f(Gm) then
(16)       Gm = Xm_i;
(17)      end if
(18)  end if
(19)  end for
(20)  for i = 1 to S do
(21)  for d = 1 to D do
(22)    Update the dth dimensional velocity Vs_i^d according to (14), and update the dth dimensional
     position Xs_i^d according to (2);
(23)  end for
(24)  Calculate the fitness value f(Xs_i);
(25)  if f(Xs_i) < f(pbs_i) then
(26)      pbs_i = Xs_i;
(27)      if f(Xs_i) < f(Gs) then
(28)       Gs = Xs_i; k = 0;
(29)      else
(30)       k = k + 1;
(31)      end if
(32)  end if
(33)  end for
(34)  if k ≥ K or f(Gm) < f(Gs) then
(35)     Gs = Gm; k = 0;
(36)  end if
(37)  t = t + 1;
(38) end while
(39) Set the global best position Gb to the better of Gm and Gs
(40) return Gb

5. Experimental Setup and Simulation Results

5.1. Benchmark Functions

Twenty scalable benchmark functions are used to investigate the performance of the proposed algorithm, including unimodal, multimodal, rotated, and shifted functions. These functions are widely adopted in [3, 8–13, 19–40]. All problems are minimization problems, and the functions are listed in Table 1. The shifting and rotating methods used in the test functions are from [28, 39]. In Table 1, M denotes the orthogonal (rotation) matrix, o denotes the shifted global optimum, and f_bias denotes the shifted fitness value. All functions are evaluated with 30 variables.


Number | Function | Search space | Accuracy | f_min / f_bias

Group 1: conventional problems
f1 | Sphere | [−100, 100] | 10^−6 | 0
f2 | Schwefel's function 1.2 | [−100, 100] | 10^−6 | 0
f3 | Noise quadric | [−1.28, 1.28] | 10^−2 | 0
f4 | Rosenbrock | [−10, 10] | 10^−2 | 0
f5 | Ackley | [−32.768, 32.768] | 10^−6 | 0
f6 | Griewank | [−600, 600] | 10^−6 | 0
f7 | Rastrigin | [−5.12, 5.12] | 10^−6 | 0
f8 | Noncontinuous Rastrigin | [−5.12, 5.12] | 10^−6 | 0
f9 | Expanded Schaffer | [−100, 100] | 10^−6 | 0

Group 2: rotated problems (y = M x, with M an orthogonal matrix)
f10 | Rotated Rosenbrock | [−10, 10] | 10^0 | 0
f11 | Rotated Ackley | [−32.768, 32.768] | 10^0 | 0
f12 | Rotated Griewank | [−600, 600] | 10^0 | 0
f13 | Rotated Rastrigin | [−5.12, 5.12] | 10^0 | 0
f14 | Rotated noncontinuous Rastrigin | [−5.12, 5.12] | 10^0 | 0

Group 3: shifted problems (z = x − o)
f15 | Shifted Sphere | [−100, 100] | 10^−6 | −450
f16 | Shifted Rosenbrock | [−10, 10] | 10^−6 | 390
f17 | Shifted Rastrigin | [−5.12, 5.12] | 10^−6 | −330
f18 | Shifted noncontinuous Rastrigin | [−5.12, 5.12] | 10^−6 | −330
f19 | Shifted rotated Ackley (global optimum on bounds) | [−32.768, 32.768] | 10^−6 | −140
f20 | Shifted rotated Rastrigin | [−5.12, 5.12] | 10^−6 | −330

5.2. Parameter Settings for the Involved PSO Variants

For a comprehensive comparison with PSO-DLP, eight PSO variants are employed in this paper: PSO-W [8], HPSO [3], FIPS [25], CLPSO [28], SPSO [26], HEPSO [22], COMPSO [32], and TS-CPSO [33]. The parameter settings of each peer algorithm are taken from the corresponding literature and are given in Table 2.


Algorithm | Year | Population topology | Parameter settings
PSO-LDIW | 1998 | Fully connected | ω: 0.9–0.4
HPSO | 2004 | Fully connected | ω: 0.9–0.4; c1: 2.5–0.5; c2: 0.5–2.5
FIPSO | 2004 | Local ring |
CLPSO | 2006 | Comprehensive learning | ω: 0.9–0.4
SPSO | 2007 | Local ring |
COMPSO | 2007 | Multiswarm (fully connected) | ω: 0.9–0.4
HEPSO | 2014 | Fully connected | ω: 0.9–0.4; c1: 2.5–0.5; c2: 0.5–2.5
TS-COMPSO | 2014 | Multiswarm (fully connected and local ring) |
CMA-ES | 2007 | |
SADE | 2009 | |
JADE | 2009 | |
PSO-DLP | | Multiswarm (fully connected) | ω: 0.9–0.3

The swarm size is set to 40 [28] for all algorithms except COMPSO, whose population size is set to 80, as suggested by its authors. COMPSO has four subswarms, while TS-CPSO and PSO-DLP have two subswarms each; in these three algorithms every subswarm has the same size of 20. To ensure a fair comparison, the maximum number of fitness evaluations is the same for all algorithms. Each algorithm was run 30 times independently to reduce random discrepancy.

5.3. Performance Metrics

In this study, we adopt the fitness mean (Fm), the success rate (SR), and the success performance (SP) to assess the accuracy, the reliability, and the efficiency of PSO, respectively [39]. Fm is the mean difference between the best fitness value found by an algorithm and the global optimum. SR denotes the consistency of an algorithm in achieving a solution within a predefined accuracy ε. SP denotes the number of fitness evaluations required by an algorithm to solve the problem within the predefined accuracy ε. The Wilcoxon test [41, 42] is applied to perform pairwise comparisons between PSO-DLP and its peers; the confidence level is fixed at 0.95. If the performance of PSO-DLP is better than that of a peer, the result is denoted by "+"; "=" or "−" indicates that the performance of PSO-DLP is almost the same as, or significantly worse than, that of the peer, respectively. The average ranking is calculated to undertake multiple comparisons [41, 42].
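For concreteness, the three metrics can be computed from per-run records as follows (a sketch; `runs` is a hypothetical list of (best_fitness, fes_used) pairs and `eps` the predefined accuracy level):

```python
def fm(runs, f_opt):
    """Fitness mean: average gap between the best value found and the optimum."""
    return sum(abs(best - f_opt) for best, _ in runs) / len(runs)

def success_rate(runs, f_opt, eps):
    """Fraction of runs that reach the optimum within accuracy eps."""
    return sum(abs(best - f_opt) <= eps for best, _ in runs) / len(runs)

def success_performance(runs, f_opt, eps):
    """Mean FEs of the successful runs divided by the success rate [39]."""
    ok = [fes for best, fes in runs if abs(best - f_opt) <= eps]
    if not ok:
        return float('inf')              # no successful run
    return (sum(ok) / len(ok)) / (len(ok) / len(runs))
```

For example, with two runs, one solving the problem in 1000 FEs and one failing, SR is 0.5 and SP doubles the successful run's cost to 2000 FEs.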

5.4. Parameter Sensitivity Analysis

The effect of the interaction parameter K on the performance of PSO-DLP was investigated on eight benchmarks selected from Table 1. The value of K was set to integer values varying from 10 to 100. Table 3 presents the mean values (Fm) and the standard deviations (SD) obtained by PSO-DLP with different K.


K | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100

SD | 2.80E−191 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
Fm | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

SD | 7.97E−17 | 7.57E−17 | 8.07E−18 | 7.97E−17 | 2.01E−19 | 3.04E−19 | 7.97E−17 | 7.97E−17 | 8.27E−17 | 8.05E−17
Fm | 1.17E−17 | 2.10E−17 | 3.05E−18 | 2.00E−17 | 6.19E−19 | 2.33E−19 | 2.01E−17 | 1.02E−17 | 2.31E−17 | 5.01E−17

SD | 5.90E−13 | 4.76E−14 | 3.55E−16 | 1.06E−15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.59E−14 | 1.17E−14
Fm | 1.28E−12 | 1.04E−13 | 7.94E−16 | 2.38E−15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.43E−14 | 7.53E−15

SD | 1.93E−13 | 6.82E−14 | 2.84E−15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.19E−15 | 4.28E−15
Fm | 2.67E−13 | 1.48E−13 | 4.08E−15 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.14E−15 | 1.56E−15

SD | 1.71E+00 | 1.71E+00 | 1.19E+00 | 7.46E−01 | 2.00E−01 | 2.01E−01 | 2.87E−01 | 1.72E+00 | 1.09E+00 | 1.94E+00
Fm | 1.10E+00 | 2.10E+00 | 1.54E+00 | 1.53E−01 | 2.83E−01 | 2.93E−01 | 1.86E−01 | 1.55E+00 | 1.72E+00 | 1.91E+00

SD | 7.90E−03 | 7.90E−03 | 5.40E−03 | 7.40E−03 | 3.70E−03 | 3.70E−03 | 1.08E−02 | 4.90E−03 | 6.94E−03 | 5.91E−03
Fm | 5.11E−03 | 1.39E−03 | 8.60E−03 | 7.00E−03 | 5.23E−03 | 5.23E−03 | 1.01E−02 | 7.00E−03 | 9.60E−03 | 5.70E−03

SD | 3.48E+01 | 3.56E+01 | 3.02E+01 | 2.65E+01 | 2.55E+01 | 2.55E+01 | 2.92E+01 | 2.98E+01 | 2.56E+01 | 2.60E+01
Fm | 4.02E+00 | 8.08E+00 | 7.22E+00 | 4.60E+00 | 1.34E+01 | 1.54E+01 | 5.97E+00 | 3.56E+00 | 6.58E+00 | 9.87E+00

SD | 5.68E−14 | 5.11E−14 | 5.11E−14 | 5.11E−14 | 1.27E−14 | 1.54E−14 | 1.97E−14 | 3.97E−14 | 4.54E−14 | 3.41E−14
Fm | 0.00E+00 | 1.27E−14 | 1.27E−14 | 1.27E−14 | 1.48E−13 | 1.55E−14 | 1.57E−14 | 1.55E−14 | 1.55E−14 | 1.27E−14

From Table 3, we can see that the results on the two unimodal functions improve as K increases, because a higher value of K is beneficial to the exploitation search. For the multimodal functions, the performance of PSO-DLP tends to deteriorate when K is set too low (i.e., K = 10, 20, or 30) or too high (i.e., K = 90 or 100). When K is too low, the interaction between the subswarms is overemphasized: the slave swarm receives the best solution from the master swarm so frequently that its own search is not sufficient to refine the solution. When K is too high, the interaction is insufficient, and the slave swarm may fall into premature convergence on multimodal functions; if the interaction interval is long, the master swarm cannot help the slave swarm escape from the local optimum. Based on these findings, K = 50 gave the most promising performance of PSO-DLP, and this setting was adopted in the following experiments.

5.5. Experimental Results and Discussions
5.5.1. Comparison among the Fm Results

Table 4 presents the fitness mean (Fm) and standard deviation (SD) of the nine algorithms on the conventional, rotated, and shifted problems. The best results among the nine algorithms are shown in bold.


 | PSO-W | SPSO | CLPSO | HPSO | FIPS | HEPSO | TS-CPSO | COMPSO | PSO-DLP

Fm | 1.34E−117 | 8.86E−97 | 3.35E−87 | 5.87E−322 | 5.66E−05 | 0.00E+00 | 5.92E−322 | 1.48E−323 | 0.00E+00
SD | 2.16E−116 | 1.86E−96 | 5.37E−87 | 2.18E−322 | 9.24E−06 | 0.00E+00 | 4.09E−322 | 1.87E−323 | 0.00E+00
 | + | + | + | = | + | + | = | =

Fm | 1.41E−45 | 5.29E−02 | 4.48E−07 | 7.61E−53 | 1.56E−02 | 4.81E−43 | 3.96E−09 | 2.92E−47 | 3.00E−86
SD | 1.47E−45 | 3.47E−02 | 2.26E−07 | 1.68E−52 | 8.60E−02 | 2.48E−43 | 5.33E−09 | 5.89E−47 | 4.25E−86
 | + | + | + | + | + | + | + | +

Fm | 2.34E−03 | 5.62E−03 | 3.50E−03 | 3.10E−03 | 4.50E−03 | 1.61E−03 | 9.43E−03 | 1.21E−03 | 2.05E−04
SD | 9.83E−04 | 1.61E−03 | 8.73E−04 | 2.40E−03 | 1.30E−03 | 6.15E−04 | 4.80E−03 | 4.60E−04 | 1.24E−05
 | + | + | + | + | + | + | + | +

Fm | 2.53E+01 | 2.25E+01 | 1.40E+01 | 2.89E+00 | 2.02E+01 | 1.36E+01 | 7.71E−02 | 9.806E−01 | 2.01E−19
SD | 1.81E+00 | 3.76E+00 | 3.50E+00 | 4.03E+00 | 2.30E−01 | 2.194E+00 | 3.87E−02 | 2.19E+00 | 6.19E−19
 | + | + | + | + | + | + | + | +

Fm | 7.11E−15 | 5.68E−15 | 7.10E−15 | 7.10E−15 | 7.54E−05 | 0.00E+00 | 3.34E−13 | 4.97E−15 | 7.11E−15
SD | 0.00E+00 | 1.94E−15 | 0.00E+00 | 0.00E+00 | 2.47E−06 | 0.00E+00 | 3.45E−13 | 1.94E−15 | 0.00E+00
 | = | = | = | = | = | = | + | =

Fm | 7.39E−03 | 1.04E−14 | 0.00E+00 | 1.47E−02 | 1.02E−04 | 0.00E+00 | 2.65E−02 | 5.30E−03 | 0.00E+00
SD | 1.05E−02 | 1.47E−14 | 0.00E+00 | 2.08E−02 | 5.19E−05 | 0.00E+00 | 1.46E−02 | 7.30E−03 | 0.00E+00
 | + | + | + | + | + | = | + | +

Fm | 2.12E+01 | 1.31E+01 | 1.15E−14 | 1.50E−12 | 8.98E+02 | 0.00E+00 | 1.42E−15 | 1.65E+01 | 0.00E+00
SD | 1.02E+01 | 1.25E+01 | 3.63E−15 | 3.20E−12 | 8.17E+01 | 0.00E+00 | 1.48E−15 | 3.19E+00 | 0.00E+00
 | + | + | + | + | + | = | = | +

Fm | 2.37E+01 | 3.97E+01 | 4.85E−10 | 2.00E−01 | 7.36E+01 | 9.93E−13 | 1.20E+00 | 1.34E+01 | 0.00E+00
SD | 1.45E+01 | 4.48E+00 | 2.44E−10 | 4.47E−01 | 1.42E+01 | 5.67E−13 | 1.09E+00 | 1.34E+00 | 0.00E+00
 | + | + | + | + | + | = | + | +

Fm | 1.27E+00 | 2.36E+00 | 2.59E+00 | 1.25E+00 | 9.07E+00 | 1.31E+00 | 7.03E+00 | 2.18E+00 | 6.87E−01
SD | 4.43E−01 | 5.21E−01 | 5.53E−01 | 1.75E−01 | 9.10E−02 | 3.33E−01 | 1.43E−01 | 4.36E−01 | 3.60E−01
 | + | + | + | + | + | + | + | +

Fm | 2.51E+01 | 2.55E+01 | 2.09E+01 | 2.57E+01 | 2.17E+01 | 1.35E+01 | 1.70E+01 | 9.65E+00 | 2.00E−01
SD | 2.14E+01 | 1.48E+01 | 4.05E+01 | 2.79E+01