Abstract

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is very efficient if its parameters are properly set. First, an experiment was conducted to obtain a percentage value of the search space limits for computing the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.

1. Introduction

The idea of particle swarm optimization (PSO) stems from biology, where a swarm of birds coordinates itself in order to achieve a goal. When a swarm of birds looks for food, its individuals spread in the environment and move around independently, with a degree of randomness that enables them to discover food accumulations. After a while, one of them will find something digestible and, being social, communicates this to its neighbors. These can then approach the source of food, leading to the convergence of the swarm on the source. Following this analogy, PSO was largely derived from sociopsychology concepts and transferred to optimization [1]: each particle (bird) uses the local information regarding the displacement of its closest reachable neighbors to decide on its own displacement, resulting in complex and adaptive collective behaviors.

Since the inception of the PSO technique, a lot of work has been done by researchers to enhance its efficiency in handling optimization problems. One such work is the introduction of the linear decreasing inertia weight (LDIW) strategy into the original PSO to control its exploration and exploitation for better performance [2–4]. However, the LDIW-PSO algorithm is known in the literature to suffer from premature convergence when solving complex (multipeak) problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. The challenge of addressing this shortcoming has persisted for a long time and has attracted much attention from researchers in the field of global optimization. Consequently, many other inertia weight PSO variants have been proposed [2, 5–16], with different levels of success. Some of these variants have claimed better performance than LDIW-PSO, thereby making it look weak or inferior. Since improving the performance of PSO is an area that still attracts many researchers, this paper strives to establish experimentally that LDIW-PSO is very efficient if its parameters, such as the velocity limits for the particles, are properly set. Using the experimentally obtained values for particle velocity limits in LDIW-PSO, results show that LDIW-PSO outperformed the other PSO variants adopted for comparison.

In the sections that follow, the inertia weight PSO technique is described in Section 2, LDIW-PSO and the PSO variants adopted for comparison are reviewed in Section 3, the experiments carried out for parameter settings are described in Section 4, the experimental results of LDIW-PSO and its competing variants are presented and discussed in Section 5, and Section 6 concludes the paper.

2. Particle Swarm Optimization

This technique is a simple but efficient population-based, adaptive, and stochastic technique for solving simple and complex optimization problems [17, 18]. It does not need the gradient of the problem to work, so it can be employed for a host of optimization problems. In PSO, a swarm of particles (set of solutions) is randomly positioned (distributed) in the search space. For every particle, the objective function determines the food at its place (the value of the objective function). Every particle knows its own actual value of the objective function, its own best value (locally best solution), the best value of the whole swarm (globally best solution), and its own velocity.

PSO maintains a single static population whose members are tweaked (adjusted slightly) in response to new discoveries about the space. The method is essentially a form of directed mutation. It operates almost exclusively in multidimensional metric, and usually real-valued, spaces. Because of its origin, PSO practitioners tend to refer to candidate solutions not as a population of individuals but as a swarm of particles. Generally, these particles never die [19] but are moved about in the search space by the directed mutation.

Implementing PSO involves a small number of parameters that regulate the behavior and efficacy of the algorithm in optimizing a given problem. These parameters are the particle swarm size, problem dimensionality, particle velocity, inertia weight, particle velocity limits, cognitive learning rate, social learning rate, and the random factors. The versatility of PSO comes at a price, because for it to work well on any problem at hand these parameters need tuning, and this can be very laborious. The inertia weight parameter (popularly represented as $\omega$) has attracted a lot of attention and seems to be the most important compared with the other parameters. The motivation behind its introduction was the desire to better control (or balance) the scope of the (local and global) search and to reduce the importance of (or eliminate) velocity clamping, $V_{max}$, during the optimization process [20–22]. According to [22], the inertia weight was successful in addressing the former objective but could not completely eliminate the need for velocity clamping. The divergence or convergence of particles can be controlled only by the parameter $\omega$, in conjunction, however, with the selection of values for the acceleration constants [22, 23] as well as other parameters.
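
As an illustration, the configuration sketch below groups these control parameters into one record. The default values shown follow settings reported later in this paper (swarm sizes of 20 or 30, acceleration constants of 2.0, inertia weight decreasing from 0.9 to 0.4), while the field names and the value of the velocity-clamping fraction are assumptions of this sketch, not prescriptions from the paper.

```python
from dataclasses import dataclass

@dataclass
class PSOConfig:
    """Control parameters of an inertia-weight PSO run (names are illustrative)."""
    swarm_size: int = 20      # number of particles
    dimension: int = 30       # problem dimensionality
    w_start: float = 0.9      # initial inertia weight (favours exploration)
    w_end: float = 0.4        # final inertia weight (favours exploitation)
    c1: float = 2.0           # cognitive learning rate
    c2: float = 2.0           # social learning rate
    delta: float = 0.1        # fraction of the search range used for velocity clamping (assumed)
    max_iter: int = 1500      # maximum number of iterations

print(PSOConfig())
```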

Each individual in the particle swarm is composed of three d-dimensional vectors (current position, previous best position, and velocity), where d is the dimensionality of the search space. Thus, in a physical d-dimensional search space, the position and velocity of particle i are represented as the vectors $X_i = (x_{i1}, x_{i2}, \ldots, x_{id})$ and $V_i = (v_{i1}, v_{i2}, \ldots, v_{id})$, respectively. In the course of moving in the search space looking for the optimum solution of the problem being optimized, the particle's velocity and position are updated as follows:

$v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1 \left(p_{id} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{t}\right)$,  (1)

$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}$,  (2)

where $c_1$ and $c_2$ are acceleration (weighting) factors known as the cognitive and social scaling parameters, which determine the magnitude of the random forces in the direction of $p_{id}$ (personal previous best) and $p_{gd}$ (global previous best); $r_1$ and $r_2$ are random numbers between 0 and 1; $t$ is the iteration index; and $\omega$ is the inertia weight. It is common that the positions and velocities of particles in the swarm, when they are being updated, are controlled to be within some specified bounds, as shown in Algorithms 1 and 2, respectively. An inertia weight PSO algorithm is shown in Algorithm 3.

If x_id > X_max
     x_id = X_max
else if x_id < X_min
     x_id = X_min
end if

If v_id > V_max
      v_id = V_max
else if v_id < V_min
      v_id = V_min
end if
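
A minimal NumPy sketch of these two clamping rules is given below; the array and bound names are assumptions of the sketch, and the velocity limits are taken here as a fraction of the search-space range, as discussed later in Section 4.2.

```python
import numpy as np

rng = np.random.default_rng(0)
x_min, x_max = -100.0, 100.0            # example search-space limits
v_max = 0.1 * (x_max - x_min)           # example velocity limit (fraction of the range)
v_min = -v_max

x = rng.uniform(x_min, x_max, size=(20, 30)) * 1.5   # some components left out of range on purpose
v = rng.normal(0.0, 50.0, size=(20, 30))

x = np.clip(x, x_min, x_max)   # Algorithm 1: keep positions inside the search space
v = np.clip(v, v_min, v_max)   # Algorithm 2: keep velocities inside the velocity range
print(x.max(), v.max())
```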

Begin Algorithm
     Input: function f to optimize,
       swarm size S,
       problem dimension d,
       search space range [X_min, X_max],
       velocity range [V_min, V_max]
     Output: f(x*): the best value found
     Initialize: positions x_i for all particles in the problem space
             velocities v_i and
             personal bests p_i = x_i, i = 1, ..., S
     Evaluate f in d variables and get the fitness of each p_i, i = 1, ..., S
      gbest = best of p_i
      Repeat
          Calculate the inertia weight
          Update v_i for all particles using (1)
          Update x_i for all particles using (2)
          Evaluate f in d variables and get the fitness of each x_i, i = 1, ..., S
          If f(x_i) is better than f(p_i) then p_i = x_i
          If the best of p_i is better than gbest then gbest = best of p_i
     Until stopping criterion (e.g., maximum iteration or error tolerance) is met
      x* = gbest
     Return f(x*)
End Algorithm
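
The following Python/NumPy sketch puts Algorithms 1–3 and the updates (1) and (2) together for a minimization problem, using a linearly decreasing inertia weight. It is an illustrative reconstruction rather than the authors' C# implementation, and the function and parameter names are the sketch's own.

```python
import numpy as np

def inertia_weight_pso(f, dim, x_min, x_max, swarm_size=20, max_iter=1500,
                       w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, delta=0.1,
                       seed=None):
    """Minimize f over [x_min, x_max]^dim with an inertia-weight PSO (sketch)."""
    rng = np.random.default_rng(seed)
    v_max = delta * (x_max - x_min)          # velocity clamping derived from the range
    v_min = -v_max

    x = rng.uniform(x_min, x_max, (swarm_size, dim))   # positions
    v = rng.uniform(v_min, v_max, (swarm_size, dim))   # velocities
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)           # personal best values
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]   # global best

    for t in range(max_iter):
        # LDIW: inertia weight decreases linearly from w_start to w_end, as in (3).
        w = (w_start - w_end) * (max_iter - t) / max_iter + w_end
        r1 = rng.random((swarm_size, dim))
        r2 = rng.random((swarm_size, dim))
        # Velocity and position updates, as in (1) and (2).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, v_min, v_max)                   # Algorithm 2 (velocity bounds)
        x = np.clip(x + v, x_min, x_max)               # Algorithm 1 (position bounds)
        # Update personal and global bests.
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

# Example: 30-dimensional Sphere problem.
if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z * z))
    best_x, best_f = inertia_weight_pso(sphere, dim=30, x_min=-100.0, x_max=100.0, seed=1)
    print(best_f)
```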

3. A Review of LDIW-PSO and Some of Its Competing PSO Variants

Despite the fact that the LDIW-PSO algorithm is known in the literature to have a shortcoming of premature convergence in solving complex (multipeak) problems, it may not be true that LDIW-PSO is as weak or inferior as it has been pictured to be by some PSO variants in the literature [2, 7, 13]. Reviewed below are some of these variants, together with other variants which, though not directly compared with LDIW-PSO in the literature, have been adopted for comparison with LDIW-PSO.

3.1. Linear Decreasing Inertia Weight PSO (LDIW-PSO)

The inertia weight parameter was introduced into the original version of PSO by [20]. By introducing a linearly decreasing inertia weight into the original version of PSO, the performance of PSO was greatly improved through experimental study [2–4]. In order to further illustrate the effect of this linearly decreasing inertia weight, [4] empirically studied the performance of PSO. With the conviction that a large inertia weight facilitates a global search while a small inertia weight facilitates a local search, a linearly decreasing inertia weight was used with an initial value of 0.9 and a final value of 0.4. With these values, the inertia weight can be interpreted as the fluidity of the medium in which a particle moves [21]: setting it to a relatively high initial value (e.g., 0.9) makes particles move in a low-viscosity medium and perform extensive exploration, while gradually reducing it to a much lower value (e.g., 0.4) makes particles move in a high-viscosity medium and perform more exploitation. The experimental results in [4] showed that the PSO converged quickly towards the optimal positions but slowed down when it was near the optima. Thus, by using the linearly decreasing inertia weight, the PSO lacks global search ability at the end of a run, even when global search ability is required to jump out of a local minimum in some cases. As a result, an adaptive strategy for adjusting the inertia weight was suggested to improve PSO's performance near the optima. Towards achieving this, there have been many improvements on LDIW-PSO in the literature [2, 3, 16, 24–26], which have made PSO perform with varying degrees of success. The LDIW is represented in (3):

$\omega_t = (\omega_{start} - \omega_{end}) \frac{T_{max} - t}{T_{max}} + \omega_{end}$,  (3)

where $\omega_{start}$ and $\omega_{end}$ are the initial and final values of the inertia weight, $t$ is the current iteration number, $T_{max}$ is the maximum iteration number, and $\omega_t$ is the inertia weight value in the $t$th iteration.
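
A direct transcription of (3), with the commonly used 0.9-to-0.4 schedule, might look as follows.

```python
def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight of (3): w_start at t = 0, w_end at t = t_max."""
    return (w_start - w_end) * (t_max - t) / t_max + w_end

# The weight moves from the exploratory 0.9 down to the exploitative 0.4.
print(ldiw(0, 1500), ldiw(750, 1500), ldiw(1500, 1500))   # 0.9, 0.65, 0.4
```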

3.2. Chaotic Descending Inertia Weight PSO (CDIW-PSO)

Chaos is a nonlinear dynamic system which is sensitive to the initial value, and it has the characteristics of ergodicity and stochasticity. Using the idea of chaotic mapping, CDIW-PSO was proposed by [2] as shown in (5), based on the logistic mapping in (4). The goal was to improve on LDIW-PSO so as to avoid getting trapped in local optima during the search process by utilizing the merits of chaotic optimization:

$z_{k+1} = \mu z_k (1 - z_k)$,  (4)

where $\mu = 4$ and $z_k$ is the chaotic number. The map generates values between 0 and 1, provided that the initial value $z_0 \in (0, 1)$ and that $z_0 \notin \{0.25, 0.5, 0.75\}$:

$\omega_k = (\omega_{start} - \omega_{end}) \frac{T_{max} - k}{T_{max}} + \omega_{end} \times z_k$,  (5)

where $\omega_{start}$ and $\omega_{end}$ are the initial and final values of the inertia weight and the chaotic sequence $z_k$ is started from a uniform random number in (0, 1). The experimental results in [2] show that CDIW-PSO outperformed LDIW-PSO on all the test problems used in the experiment in terms of convergence precision, convergence speed, and global search ability.
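
A sketch of the chaotic descending schedule, written from (4) and (5) as reconstructed above, is given below; the generator interface and the seed handling are assumptions of the sketch.

```python
import random

def cdiw_schedule(t_max, w_start=0.9, w_end=0.4, mu=4.0, seed=None):
    """Yield a chaotic descending inertia weight per iteration, following (4) and (5):
    the linearly decreasing term is perturbed by a logistic-map chaotic number."""
    rnd = random.Random(seed)
    z = rnd.uniform(0.01, 0.99)          # initial chaotic value in (0, 1)
    for t in range(t_max):
        z = mu * z * (1.0 - z)           # logistic map of (4)
        yield (w_start - w_end) * (t_max - t) / t_max + w_end * z   # (5)

weights = list(cdiw_schedule(1500, seed=3))
print(min(weights), max(weights))
```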

3.3. Random Inertia Weight and Evolutionary Strategy PSO (REPSO)

This variant, proposed in [7], used the idea of simulated annealing and the fitness of particles to design another inertia weight, represented by (6). A cooling temperature was introduced to adjust the inertia weight based on a certain probability so as to facilitate jumping out of local optimal solutions.

It was experimentally shown that REPSO is significantly superior to LDIW-PSO in terms of convergence speed and accuracy. The coefficients appearing in (6) are constants whose admissible ranges are given in [7]. The simulated annealing probability is defined as shown in (7):

where n is the number of particles and f_i is the fitness value of particle i in the t-th iteration. The adaptive cooling temperature in the t-th iteration is defined as shown in (8), where the current best fitness value appears together with the average fitness value of the t-th iteration, which is defined in (9). The combined effort of the simulated annealing idea and the fitness variance of particles improved the global search ability of PSO. In all the experiments performed, REPSO was reported to be superior to LDIW-PSO in convergence speed and precision.

3.4. Dynamic Adaptive Particle Swarm Optimization (DAPSO)

DAPSO was introduced by [3] with the aim of solving the PSO premature convergence problem associated with typical multipeak, high-dimensional function optimization problems, so as to improve its global optimum and convergence speed. A dynamic adaptive strategy was introduced into the variant to adjust the inertia weight value based on the current swarm diversity and congregate degree, as well as their impact on the search performance of the swarm. The experimental results recorded showed that DAPSO was more effective compared with LDIW-PSO. The inertia weight formula that was used is represented in (10), in which the minimum and maximum inertia weight values and the current iteration number appear, together with a diversity function (11) and an adjustment function (12), both evaluated in the t-th iteration. The diversity function is built on the group fitness defined in (13) over the total number of iterations, which in turn uses the swarm size, the fitness of each particle, and the current average fitness of the swarm represented in (14).

3.5. Adaptive Particle Swarm Optimization (APSO)

This PSO variant was proposed in [5]; in it, an adaptive mutation mechanism and a dynamic inertia weight were incorporated into the conventional PSO method. These mechanisms were employed to enhance global search ability and convergence speed and to increase accuracy: the mutation mechanism affected the particle position updating formula, while the dynamic inertia weight affected the inertia weight formula shown in (15). Though APSO was not compared with LDIW-PSO, it outperformed all its competitors in all the reported experimental results. Equation (15) uses the fitness of the current best solution in the t-th iteration together with a predefined parameter, which could be set equal to the fitness of the best particle in the initial population. For the updating of a particle's position, when a particle is chosen for mutation, a Gaussian random disturbance is added as depicted in (16):

$x_{id}' = x_{id} + \eta \cdot N(0, 1)$,  (16)

where $x_{id}$ is the $d$th component of the $i$th particle, $N(0, 1)$ is a random variable with Gaussian distribution with zero mean and unit variance, and $\eta$ is a variable step size which dynamically decreases according to the fitness of the current best solution; $\eta$ is defined in each iteration according to (17).
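
The Gaussian disturbance of (16) can be sketched as below. The probability with which particles are chosen for mutation and the way the step size is supplied are assumptions of the sketch, since (17) is not reproduced here.

```python
import numpy as np

def gaussian_mutation(x, step, mutate_prob=0.1, rng=None):
    """Add the Gaussian disturbance of (16) to randomly chosen particles (sketch);
    `step` plays the role of the variable step size and is supplied by the caller."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    chosen = rng.random(x.shape[0]) < mutate_prob                    # particles picked for mutation
    x[chosen] += step * rng.standard_normal((chosen.sum(), x.shape[1]))
    return x

swarm = np.zeros((20, 30))                                           # toy swarm at the origin
print(np.abs(gaussian_mutation(swarm, step=0.5)).max())
```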

3.6. Dynamic Nonlinear and Dynamic Logistic Chaotic Map PSO (DLPSO2)

In [11], two variants were proposed to solve the premature convergence problem of PSO. In these variants, two dynamic nonlinear methods and a logistic chaotic map were used in parallel to adjust the inertia weight according to different fitness values. One of the dynamic nonlinear inertia weights is represented by (18) and (19), while the other is represented by (20) and (21). They are meant to achieve a tradeoff between exploration and exploitation. In (18)-(21), a dynamic nonlinear factor and the inertia weight appear, each bounded by its own maximum and minimum values, together with the current iteration number and the maximum iteration number.

A dynamic logistic chaotic map, given in (4), was incorporated into the inertia weight of the variant, as shown in (23), to enrich searching behaviors and avoid being trapped in local optima. In (23), a dynamic chaotic inertia weight adjustment factor appears, bounded by its own maximum and minimum values, and Lmap is the result of the logistic chaotic map. In this variant, using (19) and (23) was labeled DLPSO1, while using (21) and (23) was labeled DLPSO2.

For the purpose of achieving a balance between global exploration and local exploitation and also avoiding premature convergence, (19), (21), and (23) were mixed together to dynamically adjust the inertia weight of the variant, as shown in Algorithm 4, where f_i is the fitness value of particle i and f_avg is the average fitness value of the swarm. Experiments and comparisons showed that DLPSO2 outperformed several other well-known improved particle swarm optimization algorithms on many famous benchmark problems in all cases.

if (condition comparing f_i with f_avg)
        compute the inertia weight using (19) or (21)
else if (the complementary condition)
        compute the inertia weight using (23)
end if

3.7. Discussions

LDIW-PSO is relatively simple to implement and fast in convergence. When [4] experimentally ascertained that LDIW-PSO is prone to premature convergence, especially when solving complex multimodal optimization problems, a new area of research was opened up for improvements on inertia weight strategies in PSO, and LDIW-PSO became a popular yardstick for many other variants.

From the variants described previously, there are ample expectations that they should outperform LDIW-PSO, judging by the various additional strategies introduced into the inertia weight schemes they use. For example, CDIW-PSO introduced a chaotic optimization characteristic, REPSO combined the simulated annealing idea with the fitness variance of particles, DAPSO introduced a dynamic adaptive strategy based on a swarm diversity function, APSO introduced an adaptive mutation of the particle positions and made the inertia weight dynamic based on the best global fitness, while DLPSO2 used different formulas coupled with chaotic mapping. The general aim of remedying the problem of premature convergence was, however, not achieved by these variants; rather, they only managed to go a little further than LDIW-PSO in optimizing the test problems, because a total solution to this problem requires an algorithm to escape all possible local optima and obtain the global optimum. It is therefore possible that LDIW-PSO was subjected to settings, for example, the particle velocity limits [24], which were not appropriate for it to operate effectively.

4. Testing with Benchmark Problems

To validate the claims in this paper, six different experiments were performed to compare LDIW-PSO with six other PSO variants, namely, AIW-PSO, CDIW-PSO, REPSO, SA-PSO, DAPSO, and APSO. Each experiment, relative to the competing PSO variant, used a different set of test problems, which were also used to test LDIW-PSO. Brief descriptions of these test problems are given next; additional information on them can be found in [27–29]. The application software was developed in the Microsoft Visual C# programming language.

4.1. Benchmark Problems

Six well-studied benchmark problems from the literature were used to test the performance of LDIW-PSO, AIW-PSO, CDIW-PSO, REPSO, SA-PSO, DAPSO, and APSO. These problems were selected because combinations of them were used to validate the competing PSO variants in the respective literature referenced. The test problems are Ackley, Griewank, Rastrigin, Rosenbrock, Schaffer's f6, and Sphere.

The Ackley problem is multimodal and nonseparable. It is a widely used test problem, and it is defined in (24). The global minimum of 0 is obtainable at $x = (0, 0, \ldots, 0)$, and the number of local minima is not known:

$f(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$.  (24)

The Griewank problem is similar to the Rastrigin problem. It is continuous, multimodal, scalable, and nonseparable, with many widespread local minima regularly distributed. The complexity of the problem increases with its dimensionality. Its global minimum of 0 is obtainable at $x = (0, 0, \ldots, 0)$, and the number of local minima for arbitrary $n$ is not known, but in the two-dimensional case there are some 500 local minima. This problem is represented by

$f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$.  (25)

The Rastrigin problem represented in (26) is continuous, multimodal, scalable, and separable, with many local minima arranged in a lattice-like configuration. It is based on the Sphere problem, with the addition of cosine modulation so as to produce frequent local minima. There are about 50 local minima in the two-dimensional case, and its global minimum of 0 is obtainable at $x = (0, 0, \ldots, 0)$:

$f(x) = \sum_{i=1}^{n} \left(x_i^2 - 10\cos(2\pi x_i) + 10\right)$.  (26)

Shown in (27) is the Rosenbrock problem. It is continuous, unimodal, scalable, and nonseparable. It is a classic optimization problem, also known as the banana function, the second function of De Jong, or the extended Rosenbrock function. Its global minimum of 0, obtainable at $x = (1, 1, \ldots, 1)$, lies inside a long, narrow, parabolic-shaped valley. Though it looks simple to solve, it is, due to a saddle point, very difficult to converge to the global optimum:

$f(x) = \sum_{i=1}^{n-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2\right]$.  (27)

The Schaffer's f6 problem represented in (28) is two-dimensional, continuous, multimodal, and nonseparable, with an unknown number of local minima. Its global minimum of 0 is obtainable at $(x, y) = (0, 0)$:

$f(x, y) = 0.5 + \frac{\sin^2\left(\sqrt{x^2 + y^2}\right) - 0.5}{\left(1 + 0.001\left(x^2 + y^2\right)\right)^2}$.  (28)

The Sphere problem, also known as the first De Jong function, is continuous, convex, unimodal, scalable, and separable. It is one of the simplest benchmark problems. Its global minimum of 0 is obtainable at $x = (0, 0, \ldots, 0)$, and the problem is represented by

$f(x) = \sum_{i=1}^{n} x_i^2$.  (29)
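
For reference, the six benchmark problems can be implemented directly from the standard definitions in (24)-(29), as in the following sketch (minimization; each attains a global minimum of 0 at the points stated above).

```python
import numpy as np

def ackley(x):
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def rastrigin(x):
    return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def schaffer_f6(x):          # two-dimensional
    s = x[0]**2 + x[1]**2
    return 0.5 + (np.sin(np.sqrt(s))**2 - 0.5) / (1.0 + 0.001 * s)**2

def sphere(x):
    return np.sum(x**2)

# All six attain their global minimum of 0 at the points stated above.
print(sphere(np.zeros(30)), rastrigin(np.zeros(30)), schaffer_f6(np.zeros(2)))
```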

4.2. Parameter Settings

The limits on particle velocity can negatively affect the performance of the PSO algorithm if they are not properly set. As a result, different work has been done to determine the velocity limits of particles in order to improve the performance of PSO; research in this direction includes [4, 24, 30]. The three major methods that appear in the literature for computing the velocity clamping values $V_{max}$ and $V_{min}$ are (i) multiplying the search space range by a certain percentage $\delta$, that is, $V_{max} = \delta(X_{max} - X_{min})$ and $V_{min} = -V_{max}$; (ii) multiplying the minimum and maximum limits of the search space separately by a certain percentage $\delta$, that is, $V_{max} = \delta X_{max}$ and $V_{min} = \delta X_{min}$; and (iii) assigning the search space upper limit to $V_{max}$. It is obvious from (i) and (ii) that the parameter $\delta$ is very important. As a result, different values of $\delta$ have been used by different authors [5, 6, 13, 30] to determine velocity clamping for particles.
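
The three conventions can be written out as follows; the search-space limits and the value of δ used here are only example values.

```python
x_min, x_max = -100.0, 100.0   # example search-space limits
delta = 0.1                    # example percentage factor

# (i) percentage of the search-space range
v_max_i = delta * (x_max - x_min)
v_min_i = -v_max_i

# (ii) percentage of each search-space limit separately
v_max_ii = delta * x_max
v_min_ii = delta * x_min

# (iii) velocity limit equal to the search-space upper limit
v_max_iii = x_max
v_min_iii = -v_max_iii

print(v_max_i, v_max_ii, v_max_iii)   # 20.0 10.0 100.0
```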

In trying to substantiate the position that LDIW-PSO is not as weak or inferior as many authors have claimed it to be, an experiment was conducted to investigate the effect of the parameter $\delta$ on the performance of LDIW-PSO using the benchmark problems described previously. The results were used as a guide for setting $\delta$ in LDIW-PSO before embarking on the experimental comparisons between it and the other PSO variants described previously, to show that LDIW-PSO is superior to many of the variants that have been claimed to be better than it. The results of the experiments are listed in the Appendix. Using these results as a guide, the value of $\delta$ was set in LDIW-PSO for the various test problems as listed in Table 1; however, $\delta$ was set to 0.015 for one of the problems in Experiment 2 and to 0.25 for another in Experiments 2 and 5.

4.3. Experimental Setup

The settings for the different experiments carried out for the comparisons are described next, one after the other. In all the experiments, LDIW-PSO was subjected to the settings of its competitors as recorded in the literature. For LDIW-PSO, the inertia weight was decreased linearly from 0.9 to 0.4, the acceleration constants $c_1$ and $c_2$ were both set to 2.0, and the velocity limits were computed using the values of $\delta$ in Table 1. Table 2 shows the respective search and initialization ranges for all the algorithms; the symbol "–" means that the corresponding test problem was not used by the algorithm under which the symbol appears.

Experiment  1. The purpose of this experiment was to compare LDIW-PSO with CDIW-PSO [2]. All but one of the test problems described previously were used in this experiment. The dimension for Schaffer's f6 was 2, while it was 30 for the others. The maximum number of iterations was set to 1500 with a swarm size of 20, and the experiment was repeated 500 times. Stated in Table 3 are the goals (criteria) of success set for each of the problems.

Experiment  2. The purpose of this experiment was to compare LDIW-PSO with REPSO [7]. All but one of the test problems in Table 1 were used. The dimension for Schaffer's f6 was 2, while it was 10 for the others. The performances of the algorithms were considered at different numbers of iterations, as shown in Section 5, in line with what is recorded in the literature [7]. The swarm size used was 30, and the experiment was repeated 50 times.

Experiment  3. The purpose of this experiment was to compare LDIW-PSO with DAPSO [13]. The test problems used in this experiment were run with four different problem dimensions of 20, 30, 40, and 50. The maximum number of iterations and the swarm size were set to 3000 and 30, respectively, and the experiment was repeated 50 times.

Experiment  4. The purpose of this experiment was to compare LDIW-PSO with APSO [5]. Griewank, Rastrigin, and Rosenbrock were used as test problems, with three different problem dimensions of 10, 20, and 30. The respective maximum numbers of iterations associated with these dimensions were 1000, 1500, and 2000. The experiment was carried out with three different swarm sizes, 20, 40, and 80, for each problem dimension, and was repeated 30 times.

Experiment  5. This experiment compared LDIW-PSO with DLPSO2 [11]. All but one of the test problems were used in the experiment, with a problem dimension of 2 for two of the problems and 30 for the other three. The maximum number of iterations was set to 2000 and the swarm size to 20, and the experiment was repeated 20 times.

5. Results and Discussions

Presented in Tables 4–8 are the results obtained for all the experiments. The results for all the competing PSO variants were obtained from the respective referenced papers, and they are presented here the way they were recorded; the recording of the results for LDIW-PSO was therefore patterned after them. In each of the tables, bold values represent the best results. In the tables, the mean best fitness (solution) is a measure of the precision that the algorithm can reach within a given number of iterations, the standard deviation (Std. Dev.) is a measure of the algorithm's stability and robustness, while the success rate (SR) [31] is the rate at which an algorithm obtains an optimum fitness result within the criterion range over a given number of independent runs.
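
For clarity, the three measures can be computed from the best fitness of each independent run as in the sketch below; the run data and the success goal shown are illustrative, not values from the tables.

```python
import numpy as np

def summarize_runs(best_fitness_per_run, goal):
    """Mean best fitness, standard deviation, and success rate over independent runs."""
    runs = np.asarray(best_fitness_per_run, dtype=float)
    mean_best = runs.mean()                    # precision reached on average
    std_dev = runs.std(ddof=1)                 # stability / robustness
    success_rate = np.mean(runs <= goal)       # fraction of runs meeting the criterion
    return mean_best, std_dev, success_rate

# Illustrative data: best fitness from 5 runs against a success goal of 0.01.
print(summarize_runs([0.002, 0.008, 0.5, 0.001, 0.003], goal=0.01))
```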

Experiment  1 (comparison of LDIW-PSO with CDIW-PSO). The results in Table 4 clearly reveal a great difference in performance between LDIW-PSO and CDIW-PSO [2]. The results are compared based on the final accuracy of the averaged best solutions, the success rate (SR), and the standard deviation (Std. Dev.) of the best solutions. On all the test problems, the results indicate that LDIW-PSO obtains better optimum fitness results, showing better convergence precision. LDIW-PSO is also more stable and robust than CDIW-PSO, because its standard deviation is comparatively smaller on three of the test problems. Besides, LDIW-PSO has better global search ability and escapes local optima more easily than CDIW-PSO.

Experiment  2 (comparison of LDIW-PSO with REPSO). In Table 5, the comparison between LDIW-PSO and REPSO is based on the final accuracy of the averaged best solutions relative to the specified numbers of iterations and on the convergence speed, as recorded in [7]. From the results, REPSO appears to converge faster on Griewank and Rastrigin at the beginning but was overtaken by LDIW-PSO, which eventually converged faster and had better accuracy. On the Rosenbrock and Sphere problems, LDIW-PSO had better convergence speed and accuracy in comparison with REPSO. The symbol "—" means that the corresponding iteration number was not considered for the test problem under which the symbol appears.

Experiment  3 (comparison of LDIW-PSO with DAPSO). The results for DAPSO were obtained from [13]. The comparison with LDIW-PSO is based on the final accuracy of the respective mean best solutions across the different problem dimensions, as shown in Table 6. On all the problems and dimensions, except dimension 40 of Rastrigin, LDIW-PSO outperformed DAPSO, obtaining better fitness quality and precision. This is a clear indication that LDIW-PSO has better global search ability and is not as easily trapped in local optima as DAPSO.

Experiment  4 (comparison of LDIW-PSO with APSO). Recorded in Table 7 are the results for LDIW-PSO and APSO [5] over different swarm sizes, dimensions, and iterations. The basis for comparison is the final accuracy and quality of their mean best fitness. The two variants put up a good competition. On Griewank and Rastrigin, APSO performed better in smaller dimensions, while LDIW-PSO performed better in higher dimensions; on Rosenbrock, LDIW-PSO outperformed APSO in locating better solutions to the problem.

Experiment  5 (comparison of LDIW-PSO with DLPSO2). The results for LDIW-PSO and DLPSO2 [11] in Table 8 are compared based on the best fitness, mean best fitness, convergence speed, and standard deviation (Std. Dev.) of the best solutions. On Rastrigin, the two algorithms performed equally. However, on the other test problems, the results indicate that LDIW-PSO obtains better optimum fitness results, showing better convergence speed. LDIW-PSO is also more stable and robust than DLPSO2, because its standard deviation is comparatively smaller on all the test problems. Besides, LDIW-PSO demonstrated better global search ability and a better ability to escape local optima than DLPSO2.

6. Conclusion

Motivated by the claims of superior performance over LDIW-PSO made by some PSO variants, a number of experiments were performed in this paper to verify some of these claims empirically. First, an appropriate (approximate) percentage of the test problems' search space limits was obtained to determine the particle velocity limits for LDIW-PSO. Second, these values were used in the implementation of LDIW-PSO for some benchmark optimization problems, and the results obtained were compared with those of some PSO variants that had previously claimed superiority in performance. LDIW-PSO performed better than these variants. The performances of two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO on similar problems, with the latter showing a competitive advantage. This work has therefore shown that, with good experimental settings, LDIW-PSO performs competitively with similar variants. Previous claims of inferior performance might therefore be due to some unfavourable experimental settings. The Appendix provides further simulation results that can give useful hints for deciding the velocity threshold setting for particles in LDIW-PSO.

Appendix

Tables 9, 10, 11, 12, and 13 show the results of LDIW-PSO in optimizing some benchmark problems so as to determine a suitable value of $\delta$ for setting the velocity limits of the particles. The experiments were repeated 500 times for each of the problems. Two different swarm sizes of 20 and 30 were used for each of three different problem dimensions, 10, 30, and 50; the respective numbers of iterations used with these dimensions were 1000, 1500, and 2000. The LDIW was decreased from 0.9 to 0.4 in the course of searching for the solution to each problem [7, 10–12, 27], and the acceleration constants $c_1$ and $c_2$ were set to 2.0. In the tables, bold values represent the best mean fitness value.
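
The sweep behind Tables 9-13 can be scripted in outline as follows, assuming the earlier PSO and benchmark sketches are available in a module (hypothetically named pso_sketch here); the list of δ values swept is illustrative, and the full 500 repetitions can be reduced for a quick check.

```python
import numpy as np
from pso_sketch import inertia_weight_pso, sphere   # hypothetical module holding the earlier sketches

swarm_sizes = [20, 30]
dims_and_iters = [(10, 1000), (30, 1500), (50, 2000)]   # dimension, iteration budget
deltas = [0.015, 0.05, 0.1, 0.25, 0.5]                  # illustrative values of the percentage factor
repeats = 500                                           # as in the Appendix (reduce for a quick check)

for s in swarm_sizes:
    for dim, iters in dims_and_iters:
        for d in deltas:
            finals = [inertia_weight_pso(sphere, dim, -100.0, 100.0, swarm_size=s,
                                         max_iter=iters, delta=d, seed=r)[1]
                      for r in range(repeats)]
            print(s, dim, d, float(np.mean(finals)), float(np.std(finals, ddof=1)))
```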

Acknowledgment

Thanks are due to the College of Agricultural Science, Engineering and Sciences, University of KwaZulu-Natal, South Africa, for its support of this work through a financial grant.