Abstract

A novel artificial immune system algorithm with social learning mechanisms (AIS-SL) is proposed in this paper. In AIS-SL, candidate antibodies are assigned to either an elitist swarm (ES) or a common swarm (CS) and are correspondingly named ES antibodies or CS antibodies. In the mutation operator, ES antibodies experience self-learning, while CS antibodies execute one of two different social learning mechanisms, that is, stochastic social learning (SSL) and heuristic social learning (HSL), to accelerate the convergence process. Moreover, a dynamic searching radius update strategy is designed to improve the solution accuracy. In the numerical simulations, five benchmark functions and a practical industrial application of proportional-integral-derivative (PID) controller tuning are selected to evaluate the performance of the proposed AIS-SL. The simulation results indicate that AIS-SL has better solution accuracy and convergence speed than the canonical opt-aiNet, IA-AIS, and AAIS-2S.

1. Introduction

Artificial immune systems (AIS) are intelligent computational models or algorithms inspired by the principles of the human immune system, with the characteristics of self-organization, learning, memory, adaptation, robustness, and scalability [1]. By imitating the immune process of the human immune system, AIS has been developed as an effective tool for scientific computing and engineering applications [2], such as function optimization [3, 4], data mining [5], pattern recognition [6, 7], anomaly detection [8, 9], the Internet of Things [10], and industrial control systems [11–14].

In the past few years, most AIS and their variations were derived from four major models or theories [1], that is, clonal selection, negative selection, danger theory, and artificial immune network. The artificial immune network model is based on the theory of immune networks, which states that immune cells act as a mutually reinforcing network constructed by the matching between paratopes and idiotopes of antibodies. Consequently, there are interactions not only between antibodies and antigens but also among the antibodies themselves, which gives rise to suppression among antibodies.

Nunes de Castro and Von Zuben proposed an early version of the artificial immune network (aiNet) for data analysis, which aims at reducing data redundancy to compress the input set representation [15]. aiNet was further developed into opt-aiNet to solve multimodal optimization problems [16]. Due to its well-designed immune operators, opt-aiNet is capable of both exploration and exploitation. Soon afterwards, Stibor and Timmis presented a closeness measure between the input data set and the compressed output data set by considering the compression quality of opt-aiNet [17]. As a revised version of opt-aiNet, dopt-aiNet [18] was proposed for function optimization in dynamic environments. Inspired by omni-optimization, Coelho and Von Zuben proposed an improved omni-aiNet [19], which is able to adjust the population size dynamically, so that high redundancy within the population is avoided. Cob-aiNet [20] designed a concentration-based evaluation method to direct the immune operators. By introducing the elitist-learning policy of PSO, aiNet-EL [21] and its enhanced version [22] were proposed. Both algorithms discriminate between elitist and nonelitist antibodies during the mutation operation, and their results indicate that the elitist-learning mechanism is effective in improving the convergence speed. In order to enhance the adaptability of AIS, an improved adaptive artificial immune system (IA-AIS) [23] was designed, where the cloning operator and mutation operator are based on affinity. To maintain the tradeoff between exploration and exploitation, an adaptive AIS algorithm with two swarms, AAIS-2S [24], was proposed. AAIS-2S separates the candidate antibodies into an elitist swarm (ES) and a common swarm (CS) by their affinity, where the antibodies experience self-learning and elitist-learning, respectively. All these achievements indicate that artificial immune networks have been studied in depth and remain of interest to researchers.

Note that in AAIS-2S, each CS antibody learns from only one ES antibody, which brings a potential risk of falling into a local optimum, because the current ES antibody may not evolve into the final global optimum, especially for complex multimodal functions. To alleviate this risk, two social learning mechanisms, that is, stochastic social learning (SSL) [25] and heuristic social learning (HSL) [26], were proposed, respectively. As an extension of that research, this paper integrates SSL and HSL into an AIS algorithm, yielding a new algorithm named AIS-SL for complex optimization problems. AIS-SL can choose either SSL or HSL as its learning strategy during mutation, so it may be subdivided into two variants, AIS-SSL and AIS-HSL. In addition, a dynamic searching radius update strategy is designed. The proposed AIS-SL is expected to quickly capture the optimal solutions of complex optimization problems. A series of comparative simulations on benchmark functions and an industrial PID controller demonstrates the effectiveness of the proposed AIS-SL in both convergence speed and solution accuracy.

The rest of this paper is organized as follows. Section 2 reviews the related theories of immune systems, including aiNet, opt-aiNet, IA-AIS, and AAIS-2S. Section 3 presents the theoretical analyses and technical details of the proposed AIS-SL with its two social learning mechanisms, that is, AIS-SSL and AIS-HSL. Section 4 compares the performance of AIS-SL with opt-aiNet, IA-AIS, and AAIS-2S through a series of numerical simulations and a practical industrial PID application. Finally, Section 5 draws some conclusions.

2. Reviews of Immune Systems

2.1. Human Immune System

In the natural environment, different kinds of pathogens, such as harmful germs and viruses, may enter our bodies and cause deadly damage. Thanks to the powerful immune system, our bodies are kept healthy and secure. Once it invades the organism, a pathogen becomes an antigen and provokes the immune response. If such an antigen is detected for the first time, which is referred to as the primary response, the immune cells propagate a mass of cloned antibodies to destroy the antigen. During this propagation, the antibodies undergo somatic hypermutation. The matching between an antibody and an antigen is measured by affinity: the higher the affinity is, the better the candidate antibody matches the specific antigen. In the immune network theories, there is also a suppression process, because immune cells can recognize each other. As the reactions continue, immune cells become mature and further differentiate into plasma cells or memory cells. When the same antigen appears again, memory cells react quickly, which is referred to as the secondary response. It is worth mentioning that the secondary response is triggered much more sharply and instantly than the primary response.

2.2. Artificial Immune System

Generally, most theoretical problems and engineering applications can be modeled as optimization problems. As a typical AIS model inspired by immune network theories, aiNet is designed to solve such problems. Corresponding to the human immune system, aiNet introduces its own concepts: the term antigen indicates the objective problem to be solved, the term antibody represents a feasible solution of the objective problem, and the term affinity is used to evaluate the quality of a feasible solution with respect to the objective problem.

In the initialization phase, candidate antibodies are randomly produced to form the population, where each individual represents one antibody in the current time epoch, and the affinity of these antibodies is evaluated. In each time epoch, every antibody is cloned into a number of offspring, and these clones, except for the parent, experience the mutation operator; only the clone with the highest affinity is selected to remain. If the current average affinity is not significantly different from that of the previous time epoch, the suppression operator is activated: among antibodies whose mutual distance is less than a suppression threshold, all but the one with the highest affinity are suppressed, and then a number of randomly generated antibodies are recruited. This iterative process is repeated until the termination condition is met.

Opt-aiNet [16] is a representative aiNet algorithm. It employs a uniform cloning operator, where each parent antibody is cloned a fixed number Nc of times, and introduces an affinity-based Gaussian mutation (AGM), where the mutation level is determined by affinity. In addition, the Euclidean distance is employed to evaluate similarity, and the suppression threshold is fixed and determined by trial and error. Please see [16] for more details.
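
For illustration, a minimal Python sketch of such an affinity-based Gaussian mutation step is given below. The exponential decay of the step size with normalized fitness follows the commonly cited opt-aiNet mutation rule; the parameter name beta and the assumption that fitness is normalized to [0, 1] are conventions of this sketch rather than quotations from the original implementation.

```python
import numpy as np

rng = np.random.default_rng()

def agm_mutate(antibody, norm_fitness, beta=100.0):
    """Affinity-based Gaussian mutation (illustrative sketch).

    antibody     : 1-D numpy array, the candidate solution.
    norm_fitness : fitness (affinity) normalized to [0, 1]; 1 = current best.
    beta         : decay control; a larger beta gives smaller mutation steps.
    """
    alpha = (1.0 / beta) * np.exp(-norm_fitness)   # higher affinity -> smaller step
    return np.asarray(antibody, dtype=float) + alpha * rng.standard_normal(np.shape(antibody))
```

In this way, antibodies that already match the antigen well are perturbed only slightly, while poor antibodies explore more widely.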

The number of clones and the suppression threshold of opt-aiNet are fixed and must be tuned for each specific problem. Moreover, the approximately linear relationship between the mutation of antibodies and their affinity impairs the solution accuracy, and the slight difference in mutation scale between antibodies reduces the convergence speed considerably. Taking these disadvantages into account, IA-AIS [23] was proposed to adapt to complex optimization problems. In IA-AIS, an affinity-based cloning operator is introduced, where antibodies with higher affinity produce more offspring, and a controlled affinity-based Gaussian mutation (CAGM) operator is designed so that antibodies with lower affinity mutate much more than those with higher affinity. In addition, the suppression threshold is adjusted dynamically in proportion to the similarity between antibodies. Please see [23] for more details.

Considering that antibodies in both opt-aiNet and IA-AIS evolve independently, which impairs exploration and decelerates the convergence process, AAIS-2S [24] was proposed to introduce a learning mechanism. In AAIS-2S, the candidate antibodies are grouped into an elitist swarm (ES) and a common swarm (CS) by their affinity. The ES antibodies experience a self-learning mutation operator, while the CS antibodies go through an elitist-learning mutation operator to accelerate the convergence. These two swarms are updated every fixed number of time epochs following a dynamic swarm update strategy, which guarantees that better CS antibodies can be promoted into ES. In addition, the searching radius is adjusted adaptively based on the distance among ES antibodies. Please see [24] for more details.

3. The Proposed Artificial Immune System with Social Learning Mechanisms

In order to improve the solution quality and the convergence speed, this paper proposes an artificial immune system with social learning mechanisms (AIS-SL) for optimization. Figure 1 gives the flowchart of the proposed AIS-SL algorithm, where the self-learning mechanism and the social learning mechanism are denoted in the red boxes, respectively. The candidate antibodies are regrouped into ES and CS in the swarm update phase. ES antibodies experience the affinity-based cloning operator and the self-learning mutation operator, where the searching radius is updated adaptively by a well-designed mechanism. Each CS antibody learns from a target antibody, which is selected randomly in AIS-SSL or selected by affinity in AIS-HSL; in other words, every CS antibody undergoes the social learning mutation operator. Hence, two social learning scenarios are considered in the mutation process, and the resulting AIS-SSL and AIS-HSL mutation operators are designed accordingly. The suppression operator is triggered if the suppression condition is met. The above process is repeated until the termination condition is satisfied.

The technical details of AIS-SL are presented as follows.

3.1. Clone

In AIS-SL, the number of clones of a parent antibody is determined by its affinity. The greater the affinity of the parent antibody is, the more offspring are reproduced, and the number of clones is a nonlinear function of the affinity of the parent antibody. The pseudocode of the cloning operator is shown in Algorithm 1; its parameters are the maximum and minimum numbers of offspring and the power factor of the control function. All antibodies, including CS antibodies and ES antibodies, execute the same cloning operator exactly once in each time epoch.

Algorithm 1: Cloning_Operator (pseudocode of the affinity-based cloning operator).
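
Since the detailed statements of Algorithm 1 are not reproduced here, the following Python sketch illustrates one way to realize the described nonlinear, affinity-based clone allocation; the min-max normalization of affinity and the rounding rule are assumptions of this sketch, not necessarily the exact control function of the paper.

```python
import numpy as np

def clone_counts(affinities, nc_min=4, nc_max=20, k=2.0):
    """Affinity-based cloning (sketch in the spirit of Algorithm 1).

    Each parent receives between nc_min and nc_max clones; the mapping from
    normalized affinity to clone count is nonlinear, controlled by the power
    factor k, so that high-affinity parents reproduce far more offspring.
    """
    aff = np.asarray(affinities, dtype=float)
    span = aff.max() - aff.min()
    norm = (aff - aff.min()) / span if span > 0 else np.ones_like(aff)
    return np.round(nc_min + (nc_max - nc_min) * norm ** k).astype(int)
```

For example, with k = 2, a parent whose normalized affinity is 0.5 receives nc_min + 0.25 * (nc_max - nc_min) clones, so the clone budget is concentrated on the best antibodies.
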
3.2. Mutation

The ES antibodies have higher affinity and act as memory cells, which react much more sharply and instantly in the secondary immune response. As a result, these ES antibodies play an important role in local search and experience a self-learning mutation operator. On the other hand, the CS antibodies play the role of global search under the guidance of ES antibodies, so all CS antibodies experience social learning to accelerate the convergence process. If the affinity of a CS antibody is less than that of the antibody selected from ES, it learns from the selected ES antibody. If its affinity is greater than that of the selected ES antibody but less than that of the best ES antibody, it learns from the best ES antibody instead. Otherwise, it executes the self-learning mutation operator. The pseudocode of the mutation operator is shown in Algorithm 2, which involves a uniform random variable, a Gaussian random variable with zero mean and unit standard deviation, the best ES antibody in affinity, and the searching radius of each antibody in the current time epoch.

Algorithm 2: Mutation_Operator (pseudocode of the self-learning and social learning mutation operator).
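
Because the statements of Algorithm 2 are not reproduced above, the sketch below mirrors only the decision logic described in the text; the concrete update expressions (the random attraction toward the target antibody and the Gaussian perturbation scaled by the searching radius) are assumed, PSO-like forms rather than the authors' exact formulas.

```python
import numpy as np

rng = np.random.default_rng()

def self_learning(x, radius):
    """Self-learning mutation: local Gaussian search around x (assumed form)."""
    return x + radius * rng.standard_normal(x.shape)

def social_learning(x, target, radius):
    """Social-learning mutation: random attraction toward a target ES antibody
    plus a Gaussian perturbation (assumed form)."""
    return x + rng.random() * (target - x) + radius * rng.standard_normal(x.shape)

def mutate(x, aff_x, radius, is_es, es_target, aff_target, es_best, aff_best):
    """Decision logic of Section 3.2 for one antibody x (a numpy array)."""
    if is_es:                      # ES antibodies only refine their neighborhood
        return self_learning(x, radius)
    if aff_x < aff_target:         # worse than the selected ES antibody
        return social_learning(x, es_target, radius)
    if aff_x < aff_best:           # better than the target but worse than the best
        return social_learning(x, es_best, radius)
    return self_learning(x, radius)
```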

In such canonical AIS as opt-aiNet and IA-AIS, the searching radius is fixed, which impairs the convergence speed and the solution accuracy: an elitist antibody can easily step over the optimum if it lies close to the optimum but its searching radius is too large, whereas a too small searching radius slows down the convergence. Thus, in AIS-SL the searching radius is updated dynamically according to whether the latest mutation improved the affinity, subject to a lower bound on the radius.

At the beginning, the searching radius is initialized to half of the suppression threshold. After the suppressor, the distance between any two antibodies is greater than the threshold, so this initial value maximizes the searching area without overlap. If the affinity of an antibody is improved after mutation, its searching radius keeps its value; otherwise, the radius is halved. However, the searching radius cannot fall below a lower bound, because a too small searching radius would severely decrease the convergence speed. Whenever the radius drops below the current lower bound, it is reset to that bound, and the bound itself is then decreased to half of its previous value; the bound, in turn, is never allowed to fall below a final minimum value. The pseudocode of the dynamic searching radius update strategy is shown in Algorithm 3.

Algorithm 3: Searching_Radius_Update (pseudocode of the dynamic searching radius update strategy).
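
The statements of Algorithm 3 are likewise not reproduced, so the following sketch encodes only the verbal rule given above; in particular, the exact bookkeeping of the lower bound (reset the radius to the bound, then halve the bound, never below its final minimum) reflects our reading of the text and is an assumption.

```python
def update_radius(radius, improved, low_bound, r_min):
    """Dynamic searching radius update (sketch in the spirit of Algorithm 3).

    radius    : current searching radius of the antibody.
    improved  : True if the latest mutation improved the antibody's affinity.
    low_bound : current lower bound of the searching radius.
    r_min     : final minimum value the lower bound may take.
    Returns the updated (radius, low_bound).
    """
    if improved:                                  # successful mutation: keep the radius
        return radius, low_bound
    radius /= 2.0                                 # unsuccessful mutation: shrink the radius
    if radius < low_bound:
        radius = low_bound                        # never search below the current bound
        low_bound = max(low_bound / 2.0, r_min)   # relax the bound, but not below r_min
    return radius, low_bound
```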

In the proposed AIS-SL, two selection mechanisms are proposed to determine which ES antibody is to be learned from: stochastic social learning (SSL) and heuristic social learning (HSL). In SSL, the ES antibody to be learned from is selected randomly, and the selection probability of every ES antibody is uniform; in other words, the selection probability is not related to the individual antibody but only to the population scale of ES. In HSL, the ES antibody to be learned from is determined by affinity, and the selection probability is nonuniform: the higher the affinity is, the greater the selection probability of the antibody. The roulette method is then employed in HSL to select the ES antibody to learn from.
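
As an illustration of the two selection mechanisms, the sketch below draws the target ES antibody either uniformly (SSL) or by affinity-based roulette selection (HSL). The affinity-proportional probability used for HSL is inferred from the roulette description and assumes nonnegative affinities; the paper's exact probability expression may differ.

```python
import numpy as np

rng = np.random.default_rng()

def select_target(es_affinities, mode="SSL"):
    """Return the index of the ES antibody a CS antibody will learn from."""
    n = len(es_affinities)
    if mode == "SSL":                        # uniform choice: probability 1/|ES| each
        return int(rng.integers(n))
    aff = np.asarray(es_affinities, dtype=float)
    p = aff / aff.sum()                      # HSL: affinity-proportional (assumed form)
    return int(rng.choice(n, p=p))           # roulette-wheel selection
```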

Assume that the optima of a multimodal function differ significantly in affinity. The higher the affinity is, the greater the probability that an antibody evolves into the global optimum, so AIS-HSL will have a faster convergence speed. However, if ES does not cover the global optimum, AIS-HSL runs the risk of falling into local optima. If the optima are similar in affinity, AIS-HSL will perform similarly to AIS-SSL. In the extreme case where ES contains only one antibody, the two algorithms are identical in effect.

3.3. Suppression

The proposed AIS-SL employs the same dynamic suppressor as IA-AIS. The suppression threshold is proportional to the similarity of the antibody population, measured by the Euclidean distance between every pair of antibodies in the current time epoch; the pseudocode is shown in Algorithm 4. After the suppression, a number of randomly generated antibodies are recruited to keep the population size unchanged.

Algorithm 4: Suppression_Operator (pseudocode of the dynamic suppression operator).
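
Since Algorithm 4 is not reproduced above, the sketch below realizes the described dynamic suppression; taking the threshold as the suppression scale times the mean pairwise Euclidean distance is an assumed interpretation of "proportional to the similarity of the antibody population."

```python
import numpy as np

def suppress(pop, affinities, scale=0.25):
    """Dynamic suppression (sketch in the spirit of Algorithm 4).

    pop        : (N, D) array of antibodies.
    affinities : length-N array of affinities.
    scale      : suppression scale; the threshold is taken here as
                 scale * mean pairwise Euclidean distance (an assumption).
    Antibodies closer than the threshold to a better antibody are removed.
    """
    pop = np.asarray(pop, dtype=float)
    aff = np.asarray(affinities, dtype=float)
    n = len(pop)
    if n < 2:
        return pop, aff
    dist = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    sigma = scale * dist[np.triu_indices(n, k=1)].mean()
    keep = np.ones(n, dtype=bool)
    for a in np.argsort(-aff):                # visit antibodies from best to worst
        if not keep[a]:
            continue
        close = (dist[a] < sigma) & keep
        close[a] = False
        keep[close] = False                   # suppress worse neighbors of a
    return pop[keep], aff[keep]
```

After this step, the recruited random antibodies simply refill the population to its original size.
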
3.4. Swarm Update

As is known, some CS antibodies may reach a higher affinity than some ES antibodies owing to the social learning. Consequently, it is necessary to update the two swarms and allow such better CS antibodies to enter ES. Thus, in the swarm update, all antibodies are sorted by their affinity in descending order; the top-ranked antibodies (their number determined by the population scale of ES) enter ES, and the others enter CS. Note that after initialization, all antibodies experience the self-learning mechanism until the suppression operator is triggered, which allows every antibody to evolve within a sufficiently small region around itself and is therefore helpful for finding the peaks of the target problem. As a result, in the first swarm update, all antibodies belong to ES, while CS is empty. The pseudocode of the swarm update mechanism is shown in Algorithm 5.

Algorithm 5: Swarm_Update (pseudocode of the swarm update mechanism).
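
Because the statements of Algorithm 5 are not reproduced, the following sketch performs the described regrouping; the exact bookkeeping (returning index arrays, rounding the ES size) is ours.

```python
import numpy as np

def swarm_update(affinities, es_fraction=0.3, first_update=False):
    """Swarm update (sketch in the spirit of Algorithm 5).

    Returns (es_idx, cs_idx): indices of the ES and CS antibodies.
    Before the first suppression (first_update=True), all antibodies go to ES.
    """
    idx = np.argsort(-np.asarray(affinities, dtype=float))   # descending affinity
    if first_update:
        return idx, idx[:0]                   # all antibodies in ES, CS empty
    n_es = max(1, int(round(es_fraction * len(idx))))
    return idx[:n_es], idx[n_es:]             # best antibodies enter ES, the rest CS
```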

4. Numerical Simulations and Results

In the numerical simulations, five benchmark functions are selected from the 2005 CEC Special Session on Real-Parameter Optimization [27] to examine the solution accuracy and the convergence speed of AIS-SL. Furthermore, as a practical industrial application, PID controller tuning is considered to evaluate the proposed AIS-SL. In addition, AIS-SL is compared with opt-aiNet [16], IA-AIS [23], and AAIS-2S [24] in terms of optimization performance.

4.1. Benchmark Functions

The five selected benchmark functions are listed as follows:
(i) Shifted Schwefel’s Problem 1.2 (unimodal);
(ii) Shifted Rosenbrock’s Function (multimodal);
(iii) Shifted Rotated Rastrigin’s Function (multimodal);
(iv) Shifted Rotated Weierstrass Function (multimodal);
(v) Schwefel’s Problem 2.13 (multimodal).

4.2. Parameter Sensitivity Analysis

There are three key parameters in the proposed AIS-SL that need to be tuned: the population scale of ES, the power factor of the cloning operator, and the suppression scale. In this subsection, the five benchmark functions in 10D are used to discuss the impact of these three parameters. For clear observation, the results are expressed on a lg (i.e., log10) scale.

Figure 2 shows the impact of the population scale of ES on the error, with the other two parameters fixed. As seen from Figures 2(a) and 2(b), the ES scale has little impact on the error for three of the five benchmark functions. For one of the multimodal functions, however, the error is much larger when the ES scale is smaller than 0.2, and the impact is even more significant for another function, where the error grows considerably when the ES scale exceeds 0.4 for SSL and 0.5 for HSL. The reason can be seen in Figure 3, which shows the impact of the ES scale on the convergence speed: the larger the ES scale is, the slower the convergence of the proposed AIS-SL. As a result, the ES scale is usually set to 0.2–0.4.

Figure 3 shows the impact of the power factor of the cloning operator on the error, with the other two parameters fixed. As seen from Figures 3(a) and 3(b), the power factor has little impact on the error for all the benchmark functions: no matter what value it takes, AIS-SSL and AIS-HSL obtain similar optimization results for each function.

Figure 4 shows the impact of the suppression scale on the error, with the other two parameters fixed. As seen from Figures 4(a) and 4(b), the suppression scale has little impact on the error for three of the benchmark functions. For one multimodal function optimized by AIS-HSL, however, the error is much larger when the suppression scale exceeds 0.3, and for another function the error increases with the suppression scale once it exceeds 0.3 for SSL and 0.2 for HSL; the error is also larger for that function when the suppression scale is below 0.2. Overall, the suppression scale is usually set to 0.2–0.3.

4.3. Parameter Settings

For the sake of fairness, the initial population size of all five algorithms is 100, the maximum number of time epochs is 1000 for 2D and 5000 for 10D, and all the simulations are repeated for 50 trials. In the cloning operator, the number of clones for opt-aiNet is 20, while the maximum and minimum numbers of clones are 20 and 4 for the other algorithms, respectively. Based on the parameter sensitivity analysis, all the other parameters of the proposed AIS-SL are listed in Table 1. Please see [16, 23, 24] for the remaining parameters of the other three algorithms.

4.4. Numerical Simulation Results

Tables 2 and 3 present the numerical simulation results for the five benchmark functions, including the best, worst, mean, and standard deviation (Std) of the error in 2D and 10D, respectively, where the best results are shown in bold. As seen from the results, both AIS-SSL and AIS-HSL perform very well. In the 2D simulations especially, the proposed AIS-SSL and AIS-HSL can both capture the global optimum for every benchmark function in every trial. For 10D, although AIS-SSL and AIS-HSL cannot reach the global optimum in every trial, they obtain solutions that are much better than those of opt-aiNet, IA-AIS, and AAIS-2S. It is obvious that the proposed AIS-SL yields much more accurate solutions than opt-aiNet, IA-AIS, and AAIS-2S. Further, comparing the results of AIS-SSL and AIS-HSL, they perform identically on the 2D problems, while for 10D it is hard to tell which is better.

Figures 5–9 show the average convergence processes for the five benchmark functions, respectively. For clear distinction, the results are expressed on a lg (i.e., log10) scale. In every subfigure of Figures 5–9, it is obvious that the proposed AIS-SSL and AIS-HSL have a much faster convergence speed than opt-aiNet, IA-AIS, and AAIS-2S. For example, in 2D, AIS-SSL and AIS-HSL can both capture the optima within about 100, 900, 100, 300, and 50 time epochs for the five benchmark functions, respectively, while the curves of the other algorithms descend much more slowly.

4.5. An Industrial Application in PID Controller Design

Proportional-integral-derivative (PID) controllers are among the most widely used process control techniques. Because PID controllers have a simple structure and robust performance, they are employed in most control systems. In a general PID control system, as shown in Figure 10, there are three components, that is, the proportional, derivative, and integral components, whose gain factors K_p, K_d, and K_i have a great effect on the performance of the control system: K_p controls the response speed of the system, K_d controls the dynamic performance, and K_i controls the steady-state error. As a result, the design of a PID controller amounts to selecting an optimal parameter set (K_p, K_i, K_d).

To optimize the performance of a PID-controlled system, four error indices are selected to be minimized, that is, the integral of the absolute magnitude of the error (IAE), the integral of the square of the error (ISE), the integral of time multiplied by the absolute error (ITAE), and the mean of the square of the error (MSE). These indices are formulated as

\mathrm{IAE} = \int_0^{T} |e(t)|\,dt, \quad \mathrm{ISE} = \int_0^{T} e^2(t)\,dt, \quad \mathrm{ITAE} = \int_0^{T} t\,|e(t)|\,dt, \quad \mathrm{MSE} = \frac{1}{T}\int_0^{T} e^2(t)\,dt,

where T is the integration time, normally no more than the settling time of the desired system, and e(t) is the error between the input and the output.
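
The four indices can be approximated from a sampled error signal as in the following sketch; the rectangular (zero-order) numerical integration with step dt is a choice of this sketch, not of the paper.

```python
import numpy as np

def error_indices(e, dt):
    """Discrete approximations of IAE, ISE, ITAE, and MSE.

    e  : sampled error signal e(t_k) between the set point and the output.
    dt : sampling period, so that t_k = k * dt covers the evaluation horizon.
    """
    e = np.asarray(e, dtype=float)
    t = np.arange(len(e)) * dt
    iae = np.sum(np.abs(e)) * dt            # integral of |e(t)|
    ise = np.sum(e ** 2) * dt               # integral of e(t)^2
    itae = np.sum(t * np.abs(e)) * dt       # time-weighted |e(t)|
    mse = np.mean(e ** 2)                   # mean squared error
    return iae, ise, itae, mse
```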

Assume that the transfer function of a certain controlled process is given, together with the transfer function of the embedded PID controller defined by the three gain factors K_p, K_i, and K_d.

Once the controller parameters are determined, the corresponding step response curve of the PID-controlled system can be computed. Hence, analytical methods or intelligent optimization methods can be used to calculate the controller parameters according to the selected error index. For comparison, in this application, three existing optimization methods (opt-aiNet, IA-AIS, and AAIS-2S) and the proposed AIS-SL algorithms (AIS-SSL and AIS-HSL) are used to search for the best PID controller parameters. The corresponding parameters are set to the same values as in Section 4.3 and Table 1. For further analysis, the best solutions obtained by the proposed AIS-SSL and AIS-HSL are given in Table 4.
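
To make the optimization loop concrete, the sketch below shows how the affinity of one candidate parameter set (K_p, K_i, K_d) could be evaluated by simulating the closed-loop unit-step response and returning the negative ITAE; the first-order plant used here is purely hypothetical and is not the process model considered in the paper.

```python
def evaluate_pid(kp, ki, kd, t_end=10.0, dt=0.001, gain=1.0, tau=1.0):
    """Affinity of a candidate PID controller: negative ITAE of the unit-step
    response of a HYPOTHETICAL first-order plant G(s) = gain / (tau*s + 1)."""
    n = int(t_end / dt)
    y = 0.0          # plant output
    integ = 0.0      # accumulated integral of the error
    prev_e = 1.0     # error at t = 0 for a unit step input
    itae = 0.0
    for k in range(n):
        e = 1.0 - y                            # tracking error
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv   # PID control law
        y += dt * (gain * u - y) / tau         # Euler step of the first-order plant
        itae += (k * dt) * abs(e) * dt         # accumulate the ITAE index
        prev_e = e
    return -itae                               # higher affinity = smaller ITAE
```

Any of the compared immune algorithms can then treat (kp, ki, kd) as a three-dimensional antibody and maximize this affinity.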

Correspondingly, the step response curves of the PID control system tuned by opt-aiNet, IA-AIS, AAIS-2S, AIS-SSL, and AIS-HSL are presented in Figure 11. It is obvious that AIS-SSL and AIS-HSL give the shortest settling time and the smallest overshoot and obtain the smallest values of all four indices in comparison with the other three methods. In particular, the PID controller tuned by the proposed AIS-SSL or AIS-HSL obtains an ideal step response curve when ITAE is used, because both overshoots are less than 5%, as seen from the red and black plots in Figure 11(c). In Figure 11(c), AIS-SSL obtains an overshoot of only about 4% and a settling time of 5.9 s, while AIS-HSL obtains an overshoot of about 5% and a settling time of 5.8 s. In Figures 11(a), 11(b), and 11(d), AIS-SSL and AIS-HSL have similar performance, as their step response curves approximately overlap; the overshoots are 10.97%, 10.99%, and 10.23%, and the settling times are 5.2 s, 5.4 s, and 5.2 s, respectively.

In a more detailed comparison, AIS-SL improves the overshoot and settling time of AAIS-2S by 10.90% and 26.76% for IAE and by 25.19% and 27.03% for ISE. Furthermore, for ITAE, AIS-SSL (or AIS-HSL) reduces the overshoot and the settling time of opt-aiNet by 28.09% (or 27.46%) and 35.87% (or 36.96%), respectively. For MSE, compared with IA-AIS, AIS-SL improves the overshoot and the settling time by 12.38% and 23.53%. These results demonstrate that the proposed AIS-SL is more effective in tuning a PID controller.

5. Conclusions

This paper proposes an artificial immune system algorithm with social learning mechanisms (AIS-SL) for complex optimization problems. Considering that simple elitist-learning carries a high risk of falling into local optima, the proposed AIS-SL adopts two social learning mechanisms, that is, stochastic social learning (SSL) and heuristic social learning (HSL). In addition, a dynamic searching radius update strategy is proposed to improve the solution accuracy. In the numerical simulations, the performance of the proposed AIS-SL is compared with opt-aiNet, IA-AIS, and AAIS-2S on five benchmark functions and a practical application of PID controller tuning. According to the simulation results, both AIS-SSL and AIS-HSL obtain the global optimum or desired solutions more quickly and more accurately than opt-aiNet, IA-AIS, and AAIS-2S and are more effective in the industrial PID application.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the National Science Foundation of China (no. 61471122) and the Science and Technology Program of Huizhou (nos. 2013B020015006 and 2014B020004025). It is also funded by the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions and the Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology (CICAEET). In addition, the authors would like to thank Professor Zongyuan Mao of South China University of Technology for his earlier help in leading the authors to study artificial immune systems and for his professional guidance in industrial applications.