Scientific Programming

Volume 2016 (2016), Article ID 2148362, 9 pages

http://dx.doi.org/10.1155/2016/2148362

## Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

^{1}College of Computer Science, Guizhou University, Guiyang 550025, China

^{2}Department of Big Data Science and Engineering, College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China

Received 24 December 2015; Accepted 23 February 2016

Academic Editor: Eduardo Rodríguez-Tello

Copyright © 2016 Kai Yang and Zhuhong Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This work investigates a bioinspired microimmune optimization algorithm for solving a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the design of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify the competitive individuals in a given population, by which high-quality individuals can obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and is promising for further applications.

#### 1. Introduction

Many real-world engineering optimization problems, such as industrial control, project management, portfolio investment, and transportation logistics, usually include stochastic parameters or random variables. Generally, they can be solved by existing intelligent optimization approaches with static sampling strategies (i.e., each candidate has the same sampling size), after being transformed into constrained expected value programming (CEVP), chance constrained programming, or probabilistic optimization models. Although CEVP is a relatively simple topic in the context of stochastic programming, it remains challenging, as it is difficult to find feasible solutions and, meanwhile, the quality of a solution depends greatly on environmental disturbance. The main concern in solving CEVP involves two aspects: (i) when stochastic probability distributions are unknown, it becomes crucial to distinguish high-quality individuals from the current population in uncertain environments, and (ii) although static sampling strategies are a usual way to handle random factors, their expensive computational cost is inevitable, and hence adaptive sampling strategies with low computational cost are desired.

When stochastic characteristics are unknown, CEVP models are usually replaced by their sample average approximation models [1, 2], and thereafter new or existing techniques can be used to find their approximate solutions. Mathematically, several researchers [3–5] probed into the relationship between CEVP models and their approximation models and acquired some valuable lower bound estimates on sample size that can be used to design adaptive sampling rules. On the other hand, intelligent optimization techniques have become popular for unconstrained expected value programming problems [6–8], in which advanced sampling techniques, for example, adaptive sampling techniques and sample allocation schemes, can effectively suppress environmental influence on the process of solution search. Unfortunately, studies on general CEVP have rarely been reported in the literature because of expected value constraints. Even so, several researchers have made great efforts to investigate new or hybrid intelligent optimization approaches for such kinds of uncertain programming problems. For example, B. Liu and Y.-K. Liu [9] proposed a hybrid intelligent approach to solve general fuzzy expected value models by combining evolutionary algorithms with neural network learning methods. Sun and Gao [10] suggested an improved differential evolution approach to solve an expected value programming problem, depending on static sampling and flabby selection.

Although immune optimization, as another popular branch, has been well studied for static or dynamic optimization problems [11, 12], it still remains open for stochastic programming problems. Some comparative works between classical intelligent approaches and immune optimization algorithms for stochastic programming have demonstrated that this branch is competitive. For example, Hsieh and You [13] proposed a two-phase immune optimization approach to solve the optimal reliability-redundancy allocation problem. Their numerical results, based on four benchmark problems, have shown that this approach is superior to the compared algorithms. Zhao et al. [14] presented a hybrid immune optimization approach to deal with chance-constrained programming, in which two operators of double cloning and double mutation were adopted to accelerate the process of evolution.

In the present work, we study two lower bound estimates on sample size theoretically, based on Hoeffding's inequalities [15, 16]. Afterwards, two efficient adaptive racing sampling approaches are designed to compute the empirical values of the stochastic objective and constraint functions. These, together with immune inspirations drawn from the clonal selection principle, are used to develop a microimmune optimization algorithm (IOA) for handling general, nonlinear, and higher-dimensional constrained expected value programming problems. This approach differs significantly from existing immune optimization approaches. On the one hand, the two lower bound estimates are developed to control the sample sizes of random variables, while a local search approach is adopted to strengthen the ability of local exploitation; on the other hand, the two adaptive racing sampling methods are utilized to dynamically determine such sample sizes in order to compute the empirical values of the objective and constraint functions at each individual. Experimental results have illustrated that IOA is an alternative tool for higher-dimensional multimodal expected value programming problems.

#### 2. Problem Statement and Preliminaries

Consider the following general single-objective nonlinear constrained expected value programming problem $(P)$:

$$\min_{x \in D}\; E[f(x,\xi)] \quad \text{s.t.}\quad E[g_i(x,\xi)] \le 0,\; i = 1,\dots,I; \qquad h_j(x) \le 0,\; j = 1,\dots,J,$$

with bounded and closed domain $D$ in $\mathbb{R}^p$, decision vector $x$ in $D$, and random vector $\xi$ in $\mathbb{R}^q$, where $E[\cdot]$ is the operator of expectation; $f(x,\xi)$ and $g_i(x,\xi)$ are the stochastic objective and constraint functions, respectively, among which at least one is nonlinear and continuous in $x$; $h_j(x)$, $j = 1,\dots,J$, are the deterministic and continuous constraint functions. If a candidate solution satisfies all the above constraints, it is called a feasible solution, and an infeasible solution otherwise. Introduce the following constraint violation function to check whether candidate $x$ is feasible:

$$G(x) = \sum_{i=1}^{I} \max\{E[g_i(x,\xi)],\, 0\} + \sum_{j=1}^{J} \max\{h_j(x),\, 0\}.$$

Obviously, $x$ is feasible only when $G(x) = 0$. If $G(x_1) < G(x_2)$, we prescribe that $x_1$ is superior to $x_2$. In order to solve CEVP, we transform the above problem into the following sample-dependent approximation model (SAM):

$$\min_{x \in D}\; \frac{1}{m}\sum_{k=1}^{m} f(x,\xi_k) \quad \text{s.t.}\quad \frac{1}{n}\sum_{k=1}^{n} g_i(x,\xi_k) \le 0,\; i = 1,\dots,I; \qquad h_j(x) \le 0,\; j = 1,\dots,J,$$

where $m$ and $n$ are the sampling sizes of $\xi$ at the point $x$ for the stochastic objective and constraint functions, respectively; $\xi_k$ is the $k$th observation. It is known that the optimal solution of problem SAM can approach that of problem $(P)$ when $m \to \infty$ and $n \to \infty$, based on the law of large numbers [17]. We say that $x$ is an empirically feasible solution for $(P)$ if the above constraints in SAM are satisfied.
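To make the SAM transformation concrete, the following Python sketch evaluates sample averages and empirical feasibility. The toy functions `f`, `g`, `h` and the Gaussian noise model are illustrative assumptions, not the paper's benchmark problems:

```python
import random

def sample_average(func, x, observations):
    """Empirical estimate of E[func(x, xi)] over a list of observations of xi."""
    return sum(func(x, xi) for xi in observations) / len(observations)

def is_empirically_feasible(x, g_funcs, h_funcs, observations):
    """SAM feasibility: sample averages of the stochastic constraints g_i
    and the deterministic constraints h_j must all be <= 0."""
    return (all(sample_average(g, x, observations) <= 0.0 for g in g_funcs)
            and all(h(x) <= 0.0 for h in h_funcs))

# Toy instance (illustrative only): xi ~ N(0, 1),
# f(x, xi) = (x - xi)^2, g(x, xi) = x - xi - 1, h(x) = -x (i.e., x >= 0).
random.seed(0)
xis = [random.gauss(0.0, 1.0) for _ in range(5000)]
f = lambda x, xi: (x - xi) ** 2
g = lambda x, xi: x - xi - 1.0
h = lambda x: -x
```

Here $x = 0.5$ is empirically feasible (both constraint averages are nonpositive), while $x = -0.1$ violates the deterministic constraint.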

In the subsequent work, two adaptive sampling schemes will be designed to estimate the empirical objective and constraint values for each individual. We here cite the following conclusions.

Theorem 1 (Hoeffding's inequality; see [15, 16]). *Let $X$ be a set, and let $P$ be a probability distribution function on $X$; let $f_1,\dots,f_L$ denote real-valued functions defined on $X$ with $f_l : X \to [a_l, b_l]$ for $l = 1,\dots,L$, where $a_l$ and $b_l$ are real numbers satisfying $a_l < b_l$. Let $x_1^{(l)},\dots,x_{m_l}^{(l)}$ be the samples of i.i.d. random variables on $X$ for $f_l$, $l = 1,\dots,L$, respectively. Then, the following inequality is true:*

$$\Pr\left\{ \bigcup_{l=1}^{L} \left\{ \left| \frac{1}{m_l}\sum_{k=1}^{m_l} f_l\bigl(x_k^{(l)}\bigr) - E[f_l] \right| \ge \varepsilon_l \right\} \right\} \le 2\sum_{l=1}^{L} \exp\!\left( \frac{-2 m_l \varepsilon_l^2}{(b_l - a_l)^2} \right).$$

Corollary 2 (see [15, 16]). *If $X_1,\dots,X_m$ are i.i.d. random variables with mean $\mu$ and $a \le X_k \le b$, $k = 1,\dots,m$, then

$$\left| \frac{1}{m}\sum_{k=1}^{m} x_k - \mu \right| \le (b - a)\sqrt{\frac{\ln(2/\eta)}{2m}}$$

with probability at least $1 - \eta$, where $x_k$ and $\eta$ denote the observation of $X_k$ and the significance level, respectively.*
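Corollary 2 is the working tool behind the adaptive sampling sizes used later: it converts a sampling size into a confidence radius, and conversely a target precision into a required sampling size. A minimal sketch with the standard Hoeffding constants (the paper's exact expressions may differ slightly):

```python
import math

def hoeffding_radius(m, a, b, eta):
    """Half-width of a (1 - eta) confidence interval around the sample
    mean of m i.i.d. observations bounded in [a, b]."""
    return (b - a) * math.sqrt(math.log(2.0 / eta) / (2.0 * m))

def hoeffding_sample_size(epsilon, a, b, eta):
    """Smallest m guaranteeing |sample mean - true mean| <= epsilon
    with probability at least 1 - eta."""
    return math.ceil((b - a) ** 2 * math.log(2.0 / eta) / (2.0 * epsilon ** 2))
```

Note the characteristic trade-off: halving the target precision `epsilon` quadruples the required sampling size, which is why racing schemes that spend samples only on competitive individuals pay off.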

#### 3. Racing Sampling Approaches

##### 3.1. Expected Value Constraint Handling

Usually, when an intelligent optimization approach with static sampling is chosen to solve the above problem, each individual is assigned the same and sufficiently large sampling size, which necessarily causes high computational complexity. Therefore, in order to ensure that each individual in a given finite population has a rational sampling size, we give in this subsection a lower bound estimate to control the constraint sampling size, based on the sample average approximation model of the above problem. Define

We next give a lower bound estimate to justify that is a subset of with probability , for which the proof can be found in the Appendix.

Lemma 3. *If there exist and such that with , one has that , provided that , where and denotes the size of .*

In (6), and are decided by the bounds of the stochastic constraint functions at the point . is the maximal sampling difference computed from the observations of the stochastic constraints. We also observe that, once and are defined, is determined by . Additionally, high-quality individuals in should usually get large sampling sizes, while inferior ones can only get small sampling sizes. This means that different individuals will gain different sampling sizes. Based on this consideration and the idea of racing ranking, we next compute the empirical value of any expected value constraint function at a given individual in , that is, . This is done by the following racing-based constraint evaluation approach (RCEA).

*Step 1. *Input parameters: initial sampling size , sampling amplitude , relaxation factor , significance level , and maximal sampling size .

*Step 2. *Set , , and ; calculate the estimate through observations.

*Step 3. *Set .

*Step 4. *Create observations, and update ; that is,

*Step 5. *Set ; if and , then go to Step .

*Step 6. *Output as the estimated value of .

In the above formulation, and are used to decide when the above algorithm terminates. Once the procedure stops, obtains its sampling size ; that is, . We note that is very small if is large. Thereby, we say that is an empirically feasible solution if, under the precondition of , the above deterministic constraints are satisfied. Further, RCEA indicates that an empirically feasible solution can acquire a large sampling size, so that is close to the expected value of .
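The RCEA steps above can be sketched as follows. Because the paper's exact stopping rule is not fully reproduced in the text, this sketch substitutes a standard Hoeffding-based racing test (stop once the confidence interval of the running average no longer straddles zero); the parameter names `n0`, `amp`, `eta`, `n_max` mirror Step 1's inputs but are our own labels:

```python
import math
import random

def rcea(g, x, sampler, n0=30, amp=10, eta=0.05, n_max=2000):
    """Racing-based constraint evaluation (sketch of RCEA, Steps 1-6):
    sample in small batches until the sign of E[g(x, xi)] is decided
    with confidence 1 - eta, or the budget n_max is exhausted."""
    obs = [g(x, sampler()) for _ in range(n0)]          # Step 2: initial estimate
    while len(obs) < n_max:
        mean = sum(obs) / len(obs)
        spread = max(obs) - min(obs)                    # empirical bound b - a
        radius = spread * math.sqrt(math.log(2.0 / eta) / (2.0 * len(obs)))
        if abs(mean) > radius:                          # sign decided: stop racing
            break
        obs.extend(g(x, sampler()) for _ in range(amp)) # Step 4: more observations
    return sum(obs) / len(obs), len(obs)                # Step 6: estimate and size

# Demo: a clearly violated constraint (mean 2) versus a borderline one (mean 0).
random.seed(1)
sampler = lambda: random.gauss(0.0, 1.0)
est_far, n_far = rcea(lambda x, xi: x + xi, 2.0, sampler)
est_zero, n_zero = rcea(lambda x, xi: x + xi, 0.0, sampler)
```

As intended, the clear-cut individual stops after few samples, while the borderline one consumes a much larger sampling size.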

##### 3.2. Objective Function Evaluation

Depending on the above RCEA and the deterministic constraints in problem , the above population is divided into two subpopulations and , where consists of the empirically feasible solutions in . We investigate another lower bound estimate to control the value of with , relying upon the sample average approximation model of the problem . Afterwards, an approach is designed to calculate the empirical objective values of the empirically feasible solutions in . To this end, introduce where and stand for the minima of the theoretical and empirical objective values of individuals in , respectively. The lower bound estimate is given below, by identifying the approximation relation between and . The proof can be found in the Appendix.

Lemma 4. *If there exist and such that with , then , provided that , where .*

Like the above constraint handling approach, we next calculate the empirical objective values of individuals in through the following racing-based objective evaluation approach (ROEA).

*Step 1. *Input parameters: and mentioned above, initial sampling size , and population .

*Step 2. *Set , , , and .

*Step 3. *Calculate the empirical objective average of observations for each individual in , that is, ; write and .

*Step 4. *Remove those elements in satisfying .

*Step 5. *Set .

*Step 6. *Update the empirical objective values for elements in through

*Step 7. *Set .

*Step 8. *If and , then return to Step ; otherwise, output all the empirical objective values of individuals in .

Through the above algorithm, those individuals in can acquire their respective empirical objective values with different sampling sizes. Those high-quality individuals can get large sampling sizes, and hence their empirical objective values can approach their theoretical objective values.
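The ROEA steps can be sketched in the same style. The paper's removal rule (Step 4) is elided in the text, so this sketch uses the standard racing criterion: drop any individual whose Hoeffding lower confidence bound exceeds the best upper bound, and keep refining the survivors. All parameter names are our own labels:

```python
import math
import random

def roea(f, population, sampler, n0=30, amp=10, eta=0.05, n_max=1000):
    """Racing-based objective evaluation (sketch of ROEA, Steps 1-8),
    minimizing E[f(x, xi)] over a population of real-valued candidates."""
    obs = {x: [f(x, sampler()) for _ in range(n0)] for x in population}
    racing = set(population)

    def interval(x):
        o = obs[x]
        mean = sum(o) / len(o)
        radius = (max(o) - min(o)) * math.sqrt(math.log(2.0 / eta) / (2.0 * len(o)))
        return mean - radius, mean + radius

    while min(len(obs[x]) for x in racing) < n_max:
        best_ub = min(interval(x)[1] for x in racing)
        # Step 4: remove individuals provably worse than the current best.
        racing = {x for x in racing if interval(x)[0] <= best_ub}
        for x in racing:                                   # Step 6: refine survivors
            obs[x].extend(f(x, sampler()) for _ in range(amp))
    return {x: sum(obs[x]) / len(obs[x]) for x in population}

# Demo: E[(x - xi)^2] = x^2 + 1 for xi ~ N(0, 1); x = 0 beats x = 3.
random.seed(2)
sampler = lambda: random.gauss(0.0, 1.0)
estimates = roea(lambda x, xi: (x - xi) ** 2, [0.0, 3.0], sampler)
```

The inferior candidate is eliminated early with a small sampling size, while the winner keeps sampling, so its empirical value closely approaches its expected value.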

#### 4. Algorithm Statement

The clonal selection principle explains how immune cells learn the pattern structures of invading pathogens. It includes many biological inspirations that can be adopted to design IOA, such as immune selection, cloning, and reproduction. Based on RCEA and ROEA above, as well as general immune inspirations, IOA can be illustrated by Figure 1. We here view the antigen Ag as problem SAM itself, while candidates from the design space are regarded as real-coded antibodies. Within one iteration of IOA in Figure 1, the current population is first divided into empirically feasible and infeasible antibody subpopulations after executing RCEA. Second, the empirically feasible antibodies compute their empirical objective values through ROEA; through proliferation they produce many more clones than the empirically infeasible antibodies. Afterwards, all the clones undergo mutation. If a parent is superior to its clones, it carries out local search; otherwise, it is replaced by its best clone. Based on Figure 1, IOA can be formulated in detail below.
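The generation flow just described can be outlined as a skeleton. Every helper here (`is_feasible`, `objective`, `mutate`, `local_search`) is a hypothetical stand-in for the operators the paper formulates in detail; only the control flow follows the description above:

```python
import random

def ioa_generation(pop, is_feasible, objective, mutate, local_search,
                   n_clones_feasible=5, n_clones_infeasible=2):
    """One IOA generation: feasible antibodies get more clones; a parent
    beaten by its best clone is replaced, otherwise it undergoes local search."""
    next_pop = []
    for parent in pop:
        n = n_clones_feasible if is_feasible(parent) else n_clones_infeasible
        clones = [mutate(parent) for _ in range(n)]       # proliferation + mutation
        best_clone = min(clones, key=objective)
        if objective(best_clone) < objective(parent):
            next_pop.append(best_clone)                   # best clone wins
        else:
            next_pop.append(local_search(parent))         # parent wins: refine locally
    return next_pop

# Demo with toy stand-ins: minimize x^2, feasibility x >= 0,
# Gaussian mutation, and contraction toward 0 as "local search".
random.seed(3)
pop = [2.0, -3.0]
obj = lambda x: x * x
new_pop = ioa_generation(pop, lambda x: x >= 0.0, obj,
                         lambda x: x + random.gauss(0.0, 0.5),
                         lambda x: 0.9 * x)
```

By construction, each slot of the new population is never worse than its parent: either a strictly better clone replaces it, or local search refines it.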