Science and Technology of Nuclear Installations
Volume 2014, Article ID 286826, 9 pages
http://dx.doi.org/10.1155/2014/286826
Research Article

PSO Based Optimization of Testing and Maintenance Cost in NPPs

1Software Development Center, State Nuclear Power Technology Corporation, Beijing 102209, China
2School of Nuclear Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

Received 11 July 2014; Revised 13 November 2014; Accepted 13 November 2014; Published 9 December 2014

Academic Editor: Alejandro Clausse

Copyright © 2014 Qiang Chou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Testing and maintenance activities of safety equipment in Nuclear Power Plants (NPPs) have drawn much attention with respect to risk and cost control. These activities are implemented in compliance with technical specification and maintenance requirements. Technical specification and maintenance-related parameters, such as the allowed outage time (AOT) and the maintenance period and duration, are associated with both the plant risk level and the operating cost, which need to be minimized. The problem can be formulated as a constrained multiobjective optimization model of a kind widely encountered in engineering. Particle swarm optimization (PSO) has proved its capability to solve this kind of problem. In this paper, we adopt PSO as an optimizer for the multiobjective optimization problem, iteratively improving candidate solutions with regard to a given measure of quality. Numerical results demonstrate the efficiency of the proposed algorithm.

1. Introduction

Improving the availability of safety-related systems has drawn much attention in Nuclear Power Plants (NPPs). One way to increase the availability of these systems is to improve the availability of the equipment that constitutes them, and NPPs therefore pursue more efficient testing and maintenance activities to control both risk and cost. Safety-critical equipment normally remains on standby until an accident occurs that requires the safety-related systems to prevent or mitigate the accident process. To keep safety-related systems at a high level of availability and safety, regular testing and maintenance activities are implemented. An efficient testing and maintenance strategy can improve the availability performance of the systems, but it also incurs considerable expenditure. Therefore, both risk control and cost-effectiveness have drawn much attention in NPPs [1, 2].

Technical specifications define the limits and conditions for operating NPPs; they can be seen as a set of safety rules and criteria required as part of the safety analysis report of each NPP. Both technical specifications and maintenance activities are associated with risk control and hence with the availability of safety-related systems. The resources devoted to satisfying these rules and criteria naturally enter into an optimization problem: using a limited expenditure to keep safety-critical equipment at a high level of availability is a constrained multiobjective optimization problem in which the cost or burden (number of tests conducted, their duration, the incurred cost, etc.) is to be minimized while the unavailability of the safety-critical equipment is constrained to a given level.

Several researchers have made notable contributions in this area. References [3, 4] presented a constrained multiobjective optimization model solved with a genetic algorithm (GA); reference [5] first presented PSA-based techniques to solve a risk-cost maintenance and testing model of an NPP using a GA; reference [6] put forward a multiobjective approach to regulating NPPs; reference [7] presented a fuzzy-genetic approach to optimizing the test intervals of safety systems at an NPP under parameter uncertainty. In this paper, we put forward PSO to solve the constrained multiobjective optimization problem describing testing and maintenance activities; to our knowledge, this is the first application of PSO to this problem in NPPs. PSO is a heuristic algorithm that obtains a solution by iteratively improving candidate solutions with regard to a given measure of quality. Numerical results demonstrate the effectiveness of the PSO method.

The plan of this paper is the following: Section 2 presents the unavailability and cost models of critical systems/components of NPP; Section 3 gives the multiobjective problem model; Section 4 reviews PSO method; Section 5 presents a case study; finally, Section 6 draws a short conclusion.

2. System Risk and Cost Function

2.1. System Unavailability Model

As to nuclear facilities, system unavailability arises from three types of contributions: component unavailability, common-cause failures, and human errors. In this paper, we consider only the component unavailability caused by random failure and by testing and maintenance activities, which are functions of the optimization variables such as the test interval, test duration, maintenance period, and allowed outage time. The system unavailability is often modeled by a fault tree using the rare-event approximation as follows [4]:
$$U_S(\mathbf{x}) \approx \sum_{j=1}^{N}\ \prod_{i \in MCS_j} u_{ij}(\mathbf{x}), \tag{1}$$
where $\mathbf{x}$ is the decision variable vector, the sum runs over the $N$ minimal cut sets generated from the considered system structure function, the product runs over the basic events belonging to the corresponding minimal cut set (MCS), and $u_{ij}$ represents the unavailability of basic event $i$ contained in MCS $j$.

The unavailability expressions for basic events caused by random failure are written as [4]:
$$u(t) = \rho + \lambda t, \tag{2}$$
$$\bar{u} = \rho + \tfrac{1}{2}\lambda T. \tag{3}$$

Equation (2) is the time-dependent unavailability evaluated at time $t$, where $\rho$ denotes the per-demand failure probability and $\lambda$ represents the standby failure rate. Equation (3) is the average of the time-dependent unavailability over a given time span $T$.
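As a quick illustration, the average unavailability over a test interval can be evaluated directly; the sketch below assumes the standard first-order forms u(t) = ρ + λt and ū = ρ + λT/2, with purely illustrative parameter values.

```python
def unavailability(t, rho, lam):
    """Time-dependent unavailability of a standby component:
    per-demand failure probability rho plus standby random failures
    accumulating at rate lam (first-order approximation)."""
    return rho + lam * t

def average_unavailability(T, rho, lam):
    """Average of rho + lam*t over a test interval of length T."""
    return rho + lam * T / 2.0

# Illustrative values: rho = 1e-3 per demand, lam = 1e-5 per hour,
# monthly test interval T = 720 h.
u_avg = average_unavailability(720.0, 1e-3, 1e-5)
print(u_avg)  # ≈ 0.0046
```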

To reflect the effects of age, preventive maintenance, and working conditions, an averaged standby failure rate has been developed [8, 9]:

Note that (4) is applicable for the proportional age setback (PAS) model, and (5) is used for the proportional age reduction (PAR) model. The meanings of the parameters involved in (4) and (5) are listed in Table 1.

Table 1: Meanings of the parameters used in (4)-(5).

Therefore, one can apply (4) or (5) in (2) and (3) to account for random failure contributions, considering the effects of age, preventive maintenance, and working conditions.

Next, let us consider the models accounting for the effects of testing and maintenance activities. References [9, 10] developed three expressions to characterize such effects:

The meanings of notations used in (6) are listed in Table 2.

Table 2: Meanings of the notations in (6).

Given a test interval $T$ and a preventive maintenance period $M$, the testing, maintenance, and repair downtime contributions can be calculated as follows:

Note that, in (9), the unavailability factor can be calculated from (2) by substituting the appropriate time argument.

Additionally, the repair downtime is often restricted by the allowed outage time (AOT) in the standard PRA and is often calculated by the following relationship, where $d$ represents the allowed downtime and $1/\mu$ is the mean time to repair.
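A sketch of how such downtime contributions might be evaluated, assuming the first-order forms commonly used in this family of models (test contribution t/T, preventive maintenance contribution m/M, and repair contribution λ·d with the downtime capped by the AOT); the function name and the cap rule are illustrative assumptions, not the paper's exact expressions.

```python
def downtime_contributions(t_test, T, m_pm, M, lam, aot, mttr):
    """Assumed first-order unavailability contributions:
    - testing: fraction of time the component is under test,
    - preventive maintenance: fraction of time under PM,
    - repair: expected failures per hour times mean downtime,
      with the downtime capped by the allowed outage time."""
    u_test = t_test / T
    u_pm = m_pm / M
    d = min(aot, mttr)        # illustrative cap of downtime by the AOT
    u_repair = lam * d
    return u_test, u_pm, u_repair

# Illustrative values: 3 h test each 720 h, 8 h PM each 4380 h,
# lam = 1e-5 per hour, AOT = 24 h, MTTR = 20 h.
u_t, u_m, u_r = downtime_contributions(3.0, 720.0, 8.0, 4380.0, 1e-5, 24.0, 20.0)
```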

2.2. System Cost Function

As to each component, the cost expressions due to implementing testing and maintenance activities are often expressed as follows [8–10]:

The meanings of the notations used in (11) are listed in Table 3.

Table 3: Meanings of the notations in (11).

Then the system cost model is easily formulated by summing the corresponding cost contributions of the relevant components as follows:
$$C_S(\mathbf{x}) = \sum_{i=1}^{N_c} C_i(\mathbf{x}), \tag{12}$$
where $N_c$ represents the total number of components in the considered system. Together with the unavailability model, this leads naturally to a multiobjective optimization problem.
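The component-wise summation can be sketched as follows; the per-component cost structure (tests per year × duration × hourly rate, and similarly for PM and repair) is a hypothetical illustration of the hourly-rate costs in Table 5, not the paper's exact expressions in (11).

```python
HOURS_PER_YEAR = 8760.0

def component_cost(T, t_test, M, m_pm, lam, d, c_test, c_pm, c_rep):
    """Yearly testing, preventive-maintenance, and repair costs for one
    component (hypothetical hourly-rate cost structure)."""
    cost_test = (HOURS_PER_YEAR / T) * t_test * c_test  # tests/yr * duration * rate
    cost_pm = (HOURS_PER_YEAR / M) * m_pm * c_pm        # PMs/yr * duration * rate
    cost_rep = HOURS_PER_YEAR * lam * d * c_rep         # expected repairs * downtime * rate
    return cost_test + cost_pm + cost_rep

def system_cost(components):
    """System cost: sum of the components' cost contributions."""
    return sum(component_cost(**c) for c in components)
```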

2.3. Constraints

The presence of constraints significantly affects the performance of an optimization algorithm, including evolutionary search methods. Satisfying constraints is often a difficult problem in itself. A number of approaches have been proposed to impose constraints, including rejection of infeasible solutions, penalty functions and their variants, repair methods, and the use of decoders; a comprehensive review of constraint handling methods is provided by Michalewicz [11]. All of these methods have limited success, as they are problem dependent and require a number of additional inputs. When the constraints cannot all be simultaneously satisfied, the problem is often deemed to admit no solution; the number of constraints violated and the extent of each violation must then be considered in order to relax the preference constraints.

Generally speaking, constraints can be imposed over (1) the objective functions and (2) the values the decision variables in the vector can take. In our first case, we apply a constraint over one of the two possible objective functions, the risk or the cost function, which then acts as an implicit restriction on the other. For example, if the selected objective function to be minimized is the risk, then the constraint is a restriction on the maximum allowed value of the corresponding cost. In our second case, we handle constraints directly over the values the decision variables can take, that is, the technical specification and maintenance (TS&M) parameters; these are referred to as explicit constraints. Examples of this type of constraint in optimizing TS&M include limiting the surveillance test intervals (STIs), AOTs, and the preventive maintenance (PM) period to typified values (an hour, a day, a month, or any other realistic period in the plant) for practical planning reasons, instead of admitting purely mathematical solutions that are often impractical.
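One common way to realize the implicit constraint described above is an exterior penalty function; the sketch below (the penalty weight is an arbitrary illustrative choice) minimizes cost while penalizing any excess unavailability over the limit.

```python
def penalized_cost(cost, unavail, u_limit, weight=1e6):
    """Exterior penalty for the implicit constraint U(x) <= u_limit:
    feasible solutions keep their cost; infeasible ones are penalized
    in proportion to the constraint violation."""
    violation = max(0.0, unavail - u_limit)
    return cost + weight * violation

print(penalized_cost(1000.0, 4e-3, 5e-3))  # 1000.0 (feasible, untouched)
print(penalized_cost(1000.0, 6e-3, 5e-3))  # ≈ 2000.0 (infeasible, pushed up)
```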

3. Multiobjective Optimization Model

A general multiobjective optimization problem has an $n$-dimensional decision variable vector and $m$ optimization objectives and can be expressed as follows:
$$\min_{\mathbf{x}\in X} \mathbf{F}(\mathbf{x}) = \left(f_1(\mathbf{x}),\ldots,f_m(\mathbf{x})\right)$$
$$\text{s.t.}\quad g_i(\mathbf{x}) \le 0,\ i=1,\ldots,p;\qquad h_j(\mathbf{x}) = 0,\ j=1,\ldots,q;\qquad \mathbf{x}_l \le \mathbf{x} \le \mathbf{x}_u, \tag{13}$$
where $X$ is the $n$-dimensional decision space and $\mathbf{F}$ maps the decision space to the $m$-dimensional objective space. The $g_i$ represent inequality constraints, the $h_j$ represent equality constraints, and $\mathbf{x}_l$ and $\mathbf{x}_u$ denote the lower and upper boundaries of the decision variables, respectively.

The specific multiobjective optimization model for optimizing testing and maintenance activities at an NPP can be expressed as follows:
$$\min_{\mathbf{x}} \left(U_S(\mathbf{x}),\ C_S(\mathbf{x})\right) \quad \text{s.t.}\quad U_S(\mathbf{x}) \le U_0, \tag{14}$$
where $U_0$ is a given limit on the unavailability level. In this paper we use PSO-based techniques to solve the minimization problem described by (14).

Multiobjective optimization has been applied in many fields of science, including engineering, economics, and logistics, where optimal decisions must be taken in the presence of tradeoffs between two or more conflicting objectives. For a nontrivial multiobjective optimization problem, no single solution simultaneously optimizes every objective. When one deals with multiobjective optimization problems with conflicting optimization criteria, there is not a single solution but a set of alternative solutions [12]. None of these solutions can be said to be better than the others with respect to all objectives; since none of them is dominated by the others, they are called nondominated solutions. The curve formed by connecting these solutions is known as the Pareto optimal front, and the solutions that lie on this curve are called Pareto optimal solutions. Classical optimization methods for multiobjective optimization suggest converting it to a single-objective optimization problem by emphasizing one particular Pareto-optimal solution at a time; this is their disadvantage, because the whole set of Pareto-optimal solutions is often required. Recently, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed [13–15], their appeal being the ability to obtain multiple Pareto-optimal solutions in a single simulation run. Among these algorithms, the genetic algorithm (GA) is well established and popular for multiobjective optimization problems; GAs are adaptive methods that can be used in search and optimization problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EAs), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.
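The nondominated-solution concept can be made concrete with a small dominance check (minimization of all objectives is assumed here):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (cost, unavailability) pairs: the third point is dominated by the first.
front = nondominated([(1.0, 2.0), (2.0, 1.0), (2.0, 2.0)])
print(front)  # [(1.0, 2.0), (2.0, 1.0)]
```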
Particle swarm optimization (PSO) is one type of evolutionary algorithm and shares many similarities with evolutionary computation techniques such as GAs: the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation; instead, its particles retain a memory of their historical optimal positions. PSO is therefore simple, flexible, and easy to operate compared with GA. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Like many other EAs, such as GA and Ant Colony Optimization (ACO), PSO can find Pareto optimal solutions and has a chance of finding near-global solutions, differing only in the probability and reliability of finding them. The details of PSO are introduced later on.

4. The Multiobjective Optimization Algorithm Based on PSO

4.1. Particle Swarm Optimization

Particle swarm optimization (PSO) has been successfully used to optimize nonlinear functions, combinatorial optimization problems, and multiobjective problems [16, 17] because of its simplicity, flexibility, ease of operation, and fast convergence. PSO is originally attributed to Kennedy and Eberhart [18, 19] and was first intended for simulating social behaviour [20], as a stylized representation of the movement of organisms in a bird flock or fish school; the algorithm was then simplified and observed to perform optimization. The book by Kennedy and Eberhart [21] describes many philosophical aspects of PSO and swarm intelligence, and an extensive survey of PSO applications is given by Poli [22]. The basic idea of PSO is to maintain a population of candidate solutions, here dubbed particles, and to move these particles around in the search space according to simple mathematical formulas over each particle's position and velocity. Each particle's movement is influenced by its local best known position and is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions. However, like many other metaheuristic methods, PSO does not guarantee that an optimal solution is ever found. PSO does not use the gradient of the problem being optimized and can therefore also be used on optimization problems that are partially irregular, noisy, time-varying, and so forth.

Let $n$ be the number of particles in the swarm, each having a position $\mathbf{x}_i$ in the search space and a velocity $\mathbf{v}_i$. Let $\mathbf{p}_i$ be the best known position of particle $i$, and let $\mathbf{p}_g$ be the best known position of the entire swarm, that is, the optimal position found so far in the search history. Particles update their velocity and position by the following formulas:
$$v_{id}^{k+1} = v_{id}^{k} + c_1 r_1 \left(p_{id} - x_{id}^{k}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{k}\right),$$
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}, \tag{15}$$
where $d$ denotes the $d$th dimension of the particles, $k$ is the iteration number, and $c_1$, $c_2$ are the acceleration constants, whose values are often taken in $[0, 4]$. $r_1$ and $r_2$ are two independent, identically distributed random numbers in $[0, 1]$. Generally, $v_{id}$ is kept within the range $[-v_{\max}, v_{\max}]$. The first part of formula (15), the particle velocity of the last iteration, is the momentum part; the second part is the cognitive part; and the third part is the social part. A basic PSO algorithm is shown in Algorithm 1.
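A minimal, self-contained sketch of the basic PSO update on a toy sphere function; the swarm size, iteration count, acceleration constants, and clamping rule are illustrative choices, not the paper's settings.

```python
import random

def pso(f, bounds, n_particles=30, iters=100, c1=2.0, c2=2.0, seed=0):
    """Basic PSO (no inertia weight or constriction); velocities are
    clamped to the width of each variable's range."""
    rng = random.Random(seed)
    dim = len(bounds)
    vmax = [hi - lo for lo, hi in bounds]
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                      # personal best positions
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] += (c1 * r1 * (pbest[i][d] - x[i][d])
                            + c2 * r2 * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax[d], min(vmax[d], v[i][d]))
                x[i][d] += v[i][d]
                lo, hi = bounds[d]
                x[i][d] = max(lo, min(hi, x[i][d]))  # keep inside the box
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

# Toy usage: minimize the 2-D sphere function over [-5, 5]^2.
best, val = pso(lambda p: sum(z * z for z in p), [(-5.0, 5.0), (-5.0, 5.0)])
```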

Algorithm 1: Basic PSO [18].

The summary flow chart of the basic PSO is shown in Figure 1.

Figure 1: PSO algorithm procedure.

The basic PSO described above has a small number of parameters that need to be fixed. One parameter is the size of the population. This is often set empirically on the basis of the dimensionality and perceived difficulty of a problem. Values in the range 20–50 are quite common.

There are also many PSO variants based on the basic PSO, such as the PSO with inertia weight [19] and the PSO with constriction coefficients [23]. The PSO with inertia weight is motivated by the desire to better control the scope of the search, reduce the importance of $v_{\max}$, and perhaps eliminate it altogether. The PSO with constriction coefficients applies a form of damping to the dynamics of a particle. These variant PSOs are applicable in different situations. In this paper, we consider only the basic PSO version to verify the validity of PSO for minimizing the testing and maintenance cost in an NPP.

4.2. The Proposed Multiobjective Optimization Algorithm

The basic idea of the multiobjective optimization algorithm based on PSO in this paper is that, by repeatedly splitting and merging the dominated and nondominated sets, a better balance between algorithm efficiency and accuracy can be reached. This is based on the idea of fitness dominance, similar to the idea mentioned in [23–25]. Let the initial population size be $n$. Let $Q$ be a nondominated subset of the population of size $n_1$, and let $D$ be a dominated subset of size $n_2$ ($n_1 + n_2 = n$): for any element of $D$, there exists at least one element of $Q$ that dominates it. In each iteration, only the elements in $Q$ are updated and then compared with the elements in $D$ according to the fitness dominance rules. The dynamic switching strategy can be described as follows: for any element of $Q$, if there exists an element of $D$ that dominates it, the positions of these two elements are switched. With this preparation, we are ready to describe the multiobjective optimization algorithm based on the basic PSO, abbreviated MOBPSO (see Algorithm 2), in which each dimension of the decision vector is subject to its own constraints.
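The split-and-switch bookkeeping described above can be sketched on objective vectors as follows (minimization assumed); this is an illustration of the strategy only, not the full MOBPSO of Algorithm 2.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def split(pop):
    """Partition a population of objective vectors into the nondominated
    subset Q and the dominated subset D."""
    q = [p for p in pop if not any(dominates(r, p) for r in pop if r is not p)]
    d = [p for p in pop if p not in q]
    return q, d

def dynamic_switch(q, d):
    """Dynamic switching: if an element of D dominates an element of Q
    (e.g. after Q's particles have moved), swap their positions."""
    for i, p in enumerate(q):
        for j, r in enumerate(d):
            if dominates(r, p):
                q[i], d[j] = r, p
                break
    return q, d
```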

Algorithm 2: MOBPSO.

5. Case Study

In this section, the high-pressure injection system (HPIS) is considered as a case study; its simplified structure diagram is shown in Figure 2.

Figure 2: HPIS system of a PWR.

The system consists of 7 valves and 3 pumps. Its function is to draw water from the refueling water storage tank (RWST) and discharge it into the cold legs of the reactor cooling system through either of the two inlets, A or B. The component reliability parameters are listed in Table 4.

Table 4: Components reliability parameters.

The components cost information is shown in Table 5.

Table 5: Components cost information (RMB/h).

The constraints on the test intervals are listed in Table 6.

Table 6: Constraints on test intervals (TI).

The constraints on components preventive maintenance intervals are listed in Table 7.

Table 7: Constraint on preventive maintenance intervals.

The decision variable vector is chosen to comprise the test intervals, the preventive maintenance intervals, and the allowed outage times for the valves and pumps. The allowed outage times for the valves and for the pumps are each restricted to a permitted range.

Our first case study encompasses two different single-objective optimization problems: optimizing the risk function with the cost function as an implicit constraint, or vice versa.

Firstly, we choose the system yearly cost as the objective function and treat the system unavailability as a constraint. The customized basic PSO described in the previous section is used as the optimizer. The optimization results are shown in Figure 3, from which we can see that the mean fitness (cost) of the particles decreases gradually as the iteration number increases.

Figure 3: Convergence evolution versus cost function.

Then, we choose the system yearly unavailability as the objective and treat the system yearly cost as a constraint. The optimization results are shown in Figure 4; the mean unavailability likewise decreases as the iteration number increases.

Figure 4: Convergence evolution versus risk function.

The optimized results are shown in Table 8.

Table 8: Optimized results.

As observed in Figures 3 and 4, the objective functions converge as the iteration number increases, and the PSO optimizer finally delivers a valid optimized variable vector. In the first case, the optimized results keep the considered system at nearly the same risk level while greatly reducing the cost; conversely, the optimized solutions can notably decrease the system risk level while remaining at almost the same expenditure.

In the second case, we treat the model as a multiobjective optimization problem and obtain the nondominated solutions with the MOBPSO algorithm; Figure 5 shows the obtained nondominated solutions.

Figure 5: Obtained nondominated solutions with MOBPSO.

6. Conclusions

One important aspect of the nuclear industry is improving the availability of safety-related equipment at NPPs so as to achieve high safety at low cost. In this paper, a multiobjective optimization algorithm based on PSO has been applied to the constrained multiobjective optimization of the testing and maintenance-related parameters of safety-related equipment. The numerical results indicate that PSO is a powerful optimization method for finding NPP configurations with minimal cost and unavailability. Based on MOBPSO, we also obtained the nondominated solution set. The results verify that PSO is capable of finding nondominated solutions of a constrained multiobjective problem for resource effectiveness and risk control in NPPs, and the method deserves further attention for optimizing the testing and maintenance activities of safety equipment at NPPs. Exploring the capabilities of PSO for the testing and maintenance cost model (including topology structure, optimal parameter selection, and integration with other EAs) is our future work.

Acronym

ACO:Ant Colony Optimization
AOT:Allowed outage time
GA:Genetic algorithm
HPIS:High-pressure injection system
MCS:Minimal cut set
NPP:Nuclear Power Plant
PAR:Proportional age reduction
PAS:Proportional age setback
PSO:Particle swarm optimization
RWST:Refueling water storage tank.

Notation

λ: Failure rate
λ̄: Averaged failure rate
λ0: Residual standby failure rate
μ: Repair rate
x: Decision variable vector
F: Cumulative distribution function
α: Age cofactor
ε: Maintenance effectiveness factor
η: Working condition factor
L: Overhaul period.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. IAEA, “Risk based optimization of technical specifications for operation of nuclear power plants,” IAEA-TECDOC-729, 1993.
  2. M. Harunuzzaman and T. Aldemir, “Optimization of standby safety system maintenance schedules in nuclear power plants,” Nuclear Technology, vol. 113, no. 3, pp. 354–367, 1996.
  3. S. Martorell, S. Carlos, A. Sánchez, and V. Serradell, “Constrained optimization of test intervals using a steady-state genetic algorithm,” Reliability Engineering and System Safety, vol. 67, no. 3, pp. 215–232, 2000.
  4. S. Martorell, A. Sánchez, S. Carlos, and V. Serradell, “Simultaneous and multi-criteria optimization of TS requirements and maintenance at NPPs,” Annals of Nuclear Energy, vol. 29, no. 2, pp. 147–168, 2002.
  5. T. Jiejuan, M. Dingyuan, and X. Dazhi, “A genetic algorithm solution for a nuclear power plant risk-cost maintenance model,” Nuclear Engineering and Design, vol. 229, no. 1, pp. 81–89, 2004.
  6. A. Mishra, M. D. Pandey, and A. Chauhan, “Regulation of nuclear power plants a multi objective approach,” Annals of Nuclear Energy, vol. 36, no. 10, pp. 1560–1573, 2009.
  7. K. Durga Rao, V. Gopika, H. S. Kushwaha, A. K. Verma, and A. Srividya, “Test interval optimization of safety systems of nuclear power plant using fuzzy-genetic approach,” Reliability Engineering and System Safety, vol. 92, no. 7, pp. 895–901, 2007.
  8. E. Martorell, A. Munoz, and V. Serradell, “Age-dependent models for evaluating risks & costs of surveillance & maintenance of components,” IEEE Transactions on Reliability, vol. 45, no. 3, pp. 433–442, 1996.
  9. S. Martorell, A. Sanchez, and V. Serradell, “Age-dependent reliability model considering effects of maintenance and working conditions,” Reliability Engineering and System Safety, vol. 64, no. 1, pp. 19–31, 1999.
  10. S. Martorell, S. Carlos, A. Sanchez, and V. Serradell, “Using genetic algorithms in completion times and test intervals optimization with risk and cost constraints,” in Proceedings of the (ESREL '00), SARS and SRA-Europe Annual Conference, May 2000.
  11. Z. Michalewicz, “A survey of constraint handling techniques in evolutionary computation methods,” in Proceedings of the 4th Annual Conference on Evolutionary Programming, pp. 135–155, The MIT Press, Cambridge, Mass, USA, 1995.
  12. B. Gjorgiev, D. Kančev, and M. Čepin, “Risk-informed decision making in the nuclear industry: application and effectiveness comparison of different genetic algorithm techniques,” Nuclear Engineering and Design, vol. 250, pp. 701–712, 2012.
  13. K. Deb, Multiobjective Optimization Using Evolutionary Algorithms, John Wiley & Sons, Chichester, UK, 2001.
  14. C. M. Fonseca and P. J. Fleming, “Genetic algorithms for multiobjective optimization: formulation, discussion and generalization,” in Proceedings of the 5th International Conference on Genetic Algorithms, S. Forrest, Ed., pp. 416–423, Morgan Kauffman, San Mateo, Calif, USA, 1993.
  15. E. Zitzler and L. Thiele, “Multiobjective optimization using evolutionary algorithms—a comparative case study,” in Parallel Problem Solving from Nature V, A. E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel, Eds., pp. 292–301, Springer, Berlin, Germany, 1998.
  16. K. Parsopóulos and M. N. Vrahatis, “Particle swarm optimization method in multi-objective problems,” in Proceedings of the ACM Symposium on Applied Computing (SAC '02), pp. 603–607, March 2002.
  17. C. A. C. Coello and M. S. Lechuga, “MOPSO: a proposal for multiple objective particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 1051–1056, Honolulu, Hawaii, USA, May 2002.
  18. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
  19. Y. Shi and R. Eberhart, “Modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, May 1998.
  20. J. Kennedy, “The particle swarm: social adaptation of knowledge,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 303–308, April 1997.
  21. J. Kennedy and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann, Boston, Mass, USA, 2001.
  22. R. Poli, “An analysis of publications on particle swarm optimisation applications,” Tech. Rep. CSM-469, Department of Computer Science, University of Essex, Colchester, UK, 2007.
  23. M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
  24. N. Srinivas and K. Deb, “Multiobjective function optimization using nondominated sorting genetic algorithms,” Evolutionary Computation, vol. 2, no. 3, pp. 221–248, 1995.
  25. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.