Abstract
This paper presents a design for the multivariable control of the cooling system of a PEM (proton exchange membrane) fuel cell stack. This system is complex and challenging: there are strong interactions between variables and highly nonlinear dynamic behavior. The design is carried out using a multiobjective optimization methodology; few previous works address this problem with multiobjective techniques. As a novelty, this work considers, in addition to the optimal controllers, the nearly optimal controllers nondominated in their neighborhood (potentially useful alternatives). In the formulation of the multiobjective optimization problem, the designer must make decisions that include the design objectives, the parameters of the controllers to be estimated, and the conditions and characteristics of the simulation of the system. To simplify the optimization and decision stages, the designer does not include all the desired scenarios in the multiobjective problem definition; nevertheless, these aspects can be analyzed in the decision stage, only for the controllers already obtained, at a much lower computational cost. At this stage, the potentially useful alternatives can play an important role: these controllers have significantly different parameters and therefore allow the designer to make a final decision with additional valuable information. Nearly optimal controllers can achieve an improvement in aspects not included in the multiobjective optimization problem. For example, in this paper, several aspects are analyzed for the potentially useful solutions: (1) the influence of certain parameters of the simulator; (2) the sample time of the controller; (3) the effect of stack degradation; and (4) the robustness. Therefore, this paper highlights the relevance of this in-depth analysis using the proposed methodology in the design of the multivariable control of the cooling system of a PEM fuel cell. This analysis can modify the final choice of the designer.
1. Introduction
In control engineering, optimization tools are widely used, for example, in the design of control systems [1–3]. Normally, in these problems, there are several conflicting objectives to optimize (such as output errors, control effort, and robustness). Therefore, it is reasonable to solve these problems as multiobjective optimization problems (MOP [4–7]). Thus, the designer can analyze the controller trade-off for each design objective and can better choose the final controller.
In an MOP, nearly optimal solutions (also called approximate or ε-efficient solutions) have been studied by many authors [8–10]. These alternatives have slightly worse performance than the optimal solutions and may be useful to the designer. However, they are ignored in a classic MOP, and considering all of them can slow down the algorithm and overcomplicate the decision stage. Among them, the solutions with performance similar to the optimal ones in the objective space but significantly different in the parameter space (different neighborhoods) are the potentially useful alternatives for the designer [11–13]. We define the potentially useful solutions as the optimal and nearly optimal solutions nondominated in their neighborhood. These alternatives provide the designer with greater diversity without excessively increasing the number of possible alternatives [11]. The designer can then analyze these controllers (significantly different from the alternatives obtained in a classical MOP) and make the final decision with higher quality information. In particular, it is possible to find nearly optimal controllers that outperform the optimal ones in features not included in the design objectives [14]; in this situation, the designer could choose a nearly optimal controller instead of an optimal one.
To formulate an MOP, the designer must define certain aspects. In the design of control systems, the MOP must contemplate aspects such as:
(i) Design objectives for the control.
(ii) Parameters to be adjusted for the defined control structure.
(iii) Process model and its operating point.
(iv) Definition of the simulation: input signals, disturbances, and noise.
(v) Simulation environment setup:
(a) Integration method.
(b) Sample time of the controller.
(c) Other a priori less relevant aspects, such as the nature of the noise (if any).
Firstly, the designer must choose the design objectives. These objectives measure certain characteristics of the control, such as output errors, control efforts, or robustness. However, each of these characteristics can be assessed with different indicators; for example, output errors can be measured by the ITAE (integral of time-weighted absolute error), IAE (integral of absolute error), ISE (integral of squared error), or ITSE (integral of time-weighted squared error) [15]. Including many of these indicators in the MOP would increase the number of objectives, which has two drawbacks: a greatly increased computational cost and a more complicated decision stage. Increasing the number of design objectives increases the number of optimal and nearly optimal solutions if a similar discretization is maintained, and each new solution generated must be compared with all the optimal and nearly optimal solutions to decide its inclusion in the Pareto set (or the nearly optimal set). Therefore, increasing the design objectives significantly increases the computational cost of the optimization process. Furthermore, this increase in solutions, together with the new design objectives added, makes the decision process and final decision more difficult for the designer. Therefore, the designer usually chooses only some of the indicators to define the MOP; the remaining indicators of interest can be analyzed more cheaply in the decision stage [14].
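As an illustration, the indicators mentioned above are simple integrals of the error signal; a minimal sketch in Python (the exponentially decaying error signal is a hypothetical example, not data from this paper):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (avoids np.trapz, which was removed in NumPy 2.0)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def error_indicators(t, e):
    """Compute IAE, ISE, ITAE, and ITSE for a sampled error signal e(t)."""
    ae, se = np.abs(e), e ** 2
    return {
        "IAE": trapezoid(ae, t),       # integral of |e|
        "ISE": trapezoid(se, t),       # integral of e^2
        "ITAE": trapezoid(t * ae, t),  # time-weighted |e|
        "ITSE": trapezoid(t * se, t),  # time-weighted e^2
    }

# Hypothetical error signal: exponential decay after a setpoint step
t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-t)
print(error_indicators(t, e))
```

For e(t) = exp(-t), the analytical values are IAE ≈ 1, ISE ≈ 0.5, ITAE ≈ 1, and ITSE ≈ 0.25, which makes the sketch easy to sanity-check; all four weight errors differently, which is exactly why they may rank two controllers differently.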
In the design of controllers, another important element is the process model employed. This model may have uncertainties in its structure and/or in the values of its parameters. There are different methods to evaluate the impact of these uncertainties, such as Monte Carlo [16] and minimax [17]. Nevertheless, these methods increase the computational cost of the MOP [18].
The simulation environment setup is fixed in the definition of the MOP. However, several setups may be valid for the designer: different numerical integration methods, different sample times, different types of noise, etc. Considering all these design alternatives in the MOP is intractable.
The designer must therefore establish certain fundamental aspects to define the MOP, but other interesting aspects are not included in the MOP due to various limitations (usually related to the computational cost). These aspects, ignored in the optimization stage, can be analyzed in the decision stage using the controllers obtained during optimization. In this scenario, the nearly optimal alternatives nondominated in their neighborhood play an especially relevant role. These controllers have performances similar to the optimal ones, but significantly different parameters (they are located in different neighborhoods), and could therefore produce an improvement (even a significant one) over the optimal ones in some aspect not included in the optimization process. For this reason, it can be very valuable to obtain these nearly optimal controllers; depending on their behavior in the aspects not considered in the optimization, the final choice may fall on a nearly optimal controller.
Let us look at a specific example. We define an MOP for the tuning of a proportional integral (PI) controller (parameters to be adjusted: the gain and the integral time) for a nonlinear nominal model of the process. The MOP is defined with two design objectives for changes in the setpoint: the first measures the output errors through the ITAE, and the second measures the control effort through the IAU (integral of absolute control effort). In the simulation environment, steps are introduced in the setpoint along with output noise (similar to the noise present in the actual process), and no additional disturbances are considered. Under this scenario, the set of optimal and nearly optimal controllers is shown in Figure 1.

Figure 1 contains a set of optimal controllers and sets of nearly optimal controllers nondominated in their neighborhood, located in different neighborhoods of the parameter space. The nearly optimal sets provide the designer with new alternatives significantly different from the optimal ones (but with similar performances); thus, these controllers are potentially useful for the designer. From these alternatives, we select two optimal controllers and three nearly optimal controllers (see Table 1).
The designer now wants to analyze the output errors of the selected alternatives with a new indicator, the IAE, which was not contemplated in the optimization stage of the MOP. The values of this indicator for the selected controllers can be seen in Table 1. Two of the nearly optimal controllers obtain a better IAE than their corresponding optimal controllers; in fact, one of them obtains a significant improvement over its optimal counterpart. Additionally, the designer analyzes the robustness of a selected pair of alternatives. For this, we consider uncertainty in the parameters of the nonlinear model: the controllers are evaluated on 50 random variations of the model parameters within a given percentage bound. Figure 2 shows the value of the design objectives for each of the 50 variations of the model: the blue points correspond to the optimal controller and the green squares to the nearly optimal controller. The nearly optimal controller shows less degradation than the optimal one. Finally, the designer analyzes the influence of the noise on the selected controllers, evaluating the design objectives under different types of noise (Table 2). In this new scenario, the optimal alternative is dominated by the nearly optimal controller; that is, the dominance relation depends on the noise chosen for the simulations. Therefore, the nearly optimal alternatives bring new controllers that are potentially useful for the designer. This diversity allows the designer to make a final decision with additional valuable information. Moreover, the selected nearly optimal controllers present improvements over the selected optimal controllers in the three scenarios defined, while obtaining similar performance in the design objectives. This situation may lead the final choice of the designer towards a nearly optimal controller to the detriment of an optimal one.
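The robustness analysis sketched above (re-evaluating a controller on randomly perturbed model parameters) can be expressed generically. In this sketch the evaluation function and parameter values are hypothetical stand-ins for "simulate the closed loop and return the objective vector":

```python
import numpy as np

def monte_carlo_eval(evaluate, nominal_params, rel_var, n_runs=50, seed=0):
    """Evaluate objectives for n_runs random variations of the model parameters.

    evaluate(params) -> objective vector; rel_var is the relative variation bound.
    """
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_runs):
        # Perturb every parameter independently within +/- rel_var
        factors = 1.0 + rng.uniform(-rel_var, rel_var, size=len(nominal_params))
        results.append(evaluate(nominal_params * factors))
    return np.array(results)

# Hypothetical stand-in for a closed-loop simulation returning [J1, J2]
def fake_objectives(params):
    return np.array([params.sum(), np.abs(params).max()])

cloud = monte_carlo_eval(fake_objectives, np.array([1.0, 2.0, 0.5]), rel_var=0.1)
print(cloud.shape)  # one objective vector per model variation
```

Plotting such a cloud for two controllers reproduces the kind of comparison shown in Figure 2: the controller whose cloud spreads less degrades less under model uncertainty.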

In this work, we propose the design of a multivariable control system for the cooling circuit of a proton exchange membrane fuel cell (PEMFC [19, 20]) stack. The correct design of the stack cooling system is vital for the durability, cost, reliability, and energy efficiency of the stack [21–24]. The PEMFC stack is part of a micro combined heat and power (µ-CHP) system [25–28]. µ-CHP systems are cogeneration systems whose main advantage is the use of the thermal energy produced during the generation of electrical energy, which increases the efficiency of the system. µ-CHP systems employ various technologies [29]; among them, some authors agree that the most promising, due to their efficiency and low emissions, are those based on fuel cells [30, 31]. The most common µ-CHP systems of this type are based on PEMFC stacks and are sometimes used for the electrical and thermal supply of homes [32]. Nevertheless, several technical aspects must be advanced to improve the performance of these systems and reduce their costs. One of the most important areas for improvement is the temperature control of the PEMFC [33, 34].
In this paper, a nonlinear model of a PEM fuel cell stack (Nedstack, model 2.0HP) is used, which can produce up to 2 kW of electrical energy and 3.3 kW of thermal energy. The stack is cooled by a liquid cooling system, and the model is described in [35]. Using the methodology presented, nearly optimal controllers will be obtained. These controllers provide the designer with alternatives that are significantly different in their parameters; thanks to them, the designer can make a final decision using additional valuable information. Therefore, this work shows as a novelty the usefulness of considering the nearly optimal alternatives nondominated in their neighborhood in the design of a multivariable control system. The methodology proposed enables the maximum exploitation of a specific control technology by using valuable information in the tuning procedure.
This work is structured as follows. Section 2 describes some basic definitions previously presented in the literature. Section 3 briefly describes the nevMOGA algorithm used in this work. Section 4 presents the design of a multivariable control system for the cooling system of a PEMFC stack. Finally, the conclusions are presented in Section 5.
2. Background
The resolution of an MOP produces a set of optimal solutions. There is also a set of nearly optimal solutions that could be interesting for the decision maker but which is ignored in a classic MOP. Nevertheless, finding all of the nearly optimal solutions can considerably increase the number of alternatives. Among them, the solutions nondominated in their neighborhood (the potentially useful solutions) provide the designer with alternatives that are close to the optimal ones in the objective space but differ significantly in the parameter space. These alternatives maintain diversity in the parameter space without excessively increasing the number of possible alternatives. With them, the designer can make the final decision with the benefit of valuable additional information. In this section, these sets are defined.
A multiobjective optimization problem (a maximization problem can be converted into a minimization problem: each objective to be maximized is transformed as $\max J_i(\mathbf{x}) = -\min(-J_i(\mathbf{x}))$) can be defined as follows:

$\min_{\mathbf{x}} \mathbf{J}(\mathbf{x}) = [J_1(\mathbf{x}), \ldots, J_m(\mathbf{x})]$, subject to $\underline{x}_i \le x_i \le \overline{x}_i$, $i = 1, \ldots, k$,

where $\mathbf{x} = (x_1, \ldots, x_k)$ is defined as a decision vector in the domain $D \subseteq \mathbb{R}^k$ and $\mathbf{J}: D \to \mathbb{R}^m$ is defined as the vector of objective functions $J_1, \ldots, J_m$. $\underline{x}_i$ and $\overline{x}_i$ are the lower and upper bounds of each component of $\mathbf{x}$.
Definition 1. Dominance [36]: a decision vector $\mathbf{x}^1$ is dominated by another decision vector $\mathbf{x}^2$ if $J_i(\mathbf{x}^2) \le J_i(\mathbf{x}^1)$ for all $i \in \{1, \ldots, m\}$ and $J_j(\mathbf{x}^2) < J_j(\mathbf{x}^1)$ for at least one $j$. This is denoted as $\mathbf{x}^2 \preceq \mathbf{x}^1$.
Definition 2. Pareto set: the Pareto set (denoted by $X_P$) is the set of solutions in $D$ that are nondominated by any other solution in $D$:

$X_P = \{\mathbf{x} \in D \mid \nexists\, \mathbf{x}' \in D : \mathbf{x}' \preceq \mathbf{x}\}.$
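Definitions 1 and 2 translate directly into code; a minimal sketch of the dominance test and a brute-force Pareto filter over a finite set of candidate objective vectors (the sample points are illustrative):

```python
import numpy as np

def dominates(j_a, j_b):
    """True if objective vector j_a dominates j_b (minimization, Definition 1)."""
    j_a, j_b = np.asarray(j_a), np.asarray(j_b)
    return bool(np.all(j_a <= j_b) and np.any(j_a < j_b))

def pareto_filter(objectives):
    """Indices of nondominated points: a discrete approximation of the Pareto set."""
    return [i for i, ji in enumerate(objectives)
            if not any(dominates(jk, ji)
                       for k, jk in enumerate(objectives) if k != i)]

points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_filter(points))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

The brute-force filter is quadratic in the number of candidates, which is one reason evolutionary algorithms maintain bounded archives instead of comparing against every solution ever generated.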
Definition 3. Pareto front: given the set of Pareto optimal solutions $X_P$, the Pareto front is defined as

$\mathbf{J}(X_P) = \{\mathbf{J}(\mathbf{x}) \mid \mathbf{x} \in X_P\}.$
Definition 4. ε-dominance [37]: define $\boldsymbol{\epsilon} = (\epsilon_1, \ldots, \epsilon_m) \ge 0$ as the maximum acceptable performance degradation. A decision vector $\mathbf{x}^1$ is ε-dominated by another decision vector $\mathbf{x}^2$ if $J_i(\mathbf{x}^2) + \epsilon_i \le J_i(\mathbf{x}^1)$ for all $i \in \{1, \ldots, m\}$ and $J_j(\mathbf{x}^2) + \epsilon_j < J_j(\mathbf{x}^1)$ for at least one $j$. This is denoted by $\mathbf{x}^2 \preceq_{\epsilon} \mathbf{x}^1$.
Definition 5. ε-efficiency [13]: the set of ε-efficient solutions (denoted by $X_{\epsilon}$) is the set of solutions in $D$ which are not ε-dominated by another solution in $D$:

$X_{\epsilon} = \{\mathbf{x} \in D \mid \nexists\, \mathbf{x}' \in D : \mathbf{x}' \preceq_{\epsilon} \mathbf{x}\}.$
Definition 6. Neighborhood: define $\mathbf{n} = (n_1, \ldots, n_k) > 0$ as the maximum distance between neighboring solutions. Two decision vectors $\mathbf{x}^1$ and $\mathbf{x}^2$ are neighboring solutions ($\mathbf{x}^1 \sim_n \mathbf{x}^2$) if $|x_i^1 - x_i^2| < n_i$ for all $i \in \{1, \ldots, k\}$.
Definition 7. n-dominance: a decision vector $\mathbf{x}^1$ is n-dominated by another decision vector $\mathbf{x}^2$ if they are neighboring solutions (Definition 6) and $\mathbf{x}^2 \preceq \mathbf{x}^1$. This is denoted by $\mathbf{x}^2 \preceq_n \mathbf{x}^1$.
Definition 8. ε,n-efficiency [11]: the set of ε,n-efficient solutions (denoted by $X_{\epsilon,n}$) is the set of solutions of $X_{\epsilon}$ which are not n-dominated by another solution in $X_{\epsilon}$:

$X_{\epsilon,n} = \{\mathbf{x} \in X_{\epsilon} \mid \nexists\, \mathbf{x}' \in X_{\epsilon} : \mathbf{x}' \preceq_n \mathbf{x}\}.$

The sets $X_P$, $X_{\epsilon}$, and $X_{\epsilon,n}$ may contain infinitely many solutions, so obtaining them exactly is often computationally unapproachable. Normally, the designer obtains the discrete sets $X_P^*$, $X_{\epsilon}^*$, and $X_{\epsilon,n}^*$, in such a way that they appropriately characterize $X_P$, $X_{\epsilon}$, and $X_{\epsilon,n}$, respectively.
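These definitions can be sketched in the same style. Note that the exact sign convention is our assumption: ε relaxes the comparison so that solutions within ε of the front are not ε-dominated (and are therefore kept as nearly optimal), and n bounds the per-parameter distance that defines a neighborhood:

```python
import numpy as np

def eps_dominates(j_a, j_b, eps):
    """True if j_a eps-dominates j_b: j_a is better by more than eps (Definition 4)."""
    j_a, j_b, eps = map(np.asarray, (j_a, j_b, eps))
    return bool(np.all(j_a + eps <= j_b) and np.any(j_a + eps < j_b))

def neighbors(x_a, x_b, n):
    """True if two decision vectors lie in the same neighborhood (Definition 6)."""
    return bool(np.all(np.abs(np.asarray(x_a) - np.asarray(x_b)) < np.asarray(n)))

def n_dominates(x_a, j_a, x_b, j_b, n):
    """True if x_a n-dominates x_b: neighbors and ordinary dominance (Definition 7)."""
    j_a, j_b = np.asarray(j_a), np.asarray(j_b)
    return neighbors(x_a, x_b, n) and bool(np.all(j_a <= j_b) and np.any(j_a < j_b))

eps = [0.1, 0.1]
print(eps_dominates([1.0, 1.0], [1.2, 1.2], eps))    # True: worse by more than eps
print(eps_dominates([1.0, 1.0], [1.05, 1.05], eps))  # False: within the eps margin
# A slightly worse point far away in parameter space is NOT n-dominated:
print(n_dominates([0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [1.1, 1.1], n=[0.5, 0.5]))
```

The last call is the key mechanism: the slightly worse point survives because no better solution exists in its own neighborhood, which is exactly what makes it potentially useful.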
Figure 3 shows an example. There is a set of optimal solutions located on the Pareto front. In addition, there is a set of nearly optimal solutions (gray area); both sets together form the set of ε-efficient solutions. Therefore, if the designer considers the nearly optimal alternatives, he or she will obtain new alternatives significantly different from the optimal ones, located in other neighborhoods of the parameter space. Knowledge of these neighborhoods enables the designer to make a more informed final decision. But adding all the nearly optimal solutions has two drawbacks: it slows down the algorithm and complicates the decision stage. Nevertheless, the nearly optimal solutions nondominated in their neighborhood provide the designer with diversity in the obtained set without excessively increasing the number of possible alternatives, thereby avoiding both drawbacks. These alternatives, the best solutions in each neighborhood, are the potentially useful solutions.
There are various algorithms designed to provide nearly optimal alternatives [12, 13]. Nevertheless, many do not take the parameter space into account when they discard solutions, so they cannot guarantee that potentially useful solutions have not been discarded. However, the algorithm nevMOGA [11] takes the parameter space into account in its discretization, guaranteeing that the potentially useful alternatives are not ruled out. This algorithm has been evaluated on various examples, obtaining a good approximation of the set of potentially useful solutions in every case [14, 38].

3. Materials and Methods
In this work, we use the algorithm nevMOGA (multiobjective evolutionary algorithm available in Matlab Central: https://www.mathworks.com/matlabcentral/fileexchange/71448-nevmoga-multiobjective-evolutionary-algorithm) [11]. This is an evolutionary algorithm based on ev-MOGA [39]. nevMOGA provides the designer with a discrete set of optimal and nearly optimal solutions nondominated in their neighborhood (potentially useful solutions, Definition 8). nevMOGA has four populations:
(1) The main population. This population must converge towards the whole set of potentially useful solutions, and not only towards the Pareto set, to achieve diversity in the set found. The number of individuals in this population is fixed by the designer.
(2) The archive where a discrete approximation of the Pareto front is stored. Its size is variable but bounded, depending on the number of boxes (divisions for each dimension) previously defined by the designer.
(3) The archive where a discrete approximation of the set of nearly optimal solutions nondominated in their neighborhood is stored. The size of this population also varies, but it is likewise bounded by the number of boxes.
(4) An auxiliary population that stores the new individuals generated in each iteration. Its size, which must be a multiple of 4, is fixed by the designer.
Figure 4 shows a flowchart of nevMOGA. First, the two archives are initialized as empty sets. Then, the main population is created randomly (the designer can seed part or all of it with an initial population). Next, both archives are updated with individuals from the main population. Then, in each iteration, the following is done: (1) create the offspring subpopulation (by crossing and mutating individuals from the current populations); (2) update the two archives if necessary; and (3) update the main population when the archives change.
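The loop in Figure 4 can be caricatured in a few lines. The sketch below is not the actual nevMOGA implementation: random sampling stands in for the genetic operators, a single archive stands in for the two separate archives, and the two-objective test function is invented for illustration. It only shows the archive-update logic — discard candidates that are ε-dominated or dominated by a neighbor, and evict members dominated by a neighboring newcomer:

```python
import numpy as np

def dominates(a, b):
    return bool(np.all(a <= b) and np.any(a < b))

def J(x):  # hypothetical two-objective test function (minimize both)
    return np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 1.0) ** 2 + x[1] ** 2])

eps = np.array([0.05, 0.05])  # maximum acceptable degradation
n = np.array([0.2, 0.2])      # neighborhood size in the decision space

rng = np.random.default_rng(1)
archive = []  # (x, J(x)) pairs: optimal and nearly optimal, nondominated in their neighborhood
for _ in range(2000):  # random sampling stands in for crossover/mutation
    x = rng.uniform(-1.0, 2.0, size=2)
    jx = J(x)
    # Discard if eps-dominated by a member, or dominated by a neighboring member
    if any(dominates(ja + eps, jx) or
           (np.all(np.abs(xa - x) < n) and dominates(ja, jx))
           for xa, ja in archive):
        continue
    # Evict members that the newcomer n-dominates, then insert it
    archive = [(xa, ja) for xa, ja in archive
               if not (np.all(np.abs(xa - x) < n) and dominates(jx, ja))]
    archive.append((x, jx))

print(len(archive))  # front members plus nearly optimal points in other neighborhoods
```

In the real algorithm the archives are additionally box-discretized to bound their size; that refinement is omitted here.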

The parameter ε defines the size of the area of nearly optimal solutions (maximum degradation acceptable to the designer, Definition 4), and its definition is necessary to use nevMOGA. In addition, the parameter n (neighborhood, Definition 6) defines the distance beyond which two solutions are considered significantly different in the decision space, and its definition is recommended. If the knowledge necessary to set it is unavailable, there is a simple procedure for calculating this parameter from ε and a reference solution [38]. A very large ε or a very small n can lead to an excessive number of solutions, slowing down the optimization process and complicating the decision stage. Conversely, a very small ε or a very large n can lead to a very small number of solutions, discarding potentially useful nearly optimal alternatives.
4. Results and Discussion
In this section, the new approach is applied to the design of a multivariable control system for the cooling system of a PEMFC stack. The PEMFC stack can be part of a cogeneration system, for example, a µ-CHP system. The main advantage of these systems is that using the thermal energy produced during the generation of electrical energy increases overall efficiency. Accurate temperature control of the stack is necessary to improve the behavior of this type of system. Therefore, in this section, the control is designed using the new methodology.
The µ-CHP system used in this work is shown in Figure 5. The electric load demands electrical power from the PEMFC, drawing an electric current that emulates the electrical demand of a house. To generate this current, the stack must be supplied with hydrogen and oxygen. In addition to the electrical energy, the stack generates thermal energy. Two water cooling circuits, a primary and a secondary circuit coupled by a heat exchanger, extract this heat. The heat generated by the stack is extracted by the water of the primary circuit at the stack outlet temperature and transferred to the secondary circuit through the heat exchanger; the heat finally arrives at the hot water tank for use (heating and hot water). In the primary circuit, Pump 1 propels the water and regulates the flow that passes through the stack: if this flow increases, more heat is extracted and the stack cools down. The secondary circuit consists of the hot water tank and Pump 2, which varies the water flow rate of the secondary circuit: if this flow rate decreases, the amount of heat transferred through the heat exchanger also decreases, so less heat passes from the primary to the secondary circuit and the water temperature at the stack inlet increases.

In this paper, a nonlinear model of a PEM fuel cell is used. This model (available at https://riunet.upv.es/handle/10251/118336) is described in [35]. It is built from first principles and comprises more than 30 equations. The model simplifies some less relevant aspects of the cooling system, but it remains a complex model with strong nonlinearities.
4.1. MOP
For the design of the control system, a multiloop PI control was chosen because of its ease of implementation and maintenance; other control structures could be tuned with the same methodology. The RGA technique is used to establish the loop pairing [40]. Since the system is nonlinear, the static gain matrix is determined at three operating points, corresponding to a stack outlet water temperature of 65°C, a stack inlet water temperature of 60°C, and three levels of demanded current. For this, small variations are introduced in the flow rate of each inlet independently at each operating point, and the static gain matrices at the three operating points are obtained.
Observing and comparing the gains of these matrices, the static nonlinearity of the process is evident, with large variations in the gains between operating points. The RGA matrices clearly show that the most suitable pairing is that of the main diagonal. Therefore, the control structure is defined as a multiloop PI control with the following loop pairing: the stack outlet water temperature is controlled by the primary circuit flow (Pump 1), and the stack inlet water temperature is controlled by the secondary circuit flow (Pump 2) (see Figure 6). That is to say,

$u_1(t) = K_{p1}\left(e_1(t) + \frac{1}{T_{i1}} \int_0^t e_1(\tau)\, d\tau\right), \qquad u_2(t) = K_{p2}\left(e_2(t) + \frac{1}{T_{i2}} \int_0^t e_2(\tau)\, d\tau\right),$

where $K_{p1}$ and $K_{p2}$ are the proportional gains, $T_{i1}$ and $T_{i2}$ are the integral time constants in seconds, and $e_1$ and $e_2$ are the output errors with respect to the setpoints of the two controlled temperatures. The actuators must respect their flow saturation limits, and for this reason each PI incorporates an antiwindup mechanism.
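For reference, the RGA used for the pairing decision is the elementwise product of the static gain matrix with the transpose of its inverse. A sketch with an invented 2×2 gain matrix (not the gains identified for the PEMFC):

```python
import numpy as np

def rga(G):
    """Relative gain array: elementwise product of G and the transpose of inv(G)."""
    return G * np.linalg.inv(G).T

# Illustrative static gain matrix (NOT the values identified for the stack)
G = np.array([[2.0, 0.5],
              [0.4, 1.5]])
L = rga(G)
print(L)  # diagonal entries close to 1 favor the main-diagonal pairing
print(L.sum(axis=0), L.sum(axis=1))  # each row and column of an RGA sums to 1
```

Diagonal RGA entries near 1 at all three operating points are what justify the main-diagonal pairing chosen above.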

For tuning the controllers of this system, the water temperature of the tank is kept constant at 55°C; thus, the secondary circuit of the µ-CHP system is simplified. The output temperatures of the real system have an associated noise, which is filtered to prevent it from propagating to the control actions. Similarly, in the simulation of the system, a noise similar to that of the real process is introduced and likewise filtered so that the control actions are less influenced by it.
Design objectives are evaluated over a defined test. This test has two changes in the demanded current, at 500 and 1500 seconds (see Figure 7). In addition, as recommended by the manufacturer, the water outlet temperature of the stack should be 65°C for optimal stack operation, and there must be a 5°C gradient between the water inlet and outlet temperatures of the stack. Therefore, the references are constant throughout the entire test: 65°C for the outlet temperature and 60°C for the inlet temperature. In this way, the system operates at the optimal operating point suggested by the manufacturer, and the performance of the controllers is evaluated on the rejection of disturbances (current demands). The heat demand is not evaluated because its influence on temperature changes has much slower dynamics than those produced by current changes; additionally, its effect is filtered by the capacity of the secondary tank and the heat exchanger of the primary thermal circuit. Therefore, the most critical disturbance is the change in the demanded current, and presumably the disturbance produced by the demanded heat will be easily rejected by any controller valid for rejecting the current effect.

The controllers obtained are evaluated by means of objectives that measure output errors and control efforts. These objectives will presumably be in conflict, so it is valuable to study them as independent objectives and analyze the trade-off between them. However, the system has two outputs and two control actions. It seems reasonable to aggregate the output error objectives (for both outputs) and the control effort objectives (for both control actions), as they have the same relative importance and comparable magnitude; this simplifies the optimization process and the decision stage. The four underlying measures are the average absolute error of the stack outlet water temperature, in °C; the average absolute error of the stack inlet water temperature, in °C; the average absolute rate of change of the first control action (primary flow); and the average absolute rate of change of the second control action (secondary flow). The design objectives are defined as integrals divided by the test duration to obtain average measurements. In this way, the objectives have physical sense, their interpretation is easier for the designer, and the epsilon parameter (maximum degradation over the design objectives) can be defined in a simpler way. Therefore, the first objective $J_1$ is defined as the aggregation of the average error in both outputs, and the second objective $J_2$ is defined as the aggregation of the average absolute derivative of both control actions: $J_1$ measures the rejection of disturbances, while $J_2$ measures the control effort. The MOP is defined as

$\min_{\mathbf{x}} [J_1(\mathbf{x}), J_2(\mathbf{x})]$, with $\mathbf{x} = (K_{p1}, T_{i1}, K_{p2}, T_{i2})$,

where

$J_1(\mathbf{x}) = \frac{1}{t_f} \int_0^{t_f} \left(|e_1(t)| + |e_2(t)|\right) dt, \qquad J_2(\mathbf{x}) = \frac{1}{t_f} \int_0^{t_f} \left(\left|\frac{du_1(t)}{dt}\right| + \left|\frac{du_2(t)}{dt}\right|\right) dt,$

subject to constraints on the objective values that delimit the designer's region of interest (equation (11)) and to lower and upper bounds on the decision variables (equation (12)).
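Given simulated trajectories, the two aggregated objectives reduce to time-averaged integrals. A sketch (the signal arrays below are placeholders for the simulator output, not real PEMFC trajectories):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (np.trapz was removed in NumPy 2.0)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def objectives(t, e1, e2, u1, u2):
    """J1: time-averaged absolute output errors; J2: time-averaged |du/dt|."""
    T = t[-1] - t[0]
    j1 = (trapezoid(np.abs(e1), t) + trapezoid(np.abs(e2), t)) / T
    j2 = (trapezoid(np.abs(np.gradient(u1, t)), t) +
          trapezoid(np.abs(np.gradient(u2, t)), t)) / T
    return j1, j2

# Placeholder signals standing in for the temperature errors and pump commands
t = np.linspace(0.0, 2000.0, 4001)
e1 = 0.5 * np.exp(-t / 100.0)   # outlet temperature error, degC
e2 = 0.2 * np.exp(-t / 100.0)   # inlet temperature error, degC
u1 = 1.0 - np.exp(-t / 100.0)   # first pump command
u2 = 0.5 * np.ones_like(t)      # second pump command held constant
j1, j2 = objectives(t, e1, e2, u1, u2)
print(j1, j2)
```

Dividing by the test duration keeps J1 in °C and J2 in the units of the control action per second, which is what makes the ε thresholds below easy to interpret.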
The constraints (equation (11)) have been chosen to obtain the set of solutions in the designer's region of interest; thus, we improve the pertinence of the set, discarding undesirable solutions. Furthermore, the bounds of the decision space (equation (12)) have been defined to find practical, realizable controllers.
Once the optimization problem is defined, the two main parameters of nevMOGA (ε and n) must be set. In this MOP, the design objectives have physical sense ($J_1$ measures the average error in °C and $J_2$ measures the average variation of the control actions), which makes it easier to define the parameter ε (maximum acceptable degradation): $\epsilon_1$ and $\epsilon_2$ keep the units of $J_1$ and $J_2$, respectively. In this problem, $\epsilon_1 = 0.05$°C has been defined for $J_1$, and $\epsilon_2$ has been defined correspondingly in the units of $J_2$. We then chose n (neighborhood) based on the previously defined search space (a small fraction of each parameter's range). To optimize the defined MOP, nevMOGA is used with a configuration consisting of the main population size, the subpopulation size, and the number of boxes per dimension.
These parameters have been defined to obtain an adequate distribution in the objective space (number of boxes per dimension), a sufficient number of new candidate solutions per iteration, and an adequate number of individuals in the main population to explore the search space. For the remaining parameters, the values suggested in [41] for the original algorithm (ev-MOGA) are used.
Figure 8 shows the discrete set obtained by nevMOGA. In the figure, to show the decision variables, we use the level diagram (LD, available in Matlab Central: https://es.mathworks.com/matlabcentral/fileexchange/62224-interactive-tool-for-decision-making-in-multiobjective-optimization-with-level-diagrams) [42, 43] visualization tool with the 2-norm. The objective space is shown in an x-y graph because the MOP has only two objectives, so the trade-off between the design objectives can be analyzed more simply than with LD on the objective space. As seen in the figure, nevMOGA has found a large number of nearly optimal controllers nondominated in their neighborhood with performances similar to the optimal ones. These controllers provide the designer with valuable information for making a better informed final decision, as will be shown below.

In any MOP, the final decision is a subjective choice based on the designer's preferences, knowledge, and previous experience. All the optimal solutions are equally valid but have different trade-offs between the design objectives. In this paper, we choose optimal alternatives with different trade-offs (different areas of the Pareto front) that could be chosen by the designer depending on his or her preferences; in this way, we validate the methodology independently of the specific preferences of a particular designer. There is no unique procedure for the final decision, but our procedure is as follows: (1) choose an optimal solution in an area of interest for the designer and (2) select significantly different solutions with similar performance in the design objectives. For these alternatives, we analyze new indicators not included in the design objectives. In some cases, significantly different solutions with similar performance in the design objectives obtain a significant improvement in the new indicators with respect to the initially chosen optimal solution. This is valuable information for the designer before the final decision.
To carry out this analysis, we chose three optimal controllers in three different zones of the objective space: a fast controller (little output error in exchange for aggressive control action), a slow controller (more output error in exchange for smooth control action), and a balanced controller. In addition, we chose three nearly optimal controllers that obtain performance similar to these three, respectively, while being significantly different in their parameters.
Let us now look at the fast controllers (see Table 3). Figure 9 shows the behavior of the system with both controllers, and Table 4 gives their objective values. The error in the outlet temperature is greater for the nearly optimal controller, whereas for the inlet temperature the opposite occurs and the optimal controller produces the greater error. With respect to the control effort, the nearly optimal controller is softer in one control action and more aggressive in the other (see Table 4). Therefore, although the optimal controller slightly dominates the nearly optimal one, there is no significant improvement (the behavioral differences are small). So, we look for new indicators to make a better informed final decision.

To study the selected controllers, we will carry out an analysis in four different scenarios:
(i) Increasing the sample time (initially seconds).
(ii) Electrical degradation of the fuel cell.
(iii) Changes in the noise introduced in the system outputs (a change in the seed, that is, in the sequence of noise values, or in the amplitude).
(iv) Uncertainty in the model.
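In each of these scenarios, the comparison ultimately reduces to re-evaluating the two controllers' objective vectors and checking their Pareto-dominance relation. A minimal Python sketch of that check (assuming minimization; the function names are illustrative, not from the original implementation):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def compare_in_scenario(j_opt, j_near):
    """Classify the dominance relation between the optimal and the nearly
    optimal controller from their objective vectors in one scenario."""
    if dominates(j_opt, j_near):
        return "optimal dominates"
    if dominates(j_near, j_opt):
        return "dominance reversed"
    return "dominance disappears"
```

The three possible outcomes correspond exactly to the cases reported in the scenario analyses below: the original dominance holds, it is reversed, or the two controllers become incomparable.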
The choice of the sample time may vary the set of optimal controllers (Pareto front). The sample time refers to the controller sample time (it is a parameter of the real-time implementation). Therefore, we will analyze how the increase of this parameter affects the performance of the controllers obtained (through the design objectives). Suppose we increase the sample time to seconds. Considering the dynamics of the process, this sample time is still perfectly valid. In this scenario, the nearly optimal controller dominates the optimal controller (the dominance is reversed). The objective values in this new scenario are observed in Table 5.
The electrical power provided by the stack also depends on its degradation. The manufacturer provides a voltage-current characteristic curve for certain stack operating temperatures. These curves are valid at the beginning of the stack's life. However, after hours of operation, the electrical power generated by the stack decreases: the characteristic curve degrades, providing less voltage for the same demanded current, which translates into less electrical energy [44, 45]. For example, [45] reports a stack degradation of approximately 0.3–3% of the provided voltage for every 1000 hours of operation. Therefore, in this analysis, we will study how this degradation of the stack affects the and controllers. The study is carried out by reducing the voltage provided by the stack by a percentage (the degradation). With a degradation greater than or equal to 6%, the optimal controller no longer dominates the nearly optimal controller (see Table 5). In this scenario, the controller provides smoother control actions. Therefore, after hours of stack operation, the nearly optimal controller is no worse (understanding worse as having a higher objective value) than the optimal controller .
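The degradation model used in this scenario, a fixed percentage reduction of the stack voltage proportional to accumulated operating hours, can be sketched as follows. This is only an illustrative linear approximation of the 0.3–3% per 1000 h range cited above; the function name and default rate are our assumptions:

```python
def degraded_voltage(v_nominal, hours, rate_per_1000h=0.01):
    """Reduce the stack voltage by a fixed fraction per 1000 h of operation.
    rate_per_1000h in the 0.003-0.03 range matches the degradation rates
    reported in the literature; the voltage is clamped at zero."""
    factor = 1.0 - rate_per_1000h * (hours / 1000.0)
    return max(0.0, v_nominal * factor)
```

For example, with a 1% per 1000 h rate, a stack delivering 100 V at the beginning of its life would deliver about 98 V after 2000 h, and the controllers would be re-evaluated on the correspondingly reduced characteristic curve.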
The output temperatures of the real system are noisy. This noise is filtered to prevent it from propagating to the control actions. It has been included in the simulated system as white noise. However, the noise introduced has a random component that depends on its seed: changing the seed modifies the sequence of noise values at each instant while maintaining its nature and amplitude. Therefore, in this analysis, we will study how the random component of the noise affects the performance of the controllers (by changing the seed). Suppose that the seed of the noise is randomly modified. In this new scenario, the nearly optimal controller dominates the optimal controller (the dominance is reversed; see Table 5). Therefore, in this new scenario (just as valid as the initial one for the designer), the controller could be optimal, and therefore, both controllers are equally good.
The objective value obtained depends on the noise seed, so a statistical study of this aspect is appropriate. This analysis can be made by performing additional simulations with different noise seeds, and it can be carried out in the decision stage only for the optimal and nearly optimal solutions (much less expensive than performing it in the optimization stage). The analysis has been carried out on controllers and , evaluating them on 250 randomly obtained noise seeds. of the noise seeds cause the dominance to be reversed; that is, the controller dominates the controller. In addition, the dominance between both controllers disappears with of the seeds. In this analysis, is the more robust controller with respect to seed changes. Even so, this shows that changes in the noise seed can change the Pareto set obtained.
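The seed study just described can be sketched as a small Monte Carlo loop. This is an illustrative Python sketch under our own assumptions: `evaluate(ctrl, seed)` stands for a hypothetical wrapper around the closed-loop simulation that returns a controller's objective vector under a given noise seed, and the counter names are ours:

```python
import numpy as np

def seed_dominance_study(evaluate, ctrl_opt, ctrl_near, n_seeds=250):
    """Count, over many noise seeds, how often the optimal controller keeps
    its dominance over the nearly optimal one, how often the dominance is
    reversed, and how often it disappears (the controllers become
    incomparable)."""
    def dominates(a, b):
        a, b = np.asarray(a), np.asarray(b)
        return bool(np.all(a <= b) and np.any(a < b))

    counts = {"kept": 0, "reversed": 0, "disappears": 0}
    for seed in range(n_seeds):
        j_opt = evaluate(ctrl_opt, seed)    # objectives under this seed
        j_near = evaluate(ctrl_near, seed)  # same noise realisation
        if dominates(j_opt, j_near):
            counts["kept"] += 1
        elif dominates(j_near, j_opt):
            counts["reversed"] += 1
        else:
            counts["disappears"] += 1
    return counts
```

Because each seed only requires re-simulating the two candidate controllers (not re-running the optimization), the 250-seed study remains computationally affordable at the decision stage.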
Finally, we will analyze how the uncertainty of the model affects the controllers studied. To do this, we made 50 random variations of the model parameters. A variation of is applied to all the parameters of the model (the 30 parameters described in [35]). The controllers are then evaluated through the design objectives on each of the 50 variations of the model. In this way, we measure the robustness of the controllers.
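This robustness analysis can be sketched as a parameter-perturbation loop that records the spread of the design objectives. Again, this is an illustrative sketch and not the paper's code: `simulate(params)` is a hypothetical wrapper returning the objective vector of one controller on a perturbed model, and a uniform +/- 5% perturbation is assumed only as an example:

```python
import numpy as np

def robustness_envelope(simulate, nominal_params, n_runs=50, spread=0.05,
                        seed=0):
    """Evaluate a controller's design objectives over random multiplicative
    perturbations of the model parameters (uniform in [-spread, +spread])
    and return the per-objective (min, max) envelope over all runs."""
    rng = np.random.default_rng(seed)
    objs = []
    for _ in range(n_runs):
        factors = 1.0 + rng.uniform(-spread, spread, nominal_params.shape)
        objs.append(simulate(nominal_params * factors))
    objs = np.array(objs)
    return objs.min(axis=0), objs.max(axis=0)
```

A controller whose envelope is narrower, or contained within that of a competitor, exhibits less variability under model uncertainty, which is the comparison made in Figure 10.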
Figure 10 shows the degradation limits of the and controllers over the 50 variations of the plant. The degradation of is practically contained within the degradation limits of . Therefore, the nearly optimal controller is more robust and shows less variability under uncertainty in the model parameters.

Therefore, after studying the fast controllers ( and ), may be preferred by the designer. In the different scenarios (sample time, stack degradation, random noise change, and robustness analysis), this controller performs better than (or as well as) . Thus, in this case, considering the nearly optimal controller has been very useful. Not considering it (obtaining only the optimal controllers) would mean ignoring relevant information that enables the designer to make a more informed decision.
Let us now analyze the compromise controllers ( and in Figure 8). Figure 11 shows the system responses for both controllers. The controllers and their objective values are shown in Tables 3 and 4, respectively. The optimal controller shows a smaller error in () with more aggressive control actions in (). Conversely, the nearly optimal controller shows smaller errors in () with more aggressive control actions in (). Again, although slightly dominates , no clear improvement is seen in their responses, and therefore, it may be useful to evaluate their performance on the alternative indicators in order to make a more informed final decision.

Firstly, as above, the effect of increasing the sample time is studied. If the sample time is increased to seconds, the optimal controller still dominates the nearly optimal controller (see Table 6). Secondly, we analyze how the degradation of the stack affects the controllers studied. In this case, with a degradation greater than or equal to 8%, dominates (the dominance is reversed, see Table 6). Therefore, after hours of stack operation, the nearly optimal controller is better than the optimal controller . Thirdly, we analyze the influence of the noise introduced in the system outputs on the controllers studied. Suppose we slightly decrease the amplitude of the noise introduced on . In this scenario, does not dominate (see Table 6). Similarly, if we slightly increase the amplitude of the noise introduced on , again, the controller is not dominated by . Finally, we analyze the robustness of the compromise controllers (performed in the same way as in the previous comparison). In this situation, the optimal controller seems more robust than the nearly optimal controller (see Figure 12). Therefore, after making this detailed study of the compromise controllers ( and ), we can conclude that , in certain scenarios, is just as good as the optimal controller . However, in this case, the preference for the nearly optimal controller over is unclear. Again, this analysis is very valuable for the designer before making the final decision.

Let us now study the slow controllers ( and ; see Figure 8). The objective values of both controllers are given in Table 4, and Figure 13 shows their outputs and control actions. Both controllers behave significantly differently despite having similar performance (objective values). Therefore, it does not seem reasonable to select the final solution only from the results of the optimization (obtained for a specific scenario); it seems reasonable to let the designer choose between these solutions by consulting additional information not taken into account in the optimization. The error in is greater for the nearly optimal controller (). However, for the output , the opposite occurs, and the optimal controller has a greater error (). With respect to control effort, controller is smoother for () and more aggressive for () than controller (see Table 4). Therefore, slightly dominates , but the significant differences in their behavior invite a deeper study of both controllers with the goal of making a more informed decision.

Firstly, we analyze how the increase of the sample time affects the performance of the controllers obtained, analogously to the study of the previous controllers. Again, suppose we increase the sample time to seconds. In this scenario, the optimal controller does not dominate the nearly optimal controller (see Table 7). Secondly, we analyze how the degradation of the stack affects the slow controllers. In this study, is dominated by independently of the degradation of the stack (see Table 7); therefore, the degradation of the stack does not affect the dominance of these controllers. Thirdly, suppose that the seed of the noise is randomly modified. In this scenario, does not dominate (see Table 7). Therefore, in another scenario (just as valid as the initial one for the designer), the controller could be optimal, and therefore, both controllers can be equally good. The same study made on controllers and is carried out with controllers and . In this case, of the noise seeds (out of 250 randomly obtained seeds) make the dominance disappear, and no seed reverses the dominance. Thus, it is shown that varying the noise seed can change the Pareto set obtained. Finally, we analyze the robustness of the slow controllers (in the same way as in the previous comparisons). In this context, seems to be a more robust controller than (see Figure 14). Therefore, after this deep study of the slow controllers, the nearly optimal controller could be optimal in another scenario; in fact, in certain scenarios, both controllers can be considered equally good. Again, this information is very useful for the designer before making a final decision.

Thus, in this section, an in-depth study has been carried out for the design of the control of the inlet and outlet water temperatures of the PEMFC stack. Using the proposed methodology, the utility of considering nearly optimal controllers nondominated in their neighborhood has been demonstrated. These controllers can be as good as (or better than) the optimal ones in different scenarios (sample time, stack degradation, noise change, and robustness analysis). Including all these scenarios in the design objectives is computationally intractable; however, the designer can analyze them at the decision stage. In this context, the nearly optimal solutions nondominated in their neighborhood are very relevant alternatives for the designer. Obtaining these alternatives yields controllers that are significantly different from the optimal ones, and due to this difference (in parameters), some of them show improvements in scenarios not studied in the optimization stage. Thanks to the analysis made in the decision stage, the designer can make the final decision with valuable additional information.
5. Conclusions
In this paper, the design of the multivariable control system for the cooling circuit of a PEMFC stack has been presented. This system is complex and challenging: interactions between variables, highly nonlinear dynamic behavior, etc. In the design, in addition to the set of optimal controllers under a multiobjective approach, the set of potentially useful nearly optimal controllers has been taken into account. Various aspects have been analyzed for the potentially useful solutions, for example, the influence of certain parameters of the simulator on the Pareto front. This aspect is rarely considered in the literature, and in this work, it has been shown how it can change the Pareto set obtained (for instance, when the noise seed is changed). In addition, the effects of fuel cell degradation and of model uncertainty (robustness) have also been analyzed. As observed in this work, this can be valuable information for the designer before the final decision. Therefore, this paper highlights the usefulness of the proposed methodology for the in-depth analysis of several aspects that can influence the tuning of a control structure, especially in the design of the multivariable control of complex systems such as the cooling system of a PEM fuel cell.
Including all the aspects of interest to the designer in the optimization stage is generally computationally intractable and complicates the analysis of the results. However, in this work, we have analyzed these aspects (not contemplated in the optimization stage) in the decision-making phase. Thus, the designer can consider them without excessively increasing the computational cost of solving the MOP.
In this context, the diversity of the set of controllers obtained takes on an even more relevant role. Different controllers can provide an improvement (even a significant one) in several aspects of interest not included in the MOP. Therefore, nearly optimal controllers nondominated in their neighborhood play an essential role: they provide the designer with solutions whose performance is similar to that of the optimal ones but whose characteristics are significantly different. Consequently, these controllers add diversity to the set without excessively increasing the number of solutions obtained.
In this paper, we have analyzed various scenarios of interest to the designer using the obtained set of controllers . Firstly, with an increased sample time, nearly optimal controllers can improve on the optimal ones; this means that the way the simulation itself is carried out can change the result obtained when solving the MOP. Secondly, we have analyzed how the degradation of the stack affects the performance of the controllers. In this scenario, nearly optimal controllers can again improve on the optimal ones; with this analysis, the designer can opt for nearly optimal controllers instead of controllers that are optimal only at the beginning of the stack's life. In addition, we have analyzed the effect of the seed used to generate the introduced noise. A change in this seed can cause the nearly optimal alternatives to outperform the optimal ones. Generally, the noise seed is a parameter chosen not by the designer but by the simulation tool. This implies that both scenarios (each with a different noise seed) are equally valid for the designer, and therefore, both types of controllers (optimal and nearly optimal) are also equally valid. Finally, a robustness analysis of the controllers obtained has been carried out, assessing the impact of uncertainties in the parameters of the nonlinear model. In some cases, the nearly optimal solutions are more robust.
In summary, it is worth considering these additional solutions, as they can provide new and perfectly valid design alternatives. This analysis has a computationally acceptable cost because it is applied only to a limited number of solutions: the set of optimal and nearly optimal controllers obtained. Incorporating it into the MOP statement would entail a high computational cost that may not be affordable. All of these analyses can move the designer's final choice towards a nearly optimal rather than an optimal controller.
Thanks to the methodology presented, the designer can make a final decision with additional valuable information (ignored in the classic MOP) by obtaining potentially useful new controllers. Therefore, this work has revealed the relevance of the nearly optimal alternatives nondominated in their neighborhood in the design of multivariable control systems. Given the usefulness of the approach proposed in this work on the controller design, as future work, we plan to use this methodology in system identification where nearly optimal models can be potentially useful for the designer.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This study was supported in part by the Ministerio de Ciencia, Innovación y Universidades (Spain) (grant no. RTI2018-096904-B-I00) and by the Generalitat Valenciana regional government through project AICO/2019/055.