In terms of energy production, combining conventional and renewable energy sources proves to be more sustainable and cost-effective. Nevertheless, the efficient planning and design of such systems are extremely complex due to the intermittency of renewable sources. Many existing studies fail to capture this stochasticity and/or avoid detailed reliability analysis. This research proposes a practical stochastic multi-objective optimization tool for optimally laying out and sizing the components of a grid-linked system to maximize system power at a low cost. A comparative analysis of four state-of-the-art algorithms using the hypervolume measure, execution time, and nonparametric statistical analysis revealed that the nondominated sorting genetic algorithm III (NSGA-III) was the most promising, despite its significantly longer execution time. According to the NSGA-III calculations, given the solar irradiance and energy profiles, the household would need to install a 5.5 kW solar panel tilted at 26.3° and oriented at 0.52° to produce 65.6 kWh of energy. The best battery size needed to store enough excess power to improve reliability was 2.3 kWh. The cost of the design was $73,520. Compared with a deterministic design, the stochastic technique allows for the construction of a grid-linked system that is far more cost-effective and reliable.

1. Introduction

Energy is one of the critical determinants of the growth of every country's economy. The demand for energy around the globe keeps growing exponentially, and as a result, the traditional sources of energy are struggling to keep up with this demand [1–3]. Also, the acquisition of these oil-fired energy sources comes with high and fluctuating prices [4, 5]. Moreover, the usage of fossil fuels has a substantial impact on the level of global warming, which is very detrimental to our ecosystem [6, 7]. Consequently, much attention has shifted to more environmentally friendly, reliable, and cost-effective power sources [8]. For instance, Ghana is preparing to reduce its high dependency on oil-fired energy sources by implementing policies that would enable it to add 10% renewable energy to its generation mix by the end of 2030 [9, 10]. Renewable energy sources (RES) such as wind, solar, hydro, ocean tidal, and wave power seem to be the most preferred options. Using RES could also help reduce the cost of transmission and transformation and ensure energy security [11–13].

Their power generation, on the other hand, is highly variable. As a result, properly predicting it is exceedingly difficult, and relying on a single source may result in load failure or blackouts at specific points in the energy supply system [14]. In other words, RES alone cannot guarantee a reliable and continuous supply of energy at a cost that can compete with conventional power from the grid [15]. Also, tapping energy from some renewable resources, such as setting up solar panels, can be very expensive [16, 17]. Integrating renewable and traditional energy sources to establish a hybrid energy supply system (HESS) could be a realistic option [18, 19], because one energy source can back up the other during power outages. Such a generation system could provide a steady power supply under various climatic conditions and consequently improve the efficiency and reliability of the power supply [1, 20–22].

Figure 1 shows a grid-connected system, where the HESS is connected to the electricity grid to either buy power during the energy shortage or sell the excess energy produced by the RES.

This design could decrease the high dependency on the national grid by lessening the demand of households [21, 24]. This work focuses on the design of this HESS type due to its popularity. Though the HESS has proved to be more reliable, efficient, and cost-effective, its design is highly complex, as it involves many unpredictable variables and system characteristics [25]. At the design stage, due to the stochastic nature of the RES, it is always a great challenge to select the component sizes and layout for which the HESS would be cost-effective, highly reliable, and very efficient, since the system cost, power production, and reliability depend on the optimal layout and the unit sizing of each generation system [25–28]. Thus, for a given energy demand, it is challenging to choose the optimal component sizes for the HESS due to the uncertainties in the RES. Optimization offers a balance between supply and demand, reliability, sustainability, and cost-efficiency [7, 29]. The need for optimal design has encouraged various authors to conduct a series of research studies. For instance, in the study [4], an ε-constraint method and particle swarm optimization (PSO) were employed to design the HESS, where the criteria were cost and reliability. Reference [1] utilized the nondominated sorting genetic algorithm II to minimize cost and maximize reliability of the HESS. A dynamic multi-objective particle swarm algorithm was applied to design a HESS where the minimization of cost and pollutant emissions and the renewable energy ratio were considered [4, 30]. The study [20] considered a deterministic sizing approach for a residential HESS using the TRNSYS (transient energy system simulation program) software. In the study [31], a hybrid PV/wind system was proposed to size a Lafarge cement factory in Al-Tafilah, Jordan, to maximize the RES energy fraction at less cost.
Reference [32] developed a methodology that employed various indicators to measure the average reliability and reliability uncertainty of a hybrid system. In the study [33], a strength Pareto evolutionary algorithm (SPEA) was employed for optimal economic scheduling of a grid-connected system. Reference [16] used NSGA-II and multi-objective particle swarm optimization (MOPSO) algorithms to size a hybrid PV/wind/battery energy storage system. In the research work by [21], a multi-objective tool using a nondominated sorting genetic algorithm (NSGA) was built to design a grid connection consisting of a solar PV and battery storage system. In reference [27], a meta-heuristic approach was used to design a HESS, and it was concluded that considering the output rate of hybrid energy system equipment gives designers a more accurate and realistic view of such systems. In the study [28], a fuzzy PSO was applied to optimally size a HESS, where the simulation results showed that the procedure was capable of giving higher-quality solutions. In reference [34], an artificial intelligence approach was used to build a HESS. Up-to-date comprehensive reviews of various design methodologies for the HESS have been summarized in the studies [12, 13, 35–41]. Although different scholars have approached the design challenge in various ways, it remains a complex topic [25–28, 34, 42–44]. The main issues in this design problem are the intermittency of and uncertainties in the RES. This stochastic nature of the RES poses serious reliability threats, as the energy production of the HESS oscillates strongly, making the system's performance very unpredictable [30]. This implies that conducting any reliability analysis in the HESS design without considering the random nature of the RES is not reasonable [1]. It also means that using predefined system parameters to design the HESS is highly unrealistic and, thus, affects the performance of the HESS.
Hence, it is essential to confront the stochastic nature of these sources in order to obtain more precise and realistic results. As a result, various researchers have proposed a number of design approaches to handle the variability in the RES. Some of the most popular methods are the Monte Carlo method, chance constraints, time series analysis, and stochastic approximation. For example, reference [45] assessed the effect of uncertainty on standalone solar-wind transformation systems, considering outages due to the uncertain nature of the RES. Reference [46] investigated the effect of uncertainties on a standalone wind-battery energy system. Reference [47] modeled the unpredictable nature of wind speed and solar radiation with time series analysis. Reference [48] used the ARMA model to simulate the uncertainties in the RES and emphasized that time series analysis is the most appropriate approach to simulate RES variations if the distribution is known. They compared deterministic and stochastic design approaches and concluded that, if the uncertainties are incorporated, the latter are realistic and make reliability issues easy to handle in the design process. Reference [1] developed a system to decide the best design of a renewable energy mix system; this study considered the load profiles as deterministic and the RES as random, and concluded that although dealing with uncertainties in the design comes with a high computational cost, it is much more reliable. Reference [49] looked at a new probabilistic approach to optimally allocate PV distributed generators for the reduction of system losses, applying a Monte Carlo approach to model the uncertainties in the RES. In reference [50], a novel reliability indicator based on the minimum hourly wind and solar power was proposed to increase HESS reliability via the maximization of the RES. A complete survey of stochastic methods for the HESS can be found in reference [51].
The Monte Carlo approach has been identified as the most robust mechanism for handling such a complex system [49]. In summary, most existing design methodologies that addressed this issue ignored the effect of randomness in the design [52]. The few that considered this effect mostly focused on a single objective, although in reality the design problem involves simultaneously optimizing two or more stochastic conflicting objectives (net cost and total power) together with the reliability analysis [16]. Also, a detailed and critical reliability analysis of the HESS that would inform the decision-maker about the sustainability of the system is mostly ignored [51, 53]. Thus, to the best of our knowledge, there is no practical design methodology for a household-based HESS that simultaneously models the randomness using a robust Monte Carlo method, optimizes the multiple objectives (net cost and total power), and conducts a detailed monthly reliability analysis. Therefore, in this paper, a novel stochastic multi-objective optimization approach has been developed that gives the optimal layout and sizes (capacities) of the photovoltaic panel and the storage medium to maximize energy and minimize net present cost in the design of a household grid-connected HESS. This new design methodology would help engineers and designers identify the optimal capacities of the PV panel and battery in order to minimize cost, maximize energy production, and meet a specified level of reliability for grid-connected households, even under fluctuating PV power production.

The study is organized in the following way in order to attain the stated goal: Section 1 summarizes the study’s background, key findings, and theoretical and methodological contributions to the design challenge, as well as the problem description and research objectives. The details of the mathematical formulation of the objective functions, constraints, and the overall design technique are also discussed in Section 2. The design methodology is validated in Section 3 using a household on a Ghanaian university campus. In this section, the simulation’s results are analyzed and discussed. Finally, in Section 4, a conclusion is drawn and future study directions are suggested.

2. Methodology

To ensure full electricity coverage of a household at minimal system cost, in this study we simultaneously optimize two conflicting objectives: the net cost of the system and the total power produced. Detailed modelling of these objective functions is carried out in this phase to obtain the functions that represent the overall cost and power. To evaluate the level of reliability in this design, the total PV power produced must be matched against the household load demand. Hence, the bottom-up method, a robust probabilistic strategy, is formulated in this section to estimate the household's load demand.

2.1. Modelling the Net Present Cost of the HESS

The net present cost of the HESS is computed by subtracting the present value of all revenues from that of all costs over its entire life. In modelling the net cost, the following simplifying assumptions are made: there is no limitation on grid connection if the facilities meet technical requirements; the lifetime T of the system is taken to be 25 years; no excess power from the PV panel is sold to the grid; salvage cost is not considered, which implies that the total system revenue is 0; a battery lasts L years and is replaced thereafter; discount and inflation rates are not considered in this model; and only the battery is replaced over time. The costs involved in the design are as follows:

Initial capital ($C_c$) is given as

$$C_c = P_{pv} C_{pv} + P_b C_b + C_{inst}, \tag{1}$$

where $P_{pv}$ is the PV size (kW), $P_b$ is the battery's size (kWh), $C_{pv}$ is the PV array unit cost, $C_b$ is the battery unit cost, and $C_{inst}$ is the cost of installing the PV panels [54].

Operation and maintenance cost ($M_c$) is given as

$$M_c = \left(P_{pv}\,\mu_{pv} + P_b\,\mu_b\right)\mathrm{SPW}(r, T), \tag{2}$$

where $\mu_{pv}$ and $\mu_b$ are the operation and maintenance costs per unit of the PV and battery, respectively. $\mathrm{SPW}(r, T)$ is the series present worth factor, which determines the present worth of a known uniform series and is given as

$$\mathrm{SPW}(r, T) = \frac{(1+r)^T - 1}{r(1+r)^T}, \tag{3}$$

where r and T are the interest rate and the maximum time for the project, respectively. The present value of buying electricity from the grid, grid cost ($G_c$), can be expressed as

$$G_c = c_g \sum_{t} P_g(t)\,\mathrm{SPW}(r, T), \tag{4}$$

where $c_g$ is the grid's power price and $P_g(t)$ is the power bought from the grid.

Replacement cost ($R_c$): the cost of replacing the battery can be stated as

$$R_c = Y\,C_r\,P_b \sum_{i=1}^{n} \mathrm{SPPW}(r, iL), \tag{5}$$

where $\mathrm{SPPW}$ is the single payment present worth factor, given as

$$\mathrm{SPPW}(r, y) = \frac{1}{(1+r)^y}. \tag{6}$$

Y is the number of batteries to be replaced, L is the battery's lifetime, n is the number of times the battery is replaced, and $C_r$ represents the replacement cost per unit battery. Since revenue = 0, the net present cost of the system is

$$C_{npc} = C_c + M_c + G_c + R_c. \tag{7}$$

Substituting equations (1), (2), (4), and (5) into equation (7), we obtain

$$C_{npc} = P_{pv} C_{pv} + P_b C_b + C_{inst} + \left(P_{pv}\,\mu_{pv} + P_b\,\mu_b + c_g \sum_t P_g(t)\right)\mathrm{SPW}(r, T) + Y\,C_r\,P_b \sum_{i=1}^{n} \mathrm{SPPW}(r, iL). \tag{8}$$

Equation (8) represents the net present cost of the household-based HESS.
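As an illustration, the cost terms above can be assembled in a short routine. The following Python sketch uses our own illustrative names and example figures, not the paper's notation or data:

```python
# Hedged sketch of the net-present-cost terms of the HESS cost model.
# All names (pv_kw, c_pv, etc.) and values are illustrative, not the paper's.

def series_present_worth(r, T):
    """Present worth of a uniform annual series over T years at interest rate r."""
    return ((1 + r) ** T - 1) / (r * (1 + r) ** T)

def single_payment_present_worth(r, year):
    """Present worth of a single payment made `year` years from now."""
    return 1 / (1 + r) ** year

def net_present_cost(pv_kw, batt_kwh, c_pv, c_batt, c_install,
                     om_pv, om_batt, grid_kwh, grid_price,
                     r, T, batt_life):
    """Capital + O&M + grid purchases + battery replacements, all discounted."""
    capital = pv_kw * c_pv + batt_kwh * c_batt + c_install
    spwf = series_present_worth(r, T)
    maintenance = (pv_kw * om_pv + batt_kwh * om_batt) * spwf
    grid_cost = grid_kwh * grid_price * spwf
    # Replace the battery every `batt_life` years within the project horizon.
    replacement = sum(batt_kwh * c_batt * single_payment_present_worth(r, y)
                      for y in range(batt_life, T, batt_life))
    return capital + maintenance + grid_cost + replacement
```

The series present worth factor discounts the recurring O&M and grid-purchase costs, while each battery replacement is discounted individually with the single payment factor.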

2.2. Photovoltaic System Modelling

In designs and practical analyses concerning solar energy systems, it is essential to estimate how much power is produced by the tilted panel [20]. This power is basically a function of the global irradiance, the efficiency, and the system capacities (our decision variables). We now derive the function that estimates the global irradiance on the panel.

The global irradiance on a tilted panel is mostly estimated from the measured global irradiance on a horizontal plane by various mathematical models [55]. The accuracy of these models depends on the angles of the solar geometry, which is depicted in Figure 2. All the angles can be estimated except the tilt angle (β) and azimuth angle (γ) (decision variables), which we estimate using the optimization approach.

To obtain a better model for estimating irradiance in Ghana, various existing models were reviewed, and it was identified that the anisotropic model of Klucher, developed from the Temps and Coulson and the Liu and Jordan models, provided fruitful results for all sky conditions, such as partly clear, clear, and cloudy [55]. It is given by

$$G_T = G_b R_b + G_d \left(\frac{1+\cos\beta}{2}\right)\left(1 + F\sin^3\frac{\beta}{2}\right)\left(1 + F\cos^2\theta\,\sin^3\theta_z\right) + G\,\rho\left(\frac{1-\cos\beta}{2}\right), \tag{9}$$

where

$$F = 1 - \left(\frac{G_d}{G}\right)^2, \qquad \cos\theta = \cos\theta_z\cos\beta + \sin\theta_z\sin\beta\cos(\gamma_s - \gamma), \tag{10}$$

and where $G_b$ is the beam irradiance; $G_d$ is the diffuse irradiance; $G$ is the global horizontal irradiance; the last term of equation (9) is the reflected irradiance; $\rho$ is the albedo; $\omega$ is the hour angle; $\delta$ denotes the sun's declination angle; $\phi$ is the latitude; $\theta$ is the incidence angle; $\theta_z$ is the zenith angle; $\beta$ is the tilt angle of the panel; and $\gamma_s$ and $\gamma$ are the sun's and the PV azimuth angles, respectively. Thus, $\theta_z = 90° - \alpha$, where $\alpha$ is the altitude angle.
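The Klucher transposition can be sketched in a short routine. This is a hedged illustration: the angle conventions (radians), the guard on the beam tilt factor, and the default albedo of 0.2 are our assumptions, not the paper's implementation:

```python
import math

def klucher_tilted_irradiance(beam_h, diffuse_h, beta, theta, theta_z, albedo=0.2):
    """Global irradiance on a tilted plane via the Klucher anisotropic model.
    beam_h, diffuse_h: beam and diffuse irradiance on the horizontal plane (W/m^2).
    beta: panel tilt, theta: incidence angle, theta_z: solar zenith (all radians)."""
    global_h = beam_h + diffuse_h
    if global_h <= 0:
        return 0.0
    f = 1 - (diffuse_h / global_h) ** 2              # Klucher modulating factor F
    # Beam tilt factor, guarded against the sun being behind the panel/horizon.
    rb = max(math.cos(theta), 0.0) / max(math.cos(theta_z), 1e-6)
    diffuse_t = diffuse_h * 0.5 * (1 + math.cos(beta)) \
        * (1 + f * math.sin(beta / 2) ** 3) \
        * (1 + f * math.cos(theta) ** 2 * math.sin(theta_z) ** 3)
    reflected_t = global_h * albedo * 0.5 * (1 - math.cos(beta))
    return beam_h * rb + diffuse_t + reflected_t
```

For a horizontal panel with the sun at zenith, the model collapses to the measured global horizontal irradiance, which is a useful sanity check.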

The power produced by the PV panel at time t is given by equation (11), in which $\eta_{pv}$ is the PV module's efficiency. The variables optimized to ensure maximum PV power output in this paper are the tilt angle β, the azimuth angle γ, and the PV size.

2.3. The Bottom-Up Method

As discussed earlier, a critical and realistic reliability analysis calls for estimating the household load demand. Many approaches have successfully predicted household loads with high speed; however, where accuracy is the priority, the bottom-up method dominates [56, 57]. Moreover, the bottom-up approach captures the effect of each household gadget in estimating the total energy demand [58, 59]. Therefore, it is used in this paper to forecast the household energy demand.

The main logic behind the bottom-up approach is to deduce the overall energy consumption of the household using the appliance wattage [60, 61]. In reference [62], the details of this method are presented. Its mechanism is depicted by Figure 3.

The activation of an appliance is checked using a probability function called the starting probability function, given by equation (13). Its arguments are the appliance, the time step (in minutes), and the hour of the day. It combines an hourly probability factor, which models the level of activity of each appliance within a day; the mean daily starting frequency, which models the average number of times each appliance is used per day; a step-size scaling factor, which scales the probabilities according to the simulation time step; and a probability indicating the availability of a special class of appliances in a particular household [62].

The monthly average energy consumption of a household can be generated from the active and standby consumption parameters by equation (14), whose inputs are the total number of appliances and each appliance's nominal and standby power [62].
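A minimal sketch of the bottom-up simulation logic described above is given below. The appliance fields and parameter names are illustrative stand-ins for the paper's parameters (hourly probability factor, mean daily starting frequency, step-size scaling, and ownership probability):

```python
import random

def simulate_daily_load(appliances, dt_min=1, seed=0):
    """Minute-resolution bottom-up load sketch. Each appliance dict holds:
    hourly_prob[24], mean_daily_starts, power_w, duration_min, ownership_prob.
    Field names are illustrative, not the paper's exact notation."""
    rng = random.Random(seed)
    steps = 24 * 60 // dt_min
    load = [0.0] * steps
    step_scale = dt_min / 60.0            # scale hourly probabilities to the step size
    for app in appliances:
        if rng.random() > app["ownership_prob"]:
            continue                      # this household does not own the appliance
        for t in range(steps):
            hour = (t * dt_min) // 60
            # Starting probability: hourly activity level x daily frequency x step scale
            p_start = app["hourly_prob"][hour] * app["mean_daily_starts"] * step_scale
            if rng.random() < p_start:
                # Appliance runs at nominal power for its duration once started.
                for k in range(t, min(t + app["duration_min"] // dt_min, steps)):
                    load[k] += app["power_w"]
    return load
```

Summing such daily profiles over appliances and days yields the monthly consumption estimate; standby power can be added as a constant baseline per owned appliance.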

2.4. Formulation of the Stochastic Multi-Objective Optimization Problem

The objective functions are:
(1) Total power, given by equation (15).
(2) Net present cost, given by equation (8).

The dispatch strategy of the HESS follows that described in reference [1].

2.4.1. Decision Variables

In this study, in order to obtain maximum PV power, the position of the panel, described by the tilt (β) and azimuth (γ) angles, needs to be controlled [63]. Again, the system capacities (the PV and battery sizes) affect both the cost and the power linearly; thus, the design variables in this study are the sizes of the PV panel and the battery together with the panel positioning angles, summarized as the decision vector $x = (P_{pv}, P_b, \beta, \gamma)$.

One of the most important criteria that the power system must meet is reliability, measured here by the loss of load probability (LLP): the overall probability that there will be a power shortfall in a year. It is expressed as

$$\mathrm{LLP} = \frac{\text{number of hours with } D(t) > P(t)}{\text{total number of hours}} \le \mathrm{LLP}_{max},$$

where D(t) is the load demand estimated by equation (14), P(t) is the total power generated at time t, given by equation (15), and $\mathrm{LLP}_{max}$ is the desired level of reliability [4].
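The LLP reduces to a simple count over the simulation horizon; a minimal sketch:

```python
def loss_of_load_probability(demand, supply):
    """LLP: fraction of time steps in which demand exceeds the available supply.
    `demand` and `supply` are equal-length hourly series (illustrative helper)."""
    shortfall_hours = sum(1 for d, p in zip(demand, supply) if d > p)
    return shortfall_hours / len(demand)
```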

The current state of a battery is normally described by its state of charge (SOC), which can be computed analytically by the following equation [64]:

$$\mathrm{SOC}(t+1) = \mathrm{SOC}(t) + \frac{\eta_b\,P_B(t)\,\Delta t}{E_b},$$

where $P_B(t)$ is the charged or discharged power, $\eta_b$ is the battery's efficiency, and $E_b$ is the battery capacity. The discharge state must not go below $\mathrm{SOC}_{min}$ and the charging cannot exceed $\mathrm{SOC}_{max}$; that is, $\mathrm{SOC}_{min} \le \mathrm{SOC}(t) \le \mathrm{SOC}_{max}$.
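A one-step SOC update with these bounds can be sketched as follows (applying the efficiency on charge and discharge separately is our simplifying convention):

```python
def step_soc(soc, power_kw, dt_h, capacity_kwh, eta=0.9,
             soc_min=0.2, soc_max=1.0):
    """One SOC update: positive power charges the battery, negative discharges.
    Parameter values are illustrative; the result is clamped to the SOC bounds."""
    if power_kw >= 0:
        soc += eta * power_kw * dt_h / capacity_kwh       # charging loses energy
    else:
        soc += power_kw * dt_h / (eta * capacity_kwh)     # discharging draws extra
    return min(max(soc, soc_min), soc_max)
```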

Now, the stochastic multi-objective optimization problem (SMOOP) can be formulated as

$$\min_{x}\;\bigl(-\mathbb{E}[P(x,\xi)],\; C_{npc}(x)\bigr)$$

subject to

$$\mathrm{LLP} \le \mathrm{LLP}_{max}, \tag{19}$$
$$0 \le P_{PV}(t) \le P_{pv}, \tag{20}$$
$$\mathrm{SOC}_{min} \le \mathrm{SOC}(t) \le \mathrm{SOC}_{max}. \tag{21}$$

The reliability constraint is presented in equation (19), which states that at any time the system's total loss of load probability must satisfy the desired level of reliability [1]. Constraint (20) enforces that the hourly generated PV power must not exceed the rated capacity of the solar PV generator. Constraint (21) bounds the state of charge of the battery.

The power function contains random data (the solar irradiance), so we maximize its expected value. It is negated because every maximization problem can be written as a minimization problem by the duality principle.

2.5. Method of Handling the Randomness in the Design

Many stochastic optimization approaches exist in the literature [51]. Among these methods, the Monte Carlo (sample average) approach has been identified as the most robust mechanism for handling such a complex system with a high level of uncertainty [49]. In the next section of this study, we use a stochastic optimization method called the sample average approximation to tackle the uncertainty.

2.5.1. Sample Average Approximation (SAA)

Given the objective function of equation (12) involving uncertainty, it can be rewritten as $f(x, N)$, where $N$ denotes the randomness effect. Since $f(x, N)$ cannot be easily optimized directly, we target its expected value, $\mathbb{E}_N[f(x, N)]$ [65].

Assume we obtain a sample of $M$ i.i.d. realizations $N^1, \dots, N^M$ of the random vector N after running the Monte Carlo simulations; then the SAA of $\mathbb{E}_N[f(x, N)]$ can be defined as

$$\hat{f}_M(x) = \frac{1}{M}\sum_{j=1}^{M} f(x, N^j).$$

By the law of large numbers, we have $\hat{f}_M(x) \to \mathbb{E}_N[f(x, N)]$ almost surely as $M \to \infty$.

Thus, replacing the stochastic objective by its sample average yields a deterministic multi-objective problem. The SMOOP can therefore be rewritten in this deterministic form, which we solve with a multi-objective optimization method in the next section.
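The SAA idea can be demonstrated with a toy objective. Here the scenario distribution (a Beta law) and the function f are illustrative only, not the paper's irradiance model:

```python
import random

def saa_objective(f, x, samples):
    """Sample average approximation of E[f(x, xi)] over pre-drawn scenarios."""
    return sum(f(x, xi) for xi in samples) / len(samples)

# Toy check: f(x, xi) = x * xi with xi ~ Beta(2, 5) draws. The SAA value
# approaches x * E[xi] as the sample grows (law of large numbers).
rng = random.Random(42)
samples = [rng.betavariate(2, 5) for _ in range(20000)]
approx = saa_objective(lambda x, xi: x * xi, 3.0, samples)
# E[xi] = 2/7 for Beta(2, 5), so `approx` should land near 3 * 2/7.
```

In the design problem, the same averaging is applied to the stochastic power objective, with the scenarios drawn from the fitted irradiance distribution.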

2.6. Methods for Solving Multi-Objective Optimization Problem (MOOP)

In a MOOP, since the objective functions are often conflicting, no single point optimizes all of them at once; instead, the solution is a set of trade-off solutions, called Pareto optimal or nondominated solutions.
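Extracting the nondominated set from a finite list of objective vectors (minimization) can be done with a direct pairwise check; a minimal sketch:

```python
def pareto_front(points):
    """Return the nondominated subset of `points` (minimizing all objectives).
    A point is dominated if some other point is no worse in every objective
    and strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and
                        any(q[i] < p[i] for i in range(len(p)))
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```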


Evolutionary algorithms (EA) are algorithms that mimic natural phenomena and deal with problems via mechanisms that emulate the attitude of living creatures [6, 66, 67].

They can:
(i) Search for solution sets that are closer to the true Pareto fronts;
(ii) Search for solution sets that are sufficiently spread to represent the entire range of the Pareto front;
(iii) Deal with discontinuities, nonconvexity, etc., of the Pareto fronts.

Some of the standard multi-objective optimization algorithms that have been proposed and successfully applied in various applications for decades [6, 14, 16, 21, 66–69] are the nondominated sorting genetic algorithm II (NSGA-II), the nondominated sorting genetic algorithm III (NSGA-III) [66], the multi-objective genetic algorithm (MOGA), and the multi-objective grey wolf optimizer (MOGWO) [67]. Each of these algorithms may outperform the others on at least one of the following criteria: convergence, diversity preservation, and execution time [69, 70]. Therefore, a detailed comparative analysis is necessary to decide which algorithm solves the problem in this paper best. The mechanism of the NSGA-III is briefly discussed below.

NSGA-III was developed in 2014 for solving multi-objective optimization problems. It is an extension of the popular NSGA-II to handle many objective functions and improve the distribution and diversity of solutions [71, 72]. Figure 4 summarizes the mechanisms in NSGA-III.

2.7. Performance Assessment with Hypervolume Metric

The performance of multi-objective EAs (MOEAs) is evaluated using the following criteria [69, 73]: convergence, diversity preservation, and actual execution time. Many metrics exist, and detailed discussions of them can be found in the studies [66, 69, 70, 73–75].

According to a detailed review by Riquelme et al., the hypervolume is the most used metric in literature [75] and thus, it is the measure that is considered in this paper. It measures proximity and diversity at the same time and the higher its value, the better the approximation (indicative of better spread and convergence of solutions) [76].

Given a Pareto front approximation $A$ and a reference point $z$, the hypervolume indicator is given by [76]

$$HV(A, z) = \lambda\left(\bigcup_{a \in A} [a, z]\right),$$

where $\lambda$ denotes the Lebesgue measure of the union of the boxes spanned by each solution in $A$ and the reference point $z$.
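For two objectives, the hypervolume can be computed exactly by sweeping the front and summing rectangular slices; a minimal sketch (minimization; the reference point must be worse than every front point to contribute):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective front (minimization) w.r.t. reference point.
    Sort by the first objective, then accumulate the rectangle each point adds."""
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                      # skip dominated points in the sweep
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```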

3. Simulations, Results, and Discussion

We consider a simulation-based optimization method in this paper to obtain the optimal design of the HESS.

In terms of scalability, though the focus of this study is the design of a grid-connected household-based system, the methodology can efficiently handle a grid-linked system on a national basis under the assumptions made. That is, the design approach scales well: it maintains its efficiency, effectiveness, and accuracy by handling the randomness and the conflicting nature of the objectives even under an increased or expanding operational workload or energy demand. Figure 5 summarizes the mechanisms employed to solve the design problem.

3.1. Simulations

In this section, the models of the system and the bottom-up method are simulated. The Monte Carlo simulation is applied to approximate the expected value of the energy function, and its convergence is shown. To test the proposed methodology, an on-grid HESS comprising PV, grid electricity, and a battery bank is established for a household at the Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana. All the simulations and the optimization modelling of the HESS are carried out in the MATLAB and R environments.

The main inputs of this model are the hourly solar irradiance, the hourly energy consumption data, and PV system economic characteristics given in Table 1. These datasets are described below.

In this study, due to the unavailability of measured hourly load profiles for the household (a five-room flat), the load profile was synthetically generated. The energy consumption data are generated using the starting probability function given by equation (13), the parameters in Table 2, and the procedure outlined in Figure 3 in the previous section. The simulated hourly load demand is shown in Figure 6; it can be observed that between 5:00 AM and 8:00 AM there was slight pressure on the power, reflecting the morning activities, and between 1:30 PM and 8:00 PM there was massive pressure on the power, depicting the numerous household activities that went on after work.

Another input used to test the proposed methodology is one year of hourly solar irradiance, plotted in Figure 7. These data are estimated from solar irradiance measured on the horizontal surface, obtained from KNUST. The solar irradiance data contain only the diffuse and beam measurements, so the reflected irradiance is estimated by multiplying the albedo, 0.2, by the global irradiance on the horizontal surface. The irradiance on the panel feeds the PV power function given by equation (11), discussed in the previous section, with the parameters in Table 1. An hourly time step is used because it is assumed that within a period of one hour the effect of the variations in the RES is insignificant [1].

Figure 8 shows the details of the PV power and the energy demand for selected months. Any energy deficit is covered by power from the grid. This comparison helps the designer conduct a proper reliability analysis of the HESS, since it gives a clear idea of the blackout hours in each month. It is observed that January would experience fewer blackouts than the other months.

Before applying the Monte Carlo simulations, the randomness in the solar irradiance is modeled by fitting the historical weather data to an appropriate probability distribution. The fitted distribution is then used in the Monte Carlo simulation, and the expected value of the power and the LLP are computed.

To fit the randomness effect in the meteorological data, the cumulative distribution functions (CDFs) of two standard probability distributions, Weibull and Beta, are applied.

After using the RMSE to compare the output of each fitted distribution with the actual data, the Beta (RMSE = 0.3306) outperforms the Weibull (RMSE = 0.4167), as shown in Figure 9.
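A comparable distribution-selection step can be sketched with scipy: fit each candidate law, then compare the fitted CDF against the empirical CDF by RMSE. The Beta(2, 5) sample here is synthetic stand-in data, not the KNUST irradiance:

```python
import numpy as np
from scipy import stats

def cdf_rmse(data, dist):
    """RMSE between a fitted distribution's CDF and the empirical CDF of `data`."""
    params = dist.fit(data)
    x = np.sort(data)
    empirical = np.arange(1, len(x) + 1) / len(x)
    return float(np.sqrt(np.mean((dist.cdf(x, *params) - empirical) ** 2)))

# Fit both candidate distributions to synthetic (normalized) irradiance samples
# and keep the one whose CDF tracks the empirical CDF most closely.
rng = np.random.default_rng(1)
data = rng.beta(2, 5, 2000)
scores = {"beta": cdf_rmse(data, stats.beta),
          "weibull": cdf_rmse(data, stats.weibull_min)}
best = min(scores, key=scores.get)
```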

After running the Monte Carlo simulations with a sample size of 10,000, the results in Figure 10 show that the mean value of the power from the Monte Carlo simulations gets closer to the average of the actual data as the sample size grows, consistent with the law of large numbers stated previously.

3.2. The Optimization Module

In this section, we run each algorithm with parameters in Table 3 and the Pareto fronts of each design process are compared.

3.2.1. Parameters Setting of the Algorithms

The optimization algorithms depend on a number of parameters and functions whose settings affect their performance in terms of speed, diversity, and convergence. The general approach to determining an appropriate parameter combination for an algorithm relies on many trials of different combinations; the combination that produces the best results is selected for the program used to solve the problem [77]. In this paper, in order to obtain the best parameters for all the algorithms, improving their running time, convergence, and ability to obtain diverse solutions (avoiding convergence to local optima), we applied this general approach coupled with detailed statistical tests on the parameter values. During the analysis, different values of the parameters were tested for each algorithm. For instance, for the NSGA-III, six population sizes (50, 100, 150, 200, 300, and 400) were tested and assessed based on execution time and hypervolume (for diversity and convergence). Each configuration was run 40 times, and the results displayed in Figure 11 show instability when the population size is between 50 and 100: the algorithm sometimes terminates at suboptimal solutions, as indicated by their lower hypervolume values. When the population size is 150 or higher, the optimal value is stable and the different population sizes give equally good solutions. Figure 12 shows no significant difference between the hypervolume indicators for population sizes of 150 or more; thus, a population size of 150 leads the NSGA-III to an equally good but more efficient solution, since the execution time grows almost linearly with the population size.

For each of the parameters in each algorithm, a similar analysis is conducted but, though each algorithm has a unique mechanism of operations and search procedures, to make the algorithms contrast, the same maximum number of iterations, population size, and archive size are set up. The most preferred set of parameters is summarized in Table 3.

Since the NSGA-III is a heuristic search method, its optimization results exhibit a certain randomness. To eliminate this contingency, each algorithm is run 40 times independently, and the averaged results are shown in Figure 13.

3.2.2. Comparison of Pareto Fronts

In other words, since the solution changes from run to run, 40 sets of runs were carried out for each algorithm and their average was used for the analysis.

To choose the better algorithm, a standard and robust performance metric known as the hypervolume is employed. Again, to test the significance of the median differences among algorithms and designs, nonparametric tests were performed. Unlike parametric tests, nonparametric tests make flexible or no assumptions about the distribution of the data and are suitable for non-normally distributed data [78]. In this paper, a nonparametric test called the Kruskal–Wallis test, a nonparametric equivalent of one-way ANOVA, is employed to test the significance of differences in the medians of the hypervolume and the execution time of each algorithm.

Also, the differences in the medians of any two samples are tested by the Mann–Whitney–Wilcoxon test.
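Both tests are available in scipy.stats; the sketch below applies them to synthetic stand-in hypervolume samples (40 runs per algorithm, invented baseline values), not the paper's measured results:

```python
import random
from scipy import stats

# Hypothetical hypervolume samples from 40 independent runs of each algorithm.
rng = random.Random(3)
hv = {name: [base + rng.gauss(0, 0.01) for _ in range(40)]
      for name, base in [("NSGA-III", 0.92), ("NSGA-II", 0.90),
                         ("MOGWO", 0.88), ("MOGA", 0.80)]}

# Kruskal-Wallis: do the four algorithms share the same median hypervolume?
h_stat, p_kw = stats.kruskal(*hv.values())

# Pairwise follow-up with the Mann-Whitney U test between the top two algorithms.
u_stat, p_mwu = stats.mannwhitneyu(hv["NSGA-III"], hv["NSGA-II"],
                                   alternative="two-sided")
```

A small p-value in the Kruskal-Wallis test justifies the pairwise follow-ups; with clearly separated medians, both p-values come out far below 0.05.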

We tackle the problem in both deterministic and stochastic ways by using the predefined average solar irradiance and the output of the Monte Carlo simulation, respectively; in both cases, the averaged results over the 40 independent runs are used as the final results.

The solutions are displayed in Figure 14. Besides the figure illustrations, examples of the Pareto fronts (PFs) for both cases are presented in Tables 4 and 5. The breakdown of the cost is presented so that the decision maker (DM) has a fair idea of both the fixed and recurring costs of the design.

3.3. Post-Pareto Front Analysis and Sensitivity Analysis

There are 80 sets of noninferior solutions, and selecting the single most preferred one from these would be burdensome for the decision maker. This calls for the application of critical and robust multiple-criteria decision-making methods, and many authors have developed and applied various mechanisms. For instance, in reference [79], a multiobjective decision-making approach using a nonuniform weight-generator method was developed to reduce the Pareto-optimal set. In reference [80], a robust fuzzy c-means clustering approach coupled with an ε-dominance-based evolutionary algorithm was developed to solve a combined heat and power economic emission problem, where the fuzzy c-means clustering was used to obtain the decision maker's most preferred solution. A detailed and comprehensive review of multiple-criteria decision-making methods for MOOPs can be found in the study [35].

Following reference [35], a post-Pareto analysis method called the technique for order of preference by similarity to ideal solution (TOPSIS), whose details can be found in reference [81], is applied to enhance the decision-making process. This method is chosen for its simplicity and robustness in decision making [35]. The mechanism of TOPSIS is displayed in Figure 15.
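A minimal sketch of the TOPSIS ranking step is given below; the Pareto solutions and the equal criterion weights are illustrative assumptions, whereas in the study the candidates come from the computed front:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (TOPSIS).

    matrix  : (n_alternatives, n_criteria) decision matrix
    weights : criterion weights summing to 1
    benefit : True for criteria to maximize, False for criteria to minimize
    """
    m = np.asarray(matrix, dtype=float)
    # 1. Vector-normalize each criterion column and apply the weights.
    v = weights * m / np.linalg.norm(m, axis=0)
    # 2. Ideal best/worst per criterion, depending on its direction.
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 3. Euclidean distances to the ideal best and ideal worst points.
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    # 4. Relative closeness: higher means a better compromise solution.
    return d_worst / (d_best + d_worst)

# Illustrative Pareto solutions: columns are [cost ($), power (kWh)];
# cost is minimized, power is maximized; equal weights are assumed.
pareto = [[73520, 65.6], [69000, 58.0], [80500, 70.1]]
scores = topsis(pareto, np.array([0.5, 0.5]), np.array([False, True]))
print("closeness scores:", np.round(scores, 3))
print("preferred solution index:", int(np.argmax(scores)))
```

The alternative with the highest closeness score is returned to the DM as the single compromise solution.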

After the post-Pareto front analysis, the optimal design is shown in Table 6.

3.4. Discussion of Results

From Figure 16, it can be observed that the differences in the hypervolume are statistically significant according to the Kruskal–Wallis test. The NSGA-III has the highest hypervolume indicators, followed by the NSGA-II, MOGWO, and MOGA. This implies that the NSGA-III has better convergence ability: its solutions are closer to the approximate Pareto front and well distributed along it. MOGA has the worst hypervolume values, which means that its solutions tend to get stuck in local optima, far from the approximate Pareto front. Apart from the convergence and diversity-preservation criteria, the algorithms are also compared by their average execution times. In Figure 17, it can be seen that, although the NSGA-III has better convergence and a well-spread solution set, its execution time is the highest. The MOGWO has the shortest execution time.
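For reference, the hypervolume indicator can be computed exactly in the two-objective case. The sketch below assumes both objectives are minimized (a maximized objective such as power would first be negated) and uses illustrative fronts rather than the study's data:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front with respect to a
    reference point dominated by every front member; larger is better."""
    hv, prev_f2 = 0.0, ref[1]
    # Sorting by the first objective turns the dominated region into a
    # union of rectangles, each clipped at the height already covered
    # by better neighbours.
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Two illustrative nondominated fronts (both objectives minimized);
# the better-converged front A encloses the larger hypervolume.
front_a = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
front_b = [(1.5, 4.5), (2.5, 2.5), (3.5, 1.5)]
ref = (5.0, 6.0)
print("HV(A) =", hypervolume_2d(front_a, ref))  # 16.0
print("HV(B) =", hypervolume_2d(front_b, ref))  # 11.75
```

Because front A dominates front B everywhere, its hypervolume is strictly larger, which is exactly the property that makes the indicator a joint measure of convergence and spread.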

Considering the results of the two designs displayed in Figure 18, it is observed from Figure 14 that the execution time of the stochastic design is almost 18 times that of the deterministic case. It is also observed that the cost values of the stochastic case are close to those of the deterministic case. It could be inferred that the NSGA-III works better on the deterministic MOOP than on the stochastic one; however, Figure 19 shows that the reliability of the stochastic design is much higher than that of the deterministic one. This implies that the stochastic design helps decision makers choose solutions evaluated under scenarios closer to real-life conditions than the predefined irradiance assumed in the deterministic design. With the application of the NSGA-III, the set of nondominated solutions trading off the cost and the power produced by the HESS is obtained. The results show that, given the solar irradiance and energy profiles, the household needed to install a solar panel with a capacity of 5.5 (kW), tilted at 26.3° and oriented at 0.52°, to produce 65.6 (kWh) of power. The best battery size needed to store enough excess power to improve reliability was 2.3 (kWh). The cost of the design was $73,520, of which the fixed initial capital cost was $37,495.2.

The recurring costs are the replacement and grid cost of $28,672.8 and the operation and maintenance cost of $7,352 (Table 6). Table 7 displays the results of the sensitivity analysis; it can be observed that the initial capital cost has a significant effect on the design objectives.
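The reported cost components can be checked against the total net present cost directly:

```python
# Net-present-cost breakdown reported for the optimal design (Table 6).
capital = 37495.2            # fixed initial capital cost ($)
replacement_grid = 28672.8   # replacement and grid cost ($)
o_and_m = 7352.0             # operation and maintenance cost ($)

total = capital + replacement_grid + o_and_m
print(f"total NPC = ${total:,.1f}")  # matches the reported $73,520
```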

4. Conclusion and Recommendations

This study aimed at building a novel, practical, stochastic multiobjective optimization methodology that captures the randomness of the RES and selects the optimal layout and sizes (capacities) of the photovoltaic panel and the storage medium to maximize energy and minimize net present cost in the design of a household grid-connected HESS. Even under fluctuating weather conditions, the model can assist designers and engineers in selecting the optimal layout parameters and component sizes to meet the energy demand at minimum cost. To evaluate the quality of the results, four state-of-the-art MOEAs were compared by the hypervolume metric, execution time, and nonparametric statistical tests; the results showed that the NSGA-III was the most promising, though its execution time was considerably higher. The most optimal solution for the decision maker indicated that, given the solar irradiance and energy profiles, the household needed to install a solar panel with a capacity of 5.5 (kW), tilted at 26.3° and oriented at 0.52°, to produce 65.6 (kWh) of power. The best battery size needed to store enough excess power to improve reliability was 2.3 (kWh). The cost of the design was $73,520, of which the fixed initial capital cost was $37,495.2 and the recurring costs were the replacement and grid cost of $28,672.8 and the operation and maintenance cost of $7,352. To assess the system's reliability, both stochastic and deterministic designs were carried out; the results showed that, although the computational cost of the stochastic design was much higher, its reliability was also much higher than that of the deterministic case. Analyzing the effect of changes in the economic parameters on the optimal solution through sensitivity analysis indicated that small perturbations in the capital cost could lead to a significant change in the optimal design.

5. Recommendations for Further Studies

The following are some future directions in the design of the HESS.

(1) The economic characteristics used in this study are assumed to be predefined parameters. Taking into account the stochastic nature of some economic values, like the interest rates, could be done in future studies.

(2) The bottom-up approach used to generate the household load profiles could be improved in future works by incorporating the temperature effects and the differentiation of power consumption between weekdays and weekends.

(3) Additional renewable sources such as wind, tide, and biomass could be considered in future works.

Data Availability

All the data will be made available upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All authors read and approved the final draft of this paper and were highly involved in the concept, modelling, formulation, implementation, analysis, and writing of the paper.


Acknowledgments

Our deepest gratitude goes to Prof. Tor Soverick from the University of Bergen, Norway, for offering his continuous advice, support, and encouragement throughout the course of this study. This work was fully funded by the National Institute of Mathematical Sciences, Ghana.