Abstract

A simple and flexible model for the economic statistical design of joint X̄ and S² control charts is proposed. The design problem is approached through constrained fuzzy multiobjective modeling with three objectives: joint power, joint Type I error, and joint total control cost. Fuzzy membership functions measure the satisfaction levels of the objectives, and the overall satisfaction level of the design is calculated using a weighted-average method. A genetic algorithm is designed to solve the model. The strength of the model lies in its effectiveness in detecting assignable causes through the joint design and in its simplicity and flexibility in dealing with uncertainties in the design.

1. Introduction

Nonconforming products are often caused by assignable causes in the production process. These nonconformities cost companies a great deal of money in warranty, rework, and scrap costs, among others. Statistical process control (SPC) aims at identifying and isolating assignable causes in a timely manner. An assignable cause may produce a shift in the process mean, a drift in the process variation, or both, and hence lead to an increase in the number of nonconforming products. Control charts, as an on-line SPC tool, are the most widely used tool for this purpose, as they are very effective in this respect when they are well designed. Unfortunately, no single control chart can effectively detect the assignable causes for both the shift in the process mean and the drift in the process variation. Usually, the X̄ control chart is used to detect shifts in the process mean along with one of the R, S, or S² control charts to detect drifts in the process variation.

Since the mean and variance of a normally distributed process are statistically independent, the X̄ and S² control charts can be designed simultaneously to be effective in detecting the assignable causes for shifts in the mean and drifts in the variation. The simultaneous design of these control charts involves determining the sample size, the frequency of sampling, and the widths of the control limits so as to achieve certain statistical and economic requirements, such as limits on the Type I and Type II errors and on the total control cost. Unfortunately, achieving high statistical standards usually comes at the price of low economic standards, and vice versa. This dilemma calls for a design that compromises between the statistical requirements and the economic requirements of the control charts.

The statistical design (SD) maximizes the power of the control chart under an upper-limit constraint on the Type I error. References [1–5], among others, designed their control charts based on the SD. The drawback of this design is that it does not account for the economic consequences of the design: its main focus is to enhance the statistical properties of the SPC, and this usually happens at the expense of the economic aspect.

The economic design (ED) minimizes the total cost of the SPC. References [6–14], among others, designed their control charts based on the ED. Reference [15] showed that the ED usually gives poor statistical properties and poor estimation of the model parameters.

To overcome the disadvantages of the SD and the ED, [16] proposed a design method, known as the economic statistical design (SED), in which the total cost of the SPC is minimized under minimum-power and maximum-Type-I-error constraints. Many authors followed the steps of [17] and designed their control charts based on this design. Reference [18] used a Markov chain approach to design control charts based on the SED. Reference [19] used data envelopment analysis in a multiobjective optimization scheme to design control charts based on the SED. References [20–23] utilized the Taguchi loss function in the SED framework to design their control charts. References [24, 25] investigated the optimal SED of the control chart when the underlying process is nonnormal. The joint design of control charts has also been discussed in the literature: [26] used a genetic algorithm to achieve the best design of joint X̄ and S² control charts, [14] designed joint control charts using differential evolution, and [27] investigated the joint design of control charts in a preventive maintenance context.

In this paper, a multiobjective model for designing joint X̄ and S² control charts is presented to find a compromise between the Type I error, the Type II error, and the total control cost of the SPC. The proposed model utilizes fuzzy logic to score each objective through a membership function that reflects the satisfaction level for that objective. The overall satisfaction level of the design is calculated as the weighted average of the three satisfaction levels. The model maximizes the total satisfaction level under constraints on the total cost per hour, the sample size, the frequency of sampling, and the widths of the control limits. A genetic algorithm is designed to optimize the model, with the sample size, the frequency of sampling, and the widths of the control limits of the X̄ and S² charts as the decision variables.

To the author's knowledge, the combination of the economic statistical joint design of the X̄ and S² control charts in the context of fuzzy multiobjective constrained optimization using a genetic algorithm has not been discussed in a single model in the literature before. The model developed in this paper designs the X̄ and S² control charts simultaneously to enhance their effectiveness in detecting the assignable causes for shifts in the mean and drifts in the variation. Moreover, the model uses fuzzy logic in the objective function to give designers the required flexibility to express their point of view about the relative importance of the statistical and economic requirements of the design, through the weights in the objective function and the decay shapes of the membership functions. The strength of the model is its effectiveness in detecting assignable causes, due to the joint design of the X̄ and S² control charts, as well as its simplicity and flexibility in dealing with uncertainties in the design process, due to the fuzzy membership functions.

The rest of the paper is organized as follows: Section 2 presents the assumptions and notations used in the paper, Section 3 shows the model derivation and formulation, Section 4 discusses the fuzzy logic, Section 5 discusses the genetic algorithm used, Sections 6 and 7 present the experiments and the discussion for verification purposes, Section 8 presents a comparative analysis between the joint design of the X̄ and S² control charts and the joint design of the X̄ and S control charts, and Section 9 contains the conclusions.

2. Assumptions and Notations

In this paper, the occurrence of assignable causes is modeled as a Poisson process with rate λ, so that the time between assignable causes is exponential with mean 1/λ. Moreover, it is assumed that the quality cycle follows a renewal reward process and that the quality control process is not self-correcting. Furthermore, the variability in the quality characteristic of the process follows a normal distribution, and the process starts with known mean μ₀ and variance σ₀². These assumptions are widely used in the literature, for example, in [28, 29].

The total cost of the quality cycle in this paper is adopted from the cost schema in [17]. In this scheme, the total cost of the quality cycle consists of three stochastic elements: the cost of producing nonconforming items, the cost of investigating false alarms and repairing the process, and the cost of sampling and testing. Furthermore, the total time of the quality cycle is divided into two stochastic elements: the time that the process spends in the statistical control state and the time that it spends out of the statistical control state.

Figure 1, adapted from [30], shows the quality cycle. The cycle is divided into two major periods, the in-control period and the out-of-control period. The figure shows the adjusted average time to signal (AATS), the average time from the occurrence of an assignable cause to its detection. The average time of the cycle (ATC) is the sum of the in-control period and the out-of-control period, where the out-of-control period consists of the AATS, the time to sample, interpret, and chart the results, and the time to find and repair the assignable cause.

3. Model Formulation

Considering the set of design factors (n, h, k₁, k₂), the joint power can be expressed as P = 1 − P(A ∩ B), where P(A ∩ B) is the intersection between the two independent events A (the X̄ chart fails to signal) and B (the S² chart fails to signal). Mathematically, P(A ∩ B) can be evaluated as β = β_X̄ · β_{S²}, such that, for a shift of the mean to μ₁ and a drift of the variance to σ₁²,

β_X̄ = Φ((UCL_X̄ − μ₁)√n/σ₁) − Φ((LCL_X̄ − μ₁)√n/σ₁), with UCL_X̄, LCL_X̄ = μ₀ ± k₁σ₀/√n,

and

β_{S²} = χ²_{n−1}((n − 1)UCL_{S²}/σ₁²) − χ²_{n−1}((n − 1)LCL_{S²}/σ₁²),

where Φ is the cumulative standard normal distribution, χ²_{n−1} is the cumulative chi-square distribution with n − 1 degrees of freedom, and the limits LCL_{S²} and UCL_{S²} are set by the width parameter k₂. Hence the joint power is P = 1 − β_X̄β_{S²}. The joint Type I error α is the union of the two independent false-alarm events of the individual charts. Mathematically, α can be evaluated as

α = α_X̄ + α_{S²} − α_X̄α_{S²}, with α_X̄ = 2Φ(−k₁)

and α_{S²} given by the chi-square expressions above evaluated at the in-control variance σ₀². The expected values of the three elements that constitute the total cost of the quality cycle are calculated as follows:

(1) The expected cost of producing nonconforming items while the process is in statistical control or out of statistical control is

E(C₁) = C_in · (1/λ) + C_out · AATS,

where C_in and C_out are the hourly costs of producing nonconforming items in and out of control, AATS = h/(1 − β) − τ, and τ is defined below.

(2) The expected cost of investigating false alarms and repairing the process is

E(C₂) = C_f · α · E(N_in) + C_r,

where C_f is the cost of investigating a false alarm, C_r is the cost of searching for and correcting the assignable cause, and E(N_in), defined below, is the expected number of samples taken while the process is in control.

(3) The expected cost of sampling and testing is

E(C₃) = (a + b·n) · E(T)/h,

where a and b are the fixed and variable costs of sampling and testing and E(T) is the expected total time of the quality cycle. By adding up these three elements, the expected total control cost of the quality cycle is

E(TC) = E(C₁) + E(C₂) + E(C₃),

where τ is the mean of the exponential distribution normalized to the range [0, h], that is, the expected time of occurrence of the assignable cause within the sampling interval in which it occurs. Mathematically, τ is expressed as

τ = [1 − (1 + λh)e^(−λh)] / [λ(1 − e^(−λh))],

and the expected number of in-control samples is

E(N_in) = e^(−λh) / (1 − e^(−λh)).

As Figure 1 shows, the expected values of the two elements that constitute the expected total time of the quality cycle are calculated as follows:

(1) The expected time that the process spends in the statistical control state is E(t_in) = 1/λ.

(2) The expected time that the process spends out of the statistical control state is AATS + T_s + T_d + T_r, where T_s, T_d, and T_r are the times to sample, interpret, and chart, to discover the assignable cause, and to repair it, respectively. By adding up these two elements, the expected total time of the quality cycle is

E(T) = 1/λ + AATS + T_s + T_d + T_r,

meaning that the expected cost per hour is

E(C/hr) = E(TC)/E(T).

The proposed model uses three objectives, namely, minimizing the Type I error, minimizing the Type II error, and minimizing the total control cost of the SPC. These three objectives behave in a conflicting manner: decreasing the Type II error generally increases the Type I error at the same sample size, while increasing the sample size, to decrease both errors, increases the total cost of the SPC. This situation demands a compromise between the three objectives. Because of the high degree of uncertainty involved in the design process, fuzzy logic is employed to support the decision-making process.
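The quantities above can be collected into a small sketch. The following Python code implements the error probabilities and the Duncan-type renewal-reward cost bookkeeping as reconstructed here; it is an illustration under the stated assumptions, not the paper's exact implementation. The chi-square CDF is computed via the regularized lower incomplete gamma function, since it is not available in the Python standard library, and δ = (μ₁ − μ₀)/σ₀ and ρ = σ₁/σ₀ are convenience parameterizations of the shift and drift.

```python
import math
from statistics import NormalDist

PHI = NormalDist().cdf  # cumulative standard normal distribution


def _gammainc_p(a, x):
    """Regularized lower incomplete gamma P(a, x) (series / continued fraction)."""
    if x <= 0.0:
        return 0.0
    log_pre = -x + a * math.log(x) - math.lgamma(a)
    if x < a + 1.0:  # series expansion converges quickly in this region
        term = total = 1.0 / a
        k = a
        while abs(term) > abs(total) * 1e-15:
            k += 1.0
            term *= x / k
            total += term
        return total * math.exp(log_pre)
    # Lentz continued fraction for the upper tail Q(a, x); then P = 1 - Q.
    tiny = 1e-300
    b = x + 1.0 - a
    c, d = 1.0 / tiny, 1.0 / b
    h = d
    for i in range(1, 500):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        d = tiny if abs(d) < tiny else d
        c = b + an / c
        c = tiny if abs(c) < tiny else c
        d = 1.0 / d
        delta = c * d
        h *= delta
        if abs(delta - 1.0) < 1e-15:
            break
    return 1.0 - math.exp(log_pre) * h


def chi2_cdf(x, df):
    return _gammainc_p(df / 2.0, x / 2.0)


def alpha_xbar(k1):
    """False-alarm probability of an X-bar chart with k1-sigma limits."""
    return 2.0 * PHI(-k1)


def beta_xbar(k1, n, delta, rho):
    """Type II error of the X-bar chart under mean shift delta (in sigma0
    units) and standard-deviation drift rho = sigma1 / sigma0."""
    root_n = math.sqrt(n)
    return PHI((k1 - delta * root_n) / rho) - PHI((-k1 - delta * root_n) / rho)


def joint_alpha(a1, a2):
    """Union of the two independent false-alarm events."""
    return a1 + a2 - a1 * a2


def joint_power(b1, b2):
    """One minus the intersection of the two independent 'miss' events."""
    return 1.0 - b1 * b2


def tau(lam, h):
    """Mean of the exponential distribution normalized to [0, h]: the expected
    occurrence time of the assignable cause within its sampling interval."""
    return (1.0 - (1.0 + lam * h) * math.exp(-lam * h)) / (lam * (1.0 - math.exp(-lam * h)))


def expected_cost_per_hour(lam, h, n, alpha, beta,
                           C_in, C_out, C_f, C_r, a, b, T_s, T_d, T_r):
    """E(C/hr) = E(TC) / E(T) over one quality cycle (sketch)."""
    aats = h / (1.0 - beta) - tau(lam, h)                    # adjusted average time to signal
    n_in = math.exp(-lam * h) / (1.0 - math.exp(-lam * h))   # expected in-control samples
    out_of_control = aats + T_s + T_d + T_r
    cycle_time = 1.0 / lam + out_of_control                  # E(T)
    cost_nonconforming = C_in / lam + C_out * out_of_control
    cost_false_alarms = C_f * alpha * n_in + C_r
    cost_sampling = (a + b * n) * cycle_time / h
    return (cost_nonconforming + cost_false_alarms + cost_sampling) / cycle_time
```

For example, a 3σ X̄ chart alone gives α_X̄ = 2Φ(−3) ≈ 0.0027, and the joint α of two independent charts is the union probability α₁ + α₂ − α₁α₂.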

4. Fuzzy Logic

A crisp set is a collection of items that satisfy the same definition; hence any item is either in the set or not in it. For example, if the set of good joint X̄ and S² designs is defined as the set of designs whose α, β, and expected cost per hour all lie below given thresholds, then a design that exceeds the cost threshold by a negligible amount does not belong to the set, even if its α and β are far below their thresholds. Human reasoning does not work this way; in most real-life situations, the designer prefers the large decrease in α and β at the expense of a minor increase in cost. In general, human decision-making behavior is better captured by fuzzy set theory than by classical crisp set theory, because the former allows items to be partially in a set; hence fuzzy set theory is preferred over classical crisp set theory in modeling the decision-making process, as stated in [27].

Fuzzy logic is based on fuzzy set theory, in which an item's membership in a set can be partial rather than full or none. Fuzzy logic works with "degrees of truth" rather than binary 0/1 logic. It was first introduced by Lotfi Zadeh in 1965 and includes 0 and 1 as extreme cases along with all the values between them, so that a design is not simply "good" or "bad" but can have, for example, a goodness of 0.37.

In this paper, fuzzy logic is adopted to assess the degree of satisfaction of each of the three objectives rather than a crisp satisfactory/not-satisfactory judgment. The fuzzy set for each of the three objectives has the form

A_O = {(x, μ_O(x)) : x ∈ X_O},

where O denotes the objective under consideration, μ_O denotes the membership function of the objective, whose value is the satisfaction level of that objective, and X_O is the universe of the objective O.

Figure 2 shows the three membership functions corresponding to the three objectives used in this paper. Figure 2(a) shows a straight-line membership function for the expected cost per hour E(C/hr); it gives lower satisfaction values for higher E(C/hr). Figure 2(b) shows an exponential membership function for the joint Type II error β of the X̄ and S² control charts; the higher the value of β, the lower the satisfaction level μ_β. Figure 2(c) shows another exponential membership function, for the joint Type I error α; it gives lower satisfaction values for higher values of α. Mathematically, the three membership functions are

μ_TC = 1 − E(C/hr)/TC_max for 0 ≤ E(C/hr) ≤ TC_max (and 0 otherwise),

μ_β = e^(−c_β·β),

and

μ_α = e^(−c_α·α),

where c_β and c_α are decay-rate constants that control the decay of the μ_β and μ_α membership functions. These constants give designers the required flexibility in shaping the membership functions according to their needs. It should be emphasized that the membership functions can be changed according to the needs of the designer without any changes to the model; the only change is in the way that μ_TC, μ_β, and μ_α are calculated.

The full model is

maximize Z = w₁μ_TC + w₂μ_β + w₃μ_α
subject to E(C/hr) ≤ TC_max, 2 ≤ n ≤ n_max, 0 < h ≤ h_max, 0 < k₁ ≤ k₁,max, 0 < k₂ ≤ k₂,max,

where Z is the total satisfaction level of the joint design, μ_TC is the satisfaction level for the expected cost per hour, μ_β is the satisfaction level for the joint Type II error, μ_α is the satisfaction level for the joint Type I error, and w₁, w₂, and w₃ are the corresponding weights.
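The membership functions and the weighted-average objective can be sketched as follows. The linear and exponential shapes follow Figure 2 as reconstructed above, and the default constants TC_max = 500 and c_β = c_α = 30 follow the experimental settings of Section 6; this is an illustrative sketch rather than the paper's exact code.

```python
import math

TC_MAX = 500.0   # maximum allowed cost per hour ($/hr), from Section 6
C_BETA = 30.0    # decay rate of the Type II error membership
C_ALPHA = 30.0   # decay rate of the Type I error membership


def mu_tc(cost_per_hour, tc_max=TC_MAX):
    """Straight-line membership: full satisfaction at zero cost, none at tc_max."""
    return max(0.0, 1.0 - cost_per_hour / tc_max)


def mu_beta(beta, c_beta=C_BETA):
    """Exponential membership: satisfaction decays as beta grows."""
    return math.exp(-c_beta * beta)


def mu_alpha(alpha, c_alpha=C_ALPHA):
    """Exponential membership: satisfaction decays as alpha grows."""
    return math.exp(-c_alpha * alpha)


def total_satisfaction(cost_per_hour, beta, alpha, w=(1 / 3, 1 / 3, 1 / 3)):
    """Z = w1*mu_TC + w2*mu_beta + w3*mu_alpha. Zeroing w1 gives the SD,
    zeroing w2 and w3 gives the ED, and all-positive weights give the SED."""
    return (w[0] * mu_tc(cost_per_hour)
            + w[1] * mu_beta(beta)
            + w[2] * mu_alpha(alpha))
```

A GA chromosome (n, h, k₁, k₂) is evaluated by computing E(C/hr), β, and α for that design and passing them to `total_satisfaction`, which the GA then maximizes.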

5. Genetic Algorithm

The genetic algorithm (GA) is a metaheuristic optimization algorithm characterized by a population that evolves over time; it is inspired by Darwin's theory of evolution. The underlying principle is that the environment puts pressure on the individuals such that the good individuals survive and pass their genes to the next generation while the bad individuals eventually disappear. The GA is among the most popular evolutionary algorithms (EAs) because its two operators, crossover and mutation, enable the algorithm to search the sample space adequately: the crossover operator recombines promising solutions, while the mutation operator maintains diversity and prevents premature convergence.

In this paper the following strategies for the GA were adopted.

5.1. Chromosome Representation

Table 5 shows the schema of the chromosome representation. The chromosome consists of four genes, one for each of the four decision variables (n, h, k₁, k₂).

5.2. Mating

The best 25% of the chromosomes in the population mate with the worst 25% to produce the offspring. To avoid premature convergence of the algorithm, half of the remaining chromosomes are heavily mutated and the other half are replaced with immigrants (freshly generated random chromosomes).
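The mating partition above can be sketched as follows, assuming the population is already sorted best-first and its size is divisible by four (both assumptions are illustrative):

```python
def partition_population(pop_sorted):
    """Split a best-first sorted population into the groups used for mating:
    the best 25% and worst 25% mate pairwise, half of the middle is marked
    for heavy mutation, and the other half is replaced by random immigrants."""
    q = len(pop_sorted) // 4
    best, worst = pop_sorted[:q], pop_sorted[-q:]
    middle = pop_sorted[q:len(pop_sorted) - q]
    half = len(middle) // 2
    to_mutate, to_replace = middle[:half], middle[half:]
    mating_pairs = list(zip(best, worst))
    return mating_pairs, to_mutate, to_replace
```

Pairing the best quartile with the worst quartile mixes high-fitness genes with distant regions of the sample space, which works against premature convergence.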

5.3. Selection

An enlarged-sample-space strategy is adopted, in which both the offspring and the old parents are available for selection based on their fitness values. The fitness values are calculated according to the objective function of the full model in Section 4.

5.4. Crossover

A cut-in-the-middle crossover strategy is adopted in this GA: the first two genes from the first parent are combined with the last two genes from the second parent to produce one offspring, and the first two genes from the second parent are combined with the last two genes from the first parent to produce another offspring. This way of performing crossover guarantees feasible offspring.
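With four genes (n, h, k₁, k₂), cutting in the middle exchanges the last two genes between the parents. A minimal sketch:

```python
def cut_in_the_middle(parent1, parent2, cut=2):
    """Exchange the gene tails at the midpoint. Each child inherits whole
    genes, so a feasible pair of parents always yields feasible offspring."""
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2
```

For example, crossing (n₁, h₁, k₁, k₂) with (n₂, h₂, k₁′, k₂′) yields (n₁, h₁, k₁′, k₂′) and (n₂, h₂, k₁, k₂).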

5.5. Mutation

A random mutation strategy is adopted in this GA: a randomly selected gene is replaced with a random number drawn from the range of the corresponding decision variable according to the constraints of the full model, which guarantees that the mutated chromosome is feasible.
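Random mutation resamples one randomly chosen gene within its feasible range. The bounds below are illustrative placeholders, not the paper's values:

```python
import random

# Illustrative (hypothetical) box constraints for (n, h, k1, k2).
BOUNDS = [(2, 100), (0.1, 10.0), (0.1, 3.4), (0.1, 3.4)]


def mutate(chromosome, bounds=BOUNDS, rng=random):
    """Replace one randomly chosen gene with a fresh value drawn from its
    range, so the mutated chromosome always satisfies the box constraints."""
    genes = list(chromosome)
    i = rng.randrange(len(genes))
    low, high = bounds[i]
    # The sample size n is an integer; the other genes are continuous.
    genes[i] = rng.randint(low, high) if i == 0 else rng.uniform(low, high)
    return tuple(genes)
```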

5.6. Termination Criterion

The GA terminates when a predefined number of generations is reached.
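Putting the strategies of this section together, the whole loop can be sketched as follows. The fitness function is a toy stand-in for the fuzzy satisfaction level Z, and all parameter values are illustrative:

```python
import random

BOUNDS = [(2, 100), (0.1, 10.0), (0.1, 3.4), (0.1, 3.4)]  # (n, h, k1, k2), illustrative


def random_chromosome(rng):
    (n_lo, n_hi), *rest = BOUNDS
    return (rng.randint(n_lo, n_hi),) + tuple(rng.uniform(lo, hi) for lo, hi in rest)


def fitness(c):
    # Toy stand-in for Z with a peak near n = 30, h = 2, k1 = k2 = 3.
    n, h, k1, k2 = c
    return -(((n - 30) / 100) ** 2 + ((h - 2) / 10) ** 2 + (k1 - 3) ** 2 + (k2 - 3) ** 2)


def crossover(p1, p2, cut=2):
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]


def mutate(c, rng):
    genes = list(c)
    i = rng.randrange(4)
    lo, hi = BOUNDS[i]
    genes[i] = rng.randint(lo, hi) if i == 0 else rng.uniform(lo, hi)
    return tuple(genes)


def genetic_algorithm(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [random_chromosome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)                  # best first
        q = pop_size // 4
        offspring = []
        for p1, p2 in zip(pop[:q], pop[-q:]):                # best 25% x worst 25%
            offspring.extend(crossover(p1, p2))
        middle = pop[q:pop_size - q]
        half = len(middle) // 2
        mutated = [mutate(c, rng) for c in middle[:half]]             # heavy mutation
        immigrants = [random_chromosome(rng) for _ in middle[half:]]  # fresh blood
        # Enlarged sample space: parents and offspring compete for survival.
        candidates = pop + offspring + mutated + immigrants
        pop = sorted(candidates, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)


best = genetic_algorithm()
```

Replacing `fitness` with the fuzzy objective Z of Section 4 (evaluated through the error and cost expressions of Section 3) recovers the design optimizer used in the experiments.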

6. Experiments

The GM casting operation problem described in [28] and also used in [26] was used for verification purposes in this paper. For convenience, the example is repeated here.

A production line produces 84 castings per hour. Periodic samples are taken from the production line to monitor the carbon-silicon content in order to avoid loss of tensile strength. The per-unit sampling cost is $4.22 and the sampling duration is 5 minutes. The average cost of a nonconforming unit is $100. The proportion of nonconforming units is 1.36% in control and 11.3% out of control. The average time that the process spends in the in-control state is 50 hours. The system must be flushed for 45 minutes when the process goes out of control, and it costs $22.8 per hour for the repair crew to flush the system. The downtime cost during flushing is $21.34 per minute. The time needed to assemble the crew is 5 minutes on average, and it takes no time to search for the assignable cause. Table 1 summarizes the model parameter values based on this scenario.

By changing the weights in the objective function, the three designs, i.e., SD, ED, and SED, can be obtained, whereas by changing the decay rates of the membership functions, the satisfaction levels can be shaped. For example, setting w₁ to zero yields the SD model, while setting both w₂ and w₃ to zero yields the ED model; keeping all three weights above zero yields the SED model. The ability to change the weights and the decay shapes gives designers the required flexibility to choose the appropriate design for their X̄ and S² control charts according to their needs.

Throughout this section, the upper limits n_max, k₁,max, k₂,max, and h_max were set to 100, 3.4, 3.4, and 10, respectively, the decay rates c_β and c_α were both set to 30, and TC_max was set to $500/hr. The statistical performance of a design was measured by its in-control average run length ARL₀ (the longer, the better), while its economic performance was measured by the expected control cost per hour (the smaller, the better). Table 2 shows the results of the experiments.

7. Discussion

To investigate the SED design, w₁, w₂, and w₃ were set to 1/3 each to reflect the equal importance of the three objectives, and the full model was solved under this setting. For the best design found, the ATC and AATS were 279 and 3.11, respectively, whereas α and β were 8.63e-04 and 6.17e-04, respectively. Consequently, moderate values of ARL₀ and power were obtained, namely, 1.16e+03 samples and 0.999, respectively. Figure 3 shows the evolution of the total satisfaction level Z achieved by the GA over 100 generations. The figure shows how the solution converged to Z = 0.798, the best value found, which corresponds to the above-mentioned design.

To investigate the SD design, w₁ was set to zero and the model was solved again. It should be noticed that TC_max in this scenario remains finite even though the economic side of the design is treated as unimportant. For the best design found in this scenario, the ATC and AATS were 347 and 3.88, respectively, whereas α and β were 6.84e-04 and 7.17e-07, respectively. The large sample size resulted in small values of α and β, which yielded a long ARL₀ of 1.46e+03 samples and a power of almost 1; this is good in terms of statistical performance. This result shows that the SD enhanced the statistical properties of the joint design at the expense of its economic properties, since the AATS was long and the expected cost per hour was large. This behavior was also noticed by many authors, for example, [26, 31].

To investigate the ED design, w₂ and w₃ were set to zero and the model was solved again. For the best design found in this scenario, the ATC and AATS were 199 and 1.94, respectively, whereas α and β were 1.43e-02 and 3.29e-01, respectively. The high value of α shortened the ARL₀ to 70 samples. This result matches the results in [31]: the ED enhanced the economic requirements of the design at the expense of its statistical performance, as it reduced the expected cost per hour at the expense of α and β. This increased the number of false alarms, because of the decrease in ARL₀, and also reduced the power, because of the increase in β. It should be noticed that the model did not choose n = 1, as might be expected to lower the sampling cost. Since the expected cost per hour is a function of α and β as well, choosing n = 1 would make α and β very large and hence would increase E(C/hr).

By comparing the SD and the ED results, it should be noticed that the ARL₀ of the SD is significantly higher than that of the ED and that the expected cost per hour is significantly lower in the ED than in the SD. As a matter of fact, the SD increased the value of ARL₀ by 1989% at the expense of an 81.6% increase in the expected cost per hour compared with the ED. Moreover, the values of ARL₀ and expected cost per hour found for the SED lie between the values found for the SD and the ED. This shows that the SED compromises between the desired statistical properties and the desired economic properties of the SPC.

The model can also be used under unequal-weight scenarios to reflect the designer's point of view about the importance of the three objectives. If the designer wants to improve α and β at the expense of E(C/hr), weights that favor μ_β and μ_α over μ_TC may be used. With such weights (everything else remaining the same), the best design found gave an ATC and AATS of 272 and 2.97, respectively, α and β of 7.72e-04 and 1.60e-03, respectively, and an ARL₀ of 1296 samples. If the designer is more concerned about the consumer risk, i.e., β, than about the producer risk, i.e., α, at the same interest in the total control cost, then a larger weight may be placed on μ_β than on μ_α. With such weights (everything else remaining the same), the best design found gave an ATC and AATS of 282 and 3.17, respectively, and α and β of 1.20e-03 and 2.91e-04, respectively; at this value of α, the ARL₀ was 833 samples. It should be noticed that β in this case is significantly smaller than in the previous case, with only an insignificant reduction in the total satisfaction level. The membership decay-rate constants c_β and c_α also affect the design of the SPC. To show their effect, the values c_β = 30 and c_α = 5 were considered under equal weights. For the best design found, the ATC and AATS were 267 and 2.89, respectively, whereas α and β were 3.50e-03 and 6.41e-04, respectively. The high value of α shortened the ARL₀ to 287 samples, compared with 1.16e+03 for the case where c_β = c_α = 30, with an insignificant reduction in power. This reduction in ARL₀ is due to the reduction of c_α from 30 to 5, which puts less emphasis on α.

8. Comparative Analysis

In this section, a comparative analysis between the joint X̄ and S² control charts and the joint X̄ and S control charts is presented. The only change needed in the proposed method to optimize the joint design of the X̄ and S control charts is the calculation of the Type I and Type II errors of the S control chart. After these errors are calculated, the joint Type I and Type II errors of the X̄ and S charts are combined exactly as for the X̄ and S² charts, and the calculation of the expected cost per hour and the mathematical model remain the same.

Tables 2 and 3 show the best designs found for the joint X̄, S² control charts and for the joint X̄, S control charts, respectively, at the same parameter values used in Table 1.

By comparing the results for the joint X̄, S² design and the joint X̄, S design in Tables 2 and 3, one can see that the general trend found in the joint X̄, S² design also holds for the joint X̄, S design: the ARL₀ of the SD is significantly higher than that of the ED, the expected cost per hour is significantly lower in the ED than in the SD, and the values of ARL₀ and expected cost per hour found for the SED lie between those of the SD and the ED. This result shows that the two joint designs have the same trend and that, in both cases, the SED compromises between the desired statistical properties and the desired economic properties of the SPC.

Table 4 shows the values of Z for the joint X̄, S² and X̄, S designs for 15 cases with equal decay rates c_β = c_α. The results show that when the common decay rate is less than 23, the joint X̄, S² design attains better values of Z than the joint X̄, S design, whereas it attains worse values when the common decay rate is greater than 23.

Figure 4 shows this result clearly and emphasizes the effect of the decay rates on Z for both joint designs. If high values of c_β and c_α are used, the joint X̄, S design yields the higher values of Z, whereas if low values of c_β and c_α are used, the joint X̄, S² design yields the higher values of Z.

9. Conclusions

The model developed in this paper designs the X̄ and S² control charts simultaneously to enhance the effectiveness of detecting the assignable causes for shifts in the mean and drifts in the variation. The model also uses fuzzy logic to give designers the required flexibility to express their perspective on the relative importance of the statistical and economic requirements, based on the consumer risk, the producer risk, and the total control cost of the design, through the weights in the objective function and the decay rates of the membership functions. The strength of the model lies in its effectiveness in detecting assignable causes, due to the joint design of the X̄ and S² control charts, and in its simplicity and flexibility in dealing with uncertainties in the design process, due to the fuzzy membership functions.

The comparative analysis between the joint design of the X̄ and S² control charts and the joint design of the X̄ and S control charts showed that both joint designs behave in the same way with respect to the economic statistical design, in which the SED compromises between the desired statistical properties and the desired economic properties of the SPC. Moreover, the comparative analysis emphasized the important role that the decay rates play in the proposed method.

Notations

P: joint X̄ and S² control chart power
β: joint X̄ and S² control chart Type II error
β_X̄: the X̄ control chart Type II error
β_{S²}: the S² control chart Type II error
LCL_X̄: lower control limit of the X̄ control chart
UCL_X̄: upper control limit of the X̄ control chart
LCL_{S²}: lower control limit of the S² control chart
UCL_{S²}: upper control limit of the S² control chart
μ₁: the process mean after the shift
μ₀: the original process mean
σ₁²: the process variance after the drift
σ₀²: the original process variance
X̄: the sample average
S²: the sample variance
α: joint X̄ and S² control chart Type I error
α_X̄: the X̄ control chart Type I error
α_{S²}: the S² control chart Type I error
k₁: the width of the control limits of the X̄ control chart
k₂: the width of the control limits of the S² control chart
E(C/hr): the expected control cost per hour
E(TC): the expected total control cost in the cycle
E(T): the expected cycle time
T_d: time to discover the assignable cause
T_r: time to repair the assignable cause
T_s: time to sample, interpret, and chart
t_in: time during which the process is in the statistical control state
h: sampling interval
n: sample size
C_in: the cost of producing nonconforming items when the process is in the statistical control state
C_out: the cost of producing nonconforming items when the process is out of the statistical control state
C_f: the cost of investigating a false alarm
C_r: the cost of searching for and correcting the assignable cause
ARL₀: the in-control average run length, ARL₀ = 1/α
a: the fixed cost of sampling and testing
b: the variable cost of sampling and testing
TC_max: the maximum total control cost per hour allowed
w₁, w₂, w₃: weights for the relative importance of μ_TC, μ_β, and μ_α, respectively
c_β, c_α: decay-rate constants of the μ_β and μ_α membership functions
Φ: the cumulative standard normal distribution
χ²_ν: the cumulative chi-square distribution with ν degrees of freedom
n_max: upper limit for n
h_max: upper limit for h
k₁,max: upper limit for k₁
k₂,max: upper limit for k₂.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that he has no conflicts of interest.

Acknowledgments

The author is grateful to the German Jordanian University, Mushaqar, Amman, Jordan, for the financial support granted to this research.