Abstract

Interval type-2 fuzzy logic systems (IT2 FLSs) have favorable abilities to cope with uncertainties in many applications. The type-reduction block, performed under the guidance of inference, plays a central role in these systems, and Karnik-Mendel (KM) iterative algorithms are the standard algorithms for performing type-reduction; however, the high computational cost of the type-reduction process may hinder these systems from real applications. The comparison between the KM algorithms and other alternative algorithms is still an open problem. This paper introduces the related theory of interval type-2 fuzzy sets and discusses the blocks of fuzzy reasoning, type-reduction, and defuzzification of interval type-2 fuzzy logic systems by combining the Nagar-Bardini (NB) and Nie-Tan (NT) noniterative algorithms for solving the centroids of the output interval type-2 fuzzy sets. Moreover, the continuous version of the NT (CNT) algorithms is proved to be accurate for performing the type-reduction. Four computer simulation examples are provided to illustrate and analyze the performances of the two kinds of noniterative algorithms. The NB and NT algorithms are superior to the KM algorithms in both calculation accuracy and computation time, which shows their potential application value for designers and adopters of type-2 fuzzy logic systems.

1. Introduction

As is well known, the membership grades of type-1 fuzzy sets (T1 FSs) are crisp numbers, while the membership grades of type-2 fuzzy sets (T2 FSs) are themselves T1 FSs. T2 FSs and type-2 fuzzy logic systems (T2 FLSs) are hot topics in the current academic area. As the secondary membership grades of interval type-2 fuzzy sets (IT2 FSs) are all uniformly equal to 1, IT2 FSs can be completely characterized by the corresponding footprint of uncertainty (FOU), which is bounded by the upper membership function (UMF) and the lower membership function (LMF). Currently, the computationally relatively simple IT2 FLSs have been successfully applied in many areas with high uncertainty, nonlinearity, and time-varying behavior [1], such as financial systems [2], computing with words [3], power systems [4–6], intelligent controllers [7–9], permanent magnetic drive [10, 11], pattern recognition [12, 13], medical systems [14], recommendation systems [15], and so on. In these applications, the performances of IT2 FLSs are reported to be superior to those of their T1 counterparts.

In general, an IT2 FLS is composed of five blocks: fuzzifier, rules, inference, type-reducer, and defuzzifier (see Figure 1). In fact, the principle of IT2 FLSs is similar to that of T1 FLSs; the major difference between them is that at least one IT2 FS must appear in the rules of an IT2 FLS, and a T1 FLS does not contain the block of type-reduction. In addition, type-reduction under the guidance of inference is a central block in IT2 FLSs, which mainly plays the role of transforming the output IT2 FS obtained from fuzzy reasoning [16] into a T1 FS. Next, the crisp output is acquired by the block of defuzzification. The process of type-reduction is usually performed by the popular but computationally intensive KM iterative algorithms [17–19]; however, the complex computations make the applications of IT2 FLSs more challenging. When all the uncertainties of the IT2 FSs disappear, the IT2 FLS degenerates to a T1 FLS.

In early times, Mendel et al. developed the KM algorithms to compute the centroids of IT2 FSs, or equivalently to perform the centroid type-reduction (TR) of IT2 FLSs. In order to reduce the computational cost, Wu and Mendel proposed the enhanced KM (EKM [20]) algorithms. Later, the continuous versions of the KM and EKM algorithms (CKM and CEKM [21, 22]) were proposed, which gave a huge push to the theoretical studies of IT2 FLSs. Mendel and Liu analyzed and proved the monotonicity and superexponential convergence [23] of the CKM algorithms. In recent years, Nagar and Bardini verified that IT2 FLSs based on the NB algorithms [24] respond better to the effect of uncertainties in the systems' parameters than IT2 FLSs based on other type-reduction (TR) algorithms such as EKM, Wu-Mendel Uncertainty Bound (WM-UB [25]), Begian-Melek-Mendel (BMM [26, 27]), and Greenfield-Chiclana Collapsing Defuzzifier (GCCD [28]). Furthermore, the Nie-Tan (NT) algorithms [29] were proved to be an excellent approach to simplify IT2 FLSs, and the continuous versions of the NT (CNT) algorithms [30] were verified to be accurate algorithms for solving the centroids of IT2 FSs. In a word, all these works laid rich theoretical foundations for studying and applying the TR algorithms of IT2 FLSs.

Inspired by [16, 19, 22–24, 29–37], this paper proposes the continuous versions of the NB and NT (CNB and CNT) algorithms and establishes connections between the NB, NT, and BMM algorithms. The blocks of fuzzy reasoning, centroid type-reduction, and defuzzification of IT2 FLSs are performed by combining the NB and NT algorithms. Moreover, the CNT algorithms are proved to be accurate algorithms for performing the centroid TR of IT2 FLSs. Numerical simulation examples analyze and illustrate the performances of the two kinds of noniterative algorithms for computing the centroid defuzzified values compared with the KM algorithms. The simulation studies show that the two kinds of noniterative algorithms can effectively improve both the calculation accuracy and the efficiency of obtaining the outputs of IT2 FLSs.

The rest of this paper is organized as follows. Section 2 introduces the background knowledge of IT2 FSs and IT2 FLSs. Section 3 presents the NB and NT algorithms and shows how to use them to obtain the outputs of IT2 FLSs. Section 4 compares the performances of the two kinds of noniterative algorithms with the KM algorithms through numerical examples. Finally, the conclusions are given in Section 5.

2. Backgrounds

2.1. IT2 FSs

Definition 1. A T2 FS $\tilde{A}$ can be described by its T2 membership function (MF) $\mu_{\tilde{A}}(x,u)$, i.e.,
$$\tilde{A} = \{((x,u), \mu_{\tilde{A}}(x,u)) \mid \forall x \in X, \forall u \in J_x \subseteq [0,1]\},$$
where $0 \le \mu_{\tilde{A}}(x,u) \le 1$, and $X$ denotes the universe of the primary variable $x$ of $\tilde{A}$. Here $J_x$ represents all the admissible primary memberships $u$ for each $x$. The point-value representation of $\tilde{A}$ is
$$\tilde{A} = \int_{x \in X}\int_{u \in J_x} \mu_{\tilde{A}}(x,u)\,/\,(x,u).$$

Definition 2. The secondary MF of $\tilde{A}$ at $x = x'$ is also called a vertical slice of $\mu_{\tilde{A}}(x,u)$, i.e., [38]
$$\mu_{\tilde{A}}(x = x', u) \equiv \mu_{\tilde{A}}(x') = \int_{u \in J_{x'}} f_{x'}(u)\,/\,u, \qquad J_{x'} \subseteq [0,1],$$
where $0 \le f_{x'}(u) \le 1$, and $f_{x'}(u)$ denotes the secondary MF of $\tilde{A}$. The secondary membership grades of IT2 FSs all equal 1, that is to say, $f_x(u) \equiv 1$ for any $x \in X$ and $u \in J_x$.

Definition 3. The two-dimensional support of $\mu_{\tilde{A}}(x,u)$ is referred to as the footprint of uncertainty (FOU) of $\tilde{A}$, i.e.,
$$\mathrm{FOU}(\tilde{A}) = \bigcup_{x \in X} J_x = \{(x,u) \mid x \in X, u \in J_x \subseteq [0,1]\},$$
where $J_x$ represents the primary membership of $\tilde{A}$; here the lower MF (LMF) $\underline{\mu}_{\tilde{A}}(x)$ and upper MF (UMF) $\overline{\mu}_{\tilde{A}}(x)$ comprise the boundary of the FOU, where [39]
$$\underline{\mu}_{\tilde{A}}(x) = \inf\{u \mid u \in J_x\}, \qquad \overline{\mu}_{\tilde{A}}(x) = \sup\{u \mid u \in J_x\}.$$

2.2. IT2 FLSs

Generally speaking, there are two major architectures for IT2 FLSs, the Mamdani type [5, 31] and the Takagi-Sugeno-Kang (TSK) type [10, 40]. This paper focuses on the Mamdani type. Without loss of generality, consider a Mamdani IT2 FLS with $p$ inputs $x_1 \in X_1, \ldots, x_p \in X_p$ and one output $y \in Y$. The system can be completely characterized by $M$ fuzzy rules, where the $i$th fuzzy rule is of the form
$$\tilde{R}^i: \ \text{IF } x_1 \text{ is } \tilde{F}_1^i \text{ and } \cdots \text{ and } x_p \text{ is } \tilde{F}_p^i, \ \text{THEN } y \text{ is } \tilde{G}^i, \qquad i = 1, \ldots, M.$$

The process of inference [5, 16, 31, 41] is as follows.

In order to simplify the expressions, here we adopt the singleton fuzzifier [41]; i.e., the input measurements $\mathbf{x}' = (x_1', \ldots, x_p')$ are modeled as type-0 FSs (singletons). The fuzzy implication relation of each fuzzy rule is
$$\tilde{R}^i: \ \tilde{F}_1^i \times \cdots \times \tilde{F}_p^i \to \tilde{G}^i, \qquad i = 1, \ldots, M,$$
whose MF is
$$\mu_{\tilde{R}^i}(\mathbf{x}, y) = \mu_{\tilde{F}_1^i}(x_1) \sqcap \cdots \sqcap \mu_{\tilde{F}_p^i}(x_p) \sqcap \mu_{\tilde{G}^i}(y),$$
where $\sqcap$ represents the minimum or product t-norm operation [42].

Then the output T2 FS of the $i$th fuzzy rule is $\tilde{B}^i = \tilde{A}_{\mathbf{x}'} \circ \tilde{R}^i$, whose MF is
$$\mu_{\tilde{B}^i}(y) = \bigsqcup_{\mathbf{x} \in X_1 \times \cdots \times X_p} \bigl[\mu_{\tilde{A}_{\mathbf{x}'}}(\mathbf{x}) \sqcap \mu_{\tilde{R}^i}(\mathbf{x}, y)\bigr],$$
where $\circ$ denotes the composition operation and $\sqcup$ represents the maximum t-conorm operation [41]. Here $F^i(\mathbf{x}') = [\underline{f}^i, \overline{f}^i]$ is defined as the firing interval of the $i$th fuzzy rule, where, for the singleton input $\mathbf{x}'$, $\underline{f}^i$ and $\overline{f}^i$ are computed as
$$\underline{f}^i = \underline{\mu}_{\tilde{F}_1^i}(x_1') \sqcap \cdots \sqcap \underline{\mu}_{\tilde{F}_p^i}(x_p'), \qquad \overline{f}^i = \overline{\mu}_{\tilde{F}_1^i}(x_1') \sqcap \cdots \sqcap \overline{\mu}_{\tilde{F}_p^i}(x_p').$$

While adopting the popular centroid TR [32], the firing output set $\tilde{B}^i$ is generated by combining the firing interval of each fuzzy rule with the corresponding consequent IT2 FS, i.e.,
$$\underline{\mu}_{\tilde{B}^i}(y) = \underline{f}^i \sqcap \underline{\mu}_{\tilde{G}^i}(y), \qquad \overline{\mu}_{\tilde{B}^i}(y) = \overline{f}^i \sqcap \overline{\mu}_{\tilde{G}^i}(y),$$
where $\sqcap$ also denotes the minimum or product t-norm operation.

Next, the final output IT2 FS $\tilde{B}$ can be obtained by merging all the rule firing output sets $\tilde{B}^i$, i.e.,
$$\underline{\mu}_{\tilde{B}}(y) = \bigsqcup_{i=1}^{M} \underline{\mu}_{\tilde{B}^i}(y), \qquad \overline{\mu}_{\tilde{B}}(y) = \bigsqcup_{i=1}^{M} \overline{\mu}_{\tilde{B}^i}(y),$$
where $\sqcup$ denotes the maximum operation. Then the type-reduced set can be obtained by computing the centroid of $\tilde{B}$, i.e.,
$$Y_C(\mathbf{x}') = [c_l(\tilde{B}), c_r(\tilde{B})],$$
where the two endpoints $c_l(\tilde{B})$ and $c_r(\tilde{B})$ can be calculated by popular type-reduction algorithms like KM [19, 41], EKM [20], and weighted EKM (WEKM [18, 22]).
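To make the role of the KM algorithms concrete, the following Python sketch (our own minimal illustration, not the implementation used in this paper) computes the switch points of $c_l(\tilde{B})$ and $c_r(\tilde{B})$ for a discretized output FOU; the sample FOU at the bottom is hypothetical.

```python
import numpy as np

def km_centroid(x, lmf, umf, max_iter=100):
    """Karnik-Mendel iterations for the centroid [c_l, c_r] of a discretized IT2 FS.
    x   : sorted primary-variable samples x_1 <= ... <= x_n
    lmf : lower membership values at x
    umf : upper membership values at x
    """
    x, lmf, umf = map(np.asarray, (x, lmf, umf))

    def endpoint(left):
        theta = (lmf + umf) / 2.0                 # start from the average MF
        c = np.dot(x, theta) / np.sum(theta)
        for _ in range(max_iter):
            k = np.searchsorted(x, c)             # switch point: x_{k-1} <= c <= x_k
            # left endpoint: UMF weights below the switch point, LMF above;
            # right endpoint: the opposite assignment
            theta = np.where(np.arange(len(x)) < k,
                             umf if left else lmf,
                             lmf if left else umf)
            c_new = np.dot(x, theta) / np.sum(theta)
            if np.isclose(c_new, c):              # switch point has stabilized
                return c_new
            c = c_new
        return c

    return endpoint(left=True), endpoint(left=False)

# Hypothetical FOU samples, for illustration only
x = np.linspace(0.0, 10.0, 201)
umf = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)
lmf = 0.6 * umf
c_l, c_r = km_centroid(x, lmf, umf)
print(c_l, c_r)   # type-reduced interval; the defuzzified output is their average
```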

3. NB and NT Algorithms

In this section, we provide two kinds of noniterative algorithms for performing the centroid TR of IT2 FLSs: the NB and NT algorithms [24, 29, 30, 43].

3.1. NB Algorithms

The NB algorithms [24] offer a closed form of TR. After the process of fuzzy reasoning [16], let $\tilde{B}$ be the obtained output IT2 FS and let its primary variable $x$ be equally discretized into $n$ points satisfying $x_1 \le x_2 \le \cdots \le x_n$; then the two endpoints of the centroid interval can be computed as
$$c_l(\tilde{B}) = \frac{\sum_{i=1}^{n} x_i\,\underline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \underline{\mu}_{\tilde{B}}(x_i)}, \qquad c_r(\tilde{B}) = \frac{\sum_{i=1}^{n} x_i\,\overline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \overline{\mu}_{\tilde{B}}(x_i)}. \qquad (15)$$

The defuzzified output is computed as
$$y_{NB} = \frac{1}{2}\bigl[c_l(\tilde{B}) + c_r(\tilde{B})\bigr]. \qquad (16)$$
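As a minimal sketch (assuming the sampled LMF and UMF of the output FOU are already available from the inference block; the function name is ours), the NB closed form of (15) and (16) amounts to two weighted averages and their mean:

```python
import numpy as np

def nb_type_reduction(x, lmf, umf):
    """Nagar-Bardini closed-form TR: centroid of the LMF, centroid of the UMF,
    and their average as the defuzzified output (cf. (15) and (16))."""
    x, lmf, umf = map(np.asarray, (x, lmf, umf))
    c_from_lmf = np.dot(x, lmf) / np.sum(lmf)   # centroid of the T1 FLS built from LMFs
    c_from_umf = np.dot(x, umf) / np.sum(umf)   # centroid of the T1 FLS built from UMFs
    return c_from_lmf, c_from_umf, 0.5 * (c_from_lmf + c_from_umf)
```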

Similar to the CKM (or CEKM) algorithms [21–23, 34, 44], the continuous form of the NB (CNB) algorithms can be adopted for studying the theoretical properties of TR and defuzzification of IT2 FLSs.

Suppose that $a$ and $b$ are the left and right endpoints of the primary variable $x$, i.e., $x \in [a, b]$; then the continuous form of (15) is
$$c_l(\tilde{B}) = \frac{\int_{a}^{b} x\,\underline{\mu}_{\tilde{B}}(x)\,dx}{\int_{a}^{b} \underline{\mu}_{\tilde{B}}(x)\,dx}, \qquad c_r(\tilde{B}) = \frac{\int_{a}^{b} x\,\overline{\mu}_{\tilde{B}}(x)\,dx}{\int_{a}^{b} \overline{\mu}_{\tilde{B}}(x)\,dx}. \qquad (17)$$

Unlike the traditional computationally intensive KM algorithms [17–21], the NB algorithms may be suitable for real-time applications: the two endpoints of the centroid interval can be computed without iterations. Furthermore, the output is simply a linear combination of two T1 FLSs, one constructed from the LMFs and the other constructed from the UMFs.

3.2. NT Algorithms

The NT algorithms also provide a closed form of TR. Supposing that the primary variable $x$ is equally discretized into $n$ points satisfying $x_1 \le x_2 \le \cdots \le x_n$, the output centroid (defuzzified value) can be calculated as
$$y_{NT} = \frac{\sum_{i=1}^{n} x_i\,\bigl[\underline{\mu}_{\tilde{B}}(x_i) + \overline{\mu}_{\tilde{B}}(x_i)\bigr]}{\sum_{i=1}^{n} \bigl[\underline{\mu}_{\tilde{B}}(x_i) + \overline{\mu}_{\tilde{B}}(x_i)\bigr]}. \qquad (18)$$

Then the continuous form of (18) is
$$y_{CNT} = \frac{\int_{a}^{b} x\,\bigl[\underline{\mu}_{\tilde{B}}(x) + \overline{\mu}_{\tilde{B}}(x)\bigr]\,dx}{\int_{a}^{b} \bigl[\underline{\mu}_{\tilde{B}}(x) + \overline{\mu}_{\tilde{B}}(x)\bigr]\,dx}. \qquad (19)$$
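A corresponding sketch of (18), together with a quadrature approximation of (19), is given below (again with our own naming; `lmf_fun` and `umf_fun` stand for the closed-form LMF and UMF of the output FOU):

```python
import numpy as np

def nt_type_reduction(x, lmf, umf):
    """Nie-Tan closed-form defuzzified output (cf. (18)): the centroid of
    the sum LMF + UMF, i.e. of the average MF (LMF + UMF)/2."""
    x, lmf, umf = map(np.asarray, (x, lmf, umf))
    w = lmf + umf
    return np.dot(x, w) / np.sum(w)

def cnt_type_reduction(lmf_fun, umf_fun, a, b, m=100001):
    """Continuous NT (cf. (19)), approximated here by trapezoidal quadrature
    on a fine grid over [a, b]."""
    x = np.linspace(a, b, m)
    w = lmf_fun(x) + umf_fun(x)
    return np.trapz(x * w, x) / np.trapz(w, x)
```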

The most recent studies on calculating the centroid of IT2 FSs illustrate that the CNT algorithm [30] is an accurate approach. Next we show that the CNT algorithm is an accurate method for performing the centroid TR and defuzzification of IT2 FLSs, according to the following statements and explanations.

As is well known, random sampling is an approximate method for performing the TR and defuzzification of IT2 FLSs. As the number of sampled embedded T1 FSs approaches infinity, it is reasonable to expect that random sampling becomes an accurate TR and defuzzification method.

Theorem 4. As the number of sampled embedded T1 FSs approaches infinity, a random sampling algorithm performs the accurate centroid TR and defuzzification of IT2 FLSs.

Proof. For the discrete output IT2 FS $\tilde{B}$ defined in the universe of discourse $X$, let $m$ be the number of horizontal slices along the $u$-axis and $n$ be the number of vertical slices along the $x$-axis. Because each embedded T1 FS contains $n$ vertical slices, the total number of embedded T1 FSs in $\mathrm{FOU}(\tilde{B})$ is $m^n$.
Suppose that $\overline{\mu}_{\tilde{B}}(x)$ and $\underline{\mu}_{\tilde{B}}(x)$ are the UMF and LMF of the FOU, respectively. In addition, we select $N$ embedded T1 FSs from $\mathrm{FOU}(\tilde{B})$ randomly, and let $\mu_{e_j}(x)$ be the MF of the $j$th embedded set.
Then we aggregate the $N$ embedded T1 FSs, i.e.,
$$\mu_{agg}(x_i) = \frac{1}{N}\sum_{j=1}^{N}\mu_{e_j}(x_i), \qquad (20)$$
therefore
$$\lim_{N\to\infty}\mu_{agg}(x_i) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}\mu_{e_j}(x_i). \qquad (21)$$
Here $\mu_{e_j}(x_i)$ is a random value uniformly distributed in the interval $[\underline{\mu}_{\tilde{B}}(x_i), \overline{\mu}_{\tilde{B}}(x_i)]$, so the right side of (21) can be turned into
$$\lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}\mu_{e_j}(x_i) = \frac{\underline{\mu}_{\tilde{B}}(x_i) + \overline{\mu}_{\tilde{B}}(x_i)}{2}. \qquad (22)$$
For (22), as the right side results from a random sampling with $N \to \infty$, it becomes exactly the aggregation of the MFs of all embedded FSs of $\tilde{B}$. Because the factor $1/2$ is just a constant, it has no effect on calculating the centroid of $\tilde{B}$ (see (19)).
When $\tilde{B}$ is a continuous IT2 FS, we simply substitute $\underline{\mu}_{\tilde{B}}(x_i)$ and $\overline{\mu}_{\tilde{B}}(x_i)$ by $\underline{\mu}_{\tilde{B}}(x)$ and $\overline{\mu}_{\tilde{B}}(x)$, respectively. Then we get
$$\lim_{N\to\infty}\mu_{agg}(x) = \frac{\underline{\mu}_{\tilde{B}}(x) + \overline{\mu}_{\tilde{B}}(x)}{2}, \qquad x \in [a, b]. \qquad (23)$$

Theorem 5. A representative T1 embedded FS of $\tilde{B}$ has the MF
$$\mu_{rep}(x) = \frac{\underline{\mu}_{\tilde{B}}(x) + \overline{\mu}_{\tilde{B}}(x)}{2}, \qquad (24)$$
and the centroid of $\tilde{B}$ can be computed as the centroid of $\mu_{rep}$, i.e.,
$$c(\tilde{B}) = \frac{\sum_{i=1}^{n} x_i\,\mu_{rep}(x_i)}{\sum_{i=1}^{n} \mu_{rep}(x_i)} = \frac{\sum_{i=1}^{n} x_i\,\bigl[\underline{\mu}_{\tilde{B}}(x_i) + \overline{\mu}_{\tilde{B}}(x_i)\bigr]}{\sum_{i=1}^{n} \bigl[\underline{\mu}_{\tilde{B}}(x_i) + \overline{\mu}_{\tilde{B}}(x_i)\bigr]}. \qquad (25)$$

Proof. For the discrete output IT2 FS $\tilde{B}$, we choose $N$ embedded T1 FSs from $\mathrm{FOU}(\tilde{B})$ randomly. Suppose that $\mu_{e_j}(x)$ is the MF of the $j$th embedded FS and $x = x'$ is an arbitrary vertical slice. In addition, $\mu_{e_j}(x')$ can be denoted as $\mu_{e_j}(x') = \underline{\mu}_{\tilde{B}}(x') + \varepsilon_j\bigl[\overline{\mu}_{\tilde{B}}(x') - \underline{\mu}_{\tilde{B}}(x')\bigr]$, among which $\varepsilon_j$ is a random number uniformly distributed within $[0, 1]$. Thus, we get
$$\frac{1}{N}\sum_{j=1}^{N}\mu_{e_j}(x') = \underline{\mu}_{\tilde{B}}(x') + \Bigl(\frac{1}{N}\sum_{j=1}^{N}\varepsilon_j\Bigr)\bigl[\overline{\mu}_{\tilde{B}}(x') - \underline{\mu}_{\tilde{B}}(x')\bigr]. \qquad (26)$$
As $\varepsilon_j$ is uniformly distributed within $[0, 1]$, hence
$$\lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}\varepsilon_j = \frac{1}{2}. \qquad (27)$$
Then we have
$$\lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}\mu_{e_j}(x') = \frac{\underline{\mu}_{\tilde{B}}(x') + \overline{\mu}_{\tilde{B}}(x')}{2}. \qquad (28)$$
Due to the fact that $x'$ is an arbitrary vertical slice, (28) holds for all $x \in X$. Thus, we get
$$\lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}\mu_{e_j}(x) = \frac{\underline{\mu}_{\tilde{B}}(x) + \overline{\mu}_{\tilde{B}}(x)}{2} = \mu_{rep}(x). \qquad (29)$$
By means of the above analysis (see (29) and Theorem 4), we know that $\mu_{rep}(x)$ is the MF of a representative T1 FS and it can perform the accurate centroid TR and defuzzification of IT2 FLSs.
Here we adapt Theorems 4 and 5 from [30]. To extend the proofs to a continuous IT2 FS, it suffices to substitute the sum operation by the integral operation. Therefore, the CNT algorithm performs the accurate centroid TR and defuzzification, as it is equivalent to an exhaustive TR.
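The random-sampling argument behind Theorems 4 and 5 can also be checked numerically: averaging the MFs of many randomly selected embedded T1 FSs approaches $(\underline{\mu}_{\tilde{B}} + \overline{\mu}_{\tilde{B}})/2$, whose centroid coincides with the NT value of (18). The following sketch uses a hypothetical FOU purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FOU, for illustration only
x = np.linspace(0.0, 10.0, 501)
umf = np.exp(-0.5 * ((x - 4.0) / 1.5) ** 2)
lmf = 0.5 * umf

# Randomly sample N embedded T1 FSs: at every x_i pick a value uniformly in [lmf, umf]
N = 20000
samples = lmf + rng.random((N, x.size)) * (umf - lmf)
mu_rep = samples.mean(axis=0)                  # aggregated (averaged) MF

centroid = lambda w: np.dot(x, w) / np.sum(w)
print(centroid(mu_rep))                        # centroid of the representative T1 FS
print(centroid(lmf + umf))                     # NT centroid (18); the two nearly agree
```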
In fact, both the NB and the NT algorithms are special cases of the BMM algorithms, which have been extensively studied for their stability and robustness [26, 27]. Next, we give an explanation as follows.
The closed-form BMM TR algorithms calculate the defuzzified output as
$$y_{BMM} = \alpha\,\frac{\sum_{i=1}^{n} x_i\,\underline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \underline{\mu}_{\tilde{B}}(x_i)} + \beta\,\frac{\sum_{i=1}^{n} x_i\,\overline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \overline{\mu}_{\tilde{B}}(x_i)}. \qquad (30)$$
Here $\alpha$ and $\beta$ are adjustable coefficients.
Comparing (30) with (15) and (16), we find that the NB and BMM algorithms are the same when $\alpha = \beta = 0.5$. Next, let us consider the NT algorithms. Equation (18) can be transformed to
$$y_{NT} = \alpha\,\frac{\sum_{i=1}^{n} x_i\,\underline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \underline{\mu}_{\tilde{B}}(x_i)} + \beta\,\frac{\sum_{i=1}^{n} x_i\,\overline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \overline{\mu}_{\tilde{B}}(x_i)}, \qquad (31)$$
where
$$\alpha = \frac{\sum_{i=1}^{n} \underline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \bigl[\underline{\mu}_{\tilde{B}}(x_i) + \overline{\mu}_{\tilde{B}}(x_i)\bigr]}, \qquad \beta = \frac{\sum_{i=1}^{n} \overline{\mu}_{\tilde{B}}(x_i)}{\sum_{i=1}^{n} \bigl[\underline{\mu}_{\tilde{B}}(x_i) + \overline{\mu}_{\tilde{B}}(x_i)\bigr]}. \qquad (32)$$
Comparing (31) with (30), we find that the NT and BMM algorithms are the same when $\alpha$ and $\beta$ are chosen according to (32). The above analysis establishes the connections between the BMM algorithm and the NB and NT algorithms: both the NB and NT algorithms are special cases of the BMM algorithms.
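These relations are easy to verify numerically; in the following sketch (hypothetical FOU, our own coefficient names $\alpha$ and $\beta$), BMM with $\alpha = \beta = 0.5$ reproduces the NB output of (16), and BMM with the coefficients of (32) reproduces the NT output of (18):

```python
import numpy as np

def bmm_output(x, lmf, umf, alpha, beta):
    """Closed-form BMM output (cf. (30)) with adjustable coefficients alpha, beta."""
    x, lmf, umf = map(np.asarray, (x, lmf, umf))
    return (alpha * np.dot(x, lmf) / np.sum(lmf)
            + beta * np.dot(x, umf) / np.sum(umf))

# Hypothetical FOU samples
x = np.linspace(0.0, 10.0, 101)
umf = np.minimum(1.0, np.exp(-0.5 * ((x - 6.0) / 2.0) ** 2) + 0.1)
lmf = 0.4 * umf

# NB = BMM with alpha = beta = 0.5
print(bmm_output(x, lmf, umf, 0.5, 0.5))

# NT = BMM with the data-dependent coefficients of (32)
alpha = np.sum(lmf) / np.sum(lmf + umf)
beta = np.sum(umf) / np.sum(lmf + umf)
print(bmm_output(x, lmf, umf, alpha, beta))
print(np.dot(x, lmf + umf) / np.sum(lmf + umf))  # direct NT value, matches the line above
```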

4. Simulation

In this section, four computer simulation examples are provided to illustrate the performances of the two kinds of noniterative algorithms compared with the KM algorithms. Before performing the TR, we suppose that the FOU of the output IT2 FS is known, obtained by merging or weighting all the fuzzy rules under the guidance of inference [16]. For the first example, the FOU is bounded by piecewise linear functions [22, 29, 30, 36]. For the second example, the FOU is composed of piecewise linear functions and Gaussian functions [22, 23, 27, 30]. For the third example, the FOU is bounded by Gaussian functions [22, 29, 30, 36]. For the last example, the whole FOU is generated by a symmetric Gaussian MF with uncertain standard deviation [22, 23, 27, 30].
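As an illustration of the last type of FOU, the following sketch shows how a symmetric Gaussian primary MF with an uncertain standard deviation induces a UMF and an LMF; the center and deviation interval used here are hypothetical and are not the parameters of Table 1:

```python
import numpy as np

def gaussian_fou_uncertain_std(x, center, sigma_low, sigma_high):
    """FOU of a symmetric Gaussian primary MF whose standard deviation is only
    known to lie in [sigma_low, sigma_high]: the UMF uses the larger deviation,
    the LMF the smaller one (parameters here are hypothetical)."""
    umf = np.exp(-0.5 * ((x - center) / sigma_high) ** 2)
    lmf = np.exp(-0.5 * ((x - center) / sigma_low) ** 2)
    return lmf, umf

x = np.linspace(0.0, 10.0, 1001)
lmf, umf = gaussian_fou_uncertain_std(x, center=5.0, sigma_low=1.0, sigma_high=2.0)
```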

Here we perform the centroid TR for the IT2 FLSs. Suppose that the primary variable of the output IT2 FS is denoted by $x$ and that the number of sampling points of the primary variable is $n$. Figure 2 gives the FOU graphs for the four examples. In addition, Table 1 provides the FOU MF expressions for the four examples. Considering both Figure 2 and Table 1, we can clearly find that the nonlinearity of these FOUs increases from the first example to the fourth. Then let us compare the performances of the NB, NT, and KM algorithms using the CNT algorithms as the benchmark.

In the first example, the CNT algorithms are first adopted to calculate the benchmark defuzzified value. Then the graph of defuzzified values for the NB, NT, and KM algorithms is shown in Figure 3(a); moreover, the absolute errors between the CNT algorithms and the NB, NT, and KM algorithms are plotted in Figure 3(b).

For the second example, we adopt the CNT algorithms to calculate the benchmark defuzzified value. Then we provide the graph of defuzzified values for the NB, NT, and KM algorithms in Figure 4(a) and the absolute errors between the CNT algorithms and the NB, NT, and KM algorithms in Figure 4(b).

In the third example, the CNT algorithms are first adopted to calculate the benchmark defuzzified value. Then the graph of defuzzified values for the NB, NT, and KM algorithms is shown in Figure 5(a); moreover, the absolute errors between the CNT algorithms and the NB, NT, and KM algorithms are plotted in Figure 5(b).

For the fourth example, we adopt the CNT algorithms to calculate the benchmark defuzzified value. Then we provide the graph of defuzzified values for the NB, NT, and KM algorithms in Figure 6(a) and the absolute errors between the CNT algorithms and the NB, NT, and KM algorithms in Figure 6(b).

Then we measure the performances of the NB, NT, and KM algorithms for the above examples. For each chosen number of sampling points $n$, we define the relative error as $\delta = |y - y_{CNT}|/|y_{CNT}| \times 100\%$, where $y$ denotes the defuzzified value computed by the NB, NT, or KM algorithms and $y_{CNT}$ is the benchmark value. Table 2 gives the mean relative errors of the NB, NT, and KM algorithms, where the last column is the total average mean relative error.
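For reference, the bookkeeping behind Table 2 can be summarized by the following sketch, under our assumption that each relative error is measured against the CNT benchmark value:

```python
import numpy as np

def mean_relative_error(values, benchmark):
    """Mean relative error (in percent) of a sequence of defuzzified values,
    one per sampling-point setting, against the CNT benchmark value."""
    values = np.asarray(values, dtype=float)
    return np.mean(np.abs(values - benchmark) / abs(benchmark)) * 100.0
```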

From Figures 3–6 and Table 2, the following conclusions can be obtained:

The absolute errors of the NB, NT, and KM algorithms all converge to some extent as the number of sampling points increases. For examples one and two, the NT algorithms obtain the smallest absolute error, whereas the NB algorithms obtain the largest. For example three, the NT algorithms also obtain the smallest absolute error, whereas the KM algorithms obtain the largest. For example four, the three types of algorithms obtain almost the same absolute error (see Figure 6(b) and note the scale of its vertical axis). Moreover, the variation of the absolute errors of the three algorithms is small.

As shown in Table 2, the largest mean relative error of the NT algorithms is 0.013729%, while the largest mean relative errors of the NB and KM algorithms are 1.664610% and 2.211606%, respectively. Furthermore, the total average mean relative error of the NT algorithms is 0.020351%, while the total average mean relative errors of the NB and KM algorithms are 0.652444% and 0.694472%, respectively.

Considering the above items comprehensively, we believe that the NT algorithms obtain the best accuracy, while the NB algorithms take second place.

Next we investigate the computation time of the above algorithms. The unit of time is the second (s). The computation time for calculating the defuzzified values depends on the specific hardware and software environment, so the results are not exactly repeatable. In this paper, the simulations are performed on a dual-core CPU Dell desktop with 2.00 GB memory, the Windows XP operating system, and the programming software Matlab 2013a. Then we give the computation time comparisons for the four examples in Figures 7–10.

Supposing that the fluctuation of the computation time with respect to the number of sampling points is not considered, the computation times of the above three types of algorithms vary linearly in terms of $n$. Here we select a least-squares regression model $t = k_1 n + k_0$ for them, in which $t$ is the computation time, and Table 3 provides the regression coefficients. Moreover, the computation time difference rate is defined as the relative difference between the computation times of the NB, NT, and KM algorithms.
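A first-order least-squares fit of this kind can be obtained as in the sketch below (our own notation $t \approx k_1 n + k_0$; the exact regression form behind Table 3 may differ):

```python
import numpy as np

def linear_time_model(n_values, times):
    """Least-squares fit t ~ k1 * n + k0 of measured computation times
    against the number of sampling points n."""
    k1, k0 = np.polyfit(np.asarray(n_values, float), np.asarray(times, float), deg=1)
    return k1, k0
```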

From Figures 7–10 and Table 3, we can conclude that the computation times of the NB and NT algorithms are much shorter than those of the KM algorithms. In the above four examples, the average computation time of both the NB and NT algorithms is at least twenty times shorter than that of the KM algorithms. The computation times of the NB and NT algorithms are very similar, with the latter only slightly better than the former. Furthermore, the computation time difference rates among the three types of algorithms remain within a limited range.

The noniterative algorithms can be adopted for studying the centroid TR of IT2 FLSs. If only the computation accuracy is considered, then according to Table 2 the best choice is the NT algorithms. Observing both Figures 7–10 and Table 2, we suggest adopting the NT algorithms to perform the centroid TR of IT2 FLSs whose FOUs are bounded by piecewise linear functions as in example one, hybrid functions as in example two, or Gaussian functions as in example three, and adopting either the NB or the NT algorithms for IT2 FLSs whose FOU is generated by a Gaussian T2 MF with uncertain standard deviation as in example four.

At present, there is no comprehensive comparison of the performance of the KM TR algorithms and other alternative TR algorithms (which remains an interesting open problem), so this paper focuses on comparative studies of the NB, NT, and KM algorithms. On the basis of the four numerical examples, for a given number of sampling points, both the NB and NT algorithms improve the computation accuracy compared with the KM algorithms. After choosing the accurate CNT TR algorithms as the benchmark, the NT algorithms obtain the best computation accuracy.

5. Conclusions

This paper proposes the CNB algorithms, proves that the CNT algorithms are an accurate approach for performing the centroid TR of IT2 FLSs, and establishes connections between the BMM algorithms and the NB and NT algorithms. We then choose the CNT algorithms as the benchmark for computing the defuzzified output of IT2 FLSs. In terms of computation accuracy and time, four computer simulation examples illustrate the performances of the NB, NT, and KM algorithms. Compared with the NB and KM algorithms, the NT algorithms obtain smaller absolute errors and less computation time.

There are still many interesting works that lie ahead, including studying the center-of-sets TR of IT2 and general T2 FLSs [30, 31, 34–37, 45] and investigating forecasting and control problems based on IT2 and GT2 FLSs optimized with swarm intelligence algorithms [46–51]. Future studies will pay attention to the design and applications of T2 FLSs.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Acknowledgments

This paper is supported by the Natural Science Foundation of China (Nos. 61773188, 61803189) and Liaoning Province Natural Science Foundation Guidance Project (No. 20180550056). The author is very thankful to Professor J. M. Mendel, who has provided some valuable suggestions.