Abstract

In this paper, a new assessment method based on the interval evidential reasoning (IER) rule is proposed to solve the problem of physical and mechanical property assessment (PMPA) for particleboards. Because the detection data for the density and thickness swelling (TS) of particleboards come in interval form, a model that accepts only precise values as input is inappropriate, and the PMPA of particleboards cannot be carried out reliably. In the proposed method, expert knowledge and interval data are integrated to solve the assessment problem. First, the overall reliability of the attributes is calculated, and the interval data are transformed into an interval belief structure. Then, the multiple interval belief structures are aggregated by ER nonlinear optimization models. Finally, the assessment results are obtained through utility theory. With the proposed method, the PMP of particleboards with interval values can be assessed reasonably, and the combined interval belief degrees of the different particleboard grades can be obtained, which provides guidance for the production and subsequent operations of enterprises. A case study on the PMPA of particleboards is conducted to demonstrate the effectiveness of the proposed method.

1. Introduction

Particleboard, also known as chipboard, is an engineered wood product manufactured from wood chips or jute-stick chips and a synthetic resin or other suitable binder that is pressed and extruded. As a renewable resource product, it is widely used in the furniture, construction, packaging, and other industries [1]. Physical and mechanical properties (PMPs) are a necessary standard for measuring the quality of particleboard [2], and their research and analysis help improve particleboard production quality [3]. Therefore, it is of great value to assess the physical and mechanical properties of particleboard [4].

In engineering practice, the detection values of some particleboard PMPs come in the form of intervals; for example, density and TS are PMPs of particleboards whose values take interval form. However, current studies on PMPA use precise values to construct models: one point in the interval is selected as its representative value and used as the model input. Such methods cannot reflect the actual production situation and amplify the uncertainty of the information, resulting in a decrease in the accuracy of the results. More importantly, because the detection data are limited, if a purely data-driven model is constructed to solve the PMPA problem, it is difficult to optimize the model parameters to a satisfactory level. At the same time, because the system involves many influencing factors, a model based only on qualitative knowledge cannot adequately resolve the uncertainty in expert knowledge. Therefore, a new PMPA model based on the IER rule is proposed in this paper. It uses qualitative knowledge and quantitative data for modeling, takes the interval data into the model in their entirety, and also considers the overall reliability of the attributes, which addresses the uncertainty and unreliability of the data and improves the accuracy of PMPA.

In recent years, considerable research has been conducted on the PMPA of materials. For example, Maharana et al. used the TOPSIS multicriteria decision-making approach to determine the optimal sequence and content of nanofillers to obtain the best physical and mechanical properties of composites [5]. Alac et al. proposed a multicriteria decision-making model to evaluate the essential physical and mechanical properties of materials [6]. Liu et al. studied the durability of lightweight honeycomb concrete and established a durability evaluation method combining the analytic hierarchy process with the fuzzy comprehensive evaluation method [7]. Ozsahin et al. used a mixed analytic hierarchy process and a multiobjective optimization method based on ratio analysis to select the most suitable building wood after assessing the economic, physical, mechanical, and thermal properties and durability of wood [8]. The methods mentioned above mainly use qualitative knowledge to construct models and lack the effective use of quantitative data, so the fuzziness and uncertainty of expert knowledge in the models cannot be effectively reduced. Hence, the assessment accuracy of such models is degraded.

To improve the accuracy of PMPA, several quantitative methods have been proposed. Farizhandi et al. used planetary ball milling as a tool to evaluate the properties of materials and then used an artificial neural network to build a model [9]. Based on three typical artificial neural networks, Xue et al. established friction property evaluation and prediction models for three kinds of brake friction materials [10]. Zhang et al. constructed a BP neural network model to determine the weights of engineering material assessment indices [11]. Meng et al. conducted reliability-based optimization for offshore structures using saddle point approximation and studied methods for predicting remaining product life [12–15]. Tian et al. studied performance state estimation for static neural networks with time-varying delays [16], an approach that has been widely used to improve the accuracy of PMPA. The methods mentioned above mainly use quantitative data to build models. However, in some cases sufficient data are not easy to obtain, and these methods make no practical use of expert knowledge.

For the PMPA of particleboards, when various PMPs are considered, the problem can be transformed into a multiattribute decision-making (MADM) problem. To better solve the MADM problem while considering both qualitative knowledge and quantitative data, Xiao et al. studied the evidential reasoning rule for dealing with uncertain information [17–19]. Xu et al. added interval uncertainty to the ER framework and proposed interval evidential reasoning, which can effectively solve evaluation problems under attribute uncertainty [20]. Zhao et al. studied a reliability calculation method based on information consistency and addressed data unreliability in evidence integration [21]. The studies above address data uncertainty and unreliability separately. However, when both appear in the same problem, these methods are no longer suitable.

Based on the analysis above, current methods cannot reasonably input interval data into the model, and fusing interval values with precise values is also a problem that needs to be solved. Therefore, it is necessary to construct a model that can accept both precise values and interval values as input and can integrate the two types of values. The evidential reasoning rule can not only deal with uncertain information flexibly but can also integrate quantitative information and qualitative knowledge effectively. Moreover, the reliability of attributes is considered when modeling, so the rule is suitable for modeling with unreliable data [22]. Interval evidential reasoning performs well on interval problems: it resolves the interval uncertainty caused by interval data through enumeration and then uses ER nonlinear optimization models to integrate the different attributes and obtain the result [23]. In this context, an assessment model of the PMP of particleboards based on the IER rule is proposed in this paper. It effectively processes interval data through enumeration and then integrates the enumerated values with the other precise values; the overall reliability of the attributes, obtained by combining static reliability with dynamic reliability, is also considered, which resolves the uncertainty and unreliability in the data and improves the accuracy of PMPA.

The remainder of the paper is organized as follows: in Section 2, the PMPA of particleboard is formulated. Section 3 constructs an assessment model based on the IER rule, and its reasoning process is presented. A case study is provided to illustrate the effectiveness of the proposed assessment method in Section 4. This paper is concluded in Section 5.

2. Problem Formulation

In PMPA, the data, including interval data and precise data, and expert knowledge will directly affect the accuracy of the model. In this section, the problems that exist in PMPA are formulated, and then, a new assessment method based on the IER rule is developed.

2.1. Problem Formulation of PMPA

Problem 1. In the actual PMP test process, the test results for the same property of a particleboard often differ across environments or machines, so the obtained data usually contain some unreliability. To solve this problem, the overall reliability of the attributes is introduced to represent the information quality. It consists of two parts: static reliability and dynamic reliability. Static reliability denotes the subjective aspect of reliability and is provided by experts. Dynamic reliability represents the objective aspect of reliability and is calculated with a statistical method [24, 25]. Therefore, the first problem in assessing the PMP of particleboard is how to obtain the dynamic reliability supported by the data, in addition to the static reliability provided by experts, and how to integrate the two into the overall reliability. Thus, a model is constructed in which the dynamic reliability of each attribute is calculated from that attribute's data and is then combined with the expert-given static reliability to yield the overall reliability of each attribute.
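For concreteness, this model can be sketched in the following form, where the notation is introduced here for illustration and is not taken from the original equations:

\[ R = \varphi\big(r^{s}, \psi(X)\big), \qquad r^{s} = (r^{s}_{1}, \ldots, r^{s}_{L}), \quad X = (x_{1}, \ldots, x_{L}), \quad R = (r_{1}, \ldots, r_{L}), \]

where \(r^{s}_{i}\) is the static reliability of the \(i\)th attribute, \(x_{i}\) is the data of the \(i\)th attribute, \(L\) is the number of attributes, \(\psi(\cdot)\) computes the dynamic reliabilities of the attributes from the data, \(\varphi(\cdot)\) integrates the static and dynamic reliabilities, and \(r_{i}\) is the resulting overall reliability of the \(i\)th attribute.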

Problem 2. Because the data of some PMPs are in interval form, as shown in (1), existing machine learning, neural network, and traditional ER approaches cannot process this kind of data well. In general, these methods select one point from the interval as the input, but this cannot reflect the actual production situation. Hence, choosing an appropriate method to integrate interval data and precise data is the second problem. Here, the attribute data set collects the data of all attributes, and the datum of each attribute may be either a precise value or an interval.

2.2. Construction of the New PMPA Model

To solve the problems described in Section 2.1, a new assessment model based on the IER rule is proposed in this section.

The problem of PMPA can be expressed as a nonlinear mapping from the input data to the assessment results: for each board, the input data of the attributes are mapped to interval belief degrees over a set of assessment grades, and a nonlinear function describes how these assessment results are calculated. The assessment process is shown in Figure 1.
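For illustration, this formulation can be sketched as follows, with the notation introduced here rather than taken from the original equations:

\[ f : (x_{1}, \ldots, x_{L}) \mapsto \big\{\big(H_{n}, [\beta_{n}^{-}, \beta_{n}^{+}]\big),\ n = 1, \ldots, N\big\}, \]

where \(x_{i}\) is the (precise or interval) datum of the \(i\)th attribute, \(H_{n}\) is the \(n\)th assessment grade, \([\beta_{n}^{-}, \beta_{n}^{+}]\) is the interval belief degree with which the board is assessed to \(H_{n}\), and \(f(\cdot)\) is the nonlinear function realized by the IER rule.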

In the new PMPA model, each attribute's belief degree is expressed in interval form, which covers all the possibilities regardless of whether the input is an interval value or an accurate value [20, 26]. This approach unifies the qualitative and quantitative assessment information in interval form, integrates the information gradually, and then obtains each scheme's belief degrees under the different grades [22, 23]. Finally, each scheme's maximum, minimum, and average utility are calculated using utility theory to complete the assessment [27].

3. The Interval Evidential Reasoning Rule with Interval Belief Degree

The assessment model based on the IER rule is shown in Figure 2 and is composed mainly of four parts. The first part obtains the overall reliability of the attributes. The second part transforms the data into interval belief distributions. The third part calculates the combination belief degrees by integrating the data, weights, and reliabilities. The fourth part calculates the utilities.

3.1. Calculation Method of the Overall Reliability of Attributes

In the particleboard testing process, both the test machine and the external environment influence the test result and introduce certain errors, so the reliability of the data cannot be guaranteed. Therefore, the reliability of the data must be considered to improve the accuracy of the assessment. Two main aspects of reliability need to be considered: the static reliability given by experts and the dynamic reliability extracted from the data [28]. For each attribute, the overall reliability is calculated by combining its static and dynamic reliability, weighted by a coefficient between 0 and 1 that denotes the proportion of static reliability in the overall reliability. The static reliability is determined by expert knowledge. The dynamic reliability is calculated from the distance between the value of the attribute in the current test and the average value of previous tests, that is, the fluctuation extent of the attribute: the greater the fluctuation, the lower the reliability. The average distance of the attribute over the previous tests is used as the reference for this fluctuation.
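The following is a minimal Python sketch of this calculation. The ratio-based normalization of the fluctuation distance and the example density values are assumptions introduced for illustration; only the convex combination of static and dynamic reliability follows directly from the description above.

```python
import numpy as np

def dynamic_reliability(history, current):
    """Dynamic reliability from the fluctuation of the current test value
    around the average of previous tests (assumed ratio normalization)."""
    history = np.asarray(history, dtype=float)
    d_t = abs(current - history.mean())              # fluctuation of the current test
    d_bar = np.abs(history - history.mean()).mean()  # average fluctuation of previous tests
    if d_t <= d_bar:                                 # within the usual fluctuation range
        return 1.0
    return d_bar / d_t                               # larger fluctuation -> lower reliability

def overall_reliability(r_static, r_dynamic, lam):
    """Overall reliability as a convex combination; lam (0 <= lam <= 1) is the
    expert-given proportion of static reliability."""
    return lam * r_static + (1.0 - lam) * r_dynamic

# Hypothetical density measurements (g/cm^3); 0.94 and 0.84 are the static
# reliability and proportion used for density in the case study of Section 4.
r_d = dynamic_reliability([0.655, 0.662, 0.658, 0.660], current=0.671)
print(overall_reliability(r_static=0.94, r_dynamic=r_d, lam=0.84))
```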

3.2. Transformation of the Interval Belief Structure

Based on the belief decision matrix and the evidence combination rule of D-S theory, the ER algorithm for multiattribute aggregation has been proposed. Unlike traditional multiattribute decision-making methods, which use a decision matrix to describe the problem, the ER approach uses a belief decision matrix in which each attribute of each alternative is described by a belief distribution [29–31]. The advantage of this approach is that the distributed assessment can model precise numerical values and capture various types of uncertainty, such as the uncertainty and fuzziness of subjective judgements.

The existing ER approach uses accurate belief degrees and belief distributions to model decision attributes quantitatively and qualitatively. However, both interval values and precise values exist in practice. In this case, the interval belief degree can better represent subjective judgement or assessment, and the ER approach has been extended so that multiattribute decision-making problems can be modelled using interval belief degrees [32–34].

According to the combination requirements of the IER rule, different types of attributes are transformed into a unified belief structure [35]; that is, each test datum is converted into a belief distribution that gives, for every assessment grade, the belief degree with which the attribute is evaluated to that grade. According to whether the interval data cross an assessment grade, the calculation is divided into two cases: interval data that do not cross an assessment grade and interval data that cross one or more assessment grades.

Unlike the belief transformation of precise data, the calculation is more involved because an interval input may lie between two adjacent assessment grades or may cross multiple assessment grades.

3.2.1. The Interval Data without Crossing Assessment Grade

Consider an interval datum that lies between two adjacent assessment grades without crossing either of them. As shown in Figure 3, its belief degree should be distributed to these two grades in the form of intervals, that is, one interval belief degree for each of the two grades.
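A minimal Python sketch of this transformation is given below, assuming linear interpolation between the two grade reference values; the names h_n and h_n1 and the example numbers are illustrative only.

```python
def interval_belief_no_crossing(x_lo, x_hi, h_n, h_n1):
    """Interval belief degrees for an interval datum [x_lo, x_hi] that lies
    entirely between two adjacent grade reference values h_n < h_n1
    (assumed linear interpolation)."""
    assert h_n <= x_lo <= x_hi <= h_n1
    span = h_n1 - h_n
    belief_n = ((h_n1 - x_hi) / span, (h_n1 - x_lo) / span)   # interval belief for grade n
    belief_n1 = ((x_lo - h_n) / span, (x_hi - h_n) / span)    # interval belief for grade n+1
    return belief_n, belief_n1

# Hypothetical density interval [0.64, 0.66] between grade reference values 0.62 and 0.68
print(interval_belief_no_crossing(0.64, 0.66, 0.62, 0.68))
```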

It should be noted that the interval belief degrees assigned to the two grades are not unconditional: in any realization, the belief degrees actually assigned must sum to one. Using the interval belief degrees, the interval value can be equivalently expressed as a belief distribution over the two adjacent assessment grades, subject to this normalization condition.

3.2.2. The Interval Data Crossing Assessment Grade

Consider an interval datum that crosses one or more assessment grades. Different from the case without crossing, there are many possible situations for its belief distribution. Although the datum is expressed as an interval, the true value it represents is still a single fixed value. Therefore, its belief degree may be distributed, in interval form, to any two adjacent assessment grades spanned by the interval, and the belief degree of every other assessment grade is 0. To represent these mutually exclusive cases, 0-1 integer variables are introduced.

As shown in Figure 4, since the true value can fall in only one of the sub-intervals between adjacent assessment grades, exactly one 0-1 variable is nonzero; the 0-1 variables therefore sum to one.

Using the 0-1 variables, the interval belief degree of each assessment grade can be obtained.

It is worth noting that the interval belief degrees of the grades involved are subject to the same kind of conditions as before: in each admissible case, the belief degrees actually assigned must sum to one. After introducing the interval belief degrees, the interval value can be equivalently expressed as a belief distribution over the assessment grades spanned by the interval, together with the 0-1 variables and the corresponding normalization constraints.
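A minimal Python sketch of this case follows, again assuming linear interpolation between adjacent grade reference values; the grade values and the TS (2 h) interval in the example are illustrative only. Enumerating the feasible pairs of adjacent grades plays the role of the 0-1 integer variables.

```python
def interval_belief_crossing(x_lo, x_hi, grade_values):
    """Interval belief degrees for an interval datum [x_lo, x_hi] that may cross
    several grade reference values (assumed linear interpolation)."""
    cases = []
    for n in range(len(grade_values) - 1):
        h_n, h_n1 = grade_values[n], grade_values[n + 1]
        lo, hi = max(x_lo, h_n), min(x_hi, h_n1)
        if lo > hi:                 # the corresponding 0-1 variable is fixed to 0
            continue
        span = h_n1 - h_n
        belief_n = ((h_n1 - hi) / span, (h_n1 - lo) / span)   # interval belief for grade n
        belief_n1 = ((lo - h_n) / span, (hi - h_n) / span)    # interval belief for grade n+1
        cases.append({"grades": (n, n + 1), "belief": (belief_n, belief_n1)})
    return cases  # exactly one case holds for the (unknown) true value

# Hypothetical TS (2 h) interval crossing the grade reference value 8.0 (%)
for case in interval_belief_crossing(7.5, 8.6, [6.0, 8.0, 10.0, 12.0]):
    print(case)
```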

3.3. Calculation Method of the Combination Belief Degree

To correctly integrate multiple attributes with interval values, the ER nonlinear optimization model is needed. First, the interval belief degrees are converted into interval probability masses by combining the attribute weights and reliabilities. The interval probability masses of the L basic attributes are then converted into the overall interval belief degrees by the nonlinear optimization model [36]:

(1) The belief degrees are transformed into basic probability masses by means of a combined parameter that takes both the attribute weight and the attribute reliability into account. Here, the reliability of an attribute is its overall reliability obtained in Section 3.1, and the weight of an attribute represents its relative importance in the IER rule. The probability mass assigned to the whole set of grades, that is, the mass not assessed to any individual assessment grade, is divided into two parts: one part reflects the relative importance of the attribute, and the other reflects the incompleteness of the attribute's assessment. If the interval belief structure is complete, the latter part is zero.

(2) The combination belief degrees are calculated by maximizing and minimizing the combined belief degree of each assessment grade, subject to the constraints imposed by the interval belief structures, the normalization conditions, and the 0-1 variables introduced in Section 3.2.
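A minimal Python sketch of step (1) follows. The ER-rule-style combined parameter c = w / (1 + w - r) and the example numbers are assumptions for illustration; the split of the unassigned mass into an importance part and an incompleteness part mirrors the description above.

```python
def basic_probability_masses(beliefs, weight, reliability):
    """Transform the belief degrees of one attribute (a single realization drawn
    from its interval belief structure) into basic probability masses."""
    c = weight / (1.0 + weight - reliability)   # combined parameter (assumed ER-rule form)
    masses = [c * b for b in beliefs]           # mass assigned to each assessment grade
    m_importance = 1.0 - c                      # unassigned mass due to relative importance
    m_incomplete = c * (1.0 - sum(beliefs))     # unassigned mass due to incompleteness
    return masses, m_importance, m_incomplete

# Hypothetical belief degrees over five grades for the density attribute
print(basic_probability_masses([0.3, 0.7, 0.0, 0.0, 0.0], weight=0.16, reliability=0.9))
```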

3.4. Utility Calculation

To obtain the final integration results of the PMPA model, the ER approach provides a utility calculation. The combination belief degrees of each option are integrated, and the results are then compared or ranked. The expected utility of an assessment distribution is obtained by weighting the utility of each assessment grade by the combined belief degree with which the attribute is assessed to that grade.

If the assessment distribution is incomplete, some belief degree remains unassigned and can be attached to any assessment grade. When this unassigned belief is attached to the grade with the maximum utility, the expected utility attains its maximum value; in contrast, it attains its minimum value when the unassigned belief is attached to the grade with the minimum utility. The maximum and minimum utilities are calculated accordingly, subject to the constraints of the interval belief structure.

Once the maximum and minimum utilities are calculated, the average utility is obtained as their mean.
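The following Python sketch illustrates this utility treatment for a single board; the combined belief degrees in the example are hypothetical, while the grade utilities are those used in the case study of Section 4.

```python
def utilities(beliefs, grade_utilities):
    """Minimum, average, and maximum expected utility of a (possibly incomplete)
    combined belief distribution over the assessment grades."""
    unassigned = 1.0 - sum(beliefs)                        # incompleteness, if any
    expected = sum(b * u for b, u in zip(beliefs, grade_utilities))
    u_max = expected + unassigned * max(grade_utilities)   # unassigned belief -> best grade
    u_min = expected + unassigned * min(grade_utilities)   # unassigned belief -> worst grade
    return u_min, (u_min + u_max) / 2.0, u_max

# Hypothetical combined belief degrees over grades A-E with the case-study utilities
print(utilities([0.05, 0.55, 0.30, 0.05, 0.0], (0.1, 0.4, 0.6, 0.8, 1.0)))
```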

3.5. The PMPA Process

Based on the analysis mentioned above, the process for the IER rule is shown in Figure 5 and consists of the following steps:
Step 1: the belief distributions of the data are obtained by the calculation method proposed in Section 3.2.
Step 2: considering both the static reliability given by experts and the dynamic reliabilities calculated using (5)–(7), the overall reliabilities of attributes are obtained using (2).
Step 3: the basic probability masses are calculated using (20)–(23).
Step 4: the combination belief degrees are calculated using (25).
Step 5: the maximum and minimum utilities of an option are calculated using (28) and (29), and the final output of the IER rule is obtained using (31).

4. Case Study

In this section, a case study for the PMPA of furniture-type particleboard (P2) is examined to illustrate the effectiveness of the proposed assessment method.

4.1. Problem Formulation of Particleboard Assessment

Consider an assessment problem for P2-type particleboard whose test data comprise 8 basic characteristics, namely, internal bond (IB) (MPa), density, modulus of rupture (MOR) (MPa), modulus of elasticity (MOE) (MPa), surface bond (SB) (MPa), screw holding force (SHF, surface), screw holding force (SHF, side), and TS (2 h). Among these characteristics, the density and TS (2 h) data are in interval form; the others are exact values. The density data are shown in Figure 6, and the TS (2 h) data are shown in Figure 7.

According to expert knowledge, the relative weights of these eight characteristics are (0.11, 0.16, 0.14, 0.12, 0.1, 0.12, 0.11, 0.14), their static reliabilities are (0.92, 0.94, 0.93, 0.92, 0.9, 0.94, 0.93, 0.95), and the proportions of static reliability in the overall reliability are (0.83, 0.84, 0.82, 0.84, 0.8, 0.81, 0.85, 0.82).

The following grades are used for board assessment: A, B, C, D, and E. They constitute the frame of discernment, and their corresponding utilities are (0.1, 0.4, 0.6, 0.8, 1).

It is worth noting that no grade in the frame of discernment is inherently good or bad; different grades of board are suited to different situations. For example, furniture manufacturers may need grade C boards because these require further processing and combining. Suppose such a manufacturer is supplied with the wrong type of board, such as a grade E board. In that case, the equipment may be damaged because of the different hardness and strength of the boards, resulting in economic losses, which does more harm than good for later processing.

4.2. Calculation of Characteristic Reliability

According to (5)–(7), the dynamic reliability of each characteristic is calculated. Then, combined with the static reliability given by experts, each characteristic's overall reliability is calculated according to (2); for density and TS (2 h), the median of the interval is used in this calculation. The calculation results are shown in Table 1.

4.3. Transformation of Interval Belief Structure

The assessment grades of each characteristic are shown in Table 2, which has been adjusted by the expert.

According to (9)–(17) and the assessment grades given above, all quantitative data can be transformed into interval belief structures, and the transformation results are shown in Table 3.

After obtaining the assessment distribution of each characteristic of the particleboard, the combination belief distribution of each option can be calculated according to (25), and then, the assessment results are obtained using (28)–(31) based on utility theory. The results are shown in Table 4.

The combined belief degree in Table 4 is the result of comprehensively considering the belief distributions of eight PMPs; for example, the belief degree of board 1 corresponding to grade E is 0 because its eight PMPs have no belief for grade E. The average utility represents the utility of the board after integrating the combined belief distribution of each grade.

4.4. Comparative Studies

To illustrate the effectiveness of the proposed assessment method, comparative studies between the IER rule, evidential reasoning (ER), the backpropagation (BP) neural network, the extreme learning machine (ELM), and the radial basis function (RBF) network are conducted in this section.

Since the density and TS (2 h) data are in interval form, the median of each interval is taken as the input to the other models. The MSEs are given in Table 5, and the assessment results are shown in Figures 8–12.

Among the above assessment methods for the PMPs of particleboards:
(1) ER combines expert knowledge with quantitative data, which overcomes the low modeling accuracy of complex systems when data are insufficient and effectively deals with the uncertainty in qualitative knowledge. However, it lacks the flexibility needed to process interval data.
(2) For the BP, ELM, and RBF models, it is almost impossible to optimize their parameters to an ideal level with few samples. Moreover, the inputs of these models are precise values and cannot reflect the real situation of the data. Therefore, their assessment accuracy is not good enough.
(3) Compared with the other methods in this study, the IER rule takes both interval values and precise values as input. When processing interval data, the enumerated values are integrated with the other precise values, which resolves the uncertainty in the interval data. Compared with methods that replace interval data with the interval median, it better reflects the real situation of the data. At the same time, the proposed method uses expert knowledge and quantitative data for modeling, which solves the problem of low modeling accuracy when the sample size is insufficient; it is therefore better than purely data-driven models in the case of small samples. In addition, the overall reliability of the attributes is considered in the model, which addresses the unreliability in the data and further improves the modeling accuracy.

The dataset of standard samples is rarely expressed in the form of intervals, but interval values often exist in engineering practice since they can reflect the real production situation. The main purpose of this paper is to solve the PMPA of particleboards in real production; therefore, the dataset provided by the manufacturer is selected as the experimental samples rather than standard samples.

Through the assessment examples of PMP for particleboards, the IER rule’s effectiveness in the interval state is verified. Its advantages in solving data uncertainty and unreliability are also demonstrated.

5. Conclusion

In this paper, the PMPA of particleboards based on the IER rule was investigated. The problem was transformed into an MADM problem by considering various properties of particleboards. The interval data and precise data were both taken as inputs, which reflects the real situation of the particleboard and thus resolves the uncertainty in the data. The overall reliability of the attributes was considered in the model, which reduces the unreliability of the data. The multiple interval belief structures were aggregated by ER nonlinear optimization models, and the utility function was used to obtain the final results of the PMPA of particleboards. The study shows that the IER approach provides a flexible way to model complex MADM problems: it allows them to be modelled using both precise and interval data. In actual production, this helps enterprises classify market demand more precisely and improve profits.

For the proposed assessment method, the more attributes that take interval form and the wider the intervals are, the more obvious its advantage becomes. However, the computational complexity also increases exponentially. Therefore, in future work, (1) an optimization algorithm will be added to reduce the model's computational complexity, and (2) improving the performance of that optimization algorithm is also vital. We look forward to more applications and further development of the interval evidential reasoning rule.

Data Availability

Requests for access to the data used to support the findings of this study should be made to Cuiping Yang.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Cuiping Yang and Chao Sun contributed equally to this work.

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities under Grant no. 2572018CP01, in part by the Natural Science Foundation of Heilongjiang Province of China under Grant no. F2018023, in part by the Ph.D. research start-up Foundation of Harbin Normal University under Grant no. XKB201905, in part by the Postdoctoral Science Foundation of China under safety status assessment of large liquid launch vehicle based on deep belief rule base, and in part by the Natural Science Foundation of School of Computer Science and Information Engineering, Harbin Normal University, under Grant no. JKYKYZ202102.