Abstract

The experience economy is a trend of future economic development, and enterprises can occupy the market more successfully only by enhancing the user experience in product design. The user's product experience is affected by uncertain noise (such as the user's environment and differences between users), rendering the results of user experience quality evaluation highly variable. The purpose of this paper is to study a modeling method for user experience quality evaluation under uncertain environmental noise. Inspired by normal ordered weighted averaging (OWA) operators, the normal distribution probability density function is used to improve them, and a new modeling method for evaluating user experience quality under uncertainty is proposed. The method overcomes the disadvantage of unreasonable weight distribution when data values and positions differ, and it weighs the importance of both the value and the position of the data. Simulation results show that this method is more effective, accurate, and feasible than the conventional ordered weighted synthesis operator method. The feasibility and validity of the method are further demonstrated by a user experience experiment and comparative experiments on a multiattribute bread product.

1. Introduction

User experience can be defined as the user's overall experience of using a product or system. It includes the emotions, beliefs, preferences, cognitive impressions, physical and psychological reactions, behaviors, and achievements generated by the user before, during, and after using the product. As the quality of life improves, people's demand for products has risen from pure functionality to the stage of emotional satisfaction. As the barrier posed by product quality is overcome and the technology gap gradually narrows, the user experience reflects a product's differentiation and largely determines whether it can succeed in the market [1–3].

The quality of user experience (QoE) is a product evaluation approach based on the degree of user satisfaction [4]. It synthesizes product factors, user factors, and environmental factors and directly represents the degree of product approval from the users' perspective [5]. According to the literature [6–8], the essence of user experience quality evaluation is to quantify the user experience based on experimental data; therefore, the core problem of user experience research is its evaluation modeling.

Because of the main influencing factors, namely the objective environment, the subjective factors of the user, and the technical performance of the business or service, the results of user experience quality evaluation are highly variable even for the same product. Evaluation modeling is a mathematical modeling problem [9–12] that integrates several user experience quality evaluation data points for the same product into a comprehensive result. A number of relevant studies on evaluation modeling have been published, and they fall mainly into several categories. (1) Dynamic weighting method: by setting a dynamic weighting function, a comprehensive evaluation model integrates the piecewise variable power function, the partial-large normal distribution function, and the S-type distribution function [13]. (2) Multi-index comprehensive evaluation methods with unknown weights: the projection of individual evaluation values onto ideal evaluation values is used to determine the weights of experts, as in the ranking method based on projection; alternatively, the index weights are determined by a mathematical programming optimization model, as in the multi-index comprehensive evaluation method based on similarity [14]. (3) Metasynthesis methods: a metasynthesis decision matrix is obtained by aggregating the decision makers' information, and the metasynthesis score of each scheme is calculated using a metasynthesis operator and aggregated index weights; examples include the fuzzy comprehensive evaluation method [15], the gray comprehensive evaluation method [16], the data envelopment analysis method [17], the analytic hierarchy process (AHP) [18], and the combined use of the weighted average (WA) operator, weighted geometric (WG) operator, ordered weighted averaging (OWA) operator, and ordered weighted geometric (OWG) operator [19–22]. The weight information, which includes the index attribute weights and the evaluator weights, has a crucial influence on the results and is an important research aspect of comprehensive evaluation.

The normal weighting method [23] is used to determine the weight distribution, and the ensemble results are derived with the ordered weighted averaging (OWA) operator. The principal steps are as follows: (1) reorder the input arguments in descending order; (2) use the normal weighted assignment method to determine the weights associated with the OWA operator; and (3) use the OWA operator to aggregate the rearranged data. This method has already been applied in some scenarios [19, 20]. However, it has the following shortcomings: (1) it only considers the importance of the data position and neglects the importance of the data value itself, so identical values placed at different ranks are assigned different weights; (2) because the weight distribution is centrally symmetric, different values may be assigned the same weight, which is contrary to the actual form of decision making. Motivated by these disadvantages, this paper improves the ordered weighted integration operator. The proposed method has the following advantages: (1) both the importance of the data value itself and of the data position are considered; (2) since the normal distribution is one of the most widely occurring distribution forms and its probability density differs with the value of the independent variable, different values are given different weights; (3) the method does not need to sort the data samples, so the calculation process is simpler and more convenient. The main contribution of this paper is a weighted synthesis operator based on the normal distribution probability density function, which is used to establish a new modeling method for user experience quality evaluation under uncertainty. The objective is to address the problem that the user experience of a product is affected by uncertain noise arising from the user's environment and the user's individual factors.

This article is organized as follows. The research problem is described in Section 2. Section 3 presents the weighted averaging operator based on the normal distribution probability density function, together with its theoretical analysis. The effectiveness of the proposed method is demonstrated in Section 4 through experimental verification and analysis. The feasibility and validity of the method are further shown through the user experience and comparative experiment on a multiattribute bread product in Section 5. The algorithm verification analysis is presented in Section 6, and Section 7 concludes the paper.

2. Problem Description

User experience quality evaluation is a mathematical evaluation modeling problem, which maps numerous evaluation data to the real evaluation results.

In evaluation modeling, it is assumed that the samples take the form $(x_i, y_i)$, where $x_i$ represents the evaluation result value given by an experiencer and $y_i$ stands for the label of the sample point. By calculating the model parameter $\theta$, an evaluation function model $f(\cdot;\theta)$ that maps $x$ to $y$ is generated, that is, $y = f(x;\theta)$.

User experience evaluation is influenced by a combination of environmental, product, and user factors: the environmental dimension covers the natural environment as well as the human and social environment, while the user dimension includes the user's expectations, prior experience, physical and mental state, and the background of the user's experience [24, 25]. The evaluator is often affected by subjective and objective factors of the individual and the environment, so the evaluation is uncertain to a great extent; these individual and environmental differences lead to varying evaluation results [26, 27].

Through user experience test experiments, using instruments such as psychological scales (e.g., the Likert scale and the semantic differential scale) and questionnaires, we obtain user experience evaluation data for a product scheme. However, a product scheme usually yields a number of different user experience evaluation results $y_1, y_2, \dots, y_n$. These evaluation results are uncertain and usually conform to a specific probability distribution. Using the set of uncertain evaluation labels $\{y_1, y_2, \dots, y_n\}$ and some mapping $g: \{y_1, y_2, \dots, y_n\} \mapsto \hat{y}$ to obtain an accurate user experience evaluation label $\hat{y}$ constitutes the uncertainty-denoising problem of user experience evaluation modeling.

Inspired by decision-making science, this paper adopts metasynthesis theory to effectively integrate the experience evaluation results of a product scheme obtained under different environments and from different people, so as to estimate the real experience evaluation and thereby filter out the uncertainty (noise) caused by individuals and by the environment. In this kind of user experience experiment, the label samples contain noise, and the key problem to be solved is how to minimize its impact.

3. Weighted Averaging Operator Based on the Normal Distribution Probability Density Function

3.1. Normally Ordered Weighted Averaging Operator

The normally ordered weighted averaging operator determines the weight distribution from the mean and standard deviation of the data positions, following the idea of normal distribution weighting, and then aggregates the reordered data. The main steps are as follows [28]:

Let $n$ be the number of elements in the argument set, let $\mu_n$ be the average value of the position indices $\{1, 2, \dots, n\}$, and let $\sigma_n$ be their standard deviation; then $\mu_n$ and $\sigma_n$ are obtained through the following equations:
$$\mu_n = \frac{1+n}{2},$$
$$\sigma_n = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(i - \mu_n\right)^{2}}.$$

Considering that $\omega_i \in [0, 1]$ and $\sum_{i=1}^{n}\omega_i = 1$, we get
$$\omega_i = \frac{e^{-(i-\mu_n)^{2}/(2\sigma_n^{2})}}{\sum_{j=1}^{n} e^{-(j-\mu_n)^{2}/(2\sigma_n^{2})}}, \quad i = 1, 2, \dots, n.$$

Subsequently, the OWA operator is used for comprehensive integration. An OWA operator of dimension $n$ is a mapping $\mathrm{OWA}: \mathbb{R}^{n} \to \mathbb{R}$ that has an associated weighting vector $\omega = (\omega_1, \omega_2, \dots, \omega_n)^{T}$ such that $\omega_i \in [0, 1]$ and $\sum_{i=1}^{n}\omega_i = 1$. The aggregated value of the arguments $(a_1, a_2, \dots, a_n)$ determined by $\omega$ is as follows:
$$\mathrm{OWA}(a_1, a_2, \dots, a_n) = \sum_{i=1}^{n}\omega_i b_i,$$
where $b_i$ is the $i$th largest element of the arguments. Note that the values in the data group are first re-sorted by size and then weighted, which indicates that a weight $\omega_i$ is not associated with a particular element $a_i$ but only with position $i$ in the aggregation process. Therefore, the weighting vector is also called a position vector.

Ruan et al. [28] give the weight distributions for data sizes $n$ in the range of 2–20 obtained with equations (1)–(5). For example, the weight vectors for $n = 2$ through $n = 7$ are as follows: (1) $n = 2$: $(0.5000, 0.5000)$; (2) $n = 3$: $(0.2429, 0.5142, 0.2429)$; (3) $n = 4$: $(0.1550, 0.3450, 0.3450, 0.1550)$; (4) $n = 5$: $(0.1117, 0.2365, 0.3036, 0.2365, 0.1117)$; (5) $n = 6$: $(0.0865, 0.1717, 0.2419, 0.2419, 0.1717, 0.0865)$; (6) $n = 7$: $(0.0702, 0.1311, 0.1907, 0.2161, 0.1907, 0.1311, 0.0702)$.

For example, a set of seven data values was obtained through experiments, and the normal ordered weighting method was used to aggregate them with the weight vector for $n = 7$ given above:

We discovered that the data contain two identical values of 55 and three identical values of 58. After sorting and weighting, the same data are given different weights: the two 55s receive weights of 0.1311 and 0.1907, and the three 58s receive weights of 0.2161, 0.1907, and 0.1311. Different data may also receive the same weight; for example, a 55 and a 58 are both given the weight 0.1907, which is contrary to the realistic form of decision making. This method rests on the fact that data commonly obey a normal distribution; thus extremely large and extremely small data deviate strongly from the mean and receive smaller weights, while values closer to the mean receive larger weights. However, the method only considers the importance of the data position while neglecting the importance of the value itself. Because the normal-distribution weight vector is symmetric about its center, identical data may be given different weights and different data may be given the same weight, which is inconsistent with the actual form of decision making.
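To make the weight computation above concrete, the following Python sketch (the simulations in this paper were carried out in MATLAB; the seven-element data set used here is hypothetical, since the paper's own sample is not reproduced in full) computes the position-based normal weights and the corresponding OWA aggregate for $n = 7$. The resulting weight vector reproduces the values 0.0702, 0.1311, 0.1907, and 0.2161 quoted above.

```python
import numpy as np

def normal_owa_weights(n):
    """Position-based normal distribution weights (cf. equations (1)-(5))."""
    positions = np.arange(1, n + 1)
    mu = (1 + n) / 2                                  # mean of the positions 1..n
    sigma = np.sqrt(np.mean((positions - mu) ** 2))   # standard deviation of the positions
    raw = np.exp(-(positions - mu) ** 2 / (2 * sigma ** 2))
    return raw / raw.sum()                            # normalize so the weights sum to 1

def owa(values, weights):
    """Apply the position weights to the values sorted in descending order."""
    return float(np.sort(np.asarray(values, dtype=float))[::-1] @ weights)

# Hypothetical 7-element sample containing the repeated values 55 and 58
data = [60, 58, 58, 58, 55, 55, 52]
w = normal_owa_weights(len(data))
print(np.round(w, 4))        # [0.0702 0.1311 0.1907 0.2161 0.1907 0.1311 0.0702]
print(round(owa(data, w), 2))
```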

3.2. Weighted Synthesis of the Probability Density Function of Normal Distribution

Inspired by the normally ordered weighted ensemble operator, we propose a new method based on the probability density function of the normal distribution, which improves on the ordered weighted ensemble operator. The principal idea is that, for a random variable, the probability density function describes its probability distribution; given a particular value of the random variable, its probability density can be calculated from the probability density function, and the relative density of that value among all observed values then serves as its weight. In addition to taking the ordering relationship of the data into account, this method also considers the importance of the data values themselves [29].

In practice, if the experimental data set is large enough, then according to the central limit theorem the data will approximately conform to the normal distribution [30–32]. The method proposed in this paper does not need to sort the data samples; it uses the properties of the probability density function to aggregate them. Therefore, the aggregation result is not affected by whether the data are scattered, and whether the data are discrete or continuous has no effect on the result either. The parameters to be aggregated are often a set of preference values provided by different individuals, and some individuals may assign excessively high or low preference values to options they particularly like or dislike. In this case, a minimal amount of weight should be granted to such "wrong" or "biased" opinions. In other words, the closer a preference value (parameter) is to the middle value, the greater its weight; conversely, the farther the value is from the median, the smaller its weight. Next, we introduce a weighting method based on the continuous normal distribution probability density function to determine the weight of each parameter [33–35]:
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}},$$
where $x$ follows the normal distribution $N(\mu, \sigma^{2})$, $\mu$ is the expected value (mean), and $\sigma$ is the standard deviation.

From the characteristics of the normal distribution,
$$f'(x) = -\frac{x-\mu}{\sigma^{2}} f(x).$$

This implies that $f'(x) > 0$ for $x < \mu$, $f'(x) = 0$ at $x = \mu$, and $f'(x) < 0$ for $x > \mu$.

Therefore, the function reaches its maximum at $x = \mu$:
$$f(\mu) = \frac{1}{\sqrt{2\pi}\,\sigma}.$$

Moreover, the farther $x$ is from $\mu$, the smaller the value of $f(x)$. When $\mu$ is fixed, the smaller the value of $\sigma$, the steeper the normal distribution curve, while the larger the value of $\sigma$, the flatter the curve (the standard deviation is always positive). That is, the farther a value lies from the median, the smaller its weight. The normal distribution curve is displayed in Figure 1. Inspired by these characteristics, we hereby provide a new method for determining the operator weights.

For a group of arguments $x_1, x_2, \dots, x_n$, the probability density of each argument is defined as follows:
$$f(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{(x_i-\mu)^{2}}{2\sigma^{2}}},$$
where $\mu$ and $\sigma$ are obtained by the following equations:
$$\mu = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \mu\right)^{2}}.$$

Considering that $w_i \in [0, 1]$ and $\sum_{i=1}^{n} w_i = 1$, we derive the following equation:
$$w_i = \frac{f(x_i)}{\sum_{j=1}^{n} f(x_j)}, \quad i = 1, 2, \dots, n.$$

Then, the weighted average (WA) operator is implemented for comprehensive integration. A WA operator of dimension $n$ [36] is a mapping $\mathrm{WA}: \mathbb{R}^{n} \to \mathbb{R}$ with an $n$-dimensional weighting vector $w = (w_1, w_2, \dots, w_n)^{T}$, where $w_i \in [0, 1]$ and $\sum_{i=1}^{n} w_i = 1$, which gives the following equation:
$$\mathrm{WA}(x_1, x_2, \dots, x_n) = \sum_{i=1}^{n} w_i x_i.$$

Equation (14) has the following properties:

The larger the deviation of a random variable's value from the mean, the smaller the weight assigned; that is, if $|x_i - \mu| > |x_j - \mu|$, then $f(x_i) < f(x_j)$ and hence $w_i < w_j$. When $x_i = \mu$, the weight attains its maximum value. When $|x_i - \mu| \to \infty$, $f(x_i) \to 0$ and the weight $w_i \to 0$. When $|x_i - \mu| = |x_j - \mu|$, we get $w_i = w_j$, which demonstrates that the weight function is symmetric about the mean.

For example, we also obtained a set of data through experiments and used this method to carry out the comprehensive integration calculation; the results are as follows:

Compared with the normal ordered weighting method, this method generalizes better: when there are many identical data values, the calculation result has less error, and because the method does not need to sort the data, the calculation process is simpler. The method also shows good usability and accuracy when applied to a variety of complex user experience experimental data. Each data value has a corresponding weight determined by its magnitude rather than by its sorted position; in this way the method reflects the importance of both the value itself and its location within the distribution. Additionally, it has desirable properties such as symmetry: the larger the deviation from the mean (expectation), the smaller the weight assigned, which is consistent with the actual form of decision making.
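As a concrete illustration of the proposed weighting, the following Python sketch (hypothetical data; the paper's own samples are not reproduced) assigns each observation a weight proportional to the normal probability density evaluated with the sample mean and standard deviation and then forms the weighted average. Identical values automatically receive identical weights, and no sorting step is required.

```python
import numpy as np

def ws_pdf_nd(values):
    """Weighted synthesis based on the normal distribution probability density
    function (WS-PDF-ND): each value is weighted by its density, no sorting."""
    x = np.asarray(values, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma == 0:                       # all values identical: plain average
        return float(mu), np.full(x.size, 1.0 / x.size)
    density = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    weights = density / density.sum()    # normalize so the weights sum to 1
    return float(weights @ x), weights

# Hypothetical sample with repeated values (no sorting step is needed)
estimate, weights = ws_pdf_nd([60, 58, 58, 58, 55, 55, 52])
print(np.round(weights, 4))   # the three 58s share one weight, the two 55s another
print(round(estimate, 2))     # denoised aggregate of the evaluations
```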

4. Experimental Verification and Analysis

The central limit theorem is a group of results in probability theory stating that the distribution of partial sums of a sequence of random variables is asymptotically normal. These theorems form the theoretical basis of mathematical statistics and error analysis and give the conditions under which the cumulative distribution function of a sum of many random variables converges pointwise to that of a normal distribution. In other words, when the sample size is large enough, the experimental data sample exhibits a normal distribution trend.
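A quick numerical illustration of this point is given below (Python; the uniform score distribution and the sample sizes are illustrative assumptions, not taken from the paper): the standardized sums of many independent uniform scores already have skewness and kurtosis close to those of a normal distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
# Sums of 50 independent uniform(0, 5) scores: by the central limit theorem,
# the distribution of these sums is approximately normal
sums = rng.uniform(0.0, 5.0, size=(10_000, 50)).sum(axis=1)
z = (sums - sums.mean()) / sums.std()
print(round(float(np.mean(z ** 3)), 2))   # skewness close to 0 (symmetric, bell-shaped)
print(round(float(np.mean(z ** 4)), 2))   # kurtosis close to 3, as for a normal law
```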

Given a fixed value $y$, a certain noise model is selected, from which random noises are generated and added to the original fixed value $y$; then four comprehensive integration algorithm models, namely de-maximum averaging (DMA), normal weighted synthesis (NWS), normal ordered weighted synthesis (NOWS), and weighted synthesis based on the continuous probability density function of the normal distribution (WS-PDF-ND), are applied to obtain denoised estimated values $\hat{y}$. The error between each estimated value and the true value is calculated and analyzed, and the performance of the four integration approaches is observed and compared. The experimental principle flow chart is illustrated in Figure 2.

Sources of data acquisition: MATLAB was used to generate 16 sets of data by combining four average values with four standard deviations, with 20 data points in each set. The average value represents the evaluated product's real user experience value, and the standard deviation simulates the noise in the evaluation process.

Let the data variables be $x_1, x_2, \dots, x_n$, with $x_{\max}$ representing the maximum value of the data and $x_{\min}$ the minimum value. The four models are calculated as follows: (1) de-maximum averaging (DMA): the maximum and minimum values are removed and the remaining data are averaged, $\hat{y}_1 = \frac{1}{n-2}\left(\sum_{i=1}^{n} x_i - x_{\max} - x_{\min}\right)$; (2) normal weighted synthesis (NWS); (3) normal ordered weighted synthesis (NOWS), in which the data values are sorted in descending order and the weights are obtained from equations (22) and (23), taken from the literature [28]; (4) weighted synthesis based on the probability density function of the normal distribution (WS-PDF-ND), in which equations (12)–(15) are used to obtain the weight vector and perform the integration to obtain the corresponding estimates.
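A minimal version of this simulation can be sketched as follows (Python rather than MATLAB; only the DMA baseline and the proposed WS-PDF-ND aggregator are reproduced, and the true value and noise levels are illustrative placeholders rather than the paper's 16 configurations).

```python
import numpy as np

def dma(x):
    """De-maximum averaging: drop the largest and smallest values, then average."""
    x = np.sort(np.asarray(x, dtype=float))
    return float(x[1:-1].mean())

def ws_pdf_nd(x):
    """Weighted synthesis via the normal probability density function."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    d = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    return float((d / d.sum()) @ x)

rng = np.random.default_rng(0)
true_value = 20.0                       # illustrative "real" experience value
for noise_std in (0.1, 0.5, 1.0, 2.0):  # illustrative noise levels
    sample = true_value + rng.normal(0.0, noise_std, size=20)
    print(noise_std,
          round(abs(dma(sample) - true_value), 4),        # absolute error of DMA
          round(abs(ws_pdf_nd(sample) - true_value), 4))  # absolute error of WS-PDF-ND
```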

Through MATLAB simulation, 16 sets of data with different means and variances are obtained, and the results of the four integrated synthesis operators are presented in Table 1.

As depicted in Table 1, the overall effect of the four evaluation modeling methods is quite adequate, and the error margin is not very large. After specific comparison of the results of the four methods, method 4, the continuous normal distribution probability density function method, was found to be the closest to the true value in all 4 experiments.

In order to compare and analyze the performance of the four evaluation models more directly and clearly, the absolute errors of the five groups of samples were calculated through the four evaluation modeling methods, as delineated in Table 2. A line chart of the absolute error comparison analysis is displayed in Figure 3.

From the line chart in Figure 3, we can intuitively see that the absolute error of the results obtained by method 4 is the smallest of the four methods and can even reach 0, which indicates that the new method set forth here exhibits the strongest generalization ability and achieves the best effect in removing the uncertain noise of the supervision signal.

In order to further verify and compare the superiority and generalization ability of the four model methods, the same procedure was used to generate three additional groups of experimental samples, with data volumes of 50, 100, 200, 1000, 2000, 5000, and 8000 in each group, and the mean square error (MSE) and mean absolute error (MAE) were used as comparative indicators, as listed in Table 3:
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^{2}, \qquad \mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{y}_i - y_i\right|,$$
where $\hat{y}_i$ is the predicted output of the built DMA, NWS, NOWS, and WS-PDF-ND models, $y_i$ is the expected output, and $N$ is the number of samples.
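Both indicators can be computed directly from the definitions above; a small Python helper (a straightforward transcription of the formulas, not the authors' code) is:

```python
import numpy as np

def mse(pred, target):
    """Mean square error between predicted and expected outputs."""
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    return float(np.mean((pred - target) ** 2))

def mae(pred, target):
    """Mean absolute error between predicted and expected outputs."""
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    return float(np.mean(np.abs(pred - target)))
```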

Table 3 demonstrates that the prediction error of the WS-PDF-ND method is lower than those of the other three methods, which shows that this method provides more accurate and effective generalization, with its accuracy improving as the amount of data increases.

5. Application Example Verification

5.1. Bread Product User Experience Experiment

According to the literature on bread evaluation [30], the dimensions of bread characteristics perceived by professional bread evaluators include the aroma, shape, color, taste, touch, and inner structure. However, the target experiencer is generally not a professional and may have his or her own criteria when evaluating the multiple attributes of bread. We found that the perception of the touch dimension was weak, while perception of the other five dimensions was clear and stable, so the touch dimension was excluded. Finally, the evaluation indices of users' bread taste experience were defined as flavor, shape, color, taste, and inner structure.

In principle, before any particular product is evaluated, every possible evaluation result is equally likely, so the measurement scale of user experience satisfaction should be evenly distributed across the range of values. The user experience satisfaction is divided into 4 grades: very satisfied, satisfied, neutral, and dissatisfied. The specific evaluation measures are laid out in Table 4.

Each of the five evaluation indicators is scored out of 5 points, for a perfect total score of 25 points. Table 5 presents the bread evaluation indicators and the evaluation basis.

5.2. Experimental Principles of Bread Products Experience

In the perceptual experience of bread products, people perceive the shape and color of the bread through vision, its fragrance through smell, and its taste and inner structure through the sense of taste. Through this perceptual input, the experiencer simultaneously forms memory impressions of the bread's appearance, smell, and taste; the visual, olfactory, and gustatory information is transmitted to the brain through the nerves, and the brain then establishes its perception of the bread and classifies it by combining the memory of previous visual, olfactory, and gustatory experiences to evaluate the bread's performance. Nonetheless, people's evaluations and feelings about things are relatively vague. In psychology, the dimensions of a user's perceived experience can be measured and evaluated using psychological scales (such as Likert scales and semantic differential scales) [5]. The principle flow chart of the bread product perception experience test is illustrated in Figure 4.

Participants: 6 professional bakers from the cake room and 10 college and university students aged between 21 and 25 years, all reporting normal taste, vision, and smell.
Experience product: a new bread product developed by the school bakery.

The user bread product experience evaluation process is as follows: (1) distribute the questionnaires, evaluation indicators, and evaluation basis tables to the professional bakers and school students, and explain the rules for filling them out (the scoring range is 0–5 points with a step size of 0.5); (2) each experiencer individually experiences the prepared bread products; (3) during the experience, the experiencer fills in the questionnaire according to the evaluation index and evaluation basis form; (4) collect the questionnaires and end the experiment.

5.3. Results of Four Evaluation Modeling Methods

After collating the experimental data, we obtained 10 participants’ satisfaction scores in the five indicators as revealed in Table 6.

The final results of bread products using four evaluation modeling methods are set out in Figure 5.

From Figure 5, we can obviously see that the satisfaction results of the bread product experience obtained by the four evaluation modeling methods are all satisfactory.

6. Algorithm Verification Analysis

In order to verify the validity of the method, a new bread product developed by the school bakery was used to evaluate users' satisfaction with the bread product experience. Firstly, the evaluation index system of the bread experience is established based on the factors influencing bread experience satisfaction. Since different experts give different scores to each index weight, uncertain noise effects are also present here. Therefore, six professional bakers score the weights of the indicators at all levels, four evaluation modeling methods are used to determine the relative weights of the indicators in the index system, and the first step of aggregation is carried out. Then, according to the gray classes and the corresponding gray numbers, 10 students give experience scores for each index through the experiment, and the comprehensive quantitative evaluation result is obtained by processing the data with the gray clustering method.

6.1. Bakers Determine Indicator Weights

Taking the index weight determination as an example, six professional bakers scored the importance of five evaluation indicators, as shown in Figure 6:

As can be observed from Figure 6, different people evaluating the same raw material ratio scheme reported different results. For instance, regarding one of the indicators, Baker 1 gave 4.5 points, denoting "very satisfied," while Baker 5 gave only 1.5 points, implying "neutral." Because each person's experience, cognition, and knowledge level differ, each person's evaluation result differs; the evaluation process therefore inevitably contains noise, and it is very common for real-life professionals to give quite different experience evaluations of the same product.

Taking the calculation of the weight of one indicator as an example, the number of evaluators is $n = 6$, and the weighting vector is obtained from equations (12)–(14) in Section 3.2.

The absolute weight of the index can then be calculated as the weighted average of the six scores, $W = \sum_{i=1}^{6} w_i x_i$.

The absolute weights of the remaining four indicators are calculated in the same way and are 3.30, 3.00, 3.50, and 2.51, respectively. The relative weights of the five indicators are then obtained by normalizing these absolute weights.

Similarly, the other three evaluation modeling methods are used to calculate the comprehensive integration results and relative weight values of each evaluation index.
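Under the same assumptions as in Section 3.2, this weight-determination step can be sketched in Python as follows. The six baker scores used here are hypothetical placeholders (the actual scores appear only in Figure 6): each indicator's scores are aggregated with the WS-PDF-ND operator to obtain its absolute weight, and the absolute weights are then normalized to relative weights.

```python
import numpy as np

def ws_pdf_nd(scores):
    """Aggregate a set of scores with the normal-pdf weighted synthesis."""
    x = np.asarray(scores, dtype=float)
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return float(mu)
    d = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    return float((d / d.sum()) @ x)

# Hypothetical importance scores from six bakers for the five indicators
scores = {
    "flavor":    [4.5, 4.0, 3.5, 4.0, 1.5, 4.0],
    "shape":     [3.5, 3.0, 3.5, 3.0, 3.5, 3.0],
    "color":     [3.0, 3.0, 2.5, 3.5, 3.0, 3.0],
    "taste":     [4.0, 3.5, 3.0, 3.5, 4.0, 3.0],
    "structure": [2.5, 2.0, 3.0, 2.5, 2.5, 2.5],
}
absolute = {name: ws_pdf_nd(vals) for name, vals in scores.items()}
total = sum(absolute.values())
relative = {name: round(w / total, 4) for name, w in absolute.items()}
print(relative)   # relative weights of the five indicators (sum to 1)
```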

6.2. Gray Cluster Evaluation

The gray clustering evaluation model is a classic assessment method in gray system theory [37]. On the basis of the determined evaluation gray classes, a whitening weight function is established and the whitening value of the gray number corresponding to each gray class is calculated; in this way the gray evaluation system is converted into a white system with specific evaluation results. The specific steps are as follows: (1) determine the evaluation gray classes and establish the corresponding whitening weight functions: following the method given in [32], four gray classes, "very satisfied," "satisfied," "neutral," and "dissatisfied," are determined, with corresponding gray numbers (1, 2, 3, 4); the first category, "very satisfied," corresponds to a score of 4 or above; the second category, "satisfied," to a score of about 3 points; the third category, "neutral," to a score of about 2 points; and the fourth category, "dissatisfied," to a score of 1 or below; (2) construct the gray clustering weight matrix; (3) carry out the gray clustering evaluation. According to the gray classes and corresponding gray numbers determined in this paper, experience scores for each index were collected from 10 students, and the data were then processed with the gray clustering method to obtain the comprehensive quantitative evaluation results.

Construct a gray evaluation matrix according to the experiencers' satisfaction scores in Table 6:

Based on the determined gray class and whitening weight function method introduced in [38], the gray clustering weight matrix of 5 bread satisfaction evaluation indexes is calculated as follows:

The comprehensive clustering evaluation vector of the four evaluation modeling methods is given as

Here, the relative weight vector of the 5 indicators is the one obtained from each of the four integration operators under the expert evaluation principle.

According to the evaluation principles of the gray clustering model, the clustering weight vector and the bread experience satisfaction evaluation thresholds are further synthesized, and the comprehensive satisfaction values of the four evaluation modeling methods are calculated. According to the grades defined in Table 4, the comprehensive satisfaction value does not reach "very satisfied" but lies in the "satisfied" range. The result of the gray clustering evaluation model is a gray number whose matching comprehensive evaluation result is also "satisfied." The results achieved via the weighted synthesis method based on the continuous normal distribution probability density function are in accordance with this, confirming the effectiveness and practicability of the novel method; the results also indicate that the bread product still has room for improvement.
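Assuming the standard gray clustering synthesis described above (the whitening weight functions themselves follow [38] and are not reproduced here), the final step multiplies the relative indicator weight vector by the 5 × 4 gray clustering weight matrix to obtain the comprehensive clustering evaluation vector and then takes its inner product with a gray-class threshold vector (4, 3, 2, 1). The sketch below uses placeholder numbers, not the paper's actual matrices.

```python
import numpy as np

# Hypothetical relative weights of the 5 indicators (from one aggregation method)
weights = np.array([0.22, 0.20, 0.19, 0.23, 0.16])

# Hypothetical 5 x 4 gray clustering weight matrix: each row gives an
# indicator's membership in the classes (very satisfied, satisfied,
# neutral, dissatisfied) and sums to 1
R = np.array([
    [0.30, 0.45, 0.20, 0.05],
    [0.25, 0.50, 0.20, 0.05],
    [0.20, 0.50, 0.25, 0.05],
    [0.35, 0.40, 0.20, 0.05],
    [0.15, 0.45, 0.30, 0.10],
])

thresholds = np.array([4.0, 3.0, 2.0, 1.0])  # assumed gray-class score thresholds

sigma = weights @ R      # comprehensive clustering evaluation vector
z = sigma @ thresholds   # comprehensive satisfaction value
print(np.round(sigma, 4), round(float(z), 2))
```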

7. Conclusion

In this paper, by combining the theories of statistics and decision science, a weighted synthesis operator based on the normal distribution probability density function is proposed to address the influence of uncertain noise in the process of user experience quality evaluation modeling. The simulation results suggest that the proposed method has the smallest error when the sample data follow a normal distribution; in other words, its removal of uncertain noise is optimal. Finally, a user experience test experiment on bread products is performed: four evaluation modeling methods are used to determine the weights of the indicators at all levels of the index system, and gray clustering evaluation is carried out. The final results of the four methods are consistent, which confirms the feasibility and practicability of the proposed evaluation method.

Data Availability

The data supporting the findings of this study are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission, KJZD-K201801502, and the Technology Innovation and Application Development Project of Chongqing Science and Technology Bureau, cstc2020iscx-msxm0366.