Abstract

How to choose suppliers scientifically is an important part of the strategic decision-making management of enterprises. Expert evaluation is subjective and uncontrollable; sometimes biased evaluations occur, which can lead to controversial or unfair results in supplier selection. To tackle this problem, this paper proposes a novel method that employs machine learning to learn the credibility of experts from historical evaluation data, which is then converted into weights in the evaluation process. We first use a Support Vector Machine (SVM) classifier to classify the experts' historical evaluation data and calculate the experts' evaluation credibility, then determine the weights of the evaluation experts, and finally assemble the weighted evaluation results to obtain a preference order of the candidate suppliers. The main contribution of this method is that it overcomes the shortcomings of repeated conversions and the resulting loss of evaluation information, retains the initial evaluation information to the maximum extent, and improves the credibility of the evaluation results as well as the fairness and scientific rigor of supplier selection. The results show that it is feasible to classify the experts' past evaluation data with the SVM classification model, and that the expert weights determined on the basis of the experts' evaluation credibility are adjustable.

1. Introduction

Supplier evaluation and selection is an important part of strategic decision-making management in enterprises and an important branch of enterprise supply chain management research [1, 2]. In enterprise operations, production and procurement activities, such as purchasing raw materials, machinery, and equipment, as well as external technology and services, generally involve the choice of suppliers [3, 4]. Choosing suppliers through scientific decision-making therefore plays a vital role in improving the market competitiveness of enterprises and maximizing economic and social benefits.

The existing domestic and overseas research on supplier selection mainly focuses on two aspects. The first is the construction of the evaluation index system in the decision-making process of supplier selection, mainly considering the industry of the enterprise and the personalized requirements on suppliers [5-7]. The second is how to select the evaluation method and model scientifically [8-10], such as the best-worst method (BWM), TOPSIS, and fuzzy set methods. The research in [11] identified five essential barriers in the supply chain and proposed a Fuzzy-AHP methodology to compare the weights of these barriers. A combined FUCOM-Rough SAW approach has been applied to supplier selection in order to achieve sustainability in resources and the environment [12]. The best-worst method was used to determine the criteria weights for green supplier selection, aiming to provide environment-friendly information system products [13]. In [14], the Fuzzy-TOPSIS technique is used to determine the importance of selection criteria and helps select dairy suppliers. Chakraborty et al. [15] address the uncertainty in supplier selection with D numbers, and MARCOS is used for ranking alternative suppliers. Zhu et al. [16] built a closed-loop supply chain model and focused on the recycling behaviors of the members of this supply chain. Kurpjuweit et al. [17] developed a typology of three supplier selection archetypes. The study in [18] shows that applying a structured decision-making technique is vital, especially under complex conditions that involve both qualitative and quantitative criteria. Pearn et al. [19] considered a two-stage method composed of quality verification and selection decision for multiple-line supplier selection problems. Xie et al. [20] address uncertain yield and demand in supplier selection. However, these methods focus only on the mathematical operations used to assemble the evaluations and do not fully consider the loss of information during the assembly process. In fact, the more times evaluation information is assembled and converted using mathematical methods, the larger the loss of information.

Support Vector Machine (SVM) is a supervised machine learning method based on statistical learning theory, which has become a hot research topic in artificial intelligence, following artificial neural networks, in recent years. The SVM method is built on the Vapnik-Chervonenkis (VC) dimension and the structural risk minimization principle of statistical learning theory. The VC dimension is a core concept of statistical learning theory; it is an important indicator of the learning ability and complexity of a set of functions. With limited sample information, SVM seeks the best compromise between model complexity and learning ability and thereby obtains good generalization ability [21]. It has been widely used in many fields such as classification [22, 23], feature selection [24], pattern recognition [25], and troubleshooting [26].

This paper proposes to use an SVM classifier in the supplier selection process. The experts' past evaluation data are classified and used to calculate their evaluation credibility, which in turn determines each expert's weight. The current evaluation results are then assembled directly with simple mathematical calculations; this not only retains the initial evaluation information to the maximum extent but also improves the credibility of the evaluation results and supports fair and scientific decision-making in supplier selection.

The paper aims to solve these problems in supplier selection and to improve its fairness and reasonableness. The main contribution is that SVM is used to evaluate the credibility of experts, which is subsequently converted into weights in supplier evaluation. Our method avoids repeated conversions and the resulting loss of evaluation information, largely preserves the initial evaluation information, and improves the credibility of the evaluation results.

The paper is organized as follows: Section 1 reviews the motivation, related work, and our basic idea. Section 2 describes the theoretical model and the methodology used in this paper. Section 3 describes how the evaluation data are preprocessed into an unbiased form that meets the input requirements of the SVM classifier. Section 4 presents the whole processing flow of our method: the SVM classifier is trained and then used to infer the credibility of experts, which is finally converted into the experts' weights in supplier selection. Section 5 concludes the paper.

2. Theoretical Model and Methodology Design

2.1. Theoretical Model: SVM Classifier

Traditional statistical research is based on the law of large numbers, an approximation theory that assumes huge amounts of samples, but in reality only a limited number of samples is available, which cannot meet the assumptions of that theory. To solve this problem, Vapnik et al. proposed a machine learning theory called statistical learning theory (SLT). Cortes and Vapnik proposed the linear support vector machine [27], Boser and Vapnik introduced kernel techniques and proposed the nonlinear support vector machine [28], and Drucker et al. extended it to support vector regression [29]. The original binary classification model was later extended to the multiclass support vector machine [30] and the structural support vector machine for structured prediction [31].

Assume the training set with $n$ samples is

$$T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}, \quad x_i \in \mathbb{R}^d,\; y_i \in \{+1, -1\}, \tag{1}$$

where $x_i$ is the input vector of the $i$th sample and $y_i$ is its class label.

For a set of functions $\{f(x, w)\}$, there exists an optimal function that minimizes the expected risk when it is used to evaluate unknown samples:

$$R(w) = \int L\bigl(y, f(x, w)\bigr)\, \mathrm{d}F(x, y),$$

where $\{f(x, w)\}$ is the set of prediction functions, $L(y, f(x, w))$ is the loss function that measures how much the prediction $f(x, w)$ deviates from the real value $y$, and $F(x, y)$ is the joint probability distribution.

In a practical machine learning context, the expected risk cannot be calculated or minimized directly, because the joint probability distribution $F(x, y)$ is unknown [32]. The Empirical Risk Minimization (ERM) method is widely used in traditional machine learning; it aims at minimizing the empirical risk $R_{\mathrm{emp}}(w)$, but this is not reasonable when only a limited number of samples is available. In statistical learning theory, under the worst-case distribution, the empirical risk satisfies the relation in equation (3) with probability $1 - \eta$:

$$R(w) \le R_{\mathrm{emp}}(w) + \sqrt{\frac{h\bigl(\ln(2n/h) + 1\bigr) - \ln(\eta/4)}{n}}, \tag{3}$$

where $n$ is the number of samples and $h$ is the VC dimension. For a practical classification problem, the number of samples is fixed; the higher the VC dimension (that is, the more complex the classifier), the larger the confidence interval, which leads to a larger gap between the real risk and the empirical risk [33]. Therefore, when we design a classifier, not only the empirical risk but also the VC dimension should be minimized in order to shrink the confidence interval and minimize the expected risk; this is called structural risk minimization (SRM) [34].

Support Vector Machine is a machine learning method based on the VC dimension and the structural risk minimization principle, and it specializes in problems with a limited number of samples [35]. SRM improves the generalization ability of models and imposes no limitation on the dimensionality of the data. For linear classification, the classification plane is the plane that has the largest distance to each class [36, 37]; for nonlinear classification, the data are transformed into a higher-dimensional space, which turns the nonlinear classification problem into a linear classification problem in that higher-dimensional space [38].
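As a concrete illustration of this idea (a standard textbook example rather than part of this paper), consider the mapping $\phi(x_1, x_2) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)$, which sends 2-dimensional inputs into a 3-dimensional feature space; the dot product in the mapped space reduces to a kernel evaluated in the original input space:

```latex
% Dot product in the mapped feature space equals a polynomial kernel in the input space:
\phi(x) \cdot \phi(z)
  = x_1^2 z_1^2 + 2\,x_1 x_2\, z_1 z_2 + x_2^2 z_2^2
  = (x_1 z_1 + x_2 z_2)^2
  = (x \cdot z)^2 .
```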

SVM was originally proposed to solve linearly separable problems. Its theory is developed from the optimal classification hyperplane for linearly separable data. Suppose the training sample set given in formula (1) is linearly separable; that is, there exists a classification hyperplane that divides the $n$ samples correctly and has the maximum distance from each class. This hyperplane is the optimal classification hyperplane, and the distance between the nearest sample of each class and the optimal classification hyperplane is called the margin. Therefore, the optimal hyperplane is also known as the maximum margin hyperplane, as shown in Figure 1.

The optimal classification hyperplane separates the two classes of samples correctly and makes all samples of a single class fall on one side of the hyperplane, which means that all samples satisfy

$$y_i\bigl[(w \cdot x_i) + b\bigr] > 0, \quad i = 1, 2, \ldots, n.$$

By rescaling $w$ and $b$, this condition can be written as $(w \cdot x_i) + b \ge 1$ for samples of the first class ($y_i = +1$) and $(w \cdot x_i) + b \le -1$ for samples of the second class ($y_i = -1$). These two inequalities can be combined into a single inequality:

$$y_i\bigl[(w \cdot x_i) + b\bigr] \ge 1, \quad i = 1, 2, \ldots, n.$$

The value of $y_i[(w \cdot x_i) + b]$ for the samples on the boundary of each class equals $1$ and $-1$, respectively, so the margin between the two classes is $2/\|w\|$; hence, the problem of finding the optimal hyperplane becomes an optimization problem under inequality constraints:

$$\min_{w, b}\; \frac{1}{2}\|w\|^2 \quad \text{s.t.}\quad y_i\bigl[(w \cdot x_i) + b\bigr] \ge 1,\; i = 1, 2, \ldots, n,$$

which can be equivalently converted to the following optimization problem using the Lagrange method:

$$L(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{n} \alpha_i \Bigl\{ y_i\bigl[(w \cdot x_i) + b\bigr] - 1 \Bigr\},$$

where $\alpha_i \ge 0$ are the Lagrange coefficients. The optimal classification function can be obtained by quadratic programming, and the solution is

$$f(x) = \operatorname{sgn}\left(\sum_{i=1}^{n} \alpha_i^{*} y_i (x_i \cdot x) + b^{*}\right),$$

where $\alpha_i^{*}$ and $b^{*}$ are the optimal Lagrange coefficients and bias.
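For completeness, the standard intermediate step (textbook material rather than part of this paper's derivation) that connects the Lagrangian above to the quadratic program actually solved is the Wolfe dual, obtained by setting the derivatives of $L(w, b, \alpha)$ with respect to $w$ and $b$ to zero:

```latex
% Stationarity conditions of the Lagrangian:
\frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^{n} \alpha_i y_i x_i,
\qquad
\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{n} \alpha_i y_i = 0.

% Substituting back yields the dual quadratic program in the alpha_i alone:
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j)
\quad \text{s.t.}\quad \sum_{i=1}^{n} \alpha_i y_i = 0,\;\; \alpha_i \ge 0.
```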

For linearly nonseparable problems, we can use a nonlinear mapping $\phi(x)$ to map the samples from the original input space into a higher-dimensional feature space and then construct the optimal classification hyperplane there. However, the dot product operations involved in mapping samples into the higher-dimensional space are computationally intensive. Bajard et al. and Hamidzadeh et al. proposed replacing the dot product operation with kernel functions that satisfy the Mercer condition in order to reduce the computational complexity [39, 40].

Support vector machines can implement various kinds of nonlinear classifiers by selecting different kernel functions. There are three common types of kernel functions:

(1) Polynomial kernel function:

$$K(x, x_i) = \bigl[(x \cdot x_i) + 1\bigr]^q,$$

where $q$ is the order of the polynomial.

(2) Radial basis function (RBF):

$$K(x, x_i) = \exp\left(-\frac{\|x - x_i\|^2}{\sigma^2}\right),$$

in which $\sigma$ is the width of the radial basis function. Each center of a basis function corresponds to a support vector, and its position, width, number, and weight can be determined by the training process.

(3) Sigmoid kernel function:

$$K(x, x_i) = \tanh\bigl(\kappa (x \cdot x_i) + c\bigr).$$

An SVM classifier that employs the Sigmoid kernel, when $\kappa$ and $c$ satisfy certain conditions, is equivalent to a multilayer perceptron neural network with a single hidden layer, where the number of hidden nodes equals the number of support vectors.
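As a reading aid only (not code from the paper; function names and default parameter values are ours), the three kernel families can be written in a few lines of NumPy:

```python
import numpy as np

def polynomial_kernel(x, xi, q=3):
    """Polynomial kernel K(x, xi) = (x . xi + 1)^q, where q is the polynomial order."""
    return (np.dot(x, xi) + 1.0) ** q

def rbf_kernel(x, xi, sigma=1.0):
    """Radial basis function kernel K(x, xi) = exp(-||x - xi||^2 / sigma^2)."""
    return np.exp(-np.sum((x - xi) ** 2) / sigma ** 2)

def sigmoid_kernel(x, xi, kappa=0.01, c=-1.0):
    """Sigmoid kernel K(x, xi) = tanh(kappa * (x . xi) + c); a valid Mercer
    kernel only for certain (kappa, c) combinations."""
    return np.tanh(kappa * np.dot(x, xi) + c)

# Example: evaluate each kernel on two toy vectors.
a, b = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(polynomial_kernel(a, b), rbf_kernel(a, b), sigmoid_kernel(a, b))
```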

2.2. Methodology Design
2.2.1. Overview of Research Methodology

In this paper, we propose a novel weighted method based on SVM to evaluate suppliers. Our method consists of four parts: (1) collect the historical evaluation data of the experts and preprocess them into training and test data; (2) train an SVM classifier and apply it to the validation data to evaluate the credibility of the evaluation experts' opinions; (3) determine the weight of each expert according to the credibility; (4) evaluate the suppliers by combining the experts' weights with their evaluation data. The whole workflow is shown in Figure 2.

Our method simplifies the mathematical operations in the process of assembling evaluations, which reduces the loss of information. In addition, the SVM classifier, which performs well with a limited number of samples, makes the evaluation of credibility straightforward. Our method is therefore reasonable and effective for supplier selection.

2.2.2. Samples Set of SVM Classifier

SVM is a supervised machine learning method; solving a classification problem with it involves two processes: learning and classification. During the learning process, the training data are used to train a classifier according to a certain learning strategy; the classifier is then used to classify the input data samples [41]. In this paper, the experts' evaluation data set is defined as

$$D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}, \quad x_i \in \mathbb{R}^{2l},\; y_i \in \{+1, -1\}, \tag{11}$$

where $x_i$ is an input vector of $2l$ dimensions obtained from the preprocessed expert evaluation data, and $y_i$ is its corresponding output. In our work, "+1" means that the expert's evaluation is credible, and the sample is called credible data; "−1" means that the expert's evaluation is biased, and the sample is called biased data. In our experiment, 80% of all expert evaluation samples are used as training data, and the remaining 20% are used as validation data.
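A minimal sketch of how such a labelled sample set could be assembled and split 80/20, assuming each record is already a preprocessed $2l$-dimensional vector with a +1/−1 label; the data, variable names, and random seed below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy stand-in for the preprocessed expert evaluation records: each row is a
# 2l-dimensional feature vector, each label is +1 (credible) or -1 (biased).
l = 5
X = rng.random((450, 2 * l))
y = rng.choice([+1, -1], size=450)

# Shuffle, then split 80% / 20% into training and validation sets.
idx = rng.permutation(len(X))
cut = int(0.8 * len(X))
X_train, y_train = X[idx[:cut]], y[idx[:cut]]
X_valid, y_valid = X[idx[cut:]], y[idx[cut:]]
print(X_train.shape, X_valid.shape)  # (360, 10) (90, 10)
```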

2.2.3. Kernel Function and Parameter of SVM Classifier

The kernel function is a very important part of an SVM classifier and affects the classification result, but Mercer's theorem only tells which functions can be used as kernels in the support vector algorithm; it does not explain how to construct the nonlinear transformation function $\phi(x)$ or the kernel function $K(x, x_i)$, so the type and parameters of the kernel function must be determined according to the specific task.

Compared with the polynomial kernel, the radial basis kernel has fewer parameters, is relatively fast to compute, and adapts well to parameter tuning [42]; the Sigmoid kernel has only two parameters, but it cannot be represented as the dot product of two vectors in a feature space [43, 44]. Therefore, the radial basis function is used as the kernel function in our research, according to the characteristics of our samples. The optimal classification function is

$$f(x) = \operatorname{sgn}\left(\sum_{i=1}^{n} \alpha_i^{*} y_i \exp\left(-\frac{\|x - x_i\|^2}{\sigma^2}\right) + b^{*}\right).$$

We employ the K-fold cross-validation method to select and optimize the parameters of the SVM classifier: first, the samples are divided into K mutually disjoint subsets of the same size; each subset is used as the validation set once, while the other K−1 subsets are used as the training set to train the classifier. After traversing all K folds, we select the parameter combination that gives the smallest validation error as the optimal parameters of the classifier.
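For illustration, the following sketch performs the same kind of grid search with K-fold cross-validation; the paper itself uses LibSVM's grid.py tool, and scikit-learn's GridSearchCV is used here only as a commonly available stand-in, with placeholder data and parameter ranges:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((450, 10))            # placeholder preprocessed samples
y = rng.choice([+1, -1], size=450)   # placeholder credible/biased labels

# Exhaustive grid over the RBF penalty factor C and kernel width gamma,
# evaluated with 10-fold cross-validation.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [2 ** k for k in range(-2, 6)],
                "gamma": [2 ** k for k in range(-6, 2)]},
    cv=10,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```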

2.2.4. Evaluation Experts’ Credibility and Evaluation Weight

The evaluation data of an evaluation expert in the n most recent bidding activities are used as the input of the SVM classifier, and the credibility $c$ of this evaluation expert is defined from the output of the SVM classifier as

$$c = \frac{n_{+}}{n_{+} + n_{-}}, \tag{15}$$

where $n_{+}$ is the number of times the output is "+1" and $n_{-}$ is the number of times the output is "−1."

The credibility of an evaluation expert reflects the quality of his or her past evaluation data. Based on the evaluation credibility $c_k$ of the $k$th expert, we define a normalized weight for that expert:

$$\omega_k = \frac{c_k}{\sum_{j=1}^{m} c_j}, \quad k = 1, 2, \ldots, m, \tag{16}$$

where $m$ is the number of evaluation experts.
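A minimal sketch of formulas (15) and (16) in code, assuming the classifier returns a +1/−1 label for each of an expert's past evaluation records; the expert names and outputs below are made up for illustration:

```python
from typing import Dict, List

def credibility(outputs: List[int]) -> float:
    """Formula (15): fraction of an expert's past records classified as credible (+1)."""
    n_pos = sum(1 for o in outputs if o == +1)
    n_neg = sum(1 for o in outputs if o == -1)
    return n_pos / (n_pos + n_neg)

def normalized_weights(cred: Dict[str, float]) -> Dict[str, float]:
    """Formula (16): each expert's weight is their credibility divided by the total."""
    total = sum(cred.values())
    return {expert: c / total for expert, c in cred.items()}

# Example: classifier outputs over each expert's last 10 evaluations (made-up numbers).
outputs = {"A": [+1] * 8 + [-1] * 2, "B": [+1] * 9 + [-1], "C": [+1] * 6 + [-1] * 4}
weights = normalized_weights({e: credibility(o) for e, o in outputs.items()})
print(weights)
```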

3. Data Preprocessing

During supplier bidding activities, different purchasing categories lead to different evaluation attributes and evaluation criteria, and the evaluation experts and suppliers also vary from one bidding activity to another. This means that the experts' original evaluation data are not directly comparable; they must be processed beforehand so that they become comparable and reasonable and can be fed into an SVM classifier.

Because the original sample data themselves carry the key information about the experts' evaluation characteristics, and different preprocessing methods retain this characteristic information to different degrees, only an appropriate preprocessing method can unify the scale of the data without harming the classification performance. In this paper, a combined method is used to preprocess the experts' evaluation data; the process is as follows:

3.1. Normalization Processing of a Group Evaluation Data

Assume that $x_{ijk}$ is the evaluation score that the $k$th evaluation expert gave to the $j$th attribute of the $i$th supplier; after the group normalization processing we obtain the normalized value $x'_{ijk}$, where $i$ indicates the $i$th supplier, $i = 1, 2, \ldots, s$; $j$ is the $j$th evaluation attribute, $j = 1, 2, \ldots, l$; and $k$ denotes the $k$th evaluation expert, $k = 1, 2, \ldots, m$.

Taking a public bidding activity for goods as an example, four suppliers (P1, P2, P3, and P4) participated in this bid, and five evaluation experts (A, B, C, D, and E) evaluated these suppliers. The total evaluation score is 100 points, of which 30 points is an objective score related to the quoted price and 70 points is the score given by the evaluation experts. In this paper, we only consider the expert evaluation score; the experts' original evaluation data for the different attributes of the suppliers are shown in Table 1, in which "bidding responsiveness" refers to the degree of matching with the bidding requirements and the value in parentheses is the maximum score for the corresponding attribute.

To better understand the evaluation data from experts A, B, C, D, and E, the box plots in Figure 3 illustrate the distribution of the evaluation data. Figures 3(a) to 3(e) correspond to the five indices; e.g., Figure 3(a) shows the distribution of the scores that the experts gave to the technology index of suppliers P1 (blue box), P2 (orange box), P3 (grey box), and P4 (yellow box).

From this figure, we can see that the score distributions of P1 and P3 are more concentrated across all five indices than those of P2 and P4, and the average scores of P1 and P3 are relatively higher on all indices.

The evaluation data in Table 1 include the initial weights of the evaluation attributes; these weights need to be removed before normalization. The normalized evaluation data are shown in Table 2.

3.2. Normalization of Individual Evaluation Data

The original evaluation data $x_{ijk}$ of the $k$th evaluation expert are also normalized with respect to that individual expert, yielding the normalized data $x''_{ijk}$, where $i$ indicates the $i$th supplier, $i = 1, 2, \ldots, s$; $j$ is the $j$th evaluation attribute, $j = 1, 2, \ldots, l$; and $k$ denotes the $k$th evaluation expert, $k = 1, 2, \ldots, m$.

Similarly, we remove the attribute weights from the data in Table 1, substitute them into formula (18), and obtain the normalized data of each individual expert, as shown in Table 3.

After this preprocessing, the experts' evaluation data not only keep the critical information of the original evaluations but also remove the barriers caused by different attribute criteria and magnitudes. The evaluation sample of an expert in formula (11) is formed from the evaluation data in Tables 2 and 3 (which together form the $2l$-dimensional input vector $x_i$), and its corresponding output label is $y_i \in \{+1, -1\}$.
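As a rough sketch of how the two normalized views could be concatenated into one $2l$-dimensional input vector, the code below assumes, purely for illustration, that the group normalization divides each score by its column sum over the experts and that the individual normalization divides each score by the expert's own total; the paper's exact normalization formulas and sample layout should be followed in practice:

```python
import numpy as np

# Toy scores: scores[k, i, j] = expert k's score for supplier i on attribute j
# (numbers are made up; shapes: m experts, s suppliers, l attributes).
m, s, l = 5, 4, 5
rng = np.random.default_rng(1)
scores = rng.uniform(0.5, 1.0, size=(m, s, l))

# Group view: normalize each (supplier, attribute) entry across the m experts.
group_norm = scores / scores.sum(axis=0, keepdims=True)

# Individual view: normalize each expert's own scores across their s*l entries.
indiv_norm = scores / scores.sum(axis=(1, 2), keepdims=True)

# One plausible 2l-dimensional sample per expert and supplier: the group-normalized
# and individually normalized attribute scores concatenated side by side.
samples = np.concatenate([group_norm, indiv_norm], axis=2)  # shape (m, s, 2*l)
print(samples.shape)  # (5, 4, 10)
```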

4. Experimental Analysis

4.1. Samples Source and Samples Set Distribution

The experimental sample data in this study are derived from the evaluation records of evaluation experts who are often involved in supplier bidding activities. We extracted 450 groups of evaluation data that met the criteria as experimental samples, and all the original evaluation data were normalized by group and by individual with the method described in Section 3 to form a valid dataset. The numbers of positive and negative samples were kept balanced when selecting the experimental samples in order to improve the accuracy of the SVM classifier; 80% of the total sample data were randomly selected to form the training set used to learn and optimize the parameters of the classifier, and the remaining 20% of the samples composed the test set used to assess the accuracy of the classifier.

4.2. Training a SVM Classifier

The experiments in this study were based on a popular SVM software package, LibSVM [45]. Since the LibSVM package has its own format requirements for input data, the training data and validation data mentioned above are first converted to the format required by the "svmtrain" and "svmpredict" functions. The grid.py tool of LibSVM with 10-fold cross-validation is used to find the optimal values of the penalty factor $C$ and the kernel parameter $\gamma$ of the RBF kernel function. When the model performance is the same, the parameter combination with a smaller penalty factor is preferred in order to reduce the computation time. Eventually, the optimal combination of $C$ and $\gamma$ is selected. The SVM classifier is trained with the "svmtrain" function, and the "svmpredict" function is then applied to the validation samples to evaluate the classifier model; the classification accuracy is 96.67%.
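A minimal usage sketch of the LibSVM Python bindings (the libsvm package's svmutil module) for this training and validation step; the data are random placeholders, and the C and gamma values shown are illustrative rather than the ones selected by grid.py in the paper:

```python
from libsvm.svmutil import svm_train, svm_predict
import random

random.seed(0)

# Placeholder data in LibSVM's Python format: labels are +1/-1, each sample is a
# list of feature values (here 10 features per sample).
y = [random.choice([+1, -1]) for _ in range(450)]
x = [[random.random() for _ in range(10)] for _ in range(450)]
y_train, x_train = y[:360], x[:360]
y_valid, x_valid = y[360:], x[360:]

# '-s 0' selects C-SVC and '-t 2' the RBF kernel; C and gamma here are placeholders,
# whereas the paper chooses them with grid.py and 10-fold cross-validation.
model = svm_train(y_train, x_train, "-s 0 -t 2 -c 4 -g 0.25")

# Accuracy on the held-out validation samples.
labels, acc, vals = svm_predict(y_valid, x_valid, model)
```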

4.3. Calculating Evaluation Credibility and Weight of Evaluation Expert

Taking the five evaluation experts in a procurement of goods as an example, all the evaluation records of these five experts in their last 10 procurement evaluation activities are collected; if an expert's past evaluation data are insufficient, the evaluation credibility of that expert is assigned the average value. The extracted raw evaluation data are preprocessed, converted to the format required by the LibSVM package, and then fed into the SVM classifier. The evaluation credibility of each expert is calculated using formula (15) based on the output of the SVM classifier, and the results are shown in Table 4.

Substituting each expert's evaluation credibility into formula (16) gives the normalized weight of each expert.

4.4. Determination of the Supplier Selection Results

Set the "expert-supplier" evaluation matrix to $E = (e_{ki})_{m \times s}$, where $e_{ki}$ is the evaluation value that the $k$th evaluation expert gives to the $i$th supplier.

Based on the original evaluation values given to the 4 suppliers by the 5 evaluation experts, with the initial weights of the attributes removed, we obtain the 5 × 4 "expert-supplier" evaluation matrix E.

With this evaluation matrix E and the normalized weights $\omega_k$ of the 5 evaluation experts obtained earlier, all experts' evaluation results are combined as

$$z_i = \sum_{k=1}^{5} \omega_k e_{ki}, \quad i = 1, 2, 3, 4,$$

which gives an overall evaluation result $z_i$ for each supplier.
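A minimal sketch of this weighted assembly step; the weights and the "expert-supplier" matrix below are placeholders, not the values from the paper:

```python
import numpy as np

# Normalized expert weights omega_k (placeholder values summing to 1).
weights = np.array([0.20, 0.22, 0.18, 0.19, 0.21])

# Placeholder "expert-supplier" matrix E: rows = 5 experts, columns = 4 suppliers.
E = np.array([
    [62.0, 55.0, 60.0, 50.0],
    [65.0, 54.0, 63.0, 52.0],
    [60.0, 53.0, 61.0, 51.0],
    [63.0, 56.0, 62.0, 49.0],
    [64.0, 52.0, 59.0, 53.0],
])

# z_i = sum_k omega_k * e_{ki}: weighted combination of the experts' scores.
z = weights @ E
ranking = np.argsort(-z)  # supplier indices ordered from best to worst
print(z, [f"P{i + 1}" for i in ranking])
```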

Therefore, in this supplier evaluation and selection process, the suppliers are ranked in preference order according to their combined evaluation results $z_i$.

4.5. Discussion of Weight Selection in Supplier Selection Management

When the weights of evaluation experts are determined from the experts' credibility, different weighting formulas produce different expert weight coefficients. For example, an alternative evaluation expert weight can be generated from the same credibility values using formula (23), with the intermediate quantity it relies on defined in formula (24).

Substituting the experts' credibility values in Table 4 into formulas (24) and (23) yields the following weights of the evaluation experts: (0.2118, 0.2634, 0.1410, 0.1741, 0.2097). The comparison of the weight coefficients obtained from the two different weighting formulas for the 5 experts is shown in Table 5.

The two sets of evaluation expert weight coefficients generated by formulas (16) and (23) differ in value, but their trends are the same; that is, both sets of weight coefficients are linearly related to the experts' evaluation credibility. Further analysis shows that, while the evaluation credibility stays the same, different formulas yield expert weight coefficients with different degrees of dispersion. This confirms that, when using evaluation credibility to determine the experts' weights, the degree of dispersion of the expert weights can be adjusted according to the specific needs of an enterprise's procurement projects. In addition, the comparison between this method and the mean-weight method is shown in Figure 4. The expert weight coefficients obtained on the basis of the experts' evaluation credibility all share the same trend of change, which also verifies the scientific soundness and generality of our method.

A sensitivity analysis of the method shows that there is no significant difference in the experts' evaluation credibility whether the experts' evaluation data are collected from their last 10 or their last 50 procurement evaluation activities, which verifies that the obtained evaluation credibility is stable. However, the SVM classifier is sensitive to the choice of the kernel function and its parameters; in this research, the kernel function and its parameters were selected based on our experience.

5. Conclusions

With the help of artificial intelligence tools, this paper explores a novel method that uses the evaluation credibility of experts to determine the weights of the experts' evaluations, so as to effectively assemble the evaluation results and optimize supplier selection. This study not only demonstrates the feasibility of using an SVM classification model to classify the experts' past evaluation data but also verifies that the expert weights determined on the basis of evaluation credibility are generally applicable. In enterprise supplier selection practice, the method can be used to adjust the evaluation weights of different experts according to the specific needs of procurement projects, or to adjust the weight assigned to the same expert across different procurement projects. Certainly, there are still some limitations in our research: if the sample data are large, training the SVM classifier will take more time. In addition, the performance of the SVM classifier depends mainly on the selection of the kernel function; at present, the kernel function and its parameters are selected manually, and there is no better approach than relying on experience. In the future, we will endeavor to find more appropriate kernels and parameters to improve the SVM classifier model and to explore more effective ways to integrate the experts' evaluation results.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request. The questionnaire data were acquired mainly through e-mail and paper questionnaires.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Social Science Planning Project of Shandong Province, China (Grant no. 20CLYJ41), and the Quality of Postgraduate Education Upgrading Project of Shandong Province, China (Grant no. SDYJG19117).