Abstract

Electric-mechanical equipment manufacturing industries focus on the implementation of intelligent manufacturing systems in order to enhance customer services for highly customized machines with high-profit margins such as electric power transformers. Intelligent manufacturing consists of using integrated supply chain data for smart decision making throughout the production life cycle. This research, in cooperation with a large electric power transformer manufacturer, provides an overview of critical intelligent manufacturing (IM) technologies. An ontology schema forms the terminology relationships needed to build two intelligent supply chain management (SCM) modules for the IM system demonstration. The two core modules proposed in this research are the intelligent supplier selection and component ordering module and the product quality prediction module. The intelligent supplier selection and component ordering module dispatches orders to the best-matching suppliers based on combined analytic hierarchy process (AHP) analysis and multiobjective integer optimization. In the case study, the intelligent supplier selection and component ordering module generates several acceptable Pareto solutions under strict constraints, a very challenging task for decision makers without assistance. The second module is the product quality prediction module, which uses multivariate regression and ARIMA to predict the quality of the finished products. Results show that the R-squared values are very close to 1. The module shortens the time for the company to accurately judge whether the two semifinished iron cores for the product meet the quality requirements. The component supplier selection module and the finished product quality prediction module developed in this research can be extended to other IM systems for general high-end equipment manufacturers using mass customization.

1. Introduction

The global manufacturing industry is moving towards Industry 4.0, a program for smart manufacturing that affects the way people, machines, communication, data management, supply chains, and other external players in the marketplace and standards environment work and perform. Technological developments in the areas of artificial intelligence, digital transformation, and cloud platforms are bringing new innovations to individuals and companies [1]. Enterprises are facing competitive pressures such as a changing labor force, diversity of product demands, and rising material costs. One way to reduce costs, improve quality, and increase efficiency and productivity is to introduce Industry 4.0 to integrate physical and cyber components for digitalization, knowledge management, and automation of manufacturing systems.

Many companies and organizations have transformed their paper-based systems to digital-based systems for exchanging details quickly between supply chain partners through cloud computing and Internet of Things. The Industry 4.0 approach enables the supply chain to react quickly to unpredictable events and enhances collaboration with members to solve new problems. These partnerships and working relationships increase productivity and efficiency by sharing data and information technology across the supply chain [2].

The design of an intelligent manufacturing (IM) system is driven by customized requirements for production capacity, facility layout, and level of automation control. The IM design is carried out at three levels. The first level is the static physical configuration, such as equipment planning and production line layout. The second level includes dynamic operations such as equipment motion and work-in-process (WIP). The third level is related to workshop logistics and production linked to field control networks, sensor layouts, and manufacturing execution system (MES) [3]. Increasing manufacturing competition is driving change to improve process efficiency, reduce production costs, and achieve optimized production and delivery times to satisfy customers’ diverse demands. The customers’ demands include smaller quantities, greater variety, and more customized goods.

This research was conducted in cooperation with a large electric power transformer manufacturer. At present, high-end transformer manufacturers face two critical problems. First, due to the highly customized requirements of high-end products, it is necessary to ensure high-quality requirements for the entire life cycle of raw material procurement, the production process, and finished product shipment. Customer requirements must be met and the full-cycle product information (from raw materials to finished products and disposal) must be traceable. Second, the manufacturing process must be stable to avoid variations in product quality that affect customer satisfaction. The customers’ needs are very diverse, the products are highly customized, and different customers have different requirements for the transformer design and the components selected. When there are changes in market demand or shortened delivery times to consider, the manufacturer must improve production efficiency, stabilize the quality of finished products, and quickly respond to market-driven needs. Industry 4.0 helps companies sustain competitive advantage by adopting mass customization, greater collaboration, and digital transformation. The production process must be stable and continuously studied to improve product yields, especially after any incident that leads to customer dissatisfaction. This project uses IoT sensors to combine manufacturing execution systems (MES) and supply chain management (SCM) to build two intelligent decision modules required for the back end of the smart manufacturing system.

This research uses a large database of manufacturing process inspection data, production parameters, and production equipment monitoring data; integrates machine learning algorithms and analysis methods to realize the intelligent ordering of parts; and predicts the quality of semifinished products by analyzing key process information. These two modules enable the intelligent manufacturing system to provide accurate supplier selection and a real-time procurement portfolio. The collaborating research company for this manuscript currently relies on the subjective judgment (rule of thumb) of its purchasing department to select the most suitable suppliers and dispatch orders. Secondly, the key parameter values of semifinished products (components) and the corresponding finished product quality inspection results from a large amount of historical data are used to construct the product quality prediction model. The model uses multivariate regression and the ARIMA model to facilitate real-time product quality prediction and to prevent components of abnormal quality from flowing into downstream processes, which would waste production capacity and increase the output of defective products.

The remainder of this paper is organized as follows (the structure is illustrated in Figure 1). The literature review of the methods used in this paper continues in Section 2. Current studies in the area of smart manufacturing are presented in Section 3. Section 4 presents the proposed research methodology. Section 5 provides a case study using a client’s data to evaluate the performance of the proposed model. Finally, Section 6 concludes the paper with a discussion of the results and future research.

2. Literature Review

Supplier selection is an important issue for supply chain management. Previous research shows that many methodologies have been proposed to solve the supplier selection problem. Durmić et al. [4] proposed a combined FUCOM (FUll COnsistency Method)–Rough SAW model to examine the problem of sustainable evaluation of supplier performance and selection. Nunić [5] proposed a FUCOM-MABAC (Multi-Attributive Border Approximation Area Comparison) model for evaluating and selecting the PVC manufacturer. Among these previous research studies, the analytic hierarchy process (AHP) remains one of the most popular methods. AHP is often applied with other methods to construct a hybrid model for supplier selection, such as combining AHP with logistic regression, a classification regression tree, and a neural network [6]. AHP has also been combined with multi-expression programming (MEP) for supplier performance evaluation [7] and with quality function deployment (QFD) to produce satisfactory order preferences for supplier selection [8]. In this paper, supplier selection is improved by using an intelligent decision module built on a hybrid method combining AHP and a multiobjective integer programming (MOIP) model. The more complex and competitive an industrial environment, the more likely that the life cycle manufacturing process will include customer-oriented sales and production. Quality management is the basis for the prevention of defective products, for sustainable business operations, and for building the company’s brand equity through customer trust. For this research, historical data, including key parameter values of components and semifinished products and a large amount of data corresponding to the quality inspections of finished products, are stored in knowledge management systems. Multivariate regression and the ARIMA method are used to train and test the quality prediction model of finished products. In order to build the intelligent decision modules proposed in this research, the literature review covers the analytic hierarchy process (AHP), reinforcement learning (RL), the multiobjective integer programming (MOIP) model, and the autoregressive integrated moving average (ARIMA) model.

2.1. Analytic Hierarchy Process (AHP)

AHP is a multiattribute decision-making (MADM) approach often used in problems with multiple evaluation criteria [9]. AHP selects the best solution, determines the order of priority, forecasts demand, and measures performance. Using a tree-like hierarchy, complicated problems are decomposed into smaller elements. The domain experts assign relative weight values to each element to establish a pairwise comparison matrix. After calculation, the eigenvector and eigenvalue for each of the pairwise comparison matrices are obtained. The result provides the basis for decision makers to make better decisions. Hierarchical construction is a very important part of AHP, and the design of the hierarchical structure impacts the problem outcome and solution. The structure of the hierarchy is arranged in order of decreasing complexity. The elements of the upper layer enumerate the ultimate goal(s), and the elements of the lower layers branch out from their parent elements to describe the subelements in the hierarchy. The number of construction levels depends on the complexity of the problem and the needs of the decision makers.

Compared with other decision analysis methods, AHP helps decision makers systematically consider many aspects of the problem using a structured hierarchical system. The establishment of the hierarchical structure is flexible and can be adjusted to fit different problems. AHP establishes a pairwise comparison matrix by setting the relative weight value between each element using the judgment of domain experts. The method can be applied to decision-making problems that contain both qualitative and quantitative criteria. The following steps introduce the AHP method:

Step 1. Establish the problem hierarchy and list the elements. For the problem to be solved, list the criteria to be evaluated. By establishing a hierarchical structure, these criteria are classified into several levels from top to bottom. The uppermost layer belongs to the target layer, which is usually the ultimate goal of the problem. Then, the criteria to be evaluated are branched to the lower layers according to the target. When there are too many criteria (usually no more than 7 items are recommended), continue to branch out to establish additional subcriteria layers.

Step 2. Build the pairwise comparison matrix. Using the hierarchical structure established in the previous step, for all levels below the second level, all elements belonging to the same level will be analyzed using pairwise comparison. A comparison scale from 1 to 9 is used to compare the elements at the same level and construct the pairwise comparison matrix. The numerical definition of each comparison scale is shown in Table 1.

Step 3. Calculate the priority vector and maximum eigenvalue. In this step, calculate the eigenvector (the priority vector after the eigenvector is normalized) and the maximum eigenvalue for each matrix. The eigenvector and the maximum eigenvalue are obtained using the following formula:

$A w = \lambda_{\max} w$,

where $A$ is the pairwise comparison matrix, $w$ is the eigenvector, and $\lambda_{\max}$ is the maximum eigenvalue. After obtaining the eigenvector and the maximum eigenvalue, the consistency test of each eigenvector is performed. To perform a consistency test, first calculate the consistency index (CI) and then the consistency ratio (CR):

$CI = \dfrac{\lambda_{\max} - n}{n - 1}, \qquad CR = \dfrac{CI}{RI}$,

where $n$ is the order of the pairwise comparison matrix. The random index (RI) used for calculating the CR is related to the order of the pairwise comparison matrix. The order and its corresponding random index values are shown in Table 2. Saaty [9] considers the consistency test passed if CR < 0.1. Otherwise, the AHP hierarchical structure should be redesigned. After the overall hierarchical structure of the problem passes the consistency test, the priority vector of each criterion is obtained and the decision maker can make a decision.
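To illustrate Step 3, the following is a minimal sketch in Python of computing the priority vector, $\lambda_{\max}$, CI, and CR for a pairwise comparison matrix; the matrix values are hypothetical and do not come from the case company’s QCDS data.

```python
# A sketch of Step 3 with NumPy: priority vector, lambda_max, CI, and CR for a
# pairwise comparison matrix. The matrix below is hypothetical, not the company's data.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,   # Saaty's random index by matrix order
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priority(A):
    """Return (priority vector, lambda_max, CI, CR) for pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                    # index of the principal eigenvalue
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                # normalize to obtain the priority vector
    ci = (lam_max - n) / (n - 1)                   # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0          # consistency ratio; CR < 0.1 passes
    return w, lam_max, ci, cr

# Hypothetical comparison of four criteria (e.g., quality, cost, delivery, service).
A = [[1,   3,   5,   7],
     [1/3, 1,   3,   5],
     [1/5, 1/3, 1,   3],
     [1/7, 1/5, 1/3, 1]]
w, lam_max, ci, cr = ahp_priority(A)
print("priority vector:", np.round(w, 3), "lambda_max:", round(lam_max, 3), "CR:", round(cr, 3))
```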
AHP has been applied to decision support across many industries. Fashoto et al. [10] developed a decision support model using AHP and an artificial neural network to evaluate and select suppliers providing healthcare services to universities. Dweiri et al. [11] built a scalable and generalizable decision support system for supplier selection using AHP for an automotive company in Pakistan. Mondragon et al. [12] developed an approach which includes the identification of multiple factors affecting manufacturing technology selection with respect to the supply chain and the use of the principles of AHP and fuzzy AHP techniques. Zhang et al. [13] proposed an approach to select manufacturing suppliers in the B2BE-commerce environment where the attribute values in the decision matrix are expressed with linguistic terms, preference orderings, and interval numbers. Through these studies, AHP has gained sustained popularity in the supply chain management domain, especially for supplier selection. Therefore, in this study, we will apply a similar method for analyzing the supply chain and vendor analysis.

2.2. Reinforcement Learning (RL)

In our proposed model, we reference the concept of RL to establish the performance evaluation system. RL is a branch of machine learning, alongside supervised learning and unsupervised learning. The method uses interactive learning processes to enable the machine to learn from feedback generated by the environment. The core concept of reinforcement learning is to find the best decision through trial and error [14]. Figure 2 shows a reinforcement learning process that trains an intelligent agent to adapt to its environment.

Reinforcement learning does not require the user to access huge amounts of data in advance and learns from interaction with the environment. RL is suitable for applications in supply chain management and optimization where the environment is constantly changing. One research paper [15] proposed that two reinforcement learning agents be used to solve a supply chain optimization problem consisting of one factory and multiple warehouses. Another research paper [16] designed a dynamic online multicriterion supplier selection mechanism using RL methods. Reference [17] trained an RL agent for a retailer to make appropriate ordering and pricing decisions in a competitive environment. Finally, researchers [18] proposed a deep reinforcement learning model that automatically replenishes the drug prescription inventory to prevent drug shortages in a hospital supply chain. As shown in Figure 2, the intelligent agent takes an action according to the current state, obtains a reward from the environment by taking this action, and then decides the most suitable action based on the feedback. Ray et al. [19] propose a similar system using parallel Markov decision processes to collect, represent, and combine supplier performance information from past transactions; predict the supplier’s future behavior; and determine the winning supplier using a multiattribute reverse auction. In our proposed model, the methods of Ray et al. are adopted to create a new hybrid model.

2.3. Multiobjective Integer Programming (MOIP) Model

Multiobjective optimization is a branch of mathematical optimization first proposed by Johnsen et al. [20] in 1961. Multiobjective optimization problems are usually quite difficult since there are often contradictions between the objective functions. There are several algorithms to solve this type of problem, such as particle swarm optimization (PSO), differential evolution (DE), and genetic algorithms (GA). In this paper, we apply a genetic algorithm specifically designed for solving multiobjective optimization problems: the nondominated sorting genetic algorithm II (NSGA-II) is used as the basis for solving our multiobjective integer programming model.

Many studies in the domain of supply chain management have applied multiobjective optimization models. For example, Park et al. [21] proposed an integrated approach that consists of two phases to effectively reflect the multiple perspectives of a global supply chain designed for sustainability. Tirkolaee et al. [22] applied the Fuzzy Analytic Network Process (FANP) method to rank criteria and subcriteria using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) to identify the relationships between the main criteria. The fuzzy Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is applied to prioritize the suppliers. The obtained weights are imported into a tri-objective model designed to optimize the proposed supply chain. Almeida and Asada [23] proposed adding a static multiobjective model to the Distribution System Expansion Planning (DSEP) problem which identifies expansion plans that have compromises between their global cost and setup risk. The problem is then solved using NSGA-II.

Our research focuses on supplier selection which is related to several research papers which apply multiobjective optimization models. Mahdi and Mohammad [24] formulate a dynamic virtual cellular manufacturing plan using a mathematical model. To solve the problems of the proposed model, a hybrid genetic algorithm is used in their research. Che et al. [25] evaluated the supplier selection problem encountered when using multiple assembly plants with production capacity constraints to produce multiple products. A multiobjective optimization mathematical model was constructed and a modified multiobjective algorithm was used to solve the optimization model. Hamdan and Jarndal [26] rank the available suppliers based on selected green criteria by decision makers using the analytic hierarchy process (AHP). Then, the genetic algorithm (GA) is used to find the optimal solution for the multiobjective integer linear programming model.

These literature reviews show that multiobjective programming models continue to be used in supply chain management and are relied upon for decision support.

2.4. Autoregressive Integrated Moving Average (ARIMA)

The ARIMA model is used in this research to predict finished product quality from historical process data. When fitting an ARIMA model, it is necessary to select the most appropriate lag order to measure the goodness of fit, and the following two criteria are commonly used:

Akaike’s information criterion (AIC):

$\mathrm{AIC} = N \ln\left(\dfrac{\mathrm{SSE}}{N}\right) + 2k$

Bayesian information criterion (BIC):

$\mathrm{BIC} = N \ln\left(\dfrac{\mathrm{SSE}}{N}\right) + k \ln(N)$

where k represents the number of parameters to be estimated in the model, N is the total number of samples, and SSE is the residual sum of squares. The model with the smaller AIC or BIC value is preferred. Small sample sizes use AIC to evaluate model fit, while large sample sizes use BIC to evaluate model fit.
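The following is a minimal sketch, assuming the SSE-based forms of AIC and BIC given above, of how the two criteria can be computed and compared for candidate lag structures; the numbers are illustrative placeholders.

```python
# Sketch of the lag-selection criteria, assuming AIC = N*ln(SSE/N) + 2k and
# BIC = N*ln(SSE/N) + k*ln(N) as defined above. All inputs are illustrative.
import numpy as np

def aic_bic(sse: float, n_samples: int, n_params: int):
    aic = n_samples * np.log(sse / n_samples) + 2 * n_params
    bic = n_samples * np.log(sse / n_samples) + n_params * np.log(n_samples)
    return aic, bic

# Compare two hypothetical candidate models; smaller values indicate a better fit.
print(aic_bic(sse=12.4, n_samples=625, n_params=3))
print(aic_bic(sse=12.1, n_samples=625, n_params=5))
```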

Most scholars use ARIMA in forecasting research since this method is quite accurate and effective. For example, Sun [27] applied multiple historical ARIMA models constructed with publicly available COVID-19 data in Alberta, Canada. The means of the data and the 95% confidence intervals of the differences between the forecasted values and the actual values were computed. Jamil [28] used ARIMA modeling to evaluate the hydroelectricity generation plans of the Government of Pakistan and predict the amount of electricity generated up to the year 2030. The results were compared with the actual amount of generated electricity for effectiveness, which showed a good fit with minimum deviation. Calvello [29] proposed a method that combines statistical machine learning algorithms and ARIMA for forecasting water levels. The literature shows that training ARIMA on historical data can achieve high accuracy. This research builds on work in these related fields to improve the accuracy of predicting the quality of finished products.

3. Smart Manufacturing Ontology and Key Technologies

In the field of mechanical and electrical engineering manufacturing, industrial customers have diverse needs, and most large equipment products are highly customized. With global changes in market demand and shortened product delivery time, the means to improve production efficiency, stabilize finished product quality, and control manufacturing cost using digital transformation to meet market demand is a significant challenge. High-end electrical power transformer manufacturers are faced with the problem of ensuring high quality throughout the life cycle of raw material procurement, the production process, and the finished product shipment, as well as ensuring post-process stability to avoid a decline in product yields that affect customer satisfaction. The following is an in-depth introduction to the background and technologies related to this study, namely, the supply chain management (SCM), manufacturing execution systems (MES), and Internet of Things (IoT) used for digital transformation.

3.1. Supply Chain Management of Electromechanical Equipment Manufacturing

Electromechanical engineering manufacturers often rely on a vertically integrated manufacturing business model which no longer meets the competitive needs of modern global supply chains and electronic marketplaces. Manufacturers are working towards the digital integration of global suppliers. The enterprise should use advanced information technology (IT) to strengthen supply chain management (SCM), quickly obtain accurate and real-time information from upstream suppliers and downstream customers, and then optimize the resources needed to meet the product and service needs of the target customers. This approach substantially increases the company’s revenue and enables efficient and rapid transmission of information between suppliers and customers. The information provides the data needed for more precise, real-time planning and control, which in turn enables the simultaneous assembly of a large number of customized products, the dispatch of orders to reliable suppliers, and product designs based on the best practices unique to the manufacturer. The domain ontology for supply chain management is shown in Figure 3. Supply chain management (SCM) covers the process from acquiring raw materials to producing finished products and ultimately selling them to customers; it links all suppliers with the value chain of the company’s internal and external product production and service provision and effectively manages inventory, total demand, net demand, quality, and delivery time. Supplier management refers to the integration of the company’s overall supply chain, including manufacturers, logistics companies, and retailers. The processes include ordering raw materials, designing manufacturing processes, and finally delivering the product to the client. A critical objective is to select suppliers based on quality, cost, delivery time, and service. In compliance with the company’s ISO requirements, a further supplier evaluation and selection mechanism is needed [16]. With the improvement of customer service standards, many factors affect the selection of suppliers. In addition to price, the typical supplier selection mechanism also considers delivery time, product quality, and service. The weighted average rule is often used to weight and average each factor for selecting the supplier; the resulting variables are in turn used to construct the supplier choice and decision-making model. The item “quality” refers to the quality of materials provided by suppliers. “Price” refers to the cost of materials purchased by companies from suppliers. “Delivery performance” refers to the accuracy of the supplier’s delivery schedule. “Service” refers to the after-sales service and support provided by the supplier, and “flexibility” refers to the supplier’s ability to respond to changes in the company’s production plan.

The capabilities provided by suppliers for different key components differ. For example, consider suppliers A and B. Supplier A can provide products immediately with low prices and low quality, while supplier B has a slow delivery time but provides high-priced products of good quality. After applying the weighted average, their scores are the same or close. Given fierce competition among enterprises, the requirement for high-quality products motivates enterprises to pursue specific goals under a total cost model. Quality can enhance the competitiveness of enterprises and has a direct effect on market sustainability. Thus, there is a need to convert all quantifiable related factors, such as opportunity cost and degree of risk, into costs, and to use ratio scales and nominal scales to compare the elements in pairs and establish a matrix that reveals the important procedures and priorities of the hierarchical elements for evaluating and selecting the supplier.

There are many methods used for supplier selection in SCM. The following is an introduction to several methods cited in the literature. The Delphi method emphasizes the principle of anonymity to consider all participants’ opinions without disclosing their identities. Iteration is managed by a chairman who summarizes the group’s opinions and announces the results to the participants. In each round of controlled feedback, participants answer a predesigned questionnaire, the results are analyzed, the questionnaire is revised, and the process is repeated for several rounds to approach consensus. A statistical group response over time provides a comprehensive judgment after all the opinions are counted, and the expert consensus becomes the expert opinion [30]. The total cost approach uses a cost ratio calculation before selecting the suppliers for procurement decisions [31]. Mathematical programming is also applied [32]; it is often formulated as a multiobjective linear or nonlinear programming model with constraints for supplier selection. The model captures the decision-making conditions for selecting suppliers and ordering quantities in a multiobjective manner. The data envelopment analysis (DEA) model assumes n decision-making units (DMUs), where each DMU has multiple types of inputs and outputs, to promote the application of different strategies to different market segments [33]. The fuzzy AHP method [34] can be used to support supplier selection decisions when there is uncertainty and ambiguity in the information provided. The analytic hierarchy process (AHP) is a multiobjective decision analysis method that combines qualitative and quantitative data for reaching a conclusion. The method is practical and effective and has been widely applied in industry [35]. It can incorporate factors that cannot be directly quantified. The AHP method is widely used in the selection of the priority of orders and the selection of proposed enterprise plans. The model is suitable for the selection of long-term partners, as reported in [36, 37].

Each of the selection methods mentioned above has limitations. The intuitive judgment method based on past knowledge and experience is subjective, whereas the Delphi method obtains relatively objective information and opinions through the independent repeated judgments of multiple experts. The total cost approach cannot make accurate judgments if accurate cost data cannot be obtained. Mathematical programming cannot process qualitative data and may not be able to obtain an optimal solution. Data envelopment analysis is an input-output model that requires access to a supplier’s internal data to establish a selection model. Fuzzy theory requires each expert to complete a separate fuzzy questionnaire, which is time-consuming and often yields confusing answers. The hierarchical analysis method combines qualitative and quantitative multiobjective decision making. Given an uncertain environment, this method structures the decision makers’ experience, intuition, and observations as qualitative factors in a quantitative framework. Hierarchical analysis uses the eigenvector method to calculate the weights between factors to make a group decision. The approach is better suited to providing unbiased solutions than traditional economic evaluation models.

3.2. Manufacturing Execution System (MES)

Enterprises are facing competitive pressures from labor structure change, climate change, global competition, diversity of product demands, and rising material costs, as well as global trade disputes and sanctions. One way to reduce costs, improve quality, and increase efficiency and productivity is to promote Industry 4.0. Through digitization, AI, and automation, IoT technology digitally connects the overall value chain (transparent sourcing of physical equipment, communications, collaborative development, and immediate bills of materials). Cloud information and application systems can be integrated to create smart manufacturing systems within a reasonable budget. Competition among global companies is particularly fierce. Customization of small-quantity, high-value products requires collaboration between buyers and sellers. Multiple products require improved process efficiency and reduced production costs to retain a competitive advantage and maintain a high level of customer service to build brand loyalty. For the traditional manufacturing industry, the most intuitive way to improve productivity is to reduce labor costs, purchase new equipment, or increase production. With clear insight into the status of ongoing manufacturing processes, the machines, manpower, and production volume impacting the supply chain can be improved to meet changing customer order requirements. The establishment of a manufacturing execution system (MES) lays the foundation for smart manufacturing through the use of information transmission, as shown in Figure 4. The MES is constructed to analyze operator and production management, quality management, machine and equipment production rate, human resource management, material loss status, and work-in-process quality so that managers can make human resource transfer decisions and effectively balance production capacity to achieve production performance and goals [38]. Given the above definition of manufacturing systems, combined with the ontology presented in Figure 5, production management, quality management, machine and equipment production rate, and human resource (HR) management will serve as the factors to be optimized to improve performance and timely delivery.

The research purpose of Widjajanto et al. [39] is to put forward the concept of poka-yoke (mistake proofing), which traces the causes of quality errors from manufacturing processes and reduces variability by analyzing the root causes related to human errors. The equipment design and the working environment affect reliability, and understanding the causes greatly reduces various errors. Manufacturing processes start from the time a customer places an order, requiring the system to collect management information through Enterprise Resource Planning (ERP) systems, supply management, production history tracking, equipment production rate analysis, and machine maintenance. These information sources enable the managers to better plan material demand management, machine operations, and real-time quality management information to ensure customer satisfaction. For different requirements in different manufacturing industries, twelve MES key functions are defined [40], namely, resource allocation and control; scheduling production; data collection and acquisition; quality management; production process management; material batch management and production traceability; performance analysis; operation and detailed planning; document management; HR management; maintenance management; and material transportation, storage, and tracking.

3.3. Internet of Things (IoT)

The Internet of Things (IoT) connects things to the Internet as a broad extension of network utilization by adding computing power to objects, devices, and sensors that are not normally considered computers [41]. IoT is the technology that empowers devices to read, compute, and transmit information. Objects can collect information through sensors, analyze or execute commands based on the information collected, and send information to other devices through the Internet. In Figure 6, a domain ontology for IoT-related technologies is defined by three categories: sensors, cloud computing, and network connectivity. Through integrated IoT technologies, machine-to-machine connectivity and linking to other systems and applications effectively support intelligent decision making across various industrial channels and manufacturing processes. For example, the sensor is an important category of IoT technology. The internal and external information of an object can be read through the sensor device. The key sensing technologies include radio frequency identification (RFID) and wireless sensor networks (WSNs) [42]. RFID reads and recognizes tags attached to objects (machines or devices) through the electromagnetic field of radio-frequency waves and then transmits the information to applications for decision support. Most tags are not equipped with a power source since these tags obtain energy from the electromagnetic field generated by the radio waves emitted by the reader. However, some radio frequency tags have their own power sources and can actively emit electromagnetic waves to transmit information. Such tags can be read over a longer distance. A WSN is a wireless communication network composed of sensors scattered in space. Each WSN sensor is equipped with a radio transceiver, a microcontroller, and a power supply. Wireless sensor network technology has been used in various fields such as image recognition, traffic control, health monitoring, and industrial production monitoring.

IoT technology collects data using sensors and can process and analyze these data. With the continuous advancement of technology, the amount of data processed by IoT technology is increasing. To process huge amounts of raw data, cloud computing has become one of the key technologies of the IoT. Cloud computing technology is a shared resource that provides computing resources, data access, and different forms of services through the Internet [43]. Data computing technology, data access technology, and services constitute cloud computing technology. Research by [44] identified two data computing techniques. The first is parallel computing: by dividing data among multiple servers for separate computation, efficiency is improved when processing large amounts of data. Traditional data mining methods are not suitable for analyzing the vast amounts of data collected by the IoT since these methods are usually designed to run on a single computer. Many scholars have designed parallel machine learning and data mining methods to increase the computational performance of the algorithms. However, [44] believes that for such technologies to efficiently extract useful information from huge amounts of data, more research is needed. The research mentioned two data storage methods used in cloud computing: NoSQL and the distributed file system. NoSQL is different from a relational database. A relational database stores data in a table format and there is a clear relationship between the data. NoSQL, however, does not require data to be correlated and is suitable for processing large amounts of data or rapidly changing data structures with greater flexibility. NoSQL databases can be subdivided into five types: key-value databases, document databases, graph databases, in-memory databases, and search databases. The distributed file system is a file system that allows files to be shared on multiple devices through the network. Users in the same system have access rights to all files. Such a data storage system effectively solves the problem of limited storage space on a single computer. Many cloud services are based on this technology, such as Dropbox, iCloud, and Google Drive.

In addition to computing technology and data access technology, cloud computing services are an important part of the three elements of cloud computing. The service forms of cloud computing can be divided into three types: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). IaaS is provided by service providers in the form of hardware facilities such as user networks, servers, and data storage space. Users do not need to understand the architecture behind cloud computing to obtain computing resources, and they pay the service provider fees according to their contract and use of the computing resources. PaaS providers supply users not only with the cloud computing infrastructure but also with the application and development environment. The target customers for this type of service are mainly program developers. By contracting this type of service, programmers can develop new applications without having to build and manage the development environment or the expensive servers, equipment, and software required for development. SaaS delivers software to users in the form of a service. The advantage is that users do not need to worry about the hardware facilities, the development environment, the subsequent maintenance of the software, or even the software program itself; everything is developed and provided by the supplier.

Internet of Things technology has given objects the ability to collect information and autonomously process and analyze data. The ability to transmit the processed information to other devices is one of the primary advantages of this technology. With the vigorous development of today’s 5G communication technology, low-power wide-area network (LPWAN) technologies such as SigFox, LoRa, and narrowband IoT, together with short-range technologies such as Wi-Fi and ZigBee, are well suited to IoT network connectivity [45].

4. Intelligent SCM Modules

This section presents the proposed methodologies for both modules of the intelligent SCM, i.e., the supplier selection model and the end-product quality prediction.

4.1. Intelligent Supplier Selection Decision Module

The priority of SCM is to follow the dynamics of a large number of customized product orders, instantly select the most suitable component suppliers, and dispatch the proper number of orders to the priority suppliers. The collaborating research company for this manuscript currently relies on the subjective judgment (rule of thumb) of its purchasing department to select the most suitable supplier and dispatch orders. This research helps the company formulate objective judgments using a digital data system when selecting a component supplier. The system establishes an intelligent knowledge module to define a component supplier ordering system and reduces the influence of human subjective factors on supply chain management. The workflow of the intelligent supplier selection and order dispatch module is shown in Figure 7. The intelligent supplier selection decision module consists of two models: the QCDS-data driven AHP model and the multiobjective integer programming model. These two models are introduced in the following subsections.

4.1.1. QCDS-Data Driven AHP Model

Every year our partner company generates a QCDS score for each of its component suppliers, evaluating their performance in terms of quality, cost, delivery, and service. Since this QCDS score is only updated once per year, the QCDS-data driven AHP model is also updated once per year in order to generate the weights required for the multiobjective integer programming model. The QCDS-data driven AHP model works as follows. First, the data from the company’s internal QCDS scoring system are imported to establish a pairwise comparison matrix [46]. The analytic hierarchy process method is then applied to establish the priority vector weight of each criterion and component supplier.

The component supplier intelligent dispatch system covers a total of four different components: transformer casings, copper wire, silicon steel sheets, and power distribution enclosures. For each component, the hierarchical structure established by the AHP method is different. For the case demonstration, the selection of transformer casing suppliers is used as an example to demonstrate how the system selects the most suitable suppliers and dispatches the proper number of orders using the AHP method and the integer programming model. The hierarchical structure for the transformer casings is shown in Figure 9.

4.1.2. Multiobjective Integer Programming Model

After executing the QCDS-data driven AHP model, the weights for each of the four criteria are collected and imported into the multiobjective integer programming model. The model consists of three objective functions and five constraints. The decision variable, parameters, and mathematical formulation of the proposed model are explained below.

A. Decision variable

$x_{ijt}$: number of orders of component type j dispatched to supplier i at time t.

B. Parameters

$w_c$: weights of the criteria obtained from AHP,
$p_{ic}$: performance parameter of criterion c for supplier i,
$K_i$: capacity of supplier i,
$D_{jt}$: demand for component type j at time t,
$O_{ijt}$: on-hand order of component type j for supplier i at time t,
$C_{ijt}$: cost for supplier i to manufacture component type j at time t,
$L_{ijt}$: lead time of component type j for supplier i at time t,
$I$: set of chosen suppliers i,
$N$: number of suppliers.

C. Formulation

$\max \; f_1 = \sum_{i}\sum_{j}\sum_{c} w_c \, p_{ic} \, x_{ijt}$ (7)

$\min \; f_2 = \sum_{i}\sum_{j} C_{ijt} \, x_{ijt}$ (8)

$\min \; f_3 = \sum_{i}\sum_{j} L_{ijt} \, x_{ijt}$ (9)

subject to

$\sum_{j} x_{ijt} \le K_i \quad \forall i$ (10)

orders are preferentially dispatched to suppliers who have not received an order for more than six consecutive months (11)

$\sum_{i} x_{ijt} \ge D_{jt} \quad \forall j$ (12)

$x_{ijt} = 0$ if supplier $i$ cannot manufacture component type $j$ (13)

$x_{ijt} \in \mathbb{Z}_{\ge 0} \quad \forall i, j, t$ (14)

Objective function (7) maximizes the total performance of the dispatched orders, objective function (8) minimizes the cost of the orders, and objective function (9) minimizes the lead time of the orders. Constraint (10) regulates the number of dispatched orders so that the orders do not exceed the supplier’s capacity. Constraint (11) is a rule set by the company to preferentially dispatch orders to suppliers who have not received an order for more than six consecutive months. Constraint (12) ensures that the demand is met, constraint (13) ensures that the suppliers selected can manufacture the product assigned, and constraint (14) ensures that the number of dispatched orders is an integer value.
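To make the model concrete, the following is a minimal sketch of how such a dispatch problem could be solved with NSGA-II using the pymoo library (assuming pymoo ≥ 0.6). All supplier data, weights, and bounds in the sketch are hypothetical placeholders, not the company’s actual figures, and the six-month priority rule (constraint (11)) is omitted for brevity.

```python
# A sketch of the dispatch MOIP solved with NSGA-II via pymoo (assumed >= 0.6).
# All numeric data below are hypothetical placeholders, not the company's figures;
# the six-month priority rule (constraint (11)) is omitted for brevity.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.operators.sampling.rnd import IntegerRandomSampling
from pymoo.operators.crossover.sbx import SBX
from pymoo.operators.mutation.pm import PM
from pymoo.operators.repair.rounding import RoundingRepair
from pymoo.optimize import minimize

N_SUP, N_TYPE = 4, 3                        # suppliers i and component types j
perf = np.array([0.30, 0.28, 0.22, 0.20])   # AHP-derived performance score per supplier
cap = np.array([40, 35, 30, 30])            # K_i: capacity of each supplier
dem = np.array([25, 22, 18])                # D_jt: demand per component type
rng = np.random.default_rng(0)
cost = rng.uniform(8, 12, (N_SUP, N_TYPE))  # C_ijt
lead = rng.uniform(10, 20, (N_SUP, N_TYPE)) # L_ijt
cost[0, 1] = cost[0, 2] = cost[1, 2] = cost[3, 0] = 1e10  # cannot-manufacture combinations
lead[cost > 1e9] = 1e10                     # same big-number trick as in the case study

class DispatchProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=N_SUP * N_TYPE, n_obj=3, n_ieq_constr=N_SUP + N_TYPE,
                         xl=0, xu=int(dem.max()), vtype=int)

    def _evaluate(self, x, out, *args, **kwargs):
        X = x.reshape(N_SUP, N_TYPE)
        f1 = -np.sum(perf[:, None] * X)     # maximize performance -> minimize its negative
        f2 = np.sum(cost * X)               # minimize total cost
        f3 = np.sum(lead * X)               # minimize total lead time
        g_cap = X.sum(axis=1) - cap         # capacity: feasible when <= 0
        g_dem = dem - X.sum(axis=0)         # demand coverage: feasible when <= 0
        out["F"] = [f1, f2, f3]
        out["G"] = np.concatenate([g_cap, g_dem])

algorithm = NSGA2(pop_size=100,
                  sampling=IntegerRandomSampling(),
                  crossover=SBX(prob=0.9, eta=15, vtype=float, repair=RoundingRepair()),
                  mutation=PM(eta=20, vtype=float, repair=RoundingRepair()),
                  eliminate_duplicates=True)
res = minimize(DispatchProblem(), algorithm, ("n_gen", 200), seed=42, verbose=False)

for x, f in zip(np.atleast_2d(res.X), np.atleast_2d(res.F)):
    print(x.reshape(N_SUP, N_TYPE), "objectives:", np.round(f, 2))
```

The result is a set of nondominated dispatch plans rather than a single optimum, mirroring the Pareto solutions reported in the case demonstration.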

After the order dispatch, the performance of the chosen supplier is evaluated by the performance evaluation system. The system evaluates suppliers using quality, cost, delivery, and service. Each of these dimensions will be graded on a 0∼1 scale, and these scores are imported into the multiobjective integer programming model as performance parameters.

The company selects the most suitable supplier and decides the most suitable number of orders to dispatch through the system. The performance of the dispatch order will be reported by the company and evaluated by the performance evaluation system as a reward to the trained system to help the system perform better in future supplier selections.

4.2. Finished Product Quality Prediction Module

Quality management is the basis of a sustainable business operation and is important for the reputation of the company and for maintaining the trust between customers and the company. Quality management prevents defective products from entering the production stage. This project helps the company establish a set of dynamic adjustments for the semifinished product to reduce quality abnormalities that affect the production processes, avoid building defective products, and improve production efficiency. The knowledge module for real-time prediction of finished product quality is provided using two prediction models, i.e., the multivariate statistical model and the ARIMA prediction model.

This study focuses on the transformer cores as the key indicator for the finished product quality prediction. The total number of original samples was 1,202. This study predicts the quality of finished products using 100% and 110% iron loss values of the cores. Since there are two cores in a transformer, the supplier will provide the core inspection values within a QR code when the cores are imported, and the operator will scan the QR code provided by the suppliers when using the cores for component assembly.

The original data has 1,202 samples, but for some of the data, the operator did not properly scan the cores during operation. The company’s professional managers recommended that data matching either of two scenarios should be deleted. First, if the sum of the iron loss values of the two semifinished products is higher than the quality loss value of the finished product, the data are invalid and omitted. Second, if the quality loss value of the finished product is greater than the sum of the quality loss values of the semifinished products by more than 5%, the data are also invalid. After cleaning the data, there are 625 samples left for the 100% iron cores and 548 samples left for the 110% iron cores.

4.2.1. Multiple Regression Model

Multiple regression analysis is used to investigate how multiple independent variables ($x_1, x_2, \ldots, x_n$) are related to a dependent variable (Y) [47]. In this work, the independent variables are the iron losses $x_1$ and $x_2$ of the two semifinished cores at 100% current and $x_3$ and $x_4$ of the two semifinished cores at 110% current. The dependent variables are $y$ for 100% current and $y_1$ for 110% current, giving two multivariate regression models:

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$

$y_1 = \beta_0' + \beta_3 x_3 + \beta_4 x_4 + \varepsilon'$

where $y$ (or $y_1$) is the predicted value, $\beta_i$ is the partial regression coefficient of the corresponding influencing factor, and $\varepsilon$ is the random error term.
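As an illustration, the following is a minimal sketch of fitting one of the two models with ordinary least squares using the statsmodels library; the data are synthetic placeholders, not the company’s inspection records.

```python
# A sketch of the 100%-current regression model fitted with statsmodels OLS; all data
# are synthetic placeholders for the iron-loss measurements of the two semifinished cores.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 625
x1 = rng.normal(50, 5, n)                       # iron loss of semifinished core 1 (hypothetical)
x2 = rng.normal(48, 5, n)                       # iron loss of semifinished core 2 (hypothetical)
y = 0.9 * x1 + 0.95 * x2 + rng.normal(0, 1, n)  # finished-product iron loss (synthetic)

X = sm.add_constant(np.column_stack([x1, x2]))  # adds the intercept term beta_0
model = sm.OLS(y, X).fit()
print(model.params)                             # beta_0, beta_1, beta_2
print("adjusted R^2:", round(model.rsquared_adj, 4))
# The 110%-current model (y1 regressed on x3 and x4) is fitted in the same way.
```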

4.2.2. ARIMA Model

If the original data is not a stable sequence, the series must be converted into a stable state. The autoregressive integrated moving average model known as the ARIMA (p, d, q) model can be modified for different time series, where p is the number of autoregressive terms, d is the number of differences before the series is stable, q is the number of moving average terms, and the most suitable variables are selected [48]. This study adjusts the ARIMA (p, d, q) prediction model steps as shown in Figure 8.
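The following is a minimal sketch of fitting an ARIMA(p, d, q) model with statsmodels on a synthetic series; the fixed order (1, 1, 1) is only illustrative, since in the case study the best (p, q) combination is searched automatically.

```python
# A sketch of fitting ARIMA(p, d, q) with statsmodels; the series is synthetic and the
# order (1, 1, 1) is only illustrative. In the case study, the best (p, q) combination
# is searched automatically (e.g., with a grid over candidate orders).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = 50 + np.cumsum(rng.normal(0, 0.2, 300))   # synthetic nonstationary quality series

fit = ARIMA(series, order=(1, 1, 1)).fit()          # p = 1 AR term, d = 1 difference, q = 1 MA term
print("AIC:", round(fit.aic, 2), "BIC:", round(fit.bic, 2))
print(fit.forecast(steps=5))                        # predict the next five quality values
```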

4.2.3. Predictive Evaluation Indicators

The unit root test is used to determine the order of integration of the time series variables, i.e., whether the time series has reached a stationary state that can be used for prediction studies. This study uses the ADF (Augmented Dickey–Fuller) test proposed by Dickey and Fuller [49] and the unit root test proposed by KPSS [50]. Since most time series exhibit autocorrelation and heterogeneous variation, this study uses the ADF test and the KPSS test to determine whether the series is stationary by checking the relevant variables against the past historical data.

(i) ADF test model. The original unit root test was proposed by Dickey and Fuller and focuses on checking the stationarity of an AR(1) sequence [51]. The ADF unit root test includes the following three models:

Model 1 (no intercept term, no trend): $\Delta y_t = \beta y_{t-1} + \sum_{i=1}^{p} \phi_i \Delta y_{t-i} + \varepsilon_t$

Model 2 (with intercept term, without trend): $\Delta y_t = \alpha + \beta y_{t-1} + \sum_{i=1}^{p} \phi_i \Delta y_{t-i} + \varepsilon_t$

Model 3 (with intercept term and trend): $\Delta y_t = \alpha + \delta t + \beta y_{t-1} + \sum_{i=1}^{p} \phi_i \Delta y_{t-i} + \varepsilon_t$

where $\Delta$ is the first-order difference, $\alpha$ is the intercept or drift term, $\beta$ is the autoregression coefficient, $t$ is the time trend term, $p$ denotes the number of lag periods of the autoregression, and $\varepsilon_t$ is the residual term, which obeys a white noise process.

(ii) KPSS validation model.

Kwiatkowski et al. [50] complement other unit root tests by taking stationarity of the variable as the null hypothesis and the presence of a unit root in the sequence as the alternative hypothesis. Assume that the variable is composed of a deterministic trend, a random walk, and stationary white noise:

$y_t = \xi t + r_t + \varepsilon_t$,

where $\varepsilon_t$ is a stationary process, $r_t$ is a random walk with $r_t = r_{t-1} + u_t$, and $u_t \sim iid(0, \sigma_u^2)$. Here, the null hypothesis is $H_0: \sigma_u^2 = 0$ (i.e., $r_t$ is a constant).

Under the alternative hypothesis $H_1: \sigma_u^2 > 0$, the KPSS test statistic is derived under the null hypothesis as

$\mathrm{KPSS} = \dfrac{1}{T^2}\sum_{t=1}^{T}\dfrac{S_t^2}{\hat{\sigma}^2}$,

where $S_t = \sum_{i=1}^{t} e_i$ is the cumulative sum of the residuals, $T$ is the number of observations, and $\hat{\sigma}^2$ is the long-run variance estimate of the residuals. The null hypothesis of KPSS is $H_0: \sigma_u^2 = 0$, which assumes that the series is stationary. If the null hypothesis cannot be rejected, the series is stationary and there is no unit root in the data. If the null hypothesis is rejected, the data are nonstationary and should be differenced until they become stationary.
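A minimal sketch of both stationarity checks using the statsmodels test functions follows; the series is synthetic and nonstationary by construction, so both tests should flag it before differencing.

```python
# A sketch of the ADF and KPSS stationarity checks with statsmodels; the synthetic series
# is nonstationary by construction, so both tests should flag it before differencing.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(2)
series = 50 + np.cumsum(rng.normal(0, 0.2, 300))

adf_stat, adf_p, *_ = adfuller(series, autolag="AIC")
kpss_stat, kpss_p, *_ = kpss(series, regression="c", nlags="auto")

# ADF null hypothesis: unit root (nonstationary); KPSS null hypothesis: stationary.
print(f"ADF  p-value = {adf_p:.3f} -> stationary if p < 0.05")
print(f"KPSS p-value = {kpss_p:.3f} -> stationary if p > 0.05")

# If both tests indicate nonstationarity, take the first difference (d = 1) and retest.
diff1 = np.diff(series)
```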

In multiple linear regression, $R$ represents the correlation coefficient between the observed values of the outcome variable (Y) and the fitted (predicted) values of y. The value of $R$ will always be positive and will range from zero to one. $R^2$ represents the proportion of variance in the outcome variable y that may be predicted given the values of the x variables. An $R^2$ value close to 1 indicates that the model explains a large portion of the variance in the outcome variable. A problem with $R^2$ is that it always increases when more variables are added to the model, even if those variables are only weakly associated with the response. A solution is to adjust $R^2$ by considering the number of predictor variables. The adjusted R-squared value in the summary output is a correction for the number of x variables included in the prediction model. The adjusted coefficient of determination ($\bar{R}^2$) evaluates how well a model approximates the real data points and is a measure of the predictability of the model. The higher the value of $\bar{R}^2$, the more efficient the developed model [52]:

$R^2 = 1 - \dfrac{\sum_{i=1}^{k}(x_i - y_i)^2}{\sum_{i=1}^{k}(x_i - \bar{x})^2}, \qquad \bar{R}^2 = 1 - (1 - R^2)\dfrac{k - 1}{k - p - 1}$,

where $x_i$ is the i-th expected output, $y_i$ is the i-th predicted output, $\bar{x}$ is the average of the desired output, $p$ is the number of variables, and $k$ is the number of identification set samples.

To provide information on the reliability of each model, an analysis of the distribution of residuals (differences between expected and actual values of the models) through their representation in scatter plots was conducted. The following indicators are used to evaluate the prediction models.

Mean-square error (MSE): To measure the error between the predicted value and the actual value of the data, the MSE method (22) averages the squared errors of the individual data points [53]:

$\mathrm{MSE} = \dfrac{1}{k}\sum_{t=1}^{k}(A_t - F_t)^2$ (22)

Mean absolute percent error (MAPE): MAPE (23) measures the relative prediction error of the data and is used to avoid the shortcomings of the MAD method and the MSE method, whose values grow with the scale of the data. When MAPE is less than 10, the model is highly accurate [54]:

$\mathrm{MAPE} = \dfrac{100\%}{k}\sum_{t=1}^{k}\left|\dfrac{A_t - F_t}{A_t}\right|$ (23)

Root mean-square error (RMSE): RMSE (24), also known as the standard error, is usually used as a measure of the prediction results of machine learning models. It is the square root of the ratio of the squared deviations between the predicted values and the actual values to the number of observations. It is more sensitive to large errors and therefore reflects measurement accuracy well [55]:

$\mathrm{RMSE} = \sqrt{\dfrac{1}{k}\sum_{t=1}^{k}(A_t - F_t)^2}$ (24)

where $A_t$ is the actual value, $F_t$ is the predicted value, and the sample size is k. These methods are used as the prediction capability indicators because MAPE is not affected by the unit or the size of the values, so its judgment is objective, and the larger the sample size, the more reliable the RMSE. $\bar{R}^2$, MAPE, and RMSE allow a comparison of the deviation between the predicted and expected values of the final product quality prediction [56–59]. Therefore, this study uses $\bar{R}^2$, MAPE, and RMSE as the prediction capability indicators.
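A minimal sketch of these indicators in NumPy follows; the actual and predicted arrays are placeholders for the model outputs.

```python
# Sketch of the evaluation indicators defined above (MSE, MAPE, RMSE, adjusted R^2);
# the actual and predicted arrays are illustrative placeholders.
import numpy as np

def mse(actual, pred):
    return np.mean((actual - pred) ** 2)

def rmse(actual, pred):
    return np.sqrt(mse(actual, pred))

def mape(actual, pred):
    return np.mean(np.abs((actual - pred) / actual)) * 100   # in percent; < 10 is highly accurate

def adjusted_r2(actual, pred, n_predictors):
    k = len(actual)
    ss_res = np.sum((actual - pred) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (k - 1) / (k - n_predictors - 1)

actual = np.array([10.2, 9.8, 10.5, 10.1, 9.9])
pred = np.array([10.1, 9.9, 10.4, 10.2, 10.0])
print(mse(actual, pred), rmse(actual, pred), mape(actual, pred), adjusted_r2(actual, pred, 2))
```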

5. Case Demonstration and Results

This section demonstrates the results of the two proposed knowledge modules. First, the knowledge module of the intelligent order dispatch system for component suppliers uses the analytic hierarchy process (AHP) and the multiobjective integer programming (MOIP) model to determine the number of orders to be assigned to each supplier. Second, the finished product quality prediction knowledge module, using the multivariate regression model and the ARIMA model, estimates the finished product quality given the historical data of the semifinished products. This approach helps the enterprise create a set of dynamic adjustments for a semifinished product portfolio, reduces the occurrence of quality abnormalities or defective shipments affecting the subsequent production process, and improves production efficiency.

5.1. Results of Intelligent Supplier Selection Demonstration

In this case demonstration, we choose the scenario of selecting a transformer casing supplier for the company. The QCDS score of the transformer casing supplier is used as input of the AHP model, which will generate the weight required in the multiobjective integer programming model. For the supplier selection, there are four candidates. For confidentiality reasons they are called supplier 1, supplier 2, supplier 3, and supplier 4. The hierarchy structure for selecting transformer casing suppliers is shown in Figure 9. The pairwise comparison matrix for each level is computed. In Table 3, we demonstrate the comparison matrix for the first level of the hierarchy structure. The priority vector of the pairwise comparison matrix represents the weight of each criterion. These weights will then be imported into the MOIP model as parameters.

Next, before we present the results of the MOIP model, the parameter settings are shown in Tables 4–10. As mentioned earlier, there are four suppliers in this case demonstration; therefore, i = 1, 2, 3, 4 represent supplier 1, supplier 2, supplier 3, and supplier 4, respectively. We assume that there are three transformer casing types: j = 1 represents the major transformer casing, j = 2 represents the medium-sized transformer casing, and j = 3 represents the third (minor) transformer casing. The criteria weights $w_c$ are obtained from the priority vector in Table 3. As for the performance parameters $p_{ic}$ for each supplier, since no order has been dispatched before, we set the initial performance parameters to the value of the supplier weight in each criterion calculated through AHP.

In the case demonstration, it can be noted in Table 4 that supplier 1 does not manufacture component types 2 and 3, supplier 2 does not manufacture component type 3, and supplier 4 does not manufacture component type 1. Therefore, in Tables 7 and 8, we set the cost and lead time for these situations to an extremely large number, which is $10^{10}$ in this case. The nondominated sorting genetic algorithm II (NSGA-II) is used to calculate the Pareto solutions of the model. The multiobjective integer programming model is solved by implementing the NSGA-II algorithm in Python.

By implementing NSGA-II, we obtain several Pareto solutions. The reason why the model generates several solutions instead of one optimal solution is that there is more than one objective function. Because it is quite unrealistic for a single solution to dominate all the other solutions with respect to all objective functions, the model instead finds a solution set. In this solution set, the solutions are nondominated by any other solutions (in terms of all objective functions). In the case example of selecting suppliers for highly customized transformer casings, five Pareto solutions are identified. They are listed in the following five sets of matrices (1)–(5):

In each matrix, the numbers in each row represent the assigned orders of the three types of transformer casings. As shown in these solution sets, the only differences between the solutions are the dispatching strategies for type 1 and type 2 casings. The decision maker has to decide whether to assign all type-1 components to supplier 1 or 2 and whether to assign the 22 type-2 components to supplier 2 or 4. However, despite these differences, the three objective values for QCDS performance, cost, and lead time of all five solutions are nearly identical, i.e., approximately [264, 320 k, 1920], with deviations of less than 0.5%. Thus, the above five solutions are considered equally good choices with essentially the same consequence. Moreover, all five solutions satisfy all the constraints, even though many conditions are contradictory and decision making is challenging. To sum up, without the model to resolve these conflicts, a good decision on the selection of suppliers and the dispatching of orders would be difficult to reach.

5.2. Results of Finished Product Quality Prediction Model

Using the multivariate regression statistical model, the iron losses of the finished cores are predicted as y (100% current) and y1 (110% current). The independent variables are the key components (x1, x2, x3, x4). The partial regression coefficients determine the predicted variables of the parameters and reflect the partial effect, or model bias, of the corresponding predictor variable while holding the other predictor variables fixed. For the finished quality iron loss values y and y1, there are 625 and 548 samples, respectively, each with two independent variables (semifinished cores), and the model is fitted by statistical multivariate regression as shown in Table 11. The adjusted R² values of both prediction models are close to 1. Table 12 is the ANOVA summary table of the statistical significance test for the overall model fit.

Changes in the independent variables affect the dependent variables, so regression analysis is used to predict future changes in the quality loss value of the finished product. Table 12 shows the ANOVA results of (1) the 100% iron-core current in the multiple regression equation (i.e., the model for y) and (2) the 110% iron-core current in the multiple regression equation (i.e., the model for y1).

The AR(p) and MA(q) time series models each have their own characteristics, but in practice, the two models are integrated into the ARIMA model for this product quality prediction research. According to Tables 13 and 14, the ADF and KPSS tests of both sample data sets show that the series are stationary, so the ARIMA model can be used to predict final product quality.

After determining the stationarity of the sequences according to Tables 13 and 14, the Python ARIMA package is used to automatically search for the best combination of the p and q parameters, as shown in Figures 10 and 11, which illustrate the best parameter combinations for the 100% iron loss and the 110% iron loss.

The model was fitted with the ARIMA model as shown in Table 15. The adjusted R² values of the prediction models are both close to 1.

Scatter plots are used to show the correlation between the actual quality data and the predicted quality data. In Figures 10 and 12, the predicted quality indicators (iron loss values) of the 100% and 110% cores are plotted on the Y-axis against the actual values on the X-axis, respectively. The R² values of both the multivariate regression and ARIMA models are close to 1, indicating that the predictions are highly accurate. As shown in Figures 10 and 12, the multiple regression model has a larger fluctuation (blue dots) than the ARIMA model (orange dots), and both models show a positive correlation between the actual iron losses and the predicted iron losses.

6. Conclusions and Recommendations

In this research, the case company integrates internal and external supply chain data collected from material inspections, production processes, and quality inspections to enhance intelligent manufacturing decision support. The actual data, provided by the case company, are used to develop two key knowledge modules, i.e., the intelligent supplier selection for component purchasing and the finished product quality prediction for highly customized transformer manufacturing. For the intelligent supplier selection, the results obtained from the system show adherence to the preset constraints while selecting the best-fit solutions for the objective functions. One major advantage is that the proposed module provides order dispatching suggestions for the company, leaving the decision maker as the selector of the final supplier. In the case demonstration, a total of five Pareto solutions are generated. Each is nondominated by any other solution with respect to the three objective functions, which means each has an advantage in at least one of the desired goals. For future research, it is important to construct a rule to select the best solution among all Pareto solutions.

The quality prediction module is verified by comparing the predicted values with the actual process data obtained from the transformer manufacturer. The results show that, using the iron loss values of the two semifinished cores, the multiple regression and ARIMA models obtain R² values of 0.99. When the company receives semifinished products or materials, the suppliers’ inspection data are input into the two models and cross-validated to ensure the quality of the finished transformer. In the past, it was necessary to complete the entire transformer to determine whether the quality satisfied the customer; when the finished product did not meet the requirements, a lot of time and resources (such as raw materials and labor costs) had already been wasted. Therefore, this research method can shorten the time needed to judge whether the two semifinished iron cores meet the quality requirements of the finished product. For future research, when the key data of semifinished products and materials are provided by a smart SCM system with safe and reliable data transparency, the research can use deep learning models to make quality predictions.

Data Availability

All the relevant data are included within the manuscript.

Conflicts of Interest

The authors declare that there are no conflicts of interest about the publication of the research article.

Authors’ Contributions

CHC, AJCT, and CVT were involved in conceptualization. CHC, PYC, and AJCT are responsible for methodology and formal analysis. CHC and PYC provided software and performed investigation, data curation, visualization, and original draft preparation. AJCT and CVT provided the resources and contributed to validation, supervision, project administration, funding acquisition, and manuscript review and editing. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This research was partially supported by the Ministry of Science and Technology (Taiwan) individual research grants (Grant nos. MOST-108-2221-E-007-075-MY3 and MOST-110-2221-E-007-113-MY3) and the National Yang Ming Chiao Tung University’s R&D grant for enhancing international research cooperation.