Abstract

Additive manufacturing (AM), also known as 3D printing, is a transformative technology, but inconsistent processing defects and product quality continue to hinder its wide adoption in industry and other fields. In this context, machine learning (ML) algorithms are increasingly used to classify process data automatically and thereby enable computer-aided defect detection. In this paper, taking fused deposition modeling (FDM) as the base case, two data-driven classification models are built from the sensing signals (temperature and vibration data) and interlayer images monitored during the printing process, and the prediction results of the two models are fused. The experimental results show that fusing the predictions of the two models yields a classification accuracy significantly higher than that of either model alone. These findings can benefit researchers studying FDM with the goal of producing systems that self-adjust online for quality assurance.

1. Introduction

Additive manufacturing is also known as 3D printing. Since its emergence, it has shown great potential in areas such as healthcare, aerospace, and automotive manufacturing. Unlike subtractive manufacturing, additive manufacturing makes objects from 3D model data by joining materials, usually layer by layer [1, 2]. However, process repeatability and quality control remain the biggest obstacles to the wide industrial use of the technology [3]. The focus of this paper is fused deposition modeling (FDM), a material extrusion technique in which a thermoplastic material is melted and extruded through a hot end to create the printed layers [4].

In the FDM process, instability in production often affects the printing process and print quality, and printing defects may even occur [5]. Common printing defects include overfill, underfill, surface roughness, and warping [6]. To address these problems, various field monitoring technologies have been developed that can automatically correct detected defects in real time, improving the quality of AM components and reducing the variation in their mechanical properties [7, 8].

Traditional process optimization usually involves iterative experiments, which are both time-consuming and expensive. Therefore, an increasing number of research efforts combine machine learning with field monitoring, using machine learning to analyze in-process images and data and to predict the expected component quality [9]. Process data are monitored and recorded by placing sensors and cameras on the production site [10, 11]. Artificial intelligence (AI) methods are used to evaluate the recorded data or to predict future states, providing a preliminary judgment of the relevant printing process parameters and component quality. In addition, developments in sensing technology have made it possible to collect data from multiple sources, including process data from 3D printers, data from scanners, and data from microscopic cameras [12, 13].

Combining machine learning with field monitoring in this way, Song et al. used thermocouple sensors to record thermal time-series data from a build platform and applied the K-nearest neighbors (KNN) algorithm to predict warping in the FDM process. However, the classification accuracy was only 84%, and the method was not compared with other related algorithms [14].

In another study, Li et al. proposed a data-driven modeling approach for predicting the surface roughness of printed parts, in which sensors such as accelerometers and thermocouples were arranged to record process data and predictions were made by ensemble learning methods [10]. However, those experiments focused mainly on the roughness of the middle three sections of the printed part and did not provide a complete overview of the part as a whole [15]. Compared with vibration and thermal signals, acoustic signals offer certain advantages: they carry rich information about process conditions, part quality, structural integrity, and more [16]. Wu et al. developed a data-driven monitoring and diagnosis method based on acoustic emission sensing and investigated the relationship between process failures and the corresponding acoustic emission patterns, analyzing the failures with an unsupervised clustering method, the self-organizing map (SOM) [17]. However, this failure diagnosis was limited to the first-layer FFF printing process and could not yet fully automate the monitoring and adjustment of parameters.

Another line of ML research is the detection and analysis of AM processes with optical tools. Westphal and Seitz used a transfer learning approach on image datasets in combination with an adaptive classifier to analyze the part manufacturing process [18]. The proposed transfer learning approach allows the automatic classification of powder bed defects in SLS processes from a very small dataset. In FDM processes, more and more research analyzes process parameters and printing defects in real time from process images using convolutional neural networks (CNN) [19, 20]. An image-based closed-loop control system was developed by Liu et al.; for defect identification, a defect feature extraction method based on image texture analysis was proposed to extract the corresponding defect texture features from the input image [21]. The developed closed-loop system achieves 85% defect classification accuracy and can adjust the process parameters in time. However, the limited coverage of the microscope used to monitor the printing process limits the identification of other types of surface defects.

In addition to monitoring 1D and 2D data from the AM process, some studies have used 3D data collected during the process to reconstruct the geometry of the printed part and compare it with the computer model [22]. Charalampous et al. developed an ML method based on a regression algorithm to investigate the dimensional deviations between CAD models and solid parts [5]. The relationship between process parameters and printing quality was investigated by creating a database of optical scans. However, this analysis method only evaluates the measured printing results and ignores the mechanical movements and thermal changes inside the printed part during production. A laser-based online machining monitoring system was proposed by Lyu and Manoochehri [23]. The point cloud dataset acquired by a 3D laser scanner provides the current part height in the z-axis direction and the in-plane depth of each layer. A VGG16 model classifies the in-plane anomalies, and a PID online closed-loop control method adjusts the process parameters, significantly reducing the height deviation of the printed parts and effectively correcting printing anomalies. However, the complex ML method results in relatively high computational costs and a long analysis time.

Unlike AM process monitoring with a single sensor, some research works fuse information from different data sources to evaluate geometry deviations and mechanical properties of printed parts. Gui et al. proposed a data fusion method that fuses sensing data from different sources with process parameters to evaluate the quality of 3D printed parts [12]. In addition, Petrich et al. developed a neural network that ingests multimodal sensor data from cameras, microphones, and machine logs to detect defects in laser powder bed fusion AM parts with 99.9% accuracy [13]. These experiments perform quality assessment by fusing multimodal data; however, fusing the raw multimodal data sacrifices the independence of each data source, and a single machine learning model is subject to learning bias in its predictions.

In summary, the literature above applies ML methods to various types of data from AM processes to assure printing quality. In this paper, a defect classification method for FDM processes is proposed that diagnoses defects by fusing the prediction results of two machine learning models. The multimodal data include layer images, extruder vibration, and melt pool temperature recorded during production. By fusing the vision- and sensing-based methods, the classification accuracy is significantly improved compared with using either method alone to classify defects in the FDM process. The experiments cover five print defect types and one normal condition, and the classification accuracy of the fusion model is about 97.9%. The rest of the paper is organized as follows: Section 2 describes the experimental setup and the research methodology, Section 3 gives the data analysis and experimental results, Section 4 discusses the experimental results, and Section 5 presents the conclusions and future work.

2. Materials and Methods

The experimental setup used in this study, the data acquisition procedure, and the generated datasets are described in detail below, together with the data preprocessing steps, the ML algorithms, and the performance evaluation metrics used in the experiments.

2.1. Experimental Equipment and Data Acquisition

A low-cost FDM printer (Aurora Ervo A6) was used in this experiment; the data acquisition system is shown in Figure 1. The machine uses a 0.4 mm diameter nozzle and a heatable build platform, with 1.75 mm diameter gray polylactide (PLA) filament as the printing material. The test parts printed in the experiments were rectangular blocks. To simulate a real production environment, the necessary printing process parameters were set in the slicing software JGreate version 4.8.4.

An Aomekie USB microscope is used to image the surface of the printed layers during the FDM production process. The digital microscope is mounted on the left side of the motor, parallel to the extruder. After each layer is printed, the FDM printer automatically moves the extrusion head to one side, and the digital microscope captures the detailed features of the print surface. After several experimental adjustments, the microscope resolution was fixed, with a sampling frequency of 1 Hz. Considering the defects that may be generated by temperature variations and abnormal vibrations in the FDM process, accelerometers and a noncontact infrared camera are also used to monitor the process. To measure the layer-by-layer thermal activity of the FDM process, an infrared (IR) imaging camera is mounted on the extruder to record the melt pool temperature; to capture possible abnormal vibrations during printing, accelerometers are mounted on the extruder motor to monitor the extruder motion. Table 1 lists detailed information about the sensor measurements.

To investigate the relationship between process parameters and printing defects, experiments were designed for a variety of process parameters. According to previous studies and the literature, the layer height machine parameter does not have a significant effect on print surface quality [21]. Therefore, three process parameters were selected for the experiments: extruder temperature (T), material flow rate (R), and printing speed (V), where T controls the temperature of the extruded material and thus the viscosity of the extrudate, R denotes the material extrusion flow rate of the extruder, and V is the movement speed of the extruder. All three selected process parameters may affect printing quality. In this study, experiments were conducted by adjusting the three selected process parameters (the experiment assumes that four parameter combinations can lead to printing defects), and the defect types were diagnosed from the printing results. The experimental design is shown in Table 2; each run is repeated twice to avoid chance events during the FDM process, for a total of 20 prints.

To determine the relationship model between the parameters selected in the study and the defects, a total of 20 printing operations were performed according to the experimental design in Table 2. Since the height of the printed test part is 12 mm and the layer height is set to 0.3 mm, each part is printed as a stack of 40 layers. During the FDM printing experiments, the digital microscope automatically moves directly above the printed part after the extruder completes each layer to record the detailed features of that layer, so 40 surface images can be captured for each printed test part. However, when printing with the process parameter combination T250R150V60, large shifts in print position always occurred and the prints had to be aborted. As a result, a total of 765 field monitoring images were actually collected over the 20 print experiments. Meanwhile, the sensing data monitored in real time by the accelerometer and the IR imaging camera were saved to a txt file via a Python program.

2.2. Data Preprocessing

After collecting the required multimodal data, the data must be preprocessed accordingly. The outcomes of the different parameter settings are shown in Figure 2(a). It can be observed that improper settings of the material flow rate lead to underfill and overfill defects in the printed part. Regarding extruder temperature and printing speed, the experimental results show that a high temperature combined with an excessive printing speed leads to abnormal material cooling and filling defects with regular patterns on the surface of the printed part. Low extruder temperatures, in turn, can lead to uneven cooling rates throughout the printing process; as the number of deposited layers increases, temperature gradients in the part create residual thermal stresses that can cause defects such as warping [14]. However, the results also show that if the printing speed is fast enough, the printed part adheres to the build platform even at a slightly lower extruder temperature, and no warping occurs at the corners of the printed part. In addition, the interaction of low temperature and excessive printing speed reduces material fluidity and leads to a rough surface. The different types of printing defects were then labeled according to the acquired images (Figure 2(a)); the printing results for the four groups of process parameters can be broadly classified into six categories: rough, normal printing, overfill, regular pattern, underfill, and warping. Table 3 lists the printing results for each experiment, where the process parameters of a normal printing process (i.e., one with no defects) are defined as the optimal machine parameters. In addition, for the 1D sensing data (Figures 2(b) and 2(c)), the presence of missing values must be checked.
After verifying the integrity of the dataset, the data files are manually cropped to remove all values recorded before the first layer and after the last layer. After cropping, a stable production record of approximately 1 hour and 30 minutes was obtained. To improve computational efficiency, nine statistical features (maximum, median, mean, minimum, root mean square, skewness, kurtosis, peak, and variance) are extracted from the time domain for each collected signal channel. In addition, a fast Fourier transform (FFT) is applied to the vibration data to extract four frequency-domain features: the mean, maximum, minimum, and median of the spectral amplitude. Finally, the sensing data corresponding to each layer of the FDM process are labeled after feature extraction so that they are consistent with the image data.
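The time- and frequency-domain feature extraction described above can be sketched as follows; the synthetic 50 Hz signal and 1 kHz sampling rate are illustrative assumptions, while the feature names follow the list in the text.

```python
import numpy as np

def time_domain_features(x):
    """The nine time-domain statistics listed in the text, per channel."""
    x = np.asarray(x, dtype=float)
    mean, std = x.mean(), x.std()
    return {
        "max": float(x.max()),
        "median": float(np.median(x)),
        "mean": float(mean),
        "min": float(x.min()),
        "rms": float(np.sqrt(np.mean(x ** 2))),
        "skewness": float(np.mean((x - mean) ** 3) / std ** 3),
        "kurtosis": float(np.mean((x - mean) ** 4) / std ** 4),
        "peak": float(np.max(np.abs(x))),
        "variance": float(x.var()),
    }

def freq_domain_features(x):
    """The four frequency-domain statistics of the FFT spectral amplitude."""
    amp = np.abs(np.fft.rfft(np.asarray(x, dtype=float)))
    return {
        "fft_mean": float(amp.mean()),
        "fft_max": float(amp.max()),
        "fft_min": float(amp.min()),
        "fft_median": float(np.median(amp)),
    }

# Example: one layer's vibration segment (synthetic 50 Hz tone, 1 kHz sampling).
t = np.arange(0, 1, 0.001)
segment = np.sin(2 * np.pi * 50 * t)
features = {**time_domain_features(segment), **freq_domain_features(segment)}
print(len(features))  # 13 features for this channel
```

In the paper's pipeline, such a 13-dimensional vector would be computed per channel for the data segment of each printed layer and then paired with that layer's class label.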

2.3. Machine Learning Models Used in the Research

Throughout the analysis, all computations were performed on a local workstation running Windows 10 with an Intel Core i5-10400F CPU and 16 GB of RAM. All machine learning algorithms were implemented in PyTorch 1.10.1 with Python 3.8.12.

2.3.1. Vision-Based Defect Classification Algorithm

Figure 2(a) clearly demonstrates that the surface images of printed parts have distinct textural characteristics under different process parameter conditions. Therefore, this study uses the Swin Transformer algorithm to identify the defects that appear. The transformer was first proposed in the field of natural language processing (NLP) and has been widely applied to computer vision in recent years [24]. CNNs are usually considered the dominant solution to vision problems [25], but since the introduction of the Vision Transformer (ViT) [26], more and more researchers have applied the transformer architecture to image recognition. Because the attention mechanism in a transformer captures the semantic relationships between image patches, the model obtains a global receptive field during training, recognizes small targets better, and, compared with traditional CNN architectures, can use computational resources more efficiently. However, transformers are difficult to apply to high-resolution images, and basic transformer models, in which the token scale is fixed, are not well suited to the nature of visual applications. To overcome these problems, the Swin Transformer replaces the standard multi-head self-attention (MSA) module with a shifted-window-based module (SW-MSA) [27]. By combining regular window-based multi-head self-attention (W-MSA) with the shifted-window module (SW-MSA), the receptive field of the model is expanded and computational efficiency is improved.

To address the limited number of images acquired during the experiment, a transfer learning method based on the Swin Transformer is used in this study. With the development of deep learning, transfer learning has become an integral part of many applications, especially in the field of fault diagnosis [28]. The pretrained model used in this study is a Swin Transformer trained on ImageNet-1k. The loss function used in the model is the cross-entropy loss for multi-class classification (SparseCategoricalCrossentropy). To better exploit the information in sparse gradients, the Adam optimizer is used [29]. Throughout the image classification experiments, the model is trained on 80% of the dataset and tested on the remaining 20%.

2.3.2. Time Series Data Classification Algorithm

Since image-based defect detection considers only the surface images between printed layers, signals such as extruder temperature and vibration during the FDM process are ignored. Therefore, a 1DCNN model is used for feature extraction and correlation evaluation on the collected time-series sensing data. CNNs are widely used for image and speech processing [30]. For extracting features from sensing data, a 1DCNN is more suitable than a 2DCNN because its kernel dimensionality matches that of the signal. During training and testing of the 1DCNN model, the division of the sensing dataset is kept consistent with that of the image dataset.

2.3.3. Combination of Vision and Sensing

Both of the above FDM surface defect classification methods can run as stand-alone methods. However, image monitoring of the FDM process has two problems. First, image quality is susceptible to lighting conditions and to the microscope camera itself. Second, the microscope camera has a blind spot in its field of view, and errors at the edges of the printed part are difficult to capture. In contrast, the sensing data collected during the process describe the layered thermal activity and mechanical motion of the FDM process well. It is therefore advantageous to fuse the two methods for the classification of FDM surface defects to obtain more reliable results. Figure 3 illustrates the basic flow of multimodal data fusion prediction.

The basic idea behind the combination of vision- and sensing-based surface defect identification is to run both methods simultaneously. Each machine learning model provides a probability for every class, i.e., for each class it returns the probability that the test vector belongs to that class. When predicting separately with the vision- or sensing-based method, the model outputs the predicted class label directly. When performing fusion prediction on the multimodal data, however, this study combines the class probabilities. Let P_I = (p_I,1, ..., p_I,C) denote the class probabilities output by the image-based classification model for a test vector, where p_I,c is the probability of class c and C is the total number of classes (C = 6 in this study). Similarly, let P_S = (p_S,1, ..., p_S,C) denote the class probabilities predicted from the sensing-signal feature vector, where the image and the sensing features correspond to the same time period of the FDM process. The joint predicted probability of each class can then be obtained by a weighted combination:

p_c = (w · p_I,c + p_S,c) / 2,  c = 1, ..., C,

where w is the weight assigned to the image-based prediction.

Combining the class probabilities predicted by the two methods, the joint value is small if both p_I,c and p_S,c are small; if both are moderate, or one is large and the other small, the joint value takes an intermediate value; and if both are large, the joint value is large. Since the two prediction methods may differ in importance, multimodal prediction can be improved by appropriate weighting, and in this experiment the optimal combination is found by adjusting the weight w on P_I. To prevent the joint value from exceeding the maximum probability of 1, the weighted sum is divided by 2. Finally, the class label predicted by fusion is given by:

ŷ = argmax_{c ∈ {1, ..., C}} p_c
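A minimal sketch of this fusion rule, assuming the joint score is the weighted vision probability plus the sensing probability, halved; the class names and the example probability vectors are illustrative, not measurements from the study.

```python
import numpy as np

def fuse_predict(p_vision, p_sensing, w=0.4):
    """Fuse two class-probability vectors for the same time window.
    w is the weight on the vision model (0.4 was found optimal here);
    the sum is halved so the joint value stays within [0, 1]."""
    p_vision = np.asarray(p_vision, dtype=float)
    p_sensing = np.asarray(p_sensing, dtype=float)
    joint = (w * p_vision + p_sensing) / 2.0
    return int(np.argmax(joint)), joint

CLASSES = ["rough", "normal", "overfill", "regular pattern", "underfill", "warping"]

# The vision model is unsure between "normal" and "regular pattern", while
# the sensing model is confident it is "normal": fusion recovers "normal".
p_img = [0.05, 0.40, 0.05, 0.45, 0.03, 0.02]
p_sig = [0.02, 0.85, 0.04, 0.05, 0.02, 0.02]
label, joint = fuse_predict(p_img, p_sig)
print(CLASSES[label])  # normal
```

This example mirrors the behavior reported later in the paper: the image model tends to confuse normal printing with the regular pattern class, and the sensing model compensates.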

2.4. Machine Learning Model Evaluation and Analysis

How to distinguish and evaluate the performance of classification algorithms is an important issue in machine learning, as the correct choice of evaluation metrics helps compare the performance of different models [31]. In this experiment, results are displayed using the confusion matrix (CM) [32], a cross-tabulation that records the number of occurrences between two raters, i.e., the true/actual classification and the predicted classification. The confusion matrix yields four basic elements: true positives (TP), false negatives (FN), false positives (FP), and true negatives (TN). For the multiclass problem, accuracy, macro-average precision, macro-average recall, and macro F1-score are used as the evaluation metrics in this study. For each class, precision = TP / (TP + FP) and recall = TP / (TP + FN); the macro averages are the unweighted means of these per-class values, the macro F1-score is the mean of the per-class scores F1 = 2 · precision · recall / (precision + recall), and accuracy is the fraction of all samples that are classified correctly.
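These macro-averaged metrics can be computed directly from a confusion matrix; the 3-class matrix below is a toy example, not data from the study.

```python
import numpy as np

def macro_metrics(cm):
    """Accuracy and macro-averaged precision, recall, F1 from a C x C
    confusion matrix (rows = true classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()

# Toy 3-class confusion matrix (30 samples, 26 on the diagonal).
cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]
acc, p, r, f1 = macro_metrics(cm)
print(round(acc, 3))  # 0.867
```

The small epsilon in the denominators guards against classes that are never predicted or never occur, which would otherwise divide by zero.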

3. Results

3.1. Image Classification Results

Experiments are conducted using the Swin Transformer-based transfer learning model and the image data collected from the FDM process. The Swin Transformer model is trained and tested on the previously described image dataset, and the results on the image validation set are shown in Table 4. On the validation set, the Swin Transformer model classifies the normal printing class poorly, and the precision of the rough class and the recall of the warping class are both low. In addition, Figure 4 shows the accuracy and loss curves of the Swin Transformer model during training and validation on the image dataset. Over the 20 training epochs, the training loss gradually decreases, and the training and validation losses finally settle around 0.4. The validation loss follows the same trend as the training loss, and the final validation accuracy is 0.837. This indicates that the Swin Transformer algorithm fits the training data relatively well. The confusion matrix of the validation set in Figure 5 clearly shows the difference between the actual and predicted values. The confusion matrix for the test set leads to a similar conclusion, with the Swin Transformer model misclassifying normal printing samples into the regular pattern class. This may be because the regular pattern class exhibits a normal surface condition at the beginning of printing, and the defects only become apparent after accumulating layer by layer to a certain level. In addition, reflections from the printed material also affect the classification performance of the ML model.
For the warping class, the Swin Transformer also shows some misclassification, mainly because the process parameters of the rough and warping classes are very close; the main difference between the two is whether a corner of the printed part detaches from the build platform during printing, which happens to lie in the blind spot of the digital microscope.

Because of the small size of the dataset used in the experiments, four other basic machine learning models were also evaluated on the image dataset to verify the reliability of the Swin Transformer model and of the dataset. The results of the support vector machine (SVM), decision tree (DT), K-nearest neighbors (KNN), and naive Bayes (NB) models on the test set are shown in Table 5. The accuracy of SVM, the best of the four basic models, is still 6.79% lower than that of the Swin Transformer, and NB performs worst on the test set with an accuracy of only 0.635. From the confusion matrices in Table 5, it can be seen that none of the five machine learning models recognizes the normal printing class with high accuracy; even the better-performing SVM model still mislabels more than half of the normal printing samples. In conclusion, compared with the traditional machine learning models, the Swin Transformer-based transfer learning model achieves good classification performance on this small dataset, with an accuracy of 0.899 and a macro F1-score of 0.811, but it still struggles to distinguish individual defect classes.

3.2. Sensing Data Classification Results

This section proposes a method to classify defects of the FDM printing process based on sensing data. The FDM process information is collected by the sensors arranged on the printer, and the collected sensing data are preprocessed and input to a 1DCNN model for training. The 1DCNN model consists of one input layer, four convolutional layers, two max-pooling layers, two dropout layers, and one output layer. Figure 6 shows the accuracy and loss curves of the 1DCNN model trained and validated on the sensing dataset. As the number of epochs increases, the training and validation losses of the 1DCNN model gradually decrease, and the loss curves are smooth. The model reaches an accuracy of 0.899 on the training set and 0.892 on the test set. According to the confusion matrix of the 1DCNN model on the test set in Table 5, the model classifies the normal printing class well: the F1-score of the normal printing class is 0.833, much higher than the performance of the Swin Transformer on the image dataset. However, the 1DCNN model's classification of the other defect classes is still unsatisfactory.
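A PyTorch sketch consistent with the layer counts above (four convolutional layers, two max-pooling layers, two dropout layers, and an output layer); the channel sizes, kernel widths, dropout rate, and the assumed four input channels are illustrative choices, not values from the paper.

```python
import torch
from torch import nn

class Sensor1DCNN(nn.Module):
    def __init__(self, in_channels=4, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2), nn.Dropout(0.3),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2), nn.Dropout(0.3),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.head(self.features(x))

model = Sensor1DCNN()
x = torch.randn(8, 4, 128)  # a dummy batch of per-layer sensing windows
print(model(x).shape)  # torch.Size([8, 6])
```

The 1D kernels slide only along the time axis, which is what makes this architecture a better match for the sensing signals than a 2D CNN.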

The same procedure was followed to establish the reliability of the 1DCNN model and the sensing data: the random forest (RF) algorithm was also applied to the sensing dataset, and the experimental results were compared with those of the 1DCNN model (see Table 6). The 1DCNN achieves better prediction accuracy and a higher macro F1-score than the conventional RF model. In the ROC plot in Figure 7, the true positive rate (TPR) is plotted against the false positive rate (FPR) and the AUC values are determined. The AUC value of the 1DCNN model is 0.0036 higher than that of the RF model, which indicates that the 1DCNN model has a more robust prediction performance.

3.3. Fusion of the Prediction

The basic idea of vision- and sensing-based classification for surface defect detection in FDM processes is to run both methods simultaneously, and each method can also make predictions separately. The experiments in the previous two sections show that the Swin Transformer model performs similarly on the training and test sets but has a limited ability to discriminate the normal printing class, due to factors such as surface reflection of the printing material and instability of the printing process. The 1DCNN model shows good classification performance on both the training and test sets. Unlike the Swin Transformer model, which has difficulty distinguishing the normal printing class, the 1DCNN model identifies normal printing well from the sensing data without confusing it with the regular pattern class. However, since the collected sensing data consist of vibration signals and IR temperature signals, the signal characteristics of the overfill and regular pattern classes do not differ significantly, so the 1DCNN model's predictions for these defect classes are unsatisfactory. The results for both models show that each classifies some classes well, but neither can account for all defect classes. For this reason, this study fuses the two prediction methods.

To perform fusion prediction, this work uses the sensing data segments corresponding to the images. The weight w on P_I is first adjusted to find the optimal prediction model: it is varied from 0.1 to 1 in steps of 0.1. The performance of the fusion model on the test set under different weights is shown in Figure 8(a). When the weight on P_I is set to 0.4, the fusion model achieves the highest accuracy (0.979), and the F1-score distribution across categories is relatively uniform. Compared with the classification performance of the two individual models on the normal printing class, the F1-score of the fusion model for this class reaches 0.904. As shown in Figure 8(b), all evaluation metrics of the fusion model are higher than those of the Swin Transformer and 1DCNN models, and the metrics are in close agreement, indicating that the fusion model is robust.
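The weight sweep described above can be sketched as follows; the probability arrays here are synthetic stand-ins for the outputs of the two trained models on the test set, so the resulting best weight and accuracy are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, C = 200, 6
y_true = rng.integers(0, C, size=n)

# Synthetic model outputs biased toward the true class of each sample.
p_img = rng.dirichlet(np.ones(C), size=n)
p_sig = rng.dirichlet(np.ones(C), size=n)
p_img[np.arange(n), y_true] += 0.5
p_sig[np.arange(n), y_true] += 0.5
p_img /= p_img.sum(axis=1, keepdims=True)
p_sig /= p_sig.sum(axis=1, keepdims=True)

# Sweep the vision weight w from 0.1 to 1.0 in steps of 0.1.
best_w, best_acc = None, -1.0
for w in np.arange(0.1, 1.01, 0.1):
    joint = (w * p_img + p_sig) / 2.0          # fusion rule from Section 2.3.3
    acc = float((joint.argmax(axis=1) == y_true).mean())
    if acc > best_acc:
        best_w, best_acc = round(float(w), 1), acc
print(best_w, best_acc)
```

On the real test set, this sweep selected w = 0.4 with an accuracy of 0.979.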

The fusion model can classify the surface quality of the printed parts into six types based on printing quality: rough, normal printing, overfill, regular pattern, underfill, and warping. Since this study needs to determine the relationship model between the selected FDM process parameters and the defects, the classification results of the fusion model allow each defect type to be correlated with its root cause and thus with the corresponding process parameters. The qualitative relationship between the defects and the relevant adjustment measures is shown in Figure 9. However, the rough and regular pattern classes each correspond to two kinds of root causes. Unlike the rough class, the regular pattern class involves more combinations of process parameters, and the fusion model alone cannot determine which parameters need to be adjusted (a regular pattern may arise from normal temperature with high printing speed, or from high temperature with normal printing speed). Therefore, a three-class classification experiment is also conducted for the regular pattern defect class to distinguish its process parameters in the FDM process so that they can be adjusted promptly and in a targeted manner. For the random forest model used here, the hyperparameters to be tuned are (1) the maximum depth, (2) the number of decision trees, and (3) the minimum number of samples needed to split a node. A grid search is used to find the optimal parameters: a maximum depth of 9, 200 decision trees, and a minimum of four samples to split a node. After optimization, the accuracy of the random forest model reaches 0.958 on the test set, which satisfies the needs of the three-class task and determines the relationship between regular pattern defects and process parameters well.
So far, all the surface defect classification experiments for the FDM process are completed. The experimental results show that ML is a promising solution in the field of additive manufacturing that can be used to detect surface defects during printing and adjust the process parameters in time.

4. Discussion

4.1. Optical Detection and Challenges

FDM process monitoring using a digital microscope can provide a good description of defects in the printing process. This experiment was performed by manually annotating the acquired images and analyzing them with a machine learning model. However, optical imaging has limitations for process monitoring. The coverage of the digital microscope is limited, and its field of view is mainly focused on the central area of the printed part, making it difficult to observe the edges and bottom of the part. Beyond this blind spot, obtaining sufficient contrast in the image is also critical. The PLA material used in the experiments was gray, and the acquired images were slightly affected by glare due to its high reflectivity. The impact is mainly seen in the classification of normal printing, where the Swin Transformer model repeatedly mispredicts the normal printing class as the regular pattern defect class. Therefore, the material chosen for a test case should ideally be bright but not overly reflective, and, if necessary, polarized illumination should be used to further reduce reflections. In addition to the effects of the site environment and material properties, optical analysis methods tend to ignore the operating conditions of the machine itself during printing and do not allow direct analysis of process parameters.

4.2. The Importance of Sensing Monitoring

Unlike image classification ML algorithms, which require relatively high computational cost, the sensor-based analysis approach is less complex. The experiments monitor the working status of the FDM process with accelerometers and IR sensors, where the vibration signal reflects the mechanical activity of the FDM printer and the IR temperature monitors the layer-wise thermal activity of the FDM process. The collected vibration and temperature data are processed and input to the 1DCNN model for training, and the corresponding prediction results are obtained. The experimental results show that the 1DCNN model is much more effective than the Swin Transformer model at predicting the normal printing class. However, since the collected sensing data consist mainly of vibration signals and temperature data, it is difficult to accurately distinguish between the various defect types, and the accuracy of the 1DCNN model on the test set is only 0.892. The experiments show that multiple sensors (accelerometers and IR temperature sensors) can help detect problems for which computer vision is not necessarily applicable.
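Before the sensing streams reach the 1DCNN, they must be cut into fixed-length segments. The sketch below shows one common way to do this, assuming the vibration and temperature channels are already aligned into a single `(n_samples, n_channels)` array; the window length, step, and per-channel standardization are typical choices, not necessarily the paper's exact preprocessing.

```python
import numpy as np

def make_segments(signal, window, step):
    """Slice a multichannel time series of shape (n_samples, n_channels)
    into overlapping fixed-length windows for a 1D-CNN classifier.

    Returns an array of shape (n_windows, window, n_channels).
    """
    segs = [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]
    return np.stack(segs)

def standardize(segments):
    """Zero-mean, unit-variance scaling per channel (a common choice;
    the epsilon guards against constant channels)."""
    mean = segments.mean(axis=(0, 1), keepdims=True)
    std = segments.std(axis=(0, 1), keepdims=True) + 1e-8
    return (segments - mean) / std
```

Overlapping windows both augment the training set and reduce the latency between a physical event and the segment that captures it.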

4.3. The Function and Significance of Fusion Model

By fusing two independent models for prediction, good prediction results are obtained: for most categories, the fusion model outperforms prediction from either the visual or the sensing data alone. The experiments use the Swin Transformer model as a benchmark and gradually adjust the weight on the 1DCNN model to find the optimal fusion model. The fusion model copes well with the gradual state changes of the FDM process: since a part is shaped by material stacking, the initial layers may not show obvious printing defects (e.g., the regular pattern and warping classes) under digital microscope monitoring, and clear differences appear only after printing has progressed. It is therefore difficult to tell by optical inspection whether an initial-layer image belongs to a given defect class, and by the time the image classifier detects a defect, the print has already undergone a large deviation that is difficult to correct. Sensors arranged on the printer, in contrast, can monitor the mechanical and thermal activity between layers during printing, facilitating the timely detection of defects and subsequent adjustments. Finally, the relationship between the selected process parameters and defects can be precisely determined by the fusion model together with a simple random forest three-class classification model. However, for practical industrial applications, the proposed method needs further improvement and automation. In addition, more data should be added to the training database to ensure more robust classification results.

5. Conclusions

To improve the diagnostic performance of printing defect detection in fused deposition modeling (FDM), a method that fuses vision-based and sensor-based prediction results for defect diagnosis is proposed. The interlayer surface images and sensing data (vibration signals and infrared temperature) from the FDM process are acquired by arranging a digital microscope and sensors (an accelerometer and an infrared thermometer) on the printer. Two defect classification methods were tested on the acquired dataset: the first uses a Swin Transformer-based transfer learning model with the interlayer surface images, and the second uses a 1DCNN-based model with the sensing data. Both diagnosis models can run independently; however, neither model's results are ideal on its own. Therefore, the prediction results of the two models are fused, and the experiments show that the fused prediction model is more reliable than either single ML model alone: the prediction accuracy of the fusion model is 8.9% and 9.8% higher than that of the Swin Transformer model and the 1DCNN model, respectively. In addition, the proposed method can reliably determine the relationship between printing defects and process parameters, which facilitates subsequent online adjustments for defects that occur during printing.

For future work, the proposed fusion model approach will be further tested on more complex parts. Since the FDM printer used in this paper is not open source, the online adjustment function cannot be automated for the time being. In the future, it is expected that the production process will automatically perform online data acquisition, preprocessing, and analysis and feed the ML model's predictions to a closed-loop system in real time, so that the production system can automatically adjust the process parameters based on this feedback to achieve quality assurance.

Data Availability

The data presented in this study are available on request from the corresponding author.

Ethical Approval

No research on human participants and/or animals was involved.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Authors’ Contributions

X.-Y.L. was responsible for the conceptualization, methodology, and investigation, wrote the original draft, and reviewed and edited the manuscript. F.-L.L. was responsible for the supervision and wrote, reviewed, and edited the manuscript. M.-N.Z. was responsible for the investigation. M.-X.Z. was responsible for the software. C.W. was responsible for the supervision. X.Z. was responsible for the methodology and supervision and wrote, reviewed, and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This research was funded by the National Key Research and Development Program of China (2020YFB1711500).