Abstract
In this paper, we present a machine learning method to predict defects in foundry operations. Foundries produce parts for other industries, such as the automobile, marine, and weapons sectors. The foundry process has two critical problems that reduce quality assurance: micro shrinkage and insufficient ultimate tensile strength. Predicting these defects in advance increases the quality of the foundry operation. We use machine learning classifiers to predict micro shrinkage and ultimate tensile strength, describe the data and learning process, and evaluate the classifiers on a dataset drawn from the foundry process to compare their accuracy and stability.
1. Introduction
The manufacturing industry must produce components to high standards, because supplying other industries is its primary activity. For example, brake components supply the automobile industry; likewise, weapons parts, aeronautical components, and naval equipment play key roles in their related industries. In all these industries, many components are manufactured by a foundry process to high quality standards by regulating preprocessing and postprocessing activities. Micro shrinkage generally occurs as the part cools down; parts with too many such flaws are rejected as defective, increasing the amount of material that must be returned as raw stock. Radiographic testing identifies micro shrinkage as a defect. Mechanical properties are also critical, and among them we consider ultimate tensile strength in our work: the maximum tension a cast part can withstand before it breaks.
1.1. Introduction to Foundry Process
The foundry process converts liquid metal into a desired shape, i.e., metal casting. The process follows a sequence of steps (Figure 1). First, the metal is melted into liquid form [1]. The liquid metal is then poured into the mold. After the metal solidifies, the mold is removed. Finally, the part receives its final shape through finishing processes [2].

1.2. Basic Defects of the Foundry Process
The defects in the foundry process are determined using various testing procedures, but it is challenging to identify from the output exactly which problem occurred [3]. The process has six common fault types: porosity, shrinkage defects, material defects (pour or mold), metallurgical defects, shape defects, and changes in mechanical properties. Among all these flaws, we concentrate only on shrinkage and mechanical properties [4].
1.3. Shrinkage Defect
Shrinkage defects appear because of thermal contraction in the material. In the foundry process, thermal contraction is not homogeneous while the metal is at the transformation stage from liquid to solid: the part undergoes more or less contraction as it cools to room temperature. Shrinkage defects typically appear at the edges of the object [5].
1.4. Prevention of Shrinkage Defect
(A) A good gate design with a riser should allow a reasonable flow rate of the melted metal [6].
(B) Add cooling coils to adjust the heat removal from the molten metal.
(C) Reduce the total volume of defects [7].
1.5. Mechanical Properties
Once the entire foundry process is complete, the part is exposed to external forces acting directly on the body; the quality of the foundry element determines how it withstands these forces [8].
(A) Strength: the physical strength of the metal, including ultimate strength and tensile strength.
(B) Hardness: resistance to surface indentation and wear.
(C) Toughness: ability to withstand different loading conditions without fracturing.
(D) Resilience: ability to absorb energy.
(E) Elasticity: ability to return to the original shape after the applied forces are removed.
(F) Plasticity: ability to deform permanently under applied forces.
(G) Brittleness: the material breaks before reaching a deformation stage.
(H) Ductility: the material can deform without breaking.
2. Machine Learning and Its Implementation
Generally, machine learning is a programming technique to predict or optimize for faults based on datasets or previous experience. Adequate datasets and processing methods of high standard are already available. By referring to an existing dataset [9, 10], we can reduce the complexity of the problem and build a more accurate system using machine learning [11]. Such systems intelligently combine computer vision and natural language processing for automation in manufacturing [10, 12, 13].
3. Implementation of Machine Learning
Generally, machine learning algorithms (Figure 2) can be written in three different ways according to the application [14].
(1) Supervised learning: labeled input data, consisting of numbers or strings, is used to learn from previous inputs and predict future aspects. It divides into two subcategories [15, 16]:
(A) Regression: this method gives a continuous numeric output learned from the given inputs, for example with a picture as the input source [17].
(B) Classification: this method assigns the output to a category, generally producing outputs such as {correct, wrong}, {positive, negative}, {yes, no}, or {true, false}, based on the given historical datasets [18].
(2) Unsupervised learning: it discovers structure in the input data without labeled outputs, using previous data or historical evidence.
(3) Reinforcement learning: the system chooses an action at each instant in time and learns from the outcome [19, 20].

But in our problem, we predict faults in foundry quality, so the proposed approach is supervised learning with regression and classification methods [21].
(4) Workflow implementation of machine learning: the workflow proceeds through four phases [22]:
(1) Data collection: relevant data is collected from a common source, i.e., foundry related, and converted into prescribed datasets [23].
(2) Data preprocessing: we need to identify missing data in the datasets [8]. Missing data is common in real-world data, so we must handle it to process the dataset optimally. Machine learning benefits greatly from standardized datasets [24].
If we want to use a dataset with unfilled entries, we need to fill in the missing data, for example with zero, the average value, or the mode. Machine learning practice offers procedures for this, including feature scaling, mean removal, and variance scaling. Using these methods, we can improve the accuracy of the output [24].
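For illustration, here is a minimal sketch of these preprocessing steps with scikit-learn; the feature names and values are hypothetical, since the paper's foundry dataset is not published.

```python
# Minimal sketch: fill missing values, then remove the mean and scale
# to unit variance. Features and values below are hypothetical.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Hypothetical foundry features with missing entries (np.nan):
# pouring temperature, cooling time, composition index.
X = np.array([
    [720.0, 1.2, np.nan],
    [715.0, np.nan, 0.31],
    [np.nan, 1.5, 0.28],
])

# Replace missing values with the column mean (zero or the mode also work).
X_filled = SimpleImputer(strategy="mean").fit_transform(X)

# Feature scaling: subtract the mean and scale to unit variance.
X_scaled = StandardScaler().fit_transform(X_filled)
print(X_scaled)
```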
Formatting the missing datasets: an algorithm or program cannot process missing entries directly, and a string or text value may stand in for the missing part [24, 25]. These string or text values must then be converted into the desired numeric form. This step is called categorical encoding [26].
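A minimal sketch of this categorical encoding step follows; the label set and the mold-type feature are hypothetical examples, not taken from the paper's data. (The `sparse_output` argument requires a recent scikit-learn; older versions use `sparse=False`.)

```python
# Minimal sketch: convert string categories to numbers before learning.
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# Hypothetical accept/reject labels encoded as integers.
labels = ["accept", "reject", "accept", "reject"]
y = LabelEncoder().fit_transform(labels)          # -> [0, 1, 0, 1]

# One-hot encoding for a hypothetical nominal feature such as mold type.
molds = [["sand"], ["die"], ["sand"], ["investment"]]
X_mold = OneHotEncoder(sparse_output=False).fit_transform(molds)
print(y, X_mold, sep="\n")
```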
4. Learning Model
Before we train the model, we divide the dataset into two parts:
(i) Learning dataset: used to fit the algorithm [1, 27].
(ii) Evaluation set: used to execute the algorithm and measure its accuracy [27, 28].
Initially, we check the dataset; only when it is ready do we feed it into the algorithm and proceed to the learning step. We first need to introduce some basic terminology for the regression and classification methods.
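A minimal sketch of this learning/evaluation split with scikit-learn, using placeholder data in place of the paper's foundry dataset:

```python
# Minimal sketch: split data into the learning (training) set and the
# evaluation (test) set described above. Data here is a placeholder.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # placeholder feature matrix
y = rng.integers(0, 2, size=100)       # placeholder accept/reject labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)     # (80, 3) (20, 3)
```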
4.1. Bayesian Probability Theorem
According to Bayesian analysis, given any two events $A$ and $B$, the conditional probability that $A$ happened, given that $B$ happened, is
$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$
The above equation can be represented as a graph. This graphical method, a Bayesian network, encodes the realistic conditional probabilities of a given problem, and an expert system can be built from it by adding rules one by one. The expert system can then be extended with new ideas without degrading its performance.
The main strength of this theorem is that it quantifies how likely specific hypotheses are to be correct as evidence accumulates.
In our problem, we use the theorem for both micro shrinkage and mechanical properties. Several values of each attribute map to ranges of ultimate tensile strength. Using the Bayesian theorem, we can easily calculate the conditional relations between the probabilities within each range of maximum strength [29].
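To make the theorem concrete, here is a small worked example with hypothetical numbers (the defect rate and test rates below are illustrative, not the paper's data): given a radiographic flag, how likely is a casting to actually be defective?

```python
# Worked Bayes' theorem example with hypothetical rates: 5% of castings
# have micro shrinkage, the test flags 90% of defective parts, and
# falsely flags 8% of sound parts.
p_defect = 0.05                      # P(A): prior probability of a defect
p_flag_given_defect = 0.90           # P(B|A)
p_flag_given_sound = 0.08            # P(B|not A)

# Total probability P(B) of a part being flagged.
p_flag = (p_flag_given_defect * p_defect
          + p_flag_given_sound * (1 - p_defect))

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
p_defect_given_flag = p_flag_given_defect * p_defect / p_flag
print(f"P(defect | flagged) = {p_defect_given_flag:.3f}")   # ~0.372
```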
4.2. k-NN Algorithm (k-Nearest Neighbor Algorithm)
It is a simple machine learning algorithm. For an unknown event, it finds the nearest known events and predicts from them.
The algorithm operates on a set of learning data $\{(x_i, y_i)\}_{i=1}^{n}$, where each $x_i \in \mathbb{R}^d$ is a point in a $d$-dimensional space.
In this method, we need to measure the distance between two events $x$ and $x'$. For distance measurement, we can use the Euclidean distance $d(x, x') = \sqrt{\sum_{j=1}^{d}(x_j - x'_j)^2}$.
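A minimal k-NN sketch using this Euclidean distance follows; the two-dimensional data is synthetic, standing in for the foundry features.

```python
# Minimal k-NN sketch: classify a new point by the majority vote of its
# k nearest neighbors under the Euclidean distance. Synthetic data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))                 # points in a d = 2 space
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic accept/reject labels

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X, y)                                # k-NN stores the training set
print(knn.predict([[0.3, -0.1]]))            # vote of the 5 nearest points
```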
4.3. ANN Model (Artificial Neural Networks)
An ANN simulates a neuron that, given a set of inputs $x_1, \dots, x_n$, computes $y = f\!\left(\sum_{i=1}^{n} w_i x_i + b\right)$, where $f$ is the activation function, $w_i$ are the weights, and $b$ is the bias.
We train this network by backpropagation, which evaluates the activation function for every event and propagates the error back to update the weights.
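A minimal sketch of a multilayer network trained by backpropagation, matching the neuron model above; the data is synthetic and the layer size is an arbitrary choice for illustration.

```python
# Minimal sketch: a small multilayer perceptron fitted by backpropagation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))                # synthetic features
y = (X.sum(axis=1) > 0).astype(int)          # synthetic labels

mlp = MLPClassifier(hidden_layer_sizes=(16,), activation="relu",
                    max_iter=1000, random_state=0)
mlp.fit(X, y)                                # weights updated by backprop
print(mlp.score(X, y))                       # training accuracy
```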
4.4. MAE (Mean Absolute Error)
MAE is the average absolute difference between the actual and predicted values: $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert$. If a predicted value does not match the expected value, we compare it against the existing situation, because some of our data comes from the real world. Even a model that performs accurately on seen data may fail slightly when reacting to unseen data.
4.5. MSE (Mean Squared Error)
It is the simple average of the squared differences between actual and predicted values: $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$.
4.6. RMSE (Root Mean Squared Error)
It is the square root of the MSE, expressing the typical size of the prediction errors in the original units of the data.
A related measure is the coefficient of determination, $R^2$. It gives a value between 0 and 1: if it is 0, the model does not fit; if it is 1, it fits the dataset perfectly.
4.7. Root Mean Squared Logarithmic Error
It is a risk metric based on the squared logarithmic error. The root mean squared logarithmic error adds one to both the actual and predicted values before taking logarithms: $\mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\ln(1+y_i) - \ln(1+\hat{y}_i)\right)^2}$.
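The metrics of Sections 4.4–4.7 can be computed with scikit-learn as sketched below; the actual and predicted ultimate tensile strength values are hypothetical.

```python
# Minimal sketch: the error metrics above on hypothetical UTS values (MPa).
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             mean_squared_log_error, r2_score)

y_true = np.array([310.0, 295.0, 330.0, 305.0])   # hypothetical actual values
y_pred = np.array([300.0, 290.0, 340.0, 310.0])   # hypothetical predictions

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_true, y_pred)                     # 0 = no fit, 1 = perfect
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))  # uses log(1 + y)
print(mae, rmse, r2, rmsle)
```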
5. Evaluation Model
After formatting all the datasets, we execute the program with the optimized datasets and verify the whole program's accuracy. After each trial execution, we revise the program based on the output: if there is any error, any other issue, or any possible improvement to the datasets, we refine them to improve performance and make the model more user-friendly.
6. Machine Learning Algorithm Development
6.1. Supervised Algorithm
In machine learning, regression and classification algorithms are both used to detect defects in datasets, but for different purposes. Regression finds a continuous value, while classification assigns discrete values (Figure 3). Classification differentiates the dataset into classes according to category; here, we use k-NN, Bayesian classifiers, and random forest methods. Regression finds the relation between different variables in the dataset; it predicts a continuous variable using simple linear regression, decision tree regression, and random forest regression.

6.2. Random Forest
We model our problem with many sets of decisions, each designed as a tree that predicts an output as a decision. By training each tree on a bootstrap sample of the learning dataset and imposing a limit on the number of features considered at each split, the random forest reduces the variance of the individual trees built during construction. Its prediction is the mean (or majority vote) of the trees' outputs.
6.3. Extremely Randomized Trees
It is the next step beyond the random forest. It does not use bootstrap resampling; instead, it splits on thresholds chosen at random from a small number of candidates for the selected predictors. This reduces the variance of the model a bit more.
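A minimal sketch contrasting the two ensembles of Sections 6.2 and 6.3: the random forest bootstraps the training data, while extremely randomized trees skip resampling and pick split thresholds at random. The data is synthetic.

```python
# Minimal sketch: random forest vs. extremely randomized trees.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))                 # synthetic features
y = (X[:, 0] * X[:, 1] > 0).astype(int)       # synthetic nonlinear labels

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
et = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.score(X, y), et.score(X, y))         # training accuracies
```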
6.4. Extreme Gradient Boosting
It is also a highly randomized method, but it incorporates boosting techniques: each new tree gives more weight to the samples in the existing dataset that were incorrectly predicted by the previous trees. This strategy is called gradient boosting because each added model follows the gradient of the loss, reducing it as new models are added. Combining these two ideas improves the capacity of supervised algorithms.
6.5. CatBoost
It works with varied data types and with missing data. Compared with extremely randomized trees, it applies the same splitting technique across all nodes at each level of the tree. The method targets each decision at finding the correct output from the learned dataset.
6.6. Gradient Boosting
It assembles weak learners, each a weak prediction model, into a strong predictor using a sequence of decision trees. The process starts from the mean value of the target, and all trees have a fixed size. Each new tree is fitted to the negative gradient of the loss (the residuals), and the gradients are updated at each iteration.
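A minimal gradient-boosting sketch with scikit-learn follows; XGBoost and CatBoost expose analogous fit/predict interfaces. The regression target is synthetic.

```python
# Minimal sketch: gradient boosting fits each new tree to the residuals
# (the negative gradient of the squared loss) of the current ensemble.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 4))                     # synthetic features
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=300)     # noisy nonlinear target

gbr = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                max_depth=3, random_state=0)
gbr.fit(X, y)
print(gbr.score(X, y))                            # R^2 on the training data
```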
6.7. Interpretation of Model
Machine learning has ethics: a machine learning function should behave as expected, offer a transparent conceptual explanation, and make its workings visible. Explainability is very important; it includes transforming the data according to the issues at hand while still producing the best result. Here, we follow one criterion: categorize the dataset model with interpretation methods and group it according to the relevant standards.
7. Methodology
We collected foundry-related data from an industry specialized in precision parts. The methodology focuses on micro shrinkage and ultimate tensile strength. Radiographic testing can generally find micro shrinkage, and destructive testing can measure mechanical properties, so the evaluation can only start after the production stage. Acceptance or rejection is decided by user needs or by standards; according to these guidelines, micro shrinkage and mechanical properties determine whether a part is accepted or rejected. We therefore develop a methodology that predicts these outcomes using the classifier and regression methods of machine learning.
7.1. Validation System
We validate with 10-fold cross-validation: the dataset is split into ten folds, and the model is trained and evaluated ten times, each time learning on nine folds and testing on the remaining one.
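A minimal sketch of this validation scheme, assuming the 10-fold setup described above; the classifier and data are placeholders.

```python
# Minimal sketch: 10-fold cross-validation on placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))                 # placeholder features
y = (X.sum(axis=1) > 0).astype(int)           # placeholder labels

# Ten train/test splits, each fold held out once for evaluation.
scores = cross_val_score(GaussianNB(), X, y, cv=10)
print(scores.mean(), scores.std())
```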
7.1.1. Learning Model
We built several learning models, each combining datasets and algorithms. We apply each algorithm to the different models, and we use additional methods to obtain more precision.
7.1.2. Bayesian Probability Theorem
We used several learning algorithms, and we also applied this theorem through a Bayesian classifier.
7.1.3. k-NN
We execute the program for five different values of $k$.
7.1.4. ANN
We use a multilayer perceptron trained with the backpropagation algorithm.
7.2. Testing Methods
In this testing method, we measure the ratio of errors: the error prediction rate is $e = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}(\hat{y}_i \neq y_i)$, where $\hat{y}_i$ is the predicted value, $y_i$ is the actual value in the dataset, and $n$ is the size of the dataset.
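A tiny sketch of this error-rate computation on hypothetical predicted and actual labels:

```python
# Minimal sketch: error rate = fraction of misclassified samples.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # hypothetical actual labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # hypothetical predictions

n = y_true.size
error_rate = np.sum(y_pred != y_true) / n     # e = (1/n) * sum of mismatches
print(error_rate)                             # 0.25
```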
8. Results
8.1. Micro Shrinkage Experiments
The two graphs show that the ANN and the Bayesian probability theorem learn well and are the most suitable. The naive Bayes classifier, however, behaves differently from the other classification and regression methods. Note that naive Bayes is a restricted variety of Bayesian network: it assumes the input variables are independent, so when the variables are in fact dependent, the result may not be good. k-NN is nonlinear and reaches the best result, but it has no separate learning phase; the training data itself is built into it (Figure 4).

From Figure 5, we cannot get 100% accuracy in any case, but the good results achieved by the ANN method are similar to those of the Bayesian theorem. Combining them gives a better decision rule for detecting the defect. Using this, we can significantly decrease the cost and duration of the testing process.

8.2. Ultimate Tensile Strength
We work with training datasets of increasing size to analyze performance in this experiment. The training dataset sizes are 100, 200, 300, 400, 500, 600, 700, 800, and 889.
From Figures 6–8: for a training dataset of size 100, the Bayesian classifier outperformed the rest with prediction accuracy near 82%. This shows the Bayesian theorem is capable even with little data; as we increase the dataset to obtain the best result, we learn that the dataset size should be 700 to 800, roughly the amount of data that can be labeled manually. For more accuracy, we apply k-NN with different values of $k$, and it achieves the best result.



9. Conclusion
Micro shrinkage defects are induced when the casting cools down, and ultimate tensile strength determines how much loading the part withstands. Our work predicts micro shrinkage and ultimate tensile strength using machine learning classifier and regression methods, and compares the various machine learning techniques used to indicate them. Our problem was implemented in each approach without changes, and all the methods performed very well. Comparing the techniques, for micro shrinkage prediction, Bayesian networks are the best (Figures 4 and 5), and for ultimate tensile strength prediction, the ANN method's performance is outstanding (Figures 6–8). When we consider MAE and RMSE, the results are largely similar. The Bayesian theorem gives exceptionally accurate results, so in sensitive cases we propose the Bayesian theorem as the best technique; it is about 82% accurate, higher than all other algorithms. As the Bayesian method's dataset increases, the accuracy of the result improves further.
Data Availability
All the data and results presented in this paper are included in the article.
Disclosure
The publication of this research work is only for the academic purpose of Mettu University, Ethiopia.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.