Review Article

A Cutting-Edge Survey of Tribological Behavior Evaluation Using Artificial and Computational Intelligence Models

Table 7

Advantages and disadvantages of the above-discussed algorithms (Table 6) used to evaluate wear behavior in different metals.

| Advantages | Disadvantages | Ref. |
| --- | --- | --- |
| NNs are robust because the parameter (weight) values are adjusted according to performance, using an ML algorithm called gradient descent (GD). | Other algorithms, such as the SVM in [56], could be implemented and compared for better performance and results. For example, the Levenberg–Marquardt algorithm (LMA) could replace GD to improve the model. | [53] |
| The model was judged excellent and fast because of its short prediction time, and the ANN results agreed with the experimental study. The LMA was also faster than GD or Gauss–Newton (GN). | The LMA yields only a local optimum rather than the global optimum. Because the derivatives of flat functions vanish beyond a certain point, the algorithm may fail. | [54] |
| The experiment identified the most influential factors affecting the friction coefficient and the wear rate, and the ANN proved capable of predicting both. | The LMA may not be a suitable choice if the starting point is of poor quality, i.e., far from the required values. | [55] |
| NNs can capture both linear and nonlinear relationships and generalize well, giving good results. | The sigmoid activation function is not zero-centered, which can cause undesired effects during GD. Tanh is an alternative, and ReLU is suitable where speed is the priority. | [56] |
| LSSVM can avoid local minima. Comparing the relative errors of RSM and LSSVM, the graph shows LSSVM to be the more suitable model, with smaller relative errors. | SVMs underperform when the number of features per data point exceeds the number of training samples, so a considerable amount of data is required. | [57] |
| A Taguchi-coupled ANN was applied to obtain optimal input parameter values and an output with minimal deviation from the target. | The effect of each parameter on the result was imprecise, and the method provided no absolute results; it was therefore deemed unsuitable for a constantly changing process. | [58] |
| The ANN outperformed the statistical approach, with a three times lower relative mean error and higher stability across all studied conditions. | A model without units makes the equations physically incomprehensible; units must be included for the model to be physically meaningful. | [59] |
| ANFIS aims to map inputs to outputs accurately and can model systems with uncertainties and complex data distributions. | ANFIS is computationally expensive and struggles with large inputs, so it cannot be used in a big-data paradigm. | [60] |
| Taguchi methods helped determine the parameters with minimum variation, and ANOVA was used to assess how the design parameters affected the quality characteristics. | The effect of each parameter on the result was imprecise, and the method provided no absolute results; it was therefore deemed unsuitable for a constantly changing process. | [61] |
| RF showed the highest precision. Because it can be tuned and provides visual information, RF can be used directly by product engineers. | RF builds many trees, requiring substantial computational power and long training times. Overfitting to noisy data may produce unfavorable outputs. | [62] |
| Linguistic rules were applied to improve user-friendliness; for fuzzy logic, they are an advantage. | To achieve a stable mapping with Kohonen’s SOM, nearby data points must behave similarly. | [63] |
| The ANN helps estimate the coefficient of friction for parameter values larger than those covered by the pin-on-disc (POD) experiments. | Increasing the number of parameters in an MLP increases computation time and is inefficient, as such high dimensionality may be redundant. | [64] |
| An ANN and expert systems were used to detect worn tools; blending inference results with complex sensor outputs produced positive results. | The expert system fails without proper output from the ANN and then cannot classify the tool’s wear. | [65] |
| GFs can capture more complex, nonlinear, and unpredictable relationships, since their connections can skip several layers. | A network of this kind can overfit simple tasks, failing to generalize to new data. | [66] |
| ANN characteristics such as adaptability and fault tolerance are beneficial here. | If the starting point is far from the desired value, the LMA may not perform well. | [67] |
| Unknown parameters can be found with this technique when the outputs are already known. | RBF networks require a massive input space, and it is unfavorable to waste inputs when other essential tasks remain. | [68] |
| The Bayes classifier requires little training time, and its application is effortless. | The Bayesian algorithm assumes the attributes are mutually independent, which is unrealistic, as the predictors can never be fully independent. | [69] |
| The results agreed with the experimental values, so the neuro-fuzzy approach performs well. | The Sugeno FIS provides no output membership function, and the risk of losing interpretability is high. | [70] |
| A framework based on the Takagi–Sugeno neuro-fuzzy network proved to be the best of both worlds. | Massive inputs and computational expense are limitations of ANFIS, so it is not applicable to a “big data paradigm.” | [71] |
| The RNN can mimic the dynamic nature of the problem because old network values are reused, giving the ANN memory. | The gradient may fail to converge, which is difficult to handle with tanh or ReLU activation functions. | [72] |
| RF was judged best for industrial use because it required no parameter tuning, and its predictions are as good as an MLP’s. | RF builds many trees, so it requires more computational power and longer training time. Overfitting to noisy data may produce unfavorable outputs. | [73] |
| This model copes with a highly dynamic working environment, which a supervisory system cannot. | A neuro-FIS might be more applicable in such a dynamic environment. | [74] |
| An R² value of 0.989 was obtained with the ELM method, together with fewer tests and reduced testing time and cost. | More training cases could dilute the essence of the problem, since the ELM has only one hidden layer; this was not observed here because there were only 40 cases. | [75] |
| GBR was judged the best of the seven ML algorithms compared here, giving the smallest standard deviation and good accuracy. | KNN, STR, and GPR are not recommended, as they were the worst-performing algorithms here. | [76] |
| Artificial data with minimal error were generated for processing; the method used here is flexible and considered best for evaluating the tribo-parameters. | The work is limited to the general behavior of distinct reinforcement particles because of their variable metallurgical properties. | [77] |
| The ANN investigated the impact of the APS process parameters well. | Future work includes determining optimum coating properties as a function of the APS process parameters. | [78] |
| A regression coefficient of 0.99996 with the ANN was the best of all the proposed models. | Different algorithms, along with GBR and SVR, could be used to train the ANN and compare the results. | [79] |
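Several rows above (refs. [53]–[55] and [67]) contrast gradient descent with the Levenberg–Marquardt algorithm and note the LMA’s sensitivity to its starting point. The minimal sketch below illustrates that behavior using SciPy’s Levenberg–Marquardt least-squares driver; the exponential wear model, parameter values, and variable names are illustrative assumptions, not data from any cited study.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical noiseless "wear" data from an assumed exponential model.
load = np.linspace(1.0, 10.0, 40)        # e.g. applied load values
true_params = np.array([2.0, 0.3])
wear = true_params[0] * np.exp(true_params[1] * load)

def residuals(p, x, y):
    # Residual vector model(x; p) - y that least_squares minimizes.
    return p[0] * np.exp(p[1] * x) - y

# method="lm" selects Levenberg-Marquardt. A reasonable starting point
# matters: far-off initial guesses can stall or settle in a local
# optimum, the caveat the table raises for refs. [54], [55], and [67].
fit = least_squares(residuals, x0=[1.0, 0.1], args=(load, wear), method="lm")
print(fit.x)  # recovers approximately [2.0, 0.3] on this noiseless data
```

Rerunning the same call with a poor start (e.g. `x0=[100.0, -5.0]`) may converge slowly or to a worse fit, which is the starting-point limitation the table repeats.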