| Type of learning | Algorithm | Description |
| --- | --- | --- |
| Supervised learning | Decision tree | Decision trees (DTs) split data into groups according to attribute values, which makes them well suited to classification [31]. |
| | Naïve Bayes | Naïve Bayes is best applied to clustering and classifying objects [32]. |
| | Support vector machine | Based on margin calculations, the support vector machine (SVM) is best applied to classification [33]. |
| | K-nearest neighbor | In K-nearest neighbor (KNN), the learner stores the training data. When test data are introduced, they are compared against the training set: the K most similar training examples are selected, and the majority label among those K becomes the class of the test data [34]. |
| | Supervised neural network | In a supervised neural network (SNN), the predicted output is compared with the actual output, and according to the identified error the parameters are adjusted and fed back into the network [15]. |
| Unsupervised learning | K-means clustering | Using the similarity of clusters of data, the K-means (KM) clustering algorithm defines K clusters in which each cluster center is the mean of its members' values [35]. |
| | Principal component analysis | Principal component analysis (PCA) enables faster and easier computation by reducing the dimension of the data [34]. |
| | Unsupervised neural network | An unsupervised neural network (UNN) categorizes data by similarity. Since the output is unknown, the UNN considers the correlations between different inputs and groups them accordingly [15]. |
| Semisupervised learning | Self-training | Self-training first trains a classifier on labeled data, and then the unlabeled data are used as additional inputs [15]. |
| | Transductive support vector machine | An extension of the SVM, the transductive support vector machine (TSVM) considers both labeled and unlabeled data so that the margin between them is maximized [15]. |
| Ensemble learning | Boosting | Boosting distinguishes two kinds of learners, weak and strong: by combining weak learners into a strong learner, it aims to decrease bias and variance [36]. |
| | Bagging | Bagging is another ensemble method that can be applied to decrease variance and increase the accuracy and stability of ML models [37]. |
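The KNN procedure summarized in the table (find the K most similar training examples, take the majority label) can be sketched in a few lines of pure Python; all function and variable names below are illustrative, not from the cited work:

```python
import math
from collections import Counter

def knn_predict(train, test_point, k=3):
    """Classify test_point by majority vote among its k nearest
    training examples (Euclidean distance). `train` is a list of
    (feature_vector, label) pairs."""
    # Sort training examples by distance to the test point.
    by_distance = sorted(
        train,
        key=lambda pair: math.dist(pair[0], test_point),
    )
    # Take the k most similar examples and vote on the label.
    top_k_labels = [label for _, label in by_distance[:k]]
    return Counter(top_k_labels).most_common(1)[0][0]

# Example: two well-separated groups of points.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((5.0, 5.0), "b"), ((5.1, 4.9), "b"), ((4.9, 5.2), "b")]
print(knn_predict(train, (0.15, 0.1)))  # "a"
print(knn_predict(train, (5.05, 5.0)))  # "b"
```

Choosing an odd K avoids ties in two-class problems; in practice K is tuned on held-out data.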
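The K-means row describes the classic two-step loop: assign each point to its nearest center, then move each center to the mean of its cluster. A minimal sketch under the assumption of fixed initial centers and a fixed iteration count (names are illustrative):

```python
import math
from statistics import mean

def k_means(points, centers, iterations=10):
    """K-means: repeatedly assign points to their nearest center,
    then recompute each center as the mean of its cluster."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            # Assignment step: index of the nearest center.
            nearest = min(range(len(centers)),
                          key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster
        # (empty clusters keep their old center).
        centers = [
            tuple(mean(coord) for coord in zip(*cluster)) if cluster else c
            for cluster, c in zip(clusters, centers)
        ]
    return centers, clusters

points = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
centers, clusters = k_means(points, [(0.0, 0.0), (8.0, 8.0)])
print(centers)  # two centers near (0.33, 0.33) and (8.33, 8.33)
```

Real implementations also pick the initial centers (e.g. at random) and stop when assignments no longer change; both are omitted here for brevity.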
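To make the bagging row concrete: the idea is to fit many copies of a base learner on bootstrap resamples of the training set and combine their predictions by majority vote, which reduces variance. The sketch below uses a 1-nearest-neighbor base learner on 1-D data purely for illustration; all names are assumptions, not from the cited work:

```python
import random
from collections import Counter

def fit_1nn(sample):
    """Illustrative base learner: 1-nearest-neighbor on 1-D points.
    `sample` is a list of (value, label) pairs."""
    def predict(x):
        return min(sample, key=lambda pair: abs(pair[0] - x))[1]
    return predict

def bagged_predict(train, fit, x, n_models=25, seed=0):
    """Bagging: fit n_models base learners on bootstrap resamples
    of `train`, then take a majority vote over their predictions."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Bootstrap resample: draw len(train) examples with replacement.
        resample = [rng.choice(train) for _ in train]
        models.append(fit(resample))
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

train = [(0, "a"), (1, "a"), (2, "a"), (10, "b"), (11, "b"), (12, "b")]
print(bagged_predict(train, fit_1nn, 1.5))   # "a"
print(bagged_predict(train, fit_1nn, 10.5))  # "b"
```

An odd `n_models` avoids tied votes in two-class problems; for regression the vote is replaced by averaging the predictions.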