Parkinson’s Disease


Research Article | Open Access

Volume 2019 | Article ID 2513053 | 9 pages | https://doi.org/10.1155/2019/2513053

Optimization of SVM Parameters with Hybrid CS-PSO Algorithms for Parkinson’s Disease in LabVIEW Environment

Academic Editor: Jan Aasly
Received: 10 Dec 2018
Accepted: 07 Apr 2019
Published: 02 May 2019

Abstract

Optimization is the process of finding the best solution to a problem. This paper proposes a LabVIEW-based SVM model that obtains the best SVM parameters using a hybrid cuckoo search (CS) and particle swarm optimization (PSO) method. PCA is used as a preprocessor for the SVM, reducing the dimension of the data and extracting features from the training samples. The SVM parameters are then optimized for Parkinson's disease data by combining CS and PSO. The designed system is used to determine the best SVM parameters and is compared with the PSO-only and CS-only optimization methods; the hybrid CS-PSO method proves superior, achieving an accuracy of 97.4359%. The classification results obtained with the optimized SVM parameters are also measured by precision, recall, F1 score, false positive rate (FPR), false discovery rate (FDR), false negative rate (FNR), negative predictive value (NPV), and Matthews' correlation coefficient (MCC).

1. Introduction

Parkinson's disease (PD) is a neurological disorder that affects the quality of life of patients and their relatives. PD is more widespread in countries with large elderly populations. According to statistics, about one million people in the US will be affected by PD by 2020, and more than 10 million people worldwide are living with the disease. Men are 1.5 times more likely to develop PD than women [1]. The main symptoms of this disease are tremor, stiffness in the body, slowness of movement, and impaired balance. As the disease progresses, patients may experience difficulty with vital tasks such as walking, speech, swallowing, and chewing, as well as emotional changes and sleep disorders. PD symptoms develop slowly, and in some people the disease progresses faster than in others. The intensity of the symptoms varies from person to person [2]. Although there is no treatment that eliminates the disease completely, drug therapy is applied to reduce the symptoms seen in its early stages. Gait and voice analysis methods are used for diagnosis, and machine learning methods have been applied to this task [3]. Among supervised learning algorithms, the support vector machine (SVM) is based on statistical learning theory and is one of the most effective algorithms for nonlinear problems, with high generalization capability. SVM is a powerful technique that performs well in classification problems, image processing, and disease diagnosis.

Before the SVM is applied, a "feature transformation" step must be performed: the data are transformed into a new, lower-dimensional representation that expresses the same information with fewer features. This reduces the dimension of the data and removes large numbers of unimportant features. In this study, PCA is used for dimension reduction.

When the ranges of the reduced-dimension data differ too much, a normalization step is performed to bring the data onto a single scale. Normalization uses mathematical functions to map data from different scaling systems into a common system so they can be compared. In this study, the Z-score normalization method is used: the mean is subtracted from each variable value, and the difference is divided by the standard deviation.

Particle swarm optimization (PSO) is an optimization technique designed to deliver the best solution to a system. The cuckoo search algorithm (CS) is another optimization method; it has few parameters, is easy to implement, and is efficient. In this paper, both are used to optimize the SVM parameters.

1.1. Motivation

Accurate and reliable diagnosis is very important for human health. In this study, different optimization algorithms have been used to obtain the best SVM parameters for predicting Parkinson’s disease. The proposed hybrid CS-PSO-SVM model provided an accuracy of 97.4359% and is superior to the PSO-SVM and CS-SVM models.

1.2. Contribution

(i) A different working environment for researchers is proposed using LabVIEW, a visual programming language, instead of only text-based programming languages
(ii) Hybrid optimization methods are used to obtain the best SVM parameters

1.3. Sections

The paper is organized as follows. Section 2 describes the dataset. Section 3 explains PCA, SVM, PSO, and CS. Section 4 introduces the LabVIEW programming language and its features. Section 5 describes the proposed CS-PSO-SVM model. Section 6 presents the experimental results. The last section contains the discussion and conclusions.

2. Dataset and Features

The dataset used in this study was obtained from the UCI Machine Learning Repository [4]. The attributes of these data, which are related to biomedical voice measurements, are given in Table 1.


Table 1. Attributes of the biomedical voice measurements.

Number | Attribute          | Information
1      | MDVP:Fo (Hz)       | Average vocal fundamental frequency
2      | MDVP:Fhi (Hz)      | Maximum vocal fundamental frequency
3      | MDVP:Flo (Hz)      | Minimum vocal fundamental frequency
4      | MDVP:Jitter (%)    | Several measures of variation in the fundamental frequency
5      | MDVP:Jitter (Abs)  |
6      | MDVP:RAP           |
7      | MDVP:PPQ           |
8      | Jitter:DDP         |
9      | MDVP:Shimmer       | Several measures of variation in amplitude
10     | MDVP:Shimmer (dB)  |
11     | Shimmer:APQ3       |
12     | Shimmer:APQ5       |
13     | MDVP:APQ           |
14     | Shimmer:DDA        |
15     | NHR                | Two measures of the ratio of noise to tonal components in the voice
16     | HNR                |
17     | RPDE               | Two nonlinear dynamical complexity measures
18     | D2                 |
19     | DFA                | Signal fractal scaling exponent
20     | Spread1            | Three nonlinear measures of fundamental frequency variation
21     | Spread2            |
22     | PPE                |
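For readers who want to reproduce the preprocessing outside LabVIEW, the snippet below loads this dataset with Python; the file name and URL are assumptions based on the layout of the UCI repository cited in [4], not paths given in the paper.

```python
# Hedged sketch: load the UCI Parkinson's voice dataset.
# The URL and file name are assumptions based on the repository cited in [4].
import pandas as pd

URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "parkinsons/parkinsons.data")

df = pd.read_csv(URL)                              # 195 rows: name, 22 features, status
X = df.drop(columns=["name", "status"]).values     # the 22 attributes of Table 1
y = df["status"].values                            # 1 = Parkinson's disease, 0 = healthy
print(X.shape)                                     # (195, 22)
```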

3. Description of Used Techniques

Dimension reduction and normalization procedures were performed to extract features from the data and bring them onto a single scale. Optimization methods were then applied to the SVM. Figure 1 shows a diagram of the techniques used.

3.1. Principal Component Analysis (PCA)

PCA is a widely used technique for removing insignificant features from data. The idea underlying PCA is to represent the data by projecting them onto orthogonal axes, expressing them in a small number of linear combinations. In other words, PCA reduces the data dimension to extract features. Figure 2 shows the dimension reduction program implemented in LabVIEW.
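As an illustration of this step, the sketch below performs the same kind of dimension reduction with scikit-learn instead of the LabVIEW block of Figure 2; the retained-variance threshold is an illustrative assumption, since the paper does not report the number of components.

```python
# Minimal PCA dimension-reduction sketch; the 95% variance threshold is an
# illustrative assumption, not a value reported in the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(195, 22))        # placeholder for the 22 voice features

pca = PCA(n_components=0.95)          # keep enough axes for 95% of the variance
X_reduced = pca.fit_transform(X)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```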

3.2. Normalization with Z-Score

Statistical normalization is performed to treat the data on a single scale when there are large differences between them. A further objective is to use mathematical functions to translate data from different systems into a common system and make them comparable. Z-score normalization expresses each value as its distance from the mean in units of the standard deviation; in other words, the mean and standard deviation are taken into account. The standard deviation, mean, and Z-score are calculated by the following equations, respectively:

$\sigma = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}, \qquad \mu = \dfrac{1}{N}\sum_{i=1}^{N} x_i, \qquad z_i = \dfrac{x_i - \mu}{\sigma}.$
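A minimal Python sketch of these three equations, applied column-wise to a feature matrix:

```python
# Z-score normalization as defined above: each feature column is centered on
# its mean and scaled by its standard deviation.
import numpy as np

def z_score(X):
    mu = X.mean(axis=0)               # column means
    sigma = X.std(axis=0)             # column standard deviations
    return (X - mu) / sigma

X = np.array([[1.0, 200.0], [2.0, 220.0], [3.0, 240.0]])
print(z_score(X))                     # each column now has mean 0 and std 1
```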

3.3. Support Vector Machine (SVM)

SVM is an important machine learning (ML) tool derived from statistical learning theory [5–8]. SVM is one of the most widely used algorithms for classification and regression tasks because of its high performance and generalization capability [9–14]. The main idea behind SVM is to obtain a linear discriminant function that separates the classes with a hyperplane. SVM finds the hyperplane that maximizes the margin between the samples and the class boundary for linearly separable data; for real-world applications, as shown in Figure 3, nonlinear transformations with kernel functions are essential to map the datasets into spaces where they can be linearly separated and classified [15, 16]. The mapping of the input data to the feature space is shown in Figure 4.

Theoretically, any linearly separable dataset can be classified correctly by an SVM. A linearly separable dataset of n training samples with two classes can be expressed as

$\{(x_i, y_i)\}_{i=1}^{n}, \quad x_i \in \mathbb{R}^d, \; y_i \in \{-1, +1\}.$

These data can be separated from each other by the separating hyperplane

$w \cdot x + b = 0.$

The following conditions are used for correct classification:

$w \cdot x_i + b \ge +1$ for $y_i = +1$, and $w \cdot x_i + b \le -1$ for $y_i = -1$,

which can be written compactly as $y_i \left(w \cdot x_i + b\right) \ge 1$.

The appropriate values of $w$ and $b$ are calculated to find the optimal separating hyperplane. Real-world data samples usually cannot be separated linearly, so a feature mapping function must be defined; this is done through the kernel function. The basic idea of kernel methods is to apply a nonlinear mapping $\Phi$ to the input space first and then apply a linear algorithm to the new input. The training phase depends on the data in this space only through the kernel function $K(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j)$, that is, through inner products. The decision function is

$f(x) = \operatorname{sign}\left( \sum_{i=1}^{l_s} \alpha_i y_i K(x_i, x) + b \right).$

In this function, the $\alpha_i$ values are positive Lagrange multipliers, $l_s$ is the number of support vectors, and the $x_i$ are the support vectors [17].

Polynomial, sigmoid, and Gaussian kernel functions are commonly used to find the optimal hyperplane for linearly nonseparable data.

(i) Gaussian kernel: $K(x, y) = \exp\left(-\dfrac{\|x - y\|^2}{2\sigma^2}\right)$
(ii) Polynomial kernel: $K(x, y) = (x \cdot y + 1)^n$

Here $x \cdot y$ is the dot product of $x$ and $y$, and its nth power yields a polynomial kernel of order n. The infinite sum containing the polynomial kernels of every order, from the 0th upward, is the Gaussian kernel; the Gaussian kernel is therefore a special kernel and shows good performance.

To classify with SVM, the first step is to select a kernel function and related parameters that allow linear separation of the data. For the classification of the data, the following dual problem is obtained:

$\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j).$

An appropriate value of the parameter $C$ is chosen, and $\alpha$ is found subject to

$\sum_{i=1}^{n} \alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C.$

Under the condition $0 \le \alpha_i \le C$, the support vectors $V$ are the training samples with nonzero $\alpha_i$.
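To make the training step concrete, the following is a hedged scikit-learn sketch of Gaussian-kernel SVM classification; the C and gamma values are placeholders for the parameters the optimizers tune in later sections, and the synthetic data merely mimics the dataset's shape.

```python
# Kernel SVM sketch; C and gamma here are placeholder values, not the
# optimized parameters reported in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=195, n_features=22, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma=0.05)   # Gaussian kernel exp(-gamma*||x-y||^2)
print(cross_val_score(clf, X, y, cv=5).mean())
```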

3.4. Particle Swarm Optimization (PSO)

Optimization is the process of achieving the best solution to a problem. Because the methods used for optimization problems defined by mathematical functions are not flexible and often cannot reach the desired result, new methods inspired by natural phenomena have been developed, and PSO is the most common of these algorithms. Kennedy and Eberhart developed PSO in 1995, inspired by fish and insects moving in swarms [18, 19]. The random movements of animals that move in flocks to meet their vital needs are influenced by the other members of the flock, which makes the flock's goal easier to reach. The algorithm determines the particle with the best position in the swarm, and the other particles move in that direction. Each particle aims to improve its next position based on its own past experience and on the best-positioned individual in the swarm. PSO is an evolutionary algorithm like the genetic algorithm (GA), but it is faster than GA because it has no operators such as crossover and mutation.

The basic PSO algorithm is as follows: every individual in the swarm is a candidate solution, and every individual is represented by a D-dimensional position vector

$X_i = (x_{i1}, x_{i2}, \ldots, x_{iD}).$

The speed of each individual in the swarm is generated randomly; each individual has a velocity vector of the same dimension,

$V_i = (v_{i1}, v_{i2}, \ldots, v_{iD}).$

The best local and global positions are then determined. The best position found so far by each individual is recorded as

$P_i = (p_{i1}, p_{i2}, \ldots, p_{iD}).$

Each individual in the PSO adjusts its position using its own best position (pbest) and the global best position (gbest). The velocity and position of the individuals are updated by

$v_{id}^{t+1} = w\, v_{id}^{t} + c_1 r_1 \left(p_{id} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{t}\right),$
$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}.$

Here, $c_1$ and $c_2$ are the cognitive and social acceleration parameters, $r_1$ and $r_2$ are random numbers in $[0, 1]$, and $w$ is the inertia weight. Figure 5 shows the pseudocode of the PSO.
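The update equations above translate directly into code. The following compact Python sketch works under stated assumptions: the inertia weight w and the search bounds are illustrative, while the swarm size, iteration count, c1, and c2 match Table 4.

```python
# Compact PSO sketch following the velocity/position updates above.
# w and the bounds [lo, hi] are illustrative assumptions.
import numpy as np

def pso(f, dim, n=18, iters=120, w=0.7, c1=1.3, c2=1.87, lo=-5, hi=5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = rng.uniform(-1, 1, (n, dim))           # particle velocities
    pbest = x.copy()                           # personal best positions
    pval = np.apply_along_axis(f, 1, x)        # personal best values
    gbest = pbest[pval.argmin()].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                    # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

best, score = pso(lambda z: np.sum(z**2), dim=2)   # minimize a toy sphere function
print(best, score)
```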

3.5. Cuckoo Search Algorithm (CS)

CS is a next-generation optimization method based on the brood-parasitic behavior of cuckoo birds [21–23]. The most distinctive feature of cuckoos, and the one on which the optimization algorithm is built, is their aggressive breeding strategy. If the host bird discovers that the eggs in its nest are not its own, it either throws them out of the nest or abandons the nest. If the eggs are not recognized, the host bird incubates them, and brood parasitism arises. In the CS algorithm, each egg in a nest represents a candidate solution, and each cuckoo egg represents a new solution. The aim is to use new and better solutions to replace the poor solutions in the nests. As with any optimization method, CS has some rules: each cuckoo lays only one egg at a time in a randomly selected nest, and a nest with a high-quality egg is carried over to the next generation.

A dropped egg can be discovered by the host with probability $P_a$, which lies in the range $[0, 1]$; the number of host nests is fixed. Heuristic optimization algorithms perform global and local searches while approaching the best solution, and CS combines a global random walk with a local random walk.

The global random walk is performed with a Levy flight:

$x_i^{t+1} = x_i^{t} + \alpha \oplus \mathrm{Levy}(s, \lambda).$

Here, the step produced by the Levy flight is weighted by the variable $\alpha$ and added to the old position to obtain the new location; $s$ and $\lambda$ are control parameters.

The local random walk is performed as

$x_i^{t+1} = x_i^{t} + \alpha\, s \otimes H(P_a - \epsilon) \otimes \left(x_j^{t} - x_k^{t}\right),$

where $x_j^{t}$ and $x_k^{t}$ are random permutations of the solutions, $H(u)$ is the Heaviside function, $s$ is the step length, $\epsilon$ is a uniform random number, and $\alpha$ is a random real number from the Gaussian distribution.
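The two walks can be sketched in Python as follows. The Levy-step generator (Mantegna's algorithm) and the way the discovery probability Pa gates the local walk are standard-CS assumptions rather than details given in the paper.

```python
# Hedged CS sketch: a Levy-flight global walk plus a permutation-based local
# walk. Mantegna's algorithm for the Levy step is an assumption.
import numpy as np
from math import gamma as G

def levy_step(dim, lam=1.5, rng=None):
    # Mantegna's algorithm for Levy-stable step lengths with exponent lam
    rng = rng or np.random.default_rng(0)
    sigma = (G(1 + lam) * np.sin(np.pi * lam / 2) /
             (G((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / lam)

def cs_iteration(nests, alpha=0.01, pa=0.262, rng=None):
    rng = rng or np.random.default_rng(0)
    n, dim = nests.shape
    proposal = nests + alpha * levy_step(dim, rng=rng)     # global Levy-flight walk
    # local walk: step built from two random permutations of the nests
    step = rng.random() * (nests[rng.permutation(n)] - nests[rng.permutation(n)])
    discovered = rng.random((n, 1)) < pa                   # host finds the egg w.p. Pa
    return np.where(discovered, nests + step, proposal)
```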

4. LabVIEW

Advancing technology called for graphical programming languages alongside text-based ones, making visual programming possible without writing code. With National Instruments' LabVIEW, a model can be programmed graphically from ready-made functions, with no need to write text code. LabVIEW (Laboratory Virtual Instrument Engineering Workbench) makes it possible to build programs quickly and avoid lost time. LabVIEW generally uses a dataflow model instead of text code and supports multiple parallel processes [24].

LabVIEW consists of two components: the front panel, which is the user interface, and the block diagram, in which the graphical code is shown; they are shown in Figures 6 and 7, respectively. Inputs connected to the virtual instrument on the front panel are called controls, while the outputs are called indicators. The controls palette is used on the front panel, and the functions palette is used in the block diagram. The controls palette gives access to various controls and indicators and is displayed only on the front panel; in the same way, the functions palette gives access to function blocks for designing a system and is displayed only in the block diagram. With LabVIEW, a subVI can be created just like a VI, and a subVI can also be created from code already inside another VI. A created subVI, with its customized icon and configured terminals, can be reused within other VIs. SubVIs keep a program from appearing too crowded; indeed, subVIs were used in this study.

5. The Proposed Methods

In this study, the PSO-SVM, CS-SVM, and CS-PSO-SVM methods are compared with each other. The hybrid program created in the LabVIEW environment is shown in Figure 8. The optimization algorithms are used to find the best SVM parameters, and a subVI is created for each optimization method. SubVIs are used in LabVIEW much like subroutines in other programming languages: they simplify the program and keep it from appearing crowded.
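The paper does not spell out how CS and PSO are coupled internally, so the sketch below is speculative: it alternates a PSO velocity update with a CS Levy-flight perturbation (reusing levy_step from the CS sketch above) while scoring candidate (C, gamma) pairs by cross-validated SVM accuracy. All function names here are illustrative.

```python
# Speculative hybrid CS-PSO search over SVM parameters (C, gamma);
# reuses levy_step() from the CS sketch above. Not the paper's exact scheme.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def svm_score(params, X, y):
    C, gamma = np.exp(params)                  # search in log space, keep positive
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

def hybrid_search(X, y, n=18, iters=30, w=0.7, c1=1.3, c2=1.87, alpha=0.1):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-3, 3, (n, 2))           # candidate [log C, log gamma]
    vel = np.zeros((n, 2))
    score = np.array([svm_score(p, X, y) for p in pos])
    pbest, pscore = pos.copy(), score.copy()
    gbest = pbest[pscore.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel + alpha * levy_step(2, rng=rng)   # PSO move plus a CS kick
        score = np.array([svm_score(p, X, y) for p in pos])
        better = score > pscore                # update personal bests
        pbest[better], pscore[better] = pos[better], score[better]
        gbest = pbest[pscore.argmax()].copy()
    return np.exp(gbest), pscore.max()         # best (C, gamma) and its CV accuracy
```

Searching in log space keeps C and gamma positive and lets the swarm cover several orders of magnitude with small steps.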

The classification performance results obtained by using SVM parameters determined by optimization are measured by accuracy, precision, recall, F1 score, false positive rate (FPR), false discovery rate (FDR), false negative rate (FNR), negative predictive value (NPV), and Matthews’ correlation coefficient (MCC) parameters. These parameters are obtained by the confusion matrix. A confusion matrix is shown in Table 2.


Table 2. Confusion matrix.

Prediction | Actual positive | Actual negative
Positive   | TP              | FP
Negative   | FN              | TN

5.1. Accuracy

Accuracy is the ratio of correctly classified samples:

$\text{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}.$

5.2. Precision

Precision shows the success rate among the positively predicted cases:

$\text{Precision} = \dfrac{TP}{TP + FP}.$

5.3. Recall

Recall shows how well the positive cases are predicted:

$\text{Recall} = \dfrac{TP}{TP + FN}.$

5.4. F1 Score

F1 score is the harmonic mean of precision and recall:

$F1 = \dfrac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}.$

5.5. False Positive Rate (FPR)

FPR, sometimes called the fall-out, is the ratio of misclassified negative events (FP) to all actual negative events:

$\text{FPR} = \dfrac{FP}{FP + TN}.$

5.6. False Discovery Rate (FDR)

FDR is the expected proportion of false predictions among all positive predictions:

$\text{FDR} = \dfrac{FP}{FP + TP}.$

5.7. False Negative Rate (FNR)

FNR, sometimes called the miss rate, is the proportion of individuals with a known positive condition for which the test result is negative:

$\text{FNR} = \dfrac{FN}{FN + TP}.$

5.8. Negative Predictive Value (NPV)

NPV is the proportion of individuals with a negative test result whose true condition is negative:

$\text{NPV} = \dfrac{TN}{TN + FN}.$

5.9. Matthews’ Correlation Coefficient

MCC is a reliable metric for assessing the quality of binary classifiers because it takes TP, TN, FN, and FP into account. MCC is in fact a correlation coefficient between the actual and predicted labels and takes a value between −1 and +1: +1 means a perfect prediction, 0 means the classifier is no better than random guessing, and −1 means total disagreement between the actual and predicted values [25]:

$\text{MCC} = \dfrac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}.$
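The formulas above can be checked numerically. In the snippet below, the confusion-matrix counts are not reported in the paper; they were chosen so that the derived metrics reproduce the CS-PSO-SVM row of Table 5 (TP = 10, FP = 0, FN = 1, TN = 28 over 39 test samples), which serves as a consistency check.

```python
# Metrics from confusion-matrix counts; the counts are inferred for
# illustration, not reported in the paper.
import math

TP, FP, FN, TN = 10, 0, 1, 28

accuracy  = (TP + TN) / (TP + TN + FP + FN)        # 38/39 = 0.974359
precision = TP / (TP + FP)                         # 1.0
recall    = TP / (TP + FN)                         # 0.9091
f1        = 2 * precision * recall / (precision + recall)   # 0.9524
fpr = FP / (FP + TN)                               # 0
fdr = FP / (FP + TP)                               # 0
fnr = FN / (FN + TP)                               # 0.0909
npv = TN / (TN + FN)                               # 0.9655
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))  # 0.9369
print(accuracy, f1, mcc)
```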

6. Experimental Results

The dataset used in the study is obtained from [4] and has 195 instances and 22 attributes. It is composed of a range of biomedical voice measurements from 31 people. The dataset information is listed in Table 3. Table 4 lists the appropriate values of the parameters for PSO-SVM, CS-SVM, and CS-PSO-SVM, and the comparison of the models is given in Table 5.


Table 3. Dataset information.

Number of instances | Number of attributes | Normal | PD
195                 | 22                   | 8      | 23


Table 4. Appropriate parameter values for each method.

Method                  | Population size | Iterations | c1  | c2
PSO-SVM                 | 18              | 120        | 1.3 | 1.87
CS-SVM (Pa = 0.262)     | 18              | 120        |     |
CS-PSO-SVM (Pa = 0.262) | 18              | 120        | 1.3 | 1.87


Table 5. Comparison of the models.

Method                  | Accuracy (%) | Precision (%) | Recall (%) | F1 measure (%) | FPR    | FDR    | FNR    | NPV    | MCC
PSO-SVM                 | 82.05        | 88.89         | 57.14      | 69.57          | 0.04   | 0.1111 | 0.4286 | 0.80   | 0.6051
CS-SVM                  | 92.3077      | 83.33         | 90.91      | 86.96          | 0.0714 | 0.1667 | 0.0909 | 0.9630 | 0.8167
CS-PSO-SVM (Pa = 0.262) | 97.4359      | 100           | 90.91      | 95.24          | 0      | 0      | 0.0909 | 0.9655 | 0.9369

The Pa parameter in Table 4 is usually selected in the range [0, 1]. In this study, the program was run for different Pa parameters, and more successful results were obtained for Pa = 0.262.

As can be seen in Table 5, the performance of the proposed hybrid model is superior to that of the others.

The population average fitness value for the used dataset is shown in Figure 9 for each method. Also, error rates are shown in Figure 10.

7. Discussion and Conclusion

Accurate and reliable diagnosis is very important for human health. In this paper, different optimization algorithms have been used to optimize the SVM parameters. The aim is to find the best SVM parameters with the hybrid CS-PSO optimization method and obtain the best classification accuracy. To analyze the performance of the methods, the programs were run several times, and the results are presented in tables. Table 4 shows the appropriate algorithm parameters of the different methods used for classification. The performance of the models is shown in Table 5; the results show that the best result is obtained by the hybrid model (CS-PSO).

The proposed model achieves a classification accuracy of 97.4359%, compared with 92.3077% for CS-SVM and 82.05% for PSO-SVM. The MCC takes into account all entries of the confusion matrix, so its high value confirms that the proposed classification method is successful. As shown in Table 5, the highest MCC value was obtained with the proposed hybrid algorithm.

The results of this study show that hybrid models, created by combining the strengths of different optimization algorithms, can be used to find the parameters of classification methods and to increase the success rate of the model.

In the increasingly widespread LabVIEW environment, subprograms can be created and results obtained quickly.

Data Availability

The data that support the findings of this study are available from the authors upon reasonable request.

Conflicts of Interest

The author declares that there are no conflicts of interest.

References

  1. https://parkinson.org/Understanding-Parkinsons/Causes-and-Statistics/Statistics.
  2. https://www.worldpdcoalition.org/page/AboutParkinson.
  3. X. Liu and H. Fu, "PSO-based support vector machine with cuckoo search technique for clinical disease diagnoses," Scientific World Journal, vol. 2014, Article ID 548483, 7 pages, 2014.
  4. https://ftp.ics.uci.edu/pub/machine-learning-databases/.
  5. T. Frieß, N. Cristianini, and C. Campbell, "The kernel-adatron: a fast and simple learning procedure for support vector machines," in Proceedings of the Fifteenth International Conference on Machine Learning (ICML'98), pp. 188–196, Madison, WI, USA, July 1998.
  6. C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  7. B. Gu, V. S. Sheng, Z. Wang, D. Ho, S. Osman, and S. Li, "Incremental learning for support vector regression," Neural Networks, vol. 67, pp. 140–150, 2015.
  8. B. Gu and V. S. Sheng, "A robust regularization path algorithm for support vector classification," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 5, pp. 1241–1248, 2017.
  9. U. R. Acharya, S. V. Sree, A. P. C. Alvin, and J. S. Suri, "Use of principal component analysis for automatic classification of epileptic EEG activities in wavelet framework," Expert Systems with Applications, vol. 39, no. 10, pp. 9072–9078, 2012.
  10. H. Drucker, C. Burges, L. Kaufman, A. Smola, and V. Vapnik, "Support vector regression machines," in Advances in Neural Information Processing Systems, M. Mozer, M. Jordan, and T. Petsche, Eds., vol. 9, pp. 155–161, MIT Press, Cambridge, MA, USA, 1997.
  11. J. Tian, Q. Hu, X. Ma, and M. Han, "An improved KPCA/GA-SVM classification model for plant leaf disease recognition," Journal of Computational Information Systems, vol. 8, no. 18, pp. 7737–7745, 2012.
  12. T. Fletcher, "Support vector machines explained," 2014, https://www.tristanfletcher.co.uk/SVM%20Explained.pdf.
  13. A. Reyaz-Ahmed, Y.-Q. Zhang, and R. W. Harrison, "Granular decision tree and evolutionary neural SVM for protein secondary structure prediction," International Journal of Computational Intelligence Systems, vol. 2, no. 4, pp. 343–352, 2009.
  14. A. An, C. Angulo, and Y. Sun, "Support vector regression with interval-input interval-output," International Journal of Computational Intelligence Systems, vol. 1, no. 4, pp. 299–303, 2008.
  15. M. Xi, J. Sun, L. Liu, F. Fan, and X. Wu, "Cancer feature selection and classification using a binary quantum-behaved particle swarm optimization and support vector machine," Computational and Mathematical Methods in Medicine, vol. 2016, Article ID 3572705, 9 pages, 2016.
  16. B. E. Boser, I. M. Guyon, and V. N. Vapnik, "A training algorithm for optimal margin classifiers," in Proceedings of the Fifth Annual Workshop on Computational Learning Theory (COLT'92), pp. 144–152, ACM, Pittsburgh, PA, USA, July 1992.
  17. Y. Saeys, I. Inza, and P. Larrañaga, "A review of feature selection techniques in bioinformatics," Bioinformatics, vol. 23, no. 19, pp. 2507–2517, 2007.
  18. R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), pp. 39–43, Nagoya, Japan, October 1995.
  19. J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, November 1995.
  20. F. Serbet, T. Kaya, and M. T. Ozdemir, "Design of digital IIR filter using particle swarm optimization," in Proceedings of the 2017 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 202–204, Opatija, Croatia, May 2017.
  21. X.-S. Yang, Nature-Inspired Optimization Algorithms, Elsevier, Waltham, MA, USA, 1st edition, 2014.
  22. X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), pp. 210–214, Coimbatore, India, December 2009.
  23. A. S. Joshi, O. Kulkarni, G. M. Kakandikar, and V. M. Nandedkar, "Cuckoo Search optimization—a review," Materials Today: Proceedings, vol. 4, no. 8, pp. 7262–7269, 2017.
  24. A. S. Kehtarnavaz and O. Mahotra, Digital Signal Processing Laboratory: LabVIEW-Based FPGA Implementation, BrownWalker Press, Boca Raton, FL, USA, 2010.
  25. M. Behroozi and A. Sami, "A multiple-classifier framework for Parkinson's disease detection based on various vocal tests," International Journal of Telemedicine and Applications, vol. 2016, Article ID 6837498, 9 pages, 2016.

Copyright © 2019 Duygu Kaya. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

