Research Article | Open Access

Zhe Zhang, Xiyu Liu, Lin Wang, "Spectral Clustering Algorithm Based on Improved Gaussian Kernel Function and Beetle Antennae Search with Damping Factor", Computational Intelligence and Neuroscience, vol. 2020, Article ID 1648573, 9 pages, 2020. https://doi.org/10.1155/2020/1648573

Spectral Clustering Algorithm Based on Improved Gaussian Kernel Function and Beetle Antennae Search with Damping Factor

Academic Editor: Anastasios D. Doulamis
Received: 14 Jan 2020
Accepted: 02 May 2020
Published: 29 May 2020

Abstract

There are two problems in the traditional spectral clustering algorithm. First, when the Gaussian kernel function is used to construct the similarity matrix, different scale parameters lead to different clustering results. Second, the K-means algorithm is often used in the clustering stage of spectral clustering; it initializes the cluster centers randomly, which makes the results unstable. In this paper, an improved spectral clustering algorithm is proposed to solve these two problems. For constructing the similarity matrix, we propose an improved Gaussian kernel function that is based on the distance information of some nearest neighbors and can adaptively select the scale parameters. In the clustering stage, a beetle antennae search algorithm with a damping factor is proposed to perform the clustering and overcome the instability of the clustering results. In the experiments, we use four artificial data sets and seven UCI data sets to verify the performance of our algorithm. In addition, four images from the BSDS500 image data set are segmented, and the results show that our algorithm outperforms the comparison algorithms in image segmentation.

1. Introduction

Clustering analysis is an important research problem in the field of data mining. The purpose of clustering is to divide a data set into different clusters according to the intrinsic structure of and relationships between the data, so that the similarity between data points within the same cluster is high and the similarity between data points in different clusters is low. The main clustering methods include partitioning-based clustering, hierarchical clustering, density-based clustering, grid-based clustering, and graph-theory-based clustering. Different clustering algorithms are applied in different fields, such as image segmentation [1-4], text clustering [5, 6], and community division [7-9].

Spectral clustering is a clustering algorithm based on graph theory. Using spectral graph partition theory [10], the clustering problem on a data set is transformed into a graph partition problem. In spectral clustering, each data point is regarded as a vertex of the graph, and the similarity between data points is regarded as the weight of the connecting edge. The graph is divided so that the sum of the edge weights within each subgraph is as high as possible and the sum of the edge weights between different subgraphs is as low as possible.

In 1973, Donath and Hoffman [10] first proposed the concept of graph partition based on the adjacency matrix, marking the formal birth of spectral clustering. In the same year, Fiedler [11] found that the two-way partition of an undirected graph is closely related to the eigenvector corresponding to the second-smallest eigenvalue of the corresponding Laplacian matrix, which provided a new way to solve the graph partition problem. In 2000, Shi and Malik [12] put forward the normalized cut objective function, also known as the N-cut criterion, based on spectral theory. In 2001, Ding et al. [13] proposed the min-max cut criterion based on N-cut, which balances the two requirements of minimum division loss and maximum number of vertices per subgraph, making the division more inclined to balanced cut sets and avoiding small subgraphs with only a few vertices. In 2001, Ng, Jordan, and Weiss [14] proposed the NJW algorithm. Unlike two-way division, the algorithm is based on k-way division, and it is the most widely used spectral clustering algorithm so far. Despite the good development of spectral clustering, some problems remain in the algorithm itself, such as how to select the scale parameter in the Gaussian kernel function. In 2004, Zelnik-Manor and Perona [15] showed that the selection of the scale parameter affects the clustering results. To address this problem, Zhang et al. [16] proposed a construction method for the similarity matrix based on local density, and Nataliani and Yang [17] proposed a powered Gaussian kernel function.

The beetle antennae search algorithm (BAS) is an optimization algorithm inspired by the foraging behavior of beetles, proposed by Jiang and Li [18] in 2017. By simulating the sensing function of the beetle's antennae and the mechanism of its random walk, an optimization mechanism similar to the beetle's foraging process is realized. The beetle determines its moving direction according to the smell of food: when the smell at the left antenna is stronger, it moves to the left; otherwise, it moves to the right. Through the random orientation mechanism and a variable step size, the beetle can search globally. Compared with other intelligent algorithms, BAS needs neither gradient information nor the specific form of the objective function, converges quickly, and has low requirements on parameters, so it has been applied in several fields. Wang and Liu [19] combined a BP neural network with the BAS algorithm to predict storm surge disaster losses. Chen et al. [20] used a particle swarm optimization algorithm based on BAS to solve a portfolio model. Wang and Chen [21] proposed a bee swarm antenna search algorithm (BSAS).

The main contributions of this paper are as follows: (1) A construction method for the similarity matrix is proposed, which uses the distance information of some nearest neighbors to define the scale parameter σ and thereby removes the influence of a manually specified scale parameter σ on the results. (2) In the clustering stage, we use the proposed beetle antennae search algorithm with damping factor (DBAS) to perform the clustering. This intelligent optimization algorithm overcomes the impact that randomly initialized cluster centers have on the results when K-means is used in traditional spectral clustering, and the damping factor suppresses oscillation in the iterative process, improving the stability of the algorithm.

The rest of this paper is organized as follows. Section 2 reviews spectral clustering and the beetle antennae search algorithm. In Section 3, an improved spectral clustering algorithm based on the distance information of some nearest neighbors and the beetle antennae search algorithm with damping factor is proposed. Section 4 demonstrates the performance of the algorithm through experimental analysis. The conclusion is presented in Section 5.

2. Spectral Clustering and Beetle Antennae Search Algorithm

2.1. Spectral Clustering

The spectral clustering algorithm clusters using the eigenvectors of the Laplacian matrix associated with the data set. First, an undirected graph is constructed from the data points: each vertex of the graph corresponds to a data point, and the weight on each edge is the similarity between the two data points. In general, the Gaussian kernel function is used to construct the similarity matrix W. From W, a degree matrix D is obtained, a diagonal matrix whose main diagonal element D_ii equals the sum of the i-th row of W. There are usually three ways to construct the Laplacian matrix from L = D − W: (1) the unnormalized Laplacian matrix L, (2) the normalized symmetric Laplacian matrix L_sym = D^(−1/2) L D^(−1/2), and (3) the normalized asymmetric Laplacian matrix L_rw = D^(−1) L. The eigenvectors u_1, …, u_k corresponding to the first k eigenvalues of the Laplacian matrix are computed and stacked into a matrix U = [u_1, …, u_k]. A new feature matrix T is then obtained by normalizing the rows of U. Each row of T is regarded as a sample and clustered to obtain a group of clusters C_1, …, C_k. The NJW algorithm [14] is the most commonly used spectral clustering algorithm; its basic steps are shown in Algorithm 1.
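The embedding steps just described (degree matrix, normalized symmetric Laplacian, first k eigenvectors, row normalization) can be sketched as follows; running K-means on the rows of the returned matrix would complete the NJW pipeline:

```python
import numpy as np

def spectral_embedding(W, k):
    """NJW-style embedding: build L_sym from the similarity matrix W,
    take the eigenvectors of its k smallest eigenvalues, and normalize
    each row to unit length."""
    d = W.sum(axis=1)                                # degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)                  # eigenvalues ascending
    U = vecs[:, :k]                                  # first k eigenvectors
    return U / np.linalg.norm(U, axis=1, keepdims=True)
```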

2.2. Beetle Antennae Search Algorithm (BAS)

Based on the beetle's foraging principle, three simplifying assumptions are made: (1) the left and right antennae are located on either side of the individual; (2) the ratio of the step length of each move to the distance between the two antennae is a fixed constant; (3) after each move, the orientation of the head is random. An optimization model can then be built (the beetle is simplified to a point individual):
(1) For an optimization problem in n-dimensional space, let x denote the centroid coordinate of the individual, x_l the coordinates of its left antenna, and x_r the coordinates of its right antenna, with d the distance between the two antennae. Since the orientation of the individual is random after each move, the direction of the vector pointing from the right antenna to the left antenna is also random; it can be expressed by a normalized random vector b. Then x_l = x + (d/2) b and x_r = x − (d/2) b.
(2) For the minimization objective function f, evaluate f(x_l) and f(x_r). If f(x_l) is less than f(x_r), the individual moves one step δ in the direction of the left antenna; otherwise, it moves the step δ in the direction of the right antenna.
(3) Repeat steps 1 and 2 until the maximum number of iterations is reached or the individual does not change for M consecutive iterations.
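The three steps above can be sketched as a short loop. The decaying step size and the fixed step-to-antenna ratio mirror the parameter style used in Section 4.1, but the exact update details are an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def bas_minimize(f, x0, step=1.0, eta=0.95, ratio=5, n_iter=100):
    """Beetle Antennae Search sketch: pick a random heading, sense the
    objective at both antennae, step toward the better one, and shrink
    the step size each iteration. Returns the best position seen."""
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for _ in range(n_iter):
        b = np.random.randn(x.size)
        b /= np.linalg.norm(b)                       # random unit heading
        d = step / ratio                             # antenna half-length
        # move toward the antenna with the smaller objective value
        x = x + step * b * np.sign(f(x - d * b) - f(x + d * b))
        if f(x) < f_best:
            x_best, f_best = x.copy(), f(x)
        step *= eta                                  # decaying step size
    return x_best
```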

3. Improved Spectral Clustering Algorithm

In this section, we improve Gaussian kernel function and BAS algorithm, respectively. After using the new Gaussian kernel function to construct the similarity matrix, we use the spectral clustering algorithm to get a new feature matrix, and then, we use the improved BAS algorithm to cluster.

3.1. An Improved Gaussian Kernel Function

In traditional spectral clustering, the similarity matrix is usually constructed with the Gaussian kernel function shown in Algorithm 1, where σ is the scale parameter; in general, the scale parameter is selected manually. In 2004, Zelnik-Manor and Perona [15] showed that the selection of the scale parameter affects the clustering results. To solve this problem, this paper proposes a method of constructing the similarity matrix based on the distance information of some nearest neighbors:

W_ij = exp(−d(x_i, x_j)² / (σ_i σ_j)),   (2)

where σ_i is the mean distance from point x_i to its K nearest points. K is the ratio of the total number of samples to the square of the number of clusters, K = n/k², where n is the total number of samples and k is the number of clusters.
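A minimal sketch of this similarity construction follows; note that whether the squared distance is divided by σ_iσ_j or by 2σ_iσ_j is our assumption, since only the neighborhood rule is described in the text:

```python
import numpy as np

def adaptive_similarity(X, n_clusters):
    """Similarity matrix with per-point scales: sigma_i is the mean
    distance from x_i to its K nearest neighbours, K = n / k^2."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    K = max(1, round(n / n_clusters ** 2))       # neighbourhood size
    # mean distance to the K nearest neighbours, excluding the point itself
    sigma = np.sort(D, axis=1)[:, 1:K + 1].mean(axis=1)
    W = np.exp(-D ** 2 / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(W, 0.0)
    return W
```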

Step 1: use the Gaussian kernel function to construct the similarity matrix W, where W_ij = exp(−‖x_i − x_j‖² / (2σ²)) and W_ii = 0.
Step 2: compute the degree matrix D with D_ii = Σ_j W_ij.
Step 3: construct the normalized symmetric Laplacian matrix L_sym = D^(−1/2)(D − W)D^(−1/2).
Step 4: calculate the eigenvectors corresponding to the first k eigenvalues of L_sym, and construct the feature matrix U = [u_1, …, u_k].
Step 5: normalize the rows of U to obtain a normalized matrix T, which contains the n points reduced to a k-dimensional space.
Step 6: treat each row of T as a point, and cluster these points with the K-means algorithm.
3.2. Beetle Antennae Search Algorithm with Damping Factor (DBAS)

As mentioned in Section 2.2, the orientation of the individual is random in each iteration. This causes frequent oscillation during the iterative process: the result of iteration M + 1 is often worse than that of iteration M. We therefore add a damping factor to the position-update formula of the individual, so that the position is updated using both the result of the current iteration and that of the previous one:

x^t = λ x^(t−1) + (1 − λ)(x^(t−1) + δ^t b sign(f(x_r) − f(x_l))),   (3)

where x^(t−1) denotes the position in the (t − 1)-th iteration and λ ∈ (0, 1) is the damping factor.

We ran the algorithm with and without the damping factor on the Iris data set. Figure 1 shows that adding the damping factor effectively suppresses the oscillation in the iterative process.
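A single damped update step might look like the following sketch; taking the damped position as a convex blend of the previous position and the plain BAS move is our reading of the description above:

```python
import numpy as np

def dbas_step(f, x_prev, step, d, damp=0.5):
    """One DBAS iteration: a plain BAS move blended with the previous
    position through the damping factor `damp`."""
    b = np.random.randn(x_prev.size)
    b /= np.linalg.norm(b)                       # random unit heading
    # plain BAS candidate position
    x_new = x_prev + step * b * np.sign(f(x_prev - d * b) - f(x_prev + d * b))
    # damped update: mix the candidate with the previous position
    return damp * x_prev + (1.0 - damp) * x_new
```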

3.3. SC-DBAS Algorithm

Firstly, we use the Gaussian kernel function based on the distance information of some nearest neighbors (formula (2)) to construct the similarity matrix, and then calculate the corresponding degree matrix and Laplacian matrix. We select the eigenvectors corresponding to the first k minimum eigenvalues of the Laplacian matrix to construct an eigenmatrix and normalize it to obtain a new eigenmatrix. Each row of this matrix is regarded as a sample point. For this new data set, we randomly initialize a group of cluster centers as an individual and then use the DBAS algorithm to cluster. The SC-DBAS algorithm flow is given in Algorithm 2.

Input: data set X, number of clusters k, number of iterations of the DBAS algorithm N
Step 1: construct the similarity matrix W using formula (2)
Step 2: construct the degree matrix D with D_ii = Σ_j W_ij
Step 3: construct the Laplacian matrix L_sym = D^(−1/2)(D − W)D^(−1/2)
Step 4: calculate the eigenvectors corresponding to the first k minimum eigenvalues of the Laplacian matrix, which form the eigenmatrix U
Step 5: normalize the rows of U to get a new feature matrix T
Step 6: treat each row of T as a data point, and randomly initialize a group of cluster centers as an individual
Step 7: calculate the fitness f(x_r) and f(x_l) of the right antenna and the left antenna of the current individual, where the fitness measures the clustering quality of the cluster centers encoded by a position
Step 8: update the individual position information with formula (3)
Step 9: repeat steps 7 and 8 until the maximum number of iterations is reached
Step 10: assign each point to the nearest cluster center given by the final individual position
Output: clusters C_1, …, C_k
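In the clustering stage of Algorithm 2, the beetle's position encodes a flattened set of k cluster centres. A minimal sketch of the fitness and the final label assignment, assuming the fitness is the within-cluster sum of squared errors over the embedded points (the paper does not spell the fitness function out):

```python
import numpy as np

def cluster_sse(position, T, k):
    """Fitness of an individual: the within-cluster sum of squared
    errors of the embedded points T for the k encoded centres."""
    centers = position.reshape(k, -1)
    d2 = ((T[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def assign_labels(position, T, k):
    """Assign each embedded point to its nearest encoded centre."""
    centers = position.reshape(k, -1)
    d2 = ((T[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```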
3.4. Computational Complexity

The computational complexity of the proposed algorithm can be estimated as follows. The SC-DBAS algorithm is divided into three parts: (1) constructing the similarity graph, which needs O(n²); (2) eigenvalue decomposition, which needs O(n³); and (3) clustering with the DBAS algorithm, which needs O(nkl), where k is the number of cluster centers and l is the number of iterations. In big-O notation, the overall computational complexity of the proposed algorithm is therefore O(n³).

4. Experimental Results and Analysis

4.1. Experimental Setting

All experiments were conducted on a computer with an Intel Core i5-3230M CPU and 8 GB RAM, using Matlab 2016b. In the experiments, we compare the proposed algorithm with K-means, NJW [14], the MPSC algorithm [22], the PGSC algorithm [17], and the SC-NP algorithm [23] on four artificial data sets and seven UCI data sets. The proposed algorithm is also applied to image segmentation on images from the BSDS500 data set; in that part of the experiments, the comparison algorithms are K-means, NJW [14], the PGSC algorithm [17], and the SC-NP algorithm [23].

In the experiments, the parameters are set as follows: step = 0.1; step adjustment factor eta = 0.95; the ratio between the step length and the antenna distance d is 5; the number of iterations n = 100; and damp = 0.5. The information of the data sets is shown in Table 1.


Data set        Objects   Attributes   Classes   Source

Iris            150       4            3         UCI
Wine            178       13           3         UCI
Seeds           210       6            3         UCI
Zoo             101       16           7         UCI
Glass           214       10           6         UCI
Sonar           208       60           2         UCI
Ionosphere      351       34           2         UCI
Spiral          944       2            2         Artificial
Two moons       2000      2            2         Artificial
Three circles   3603      2            3         Artificial
Zigzag          1002      2            3         Artificial

4.2. Evaluation Indicators

In the experiments, we use four indicators to evaluate the clustering results: accuracy, ARI, F1 score, and running time (s).
(1) Accuracy: the accuracy is the proportion of correctly clustered samples to the total number of samples, where V is the computed division label and U is the real label.
(2) ARI: comparing the computed labels V with the real labels U, there are four cases for a sample pair. SS contains pairs that belong to the same cluster in both V and U; SD contains pairs that belong to the same cluster in V but not in U; DS contains pairs that belong to different clusters in V but the same cluster in U; and DD contains pairs that belong to different clusters in both V and U. Let a = |SS|, b = |SD|, c = |DS|, and d = |DD|; then

ARI = 2(ad − bc) / ((a + b)(b + d) + (a + c)(c + d)).

A larger ARI value means that the clustering result is more consistent with the real partition.
(3) F1 score: the F1 score is a commonly used evaluation criterion in information retrieval. It is the weighted harmonic mean of precision and recall and, with a, b, and c as defined above, is given by

F1 = 2a / (2a + b + c).

(4) Time: we use the average time of each algorithm over 100 runs as the evaluation index.
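The four pair counts above can be computed directly, which gives ARI and F1 in their pair-counting forms (function names are ours):

```python
from itertools import combinations

def pair_counts(V, U):
    """Count sample pairs: a = same cluster in both V and U,
    b = same in V only, c = same in U only, d = same in neither."""
    a = b = c = d = 0
    for i, j in combinations(range(len(V)), 2):
        same_v, same_u = V[i] == V[j], U[i] == U[j]
        if same_v and same_u:
            a += 1
        elif same_v:
            b += 1
        elif same_u:
            c += 1
        else:
            d += 1
    return a, b, c, d

def ari(V, U):
    a, b, c, d = pair_counts(V, U)
    return 2 * (a * d - b * c) / ((a + b) * (b + d) + (a + c) * (c + d))

def f1_score(V, U):
    a, b, c, _ = pair_counts(V, U)
    return 2 * a / (2 * a + b + c)  # harmonic mean of a/(a+b) and a/(a+c)
```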

4.3. Data Set Experiment Result Analysis
4.3.1. Experimental Results of Artificial Data Sets

Table 2 shows the experimental results of the six algorithms on the four artificial data sets. From Figure 2, we can see that the proposed algorithm correctly partitions data sets with various structures.


Data set        K-means   NJW   MPSC   PGSC   SC-NP    SC-DBAS

Spiral          0.5975    1     1      1      0.5890   1
Two moons       0.7337    1     1      1      0.7170   1
Three circles   0.5554    1     1      1      0.5753   1
Zigzag          0.7076    1     1      1      0.7275   1

4.3.2. Experimental Results of UCI Data Sets

Table 3 and Figure 3 show the experimental results of the six algorithms on seven UCI data sets. Comparing the results, we can see that the algorithm proposed in this paper performs better than the other five algorithms on most data sets and indicators and has a shorter running time.


Data set     Indicator   K-means   NJW      MPSC     PGSC     SC-NP    SC-DBAS

Iris         Accuracy    0.8933    0.8933   0.9067   0.9000   0.8933   0.9600
             ARI         0.5516    0.6850   0.7583   0.8859   0.8797   0.9195
             F1 score    0.8918    0.8988   0.9057   0.8988   0.8918   0.9332
             Time (s)    0.2610    0.5164   0.5880   0.0669   0.7841   0.0553

Wine         Accuracy    0.6530    0.6742   0.5505   0.6067   0.6910   0.7247
             ARI         0.7943    0.8986   0.9310   0.5614   0.6938   0.7395
             F1 score    0.6363    0.6276   0.6057   0.6510   0.6531   0.7302
             Time (s)    0.2206    0.4845   0.4020   0.0443   0.9687   0.0638

Seeds        Accuracy    0.7008    0.7905   0.7194   0.8810   0.8905   0.9000
             ARI         0.7006    0.7022   0.6865   0.8594   0.8681   0.8787
             F1 score    0.8897    0.8150   0.8914   0.8813   0.8913   0.9092
             Time (s)    0.2195    0.4893   0.4641   0.0403   1.0381   0.0641

Zoo          Accuracy    0.6534    0.6337   0.8119   0.8713   0.8416   0.8713
             ARI         0.6359    0.7441   0.7758   0.8962   0.8994   0.9012
             F1 score    0.6319    0.8038   0.8389   0.8190   0.8045   0.8540
             Time (s)    0.2561    0.5035   0.2844   0.0555   0.9538   0.0608

Glass        Accuracy    0.7913    0.6542   0.8131   0.7897   0.8832   0.8598
             ARI         0.7375    0.5718   0.7767   0.8206   0.8552   0.8817
             F1 score    0.6812    0.5515   0.6872   0.7509   0.6973   0.7364
             Time (s)    0.2418    0.4335   0.5547   0.0512   0.9710   0.0712

Sonar        Accuracy    0.3942    0.3701   0.4346   0.5385   0.5337   0.5721
             ARI         0.0827    0.0022   0.1324   0.5006   0.4999   0.5080
             F1 score    0.4623    0.6119   0.5556   0.5370   0.5593   0.5716
             Time (s)    0.2422    0.5950   0.4181   0.0587   1.6458   0.0794

Ionosphere   Accuracy    0.3589    0.6182   0.7094   0.6410   0.6410   0.7379
             ARI         0.3240    0.4237   0.4772   0.4989   0.5253   0.6121
             F1 score    0.4149    0.6885   0.6902   0.6188   0.6817   0.7431
             Time (s)    0.2105    1.011    0.892    0.1354   2.8606   0.1108

4.4. Application of the SC-DBAS Algorithm to Image Segmentation

Clustering-based image segmentation exploits the similarity between image pixels: a clustering algorithm divides the pixels into different clusters, thereby segmenting the original image.

In this section, we segment several images from the BSDS500 data set. For a 481 × 321 pixel image, treating each pixel as a data point would yield 154,401 data points. Therefore, to reduce the number of data points, we first apply the SLIC algorithm [24] to presegment the image into superpixels. Each superpixel is an oversegmented region and is treated as a single data point. The proposed algorithm is then used to segment the image. In the experiments, the number of superpixels for each image is 200. The comparison algorithms are K-means, NJW [14], the PGSC algorithm [17], and the SC-NP algorithm [23]. The results are given in Figure 4.
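The presegmentation step might be sketched as below with scikit-image's SLIC implementation; representing each superpixel by its mean colour is our assumption, since the paper does not state which per-superpixel features it uses:

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_features(image, n_segments=200):
    """Presegment `image` with SLIC and return the segment map plus
    one feature vector (mean colour) per superpixel."""
    segments = slic(image, n_segments=n_segments, compactness=10,
                    start_label=0)
    feats = np.array([image[segments == s].mean(axis=0)
                      for s in np.unique(segments)])
    return segments, feats
```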

From the experimental results, we can see that our algorithm separates object and background better, while the other four comparison algorithms produce wrongly segmented regions. The segmentation accuracy results are shown in Table 4.


Image     K-means   NJW      PGSC     SC-NP    SC-DBAS

Image 1   0.8804    0.9483   0.9483   0.9304   0.9943
Image 2   0.5562    0.5562   0.5347   0.9942   0.9969
Image 3   0.9520    0.9696   0.9729   0.9722   0.9741
Image 4   0.9902    0.9883   0.9920   0.9917   0.9924

5. Conclusion

In this paper, an improved spectral clustering algorithm combined with an improved BAS algorithm is proposed. The algorithm first improves the construction of the similarity matrix, using the distance information of some nearest neighbors of each point to calculate the corresponding scale parameter. In the clustering stage, we propose the BAS algorithm with a damping factor, which overcomes the oscillation of the original algorithm during the iterative process. The experimental results show that our algorithm outperforms the comparison algorithms on UCI data sets, artificial data sets, and image segmentation. However, in image segmentation, our results are affected by the quality of the superpixel segmentation. Future work is to improve the algorithm so that it can segment images directly without preprocessing, and to verify it on more real images and medical images.

Data Availability

The four artificial data sets that were manually generated can be obtained by contacting the author. The seven UCI data sets are often used in the existing literature which are from the UCI Machine Learning Repository available at http://archive.ics.uci.edu/ml/datasets.php. The four tested images are from the Berkeley computer vision group, Berkeley segmentation data set, and benchmark 500 (BSDS500), which are available at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research project was supported by the National Natural Science Foundation of China (61876101, 61802234, and 61806114), Social Science Fund Project of Shandong Province, China (16BGLJ06 and 11CGLJ22), Natural Science Fund Project of Shandong Province, China (ZR2019QF007), Postdoctoral Project, China (2017M612339 and 2018M642695), Humanities and Social Sciences Youth Fund of the Ministry of Education, China (19YJCZH244), and Postdoctoral Special Funding Project, China (2019T120607).

References

  1. A. W.-C. Liew, H. Yan, and N. F. Law, “Image segmentation based on adaptive cluster prototype estimation,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 4, pp. 444–453, 2005.
  2. F. Tung, A. Wong, and D. A. Clausi, “Enabling scalable spectral clustering for image segmentation,” Pattern Recognition, vol. 43, no. 12, pp. 4069–4076, 2010.
  3. W. Yan, S. Shi, L. Pan, G. Zhang, and L. Wang, “Unsupervised change detection in SAR images based on frequency difference and a modified fuzzy c-means clustering,” International Journal of Remote Sensing, vol. 39, no. 10, pp. 3055–3075, 2018.
  4. H. Ali, L. Rada, and N. Badshah, “Image segmentation for intensity inhomogeneity in presence of high noise,” IEEE Transactions on Image Processing, vol. 27, no. 99, 2018.
  5. X. Cui, T. E. Potok, and P. Palathingal, “Document clustering using particle swarm optimization,” in Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, June 2005.
  6. R. Janani and S. Vijayarani, “Text document clustering using spectral clustering algorithm with particle swarm optimization,” Expert Systems with Applications, vol. 134, pp. 192–200, 2019.
  7. H. Sun, J. Huang, J. Han, H. Deng, P. Zhao, and B. Feng, “Density-based network clustering via structure-connected tree division or agglomeration,” in Proceedings of the IEEE International Conference on Data Mining, Sydney, Australia, December 2010.
  8. P. G. Sun, L. Gao, and S. S. Han, “Identification of overlapping and non-overlapping community structure by fuzzy clustering in complex networks,” Information Sciences, vol. 181, no. 6, pp. 1060–1071, 2011.
  9. Y. Xu, Z. Zhuang, W. Li, and X. Zhou, “Effective community division based on improved spectral clustering,” Neurocomputing, vol. 279, pp. 54–62, 2018.
  10. W. E. Donath and A. J. Hoffman, “Lower bounds for the partitioning of graphs,” IBM Journal of Research and Development, vol. 17, no. 5, pp. 420–425, 1973.
  11. M. Fiedler, “Algebraic connectivity of graphs,” Czechoslovak Mathematical Journal, vol. 23, no. 23, pp. 298–305, 1973.
  12. J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
  13. C. Ding, X. He, H. Zha et al., Spectral Min-Max Cut for Graph Partitioning and Data Clustering, Lawrence Berkeley National Lab, Berkeley, CA, USA, 2001.
  14. A. Y. Ng, M. I. Jordan, and Y. Weiss, “On spectral clustering: analysis and an algorithm,” in Proceedings of the International Conference on Neural Information Processing Systems: Natural and Synthetic, pp. 849–856, MIT Press, Vancouver, Canada, December 2001.
  15. L. Zelnik-Manor and P. Perona, “Self-tuning spectral clustering,” Advances in Neural Information Processing Systems, vol. 17, pp. 1601–1608, 2005.
  16. X. Zhang, J. Li, and H. Yu, “Local density adaptive similarity measurement for spectral clustering,” Pattern Recognition Letters, vol. 32, no. 2, pp. 352–358, 2011.
  17. Y. Nataliani and M.-S. Yang, “Powered Gaussian kernel spectral clustering,” Neural Computing and Applications, vol. 31, no. S1, pp. 557–572, 2019.
  18. X. Jiang and S. Li, “BAS: beetle antennae search algorithm for optimization problems,” International Journal of Robotics and Control, vol. 1, no. 1, pp. 1–5, 2018.
  19. T. Wang and Q. Liu, “The assessment of storm surge disaster loss based on BAS-BP model,” Marine Environmental Science, vol. 37, no. 170, pp. 140–146, 2018.
  20. T. Chen, H. Yin, H. Jiang et al., “Particle swarm optimization algorithm based on bee antenna search for solving portfolio problem,” Computer Systems & Applications, vol. 28, no. 2, pp. 171–176, 2019.
  21. J. Wang and H. Chen, “Bee swarm antenna search algorithm for optimization problems,” International Journal of Robotics and Control, vol. 1, no. 1, p. 1, 2018.
  22. L. Wang, S. Ding, and H. Jia, “An improvement of spectral clustering via message passing and density sensitive similarity,” IEEE Access, vol. 7, pp. 101054–101062, 2019.
  23. X.-Y. Li and L.-J. Guo, “Constructing affinity matrix in spectral clustering based on neighbor propagation,” Neurocomputing, vol. 97, pp. 125–130, 2012.
  24. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2282, 2012.

Copyright © 2020 Zhe Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

