Research Article | Open Access

Wireless Communications and Mobile Computing, Volume 2021, Article ID 2034125 | https://doi.org/10.1155/2021/2034125

Special Issue: Generative Adversarial Networks for Multi-Modal Multimedia Computing

Yiwei Chen, Yi He, Jing Wang, Wanyue Li, Lina Xing, Xin Zhang, Guohua Shi, "DeepLab and Bias Field Correction Based Automatic Cone Photoreceptor Cell Identification with Adaptive Optics Scanning Laser Ophthalmoscope Images", Wireless Communications and Mobile Computing, vol. 2021, Article ID 2034125, 8 pages, 2021. https://doi.org/10.1155/2021/2034125

DeepLab and Bias Field Correction Based Automatic Cone Photoreceptor Cell Identification with Adaptive Optics Scanning Laser Ophthalmoscope Images

Academic Editor: Yulin Wang
Received: 03 May 2021
Revised: 24 May 2021
Accepted: 01 Jun 2021
Published: 12 Jun 2021

Abstract

The identification of cone photoreceptor cells is important for the early diagnosis of eye diseases. We propose an automatic deep-learning algorithm for cone photoreceptor cell identification on adaptive optics scanning laser ophthalmoscope images, based on DeepLab and bias field correction. Taking manual identification as the reference, the algorithm is highly effective, achieving a precision, recall, and F1 score of 96.7%, 94.6%, and 95.7%, respectively. To illustrate its performance, we present identification results for images with different cone photoreceptor cell distributions. The experimental results show that our algorithm achieves accurate photoreceptor cell identification on images of human retinas, comparable to manual identification.

1. Introduction

Vision is one of the most important human senses. Unfortunately, retinopathy, a major cause of blindness, has become increasingly common. Most retinopathy patients can avoid blindness with early diagnosis and treatment, which provide promising outcomes. Although optical imaging allows observation of the retina, higher-resolution imaging is required for the early diagnosis of retinopathy. However, ocular aberrations limit the resolution of optical imaging. To address this limitation, adaptive optics (AO), originally devised to remove aberrations caused by atmospheric instability [1], has been used to correct ocular aberrations in retinal imaging [2–4]. AO allows the resolution of in vivo retinal imaging to reach the cellular level [4–6]. In particular, AO scanning laser ophthalmoscopy (AO-SLO) integrates AO to clearly image cone photoreceptor cells [4]. AO-SLO thus allows pathological changes in the distribution of photoreceptor cells on the retina to be observed, outperforming other retinal imaging techniques in the diagnosis of diseases characterized by disorders in the cone photoreceptor cell distribution [7–11].

To quantitatively characterize the distribution of cone photoreceptor cells, individual cells must be identified. Although manual identification of cone photoreceptor cells is reliable, it is time-consuming and subjective. Therefore, semiautomatic and automatic algorithms for cone photoreceptor cell identification have been devised [12–26]. They can be grouped into nonlearning-based algorithms [12–18], supervised-learning algorithms [19–23], and unsupervised-learning algorithms [24–26]. Among them, supervised deep-learning algorithms have achieved the highest accuracy, making them a promising research direction given their potentially high performance.

In 2014, Google introduced a supervised deep-learning semantic segmentation model called DeepLab [27]. With its remarkable advantages, DeepLab has become a hot topic in research and engineering [28–33], and one of its popular variants, DeepLab v3 [34], has been widely used in medical image processing [35–41]. We propose an automatic cone photoreceptor cell identification algorithm based on DeepLab v3 for AO-SLO images. The proposed algorithm also uses bias field correction [42] to further improve identification accuracy. To confirm its effectiveness, we computed several evaluation measures (i.e., precision, recall, and F1 score) with respect to manual identification, which is taken as the ground-truth reference. The performance of the proposed algorithm is further demonstrated by showing cone photoreceptor cell identification results for AO-SLO images with different cell distributions.

2. Methods

Figure 1 shows the outline of the proposed deep-learning cone photoreceptor cell identification algorithm, with its main steps of (1) training, (2) testing, and (3) postprocessing. First, the training dataset, which includes AO-SLO images and their corresponding segmented images, is used to train DeepLab [34]. Second, the bias-field-corrected images obtained from the test dataset after applying bias field correction [42] are input to the trained DeepLab [34] to generate segmented test images. Third, the bias-field-corrected images and segmented test images are processed by a threshold-based algorithm to obtain finely segmented images, from which individual cone photoreceptor cells are identified by calculating their centroids.

2.1. Training

To achieve fine segmentation of cone photoreceptor cells, we magnified the training AO-SLO images and their corresponding segmented images four times isotropically before training. In detail, the training AO-SLO images were interpolated using the antialiasing mode to obtain high-quality images, and the corresponding segmented images were interpolated using the nearest-neighbor mode to preserve their binary values. Both interpolation modes are available in the Python Imaging Library. Then, DeepLab v3 [34], with its ResNet-101 backbone pretrained on the ImageNet dataset, was trained on the magnified images. In the training images, the area covered by cone photoreceptor cells is larger than that of the background. To compensate for this imbalance, we introduced a cross-entropy loss function that weights the cone photoreceptor cells (0.3) and the background (0.7) separately. During training, we set the batch size to 2 and the number of epochs to 100. The outline of the training process is shown in Figure 2.
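The class-weighted cross-entropy described above can be sketched as follows. This is an illustrative NumPy version (the actual training presumably used the weighted cross-entropy built into the deep-learning framework); only the weights of 0.3 for cells and 0.7 for background are taken from the text, and the function name is ours.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights=(0.7, 0.3)):
    """Pixel-wise cross-entropy with per-class weights.

    probs         : (H, W, C) softmax probabilities per pixel
    labels        : (H, W) integer class labels (0 = background, 1 = cone cell)
    class_weights : per-class weights; cone cells get the smaller weight (0.3)
                    because they cover more area than the background
    """
    h, w = labels.shape
    # Probability assigned by the network to the true class of each pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    # Weight each pixel's loss term by the weight of its true class.
    pixel_weights = np.asarray(class_weights)[labels]
    return float(np.mean(-pixel_weights * np.log(p_true + 1e-12)))
```

Because cells occupy more pixels than background, down-weighting them keeps the total loss contribution of the two classes comparable.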

2.2. Testing

Directly using the trained DeepLab v3 to segment fourfold-magnified test AO-SLO images fails with high probability. A representative failure case is shown in Figure 3, where the segmentation follows the local intensity bias instead of the cone photoreceptor cells.

To solve this problem, we applied bias field correction to the AO-SLO images. First, a bias field image is generated by applying a Gaussian filter with a sigma of 22 pixels to the AO-SLO image [26].

Second, the AO-SLO image is corrected by subtracting the bias field image [42].

Third, the fourfold-magnified bias-field-corrected image is input to the trained DeepLab to obtain the segmentation results. The outline of the testing process is shown in Figure 4.
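The bias field correction steps above can be sketched as follows. This is a hedged reconstruction: the equations themselves are not reproduced in this version of the text, so the exact correction formula (subtraction, with the global mean added back to preserve brightness) is an assumption; only the Gaussian low-pass estimate with sigma = 22 pixels comes from the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_field_correct(image, sigma=22.0):
    """Remove the slowly varying intensity bias from an AO-SLO image.

    1. Estimate the bias field with a broad Gaussian filter
       (sigma = 22 pixels, as stated in the text).
    2. Subtract the bias field; adding the global mean back keeps the
       corrected image in roughly the original intensity range.
    """
    img = np.asarray(image, dtype=np.float64)
    bias = gaussian_filter(img, sigma=sigma)   # low-frequency component
    corrected = img - bias + img.mean()        # flatten the illumination
    return corrected, bias
```

The corrected image, magnified fourfold, is then fed to the trained network as described above.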

Figure 5 depicts the bias field correction [42] and DeepLab segmentation [34] performed on the image shown in Figure 3. The bias field is corrected, and the segmented image is accurate.

2.3. Postprocessing

Figure 5 shows that some cone photoreceptor cells are merged after DeepLab segmentation. To mitigate this problem, we applied thresholding to the bias-field-corrected images [36]. The intensity values inside the DeepLab segmentation mask were first extracted from the bias-field-corrected image. Then, the mean intensity value was calculated and used as the threshold to segment the bias-field-corrected image. From the thresholded result, cone photoreceptor cells were identified in two steps: the contours of the segmentation results were extracted using OpenCV's findContours function, and the centroids of the areas inside the contours were then taken as the identified cone photoreceptor cells. A representative example of postprocessing is shown in Figure 6, where adjacent cell merging is mostly resolved, and individual cone photoreceptor cells are accurately identified.
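The postprocessing steps above can be sketched as follows. SciPy's connected-component labelling stands in here for OpenCV's findContours (for well-separated blobs the resulting centroids are equivalent), and the function and variable names are illustrative.

```python
import numpy as np
from scipy import ndimage

def identify_cells(corrected_img, deeplab_mask):
    """Refine a DeepLab mask by thresholding and extract cell centroids.

    1. Use the mean intensity of the bias-field-corrected image inside
       the DeepLab mask as the threshold.
    2. Re-segment the corrected image with that threshold.
    3. Report the centroid of each connected region as one identified
       cone photoreceptor cell.
    """
    threshold = corrected_img[deeplab_mask > 0].mean()
    fine_mask = corrected_img > threshold
    labeled, n_cells = ndimage.label(fine_mask)
    centroids = ndimage.center_of_mass(fine_mask, labeled, range(1, n_cells + 1))
    return fine_mask, centroids
```

Because the threshold is recomputed from the corrected intensities rather than taken from the network output, two cells that the network merged into one region can be split apart again wherever the gap between them is darker than the cell interiors.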

3. Results

We evaluated the proposed algorithm on a publicly available dataset [15] that contains 840 AO-SLO images and their corresponding cone photoreceptor cell segmentation results as ground truth. We used 800 AO-SLO images as the training dataset and the remaining 40 images as the test dataset. The automatic processing took 2.95 hours for training with a batch size of 2 over 100 epochs, 8.77 s for testing, and 0.76 s for postprocessing. These computation times were obtained on a computer running 64-bit Python and equipped with an Intel Core i7-10870H processor (2.20 GHz), 16.0 GB RAM, and an NVIDIA GeForce RTX 2060 graphics card.

To confirm the effectiveness of the proposed algorithm for cone photoreceptor cell identification, we evaluated its performance using three measures, namely, precision, recall, and F1 score, with respect to the manual identification results taken as the reference. The overall precision, recall, and F1 score are listed in Table 1, where the values are compared with those of several algorithms [15, 18, 25, 26]. The proposed algorithm achieves accurate cone photoreceptor cell identification, outperforming the compared algorithms [18, 25, 26] except the graph-theory-based algorithm [15], which is often used as a ground-truthing method for cone photoreceptor cell identification but requires heavy computation and a complex implementation.


Table 1: Performance comparison of cone photoreceptor cell identification algorithms.

Methods                                    Precision   Recall   F1 score
Graph-theory-based algorithm [15]          98.2%       98.5%    98.3%
Proposed algorithm                         96.7%       94.6%    95.7%
Watershed-based algorithm [18]             93.2%       96.6%    94.9%
K-means clustering-based algorithm [26]    93.4%       95.2%    94.3%
Superpixels-based algorithm [25]           80.1%       93.5%    86.3%
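The F1 scores in Table 1 are the harmonic mean of precision and recall, which can be verified from the reported percentages:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# Graph-theory-based algorithm [15]: 98.2% precision, 98.5% recall
print(round(100 * f1_score(0.982, 0.985), 1))  # prints 98.3
```

Recomputing from the rounded percentages reproduces each row of the table; the proposed algorithm's row comes out at 95.6%, consistent with the reported 95.7% given that the published values are rounded from exact detection counts.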

To illustrate the performance of the proposed algorithm, Figure 7 shows cone photoreceptor cell identification results for different cone photoreceptor cell distributions on AO-SLO images. The cone photoreceptor cells are accurately identified on the three AO-SLO images with different distributions.

4. Discussion

In general semantic segmentation, the relationship between the target objects and the background is usually complex. Cone photoreceptor cell identification is comparatively simple: (1) only one type of object, the cone photoreceptor cell, must be segmented, and (2) cone photoreceptor cells do not contain rich texture details. Thus, an algorithm can segment the images according to area-based information. Because the area covered by cone photoreceptor cells is much larger than the target area in general semantic segmentation, the DeepLab algorithm is trained with a bias if the cone photoreceptor cells and background are weighted equally. To prevent this bias, we designed a cross-entropy loss function that gives a smaller weight to the cone photoreceptor cells.

In general, supervised deep-learning algorithms provide higher accuracy than nonlearning-based and unsupervised-learning algorithms. Therefore, automatic algorithms for the accurate identification of cone photoreceptor cells on AO-SLO images can be developed by applying and modifying deep-learning algorithms that have demonstrated high-performance image segmentation and identification but have not yet been used for cone photoreceptor cell identification. To this end, modified versions of three well-known methods [43–45] appear to be promising directions for developing automatic and accurate cone photoreceptor cell identification algorithms for AO-SLO images.

5. Conclusions

We propose an automatic deep-learning algorithm for the identification of cone photoreceptor cells on AO-SLO images, with DeepLab v3 and bias field correction as its core techniques. To confirm its effectiveness, we computed its precision, recall, and F1 score with respect to manual identification, obtaining values of 96.7%, 94.6%, and 95.7%, respectively. Furthermore, to illustrate its performance, we presented cone photoreceptor cell identification results for different cone photoreceptor cell distributions on AO-SLO images.

Data Availability

The original dataset used in this paper is publicly available online (http://people.duke.edu/~sf59/Chiu_BOE_2013_dataset.htm) [15]. However, our source code is not publicly available because it contains information that could compromise research participant privacy.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the Natural Science Foundation of Jiangsu Province (BK20200214), National Key R&D Program of China (2017YFB0403701), Jiangsu Province Key R&D Program (BE2019682, BE2018667), National Natural Science Foundation of China (61605210, 61675226, 61378090), Youth Innovation Promotion Association of Chinese Academy of Sciences (2019320), Frontier Science Research Project of the Chinese Academy of Sciences (QYZDB-SSW-JSC03), and Strategic Priority Research Program of the Chinese Academy of Sciences (XDB02060000).

References

  1. H. W. Babcock, “The possibility of compensating astronomical seeing,” Publications of the Astronomical Society of the Pacific, vol. 65, no. 386, pp. 229–236, 1953.
  2. A. Roorda, F. Romero-Borja, W. J. Donnelly III, H. Queener, T. J. Hebert, and M. C. Campbell, “Adaptive optics scanning laser ophthalmoscopy,” Optics Express, vol. 10, no. 9, pp. 405–412, 2002.
  3. S. A. Burns, R. Tumbar, A. E. Elsner, D. Ferguson, and D. X. Hammer, “Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope,” Journal of the Optical Society of America. A, vol. 24, no. 5, pp. 1313–1326, 2007.
  4. R. D. Ferguson, Z. Zhong, D. X. Hammer et al., “Adaptive optics scanning laser ophthalmoscope with integrated wide-field retinal imaging and tracking,” Journal of the Optical Society of America. A, vol. 27, no. 11, pp. 265–277, 2010.
  5. Y. Kitaguchi, K. Bessho, T. Yamaguchi, N. Nakazawa, T. Mihashi, and T. Fujikado, “In vivo measurements of cone photoreceptor spacing in myopic eyes from images obtained by an adaptive optics fundus camera,” Japanese Journal of Ophthalmology, vol. 51, no. 6, pp. 456–461, 2007.
  6. A. Reumueller, “Three-dimensional composition of the photoreceptor cone layers in healthy eyes using adaptive-optics optical coherence tomography (AO-OCT),” PLoS One, vol. 16, no. 1, article e0245293, 2021.
  7. J. Lammer, S. G. Prager, M. C. Cheney et al., “Cone photoreceptor irregularity on adaptive optics scanning laser ophthalmoscopy correlates with severity of diabetic retinopathy and macular edema,” Investigative Ophthalmology & Visual Science, vol. 57, no. 15, pp. 6624–6632, 2016.
  8. S. P. Park, W. Lee, E. J. Bae et al., “Early structural anomalies observed by high-resolution imaging in two related cases of autosomal-dominant retinitis pigmentosa,” Ophthalmic Surgery, Lasers and Imaging Retina, vol. 45, no. 5, pp. 469–473, 2014.
  9. S. Nakatake, Y. Murakami, J. Funatsu et al., “Early detection of cone photoreceptor cell loss in retinitis pigmentosa using adaptive optics scanning laser ophthalmoscopy,” Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 257, no. 6, pp. 1169–1181, 2019.
  10. R. L. Steinmetz, A. Garner, J. I. Maguire, and A. C. Bird, “Histopathology of incipient fundus flavimaculatus,” Ophthalmology, vol. 98, no. 6, pp. 953–956, 1991.
  11. Y. Chen, K. Ratnam, S. M. Sundquist et al., “Cone photoreceptor abnormalities correlate with vision loss in patients with Stargardt disease,” Investigative Ophthalmology & Visual Science, vol. 52, no. 6, pp. 3281–3292, 2011.
  12. A. Turpin, P. Morrow, B. Scotney, R. Anderson, and C. Wolsley, “Automated identification of photoreceptor cones using multi-scale modelling and normalized cross-correlation,” in Image Analysis and Processing – ICIAP 2011. ICIAP 2011. Lecture Notes in Computer Science, vol 6978, G. Maino and G. L. Foresti, Eds., pp. 494–503, Springer, Berlin, Heidelberg, 2011.
  13. D. Cunefare, R. F. Cooper, B. Higgins et al., “Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images,” Biomedical Optics Express, vol. 7, no. 5, pp. 2036–2050, 2016.
  14. D. M. Bukowska, A. L. Chew, E. Huynh et al., “Semi-automated identification of cones in the human retina using circle Hough transform,” Biomedical Optics Express, vol. 6, no. 12, pp. 4676–4693, 2015.
  15. S. J. Chiu, Y. Lokhnygina, A. M. Dubis et al., “Automatic cone photoreceptor segmentation using graph theory and dynamic programming,” Biomedical Optics Express, vol. 4, no. 6, pp. 924–937, 2013.
  16. K. Y. Li and A. Roorda, “Automated identification of cone photoreceptors in adaptive optics retinal images,” Journal of the Optical Society of America A, vol. 24, no. 5, pp. 1358–1363, 2007.
  17. J. Liu, H. Jung, A. Dubra, and J. Tam, “Automated photoreceptor cell identification on nonconfocal adaptive optics images using multiscale circular voting,” Investigative Ophthalmology and Visual Science, vol. 58, no. 11, pp. 4477–4489, 2017.
  18. Y. Chen, Y. He, J. Wang et al., “Automated cone photoreceptor cell segmentation and identification in adaptive optics scanning laser ophthalmoscope images using morphological processing and watershed algorithm,” IEEE Access, vol. 8, pp. 105786–105792, 2020.
  19. D. Cunefare, C. S. Langlo, E. J. Patterson et al., “Deep learning based detection of cone photoreceptors with multimodal adaptive optics scanning light ophthalmoscope images of achromatopsia,” Biomedical Optics Express, vol. 9, no. 8, pp. 3740–3756, 2018.
  20. D. Cunefare, A. L. Huckenpahler, E. J. Patterson, A. Dubra, J. Carroll, and S. Farsiu, “RAC-CNN: multimodal deep learning based automatic detection and classification of rod and cone photoreceptors in adaptive optics scanning light ophthalmoscope images,” Biomedical Optics Express, vol. 10, no. 8, pp. 3815–3832, 2019.
  21. J. Hamwood, D. Alonso-Caneiro, D. M. Sampson, M. J. Collins, and F. K. Chen, “Automatic detection of cone photoreceptors with fully convolutional networks,” Translational Vision Science & Technology, vol. 8, no. 6, p. 10, 2019.
  22. D. Cunefare, L. Fang, R. F. Cooper, A. Dubra, J. Carroll, and S. Farsiu, “Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks,” Scientific Reports, vol. 7, no. 1, pp. 1–11, 2017.
  23. B. Davidson, A. Kalitzeos, J. Carroll et al., “Automatic cone photoreceptor localisation in healthy and Stargardt afflicted retinas using deep learning,” Scientific Reports, vol. 8, no. 1, pp. 1–13, 2018.
  24. C. Bergeles, A. M. Dubis, B. Davidson et al., “Unsupervised identification of cone photoreceptors in non-confocal adaptive optics scanning light ophthalmoscope images,” Biomedical Optics Express, vol. 8, no. 6, pp. 3081–3094, 2017.
  25. Y. Chen, Y. He, J. Wang et al., “Automated superpixels-based identification and mosaicking of cone photoreceptor cells for adaptive optics scanning laser ophthalmoscope,” Chinese Optics Letters, vol. 18, no. 10, article 101701, 2020.
  26. Y. Chen, Y. He, J. Wang et al., “Automated cone cell identification on adaptive optics scanning laser ophthalmoscope images based on TV-L1 optical flow registration and K-means clustering,” Applied Sciences, vol. 11, no. 5, article 2259, 2021.
  27. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Semantic image segmentation with deep convolutional nets and fully connected CRFs,” 2014, http://arxiv.org/abs/1412.7062.
  28. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440, Boston, MA, USA, June 2015.
  29. W. Liu, D. Anguelov, D. Erhan et al., “SSD: single shot multibox detector,” in Computer Vision – ECCV 2016. ECCV 2016. Lecture Notes in Computer Science, vol 9905, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds., Springer, Cham, 2016.
  30. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1125–1134, Honolulu, HI, USA, July 2017.
  31. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018.
  32. V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
  33. M. Cordts, M. Omran, S. Ramos et al., “The cityscapes dataset for semantic urban scene understanding,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016.
  34. L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking atrous convolution for semantic image segmentation,” 2017, http://arxiv.org/abs/1706.05587.
  35. W.-T. Xiao, L.-J. Chang, and W.-M. Liu, “Semantic segmentation of colorectal polyps with DeepLab and LSTM networks,” in 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Taichung, Taiwan, May 2018.
  36. Y. Wang, S. Sun, J. Yu, and D. Yu, “Skin lesion segmentation using atrous convolution via DeepLab V3,” 2018, http://arxiv.org/abs/1807.08891.
  37. E. Grøvik, D. Yi, M. Iv et al., “Handling missing MRI sequences in deep learning segmentation of brain metastases: a multicenter study,” NPJ Digital Medicine, vol. 4, no. 1, pp. 33–37, 2021.
  38. L. Ahmed, M. M. Iqbal, H. Aldabbas, S. Khalid, Y. Saleem, and S. Saeed, “Images data practices for semantic segmentation of breast cancer using deep neural network,” Journal of Ambient Intelligence and Humanized Computing, pp. 1–17, 2020.
  39. A. Subramanian and K. Srivatsan, Exploring Deep Learning Based Approaches for Endoscopic Artefact Detection and Segmentation, EndoCV@ISBI, 2020.
  40. C.-H. Huang, W.-T. Xiao, L.-J. Chang, W.-T. Tsai, and W.-M. Liu, “Automatic tissue segmentation by deep learning: from colorectal polyps in colonoscopy to abdominal organs in CT exam,” in 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, December 2018.
  41. D. Yi, E. Grøvik, M. Iv, E. Tong, G. Zaharchuk, and D. Rubin, “Random bundle: brain metastases segmentation ensembling through annotation randomization,” 2020, http://arxiv.org/abs/2002.09809.
  42. P. Zang, G. Liu, M. Zhang et al., “Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram,” Biomedical Optics Express, vol. 7, no. 7, pp. 2823–2836, 2016.
  43. G. Lin, A. Milan, C. Shen, and I. Reid, “RefineNet: multi-path refinement networks for high-resolution semantic segmentation,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, July 2017.
  44. H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, July 2017.
  45. C. Peng, X. Zhang, G. Yu, G. Luo, and J. Sun, “Large kernel matters — improve semantic segmentation by global convolutional network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, July 2017.

Copyright © 2021 Yiwei Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
