Computational Intelligence and Neuroscience / 2021 / Article

Special Issue: Swarm Intelligence and Neural Network Schemes for Biomedical Data Evaluation

Research Article | Open Access

Volume 2021 | Article ID 9980326 | https://doi.org/10.1155/2021/9980326

Zhemin Zhuang, Zengbiao Yang, Shuxin Zhuang, Alex Noel Joseph Raj, Ye Yuan, Ruban Nersisson, "Multi-Features-Based Automated Breast Tumor Diagnosis Using Ultrasound Image and Support Vector Machine", Computational Intelligence and Neuroscience, vol. 2021, Article ID 9980326, 12 pages, 2021. https://doi.org/10.1155/2021/9980326

Multi-Features-Based Automated Breast Tumor Diagnosis Using Ultrasound Image and Support Vector Machine

Academic Editor: V. Rajinikanth
Received: 30 Mar 2021
Accepted: 07 May 2021
Published: 19 May 2021

Abstract

Breast ultrasound examination is a routine, fast, and safe method for the clinical diagnosis of breast tumors. In this paper, a classification method based on multi-features and support vector machines is proposed for breast tumor diagnosis. The multi-features are composed of characteristic features and deep learning features of breast tumor images. First, an improved level set algorithm was used to segment the lesion in breast ultrasound images, which enabled accurate calculation of characteristic features such as orientation, edge indistinctness, characteristics of the posterior shadowing region, and shape complexity. Simultaneously, we used transfer learning to construct a pretrained model as a feature extractor for the deep learning features of breast ultrasound images. Finally, the multi-features were fused and fed to a support vector machine for the classification of breast ultrasound images. The proposed model, when tested on unknown samples, provided a classification accuracy of 92.5% for cancerous and noncancerous tumors.

1. Introduction

The International Agency for Research on Cancer (IARC) reported that breast cancer accounts for about 24.2% of cancers diagnosed in women worldwide [1]. It is also the leading cause of cancer death in women, accounting for about 15% of such deaths. With the development of modern medicine, the survival rate of patients improves significantly if breast cancer is diagnosed early [2]. Breast tumors are usually examined by computerized tomography (CT), magnetic resonance imaging (MRI), molybdenum target X-ray, far infrared, ultrasound (US), and other methods. Among them, US has become the preferred choice for early breast cancer screening because of its cost-effectiveness and robustness [3]. However, the Breast Imaging-Reporting and Data System (BI-RADS) [4] grades assigned by different clinicians to the same patient are subjective and can differ, since some features in breast ultrasound (BUS) images are not clearly visible enough for a confident diagnosis [5]. Moreover, different breast lesions show different features in BUS images, so experience and the ability to interpret visual clues from BUS images are essential to reducing false negative detections. Statistics show that the rate of missed diagnoses in medical imaging can be between 10% and 30% [6].

Artificial intelligence (AI) can assist doctors in making more accurate judgments because of its objectivity and versatility. AI diagnosis of benign and malignant BUS images can be divided into deep learning approaches and feature extraction approaches. Deep learning transforms the raw BUS image into a much higher-dimensional representation through convolutional neural networks (CNNs). Multi-level and multi-aspect features are obtained by training the network model, which gives the obtained features stronger generalization and expressive power. Deep learning is often used for the automatic classification of BUS images. For example, in reference [7], 166 malignant and 292 benign BUS images were trained and classified using a neural network composed of three convolution layers and two fully connected layers. Qi et al. [3] used Mt-Net (malignant tumors) and Sn-Net (solid nodules) to classify BUS images, where Mt-Net was used to detect malignant tumors and Sn-Net was used to detect solid nodules. Although deep learning has achieved good results, such methods are constrained by the need for a large amount of ground truth (GT).

Feature extraction techniques identify useful characteristic features (CFs) from the original images, transforming the original image into a group of features with clear physical significance and thereby achieving dimensionality reduction. For example, in reference [8], the region growing method was used to segment the lesion, and the histogram method was used to calculate six histograms from the posterior shadowing (PS) region. Finally, BUS images were divided into PS enhancement and PS nonenhancement by using the six histograms and a multilayer perceptron (MLP). However, PS is only one of the features used to judge benign and malignant BUS images and is not accurate enough for classification on its own.

In this paper, (1) by analyzing the different manifestations of benign and malignant breast tumors in ultrasound images and drawing on the clinical experience of experts, we manually designed distinct and effective characteristic features; (2) to assist the classification of breast tumor ultrasound images, we used transfer learning to extract deep learning features; and (3) finally, an SVM was used to integrate the characteristic features and deep learning features and produce an effective classification.

2. Materials and Methods

Since benign and malignant breast tumors have different histological structures and biological characteristics, they exhibit different properties in BUS images. Malignant tumors such as ductal carcinoma in situ [9], owing to their invasive nature, penetrate through the ducts into adjacent fibrous and adipose tissues, forming a blurred mixed zone between the tumor and the surrounding tissue and a complex edge. Besides, the complex interstitial components and hyaline transformation of malignant lesions often lead to scattering of acoustic signals [10, 11]. During the decision process, specialists often consider the orientation, posterior shadowing (PS), edge indistinctness (EI), and shape complexity (SC) of a tumor as essential features for identifying it as benign or malignant. These characteristic features (CFs) of BUS images are not only critical for clinical diagnosis but also a significant basis for feature-based BUS image classification. Hence, we proposed a method to classify breast lesions using multi-features (MFs) and an SVM. The proposed method first employed a level set technique to segment the tumor region of the BUS image. From the contour of the segmented tumor, the orientation and edge indistinctness (EI) scores were calculated. Next, using Hu moments, we determined the characteristics of the posterior shadowing (PS) region. The fractal divider method was then used to calculate the shape complexity (SC) score of the tumor contour. Meanwhile, a pretrained VGG16 model was used as a feature extractor to obtain the deep learning features (DLFs) of the BUS image. Finally, we classified the BUS images based on the multi-features (MFs) obtained above and the SVM. The whole process is illustrated in Figure 1.

3. Lesion Segmentation and Feature Calculation

Due to the complexity involved in the ultrasound examination, the acquired images contain speckle noise, image artifacts, and weak boundaries that hinder the segmentation process, so conventional segmentation techniques may not provide the desired results; yet accurate segmentation can effectively improve the accuracy of classification [12]. Literature suggests that level set techniques are useful for segmentation problems involving topological changes, and hence we used an advanced level set segmentation algorithm based on the geometric active contour model and curve evolution theory to segment the lesions in BUS images [13]. The technique employed an iterative method to accurately segment the tumor region within the BUS image. The following paragraphs briefly explain the use of the level set algorithm to segment BUS images.

The level set algorithm that we used in this paper does not depend on the gradient information of the BUS image. Therefore, it is insensitive to noise and has a significant advantage in medical image processing [14, 15]. Here we employed the Distance Regularized Level Set Evolution (DRLSE) [16] model, which eliminates the need for reinitialization by using a distance regularization term together with external energy terms that drive the zero level set of the level set function (LSF) towards the desired locations. The energy function can be defined as follows.

For an LSF $\phi:\Omega\rightarrow\mathbb{R}$,

$$\mathcal{E}(\phi)=\mu\,\mathcal{R}_{p}(\phi)+\lambda\,\mathcal{L}_{g}(\phi)+\alpha\,\mathcal{A}_{g}(\phi),\qquad(1)$$

where $\mathcal{R}_{p}(\phi)=\int_{\Omega}p(|\nabla\phi|)\,dx$ is the regularization term and $\mathcal{L}_{g}(\phi)=\int_{\Omega}g\,\delta(\phi)\,|\nabla\phi|\,dx$ and $\mathcal{A}_{g}(\phi)=\int_{\Omega}g\,H(-\phi)\,dx$ are the external energy terms. $\mu$, $\lambda$, and $\alpha$ are the respective coefficients, $H$ and $\delta$ are the Heaviside function and Dirac delta function, respectively, $p$ is a potential function, and $g$ is the edge indicator function computed from the BUS image.

Due to the addition of the distance regularization term in equation (1), the deviation between the level set and the signed distance function (SDF) is automatically corrected in each iteration of the level set function, thus maintaining stability and avoiding reinitialization.

The following gradient descent flow can be obtained by differentiating equation (1); minimizing the energy function realizes the extraction of the tumor region in the BUS image:

$$\frac{\partial\phi}{\partial t}=\mu\,\operatorname{div}\!\big(d_{p}(|\nabla\phi|)\,\nabla\phi\big)+\lambda\,\delta(\phi)\,\operatorname{div}\!\Big(g\,\frac{\nabla\phi}{|\nabla\phi|}\Big)+\alpha\,g\,\delta(\phi),\qquad(2)$$

where $d_{p}(s)\triangleq p'(s)/s$.

The implementation of DRLSE for an application follows the flowchart illustrated in Figure 2, which includes (a) initialization of the LSF and the narrowband region; (b) updating of the LSF and the narrowband region; (c) updating of the pixel values on the narrowband according to the discretized flow $\phi^{k+1}=\phi^{k}+\Delta t\,L(\phi^{k})$, where $L(\cdot)$ denotes the right-hand side of equation (2), the grid points at which the LSF changes sign are called zero-crossing points, $k$ is the iteration number, and $\Delta t$ is the time step; and (d) termination of the process once the zero-crossing points keep opposite signs on either side for a prescribed number of consecutive iterations (i.e., the zero level set has stabilized) or the expected number of iterations is reached. The segmentation results of the lesions in breast ultrasound images are shown in Figure 3.
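To make the evolution concrete, the following minimal Python/NumPy sketch implements one gradient-descent update of the DRLSE flow in equation (2), using a single-well regularization potential and a smoothed Dirac delta. It is an illustrative approximation only, not the authors' implementation, and all coefficient values are assumptions.

import numpy as np

def drlse_step(phi, g, mu=0.2, lam=5.0, alpha=-3.0, dt=1.0, eps=1.5):
    """One DRLSE-style gradient-descent update of the level set function phi.
    g is the edge indicator image, e.g. 1 / (1 + |grad(G_sigma * I)|^2).
    mu, lam, alpha, dt, and eps are illustrative values, not those of the paper."""
    phi_y, phi_x = np.gradient(phi)                     # spatial gradients of phi
    mag = np.sqrt(phi_x ** 2 + phi_y ** 2) + 1e-10      # |grad(phi)|
    nx, ny = phi_x / mag, phi_y / mag                   # unit normal field

    def div(fx, fy):                                    # divergence of a 2-D vector field
        _, dfx_dx = np.gradient(fx)
        dfy_dy, _ = np.gradient(fy)
        return dfx_dx + dfy_dy

    # smoothed Dirac delta, nonzero only in a band of width 2*eps around phi = 0
    dirac = np.where(np.abs(phi) <= eps,
                     (1.0 / (2.0 * eps)) * (1.0 + np.cos(np.pi * phi / eps)), 0.0)

    curvature = div(nx, ny)                             # div(grad(phi) / |grad(phi)|)
    laplacian = div(phi_x, phi_y)                       # div(grad(phi))
    gy, gx = np.gradient(g)

    dist_reg = laplacian - curvature                    # distance regularization term
    edge = dirac * (gx * nx + gy * ny + g * curvature)  # edge-weighted length term
    area = dirac * g                                    # edge-weighted area term
    return phi + dt * (mu * dist_reg + lam * edge + alpha * area)

In practice the update is applied only on the narrow band around the zero-crossing points and iterated until the zero level set stabilizes, following the flowchart of Figure 2.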

3.1. Computation of Orientation

The growth characteristics of benign and malignant tumors vary, and the tumors therefore show different orientations. Here we used the contour obtained from the segmentation process to calculate the tumor orientation. First, we traversed the segmented contour in both the horizontal and vertical directions and obtained four points: the top, bottom, leftmost, and rightmost points. These are the vertices obtained from the intersection of horizontal and vertical lines with the upper, lower, leftmost, and rightmost extremes of the contour, respectively, as shown in Figure 4.

Next, using equation (3), we computed the tumor orientation, expressed as the ratio of the height $H$ (the vertical distance between the top and bottom points) to the width $W$ (the horizontal distance between the leftmost and rightmost points) of the tumor: $\text{Orientation}=H/W$.
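A minimal sketch of this computation from a binary segmentation mask, assuming equation (3) is simply the height-to-width ratio of the lesion's extents:

import numpy as np

def orientation_score(mask):
    """Orientation of the lesion: vertical extent divided by horizontal extent."""
    ys, xs = np.nonzero(mask)            # pixel coordinates inside the segmented lesion
    height = ys.max() - ys.min() + 1     # distance between the top and bottom points
    width = xs.max() - xs.min() + 1      # distance between the leftmost and rightmost points
    return height / width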

3.2. Computation of Edge Indistinctness (EI) Score of Lesions

Malignant tumors commonly penetrate deeper into the tissues, causing indistinct margins that differ from those of benign tumors. Therefore, to measure the edge indistinctness more comprehensively, we extracted a region of m × n pixels at the top and bottom vertices of the contour, i.e., around the top and bottom points obtained above, as shown in Figure 5.

For the extracted m × n region, we separately calculated the standard deviation along the rows and along the columns, as given in equations (4) and (5), and defined the EI score (equation (6)) as the maximum of the standard deviations computed along the row and column directions.

For our experiments, we selected the two regions around the top and bottom points and accordingly computed two EI scores, one for each region.

In cancer diagnosis, the edge strength (blurring) is an important index used to classify the tumor. However, the degree of blurring differs across different sections of the tumor boundary. Thus, we defined the final EI score as the average of the two regional scores.
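The sketch below computes an EI score for one boundary patch and averages the scores of the two patches; the exact aggregation used in equations (4)–(6) (here, the mean of the per-row and per-column standard deviations) is an assumption.

import numpy as np

def ei_patch_score(patch):
    """EI score of one m x n boundary patch: the larger of the mean row-wise
    and mean column-wise standard deviations (assumed aggregation)."""
    row_std = patch.std(axis=1).mean()   # intensity spread along the rows
    col_std = patch.std(axis=0).mean()   # intensity spread along the columns
    return max(row_std, col_std)

def ei_score(top_patch, bottom_patch):
    """Final EI score: average of the scores of the two extracted regions."""
    return 0.5 * (ei_patch_score(top_patch) + ei_patch_score(bottom_patch))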

3.3. Computation of the PS Score of Posterior Shadowing Region by Using Hu Moments

The texture characteristics of the PS region generally differ between benign and malignant tumors. Literature suggests that moments can be used to analyze texture characteristics [17]; therefore, we used the moment invariants proposed by Hu [18] to compare the PS regions of different BUS images quantitatively. The PS region of the BUS image was extracted based on the bottom, leftmost, and rightmost points of the contour, as shown in Figure 6.

Let $f(x,y)$ be the extracted PS region; then, its $(p+q)$th-order geometric moment can be defined as follows:

$$m_{pq}=\sum_{x}\sum_{y}x^{p}\,y^{q}\,f(x,y).$$

The corresponding central moments can be defined as

$$\mu_{pq}=\sum_{x}\sum_{y}(x-\bar{x})^{p}\,(y-\bar{y})^{q}\,f(x,y),$$

where $\bar{x}$ and $\bar{y}$ are the center-of-gravity coordinates of the image, given by $\bar{x}=m_{10}/m_{00}$ and $\bar{y}=m_{01}/m_{00}$.

The normalized central moment is defined as

$$\eta_{pq}=\frac{\mu_{pq}}{\mu_{00}^{\rho}},\qquad(11)$$

where $\rho=(p+q)/2+1$.

Normally, seven Hu moments can be computed using the second- and third-order normalized central moments. Here, however, we only use the first moment, as it is sufficient to provide a score that can differentiate the PS regions of different BUS images effectively. Accordingly, we substituted p = 0, q = 2 (and, symmetrically, p = 2, q = 0) into equation (11) and defined the PS score from the first Hu moment, $\eta_{20}+\eta_{02}$.

To obtain a clearer distinction, the PS score is further transformed as given in equation (13).
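A sketch of the raw PS score computed from the normalized central moments defined above; treating the first Hu invariant as the raw score is an assumption here, and the final transformation of equation (13) is not reproduced.

import numpy as np

def ps_raw_score(ps_region):
    """First Hu moment (eta_20 + eta_02) of the posterior shadowing region."""
    f = ps_region.astype(float)
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m00 = f.sum()
    xbar, ybar = (x * f).sum() / m00, (y * f).sum() / m00   # center of gravity

    def eta(p, q):
        """Normalized central moment eta_pq (equation (11))."""
        mu_pq = (((x - xbar) ** p) * ((y - ybar) ** q) * f).sum()
        return mu_pq / m00 ** ((p + q) / 2 + 1)

    return eta(2, 0) + eta(0, 2)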

3.4. Computation of Shape Complexity (SC) Score Based on Fractal Dimensions

The shape is one of the critical factors clinical experts use to classify tumors as benign or malignant: cancerous tumors have complex contours, whereas benign ones have simpler structures. Therefore, we proposed a technique based on fractals to quantify the shape complexity of the segmented tumors. In image processing, fractals have been widely used in US image analysis and segmentation. Omiotek [19] used fractals to quantify the texture of thyroid US images, Lin et al. [20] used fractals to localize the area of alveolar bone loss, and recently Zhuang et al. [21] combined fuzzy enhancement and fractal theory to successfully segment US images of atherosclerosis. Fractal theory has also been used to measure irregular coastlines and the fault geometries produced by earthquakes: Mandelbrot [22] employed a power law to relate the coastal length to rulers of different lengths, and Okubo and Aki [23] related complex fault geometries to large values of the fractal dimension.

Here the divider method used by cartographers [23–25] was employed to quantify the shape complexity of the segmented tumor. To measure the complexity, we drew circles of different radii along the boundary of the segmented contour: the starting point was chosen as the top point, and circles of a given radius were drawn successively so as to divide the contour, as shown in Figure 7. Let N(R) be the number of circles and R their radius; then, according to [26, 27], we can relate N(R) and R as

$$N(R)=a\,R^{\,b},$$

where a and b are constants obtained through a least-squares fit between $\log N(R)$ and $\log R$, and b is the slope of the fitted line, which represents the fractal dimension and determines the SC score of the contour.

The illustration of the divider method for a sample BUS image is shown in Figure 7, and Table 1 presents the R values and their corresponding N(R) values. Further, Figure 8 demonstrates the SC score obtained by the least-squares fit of N(R) against R for the sample BUS image of Figure 7 (a numerical sketch of this fit is given after Table 1).


Figure 7: Illustration of the divider method on a sample BUS image, panels (a)–(h).

Table 1: Radii R and the corresponding number of circles N(R) for the sample BUS image in Figure 7.

R      4    5    6    7    8    9    10   11
N(R)   109  85   72   61   53   46   43   37
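As an illustration, fitting a line to log N(R) versus log R for the values in Table 1 recovers the slope used as the SC score; this is a sketch of the least-squares fit, and the paper's fit may differ in detail.

import numpy as np

# R and N(R) values taken from Table 1
R = np.array([4, 5, 6, 7, 8, 9, 10, 11], dtype=float)
N = np.array([109, 85, 72, 61, 53, 46, 43, 37], dtype=float)

# least-squares fit of log N(R) = log a + b * log R; the slope b is the SC score
b, log_a = np.polyfit(np.log(R), np.log(N), 1)
print(f"slope b (SC score) = {b:.3f}")   # roughly -1.07 for these values

The sign and magnitude of this slope are consistent with the SC values reported for the sample images in Table 2.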

4. Deep Learning Features (DLF) Extraction and SVM-Based Classification

In the above, we computed handcrafted features, namely, the orientation, EI score, PS score, and SC score, as the characteristic features (CFs) of the segmented lesion in a BUS image. Here we additionally retrieve high-dimensional features from the BUS image to assist the classification. Deep learning can provide such features by performing convolution operations on the input image multiple times. Therefore, using deep learning, we extracted the DLFs of the image and combined them with the manually extracted CFs to distinguish benign from malignant BUS images.

Due to the limited dataset, we used transfer learning [28] to train the neural network that serves as the DLF extractor. Initially, we compared the classification ability of VGG16 [29], VGG19 [29], ResNet50 [30], and Inception V3 [31] on BUS images. The experimental results show that VGG16 performs better than the other networks. Then, based on transfer learning, we froze the convolutional layers of VGG16, added a global average pooling layer, and changed the 7 × 7 × 512 output of the original VGG16 into 1 × 1 × 512. For the fully connected layers, the numbers of nodes were set to 512, 128, 4, and 2, respectively, and a ReLU activation function was added after each fully connected layer except the last one. The modified VGG16 is shown in Figure 9 (a sketch is also given below). At the output, the cross-entropy loss function was used to calculate the loss. After training, for other input images, we could obtain four values (the DLF) from the penultimate (4-node) fully connected layer of the trained model.
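A minimal Keras sketch of the modified VGG16 described above; the input size, optimizer, and other training details are not given in the paper and are assumptions here.

import tensorflow as tf
from tensorflow.keras import layers, models

# frozen VGG16 convolutional base with ImageNet weights (224 x 224 RGB input assumed)
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),                  # 7 x 7 x 512 -> 1 x 1 x 512
    layers.Dense(512, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="relu", name="dlf"),   # 4-dimensional DLF layer
    layers.Dense(2, activation="softmax"),            # benign / malignant output
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# after training on the BUS images, the 4-node layer is reused as the DLF extractor
dlf_extractor = models.Model(model.inputs, model.get_layer("dlf").output)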

Once the features had been computed from the BUS images, we used an SVM classifier to classify them as benign or malignant. The SVM was chosen because (a) large labeled medical image datasets for training are generally not available, and the SVM can provide good classification accuracy for smaller training sets [32]; and (b) SVM theory provides a way to avoid linearly inseparable problems in low-dimensional space through the use of kernel functions [33], which simplifies the problem in a higher-dimensional space and yields a more generalized classification.

Normally, to solve the problem of linear inseparability in low-dimensional space, kernel functions are used to map the features from the low-dimensional space to a high-dimensional space, thus realizing linear classification in the higher-dimensional space. In our experiments, the radial basis function (RBF) [34] was used as the kernel function of the SVM to classify the CF, the DLF, and the MF, respectively, and the classification results were evaluated with several evaluation indexes.
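For reference, the RBF kernel maps a pair of feature vectors $\mathbf{x}_i$ and $\mathbf{x}_j$ to a similarity value

$$K(\mathbf{x}_i,\mathbf{x}_j)=\exp\!\big(-g\,\lVert\mathbf{x}_i-\mathbf{x}_j\rVert^{2}\big),$$

where $g>0$ is the kernel width parameter (often written as $\gamma$), tuned together with the regularization parameter c in Section 5.5.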

5. Results

For the experiments, we used 1802 BUS images, comprising 787 benign and 1015 malignant images. The dataset consists of two parts. The first part is provided by Ultrasoundcases.info, a professional breast ultrasound website developed by Hitachi Medical Systems in Switzerland and Dr. Taco Geertsma, who works at Gelderse Vallei Hospital in the Netherlands; it contains many ultrasound cases collected and labeled over the years by radiologists and ultrasound technicians of the hospital. The other part was provided by the First Affiliated Hospital of Shantou University, Guangdong Province, China.

5.1. Evaluation Indexes

The following measures (equations (15)–(19)) were used as metrics to evaluate the performance of the SVM classifier model [35], where true positive (TP) denotes GT malignant and prediction malignant; false positive (FP), GT benign and prediction malignant; false negative (FN), GT malignant and prediction benign; and true negative (TN), GT benign and prediction benign.
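These metrics follow the standard definitions, which equations (15)–(19) are assumed to match:

$$\text{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\qquad \text{Precision}=\frac{TP}{TP+FP},\qquad \text{Sensitivity}=\frac{TP}{TP+FN},$$

$$\text{Specificity}=\frac{TN}{TN+FP},\qquad \text{F1-score}=\frac{2\cdot\text{Precision}\cdot\text{Sensitivity}}{\text{Precision}+\text{Sensitivity}}.$$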

5.2. Characteristic Feature (CF) Calculation

To illustrate the calculation of the CFs, we present six BUS images as examples, as shown in Figure 10. The calculated CF values are listed in Table 2, where the original value (OV) is computed using the proposed methods and the normalized value (NV) is the normalized CF value.


Table 2: Characteristic feature (CF) values of the BUS images in Figure 10 (OV = original value, NV = normalized value).

Images   Orientation (OV / NV)   EI (OV / NV)      PS score (OV / NV)   SC (OV / NV)
(a)      0.544 / 0               27.530 / 0.572    2.751 / 1            −1.021 / 0
(b)      0.670 / 0.283           27.529 / 0.572    1.983 / 0            −1.025 / 0.05
(c)      0.835 / 0.654           35.151 / 1        2.248 / 0.371        −1.027 / 0.008
(d)      0.878 / 0.751           17.329 / 0        2.321 / 0.440        −1.077 / 0.7
(e)      0.989 / 1               30.194 / 0.722    2.474 / 0.693        −1.101 / 1
(f)      0.772 / 0.512           23.215 / 0.33     2.153 / 0.221        −1.034 / 0.163

It can be seen from Table 2 that the characteristic features (CFs) of Figures 10(a) and 10(d) are consistent with the biological properties of benign and malignant BUS images. For example, Figure 10(a), which was diagnosed as a benign tumor, has a larger EI score than the malignant tumor in Figure 10(d). However, benign and malignant BUS images may also share similar values of an individual feature; for example, the posterior shadowing (PS) scores of Figures 10(b) and 10(f) are both relatively low. Therefore, relying on a single feature to distinguish benign from malignant BUS images carries a high probability of misjudgment.

5.3. Deep Learning Feature (DLF) Extractor

To select the best DLF extractor, we used transfer learning to train and test VGG16, VGG19, ResNet50, and Inception V3 on a dataset composed of 955 malignant and 727 benign BUS images. In this experiment, the training set accounts for 80% of the data, and the remainder forms the test set. The experimental results are shown in Table 3.


Table 3: Classification results of the pretrained models on the test set.

Pretrained models   Accuracy   Sensitivity   Specificity   F1-score
VGG16               0.84       0.86          0.82          0.86
VGG19               0.81       0.81          0.77          0.82
ResNet50            0.78       0.77          0.77          0.85
Inception V3        0.78       0.83          0.7           0.81

As can be seen from Table 3, the accuracy, sensitivity, specificity, and F1-score of VGG16 are 0.84, 0.86, 0.82, and 0.86, respectively, which are higher than those of VGG19, ResNet50, and Inception V3. From the table, it can be inferred that VGG16 has better classification ability and learns BUS image features better than the other networks. Therefore, we used VGG16 to extract the deep learning features (DLFs) that assist the SVM classification.

5.4. Classification of Breast Tumors Based on SVM

In this experiment, we used another 120 BUS images, entirely different from the data used in the previous section. Among them, 80 BUS images (40 benign and 40 malignant) were used to train the SVM model, and the remaining 40 BUS images were used as the test set. For these 120 BUS images, we extracted their characteristic features (CFs) and deep learning features (DLFs) and then concatenated them serially to form the multi-features (MF), as shown in equation (20), i.e., MF = [CF, DLF], where CF = [orientation, EI, PS, SC] and DLF contains the values produced by the modified VGG16. The MF was then normalized and labeled for supervised learning, with "1" indicating malignant and "0" indicating benign. After this preparation, we first used triple cross-validation on the training samples to obtain the best classifier and then used that classifier to evaluate the training samples and the test samples, respectively (a sketch of this pipeline is given below).
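The sketch below illustrates this pipeline with scikit-learn. The feature arrays are hypothetical placeholders standing in for the CF and DLF values computed earlier; only the kernel and the c, g values (from Section 5.5) are taken from the paper.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# hypothetical placeholders for the 80 training / 40 test images; in practice each
# row would hold [orientation, EI, PS, SC] and the 4 DLF values of one BUS image
cf_train, dlf_train = rng.random((80, 4)), rng.random((80, 4))
cf_test, dlf_test = rng.random((40, 4)), rng.random((40, 4))
y_train, y_test = rng.integers(0, 2, 80), rng.integers(0, 2, 40)  # 1 = malignant, 0 = benign

mf_train = np.hstack([cf_train, dlf_train])   # serial concatenation of CF and DLF (equation (20))
mf_test = np.hstack([cf_test, dlf_test])

clf = make_pipeline(MinMaxScaler(),                        # normalize the fused multi-features
                    SVC(kernel="rbf", C=0.5, gamma=0.25))  # c, g values reported in Section 5.5
clf.fit(mf_train, y_train)
print("test accuracy:", clf.score(mf_test, y_test))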

In Table 4, the CF combined with different numbers of deep learning features (DLFs) is used as the multi-features (MFs) for SVM-based classification of BUS images. The results show that the MF composed of the CF and 4 DLF gives the best classification results; therefore, we chose the CF plus 4 DLF as the final MF. Further, as seen from Table 5, using the SVM, the classification accuracy, precision, sensitivity, specificity, and F1-score of the MF are 0.925, 0.905, 0.95, 0.905, and 0.927, respectively, which are higher than the corresponding indicators of the other classification methods. Figure 11 records the ROC curves and AUC values of the SVM classification based on the characteristic features (CFs), the four deep learning features (DLFs), and the multi-features (MFs). The results show that the AUC value of the MF is 0.970, which is higher than that of the CF (0.935) or the 4 DLF (0.895). Therefore, the classification model based on the MF is better than the other classification techniques.


Table 4: SVM classification results with the CF combined with different numbers of DLF.

Classification models   Accuracy   Sensitivity   Specificity   F1-score
2 DLF + CF + SVM        0.875      0.895         0.85          0.872
4 DLF + CF + SVM        0.925      0.905         0.95          0.927
6 DLF + CF + SVM        0.9        0.9           0.9           0.9
8 DLF + CF + SVM        0.85       0.889         0.8           0.84
10 DLF + CF + SVM       0.875      0.941         0.8           0.84
12 DLF + CF + SVM       0.85       0.889         0.8           0.84
16 DLF + CF + SVM       0.875      0.895         0.85          0.872


Table 5: Comparison of the proposed method with other classification methods.

Models              Accuracy   Precision   Sensitivity   Specificity   F1-score
MF + SVM            0.925      0.905       0.95          0.905         0.927
CF + SVM            0.875      0.895       0.85          0.895         0.827
DLF + SVM           0.8        0.875       0.7           0.875         0.778
[3]                 0.901      NA          0.935         0.832         NA
[7]                 0.83       NA          NA            0.824         NA
Inception V3 [36]   0.78       NA          0.77          0.78          NA
VGG19 [36]          0.82       0.70        0.70          0.78          NA
[37]                0.8667     NA          0.9245        0.7838        NA

The bold values indicate that the result of the proposed method is better than that of other classification methods.
5.5. Triple Cross-Validation

Here, we used triple cross-validation to illustrate the advantages of using the multi-features (MF). For each pair of c and g values of the SVM, the training dataset is randomly divided into three parts: two of them are used as the training set and the remaining one as the validation set. The average accuracy over the three validation sets is taken as the accuracy of the SVM with that c, g pair. Here, c = 0.5 is the regularization parameter of the SVM, and g = 0.25 is the parameter of the radial basis function (RBF) kernel.

The contour plot in Figure 12 shows the accuracy values obtained when different c and g values were used during the triple cross-validation. After the cross-validation, the c, g pair with the highest accuracy was taken as the parameters of the final SVM classifier (a sketch of this parameter search is given below). The accuracy during the triple cross-validation reaches over 88%, which is close to the final accuracy of 92.5% obtained with the best c, g; this indicates that the model has good robustness.
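A sketch of this parameter search with 3-fold ("triple") cross-validation; the grid of candidate c and g values is an assumption, and mf_train and y_train are the (hypothetical) multi-feature array and labels from the sketch in Section 5.4.

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "C": [0.125, 0.25, 0.5, 1, 2, 4],         # candidate c values (assumed grid)
    "gamma": [0.0625, 0.125, 0.25, 0.5, 1],   # candidate g values (assumed grid)
}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, scoring="accuracy")
search.fit(mf_train, y_train)                 # three-fold cross-validation on the training set
print(search.best_params_, search.best_score_)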

6. Discussion

Recently, various deep learning models have been used for BUS image classification. For example, references [36, 38–40] used deep learning methods to extract BUS image features and perform classification, obtaining better results than the direct use of classical models such as VGG16, VGG19, ResNet50, and Inception V3. However, two points should be noted. First, because a CAD system is mainly used to assist doctors in the BI-RADS classification of breast tumors, rather than to determine benignity or malignancy on its own, the quantitative methods we propose can better assist doctors in diagnosis. In addition, guided by the clinical experience of doctors and building on the segmentation result, we quantify the regions in which benign and malignant breast tumors differ, such as the posterior shadowing (PS) and edge indistinctness (EI), thereby avoiding the influence of irrelevant areas of the BUS image on the experimental results and reducing the interference of the inherent noise in BUS images.

7. Conclusions

In this work, first, with the help of the clinical experience of doctors, four characteristic features (CFs) of BUS images were designed manually: the orientation, edge indistinctness (EI), characteristics of the posterior shadowing region (PS), and shape complexity (SC). Based on the experiments, we compared the CFs computed from different BUS images, which showed that the CFs designed in this paper can characterize the different properties of benign and malignant BUS images; the experiments also showed that using a single feature to distinguish BUS images is prone to misinterpretation. This paper therefore introduced deep learning features (DLFs) to further improve the classification accuracy. In the DLF extraction experiment, by comparing the classification results of several classical deep neural networks, it was found that the accuracy, sensitivity, specificity, and F1-score of VGG16 are 0.84, 0.86, 0.82, and 0.86, respectively, which are higher than those of the other classical networks. Therefore, we employed a modified VGG16 as the deep learning feature extractor, followed by an SVM to classify the fused CF and DLF. The results showed that the accuracy, precision, sensitivity, specificity, and F1-score of this method are 92.5%, 90.5%, 95%, 90.5%, and 92.7%, respectively, which are better than those of the other methods. Finally, the triple cross-validation of the multi-features (MFs) further indicated that the proposed method can effectively assist doctors in identifying benign and malignant BUS images.

Data Availability

Part of the dataset used in this study can be found at https://www.ultrasoundcases.info/cases/breast-and-axilla/; the other part was provided by the hospital and is not open to the public because it involves patient privacy.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to thank Dr. Shunmin Qiu from the First Affiliated Hospital of Shantou University, China. She provided some datasets used in this paper and gave us some professional advice. This study was supported by the National Natural Science Foundation of China (grant no. 82071992), Basic and Applied Basic Research Foundation of Guangdong Province (grant no. 2020B1515120061), Guangdong Province University Priority Field (Artificial Intelligence) Project (grant no. 2019KZDZX1013), National Key R&D Program of China (grant no. 2020YFC0122103), Key Project of Guangdong Province Science & Technology Plan (grant no. 2015B020233018), and Research Project of Shantou University, China (grant no. NTF17016).

References

1. F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, "Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries," CA: A Cancer Journal for Clinicians, vol. 68, no. 6, pp. 394–424, 2018.
2. R. Vijayarajeswari, P. Parthasarathy, S. Vivekanandan, and A. A. Basha, "Classification of mammogram for early detection of breast cancer using SVM classifier and Hough transform," Measurement, vol. 146, pp. 800–805, 2019.
3. X. Qi, L. Zhang, Y. Chen et al., "Automated diagnosis of breast ultrasonography images using deep neural networks," Medical Image Analysis, vol. 52, pp. 185–198, 2019.
4. J. Y. An, K. M. L. Unsdorfer, and J. C. Weinreb, "BI-RADS, C-RADS, CAD-RADS, LI-RADS, lung-RADS, NI-RADS, O-RADS, PI-RADS, TI-RADS: reporting and data systems," RadioGraphics, vol. 39, no. 5, pp. 1435-1436, 2019.
5. J. Chang, J. Yu, T. Han, H. J. Chang, and E. Park, "A method for classifying medical images using transfer learning: a pilot study on histopathology of breast cancer," in Proceedings of the 2017 IEEE 19th International Conference on E-Health Networking, Applications and Services (Healthcom), pp. 1–4, Dalian, China, October 2017.
6. C. Dromain, B. Boyer, R. Ferre, S. Canale, S. Delaloge, and C. Balleyguier, "Computed-aided diagnosis (CAD) in the detection of breast cancer," European Journal of Radiology, vol. 82, no. 3, pp. 417–423, 2013.
7. M. Byra, H. Piotrzkowska-Wróblewska, K. Dobruch-Sobczak, and A. Nowicki, "Combining Nakagami imaging and convolutional neural network for breast lesion classification," in Proceedings of the 2017 IEEE International Ultrasonics Symposium (IUS), pp. 1–4, Washington, DC, USA, September 2017.
8. H. A. Nugroho, M. Sahar, I. Ardiyanto, R. Indrastuti, and L. Choridah, "Classification of breast ultrasound images based on posterior feature," in Proceedings of the 2016 International Conference on Information Technology Systems and Innovation (ICITSI), pp. 1–4, Bandung, Indonesia, October 2016.
9. Y. Liu, K. Shou, J. Li et al., "Ductal carcinoma in situ of the breast: perspectives on tumor subtype and treatment," BioMed Research International, vol. 2020, Article ID 7251431, 9 pages, 2020.
10. W. G. McCluggage, "Endometriosis‐related pathology: a discussion of selected uncommon benign, premalignant and malignant lesions," Histopathology, vol. 76, no. 1, pp. 76–92, 2020.
11. S. Shaikh and A. Rasheed, "Predicting molecular subtypes of breast cancer with mammography and ultrasound findings: introduction of sono-mammometry score," Radiology Research and Practice, vol. 2021, Article ID 6691958, 12 pages, 2021.
12. X. Liang, J. Yu, J. Liao, and Z. Chen, "Convolutional neural network for breast and thyroid nodules diagnosis in ultrasound imaging," BioMed Research International, vol. 2020, Article ID 1763803, 9 pages, 2020.
13. R. I. R. Thanaraj, B. Anand, J. A. Rahul, and V. Rajinikanth, "Appraisal of breast ultrasound image using Shannon's thresholding and level-set segmentation," Progress in Computing, Analytics and Networking, Springer, Berlin, Germany, 2020.
14. M. Caroccia, A. Chambolle, and D. Slepčev, "Mumford-Shah functionals on graphs and their asymptotics," Nonlinearity, vol. 33, no. 8, pp. 3846–3888, 2020.
15. T. A. Ngo, Z. Lu, and G. Carneiro, "Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance," Medical Image Analysis, vol. 35, pp. 159–171, 2017.
16. C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243–3254, 2010.
17. Y. Li, J. Zhang, M. Chen, H. Lei, G. Luo, and Y. Huang, "Shape based local affine invariant texture characteristics for fabric image retrieval," Multimedia Tools and Applications, vol. 78, no. 11, pp. 15433–15453, 2019.
18. M. K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, pp. 179–187, 1962.
19. Z. Omiotek, "Fractal analysis of the grey and binary images in diagnosis of Hashimoto's thyroiditis," Biocybernetics and Biomedical Engineering, vol. 37, no. 4, pp. 655–665, 2017.
20. P. L. Lin, P. W. Huang, P. Y. Huang, and H. C. Hsu, "Alveolar bone-loss area localization in periodontitis radiographs based on threshold segmentation with a hybrid feature fused of intensity and the H-value of fractional Brownian motion model," Computer Methods and Programs in Biomedicine, vol. 121, no. 3, pp. 117–126, 2015.
21. Z. Zhuang, N. Lei, A. N. Joseph Raj, and S. Qiu, "Application of fractal theory and fuzzy enhancement in ultrasound image segmentation," Medical & Biological Engineering & Computing, vol. 57, no. 3, pp. 623–632, 2019.
22. B. Mandelbrot, "How long is the coast of Britain? Statistical self-similarity and fractional dimension," Science, vol. 156, no. 3775, pp. 636–638, 1967.
23. P. G. Okubo and K. Aki, "Fractal geometry in the San Andreas fault system," Journal of Geophysical Research, vol. 92, no. B1, pp. 345–355, 1987.
24. A. Faghih and A. Nourbakhsh, "Implication of surface fractal analysis to evaluate the relative sensitivity of topography to active tectonics, Zagros mountains, Iran," Journal of Mountain Science, vol. 12, no. 1, pp. 177–185, 2015.
25. J. Dong, Y. Ju, F. Gao, and H. Xie, "Estimation of the fractal dimension of Weierstrass-Mandelbrot function based on cuckoo search methods," Fractals, vol. 25, no. 6, Article ID 1750065, 2017.
26. M. S. Taqqu, "Benoît Mandelbrot and fractional Brownian motion," Statistical Science, vol. 28, no. 1, pp. 131–134, 2013.
27. J. Gatheral, T. Jaisson, and M. Rosenbaum, "Volatility is rough," Quantitative Finance, vol. 18, no. 6, pp. 933–949, 2018.
28. E. S. Olivas, J. D. M. Guerrero, M. Martinez-Sober, J. R. Magdalena-Benedito, and L. Serrano, Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, IGI Global, Hershey, PA, USA, 2009.
29. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, https://arxiv.org/abs/1409.1556.
30. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, Las Vegas, NV, USA, June 2016.
31. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826, Las Vegas, NV, USA, June 2016.
32. A. Ortiz, J. Munilla, F. J. Martínez-Murcia, J. M. Górriz, and J. Ramírez, "Empirical functional PCA for 3D image feature extraction through fractal sampling," International Journal of Neural Systems, vol. 29, no. 2, Article ID 1850040, 2019.
33. H. Chen, L. Xu, W. Ai, B. Lin, Q. Feng, and K. Cai, "Kernel functions embedded in support vector machine learning models for rapid water pollution assessment via near-infrared spectroscopy," Science of the Total Environment, vol. 714, Article ID 136765, 2020.
34. V. Mehdipour and M. Memarianfard, "Ground-level O3 sensitivity analysis using support vector machine with radial basis function," International Journal of Environmental Science and Technology, vol. 16, no. 6, pp. 2745–2754, 2019.
35. W. X. Liao, P. He, J. Hao et al., "Automatic identification of breast ultrasound image based on supervised block-based region segmentation algorithm and features combination migration deep learning model," IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 4, pp. 984–993, 2019.
36. N. Antropova, B. Q. Huynh, and M. L. Giger, "A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets," Medical Physics, vol. 44, no. 10, pp. 5162–5171, 2017.
37. M. Wei, Y. Du, X. Wu et al., "A benign and malignant breast tumor classification method via efficiently combining texture and morphological features on ultrasound images," Computational and Mathematical Methods in Medicine, vol. 2020, Article ID 5894010, 12 pages, 2020.
38. S. Han, H.-K. Kang, J.-Y. Jeong et al., "A deep learning framework for supporting the classification of breast lesions in ultrasound images," Physics in Medicine & Biology, vol. 62, no. 19, pp. 7714–7728, 2017.
39. W. K. Moon, Y. W. Lee, H. H. Ke, S. H. Lee, C. S. Huang, and R. F. Chang, "Computer‐aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks," Computer Methods and Programs in Biomedicine, vol. 190, Article ID 105361, 2020.
40. T. Xiao, L. Liu, K. Li, W. Qin, S. Yu, and Z. Li, "Comparison of transferred deep neural networks in ultrasonic breast masses discrimination," BioMed Research International, vol. 2018, Article ID 4605191, 9 pages, 2018.

Copyright © 2021 Zhemin Zhuang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

