Reconstructing the Photoacoustic Image with High Quality using the Deep Neural Network Model
Journal profile
Contrast Media & Molecular Imaging is an exciting journal in the area of contrast agents and molecular imaging, covering all areas of imaging technologies with a special emphasis on MRI and PET.
Editor spotlight
Chief Editor, Professor Zimmer, focuses on the development and use of PET radiotracers for new applications of PET/MRI imaging in neuroscience and pharmacology.
Latest Articles
Deep Transfer Learning Technique for Multimodal Disease Classification in Plant Images
Rice (Oryza sativa) is India’s major crop, and India has the most land dedicated to rice agriculture, covering both brown and white rice. Rice cultivation creates jobs and contributes significantly to the stability of the gross domestic product (GDP). Recognizing infection or disease from plant images is an active research topic at the intersection of agriculture and modern computing. This paper provides an overview of numerous methodologies and analyzes the key characteristics of the classifiers and strategies used to detect rice diseases. Papers from the last decade covering several rice plant diseases are examined in depth, and a survey organized around their essential aspects is presented. The survey differentiates approaches according to the classifier used and summarizes the many strategies applied to identifying rice plant disease. Furthermore, a model for detecting rice disease using an enhanced convolutional neural network (CNN) is proposed. Deep neural networks have been highly successful at image classification, and this work shows how they can be applied to plant disease recognition framed as an image classification problem. Finally, the paper compares the existing approaches on the basis of their accuracy.
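As a rough illustration of the CNN-based classification approach described in this abstract (the paper's enhanced architecture, dataset, and class list are not reproduced here), the following minimal sketch trains a small Keras image classifier on rice leaf images; the class count, layer sizes, and directory layout are assumptions.

```python
# Minimal sketch of a CNN rice-leaf disease classifier.
# Hypothetical: NUM_CLASSES, layer sizes, and the "rice_leaf_images/<class>/<img>.jpg" layout.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # assumption: e.g. blast, brown spot, bacterial blight, healthy
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_leaf_images", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_leaf_images", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```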
18F-FDG PET/CT Image Deep Learning Predicts Colon Cancer Survival
Colon cancer is a type of cancer that begins in the large intestine. In efficacy evaluation, postoperative recurrence prediction, and metastasis monitoring of colon cancer, traditional medical image analysis is highly dependent on the individual skill of the physician. This increases physicians' workload and pressure during patient treatment, and traditional analysis methods also suffer from insufficient prediction accuracy, slow prediction speed, and a risk of prediction errors. When 18F-FDG PET/CT images are analyzed with traditional methods, treatment plans may be delayed and diagnoses may be mistaken, which adversely affects the survival of colon cancer patients. Although 18F-FDG PET/CT has advantages in image clarity and accuracy over traditional medical imaging, and analysis based on these images has some value for predicting the survival of colon cancer patients, several shortcomings remain: such analysis relies too heavily on the technical advantages of the 18F-FDG PET/CT images themselves; it has not shed its dependence on the physician's individual expertise when analyzing and predicting from image data; traditional medical image analysis methods are still used for analysis and prediction; and there has been no breakthrough in image analysis performance. To address these problems, this paper combines deep learning theory with three algorithms, an improved RBM algorithm, a deep-learning-based image feature extraction method, and a regression neural network, to analyze and predict 18F-FDG PET/CT images, and establishes a deep-learning-based 18F-FDG PET/CT image survival analysis and prediction model. Four aspects were studied with this model: survival prediction accuracy, survival prediction speed, survival prediction precision, and physician satisfaction. The results show that, compared with traditional medical image analysis methods, the deep-learning-based model improves prediction accuracy by 0.83%, prediction speed by 3.42%, and prediction precision by 6.13%. These results indicate that the model is of great significance for improving the survival rate of colon cancer patients and also promotes the development of the medical industry.
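To make the regression step concrete, here is a minimal sketch of a regression network that maps image-derived feature vectors to a survival estimate. The feature dimension, network sizes, and placeholder data are assumptions; the paper's improved RBM and feature extraction stages are not reproduced.

```python
# Sketch of a survival-prediction regression network on PET/CT-derived features.
# Hypothetical: FEATURE_DIM, layer sizes, and the random placeholder data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

FEATURE_DIM = 64                      # assumption: length of the extracted feature vector

# Placeholder data standing in for deep features extracted from 18F-FDG PET/CT images.
x_train = np.random.rand(200, FEATURE_DIM).astype("float32")
y_train = np.random.rand(200, 1).astype("float32")   # e.g. normalized survival time

model = models.Sequential([
    layers.Input(shape=(FEATURE_DIM,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                  # regression output: predicted survival
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(x_train, y_train, epochs=20, batch_size=16, verbose=0)
```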
Noise Estimation and Type Identification in Natural Scene and Medical Images using Deep Learning Approaches
The image enhancement for the natural images is the vast field where the quality of the images degrades based on the capturing and processing methods employed by the capturing devices. Based on noise type and estimation of noise, filter need to be adopted for enhancing the quality of the image. In the same manner, the medical field also needs some filtering mechanism to reduce the noise and detection of the disease based on the clarity of the image captured; in accordance with it, the preprocessing steps play a vital role to reduce the burden on the radiologist to make the decision on presence of disease. Based on the estimated noise and its type, the filters are selected to delete the unwanted signals from the image. Hence, identifying noise types and denoising play an important role in image analysis. The proposed framework addresses the noise estimation and filtering process to obtain the enhanced images. This paper estimates and detects the noise types, namely Gaussian, motion artifacts, Poisson, salt-andpepper, and speckle noises. Noise is estimated by using discrete wavelet transformation (DWT). This separates the image into quadruple sub-bands. Noise and HH sub-band are high-frequency components. HH sub-band also has vertical edges. These vertical edges are removed by performing Hadamard operation on downsampled Sobel edge-detected image and HH sub-band. Using HH sub-band after removing vertical edges is considered for estimating the noise. The Rician energy equation is used to estimate the noise. This is given as input for Artificial Neural Network to improve the estimated noise level. For identifying noise type, CNN is used. After removing vertical edges, the HH sub-band is given to the CNN model for classification. The classification accuracy results of identifying noise type are 100% on natural images and 96.3% on medical images.
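The HH-sub-band step above can be sketched in a few lines: a single-level 2D DWT, suppression of vertical Sobel edges via an elementwise (Hadamard) product, then a simple noise estimate from the cleaned sub-band. The threshold for the edge mask and the final median-based estimator are assumptions, not the paper's exact Rician energy equation, and the ANN refinement and CNN classifier are omitted.

```python
# Sketch of HH-sub-band noise estimation with vertical-edge suppression.
# Hypothetical: the edge-mask threshold and the median-based sigma estimator.
import numpy as np
import pywt
from scipy import ndimage

def estimate_noise_sigma(image: np.ndarray) -> float:
    # Single-level 2D DWT: approximation (LL) plus LH, HL, HH detail sub-bands.
    _, (LH, HL, HH) = pywt.dwt2(image.astype("float64"), "haar")

    # Downsample the image by 2 so the Sobel edge map matches the HH size.
    small = image[::2, ::2].astype("float64")
    vertical_edges = np.abs(ndimage.sobel(small, axis=1))  # horizontal gradient -> vertical edges

    # Binary mask: 0 on strong vertical edges, 1 elsewhere (assumed threshold).
    mask = (vertical_edges < vertical_edges.mean() + vertical_edges.std()).astype("float64")
    mask = mask[: HH.shape[0], : HH.shape[1]]

    # Hadamard (elementwise) product suppresses the vertical-edge responses in HH.
    hh_clean = HH * mask

    # Simple robust noise estimate from the cleaned HH sub-band.
    return float(np.median(np.abs(hh_clean)) / 0.6745)

noisy = np.random.normal(0.0, 10.0, size=(256, 256))
print("estimated sigma:", estimate_noise_sigma(noisy))
```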
Evaluation of Inflammatory Infiltration in the Retroperitoneal Space of Acute Pancreatitis Using Computed Tomography and Its Correlation with Clinical Severity
This paper investigates the correlation between the degree of CT inflammatory infiltration in the retroperitoneal space and the clinical severity of acute pancreatitis (AP). A total of 113 patients were included on the basis of the diagnostic criteria. The patients' general data were recorded, and the relationship between the computed tomography severity index (CTSI) and pleural effusion (PE), involvement and degree of inflammatory infiltration of the retroperitoneal space (RPS), number of peripancreatic effusion sites, and degree of pancreatic necrosis on contrast-enhanced CT at different time points was studied. The results showed that the mean age of onset in females was later than that in males. The RPS was involved to varying degrees in 62 cases, a positive rate of 54.9% (62/113); the total involvement rates of the anterior pararenal space (APS) only, of both the APS and the perirenal space (PS), and of the APS, PS, and posterior pararenal space (PPS) were 46.9% (53/113), 53.1% (60/113), and 17.7% (20/113), respectively. The degree of inflammatory infiltration in the RPS worsened as the CTSI score increased; the incidence of PE was higher in the >48-hour group than in the <48-hour group; and necrosis of grade >50% was predominant (43.2%) 5 to 6 days after onset, with a higher detection rate than in other time periods (P < 0.05). Thus, when the PPS is involved, the patient's condition can be regarded as severe acute pancreatitis (SAP), and the greater the degree of inflammatory infiltration in the retroperitoneum, the greater the severity of AP. Contrast-enhanced CT examination 5 to 6 days after onset in patients with AP revealed the greatest extent of pancreatic necrosis.
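For readers unfamiliar with the CTSI referenced above, the commonly cited Balthazar scheme tallies a grade score (A to E) plus necrosis points; the abstract does not give its own scoring table, so the following small sketch is illustrative only and its cut-offs are assumptions.

```python
# Hedged sketch of a CTSI tally following the commonly cited Balthazar scheme
# (grade A-E points plus necrosis points); cut-offs here are assumptions.
BALTHAZAR_POINTS = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}

def necrosis_points(percent_necrosis: float) -> int:
    # Commonly used cut-offs: 0%, <=30%, 30-50%, >50%.
    if percent_necrosis <= 0:
        return 0
    if percent_necrosis <= 30:
        return 2
    if percent_necrosis <= 50:
        return 4
    return 6

def ctsi(balthazar_grade: str, percent_necrosis: float) -> int:
    return BALTHAZAR_POINTS[balthazar_grade.upper()] + necrosis_points(percent_necrosis)

# Example: grade E pancreatitis with >50% necrosis -> CTSI of 10 (severe).
print(ctsi("E", 60))
```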
MicroRNA-27a Suppresses the Toxic Action of Mepivacaine on Breast Cancer Cells via Inositol-Requiring Enzyme 1-TNF Receptor-Associated Factor 2
Objective. To investigate whether microRNA-27a suppresses the toxic effects of mepivacaine on breast cancer cells via the inositol-requiring enzyme 1 (IRE1)-TNF receptor-associated factor 2 (TRAF2) pathway. Methods. miR-27a was elevated in the MCF-7 breast cancer cell (BCC) line, and control, mepivacaine, and elevated groups were set up. Cells from each group were examined for inflammatory progression. Results. Elevated miR-27a in MCF-7 cells distinctly augmented cell advancement () and declined cell progression (). Meanwhile, miR-27a reduced the content of the intracellular inflammatory factors IL-1β () and IL-6 (), elevated the content of IL-10 (), suppressed the levels of cleaved caspase-3 and phosphorylated signal transducer and activator of transcription 3 (p-STAT3) (), and increased the Bcl-2/Bax ratio (). Conclusion. Elevated miR-27a in the MCF-7 BCC line was effective in reducing the toxic effects of mepivacaine on the cells and enhancing cell progression. This mechanism is thought to be related to activation of the IRE1-TRAF2 signaling pathway in BCC. The findings may provide a theoretical basis for targeted treatment of breast cancer in clinical practice.
An Ensemble of Transfer Learning Models for the Prediction of Skin Lesions with Conditional Generative Adversarial Networks
Skin cancer is one of the most serious forms of cancer, and it can spread to other parts of the body if not detected early; it is therefore crucial to diagnose and treat skin cancer patients at an early stage. Manual diagnosis of skin cancer is time-consuming and expensive, and the high degree of similarity between different skin lesions often leads to incorrect diagnoses. Improved categorization of multi-class skin lesions requires the development of automated diagnostic systems. We offer a fully automated method for classifying several skin lesions by fine-tuning deep learning models, namely VGG16, ResNet50, and ResNet101. Before model creation, the training dataset undergoes data augmentation with traditional image transformation techniques and generative adversarial networks (GANs) to prevent the class imbalance issues that can lead to model overfitting. In this study, we investigate the feasibility of generating realistic-looking dermoscopic images using a conditional generative adversarial network (CGAN). Traditional augmentation methods are then used to enlarge the existing training set and improve the performance of the pretrained deep models on the skin lesion classification task; this improved performance is compared with that of models developed on the unbalanced dataset. In addition, we formed an ensemble of the fine-tuned transfer learning models, trained on both the balanced and the unbalanced datasets, and used it to make predictions. With appropriate data augmentation, the proposed models attained accuracies of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101, and the ensemble of these models increased the accuracy to 93.5%. The performance of the models is discussed comprehensively, and we conclude that this method yields enhanced performance in skin lesion categorization compared with previous efforts.
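The ensemble idea in this abstract can be sketched briefly: fine-tune ImageNet-pretrained VGG16, ResNet50, and ResNet101 backbones on the lesion images and average their softmax outputs. The class count, input size, classification head, and placeholder batch below are assumptions, and the CGAN augmentation stage is not shown.

```python
# Sketch of an ensemble of fine-tuned transfer learning models for skin lesions.
# Hypothetical: NUM_CLASSES, the dense head, and the random placeholder batch.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50, ResNet101

NUM_CLASSES = 7              # assumption: e.g. seven dermoscopic lesion classes
IMG_SHAPE = (224, 224, 3)

def build_classifier(backbone_fn):
    backbone = backbone_fn(include_top=False, weights="imagenet",
                           input_shape=IMG_SHAPE, pooling="avg")
    backbone.trainable = False          # freeze, then optionally unfreeze top blocks
    return models.Sequential([
        backbone,
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

members = [build_classifier(fn) for fn in (VGG16, ResNet50, ResNet101)]
for m in members:
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    # m.fit(train_ds, validation_data=val_ds, epochs=10)  # train on augmented data

# Ensemble prediction: average the member probabilities, then take the argmax.
batch = np.random.rand(4, *IMG_SHAPE).astype("float32")   # placeholder images
probs = np.mean([m.predict(batch, verbose=0) for m in members], axis=0)
print(probs.argmax(axis=1))
```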