International Journal of Biomedical Imaging
Journal metrics
Acceptance rate: 7%
Submission to final decision: 127 days
Acceptance to publication: 23 days
CiteScore: 10.200
Journal Citation Indicator: 1.310
Impact Factor: 7.6

Empowering Radiographers: A Call for Integrated AI Training in University Curricula

 Journal profile

International Journal of Biomedical Imaging aims to promote research and development of biomedical imaging by publishing high-quality research articles and reviews in this rapidly growing interdisciplinary field.

 Editor spotlight

International Journal of Biomedical Imaging maintains an Editorial Board of practicing researchers from around the world to ensure that manuscripts are handled by editors who are experts in the field of study.

 Special Issues

Do you think there is an emerging area of research that really needs to be highlighted? Or an existing research area that has been overlooked or would benefit from deeper investigation? Raise the profile of a research area by leading a Special Issue.

Latest Articles

Research Article

Facile Conversion and Optimization of Structured Illumination Image Reconstruction Code into the GPU Environment

Superresolution structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the speed of the reconstruction algorithm that forms a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations needed to produce the final image. Adding to this burden, the code, even within the MATLAB environment, is often written inefficiently by microscopists without formal computer science training, and it typically does not exploit the processing power of the computer's graphics processing unit (GPU). To address these issues, we present simple but efficient approaches to first revise MATLAB code and then convert it to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed, as shown for the image-denoising Hessian-SIM algorithm. Importantly, the improved algorithm produces images identical in quality to the original.
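The two-step strategy described above — first vectorize loop-heavy reconstruction code, then swap the array backend onto the GPU — can be illustrated outside MATLAB as well. The sketch below (Python/NumPy, a hypothetical illustration rather than the authors' code) rewrites a naive per-pixel mean filter as whole-array operations; once code is in this form, a GPU port is often a near drop-in backend change (e.g. CuPy in Python, or `gpuArray` in MATLAB).

```python
import numpy as np

def denoise_loop(img):
    # Naive per-pixel 3x3 mean filter: the kind of loop-heavy code
    # that runs slowly in interpreted environments like MATLAB or Python.
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return out

def denoise_vectorized(img):
    # Step 1 of the approach: rewrite as whole-array operations.
    # Summing nine shifted copies of the image replaces the double loop.
    s = sum(np.roll(np.roll(img, di, 0), dj, 1)
            for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return s / 9.0

# Step 2 (conceptual): once vectorized, moving to the GPU is often a
# near drop-in backend swap, e.g. `import cupy as np` instead of NumPy.
```

On interior pixels the two versions produce identical results (the vectorized one wraps at the borders); it is the whole-array form that maps efficiently onto GPU hardware.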

Research Article

White Matter Fiber Tracking Method with Adaptive Correction of Tracking Direction

Background. The deterministic fiber tracking method has the advantages of high computational efficiency and good repeatability, making it suitable for noninvasive estimation of brain structural connectivity in clinical settings. To address the tendency of current classical deterministic methods to deviate from the correct tracking direction in regions of crossing fibers, we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD. Methods. The proposed FTACTD method tracks white matter fibers accurately by adaptively adjusting the deflection direction based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of direction correction changes adaptively according to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Furthermore, both forward and reverse tracking are employed to track the entire fiber. The effectiveness of the proposed method is validated and quantified using both simulated and real brain datasets. Indicators such as invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC) are used to assess performance on simulated data and real diffusion-weighted imaging (DWI) data. Results. The experiments on simulated data show that the FTACTD method outperforms existing methods, achieving the highest number of VB with a total of 13 bundles. It also identifies the fewest incorrect fiber bundles, with only 32 bundles identified as wrong. Compared to the FACT method, the FTACTD method reduces the number of NC by 36.38%. In terms of VC, the FTACTD method surpasses even SD_Stream, the best-performing deterministic method, by 1.64%.
Extensive in vivo experiments demonstrate the superiority of the proposed method in tracking more accurate and complete fiber paths with improved continuity. Conclusion. The FTACTD method proposed in this study yields superior tracking results and provides a methodological basis for the investigation, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.
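For readers unfamiliar with deterministic tractography, the sketch below shows only the classical scheme that methods like FTACTD build on — stepping a streamline along the sign-corrected principal eigenvector of the local diffusion tensor and stopping at sharp turns. It is a hypothetical Python/NumPy illustration, not the paper's algorithm; the adaptive, tensor-shape-based direction correction is precisely the contribution not shown here.

```python
import numpy as np

def principal_direction(tensor, prev_dir):
    # Principal eigenvector of a 3x3 diffusion tensor. Eigenvectors are
    # only defined up to sign, so flip it to agree with the incoming
    # fiber direction.
    vals, vecs = np.linalg.eigh(tensor)
    v = vecs[:, np.argmax(vals)]
    if np.dot(v, prev_dir) < 0:
        v = -v
    return v

def track(get_tensor, seed, direction, step=0.5, n_steps=100, angle_thresh=60.0):
    # Basic deterministic streamline tracking: follow the sign-corrected
    # principal direction voxel by voxel, stopping at sharp turns.
    points = [np.asarray(seed, float)]
    d = np.asarray(direction, float)
    for _ in range(n_steps):
        v = principal_direction(get_tensor(points[-1]), d)
        angle = np.degrees(np.arccos(np.clip(np.dot(v, d), -1.0, 1.0)))
        if angle > angle_thresh:
            break  # turn too sharp: likely left the tract
        points.append(points[-1] + step * v)
        d = v
    return np.array(points)
```

In a homogeneous field whose tensor points along x, the streamline simply advances along x; crossing-fiber regions are exactly where this naive rule deviates and where an adaptive correction is needed.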

Research Article

Skin Cancer Segmentation and Classification Using Vision Transformer for Automatic Analysis in Dermatoscopy-Based Noninvasive Digital System

Skin cancer is a significant health concern worldwide, and early, accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this study, we introduce an approach to skin cancer classification using the vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study utilizes the HAM10000 dataset, a publicly available dataset comprising 10,015 skin lesion images classified into two categories: benign (6,705 images) and malignant (3,310 images). The dataset consists of high-resolution images captured with dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques, such as normalization and augmentation, are applied to enhance the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task; the model leverages the self-attention mechanism to capture intricate spatial and long-range dependencies within the images, enabling it to learn features relevant for accurate classification. The Segment Anything Model (SAM) is employed to segment the cancerous areas from the images, achieving an IoU of 96.01% and a Dice coefficient of 98.14%; various pretrained models are then used for classification within the vision transformer architecture. Extensive experiments and evaluations are conducted to assess the performance of our approach. The results demonstrate the general superiority of the vision transformer model over traditional deep learning architectures for skin cancer classification, with some exceptions.
Upon experimenting with six different models — ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT — we found that the approach achieves 96.15% accuracy using Google's ViT patch-32 model with a low false negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.
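The segmentation scores reported above, IoU and Dice, are standard overlap metrics between a predicted mask and the ground-truth annotation. A minimal sketch of both, assuming binary masks (this is the generic definition, not code from the study):

```python
import numpy as np

def iou(pred, target):
    # Intersection over union of two binary masks: |A & B| / |A | B|.
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    # Dice coefficient: 2 * |A & B| / (|A| + |B|).
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same masks — consistent with the 98.14% Dice exceeding the 96.01% IoU above.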

Research Article

Segmentation of Dynamic Total-Body [18F]-FDG PET Images Using Unsupervised Clustering

Clustering of time activity curves from PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied, because the available total-body data have been limited to animal studies. New PET scanners that can acquire total-body scans from humans are now becoming more common, opening many clinically interesting opportunities. Organ-level segmentation of PET images therefore has important applications, yet it lacks sufficient research. In this proof-of-concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; the segmentation is done purely from the dynamic PET images. The tested methods are common building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate whether these basic tools are suited to the arising human total-body PET image segmentation task. First, we excluded methods that were computationally too demanding for the large datasets from human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, k-means and the Gaussian mixture model (GMM), for further analysis. We combined k-means with two different preprocessing approaches, namely, principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images.
Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the coming large human PET images, and a few actual human total-body images to ensure that our conclusions from the rat data generalise to human data. Our results show that ICA combined with k-means performs worse than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently well, it was by far the slowest of the tested approaches, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging one. We therefore conclude that there is a lack of accurate and computationally light general-purpose segmentation methods for analysing dynamic total-body PET images.
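The PCA + k-means pipeline that emerged as the most promising candidate can be sketched as below. This is a hypothetical illustration, not the authors' implementation (which would normally use an optimized library); the toy k-means is included only to make the idea concrete. Each row of `X` plays the role of one voxel's time activity curve:

```python
import numpy as np

def pca(X, n_components):
    # Project voxel time-activity curves onto their top principal
    # components to reduce dimensionality before clustering.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, n_iter=50, seed=0):
    # Minimal Lloyd's k-means; each row of X is one feature vector.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Voxels with similar kinetics cluster together, which is why this pipeline can separate organs from dynamic data alone; its computational lightness relative to GMM is what made it attractive in the comparison above.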

Research Article

Automatic Detection of AMD and DME Retinal Pathologies Using Deep Learning

Diabetic macular edema (DME) and age-related macular degeneration (AMD) are two common eye diseases. They are often undiagnosed or diagnosed late, which can result in permanent and irreversible vision loss. Early detection and treatment of these diseases can therefore prevent vision loss, save money, and provide a better quality of life for individuals. Optical coherence tomography (OCT) imaging is widely applied to identify eye diseases, including DME and AMD. In this work, we developed automatic deep learning-based methods to detect these pathologies using SD-OCT scans. The convolutional neural network (CNN) we developed from scratch gave the best classification score, with an accuracy higher than 99% on the Duke dataset of OCT images.

Research Article

Assessment of the Impact of Turbo Factor on Image Quality and Tissue Volumetrics in Brain Magnetic Resonance Imaging Using the Three-Dimensional T1-Weighted (3D T1W) Sequence

Background. The 3D T1W turbo field echo (TFE) sequence is a standard imaging method for acquiring high-contrast images of the brain. However, the contrast-to-noise ratio (CNR) can be affected by the turbo factor, which could affect the delineation and segmentation of various structures in the brain and may consequently lead to misdiagnosis. This study is aimed at evaluating the effect of the turbo factor on image quality and volumetric measurement reproducibility in brain magnetic resonance imaging (MRI). Methods. Brain images of five healthy volunteers with no history of neurological disease were acquired on a 1.5 T MRI scanner with turbo factors of 50, 100, 150, 200, and 225. The images were processed and analyzed with FreeSurfer. The influence of the turbo factor on image quality and on the reproducibility of brain volume measurements was investigated. Image quality metrics included the signal-to-noise ratio (SNR) of white matter (WM), the CNR between gray matter and white matter (GM/WM) and between gray matter and cerebrospinal fluid (GM/CSF), and the Euler number (EN). Structural brain volume measurements of WM, GM, and CSF were also conducted. Results. Turbo factor 200 produced the best SNR and GM/WM CNR, but turbo factor 100 offered the most reproducible SNR and GM/WM CNR. Turbo factor 50 had the worst and least reproducible SNR, whereas turbo factor 225 had the worst and least reproducible GM/WM CNR. Turbo factor 200 also had the best GM/CSF CNR, but offered the least reproducible GM/CSF CNR. Turbo factor 225 performed best on EN (-21), while turbo factor 200 was the next most reproducible on EN (11). Turbo factor 200 also had the shortest data acquisition time, in addition to its superior SNR, GM/WM CNR, and GM/CSF CNR and its good reproducibility on EN.
Neither the image quality metrics nor the volumetric measurements varied significantly with turbo factor over the range used in the study (one-way ANOVA). Conclusion. Since no significant differences were observed among the turbo factors in terms of image quality and brain structure volumes, turbo factor 200, with a 74% acquisition time reduction, was found to be optimal for brain MR imaging at 1.5 T.
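The SNR and CNR figures of merit compared above have several conventions in the MRI literature. One common definition (assumed here for illustration; the study may use a different convention) divides the mean ROI intensity, or the difference between two tissue ROI means, by the standard deviation of a background (air) region:

```python
from statistics import mean, stdev

def snr(tissue_roi, background_roi):
    # Signal-to-noise ratio: mean tissue intensity over the standard
    # deviation of a background (air) region.
    return mean(tissue_roi) / stdev(background_roi)

def cnr(roi_a, roi_b, background_roi):
    # Contrast-to-noise ratio between two tissues (e.g. GM and WM),
    # normalised by the same background noise estimate.
    return abs(mean(roi_a) - mean(roi_b)) / stdev(background_roi)
```

Under this definition the GM/WM CNR depends on both the tissue contrast produced by the sequence and the noise level, which is why a turbo factor can trade one against the other.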
