Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities
Image reconstruction in magnetic resonance imaging (MRI) and computed tomography (CT) is the mathematical process that generates images from data acquired at many different angles around the patient. Image reconstruction has a fundamental impact on image quality. In recent years, the literature has focused on deep learning and its applications in medical imaging, particularly image reconstruction. Because deep learning models perform well across a wide variety of vision applications, a considerable amount of work has recently been carried out on image reconstruction in medical imaging. In the current age of technology, MRI and CT stand out as the most scientifically appropriate imaging modalities for identifying and diagnosing different diseases. This study surveys a number of deep learning image reconstruction approaches and provides a comprehensive review of the most widely used databases. We also discuss the challenges and promising future directions for medical image reconstruction.
The reconstruction of images is an integral part of many systems of visual perception, including the partitioning of images into segments or objects. Reconstruction of medical images is one of the most basic and essential elements of medical imaging, the main goal of which is to obtain high-quality medical images for clinical use at the lowest cost and risk to patients. The process of image reconstruction can be characterized as importing two-dimensional data into a computer and then improving or exploring the image by transforming it into a form that is more constructive and useful to the human observer. In computer vision and image processing, deep learning has been commonly used to process existing images, enhance them, and extract features from them. Deep learning (DL) approaches have been effectively applied in medical imaging, including computer-aided detection and diagnosis, radiomics, and medical image analysis [2, 3]. Deep neural networks are highly effective in a wide variety of applications, often matching or exceeding human performance. However, making thousands of predictions a day on popular graphics processing units carries a considerable energy cost. Likewise, the speed at which deep neural networks must deliver predictions, and the transmission of networks with millions of parameters across low-bandwidth channels, impose further constraints. Operating on resource-constrained hardware, such as smartphones, robots, or vehicles, therefore requires significant progress on all of these issues, and model compactness and efficiency have become a focus of concern in the field of deep learning. The recent rise of deep learning techniques has made it possible for deep models to solve increasingly difficult and complex problems.
Convolutional neural networks (CNNs) have been shown to outperform conventional solutions and sophisticated reconstruction algorithms in many domains. Deep learning has been a huge success, and data analysis in particular has become a rising trend [4, 5]. In the proposed system I, the most important patches in the image are first detected, and then a three-hidden-layer convolutional neural network (CNN) is designed and trained for feature extraction and patch classification. The proposed system II uses a two-layer long short-term memory (LSTM) network for classification and a CNN for feature extraction. For image grading, the LSTM network is used to consider all patches of an image simultaneously. Recent advances in efficient computational resources, including cloud computing systems and GPUs, have increased the usage and applicability of deep learning in a variety of disciplines, particularly medical image reconstruction.
The remainder of this work is structured as follows: Section 2 gives an overview of deep learning applications in medical imaging. Section 2.1 reviews techniques of image reconstruction. Section 2.9 presents a detailed discussion of widely used databases for medical image reconstruction. The key problems and future directions for image reconstruction are discussed in Section 3. In Section 3.2, we present our review conclusions.
1.1. Deep Learning-Based Image Reconstruction
Artificial neural networks (ANNs) are the foundation of deep learning techniques. Deep learning gained popularity in 2012, when a DL-based approach dominated a computer vision competition. Deep learning approaches have steadily improved since 2010 and, by 2015, had surpassed human accuracy in large-scale image identification problems. Deep learning learns directly from image data, whereas conventional approaches require human involvement for feature extraction. The studies [8–10] provide a more general review of deep learning. One study proposes a deep feed-forward neural network approach to classify binary microarray datasets. The proposed method is tested against CNS, Colon, Prostate, Leukemia, Ovarian, Lung-Harvard2, Lung-Michigan, and Breast cancers using eight standard microarray cancer datasets. According to one study, the area of medical image reconstruction has gone through three stages of growth, as shown in Table 1.
1.2. Medical Image Reconstruction Using Deep Learning on MRI
Deep learning applications in medical image reconstruction have a modest amount of published material. Machine learning, according to scientists, might also be used for medical image reconstruction, as it has been effectively used for image-processing tasks such as classification, segmentation, super-resolution, and edge detection. The primary goal of our research is to conduct a review of the current literature on medical image reconstruction. Related research on medical imaging can be found in [13–15]. MRI has revolutionized radiology and medicine since its inception in the early 1970s. In addition to a high-quality data collection process, image reconstruction is a key step in guaranteeing good MRI image quality. Although the first magnetic resonance images were acquired using an iterative reconstruction algorithm from data resembling radial projections of the imaged specimen, non-Cartesian acquisition and iterative reconstruction techniques were not adopted in clinical MRI for several years, and their use is still very limited today. There are two explanations for this. First, in the presence of field inhomogeneity or gradient waveform imperfections, the fundamental assumption that the measured data are radial projections of the imaged object fails. Second, practical application is restricted by the long reconstruction times associated with iterative reconstruction algorithms. MRI reconstruction became practical and the image quality acceptable only after the introduction of spin-warp (Cartesian) imaging, which made it possible to use the fast Fourier transform (FFT) for image reconstruction. MRI reconstruction was made effective by the k-space formalism [18, 19] and the FFT. This sensitivity to multiple physical effects makes MRI very versatile but also vulnerable to artefacts.
The simple Fourier signal model must be extended to capture the whole physical description underlying image generation in order to produce an artefact-free image or a quantitative map of a tissue or device property. The system view of MRI is presented in Figure 1. This can be achieved by formulating image reconstruction as an inverse problem and applying a suitable algorithm to solve it. MRI has come a long way over the last 45 years. With growing computational capacity and the development of novel reconstruction techniques, we are now able to solve more complex problems in acceptable reconstruction times. The availability of technical facilities as well as the availability and use of medical technology in the European Union (EU) is presented in Figure 2. Medical technology here refers to a variety of equipment used for diagnostic imaging, for example, magnetic resonance imaging (MRI) units. While several model-based reconstruction techniques have been developed to solve specific problems, providing a complete description of the reconstruction measurement process remains a challenge for future work. Table 2 shows an overview of some of the papers reviewed. Furthermore, we reviewed other modalities including CT, ultrasound, and PET.
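In the fully sampled Cartesian (spin-warp) case discussed above, reconstruction reduces to an inverse FFT of the acquired k-space data. A minimal NumPy sketch, using a hypothetical square phantom as the imaged object:

```python
import numpy as np

def reconstruct_cartesian(kspace):
    """Reconstruct an MR image from fully sampled Cartesian k-space data
    via the inverse fast Fourier transform (the spin-warp/FFT model)."""
    # Shift zero frequency to the array origin, invert, shift back to centre.
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(image)  # the magnitude image that is viewed clinically

# Simulate k-space data from a synthetic square "phantom" and reconstruct it.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
reconstructed = reconstruct_cartesian(kspace)
```

With complete, noise-free Cartesian sampling the reconstruction is exact, which is why undersampled or non-Cartesian acquisitions are the cases that require iterative or learned reconstruction.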
1.3. Medical Image Reconstruction Using Deep Learning on CT
In CT, image reconstruction is accomplished using projection data. When the projection data are sufficiently complete, filtered back projection (FBP) techniques generate high-quality images. However, some applications, such as reducing scan time, lowering the X-ray dose (which may otherwise expose patients to additional health risks), or scanning long objects with a restricted angular range, may result in inadequate projection data, making FBP algorithms ineffective. Iterative algorithms, such as total variation- (TV-) based methods, produce reconstructions of acceptable quality from partial projection data; however, certain artefacts arise at the margins of a reconstructed image if the projection data are gathered over a reduced angular range. Deep learning methods are currently being utilized to solve these issues, including PYRO-NN, LEARN, DEAR, and an improved GoogLeNet, among other deep frameworks for image reconstruction in CT. In contrast to established traditional techniques, a series of other studies also demonstrates accurate image reconstruction. These deep learning techniques have been utilized for 2D and 3D reconstruction, reducing noise efficiently, improving spatial resolution, and running faster on graphics processing units (GPUs). A summary of articles that use deep learning approaches for image reconstruction in CT is provided in Table 3.
1.4. Deep Learning for Image Reconstruction in Other Imaging Modalities
Deep learning approaches have been employed to reconstruct images in different imaging modalities including ultrasound, PET, optical microscopy, fluorescence microscopy, electromagnetic tomography (EMT), photoacoustic tomography (PAT), diffuse optical tomography (DOT), monocular colonoscopy, stochastic microstructure reconstruction, holographic image reconstruction, reconstruction of neural volumes, tomographic 3D reconstruction of single-molecule structure, neutron tomography, coherent imaging systems, and the integration of deep and transfer learning in imaging. A summary of articles that use deep learning approaches for image reconstruction in other imaging modalities is provided in Table 4. A flow chart of the conventional image reconstruction pipeline is shown in Figure 3.
2. Overview of Traditional Image Reconstruction Techniques
Image reconstruction, in general, is an inverse problem that recovers the original ideal image from a supplied inferior version. It is defined as a method of inputting two-dimensional data into a computer and then enhancing or exploring the result by changing it into a more constructive and usable form for the human viewer. Analytical reconstruction and iterative reconstruction (IR) are the two main types of reconstruction approaches. In the clinical use of magnetic resonance imaging, image reconstruction plays a crucial role: its task is to convert the acquired k-space data into images that can be interpreted clinically. Before explaining particular image reconstruction techniques and the signal processing steps they employ, it is necessary to develop common tools and vocabulary to describe the quality of the reconstructed image. The concept of image quality is not intended to mean that one image is better than another; here, it is used to describe characteristics that differentiate one image reconstruction product from another. Techniques used in medical image reconstruction, their strengths, and their limitations are presented in Table 1. There are many ways in which image reconstruction results can be described. One strong-noise image denoising method is based on improved K-SVD and atom optimization; it includes sparse coding based on correlation coefficient matching and an iterative stopping criterion for an OMP algorithm based on a weak-selection iterative threshold strategy in dictionary training, in addition to image feature extraction and noise atom suppression. Image reconstruction is used in a wide range of industries and activities in the real world.
Reconstruction enables us to obtain insight into qualitative properties of the item that are impossible to discern from a single plane of sight, such as volume and the object’s relative location to other objects in the scene. Reconstruction techniques can be used in cardiology, art analysis and restoration, film, television, phenotype analysis and criminal investigations, game graphics, and design.
2.1. Iterative Reconstruction (IR)
In image reconstruction, there are essentially two approaches: iterative reconstruction and analytical reconstruction. In the iterative approach, the reconstruction problem is reduced to calculating a finite number of image values from a finite number of measurements. Major IR algorithms are compared on different parameters in Table 5. Iterative reconstruction requires more computing resources, but it can accommodate more complicated models of the acquisition process.
2.2. Algebraic Reconstruction Technique (ART)
ART is a widely recognized iterative approach used in medical image reconstruction to solve linear systems of equations.
ART models the imaging process as the linear system in Equation (1):

\[ \sum_{j=1}^{N} w_{ij} f_j = p_i, \qquad i = 1, \ldots, M \tag{1} \]

ART computes the projection of the current estimate as in Equation (2) and applies a correction factor to update the pixel value at every position, as defined in Equation (3):

\[ \hat{p}_i^{(k)} = \sum_{j=1}^{N} w_{ij} f_j^{(k)} \tag{2} \]

\[ f_j^{(k+1)} = f_j^{(k)} + \lambda \, \frac{p_i - \hat{p}_i^{(k)}}{\sum_{n=1}^{N} w_{in}^2} \, w_{ij} \tag{3} \]

where \(p_i\) is the measured projection datum, \(f_j\) specifies the value of the pixel at the \(j\)-th position, \(w_{ij}\) is the weight with which ray \(i\) passes through pixel \(j\), \(\lambda\) is a relaxation factor, and \(k\) is the number of iterations.
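The ART (Kaczmarz) update can be sketched in a few lines of NumPy. The 4-ray weight matrix below is a hypothetical toy system (row and column sums of a 2×2 image), not a realistic CT geometry:

```python
import numpy as np

def art(W, p, n_iter=200, lam=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz): cycle through the rays,
    correcting the image f so that each ray equation w_i . f = p_i in turn
    is better satisfied. lam is the relaxation parameter."""
    f = np.zeros(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            wi = W[i]
            denom = wi @ wi                      # sum of squared weights of ray i
            if denom == 0:
                continue
            f += lam * (p[i] - wi @ f) / denom * wi
    return f

# A 2x2 image probed by four rays: two row sums and two column sums.
true_image = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[1.0, 1.0, 0.0, 0.0],   # top row
              [0.0, 0.0, 1.0, 1.0],   # bottom row
              [1.0, 0.0, 1.0, 0.0],   # left column
              [0.0, 1.0, 0.0, 1.0]])  # right column
p = W @ true_image
estimate = art(W, p)
```

Because the toy system is consistent and the true image lies in the row space of W, the iteration converges to it from a zero starting image.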
2.3. Simultaneous Iterative Reconstruction Technique (SIRT)
SIRT (Simultaneous Iterative Reconstruction Technique) is expected to produce a reconstructed image of reasonable quality even from inaccurate acquisition data containing noise. SIRT, however, is very slow to reconstruct an image because it takes many iterations to achieve a sufficiently precise result. In addition, SIRT introduces a smoothing effect that can blur the reconstruction.
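The contrast with ART can be made concrete: SIRT applies the averaged correction from all rays simultaneously instead of one ray at a time. A minimal sketch on the same hypothetical row/column-sum toy system:

```python
import numpy as np

def sirt(W, p, n_iter=300):
    """Simultaneous Iterative Reconstruction Technique: the corrections from
    ALL rays are normalized, back projected, and applied at once, which
    smooths noise but converges more slowly than ART."""
    f = np.zeros(W.shape[1])
    row_norm = W.sum(axis=1)
    row_norm[row_norm == 0] = 1.0
    col_norm = W.sum(axis=0)
    col_norm[col_norm == 0] = 1.0
    for _ in range(n_iter):
        residual = (p - W @ f) / row_norm   # per-ray normalized error
        f += (W.T @ residual) / col_norm    # back project and average
    return f

# Toy system: a 2x2 image measured by its two row sums and two column sums.
true_image = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
p = W @ true_image
estimate = sirt(W, p)
```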
2.4. Simultaneous Algebraic Reconstruction Technique (SART)
SART has been proposed as an upgrade that combines the strengths of the ART and SIRT algorithms. ART converges quickly, whereas SIRT produces a high-quality image, so SART is expected to inherit useful characteristics of both. The core SART update, which smooths the noise, is defined in Equation (4):

\[ f_j^{(k+1)} = f_j^{(k)} + \frac{\lambda}{\sum_{i \in I_\theta} w_{ij}} \sum_{i \in I_\theta} \frac{p_i - \sum_{n} w_{in} f_n^{(k)}}{\sum_{n} w_{in}} \, w_{ij} \tag{4} \]

where \(I_\theta\) is the set of rays in the projection at angle \(\theta\) and the remaining symbols are as in Equation (3).
2.5. Conjugate Gradient (CG)
The conjugate gradient algorithm is designed to prevent the oscillations of plain gradient descent. The first iteration is identical to a steepest-ascent step: the algorithm moves in the direction of the largest gradient until a maximum along that line is reached. With steepest ascent alone, many such iterations are required, which typically produces a zigzag path. In subsequent iterations, the conjugate gradient algorithm instead steps in a direction along which the gradient component in the prior direction stays unchanged, removing the need to re-optimize along those prior directions. Let the previous direction be \(d_{\text{old}}\) and the Hessian matrix be \(H\). The new direction \(d_{\text{new}}\) is chosen such that the gradient along \(d_{\text{old}}\) does not change. Moving along \(d_{\text{new}}\) changes the gradient by \(H d_{\text{new}}\); requiring the resulting change along \(d_{\text{old}}\) to be zero yields the conjugacy condition:

\[ d_{\text{old}}^{\mathsf{T}} H \, d_{\text{new}} = 0 \]
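The idea above can be sketched as a standard conjugate gradient solver for a symmetric positive-definite system, written here in the usual minimization form; the 2×2 system is a hypothetical example:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Conjugate gradient for A x = b with symmetric positive-definite A.
    Each new search direction is A-conjugate to the previous one, so the
    progress already made along earlier directions is never undone."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual = negative gradient of the quadratic
    d = r.copy()               # first step: steepest descent
    for _ in range(len(b)):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)      # exact line search along d
        x += alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d            # conjugate to the previous direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

For an n-dimensional quadratic, exact arithmetic terminates in at most n steps; the 2×2 example finishes in two iterations.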
2.6. Maximum Likelihood Expectation Maximization (MLEM)
The expectation-maximization (EM) algorithm is based on the maximum-likelihood (ML) approach to reconstruction, which has a strong theoretical basis, is easy to implement, and has proven more resilient than filtered back projection (FBP) to noise and to systematic inconsistencies between the data and the system matrix. As a result, it is commonly employed in image reconstruction for positron emission tomography (PET) and single photon emission computed tomography (SPECT). The MLEM update is

\[ f_j^{(k+1)} = \frac{f_j^{(k)}}{\sum_{i} w_{ij}} \sum_{i} w_{ij} \, \frac{p_i}{\sum_{n} w_{in} f_n^{(k)}} \]

where \(f_j^{(k)}\) is the \(j\)-th image pixel at the \(k\)-th iteration, \(p_i\) is the measurement value of the \(i\)-th line integral (ray sum), and \(w_{ij}\) is the contribution of image pixel \(j\) to measurement \(i\). The summation over the index \(j\) is the projector, and the summation over the index \(i\) is the back projector.
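The multiplicative MLEM update can be sketched directly from the equation above. The toy system matrix is a hypothetical 4-ray example, not a real scanner geometry:

```python
import numpy as np

def mlem(W, p, n_iter=500):
    """Maximum-likelihood EM: multiplicative update
    f <- f * backproject(measured / forward-projected) / sensitivity,
    which preserves the non-negativity of the emission image."""
    f = np.ones(W.shape[1])              # strictly positive starting image
    sensitivity = W.sum(axis=0)          # sum_i w_ij, the back projection of 1
    sensitivity[sensitivity == 0] = 1.0
    for _ in range(n_iter):
        forward = W @ f                  # projector: sum over pixels j
        forward[forward == 0] = 1e-12    # guard against division by zero
        f *= (W.T @ (p / forward)) / sensitivity   # back projector: sum over rays i
    return f

true_image = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
p = W @ true_image
estimate = mlem(W, p)
```

Note that the estimate stays non-negative at every iteration, one of the practical advantages of MLEM over additive schemes.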
2.7. Ordered Subset Expectation Maximization (OSEM)
The ordered subset expectation maximization (OSEM) approach is an iterative method of mathematical optimization used in computed tomography. In medical imaging, positron emission tomography, single photon emission computed tomography, and X-ray computed tomography all use the OSEM technique. The OSEM methodology is linked to the expectation maximization (EM) mechanism in statistics and is also related to FBP strategies. The primary update equation of the OSEM algorithm is

\[ f_j^{(k, s+1)} = \frac{f_j^{(k, s)}}{\sum_{i \in S_s} w_{ij}} \sum_{i \in S_s} w_{ij} \, \frac{p_i}{\sum_{n} w_{in} f_n^{(k, s)} + r_i} \]

where \(f\) is the image under reconstruction, \(j\) and \(n\) are voxel indices, \(k\) is the iteration number, \(s\) is the subset number, \(S_s\) is the subset, \(w_{ij}\) is the system matrix, \(p_i\) is the measurement along line of response (LOR) \(i\), and \(r_i\) models the scatter and random corrections.
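OSEM can be sketched as MLEM applied to one subset of projections per sub-iteration. In the hypothetical toy system below, the rays are ordered so that each subset (both row sums, then both column sums) still sees every pixel, which OSEM requires:

```python
import numpy as np

def osem(W, p, n_subsets=2, n_iter=100):
    """Ordered-subset EM: each sub-iteration applies an MLEM-style update
    using only one subset of the projections, accelerating convergence
    roughly in proportion to the number of subsets. Every subset must have
    nonzero sensitivity for every voxel."""
    f = np.ones(W.shape[1])
    subsets = [np.arange(s, W.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            Ws, ps = W[rows], p[rows]
            sens = Ws.sum(axis=0)          # subset sensitivity image
            sens[sens == 0] = 1.0
            forward = Ws @ f
            forward[forward == 0] = 1e-12
            f *= (Ws.T @ (ps / forward)) / sens
    return f

true_image = np.array([1.0, 2.0, 3.0, 4.0])
# Rays interleaved as row, column, row, column so each subset covers all pixels.
W = np.array([[1.0, 1.0, 0.0, 0.0],   # top row
              [1.0, 0.0, 1.0, 0.0],   # left column
              [0.0, 0.0, 1.0, 1.0],   # bottom row
              [0.0, 1.0, 0.0, 1.0]])  # right column
p = W @ true_image
estimate = osem(W, p)
```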
2.8. Maximum A Posteriori (MAP)
The principal problems associated with ML algorithms are alleviated by MAP techniques. First, MAP reconstructions are clearer than their ML equivalents. Second, with more iterations, iterated MAP estimates appear to reach a point at which they change very little, suggesting approximate convergence. MAP reconstruction effectively helps smooth noise and strengthen convergence, but it has drawbacks as well. First, performance can depend strongly on parameter selection. Unfortunately, unlike post-reconstruction filters, a trial-and-error strategy for parameter selection is impractical, since a complete iterative reconstruction must be conducted to assess the result of each set of parameter values. Second, excessive smoothing with Gibbs priors may cause the loss of image features or, in some situations, the development of spurious features. This reflects the fact that, in return for reduced noise variance, the MAP estimator introduces some bias into the ML problem. Finally, MAP algorithms implement smoothing properties that differ somewhat from the traditional Fourier-domain filters used in nuclear medicine, so their output can initially be harder for physicians to interpret.
2.9. Analytical Reconstruction
The foundation of the analytical approach is mathematical inversion, offering efficient, noniterative reconstruction algorithms. By modelling each projection as a line integral, the image can be recovered directly from the data.
2.10. Central Slice Theorem (CST)
Knowing how different operations in image or real space are connected to those in Fourier space is often beneficial. The central slice theorem establishes a link between an object's Radon transform and its two-dimensional Fourier transform. The theorem is as follows: the one-dimensional Fourier transform of a projection at angle \(\theta\) is the same as the radial (central) slice at the same angle taken through the object's two-dimensional Fourier domain. To illustrate this, we can write the two-dimensional Fourier transform in terms of operators.
The central slice theorem also provides another means by which the object function can be reconstructed from its parallel projections. We take each projection's Fourier transform and "position it" along the relevant radial slice. By performing this step for all angles \(\theta\) between 0 and π, Fourier inversion allows one to recover the object.
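The theorem is easy to verify numerically in the simplest case, angle 0, where the projection is a plain sum along one image axis; the random test object is of course only an illustration:

```python
import numpy as np

# Numerical check of the central slice theorem at angle 0:
# the 1-D Fourier transform of the projection (line integrals taken along y)
# equals the horizontal central slice of the object's 2-D Fourier transform.
rng = np.random.default_rng(0)
obj = rng.random((32, 32))
projection = obj.sum(axis=0)                 # parallel projection at angle 0
slice_from_projection = np.fft.fft(projection)
central_slice = np.fft.fft2(obj)[0, :]       # zero frequency along the y axis
```

This identity is exact for the discrete Fourier transform at the axis-aligned angles; at other angles it holds in the continuous limit, which is why gridding/interpolation is needed in practice.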
2.11. Filtered Back Projection (FBP)
One of the most prevalent techniques used in tomographic image reconstruction is the filtered back projection (FBP) algorithm. Image reconstruction is the process of estimating an image slice of an object from a collection of projections, and this task can be accomplished by many algorithms with various benefits. The FBP reconstruction algorithm is the basis of the mathematical toolkit of image reconstruction. The FBP technique is commonly referred to as the convolution method: it reconstructs a two-dimensional image using a one-dimensional integral equation. It is currently the most common reconstruction algorithm used in CT applications.
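The two FBP stages, ramp filtering of each projection and back projection across the image grid, can be sketched compactly. The sinogram below is an analytic toy example (a centred disc, whose parallel projection is the same at every angle); a real implementation would use better interpolation and filter windowing:

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam filtered back projection: ramp-filter each
    projection in the detector frequency domain, then smear (back project)
    it across the image grid at its acquisition angle."""
    n_angles, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                 # Ram-Lak filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    c = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - c                # pixel coordinates
    image = np.zeros((n_det, n_det))
    for row, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(theta) + y * np.sin(theta) + c    # detector position
        image += row[np.clip(np.round(t).astype(int), 0, n_det - 1)]
    return image * np.pi / n_angles                      # approximate the angle integral

# Analytic sinogram of a centred disc of radius 8 and density 1:
# its parallel projection 2*sqrt(r^2 - t^2) is identical at every angle.
n_det, radius = 65, 8.0
t = np.arange(n_det) - (n_det - 1) / 2.0
projection = 2.0 * np.sqrt(np.maximum(radius**2 - t**2, 0.0))
angles = np.arange(0.0, 180.0, 1.0)
sinogram = np.tile(projection, (len(angles), 1))
reconstruction = fbp(sinogram, angles)
```

With 180 evenly spaced views, the disc interior is recovered close to its true density while the background stays near zero, illustrating why FBP degrades when the angular range or number of views is reduced.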
3. Widely Used Databases for Medical Image Reconstruction
We give a comprehensive description of commonly utilized databases that are used for image reconstruction and segmentation. It should be noted that some of these works use data augmentation to increase the number of labelled samples, particularly those working with limited datasets. Applying a collection of transformations to the images increases the amount of training data. Translation, reflection, rotation, warping, scaling, colour-space shifts, flipping, cropping, and projection onto principal components are common transformations. Data augmentation has been shown to enhance model performance, particularly when learning from small databases such as those in medical image analysis. It may also help achieve faster convergence, reduce the probability of overfitting, and improve generalization; for certain limited datasets, data augmentation has been shown to improve model performance by more than 20%. The most prominent medical image analysis databases, with their modalities and anatomy, are shown in Figure 4.
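A few of the transformations listed above can be sketched with NumPy alone; this is a hypothetical minimal pipeline (real augmentation libraries also provide warping, elastic deformation, and intensity transforms):

```python
import numpy as np

def augment(image, rng):
    """Produce simple augmented variants of a 2-D image: reflections,
    a random 90-degree rotation, and a small random translation."""
    return [
        np.fliplr(image),                            # horizontal reflection
        np.flipud(image),                            # vertical reflection
        np.rot90(image, k=int(rng.integers(1, 4))),  # random 90/180/270 rotation
        np.roll(image,                               # small circular translation
                shift=tuple(rng.integers(-3, 4, size=2)),
                axis=(0, 1)),
    ]

rng = np.random.default_rng(7)
scan = rng.random((64, 64))       # stand-in for a grayscale image slice
variants = augment(scan, rng)
```

Each input image yields four extra labelled samples here; in practice the transformations are sampled on the fly during training rather than stored.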
3.1. Interstitial Lung Diseases (ILDs)
A digital collection of interstitial lung disease (ILD) cases constructed at the University Hospitals of Geneva (HUG) has been made open to the public. The collection includes high-resolution computed tomography (HRCT) image series with three-dimensional annotated regions of diseased lung tissue, as well as clinical parameters for individuals with pathologically verified ILD diagnoses. A few sample CT image slices from the ILD dataset for six lung tissue types are shown in Figure 5. The library features 128 patients with one of 13 histological diagnoses of ILDs, 108 image series with more than 41 litres of annotated lung tissue patterns, and a comprehensive set of 99 ILD-related clinical parameters. The database is accessible for research on request and after signing of the licence agreement.
3.2. Brain Tumour Segmentation (BraTS) Challenge
The BraTS challenges have typically focused on evaluating state-of-the-art techniques for the segmentation of brain tumours in multimodal magnetic resonance imaging (MRI) data. The first benchmark was organized at a workshop held as part of the MICCAI 2012 conference in October 2012 in Nice, France; challenge dataset releases followed in the 2013, 2015, 2017, 2018, and now 2020 BraTS editions. Four MRI sequences are available for each patient in BraTS 2015: FLAIR, T1-contrast-enhanced, T1-weighted, and T2-weighted. The training sample includes 54 Low-Grade Gliomas (LGG) and 220 High-Grade Gliomas (HGG) from the BraTS 2015 challenge dataset. BraTS 2017 focuses on segmenting inherently heterogeneous brain tumours, such as gliomas, using multi-institutional preoperative MRI scans. BraTS 2017 also addresses the estimation of patient overall survival to assess the therapeutic significance of segmentation tasks. BraTS 2019 likewise uses multi-institutional preoperative MRI scans and concentrates on segmenting intrinsically heterogeneous brain tumours, including gliomas. Sample MRI images from the BraTS dataset are shown in Figure 6.
3.3. Alzheimer’s Disease Neuroimaging Initiative (ADNI)
ADNI was launched in 2004 as a public-private partnership under the leadership of Dr. Michael W. Weiner. ADNI's primary objective is to evaluate more authentic and sensitive methods based on multiple biomarkers, such as magnetic resonance imaging (MRI), positron emission tomography (PET), structural magnetic resonance imaging (sMRI), and clinical examination, to assess the progression of MCI and the early stages of AD. A further goal was a groundbreaking, unrestricted data-access programme extended to all researchers worldwide. ADNI's original aim was to enroll 800 participants aged 55 to 90: approximately 200 cognitively normal elderly individuals followed for 3 years, 400 individuals with MCI followed for 3 years, and 200 individuals with early AD followed for 2 years. In the ADNI MR images shown in Figure 7, the top row depicts the intensity images, and the bottom row shows the manually segmented labels.
3.4. Musculoskeletal Radiographs (MURA)
MURA is a broad dataset of radiographs comprising 14,863 upper extremity musculoskeletal studies. Each study contains one or more views and is manually labelled as either normal or abnormal by radiologists. Deciding whether a radiographic study is normal or abnormal is a crucial radiological task: a study read as normal rules out disease and may remove the need for further medical tests or treatments for patients. The task of identifying musculoskeletal abnormality is especially important since more than 1.7 billion individuals worldwide are affected by musculoskeletal conditions. These disorders, with 30 million emergency room visits annually and growing, are the most prevalent source of serious, long-term pain and disability. An example from the MURA dataset, "Towards Radiologist-Level Abnormality Identification in Musculoskeletal Radiographs," is shown in Figure 8. The MURA dataset comprises 9,045 normal and 5,818 abnormal upper extremity musculoskeletal radiographic studies, covering the shoulder, humerus, elbow, forearm, wrist, hand, and finger. MURA is one of the largest public radiographic image datasets.
3.5. The Cancer Imaging Archive (TCIA)
TCIA is an extensive collection of cancer patient images available for public download. A large-scale lung cancer detection dataset for CT and PET/CT from the TCIA collection is shown in Figure 9. The data are structured as "collections," usually the imaging of patients linked to a particular condition, image form or type (CT, MRI, optical histopathology, etc.), or subject of study. The main file format used by TCIA for radiological imaging is DICOM. Ref.  also offers supporting image-related evidence such as medical results, care specifics, genomics, and expert analyses. Most of the datasets in TCIA are accessible to all users.
However, TCIA offers protection assistance to restrict, where necessary, access to datasets. For example, if a dataset needs to be exchanged for preliminary review amongst collaborators on TCIA, TCIA would facilitate this. The data provider may, at the time of request, provide the TCIA team with a list of partners that will have exclusive access. Getting the data mounted on TCIA will also enable the release of the data at a later date if needed.
3.6. Digital Database for Screening Mammography (DDSM)
The DDSM is a collection of mammograms from the following organizations: the Massachusetts General Hospital, the Wake Forest University School of Medicine, the Sacred Heart Hospital, and the Washington University School of Medicine in St. Louis. The DDSM was created with funding from the Department of Defense Breast Cancer Research Program and the US Army Research and Materiel Command, and the DDSM's principal designers obtained the appropriate patient permissions. The cases include calcification and mass ROIs, as well as the following details that may be beneficial for CADe and CADx algorithms: descriptors for mass shape, mass margin, calcification type, calcification distribution, and breast density from the Breast Imaging Reporting and Data System (BI-RADS); an overall BI-RADS rating from 0 to 5; and an abnormality subtlety rating from 1 to 5. An example of annotated mammogram images from the DDSM database is shown in Figure 10.
3.7. Open Access Series of Imaging Studies (OASIS)
OASIS-3 is the current release of the Open Access Series of Imaging Studies (OASIS), which aims to make neuroimaging databases widely accessible to the scientific community. By compiling and freely distributing this multimodal dataset produced by the Knight ADRC and its associated studies, its authors hope to encourage future developments in fundamental and clinical neuroscience. Previously released data for OASIS-Cross-sectional and OASIS-Longitudinal have been used for hypothesis-driven data analyses, neuroanatomical atlas creation, and segmentation algorithm development. OASIS-3 is a longitudinal neuroimaging, clinical, cognitive, and biomarker dataset for normal ageing and Alzheimer's disease. Sample images from the OASIS dataset are shown in Figure 11. The OASIS databases (central.xnat.org) provide the community with free access to a vast collection of neuroimaging and processed imaging data across a diverse demographic, cognitive, and genetic continuum, as well as a readily available platform for neuroimaging, clinical, and cognitive studies on normal ageing and cognitive decline. The data can be obtained from http://www.oasis-brains.org/.
3.8. Autism Brain Imaging Data Exchange (ABIDE)
Autism spectrum disorder (ASD) is defined by qualitative impairment in social reciprocity and by repetitive, restrictive, and stereotyped behaviors/interests. Historically considered rare, ASD is now known to occur in more than 1% of children. Despite ongoing scientific advances, progress has not kept up with the urgency to find means of assessing the disorder at younger ages, choosing appropriate therapies, and forecasting outcomes. This is mainly because of the ambiguity and variability of ASD. Large-scale samples are required to face these obstacles, but single laboratories are unable to collect datasets large enough to reveal the brain structures underlying ASD. In response, to accelerate our knowledge of the neurological basis of autism, the Autism Brain Imaging Data Exchange (ABIDE) project has aggregated functional and structural brain imaging data obtained from laboratories across the globe. The ABIDE project currently comprises two large-scale collections, ABIDE I and ABIDE II, with the overall aim of promoting discovery science and sample-to-sample comparisons.
The collection was created by combining datasets gathered from more than 24 different international brain imaging institutions and made available to researchers all around the world. An example from the ABIDE dataset, in which a pipeline is applied to an input volume to prepare it for feature extraction, is shown in Figure 12.
3.9. OpenNeuro
OpenNeuro is a repository of open neuroimaging data. The data are shared under a Creative Commons CC0 licence, which provides researchers as well as citizen scientists with a large variety of brain imaging data. The database relies mainly on results from functional magnetic resonance imaging (fMRI) but also encompasses other imaging modalities, including longitudinal and diffusion MRI, electroencephalography (EEG), and magnetoencephalography (MEG). OpenfMRI is a collaboration of Stanford University's Center for Reproducible Neuroscience. An example of brainstem MRI from the OpenNeuro dataset is shown in Figure 13. The National Science Foundation, the National Institute of Mental Health, the National Institute on Drug Abuse, and the Laura and John Arnold Foundation also supported the creation of the OpenNeuro resource.
3.10. Osteoarthritis Initiative (OAI)
The Osteoarthritis Initiative (OAI), supported by the National Institutes of Health, is a multicenter, ten-year observational study of men and women. The OAI's aim is to provide resources that enable a deeper understanding of the prevention and treatment of knee osteoarthritis, one of the most prevalent causes of adult disability. The supervised machine learning phase of AQ-CART employed a training dataset of 378 single-knee MRI images from patients as input data. These were chosen to reflect the full spectrum of structural severity of radiographic OA, i.e., Kellgren-Lawrence medial compartment grades 0-4 and lateral compartment OA, along with healthy young knees that tend to have thicker cartilage. The 286 images were captured with an OAI 3D double-echo steady-state sequence (DESS-we). A sample knee MRI from the Osteoarthritis Initiative (OAI) dataset is shown in Figure 14.
3.11. Ischemic Stroke Lesion Segmentation (ISLES)
Over the past three years (2015, 2016, and 2017), this stroke lesion segmentation challenge has been very popular and has given rise to several approaches that help to overcome major difficulties in contemporary stroke imaging research. Many medical image processing tasks attract a steady stream of newly proposed approaches; varying dataset sizes and heterogeneity, though, make it virtually impossible to compare these approaches fairly. Challenges such as ISLES seek to address these limitations and establish a shared framework for adequate comparison of outcomes by publicly delivering a high-quality data collection together with predefined assessment guidelines. A sample of ischemic lesion segmentation in multispectral MR images is shown in Figure 15.
The training data collection comprises 63 patients. In certain hospital cases, two slabs are provided to cover the stroke lesion; these slabs cover non- or partially overlapping brain regions. The first and second slabs per patient are indicated with the letters "A" and "B." In SMIR, a mapping between case numbers and training identifiers is also given. A test set consisting of 40 stroke cases is used to assess the proposed techniques.
3.12. Automated Cardiac Diagnosis Challenge (ACDC)
The full ACDC dataset was created from real clinical examinations acquired at the University Hospital of Dijon. The acquired data were fully anonymized and handled in compliance with the regulations laid down by the Local Ethics Committee of the Hospital of Dijon (France). The dataset covers several well-defined pathologies with enough cases to (1) adequately train machine learning methods and (2) explicitly test the variability of the key physiological parameters obtained from cine-MRI (in particular, diastolic volume and ejection fraction).
As described below, the dataset consists of 150 examinations (all from different patients) divided into 5 evenly distributed subgroups (4 pathological groups plus 1 healthy subject group). In addition, the following information is provided for each patient: weight, height, and the instants of the diastolic and systolic phases. After personal registration, the database is made available to participants through two datasets on the dedicated online evaluation website: (i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; (ii) a test dataset of 50 new patients, without manual annotations but with the patient information above. Raw input images are provided in the NIfTI format. An example of MRI from the ACDC challenge dataset is shown in Figure 16.
4. Performance Evaluation
In this section, we describe some of the common metrics used to assess segmentation algorithm output. An algorithm should be evaluated in several respects, such as quantitative accuracy, speed, and storage requirements. Most research work to date focuses on metrics that measure the accuracy of the model.
4.1. Precision/F1 Score/Recall
Precision, recall, and the F1 score are standard metrics for reporting the accuracy of classical image segmentation models. For each class, as well as at the aggregate level, precision and recall may be defined as

Precision = TP/(TP + FP), Recall = TP/(TP + FN),

where TP, FP, and FN refer to the true positive, false positive, and false negative fractions, respectively. We are generally interested in a combined version of precision and recall. A common metric of this kind is the F1 score, defined as the harmonic mean of precision and recall:

F1 = 2 × Precision × Recall/(Precision + Recall).
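Since precision, recall, and the F1 score are computed directly from the error counts, they can be sketched in a few lines (the function name is ours, chosen for illustration):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw counts.

    precision = TP / (TP + FP)
    recall    = TP / (TP + FN)
    F1        = harmonic mean of precision and recall
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, with 8 true positives, 2 false positives, and 2 false negatives, all three metrics evaluate to 0.8.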
4.2. Dice Coefficient
Another popular metric for image segmentation is the Dice coefficient, which is more widely used in medical image analysis. It is defined as twice the overlap between the predicted and ground-truth segmentation maps divided by their total size:

Dice = 2|A ∩ B|/(|A| + |B|),

where A and B denote the predicted and ground-truth segmentation maps.
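The Dice coefficient is twice the overlap between prediction and ground truth divided by their combined size; on flat binary masks it reduces to the following sketch (helper name ours):

```python
def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 sequences."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0
```

For masks [1, 1, 0, 0] and [1, 0, 1, 0] the overlap is one pixel out of four foreground pixels in total, giving a Dice score of 0.5.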
4.3. Jaccard Index or Intersection over Union (IoU)
The Jaccard index, or intersection over union (IoU), is one of the most widely used semantic segmentation metrics. It is defined as the area of intersection between the predicted segmentation map and the ground truth, divided by the area of their union:

IoU = |A ∩ B|/|A ∪ B|,

where A and B denote the ground truth and the predicted segmentation maps, respectively. It ranges from 0 to 1. This section concludes with an important observation about the above metrics. In most experiments, IoU is used for detection tasks and the Dice coefficient for segmentation tasks. The Dice coefficient is used as a loss function because it is differentiable in segmentation tasks, where IoU is not. Both can be used as metrics to measure the efficiency of a model, but only the Dice coefficient serves as a loss function. Owing to the overwhelming imbalance in class occurrences, the class imbalance dilemma is a common problem in machine learning.
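For comparison with the Dice coefficient, IoU and a differentiable "soft" Dice loss (computed on predicted probabilities rather than thresholded masks) can be sketched as follows; names are ours. For binary masks the two metrics are linked by the fixed relation Dice = 2·IoU/(1 + IoU):

```python
def iou(pred, target):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks as flat 0/1 sequences."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(max(p, t) for p, t in zip(pred, target))
    return inter / union if union else 1.0

def soft_dice_loss(probs, target, eps=1e-6):
    """1 - soft Dice on raw probabilities; smooth in probs, hence usable as a loss."""
    inter = sum(p * t for p, t in zip(probs, target))
    total = sum(probs) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)
```

On the masks from the Dice example above ([1, 1, 0, 0] vs. [1, 0, 1, 0]), IoU is 1/3 while Dice is 0.5, consistent with the relation above.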
5. Challenges and Future Directions
While deep learning-based models continue to dominate medical imaging, there are still plenty of modelling challenges that restrict the application and adoption of these new approaches in clinical practice. These problems are often posed to researchers working in related fields as future directions. Only minimal labelled data are available for building new deep models in diagnostic imaging. Medical image annotation is time-intensive and requires physicians with deep domain knowledge. Can we build efficient learning models that make effective use of both labelled and unlabelled data?
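One widely studied answer to this question is semi-supervised self-training, in which a model fitted on the small labelled set pseudo-labels its most confident unlabelled examples and is then refitted. The sketch below illustrates the idea with a deliberately simple nearest-centroid "model"; the function names, the distance-based confidence proxy, and the threshold are our own illustration, not a method from the surveyed literature:

```python
def fit_centroids(xs, ys):
    """Mean feature vector per class label."""
    centroids = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

def predict(centroids, x):
    """Return (label, distance) of the nearest centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5
    label = min(centroids, key=lambda l: dist(centroids[l]))
    return label, dist(centroids[label])

def self_train(xs_lab, ys_lab, xs_unlab, conf_radius=1.0):
    """One round of pseudo-labelling: keep unlabelled points whose nearest
    centroid lies within conf_radius, then refit on the enlarged set."""
    centroids = fit_centroids(xs_lab, ys_lab)
    xs, ys = list(xs_lab), list(ys_lab)
    for x in xs_unlab:
        label, d = predict(centroids, x)
        if d <= conf_radius:  # confidence proxy: distance to the centroid
            xs.append(x)
            ys.append(label)
    return fit_centroids(xs, ys)
```

Ambiguous unlabelled points (far from every centroid) are simply left out, which is the essential safeguard of self-training.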
It is typically difficult to collect very large healthcare datasets for a particular task due to morbidity and privacy issues. In addition, the number of rare cases is (by definition) limited, yet such cases can be more significant than common ones. Can we design learning models and data augmentation techniques that effectively extract information from these small samples and recognize the unequal value among them? Radiologists' clinical judgement is not based exclusively on images. Input from patients and the experience doctors accumulate over years of medical training are also important in decision-making. To improve system performance, it is therefore necessary to incorporate data gathered from various sources into deep modelling. Reasoning is almost as crucial as, if not more essential than, prediction. Most current deep models hide their reasoning process, so there is a possibility that a model makes predictions based on inaccurate logic, which renders the model unreliable. Can deep models be integrated with logic rules or a graph of medical knowledge? This would further decrease the amount of labelled images needed to train deep learning models without losing performance.
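As a toy illustration of one such augmentation technique (ours, not a specific published pipeline): rare-class images can be oversampled with label-preserving transforms such as horizontal flips, partially rebalancing the class distribution:

```python
def augment_rare_class(images, labels, rare_label, factor):
    """Oversample the rare class by appending horizontally flipped copies.

    images: list of 2D lists (rows of pixel values); labels: parallel list.
    factor: how many flipped copies to add per rare-class image.
    """
    out_imgs, out_labels = list(images), list(labels)
    rare = [img for img, lab in zip(images, labels) if lab == rare_label]
    for _ in range(factor):
        for img in rare:
            flipped = [row[::-1] for row in img]  # horizontal flip preserves the label
            out_imgs.append(flipped)
            out_labels.append(rare_label)
    return out_imgs, out_labels
```

In practice, flips are only valid when anatomy permits (e.g. left/right-symmetric structures); elastic deformations and intensity perturbations are common alternatives.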
6. Conclusion
A study of the latest literature on deep learning for medical imaging has been presented in this article. A detailed review of image reconstruction methods and a concise description of their components have been given. A summary of some of the most widely used datasets for MR image reconstruction appears in the third part of this article. The key problems facing deep learning in medical image processing were identified in the later section of the paper, and possible directions for overcoming these challenges were discussed. The presented literature offers substantial advantages for medical imaging applications and will boost the ability of artificial intelligence algorithms to assist radiologists.
Conflicts of Interest
The authors declare that they have no competing interests.
Authors' Contributions
All authors read and approved the final manuscript.
Acknowledgments
The authors would like to thank the sponsoring organizations. This work has been sponsored by the National Natural Science Foundation of China under Grant No. 81871394 and the Beijing Laboratory of Advanced Information Networks.
References
A. Al-Kaff, D. Martin, F. Garcia, A. de la Escalera, and J. M. Armingol, “Survey of computer vision algorithms and applications for unmanned aerial vehicles,” Expert Systems with Applications, vol. 92, pp. 447–463, 2018.
K. Suzuki, “Overview of deep learning in medical imaging,” Radiological Physics and Technology, vol. 10, no. 3, pp. 257–273, 2017.
S. Singh, R. Bansal, and S. Bansal, “Medical image enhancement using histogram processing techniques followed by median filter,” IJIPA, vol. 3, no. 1, pp. 1–9, 2012.
H. Greenspan, B. Van Ginneken, and R. M. Summers, “Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1153–1159, 2016.
Q. Yuan, H. Shen, T. Li et al., “Deep learning in environmental remote sensing: achievements and challenges,” Remote Sensing of Environment, vol. 241, p. 111716, 2020.
J. Kim, J. Hong, and H. Park, “Prospects of deep learning for medical imaging,” Precision and Future Medicine, vol. 2, no. 2, pp. 37–52, 2018.
Y. Wu, M. Schuster, Z. Chen et al., “Google's neural machine translation system: bridging the gap between human and machine translation,” 2016, http://arxiv.org/abs/1609.08144.
M. Bakator and D. Radosav, “Deep learning and medical diagnosis: a review of literature,” Multimodal Technologies and Interaction, vol. 2, no. 3, p. 47, 2018.
G. Litjens, T. Kooi, B. E. Bejnordi et al., “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60–88, 2017.
A. Kamilaris and F. X. Prenafeta-Boldú, “Deep learning in agriculture: a survey,” Computers and Electronics in Agriculture, vol. 147, pp. 70–90, 2018.
I. Arel, D. C. Rose, and T. P. Karnowski, “Deep machine learning - a new frontier in artificial intelligence research,” IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 13–18, 2010.
D. Wang, N. R. Zwart, and J. G. Pipe, “Joint water–fat separation and deblurring for spiral imaging,” Magnetic Resonance in Medicine, vol. 79, no. 6, pp. 3218–3228, 2018.
H. M. Zhang and B. Dong, “A review on deep learning in medical image reconstruction,” Journal of the Operations Research Society of China, vol. 8, no. 2, pp. 311–340, 2020.
D. Liang, J. Cheng, Z. Ke, and L. Ying, “Deep MRI reconstruction: unrolled optimization algorithms meet neural networks,” 2019, http://arxiv.org/abs/1907.11711.
M. Doneva, “Mathematical models for magnetic resonance imaging reconstruction: an overview of the approaches, problems, and future research areas,” IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 24–32, 2020.
W. A. Edelstein, J. M. S. Hutchison, G. Johnson, and T. Redpath, “Spin warp NMR imaging and applications to human whole-body imaging,” Physics in Medicine and Biology, vol. 25, no. 4, pp. 751–756, 1980.
P. W. Goodwill and S. M. Conolly, “Multidimensional X-space magnetic particle imaging,” IEEE Transactions on Medical Imaging, vol. 30, no. 9, pp. 1581–1590, 2011.
D. B. Twieg, “The k‐trajectory formulation of the NMR imaging process with applications in analysis and synthesis of imaging methods,” Medical Physics, vol. 10, no. 5, pp. 610–621, 1983.
J. A. Fessler, “Model-based image reconstruction for MRI,” IEEE Signal Processing Magazine, vol. 27, no. 4, pp. 81–89, 2010.
Eurostat, Healthcare Resource Statistics - Technical Resources and Medical Technology, Eurostat, Luxembourg, 2019.
J. Y. Cheng, F. Chen, M. T. Alley, J. M. Pauly, and S. S. Vasanawala, “Highly scalable image reconstruction using deep neural networks with bandpass filtering,” 2018, http://arxiv.org/abs/1805.03300.
R. Chen, D. Pu, Y. Tong, and M. Wu, “Image‐denoising algorithm based on improved K‐singular value decomposition and atom optimization,” CAAI Transactions on Intelligence Technology, vol. 7, no. 1, pp. 117–127, 2022.
R. Souza, R. M. Lebel, and R. Frayne, “Dual-domain cascade of U-nets for multi-channel magnetic resonance image reconstruction,” Magnetic Resonance Imaging, vol. 71, pp. 140–153, 2020.
S. Xie, X. Zheng, Y. Chen et al., “Artifact removal using improved GoogLeNet for sparse-view CT reconstruction,” Scientific Reports, vol. 8, no. 1, pp. 1–9, 2018.
J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert, “A deep cascade of convolutional neural networks for dynamic MR image reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, no. 2, pp. 491–503, 2018.
E. Topal, M. Löffler, and E. Zschech, “Deep learning-based inaccuracy compensation in reconstruction of high resolution XCT data,” Scientific Reports, vol. 10, no. 1, pp. 1–13, 2020.
K. de Haan, Y. Rivenson, Y. Wu, and A. Ozcan, “Deep-learning-based image reconstruction and enhancement in optical microscopy,” Proceedings of the IEEE, vol. 108, no. 1, pp. 30–50, 2020.
C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nature Methods, vol. 16, no. 12, pp. 1215–1225, 2019.
S. Guan, A. A. Khan, S. Sikdar, and P. V. Chitnis, “Limited-view and sparse photoacoustic tomography for neuroimaging with deep learning,” Scientific Reports, vol. 10, no. 1, pp. 1–12, 2020.
H. Ben Yedder, A. BenTaieb, M. Shokoufi, A. Zahiremami, F. Golnaraghi, and G. Hamarneh, “Deep learning based image reconstruction for diffuse optical tomography,” in International Workshop on Machine Learning for Medical Image Reconstruction, Springer, Cham, 2018.
F. Mahmood and N. J. Durr, “Topographical reconstructions from monocular optical colonoscopy images via deep learning,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, April 2018.
X. Li, Y. Zhang, H. Zhao, C. Burkhart, L. C. Brinson, and W. Chen, “A transfer learning approach for microstructure reconstruction and structure-property predictions,” Scientific Reports, vol. 8, no. 1, pp. 1–13, 2018.
Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Science & Applications, vol. 7, no. 2, p. 17141, 2018.
A. Shahbazi, J. Kinnison, R. Vescovi et al., “Flexible learning-free segmentation and reconstruction of neural volumes,” Scientific Reports, vol. 8, no. 1, pp. 1–15, 2018.
X. Zhai, D. Lei, M. Zhang et al., “LoTToR: an algorithm for missing-wedge correction of the low-tilt tomographic 3D reconstruction of a single-molecule structure,” Scientific Reports, vol. 10, no. 1, pp. 1–17, 2020.
D. Micieli, T. Minniti, L. M. Evans, and G. Gorini, “Accelerating neutron tomography experiments through artificial neural network based reconstruction,” Scientific Reports, vol. 9, no. 1, pp. 1–12, 2019.
B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” Nature, vol. 555, no. 7697, pp. 487–492, 2018.
M. Yaqub, J. Feng, M. Zia et al., “State-of-the-art CNN optimizer for brain tumor segmentation in magnetic resonance images,” Brain Sciences, vol. 10, no. 7, p. 427, 2020.
Y. Han, J. Yoo, H. H. Kim, H. J. Shin, K. Sung, and J. C. Ye, “Deep learning with domain adaptation for accelerated projection-reconstruction MR,” Magnetic Resonance in Medicine, vol. 80, no. 3, pp. 1189–1205, 2018.
F. Knoll, K. Hammernik, C. Zhang et al., “Deep-learning methods for parallel magnetic resonance imaging reconstruction: a survey of the current approaches, trends, and issues,” IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 128–140, 2020.
G. Yang, S. Yu, H. Dong et al., “DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1310–1321, 2018.
H. Jeelani, J. Martin, F. Vasquez, M. Salerno, and D. S. Weller, “Image quality affects deep learning reconstruction of MRI,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, April 2018.
J. Chun, H. Zhang, H. M. Gach et al., “MRI super‐resolution reconstruction for MRI‐guided adaptive radiotherapy using cascaded deep learning: in the presence of limited training data and unknown translation model,” Medical Physics, vol. 46, no. 9, pp. 4148–4164, 2019.
C. Liu, M. E. Moseley, and R. Bammer, “Simultaneous phase correction and SENSE reconstruction for navigated multi-shot DWI with non-Cartesian k-space sampling,” Magnetic Resonance in Medicine, vol. 54, no. 6, pp. 1412–1422, 2005.
H. Banjak, T. Grenier, T. Epicier et al., “Evaluation of noise and blur effects with SIRT-FISTA-TV reconstruction algorithm: application to fast environmental transmission electron tomography,” Ultramicroscopy, vol. 189, pp. 109–123, 2018.
J. Wang, J. Liang, J. Cheng, Y. Guo, and L. Zeng, “Deep learning based image reconstruction algorithm for limited-angle translational computed tomography,” PLoS One, vol. 15, no. 1, article e0226963, 2020.
A. Depeursinge, A. Vargas, A. Platon, A. Geissbuhler, P. A. Poletti, and H. Müller, “Building a reference multimedia database for interstitial lung diseases,” Computerized Medical Imaging and Graphics, vol. 36, no. 3, pp. 227–238, 2012.
J.-R. Bilbao-Castro, C. O. S. Sorzano, I. Garcia, and J. J. Fernandez, “XMSF: structure-preserving noise reduction and pre-segmentation in microscope tomography,” Bioinformatics, vol. 26, no. 21, pp. 2786–2787, 2010.
S. Jiang, X. Li, Z. Zhang et al., “Scan efficiency of structured illumination in iterative single pixel imaging,” Optics Express, vol. 27, no. 16, pp. 22499–22507, 2019.
K. Boedeker, “AiCE deep learning reconstruction: bringing the power of ultra-high resolution CT to routine imaging,” Canon Medical Systems Corporation, 2019.
C. Syben, M. Michen, B. Stimpel, S. Seitz, S. Ploner, and A. K. Maier, “Technical note: PYRO‐NN: Python reconstruction operators in neural networks,” Medical Physics, vol. 46, no. 11, pp. 5110–5115, 2019.
L. Gjesteby, H. Shan, Q. Yang et al., “Deep neural network for CT metal artifact reduction with a perceptual loss function,” in Proceedings of the Fifth International Conference on Image Formation in X-ray Computed Tomography, Salt Lake City, Utah, 2018.
D. H. Ye, G. T. Buzzard, M. Ruby, and C. A. Bouman, “Deep back projection for sparse-view CT reconstruction,” in 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Anaheim, CA, USA, November 2018.
H. Chen, Y. Zhang, Y. Chen et al., “LEARN: learned experts’ assessment-based reconstruction network for sparse-data CT,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1333–1347, 2018.
P. Jarosik, M. Byra, and M. Lewandowski, “WaveFlow - towards integration of ultrasound processing with deep learning,” in 2018 IEEE International Ultrasonics Symposium (IUS), Kobe, Japan, October 2018.
R. Wang, Z. Fang, J. Gu et al., “High-resolution image reconstruction for portable ultrasound imaging devices,” EURASIP Journal on Advances in Signal Processing, vol. 2019, no. 1, 2019.
Y. H. Yoon, S. Khan, J. Huh, and J. C. Ye, “Efficient B-mode ultrasound image reconstruction from sub-sampled RF data using deep learning,” IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 325–336, 2019.
I. Häggström, C. R. Schmidtlein, G. Campanella, and T. J. Fuchs, “DeepPET: a deep encoder-decoder network for directly solving the PET image reconstruction inverse problem,” Medical Image Analysis, vol. 54, pp. 253–262, 2019.
K. Kim, D. Wu, K. Gong et al., “Penalized PET reconstruction using deep learning prior and local linear fitting,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1478–1487, 2018.
Q. Wang, H. Zhang, X. Li et al., “Error-constraint deep learning scheme for electrical impedance tomography (EIT),” IEEE Transactions on Instrumentation and Measurement, vol. 71, 2021.
H. Xie, H. Shan, and G. Wang, “Deep encoder-decoder adversarial reconstruction (DEAR) network for 3D CT from few-view data,” Bioengineering, vol. 6, no. 4, p. 111, 2019.
M. Ravi, A. Sewa, T. G. Shashidhar, and S. S. S. Sanagapati, “FPGA as a hardware accelerator for computation intensive maximum likelihood expectation maximization medical image reconstruction algorithm,” IEEE Access, vol. 7, pp. 111727–111735, 2019.
W. Wang, Z. Hu, E. E. Gualtieri et al., “Systematic and distributed time-of-flight list mode PET reconstruction,” in 2006 IEEE Nuclear Science Symposium Conference Record, San Diego, CA, USA, November 2006.
A. J. Reader, G. Corda, A. Mehranian, C. da Costa-Luis, S. Ellis, and J. A. Schnabel, “Deep learning for PET image reconstruction,” IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 5, no. 1, pp. 1–25, 2021.
S. Zhao, K. Yang, and K. Yang, “Fan beam image reconstruction with generalized Fourier slice theorem,” Journal of X-ray Science and Technology, vol. 22, no. 4, pp. 415–436, 2014.
S. Felix, R. Bolzern, and M. Battaglia, “A compressed sensing-based image reconstruction algorithm for solar flare X-ray observations,” The Astrophysical Journal, vol. 849, no. 1, p. 10, 2017.
M. I. Razzak, S. Naz, and A. Zaib, “Deep learning for medical image processing: overview, challenges and the future,” in Classification in BioApps, Springer, pp. 323–350, 2018, https://link.springer.com/chapter/10.1007/978-3-319-65981-7_12.
B. H. Menze, A. Jakab, S. Bauer et al., “The multimodal brain tumor image segmentation benchmark (BRATS),” IEEE Transactions on Medical Imaging, vol. 34, no. 10, pp. 1993–2024, 2015.
P. Rajpurkar, J. Irvin, A. Bagul et al., “MURA: large dataset for abnormality detection in musculoskeletal radiographs,” 2017, http://arxiv.org/abs/1712.06957.
F. Prior, J. Almeida, P. Kathiravelu et al., “Open access image repositories: high-quality data to enable machine learning research,” Clinical Radiology, vol. 75, no. 1, pp. 7–12, 2020.
L. Shen, L. R. Margolies, J. H. Rothstein, E. Fluder, R. McBride, and W. Sieh, “Deep learning to improve breast cancer detection on screening mammography,” Scientific Reports, vol. 9, no. 1, pp. 1–12, 2019.
E. D. Bò, C. Gentili, and C. Cecchetto, “Human chemosignals and brain activity: a preliminary meta-analysis of the processing of human body odors,” Chemical Senses, vol. 45, no. 9, pp. 855–864, 2020.
M. A. Bowes, G. A. Guillard, G. R. Vincent, A. D. Brett, C. B. H. Wolstenholme, and P. G. Conaghan, “Precision, reliability, and responsiveness of a novel automated quantification tool for cartilage thickness: data from the Osteoarthritis Initiative,” The Journal of Rheumatology, vol. 47, no. 2, pp. 282–289, 2020.
A. Clerigues, S. Valverde, J. Bernal, J. Freixenet, A. Oliver, and X. Lladó, “Acute ischemic stroke lesion core segmentation in CT perfusion images using fully convolutional neural networks,” Computers in Biology and Medicine, vol. 115, p. 103487, 2019.