Computational and Mathematical Methods in Medicine
Volume 2015 (2015), Article ID 152693, 9 pages
Review Article

Recent Development of Dual-Dictionary Learning Approach in Medical Image Analysis and Reconstruction

1Department of Engineering Physics, Tsinghua University, Beijing 100084, China
2Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing 100084, China

Received 5 October 2014; Revised 12 January 2015; Accepted 6 April 2015

Academic Editor: Valeri Makarov

Copyright © 2015 Bigong Wang and Liang Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


As an implementation of compressive sensing (CS), the dual-dictionary learning (DDL) method provides an effective way to restore signals using two related dictionaries and a shared sparse representation. It has been shown that this method performs well in medical image reconstruction with highly undersampled data, especially for multimodality imaging such as CT-MRI hybrid reconstruction. Because of its outstanding strengths, short signal acquisition time, and low radiation dose, DDL has attracted broad interest in both academic and industrial fields. In this review article, we summarize DDL's development history, survey the latest advances, and discuss its role in future directions and potential applications in medical imaging. Meanwhile, this paper points out that DDL is still at an initial stage, and further studies are necessary to improve the method, especially in dictionary training.

1. Introduction

Compressive sensing (CS) is a novel theory in information acquisition and processing [1]. Since general signals are broadband, traditional signal reconstruction methods usually adopt Nyquist sampling, requiring a high sampling rate and long processing time. However, CS theory offers a way to restore a signal accurately from fewer measurements by solving an optimization problem in which the signal is sparse when represented in a basis matrix, and the high-dimensional signal is projected onto a lower-dimensional subspace. Therefore, CS theory has been widely recognized and applied in various fields.

Some groups focus on studies of CS applications and have developed various branches such as Bayesian CS and 1-bit CS [2–4]. After it was applied to medical image reconstruction, CS theory was proven to be a method that effectively retains high image quality using undersampled measurement data in different imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI) [5–7]. Besides, CS theory shows great potential in multimodality image reconstruction, one of the future directions of medical imaging.

Dictionary learning (DL) is a typical method of CS image reconstruction. In this method, the sampled data is compressible in a specific transform domain, and the transform coefficients are projected onto a lower-dimensional vector while the essential image information is well retained. As a result, the complex reconstruction problem is simplified to an optimization problem. Usually, one should take three problems into consideration when solving image reconstruction problems with DL methods. First, design an overcomplete dictionary that can represent a signal sparsely. Second, obtain a measurement matrix that satisfies the restricted isometry property. Third, develop a fast signal reconstruction algorithm with good robustness. The designed dictionary is important to the accuracy of CS image reconstruction. In the DL method, the dictionary is self-adaptive and flexible; it is trained on particular image samples or groups of images. With different training methods, the achievable sparsity of the image representation differs considerably [8].

Though the DL-based approach has been recognized in the medical image reconstruction field, a single dictionary applied throughout the whole imaging process limits image quality: one dictionary alone is often insufficient as prior information. In order to improve image quality, researchers have extended the DL method to dual-dictionary learning (DDL), which carries more diverse prior information in imaging modalities such as CT and MRI. The DDL method was initially developed for image super-resolution. Lu et al. [9, 10] applied this method to CT reconstruction, and Song et al. [11] used it in 3D MRI reconstruction. DDL shows great potential in medical image reconstruction.

In this paper, we discuss the DL method in Section 2. Based on DL method, we review DDL’s history and new development in Section 3, including its theory, feasibility demonstration, and the application in different fields. In Section 4, we discuss the use of DDL in medical image analysis. In the section of Discussion and Conclusion, we summarize algorithms and explore the future directions in medical image reconstruction.

2. Dictionary Learning (DL) Algorithm

2.1. DL Method and Theory

According to the CS theory, an undersampled image reconstruction problem is to solve an underdetermined system of linear equations by minimizing the ℓ0 quasi-norm (i.e., the number of nonzeros) of the sparsified transform Ψx; that is, the image x is sparse after a complete sparsifying transform Ψ. The corresponding optimization problem is

min_x ‖Ψx‖_0  subject to  y = Ax.  (1)

In (1), x is the image to be reconstructed and A is the measurement matrix (codebook) for the given measurements y. Equation (1) is also known as a sparse coding problem, which is NP-hard (nondeterministic polynomial-time hard). It can be solved by greedy algorithms, for example, orthogonal matching pursuit (OMP) [12]. It is notable that if the ℓ0 norm is replaced with the ℓ1 norm, the problem can be solved by linear programming in the real domain or by second-order cone programming in the complex domain.
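As a concrete illustration, the greedy OMP strategy can be sketched in a few lines of numpy (a minimal sketch with a toy overcomplete dictionary, not the implementation used in the cited works):

```python
import numpy as np

def omp(D, y, sparsity):
    """Greedy OMP: pick the atom best correlated with the residual,
    then refit all chosen coefficients by least squares."""
    residual, support = y.astype(float).copy(), []
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

# toy check: a 2-sparse signal over a small overcomplete dictionary
rng = np.random.default_rng(0)
D = np.hstack([np.eye(16), rng.standard_normal((16, 16))])
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
y = 1.5 * D[:, 3] - 2.0 * D[:, 11]      # built from two known atoms
a_hat = omp(D, y, sparsity=3)
```

A sparse code `a_hat` found this way reproduces `y` through `D @ a_hat` while using only a handful of atoms.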

Given an image of size N = N1 × N2, it can be decomposed into small patches of size √n × √n. Each patch can be expressed as an n-dimensional vector x_s. All the patches are extracted from the object image according to the patch size and the sliding distance. A dictionary D ∈ R^{n×K} is a matrix whose K columns are the atoms. As x_s is a patch vector from sample images, the initial dictionary D constructed from the extracted patches is usually redundant or overcomplete; that is, K > n. Using specific atoms of the initial dictionary D, each vector x_s in the image can be approximately represented by a sparse coefficient vector α [13]:

x_s ≈ Dα,  with  ‖x_s − Dα‖_2 ≤ ε  and  ‖α‖_0 ≤ T,  (2)

where ε is the error bound and α is the sparse representation vector, which has few nonzero elements. To get the sparse representation of the vector x_s, one can minimize the ℓ0 norm as

α̂ = argmin_α ‖α‖_0  subject to  ‖x_s − Dα‖_2 ≤ ε.

If an image contains S patches, DL is to find a dictionary D in which all the patches can be sparsely represented as follows:

min_{D, {α_s}} Σ_{s=1}^{S} ‖α_s‖_0  subject to  ‖x_s − Dα_s‖_2 ≤ ε, s = 1, …, S.  (3)

Usually, if the sparsity level T is fixed to a specific value, (3) is equivalent to solving the following problem:

min_{D, {α_s}} Σ_{s=1}^{S} ‖x_s − Dα_s‖_2^2  subject to  ‖α_s‖_0 ≤ T, s = 1, …, S.  (4)

2.2. Dictionary Construction

The DL problem is NP-hard because it reduces to a sparse coding problem when D and the coefficients are fixed. Currently, four main adaptive dictionary training algorithms have been proposed to solve such a dictionary learning problem.

(1) Direct method (DM): DM is an original method that preserves all the details in the sample images because of a direct extraction process, so that a target image can be fully recovered as long as the patches are well chosen. Usually, this method is effective in super-resolution image reconstruction.

(2) Method of optimal directions (MOD): MOD fixes the coefficients corresponding to the dictionary vectors and then updates the atoms by minimizing the residuals between the training vectors and their representations. The main advantage of MOD is that it gives the optimal adjustment of the dictionary vectors in each iteration. It has shown good convergence properties, for example, on ECG (electrocardiogram) signals [14].

(3) Generalized principal component analysis (GPCA): GPCA is a general method for modeling and segmenting mixed data using a collection of subspaces. By introducing algebraic models and techniques into data clustering, traditionally a statistical problem, GPCA offers a new spectrum of algorithms for data modeling and clustering [15].

(4) K-means singular value decomposition (K-SVD): K-SVD is an iterative method that updates the dictionary atoms to fit the data better. The method performs an SVD on the representation error and updates the current dictionary atom and its coefficients simultaneously with the rank-1 term that minimizes the error. As the most widely used method to train dictionaries, K-SVD has excellent convergence and sparsity properties [16].
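The core K-SVD atom update can be sketched as follows (a minimal numpy sketch of the rank-1 update on toy data, not the authors' implementation):

```python
import numpy as np

def ksvd_atom_update(D, A, X, k):
    """One K-SVD sweep step: refit atom k and its coefficients via a
    rank-1 SVD of the representation error restricted to the patches
    that actually use atom k."""
    users = np.nonzero(A[k, :])[0]          # patches using atom k
    if users.size == 0:
        return D, A
    # error on those patches with atom k's contribution removed
    E = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                       # new atom: best rank-1 direction
    A[k, users] = s[0] * Vt[0, :]           # matching coefficients
    return D, A

# toy data: patches exactly representable, then atom 0 is corrupted
rng = np.random.default_rng(0)
n, K, S = 8, 12, 30
D_true = rng.standard_normal((n, K))
D_true /= np.linalg.norm(D_true, axis=0)
A = np.zeros((K, S))
for s in range(S):                          # 2 active atoms per patch
    idx = rng.choice(K, size=2, replace=False)
    A[idx, s] = rng.standard_normal(2)
X = D_true @ A
D = D_true.copy()
D[:, 0] += 0.3 * rng.standard_normal(n)     # corrupt atom 0
err_before = np.linalg.norm(X - D @ A)
D, A = ksvd_atom_update(D, A, X, 0)
err_after = np.linalg.norm(X - D @ A)
```

Because the SVD gives the best rank-1 fit to the restricted error, the update can never increase the representation error on the affected patches.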

Dictionary learning can be used to reconstruct images; a classic algorithm is summarized in Figure 1. Given an initial dictionary, perform dictionary learning with an appropriate training method to obtain the sparse representation, then update the image under a specific transform (e.g., wavelet, Fourier), and output the result after several iterations.

Figure 1: The algorithm block diagram of dictionary learning applied in image reconstruction.
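The patch-coding stage of the Figure 1 loop can be sketched with a fixed analytic transform standing in for a learned dictionary (a simplified illustration; a real DL reconstruction would train D and include a data-fidelity update in the transform domain):

```python
import numpy as np

def dct_matrix(p):
    """Orthonormal 1D DCT-II matrix, used as a fixed analytic 'dictionary'."""
    k, i = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    C = np.sqrt(2.0 / p) * np.cos(np.pi * (2 * i + 1) * k / (2 * p))
    C[0, :] = np.sqrt(1.0 / p)
    return C

def patch_sparse_denoise(img, p=8, keep=6):
    """Sparse-code each p x p patch by keeping its `keep` largest
    2D-DCT coefficients, then average the patches back into the image."""
    C = dct_matrix(p)
    out = np.zeros_like(img)
    cnt = np.zeros_like(img)
    step = p // 2                                # half-patch sliding distance
    for r in range(0, img.shape[0] - p + 1, step):
        for c in range(0, img.shape[1] - p + 1, step):
            coef = C @ img[r:r+p, c:c+p] @ C.T   # analysis (2D DCT)
            thresh = np.sort(np.abs(coef).ravel())[-keep]
            coef[np.abs(coef) < thresh] = 0.0    # hard thresholding
            out[r:r+p, c:c+p] += C.T @ coef @ C  # synthesis
            cnt[r:r+p, c:c+p] += 1.0
    return out / cnt

rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 32)
clean = np.outer(np.sin(t), np.sin(t))           # smooth test image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = patch_sparse_denoise(noisy)
```

Because the smooth image is well represented by a few coefficients while the noise spreads over all of them, the thresholded reconstruction sits closer to the clean image than the noisy input does.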

3. DDL Algorithm in Image Analysis

3.1. From Single to Dual-Dictionary

The DL method is widely used in image restoration [17–19], super-resolution reconstruction [20–23], image deblurring [24–26], denoising [27–32], medical image reconstruction [13, 33], image prediction [34], and image inpainting [35]. However, both the dynamically changing atoms in each iteration step and the noise in the measurement data increase the iteration time, making the DL method slow in most cases. To overcome this inefficiency, some researchers proposed that, by introducing two or more dictionaries, image quality could be further improved within less time. One of the improved methods is dual-dictionary learning (DDL).

DDL theory was first introduced by Curzion et al. as PADDL; it aimed to train a linear mapping in the case of a single dictionary. Note that this method does not use two different dictionaries but trains one dictionary together with its "dual." In the PADDL method, the essential concept is to update the dictionary D by means of its "dual" W as an auxiliary item. It aims to find an optimal pair of linear operators (D, W) by minimizing the following:

min_{D, W, α} ‖X − Dα‖_F^2 + γ‖α − WX‖_F^2 + λ‖α‖_1,

where W is the matrix to be trained and α is the representation. W can be treated as a bank of filters whose output WX approximates the optimal α; γ and λ are weight parameters.

The result shows that this dual-dictionary training method can be applied well in calculating the sparse representations [36].

3.2. DDL in Super-Resolution Reconstruction

Zhang et al. proposed an efficient sparse representation method to solve image super-resolution reconstruction via DDL [37]. In this work, they assume that image patches at different resolutions can share the same underlying sparse representation. Thus, given a dictionary pair (D_h, D_l), where D_h stands for high resolution and D_l stands for low resolution, the sparse representation α of a patch y from the low-resolution image is obtained similarly to (3):

α̂ = argmin_α ‖α‖_0  subject to  ‖y − D_l α‖_2 ≤ ε.

With the sparse representation vector α̂, the high-resolution patch x can be approximately expressed as x ≈ D_h α̂. Putting all the high-resolution patches back into their corresponding positions and performing normalization, we finally obtain the estimate of the high-resolution image X.
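The shared-representation idea can be illustrated with a toy coupled pair (a 1-sparse sketch under an assumed pair-averaging downsampling operator; real methods code each patch with several atoms):

```python
import numpy as np

rng = np.random.default_rng(2)
n_h, n_l, K = 16, 8, 40
D_h = rng.standard_normal((n_h, K))            # high-resolution dictionary
M = np.kron(np.eye(n_l), np.ones((1, 2)) / 2)  # toy downsampling: pair averages
D_l = M @ D_h                                   # coupled low-resolution dictionary

y = 2.0 * D_l[:, 5]                             # low-res patch made from atom 5
scores = np.abs(D_l.T @ y) / np.linalg.norm(D_l, axis=0)
k = int(np.argmax(scores))                      # 1-sparse code: best atom
coef = (D_l[:, k] @ y) / (D_l[:, k] @ D_l[:, k])
x_h = coef * D_h[:, k]                          # map shared code to high-res atom
```

Because the code (atom index and coefficient) is shared across the pair, multiplying it into `D_h` yields the high-resolution counterpart of the low-resolution patch.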

The optimization model for learning the coupled dictionaries with their "dual" is as follows:

min_{D_h, D_l, W, α} ‖X_h − D_h α‖_F^2 + ‖X_l − D_l α‖_F^2 + γ‖α − W X_l‖_F^2 + λ‖α‖_1,  (8)

where D_h ∈ R^{n_h×K} and D_l ∈ R^{n_l×K}, in which n_h and n_l are the dimensions of the high- and low-resolution patches, and α is the shared sparse representation. W is the dual as mentioned in Section 3.1. After multiplying the approximate code W X_l by D_h, we acquire the high-resolution patches X_h. In this method, D_h and D_l are treated as one stacked dictionary and trained simultaneously with their dual W.

With the approximate sparse coding procedure of model (8), the results show that their method speeds up the overall super-resolution process significantly.

3.3. DDL in Image Restoration

Similar to Zhang et al.'s work, Wang et al. also applied DDL to image restoration [38]. They solved the problem of restoring the lost high-frequency detail information of images.

Wang et al. reconstructed the high-frequency (HF) details from the low-resolution images using prior models. HF is decomposed into a combination of two components, main high-frequency (MHF) and residual high-frequency (RHF). Wang et al. restored MHF and RHF, respectively, with the dual dictionaries and then added up MHF and RHF at the end. For dictionary construction, K-SVD was used to train the two dictionaries. The experimental results reveal that the PSNR values are better than those of the bicubic and sparse representation algorithms.

3.4. DDL in Human Pose Estimation

Ji and Su proposed a new method for robust 3D human pose estimation using DDL [39]. In their study, they constructed two dictionaries simultaneously: a visual observation dictionary and a body configuration dictionary. The two dictionaries share the same sparse representation with respect to every visual observation and its corresponding 3D body pose.

Since outline features are usually corrupted, the optimization model for robust human pose estimation is as follows:

min_{D_x, D_y, α, E} ‖X − D_x α − E‖_F^2 + ‖Y − D_y α‖_F^2 + λ‖α‖_1 + γ‖E‖_1,  (9)

where X is the observation data matrix, D_x the observation dictionary, Y the 3D pose data matrix, and D_y the body configuration dictionary; α is the common sparse representation of X and Y, and E is the corruption item to be minimized.

To solve problem (9), Ji and Su used an inexact augmented Lagrange multiplier (IALM) method to update the two dictionaries. More details related to the IALM method can be found in [29].

The experimental results show that their approach performs well in recovering outlines from corrupted data compared with other methods.

4. DDL Algorithm in Medical Image Reconstruction

Recently, DDL has gained attention in medical image reconstruction, as it can improve image quality and accelerate the reconstruction process.

4.1. Method and Theory

Let x_l be a low-quality image and D_l a low-quality dictionary constructed from x_l. Similarly, let x_h be the high-quality counterpart of x_l and D_h the dictionary constructed from x_h. As a corresponding relation exists between x_l and x_h, they can be connected with the following general model:

x_l = H x_h + n,  (10)

where n is the noise and H is the transform (degradation) operator. For a specific x_h, we can assume that each patch x_{h,s} in x_h can be expressed as a linear combination of the atoms in the dictionary D_h:

x_{h,s} = D_h α + e,  (11)

where e is the error, ‖e‖_2 ≤ ε, and α is the sparse coefficient vector, ‖α‖_0 ≤ T. Combining (11) and (10) gives

x_{l,s} = H D_h α + H e + n_s ≈ D_l α,  (12)

with D_l = H D_h playing the role of the low-quality dictionary.

According to the above derivations, which are referred to as the Sparse-Land model, the low-quality patch x_{l,s} can be sparse coded by the same vector α under the dictionary D_l. Thus, given the dictionaries D_h and D_l with an accurate one-to-one mapping between their atoms, we can approximately recover x_{h,s} simply by multiplying D_h with the sparse representation α̂ obtained from x_{l,s}:

x̂_{h,s} = D_h α̂.
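This recovery chain (degrade via (10), code under the low-quality dictionary, multiply by the high-quality one) can be checked numerically on synthetic patches. In this sketch the support of the sparse code is assumed known, an oracle standing in for a pursuit algorithm such as OMP, and the degradation operator H is a random matrix rather than a physical model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_h, n_l, K = 24, 12, 60
D_h = rng.standard_normal((n_h, K))                 # high-quality dictionary
H = rng.standard_normal((n_l, n_h)) / np.sqrt(n_h)  # degradation operator, eq. (10)
D_l = H @ D_h                                       # paired low-quality dictionary

support = [7, 20, 41]
alpha = np.array([1.0, -0.8, 0.6])
x_h = D_h[:, support] @ alpha                       # unknown high-quality patch
x_l = H @ x_h                                       # observed low-quality patch

# Sparse-code x_l under D_l: with the true support given, a least-squares
# fit recovers the shared coefficients alpha exactly (noise-free case).
a_hat, *_ = np.linalg.lstsq(D_l[:, support], x_l, rcond=None)
x_rec = D_h[:, support] @ a_hat                     # recovery via D_h
```

The point of the experiment is that the code fitted on the low-quality side reproduces the high-quality patch through `D_h`, exactly as the one-to-one atom mapping requires.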

The general workflow of the DDL method in medical image reconstruction is summarized in Figure 2. Given two sets of measured data (high-resolution sample images and low-resolution sample images), we can obtain two dictionaries D_h and D_l using appropriate training methods (DM, MOD, GPCA, or K-SVD). When measured data is input, we obtain the sparse representation with D_l and then update the image using D_h.

Figure 2: The general workflow for DDL method.
4.2. DDL in CT Reconstruction

Computed tomography (CT) reconstruction is the process of obtaining tomographic images of the human body from X-ray projection data. The reconstruction methods can be divided into two types, analytic and iterative. In recent years, CS-based iterative methods have been applied to 3D X-ray image reconstruction; they perform more flexibly and accurately than analytic methods in most cases. Some typical topics include the interior CT problem, low-dose imaging, and incomplete data reconstruction [40–44].

Lu et al. made progress in few-view CT image reconstruction (SART-TV-DL) [9, 10] using DDL. Since each pair of corresponding sample images is reconstructed from the same object and differs only in the number of projection views, a high-quality image and its low-quality counterpart have the relationship described in (10).

In their work, a set of high-quality images reconstructed with the SART algorithm from adequate projections was used to construct a high-quality dictionary D_h, while, according to the pixel-to-pixel mapping rule, a low-quality dictionary D_l was generated from a set of blurry images reconstructed from undersampled projection data. To solve the dictionary training problem, they used the DM mentioned in Section 2.2 because it reserves most details of the sample images. Moreover, this method generates dictionaries in the easiest and fastest way.

However, in a CT image, pixel values alone cannot reflect the relationship between two adjacent pixels. Therefore, in addition to DM, they combined the pixel values with the first-order gradient vectors along the x and y directions to provide more information in the image vector of each patch. That is, for an image patch of size √n × √n, each atom in the dictionary has three times as many features (pixel values plus the two gradient components) because of the gradients. As the dictionaries were redundant or overcomplete, they reduced the redundancy by setting a minimum Euclidean distance threshold between atoms.
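Such a gradient-augmented atom can be formed as follows (a minimal sketch; the 4 × 4 patch size is illustrative):

```python
import numpy as np

def atom_features(patch):
    """Stack pixel values with first-order gradients along the x and y
    directions, tripling the feature length of each dictionary atom."""
    gy, gx = np.gradient(patch.astype(float))   # gy: along rows, gx: along columns
    return np.concatenate([patch.ravel(), gx.ravel(), gy.ravel()])

p = np.arange(16, dtype=float).reshape(4, 4)    # toy 4x4 patch
f = atom_features(p)                            # 16 pixels + 16 + 16 gradients
```

For this toy patch (value 4i + j at row i, column j), the x-gradient is 1 everywhere and the y-gradient is 4 everywhere, so the feature layout is easy to verify.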

The real-data results demonstrate the potential of the SART-TV-DL algorithm in CT image reconstruction with 30–50 views. It contributes to some preclinical and clinical applications such as C-arm, breast CT, and tomosynthesis.

Different from Lu's work, Cao and Xing applied DDL to CT limited-angle reconstruction [45]. In their work, a two-dictionary learning (ART-TV-TDL) algorithm is proposed to remove the limited-angle artifacts. The two dictionaries were, respectively, an object dictionary D_o learned from a high-quality training image and an artifact dictionary D_a learned from an artifact image. A limited-angle reconstruction x, which can be divided into the object part x_o and the artifact part x_a, has different sparse representation coefficients under D_o and D_a as follows:

x = x_o + x_a ≈ D_o α_o + D_a α_a.

Here α_o and α_a are the sparse coefficients with respect to D_o and D_a under a sparsity constraint; the training method was K-SVD in this work. To get a better image with the artifacts restrained, they combined these two representations in the iterative reconstruction, using three parameters λ1, λ2, and λ3 to balance the data-fidelity, object, and artifact terms. Their results show that the ART-TV-TDL method achieves smaller RMSE values at different limited angles (90° and 120°) compared with the ART-TV method.

4.3. DDL in 3D MRI Reconstruction

Song et al. proposed a novel method for multislice (3D) MRI reconstruction from undersampled k-space data using dual-dictionary learning (Dual-DL-MRI) [11].

For a high-resolution MRI image series x_h, one can represent it as one vector of length N and obtain its undersampled k-space measurements y by the Fourier transform, y = F_u x_h, where F_u is a three-dimensional undersampling Fourier matrix. Therefore, the corresponding low-quality series x_l can be reconstructed from the undersampled k-space by the inverse Fourier transform as follows:

x_l = F_u^H y = F_u^H F_u x_h.  (16)

As we can see, (16) is one form of (10) with H = F_u^H F_u, which demonstrates the feasibility of the dual-dictionary approach in MRI reconstruction.
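A zero-filled low-quality reconstruction of this kind can be simulated in a few lines (a sketch using a random sampling mask on a random stand-in image; real Dual-DL-MRI uses structured 3D sampling patterns):

```python
import numpy as np

# Simulate eq. (16): undersampled k-space measurement and the
# zero-filled (low-quality) reconstruction x_l = F_u^H F_u x.
rng = np.random.default_rng(1)
x = rng.standard_normal((32, 32))          # stand-in for an image slice
k_full = np.fft.fft2(x, norm="ortho")      # full k-space
mask = rng.random((32, 32)) < 0.3          # keep ~30% of the samples
k_under = k_full * mask                    # undersampled measurement y
x_l = np.fft.ifft2(k_under, norm="ortho").real
```

The degraded `x_l` is exactly the low-quality input that would be patch-coded under D_l, with D_h then supplying the high-quality estimate.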

To construct the dual dictionary, they used the K-SVD method to train the two dictionaries simultaneously to ensure the matching accuracy (one-to-one correspondence); D_h and D_l can be obtained by

{D_h, D_l, α} = argmin ‖X_h − D_h α‖_F^2 + ‖X_l − D_l α‖_F^2  subject to  ‖α_s‖_0 ≤ T for every patch s,

where (X_h, X_l) stands for two sample sets whose patches are one-to-one matched. It is worth noting that no feature vectors other than the pixel values are written into each dictionary atom.

After updating the reconstruction result for each slice in the Fourier domain (restoring the measured data), their method successfully improves the PSNR relative to the low-resolution MRI reconstruction images.

4.4. DDL in Multimodality Image Reconstruction

Multimodality biomedical imaging has found increasing applications during the last decade and is becoming routine in clinical practice. Multimodality imaging integrates multiple imaging techniques into one instrument or fuses two or more imaging modalities such as CT, MRI, PET, and SPECT. This integration of structural, functional, and molecular information provides more accurate diagnoses. For example, MRI offers human soft tissue information with excellent clarity, whereas CT depicts hard tissue such as bone. Both CT and MRI reveal important functional information. If these two modalities could be combined in one device, subtle pathologies such as small blood clots could be diagnosed exactly. However, the imaging principles of MRI and CT are totally different, and how to build an accurate connection between these two modalities is an urgent problem.

In order to utilize the synergy between CT and MRI data sets acquired from an object at the same time, Lu et al. investigated the possibility of CT-MRI unified imaging via dual dictionaries [46]. Figures 3(a) and 3(b) are, respectively, CT and MRI images; these two images are obtained from one slice of a patient's brain and are well registered. Figures 3(c) and 3(d) are the first-order gradient images of Figures 3(a) and 3(b) along one direction. Figure 3(e) is the subtraction of the CT and MRI images, and Figure 3(f) is the subtraction of their gradients. From Figures 3(c), 3(d), and 3(f), we can see that the interiors of the CT and MRI images are structurally correlated, especially at the skull. Thus, it is possible to build a connection between CT and MRI using this structural information. With an MRI image as the a priori information, Lu et al. try to recover the corresponding CT image.

Figure 3: (a) CT image; (b) corresponding MRI image; (c) the first-order gradient of CT; (d) the first-order gradient of MRI; (e) subtraction of the CT and MRI images; (f) subtraction of the gradient images. (a) and (b) are obtained from the Visible Human Project.

Since a CT scan is totally different from an MRI scan in physical principle, they use the direct method to reserve as much information as possible and establish a knowledge-based connection between the two datasets. The two dictionaries are D_MRI and D_CT; the former is derived from high-resolution MRI images, and the latter from high-resolution CT images. The significant point of the two dictionaries is that their patches are restricted to one-to-one correspondence.

In the reconstruction step, D_MRI and D_CT are treated as D_l and D_h in (12), respectively. With dual-dictionary learning, a base CT image is first obtained from a high-quality MRI image alone, without corresponding CT data. Second, combining the base CT image with highly undersampled CT data, they reconstruct a better-resolution CT image using an iterative method. The base CT image provides the resolution and outline information, while the highly undersampled CT data provide the detailed information.

5. Discussion and Conclusion

In this paper, we discussed the recent advances of the DDL methods in medical imaging. Based on highly undersampled measured data, DDL algorithm has shown its great potential in reconstructing high-resolution images [47, 48].

Nowadays, MRI has become an indispensable modality of imaging diagnosis. However, an MRI scan usually takes up to fifteen minutes or even more. Patients might feel uncomfortable keeping motionless for a long time in the MRI gantry. Moreover, motion artifacts that reduce image quality are always inevitable due to organ movements such as heartbeat, pulse, and spasm. Studies demonstrated that the average displacement within 100 seconds is over 0.35 mm for a healthy person lying on the cradle, while this number is up to 2.5 mm for a patient [42, 43]. Therefore, saving MRI scan time has important clinical significance for image quality and healthcare.

The DDL method may be a future direction of fast MRI reconstruction. As mentioned in Section 4.4, the same slice of CT and MRI images from one object is structurally correlated. The advantage of CT is that the scanning time is short for some typical parts of the body. Besides, the spatial resolution of CT is better than that of MRI. In fast MRI, the measurement data are incomplete. Therefore, if the CT image data can be utilized as prior information in the MRI reconstruction process, fewer measurement data (k-space samples) are required for high-resolution MRI image reconstruction. The essence of the reviewed DDL is to establish an appropriate relation between two spatial domains (e.g., different resolutions or different frequencies): one domain is used for atom matching and the other for image updating. Similarly, we may establish a quantitative relation between the two modalities using DDL. The relation can be a one-to-one mapping between the image boundaries, which reflects the correlation between CT and MRI. In this way, DDL enables fast MRI.

Overall, the DDL method has shown effective application in medical image reconstruction. With DDL, we can reconstruct a high-resolution image from highly undersampled data. Inspired by its performance within one medical modality, DDL can be applied to structurally correlated image reconstruction problems, for example, multimodality image reconstruction (CT-MRI).

However, research on DDL still remains at a preliminary stage. For example, as discussed in this paper, reconstruction results may be relatively sensitive to the matching accuracy between the two dictionaries. Thus, how to establish the closest connections between images with different resolutions or even different modalities will be an important issue to be solved in the future. Also, the redundancy of the dictionaries should be reduced more effectively to ensure better sparse representation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work was partly supported by grants from NNSFC 10905030 and 81427803, the Beijing Natural Science Foundation (research on key techniques of medical cone-beam CT reconstruction from little data based on compressed sensing theory), and the Beijing Excellent Talents Training Foundation (2013D009004000004). Thanks are due to the support of the Visible Human Project; the CT and MRI figures are obtained from the Visible Human Project.


References

1. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
2. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal on Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
3. T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265–274, 2009.
4. I. Daubechies, M. Defrise, and C. de Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
5. J. Qin and W. H. Guo, “An efficient compressive sensing MR image reconstruction scheme,” in Proceedings of the IEEE 10th International Symposium on Biomedical Imaging (ISBI '13), pp. 306–309, IEEE, San Francisco, Calif, USA, April 2013.
6. F. A. Razzaq, S. Mohamed, A. Bhatti, and S. Nahavandi, “Locally sparsified compressive sensing for improved MR image quality,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '13), pp. 2163–2167, 2013.
7. F. A. Razzaq, S. Mohamed, A. Bhatti, and S. Nahavandi, “Non-uniform sparsity in rapid compressive sensing MRI,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '12), pp. 2253–2258, October 2012.
8. S. Ma, W. T. Yin, Y. Zhang, and A. Chakraborty, “An efficient algorithm for compressed MR imaging using total variation and wavelets,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
9. Y. Lu, J. Zhao, and G. Wang, “Few-view image reconstruction with dual dictionaries,” Physics in Medicine and Biology, vol. 57, no. 1, pp. 173–189, 2012.
10. B. Zhao, H. Ding, Y. Lu, G. Wang, J. Zhao, and S. Molloi, “Dual-dictionary learning-based iterative image reconstruction for spectral computed tomography application,” Physics in Medicine and Biology, vol. 57, no. 24, pp. 8217–8229, 2012.
11. Y. Song, Z. Zhu, Y. Lu, Q. G. Liu, and J. Zhao, “Reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning,” Magnetic Resonance in Medicine, vol. 71, no. 3, pp. 1285–1298, 2014.
12. H. Lee, D. S. Lee, H. Kang, B.-N. Kim, and M. K. Chung, “Sparse brain network recovery under compressed sensing,” IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1154–1165, 2011.
13. S. Ravishankar and Y. Bresler, “MR image reconstruction from highly undersampled k-space data by dictionary learning,” IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1028–1041, 2011.
14. K. Engan, S. O. Aase, and J. H. Husoy, “Method of optimal directions for frame design,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '99), pp. 2443–2446, March 1999.
15. R. Vidal, Y. Ma, and S. Sastry, “Generalized principal component analysis (GPCA),” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1945–1959, 2005.
16. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
17. Y. Li, “Dictionary learning based multitask image restoration,” in Proceedings of the 5th International Congress on Image and Signal Processing (CISP '12), pp. 364–368, October 2012.
18. J. Zhang, D. B. Zhao, and W. Gao, “Group-based sparse representation for image restoration,” IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3336–3351, 2014.
19. J. Mairal, G. Sapiro, and M. Elad, “Learning multiscale sparse representations for image and video restoration,” Multiscale Modeling & Simulation, vol. 7, no. 1, pp. 214–241, 2008.
20. M. Tanaka, A. Sakurai, and M. Okutomi, “Across-resolution adaptive dictionary learning for single-image super-resolution,” in Digital Photography IX, vol. 8660 of Proceedings of SPIE, February 2013.
21. Y. Y. Fan, M. Tanaka, and M. Okutomi, “A classification-and-reconstruction approach for a single image super-resolution by a sparse representation,” in Digital Photography X, vol. 9023 of Proceedings of SPIE, May 2014.
22. L. Shang and Z. L. Sun, “Image super-resolution reconstruction based on two-stage dictionary learning,” in Intelligent Computing Methodologies, vol. 8589 of Lecture Notes in Computer Science, pp. 277–284, Springer, Berlin, Germany, 2014.
23. J. Zhang, C. Zhao, R. Q. Xiong, S. W. Ma, and D. B. Zhao, “Image super-resolution via dual-dictionary learning and sparse representation,” in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '12), pp. 1688–1691, IEEE, Seoul, Republic of Korea, May 2012.
24. Q. G. Liu, D. Liang, Y. Song, J. H. Luo, Y. M. Zhu, and W. S. Li, “Augmented Lagrangian-based sparse representation method with dictionary updating for image deblurring,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1689–1718, 2013.
25. L. Ma, L. Moisan, J. Yu, and T. Zeng, “A dictionary learning approach for Poisson image deblurring,” IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1277–1289, 2013.
26. Z. Hu, J.-B. Huang, and M.-H. Yang, “Single image deblurring with adaptive dictionary learning,” in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 1169–1172, Hong Kong, China, September 2010.
27. S. Beckouche, J. L. Starck, and J. Fadili, “Astronomical image denoising using dictionary learning,” Astronomy & Astrophysics, vol. 556, article A132, 2013.
28. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, 2006.
29. R. Giryes and M. Elad, “Sparsity-based Poisson denoising with dictionary learning,” IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5057–5069, 2014.
30. M. Razaviyayn, H.-W. Tseng, and Z.-Q. Luo, “Dictionary learning for sparse representation: complexity and algorithms,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '14), pp. 5247–5251, IEEE, Florence, Italy, May 2014.
31. E. M. Eksioglu, “Online dictionary learning algorithm with periodic updates and its application to image denoising,” Expert Systems with Applications, vol. 41, no. 8, pp. 3682–3690, 2014.
  32. X. Zhang, X. Feng, W. Wang, and G. Liu, “Image denoising via 2D dictionary learning and adaptive hard thresholding,” Pattern Recognition Letters, vol. 34, no. 16, pp. 2110–2117, 2013. View at Publisher · View at Google Scholar · View at Scopus
  33. Q. Xu, H. Y. Yu, X. Q. Mou, L. Zhang, J. Hsieh, and G. Wang, “Low-dose X-ray CT reconstruction via dictionary learning,” IEEE Transactions on Medical Imaging, vol. 31, no. 9, pp. 1682–1697, 2012. View at Publisher · View at Google Scholar · View at Scopus
  34. M. Türkan and C. Guillemot, “Dictionary learning for image prediction,” Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp. 426–437, 2013. View at Publisher · View at Google Scholar · View at Scopus
  35. G. H. Zhou, D. Z. Zhu, K. Wang, Q. Wu, X. C. Feng, and C. Wang, “Wavelet image inpainting based on dictionary learning with a beta process,” in Proceedings of the World Automation Congress (WAC '12), June 2012.
  36. C. Basso, M. Santoro, A. Verri, and S. Villa, “PADDLE: proximal algorithm for dual dictionaries LEarning,” in Artificial Neural Networks and Machine Learning—ICANN 2011, vol. 6791 of Lecture Notes in Computer Science Volume, pp. 379–386, Springer, Berlin, Germany, 2011. View at Publisher · View at Google Scholar
  37. H. Zhang, Y. Zhang, and T. S. Huang, “Efficient sparse representation based image super resolution via dual dictionary learning,” in Proceedings of the 12th IEEE International Conference on Multimedia and Expo (ICME '11), July 2011. View at Publisher · View at Google Scholar · View at Scopus
  38. X. Wang, Q. Ran, D. Chen, and F. Jiang, “Image restoration through dictionary learning and sparse representation,” Journal of Information and Computational Science, vol. 10, no. 11, pp. 3497–3502, 2013. View at Publisher · View at Google Scholar · View at Scopus
  39. H. Ji and F. Su, “Robust 3D human pose estimation via dual dictionaries learning,” in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 3370–3373, Tsukuba, Japan, November 2012. View at Scopus
  40. L. Li, K. J. Kang, Z. Q. Chen, L. Zhang, and Y. X. Xing, “A general region-of-interest image reconstruction approach with truncated Hilbert transform,” Journal of X-Ray Science and Technology, vol. 17, no. 2, pp. 135–152, 2009. View at Google Scholar · View at Scopus
  41. X. H. Duan, L. Zhang, Y. X. Xing, Z. Q. Chen, and J. P. Cheng, “Few-view projection reconstruction with an iterative reconstruction-reprojection algorithm and TV constraint,” IEEE Transactions on Nuclear Science, vol. 56, no. 3, pp. 1377–1382, 2009. View at Publisher · View at Google Scholar · View at Scopus
  42. L. Li, Y. Xing, Z. Chen, L. Zhang, and K. Kang, “A curve-filtered FDK (C-FDK) reconstruction algorithm for circular cone-beam CT,” Journal of X-Ray Science and Technology, vol. 19, no. 3, pp. 355–371, 2011. View at Publisher · View at Google Scholar · View at Scopus
  43. M. Chang, L. Li, Z. Q. Chen, Y. S. Xiao, L. Zhang, and G. Wang, “A few-view reweighted sparsity hunting (FRESH) method for CT image reconstruction,” Journal of X-Ray Science and Technology, vol. 21, no. 2, pp. 161–176, 2013. View at Publisher · View at Google Scholar · View at Scopus
  44. Z. Chen, X. Jin, L. Li, and G. Wang, “A limited-angle CT reconstruction method based on anisotropic TV minimization,” Physics in Medicine and Biology, vol. 58, no. 7, pp. 2119–2141, 2013. View at Publisher · View at Google Scholar · View at Scopus
  45. M. Cao and Y. X. Xing, “Limited angle reconstruction with two dictionaries,” in IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC '13), pp. 1–4, Seoul, Republic of Korea, October 2013. View at Publisher · View at Google Scholar
  46. Y. Lu, J. Zhao, T. G. Zhuang, and G. Wang, “Unified dual-modality image reconstruction with dual dictionaries,” in Developments in X-Ray Tomography VIII, vol. 8506 of Proceedings of SPIE, October 2012. View at Publisher · View at Google Scholar
  47. Z.-Q. Liu, L.-J. Bao, and Z. Chen, “Super-resolution reconstruction for magnetic resonance imaging based on adaptive dual dictionary,” Electro-Optic Technology Application, vol. 28, no. 4, pp. 55–60, 2013. View at Google Scholar
  48. L. Feng, P. Wang, T.-F. Xu, M.-Z. Shi, and F. Zhao, “Dual dictionary sparse restoration of blurred images,” Optics and Precision Engineering, vol. 19, no. 8, pp. 1982–1989, 2011. View at Publisher · View at Google Scholar · View at Scopus