Computational and Mathematical Methods in Medicine

Volume 2015, Article ID 152693, 9 pages

http://dx.doi.org/10.1155/2015/152693

## Recent Development of Dual-Dictionary Learning Approach in Medical Image Analysis and Reconstruction

^{1}Department of Engineering Physics, Tsinghua University, Beijing 100084, China

^{2}Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing 100084, China

Received 5 October 2014; Revised 12 January 2015; Accepted 6 April 2015

Academic Editor: Valeri Makarov

Copyright © 2015 Bigong Wang and Liang Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

As an implementation of compressive sensing (CS), the dual-dictionary learning (DDL) method provides an effective way to restore signals using two related dictionaries and a sparse representation. It has been shown that this method performs well in medical image reconstruction from highly undersampled data, especially in multimodality imaging such as hybrid CT-MRI reconstruction. Because of its outstanding strengths, namely short signal acquisition time and low radiation dose, DDL has attracted broad interest in both academia and industry. In this review article, we summarize the development history of DDL, survey its latest advances, and discuss its future directions and potential applications in medical imaging. We also point out that DDL is still at an early stage, and further studies are needed to improve the method, especially in dictionary training.

#### 1. Introduction

Compressive sensing (CS) is a novel theory of information acquisition and processing [1]. Since general signals are broadband, traditional signal reconstruction methods usually rely on Nyquist sampling, which requires a high sampling rate and a long processing time. CS theory, in contrast, offers a way to restore a signal accurately from far fewer measurements by solving an optimization problem in which the signal is sparsely represented in a basis and the high-dimensional data are projected onto a lower-dimensional subspace. CS theory has therefore been widely recognized and applied in various fields.

Some groups focus on CS applications and have developed various branches such as Bayesian CS and 1-bit CS [2–4]. Applied to medical image reconstruction, CS has proven to be a method that effectively preserves high image quality from undersampled measurement data in different imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI) [5–7]. Moreover, CS shows great potential in multimodality image reconstruction, one of the future directions of medical imaging.

Dictionary learning (DL) is a typical method of CS image reconstruction. In this method, the sampled data are compressible in a specific transform domain, and the transform coefficients are projected onto a lower-dimensional vector while the essential image information is well retained. As a result, the complex reconstruction problem is simplified to an optimization problem. Three issues must usually be addressed to solve an image reconstruction problem with DL. First, design an overcomplete dictionary that can represent a signal sparsely. Second, obtain a measurement matrix that satisfies the restricted isometry property. Third, develop a fast and robust signal reconstruction algorithm. The designed dictionary is important to the accuracy of CS image reconstruction. In the DL method, the dictionary is self-adaptive and flexible; it is trained on particular image samples or groups of images, and the achievable image sparseness differs considerably across training methods [8].

Although the DL-based approach has been recognized in the medical image reconstruction field, using a single dictionary throughout the whole imaging process limits image quality: one dictionary alone is far from enough as prior information. To improve image quality, researchers have extended the DL method to dual-dictionary learning (DDL), which incorporates more diverse prior information in imaging modalities such as CT and MRI. The DDL method was initially developed for image super-resolution. Lu et al. [9, 10] applied it to CT reconstruction, and Song et al. [11] used it in 3D MRI reconstruction. DDL shows great potential in medical image reconstruction.

In this paper, we discuss the DL method in Section 2. Building on it, we review the history and recent development of DDL in Section 3, including its theory, feasibility demonstrations, and applications in different fields. In Section 4, we discuss the use of DDL in medical image analysis. In the Discussion and Conclusion section, we summarize the algorithms and explore future directions in medical image reconstruction.

#### 2. Dictionary Learning (DL) Algorithm

##### 2.1. DL Method and Theory

According to CS theory, an undersampled image reconstruction problem is to solve an underdetermined system of linear equations by minimizing the $\ell_0$ quasi-norm (i.e., the number of nonzeros) of the sparsified transform $\Psi x$; that is, the image $x$ is sparse after a complete sparse transform $\Psi$. The corresponding optimization problem is

$$\min_{x} \|\Psi x\|_{0} \quad \text{s.t.} \quad y = \Phi x. \tag{1}$$

In (1), $x$ is the image to be reconstructed and $\Phi$ is the measurement matrix for the given measurements $y$. Equation (1) is also known as a sparse coding problem, which is NP-hard (nondeterministic polynomial-time hard). It can be solved by greedy algorithms, for example, orthogonal matching pursuit (OMP) [12]. It is notable that if the $\ell_0$ norm is replaced with the $\ell_1$ norm, the problem can be solved by linear programming in the real domain or by second-order cone programming in the complex domain.
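To make the greedy approach concrete, the following is a minimal OMP sketch in Python/NumPy. It is an illustration, not an implementation from the cited works; for simplicity the toy dictionary is orthonormal, a case in which OMP recovers the sparse code exactly.

```python
import numpy as np

def omp(D, y, t0):
    """Orthogonal matching pursuit: greedily build a t0-sparse code
    alpha such that D @ alpha approximates y (columns of D, the
    atoms, are assumed to have unit l2 norm)."""
    K = D.shape[1]
    residual = y.copy()
    support = []
    alpha = np.zeros(K)
    coef = np.zeros(0)
    for _ in range(t0):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares fit of y on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha[support] = coef
    return alpha

# toy check with an orthonormal dictionary: a 2-sparse code is recovered
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # orthonormal atoms
alpha_true = np.zeros(16)
alpha_true[[3, 11]] = [1.5, -2.0]
y = Q @ alpha_true
alpha = omp(Q, y, t0=2)
print(np.allclose(alpha, alpha_true))  # True
```

For an overcomplete dictionary, recovery is no longer guaranteed in general; the $\ell_1$ relaxation mentioned above trades the greedy search for a convex program.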

Given an image of size $\sqrt{N}\times\sqrt{N}$, it can be decomposed into small patches of size $\sqrt{n}\times\sqrt{n}$, $n \ll N$. Each patch can be expressed as an $n$-dimensional vector $x_s$. All the patches are extracted from the object image according to the patch size and the sliding distance. A dictionary $D \in \mathbb{R}^{n\times K}$ is a matrix consisting of $K$ atoms, which are the columns of the dictionary. As the patch vectors come from sample images, the initial dictionary constructed from the extracted patches is usually redundant, or overcomplete; that is, $K > n$. Using specific atoms of the initial dictionary $D$, each vector $x_s$ in the image can be approximately represented by a sparse coefficient vector $\alpha_s$ [13]. Consider

$$\|x_s - D\alpha_s\|_2 \le \epsilon,$$

where $\epsilon$ is the error bound and $\alpha_s$ is the sparse representation vector, which has few nonzero elements: $\|\alpha_s\|_0 = T_0$, $T_0 \ll n$. To obtain the sparse representation of the vector $x_s$, one can minimize the $\ell_0$ norm as

$$\min_{\alpha_s} \|\alpha_s\|_0 \quad \text{s.t.} \quad \|x_s - D\alpha_s\|_2 \le \epsilon. \tag{2}$$
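The patch extraction step described above can be sketched as follows; the patch size and stride are illustrative values, not taken from the cited papers.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a patch_size x patch_size window over the image with the
    given stride and return each patch flattened to an n-vector
    (n = patch_size**2), stacked as the columns of a matrix."""
    H, W = image.shape
    patches = []
    for i in range(0, H - patch_size + 1, stride):
        for j in range(0, W - patch_size + 1, stride):
            patch = image[i:i + patch_size, j:j + patch_size]
            patches.append(patch.ravel())
    return np.stack(patches, axis=1)  # shape: (n, number_of_patches)

# a 64x64 image cut into 8x8 patches with stride 4 (overlapping)
image = np.arange(64 * 64, dtype=float).reshape(64, 64)
X = extract_patches(image, patch_size=8, stride=4)
print(X.shape)  # (64, 225): n = 64, and 15 x 15 patch positions
```

A stride smaller than the patch size yields overlapping patches, which gives the training set more samples and reduces blocking artifacts when the patches are averaged back into an image.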

If an image contains $S$ patches, DL seeks a dictionary $D$ in which all the patches can be sparsely represented as follows:

$$\min_{D,\{\alpha_s\}} \sum_{s=1}^{S} \|\alpha_s\|_0 \quad \text{s.t.} \quad \|x_s - D\alpha_s\|_2 \le \epsilon, \quad s = 1,\dots,S. \tag{3}$$

Usually, if the sparsity level $T_0$ is fixed to a specific value, (3) is equivalent to solving the following problem:

$$\min_{D,\{\alpha_s\}} \sum_{s=1}^{S} \|x_s - D\alpha_s\|_2^2 \quad \text{s.t.} \quad \|\alpha_s\|_0 \le T_0, \quad s = 1,\dots,S. \tag{4}$$

##### 2.2. Dictionary Construction

The DL problem is NP-hard because it reduces to a sparse coding problem when the dictionary $D$ is fixed. Currently, four main adaptive dictionary training algorithms have been proposed to solve such a dictionary learning problem.

(1) Direct method (DM): DM is an original method that preserves all the details in the sample images through a direct extraction process; a target image can then be fully recovered if the patches are well chosen. This method is usually effective in super-resolution image reconstruction.

(2) Method of optimal directions (MOD): MOD fixes the coefficients corresponding to the dictionary vectors and then updates the atoms by minimizing the residuals between the training vectors and their representations. The main advantage of MOD is that it gives the optimal adjustment of the dictionary vectors in each iteration. It usually provides good convergence properties, for example, on ECG (electrocardiogram) signals [14].

(3) Generalized principal component analysis (GPCA): GPCA is a general method for modeling and segmenting mixed data using a collection of subspaces. By introducing algebraic models and techniques into data clustering, traditionally a statistical problem, GPCA offers a new spectrum of algorithms for data modeling and clustering [15].

(4) $K$-means singular value decomposition (K-SVD): K-SVD is an iterative method that updates the dictionary atoms to better fit the data. The method performs an SVD on the representation error and updates each dictionary atom and its coefficients simultaneously with the rank-1 term that minimizes the error. As the most widely used dictionary training method, K-SVD has excellent convergence and sparsity properties [16].
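The alternating structure shared by MOD and K-SVD, sparse coding followed by an atom update, can be sketched as a simplified K-SVD in NumPy. This is a toy sketch under stated assumptions: the sparse-coding step is a crude correlation-thresholding stand-in for OMP, and all sizes are illustrative.

```python
import numpy as np

def sparse_code(D, X, t0):
    """Crude per-signal sparse coding: keep the t0 atoms most
    correlated with each column of X and least-squares fit them
    (a simplified stand-in for OMP)."""
    K = D.shape[1]
    A = np.zeros((K, X.shape[1]))
    for s in range(X.shape[1]):
        idx = np.argsort(-np.abs(D.T @ X[:, s]))[:t0]
        coef, *_ = np.linalg.lstsq(D[:, idx], X[:, s], rcond=None)
        A[idx, s] = coef
    return A

def ksvd(X, K, t0, iters=10, seed=0):
    """Simplified K-SVD: alternate sparse coding with a rank-1 SVD
    update of each atom, computed on the signals that use it."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], K))
    D /= np.linalg.norm(D, axis=0)          # unit-norm initial atoms
    for _ in range(iters):
        A = sparse_code(D, X, t0)
        for k in range(K):
            users = np.nonzero(A[k, :])[0]  # signals using atom k
            if users.size == 0:
                continue
            # representation error without atom k's contribution
            E = X[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]               # best rank-1 atom update
            A[k, users] = s[0] * Vt[0, :]   # matching coefficients
    return D, A

# train a 2x-overcomplete dictionary on random 8x8 "patches"
rng = np.random.default_rng(1)
X = rng.standard_normal((64, 500))
D, A = ksvd(X, K=128, t0=5, iters=5)
print(D.shape, A.shape)  # (64, 128) (128, 500)
```

MOD differs only in the update step: instead of per-atom SVDs, it solves for the whole dictionary at once via least squares, $D = XA^{+}$, with the coefficients $A$ held fixed.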

Dictionary learning can be used to reconstruct images; a classic algorithm is summarized in Figure 1. Given an initial dictionary, perform dictionary learning with an appropriate training method to obtain the sparse representation, then update the image under a specific transform (e.g., wavelet or Fourier), and output the result after several iterations.
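The loop just described can be sketched as follows for a Fourier-undersampling (MRI-like) case. This is a hedged illustration, not the algorithm of any cited paper: the dictionary is a fixed orthonormal toy matrix rather than one trained as above, and the mask, patch size, and sparsity level are assumed values.

```python
import numpy as np

def reconstruct(y, mask, D, patch, t0, iters=10):
    """Sketch of the reconstruction loop: alternate (1) a patch-wise
    sparse approximation over a fixed dictionary D with (2) data
    consistency, i.e. re-inserting the measured Fourier samples."""
    img = np.real(np.fft.ifft2(y))          # zero-filled initial image
    H, W = img.shape
    for _ in range(iters):
        approx = np.zeros_like(img)
        # (1) sparse-approximate every non-overlapping patch over D
        for i in range(0, H - patch + 1, patch):
            for j in range(0, W - patch + 1, patch):
                x = img[i:i + patch, j:j + patch].ravel()
                idx = np.argsort(-np.abs(D.T @ x))[:t0]
                coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
                approx[i:i + patch, j:j + patch] = (
                    D[:, idx] @ coef).reshape(patch, patch)
        # (2) data consistency: keep the measured Fourier samples
        F = np.fft.fft2(approx)
        F[mask] = y[mask]
        img = np.real(np.fft.ifft2(F))
    return img

# toy run: 50% random Fourier sampling of a 32x32 image
rng = np.random.default_rng(2)
true = rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.5
y = np.fft.fft2(true) * mask
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # toy orthonormal dictionary
rec = reconstruct(y, mask, Q, patch=8, t0=8, iters=5)
print(rec.shape)  # (32, 32)
```

In the full algorithm of Figure 1, the dictionary itself would also be re-trained (e.g., by K-SVD) inside the loop rather than held fixed.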