International Journal of Biomedical Imaging

Volume 2015, Article ID 958963, 18 pages

http://dx.doi.org/10.1155/2015/958963

## Clutter Mitigation in Echocardiography Using Sparse Signal Separation

Department of Computer Science, Israel Institute of Technology (Technion), 3200003 Haifa, Israel

Received 7 August 2014; Revised 23 March 2015; Accepted 30 March 2015

Academic Editor: Michael W. Vannier

Copyright © 2015 Javier S. Turek et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as to experimental cardiac ultrasound data. In both cases, MCA is demonstrated to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB.

#### 1. Introduction

In medical ultrasound imaging, a source of artifact called “clutter” is commonly caused by multipath reverberations or off-axis scatterers, and it materializes as a static cloud of echo signals occluding the tissue regions of interest [1, 2]. Clutter artifacts affect the contrast and readability of images and can corrupt functional measurements such as myocardial strain in cardiac ultrasound and displacement estimation in blood flow imaging. Often, clutter artifacts degrade ultrasound images to the point of requiring alternative imaging modalities such as CT or MRI, which are more expensive and, in the case of CT, expose the patient to radiation.

Clutter artifacts from reverberations appear when the acoustic wave bounces back and forth between a reflective structure and the transducer surface. In echocardiography, this is a common phenomenon because the rib cage and the sternum are highly reflective structures in close proximity to the path of the acoustic waves to the heart [3]. The energy of the acoustic waves decays with the distance traveled and with the number of bounces, so the effect is more significant in the near-field region of the image and less visible in far-field areas. As a consequence, the myocardium is partially occluded by the artifacts, which may lead to an incorrect diagnosis of cardiac function through visual inspection or tracking techniques [4, 5]. Methods that address the challenges posed by reverberation echoes include interpolating data from regions of the heart where artifacts are not present [6] or inferring heart motion in the affected regions using probabilistic models [7]. However, these techniques tend to fail on data from diseased hearts, because abnormal myocardial motion cannot be inferred from statistical assumptions or models. Therefore, a more appropriate methodology may be to separate the clutter from the signal of interest using filtering strategies, allowing motion tracking to be computed over the entire image.

Suggested filtering methods usually involve separating the tissue and clutter echo signals by linear decomposition. The echo data is transformed to a new coordinate system in which the clutter artifacts and the signal of interest are represented along different bases or dictionaries. Clutter artifacts are then suppressed by reducing their respective coefficients while leaving the coefficients of the tissue signal unchanged. Existing methods either use a priori orthogonal bases or learn them adaptively from the data. A priori methods use predefined bases that are orthonormal and independent of the data. Commonly used bases are the Discrete Fourier Transform (DFT), which has been used to define FIR or IIR filters for clutter mitigation in blood flow imaging [8, 9], and the wavelet transform for clutter artifact reduction [10, 11]. The short-time Fourier Transform has also been used to filter clutter artifacts during beamforming [12].
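To make the a priori filtering idea concrete, the following is a minimal sketch (not any specific published filter) of high-pass FIR clutter suppression along slow time. It assumes the echo data is arranged as a stack of frames with shape `(n_frames, H, W)`, and uses a first-difference "wall filter" kernel as the simplest high-pass example; both the data layout and the kernel choice are illustrative assumptions.

```python
import numpy as np

def slow_time_highpass(frames, kernel=(0.5, -0.5)):
    """Suppress static (low slow-time frequency) clutter by FIR
    high-pass filtering each pixel's time series across frames.

    frames : array of shape (n_frames, H, W) -- hypothetical layout.
    kernel : FIR taps; (0.5, -0.5) is a first-difference wall filter
             that nulls any component constant over time.
    """
    kernel = np.asarray(kernel, dtype=float)
    # Filter along the slow-time (frame) axis, keeping the length.
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, frames)

# A static clutter component plus a moving tissue component:
n, h, w = 32, 4, 4
t = np.arange(n)[:, None, None]
clutter = 5.0 * np.ones((n, h, w))                    # static artifact
tissue = np.sin(2 * np.pi * 0.3 * t) * np.ones((1, h, w))
filtered = slow_time_highpass(clutter + tissue)
# Away from the edges, the static clutter mean is removed while the
# oscillating tissue component passes (attenuated) through.
```

Because the kernel is fixed a priori, any tissue motion whose slow-time spectrum overlaps the clutter band is attenuated as well, which is exactly the limitation that motivates the adaptive methods discussed next.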

Although a priori bases are fast to compute, they may produce poor results when clutter and tissue characteristics overlap. Furthermore, physiological differences among patients introduce spatial and temporal variability in the signal characteristics. Adaptive methods have been suggested to overcome these limitations by learning a basis from the actual data. The predominant method for determining a basis adaptively is Principal Component Analysis (PCA), which computes a basis from the covariance characteristics of the data. Adaptive techniques usually outperform methods that rely on a priori bases [13–16]. Some methods learn the basis from local areas of the signal [13] without exploiting the information in the whole image.
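The PCA-based filtering family can be illustrated with a minimal SVD sketch (a simplification, not the SVF method of [13]): arrange each pixel's slow-time series as a row of a Casorati matrix, and null the leading singular components, which tend to capture the high-energy, slowly varying clutter. The matrix layout and the choice of how many components to remove are assumptions for illustration.

```python
import numpy as np

def svd_clutter_filter(casorati, n_clutter=1):
    """Adaptive clutter suppression in the spirit of PCA filtering.

    casorati  : (n_pixels, n_frames) matrix; each row is one pixel's
                slow-time series.
    n_clutter : hypothetical number of leading singular components
                attributed to clutter and set to zero.
    """
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s_f = s.copy()
    s_f[:n_clutter] = 0.0            # null the clutter subspace
    return (U * s_f) @ Vt            # rebuild without those components

# A perfectly static artifact is rank-1 in slow time, so it lives
# entirely in the first singular component and is removed exactly.
```

Unlike a fixed FIR kernel, the removed subspace here is estimated from the data itself, which is what lets adaptive methods track patient-specific clutter characteristics.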

In the present paper, a Morphological Component Analysis (MCA) based separation algorithm [17] is introduced to mitigate clutter in ultrasound images while preserving the tissue signal. As described below, the current method learns a nonorthonormal redundant matrix (also called dictionary) from the entire data and decomposes the signal into a linear combination of a few columns (atoms) from the dictionary. Consequently, by separating the dictionary’s atoms into clutter and tissue representatives, clutter filtering is achieved by selectively removing clutter atoms. The feasibility of the method is demonstrated with simulated and experimental ultrasound data. Simulation is used to quantify the performance of the method across algorithm parameters and signal motion characteristics. The suggested algorithm is also experimentally demonstrated with echocardiography images, where clutter artifacts are a significant cause of image degradation. Its performance is compared against a high-pass FIR filter and state-of-the-art Singular Value Filtering (SVF) [13].

#### 2. Methods

##### 2.1. Sparse Representation of a Signal

The sparse representation model [18] assumes that a signal of interest can be decomposed into a linear combination of a few vectors or “atoms” from a given matrix, also called a “dictionary.” The atoms that take part in the linear combination are a small subset of the dictionary, and their respective coefficients are called the sparse representation of the signal. This model is used as prior information for signals: signal reconstruction is performed by first computing the sparse representation of the signal of interest and then reconstructing the signal from its sparse representation and the dictionary. Selecting the dictionary is an important step in this process and usually depends on the application. The objective is to find an adaptive dictionary that enables sparse representations of the relevant signals as accurately as possible.

The sparse representation principle can be illustrated by considering a signal $\mathbf{t} \in \mathbb{R}^{n}$, which can be decomposed into a linear combination of atoms:
$$\mathbf{t} = \mathbf{D}\boldsymbol{\alpha} = \sum_{i=1}^{m} \alpha_{i}\mathbf{d}_{i}, \tag{1}$$
where the vector $\boldsymbol{\alpha} \in \mathbb{R}^{m}$ is the sparse representation of the signal $\mathbf{t}$, implying that most of its entries are zeros, and $\mathbf{d}_{i}$ are the atoms (columns) of the dictionary $\mathbf{D} \in \mathbb{R}^{n \times m}$. The sparse vector $\boldsymbol{\alpha}$ has $k = \|\boldsymbol{\alpha}\|_{0}$ nonzero elements with $k \ll m$. The notation $\|\cdot\|_{0}$ represents the $\ell_{0}$-norm (strictly speaking, the $\ell_{0}$-norm is a quasi- or pseudonorm: it satisfies only two of the norm axioms and thus should not be considered a true norm), that is, the number of nonzero elements in the vector. The set of indices of the nonzero coefficients in $\boldsymbol{\alpha}$ is defined as the support $S$, and the signal can be decomposed alternatively into $\mathbf{t} = \mathbf{D}_{S}\boldsymbol{\alpha}_{S}$, with $\mathbf{D}_{S}$ being the subset of columns from $\mathbf{D}$ indexed by $S$ and $\boldsymbol{\alpha}_{S}$ the reduced vector with only the nonzero elements. When the support of a representation vector is known, the respective coefficients in $\boldsymbol{\alpha}_{S}$ are computed using the pseudoinverse of the dictionary restricted to the support, $\mathbf{D}_{S}^{\dagger}$:
$$\boldsymbol{\alpha}_{S} = \mathbf{D}_{S}^{\dagger}\mathbf{t} = \left(\mathbf{D}_{S}^{\top}\mathbf{D}_{S}\right)^{-1}\mathbf{D}_{S}^{\top}\mathbf{t}. \tag{2}$$
The support is unknown in practice and is estimated together with the nonzero coefficient values.
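The pseudoinverse step above can be sketched numerically. The following toy example (dictionary size, support indices, and coefficient values are all arbitrary illustrations) synthesizes a signal from three atoms and recovers the coefficients from the known support with a least-squares solve, which is equivalent to applying the restricted pseudoinverse.

```python
import numpy as np

def coefficients_from_support(D, t, support):
    """Given the support S, recover the nonzero coefficients via the
    pseudoinverse of the restricted dictionary: alpha_S = D_S^+ t."""
    D_S = D[:, support]                      # restrict D to support
    alpha_S, *_ = np.linalg.lstsq(D_S, t, rcond=None)
    return alpha_S

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))            # toy dictionary, 50 atoms
support = [3, 17, 41]                        # hypothetical known support
alpha_true = np.array([1.0, -2.0, 0.5])
t = D[:, support] @ alpha_true               # synthesize t = D_S alpha_S
alpha_S = coefficients_from_support(D, t, support)
# alpha_S matches alpha_true up to numerical precision, since D_S has
# full column rank for a generic random dictionary.
```

This confirms the point made in the text: once the support is known, recovering the coefficients is a plain linear least-squares problem; the hard part, addressed next, is estimating the support itself.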

The sparse representation is computed by finding the sparsest vector $\boldsymbol{\alpha}$ that yields $\mathbf{t}$ when multiplied by the given dictionary $\mathbf{D}$. This problem can be written as the following optimization task:
$$\min_{\boldsymbol{\alpha}} \|\boldsymbol{\alpha}\|_{0} \quad \text{subject to} \quad \mathbf{t} = \mathbf{D}\boldsymbol{\alpha}. \tag{3}$$
In practice, a noisy observation $\mathbf{y}$ of the signal of interest is obtained. It is often assumed that the noisy signal is contaminated with additive i.i.d. white Gaussian noise $\mathbf{v}$ with noise level $\sigma$; that is, $\mathbf{y} = \mathbf{D}\boldsymbol{\alpha} + \mathbf{v}$. Problem (3) is then reformulated to yield a solution that is close to the observed signal in the $\ell_{2}$-norm sense:
$$\min_{\boldsymbol{\alpha}} \|\boldsymbol{\alpha}\|_{0} \quad \text{subject to} \quad \|\mathbf{y} - \mathbf{D}\boldsymbol{\alpha}\|_{2} \leq \epsilon, \tag{4}$$
where $\epsilon$ is the desired bound on the distance from the observed signal and is usually proportional to the noise standard deviation $\sigma$. The notation $\|\mathbf{x}\|_{2} = \sqrt{\sum_{i} x_{i}^{2}}$ represents the $\ell_{2}$-norm of a vector $\mathbf{x}$. An alternative to (4) may be formulated in which the data fidelity term is minimized and the number of nonzero elements is constrained:
$$\min_{\boldsymbol{\alpha}} \|\mathbf{y} - \mathbf{D}\boldsymbol{\alpha}\|_{2}^{2} \quad \text{subject to} \quad \|\boldsymbol{\alpha}\|_{0} \leq k, \tag{5}$$
where $k$ is the maximum sparsity allowed in each representation. Solving problem (4) or (5) yields an approximate sparse representation $\hat{\boldsymbol{\alpha}}$, which is used to reconstruct the clean signal by multiplying with $\mathbf{D}$; that is, $\hat{\mathbf{t}} = \mathbf{D}\hat{\boldsymbol{\alpha}}$. The Orthogonal Matching Pursuit (OMP) [19] is a commonly used algorithm designed to approximate solutions to problem (4) or (5), and it is presented in Algorithm 1. OMP is a greedy pursuit algorithm that increments the support size by one nonzero element at a time. In each iteration, the atom that most reduces the residual distance to the observed signal is added to the support. The stopping criterion is given by the constraint of the problem being solved: the error bound $\epsilon$ for (4) or the number of nonzeros $k$ for (5). Other methods exist for approximating the solution of (4) or (5), such as the Basis Pursuit method [20], which relaxes the $\ell_{0}$ quasinorm in problem (4) with an $\ell_{1}$-norm (the $\ell_{1}$-norm of a vector $\mathbf{x}$ is defined as $\|\mathbf{x}\|_{1} = \sum_{i}|x_{i}|$ and is well known [18, 20] to give preference to sparse solutions) and solves a convex optimization problem.
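The greedy OMP iteration described above can be sketched as follows. This is a plain reference sketch of the textbook algorithm, not the paper's Algorithm 1 verbatim; atom normalization and the stopping defaults are implementation choices assumed here.

```python
import numpy as np

def omp(D, y, k=None, eps=1e-6):
    """Orthogonal Matching Pursuit: grow the support one atom at a
    time, picking the atom most correlated with the current residual,
    then refit all selected coefficients by least squares.

    Stops after k atoms (problem (5)) or when the residual l2-norm
    falls below eps (problem (4))."""
    n, m = D.shape
    Dn = D / np.linalg.norm(D, axis=0)   # normalized atoms for scoring
    residual = np.array(y, dtype=float)
    support, coef = [], np.zeros(0)
    alpha = np.zeros(m)
    max_atoms = k if k is not None else n
    while len(support) < max_atoms and np.linalg.norm(residual) > eps:
        # Atom whose direction best explains the current residual.
        j = int(np.argmax(np.abs(Dn.T @ residual)))
        if j in support:                 # no progress possible
            break
        support.append(j)
        # Orthogonal step: refit all coefficients on the support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    if support:
        alpha[support] = coef
    return alpha
```

The least-squares refit after each selection is what makes the pursuit "orthogonal": the residual stays orthogonal to every atom already chosen, so no atom is selected twice.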