Abstract

Compressed sensing or compressive sampling is a recent theory that originated in the applied mathematics field. It suggests a robust way to sample signals or images below the classic Shannon-Nyquist theorem limit. This technique has led to many applications, and has especially been successfully used in diverse medical imaging modalities such as magnetic resonance imaging, computed tomography, or photoacoustics. This paper first revisits the compressive sampling theory and then proposes several strategies to perform compressive sampling in the context of ultrasound imaging. Finally, we show encouraging results in 2D and 3D, on high- and low-frequency ultrasound images.

1. Introduction

Ultrasound (US) image acquisition, like that of all other imaging modalities, relies on Shannon’s theorem. This theorem states that, in order to reconstruct a signal perfectly, its sampling frequency must be at least twice the highest-frequency component present in the signal. In practice, US devices often use a sampling rate of at least four times the central frequency (wideband) to guarantee Shannon’s theorem. However, the volume of data obtained is large, especially in 3D imaging, and can impair real-time imaging or data transfer [14].

Compressed sensing (CS) or compressive sampling is a novel theory aiming to reduce the volume of data acquired below the one dictated by Shannon’s theorem. First introduced by Candès et al. and Donoho in 2006 [5, 6], CS guarantees the reconstruction of a signal from far fewer samples than usually necessary. CS has led to many applications, including medical imaging (particularly magnetic resonance imaging (MRI) and tomography), sampling the spatial or frequency domains [7-9]. In ultrasound imaging, a few groups have very recently proposed preliminary works adapting the compressive sampling framework to ultrasound imaging [10-15], to ultrasound Doppler [16, 17], or to photoacoustic tomography [18].

The success of this reconstruction lies on two principles inherent to CS: sparsity and incoherence.

Sparsity reflects the ability of a signal to be compressed. A signal that has a sparse representation in a given basis will have most of its coefficients (or entries) null or very close to zero in this representation. Hence, by suppressing those negligible coefficients, the signal can be compressed, that is, reconstructed from relatively few samples [19]. Of course, during data acquisition, there is no knowledge about which coefficients are significant and which are not. CS overcomes this issue using a sampling basis incoherent with the sparsity basis.

Incoherence in CS expresses the idea that signals that are sparse in a given basis should not be sampled in that basis but in another one, in which the signal is dense. This property guarantees that each acquired sample carries a comparable amount of information. If the sparse basis were sampled directly, there would be a risk of acquiring negligible coefficients that do not contribute to the signal reconstruction.

The challenge of CS is to design a sampling protocol that captures the information contained in the relatively few significant coefficients of the sparse basis. This sampling protocol must also suit any image of a given application, here US imaging, without prior knowledge of the signal or image to sample.

By means of optimization methods, the original signal can then be recovered from those few measurements and the reconstruction quality will be similar to the one obtained respecting Shannon’s theorem.

In summary, CS is a simple acquisition method where only a few samples of a signal are blindly measured. The full signal is later retrieved using reconstruction methods.

The purpose of this paper is twofold: first, to make CS familiar to the US imaging community, and second, to show the mechanisms involved in a successful CS reconstruction. The paper starts with a reminder of sampling theory. Then, an overview of the CS theory and its components (essentially sparsity, incoherence, and optimization) will be given in detail and illustrated in a US context. An application to US imaging will then be proposed and results of US image reconstruction from 25%, 33%, and 50% of the acquired samples will be shown. Finally, perspectives for CS in US will be drawn and future work will be described.

2. Sampling a Signal

The general framework of sampling can be summarized as measuring linear combinations of an analog signal $x$, possibly considered as projections on a given basis:

$$y_k = \langle x, \varphi_k \rangle, \quad k = 1, \ldots, m, \qquad (1)$$

where $\langle \cdot, \cdot \rangle$ denotes an inner product, $y_k$ are the measurements, $\varphi_k$ are the sampling vectors, and $m$ is the number of measurements. The most common sampling protocol consists of Diracs at equally spaced times (ideal sampling). The measurements obtained then represent a simple discretization of $x$.
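As an illustration (not from the paper), the following minimal Python sketch builds such a measurement model; the Dirac case of (1) reduces to picking time samples. All names and values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                       # signal length, number of measurements

t = np.arange(n)
x = np.cos(2 * np.pi * 0.05 * t)     # toy signal, already finely discretized

# Ideal (Dirac) sampling: each row of Phi selects one time instant.
idx = np.sort(rng.choice(n, size=m, replace=False))
Phi = np.zeros((m, n))
Phi[np.arange(m), idx] = 1.0

y = Phi @ x                          # y_k = <x, phi_k>, here simply x[idx]
assert np.allclose(y, x[idx])
```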

However, if the sampling vectors $\varphi_k$ are complex exponentials, then the measurements are Fourier coefficients. This sampling protocol is used in MRI, for example.

In CS, the number of measurements $m$ is far below the criterion established by Shannon’s theorem for a given signal duration. If $x$ is a digital signal of size $n$ (respecting Shannon’s theorem), then $m \ll n$. This situation can arise with slow imaging devices, for example, or when the number of sensors is limited. Even when it is possible to sample signals respecting Shannon’s theorem, as in US imaging, it might be advantageous to reduce the volume of data or the acquisition time.

However, when the number of measurements is smaller than the signal size, we are facing an ill-posed inverse problem. If $\Phi$ is the $m \times n$ matrix concatenating the sampling vectors $\varphi_k$, then $y = \Phi x$. When we want to recover the signal $x$ corresponding to the measurements $y$, there is an infinity of possible solutions.

CS shows that it is possible to recover $x$, provided that it has a sparse representation in a given basis and that the measurements are incoherent with that basis [20]. The following two sections explain those two concepts.

3. The Concept of Sparsity

Sparsity is the idea that signals may have a concise representation in a given basis. For example, a signal composed of three sinusoids will be sparse in the Fourier domain as its representation in this domain is very concise: namely 12 real coefficients (the magnitudes and phases of 6 symmetric complex coefficients). Hence a signal that is dense in the time domain can be coded with only a few coefficients.

Other examples include photographic images. In the image domain, almost all the pixels have a nonzero value. However, in the wavelet domain, these images are sparse; that is, they contain a majority of null or very small coefficients. By discarding those negligible coefficients, an approximation of the original image can be obtained, with minimal loss of information. Usually, this loss of information is not noticeable: this is the principle behind JPEG2000 compression [19].

Mathematically, it translates as follows:

$$x = \Psi \alpha = \sum_{i=1}^{n} \alpha_i \psi_i, \qquad (2)$$

where $x$ is the original signal, $\alpha_i$ are the coefficients of the signal in the sparse basis, and $\Psi$ is an orthonormal basis (Fourier or wavelets, e.g.). The $S$ largest coefficients are noted $\alpha_S$, and the corresponding signal $x_S = \Psi \alpha_S$. If $x$ is sparse in the basis composed of the vectors $\psi_i$, then $x_S \approx x$ and the error $\|x - x_S\|_{\ell_2}$ is small.

Figure 1 illustrates the sparsity of radio-frequency (RF) US signals. Because the sparse representation of the RF US signal (here, its Fourier transform) concentrates the information on a few coefficients, it is possible to reconstruct the signal almost perfectly from only 30% of its Fourier coefficients (keeping only the 30% largest coefficients $\alpha_i$ in (2)).
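The same experiment can be sketched on a synthetic RF-like signal (assumed data, not the signal of Figure 1): keep the 30% largest Fourier coefficients and measure the approximation error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
# Toy band-limited "RF-like" signal: modulated Gaussian pulses plus weak noise.
x = sum(np.exp(-((t - c) / 40.0) ** 2) * np.cos(2 * np.pi * 0.2 * (t - c))
        for c in (400, 900, 1500)) + 0.01 * rng.standard_normal(n)

alpha = np.fft.fft(x)                     # coefficients in the Fourier basis
keep = int(0.30 * n)                      # keep the 30% largest coefficients
thresh = np.sort(np.abs(alpha))[-keep]
alpha_S = np.where(np.abs(alpha) >= thresh, alpha, 0)

x_S = np.real(np.fft.ifft(alpha_S))       # compressed approximation x_S
err = np.linalg.norm(x - x_S) / np.linalg.norm(x)
print(f"relative error with 30% of the coefficients: {err:.3e}")
```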

This concept of concentrated information is also visible when plotting the Fourier coefficients in order of magnitude (Figure 2). If they decay rapidly, then the compressed signal $x_S$ including the $S$ largest coefficients will be close to the original signal $x$.

Sparsity therefore leads to the compressive nature of a signal: if a signal has a sparse representation, then the information coding that signal can be compressed on a few coefficients. A reconstruction from those few coefficients can be obtained with minimal loss compared to the original signal. Note however that CS and compression are different in that when sampling a signal, it is impossible to directly acquire the significant coefficients as their positions are not known a priori. CS overcomes this issue via an incoherent sampling.

4. Incoherent Sampling

The term incoherent sampling conveys the idea that the sampling protocol in (1) has to be as little correlated as possible with the sparse representation in (2). This requirement avoids the risk of sampling insignificant bits of information (the close-to-zero coefficients described in Section 3). Instead, the idea of an incoherent sampling is to introduce noise-like interferences into the signal to recover.

The mathematical definition of incoherence is [21, 22]:

$$\mu(\Phi, \Psi) = \sqrt{n} \cdot \max_{1 \le k, j \le n} |\langle \varphi_k, \psi_j \rangle|, \qquad (3)$$

where $\Phi$ is the sampling basis and $\Psi$ is the sparsifying basis. According to (3), if the two bases are strongly correlated, then $\mu(\Phi, \Psi)$ will be close to $\sqrt{n}$, and if they are not correlated at all, then it will be close to 1. CS requires a low coherence between the bases. In other words, incoherence is guaranteed provided that the two bases are not correlated.

Pairs of bases with minimum coherence include, for example, a basis of Diracs associated with a Fourier basis (a spatial Dirac contains information about all the frequencies). In addition, if the sampling basis is completely random, then it will be maximally incoherent with any fixed sparsifying basis (wavelets, curvelets, etc.) [21].
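As a numerical illustration (not from the paper), the coherence of (3) can be evaluated for a few pairs of bases; the basis choices below are ours.

```python
import numpy as np

n = 64
I = np.eye(n)                                      # Dirac (canonical) basis
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)     # orthonormal Fourier basis
rng = np.random.default_rng(2)
R, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal basis

def coherence(Phi, Psi):
    # mu(Phi, Psi) = sqrt(n) * max |<phi_k, psi_j>|, between 1 and sqrt(n).
    return np.sqrt(n) * np.abs(Phi.conj().T @ Psi).max()

print("Dirac   / Fourier:", coherence(I, F))   # 1: maximally incoherent
print("random  / Fourier:", coherence(R, F))   # small compared with sqrt(n)
print("Fourier / Fourier:", coherence(F, F))   # sqrt(n): fully coherent
```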

Figure 3 shows an example of coherent and incoherent samplings of a US RF signal. When the sampling is incoherent with the sparsifying basis (Figures 3(c) and 3(d)), the representation of the measurements in that basis is dense (as opposed to sparse). The original sparse signal is polluted by noise-like interferences and can be reconstructed by optimization. However, when the sampling basis and the sparsifying basis are coherent (Figures 3(e) and 3(f)), the measurements in the sparsifying basis are themselves sparse. Significant information (large Fourier coefficients) is missing and CS will not be able to recover the original signal.

From those incoherent measurements and knowing the sparsifying basis, CS theory states that it is possible to recover the original signal using an optimization routine.

5. Signal Reconstruction through Optimization

Knowing that the signal to recover has a sparse representation in a given basis $\Psi$, it is possible to reconstruct it from the incomplete incoherent measurements $y$ obtained using the sampling basis $\Phi$. This reconstruction is performed via a convex optimization program:

$$\hat{\alpha} = \arg\min_{\tilde{\alpha}} \|\tilde{\alpha}\|_{\ell_1} \quad \text{subject to} \quad \Phi \Psi \tilde{\alpha} = y, \qquad (4)$$

where $\hat{\alpha}$ is the sparse representation of the reconstructed signal and $\|\cdot\|_{\ell_1}$ denotes the $\ell_1$ norm.

This optimization searches, amongst all the signals that verify the measurements $y$, the one with the smallest $\ell_1$ norm, that is, the sparsest. The choice of the $\ell_1$ norm (sum of magnitudes) over the $\ell_0$ norm (size of the support) is mainly practical: while the $\ell_0$ norm minimization is computationally infeasible, the $\ell_1$ norm minimization can easily be recast as a linear program. The $\ell_2$ norm (sum of squared magnitudes) is unsuited to CS because its minimization would not recover the sparsest signal [23].
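To make the linear-program recasting concrete, here is a minimal sketch (our own, real-valued) that solves the equality-constrained $\ell_1$ problem of (4) with a generic LP solver and recovers a sparse vector exactly.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_equality(A, y):
    """Basis pursuit  min ||a||_1  s.t.  A a = y,  recast as a linear program:
    minimize sum(u) subject to -u <= a <= u and A a = y, variables z = [a, u]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])      # objective: sum(u)
    A_ub = np.block([[np.eye(n), -np.eye(n)],          #  a - u <= 0
                     [-np.eye(n), -np.eye(n)]])        # -a - u <= 0
    A_eq = np.hstack([A, np.zeros((m, n))])            #  A a = y
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n]

# Demo: exact recovery of a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(9)
n, m, S = 100, 40, 5
A = rng.standard_normal((m, n))
a_true = np.zeros(n)
a_true[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
a_hat = l1_min_equality(A, A @ a_true)
print("max recovery error:", np.max(np.abs(a_hat - a_true)))
```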

In other words, the optimization routine (4) removes from the sparse representation of the measurements the interferences caused by the incoherent undersampling (as in Figure 3(d)). Figure 4 illustrates the process of $\ell_1$ norm minimization applied to the Fourier transform of an RF signal: the significant coefficients tend to be amplified while the others, corresponding to the interferences, are attenuated.

In this example, at the last iteration, the recovered signal is exactly equal to the original signal. Generally speaking, the signal is recovered with overwhelming probability if the number of measurements satisfies

$$m \ge C \cdot \mu^2(\Phi, \Psi) \cdot S \cdot \log(n), \qquad (5)$$

where $C$ is a positive constant, $\mu(\Phi, \Psi)$ is the coherence as defined in (3), $S$ is the degree of sparsity, and $n$ is the signal size. From (5) it follows that the number of measurements depends on the sparsity of the signal in a given basis (the sparser, the better) and on the coherence of the sampling protocol with that basis.

In practice, many researchers have observed that accurate reconstructions can be achieved if the number of measurements is roughly 2 to 5 times the sparsity of the signal [24-26]. The work herein matches this statement. For an offline reconstruction, plots such as Figure 2 could be used to determine the degree of sparsity and consequently the minimum number of measurements required. For an online reconstruction, priors on the US imaging device bandwidth could be exploited. The bandwidth of classic US scanners ranges from 50% to 100% of the central frequency or more (depending on the scanner) and is practically estimated at 3 or 6 dB attenuation. An example of an experimental PSF together with its Fourier transform showing the bandwidth at 6 dB is given in Figure 5. Thus, taking into account the device bandwidth, the k-space may be considered even sparser than the impression given by Figure 2. The sparsity could then be set as the number of significant coefficients in the practical bandwidth, or two times this number.
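This rule of thumb can be turned into a small helper, sketched below under the assumption that the sparsity S is read off the sorted coefficient magnitudes (as in Figure 2); the energy threshold and the factor of 3 are our choices.

```python
import numpy as np

def estimate_sparsity(coeffs, energy_fraction=0.99):
    """Number of largest-magnitude coefficients needed to retain a given
    fraction of the total energy (used here as a proxy for S)."""
    mags = np.sort(np.abs(coeffs))[::-1]
    energy = np.cumsum(mags ** 2)
    return int(np.searchsorted(energy, energy_fraction * energy[-1]) + 1)

def measurements_needed(S, factor=3):
    """Empirical rule of thumb: m is roughly 2 to 5 times S."""
    return factor * S

# Example on a toy band-limited spectrum (assumed data, not the PSF of Figure 5).
rng = np.random.default_rng(3)
n = 4096
spectrum = np.zeros(n, dtype=complex)
spectrum[200:600] = rng.standard_normal(400) + 1j * rng.standard_normal(400)
S = estimate_sparsity(spectrum)
print(f"estimated S = {S}, suggested m = {measurements_needed(S)} out of n = {n}")
```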

The optimization routine utilizes no prior knowledge about the positions or amplitudes of the sparse coefficients or about the signal to recover.

In practical situations, the measurements are often corrupted by a noise $z$ originating from the instrumentation, so that $y = \Phi x + z$. Therefore, the term guaranteeing data consistency in (4) has to be relaxed, so that $\|\Phi \Psi \tilde{\alpha} - y\|_{\ell_2} \le \varepsilon$. In addition, the signal might not have an exact sparse representation but an approximate one, where very small but not exactly nil coefficients are neglected. Again, this approximation introduces some noise.

In those cases, the CS method will still allow a reconstruction of the signal, provided that the CS matrix $A = \Phi \Psi$ respects the Restricted Isometry Property (RIP) [21, 27]:

$$(1 - \delta_S) \|\alpha\|_{\ell_2}^2 \le \|A \alpha\|_{\ell_2}^2 \le (1 + \delta_S) \|\alpha\|_{\ell_2}^2, \qquad (6)$$

where $S$ is an integer and $\delta_S$ is the isometry constant. $A$ obeys (6) when the smallest $\delta_S$ that verifies (6) for all $S$-sparse signals $\alpha$ is not too close to 1. In other words, if $A$ obeys the RIP, then the Euclidean lengths, or $\ell_2$ norms, of $S$-sparse signals are approximately preserved by $A$: this is the isometry. This property basically ensures that a sparse signal will not fall into the null space of $A$, where it would be impossible to recover.

If (6) is true, then the $\ell_1$ minimization will allow an accurate reconstruction of the signal. More precisely, the reconstruction of an approximately sparse signal will approach the corresponding compressed signal.

Interestingly, random matrices obey the RIP with overwhelming probability if $m \ge C \cdot S \cdot \log(n/S)$, with $C$ being a constant.
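A quick numerical check (illustrative only, not a proof) of this near-isometry on an i.i.d. Gaussian matrix; the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, S = 1024, 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)      # columns have unit norm in expectation

ratios = []
for _ in range(500):
    x = np.zeros(n)
    support = rng.choice(n, size=S, replace=False)
    x[support] = rng.standard_normal(S)           # random S-sparse vector
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

ratios = np.array(ratios)
# If the RIP holds, these ratios stay within [1 - delta_S, 1 + delta_S].
print(f"||Ax||^2 / ||x||^2 ranges over [{ratios.min():.2f}, {ratios.max():.2f}]")
```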

The $\ell_1$ minimization in the case of noisy data is as follows:

$$\hat{\alpha} = \arg\min_{\tilde{\alpha}} \|\tilde{\alpha}\|_{\ell_1} \quad \text{subject to} \quad \|\Phi \Psi \tilde{\alpha} - y\|_{\ell_2} \le \varepsilon, \qquad (7)$$

where the fidelity of the measurement constraint is relaxed to take into account the level of noise $\varepsilon$. This is again a convex minimization, computationally feasible.

6. Sampling Protocols in Ultrasound Imaging

The sampling protocols in US imaging are designed to fulfill both the requirements of CS and of the US instrumentation. The CS theory has been described in the previous section and, from this perspective, the sampling basis mainly has to be incoherent with the sparsifying basis. The US imaging devices have physical constraints that limit the sampling strategies one can adopt for CS.

The data acquisition in US imaging is performed in the image space (spatial domain), unlike MRI, for example [7, 24]. There are several possible sampling protocols adapted to US imaging and incoherent with the sparsifying basis. They all consist in taking samples of that image at more or less random locations. This is equivalent to taking samples at specific times on the RF signals or taking RF lines at specific locations.

In this paper, eight different sampling protocols are proposed: three 2D masks and five 3D masks. In two dimensions, the CS uniformly random mask, shown in Figure 6(a), will be studied on different types of RF US images. This sampling protocol is maximally incoherent with the Fourier transform (considered as the sparse decomposition basis in this paper) and therefore should give the best results.

However, switching rapidly from one position to the next in this kind of sampling pattern might be difficult from the instrumentation point of view. Consequently, two other sampling masks were studied, in which whole lines or columns of the image are not sampled at all (Figures 6(b) and 6(c), resp.). The sampled lines or columns are chosen in a uniformly random fashion and, on them, random points are sampled. The total number of points sampled was the same as for the uniformly random mask, to be able to compare the quality of the CS reconstructions. These line and column masks are slightly less incoherent than the uniformly random mask (due to a certain coherence in the direction that is not sampled at all), so the results are expected to be slightly worse. However, these two strategies could translate into a gain of time from the instrumentation point of view as some lines (resp., columns) of the image will not be acquired at all.
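A sketch of how such 2D masks could be generated (the function names, the 50% row-keeping fraction, and the sizes are ours, not the paper's):

```python
import numpy as np

def mask_uniform(shape, ratio, rng):
    """Keep `ratio` of the pixels at uniformly random positions."""
    return rng.random(shape) < ratio

def mask_skip_rows(shape, ratio, rng, row_keep=0.5):
    """Sample only a random subset of rows; on the kept rows, sample random
    points so that the overall sampling ratio still equals `ratio`."""
    rows, cols = shape
    kept = rng.random(rows) < row_keep
    mask = np.zeros(shape, dtype=bool)
    mask[kept] = rng.random((kept.sum(), cols)) < ratio / row_keep
    return mask

rng = np.random.default_rng(5)
shape, ratio = (1792, 128), 0.33
m_uniform = mask_uniform(shape, ratio, rng)
m_rows = mask_skip_rows(shape, ratio, rng)                # whole rows skipped
m_cols = mask_skip_rows(shape[::-1], ratio, rng).T        # whole columns skipped
print([round(m.mean(), 3) for m in (m_uniform, m_rows, m_cols)])  # all near 0.33
```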

For 3D US datasets, five sampling strategies are proposed. The first, similar to the 2D uniformly random mask and maximizing incoherence, is a uniformly random mask in the three directions (Figure 7(a)).

The other four are inspired by the 2D line and column masks, that is, they sample only certain RF lines. The first two consist in sampling a different set of RF lines or lateral profiles on each slice along the azimuthal direction (see Figures 7(b) and 7(c)), whereas with the last two, the set of unsampled RF lines or lateral profiles is the same in every slice (see Figures 7(d) and 7(e)). Consequently, with the latter masks, some whole axial-azimuthal (resp., lateral-azimuthal) planes of the volume are not sampled.

7. Reconstructions of Ultrasound Images and Volumes

In US imaging, the acquisition consists in taking samples of the image (or of the RF signals). This sampling protocol is similar to a basis of Diracs (sampling mask). To guarantee the success of the CS reconstruction, a basis incoherent with Diracs and in which the US images are sparse is needed. The basis chosen in this paper is the Fourier basis, as it is maximally incoherent with Diracs and because the US image k-space is sufficiently sparse. The function to minimize is

$$\hat{X} = \arg\min_{\tilde{X}} \|\Phi F^{-1} \tilde{X} - y\|_{\ell_2}^2 + \lambda \|\tilde{X}\|_{\ell_1}, \qquad (8)$$

where $\hat{X}$ is the k-space of the US RF image and $\Phi$ is the sampling scheme (here, one of the masks of Section 6 giving the random sample locations, in 2D or in 3D). $F^{-1}$ stands for the inverse Fourier transform, $y$ are the RF US image measurements, and $\lambda$ is a coefficient weighting the sparsity term.
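The sketch below illustrates objective (8) with a simple proximal-gradient (ISTA) loop; the paper itself uses a nonlinear conjugate gradient solver (Section 7), so this is only an assumed, minimal stand-in, with orthonormal FFTs so that a unit step size is safe.

```python
import numpy as np

def soft_threshold(X, t):
    """Complex soft-thresholding, the proximal operator of t * ||.||_1."""
    mag = np.abs(X)
    return X * np.maximum(mag - t, 0) / np.maximum(mag, 1e-12)

def cs_reconstruct_kspace(y, mask, lam=0.005, n_iter=300):
    """ISTA sketch of (8): recover the k-space X from spatial-domain samples y
    (nonzero only where `mask` is True)."""
    X = np.fft.fft2(y, norm="ortho")                      # crude initialization
    for _ in range(n_iter):
        resid = mask * np.real(np.fft.ifft2(X, norm="ortho")) - y
        grad = np.fft.fft2(mask * resid, norm="ortho")    # adjoint of the sampling
        X = soft_threshold(X - grad, lam)                 # gradient + proximal step
    return X

# Usage sketch: `rf` stands for a fully sampled RF image and `mask` for one of
# the Section 6 masks; in practice only `mask * rf` would be acquired.
rng = np.random.default_rng(6)
rf = rng.standard_normal((128, 128))                      # placeholder image
mask = rng.random(rf.shape) < 0.33
X_hat = cs_reconstruct_kspace(mask * rf, mask)
rf_hat = np.real(np.fft.ifft2(X_hat, norm="ortho"))       # reconstructed image
```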

Other bases of sparsity such as the wavelet transform of the US image k-space have been investigated in previous work and give similar results to those presented here [12].

The first term of (8) represents the fidelity to the measurements and the second term guarantees the signal sparsity in the Fourier basis. The balance between those two terms is given by $\lambda$. The choice of $\lambda$ is crucial to a good reconstruction as it corresponds to a threshold for the recovery of significant coefficients in the sparse basis.

In this paper, the optimal $\lambda$ chosen was the one giving the minimum reconstruction error. However, finding the optimal $\lambda$ by trial and error is obviously not possible during acquisition, where no comparison with the real signal can be performed.

One solution to reduce the fluctuations in the CS reconstructions due to a poor choice of $\lambda$ is to add an elastic-net regularization, that is, an additional $\ell_2$ minimization term on the sparse coefficients [28]. The resulting reconstruction errors around the optimal $\lambda$ were however quite large compared to the minimum errors found with the $\ell_1$ minimization only.

Another method, based on adaptive weights and called reweighted $\ell_1$ minimization, has been developed [29] to address that issue. It consists in performing (8) with an initial set of weights (all equal to one, e.g.). Then, (8) is reiterated using new weights, whose values are calculated from the results of the previous iteration. Thus, the optimization performed is given as follows:

$$\hat{X}^{(l)} = \arg\min_{\tilde{X}} \|\Phi F^{-1} \tilde{X} - y\|_{\ell_2}^2 + \lambda \sum_i w_i^{(l)} |\tilde{X}_i|. \qquad (9)$$

Namely, the weights at each iteration are

$$w_i^{(l+1)} = \frac{1}{|\hat{X}_i^{(l)}| + \epsilon}, \qquad (10)$$

where $l$ is the iteration number, $\hat{X}_i^{(l)}$ are the sparse coefficients estimated after $l$ iterations, and $\epsilon$ is a coefficient ensuring stability (that should be slightly smaller than the smallest nonnil sparse coefficient). At the first iteration, all weights are set to 1 ($w_i^{(1)} = 1$ for all $i$). In this setting, the choice of $\lambda$ is replaced by the choice of $\epsilon$. However, we observed that the results of the CS reconstructions were a lot less dependent on $\epsilon$ than on $\lambda$. A major drawback of the reweighted $\ell_1$ minimization method is its iterative process: at least two classic $\ell_1$ optimizations are performed (corresponding to a minimum of two iterations) in order to get an accurate result.
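A sketch of the outer reweighting loop of (9)-(10); `solve_weighted` is a hypothetical callback standing for any weighted $\ell_1$ solver (e.g., the ISTA sketch above with the scalar threshold replaced by `lam * weights`).

```python
import numpy as np

def reweighted_l1(solve_weighted, y, mask, eps=1e-3, n_outer=3):
    """Reweighted l1 scheme: weights start at 1, then each coefficient is
    reweighted by 1 / (|X_i| + eps), so that large coefficients are penalized
    less at the next iteration (equation (10))."""
    weights = np.ones(y.shape)                 # w_i = 1 at the first iteration
    X = None
    for _ in range(n_outer):
        X = solve_weighted(y, mask, weights)   # one pass of the weighted (9)
        weights = 1.0 / (np.abs(X) + eps)      # update of equation (10)
    return X
```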

In Section 8, the two techniques of CS reconstruction, using a fixed optimal $\lambda$ and using the reweighted $\ell_1$ minimization, are compared for the sampling patterns described in Section 6. In both cases, a nonlinear conjugate gradient algorithm was used for the numerical optimization. This algorithm is particularly suited to large-scale data and is used for similar convex optimization problems (see, e.g., [30]).

In addition, different undersampling ratios were tested.

8. Results on a 2D Simulation Image

The CS strategy described in (8) was used to reconstruct the k-space of an RF image simulated using the Field II US simulation program [31]. The parameters used with the example scatterer map of a kidney were as follows: transducer centre frequency = 5 MHz, sampling frequency = 20 MHz, number of scatterers = 50,000, number of RF lines generated = 256, size of the object = mm. The image was then cropped to a 1792 by 128 matrix.

The three different sampling schemes of Figure 6 were studied to compare the CS reconstructions, using a fixed optimal $\lambda$ set to 0.005 and the reweighted $\ell_1$ minimization. The results are shown in Figure 8 for the three sampling patterns (using 33% of the samples).

For the classic 2D uniformly random sampling pattern, one RF line of the reconstructed signal was plotted against the corresponding RF lines of the original signal and of the random measurements (Figure 9).

For the mask leaving whole RF lines unsampled, one RF line of the reconstructed signal is displayed in Figure 10, corresponding to a line that was not sampled at all (shown with the dash-dotted line in Figure 8(f)). RF lines that were partially sampled are very similar to the ones shown in Figure 9.

Figure 11 shows one lateral profile that was not sampled at all with the mask leaving whole lateral profiles unsampled (denoted by a dash-dotted line in Figure 8(g)). Again, the reconstruction quality of lateral profiles that were partially sampled was similar to Figure 9.

Table 1 shows the CS reconstruction errors for the three sampling masks at three undersampling ratios: 25%, 33%, and 50%.

For the reconstruction from the uniformly random mask (Figure 8(e)), the parts of the image that contained less signal, shown in black on the B-mode images, were less successfully reconstructed, but the diagnostic information was maintained. On the RF signals (Figures 9, 10, and 11), the amplitudes of the reconstructed signals were sometimes reduced. However, the differences in amplitude were constant and the timing information of the signal was maintained. Consequently, the CS-reconstructed images would give very close visualizations in B-mode and could also be used for tissue motion estimation or tissue characterization. The reconstruction errors increased with a smaller undersampling ratio, as one would expect. In addition, the errors were always smallest when the uniformly random mask was used, due to its greater incoherence.

With the mask leaving whole RF lines unsampled, the reconstruction of partially sampled RF lines was again very close to the original signal. For the RF lines that were not sampled at all (Figure 10), a good reconstruction was obtained as well, showing the potential of CS for US imaging with reduced pulse emissions. The visualizations in B-mode were again very satisfactory in terms of diagnostic power.

When the mask leaving whole lateral profiles unsampled was used, partially sampled and unsampled lateral lines were well reconstructed (Figure 11). The overall CS reconstruction displayed in B-mode did not exhibit any line artifact.

Results obtained from the reweighted $\ell_1$ minimization (i.e., with adaptive weights) were similar to those obtained with the optimal $\lambda$ found experimentally.

9. Results on In Vivo 2D Images

In this section, US CS is performed using high- and low-frequency ultrasound images. These images were sampled a posteriori using CS.

First, results of a CS reconstruction using method (8) on in vivo images of the skin are shown. The central frequency was 20 MHz and the sampling frequency 100 MHz (ATYS Medical). Results from the three different sampling patterns of Figure 6 are shown in Figure 12.

The same CS method (8) was used to reconstruct a US image of the right lobe of a normal human thyroid. The imaging was performed using a clinical scanner modified for research, with a 7.5 MHz linear probe (Sonoline Elegra, Siemens Medical Systems, Issaquah, WA, USA). The sampling frequency was adjusted to 40 MHz. For the image presented here, 33% of the samples were measured a posteriori. The three different sampling patterns of Figure 6 were used. The reconstructed US images of the thyroid are shown in Figure 13.

On the in vivo US images, similarly to the simulation images, the CS reconstruction was very good for all sampling patterns. The tissue structures were restored and the diagnostic information was maintained. Note that the results do not depend on the US frequency used. Tables 2 and 3 show the reconstruction errors for both in vivo images and for the different sampling schemes proposed.

10. Results on an In Vivo 3D Volume

In vivo US volumes of mouse embryos, acquired on anaesthetised mice, were reconstructed using the sampling masks described in Section 6 (Figure 7).

A single-element high-resolution scanner (SHERPA, developed and commercialized by Atys Medical, Lyon, France), from which the RF data were available, was used (central frequency 22 MHz, frame rate 10 images per second, scanning width 16 mm, sampling frequency 80 Msamples/second, emission frequency 20 MHz, exploration depth 7.8 mm). The volume was then cropped to a 128 × 128 × 128 volume for illustration purposes.

The CS reconstruction of the volume was performed using (8).

Figure 14 shows the CS reconstructions obtained from the five 3D sampling masks for a 50% undersampling factor. Figure 14(a) represents the original volume and Figures 14(g), 14(h), 14(i), 14(j), and 14(k) the CS reconstructions obtained from each mask, whereas Figures 14(b), 14(c), 14(d), 14(e), and 14(f) are the measurements obtained from the corresponding masks.

The first observation to make is that, for all the sampling masks, the CS method (8) provided good reconstructions of the whole volume from only 50% of the samples. The plane that was best reconstructed in each case was always the axial-lateral plane, where the 2D masks were applied (and then repeated along the azimuthal direction). However, this setting could easily be changed for other applications where another plane is more crucial.

When the coherence increased, that is, from the uniformly random 3D mask to the masks skipping a different set of RF lines in each slice and then to the masks skipping the same RF lines in every slice, the reconstructions were degraded, as expected. This is particularly visible on the axial-azimuthal planes of Figures 14(g), 14(h), and 14(i). However, considering that absolutely no samples were kept for the axial-azimuthal plane visible in Figure 14(i), the result is still quite impressive. This setting could be used in a situation where the speed of imaging prevails over the quality of the reconstruction. In addition, despite being less sharp, the image still exhibits the tissue structure and might be sufficient in many applications.

For purposes of illustration, Table 4 shows the normalised root mean squared errors (NRMSE) of reconstruction between the original and reconstructed RF US volumes for two of the sampling patterns and different undersampling ratios (25%, 33%, and 50%). In addition, the NRMSE of the CS reconstructions was compared with results from an interpolation reconstruction method based on a 2D cubic spline interpolation. The decimation used for the interpolation was a regular lateral undersampling corresponding to a sampling ratio of 33% (no axial decimation). As expected, CS outperforms interpolated regular subsampling.
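For reference, the error metric and a cubic-interpolation baseline of this kind can be sketched as follows (placeholder data and a 1D lateral interpolation per slice; not the exact baseline of Table 4).

```python
import numpy as np
from scipy.interpolate import interp1d

def nrmse(ref, est):
    """Normalised root mean squared error between two RF images/volumes."""
    return np.linalg.norm(ref - est) / np.linalg.norm(ref)

# Regular lateral decimation to ~33% of the RF lines, then cubic interpolation
# of the missing lines along the lateral axis.  `vol` is a placeholder volume.
rng = np.random.default_rng(7)
vol = rng.standard_normal((128, 128, 128))
lateral = np.arange(vol.shape[1])
kept = lateral[::3]
interp = interp1d(kept, vol[:, kept, :], kind="cubic", axis=1,
                  bounds_error=False, fill_value="extrapolate")
vol_interp = interp(lateral)
print(f"NRMSE of cubic interpolation: {nrmse(vol, vol_interp):.3f}")
```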

11. Some Hints for Practical Implementation

As shown previously, one of the key points for compressive sampling success is the incoherence of the acquired samples. For this, random sampling schemes are necessary, in both axial and lateral directions.

Regarding the axial direction, one way to incoherently sample one RF line is to fix a constant sampling frequency $f_s$, to consider a vector of random integers $(k_1, \ldots, k_m)$, and to acquire only the samples situated at times $k_i / f_s$. This can be achieved by programming specific acquisition devices such as FPGAs or CPLDs. If a mask that repeats the same pattern on every line is considered, the same random vector is simply reused for each RF line.
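A sketch of that index generation (the function name and values are ours):

```python
import numpy as np

def axial_sample_indices(n_full, ratio, fs, rng):
    """Pick random axial sample indices for one RF line: a fixed sampling
    clock fs is kept, and only a random subset of the clock ticks is acquired.
    Returns the indices k and the corresponding acquisition times k / fs."""
    m = int(ratio * n_full)
    k = np.sort(rng.choice(n_full, size=m, replace=False))
    return k, k / fs

rng = np.random.default_rng(8)
k, t = axial_sample_indices(n_full=1792, ratio=0.33, fs=20e6, rng=rng)
# Reusing the same `k` on every RF line reproduces the masks in which the
# axial sampling pattern is identical from line to line.
```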

Alternatively, for all the masks proposed in the paper, the whole RF line can be acquired respecting Shannon’s theorem and random samples subsequently discarded, in order to speed up the transfer to the scanner memory.

In the lateral direction, for RF lines selection, the acquisition scheme depends on the type of scanner. For multiple element transducers, each element can be randomly set to be active or not. For single-element transducers, each RF line is acquired separately. Thus, RF lines at random lateral positions can be omitted.

This methodology could be suitable to all the sampling strategies presented in this paper.

Of course, one could also implement direct random acquisitions through a random-triggering acquisition board, but this possibility requires a more sophisticated electronic design.

12. Summary

Table 5 summarizes the different results presented so far and qualitatively compares the speed of the acquisition, the quality of the reconstruction in terms of errors, and the ease of a practical implementation for the different masks and undersampling ratios tested in this paper.

13. Discussion and Conclusion

This paper investigates the feasibility of CS in ultrasound imaging. This presentation opens many questions.

13.1. Sparsity of Ultrasound Signals

Sparsity is a key point in the success of CS. We relied here on the fact that the Fourier transform of US signals is sparse. This may of course be questioned.

US signals exhibit bandpass characteristics and thus are sparse in the frequency domain. Consequently, a highly oversampled version of the ultrasound signal could be reconstructed from fewer regular samples. However, it is well known that sampling at a high rate is neither easy nor cost-effective, particularly in high-frequency US applications. The interest of CS lies in its ability to allow undersampling below the Nyquist limit. Indeed, when the signal is sampled at the Nyquist rate (which is cost-effective), CS applied to the demodulated I/Q signal, which remains sparse, allows a correct reconstruction, whereas regular sampling theory permits no reconstruction once the signal is undersampled below the Nyquist rate.

In addition, CS allows skipping RF lines in the lateral and azimuthal directions.

13.2. Real-Time Nature

In this paper, we showed the powerful potential of CS to reduce data volume and speed up acquisitions, at the price of a reconstruction using the $\ell_1$ norm. However, using dedicated circuits (of GPU type) for the CS reconstruction could greatly improve processing times and overall increase the imaging rate, keeping the real-time nature of US imaging.

In addition, various sampling protocols suited to US imaging were proposed here, where the RF signals can be sampled at random times to provide measurements of the final image k-space. Through the $\ell_1$ minimization, the original k-space can be reconstructed and the RF US image subsequently recovered with minimal loss of information.

The method presented here differs from inpainting methods as the reconstruction is performed in another domain than the image itself.

Future work will include the identification of optimal conditions as well as an investigation of several optimization routines and better sparsity bases. Additional knowledge about the US images will be inserted in the reconstruction process (statistics of the signal, attenuation). The aim is to reach the fastest and most reliable reconstruction from as few samples as possible. Various applications will also be considered (multidimensional Doppler and tissue characterization).

Acknowledgments

This work was supported by the French National Agency of Research (ANR) under the SURFOETUS Grant. The authors would like to thank Jean-Marc Gregoire and Hugues Herault for ultrasound imaging software development.