ISRN Biomedical Engineering
Volume 2013 (2013), Article ID 832527, 6 pages
http://dx.doi.org/10.1155/2013/832527
Research Article

Lossless Medical Image Compression by Integer Wavelet and Predictive Coding

T. G. Shirsat and V. K. Bairagi

Electronics Engineering Department, SAE Kondhwa, Pune, India

Received 30 March 2013; Accepted 8 May 2013

Academic Editors: F. Boccafoschi and A. El-Baz

Copyright © 2013 T. G. Shirsat and V. K. Bairagi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The future of healthcare delivery systems and telemedical applications will undergo a radical change due to developments in wearable technologies, medical sensors, mobile computing, and communication techniques. Applications that collect, sort, and transfer medical data from distant locations for remote medical collaboration and diagnosis must take many parameters into account. E-health was born with the integration of networks and telecommunications. In recent years, healthcare systems have relied on images acquired in two-dimensional domains, in the case of still images, or in three-dimensional domains for volumetric images and video sequences. Images are acquired by many modalities, including X-ray, magnetic resonance imaging, ultrasound, positron emission tomography, and computed axial tomography (Sapkal and Bairagi, 2011). Medical information is either in multidimensional or multiresolution form, which creates an enormous amount of data. Retrieval, efficient storage, management, and transmission of these voluminous data are highly complex. One solution to this problem is to compress the medical data without any loss (i.e., losslessly), so that diagnostic capabilities are not compromised. The proposed technique combines integer transforms and predictive coding to enhance the performance of lossless compression, and its performance is evaluated using compression quality measures.

1. Introduction

Applications involve image transmission within and among healthcare organizations using public networks. In addition to compressing the data, this requires handling security issues when dealing with sensitive medical information systems. Requirements for compressing medical data include a high compression ratio and the ability to decode the compressed data at various resolutions.

In order to provide a reliable and efficient means for storing and managing medical data, computer-based archiving systems such as Picture Archiving and Communication Systems (PACS) and the Digital Imaging and Communications in Medicine (DICOM) standard were developed. With the explosion in the number of images acquired for diagnostic purposes, compression has become invaluable in developing standards for maintaining and protecting medical images and health records. Compression offers a means to reduce the cost of storage and to increase the speed of transmission; medical images have therefore attracted a lot of attention towards compression, as they are very large in size and require a lot of storage space. Image compression can be lossless or lossy. In lossless compression, the recovered data is identical to the original, whereas in the case of lossy compression the recovered data is a close replica of the original with minimal loss of data [1–3].

The most common lossless compression algorithms are run-length encoding, LZW, DEFLATE, JPEG-LS (based on LOCO-I), and the lossless modes of JPEG and JPEG 2000. Lempel-Ziv-Welch (LZW) is a dictionary-based lossless data compression algorithm [4, 5].
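As an illustration of one of these dictionary-based methods, a minimal LZW encoder can be sketched as follows (a simplification: it assumes single-byte input symbols and an unbounded dictionary, whereas practical implementations cap the dictionary size):

```python
def lzw_encode(data):
    """Greedy LZW: grow a dictionary of seen substrings and
    emit one code per longest known prefix."""
    table = {chr(i): i for i in range(256)}   # initial single-byte codes
    current, codes = "", []
    for ch in data:
        extended = current + ch
        if extended in table:
            current = extended                # keep growing the match
        else:
            codes.append(table[current])      # emit longest known prefix
            table[extended] = len(table)      # learn the new string
            current = ch
    if current:
        codes.append(table[current])
    return codes

# Repeated patterns collapse into single dictionary codes.
codes = lzw_encode("ABABABA")
```

Because the decoder can rebuild the same dictionary from the code stream alone, the scheme is exactly invertible, which is what makes it lossless.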

2. System Overview

2.1. IWT Followed by Predictive Coding

Figure 1 shows that the integer wavelet transform method is applied to the image, dividing it into the four subbands LL, LH, HL, and HH. Predictive coding is then applied to the four bands separately, giving four predicted outputs.

Figure 1: IWT followed by predictive coding.

The reconstruction process applies predictive decoding followed by the inverse integer transform to the four decoded bands. To verify perfect reconstruction, the original and reconstructed images are subtracted; the output is a dark difference image whose maximum value is zero.

In the first technique, IWT is performed first (Figure 1), followed by the predictive coding technique on the transformed image, while in the second technique the predictive coding is applied first, followed by the integer wavelet transform [6]. These methods use the Haar filter in the lifting scheme; the Type I scheme is defined by a prediction filter coefficient p and an update filter coefficient u (for the standard Haar lifting, p = 1 and u = 1/2).

The Type II scheme uses the corresponding reduction filter coefficients. The implementation can also be done using other filters in the lifting scheme [7].

3. Overview of Approach

3.1. Implementation of IWT (Lifting Scheme)

In the integer wavelet transform, there is a mapping from integers to integers. The simplest lifting scheme is the lazy wavelet transform [8, 9].

3.2. Forward Lifting Scheme

In this technique, the input signal x is first split into even- and odd-indexed samples (Figure 2): x_even(n) = x(2n) and x_odd(n) = x(2n + 1). Neighbouring samples are correlated, so it is possible to predict the odd samples from the even samples, which in the case of the Haar transform are the even values themselves.

Figure 2: Forward lifting scheme [10].

The difference between the actual odd samples and the prediction becomes the wavelet (detail) coefficients, d(n) = x_odd(n) − x_even(n); this predict step is the core of the lifting scheme. The update step follows the prediction step, where the even values are updated from the input even samples and the detail coefficients: s(n) = x_even(n) + floor(d(n)/2). These s(n) become the scaling coefficients, which are passed on to the next stage of the transform. Finally, the odd elements are replaced by the differences and the even elements by the averages. The lifting scheme produces integer coefficients, and so it is exactly reversible. Computations in the lifting scheme are done in place, which saves a lot of memory and computation time. The total number of coefficients before and after the transform remains the same.
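The split, predict, and update steps described above can be sketched as a minimal integer Haar lifting step (floor division keeps the mapping integer-to-integer, matching the S-transform):

```python
def haar_lifting_forward(signal):
    """One level of the forward Haar lifting scheme:
    split -> predict -> update, mapping integers to integers."""
    even = signal[0::2]                       # split: even-indexed samples
    odd = signal[1::2]                        # split: odd-indexed samples
    # Predict: each even sample predicts its odd neighbour;
    # the difference is the detail (wavelet) coefficient.
    detail = [o - e for o, e in zip(odd, even)]
    # Update: even samples absorb half the detail, becoming the
    # approximation (scaling) coefficients; // is floor division.
    approx = [e + d // 2 for e, d in zip(even, detail)]
    return approx, detail

approx, detail = haar_lifting_forward([5, 7, 3, 1])
```

Note that the coefficient count is preserved: a length-4 input yields two approximation and two detail coefficients.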

3.3. Reverse Lifting Scheme

Figure 3 shows that the inverse transform recovers the original signal by exactly reversing the operations of the forward transform, with a merge operation in place of the split operation. The number of input samples must be a power of two; the samples are reduced by half in each succeeding step until the last step produces a single sample.

Figure 3: Reverse lifting scheme [10].

The reverse lifting scheme (Figure 3) undoes the update and predict steps in reverse order: x_even(n) = s(n) − floor(d(n)/2), then x_odd(n) = x_even(n) + d(n), after which the even and odd samples are merged. The Haar wavelet transform uses predict and update operations of order one. Using predict and update operations of higher order, other wavelet transforms can be built with the lifting scheme [10–12].
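A minimal sketch of the reverse lifting scheme, paired with the forward step from above so that the exact round trip can be checked:

```python
def haar_lifting_forward(signal):
    """Forward Haar lifting: split, predict, update."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]
    approx = [e + d // 2 for e, d in zip(even, detail)]
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Reverse lifting: undo update, undo predict, then merge."""
    even = [s - d // 2 for s, d in zip(approx, detail)]   # undo update
    odd = [e + d for e, d in zip(even, detail)]           # undo predict
    merged = []
    for e, o in zip(even, odd):                           # merge interleaves
        merged.extend([e, o])
    return merged

signal = [5, 7, 3, 1]
approx, detail = haar_lifting_forward(signal)
restored = haar_lifting_inverse(approx, detail)           # == signal
```

Because every step is an integer operation with an exact inverse, the reconstruction is bit-exact, which is the property that makes the scheme usable for lossless compression.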

The basic steps involved in the decomposition are as follows: first, the image/signal is sent through a low-pass filter and a high-pass filter simultaneously (predict and update in the case of lifting) and then downsampled by a factor of 2.

This process is repeated, and the final four outputs are combined to form the transformed image. The image is split into four subbands: LL, the low-resolution version of the image; LH, the horizontal fluctuations; HL, the vertical fluctuations; and HH, the diagonal fluctuations (Figure 4) [10].
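A one-level 2D decomposition of this kind can be sketched by applying the 1D integer Haar lifting step to every row and then to every column of each half (a minimal version; subband naming follows the text above):

```python
def lift1d(x):
    """1D integer Haar lifting: split, predict, update."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]
    approx = [e + d // 2 for e, d in zip(even, detail)]
    return approx, detail

def lift2d(image):
    """One decomposition level: rows first, then columns,
    yielding the LL, LH, HL, and HH subbands."""
    low_rows, high_rows = [], []
    for row in image:                       # row transform
        a, d = lift1d(row)
        low_rows.append(a)
        high_rows.append(d)

    def columns(mat):                       # transpose helper
        return [list(col) for col in zip(*mat)]

    def col_lift(mat):                      # column transform on one half
        approx_cols, detail_cols = [], []
        for col in columns(mat):
            a, d = lift1d(col)
            approx_cols.append(a)
            detail_cols.append(d)
        return columns(approx_cols), columns(detail_cols)

    LL, LH = col_lift(low_rows)
    HL, HH = col_lift(high_rows)
    return LL, LH, HL, HH

LL, LH, HL, HH = lift2d([[1, 2], [3, 4]])
```

Feeding the LL subband back into lift2d gives the next decomposition level, as described in the text.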

Figure 4: Decomposition.

The same procedure can be repeated on the output subbands to obtain further levels of image decomposition, either through the lifting scheme or by implementing filter bank techniques.

3.4. Predictive Coding-Based Image Compression

There is no single condition for invertible prediction filters, but broad statements can be made about sufficient conditions for lossless filtering. The condition given in [8] says: "The digital implementation of a filter is lossless if the output is the result of a digital one-to-one operation on the current sample and the recovery processor is able to construct the inverse operator." It is important to note that the operations must be one-to-one not only on paper but also on the computer. Integer addition of the current sample and a constant is one-to-one, under some amplitude constraints, on all computer architectures [7].

Integer addition can be expressed as y(n) = (x(n) + a(n)) mod M, where mod M is the overflow operator, x(n) is the current sample, a(n) is an integer value which defines the transformation at time n, and y(n) is the current filter output.

The reverse operation is given by x(n) = (y(n) − a(n)) mod M. This process always leads to an increase in the number of bits required. To overcome this, a rounding operation is performed on the predictor output, making the predictor lossless. The lossless predictive encoder and decoder are shown in Figures 5 and 6.
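The one-to-one property of overflow (modular) addition can be checked directly; a minimal sketch, assuming 8-bit samples so that overflow wraps modulo M = 256:

```python
M = 256  # assumed 8-bit sample range; overflow wraps modulo M

def overflow_add(x, a):
    """Forward: add the transform constant, letting overflow wrap."""
    return (x + a) % M

def overflow_sub(y, a):
    """Reverse: subtraction modulo M exactly undoes the addition."""
    return (y - a) % M

# Every sample round-trips exactly, even when the addition overflows,
# so the operation is one-to-one on the computer, not just on paper.
round_trips = all(
    overflow_sub(overflow_add(x, 200), 200) == x for x in range(M)
)
```

This is what the quoted condition from [8] demands: the recovery processor (here, overflow_sub) constructs the exact inverse of the forward operator.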

Figure 5: Predictive encoder [5].
Figure 6: Predictive decoder [5].

Generally, a second-order predictor is used, which is a finite impulse response (FIR) filter. The simplest predictor is the previous value; in this experiment, the predicted value is a weighted sum of the previous two values, with alpha and beta as the predictor coefficients: p(n) = round(alpha * x(n − 1) + beta * x(n − 2)), where p(n) is the rounded output of the predictor, x(n − 1) and x(n − 2) are the previous values, and alpha and beta are the coefficients of the second-order predictor, ranging from 0 to 1. The rounded predictor output is subtracted from the original input, giving the prediction error e(n) = x(n) − p(n). This error is the input to the decoder part of the predictive coding technique. In the decoding part, the error is added to the prediction to recover the original data: x(n) = e(n) + p(n).
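The encoder/decoder pair can be sketched as follows (a minimal version: alpha = beta = 0.5 as suggested later in the text, and the first two samples are passed through unchanged rather than zero-padded, which is one implementation choice):

```python
ALPHA = BETA = 0.5  # second-order predictor coefficients

def predict(x1, x2):
    """Rounded second-order prediction from the two previous samples."""
    return round(ALPHA * x1 + BETA * x2)

def encode(samples):
    """Emit prediction errors; the first two samples pass through."""
    errors = list(samples[:2])
    for n in range(2, len(samples)):
        errors.append(samples[n] - predict(samples[n - 1], samples[n - 2]))
    return errors

def decode(errors):
    """Rebuild samples by adding each error to the same prediction."""
    samples = list(errors[:2])
    for n in range(2, len(errors)):
        samples.append(errors[n] + predict(samples[n - 1], samples[n - 2]))
    return samples

data = [100, 102, 104, 103, 101, 99]
restored = decode(encode(data))   # bit-exact round trip
```

Because encoder and decoder apply the identical rounded prediction, the rounding error cancels and the scheme stays lossless; the errors themselves have a much smaller dynamic range than the raw samples, which is what an entropy coder exploits.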

Algorithm
(1) Read the original image.
(2) Get the size of the image.
(3) Form a new matrix by padding two columns and rows of zeros before the original matrix.
(4) Multiply the two preceding elements in each row by alpha and beta, respectively, and round off the sum to obtain the predicted value.
(5) Combine the predicted values with the original values (subtracting at the encoder to form the prediction errors, adding back at the decoder).
(6) Repeat steps (4) and (5) for all elements.
The Haar wavelet transformation is then used in the lifting scheme to perform averaging and differencing, arriving at a new matrix representing the same image in a more concise manner: averaging gives the pairwise means (a(2i − 1) + a(2i))/2, and differencing gives the pairwise half-differences (a(2i − 1) − a(2i))/2.

2D array representing the 8 × 8 image [13]:

Transformed array after operation on each row of 8 × 8 image [13]:

Final transformed matrix after one step [13]:
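The arrays from [13] referenced above are not reproduced here. As an illustration on hypothetical sample data, one averaging/differencing step on a single row of an 8 × 8 image can be sketched as:

```python
def haar_row_step(row):
    """One Haar averaging/differencing step on a row:
    pairwise means first, then pairwise half-differences."""
    averages = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    differences = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return averages + differences

# Hypothetical 8-value row standing in for one row of the 8x8 image.
row = [64, 48, 16, 32, 56, 56, 48, 24]
transformed = haar_row_step(row)
```

The averages form a half-resolution version of the row, and the differences cluster near zero on smooth data, which is the concentration of energy the compression step exploits.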

3.5. Predictive Coding Techniques

Various predictive coding techniques are analyzed for their effectiveness in Table 1.

Table 1: Comparison of available techniques.

3.6. Comparative Analysis

The predictor is the first and most important step, as it removes a large amount of spatial redundancy. The most representative predictors are the median edge detection (MED) predictor used in the JPEG-LS standard and the gradient adjusted predictor (GAP) used in CALIC. The threshold-controlled gradient edge detection (GED) predictor combines the simplicity of MED with the efficiency of GAP. Analysis shows that GED gives entropies comparable to the much more complicated GAP. Hence, the suggested method provides an efficient approach by reducing complexity for X-ray, CT, and MRI images.
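For reference, the MED predictor mentioned above has a well-known closed form (from JPEG-LS); a minimal sketch, where a, b, and c denote the left, above, and upper-left neighbours of the current pixel:

```python
def med_predict(a, b, c):
    """Median edge detection (MED) predictor from JPEG-LS.
    a = left neighbour, b = above neighbour, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)       # edge detected: pick the smaller neighbour
    if c <= min(a, b):
        return max(a, b)       # edge detected: pick the larger neighbour
    return a + b - c           # smooth region: planar prediction
```

GAP and GED refine this idea with gradient estimates (GED gating between cases via a threshold), trading a little extra computation for sharper edge handling.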

Lossless image compression has to preserve the exact value of each pixel, regardless of whether noise is present. The performance of a predictor can be expressed through the degree of compression ultimately achieved; the predictor only eliminates redundancy and does not, by itself, perform compression. Among these, the linear predictor is an easy and efficient method for medical images.

4. Experimental Result

For more details, see Tables 2 and 3.

Table 2: Compression ratio of Type I scheme.
Table 3: Compression ratio of Type II scheme.

5. Discussion

With previously available compression techniques, an exact inverse should make it possible to recover the original image without any loss. However, due to the finite number representation of computers and the rounding of floating point numbers, an exactly lossless version of the original image after reconstruction is not possible. IWT is the solution: using the lifting scheme, perfect reconstruction is possible. The lifting structure also makes the hardware design of high-speed 2-D DWT with multiplierless wavelet filters easier [16].

These methods for lossless medical image compression, which perform the integer wavelet transform using the lifting technique followed by lossless predictive coding, give efficient compression; different combinations of transformed and predicted images were inspected. IWT gives the smallest file size with the Haar wavelet. The coefficients alpha and beta of the second-order predictor range from 0 to 1 and give the best results at a value of 0.5.

6. Future Scope

The lifting scheme used is only a two-level lifting scheme. In order to improve the entropy of the transformed image, a multilevel lifting scheme is to be implemented. The performance of the predictive coding can be increased by using higher order predictors with two-dimensional predictions. Another possibility for improving the performance would be to use model-based and adaptive approaches.

The performance for lossless compression techniques can also be improved by performing different combinations of various transforms and coding techniques involving IWT and predictive coding, for example, IWT followed by predictive or predictive followed by IWT, and by realizing the most optimal combination that gives the least entropy.

7. Conclusion

Among the various wavelet transform methods, it is suggested that the predictive method gives more compression than plain wavelet-based compression.

In the lossless predictive coding technique, we encode the prediction error rather than the original sample or image; hence, very little data can be lost during prediction, and any loss stays within an acceptable limit of image quality. Entropy and scaled entropy are used to evaluate the performance of the system on compressed images.

References

  1. M. Das and N. K. Loh, "New studies on adaptive predictive coding of images using multiplicative autoregressive models," IEEE Transactions on Image Processing, vol. 1, no. 1, pp. 106–111, 1992.
  2. X. Wu and N. Memon, "Context-based, adaptive, lossless image coding," IEEE Transactions on Communications, vol. 45, no. 4, pp. 437–444, 1997.
  3. V. K. Bairagi and A. M. Sapkal, "ROI based DICOM image compression for telemedicine," Sadhana, vol. 38, no. 1, pp. 123–131, 2013.
  4. V. K. Bairagi and A. M. Sapkal, "Automated region based hybrid compression for DICOM MRI images for telemedicine applications," IET Science, Measurement & Technology, vol. 6, no. 4, pp. 247–253, 2012.
  5. N. Memon and X. Wu, "Recent developments in context-based predictive techniques for lossless image compression," Computer Journal, vol. 40, no. 2-3, pp. 127–135, 1997.
  6. N. V. Boulgouris, D. Tzovaras, and M. G. Strintzis, "Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding," IEEE Transactions on Image Processing, vol. 10, no. 1, pp. 1–14, 2001.
  7. D. Neela, Lossless Medical Image Compression Using Integer Transforms and Predictive Coding Technique, Department of Electrical and Computer Engineering, Jawaharlal Nehru Technological University, India, 2010.
  8. I. Daubechies and W. Sweldens, "Factoring wavelet transforms into lifting steps," Journal of Fourier Analysis and Applications, vol. 4, no. 3, pp. 245–267, 1998.
  9. J. W. McCoy, N. Magotra, and S. Stearns, "Lossless predictive coding," in Proceedings of the 37th Midwest Symposium on Circuits and Systems, pp. 927–930, August 1994.
  10. A. Vasuki and P. T. Vasanta, "Image compression using lifting and vector quantization," GVIP Journal, vol. 7, no. 1, 2007.
  11. A. Fukunaga and A. Stechert, "Evolving nonlinear predictive models for lossless image compression with genetic programming," Jet Propulsion Laboratory, California Institute of Technology, 2000.
  12. "A two-stage algorithm for multiple description predictive coding," in Canadian Conference on Electrical and Computer Engineering (CCECE '08), Simon Fraser University, Burnaby, BC, pp. 6854–6887, May 2008.
  13. K. H. Talukder and K. Harada, "Haar wavelet based approach for image compression and quality assessment of compressed image," International Journal of Applied Mathematics, vol. 36, no. 1, 2010.
  14. S. Arivazhagan, W. Sylvia Lilly Jebarani, and G. Kumaran, "Performance comparison of discrete wavelet transform and dual tree discrete wavelet transform for automatic airborne target detection," in Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA '07), pp. 495–500, December 2007.
  15. ISO/IEC/JTC1/SC29/WG1 N390R, JPEG 2000 Image Coding System, March 1997, http://www.jpeg.org/public/wg1n505.pdf.
  16. H. K. Bhaldar and V. K. Bairagi, "Hardware design of 2-D high speed DWT by using multiplierless 5/3 wavelet filters," International Journal of Computer Applications, vol. 59, no. 17, pp. 42–46, 2012.