Research Article  Open Access
Wavelet-Based Image Registration and Segmentation Framework for the Quantitative Evaluation of Hydrocephalus
Abstract
Hydrocephalus, characterized by increased fluid in the cerebral ventricles, is traditionally evaluated by visual assessment of serial CT scans. The complex shape of the ventricular system makes accurate visual comparison of CT scans difficult. The current research developed a quantitative method to measure the change in cerebral ventricular volume over time. The key elements of the developed framework are: adaptive image registration based on mutual information and wavelet multiresolution analysis; adaptive segmentation with novel feature extraction based on the Dual-Tree Complex Wavelet Transform; and volume calculation. The framework, when tested on physical phantoms, had an error of 2.3%. When validated on clinical cases, the results showed that cases deemed to be normal/stable had a calculated volume change of less than 5%, while those with progressive/treated hydrocephalus had a calculated change greater than 20%. These findings indicate that the framework is reasonable and has potential for development as a tool in the evaluation of hydrocephalus.
1. Introduction
Hydrocephalus results from excessive accumulation of cerebrospinal fluid, leading to enlargement of the cerebral ventricles. The condition is commonly evaluated by visual comparison of serial CT scans of the head. However, the complex shape of the ventricular system and the differences in the angulation of slices combined with slight differences in positioning of the head from one CT study to the next can make direct visual comparisons of serial imaging studies difficult and of limited accuracy. This makes the quantitative assessment of the volume change desirable.
Earlier methods for quantitatively assessing ventricular volume have included the diagonal ventricular dimension [1], the frontal and occipital horn ratio [2], the ventricular-brain ratio [3], the Evans ratio [4], Huckman's measurement [5, 6], and the minimal lateral ventricular width [7], among others. Previous attempts to quantitatively assess ventricular volume have focused on linear, ratio, or surface-area estimates of ventricular size and, as such, have been limited by the fact that they try to estimate volume (a 3-dimensional construct) using 1- or 2-dimensional measurements [8, 9]. In many cases the estimates are based solely on measurements taken from a single axial slice and may leave potential volumetric changes in the 3rd or 4th ventricles unaccounted for [8, 9]. Previous techniques that have tried to assess volumetric changes 3-dimensionally have been time consuming, limiting their clinical applicability [8, 9]. Furthermore, measurements appropriate for adults are often not appropriate for pediatric patients and vice versa [1, 2, 10].
This paper describes a novel framework to measure the change in the volume of the ventricles using CT scans taken at two separate times. The method involves registering the two CT image sequences to be compared, automatically segmenting the ventricles in all the image slices, and calculating a volume change from the results. The framework was validated and verified on both physical phantom models and clinical data.
Image registration is used to align the second set of CT images with the first, thus making the volume calculations consistent, reducing the error caused by the partial volume effect, and improving the accuracy of the calculated change in volume. The differences in angulation of the slices, combined with the slight differences in positioning of the head from one CT to the next, are referred to in this paper as the displacement of the human head. A number of image registration techniques have been described previously, including landmark techniques [11], point-based and thin-plate-spline-based methods [12], and mutual-information-based methods [13–15]. The current research required a rigid registration technique to compensate for the rigid displacement of the head between the CT scans, while maintaining the differences in ventricular volume and shape. Both in-plane and out-of-plane displacements needed to be considered. The developed framework includes an adaptive rigid registration method based on mutual information combined with image gradient information and wavelet multiresolution analysis.
Image segmentation is the process of separating an image into mutually exclusive homogeneous regions of interest and, in this research, is used to isolate the ventricles in preparation for the volume calculation. In this paper, the focus is on a variation of the watershed automated segmentation method. The watershed method suffers from an oversegmentation problem, and the methods proposed in the literature to overcome it have had varying success. Soille [16] introduced the H-minima transform, which modifies the gradient surface by suppressing shallow minima. Shafarenko et al. [17] used a modified gradient map as the input to the watershed algorithm for randomly textured color images. O'Callaghan and Bull [18] proposed a two-stage method capable of processing both textured and non-textured objects in a meaningful fashion. In the current research, the Dual-Tree Complex Wavelet Transform (DTCWT) was used to detect the texture boundaries, and a novel feature extraction method was used to optimize the segmentation results.
Once the images are registered and the ventricles are segmented, the framework calculates the change in volume. To validate the method developed in this study, physical phantoms of the brain and cerebral ventricles were constructed, using agar and water to simulate brain tissue and cerebrospinal fluid, respectively. The volume of the phantom ventricles was measured directly and was then calculated using the method described in this paper. Clinical data with known outcomes were also used to validate the results.
In Section 2, Method, the registration method is described first, followed by the adaptive segmentation and feature extraction method and finally the volume calculation is discussed. The complete algorithm framework is shown in Figure 1. Section 3, Data Sets, describes the physical phantoms and the clinical data used to test the framework. Section 4, Results, summarizes and discusses the results. Conclusions are drawn in Section 5.
2. Method
2.1. Registration
The method described in this research uses an image registration technique to align the image slices of the CT scan taken at a later time, $t_2$, with the slices taken at an initial time, $t_1$. This registration step reduces the error in the calculation of the volume change with time that would otherwise be caused by the partial volume effect [11]. In the following discussion, $k$ refers to the $k$th slice in a set of CT images, $F_k^{t_1}$ refers to slice image $k$ in the CT scan taken at time $t_1$, $F_k^{t_2}$ refers to the closest corresponding CT slice image in the scan taken at the subsequent time $t_2$, and $F_k^{t_2,R}$ refers to image $F_k^{t_2}$ after it has been registered to slice $F_k^{t_1}$. $(x_1, x_2, x_3)$ are the 3D spatial coordinates of the pixels, and where $x_3$ is not given, it is assumed to be in the image plane.
2.1.1. Change in Volume Error
Given a clinical case with two different CT scans of the head taken at times $t_1$ and $t_2$, the cerebral ventricles will have a physical volume of $V^p_{t_1}$ and a calculated volume of $V^c_{t_1}$ at time $t_1$ and a physical volume of $V^p_{t_2}$ and a calculated volume of $V^c_{t_2}$ at $t_2$. Each calculated volume will have an error, $e_{t_1}$ and $e_{t_2}$, respectively, introduced in part by the partial volume effect, such that

$$V^c_{t_1} = V^p_{t_1} + e_{t_1}, \qquad V^c_{t_2} = V^p_{t_2} + e_{t_2}. \tag{1}$$
Thus the change in calculated volume, $\Delta V^c$, between $t_1$ and $t_2$, is given by

$$\Delta V^c = V^c_{t_2} - V^c_{t_1} = \bigl(V^p_{t_2} - V^p_{t_1}\bigr) + \bigl(e_{t_2} - e_{t_1}\bigr). \tag{2}$$
If the displacement of the head is such that the errors $e_{t_1}$ and $e_{t_2}$ compound, then $\Delta V^c$ will have a large error. If registration is applied so that the CT scans are aligned and the partial volume errors are consistent, then $e_{t_2}$ will approach $e_{t_1}$, and $(e_{t_2} - e_{t_1})$ will approach zero.
If $V^{c,R}_{t_2}$ represents the volume calculated using the set of registered images, $F^{t_2,R}$, then

$$\Delta V^{c,R} = V^{c,R}_{t_2} - V^c_{t_1} = \bigl(V^p_{t_2} - V^p_{t_1}\bigr) + \bigl(e^R_{t_2} - e_{t_1}\bigr) \approx V^p_{t_2} - V^p_{t_1}. \tag{3}$$
This means that if the set of images taken at $t_2$ is registered to the set of images taken at $t_1$, so that the partial volume errors are consistent, then the error in the calculated change in volume will be reduced. Since an accurate calculated change in volume is required for this work, the framework described in this research includes registration of the CT scans before the ventricles are segmented and their change in volume calculated.
2.1.2. Modified Mutual Information
The registration method used in this research is a wavelet-based technique that maximizes the mutual information in the two image sets. The mutual information, $MI(A,B)$, of two images, $A$ and $B$, is given by [13, 14, 19]

$$MI(A,B) = H(A) + H(B) - H(A,B),$$

where $H(A)$ and $H(B)$ are the Shannon entropies of images $A$ and $B$, respectively, and $H(A,B)$ is the joint entropy of $A$ and $B$. To reduce the effect of overlap, the more common form, normalized mutual information [20], $NMI(A,B)$, is used in this research:

$$NMI(A,B) = \frac{H(A) + H(B)}{H(A,B)}.$$
The entropies are computed by estimating the probability distributions of the image intensities; the joint entropy is computed from the joint probability distribution of the corresponding intensities in images $A$ and $B$.
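For illustration, $NMI$ can be estimated directly from a joint intensity histogram. The following Python/NumPy sketch is not the authors' implementation; the bin count and function name are assumptions:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Estimate NMI(A, B) = (H(A) + H(B)) / H(A, B) from a joint histogram.

    `a` and `b` are intensity arrays of the same shape (e.g. two CT slices).
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()                  # joint probability distribution
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)  # marginal distributions

    def entropy(p):
        p = p[p > 0]                           # empty bins contribute 0 log 0 = 0
        return -np.sum(p * np.log2(p))

    return (entropy(pa) + entropy(pb)) / entropy(pab)
```

For identical images $H(A,B) = H(A)$, so the measure evaluates to 2; for unrelated images it approaches its lower bound of 1.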
The mutual information registration algorithm assumes that the images are geometrically aligned by the rigid transformation $T_\alpha$, where $\alpha$ is a vector consisting of six (three translation and three rotation) parameters. Optimal alignment is achieved with the set of parameters, $\alpha^*$, for which the mutual information measure is maximal. To achieve optimal alignment, the mutual information function must be smooth.
Because displacement of the human head between scans can be out-of-plane as well as in-plane, the framework in this research includes 3-dimensional registration using the complete set of image slices and trilinear interpolation. In order to reduce the local maxima effect, partial volume interpolation is used to provide a more accurate estimate of the joint histogram [21]. When the joint histogram is calculated for a subvoxel alignment, the contribution of the pixel intensity to the joint histogram is distributed over the intensity values of the eight nearest neighbours using weights calculated by trilinear interpolation.
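The partial volume update of the joint histogram can be sketched as follows. This is a hedged NumPy illustration, not the framework's code; `pv_update` and its arguments are hypothetical names, and intensities are assumed to index histogram bins directly:

```python
import numpy as np

def pv_update(joint_hist, ref_intensity_bin, float_img, x, y, z):
    """Partial-volume update of a joint histogram for one sample point.

    Instead of interpolating a new intensity at the sub-voxel position
    (x, y, z) in the float image, the trilinear weights of the eight
    surrounding voxels distribute the sample over their *existing*
    intensity bins, which keeps the histogram (and hence MI) smooth.
    """
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    for dx, wx in ((0, 1 - fx), (1, fx)):
        for dy, wy in ((0, 1 - fy), (1, fy)):
            for dz, wz in ((0, 1 - fz), (1, fz)):
                v = float_img[x0 + dx, y0 + dy, z0 + dz]
                joint_hist[ref_intensity_bin, v] += wx * wy * wz
```

The eight trilinear weights sum to one, so each sample still contributes exactly one count to the histogram.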
To improve the performance and robustness of the mutual information measure used in the registration algorithm, it is combined with gradient information as outlined by Pluim et al. [22]. The method multiplies the mutual information with a gradient term that is based on both the magnitude and orientation of the gradients and is very briefly summarized here.
The gradient vector is computed for each sample point $x$ in the reference image, which in this case is $F_k^{t_1}$, and for its corresponding point, $y$, in the registered image, $F_k^{t_2,R}$. The point $y$ is found by applying the rigid transformation, $T_\alpha$, to $x$. The gradient terms are calculated by convolving the image with the appropriate first derivatives of a Gaussian kernel of scale $\sigma$. The angle $\alpha_{x,y}(\sigma)$ between the gradient vectors is defined by

$$\alpha_{x,y}(\sigma) = \arccos \frac{\nabla x(\sigma) \cdot \nabla y(\sigma)}{|\nabla x(\sigma)|\,|\nabla y(\sigma)|},$$

with $\nabla x(\sigma)$ denoting the gradient vector of scale $\sigma$ at point $x$ and $|\cdot|$ denoting its magnitude. The gradient function, $G(A,B)$, is computed as a weighted sum of the resulting products for all the pixels and is given by

$$G(A,B) = \sum_{(x,y)} w\bigl(\alpha_{x,y}(\sigma)\bigr)\,\min\bigl(|\nabla x(\sigma)|, |\nabla y(\sigma)|\bigr),$$

where the weighting function, $w(\alpha)$, smooths small angle variations and compensates for intensity inversions and is given by

$$w(\alpha) = \frac{\cos(2\alpha) + 1}{2}.$$

The new normalized mutual information becomes

$$I_{new}(A,B) = G(A,B)\,NMI(A,B).$$
2.1.3. Optimization Using Simplex Method and Multiresolution Decomposition
The six parameters of the registration function, $T_\alpha$, are optimized simultaneously using the simplex method to find the global maximum. A drawback of this method is that if the mutual information function is not smooth with a single maximum, the simplex method may settle on a local maximum, giving poor results. In order to reduce the impact of local maxima on the registration and improve the speed of the method, the image resolution is reduced using a standard wavelet multiresolution decomposition [23]. At the lower resolution, detail information is removed, the mutual information function is smoother, and local maxima are significantly suppressed. Also, at the lower resolution only a fraction of the voxels in the image is used to construct the joint histograms, so speed is improved. After the global maximum is found at the lower resolution, the resolution level is increased and the optimization is initialized at the previously found maximum. This combination of mutual information and multiresolution analysis therefore improves the chance of finding the global maximum of the mutual information function.
2.2. Adaptive Segmentation
An adaptive segmentation based on the watershed algorithm and a novel texture measurement is used in this research. The method consists of two stages: the preliminary watershed segmentation stage and the texture classification stage. In the first stage, DTCWT coefficients are used to extract the texture gradient for the watershed algorithm. In the second stage, DTCWT coefficients are used as the texture measure to classify the textures.
2.2.1. Stage I: Modified Gradient for Preliminary Watershed Segmentation
The first stage of the segmentation algorithm is outlined in Figure 2.
(a) Texture Gradient
The watershed algorithm is an automatic segmentation method based on visualizing a 2D image in 3 dimensions: two spatial dimensions, $(x_1, x_2)$, and the image intensity. The input to the watershed algorithm is gradient information from the original image.
Serious oversegmentation problems result when the required gradient information is based solely on pixel intensities [23]. To reduce the oversegmentation problem, texture gradients, as introduced by Hill et al. [24], are used instead of intensity gradients. Different textures contain information that can be used to identify different tissues. If the gradients between textures are detected and used as input to the watershed algorithm, the images can be segmented into several homogeneous texture regions.
In this paper, the texture gradient is derived from the Dual-Tree Complex Wavelet Transform (DTCWT) coefficients [24]. The DTCWT calculates the complex wavelet transform of a signal using two separate real wavelet decompositions. The transform retains the useful properties of scale and orientation sensitivity, is approximately shift invariant, and also provides a representation with reduced redundancy. For each scale level, six subbands are produced, oriented at $\pm 15^\circ$, $\pm 45^\circ$, and $\pm 75^\circ$, retaining the detail information of the original image along six different orientations. The texture gradient is derived from the subband features $D_\theta^l$, where $D_\theta^l$ represents the subband oriented along $\theta$ at the $l$th scale level.
The texture gradient is obtained in several steps. First, directional median filtering [18] is applied to each subband $D_\theta^l$. Directional median filtering refers to median filtering adapted to the orientation, $\theta$, of the subband. It is implemented as two 1D median filters: the neighbourhood of the first filter extends in a line normal to the subband orientation and removes the step response (double-edge effect) of the subbands, while the second filter, parallel to the subband orientation, removes the noise in the subbands. Considering both scale and orientation, the filtered subband, $\tilde{D}_\theta^l$, is obtained by applying the two filters in sequence.
In practice, the size of the median filter is related to the extent of the filter bank impulse response at that level and was chosen accordingly [18].
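Directional median filtering can be sketched as two separable 1-D median passes. The illustration below handles only axis-aligned orientations (the oblique ±15°/±45°/±75° DTCWT subbands would need sampling along slanted lines) and uses circular boundary handling for brevity; names are hypothetical:

```python
import numpy as np

def directional_median(subband_mag, horizontal=True, size=3):
    """Directional median filtering of a (magnitude) subband.

    The first 1-D median filter runs normal to the subband orientation
    (removing the double-edge step response); the second runs parallel
    to it (removing noise).
    """
    def median_1d(img, axis, size):
        pad = size // 2
        # stack circularly shifted copies along a new axis, take their median
        shifted = [np.roll(img, s, axis=axis) for s in range(-pad, pad + 1)]
        return np.median(np.stack(shifted), axis=0)

    normal_axis = 0 if horizontal else 1    # axis perpendicular to orientation
    parallel_axis = 1 - normal_axis
    out = median_1d(subband_mag, normal_axis, size)
    return median_1d(out, parallel_axis, size)
```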
After directional median filtering, the new subbands are passed to a Gaussian derivative function to estimate their gradients while mitigating noise amplification. The magnitude of the texture gradient oriented at $\theta$ at scale level $l$ of each subband is given by

$$\bigl|\nabla \tilde{D}_\theta^l\bigr| = \bigl|\nabla\bigl(g_\sigma * \tilde{D}_\theta^l\bigr)\bigr|,$$

where $*$ denotes convolution and $g_\sigma$ is the Gaussian function. The single texture gradient map, $TG$, required as input to the watershed algorithm, is calculated as a simple weighted sum of magnitudes [18]:

$$TG = \sum_l \sum_\theta \frac{1}{N_l}\,\Upsilon^l\bigl(\bigl|\nabla \tilde{D}_\theta^l\bigr|\bigr),$$

where $N_l$ is the number of pixels in the subband image at level $l$ and $\Upsilon^l$ is the simple zero-insertion interpolation function that upsamples each subband to the original image size.
(b) Modulated Gradient
After obtaining the texture gradient of the image, a modulated gradient is computed. The modulated gradient is based on texture activity as described in [24]; its purpose is to suppress the intensity gradient in textured areas while leaving it unmodified in smooth regions. The texture activity measure applies a half-wave rectification (which suppresses negative exponents) controlled by two predefined parameters whose values are fixed for any 8-bit grayscale image [18]. The texture energy is computed from the upsampled subband features by morphological erosion with a structuring element that is, in this case, a square neighbourhood of nine pixels.
(c) Texture Gradient and Modulated Gradient Combined
Now the texture gradient and the modulated gradient are combined to obtain a final "modified" gradient, which captures the perceptual edges in the image. In the combination, the texture gradient is normalized by its median value, and the intensity gradient (the gradient of the original image) is normalized by a constant defined to be four times the median intensity gradient. Figure 4 illustrates this process.
As a final step in this stage, the H-minima transform [16] is used as a postprocessing technique to improve the segmentation results by modifying the gradient surface and suppressing shallow minima. Stage I outputs a label map, an image in which each segmented region is given a unique label, for use in Stage II.
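The H-minima transform itself can be sketched with grayscale reconstruction-by-erosion. This is an illustrative NumPy version using a flat 3×3 structuring element; libraries such as scikit-image provide an equivalent operation:

```python
import numpy as np

def hminima(f, h):
    """H-minima transform: suppress all regional minima of depth less
    than h, via grayscale reconstruction-by-erosion of (f + h) over f."""
    def erode(img):
        p = np.pad(img, 1, mode='edge')
        # minimum over the 3x3 neighbourhood of every pixel
        return np.min(np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                                for i in range(3) for j in range(3)]), axis=0)

    marker = f + h
    while True:  # marker decreases monotonically toward the reconstruction
        nxt = np.maximum(erode(marker), f)
        if np.array_equal(nxt, marker):
            return marker
        marker = nxt
```

Minima shallower than `h` are filled completely, while deeper minima survive with their depth reduced by `h`, which is what prunes spurious watershed basins.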
2.2.2. Stage II: Texture Classification and Feature Extraction
All the methods in the previous section are gradient modifications and provide only a partial solution to the watershed oversegmentation problem in real medical images. A novel texture classification method is used to merge regions of similar textures, thus further reducing the oversegmentation and improving algorithm performance.
Traditional texture classification is based on a rectangular window of fixed size [23]. The traditional method treats the "small" area in the window as a texture and attempts to extract texture features from it. When the window lies completely inside the region of the texture to be represented, one texture feature is extracted. When the window crosses several regions, the features extracted from the window represent a mixture of textures. Rather than using a fixed window size, the method in the current research uses the regions of the oversegmented image output from Stage I as the basis for texture extraction [25]. Each of these regions has sufficient homogeneous texture information to allow for feature extraction. The texture in each region is compared to the texture of neighbouring regions; if the textures are "similar," the regions are merged. Similarity is determined using the Kolmogorov-Smirnov test (KS-test) in the following manner.
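The two-sample KS statistic at the heart of this comparison is simply the maximum distance between the empirical CDFs of the two regions' values, and can be computed in a few lines (an illustrative sketch; the function name is hypothetical):

```python
import numpy as np

def ks_statistic(region_a, region_b):
    """Two-sample Kolmogorov-Smirnov statistic between the value
    distributions of two regions. Small values indicate similar
    distributions, i.e. candidate regions for merging."""
    a, b = np.sort(region_a.ravel()), np.sort(region_b.ravel())
    values = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, values, side='right') / a.size
    cdf_b = np.searchsorted(b, values, side='right') / b.size
    return np.max(np.abs(cdf_a - cdf_b))
```

The statistic is 0 for identical distributions and 1 for completely disjoint ones; in the framework it is evaluated on both the original image and the texture map before the two results are combined.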
The texture feature is extracted from a region using a method based on the DTCWT coefficients, relying on their shift invariance and selective sensitivity. The DTCWT decomposes an image into seven subband images at each scale level. Only one of the subband images, filtered by the low-pass filter, carries the approximation information of the image. The remaining six subbands contain detail information, which includes texture information. For example, at scale level 4, one approximation subband image and 24 detail subbands are obtained. Since the DTCWT allows perfect reconstruction, a black image is substituted for the approximation subband image. When the image is reconstructed using the inverse DTCWT, the result, the texture map, contains most of the texture information and no approximation information.
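The texture-map construction can be illustrated with a single-level 2-D Haar transform standing in for the DTCWT (an assumption made for brevity; the real framework uses the complex, oriented DTCWT subbands, and all names here are hypothetical):

```python
import numpy as np

def haar2_forward(img):
    """One level of a 2-D Haar DWT (a stand-in for the DTCWT)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar2_inverse(ll, lh, hl, hh):
    a = np.zeros((ll.shape[0], 2 * ll.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.zeros((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def texture_map(img):
    """Reconstruct with the approximation subband zeroed ("black image"),
    keeping only the detail (texture) information."""
    ll, lh, hl, hh = haar2_forward(img)
    return haar2_inverse(np.zeros_like(ll), lh, hl, hh)
```

Because the transform allows perfect reconstruction, zeroing the approximation subband leaves a map that is exactly the detail content of the image, e.g. a constant image yields an all-zero texture map.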
After the construction of the texture map, the original image and the texture map, along with the label map output from Stage I, are passed to the KS-test. Two similarity matrices are obtained: one for the texture map and one for the original image. The final similarity matrix used for the merge process is obtained by combining the two as a weighted sum in which the original image information has the dominant effect and the texture map has a supplementary effect.
The two regions with the maximum value in the final similarity matrix are merged at each step. After merging, the labels for each region are updated and the newly segmented image is used as input to the next iteration. The flow chart of Stage II is shown in Figure 3. The termination criterion for the "best" segmentation, determined empirically, is simple: when the maximum value in the similarity matrix equals the minimum value, there are no two regions that should be merged.
Figure 4: (a) Original image; (b) texture gradient; (c) modulated gradient; (d) modified gradient.
In summary, an image is oversegmented at the first stage and then a texture classification stage is applied to optimize the outcome of the segmentation until a termination criterion is achieved. Figure 6 shows an example of the final segmentation result obtained from the standard watershed algorithm compared with the result from the adaptive watershed segmentation method used in this research.
2.2.3. User Interactions
Since the watershed method segments the entire image and only the ventricles are of interest, some user interaction is included in the framework. This interaction allows the user to identify which regions should be included in the ventricular system. After the regions have been selected, the framework generates an output image that includes only the ventricles.
2.3. Volume Calculation
The ultimate goal is to calculate the change in the volume of the ventricles, and a combination of several algorithms is required to reach it. Registration of the two image sets is the first step in this process. Then the ventricles are segmented from the brain tissue. After segmentation, the complete set of slices is used to perform the ventricular volume calculation. The area of the ventricles in each slice is given by

$$A_k = s^2\, n_k,$$

where $s$ represents the pixel spacing and $n_k$ the number of pixels in the ventricles in the $k$th slice. The volume of the ventricles in each slice, $V_k$, is obtained by multiplying the area of the ventricles, $A_k$, by the slice thickness, $T$:

$$V_k = A_k\, T.$$

The total volume, $V$, is obtained by summing the volume of the ventricles in each slice over all the slices which contain the ventricles. With the total number of slices containing ventricle information represented by $K$,

$$V = \sum_{k=1}^{K} V_k.$$
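The volume calculation reduces to a few lines of code. The sketch below assumes per-slice binary segmentation masks and millimetre units; function names and the percentage-change helper are illustrative, not the authors' code:

```python
import numpy as np

def ventricular_volume(masks, pixel_spacing_mm, slice_thickness_mm):
    """Total ventricular volume from per-slice binary segmentation masks.

    A_k = s^2 * n_k (slice area), V_k = A_k * T, V = sum of V_k.
    Returns the volume in millilitres (1 mL = 1000 mm^3).
    """
    s2 = pixel_spacing_mm ** 2
    total_mm3 = sum(s2 * np.count_nonzero(m) * slice_thickness_mm for m in masks)
    return total_mm3 / 1000.0

def percent_volume_change(v1, v2):
    """Change in volume between two scans, as a percentage of the first."""
    return (v2 - v1) / v1 * 100.0
```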
Once the total volume of the ventricles is calculated, the change in volume between registered scans is calculated using (3).
3. Data Sets
3.1. Physical Phantom
Since it is not possible to measure the true volume of the cerebral ventricles directly in a living person (i.e., without resorting to another image-based morphometric technique), the precision and reliability of the volume calculation framework were tested using a physical phantom with known ventricular volume. A number of physical phantom models have been described in the literature, including plexiglass rods submerged in water cylinders [26] and fluid-filled rubber membranes enclosed in gelatin [8, 9, 27]. In the latter models, the membrane-bound "ventricles" were either of a complex shape [27] or a simple, spherical shape [8, 9], and the fluid was either static [27] or flowing [8, 9]. Models have also included casts of the human ventricular system in formalin-fixed brains [28], potassium iodide baths [29], or copper nitrate baths [26]. These phantoms have either lacked the complex shape of the human ventricular system, required artificial membrane boundaries, or used materials that do not mimic the density and texture of brain tissue well on CT. Therefore, in the current research, more realistic agar-and-water phantoms in a range of sizes were developed for verification and validation of the algorithms.
A set of 5 physical phantoms was constructed [4]. The materials were selected because their densities and textures closely mimic those of real brain tissue and cerebrospinal fluid on CT. Clay models of the human ventricular system, including the left and right lateral ventricles, foramina of Monro, third ventricle, cerebral aqueduct, and fourth ventricle, were first created. These were used to create molds from liquid latex rubber. The molds, in turn, were used to create ice models of the ventricular system, which were immersed in solidifying liquid agar. These phantoms, consisting of agar "brain" and water "ventricles," were then scanned using clinical CT scanning parameters (slice thickness 3 mm at the level of the fourth ventricle and 7 mm above it, field of view 20 × 20 cm, tube voltage 140 kVp, tube current-exposure 140 mAs). Each phantom was given a complete CT scan four times, with the scanning angle changed between each of the four scans. The volume of water within the phantom's ventricles was measured using a graduated syringe. The ice model and a sample CT slice image are shown in Figure 7.
3.2. Clinical Data
The collection of clinical images was approved by the Research Ethics Board of the IWK Health Centre, and the requirement for informed consent was waived. All clinical CT studies were collected in anonymized DICOM format. The CT studies were from patients whose outcome (normal, stable hydrocephalus, developing hydrocephalus, treated hydrocephalus) was known and were selected by a radiologist (MHS) to reflect a range of outcomes. Of the 13 cases provided, nine cases, labeled p, were from patients who had two serial CT scans; the remaining cases, labeled p, were from patients who had more than two serial CT scans. Manual segmentation was also provided by the radiologist (MHS) so that the segmentation portion of the framework could be validated.
4. Results
4.1. Physical Phantom Results
The volume calculation results for the set of five physical phantoms are summarized in Table 1. The mean calculated volume refers to the volume calculated by the algorithm framework, averaged over the four scanning angles used. "Mean Error" is the absolute value of the difference between the calculated volume and the measured volume, averaged over the four scanning angles tested and expressed as a percentage. The standard deviations of the calculated volume and of the percentage error are noted in the table. The mean percentage error over all the phantom models was 2.3%. The algorithm's margin of error was deemed to be the maximum percent error observed for any one volume calculation.

4.2. Clinical Results
4.2.1. Registration Measure
The improvement in alignment achieved by the registration algorithm is illustrated in Figure 5. In this example, the 3D displacement of the head between the two CT scans, and its subsequent correction, is particularly noticeable around the eyeballs. In order to quantify the improvement for every image pair, an improvement ratio was defined [25] as the relative reduction, achieved by registration, in the difference between the reference image and the second image.
Figure 5: (a) Reference image, $F_k^{t_1}(x_1, x_2)$; (b) float image, $F_k^{t_2}(x_1, x_2)$; (c) registered image, $F_k^{t_2,R}(x_1, x_2)$.
Figure 6: (a) Standard watershed result; (b) adaptive segmentation result.
Figure 7: (a) CT slice image, physical phantom model; (b) ventricular system ice model.
The improvement ratio values for all the clinical cases are listed in column 2 of Table 2. The lowest value, 19.07%, occurred in case p, where the two scans were already well aligned before registration. In case p, with an improvement ratio of 20.20%, there was significant skull deformation caused by the hydrocephalus, so the registered image, although aligned, is still dissimilar from the initial image.

4.2.2. Segmentation Measure
The segmentation portion of the framework was validated by calculating the similarity index, $S$, between the results of the automated adaptive segmentation and a manual segmentation:

$$S = \frac{2\,|A \cap M|}{|A| + |M|},$$

where $A$ and $M$ are the pixel sets of the ventricle areas, measured in numbers of pixels, in the images segmented using adaptive segmentation and manual segmentation, respectively. A value of $S > 0.7$ (or 70%) indicates excellent agreement [30]. Table 3 shows the results for each case, with $S$ averaged over all the scans in the case as well as over all the slices in the case. The segmentation algorithm worked correctly for cases that had relatively normal ventricles as well as for those with ventricles enlarged by developing hydrocephalus.

4.2.3. Framework Measure
Since the objective of the research is to measure the change in volume of the ventricular system with time, the difference in volume between two scans was calculated using (3). The change in volume is expressed as a percentage of the initial volume:

$$\Delta V\% = \frac{V^{c,R}_{t_2} - V^c_{t_1}}{V^c_{t_1}} \times 100\%.$$
Table 2 summarizes the volume calculation results for all the clinical cases. To further illustrate the effect of registration, the change in volume was calculated both without and with registration, and the two results are tabulated separately. Comparing them shows that the values generally differ significantly.
The registered percentage changes in volume are plotted in Figure 8 on a log scale. The plot shows that the values separate into two clusters based on k-means clustering; the red and blue dots represent the two clusters. One group has all its values below 5%, and the other has values greater than 20%. A threshold between these clusters was selected empirically as the algorithm's predictor of developing hydrocephalus. This threshold was greater than the algorithm's measured margin of error and also allowed a small margin for the differences between the physical phantoms and the clinical data.
Using this predictor value, the diagnostic performance of the framework was compared to the clinical comments supplied by the radiologist (MHS) and the results are summarized in Table 4 using the following notations.

For ease of comparison, the clinical comments associated with each case are also listed in Table 2. The clinical comments were made independently of this research and were supplied by the radiologist (MHS) as a basis for comparison. The following abbreviations are used for these comments: healthy: the patient was diagnosed as healthy; hy: the patient was developing hydrocephalus; hy:stable: the patient has hydrocephalus but the hydrocephalus was stable between the two different scans; hy:treated: the patient was diagnosed with hydrocephalus and was treated between scans.
For all the positive and negative examples, the framework prediction and the clinical comments match.
5. Conclusion
In this paper, a framework was implemented to measure the volume of the ventricular system to aid in the diagnosis of hydrocephalus. The framework consists of three main algorithms: a modified registration algorithm combining a wavelet multiresolution pyramid with mutual information, an adaptive watershed segmentation with a novel feature extraction method based on the DTCWT coefficients, and a volume calculation algorithm. To quantify the success of the algorithms, an improvement ratio was calculated for the registration algorithm and a similarity index for the segmentation algorithm. Finally, physical phantom models with known volumes and clinical cases with known diagnoses were used to verify the volume calculation algorithm.
The improvement ratios achieved for the normal cases indicate that the registration algorithm succeeded in compensating for the displacement between scans, and the similarity indices achieved across the 13 cases indicate that the segmentation method worked well.
For the volume calculation method on the physical phantom models, the average error rate was 2.3%, indicating that the accuracy of the algorithm is high. Using the calculated percentage change in volume as a predictor of developing hydrocephalus, the algorithm's prediction matched the clinical comments in all cases. These findings show that the structure of the framework is reasonable and illustrate its potential for development as a tool to aid in the evaluation of hydrocephalus on serial CT scans.
Future work will include a more rigorous determination of the predictor value as well as collecting and testing a larger set of clinical data to examine the algorithm's performance on a wider range of clinically significant volume changes, particularly small clinically relevant changes.
Acknowledgments
The authors would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Department of Mathematics and Computing Science, Saint Mary's University, Halifax, Nova Scotia, Canada, for their financial support.
References
1. A. H. Mesiwala, A. M. Avellino, and R. G. Ellenbogen, “The diagonal ventricular dimension: a method for predicting shunt malfunction on the basis of changes in ventricular size,” Neurosurgery, vol. 50, no. 6, pp. 1246–1252, 2002.
2. B. B. O'Hayon, J. M. Drake, M. G. Ossip, S. Tuli, and M. Clarke, “Frontal and occipital horn ratio: a linear estimate of ventricular size for multiple imaging modalities in pediatric hydrocephalus,” Pediatric Neurosurgery, vol. 29, no. 5, pp. 245–249, 1998.
3. V. Synek and J. R. Reuben, “The ventricular brain ratio using planimetric measurement of EMI scans,” British Journal of Radiology, vol. 49, no. 579, pp. 233–237, 1976.
4. J. Evans, The verification of a computer algorithm designed to calculate the volume of the human cerebral ventricles based on CT images, B.S. thesis, Dalhousie University, Halifax, Canada, 2005.
5. M. S. Huckman, J. Fox, and J. Topel, “The validity of criteria for the evaluation of cerebral atrophy by computed tomography,” Radiology, vol. 116, no. 1, pp. 85–92, 1975.
6. J. H. Fox, L. T. Jordan, and M. S. Huckman, “Use of computerized tomography in senile dementia,” Journal of Neurology Neurosurgery and Psychiatry, vol. 38, no. 10, pp. 948–953, 1975.
7. B. S. Brann IV, C. Qualls, L. Wells, and L. A. Papile, “Asymmetric growth of the lateral cerebral ventricle in infants with posthemorrhagic ventricular dilation,” Journal of Pediatrics, vol. 118, no. 1, pp. 108–112, 1991.
8. R. W. Sze, V. Ghioni, E. Weinberger, K. D. Seidel, and R. G. Ellenbogen, “Rapid computed tomography technique to measure ventricular volumes in the child with suspected ventriculoperitoneal shunt failure I: validation of technique with a dynamic phantom,” Journal of Computer Assisted Tomography, vol. 27, no. 5, pp. 663–667, 2003.
9. R. W. Sze, V. Ghioni, E. Weinberger, K. D. Seidel, and R. G. Ellenbogen, “Rapid computed tomography technique to measure ventricular volumes in the child with suspected ventriculoperitoneal shunt failure II: clinical application,” Journal of Computer Assisted Tomography, vol. 27, no. 5, pp. 668–673, 2003.
10. L. M. Zatz and T. L. Jernigan, “The ventricular-brain ratio on computed tomography scans: validity and proper use,” Psychiatry Research, vol. 8, no. 3, pp. 207–214, 1983.
11. Z. Sun, Using computer vision techniques on CT scans to measure changes in ventricular volume to aid in the diagnosis of hydrocephalus, M.S. thesis, Saint Mary's University, Canada, 2005.
12. M. Auer, P. Regitnig, and G. A. Holzapfel, “An automatic nonrigid registration for stained histological sections,” IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 475–486, 2005.
13. F. Maes, D. Vandermeulen, and P. Suetens, “Medical image registration using mutual information,” Proceedings of the IEEE, vol. 91, no. 10, pp. 1699–1721, 2003.
14. J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever, “Mutual-information-based registration of medical images: a survey,” IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 986–1004, 2003.
15. L. Liu, T. Jiang, J. Yang, and C. Zhu, “Fingerprint registration by maximization of mutual information,” IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1100–1110, 2006.
16. P. Soille, Morphological Image Analysis, Principles and Applications, Springer, Berlin, Germany, 1999.
17. L. Shafarenko, M. Petrou, and J. Kittler, “Automatic watershed segmentation of randomly textured color images,” IEEE Transactions on Image Processing, vol. 6, no. 11, pp. 1530–1544, 1997.
18. R. J. O'Callaghan and D. R. Bull, “Combined morphological-spectral unsupervised image segmentation,” IEEE Transactions on Image Processing, vol. 14, no. 1, pp. 49–62, 2005.
19. J. B. A. Maintz and M. A. Viergever, “A survey of medical image registration,” Medical Image Analysis, vol. 2, no. 1, pp. 1–36, 1998.
20. C. Studholme, D. L. G. Hill, and D. J. Hawkes, “An overlap invariant entropy measure of 3D medical image alignment,” Pattern Recognition, vol. 32, no. 1, pp. 71–86, 1999.
21. F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, “Multimodality image registration by maximization of mutual information,” IEEE Transactions on Medical Imaging, vol. 16, no. 2, pp. 187–198, 1997.
22. J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever, “Image registration by maximization of combined mutual information and gradient information,” IEEE Transactions on Medical Imaging, vol. 19, no. 8, pp. 809–814, 2000.
23. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, USA, 2nd edition, 2002.
24. P. R. Hill, C. N. Canagarajah, and D. R. Bull, “Image segmentation using a texture gradient based watershed transform,” IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1618–1633, 2003.
25. F. Luo, Wavelet-based registration and segmentation framework for the quantitative evaluation of hydrocephalus, M.S. thesis, Saint Mary's University, Canada, 2006.
26. M. Ashtari, J. L. Zito, B. I. Gold, J. A. Lieberman, M. T. Borenstein, and P. G. Herman, “Computerized volume measurement of brain structure,” Investigative Radiology, vol. 25, no. 7, pp. 798–805, 1990.
27. F. Brassow and K. Baumann, “Volume of brain ventricles in man determined by computer tomography,” Neuroradiology, vol. 16, pp. 187–189, 1978.
28. D. A. Rottenberg, K. S. Pentlow, M. D. F. Deck, and J. C. Allen, “Determination of ventricular volume following metrizamide CT ventriculography,” Neuroradiology, vol. 16, pp. 136–139, 1978.
29. R. E. Baldy, G. S. Brindley, I. Ewusi-Mensah et al., “A fully-automated computer-assisted method of CT brain scan analysis for the measurement of cerebrospinal fluid spaces and brain absorption density,” Neuroradiology, vol. 28, no. 2, pp. 109–117, 1986.
30. D. N. Kennedy, P. A. Filipek, and V. S. Caviness Jr., “Anatomic segmentation and volumetric calculations in nuclear magnetic resonance imaging,” IEEE Transactions on Medical Imaging, vol. 8, no. 1, pp. 1–7, 1997.
Copyright
Copyright © 2010 Fan Luo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.