Research Article | Open Access | Special Issue: Recent Machine Learning Progress in Image Analysis and Understanding
Fei Yang, Xiangxu Meng, JiYing Lang, Weigang Lu, Lei Liu, "Region Space Guided Transfer Function Design for Nonlinear Neural Network Augmented Image Visualization", Advances in Multimedia, vol. 2018, Article ID 7479316, 8 pages, 2018. https://doi.org/10.1155/2018/7479316
Region Space Guided Transfer Function Design for Nonlinear Neural Network Augmented Image Visualization
Abstract
Visualization provides interactive investigation of details of interest and improves understanding of the implicit information in data. There is a strong need today for high-quality visualization results in various fields, such as biomedicine and other scientific domains. The quality of biomedical volume data is often seriously degraded by partial volume effects, noise, and bias introduced by CT (Computed Tomography) or MRI (Magnetic Resonance Imaging) devices, which can make specifying a transfer function extremely difficult and thus produce poorly visualized images. In this paper, a nonlinear neural-network-based denoising step is first applied in the preprocessing stage to improve the quality of the 3D volume data. Based on the improved data, a novel region-space 2D histogram construction method with depth information is then proposed to identify boundaries between materials, which helps in designing a proper semiautomated transfer function. Finally, a volume rendering pipeline with a ray-casting algorithm is implemented to visualize several biomedical datasets. The noise in the volume data is suppressed effectively, and the boundaries between materials can be differentiated clearly by the transfer function designed via the modified 2D histogram.
1. Introduction
Because it makes objects visible and reveals clear detail, visualization has been proven to be of paramount importance for exploring meaningful properties of volume data [1]. Because it obtains two-dimensional rendering results on the screen directly from the data field without building a mesh model in advance, volume rendering has been certified as an effective visualization method for extracting underlying information of interest from volumetric data using interactive graphics and imaging [2, 3]. Kniss et al. [4] visualized the muscle, soft tissues, and bone from the visible male head data using volume rendering and produced a set of direct-manipulation widgets to make exploring such features convenient. Ching and Chang [5] rendered features of interest in CT (Computed Tomography) images and generated a large wide-angle perspective projection view in an endoscopy to help physicians in diagnosis. Zhang et al. [1] synchronized dual-modality cardiac MRI (Magnetic Resonance Imaging) and 3D ultrasound volumes and visualized the dynamic heart by 4D cardiac image rendering. Based on volume rendering, Zhang, Wang, et al. developed a platform integrating a multivolume visualization method for both heart anatomical data and electrophysiological data [6]. Hsieh et al. [7] visualized the three-dimensional (3D) geometry of the ear ossicle with segmented ossicle computed tomography (CT) slices, which presented the spatial relation with the temporal bone to diagnose middle ear disease. To visualize brain activity conveniently, Holub and Winer [8] performed 3D and 4D volume ray casting on a tablet device in real time.
The transfer function plays a fundamental role in visualization through its capability to classify and segment features of volume data, which affects both the quality of the rendered image and the user's perception of the volume data. To measure through-plane MR flow, Thunberg et al. [9] presented a visualization method which combined the magnitude and velocity images into one single image. By using the transfer function, the velocities are color-coded and set to a predefined opacity, so that how the measured blood flow relates to the underlying anatomy can be understood. Zhang presented a statistics-based method to visualize a 3D cardiac volume data set [10] and further proposed a novel transfer function design approach for revealing detailed structures in human heart anatomy via perception-based lighting enhancement [11]. Yang presented a fusion visualization framework to combine the cardiac electrophysiology pattern with the anatomy pattern through a novel multidimensional fusion transfer function [12].
Ebert et al. [13] studied the accuracy of volume rendering for arterial stenosis measurement, and the results suggested that the choice of transfer function parameters greatly affects the accuracy of volume rendering, while accurate selection of transfer function parameters remains a challenge due to the lack of meaningful guidance information and intuitive user interfaces. Existing approaches to this problem are mainly object-centric, image-centric, and data-centric [14]. The object-centric approach first classifies or segments the volume data through clustering, probabilistic models, or machine learning methods, which include artificial neural networks, support vector machines, and hidden Markov models [15–18]. The optical parameters are then specified based on the classification result.
Different from the object-centric method, the image-centric transfer function is designed on the rendered images. Through evaluation of the projected images, the transfer function parameters are automatically adjusted and reapplied to the original data recursively until a satisfactory rendering result is achieved. Based on a set of rendered images, He et al. [19] presented a stochastic search method to find satisfactory transfer functions. Marks et al. [20] proposed the Design Gallery method to provide the user with varieties of ordered graphics or animations with different appearances, generated automatically by a series of transfer functions given an input parameter vector. Users then explore this image space to search for a satisfactory transfer function.
In the data-centric approach, transfer function parameters are specified by analyzing the volume data. Generally, collecting additional information related to the data before confirming the transfer function makes the design more convenient. The scalar value of the volume data is commonly used to derive a 1D transfer function. The gradient [21, 22] and curvature [23, 24] are introduced as the second variable for two-dimensional transfer functions. Roettger et al. [25] extended the transfer function variables to spatial information: spatially connected regions were grouped and thus classified. Huang et al. [26] added spatial information into the transfer function domain and extended the number of dimensions to three; material boundaries are accurately revealed by applying a region-growing algorithm driven by a cost function designed in three dimensions. Other approaches classify the topological structure of volume data for transfer function design [27, 28]. Through continuous scale-space analysis and detection filters, Correa et al. [29] obtained 3D scale fields which represent the scale of every voxel. A size-based transfer function is then derived from the scale fields, mapping the scale of local features to color and opacity; features with similar scalar values in complex data can thus be classified by their relative size. As the dimensionality of the transfer function increases, specifying its parameters properly becomes a more difficult and tedious task.
When the dimension exceeds two, it is difficult to specify the parameters of higher-dimensional transfer functions, so histograms are often used to find satisfactory transfer functions. Based on the first and second derivatives of the volume, Kindlmann and Durkin [22] built a 2D histogram in which object boundaries appear as arcs. A feature-sensitive transfer function can then be semiautomatically generated according to the arcs to reveal features of interest. Lum et al. [30] employed gradient-aligned samples instead of first derivatives as the first property for creating a variant of the 2D histogram. The transfer function designed through this histogram classifies voxels with different degrees of homogeneity by mapping them to different optical parameters. However, with an increasing number of boundaries, separating them with the above methods becomes more difficult due to intersections and overlaps.
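The scalar-value versus gradient-magnitude histogram described above can be sketched with NumPy; the volume here is random stand-in data, and the function name and bin count are illustrative choices, not from the paper:

```python
import numpy as np

def scalar_gradient_histogram(volume, bins=256):
    """2D histogram of scalar value vs. gradient magnitude, in which
    material boundaries show up as arcs (Kindlmann-Durkin style)."""
    gx, gy, gz = np.gradient(volume.astype(np.float64))  # central differences
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, s_edges, g_edges = np.histogram2d(volume.ravel(), grad_mag.ravel(), bins=bins)
    return hist, s_edges, g_edges

vol = np.random.rand(32, 32, 32)   # stand-in for a CT volume
hist, _, _ = scalar_gradient_histogram(vol, bins=64)
print(hist.shape)  # (64, 64)
```

In practice the histogram is usually displayed on a log scale and the arcs are picked out interactively.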
In this paper, a neural network preprocessing approach for slice denoising is first implemented to improve the quality of 3D biomedical data. A two-dimensional transfer function is then designed on the preprocessed data based on a modified 2D histogram, which is created using a novel region-space method with depth information. The features of interest in the data are thus explored exactly. Section 2 describes the denoising method and the design of the two-dimensional transfer function based on the 2D histogram. The efficiency and practicability of the presented method are shown in Section 3. Finally, conclusions are discussed in Section 4.
2. Transfer Function Design
2.1. Augmentation on Slice Data
Biomedical volume data produced by current noninvasive devices such as CT and MRI scanners are usually accompanied by noise, partial volume effects, and bias. Data with serious noise or error, which causes a low SNR (Signal-to-Noise Ratio), will directly affect transfer function specification and obscure objects in the resulting image.
Spatial low-pass filtering such as median filtering and Gaussian smoothing has the advantage of reducing the amplitude of noise fluctuations. However, such filtering blurs details in the data such as lines and edges and does not treat region boundaries or tiny structures specially, which makes the resulting image too fuzzy. This hampers effective boundary enhancement for noisy data containing many details.
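As a quick illustration of this trade-off (a sketch, not the paper's pipeline), SciPy's median and Gaussian filters can be compared on a noisy step edge; the filter sizes and sigma below are arbitrary choices:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Toy slice: a sharp vertical step edge plus additive noise.
slice_ = np.zeros((64, 64))
slice_[:, 32:] = 1.0
noisy = slice_ + 0.1 * rng.standard_normal(slice_.shape)

median = ndimage.median_filter(noisy, size=3)       # rank filter, keeps the step sharper
gauss = ndimage.gaussian_filter(noisy, sigma=1.5)   # linear smoothing, blurs the step

# The Gaussian result has a visibly flatter transition across the edge.
print(np.abs(np.diff(gauss[32])).max() < np.abs(np.diff(noisy[32])).max())  # True
```

The Gaussian filter attenuates the noise but also spreads the unit step over several pixels, which is exactly the blurring of boundaries criticized above.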
Although nonlinear filtering preserves edges, it loses resolution by suppressing details. To address this problem, nonlinear enhancement algorithms use boundary information and the neighborhood of a pixel to preprocess the image data [14, 31], which effectively removes noise in regions with homogeneous physical properties and significantly improves image quality. Loss of information can thus be minimized by preserving object boundaries and detailed structure, and shape is enhanced by discontinuity sharpening. Since anisotropic diffusion filtering smooths the image along the edge direction rather than orthogonal to the edge, the location and intensity of edges are retained. Unlike traditional methods, a neural network can learn features beneficial to the task through its hierarchical structure. Based on a multilayer perceptron neural network, Burger et al. [32] presented a denoising algorithm that is learned on a large dataset. The multilayer perceptron (MLP) can be represented as a nonlinear function which maps a noisy image to a noise-free image:

h(u) = b4 + W4 tanh(b3 + W3 tanh(b2 + W2 tanh(b1 + W1 u))) (1)

Here the network has three hidden layers; b1, b2, b3, b4 are vector-valued biases, and W1, W2, W3, W4 are the weight matrices of the structure. The tanh function operates componentwise. To realize image denoising, clean images are selected from an image dataset and a chosen noise level is used to produce the corresponding noisy images. The MLP parameters are then estimated by the back-propagation algorithm by minimizing the squared reconstruction error:

min Σ ||h(u) − v||² (2)

where u is the vector-valued noisy image input, h(u) is the mapped vector-valued denoised output, and v is the clean image. The architecture of the network is shown in Figure 1.
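A minimal NumPy sketch of the forward mapping h(u), assuming random untrained weights and hypothetical hidden-layer widths (the layer widths are not specified in the text):

```python
import numpy as np

def mlp_denoise_patch(u, weights, biases):
    """Forward pass of a denoising MLP with tanh hidden layers and a
    linear output layer, following the mapping h(u) in the text."""
    a = u
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)           # tanh applied componentwise
    return weights[-1] @ a + biases[-1]  # linear output layer

rng = np.random.default_rng(0)
d = 39 * 39                        # flattened 39x39 patch, as in Section 3
sizes = [d, 512, 512, 512, d]      # three hidden layers; widths are hypothetical
weights = [0.01 * rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
patch = rng.standard_normal(d)
out = mlp_denoise_patch(patch, weights, biases)
print(out.shape)  # (1521,)
```

Training would fit the weights and biases by back-propagation on (noisy, clean) patch pairs; only the forward mapping is sketched here.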
For image denoising, the MLP processes image patches with a fully connected network and then recombines all processed patches to form the denoised image. First, the noisy image is split into overlapping patches, and each patch u is denoised separately. The denoised patches h(u) are then placed at the locations of their noisy counterparts, and the denoised image is obtained by averaging over the overlapping regions.
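The split-denoise-recombine step can be sketched as follows; here identity "denoised" patches are reassembled and averaged over overlaps, and all names are illustrative:

```python
import numpy as np

def reassemble(patches, positions, image_shape, patch_size):
    """Place denoised patches at their noisy counterparts' locations and
    average over the overlapping regions."""
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for p, (i, j) in zip(patches, positions):
        acc[i:i + patch_size, j:j + patch_size] += p
        cnt[i:i + patch_size, j:j + patch_size] += 1
    return acc / np.maximum(cnt, 1)   # avoid division by zero off-patch

img = np.ones((8, 8))                 # pretend this is already-denoised content
ps = 4
positions = [(i, j) for i in range(0, 5, 2) for j in range(0, 5, 2)]
patches = [img[i:i + ps, j:j + ps] for i, j in positions]
out = reassemble(patches, positions, img.shape, ps)
print(np.allclose(out, img))  # True
```

In the real pipeline each extracted patch would pass through the trained MLP before reassembly; averaging the overlaps suppresses per-patch artifacts.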
2.2. Region Space Guided Transfer Function Design
Kindlmann et al. [22] added higher-order derivatives of the voxel to the transfer function domain. Extracted boundaries appear as arcs in the derived histogram, whose axes represent scalar value and gradient magnitude. Although this histogram improves the selection of boundaries, intersection or overlap of two arcs, caused by voxels of different features sharing the same scalar value and gradient magnitude, may result in ambiguous boundary classification. Šereda et al. [33] proposed a multidimensional transfer function based on the LH histogram to facilitate the separation of the features represented by the arcs.
The LH-histogram-based method computes a low and a high value, labeled FL and FH respectively, for each sample voxel. If the gradient magnitude of a voxel is less than a threshold, the voxel is identified as an internal sample; otherwise it is considered a boundary element. For an internal voxel, FL and FH are equal to the voxel's scalar value. For voxels that are supposed to belong to boundaries, a path is integrated in the gradient field along the gradient direction and its reverse until the gradient magnitude falls below the threshold; FL and FH are then found at the path endpoints. The FL and FH values of all voxels are plotted in the same coordinate system, yielding the LH histogram. Since FL is never greater than FH, points in the LH histogram lie only on or above the diagonal line. Points on the diagonal indicate internal voxels, i.e., FL and FH are equal. The rest represent boundary voxels, whose FL and FH values are the scalar values of the two materials forming the boundary.
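A simplified one-dimensional illustration of the FL/FH search (a sketch under these stated assumptions, not the authors' full 3D gradient-field integration): along a single profile across a boundary, we walk from the sample in both directions until the gradient magnitude drops below the threshold.

```python
import numpy as np

def lh_values_1d(profile, x, grad_eps=0.5):
    """Walk from sample x against and along the gradient until the
    profile flattens out; the endpoint values give FL and FH."""
    g = np.gradient(profile.astype(float))
    if abs(g[x]) < grad_eps:               # internal voxel: FL == FH
        return profile[x], profile[x]
    lo = x
    while 0 < lo < len(profile) - 1 and abs(g[lo]) >= grad_eps:
        lo -= 1 if g[lo] > 0 else -1       # step toward lower values
    hi = x
    while 0 < hi < len(profile) - 1 and abs(g[hi]) >= grad_eps:
        hi += 1 if g[hi] > 0 else -1       # step toward higher values
    return min(profile[lo], profile[hi]), max(profile[lo], profile[hi])

t = np.linspace(-4, 4, 81)
profile = 20 + 180 / (1 + np.exp(-t))      # smooth step between two materials
fl, fh = lh_values_1d(profile, 40)         # sample in the middle of the boundary
print(fl < 40 and fh > 170)  # True: FL/FH approach the two material values
```

For a boundary sample the search recovers values near 20 and 200, the intensities of the two materials, which is what places the point off-diagonal in the LH histogram.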
In volume rendering, using the LH method to design the transfer function not only reduces the dependence on image segmentation but also incorporates voxel gradient information and boundary gray-level information. Due to the characteristics of medical data and clinical applications, a concentrated distribution of voxels in the histogram is required.
A proper region space Ω is selected, and a voxel x is compared with the voxels in Ω. Let f(x) be the scalar or intensity value of a voxel x. The intensity mean and variance of all voxels within Ω are given by, respectively,

μ = (1/N) Σ f(xi), xi ∈ Ω (3)

σ² = (1/N) Σ (f(xi) − μ)², xi ∈ Ω (4)

where xi represents an adjacent voxel in the region space Ω of voxel x and N is the number of voxels in Ω. The criterion for identifying boundary voxels can then be formulated as in (5):

x ∈ B if |f(x) − μ| > λσ, otherwise x ∈ M (5)

where M is the set of voxels inside the materials, B is the set of voxels on the boundaries, and λ is the region-range parameter. Thus a voxel whose difference from the voxels in Ω falls outside the range λσ is considered a boundary voxel.
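A possible NumPy reading of criterion (5), using box-filtered local statistics; the threshold `lam` below is illustrative (the paper's region range of 1.6 applies to its own formulation, and a smaller value is used here so the toy two-material boundary is flagged):

```python
import numpy as np
from scipy import ndimage

def classify_boundary(volume, size=3, lam=0.5):
    """Flag a voxel as a boundary voxel when its value deviates from
    the local region mean by more than lam * local std (a sketch of
    criterion (5); lam is illustrative, not the paper's setting)."""
    v = volume.astype(np.float64)
    mean = ndimage.uniform_filter(v, size=size)          # region-space mean
    var = ndimage.uniform_filter(v**2, size=size) - mean**2
    std = np.sqrt(np.maximum(var, 0.0))
    return np.abs(v - mean) > lam * std

vol = np.zeros((16, 16, 16))
vol[:, :, 8:] = 100.0            # two homogeneous materials, one flat boundary
mask = classify_boundary(vol)
print(mask[:, :, 7].all(), mask[:, :, 2].any())  # True False
```

Voxels in the two homogeneous materials have zero deviation from their local mean and stay in M, while the two voxel layers adjacent to the material interface land in B.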
In regions where complex boundaries exist, using this single criterion will produce boundary-determination errors. Since some boundaries appear only at a certain depth and disappear beyond it, complex boundaries can be further differentiated according to depth information. A modified 2D histogram is therefore created using the region-space method with depth information: in this paper, the points in the original histogram are further grouped according to their corresponding depth.
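Grouping the LH scatter by depth can be sketched as follows, with synthetic FL/FH/depth values and an illustrative number of depth bins:

```python
import numpy as np

def depth_grouped_histograms(fl, fh, depth, bins=32, depth_bins=4):
    """Split the LH scatter into one 2D histogram per depth range so
    that boundaries occurring only at certain depths separate out."""
    edges = np.linspace(depth.min(), depth.max(), depth_bins + 1)[1:-1]
    group = np.digitize(depth, edges)            # depth-bin index per point
    return [np.histogram2d(fl[group == k], fh[group == k], bins=bins)[0]
            for k in range(depth_bins)]

rng = np.random.default_rng(0)
fl = rng.uniform(0, 255, 1000)
fh = fl + rng.uniform(0, 50, 1000)               # FH >= FL by construction
depth = rng.uniform(0, 160, 1000)                # e.g. enamel ~120, medulla ~80
hists = depth_grouped_histograms(fl, fh, depth, bins=32)
print(len(hists), int(sum(h.sum() for h in hists)))  # 4 1000
```

Two boundaries that overlap in the plain LH histogram but live at different depths (as with the enamel and the medulla in Section 3) end up in different per-depth histograms and can be assigned different optical properties.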
A 2D transfer function can then be specified based on the created LH histogram by selecting relevant areas and by assigning them color and opacity. The corresponding features in the volume data can thus be explored. The details of the proposed method are given in Algorithm 1.

3. Results and Discussion
In this section, two data sets, tooth data and sheep heart data, are used to evaluate the performance of the proposed transfer function. Their sizes are 256×256×161 and 352×352×256, respectively. All experiments are carried out on a computer with an Intel Core i5 2.66 GHz CPU, 4.00 GB RAM, and an NVIDIA GeForce GT 650 graphics card.
Biomedical volume data produced by current noninvasive devices such as CT and MRI scanners are usually accompanied by serious noise, which generates poorly visualized images and blurred objects. The MLP neural network is therefore applied to denoise the volume data. In the experiments, the LabelMe dataset, which contains approximately 150,000 images, is used to train the MLP. Before training, the images are padded by 6 pixels on each side. The noise level σ is set to 10. We use patches of size 39×39 to generate the predicted patches and then adopt a filter of size 9×9 to average the output patches; the effective patch size is thus 47×47. In the experiments, the learning rate in each layer is the basic learning rate divided by the number of input units of that layer; the basic learning rate was set to 0.1. To improve results slightly, we use a sliding-window method with a stride of 3, which weights denoised patches with a Gaussian window instead of using all possible overlapping patches. Figure 2 compares the image denoising results of nonlinear enhancement and the MLP neural network on the tooth data. The Peak Signal-to-Noise Ratio (PSNR) with the nonlinear enhancement filtering is 41.4294, and the Structural Similarity Index (SSIM) is 0.9325. The PSNR with the MLP method is 41.4294, and the SSIM is 0.9325. As one can see, the MLP network produces more visually pleasant results. Compared with the original data, noise in the homogeneous regions is suppressed and the quality of the image is improved.
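The PSNR metric quoted above can be computed as follows (the standard definition, demonstrated on a toy constant-error image rather than the tooth data):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and
    a denoised estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((8, 8), 100.0)
noisy = clean + 5.0              # constant error of 5 gray levels -> MSE = 25
print(round(psnr(clean, noisy), 2))  # 34.15
```

Higher PSNR means lower mean squared error against the clean reference; SSIM complements it by scoring structural rather than pixel-wise similarity.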
In the LH histogram, points around the diagonal represent the interior of materials. Regions containing those points are thus assigned lower opacity in the transfer function to fade unimportant information out. The remaining regions are accumulations of boundary voxels, which contain features of interest. Figure 3 shows the LH histogram and rendering result of the original tooth data and of the MLP-denoised data. Figure 3(a) shows the LH histogram (left) and the corresponding rendering result (right) for the original tooth data set. Figure 3(b) shows the LH histogram (left) based on the denoised data. In Figure 3(b) we can see a more compact separation of points in the LH histogram, owing to the suppression of noise in the homogeneous regions and the enhancement of the boundaries between materials. Since the noise is suppressed effectively, boundaries between different materials can be visualized more clearly. As shown in Figure 3(b), the dentine-enamel boundary (yellow) is explored exactly and the noise around the root of the dentine is considerably removed.
Figure 4 shows the LH histogram and rendering result on the anisotropic-diffusion-enhanced tooth volume data using the conventional and the regional-criteria-based methods. The number of iterations of the nonlinear filtering serves as the process-ordering parameter. Figure 4(a) presents the visualization result of the enhanced data using the conventional method; the gradient magnitude threshold for investigating the FL and FH intensity profiles is set to 10. Figure 4(b) shows the LH histogram and corresponding rendering image produced by the regional-criteria-based method; here the region range is set to 1.6. As shown in Figure 4(b), since the noise is suppressed and the path tracing for the FL and FH values starts from the regional criteria, the distribution of the separated groups of points corresponding to different features is more concentrated in the LH histogram, which ensures a more accurate identification of boundary voxels. In both experimental results the noise around the root of the dentine is considerably removed, the blurred-boundary phenomenon is eliminated, and the various boundaries of the tooth, i.e., the enamel-air (white), dentine-enamel (yellow), pulp-dentine (red), and dentine-air (pink) boundaries, are revealed clearly in the final image.
Figure 5 shows the rendering result after introducing depth information into the region space with the MLP-augmented data. Classifying boundary voxels through the conventional LH histogram results in confused boundary exploration: in Figure 4, there is a visible discontinuity in the pulp-dentine boundary, caused by falsely classifying boundary voxels of the dentine tissue. Because each boundary appears at a specific depth (for example, the tooth enamel lies in the region near the viewer, while the medulla lies at a depth farther away), we add depth information to the histogram construction and can thereby obtain distinct boundaries. In our experiment the depth of the enamel is about 120 and the depth of the medulla is about 80. As shown in the result image in Figure 5(a), since more voxels are properly classified as boundary voxels by the transfer function designed on the LH histogram with depth information in region space, the discontinuity of the pulp-dentine boundary (red) in Figure 4 is corrected, and the exact pulp-dentine boundary is revealed in the rendered image. The rendering time of the proposed method is 1.1 s, which supports real-time interactive visualization.
Figure 6 shows the denoising result of the sheep heart slice data and the rendering result via the proposed region-space-guided transfer function. Figure 6(a) shows the original slices; Figure 6(b) presents the two corresponding denoised results. It is evident that the preprocessing effectively decreases noise in tissues of the sheep heart such as muscle and fat while retaining boundary details. Fine structural features of interest in the sheep heart are thus clearly visualized through the region-space-based transfer function with the augmented data: in Figure 6(c), the fat of the sheep heart is colored yellow, the muscle red, and the profile of the sheep heart white. From the result, the structures of the sheep heart are explored exactly, and the shape, spatial position, and relationships of the tissues can be observed without ambiguity. The rendering time for the sheep heart data set is 2.6 s.
4. Conclusion
The transfer function plays a crucial role in volume rendering for directly exploring detailed information hidden in the data and for enhancing important boundaries. In this work we first apply an MLP neural network to the volume data to denoise it while preserving boundaries; this considerably improves the quality of volume data acquired by imaging devices. We then improve the LH method by incorporating regional depth information to achieve semiautomatic transfer function generation, which avoids the influence of noise and concentrates the voxel distribution. In the resulting LH histogram, the voxel distribution along the diagonal line is more concentrated, and the boundaries of important objects are effectively emphasized. The features of interest in the data can thus be found exactly by mapping the scalar values of boundary voxels, which correspond to points in the LH histogram, to appropriate opacity and color.
Data Availability
The two datasets are both open data which are available at http://visual.nlm.nih.gov/.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The work was supported by the National Natural Science Foundation of China (NSFC) under Grant no. 61502275 and the China Postdoctoral Science Foundation (no. 2017M622210). This work was also supported in part by the Natural Science Foundation of Shandong under Grant no. ZR2017MF051, the National Natural Science Foundation of China (NSFC) under Grant no. 61501450, and the MOE (Ministry of Education in China) Project of Humanities and Social Sciences under Grant no. 16YJC880057.
References
 Q. Zhang, R. Eagleson, and T. M. Peters, “GPU-based visualization and synchronization of 4D cardiac MR and ultrasound images,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 5, pp. 878–890, 2012.
 H. Pfister, B. Lorensen, C. Bajaj et al., “The transfer function bake-off,” IEEE Computer Graphics and Applications, vol. 21, no. 3, pp. 16–22, 2001.
 Q. Zhang, R. Eagleson, and T. M. Peters, “Volume visualization: a technical overview with a focus on medical applications,” Journal of Digital Imaging, vol. 24, no. 4, pp. 640–664, 2011.
 J. Kniss, G. Kindlmann, and C. Hansen, “Multidimensional transfer functions for interactive volume rendering,” IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 3, pp. 270–285, 2002.
 Y.-T. Ching and C.-L. Chang, “A volume rendering technique to generate a very large wide-angle endoscopic view,” Journal of Medical and Biological Engineering, vol. 22, no. 2, pp. 109–112, 2002.
 L. Zhang, C. Gai, K. Wang, W. Lu, and W. Zuo, “GPU-based high performance wave propagation simulation of ischemia in anatomically detailed ventricle,” in Proceedings of the Computing in Cardiology Conference (CinC '11), pp. 469–472, Hangzhou, China, September 2011.
 M. S. Hsieh, F. P. Lee, and M. D. Tsai, “A virtual reality ear ossicle surgery simulator using three-dimensional computer tomography,” Journal of Medical and Biological Engineering, vol. 30, no. 1, pp. 57–63, 2010.
 J. Holub and E. Winer, “Enabling real-time volume rendering of functional magnetic resonance imaging on an iOS device,” Journal of Digital Imaging, vol. 30, no. 6, pp. 738–750, 2017.
 P. Thunberg and A. Kähäri, “Visualization of through-plane blood flow measurements obtained from phase-contrast MRI,” Journal of Digital Imaging, vol. 24, no. 3, pp. 470–477, 2011.
 L. Zhang, K. Wang, F. Yang et al., “A visualization system for interactive exploration of the cardiac anatomy,” Journal of Medical Systems, vol. 40, no. 6, 2016.
 L. Zhang, K. Wang, H. Zhang, W. Zuo, X. Liang, and J. Shi, “Illustrative cardiac visualization via perception-based lighting enhancement,” Journal of Medical Imaging and Health Informatics, vol. 4, no. 2, pp. 312–316, 2014.
 F. Yang, W. G. Lu, L. Zhang, W. M. Zuo, K. Q. Wang, and H. G. Zhang, “Fusion visualization for cardiac anatomical and ischemic models with depth weighted optic radiation function,” in Proceedings of the Computing in Cardiology Conference (CinC '15), pp. 937–940, IEEE, Nice, France, September 2015.
 D. S. Ebert, D. G. Heath, B. S. Kuszyk et al., “Evaluating the potential and problems of three-dimensional computed tomography measurements of arterial stenosis,” Journal of Digital Imaging, vol. 11, no. 3, pp. 151–157, 1998.
 G. Gerig, O. Kubler, R. Kikinis, and F. A. Jolesz, “Nonlinear anisotropic filtering of MRI data,” IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221–232, 1992.
 F.-Y. Tzeng and K.-L. Ma, “A cluster-space visual interface for arbitrary dimensional classification of volume data,” in Proceedings of the 6th Joint Eurographics–IEEE TCVG Symposium on Visualization, pp. 17–24, Konstanz, Germany, 2004.
 P. Šereda, A. Vilanova, and F. A. Gerritsen, “Automating transfer function design for volume rendering using hierarchical clustering of material boundaries,” in Proceedings of the 8th Joint Eurographics–IEEE TCVG Symposium on Visualization, pp. 243–250, Lisbon, Portugal, 2006.
 F.-Y. Tzeng, E. B. Lum, and K.-L. Ma, “A novel interface for higher-dimensional classification of volume data,” in Proceedings of the IEEE Visualization Conference (VIS '03), pp. 505–512, Seattle, WA, USA, 2003.
 F.-Y. Tzeng, E. B. Lum, and K.-L. Ma, “An intelligent system approach to higher-dimensional classification of volume data,” IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 3, pp. 273–283, 2005.
 T. He, L. Hong, A. Kaufman et al., “Generation of transfer functions with stochastic search techniques,” in Proceedings of IEEE Visualization '96, pp. 227–234, San Francisco, CA, USA, 1996.
 J. Marks, B. Mirtich, B. Andalman et al., “Design Galleries: a general approach to setting parameters for computer graphics and animation,” in Proceedings of SIGGRAPH '97, pp. 389–400, August 1997.
 M. Levoy, “Display of surfaces from volume data,” IEEE Computer Graphics and Applications, vol. 8, no. 3, pp. 29–37, 1988.
 G. Kindlmann and J. W. Durkin, “Semi-automatic generation of transfer functions for direct volume rendering,” in Proceedings of the 1998 IEEE Symposium on Volume Visualization (VVS 1998), pp. 79–86, USA, October 1998.
 J. Hladůvka, A. König, and E. Gröller, “Curvature-based transfer functions for direct volume rendering,” in Proceedings of the Spring Conference on Computer Graphics, vol. 16, pp. 58–65, 2000.
 G. Kindlmann, R. Whitaker, T. Tasdizen, and T. Möller, “Curvature-based transfer functions for direct volume rendering: methods and applications,” in Proceedings of IEEE Visualization 2003, pp. 513–520, USA, October 2003.
 S. Roettger, M. Bauer, and M. Stamminger, “Spatialized transfer functions,” in Proceedings of the Eurographics/IEEE VGTC Symposium on Visualization, pp. 271–278, 2005.
 R. Huang, K.-L. Ma, P. McCormick, and W. Ward, “Visualizing industrial CT volume data for nondestructive testing applications,” in Proceedings of IEEE Visualization 2003, pp. 547–554, Seattle, WA, USA, 2003.
 I. Fujishiro, T. Azuma, and Y. Takeshima, “Automating transfer function design for comprehensible volume rendering based on 3D field topology analysis,” in Proceedings of IEEE Visualization '99, pp. 467–470, October 1999.
 S. Takahashi, Y. Takeshima, and I. Fujishiro, “Topological volume skeletonization and its application to transfer function design,” Graphical Models, vol. 66, no. 1, pp. 24–49, 2004.
 C. D. Correa and K.-L. Ma, “Size-based transfer functions: a new volume exploration technique,” IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1380–1387, 2008.
 E. B. Lum and K.-L. Ma, “Lighting transfer functions using gradient aligned sampling,” in Proceedings of IEEE Visualization 2004 (VIS 2004), pp. 289–296, USA, October 2004.
 P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
 H. C. Burger, C. J. Schuler, and S. Harmeling, “Image denoising: can plain neural networks compete with BM3D?” in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), pp. 2392–2399, USA, June 2012.
 P. Šereda, A. V. Bartrolí, I. W. O. Serlie, and F. A. Gerritsen, “Visualization of boundaries in volumetric data sets using LH histograms,” IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 2, pp. 208–217, 2006.
Copyright
Copyright © 2018 Fei Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.