Research Article | Open Access

Huadeng Wang, Guang Xu, Xipeng Pan, Zhenbing Liu, Rushi Lan, Xiaonan Luo, "A Novel Ray-Casting Algorithm Using Dynamic Adaptive Sampling", Wireless Communications and Mobile Computing, vol. 2020, Article ID 8822624, 12 pages, 2020. https://doi.org/10.1155/2020/8822624

A Novel Ray-Casting Algorithm Using Dynamic Adaptive Sampling

Academic Editor: Yin Zhang
Received: 23 May 2020
Revised: 04 Aug 2020
Accepted: 14 Sep 2020
Published: 14 Oct 2020

Abstract

The ray-casting algorithm is an important volume rendering algorithm that is widely used in medical image processing. To address the shortcomings of current ray-casting algorithms in 3D reconstruction of medical images, such as slow rendering speed and low sampling efficiency, an improved algorithm based on dynamic adaptive sampling is proposed. Using the central-difference gradient method, the sampling interval is obtained dynamically for each sampling point. Meanwhile, a new rendering operator is proposed based on the changes in color value and opacity before and after the ray enters a volume element, together with the blocked luminosity. Compared with state-of-the-art algorithms, experimental results show that the proposed method renders faster while maintaining the quality of the generated image.

1. Introduction

The ray-casting algorithm was proposed by Levoy in 1988. As an important volume rendering algorithm, it is widely used in medical imaging, fluid and quantum mechanics, geographical exploration, and finite element analysis [1, 2]. The basic principle is as follows: a ray is cast from each pixel of the imaging plane along the viewing direction through the volume data; the ray is resampled at equal intervals; the color value and opacity of each resampling point are obtained; and the colors and opacities of the resampling points along the ray are then composited from front to back or from back to front.
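To make this per-ray procedure concrete, the following is a minimal C++ sketch of the loop just described, with equidistant resampling and front-to-back compositing. The helpers sampleVolume and classify are illustrative stand-ins (not code from the paper) for the resampling step and the transfer function.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct RGBA { float r, g, b, a; };

// Stub data field: a soft sphere, standing in for the medical volume.
float sampleVolume(const Vec3& p) {
    return std::max(0.0f, 1.0f - std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z));
}

// Stub transfer function: grayscale color with a linear opacity ramp.
RGBA classify(float v) { return {v, v, v, 0.1f * v}; }

// One ray: equidistant resampling plus front-to-back compositing.
RGBA castRay(Vec3 o, Vec3 d, float tNear, float tFar, float dt) {
    RGBA acc{0, 0, 0, 0};
    for (float t = tNear; t <= tFar; t += dt) {
        Vec3 p{o.x + t*d.x, o.y + t*d.y, o.z + t*d.z};
        RGBA s = classify(sampleVolume(p));          // color + opacity of sample
        float w = (1.0f - acc.a) * s.a;              // remaining transparency weight
        acc.r += w*s.r; acc.g += w*s.g; acc.b += w*s.b;
        acc.a += w;
    }
    return acc;
}
```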

Although the traditional ray-casting algorithm produces high-quality images, it has two disadvantages. First, the volume data are large and the interpolation process involves many addition and multiplication operations, which makes the whole rendering process very slow. Second, after the sampling points are determined, the use of trilinear interpolation affects the real-time performance of rendering.

Some researchers use nearest-neighbor interpolation instead of trilinear interpolation to remove unnecessary sampling points. Others reduce the number of intersections between the rays and the data field. For example, Siddon [3] replaced the intersection of a ray with each voxel by the intersection of the ray with each plane. Ross et al. [4] employed the Bresenham algorithm to determine the intersected voxels. The octree organization of the data domain proposed by Levoy [5] can easily skip empty voxels and reduces the time complexity of the intersection computation to O(n log n). Jian [6] selected the cast rays according to the correlation of critical voxels, but this degrades image sharpness.

There are two ways to improve the ray-casting algorithm. One is to simplify the sampling computation, for example by using trilinear interpolation instead of complex cubic convolution interpolation [7–10]. The other is to reduce drawing time by reducing the number of sampling points on each ray. For empty sampling points that do not contribute to image rendering, most methods simply eliminate them without considering their correlation with the surrounding voxels. During user interaction, the image then becomes blurred and the user experience suffers.

Most adaptive sampling methods in the literature are in fact equal-interval sampling: the isosurface threshold is set first, and equidistant sampling proceeds from the viewpoint along the viewing direction until the sampled value exceeds the threshold; finally, linear interpolation between the last two sampling points gives the coordinates of the intersection between the line of sight and the isosurface. The disadvantage of this approach is that there are too many sampling points, a large number of which are empty voxels; this wastes precious time and hinders user interaction. Another approach uses a fixed given sampling frequency, but the resulting rendering quality is very poor. Some scholars improve on this by adjusting the sampling frequency according to the distance ratio between the viewpoint and the tangent point of the surface normal, and by simplifying the image composition operator. Yet another approach resamples at a lower frequency first and, if the data values of two adjacent resampling points differ greatly, increases the sampling frequency in such high-frequency regions. Where the voxel value changes quickly, the sampling interval should be small to avoid missing voxel information; where it changes slowly, the interval can be increased appropriately so that a large number of empty voxels are skipped. However, if the sampling interval is too large, the rendering quality will be low, producing a large "void" effect [11, 12], that is, artifacts.

Current ray-casting algorithms for medical image rendering have two shortcomings: slow rendering speed, and low efficiency and high time complexity in the sampling process. To overcome these shortcomings, this paper proposes an algorithm based on the central-difference gradient method, by which the sampling interval is determined dynamically according to the position of each sampling point, improving sampling efficiency. In addition, based on the changes in color value and opacity before and after the ray enters a voxel, we propose a new rendering operator that improves the final image rendering speed.

This paper solves the problem of finding the most suitable sampling interval dynamically. Through the total-increment formula for the voxel-value difference between two adjacent sampling points, combined with the gradient computed by the central difference method and a preset voxel-value threshold, the sampling interval can be calculated dynamically according to the position of each sampling point. Thus, the sampling interval can be updated dynamically while the required sampling density is maintained. To address the reduced precision in locating the intersection of a ray with the isosurface when the sampling interval is large, this paper proposes a simple, easy-to-implement recursive estimation method for the intersection between the sampling points and the isosurface. The proposed algorithm is a novel ray-casting algorithm using dynamic adaptive sampling, called the ray-casting DAS algorithm for short. After the sampling points are determined, instead of the usual nearest-neighbor or trilinear interpolation that synthesizes the image from the opacities and color values of the eight data points surrounding a sampling point, this paper adopts a light-blocking method, which uses the changes of opacity, color value, and luminosity before and after the ray enters a volume element to improve the composition operator. The basic structure and flow of the algorithm are shown in Figure 1. Experimental results show that the improved method can effectively increase image rendering speed and avoid artifacts. The main innovations of this paper are the dynamic adaptive sampling and the improved rendering operator, which improve the efficiency and rendering time of the algorithm; a further innovation is the introduction of a distance function to reduce image noise and improve rendering quality.

The remainder of this paper is organized as follows. Section 2 summarizes recent research on ray-casting algorithms and compares their advantages and disadvantages. Section 3 introduces the ray-casting algorithm based on dynamic adaptive sampling. Section 4 compares ray-casting DAS with other ray-casting algorithms on several open-source datasets. Section 5 summarizes ray-casting DAS and discusses future improvements.

2. Related Work

Wu et al. [13] proposed a GPU ray-casting method for visualizing 3D pipelines in a virtual globe. In their method, the pipeline data is first divided into several data blocks. The pipeline centerline in each block is then segmented and coded, and a thicker pipeline envelope is constructed using geometry shaders. Finally, elaborate 3D pipes are rendered using pixel shaders. Compared with the traditional polygon-based pipeline data partition method, this method improves the rendering frame rate at the same pixel-level display precision, and the visualization of 3D pipe thickness is realized without affecting rendering efficiency.

Luo et al. [14] proposed a rendering algorithm based on the height field [15] and an improved isosurface intersection search to address the slow intersection of rays with volume data. However, because the volume data is imaged many times due to reflections from the object surface, many volume rendering operations are required, which hurts the performance of the algorithm.

Sans et al. [16] proposed combining CPU-based ray casting with the GPU and reduced the time complexity of rendering through dynamic gradient interpolation of voxels. Their method realizes single-pass GPU rendering and meets users' real-time interaction requirements.

Binyahib et al. [17] proposed a parallel volume rendering algorithm that combines classic object-order and image-order techniques. The algorithm runs on unstructured (and structured) grids, so it can handle rays crossing block boundaries in complex ways, and it deals effectively with situations prone to load imbalance [18]. At the largest scale, it has been run on 8192 processors and on datasets of more than 1 billion cells.

Tao et al. [19] proposed an algorithm combining ray-casting-based fitness estimation with a global optimization method based on improved adaptive differential evolution. This method eliminates the fine registration step of the well-known iterative closest point (ICP) algorithm [15, 20, 21] and is the first direct global registration algorithm. Moreover, the fitness computation of the global optimization method is accelerated, and the search space is used effectively to find the optimal transformation. The algorithm has been implemented in parallel on multicore processors.

3. Our Proposed Approach

3.1. Data Preprocessing Based on Fuzzy Enhancement

In the ray-casting algorithm, the mapping from three dimensions to two inevitably blurs image edges, regions, and textures because of information loss. Therefore, the algorithm proposed in this paper first preprocesses the data, uses an enhancement operator [22] to calculate the blur rate of the enhanced image, and then selects the best threshold.

The purpose of introducing a fuzzy enhancement operator [23] is to enhance the contrast of image regions through differentiated enhancement: the dark target area is attenuated and the bright background area is enhanced. The enhanced image not only retains the information of the original image but also makes the layers of each region clearer, further reducing image fuzziness. The processed data thus benefits the extraction and selection of sampling points in the subsequent volume rendering algorithm.
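The paper does not spell out the enhancement operator of [22, 23]. As an illustration only, the following sketch uses a common fuzzy contrast-intensification operator (Pal-King style), which attenuates the dark side and boosts the bright side of the membership range, matching the behavior described above; it is an assumed stand-in, not the authors' operator.

```cpp
#include <cstdint>
#include <vector>

// Illustrative fuzzy contrast intensification (Pal-King style), assumed here:
// grays are mapped to memberships in [0,1], dark memberships are attenuated
// quadratically and bright ones enhanced, then mapped back to [0,255].
void fuzzyEnhance(std::vector<uint8_t>& img) {
    for (auto& g : img) {
        double mu = g / 255.0;                                    // membership
        double t  = (mu <= 0.5) ? 2.0 * mu * mu                   // attenuate dark
                                : 1.0 - 2.0 * (1.0 - mu) * (1.0 - mu); // boost bright
        g = static_cast<uint8_t>(t * 255.0 + 0.5);
    }
}
```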

3.2. Intersection Processes of Light and Plane

First, the parametric equation of the ray emitted by a pixel in the imaging plane can be expressed as

p(t) = p_0 + t\,d, \qquad p(t) = (x, y, z), \qquad (1)

where d represents the direction vector of the line in object space, p_0 represents a point on the line, x, y, and z denote the three-dimensional coordinates of any point on the line, and t is the parameter of the linear parametric equation, indicating the signed distance along the direction of the line.

Let the size of the regular data field be N_x \times N_y \times N_z, with logical coordinates (i, j, k), in which 0 \le i < N_x, 0 \le j < N_y, and 0 \le k < N_z. Here N_x, N_y, and N_z denote the number of planes in the X, Y, and Z plane families, respectively, and (i, j, k) indexes a specific plane within each family. The data distribution of the volume data field is represented by these plane families, defined as follows:

X = i, \qquad Y = j, \qquad Z = k. \qquad (2)

Consider the X plane family, whose normal vector is (1, 0, 0), and let d_x \ne 0. Then the parameter of the intersection of the ray and the plane X = i can be calculated as follows:

t_i^x = \frac{i - x_0}{d_x}. \qquad (3)

Further, the intersection of the ray and the plane X = i can be expressed as:

p(t_i^x) = p_0 + t_i^x\,d. \qquad (4)

Eq. (4) implies that the parameters t_i^x and t_{i+1}^x at which the ray intersects two adjacent planes X = i and X = i + 1 satisfy Eq. (5):

t_{i+1}^x = t_i^x + \frac{1}{d_x}. \qquad (5)

The intersection points of the straight line and the X plane family can then be obtained by the iteration formula:

p(t_{i+1}^x) = p(t_i^x) + \frac{d}{d_x}. \qquad (6)

Similarly, the recurrence formulas for the intersections of the line with the Y and Z plane families can be obtained.
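Under the reconstruction above, successive intersection parameters for one plane family differ by a constant (Eq. (5)), so they can be generated incrementally instead of being recomputed from scratch. A small illustrative sketch for the X family, assuming d_x is nonzero; the Y and Z families follow the same pattern:

```cpp
#include <vector>

// Generate the ray parameters of all intersections with the plane family
// x = 0, 1, ..., Nx-1 by the recurrence t_{i+1} = t_i + 1/dx (Eq. (5)).
std::vector<float> planeIntersections(float x0, float dx, int Nx) {
    std::vector<float> ts;
    float t    = (0.0f - x0) / dx;   // intersection with plane x = 0 (Eq. (3))
    float step = 1.0f / dx;          // constant increment between adjacent planes
    for (int i = 0; i < Nx; ++i) {
        ts.push_back(t);
        t += step;                   // t_{i+1} = t_i + 1/dx
    }
    return ts;
}
```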

3.3. Dynamic Adaptive Sampling

The intersection points of the line and the plane families obtained by the above operations are not yet the sampling points we need, but the sampling points on the line can be derived from these intersections. The usual method is equidistant sampling with a fixed interval. Here, we consider instead how to adjust the sampling interval dynamically and adaptively.

Let f(x, y, z) be the three-dimensional volume data field. According to the total differential formula for a multivariate function, the difference between the voxel values of two adjacent sampling points can be approximated as follows:

\Delta f = \nabla f(p) \cdot r\,\Delta t, \qquad (7)

where \nabla f(p) is the gradient of the 3D data field at the sampling point p, \Delta t is the distance between the sampling points, and r is the unit vector of the ray direction.

Since f(p) has already been estimated while the voxel value of the next sampling point is unknown, the latter may be assumed to equal the set voxel value threshold T. According to Eq. (7), the sampling interval can then be approximately expressed as:

\Delta t = \frac{T - f(p)}{\nabla f(p) \cdot r}. \qquad (8)

It can be seen from Eq. (8) that if the projection of the gradient onto the ray direction at a point is small and the difference between T and f(p) is large, the interval \Delta t will be large; otherwise, \Delta t will be small. To prevent the sampling interval from becoming too large or too small, upper and lower bounds can be imposed on the interval of Eq. (8):

\Delta t \leftarrow \max\big(d_{\min},\, \min(d_{\max},\, \Delta t)\big). \qquad (9)

d_{\max} and d_{\min} are the upper and lower bounds of the sampling interval, respectively.
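The following hedged sketch computes the clamped adaptive step of Eqs. (8) and (9); the names adaptiveStep, dMin, and dMax are illustrative, and the absolute value and the flat-region fallback are assumptions for numerical safety.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Adaptive sampling interval: the voxel-value gap to the threshold T divided
// by the gradient's projection onto the ray direction (Eq. (8)), clamped to
// [dMin, dMax] (Eq. (9)).
float adaptiveStep(float voxelValue, float T,
                   const Vec3& grad, const Vec3& rayDir,
                   float dMin, float dMax) {
    float proj = grad.x*rayDir.x + grad.y*rayDir.y + grad.z*rayDir.z;
    if (std::fabs(proj) < 1e-6f) return dMax;      // flat region: take big steps
    float dt = (T - voxelValue) / proj;            // Eq. (8)
    return std::clamp(std::fabs(dt), dMin, dMax);  // Eq. (9): upper/lower bounds
}
```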

To simplify the calculation, the central difference method is used to compute the gradient:

\nabla f = \left(\frac{f(x+1, y, z) - f(x-1, y, z)}{2},\ \frac{f(x, y+1, z) - f(x, y-1, z)}{2},\ \frac{f(x, y, z+1) - f(x, y, z-1)}{2}\right). \qquad (10)

Because the gradient calculation would slow down rendering if performed in the sampling stage, we move it to the data preprocessing stage and store the computed gradient values in a linear table. When calculating the sampling step length, we look up the gradient value closest to the sampling point in the linear table and use it as the gradient of that sampling point. Assuming the number of sampling points is n, the time complexity of this step is O(n).
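As a sketch of this preprocessing step, the central-difference gradients of Eq. (10) can be computed once into a flat table indexed by voxel, so each later lookup during sampling is O(1); the layout and names below are illustrative.

```cpp
#include <vector>

struct Grad { float gx, gy, gz; };

// Precompute central-difference gradients (Eq. (10)) for an Nx*Ny*Nz volume f
// stored in x-fastest order. Boundary voxels are left at zero for simplicity.
std::vector<Grad> precomputeGradients(const std::vector<float>& f,
                                      int Nx, int Ny, int Nz) {
    auto at = [&](int x, int y, int z) { return f[(z*Ny + y)*Nx + x]; };
    std::vector<Grad> table(f.size(), Grad{0.0f, 0.0f, 0.0f});
    for (int z = 1; z < Nz - 1; ++z)
        for (int y = 1; y < Ny - 1; ++y)
            for (int x = 1; x < Nx - 1; ++x)
                table[(z*Ny + y)*Nx + x] = {
                    (at(x+1, y, z) - at(x-1, y, z)) * 0.5f,  // df/dx
                    (at(x, y+1, z) - at(x, y-1, z)) * 0.5f,  // df/dy
                    (at(x, y, z+1) - at(x, y, z-1)) * 0.5f   // df/dz
                };
    return table;
}
```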

After determining the sampling interval, we need to find the intersection of the isosurface with the segment between two adjacent sampling points and then synthesize the image from the color value and opacity at the intersection. To improve imaging quality, this paper presents a recursive method for intersection estimation.

Suppose we need to find the intersection of the isosurface with the segment between two sampling points P_1 and P_2. First, we take the midpoint P_mid of P_1 and P_2. If the voxel value at P_mid is greater than the set threshold T, then P_mid is taken as the new P_2; if it is less than T, then P_mid is taken as the new P_1. We continue taking the midpoint of P_1 and P_2 until the distance between them is very small (less than a given value). Finally, P_1 and P_2 are linearly interpolated to obtain the coordinates of P_0, which is the intersection point.

The process of finding the intersection coordinates is shown in Algorithm 1. With this algorithm, we can quickly compute the intersection point between two sampling points. Moreover, we cap the number of iterations, which improves the convergence speed of the algorithm and indirectly improves the rendering speed of the image.

Input: P_1, P_2, T, number_Current, Total.
Output: P_0
1: While the distance between P_1 and P_2 is not less than the given tolerance do
2:   If number_Current > Total then break
3:   Take the coordinates of the geometric midpoint P_mid of P_1 and P_2 and obtain its sampled value g (T is the set threshold; number_Current is the current iteration count, required to be less than Total)
4:   If g > T then
5:     P_2 ← P_mid
6:   Else
7:     P_1 ← P_mid
8:   number_Current ← number_Current + 1
9: Obtain the coordinates of the intersection point P_0 by linear interpolation of P_1 and P_2
10: Return P_0
Algorithm 1. Solution process of the intersection coordinates.
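For illustration, Algorithm 1 can be implemented compactly as follows. sampleVolume is a stub standing in for the resampling routine assumed by the text, and eps plays the role of the "given value" for closeness; both are assumptions for this sketch.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Stub resampler standing in for the volume lookup assumed by the text.
float sampleVolume(const Vec3& p) { return p.x; }

// Bisection between two sampling points straddling the isosurface value T:
// P1 is assumed outside (f < T) and P2 inside (f > T). Iterations are capped
// at `total`, then the final linear interpolation yields P0.
Vec3 refineIntersection(Vec3 p1, Vec3 p2, float T, int total, float eps) {
    for (int iter = 0; iter < total; ++iter) {
        float dx = p2.x - p1.x, dy = p2.y - p1.y, dz = p2.z - p1.z;
        if (std::sqrt(dx*dx + dy*dy + dz*dz) < eps) break;   // P1 close to P2
        Vec3 mid{(p1.x+p2.x)*0.5f, (p1.y+p2.y)*0.5f, (p1.z+p2.z)*0.5f};
        if (sampleVolume(mid) > T) p2 = mid;   // crossing lies in [P1, mid]
        else                       p1 = mid;   // crossing lies in [mid, P2]
    }
    // Final linear interpolation of P1 and P2 gives the intersection P0.
    float f1 = sampleVolume(p1), f2 = sampleVolume(p2);
    float s  = (f2 != f1) ? (T - f1) / (f2 - f1) : 0.5f;
    return {p1.x + s*(p2.x-p1.x), p1.y + s*(p2.y-p1.y), p1.z + s*(p2.z-p1.z)};
}
```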
3.4. Improving Rendering Operator

After the sampling points and intersection points have been selected in the above steps, the traditional method performs interpolation over the eight data points closest to the intersection point to synthesize the image [24]. For example, Sans adopted a ray-casting algorithm based on trilinear interpolation for fiber surfaces, and trilinear interpolation must access eight voxels for every sample.

The formula of trilinear interpolation is as follows:

f(P) = f_{000}(1-u)(1-v)(1-w) + f_{100}\,u(1-v)(1-w) + f_{010}(1-u)v(1-w) + f_{001}(1-u)(1-v)w + f_{110}\,uv(1-w) + f_{101}\,u(1-v)w + f_{011}(1-u)vw + f_{111}\,uvw. \qquad (11)

Here, f(P) represents the voxel value at the spatial coordinate point P; f_{000}, ..., f_{111} represent the voxel values of the eight data points adjacent to P; and (u, v, w) are the fractional offsets of P within the cell.
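A minimal sketch of Eq. (11), interpolating the eight corner values c[i][j][k] with fractional offsets (u, v, w); the index convention (i along x, j along y, k along z) is an assumption of this sketch.

```cpp
// Trilinear interpolation (Eq. (11)): interpolate along x, then y, then z,
// i.e., seven linear interpolations over the eight corner voxel values.
float trilinear(const float c[2][2][2], float u, float v, float w) {
    float c00 = c[0][0][0] * (1 - u) + c[1][0][0] * u;
    float c01 = c[0][0][1] * (1 - u) + c[1][0][1] * u;
    float c10 = c[0][1][0] * (1 - u) + c[1][1][0] * u;
    float c11 = c[0][1][1] * (1 - u) + c[1][1][1] * u;
    float c0  = c00 * (1 - v) + c10 * v;
    float c1  = c01 * (1 - v) + c11 * v;
    return c0 * (1 - w) + c1 * w;
}
```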

In this paper, we consider a light-blocking method and construct a new rendering operator from the changes of color and opacity as the ray enters and leaves each voxel, together with the corresponding opacity. The experimental results show that it effectively reduces time complexity and improves rendering speed without degrading imaging quality.

Let the opacity and color value of the ith volume element be \alpha_i and C_i, respectively; the accumulated opacity and color value before the ray enters the ith volume element be \alpha_{in} and C_{in}, respectively; and the accumulated opacity and color value after the ray leaves it be \alpha_{out} and C_{out}, respectively. The composite operator of the point image is

C_{out} = C_{in} + (1 - \alpha_{in})\,\alpha_i C_i. \qquad (12)

The relationship between the accumulated opacities before and after the current voxel is

\alpha_{out} = \alpha_{in} + (1 - \alpha_{in})\,\alpha_i. \qquad (13)

At the same time, we note that not all rays finally reach the imaging screen. The accumulated opacity is smallest at the initial point and increases with the number of data points the ray passes through; once the ray has passed the nth data point, the accumulated opacity reaches its maximum value of 1, so the image synthesis operator can be simplified as:

C = \sum_{i=1}^{n} C_i\,\alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j). \qquad (14)

From the above formula, when \alpha_{in} = 1, that is, at maximum accumulated opacity, the corresponding term on the right side of the equation equals 0, which means that rays that can no longer reach the screen do not participate in the calculation of the synthesis operator. At the same time, the time complexity of a single color-and-opacity composite is O(1), so the time complexity of the final synthesis operator is O(n); the algorithm therefore effectively improves rendering efficiency. It not only reduces image rendering time but also effectively prevents artifacts and improves rendering quality.
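As an illustrative sketch, the simplified operator of Eqs. (12)-(14) can be written as a single front-to-back accumulation loop with early ray termination; the 0.999 saturation threshold and the Sample record are assumptions standing in for "opacity reaches 1" and the per-sample classified values.

```cpp
struct Sample { float r, g, b, a; };
struct RGBA   { float r, g, b, a; };

// Front-to-back compositing with early termination: each sample contributes
// with weight (1 - alpha_in) * alpha_i (Eq. (12)); the accumulated opacity is
// updated per Eq. (13); the loop stops once opacity saturates, so occluded
// samples never participate (O(1) per sample, O(n) per ray).
RGBA compositeFrontToBack(const Sample* s, int n) {
    RGBA acc{0, 0, 0, 0};
    for (int i = 0; i < n; ++i) {
        float w = (1.0f - acc.a) * s[i].a;           // remaining transparency
        acc.r += w*s[i].r; acc.g += w*s[i].g; acc.b += w*s[i].b;
        acc.a += w;                                  // Eq. (13)
        if (acc.a >= 0.999f) break;                  // early ray termination
    }
    return acc;
}
```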

4. Experimental Results

In this section, several experiments and comparison results are reported to evaluate the performance of the proposed algorithm ray-casting DAS. We compare the ray-casting DAS with other algorithms in several aspects, such as time complexity, image rendering speed, and image quality.

4.1. Time Consumption Comparison

The following compares the test results of several algorithms on the open dataset ISIC2018 [25]. Table 1 compares the drawing times of the different algorithms when rendering the same image of a human leg. VS3D, Dual-FHR, RIFT2, SHSR-UV, and ray-casting DAS in Figure 2 represent the experimental results of Wu's algorithm [13], Luo's algorithm [14], Sans' algorithm [16], Binyahib's algorithm [17], and the ray-casting DAS algorithm, respectively.


Algorithms | Data preprocessing time (s) | Sampling time (s) | Drawing time (s) | Total time (s)
VS3D | 23 | 32 | 33 | 88
Dual-FHR | 17 | 28 | 31 | 76
RIFT2 | 16 | 25 | 31 | 72
SHSR-UV | 17 | 23 | 26 | 66
Ray-casting DAS | 24 | 16 | 21 | 61

According to the experimental results in Table 1, the preprocessing time of our algorithm is about 6 to 7 seconds longer than that of the other algorithms, because we add the gradient calculation to the preprocessing stage, which consumes extra time. However, thanks to dynamic adaptive sampling, sampling points are acquired more efficiently and unnecessary sampling points are eliminated, so the sampling time is greatly reduced compared with the other algorithms, by 7-16 seconds. On the other hand, the improved rendering operator adds early termination to the rendering process, which greatly reduces rendering time, by 5-10 seconds. Overall, the total time is 5-20 seconds shorter than that of the other algorithms, a significant speed improvement.

From the results in Figure 2, we can see that our algorithm draws the details of the leg better, including some texture information; such details are very valuable for medical diagnosis. Compared with the other algorithms, the overall occlusion of the scene is captured coherently and close to the ground truth; with ray-casting DAS the shadow area of the leg is smaller, and more texture details are preserved.

4.2. Rendering Speed and Quality Comparison on CT Dataset

As shown in Figure 3, RIFT2, SHSR-UV, ICVC-GPU, DOS-Ray, IMPA, and ray-casting DAS represent the experimental results of Sans' algorithm [16], Binyahib's algorithm [17], Yuan's algorithm [26], Campagnolo's algorithm [27], Zhang's algorithm [28], and the ray-casting DAS algorithm, respectively. Comparing the results of the different algorithms on the ISBI2018 dataset [29], the ray-casting DAS algorithm reduces and avoids artifacts in CT image rendering and improves image visibility.

Table 2 compares the rendering time of the ray-casting DAS algorithm with that of the other five algorithms. The experimental results show that ray-casting DAS achieves better rendering time and a better image signal-to-noise ratio than the previous algorithms.


Algorithms | Iterations | Drawing time (s) | Signal-to-noise ratio (dB)
RIFT2 algorithm [16] | 46 | 24 | 38
SHSR-UV algorithm [17] | 53 | 29 | 42
ICVC-GPU algorithm [26] | 34 | 21 | 33
DOS-Ray algorithm [27] | 49 | 25 | 34
IMPA algorithm [28] | 45 | 22 | 36
Ray-casting DAS algorithm | 32 | 15 | 60

Similarly, the experimental results in Figure 3 show that the rendering algorithm in this paper effectively solves the problem of artifacts in CT image rendering. In (a) to (e), artifacts are clearly visible in the CT images and obviously interfere with image analysis, while image (f), drawn by the ray-casting DAS algorithm, effectively eliminates them. It can be clearly seen that in our result every part of the image is free of artifact interference, and the image is clear.

4.3. Influence of Different Distance Functions on Image Rendering

The distance function expresses the relationship between distance and intensity: the distance determines the intensity value, and through the distance function, distance and depth are reflected in the final rendering. This paper uses an interactive scheme: the interaction first specifies a distance value, this value is used in the distance function, and the final transparency is then determined by combining the distance function into the rendering process. The corresponding distance function is a piecewise function:

f(x) = \begin{cases} I_{high}, & x < d \\ I_{low}, & x \ge d \end{cases} \qquad (15)

Here d is the distance value obtained through interaction: when the distance is less than d, the function gives a high intensity value, and when it is greater than d, a low intensity value.
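A minimal sketch of the piecewise function of Eq. (15); the intensity constants iHigh and iLow are illustrative placeholders, since the paper does not give their values.

```cpp
// Piecewise distance function (Eq. (15)): samples nearer than the
// interactively chosen distance d get a high intensity weight, farther
// samples a low one. iHigh and iLow are assumed example values.
float distanceWeight(float dist, float d,
                     float iHigh = 1.0f, float iLow = 0.2f) {
    return (dist < d) ? iHigh : iLow;
}
```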

To improve rendering quality and reduce the interference of noise and other factors, we introduce the distance function before color and opacity composition. The four distance functions compared in this experiment are denoted f1, f2, f3, and f4.

Here, x_i is a three-dimensional column vector representing the spatial coordinates of the ith voxel, and N is the total number of effective voxels.

As shown in Table 3, LCC [31] is short for linear correlation coefficient and SROCC [32] for Spearman's rank-order correlation coefficient; both are evaluation indexes measuring the similarity between the drawn image and the real image. Structural similarity is abbreviated SSIM [33]; its value lies between 0 and 1. Larger LCC, SROCC, and SSIM values indicate higher similarity between the drawn image and the original. The peak signal-to-noise ratio (PSNR) [32] measures the ratio of useful information to noise in an image: for the same image, the larger the PSNR, the smaller the noise and the higher the image quality.


Algorithms | LCC | SROCC | SSIM | MSSIM | PSNR (dB)
Without distance function | 0.69 | 0.71 | 0.792 | 0.813 | 29
Ray-casting DAS with f1 function | 0.72 | 0.76 | 0.814 | 0.839 | 33
Ray-casting DAS with f2 function | 0.76 | 0.79 | 0.857 | 0.874 | 36
Ray-casting DAS with f3 function | 0.75 | 0.78 | 0.849 | 0.867 | 35
Ray-casting DAS with f4 function | 0.86 | 0.85 | 0.896 | 0.918 | 43

In the experiment, four different distance functions are used and their performances compared. The results in Table 3 show that, compared with the method without a distance function, adding a distance function significantly increases the similarity between the drawn image and the real image, while also giving a higher signal-to-noise ratio and relatively less noise. On the other hand, Figure 4 shows that the different distance functions have their own strengths and weaknesses in image rendering. The distance function f4 achieves the best quality: it eliminates noise interference to the greatest extent, and the image it produces is the clearest and contains the least noise.

4.4. Image Rendering Quality Comparison on Other Dataset

We use the open-source dataset VisMale [34] for experiments. The simulation environment is VS2016 and OpenGL. Table 4 compares the six algorithms for human head image rendering, and ICVC-GPU, MVRC2, SigIg+FMI, SHSR-UV, DBRay, and ray-casting DAS in Figure 5 represent the experimental results of Yuan's algorithm [26], Bozorgi's algorithm [32], Mehaboobathunnisa's algorithm [35], Binyahib's algorithm [17], Moore's algorithm [36], and the ray-casting DAS algorithm proposed in this paper, respectively.


Algorithms | Data preprocessing time (s) | Sampling time (s) | Drawing time (s) | Total time consumption (s)
ICVC-GPU | 15 | 25 | 26 | 66
MVRC2 | 14 | 22 | 19 | 56
SigIg+FMI | 13 | 24 | 32 | 69
SHSR-UV | 15 | 24 | 33 | 72
DBRay | 17 | 26 | 23 | 66
Ray-casting DAS | 22 | 19 | 15 | 57

In Figure 6, information entropy, contrast, sharpness, SNR [31, 33], and PSNR [32] are five objective indicators reflecting image quality. Their definitions are as follows:

(1) Information entropy represents the amount of information contained in the image and is proportional to it [33]. The formula is

H = -\sum_{i}\sum_{j} p_{ij} \log_2 p_{ij}, \qquad p_{ij} = \frac{f(i, j)}{N^2},

where i represents the gray value of a pixel, j represents the mean gray value of its neighborhood, f(i, j) is the frequency of the feature pair (i, j), and N is the scale of the image. The formula reflects both the gray value at a pixel position and the gray distribution of its surrounding pixels.

(2) Contrast reflects the influence of the image on the visual effect [32] and is proportional to the clarity of the image:

C = \frac{1}{W H} \sum_{x=1}^{W} \sum_{y=1}^{H} \big(g(x, y) - \bar{g}\big)^2,

where \bar{g} denotes the average gray level of the pixels, W and H denote the width and height of the image, and g(x, y) denotes the gray level of the pixel at (x, y).

(3) Sharpness represents the clarity of the image and is computed as the average gradient magnitude:

S = \frac{1}{W H} \sum_{x}\sum_{y} \sqrt{\left(\frac{\partial g}{\partial x}\right)^2 + \left(\frac{\partial g}{\partial y}\right)^2}.

(4) First, the mean square error (MSE) [37] is defined; it is the mean squared pixel difference between the original image g and the distorted image \hat{g}:

\mathrm{MSE} = \frac{1}{W H} \sum_{x=1}^{W} \sum_{y=1}^{H} \big(g(x, y) - \hat{g}(x, y)\big)^2.

The peak signal-to-noise ratio (PSNR) [32] is the ratio of the maximum signal to the noise intensity. Since digital images are represented by discrete numbers, generally MAX = 255:

\mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}}.

(5) The signal-to-noise ratio (SNR) is defined as follows [31, 33]:

\mathrm{SNR} = 10 \log_{10} \frac{\sum_{x,y} g(x, y)^2}{\sum_{x,y} \big(g(x, y) - \hat{g}(x, y)\big)^2}.

The higher the SNR, the less noise the image contains.
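As a sketch of the MSE/PSNR/SNR definitions above (assuming 8-bit images, so MAX = 255, and that the two images differ somewhere so MSE is nonzero):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// PSNR in dB between a reference image and a rendered image of equal size.
double psnr(const std::vector<double>& ref, const std::vector<double>& img) {
    double mse = 0.0;
    for (std::size_t i = 0; i < ref.size(); ++i) {
        double e = ref[i] - img[i];
        mse += e * e;
    }
    mse /= static_cast<double>(ref.size());          // mean square error
    return 10.0 * std::log10(255.0 * 255.0 / mse);   // PSNR = 10 log10(MAX^2/MSE)
}

// SNR in dB: signal energy over noise energy.
double snr(const std::vector<double>& ref, const std::vector<double>& img) {
    double sig = 0.0, noise = 0.0;
    for (std::size_t i = 0; i < ref.size(); ++i) {
        sig   += ref[i] * ref[i];
        noise += (ref[i] - img[i]) * (ref[i] - img[i]);
    }
    return 10.0 * std::log10(sig / noise);
}
```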

From the experimental results in Figure 5, we can see that ray-casting DAS is slightly worse than algorithm E in image rendering quality: the dynamic adaptive sampling used in this paper inevitably skips some small voxels containing useful information, owing to the threshold setting and related issues, so slightly less effective information appears in the image. However, it is close to or much better than algorithms A, B, C, and D in rendering quality. Because the distance function is introduced into the composition of the rendering operator and the most appropriate distance function is selected, the image drawn by this algorithm has a high signal-to-noise ratio and contains less noise and other interference than the other algorithms. This is significant progress. At the same time, the sampling and drawing times of ray-casting DAS, as shown in Table 4, are much better than those of all five other algorithms, which indicates that our method is effective and time-saving.

5. Conclusion

To address the low efficiency of sampling-point selection and the slow rendering speed of existing ray-casting algorithms, this paper proposes a new method based on dynamic sampling and an improved rendering operator. The experimental results show that this method effectively shortens sampling and rendering time and improves the efficiency of the algorithm while maintaining high-quality rendering.

At the same time, this paper introduces the distance function into the image rendering process, which gives the image high contrast and low noise. However, the algorithm still has shortcomings: it requires a gradient calculation, and placing that calculation in the preprocessing stage increases the preprocessing workload and time. The efficiency of the algorithm could be further improved by a more efficient gradient calculation method.

Data Availability

The data used to support the findings of this study are available from the following websites: https://challenge2018.isic-archive.com/, https://grand-challenge.org/All_Challenges/, http://adni.loni.usc.edu/data-samples/accessdata/, and http://academictorrents.com/details/a9e2741587d42ef6139aa474a95858a17952b3a5.

Disclosure

An earlier version of this paper was presented as a conference abstract at "ISAIR 2020: The 5th International Symposium on Artificial Intelligence and Robotics 2020"; see https://easychair.org/smart-program/ISAIR2020/2020-08-09.html#talk:157507.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank Campagnolo for publishing the dataset and Francisco Sans for making the code available. We are grateful for the comments from the anonymous reviewers. This research was supported in part by National Natural Science Foundation of China (Grant Nos. 61562013, 61772149, 61866009, 62002082), Guangxi Key Research and Development Project (Grant No. AB19110038), Guangxi Natural Science Foundation (Grant Nos. 2019GXNSFAA245014, 2017GXNFDA198025, 2018GXNSFAA294132, 2020GXNSFBA238014), and Guangxi University Young and Middle-aged Teachers' Research Ability Improvement Project (Grant Nos. 2019KY0238, 2020KY05034).

References

  1. A. Beristain, J. Congote, and O. E. Ruiz, “Volume visual attention Maps (VVAM) in ray-casting rendering,” in Medicine Meets Virtual Reality (MMVR), pp. 53–57, Newport Beach, California, USA, April 2012. View at: Google Scholar
  2. Z. Lesar, “Real-time ray casting of volumetric data,” in IEEE EUROCON 2015 - International Conference on Computer as a Tool (EUROCON), pp. 45–62, Salamanca, Spain, March 2015. View at: Publisher Site | Google Scholar
  3. H. Ray, H. Pfister, D. Silver, and T. A. Cook, “Ray casting architectures for volume visualization,” IEEE Transactions on Visualization and Computer Graphics, vol. 5, no. 3, pp. 210–223, 1999. View at: Publisher Site | Google Scholar
  4. J. R. Mitchell, P. Dickof, and A. G. Law, “A comparison of line integral algorithms,” Computers in Physics, vol. 4, no. 2, pp. 166–172, 1990. View at: Publisher Site | Google Scholar
  5. M. Levoy, “Volume rendering: display of surfaces from volume data,” IEEE Computer Graphics & Applications, vol. 8, no. 3, pp. 29–37, 1988. View at: Google Scholar
  6. Z. Jian, “The study of volume rendering techniques for medical data,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 7, pp. 463–473, 2005. View at: Google Scholar
  7. Z. Liu, H. Seo, A. Castiglione, K.-K. R. Choo, and H. Kim, “Memory-efficient implementation of elliptic curve cryptography for the Internet-of-Things,” IEEE Transactions on Dependable and Secure Computing, vol. 16, no. 3, pp. 521–529, 2019. View at: Publisher Site | Google Scholar
  8. K.-K. R. Choo, C. Esposito, and A. Castiglione, “Evidence and forensics in the cloud: challenges and future research directions,” IEEE Cloud Computing, vol. 4, no. 3, pp. 14–19, 2017. View at: Publisher Site | Google Scholar
  9. I. Demir and R. Westermann, “Vector-to-closest-point Octree for surface ray-casting,” in Vision, Modeling and Visualization(VMV), pp. 65–72, Aachen, Germany, October 2015. View at: Publisher Site | Google Scholar
  10. I. Herrera, C. Buchart, I. Aguinaga, and D. Borro, “Study of a ray casting technique for the visualization of deformable volumes,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 11, pp. 1555–1565, 2014. View at: Publisher Site | Google Scholar
  11. A. Averbuch, G. Lifschitz, and Y. Shkolnisky, “Accelerating X-ray data collection using pyramid beam ray casting geometries,” IEEE transactions on image processing, vol. 20, no. 2, pp. 523–533, 2011. View at: Publisher Site | Google Scholar
  12. B. Lee, J. Yun, J. Seo, B. Shim, Y.-G. Shin, and B. Kim, “Fast high-quality volume ray-casting with virtual samplings,” IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 6, pp. 1525–1532, 2010. View at: Publisher Site | Google Scholar
  13. Z. Wu, N. Wang, J. Shao, and G. Deng, “GPU ray casting method for visualizing 3D pipelines in a virtual globe,” International Journal of Digital Earth, vol. 12, no. 4, pp. 428–441, 2019. View at: Publisher Site | Google Scholar
  14. J. Luo, G. Hu, and G. Ni, “Dual-space ray casting for height field rendering,” Journal of Visualization and Computer Animation, vol. 25, no. 1, pp. 45–56, 2014. View at: Publisher Site | Google Scholar
  15. D. Tost, S. Grau, M. Ferre, and A. Puig, “Ray-casting time-varying volume data sets with frame-to-frame coherence,” in Visualization and Data Analysis 2006, vol. 6060, pp. 505–522, San Jose, CA, USA, January 2006. View at: Publisher Site | Google Scholar
  16. F. Sans and R. Carmona, “A comparison between GPU-based volume ray casting implementations: fragment shader, compute shader, OpenCL, and CUDA,” CLEI Electronic Journal, vol. 20, no. 2, pp. 643–668, 2017. View at: Google Scholar
  17. R. Binyahib, T. Peterka, M. Larsen, K.-L. Ma, and H. Childs, “A scalable hybrid scheme for ray-casting of unstructured volume data,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 7, pp. 2349–2361, 2019. View at: Publisher Site | Google Scholar
  18. C. Schulz and A. Zell, “Sub-pixel resolution techniques for ray casting in low-resolution occupancy grid maps,” in 2019 European Conference on Mobile Robots (ECMR), pp. 1–6, Prague, Czech Republic, Czech Republic, August 2019. View at: Publisher Site | Google Scholar
  19. L. Tao, T. Bui, and H. Hasegawa, “Global ray-casting range image registration,” IPSJ transactions on computer vision and applications, vol. 9, no. 1, pp. 122–138, 2017. View at: Publisher Site | Google Scholar
  20. S. Lim and B.-S. Shin, “A half-skewed Octree for volume ray casting,” IEICE Transactions on Information and Systems, vol. E90-D, no. 7, pp. 1085–1091, 2007. View at: Publisher Site | Google Scholar
  21. F. Sans and R. Carmona, “Volume ray casting using different GPU based parallel APIs,” in 2016 XLII Latin American Computing Conference (CLEI), pp. 1–11, Valparaíso, Chile, October 2016. View at: Publisher Site | Google Scholar
  22. L. Xiao, C. Li, Z. Wu, and T. Wang, “An enhancement method for X-ray image via fuzzy noise removal and homomorphic filtering,” Neurocomputing, vol. 195, no. 2, pp. 56–64, 2016. View at: Publisher Site | Google Scholar
  23. M. Mouzai, C. Tarabet, and A. Mustapha, “Low-contrast X-ray enhancement using a fuzzy gamma reasoning model,” Medical & Biological Engineering & Computing, vol. 58, no. 6, pp. 1177–1197, 2020. View at: Publisher Site | Google Scholar
  24. J. P. Wang, F. Yang, and Y. Cao, “Cache-aware sampling strategies for texture-based ray casting on GPU,” in 2014 IEEE 4th Symposium on Large Data Analysis and Visualization (LDAV), pp. 19–26, Paris, France, November 2014. View at: Publisher Site | Google Scholar
  25. https://challenge2018.isic-archive.com/.
  26. F. Yuan, “An interactive concave volume clipping method based on GPU ray casting with Boolean operation,” Computing and Informatics, vol. 31, no. 3, pp. 551–600, 2012. View at: Google Scholar
  27. L. Q. Campagnolo and W. Celes, “Interactive directional ambient occlusion and shadow computations for volume ray casting,” Computers & Graphics, vol. 84, pp. 66–76, 2019. View at: Publisher Site | Google Scholar
  28. Y. Zhang, P. Gao, and X. Li, “A novel parallel ray-casting algorithm,” in 2016 13th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 2016. View at: Publisher Site | Google Scholar
  29. https://grand-challenge.org/All_Challenges/.
  30. http://academictorrents.com/details/a9e2741587d42ef6139aa474a95858a17952b3a5.
  31. L. Lin, S. Chen, Y. Shao, and Z. Gu, “Plane-based sampling for ray casting algorithm in sequential medical images,” Computational and mathematical methods in medicine, vol. 2013, Article ID 874517, 5 pages, 2013. View at: Publisher Site | Google Scholar
  32. M. Bozorgi and F. Lindseth, “GPU-based multi-volume ray casting within VTK for medical applications,” International Journal of Computer Assisted Radiology and Surgery, vol. 10, no. 3, pp. 293–300, 2015. View at: Publisher Site | Google Scholar
  33. A. Mastmeyer, T. Hecht, D. Fortmeier, and H. Handels, “Ray-casting based evaluation framework for haptic force feedback during percutaneous transhepatic catheter drainage punctures,” International Journal of Computer Assisted Radiology and Surgery, vol. 9, no. 3, pp. 421–431, 2014. View at: Publisher Site | Google Scholar
  34. http://adni.loni.usc.edu/data-samples/access-data/.
  35. R. Mehaboobathunnisa, A. A. H. Thasneem, and M. M. Sathik, “Fuzzy mutual information-based intraslice grouped ray casting,” Journal of Intelligent Systems, vol. 28, no. 1, pp. 77–86, 2019. View at: Publisher Site | Google Scholar
  36. A. G. Moore, J. G. Hatch, S. Kuehl, and R. P. McMahan, “VOTE: a ray-casting study of vote-oriented technique enhancements,” International Journal of Human-Computer Studies, vol. 120, no. 12, pp. 36–48, 2018. View at: Publisher Site | Google Scholar
  37. M. Fröhlich, C. Bolinhas, and A. Depeursinge, “Holographic visualisation and interaction of fused CT, PET and MRI volumetric medical imaging data using dedicated remote GPGPU ray casting,” in POCUS 2018, BIVPCS 2018, CuRIOUS 2018, pp. 102–110, Granada, Spain, 2018. View at: Publisher Site | Google Scholar

Copyright © 2020 Huadeng Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

