Research Article  Open Access
Jie Liu, Chunye Gong, Weimin Bao, Guojian Tang, Yuewen Jiang, "Solving the Caputo Fractional Reaction-Diffusion Equation on GPU", Discrete Dynamics in Nature and Society, vol. 2014, Article ID 820162, 7 pages, 2014. https://doi.org/10.1155/2014/820162
Solving the Caputo Fractional Reaction-Diffusion Equation on GPU
Abstract
We present a parallel GPU solution of the Caputo fractional reaction-diffusion equation in one spatial dimension with an explicit finite difference approximation. The parallel solution, which is implemented with the CUDA programming model, consists of three procedures: preprocessing, parallel solver, and postprocessing. The parallel solver involves parallel tridiagonal matrix-vector multiplication, vector-vector addition, and constant vector multiplication. The most time-consuming loop of vector-vector addition and constant vector multiplication is optimized, yielding an impressive performance improvement. The experimental results show that the GPU solution compares well with the exact solution. The optimized GPU solution on an NVIDIA Quadro FX 5800 is 2.26 times faster than the optimized parallel CPU solution on a multicore Intel Xeon E5540 CPU.
1. Introduction
The idea of fractional derivatives can be dated back to the 17th century. A fractional differential equation is an equation involving fractional derivatives. Fractional equations can describe some physical phenomena more accurately than classical integer-order differential equations [1]. Reaction-diffusion equations play an important role in dynamical systems of mathematics, physics, chemistry, bioinformatics, finance, and other research areas.
Some analytical methods have been proposed for fractional differential equations [2, 3]. The stability of Cauchy fractional differential equations has been studied [4, 5], and more attention should be paid to the interesting Ulam's type stability [6]. A wide variety of numerical approximation methods have been proposed for fractional equations [7, 8], for example, the finite difference method [9], the finite element method, and spectral techniques [10]. Interest in fractional reaction-diffusion equations has increased [11]. In 2000, Henry and Wearne [12] derived a fractional reaction-diffusion equation from a continuous-time random walk model with temporal memory and sources. The fractional reaction-diffusion system with activator and inhibitor variables was studied by Gafiychuk et al. [13]. Haubold et al. [14] developed a solution in terms of the H-function for a unified reaction-diffusion equation. The generalized differential transform method [15] was presented for fractional reaction-diffusion equations. Saxena et al. [16] investigated a closed-form solution of a generalized fractional reaction-diffusion equation.
Parallel computing is used to solve computation intensive applications simultaneously [17–19]. In recent years, computing accelerators such as the graphics processing unit (GPU) have provided a new means of accelerating computation intensive simulations [20–22]. General-purpose GPU computing has been made possible by advances in programming models and hardware resources. GPU programming models such as NVIDIA's compute unified device architecture (CUDA) [23] have matured and simplify the development of nongraphics applications. GPUs present an energy-efficient architecture for computation intensive domains like particle transport [24, 25] and molecular dynamics [26].
It is time consuming to numerically solve fractional differential equations with high spatial dimension or long time integration. The short memory principle [27, 28] and parallel computing [29–32] can be used to overcome this difficulty. Parallel algorithms for one- and two-dimensional time fractional equations have been studied and good parallel scalability achieved [31, 32]. An optimization of the sum of constant vector multiplications has been presented, obtaining a twofold speedup [31]. The parallel implicit iterative algorithm was studied for the two-dimensional time fractional problem for the first time [32].
Gong et al. [29] presented a parallel algorithm for Riesz space fractional equations. The parallel efficiency of the presented algorithm on 64 processes is up to 79.39% compared with 8 processes on a distributed memory cluster system. Diethelm [30] implemented the fractional version of the second-order Adams-Bashforth-Moulton method for fractional ordinary equations on a parallel computer. Domain decomposition is regarded as the basic mathematical background for many parallel applications. A domain decomposition algorithm for the time fractional reaction-diffusion equation with an implicit finite difference method was proposed [33]; it keeps the same parallelism but needs much fewer iterations than Jacobi iteration in each time step. However, to the best of our knowledge, nothing has been reported on accelerating the numerical solution of the Caputo fractional reaction-diffusion equation on GPU.
This paper focuses on the Caputo fractional reaction-diffusion equation

$$\frac{\partial^\alpha u(x,t)}{\partial t^\alpha} = K_\alpha \frac{\partial^2 u(x,t)}{\partial x^2} + C\,u(x,t), \qquad 0 < \alpha \le 1, \qquad (1)$$

on a finite domain $0 \le x \le L$ and $0 \le t \le T$. The coefficients $K_\alpha$ and $C$ are constants. If $\alpha$ equals 1, (1) is the classical reaction-diffusion equation. The fractional derivative is in the Caputo form.
2. Background
2.1. Numerical Solution
The fractional derivative of $u(x,t)$ in the Caputo sense is defined as [27]

$$\frac{\partial^\alpha u(x,t)}{\partial t^\alpha} = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{\partial u(x,s)}{\partial s}\,(t-s)^{-\alpha}\,ds, \qquad 0 < \alpha < 1.$$
If $u(x,t)$ has continuous bounded derivatives in $t$ on $[0,T]$ for every $x$, the Caputo derivative can be approximated by a finite difference quadrature with truncation error of order $O(\tau^{2-\alpha})$.
Define $\tau = T/N$ and $t_k = k\tau$ for $k = 0, 1, \ldots, N$, and $h = L/M$ and $x_i = ih$ for $i = 0, 1, \ldots, M$. Define $U_i^k$ as the numerical approximation to $u(x_i, t_k)$. We can get [11]

$$\frac{\partial^\alpha u(x_i, t_{k+1})}{\partial t^\alpha} \approx \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=0}^{k} b_j \left(U_i^{k+1-j} - U_i^{k-j}\right),$$

where $b_j = (j+1)^{1-\alpha} - j^{1-\alpha}$, $b_0 = 1$, and $b_j > b_{j+1} > 0$.
Using the explicit center difference scheme for $\partial^2 u/\partial x^2$, we can get

$$\frac{\partial^2 u(x_i, t_k)}{\partial x^2} \approx \frac{U_{i+1}^k - 2U_i^k + U_{i-1}^k}{h^2}.$$
The explicit finite difference approximation for (1) is

$$\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=0}^{k} b_j \left(U_i^{k+1-j} - U_i^{k-j}\right) = K_\alpha \frac{U_{i+1}^k - 2U_i^k + U_{i-1}^k}{h^2} + C\,U_i^k. \qquad (7)$$
Define $r$, $c$, the history coefficients $c_j$, and the solution vector $U^k$ as

$$r = \frac{\tau^\alpha\,\Gamma(2-\alpha)\,K_\alpha}{h^2}, \qquad c = \tau^\alpha\,\Gamma(2-\alpha)\,C, \qquad c_j = b_j - b_{j+1}, \qquad U^k = \left(U_1^k, \ldots, U_{M-1}^k\right)^T. \qquad (8)$$
Equation (7) evolves as

$$U^{k+1} = A\,U^k + \sum_{j=1}^{k-1} c_j\,U^{k-j} + b_k\,U^0, \qquad k \ge 1, \qquad (9)$$

where the matrix $A$ is a tridiagonal matrix, defined by

$$A = \mathrm{tridiag}\left(r,\; 1 - b_1 - 2r + c,\; r\right). \qquad (10)$$
2.2. GPU Architecture and CUDA Programming Model
The architecture of the GPU is optimized for rendering real-time graphics, a computation and memory access intensive problem domain with enormous inherent parallelism. Unlike a CPU, a much larger portion of a GPU's resources is devoted to data processing rather than to caching or control flow. The NVIDIA Quadro FX 5800 GPU has 30 multiprocessor units, called streaming multiprocessors (SMs). Each SM contains 8 SIMD CUDA cores, and each CUDA core runs at 1.30 GHz. The multiprocessors create, manage, and execute concurrent threads in hardware with near-zero scheduling overhead. The single instruction multiple thread (SIMT) unit, which is akin to SIMD vector organizations, creates, manages, schedules, and executes threads in groups of 32 parallel threads called warps [23].
The programming model is a bridge between hardware and application. CUDA comes with a software environment that allows developers to use C as a high-level programming language. There are three key abstractions in CUDA: a hierarchy of thread execution (grid or kernel, thread block, and thread), shared memory, and barrier synchronization. These abstractions provide fine-grained data level parallelism and thread parallelism, nested within coarse-grained data level parallelism and task level parallelism. Each CUDA kernel creates a single grid, which consists of many thread blocks. Each thread block schedules groups of threads that can share data through on-chip shared memory and synchronize with one another using barrier synchronization. Threads within a block are organized into warps, which run in SIMT fashion. CUDA threads may access data from multiple memory spaces during their execution: global, texture, and constant memory for the grid, on-chip shared memory for the thread block, and private register files for the thread. The access time to these memory spaces varies from several to hundreds of cycles, and global memory accesses must be coalesced to achieve good performance.
3. Details of GPU Solution
The parallel solution consists of three parts. The first part is preprocessing, which prepares the initial matrices, vectors, and other data. The second part is the parallel solver, which carries out the time step iteration of (9). The third part is postprocessing, which outputs the final results.
The preprocessing includes initialization of the parallel environment, distribution of the computing task, allocation of local memory space, and initialization of variables and arrays. The matrices and vectors are prepared before the computation of (9); for example, the tridiagonal matrix is built according to (10). The postprocessing is simple. The exact solution is evaluated, and the maximum absolute error between the exact and parallel solutions is computed and output. Both the exact and parallel solutions are saved in files for plotting. Other postprocessing operations include freeing memory space and stopping the parallel environment.
In order to get the solution at the next time level, the right-hand side of (9) must be evaluated. This involves mainly one tridiagonal matrix-vector multiplication, many constant vector multiplications, and many vector-vector additions:
(1) the tridiagonal matrix-vector multiplication of the matrix with the current solution vector;
(2) the constant vector multiplications of the history coefficients with earlier solution vectors;
(3) the vector-vector additions that accumulate these terms.
The parallel solution uses the data level parallelism of the GPU architecture. The parallel solution with CUDA for (1) is described in Algorithm 1. The preprocessing involves lines 1 to 3. The parallel solver, which is the most time-consuming procedure, involves lines 4 to 12. The postprocessing involves lines 13 to 14; additional operations are not shown in Algorithm 1. BS stands for the CUDA thread block size, and the number of CUDA thread blocks is the problem size divided by BS; BS is a predefined constant such as 16, 32, or 64. Because the time spent on the preprocessing and postprocessing is trivial when the number of time steps is big enough, it is omitted from the measured time, which records the runtimes of the parallel CPU and GPU solutions.
Except for the initialization of variables and arrays, there are four CUDA kernels. The first computes the initial condition of (1). The second performs the tridiagonal matrix-vector multiplication. The third performs the combined constant vector multiplication and vector-vector addition. The last performs the constant vector multiplication alone. The latter two kernels are simple and will not be described in detail.
3.1. Implementation
The CUDA kernel for constant vector multiplication and vector-vector addition is shown in Algorithm 2. Algorithm 2 computes and saves the result into the output vector.
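Assuming Algorithm 2 has the usual `y = b*x + y` form (an axpy-style update), its serial C counterpart is one line per element; on the GPU, each thread handles one index `i`:

```c
/* Constant vector multiplication plus vector-vector addition:
 * y[i] = b * x[i] + y[i] for i = 0..n-1.  On the GPU each CUDA thread
 * computes one element, with i derived from the block and thread index. */
static void const_mul_add(double b, const double *x, double *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = b * x[i] + y[i];
}
```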

Most elements of the tridiagonal matrix are zero. The most common data structure used to store a sparse matrix for sparse matrix-vector multiplication is the compressed sparse row (CSR) format [34], in which the nonzeros of each row are stored contiguously in a value array together with column-index and row-pointer arrays; this storage scheme is denoted (11). So in the following parts of this paper, the matrix stands for the CSR format of (11), not the dense format of (10). With the format of (11), global memory accesses are coalesced, which improves the performance of global memory access.
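A possible CSR layout for the tridiagonal matrix, together with the matching sparse matrix-vector product, can be sketched in serial C as follows. The constant diagonal `d` and off-diagonal `r` mirror the structure of (10); the type and function names are ours:

```c
#include <stdlib.h>

/* CSR storage of an n x n tridiagonal matrix with constant diagonal d and
 * off-diagonal r: each row's 2-3 nonzeros are contiguous in val, with
 * col_idx giving their columns and row_ptr delimiting the rows. */
typedef struct { int n, *row_ptr, *col_idx; double *val; } csr_t;

static csr_t tridiag_csr(int n, double d, double r) {
    csr_t A = { n, malloc((n + 1) * sizeof(int)),
                malloc(3 * n * sizeof(int)), malloc(3 * n * sizeof(double)) };
    int nnz = 0;
    for (int i = 0; i < n; i++) {
        A.row_ptr[i] = nnz;
        if (i > 0)     { A.col_idx[nnz] = i - 1; A.val[nnz++] = r; }
        A.col_idx[nnz] = i; A.val[nnz++] = d;
        if (i < n - 1) { A.col_idx[nnz] = i + 1; A.val[nnz++] = r; }
    }
    A.row_ptr[n] = nnz;
    return A;
}

/* Sparse matrix-vector product y = A x over the CSR arrays. */
static void csr_mv(const csr_t *A, const double *x, double *y) {
    for (int i = 0; i < A->n; i++) {
        double s = 0.0;
        for (int p = A->row_ptr[i]; p < A->row_ptr[i + 1]; p++)
            s += A->val[p] * x[A->col_idx[p]];
        y[i] = s;
    }
}
```

Because each row's nonzeros are contiguous in `val`, consecutive GPU threads read consecutive addresses, which is what makes the accesses coalesce.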
The CUDA kernel for the tridiagonal matrix-vector multiplication is shown in Algorithm 3. Each thread block deals with the multiplication of a block of rows of the matrix with the vector. Algorithm 3 computes and saves the result into the output vector. Shared memory is used to improve the memory access speed, and the synchronization function is used to ensure the correctness of the algorithm.

In Algorithm 3, each GPU thread deals with the multiplication of one row of the tridiagonal matrix with the vector. Each thread needs three elements of the vector, and adjacent threads reuse two of the same elements. We can therefore use shared memory to improve memory access performance: the elements of the vector that will be used by the threads of a thread block are staged into shared memory, shown between lines 6 and 16 of Algorithm 3. The actual tridiagonal multiplication is shown between lines 18 and 21. Finally, the results are stored back to global memory.
3.2. Optimization
In Algorithm 1, three of the CUDA kernels are invoked once per time step, while the kernel for constant vector multiplication and vector-vector addition is invoked repeatedly in the loop of line 9 of each time step, so its total number of invocations grows quadratically with the number of time steps. The most time-consuming part of Algorithm 1 is therefore the loop of line 9. This loop can be combined into one CUDA kernel, as shown in Algorithm 4, with the coefficient array of (8) kept in global memory.
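The effect of the fusion can be seen in a serial C sketch (hypothetical names; `coef` plays the role of the coefficient array of (8) kept in global memory): instead of launching one constant-vector-multiply-and-add kernel per history term, a single pass accumulates every term for each element:

```c
/* Fused form of the loop of line 9 in Algorithm 1 (cf. Algorithm 4):
 *   out[i] += sum_{j=0}^{nhist-1} coef[j] * hist[j][i]
 * for all i in one pass.  The outer loop mirrors the per-thread work of
 * the fused CUDA kernel: one GPU thread per index i. */
static void fused_history(double *out, const double *coef,
                          double *const *hist, int nhist, int n) {
    for (int i = 0; i < n; i++) {
        double s = out[i];
        for (int j = 0; j < nhist; j++)
            s += coef[j] * hist[j][i];
        out[i] = s;
    }
}
```

Fusing the loop removes the per-launch overhead and the repeated reads and writes of the accumulator vector, which is the intent of the optimized kernel of Algorithm 4.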

So the optimized parallel solution for the Caputo fractional reaction-diffusion equation is the same as Algorithm 1 except that the loop (lines 9-10) of Algorithm 1 is replaced with the optimized CUDA kernel of Algorithm 4.
4. Experimental Results
4.1. Experiment Platforms
The experiment platforms consist of one GPU and one CPU, listed in Table 1. For a fair comparison, we measure the performance of the GPU against the MPI code running on the multicore CPU [31]. Both codes use double precision floating point operations. The CPU code is compiled by the mpif90 compiler with level three optimization. The GPU code is compiled by the NVCC compiler of CUDA version 3.0, also with level three optimization.

4.2. Numerical Example
The Caputo fractional reaction-diffusion test equation of [11], with the initial and boundary conditions given there, was considered as the test case (13). Its exact solution, (14), is known in closed form and is used below to measure accuracy.
4.3. Accuracy of the GPU Implementation
The GPU solution compares well with the exact solution of the time fractional diffusion equation in the test case of (13), as shown in Figure 1. The maximum absolute errors for the three tested values of $\alpha$ are small. In fact, the accuracy and convergence of the GPU solution are the same as those of the serial and parallel MPI solutions on the CPU [31].
4.4. Total Performance Improvement
In this section, the performance of the optimized GPU solution presented in this paper is compared with the performance of the parallel CPU solution [31]. The parallel CPU solution makes full use of four cores of E5540. The optimized GPU solution is presented in Section 3.2.
For a fixed spatial grid size, the performance comparison between the GPU and the multicore CPU is shown in Table 2. The thread block size is 32.

For a fixed number of time steps, the performance comparison between the optimized GPU solution and the parallel CPU solution is shown in Table 3. The thread block size is 32.

4.5. Performance Issues of GPU Solution
With the test problem of Section 4.2 and a thread block size of 64, the runtime of Algorithm 1 on the Quadro FX 5800 is 1.228 seconds. Without the loop of line 9 in Algorithm 1, the runtime is only 0.032 seconds. That is to say, about 97.39% of the runtime is spent on the loop of line 9. This is the reason why we develop the optimized GPU solution with the optimized CUDA kernel of Algorithm 4.
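The quoted percentage follows directly from the two runtimes, and Amdahl's law then bounds what the optimization can gain. A minimal check of the arithmetic:

```c
/* Share of total runtime spent in the loop of line 9, given the total
 * runtime and the runtime measured with the loop removed. */
static double loop_share(double total, double without_loop) {
    return (total - without_loop) / total;
}
```

With `loop_share(1.228, 0.032)` evaluating to about 0.9739, Amdahl's law caps the whole-program speedup obtainable by optimizing only that loop at 1/(1 - 0.9739), roughly 38 times.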
The impact of the optimized CUDA kernel for constant vector multiplication and vector-vector addition is shown in Table 4, and the corresponding performance improvement is shown in Table 5. All thread block sizes are 64. The basic GPU solution is Algorithm 1; the optimized GPU solution uses the optimized CUDA kernel of Algorithm 4.


The thread block size (BS) is a key parameter for parallel GPU algorithms. The impact of BS is shown in Table 6, from which we can see that a thread block size of 32 is the best choice.

5. Conclusions and Future Work
In this paper, the numerical solution of the Caputo fractional reaction-diffusion equation with an explicit finite difference method is accelerated on GPU. The iteration of each time step consists of tridiagonal matrix-vector multiplication, constant vector multiplication, and vector-vector addition. The details of the GPU solution and some basic CUDA kernels are presented, and the most time-consuming loop (constant vector multiplication and vector-vector addition) is optimized. The experimental results show that the GPU solution compares well with the exact analytic solution and is much faster than the parallel CPU solution. The power of GPU parallel computing for solving fractional applications should therefore be recognized.
As part of future work, first, the stability, such as Ulam's type stability, of different fractional equations deserves attention [35–37]. Second, parallelizing implicit numerical methods for fractional differential equations remains challenging.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research work is supported in part by the National Natural Science Foundation of China under Grants no. 11175253 and no. 61170083, in part by Specialized Research Fund for the Doctoral Program of Higher Education under Grant no. 20114307110001, and in part by 973 Program of China under Grant no. 61312701001. The authors would like to thank the anonymous reviewers for their helpful comments as well.
References
[1] F. Huang and F. Liu, "The time fractional diffusion equation and the advection-dispersion equation," The ANZIAM Journal, vol. 46, no. 3, pp. 317–330, 2005.
[2] S. Chen and X. Jiang, "Analytical solutions to time-fractional partial differential equations in a two-dimensional multilayer annulus," Physica A: Statistical Mechanics and Its Applications, vol. 391, no. 15, pp. 3865–3874, 2012.
[3] R. K. Pandey, O. P. Singh, and V. K. Baranwal, "An analytic algorithm for the space-time fractional advection-dispersion equation," Computer Physics Communications, vol. 182, no. 5, pp. 1134–1144, 2011.
[4] R. W. Ibrahim, "Ulam-Hyers stability for Cauchy fractional differential equation in the unit disk," Abstract and Applied Analysis, vol. 2012, Article ID 613270, 10 pages, 2012.
[5] J. Brzdęk, N. Brillouët-Belluot, K. Ciepliński, and B. Xu, "Ulam's type stability," Abstract and Applied Analysis, vol. 2012, Article ID 329702, 2 pages, 2012.
[6] N. Brillouët-Belluot, J. Brzdęk, and K. Ciepliński, "On some recent developments in Ulam's type stability," Abstract and Applied Analysis, vol. 2012, Article ID 716936, 41 pages, 2012.
[7] Q. Liu, F. Liu, I. Turner, and V. Anh, "Numerical simulation for the 3D seepage flow with fractional derivatives in porous media," IMA Journal of Applied Mathematics, vol. 74, no. 2, pp. 201–229, 2009.
[8] A. Ashyralyev and Z. Cakir, "On the numerical solution of fractional parabolic partial differential equations with the Dirichlet condition," Discrete Dynamics in Nature and Society, vol. 2012, Article ID 696179, 15 pages, 2012.
[9] A. Ashyralyev and F. Dal, "Finite difference and iteration methods for fractional hyperbolic partial differential equations with the Neumann condition," Discrete Dynamics in Nature and Society, vol. 2012, Article ID 434976, 15 pages, 2012.
[10] C. Li, F. Zeng, and F. Liu, "Spectral approximations to the fractional integral and derivative," Fractional Calculus and Applied Analysis, vol. 15, no. 3, pp. 383–406, 2012.
[11] J. H. Chen, "An implicit approximation for the Caputo fractional reaction-dispersion equation," Journal of Xiamen University (Natural Science), vol. 46, no. 5, pp. 616–619, 2007 (Chinese).
[12] B. I. Henry and S. L. Wearne, "Fractional reaction-diffusion," Physica A: Statistical Mechanics and Its Applications, vol. 276, no. 3-4, pp. 448–455, 2000.
[13] V. Gafiychuk, B. Datsko, and V. Meleshko, "Mathematical modeling of time fractional reaction-diffusion systems," Journal of Computational and Applied Mathematics, vol. 220, no. 1-2, pp. 215–225, 2008.
[14] H. J. Haubold, A. M. Mathai, and R. K. Saxena, "Further solutions of fractional reaction-diffusion equations in terms of the H-function," Journal of Computational and Applied Mathematics, vol. 235, no. 5, pp. 1311–1316, 2011.
[15] S. Z. Rida, A. M. A. El-Sayed, and A. A. M. Arafa, "On the solutions of time-fractional reaction-diffusion equations," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 12, pp. 3847–3854, 2010.
[16] R. K. Saxena, A. M. Mathai, and H. J. Haubold, "Solution of generalized fractional reaction-diffusion equations," Astrophysics and Space Science, vol. 305, no. 3, pp. 305–313, 2006.
[17] S. J. Pennycook, S. D. Hammond, G. R. Mudalige, S. A. Wright, and S. A. Jarvis, "On the acceleration of wavefront applications using distributed many-core architectures," The Computer Journal, vol. 55, no. 2, pp. 138–153, 2012.
[18] Z. Mo, A. Zhang, X. Cao et al., "JASMIN: a parallel software infrastructure for scientific computing," Frontiers of Computer Science in China, vol. 4, no. 4, pp. 480–488, 2010.
[19] R. Zhang, "A three-stage optimization algorithm for the stochastic parallel machine scheduling problem with adjustable production rates," Discrete Dynamics in Nature and Society, vol. 2013, Article ID 280560, 15 pages, 2013.
[20] X.-J. Yang, X.-K. Liao, K. Lu, Q.-F. Hu, J.-Q. Song, and J.-S. Su, "The TianHe-1A supercomputer: its hardware and software," Journal of Computer Science and Technology, vol. 26, no. 3, pp. 344–351, 2011.
[21] Y.-X. Wang, L.-L. Zhang, W. Liu et al., "Efficient parallel implementation of large scale 3D structured grid CFD applications on the Tianhe-1A supercomputer," Computers and Fluids, vol. 80, pp. 244–250, 2013.
[22] C. Xu, X. Deng, L. Zhang et al., "Parallelizing a high-order CFD software for 3D, multi-block, structural grids on the TianHe-1A supercomputer," in Supercomputing, J. Kunkel, T. Ludwig, and H. Meuer, Eds., vol. 7905 of Lecture Notes in Computer Science, pp. 26–39, Springer, Heidelberg, Germany, 2013.
[23] NVIDIA Corporation, CUDA Programming Guide Version 3.1, 2010.
[24] C. Gong, J. Liu, L. Chi, H. Huang, J. Fang, and Z. Gong, "GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method," Journal of Computational Physics, vol. 230, no. 15, pp. 6010–6022, 2011.
[25] C. Gong, J. Liu, H. Huang, and Z. Gong, "Particle transport with unstructured grid on GPU," Computer Physics Communications, vol. 183, no. 3, pp. 588–593, 2012.
[26] Q. Wu, C. Yang, T. Tang, and L. Xiao, "Exploiting hierarchy parallelism for molecular dynamics on a petascale heterogeneous system," Journal of Parallel and Distributed Computing, vol. 73, no. 12, pp. 1592–1604, 2013.
[27] I. Podlubny, Fractional Differential Equations, vol. 198 of Mathematics in Science and Engineering, Academic Press, San Diego, Calif, USA, 1999.
[28] Y. Xu and Z. He, "The short memory principle for solving Abel differential equation of fractional order," Computers & Mathematics with Applications, vol. 62, no. 12, pp. 4796–4805, 2011.
[29] C. Gong, W. Bao, and G. Tang, "A parallel algorithm for the Riesz fractional reaction-diffusion equation with explicit finite difference method," Fractional Calculus and Applied Analysis, vol. 16, no. 3, pp. 654–669, 2013.
[30] K. Diethelm, "An efficient parallel algorithm for the numerical solution of fractional differential equations," Fractional Calculus and Applied Analysis, vol. 14, no. 3, pp. 475–490, 2011.
[31] C. Gong, W. Bao, G. Tang, B. Yang, and J. Liu, "An efficient parallel solution for Caputo fractional reaction-diffusion equation," The Journal of Supercomputing, 2014.
[32] C. Gong, W. Bao, G. Tang, Y. Jiang, and J. Liu, "A parallel algorithm for the two dimensional time fractional diffusion equation with implicit difference method," The Scientific World Journal, vol. 2014, Article ID 219580, 8 pages, 2014.
[33] C. Gong, W. Bao, G. Tang, Y. Jiang, and J. Liu, "A domain decomposition method for time fractional reaction-diffusion equation," The Scientific World Journal, vol. 2014, Article ID 681707, 5 pages, 2014.
[34] S. Williams, L. Oliker, R. Vuduc, J. Shalf, K. Yelick, and J. Demmel, "Optimization of sparse matrix-vector multiplication on emerging multicore platforms," Parallel Computing, vol. 35, no. 3, pp. 178–194, 2009.
[35] A. Fošner, "On the generalized Hyers-Ulam stability of module left $(m,n)$-derivations," Aequationes Mathematicae, vol. 84, no. 1-2, pp. 91–98, 2012.
[36] D. Popa, "Hyers-Ulam stability of the linear recurrence with constant coefficients," Advances in Difference Equations, no. 2, pp. 101–107, 2005.
[37] R. P. Agarwal, B. Xu, and W. Zhang, "Stability of functional equations in single variable," Journal of Mathematical Analysis and Applications, vol. 288, no. 2, pp. 852–869, 2003.
Copyright
Copyright © 2014 Jie Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.