Scientifica

Volume 2017, Article ID 6524156, 10 pages

https://doi.org/10.1155/2017/6524156

## An Investigation on the Aggregation and Rheodynamics of Human Red Blood Cells Using High Performance Computations

^{1}State Key Laboratory of Hydraulic Engineering Simulation and Safety, Tianjin University, Weijin Road, Tianjin 300072, China

^{2}School of Engineering & Materials Science, Queen Mary University of London, Mile End Road, London E1 4NS, UK

^{3}Department of Mechanical Engineering and Materials Science and Engineering, Faculty of Engineering and Technology, Cyprus University of Technology, 45 Kitiou Kyprianou, 3041 Limassol, Cyprus

^{4}Faculty of Civil Engineering, University of Split, Split, Croatia

Correspondence should be addressed to Dong Xu; xudong@tju.edu.cn

Received 6 January 2017; Accepted 28 February 2017; Published 4 April 2017

Academic Editor: Dharmendra Tripathi

Copyright © 2017 Dong Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Studies on the haemodynamics of human circulation are clinically and scientifically important. In order to investigate the effect of deformation and aggregation of red blood cells (RBCs) in blood flow, a computational technique has been developed by coupling the interaction between the fluid and the deformable RBCs. The coupled code was parallelized using a spatial decomposition, and a high speedup was achieved. In order to verify the code's capability of simulating RBC deformation and transport, simulations were carried out for a spherical capsule in a microchannel and for multiple RBC transport in a Poiseuille flow. RBC transport in a tube with a confined throat was also simulated to represent the peristaltic effects of microvessels. Relatively large-scale simulations of the motion of 49,152 RBCs in shear flows were then carried out at a hematocrit of 45%. The large-scale nature of the simulation enabled a macroscale verification and an investigation of the overall characteristics of RBC aggregation. The results are in excellent agreement with experimental studies: both experiments and simulations show uniform RBC distributions under high shear rates (60–100/s), whereas large aggregations were observed under a lower shear rate of 10/s.

#### 1. Introduction

The red blood cell (RBC, also referred to as erythrocyte) is the most common type of cell in human blood and occupies approximately 45% of the total blood volume in men and 40% in women. RBCs in a healthy state have a biconcave shape with a diameter of 6–8 *μ*m, a thickness of about 2 *μ*m at the edges, and a thickness of about 1 *μ*m at the center [1]. RBC aggregation, a mechanism that strongly influences the non-Newtonian properties of blood [2], occurs when shear forces are low and cells attract each other to form rouleaux (structures resembling coin piles), larger aggregates, and networks of aggregates. RBC aggregation can cause health complications; the investigation of the deformability and aggregation of RBCs in blood flow is therefore promising for the better understanding and diagnosis of many diseases in clinical medicine.

Due to the complex mechanisms of fluid-structure interaction, the theoretical analysis of RBCs or capsules is difficult and is usually limited to simple geometries and small deformations (Barthes-Biesel, 1980); an alternative approach, numerical simulation, has therefore attracted much attention. To date, most simulations have tended to target a relatively small number of cells ([3]; Dupin et al., 2006) because of the complex nature of the problem. The intrinsic complexity of biological systems calls for a closer combination of experimental and computational approaches ([3, 4]; Secomb, 2011), which in turn requires large-scale simulations, because experiments usually measure macroscale effects involving large quantities of cells [4–7].

So far, most RBC simulations have targeted a relatively small number of cells [3, 8, 9]. However, biological systems such as cells are often described as “complex systems”: they involve not only different categories of cells but also very large numbers of simple, identical elements interacting to produce “complex” behaviour. Experimental results show that the number of RBCs involved in the formation of aggregates and aggregate network structures in blood flow is much greater than the numbers used in the aforementioned studies [7]. Although accurate, quantitative experimental approaches will doubtless continue to advance, insights into the functioning of biological systems will not come from experiments alone; because of their intrinsic complexity, a combination of experimental and computational approaches is expected to be necessary [3, 4]. This again requires large-scale simulations, because experiments usually measure macroscale effects with large quantities of cells involved [4–7]. Large-scale simulations of RBCs have developed rapidly in recent years [10–12]; with the support of modern GPUs, Tang and Karniadakis [13] conducted a large-scale simulation of spontaneous vesicle formation consisting of 128 million particles.

In this paper, we present our computational research on RBC aggregation [14]. A 3D computational model has been developed for the deformation and aggregation of RBCs. With the support of high performance computers, the transport and aggregation of up to 49,152 RBCs, at a hematocrit of 45% in a simple shear flow, have been simulated. The large-scale nature of the computation also enabled a direct comparison with the macroscale aggregation features observed in experiments, with excellent agreement. Simulations were also carried out in a Poiseuille flow formed in a tube with a confined throat to investigate RBC transport in stenosed vessels or micro peristaltic pumps.


#### 2. Methodology

##### 2.1. Solvers for the Fluid, the Solids, and Their Interactions

###### 2.1.1. A Solver for Incompressible Viscous Flow

In order to simulate incompressible viscous flow, we used the in-house Computational Fluid Dynamics (CFD) code CgLes [16]. CgLes is a three-dimensional fluid solver with second-order accuracy in both time and space and is based on a finite volume formulation. The projection method was used to decouple the flow velocities and pressure. The capability of CgLes to simulate both laminar and turbulent flow has been extensively verified [17–19]. An implicit Crank–Nicolson time-stepping scheme was used for the diffusion term. The advection term, though very small and often neglected, was retained and discretized with the explicit two-step Adams–Bashforth scheme.
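The time-stepping combination described above can be illustrated on a 1D advection-diffusion toy problem: Crank–Nicolson treats the stiff diffusion term implicitly, while the two-step Adams–Bashforth scheme handles advection explicitly. This is a self-contained sketch, not CgLes code; the periodic domain, dense matrix solve, and parameter values are illustrative choices.

```python
import numpy as np

def step_cn_ab2(u, adv_prev, nu, dx, dt):
    """One time step of a 1D advection-diffusion update illustrating the
    scheme described in the text: Crank-Nicolson (implicit) for diffusion,
    two-step Adams-Bashforth (explicit) for advection.
    Periodic boundaries; returns the new field and the current advection
    term (needed by AB2 at the next step)."""
    n = u.size
    # Explicit advection term -u * du/dx (central differences, periodic).
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    adv = -u * dudx
    # AB2 extrapolation: 3/2 f^n - 1/2 f^(n-1)
    adv_ab2 = 1.5 * adv - 0.5 * adv_prev
    # Periodic Laplacian as a dense matrix (fine for a small sketch).
    L = (np.roll(np.eye(n), -1, axis=1) - 2.0 * np.eye(n)
         + np.roll(np.eye(n), 1, axis=1)) / dx**2
    # Crank-Nicolson: (I - dt/2 nu L) u^{n+1} = (I + dt/2 nu L) u^n + dt adv
    A = np.eye(n) - 0.5 * dt * nu * L
    b = (np.eye(n) + 0.5 * dt * nu * L) @ u + dt * adv_ab2
    return np.linalg.solve(A, b), adv
```

On the first step `adv_prev` is simply taken as zero, a common (if slightly inconsistent) way to start a two-step scheme.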

###### 2.1.2. Modelling the Deformation of the RBCs

The combined finite-discrete element method (FEM-DEM) [20, 21] was used to simulate the movement and deformation of the RBCs under the various forces developed in the fluid. This method incorporates movements, deformations, contacts, and interactions of RBCs. More details on this method were presented by Munjiza [20]. The initial RBC shape was described by a biconcave function [22]:

$$z(x, y) = \pm D_0 \sqrt{1 - \frac{4\left(x^2 + y^2\right)}{D_0^2}}\left[a_0 + a_1 \frac{x^2 + y^2}{D_0^2} + a_2 \frac{\left(x^2 + y^2\right)^2}{D_0^4}\right],$$

where $D_0 = 7.82\ \mu\text{m}$ is the undeformed cell diameter, $a_0 = 0.0518$, $a_1 = 2.0026$, and $a_2 = -4.491$.
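A commonly used parameterization of the biconcave RBC surface is the Evans–Fung fit; the snippet below evaluates it. The constants used here are the widely quoted values and are an assumption on our part; the exact form and constants in reference [22] may differ.

```python
import numpy as np

# Assumed parameter values: the widely used Evans-Fung biconcave fit
# (the exact constants in the paper's reference [22] may differ).
D0 = 7.82e-6                      # undeformed cell diameter [m]
A0, A1, A2 = 0.0518, 2.0026, -4.491

def rbc_surface_z(x, y):
    """Half-thickness z(x, y) of the biconcave RBC surface; the full
    membrane is the pair of sheets +z and -z. Returns NaN outside the
    cell rim (x^2 + y^2 > (D0/2)^2)."""
    r2 = (x**2 + y**2) / D0**2    # (x^2 + y^2) / D0^2, dimensionless
    with np.errstate(invalid="ignore"):
        return D0 * np.sqrt(1.0 - 4.0 * r2) * (A0 + A1 * r2 + A2 * r2**2)
```

With these constants the profile is thinnest at the central dimple and thickest near the rim, matching the dimensions quoted in the Introduction.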

In the present study, the RBC membrane was treated as a thin solid shell, meshed with tetrahedron elements and treated as a hyperelastic material. The most widely used constitutive laws include the neo-Hookean model [23] and the Mooney–Rivlin model [24, 25]; the latter was adopted here, following Y. L. Liu and W. K. Liu [24]. The deformation of the solid can be written as

$$\mathbf{x} = \varphi(\mathbf{X}) = \mathbf{X} + \mathbf{u}(\mathbf{X}),$$

where $\mathbf{X}$ are the positions of the material points, $\varphi$ is a smooth function mapping the initial material points to the deformed points $\mathbf{x}$, and $\mathbf{u}$ is the displacement mapping. The change in deformation in the vicinity of each material point can be described by the deformation gradient tensor $\mathbf{F}$:

$$\mathbf{F} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}} = \mathbf{I} + \frac{\partial \mathbf{u}}{\partial \mathbf{X}},$$

where $\mathbf{I}$ is the identity matrix. The left Cauchy–Green deformation tensor can then be written as

$$\mathbf{B} = \mathbf{F}\mathbf{F}^{T}.$$
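These kinematic quantities can be made concrete for a single linear tetrahedral element, for which the deformation gradient is constant inside the element. The snippet below is an illustrative sketch, not the paper's FEM-DEM code.

```python
import numpy as np

def deformation_tensors(X, x):
    """Deformation gradient F and left Cauchy-Green tensor B = F F^T
    for one linear tetrahedral element.
    X : (4, 3) reference (material) node positions
    x : (4, 3) current (deformed) node positions
    """
    # Edge matrices relative to node 0; for a linear tetrahedron the
    # mapping x = phi(X) is affine, so F is constant in the element.
    dX = (X[1:] - X[0]).T        # 3x3 matrix of reference edges
    dx = (x[1:] - x[0]).T        # 3x3 matrix of deformed edges
    F = dx @ np.linalg.inv(dX)   # F = dx/dX
    B = F @ F.T                  # left Cauchy-Green deformation tensor
    return F, B
```

For an undeformed element this returns the identity for both tensors; a uniaxial stretch of factor 2 along x gives F = diag(2, 1, 1) and B = diag(4, 1, 1).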

###### 2.1.3. Coupling the Fluid and the Solid Simulation

The fluid motion and solid deformation were coupled using an Immersed Boundary (IB) method [26] which links the interface between the fluid and the solid, both of which have independent meshes. By introducing a force term into the momentum equations of the fluid, the Immersed Boundary points serve as nonslip boundaries to the fluid solver. The surface nodes of the solids were therefore treated as Immersed Boundary points for the fluid to form nonslip wall boundaries. A direct-forcing scheme, which has been well established and verified [17], was used for the implementation of the IB method. With the body force due to the Immersed Boundary condition incorporated, the time-discretized momentum equation of the fluid can be written as

$$\frac{\mathbf{u}^{n+1} - \mathbf{u}^{n}}{\Delta t} = \mathrm{RHS}^{n+1/2} + \mathbf{f},$$

where $\mathrm{RHS}^{n+1/2}$ regroups the convective, pressure, and viscous terms at the intermediate time level between $n$ and $n+1$. The force term which yields the desired velocity $\mathbf{u}^{d}$ at the Immersed Boundary points then becomes

$$\mathbf{f} = \frac{\mathbf{u}^{d} - \mathbf{u}^{n}}{\Delta t} - \mathrm{RHS}^{n+1/2}.$$
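The direct-forcing idea can be sketched in a few lines: at the Immersed Boundary points the force is chosen so that the updated velocity equals the desired no-slip wall velocity, while the rest of the field sees no extra force. The array-based toy below illustrates the scheme and is not CgLes code.

```python
import numpy as np

def direct_forcing_update(u_n, rhs, u_desired, ib_mask, dt):
    """One explicit-in-time sketch of the direct-forcing IB step: at
    Immersed Boundary points the force term is chosen so that the
    updated velocity equals the desired (no-slip) wall velocity.
    u_n, rhs, u_desired : arrays of the same shape
    ib_mask : boolean array marking Immersed Boundary points
    """
    f = np.zeros_like(u_n)
    # f = (u_d - u^n)/dt - RHS, applied only at IB points.
    f[ib_mask] = (u_desired[ib_mask] - u_n[ib_mask]) / dt - rhs[ib_mask]
    # u^{n+1} = u^n + dt * (RHS + f); at IB points this collapses to u_d.
    u_next = u_n + dt * (rhs + f)
    return u_next, f
```

By construction, substituting the force back into the update gives exactly the desired velocity at the marked points, which is the defining property of the direct-forcing approach.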

###### 2.1.4. Modelling the Interaction between RBCs

The adhesive force, which causes aggregation, originates from molecular forces, such as van der Waals attractions, on the cell surfaces [27]. Cell-cell communication and molecular coupling may also contribute to the adhesive force. Accurate quantitative modelling of the adhesive force between RBCs is very important for the simulation of RBC aggregation. The strength of the adhesion between two cells can usually be described by the adhesion work $W$, which is the work required to separate two adhered cells. The JKR model [28] was used in the present study to compute the adhesion forces, where the relationship between the adhesive force $F$ and the penetration depth $\delta$ is described by

$$F = \frac{4E^{*}a^{3}}{3R} - \sqrt{8\pi W E^{*} a^{3}}, \qquad \delta = \frac{a^{2}}{R} - \sqrt{\frac{2\pi W a}{E^{*}}},$$

where $R$ is the radius of the tip, $a$ is the contact area radius, $W$ is the work of adhesion, and $E^{*}$ is the effective Young's modulus.
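One common statement of the JKR model expresses both the force and the penetration depth in terms of the contact radius, and predicts a pull-off force of (3/2)πWR independent of the elastic modulus. The snippet below evaluates these relations; it is a sketch of the standard JKR expressions, and the exact form used in [28] may differ. All parameter values in the test are purely illustrative.

```python
import math

def jkr_force(a, R, W, E_star):
    """Contact force in the JKR model as a function of contact radius a
    (positive = compressive, negative = tensile/adhesive).
    R: tip radius, W: work of adhesion, E_star: effective Young's modulus."""
    return 4.0 * E_star * a**3 / (3.0 * R) - math.sqrt(8.0 * math.pi * W * E_star * a**3)

def jkr_penetration(a, R, W, E_star):
    """Penetration depth corresponding to contact radius a."""
    return a**2 / R - math.sqrt(2.0 * math.pi * W * a / E_star)

def jkr_pulloff_force(R, W):
    """Magnitude of the pull-off (maximum tensile) force,
    |F| = (3/2) pi W R, independent of E_star."""
    return 1.5 * math.pi * W * R
```

Minimizing the force over the contact radius recovers the pull-off value, which is a convenient sanity check on any implementation.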

##### 2.2. Parallelization and Implementation

In order to simulate accurately the macroscale behaviour of red blood cells, such as the formation and development of aggregation structures, a sufficient number of red blood cells must be involved in the simulation, and parallelization of the solver is therefore necessary. The fluid code CgLes was parallelized using MPI with spatial decomposition, and high scalability has been achieved [16]. For the RBCs, the decomposition of the computational domain shares the same block division as the fluid. The advantage of this scheme is that the fluid block and the solid block for the same physical space always reside on the same processor, so no remote data transfer between the fluid and the solid is required. Thus, no global data operation is needed and all datasets are handled by their local processors. The computation is fully scalable in the spatial domain, which makes very large-scale simulations (running on hundreds of cores) possible.

The procedure for the parallelization of the RBCs is as follows:

1. Obtain the spatial decomposition by inheriting the block and neighbourhood information from the fluid domain.
2. Seed red blood cells randomly on each block and carry out initialization.
3. Obtain the forces acting on the boundary nodes of the solids, which are also the forces on the Immersed Boundary points solved by the fluid code CgLes.
4. Let each processor solve all cells hosted by its local blocks at the current time level.
5. Find the cells located in the buffer areas of each block.
6. Transfer the information of the cells in the buffer areas to a local queue or a remote queue, depending on whether the neighbour block is local or remote.
7. Exchange remote data using MPI functions.
8. Exchange local data by collecting local queue information.
9. Collect the cell data posted by neighbours; if a collected cell is already present in the local block, overwrite its data with the collected data, and if not, append it as a new cell.
10. Delete cells outside of the buffer area.
11. Advance to the next time level and repeat the procedure.

The scheme for the parallelization of the solid solver for the RBCs is shown in Figure 1. For comparison, a scheme with nonbuffered data exchange (using MPI_BSEND) was also tested during implementation.
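The buffer-area bookkeeping in this procedure can be sketched serially for a 1D decomposition: each block decides which cells lie in its exchange zones and which cells to retain. The data layout below is hypothetical (the paper's code is 3D and exchanges these queues with MPI), but the selection logic is the same in spirit.

```python
def buffer_exchange_plan(cells, block_bounds, buffer_width):
    """Serial sketch of the buffer-area bookkeeping for one block of a
    1D domain decomposition (hypothetical data layout, not the paper's
    actual code).
    cells        : dict mapping cell id -> x position
    block_bounds : (lo, hi) extent of the local block along x
    buffer_width : width of the exchange zone
    Returns (send_left, send_right, keep): ids whose data must be posted
    to the left/right neighbour blocks, and ids retained locally (inside
    the block extended by its buffer); all other cells are dropped."""
    lo, hi = block_bounds
    send_left, send_right, keep = [], [], []
    for cid, x in cells.items():
        if lo - buffer_width <= x <= hi + buffer_width:
            keep.append(cid)          # cells outside the buffer are deleted
        if lo <= x < lo + buffer_width:
            send_left.append(cid)     # near the left face -> left neighbour
        if hi - buffer_width < x <= hi:
            send_right.append(cid)    # near the right face -> right neighbour
    return send_left, send_right, keep
```

Because each block only ever touches its own cells and its neighbours' queues, no global data operation is needed, which is what makes the scheme scalable.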