#### Abstract

The plasma sheath is a well-studied topic in computational electromagnetics, and the plasma case is considerably more resource-intensive than the non-plasma case. In this paper, a parallel shift-operator discontinuous Galerkin time-domain (SO-DGTD) method using the MPI (Message Passing Interface) library is proposed to solve large-scale plasma problems. To demonstrate the algorithm, a plasma sheath model of a high-speed blunt cone was established based on the results of multiphysics software, and the algorithm was used to extract the radar cross-section (RCS) of the model at different incident angles.

#### 1. Introduction

A high-speed aircraft in the atmosphere is surrounded by a plasma sheath, particularly during hypersonic reentry into the earth's atmosphere, because aerodynamic heating strongly ionizes the air near the aircraft surface [1–3]. The transmission signal between the aircraft and a ground base or relay satellite can be disrupted by the plasma sheath [4–7], resulting in the phenomenon known as the "black barrier" (communication blackout). Thus, studying the characteristics of the plasma sheath is helpful for tracking of and communication with the aircraft.

The discontinuous Galerkin time-domain (DGTD) method offers superior geometric flexibility by using unstructured meshes; unlike the finite-difference time-domain (FDTD) method [8, 9], the DGTD method avoids staircase approximation errors and suffers far less from numerical dispersion. DGTD is therefore appropriate for solving complex electromagnetic problems [10–15]. The shift-operator DGTD (SO-DGTD) method was proposed for dispersive media [16–18]; the shift operator is used to convert the frequency-domain constitutive relation into a time-domain iteration formula. The plasma sheath is a problem whose time-domain simulation consumes a large amount of computer resources. Parallel technology can improve the efficiency while enabling the simulation of such large-scale problems. Studies of parallel DGTD algorithms based on MPI or CUDA can be found in the literature. In 2006, Marc Bernacki used MPI-DGTD to analyze an airplane and the SAR distribution in a human head [19]. The interaction of electromagnetic waves with biological tissues has been simulated by a GPU-based DGTD [20]. In [21], an MPI/GPU implementation of DGTD was proposed to simulate an SRR device, and the parallel efficiency of the algorithm was analyzed. Stylianos Dosopoulos proposed an MPI-DGTD based on nonconformal meshes and a local time-stepping strategy for the simulation of IC packages [22]. A parallel nonconforming DGTD based on MPI was presented by R. Léger for the simulation of metallic nanoparticles [23]. However, the above studies do not explain their MPI schemes in detail, especially the data exchange between processes. As far as the development of DGTD is concerned, an effective and extensible parallel scheme will benefit later research, such as on complex media and heterogeneous parallel systems.

To the best of our knowledge, a parallel DGTD algorithm for plasma sheath targets has not been reported yet. In this paper, we focus on the study of DGTD in plasma media. First, we introduce the SO-DGTD method; next, a specific parallel scheme for SO-DGTD is given, and numerical examples illustrate its parallel efficiency; finally, the proposed algorithm is applied to the analysis of the scattering characteristics of a hypersonic aircraft. The results show that the sheath of an ultrahigh-speed aircraft can significantly affect its RCS over a wide frequency band.

#### 2. SO-DGTD Formulation for Plasma Medium

The semidiscrete equations of DGTD are

$$\mathbf{M}_e \frac{\partial \mathbf{D}_e}{\partial t} = \mathbf{S}_e \mathbf{H}_e + \mathbf{F}_{ee}\mathbf{H}_e + \mathbf{F}_{ee^{+}}\mathbf{H}_{e^{+}}, \qquad \mu\,\mathbf{M}_e \frac{\partial \mathbf{H}_e}{\partial t} = -\mathbf{S}_e \mathbf{E}_e + \mathbf{G}_{ee}\mathbf{E}_e + \mathbf{G}_{ee^{+}}\mathbf{E}_{e^{+}} \tag{1}$$

where $\mu$ is the permeability and $\mathbf{E}$ and $\mathbf{H}$ are the electric and magnetic fields, respectively. $\mathbf{D}$ is the electric flux density. $e$ is the index of an element located in the computational region (with $e^{+}$ denoting its adjacent element), $\mathbf{M}_e$ is the mass matrix, $\mathbf{S}_e$ is the stiffness matrix, and $\mathbf{F}$ and $\mathbf{G}$ are the flux matrices.

The constitutive relation of plasma is defined by the Drude model

$$\varepsilon(\omega) = \varepsilon_0 \varepsilon_r(\omega) = \varepsilon_0\left(1 - \frac{\omega_p^2}{\omega^2 - j\nu\omega}\right) \tag{2}$$

where $\varepsilon_0$ is the permittivity of vacuum, $\varepsilon_r$ is the relative permittivity (which is dependent on frequency), $\omega_p$ is the plasma frequency, $\nu$ is the collision frequency, and $\omega$ is the angular frequency of the electromagnetic waves. Equations (1) allow us to update $\mathbf{D}$ by using $\mathbf{H}$ and to update $\mathbf{H}$ by using $\mathbf{E}$. Obviously, we need a formula from $\mathbf{D}$ to $\mathbf{E}$ to complete the cycle $\mathbf{E}\rightarrow\mathbf{H}\rightarrow\mathbf{D}\rightarrow\mathbf{E}$. Bing Wei proved in [24] that the complex permittivity of a general dispersive medium can be described by a rational polynomial fraction:

$$\varepsilon_r(\omega) = \frac{\sum_{l=0}^{N} p_l (j\omega)^l}{\sum_{l=0}^{M} q_l (j\omega)^l} \tag{3}$$

where $M$ and $N$ are the maximum powers of the fractional polynomials and $p_l$ and $q_l$ are the expansion coefficients of the polynomials. According to (3), by using the frequency-domain/time-domain relationship ($j\omega \leftrightarrow \partial/\partial t$), we can obtain

$$\left(\sum_{l=0}^{M} q_l \frac{\partial^l}{\partial t^l}\right)\mathbf{D}^n = \varepsilon_0 \left(\sum_{l=0}^{N} p_l \frac{\partial^l}{\partial t^l}\right)\mathbf{E}^n \tag{4}$$

where $n$ denotes the time step. The relation of the time-domain differential operator and the shift operator is as follows:

$$\frac{\partial}{\partial t} = \frac{2}{\Delta t}\,\frac{z_t - 1}{z_t + 1}. \tag{5}$$

The final time-domain form corresponding to (3) is [16]

$$\left[\sum_{l=0}^{M} q_l \left(\frac{2}{\Delta t}\,\frac{z_t - 1}{z_t + 1}\right)^{l}\right]\mathbf{D}^n = \varepsilon_0\left[\sum_{l=0}^{N} p_l \left(\frac{2}{\Delta t}\,\frac{z_t - 1}{z_t + 1}\right)^{l}\right]\mathbf{E}^n \tag{6}$$

where $z_t$ represents the shift operator, defined by $z_t f^n = f^{n+1}$. The role of $z_t$ is to shift the time sampling from the $n$th step to the $(n+1)$th step, and $\Delta t$ is the time increment in the calculation.

By using this property of $z_t$, we obtain the iteration formula from (6):

$$\mathbf{D}^{n+1} = \frac{1}{a_0}\left(\varepsilon_0 \sum_{l=0}^{N} b_l\,\mathbf{E}^{\,n+1-l} - \sum_{l=1}^{M} a_l\,\mathbf{D}^{\,n+1-l}\right) \tag{7}$$

where the coefficients $a_l$ and $b_l$ follow from expanding the powers of the shift operator in (6).

For the Drude model (plasma), multiplying out $\varepsilon_r(\omega) = \left[(j\omega)^2 + \nu(j\omega) + \omega_p^2\right] / \left[(j\omega)^2 + \nu(j\omega)\right]$ gives the expansion coefficients

$$p_0 = \omega_p^2,\quad p_1 = \nu,\quad p_2 = 1,\qquad q_0 = 0,\quad q_1 = \nu,\quad q_2 = 1. \tag{8}$$

To match (7), the leap-frog scheme is used to solve (1).
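As a concrete illustration of the derivation above, the following pure-Python sketch (a scalar stand-in for the vector fields; all names are ours, not the paper's) writes the Drude permittivity as a rational fraction in powers of $j\omega$ and turns it into a D-from-E iteration of the form (7) via the shift-operator substitution $\partial/\partial t \rightarrow (2/\Delta t)(z_t-1)/(z_t+1)$, which we assume here as the concrete form of the substitution. A sanity check: setting $\omega_p = 0$ removes the plasma, so the recursion must reduce to $D^n = \varepsilon_0 E^n$.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def drude_eps_r(omega, omega_p, nu):
    """Closed-form Drude relative permittivity: 1 - wp^2/(w^2 - j*nu*w)."""
    return 1.0 - omega_p**2 / (omega**2 - 1j * nu * omega)

def drude_coeffs(omega_p, nu):
    """Drude model as a rational fraction in powers of (j*omega):
    eps_r = (p0 + p1*jw + p2*jw^2) / (q0 + q1*jw + q2*jw^2)."""
    return [omega_p**2, nu, 1.0], [0.0, nu, 1.0]

def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (low order first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def shift_coeffs(c, dt):
    """Substitute d/dt -> (2/dt)*(z-1)/(z+1) into sum_l c[l]*(d/dt)^l and
    clear the common denominator (z+1)^L; returns the coefficients of z^k."""
    L = len(c) - 1
    acc = [0.0] * (L + 1)
    for l, cl in enumerate(c):
        poly = [cl * (2.0 / dt) ** l]
        for _ in range(l):
            poly = polymul(poly, [-1.0, 1.0])  # one (z - 1) factor
        for _ in range(L - l):
            poly = polymul(poly, [1.0, 1.0])   # one (z + 1) factor
        acc = [x + y for x, y in zip(acc, poly)]
    return acc

def march_D(p, q, dt, E):
    """Scalar analogue of the iteration formula (7):
    q(d/dt) D = eps0 * p(d/dt) E, marched with zero initial history."""
    b, a = shift_coeffs(p, dt), shift_coeffs(q, dt)
    L = len(a) - 1
    D = [0.0] * len(E)
    for n in range(len(E) - L):
        rhs = EPS0 * sum(b[k] * E[n + k] for k in range(L + 1))
        rhs -= sum(a[k] * D[n + k] for k in range(L))
        D[n + L] = rhs / a[L]
    return D

# Check 1: the rational fraction reproduces the closed-form permittivity.
p, q = drude_coeffs(2e9, 1e8)          # illustrative wp, nu values
w = 3e9
jw = 1j * w
frac = (sum(c * jw**l for l, c in enumerate(p))
        / sum(c * jw**l for l, c in enumerate(q)))
assert abs(frac - drude_eps_r(w, 2e9, 1e8)) < 1e-9

# Check 2: with wp = 0 the plasma vanishes, so D must track eps0 * E.
p0, q0 = drude_coeffs(0.0, 1e8)
E = [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]     # zero history, then a ramp
D = march_D(p0, q0, 1e-12, E)
assert all(abs(D[n] - EPS0 * E[n]) < 1e-20 for n in range(len(E)))
```

In the full method, the same scalar recursion is applied independently to each degree of freedom of $\mathbf{D}$ and $\mathbf{E}$ within an element.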

#### 3. MPI Parallelization of SO-DGTD

The proposed algorithm contains three types of field variables, namely, the electric field **E**, the magnetic field **H**, and the electric displacement field **D**. In our algorithm, **E** and **H** are the data communicated between neighboring elements. The flux terms in (1) can be written as below.

For element *e*, we denote the coefficients of the flux by $\mathbf{F}_{ee}$, $\mathbf{F}_{ee^{+}}$, $\mathbf{G}_{ee}$, and $\mathbf{G}_{ee^{+}}$, where $\mathbf{F}$ and $\mathbf{G}$ denote the flux matrices. $\mathbf{E}_e$ and $\mathbf{H}_e$ denote the fields of element *e*; $\mathbf{E}_{e^{+}}$ and $\mathbf{H}_{e^{+}}$ denote the fields of the adjacent element *e*+. In parallel processing, each partition is mapped to an MPI process, and the main task is the communication of data. If *e* and *e*+ belong to the same process, the data can be read directly from memory; otherwise, the communication must be handled by MPI functions between the neighboring processes. The flow chart in Figure 1 shows the activities of the proposed algorithm.
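The local-versus-remote decision just described can be sketched as follows (Python; the data structures `elem_proc` and `neighbors` are illustrative stand-ins for the mesh mapping, not the paper's actual arrays):

```python
def classify_neighbors(elem_proc, neighbors, rank):
    """Split the neighbor pairs of this rank's elements into those whose
    fields are readable from local memory and those needing an MPI exchange."""
    local, remote = [], []
    for e, adj in neighbors.items():
        if elem_proc[e] != rank:
            continue                      # element owned by another process
        for ep in adj:
            if elem_proc[ep] == rank:
                local.append((e, ep))     # read the adjacent fields directly
            else:
                remote.append((e, ep))    # fields must arrive via MPI
    return local, remote

# Tiny example: elements 0 and 1 on rank 0, element 2 on rank 1.
elem_proc = {0: 0, 1: 0, 2: 1}
neighbors = {0: [1, 2], 1: [0], 2: [0]}
local, remote = classify_neighbors(elem_proc, neighbors, rank=0)
assert local == [(0, 1), (1, 0)] and remote == [(0, 2)]
```

The `remote` list is exactly the set of faces whose neighbor fields must be packed into MPI messages each time step.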

##### 3.1. Mesh Partition

As shown in Figure 2, a sphere is partitioned into three subdomains corresponding to process 0, process 1, and process 2. Gmsh is used to generate the tetrahedral meshes and high-quality partitions [25].

For each pair of neighboring submeshes, the boundary has no buffer elements. All communication between neighboring processes is handled with data packaging and transmission, as shown in Figure 3 and discussed in the next section.
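That packaging step can be sketched as below (Python; the names and the single scalar "field" per element are illustrative assumptions): boundary-element data bound for the same neighboring process are grouped into one buffer, so each pair of processes exchanges a single packed message per time step instead of one message per face.

```python
from collections import defaultdict

def pack_send_buffers(remote_faces, elem_proc, fields):
    """remote_faces: (local_elem, remote_elem) pairs crossing the partition
    boundary; fields: local_elem -> boundary data.  Groups the data by the
    neighbor's owning rank into one send buffer per neighboring process."""
    buffers = defaultdict(list)
    for e, ep in remote_faces:
        buffers[elem_proc[ep]].append(fields[e])
    return dict(buffers)

# Rank 0 owns elements 10-12; their remote neighbors live on ranks 1 and 2.
elem_proc = {1: 1, 2: 1, 3: 2}
remote_faces = [(10, 1), (11, 2), (12, 3)]
fields = {10: 1.5, 11: 2.5, 12: 3.5}
bufs = pack_send_buffers(remote_faces, elem_proc, fields)
assert bufs == {1: [1.5, 2.5], 2: [3.5]}   # one buffer per neighbor rank
```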

##### 3.2. Communication and Calculation Setup

The key problem of the iterative calculation is handling the communication between neighboring processes. As shown in Figure 4, a tetrahedron belongs to process **M**, and one of its neighboring tetrahedrons also belongs to process **M**, but another neighboring tetrahedron belongs to a different process **N**. Thus, the data between the first pair can be read directly, whereas the data between the second pair must be communicated between process **M** and process **N**. The communication of the parallel computation is implemented via the mapping of the meshes.

Figure 4 illustrates the communication between neighboring processes. In addition, the adjacency relations among all processes and elements (tetrahedrons) must be established, as shown in Table 1.

In the calculation of DGTD, we need to establish a topology table for each process; the topology tables are extracted from the meshes. The send buffers and receive buffers are established based on the topology tables. This part is particularly important for parallel computation. The MPI_SENDRECV() function is used to implement the communication. Part of the code is as follows:

```fortran
Do i = 1, N_out
   n = N_out_NODE(i)
   CALL MPI_SENDRECV(E_cache_out(1:8, 1:Num_NODE(n), n), 8*Num_NODE(n), &
                     MPI_REAL, m1, 99+n,                                &
                     E_cache_in(1:8, 1:Num_NODE(n), n), 8*Num_NODE(n),  &
                     MPI_REAL, m1, 99+MYTIDS,                           &
                     MPI_COMM_WORLD, STATUS, IERR)
Enddo
```

In the code above, the MPI_SENDRECV() function runs only between neighboring processes. This code achieves the desired result and efficiency.

Subsequently, these procedures are packaged into a solution wrapper, as shown in Figure 5. It should be noted that the parallel scheme is directed at dispersive media: in each iteration the scheme calculates the **H** fields first, followed by the **D** fields, and the **E** fields last. This process repeats many times until the program ends. The scheme has the advantages of openness, convenience, and good extensibility.
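The per-step activity of such a wrapper can be sketched as below; the routine names and their bodies are placeholders for the paper's actual kernels, shown only to illustrate a fixed update order within one iteration:

```python
def time_step(state, exchange, update_h, update_d, update_e):
    """One iteration of the solution wrapper: exchange boundary data,
    then advance the three kinds of fields in a fixed order."""
    exchange(state)   # MPI_SENDRECV of cached boundary field data
    update_h(state)   # advance the magnetic fields (leap-frog)
    update_d(state)   # advance the electric flux density from H
    update_e(state)   # recover E from D via the dispersive-medium recursion
    return state

# Record the call order with trivial stand-in kernels.
calls = []
def log(tag):
    return lambda state: calls.append(tag)

time_step({}, log("exchange"), log("H"), log("D"), log("E"))
assert calls == ["exchange", "H", "D", "E"]
```

Keeping the communication and the three field updates behind one wrapper is what makes the scheme easy to extend, e.g. to other dispersive media models.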

##### 3.3. Parallel Efficiency

*Example 1. *The monostatic RCS of a plasma sphere is simulated to demonstrate the parallel efficiency. The plasma sphere has a radius of 1 m, and the mesh size is 0.05 m. Three runs with different numbers of processes are used to test the program (Figure 6). The computing platform is the Xidian high-performance computing center.

Good agreement is observed between the numerical results and the Mie analytical solution, as shown in Figure 6. Compared with the Mie solution, the root-mean-square error of our algorithm is 1.5370 dB.
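The error metric can be reproduced by the short sketch below (the RCS samples here are invented for illustration; the paper's angular sampling is not shown):

```python
from math import sqrt

def rmse_db(numerical, reference):
    """Root-mean-square deviation between two RCS curves (both in dB)
    sampled at the same set of angles."""
    assert len(numerical) == len(reference)
    return sqrt(sum((a - b) ** 2 for a, b in zip(numerical, reference))
                / len(numerical))

mie = [10.0, 8.0, 5.0, 7.0]    # illustrative reference samples (dBsm)
dgtd = [11.0, 7.0, 5.0, 9.0]   # illustrative numerical samples (dBsm)
err = rmse_db(dgtd, mie)       # sqrt((1 + 1 + 0 + 4) / 4)
assert abs(err - sqrt(1.5)) < 1e-12
```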

In Figure 7, 24 processes are chosen as the benchmark, and the parallel efficiency with 72 processes is 90.1%. The proposed algorithm thus performs well in this example.
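With 24 processes taken as the baseline, the efficiency figure follows the usual strong-scaling definition, efficiency(N) = (N_ref · T_ref) / (N · T_N); the timings in this sketch are invented for illustration:

```python
def parallel_efficiency(n_ref, t_ref, n, t):
    """Strong-scaling efficiency of an n-process run relative to a
    baseline run with n_ref processes taking t_ref seconds."""
    return (t_ref * n_ref) / (t * n)

# Perfect scaling: tripling the processes cuts the run time to a third.
assert abs(parallel_efficiency(24, 300.0, 72, 100.0) - 1.0) < 1e-12
# Sub-linear scaling: doubling the processes gains less than 2x.
assert abs(parallel_efficiency(24, 300.0, 48, 160.0) - 0.9375) < 1e-12
```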

*Example 2. *The previous example shows that the proposed algorithm is practical and efficient. Next, we calculate the monostatic RCS of a plasma-coated PEC sphere. A sphere of radius 0.5 m is coated with a plasma layer 0.2 m thick, as shown in Figure 8. The plasma frequency and the collision frequency of the plasma are and , respectively.

The model is discretized into 3,276,319 tetrahedrons. Three runs with 24, 48, and 72 processes are performed. The computing platform is the Xidian high-performance computing center.

Mie theory provides an analytical solution for a spherical target, so the Mie solution is used as the reference in this example, as shown in Figure 9 (cyan line). The monostatic RCS results of the different runs are also shown in Figure 9, where the black line, red dashes, and blue dots represent the results of 24, 48, and 72 processes, respectively. The Mie solution is consistent with the numerical results.

As seen in Figure 10, the parallel efficiency with 72 processes is 91.50%. The proposed algorithm again performs well in this example.

These results illustrate that the proposed parallel scheme is effective and that the parallel SO-DGTD method provides reliable results. The parallel SO-DGTD method can therefore be used to analyze plasma sheath targets, as discussed in the next section.

#### 4. Simulation Results of Hypersonic Target

The target is simplified as a PEC blunt cone. The cone is a rotationally symmetric target with a bottom radius of 0.05 m and a height of 0.3 m. The flight altitude is 50 km and the velocity is Mach 20. The parameters of the plasma sheath around the target at this altitude and velocity were calculated by COMSOL.

The section view is shown in Figure 12. Because the data span a very large range, the plasma concentration is plotted on a log10 scale. *ne* represents the plasma concentration, and the unit of the plasma temperature is K. The Lagrange interpolation technique was used to obtain the parameters at the sampling points for the DGTD calculation.
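As a one-dimensional illustration of this interpolation step (the actual transfer from the COMSOL grid onto the DGTD sampling points is three-dimensional, and the sample data below are invented):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at the location x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Three samples of f(x) = x**2 are reproduced exactly by the quadratic.
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
assert abs(lagrange_interp(xs, ys, 1.5) - 2.25) < 1e-12
```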

The meshes are assigned to a cluster of 240 processes. The number of tetrahedrons per process is given in Figure 13, showing that the load is well balanced. Three incident angles are used in the simulation (Figure 11). VV denotes transmit and receive polarization vertical to the axis, and HH denotes transmit and receive polarization horizontal to the axis. Moreover, we consider both the cone with a sheath and the cone without a sheath.

**(a) Plasma concentration (log10(ne))**

**(b) Plasma temperature (K)**

Table 2 provides the resource consumption of this example. There were 12 sets of parameters in this example, and the average CPU time is the mean over the 12 tests.

In Figure 14, the black line represents the result of the cone with a sheath, whereas the red dashes represent the result of the cone without a sheath. For Figure 14(a), because the plasma sheath is mainly distributed over the head of the cone, the wave is strongly attenuated in this case. Thus, the RCS of the cone with a sheath is less than that without a sheath. Note that VV and HH return the same results for this incident direction, because the cone is axially symmetric.

**(a) VV**

**(b) HH**

**(c) VV**

**(d) HH**

**(e) VV**

**(f) HH**

For Figure 14(c), the results of the cone with and without a plasma sheath are almost the same. In the HH case (Figure 14(d)), the RCS of the cone with a sheath is less than that without a sheath below 8 GHz, above which the two are similar. For Figures 14(e) and 14(f), there is little difference between the results with and without a sheath in either the VV or the HH case. Considering that only a thin plasma sheath lies at the tail of the cone, the sheath is not effective for the latter two cases.

#### 5. Conclusions

In this paper, we provided a parallel framework for the SO-DGTD method using MPI as the communication pattern, covering mesh partitioning, interprocess communication, and time-domain iteration. The algorithm was validated by the examples of a plasma sphere and a plasma-coated sphere. For a hypersonic target, the plasma is mainly distributed over the head of the target; the results showed that when the electromagnetic wave vector points along the head of the cone, namely, the main distribution area of the plasma sheath, the reflection is noticeably reduced by the sheath. When a change in incident angle moves the wave vector away from the main distribution area of the plasma sheath, the difference between the results of the cone with and without the sheath declines.

#### Data Availability

No data were used to support this study.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

#### Acknowledgments

This work was supported by the National Natural Scientific Foundation of China (61571348, 61231003, 61401344).