High-Performance Computing and Engineering Applications in Electromagnetics
Research Article | Open Access
RCS Computation by Parallel MoM Using Higher-Order Basis Functions
Abstract
A Message-Passing Interface (MPI) parallel implementation of an integral-equation solver that uses the Method of Moments (MoM) with higher-order basis functions is proposed to compute the Radar Cross-Section (RCS) of various targets. The block-partitioned scheme for the large dense MoM matrix is designed to achieve excellent load balance and high parallel efficiency. Numerical results demonstrate that the higher-order basis in this parallelized scheme is more efficient than the conventional RWG method and can efficiently analyze the RCS of various electrically large platforms.
1. Introduction
Radar Cross-Section (RCS) computation of electrically large platforms has attracted a great deal of attention in the past few decades. One traditional and widely adopted method is the Method of Moments (MoM) [1]. However, when the operating frequency is high, MoM based on the Rao-Wilton-Glisson (RWG) basis functions [2, 3] produces a very large number of unknowns for electrically large structures. To reduce the number of unknowns and to accelerate the computation, the fast multipole method (FMM) is a feasible approach. Although this technique achieves that goal to some extent, convergence problems may arise when the model to be simulated is complex. Another choice is to use higher-order polynomials over wires and quadrilateral plates as basis functions over larger subdomain patches [4, 5]. The use of higher-order basis functions significantly reduces the number of unknowns. It should be noted, however, that higher-order bases suit large smooth structures and are less beneficial for finely detailed ones. In addition, to reduce the total wall-clock time, the large dense MoM matrix is divided into a number of small block matrices of nearly equal size that are distributed among all the available processes in the parallel method.
In this paper, a parallel in-core MoM solver combined with higher-order polynomial basis functions (HOBs) is employed on high-performance clusters, significantly extending the capability of the MoM. This technique is capable of solving electrically large scattering problems [3, 5, 6] of several hundred wavelengths in the maximum dimension.
Section 2 of this paper presents the basic theory of the higher-order basis functions and the matrix partition scheme. Section 3 describes the computation platforms. Section 4 gives numerical examples that validate the accuracy, efficiency, and applicability of the method: Section 4.1 demonstrates the convergence of the higher-order basis MoM; Section 4.2.1 validates the accuracy of the method against measured results; Section 4.2.2 shows that higher-order bases significantly reduce the number of unknowns and effectively shorten the computation time; Section 4.3 examines the parallel efficiency of the parallel scheme used in this paper; and Section 4.4 illustrates a real-life RCS computation for a missile whose maximum electrical dimension exceeds one hundred wavelengths. Finally, Section 5 presents the conclusions, followed by the acknowledgments.
2. Basic Theory
2.1. Higher-Order Basis Functions
Flexible geometric modeling can be achieved by using truncated cones for wires and bilinear patches to characterize surfaces [4]. The surface current over a bilinear surface is decomposed into its p- and s-components, as shown in Figure 1(a). The p-component can be treated as the s-component defined over the same bilinear surface with an interchange of the p and s coordinates. The s-components of the electric and magnetic currents over a bilinear surface are approximated by polynomial expansions in the local coordinates p and s.
Figure 1: (a) Bilinear surface. (b) Right truncated cone.
The edge basis functions and the patch basis functions are expressed by (3) and (4), respectively, where a_p and a_s are the unitary vectors defined as a_p = ∂r/∂p and a_s = ∂r/∂s. The parametric equation of such an isoparametric element can be written in the following form: r(p, s) = [r11(1 − p)(1 − s) + r12(1 − p)(1 + s) + r21(1 + p)(1 − s) + r22(1 + p)(1 + s)]/4, where r11, r12, r21, and r22 are the position vectors of its vertices and p and s (−1 ≤ p, s ≤ 1) are the local coordinates.
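As an illustrative aside (a sketch, not the authors' code), the standard bilinear-patch parametric equation can be evaluated numerically; the corner vectors and the flat test patch below are assumptions chosen only for the example.

```python
# Hypothetical sketch: evaluate the bilinear-patch parametric equation
# r(p, s) used for higher-order MoM surfaces, with local coordinates
# p, s in [-1, 1] and the four corner position vectors r11, r12, r21, r22.

def bilinear_point(r11, r12, r21, r22, p, s):
    """Position vector on the bilinear surface at local coordinates (p, s)."""
    weights = [(1 - p) * (1 - s), (1 - p) * (1 + s),
               (1 + p) * (1 - s), (1 + p) * (1 + s)]
    corners = [r11, r12, r21, r22]
    return tuple(sum(w * c[i] for w, c in zip(weights, corners)) / 4.0
                 for i in range(3))

# A flat unit square in the z = 0 plane (assumed test geometry):
r11, r12, r21, r22 = (0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)
center = bilinear_point(r11, r12, r21, r22, 0.0, 0.0)
# center is the average of the four corners: (0.5, 0.5, 0.0)
```

For curved patches the same weights interpolate non-coplanar corners, which is what makes the bilinear element geometrically flexible.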
A right truncated cone is determined by the position vectors and the radii of its beginning and its end, respectively, as shown in Figure 1(b). Generalized wires (i.e., wires that have a curvilinear axis and a variable radius) can be approximated by right truncated cones.
Currents along wires are approximated by polynomials and written as an expansion over node basis functions and segment basis functions; the segment functions carry the polynomial expansion coefficients, while the node functions are weighted by the values of the currents at the wire ends.
The parametric equation of the cone surface can be written in terms of the circumferential angle, measured around the cone axis, and the radial unit vector, perpendicular to the cone axis.
2.2. The Matrix Partition Scheme
Assume that the MoM matrix is a large dense matrix; it can then be divided into smaller blocks and distributed over a process grid [6]. For explanation purposes, the MoM matrix equation is rewritten in the general form ZI = V, where Z denotes the complex dense matrix, I is the unknown vector to be determined, and V denotes the given source vector.
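Once the distributed factorization completes, such a dense system (written here as Z I = V) is solved directly. A serial, illustrative sketch of a direct solve on a tiny complex system follows; it is an assumption-laden stand-in for the ScaLAPACK routines actually used, with made-up matrix entries.

```python
# Minimal sketch of solving the dense MoM system Z I = V by Gaussian
# elimination with partial pivoting. The production solver is parallel
# and block-distributed; this serial toy only illustrates the algebra.

def solve_dense(Z, V):
    """Direct solve of a small complex linear system Z I = V."""
    n = len(Z)
    A = [row[:] + [v] for row, v in zip(Z, V)]      # augmented matrix
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]                 # partial pivoting
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= f * A[k][c]
    I = [0j] * n                                    # back substitution
    for k in range(n - 1, -1, -1):
        I[k] = (A[k][n] - sum(A[k][c] * I[c] for c in range(k + 1, n))) / A[k][k]
    return I

Z = [[2 + 0j, 1 + 0j], [1 + 0j, 3 + 0j]]            # assumed toy entries
V = [5 + 0j, 10 + 0j]
I = solve_dense(Z, V)                               # exact solution: [1, 3]
```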
Assume that the matrix is divided into blocks that are distributed to 6 processes arranged in a process grid, as illustrated in Figure 2(a). Figure 2(b) shows to which process each block is distributed using ScaLAPACK’s distribution methodology.
Figure 2: (a) Block partitioning of the matrix. (b) Block-cyclic mapping of the blocks onto the process grid.
In Figure 2(a), the outermost numbers denote the row and column indices of the process coordinates. The top and bottom numbers in any block of Figure 2(b) denote the process rank and the process coordinates, respectively, of the process that stores the corresponding block of the matrix shown in Figure 2(a). By varying the dimensions of the blocks and those of the process grid, different mappings can be obtained. This scheme is referred to as a block-cyclic distribution procedure.
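The mapping can be sketched in a few lines. In a ScaLAPACK-style 2-D block-cyclic layout, block (i, j) is owned by process coordinate (i mod Pr, j mod Pc) in a Pr × Pc process grid; the 2 × 3 grid shape below matches the 6-process example, while the 4 × 6 grid of blocks is an assumption chosen to show the balance.

```python
# Hedged sketch of 2-D block-cyclic ownership (ScaLAPACK convention):
# block (i, j) of the partitioned matrix lives on process (i % Pr, j % Pc).
from collections import Counter

Pr, Pc = 2, 3                      # process grid: 2 rows x 3 columns = 6 processes

def owner(i, j):
    """Process coordinate that stores block (i, j)."""
    return (i % Pr, j % Pc)

# Count blocks per process for an assumed 4 x 6 grid of blocks.
load = Counter(owner(i, j) for i in range(4) for j in range(6))
# Every one of the 6 processes owns exactly 4 blocks: perfect balance.
```

This uniform ownership count is exactly the load-balance property the partition scheme relies on.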
Load balancing is critical for the efficient operation of a parallel code. This parallel scheme achieves good load balancing, and little interprocess communication is necessary during the matrix-filling process [4].
It should also be mentioned that the degree of the higher-order basis is determined by the maximum edge length of the corresponding plate, and the same load-balancing scheme is used regardless of the order of the basis functions.
3. Description of the Computation Platforms
To illustrate the versatility of the solver, two representative computer platforms have been chosen.
(1) Personal computer: quad-core Intel i5 processor (2.67 GHz) with 4 GB RAM and 500 GB of hard disk.
(2) Shanghai Supercomputer Center (SSC): 37 nodes of the Magiccube machine with a total of 592 AMD CPU cores (1.9 GHz per CPU and 4 cores on each CPU): 16 CPU cores on each node and 4 GB RAM per core, for a total of approximately 2.3 TB of RAM. No hard-disk storage is available for computation. InfiniBand is used for the network interconnect.
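A back-of-the-envelope check (an assumption-laden aside, not a figure from the paper) relates the cluster's RAM to the solvable problem size: an in-core dense MoM matrix of N double-precision complex entries per side needs 16·N² bytes, so about 2.3 TB of memory bounds N. The 80% usable-memory fraction below is an assumption.

```python
# Hypothetical sizing estimate: largest dense in-core MoM problem that
# fits in a given amount of RAM, assuming 16 bytes per complex entry
# and an assumed usable-memory fraction.
import math

def max_unknowns(ram_bytes, usable_fraction=0.8):
    """Largest N whose 16*N**2-byte matrix fits in the usable RAM."""
    return int(math.sqrt(usable_fraction * ram_bytes / 16))

N = max_unknowns(2.3e12)   # on the order of a few hundred thousand unknowns
```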
4. Numerical Results and Discussion
4.1. The Accuracy versus the Order of Basis Function
In this benchmark, a PEC sphere is used to test the relationship between simulation accuracy and the order of the basis functions. The radius of the sphere is 1.0 m and the simulation frequency is 1 GHz. The incident direction and the XOY observation plane are illustrated in Figure 3. The parallel higher-order MoM with 512 CPUs is employed to calculate the bistatic RCS (dB). The results obtained for different basis orders are compared in Figure 4, and the corresponding simulation data are listed in Table 1.

Figure 4 shows that the results are stable when the order of the basis functions ranges from two to five, so any order in this range is a reasonable choice for this model. Table 1 lists the corresponding simulation data: the number of unknowns grows with the basis order, and the required simulation time increases accordingly.
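A simple sanity reference for this sphere (a hedged aside, not the paper's computation): in the optical limit the monostatic RCS of a PEC sphere approaches its geometric cross-section πa², and at 1 GHz the 1 m sphere is roughly 21 wavelengths in circumference, well into that regime.

```python
# Optical-limit RCS reference for the 1 m PEC sphere at 1 GHz.
# This is the classical high-frequency limit, not the bistatic Mie result.
import math

a = 1.0                                      # sphere radius, m
wavelength = 3e8 / 1e9                       # 0.3 m at 1 GHz
ka = 2 * math.pi * a / wavelength            # electrical size, ~20.9
rcs_dbsm = 10 * math.log10(math.pi * a**2)   # geometric cross-section in dBsm
```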
4.2. Comparison with the Measurement Results and Parallel FMM with RWG Basis
To validate the accuracy and efficiency of the proposed parallel higher-order basis MoM methodology, two benchmarks, a truncated cone and a Y-8 airplane, are simulated and their RCS computed.
4.2.1. Truncated Cone [7]
This benchmark is an end-capped truncated cone oriented along the z-axis and centred in the XOY plane (illustrated in Figure 5). The elevation angle (θ) is taken from the positive z-axis and the azimuth angle (φ) from the positive x-axis. There are several interesting points about this target. First, it shows the RCS response of targets with single curvature (common in structural parts of an aircraft, such as the fuselage). It is also useful for studying the diffraction mechanism at curved edges, and reflection from planar surfaces with curved edges can be observed as well. This target is therefore especially suitable for validating predictions for objects with flat surfaces delimited by curved edges and for evaluating curved-edge contributions.
This model has been simulated at 7 GHz. The RCS pattern, for HH polarization, sweeps the observation angle from 0° in 1° steps, with the incident direction perpendicular to the generatrix. The simulation is performed on the first computer platform described above.
Figure 6 shows the RCS pattern of the truncated cone for HH polarization at 7 GHz. Three main lobes are clearly defined. Two of them correspond to specular reflection from the two bases of the cone, the minor lobe from the smaller base and the major lobe from the larger one, with the different levels resulting from the different areas of the corresponding bases. The third main lobe corresponds to the angle at which the generatrix is perpendicular to the incident direction. Diffraction from the curved edges becomes important in the intermediate region between the main lobes. The RCS pattern is compared with measured results and good agreement is observed.
4.2.2. Y-8 Airplane
This benchmark is a real aircraft, the Y-8 (illustrated in Figure 7). The elevation angle (θ) is taken from the positive z-axis and the azimuth angle (φ) from the positive x-axis. The operating frequency is 100 MHz. The airplane model is 36.2 m long, 38 m wide, and 10.5 m high; the corresponding electrical sizes of the model are 12.1λ, 12.7λ, and 3.5λ, where λ is the free-space wavelength at the operating frequency. The HH-polarized incident wave propagates along the negative axis.
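The quoted electrical sizes follow directly from dividing the physical dimensions by the 3 m free-space wavelength at 100 MHz:

```python
# Check of the Y-8 electrical sizes: physical dimensions over the
# free-space wavelength at the 100 MHz operating frequency.
wavelength = 3e8 / 100e6            # 3.0 m
dims_m = (36.2, 38.0, 10.5)         # length, wingspan, height
electrical = tuple(round(d / wavelength, 1) for d in dims_m)
# electrical == (12.1, 12.7, 3.5), matching the values quoted in the text
```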
In this simulation, the order of the higher-order basis is three. The FMM parameters are as follows.
(1) Number of levels: 6.
(2) Top level: 3.
(3) Number of boxes at the top level: 27.
(4) Number of boxes at the finest level: 1937.
(5) Finest box size: 0.23λ.
The simulation is performed on the first computer platform described above. The bistatic RCS results obtained with the proposed parallel higher-order basis MoM and with the parallel FMM are plotted together in Figure 8. As the figure shows, the results agree very well from 15° to 345°; the only considerable discrepancy occurs in the nose region of the plane, for angles from 0° to 15° and from 345° to 360°.
The comparisons of some computation parameters are listed in Table 2.

From Table 2, one can see that the higher-order basis adopted in the proposed method yields far fewer unknowns than the RWG-based FMM. The total computation time of the proposed method is only about 23.6% of that required by the parallel FMM, i.e., the proposed method is about 4.2 times faster. This benefit comes not only from the smaller number of unknowns afforded by the higher-order basis but also from the parallel matrix partition scheme.
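The speedup figure is just the reciprocal of the timing ratio:

```python
# If the higher-order solver needs 23.6% of the FMM wall time,
# the speedup is 1 / 0.236, roughly 4.2x.
ratio = 0.236
speedup = 1.0 / ratio
```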
4.3. Parallel Efficiency of Mirage’s RCS Computation
In this benchmark, the parallel efficiency of the Mirage RCS computation has been measured for different numbers of processes. The model of the Mirage aircraft is described in Figure 9, and its geometric dimensions are 11.3 m × 7 m × 2.85 m. The operating frequency is 1.25 GHz, so the corresponding electrical dimensions are about 47.1λ × 29.2λ × 11.9λ, where λ is the free-space wavelength. The model is placed as shown in Figure 9 and is excited by a VV-polarized plane wave propagating along the negative axis.
Taking the time for 16 processes as the reference, the parallel efficiencies for this simulation are shown in Figure 10, and the simulation times for different numbers of CPUs are listed in Table 3. For 16, 32, 64, and 128 processes, the parallel efficiencies in terms of wall time are all higher than 80%, demonstrating that the proposed method achieves excellent parallel efficiency and effectively reduces the computation time.
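The quoted efficiencies use the standard definition relative to a reference run; a small sketch follows, where the 800 s and 120 s timings are hypothetical placeholders, not the values in Table 3.

```python
# Parallel efficiency relative to an n_ref-process reference run:
# efficiency = (t_ref * n_ref) / (t_n * n).
def efficiency(t_ref, n_ref, t_n, n):
    """Parallel efficiency of an n-process run versus the reference."""
    return (t_ref * n_ref) / (t_n * n)

# Hypothetical example: 16 processes take 800 s, 128 processes take 120 s.
e = efficiency(800.0, 16, 120.0, 128)   # ~0.83, i.e. 83% efficiency
```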

4.4. RCS Computation of Benchmark with Electrically Large Dimension
In the following examples, the elevation angle (θ) is taken from the positive z-axis and the azimuth angle (φ) from the positive x-axis. The simulations are performed on the second computer platform described in Section 3. The first part of this section tests the parallel speedup ratio for computing the bistatic RCS of a real missile.
The missile model is placed along the axis, as illustrated in Figure 11, and Table 4 lists the corresponding parameters of this model.

This benchmark operates at 5.0 GHz, for which the number of unknowns is 69,247. The simulation times for different numbers of processes and the resulting parallel speedup ratios are listed in Table 5 and Figure 12, respectively. Taking the time for 64 processes as the reference, Figure 12 shows that the parallel speedup is nearly linear. For 192 and 256 processes, however, the parallel efficiencies fall below the theoretical values: as the number of processes grows, the ratio of communication volume to computation increases for this fixed problem size, which degrades performance. Even so, the parallel efficiencies in all four cases remain above 80%, confirming the good parallel performance of the proposed method.

Consider next the missile’s bistatic RCS and its surface current distribution.
Table 6 lists the corresponding parameters of this benchmark, and Table 7 summarizes the computation results. The RCS results and the surface current distribution of this benchmark are shown in Figures 13 and 14, respectively.


(a) XOY plane; (b) XOZ plane.
This benchmark makes it clear that the proposed method can handle electrically large scattering problems of hundreds of wavelengths in the maximum dimension.
5. Conclusion
In this paper, RCS computation of electrically large platforms using a parallel MoM technique with higher-order basis functions has been presented. A load-balanced parallel method is achieved through a matrix partition scheme, shortening the total wall-clock time for solving the large dense matrix. The accuracy, efficiency, and applicability of the method are validated through several numerical examples. In conclusion, the proposed method can solve electrically large problems, with high accuracy and short computation time, that are beyond the reach of the conventional RWG MoM, and it contributes to ongoing research on numerically accurate solutions for electrically large problems.
Acknowledgments
This work is partly supported by the Fundamental Research Funds for the Central Universities of China (JY10000902002, K50510020017) and the National Science Foundation of China (61072019). This work is also supported by Shanghai Supercomputer Center of China (SSC).
References
 R. F. Harrington, Field Computation by Moment Methods, IEEE Series on Electromagnetic Waves, IEEE, New York, NY, USA, 1993.
 S. M. Rao, D. R. Wilton, and A. W. Glisson, “Electromagnetic scattering by surfaces of arbitrary shape,” IEEE Transactions on Antennas and Propagation, vol. AP-30, no. 3, pp. 409–418, 1982.
 Y. Zhang, M. Taylor, T. K. Sarkar et al., “Parallel in-core and out-of-core solution of electrically large problems using the RWG basis functions,” IEEE Antennas and Propagation Magazine, vol. 50, no. 5, pp. 84–94, 2008.
 Y. Zhang and T. K. Sarkar, Parallel Solution of Integral Equation Based EM Problems in the Frequency Domain, Wiley, Hoboken, NJ, USA, 2009.
 Y. Zhang, M. Taylor, T. K. Sarkar, H. G. Moon, and M. Yuan, “Solving large complex problems using a higher-order basis: parallel in-core and out-of-core integral-equation solvers,” IEEE Antennas and Propagation Magazine, vol. 50, no. 4, pp. 13–30, 2008.
 Y. Zhang, T. K. Sarkar, M. Taylor, and H. Moon, “Solving MoM problems with million level unknowns using a parallel out-of-core solver on a high performance cluster,” in Proceedings of the IEEE International Symposium on Antennas and Propagation and USNC/URSI National Radio Science Meeting (APS-URSI '09), Charleston, SC, USA, June 2009.
 R. Fernández-Recio, A. Jurado-Lucena, B. Errasti-Alcalá, D. Poyatos-Martínez, D. Escot-Bocanegra, and I. Montiel-Sánchez, “RCS measurements and predictions of different targets for radar benchmark purpose,” in Proceedings of the International Conference on Electromagnetics in Advanced Applications (ICEAA '09), pp. 443–446, September 2009.
Copyright
Copyright © 2012 Ying Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.