Mathematical Problems in Engineering


Research Article | Open Access

Volume 2020 |Article ID 3635785 | https://doi.org/10.1155/2020/3635785

Kun-Chou Lee, "Analysis for Mutual Impedance of Pistons by Neural Network and Its Extension of Derivative", Mathematical Problems in Engineering, vol. 2020, Article ID 3635785, 8 pages, 2020. https://doi.org/10.1155/2020/3635785

Analysis for Mutual Impedance of Pistons by Neural Network and Its Extension of Derivative

Academic Editor: Georgios I. Giannopoulos
Received 22 Sep 2019
Accepted 19 Feb 2020
Published 12 Mar 2020

Abstract

This study is basically a mathematical problem in sonar engineering. The sonar plays a very important role in underwater communication, detection, and remote sensing. Pistons are key sensors in a sonar system. The mutual coupling is a challenging problem in designing a sonar array. The mutual impedance of pistons is required in analyzing the mutual coupling of a sonar array. In this paper, a mathematical model consisting of a neural network and its extension of derivative is given and then utilized to analyze the mutual impedance of pistons. Initially, the mutual impedance of pistons is modelled and predicted by a neural network. By suitably extending the neural network, the derivative, i.e., slope information, for the neural-network output is obtained easily. Therefore, the mutual impedance and its slope information are obtained simultaneously almost in real time as the neural network is well trained in advance. Numerical examples show that the neural network can accurately predict the mutual impedance and its extension of derivative gives the slope information of mutual impedance simultaneously. It should be emphasized that the training work of a neural network is performed only once, i.e., only the training work in mapping the mutual impedance is required. No additional training work is required in obtaining the slope information.

1. Introduction

In the underwater environment, the acoustic wave-based sonar [1, 2] is often utilized for communication, detection, and remote sensing because electromagnetic waves attenuate rapidly in water. The piston is one type of acoustic sensor and plays a very important role in an underwater sonar system. As an array structure of pistons is utilized in a sonar system, the mutual coupling between different pistons affects the array performance. Therefore, understanding and interpretation of mutual coupling [3, 4] within pistons of a sonar array are important and required. According to [5], the mutual coupling effect within a communication array can be modelled and analyzed from the mutual impedance between any two array elements. In other words, the mutual impedance of pistons is required in designing an underwater sonar array including mutual coupling effects. However, the computation for the mutual impedance of pistons often involves multidimensional integrals [6] and is therefore difficult and time consuming. This motivates us to utilize the mathematical techniques of the neural network and its extension to analyze the mutual impedance of pistons.

The neural network (NN) [7] is an important mathematical technique. It belongs to machine learning and has many applications in engineering. These applications are basically mathematical problems in engineering [8–10]. The NN serves as a black box of nonlinear mapping that accepts certain inputs and produces certain outputs. The term “black box” means the relation between the input and output is very complicated and difficult to characterize. In general, two situations require the aid of neural-network modelling. One is that the theoretical computations are difficult or time consuming. The other is that the practical experiments are difficult to implement. The neural-network machine-learning black box is expected to replace the difficult theoretical computations or practical experiments. There have been widespread applications of the neural network to different engineering problems, e.g., [7–15]. In reference [7], several types of neural networks are utilized to model the relation between the input and output of different fundamental electromagnetic problems. Since the electromagnetic problems involve difficult wave theories and are usually difficult in experiments, neural networks become good candidates for solving such problems. In reference [11], the neural network is deformed to achieve optimization of antenna arrays. The neural network can also be extended to calculate multidimensional integrals of engineering problems, e.g., [12]. In reference [13], the neural network is utilized to model antennas attached with nonlinear electronic components including mutual coupling effects. In addition to forward problems, the neural network can also be utilized to treat inverse problems, e.g., microwave imaging [14]. In reference [15], the neural network is utilized to model the mutual coupling effects within an antenna array.

However, most applications of neural networks to engineering problems involve nonlinear mapping only, e.g., [7–15], as introduced above. In some applications, knowledge of the slope information of the system output is necessary. This motivates us to develop a model that consists of both the neural network and its extension of derivative to help one design such a system.

In this paper, a mathematical model consisting of a neural network and its extension of derivative is given and then utilized to analyze the mutual impedance of pistons. Initially, the mutual impedance between two pistons within a sonar array is modelled and predicted by an RBF-NN (radial basis function neural network) [7]. By suitably extending this neural network, the derivative, i.e., slope information, for the output of the RBF-NN is obtained easily. The RBF-NN is trained to serve as the nonlinear mapping for the mutual impedance of pistons. There is one node in the input layer of RBF-NN to represent the geometry of the two array elements. There is also one node in the output layer of RBF-NN to represent the mutual-impedance magnitude between the two pistons. There still exist some nodes and transfer functions in the hidden layer for nonlinear mapping. The RBF-NN is trained by some existing data sets. After the RBF-NN is well trained, all the weights connecting different nodes within the neural network are determined. The output can then be predicted by simple algebraic computations from the weights and transformation functions within the neural network. The derivative for the output of RBF-NN can be easily obtained by suitably extending the RBF-NN. Since the output of an RBF-NN is the linear combination of node weights and nonlinear transfer functions within the neural network, the derivative for the output of an RBF-NN can then be replaced by differential operations upon this linear combination. The node weights are constant after the neural network is well trained. The nonlinear transfer functions are known and differentiable in general. Therefore, the derivative for the output of RBF-NN can be easily transformed into the derivative for these nonlinear transfer functions. This transformation makes the computation of the derivative very simple and straightforward. An extension of RBF-NN is then developed to represent the derivative. 
It should be noted that no additional training work is required in predicting the derivative of neural-network output. In other words, the training work is performed only once.

In Section 2, the neural network and its extension of derivative are given. The application to mutual impedance of pistons is given in Section 3. Numerical examples are given in Section 4. Finally, the conclusion is given in Section 5.

2. Neural Network and Its Extension of Derivative

In this section, a model consisting of a neural network and its extension of derivative is given. The neural network used in this study is the RBF-NN [7]. As shown in Figure 1 (the part below the horizontal dotted line), the RBF-NN has three layers, which are the input layer, the hidden layer, and the output layer. There is one node, i.e., y, in the output layer to represent the output. There is one node, i.e., x, in the input layer to represent the piston parameter. There still exist J nodes in the hidden layer for nonlinear mapping. In fact, there is no limitation on the node number of the input or output layer. However, we choose one node in both the input and output layers to make the illustration clear. The output of the RBF-NN in Figure 1 can be expressed as follows [7]:

y = w_0 + Σ_{j=1}^{J} w_j φ_j(x). (1)

In equation (1), φ_j(x) represents the nonlinear transformation function of the jth node in the hidden layer and is given as

φ_j(x) = exp[−(x − μ_j)² / (2σ²)], (2)

where μ_j is the mean value corresponding to the jth hidden node and σ is the autocovariance of the Gaussian function.

The abovementioned neural network is trained by some existing data sets. The training processes are given in detail in [7]. After the neural network is well trained, all the weights w_j, j = 0, 1, …, J, are determined, and the nonlinear mapping of y can be predicted by equation (1).
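As a concrete sketch (in Python rather than the paper's Fortran-90; the parameter values are hypothetical), the prediction step of equation (1) reduces to a short algebraic routine once training has fixed w_j, μ_j, and σ:

```python
import math

def rbf_predict(x, w0, w, mu, sigma):
    """Equation (1): y = w0 + sum_j w_j * phi_j(x), with the Gaussian
    phi_j of equation (2). All parameters are fixed after training."""
    return w0 + sum(wj * math.exp(-(x - muj) ** 2 / (2.0 * sigma ** 2))
                    for wj, muj in zip(w, mu))
```

This is the sense in which prediction is "almost in real time": the cost is a handful of exponentials and multiplications per query.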

To obtain the nonlinear mapping of dy/dx, the differential operation is executed on equation (1). We have

dy/dx = Σ_{j=1}^{J} w_j dφ_j(x)/dx = −Σ_{j=1}^{J} w_j [(x − μ_j)/σ²] φ_j(x). (3)

Since all the weights w_j, j = 0, 1, …, J, in equation (3) have been determined in the training process for the mapping of y, we can determine the mapping of dy/dx from equation (3) straightforwardly.

The RBF-NN is then extended to model or predict dy/dx based on equation (3), as shown in Figure 1 (the part above the horizontal dotted line). The flow chart of using the RBF-NN and its extension of derivative is illustrated in Figure 2. It should be noted that the training work is performed only once (during the mapping of y). The derivative, namely, dy/dx, means the slope information at some x. In designing a system, x is a controlled factor and y is the corresponding response. As the derivative or slope information is known, one can predict whether the next y for (x + Δx) will increase or decrease. This can reduce the chance of error in designing a system. After the mapping of dy/dx, i.e., the slope information of y, is determined, one can gain insight into the characteristics of y, such as increasing, decreasing, opening upward, and opening downward. These properties are helpful in many engineering design problems, such as a sonar array including mutual coupling effects.
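A minimal sketch of the extension (Python, with hypothetical weight values; the prediction routine is repeated for completeness): the slope is obtained by differentiating the same trained linear combination, and a center-difference check confirms it without any retraining.

```python
import math

def rbf_predict(x, w0, w, mu, sigma):
    """Equation (1): y = w0 + sum_j w_j * exp(-(x - mu_j)^2 / (2 sigma^2))."""
    return w0 + sum(wj * math.exp(-(x - muj) ** 2 / (2.0 * sigma ** 2))
                    for wj, muj in zip(w, mu))

def rbf_slope(x, w, mu, sigma):
    """Equation (3): dy/dx = -sum_j w_j (x - mu_j)/sigma^2 * phi_j(x).
    Uses only the already-trained weights; no extra training pass."""
    return sum(-wj * (x - muj) / sigma ** 2
               * math.exp(-(x - muj) ** 2 / (2.0 * sigma ** 2))
               for wj, muj in zip(w, mu))
```

For any trained parameter set, rbf_slope agrees with a center difference of rbf_predict to within O(h²), which is exactly the kind of comparison made later in Figures 5 and 7.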

3. Application to Mutual Impedance of Pistons

Prior to the mutual impedance analyses, we first introduce the role of mutual impedance in mutual coupling and communication array design. For a communication array, the far-field pattern function is described by the product of the element factor and the array factor. The array factor is especially important. Consider an antenna array consisting of N elements. The array factor is the weighted sum of N terms, each being e (= 2.71828…) raised to an imaginary power, i.e., a complex exponential. The weight of each term is the radiating current. The feeding voltage and radiating current of the array elements can be characterized by [V] = [Z][I], where [V] and [I] are N-dimensional column vectors whose components represent the feeding voltages and radiating currents of the array elements. The N × N matrix [Z] contains elements Z_ij representing the self-impedance (i = j) or mutual impedance (i ≠ j) between the ith and jth elements. For isotropic elements, i.e., no mutual coupling, we have Z_ij = 0 for i ≠ j and [V] = [Z][I] reduces to a simple scalar multiplication. As mutual coupling exists, we have Z_ij ≠ 0 for i ≠ j and [V] = [Z][I] becomes a complicated matrix multiplication. For an acoustic sonar array, the abovementioned voltage is equivalent to the mechanical reaction force and the abovementioned current is equivalent to the vibration velocity. Their roles in mutual coupling, far-field pattern, and array design are similar to those of antennas. These are the reasons why the mutual impedance of pistons is analyzed in this study.
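To make the role of [Z] concrete, here is a small Python sketch for a hypothetical two-element array with illustrative impedance values: it solves [V] = [Z][I] for the element currents, with and without a mutual term.

```python
def element_currents(Z, V):
    """Solve [V] = [Z][I] for a 2-element array by Cramer's rule.
    Z[i][i] are self-impedances; Z[i][j], i != j, are mutual impedances."""
    (z11, z12), (z21, z22) = Z
    det = z11 * z22 - z12 * z21
    i1 = (V[0] * z22 - z12 * V[1]) / det
    i2 = (z11 * V[1] - z21 * V[0]) / det
    return i1, i2

# Without coupling each current follows its own feeding voltage;
# a nonzero mutual term redistributes the currents.
no_coupling = element_currents([[2.0, 0.0], [0.0, 2.0]], [4.0, 6.0])
coupled = element_currents([[2.0, 0.5], [0.5, 2.0]], [4.0, 6.0])
```

The coupled case shows why [Z], and hence the mutual impedance of every element pair, is needed before the array excitation can be designed correctly.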

Consider two rectangular pistons within a sonar array, as shown in Figure 3. Our goal is to calculate and then predict the mutual impedance between these two pistons. According to [6], the pressure at a point on piston-2 due to a point on piston-1, which vibrates with angular frequency ω and maximum velocity u₀, is given by

dp = (jρcku₀/2π) (e^{j(ωt − kr)}/r) dS₁, (4)

where r is the distance between the two points, c is the velocity of sound in the medium, ρ is the density of the medium, and k is the wave number. The mutual impedance of the two pistons may be found as the force due to the motion of piston-1 on piston-2 divided by the velocity of piston-1. The formulation can be given as follows [6]:

Z₁₂ = (jρck/2π) ∫_{S₂} ∫_{S₁} (e^{−jkr}/r) dS₁ dS₂. (5)

Note that equations (4) and (5) are solutions to the Helmholtz equation of acoustic wave theory. The abovementioned equation involves four-dimensional integrals and is time consuming in computation. Therefore, modelling or predicting the mutual impedance is necessary in practical applications. In this study, a neural network is utilized to model the nonlinear mapping for the mutual impedance between any two pistons. In addition, the slope information for the mutual-impedance characteristic can then be easily obtained by suitably extending the neural network as in equation (3) or Figure 1.
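The cost that motivates the surrogate can be seen in a brute-force sketch (Python; the leading constants of equation (5) are omitted, and the midpoint grid size n is an illustrative choice): the kernel e^{−jkr}/r must be sampled over a four-dimensional grid, so the work grows as O(n⁴).

```python
import cmath
import math

def mutual_integral(a, d, k, n=8):
    """Midpoint-rule estimate of the 4-D integral of e^{-jkr}/r over two
    a-by-a square pistons whose centers are d apart along the x-axis."""
    h = a / n
    pts = [-a / 2 + (i + 0.5) * h for i in range(n)]
    total = 0.0 + 0.0j
    for x1 in pts:
        for y1 in pts:
            for x2 in pts:
                for y2 in pts:
                    r = math.hypot(x2 + d - x1, y2 - y1)
                    total += cmath.exp(-1j * k * r) / r * h ** 4
    return total
```

Even the coarse n = 8 grid requires 4096 kernel evaluations per data point; the trained network replaces all of this with the handful of algebraic operations in equation (1).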

4. Numerical Simulation Results

In this section, two numerical examples are given to illustrate the abovementioned theory. For simplicity, two square pistons are considered in this section, i.e., a = b in Figure 3. The hidden layer of the RBF-NN in Figure 1 has 10 nodes, i.e., J = 10. The learning rate in the training procedure is 0.1. The autocovariance of the Gaussian function is 0.5. The selection of J (the number of hidden-layer nodes) and the learning rate is based on experience. Small values of J make the nonlinear transformation inadequate, whereas large values of J increase the neural-network size and then make the training work difficult. The learning rate is generally selected within the range [0, 1]. Our past studies [11–15] have utilized J = 10 and a learning rate of 0.1 as RBF-NN parameters to model electromagnetic problems, and the results are very good. Since the mathematical formulas of the acoustic wave theories in this study are very similar to those of the electromagnetic wave theories in [11–15], we expect that these values of the RBF-NN parameters are also suitable in this study. In the training and predicting processes of the neural network, all values of the input variable are linearly normalized into [0, 1]. The maximum number of training loops of the neural network is set to 40000.
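As an illustration of this setup (a Python sketch, not the paper's Fortran code; placing the centers μ_j on a uniform grid over the normalized range is an assumption the paper does not state, and 0.5 is read as σ), the training loop with J = 10 hidden nodes and learning rate 0.1 can be written as:

```python
import math

def train_rbf(xs, ys, J=10, lr=0.1, epochs=4000, sigma=0.5):
    """Per-sample gradient-descent fit of RBF weights; the centers mu_j
    are fixed on a uniform grid over the normalized input range [0, 1]."""
    mu = [j / (J - 1) for j in range(J)]
    w0, w = 0.0, [0.0] * J
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in mu]
            err = w0 + sum(wj * pj for wj, pj in zip(w, phi)) - y
            w0 -= lr * err
            w = [wj - lr * err * pj for wj, pj in zip(w, phi)]
    return w0, w, mu
```

After training, only (w0, w, mu, sigma) need to be stored; equation (1) then evaluates any query input.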

In the first example, the mutual impedance is calculated with respect to the orientation angle φ under the assumptions of ka = 1 and d/a = 2. There are 20 data sets randomly selected from the φ interval under study to train the neural network. There are also 20 data sets (different from the training data sets) randomly selected from the same interval to verify the prediction accuracy of the neural network. All the training and verification data sets are calculated from equation (5). Figure 4 shows the normalized mutual-impedance magnitude with respect to the orientation angle φ obtained by the RBF-NN prediction of Section 2, i.e., the part below the horizontal dotted line in Figure 1. For comparison, the results calculated by the theoretical formula of equation (5) are also given. It shows that they are in good agreement. The discrepancy is defined as

discrepancy = |y_pred − y_theory| / |y_theory| × 100%, (6)

where y_pred and y_theory denote the result by prediction and by theory, respectively. The mean square error of discrepancy for all data of Figure 4 is about 0.009%. This implies that the prediction is very accurate. The consistency of the two curves in Figure 4 means that the trained RBF-NN can accurately predict the mutual-impedance magnitude between two pistons. With the use of the RBF-NN, the complex numerical calculation of the multidimensional integrals in equation (5) can be replaced by the very simple algebraic calculations of the neural network. This makes the analyses for mutual coupling effects of sonar arrays very efficient.
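The per-point discrepancy metric and its mean-square aggregate are simple to compute. The Python sketch below assumes the straightforward reading of "mean square error of discrepancy" as the mean of the squared per-point discrepancies, which the paper does not spell out:

```python
def discrepancy_pct(pred, theory):
    """Percentage discrepancy between a predicted and a theoretical value."""
    return abs(pred - theory) / abs(theory) * 100.0

def mean_square_discrepancy(preds, theories):
    """Mean of the squared per-point discrepancies over a data set."""
    ds = [discrepancy_pct(p, t) for p, t in zip(preds, theories)]
    return sum(d * d for d in ds) / len(ds)
```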

The slope information of Figure 4 can be obtained straightforwardly from equation (3), i.e., the extension of the RBF-NN in Figure 1 (the part above the horizontal dotted line). Following the procedures of the flow chart in Figure 2, the derivative or slope information for the curve of Figure 4 can be obtained easily. Figure 5 shows the slope of the normalized mutual-impedance magnitude with respect to the orientation angle φ by equation (3), i.e., the extension of the RBF-NN in Figure 1. For comparison, the slopes calculated by center-difference differentiation of equation (5) are also shown. It shows that they are very consistent. The mean square error of discrepancy for all data of Figure 5 is about 1.74%. This implies that the prediction is very accurate. Figure 5 reveals much about the shape of the mutual-impedance curve in Figure 4. At the angle where the slope is zero and increasing, the curve of mutual-impedance magnitude in Figure 4 has a minimum value. In the interval where the slope is negative, the curve in Figure 4 is decreasing; in the interval where the slope is positive, the curve is increasing. In the intervals where the slope is decreasing, the curve in Figure 4 opens downward; similarly, in the interval where the slope is increasing, the curve opens upward. The information shown in Figure 5 is consistent with the practical situations of the curve distribution in Figure 4.
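This curve-reading argument can be automated: sample the predicted slope and classify each sign change. A small Python sketch (the sample values in the check are hypothetical, not the paper's data):

```python
def extrema_from_slope(xs, slopes):
    """Classify extrema of y from sampled dy/dx: a negative-to-positive zero
    crossing marks a minimum, positive-to-negative a maximum. The crossing
    abscissa is located by linear interpolation between adjacent samples."""
    found = []
    for (x0, s0), (x1, s1) in zip(zip(xs, slopes), zip(xs[1:], slopes[1:])):
        if s0 * s1 >= 0.0:  # no sign change in this interval
            continue
        xc = x0 - s0 * (x1 - x0) / (s1 - s0)
        found.append((xc, "min" if s0 < 0.0 else "max"))
    return found
```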

In the second example, the mutual impedance with respect to ka is studied. The distance and size of the pistons are chosen as d/a = 2. Initially, the orientation angle of the two pistons is assumed to be φ = 0°, i.e., the two square pistons in Figure 3 are aligned on the x-axis. There are 40 data sets randomly selected from the interval 0 < ka < 20 to train the neural network. There are also 40 data sets (different from the training data sets) randomly selected from the interval 0 < ka < 20 to verify the prediction accuracy of the neural network. All the training and verification data sets are calculated from equation (5). Figure 6 (φ = 0°) shows the normalized mutual-impedance magnitude with respect to ka obtained by the RBF-NN prediction of Section 2, i.e., the part below the horizontal dotted line in Figure 1. For comparison, the results calculated from the theory of equation (5) are also illustrated in Figure 6 (φ = 0°). It shows that they are in very good agreement. The mean square error of discrepancy for all data of Figure 6 is about 1.22%. This implies that the prediction is very accurate. Figure 7 (φ = 0°) shows the slope of the normalized mutual-impedance magnitude with respect to ka predicted by equation (3), i.e., the extension of the RBF-NN in Figure 1. For comparison, the slopes calculated by center-difference differentiation of equation (5) are also shown in Figure 7 (φ = 0°). It shows that they are in good agreement. The mean square error of discrepancy for all data of Figure 7 is about 2.82%. This implies that the prediction is very accurate. Similar to the previous example, the slope information in Figure 7 shows that the local maxima for the curve in Figure 6 should occur at ka ≈ 2.5, 9.5, and 15.6 since the slope values are zero and decreasing at these points. Similarly, it is found from Figure 7 that the local minima for the curve in Figure 6 should occur at ka ≈ 7.0, 12.5, and 19.1 since the slope values are zero and increasing at these points.
The information shown in Figure 7 is consistent with the practical situations of the curve distribution in Figure 6. Next, the orientation angle of the two pistons is changed from φ = 0° to a second orientation angle. The other procedures are the same as those of the case φ = 0°. Figure 6 (second angle) shows the normalized mutual-impedance magnitude with respect to ka by the RBF-NN prediction and the theoretical computation, respectively. It shows that they are in very good agreement. Figure 7 (second angle) shows the slope of the normalized mutual-impedance magnitude with respect to ka predicted by the RBF-NN extension and by center-difference differentiation of equation (5), respectively. It also shows that they are in good agreement. The meanings of the curves are not repeated here because they are similar to those of the case φ = 0°.

From the abovementioned numerical examples, it can be observed that the curves predicted by the neural network or its extension of derivative are not as smooth as those calculated from theory. Since the neural network is inherently a “black box” for nonlinear mapping, the slight roughness of the curves predicted by the neural networks in Figures 4–7 is reasonable. In some practical applications, the training data sets are obtained by measurement. These measured data contain not only clean signals but also random noise. Therefore, measured curves may be rough in most cases. Due to the inherent black-box property of neural networks, the proposed methods can deal with nonlinear mapping for rough measured curves. The black-box nonlinear mapping of this study is achieved through the Gaussian bases in equation (2). In this paper, the main purpose of the neural-network black box is to replace the four-dimensional integral computation in equation (5), which is very complicated. In both examples, the relation between the input (controlled) variable and the output response is nonlinear, as shown in Figures 4–7. What we want to emphasize is that the neural-network black box has successfully replaced the numerical calculation of multidimensional integrals. Moreover, the black-box property implies that the relation between the input and output may be very complicated. Therefore, the neural-network black box can also be applied to different forms of sonar arrays, such as rectangular, cylindrical, or spherical structures. As the radiator is made of a real material, the significant factor of water fluid-loading can be included in the black box, i.e., neural-network-based machine learning. These will make the sonar array design very convenient and efficient. For simplicity without loss of generality, the input is either the orientation angle φ or ka, and the output is the mutual-impedance magnitude in the neural network of this study.
In fact, all controllable factors in equation (5) can serve as the input of the neural network. The input and output of the neural network may have multiple nodes to represent multiple controllable factors and multiple responses, respectively. This is the difference between our neural-network machine learning and a look-up table. Furthermore, our neural-network machine learning can also treat arrays with multiple transducers as one increases the number of input and output nodes. Of course, this will also lead to the increase of training work.

The abovementioned numerical simulations are performed on a personal computer with an Intel Core i7-4790 3.6 GHz CPU. All the programs are coded in Fortran-90 with the Absoft Pro Fortran 6.2 compiler. Using the numerical computation of equation (5), the computing time of each data point in Figures 4 and 6 is about 5 seconds. Our computed results are consistent with those of reference [16]. Note that the computation is almost in real time when the trained neural network of this study is used. This is because a neural network involves only very simple algebraic calculations. Although the training work of a neural network is somewhat time consuming, it can be finished before one uses the neural network and its extension of derivative.

5. Conclusion

In this study, a mathematical model consisting of an RBF-NN and its extension of derivative is successfully applied to the nonlinear mapping for the mutual impedance of pistons and its slope information. The training work is performed only once, i.e., during the mapping for the mutual impedance of pistons. According to reference [17], the RBF-NN model utilized in this study is inherently one type of general regression, and it can predict new results nonlinearly from some training data sets. In addition, the studies of references [11–15] have successfully utilized the RBF-NN to model different electromagnetic problems, and the results are very good. Since the equations of acoustic waves are very similar to those of electromagnetic waves in references [11–15], it is reasonable that the RBF-NN is also a good mathematical model in this study. Although reference [15] has successfully utilized the RBF-NN to deal with the mutual coupling of antenna arrays, that modelling uses only the conventional neural network itself. It should be emphasized that this study not only utilizes the RBF-NN to model the mutual coupling of acoustic sensors but also extends the RBF-NN to obtain the output derivative information. As shown in equation (3), the output derivative is achieved by differential operations on the nonlinear transformation functions in the hidden layer, e.g., equation (2). In fact, as long as the nonlinear transformation functions in the hidden layer are differentiable, any type of multilayer neural network can be utilized. Therefore, the modelling flow chart of this paper can also be applied to other black-box machine-learning techniques. Remember that we obtain the gradient information from equation (3), which is essentially the exact derivative of the differentiable function of equation (2). In this way, one does not need to calculate y prior to dy/dx since the right-hand side of equation (3) involves only weights within the neural network together with simple mathematical functions.
There are two advantages to obtaining the gradient in this way. First, the computation is efficient since one does not need to calculate y prior to dy/dx. Second, one can reduce the chance of propagating error from the neural-network output y. Compared with the conventional center-difference gradient, our neural-network extension of derivative is not only efficient but also accurate.

The concept of mutual impedance can also be extended to calculate the sound radiation impedance, e.g., the discrete calculation method (DCM) originally designed by Norihisa Hashimoto [18]. In that method, the vibrating object is divided virtually into small elements. Each individual element is treated as a circular piston vibrating plate with an area equal to that of the corresponding element. The sound power of each individual element is related to the mutual impedance between different individual elements. The total radiation power of a vibrating object can be obtained by calculating and summing up the sound powers of all individual elements. Note that our neural-network-based analysis in this paper belongs to supervised machine learning. Prior to predicting, known examples with answers are required to train the neural network. Until the supervised learning procedure is finished, our neural-network-based technique cannot work. In fact, such a learning process is the common requirement for all supervised machine-learning techniques, not a particular property of this study. The major difference between our neural-network-based technique and Hashimoto's method is the supervised learning phase. Our neural-network-based technique requires known samples with answers for learning in advance; after the supervised learning procedure is finished, it can be generalized to predict unseen data quickly and accurately. Hashimoto's method [18] does not require learning procedures; it is basically an improved calculation technique based on physics and mathematics.
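The element-power summation of the DCM can be sketched as a quadratic form (Python; this is the standard way powers of coupled vibrating elements combine and is offered as an illustration, not Hashimoto's exact formulation):

```python
def radiated_power(Z, u):
    """Total radiated power of N coupled vibrating elements:
    P = 0.5 * Re( sum_i sum_j conj(u_i) * Z_ij * u_j ),
    so each element's contribution includes mutual-impedance terms."""
    acc = 0.0 + 0.0j
    for i, ui in enumerate(u):
        for j, uj in enumerate(u):
            acc += ui.conjugate() * Z[i][j] * uj
    return 0.5 * acc.real

# Hypothetical 2-element example: mutual terms change the total power.
p_iso = radiated_power([[2 + 0j, 0j], [0j, 2 + 0j]], [1 + 0j, 1 + 0j])
p_mut = radiated_power([[2 + 0j, 0.5 + 0j], [0.5 + 0j, 2 + 0j]], [1 + 0j, 1 + 0j])
```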

Numerical examples in Figures 4–7 have verified the proposed model to be accurate and efficient. Compared with conventional mathematical models (e.g., function interpolation or regression), the neural network is inherently a black box and can provide mapping with strong nonlinearity. With the use of the neural network and its extension of derivative, one can quickly obtain the output gradient information without knowledge of the overall output in advance. Although the training work of a neural network is usually time consuming, it can be completed in advance. The proposed model can be applied to many other mathematical problems in engineering.

Data Availability

The important and key computer programs of this study are available online at https://doi.org/10.6084/m9.figshare.7150001.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to thank the Ministry of Science and Technology, Taiwan, for financial support under Grant MOST 108-2221-E-006-091, and the National Center for High-Performance Computing, Taiwan, for computer time and facilities.

References

  1. T. Powers, D. W. Krout, J. Bilmes, and L. Atlas, “Constrained robust submodular sensor selection with application to multistatic sonar arrays,” IET Radar, Sonar & Navigation, vol. 11, no. 12, pp. 1776–1781, 2017.
  2. A. J. Hunter, S. Dugelay, and W. L. J. Fox, “Repeat-pass synthetic aperture sonar micronavigation using redundant phase center arrays,” IEEE Journal of Oceanic Engineering, vol. 41, no. 4, pp. 820–830, 2016.
  3. K. C. Lee, “Understanding and interpretations for mutual coupling within sonar arrays,” IEEE Journal of Oceanic Engineering, vol. 33, pp. 210–214, 2008.
  4. K.-C. Lee, J.-Y. Jhang, and M.-C. Huang, “Performance of underwater adaptive array including mutual coupling effects,” Applied Acoustics, vol. 70, no. 1, pp. 190–193, 2009.
  5. K. C. Lee and T. H. Chu, “A circuit model for mutual coupling analysis of a finite antenna array,” IEEE Transactions on Electromagnetic Compatibility, vol. 38, no. 3, pp. 483–489, 1996.
  6. E. M. Arase, “Mutual radiation impedance of square and rectangular pistons in a rigid infinite baffle,” The Journal of the Acoustical Society of America, vol. 36, no. 8, pp. 1521–1525, 1964.
  7. C. Christodoulous and M. Georgiopoulos, Applications of Neural Networks in Electromagnetics, Artech House, Boston, MA, USA, 2001.
  8. Y. Wang, Q. Lin, X. Wang, and F. Zhou, “Adaptive PD control based on RBF neural network for a wire-driven parallel robot and prototype experiments,” Mathematical Problems in Engineering, vol. 2019, Article ID 6478506, 15 pages, 2019.
  9. Z. Zhao, J. Xi, X. Zhao, G. Zhang, and M. Shang, “Evaluation of the calculated sizes based on the neural network regression,” Mathematical Problems in Engineering, vol. 2018, Article ID 4078456, 11 pages, 2018.
  10. G. Q. Yang, “Modulation classification based on extensible neural networks,” Mathematical Problems in Engineering, vol. 2017, Article ID 6416019, 10 pages, 2017.
  11. K.-C. Lee, J.-Y. Jhang, and T.-N. Lin, “An automatically converging scheme based on the neural network and its application in antennas,” IEEE Transactions on Antennas and Propagation, vol. 57, no. 4, pp. 1270–1274, 2009.
  12. K. C. Lee, “Impedance calculations for elements of sonar arrays by neural network based integration,” IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 3, pp. 1065–1070, 2007.
  13. K. C. Lee and T. N. Lin, “Application of neural networks to analyses of nonlinearly loaded antenna arrays including mutual coupling effects,” IEEE Transactions on Antennas and Propagation, vol. 53, no. 3, pp. 1126–1132, 2005.
  14. K.-C. Lee, “A neural-network-based model for 2D microwave imaging of cylinders,” International Journal of RF and Microwave Computer-Aided Engineering, vol. 14, no. 5, pp. 398–403, 2004.
  15. K.-C. Lee, “Mutual coupling analyses of antenna arrays by neural network models with radial basis functions,” Journal of Electromagnetic Waves and Applications, vol. 17, no. 8, pp. 1217–1223, 2003.
  16. J. Lee and I. Seo, “Radiation impedance computations of a square piston in a rigid infinite baffle,” Journal of Sound and Vibration, vol. 198, no. 3, pp. 299–312, 1996.
  17. D. F. Specht, “A general regression neural network,” IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568–576, 1991.
  18. N. Hashimoto, “Measurement of sound radiation efficiency by the discrete calculation method,” Applied Acoustics, vol. 62, no. 4, pp. 429–446, 2001.

Copyright © 2020 Kun-Chou Lee. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

