Abstract

Artificial neural networks (ANNs), owing to their general-purpose nature, are used to solve problems in diverse fields. ANNs are particularly useful for fractal antenna analysis because developing mathematical models of such antennas is difficult due to their complex shapes and geometries, and the empirical approach of performing experiments is costly and time consuming. In this paper, the application of artificial neural network analysis is presented, taking the Sierpinski gasket fractal antenna as an example. The performance of three different types of networks is evaluated, and the best network for this type of application is proposed. The comparison of ANN results with experimental results validates that this technique is an alternative to experimental analysis. This low-cost method of antenna analysis will be very useful for understanding various aspects of fractal antennas.

1. Introduction

Artificial neural networks (ANNs) have been used as efficient tools for modeling and prediction in almost all disciplines. The use of ANNs has become widely accepted in antenna design and analysis applications, as is evident from the increasing number of publications in research and academic journals [1-8]. Angiulli and Versaci proposed a technique to evaluate the resonant frequency of microstrip antennas using neuro-fuzzy networks [1]. The use of an ANN for the design of a rectangular patch antenna is explained in [2]. Applications of ANNs in various types of antennas and antenna arrays are explained in [3]. Neog et al. [4] used a tunnel-based ANN for the parameter calculation of a wideband microstrip antenna. Lebbar et al. [5] employed a geometrical-methodology-based ANN for the design of a compact broadband microstrip antenna. In [6] the authors proposed an ANN to predict the input impedance of a broadband antenna as a function of its geometric parameters. Guney and Sarikaya [7] presented a hybrid method based on a combination of an ANN and a fuzzy inference system to calculate simultaneously the resonant frequencies of various microstrip antennas of regular geometries. An equilateral triangular microstrip antenna was designed using particle-swarm-optimization-driven radial basis function neural networks in [8]. However, the use of ANNs in the analysis and design of fractal antennas is at a very early stage, and only a limited amount of literature is available for this class of antennas [9-12]. In this paper, the performance of three different ANNs on Sierpinski gasket fractal antenna analysis is investigated in terms of two measures: the mean absolute error (MAE) and the coefficient of correlation.

The experimental analysis of antennas involves a costly setup, which includes an anechoic chamber and equipment such as a vector network analyzer, synthesized signal generator, pattern recorder, and so forth, in addition to other equipment required for antenna fabrication. The availability of such resources in academic institutions is limited due to budget constraints. The proposed technique is thus a very low-cost method, as it requires only a computer and software, and it may therefore be used as part of laboratory experiments in undergraduate and postgraduate classes. The method is explained by taking the Sierpinski gasket fractal antenna as an example for analysis.

The term fractal was originally coined by Mandelbrot to describe a family of complex shapes that possess an inherent self-similarity in their geometrical structure [15]. Fractals represent a class of geometries with unique properties that have attracted antenna designers in recent years. The general concept of fractals can be applied to develop various antenna elements, and such antennas are known as fractal antennas [16]. Applying fractals to antenna elements results in smaller resonant antennas that are multiband and may be optimized for gain. The advantages of fractal antennas include (i) miniaturization and space filling, (ii) multiband performance, (iii) input impedance matching, (iv) efficiency and effectiveness, and (v) improved gain and directivity [17].

Various fractal geometries have been explored by researchers in the past two decades. However, the Sierpinski gasket fractal antenna geometry has been investigated more widely than any other geometry since its introduction in 1998 by Puente-Baliarda et al. [13]. The Sierpinski triangle, also called the Sierpinski gasket, is a fractal geometry named after the Polish mathematician Waclaw Sierpinski, who described it in 1915. The first three iterations of the Sierpinski gasket are shown in Figure 1. The first-iteration gasket is constructed by subtracting a central inverted triangle from a main triangle shape. After the subtraction, three equal triangles remain in the structure, each one being half the size of the original. If the same subtraction procedure is repeated on the remaining triangles, the 2nd iteration is obtained; similarly, if the iteration is carried out an infinite number of times, the ideal Sierpinski gasket is obtained [18]. The ideal Sierpinski gasket is a self-similar structure, and each of its three main parts has exactly the same shape as the whole object but reduced by a factor of two [13].

The Sierpinski triangle can also be generated by the use of an iterated function system (IFS), which is generally used to construct self-similar fractal shapes. The IFS is based on a set of affine transforms; if the side length "a" of the base equilateral triangle is known, then the following iterated function system gives the self-similar shape of the Sierpinski gasket [18]:

\[
w_1(x, y) = \left(\tfrac{x}{2}, \tfrac{y}{2}\right), \qquad
w_2(x, y) = \left(\tfrac{x}{2} + \tfrac{a}{2}, \tfrac{y}{2}\right), \qquad
w_3(x, y) = \left(\tfrac{x}{2} + \tfrac{a}{4}, \tfrac{y}{2} + \tfrac{\sqrt{3}\,a}{4}\right).
\]
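As an illustration (not part of the original analysis), the three affine maps above can be iterated with the chaos game to generate the gasket numerically; the side length a, point count, and random seed below are arbitrary choices.

```python
import numpy as np

def sierpinski_points(a=1.0, n_points=20000, seed=0):
    """Generate points of the Sierpinski gasket via the chaos game,
    using the three affine maps of the IFS for a base triangle of side a."""
    rng = np.random.default_rng(seed)
    # Translation offsets of the three maps: (0, 0), (a/2, 0), (a/4, sqrt(3)*a/4)
    offsets = np.array([[0.0, 0.0],
                        [a / 2.0, 0.0],
                        [a / 4.0, np.sqrt(3.0) * a / 4.0]])
    p = np.array([0.0, 0.0])
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        k = rng.integers(3)          # pick one of the three maps at random
        p = p / 2.0 + offsets[k]     # apply w_k(x, y) = (x/2, y/2) + offset_k
        pts[i] = p
    return pts

points = sierpinski_points()         # e.g. plot with matplotlib to see the gasket
```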

The Sierpinski antenna is a multiband antenna, and the number of bands depends on the number of iterations "n" [13]. The base triangular shape, which is also called the 0th iteration, has a single fundamental resonant frequency. The first fractal iteration has two resonant frequencies, and so on. The actual values of the resonant frequencies also depend on the dielectric constant "εr" and the thickness "h" of the substrate, and on the side length "a." So, to determine the resonant frequencies of a particular iteration, the values of εr, h, a, and the number of iterations "n" must be known. An expression for predicting the resonant frequencies of this antenna was proposed in 2008 by Mishra et al. [14].

The rest of the paper is organized as follows. Section 2 briefly describes the ANN types which are being investigated. Section 3 describes the details of ANN models, results, and the performance comparison. The conclusion is presented in Section 4.

2. Artificial Neural Networks: Brief Description

The three types of networks used in this work, namely, the multilayer perceptron neural network (MLPNN), the radial basis function neural network (RBFNN), and the general regression neural network (GRNN), are briefly described in the following sections.

2.1. Multilayer Perceptron Neural Network

The multilayer perceptron neural network (MLPNN) is a widely used neural network structure in antenna applications. It consists of multiple layers of neurons that are connected in a feedforward manner [6]. The basic structure of an MLPNN is shown in Figure 2.

Figure 2 depicts a network with L layers. The first layer is the input layer and the last layer (the Lth layer) is the output layer. The other layers, that is, layers 2 through L − 1, are hidden layers. Each layer has a number of neurons [19]. In the input and output layers, the number of neurons is equal to the number of inputs and outputs, respectively. The number of neurons in the hidden layers is chosen by a trial-and-error process so that accurate output is obtained; MLPs with one or two hidden layers are commonly used for antenna applications. Each neuron processes its inputs to produce an output using a function called the activation function. Input neurons normally use a relay activation function and simply relay (pass) the external inputs to the hidden layer neurons. The most commonly used activation function for hidden layer neurons is the sigmoid function, defined as

\[
f(x) = \frac{1}{1 + e^{-x}}.
\]
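As a minimal sketch (in Python, not taken from the paper), a single hidden neuron combines its weighted inputs through this sigmoid activation as follows; the weights and bias shown are arbitrary illustrative values.

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: squashes any real value into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias):
    # A hidden neuron forms a weighted sum of its inputs and applies the sigmoid.
    return sigmoid(np.dot(weights, inputs) + bias)

print(neuron_output(np.array([0.2, 0.5]), np.array([0.7, -1.1]), 0.3))
```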

The connections between units in subsequent layers have associated weights, which are computed during training using the error backpropagation algorithm [20]. The backpropagation learning algorithm consists of two steps. In the first step, the input signal is passed forward through each layer of the network; the actual output of the network is calculated, and this output is subtracted from the desired output to generate an error signal. In the second step, the error is fed backward through the network from the output layer through the hidden layers, and the synaptic weights between the layers are updated based on the computed error. These two steps are repeated for each input-output pair of training data, and the process is iterated a number of times until the output converges to the desired solution [21]. The details of the MLPNN can be found in [22-25].
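A minimal sketch of these two steps for a single-hidden-layer network with sigmoid hidden units and a linear output is given below. It uses the 4-15-1 layer sizes and 0.2 learning rate quoted later in Section 3.1, but plain gradient descent rather than the Levenberg-Marquardt algorithm actually used in the paper, and random placeholder data; it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 15, 1, 0.2

W1, b1 = rng.normal(scale=0.5, size=(n_hid, n_in)), np.zeros(n_hid)
W2, b2 = rng.normal(scale=0.5, size=(n_out, n_hid)), np.zeros(n_out)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t):
    global W1, b1, W2, b2
    # Step 1: forward pass and error signal.
    h = sigmoid(W1 @ x + b1)          # hidden layer activations
    y = W2 @ h + b2                   # linear output
    e = y - t                         # error signal
    # Step 2: backpropagate the error and update the weights.
    dW2, db2 = np.outer(e, h), e
    dh = (W2.T @ e) * h * (1.0 - h)   # error propagated to the hidden layer
    dW1, db1 = np.outer(dh, x), dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return float(0.5 * e @ e)

loss = train_step(np.array([2.2, 1.6, 0.9, 1.0]), np.array([1.8]))
```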

2.2. Radial Basis Function Neural Networks

The radial basis function neural network (RBFNN) has several special characteristics, such as a simple architecture and faster performance [26]. It is a special neural network with a single hidden layer. The hidden layer neurons use radial basis functions to apply a nonlinear transformation from the input space to the hidden space [27]. The nonlinear basis function is a function of the normalized radial distance between the input vector and the weight vector. The most widely used form of radial basis function is the Gaussian function, given by [26]

\[
\phi_j(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{c}_j \rVert^2}{2\sigma_j^2}\right),
\]

where x is the p-dimensional input vector with elements x_i, c_j is the vector determining the centre of basis function j with elements c_ji, and σ_j denotes the width parameter, whose value is determined during training. The RBFNN involves a two-stage training procedure. In the first stage, the parameters governing the basis functions, that is, the centres c_j and widths σ_j, are determined using relatively fast, unsupervised methods; these methods use only the input data and not the target data. In the second stage of training, the weights of the output layer are determined [27]. For these networks the output layer is linear, so fast supervised linear methods are used; for this reason these networks have faster performance [26]. The number of neurons in all layers of the RBFNN, that is, the input, output, and hidden layers, is determined by the dimensions and number of input-output data sets. The basic structure of the RBFNN is shown in Figure 3. The RBFNN is explored in detail in [25-29].
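The two-stage procedure can be sketched as follows. This is an illustrative Python sketch under our own simplifying assumptions: the centres are taken directly from the training inputs, a single common width σ is used, and the output weights are obtained by linear least squares; the data are random placeholders, not the paper's training set.

```python
import numpy as np

def gaussian_design(X, centres, sigma):
    # Matrix of Gaussian basis responses phi_j(x) for every input/centre pair.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, sigma=1.05):
    centres = X.copy()                           # stage 1: one hidden neuron per training sample
    Phi = gaussian_design(X, centres, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # stage 2: linear output weights
    return centres, w

def predict_rbf(X_new, centres, w, sigma=1.05):
    return gaussian_design(X_new, centres, sigma) @ w

# X: 45 x 4 inputs (eps_r, h, a, n); y: 45 resonant frequencies (placeholder values).
rng = np.random.default_rng(1)
X, y = rng.random((45, 4)), rng.random(45)
centres, w = train_rbf(X, y)
y_hat = predict_rbf(X, centres, w)
```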

2.3. General Regression Neural Network

The general regression neural network (GRNN), introduced by Specht in 1991, is a one-pass learning algorithm with a highly parallel structure [30]. It is a memory-based network that provides estimates of continuous variables and converges to the underlying linear or nonlinear regression surface. Nonlinear regression analysis forms the theoretical basis of the GRNN. If the joint probability density function of the random variables x and y is f(x, y) and the measured value of x is X, then the regression of y with respect to X (which is also called the conditional mean) is [31]

\[
\hat{Y}(X) = E[\,y \mid X\,] = \frac{\displaystyle\int_{-\infty}^{\infty} y\, f(X, y)\, dy}{\displaystyle\int_{-\infty}^{\infty} f(X, y)\, dy},
\]

where Ŷ(X) is the prediction output of y when the input is X.

For the GRNN, Nose-Filho et al. [32] have shown that Ŷ(X) can be calculated by

\[
\hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i \exp\!\left(-\dfrac{D_i^2}{2\sigma^2}\right)}{\sum_{i=1}^{n} \exp\!\left(-\dfrac{D_i^2}{2\sigma^2}\right)},
\]

where X_i and Y_i are sample values of the random variables x and y, n is the number of samples in the training set, σ is the smoothing parameter, and D_i² is the squared Euclidean distance defined by

\[
D_i^2 = (X - X_i)^{T}(X - X_i).
\]

The estimate Ŷ(X) can be visualized as a weighted average of all of the observed values Y_i, where each observed value is weighted exponentially according to its Euclidean distance from X.
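A minimal sketch of this estimator (illustrative Python; the function name, placeholder training data, and default σ are ours):

```python
import numpy as np

def grnn_predict(X_train, y_train, x_new, sigma=0.85):
    # Squared Euclidean distances D_i^2 between the new input and every stored sample.
    d2 = ((X_train - x_new) ** 2).sum(axis=1)
    # Exponential weights from the Gaussian kernel with smoothing parameter sigma.
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Weighted average of the stored targets: numerator sum / denominator sum.
    return (w @ y_train) / w.sum()

rng = np.random.default_rng(2)
X_train, y_train = rng.random((45, 4)), rng.random(45)   # placeholder training set
print(grnn_predict(X_train, y_train, rng.random(4)))
```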

The basic structure of the GRNN is shown in Figure 4. The input units are merely distribution units, which provide all of the inputs (scaled measurement variables) to all of the neurons in the second layer, called pattern units. The numbers of neurons in the input and output layers of the GRNN are equal to the dimensions of the input data and output data, respectively. The pattern layer contains one neuron per sample in the training set, and each pattern unit is dedicated to one cluster centre. When a new vector X is entered into the network, it is subtracted from the stored vector representing each cluster centre. The sum of squares or of the absolute values of the differences is calculated and fed into a nonlinear activation function, which is generally an exponential function.

The outputs of the pattern units are passed on to the third-layer neurons, known as summation units. There are two types of neurons among the summation units: numerator neurons, whose number is equal to the output dimension, and one denominator neuron. The denominator type generates its sum by adding the outputs of the pattern units weighted by the number of observations each cluster centre represents, while the numerator type multiplies each value coming from a pattern unit by the sum of the target samples associated with that cluster centre. The smoothing parameter σ is a constant determined by the Parzen window. The desired estimate Ŷ is yielded by the output unit, which divides the numerator sum by the denominator sum [32]. A detailed explanation of the GRNN can be found in [30-32].

3. The Proposed ANN Analysis of the Sierpinski Gasket Fractal Antenna

In the presented work, the three ANN models discussed above have been implemented and compared for the Sierpinski gasket fractal antenna. In each model, the dielectric constant "εr," the thickness "h" of the substrate, the side length "a," and the number of iterations "n" of the antenna are taken as inputs, and the resonant frequency is taken as the output. A set of 45 input combinations along with the desired output values is used for training. The block diagram of the proposed models is depicted in Figure 5. Thus, the network models have four inputs and one output, with intermediate hidden layers. The parametric details of the models are shown in Figure 5.
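For concreteness, the input/output layout assumed by all three models can be sketched as below; the numeric rows are placeholders for illustration only, not the paper's 45 training samples.

```python
import numpy as np

# Each training example: [eps_r, h (mm), a (mm), iteration number n] -> resonant frequency (GHz).
X_train = np.array([
    [2.3, 1.6, 80.0, 1],    # placeholder rows
    [2.3, 1.6, 80.0, 2],
    [4.4, 1.6, 60.0, 1],
])
y_train = np.array([1.2, 2.0, 1.6])   # placeholder resonant frequencies

assert X_train.shape[1] == 4 and y_train.ndim == 1   # four inputs, one output
```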

3.1. Multilayer Perceptron Neural Network Parameters

For the training of the MLPNN, 4 input neurons, 15 neurons in the hidden layer, and 1 output neuron are used. The trainlm function is used as the training function, and the learning rate is selected as 0.2. trainlm is a network training function that updates weight and bias values according to the Levenberg-Marquardt algorithm. The architecture used for the analysis is shown in Figure 6.
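Outside MATLAB, this style of training can be approximated by fitting the 4-15-1 network's parameters as a nonlinear least-squares problem. The sketch below uses SciPy and random placeholder data; note that SciPy's "lm" solver requires at least as many residuals as parameters, so with 45 samples and 91 parameters the default trust-region solver is used instead. It is an illustrative analogue, not the paper's trainlm setup.

```python
import numpy as np
from scipy.optimize import least_squares

n_in, n_hid, n_out = 4, 15, 1

def unpack(p):
    # Split the flat parameter vector into layer weights and biases.
    i = n_hid * n_in
    W1 = p[:i].reshape(n_hid, n_in)
    b1 = p[i:i + n_hid]
    W2 = p[i + n_hid:i + n_hid + n_out * n_hid].reshape(n_out, n_hid)
    b2 = p[i + n_hid + n_out * n_hid:]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))   # sigmoid hidden layer
    return (H @ W2.T + b2).ravel()               # linear output

def residuals(p, X, y):
    return forward(p, X) - y

rng = np.random.default_rng(3)
X_train, y_train = rng.random((45, 4)), rng.random(45)   # placeholder data
n_params = n_hid * n_in + n_hid + n_out * n_hid + n_out
p0 = rng.normal(scale=0.5, size=n_params)

fit = least_squares(residuals, p0, args=(X_train, y_train))
y_pred = forward(fit.x, X_train)
```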

3.2. Radial Basis Function Neural Networks Parameters

For the training of the RBFNN, the value of the spread constant has been selected as 1.05. The value of the spread constant should be large enough that several radial basis neurons always have fairly large outputs at any given moment. This makes the network function smoother and results in better generalization for new input vectors lying between the input vectors used in the design. However, for a very large value of the spread constant, each neuron effectively responds in the same large area of the input space. As there are 45 sets of training data, the number of neurons in the hidden layer is 45. The detailed architecture is depicted in Figure 7.

3.3. General Regression Neural Network Parameters

For the training of the GRNN, the value of the spread constant has been selected as 0.85. In the GRNN, the transfer function of the first hidden layer is a radial basis function. For a small spread constant, the radial basis function is very steep, so the neuron with the weight vector closest to the input will have a much larger output than the other neurons, and the network will tend to respond with the target vector associated with the nearest design input vector. For large values of the spread constant, the radial basis function's slope becomes smoother and several neurons can respond to an input vector; the network then acts as if it is taking a weighted average of the target vectors whose design input vectors are closest to the new input vector. As in the case of the RBFNN, the number of neurons in the hidden layer is 45. The architecture is shown in Figure 8.
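The effect of the spread constant can be seen directly with the GRNN weighted-average formula from Section 2.3; the snippet below (illustrative, with random placeholder data and arbitrary spread values) shows predictions moving from nearest-neighbour-like behaviour at small spreads toward an overall average at large spreads.

```python
import numpy as np

rng = np.random.default_rng(4)
X_train, y_train = rng.random((45, 4)), rng.random(45)   # placeholder training set
x_new = rng.random(4)

for sigma in (0.05, 0.85, 10.0):
    # Small sigma: output snaps to the nearest stored target;
    # large sigma: output tends toward the average of all targets.
    d2 = ((X_train - x_new) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    print(sigma, (w @ y_train) / w.sum())
```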

The trained ANN models are used to predict the resonant frequencies of the 4th-iteration Sierpinski gasket antenna, with the dielectric constant εr, substrate thickness h (in mm), and side length a (in mm) corresponding to the antenna measured in [13]. The ANN results are compared with the experimental and theoretical results, as shown in Table 1. The absolute error values are calculated by comparison with the measured results of Puente-Baliarda et al. [13]. The theoretical results obtained using the expressions of Mishra et al. [14] are also provided for comparison.

The performance comparison of different types of ANNs is important for finding the model best suited to a specific problem. In recent years, a number of studies have been carried out on the performance comparison of ANNs for a range of applications [19, 21, 26, 33-39]. The performance of the proposed models is compared on the basis of two different measures. The first is the mean absolute error (MAE), which reflects the quality of training of the model; the smaller the MAE of a model, the better its performance. The second is the coefficient of correlation (r), which measures the strength of the linear relationship developed by a particular model; values of r close to 1.0 indicate good model performance. The quantitative performance is given in Table 2. The values of MAE and r for the theoretical results are also shown for comparison purposes.
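For reference, a short sketch of how these two measures can be computed from predicted and measured resonant frequencies (the arrays below are placeholders, not the values in Table 1):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction errors.
    return np.mean(np.abs(y_true - y_pred))

def corr_coeff(y_true, y_pred):
    # Pearson coefficient of correlation between measured and predicted values.
    return np.corrcoef(y_true, y_pred)[0, 1]

measured = np.array([1.0, 2.1, 4.0, 7.9, 15.5])    # placeholder frequencies (GHz)
predicted = np.array([1.02, 2.05, 4.1, 7.8, 15.3])  # placeholder ANN outputs

print(mae(measured, predicted), corr_coeff(measured, predicted))
```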

The comparison between these three algorithms on the basis of MAE shows that the MLPNN and RBFNN have the smallest error values, even better than those of the theoretical results. It may be seen that the value of r is equal to 1 for the MLPNN, the RBFNN, and the theoretical results, and it is sufficiently high for the GRNN, indicating satisfactory performance by all models.

When both performance criteria are considered together, the most satisfactory model is the RBFNN, which performs better than the MLPNN and the GRNN and is even more accurate than the theoretical method in predicting the resonant frequencies.

4. Conclusion

The application of the artificial neural network method to Sierpinski gasket fractal antenna analysis is evaluated in this work. The performances of three neural networks are evaluated to assess the possible applications of ANNs in fractal antenna design. The obtained ANN results are compared with published experimental results and show a percentage error of less than 1.5% for the RBFNN. Due to the fast adaptive properties of ANNs, the required simulation time is very short. It has been observed that the RBFNN outperforms the theoretical method in accuracy. Among the different neural models compared, the RBFNN is found to be the most suitable model for this type of antenna analysis. Thus, the ANN approach to Sierpinski fractal antenna analysis is a low-cost, accurate, and computationally fast approach. The same concept can be explored for other fractal geometries.