Abstract

In this study, the static pull-in instability of beam-type micro-electromechanical systems (MEMS) is theoretically investigated. Considering mid-plane stretching as the source of nonlinearity in the beam behavior, a nonlinear size-dependent Euler-Bernoulli beam model is used based on a modified couple stress theory capable of capturing the size effect. Two supervised neural networks, namely, back propagation (BP) and radial basis function (RBF), have been used for modeling the static pull-in instability of the microcantilever beam. These networks have four inputs, the length, width, gap, and ratio of height to scale parameter of the beam, as the independent process variables, and the output is the static pull-in voltage of the microbeam. Numerical data were employed for training the networks, and the capabilities of the models in predicting the pull-in instability behavior were then verified. Based on the verification errors, it is shown that the radial basis function neural network is superior in this particular case, with an average error of 4.55% in predicting the pull-in voltage of the cantilever microbeam. Further analysis of the pull-in instability of the beam under different input conditions has been carried out, and comparison of the modeling results with the numerical calculations shows good agreement, which also proves the feasibility and effectiveness of the adopted approach.

1. Introduction

Micro-electromechanical systems (MEMS) are widely used in today’s technology, so investigating the problems related to MEMS is of great importance. One significant field of study is the stability analysis of parametrically excited systems. Parametrically excited micro-electromechanical devices are increasingly being used in radio, computer, and laser engineering [1]. Parametric excitation occurs in a wide range of mechanics, due to time-dependent excitations, especially periodic ones; some examples are columns made of nonlinear elastic material, beams with a harmonically variable length, and parametrically excited pendulums. In 1995, Gasparini et al. [2] examined the transition between stability and instability of a cantilevered beam exposed to a partially follower load. Applying a voltage difference between an electrode and the ground causes the electrode to deflect towards the ground. At a critical voltage, known as the pull-in voltage, the electrode becomes unstable and pulls in onto the substrate [3]. The static pull-in behavior of MEMS actuators has been studied for over two decades without considering the Casimir force [4–6]. Osterberg and Senturia [4] and Osterberg et al. [5] investigated the pull-in parameters of beam-type and circular MEMS actuators using distributed parameter models. Beni et al. [7], Koochi et al. [8], and Ghalambaz et al. [9] investigated the effect of the Casimir force on the pull-in behavior of beam-type NEMS. Sadeghian et al. [6] applied the generalized differential quadrature method to investigate the pull-in phenomena of microswitches. A comprehensive literature review on MEMS actuators can be found in [10]. Moghimi Zand and Ahmadian [11] investigated the pull-in behavior of multilayer microplates using the finite element method.
Different analytical, numerical, and finite element methods have been proposed to model the pull-in behavior of microbeams [12–16]. Further information about modeling the pull-in instability of MEMS has been presented by Batra et al. [17] and Lin and Zhao [18]. The classical continuum mechanics theory is not able to explain the size-dependent behavior of materials and structures at submicron scales. To overcome this problem, nonclassical continuum theories such as higher-order gradient theories and the couple stress theory have been developed to account for the size effects [7]. In the 1960s, researchers such as Koiter [19], Mindlin and Tiersten [20], and Toupin [21] introduced the couple stress elasticity theory as a nonclassical theory capable of predicting the size effects, with two higher-order material constants appearing in the corresponding constitutive equations. In this theory, besides the classical stress components acting on elements of materials, couple stress components, as higher-order stresses, also act and tend to rotate the elements. Utilizing the couple stress theory, several researchers have investigated size effects in various problems [22]. Employing the equilibrium equation of moments of couples besides the classical equilibrium equations of forces and moments of forces, a modified couple stress theory was introduced by Yang et al. [23], with only one higher-order material constant in the constitutive equations. Recently, size-dependent nonlinear Euler-Bernoulli and Timoshenko beams modeled on the basis of the modified couple stress theory have been developed by Xia et al. [24] and Asghari et al. [25], respectively. Rong et al. [26] presented an analytical method for the pull-in analysis of clamped-clamped multilayer beams; their method is the Rayleigh-Ritz method and assumes one deflection shape function.
They derived the two governing equations by enforcing the pull-in conditions that the first- and second-order derivatives of the system energy functional are zero, and investigated the static pull-in instability voltage and displacement of the multilayer beam, which were coupled in the two governing equations. Artificial neural networks (ANNs), as one of the most attractive branches of artificial intelligence, have the potential to handle problems such as modeling, estimation, prediction, diagnosis, and adaptive control in complex nonlinear systems [27]. As seen from previous studies, researchers have used analytical or numerical methods for the static pull-in instability voltage of microcantilever beams; in this study, we use neural networks to estimate the pull-in voltage. This study investigates the pull-in instability of microbeams with a curved ground electrode under the action of the electric field force within the framework of the von Karman nonlinearity and the Euler-Bernoulli beam theory. The static pull-in instability voltage of the cantilever microbeam is obtained using the MAPLE commercial software. The objective of this paper is to establish neural network models for estimation of the pull-in instability voltage of cantilever beams. More specifically, two different types of neural networks, back propagation (BP) and radial basis function (RBF), are used to model the pull-in instability voltage. Effective parameters influencing the pull-in voltage and their levels for training were selected through preliminary calculations carried out on the pull-in instability voltage of the microbeam. Networks trained on the same numerical data were then verified with numerical calculations different from those used in the training phase, and the best model was selected based on the criterion of the least average verification error. To the authors’ best knowledge, no previous studies cover all of these issues.

2. Modeling the Microcantilever Based on the Modified Couple Stress

Consider an electrostatically actuated microbeam of length L as shown in Figure 1. The microcantilever is under a transverse distributed electrical force caused by the input voltage applied between the microbeam and the substrate. The cross section of the beam is here assumed to be rectangular with width b and height h. The governing equation for the static behavior of a uniform, homogeneous Euler-Bernoulli beam, made of an isotropic linear elastic material, under a transverse distributed load, modeled based on the modified couple stress theory, is written as [28]

(EI + μAl²) d⁴w/dx⁴ − [N + (EA/2L) ∫₀ᴸ (dw/dx)² dx] d²w/dx² = F(x),  (1)

where E, N, I, and w are Young’s modulus in the beam direction, the axial load, the second moment of inertia, and the transverse deflection, respectively. In addition, l is a material length scale parameter and μ is the average shear modulus in the side-plane of the beam. Letting l = 0, the equation is reduced to the one corresponding to the classical theory. If, in (1), the mid-plane stretching integral is dropped, the model of the beam is called the linear equation without the effect of geometric nonlinearity. The cross-sectional area of the beam is A = bh. The electrostatic force per unit length, enhanced with the first-order fringing correction, can be presented in the following equation [29]:

F(x) = ε₀bV² / (2[g − w]²) · [1 + 0.65 (g − w)/b],  (2)

where ε₀ = 8.854 × 10⁻¹² C² N⁻¹ m⁻² is the permittivity of vacuum, V is the applied voltage, and g is the initial gap between the movable electrode and the ground electrode.
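The first-order fringing-corrected electrostatic force per unit length can be sketched numerically as follows (a minimal illustration in SI units; the function and variable names are our own, not from the paper):

```python
# Electrostatic force per unit length on the beam, with the first-order
# fringing-field correction described in the text (illustrative sketch).
EPS0 = 8.854e-12  # permittivity of vacuum, C^2 N^-1 m^-2

def electrostatic_force(V, g, w, b):
    """Force per unit length for applied voltage V, initial gap g,
    local deflection w, and beam width b (all in SI units)."""
    gap = g - w  # remaining gap under the deflected beam
    return (EPS0 * b * V**2) / (2.0 * gap**2) * (1.0 + 0.65 * gap / b)

# The force grows without bound as the deflection w approaches the gap g,
# which is the mechanism behind the pull-in instability.
f_undeflected = electrostatic_force(10.0, 1e-6, 0.0, 50e-6)
f_half_gap = electrostatic_force(10.0, 1e-6, 0.5e-6, 50e-6)
```

Halving the remaining gap roughly quadruples the force, so a small extra deflection near pull-in produces a disproportionately large attractive load.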

In the static case, all time derivatives of the deflection vanish. Hence, (1) is reduced to its static form. For a cantilever beam, the boundary conditions at the ends are

w(0) = 0,  dw/dx|₀ = 0,  d²w/dx²|_L = 0,  d³w/dx³|_L = 0.

Let us consider the following dimensionless parameters:

ŵ = w/g,  x̂ = x/L,  δ = μAl²/(EI),  β = ε₀bV²L⁴/(2EIg³).

In the above equations, the nondimensional parameter δ is defined as the size effect parameter. Also, β is the nondimensional voltage parameter. The normalized nonlinear governing equation of motion of the beam can be written as [30]

(1 + δ) d⁴ŵ/dx̂⁴ − 6(g/h)² [∫₀¹ (dŵ/dx̂)² dx̂] d²ŵ/dx̂² = β/(1 − ŵ)² + 0.65 (g/b) β/(1 − ŵ).
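The boundary-value problem above is what the numerical calculations of Section 4 solve. As a rough order-of-magnitude sanity check on such results, the textbook lumped parallel-plate model (a single-degree-of-freedom approximation, not the distributed model used in this paper) gives a closed-form pull-in voltage; a minimal sketch with illustrative numbers that are not from the paper’s data set:

```python
import math

EPS0 = 8.854e-12  # permittivity of vacuum, C^2 N^-1 m^-2

def lumped_pull_in_voltage(k, g, A):
    """Pull-in voltage of the classical single-DOF parallel-plate model:
    effective spring constant k (N/m), initial gap g (m), electrode
    area A (m^2). Pull-in occurs at a deflection of g/3, giving
    V_PI = sqrt(8*k*g**3 / (27*eps0*A))."""
    return math.sqrt(8.0 * k * g**3 / (27.0 * EPS0 * A))

# Illustrative values: k = 1 N/m, gap 2 um, 100 um x 50 um electrode.
V_pi = lumped_pull_in_voltage(k=1.0, g=2e-6, A=100e-6 * 50e-6)
```

The lumped estimate captures the scaling of the pull-in voltage with stiffness and gap, while the distributed model above adds the size effect and mid-plane stretching corrections.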

3. Overview of Neural Networks

A neural network is a massively parallel system comprised of highly interconnected, interacting processing elements, or nodes. Neural networks process information through the interactions of a large number of these simple processing elements, also known as neurons. Knowledge is not stored within individual processing elements but rather represented by the strengths of the connections between elements. Each piece of knowledge is a pattern of activity spread among many processing elements, and each processing element can be involved in the partial representation of many pieces of information. In recent years, neural networks have become a very useful tool in the modeling of complicated systems because they have an excellent ability to learn and to generalize (interpolate) the complicated relationships between input and output variables [27]. Also, ANNs behave as model-free estimators; that is, they can capture and model complex input-output relations without the help of a mathematical model [31]. In other words, training neural networks eliminates the need for explicit mathematical modeling or similar system analysis.

3.1. Artificial Neural Network Models of Static Pull-In Instability of Beam

In this research, backpropagation (BP) and radial basis function (RBF) neural networks have been used for modeling the pull-in instability voltage of microcantilever beams. The first ANN is very popular, especially in the areas of online monitoring and manufacturing modeling, as its design, structure, and operation are relatively simple. The radial basis network has some additional advantages, such as rapid learning and lower error. In particular, most RBFNs involve fixed basis functions with linearly entering unknown parameters in the output layer. In contrast, multilayer BP ANNs comprise adjustable basis functions, which results in nonlinearly entering unknown parameters. It is commonly known that the linear parameters in an RBFN make possible the use of least-squares-error-based updating schemes, which have faster convergence than the gradient-descent methods used in multilayer BP ANNs. On the other hand, in practice, the number of parameters in an RBFN starts becoming unmanageably large only when the number of input features increases beyond 10 or 20, which is not the case in our study. Hence, the use of an RBFN was practically possible in this research. In this paper, the MATLAB Neural Network Toolbox “NNET” was used as the platform to create the networks [32].

3.2. Backpropagation (BP) Neural Network

The backpropagation network (Figure 2) is composed of many interconnected neurons or processing elements (PEs) operating in parallel, often grouped in different layers.

As shown in Figure 3, each artificial neuron evaluates the inputs and determines the strength of each through its weighting factor. In the artificial neuron, the weighted inputs are summed to determine an activation level. That is,

S_j^k = Σ_i w_ij^k O_i^(k−1),

where O_j^k is the output of the j-th neuron in the k-th layer, S_j^k is the summation of all the inputs of the j-th neuron in the k-th layer, and w_ij^k is the weight from the i-th neuron to the j-th neuron. The output of the neuron is then transmitted along the weighted outgoing connections to serve as an input to subsequent neurons. In the present study, a hyperbolic tangent function with a bias is used as the activation function of the hidden and output neurons. Therefore, the output of the j-th neuron in the k-th layer can be expressed as

O_j^k = tanh(S_j^k + b_j^k).

Before practical application, the network has to be trained. To properly modify the connection weights, an error-correcting technique, often called the backpropagation learning algorithm or generalized delta rule, is employed. Generally, this technique involves two phases through the different layers of the network. The first is the forward phase, which occurs when an input vector is presented and propagated forward through the network to compute an output for each neuron. During the forward phase, the synaptic weights are all fixed. The error obtained when a training pair consisting of both input and output is given to the input layer of the network is expressed by the following equation [33]:

E_p = (1/2) Σ_j (t_j − o_j)²,

where t_j is the j-th component of the desired output vector and o_j is the calculated output of neuron j in the output layer. The overall error over all the patterns in the training set is defined as the mean square error (MSE) and is given by

MSE = (1/P) Σ_p E_p,

where P is the number of input-output patterns in the training set. The second is the backward phase, which is an iterative error reduction performed in the backward direction from the output layer to the input layer. In order to minimize the error E as rapidly as possible, the gradient descent method with an added momentum term is used.
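The neuron forward pass and the error measure just described can be sketched as follows (a minimal illustration; the function names and numbers are ours):

```python
import math

def neuron_output(inputs, weights, bias):
    """Forward pass of one neuron: weighted sum of inputs plus bias,
    passed through the hyperbolic tangent activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(s)

def mean_square_error(targets, outputs):
    """MSE over all patterns; each pattern error is 0.5 times the sum of
    squared differences between desired and computed outputs."""
    per_pattern = [0.5 * sum((t - o)**2 for t, o in zip(tp, op))
                   for tp, op in zip(targets, outputs)]
    return sum(per_pattern) / len(per_pattern)

o = neuron_output([0.5, -0.2], [0.8, 0.3], 0.1)   # tanh(0.44)
mse = mean_square_error([[1.0]], [[0.6]])          # 0.5 * 0.4^2 = 0.08
```

Minimizing this error with respect to the weights is what the backward phase of the training algorithm does.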
Hence, the new incremental change of weight can be written as

Δw_ij(n+1) = −η ∂E/∂w_ij + α Δw_ij(n),

where η is a constant real number between 0.1 and 1 called the learning rate, α is the momentum parameter, usually set to a number between 0 and 1, and n is the index of iteration. Therefore, the recursive formula for updating the connection weights becomes

w_ij(n+1) = w_ij(n) + Δw_ij(n+1).

These corrections can be made incrementally (after each pattern presentation) or in batch mode. In the latter case, the weights are updated only after the entire training pattern set has been applied to the network. With this method, the order in which the patterns are presented to the network does not influence the training, because adaptation is done only at the end of each epoch. We have therefore chosen this way of updating the connection weights.
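The gradient-descent-with-momentum update above can be sketched for a single weight (an illustration only; the η and α values are placeholders, not the ones used in the paper):

```python
def momentum_update(weight, prev_delta, grad, eta=0.5, alpha=0.9):
    """One gradient-descent-with-momentum step for a single weight:
    delta(n+1) = -eta * dE/dw + alpha * delta(n);
    w(n+1) = w(n) + delta(n+1).
    eta is the learning rate (0.1..1), alpha the momentum term (0..1)."""
    delta = -eta * grad + alpha * prev_delta
    return weight + delta, delta

# In batch mode this update is applied once per epoch, after the gradient
# has been accumulated over the whole training set.
w, d = momentum_update(weight=0.2, prev_delta=0.05, grad=0.1)
```

The momentum term reuses the previous step direction, which damps oscillations and speeds convergence along shallow error valleys.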

3.3. Radial Basis Function (RBF) Neural Network

The construction of a radial basis function (RBF) neural network in its most basic form involves three entirely different layers. A typical RBFN with n inputs and m outputs is shown in Figure 4. The input layer is made up of source nodes (sensory units). The second layer is a single hidden layer of high enough dimension, which serves a different purpose than in a feed-forward network. The output layer supplies the response of the network to the activation patterns applied to the input layer. The input units are fully connected through unit-weighted links to the hidden neurons, and the hidden neurons are fully connected by weighted links to the output neurons. Each hidden neuron receives the input vector and compares it with the position of the center of its Gaussian activation function with regard to distance. Finally, the output of the j-th hidden neuron can be written as

O_j = exp(−‖x − c_j‖² / (2σ_j²)),

where x is an n-dimensional input vector, c_j is the vector representing the position of the center of the j-th hidden neuron in the input space, and σ_j is the standard deviation or spread factor of the Gaussian activation function. The structure of a radial basis neuron in the hidden layer can be seen in Figure 5. Output neurons have linear activation functions and form a weighted linear combination of the outputs from the hidden layer:

y_k = Σ_j w_jk O_j,

where y_k is the output of neuron k, the sum runs over the hidden neurons, and w_jk is the weight value from the j-th hidden neuron to the k-th output neuron. Basically, the RBFN has the properties of rapid learning, easy convergence, and low error, and generally possesses the following characteristics.
(1) It may require more neurons than the standard feed-forward BP networks.
(2) It can be designed in a fraction of the time that it takes to train the BP network.
(3) It has an excellent ability to represent nonlinear functions.
RBFN is being used for an increasing number of applications, providing a very helpful modeling tool.
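The two-layer computation just described, Gaussian hidden units followed by a linear output neuron, can be sketched as follows (a minimal forward pass; centers, weights, and inputs are illustrative):

```python
import math

def rbf_forward(x, centers, sigma, out_weights):
    """Forward pass of a minimal RBF network: Gaussian hidden units
    followed by one linear output neuron.
    x: input vector; centers: list of center vectors; sigma: spread
    factor; out_weights: weight from each hidden neuron to the output."""
    hidden = []
    for c in centers:
        dist2 = sum((xi - ci)**2 for xi, ci in zip(x, c))
        hidden.append(math.exp(-dist2 / (2.0 * sigma**2)))
    # Linear output neuron: weighted combination of hidden activations.
    return sum(w * h for w, h in zip(out_weights, hidden))

# A hidden unit responds with 1.0 when the input sits exactly on its
# center and decays with distance, so the network interpolates locally.
y = rbf_forward([0.0, 0.0], [[0.0, 0.0], [1.0, 1.0]], sigma=1.0,
                out_weights=[2.0, 1.0])
```

Because each hidden unit is localized around its center, fitting the output layer reduces to a linear least-squares problem, which is the source of the rapid learning noted above.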

4. Results and Discussion

4.1. Static Pull-In Instability Analysis

In order to obtain different pull-in instability parameters and output features for training and testing of the neural networks, a series of numerical analyses was performed with the MAPLE package. At first, some preliminary calculations were carried out to determine the stable domain of the pull-in instability parameters and also the different ranges of the pull-in variables. Based on the preliminary calculation results, beam length (L), width of beam (b), gap (g), and the (h/l) ratio were chosen as the independent input parameters. The total data set obtained from the MAPLE calculations comprises 120 cases, which form the neural networks’ training and testing sets. In the considered case study, it is assumed that the cantilever beam is made of silicon in the [110] direction; that is, the length of the beam is along the [110] direction of the silicon crystal. Also, the side planes of the beam are considered normal to the [110] direction; the Young’s modulus, the average in-plane Poisson’s ratio, and the shear modulus are taken from [29].

4.2. Modeling of Static Pull-In Instability of Cantilever Beam Using Neural Networks

The modeling of the pull-in instability of the microbeam with the BP and RBF neural networks is composed of two stages: training and testing of the networks with numerical data. The training data consisted of values for beam length (L), gap (g), width of beam (b), and the (h/l) ratio, together with the corresponding static pull-in instability voltage (V_PI). A total of 120 such data sets were used, of which 110 were selected randomly and used for training purposes, whilst the remaining 10 data sets were presented to the trained networks as new application data for verification (testing) purposes. Thus, the networks were evaluated using data that had not been used for training. Training/testing pattern vectors are formed, each with an input condition vector and the corresponding target vector. Each term is mapped to a value between −1 and 1 using the following linear mapping formula:

N = N_min + (N_max − N_min)(R − R_min)/(R_max − R_min),

where N is the normalized value of the real variable, N_min and N_max are the minimum and maximum values of normalization (here −1 and 1), respectively, R is the real value of the variable, and R_min and R_max are the minimum and maximum values of the real variable, respectively. These normalized data were used as the input and output to train the ANN. In other words, the network has four inputs of beam length (L), gap (g), width of beam (b), and (h/l) ratio, and one output of static pull-in voltage (V_PI). Figure 6 shows the general network topology for modeling the process.
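The linear mapping and its inverse (needed to turn network outputs back into volts) can be sketched as:

```python
def normalize(r, r_min, r_max, n_min=-1.0, n_max=1.0):
    """Linear min-max mapping of a real value r in [r_min, r_max]
    onto the normalized interval [n_min, n_max]."""
    return n_min + (n_max - n_min) * (r - r_min) / (r_max - r_min)

def denormalize(n, r_min, r_max, n_min=-1.0, n_max=1.0):
    """Inverse mapping, recovering the real-valued variable from its
    normalized counterpart."""
    return r_min + (r_max - r_min) * (n - n_min) / (n_max - n_min)

# Illustrative values only: a variable of 300 in an assumed 100..500 range
# maps to the midpoint 0.0 of [-1, 1], and the inverse recovers 300.
n = normalize(300.0, 100.0, 500.0)
r = denormalize(n, 100.0, 500.0)
```

Normalizing all inputs and the target to a common interval keeps variables with very different magnitudes (e.g., lengths in micrometers versus voltages in volts) from dominating the training.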

4.3. BP Neural Network Model

The size of the hidden layer(s) is one of the most important considerations when solving actual problems with a multilayer feed-forward network. However, it has been shown that a BP neural network with one hidden layer can uniformly approximate any continuous function to any desired degree of accuracy, given an adequate number of neurons in the hidden layer and the correct interconnection weights [34]. Therefore, one hidden layer was adopted for the BP model. To determine the number of neurons in the hidden layer, a trial and error procedure needs to be carried out. As such, some attempts have been made to study the network performance with different numbers of hidden neurons. Hence, a number of candidate networks were constructed, each trained separately, and the “best” network was selected based on the accuracy of the predictions in the testing phase. It should be noted that if the number of hidden neurons is too large, the ANN might be overtrained, giving spurious values in the testing phase; if too few neurons are selected, the function mapping might not be accomplished due to undertraining. Three functions, namely, newelm, newff, and newcf, have been used for creating the BP networks. Table 1 shows the 10 numerical data sets that have been used for verifying or testing the network capabilities in modeling the process.

Therefore, the general network structure is supposed to be 4–n–1, which implies 4 neurons in the input layer, n neurons in the hidden layer, and 1 neuron in the output layer. Then, by varying the number of hidden neurons, different network configurations are trained, and their performances are checked. The results are shown in Table 2.

For the training problem, equal values of the learning rate and momentum constant were used. Also, the error stopping criterion was set at 0.01, which means that training epochs continued until the mean square error fell beneath this value. Both the required iteration numbers and the mapping performances were examined for these networks. As the error criterion for all networks was the same, their performances are comparable. As a result, from Table 2, the best network structure of the BP model is found to have 8 neurons in the hidden layer, with an average verification error of 6.36% in predicting the pull-in voltage with the newelm function. Table 3 shows the comparison of calculated and predicted values for the static pull-in voltage in the verification cases. After 1884 epochs, the MSE between the desired and actual outputs became less than 0.01. At the beginning of the training, the output from the network is far from the target value; however, the output slowly and gradually converges to the target value with more epochs, as the network learns the input/output relation of the training samples.

4.4. RBF Neural Network Model

The spread factor (σ) of the Gaussian activation functions in the hidden layer is the parameter that should be determined by trial and error when using the MATLAB neural network toolbox for designing RBF networks. It has to be larger than the distance between adjacent input vectors, so as to get good generalization, but smaller than the distance across the whole input space. Therefore, in order to have a network model with good generalization capabilities, the spread factor should be selected between 0.5 and 5.34. Two functions, namely, newrbe and newrb, have been used for creating the RBF networks. For training the RBF network, at first, a guess is made for the value of the spread factor in the obtained interval. Also, the number of radial basis neurons is originally set to one. At each iteration, the input vector that most lowers the network training error is used to create a radial basis neuron. Then, the error of the new network is checked, and if it is low enough, the training stops; otherwise, the next neuron is added. This procedure is repeated until the error goal is achieved or the maximum number of neurons is reached. In the present case, it was found by trial and error that 20 hidden neurons with a spread factor of 3 give the model with the best performance in the verification stage. Table 4 shows the effect of the number of hidden neurons on the RBF network performance. It is clear that adding more than 20 hidden neurons makes the training error (MSE) smaller but deteriorates the network’s generalization capability, as the average verification error increases instead of decreasing. Therefore, the optimum number of radial basis neurons is 20. The selected network, built with the newrbe function, has an average error of 4.55% in response to the 10 verification calculations (Table 1). Table 5 lists the output values predicted by the RBF neural model and the calculated ones in the verification (testing) phase.
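MATLAB’s newrbe designs an exact-interpolation RBF network by placing one hidden neuron on every training point and solving a linear system for the output weights. A minimal numpy analog of that idea (our own sketch under that assumption, not the toolbox code) can be written as:

```python
import numpy as np

def rbf_exact_design(X, t, sigma):
    """Exact-interpolation RBF design in the spirit of newrbe: one
    Gaussian neuron per training sample, output weights obtained by
    solving the linear system Phi @ w = t."""
    X = np.asarray(X, dtype=float)
    t = np.asarray(t, dtype=float)
    # Pairwise squared distances between all training samples.
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * sigma**2))   # Gaussian design matrix
    w = np.linalg.solve(Phi, t)            # linear output weights
    return X, w                            # centers and weights

def rbf_predict(x, centers, w, sigma):
    """Evaluate the designed network at a new input x."""
    d2 = ((centers - np.asarray(x, dtype=float))**2).sum(axis=1)
    return float(np.exp(-d2 / (2.0 * sigma**2)) @ w)

# The designed network reproduces its training targets exactly.
X = [[0.0], [1.0], [2.0]]
t = [1.0, 3.0, 2.0]
centers, w = rbf_exact_design(X, t, sigma=1.0)
pred = rbf_predict([1.0], centers, w, sigma=1.0)
```

This zero-training-error property is also why such a network can generalize poorly if the spread factor is chosen badly, mirroring the trade-off observed in Table 4.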

5. Selection of the Best Model

The simplest approach to the comparison of different networks is to evaluate the error function using data independent of that used for training [35]. Hence, the selection of the corresponding “best network” is carried out based on the accuracy of predicting the process outputs in the verification stage. From Tables 2 and 4, it is concluded that the RBFN model, with a total average error of 4.55% in comparison with 6.36% for the BP model, has superior performance and is therefore picked as the best model. Figure 7 illustrates the numerical pull-in voltages and those predicted by the BP and RBF neural networks in the verification stage. Figures 8 and 9 compare the pull-in voltages evaluated by the modified couple stress theory, for fixed values of the h/l ratio and of the beam length, width, and gap, with the results of the BP and RBF neural networks, respectively. The pull-in voltages of the microcantilever versus the h/l parameter are depicted in Figure 10 for the three BP functions; the same conditions have been used for the two RBFN functions in Figure 11. As a further step to study the capabilities of each network in fitting all points in the input space, a linear regression between the network output and the corresponding target (numerical) values was performed. In this case, the entire data set (training and verification) was put through the trained networks, and a regression analysis was conducted. The networks have mapped the output very well. The correlation coefficients (R) are also given as a criterion of comparison: R is 0.989 for the BP model and 0.997 for the RBF model in simulating the pull-in voltage.
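The correlation coefficient used as the comparison criterion above can be computed as follows (a minimal sketch with illustrative numbers, not the paper’s data):

```python
import math

def correlation_coefficient(targets, predictions):
    """Pearson correlation R between target values and network outputs,
    the criterion used here to compare the fitted models."""
    n = len(targets)
    mt = sum(targets) / n
    mp = sum(predictions) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(targets, predictions))
    st = math.sqrt(sum((t - mt)**2 for t in targets))
    sp = math.sqrt(sum((p - mp)**2 for p in predictions))
    return cov / (st * sp)

# Predictions exactly proportional to the targets give R = 1, so values
# of 0.989 and 0.997 indicate a very tight fit over the whole data set.
r = correlation_coefficient([1.0, 2.0, 3.0], [2.1, 4.2, 6.3])
```

Note that R measures linear association, not absolute accuracy, which is why it is reported alongside the average percentage verification errors rather than instead of them.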

6. Conclusions and Summary

In this paper, by using the modified couple stress theory, the size-dependent behavior of an electrostatically actuated microcantilever has been investigated; in particular, the pull-in instability of geometrically nonlinear cantilever microbeams under an applied voltage. Two supervised neural networks have been used to model the static pull-in instability voltage of microcantilever beams. Based on the results of each network with data sets different from those used in the training phase, it was shown that the RBF neural model has superior performance to the BP network model and can predict the output over a wide range of microbeam conditions with reasonable accuracy. In sum, the following items can also be mentioned as the general findings of the present research.
(1) The BP and RBF neural networks are capable of constructing models, using only numerical data, that describe the static pull-in instability behavior.
(2) The RBF neural network, which possesses the privileges of rapid learning, easy convergence, and lower error with respect to the BP network, has better generalization power and is more accurate for this particular case. This selection was made according to the results obtained in the verification phase.
(3) The results show that the newelm function is more accurate than the newff and newcf functions. Also, the Levenberg-Marquardt training is faster than the other training methods.
(4) The results have demonstrated the applicability and adaptability of the RBFN for the analysis of the static pull-in instability voltage of cantilever beams; also, the newrbe function is more accurate and faster than the newrb function.
(5) For cantilever beams, increasing the gap length significantly increases the pull-in voltage.
(6) For cantilever beams, increasing the beam length significantly decreases the pull-in voltage.
(7) When the h/l ratio increases, the pull-in voltage predicted by the modified couple stress theory and the ANN remains approximately constant.
These conclusions indicate that the geometry of the beam has significant influences on the electrostatic characteristics of microbeams, which can be designed to tailor the desired performance in different MEMS applications.