Mathematical Problems in Engineering
Volume 2011 (2011), Article ID 145608, 16 pages
http://dx.doi.org/10.1155/2011/145608
Research Article

Inverse Problem with Respect to Domain and Artificial Neural Network Algorithm for the Solution

1Institute of Applied Mathematics, Baku State University, Baku 1148, Azerbaijan
2Institute of Information Technology, ANAS, Baku 1141, Azerbaijan

Received 25 April 2011; Accepted 21 June 2011

Academic Editor: Gradimir V. Milovanović

Copyright © 2011 Kambiz Majidzadeh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider the inverse problem with respect to a domain. We suggest a new approach for reducing the inverse problem for a domain to an equivalent problem in a variational setting and give an effective algorithm for solving such problems. In order to solve the boundary value problem, an artificial neural network is used at each step of the iteration.

1. Introduction

A wide class of practical problems reduces to the inverse problem with respect to a domain. Examples include problems of elasticity theory, diffusion problems, problems arising in hydrodynamics [1–6], and so forth. Papers concerning inverse problems usually deal with inverse problems for an unknown function (coefficients and functions occurring in the boundary and initial conditions). In our case, however, a domain is sought, and the investigation of such problems involves serious difficulties. In order to avoid these difficulties, the considered inverse problem is first reduced to a variational statement. Since the resulting variational problem depends on the unknown domain, its investigation also involves certain difficulties. Here, we give an effective algorithm for solving such problems.

2. Statement of the Problem and Main Result

Let $D$ be an $r$-dimensional domain, that is, $D \subset \mathbb{R}^r$, and let $x = (x_1, x_2, \ldots, x_r) \in D$.

Denote by $S_D$ the boundary of the domain $D$, $S_D = \partial D$. Assume that the boundary $S_D$ belongs to the class $C^2$. Let us consider the following inverse problem:
$$-\Delta u + a(x)\,u = f(x), \quad x \in D, \tag{2.1}$$
$$u(x) = 0, \quad x \in S_D, \tag{2.2}$$
$$\frac{\partial u(x)}{\partial n} = 0, \quad x \in S_D, \tag{2.3}$$
where the functions $a$ and $f$ are continuously differentiable in $\mathbb{R}^r$ and $a(x) > 0$. Denote by $K$ the set of convex domains with boundaries of class $C^2$.

Our goal is to find a pair $(D, u) \in K \times C^2(D)$ such that the function $u = u(x)$ satisfies (2.1) and the boundary conditions (2.2), (2.3) in the domain $D$. As can be seen, condition (2.2) is a Dirichlet condition and condition (2.3) is a Neumann condition. To solve inverse problem (2.1)–(2.3), we first consider the following variational problem with unknown domain:
$$J(D, u) = \int_D F\bigl(x, u(x), u_x(x)\bigr)\,dx \to \min, \quad D \in K, \ u \in C^2(D), \tag{2.4}$$
$$u(x) = 0, \quad x \in S_D. \tag{2.5}$$
We assume that the function $F(x, u, p)$ is continuously differentiable with respect to its variables on $D \times \mathbb{R} \times \mathbb{R}^r$.

If the boundary condition (2.5) is satisfied for $D \in K$ and $u \in C^2(D)$, the pair $(D,u)$ is said to be a possible pair. Denote by $M$ the set of all possible pairs. A pair $(D,u) \in M$ is called an optimal pair if it gives a minimum to functional (2.4) on the set $M$.

We state the following theorem obtained in [7].

Theorem 2.1. Let the pair $(D,u) \in M$ be an optimal pair for variational problem (2.4), (2.5). Then the function $u = u(x)$ is a solution of the following Euler equation in the domain $D$:
$$F_u\bigl(x,u(x),u_x(x)\bigr) - \sum_{i=1}^{r} \frac{d}{dx_i} F_{u_{x_i}}\bigl(x,u(x),u_x(x)\bigr) = 0, \quad x \in D, \tag{2.6}$$
and, moreover, on the boundary $S_D$ the condition
$$F\bigl(x,u(x),u_x(x)\bigr) - \sum_{i=1}^{r} u_{x_i}(x)\, F_{u_{x_i}}\bigl(x,u(x),u_x(x)\bigr) = 0, \quad x \in S_D, \tag{2.7}$$
is satisfied.

As can be seen, this theorem is proved for convex domains. However, one can obtain a similar result for a doubly connected domain $D$ with internal and external boundaries $S_1$ and $S_2$. For that, we must use the decomposition
$$\int_D F\,dx = \int_{D_2} F\,dx - \int_{D_1} F\,dx, \tag{2.8}$$
where $D_1$ and $D_2$ are the convex domains bounded by $S_1$ and $S_2$, respectively.

Now, take the function $F(x,u,p)$ in the following form:
$$F\bigl(x,u,u_x\bigr) = \bigl|u_x\bigr|^2 + a(x)\,u^2 - 2 f(x)\,u, \tag{2.9}$$
where
$$\bigl|u_x\bigr|^2 = \bigl|u_{x_1}\bigr|^2 + \bigl|u_{x_2}\bigr|^2 + \cdots + \bigl|u_{x_r}\bigr|^2. \tag{2.10}$$
Since the functions $a$ and $f$ are continuously differentiable in $\mathbb{R}^r$, the function $F(x,u,p)$ is continuously differentiable with respect to its variables on $D \times \mathbb{R} \times \mathbb{R}^r$. Apply the theorem above to this function. It is clear that
$$F_u = 2a(x)\,u - 2 f(x), \qquad F_{u_{x_i}} = 2 u_{x_i}. \tag{2.11}$$
Hence,
$$\sum_{i=1}^{r}\frac{d}{dx_i} F_{u_{x_i}} = 2\Delta u. \tag{2.12}$$
So, from condition (2.6), we get that the function $u = u(x)$ satisfies (2.1) in the domain $D$. From condition (2.7), we get the boundary condition
$$\bigl|u_x\bigr|^2 + a(x)\,u^2 - 2 f(x)\,u - 2\bigl|u_x\bigr|^2 = 0, \quad x \in S_D. \tag{2.13}$$
Taking into account the condition $u(x) = 0$, $x \in S_D$, we get
$$\bigl|u_x\bigr| = 0, \quad x \in S_D, \tag{2.14}$$
or
$$u_x(x) = 0, \quad x \in S_D. \tag{2.15}$$
From the definition of the normal derivative $\partial u(x)/\partial n$, we get that the function $u = u(x)$ satisfies the boundary condition (2.3) as well. Thus, we have proved the following theorem.

Theorem 2.2. Let the pair $(D,u) \in M$ be an optimal pair for problem (2.4), (2.5). Then it is a solution of problem (2.1)–(2.3) as well.

This theorem shows that, instead of the inverse problem (2.1)–(2.3), we may take the function $F(x,u,p)$ in the form (2.9) and investigate the variational problem (2.4), (2.5). Notice that the converse is not true in general. Although the functional $J(D,u)$ is convex with respect to the function $u$, it is not convex with respect to $D$ in general. Using the obtained results, we give the following method for solving problem (2.4), (2.5).

Let the system of functions $\{\varphi_k(x)\}$, $k = 1, 2, \ldots$, form a basis in the space $C^2(D)$. Then the function $u = u(x)$ may be expanded in this basis:
$$u = \sum_{k=1}^{\infty} \alpha_k \varphi_k(x). \tag{2.16}$$
Taking this into account in problem (2.4), (2.5), we get
$$I(D,\alpha) = \int_D \Phi(x,\alpha)\,dx \to \min, \quad D \in K, \ \alpha = (\alpha_1, \alpha_2, \alpha_3, \ldots), \tag{2.17}$$
$$\sum_{k=1}^{\infty} \alpha_k \varphi_k(x) = 0, \quad x \in S_D, \tag{2.18}$$
where
$$\Phi(x,\alpha) = F\Bigl(x, \sum_{k=1}^{\infty} \alpha_k \varphi_k(x), \sum_{k=1}^{\infty} \alpha_k \frac{\partial \varphi_k(x)}{\partial x}\Bigr). \tag{2.19}$$
In our case, as $F(x,u,p)$ has the form (2.9),
$$\Phi(x,\alpha) = \Bigl|\sum_{k=1}^{\infty} \alpha_k \frac{\partial \varphi_k}{\partial x}\Bigr|^2 + a(x)\Bigl(\sum_{k=1}^{\infty} \alpha_k \varphi_k(x)\Bigr)^2 - 2 f(x) \sum_{k=1}^{\infty} \alpha_k \varphi_k(x). \tag{2.20}$$
To solve problem (2.17), (2.18), we calculate the first variation of functional (2.17). It is clear that
$$\frac{\partial I}{\partial \alpha} = \int_D \frac{\partial \Phi(x,\alpha)}{\partial \alpha}\,\delta\alpha\,dx. \tag{2.21}$$
Next, calculate the first variation of the functional $I(D,\alpha)$ with respect to the domain $D$. In [5, 7], a functional of the form
$$G(D) = \int_D g(x)\,dx \tag{2.22}$$
is considered, and, for its first variation, the formula
$$\delta G(D) = \int_{S_D} g(x)\,\delta P_D(n(x))\,ds \tag{2.23}$$
is obtained. Here, the function $g(x)$ is continuously differentiable in $\mathbb{R}^r$, $n(x)$ is the external normal to the surface $S_D$ at the point $x$, and $P_D(x)$ is the support function of the domain $D$, defined as
$$P_D(x) = \sup_{l \in D} (l, x), \quad x \in \mathbb{R}^r. \tag{2.24}$$
Taking this formula into account, we get
$$\delta I(D,\alpha) = \int_{S_D} \Phi(x,\alpha)\,\delta P_D(n(x))\,ds + \int_D \frac{\partial \Phi(x,\alpha)}{\partial \alpha}\,\delta\alpha\,dx. \tag{2.25}$$

It is seen from this formula that the numbers $\alpha_1, \alpha_2, \alpha_3, \ldots$ are found from the system of equations
$$\int_D \frac{\partial \Phi(x,\alpha)}{\partial \alpha}\,dx = 0. \tag{2.26}$$
In our case, as $\Phi(x,\alpha)$ has the form (2.20), system (2.26) is a system of linear equations.
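For the quadratic integrand (2.20), this linear structure can be written out explicitly. Truncating the expansion at $N$ terms (a finite Ritz approximation, introduced here only for computation) and differentiating under the integral with respect to $\alpha_j$ gives
$$2\sum_{k=1}^{N} \alpha_k \int_D \Bigl(\frac{\partial \varphi_j}{\partial x}\cdot\frac{\partial \varphi_k}{\partial x} + a(x)\,\varphi_j\varphi_k\Bigr)dx - 2\int_D f(x)\,\varphi_j\,dx = 0, \quad j = 1,\ldots,N,$$
that is, the linear system $A\alpha = b$ with $A_{jk} = \int_D \bigl(\nabla\varphi_j\cdot\nabla\varphi_k + a\,\varphi_j\varphi_k\bigr)\,dx$ and $b_j = \int_D f\,\varphi_j\,dx$.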

The set $K$ may satisfy some additional restrictions as well. For example, $K$ may be the set of domains with a given volume, or the set of domains with a given surface area. In another case, the set $K$ may be given as $D_0 \subset D \subset D_1$, where $D_0, D_1 \subset \mathbb{R}^r$ are given domains. In practical problems, the set $K$ may be given in the form of the integral restrictions
$$\int_D g(x)\,dx = c \tag{2.27}$$
or
$$\int_D g(x)\,dx \le c. \tag{2.28}$$
In general, we assume that there is a domain $G \subset \mathbb{R}^r$ such that every $D \in K$ satisfies $D \subset G$. We can state this condition in a simpler form as follows.

We know that the optimal domain is contained in a certain domain $G$. The obtained relations (2.25), (2.26) enable us to solve problem (2.17), (2.18) approximately. For that, we give the following algorithm.

3. Algorithm for Numerical Solution

Based on the above procedure, the following method is proposed for the numerical solution of problem (2.4), (2.5).

Step 1. Take an arbitrary domain $D^{(0)} \in K$ and basis functions
$$\varphi_k^{(0)}(x), \quad k = 1, 2, \ldots. \tag{3.1}$$

Step 2. Solving the system of equations
$$\int_{D^{(0)}} \frac{\partial \Phi(x,\alpha)}{\partial \alpha}\,dx = 0, \tag{3.2}$$
we find the coefficient vector $\alpha^{(0)} = (\alpha_1^{(0)}, \alpha_2^{(0)}, \alpha_3^{(0)}, \ldots)$.
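The following MATLAB sketch illustrates Step 2 under purely illustrative assumptions (initial domain $D^{(0)}$ the unit disk, $a(x) \equiv 1$, $f(x) \equiv 1$, and four bubble-type basis functions chosen to vanish on $S_{D^{(0)}}$ so that the boundary condition (2.18) holds automatically); it assembles and solves the truncated linear system $A\alpha = b$ written out in Section 2 by midpoint-rule quadrature. None of these choices are taken from the paper.

    % Sketch of Step 2 on D0 = unit disk (illustrative assumptions throughout).
    % Basis: phi_k(x,y) = (1 - x^2 - y^2) * m_k(x,y), which vanishes on S_D0.
    a  = @(x,y) 1 + 0*x;           % coefficient a(x) > 0 (illustrative)
    f  = @(x,y) ones(size(x));     % right-hand side f(x) (illustrative)
    m  = {@(x,y) ones(size(x)), @(x,y) x, @(x,y) y, @(x,y) x.*y};  % monomial factors
    N  = numel(m);

    % Midpoint-rule quadrature nodes on a grid restricted to the disk
    h = 0.01;
    [X,Y] = meshgrid(-1+h/2:h:1-h/2);
    in = X.^2 + Y.^2 < 1;              % interior indicator
    X = X(in); Y = Y(in); w = h^2;     % quadrature weight per node

    bub  = 1 - X.^2 - Y.^2;            % bubble factor vanishing on the boundary
    bubx = -2*X;  buby = -2*Y;         % its gradient

    phi = zeros(numel(X),N); phix = phi; phiy = phi;
    for k = 1:N
        mk = m{k}(X,Y);
        switch k                        % gradients of the monomial factors
            case 1, mkx = zeros(size(X)); mky = zeros(size(X));
            case 2, mkx = ones(size(X));  mky = zeros(size(X));
            case 3, mkx = zeros(size(X)); mky = ones(size(X));
            case 4, mkx = Y;              mky = X;
        end
        phi(:,k)  = bub.*mk;
        phix(:,k) = bubx.*mk + bub.*mkx;
        phiy(:,k) = buby.*mk + bub.*mky;
    end
    A = zeros(N); b = zeros(N,1);
    for j = 1:N
        for k = 1:N
            A(j,k) = w*sum( phix(:,j).*phix(:,k) + phiy(:,j).*phiy(:,k) ...
                            + a(X,Y).*phi(:,j).*phi(:,k) );
        end
        b(j) = w*sum( f(X,Y).*phi(:,j) );
    end
    alpha0 = A\b;                      % coefficients alpha^(0) of Step 2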

Step 3. Minimizing the linear functional
$$\int_{S_{D^{(0)}}} \Phi\bigl(x,\alpha^{(0)}\bigr)\,P_D(n(x))\,ds \to \min, \quad D \in K, \tag{3.3}$$
we find the convex function $P(x)$.

Step 4. The intermediate domain $\widetilde{D}^{(0)}$ is found as the subdifferential of the function $P(x)$ at the point $x = 0$ [8]. In other words,
$$\widetilde{D}^{(0)} = \partial P(0) = \bigl\{\, l \in \mathbb{R}^r : P(x) \ge (l,x),\ \forall x \in \mathbb{R}^r \,\bigr\}. \tag{3.4}$$
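A simple example (ours, for illustration) clarifies Step 4. For the ball $D = \{x : |x - c| \le R\}$, the support function (2.24) is
$$P_D(x) = \sup_{l \in D}(l,x) = (c,x) + R\,|x|,$$
and the subdifferential of this convex function at $x = 0$ recovers the ball itself:
$$\partial P_D(0) = \bigl\{\, l : (l,x) \le (c,x) + R\,|x|,\ \forall x \,\bigr\} = \bigl\{\, l : |l - c| \le R \,\bigr\} = D.$$
In general, for a convex compact domain $D$ one has $\partial P_D(0) = D$, which is exactly how Step 4 reconstructs the intermediate domain from the support function found in Step 3.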

Step 5. A new domain $D^{(1)}$ is found as follows:
$$D^{(1)} = (1-\mu)\,D^{(0)} + \mu\,\widetilde{D}^{(0)}, \quad 0 < \mu < 1. \tag{3.5}$$
Here, the number $\mu$ may be chosen in different ways [7, 9, 10].
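A useful fact from convex analysis (our remark, not part of the original derivation) is that the convex combination in (3.5) is most easily carried out on the level of support functions: for convex domains and $\mu \in [0,1]$,
$$P_{(1-\mu)D^{(0)} + \mu \widetilde{D}^{(0)}}(x) = (1-\mu)\,P_{D^{(0)}}(x) + \mu\,P_{\widetilde{D}^{(0)}}(x),$$
so $D^{(1)}$ can be obtained by forming this combination of support functions and then taking the subdifferential at zero, as in Step 4.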

If the newly found domain $D^{(1)}$ satisfies a definite accuracy condition, the iteration process is completed. Otherwise, the iteration begins again from the first step with the new domain $D^{(1)}$. The accuracy condition may be given in different ways, for example,
$$\bigl|I\bigl(D^{(k+1)},\alpha^{(k+1)}\bigr) - I\bigl(D^{(k)},\alpha^{(k)}\bigr)\bigr| < \varepsilon. \tag{3.6}$$
Here, $\varepsilon > 0$ is the accuracy order of the method. We now give some rules for choosing the quantity $\mu_k$.
(1) In general, the numbers $\mu_k$ may be chosen from the condition
$$f_k(\mu) = I\bigl((1-\mu)\,D^{(k)} + \mu\,\widetilde{D}^{(k)},\ \alpha^{(k)}\bigr) \to \min, \quad \mu \in [0,1]. \tag{3.7}$$
(2) The quantity $\mu_k$ may be given as a sequence satisfying the conditions
$$0 \le \mu_k \le 1, \qquad \lim_{k\to\infty}\mu_k = 0, \qquad \sum_{k=0}^{\infty}\mu_k = \infty, \tag{3.8}$$
for example,
$$\mu_k = \frac{1}{k+1}, \quad k = 1, 2, 3, \ldots. \tag{3.9}$$
(3) Another method is to take $\mu_k = 1$ and verify whether the value of the functional decreases; if it does not, the value of $\mu_k$ is halved (a sketch of this rule is given below). In order to solve boundary value problem (2.1), (2.2), we will use an artificial neural network at each step of the iteration.
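A minimal sketch of rule (3), assuming a user-supplied routine eval_I (hypothetical name) that returns the value of $I\bigl((1-\mu)D^{(k)} + \mu\widetilde{D}^{(k)},\ \alpha^{(k)}\bigr)$ for a given $\mu$:

    % Step-size rule (3): start from mu = 1 and halve it until the functional decreases.
    % I_current holds I(D^(k), alpha^(k)); eval_I(mu) is assumed to be provided
    % by the surrounding iteration (hypothetical helper).
    mu = 1;
    while eval_I(mu) >= I_current && mu > 1e-6
        mu = mu/2;        % no decrease achieved: halve the step
    end
    mu_k = mu;            % accepted step size for iteration k

The lower bound 1e-6 is only a safeguard against an infinite loop; any small positive tolerance serves the same purpose.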

4. Application of Neural Networks to Solving the Boundary Problem

Before passing to the solution of differential equations by means of neural networks, let us note why this is necessary. It is known that there are many methods and approaches for solving differential equations and the related boundary value problems. However, there exist boundary value problems for which the known methods do not allow one to find the solution with high accuracy, for many reasons. This may be connected with the complexity of the geometric structure of the domain in which the differential equations are given, or with strong nonlinearity of the problem. Another reason is that a great number of partition points may be required, which strongly affects the error. At the same time, the solution of a differential equation may require numerous iterations.

In addition, the application of a neural network to the solution of differential equations is motivated by two further reasons. The first is that the logical scheme of a neural network corresponds to the logical scheme of the solution of differential equations [11–14]; in other words, the solution scheme of differential equations makes it possible to create an appropriate neural network structure. The second is that neural networks can approximate functions with high accuracy [15, 16].

Let $D \subset \mathbb{R}^2$ and $S = \partial D$. Consider the following two-dimensional Poisson equation:
$$\Delta u = f(x), \quad x \in D, \qquad u(x) = g(x), \quad x \in S. \tag{4.1}$$
We seek the solution of this problem in the form
$$u(x) = \sum_{i=1}^{n} w_i \varphi_i(x), \tag{4.2}$$
where the functions $\varphi_i(x)$ are usually chosen as radial basis functions. Substituting expansion (4.2) into (4.1), we get
$$\sum_{i=1}^{n} w_i \Delta\varphi_i(x) = f(x), \quad x \in D, \qquad \sum_{i=1}^{n} w_i \varphi_i(x) = g(x), \quad x \in S. \tag{4.3}$$
The weight coefficients $w_i$, $i = 1,\ldots,n$, are found from the minimality condition of the following functional:
$$J(w) = \int_D \Bigl|\sum_{i=1}^{n} w_i \Delta\varphi_i(x) - f(x)\Bigr|^2 dx + \int_S \Bigl|\sum_{i=1}^{n} w_i \varphi_i(x) - g(x)\Bigr|^2 ds \to \min. \tag{4.4}$$
In the great majority of cases, however, the function
$$J(w) = \sum_{k=1}^{M_1} \Bigl|\sum_{i=1}^{n} w_i \Delta\varphi_i\bigl(x_k\bigr) - f\bigl(x_k\bigr)\Bigr|^2 + \sum_{k=1}^{M_2} \Bigl|\sum_{i=1}^{n} w_i \varphi_i\bigl(s_k\bigr) - g\bigl(s_k\bigr)\Bigr|^2 \to \min \tag{4.5}$$
is considered instead, using the values of the equation at discrete points. In order to find the minimum of this function, one can use the gradient method; an iteration may be constructed to find the weight coefficients $w_i$, $i = 1,\ldots,n$.
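Since (4.5) is linear in the weights $w_i$, it can also be solved directly as a least-squares problem rather than by a gradient iteration. The MATLAB sketch below is our illustration only: the Gaussian radial basis functions, the unit-square domain, the collocation points, and the sample data $f \equiv -2$, $g \equiv 0$ are assumptions, not taken from the paper.

    % Least-squares RBF collocation for  Delta u = f in D,  u = g on S  (problem (4.1)).
    % Gaussian RBFs: phi_i(x) = exp(-|x-c_i|^2/(2*s^2));
    % their 2D Laplacian is ((|x-c_i|^2 - 2*s^2)/s^4) * phi_i(x).
    f = @(x,y) -2*ones(size(x));           % sample right-hand side (assumption)
    g = @(x,y) zeros(size(x));             % sample boundary data (assumption)

    s  = 0.3;                              % RBF width
    [cx,cy] = meshgrid(linspace(0,1,6));   % RBF centres on the unit square
    cx = cx(:); cy = cy(:); n = numel(cx);

    [xi,yi] = meshgrid(linspace(0.05,0.95,15));   % interior collocation points x_k
    xi = xi(:); yi = yi(:);
    t  = linspace(0,1,20)';                        % boundary collocation points s_k
    xb = [t; t; zeros(20,1); ones(20,1)];
    yb = [zeros(20,1); ones(20,1); t; t];

    rb2 = @(x,y,k) (x-cx(k)).^2 + (y-cy(k)).^2;    % squared distance to centre k
    Ain = zeros(numel(xi),n); Abd = zeros(numel(xb),n);
    for k = 1:n
        Ain(:,k) = ((rb2(xi,yi,k) - 2*s^2)/s^4) .* exp(-rb2(xi,yi,k)/(2*s^2)); % Delta phi_k
        Abd(:,k) = exp(-rb2(xb,yb,k)/(2*s^2));                                  % phi_k
    end
    A = [Ain; Abd];
    b = [f(xi,yi); g(xb,yb)];
    w = A\b;                               % least-squares weights w_i of (4.2)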

Now, in this way, we will try to solve a boundary value problem for a partial differential equation by means of a neural network. We consider problem (4.1). If we write the solution of this problem by means of the Green function, we see that the solution depends linearly and continuously on the right-hand side and on the boundary function. As was noted, neural networks allow one to approximate a function with any accuracy [15, 16]. Replace the domain $D \subset \mathbb{R}^2$ by a regular discrete grid with a small step $h$. Let the values of the function $f(x)$ at the internal nodal points be $f_{ij}$, with corresponding index set $I_1$, and denote the data at the boundary points by $g_{pq}$, with index set $I_2$. Our goal is to construct a neural network by varying the functions $f(x)$, $g(x)$. To obtain input and output data, take some functions $u_1(x), u_2(x), \ldots, u_M(x)$. For these functions to be solutions of (4.1), the right-hand side should be chosen as the functions $f^{(1)}(x), f^{(2)}(x), \ldots, f^{(M)}(x)$, respectively, and, for the boundary condition to be satisfied, the boundary function must be defined as $g^{(1)}(x), g^{(2)}(x), \ldots, g^{(M)}(x)$ on the boundary. In other words, if in problem (4.1) we take $f(x) = f^{(k)}(x)$ and $g(x) = g^{(k)}(x)$, we get the corresponding solution $u(x) = u_k(x)$. So we must construct a neural network that associates the $M$ inputs
$$H_k = \bigl\{\, f^{(k)}_{ij},\ (i,j) \in I_1;\ g^{(k)}_{pq},\ (p,q) \in I_2 \,\bigr\} \tag{4.6}$$
with the $M$ outputs
$$U_k = \bigl\{\, u^{(k)}_{ij},\ (i,j) \in I_1 \,\bigr\}. \tag{4.7}$$
Here, $u^{(k)}_{ij}$ is the value of the chosen solution $u_k(x)$ at the nodal points $(i,j) \in I_1$. The constructed neural network makes it possible to find an approximate solution of problem (4.1) for arbitrarily given $f(x)$, $g(x)$. To do so, we must feed the set
$$H = \bigl\{\, f_{ij},\ (i,j) \in I_1;\ g_{pq},\ (p,q) \in I_2 \,\bigr\} \tag{4.8}$$
to the neural network as the input variable and obtain the set
$$U = \bigl\{\, u_{ij},\ (i,j) \in I_1 \,\bigr\} \tag{4.9}$$
as the output variable. Then the output of the neural network will be an approximate solution of problem (4.1). A sketch of how such training pairs can be generated is given below.
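One simple way to generate the input/output pairs (4.6), (4.7) is the method of manufactured solutions: pick trial functions $u_k$ in closed form, compute $f^{(k)} = \Delta u_k$ analytically and $g^{(k)} = u_k|_S$, and sample all three on the grid. The MATLAB sketch below does this on the unit square with trigonometric trial functions (our illustrative choices; the grid size, the number of samples, and hence the input/output dimensions are assumptions and differ from the 12/3 architecture used in Section 5). The columns of P and T are then in the format expected by the training call shown there.

    % Manufactured training data for the (f,g) -> u network on the unit square.
    [X,Y] = meshgrid(linspace(0,1,11));
    bnd = (X==0 | X==1 | Y==0 | Y==1);       % boundary nodes (index set I2)
    int = ~bnd;                               % interior nodes (index set I1)

    M = 20;  P = [];  T = [];
    for k = 1:M
        uk = @(x,y) sin(k*pi*x).*sin(pi*y) + x.*y/k;           % trial solution u_k
        fk = @(x,y) -(k^2+1)*pi^2*sin(k*pi*x).*sin(pi*y);      % f_k = Delta u_k (analytic)
        U = uk(X,Y);  F = fk(X,Y);  G = U;                     % g_k = u_k on the boundary
        Hk = [F(int); G(bnd)];     % input:  interior f-values and boundary g-values (4.6)
        Uk = U(int);               % output: interior values of u_k (4.7)
        P = [P, Hk];  T = [T, Uk];
    end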

5. Model Example

Now let us consider the following example.

Consider the following problem:
$$\Delta u = -2, \quad x \in D, \qquad u(x) = 0, \quad x \in S, \tag{5.1}$$
where
$$D = \bigl\{\, x = (x_1, x_2) \in \mathbb{R}^2 : x_1^2 + x_2^2 \le 1 \,\bigr\}. \tag{5.2}$$
The neural network used is a nonlinear "cascade feed-forward, distributed time delay, backpropagation" network trained with the Levenberg-Marquardt algorithm (trainlm); it has 3 layers (Figure 1).

Figure 1: The first layer is the input layer, containing 12 neurons with tansig transfer functions; the second layer is the hidden layer, containing 18 neurons with tansig transfer functions; the third layer is the output layer, containing 3 neurons with purelin transfer functions.

Backpropagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined.

The relationship between the inputs/outputs, weights, and biases is shown in Figure 2; the network includes a weight connection from the input to each layer and from each layer to the successive layers.

Figure 2

The additional connections might improve the speed at which the network learns the desired relationship. Each layer has a weight matrix W, a bias vector b, and an output vector a. Networks with weights, biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities.

Now we can create the network through the following commands:

    NET = newcf(minmax(P), [12, 18, 3], {'tansig', 'tansig', 'purelin'}, 'trainlm');
    NET = init(NET);
    NET.trainParam.show = 50;
    NET.trainParam.lr = 0.05;
    NET.trainParam.lr_inc = 1.05;
    NET.trainParam.mc = 0.9;
    NET.trainParam.epochs = 300;
    NET.trainParam.goal = 1e-5;
    [NET, tr] = train(NET, P, T);

    Network =
    Neural Network object:
    architecture:
      numInputs: 1
      numLayers: 3
      biasConnect: [1; 1; 1]
      inputConnect: [1; 1; 1]
      layerConnect: [0 0 0; 1 0 0; 1 1 0]
      outputConnect: [0 0 1]
      numOutputs: 1 (read-only)
      numInputDelays: 0 (read-only)
      numLayerDelays: 0 (read-only)
    subobject structures:
      inputs: {1x1 cell} of inputs
      layers: {3x1 cell} of layers
      outputs: {1x3 cell} containing 1 output
      biases: {3x1 cell} containing 3 biases
      inputWeights: {3x1 cell} containing 3 input weights
      layerWeights: {3x3 cell} containing 3 layer weights
    functions:
      adaptFcn: 'trains'
      divideFcn: 'dividerand'
      gradientFcn: 'gdefaults'
      initFcn: 'initlay'
      performFcn: 'mse'
      plotFcns: {'plotperform', 'plottrainstate', 'plotregression'}
      trainFcn: 'trainlm'
    parameters:
      adaptParam: .passes
      divideParam: .trainRatio, .valRatio, .testRatio
      gradientParam: (none)
      initParam: (none)
      performParam: (none)
      trainParam: .show, .showWindow, .showCommandLine, .epochs, .time, .goal,
                  .max_fail, .mem_reduc, .min_grad, .mu, .mu_dec, .mu_inc
    weight and bias values:
      IW: {3x1 cell} containing 3 input weight matrices
      LW: {3x3 cell} containing 3 layer weight matrices
      b: {3x1 cell} containing 3 bias vectors
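Once trained, the network can be evaluated on a new input column arranged exactly like the training columns of P. A minimal usage sketch (our addition; Pnew is a hypothetical variable holding a discretized pair $f$, $g$):

    Unew = sim(NET, Pnew);    % approximate nodal values of u for the new data f, g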

After initializing and adjusting the training and weight parameters, the final form of the created artificial neural network is shown in Figure 3.

Figure 3

After training the network with $P = 25$ input and output data, Figure 4 shows the result. The result here is not reasonable, because the test set error and the validation set error do not have similar characteristics, and it appears that significant overfitting has occurred.

Figure 4: Validation performance.

Figure 5 shows the network results, used to perform some analysis of the network response. In this case, there are four outputs, so there are four regressions.

Figure 5: Regression performance.

The exact solution of the considered model example is $u(x) = \tfrac{1}{2}\bigl(1 - x_1^2 - x_2^2\bigr)$.
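This can be checked directly:
$$\Delta u = \frac{\partial^2}{\partial x_1^2}\,\tfrac{1}{2}\bigl(1 - x_1^2 - x_2^2\bigr) + \frac{\partial^2}{\partial x_2^2}\,\tfrac{1}{2}\bigl(1 - x_1^2 - x_2^2\bigr) = -1 - 1 = -2,$$
and $u(x) = \tfrac{1}{2}(1 - x_1^2 - x_2^2) = 0$ whenever $x_1^2 + x_2^2 = 1$, that is, on $S$.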

We found an approximate solution of this problem for $P = 25$ by using a neural network. These results are shown in Figure 6.

Figure 6

To reach higher accuracy, we should train the network with more data. We retrain the network with $P = 50$ data. The results are shown in Figures 7 and 8.

Figure 7: Validation performance.
Figure 8: Regression performance.

The artificial neural network’s error is reasonable, because the test set error and the validation set error have similar characteristics, and it does not appear that any significant overfitting has occurred.

We found an approximate solution for $P = 50$ again. These results are shown in Figure 9.

Figure 9

To further improve the accuracy of the approximate solution, the network can be trained with more data.

References

  1. J. Sea, “Numerical method in mathematical physics and optimal control,” Publishing House Nauka, vol. 240, pp. 64–74, 1978.
  2. J. Sokolowski and J.-P. Zolesio, Introduction to Shape Optimization. Shape Sensitivity Analysis, Springer, Heidelberg, Germany, 1992.
  3. J. Haslinger and R. A. E. Makinen, Introduction to Shape Optimization: Theory, Approximation and Computation, SIAM, Philadelphia, Pa, USA, 2003.
  4. Y. S. Gasimov, A. Nachaoui, and A. A. Niftiyev, “Non-linear eigenvalue problems for p-Laplacian with variable domain,” Optimization Letters, vol. 4, no. 1, pp. 67–84, 2009.
  5. F. A. Aliev, M. M. Mutallimov, I. M. Askerov, and I. S. Ragumov, “Asymptotic method of solution for a problem of construction of optimal gas-lift process modes,” Mathematical Problems in Engineering, vol. 2010, Article ID 191153, 11 pages, 2010.
  6. A. A. Niftiyev and E. R. Akhmadov, “Variational statement of an inverse problem for a domain,” Differential Equations, vol. 43, no. 10, pp. 1410–1416, 2007.
  7. S. Belov and N. Fujii, “Symmetry and sufficient conditions of optimality in a domain optimization problem,” Control and Cybernetics, vol. 26, no. 1, pp. 45–56, 1997.
  8. F. A. Aliev, A. A. Niftiyev, and J. I. Zeynalov, “Optimal synthesis problem for the fuzzy systems in semi-infinite interval,” Applied and Computational Mathematics, vol. 10, no. 1, pp. 97–105, 2011.
  9. V. F. Demyanov and A. M. Rubinov, Bases of Non-Smooth Analysis and Quasidifferential Calculus, Nauka, Moscow, Russia, 1990.
  10. F. P. Vasilyev, Optimization Methods, Factorial Press, Moscow, Russia, 2002.
  11. D. A. Tarkhov, Neural Networks: Models and Algorithms, Radio-Technical, Moscow, Russia, 2005.
  12. I. E. Lagaris, A. Likas, and D. I. Fotiadis, “Artificial neural networks for solving ordinary and partial differential equations,” IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 987–1000, 1998.
  13. R. M. Alguliev, R. M. Aliguliev, and R. K. Alekperov, “New approach to optimal appointment for distributed system,” Informatics and Computer Science Problems, no. 5, pp. 23–31, 2004.
  14. A. U. Levin and K. S. Narendra, “Control of nonlinear dynamical systems using neural networks: controllability and stabilization,” IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 192–206, 1993.
  15. H. Lee and I. S. Kang, “Neural algorithm for solving differential equations,” Journal of Computational Physics, vol. 91, no. 1, pp. 110–131, 1990.
  16. A. N. Gorban, “A generalized approximation theorem and computing possibilities of neural networks,” Siberian Journal of Calculate Mathematics, vol. 1, no. 1, pp. 12–24, 1998.