Mathematical Problems in Engineering

Volume 2011 (2011), Article ID 145608, 16 pages

http://dx.doi.org/10.1155/2011/145608

## Inverse Problem with Respect to Domain and Artificial Neural Network Algorithm for the Solution

Kambiz Majidzadeh^{1,2}

^{1}Institute of Applied Mathematics, Baku State University, Baku 1148, Azerbaijan

^{2}Institute of Information Technology, ANAS, Baku 1141, Azerbaijan

Received 25 April 2011; Accepted 21 June 2011

Academic Editor: Gradimir V. Milovanović

Copyright © 2011 Kambiz Majidzadeh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider an inverse problem with respect to a domain. We suggest a new approach that reduces the inverse problem for the domain to an equivalent problem in a variational setting, and we give an effective algorithm for solving problems of this kind. To solve the resulting boundary value problem, an artificial neural network is used at each step of the iteration.

#### 1. Introduction

A wide class of practical problems reduces to inverse problems with respect to a domain; examples include problems of elasticity theory, diffusion problems, and problems arising in hydrodynamics [1–6]. Papers on inverse problems usually deal with an unknown function (coefficients, or functions occurring in the boundary and initial conditions). In our case, however, a domain is sought, and the investigation of such problems involves serious difficulties. To avoid these difficulties, the considered inverse problem is first reduced to a variational statement. Since the resulting variational problem depends on the domain, its investigation also presents difficulties. Here, we give an effective algorithm for solving such problems.

#### 2. Statement of the Problem and Main Result

Let $D$ be an $n$-dimensional domain, that is, $D \subset \mathbb{R}^n$, and let $D$ be bounded.

Denote by $S$ the boundary of the domain $D$, and assume that the boundary $S$ belongs to the class $C^2$. Let us consider the following inverse problem: find a domain $D$ and a function $u$ such that

$$\Delta u(x) = f(x), \quad x \in D, \tag{2.1}$$

$$u(x) = g(x), \quad x \in S, \tag{2.2}$$

$$\frac{\partial u(x)}{\partial n} = h(x), \quad x \in S, \tag{2.3}$$

where the functions $f$, $g$, and $h$ are continuously differentiable. Denote by $K$ the set of convex domains with a boundary from $C^2$.

Our goal is to find a pair $(u, D)$ such that the function $u$ satisfies (2.1) and the boundary conditions (2.2), (2.3) in the domain $D$. As can be seen, condition (2.2) is a Dirichlet condition and condition (2.3) is a Neumann condition. To solve the inverse problem (2.1)–(2.3), we first consider the following variational problem with unknown domain:

$$J(u, D) = \int_D F\big(x, u(x), \nabla u(x)\big)\,dx \longrightarrow \min, \quad D \in K, \tag{2.4}$$

$$u(x) = g(x), \quad x \in S. \tag{2.5}$$

We will assume that the function $F(x, u, p)$ is continuously differentiable with respect to its variables.

If the boundary condition (2.5) is satisfied for a pair $(u, D)$, the pair is said to be possible (admissible). Denote by $M$ the set of all possible pairs. A pair $(u^*, D^*) \in M$ is called an optimal pair if it gives a minimum to functional (2.4) on the set $M$.

We state the following theorem obtained in [7].

Theorem 2.1. *Let the pair $(u^*, D^*)$ be an optimal pair for the variational problem (2.4), (2.5). Then the function $u^*$ is a solution of the following Euler equation in the domain $D^*$:*

$$F_u(x, u^*, \nabla u^*) - \sum_{i=1}^{n} \frac{d}{dx_i}\, F_{u_{x_i}}(x, u^*, \nabla u^*) = 0, \quad x \in D^*, \tag{2.6}$$

*and, moreover, on the boundary $S^*$, the condition*

$$F\big(x, u^*(x), \nabla u^*(x)\big) = 0, \quad x \in S^*, \tag{2.7}$$

*is satisfied.*

As can be seen, this theorem is proved for convex domains. However, a similar result can be obtained for a doubly connected domain $D$ with internal boundary $S_1$ and external boundary $S_2$. For that, we must use the expansion

$$\int_D F\,dx = \int_{D_2} F\,dx - \int_{D_1} F\,dx,$$

where $D_1$ and $D_2$ are the convex domains bounded by $S_1$ and $S_2$, respectively.

Now, take the function $F$ in the form (2.9), constructed from the data $f$, $g$, and $h$ in such a way that the Euler equation (2.6) coincides with (2.1) and the boundary condition (2.7) reduces to (2.3). Since $f$, $g$, and $h$ are continuously differentiable, the function $F$ is continuously differentiable with respect to its variables. Apply the theorem stated above to this function. From condition (2.6), we get that the function $u$ satisfies (2.1) in the domain $D$. From condition (2.7), we obtain a relation on the boundary; taking into account the condition $u = g$ on $S$, this relation becomes an equation for the normal derivative $\partial u / \partial n$, from which we conclude that $u$ satisfies the boundary condition (2.3) as well. Thus, we have proved the following theorem.

Theorem 2.2. *Let the pair $(u^*, D^*)$ be an optimal pair for problem (2.4), (2.5). Then it is a solution of problem (2.1)–(2.3) as well.*

This theorem shows that, instead of the inverse problem (2.1)–(2.3), we may take the function $F$ in the form (2.9) and investigate the variational problem (2.4), (2.5). Notice that the converse is not true in general. Although the functional (2.4) is convex with respect to the function $u$, it is not convex with respect to the domain $D$ in general. Using the obtained results, we give the following method for solving problem (2.4), (2.5).
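To make the construction concrete, here is one possible integrand of the type (2.9). This is our own hedged reconstruction, since the display (2.9) is not legible in the source; it assumes (2.1) is the Poisson equation.

```latex
% One possible integrand of type (2.9) (our reconstruction, not the
% author's exact formula), assuming (2.1) is \Delta u = f:
F(x, u, \nabla u) \;=\; \tfrac{1}{2}\,\lvert \nabla u \rvert^{2} \;+\; f(x)\,u \;+\; w(x).
```

Its Euler equation (2.6) is exactly $\Delta u = f$, and the function $w$ can be chosen from the data $g$ and $h$ so that the boundary condition (2.7), combined with $u = g$ on $S$, forces the normal derivative condition (2.3).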

Let the system of functions $\varphi_1, \varphi_2, \ldots$ form a basis in the corresponding function space. Then the function $u$ may be expanded in this basis:

$$u_N(x) = \sum_{k=1}^{N} a_k \varphi_k(x).$$

We take this into account in problem (2.4), (2.5) and get

$$J_N(a, D) = \int_D F\big(x, u_N(x), \nabla u_N(x)\big)\,dx \longrightarrow \min, \tag{2.17}$$

$$u_N(x) = g(x), \quad x \in S, \tag{2.18}$$

where $a = (a_1, \ldots, a_N)$. In our case, $F$ is of the form (2.9). For solving problem (2.17), (2.18), we calculate the first variation of functional (2.17). It is clear that

$$\frac{\partial J_N(a, D)}{\partial a_k} = \int_D \Big[ F_u\,\varphi_k + \sum_{i=1}^{n} F_{u_{x_i}}\,\frac{\partial \varphi_k}{\partial x_i} \Big]\,dx, \quad k = 1, \ldots, N.$$

Next, we calculate the first variation of the functional with respect to the domain $D$. In [5, 7], a functional of the form

$$I(D) = \int_D \Phi(x)\,dx$$

is considered, and, for its first variation, the formula

$$\delta I(D) = \int_S \Phi(x)\,\delta P_D\big(n(x)\big)\,ds$$

is obtained. Here, the function $\Phi$ is continuously differentiable in $\bar{D}$, $n(x)$ is the external normal to the surface $S$ at the point $x$, and $P_D$ is the support function of the domain $D$, determined as follows:

$$P_D(l) = \max_{x \in \bar{D}} \langle l, x \rangle.$$

We take this formula into account and get

$$\delta_D J_N(a, D) = \int_S F\big(x, u_N(x), \nabla u_N(x)\big)\,\delta P_D\big(n(x)\big)\,ds. \tag{2.25}$$

It is seen from these formulas that the numbers $a_1, \ldots, a_N$ are found from the system of equations

$$\frac{\partial J_N(a, D)}{\partial a_k} = 0, \quad k = 1, \ldots, N. \tag{2.26}$$

In our case, as $F$ is of the form (2.20), system (2.26) is a system of linear algebraic equations.
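For a fixed domain, system (2.26) can be assembled and solved directly. The following is a minimal sketch (ours, not the author's code); it assumes the quadratic Poisson-type integrand sketched after Theorem 2.2, a unit-square domain, a crude tensor-product quadrature, and illustrative sine basis functions that vanish on the boundary.

```matlab
% Minimal sketch of system (2.26) on a fixed square domain, assuming the
% quadratic integrand F = |grad u|^2/2 + f*u (our illustrative choices).
N = 5;                            % number of basis functions
M = 40;                           % quadrature grid per side
[x, y] = meshgrid(linspace(0, 1, M));
q = (1/(M-1))^2;                  % crude uniform quadrature weight
f = @(x, y) -2*pi^2*sin(pi*x).*sin(pi*y);   % sample right-hand side

% Illustrative basis: phi_k = sin(k*pi*x).*sin(k*pi*y), zero on the boundary
phi = cell(N, 1); phix = cell(N, 1); phiy = cell(N, 1);
for k = 1:N
    phi{k}  = sin(k*pi*x).*sin(k*pi*y);
    phix{k} = k*pi*cos(k*pi*x).*sin(k*pi*y);
    phiy{k} = k*pi*sin(k*pi*x).*cos(k*pi*y);
end

% Assemble dJ/da_k = 0:  A(k,j) = int grad(phi_j).grad(phi_k) dx,
%                        b(k)   = -int f*phi_k dx
A = zeros(N); b = zeros(N, 1);
for k = 1:N
    for j = 1:N
        A(k, j) = q*sum(sum(phix{j}.*phix{k} + phiy{j}.*phiy{k}));
    end
    b(k) = -q*sum(sum(f(x, y).*phi{k}));
end
a = A \ b;                        % coefficients a_1, ..., a_N
```

Because the integrand is quadratic in $u$, the stationarity conditions are linear, and a single backslash solve replaces an iterative minimization.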

The set $K$ may be subject to some additional restrictions as well. For example, $K$ may be the set of domains with a given volume, or the set of domains with a given surface area. In another case, the set of domains may be given as $K = \{D : D' \subset D \subset D''\}$, where $D'$ and $D''$ are given domains. In practical problems, the set $K$ may also be given in the form of integral restrictions. In general, we assume that there is a domain $\hat{D}$ such that $D \subset \hat{D}$ for an arbitrary $D$ contained in the set $K$. We can state this condition in a simpler form as follows.

We know that the optimal domain is contained in a certain domain $\hat{D}$. The obtained relations (2.25), (2.26) enable us to solve problem (2.17), (2.18) approximately. To this end, we give the following algorithm.

#### 3. Algorithm for Numerical Solution

Based on the above procedure, the following method is proposed for the numerical solution of problem (2.4), (2.5).

*Step 1. *Take an arbitrary initial domain $D_0 \in K$ and the basis functions $\varphi_1, \ldots, \varphi_N$.

*Step 2. *Solving the system of equations (2.26) on the current domain $D_m$,
we find the coefficients $a_1^{(m)}, \ldots, a_N^{(m)}$.

*Step 3. *Minimizing the linear functional (2.25) with respect to the support function,
we find the convex function $\bar{P}_m(l)$.

*Step 4. *The intermediate domain $\bar{D}_m$ is found as the subdifferential of the function $\bar{P}_m$ at the point $l = 0$ [8]. In other words, $\bar{D}_m = \partial \bar{P}_m(0)$.
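For intuition, here is a worked example of Step 4 (ours, not from the source): for a ball $D = \{x : \lVert x - c \rVert \le r\}$, the support function and the subdifferential at the origin are

```latex
% Support function of the ball D = {x : ||x - c|| <= r} and the domain
% recovered from it via the subdifferential at l = 0 (worked example):
P_D(l) = \langle l, c \rangle + r\,\lVert l \rVert, \qquad
\partial P_D(0) = \{\, x : \langle l, x \rangle \le P_D(l)\ \ \forall l \,\} = \bar{D}.
```

so the subdifferential of the support function at the origin returns exactly the (closed convex) domain.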

*Step 5. *A new domain $D_{m+1}$ is found as follows:
$$P_{m+1}(l) = (1 - \alpha_m)\,P_m(l) + \alpha_m \bar{P}_m(l), \qquad D_{m+1} = \partial P_{m+1}(0), \qquad \alpha_m \in [0, 1].$$
Here, the number $\alpha_m$ may be chosen in different ways [7, 9, 10].

If the newly found domain satisfies a definite exactness condition, the iteration process is completed. Otherwise, for the new domain $D_{m+1}$, the iteration begins again from the first step. The exactness condition may be given in different ways, for example,
$$\big|J\big(a^{(m+1)}, D_{m+1}\big) - J\big(a^{(m)}, D_m\big)\big| \le \varepsilon.$$
Here, $\varepsilon > 0$ is said to be the accuracy order of the method. Now we give some rules for choosing the quantity $\alpha_m$; a sketch of the whole iteration is given after this list.

(1) In general, the numbers $\alpha_m$ may be chosen from the condition
$$\alpha_m = \arg\min_{\alpha \in [0, 1]} J\big((1 - \alpha) P_m + \alpha \bar{P}_m\big).$$

(2) The quantity $\alpha_m$ may be given as a sequence satisfying the conditions $\alpha_m \to 0$, $\sum_{m=0}^{\infty} \alpha_m = \infty$; for example, $\alpha_m = 1/(m+1)$.

(3) Another method is to take $\alpha_m = 1$ and verify that the value of the functional decreases. If the value of the functional does not decrease, then the value of $\alpha_m$ is halved. In order to solve the boundary value problem (2.1), (2.2), we use an artificial neural network at each step of the iteration.
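Putting Steps 1–5 together, the outer loop can be organized as follows. This is our own skeleton, not the author's code: the helper functions `init_support_function`, `solve_coefficients`, `minimize_linear_functional`, and `evaluate_functional` are hypothetical placeholders for Steps 2, 3, and the evaluation of (2.17), and the support function is represented by its values on a fixed grid of unit directions, so the convex combination in Step 5 is elementwise.

```matlab
% Skeleton of the iteration in Section 3 (our sketch; helper functions
% are hypothetical placeholders, not the author's code).
maxIter = 100;  epsTol = 1e-4;  Jprev = Inf;
P = init_support_function();              % values of P_{D_0} on a direction grid
for m = 0:maxIter
    a    = solve_coefficients(P);             % Step 2: linear system (2.26) on D_m
    Pbar = minimize_linear_functional(a, P);  % Step 3: minimize (2.25)
    alpha = 1/(m + 1);                        % rule (2): alpha_m -> 0, sum = inf
    P     = (1 - alpha)*P + alpha*Pbar;       % Steps 4-5: new support function
    J     = evaluate_functional(a, P);        % value of (2.17) on the new pair
    if abs(J - Jprev) <= epsTol, break; end   % exactness condition
    Jprev = J;
end
```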

#### 4. Application of Neural Networks to Solving the Boundary Problem

Before passing to the solution of differential equations by means of neural networks, let us note why this is necessary. It is known that there are many methods and approaches for solving differential equations and the related boundary value problems. But there exist boundary value problems for which the known methods do not make it possible to find the solution with high accuracy, for many reasons. This may be connected with the complexity of the geometric structure of the domain in which the differential equation is given, or with strong nonlinearity of the problem. Another reason is that a very large number of partition points may be required, and this may strongly affect the error. At the same time, the solution of a differential equation may require numerous iterations.

In addition, the application of neural networks to the solution of differential equations is motivated by two further reasons. The first is that the logical scheme of a neural network corresponds to the logical scheme of the solution of differential equations [11–14]; in other words, the solution scheme of differential equations makes it possible to create an appropriate neural network structure. The second is that neural networks can approximate functions with high accuracy [15, 16].

Let $D = \{(x, y) : 0 < x < 1,\ 0 < y < 1\}$ and $S = \partial D$. Consider the following two-dimensional Poisson equation:

$$\Delta u(x, y) = f(x, y), \quad (x, y) \in D, \qquad u(x, y) = g(x, y), \quad (x, y) \in S. \tag{4.1}$$

We seek the solution of this problem in the form

$$u_a(x, y) = \sum_{k=1}^{N} w_k \phi_k(x, y), \tag{4.2}$$

where the functions $\phi_k$ are usually chosen as radial basis functions. Writing expansion (4.2) into (4.1), we obtain a residual. The weight coefficients $w_k$ are found from the minimality condition of the following function:

$$E(w) = \sum_{j} \big[\Delta u_a(x_j, y_j) - f(x_j, y_j)\big]^2 + \sum_{j'} \big[u_a(x_{j'}, y_{j'}) - g(x_{j'}, y_{j'})\big]^2,$$

where, in the great majority of cases, the residuals of the equation are evaluated at discrete points. In order to find the minimum of this function, we can use the gradient method, and an iteration may be constructed to find the weight coefficients $w_k$.
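The following is a minimal collocation sketch of (4.1), (4.2) (ours, not the author's code). It assumes the unit square, Gaussian radial basis functions, and a manufactured right-hand side whose exact solution is $\sin(\pi x)\sin(\pi y)$. Since $E(w)$ is quadratic in the weights, a single linear least-squares solve can replace the gradient iteration.

```matlab
% Minimal RBF-collocation sketch for (4.1)-(4.2) (our illustrative choices).
n = 15;                                   % collocation grid per side
[X, Y] = meshgrid(linspace(0, 1, n));
bnd = X==0 | X==1 | Y==0 | Y==1;          % boundary points
f = @(x, y) -2*pi^2*sin(pi*x).*sin(pi*y); % so the exact u = sin(pi x) sin(pi y)
g = @(x, y) zeros(size(x));               % boundary data

% Gaussian RBF centers on a coarse grid, shape parameter s
[cx, cy] = meshgrid(linspace(0, 1, 7));  cx = cx(:)';  cy = cy(:)';  s = 0.25;
x = X(:);  y = Y(:);
R2   = (x - cx).^2 + (y - cy).^2;         % squared distances to all centers
Phi  = exp(-R2/(2*s^2));                  % phi_k at every collocation point
LPhi = Phi .* (R2/s^4 - 2/s^2);           % Laplacian of each Gaussian phi_k

% Stack equation residuals (interior) and boundary residuals; minimizing
% the quadratic E(w) is one linear least-squares problem.
A = [LPhi(~bnd(:), :); Phi(bnd(:), :)];
b = [f(x(~bnd(:)), y(~bnd(:))); g(x(bnd(:)), y(bnd(:)))];
w = A \ b;                                % weight coefficients w_k
u_approx = @(px, py) exp(-((px - cx).^2 + (py - cy).^2)/(2*s^2))*w;
```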

Now, in this way, we will try to solve a boundary value problem for a partial differential equation by means of a neural network. We consider problem (4.1). If we write the solution of this problem by means of the Green function, we see that the solution depends linearly and continuously on the right-hand side and the boundary function. As noted above, neural networks allow us to approximate a function with any accuracy [15, 16]. Replace the domain $D$ by a regular discrete grid with a small step $h$. Let the values of the sought function at the internal nodal points be $u_i$, $i \in I$, where $I$ is an index set, and denote the data at the boundary points by $g_j$, $j \in J$, where $J$ is the corresponding index set. Our goal is to construct a neural network mapping the boundary data to the interior values. To get input and output data, we take some trial functions: choosing a solution $u$, the right-hand side of (4.1) is determined by it, and, for the boundary condition to be satisfied, the boundary data must be the trace of $u$ on $S$. In other words, taking compatible data in problem (4.1), we get the appropriate solution. So we must construct a neural network that associates the inputs $\{g_j,\ j \in J\}$ with the outputs $\{u_i,\ i \in I\}$. Here, $u_i$ is the value of the chosen solution at the nodal point $i$. The constructed neural network enables us to find an approximate solution of problem (4.1) for arbitrarily given boundary data. To attain this, we insert the set $\{g_j\}$ as input variables and the set $\{u_i\}$ as output variables into the neural network during training. Then the trained neural network gives an approximate solution of problem (4.1).
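A minimal sketch of how such training pairs can be generated follows (ours, not the author's code). The grid size and trial solutions are illustrative; for simplicity we sample harmonic trial functions, that is, the case $f = 0$, since for a fixed nonzero $f$ one would simply add a particular solution.

```matlab
% Sketch of building training pairs for the boundary-to-interior map
% (our illustrative choices of grid and trial solutions).
n = 9;                                   % grid per side, step h = 1/(n-1)
[X, Y] = meshgrid(linspace(0, 1, n));
bnd = X==0 | X==1 | Y==0 | Y==1;         % boundary mask (index set J)
nSamples = 200;
P = zeros(nnz(bnd), nSamples);           % inputs:  boundary values g_j
T = zeros(nnz(~bnd), nSamples);          % outputs: interior values u_i
for k = 1:nSamples
    c = randn(1, 4);                     % random harmonic trial solution,
    U = c(1) + c(2)*X + c(3)*Y + c(4)*(X.^2 - Y.^2);  % so Delta U = 0
    P(:, k) = U(bnd);                    % its trace on S  -> network input
    T(:, k) = U(~bnd);                   % its interior values -> target
end
% P and T can now be passed to train(NET, P, T) as in Section 5.
```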

#### 5. Model Example

Now let us consider the following example.

Consider a problem of the form (4.1) with given right-hand side $f$ and boundary function $g$.
The neural network used is a nonlinear "*cascade feed-forward, distributed time delay, backpropagation*" network trained by the Levenberg-Marquardt algorithm (trainlm); it has 3 layers (Figure 1).

Backpropagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks with nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train the network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in the way defined by the user.

The relationship between inputs, outputs, weights, and biases is shown in Figure 2; the cascade architecture includes a weight connection from the input to each layer and from each layer to the successive layers.

The additional connections may improve the speed at which the network learns the desired relationship. Each layer has a weight matrix **W**, a bias vector **b**, and an output vector **a**. Networks with weights, biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities.
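In formulas, the three-layer cascade structure of Figure 2 can be written as follows (our sketch, in the usual IW/LW toolbox notation, where $p$ is the input vector and $f^k$ is the transfer function of layer $k$):

```latex
% Layer equations of a three-layer cascade network (our sketch);
% every layer receives the input p and the outputs of all earlier layers:
a^{1} = f^{1}\!\big(IW^{1,1} p + b^{1}\big), \qquad
a^{2} = f^{2}\!\big(IW^{2,1} p + LW^{2,1} a^{1} + b^{2}\big),
a^{3} = f^{3}\!\big(IW^{3,1} p + LW^{3,1} a^{1} + LW^{3,2} a^{2} + b^{3}\big).
```

This matches the connection matrices reported by the toolbox below (inputConnect [1; 1; 1] and layerConnect [0 0 0; 1 0 0; 1 1 0]).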

Now we can create the network through the following commands:

```matlab
NET = newcf(minmax(P), [12, 18, 3], {'tansig', 'tansig', 'purelin'}, 'trainlm');
NET = init(NET);
NET.trainParam.show   = 50;
NET.trainParam.lr     = 0.05;
NET.trainParam.lr_inc = 1.05;
NET.trainParam.mc     = 0.9;
NET.trainParam.epochs = 300;
NET.trainParam.goal   = ;    % target error; the value is missing in the source
[NET, tr] = train(NET, P, T);
```

The created network object is as follows:

```text
Network =
    Neural Network object:
    architecture:
         numInputs: 1
         numLayers: 3
       biasConnect: [1; 1; 1]
      inputConnect: [1; 1; 1]
      layerConnect: [0 0 0; 1 0 0; 1 1 0]
     outputConnect: [0 0 1]
        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)
    subobject structures:
            inputs: cell of inputs
            layers: cell of layers
           outputs: cell containing 1 output
            biases: cell containing 3 biases
      inputWeights: cell containing 3 input weights
      layerWeights: cell containing 3 layer weights
    functions:
          adaptFcn: 'trains'
         divideFcn: 'dividerand'
       gradientFcn: 'gdefaults'
           initFcn: 'initlay'
        performFcn: 'mse'
          plotFcns: {'plotperform', 'plottrainstate', 'plotregression'}
          trainFcn: 'trainlm'
    parameters:
        adaptParam: .passes
       divideParam: .trainRatio, .valRatio, .testRatio
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs, .time,
                    .goal, .max_fail, .mem_reduc, .min_grad, .mu, .mu_dec, .mu_inc
    weight and bias values:
                IW: cell containing 3 input weight matrices
                LW: cell containing 3 layer weight matrices
                 b: cell containing 3 bias vectors
```
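After training, the network is evaluated on new boundary data with `sim`; a minimal usage sketch follows (ours; `Pnew` and `Uexact` are hypothetical new boundary data and a known reference solution).

```matlab
% Evaluate the trained network on new boundary data (our usage sketch;
% Pnew is a column of boundary values on the same grid used for training).
Unew = sim(NET, Pnew);               % approximate interior values u_i
err  = max(abs(Unew - Uexact(:)));   % compare with a known solution, if any
```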

After initializing the network and adjusting the training and weight parameters, the final form of the created artificial neural network is shown in Figure 3.

After training the network with the input and output data, the result is shown in Figure 4. The result here is not acceptable, because the test set error and the validation set error do not have similar characteristics, and it appears that significant overfitting has occurred.

Figure 5 shows the regression analysis of the network response. In this case, there are four outputs, so there are four regressions.

The exact solution of the considered model example is known in closed form.

We found an approximate solution of this problem by using the neural network. These results are shown in Figure 6.

To reach higher accuracy, we should train the network with more data. We retrain the network with a larger data set. The results are shown in Figures 7 and 8.

The artificial neural network's error is now acceptable, because the test set error and the validation set error have similar characteristics, and it does not appear that any significant overfitting has occurred.

We found an approximate solution again. These results are shown in Figure 9.

To improve the accuracy of the approximate solution further, the network can be trained with still more data.

#### References

1. J. Sea, "Numerical method in mathematical physics and optimal control," Nauka, Moscow, Russia, vol. 240, pp. 64–74, 1978.
2. J. Sokolowski and J.-P. Zolesio, *Introduction to Shape Optimization: Shape Sensitivity Analysis*, Springer, Heidelberg, Germany, 1992.
3. J. Haslinger and R. A. E. Makinen, *Introduction to Shape Optimization: Theory, Approximation and Computation*, SIAM, Philadelphia, Pa, USA, 2003.
4. Y. S. Gasimov, A. Nachaoui, and A. A. Niftiyev, "Non-linear eigenvalue problems for p-Laplacian with variable domain," *Optimization Letters*, vol. 4, no. 1, pp. 67–84, 2009.
5. F. A. Aliev, M. M. Mutallimov, I. M. Askerov, and I. S. Ragumov, "Asymptotic method of solution for a problem of construction of optimal gas-lift process modes," *Mathematical Problems in Engineering*, vol. 2010, Article ID 191153, 11 pages, 2010.
6. A. A. Niftiyev and E. R. Akhmadov, "Variational statement of an inverse problem for a domain," *Differential Equations*, vol. 43, no. 10, pp. 1410–1416, 2007.
7. S. Belov and N. Fujii, "Symmetry and sufficient conditions of optimality in a domain optimization problem," *Control and Cybernetics*, vol. 26, no. 1, pp. 45–56, 1997.
8. F. A. Aliev, A. A. Niftiyev, and J. I. Zeynalov, "Optimal synthesis problem for the fuzzy systems in semi-infinite interval," *Applied and Computational Mathematics*, vol. 10, no. 1, pp. 97–105, 2011.
9. V. F. Demyanov and A. M. Rubinov, *Bases of Non-Smooth Analysis and Quasidifferential Calculus*, Nauka, Moscow, Russia, 1990.
10. F. P. Vasilyev, *Optimization Methods*, Factorial Press, Moscow, Russia, 2002.
11. D. A. Tarkhov, *Neural Networks: Models and Algorithms*, Radio-Technical, Moscow, Russia, 2005.
12. I. E. Lagaris, A. Likas, and D. I. Fotiadis, "Artificial neural networks for solving ordinary and partial differential equations," *IEEE Transactions on Neural Networks*, vol. 9, no. 5, pp. 987–1000, 1998.
13. R. M. Alguliev, R. M. Aliguliev, and R. K. Alekperov, "New approach to optimal appointment for distributed system," *Informatics and Computer Science Problems*, no. 5, pp. 23–31, 2004.
14. A. U. Levin and K. S. Narendra, "Control of nonlinear dynamical systems using neural networks: controllability and stabilization," *IEEE Transactions on Neural Networks*, vol. 4, no. 2, pp. 192–206, 1993.
15. H. Lee and I. S. Kang, "Neural algorithm for solving differential equations," *Journal of Computational Physics*, vol. 91, no. 1, pp. 110–131, 1990.
16. A. N. Gorban, "A generalized approximation theorem and computing possibilities of neural networks," *Siberian Journal of Computational Mathematics*, vol. 1, no. 1, pp. 12–24, 1998.