Abstract

We consider an inverse problem with respect to an unknown domain. We suggest a new approach that reduces the inverse problem for the domain to an equivalent problem in a variational setting, and we give an effective algorithm for solving problems of this kind. At each step of the iteration, an artificial neural network is used to solve the boundary value problem.

1. Introduction

A wide class of practical problems reduces to inverse problems with respect to a domain. Examples include problems of elasticity theory, diffusion problems, and problems arising in hydrodynamics [1–6]. Papers on inverse problems usually deal with an unknown function (coefficients, or functions occurring in the boundary and initial conditions). In our case, however, a domain is sought, and the investigation of such problems involves serious difficulties. To avoid these difficulties, we first reduce the considered inverse problem to a variational statement. Since the resulting variational problem depends on the unknown domain, its investigation also presents difficulties. Here, we give an effective algorithm for solving such problems.

2. Statement of the Problem and Main Result

Let $D$ be an $r$-dimensional domain, that is, $D \subset \mathbb{R}^r$, and let $x = (x_1, x_2, \ldots, x_r) \in D$.

Denote by $S_D$ the boundary of the domain $D$, $S_D = \partial D$. Assume that the boundary $S_D$ belongs to the class $C^2$. Consider the following inverse problem:
\[
-\Delta u + a(x)u = f(x), \quad x \in D, \tag{2.1}
\]
\[
u(x) = 0, \quad x \in S_D, \tag{2.2}
\]
\[
\frac{\partial u(x)}{\partial n} = 0, \quad x \in S_D, \tag{2.3}
\]
where the functions $a$ and $f$ are continuously differentiable in $\mathbb{R}^r$ and $a(x) > 0$. Denote by $K$ the set of convex domains with boundary of class $C^2$.

Our goal is to find a pair $(D, u) \in K \times C^2(D)$ such that the function $u = u(x)$ satisfies (2.1) and the boundary conditions (2.2), (2.3) in the domain $D$. As can be seen, condition (2.2) is a Dirichlet condition and condition (2.3) is a Neumann condition. To solve the inverse problem (2.1)–(2.3), we first consider the following variational problem with an unknown domain:
\[
J(D, u) = \int_D F\bigl(x, u(x), u_x(x)\bigr)\,dx \longrightarrow \min, \quad D \in K,\ u \in C^2(D), \tag{2.4}
\]
\[
u(x) = 0, \quad x \in S_D. \tag{2.5}
\]
We assume that the function $F(x, u, p)$ is continuously differentiable in its variables on $D \times \mathbb{R} \times \mathbb{R}^r$.

If the boundary condition (2.5) is satisfied for $D \in K$, $u \in C^2(D)$, the pair $(D, u)$ is said to be admissible. Denote by $M$ the set of all admissible pairs. A pair $(D^*, u^*) \in M$ is called an optimal pair if it minimizes functional (2.4) over the set $M$.

We state the following theorem obtained in [7].

Theorem 2.1. Let the pair $(D^*, u^*) \in M$ be an optimal pair for the variational problem (2.4), (2.5). Then the function $u^* = u^*(x)$ is a solution of the following Euler equation in the domain $D^*$:
\[
F_u\bigl(x, u(x), u_x(x)\bigr) - \sum_{i=1}^{n} \frac{d}{dx_i} F_{u_{x_i}}\bigl(x, u(x), u_x(x)\bigr) = 0, \quad x \in D^*, \tag{2.6}
\]
and, moreover, on the boundary $S_{D^*}$ the condition
\[
F\bigl(x, u^*(x), u^*_x(x)\bigr) - \sum_{i=1}^{n} u^*_{x_i}(x)\, F_{u_{x_i}}\bigl(x, u^*(x), u^*_x(x)\bigr) = 0, \quad x \in S_{D^*}, \tag{2.7}
\]
is satisfied.

As can be seen, this theorem is proved for convex domains. However, a similar result can be obtained for a doubly connected domain $D$ with internal and external boundaries $S_1$ and $S_2$. For that, we must use the expansion
\[
\int_D F\,dx = \int_{D_2} F\,dx - \int_{D_1} F\,dx, \tag{2.8}
\]
where $D_1$ and $D_2$ are the convex domains bounded by $S_1$ and $S_2$, respectively.

Now, take the function $F(x, u, p)$ in the following form:
\[
F\bigl(x, u, u_x\bigr) = |u_x|^2 + a(x)u^2 - 2 f(x) u, \tag{2.9}
\]
where
\[
|u_x|^2 = |u_{x_1}|^2 + |u_{x_2}|^2 + \cdots + |u_{x_r}|^2. \tag{2.10}
\]
Since the functions $a$ and $f$ are continuously differentiable in $\mathbb{R}^r$, the function $F(x, u, p)$ is continuously differentiable in its variables on $D \times \mathbb{R} \times \mathbb{R}^r$. Let us apply the theorem above to this function. It is clear that
\[
F_u = 2a(x)u - 2f(x), \qquad F_{u_{x_i}} = 2 u_{x_i}. \tag{2.11}
\]
Hence,
\[
\sum_{i=1}^{n} \frac{d}{dx_i} F_{u_{x_i}} = 2\Delta u. \tag{2.12}
\]
So, from condition (2.6), we get that the function $u^* = u^*(x)$ satisfies (2.1) in the domain $D^*$. From condition (2.7), we get the boundary condition
\[
|u^*_x|^2 + a\,u^{*2} - 2f(x)u^* - 2|u^*_x|^2 = 0, \quad x \in S_{D^*}. \tag{2.13}
\]
If we take into account the condition $u^*(x) = 0$, $x \in S_{D^*}$, we get
\[
|u^*_x| = 0, \quad x \in S_{D^*}, \tag{2.14}
\]
or
\[
u^*_x(x) = 0, \quad x \in S_{D^*}. \tag{2.15}
\]
Since the normal derivative $\partial u^*(x)/\partial n$ is expressed through the gradient $u^*_x$, it follows that the function $u^*$ satisfies the boundary condition (2.3) as well. Thus, we have proved the following theorem.

Theorem 2.2. Let the pair $(D^*, u^*) \in M$ be an optimal pair for problem (2.4), (2.5). Then it is also a solution of problem (2.1)–(2.3).

This theorem shows that, instead of the inverse problem (2.1)–(2.3), we may take the function $F(x, u, p)$ in the form (2.9) and investigate the variational problem (2.4), (2.5). Note that the converse is not true in general. Although the functional $J(D, u)$ is convex with respect to the function $u$, it is not, in general, convex with respect to $D$. Using the obtained results, we give the following method for solving problem (2.4), (2.5).

Let the system of functions $\{\varphi_k(x)\}$, $k = 1, 2, \ldots$, form a basis in the space $C^2(D)$. Then the function $u = u(x)$ may be expanded in this basis:
\[
u = \sum_{k=1}^{\infty} \alpha_k \varphi_k(x). \tag{2.16}
\]
Taking this into account in problem (2.4), (2.5), we get
\[
I(D, \alpha) = \int_D \Phi(x, \alpha)\,dx \longrightarrow \min, \quad D \in K,\ \alpha = (\alpha_1, \alpha_2, \alpha_3, \ldots), \tag{2.17}
\]
\[
\sum_{k=1}^{\infty} \alpha_k \varphi_k(x) = 0, \quad x \in S_D, \tag{2.18}
\]
where
\[
\Phi(x, \alpha) = F\Bigl(x, \sum_{k=1}^{\infty} \alpha_k \varphi_k(x), \sum_{k=1}^{\infty} \alpha_k \frac{\partial \varphi_k(x)}{\partial x}\Bigr). \tag{2.19}
\]
In our case, since $F(x, u, p)$ has the form (2.9),
\[
\Phi(x, \alpha) = \Bigl|\sum_{k=1}^{\infty} \alpha_k \frac{\partial \varphi_k}{\partial x}\Bigr|^2 + a(x)\Bigl(\sum_{k=1}^{\infty} \alpha_k \varphi_k(x)\Bigr)^2 - 2 f(x) \sum_{k=1}^{\infty} \alpha_k \varphi_k(x). \tag{2.20}
\]
To solve problem (2.17), (2.18), we calculate the first variation of functional (2.17). It is clear that
\[
\delta_\alpha I = \int_D \frac{\partial \Phi(x, \alpha)}{\partial \alpha}\,\delta\alpha\,dx. \tag{2.21}
\]
Next, we calculate the first variation of the functional $I(D, \alpha)$ with respect to the domain $D$. In [5, 7], the functional of the form
\[
G(D) = \int_D g(x)\,dx \tag{2.22}
\]
is considered and, for its first variation, the formula
\[
\delta G(D) = \int_{S_D} g(x)\,\delta P_D(n(x))\,ds \tag{2.23}
\]
is obtained. Here the function $g(x)$ is continuously differentiable in $\mathbb{R}^r$, $n(x)$ is the external normal to the surface $S_D$ at the point $x$, and $P_D(x)$ is the support function of the domain $D$, determined as follows:
\[
P_D(x) = \sup_{l \in D}(l, x), \quad x \in \mathbb{R}^r. \tag{2.24}
\]
Taking this formula into account, we get
\[
\delta I(D, \alpha) = \int_{S_D} \Phi(x, \alpha)\,\delta P_D(n(x))\,ds + \int_D \frac{\partial \Phi(x, \alpha)}{\partial \alpha}\,\delta\alpha\,dx. \tag{2.25}
\]

It is seen from this formula that the numbers $\alpha_1, \alpha_2, \alpha_3, \ldots$ are found from the system of equations
\[
\int_D \frac{\partial \Phi(x, \alpha)}{\partial \alpha}\,dx = 0. \tag{2.26}
\]
In our case, since $\Phi(x, \alpha)$ has the form (2.20), system (2.26) is a system of linear equations.
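To illustrate how system (2.26) looks in practice, the following MATLAB sketch (our own illustration, not part of the original algorithm) assembles and solves the linear system for a truncated expansion on a fixed trial domain. Since $\Phi$ has the form (2.20), the condition $\partial I/\partial \alpha_j = 0$ gives $\sum_k \alpha_k \int_D (\nabla\varphi_j \cdot \nabla\varphi_k + a\,\varphi_j\varphi_k)\,dx = \int_D f\,\varphi_j\,dx$. The unit disk, the coefficients $a \equiv 1$, $f \equiv 1$, and the three polynomial basis functions (chosen to vanish on $S_D$, so that (2.18) holds automatically) are assumptions made purely for this example.

% Assemble and solve the linear system (2.26) for a truncated basis on the unit disk.
h = 0.02;
[X, Y] = meshgrid(-1:h:1, -1:h:1);
inD = (X.^2 + Y.^2) <= 1;                   % indicator of the trial domain D (unit disk, assumed)
a = @(x, y) 1 + 0*x;                        % assumed coefficient a(x) > 0
f = @(x, y) 1 + 0*x;                        % assumed right-hand side
n = 3;                                      % number of basis functions kept
% assumed polynomial basis functions (vanishing on the unit circle) and their gradients
phi  = {@(x,y) 1 - x.^2 - y.^2, @(x,y) x.*(1 - x.^2 - y.^2), @(x,y) y.*(1 - x.^2 - y.^2)};
phix = {@(x,y) -2*x,            @(x,y) 1 - 3*x.^2 - y.^2,    @(x,y) -2*x.*y};
phiy = {@(x,y) -2*y,            @(x,y) -2*x.*y,              @(x,y) 1 - x.^2 - 3*y.^2};
A = zeros(n); b = zeros(n, 1);
for j = 1:n
    for k = 1:n
        integ = phix{j}(X,Y).*phix{k}(X,Y) + phiy{j}(X,Y).*phiy{k}(X,Y) ...
              + a(X,Y).*phi{j}(X,Y).*phi{k}(X,Y);
        A(j,k) = sum(integ(inD)) * h^2;     % grid quadrature of the stiffness/mass term
    end
    integ = f(X,Y).*phi{j}(X,Y);
    b(j) = sum(integ(inD)) * h^2;           % grid quadrature of the load term
end
alpha = A \ b;                              % coefficients of the truncated expansion (2.16)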

The set $K$ may also be subject to additional restrictions. For example, $K$ may be the set of domains with a given volume, or the set of domains with a given surface area. In another case, the set $K$ may be given in the form $D_0 \subset D \subset D_1$, where $D_0, D_1 \subset \mathbb{R}^r$ are given domains. In practical problems, the set $K$ may be given in the form of the integral restrictions
\[
\int_D g(x)\,dx = c \tag{2.27}
\]
or
\[
\int_D g(x)\,dx \le c. \tag{2.28}
\]
In general, we assume that there is a domain $G \subset \mathbb{R}^r$ such that every domain $D \in K$ satisfies $D \subset G$. We can state this condition in a simpler form as follows.

We know that the optimal domain is contained in a certain domain $G$. The obtained relations (2.25), (2.26) make it possible to solve problem (2.17), (2.18) approximately. To this end, we give the following algorithm.

3. Algorithm for Numerical Solution

Based on the procedure described above, the following method is proposed for the numerical solution of problem (2.4), (2.5).

Step 1. Take an arbitrary domain $D^{(0)} \in K$ and basis functions
\[
\bigl\{ \varphi_k^{(0)}(x) \bigr\}, \quad k = 1, 2, \ldots. \tag{3.1}
\]

Step 2. Solving the system of equations
\[
\int_{D^{(0)}} \frac{\partial \Phi(x, \alpha)}{\partial \alpha}\,dx = 0, \tag{3.2}
\]
we find the coefficient vector $\alpha^{(0)} = (\alpha_1^{(0)}, \alpha_2^{(0)}, \alpha_3^{(0)}, \ldots)$.

Step 3. Minimizing the linear functional
\[
\int_{S_{D^{(0)}}} \Phi\bigl(x, \alpha^{(0)}\bigr)\, P_D(n(x))\,ds \longrightarrow \min, \quad D \in K, \tag{3.3}
\]
we find the convex function $P(x)$.

Step 4. The intermediate domain $\widetilde{D}^{(0)}$ is found as the subdifferential of the function $P(x)$ at the point $x = 0$ [8]. In other words,
\[
\widetilde{D}^{(0)} = \partial P(0) = \bigl\{ l \in \mathbb{R}^n : P(x) \ge (l, x),\ \forall x \in \mathbb{R}^n \bigr\}. \tag{3.4}
\]

Step 5. A new domain $D^{(1)}$ is found as follows:
\[
D^{(1)} = (1 - \mu)D^{(0)} + \mu \widetilde{D}^{(0)}, \quad 0 < \mu < 1. \tag{3.5}
\]
Here the parameter $\mu$ may be chosen in different ways [7, 9, 10].

If the newly found domain $D^{(1)}$ satisfies a prescribed accuracy condition, the iteration process is terminated. Otherwise, the iteration is repeated from the first step with the new domain $D^{(1)}$. The accuracy condition may be given in different ways, for example,
\[
\bigl| I\bigl(D^{(k+1)}, \alpha^{(k+1)}\bigr) - I\bigl(D^{(k)}, \alpha^{(k)}\bigr) \bigr| < \varepsilon. \tag{3.6}
\]
Here $\varepsilon > 0$ is the prescribed accuracy of the method. We now give some rules for choosing the quantity $\mu_k$.

(1) In general, the numbers $\mu_k$ may be chosen from the condition
\[
f_k(\mu) = I\bigl((1 - \mu) D^{(k)} + \mu \widetilde{D}^{(k)}, \alpha^{(k)}\bigr) \longrightarrow \min, \quad \mu \in [0, 1]. \tag{3.7}
\]

(2) The quantities $\mu_k$ may be given as a sequence satisfying the conditions
\[
0 \le \mu_k \le 1, \quad \lim_{k \to \infty} \mu_k = 0, \quad \sum_{k=0}^{\infty} \mu_k = \infty, \tag{3.8}
\]
for example,
\[
\mu_k = \frac{1}{k + 1}, \quad k = 1, 2, 3, \ldots. \tag{3.9}
\]

(3) Another method is to take $\mu_k = 1$ and verify that the value of the functional decreases. If it does not decrease, the value of $\mu_k$ is halved.

In order to solve the boundary value problem (2.1), (2.2) at each step of the iteration, we use an artificial neural network. A sketch of one outer iteration is given below.
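The following MATLAB pseudocode sketches one possible organization of the outer iteration, under the assumption that a convex domain is represented by samples of its support function on a set of directions. Here solve_alpha, minimize_boundary, and eval_I are hypothetical helpers standing for Step 2, Steps 3-4, and the evaluation of functional (2.17), respectively; they are not defined here, so this is an outline rather than runnable code.

% Hypothetical outer loop: a convex domain is stored as samples of its support
% function P on the directions theta; helper functions are placeholders.
M = 360;
theta = linspace(0, 2*pi, M + 1); theta(end) = [];
P = ones(1, M);                                   % Step 1: initial domain D^(0) (unit disk, assumed)
tol = 1e-4; maxit = 50; Iold = Inf;
for k = 1:maxit
    alphak = solve_alpha(P, theta);               % Step 2: solve system (3.2) on D^(k)
    Ptil   = minimize_boundary(P, theta, alphak); % Steps 3-4: support function of the intermediate domain
    mu = 1/(k + 1);                               % step-size rule (3.9)
    P = (1 - mu)*P + mu*Ptil;                     % Step 5: convex combination (3.5) of support functions
    Inew = eval_I(P, theta, alphak);              % value of functional (2.17) on the new domain
    if abs(Inew - Iold) < tol, break, end         % accuracy condition (3.6)
    Iold = Inew;
end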

4. Application of Neural Networks to Solving the Boundary Problem

Before passing to the solution of differential equations by means of neural networks, let us note why this is necessary. It is known that there are many methods and approaches for solving differential equations and the related boundary value problems. However, there exist boundary value problems for which the known methods do not yield a solution with high accuracy, for various reasons. This may be connected with the complexity of the geometric structure of the domain in which the differential equation is given, or with strong nonlinearity of the problem. Another reason is that a very large number of grid points may be required, which strongly affects the error. At the same time, the solution of the differential equation may require numerous iterations.

In addition, the application of neural networks to the solution of differential equations is motivated by two further reasons. The first is that the logical scheme of a neural network corresponds to the logical scheme used in the solution of differential equations [11–14]; in other words, the solution scheme of a differential equation suggests an appropriate neural network structure. The second reason is that neural networks can approximate functions with high accuracy [15, 16].

Let π·βŠ‚π‘…2 and 𝑆=πœ•π·. Consider the following two-dimensional Poisson equation:Δ𝑒=𝑓(π‘₯),π‘₯∈𝐷,𝑒(π‘₯)=𝑔(π‘₯),π‘₯βˆˆπ‘†.(4.1) Find the solution of this problem as follows: 𝑒(π‘₯)=𝑛𝑖=1π‘€π‘–πœ‘π‘–(π‘₯),(4.2) where the functions πœ‘π‘–(π‘₯) are usually chosen as radial-basis functions. Writing expansion (4.2) in (4.1), we get𝑛𝑖=1π‘€π‘–Ξ”πœ‘π‘–(π‘₯)=𝑓(π‘₯),π‘₯∈𝐷,𝑛𝑖=1π‘€π‘–πœ‘π‘–(π‘₯)=𝑔(π‘₯),π‘₯βˆˆπ‘†.(4.3) The weight coefficients 𝑀𝑖,𝑖=1,𝑛 are found from the minimality condition of the following function:ξ€œπ½(𝑀)=𝐷|||||𝑛𝑖=1π‘€π‘–Ξ”πœ‘π‘–|||||(π‘₯)βˆ’π‘“(π‘₯)2ξ€œπ‘‘π‘₯+𝑆|||||𝑛𝑖=1π‘€π‘–πœ‘π‘–|||||(π‘₯)βˆ’π‘”(π‘₯)2π‘‘π‘ βŸΆmin.(4.4) But, in great majority of cases, the functions𝐽(𝑀)=𝑀1ξ“π‘˜=1|||||𝑛𝑖=1π‘€π‘–Ξ”πœ‘π‘–ξ€·π‘₯π‘˜ξ€Έξ€·π‘₯βˆ’π‘“π‘˜ξ€Έ|||||2+𝑀2ξ“π‘˜=1|||||𝑛𝑖=1π‘€π‘–πœ‘π‘–ξ€·π‘ π‘˜ξ€Έξ€·π‘ βˆ’π‘”π‘˜ξ€Έ|||||2⟢min(4.5) are considered using the uniformity of the equation in the values at discrete points. In order to find the minimum of this function, we can use the gradient method. The iteration may be constructed to find the weight coefficients 𝑀𝑖,𝑖=1,𝑛.

Now, in this way, we will try to solve a boundary value problem for a partial differential equation by means of a neural network. We consider problem (4.1). If we write the solution of this problem by means of the Green function, we see that the solution depends linearly and continuously on the right-hand side and the boundary function. As noted above, neural networks allow one to approximate a function with any accuracy [15, 16]. Replace the domain $D \subset \mathbb{R}^2$ by a regular discrete grid with a small step $h$. Let the values of the function $f(x)$ at the internal nodal points be $f_{ij}$, with index set $I_1$, and denote the data at the boundary points by $g_{pq}$, with index set $I_2$. Our goal is to construct a neural network by varying the functions $f(x)$, $g(x)$. To obtain input and output data, take some functions $u_1(x), u_2(x), \ldots, u_M(x)$. For these functions to be solutions of (4.1), the function $f(x)$ should be chosen as $f^{(1)}(x), f^{(2)}(x), \ldots, f^{(M)}(x)$, respectively, and, for the boundary condition to be satisfied, the boundary function must be defined as $g^{(1)}(x), g^{(2)}(x), \ldots, g^{(M)}(x)$. In other words, if in problem (4.1) we take $f(x) = f^{(k)}(x)$ and $g(x) = g^{(k)}(x)$, we get the corresponding solution $u(x) = u_k(x)$. So we must construct a neural network that associates the $M$ inputs
\[
H_k = \bigl\{ f^{(k)}_{ij},\ (i, j) \in I_1;\ g^{(k)}_{pq},\ (p, q) \in I_2 \bigr\} \tag{4.6}
\]
with the $M$ outputs
\[
U_k = \bigl\{ u^{(k)}_{ij},\ (i, j) \in I_1 \bigr\}. \tag{4.7}
\]
Here $u^{(k)}_{ij}$ is the value of the chosen solution $u_k(x)$ at the nodal point $(i, j) \in I_1$. The constructed neural network makes it possible to find an approximate solution of problem (4.1) for arbitrarily given $f(x)$, $g(x)$. To this end, we feed the set
\[
H = \bigl\{ f_{ij},\ (i, j) \in I_1;\ g_{pq},\ (p, q) \in I_2 \bigr\} \tag{4.8}
\]
as an input variable to the neural network; its output
\[
U = \bigl\{ u_{ij},\ (i, j) \in I_1 \bigr\} \tag{4.9}
\]
is then an approximate solution of problem (4.1).
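The following MATLAB sketch (our illustration) assembles training pairs of the form (4.6), (4.7) on a regular grid over the unit disk. The family of solutions $u^{(k)}(x) = k(1 - x_1^2 - x_2^2)/2$, the grid step $h$, and the rule used to detect boundary nodes are assumptions made for the example; the resulting matrices P and T are in the input/target format used by the training commands of Section 5.

% Assemble training inputs H_k and targets U_k for M chosen solutions.
h = 0.1;
[Xg, Yg] = meshgrid(-1:h:1);
I1 = find(Xg.^2 + Yg.^2 < (1 - h)^2);          % interior node indices (assumed rule)
I2 = find(abs(Xg.^2 + Yg.^2 - 1) <= h);        % nodes treated as boundary nodes (assumed rule)
Mtr = 25;                                       % number of training samples
P = zeros(numel(I1) + numel(I2), Mtr);          % network inputs: columns are H_k
T = zeros(numel(I1), Mtr);                      % network targets: columns are U_k
for k = 1:Mtr
    uk = k*(1 - Xg.^2 - Yg.^2)/2;               % assumed exact solution u^(k)
    fk = -2*k*ones(size(Xg));                   % corresponding right-hand side, Delta u^(k) = -2k
    P(:, k) = [fk(I1); uk(I2)];                 % f at interior nodes, boundary data g = u at boundary nodes
    T(:, k) = uk(I1);                           % solution values at interior nodes
end
% P and T can now be passed to the training routine, as in Section 5.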

5. Model Example

Now let us consider the following example.

Consider the following problem:
\[
\Delta u = -2, \quad x \in D, \qquad u(x) = 0, \quad x \in S, \tag{5.1}
\]
where
\[
D = \bigl\{ x = (x_1, x_2) \in \mathbb{R}^2 : x_1^2 + x_2^2 \le 1 \bigr\}. \tag{5.2}
\]
The neural network used is a nonlinear "cascade feed-forward, distributed time delay, backpropagation" network trained with the Levenberg-Marquardt algorithm (trainlm); it has 3 layers (Figure 1).

Backpropagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks with nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way.

The relationship between inputs, outputs, weights, and biases is shown in Figure 2; the network includes a weight connection from the input to each layer and from each layer to the successive layers.

These additional connections may improve the speed at which the network learns the desired relationship. Each layer has a weight matrix W, a bias vector b, and an output vector a. Networks with weights, biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities.

Now we can create the network through the following commands:

NET = newcf(minmax(P), [12, 18, 3], {'tansig', 'tansig', 'purelin'}, 'trainlm');
NET = init(NET);
NET.trainParam.show = 50;
NET.trainParam.lr = 0.05;
NET.trainParam.lr_inc = 1.05;
NET.trainParam.mc = 0.9;
NET.trainParam.epochs = 300;
NET.trainParam.goal = 1e-5;
[NET, tr] = train(NET, P, T);

The created network object is:

Neural Network object:
  architecture:
    numInputs: 1
    numLayers: 3
    biasConnect: [1; 1; 1]
    inputConnect: [1; 1; 1]
    layerConnect: [0 0 0; 1 0 0; 1 1 0]
    outputConnect: [0 0 1]
    numOutputs: 1 (read-only)
    numInputDelays: 0 (read-only)
    numLayerDelays: 0 (read-only)
  subobject structures:
    inputs: {1x1 cell} of inputs
    layers: {3x1 cell} of layers
    outputs: {1x3 cell} containing 1 output
    biases: {3x1 cell} containing 3 biases
    inputWeights: {3x1 cell} containing 3 input weights
    layerWeights: {3x3 cell} containing 3 layer weights
  functions:
    adaptFcn: 'trains'
    divideFcn: 'dividerand'
    gradientFcn: 'gdefaults'
    initFcn: 'initlay'
    performFcn: 'mse'
    plotFcns: {'plotperform', 'plottrainstate', 'plotregression'}
    trainFcn: 'trainlm'
  parameters:
    adaptParam: .passes
    divideParam: .trainRatio, .valRatio, .testRatio
    gradientParam: (none)
    initParam: (none)
    performParam: (none)
    trainParam: .show, .showWindow, .showCommandLine, .epochs, .time, .goal, .max_fail, .mem_reduc, .min_grad, .mu, .mu_dec, .mu_inc
  weight and bias values:
    IW: {3x1 cell} containing 3 input weight matrices
    LW: {3x3 cell} containing 3 layer weight matrices
    b: {3x1 cell} containing 3 bias vectors

After initializing and adjusting the training and weight parameters, the final form of the created artificial neural network is shown in Figure 3.

After training the network with $P = 25$ input-output data, Figure 4 shows the result. The result here is not satisfactory, because the test set error and the validation set error do not have similar characteristics, and it appears that significant overfitting has occurred.

Figure 5 shows the network results used to analyze the network response. In this case, there are four outputs, so there are four regressions.

The exact solution of the considered model example is $u(x) = \frac{1}{2}\bigl(1 - x_1^2 - x_2^2\bigr)$; indeed, $\Delta u = -2$ and $u$ vanishes on the unit circle.

We found an approximate solution of this problem for $P = 25$ by using a neural network. These results are shown in Figure 6.

To reach higher accuracy, we should train the network with more data. We retrain the network with $P = 50$ data. The results are shown in Figures 7 and 8.

The artificial neural network’s error is reasonable, because the test set error and the validation set error have similar characteristics, and it does not appear that any significant overfitting has occurred.

We found an approximate solution for $P = 50$ again. These results are shown in Figure 9.

To further improve the accuracy of the approximate solution, the network can be trained with more data.