Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 807656, 18 pages
http://dx.doi.org/10.1155/2012/807656
Research Article

A Recurrent Neural Network for Nonlinear Fractional Programming

Quan-Ju Zhang1 and Xiao Qing Lu2

1Management Department, City College of Dongguan University of Technology, Dongguan 523106, China
2Financial Department, City College of Dongguan University of Technology, Dongguan 523106, China

Received 13 April 2012; Accepted 7 July 2012

Academic Editor: Hai L. Liu

Copyright © 2012 Quan-Ju Zhang and Xiao Qing Lu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a novel continuous-time recurrent neural network model that performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized under interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and converges to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to demonstrate further the global convergence and the good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.

1. Introduction

Compared with the well-known applications of nonlinear programming to various branches of human activity, especially to economics, the applications of fractional programming have so far been less known. Of course, the linearity of a problem makes it easier to tackle and hence contributes to the wide recognition of linear programming. However, not all real-life economic problems can be described by linear models, and such problems are therefore not amenable to linear programming. Fractional programming is a nonlinear programming method that has gained increasing exposure recently, and its importance in solving concrete problems is steadily growing. Moreover, it is known that nonlinear optimization models describe practical problems much better than linear ones do.

Fractional programming problems are particularly useful in the solution of economic problems in which various activities use certain resources in various proportions, while the objective is to optimize a certain indicator, usually the most favorable return-on-allocation ratio subject to the constraints imposed on the availability of goods. Detailed descriptions of these models can be found in Charnes et al. [1], Patkar [2], and Mjelde [3]. Besides the economic applications, fractional programming problems also appear in other domains, such as physics, information theory, and game theory. Nonlinear fractional programming problems are, of course, the dominant ones owing to their much wider range of applications; see Stancu-Minasian [4] for details.

As is known, conventional algorithms are time consuming when solving optimization problems with large-scale variables, so parallel and distributed algorithms are more competitive in that setting. Recurrent neural networks (RNNs) governed by a system of differential equations can be implemented physically by designated hardware with integrated circuits, and an optimization process with different specific purposes can be conducted in a truly parallel way. An overview and paradigm descriptions of various neural network models for tackling a great variety of optimization problems can be found in the book by Cichocki and Unbehauen [5]. Unlike most numerical algorithms, the neural network approach can handle the optimization process in real time on line, as described in Hopfield's seminal work [6, 7], and hence is a top choice.

Neural network models for optimization problems have been investigated intensively since the pioneering work of Wang et al. [8–14]. Wang et al. proposed several different neural network models for solving convex programming problems [8] and linear programming problems [9, 10], which were proved to be globally convergent to the problems' exact solutions. Kennedy and Chua [11] developed a neural network model for solving nonlinear programming problems in which a penalty parameter needs to be tuned during the optimization process, so only approximate solutions are generated. Xia and Wang [12] gave a general neural network design methodology that brings many gradient-based network models for solving convex programming problems under one framework with global convergence. Neural networks for quadratic optimization and nonlinear optimization with interval constraints were developed by Bouzerdoum and Pattison [13] and Liang and Wang [14], respectively.

All these neural networks can be classified into the following three types: (1) the gradient-based models [8–10] and their extension [12]; (2) the penalty-function-based model [11]; (3) the projection-based models [13, 14]. Among them, the first type was proved to have global convergence [12]; the third, quasi-convergence [8–10], only when the optimization problems are convex programming problems. The second could only be demonstrated to have local convergence [11] and, more unfortunately, it may fail to find exact solutions; see [15] for a numerical example. Because of this, the penalty-function-based model has few applications in practice. As is known, nonlinear fractional programming does not belong to the class of convex optimization problems [4], and how to construct a neural network model with good performance for this optimization problem has since become a challenge. Motivated by this, a promising recurrent continuous-time neural network model is proposed in the present paper. The proposed RNN model has the following two most important features. (1) The model is complete in the sense that the set of optima of the nonlinear fractional programming problem with interval constraints coincides with the set of equilibria of the proposed RNN model. (2) The RNN model is invariant with respect to the problem's feasible set and has the global convergence property in the sense that all trajectories of the proposed network converge to the exact solution set for any initial point starting in the feasible interval region. These two properties demonstrate that the proposed network model is quite suitable for solving nonlinear fractional programming problems with interval constraints.

The remainder of the paper is organized as follows. Section 2 formulates the optimization problem and Section 3 describes the construction of the proposed RNN model. The completeness property and the global convergence of the proposed model are discussed in Sections 4 and 5, respectively. Section 6 gives some typical application areas of fractional programming. Illustrative examples with computational results are reported in Section 7 to demonstrate further the good performance of the proposed RNN model in solving interval-constrained nonlinear fractional programming problems. Finally, Section 8 concludes with a summary of the main results of the paper.

2. Problem Formulation

The study of nonlinear fractional programming with interval constraints is motivated by the study of the following linear fractional interval programming problem:
$$\min f(x)=\frac{c^{T}x+c_{0}}{d^{T}x+d_{0}},\quad \text{s.t. } a\le x\le b, \tag{2.1}$$
where:
(i) $f(x)=(c^{T}x+c_{0})/(d^{T}x+d_{0})$,
(ii) $c,d$ are $n$-dimensional column vectors,
(iii) $c_{0},d_{0}$ are scalars,
(iv) the superscript $T$ denotes the transpose operator,
(v) $x=(x_{1},x_{2},\dots,x_{n})^{T}\in R^{n}$ is the decision vector,
(vi) $a=(a_{1},a_{2},\dots,a_{n})^{T}\in R^{n}$ and $b=(b_{1},b_{2},\dots,b_{n})^{T}\in R^{n}$ are constant vectors with $a_{i}\le b_{i}$ $(i=1,2,\dots,n)$.

It is assumed that the denominator of the objective function $f(x)$ maintains a constant sign on an open set $O$ containing the interval-constraint set $W=\{x: a\le x\le b\}$, say positive, that is,
$$d^{T}x+d_{0}>0,\quad \forall x\in O\supseteq W, \tag{2.2}$$
and that the function $f(x)$ does not reduce to a linear function, that is, $d^{T}x+d_{0}$ is not constant on $O\supseteq W$ and $c,d$ are linearly independent. If $x^{*}\in W$ and $f(x^{*})\le f(x)$ for any $x\in W$, then $x^{*}$ is called an optimal solution to problem (2.1). The set of all solutions to problem (2.1) is denoted by $\Omega$, that is, $\Omega=\{x^{*}\in W: f(x^{*})\le f(x),\ \forall x\in W\}$.

Studies on linear fractional interval programming, with the constraints in (2.1) replaced by $a\le Ax\le b$, commenced in a series of papers by Charnes et al. [16–18]. Charnes and Cooper [16] employed the change-of-variable method and developed a solution algorithm for this programming problem via the duality theory of linear programming. A little later, in [17, 18], Charnes et al. gave a different method which transformed the fractional interval problem into an equivalent problem of the form (2.1) by using the generalized inverse of $A$, and explicit solutions followed. Also, Bühler [19] transformed the problem into another equivalent one of the same format, with which he associated a linear parametric program used to obtain a solution of the original interval programming problem.

Accordingly, constraints $a\le Ax\le b$ can always be transformed into $a\le x\le b$ by the change-of-variable method without changing the programming format; see [13] for quadratic programming and [17] for linear fractional interval programming. So it suffices to pay attention to problem (2.1) only. As the existing studies on problem (2.1), see [16–19], focused on classical methods, which are time consuming from the computational point of view, the neural network method is surely the top choice for meeting real-time computation requirements. To reach this goal, the present paper constructs an RNN model that works both for nonlinear fractional interval programming and for the linear fractional interval programming problem (2.1).

Consider the following more general nonlinear fractional programming problem:
$$\min F(x)=\frac{g(x)}{h(x)},\quad \text{s.t. } a\le x\le b, \tag{2.3}$$
where $g(x),h(x)$ are continuously differentiable functions defined on an open convex set $O\subseteq R^{n}$ which contains the problem's feasible set $W=\{x: a\le x\le b\}$, and $x,a,b$ are the same as in problem (2.1); see items (v)-(vi) above. Similarly, we suppose the objective function's denominator $h(x)$ always keeps a constant sign, say $h(x)>0$. As most fractional programming problems arising in the real world involve some kind of generalized convexity, we suppose the objective function $F(x)$ to be pseudoconvex over $O$. There are several sufficient conditions for the function $F(x)=g(x)/h(x)$ to be pseudoconvex, two of which, see [20], are: (1) $g$ is convex with $g\ge 0$, while $h$ is concave with $h>0$; (2) $g$ is convex with $g\le 0$, while $h$ is convex with $h>0$. It is easy to see that problem (2.1) is a special case of problem (2.3).

We are going to state the neural network model which can be employed for solving problem (2.3) and so for problem (2.1) as well. Details are described in the coming section.

3. The Neural Network Model

Consider the following single-layered recurrent neural network whose state variable $x$ is described by the differential equation
$$\frac{dx}{dt}=-x+f_{W}(x-\nabla F(x)), \tag{3.1}$$
where $\nabla$ is the gradient operator and $f_{W}: R^{n}\to W$ is the projection operator defined by
$$f_{W}(x)=\arg\min_{w\in W}\|x-w\|. \tag{3.2}$$
For the interval-constrained feasible set $W$, the operator $f_{W}$ can be expressed explicitly as $f_{W}(x)=(f_{W_{1}}(x_{1}),\dots,f_{W_{n}}(x_{n}))$, whose $i$th component is
$$f_{W_{i}}(x_{i})=\begin{cases}a_{i}, & x_{i}<a_{i},\\ x_{i}, & a_{i}\le x_{i}\le b_{i},\\ b_{i}, & x_{i}>b_{i}.\end{cases} \tag{3.3}$$
The activation function of each node of the neural network model (3.1) is the typical piecewise-linear $f_{W_{i}}(x_{i})$ illustrated in Figure 1.

Figure 1: The activation function $f_{W_{i}}(x_{i})$ of the neural network model (3.1).

To give a clear description of the proposed neural network model, we rewrite the compact matrix form (3.1) in components:
$$\frac{dx_{i}}{dt}=-x_{i}+f_{W_{i}}\!\left(x_{i}-\frac{\partial F(x)}{\partial x_{i}}\right),\quad i=1,2,\dots,n. \tag{3.4}$$
When the RNN model (3.1) is employed to solve optimization problem (2.3), the initial state is required to be mapped into the feasible interval region $W$. That is, for any $x^{0}=(x^{0}_{1},x^{0}_{2},\dots,x^{0}_{n})\in R^{n}$, the corresponding initial point of the neural trajectory should be chosen as $x(0)=f_{W}(x^{0})$, or in component form, $x_{i}(0)=f_{W_{i}}(x^{0}_{i})$. The block functional diagram of the RNN model (3.4) is depicted in Figure 2.

Figure 2: Functional block diagram of the neural network model (3.1).

Accordingly, the architecture of the proposed neural network model (3.4) is composed of $n$ integrators, $n$ processors for $\nabla F(x)$, $2n$ piecewise-linear activation functions, and $2n$ summers.
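As an illustration of how the dynamics (3.4) and the initial-state mapping $x(0)=f_{W}(x^{0})$ can be simulated numerically, here is a minimal forward-Euler sketch in Python/NumPy; the step size, iteration count, and the convex test objective below are illustrative choices, not part of the paper:

```python
import numpy as np

def project(x, a, b):
    """Piecewise-linear activation f_W of (3.3): clip each
    component x_i to its interval [a_i, b_i]."""
    return np.clip(x, a, b)

def simulate(x0, grad_F, a, b, dt=0.01, steps=20000):
    """Forward-Euler integration of the RNN dynamics (3.4):
    dx/dt = -x + f_W(x - grad F(x))."""
    x = project(np.asarray(x0, dtype=float), a, b)  # map x(0) into W
    for _ in range(steps):
        x = x + dt * (-x + project(x - grad_F(x), a, b))
    return x
```

For instance, minimizing the convex (hence pseudoconvex) test function $F(x)=\frac{1}{2}\|x-c\|^{2}$ with $c=[2,-1]^{T}$ over $W=[0,1]^{2}$ drives the state to the projection of $c$ onto $W$, namely $[1,0]^{T}$.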

Let the equilibrium set of the RNN model (3.1) be $\Omega^{e}$, defined by
$$\Omega^{e}=\{x^{e}\in R^{n}: x^{e}=f_{W}(x^{e}-\nabla F(x^{e}))\}\subseteq W. \tag{3.5}$$
The relationship between the minimizer set $\Omega$ of problem (2.3) and the equilibrium set $\Omega^{e}$ is explored in the following section. It is guaranteed that the two sets coincide exactly, which is the most desirable case in neural network model design.

4. Complete Property

As proposed for binary-valued neural network models in [21], a neural network is said to be regular or normal if the set of minimizers of an energy function is, respectively, a subset or a superset of the set of stable states of the neural network. If the two sets are the same, the neural network is said to be complete. The regular and normal properties imply, respectively, the neural network's reliability and effectiveness for the optimization process. Completeness means both reliability and effectiveness, and it is the top choice in neural network design. Here, for the continuous-time RNN model (3.1), we say the model is regular, normal, or complete if $\Omega\subseteq\Omega^{e}$, $\Omega^{e}\subseteq\Omega$, or $\Omega=\Omega^{e}$, respectively.

The complete property of the neural network (3.1) is stated in the following theorem.

Theorem 4.1. The RNN model (3.1) is complete, that is, $\Omega=\Omega^{e}$.

In order to prove Theorem 4.1, one needs the following lemmas.

Lemma 4.2. Suppose that $x^{*}$ is a solution to problem (2.3), that is,
$$F(x^{*})=\min_{y\in W}F(y); \tag{4.1}$$
then $x^{*}$ is a solution of the variational inequality: find $x^{*}\in W$ such that
$$(\nabla F(x^{*}),\,y-x^{*})\ge 0\quad \text{for } y\in W. \tag{4.2}$$

Proof. See [22, Proposition 5.1].

Lemma 4.3. The function $F(x)=(c^{T}x+c_{0})/(d^{T}x+d_{0})$ defined in (2.1) is both pseudoconvex and pseudoconcave over $W$.

Proof. See [20, Lemma 11.4.1].

Lemma 4.4. Let $F: R^{n}\to R$ be a differentiable pseudoconvex function on an open set $Y\subseteq R^{n}$, and let $W\subseteq Y$ be any given nonempty convex set. Then $x^{*}$ is an optimal solution to the problem of minimizing $F(x)$ subject to $x\in W$ if and only if $(x-x^{*})^{T}\nabla F(x^{*})\ge 0$ for all $x\in W$.

Proof. See [4, Theorem 2.3.1(b)].

Now we turn to the proof of Theorem 4.1. Let $x^{*}=(x^{*}_{1},x^{*}_{2},\dots,x^{*}_{n})^{T}\in\Omega$; then $F(x^{*})\le F(x)$ for any $x\in W$, and hence Lemma 4.2 implies that $x^{*}$ solves (4.2), that is, $x^{*}\in W$ and
$$(y-x^{*})^{T}\nabla F(x^{*})\ge 0,\quad \forall y\in W, \tag{4.3}$$
which is equivalent to, see [23],
$$x^{*}=f_{W}(x^{*}-\nabla F(x^{*})), \tag{4.4}$$
so $x^{*}\in\Omega^{e}$. Thus, $\Omega\subseteq\Omega^{e}$.

Conversely, let $x^{e}=(x^{e}_{1},x^{e}_{2},\dots,x^{e}_{n})^{T}\in\Omega^{e}$, that is,
$$x^{e}=f_{W}(x^{e}-\nabla F(x^{e})), \tag{4.5}$$
which, again by [23], means $x^{e}\in W$ and
$$(y-x^{e})^{T}\nabla F(x^{e})\ge 0,\quad \forall y\in W. \tag{4.6}$$
Since the function $F(x)$ is pseudoconvex over $W$ (see Lemma 4.3), it follows from Lemma 4.4 that
$$F(x)\ge F(x^{e}),\quad \forall x\in W, \tag{4.7}$$
so $x^{e}\in\Omega$. Thus, $\Omega^{e}\subseteq\Omega$. Therefore, the two inclusions $\Omega^{e}\subseteq\Omega$ and $\Omega\subseteq\Omega^{e}$ establish the result of Theorem 4.1, $\Omega=\Omega^{e}$.

5. Stability Analysis

First, it can be shown that the RNN model (3.1) has a solution trajectory that is global in the sense that its existence interval can be extended to $+\infty$ on the right for any initial point in $W$.

The continuity of the right-hand side of (3.1) implies, by Peano's local existence theorem, see [24], that there exists a solution $x(t;x^{0})$ for $t\in[0,t_{\max})$ with any initial point $x^{0}\in W$, where $t_{\max}$ is the right endpoint of the maximal existence interval. The following lemma states that $t_{\max}=+\infty$.

Lemma 5.1. The solution $x(t;x^{0})$ of the RNN model (3.1) with any initial point $x(0;x^{0})=x^{0}\in W$ is bounded, and so it can be extended to $+\infty$.

Proof. It is easy to check that the solution $x(t)=x(t;x^{0})$ for $t\in[0,t_{\max})$ with initial condition $x(0;x^{0})=x^{0}$ is given by
$$x(t)=e^{-t}x^{0}+e^{-t}\int_{0}^{t}e^{s}f_{W}[x(s)-\nabla F(x(s))]\,ds. \tag{5.1}$$
Obviously, the mapping $f_{W}$ is bounded, that is, $\|f_{W}\|\le K$ for some number $K>0$, where $\|\cdot\|$ is the Euclidean norm. It follows from (5.1) that
$$\|x(t)\|\le e^{-t}\|x^{0}\|+e^{-t}K\int_{0}^{t}e^{s}\,ds\le e^{-t}\|x^{0}\|+K\left(1-e^{-t}\right)\le\max\{\|x^{0}\|,K\}. \tag{5.2}$$
Thus, the solution $x(t)$ is bounded, and so, by the extension theorem for ODEs, see [24], it can be concluded that $t_{\max}=+\infty$, which completes the proof of this lemma.
Now we are going to show another vital dynamical property, namely that the set $W$ is positively invariant with respect to the RNN model (3.1). That is, any solution $x(t)$ starting from a point $x^{0}\in W$ stays in $W$ for all $t\ge 0$. Additionally, we can also prove that any solution starting from outside $W$ will either enter the set $W$ in finite time and stay in it forever, or approach it asymptotically.

Theorem 5.2. For the neural dynamical system (3.1), the following two dynamical properties hold:
(a) $W$ is a positively invariant set of the RNN model (3.1);
(b) if $x^{0}\notin W$, then either $x(t)$ enters $W$ in finite time and stays in it forever, or $\rho(t)=\operatorname{dist}(x(t),W)\to 0$ as $t\to\infty$, where $\operatorname{dist}(x(t),W)=\inf_{y\in W}\|x(t)-y\|$.

Proof. The method used to prove this theorem can be found in [14]; for completeness and readability, we reproduce the whole proof here.
Suppose that, for $i=1,\dots,n$, $W_{i}=\{x_{i}\in R: a_{i}\le x_{i}\le b_{i}\}$ and $x^{0}_{i}=x_{i}(0;x^{0})\in W_{i}$. We first prove that, for all $i=1,2,\dots,n$, the $i$th component $x_{i}(t)=x_{i}(t;x^{0})$ belongs to $W_{i}$, that is, $x_{i}(t)\in W_{i}$ for all $t\ge 0$.
Let
$$t_{i}=\sup\{\tilde{t}: x_{i}(t)\in W_{i},\ \forall t\in[0,\tilde{t}\,]\}\ge 0. \tag{5.3}$$
We show by contradiction that $t_{i}=+\infty$. Suppose $t_{i}<\infty$; then $x_{i}(t)\in W_{i}$ for $t\in[0,t_{i}]$ and $x_{i}(t)\notin W_{i}$ for $t\in(t_{i},t_{i}+\delta)$, where $\delta$ is some positive number. With no loss of generality, we assume that
$$x_{i}(t)<a_{i},\quad t\in(t_{i},t_{i}+\delta). \tag{5.4}$$
The proof for the case $x_{i}(t)>b_{i}$, $t\in(t_{i},t_{i}+\delta)$, is similar. By the definition of $f_{W_{i}}$, the RNN model (3.4), and assumption (5.4), it follows that
$$\frac{dx_{i}(t)}{dt}\ge -x_{i}(t)+a_{i}>0,\quad t\in(t_{i},t_{i}+\delta). \tag{5.5}$$
So $x_{i}(t)$ is strictly increasing on $(t_{i},t_{i}+\delta)$, and hence
$$x_{i}(t)>x_{i}(t_{i}),\quad t\in(t_{i},t_{i}+\delta). \tag{5.6}$$
Noting that $x_{i}(t)\in W_{i}$ for $t\in[0,t_{i}]$ and that assumption (5.4) implies $x_{i}(t_{i})=a_{i}$, by (5.6) we get
$$x_{i}(t)>a_{i},\quad t\in(t_{i},t_{i}+\delta). \tag{5.7}$$
This contradicts assumption (5.4). So $t_{i}=+\infty$, that is, $x_{i}(t)\in W_{i}$ for all $t\ge 0$. This means $W$ is positively invariant, and hence (a) is guaranteed.
Second, for some $i$, suppose $x^{0}_{i}=x_{i}(0;x^{0})\notin W_{i}$. If there is a $t_{i}>0$ such that $x_{i}(t_{i})\in W_{i}$, then, according to (a), $x_{i}(t)$ will stay in $W_{i}$ for all $t\ge t_{i}$; that is, $x_{i}(t)$ enters $W_{i}$ in finite time. Conversely, suppose $x_{i}(t)\notin W_{i}$ for all $t\ge 0$. Without loss of generality, we assume that $x_{i}(t)<a_{i}$. It can be shown by contradiction that $\sup\{x_{i}(t): t\ge 0\}=a_{i}$. If not, since $x_{i}(t)<a_{i}$, then $\sup\{x_{i}(t): t\ge 0\}=m<a_{i}$. It follows from (3.4) that
$$\frac{dx_{i}(t)}{dt}\ge -m+a_{i}=\delta>0. \tag{5.8}$$
Integrating (5.8) gives
$$x_{i}(t)\ge \delta t+x^{0}_{i},\quad t>0, \tag{5.9}$$
which is a contradiction since $x_{i}(t)<a_{i}$. Thus, $\sup\{x_{i}(t): t\ge 0\}=a_{i}$. This and the previous argument show that, for $x^{0}\notin W$, either $x(t)$ enters $W$ in finite time and stays in it forever, or $\rho(t)=\operatorname{dist}(x(t),W)\to 0$ as $t\to\infty$.
We can now explore the global convergence of the neural network model (3.1). To proceed, we need an inequality result about the projection operator 𝑓𝑊 and the definition of convergence for a neural network.

Definition 5.3. Let $x(t)$ be a solution of the system $\dot{x}=F(x)$. The system is said to be globally convergent to a set $X$ with respect to a set $W$ if every solution $x(t)$ starting in $W$ satisfies
$$\rho(x(t),X)\to 0\quad \text{as } t\to\infty, \tag{5.10}$$
where $\rho(x(t),X)=\inf_{y\in X}\|x(t)-y\|$ and $x(0)=x^{0}\in W$.

Definition 5.4. The neural network (3.1) is said to be globally convergent to a set 𝑋 with respect to set 𝑊 if the corresponding dynamical system is so.

Lemma 5.5. For all $v\in R^{n}$ and all $u\in W$,
$$(v-f_{W}(v))^{T}(f_{W}(v)-u)\ge 0. \tag{5.11}$$

Proof. See [22, pp. 9-10].

Theorem 5.6. The neural network (3.1) is globally convergent to the solution set Ω with respect to set 𝑊.

Proof. From Lemma 5.5, we know that
$$(v-f_{W}(v))^{T}(f_{W}(v)-u)\ge 0,\quad \forall v\in R^{n},\ u\in W. \tag{5.12}$$
Let $v=x-\nabla F(x)$ and $u=x$; then
$$(x-\nabla F(x)-f_{W}(x-\nabla F(x)))^{T}(f_{W}(x-\nabla F(x))-x)\ge 0, \tag{5.13}$$
that is,
$$(\nabla F(x))^{T}(f_{W}(x-\nabla F(x))-x)\le -\|f_{W}(x-\nabla F(x))-x\|^{2}. \tag{5.14}$$
Take $F(x)$ as an energy function; differentiating it along the solution $x(t)$ of (3.1) gives
$$\frac{dF(x(t))}{dt}=(\nabla F(x))^{T}\frac{dx}{dt}=(\nabla F(x))^{T}(f_{W}(x-\nabla F(x))-x). \tag{5.15}$$
According to (5.14), it follows that
$$\frac{dF(x(t))}{dt}\le -\|f_{W}(x-\nabla F(x))-x\|^{2}\le 0. \tag{5.16}$$
This means the energy $F(x)$ is decreasing along any trajectory of (3.1). By Lemma 5.1, the solution $x(t)$ is bounded. So $F(x)$ is a Liapunov function for system (3.1). Therefore, by LaSalle's invariance principle [25], all trajectories of (3.1) starting in $W$ converge to the largest invariant subset $\Sigma$ of the set
$$E=\left\{x:\ \frac{dF}{dt}=0\right\}. \tag{5.17}$$
However, it is guaranteed by (5.16) that $dF/dt=0$ only if $f_{W}(x-\nabla F(x))-x=0$, which means that $x$ must be an equilibrium of (3.1), that is, $x\in\Omega^{e}$. Thus, $\Omega^{e}$ is the convergent set for all trajectories of the neural network (3.1) starting in $W$. Noting that Theorem 4.1 tells us that $\Omega^{e}=\Omega$, Theorem 5.6 is proved.
Up to now, we have demonstrated that the proposed neural network (3.1) is a promising model, both in the sense of implementable construction and in the sense of theoretical convergence, for solving nonlinear fractional programming problems and linear fractional programming problems with bound constraints. Certainly, it is also important to test the network's effectiveness in practice by numerical experiments. In the next section, we focus our attention on illustrative examples to reach this goal.

6. Typical Application Problems

This section presents some typical problems from various branches of human activity, especially economics and engineering, that can be formulated as fractional programs. We choose three problems, from information theory, optical processing of information, and macroeconomic planning, to illustrate the various applications of fractional programming.

6.1. Information Theory

To calculate the maximum transmission rate of an information channel, Meister and Oettli [26] and Aggarwal and Sharma [27] employed the fractional programming approach described briefly as follows.

Consider a constant and discrete transmission channel with $m$ input symbols and $n$ output symbols, characterized by a transition matrix $P=(p_{ij})$, $p_{ij}\ge 0$, $\sum_{i}p_{ij}=1$, where $p_{ij}$ represents the probability of getting symbol $i$ at the output given that the input symbol was $j$. The probability distribution of the inputs is denoted by $x=(x_{j})$, and obviously $x_{j}\ge 0$, $\sum_{j}x_{j}=1$.

Define the transmission rate of the channel as
$$T(x)=\frac{\sum_{i}\sum_{j}x_{j}p_{ij}\log\left(p_{ij}/\sum_{k}x_{k}p_{ik}\right)}{\sum_{j}t_{j}x_{j}}. \tag{6.1}$$
The relative capacity of the channel is defined as the maximum of $T(x)$, and we get the following fractional programming problem:
$$G=\max_{x}T(x),\quad \text{s.t. } x_{j}\ge 0,\ \sum_{j}x_{j}=1. \tag{6.2}$$
With the notations
$$c_{j}=\sum_{i}p_{ij}\log p_{ij},\quad c=(c_{j}),\quad y_{i}=\sum_{j}p_{ij}x_{j},\quad y=(y_{i}),\quad z=(x_{j},y_{i}), \tag{6.3}$$
problem (6.2) becomes
$$\max T(z)=\frac{c^{T}x-y^{T}\log y}{t^{T}x},\quad \text{s.t. } x_{j}\ge 0,\ \sum_{j}x_{j}=1,\ y_{i}=\sum_{j}p_{ij}x_{j}. \tag{6.4}$$
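As a small numerical check of (6.1), the following Python sketch evaluates $T(x)$ for a hypothetical binary symmetric channel with crossover probability 0.1 and unit symbol costs $t_{j}=1$ (all specific values here are illustrative, not from the paper); at the uniform input distribution the rate equals $1-H(0.1)\approx 0.531$ bits per unit cost:

```python
import numpy as np

def transmission_rate(x, P, t):
    """Transmission rate T(x) of (6.1): the mutual information between
    channel input and output divided by the mean symbol cost t^T x.
    P[i, j] is the probability of output symbol i given input symbol j."""
    x = np.asarray(x, dtype=float)
    y = P @ x  # output distribution y_i = sum_j p_ij x_j
    rate = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            if P[i, j] > 0.0 and x[j] > 0.0:
                rate += x[j] * P[i, j] * np.log2(P[i, j] / y[i])
    return rate / float(t @ x)

# Hypothetical binary symmetric channel, crossover probability 0.1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
x = np.array([0.5, 0.5])   # uniform input distribution
t = np.array([1.0, 1.0])   # unit symbol costs
print(transmission_rate(x, P, t))  # about 0.531
```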

6.2. Optical Processing of Information

Fractional programming can also be applied to some physics problems. In spectral filters for quadratic-law detection of infrared radiation, the problem of maximizing the signal-to-noise ratio appears. This means maximizing the filter function
$$\phi(x)=\frac{(a^{T}x)^{2}}{x^{T}Bx+\beta} \tag{6.5}$$
over the domain $S=\{x\in R^{n}: 0\le x_{i}\le 1,\ i=1,\dots,n\}$, in which $a$ is a strictly positive vector, $\beta$ is a positive constant, and $B$ is a symmetric positive definite matrix; $a^{T}x$ represents the input signal and $x^{T}Bx+\beta$ the variance of the background signal. The domain of feasible solutions $S$ reflects the fact that the filter cannot transmit more than 100% or less than 0% of the total energy. Optical filtering problems are very important in today's information technology, especially in coherent-light applications, and optically based computers have already been built.
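For concreteness, here is a short Python sketch of the filter function (6.5) on a hypothetical two-dimensional instance; the particular $a$, $B$, $\beta$, and $x$ below are made-up illustrative values:

```python
import numpy as np

def filter_snr(x, a, B, beta):
    """Signal-to-noise ratio phi(x) = (a^T x)^2 / (x^T B x + beta)
    from (6.5): squared input signal over background-signal variance."""
    x = np.asarray(x, dtype=float)
    signal = float(a @ x)            # input signal a^T x
    noise = float(x @ B @ x) + beta  # background variance
    return signal * signal / noise

# Hypothetical instance: a > 0, B symmetric positive definite, beta > 0.
a = np.array([1.0, 2.0])
B = np.eye(2)
beta = 1.0
print(filter_snr([1.0, 1.0], a, B, beta))  # (1 + 2)^2 / (2 + 1) = 3.0
```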

6.3. Macroeconomic Planning

One of the most significant applications of fractional programming is dynamic modeling of macroeconomic planning using the input-output method. Let $Y(t)$ be the national income created in year $t$. Obviously, $Y(t)=\sum_{i}Y_{i}(t)$. If we denote by $C_{ik}(t)$ the consumption, in branch $k$, of goods of type $i$ (created in branch $i$), and by $I_{ik}$ the part of the national income created in branch $i$ and allocated to investment in branch $k$, then the following repartition equation applies to the national income created in branch $i$:
$$Y_{i}(t)=\sum_{k}\left[C_{ik}(t)+I_{ik}(t)\right]. \tag{6.6}$$
The increase of the national income in branch $k$ is a function of the investment made in this branch:
$$\Delta Y_{k}(t)=Y_{k}(t+1)-Y_{k}(t)=b_{k}I_{k}+b_{k0}, \tag{6.7}$$
where $I_{k}=\sum_{i}I_{ik}$.

Under these conditions, macroeconomic planning leads to maximizing the increase rate of the national income:
$$\frac{\Delta Y}{Y}=\frac{\sum_{k}\left(b_{k}I_{k}(t)+b_{k0}\right)}{\sum_{i}\sum_{k}\left(C_{ik}(t)+I_{ik}(t)\right)} \tag{6.8}$$
subject to the constraints $C_{k}(t)\ge\max(\underline{C}_{k},C_{k}(0))$ and $I_{k}(0)\le I_{k}(t)\le I_{k}^{\max}$, where $C_{k}(t)=\sum_{i}C_{ik}(t)$, $\underline{C}_{k}$ represents the minimum consumption attributed to branch $k$, and $I_{k}^{\max}$ is the maximum level of investment for branch $k$.

7. Illustrative Examples

We give some computational examples as simulation experiments to show the proposed network's good performance.

Example 7.1. Consider the following linear fractional programming problem:
$$\min F(x)=\frac{x_{1}+x_{2}+1}{2x_{1}-x_{2}+3},\quad \text{s.t. } 0\le x_{1}\le 2,\ 0\le x_{2}\le 2. \tag{7.1}$$
This problem has the exact solution $x^{*}=[0,0]^{T}$ with optimal value $F(x^{*})=1/3$. The gradient of $F(x)$ can be expressed as
$$\nabla F(x)=\left(\frac{-3x_{2}+1}{(2x_{1}-x_{2}+3)^{2}},\ \frac{3x_{1}+4}{(2x_{1}-x_{2}+3)^{2}}\right)^{T}. \tag{7.2}$$
Defining
$$z_{i}=x_{i}-\frac{\partial F}{\partial x_{i}},\quad i=1,2, \tag{7.3}$$
and paying attention to (7.2), we get
$$z_{1}=\frac{4x_{1}^{3}-4x_{1}^{2}x_{2}+x_{1}x_{2}^{2}-6x_{1}x_{2}+12x_{1}^{2}+9x_{1}+3x_{2}-1}{(2x_{1}-x_{2}+3)^{2}},\qquad
z_{2}=\frac{x_{2}^{3}-4x_{2}^{2}x_{1}+4x_{2}x_{1}^{2}+12x_{2}x_{1}-6x_{2}^{2}+9x_{2}-3x_{1}-4}{(2x_{1}-x_{2}+3)^{2}}. \tag{7.4}$$
The dynamical system is given by
$$\frac{dx_{i}}{dt}=\begin{cases}-x_{i}, & z_{i}<0,\\ -x_{i}+z_{i}, & 0\le z_{i}\le 2,\\ -x_{i}+2, & z_{i}>2,\end{cases}\qquad i=1,2. \tag{7.5}$$
Equations (7.5) together constitute the proposed neural network model (3.1) for this problem. The simulation, conducted in MATLAB 7.0 with the ODE23 solver, produced the results shown in Figure 3, which depicts the transient behaviors of the neural trajectories $x_{1},x_{2}$ starting at $x^{0}=[0.4,1]^{T}$, a point in the feasible region $W$. It can be seen clearly from the figure that the proposed neural network converges to the exact solution very quickly.

Figure 3: Transient behaviors of neural trajectories 𝑥1,𝑥2 from the inside of 𝑊.
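Instead of MATLAB's ODE23, the trajectory of (7.5) can also be reproduced with a plain forward-Euler loop; the following Python sketch (step size and horizon are illustrative choices) starts at $x^{0}=[0.4,1]^{T}$ and settles at the exact solution $x^{*}=[0,0]^{T}$:

```python
import numpy as np

def grad_F(x):
    """Gradient (7.2) of F(x) = (x1 + x2 + 1) / (2*x1 - x2 + 3)."""
    d = 2.0 * x[0] - x[1] + 3.0
    return np.array([-3.0 * x[1] + 1.0, 3.0 * x[0] + 4.0]) / d**2

def simulate(x0, dt=0.01, steps=20000):
    """Forward-Euler integration of the RNN (7.5), i.e.
    dx/dt = -x + clip(x - grad F(x), 0, 2) on W = [0, 2]^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + np.clip(x - grad_F(x), 0.0, 2.0))
    return x

x = simulate([0.4, 1.0])
print(x)  # close to [0, 0], where F attains its minimum 1/3
```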

Also, according to part (b) of Theorem 5.2, the solution may be searched for from outside the feasible region. Figure 4 shows how the solution of this problem is located by the proposed neural trajectories from the initial point $x^{0}=[0.5,3]^{T}$, which is not in $W$.

Figure 4: Transient behaviors of neural trajectories 𝑥1,𝑥2 from the outside of 𝑊.

Example 7.2. Consider the following nonlinear fractional programming problem:
$$\min F(x)=\frac{x_{1}}{15-x_{1}^{2}-x_{2}^{2}},\quad \text{s.t. } 1\le x_{1}\le 2,\ 1\le x_{2}\le 2. \tag{7.6}$$
This problem has the exact solution $x^{*}=[1,1]^{T}$ with optimal value $F(x^{*})=1/13$. The gradient of $F(x)$ can be expressed as
$$\nabla F(x)=\left(\frac{15+x_{1}^{2}-x_{2}^{2}}{(15-x_{1}^{2}-x_{2}^{2})^{2}},\ \frac{2x_{1}x_{2}}{(15-x_{1}^{2}-x_{2}^{2})^{2}}\right)^{T}. \tag{7.7}$$
Defining
$$z_{i}=x_{i}-\frac{\partial F}{\partial x_{i}},\quad i=1,2, \tag{7.8}$$
and paying attention to (7.7), we get
$$z_{1}=\frac{x_{1}^{5}+x_{1}x_{2}^{4}-30x_{1}^{3}-30x_{1}x_{2}^{2}+2x_{1}^{3}x_{2}^{2}-x_{1}^{2}+x_{2}^{2}+225x_{1}-15}{(15-x_{1}^{2}-x_{2}^{2})^{2}},\qquad
z_{2}=\frac{x_{1}^{4}x_{2}+x_{2}^{5}-30x_{1}^{2}x_{2}-30x_{2}^{3}+2x_{1}^{2}x_{2}^{3}+225x_{2}-2x_{1}x_{2}}{(15-x_{1}^{2}-x_{2}^{2})^{2}}. \tag{7.9}$$
The dynamical system is given by
$$\frac{dx_{i}}{dt}=\begin{cases}-x_{i}+1, & z_{i}<1,\\ -x_{i}+z_{i}, & 1\le z_{i}\le 2,\\ -x_{i}+2, & z_{i}>2,\end{cases}\qquad i=1,2. \tag{7.10}$$
Similarly, the simulation was conducted in MATLAB 7.0 with the ODE23 solver; the transient behaviors of the neural trajectories $x_{1},x_{2}$ starting at $x^{0}=(2,3)$ are depicted in Figure 5, which shows the rapid convergence of the proposed neural network.

Figure 5: Transient behaviors of neural trajectories 𝑥1,𝑥2 from the inside of 𝑊.
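The same forward-Euler sketch (again with illustrative step size and horizon, and with the initial state mapped into $W$ by $f_{W}$ as Section 3 requires) locates the exact solution $x^{*}=[1,1]^{T}$ of (7.6):

```python
import numpy as np

def grad_F(x):
    """Gradient (7.7) of F(x) = x1 / (15 - x1**2 - x2**2)."""
    d = 15.0 - x[0]**2 - x[1]**2
    return np.array([15.0 + x[0]**2 - x[1]**2,
                     2.0 * x[0] * x[1]]) / d**2

def simulate(x0, dt=0.01, steps=20000):
    """Forward-Euler integration of the RNN (7.10), i.e.
    dx/dt = -x + clip(x - grad F(x), 1, 2) on W = [1, 2]^2."""
    x = np.clip(np.asarray(x0, dtype=float), 1.0, 2.0)  # x(0) = f_W(x0)
    for _ in range(steps):
        x = x + dt * (-x + np.clip(x - grad_F(x), 1.0, 2.0))
    return x

x = simulate([2.0, 3.0])
print(x)  # close to [1, 1], where F attains its minimum 1/13
```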

Figure 6 presents trajectories from outside $W$, here starting at $(4,3)$; it can be seen clearly that the solution of this problem is found quickly by the proposed neural trajectories.

Figure 6: Transient behaviors of neural trajectories 𝑥1,𝑥2 from the outside of 𝑊.

8. Conclusions

In this paper, we have proposed a neural network model for solving nonlinear fractional programming problems with interval constraints. The network is governed by a system of differential equations with a projection method.

The stability of the proposed neural network has been demonstrated: it is globally convergent with respect to the problem's feasible set. As is known, the existing neural network models based on the penalty function method for solving nonlinear programming problems may fail to find the exact solutions of the problems. The new model overcomes this defect of the penalty-function-based models.

Certainly, the network presented here performs well in the sense of real-time computation, which also makes it superior to classical algorithms in terms of elapsed time. Finally, the numerical simulation results demonstrate further that the new model acts both effectively and reliably in locating the solutions of the problems involved.

Acknowledgment

The research is supported by Grant 2011108102056 from Science and Technology Bureau of Dong Guan, Guang Dong, China.

References

  1. A. Charnes, W. W. Cooper, and E. Rhodes, “Measuring the efficiency of decision making units,” European Journal of Operational Research, vol. 2, no. 6, pp. 429–444, 1978.
  2. V. N. Patkar, “Fractional programming models for sharing of urban development responsibilities,” Nagarlok, vol. 22, no. 1, pp. 88–94, 1990.
  3. K. M. Mjelde, “Fractional resource allocation with S-shaped return functions,” The Journal of the Operational Research Society, vol. 34, no. 7, pp. 627–632, 1983.
  4. I. M. Stancu-Minasian, Fractional Programming: Theory, Methods and Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
  5. A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, John Wiley & Sons, New York, NY, USA, 1993.
  6. J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
  7. J. J. Hopfield and D. W. Tank, “‘Neural’ computation of decisions in optimization problems,” Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
  8. J. Wang, “A deterministic annealing neural network for convex programming,” Neural Networks, vol. 7, no. 4, pp. 629–641, 1994.
  9. J. Wang and V. Chankong, “Recurrent neural networks for linear programming: analysis and design principles,” Computers and Operations Research, vol. 19, no. 3-4, pp. 297–311, 1992.
  10. J. Wang, “Analysis and design of a recurrent neural network for linear programming,” IEEE Transactions on Circuits and Systems I, vol. 40, no. 9, pp. 613–618, 1993.
  11. M. P. Kennedy and L. O. Chua, “Neural networks for nonlinear programming,” IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.
  12. Y. S. Xia and J. Wang, “A general methodology for designing globally convergent optimization neural networks,” IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1331–1343, 1998.
  13. A. Bouzerdoum and T. R. Pattison, “Neural network for quadratic optimization with bound constraints,” IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293–304, 1993.
  14. X. B. Liang and J. Wang, “A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints,” IEEE Transactions on Neural Networks, vol. 11, no. 6, pp. 1251–1262, 2000.
  15. Y. S. Xia, H. Leung, and J. Wang, “A projection neural network and its application to constrained optimization problems,” IEEE Transactions on Circuits and Systems I, vol. 49, no. 4, pp. 447–458, 2002.
  16. A. Charnes and W. W. Cooper, “An explicit general solution in linear fractional programming,” Naval Research Logistics Quarterly, vol. 20, pp. 449–467, 1973.
  17. A. Charnes, D. Granot, and F. Granot, “A note on explicit solution in linear fractional programming,” Naval Research Logistics Quarterly, vol. 23, no. 1, pp. 161–167, 1976.
  18. A. Charnes, D. Granot, and F. Granot, “An algorithm for solving general fractional interval programming problems,” Naval Research Logistics Quarterly, vol. 23, no. 1, pp. 67–84, 1976.
  19. W. Bühler, “A note on fractional interval programming,” Zeitschrift für Operations Research, vol. 19, no. 1, pp. 29–36, 1975.
  20. M. S. Bazaraa and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, New York, NY, USA, 1979.
  21. Z. B. Xu, G. Q. Hu, and C. P. Kwong, “Asymmetric Hopfield-type networks: theory and applications,” Neural Networks, vol. 9, no. 3, pp. 483–501, 1996.
  22. D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, NY, USA, 1980.
  23. B. C. Eaves, “On the basic theorem of complementarity,” Mathematical Programming, vol. 1, no. 1, pp. 68–75, 1971.
  24. J. K. Hale, Ordinary Differential Equations, John Wiley & Sons, New York, NY, USA, 1993.
  25. J. P. LaSalle, “Stability theory for ordinary differential equations,” Journal of Differential Equations, vol. 4, pp. 57–65, 1968.
  26. B. Meister and W. Oettli, “On the capacity of a discrete, constant channel,” Information and Control, vol. 11, no. 3, pp. 341–351, 1967.
  27. S. P. Aggarwal and I. C. Sharma, “Maximization of the transmission rate of a discrete, constant channel,” Mathematical Methods of Operations Research, vol. 14, no. 1, pp. 152–155, 1970.