
Mathematical Problems in Engineering

Volume 2012 (2012), Article ID 807656, 18 pages

http://dx.doi.org/10.1155/2012/807656

## A Recurrent Neural Network for Nonlinear Fractional Programming

Quan-Ju Zhang^{1} and Xiao Qing Lu^{2}

^{1}Management Department, City College of Dongguan University of Technology, Dongguan 523106, China

^{2}Financial Department, City College of Dongguan University of Technology, Dongguan 523106, China

Received 13 April 2012; Accepted 7 July 2012

Academic Editor: Hai L. Liu

Copyright © 2012 Quan-Ju Zhang and Xiao Qing Lu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents a novel recurrent continuous-time neural network model that performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized under interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and converges to an exact optimal solution for any initial point chosen in the feasible interval region. Simulation results are given to further demonstrate the global convergence and good performance of the proposed neural network on nonlinear fractional programming problems with interval constraints.

#### 1. Introduction

Compared with the well-known applications of nonlinear programming to various branches of human activity, especially to economics, the applications of fractional programming have remained less known until now. Of course, the linearity of a problem makes it easier to tackle and hence contributes to its wide recognition. However, not all real-life economic problems can be described by linear models, and hence they are not amenable to linear programming. Fractional programming is a nonlinear programming method that has received increasing attention recently, and its importance in solving concrete problems is steadily growing. Moreover, it is known that nonlinear optimization models describe practical problems much better than linear optimization models do.

Fractional programming problems are particularly useful in the solution of economic problems in which various activities use certain resources in various proportions, while the objective is to optimize a certain indicator, usually the most favorable return-on-allocation ratio, subject to constraints imposed on the availability of goods. Detailed descriptions of these models can be found in Charnes et al. [1], Patkar [2], and Mjelde [3]. Besides the economic applications, fractional programming problems also appear in other domains, such as physics, information theory, and game theory. Nonlinear fractional programming problems are, of course, the dominant ones because of their much wider range of applications; see Stancu-Minasian [4] for details.

As is known, conventional algorithms are time consuming when solving optimization problems with a large number of variables, so parallel and distributed algorithms are more competent in that setting. Recurrent neural networks (RNNs) governed by a system of differential equations can be implemented physically by designated hardware with integrated circuits, and an optimization process with different specific purposes can be conducted in a truly parallel way. An overview and paradigm descriptions of various neural network models for tackling a great variety of optimization problems can be found in the book by Cichocki and Unbehauen [5]. Unlike most numerical algorithms, the neural network approach can handle the optimization process in real time on line, as described in Hopfield's seminal work [6, 7], and hence is a top choice.

Neural network models for optimization problems have been investigated intensively since the pioneering work of Wang et al.; see [8–14]. Wang et al. proposed several neural network models for solving convex programming problems [8] and linear programming problems [9, 10], which were proved to be globally convergent to the problems' exact solutions. Kennedy and Chua [11] developed a neural network model for solving nonlinear programming problems in which a penalty parameter needs to be tuned during the optimization process, so only approximate solutions are generated. Xia and Wang [12] gave a general neural network design methodology that brings many gradient-based network models for solving convex programming problems together under one framework with globally convergent stability. Neural networks for quadratic optimization and nonlinear optimization with interval constraints were developed by Bouzerdoum and Pattison [13] and Liang and Wang [14], respectively.

All these neural networks can be classified into the following three types: (1) the gradient-based models [8–10] and their extension [12]; (2) the penalty-function-based model [11]; (3) the projection-based models [13, 14]. Among them, the first was proved to have global convergence [12] and the third quasi-convergence [8–10], in both cases only when the optimization problems are convex programming problems. The second could only be demonstrated to have local convergence [11] and, more unfortunately, it may fail to find exact solutions; see [15] for a numerical example. Because of this, the penalty-function-based model has few applications in practice. As is known, nonlinear fractional programming does not belong to the class of convex optimization problems [4], and constructing a neural network model with good performance for this kind of problem has since remained a challenge. Motivated by this, a promising recurrent continuous-time neural network model is proposed in the present paper. The proposed RNN model has the following two important features. (1) The model is complete in the sense that the set of optima of the nonlinear fractional programming problem with interval constraints coincides with the set of equilibria of the proposed RNN model. (2) The RNN model is invariant with respect to the problem's feasible set and has the global convergence property in the sense that all trajectories of the proposed network converge to the exact solution set for any initial point starting in the feasible interval region. These two properties demonstrate that the proposed network model is well suited for solving nonlinear fractional programming problems with interval constraints.

The remainder of the paper is organized as follows. Section 2 formulates the optimization problem and Section 3 describes the construction of the proposed RNN model. The complete property and the global convergence of the proposed model are discussed in Sections 4 and 5, respectively. Section 6 gives some typical application areas of fractional programming. Illustrative examples with computational results are reported in Section 7 to further demonstrate the good performance of the proposed RNN model in solving interval-constrained nonlinear fractional programming problems. Finally, Section 8 concludes with a summary of the main results of the paper.

#### 2. Problem Formulation

The study of nonlinear fractional programming with interval constraints is motivated by the study of the following linear fractional interval programming problem (2.1), in which a ratio of two affine functions is minimized subject to interval constraints on the decision vector. In this formulation, the coefficient vectors of the numerator and denominator are n-dimensional column vectors, the corresponding constant terms are scalars, the superscript T denotes the transpose operator, the decision variable is a vector, and the interval bounds are constant vectors, the lower bound being componentwise no greater than the upper bound.

It is assumed that the denominator of the objective function maintains a constant sign, say positive, on an open set containing the interval constraint region, and that the objective does not reduce to a linear function, that is, it is not constant on the feasible set and the numerator and denominator coefficient vectors are linearly independent. A feasible point is called an optimal solution to problem (2.1) if its objective value does not exceed that of any other feasible point, and the set of all such optimal solutions forms the solution set of problem (2.1).
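For concreteness, writing the numerator coefficients as $c$ and $c_{0}$, the denominator coefficients as $d$ and $d_{0}$, and the interval bounds as $a$ and $b$ (symbols introduced here for illustration), problem (2.1) takes the standard linear fractional interval form

$$
\min_{x}\; f(x)=\frac{c^{T}x+c_{0}}{d^{T}x+d_{0}}
\quad\text{subject to}\quad a\le x\le b,
$$

with $d^{T}x+d_{0}>0$ on an open set containing $\{x: a\le x\le b\}$, and its solution set is $\bigl\{x^{*}: a\le x^{*}\le b,\ f(x^{*})\le f(x)\ \text{for all } a\le x\le b\bigr\}$.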

Studies on linear fractional interval programming, in which the constraints of programming (2.1) are replaced by more general interval constraints, commenced in a series of papers by Charnes et al. [16–18]. Charnes and Cooper [16] employed the change-of-variable method and developed a solution algorithm for this programming problem by the duality theory of linear programming. A little later, in [17, 18], Charnes et al. gave a different method which transformed the fractional interval problem into an equivalent problem like (2.1) by using a generalized inverse, and explicit solutions followed. Also, Bühler [19] transformed the problem into another equivalent one of the same format, to which he associated a linear parametric program used to obtain a solution of the original interval programming problem.

Accordingly, general interval constraints can always be transformed into simple bound constraints by the change-of-variable method without changing the programming format; see [13] for quadratic programming and [17] for linear fractional interval programming. So, it suffices to pay attention to problem (2.1) only. Since the existing studies on problem (2.1), see [16–19], focus on classical methods that are computationally time consuming, the neural network method is a natural choice to meet real-time computation requirements. To reach this goal, the present paper constructs an RNN model that is applicable both to nonlinear fractional interval programming and to the linear fractional interval programming problem (2.1).

Consider the following more general nonlinear fractional programming problem (2.3), in which the objective is a ratio of two continuously differentiable functions defined on an open convex set containing the problem's feasible set, with the decision vector and interval bounds the same as in problem (2.1). Similarly, we suppose that the objective function's denominator always keeps a constant sign, say positive, on this set. As most fractional programming problems arising in the real world possess a kind of generalized convexity property, we suppose the objective function to be pseudoconvex over the feasible set. There are several sufficient conditions for such a ratio to be pseudoconvex; two of them, see [20], are (1) the numerator is convex and nonnegative while the denominator is concave and positive; (2) the numerator is convex and nonpositive while the denominator is convex and positive. It is easy to see that problem (2.1) is a special case of problem (2.3).
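With $g$ and $h$ denoting the numerator and denominator of the objective (symbols introduced here for illustration) and $a$, $b$ the interval bounds as before, problem (2.3) can be written as

$$
\min_{x}\; f(x)=\frac{g(x)}{h(x)}
\quad\text{subject to}\quad a\le x\le b,
$$

where $g$ and $h$ are continuously differentiable on an open convex set containing $X=\{x: a\le x\le b\}$, $h(x)>0$ on that set, and $f$ is pseudoconvex on $X$.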

We now state the neural network model that can be employed to solve problem (2.3), and hence problem (2.1) as well. Details are described in the coming section.

#### 3. The Neural Network Model

Consider the following single-layered recurrent neural network whose state variable $x$ is described by the differential equation

$$\frac{dx}{dt}=P_{X}\bigl(x-\nabla f(x)\bigr)-x, \tag{3.1}$$

where $\nabla$ is the gradient operator and $P_{X}$ is the projection operator onto the feasible set, defined by $P_{X}(u)=\arg\min_{v\in X}\|u-v\|$. For the interval-constrained feasible set $X=\{x: a\le x\le b\}$, the operator $P_{X}$ can be expressed explicitly componentwise: the $i$th component of $P_{X}(u)$ equals $a_{i}$ if $u_{i}<a_{i}$, equals $u_{i}$ if $a_{i}\le u_{i}\le b_{i}$, and equals $b_{i}$ if $u_{i}>b_{i}$. The activation function of one node of the neural network model (3.1) is thus the typical piecewise-linear function illustrated in Figure 1.

To describe the proposed neural network model clearly, the compact matrix form (3.1) can be reformulated in component form, one equation per state variable, giving system (3.4). When the RNN model (3.1) is employed to solve optimization problem (2.3), the initial state is required to be mapped into the feasible interval region $X$; that is, the initial point of the neural trajectory should be chosen as the projection of any given starting point onto $X$, componentwise clipped to the interval bounds. The block functional diagram of the RNN model (3.4) is depicted in Figure 2.

Accordingly, the architecture of the proposed neural network model (3.4) is composed of integrators, processors for the gradient components, piecewise-linear activation functions, and summers.
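As a minimal computational sketch of these dynamics (the simulations reported in Section 7 were run in MATLAB; the Python code below, including the placeholder gradient grad_f, bounds a and b, and time horizon, is illustrative rather than the paper's implementation), model (3.1) can be integrated as follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

def project_box(u, a, b):
    """Piecewise-linear activation: componentwise projection of u onto [a, b]."""
    return np.clip(u, a, b)

def rnn_rhs(t, x, grad_f, a, b):
    """Right-hand side of the projection RNN (3.1): dx/dt = P_X(x - grad f(x)) - x."""
    return project_box(x - grad_f(x), a, b) - x

def simulate(grad_f, a, b, x0, t_final=20.0):
    """Integrate the RNN from an initial point mapped into the feasible box X."""
    x0 = project_box(np.asarray(x0, dtype=float), a, b)  # map initial state into X
    sol = solve_ivp(rnn_rhs, (0.0, t_final), x0, args=(grad_f, a, b),
                    method="RK23", rtol=1e-8, atol=1e-10)
    return sol.t, sol.y
```

Here the RK23 integrator plays a role analogous to the ODE23 solver mentioned in Section 7.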

Let the equilibrium set of the RNN model (3.1) be the set of states at which the right-hand side of (3.1) vanishes, that is, the states satisfying $P_{X}\bigl(x-\nabla f(x)\bigr)=x$. The relationship between the minimizer set of problem (2.3) and this equilibrium set is explored in the following section, where it is shown that the two sets coincide exactly; this is the most desirable situation in neural network model design.

#### 4. Complete Property

As proposed for binary-valued neural network models in [21], a neural network is said to be *regular* or *normal* if the set of minimizers of an energy function is, respectively, a subset or a superset of the set of stable states of the neural network. If the two sets are the same, the neural network is said to be *complete*. The *regular* and *normal* properties imply, respectively, the neural network's reliability and effectiveness for the optimization process. The *complete* property means both reliability and effectiveness, and it is the preferred situation in neural network design. Here, for the continuous-time RNN model (3.1), we say the model is *regular*, *normal*, or *complete* according to whether the minimizer set of problem (2.3) is a subset of, a superset of, or equal to the equilibrium set of (3.1), respectively.
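Writing $\Gamma$ for the set of minimizers of problem (2.3) and $E$ for the set of equilibria of model (3.1) (symbols introduced here for brevity), the three cases read

$$
\text{regular: } \Gamma\subseteq E,\qquad
\text{normal: } \Gamma\supseteq E,\qquad
\text{complete: } \Gamma=E.
$$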

The complete property of the neural network (3.1) is stated in the following theorem.

Theorem 4.1. *The RNN model (3.1) is complete, that is, $\Gamma=E$.*

In order to prove Theorem 4.1, one needs the following lemmas.

Lemma 4.2. *Suppose that $x^{*}$ is a solution to problem (2.3), that is, $x^{*}\in X$ and $f(x^{*})\le f(x)$ for all $x\in X$; then $x^{*}$ is a solution to the variational inequality: $\nabla f(x^{*})^{T}(x-x^{*})\ge 0$ for all $x\in X$.*

*Proof.* See [22, Proposition 5.1].

Lemma 4.3. *The function $f$ defined in (2.3) is both pseudoconvex and pseudoconcave over $X$.*

*Proof.* See the corresponding lemma in [20].

Lemma 4.4. *Let $f$ be a differentiable pseudoconvex function on an open set containing the nonempty convex set $X$. Then $x^{*}\in X$ is an optimal solution to the problem of minimizing $f(x)$ subject to $x\in X$ if and only if $\nabla f(x^{*})^{T}(x-x^{*})\ge 0$ for all $x\in X$.*

*Proof.* See the corresponding theorem, part (b), in [4].

Now, we turn to the proof of Theorem 4.1. Let $x^{*}\in\Gamma$; then $f(x^{*})\le f(x)$ for any $x\in X$, and hence Lemma 4.2 means that $x^{*}$ solves the variational inequality (4.2), that is, $\nabla f(x^{*})^{T}(x-x^{*})\ge 0$ for all $x\in X$, which is equivalent, see [23], to $P_{X}\bigl(x^{*}-\nabla f(x^{*})\bigr)=x^{*}$; so $x^{*}\in E$. Thus, $\Gamma\subseteq E$.

Conversely, let $x^{*}\in E$, that is, $P_{X}\bigl(x^{*}-\nabla f(x^{*})\bigr)=x^{*}$, which, also see [23], means $\nabla f(x^{*})^{T}(x-x^{*})\ge 0$ for all $x\in X$. Since the function $f$ is pseudoconvex over $X$, see Lemma 4.3, it can be obtained by Lemma 4.4 that $x^{*}$ minimizes $f$ over $X$; so $x^{*}\in\Gamma$. Thus, $E\subseteq\Gamma$. Therefore, the two inclusions $\Gamma\subseteq E$ and $E\subseteq\Gamma$ together establish the result of Theorem 4.1, $\Gamma=E$.
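In summary, the proof rests on the chain of equivalences

$$
x^{*}\in\Gamma
\;\Longleftrightarrow\;
\nabla f(x^{*})^{T}(x-x^{*})\ge 0\ \ \text{for all } x\in X
\;\Longleftrightarrow\;
P_{X}\bigl(x^{*}-\nabla f(x^{*})\bigr)=x^{*}
\;\Longleftrightarrow\;
x^{*}\in E,
$$

where the first equivalence uses the pseudoconvexity of $f$ (Lemmas 4.2 and 4.4) and the second is the standard projection characterization of variational inequalities [23].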

#### 5. Stability Analysis

First, it can be shown that the RNN model (3.1) has a solution trajectory which is global in the sense that its existence interval can be extended to $+\infty$ on the right for any initial point.

The continuity of the right-hand side of (3.1) means, by Peano's local existence theorem, see [24], that there exists a solution $x(t)$ on $[t_{0},T)$ for any initial point $x(t_{0})=x_{0}$, where $T$ is the maximal right endpoint of the existence interval. The following lemma states that $T=+\infty$.

Lemma 5.1. *The solution of the RNN model (3.1) with any initial point is bounded, and so it can be extended to $+\infty$.*

*Proof.* It is easy to check that the solution $x(t)$ of (3.1) with initial condition $x(t_{0})=x_{0}$ is given by the variation-of-constants formula
$$x(t)=e^{-(t-t_{0})}x_{0}+e^{-t}\int_{t_{0}}^{t}e^{s}\,P_{X}\bigl(x(s)-\nabla f(x(s))\bigr)\,ds. \tag{5.1}$$
Obviously, the mapping $P_{X}$ is bounded, that is, $\|P_{X}(u)\|\le M$ for some positive number $M$, where $\|\cdot\|$ is the Euclidean 2-norm. It follows from (5.1) that
$$\|x(t)\|\le e^{-(t-t_{0})}\|x_{0}\|+M\bigl(1-e^{-(t-t_{0})}\bigr)\le\|x_{0}\|+M.$$
Thus, the solution $x(t)$ is bounded, and so, by the extension theorem for ODEs, see [24], it can be concluded that $T=+\infty$, which completes the proof of this lemma.

Now, we are going to show another vital dynamical property, namely that the set $X$ is positively invariant with respect to the RNN model (3.1). That is, any solution starting from a point in $X$, for example, $x(t_{0})=x_{0}\in X$, will stay in $X$ for all time. Additionally, we can also prove that any solution starting from outside $X$ will either enter the set in finite time and hence stay in it forever, or approach it eventually.

Theorem 5.2. *For the neural dynamical system (3.1), the following two dynamical properties hold:* (a) *$X$ is a positively invariant set of the RNN model (3.1);* (b) *if $x_{0}\notin X$, then the solution $x(t)$ either enters $X$ in finite time and hence stays in it forever, or $\operatorname{dist}\bigl(x(t),X\bigr)\to 0$ as $t\to+\infty$, where $\operatorname{dist}(x,X)=\min_{y\in X}\|x-y\|$.*

*Proof.* The method used to prove this theorem can be found in [14]; for the sake of completeness and readability, we give the whole proof here again.

Suppose that $x(t_{0})=x_{0}\in X$. We first prove that, for all $t\ge t_{0}$, the $i$th component of the solution belongs to $[a_{i},b_{i}]$, that is, $a_{i}\le x_{i}(t)\le b_{i}$ for all $t\ge t_{0}$.

Let
$$t_{1}=\sup\bigl\{t\ge t_{0}: a_{i}\le x_{i}(s)\le b_{i}\ \text{for all } s\in[t_{0},t]\bigr\}.$$
We show by contradiction that $t_{1}=+\infty$. Suppose $t_{1}<+\infty$; then $a_{i}\le x_{i}(t)\le b_{i}$ for $t\in[t_{0},t_{1}]$ and $x_{i}(t)\notin[a_{i},b_{i}]$ for $t\in(t_{1},t_{1}+\delta)$, where $\delta$ is a positive number. With no loss of generality, we assume that
$$x_{i}(t)<a_{i},\quad t\in(t_{1},t_{1}+\delta). \tag{5.4}$$
The proof for $x_{i}(t)>b_{i}$ is similar. By the definition of $P_{X}$, the RNN model (3.4), and the assumption (5.4), it follows that
$$\frac{dx_{i}(t)}{dt}=\bigl[P_{X}\bigl(x(t)-\nabla f(x(t))\bigr)\bigr]_{i}-x_{i}(t)\ge a_{i}-x_{i}(t)>0,\quad t\in(t_{1},t_{1}+\delta).$$
So, $x_{i}(t)$ is strictly increasing on $(t_{1},t_{1}+\delta)$ and hence
$$x_{i}(t)>x_{i}(t_{1}),\quad t\in(t_{1},t_{1}+\delta). \tag{5.6}$$
Noting that $a_{i}\le x_{i}(t)\le b_{i}$ for $t\in[t_{0},t_{1}]$ and that assumption (5.4) implies $x_{i}(t_{1})=a_{i}$, by (5.6) we get
$$x_{i}(t)>a_{i},\quad t\in(t_{1},t_{1}+\delta).$$
This is in contradiction with the assumption (5.4). So, $t_{1}=+\infty$, that is, $a_{i}\le x_{i}(t)\le b_{i}$ for all $t\ge t_{0}$. This means $X$ is positively invariant and hence (a) is guaranteed.

Second, for some $i$, suppose $x_{i}(t_{0})\notin[a_{i},b_{i}]$. If there is a $t'\ge t_{0}$ such that $x_{i}(t')\in[a_{i},b_{i}]$, then, according to (a), $x_{i}(t)$ will stay in $[a_{i},b_{i}]$ for all $t\ge t'$; that is, $x(t)$ will enter $X$. Conversely, suppose $x_{i}(t)\notin[a_{i},b_{i}]$ for all $t\ge t_{0}$. Without loss of generality, we assume that $x_{i}(t)<a_{i}$ for all $t\ge t_{0}$. It can be guaranteed by a contradiction that $x_{i}(t)\to a_{i}$ as $t\to+\infty$. If it is not so, note that $dx_{i}(t)/dt=\bigl[P_{X}\bigl(x(t)-\nabla f(x(t))\bigr)\bigr]_{i}-x_{i}(t)\ge a_{i}-x_{i}(t)>0$, so $x_{i}(t)$ is increasing and bounded above by $a_{i}$, and hence converges to some limit $c<a_{i}$; then $a_{i}-x_{i}(t)\ge a_{i}-c>0$ for all $t\ge t_{0}$, and it follows from (3.4) that
$$\frac{dx_{i}(t)}{dt}\ge a_{i}-x_{i}(t)\ge a_{i}-c>0,\quad t\ge t_{0}. \tag{5.8}$$
Integrating (5.8) gives us
$$x_{i}(t)\ge x_{i}(t_{0})+(a_{i}-c)(t-t_{0}),$$
which is a contradiction because $x_{i}(t)<a_{i}$ for all $t\ge t_{0}$. Thus, we obtain $x_{i}(t)\to a_{i}$ as $t\to+\infty$. This and the previous argument show that, for $x_{0}\notin X$, the solution $x(t)$ either enters $X$ in finite time and hence stays in it forever or $\operatorname{dist}\bigl(x(t),X\bigr)\to 0$ as $t\to+\infty$.
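In fact, whenever the $i$th component stays below $a_{i}$, model (3.1) forces the gap to the interval to shrink at least exponentially:

$$
\frac{d}{dt}\bigl(a_{i}-x_{i}(t)\bigr)=-\frac{dx_{i}(t)}{dt}\le-\bigl(a_{i}-x_{i}(t)\bigr)
\quad\Longrightarrow\quad
a_{i}-x_{i}(t)\le\bigl(a_{i}-x_{i}(t_{0})\bigr)e^{-(t-t_{0})},
$$

since $\bigl[P_{X}(\cdot)\bigr]_{i}\ge a_{i}$ always; a symmetric estimate holds when $x_{i}(t)>b_{i}$. This quantifies the approach described in part (b).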

We can now explore the global convergence of the neural network model (3.1). To proceed, we need an inequality result about the projection operator and the definition of convergence for a neural network.

*Definition 5.3.* Let $x(t)$ be a solution of a dynamical system. The system is said to be globally convergent to a set $S$ with respect to a set $X$ if every solution $x(t)$ starting at $x(t_{0})=x_{0}\in X$ satisfies $\operatorname{dist}\bigl(x(t),S\bigr)\to 0$ as $t\to+\infty$, where $\operatorname{dist}(x,S)=\inf_{y\in S}\|x-y\|$.

*Definition 5.4.* The neural network (3.1) is said to be globally convergent to a set $S$ with respect to a set $X$ if the corresponding dynamical system is so.

Lemma 5.5. *For all $u\in\mathbb{R}^{n}$ and all $v\in X$, $\bigl(P_{X}(u)-u\bigr)^{T}\bigl(P_{X}(u)-v\bigr)\le 0$.*

*Proof. *See [22, pp. 9-10].

Theorem 5.6. *The neural network (3.1) is globally convergent to the solution set $\Gamma$ with respect to the set $X$.*

*Proof.* From Lemma 5.5, we know that
$$\bigl(P_{X}(u)-u\bigr)^{T}\bigl(P_{X}(u)-v\bigr)\le 0,\quad u\in\mathbb{R}^{n},\ v\in X.$$
Let $u=x-\nabla f(x)$ and $v=x\in X$; then
$$\Bigl(P_{X}\bigl(x-\nabla f(x)\bigr)-x+\nabla f(x)\Bigr)^{T}\Bigl(P_{X}\bigl(x-\nabla f(x)\bigr)-x\Bigr)\le 0,$$
that is,
$$\nabla f(x)^{T}\Bigl(P_{X}\bigl(x-\nabla f(x)\bigr)-x\Bigr)\le-\Bigl\|P_{X}\bigl(x-\nabla f(x)\bigr)-x\Bigr\|^{2}. \tag{5.14}$$
Define an energy function $V(x)=f(x)$; then, differentiating this function along the solution $x(t)$ of (3.1) gives us
$$\frac{dV\bigl(x(t)\bigr)}{dt}=\nabla f\bigl(x(t)\bigr)^{T}\frac{dx(t)}{dt}=\nabla f\bigl(x(t)\bigr)^{T}\Bigl(P_{X}\bigl(x(t)-\nabla f(x(t))\bigr)-x(t)\Bigr).$$
According to (5.14), it follows that
$$\frac{dV\bigl(x(t)\bigr)}{dt}\le-\Bigl\|\frac{dx(t)}{dt}\Bigr\|^{2}\le 0. \tag{5.16}$$
This means the energy $V\bigl(x(t)\bigr)$ is decreasing along any trajectory of (3.1). By Lemma 5.1, we know the solution is bounded. So, $V$ is a Lyapunov function for system (3.1). Therefore, by LaSalle's invariant principle [25], it follows that all trajectories of (3.1) starting at $x_{0}\in X$ converge to the largest invariant subset of the set
$$\Omega=\Bigl\{x\in X: \frac{dV}{dt}=0\Bigr\}.$$
However, it can be guaranteed from (5.16) that $dV/dt=0$ only if $dx/dt=0$, which means that $x$ must be an equilibrium of (3.1), or $x\in E$. Thus, $E$ is the convergent set for all trajectories of the neural network (3.1) starting at $x_{0}\in X$. Noting that Theorem 4.1 tells us that $\Gamma=E$, Theorem 5.6 is proved to be true.

Up to now, we have demonstrated that the proposed neural network (3.1) is a promising model, both in the sense of implementable construction and in the sense of theoretical convergence, for solving nonlinear fractional programming problems and linear fractional programming problems with bound constraints. Certainly, it is also important to test the network's effectiveness in practice by numerical experiments. Illustrative examples serving this purpose are handled in Section 7, after a review of typical application areas in Section 6.

#### 6. Typical Application Problems

This section presents some typical problems from various branches of human activity, especially in economics and engineering, that can be formulated as fractional programs. We choose three problems, from information theory, optical processing of information, and macroeconomic planning, to illustrate the variety of applications of fractional programming.

##### 6.1. Information Theory

To calculate the maximum transmission rate of an information channel, Meister and Oettli [26] and Aggarwal and Sharma [27] employed the fractional programming formulation described briefly as follows.

Consider a constant and discrete transmission channel with a finite number of input symbols and output symbols, characterized by a transition matrix whose entries give the probability of obtaining a given symbol at the output conditional on the input symbol that was sent. The probability distribution of the inputs is described by a vector of input probabilities which, obviously, are nonnegative and sum to one.

The transmission rate of the channel is defined as a ratio depending on the input distribution, and the relative capacity of the channel is defined as the maximum of this rate, which gives the fractional programming problem (6.2); with appropriate notation, problem (6.2) can be written in a more compact fractional form.
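As a point of reference (the symbols below are chosen here for illustration and are not taken from the original formulation), one standard example of such a transmission-rate ratio is the information transmitted per unit transmission cost,

$$
T(x)=\frac{\displaystyle\sum_{i}\sum_{j}x_{i}\,p_{ij}\,\log\frac{p_{ij}}{\sum_{k}x_{k}\,p_{kj}}}{\displaystyle\sum_{i}a_{i}x_{i}},
\qquad x_{i}\ge 0,\ \ \sum_{i}x_{i}=1,
$$

where $x$ is the input distribution, $p_{ij}$ are the transition probabilities, and $a_{i}$ is the cost (for example, the duration) of transmitting input symbol $i$; maximizing $T$ over the probability simplex is a nonlinear fractional program.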

##### 6.2. Optical Processing of Information

Fractional programming can also be applied in some physics problems. In spectral filters for quadratic-law detection of infrared radiation, the problem of maximizing the signal-to-noise ratio appears. This means maximizing a filter function, the ratio of the input signal term to the variance of the background signal, over a box-constrained domain, in which the signal is described by a strictly positive vector and a constant, and the noise variance by a symmetric and positive definite matrix. The domain of feasible solutions reflects the fact that the filter can transmit neither more than 100% nor less than 0% of the total energy. Optical filtering problems are very important in today's information technology, especially in coherent-light applications, and optically based computers have already been built.

##### 6.3. Macroeconomic Planning

One of the most significant applications of fractional programming is the dynamic modeling of macroeconomic planning using the input-output method. Consider the national income created in a given year, which is the sum of the incomes created in the individual branches of the economy. If we denote the consumption, in one branch, of goods created in another branch, and the part of the national income created in one branch and allocated to investment in another, then a repartition equation applies to the national income created in each branch, expressing it in terms of these consumption and investment quantities. The increase of the national income in a branch is a function of the investment made in this branch.

Under these conditions, macroeconomic planning leads to maximizing the rate of increase of the national income, the ratio of the total increase to the current national income, subject to constraints that specify a minimum consumption attributed to each branch and a maximum level of investment for each branch.

#### 7. Illustrative Examples

We give some computational examples as simulation experiments to show the proposed network's good performance.

*Example 7.1.* Consider the following linear fractional programming problem with interval constraints. This problem has an exact solution with a known optimal value. The gradient of the objective function can be expressed explicitly as in (7.2). Defining the corresponding component functions and paying attention to (7.2), we obtain the dynamical systems (7.5). Various combinations of (7.5) formulate the proposed neural network model (3.1) for this problem. Conducted in MATLAB 7.0 with the ODE23 solver, the simulations were carried out, and the transient behaviors of the neural trajectories starting at an initial point lying *in* the feasible region are shown in Figure 3. It can be seen clearly from the figure that the proposed neural network converges to the exact solution very quickly.

Also, according to part (b) of Theorem 5.2, the solution may be sought from outside the feasible region. Figure 4 shows this by presenting how the solution of this problem is located by the proposed neural trajectories from an initial point which is *not in* the feasible region.
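Since the problem data of Example 7.1 are given only in the equations and figures of the original, the following is a minimal sketch, in the same style as the Python code in Section 3, of how such an experiment can be reproduced; the two-variable linear fractional objective, the bounds, and the initial points used here are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical problem data (placeholders, not the data of Example 7.1):
# minimize (c^T x + c0) / (d^T x + d0)  subject to  a <= x <= b.
c, c0 = np.array([1.0, -2.0]), 1.0
d, d0 = np.array([1.0, 1.0]), 4.0          # denominator stays positive on the box
a, b = np.array([0.0, 0.0]), np.array([2.0, 2.0])
# For these placeholder data the minimizer is the vertex x* = (0, 2).

def grad_f(x):
    """Gradient of the linear fractional objective by the quotient rule."""
    num, den = c @ x + c0, d @ x + d0
    return (c * den - d * num) / den**2

def rhs(t, x):
    """Projection RNN (3.1): dx/dt = P_X(x - grad f(x)) - x."""
    return np.clip(x - grad_f(x), a, b) - x

for x0, label in [(np.array([0.5, 1.5]), "inside X"),
                  (np.array([3.0, -1.0]), "outside X")]:
    sol = solve_ivp(rhs, (0.0, 30.0), x0, method="RK23", rtol=1e-8)
    print(label, "-> final state", sol.y[:, -1])
```

Starting the integration both inside and outside the box mirrors the experiments reported in Figures 3 and 4.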

*Example 7.2.* Consider the following nonlinear fractional programming problem with interval constraints. This problem has an exact solution with a known optimal value. The gradient of the objective function can be expressed explicitly as in (7.7). Defining the corresponding component functions and paying attention to (7.7), we obtain the dynamical systems of the proposed neural network model for this problem. Similarly, conducted in MATLAB 7.0 with the ODE23 solver, the transient behaviors of the neural trajectories starting from *inside* the feasible region are depicted in Figure 5, which shows the rapid convergence of the proposed neural network.

Figure 6 presents a trajectory starting from outside the feasible region; it can be seen clearly that the solution of this problem is located by the proposed neural trajectory quickly.
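A nonlinear instance can be run with the same kind of sketch by swapping in a nonlinear gradient; the ratio below is again placeholder data chosen for illustration, not the paper's Example 7.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical nonlinear fractional problem (placeholders, not Example 7.2):
# minimize g(x)/h(x), with g(x) = ||x - p||^2 + 1 (convex, positive)
#                     and  h(x) = 5 - x[0]        (concave, positive on the box).
a, b = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
p = np.array([0.5, -0.5])

def grad_f(x):
    g, h = np.sum((x - p)**2) + 1.0, 5.0 - x[0]
    grad_g, grad_h = 2.0 * (x - p), np.array([-1.0, 0.0])
    return (grad_g * h - grad_h * g) / h**2   # quotient rule

def rhs(t, x):
    return np.clip(x - grad_f(x), a, b) - x   # projection RNN (3.1)

sol = solve_ivp(rhs, (0.0, 30.0), np.array([-0.8, 0.9]), method="RK23", rtol=1e-8)
print("final state:", sol.y[:, -1])
```

By sufficient condition (1) quoted in Section 2 (numerator convex and nonnegative, denominator concave and positive), this ratio is pseudoconvex on the box, so the completeness and convergence results apply to it.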

#### 8. Conclusions

In this paper, we have proposed a neural network model for solving nonlinear fractional programming problems with interval constraints. The network is governed by a system of differential equations with a projection method.

The proposed neural network has been demonstrated to be globally convergent with respect to the problem's feasible set. As is known, existing neural network models based on the penalty function method for solving nonlinear programming problems may fail to find the exact solutions of the problems. The new model overcomes this defect of penalty-function-based models.

Certainly, the network presented here performs well in the sense of real-time computation and, in terms of elapsed time, is also superior to classical algorithms. Finally, numerical simulation results further demonstrate that the new model acts both effectively and reliably in locating the solutions of the problems involved.

#### Acknowledgment

This research is supported by Grant 2011108102056 from the Science and Technology Bureau of Dongguan, Guangdong, China.

#### References

- A. Charnes, W. W. Cooper, and E. Rhodes, “Measuring the efficiency of decision making units,” *European Journal of Operational Research*, vol. 2, no. 6, pp. 429–444, 1978.
- V. N. Patkar, “Fractional programming models for sharing of urban development responsibilities,” *Nagarlok*, vol. 22, no. 1, pp. 88–94, 1990.
- K. M. Mjelde, “Fractional resource allocation with S-shaped return functions,” *The Journal of the Operational Research Society*, vol. 34, no. 7, pp. 627–632, 1983.
- I. M. Stancu-Minasian, *Fractional Programming, Theory, Methods and Applications*, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
- A. Cichocki and R. Unbehauen, *Neural Networks for Optimization and Signal Processing*, John Wiley & Sons, New York, NY, USA, 1993.
- J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” *Proceedings of the National Academy of Sciences of the United States of America*, vol. 81, no. 10, pp. 3088–3092, 1984.
- J. J. Hopfield and D. W. Tank, “‘Neural’ computation of decisions in optimization problems,” *Biological Cybernetics*, vol. 52, no. 3, pp. 141–152, 1985.
- J. Wang, “A deterministic annealing neural network for convex programming,” *Neural Networks*, vol. 7, no. 4, pp. 629–641, 1994.
- J. Wang and V. Chankong, “Recurrent neural networks for linear programming: analysis and design principles,” *Computers and Operations Research*, vol. 19, no. 3-4, pp. 297–311, 1992.
- J. Wang, “Analysis and design of a recurrent neural network for linear programming,” *IEEE Transactions on Circuits and Systems I*, vol. 40, no. 9, pp. 613–618, 1993.
- M. P. Kennedy and L. O. Chua, “Neural networks for nonlinear programming,” *IEEE Transactions on Circuits and Systems*, vol. 35, no. 5, pp. 554–562, 1988.
- Y. S. Xia and J. Wang, “A general methodology for designing globally convergent optimization neural networks,” *IEEE Transactions on Neural Networks*, vol. 9, no. 6, pp. 1331–1343, 1998.
- A. Bouzerdoum and T. R. Pattison, “Neural network for quadratic optimization with bound constraints,” *IEEE Transactions on Neural Networks*, vol. 4, no. 2, pp. 293–304, 1993.
- X. B. Liang and J. Wang, “A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints,” *IEEE Transactions on Neural Networks*, vol. 11, no. 6, pp. 1251–1262, 2000.
- Y. S. Xia, H. Leung, and J. Wang, “A projection neural network and its application to constrained optimization problems,” *IEEE Transactions on Circuits and Systems I*, vol. 49, no. 4, pp. 447–458, 2002.
- A. Charnes and W. W. Cooper, “An explicit general solution in linear fractional programming,” *Naval Research Logistics Quarterly*, vol. 20, pp. 449–467, 1973.
- A. Charnes, D. Granot, and F. Granot, “A note on explicit solution in linear fractional programming,” *Naval Research Logistics Quarterly*, vol. 23, no. 1, pp. 161–167, 1976.
- A. Charnes, D. Granot, and F. Granot, “An algorithm for solving general fractional interval programming problems,” *Naval Research Logistics Quarterly*, vol. 23, no. 1, pp. 67–84, 1976.
- W. Bühler, “A note on fractional interval programming,” *Zeitschrift für Operations Research*, vol. 19, no. 1, pp. 29–36, 1975.
- M. S. Bazaraa and C. M. Shetty, *Nonlinear Programming, Theory and Algorithms*, John Wiley & Sons, New York, NY, USA, 1979.
- Z. B. Xu, G. Q. Hu, and C. P. Kwong, “Asymmetric Hopfield-type networks: theory and applications,” *Neural Networks*, vol. 9, no. 3, pp. 483–501, 1996.
- D. Kinderlehrer and G. Stampacchia, *An Introduction to Variational Inequalities and Their Applications*, Academic Press, New York, NY, USA, 1980.
- B. C. Eaves, “On the basic theorem of complementarity,” *Mathematical Programming*, vol. 1, no. 1, pp. 68–75, 1971.
- J. K. Hale, *Ordinary Differential Equations*, John Wiley & Sons, New York, NY, USA, 1993.
- J. P. LaSalle, “Stability theory for ordinary differential equations,” *Journal of Differential Equations*, vol. 4, pp. 57–65, 1968.
- B. Meister and W. Oettli, “On the capacity of a discrete, constant channel,” *Information and Control*, vol. 11, no. 3, pp. 341–351, 1967.
- S. P. Aggarwal and I. C. Sharma, “Maximization of the transmission rate of a discrete, constant channel,” *Mathematical Methods of Operations Research*, vol. 14, no. 1, pp. 152–155, 1970.