Abstract

We prove that solutions converge to zero at an exponential rate for a system of ordinary differential equations. The feature of this work is that it deals with nonlinear, non-Lipschitz, and unbounded distributed delay terms involving non-Lipschitz and unbounded activation functions.

1. Introduction

Of concern is the following system, posed for positive times, with continuous initial data. Here the coefficients, the delay kernels, the input functions, and the nonlinearities are continuous functions subject to further conditions that will be specified below.
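For orientation, a minimal sketch of such a system, written with hypothetical names (coefficients $a_i$, $b_{ij}$, $c_{ij}$, kernels $K_{ij}$, activation functions $f_j$, $g_j$, and inputs $I_i$, none of which are fixed by the discussion here), is the Hopfield-type model with distributed delays
\[
x_i'(t) = -a_i(t)\,x_i(t) + \sum_{j=1}^{N} b_{ij}(t)\, f_j\big(x_j(t)\big) + \sum_{j=1}^{N} c_{ij}(t) \int_{-\infty}^{t} K_{ij}(t-s)\, g_j\big(x_j(s)\big)\, ds + I_i(t), \quad t > 0,\; i = 1, \dots, N,
\]
with $x_i(t) = \varphi_i(t)$ given and continuous for $t \le 0$.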

Similar forms of this system arise, for instance, in Neural Network Theory [1–28] (see also the “Applications” section below). There, the involved functions are much simpler: usually the functions multiplying the dissipative terms are equal to the identity, and the activation functions are assumed to be Lipschitz continuous. The integral terms represent the distributed delays; when the kernels are replaced by the delta distribution we recover the well-known discrete delays. The remaining functions account for the external inputs. The first terms on the right-hand side of (1) may be regarded as dissipative terms.
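To see how discrete delays are recovered, take formally $K_{ij}(u) = \delta(u - \tau_{ij})$ for some delay $\tau_{ij} > 0$ (the notation follows the sketch above); then
\[
\int_{-\infty}^{t} \delta(t - s - \tau_{ij})\, g_j\big(x_j(s)\big)\, ds = g_j\big(x_j(t - \tau_{ij})\big),
\]
so each distributed-delay term collapses to a discrete-delay term.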

Different methods have been used by many authors to study the well-posedness and the asymptotic behavior of solutions of these systems [1–3, 6, 9–17, 19–21, 25, 27]. In particular, a lot of effort has been devoted to improving the conditions on the different coefficients involved in the system as well as on the class of activation functions. Regarding the latter issue, the early assumptions of boundedness, monotonicity, and differentiability have all been relaxed to a mere global Lipschitz condition. Since then, it seems that this assumption has not been considerably weakened further. It has been pointed out that in applications there are many activation functions which are continuous but not necessarily Lipschitz continuous [29]. A slightly weaker condition, imposing a Lipschitz-type bound only at the equilibrium, has been used in [4, 19, 26, 28] (see also [22–24]). Finally, we cite [5], where the authors consider non-Lipschitz continuous but bounded activation functions. There are also many works on discontinuous activation functions.
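A condition of the type used in [4, 19, 26, 28], written here as a sketch with hypothetical constants $L_j > 0$ and equilibrium $x^* = (x_1^*, \dots, x_N^*)$, is
\[
f_j \in C(\mathbb{R}), \qquad |f_j(u) - f_j(x_j^*)| \le L_j\, |u - x_j^*| \quad \text{for all } u \in \mathbb{R},
\]
that is, a Lipschitz bound is required at the equilibrium only, rather than uniformly in both arguments.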

Here we assume that the activation functions are continuous monotone nondecreasing functions that are not necessarily Lipschitz continuous, and they may be unbounded (like power-type functions with powers greater than one). We prove that, for sufficiently small initial data, solutions decay to zero exponentially.

We could not find comparable works on (continuous but) non-Lipschitz continuous activation functions with which to compare our results. Our treatment is in fact concerned with a doubly non-Lipschitz continuous system.

Using standard techniques and the Gronwall-type lemma below, we may prove local existence of solutions. Global existence follows from the estimate in our theorem below. Uniqueness, however, is delicate and does not hold in general.
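A classical scalar example shows why uniqueness may fail for merely continuous right-hand sides: the problem
\[
x'(t) = \sqrt{|x(t)|}, \qquad x(0) = 0,
\]
admits both the trivial solution $x \equiv 0$ and $x(t) = t^2/4$ for $t \ge 0$; a non-Lipschitz nonlinearity can thus destroy uniqueness even in one dimension.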

In the next section we present and prove our result and illustrate it by an example.

2. Exponential Convergence

In this section we state and prove our exponential convergence result. Before that we need to present a lemma due to Bainov and Simeonov [30].

Let $g, h : (0, \infty) \to (0, \infty)$. We write $g \propto h$ if $h/g$ is nondecreasing in $(0, \infty)$.

Lemma 1. Let $a(t)$ be a positive continuous function in $J = [\alpha, \beta)$. Assume that $k_j(t, s)$, $j = 1, \dots, n$, are nonnegative continuous functions for $t, s \in J$ which are nondecreasing in $t$ for any fixed $s$, that $g_j(u)$, $j = 1, \dots, n$, are nondecreasing continuous functions in $(0, \infty)$, with $g_j(u) > 0$ for $u > 0$, and that $u(t)$ is a nonnegative continuous function in $J$. If $g_1 \propto g_2 \propto \cdots \propto g_n$ in $(0, \infty)$, then the inequality
\[
u(t) \le a(t) + \sum_{j=1}^{n} \int_{\alpha}^{t} k_j(t, s)\, g_j(u(s))\, ds, \quad t \in J,
\]
implies that
\[
u(t) \le \omega_n(t), \quad \alpha \le t < \beta_0,
\]
where $\omega_0(t) = \sup_{\alpha \le s \le t} a(s)$,
\[
\omega_j(t) = G_j^{-1}\Big[ G_j\big(\omega_{j-1}(t)\big) + \int_{\alpha}^{t} k_j(t, s)\, ds \Big], \qquad G_j(u) = \int_{u_j}^{u} \frac{ds}{g_j(s)}, \quad u > 0,\; u_j > 0,\; j = 1, \dots, n,
\]
and $\beta_0$ is chosen so that the functions $\omega_j(t)$, $j = 1, \dots, n$, are defined for $\alpha \le t < \beta_0$.
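For $n = 1$ the lemma reduces to the classical Bihari inequality. As a sketch, for a power nonlinearity $g_1(u) = u^p$ with $p > 1$ and any $u_1 > 0$ (notation as in the lemma), one computes
\[
G_1(u) = \int_{u_1}^{u} \frac{ds}{s^p} = \frac{u_1^{1-p} - u^{1-p}}{p - 1}, \qquad G_1^{-1}(v) = \big[u_1^{1-p} - (p-1)v\big]^{-1/(p-1)},
\]
so that $\omega_1(t)$ is defined only as long as
\[
G_1\big(\omega_0(t)\big) + \int_{\alpha}^{t} k_1(t, s)\, ds < \frac{u_1^{1-p}}{p - 1};
\]
this is precisely where a smallness condition on the data enters for superlinear nonlinearities.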

To shorten the statement of our result we define, for $t \ge 0$, the auxiliary expressions entering the estimate below, for some constant to be determined.

Theorem 2. Assume that the activation functions are continuous monotone nondecreasing functions which can be relabelled as $w_1, \dots, w_m$ with $w_1 \propto w_2 \propto \cdots \propto w_m$, their corresponding coefficients being relabelled accordingly. Assume further that the coefficients and the input functions are continuous on $[0, \infty)$ and that the delay kernels are continuously differentiable. Then, there exists a positive constant such that (a) if the kernels are nonincreasing, the exponential decay estimate below holds; (b) if the derivatives of the kernels are merely summable and of arbitrary signs, then the conclusion in (a) holds with the correspondingly modified quantities.

Proof. From (1) we deduce, for $t > 0$ and each component, a differential inequality for the absolute value of that component and, by summation, a scalar inequality for the sum of the absolute values, where $D^+$ denotes the right Dini derivative. It follows (see [29]) that this sum satisfies an integral inequality whose ingredients are as defined before the theorem. We next introduce an auxiliary functional; clearly, from (11) and (15), it dominates the sum of the absolute values and is nondecreasing.
(a) The Fading Memory Case. By integration we see that the functional satisfies the integral inequality (17). According to our hypotheses we can relabel the terms in (17) so that it takes the form required by Lemma 1, with the nonlinearities in the order $w_1 \propto \cdots \propto w_m$.
Applying Lemma 1 we obtain a bound of the form $\omega_m(t)$, and hence the asserted estimate, where $\beta_0$ is chosen so that the functions $\omega_j(t)$, $j = 1, \dots, m$, are defined for $0 \le t < \beta_0$.
(b) Kernel Derivatives of Arbitrary Signs. We define, for $t \ge 0$, a modified functional (24). We have from (16) and (24) a differential inequality for this functional, and by integration we find, for $t \ge 0$, an integral inequality with the constant (27). Next, we proceed as in Case (a) with the new functional (24), the constant (27), and the correspondingly modified quantities. The proof is complete.
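The first step of the proof is a standard comparison argument for the Dini derivative. In terms of the hypothetical system sketched in the Introduction, it takes the form
\[
D^+ |x_i(t)| \le -a_i(t)\,|x_i(t)| + \sum_{j=1}^{N} |b_{ij}(t)|\,\big|f_j(x_j(t))\big| + \sum_{j=1}^{N} |c_{ij}(t)| \int_{-\infty}^{t} \big|K_{ij}(t-s)\big|\, \big|g_j(x_j(s))\big|\, ds + |I_i(t)|,
\]
valid also at points where $x_i(t) = 0$; summing over $i$ produces a scalar integro-differential inequality for $\sum_i |x_i(t)|$, to which Lemma 1 can eventually be applied.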

Corollary 3. If $\beta_0 = \infty$ (in case (a), and for the corresponding quantity in case (b)), then solutions of (1) are global in time. Moreover, if the bound furnished by Lemma 1 grows at most polynomially, then the decay is exponential.
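The mechanism behind the last assertion is elementary. If the estimate has the form $\sum_i |x_i(t)| \le \omega(t)\, e^{-\varepsilon t}$ with a bound $\omega(t) \le C (1 + t)^k$ (the names $\omega$, $\varepsilon$, $C$, $k$ are generic here), then for any $0 < \varepsilon' < \varepsilon$,
\[
\omega(t)\, e^{-\varepsilon t} \le C (1 + t)^k e^{-(\varepsilon - \varepsilon') t}\, e^{-\varepsilon' t} \le C' e^{-\varepsilon' t}, \quad t \ge 0,
\]
since $(1 + t)^k e^{-(\varepsilon - \varepsilon') t}$ is bounded on $[0, \infty)$; polynomial growth of the bound therefore does not spoil the exponential decay.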

Remark 4. We have judged it useful to treat case (a) separately, even though it is covered by case (b), for the simple reason that this case arises in real applications: it corresponds to the “fading memory” situation. Similarly, the case where the kernels are dominated by decaying exponentials, for some positive rate, may look unnecessary to study separately, as it is covered by the second case in the proof, but it is in fact also quite interesting. Indeed, in this case, from (16) we obtain a sharper differential inequality, and integration yields a correspondingly sharper bound. At this point we must emphasize that, unlike in the proof of the theorem, we cannot pass to the supremum inside the arguments of the nonlinearities in (30). However, if the nonlinearities belong to a class for which the supremum can be pulled out of their arguments at the price of multiplicative factors, then we may apply Lemma 1 (with the new coefficients) to obtain a bound for the relevant functional, and thereafter for the solution. If that bound does not grow too fast, we will get an exponential decay. This decay rate is to be compared with the general one obtained in the second case of our result.

3. Applications

This system appears in Neural Network Theory. For basic background the reader is referred to [7, 8].

A Neural Network is designed to mimic the human brain. It is formed by a number of “neurons” with interconnections between them. In general, there are an input layer, one or more hidden layers, and an output layer. The input neurons feed the neurons in the hidden layers, which transform the signal and fire it to the output neurons (or to other neurons). Neural Networks are widely used for solving optimization problems and for analyzing, classifying, and evaluating data. They have the advantage (over traditional computers) of being able to forecast, predict, and make decisions.

There are numerous applications, of which we cite the following: economic indicators, data compression, complex mappings, biological system analysis, optimization, process control, time series analysis, stock market prediction, diagnosis of hepatitis, engineering design, soil permeability, speech processing, pattern recognition, and so on.

Most of the existing papers in this theory deal with the constant-coefficient case. The few papers on variable coefficients treat mainly the existence of periodic solutions. In the constant-coefficient case the system takes the form of (1) with all coefficients and kernels independent of time. Our theorem then gives an exponential decay estimate for some positive rate, with one expression of the rate in case (a) and a similar estimate in case (b). The corollary provides sufficient conditions ensuring global existence. In this case we have exponential decay provided that the bound furnished by Lemma 1 does not grow too fast.
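In the notation of the sketch in the Introduction (hypothetical names), the constant-coefficient system reads
\[
x_i'(t) = -a_i x_i(t) + \sum_{j=1}^{N} b_{ij} f_j\big(x_j(t)\big) + \sum_{j=1}^{N} c_{ij} \int_{-\infty}^{t} K_{ij}(t-s)\, g_j\big(x_j(s)\big)\, ds + I_i(t), \quad t > 0,
\]
and a decay estimate of the type produced by the theorem reads $\sum_i |x_i(t)| \le C e^{-\varepsilon t}$, $t \ge 0$, for some constants $C, \varepsilon > 0$ determined by the data.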

Example 5. Consider power-type activation functions with powers greater than one, as shown in the sketch below. The ordering required by the theorem amounts to relabelling the functions so that their powers are arranged in a nondecreasing manner. The value $\beta_0$ will then be the largest value of $t$ for which the functions $\omega_j$ of Lemma 1 remain defined. For the asymptotic behavior we need this value to be infinite; in particular, we need a smallness condition on the initial data.
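To illustrate the ordering for power-type functions: with $w_j(u) = u^{\gamma_j}$ on $(0, \infty)$ and exponents $\gamma_j \ge 1$ (generic names), one has $w_j \propto w_k$ precisely when $\gamma_j \le \gamma_k$, since
\[
\frac{w_k(u)}{w_j(u)} = u^{\gamma_k - \gamma_j}
\]
is nondecreasing in $u$ if and only if $\gamma_k \ge \gamma_j$. Relabelling the activation functions by nondecreasing powers therefore produces the chain $w_1 \propto w_2 \propto \cdots \propto w_m$ required by Lemma 1.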

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author is grateful for the financial support and the facilities provided by King Fahd University of Petroleum and Minerals through Grant no. IN121044.