Discrete Dynamics in Nature and Society
Volume 2010, Article ID 829692, 27 pages
http://dx.doi.org/10.1155/2010/829692
Research Article

Convergence of an Online Split-Complex Gradient Algorithm for Complex-Valued Neural Networks

1Department of Mathematics, Dalian Maritime University, Dalian 116026, China
2Department of Applied Mathematics, Harbin Engineering University, Harbin 150001, China

Received 1 September 2009; Accepted 19 January 2010

Academic Editor: Manuel De La Sen

Copyright © 2010 Huisheng Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The online gradient method has been widely used in training neural networks. In this paper we consider an online split-complex gradient algorithm for complex-valued neural networks, with an adaptive learning rate chosen during the training procedure. Under certain conditions, by first establishing the monotonicity of the error function, we prove that the gradient of the error function tends to zero and that the weight sequence tends to a fixed point. A numerical example is given to support the theoretical findings.

1. Introduction

In recent years, neural networks have been widely used because of their outstanding capability of approximating nonlinear models. As an important search method in optimization theory, the gradient algorithm has been applied in various engineering fields, such as adaptive control and recursive parameter estimation [1–3]. The gradient algorithm is also a popular training method for neural networks (when used to train networks with hidden layers, it is also called the backpropagation (BP) algorithm) and can be run in either online or batch mode [4]. In online training, the weights are updated after the presentation of each training example, while in batch training, the weights are not updated until all of the examples have been presented to the network. As a result, the batch gradient training algorithm is typically used when the number of training samples is relatively small, whereas the online gradient training algorithm is preferred when a very large number of training samples are available.

The parameters of conventional neural networks are usually real numbers, suited to dealing with real-valued signals [5, 6]. In many applications, however, the inputs and outputs of a system are best described as complex-valued signals, and processing is done in complex space. To handle such problems in the complex domain, complex-valued neural networks (CVNNs), which extend the usual real-valued neural networks to complex numbers, have been proposed in recent years [7–9]. Accordingly, there are two types of generalized gradient training algorithms for complex-valued neural networks: the fully complex gradient algorithm [10–12] and the split-complex gradient algorithm [13, 14], both of which can be run in online or batch mode. It has been pointed out that the split-complex gradient algorithm can avoid the problems resulting from singular points [14].

Convergence is of primary importance for a training algorithm to be used successfully. There have been extensive research results concerning the convergence of gradient algorithms for real-valued neural networks (see, e.g., [15, 16] and the references cited therein), covering both the online and the batch modes. In comparison, the convergence properties of complex gradient algorithms have seldom been investigated. We refer the reader to [11, 12] for some convergence results on fully complex gradient algorithms and to [17] for those on the batch split-complex gradient algorithm. However, to the best of our knowledge, a convergence analysis of the online split-complex gradient (OSCG) algorithm for complex-valued neural networks has not yet been established in the literature, and this is the primary concern of this paper. Under certain conditions, by first establishing the monotonicity of the error function, we prove that the gradient of the error function tends to zero and that the weight sequence tends to a fixed point. A numerical example is also given to support the theoretical findings.

The remainder of this paper is organized as follows. The CVNN model and the OSCG algorithm are described in the next section. Section 3 presents the main results. The proofs of these results are postponed to Section 4. In Section 5 we give a numerical example to support our theoretical findings. The paper ends with some conclusions in Section 6.

2. Network Structure and Learning Method

It has been shown that a two-layered CVNN can solve many problems that cannot be solved by real-valued neural networks with fewer than three layers [13]. Thus, without loss of generality, this paper considers a two-layered CVNN consisting of several input neurons and one output neuron. For any positive integer $n$, the set of all $n$-dimensional complex vectors is denoted by $\mathbb{C}^{n}$ and the set of all $n$-dimensional real vectors by $\mathbb{R}^{n}$. The weight vector between the input neurons and the output neuron, and likewise each input signal, is a complex vector written in terms of its real and imaginary parts. For a given input signal, the input to the output neuron is

Here “$\cdot$” denotes the inner product of two vectors.
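In split-complex form, writing the weights as $\mathbf{w}=\mathbf{w}^{R}+i\,\mathbf{w}^{I}$ and the input as $\mathbf{z}=\mathbf{x}+i\,\mathbf{y}$ with real vectors $\mathbf{w}^{R},\mathbf{w}^{I},\mathbf{x},\mathbf{y}$ (these symbols are chosen here for illustration and may differ from the original notation), the net input separates as
\[
U=\mathbf{w}\cdot\mathbf{z}
 =\bigl(\mathbf{w}^{R}\cdot\mathbf{x}-\mathbf{w}^{I}\cdot\mathbf{y}\bigr)
 +i\bigl(\mathbf{w}^{R}\cdot\mathbf{y}+\mathbf{w}^{I}\cdot\mathbf{x}\bigr)
 =:U^{R}+i\,U^{I}.
\]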

For the convenience of applying the OSCG algorithm to train the network, we consider the following popular real-imaginary-type activation function [13]:

for any complex argument, where the real and imaginary parts are each passed through a real-valued function (e.g., a sigmoid function). With this convention, the network output is given by
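Concretely, assuming the illustrative notation above with a real activation $f$, this real-imaginary-type activation acts componentwise on the real and imaginary parts, so that
\[
f_{C}(U)=f(U^{R})+i\,f(U^{I}),\qquad\text{and hence}\qquad O=f_{C}(U)=:O^{R}+i\,O^{I},
\]
where $f_{C}$, $O^{R}$, and $O^{I}$ are symbols introduced here for illustration.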

Let the network be supplied with a given set of training examples. For each input from the training set, we write the corresponding input to the output neuron and the corresponding actual network output accordingly. The square error function can then be represented as follows:

where “$*$” signifies the complex conjugate.
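A sketch of the standard split-complex form of this error, assuming a training pair with complex target $d=d^{R}+i\,d^{I}$ and network output $O=O^{R}+i\,O^{I}$ as above (again, illustrative symbols), is
\[
E(\mathbf{w})=\tfrac{1}{2}\,(d-O)(d-O)^{*}
 =\tfrac{1}{2}\bigl[(d^{R}-O^{R})^{2}+(d^{I}-O^{I})^{2}\bigr],
\]
summed over the training examples when the total error is considered.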

The neural network training problem is to find the weights that minimize the approximation error. The gradient method is often used to solve this minimization problem. Differentiating the error function with respect to the real and imaginary parts of the weight vector, respectively, gives
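Under the same illustrative notation, a sketch of these derivatives for a single training example, obtained by the chain rule applied to the split form of the error, is
\[
\frac{\partial E}{\partial \mathbf{w}^{R}}
 =-(d^{R}-O^{R})\,f'(U^{R})\,\mathbf{x}-(d^{I}-O^{I})\,f'(U^{I})\,\mathbf{y},
\qquad
\frac{\partial E}{\partial \mathbf{w}^{I}}
 =(d^{R}-O^{R})\,f'(U^{R})\,\mathbf{y}-(d^{I}-O^{I})\,f'(U^{I})\,\mathbf{x}.
\]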

Now we describe the OSCG algorithm. Given initial weights at time 0, the OSCG algorithm updates the weight vector by treating its real part and imaginary part separately:
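In the illustrative notation, a sketch of this update reads
\[
\mathbf{w}^{R}_{m+1}=\mathbf{w}^{R}_{m}
 -\eta_{m}\,\frac{\partial E_{j_{m}}}{\partial \mathbf{w}^{R}}\bigg|_{\mathbf{w}=\mathbf{w}_{m}},
\qquad
\mathbf{w}^{I}_{m+1}=\mathbf{w}^{I}_{m}
 -\eta_{m}\,\frac{\partial E_{j_{m}}}{\partial \mathbf{w}^{I}}\bigg|_{\mathbf{w}=\mathbf{w}_{m}},
\]
where $m$ is the iteration counter, $j_{m}$ indexes the training example presented at step $m$, $E_{j_{m}}$ is the corresponding single-example error, and $\eta_{m}>0$ is the learning rate; the exact indexing in the paper's equation (2.8) may differ.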

For , and denote that

Then (2.8) can be rewritten as

Given and a positive constant , we choose the learning rate as

Equation (2.11) can be rewritten as

and this implies that

This type of learning rate is often used in neural network training [16].
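One common concrete choice with this qualitative behavior (given here only as an illustrative stand-in for the rule (2.11)) is
\[
\eta_{m}=\frac{\eta_{0}}{1+m/\beta},\qquad \eta_{0},\beta>0,
\]
which is positive and monotonically decreasing and satisfies $\sum_{m}\eta_{m}=\infty$ and $\sum_{m}\eta_{m}^{2}<\infty$.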

For the convergence analysis of the OSCG algorithm, as for the batch version of the split-complex gradient algorithm [17], we shall need the following assumptions.

() There exists a constant such that

() The set contains only finitely many points.
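To make the procedure concrete, the following Python sketch implements one training epoch of this scheme. It is a minimal illustration, not the paper's implementation: the function and parameter names (oscg_epoch, eta0, beta), the sigmoid activation, and the simple decreasing learning rate are all assumptions standing in for the corresponding choices in Section 2.

import numpy as np

# Minimal sketch of one pass of an online split-complex gradient (OSCG)
# update for a two-layered CVNN with a single output neuron.
# All names (oscg_epoch, eta0, beta) and the sigmoid activation are
# illustrative; the decreasing learning rate eta0 / (1 + m / beta) is only
# a stand-in for the adaptive rule (2.11) of the paper.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def oscg_epoch(wR, wI, X, Y, dR, dI, eta0=0.5, beta=100.0, start_step=0):
    """One pass over the J training examples; X, Y hold the real and
    imaginary parts of the inputs (shape J x L), dR, dI the real and
    imaginary parts of the targets, and wR, wI the weight parts (length L)."""
    m = start_step
    for x, y, tR, tI in zip(X, Y, dR, dI):
        # Net input U = w . z, split into real and imaginary parts
        UR = wR @ x - wI @ y
        UI = wR @ y + wI @ x
        # Real-imaginary-type activation: O = f(UR) + i f(UI)
        OR, OI = sigmoid(UR), sigmoid(UI)
        fpR = OR * (1.0 - OR)      # f'(UR) for the sigmoid
        fpI = OI * (1.0 - OI)      # f'(UI) for the sigmoid
        eR, eI = tR - OR, tI - OI  # errors of the real and imaginary parts
        # Gradients of the single-example squared error w.r.t. wR and wI
        gR = -(eR * fpR * x + eI * fpI * y)
        gI = eR * fpR * y - eI * fpI * x
        # Separate online updates of the real and imaginary weight parts
        eta = eta0 / (1.0 + m / beta)
        wR = wR - eta * gR
        wI = wI - eta * gI
        m += 1
    return wR, wI, m

Feeding the returned weights and step counter back into the next call carries the training across epochs.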

3. Main Results

In this section, we will give several lemmas and the main convergence theorems. The proofs of those results are postponed to the next section.

In order to derive the convergence theorem, we need to estimate the values of the error function (2.4) at two successive cycles of the training iteration. Denote that

where , and . The first lemma breaks the change of the error function (2.4) between two successive cycles of the training iteration into several terms.

Lemma 3.1. Suppose Assumption is valid. Then one has where , , each lies on the segment between and , and each lies on the segment between and .

The second lemma gives estimates of some terms in (3.2).

Lemma 3.2. Suppose that Assumptions and hold. Then, for , one has where are constants and

From Lemmas 3.1 and 3.2, we can derive the following lemma.

Lemma 3.3. Suppose that Assumptions and hold. Then, for , one has where is a constant.

With Lemmas 3.1–3.3 above, we can prove the following monotonicity result for the OSCG algorithm.

Theorem 3.4. Let be given by (2.11) and let the weight sequence be generated by (2.8). Then under Assumption , there are positive numbers and such that for any and one has
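In the illustrative notation used above, and writing $J$ for the number of training examples so that $\mathbf{w}_{mJ}$ is the weight at the start of the $(m+1)$th training cycle, a paraphrase of this monotonicity property is
\[
E\bigl(\mathbf{w}_{(m+1)J}\bigr)\le E\bigl(\mathbf{w}_{mJ}\bigr),\qquad m=0,1,2,\ldots,
\]
that is, the error (2.4) does not increase between the starts of successive training cycles.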

To establish the convergence theorem, we also need the following estimate.

Lemma 3.5. Let be given by (2.11). Then under Assumption , there are the same positive numbers and as chosen in Theorem 3.4 such that for any and one has

The following lemma gives an estimate of a series, which is essential for the proof of the convergence theorem.

Lemma 3.6 (see [16]). Suppose that a series is convergent and . If there exists a constant such that then

The following lemma will be used to prove the convergence of the weight sequence.

Lemma 3.7. Suppose that the function is continuous and differentiable on a compact set and that contains only finitely many points. If a sequence satisfies then there exists a point such that
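A sketch of the standard statement behind this lemma, with illustrative symbols, is the following: let $F:\Phi\subset\mathbb{R}^{n}\to\mathbb{R}$ be continuous and differentiable on a compact set $\Phi$, and let $\Omega=\{\mathbf{z}\in\Phi:\nabla F(\mathbf{z})=0\}$ contain only finitely many points; if a sequence $\{\mathbf{z}_{m}\}\subset\Phi$ satisfies
\[
\lim_{m\to\infty}\|\mathbf{z}_{m+1}-\mathbf{z}_{m}\|=0
\qquad\text{and}\qquad
\lim_{m\to\infty}\|\nabla F(\mathbf{z}_{m})\|=0,
\]
then there exists a point $\mathbf{z}^{*}\in\Omega$ such that $\lim_{m\to\infty}\mathbf{z}_{m}=\mathbf{z}^{*}$.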

Now we are ready to give the main convergence theorem.

Theorem 3.8. Let be given by (2.11) and let the weight sequence be generated by (2.8). Then under Assumption , there are positive numbers and such that for any and one has Furthermore, if Assumption also holds, then there exists a point such that
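In line with the abstract, and again in the illustrative notation, the two conclusions take the form
\[
\lim_{m\to\infty}\left(\left\|\frac{\partial E}{\partial \mathbf{w}^{R}}(\mathbf{w}_{m})\right\|
+\left\|\frac{\partial E}{\partial \mathbf{w}^{I}}(\mathbf{w}_{m})\right\|\right)=0
\qquad\text{and}\qquad
\lim_{m\to\infty}\mathbf{w}_{m}=\mathbf{w}^{*}
\]
for some point $\mathbf{w}^{*}$, where the first limit holds under the first assumption alone and the second requires both assumptions.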

4. Proofs

Proof of Lemma 3.1. Using Taylor's formula, we have where lies on the segment between and . Similarly, we also have a point between and such that From (2.8) and (2.10) we have Combining (2.4), (2.9), (3.1), and (4.1)–(4.3), we have where

Proof of Lemma 3.2. From (2.5) and Assumption we know that the functions , , , , , and are all bounded. Thus there is a constant such that By (2.9), (2.10), (3.1), and the Mean-Value Theorem, for and we have where . Similarly we have In particular, as , for , we can get where . For , , suppose that where are nonnegative constants. Recalling , we have where and . Similarly, we also have Thus, by setting , we have (3.3). We now prove (3.4). Using (3.3) and the Cauchy–Schwarz inequality, we have where . This validates (3.4). Finally, we show (3.5). Using (2.10), (3.1), (3.3), and (4.3), we have where . Similarly we also have This, together with (2.9) and (4.6), leads to where and . This completes the proof.

Proof of Lemma 3.3. Recalling Lemmas 3.1 and 3.2, we conclude that Then (3.6) is obtained by letting .

Proof of Theorem 3.4. By virtue of (3.6), the key to proving this theorem is to verify that In the following we prove (4.18) by induction. First we take such that For suppose that Next we prove that Notice that where lies on the segment between and , and lies on the segment between and . Similarly to (4.14), we also have the following estimate: where . By (4.6) and (4.22)–(4.23) we know that there are positive constants and such that where . Squaring both sides of the above inequality gives Summing the above inequality over , we obtain Let then On the other hand, from (4.22) we have Similarly to the deduction of (4.24), from (4.29) we have It can easily be verified that, for any positive numbers , , , if , Applying (4.31) to (4.30) implies that Similarly, we can obtain the counterpart of (4.28) as and the counterpart of (4.32) as From (4.28) and (4.33) we have From (4.32) and (4.34) we have Using (2.11) and (4.36), we get Multiplying (4.37) by gives Using (4.20) and (4.35), we obtain Combining (4.38) and (4.39), we have