#### Abstract

This paper presents a new matrix method for solving high-order linear complex differential equations with variable coefficients in rectangular domains, subject to given initial conditions. On the basis of the presented approach, the matrix forms of the Bernoulli polynomials and their derivatives are constructed; by substituting the collocation points into these matrix forms, the fundamental matrix equation is formed. This matrix equation corresponds to a system of linear algebraic equations. By solving this system, the unknown Bernoulli coefficients are determined and thus the approximate solutions are obtained. An error analysis based on the Bernoulli polynomials is also provided under several mild conditions. To illustrate the efficiency of our method, some numerical examples are given.

#### 1. Introduction

Complex differential equations enjoy great popularity in science and engineering. Physical events in the real world can often be modeled by complex differential equations. For instance, the vibrations of a one-mass system with two degrees of freedom (DOFs) are mostly described by differential equations with a complex dependent variable [1, 2]. Various applications of differential equations with complex dependent variables are introduced in [2]. Since a large class of such equations cannot be solved explicitly, it is often necessary to resort to approximation and numerical techniques.

In recent years, studies on complex differential equations have developed rapidly and intensively; examples include a geometric approach based on meromorphic functions in arbitrary domains [3], a topological description of solutions of some complex differential equations with multivalued coefficients [4], the zero distribution [5] and growth estimates [6] of linear complex differential equations, and rational together with polynomial approximations of analytic functions in the complex plane [7, 8].

Since the beginning of 1994, the Laguerre, Chebyshev, Taylor, Legendre, Hermite, and Bessel (matrix and collocation) methods have been used in [9–19] to solve linear differential, integral, and integrodifferential-difference equations and their systems. The Bernoulli matrix method has also been used to find approximate solutions of differential and integrodifferential equations [20–22].

In this paper, in the light of the above-mentioned methods and by means of the matrix relations between the Bernoulli polynomials and their derivatives, we develop a new method, called the Bernoulli collocation method (BCM), for solving high-order linear complex differential equations under given initial conditions. Throughout, we let z = x + iy denote a variable point in the complex plane; its real and imaginary parts are denoted by x and y, respectively.

Here, the coefficient functions and the known right-hand side function, together with the unknown function, are holomorphic (or analytic) in the rectangular domain under consideration, and the constants appearing in the initial conditions are appropriate complex numbers.

We assume that the solution of (1) under the conditions (2) is approximated by the truncated Bernoulli series of the unknown function, in which the Bernoulli coefficients are to be determined. We also use the collocation points

In this paper, by generalizing the methods of [20, 21] from real calculus to complex calculus, we propose a new matrix method based on the Bernoulli operational matrix of differentiation and a uniform collocation scheme. It should be noted that, since an ordinary complex differential equation is equivalent to a system of partial differential equations (see Section 4), methods based on high-order Gauss quadrature rules [23, 24] are not effective here. They suffer from two disadvantages: they need more CPU time on the one hand, and the associated algebraic problem is ill conditioned on the other. Therefore, an easy-to-use approach, such as a method based on operational matrices, is desirable for solving practical problems.

The rest of this paper is organized as follows. In Section 2, we review some notation from complex calculus and also provide several properties of the Bernoulli polynomials. Section 3 is devoted to the proposed matrix method. Error analysis and the accuracy of the approximate solution by the aid of the Bernoulli polynomials are given in Section 4. Several illustrative examples are provided in Section 5, confirming the effectiveness of the presented method. Section 6 contains some conclusions and notes on future work.

#### 2. Review on Complex Calculus and the Bernoulli Polynomials

This section is divided into two subsections. In the first subsection we review some notation from complex calculus, especially the concept of differentiability in the complex plane, in a series of remarks. We then recall several properties of the Bernoulli polynomials and introduce the operational matrix of differentiation of the Bernoulli polynomials in the complex form.

##### 2.1. Review on Complex Calculus

From the definition of the derivative in the complex setting, it is immediate that a constant function is differentiable everywhere, with derivative 0, and that the identity function (the function f(z) = z) is differentiable everywhere, with derivative 1. Just as in elementary calculus, one can show from the last statement, by repeated applications of the product rule, that for any positive integer n the function z^n is differentiable everywhere, with derivative nz^(n-1). This, in conjunction with the sum and product rules, implies that every polynomial is everywhere differentiable: if p(z) = c_0 + c_1 z + ... + c_n z^n, where c_0, ..., c_n are complex constants, then p'(z) = c_1 + 2c_2 z + ... + n c_n z^(n-1).

*Remark 1. *The function f = u + iv is differentiable (in the complex sense) at z = x + iy if and only if u and v are differentiable (in the real sense) at (x, y) and their first partial derivatives satisfy the relations u_x = v_y and u_y = -v_x. In that case f'(z) = u_x + iv_x.
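As a quick sanity check, the Cauchy-Riemann relations u_x = v_y and u_y = -v_x of Remark 1 can be verified symbolically. The following sketch is an illustration only (not part of the paper's method); it uses SymPy and the example function f(z) = z², whose real and imaginary parts are u = x² - y² and v = 2xy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# Example function f(z) = z^2, split into real and imaginary parts
f = (z**2).expand()
u = sp.re(f)   # u(x, y) = x^2 - y^2
v = sp.im(f)   # v(x, y) = 2xy

# Cauchy-Riemann relations: u_x = v_y and u_y = -v_x
assert sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0
assert sp.simplify(sp.diff(u, y) + sp.diff(v, x)) == 0
```

Any holomorphic example (z³, exp(z), ...) passes the same check; a non-holomorphic one, such as the conjugate x - iy, fails it.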

*Remark 2. *The two partial differential equations u_x = v_y and u_y = -v_x
are called the Cauchy-Riemann equations for the pair of functions u, v. As seen above (i.e., in Remark 1), these equations are satisfied by the real and imaginary parts of a complex-valued function at each point where that function is differentiable.

*Remark 3 (sufficient condition for complex differentiability). *Let the complex-valued function f = u + iv be defined in an open subset G of the complex plane and assume that u and v have first partial derivatives in G. Then f is differentiable at each point where those partial derivatives are continuous and satisfy the Cauchy-Riemann equations.

*Definition 4. *A complex-valued function that is defined in an open subset G of the complex plane and differentiable at every point of G is said to be holomorphic (or analytic) in G. The simplest examples are polynomials, which are holomorphic in the entire complex plane, and rational functions, which are holomorphic in the regions where they are defined. Moreover, the elementary functions, such as the exponential function, the logarithm function, trigonometric and inverse trigonometric functions, and power functions, all have complex versions that are holomorphic. It should be noted that if the real and imaginary parts of a complex-valued function have continuous first partial derivatives obeying the Cauchy-Riemann equations, then the function is holomorphic.

*Remark 5 (complex partial differential operators). *The partial differential operators ∂/∂x and ∂/∂y are applied to a complex-valued function f = u + iv in the natural way:
∂f/∂x = ∂u/∂x + i ∂v/∂x, ∂f/∂y = ∂u/∂y + i ∂v/∂y.
We define the complex partial differential operators ∂/∂z and ∂/∂z̄ by
∂/∂z = (1/2)(∂/∂x - i ∂/∂y), ∂/∂z̄ = (1/2)(∂/∂x + i ∂/∂y).
Thus, ∂/∂x = ∂/∂z + ∂/∂z̄ and ∂/∂y = i(∂/∂z - ∂/∂z̄).

Intuitively, one can think of a holomorphic function as a complex-valued function in an open subset of the complex plane that depends only on z, that is, is independent of z̄. We can make this notion precise as follows. Suppose the function f = u + iv is defined and differentiable in an open set. One then has
∂f/∂z̄ = (1/2)((u_x - v_y) + i(u_y + v_x)).

The Cauchy-Riemann equations thus can be written ∂f/∂z̄ = 0. As this is the condition for f to be holomorphic, it provides a precise meaning for the statement: *a holomorphic function is one that is independent of z̄*. If f is holomorphic, then (not surprisingly) f' = ∂f/∂z, as the following calculation shows:
∂f/∂z = (1/2)((u_x + v_y) + i(v_x - u_y)) = u_x + iv_x = f'.
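The Wirtinger operators ∂/∂z and ∂/∂z̄ (with the standard normalization ∂/∂z = (1/2)(∂/∂x - i ∂/∂y), ∂/∂z̄ = (1/2)(∂/∂x + i ∂/∂y)) are easy to check symbolically. Here is a small SymPy sketch; the function exp(z) + z³ is just an illustrative choice:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def d_dz(f):
    """Wirtinger derivative d/dz = (1/2)(d/dx - i d/dy)."""
    return sp.Rational(1, 2) * (sp.diff(f, x) - sp.I * sp.diff(f, y))

def d_dzbar(f):
    """Wirtinger derivative d/dz-bar = (1/2)(d/dx + i d/dy)."""
    return sp.Rational(1, 2) * (sp.diff(f, x) + sp.I * sp.diff(f, y))

f = sp.exp(z) + z**3   # a holomorphic function of z

# A holomorphic function is "independent of z-bar" ...
assert sp.simplify(d_dzbar(f)) == 0
# ... and d/dz recovers the ordinary complex derivative f'(z)
assert sp.simplify(d_dz(f) - (sp.exp(z) + 3 * z**2)) == 0
```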

##### 2.2. The Bernoulli Polynomials and Their Operational Matrix

The Bernoulli polynomials play an important role in several areas of mathematics, including number theory and the theory of finite differences. The classical Bernoulli polynomials B_n(x) are usually defined by means of the exponential generating function (see [21])
t e^(xt) / (e^t - 1) = Σ_{n=0}^∞ B_n(x) t^n / n!, |t| < 2π.
The following familiar expansion (see [20]),
x^n = (1/(n+1)) Σ_{k=0}^{n} C(n+1, k) B_k(x),
is the most basic property of the Bernoulli polynomials. The first few Bernoulli polynomials are
B_0(x) = 1, B_1(x) = x - 1/2, B_2(x) = x^2 - x + 1/6, B_3(x) = x^3 - (3/2)x^2 + (1/2)x.
The Bernoulli polynomials satisfy the well-known relations (see [21])
B_n'(x) = n B_{n-1}(x), B_n(x+1) - B_n(x) = n x^(n-1), n ≥ 1.

The Bernoulli polynomials possess another specific property: they satisfy a linear homogeneous recurrence relation [20]. Also, the Bernoulli polynomials satisfy an interesting integral property [25], and B_n(x) satisfies a linear differential equation [20]. According to the discussion in [20], the Bernoulli polynomials form a complete basis over the interval [0, 1].
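The standard derivative and difference identities of the Bernoulli polynomials, B_n'(x) = n B_{n-1}(x) and B_n(x + 1) - B_n(x) = n x^(n-1) (the usual conventions), can be confirmed for the first few indices with SymPy's built-in `bernoulli` function. This is a verification sketch, not part of the method itself:

```python
import sympy as sp

x = sp.symbols('x')

for n in range(1, 8):
    # derivative relation: B_n'(x) = n B_{n-1}(x)
    assert sp.expand(sp.diff(sp.bernoulli(n, x), x) - n * sp.bernoulli(n - 1, x)) == 0
    # difference relation: B_n(x + 1) - B_n(x) = n x^(n-1)
    assert sp.expand(sp.bernoulli(n, x + 1) - sp.bernoulli(n, x) - n * x**(n - 1)) == 0
```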

If we introduce the Bernoulli vector in the form B(x) = [B_0(x), B_1(x), ..., B_N(x)]^T, then the derivative of B(x), with the aid of the first property of (14), can be expressed in the matrix form
B'(x) = M B(x),
where M is the (N+1) × (N+1) operational matrix of differentiation. Note that if we replace the real variable x by the complex variable z in the above relation, we again reach the same result, since the relation B_n'(z) = n B_{n-1}(z) also holds for complex arguments.
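Because B_n'(x) = n B_{n-1}(x), the operational matrix of differentiation carries n on its first subdiagonal and zeros elsewhere. The following SymPy sketch constructs this matrix and verifies B'(x) = M B(x); N = 5 is an arbitrary illustrative choice:

```python
import sympy as sp

N = 5
x = sp.symbols('x')

# Bernoulli vector B(x) = [B_0(x), ..., B_N(x)]^T
B = sp.Matrix([sp.bernoulli(n, x) for n in range(N + 1)])

# Operational matrix of differentiation: B_n'(x) = n B_{n-1}(x),
# so M has n on its first subdiagonal and zeros everywhere else.
M = sp.zeros(N + 1, N + 1)
for n in range(1, N + 1):
    M[n, n - 1] = n

# Verify the matrix form of differentiation: B'(x) = M B(x)
assert (B.diff(x) - M * B).expand() == sp.zeros(N + 1, 1)
```

The same matrix M applies unchanged when x is replaced by a complex variable z, which is what the collocation scheme of Section 3 exploits.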

Accordingly, the kth derivative of the Bernoulli vector can be given by the kth power of the operational matrix, where the matrix is defined in (18).

We recall that the Bernoulli expansions and Taylor series are not based on orthogonal functions; nevertheless, they possess operational matrices of differentiation (and also of integration). However, since the integration of the cross product of two Taylor series vectors is given in terms of a Hilbert matrix, which is known to be ill conditioned, the applications of Taylor series from the integration point of view are limited. From the differentiation point of view, however, a large number of research works (see, for instance, [10–12, 16, 17] and the references therein) use operational matrices of derivatives such as the Taylor one. For approximating an arbitrary unknown function, the advantages of the Bernoulli polynomials over orthogonal polynomials such as the shifted Legendre polynomials are the following.

(i) The operational matrix of differentiation of the Bernoulli polynomials has fewer nonzero elements than that of the shifted Legendre polynomials: for the Bernoulli polynomials, the nonzero elements of this matrix are located on the first subdiagonal (or superdiagonal), whereas for the shifted Legendre polynomials it is a filled strictly lower (or upper) triangular matrix.

(ii) The Bernoulli polynomials have fewer terms than the shifted Legendre polynomials. For example, the 6th Bernoulli polynomial has 5 terms, while the 6th shifted Legendre polynomial has 7 terms, and this difference grows as the index increases. Hence, approximating an arbitrary function with the Bernoulli polynomials uses less CPU time than with the shifted Legendre polynomials; this is claimed in [25] and demonstrated in its examples for solving nonlinear optimal control problems.

(iii) The coefficients of the individual terms in the Bernoulli polynomials are smaller than those in the shifted Legendre polynomials. Since the computational errors in products are related to the coefficients of the individual terms, the computational errors are smaller when using the Bernoulli polynomials.
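The term-count claims above can be spot-checked symbolically; the sketch below uses SymPy, with the 6th shifted Legendre polynomial on [0, 1] taken as P_6(2x - 1) (names `B6`, `P6`, `n_terms` are our own):

```python
import sympy as sp

x = sp.symbols('x')

def n_terms(p):
    """Number of nonzero monomials in a polynomial of x."""
    return len(sp.Poly(sp.expand(p), x).terms())

B6 = sp.bernoulli(6, x)          # 6th Bernoulli polynomial
P6 = sp.legendre(6, 2 * x - 1)   # 6th shifted Legendre polynomial on [0, 1]

assert n_terms(B6) == 5          # x^6 - 3x^5 + (5/2)x^4 - (1/2)x^2 + 1/42
assert n_terms(P6) == 7          # all seven coefficients are nonzero
```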

#### 3. Basic Idea

In this section, by applying the Bernoulli operational matrix of differentiation together with the collocation scheme, the basic idea of this paper is constructed. We again consider (1) and its approximate solution in the form (3). Trivially, the approximate solution can be rewritten in vector form. By using (19) in the complex form, one can conclude the matrix relation for its derivatives, where the operational matrix is introduced in (19).

For the collocation points, the matrix relation (21) becomes the system (22). In more detail, one can restate (22) as a family of pointwise equations whose matrix-vector form is (24). On the other hand, by substituting the collocation points defined by (4) into (1), we obtain a set of equations whose associated matrix-vector form, by the aid of (24), is (27). Since the vector of Bernoulli coefficients is unknown and must be determined, the matrix-vector equation (27) can be rewritten in a compact augmented form.

We now write the vector form of the initial conditions (2) by the aid of (21). In other words, the vector form of the initial conditions can be rewritten, and trivially the augmented form of these equations follows. Consequently, to find the unknown Bernoulli coefficients related to the approximate solution of the problem (1) under the initial conditions (2), we replace the rows of (31) by the last rows of the augmented matrix (29); hence we have the new augmented matrix, or the corresponding matrix-vector equation. If the coefficient matrix in (32) is nonsingular, the vector of Bernoulli coefficients is uniquely determined. Thus the high-order linear complex differential equation with variable coefficients (1) under the conditions (2) has an approximate solution, given by the truncated Bernoulli series (3). We can also easily check the accuracy of the obtained solutions as follows [12, 26]. Since the truncated Bernoulli series (3) is an approximate solution of (1), when the solution and its derivatives are substituted into (1), the resulting equation must be satisfied approximately at the test points. If a tolerance is prescribed, the truncation limit N may be increased until the residual values at each of the points become smaller than the prescribed tolerance (for more details see [10–12, 16, 17]).
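To make the workflow of this section concrete, the following self-contained sketch applies the same ingredients (Bernoulli basis, operational matrix M, collocation rows, one extra row for the initial condition) to a simple model problem. Since (1) is stated in general form, the model equation f'(z) = f(z) with f(0) = 1, exact solution e^z, and collocation along the segment z = (1 + i)t is entirely our own illustrative assumption; for simplicity the sketch builds a square system directly rather than replacing rows of an augmented matrix:

```python
import numpy as np
import sympy as sp

N = 10                                    # truncation order of the Bernoulli series
xs = sp.symbols('x')

# Monomial coefficients of B_0, ..., B_N (lowest degree first)
coeffs = [sp.Poly(sp.bernoulli(n, xs), xs).all_coeffs()[::-1] for n in range(N + 1)]

def B(z):
    """Bernoulli vector [B_0(z), ..., B_N(z)] at a (possibly complex) point z."""
    return np.array([sum(complex(c) * z**k for k, c in enumerate(row)) for row in coeffs])

# Operational matrix of differentiation: B_n' = n B_{n-1}
M = np.diag(np.arange(1, N + 1), k=-1).astype(complex)

# Model problem (an assumption for illustration): f'(z) = f(z), f(0) = 1,
# exact solution exp(z), collocated on the segment z = (1 + i) t, t in [0, 1].
nodes = (1 + 1j) * np.linspace(0.0, 1.0, N)

A = np.zeros((N + 1, N + 1), dtype=complex)
b = np.zeros(N + 1, dtype=complex)
for i, z in enumerate(nodes):
    A[i] = M @ B(z) - B(z)                # row enforcing f'(z_i) - f(z_i) = 0
A[N] = B(0.0)                             # row enforcing the initial condition
b[N] = 1.0

a = np.linalg.solve(A, b)                 # Bernoulli coefficients of f_N
f_N = lambda z: B(z) @ a                  # approximate solution f_N(z)

assert abs(f_N(0.5 + 0.5j) - np.exp(0.5 + 0.5j)) < 1e-4
```

The residual f_N'(z) - f_N(z) at off-collocation points can then serve as the accuracy check described above.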

#### 4. Error Analysis and Accuracy of the Solution

This section is devoted to providing an error bound for the approximate solution obtained with the Bernoulli polynomials. We emphasize that this analysis is given to show the efficiency of the Bernoulli polynomial approximation and is independent of the proposed method; it also shows how a complex ordinary differential equation (ODE) is equivalent to a system of partial differential equations (PDEs). After conveying this subject, we transform the obtained system of PDEs (together with the initial conditions (2)) into a system of two-dimensional Volterra integral equations in a special case. Before presenting the main theorem of this section, we recall some useful corollaries and lemmas; the main theorem, which guarantees the convergence of the truncated Bernoulli series to the exact solution under several mild conditions, can then be stated.

Now suppose that the Bernoulli polynomials B_0(x), ..., B_N(x) span a finite-dimensional subspace, and let f be an arbitrary element of the underlying function space. Since this subspace is finite dimensional, f has a unique best approximation from it. Moreover, since the best approximation belongs to the span of the Bernoulli polynomials, there exist unique coefficients such that it can be written as their linear combination.

Corollary 6. *Assume that f is a sufficiently smooth function and is approximated by its Bernoulli series; then the coefficients of this series, for every index, can be calculated from the following relation:*

* Proof. *See [21].

In practice one can use finitely many terms of the above series. Under the assumptions of Corollary 6, we now provide the error of the associated approximation.

Lemma 7 (see [20]). *Suppose that f is a sufficiently smooth function on the interval [0, 1] and is approximated by the Bernoulli polynomials as in Corollary 6. In more detail, assume that the approximate polynomial of f in terms of the Bernoulli polynomials and the remainder term are as stated below. Then the associated formulas are as follows:*
**
*where the bracket denotes the largest integer not greater than its argument.*

*Proof. *See [20].

Lemma 8. *Suppose that f is infinitely differentiable on [0, 1] with uniformly bounded derivatives and is approximated by the Bernoulli polynomials as above. Then the error bound is obtained as follows:*
**
*where D denotes a bound for all the derivatives of the function f (i.e., |f^(k)(x)| ≤ D for all k and all x in [0, 1]) and C is a positive constant.*

*Proof. *By using Lemma 7, we have

According to [20] one can write

Now we use the formulae (1.1.5) in [27] for the even Bernoulli numbers as follows:

Therefore,

In other words, the error is bounded by C·D·(2π)^(-N), where C is a positive constant independent of N. This completes the proof.
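For completeness, the formula for the even Bernoulli numbers invoked above (a standard identity; the numbering (1.1.5) follows [27]) and the crude bound it yields can be sketched as follows:

```latex
B_{2n} \;=\; (-1)^{n-1}\,\frac{2\,(2n)!}{(2\pi)^{2n}}\,\zeta(2n),
\qquad
|B_{2n}| \;\le\; \frac{4\,(2n)!}{(2\pi)^{2n}},
\quad n \ge 1,
```

since 1 < ζ(2n) ≤ ζ(2) = π²/6 < 2. It is this factorial-over-power decay of the Bernoulli numbers that produces the power-of-(2π) decay rate appearing in the bound of Lemma 8.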

Corollary 9. *Assume that f is a sufficiently smooth function of two variables and is approximated by its two-variable Bernoulli series; then the coefficients of this series, for every pair of indices, can be calculated from the following relation:*

*Proof. *By applying a similar procedure in two variables (which was provided in Corollary 6) we can conclude the desired result.

In [28], a generalization of Lemma 7 can be found. Therefore, we just recall the error of the associated approximation for two-dimensional functions.

Lemma 10. *Suppose that f is a sufficiently smooth function of two variables and is approximated in terms of a linear combination of the Bernoulli polynomials by the aid of Corollary 9. Then the error bound is obtained as follows:*
**
*where C is a positive constant independent of N and D is a bound for all the partial derivatives of f.*

We now consider the basic equation (1) together with the initial conditions (2). For clarity of presentation we impose simplifying assumptions; a similar procedure can be applied in the general case. Equation (1) with the above-mentioned assumptions has the following form, where the stated initial condition should also be considered. Thus, one can write the real and imaginary parts of the data accordingly.

According to Remark 5, since the unknown function is holomorphic, (47), by using the assumptions in (48), can be rewritten as follows:
In other words
To impose the initial conditions, we need to differentiate the above equations and then integrate over the rectangular domain. Therefore, by differentiating both of the equations of (50), we have
Integrating the above equations over the rectangle yields
By imposing the initial conditions on these equations, we reach
where
Considering the notation introduced above, (53) can be restated in the following matrix-vector form:
Our aim is to show that the approximation error tends to zero, where the approximate solution was introduced in (3).

In the following lines, the main theorem of this section is provided. However, two mild conditions, referred to below as (i) and (ii), should be assumed.

It should be noted that the second condition is based upon Lemmas 8 and 10.

Theorem 11. *Assume that the unknown solution components are approximated, respectively, by the aid of the Bernoulli polynomials in (55), and that a collocation scheme is used to provide the numerical solution of (55). In other words,*
**
*where the residual function is zero at the collocation nodes. Also suppose that the stated boundedness condition holds. Then, under the above-mentioned assumptions,*
**
*and the approximation error tends to zero.*

*Proof. *By subtracting (56) from (55) we have
Therefore,

Since the right-hand side tends to zero, so does the left-hand side; in other words, the error vanishes in the limit, and this completes the proof.

#### 5. Numerical Examples

In this section, several numerical examples are given to illustrate the accuracy and effectiveness of the proposed method; all of them were performed on a computer using programs written in MATLAB 7.12.0 (R2011a) (The MathWorks Inc., Natick, MA, USA). In this regard, we report in the tables and figures the values of the exact solution, the polynomial approximate solution, and the absolute error function at selected points of the given domains. It should be noted that in the first example we consider a complex differential equation with an exact polynomial solution. Our method obtains such exact polynomial solutions readily by solving the associated linear algebraic system.

*Example 12 (see [26]). *As the first example, we consider the first-order complex differential equation with variable coefficients
with the initial condition and the exact solution . We suppose that . Therefore, the collocation points are , and . According to (3), the approximate solution has the following form:
where our aim is to find the unknown Bernoulli coefficients . Since , then
Also the matrix (under the stated assumption) has the following structure, where the entries are as indicated.

According to (29), the matrix coefficients are as follows. Accordingly, the right-hand side vector has the following form:
The associated form of the initial condition is
Imposing the above initial condition on the matrix and vector yields the modified system. Therefore, the solution of the system is as follows:
Thus, we obtain the approximate solution which is the exact solution. We recall that

*Example 13 (see [26]). *As the second example, we consider the following second-order complex differential equation
with the stated initial conditions and exact solution. In this equation the coefficient functions are as indicated. Then, for the chosen truncation limit, the collocation points are as follows.

According to (29), the fundamental matrix equation is
where and are introduced in (18) and (25), respectively. Also
The right-hand side vector is
The augmented matrix forms of the initial conditions for are
By replacing the last two rows of the augmented matrix with the above augmented vectors, we arrive at the modified system. The solution of the resulting matrix-vector equation is
We also solve this equation with larger truncation limits. Since the exact solution of the equation is known, its real and imaginary parts can be written explicitly. The values of the approximate solution, for both the real and imaginary parts, together with the exact solution, are provided in Tables 1 and 2. Also, an interesting comparison between the presented method (PM) and the Taylor method (TM) [26] is provided in Figure 1. From this figure, one can see the efficiency of our method with respect to the method of [26].

*Example 14 (see [26]). *As the final example, we consider the following second-order complex differential equation
with the stated initial conditions and exact solution. In this equation the coefficient functions are as indicated. Similar to the previous two examples, we solve this equation for two values of the truncation limit, the larger being 11. In Figure 2, we provide a comparison between our method and the Taylor method (TM) [26]. According to this figure, not only is our method superior in accuracy, but the error of the presented method also behaves more stably than that of the Taylor method [26] over the computational interval. Moreover, since the Bessel method and the presented method can each be considered a preconditioned solution of a linear algebraic system originating from the above differential equation, they have the same accuracy. However, the condition number of the coefficient matrix of the Bessel method is much larger than that of our method. This is illustrated in Figure 3. It should be noted that this figure was obtained with the diagonal collocation scheme; the square collocation scheme was not used.

#### 6. Conclusions

High-order linear complex differential equations are usually difficult to solve analytically, so it is often necessary to obtain approximate solutions. For this reason, a new technique using the Bernoulli polynomials to solve such equations numerically has been proposed. This method is based on computing the coefficients of the Bernoulli series expansion of the solution of a linear complex differential equation and is valid when the data functions are defined in a rectangular domain. An interesting feature of this method is that it finds the analytical solution whenever the equation has an exact polynomial solution of degree not exceeding the truncation limit. Shorter computation time and a lower operation count, which reduce cumulative truncation errors and improve overall accuracy, are among the advantages of our method. In addition, the method can be extended to systems of linear complex differential equations with variable coefficients, though some modifications are required.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors thank the Editor and both of the reviewers for their constructive comments and suggestions to improve the quality of the paper.