#### Abstract

Application of the pure rank-one update algorithm, as well as a combination of rank-one updates and the Sherman-Morrison formula, in computing the Moore-Penrose inverse of a particular class of Toeplitz matrices is investigated in the present paper. Such Toeplitz matrices appear in the image restoration process and in many scientific areas that use convolution. Four different approaches are developed, implemented, and tested in a number of numerical experiments.

#### 1. Introduction

Let $\mathbb{C}^{m\times n}$ and $\mathbb{C}_r^{m\times n}$ denote the set of all complex $m\times n$ matrices and the set of all complex $m\times n$ matrices of rank $r$, respectively. The identity matrix of an appropriate order is denoted by $I$. The conjugate transpose, the range, the rank, and the null space of $A \in \mathbb{C}^{m\times n}$ are denoted by $A^{*}$, $\mathcal{R}(A)$, $\operatorname{rank}(A)$, and $\mathcal{N}(A)$, respectively.

Representation and computation of various generalized inverses are closely related to the following Penrose equations: (1) $AXA = A$; (2) $XAX = X$; (3) $(AX)^{*} = AX$; (4) $(XA)^{*} = XA$. The set of all matrices $X$ obeying the conditions contained in a subset $S \subseteq \{1, 2, 3, 4\}$ is denoted by $A\{S\}$. Any matrix from $A\{S\}$ is called an $S$-inverse of $A$ and is denoted by $A^{(S)}$. For any matrix $A$ there exists a single element in the set $A\{1,2,3,4\}$, called the Moore-Penrose inverse of $A$ and denoted by $A^{\dagger}$.

A rank-one modification of a matrix $A$ is the matrix $A + uv^{*}$, which is created from $A$ and two vectors $u$ and $v$. The Sherman-Morrison formula (S-M shortly) gives the basic relationship between the inverses $A^{-1}$ and $(A + uv^{*})^{-1}$:

$$\left(A + uv^{*}\right)^{-1} = A^{-1} - \frac{A^{-1}uv^{*}A^{-1}}{1 + v^{*}A^{-1}u}, \qquad 1 + v^{*}A^{-1}u \neq 0. \quad (2)$$

The identity (2) provides a numerically efficient way to compute the inverse of the rank-one update $A + uv^{*}$. The S-M formula is important in many different fields of numerical computation.
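The identity (2) can be verified numerically in a few lines. The following is an illustrative NumPy sketch (the well-conditioned base matrix and the random vectors are assumptions for the demonstration, not part of the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned base matrix
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

Ainv = np.linalg.inv(A)
denom = 1.0 + (v.T @ Ainv @ u).item()            # must be nonzero for S-M to apply
sm_inverse = Ainv - (Ainv @ u @ v.T @ Ainv) / denom  # Sherman-Morrison update

direct = np.linalg.inv(A + u @ v.T)              # inverse of the rank-one update
assert np.allclose(sm_inverse, direct)
```

The point of (2) is that, once $A^{-1}$ is available, the updated inverse costs only matrix-vector products instead of a full inversion.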

On the other hand, Toeplitz matrices arise in a number of theoretical investigations and applications. A number of iterative processes for finding generalized inverses of an arbitrary Toeplitz matrix by modifying Newton's method have been developed so far. Adaptations of the iterative processes to the Toeplitz structure are based on the usage of the displacement operator as well as the concepts of displacement representation and displacement rank of matrices.

A variety of methods for computing the Moore-Penrose inverse of a rank-one modified matrix have been developed so far. Relationships between various generalized inverses of an arbitrary matrix and the corresponding generalized inverses of its rank-one modifications have also been investigated. The leading idea in [15, 16] was the successive computation of symmetric rank-one (SR1) updates of a given matrix, built from its individual rows. The authors introduced a computational procedure for the Moore-Penrose inverse of a symmetric rank-one perturbed matrix. Using this method, a finite method was proposed for computing the minimum-norm least-squares solution of the linear system $Ax = b$.

The derived results reveal that both the SR1 update techniques and the S-M recursive rule are useful tools in the computation of various matrix products involving the Moore-Penrose inverse of certain symmetric matrices. In particular, the algorithms introduced there are numerically efficient in the computation of several classes of generalized inverses.

In the present paper, we investigate the possibility of applying the SR1 update procedure and the S-M formula in the computation of the Moore-Penrose inverse of specific Toeplitz matrices that appear in the image restoration process. Our main motivation arises from the convenience of applying the SR1 update and the S-M procedure in removing the blur which is always present in digital images. Firstly, both the SR1 update and the S-M formula are based on the usage of columns (or rows) of the input matrix. On the other hand, the matrices which appear in the mathematical model of blur in computer-generated images possess a very specific structure which can be used to accelerate the SR1 and S-M procedures. Namely, entries in Toeplitz matrices are constant along diagonals parallel to the main diagonal and, moreover, contain a significant proportion of zero elements.

The paper is organized as follows. Some basic notation and necessary facts are restated in Section 2, together with some additional motivation. Usage of the pure SR1 update algorithm in the computation of the Moore-Penrose inverse of a class of Toeplitz matrices is considered in Section 3. A hybrid combination of the SR1 and the S-M recursive rules is defined in Section 4. An improvement of the SR1 procedure, which is derived on the basis of the specific structure of the underlying Toeplitz matrix, is presented in the same section. An application of the introduced methods in image restoration is presented in Section 5.

#### 2. Preliminaries and Motivation

Toeplitz matrices, or diagonally constant matrices, are matrices having constant entries along each diagonal. The Toeplitz matrices which are applicable in the image restoration process contain $l$ nonzero diagonals: the main diagonal and $l-1$ parallels above it, where $l$ defines the blurring process. In what follows, let us consider the Toeplitz matrix of such form:

$$T = \begin{bmatrix}
t_1 & t_2 & \cdots & t_l & & \\
 & t_1 & t_2 & \cdots & t_l & \\
 & & \ddots & \ddots & & \ddots \\
 & & & t_1 & t_2 & \cdots & t_l
\end{bmatrix} \in \mathbb{C}^{m \times n}. \quad (3)$$

The assumption $n = m + l - 1$ is active.
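A matrix of the form (3) is easy to generate programmatically. The following NumPy sketch builds such a banded Toeplitz matrix; the normalized Gaussian blurring vector used here is only an assumed example of a kernel, not a prescription from the paper:

```python
import numpy as np

def banded_toeplitz(t, m):
    """Build the m x n Toeplitz matrix of form (3): row i carries the
    blurring vector t starting at column i, with n = m + len(t) - 1."""
    l = len(t)
    n = m + l - 1
    T = np.zeros((m, n))
    for i in range(m):
        T[i, i:i + l] = t
    return T

# Assumed example: a normalized Gaussian blurring vector of width l = 4.
l, sigma = 4, 1.0
x = np.arange(l) - (l - 1) / 2.0
t = np.exp(-x**2 / (2 * sigma**2))
t /= t.sum()

T = banded_toeplitz(t, m=6)
assert T.shape == (6, 9)
assert np.linalg.matrix_rank(T) == 6   # full row rank
```

Note that every such matrix has full row rank whenever $t_1 \neq 0$, since its first $m$ columns form a nonsingular triangular block.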

To clarify notation, Toeplitz matrices of the general form (3) will shortly be referred to as matrices of type (3). We investigate the use of the SR1 update method, as described in [15, Algorithm 2], during the numerical computation of the Moore-Penrose inverse of Toeplitz matrices satisfying the pattern (3). Also, we examine different improvements of the original method. The improvements are based on appropriate adaptations of the SR1 method and the S-M formula to the characteristic structure of the underlying matrices of type (3). The method of SR1 updates is based on an expression which computes the Moore-Penrose inverse of the first $k$ columns of the initial matrix using the Moore-Penrose inverse of its first $k-1$ columns. In detail, the SR1 method starts from the well-known representation of the Moore-Penrose inverse. Chen and Ji defined matrix sequences built successively from the individual rows of the input matrix; clearly, each term of such a sequence is a rank-one modification of the previous one.

Recall that the Moore-Penrose inverse of a general rank-one modified matrix $A + uv^{*}$, where $A$ is an arbitrary matrix and $u$, $v$ are arbitrary vectors, is obtained in [14, Theorem 3.1.3]. The general theorem suggests six different cases that one has to follow in order to establish a relation between $(A + uv^{*})^{\dagger}$ and $A^{\dagger}$. It was proved that the real number corresponding to the denominator term in (2) is always nonzero in the symmetric setting considered here. Using this result in conjunction with the fact that the underlying matrix is positive semidefinite, the six cases of Theorem 3.1.3 from [14] can be reduced to a two-case problem. This reduction simplifies the SR1 update formulas.

Let us denote the first $m$ columns of $T$ by $T_1$. Also, the last $n - m = l - 1$ columns of $T$ are denoted by $T_2$. Then the matrix $T$ is given in the block form $T = \begin{bmatrix} T_1 & T_2 \end{bmatrix}$, where the square block $T_1$ is a nonsingular band Toeplitz matrix and $T_2$ collects the last $l - 1$ columns of $T$.

An application of Greville's partitioning method and of the block partitioning method (BP method, shortly) in computing the Moore-Penrose inverse of the matrix $T$ has been presented previously. From the corresponding algorithms and lemmas, it is clear that the specific structure of the matrix $T$ enables the computation of $T^{\dagger}$ simply by computing the inverse of the nonsingular block consisting of its first $m$ columns.

In this paper, we investigate some alternative methods for computing $T^{\dagger}$ using the SR1 updates and the S-M formula. Our intention is to decrease the computational complexity as much as possible by exploiting the specific structure of Toeplitz matrices of the general form (3).

#### 3. Computing the Pseudoinverse of a Toeplitz Matrix by Rank-One Updates

Our first attempt consists in applying the unmodified SR1 method in order to compute the pseudoinverse of a matrix of type (3). The obtained results are compared with the corresponding results derived by applying the BP method. The results of this comparison are presented in Table 1, where the listed parameters define the Gaussian blur modeled by the Toeplitz matrix $T$. In the rest of the paper, it is assumed that the matrix $T$ is of order $m \times n$, where $n = m + l - 1$ and $l$ represents the width of the blurring function.

If an approximation of $T^{\dagger}$ is denoted by $X$, then the residual norms are denoted by

$$\|TXT - T\|, \quad \|XTX - X\|, \quad \|(TX)^{*} - TX\|, \quad \|(XT)^{*} - XT\|.$$

For the sake of simplicity, let us denote the rank-one update algorithm by SR1 and the block partitioning algorithm by BP.
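These residual norms can be computed in a few lines. The following NumPy sketch is illustrative: `np.linalg.pinv` merely stands in for whichever method produced the approximation $X$:

```python
import numpy as np

def penrose_residuals(T, X):
    """Frobenius norms of the four Penrose-equation residuals for X ~ T^+."""
    return (np.linalg.norm(T @ X @ T - T),
            np.linalg.norm(X @ T @ X - X),
            np.linalg.norm((T @ X).conj().T - T @ X),
            np.linalg.norm((X @ T).conj().T - X @ T))

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 7))
X = np.linalg.pinv(T)
assert all(r < 1e-10 for r in penrose_residuals(T, X))
```

All four residuals vanish exactly when $X = T^{\dagger}$, so their magnitudes measure how far an approximation is from satisfying each Penrose equation.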

From Table 1, it is observable that the SR1 method is marginally more accurate, but it is time consuming, especially for larger dimensions. There is a theoretical explanation for these results. As stated in the cited analysis, the rank-one update algorithm works efficiently only in cases where the rank of the input matrix is small relative to its dimensions. In our case, $\operatorname{rank}(T) = m = n - l + 1$, where $l$ represents the width of the blurring process. Since, in the general case, $l$ is a relatively small integer, the rank of $T$ is close to both of its dimensions. This fact causes the relative inefficiency of the SR1 method.

Our second attempt is to apply Algorithm SR1 to the nonsingular block consisting of the first $m$ columns of $T$, in order to generate its inverse. Then the Moore-Penrose inverse of $T$ comes from the block partitioning method (called BP), as described above. This approach is shortly denoted by HSR1 + BP.

It is observable from Table 2 that the method HSR1 + BP is marginally more accurate, but it is time consuming, especially for larger dimensions.

From the previous analysis of the data included in Tables 1 and 2, we conclude that neither SR1 nor HSR1 + BP is an efficient approach for computing $T^{\dagger}$ compared with the BP method in terms of computational speed, especially for large matrices. However, both approaches show a marginal advantage in terms of accuracy.

#### 4. A Combination of SR1 Updates and S-M Formula

Two additional algorithms are defined in the current section. The first one uses the SR1 updates to compute the Gram matrix of the first $m$ columns in the first step and the S-M formula to compute the Moore-Penrose inverse in the subsequent step. The second algorithm replaces the usage of the SR1 updates from the first step by a direct construction of that Gram matrix, which can be generated efficiently in view of the specific structure of the input matrix $T$.

##### 4.1. SR1 Updates in Conjunction with the S-M Formula

As usual, the $i$th column of $T$ is denoted by $t_i$. Then, since $T$ is of full row rank, it is advantageous to define a proper SR1 recursive computational method for generating the Moore-Penrose inverse of $T$ on the basis of the relation

$$T^{\dagger} = T^{*}\left(TT^{*}\right)^{-1}.$$

For that purpose, it is necessary to define the matrix sequence

$$G_l = \sum_{i=1}^{l} t_i t_i^{*}, \quad l = 1, \dots, n. \quad (14)$$

Lemma 1 reveals some of the basic properties of the sequence $\{G_l\}$. Following the notation from the cited work, we distinguish whether a column is linearly dependent on the previous columns or linearly independent of them.
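The whole construction rests on the full-row-rank identity $T^{\dagger} = T^{*}(TT^{*})^{-1}$, which can be checked quickly. A NumPy sketch with an assumed random full-row-rank matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 7))            # full row rank with probability 1

# For a full-row-rank matrix, the Moore-Penrose inverse has the closed form
# T^+ = T* (T T*)^{-1}, which the SR1/S-M scheme exploits.
pinv_closed = T.T @ np.linalg.inv(T @ T.T)
assert np.allclose(pinv_closed, np.linalg.pinv(T))
```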

Lemma 1. The following statements are valid: (i) $t_l$ is linearly independent of $t_1, \dots, t_{l-1}$, for each $l \le m$. (ii) $\operatorname{rank}(G_l) = l$, for each $l \le m$. (iii) $G_l$ is singular, for each $l < m$, and invertible, for each $l \ge m$. (iv) The following concluding relation is valid: $T^{\dagger} = T^{*}G_n^{-1}$.

Proof. (i) This part of the proof is implied by the fact that the first $m$ columns of $T$ form a nonsingular block and are therefore linearly independent.
(ii) According to part (i), $\operatorname{rank}(G_l) = l$ immediately follows, for each $l \le m$. Further, it is easy to verify that $\mathcal{R}(G_l)$ coincides with the span of $t_1, \dots, t_l$, which implies the statement. (iii) This part of the proof can be verified by using (i) and (ii), together with the fact that the $G_l$ are all $m \times m$ matrices, for each $l$.
(iv) According to part (iii), the matrices $G_l$, $l \ge m$, are invertible. Since $G_n = TT^{*}$ is invertible, it holds $T^{\dagger} = T^{*}(TT^{*})^{-1} = T^{*}G_n^{-1}$, which completes the proof.

Remark 2. According to Lemma 1, the matrices $G_l$ are of order $m \times m$ and it holds $\operatorname{rank}(G_l) = l$, for each $l \le m$, while $\operatorname{rank}(G_l) = m$, for each $l \ge m$. Since $\operatorname{rank}(G_m) = m$, the invertibility of $G_l$, $l \ge m$, is guaranteed, so that we are able to apply the S-M formula which relates $G_l^{-1}$ with $G_{l-1}^{-1}$, for each $l > m$.
According to Lemma 1, the matrices $G_l$, $l \ge m$, are invertible. Therefore, it is possible to define the following matrix sequence:

$$Y_l = G_l^{-1}T, \quad l = m, m+1, \dots, n. \quad (20)$$

This definition immediately implies $Y_n = (TT^{*})^{-1}T$, that is, $Y_n^{*} = T^{\dagger}$.

Lemma 3. The matrices $Y_l$ are well defined and $\operatorname{rank}(Y_l) = m$, for each $l \ge m$.

Proof. The proof of the lemma follows from the fact that $G_l$ is regular, for each $l \ge m$.

Now, we turn our attention to (20). Since the matrices $G_l$, $l \ge m$, are regular, it is possible to find a relation between $G_l^{-1}$ and $G_{l-1}^{-1}$ by applying the Sherman-Morrison formula (2). This relation is given in Proposition 4.

Proposition 4. Let the matrix $T$ be defined as in (3). Assume that the matrix sequence $\{G_l\}$ is defined in (14). The inverse of $G_l$ is equal to

$$G_l^{-1} = G_{l-1}^{-1} - \frac{G_{l-1}^{-1} t_l t_l^{*} G_{l-1}^{-1}}{1 + t_l^{*} G_{l-1}^{-1} t_l} \quad (21)$$

for each $l > m$.

In order to present the finite recursive algorithm for computing the Moore-Penrose inverse of $T$ with the help of the actual SR1 update formulas (20) and (21), we first define $Y_m = G_m^{-1}T$ and $X_m = Y_m^{*}$. For each $l = m+1, \dots, n$, it is necessary to compute $Y_l$ and $X_l$. This requires the calculated values of the vector $y_l$ (the $l$th column of $Y_{l-1}$) and the matrix $X_{l-1}$.

Theorem 5. Let the matrix $T$ be defined as in (3), the matrix sequence $\{G_l\}$ defined in (14), and the matrix sequence $\{Y_l\}$ defined in (20). Denote by $y_l$ the $l$th column of $Y_{l-1}$ and let $g_l = 1 + t_l^{*}y_l$. Then the following recursive relations are true:

$$Y_l = \left(I - \frac{y_l t_l^{*}}{g_l}\right) Y_{l-1}, \quad (22)$$

$$X_l = X_{l-1} - \frac{T^{*} y_l y_l^{*}}{g_l}. \quad (23)$$

Proof. Note that from (14) it follows that $G_l = G_{l-1} + t_l t_l^{*}$. Based on the use of (21), and since $G_{l-1}^{-1}t_l = y_l$ and $t_l^{*}G_{l-1}^{-1} = y_l^{*}$, for $l = m+1, \dots, n$ we obtain

$$Y_l = G_l^{-1}T = \left(G_{l-1}^{-1} - \frac{y_l t_l^{*} G_{l-1}^{-1}}{g_l}\right)T = \left(I - \frac{y_l t_l^{*}}{g_l}\right)Y_{l-1}, \quad (24)$$

so that statement (22) follows from (24).
Also, the matrices $X_l = Y_l^{*} = T^{*}G_l^{-1}$, $l \ge m$, satisfy $X_l = T^{*}G_{l-1}^{-1} - \dfrac{T^{*}y_l y_l^{*}}{g_l} = X_{l-1} - \dfrac{T^{*}y_l y_l^{*}}{g_l}$. So, (23) holds.

According to Theorem 5, we present Algorithm 1 for computing the Moore-Penrose inverse of the Toeplitz matrix $T$. The algorithm is based on the SR1 modifications to compute the Gram matrix of the first $m$ columns and subsequently the S-M formula to compute $T^{\dagger}$. For this purpose, we use the abbreviation HSR1 + S-M to denote Algorithm 1.

Require: Start with $T \in \mathbb{C}^{m \times n}$ of the form (3) and $G_0 = O$. Let $t_i$ be the $i$th column of $T$, for $i = 1, \dots, n$.
(1) (SR1 Step) For $i = 1, \dots, m$, compute $G_i = G_{i-1} + t_i t_i^{*}$.
(2) Set $Y_m = G_m^{-1} T$ and $X_m = Y_m^{*}$.
(3) (S-M Step) Compute $Y_l$ and $X_l$ by using (22) and (23), respectively, for all $l = m+1, \dots, n$.
(4) Output $T^{\dagger} = X_n$.

A Matlab implementation of Algorithm 1 is given in the Appendix.
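The overall scheme of Algorithm 1 — accumulate the Gram matrix of the first $m$ columns by rank-one updates, fold in the remaining columns with S-M updates of its inverse, and finish with $T^{\dagger} = T^{*}G_n^{-1}$ — can be sketched compactly. The following NumPy code is an illustrative reimplementation of the idea, not the authors' Matlab code:

```python
import numpy as np

def pinv_sr1_sm(T):
    """Sketch of the HSR1 + S-M idea for a full-row-rank m x n matrix T."""
    m, n = T.shape
    # SR1 step: Gram matrix of the first m columns, built by rank-one updates.
    G = np.zeros((m, m))
    for i in range(m):
        c = T[:, i:i+1]
        G += c @ c.T
    Ginv = np.linalg.inv(G)
    # S-M step: fold in the remaining columns t_{m+1}, ..., t_n.
    for i in range(m, n):
        c = T[:, i:i+1]
        y = Ginv @ c
        Ginv -= (y @ y.T) / (1.0 + (c.T @ y).item())
    # Since G_n = T T*, the pseudoinverse is T* G_n^{-1}.
    return T.T @ Ginv

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 7))
assert np.allclose(pinv_sr1_sm(T), np.linalg.pinv(T))
```

The single explicit inversion is of the $m \times m$ Gram matrix of the first $m$ columns; every later column costs only one S-M update.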

##### 4.2. An Improvement of the Hybrid Combination in Algorithm 1

By taking into account the fact that the Toeplitz matrix $T$ has a specific structure, the Gram matrix computed in step (1) of Algorithm 1 can be generated directly, in an efficient way. In this manner, an improvement of step (1) of Algorithm 1 is introduced. The effectiveness of this method is justified by the numerical experiments presented in Section 4.3.

According to the structure of the matrix $T$ and step (1) of Algorithm 1, the matrix $G_m$ produced by that step is a banded symmetric matrix. Let $e$ be a one-dimensional vector; then the symmetric Toeplitz matrix generated by $e$ is denoted by $\operatorname{toeplitz}(e)$. Also, for two appropriate vectors $c$ and $r$, we denote by $\operatorname{hankel}(c, r)$ the (symmetric) Hankel matrix whose first column is $c$ and whose last row is $r$.

Theorem 6. The matrix $G_m$ can be expressed as the difference $G_m = E - F$ of a specific banded symmetric Toeplitz matrix $E$ and a partitioned matrix of the form

$$F = \begin{bmatrix} O & O \\ O & H^{2} \end{bmatrix},$$

where the nonzero block $H^{2}$, of order $(l-1) \times (l-1)$, is defined for an appropriate Hankel matrix $H$.

Proof. Let $e = (e_1, \dots, e_m)$ and $h = (0, \dots, 0, t_l)$ be vectors defined by

$$e_i = \begin{cases} \sum_{j=1}^{l-i+1} t_j t_{j+i-1}, & 1 \le i \le l, \\ 0, & l < i \le m, \end{cases}$$

and $h \in \mathbb{C}^{l-1}$, respectively. Then we define the matrices $E = \operatorname{toeplitz}(e)$ and $H = \operatorname{hankel}(h, (t_l, t_{l-1}, \dots, t_2))$, and $F$ as above. A simple verification shows that $G_m = E - F$.

Next we present an example, in low dimensions, in order to illustrate Theorem 6.

Example 7. Consider the Toeplitz matrix $T$, defined in (3), for small values of $m$, $n$, and $l$. According to (14), the matrix $G_m$ is the Gram matrix of the first $m$ columns of $T$.

Let $E = \operatorname{toeplitz}(e)$ and $H$ be the Toeplitz and Hankel matrices constructed from the blurring vector $(t_1, \dots, t_l)$ as in the proof of Theorem 6. Finally, it is necessary to construct the matrix $F$ whose only nonzero block is $H^{2}$, placed in the lower-right corner. It is then easy to verify that $G_m = E - F$.
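The decomposition of Theorem 6 can also be checked numerically. The NumPy sketch below (with an arbitrary assumed blurring vector) builds $G_m$ directly and via the Toeplitz-minus-Hankel construction:

```python
import numpy as np

t = np.array([0.5, 0.3, 0.2])     # assumed blurring vector, l = 3
l, m = len(t), 6
n = m + l - 1

# Banded Toeplitz matrix T of form (3).
T = np.zeros((m, n))
for i in range(m):
    T[i, i:i + l] = t

# G_m: Gram matrix of the first m columns of T.
Gm = T[:, :m] @ T[:, :m].T

# E: symmetric Toeplitz matrix of the autocorrelation of t, padded to length m.
e = np.zeros(m)
for k in range(l):
    e[k] = np.sum(t[:l - k] * t[k:])
idx = np.arange(m)
E = e[np.abs(idx[:, None] - idx[None, :])]

# H: (l-1) x (l-1) Hankel matrix with first column (0,...,0,t_l)
# and last row (t_l, t_{l-1}, ..., t_2); the correction block is H @ H.
H = np.zeros((l - 1, l - 1))
for r in range(l - 1):
    for c in range(l - 1):
        s = r + c
        H[r, c] = t[2 * l - 3 - s] if s >= l - 2 else 0.0

F = np.zeros((m, m))
F[m - l + 1:, m - l + 1:] = H @ H
assert np.allclose(Gm, E - F)
```

Building $G_m$ this way avoids the $m$ rank-one accumulations of step (1), which is the source of the speedup reported for Algorithm 2.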

According to Theorem 6, we present the following improved version of Algorithm 1, called Algorithm 2. A Matlab implementation of Algorithm 2 is given in the Appendix. It is appropriate to use the term IHSR1 to denote the improvement of the HSR1 step of Algorithm 1 and IHSR1 + S-M to denote Algorithm 2.

Require: Start with $T \in \mathbb{C}^{m \times n}$ of the form (3) and $l$. Let $t_i$ be the $i$th column of $T$, for $i = 1, \dots, n$.
(1) (Improved HSR1 Step) Define $E$ and $F$ as in Theorem 6. Then set $G_m = E - F$.
(2) Set $Y_m = G_m^{-1} T$ and $X_m = Y_m^{*}$.
(3) (S-M Step) Compute $Y_l$ and $X_l$ by using (22) and (23), respectively, for all $l = m+1, \dots, n$.
(4) Output $T^{\dagger} = X_n$.

Note that the proposed Algorithms 1 and 2 do not directly use, at any step, the Sherman-Morrison formula to compute the explicit inverse of any matrix. In Section 4.4, we provide more information regarding this situation.

##### 4.3. Numerical Experiments

In this subsection we analyze numerical data arising during the computation of the Moore-Penrose inverse of the Toeplitz matrix $T$ by applying a Matlab implementation of Algorithms 1 and 2. In order to test the time efficiency as well as the accuracy of the considered methods, we enrich the collection of matrices used in Tables 1 and 2 with larger matrices. In order to complete our numerical study, we also compare with the results derived by applying the block partitioning method (Algorithm BP). A comparison of Algorithms 1 and 2 and BP is presented in Table 3.

From Table 3, it is observable that the improvement in CPU time achieved by using Algorithm 2 instead of Algorithm 1 is significant. Also, it is clear that the accuracy of both algorithms is almost the same. Still, the BP method requires the minimal CPU time.

##### 4.4. Rounding Error Analysis

The starting motivation of this subsection is the following basic problem: the approximation error grows with repeated use of the S-M formula. As a consequence, computations which involve the S-M rule may become unstable. For this reason, it is interesting to investigate the error curves corresponding to the four Penrose equations during the iterations included in step (3) of Algorithm 2. Nevertheless, it is important to note that the proposed Algorithm 2 does not apply the S-M rule to invert a matrix at each step. Theorem 5 is at the heart of Algorithm 2, but it is also clear that the poor stability of the S-M formula has no serious influence on the stability of the proposed algorithms.

So, in this subsection, we present error estimations regarding Algorithm 2. That is, we record and investigate the residual matrix norms $\|TXT - T\|$, $\|XTX - X\|$, $\|(TX)^{*} - TX\|$, and $\|(XT)^{*} - XT\|$, corresponding to the four Penrose equations during the execution of step (3) of Algorithm 2. The results are presented in Figures 1, 2, and 3 for three parameter settings; the choice of these matrices from Table 3 is random.

The purple line in each of Figures 1–3 corresponds to the values of the matrix norm $\|(X_lT)^{*} - X_lT\|$, $l = m, \dots, n$. Since $X_l = T^{*}G_l^{-1}$ and the matrix $G_l^{-1}$ is symmetric, the matrix $X_lT = T^{*}G_l^{-1}T$ is symmetric. For this reason, the graph corresponding to these values is almost a constant line.

Stable convergence is observable in each of these figures. Also, it is possible to notice very fast convergence in the terminal phase of the recurrence, as well as relatively slower convergence in the middle of the recurrent process. This is understandable, given that all the columns of the input matrix must be taken into consideration in order to obtain complete information on the pseudoinverse.

#### 5. Application in Image Restoration

Several image restoration examples are presented in this section. The experiments were done using the Matlab programming package on an Intel(R) Core(TM) i5-2540M CPU @ 2.6 GHz 64-bit system with 8 GB of RAM.

Numerical results corresponding to the Moore-Penrose inverse are derived using Algorithm 2, for several different values of the blurring parameters. Suppose that the matrix $X$ corresponds to the original image and $B$ is the matrix corresponding to the degraded image.

The model of the nonuniform blurring is the same as in the cited work and assumes that the blurring of the columns in the image is independent of the blurring of its rows. Therefore, the relations between the original image $X$ and the blurred image $B$ are expressed by the matrix equation

$$B = H_C X H_R^{T},$$

where $H_C$ and $H_R$ are banded Toeplitz blurring matrices of the form (3), determined by the length of the vertical blurring and the length of the horizontal blurring (in pixels), respectively. Following the approach used in the cited work, the Moore-Penrose inverse is used to restore the blurred image $B$, which produces the next approximation of $X$:

$$\widehat{X} = H_C^{\dagger} B \left(H_R^{\dagger}\right)^{T}.$$

The algorithm for computing the Moore-Penrose inverse of the blurring matrix used in [24, 25] is based on the method for computing the Moore-Penrose inverse of a full rank matrix, introduced in [26, 27]. In our case, we use Algorithm 2.
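The separable blur/restore model can be illustrated with a small NumPy sketch. The kernel, dimensions, and random "image" below are assumptions for the demonstration; since the blurring matrices have full row rank, re-blurring the restored image reproduces the blurred image exactly in the noise-free case:

```python
import numpy as np

def blur_matrix(t, m):
    """m x (m + len(t) - 1) banded Toeplitz blurring matrix of form (3)."""
    l = len(t)
    T = np.zeros((m, m + l - 1))
    for i in range(m):
        T[i, i:i + l] = t
    return T

rng = np.random.default_rng(4)
t = np.array([0.25, 0.5, 0.25])   # assumed 1-D blurring kernel
m = 8
Hc = blur_matrix(t, m)             # vertical blurring
Hr = blur_matrix(t, m)             # horizontal blurring
n = Hc.shape[1]

X = rng.random((n, n))             # "original" image
B = Hc @ X @ Hr.T                  # blurred image (noise-free model)

# Moore-Penrose restoration: X_hat = Hc^+ B (Hr^+)^T.
X_hat = np.linalg.pinv(Hc) @ B @ np.linalg.pinv(Hr).T

# Hc and Hr have full row rank, so re-blurring X_hat reproduces B exactly;
# X_hat itself is the minimum-norm least-squares reconstruction of X.
assert np.allclose(Hc @ X_hat @ Hr.T, B)
```

Here `np.linalg.pinv` stands in for Algorithm 2, which is what the paper actually uses to produce the pseudoinverses of the blurring matrices.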


The most popular algorithm for image restoration based on the matrix pseudoinverse is developed using the singular value decomposition (SVD). However, the SVD is sensitive to singular values very close to zero. For this reason, it is a common approach to leave out small singular values, corresponding to high-frequency components. The resulting method is known as the truncated SVD (or TSVD shortly). On the other hand, the methods proposed in the present paper are based on a different approach and do not require any kind of regularization. In order to verify the applicability of the proposed method in image restoration, we present a comparison of the results generated by the TSVD and by the rank-one updates.
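For reference, the TSVD pseudoinverse mentioned above can be sketched as follows (an illustrative NumPy version; the truncation level `k` is the regularization parameter one would tune in practice):

```python
import numpy as np

def tsvd_pinv(A, k):
    """Truncated-SVD pseudoinverse: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]   # discard small singular values (high frequencies)
    return Vt.T @ np.diag(s_inv) @ U.T

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 8))
# With all singular values kept, TSVD reduces to the ordinary pseudoinverse.
assert np.allclose(tsvd_pinv(A, 5), np.linalg.pinv(A))
```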

Degradation of the Barbara image is defined by the blurring parameters together with added salt-and-pepper noise. The original and damaged images are presented in Figure 4.

A comparison with state-of-the-art methods in image processing is presented, namely, the constrained least-squares filter (CLS), the Wiener filter (WF), the Lucy-Richardson algorithm (LR), and the truncated singular value decomposition (TSVD). The improvement in signal-to-noise ratio (ISNR) and the Dice coefficient (DC) are considered as quality measures.

Results corresponding to ISNR values are presented in Figures 5 and 6.

Values corresponding to Dice Coefficient (DC) [29, 30] are presented in Figure 7.

Restored images are presented in Figures 8 and 9.

#### 6. Conclusion

The present paper investigates the application of the SR1 update procedure and the S-M formula in the computation of the Moore-Penrose inverse of specific Toeplitz matrices that appear in the image restoration process.

The first approach is based on the usage of the pure SR1 update algorithm. The second approach starts with the SR1 update algorithm and, after its completion, continues with the BP method. The third approach is a hybrid combination which comprises the SR1 updates in the first step and the S-M formula in the second phase (Algorithm 1). The fourth approach is based on the improvement of the SR1 update step of Algorithm 1, which leads to Algorithm 2, with the best performance. The improvement is achieved using the specific structure of the underlying Toeplitz matrix.

An application of Algorithm 2 in image restoration is considered. In this case, a comparison with the modified BP method from  is presented.

#### Appendix

The Matlab implementation of Algorithm 1 is given in Algorithm 3 under the function alg41. Note that the correct performance of alg41 requires the presence of the yltXl function for computing the sequences $Y_l$ and $X_l$.

```matlab
function alg41 = alg41(A)
%***************************%
%   General Information.    %
%***************************%
% Synopsis:
%   alg41 = alg41(A)
% Input:
%   A = the initial matrix of interest.
% Output:
%   alg41 = the Moore-Penrose inverse, computed by rank-one updates
%           and the Sherman-Morrison formula.
% This function follows Algorithm 1.
% The correct performance of alg41 requires the presence of the yltXl function.
Astar = A';
[m, n] = size(A);
G0 = zeros(m, m);
if m > 1
    % (SR1 Step) Accumulate the Gram matrix of the first m columns.
    for i = 1:m-1
        Gkout = G0 + A(:,i)*Astar(i,:);
        G0 = Gkout;
    end
    Gm = Gkout + A(:,m)*Astar(m,:);
else
    Gm = A(:,1)*Astar(1,:);   % single-row case
end
Gkout = Gm;
Y02 = Gkout\A;
X02 = Y02';
% (S-M Step) Process the remaining columns.
for l = m+1:n-1
    [yltout, Xlout] = yltXl(Y02, X02, Astar, l, m, n);
    Y02 = yltout;
    X02 = Xlout;
end
alg41 = X02 - Astar*Y02(:,n)*Y02(:,n)'/(1 + Astar(n,:)*Y02(:,n));
```

The Matlab implementation of Algorithm 2 is given in Algorithm 4 under the function alg42. Note that the correct performance of alg42 requires the presence of the yltXl function for computing the sequences $Y_l$ and $X_l$.

```matlab
function alg42 = alg42(A, ll)
%***************************%
%   General Information.    %
%***************************%
% Synopsis:
%   alg42 = alg42(A, ll)
% Input:
%   A  = the m x n initial matrix of interest.
%   ll = n - m + 1.
% Output:
%   alg42 = improvement of Algorithm 1 according to Theorem 6.
% This function follows Algorithm 2.
% The correct performance of alg42 requires the presence of the yltXl function.
Astar = A';
[m, n] = size(A);
f = A(1, 1:ll);
% (Improved HSR1 Step) Build the banded symmetric Toeplitz matrix E
% from the autocorrelation of the blurring vector f.
E0 = zeros(1, m);
product = f'*f;
for i = 1:ll
    E0(1,i) = sum(diag(product, i-1));
end
E1 = E0;
E = toeplitz(E1);
% Build the Hankel correction block and square it.
G0 = zeros(ll-1);
Gnew = hankel([zeros(1,ll-2) f(end)], flip(f(1,2:end)));
for i = 1:ll-1
    Gkoutnew = G0 + Gnew(:,i)*Gnew(i,:);
    G0 = Gkoutnew;
end
Gmnew = G0;
fullmatrix = [zeros(m-ll+1) zeros(m-ll+1,ll-1); zeros(ll-1,m-ll+1) Gmnew];
Gkout = E - fullmatrix;
Y02 = Gkout\A;
X02 = Y02';
% (S-M Step) Process the remaining columns.
for l = m+1:n-1
    [yltout, Xlout] = yltXl(Y02, X02, Astar, l, m, n);
    Y02 = yltout;
    X02 = Xlout;
end
alg42 = X02 - Astar*Y02(:,n)*Y02(:,n)'/(1 + Astar(n,:)*Y02(:,n));
```

The yltXl Function. See Algorithm 5.

```matlab
function [yltout, Xlout] = yltXl(Y0, X0, G, l, m, n)
% One Sherman-Morrison step: updates the sequences Y and X
% after appending the l-th column of the input matrix.
d = n - l;
Y0l = Y0(:,l);
g = 1 + G(l,:)*Y0l;
g1 = Y0l*G(l,:);
multiplier = eye(m) - g1/g;
Y0lherm = Y0l*Y0l';
Xl = X0 - G*Y0lherm/g;
Xlout = Xl;
ylt = zeros(m, n);
for i = 1:d
    t = l + i;
    ylt(:,t) = multiplier*Y0(:,t);
end
yltout = ylt;
```

#### Competing Interests

The authors declare that there are no conflicts of interest regarding the publication of this paper.

#### Acknowledgments

Predrag S. Stanimirović gratefully acknowledges support from Research Project 174013 of the Serbian Ministry of Science. Vasilios N. Katsikis gratefully acknowledges that this work was carried out at the Department of Economics, University of Athens, Greece, and supported by Special Account Research Grant 11698. Predrag S. Stanimirović and Igor Stojanović gratefully acknowledge support from the Project "Applying Direct Methods for Digital Image Restoring" of Goce Delčev University.