Abstract

Sparse approximate inverses $M$ which minimize $\|AM - I\|_F$ have been shown to be an attractive alternative to classical smoothers like Jacobi or Gauss-Seidel (Tang and Wan; 2000). The static and dynamic computation of a SAI and a SPAI (Grote and Huckle; 1997), respectively, brings advantages like inherent parallelism and robustness, with comparable smoothing properties (Bröker et al.; 2001). Here, we are interested in developing preconditioners that can incorporate probing conditions for improving the approximation relative to high- or low-frequency subspaces. We present analytically derived optimal smoothers for the discretization of the constant-coefficient Laplace operator. On this basis, we introduce probing conditions in the generalized Modified SPAI (MSPAI) approach (Huckle and Kallischko; 2007), which yields efficient smoothers for multigrid. In the second part, we transfer our approach to the domain of ill-posed problems to recover original information from blurred signals. Using the probing facility of MSPAI, we force the preconditioner to act approximately as zero on the noise subspace. In combination with an iterative regularization method, it thus becomes possible to reconstruct the original information more accurately in many cases. A variety of numerical results demonstrate the usefulness of this approach.

1. Introduction

For applying an iterative solution method to an ill-conditioned sparse system of linear equations $Ax = b$, it is often crucial to include an efficient preconditioner $M$. Here, the original problem is replaced by the preconditioned system $MAx = Mb$ or $AMy = b$ with $x = My$. Often-used preconditioners such as Jacobi, Gauss-Seidel, or the Incomplete LU (ILU) decomposition are of unsatisfactory quality or strongly sequential. In a parallel environment, both the computation of the preconditioner and its application to any given vector should be efficient. Furthermore, an iterative solver applied to $MA$ or $AM$ should converge much faster than for $A$ alone (e.g., the condition number should satisfy $\kappa(MA) \ll \kappa(A)$).

The first two conditions can be easily satisfied by using a sparse matrix $M$ as approximation to $A^{-1}$. Note that the inverse of a sparse matrix is in general dense, but in many cases the entries of $A^{-1}$ decay rapidly, so most of the entries are very small (see Demko et al. [1]).

Sparse approximate inverses can be computed by minimizing $\|AM - I\|_F$ in the Frobenius norm, where $I$ denotes the identity matrix. In this Frobenius norm minimization, we can include further approximation conditions, as described by the Modified SPAI (MSPAI) method [2, 3]. This additional feature allows us to control the approximation property of the preconditioner: by means of probing vectors, we can choose subspaces on which the preconditioner has to satisfy certain conditions. This is especially important for iterative solution methods that distinguish between high-frequency and low-frequency components, for example, smoothing in multigrid or regularization techniques based on iterative solvers.

The outline of the paper is as follows. In Section 1, we give a survey of SPAI and MSPAI and a short description of multigrid methods and of iterative solvers for regularization problems. In Section 2, we show that for multigrid methods the smoothing property can be greatly improved by using MSPAI in comparison to SAI or SPAI smoothers. With a different subspace approach, we focus on the reconstruction of signals associated with ill-posed problems in Section 3. We present numerical results throughout the paper to demonstrate the impact of our approach at the corresponding places. A conclusion with a short outlook closes the discussion.

Benson and Frederickson [4] were the first to propose the computation of an explicit approximation to the inverse of a system matrix $A$. For an a priori prescribed sparsity pattern $P$, this can be done in a static way by solving
$$\min_{M_k} \|A M_k - e_k\|_2, \qquad k = 1, \dots, n, \qquad\qquad (1)$$
with $M_k$ the $k$th column of the preconditioner $M$, restricted to the pattern, and $e_k$ the $k$th column of the identity matrix $I$. $P$ denotes the pattern of $M$ and $P_k$ the pattern of the $k$th column of $M$. This well-known approach of providing a SAI preconditioner naturally leads to inherent parallelism, which is one of its main advantages: each of the small least squares problems regarding one column can be computed independently of the others.

Using both the index set $J_k$, which is implicitly given by $P_k$ and contains the indices $j$ such that $M_{jk} \ne 0$, and its corresponding so-called shadow $I_k$, that is, the indices of nonzero rows in $A(:, J_k)$, each subproblem in (1) is related to a small matrix $\hat{A}_k = A(I_k, J_k)$ if $A$ is sparse. We refer to the reduced sparse column vectors as $\hat{M}_k$ and $\hat{e}_k$, respectively. The solution of every reduced problem can be obtained, for example, by a QR decomposition using Householder reflections or the modified Gram-Schmidt algorithm.
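As an illustration of this static, columnwise computation, the following sketch (assuming NumPy/SciPy; the function name and pattern format are ours, not from the paper) solves the reduced least squares problem for every column of a prescribed pattern.

```python
import numpy as np
import scipy.sparse as sp

def static_sai(A, pattern):
    """Static SAI: for each column k solve min || A(I_k, J_k) m - e_k(I_k) ||_2
    over the prescribed index set J_k = pattern[k] and its shadow I_k."""
    A = sp.csc_matrix(A)
    n = A.shape[0]
    M = sp.lil_matrix((n, n))
    for k in range(n):
        Jk = np.asarray(pattern[k])
        Ahat = A[:, Jk].toarray()
        Ik = np.nonzero(Ahat.any(axis=1))[0]      # shadow: rows touched by the columns J_k
        ek = np.zeros(len(Ik))
        ek[Ik == k] = 1.0                          # reduced right-hand side
        mhat, *_ = np.linalg.lstsq(Ahat[Ik, :], ek, rcond=None)
        for j, v in zip(Jk, mhat):
            M[j, k] = v
    return M.tocsc()

# example: tridiagonal pattern for the 1D Laplacian tridiag(-1, 2, -1)
n = 10
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
pattern = [[j for j in (k - 1, k, k + 1) if 0 <= j < n] for k in range(n)]
M = static_sai(A, pattern)
print(np.linalg.norm((A @ M - sp.identity(n)).toarray(), "fro"))
```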

1.1. The SPAI Algorithm

The SPAI algorithm adds to this Frobenius norm minimization different strategies for choosing new profitable indices in $J_k$ to improve on an already computed approximation. We assume that, by solving (1) for a given index set $J_k$, we have already determined an optimal solution $\hat{M}_k$ inducing the sparse column $M_k$ with residual $r_k = A M_k - e_k$. Dynamically, we want to augment $J_k$ by new entries and solve (1) for this enlarged index set such that we obtain a reduction in the norm of the new residual.

Following Cosgrove et al. [5] and Grote and Huckle [6], in SPAI, we test one possible new index $j$ out of a given set of possible new indices to improve $M_k$. Therefore, we can consider the reduced 1D minimization problem
$$\min_{\mu_j} \|r_k + \mu_j A e_j\|_2 . \qquad\qquad (2)$$
The solution of (2) is given by $\mu_j = -\,r_k^T A e_j / \|A e_j\|_2^2$ and leads to an improved squared residual norm
$$\|r_k\|_2^2 - \frac{(r_k^T A e_j)^2}{\|A e_j\|_2^2} . \qquad\qquad (3)$$
For improving $M_k$, we only have to consider indices $j$ in rows of $A$ that are related to nonzero entries in the old residual $r_k$; otherwise, they do not lead to a reduction of the residual norm. Thus, we have to determine those column indices $j$ which satisfy $r_k^T A e_j \ne 0$. Let us denote the index set of nonzero entries in $r_k$ by $L_k$. By $N_l$, we denote the set of new indices that are related to the nonzero elements in the $l$th row of $A$, and by $\bigcup_{l \in L_k} N_l$ the set of all possible new indices that will lead to a reduction of the residual norm. The one or more newly added indices are chosen to be a subset of $\bigcup_{l \in L_k} N_l$ that corresponds to the maximal reduction in (3). For the enlarged index set, we have to update the QR decomposition of the related least squares submatrix and solve for the new column $M_k$.
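A small sketch of the candidate evaluation behind (2) and (3), assuming a dense NumPy array for $A$ and the current residual $r_k$; the function name and interface are illustrative.

```python
import numpy as np

def spai_candidate_gains(A, r, candidates):
    """For each candidate column index j solve the 1D problem (2),
    min_mu ||r + mu * A e_j||_2, and return the optimal step mu_j together
    with the decrease (r^T A e_j)^2 / ||A e_j||_2^2 of the squared residual (3)."""
    gains = {}
    for j in candidates:
        Aj = A[:, j]                      # A e_j (A assumed dense here)
        num = float(r @ Aj)               # r^T A e_j
        den = float(Aj @ Aj)              # ||A e_j||_2^2
        if num != 0.0 and den > 0.0:
            gains[j] = (-num / den, num * num / den)
    return gains

# the index with the maximal reduction would then be chosen, e.g.
# best_j = max(gains, key=lambda j: gains[j][1])
```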

It is possible to indirectly influence the sparsity and approximation quality during the computation of SPAI by different parameters, for example, how many entries are to be added in one step, how many pattern updates are to be done, which residual norm should be reached, or which initial pattern is to be used. Note that SPAI can also be applied to dense systems to compute a sparse preconditioner.

1.2. Modified SPAI

Holland et al. [7] have generalized the SPAI ansatz by allowing a sparse target matrix $B$ on the right-hand side in the form $\min_M \|AM - B\|_F$. This approach is useful in connection with some kind of two-level preconditioning: first, compute a standard sparse preconditioner for $A$, and then improve this preconditioner by an additional Frobenius norm minimization with target $B$. From an algorithmic point of view, the minimization with sparse target matrix $B$, instead of $I$, introduces no additional difficulties; simply, the sparsity pattern should be chosen more carefully with respect to $A$ and $B$.

In [2], we combine this approach with classical probing techniques [8–10], which are, for example, applied to preconditioning Schur complements. In contrast to classical probing, our basic formulation with sparse matrices $C$ and $B$,
$$\min_M \left\| \begin{pmatrix} C \\ \rho\, e^T C \end{pmatrix} M - \begin{pmatrix} B \\ \rho\, e^T B \end{pmatrix} \right\|_F , \qquad\qquad (5)$$
is not restricted to special probing subspaces, as it allows any choice of the probing vectors $e$ and of the matrices $C$ and $B$. The resulting preconditioner satisfies both $CM \approx B$ and $\rho\, e^T C M \approx \rho\, e^T B$. We refer to the first $n$ rows of (5), that is, $CM \approx B$, as the full approximation part and to the additional rows as the probing part. The weight $\rho$ enables us to control how much emphasis is put on the probing constraints, and the columns of $e$ represent the $k$-dimensional subspace on which the preconditioner should be optimal. Choosing $C = A$, $B = I$, and $\rho = 0$ in (5) leads to the classical SPAI formulation. Setting $C = I$ and $B = A$, we end up with a formulation computing explicit sparse approximations to $A$. In this case, the derived approximation $M$ of $A$ can have considerably fewer nonzeros (nnz) than $A$, but choosing $\rho > 0$, the preconditioner will have a similar action on the probing subspace as $A$.
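The columnwise least squares problem behind (5) can be set up directly; the following dense sketch (assuming NumPy, with $C = A$, $B = I$, and a single global probing vector $e$; names and the dense formulation are ours) illustrates how the probing row is stacked below the full approximation part.

```python
import numpy as np

def mspai_column(A, k, Jk, e, rho):
    """Column k of an MSPAI-type preconditioner on the index set Jk:
    solve min || [ A(:,Jk) ; rho * e^T A(:,Jk) ] m - [ e_k ; rho * e^T e_k ] ||_2,
    i.e. a columnwise version of (5) with C = A and B = I."""
    n = A.shape[0]
    Asub = A[:, Jk]                          # full approximation part
    probe = rho * (e @ Asub)                 # one additional probing row
    lhs = np.vstack([Asub, probe[None, :]])
    rhs = np.zeros(n + 1)
    rhs[k] = 1.0                             # e_k
    rhs[-1] = rho * e[k]                     # rho * e^T e_k
    m_hat, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return m_hat                             # entries of column k on the pattern Jk
```

In practice the rows would additionally be restricted to the shadow of $J_k$, exactly as in the SPAI case.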

Furthermore, it is also possible to include individual probing conditions for each column $M_k$. As a new approach, we use probing masks defined by sparse row vectors $m_k^T$, $k = 1, \dots, n$, containing the same pattern as $M_k$, and add the condition $\rho\,(m_k^T M_k - c_k)$ with a scalar right-hand side $c_k$ to the Frobenius norm minimization of the $k$th column. Corresponding to $\hat{M}_k$, $\hat{m}_k$ denotes the reduced form of $m_k$. The masks for each column of $M$ can be stored in a sparse rectangular matrix, whereby the individual probing conditions result from its rows. Compared to MSPAI using global probing vectors $e$, this approach gives considerably more freedom for choosing probing conditions individually for each column of the preconditioner. Note that, for example, for a tridiagonal pattern, a mask can be used to enforce a quasisymmetry in the column vector such that its first and third entries are approximately equal.

The field of applications using MSPAI is versatile: we can improve preconditioners resulting from ILU, IC, FSAI, FSPAI, or AINV (see [11] for an overview) by adding probing information. We also overcome the main drawbacks of MILU and classical probing, such as the restriction to certain vectors like $e = (1, 1, \dots, 1)^T$ as probing subspace and the rather difficult efficient implementation on parallel computers. The numerical examples in [2, 3] demonstrate MSPAI’s effectiveness for preconditioning various PDE matrices and for preconditioning Schur complements arising from domain decompositions.

1.3. Multigrid

The crucial observation leading to multigrid (MG) methods is the following: applying a stationary iterative solver like Gauss-Seidel iteration gives a satisfactory reduction of the error on the subspace related to high-frequency components. Therefore, Gauss-Seidel iteration is considered as a smoother in a first step of the MG algorithm. The error mainly contains smooth components and can be projected to a smaller linear system. This reduced system can be tackled recursively by the same approach based on smoothing steps and projection on an again reduced system. On the coarsest level, the small linear system can be solved explicitly. Afterwards, the coarse solutions have to be prolongated step by step back to the finer levels including postsmoothing steps.

Here, we are mainly interested in the smoother. To derive a convergent method, the smoother has to reduce the error in the high-frequency subspace. For a given matrix $A$ and an approximate inverse smoother $M$, the iteration is described by $x_{j+1} = x_j + M(b - A x_j)$, and thus the error evolves as $e_{j+1} = (I - MA)\, e_j$. As the eigenvalues and eigenvectors are analytically well known for a discretization of the constant-coefficient Laplace operator, it is possible to fully discuss the convergence behavior of MG in this special case. For analyzing the smoothing property, the eigenvectors are separated into high- and low-frequency eigenvectors. The projection on the high-frequency subspace gives the smoothing factor, defined as the spectral radius of $I - MA$ restricted to this subspace. In the constant-coefficient case, the smoothing factor can also be described as $\mu = \max_{\theta \in \Theta_{\mathrm{high}}} |1 - m(\theta) f(\theta)|$, where $f$ and $m$ are generating functions representing $A$ and $M$ [12, 13]. Note that the technique of generating functions is similar to Local Fourier Analysis (LFA) used in the standard multigrid literature [14]. In 2D, the functions are defined on $[-\pi, \pi]^2$, and the high-frequency domain $\Theta_{\mathrm{high}}$ is given by the difference between the two squares $[-\pi, \pi]^2$ and $[-\pi/2, \pi/2]^2$. Thus, the smoothing factor is given by the maximum of $|1 - m(\theta) f(\theta)|$ over this domain. In this paper, we want to use MSPAI as a smoother. SAI was already considered by Tang and Wan in [15] and SPAI by Bröker et al. in [16].
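In practice this smoothing factor is easy to evaluate numerically; the following sketch (assuming NumPy; the sampling approach and the 1D Laplacian/damped Jacobi example from Section 2.1 are illustrative) approximates $\max |1 - m(\theta) f(\theta)|$ over the high-frequency interval.

```python
import numpy as np

def smoothing_factor_1d(f, m, samples=2001):
    """Approximate the 1D smoothing factor max |1 - m(t) f(t)| over the
    high-frequency interval [pi/2, pi] by sampling."""
    t = np.linspace(np.pi / 2, np.pi, samples)
    return np.max(np.abs(1.0 - m(t) * f(t)))

# example: 1D Laplacian symbol with damped Jacobi (omega = 2/3)
f = lambda t: 2.0 - 2.0 * np.cos(t)
jacobi = lambda t: np.full_like(t, (2.0 / 3.0) / 2.0)
print(smoothing_factor_1d(f, jacobi))   # approx. 1/3
```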

1.4. Iterative Regularization

For ill-posed problems, as they arise in image restoration, regularization techniques are important in order to recover the original information. Let us consider the model problem
$$b = A x + \eta, \qquad\qquad (8)$$
where $x$ is the original image, $A$ is the blur operator, $\eta$ is a vector representing the noise, and $b$ is the observed image. We want to recover $x$ as well as possible and as fast as possible. Because $A$ may be extremely ill-conditioned or even singular, and because of the presence of noise, (8) cannot be solved directly. Consequently, to solve on the signal subspace, a regularization technique has to be applied. One of the classical methods is Tikhonov regularization [17], which solves $\min_x \{\|Ax - b\|_2^2 + \lambda \|x\|_2^2\}$ instead of (8) for a fixed regularization parameter $\lambda$.

Another regularization method is based on an iterative solver such as the Conjugate Gradient (CG) method [8, 18] for spd matrices or CG on the normal equations in the general unsymmetric case. The usual observation which coincides with the CG convergence analysis is that in the first iterations, the error is reduced relative to large eigenvalues. In later steps, the eigenspectrum related to noise and small eigenvalues dominates the evolution of the approximate solution. Therefore, the restoration has to stop after a few iterations before the method starts to reduce the error relative to the noise space.

Preconditioning should usually accelerate the convergence without destroying the quality of the reconstruction [19–21]. Structured preconditioners like Toeplitz or circulant matrices are typically considered whenever spatially invariant blur operators are treated. For general $A$, a preconditioner like ILU will lead to faster convergence, but the quality of the reconstruction will deteriorate, because the preconditioner also improves the solution relative to the unwanted noise subspace. Therefore, it is even more demanding to develop preconditioners for general $A$, for example, for spatially variant blur.

The application of the preconditioner can have three positive effects: (i) reduce the necessary number of iterations, (ii) result in a better reconstruction of the original vector, and (iii) result in a flat convergence curve such that it is easier to find the best reconstruction.

In general, we have to expect that not all three conditions can be reached simultaneously. Therefore, we have to present different preconditioners depending on the application.

Using preconditioners within iterative methods to restore original data has been successfully proposed by Nagy, for example, in [22, 23], mostly in connection with nearly structured problems. Furthermore, for the analysis and solution of discrete ill-posed systems there are several MATLAB packages such as Regularization Tools developed by Hansen [24] and RestoreTools developed by Nagy et al. [25].

2. MSPAI Smoothing

2.1. A 1D Model Problem

To derive approximate inverse smoothers, we consider the standard 1D discretized Laplace operator with constant coefficients, which is of the form $A_1 = \mathrm{tridiag}(-1, 2, -1)$. The matrix is related to the generating function or symbol
$$f(\theta) = 2 - 2\cos(\theta). \qquad\qquad (10)$$
The high-frequency part of $A_1$ is represented by (10) for $\theta \in [\pi/2, \pi]$. Hence, the optimal smoothing parameter $\omega$ in the damped Jacobi smoother $M = (\omega/2)\, I$ is found by solving the problem
$$\min_{\omega} \max_{\theta \in [\pi/2, \pi]} \left|1 - \frac{\omega}{2} f(\theta)\right|. \qquad\qquad (11)$$
The solution can be found by replacing (11) with the maximum over the two boundary values $\theta = \pi/2$ and $\theta = \pi$.
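For completeness, a short worked solution of (11); the equioscillation argument is standard, the intermediate steps are ours.

```latex
% On [\pi/2, \pi] the symbol f(\theta) = 2 - 2\cos\theta increases from 2 to 4,
% so the maximum of |1 - (\omega/2) f(\theta)| is attained at the two endpoints.
\begin{aligned}
\left|1 - \tfrac{\omega}{2} f(\tfrac{\pi}{2})\right| &= |1 - \omega|,
\qquad
\left|1 - \tfrac{\omega}{2} f(\pi)\right| = |1 - 2\omega|,\\
1 - \omega &= 2\omega - 1 \;\Longrightarrow\; \omega = \tfrac{2}{3},
\qquad \mu_{\mathrm{Jacobi}} = 1 - \tfrac{2}{3} = \tfrac{1}{3}.
\end{aligned}
```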

2.1.1. Analytical Derivation of the Optimal Smoother

As approximate inverse smoother, we choose a trigonometric polynomial of the same degree and the corresponding tridiagonal Toeplitz matrix, respectively,
$$m(\theta) = c_0 + 2 c_1 \cos(\theta), \qquad M = \mathrm{tridiag}(c_1, c_0, c_1). \qquad\qquad (12)$$
The smoothing condition for (12) can be written as
$$\min_{c_0, c_1} \max_{\theta \in [\pi/2, \pi]} |1 - m(\theta) f(\theta)| \qquad\qquad (13)$$
or, using the boundary values of $[\pi/2, \pi]$, as
$$|1 - 2 c_0|, \qquad |1 - 4 c_0 + 8 c_1|, \qquad\qquad (14)$$
together with the value at the local extreme point $\theta^*$ of the function $1 - m(\theta) f(\theta)$, where its derivative is equal to zero. We thus obtain for the quadratic condition
$$\left|1 - \frac{(c_0 + 2 c_1)^2}{4 c_1}\right| \qquad\qquad (15)$$
and the overall solution $c_0 = 8/17$, $c_1 = 2/17$, which leads to
$$M_O = \frac{1}{17}\,\mathrm{tridiag}(2, 8, 2). \qquad\qquad (16)$$
The related smoothing factor for this optimal preconditioner is $1/17 \approx 0.0588$, which is significantly smaller than the optimal smoothing factor of $1/3$ related to $\omega = 2/3$ for the Jacobi smoother.

2.1.2. Individual Probing Masks

The minimization conditions (14) on $c_0$ and $c_1$ can also be seen as conditions on the entries of the matrix $M$ directly. A column of $M$ is described by the three values $(c_1, c_0, c_1)^T$, where the main diagonal entry is related to $c_0$ and the upper and lower entries to $c_1$. Therefore, we can translate the two linear minimization conditions of (14) into individual probing masks on the entries of $M$ directly,
$$(0, 2, 0)\,(c_1, c_0, c_1)^T \approx 1, \qquad (-4, 4, -4)\,(c_1, c_0, c_1)^T \approx 1.$$
Notice that we also refer to individual probing conditions as local probing conditions. There are several possibilities to eliminate and replace the quadratic condition (15) by linear conditions that can be related to masks themselves. In one possible method, we assume that the linear conditions of (14) are satisfied exactly. Inserting $c_0 = 1/2$ into the second condition yields $c_1 = 1/8$. We replace the denominator $4 c_1$ of the quadratic condition with this value and get the new linear condition $c_0 + 2 c_1 \approx 1/\sqrt{2}$, which is related to the probing condition with isotropic probing mask
$$(1, 1, 1)\,(c_1, c_0, c_1)^T \approx \frac{1}{\sqrt{2}} .$$
We call a mask isotropic if the minimization can be described by one probing vector with identical entries and a scalar right-hand side $c$, in the form $\rho\,(e^T M_k - c)$. Here, for instance, $(1, 1, 1)$ is related to the low-frequency probing vector $e$ and $1/\sqrt{2}$ to $c$.

In our following numerical experiments, we are interested in a comparison between MSPAI with probing conditions, the optimal smoother (16), and the tridiagonal preconditioner given by SPAI. The SPAI matrix $M_S$ is derived by the minimization of the Frobenius norm, in which the reduced least squares problem for a typical inner column of $M_S$ is given by
$$\min_{\hat{M}_k} \left\| \begin{pmatrix} -1 & 0 & 0 \\ 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & -1 \end{pmatrix} \hat{M}_k - \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \right\|_2 . \qquad\qquad (20)$$
The inner columns of the solution of (20) have Toeplitz structure, described by the matrix $M_S = \frac{1}{5}\,\mathrm{tridiag}(1, 3, 1)$ related to the generating function $(3 + 2\cos\theta)/5$. Preconditioning with the SPAI matrix results in the smoothing factor 0.250. Indeed, this is a degradation compared to the smoothing factor 0.0588 for the optimally derived smoother, but still an improvement compared to Jacobi. Note that MSPAI minimizes the 2-norm of a combination of conditions, while the optimal smoother (16) is derived from a min-max criterion over the high frequencies. Therefore, to derive efficient smoothing factors by MSPAI, it is important to find a good combination and weighting of probing conditions.
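A quick numerical check of (20) and of the resulting smoothing factor (assuming NumPy; the sampling resolution is arbitrary):

```python
import numpy as np

# reduced least squares problem (20) for an inner column of the tridiagonal
# SPAI smoother of A = tridiag(-1, 2, -1)
Ahat = np.array([[-1,  0,  0],
                 [ 2, -1,  0],
                 [-1,  2, -1],
                 [ 0, -1,  2],
                 [ 0,  0, -1]], dtype=float)
ehat = np.array([0, 0, 1, 0, 0], dtype=float)
m_hat, *_ = np.linalg.lstsq(Ahat, ehat, rcond=None)
print(m_hat)                               # approx. [0.2, 0.6, 0.2] = (1/5)(1, 3, 1)

# smoothing factor of the resulting SPAI smoother over [pi/2, pi]
t = np.linspace(np.pi / 2, np.pi, 2001)
f = 2.0 - 2.0 * np.cos(t)
m = m_hat[1] + 2.0 * m_hat[0] * np.cos(t)
print(np.max(np.abs(1.0 - m * f)))         # approx. 0.25
```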

Let us consider a numerical experiment for the matrix $A_1$. Using an approximate inverse preconditioner satisfying the individual probing conditions, we achieve highly reduced smoothing factors in comparison to SPAI and Jacobi. Table 1 shows that, depending on the probing mask and the weight $\rho$, substantial reductions of the smoothing factor are possible. Moreover, the choice of the subspace weight $\rho$ is stable; that is, increasing values lead to a saturation of the achievable smoothing factors.

2.1.3. Global Probing Vectors

Considering the isotropic mask conditions derived above, it is possible to derive conditions with low- and high-frequency global probing vectors $e = (1, 1, \dots, 1)^T$ and $e = (1, -1, 1, -1, \dots)^T$, respectively, for the generalized Frobenius norm minimization (5) of MSPAI. In the following, we use a different approach and derive global probing conditions with respect to the optimally derived preconditioner (16). This can also be applied to general probing vectors representing additional high-frequency subspaces. For a given probing vector $e$, we observe that $e^T M_O \approx \alpha\, e^T$ with a scalar $\alpha$. As we want to find the smoother $M$, based on MSPAI, with the same action on $e$, we define probing conditions $\rho\,(e^T M - e^T M_O)$ and obtain the corresponding right-hand sides from the optimal smoother (16).

Figure 1(a) shows the impact of using these global probing conditions for the matrix $A_1$. Inducing MSPAI to satisfy the high-frequency condition leads to a strong reduction of the smoothing factor in comparison to SPAI, close to the optimal value of $1/17 \approx 0.0588$. A combination of global conditions can lead to an efficient smoother as well. Again, the smoothing factor stays stable for increasing values of $\rho$.

In a similar way, it is possible to derive global probing conditions with action on $A_1$, that is, conditions of the form $\rho\,(e^T A_1 M - e^T)$. For probing vectors such as $e = (1, 1, \dots, 1)^T$ and $e = (1, -1, 1, -1, \dots)^T$, the approximation $e^T A_1 \approx \lambda_e\, e^T$ holds for a scalar $\lambda_e$, and therefore $e^T A_1 M \approx \lambda_e\, e^T M$ up to some boundary perturbations. Note that for an approximate inverse preconditioner which should be nearly exact on $e$, the condition $e^T A_1 M \approx e^T$ should be satisfied.

As a generalization, we are interested in the impact of using global probing conditions when considering a 1D tridiagonal matrix with varying coefficients in its rows. We distinguish between the exact probing conditions for the chosen probing vectors and the corresponding approximate conditions for the varying-coefficient matrix.

Figure 1(b) shows that both the exact and the approximate probing on the high-frequency part of the system lead to a significant improvement compared to SPAI and Jacobi, similar to the constant-coefficient 1D problem $A_1$.

2.2. A 2D Model Problem

We consider the block tridiagonal matrix $A_2 = \mathrm{blocktridiag}(-I, T, -I)$ with $T = \mathrm{tridiag}(-1, 4, -1)$, related to the generating function
$$f(\theta_1, \theta_2) = 4 - 2\cos(\theta_1) - 2\cos(\theta_2).$$
As before, the smoothing analysis corresponds to the high-frequency region $[-\pi, \pi]^2 \setminus [-\pi/2, \pi/2]^2$. Hence, the solution of the corresponding min-max problem yields the optimal Jacobi smoother with smoothing factor $3/5$ for $\omega = 4/5$.
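A quick numerical check of this optimal damped Jacobi parameter by sampling the high-frequency region (assuming NumPy; grid resolution and the scanned $\omega$ range are arbitrary choices):

```python
import numpy as np

# sample the 2D high-frequency region [-pi,pi]^2 \ (-pi/2,pi/2)^2
t = np.linspace(-np.pi, np.pi, 401)
T1, T2 = np.meshgrid(t, t)
high = ~((np.abs(T1) < np.pi / 2) & (np.abs(T2) < np.pi / 2))
f = 4.0 - 2.0 * np.cos(T1) - 2.0 * np.cos(T2)

def mu_jacobi(omega):
    """Smoothing factor of damped Jacobi (symbol omega/4) on the 5-point stencil."""
    return np.abs(1.0 - (omega / 4.0) * f)[high].max()

print(mu_jacobi(4.0 / 5.0))                                        # approx. 3/5
print(min((mu_jacobi(w), w) for w in np.linspace(0.1, 1.5, 141)))  # optimum near omega = 0.8
```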

2.2.1. Analytical Derivation of the Optimal Smoother

As approximate inverse smoother, we use the trigonometric polynomial $m(\theta_1, \theta_2) = c_0 + 2 c_1 (\cos\theta_1 + \cos\theta_2)$. Via the minimization over the high-frequency region, the solution for the optimal smoothing preconditioner is given by $c_0 = 12/41$, $c_1 = 2/41$ with smoothing factor $9/41 \approx 0.22$.

2.2.2. Individual Probing Masks

Analogously to the 1D case, the minimization can be derived by considering
$$\min_{c_0, c_1} \max_{(\theta_1, \theta_2)\ \mathrm{high\ frequency}} |1 - m(\theta_1, \theta_2)\, f(\theta_1, \theta_2)|.$$
The extreme points of the high-frequency region result in the minimization conditions
$$|1 - 2 c_0 - 4 c_1|, \qquad |1 - 8 c_0 + 32 c_1|, \qquad\qquad (28)$$
together with a quadratic condition $|1 - (c_0 + 4 c_1)^2/(4 c_1)|$ at the interior extremum. To get rid of the quadratic condition, we make use of the fact that the linear conditions in (28) have the same absolute value for the optimal $c_0$ and $c_1$. We obtain $c_0 = 6 c_1$, and the quadratic term $(c_0 + 4 c_1)^2/(4 c_1)$ can be replaced either by $\tfrac{5}{2}(c_0 + 4 c_1)$ or by $25\, c_1$. Consequently, we end up with two additional linear conditions
$$\left|1 - \tfrac{5}{2}(c_0 + 4 c_1)\right|, \qquad |1 - 25\, c_1|. \qquad\qquad (29)$$

Once again, we can see the minimization conditions (28) and (29) on $c_0$ and $c_1$ as conditions on the entries of the smoother directly. A column of $M$ is now described by the five degrees of freedom $(c_1, c_1, c_0, c_1, c_1)^T$. Thus, we gain the corresponding local probing conditions for the 5-point stencil. It is possible to combine these conditions to obtain an isotropic mask to which the low-frequency probing vector $e = (1, \dots, 1)^T$ corresponds. To derive this, we add the diagonal and subdiagonal values of the weighted conditions and set them to be equal. Solving the resulting isotropy condition yields the weights and the right-hand side of the additional individual isotropic probing condition.

Let us consider the 2D matrix $A_2$. Following Figure 2(a), we can see that MSPAI satisfying individual probing conditions reduces the smoothing factor in comparison to SPAI with factor 0.339. Utilizing more degrees of freedom with a combination of probing masks, the smoothing factor can be reduced further towards the optimal value of $9/41 \approx 0.22$.

2.2.3. Global Probing Vectors

Similar to the 1D model problem, we observe that $e^T M_O \approx \alpha\, e^T$ for certain probing vectors $e$ and scalars $\alpha$. Therefore, we define global probing conditions by using the optimal smoother $M_O$. Regarding the 2D case, we use global vectors resulting from the Kronecker products of the 1D low- and high-frequency probing vectors. We obtain the conditions $\rho\,(e^T M - e^T M_O)$.

Like MSPAI with individual probing masks, an approximation on the subspace spanned by the global probing vectors reduces the smoothing factor compared to SPAI and Jacobi (see Figure 2(b)). The global condition leads to a stable smoothing factor of 0.252. Similar to the 1D model problem, it is also possible to derive global probing conditions with action on $A_2$; this is not considered here.

2.2.4. 9-Point Stencil of the 2D Model Problem

As a second 2D example, we consider the 9-point stencil discretization of the 2D Laplace operator, which is a block tridiagonal matrix related to a corresponding generating function in $\theta_1$ and $\theta_2$.

We use the pattern of the 9-point stencil for our approximate inverse smoother. With the corresponding trigonometric polynomial $m(\theta_1, \theta_2)$, this yields the minimization problem
$$\min \max_{(\theta_1, \theta_2)\ \mathrm{high\ frequency}} |1 - m(\theta_1, \theta_2)\, f(\theta_1, \theta_2)|. \qquad\qquad (36)$$
The optimal solution leads to an approximate inverse smoother with smoothing factor 0.0588.

We consider the boundary values of the high-frequency region to get linear conditions. A third, quadratic condition can be deduced by setting the derivative of (36) to zero and inserting its solution into (36). We thus obtain minimization conditions at high-frequency values, and their direct translation into linear probing masks and their combination reveals the individual probing conditions. In Table 2, we can see that, in comparison to SPAI, all individual probing conditions lead to a reduced smoothing factor and are stable for increasing values of $\rho$. It is almost feasible to reach the optimal value 0.0588. Again, it is possible to derive global probing conditions, for example, with action on the system matrix, by using Kronecker products of the 1D probing vectors. By an observation similar to Section 2.2.3, we can define conditions of the form $\rho\,(e^T M - \alpha\, e^T)$ for certain probing vectors $e$ and scalars $\alpha$.

3. MSPAI in Regularization

An optimal preconditioner for iterative regularization methods should treat the large eigenvalues and have no effect on the smaller eigenvalues, so as not to amplify the noise. Since SPAI is an efficient smoother and satisfies the first condition, we propose MSPAI probing as regularizing preconditioner for general $A$ to suppress a reconstruction on the noise space. Following [20], such a preconditioner $M$ should have the following properties: (i) $M \approx A^{-1}$ on the signal subspace related to the large eigenvalues, and (ii) $M \approx I$ or $M \approx 0$ on the noise subspace.

For circulant matrices, the eigendecomposition is known, and, therefore, these conditions can be satisfied by manipulating the eigenvalues. Most of these preconditioners make use of properties of structured matrices. For general matrices, this is usually not possible, and we thus use the probing facility of MSPAI in order to derive a different approximation quality on the signal or noise subspace, respectively. (i) For the signal space, we could use the vector $e = (1, 1, \dots, 1)^T$ representing smooth components, and therefore the important part of the signal subspace. In this paper, we expect that the signal subspace is already taken into account by SPAI itself, and thus we omit this probing possibility. (ii) For the noise space, we use high-frequency vectors such as $(1, -1, 1, -1, \dots)^T$ as typical vectors related to fast oscillations. In case of using probing masks, these vectors are related to corresponding short masks on the column entries.

For higher dimensional problems, probing vectors typically result from a Kronecker product of 1D probing vectors. The conditions in MSPAI are given by $\rho\,(e^T A M - e^T)$ in order to derive a good preconditioner and fast convergence on the signal subspace, and by $\rho\, e^T M \approx 0$ with high-frequency vectors $e$ for the noise subspace, in order to avoid a deterioration of the reconstruction by the preconditioner.

3.1. A 1D Model Problem

Let us consider the matrix $A = \mathrm{tridiag}(1, 2, 1)$ related to the symbol
$$f(\theta) = 2 + 2\cos(\theta).$$
As approximate inverse preconditioner, we choose the trigonometric polynomial of symbol $m(\theta) = c_0 + 2 c_1 \cos(\theta)$ related to a tridiagonal Toeplitz matrix $M = \mathrm{tridiag}(c_1, c_0, c_1)$. The entries $c_0$ and $c_1$ should be determined such that the preconditioner acts both nearly as the inverse on the signal subspace and as zero on the noise subspace. For the boundary values $\theta = 0$, $\pi/2$, and $\pi$, this leads to the minimization conditions
$$|1 - 4(c_0 + 2 c_1)|, \qquad |1 - 2 c_0|, \qquad \gamma\,|c_0 - 2 c_1|. \qquad\qquad (41)$$
By using the factor $\gamma$, we introduce a weighting between the signal and the noise conditions. We obtain the optimal solution of (41) by solving the equality of these conditions. Thus, we are able to derive the optimal regularizing preconditioner as a function of $\gamma$. In the limiting case, for only preconditioning on the signal space ($\gamma = 0$), we get $c_0 = 1/2$ and $c_1 = -1/8$, and for only regularizing on the noise subspace ($\gamma \to \infty$) we get $c_0 = 2 c_1$, that is, $M$ proportional to $A$ itself. Therefore, the preconditioner for $\gamma \to \infty$ is equivalent to the normal equations. With the parameter $\gamma$, we can choose between preconditioning on the signal subspace or suppression of noise.
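The closed-form solution is not reproduced here; purely as an illustration of the weighting idea, the following sketch (assuming NumPy) fits $(c_0, c_1)$ to the three boundary-value conditions as stated in (41) by weighted least squares. The symbol $\gamma$, the least squares formulation, and the printed limiting values are our illustrative choices, not the paper's closed-form preconditioner.

```python
import numpy as np

def regularizing_tridiag(gamma):
    """Fit (c0, c1) of m(t) = c0 + 2 c1 cos(t) to the conditions of (41):
    m(0) f(0) ~ 1 and m(pi/2) f(pi/2) ~ 1 on the signal subspace,
    gamma * m(pi) ~ 0 on the noise subspace, for f(t) = 2 + 2 cos(t)."""
    lhs = np.array([[4.0, 8.0],                  # m(0) f(0)       = 4 c0 + 8 c1
                    [2.0, 0.0],                  # m(pi/2) f(pi/2) = 2 c0
                    [gamma, -2.0 * gamma]])      # gamma * m(pi)   = gamma (c0 - 2 c1)
    rhs = np.array([1.0, 1.0, 0.0])
    (c0, c1), *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return c0, c1

print(regularizing_tridiag(0.0))     # signal conditions only: (0.5, -0.125)
print(regularizing_tridiag(100.0))   # strong noise suppression: c0 close to 2 * c1
```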

Again, the conditions (41) can be seen as conditions on the entries of the preconditioner directly, where a column is described by the three degrees of freedom $(c_1, c_0, c_1)^T$. Both the first and the second condition are covered by SPAI and are not used in MSPAI, as we can think of the SPAI approximation as acting mainly on the signal subspace. The translation of the last condition of (41) into an individual probing condition with probing mask $(-1, 1, -1)$ yields
$$\rho\,\bigl((-1, 1, -1)\,(c_1, c_0, c_1)^T - 0\bigr).$$
It is possible to transfer this individual regularization condition to the global condition with probing vector $e = (1, -1, 1, -1, \dots)^T$, representing high-frequency noisy components. Thus, we force $M$ to act as approximately zero on the noise subspace. We refer to this as the high-frequency global probing condition $\rho\,(e^T M - 0^T)$.

As a first numerical example, we examine the 1D blur operator $A^k$ with $A = \mathrm{tridiag}(1, 2, 1)$ for a given vector $x$ representing the original data. We use the MSPAI regularization property to reconstruct two different signals perturbed by random noise. By using powers of $A$, we make the problem more ill-conditioned and thus enforce the difference between the reconstruction results of all methods. In order to be able to use CG as iterative method, we ensure a symmetric and positive definite (spd) preconditioner via corresponding powers of the computed approximate inverse $M$ after its computation; that is, the preconditioner is the corresponding power of $M$. We implement the CG algorithm without a stopping criterion to iterate up to the maximum number of specified iterations, that is, to observe the semiconvergent behavior. We keep track of the reconstruction error between the original signal and the reconstruction as well as of the residual in each iteration. If not mentioned otherwise, we refer to CG as using CG without a preconditioner and to PCG as CG using a preconditioner, for example, MSPAI with properties on a certain subspace. Notice that, in general, an appropriate stopping criterion should be employed [26] when using an iterative regularization method.
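The following self-contained sketch illustrates this experimental setup (assuming NumPy/SciPy; the problem size, the power $k$, the noise level, and the plain CG loop are our illustrative choices, not the settings of the paper): blur a smooth signal with $A^k$, add noise, and record the reconstruction error in every iteration to expose the semiconvergent behavior.

```python
import numpy as np
import scipy.sparse as sp

n, k, noise = 200, 3, 1e-4
A1 = sp.diags([1.0, 2.0, 1.0], [-1, 0, 1], shape=(n, n)) / 4.0
A = (A1 ** k).tocsr()                           # ill-conditioned 1D blur operator
rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0.0, np.pi, n))     # smooth original signal
b = A @ x_true + noise * rng.standard_normal(n)

def pcg_errors(A, b, x_true, apply_M, iters):
    """Plain preconditioned CG without a stopping test; returns the relative
    reconstruction error ||x_j - x_true|| / ||x_true|| after every iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M(r)
    p = z.copy()
    errs = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        rz_old = r @ z
        r = r - alpha * Ap
        z = apply_M(r)
        p = z + ((r @ z) / rz_old) * p
        errs.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    return errs

errs = pcg_errors(A, b, x_true, lambda r: r, 60)    # unpreconditioned CG
print(int(np.argmin(errs)) + 1, min(errs))          # best iteration and its error
```

A regularizing spd preconditioner would be supplied via apply_M; the error curve first decreases and then grows again, which is exactly the semiconvergence discussed above.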

Let us consider in a first example the ill-posed problem with operator $A^k$ and original data affected by random noise of order 0.01%, where the original data is a smooth signal. The measurement of the error within each iteration shows that for increasing values of $\rho$ the reconstruction is stable, and the region of optimal reconstruction quality is broader and smoother compared to the unpreconditioned CG (see Figure 3). Moreover, by putting more weight on the regularization property of the preconditioner, it is possible to approximate the error curve of CG using the normal equations. However, we achieve almost similar reconstruction quality in fewer iterations; refer to Table 3 for details. Hence, with this family of preconditioners, we can steer the iteration to faster convergence (small weight) or slightly better reconstruction quality (large weight).

In our second 1D example, we observe the behavior for the blur operator $A^k$ and a signal which has strongly increasing entries near the boundaries and an additional nonanalytic point in the middle. The problem is perturbed by random noise of order 0.1%. A closer look at the error between the original and the reconstructed signal vector shows that the reconstruction is very good for interior components but deteriorates near the boundary. Therefore, we introduce some correction at the boundary by changing the first and last components of the probing vector with a weight factor of heuristic choice. Additionally, we correct our probing subspace in the middle, again with a heuristic weight. Following Figure 4, we can see the typical CG behavior of reaching the optimal value after a few iterations but afterwards deteriorating the reconstruction very fast by reducing the error relative to the noise subspace. Using an MSPAI preconditioner, the optimum is reached after more iterations but in a stable and smooth manner, with a slightly smaller error. Table 4 shows the optimal reconstruction error reached and the corresponding number of iterations. Compared to the unpreconditioned CG, the reconstruction is by a factor of 2.86 more accurate when using boundary corrections and by a factor of 4.72 when using both corrections within MSPAI.

Likewise, it is possible to use subspace corrections within mask probing. We change the individual probing condition for the first and last column of the preconditioner, in order to take into account the missing value which lies outside the vector. This lost information can be incorporated by using a modified probing mask for the first and the last column. Note that, similarly, interior discontinuities can also be treated by modifying the related masks. Furthermore, also for the global probing vectors it is possible to build in similar corrections, for example, by weighting the nondiagonal entries. Various experiments revealed that using such corrections at discontinuities of the data vector within the probing subspaces leads to better reconstructions, that is, smaller reconstruction errors compared to the unpreconditioned case. We will use similar techniques in Section 3.3.

As a more general problem, we consider the reconstruction of the original data based on a tridiagonal blur operator with varying coefficients, affected by random noise of order 1% and 0.1%, respectively. Following Figure 5, CG using the normal equations yields both better reconstruction after slightly more iterations and smoother convergence compared to the unpreconditioned CG. When applying MSPAI satisfying the global probing condition to CG, we are able to reconstruct with smaller error and in a smooth way. These positive effects can be reinforced by using it for CG with the normal equations: the optimal reconstruction is achieved in fewer iterations, is much better, and the convergence curve is much smoother.

3.2. A 2D Model Problem

In 2D, we consider the corresponding blur matrix with its two-dimensional symbol and a preconditioner with block tridiagonal (5-point) pattern. Analogously to the 1D case, we have to minimize the deviation of the preconditioned symbol from one relative to the signal subspace and the action of the preconditioner relative to the noise subspace. This yields the optimal block tridiagonal preconditioner.

The individual probing masks for a reduced inner column of the preconditioner, as well as the high-frequency probing vector for the 2D case, can be derived via Kronecker products of the 1D probing vectors and masks. Thus, we obtain the corresponding individual and global probing conditions, respectively.

We observe the behavior for the 2D problem with this blur operator and original data affected by random noise of order 0.1%. Again, with increasing values of $\rho$, we approximate the convergence of CG using the normal equations (refer to Figure 6). Applying MSPAI within PCG, it is almost possible to reach the reconstruction quality of CG on the normal equations, but in fewer iterations. Higher values of $\rho$ lead to smooth and broad convergence curves. CG has its optimal value after 9 iterations, MSPAI-preconditioned CG after 12 iterations, and CG using the normal equations reaches its best error only after 44 iterations.

3.3. Examples from the Regularization Toolbox

We are interested in the impact of using sparse approximate inverse preconditioners on some problems of the MATLAB package Regularization Tools Version 4.1 for analysis and solution of discrete ill-posed problems, developed by Hansen [24].

In our first example, we consider the deriv2 example, which is a discretization of a Fredholm integral equation of the first kind. We choose deriv2(n,case) with a fixed case parameter. Our problem is affected by random noise of order 0.001% and 0.01%, respectively. Note that the system matrix is dense. We apply both CG and PCG and force the MSPAI to act as approximately zero on the noise subspace by using two high-frequency probing conditions simultaneously. We weight the subspace with $\rho$ and apply the symmetrized preconditioner to the normal equations. To avoid deterioration at the boundary, we adjust the probing vectors by resetting a few of their values near the boundary.

Following Figure 7, we are able to achieve a better reconstruction of the original data, and in fewer iterations, when applying MSPAI. For other noise levels and for deriv2(n,1), we observed similar behavior.

Let us focus on the blur [24] test problem as a second example, which deals with deblurring images degraded by atmospheric turbulence blur. The matrix $A$ is an $N^2$-by-$N^2$ symmetric, doubly block Toeplitz matrix that models blurring of an $N$-by-$N$ image by a Gaussian point spread function. The parameter sigma controls the width of the point spread function and thus the amount of smoothing and ill-posedness. $A$ is symmetric, block banded, and possibly positive definite depending on the bandwidth and sigma. We choose the problem to be of size $N = 150$ with bandwidth 4 and sigma $= 1$; that is, we invoke blur(150,4,1), and $A$ is of size $22500 \times 22500$. The original data vector is denoted by $x$.

We compare the unpreconditioned CG to PCG, both for $Ax = b$ and for the normal equations $A^T A x = A^T b$. In case of preconditioning $Ax = b$, MSPAI is symmetrized, and for the normal equations the correspondingly symmetrized preconditioner is applied. For MSPAI, we impose a block tridiagonal pattern. In view of the structure of $A$, we build the high-frequency subspace by Kronecker products of oscillatory probing vectors in the regularizing global conditions, together with additional oscillatory probing vectors, all weighted with $\rho$.

Following Figure 8 and Table 5, we obtain a better reconstruction when applying MSPAI, in contrast to the unpreconditioned CG, both for $Ax = b$ and for the normal equations. We observed similar behavior for other values of the bandwidth and sigma.

As a last example, we reduce the 2D blur example of Hansen to the 1D case. For the blur operator we take the 1D analogon of the 2D blur matrix and reduce the 2D right-hand side to the appropriate size. The consideration of the earlier example with discontinuous original data (Table 4 and Figure 4) shows that the preconditioner should also take into account the behavior of the original or blurred data vector. Smoothing, for example, with an averaging mask makes sense to remove noisy components only as long as the data is continuous; at discontinuities, smoothing would cause additional errors. Therefore, we use a modified tridiagonal smoothing preconditioner whose $i$th row is of the form $(\omega_i, 1 - 2\omega_i, \omega_i)$. Here, $\omega_i$ is chosen away from zero near continuous components, but $\omega_i \approx 0$ near discontinuities. In case we know the original data $x$, we define the weights $\omega_i$ from the local variation of $x$; otherwise, we define the preconditioner from the local variation of the observed data $b$. The parameters of this choice have to be chosen in such a way that discontinuities are revealed as well as possible.
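A minimal sketch of such a data-adaptive smoothing preconditioner (assuming NumPy/SciPy; the jump detector, the threshold tau, and the weight omega are illustrative choices, not the parameters used in the paper):

```python
import numpy as np
import scipy.sparse as sp

def adaptive_smoother(data, omega=0.25, tau=2.0):
    """Tridiagonal smoothing preconditioner with i-th row (w_i, 1 - 2 w_i, w_i):
    w_i = omega near continuous components of `data`, w_i = 0 near detected jumps."""
    n = len(data)
    diffs = np.abs(np.diff(data))
    jump = diffs > tau * (diffs.mean() + 1e-15)   # crude jump detector
    w = np.full(n, omega)
    w[:-1][jump] = 0.0                            # switch smoothing off on both sides
    w[1:][jump] = 0.0                             # of a detected discontinuity
    lower = w[1:]                                 # entry (i, i-1) = w_i
    upper = w[:-1]                                # entry (i, i+1) = w_i
    diag = 1.0 - np.r_[w[0], 2.0 * w[1:-1], w[-1]]
    return sp.diags([lower, diag, upper], [-1, 0, 1], format="csr")

# e.g. built from the observed data only (for CG one would symmetrize the result):
# M = adaptive_smoother(b)
```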

Following Figure 9, the reconstruction is strongly improved if the preconditioner is adjusted relative to the discontinuities of the original data $x$. Also, using only the observed data $b$ improves the reconstruction. Therefore, we could also consider an iterative process, where a first approximation of $x$ is used to define the tridiagonal preconditioner, delivering an improved approximation which again gives a new preconditioner, and so on.

4. Conclusion

We have considered the derivation of preconditioners with special behavior on certain subspaces. For this purpose, analytic minimization problems in generating functions have been translated into MSPAI minimizations for vectors based on masks. Such mask-based probing conditions can be different for each column of the preconditioner and can be written in the form $\rho\,(\hat{m}_k^T \hat{M}_k - c_k)$ with reduced vectors $\hat{m}_k$, $\hat{M}_k$, and scalar $c_k$. Mask probing has the advantage that for each sparse column $M_k$ we can use a different sparse probing vector $\hat{m}_k$. Furthermore, we have introduced probing conditions based on probing vectors $e$ that are global for the whole matrix, in the form $\rho\,(e^T M - e^T B)$ and $\rho\,(e^T A M - e^T)$, respectively, with weight $\rho$. The probing vectors are related to the low-frequency or smooth subspace, represented by $e = (1, 1, \dots, 1)^T$, or to the high-frequency oscillatory subspace, represented, for example, by $(1, -1, 1, -1, \dots)^T$ and similar oscillatory vectors.

For multigrid methods, we have shown that the smoothing property of approximate inverses like SAI or SPAI can be improved significantly by using MSPAI with appropriate probing masks or probing vectors. In special cases, we can analytically determine the optimal approximate inverse smoother and its corresponding smoothing factor. SPAI is far from being optimal in this class, but by including convenient individual or global probing conditions, we can nearly reach the optimal smoothing factor with MSPAI. Moreover, an increasing weighting of the subspace leads to stable behavior. Our tests on systems with varying coefficients and for the 2D case demonstrate that even the usage of global probing conditions with action on $A$, only satisfying the approximation $e^T A \approx \lambda_e\, e^T$, reduces the smoothing factor in comparison to SPAI and the damped Jacobi method.

A different subspace approach becomes necessary during the recovery process of blurred signals. Here, our main focus is to allow a better and more stable reconstruction of the original signal. Applying a preconditioner in iterative regularization can easily lead to a deterioration of the reconstruction by approximating the inverse also on the noise subspace, or by removing high-frequency components of the original signal. Therefore, preconditioners have to be developed very carefully. We derive approximate inverse preconditioners analytically based on generating functions, or by applying MSPAI with probing masks or probing vectors. These preconditioners allow us to incorporate filtering for noise reduction, and they can be adjusted both to the system matrix, for example, the blur operator, and to the data vector. So, the deterioration of the reconstruction at discontinuities of the data can be reduced by modifying the probing conditions relative to the variation of the signal data. We show that these preconditioners can be used for faster convergence or better reconstruction. The application to more general problems is an interesting and important task which will be investigated in the future.