Abstract

Iterative reconstruction (IR) algorithms based on the principle of optimization are known for producing better reconstructed images in computed tomography. In this paper, we present an IR algorithm based on minimizing a symmetrized Kullback-Leibler divergence (SKLD), also called Jeffreys' J-divergence. For consistent inverse problems, the SKLD is guaranteed to decrease monotonically with the iterative steps and the iterates to converge, which we establish using a continuous dynamical method. Specifically, we construct an autonomous differential equation for which the proposed iterative formula gives a first-order numerical discretization and demonstrate the stability of a desired solution using Lyapunov's theorem. We describe a hybrid Euler method that combines additive and multiplicative calculus to construct an effective and robust discretization, thereby enabling us to obtain an approximate solution to the differential equation. We performed experiments and found that the IR algorithm derived from the hybrid discretization achieved high performance.

1. Introduction

Various kinds of iterative reconstruction (IR) [1–4] algorithms based on the principle of optimization are known for producing better reconstructed images in computed tomography (CT). In accordance with the underlying objective function, each IR method has intrinsic properties in the quality of images and in the convergence of iterative solutions. The maximum-likelihood expectation-maximization (ML-EM) [4] algorithm decreases the generalized Kullback-Leibler (KL) divergence [5, 6] between the measured projection and the forward projection along the iteration. In contrast, the objective function that the multiplicative algebraic reconstruction technique (MART) [7–10] forces to decrease in the iterative process is the alternative KL-divergence obtained by exchanging the measured and forward projections. Due to the asymmetry of the KL-divergence, the two objective functions generally take different values.

In this paper, we present an IR algorithm based on the minimization of a symmetrized KL-divergence (SKLD), which is formulated as the mean of the two mutually alternative KL-divergences and is called Jeffreys' J-divergence [11, 12]. Because the base objective function is a symmetric premetric measure and gives an upper bound of the Jensen-Shannon divergence [13, 14], one can expect better performance while preserving the good properties of the ML-EM and MART algorithms. The convergence to an exact solution and the monotonic decrease of the SKLD with each iterative step for a consistent inverse problem are guaranteed using the approach of the continuous dynamical method [15–20]. Specifically, we construct an autonomous differential equation for which the proposed iterative formula gives a first-order numerical discretization with some step size, and we demonstrate the stability of an equilibrium of the continuous-time system using Lyapunov's stability theorem [21]. The theory is extended to the case where the Lyapunov function is an α-skew J-divergence with a parameter α, which reduces to the SKLD when α = 1/2.

We also propose a novel method for constructing an effective and robust discretization, thereby enabling us to obtain an approximate solution to the differential equation. That is, a hybrid Euler discretization that combines additive and multiplicative calculus works well, whereas neither the purely additive nor the purely multiplicative Euler method does.

We conducted experiments and found that the IR algorithm derived from the hybrid Euler discretization of the continuous-time dynamical system achieved high performance.

2. ML-EM and MART Algorithms

The problem of image reconstruction from measured projections in CT is formulated as solving for an unknown variable x of pixel values such that y = Ax + δ, where y, A, and δ represent the measured projection, the projection operator, and noise, respectively, and where R_+ and R_{++} denote the sets of nonnegative and positive real numbers. If the system has a nonnegative solution, it is consistent; otherwise, it is inconsistent.

We introduce the following definitions and notations. The variable denoted by the symbol a_{+j} for j = 1, 2, …, J is defined by a_{+j} := Σ_{i=1}^{I} A_{ij}, where A_{ij} is the element in the i-th row and j-th column of A. The notations v_i and A_i indicate the i-th element of the vector v and the i-th row vector of the matrix A, respectively. The function KL(a, b) of two nonnegative vectors a and b represents the generalized KL-divergence [5]:

KL(a, b) := Σ_i ( a_i ln(a_i / b_i) − a_i + b_i ).

Known methods of reconstructing tomographic images in clinical CT are filtered back projection (FBP), a transform method, and IR, an optimization process. The KL-divergence is a common measure for deriving IR algorithms, and it plays an important role because minimizing the KL-divergence is equivalent to maximizing the likelihood function, which is considered suitable for reconstruction modeled with a probability distribution. The sequences KL(y, Ax^k) and KL(Ax^k, y) decrease along the respective iterative sequences of the ML-EM and the (simultaneous) MART algorithms, defined by

x_j^{k+1} = (x_j^k / a_{+j}) Σ_{i=1}^{I} A_{ij} y_i / (Ax^k)_i

and

x_j^{k+1} = x_j^k Π_{i=1}^{I} ( y_i / (Ax^k)_i )^{A_{ij} / a_{+j}},

respectively, for k = 0, 1, 2, … with x^0 ∈ R_{++}^J.
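As a concrete illustration of these definitions, the following minimal Python sketch (a toy dense system, not a CT-scale implementation; the array layout and names are our own) implements the generalized KL-divergence together with one ML-EM update and one simultaneous-MART update in their standard forms:

```python
import math

def gkl(a, b):
    """Generalized KL-divergence: sum_i a_i*ln(a_i/b_i) - a_i + b_i."""
    return sum(ai * math.log(ai / bi) - ai + bi for ai, bi in zip(a, b))

def forward(A, x):
    """Forward projection Ax for a dense matrix A stored as a list of rows."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def mlem_step(A, y, x):
    """One ML-EM update: x_j <- (x_j / a_+j) * sum_i A_ij * y_i / (Ax)_i."""
    Ax = forward(A, x)
    I, J = len(A), len(x)
    return [x[j] / sum(A[i][j] for i in range(I))
            * sum(A[i][j] * y[i] / Ax[i] for i in range(I))
            for j in range(J)]

def mart_step(A, y, x):
    """One simultaneous-MART update: x_j <- x_j * prod_i (y_i/(Ax)_i)^(A_ij/a_+j)."""
    Ax = forward(A, x)
    I, J = len(A), len(x)
    out = []
    for j in range(J):
        a_plus_j = sum(A[i][j] for i in range(I))  # column sum a_+j
        expo = sum(A[i][j] * math.log(y[i] / Ax[i]) for i in range(I)) / a_plus_j
        out.append(x[j] * math.exp(expo))
    return out
```

On a small consistent system, one ML-EM step decreases KL(y, Ax) and one MART step decreases KL(Ax, y), mirroring the monotonicity properties quoted above.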

3. Proposed System

The IR algorithm proposed in this paper can be described using the following difference equation:

x_j^{n+1} = x_j^n ( 1 + h f_j(x^n) ) exp( h g_j(x^n) ),  (6)

where α ∈ [0, 1] is a parameter, h > 0 is a step size, and the functions f_j and g_j are, respectively, defined by

f_j(x) := (1 − α) ( (1/a_{+j}) Σ_{i=1}^{I} A_{ij} y_i / (Ax)_i − 1 )

and

g_j(x) := α (1/a_{+j}) Σ_{i=1}^{I} A_{ij} ln( y_i / (Ax)_i ),

for j = 1, 2, …, J, with n = 0, 1, 2, … and x^0 ∈ R_{++}^J.
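The update can be sketched in Python under the assumption, consistent with the text, that it interpolates between ML-EM (α = 0) and simultaneous MART (α = 1) through a multiplicative MART factor raised to the power α and a linearly weighted EM ratio. This is an illustrative form with unit step size, not a verbatim transcription of (6):

```python
import math

def forward(A, x):
    """Forward projection Ax for a dense matrix A (list of rows)."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def gkl(a, b):
    """Generalized KL-divergence: sum_i a_i*ln(a_i/b_i) - a_i + b_i."""
    return sum(ai * math.log(ai / bi) - ai + bi for ai, bi in zip(a, b))

def skld(a, b):
    """Symmetrized KL-divergence: mean of the two alternative KL-divergences."""
    return 0.5 * (gkl(a, b) + gkl(b, a))

def hybrid_ir_step(A, y, x, alpha):
    """One hybrid update (assumed form) interpolating ML-EM (alpha=0) and MART (alpha=1):
    x_j <- x_j * (MART factor_j)**alpha * (alpha + (1 - alpha) * EM ratio_j)."""
    Ax = forward(A, x)
    I, J = len(A), len(x)
    out = []
    for j in range(J):
        a_plus_j = sum(A[i][j] for i in range(I))
        em_ratio = sum(A[i][j] * y[i] / Ax[i] for i in range(I)) / a_plus_j
        mart_factor = math.exp(
            sum(A[i][j] * math.log(y[i] / Ax[i]) for i in range(I)) / a_plus_j)
        out.append(x[j] * mart_factor ** alpha * (alpha + (1.0 - alpha) * em_ratio))
    return out
```

On a small consistent toy system, one step with alpha = 0.5 decreases the SKLD between the measured and forward projections.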

For investigating the dynamics of the difference system in (6), we give its continuous analog described using the differential equation:

dx_j/dt = x_j [ (1 − α) ( (1/a_{+j}) Σ_{i=1}^{I} A_{ij} y_i / (Ax)_i − 1 ) + α (1/a_{+j}) Σ_{i=1}^{I} A_{ij} ln( y_i / (Ax)_i ) ],  (9)

for j = 1, 2, …, J and t ≥ 0, with x(0) = x^0 ∈ R_{++}^J. The term “continuous analog” means that a numerical discretization of the differential equation coincides with the difference equation. The idea is based on the approach for solving a constrained tomographic inverse problem using nonlinear dynamical methods [22–25].

3.1. Theoretical Analysis

The theoretical results on the continuous-time system in (9) are given below. First, we show that the solution is guaranteed to remain positive.

Proposition 1. If we choose an initial value x(0) ∈ R_{++}^J for the system in (9), the solution x(t) remains in the subspace R_{++}^J for all t > 0.

Proof. The vector field of the j-th element of the system is multiplied by x_j; therefore, the derivative dx_j/dt = 0 holds for any x on the subspace {x : x_j = 0}, which means the subspace is invariant, and hence no flow can pass through this invariant subspace, on the basis of the uniqueness property of solutions to the initial value problem. Hence, the trajectory emanating from an initial value in R_{++}^J remains in this sub-state space.

Let us define the α-skew J-divergence of nonnegative vectors a and b as J_α(a, b) := (1 − α) KL(a, b) + α KL(b, a), with α in the interval [0, 1]. The function is essentially a premetric measure of the difference between a and b, in accordance with J_α(a, b) ≥ 0 and J_α(a, b) = 0 if and only if a = b. When α = 1/2, the divergence becomes a symmetrical form of the KL-divergence obtained by averaging KL(a, b) and KL(b, a), which we abbreviate as the SKLD.
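A minimal sketch of this divergence, assuming the convex-combination form (1 − α)·KL(a, b) + α·KL(b, a), which is symmetric at α = 1/2 and recovers the two one-sided KL-divergences at the endpoints:

```python
import math

def gkl(a, b):
    """Generalized KL-divergence between nonnegative vectors."""
    return sum(ai * math.log(ai / bi) - ai + bi for ai, bi in zip(a, b))

def skew_j(a, b, alpha):
    """alpha-skew J-divergence, assumed here as the convex combination
    (1 - alpha)*KL(a, b) + alpha*KL(b, a); alpha = 1/2 gives the SKLD."""
    return (1.0 - alpha) * gkl(a, b) + alpha * gkl(b, a)
```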

Then, under the assumption that the inverse problem is consistent, the following theorem shows that the solution converges to a desired equilibrium and that the nonnegative α-skew J-divergence between the measured and forward projections monotonically decreases along solution trajectories.

Theorem 2. If a unique solution x* ∈ R_{++}^J exists to the system y = Ax*, the equilibrium x* of the dynamical system in (9) with any α ∈ [0, 1] is asymptotically stable.

Proof. A Lyapunov candidate function, defined in (10) as the α-skew J-divergence between the measured and forward projections, V(x) := J_α(y, Ax), is introduced for applying Lyapunov's stability theorem; it is well defined via Proposition 1 when choosing an initial value in R_{++}^J. We then calculate its derivative with respect to the dynamical system in (9) with any α ∈ [0, 1] and find that dV(x(t))/dt ≤ 0 along solution trajectories, with the derivative equal to zero only at x = x*. Consequently, the system has a Lyapunov function, so the equilibrium of the system is asymptotically stable.

3.2. Hybrid Euler Discretization

Consider an autonomous differential equation for the state variable x(t) ∈ R_{++}, t ≥ 0, written as

dx/dt = x ( f(x) + g(x) ),  (13)

with sufficiently smooth functions f and g satisfying f(x̄) + g(x̄) = 0, where x̄ denotes an equilibrium of (13). Then, we have the geometric multiplicative derivative x*(t) := exp( (dx/dt) / x ) as a counterpart of the additive derivative. Analogous to the ordinary Euler method, the multiplicative first-order expansion [26, 27] with a small step size h leads to the iterative formula of the multiplicative Euler discretization

x^{n+1} = x^n exp( h ( f(x^n) + g(x^n) ) ),  (16)

with iteration numbers n = 0, 1, 2, ….

However, when g ≡ 0 in (13), its additive Euler discretization x^{n+1} = x^n ( 1 + h f(x^n) ) corresponds to the multiplicative one in (16) with exp( h f(x^n) ) replaced by 1 + h f(x^n). The term 1 + h f(x^n) is also considered as a first-order term in the Taylor series expansion of the function exp( h f(x) ) in the neighborhood of the steady state x̄. By replacing the part exp( h f(x^n) ) of the multiplicative term in (16) with the term 1 + h f(x^n) while preserving the multiplication by exp( h g(x^n) ), we obtain the formula of a hybrid Euler discretization

x^{n+1} = x^n ( 1 + h f(x^n) ) exp( h g(x^n) ),

for n = 0, 1, 2, …, with a combination based on additive and multiplicative calculus for the functions f and g, respectively. The hybrid Euler method is effective for the practical calculation of an initial value problem of (13) from the viewpoint of choosing a larger value of the step size h, when additive calculus is better suited to one of the partial vector fields and multiplicative calculus to the other.
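The three discretization formulas can be compared on a hypothetical scalar ODE (our own toy example, not the reconstruction system): dx/dt = x(f(x) + g(x)) with f(x) = 1/x − 1 and g(x) = −ln x, whose positive equilibrium is x̄ = 1:

```python
import math

def additive_step(x, f, g, h):
    """Ordinary (additive) Euler step for dx/dt = x*(f(x) + g(x))."""
    return x * (1.0 + h * (f(x) + g(x)))

def multiplicative_step(x, f, g, h):
    """Multiplicative Euler step: x_{n+1} = x_n * exp(h*(f(x_n) + g(x_n)))."""
    return x * math.exp(h * (f(x) + g(x)))

def hybrid_step(x, f, g, h):
    """Hybrid Euler step: f treated additively, g multiplicatively."""
    return x * (1.0 + h * f(x)) * math.exp(h * g(x))

# Toy vector field (hypothetical): f(x) = 1/x - 1, g(x) = -ln(x),
# i.e. dx/dt = 1 - x - x*ln(x), with unique positive equilibrium x = 1.
f = lambda x: 1.0 / x - 1.0
g = lambda x: -math.log(x)

def iterate(step, x0, h, n):
    """Iterate a one-step map; stop if positivity is lost (log/exp would fail)."""
    xs = [x0]
    for _ in range(n):
        if xs[-1] <= 0.0:
            break
        xs.append(step(xs[-1], f, g, h))
    return xs
```

On this example the hybrid and multiplicative maps with a moderate step size stay positive and converge to the equilibrium, while the additive map with a large step size produces a negative iterate within a few steps, illustrating the positivity-preservation issue discussed above (robustness to large steps is problem-dependent).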

The hybrid Euler discretization of the differential equation in (9) gives the IR algorithm in (6).

4. Experimental Results and Discussion

We conducted experiments to demonstrate the effectiveness of not only the proposed IR method in (6) with step size h = 1 and α = 1/2, termed the SKLD algorithm, based on the minimization of the SKLD, but also the hybrid Euler discretization of the continuous analog in (9).

We used a modified Shepp-Logan phantom image, shown in Figure 1(a). The projection was simulated using the model y = Ax + δ, with δ denoting white Gaussian noise such that the signal-to-noise ratio (SNR) was 30 dB unless otherwise specified, and by setting the number of view angles and detector bins to 180 and 184, respectively, as illustrated in Figure 1(b) in sinogram format. A standard MATLAB (MathWorks, Natick, USA) ODE solver, ode113, implementing a variable-step-size, variable-order Adams-Bashforth-Moulton method, was used for the numerical calculation when solving an initial value problem of the differential equation in (9).
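The noise model can be sketched as follows; the scaling convention (SNR defined as 10·log10 of the signal-to-noise power ratio, with the noise rescaled so the realized SNR is exact) is our illustrative assumption:

```python
import math
import random

def add_noise_at_snr(signal, snr_db, rng):
    """Add white Gaussian noise scaled so the realized SNR equals snr_db exactly.

    SNR is taken as 10*log10(signal power / noise power); the raw Gaussian
    samples are rescaled to hit the target power, an illustrative convention.
    """
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    raw = [rng.gauss(0.0, 1.0) for _ in signal]
    p_raw = sum(r * r for r in raw) / len(raw)
    scale = math.sqrt(p_noise / p_raw)
    noise = [scale * r for r in raw]
    return [s + n for s, n in zip(signal, noise)], noise
```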

Let us first consider which discretization method is the most robust for numerical approximation. Figure 2 shows the time course of the objective function with α = 1/2 for the continuous-time system and for the discrete-time systems obtained by discretizing the differential equation using the additive, multiplicative, and hybrid Euler methods with fixed step sizes, the largest being 1. We see that the hybrid Euler discretization gives good convergence similar to that of the continuous-time system; however, at the largest step size, the additive Euler method causes, as time goes on, an error with oscillation, and solutions using the multiplicative method diverge.

Next, we consider the properties of the SKLD algorithm compared with the ML-EM and MART algorithms. Note that the iterative formula in (6) becomes the ML-EM, SKLD, and MART algorithms when the parameter α is equal to 0, 1/2, and 1, respectively. The values of the root mean squared (RMS) distance, i.e., the ℓ2-norm of the difference between the phantom image and the iterative state obtained using the formula in (6) at the n-th iteration, starting from a common initial value and for variations in the parameter α, are plotted in Figure 3(a). We observed that a value of α minimizing the quantitative measure exists in the open interval (0, 1) when the iteration number is relatively small. In particular, the measure of SKLD is less than those of ML-EM and MART at early iterations, e.g., at the 12th. The presence of noise is required for this property, which is supported by the fact that an experiment from another simulation with noise-free projection data gave different results, as shown in Figure 3(b), with the minimum value of the measure occurring at a different value of α. This was due to disadvantageous characteristics of the ML-EM and MART algorithms. As a result of these characteristics, ML-EM has a slow convergence [28–31], which means that a relatively large number of iterations is required to obtain an acceptable value of the objective function, and MART is more susceptible to noise [32, 33], which may result in an increase of the error measure with increasing iterative steps due to the noisy projection, as typically seen at α = 0 and α = 1 in Figure 3, respectively.

We carried out other experiments to examine whether there exists an α not equal to 0 or 1 at which the value of the objective function obtained using the algorithm in (6) takes a minimum. A set of SPECT projection data from a publicly accessible database [34] was examined first. The sinogram of a brain scan, with 128 acquisition bins and 60 projection directions over 180 degrees, and an image reconstructed by the SKLD algorithm are shown in Figure 4. The projection data for the second example were acquired from an X-ray CT scanner (Toshiba Medical Systems, Tochigi, Japan) with a body phantom [35] (Kyoto Kagaku, Kyoto, Japan) using a tube voltage of 120 kVp and a tube current of 300 mA. Figure 5 shows the sinogram, with 957 acquisition bins and 90 projection directions over 180 degrees, and a reconstructed image. The value of the objective function for the image reconstructed by the SKLD algorithm at the 30th iteration is less than that obtained using FBP with a Shepp-Logan filter.

In the physical experiments, regardless of the features of the object to be reconstructed and of whether the inverse problem is overdetermined or underdetermined, there exists a nonempty set of iteration numbers and parameter values α (in particular, α = 1/2) for which the objective function attains a minimum, as shown in Figure 6. From Figure 6(a), we observe that the algorithm with α = 1/2 converges sufficiently close to a local minimum within a small number of iterations, whereas the ML-EM algorithm has a slow convergence rate when the state variable is far away from the local minimum in the state space. Considering the noisy nature of the measured projection and the large datasets required for reconstructing large images in clinical X-ray CT, the conditions suitable for practical use include the relatively small number of iterations typically used in IR systems [36], under which our IR method is effective, as seen in Figure 6(b).

5. Concluding Remarks

We proposed an IR algorithm based on the minimization of the α-skew J-divergence between the measured and forward projections. The objective functions to be minimized in the iterative process are KL(y, Ax), KL(Ax, y), and the α-skew J-divergence when the value of the parameter α is set to 0, 1, and a value in between, respectively. We used Lyapunov's theorem on a differential equation for which the IR algorithm is equivalent to the hybrid Euler discretization and demonstrated the monotonically decreasing convergence of the J-divergence along the iterative steps toward a desired solution of the continuous-time system. The hybrid Euler method was found to have more robust performance than the additive and multiplicative Euler methods. The numerical and physical experiments revealed that the SKLD method is advantageous with respect to the minimization of a distance measure when the projection data are noisy and when the maximum iteration number is relatively small. Further investigation is required to clarify the performance of the proposed IR algorithm from the viewpoints of image quality and noise evaluation.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was partially supported by JSPS KAKENHI Grant no. 18K04169.