Abstract

We focus on nonlinearity in images and propose a new method that preserves curve edges during image smoothing with nonlinear anisotropic diffusion (NAD). Unlike existing methods, which diffuse only over the spatial variables, the new method performs diffusion over both the time variable and the spatial variables; we therefore name it time and space nonlinear anisotropic diffusion (TSNAD). That is, not only are the spatial differences estimated from nearby spatial points, but the time difference is also approximated by the weighted time differences of nearby points, with weights determined by the gray-level differences between those points and the point under consideration. Since the time differences of nearby points under NAD identify more points with similar gray levels, forming a curve belt around a center pixel on a curve edge, TSNAD provides satisfactory smoothing while preserving curve edges. Experiments on digital images also demonstrate the ability of TSNAD to preserve curve edges.

1. Introduction

Nonlinear systems are systems that cannot be described mathematically as the sum of their components, and their mathematical modeling is often very difficult or impossible. As a result, nonlinear systems are often studied through simulation [1]. Nonlinearity in many real applications has been widely discussed [2–11], and some methods based on NAD were proposed in the early 1990s [12–15]. Recently, efforts to improve the performance of NAD have been made in connection with numerical methods for partial differential equation (PDE) theory, adaptive smoothing, and multiresolution analysis [16–22].

Although NAD provides a perfect mathematical theory for preserving edges, how to preserve curve edges in image smoothing using NAD is still an unsolved problem [23, 24].

In a digital image, a curve edge is composed of discrete points rather than a continuous curve. Moreover, the discrete points on a curve edge are of two types: edge points and corner points. Note that most points on a curve edge are corners rather than edge points, since the curved or inclined parts of a curve edge are composed of corners. Note also that, unlike the mathematical definition of a corner, a corner in a digital image is a point with both a large gradient modulus and a large tangent modulus [12–14], or with two large eigenvalues of the tensor matrix [15].

In order to preserve curve edges, several schemes have been proposed for the different types of points. For edge points, two well-known diffusion schemes, diffusion only along tangent lines and diffusion along the backward directions of the gradients, are proposed in [14, 15]. For corners, Alvarez et al. suggest stopping the diffusion [13].

However, these schemes lead to noise amplification, oscillation effects, and blurred curve edges [23, 24]. Analyzing the behavior of these schemes, we find that the diffusion at most of the discrete points of a curve edge, the corners, is either stopped or carried out only at a very small scale. Therefore, noise is amplified near edges, and corners are blurred even by very small-scale diffusion.

Motivated by diffusion along tangents, we propose that the diffusion should be performed along the curve edge itself in order to preserve it. Until now, however, no method has been available to perform diffusion along curve edges; existing tangent-based diffusion is the result of a single step along the tangent. Note that diffusion along the tangent means that the diffusion is performed on a straight line passing through the point under consideration in the direction of the tangent. Therefore, even with an elaborately selected scale, a straight line cannot approximate a curve edge.

Fortunately, it is well known that a curve edge can be approximated well by multiple straight lines. Since a one-step method yields only one straight line, we design a multistep scheme to obtain multiple straight lines. This multistep scheme can be realized by approximating the time difference of the point under consideration using the time differences of nearby points. Since each time difference corresponds to one straight diffusion line, the time differences of nearby points yield multiple straight diffusion lines along the curve edge, which together form a curve belt along it.

Based on this idea, a new scheme, named time and space NAD (TSNAD), is proposed, in which the time difference of a point is approximated by the time differences of nearby points in an NAD manner. That is, the time differences of nearby points are weighted by the absolute values of the gray-level differences between the nearby points and the point under consideration. All time differences are then computed from the space differences according to the diffusion equation.

Since more time differences allow us to find more straight lines, TSNAD can find enough points with similar gray levels along the curve edges. Therefore, it can preserve curve edges well.

The rest of this paper is organized as follows. Section 2 describes the TSNAD method. In Section 3, the experimental results are given and discussed. Conclusions, future work, and acknowledgments follow.

2. The Method

As discussed above, TSNAD applies an additional NAD to the time variable in order to preserve curve edges. In this section, we introduce the theory of TSNAD in detail.

2.1. Motivation

The main objective of image smoothing is to reduce image details or slight noise without removing significant parts of the image content. Nonlinear anisotropic diffusion (NAD) is a well-known method for this task [12].

Formally, NAD is defined as
$$\frac{\partial u(x,y,t)}{\partial t} = \operatorname{div}\bigl(g(x,y,t)\,\nabla u(x,y,t)\bigr), \quad (2.1)$$
where $u(x,y,0)$ is the initial gray-scale image, $u(x,y,t)$ is the smoothed gray-scale image at time $t$, $\nabla$ denotes the gradient, $\operatorname{div}(\cdot)$ is the divergence operator, and $g(x,y,t)$ is the diffusion coefficient. $g(x,y,t)$ controls the rate of diffusion and is usually chosen as a monotonically decreasing function of the modulus of the image gradient. Two functions proposed in [12] are
$$g\bigl(\|\nabla u(x,y,t)\|\bigr) = e^{-(\|\nabla u(x,y,t)\|/\sigma)^2}, \quad (2.2)$$
$$g\bigl(\|\nabla u(x,y,t)\|\bigr) = \frac{1}{1+\bigl(\|\nabla u(x,y,t)\|/\sigma\bigr)^2}, \quad (2.3)$$
where $\|\cdot\|$ is the modulus of the vector and the constant $\sigma$ controls the sensitivity to edges.
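As an illustration, the two coefficient functions (2.2) and (2.3) can be sketched in NumPy as follows; the function names `g_exp` and `g_frac` and the NumPy dependency are our own choices, not part of the original formulation.

```python
import numpy as np

# The two diffusion coefficients of (2.2) and (2.3).
# sigma is the edge-sensitivity constant; s stands for the gradient modulus.
def g_exp(s, sigma):
    """Exponential coefficient e^{-(s/sigma)^2} -- eq. (2.2)."""
    return np.exp(-(s / sigma) ** 2)

def g_frac(s, sigma):
    """Rational coefficient 1 / (1 + (s/sigma)^2) -- eq. (2.3)."""
    return 1.0 / (1.0 + (s / sigma) ** 2)
```

Both are 1 at a zero gradient and decay toward 0 at strong edges, which is what suppresses diffusion across edges.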

Perona and Malik also propose a simple method to approximate the moduli of the gradients, known as the PM method [12]. Its discretization of the Laplacian operator is
$$u(i,j,t+1) = u(i,j,t) + \frac{1}{4}\bigl[c_N\,\nabla^2_N u(i,j,t) + c_S\,\nabla^2_S u(i,j,t) + c_E\,\nabla^2_E u(i,j,t) + c_W\,\nabla^2_W u(i,j,t)\bigr], \quad (2.4)$$
where
$$\nabla^2_N u(i,j,t) = u(i-1,j,t) - u(i,j,t), \qquad \nabla^2_S u(i,j,t) = u(i+1,j,t) - u(i,j,t),$$
$$\nabla^2_E u(i,j,t) = u(i,j+1,t) - u(i,j,t), \qquad \nabla^2_W u(i,j,t) = u(i,j-1,t) - u(i,j,t). \quad (2.5)$$
According to (2.2) and (2.3), the diffusion coefficient is defined as a function of the modulus of the gradient. However, computing a gradient accurately on discrete data is very complex, so the modulus of the gradient is simplified to the absolute values of the differences in the four directions, and the diffusion coefficients are
$$c_N(i,j,t) = g\bigl(|\nabla_N u(i,j,t)|\bigr), \qquad c_S(i,j,t) = g\bigl(|\nabla_S u(i,j,t)|\bigr),$$
$$c_E(i,j,t) = g\bigl(|\nabla_E u(i,j,t)|\bigr), \qquad c_W(i,j,t) = g\bigl(|\nabla_W u(i,j,t)|\bigr), \quad (2.6)$$
where $|\cdot|$ is the absolute value and $g(\cdot)$ is defined in (2.2) or (2.3).
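A minimal sketch of one iteration of the four-direction PM scheme (2.4)-(2.6), assuming the exponential coefficient (2.2); the function name `pm_step`, the NumPy dependency, and the edge-replication boundary handling are our own choices, since the paper does not specify an implementation.

```python
import numpy as np

def pm_step(u, sigma=10.0):
    """One iteration of the four-direction PM scheme (2.4)-(2.6).

    u is a 2-D float array; boundaries are handled by edge replication,
    which is our own choice (the paper does not specify one).
    """
    p = np.pad(u, 1, mode="edge")
    # Four directional differences (2.5): N, S, E, W neighbors minus center.
    dN = p[:-2, 1:-1] - u
    dS = p[2:, 1:-1] - u
    dE = p[1:-1, 2:] - u
    dW = p[1:-1, :-2] - u
    g = lambda d: np.exp(-(np.abs(d) / sigma) ** 2)   # coefficient (2.2)
    # Update (2.4): weighted sum of the four differences, step size 1/4.
    return u + 0.25 * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```

Because each difference is weighted by its own coefficient, diffusion across a strong edge is suppressed while flat regions are averaged.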

Although this scheme is not an exact discretization of (2.1) from a mathematical point of view, it is the most effective existing NAD method for preserving singularities, since it correctly finds neighbors with similar gray levels without computing gradients. Thus, it greatly reduces discretization errors.

In addition, unlike other methods, which must distinguish corners from edges, the PM method handles corners and edges with a unified scheme: diffusion only among neighbors with similar gray levels. This advantage also reduces the influence of incorrect classification of corners and edges.

However, even for this best existing scheme, preserving curve edges during image smoothing remains an unsolved problem. The main difficulty for the PM method and other existing methods is that a curve edge cannot be approximated by a straight line segment (a line passing through the point under consideration, along the tangent, at a certain scale).

Thus, we need a new scheme that finds multiple straight lines to approximate a curve edge. Intuitively, these straight lines should be related to tangent lines passing through some neighbors of the point under consideration, so they can be found by a quasi-multistep procedure: the time difference of the point is computed from the time differences of nearby pixels. Each time difference corresponds to one tangent line, so multiple time differences correspond to multiple tangent lines, which form a curve belt along the curve edge.

In this way, the new scheme, TSNAD, provides satisfactory smoothing results, preserving curve edges while removing details.

2.2. The Space Difference

Existing methods approximate the time difference on the left-hand side of (2.1) by
$$\frac{\partial u(x,y,t)}{\partial t} = \frac{u(i,j,t+1) - u(i,j,t)}{\Delta t}, \quad (2.7)$$
where $(i,j)$ is the point under consideration and $t$ is the time variable.

The space difference for a two-dimensional digital image $u(i,j)$ is defined via the half-point differences between the center point $u(i,j)$ and the half points toward its eight nearest neighbors:
$$\nabla\mathbf{u}(i,j) = \bigl[\nabla_0 u(i,j), \nabla_1 u(i,j), \nabla_2 u(i,j), \nabla_3 u(i,j), \nabla_4 u(i,j), \nabla_5 u(i,j), \nabla_6 u(i,j), \nabla_7 u(i,j)\bigr]^T, \quad (2.8)$$
where $T$ denotes the transpose of the vector and $\nabla_k u(i,j)$, $k = 0,\ldots,7$, are defined as
$$\nabla_0 u(i,j) = u(i,j+0.5) - u(i,j), \qquad \nabla_1 u(i,j) = u(i-0.5,j+0.5) - u(i,j),$$
$$\nabla_2 u(i,j) = u(i-0.5,j) - u(i,j), \qquad \nabla_3 u(i,j) = u(i-0.5,j-0.5) - u(i,j),$$
$$\nabla_4 u(i,j) = u(i,j-0.5) - u(i,j), \qquad \nabla_5 u(i,j) = u(i+0.5,j-0.5) - u(i,j),$$
$$\nabla_6 u(i,j) = u(i+0.5,j) - u(i,j), \qquad \nabla_7 u(i,j) = u(i+0.5,j+0.5) - u(i,j). \quad (2.9)$$

Thus, the first-scale second-order difference of $u(i,j)$ is
$$\nabla^2\mathbf{u}(i,j) = \bigl[\nabla^2_0 u(i,j), \nabla^2_1 u(i,j), \nabla^2_2 u(i,j), \nabla^2_3 u(i,j), \nabla^2_4 u(i,j), \nabla^2_5 u(i,j), \nabla^2_6 u(i,j), \nabla^2_7 u(i,j)\bigr]^T, \quad (2.10)$$
where $T$ denotes the transpose of the vector. Doubling the half-point steps of (2.9), we have
$$\nabla^2_0 u(i,j) = u(i,j+1) - u(i,j), \qquad \nabla^2_1 u(i,j) = u(i-1,j+1) - u(i,j),$$
$$\nabla^2_2 u(i,j) = u(i-1,j) - u(i,j), \qquad \nabla^2_3 u(i,j) = u(i-1,j-1) - u(i,j),$$
$$\nabla^2_4 u(i,j) = u(i,j-1) - u(i,j), \qquad \nabla^2_5 u(i,j) = u(i+1,j-1) - u(i,j),$$
$$\nabla^2_6 u(i,j) = u(i+1,j) - u(i,j), \qquad \nabla^2_7 u(i,j) = u(i+1,j+1) - u(i,j). \quad (2.11)$$
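The eight integral-point differences of (2.11) can be sketched as follows; the offset table, the function name `second_order_diffs`, and the edge-replication boundary handling are our own assumptions.

```python
import numpy as np

# The eight neighbor offsets of (2.11), indexed k = 0..7:
# (row, col) steps matching u(i, j+1), u(i-1, j+1), u(i-1, j), u(i-1, j-1),
# u(i, j-1), u(i+1, j-1), u(i+1, j), u(i+1, j+1) in the text.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def second_order_diffs(u):
    """Return the 8 difference images nabla^2_k u of (2.11), k = 0..7.

    Shifts use edge replication at the border (our own choice).
    """
    p = np.pad(u, 1, mode="edge")
    h, w = u.shape
    return [p[1 + di:1 + di + h, 1 + dj:1 + dj + w] - u
            for di, dj in OFFSETS]
```

Each entry of the returned list is a full image of differences in one of the eight directions, which is what the per-direction coefficients of (2.13) are applied to.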

Let
$$\mathbf{g}(i,j) = \bigl[g_0(i,j), g_1(i,j), g_2(i,j), g_3(i,j), g_4(i,j), g_5(i,j), g_6(i,j), g_7(i,j)\bigr]^T, \quad (2.12)$$
where $T$ denotes the transpose of the vector and $g_k(i,j)$, $k = 0,\ldots,7$, are defined as
$$g_k(i,j) = g\bigl(|\nabla_k u(i,j)|\bigr), \quad k = 0,1,\ldots,7, \quad (2.13)$$
where $\nabla_k u(i,j)$, $k = 0,\ldots,7$, defined in (2.8) are the components of the vector $\nabla\mathbf{u}(i,j)$, and $g$ is a decreasing function of the absolute value of $\nabla_k u(i,j)$. Following (2.2) and (2.3), $g(|\nabla_k u(i,j,t)|)$ can be defined as
$$g\bigl(|\nabla_k u(i,j,t)|\bigr) = e^{-(|\nabla_k u(i,j,t)|/\sigma)^2}, \quad k = 0,\ldots,7, \quad (2.14)$$
or
$$g\bigl(|\nabla_k u(i,j,t)|\bigr) = \frac{1}{1+\bigl(|\nabla_k u(i,j,t)|/\sigma\bigr)^2}, \quad k = 0,\ldots,7, \quad (2.15)$$
where $|\cdot|$ is the absolute value and the constant $\sigma$ controls the sensitivity to edges. However, the half-point differences in $\nabla\mathbf{u}(i,j)$ defined in (2.8) cannot be computed directly. Thus, the second-order integral-point differences are used to approximate the first-order half-point differences, and (2.14) and (2.15) become
$$g\bigl(|\nabla_k u(i,j,t)|\bigr) = e^{-(|\nabla_k u(i,j,t)|/\sigma)^2} \approx e^{-(|\nabla^2_k u(i,j,t)|/\sigma)^2}, \quad k = 0,\ldots,7, \quad (2.16)$$
$$g\bigl(|\nabla_k u(i,j,t)|\bigr) = \frac{1}{1+\bigl(|\nabla_k u(i,j,t)|/\sigma\bigr)^2} \approx \frac{1}{1+\bigl(|\nabla^2_k u(i,j,t)|/\sigma\bigr)^2}, \quad k = 0,\ldots,7. \quad (2.17)$$

The NAD is then converted to
$$\frac{\partial u(i,j,t)}{\partial t} = \lambda\,\operatorname{div}\begin{pmatrix} g_0(i,j)\,\nabla_0 u(i,j,t) \\ g_1(i,j)\,\nabla_1 u(i,j,t) \\ g_2(i,j)\,\nabla_2 u(i,j,t) \\ g_3(i,j)\,\nabla_3 u(i,j,t) \\ g_4(i,j)\,\nabla_4 u(i,j,t) \\ g_5(i,j)\,\nabla_5 u(i,j,t) \\ g_6(i,j)\,\nabla_6 u(i,j,t) \\ g_7(i,j)\,\nabla_7 u(i,j,t) \end{pmatrix}, \quad (2.18)$$
where $\lambda$ is a constant that ensures the convergence of the iteration, $\nabla_k u(i,j,t)$, $k = 0,\ldots,7$, are the components of the vector $\nabla\mathbf{u}(i,j,t)$ in (2.8), and $g_k(i,j)$, $k = 0,\ldots,7$, defined in (2.13) are the components of $\mathbf{g}(i,j)$ in (2.12). Moreover, $g_k(i,j)$ in (2.13) can be computed by (2.16) or (2.17).

The above equation can also be written as
$$\frac{\partial u(i,j,t)}{\partial t} = \lambda \sum_{k=0}^{7} g_k(i,j)\,\nabla^2_k u(i,j,t), \quad (2.19)$$
where $\lambda$ is a constant that ensures the convergence of the iteration, and $\nabla^2_k u(i,j,t)$ is the $k$th second-order difference of $u(i,j)$, which can be computed according to (2.10).

Thus, we have
$$\frac{\partial u(i,j,t)}{\partial t} = u(i,j,t+1) - u(i,j,t) = \lambda \sum_{k=0}^{7} g_k(i,j)\,\nabla^2_k u(i,j,t), \quad (2.20)$$
where $u(i,j,t+1)$ is the gray level of $(i,j)$ at time $t+1$, and $g_k(i,j)$ and $\nabla^2_k u(i,j,t)$, $k = 0,\ldots,7$, are defined as in (2.19).
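One iteration of the eight-direction update (2.20), with coefficients computed via (2.16), might be sketched as below; the function name `nad8_step`, the step size `lam`, and the edge-replication boundary handling are our own choices.

```python
import numpy as np

def nad8_step(u, sigma=10.0, lam=0.1):
    """One iteration of the eight-direction scheme (2.20).

    lam plays the role of the convergence constant lambda; edge
    replication at the border is our own choice.
    """
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    p = np.pad(u, 1, mode="edge")
    h, w = u.shape
    out = u.copy()
    for di, dj in offsets:
        d = p[1 + di:1 + di + h, 1 + dj:1 + dj + w] - u   # nabla^2_k u, (2.11)
        g = np.exp(-(np.abs(d) / sigma) ** 2)             # g_k via (2.16)
        out += lam * g * d                                # one term of (2.20)
    return out
```

A constant image is a fixed point, and a sharp step edge is nearly untouched because the coefficients across the step are close to zero.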

2.3. The New Time Difference

In this paper, we propose that not only the space differences but also the time differences should be estimated using NAD. That is, the time difference of $u(i,j,t)$ should be estimated from the time differences of nearby points. In this section, we discuss the simplest form of the new time difference, approximated by the weighted time differences of the eight nearest neighbors.

Representing this approximation in terms of high-dimensional vector analysis, a vector is defined as
$$\frac{\partial\mathbf{u}(i,j,t)}{\partial t} = \left[\frac{\partial u_0(i,j,t)}{\partial t}, \frac{\partial u_1(i,j,t)}{\partial t}, \frac{\partial u_2(i,j,t)}{\partial t}, \frac{\partial u_3(i,j,t)}{\partial t}, \frac{\partial u_4(i,j,t)}{\partial t}, \frac{\partial u_5(i,j,t)}{\partial t}, \frac{\partial u_6(i,j,t)}{\partial t}, \frac{\partial u_7(i,j,t)}{\partial t}\right]^T, \quad (2.21)$$
where $T$ denotes the transpose of the vector and $\partial u_k(i,j,t)/\partial t$, $k = 0,\ldots,7$, are defined as
$$\frac{\partial u_0(i,j,t)}{\partial t} = \frac{\partial u(i,j+1,t)}{\partial t}, \qquad \frac{\partial u_1(i,j,t)}{\partial t} = \frac{\partial u(i-1,j+1,t)}{\partial t},$$
$$\frac{\partial u_2(i,j,t)}{\partial t} = \frac{\partial u(i-1,j,t)}{\partial t}, \qquad \frac{\partial u_3(i,j,t)}{\partial t} = \frac{\partial u(i-1,j-1,t)}{\partial t},$$
$$\frac{\partial u_4(i,j,t)}{\partial t} = \frac{\partial u(i,j-1,t)}{\partial t}, \qquad \frac{\partial u_5(i,j,t)}{\partial t} = \frac{\partial u(i+1,j-1,t)}{\partial t},$$
$$\frac{\partial u_6(i,j,t)}{\partial t} = \frac{\partial u(i+1,j,t)}{\partial t}, \qquad \frac{\partial u_7(i,j,t)}{\partial t} = \frac{\partial u(i+1,j+1,t)}{\partial t}. \quad (2.22)$$

The diffusion coefficients for $\partial u_k(i,j,t)/\partial t$, $k = 0,\ldots,7$, are given by the vector $\mathbf{g}(i,j)$ defined in (2.12), whose components are computed by (2.16). Thus, the time difference of $u(i,j)$ is
$$\frac{\partial u(i,j,t)}{\partial t} = \lambda_1\left[\frac{\partial u(i,j,t)}{\partial t} + \operatorname{div}\!\left(\mathbf{g}(i,j)^T\,\frac{\partial\mathbf{u}(i,j,t)}{\partial t}\right)\right], \quad (2.23)$$
where $T$ denotes the transpose of the vector, $\lambda_1$ is a constant that ensures the convergence of the iteration, and $\partial\mathbf{u}(i,j,t)/\partial t$ is defined in (2.21). Thus, (2.23) becomes
$$\frac{\partial u(i,j,t)}{\partial t} = \lambda_1\left[\frac{\partial u(i,j,t)}{\partial t} + \sum_{k=0}^{7} g_k(i,j)\,\frac{\partial u_k(i,j,t)}{\partial t}\right], \quad (2.24)$$
where $\partial u_k(i,j,t)/\partial t$, $k = 0,\ldots,7$, are the components of the vector $\partial\mathbf{u}(i,j,t)/\partial t$ defined in (2.21), and $g_k(i,j)$, $k = 0,\ldots,7$, are the components of the vector $\mathbf{g}(i,j)$ defined in (2.12).
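The weighted combination of neighbor time differences in (2.24) can be sketched as follows, assuming the raw per-pixel time differences (the right-hand side of (2.20)) and the weight images $g_k$ have already been computed; the function name `tsnad_time_estimate` and the edge-replication boundary handling are our own.

```python
import numpy as np

def tsnad_time_estimate(du_dt, g, lam1=0.5):
    """Combine a point's raw time difference with its neighbors' per (2.24).

    du_dt : 2-D array of raw time differences (right-hand side of (2.20)).
    g     : list of 8 weight images g_k, as in (2.16).
    The neighbor shift order follows (2.22); lam1 is lambda_1.
    """
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    p = np.pad(du_dt, 1, mode="edge")
    h, w = du_dt.shape
    acc = du_dt.copy()
    for gk, (di, dj) in zip(g, offsets):
        # du_k/dt of (2.22): the neighbor's own time difference, weighted.
        acc += gk * p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return lam1 * acc
```

With all weights zero this reduces to the plain scaled time difference, and neighbors with similar gray levels (weights near 1) contribute fully, which is the curve-belt effect described above.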

According to (2.24), the time difference of $u(i,j,t)$ is estimated by the weighted time differences of its neighbors, according to the gray-level differences between them and $u(i,j,t)$. That is, neighbors whose gray levels are more similar to that of $u(i,j,t)$ receive larger weights.

2.4. Time and Space NAD (TSNAD)

Substituting (2.19) and (2.21) into (2.24), we obtain the equation of TSNAD:
$$\begin{aligned}
\frac{\partial u(i,j,t)}{\partial t} ={}& \lambda_1\left[\frac{\partial u(i,j,t)}{\partial t} + \sum_{k=0}^{7} g_k(i,j)\,\frac{\partial u_k(i,j,t)}{\partial t}\right] \\
={}& \lambda_1\Biggl[\lambda\sum_{k=0}^{7} g_k(i,j)\,\nabla^2_k u(i,j,t) + \lambda g_0(i,j)\sum_{k=0}^{7} g_k(i,j+1,t)\,\nabla^2_k u(i,j+1,t) \\
&+ \lambda g_1(i,j)\sum_{k=0}^{7} g_k(i-1,j+1,t)\,\nabla^2_k u(i-1,j+1,t) + \lambda g_2(i,j)\sum_{k=0}^{7} g_k(i-1,j,t)\,\nabla^2_k u(i-1,j,t) \\
&+ \lambda g_3(i,j)\sum_{k=0}^{7} g_k(i-1,j-1,t)\,\nabla^2_k u(i-1,j-1,t) + \lambda g_4(i,j)\sum_{k=0}^{7} g_k(i,j-1,t)\,\nabla^2_k u(i,j-1,t) \\
&+ \lambda g_5(i,j)\sum_{k=0}^{7} g_k(i+1,j-1,t)\,\nabla^2_k u(i+1,j-1,t) + \lambda g_6(i,j)\sum_{k=0}^{7} g_k(i+1,j,t)\,\nabla^2_k u(i+1,j,t) \\
&+ \lambda g_7(i,j)\sum_{k=0}^{7} g_k(i+1,j+1,t)\,\nabla^2_k u(i+1,j+1,t)\Biggr]. \quad (2.25)
\end{aligned}$$

Letting $\lambda_1 \times \lambda = \lambda$ be a constant that ensures the convergence of the iteration, (2.25) becomes
$$\begin{aligned}
\frac{\partial u(i,j,t)}{\partial t} = \lambda\Biggl[&\sum_{k=0}^{7} g_k(i,j)\,\nabla^2_k u(i,j,t) + g_0(i,j)\sum_{k=0}^{7} g_k(i,j+1,t)\,\nabla^2_k u(i,j+1,t) \\
&+ g_1(i,j)\sum_{k=0}^{7} g_k(i-1,j+1,t)\,\nabla^2_k u(i-1,j+1,t) + g_2(i,j)\sum_{k=0}^{7} g_k(i-1,j,t)\,\nabla^2_k u(i-1,j,t) \\
&+ g_3(i,j)\sum_{k=0}^{7} g_k(i-1,j-1,t)\,\nabla^2_k u(i-1,j-1,t) + g_4(i,j)\sum_{k=0}^{7} g_k(i,j-1,t)\,\nabla^2_k u(i,j-1,t) \\
&+ g_5(i,j)\sum_{k=0}^{7} g_k(i+1,j-1,t)\,\nabla^2_k u(i+1,j-1,t) + g_6(i,j)\sum_{k=0}^{7} g_k(i+1,j,t)\,\nabla^2_k u(i+1,j,t) \\
&+ g_7(i,j)\sum_{k=0}^{7} g_k(i+1,j+1,t)\,\nabla^2_k u(i+1,j+1,t)\Biggr], \quad (2.26)
\end{aligned}$$
where $g_k(i,j)$, $k = 0,\ldots,7$, defined in (2.13) are the components of the vector $\mathbf{g}(i,j)$ defined in (2.12), and $\nabla^2_k u(i,j)$, $k = 0,\ldots,7$, are the components of $\nabla^2\mathbf{u}(i,j)$ defined in (2.10).
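Putting the pieces together, one full TSNAD iteration per (2.26) might look like the sketch below, with $\lambda_1\lambda$ folded into a single constant `lam`; all names and the edge-replication boundary handling are our own assumptions, not the authors' code.

```python
import numpy as np

OFFS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
        (0, -1), (1, -1), (1, 0), (1, 1)]

def shift(a, di, dj):
    """Shift image a by (di, dj) with edge replication (our own choice)."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return p[1 + di:1 + di + h, 1 + dj:1 + dj + w]

def tsnad_step(u, sigma=10.0, lam=0.05):
    """One TSNAD iteration per (2.26), lambda_1 * lambda folded into lam."""
    diffs = [shift(u, di, dj) - u for di, dj in OFFS]        # (2.11)
    gs = [np.exp(-(np.abs(d) / sigma) ** 2) for d in diffs]  # (2.16)
    space = sum(g * d for g, d in zip(gs, diffs))            # inner sums of (2.26)
    # Add each neighbor's own spatial diffusion, weighted by g_k(i,j).
    total = space + sum(g * shift(space, di, dj)
                        for g, (di, dj) in zip(gs, OFFS))
    return u + lam * total
```

Computing the spatial sum once and then shifting it reuses the inner sums of (2.26) for all nine positions, which keeps the step at roughly twice the cost of the plain eight-direction scheme.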

3. The Experiments and Discussion

In order to analyse the performance of TSNAD and compare it with existing methods, two images are used in our experiments: one is a gray-level image of an automobile tire, most of whose edges are curve edges (see Figure 1(d)); the other is a binary image with test patterns including circles of different widths, filled circles, and other shapes with curve edges (see Figure 2(d)). Both images are taken from the standard test images of MATLAB.

Essentially, there are only three schemes for NAD until now: PM method [12, 13], backward filter [14], and coherence filter [15].

The PM method provides a simple discretization scheme for (2.1): diffusion among points in four directions with gray levels similar to that of the point under consideration. Although the PM method is not an exact discretization of (2.1) from a mathematical point of view, it has the best performance among existing methods because it computes the diffusion coefficients directly from gray-level differences without computing gradients [12].

The backward filter suggests that diffusion near edges should be performed along the backward directions of the gradients [14], while the coherence filter suggests that diffusion should be performed along the tangents [15]. Both methods must approximate the gradients and tangents using a few discrete directions, which leads to discretization errors. Moreover, since both methods assume that curve edges consist of corners and edge points that must be handled separately, the diffusion must either stop at the corners or proceed at a very small scale. The former amplifies noise at the corners, while the latter blurs them. Therefore, neither scheme provides satisfactory smoothing results for curve edges.

TSNAD combines the advantages of the above schemes. First, its diffusion coefficients are computed as in the PM method, which greatly reduces discretization errors. Second, it diffuses along curve edges, which reduces the influence of noise. Third, it applies an additional NAD to the time variable in order to track curve edges. Thus, TSNAD performs well in both image smoothing and curve-edge preservation.

Gaussian white noise (GWN) with standard deviation 5 is added to the first image (see Figures 1(d) and 1(g)) to test the smoothing performance under slight noise. The NAD is performed on the noisy image, and each difference image is the absolute difference between the corresponding smoothed image and the original image (see Figures 1(c), 1(f), 1(i), and 1(l)). Note that the difference images are recoded to show their structure more clearly. Thus, although Figure 1(l), the difference image between the TSNAD-smoothed image and the original, has many more white points, this does not mean that TSNAD has the largest absolute difference from the original, because of the recoding. The difference images do, however, show which parts of the original image, represented by white points, are corrupted most severely. For example, the white points in Figures 1(f) and 1(i) all lie on the curve edges, which means that the backward filter and the coherence filter blur the curve edges, while the white points in Figure 1(l) are distributed randomly, which means that the curve edges are kept well.
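The noise and difference-image computations described above can be sketched as follows; the synthetic ramp image stands in for the tire image (which we do not reproduce), and the recoding is implemented here as a simple rescaling to the full [0, 255] range, our own assumption about what "recoded" means.

```python
import numpy as np

rng = np.random.default_rng(1)

# Add Gaussian white noise of standard deviation 5 to a gray-level image,
# as in the first experiment; the ramp image is a stand-in test input.
original = np.tile(np.arange(64, dtype=float), (64, 1))
noisy = np.clip(original + rng.normal(0.0, 5.0, original.shape), 0.0, 255.0)

def difference_image(smoothed, original):
    """Absolute difference, rescaled ("recoded") to 0-255 for display."""
    d = np.abs(smoothed - original)
    return (255.0 * d / d.max()) if d.max() > 0 else d
```

Because of the rescaling, brightness in a difference image indicates where a method deviates most from the original, not the absolute size of the deviation.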

The second image is a binary image with many test patterns (see Figure 2(d)). Observing the difference images in Figures 2(c), 2(f), 2(i), and 2(l), we find that the difference image of TSNAD has no white points, which means that its result is very similar to the original image (see Figure 2(l)). Moreover, the difference image of the PM method shows large differences at thin edges (see Figure 2(c)), while the backward filter loses only the curve edges (see Figure 2(f)). The difference image of the coherence filter shows that all edges of the test patterns are lost (see Figure 2(i)).

From the above experiments and discussion, we conclude that TSNAD preserves curve edges well in image smoothing.

4. Conclusions

In this paper, we propose TSNAD to preserve curve edges in image smoothing. Unlike existing methods, which perform NAD only over the space variables, TSNAD also approximates the time difference using NAD. The additional time differences form curve belts along the curve edges, which fit curve edges better. Thus, TSNAD obtains satisfactory results in both curve-edge preservation and image smoothing.

5. Future Works

Recently, fractional-order transforms have become a hot topic in many fields, both in theory and in application [5–7, 25–31], for their attractive properties in image and signal processing. Thus, our future work will be devoted to combining NAD with fractional approaches, including the following. (1) Trying a fractional-order kernel in (2.2) instead of the power of 2 [32]. (2) Considering fractional Gaussian noise (fGn), following [25–27, 32, 33]. In addition to fGn, other classes of Gaussian noise, such as the generalized Cauchy (GC) process [29–31], are also worth testing. Further, multiscaled fGn and GC noise [28–31] are worth noting in experiments.

Acknowledgments

This paper is supported by the National Natural Science Foundation of China (nos. 60873102 and 60873264), the Major State Basic Research Development Program (no. 2010CB732501), and the Open Foundation of the Visual Computing and Virtual Reality Key Laboratory of Sichuan Province (no. J2010N03). This work was also supported by a grant from the National High Technology Research and Development Program of China (no. 2009AA12Z140).