Abstract

Methods of 3D reconstruction of a fracture surface from a series of partially focused images acquired in a small field of view (e.g., by a confocal microscope or a CCD camera) are well known. In this case, the projection rays can be considered parallel and the recorded images do not differ from each other by any geometrical transformation. In the case of larger samples (oversized for a microscope or a CCD camera), it is necessary to use a wider field of view (e.g., a standard camera); the acquired images then differ primarily in scaling but may also differ in shift and rotation. Such images cannot be used for reconstruction directly; they must first be registered; that is, all transformations by which the images differ must be determined and their effects eliminated. There are several ways to do this. This paper deals with registration based on phase correlation.

1. Introduction

This paper follows up recently published works [1, 2] on 3D reconstruction methods for images acquired by confocal and nonconfocal microscopes. Confocal microscopes are optical microscopes distinguished from other optical microscopes by two unique properties: they have a very small depth of optical field, and their advanced hardware is capable of removing nonsharp points from images. The points of the object lying very near to the focal plane are visible as sharp points and are depicted as light areas in the optical sections (see Figure 1(a)), whereas the parts lying above or beneath the focal plane are invisible and are represented by black regions. Analogous regions can be observed with conventional microscopes fitted with the same lens (see Figure 1(b)). Both the confocal and nonconfocal snapshots (optical sections) show sharp regions of very similar shapes. The only difference concerns the nonsharp regions, which manifest themselves as blurred regions in the nonconfocal sections, whereas in the confocal sections they are missing (black regions). However, the shapes of the confocal and nonconfocal out-of-focus regions are very similar.

To create a sharp image (2D reconstruction), it is necessary to obtain a series of images of the same object, each of them focused differently and, in the ideal case, each point of the object focused in one of the images. The sharp parts are identified and composed into a new image. There is also a simple method for constructing a rough 3D model of the object in which all sharp points belonging to the same image have the same height.

There are two principal problems in this 2D and 3D reconstruction. The work [3] deals with methods that detect the focused and blurred parts in images taken in nonconfocal mode and are able to assign the corresponding focal plane to each surface point. In this way, a 3D stair approximation of the studied surface can be constructed (see Figure 2). However, this approximation is not sufficient in many applications. It is necessary to specify the height of each point lying between two focal planes and to construct a smooth approximation (see Figure 3).

The projection used is the second problem. In the case of the confocal microscope, we can assume that the field of view is small and the projection used is parallel. The paper [4] and many other works [3, 5–11] presume this projection property. In parallel projection, all images show the field of view at the same size. Figure 4 shows the first image (a) and the thirtieth image (b) of a series of fifty-two photos of the fracture surface of hydrated cement paste acquired by the confocal microscope Olympus LEXT 3100. Corresponding pixels have the same coordinates in the separate partially focused images (compare the crosses and arrows in Figure 4). However, this assumption is not valid in the case of larger samples; the angle between the projection rays is not negligible and the fields of view are different. Figure 5 shows the first image (a) and the forty-third image (b) of a series of seventy photos of a sandstone sample (locality Brno-Hády, Czech Republic) taken with a Canon DSLR camera. The projection used is central, and the fields of view (i.e., also the coordinates of corresponding pixels) of the individual images are clearly different (see the crosses in Figure 5).

Therefore, we set two goals in the conclusion of the work [1]: firstly, to specify the height of each point between two sharpness planes and to construct a smooth approximation. This goal has already been met in [2]; for the result of this new reconstruction, see Figure 3. Some other works [3, 8, 10] also deal with this problem.

The second goal, the solution of the different sizes of the fields of view, is discussed in this paper.

2. Materials and Methods

2.1. Materials and Equipment

The methods discussed in this article are suitable for 2D and 3D processing of samples which are oversized for confocal microscopes. Such samples may measure from a few centimetres upwards (e.g., geological samples). Data may be taken by a CCD camera or a conventional digital camera with a narrow sharpness zone. The scanning device must be connected to a stepping device that allows changing the distance between the camera and the scanned sample and thus the position of the camera's sharpness zone.

The data used in this paper were acquired by special hardware designed and assembled by Professor Tomáš Ficker from the Faculty of Civil Engineering of Brno University of Technology. The hardware consists of a Canon EOS 600D photographic camera fitted with an EF 100 mm f/2.8 Macro USM lens. The camera is mounted on a tough motorized stand which enables movement in the vertical direction. The vertical stepping movement is governed by software running on a PC. See [12] for more details.

The separate images are taken in central projection; that is, they differ at least in the scale used. The varying scale of the images could be handled using elementary mathematics alone; in this ideal case, the image size is proportional to the camera shift (see Figure 6 on the left).
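
For orientation, under an idealized pinhole-camera model (our simplification; the symbols $d_1$ and $\Delta z$ are not taken from [12]), the scale of the $i$-th image relative to the first one can be written as
$$s_i = \frac{d_1}{d_i} = \frac{d_1}{d_1 + (i-1)\,\Delta z},$$
where $d_1$ is the distance between the optical centre and the sharpness plane of the first image and $\Delta z$ is the constant vertical step of the camera; for small shifts, this scale change is approximately linear in the shift.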

However, the practical situation is more complicated. The images differ not only in the scale used but also in the content displayed (different parts are focused in different images). Due to mechanical errors, the step along the z-axis may not be fully constant; the images can also be mutually shifted along the x- or y-axis and even rotated. Image registration is further complicated by the nonplanarity of the samples (see Figure 6 on the right). This is the situation of Figure 5, discussed above: the projection is central, and the fields of view of the first and the forty-third image of the sandstone series are clearly different.

In this paper we describe the preprocessing of a series of such images for 2D and 3D reconstruction. A suitable tool for this preprocessing is the Fourier transform.

2.2. Fourier Transform

The Fourier transform is an integral transform that transforms a function of one or more variables (in the spatial domain) into another function (in the frequency domain) of the same number of variables [3, 13, 14]. Since the Fourier transform of a function is, in general, a complex-valued function and since a digital image is a function of two spatial variables, we will deal here for simplicity with functions $f\colon \mathbb{R}^2 \to \mathbb{C}$.

Digital images are rectangular; for simplicity, we deal here with square images only. All computations that use the Fourier transform are performed with the discrete Fourier transform (or, more precisely, with special algorithms that speed it up, such as the Fast Fourier Transform (FFT)). However, some derivations of image processing methods are better performed with the Fourier transform of functions with the domain $\mathbb{R}^2$, since operations such as rotation and rescaling are easily modeled on such functions.

The standard definition of the Fourier transform of a function of two variables is as follows [15].

Definition 1 (Fourier transform). Let $f(x,y)$ be a function such that the integral
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} |f(x,y)| \,\mathrm{d}x\,\mathrm{d}y$$
exists and is finite. The Fourier transform of $f$ is
$$F(\xi,\eta) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\, e^{-i(x\xi + y\eta)} \,\mathrm{d}x\,\mathrm{d}y.$$
Function $F$ is also called the Fourier spectrum of function $f$. Function $|F|$ is called the amplitude spectrum of $f$.

Definition 2 (inverse Fourier transform). Let $F(\xi,\eta)$ be a function such that the integral
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} |F(\xi,\eta)| \,\mathrm{d}\xi\,\mathrm{d}\eta$$
exists and is finite. The inverse Fourier transform of function $F$ is function $f$ defined as
$$f(x,y) = \frac{1}{4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(\xi,\eta)\, e^{\,i(x\xi + y\eta)} \,\mathrm{d}\xi\,\mathrm{d}\eta.$$
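
In practice, Definitions 1 and 2 are realized by their discrete counterparts. A minimal illustration (ours, not taken from the paper) using NumPy's FFT routines:

```python
import numpy as np

# Stand-in for a square grayscale image (the paper works with such images).
image = np.random.rand(256, 256)

F = np.fft.fft2(image)            # discrete Fourier transform (Fourier spectrum)
amplitude = np.abs(F)             # amplitude spectrum |F|
restored = np.fft.ifft2(F).real   # inverse transform recovers the original image

assert np.allclose(restored, image)
```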

2.3. Phase Correlation

For processing and analyzing the images, it is necessary to transform them so that the studied structures are at the same position in all the images. This is the task of image registration: to find such a transformation. In some applications we assume that the images were only shifted; in others we allow shift, rotation, and scale change (i.e., a similarity transformation), a general linear transformation, or even general transformations.

The methods used for registration depend on the expected transformation and on the structures in the image. Some methods use corresponding structures or points in the images and then find a global transformation from the measured positions of these structures or points [16–18]. These methods require the structures to be clearly visible. Other methods are based on correlation and work with the image as a whole. The phase correlation proved to be a powerful tool (not only) for the registration of partially focused images. For functions $f, g\colon \mathbb{R}^2 \to \mathbb{C}$, it is defined as
$$P_{f,g}(x,y) = \mathcal{F}^{-1}\!\left[\frac{F(\xi,\eta)\,\overline{G(\xi,\eta)}}{|F(\xi,\eta)|\,|G(\xi,\eta)|}\right](x,y)$$
and its modification as
$$Z_{f,g}(x,y) = \mathcal{F}^{-1}\!\left[H(\xi,\eta)\,\frac{F(\xi,\eta)\,\overline{G(\xi,\eta)}}{\big(|F(\xi,\eta)| + p\big)\big(|G(\xi,\eta)| + q\big)}\right](x,y),$$
where the bar means complex conjugation, $H(\xi,\eta)$ is a bounded real function, and $p, q \ge 0$ are arbitrary constants. It can be proved that for real functions $f, g$ the phase correlation function is real [14]. This is of great value, since it enables us to search for extrema of the phase correlation function.
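
A direct transcription of the modified definition into NumPy might look as follows (a sketch with our own choice of the constants p and q; it is not the authors' implementation):

```python
import numpy as np

def phase_correlation(f, g, p=0.001, q=0.001):
    """Phase correlation of two equally sized grayscale images.

    Implements the normalized cross-power spectrum with the regularizing
    constants p and q of the modified definition; H is taken as 1 here.
    """
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G) / ((np.abs(F) + p) * (np.abs(G) + q))
    return np.fft.ifft2(cross).real   # real for real f, g (up to rounding)
```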

2.4. Shifted Images

The phase correlation function can also be used for the estimation of image shift. The method was first published by Kuglin and Hines [19].

It is clearly seen that the phase correlation function of a function with itself is the δ-distribution; that is, $P_{f,f}(x,y) = \delta(x,y)$. The δ-distribution is a generalized function for which
$$\delta(x,y) = 0 \ \text{for} \ (x,y) \neq (0,0), \qquad \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(x,y) \,\mathrm{d}x\,\mathrm{d}y = 1.$$
In an illustration of the δ-distribution, the maximum pixel value is used instead of infinity. The illustration of the phase correlation of a function with itself can be seen in Figure 7 on the left.

If two functions are shifted in their arguments, that is, $g(x,y) = f(x - x_0,\, y - y_0)$, their Fourier transforms are shifted in phase; that is,
$$G(\xi,\eta) = F(\xi,\eta)\, e^{-i(x_0\xi + y_0\eta)},$$
and their phase correlation function is the δ-distribution shifted in its arguments by the opposite shift vector:
$$P_{f,g}(x,y) = \delta(x + x_0,\, y + y_0).$$
The illustration of the phase correlation of shifted but otherwise identical images can be seen in Figure 7 on the right.

This is the main idea of phase correlation. The task of finding a shift between two images is converted by the phase correlation to the task of finding the only nonzero point in a matrix (when computed with the discrete Fourier transform). If the images are not identical up to a shift, that is, if the images are not ideal, the phase correlation function is more complicated, but it still has a global maximum at the coordinates corresponding to the shift vector. To keep this maximum global, the phase correlation can be modified as suggested in Section 2.3 (the weight function $H$ and the constants $p, q$), or the original images themselves can be modified, and the parameters of these modifications can be optimized.
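
The following toy example (ours) demonstrates the idea: a known cyclic shift is recovered from the position of the maximum of the phase correlation function.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((256, 256))
g = np.roll(f, shift=(17, -5), axis=(0, 1))       # g is f shifted by (17, -5)

F, G = np.fft.fft2(f), np.fft.fft2(g)
P = np.fft.ifft2(F * np.conj(G) / (np.abs(F) * np.abs(G) + 1e-12)).real

peak = np.unravel_index(np.argmax(P), P.shape)
# Peak positions beyond half the image size correspond to negative shifts.
shift = [s if s < n // 2 else s - n for s, n in zip(peak, P.shape)]
print(shift)                                      # [-17, 5]: the opposite shift vector
```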

2.5. Rotated Images

The phase correlation function can also be used for the estimation of image rotation and rescaling. The method was first published by Reddy and Chatterji [20]. Let $g$ be function $f$ rotated by angle $\alpha$ and shifted in its arguments; that is,
$$g(x,y) = f(x\cos\alpha + y\sin\alpha - x_0,\; -x\sin\alpha + y\cos\alpha - y_0).$$
Their Fourier spectra and amplitude spectra are related as follows:
$$|G(\xi,\eta)| = |F(\xi\cos\alpha + \eta\sin\alpha,\; -\xi\sin\alpha + \eta\cos\alpha)|.$$
The shift results in a phase shift, and the spectra are rotated in the same way as the original functions. A crucial step here is the transformation of the amplitude spectra into the polar coordinate system to obtain functions $|F_p|, |G_p|$ such that $|G_p(\rho,\varphi)| = |F_p(\rho,\, \varphi - \alpha)|$. The rotation around an unknown centre of rotation has thus been transformed to a shift along the angular axis. This shift is estimated with the standard phase correlation (Section 2.4); after rotating $g$ back by the measured angle, the shift is measured with another computation of the phase correlation.
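
A possible resampling of a centred amplitude spectrum into polar coordinates, written as a nearest-neighbour sketch of our own (the authors do not publish code):

```python
import numpy as np

def to_polar(amplitude, n_rho=256, n_phi=256):
    """Resample a centred amplitude spectrum (np.fft.fftshift applied
    beforehand) into polar coordinates; a rotation of the spectrum then
    appears as a cyclic shift along the angular axis."""
    n = amplitude.shape[0]                                 # square spectrum assumed
    centre = n / 2.0
    rho = np.linspace(0.0, centre - 1.0, n_rho)
    phi = np.linspace(0.0, np.pi, n_phi, endpoint=False)   # half range suffices (Section 2.7)
    r, a = np.meshgrid(rho, phi, indexing="ij")
    x = np.clip(np.round(centre + r * np.cos(a)).astype(int), 0, n - 1)
    y = np.clip(np.round(centre + r * np.sin(a)).astype(int), 0, n - 1)
    return amplitude[x, y]
```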

2.6. Scaled Images

Let $g$ be function $f$ rotated by angle $\alpha$, shifted, and scaled by factor $\beta$ in its arguments; that is,
$$g(x,y) = f\big(\beta(x\cos\alpha + y\sin\alpha) - x_0,\; \beta(-x\sin\alpha + y\cos\alpha) - y_0\big).$$
Their Fourier spectra and amplitude spectra are related as follows:
$$|G(\xi,\eta)| = \frac{1}{\beta^2}\left|F\!\left(\frac{\xi\cos\alpha + \eta\sin\alpha}{\beta},\; \frac{-\xi\sin\alpha + \eta\cos\alpha}{\beta}\right)\right|.$$
The shift results in a phase shift; the spectra are rotated in the same way as the original functions and scaled with a reciprocal factor. A crucial step here is the transformation of the amplitude spectra into the logarithmic-polar coordinate system
$$\rho = \log\sqrt{\xi^2 + \eta^2}, \qquad \varphi = \operatorname{atan2}(\eta, \xi)$$
to obtain functions $|F_{lp}|, |G_{lp}|$ such that $|G_{lp}(\rho,\varphi)| = \frac{1}{\beta^2}\,|F_{lp}(\rho - \log\beta,\; \varphi - \alpha)|$.

Both the rotation and the scale change have thus been transformed to shifts. The unknown angle $\alpha$ and the unknown factor $\beta$ can be estimated by means of the phase correlation applied to the amplitude spectra in the logarithmic-polar coordinate system $(\rho, \varphi)$. After rotating function $g$ back by the estimated angle and rescaling it by the estimated factor, the shift vector is estimated by means of the standard phase correlation (Section 2.4).
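
The log-polar variant differs from the polar sketch above only in the radial sampling; phase correlation of the two resampled spectra then yields the angle as the shift along $\varphi$ and $\log\beta$ as the shift along $\rho$. Again a nearest-neighbour sketch of ours:

```python
import numpy as np

def to_log_polar(amplitude, n_rho=256, n_phi=256):
    """Resample a centred amplitude spectrum into logarithmic-polar
    coordinates: rotation becomes a shift in phi, rescaling a shift in
    log-rho, so both can be read off a single phase correlation."""
    n = amplitude.shape[0]                                 # square spectrum assumed
    centre = n / 2.0
    radius = np.exp(np.linspace(0.0, np.log(centre - 1.0), n_rho))
    phi = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    r, a = np.meshgrid(radius, phi, indexing="ij")
    x = np.clip(np.round(centre + r * np.cos(a)).astype(int), 0, n - 1)
    y = np.clip(np.round(centre + r * np.sin(a)).astype(int), 0, n - 1)
    return amplitude[x, y]
```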

2.7. Practical Issues

Amplitude spectra of real functions are even functions, $|F(\xi,\eta)| = |F(-\xi,-\eta)|$; therefore it is sufficient to use only a half of the domain of the spectra, for example, $\eta \ge 0$. If the amplitude spectra (computed by means of the discrete Fourier transform) are transformed to polar coordinates, only a half of the domain on the angular axis, for example, $\varphi \in [0, \pi)$, is sufficient.

The amplitude spectra have very high values at the origin and in its close neighbourhood compared to the rest of the domain; therefore, instead of the values of the amplitude spectra, it is better to use their logarithms, for example, $\log(1 + |F(\xi,\eta)|)$, which use the dynamic range of the amplitude spectra more effectively.
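
For example (one common choice; the offset 1 merely avoids the logarithm of zero):

```python
import numpy as np

image = np.random.rand(256, 256)            # stand-in for an optical section
F = np.fft.fftshift(np.fft.fft2(image))     # centre the spectrum
log_amplitude = np.log(1.0 + np.abs(F))     # compressed dynamic range
```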

The discrete Fourier transform treats images as if they were periodic, with a period equal to the image size on both axes. The image edges thus represent jumps in pixel values. Therefore, it is necessary to "remove" the image edges, that is, to smooth them out by multiplying the images with so-called windowing functions. The most common are the Gaussian and Hanning window functions, usually applied radially symmetrically. If there are important structures close to the image corners, the window may instead keep a central square or rectangle untouched and then decrease to zero.
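
A radially symmetric Hanning window could be built as follows (our sketch):

```python
import numpy as np

n = 256
image = np.random.rand(n, n)                 # stand-in for a square image

# Normalized distance from the image centre: 0 at the centre, 1 at the
# midpoints of the edges; the window falls smoothly to zero there.
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2) / (n / 2)
window = np.where(r < 1.0, 0.5 * (1.0 + np.cos(np.pi * r)), 0.0)

windowed = image * window                    # edges suppressed before the DFT
```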

Image pixel coordinates are integers, but the scaling, rotation, and shift vector obtained by registration are generally non-integer values. Therefore, the values of the pixels in the target image are calculated by various interpolation methods (nearest neighbour, bilinear, or bicubic interpolation).
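
With SciPy, the estimated (generally non-integer) parameters could be applied, for example, with bilinear interpolation (order=1); the parameter values below are purely illustrative:

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(256, 256)             # stand-in for an input image
angle = np.deg2rad(0.05)                     # estimated rotation (radians)
scale = 1.002                                # estimated scale factor
shift = np.array([-3.4, 1.7])                # estimated shift (pixels)

c, s = np.cos(angle), np.sin(angle)
A = scale * np.array([[c, -s], [s, c]])      # rotation + scaling matrix

centre = (np.array(image.shape) - 1) / 2
offset = centre - A @ centre + shift         # keep the rotation centred
registered = ndimage.affine_transform(image, A, offset=offset, order=1)
```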

3. Results

The theory described in the previous section was applied to a series of 43 partially focused images of a sandstone sample (locality Brno-Hády, Czech Republic) acquired in central projection with a field of view of 2.5 × 1.875 cm. The results are summarized in Table 1. We have identified mostly insignificant rotations (note that a rotation of two hundred arc seconds around the centre means a deviation of one-half pixel in an image with a resolution of 1024 × 768 pixels); the shift of the image centre is more significant.

The graph in Figure 8 illustrates the scaling of the individual images (relative to the first one). In practice, it is not precisely linear, contrary to the idealized assumption of Section 2.1 (see also Figure 6). In Figure 9, we can see the sum of four input images without registration (a) and the sum of the same images preprocessed using the registration method described in Section 2 (b).

The entire process of 2D and 3D processing of a series of partially focused images acquired in central projection thus proceeds as follows:
(a) Preprocessing: registration of the partially focused images, either using elementary mathematics (see Figure 6 on the left), a simple but usually not sufficiently precise method, or using the method proposed in Sections 2.2–2.7 of this paper.
(b) 2D reconstruction: identification of the sharp parts in the separate images and composition of a new, entirely sharp 2D image (see [3, 4]).
(c) 3D reconstruction: a height is assigned to all the image points (see [1, 2, 4, 10]).

In Figure 10, we can see the 3D reconstruction of this series using the method described in [1, 2], that is, without any registration. Registration is not necessary in the case of confocal microscope images. However, in the case of classic cameras, the images differ in scaling and rotation, and their 3D reconstruction without any registration is unusable.

In Figure 11, we can see the 3D reconstruction of the same series, transformed using elementary mathematics according to Figure 6 on the left. The result is significantly better, but artifacts are still evident.

Figure 12 illustrates the 3D reconstruction of the same series registered using the phase correlation methods described in Section 2. No artifacts are apparent in this reconstruction.

4. Conclusions

In the case of 3D reconstruction from a series of partially focused images of oversized surfaces, we usually cannot neglect the angle between the projection rays and, therefore, the different scales of the individual images. In the simplest case, we can assume that the scaling is linear and that no other geometric transformations occur. This case can be solved using elementary methods, but the subsequent 3D reconstruction usually contains unwanted artifacts. In real devices, the scaling is not linear, and the images can be shifted and even rotated with respect to each other. If an accurate 3D reconstruction is required, precise image registration is necessary as preprocessing. Phase correlation is a suitable method for this preprocessing. It is able to detect the above-mentioned transformations with subpixel precision so that we can neatly eliminate their effects.

Conflicts of Interest

There are no conflicts of interest related to this paper.

Acknowledgments

This work was supported by the Grant Agency of the Czech Republic under Contract no. 13-03403S. The authors acknowledge partial support from the Brno University of Technology, Specific Research Project no. FSI-S-14-2290 and Project LO1202 by financial means from the Ministry of Education, Youth and Sports under the National Sustainability Programme I. The authors thank Professor Tomáš Ficker from the Faculty of Civil Engineering of Brno University of Technology for the provided data.