Mathematical Problems in Engineering
Volume 2015, Article ID 852016, 10 pages
http://dx.doi.org/10.1155/2015/852016
Research Article

Phase Error Caused by Speed Mismatch Analysis in the Line-Scan Defect Detection by Using Fourier Transform Technique

School of Mechatronic Engineering, China University of Mining & Technology, 1 Daxue Road, Xuzhou, Jiangsu 221116, China

Received 8 April 2015; Revised 18 June 2015; Accepted 23 June 2015

Academic Editor: Oleg V. Gendelman

Copyright © 2015 Eryi Hu and Yuan Hu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The phase error caused by speed mismatch is investigated in line-scan 3D profile measurement. The experimental system consists of a line-scan CCD camera, an object moving device, a digital fringe pattern projector, and a personal computer. In the experimental procedure, the detected object moves relative to the image capturing system at a constant velocity on a motorized translation stage. A digital fringe pattern is projected onto the detected object, and the deformed patterns are captured and recorded by the computer. The object surface profile is then calculated by Fourier transform profilometry. However, a moving speed mismatch error still exists in most engineering applications, even after an image system calibration. When the moving speed of the detected object is faster than the expected value, the captured image is compressed along the moving direction of the detected object. In order to overcome this kind of measurement error, an image recovering algorithm is proposed to reconstruct the compressed image, so that the phase values can be extracted much more accurately from the reconstructed images. The phase error distribution caused by the speed mismatch is then analyzed by both simulation and experiment.

1. Introduction

It is known that Fourier transform profilometry has been widely used in machine vision, industrial monitoring, product surface inspection, and so forth [1–5]. By using this type of phase extraction algorithm, the phase distribution of the deformed fringe patterns can be obtained from only one frame of image [6, 7], so that the projection grating phase measurement method can be applied to the surface profile inspection of dynamic or moving objects [8]. Furthermore, the line-scan or the time delay and integration (TDI) CCD camera is used to capture the surface image in moving object inspection [9, 10]. In particular, applications of the TDI camera have been reported for the dynamic inspection of rotating objects [11, 12], in which Fourier transform profilometry is used to obtain the surface profile information. Error analyses of projection grating Fourier transform profilometry have been reported [8, 13], from which it is found that the main sources of phase extraction error are the nonlinear response of the CCD, random noise, the quantization of grey levels, the spatial carrier frequency, calibration error, and so forth.

However, compared with the TDI CCD camera image capturing system, there is no time delay and integration procedure of the charges in line-scan CCD image sensors [12], so that the image capture speed of the line-scan CCD camera is much faster, and its price much lower, than that of the TDI CCD. Similarly, a phase error will also arise when the line-scan rate and the moving speed of the detected object are mismatched. As there is an obvious difference in operating principle between these two kinds of CCD camera, the measurement error of the line-scan CCD camera caused by the speed mismatch is discussed in depth in this paper. A line-scan 3D surface profile measurement system is constructed to obtain the surface profile of the moving object. The object surface height values are calculated by Fourier transform profilometry. Simulation and experimental methods are applied to analyze the phase error distribution of the detected 3D model. In order to reduce the phase error to a lower level, an image recovering algorithm is applied to reconstruct the distorted images.

2. The Principle of the Line-Scan Fourier Transform Profilometry

2.1. The Experimental System

The configuration of the line-scan Fourier transform profilometry experimental setup is based on conventional projection grating systems. As shown in Figure 1, the experimental system consists of a line-scan CCD camera, an object moving device, a digital fringe pattern projector, a speed coder, and a personal computer. Only one frame of parallel fringe pattern with cosine-modulated intensity is projected onto the moving object plane at an oblique incidence angle. The line-scan CCD camera is mounted above the moving device to capture the deformed fringe patterns modulated by the surface profile of the detected object. The optical axis of the camera is normal to the reference plane, and the line-scan direction is perpendicular to the moving direction. Furthermore, the projected fringe direction is parallel to the moving direction of the object. The detected object is placed on a motorized translation stage. When the detected object moves with the stage at a constant velocity, the fringe pattern is captured by the line-scan CCD camera line by line. The speed coder detects the moving speed of the object, and its pulse signal is sent back to the computer to synchronize the object motion with the line scan. Furthermore, the image capturing system must be calibrated to obtain an initial setting for the trigger of the line-scan CCD camera. However, the moving speed mismatch error will still exist in most engineering applications even after an image system calibration. Finally, the deformed images are recorded in a personal computer for subsequent calculation. The object surface height can be obtained after extracting the fringe deformation between the reference and the detected surface gratings. In this study, the Fourier transform method, described in the following, is used to evaluate the fringe deformation.
When the speed mismatch error is unavoidable, the phase error of the deformed fringe pattern will be introduced in the measurement.

Figure 1: Sketch map of the experimental setup.
2.2. The Phase Extraction Algorithm

The geometric relationship between the projection gratings and the surface height is shown in Figure 2. The incidence angle between the fringe pattern projection direction and the observation direction is denoted as θ, and the detected object moves in the y-axis direction. A point on the object surface has two corresponding shadow points on the reference plane, denoted as A and B, respectively. Following the triangle geometric principle, the object surface height can be obtained if the fringe deformation between the reference plane grating and the detected surface grating can be measured [7].

Figure 2: Optical geometric relationship.

The image intensity of the deformed fringe pattern recorded by the line-scan CCD camera can be expressed as

I(x, y) = a(x, y) + b(x, y) cos[2πf₀x + φ(x, y)], (1)

where a(x, y) is the background intensity, b(x, y) is the amplitude of the gratings, f₀ is the spatial carrier frequency, and φ(x, y) is the phase change caused by the surface height of the object. Equation (1) shows that the signal φ(x, y) is modulated by a constant high-frequency carrier cos(2πf₀x). The key point in this issue is how to extract the phase change information accurately in full field from the deformed fringe patterns. There are some classical phase extraction methods, such as the N-step phase-shifting method and the Fourier transform technique. However, the detected object is moving relative to the image capturing system in the online inspection procedure, so only one frame of fringe pattern can be captured and recorded by the line-scan CCD camera in the moving object inspection. The 1D Fourier transform method is therefore normally used to extract the phase change in this situation.

In order to apply the proposed Fourier transform method, (1) can be rewritten as

I(x, y) = a(x, y) + c(x, y) exp(2πif₀x) + c*(x, y) exp(−2πif₀x), (2)

where c(x, y) = (1/2) b(x, y) exp[iφ(x, y)] and c*(x, y) is the complex conjugate of c(x, y). The Fourier transform of (2) with respect to x becomes

G(f, y) = A(f, y) + C(f − f₀, y) + C*(f + f₀, y), (3)

where A(f, y), C(f − f₀, y), and C*(f + f₀, y) represent the Fourier spectra and C* is the complex conjugate of C.

As the spatial variations of a(x, y), b(x, y), and φ(x, y) are much slower than the carrier frequency f₀, the spectrum C(f − f₀, y) can be filtered out by an adequate window in the frequency domain. Then C(f, y) is obtained by shifting the filtered spectrum by f₀ toward the origin. Taking the inverse Fourier transform of C(f, y), c(x, y) can be extracted easily. Finally, the phase change of the deformed pattern is expressed as

φ(x, y) = arctan{Im[c(x, y)] / Re[c(x, y)]}. (4)
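As an illustration, the band-pass filtering and carrier removal described above can be sketched in a few lines of Python with NumPy (a minimal 1D sketch; the function name, the rectangular window, and the parameters are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def ftp_phase_1d(signal, f0, bw):
    """Extract the wrapped phase of a 1D fringe signal
    a + b*cos(2*pi*f0*x + phi(x)) by Fourier filtering.

    f0 : carrier frequency in cycles per sample
    bw : half-width of the rectangular band-pass window around f0
    """
    n = len(signal)
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n)
    # Keep only the positive-frequency lobe C(f - f0) of the spectrum
    window = (freqs > f0 - bw) & (freqs < f0 + bw)
    c = np.fft.ifft(np.where(window, spectrum, 0))
    # Multiplying by exp(-2*pi*i*f0*x) removes the carrier, leaving
    # (b/2)*exp(i*phi(x)); its argument is the wrapped phase phi(x)
    x = np.arange(n)
    return np.angle(c * np.exp(-2j * np.pi * f0 * x))
```

For a clean cosine fringe the wrapped phase returned by `np.angle` matches φ(x) up to small edge effects caused by the rectangular frequency window.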

Then the surface height information of the detected object can be calculated from a simple trigonometric relationship. The errors caused by calibration and other factors in dynamic line-scan measurement with the Fourier transform have been reported in [13]. Hence, in what follows we mainly focus on the phase error caused by the speed mismatch problem in line-scan Fourier transform profilometry.

3. Speed Mismatch in the Line-Scan Procedure

In order to overcome the image error caused by the speed mismatch in the line-scan procedure, a speed coder must be used to obtain the relative speed between the detected object and the line-scan CCD image capturing system. The coder shaft is mounted on a wheel in contact with the translation stage to measure the transport speed. Moreover, the pulse signal generated by the speed coder is fed back to the computer to trigger the line rate of the line-scan CCD camera in real time. When the relative moving speed of the detected object corresponds well to the scan line rate of the line-scan CCD camera, the captured image is clear and bright, without any distortion. Otherwise, the fringe pattern image is distorted and blurred. However, the speed mismatch error still exists in our experimental environment because of the unavoidable error in the image system synchronization procedure. The captured image is then stretched or compressed along the moving direction of the detected object. As shown in Figure 3, two frames of images of a 3D specimen are recorded by the proposed line-scan image system. The width of the experimental specimen is 8 cm. It is found that there is an obvious image deformation in the moving direction between the well-matched and mismatched speed conditions. In the captured image of Figure 3(b), the moving speed is 20% faster than the appropriate value, so the captured image is obviously compressed along the moving direction of the detected object.

Figure 3: Captured fringe patterns: (a) speed match well and (b) speed mismatch.

In order to explain this image distortion phenomenon, a digital simulation method is used to study the speed mismatch error. The image compression procedure can be simulated by the approach shown in Figure 4. In this image processing, “1,” “2,” “3,” and “4” are four simulated image pixel cells, which are used to explain the line-scan action of the line-scan CCD camera. The four image pixel cells are arranged along the moving direction. If the moving speed matches well with the line-scan speed, the corresponding captured points on the detected object surface are “A,” “B,” “C,” and “D.” However, when the moving speed of the detected object is faster than the expected value, only three image pixel cells may be needed to record the object surface area marked as “A,” “B,” “C,” and “D.” Hence, the recorded image is compressed along the moving direction. On the other hand, if the moving speed is smaller than the required speed, the recorded image is stretched along the moving direction. In this study, the interpolation technique is applied to simulate the image mismatch procedure. Firstly, the image pixels “1”–“4” are extended to 12 image pixels by an image intensity interpolation method, with intermediate pixel values inserted between the original pixels. Then three new image pixels are resampled from the extended sequence to construct a compressed image.
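The resampling sketched in Figure 4 can be imitated with linear interpolation along the moving direction (a hypothetical sketch; the function name and the use of `np.interp` are assumptions, not the paper's code):

```python
import numpy as np

def compress_lines(image, speed_ratio):
    """Simulate a speed mismatch: the object moves `speed_ratio` times
    faster than expected, so the same surface span is recorded with
    fewer scan lines (image rows = moving direction)."""
    n = image.shape[0]
    m = int(round(n / speed_ratio))      # fewer lines cover the same span
    src = np.linspace(0.0, n - 1.0, m)   # object positions actually sampled
    out = np.empty((m, image.shape[1]))
    for col in range(image.shape[1]):
        # Linear interpolation of each column, as in Figure 4
        out[:, col] = np.interp(src, np.arange(n), image[:, col])
    return out
```

With `speed_ratio = 1.2`, a 120-line record shrinks to 100 lines, mimicking the compressed pattern of Figure 3(b).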

Figure 4: The principle of image compression.

4. Phase Error Simulation Analysis

4.1. A Cap Model

Following the image compression principle shown in Figure 4 for the line-scan image capturing system, the compressed fringe pattern can be simulated to study the phase extraction error caused by the speed mismatch issue. The 3D shape of a cap model, shown in Figure 5, is formed in the simulation, and the height of the cap is set as 5 pixels. One frame of sinusoidal grating is hypothetically projected onto the moving cap model at an oblique incident angle. The fringe pattern modulated by the model surface is simulated through the triangle relationship between the projection and observation directions. When the moving speed of the detected object matches well with the line-scan CCD camera, the simulated deformed fringe pattern is shown in Figure 6(a). On the other hand, when the moving speed is 17% faster than the appropriate speed, the simulated fringe pattern modulated by the same 3D cap model surface is shown in Figure 6(b). It is obvious that the image of the speed mismatch case is smaller than that of the well-matched case along the moving direction. The desired phase distributions modulated by the model surface are extracted by the proposed Fourier transform phase recovering algorithm. The phase distribution maps without the unwrapping process are shown in Figures 7(a) and 7(b). After a phase unwrapping process, the phase difference distributions corresponding to the well-matched and mismatched speed conditions are shown in Figures 8(a) and 8(b), respectively.
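A deformed pattern of this kind can be synthesized as follows (a minimal sketch assuming a parallel-projection geometry in which a surface height h shifts the grating by h·tanθ; the function name and intensity levels are illustrative, not the paper's simulation code):

```python
import numpy as np

def deformed_fringe(height, f0, theta):
    """Synthesize a fringe pattern modulated by a height map, assuming a
    parallel-projection geometry: a surface point of height h shifts the
    grating by h*tan(theta), i.e. a phase change of 2*pi*f0*h*tan(theta).
    The carrier runs along the scan-line (column) direction."""
    x = np.arange(height.shape[1])[None, :]          # scan-line axis
    phase = 2 * np.pi * f0 * (x + height * np.tan(theta))
    return 128.0 + 100.0 * np.cos(phase)             # 8-bit-like intensity
```

Feeding the resulting pattern through the phase extraction of Section 2.2, with and without line compression, reproduces the comparison of Figures 6–8 in miniature.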

Figure 5: Simulated cap model.
Figure 6: Deformed fringe patterns of cap model: (a) speed match well and (b) speed mismatch.
Figure 7: Phase distribution of cap model without unwrapping: (a) speed match well and (b) speed mismatch.
Figure 8: Unwrapped phase difference distribution of cap model: (a) speed match well and (b) speed mismatch.

It is well known that when the moving speed of the detected object is faster than the expected value, the captured image is compressed. Hence, the measured surface profile data cannot cover the actual object well, which introduces an extra measurement error in the experiment. In order to obtain the phase error caused by the speed mismatch in the moving detection, the compressed phase map must be recovered to the original image size. The compressed phase image recovering procedure can be simulated by the image processing shown in Figure 9. In this reverse procedure, four image pixel cells denoted as “1,” “2,” “3,” and “4” are again used to explain the line-scan action of the CCD camera. The corresponding captured points on the object surface are “A,” “B,” “C,” and “D” when the moving speed matches well with the line-scan speed. When the moving speed of the detected object is faster than the expected value, however, three image pixels denoted as “1,” “2,” and “3” are enough to record the object surface area. The compressed image can then be reconstructed to the original size as follows. Firstly, the image pixels “1,” “2,” and “3” are extended to 12 image pixels by an image intensity interpolation method, and then four new image pixels are resampled from the extended sequence for the recovered image. By using this kind of image recovering algorithm, the compressed phase map shown in Figure 8(b) is extended to the same size as the original, and the reconstructed phase difference distribution map corresponding to the speed mismatch condition is shown in Figure 10. It is found that the top and bottom corner regions of the reconstructed phase map are blurred because of the image reconstruction at the image boundary; however, this error does not appear in the central area of the detected object surface.
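The reverse operation of Figure 9, stretching the compressed record back to the original number of scan lines, can likewise be sketched with linear interpolation (illustrative names; the paper's own interpolation details are not specified):

```python
import numpy as np

def recover_lines(compressed, n_original):
    """Stretch a speed-compressed record back to the original number of
    scan lines by linear interpolation (the reverse of the compression)."""
    m = compressed.shape[0]
    dst = np.linspace(0.0, m - 1.0, n_original)
    out = np.empty((n_original, compressed.shape[1]))
    for col in range(compressed.shape[1]):
        out[:, col] = np.interp(dst, np.arange(m), compressed[:, col])
    return out
```

Because the recovered lines are interpolated rather than truly sampled, a residual error remains wherever the underlying signal varies along the moving direction, which is exactly the error analyzed next.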

Figure 9: The principle of image reconstruction.
Figure 10: Reconstructed phase difference distribution map with speed mismatch.

However, an extra error is introduced by this image reconstruction procedure. As shown in Figure 9, the pixel point denoted as “1” corresponds to a real point on the detected object surface, but after the intensity interpolation a virtual point is used instead of the real point “1.” There is an intensity error between the two different pixel points, and a corresponding phase error between them also exists. Moreover, because the detected object is moving through the image capturing system, the pixel corresponding to a given real surface point appears at a different image position in each measurement. Hence, there is an uncertainty associated with this phase error. For a series of n measurements of the same measurand, the quantity s that characterizes the dispersion of the results is given by

s = √[ Σ_{k=1}^{n} (q_k − q̄)² / (n − 1) ],

where q_k is the result of the kth measurement and q̄ is the arithmetic mean of the n results considered.
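The dispersion statistic above is the usual experimental standard deviation of repeated measurements, which can be checked numerically (a trivial sketch; `dispersion` is an illustrative name):

```python
import numpy as np

def dispersion(results):
    """Experimental standard deviation of repeated measurements:
    s = sqrt( sum((q_k - q_mean)^2) / (n - 1) )."""
    q = np.asarray(results, dtype=float)
    return np.sqrt(np.sum((q - q.mean()) ** 2) / (len(q) - 1))
```

This agrees with NumPy's built-in `np.std(..., ddof=1)`, the sample (n − 1) form rather than the population form.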

Finally, the phase maps obtained by the two different simulated results, which are shown in Figures 8(a) and 10, are subtracted, so that the phase error caused by the speed mismatch error can be obtained. The phase error distribution map of the simulated cap model is shown in Figure 11. The simulation result shows that the phase extracted by the Fourier algorithm with the speed mismatch fringe pattern presents a special phase error distribution. The maximal phase error caused by the speed mismatch is 0.2 rad. Subsequently, the computed height of the model surface will also present the same special error distribution.

Figure 11: Phase extracting error of cap model.

The proposed digital simulation method for error analysis was then applied under different speed mismatch conditions. Five different speed mismatch values are simulated to obtain the characteristics of the error variation. The phase error map distributions all present the same characteristics as shown in Figure 11. The maximal phase errors caused by the speed mismatch and the image recovering procedure are listed in Table 1. It is found that the phase error level decreases with the reduction of the speed mismatch value.

Table 1: The maximal phase error under different speed mismatch conditions.

4.2. A Cylinder Model

Following the principles of image compression and image reconstruction shown in Figures 4 and 9, it seems that the main source of the phase error is the image blurring and pixel interpolation occurring along the moving direction. So, in this part, another 3D shape, the cylinder model shown in Figure 12, is simulated; this model has no surface slope along the moving direction. The plane size of this model is the same as that of the cap model, the height is set as 15 pixels, and the incident angle of the sinusoidal gratings is also the same. The fringe patterns modulated by the model surface under the well-matched and mismatched speed conditions are simulated in Figures 13(a) and 13(b), respectively. The moving speed corresponding to the compressed fringe pattern is slightly faster than the appropriate value, so the image is compressed along the moving direction. The desired phase modulated by the simulated cylinder model surface is also extracted by the Fourier transform technique. The phase distribution maps without the unwrapping process are shown in Figures 14(a) and 14(b). After a phase unwrapping process, the unwrapped phase difference distributions of the fringe patterns are shown in Figures 15(a) and 15(b), respectively.

Figure 12: Simulated cylinder model.
Figure 13: Deformed fringe patterns of cylinder model: (a) speed match well and (b) speed mismatch.
Figure 14: Phase of cylinder model without unwrapping: (a) speed match well and (b) speed mismatch.
Figure 15: Unwrapped phase difference distribution of cylinder model: (a) speed match well and (b) speed mismatch.

Following the same calculation procedure as shown in Figure 9, the compressed phase difference map must be recovered to the original image size, and then the phase maps obtained from the two different simulated results are subtracted. The phase error distribution map caused by the speed mismatch issue is shown in Figure 16. The simulation result shows that the phase error of this cylinder model extracted by the Fourier algorithm from the speed mismatch fringe pattern can be ignored. Comparing the two different 3D shape models, it is found that if the detected object has no surface slope along the moving direction, the phase distribution is not affected by the image reconstruction of the compressed fringe pattern.

Figure 16: Phase extracting error distribution of cylinder model.

5. Experiment and Error Analysis

In order to verify the previous simulation results concerning the phase error distribution caused by the moving speed mismatch in line-scan profile measurement, an experiment is designed and the calculation results are shown as follows. The image capturing system is shown in Figure 1, and the detected object is a white plate with three surface defects. Two different moving speeds of the detected object are set for the error analysis, and thus two frames of images are recorded by the proposed line-scan image system. As shown in Figure 3, the captured image is compressed along the moving direction of the detected object when the moving speed is faster than the appropriate value. By using the Fourier transform phase extraction algorithm, the phase distribution maps without the unwrapping process are obtained as shown in Figure 17. After a phase unwrapping procedure, the unwrapped phase of the detected object can be obtained; the surface phase meshing of the detected object is shown in Figure 18. The compressed phase map corresponding to the faster moving object can then be recovered via the proposed image reconstruction algorithm shown in Figure 9. The resulting phase error distribution map of the experimental specimen is shown in Figure 19. It is found that the phase error is much more obvious at the edges of the surface defects along the moving direction, and these areas are marked as red blocks in the phase error map. On the contrary, the phase error can be ignored in the middle of the surface defects. The main reason for this phase error distribution characteristic is that the slopes of the surface profile around the defects differ: the surface slopes at the edges of the surface defects are much larger than those in the middle. In a word, the experimental results fit well with the simulation analysis.

Figure 17: Phase distribution without unwrapping of experimental specimen: (a) speed match well and (b) speed mismatch.
Figure 18: Phase meshing of specimen.
Figure 19: Phase error distribution of specimen.

6. Conclusion

Because of the speed mismatch error in the line-scan profilometry experimental system, the phase error of a complex detected surface is unavoidable in surface information detection. Through the proposed phase error simulation and experimental analysis, the phase error maps extracted by the Fourier transform algorithm present a special distribution characteristic corresponding to the detected surface profile. Furthermore, the phase error can be ignored when the detected object surface profile has no sharp change along the moving direction. Hence, the line-scan image capturing system must be calibrated accurately in practical engineering inspection, so that the speed mismatch error, and thus the phase and surface height calculation errors, can be reduced as much as possible.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge support from the National Natural Science Foundation of China under Grant no. 51205396, Natural Science Foundation of Jiangsu Province under Grant no. BK2012130, Fundamental Research Funds for the Central Universities under Grant no. 2012QNA20, A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions, and the Young Teacher Overseas Training Program of CUMT.

References

  1. J. Yi and S. Huang, “Modified fourier transform profilometry for the measurement of 3-D steep shapes,” Optics and Lasers in Engineering, vol. 27, no. 5, pp. 493–505, 1997.
  2. M. Zhong, W. Chen, and M. Jiang, “Application of S-transform profilometry in eliminating nonlinearity in fringe pattern,” Applied Optics, vol. 51, no. 5, pp. 577–587, 2012.
  3. F. Da and F. Dong, “Windowed Fourier transform profilometry based on improved S-transform,” Optics Letters, vol. 37, no. 17, pp. 3561–3563, 2012.
  4. Y. Fu, Y. Wang, J. Wu, and G. Jiang, “Dual-frequency fringe Fourier transform profilometry based on defocusing,” Optics Communications, vol. 295, pp. 92–98, 2013.
  5. Q. Zhang and Z. Wu, “A carrier removal method in Fourier transform profilometry with Zernike polynomials,” Optics and Lasers in Engineering, vol. 51, no. 3, pp. 253–260, 2013.
  6. T. Kart, G. Kösoğlu, H. Yüksel, and M. N. Inci, “Fourier transform optical profilometry using fiber optic Lloyd's mirrors,” Applied Optics, vol. 53, no. 35, pp. 8175–8181, 2014.
  7. Y. M. He, C. J. Tay, and H. M. Shang, “Deformation and profile measurement using the digital projection grating method,” Optics and Lasers in Engineering, vol. 30, no. 5, pp. 367–377, 1998.
  8. X. Su and W. Chen, “Fourier transform profilometry: a review,” Optics and Lasers in Engineering, vol. 35, no. 5, pp. 263–284, 2001.
  9. E. Hu and Y. He, “Surface profile measurement of moving objects by using an improved π phase-shifting Fourier transform profilometry,” Optics and Lasers in Engineering, vol. 47, no. 1, pp. 57–61, 2009.
  10. A. K. Asundi, S. R. Marokkey, G. G. Olson, and J. N. Walker, “Digital moire applications in automated inspection,” in Machine Vision Applications, Architectures, and Systems Integration III, vol. 2347 of Proceedings of SPIE, pp. 270–275, October 1994.
  11. M. R. Sajan, C. J. Tay, H. M. Shang, and A. Asundi, “Improved spatial phase detection for profilometry using a TDI imager,” Optics Communications, vol. 150, no. 1-6, pp. 66–70, 1998.
  12. C. J. Tay, S. L. Toh, and H. M. Shang, “Time delay and integration imaging for internal profile inspection,” Optics and Laser Technology, vol. 30, no. 8, pp. 459–465, 1998.
  13. X. Y. Su and Q. C. Zhang, “Dynamic 3-D shape measurement method: a review,” Optics and Lasers in Engineering, vol. 48, no. 2, pp. 191–204, 2010.