Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 456927, 10 pages
http://dx.doi.org/10.1155/2013/456927
Research Article

An Efficient and Accurate Method for Real-Time Processing of Light Stripe Images

Key Laboratory of Precision Opto-Mechatronics Technology, Ministry of Education, Beihang University, Beijing 100191, China

Received 8 July 2013; Revised 22 September 2013; Accepted 28 October 2013

Academic Editor: Liang-Chia Chen

Copyright © 2013 Xu Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper describes a direction-guided method for real-time processing of light stripe images in practical active vision applications. The presented approach consists of three main steps. Firstly, we select two thresholds to divide the image into three classes: the foreground, the background, and an undecided area. Then, we locate a feature point of the light stripe in the image foreground and calculate the local line direction at that point. Finally, we detect succeeding points along that direction in the foreground and the undecided area and directly exclude the pixels lying in the normal direction of the line. We use synthetic and real images to verify the accuracy and efficiency of the proposed method. The experimental results show that the proposed algorithm runs more than twice as fast as the prevalent method without any loss of accuracy.

1. Introduction

Owing to their wide measurement range, noncontact operation, and high accuracy, active vision techniques have been widely used in surface reconstruction, precision measurement, and industrial inspection [1, 2]. Nowadays, more and more vision systems for real-time measurement and inspection tasks have appeared [3]. Among the variety of active vision methods, the line structured light vision technique is perhaps the most widespread. In this paper, we focus on line structured light vision in real-time measurement applications.

Light stripe processing plays an important role in line structured light vision applications in the following respects. Firstly, the processing method should preserve the integrity of the light stripes, which are modulated by the surface of the object and carry the desired information. Secondly, the line point detection accuracy directly influences the measurement accuracy. Finally, the speed of the stripe processing algorithm is crucial in real-time applications: a time-consuming method is not eligible for such tasks, however accurate and robust it may be.

Many methods exist for detecting the center lines of curvilinear structures, and they can be roughly classified into two categories. The first approach detects line points by taking only the gray values of the image into consideration, using local criteria such as extrema or centroids of gray values in a subregion [4, 5]. Methods of this kind are prone to being affected by noisy pixels, and their detection accuracy is relatively low.

The second approach uses differential geometric properties to extract feature lines. These algorithms regard ridges and ravines in the image as the desired lines [6, 7]. Among these methods, the one proposed by Steger [8] is widely used in the community. Steger's method first extracts line points by locally approximating the image function by its second-order Taylor polynomial and determines the direction of the line at the current point from the Hessian matrix of the Taylor polynomial. After individual line points have been detected, a point linking algorithm is applied to facilitate further processing. The linking step is necessary because we often want to obtain characteristics of the feature line, such as its length. Steger's method offers high accuracy and good robustness, but it is computationally expensive, which is unacceptable in real-time applications.

Based on Steger's method, many revised algorithms aiming at processing light stripe images more efficiently have been proposed. Hu et al. [9] utilize the separability of the Gaussian smoothing kernels and implement a recursive version of the Gaussian derivatives to approximate the original ones. Besides, Zhou et al. [10] present a composite processing method to detect the subpixel centers of light stripes based on regions of interest (ROIs). Their method combines image thresholding with image dilation to automatically segment the ROIs of the light stripes and then detects feature points in the limited areas. These algorithms have greatly improved on the original Steger method, but it is still important to develop more efficient ones for real-time vision applications.

In this paper, a direction-guided algorithm is presented to accelerate the line detection procedure. In the presented algorithm, the input light stripe image is first bilevel thresholded into the image foreground, the background, and an undecided area. Then, the first feature point of a new line in the light stripe is detected in the image foreground and the local line direction is calculated. Finally, succeeding line points are located in the foreground and the undecided area along that direction, and pixels lying in the normal direction of the line are excluded. Experiments on synthetic and real images have been conducted to verify the accuracy and efficiency of the proposed method, and conclusions are drawn.

2. Bilevel Thresholding of the Light Stripe Image

Most existing light stripe image processing methods can be divided into two stages: image thresholding and line detection. These methods assume that the gray values of the light stripes are markedly greater than those of the image background, so that the threshold can be easily selected using Otsu's method [11]. But the actual gray levels of light stripes may vary over a wide range, and some stripe segments resemble the image background because of strong reflections and the large curvature of the object surface. In addition, some images may even be affected by severe disturbing lights which cannot be removed by optical filters. This circumstance occurs frequently when we try to recover the profiles of large-scale metal parts with curved surfaces.

Figure 1 exemplifies the aforementioned situation. Figure 1(a) is an image captured from a real freight wheel, and Figure 1(b) is an image captured from a real train rail. We can see that there is a large area of disturbing light in Figure 1(a), and the upper segments of the light stripes in Figure 1(b) are considerably weaker than the lower parts. Under these conditions, thresholding simply with Otsu's method may lead to undesired results: some weak stripe segments are treated as the image background, or the disturbing lights are taken as the foreground.

Figure 1: Real light stripe images affected by serious reflection.

To deal with light stripe images, including those affected by serious reflections, we choose a bilevel thresholding strategy. That is, pixels with gray values greater than the higher threshold are treated as the foreground, and those with gray values smaller than the lower threshold are regarded as the background. Pixels with gray values between the two thresholds are treated as the undecided area. In the succeeding operations, we search for the first feature point of a new line in the foreground and detect the potential points in both the foreground and the undecided area.
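The three-class split itself is straightforward. The following is a minimal sketch in Python with NumPy; the function and parameter names (trilevel_segment, t_low, t_high) are our illustrative choices, not the authors'.

```python
import numpy as np

def trilevel_segment(img, t_low, t_high):
    """Split a grayscale image into background (0), undecided (1),
    and foreground (2) using the two thresholds described above."""
    labels = np.ones(img.shape, dtype=np.uint8)  # undecided by default
    labels[img < t_low] = 0                      # background
    labels[img > t_high] = 2                     # foreground
    return labels
```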

To determine the two thresholds, we combine the methods proposed by Otsu [11] and Rosin [12]. After calculating the image histogram, we can locate two gray values $g_{peak}$ and $g_{last}$ corresponding to the highest peak and the last nonzero bin of the histogram, respectively. Besides, gray levels $T_{Otsu}$ and $T_{Rosin}$ are selected using the methods proposed by Otsu and Rosin, respectively. We then determine the higher threshold $T_h$ and the lower threshold $T_l$ according to the following rules:
$$T_h = T_{Otsu} + \alpha\,(g_{last} - T_{Otsu}), \qquad T_l = T_{Rosin} + \beta\,(T_{Otsu} - T_{Rosin}), \qquad \alpha, \beta \in [0, 1].$$

Actually, the parameters $\alpha$ and $\beta$ are usually set to zero for ordinary light stripe images; we only have to tune them when confronted with images containing large areas of disturbing reflected light. Besides, we can use the entropy and the number of peaks of the histogram to decide whether the input image contains such disturbances: images with disturbing lights often have larger entropies and more than two peaks in their histograms. The entropy is calculated as
$$E = -\sum_{i=0}^{G} p_i \log p_i,$$
where $p_i$ is the $i$th entry of the normalized histogram and $G$ is the maximum gray value.
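A minimal sketch of the threshold selection machinery follows, implementing Otsu's between-class variance criterion [11], Rosin's unimodal (maximum chord distance) threshold [12], and the histogram entropy; it assumes an 8-bit image, and all function names are illustrative.

```python
import numpy as np

def histogram_stats(img):
    """Normalized histogram, its entropy, the peak bin, and the
    last non-empty bin of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # normalized histogram
    nz = p > 0
    entropy = -np.sum(p[nz] * np.log(p[nz]))   # histogram entropy E
    g_peak = int(np.argmax(hist))              # highest peak
    g_last = int(np.nonzero(hist)[0][-1])      # last nonzero gray value
    return p, entropy, g_peak, g_last

def otsu_threshold(p):
    """Otsu's method [11]: the gray level maximizing the
    between-class variance of the normalized histogram p."""
    bins = np.arange(p.size)
    w = np.cumsum(p)                 # class probability
    mu = np.cumsum(p * bins)         # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w))
    return int(np.nanargmax(var_b))

def rosin_threshold(p, g_peak, g_last):
    """Rosin's unimodal thresholding [12]: the bin farthest from
    the chord joining the histogram peak to its last nonzero bin."""
    x0, y0 = g_peak, p[g_peak]
    x1, y1 = g_last, p[g_last]
    xs = np.arange(x0, x1 + 1)
    # perpendicular distance of each histogram point to the chord
    d = np.abs((y1 - y0) * xs - (x1 - x0) * p[x0:x1 + 1]
               + x1 * y0 - y1 * x0) / np.hypot(x1 - x0, y1 - y0)
    return int(x0 + np.argmax(d))
```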

3. Determination of the First Feature Point of a New Line

After bilevel thresholding the image into three classes, we can detect the first feature point of a new line by examining each unprocessed pixel in the image foreground using Steger's method. In brief, if we denote the direction perpendicular to the line by $\mathbf{n} = (n_x, n_y)$, then the first directional derivative along $\mathbf{n}$ at the desired line point should vanish, and the second directional derivative should be of large absolute value.

To compute the direction of the line locally at the image point to be processed, the first- and second-order partial derivatives $r_x$, $r_y$, $r_{xx}$, $r_{xy}$, and $r_{yy}$ of the image have to be estimated by convolving the pixel and its neighborhood with the discrete two-dimensional Gaussian partial derivative kernels:
$$r_x = g_x * I, \quad r_y = g_y * I, \quad r_{xx} = g_{xx} * I, \quad r_{xy} = g_{xy} * I, \quad r_{yy} = g_{yy} * I,$$
where $I$ denotes the image function. The Gaussian kernels are the partial derivatives of
$$g(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right).$$
According to Steger's conclusions, the parameter $\sigma$ should satisfy the inequality
$$\sigma \ge \frac{w}{\sqrt{3}},$$
where $w$ is defined as the half width of the light stripe in the image. Then the normal direction can be determined by calculating the eigenvalues and eigenvectors of the Hessian matrix
$$H(x, y) = \begin{pmatrix} r_{xx} & r_{xy} \\ r_{xy} & r_{yy} \end{pmatrix}.$$
The eigenvector corresponding to the eigenvalue of maximum absolute value is used as the normal direction $\mathbf{n} = (n_x, n_y)$, normalized so that $\|\mathbf{n}\| = 1$.
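A minimal sketch of this derivative estimation and eigen-analysis, using SciPy's Gaussian derivative filters; the function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivatives(img, sigma):
    """Estimate r_x, r_y, r_xx, r_xy, r_yy by filtering the image
    with Gaussian derivative kernels of scale sigma."""
    img = img.astype(float)
    rx  = gaussian_filter(img, sigma, order=(0, 1))  # d/dx (columns)
    ry  = gaussian_filter(img, sigma, order=(1, 0))  # d/dy (rows)
    rxx = gaussian_filter(img, sigma, order=(0, 2))
    rxy = gaussian_filter(img, sigma, order=(1, 1))
    ryy = gaussian_filter(img, sigma, order=(2, 0))
    return rx, ry, rxx, rxy, ryy

def normal_direction(rxx, rxy, ryy, y, x):
    """Unit eigenvector (n_x, n_y) of the Hessian at pixel (y, x)
    belonging to the eigenvalue of maximum absolute value, i.e. the
    local normal of the line, plus that eigenvalue."""
    H = np.array([[rxx[y, x], rxy[y, x]],
                  [rxy[y, x], ryy[y, x]]])
    vals, vecs = np.linalg.eigh(H)       # symmetric 2x2 matrix
    k = int(np.argmax(np.abs(vals)))
    return vecs[:, k], vals[k]           # (n_x, n_y), second derivative
```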

After determining the local normal direction, the Taylor polynomial of the image function at the pixel $(x, y)$ along the direction $\mathbf{n}$ can be approximated by
$$z(t) = r + t\,(n_x r_x + n_y r_y) + \frac{t^2}{2}\,\left(n_x^2 r_{xx} + 2 n_x n_y r_{xy} + n_y^2 r_{yy}\right).$$
Thus, by setting the derivative of the Taylor polynomial along $\mathbf{n}$ to zero, the subpixel point $(p_x, p_y) = (x + t n_x,\, y + t n_y)$ can be obtained, where
$$t = -\frac{n_x r_x + n_y r_y}{n_x^2 r_{xx} + 2 n_x n_y r_{xy} + n_y^2 r_{yy}}.$$
Finally, the point is declared a line point if $(t n_x, t n_y) \in [-\tfrac{1}{2}, \tfrac{1}{2}] \times [-\tfrac{1}{2}, \tfrac{1}{2}]$, that is, if the extremum lies within the current pixel.
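The subpixel solve is only a few lines of code. Below is a sketch consistent with the derivative arrays from the previous snippet; again, the names are illustrative.

```python
def subpixel_line_point(rx, ry, rxx, rxy, ryy, y, x, n):
    """Solve for the extremum t of the second-order Taylor
    polynomial along the normal n = (n_x, n_y) and return the
    subpixel position (p_x, p_y), or None if the extremum falls
    outside the current pixel."""
    nx_, ny_ = n
    den = (nx_ ** 2 * rxx[y, x] + 2.0 * nx_ * ny_ * rxy[y, x]
           + ny_ ** 2 * ryy[y, x])
    if den == 0.0:
        return None
    t = -(nx_ * rx[y, x] + ny_ * ry[y, x]) / den
    # accept only if the extremum lies inside the current pixel
    if abs(t * nx_) <= 0.5 and abs(t * ny_) <= 0.5:
        return (x + t * nx_, y + t * ny_)
    return None
```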

Using the aforesaid line point detection method, we can search for the first feature point of a new line by examining every unprocessed pixel in the foreground. After detecting the first feature point, we can search for succeeding line points and simultaneously exclude undesired pixels until no more line points exist in the current line. If no feature points are left in the image foreground, the processing of the input light stripe image is complete.

4. Search of Potential Line Points and Exclusion of Undesired Pixels

Beyond the ROI-based techniques, which avoid operations on the image background, the proposed method exploits a further fact: the center line is only a small portion of the light stripe to be extracted. Therefore, it is unnecessary to process the undesired pixels around the feature points, which lie in the normal direction of the line. As a result, once we detect one feature point of the line, we can directly extract and link the succeeding points along the local direction of the line and then exclude the pixels in the normal direction. It is this exclusion of undesired pixels that gives the presented method its intrinsic speedup over other methods. Herein, we divide the light stripe extraction into two main steps: the search of potential line points and the exclusion of undesired pixels.

4.1. Search of Potential Line Points

Starting from the first line point, we can directly search for the succeeding points along both line directions, that is, along the local tangent and its opposite. In fact, we only have to process three neighboring pixels according to the current orientation of the line. We still use the feature point detection method described in Section 3 to determine whether a potential pixel is a line point or not. If two or more points are found, we choose the one minimizing the sum of the distance between the respective subpixel locations and the angle difference of the two points. Once a new line point is found, we update the local line direction. If no point is eligible, we finish the search along the current direction and turn to the other direction. Line point linking ends when the searches along both directions have finished, and we then start to find a new line.
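The following sketch shows one way to implement this direction-guided probing; the eight-neighbor offset table, the cost function, and the is_line_point callback (returning a subpixel position and orientation, or None) are our illustrative choices, not the authors' exact implementation.

```python
import numpy as np

# (dy, dx) offsets of the eight neighbors, indexed by direction class;
# class d covers orientations near d * 45 degrees.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def next_line_point(y, x, angle, is_line_point):
    """Probe the three neighbors closest to the current orientation
    `angle` (radians) and keep the candidate minimizing the sum of
    subpixel distance and (wrapped) angle difference."""
    d = int(round(angle / (np.pi / 4.0)))       # nearest direction class
    best, best_cost = None, np.inf
    for k in (d - 1, d, d + 1):                 # three candidate pixels
        dy, dx = OFFSETS[k % 8]
        cand = is_line_point(y + dy, x + dx)    # None or (py, px, phi)
        if cand is None:
            continue
        py, px, phi = cand
        dist = np.hypot(py - y, px - x)
        dphi = abs(np.arctan2(np.sin(phi - angle), np.cos(phi - angle)))
        if dist + dphi < best_cost:
            best, best_cost = cand, dist + dphi
    return best
```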

Figure 2 exemplifies the aforementioned direction-guided line point searching step. As shown in the figure, the current orientation is assumed to lie within a certain interval, and two of the three candidate pixels are found to be line points while the third is not. The distances and the angle differences between the current point and the two candidates can then be easily obtained. The candidate with the smaller combined cost is chosen as the new jumping-off point of the line, and its direction is regarded as the new orientation.

Figure 2: Search of potential line points.
4.2. Exclusion of Undesired Pixels

Previous methods treat the light stripe extraction problem as two separate issues, namely, line point detection and feature point linking. Without exception, they first process all pixels in the region of interest, pick out the line points, and then link those points into lines. Such schemes often waste computation on pixels that need not be processed at all, since in most cases only the central lines of the light stripes are the needed features.

To avoid unwanted line point detection operations on the undesired pixels, the proposed algorithm fuses the point detection and linking steps and uses the normal directions of the detected points to exclude the pixels perpendicular to the line. Figure 3 illustrates the way the undesired pixels are determined. As shown in the figure, the three pixels with the red frame are the potential line points, while the fourteen pixels with the blue frame, lying along the two normal directions $\mathbf{n}$ and $-\mathbf{n}$, are treated as undesired pixels and are omitted. Evidently, the undesired pixels make up the great majority of the light stripe, so excluding them is essential to the speedup.
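A sketch of the exclusion step, marking the pixels on both sides of a detected point along its normal as processed; the processed mask and the half_width parameter (the stripe half width $w$) are our illustrative choices.

```python
import numpy as np

def exclude_normal_pixels(processed, y, x, n, half_width):
    """Mark the pixels within `half_width` steps of (y, x) along
    both normal directions n and -n as processed, so that the line
    point detector skips them later."""
    nx_, ny_ = n
    h, w = processed.shape
    for t in range(1, half_width + 1):
        for s in (+1, -1):                       # both normal directions
            yy = int(round(y + s * t * ny_))
            xx = int(round(x + s * t * nx_))
            if 0 <= yy < h and 0 <= xx < w:
                processed[yy, xx] = True
```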

Figure 3: Exclusion of undesired pixels.
4.3. The Entire Direction-Guided Algorithm

Figure 4 summarizes the entire approach based on the idea of direction guidance. Note that the starting points of feature lines are detected in the image foreground, while potential line points are searched for in both the image foreground and the undecided area.

Figure 4: The flow chart of the proposed direction-guided approach.

5. Results and Discussion

To demonstrate the accuracy and efficiency of the proposed method, experiments on both synthetic and real images have been conducted.

5.1. Synthetic Image Data

To verify the accuracy of the line point detection, we manually generate an ideal image with one light stripe. The synthetic image shown in Figure 5(a) is 512 × 512 pixels, and the horizontal coordinate of the ideal center line of the light stripe is 256. After adding Gaussian white noise of zero mean and standard deviation $\sigma$ to the image, we use the presented method to detect line points and calculate the detection error as
$$e = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(x_i - \tilde{x}_i\right)^2},$$
where $x_i$ and $\tilde{x}_i$ are the ideal and detected positions of the $i$th line point, respectively, and $N$ is the number of detected line points. Figure 5(b) is an example of the degraded image affected by Gaussian noise of zero mean. We vary the noise level $\sigma$ from 0.05 to 0.4. For each noise level, we perform 100 independent trials, and the results shown are the averages. As we can see from Figure 5(c), errors increase linearly with the noise level. Even at the highest noise level (which is larger than the normal noise in practical situations), the detection error is less than 0.35 pixel.
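For completeness, a sketch of the error measure, reading it as a root-mean-square deviation (our interpretation of the reconstructed formula above):

```python
import numpy as np

def detection_error(ideal_x, detected_x):
    """Root-mean-square deviation between the ideal and the
    detected horizontal positions of the N line points."""
    e = np.asarray(detected_x, dtype=float) - np.asarray(ideal_x, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))
```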

Figure 5: Accuracy verification experiment using a synthetic image.
5.2. Real Image Data

Here, we provide four representative examples to demonstrate the efficiency of the proposed method; all of the images are of the same size. Figure 6(a) is an image captured from a running train wheel using an exposure time of 50 µs. Figure 6(b) is an image of a flat metal part with crossed light stripes. Besides, Figure 6(c) is an image of a steel pipe with three light stripes, and the image with five light stripes shown in Figure 6(d) is captured from a rail in use.

Figure 6: Real example images to verify the efficiency.

Figure 7 displays the histograms of the example images and marks the positions of the four important gray levels with green circles and the two thresholds with red crosses. For all the test images, we directly use the gray level selected by Otsu's method as the higher threshold. As for the lower threshold, we tune the parameter $\beta$ to 0.75 for the wheel image affected by serious reflected light and use the gray level selected by Rosin's method as the lower threshold for the other images. Figure 8 exhibits the bilevel thresholding results of the four images. From the thresholding results of the wheel and rail images, we can see that thresholding with Otsu's method alone would mistake the weak segments of the light stripes for background, just as expected. We can also observe that the images are better segmented using bilevel thresholding, especially Figure 6(a). Figure 9 shows the line detection results. Although several line points may be missing at the intersections of the light stripes due to the exclusion operations, the results are satisfying as a whole.

Figure 7: Selected thresholds for test images.
Figure 8: Bilevel thresholding results of test images.
Figure 9: Line detection results of test images.

To demonstrate the gain in speed, we compared the presented algorithm with those proposed by Steger [8] and Zhou et al. [10]. The experiments were conducted on an Intel Core2 2.93 GHz machine running Windows 7, and the three algorithms were implemented in the Visual Studio 2012 environment. For every test image, the experiments were repeated 100 times under the same conditions; a comparison of the mean running times is shown in Table 1. From the table, we can see that the time consumed by the proposed method is less than half of that consumed by Zhou's method and about a tenth of that consumed by Steger's method.

Table 1: Comparison of mean running time of different line detection methods (ms).

Furthermore, the pixels excluded during processing are shown in black in Figure 10 to illustrate the effect of the exclusion operations and to visually explain the speedup of the proposed algorithm. Noticeably, most of the undesired pixels around the line points are excluded during the line detection process, so a large number of unnecessary calculations are eliminated in advance.

Figure 10: Excluded pixels of the test images during processing.

6. Conclusion

A direction-guided light stripe processing approach has been presented. The method is characterized by its sound use of the local orientations of light stripes and has the following advantages. Firstly, it employs a bilevel thresholding method which not only preserves the integrity of uneven light stripes but also avoids blind calculations on undecided pixels. Secondly, it retains the high accuracy of line point detection, since the point extraction algorithm is the same as the state-of-the-art one. Thirdly, the point detection and linking steps are tightly combined, which makes specific operations such as selective processing more flexible, without the need to extract all the line points. Finally, the exclusion of undesired pixels avoids a mass of unwanted calculations and brings a remarkable speedup. Although several feature points may be missing when dealing with crossed light stripes due to the exclusion operation, the data loss is extremely limited. On the whole, the proposed method is well suited to real-time structured light vision applications, considering its high accuracy and great efficiency.

Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grant nos. 61127009 and 61275162.

References

  1. P. Lavoie, D. Ionescu, and E. M. Petriu, “3-D object model recovery from 2-D images using structured light,” IEEE Transactions on Instrumentation and Measurement, vol. 53, no. 2, pp. 437–443, 2004.
  2. Z. Liu, J. Sun, H. Wang, and G. Zhang, “Simple and fast rail wear measurement method based on structured light,” Optics and Lasers in Engineering, vol. 49, no. 11, pp. 1343–1351, 2011.
  3. J. Xu, B. Gao, J. Han et al., “Realtime 3D profile measurement by using the composite pattern based on the binary stripe pattern,” Optics & Laser Technology, vol. 44, no. 3, pp. 587–593, 2012.
  4. D. Geman and B. Jedynak, “An active testing model for tracking roads in satellite images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 1, pp. 1–14, 1996.
  5. M. A. Fischler, “The perception of linear structure: a generic linker,” in Proceedings of the 23rd Image Understanding Workshop, pp. 1565–1579, Morgan Kaufmann, Monterey, Calif, USA, 1994.
  6. D. Eberly, R. Gardner, B. Morse, S. Pizer, and C. Scharlach, “Ridges for image analysis,” Tech. Rep. TR93-055, Department of Computer Science, University of North Carolina, Chapel Hill, NC, USA, 1993.
  7. A. Busch, “A common framework for the extraction of lines and edges,” International Archives of Photogrammetry and Remote Sensing, vol. 31, part B3, pp. 88–93, 1996.
  8. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 2, pp. 113–125, 1998.
  9. K. Hu, F. Zhou, and G. Zhang, “Fast extrication method for sub-pixel center of structured light stripe,” Chinese Journal of Scientific Instrument, vol. 27, no. 10, pp. 1326–1329, 2006.
  10. F.-Q. Zhou, Q. Chen, and G.-J. Zhang, “Composite image processing for center extraction of structured light stripe,” Journal of Optoelectronics · Laser, vol. 19, no. 11, pp. 1534–1537, 2008.
  11. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  12. P. L. Rosin, “Unimodal thresholding,” Pattern Recognition, vol. 34, no. 11, pp. 2083–2096, 2001.