Wei Feng, Shaojing Tang, Xiaodong Zhao, Guodong Sun, Daxing Zhao, "Adaptive Fringe Projection for 3D Shape Measurement with Large Reflectivity Variations by Using Image Fusion and Predicted Search", International Journal of Optics, vol. 2020, Article ID 4876876, 14 pages, 2020. https://doi.org/10.1155/2020/4876876

Adaptive Fringe Projection for 3D Shape Measurement with Large Reflectivity Variations by Using Image Fusion and Predicted Search

Academic Editor: Rainer Leitgeb
Received: 13 May 2020
Revised: 24 Aug 2020
Accepted: 02 Sep 2020
Published: 16 Sep 2020

Abstract

A persistent challenge for structured light techniques is measuring surfaces with large reflectivity variations or specular reflection. This paper proposes a flexible and adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm. Multiple mask images are fused to obtain the required saturation threshold, and the interpolated prediction search algorithm is used to calculate the optimal projection gray-level intensity. The projection intensity is then reduced to achieve coordinate matching in the unsaturated condition, and adaptive digital fringes with the optimal projection intensity are subsequently projected for phase calculation using the heterodyne multifrequency phase-shifted method. The experiments demonstrate that the proposed method is effective for measuring high-reflective surfaces and completely unwrapping the phase in locally overexposed regions. Compared with traditional structured light measurement methods, our method reduces the number of projected and captured images while achieving higher modulation and better contrast. In addition, the measurement process needs only two prior steps and avoids extra hardware complexity, making it more convenient to apply in industry.

1. Introduction

The structured light technique has been widely used in academic research and industrial fields because of its advantages: full-field inspection, noncontact operation, low cost, and high precision [1, 2]. However, the method still has some limitations; for example, the measured surface should exhibit sufficient diffuse reflection and no large-area specular reflection. In fact, when coded fringe images are projected onto a surface with large reflectivity variations, the high-reflective region is so bright that the image saturates, which easily leads to large deviations in the three-dimensional (3D) measurement [3, 4]. Accurately measuring objects with high-reflective surfaces therefore remains an intractable challenge.

Many methods have been developed for the 3D shape measurement of high-reflective surfaces [5–24]. Among them, an adaptive digital fringe projection (ADFP) method based on antiprojection theory was proposed, which adaptively adjusts the projection gray-level intensity so that the camera captures modulated fringes with optimal intensity [17–22]. For example, Babaie et al. [19] proposed a method to adjust the projection intensity by calculating the overall correspondence between the projector and the camera, but because it used global matching relationships to generate the fringes, the projection intensity was adjusted inaccurately. Waddington and Kofman [20] presented a method for variable ambient light conditions that would otherwise lead to image saturation, using an adapted maximum input gray level (MIGL). However, since the MIGL of the projector was adapted only uniformly to avoid image saturation, this method decreased the SNR for surface regions with low reflectivity. Li and Kofman [21] proposed an adaptive fringe projection method that uses binary images to extract and compensate the overexposed region. Nevertheless, the phase information of the edge pixels in the overexposed region was easily lost, and the calculation results were not accurate enough. A holographic matrix was used to calculate the camera-coordinate and projector-coordinate mapping function, but it was not suitable for measuring stepped objects [22]. In addition, Li et al. [23] presented an algorithm to calculate the optimal projection gray level, and Chen et al. [24] proposed an adaptive fringe projection technique to measure the 3D morphology of color objects. However, the latter method was sensitive to ambient light and mostly used in dark conditions.

In this paper, we propose an adaptive digital fringe projection method based on image fusion and interpolated prediction search algorithm to achieve the 3D measurement of the high-reflective surface with large reflectivity variations. According to the reflectivity characteristics of the measured surface, the captured images with valid uniform gray level are fused with the mask images, and the interpolated prediction search algorithm is used to calculate the optimal projection intensity at each pixel. Therefore, our method can adaptively adjust gray level intensity, avoid saturation in the captured images, and maintain higher intensity modulation. Compared with traditional optical 3D measurement methods, our method has optimal fringe contrast, which can achieve complete reconstruction in the overexposed region and effectively solve the problem of 3D shape measurement with high-reflective surface. Moreover, it will avoid additional hardware complexity and projector nonlinear gamma compensation.

The rest of the paper is organized as follows. Section 2 demonstrates the measurement principles of the proposed method, and Section 3 illustrates the adaptive digital fringe projection method. Section 4 describes the measurement system setup and procedure in detail and presents experimental results that illustrate the feasibility and reliability of our method. The conclusions are presented in Section 5.

2. Principles

2.1. Basic Three-Dimensional Measurement Principle

Figure 1 shows the coordinate systems of the phase-shifted fringe projection system, which comprise the world coordinate system, the projector coordinate system with its pixel coordinate system, and the camera coordinate system with its pixel coordinate system. Two arbitrary points, one in the projector pixel coordinate system and one in the camera pixel coordinate system, are taken as a pair, meaning that they correspond to the same object point and have the same phase. The correspondences of the paired points can be established by projecting sinusoidal fringe images.

The multifrequency heterodyne method is used to unwrap the phase and measure complex surfaces, since this method is a pixel-by-pixel measurement. The measurement system sequentially projects three groups of phase-shifted fringe images with different frequencies, and the phases ϕ1, ϕ2, and ϕ3 are calculated by the four-step phase-shifted method. Then, the superposition phases ϕ12 and ϕ23 are obtained, and the unwrapped phase ϕ123 is ultimately calculated from ϕ12 and ϕ23 by the heterodyne algorithm. The schematic diagram is shown in Figure 2.
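As a sketch of the two building blocks just described (our illustration, not the authors' code; function names are hypothetical), the four-step wrapped phase and the heterodyne superposition can be computed as follows:

```python
import numpy as np

def wrapped_phase_4step(frames):
    """Wrapped phase from four frames I_i = A + B*cos(phi + i*pi/2)."""
    i0, i1, i2, i3 = frames
    return np.arctan2(i3 - i1, i0 - i2)   # wrapped to (-pi, pi]

def heterodyne(phi_a, phi_b):
    """Superposition (beat) phase of two wrapped phases, e.g. phi_12."""
    return np.mod(phi_a - phi_b, 2 * np.pi)

# Example: recover a known phase from synthetic four-step frames.
phi_true = 1.2
frames = [100 + 50 * np.cos(phi_true + i * np.pi / 2) for i in range(4)]
phi = wrapped_phase_4step(frames)
```

The beat phase of two slightly different fringe frequencies has a much longer equivalent period, which is what allows ϕ123 to be unwrapped over the whole field.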

2.2. Adaptive Digital Fringe Projection Principle

Here, "adaptive" means that the method accommodates 3D shape measurement of surfaces with large reflectivity variations, as well as the influence of ambient light and complex surface interreflections, by calculating the optimal projection intensity at the pixel level [25].

The schematic diagram is shown in Figure 3. Our method generates adaptive digital fringe images based on image fusion and the interpolated prediction search algorithm; these are projected onto the measured object so that the captured sinusoidal fringe images are unsaturated, and high-precision measurement results are obtained by calculating the phase from the unsaturated sinusoidal fringes with higher modulation.

The processing flow is further decomposed into six steps, as shown in Figure 4:
(i) Step 1: projecting uniform gray-level image sequences. A series of image sequences in uniform gray level are projected onto the measured surface, and the captured image sequences and the mask image sequences are obtained after acquisition.
(ii) Step 2: obtaining valid gray-level images and mask images. The image sequences with valid uniform gray level are extracted by the mask image sequences to recalculate the gray level at each pixel and composite the final images.
(iii) Step 3: determining the saturation threshold. The saturation threshold is set by calculating the maximum pixel gray level in the composite image.
(iv) Step 4: calculating the optimal projection gray level. The optimal projection gray level is determined from the required saturation threshold by the interpolated prediction search algorithm.
(v) Step 5: camera-projector coordinate matching. The horizontal and vertical fringe sequences are projected onto the measured surface, and the absolute phase is calculated from the modulated fringe images under low projection intensity to carry out camera-projector coordinate matching.
(vi) Step 6: 3D reconstruction. The adaptive digital fringe sequences are projected onto the measured surface, and the unwrapped phase and the 3D reconstruction are obtained by the phase-shifted method based on heterodyne multifrequency.

3. Methods

3.1. The Optimal Projection Gray-Level Calculation

Since the projection intensity images have different brightness, the optimal projection gray level is obtained only if the captured fringe images are unsaturated. In this paper, the saturation threshold is obtained by the image fusion method, and the interpolated prediction search algorithm is used to obtain the optimal projection gray level. Our method can be described as follows:
(i) Step 1: project image sequences Si = 255 − K × (i − 1), i = 1, 2, …, N, onto the measured surface, where K is the step size. The image sequences have uniform gray level and are captured by the camera; the captured image sequences are Ii(uc, vc), i = 1, 2, …, N. The projection intensity is initially set to the maximum gray level of 255 and is reduced by the constant step K until all pixels reach the unsaturated state.
(ii) Step 2: apply the threshold segmentation method. A pixel is set to zero if its gray level is greater than T; otherwise, it is kept as Ii(uc, vc). The resulting image sequences Pi(uc, vc), i = 1, 2, …, N, have valid uniform gray level and are expressed as

Pi(uc, vc) = { 0, if Ii(uc, vc) > T; Ii(uc, vc), otherwise }.  (1)

The segmentation threshold T is set to 250 to reserve some gray-level space, since noise may lead to saturation of the captured images.
(iii) Step 3: design the inverse binary threshold and obtain the valid mask image sequences Mi(uc, vc), i = 1, 2, …, N. Each mask is a binary matrix used for the subsequent image fusion:

Mi(uc, vc) = { 0, if Ii(uc, vc) > T; 1, otherwise },  (2)

where the maximum mask value is set to 1 so that the inverse binary threshold is realized. The pixel value of the composite image is defined as the maximum unsaturated intensity at the corresponding pixel over all captured images. Furthermore, the mask image algorithm is improved by adding a transition compensation region around each local saturation region: the compensation value is gradually reduced, which reduces the reconstruction error at the saturation region boundary. Suppose In−max and Is−max are the maximum projection gray levels of the saturated region and the adjacent region, respectively. A minimum surrounding ellipse is fitted to the transition region. If the length of the overcompensated region is L pixels and the long and short half axes of the ellipse in the projection plane coordinate system are a and b, with known center coordinates, the transition overcompensation algorithm is given by formula (3), in which floor represents downward rounding, s is the step size, and Δ is the transition compensation factor, which compensates the incompleteness of the discrete data in the transition region and is usually taken as 0.1–0.2.
(iv) Step 4: composite the projection intensity images. The composite image H(uc, vc) is generated by the image fusion algorithm from the mask images Mi(uc, vc) and the intensity values of the valid uniform gray-level image sequences Pi(uc, vc):

H(uc, vc) = max over i of [Mi(uc, vc) × Ii(uc, vc)].  (4)

(v) Step 5: calculate the optimal projection gray level. Because of the complexity of threshold segmentation and of acquiring pixel values in the overexposed region, the maximum gray level of the composite image is not itself the optimal gray level for the final adaptive fringe projection images.
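Steps 2–4 above amount to masking out saturated pixels and keeping the brightest unsaturated response per pixel. A minimal numpy sketch (our illustration, not the authors' implementation; it omits the transition compensation step):

```python
import numpy as np

def fuse_composite(captured, threshold=250):
    """Fuse N uniform gray-level captures into the composite image H:
    build the inverse binary masks M_i (Eq. (2)), zero out saturated
    pixels to get P_i (Eq. (1)), and keep the maximum unsaturated
    intensity at each pixel."""
    stack = np.asarray(captured, dtype=float)     # shape (N, H, W)
    masks = (stack <= threshold).astype(float)    # M_i: 1 = unsaturated
    valid = stack * masks                         # P_i: saturated -> 0
    return valid.max(axis=0)                      # composite H(u_c, v_c)

# Example: two captures; the 255 pixel is saturated in the brighter one.
caps = [[[255.0, 100.0]], [[200.0, 80.0]]]
H = fuse_composite(caps)
```

Here the brighter capture wins wherever it is unsaturated, and the darker capture fills in the pixels that clipped.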
Therefore, the gray-level response curve of the camera-projector system needs to be computed as

Icam(uc, vc) = p1 × Ipro(up, vp) + p2 + In,  (5)

where Icam(uc, vc) is the gray level of the image captured by the camera, Ipro(up, vp) represents the projection gray level, p1 is the modulation factor among the projector, camera, and object for the projection intensity, p2 represents factors under specific measurement conditions, such as ambient light, and In is the noise intensity. Formula (5) is a linear function, and its definition domain and range are both finite, ordered sets, so we put forward a fast lookup algorithm based on interpolated prediction [26] to solve formula (5). The Lagrange interpolation polynomial L(x) is used to dynamically predict the middle value in the sequence of numbers, where Ymax represents the maximum gray level of the composite image from Step 4, which equals Icam(uc, vc); L and H are the lower and upper search bounds, initially set as L = 1 and H = n; A[YL] is the minimum value in the gray-level array; and A[YH] is the maximum value in the gray-level array. In each computational cycle, H(uc, vc) is a numerical matrix, i.e., the parameters L, H, A[YH], and A[YL] can be calculated. The corresponding Lagrange interpolation polynomial is obtained from the known parameters L, H, A[YH], and A[YL], and the value of the middle element (mid) is then predicted from the value to be looked up, which is the core idea of the interpolated prediction lookup algorithm. By comparing the target element Ymax with the value of the middle element in the sequence of numbers, the optimal projection gray level xoptimal, which equals Ipro(up, vp), is obtained by dichotomous search and continuous iteration.
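The interpolated prediction lookup can be sketched as an interpolation search over the monotone response curve of equation (5). The function below is our illustration under a synthetic linear response (the constants p1 and p2 are made up); it returns the largest projection level whose predicted camera response stays at or below the saturation threshold:

```python
import numpy as np

def optimal_projection_level(response, threshold):
    """Interpolation-prediction search: on a monotonically increasing
    camera response curve (indexed by projection gray level), find the
    largest level whose response stays at or below `threshold`. The
    probe position is predicted by linear interpolation between the
    current bounds, then the bounds shrink as in binary search."""
    lo, hi = 0, len(response) - 1
    best = 0
    while lo <= hi:
        span = response[hi] - response[lo]
        if span > 0:
            mid = lo + int((threshold - response[lo]) * (hi - lo) / span)
            mid = min(max(mid, lo), hi)   # clamp the prediction
        else:
            mid = (lo + hi) // 2
        if response[mid] <= threshold:
            best = max(best, mid)
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# Synthetic linear response of Eq. (5): I_cam = p1 * I_pro + p2.
response = 0.9 * np.arange(256) + 20
level = optimal_projection_level(response, 140)
```

Because the prediction lands near the answer on a nearly linear curve, the search typically needs far fewer probes than plain bisection.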

3.2. Phase Error Analysis

Two kinds of phase errors are mainly considered in phase-shifted fringe projection. One is caused by additive random noise from the system, and the other is caused by the high reflectivity of the measured object.

When the gray level is in the normal range, factors such as projection intensity, surface reflectivity, ambient light, and noise affect the gray level captured by the camera. In fact, all factors other than the noise can be considered constant; the noise is a random error, which affects the final unwrapped phase quality. The relationship between the gray-level random noise In and the final phase error it induces is expressed in equation (8) [27], where N is the number of phase-shifted steps, Im is the modulation intensity, ϕ is the phase, and In is the random noise of the camera. It can be concluded from equation (8) that increasing either the number of phase-shifted steps or the modulation intensity reduces the phase error, provided the image random noise changes very little.
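The trend stated above can be checked numerically. The sketch below (our illustration, not from the paper) estimates the N-step phase by least squares under additive Gaussian gray-level noise and shows the phase error shrinking as N or the modulation Im grows:

```python
import numpy as np

def phase_std(n_steps, modulation, noise_sigma, trials=20000, seed=0):
    """Empirical std of the N-step phase-shift estimate with additive
    Gaussian gray-level noise of std `noise_sigma`."""
    rng = np.random.default_rng(seed)
    phi = 0.7                                        # arbitrary true phase
    deltas = 2 * np.pi * np.arange(n_steps) / n_steps
    ideal = 128 + modulation * np.cos(phi + deltas)  # noise-free frames
    noisy = ideal + rng.normal(0, noise_sigma, (trials, n_steps))
    # Least-squares phase: phi = atan2(-sum I*sin(d), sum I*cos(d)).
    est = np.arctan2(-(noisy * np.sin(deltas)).sum(axis=1),
                     (noisy * np.cos(deltas)).sum(axis=1))
    return est.std()

# More steps, or stronger modulation, gives a smaller phase error.
err_4_100 = phase_std(4, 100, 5)
err_8_100 = phase_std(8, 100, 5)
err_4_50 = phase_std(4, 50, 5)
```

This matches the qualitative conclusion drawn from equation (8): the error scales down with both the step count N and the modulation intensity Im.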

Furthermore, we analyze the phase error arising when the gray-level image is saturated. For a camera with a depth of Db bits, the maximum gray level of its captured image is limited to 2^Db − 1. If the captured intensity exceeds the response range of the camera, the gray-level image will not correctly represent the actual intensity. As shown in equation (9), any intensity beyond the camera response range is recorded as the camera's highest gray level:

Icam(u, v) = { I(u, v), if I(u, v) ≤ 2^Db − 1; 2^Db − 1, otherwise },  (9)

where Icam(u, v) is the gray level of the camera image at the pixel point and I(u, v) is the captured light intensity. If the captured image is saturated, the error of the gray-level image at a saturated pixel point is

ΔI(u, v) = I(u, v) − (2^Db − 1).  (10)

In equation (10), ΔI(u, v) is the gray-level image error due to the image saturation truncation. Since the error caused by saturation is much larger than that caused by noise, the phase error caused by saturation is given in equation (11).

It can be concluded from equation (11) that the phase error caused by saturation decreases when the number of phase-shifted steps N and the modulation intensity Im are increased.

From the above analysis, we conclude that the phase error caused by saturation derives from the truncation of the gray level, whereas the phase error at low gray level derives from noise and is much smaller. Therefore, if the phase is calculated from unsaturated captured images, it can be used to achieve high-precision camera-projector coordinate matching.

3.3. Camera-Projector Coordinate Matching

As mentioned above, the optimal projection intensity only specifies the magnitude of the adapted intensity; its position in the projector pixel coordinate system is not yet determined. Therefore, absolute phase calculation is the key step in mapping the camera pixel coordinate system to the projector pixel coordinate system.

In our case, a coordinate matching method for the projector-camera system is proposed in which the overall projection intensity is reduced so that matching is performed in the unsaturated state. The optimal gray-level calculation of Section 3.1 finds the projection intensity at which the whole image is unsaturated.

The projector projects the adaptive fringe image sequences onto the measured surface, and the deformed fringe images contain the 3D shape information of the measured object. The phase-shifted method is applied to calculate and unwrap the vertical phase ϕv and the horizontal phase ϕh. These calculated phases link the projection image coordinates (up, vp) to the corresponding camera pixel coordinates (uc, vc). As shown in equations (12) and (13), the phases in the two directions correspond directly to the projector coordinates:

up = ϕv(uc, vc) × W / (2π × Nv),  (12)
vp = ϕh(uc, vc) × H / (2π × Nh),  (13)

where ϕv and ϕh are calculated by the phase-shifted method based on heterodyne multifrequency, Nv and Nh are the numbers of periods of the adaptive vertical and horizontal fringes, and W and H are the width and height of the projected fringe images, respectively.
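Equations (12) and (13) are a linear rescaling of the absolute phases: the total phase range 2πN spans the full projector width or height. A minimal sketch (symbol names assumed from the text):

```python
import numpy as np

def projector_coords(phi_v, phi_h, width, height, n_v, n_h):
    """Map the absolute vertical/horizontal phases (radians) at a camera
    pixel to projector pixel coordinates (u_p, v_p), per Eqs. (12)-(13)."""
    u_p = phi_v * width / (2 * np.pi * n_v)
    v_p = phi_h * height / (2 * np.pi * n_h)
    return u_p, v_p

# A phase halfway through the total range maps to the panel's center.
u_p, v_p = projector_coords(np.pi * 16, np.pi * 9, 800, 600, 16, 9)
```

Because the phase varies continuously across each fringe period, the resulting coordinates are subpixel valued, which is what enables the subpixel-level mapping described below.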

The matching process is shown in Figure 5. After calculating the absolute phase of the orthogonal sinusoidal fringes, the projector and the camera coordinate systems have unique horizontal and vertical phase values at each pixel in the projection area. In this way, the absolute phase value is obtained precisely. According to the phase equality relationships, the pixel-to-pixel correspondences between the projector and the camera are established. Subsequently, the subpixel-level mapping relationships between the projector and the camera can be constructed by traversing the whole projector coordinate system. After that, the phase is calculated from the modulation fringe images, and the matching results are obtained through calculating the phase of the corresponding projection coordinates. Finally, 3D coordinates of the measured surface with high-reflective surface are achieved by the relevant calibration parameters.

The mapping accuracy depends on the accuracy of corner extraction and the quality of the absolute phase rather than on the calibration accuracy of the camera parameters. Standard extraction functions are used to extract corner coordinates, and we have compensated the absolute phase error and accounted for falsely calculated points at the edges of fringes to ensure the high quality of the absolute phase. These measures ensure a one-to-one correspondence between the camera and projector pixel coordinates.

Its matching accuracy is shown in Table 1. It can be seen that the overall matching error between the camera and the projector is less than one pixel, which meets our experimental needs. Moreover, the radial and tangential distortions of the camera were corrected in the calibration, so the internal and external parameters of the camera are sufficiently accurate, with an error of less than 0.02 pixel/mm.


                            Camera    Projector    System
Reprojection error (pixel)  0.1435    0.0953       0.1346

Since both contour pixel extraction and the pixel gray level at a given point are complex in an actual optical measuring system, several pixel points around the contour will affect the process. According to the method proposed in Section 3.1, the required low projection intensity xoptimal can be obtained, which lies within the dynamic range of the camera, i.e., the highest gray level of the captured image will be unsaturated. Then, the adaptive fringe images Ii(u, v), i = 1, 2, …, N, are generated by

Ii(u, v) = I′(u, v) + I″(u, v) cos(ϕ(u, v) + δi),  (14)

where the phase value ϕ(u, v) is solved from equation (15) and δi = i × 2π/N is the phase shift. In equation (14), I′(u, v) and I″(u, v) are the average intensity and the modulation intensity of the fringe, respectively, which are computed from equations (16) and (17).
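A simplified sketch of equation (14) for a scalar optimal level (the paper applies it per pixel; our illustration, with the common choice I′ = I″ = xoptimal/2 so peaks reach xoptimal and troughs reach 0):

```python
import numpy as np

def adaptive_fringes(x_opt, n_steps, period, width, height):
    """N phase-shifted fringes I_i = I' + I''*cos(phase + delta_i) with
    average and modulation intensities I' = I'' = x_opt / 2."""
    u = np.arange(width)
    phase = 2 * np.pi * u / period             # fringe phase along u
    frames = []
    for i in range(n_steps):
        delta_i = 2 * np.pi * i / n_steps      # phase shift delta_i
        row = 0.5 * x_opt * (1 + np.cos(phase + delta_i))
        frames.append(np.tile(row, (height, 1)))
    return frames

# Four-step fringes peaking at the optimal gray level found earlier.
fringes = adaptive_fringes(135, 4, 70, 912, 1140)
```

For the pixelwise case, x_opt becomes a (height, width) map and the same expression is evaluated with broadcasting.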

The adaptive fringe images can be generated and calculated after mapping the coordinate system. In this case, our proposed method can precisely adjust the pixelwise projection intensity, avoid image saturation, and maintain higher intensity modulation for the high-reflective surface.

4. Experiments and Results

To verify the feasibility and utility of the proposed method, the adaptive digital fringe 3D shape measurement system was set up as shown in Figure 6. The system was composed of a computer, a camera, and a digital projector. The digital projector is a DLP4500 with a resolution of 1440 pixels × 912 pixels, which enables high-speed projection. The camera is a Point Grey digital camera with a maximum resolution of 2048 pixels × 1536 pixels and a maximum frame rate of 121 fps. To minimize the influence of ambient light and mutual reflection during the process, the camera was set to a small aperture, the exposure time was fixed, and the gain was set to 0 dB. The exposure time of the camera was preferably set to an integer multiple of 1/fp, where fp is the refresh frequency of the digital projector, generally set to 60 Hz. We synchronized the camera and projector by trigger signals, which ensured that the exposure window covered the correct exposure time of the projection intensity images.

Firstly, a series of uniform gray-level images Si, i = 1, 2, …, 9, ranging from 30 to 255 with a constant step size of 30 in gray level, were projected, and the corresponding images Ii(uc, vc), i = 1, 2, …, 9, were captured in order.

Then, the image sequences with valid uniform gray level and the corresponding mask image sequences were calculated by equations (1) and (2). Because image noise is unavoidable, the threshold value of the pixel was taken as 248. After that, the mask images Mi(uc, vc) and the valid uniform gray-level images Pi(uc, vc) were fused into the composite image H(uc, vc) according to equation (4). The process is shown in Figure 7.

The maximum gray level of the composite image was taken as the saturation threshold for the interpolated prediction search algorithm. Combining equations (6) and (7), the intermediate value was predicted dynamically. The optimal projection gray level was then obtained quickly by the dichotomous method and was calculated as 135 in our experiment. After that, the average intensity I′(u, v) and the modulation intensity I″(u, v) of the fringe were computed with a coefficient of 0.5. The coordinate mapping relationships were then established at low gray level to generate the adaptive fringes. Repeated experiments demonstrated that it was flexible and effective to match the camera-projector coordinates in the saturated area when the captured image was unsaturated. The comparison of the fringe projection effects is shown in Figures 8(a) and 8(b). From the local details selected in the red frame, it can be seen that our proposed method keeps the fringe modulation unsaturated.

As the gray histogram effectively reflects the frequency of each gray level in an image, it is commonly used to calculate the distribution of overexposed pixels. To highlight the local details of the image, we extracted the gray-level distribution histograms of only the small red frame selections in Figures 8(a) and 8(b); the contrast histograms are shown in Figures 9(a) and 9(b). They show that the number of pixels at the saturated gray level of 255 was greatly reduced by the proposed method. Therefore, the optimal projection intensity from 128 to 150 was verified, which also indicates that the measured surface can avoid image saturation and maintain a higher SNR with our method.
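Counting saturated pixels from the gray histogram is straightforward; a small sketch of this check (our illustration):

```python
import numpy as np

def saturated_fraction(img, level=255):
    """Share of pixels clipped at the camera's saturation level,
    computed from the image's gray-level histogram."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel().astype(np.int64), minlength=256)
    return hist[level] / img.size

# Example: three of four pixels are clipped at 255.
frac = saturated_fraction(np.array([[255, 10], [255, 255]]))
```

Comparing this fraction before and after adapting the projection intensity quantifies how much of the overexposed region was recovered.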

In our experiment, the phase-shifted method based on heterodyne multifrequency was used to unwrap the phase. Three groups of fringes were projected, each consisting of four sinusoidal fringes with different phase shifts. We set the fringe frequencies to λ1 = 1/70, λ2 = 1/64, and λ3 = 1/59, and the corresponding wrapped phases are ϕ1, ϕ2, and ϕ3, respectively. According to the heterodyne principle, the phases ϕ12 and ϕ23 were obtained by superimposing the phases of the fringe pairs (λ1, λ2) and (λ2, λ3). The phases with equivalent frequencies λ12 and λ23 were then superimposed to obtain the unwrapped phase ϕ123 with only one phase period over the whole field [28]. The process is shown in Figure 10.

We analyzed the experimental results by extracting lines 0 to 945 from the final phase unwrapping diagram. As can be seen from Figure 11, our method successfully unwrapped the phase in the area where the measured surface is saturated.

Furthermore, the 3D reconstruction was generated by applying the phase-height mapping relationships. The traditional 3D measurement method based on fringe projection and the proposed method were used to measure the same metal workpiece with high-reflective surface. The high-reflective objects with different materials and reflection coefficients were used to realize 3D reconstruction, and the reflection coefficient range was from 0.40 to 0.90 in our experiment.

The reflection coefficient of the first metal workpiece was from 0.40 to 0.50 [29]. Comparing the experimental results shown in Figures 12(a) and 12(b), we can conclude that there is an obvious point cloud-missing phenomenon on the measured surface in the 3D reconstruction map obtained by the traditional phase-shifted method, whereas the measured surface is completely reconstructed by our proposed method. The depth comparison of the 3D points is shown in Figure 13.

The measurement output comprised the 3D point cloud, so the quality of the point cloud was the most important indicator for evaluating the performance of the measurement system. We measured the workpiece and merged the point clouds after alignment by using the cloud-based triangular-mesh reconstruction algorithm, and then compared and analyzed the quality of the model data against the original point-cloud data. The deviations were signed: points on one side of the reference plane were negative, and points on the other side were positive. The maximum and minimum distances from the points to the least-squares fitted plane were calculated. The average error and the standard deviation, used to evaluate the validity of the proposed method quantitatively, are shown in Tables 2 and 3.


Direction             Maximal value (mm)    Average error (mm)    Standard deviation (mm)
Negative direction    −4.9650               −0.6236               0.7501
Absolute direction     4.9650                0.0163               0.1549
Forward direction      3.6578                0.0089               0.1134


Direction             Maximal value (mm)    Average error (mm)    Standard deviation (mm)
Negative direction    −3.3964               −0.6236               1.2337
Absolute direction     4.9624                0.0047               0.1104
Forward direction      4.9624                0.0017               0.0692

From the quantitative analysis of Tables 2 and 3, the average error and standard deviation of the proposed method in the absolute and forward directions were smaller than those obtained by the traditional method. The average error in the absolute direction was decreased by 71%.
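The signed-deviation statistics reported in the tables can be reproduced from a point cloud with a least-squares plane fit; a minimal sketch (our illustration, not the authors' exact pipeline):

```python
import numpy as np

def plane_deviation_stats(points):
    """Fit a least-squares plane through an (N, 3) point cloud via SVD,
    then return (max, min, mean, std) of the signed point-to-plane
    deviations; the sign separates the two sides of the plane."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector of least variance.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]
    dev = (pts - centroid) @ normal   # signed deviations
    return dev.max(), dev.min(), dev.mean(), dev.std()

# A perfectly flat grid has zero deviation everywhere.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
flat = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(25)])
stats = plane_deviation_stats(flat)
```

The maximum and minimum of `dev` correspond to the forward and negative maximal values in the tables, and its mean and standard deviation give the remaining two columns.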

We also measured a piece of machined aluminum with a reflection coefficient from 0.60 to 0.75, using the traditional ADFP method [21] and our proposed method, respectively. The comparison of the reconstruction effect maps is shown in Figures 14(a) and 14(b), and the depth comparison of the 3D points is shown in Figure 15.

From the quantitative analysis of Tables 4 and 5, the average error and standard deviation of the proposed method in the absolute direction and the forward direction were less than the measured values obtained by the conventional method. The average error of the absolute direction was reduced by 84.1%. The forward average error was decreased by 83.7%. The standard deviation of the absolute direction was reduced by 71.6%, and the forward standard deviation was reduced by 69.4%. The data further verified that our proposed method was flexible and adaptive for the 3D measurement of the high-reflective surface.


Direction             Maximal value (mm)    Average error (mm)    Standard deviation (mm)
Negative direction    −3.9525               −0.4587               0.4720
Absolute direction     3.9525                0.0088               0.0843
Forward direction      3.8243                0.0043               0.0544


Direction             Maximal value (mm)    Average error (mm)    Standard deviation (mm)
Negative direction    −0.9415               −0.3116               0.1994
Absolute direction     0.9415                0.0014               0.0239
Forward direction      0.8150                0.0007               0.0166

In addition, the experimental object was a metal processing object with the reflection coefficient of 0.75–0.90. The raw measured object is shown in Figure 16(a). The comparison result of reconstruction effect maps is shown in Figures 16(b) and 16(c).

From the quantitative analysis of Tables 6 and 7, the average error and standard deviation of the proposed method in the absolute direction and the forward direction were both less than the measured values obtained by the conventional method.


Direction             Maximal value (mm)    Average error (mm)    Standard deviation (mm)
Negative direction    −2.4104               −0.6388               0.4115
Absolute direction     2.7558                0.0019               0.0470
Forward direction      2.7558                0.0011               0.0394


Direction             Maximal value (mm)    Average error (mm)    Standard deviation (mm)
Negative direction    −2.7106               −0.4296               0.4486
Absolute direction     2.7106                0.0014               0.0239
Forward direction      0.9457                0.0007               0.0197

All in all, it could be seen from the above experiments that the method proposed in this paper had a wide range of applications and could effectively solve the problem of 3D shape measurement with large reflectivity variations.

5. Conclusions

An adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm is proposed to solve the point cloud-missing problem in 3D shape measurement of surfaces with large reflectivity variations. For overexposed pixels on high-reflective surfaces, an appropriate gray-level intensity of the composite image is computed as the optimal projection intensity to avoid image saturation. Simultaneously, for dark pixels with low surface reflectivity, a high gray-level intensity is selected as the optimal projection intensity, which maintains a higher SNR. The experiments show that the proposed method achieves high measurement accuracy for high-reflective surfaces: the average error in the absolute direction is reduced by 84.1%, and the forward standard deviation is reduced by 69.4%. Our method needs only two prior steps for measuring a high-reflective surface, without projecting and capturing a large number of fringe images at multiple intensities or exposure times; it thereby avoids additional hardware complexity and makes the whole measurement easier to carry out and less laborious. However, the proposed method cannot yet be used for dynamic measurements; in future work, the optimal projection intensity will be predicted adaptively to achieve high measurement accuracy in dynamic scenes.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (Grant nos. 51805153 and 51675166) and the State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University (pilab1801).


Copyright © 2020 Wei Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

