Research Article  Open Access
Wei Feng, Shaojing Tang, Xiaodong Zhao, Guodong Sun, Daxing Zhao, "Adaptive Fringe Projection for 3D Shape Measurement with Large Reflectivity Variations by Using Image Fusion and Predicted Search", International Journal of Optics, vol. 2020, Article ID 4876876, 14 pages, 2020. https://doi.org/10.1155/2020/4876876
Adaptive Fringe Projection for 3D Shape Measurement with Large Reflectivity Variations by Using Image Fusion and Predicted Search
Abstract
Surfaces with large reflectivity variations or specular reflection remain a great challenge for the structured light technique. This paper proposes a flexible and adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm. Multiple mask images are fused to obtain the required saturation threshold, and the interpolated prediction search algorithm is used to calculate the optimal projection gray-level intensity. The projection intensity is then reduced to achieve coordinate matching under unsaturated conditions, and adaptive digital fringes with the optimal projection intensity are subsequently projected for phase calculation using the heterodyne multi-frequency phase-shifted method. The experiments demonstrate that the proposed method is effective for measuring high-reflective surfaces and completely unwrapping the phase in locally overexposed regions. Compared with traditional structured light measurement methods, our method reduces the number of projected and captured images while achieving higher modulation and better contrast. In addition, the measurement process needs only two prior steps and avoids additional hardware complexity, which makes it more convenient to apply in industry.
1. Introduction
The structured light technique has been widely used in academic research and industrial fields because of its full-field inspection, non-contact operation, low cost, and high precision [1, 2]. However, the method still has some limitations; for example, the measured surface should have enough diffuse reflection and no large-area specular reflection. In fact, when coded fringe images are projected onto a surface with large reflectivity variations, the high-reflective region is so bright that the image saturates, which easily leads to large deviations in the three-dimensional (3D) measurement [3, 4]. Therefore, accurately measuring an object with a high-reflective surface remains an intractable challenge.
Many methods have been developed for the 3D shape measurement of high-reflective surfaces [5–24]. Among them, adaptive digital fringe projection (ADFP) methods based on anti-projection theory were proposed, which adaptively adjust the projection gray-level intensity so that the camera captures modulated fringes with optimal intensity [17–22]. For example, Babaie et al. [19] proposed a method to adjust the projection intensity by calculating the overall corresponding relationships between the projector and the camera, but because it used global matching relationships to generate the fringes, the projection intensity was adjusted inaccurately. Waddington and Kofman [20] presented a method for variable ambient light conditions that would otherwise lead to image saturation by adapting the maximum input gray level (MIGL). However, since the MIGL of the projector was adapted only uniformly to avoid image saturation, this method decreased the SNR in surface regions with low reflectivity. Li and Kofman [21] proposed an adaptive fringe projection method that uses binary images to extract and compensate the overexposed region. Nevertheless, the phase information of the edge pixels in the overexposed region was easily lost, and the calculation results were not accurate enough. A holographic matrix was used to calculate the camera-coordinate and projector-coordinate mapping function, but it was not suitable for measuring stepped objects [22]. In addition, Li et al. [23] presented an algorithm to calculate the optimal projection gray level. Chen et al. [24] proposed an adaptive fringe projection technique to measure the 3D morphology of color objects. However, this method was sensitive to ambient light and was mostly used in dark conditions.
In this paper, we propose an adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm to achieve 3D measurement of high-reflective surfaces with large reflectivity variations. According to the reflectivity characteristics of the measured surface, the captured images with valid uniform gray level are fused with mask images, and the interpolated prediction search algorithm is used to calculate the optimal projection intensity at each pixel. Therefore, our method can adaptively adjust the gray-level intensity, avoid saturation in the captured images, and maintain higher intensity modulation. Compared with traditional optical 3D measurement methods, our method achieves optimal fringe contrast, enables complete reconstruction in the overexposed region, and effectively solves the problem of 3D shape measurement of high-reflective surfaces. Moreover, it avoids additional hardware complexity and projector nonlinear gamma compensation.
The rest of the paper is arranged as follows. Section 2 demonstrates the measurement principles of the proposed method, and Section 3 illustrates the adaptive digital fringe projection method. Section 4 describes the measurement system setup and method procedure in detail and presents the experimental results that illustrate the feasibility and reliability of our method. The conclusions are presented in Section 5.
2. Principles
2.1. Basic Three-Dimensional Measurement Principle
Figure 1 shows the coordinate systems of the phase-shifted fringe projection system: the world coordinate system, the projector coordinate system and its pixel coordinate system, and the camera coordinate system and its pixel coordinate system. Two arbitrary points, one in the projector pixel plane and one in the camera pixel plane, are taken as a pair, which means that they correspond to the same object point and have the same phase. The correspondences of the paired points can be established by projecting sinusoidal fringe images.
The multi-frequency heterodyne method is used to unwrap the phase and measure complex surfaces since it is a pixel-by-pixel measurement. The measurement system sequentially projects three groups of phase-shifted fringe images with different frequencies, and the phases ϕ_{1}, ϕ_{2}, and ϕ_{3} are calculated by means of the four-step phase-shifted method. Then, the superposition phases ϕ_{12} and ϕ_{23} are obtained, and the unwrapped phase ϕ_{123} is ultimately calculated from the superposition phases ϕ_{12} and ϕ_{23} by using the heterodyne algorithm. The schematic diagram is shown in Figure 2.
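As an illustration, the four-step wrapped-phase calculation described above can be sketched in a few lines of numpy. This is a minimal sketch, not the authors' code; the function name and the synthetic single-pixel intensities are illustrative.

```python
import numpy as np

def wrapped_phase_4step(I1, I2, I3, I4):
    """Wrapped phase from four fringe images shifted by pi/2 each.

    For I_k = I' + I'' * cos(phi + (k - 1) * pi/2), the standard four-step
    formula recovers phi = atan2(I4 - I2, I1 - I3) in (-pi, pi].
    """
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check on a single pixel with phi = 1.0 rad,
# average intensity 100 and modulation 50 (made-up values):
phi = 1.0
I = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(np.isclose(wrapped_phase_4step(*I), phi))  # True
```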
2.2. Adaptive Digital Fringe Projection Principle
In this context, “adaptive” means that the method copes with surfaces with large reflectivity variations, as well as with the influence of ambient light and complex surface interreflections, by calculating the optimal projection intensity at the pixel level [25].
The schematic diagram is shown in Figure 3. Our method generates adaptive digital fringe images based on image fusion and the interpolated prediction search algorithm. These fringes are projected onto the measured object so that the captured sinusoidal fringe images are unsaturated, and high-precision measurement results can then be obtained by projecting the unsaturated sinusoidal fringes with higher modulation to calculate the phase.
The processing flow is further decomposed into six steps, as shown in Figure 4.
(i) Step 1: projecting uniform gray-level image sequences: a series of image sequences with uniform gray level are projected onto the measured surface, and the captured image sequences and the mask image sequences are obtained after acquisition.
(ii) Step 2: obtaining valid gray-level images and mask images: the image sequences with valid uniform gray level are extracted by the mask image sequences to recalculate the gray level at each pixel and composite the final images.
(iii) Step 3: determining the saturation threshold: the saturation threshold is set by calculating the maximum pixel gray level in the composite image.
(iv) Step 4: calculating the optimal projection gray level: the optimal projection gray level is determined through the required saturation threshold and the interpolated prediction search algorithm.
(v) Step 5: camera-projector coordinate matching: the horizontal and vertical fringe sequences are projected onto the measured surface, and the absolute phase is calculated from the modulated fringe images under low projection intensity to carry out camera-projector coordinate matching.
(vi) Step 6: 3D reconstruction: the adaptive digital fringe sequences are projected onto the measured surface, and the unwrapped phase and the 3D reconstruction are achieved by using the phase-shifted method based on heterodyne multi-frequency.
3. Methods
3.1. The Optimal Projection Gray-Level Calculation
Since the projection intensity images have different brightness, the optimal projection gray level is obtained only if the captured fringe images are unsaturated. In this paper, the saturation threshold is obtained by the image fusion method, and the interpolated prediction search algorithm is used to obtain the optimal projection gray level. Our method can be described as follows:
(i) Step 1: project image sequences S_{i} = 255 − K × (i − 1) onto the measured surface, i = 1, 2, …, N, where K is the step size. The image sequences have uniform gray level and are captured by the camera; the corresponding captured image sequences are I_{i} (u^{c}, v^{c}), i = 1, 2, …, N. The projection intensity is initially set to the maximum gray level, 255. As the projection intensity is reduced by the constant step K, all pixels eventually reach the unsaturated state.
(ii) Step 2: apply the threshold segmentation method. A pixel is set to zero if its gray level is greater than T; otherwise, it is maintained as I_{i} (u^{c}, v^{c}). The resulting image sequences P_{i} (u^{c}, v^{c}), i = 1, 2, …, N, have valid uniform gray level. Mathematically, the image sequences are expressed as
P_{i} (u^{c}, v^{c}) = {0, if I_{i} (u^{c}, v^{c}) > T; I_{i} (u^{c}, v^{c}), otherwise}. (1)
Here, the segmentation threshold T is set to 250 to reserve some gray-level space since noise may lead to saturation of the captured images.
(iii) Step 3: design the inverse binary threshold and obtain the valid mask image sequences M_{i} (u^{c}, v^{c}), i = 1, 2, …, N. They are binary matrices that will be used for subsequent image fusion, i.e., the mask image sequences are expressed as
M_{i} (u^{c}, v^{c}) = {1, if I_{i} (u^{c}, v^{c}) ≤ T; 0, otherwise}, (2)
where the maximum value is usually set to 1 so that the inverse binary threshold can be realized. The pixel value of the composite image is defined as the maximum unsaturated intensity extracted, pixel by pixel, from all the captured images.
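The segmentation and masking of Steps 2 and 3 can be sketched with numpy as follows. This is a minimal illustration under the stated threshold T = 250; the function name and the toy 2 × 2 image are made up for the example.

```python
import numpy as np

def valid_and_mask(I_k, T=250):
    """Split one captured uniform-gray-level image into its valid image P_k
    (saturated pixels zeroed, as in eq. (1)) and its binary mask M_k
    (1 where the pixel is unsaturated, as in eq. (2))."""
    saturated = I_k > T
    P_k = np.where(saturated, 0, I_k)
    M_k = np.where(saturated, 0, 1).astype(np.uint8)
    return P_k, M_k

# Toy 2x2 captured image: two pixels above T = 250 are rejected.
I_k = np.array([[120, 253], [250, 255]])
P_k, M_k = valid_and_mask(I_k)
print(P_k)  # [[120   0] [250   0]]
print(M_k)  # [[1 0] [1 0]]
```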
Furthermore, the mask image algorithm has been improved by using transition compensation in the saturation region. The compensation value is gradually reduced, and the reconstruction error at the saturation region boundary is also reduced by adding a transition compensation region around the local saturation region. Suppose that I_{n−max} and I_{s−max} are the maximum projection gray levels of the saturated region and the adjacent region, respectively. The minimum surrounding ellipse algorithm is applied to the transition region. If the length of the overcompensated region is L pixels, the semi-major and semi-minor axes in the projection plane coordinate system are a and b, and the center coordinates of the ellipse are given, then the transition overcompensation algorithm follows as formula (3), in which floor represents downward rounding, s is the step size, and Δ is the transition compensation factor, which compensates the incompleteness of the discrete data in the transition region, usually taking Δ as 0.1–0.2.
(iv) Step 4: composite the projection intensity images: the projection intensity image H (u^{c}, v^{c}) is generated by the image fusion algorithm, according to the acquired mask images M_{i} (u^{c}, v^{c}) and the intensity values of the valid uniform gray-level image sequences P_{i} (u^{c}, v^{c}):
H (u^{c}, v^{c}) = max_{i} [M_{i} (u^{c}, v^{c}) × P_{i} (u^{c}, v^{c})]. (4)
(v) Step 5: calculate the optimal projection gray level. Owing to the limitations of the threshold segmentation algorithm and of information acquisition for pixel values in the overexposed region, the maximum gray level in the composited image is not the optimal gray level of the final adaptive fringe projection images.
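The image fusion of Step 4 (keeping, per pixel, the largest unsaturated intensity across the whole sequence) can be sketched as follows. The function name and the tiny three-image stack are illustrative, not from the original implementation.

```python
import numpy as np

def fuse_composite(P_stack, M_stack):
    """Fuse the valid gray-level images into one composite image (Step 4).

    For each pixel, keep the largest unsaturated captured intensity across
    the sequence, i.e. H = max_i (M_i * P_i).
    P_stack, M_stack: arrays of shape (N, rows, cols).
    """
    return (np.asarray(M_stack) * np.asarray(P_stack)).max(axis=0)

# Three captured images of a 1x2 region; a mask value of 0 marks saturation.
P = np.array([[[0, 240]], [[230, 210]], [[200, 180]]])
M = np.array([[[0, 1]], [[1, 1]], [[1, 1]]])
print(fuse_composite(P, M))  # [[230 240]]
```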
Therefore, the gray-level response curve of the camera-projector system needs to be computed, as shown in the following equation:
I_{cam} (u^{c}, v^{c}) = p_{1} × I_{pro} (u^{p}, v^{p}) + p_{2} + I_{n}, (5)
where I_{cam} (u^{c}, v^{c}) is the gray level of the image captured by the camera, I_{pro} (u^{p}, v^{p}) represents the projection gray level, p_{1} is the modulation factor among the projector, camera, and objects for the projection intensity, p_{2} represents factors under specific measurement conditions, such as ambient light, and I_{n} is the noise intensity. Formula (5) is a linear function, and the definition domain and the range are both finite and ordered sets, so we put forward a fast lookup algorithm based on interpolated prediction [26] to solve formula (5). The Lagrange interpolation polynomial L (x) is used to dynamically predict the middle position in the sequence of numbers:
L (x) = L × (x − A [Y_{H}])/(A [Y_{L}] − A [Y_{H}]) + H × (x − A [Y_{L}])/(A [Y_{H}] − A [Y_{L}]), (6)
mid = floor (L (Y_{max})), (7)
where Y_{max} represents the maximum gray level in the composited image in Step 4, which equals I_{cam} (u^{c}, v^{c}); L and H are the lower and upper bounds of the search interval over the gray-level array of length n, initially set as L = 1 and H = n; A [Y_{L}] is the minimum value in the gray-level array, and A [Y_{H}] represents the maximum value in the gray-level array. In each computational cycle, the parameters L, H, A [Y_{H}], and A [Y_{L}] can be calculated. The corresponding Lagrange interpolation polynomial is obtained from the known parameters L, H, A [Y_{H}], and A [Y_{L}], and the position of the middle element (mid) is then predicted from the value to be looked up, which is the core idea of the interpolated prediction lookup algorithm. By comparing the target element Y_{max} with the value of the middle element in the sequence of numbers, the optimal projection gray level x_{optimal}, which corresponds to I_{pro} (u^{p}, v^{p}), can be obtained by dichotomous search and continuous iteration.
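The interpolated prediction lookup can be sketched as below. This is a simplified illustration: the probe index is predicted by linear interpolation between the current bounds (the linear case of the Lagrange prediction) and the bounds are then narrowed as in binary search. The linear camera response 0.8·x + 20 is a made-up example, not a measured curve.

```python
import numpy as np

def interpolated_search(response, target):
    """Return the largest projection gray level whose camera response does
    not exceed `target`, given a sorted response array with one entry per
    projector gray level (0..255)."""
    lo, hi = 0, len(response) - 1
    while lo < hi:
        # Predict where `target` should fall between the two bound values.
        span = response[hi] - response[lo]
        if span == 0:
            break
        mid = lo + int((target - response[lo]) * (hi - lo) / span)
        mid = min(max(mid, lo), hi - 1)
        if response[mid + 1] <= target:
            lo = mid + 1   # everything up to mid+1 is still unsaturated
        else:
            hi = mid       # mid+1 already exceeds the target
    return lo

# Synthetic linear camera response I_cam = 0.8 * I_pro + 20 (illustrative):
response = np.array([0.8 * x + 20 for x in range(256)])
print(interpolated_search(response, 128))  # 135
```

With these made-up response parameters the search happens to return 135, the same order of magnitude as the optimal gray level reported in the experiments.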
3.2. Phase Error Analysis
Two kinds of phase errors are mainly considered in the phaseshifted fringe. One is caused by additive random noise from the system, and the other is caused by the measured object with high reflectivity.
It is well known that when the gray level is in the normal range, factors such as the projection intensity, surface reflectivity, ambient light, and noise affect the gray level captured by the camera. In fact, all other factors are considered to remain constant, while the noise is a random error that affects the final unwrapped phase quality. The relationship between the measured gray-level random noise I_{n} and the resulting phase error can be expressed as the following equation [27]:
Δϕ_{noise} = √2 × I_{n}/(√N × I_{m}), (8)
where N is the number of phase-shifted steps, I_{m} is the modulation intensity, Δϕ_{noise} is the phase error, and I_{n} is the random noise of the camera. It can be concluded from equation (8) that both increasing the number of phase-shifted steps and increasing the modulation intensity can reduce the phase error, provided that the image random noise changes very little.
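The scaling behavior stated above (phase error shrinking with more phase-shifted steps and larger modulation) can be checked with a small Monte Carlo sketch. All numbers here (noise level, modulation, trial count) are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_error_std(N, I_m=100.0, sigma_n=2.0, phi=0.7, trials=20000):
    """Monte Carlo estimate of the phase error of N-step phase shifting when
    Gaussian noise of standard deviation sigma_n corrupts each intensity."""
    deltas = 2 * np.pi * np.arange(N) / N
    I = 120.0 + I_m * np.cos(phi + deltas)              # clean intensities
    noisy = I + rng.normal(0, sigma_n, size=(trials, N))
    # Least-squares N-step estimator: atan2(-sum I sin d, sum I cos d).
    num = -np.sum(noisy * np.sin(deltas), axis=1)
    den = np.sum(noisy * np.cos(deltas), axis=1)
    return np.std(np.arctan2(num, den) - phi)

# More steps and larger modulation both shrink the phase error:
print(phase_error_std(4) > phase_error_std(8))                   # True
print(phase_error_std(4, I_m=50) > phase_error_std(4, I_m=100))  # True
```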
Furthermore, we analyze the phase error that arises when the gray-level image is saturated. For a camera with a depth of D_{b} bits, the maximum gray level of its captured image is limited to 2^{D_{b}} − 1. If the captured gray-level intensity exceeds the response range of the camera, the gray-level image will not correctly represent the actual gray-level intensity. As shown in equation (9), the gray-level intensity beyond the camera response range is recorded as the highest gray level of the camera:
I_{cam} (u^{c}, v^{c}) = {I_{s} (u^{c}, v^{c}), if I_{s} (u^{c}, v^{c}) ≤ 2^{D_{b}} − 1; 2^{D_{b}} − 1, otherwise}, (9)
where I_{cam} (u^{c}, v^{c}) is the gray level of the camera image at the pixel point and I_{s} (u^{c}, v^{c}) is the captured light intensity. If the captured image is saturated, the error of the gray-level image at the saturated pixel point is given by the following formula:
ΔI (u^{c}, v^{c}) = I_{s} (u^{c}, v^{c}) − (2^{D_{b}} − 1). (10)
In equation (10), ΔI (u^{c}, v^{c}) is the gray-level image error due to the image saturation truncation. Since the error caused by saturation is much larger than that caused by noise, the phase error caused by saturation is shown in the following equation:
Δϕ_{sat} = √2 × ΔI/(√N × I_{m}). (11)
It can be concluded from equation (11) that the phase error caused by saturation decreases when the number of phase-shifted steps N and the modulation intensity I_{m} are increased.
From the above analysis, we can conclude that the phase error caused by saturation derives from the error introduced where the gray level is truncated. The phase error at low gray level derives from noise and is much smaller than the phase error caused by saturation. Therefore, if the captured images are unsaturated, the calculated phase can be used to achieve high-precision camera-projector coordinate matching.
3.3. Camera-Projector Coordinate Matching
As mentioned above, the optimal projection intensity only determines the magnitude of the adapted intensity; its position in the projector pixel coordinate system is not yet addressed. Therefore, solving the absolute phase is the key step in mapping the camera pixel coordinate system to the projector pixel coordinate system.
In our case, a coordinate matching method for the projector-camera system is proposed in which the overall projection intensity is reduced to the unsaturated state. The optimal gray-level calculation of Section 3.1 finds the optimal projection intensity so that the whole captured image is unsaturated.
The projector projects the adaptive fringe image sequences onto the measured surface, and the deformed fringe images contain the 3D shape information of the measured object. The phase-shifted method is applied to calculate and unwrap the vertical phase ϕ_{v} and the horizontal phase ϕ_{h}. These calculated phases relate the projection intensity image coordinates (u^{p}, v^{p}) to the corresponding camera pixel coordinates (u^{c}, v^{c}). As shown in equations (12) and (13), the phases in the horizontal and vertical directions correspond directly to the projector coordinates:
u^{p} = W × ϕ_{v} (u^{c}, v^{c})/(2π × N_{v}), (12)
v^{p} = H × ϕ_{h} (u^{c}, v^{c})/(2π × N_{h}), (13)
where ϕ_{v} and ϕ_{h} are calculated by the phase-shifted method based on heterodyne multi-frequency, N_{v} and N_{h} are the numbers of periods of the adaptive vertical and horizontal fringes, and W and H are the width and height of the projected fringe images, respectively.
The matching process is shown in Figure 5. After calculating the absolute phase of the orthogonal sinusoidal fringes, the projector and the camera coordinate systems have unique horizontal and vertical phase values at each pixel in the projection area. In this way, the absolute phase value is obtained precisely. According to the phase equality relationships, the pixeltopixel correspondences between the projector and the camera are established. Subsequently, the subpixellevel mapping relationships between the projector and the camera can be constructed by traversing the whole projector coordinate system. After that, the phase is calculated from the modulation fringe images, and the matching results are obtained through calculating the phase of the corresponding projection coordinates. Finally, 3D coordinates of the measured surface with highreflective surface are achieved by the relevant calibration parameters.
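The per-pixel mapping from absolute phase to projector coordinates can be sketched as follows. The projector size W × H and the period counts N_v, N_h used here are illustrative assumptions of the sketch, not values fixed by the paper.

```python
import numpy as np

# Assumed projector fringe-image size and fringe period counts (illustrative).
W, H = 912, 1140
N_v, N_h = 16, 20

def camera_to_projector(phi_v, phi_h):
    """Map a camera pixel to projector coordinates from its absolute phases.

    phi_v: unwrapped phase of the vertical fringes (encodes u^p),
    phi_h: unwrapped phase of the horizontal fringes (encodes v^p),
    following the linear phase-to-coordinate relations of eqs. (12)-(13).
    """
    u_p = phi_v * W / (2 * np.pi * N_v)
    v_p = phi_h * H / (2 * np.pi * N_h)
    return u_p, v_p

# A pixel seeing half the total phase range in each direction maps to the
# center of the projector plane:
u_p, v_p = camera_to_projector(np.pi * N_v, np.pi * N_h)
print(u_p, v_p)  # 456.0 570.0
```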
The mapping accuracy depends on the accuracy of corner extraction and the quality of the absolute phase rather than on the calibration accuracy of the camera parameters. Standard extraction functions are used to extract corner coordinates, and we have compensated the absolute phase error and accounted for the false calculation of points at the edges of fringes in order to ensure the high quality of the absolute phase. These measures ensure the one-to-one correspondence between the camera and projector pixel coordinates.
The matching accuracy is shown in Table 1. It can be seen that the overall matching error between the camera and the projector is less than one pixel, which meets our experimental needs. Moreover, the radial and tangential distortions of the camera are corrected in the calibration, so the internal and external parameters of the camera have sufficient accuracy, with an error of less than 0.02 pixel/mm.

Since both the contour pixel extraction and the pixel gray level at a given point are complex in an actual optical measuring system, several pixel points around the contour will affect the process. According to the proposed method in Section 3.1, the required low projection intensity x_{optimal} can be obtained, which lies within the dynamic range of the camera, i.e., the highest gray level of the captured image will be unsaturated. Then, the adaptive fringe images I_{i} (u, v), i = 1, 2, …, N, are generated by the following equation:
I_{i} (u, v) = I′ (u, v) + I″ (u, v) × cos (ϕ (u, v) + δ_{i}), (14)
where the phase value ϕ (u, v) is solved by equation (15) and δ_{i} = i × 2π/N is the phase shift. In equation (14), I′ (u, v) and I″ (u, v) are the average intensity and the modulation intensity of the fringe, respectively, which can be computed by equations (16) and (17).
The adaptive fringe images can be generated and calculated after mapping the coordinate system. In this case, our proposed method can precisely adjust the pixelwise projection intensity, avoid image saturation, and maintain higher intensity modulation for the highreflective surface.
4. Experiments and Results
To verify the feasibility and utility of the proposed method, the adaptive digital fringe 3D shape measurement system was set up as shown in Figure 6. The system was composed of a computer, a camera, and a digital projector. The digital projector is a DLP4500, whose resolution is 1140 pixels × 912 pixels and which supports high-speed projection. The camera is a Point Grey digital camera with a maximum resolution of 2048 pixels × 1536 pixels and a maximum frame rate of 121 fps. To minimize the influence of ambient light and mutual reflection during the process, the camera was set to a small aperture, the exposure time was fixed, and the gain was set to 0 dB. The exposure time of the camera was preferably set to an integer multiple of 1/f_{p}, where f_{p} is the refresh frequency of the digital projector, generally set to 60 Hz. We synchronized the camera and projector by trigger signals, which ensured that the camera exposure covered the correct exposure time of the projection intensity images.
Firstly, a series of uniform gray-level images S_{i}, i = 1, 2, …, 9, ranging from 30 to 255 with a constant step size of 30 in gray level, were projected, and the corresponding images I_{k} (u^{c}, v^{c}), k = 1, 2, …, 9, were captured in order.
Then, the image sequences with valid uniform gray level and the corresponding mask image sequences could be calculated by equations (1) and (2). Because image noise was unavoidable, the threshold value of the pixel was taken as 248. After that, the mask images M_{i} (u^{c}, v^{c}) and the valid uniform gray-level images P_{i} (u^{c}, v^{c}) were fused into the composite image H (u^{c}, v^{c}) according to equation (4). The process is shown in Figure 7.
The maximum gray level of the composited image was taken as the saturation threshold for the interpolated prediction search algorithm. Combining equations (6) and (7), the middle value was predicted dynamically. Then, the optimal projection gray level could be obtained quickly by using the dichotomous method, and its value was calculated as 135 in our experiment. After that, the average intensity I′ (u, v) and the modulation intensity I″ (u, v) of the fringe were each computed with a factor of 0.5. Then, the coordinate mapping relationships were established at low gray level to generate the adaptive fringes. Repeated experiments demonstrated that it was flexible and effective to match the camera-projector coordinates in the saturated area when the captured image was unsaturated. The comparison of the fringe projection effects is shown in Figures 8(a) and 8(b). From the local details selected in the red frame, it can be seen that our proposed method keeps the fringe modulation unsaturated.
As the gray histogram can effectively reflect the frequency of each gray level in the image, it is commonly used to calculate the distribution of overexposed pixels. In order to highlight the local details of the image, we extracted the gray-level distribution histograms only for the small red frame selections in Figures 8(a) and 8(b); the contrast histograms are shown in Figures 9(a) and 9(b). They show that the number of pixels with a gray level of 255 (i.e., saturated pixels) was greatly reduced by the proposed method. Therefore, the optimal projection intensity range of 128 to 150 was verified, which also illustrates that the measured surface can avoid image saturation and maintain a higher SNR by using our method.
In our experiment, the phase-shifted method based on heterodyne multi-frequency was used to unwrap the phase. Three groups of fringes were projected, each group consisting of four sinusoidal fringes with different phase shifts. We set the fringe frequencies to λ_{1} = 1/70, λ_{2} = 1/64, and λ_{3} = 1/59, and the corresponding wrapped phases are ϕ_{1}, ϕ_{2}, and ϕ_{3}, respectively. According to the heterodyne principle, the phases ϕ_{12} and ϕ_{23} were obtained by superposition, with equivalent wavelengths λ_{12} and λ_{23}. Then, the phases with equivalent wavelengths λ_{12} and λ_{23} were superimposed to obtain the unwrapped phase ϕ_{123}, which has only one periodic phase over the whole field [28]. The process is shown in Figure 10.
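The heterodyne step above can be sketched numerically. The beat of two wrapped phases behaves like a fringe of longer equivalent period, and with the three pitches used in the experiment (70, 64, and 59 pixels) the cascaded equivalent period easily covers the whole field. The function name is illustrative; the period arithmetic follows the standard heterodyne relation.

```python
import numpy as np

def heterodyne(phi_a, phi_b):
    """Beat phase of two wrapped phases: phi_ab = (phi_a - phi_b) mod 2*pi.

    For fringe periods T_a < T_b, the beat has the longer equivalent period
    T_ab = T_a * T_b / (T_b - T_a)."""
    return np.mod(phi_a - phi_b, 2 * np.pi)

# Equivalent periods for the pitches used in the experiment (in pixels):
T1, T2, T3 = 70.0, 64.0, 59.0
T12 = T1 * T2 / (T1 - T2)
T23 = T2 * T3 / (T2 - T3)
T123 = T12 * T23 / (T23 - T12)
print(round(T12, 1), round(T23, 1), round(T123, 1))  # 746.7 755.2 66080.0
```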
We analyzed the experimental results by extracting lines 0 to 945 from the final phase-unwrapping diagram. As can be seen from Figure 11, our method successfully unwraps the phase in the area where the measured surface is saturated.
Furthermore, the 3D reconstruction was generated by applying the phase-height mapping relationships. The traditional 3D measurement method based on fringe projection and the proposed method were used to measure the same metal workpiece with a high-reflective surface. High-reflective objects with different materials and reflection coefficients were used for 3D reconstruction; the reflection coefficients ranged from 0.40 to 0.90 in our experiment.
The reflection coefficient of the first metal workpiece was from 0.40 to 0.50 [29]. Comparing the experimental results shown in Figures 12(a) and 12(b), we can conclude that there was an obvious point-cloud-missing phenomenon on the measured surface in the 3D reconstruction map obtained by the traditional phase-shifted method, whereas the measured surface was completely reconstructed by our proposed method. The depth comparison of the 3D point clouds is shown in Figure 13.
The measurement output comprised the 3D point cloud, so the quality of the point cloud was the most important indicator for evaluating the performance of the measurement system. Similarly, we measured the workpiece and merged the point clouds after alignment by using the cloud-based triangular-mesh reconstruction algorithm. Furthermore, we compared and analyzed the quality of the model data and the original point-cloud data. Deviation values were signed: points on one side of the reference plane were negative, and points on the other side positive. The maximum and minimum point-to-plane distances were calculated with respect to the least-squares fitted plane. The average error and the standard deviation, shown in Tables 2 and 3, were used to evaluate quantitatively the validity of the proposed method.


From the quantitative analysis of Tables 2 and 3, we can see that the average error and standard deviation of the proposed method in the absolute and positive directions were smaller than those obtained by the traditional method. The average error in the absolute direction was decreased by 71%.
We also measured a piece of machined aluminum with a reflection coefficient of 0.60 to 0.75 using the traditional ADFP method [21] and our proposed method, respectively. The comparison of the reconstruction effect maps is shown in Figures 14(a) and 14(b), and the depth comparison of the 3D point clouds is shown in Figure 15.
From the quantitative analysis of Tables 4 and 5, the average error and standard deviation of the proposed method in the absolute and forward directions were smaller than those obtained by the conventional method. The average error in the absolute direction was reduced by 84.1%, and the forward average error was decreased by 83.7%. The standard deviation in the absolute direction was reduced by 71.6%, and the forward standard deviation by 69.4%. These data further verify that our proposed method is flexible and adaptive for the 3D measurement of high-reflective surfaces.


In addition, we measured a machined metal object with a reflection coefficient of 0.75–0.90. The raw measured object is shown in Figure 16(a), and the comparison of the reconstruction effect maps is shown in Figures 16(b) and 16(c).
From the quantitative analysis of Tables 6 and 7, the average error and standard deviation of the proposed method in the absolute and forward directions were both smaller than those obtained by the conventional method.


In summary, the above experiments show that the proposed method has a wide range of applications and can effectively solve the problem of 3D shape measurement of surfaces with large reflectivity variations.
5. Conclusions
An adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm is proposed to solve the point-cloud-missing problem of 3D shape measurement for surfaces with a high range of reflectivity. For overexposed pixels on high-reflective surfaces, an appropriate gray-level intensity of the composite image is computed as the optimal projection intensity to avoid image saturation. Simultaneously, for dark pixels with low surface reflectivity, a high gray-level intensity is selected as the optimal projection intensity, which maintains a higher SNR. The experiments show that the proposed method achieves high measurement accuracy for high-reflective surfaces: the average error in the absolute direction is reduced by 84.1%, and the forward standard deviation is reduced by 69.4%. Our proposed method needs only two prior steps for measuring a high-reflective surface, without projecting and capturing a large number of fringe images at multiple intensities or exposure times; it thereby avoids additional hardware complexity and makes the whole measurement easier to carry out and less laborious. However, the proposed method cannot yet be used in dynamic measurements; in future work, the optimal projection intensity will be predicted adaptively to achieve high measurement accuracy in dynamic scenes.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
This research was funded by the National Natural Science Foundation of China (Grant nos. 51805153 and 51675166) and the State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University (pilab1801).
References
1. L. Huang, J. Xue, B. Gao, C. McPherson, J. Beverage, and M. Idir, “Model mismatch analysis and compensation for modal phase measuring deflectometry,” Optics Express, vol. 25, no. 2, pp. 881–887, 2017.
2. X. Liu, X. Peng, H. Chen, D. He, and B. Z. Gao, “Strategy for automatic and complete three-dimensional optical digitization,” Optics Letters, vol. 37, no. 15, pp. 3126–3128, 2012.
3. H. Lin, J. Gao, G. Zhang, X. Chen, Y. He, and Y. Liu, “Review and comparison of high-dynamic range three-dimensional shape measurement techniques,” Journal of Sensors, vol. 2017, Article ID 9576850, 11 pages, 2017.
4. S. Zhang, “High-speed 3D shape measurement with structured light methods: a review,” Optics and Lasers in Engineering, vol. 106, pp. 119–131, 2018.
5. V. L. Tran and H. Y. Lin, “A structured light RGB-D camera system for accurate depth measurement,” International Journal of Optics, vol. 2018, Article ID 8659847, 7 pages, 2018.
6. Z. Song, R. Chung, and X. Zhang, “An accurate and robust strip-edge-based structured light means for shiny surface micromeasurement in 3D,” IEEE Transactions on Industrial Electronics, vol. 60, no. 3, pp. 1023–1032, 2013.
7. S. Feng, Q. Chen, C. Zuo, and A. Asundi, “Fast three-dimensional measurements for dynamic scenes with shiny surfaces,” Optics Communications, vol. 382, pp. 18–27, 2017.
8. J. Li, H. Ren, P. Luo, X. Gao, and Z. Wang, “Specular reflection compensation in homography fringe projection profilometry,” Optik, vol. 140, pp. 413–422, 2017.
9. C. Zhang, M. Wang, Q. Chen, D. Wang, and S. Wei, “Two-step phase retrieval algorithm using single-intensity measurement,” International Journal of Optics, vol. 2018, Article ID 8643819, 7 pages, 2018.
10. Z. Song, H. Jiang, H. Lin, and S. Tang, “A high dynamic range structured light means for the 3D measurement of specular surface,” Optics and Lasers in Engineering, vol. 95, pp. 8–16, 2017.
11. W. Feng, H. Liu, D. Zhao, and X. Xu, “Research on defect detection method for high-reflective metal surface based on high dynamic range imaging,” Optik, vol. 206, p. 164349, 2020.
12. W. Feng, F. Zhang, W. Wang, W. Xing, and X. Qu, “Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging,” Applied Optics, vol. 56, no. 13, pp. 3831–3840, 2017.
13. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: a novel 3D scanning technique for high-reflective surfaces,” Optics and Lasers in Engineering, vol. 50, no. 10, pp. 1484–1493, 2012.
14. L. Ekstrand and S. Zhang, “Autoexposure for three-dimensional shape measurement using a digital-light-processing projector,” Optical Engineering, vol. 50, no. 12, p. 123603, 2011.
15. Y. Li, Y. Fu, Z. Liu et al., “Three-dimensional polarization algebra for all polarization sensitive optical systems,” Optics Express, vol. 26, no. 11, pp. 14109–14122, 2018.
16. J. Jeong and M. Y. Kim, “Adaptive imaging system with spatial light modulator for robust shape measurement of partially specular objects,” Optics Express, vol. 18, no. 26, pp. 27787–27801, 2010.
17. C. Chen, N. Gao, X. Wang, and Z. Zhang, “Adaptive projection intensity adjustment for avoiding saturation in three-dimensional shape measurement,” Optics Communications, vol. 410, pp. 694–702, 2018.
18. J. Peng, X. Liu, D. Deng, H. Guo, Z. Cai, and X. Peng, “Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns,” Optics Express, vol. 24, no. 19, pp. 21846–21860, 2016.
19. G. Babaie, M. Abolbashari, and F. Farahi, “Dynamics range enhancement in digital fringe projection technique,” Precision Engineering, vol. 39, pp. 243–251, 2015.
20. C. Waddington and J. D. Kofman, “Modified sinusoidal fringe-pattern projection for variable illuminance in phase-shifting three-dimensional surface-shape metrology,” Optical Engineering, vol. 53, p. 084109, 2014.
21. D. Li and J. Kofman, “Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement,” Optics Express, vol. 22, no. 8, pp. 9887–9901, 2014.
22. H. Lin, J. Gao, Q. Mei, Y. He, J. Liu, and X. Wang, “Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement,” Optics Express, vol. 24, no. 7, pp. 7703–7718, 2016.
23. S. Li, F. Da, and L. Rao, “Adaptive fringe projection technique for high-dynamic range three-dimensional shape measurement using binary search,” Optical Engineering, vol. 56, p. 094111, 2017.
24. C. Chen, N. Gao, X. Wang, and Z. Zhang, “Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe images projection,” Measurement Science and Technology, vol. 29, no. 5, Article ID 055203, 2018.
25. H. Lin, J. Gao, Q. Mei, G. Zhang, Y. He, and X. Chen, “Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment,” Optics and Lasers in Engineering, vol. 91, pp. 206–215, 2017.
26. I. Lobel, R. P. Leme, and A. Vladu, “Multidimensional binary search for contextual decision-making,” Operations Research, vol. 66, no. 5, pp. 1346–1361, 2018.
27. E. Hu, Y. He, and W. Wu, “Further study of the phase-recovering algorithm for saturated fringe patterns with a larger saturation coefficient in the projection grating phase-shifting profilometry,” Optik, vol. 121, no. 14, pp. 1290–1294, 2010.
28. Y. Xu, H. Zhao, H. Jiang, and X. Li, “High-accuracy 3D shape measurement of translucent objects by fringe projection profilometry,” Optics Express, vol. 27, no. 13, pp. 18421–18434, 2019.
29. M. Oren and S. K. Nayar, “A theory of specular surface geometry,” International Journal of Computer Vision, vol. 24, no. 2, pp. 105–124, 1997.
Copyright
Copyright © 2020 Wei Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.