Abstract

The main objective of this study is to present a new measuring method that improves the positioning verification step within the dimensional validation process of wind hub parts. This enhancement speeds up the measuring procedure for this type of part. An industrial photogrammetry-based system was used as the starting point, and new functions were added to its existing capabilities. In addition to a new development based on photogrammetric modelling and image processing, a measuring procedure was defined taking optical and vision-system considerations into account. The method was also validated against a certified laser-tracker procedure, obtaining deviations of ±0.125 mm/m.

1. Introduction

The component analyzed in this article is part of several wind turbine models and, from an operational point of view, is one of the most critical, together with other components such as the blades and the tower [1]. The machining of these parts requires periodic accuracy checks that guarantee dimensional quality and, therefore, suitable operational performance.

The hub, which is usually made of cast iron or steel, is the component of the rotor that joins the blades to the rotation system and constitutes the center of the rotor.

1.1. Review of Current Approaches

The approach presented in this article aims to overcome the limitations of measuring the positioning [2, 3] of multiple holes in a large part such as a wind hub. The positioning tolerance is normally 1 mm for each hole center with respect to a general reference (e.g., a machined diameter), but tighter tolerances are also required depending on the hub model. The current standardized solution for assuring that the part fulfills the manufacturing acceptance interval [4] is based either on laser-tracker technology [5] or on close range photogrammetry [6]. Although other inspection devices are available for 3D inspection (articulated arms, stereo systems, CMMs, etc.) [1, 7], specifications such as the dimensions of the parts and the required accuracy limit their feasibility for machined part inspection. While cast parts up to 3-4 m only require assuring enough material for further processing (tolerances around ±15 mm), machined parts require tighter tolerances for finished surfaces.

Laser-tracker devices are the most widespread portable CMMs for machined part inspection and validation. Nevertheless, recently introduced vision-based systems (photogrammetry, stereo systems) are also used for some specific measuring tasks (see Figure 1). For cast part validation, for example, the use of white light scanners [8] is widespread, but their uncertainty [9, 10] is not suitable for machined parts and in most cases surface preparation is required before the measurement, extending the schedule of the overall measuring process. Other systems such as 3D scanners [11–14] are also used for cast part validation, but they show a similar limitation for finished geometries. The only technology apart from the laser-tracker that is accepted by standardized procedures for machined part inspection is close range photogrammetry. Nevertheless, due to its versatility, the laser-tracker has historically been the reference device.

One of the main drawbacks of these types of measuring devices is the large amount of time needed to verify the positioning of multiple (more than 300) repetitive features such as the holes where the blades are fixed. This is mainly due to the measuring strategy required for these features: for each element, fixtures [15] are needed that allow correct scanning of the feature with the laser-tracker in dynamic (interferometric) mode. In some cases this can be avoided (static mode), but when the positioning tolerance is narrow, the dynamic approach is the recommended methodology for verification and First Article Inspection (FAI) validation of machined parts.

Another disadvantage is the poor accessibility of the part when the size of the measurand exceeds a few meters (around 2 m), since the operator needs to reach each element to be verified. This lack of ergonomics also increases the overall measuring time, which is the main motivation for the research presented in this paper.

In recent years, there has been a clear tendency to manufacture bigger and bigger parts for more powerful energy generation [16]; therefore, managing these obstacles will become even more challenging in the coming years. With the aim of avoiding these problems, optical methods can be applied to reduce the verification time and to increase the flexibility of the measuring procedure. Some approaches are still based on the design and validation of new adapters. Nevertheless, some of these solutions do not provide enough accuracy for machined features and others require as many adapters as features to be measured (see Figure 2); accordingly, they are not cost-effective. Undoubtedly, the evolution of these optical techniques will improve the capacity to detect, assess, and locate natural features in 3D with high accuracy, but at present this functionality is not offered by commercial devices at the required accuracy level.

The research presented in this article puts forward an alternative measuring method and technique to solve the limitations mentioned above. The proposed approach is based on a combination of close range photogrammetry and image processing techniques, together with previously known data defining the nominal position of the features to be measured (see Figure 3). This alternative approach avoids the use of targets on the features in order to speed up the measuring process. Validation against certified results determines the scope of this new development.

1.2. Background and Motivation

The capacity to carry out this approach is based on previously acquired knowledge of photogrammetry and of the image processing techniques found in the libraries of software tools such as Labview® or Halcon®. The extensive experience of users of photogrammetric devices on the one hand, and a proven track record in vision-based solutions on the other, are combined to take the most from both technologies. This combination makes it possible to obtain further functionalities based on advanced processing of image data. Besides, tests carried out under laboratory conditions were promising not only in terms of accuracy but also in terms of time saving.

Another starting point for this research was a set of previous comparison tests between commercial close range photogrammetry and a laser-tracker-based methodology. The highest positioning difference between both techniques was around 0.05 mm. Along with this comparison, a rejected and rusted wind hub with holes was scanned with a white light scanner in order to compare the results against this type of portable coordinate measuring machine (PCMM). Nevertheless, the result of this method was not good enough for the comparison due to the lack of clear data at the edges of the holes; this additional test only served to extend the comparison between cutting-edge techniques to other dimensions and geometric tolerances of the hub.

1.3. Overview

This article is organized as follows. First, the aim of the study is introduced together with a review of current measuring methods. Then the developed solution and procedure are described and the obtained results are presented. Finally, some conclusions of the research and future trends and challenges in this field are discussed.

2. Method

This section describes the developed solution, which is combined with standardized procedures in order to improve the overall measuring process for the dimensional verification of wind generation parts. The core of the solution is an industrial photogrammetric system (TRITOP® from GOM, Braunschweig, Germany), which provides the intrinsic and extrinsic parameters of the image network used in the approach. The resulting data are combined with the nominal data of the features to be measured in order to reproject those features onto the acquired images. Once the images are processed and the centers of the features are detected in image coordinates, these coordinates are transformed into spatial coordinates by applying the previously solved photogrammetric model. In this way, the positioning of these features is determined without using any target, which shortens the verification procedure.
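As an illustration of the last step (recovering the 3D position of a feature from its detected image coordinates), the following minimal Python sketch performs a linear spatial intersection of one point observed in several oriented images. It is only a schematic assumption of how such a step can be implemented, not the authors' Matlab code; the projection matrices and distortion-corrected image coordinates are assumed to be available from the solved network.

```python
import numpy as np

def triangulate(observations):
    """Linear (DLT) spatial intersection of one point observed in
    several oriented images.

    observations : list of (P, (u, v)) pairs, where P is the 3x4
        projection matrix of an image (built from the intrinsic and
        extrinsic parameters of the solved network) and (u, v) are
        the measured, distortion-corrected image coordinates.
    Returns the 3D point in the common reference frame.
    """
    A = []
    for P, (u, v) in observations:
        A.append(u * P[2] - P[0])   # each observation contributes two rows
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))  # homogeneous least squares
    X = Vt[-1]
    return X[:3] / X[3]
```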

Apart from the developed procedure, and to validate the proposed solution, other aspects such as lighting, accuracy, and traceability were analyzed not only in simulation but also experimentally.

2.1. Developed Solution

The developed procedure (see scheme in Figure 4) is a hybrid solution combining photogrammetric data and image processing tools integrated in a Matlab platform. First, the photogrammetric data is obtained by means of a commercial portable CMM called TRITOP. This device is composed of a professional reflex camera (CANON EOS-1Ds Mark III®), a wide-aperture nonmotorized lens (DISTAGON 2.8/25 ZF from Carl Zeiss®), a minimal number of targets to be located on the part, and a software that registers the images and processes them to solve the inverse problem of a photogrammetric network. A specific procedure is used to take the multiview images, which is described in the following section. The software automatically identifies the coded targets used to solve both the intrinsic and the extrinsic orientation in the images. Then, it applies a scale to the scene according to the length of the detected physical artifact. The noncoded targets are used to define the approximate coordinate system according to the nominal specification. Once these steps are carried out, the results can be exported to a file for later use.

In order to use these results, a photogrammetric model (algorithm) was developed and programmed in Matlab. This model complements different functions available in commercial software:

(1) Control of the solving of the inverse problem of the photogrammetric model
(2) Network design simulation
(3) Uncertainty assessment based on residuals
(4) Determination of the 3D spatial position of features based on geometric features

The photogrammetric model (see Figure 5) takes as input the data obtained in the previous step and interprets it. This enables the user to take advantage of the previously exported data and to use it to determine the 3D position of other features that are not measured with targets or adapters. The model is based on the collinearity equations and the Brown model [17, 18] for the estimation of camera distortion errors. Other general references were also reviewed to obtain an overview of the current state of the art of close range photogrammetry [18–20].

These are the main equations (collinearity equations) that describe the mathematical model of the photogrammetric system:

$$x = x_0 - c\,\frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta x$$

$$y = y_0 - c\,\frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta y \qquad (1)$$

The parameters of these equations are the image coordinates $(x, y)$, the principal point coordinates $(x_0, y_0)$, the principal distance $c$, the rotation matrix components $r_{ij}$, the spatial coordinates $(X, Y, Z)$, the projection center coordinates $(X_0, Y_0, Z_0)$, and the optical distortions $(\Delta x, \Delta y)$.

The parameters that define the model include coordinates, spatial transformations, mechanical decentering, and optical distortions. The bundle adjustment procedure is typically used to solve the model parameters, which comprise both the extrinsic and the intrinsic orientation. The main optical aberrations considered are radial distortion, tangential distortion, and shear distortion, among others.

The sum of all the distortion sources is corrected through the $(\Delta x, \Delta y)$ terms in (1). This photogrammetric model is the mathematical core of the developed solution.
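To make the model concrete, the following minimal Python sketch projects one 3D point into image coordinates using the collinearity equations of (1) together with a Brown-type distortion correction. It is an illustrative assumption only (the authors implemented their model in Matlab), and sign conventions for the rotation and distortion terms vary between implementations. A residual function built from this projection could be fed to a nonlinear least-squares solver to reproduce the bundle adjustment step.

```python
import numpy as np

def project_point(Xw, X0, R, c, pp, k1, k2, p1, p2):
    """Collinearity projection of a 3D point with a Brown distortion model.

    Xw : 3D point in the object coordinate system
    X0 : projection center (camera position)
    R  : 3x3 rotation matrix (object frame -> camera frame)
    c  : principal distance
    pp : principal point (x0, y0)
    k1, k2 : radial distortion coefficients
    p1, p2 : tangential (decentering) distortion coefficients
    """
    dX = R @ (np.asarray(Xw, float) - np.asarray(X0, float))
    # Ideal image coordinates relative to the principal point
    x = -c * dX[0] / dX[2]
    y = -c * dX[1] / dX[2]
    # Brown model distortion terms (the delta_x, delta_y of equation (1))
    r2 = x * x + y * y
    dx = x * (k1 * r2 + k2 * r2**2) + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * (k1 * r2 + k2 * r2**2) + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return pp[0] + x + dx, pp[1] + y + dy
```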

If a characteristic (e.g., the center) of these features is detected and measured in each image (nominal data is generated and used for this step), it is feasible to calculate the 3D position of those characteristics in the same reference frame considered in the industrial solution by means of the photogrammetric model. Most bibliographic references are based on the direct determination of geometric elements [21], but the solution presented in this paper is based on the adjustment of image data. A short description of the evaluated approaches is given in the following paragraphs.

The first attempt was to fit geometric elements to the border points of the holes. For this purpose, image processing is needed to identify and extract the border points of each hole in each image. The most common filters are the Sobel and Canny [18] operators, applied after binarizing the image. The main drawbacks of this approach were the time required to process all the images and features and the difficulty of setting the filtering parameters due to the nonhomogeneous image intensity distribution, so it was no longer considered a suitable approach. Moreover, the fitting of ellipses to the border points of the holes was not repeatable due to outliers (bezel points and noise), so the accuracy obtained in this manner was not appropriate either.
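For reference, a minimal OpenCV sketch of this discarded approach is shown below. The file name, threshold, and filter values are hypothetical placeholders; in practice they had to be retuned per image, which is precisely the weakness described above.

```python
import cv2

# Discarded approach: binarize, extract hole borders, and fit ellipses.
img = cv2.imread("hub_view_01.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image file
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
edges = cv2.Canny(binary, 50, 150)                           # placeholder thresholds
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    if len(cnt) >= 5:                                        # fitEllipse needs >= 5 points
        (cx, cy), (major, minor), angle = cv2.fitEllipse(cnt)
        # Bezel points and noise act as outliers and corrupt this fit
```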

The next effort focused on pattern matching approaches, in which the features to be detected are searched for in the images. For this purpose, a reference image (a circle) was defined and employed as a pattern. However, the features seen in the images are not circles, so this reference image is not representative of the features to be identified and the trials were not robust. More complex patterns would therefore be needed. Considering this, and also that this image processing is highly time-consuming, the approach was likewise discarded.

This step was finally solved with a new approach based on the photogrammetric model and the previously determined nominal geometric data, which enables reprojecting onto the images the nominal theoretical border points of each feature (see Figure 6). Based on this functionality, the image processing flow described below was established.

The final image processing step (see Figure 7) was implemented in Matlab and is based on the following flow (see Figure 4):

(i) Loading of the images
(ii) Importation of the TRITOP results
(iii) Importation of the nominal data
(iv) Generation of the 3D edge points of each hole from the center values
(v) For each image:
(a) Image adjustment (from RGB to grey data, edge filtering)
(b) Determination of the region of interest (ROI) for each feature
(c) Reprojection of the edge points from 3D to 2D for each feature
(d) Sum of the gradient of the evaluated ROI at these edge points for each feature
(e) Maximization of this function to establish the correction value for the center image coordinates (a sketch of steps (d)-(f) is given below)
(f) Correction of the center image coordinates
(g) Storage and exportation of the result data

This processing is mainly applied to nonoblique images to avoid eccentricity effects [22, 23] and to assure a proper accuracy in the determination of the hole center, distinguishing between bezel and hole diameter and, therefore, in the assessment of the 3D position. So, although all images are necessary to solve the photogrammetric network, not all of the features are suitable for this image processing step or for the bundle adjustment step.
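The following minimal Python sketch illustrates steps (d)-(f) for a single feature under stated assumptions: the nominal edge points have already been reprojected into the region of interest, and the correction is modelled as a rigid 2D shift of the center. Function and variable names are hypothetical; the actual implementation is in Matlab.

```python
import numpy as np
from scipy import ndimage, optimize

def refine_center(gray_roi, edge_px):
    """Refine the image coordinates of one hole center.

    gray_roi : 2D grey-level region of interest around the feature
    edge_px  : (N, 2) array of reprojected nominal edge points,
               in (row, col) pixel coordinates of the ROI
    Returns the (d_row, d_col) correction that maximizes the sum of
    the gradient magnitude sampled at the shifted edge points.
    """
    # Gradient magnitude of the ROI (Sobel-based)
    gx = ndimage.sobel(gray_roi.astype(float), axis=1)
    gy = ndimage.sobel(gray_roi.astype(float), axis=0)
    grad = np.hypot(gx, gy)

    def negative_score(shift):
        pts = edge_px + shift  # shift all edge points rigidly
        # Bilinear sampling of the gradient at the shifted points
        vals = ndimage.map_coordinates(grad, pts.T, order=1, mode="nearest")
        return -vals.sum()

    # Small search around the nominal (reprojected) position
    res = optimize.minimize(negative_score, x0=np.zeros(2), method="Nelder-Mead")
    return res.x
```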

2.2. Measuring Procedure

The measuring procedure consists of a photogrammetry-based measuring session from the point of view of feature acquisition and determination. This requires not only a photogrammetric approach but also an acquisition process in which the focusing and the contrast of the features to be measured are taken into account. With this aim, the combination of image sensor and lens is studied in order to define the minimum focusing distance, the depth of field (DOF), and the resolution on the features to be measured. Depending on these parameters and the working distance, the features to be determined are resolved more or less accurately. It should be pointed out that, in order to obtain a more accurate intrinsic calibration, the camera parameters remain fixed during the acquisition. The most critical one is the focal distance, but the shutter time and the lens aperture also remain fixed throughout the image acquisition process.
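As an illustration of this optical budget, the short sketch below estimates the depth of field and the object-space resolution for plausible acquisition values. The aperture, circle of confusion, and pixel pitch are assumptions for illustration only; the actual settings are fixed during the set-up of the procedure.

```python
# Rough optical budget for the acquisition set-up. All numerical values
# are illustrative assumptions, not the settings actually used.
f = 25.0        # focal length of the 25 mm lens, mm
N = 8.0         # assumed f-number
coc = 0.03      # assumed circle of confusion for a full-frame sensor, mm
pixel = 0.0064  # approximate pixel pitch of the camera, mm
d = 1500.0      # working distance, mm

# Hyperfocal distance and near/far limits of the depth of field
H = f ** 2 / (N * coc) + f
dof_near = d * (H - f) / (H + d - 2 * f)
dof_far = d * (H - f) / (H - d) if d < H else float("inf")

# Object-space resolution: object size imaged onto one pixel
gsd = pixel * d / f
print(f"DOF: {dof_near:.0f}-{dof_far:.0f} mm, resolution: {gsd:.2f} mm/pixel")
```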

It is also important to design the image network according to accuracy aspects [24, 25] and to the geometry and dimensions of the measurand. In this case, about 60 holes located radially on a 1200 mm diameter of a machined plane need to be measured. Although narrow view angles are preferable for feature measurement due to geometrical form deviations, wide angles (approaching 90°) are recommended to solve the photogrammetric model and, at least, to obtain a similar accuracy for all coordinates in the reference frame. There is therefore a balance between accuracy requirements and feature-oriented photogrammetric considerations in the network design.

As a first step, this measuring development and procedure were worked out under laboratory conditions, where lighting and part surface characteristics were under control. Once this case study was solved, the second step was carried out under real workshop conditions (see Figure 8), where these critical parameters were less controlled. In order to minimize these uncontrolled effects, some experimental tests were established beforehand, and the use of external lighting for workshop conditions was determined.

Taking into consideration that the execution time of the measurement is a critical factor in workshop measurements, this point was analyzed during the set-up of the system.

2.3. Industrialization

The previously defined procedure was modified to adapt it to real workshop conditions in relation to lighting as well as to the real geometry of the features. To minimize the effect of unstable illumination, fixed powerful light sources (see Figure 9) were used at the same base from which the images were taken. The lighting is critical for this approach in order to set up and fix the image processing parameters for workshop conditions. For artificial targets, the light sources are not necessary, or at least not indispensable, since the gradient change on the targets is assured regardless of the lighting conditions.

Besides, a pointing angle of 30° was used to improve the lighting path towards the surface of the part. This configuration was previously established in the laboratory using a sample in order to determine the most suitable lighting approach. The angle was selected taking into account the light incidence on the surface and the bezels (<45°), as well as the convergence angle between camera stations, so as to obtain a proper accuracy for the spatial intersection of coordinates.

Another key point in carrying out these measurements is to study the optical parameters of the scene to be measured in relation to measuring process parameters such as working distance, view angles, depth of field, and resolution.

From the point of view of time saving, a minimum number of images were considered in data processing.

With the aim of estimating the approximate accuracy of the designed photogrammetric network, the photogrammetric model was used. It allows estimating the uncertainty of the hole positioning for a specific intrinsic and extrinsic configuration. The uncertainty assessment is based on the analysis of residuals and determines the proportional relationship between the accuracy of the image data and the estimated spatial coordinates. The overall result is the uncertainty of each coordinate of each hole center. This estimation is useful to validate the photogrammetric network design for the real scene. The ratio between measuring uncertainty and manufacturing tolerance should be between 1/3 and 1/10, as recommended (ISO 17025).
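A simplified version of this residual-based assessment can be expressed as follows, assuming that the Jacobian of the image observations with respect to the estimated hole-center coordinates is available from the adjustment. This is a schematic sketch of the principle only, not the actual Matlab tool.

```python
import numpy as np

def coordinate_uncertainty(J, residuals, n_params):
    """Estimate per-coordinate standard uncertainties from the
    least-squares solution of the photogrammetric model.

    J         : Jacobian of the image observations with respect to the
                unknown parameters (here, the hole-center coordinates)
    residuals : image-space residuals after the adjustment
    n_params  : number of estimated parameters
    """
    dof = residuals.size - n_params                  # redundancy of the adjustment
    sigma0 = np.sqrt(residuals @ residuals / dof)    # a-posteriori unit-weight std
    # Covariance of the estimated parameters: sigma0^2 * (J^T J)^-1
    cov = sigma0**2 * np.linalg.inv(J.T @ J)
    return np.sqrt(np.diag(cov))                     # 1-sigma value per coordinate
```

The resulting per-coordinate values can then be compared with the positioning tolerance to check the 1/3 to 1/10 recommendation mentioned above.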

For this study, 12 images were considered with a working distance of 1.5 m.

Apart from the optical estimations of the measuring procedure, an uncertainty estimation [26] was carried out with the Spatial Analyzer® (SA) software, considering a fixed position of the laser-tracker and of the hole centers with respect to a reference frame defined at the center of all these holes. The objective of this simulation was to check whether this kind of measurement can be considered as a reference for the developed solution. A Monte Carlo approach [27] was used to assess the uncertainty of each center position. A representation of this estimation is shown in Figure 10.
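The principle of such a simulation can be sketched as follows: the laser-tracker readings for each point (one range and two angles) are perturbed with assumed instrument noise and propagated to Cartesian coordinates. The noise values below are illustrative assumptions and do not reproduce the actual SA simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def montecarlo_tracker(r, az, el, n=10000, s_r=0.01, s_ang=5e-6):
    """Monte Carlo propagation of laser-tracker noise for one hole center.

    r, az, el : nominal range (mm) and horizontal/vertical angles (rad)
    s_r, s_ang: assumed standard deviations of range (mm) and angles (rad)
    Returns the standard deviation of the resulting XYZ coordinates (mm).
    """
    rr = rng.normal(r, s_r, n)
    aa = rng.normal(az, s_ang, n)
    ee = rng.normal(el, s_ang, n)
    xyz = np.column_stack((rr * np.cos(ee) * np.cos(aa),
                           rr * np.cos(ee) * np.sin(aa),
                           rr * np.sin(ee)))
    return xyz.std(axis=0)

# Example: a hole center roughly 3 m away from the tracker
print(montecarlo_tracker(3000.0, np.deg2rad(30), np.deg2rad(10)))
```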

As a reference for the comparison of the obtained results, the same part was measured by means of the laser-tracker, taking care of accuracy aspects. The working distances, as well as the tool calibration, were checked beforehand to minimize inaccuracies and to determine the center position of the holes with respect to the theoretical coordinate system defined in the nominal model.

The following results are the coordinate differences between the reference values and the values obtained with the developed vision-based system (see Figure 11). Both sets of values were compared with the aim of validating the result of the developed system; in this manner, the traceability chain is assured under workshop conditions. Besides, these deviations are compared with the theoretical estimated uncertainty of the laser-tracker measurements. The common reference frame necessary for the comparison is defined in each system using the same geometries and elements.
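As a complementary check of this common frame, corresponding hole centers measured by both systems can be aligned with a least-squares rigid transformation before computing deviations. The sketch below uses the standard Kabsch/SVD method and is only an illustration; in the reported comparison, the frame is defined from the same geometric elements in each system.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (rotation R, translation t) that
    maps point set A onto point set B (Kabsch/SVD method).

    A, B : (N, 3) arrays of corresponding points, e.g. the same hole
           centers measured with the laser-tracker and the vision system.
    """
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Deviations after alignment: (R @ A.T).T + t - B
```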

3. Results and Discussion

The results of this research range from new uncertainty estimation functionalities to validation on a real case. In each of the stages, important findings were obtained for future measuring cases with even larger dimensions.

As the main result, it is established that the developed vision-based system can measure the 3D position of multiple holes simultaneously, without targets, and with an accuracy of ±0.125 mm/m (confidence interval) in the X and Y directions (see Figure 12). The stated accuracy is the difference between two measuring techniques whose coordinate systems are created by different approaches. For reference, the state-of-the-art accuracy of target-based photogrammetric approaches is 1:50000 (20 μm/m).

These results were demonstrated by means of a comparison between the different measuring techniques and the above-mentioned procedures. A nominal definition of the features to be verified is required beforehand with respect to a common coordinate system, which makes the comparison between methods possible. The laser-tracker approach is contact-based (reflector against surface), while photogrammetry is a noncontact procedure. Although similar areas or points have been used to define the geometric elements that make up the coordinate systems, some differences are still possible. Besides, the residuals of the fitting of the projected points on the processed images are another source of error. This processing could be improved by adapting some image parameters dynamically to gain robustness in this step, but such a tool has not been developed.

Moreover, for half of the holes (50%), the differences are smaller than 0.05 mm, without taking the Z coordinate into consideration.

Another important result of this research is the development of an uncertainty estimation tool for close range photogrammetry [28–30] that allows comparing the nominal accuracy with the verification tolerance, so as to check whether the photogrammetric set-up fulfills the accuracy requirements of the measurement. This functionality will be improved in the future, but it already permits this kind of accuracy estimation based on the residuals (sensitivity analysis) of the bundle adjustment step. Figure 13 shows the estimated uncertainty for each hole center per coordinate. This evaluation enables assessing the linear relationship between the inputs to the photogrammetric model (image coordinates) and the outputs (coordinate results).

The uncertainty of the depth coordinate (Z) is larger (around ±0.35 mm) than that of the in-plane coordinates (X and Y), but for this type of verification this difference is not important because the Z coordinate is not considered as a positioning result. Therefore, for the in-plane positioning of the hole centers, the measuring procedure's uncertainty is ±0.15 mm (see Figure 13).

Another interesting result is the uncertainty estimated for the laser-tracker measurement in a virtual, theoretical environment by means of the SA software. This tool enables an a priori estimation of the measuring process. In this case, and to assure that the certified method is good enough for the comparison in Figure 12, the uncertainty of the reference process was also estimated. The results range from 0.015 to 0.017 mm for all the center positions (see Figure 14), so they are insignificant compared with the differences obtained between the results of both measuring methods. This means that in real measuring conditions there are more sources of error than those considered in the simulation models.

Another key point of this study is the need to combine photogrammetric procedures with photographic considerations in order to obtain results with sufficient accuracy. This interaction requires expertise in both fields that is not obvious to a standard user. Under workshop conditions, images with view angles below 30° were used to obtain the positioning values and to reduce image distortion issues.

4. Conclusions and Future Work

It has been demonstrated that commercial photogrammetric results can be useful for further processing, as shown by the results obtained in this study. Therefore, the capabilities of commercial devices currently on the market can be extended for future demanding tasks. The main drawback of noncontact approaches is the lack of control over lighting conditions: the identification and processing of the features to be measured normally depend on lighting, which is a highly unstable factor in workshop or outdoor environments. Future vision-based developments will be more robust and smarter in this sense, but at the moment some limitations still exist.

The results obtained are suitable compared with the specified accuracy and seem promising for further applications where the dimensions of the part are larger and current certified procedures are hard to apply. In any case, the developed method is likely to be integrated into a graphical user interface (GUI) in order to make the most of it and to use it for 3D measuring services in a flexible manner. Some of the programmed code can also be improved for more efficient computing. More advanced and user-friendly simulation tools are also required to properly design the measuring procedure. Further integration stages (hardware and software) will provide new solutions for inline measuring tasks such as the approach mentioned in [31].

In any case, the results obtained are suitable for sectors such as renewable energy and the scientific and naval fields, where the developed tool can also be adapted to other features and measurements. The accuracy is limited by the bevel at the entrance of the holes; the center adjustment algorithms can be made more robust with respect to this type of feature and others. In addition, the triangulation geometry is not optimal from the photogrammetric point of view, which also conditions the accuracy of the obtained results. The comparison also depends on the alignment accuracy between the laser-tracker and the TRITOP system.

One of the main problems of the overall solution was determining the image processing method. The first approach tried to detect the border points of each hole and to fit an ellipse to these points. To this end, several image filters were tested and a geometric ellipse adjustment was programmed. Nevertheless, the results were neither repeatable nor accurate, due to the bezel of each hole and to workshop lighting variations. The pattern matching approach was also unsuccessful because of the lack of robust models of the features to be detected; moreover, it was highly time-consuming for all the images and features. To solve these problems, the image processing approach described above was finally applied.

With the development of more powerful image identification algorithms and more realistic photogrammetric models, better results will be achieved in the future, both in the close range photogrammetry field and in other optical approaches at multiple scales.

Another avenue for improvement is to work on the simulation side in order to understand and contrast the most suitable measurement procedures. Some industrial software already offers this functionality based on targets. In the future, this field will be studied to assure, before the measurement is carried out, that the acceptance criteria of the part can be achieved. This prior knowledge permits estimating the uncertainty of the measuring procedure and is especially indicated for large volume metrology applications where verification and assembly tasks are critical.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work described in this paper was carried out in collaboration with the company ETXETAR, S.A. within the Basque project DUOMO, funded by the Basque Business Development Agency (SPRI).