Journal of Sensors


Research Article | Open Access

Volume 2016 | Article ID 3715129 | 13 pages | https://doi.org/10.1155/2016/3715129

Calibration of Short Range 2D Laser Range Finder for 3D SLAM Usage

Academic Editor: Tao Zhu
Received: 17 Aug 2015
Revised: 21 Oct 2015
Accepted: 25 Oct 2015
Published: 20 Dec 2015

Abstract

The laser range finder is one of the most essential sensors in the field of robotics. It provides an accurate range measurement with high angular resolution. However, short range scanners require an additional calibration to achieve the abovementioned accuracy. The calibration procedure described in this work provides an estimation of the internal parameters of the laser range finder without requiring any special three-dimensional targets. This work presents the use of a short range URG-04LX scanner for mapping purposes and describes its calibration. The precision of the calibration was checked in an environment with known ground truth values, and the results were statistically evaluated. The benefits of the calibration are also demonstrated in practical applications involving the segmentation of the environment. The proposed calibration method is comprehensive and detects all major manufacturing inaccuracies. The procedure is suitable for easy integration into the current manufacturing process.

1. Introduction

The laser range finder (LRF), also known in the literature as LIDAR or LADAR, is one of the most essential sensors in the field of robotics. LRFs are part of many robotic applications, and their gradually falling prices continuously increase their adoption in current and newly developed robotic systems.

The LRF provides an accurate range measurement with high angular resolution. Thanks to its measurement principle, the accuracy of an LRF is in the order of centimetres even at long distances, and its construction is relatively simple and reliable. Because of this, the estimation of the internal parameters through calibration is often neglected, as the error is generally considered negligible. This is mostly true for the more expensive models (e.g., SICK LMS111), but not for low-cost LRFs such as the Hokuyo URG-04LX.

The Hokuyo URG-04LX is one of the most popular short range LRFs in the world. Small dimensions, low weight, and measurement range up to 4 m predetermine the URG-04LX for easy usage on small robots.

Hokuyo and other short range LRFs are used in many applications requiring precise measurements including SLAM, object recognition, obstacle avoidance, or ground surface estimation. For such applications developers often choose these cheaper devices as they are more available (e.g., [13]).

The aim of this paper is to present the use of the short range URG-04LX scanner for mapping purposes. We will introduce a set of improvements to the short range 2D LRF in order to achieve the higher accuracy and precision needed for use in 3D simultaneous localization and mapping (SLAM) applications. The precision of the calibration was checked in an application using the spinning actuated LRF and the results were statistically evaluated.

The idea of using a 2D LRF as a 3D scanner is not original. Probably the earliest designs of a 3D LRF with the URG-04LX were presented in [4, 5].

These designs, however, have three main disadvantages. The first one is the limited field of view, facing only the front of the robot. The second disadvantage is the mechanical realization: vibrations from the robot chassis are carried through the spring construction to the LRF and thus decrease its accuracy; additionally, the field of view is limited by the belts. The third and most important is the lack of calibration. As we will show, the internal parameters significantly influence the precision of the measurements.

The calibration procedure described in this work provides an estimation of the internal parameters of the LRF without requiring any special three-dimensional targets. The proposed procedure requires only a projection plane (e.g., a white wall) and a camera capable of capturing the laser beam. Using this calibration procedure for a single LRF may seem complex and time demanding, but when calibrating several devices it provides a fast and reliable way to obtain the calibration parameters correcting all major manufacturing inaccuracies. The most time demanding task is setting up the necessary equipment; the measurement of the LRF itself is relatively fast and can be completed in a few minutes. For a manufacturer producing hundreds of LRFs, the calibration time of a single unit is negligible. The benefits of the calibration for end users are clear and will be described in the following paragraphs.

Most of the state-of-the-art calibration procedures are aimed at the distance precision [6], performance characterization [7, 8], or noise characterization [9] but not at the spatial accuracy of the measurement.

In recent years, we can notice an increasing popularity of multibeam sensors (e.g., Velodyne). As these sensors are frequently used in the context of autonomous driving, proper calibration becomes crucial. Several new methods for the calibration of 3D LRF scanners have been proposed. The methods differ in whether they require a calibration target [10, 11] or not [12, 13]. The calibration procedures are based on entropy optimization [14], intensity imagery and lens distortion estimation [15], or automatic registration of planes in generic scenes [13].

The calibration is usually performed manually by measuring the offsets of the sensor, and the assessment of the calibration accuracy is often performed only by visual inspection. Our proposed procedure is similar to [16], but it provides a rigorous estimation of the error and significantly improves the precision while simplifying the calibration procedure. The process is designed to minimize the changes that need to be implemented into the existing manufacturing process in order to extend it with a calibration step.

2. 3D LRF Design

The design of the 3D LRF is depicted in Figure 1. The device is composed of two parts joined by a carrier. The first part is the LRF and the second is, in our case, a step motor (the main reason for using a step motor is the price: e.g., the servo Dynamixel RX-64 costs about 280, whereas the step motor SX17-1005-09 used here costs less than 20). The carrier is mounted on the step motor by a clamp connection. The LRF is mounted eccentrically on the carrier.

The local coordinate system of the LRF is depicted in Figure 1. The LRF measurement plane is identical to the xy plane, and the z axis is the rotation axis of the laser beam. The LRF is mounted in such a way that its x axis is identical to the rotation axis of the step motor. This design simplifies the subsequent transformation of coordinates.

The operation and measurement ranges of the URG-04LX are summarized in Table 1 [17]. The operating range of the LRF is 270° between the uttermost steps, denoted as Begin and End. The measurement range is only 240° between the Start and Stop steps. The step Front is identical with the x axis.


Position name   Begin   Start   Front   Stop   End
Sample num.         0      44     384    725   768
φ [°]            −135    −120       0    120   135

The distance measurement in a single plane is performed by the LRF. To achieve a 3D measurement, the step motor rotates the LRF around the rotation axis. Because the scan plane contains this axis, a rotation of 180° provides a measurement of the entire environment around the device.

According to this design we have to perform a transformation of the measured values. The angular resolution of the URG-04LX is 1024 samples per revolution [18], so the size of one LRF step is Δφ = 360°/1024 ≈ 0.352°. The step motor SX17-1005-09 has a full-step resolution of 1.8° (200 steps per revolution) [19], and in half-step control mode the size of a single step is Δθ = 0.9°.

Based on this, we can compute the angular positions of the laser beam and of the step motor:

φ = (n − 384) · Δφ,    (1)
θ = m · Δθ,    (2)

where φ is the angular position of the laser beam, θ is the position of the step motor (see Figure 1), and 384 is the sample number of the Front position (Table 1).

The symbols n and m are the sequential numbers of the LRF sample and of the step motor position. The LRF scanning range is n ∈ ⟨44, 725⟩, and the range for the step motor to cover half a revolution is m ∈ ⟨0, 200⟩. The values 44 and 725 are the Start and Stop step numbers of the LRF (see Table 1).
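The index-to-angle mapping above can be sketched in a few lines of Python. The step sizes follow from the datasheet values quoted in the text; the Front sample 384 is taken from Table 1. Note that the computed start angle, about −119.53°, differs from the nominal −120° of Table 1 by a fraction of a step.

```python
DELTA_PHI = 360.0 / 1024   # LRF angular step: ~0.352 deg per sample
DELTA_THETA = 0.9          # step motor half-step size [deg]
N_FRONT = 384              # sample number of the front direction (Table 1)

def beam_angle(n):
    """Angular position phi of the laser beam for LRF sample n [deg]."""
    return (n - N_FRONT) * DELTA_PHI

def motor_angle(m):
    """Angular position theta of the step motor after m half-steps [deg]."""
    return m * DELTA_THETA
```

For example, beam_angle(44) ≈ −119.53° and motor_angle(200) = 180°, the half revolution covering the whole environment.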

The output of the 3D LRF measurement is a matrix D whose elements are the single measured distances. The dimensions of the matrix are 682 × 201 (one row per LRF sample, one column per step motor position).

The content of the matrix D represents distances in a cylindrical coordinate system. For each element we know the pair of angles φ and θ. These values allow us to transform the data from the cylindrical coordinate system to the more common Cartesian coordinates (3D). This transformation can be performed in an easy way by using transformation matrices [20]:

v = R_x(θ) · A(δ) · [d cos φ, d sin φ, 0]ᵀ.    (3)

In Formula (3) the transformation matrix A(δ) reflects the mounting position of the LRF on the carrier. The angle δ is depicted in Figure 1 and represents the angle between the rotation axis of the step motor and the x axis of the LRF. Formula (3) transforms the cylindrical coordinates of the measured distances to the position vectors v used in the following sections.
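This transformation can be sketched as follows. Treating the mounting matrix A(δ) as a small rotation about the y axis is an assumption made here for illustration; the real matrix depends on the carrier geometry.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def point_3d(d, phi, theta, delta=0.0):
    """Transform one range sample (d, phi, theta) to Cartesian coordinates.

    phi   : beam angle within the LRF scan plane [rad]
    theta : step-motor rotation about the LRF x axis [rad]
    delta : mounting misalignment, modelled here as a rotation about y
    """
    p = [d * math.cos(phi), d * math.sin(phi), 0.0]  # point in the xy scan plane
    p = matvec(rot_y(delta), p)                      # mounting matrix A(delta)
    return matvec(rot_x(theta), p)                   # carrier rotation
```

With delta = 0 the function is the plain cylindrical-to-Cartesian mapping; a front-facing beam rotated by theta = 90° ends up on the vertical axis, as expected for a rolling scanner.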

2.1. 3D LRF Measurement Results with Inaccuracies

The results obtained from the initial measurement did not fulfil our expectations. The beginning and ending edges of the measured walls and ceiling of our experimental lab were not connected to each other, and the surfaces of the walls were twisted. For illustration see Figure 2, where the ceiling is twisted and the point cloud contains two wedge-shaped gaps; the problem is highlighted in the figure by red outlines.

We also tested three other scanners from our lab; the results were in some cases worse, in others better, but never correct. However, during the measurements each LRF repeatedly showed the same errors. Such errors can therefore be considered systematic.

Similar results were reported in [5], as shown in Figure 3. There the LRF was rotated one full revolution (in contrast to the half revolution used in our design). In the presented result it is clearly visible that all surfaces are measured twice; the most striking errors are highlighted in red in Figure 3. When using the same approach, we came to a similar result: a point cloud full of errors.

The distortion in the results complicates further point cloud processing: the segmentation process identifies more areas than exist in reality, and similarly the feature detection process identifies false features.

The measured results reveal that the measurements of the URG-04LX are asymmetrical relative to its front axis. This behaviour contradicts the technical principle of the LRF operation; there is no design reason for it.

The single positive conclusion is that although each LRF behaves differently, each one repeats its own errors consistently. We can therefore assume that every scanner has a systematic manufacturing flaw, so we decided to find the root of the problem.

3. URG-04LX Diagnostics

Since we were unable to obtain any further information concerning the discovered flaw of the URG-04LX, we decided to carry out a more detailed diagnostics ourselves. We obtained an additional URG-04LX scanner and two URG-04LX-UG01 scanners, a newer version of the URG-04LX with a USB power supply. All scanners were manufactured between 2008 and 2013. We lined up these four LRFs for the following measurements according to their year of production and labelled them LRF-1 to LRF-4. Internally, we identify all LRFs by their serial numbers.

We started the first pretesting stage with the set of four LRFs.

3.1. URG-04LX Pretesting

From the results presented in the previous sections we can assume that the asymmetry flaw is mainly caused by incorrect sampling positions of the LRF. Thus we prepared the first test to confirm or disprove this assumption.

Figure 4 shows the diagram of the pretesting configuration. A plastic frame with an opening for the scanner was mounted on the floor with double-sided tape and the URG-04LX was inserted into it. A reference point was marked on the floor at a fixed distance in front of the LRF. An obstacle 20 cm wide and 8 cm high was placed on the point, as shown in Figure 4. The measurement proved that the obstacle was detected in an incorrect position, shifted by a distance d. Each LRF detected the obstacle in a different position; the distance d ranged from 7 to 14.5 cm.

The pretesting experiment was designed not only to detect the deviation of the obstacle in front of the LRF, but also to verify whether the deviation is constant around the LRF perimeter. The plastic frame allows us to rotate the LRF by 90° clockwise and counterclockwise.

In each orientation the measurement showed that each LRF detects the obstacle in a position shifted by a similar displacement d. We can therefore preliminarily assume that the deviation is constant around the perimeter.

So far, all observations lead us to a conclusion that the asymmetrical behaviour is typical for this type of LRF.

3.2. Deviation Measurement of URG-04LX

To detect the real projections of the LRF laser beams, the best option is probably to record the weak reflection of the laser light on an obstacle. Such a recording is not possible with an ordinary camera; the most suitable camera available in our lab was the IDS USB 2 uEye LE industrial camera [21]. The LRF beam reflection can be reliably recorded on a white surface in a mildly darkened room.

A precise setup is necessary before recording the laser beam reflection. In particular, we have to carry out the following steps:
(1) Select a tool for defining the measurement plane suitable for the recording.
(2) Create a precision angular positioning tool for the LRF.
(3) Mark the measurement plane, the start point, and the scale for the recording.
(4) Perform and record the measurement for the set of LRFs.

These steps cannot be handled individually and independently; each step must be designed with respect to the following ones.

3.2.1. Tool for Defining the Measurement Plane

The previous measurement discovered relatively large deviations of several centimetres. Thus the accuracy of the measurement performed by our tools must be approximately one millimetre per metre to measure the deviations precisely. We decided to use a standard laser level with a tripod (see Figure 6); its accuracy meets this requirement. The tripod is equipped with three adjustable screws to set its base precisely to the required position.

The tripod base and the bottom of the level have ground surfaces that allow precise mounting of the level. Additionally, it is possible to mount anything else with a shape similar to that of the level.

3.2.2. Positioning Tool

The level tripod does not allow us to mount the LRF directly and adjust its rotation. For that purpose we made a simple unit which can easily be mounted on the tripod and allows repeatable angular positioning of the LRF. This unit is composed of three parts, visible in Figure 5: the step motor and two aluminium profiles.

The step motor SX17-1005 is mounted on a precise rectangular aluminium profile with the same width as the level. This allows us to mount the assembled unit repeatedly and precisely on the tripod. The second aluminium profile is mounted on the step motor by a clamp joint.

It is very important to mention that the step motor axis must be mounted precisely perpendicular to the plane of the profile.

The step motor provides precise angular positioning of the joined profile; one full step is 1.8° (200 steps per revolution) [19].

The positioning unit mounted on the tripod is shown in Figure 6.

3.2.3. Marking Measurement Plane

Having prepared the measuring tools, it is now necessary to plan the organization of the measurement in the lab. The schema of the arrangement of all necessary tools is presented in Figure 7 (the figure is not to scale). The tripod with the positioning unit is placed 3 metres in front of a wall-mounted whiteboard. The IDS industrial camera is placed off the axis of the positioning unit on a second tripod, with its field of view directed at the whiteboard.

When all measurement tools are arranged, we can start marking the measurement plane. The level is placed on the positioning unit edge to edge (see Figure 6). The marking cannot be done using only the level's water bubble and the three tripod screws: it is necessary to use the laser beam and create a few marks around the laboratory. Then the level is rotated on the positioning unit by 180° and the created marks are verified or modified by means of the adjusting screws. This delicate work must be very precise, and it has to be repeated many times until the marking plane is perfectly horizontal.

The marked horizontal plane must be inside the camera field of view together with some kind of scale. As the scale we used a grid printed on paper: a regularly spaced grid was printed on A3 paper and attached to the whiteboard by four magnets. An example of the captured image is in Figure 8(a). The horizontal white line is the reflection of the laser beam from the level. It overlaps the dashed line, which is 13 mm above the level base; this offset is given by the construction of the laser level depicted in Figure 9. The base is marked in the grid with the thick black horizontal line.

A similar situation is shown in Figure 8(b). The laser beam of the level was reoriented to the vertical position, and the white vertical line represents the position of the edge of the positioning unit. This position is shifted by 14 mm (see Figure 9) to match the laser beam output of the level. The expected position of the laser beam reflection is again marked in the grid by the dashed vertical line.

The zero point marked in Figure 7 is at the intersection of the horizontal and vertical thick black lines on the grid.

Now the position of the grid in the field of view is clearly defined and the grid will help us to determine the position of laser reflection.

3.2.4. Tracking the Beams of LRFs

Our workplace is now prepared for the measurement of the whole set of LRFs. The LRF is mounted on the positioning unit, as can be seen in Figure 11. The laboratory has to be mildly darkened for the measurement.

For each LRF we created a sequence of images capturing the laser beam reflection around the perimeter. The sequences were created in a counterclockwise direction, which means that the measurements started on the right side of the LRF and finished on its left side.

Three images from the whole record of LRF-2 are shown in Figure 10(a). The recording started at the angular position −138.6°, which corresponds to 77 steps of the step motor. The theoretical starting position should be −135°, but the inaccuracy of the LRF forced us to move the starting position of the measurement. This position is highlighted in Figure 10(a) by a short white vertical line on the right side of the image. Figure 10(b) depicts the record of LRF-4. These two LRFs were selected because they represent the highest and the lowest deviation in our measurement set. The complete overview of the measured deviations and the computed corrections is summarized in Table 2.


LRF     Shift s [mm]   Turn γ [°]   Begin [°]   Correction κ [°]

LRF-1   112            2.14         −136.46     −1.46
LRF-2   71             1.36         −137.24     −2.24
LRF-3   129            2.46         −136.14     −1.14
LRF-4   156            2.97         −135.63     −0.63

The summary in Table 2 covers all four LRFs from our set. The first column, shift s, denotes the measured distance between the first light dot and the highlighted white mark. To determine this distance properly it is necessary to enlarge the image, estimate the centroid of the light dot, and read its position in the grid; without the enlargement this is impossible.

The second column contains the measured shift converted to the angular deviation given by

γ = arctan(s / L),

where the distance L to the projection plane is in our setup 3000 mm. In this calculation we have neglected the distance error caused by the inclination of the laser beam of the LRF. We can do so because the error caused by the inclination is less than 0.05% for a vertical shift of 100 mm from the theoretical plane. This error is acceptable, as none of the tested LRFs showed an error larger than 60 mm (see Figure 14).
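The conversion of the measured wall shifts to the angles and positions of Table 2 can be reproduced with a few lines (the starting position −138.6° is the one quoted in the text):

```python
import math

L = 3000.0       # distance from the scanner to the whiteboard [mm]
START = -138.6   # angular position where the recording started [deg]

def turn_angle(shift_mm):
    """Angle corresponding to a horizontal shift measured on the wall [deg]."""
    return math.degrees(math.atan(shift_mm / L))

def begin_position(shift_mm):
    """Real Begin position of the scanner (third column of Table 2)."""
    return START + turn_angle(shift_mm)

def correction(shift_mm):
    """Correction angle kappa from the theoretical Begin of -135 deg."""
    return begin_position(shift_mm) - (-135.0)
```

For LRF-1 (shift 112 mm) this gives a turn of about 2.14° and a Begin position of about −136.46°, matching Table 2.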

The third column is the real Begin position of each LRF, calculated as Begin = −138.6° + γ. According to Table 1 the Begin position is theoretically −135°; however, our measurement showed a slightly different position for each LRF.

The fourth column of Table 2 contains the correction from the theoretical Begin position. This correction is denoted as κ for further use.

We can approximately recalculate the correction angle κ to a number of LRF steps. With the known LRF resolution of Δφ ≈ 0.352° per step, we can conclude that the inaccuracies of our scanners are in the range between two and seven steps.

3.2.5. Other Undocumented Features and Issues of URG-04LX

The first anomaly is the density of the projected points in Figure 10. We know that the angle between two samples of the URG-04LX is Δφ ≈ 0.352°. The spacing between two projected points at a distance of 3 m is therefore expected to be approximately 18 mm, which is close to the size of two squares of the grid. But in Figure 10 we obtained a much higher density: after enlarging the image we can see that the approximate distance between two points is only 9 mm. Such a finding may reduce the credibility of our measurement. It might be a bad camera recording or some optical phenomenon between the laser and the camera. In any case, the URG-04LX manufacturer does not mention the higher sampling density. We should explain this.

The second problem is also visible in Figure 10. The laser track is not 58 mm above the base (see Figure 1) as expected; it varies around the expected measurement plane very unevenly. The track of the laser beam suggests that the LRF rotation axis is not perpendicular to its base.

The third problem arises from the previous one. The irregularly shaped laser track indicates that the laser beam does not stay in a plane during the rotation; instead it sweeps a general surface.

In the following steps we will disassemble the LRF to verify which of these features originate from the design. Then we will propose a calibration method.

3.3. URG-04LX Disassembly and Diagnostics

In this chapter we will not fully disassemble the LRF or study the details of its design; the principle of the LRF is very well described in the literature, for example, in [22]. Yet if we want to find the source of the LRF inaccuracies, we have to look under the hood. The top cover of the URG-04LX can be removed by unscrewing four screws. After uncovering, the most important parts of the LRF are visible; an opened LRF is shown in Figure 12.

The top cover is not just a cover: it is the mounting point for the laser diode. A small mirror is mounted on the rotating head, and the detector is placed under this head. An important part is the metal synchronization ring, which is permanently joined with the rotating head. The synchronization is realized by an optocoupler. The metal ring with its openings, together with the optocoupler, guarantees the accurate positioning of the generated laser beams even though the motor speed may slightly vary.

The metal ring has a fixed shape and guarantees constant synchronization even if the operating conditions change. The ring contains 192 openings in an angular range of 270°, which corresponds to 768 samples per turn (see Table 1), provided that each opening in the ring initiates 4 laser diode pulses. Whether this is true can easily be verified on an opened LRF with a digital oscilloscope.
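The synchronization-ring arithmetic can be checked directly:

```python
# 192 openings spread over 270 degrees, each opening expected to
# trigger 4 laser pulses (per the documentation).
openings = 192
pulses_per_opening = 4
range_deg = 270

samples_in_range = openings * pulses_per_opening        # samples within 270 deg
samples_per_rev = samples_in_range * 360 // range_deg   # full-circle equivalent
```

This yields 768 samples in the 270° range and the 1024 samples-per-revolution resolution quoted earlier.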

Channel one of the oscilloscope was connected to the optocoupler output. The measured waveform is shown in Figure 13(a). In this diagram there are two periods with a high density oscillating signal, visible as black rectangles: these are the pulses of the optocoupler output. The signal is active at the low level. The timing corresponds to a 10 Hz measurement frequency: the time period between two repeated signal sequences is 100 ms and the length of the synchronization interval is 75 ms, which represents 75% of one revolution and corresponds to the measurement range of 270°.

Now we can increase the sampling resolution and measure the optocoupler and the laser diode together. The detailed waveform is shown in Figure 13(b): channel one is still connected to the optocoupler, and channel two shows the laser diode switching. From the captured waveforms it is clear that the falling edge of the optocoupler signal precisely synchronizes the laser diode and starts a short sequence of laser pulses. The second important finding is the number of laser pulses during one optocoupler period: we can see eight pulses, not four as expected from the manufacturer documentation.

This finding explains the density of samples in Figure 10 and confirms the correctness of our measurement. The laser pulses are recorded by the camera as dots, and not as lines, because of the short pulse duration.

Furthermore, it is necessary to determine the sources of the detected inaccuracies. The angular deviation of the laser beam can have two possible sources. The first possible cause is an inaccurate mutual mounting position of the rotating head and the synchronization ring. This position is set during manufacturing, and there is no possibility to change it.

The second possible cause is the position of the mirror on the rotating head. When the mirror on the head is slightly rotated around the z axis (see Figure 1), it causes an angular deviation around this axis.

Both of these inaccuracies are uncorrectable and originate in production. When they are added together, we detect them as a single error, manifested as the correction angle κ.

The source of the irregular shape of the laser trace in Figure 10 can also have two different causes. The first is a small tilt of the mirror on the rotating head; every tenth of a degree of mirror tilt is doubled by the reflection of the laser beam. The surface swept by the rotating beam is then a cone: convex or concave.

The second possible cause is the tilt of the internal rotation axis. When the axis of rotation is not parallel to the construction z axis, the result is an irregular height of the laser trace above the base. This tilt cannot be measured inside the LRF, since there is no reference point for the measurement.

These two inaccuracies are caused in the production process and there is no possibility to fix them inside the LRF.

Unfortunately, these inaccuracies are completely unique for each LRF.

Based on the principle of the LRF operation and known sources of inaccuracies (as described above), we can propose the most suitable calibration procedure.

4. URG-04LX Calibration

The errors we have discovered in the previous chapters point to obvious inaccuracies of the scanner's design. An easy way out would be a simple calibration in which each tracked point has its own correction value (a lookup table). Such a solution would be simple, but cumbersome and unclear. Nevertheless, based on the known design of the LRF, we can suggest a simpler solution composed of a few correction values. We will explain this step by step.

At first, we have to evaluate the difference between the track of the laser beam and the horizontal plane going through the LRF base at several distinguished angular positions around the perimeter (partially shown in Figure 10). These differences can be considered as the height of the laser beam above the base. The summary of the measured values is given in Table 3 for all LRFs in our set.


i   φ [°]   Height above base [mm]
            LRF-1   LRF-2   LRF-3   LRF-4

0   −120    107     73      61      59
1    −90    110     77      54      63
2    −60    109     67      58      73
3    −30    101     52      53      78
4      0     96     41      46      81
5     30     81     24      43      82
6     60     69     14      40      77
7     90     60     11      43      73
8    120     58     19      48      70

The heights of the laser track are given in Table 3. The angular positions are in the range from −120° to 120° with a step of 30° and are indexed by i from 0 to 8; this index is convenient for further processing. The graphical representation of the measured differences is shown in Figure 14.

This figure shows the deviation over the full measurement range of the LRF (as was partially indicated in Figure 10). In some parts of the perimeter the deviation of the laser beam is larger than 50 mm above or below the expected level (at the distance of 3000 mm from the obstacle). The theoretical level should be 58 mm above the base; in the figure the base is highlighted by the thick horizontal black line. The deviation of the laser beam track, combined with the inaccurate head placement (see Section 3.2.4), has a significant negative impact on the precision of the measurements.

4.1. Mathematical Model of LRFs Inaccuracies

Now we can illustrate the measurement setup graphically and then represent it mathematically. Figure 15 shows the laser beam starting at the point S and creating a circle in space with a diameter of 6000 mm. This circle lies in a plane, but this plane is shifted above or below the plane of the LRF base. The intersection of the z axis and the plane of the circle is the point P. In the ideal state we should expect the plane of the circle to be parallel to the base plane, as visible in Figure 15. Unfortunately, in the real situation we have to expect that this plane will be inclined and its normal vector n will have an arbitrary direction.

The position of the plane and its normal vector can be computed using the known values from Table 3. To fit the plane to the measured points we can use the method of least squares. The plane is described by the formula z = ax + by + c, and the parameters a, b, and c are computed by solving the normal equations

[a, b, c]ᵀ = (MᵀM)⁻¹ Mᵀ h,

where the ith row of the matrix M is [xᵢ, yᵢ, 1] and h is the vector of the measured heights.

The parameters a, b, and c can be directly used to compute the normal vector n = [−a, −b, 1] (scaled in Table 4 so that its z component equals 1000) and the height of the point P above the LRF base, which equals c. The point P is computed as the intersection of the z axis and the measurement plane. The results for all LRFs are summarized in Table 4.


LRF     Normal vector n         Height of P above base [mm]   ψ [°]

LRF-1   [−2.49, 8.30, 1000]     85.6                          −0.53
LRF-2   [1.68, 10.46, 1000]     43.5                           0.28
LRF-3   [1.46, 2.60, 1000]      50.9                           0.14
LRF-4   [−3.86, −1.51, 1000]    69.4                          −0.22
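The plane fit can be reproduced numerically from Table 3. Below is a minimal pure-Python sketch; the assumption made here is that the nine track heights of LRF-2 are read off a wall at radius 3000 mm around the scanner.

```python
import math

L = 3000.0                                        # radius of the projected circle [mm]
angles = [-120, -90, -60, -30, 0, 30, 60, 90, 120]
heights = [73, 77, 67, 52, 41, 24, 14, 11, 19]    # LRF-2 row of Table 3 [mm]

# Points on the wall where the beam heights were measured.
pts = [(L * math.cos(math.radians(p)), L * math.sin(math.radians(p)), h)
       for p, h in zip(angles, heights)]

# Normal equations M^T M [a,b,c]^T = M^T h for the plane z = a*x + b*y + c,
# where the rows of M are [x_i, y_i, 1].
A = [[0.0] * 3 for _ in range(3)]
r = [0.0] * 3
for x, y, z in pts:
    row = (x, y, 1.0)
    for i in range(3):
        for j in range(3):
            A[i][j] += row[i] * row[j]
        r[i] += row[i] * z

# Solve the 3x3 system by Gaussian elimination with partial pivoting.
for col in range(3):
    piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
    A[col], A[piv] = A[piv], A[col]
    r[col], r[piv] = r[piv], r[col]
    for k in range(col + 1, 3):
        f = A[k][col] / A[col][col]
        for j in range(col, 3):
            A[k][j] -= f * A[col][j]
        r[k] -= f * r[col]
sol = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    sol[i] = (r[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
a, b, c = sol

normal = [-1000 * a, -1000 * b, 1000]             # scaled as in Table 4
```

Under this assumption the result, normal ≈ [1.68, 10.46, 1000] with c ≈ 43.5 mm, matches the LRF-2 row of Table 4.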

The computed height of the point P and the position of the laser beam source above the base allow us to compute the angle ψ between the laser beam and the measurement plane. The values of the angle ψ can be found in the fourth column of Table 4.
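Numerically, the ψ column of Table 4 is consistent with ψ = arctan((h₀ − c)/L), where h₀ = 58 mm is the nominal beam height above the base and L = 3000 mm is the wall distance; this relation is our reading of the geometry, stated here as an assumption:

```python
import math

L = 3000.0   # distance to the projection plane [mm]
H0 = 58.0    # nominal height of the laser beam source above the base [mm]

def beam_plane_angle(c):
    """Angle psi between the laser beam and the fitted measurement plane [deg].

    c is the height of the point P above the base; the formula is an
    assumed geometric relation consistent with the tabulated values.
    """
    return math.degrees(math.atan((H0 - c) / L))

heights = {"LRF-1": 85.6, "LRF-2": 43.5, "LRF-3": 50.9, "LRF-4": 69.4}
psi = {k: round(beam_plane_angle(v), 2) for k, v in heights.items()}
```

Under this assumption the computed angles reproduce the fourth column of Table 4 for all four scanners.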

4.2. Proposal of Calibration Procedure

In this section we propose a universal calibration procedure based on the introduced mathematical model. The calibration extends the known Formula (3), which is intended for an ideal LRF without any manufacturing inaccuracies.

First, let us summarize all previously measured and calculated values:
(1) The horizontal correction angle κ in Table 2.
(2) The normal vector n of the LRF measurement plane in Table 4, which reflects the inclination of this plane.
(3) The height of the measurement plane above the LRF base and the resulting angle ψ between the laser beam and the measurement plane, also given in Table 4.

Thus we have two angular correction values and one normal vector. Formula (3) contains the transformation matrices, and we need to extend it with additional components: we must transform the LRF measurement plane into the position given by the normal vector.

This situation is illustrated in Figure 16. In this figure the normal vector n has an arbitrary orientation with respect to the LRF coordinate system x, y, z. The task is to place the measurement plane perpendicularly to n; we thus create a new coordinate system of the rotating head x′, y′, z′ with the z′ axis identical to n. The same transformation procedure will be used for all laser beams. Figure 16 depicts one laser beam denoted as a vector u and its resulting position marked as u′. The transformation of u is described by the following and is composed of two rotation matrices:

u′ = R_x(α) · R_y(β) · u,    (9)

where the rotations by the angles α and β align the z axis with the normal vector n.

Nevertheless, Formula (9) has one additional byproduct besides inclining the vector to u′. The vector u lies in the plane xy of the LRF coordinate system x, y, z. The transformation shifts the vector to an arbitrary position in the coordinate system x′, y′, z′, which is undesirable.

Our measurements depicted in Figure 10 recorded the final positions of all laser beam tracks around the perimeter. The horizontal inaccuracy was already eliminated in a previous step by the correction angle κ. Thus the proposed transformation in Formula (9) must not add any additional horizontal shift of the laser beam; only a vertical movement is allowed.

Therefore we have to perform an additional rotation around the head axis to move the vector to a position that lies back in the original vertical plane. This transformation is represented by the following formula, where the resulting matrix joins all rotations around the individual axes:

In this equation the rotation angle and some coordinates of the resulting vector are unknown. The expansion of the matrices and the vector produces three equations. The most important is the second one, Equation (11):

From the text above we already know that both the original and the transformed vectors must lie in this plane, so their corresponding coordinates must be equal to zero, and we obtain a simplified equation as follows:

The only unknown value in this equation is the rotation angle. From this equation we can express a new Formula (13) to compute that angle; the fraction in it can be represented as a cotangent of the known angle between the laser beam and the measurement plane:

The resulting Formula (14) is invariant with respect to the length of the beam vector. Importantly, this angle is computed only once for each LRF.
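The same correction angle can also be found numerically, which offers a convenient cross-check of Formula (14). In the sketch below the notation is ours: the beam is assumed to lie in the x-z plane as the vector (ux, 0, uz), and R is the tilt matrix built from the measured normal vector.

```python
import numpy as np

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def head_correction_angle(R, u):
    """Angle g about the head axis such that the tilted beam
    R @ rot_z(g) @ u keeps a zero y component, i.e. the beam stays in
    its original vertical plane (no horizontal shift is added)."""
    ux, _, uz = u
    # y component of R @ rot_z(g) @ u:
    #   R[1,0]*ux*cos(g) + R[1,1]*ux*sin(g) + R[1,2]*uz = 0
    a, b = R[1, 0] * ux, R[1, 1] * ux
    c = -R[1, 2] * uz
    r, phi = np.hypot(a, b), np.arctan2(b, a)
    # a*cos(g) + b*sin(g) = r*cos(g - phi) = c has two solutions;
    # pick the one with the smaller magnitude
    d = np.arccos(np.clip(c / r, -1.0, 1.0))
    wrap = lambda x: (x + np.pi) % (2.0 * np.pi) - np.pi
    return min(wrap(phi + d), wrap(phi - d), key=abs)
```

As in the paper, the result does not depend on the length of the beam vector, only on its direction.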

The calculation of this angle was the last missing step in formulating the final equation that integrates all computed calibrations. We modify Formula (3) in the following steps:
(1) All measured samples, represented by the beam vector, are inclined from the measurement plane by the computed angle.
(2) The subsequent rotation around the head axis, represented by the rotation matrix, is adjusted by the horizontal correction angle.
(3) The head axis is inclined by the computed matrix to its real position according to the normal vector.
(4) The mounting position of the LRF is expressed in the final equation by the mounting matrix (see (3)).
(5) The last transformation is the rotation to the current position of the LRF mounted on the carrier (see (3)).

The following expresses the previously described process of the modified computation of position vectors for all points of the point cloud:

This formula still contains only three variables; the rest of the formula is constant for each LRF.
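The five steps above can be sketched as follows. All variable and key names here are ours, since the paper's symbols appear only in the equations; the sketch assumes the scanning head rotates about the z axis.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def point_cloud_point(rng_m, theta, phi, calib):
    """One calibrated 3D point from a range reading.

    rng_m -- measured range, theta -- beam bearing within the scan,
    phi   -- current rotation of the scanning head.
    calib -- calibration set; the key names are our assumptions:
      'delta': inclination of the beam from the measurement plane,
      'dphi' : horizontal correction angle,
      'Rn'   : 3x3 tilt of the head axis to the measured normal vector,
      'Rc'   : 3x3 orientation of the LRF carrier,
      't'    : mounting translation of the LRF.
    """
    # (1) beam vector inclined from the scan plane by delta
    u = rng_m * np.array([np.cos(theta) * np.cos(calib['delta']),
                          np.sin(theta) * np.cos(calib['delta']),
                          np.sin(calib['delta'])])
    # (2) head rotation about the z axis, adjusted by the correction
    p = rot_z(phi + calib['dphi']) @ u
    # (3) tilt the head axis to its real position
    p = calib['Rn'] @ p
    # (4) mounting offset and (5) carrier orientation
    return calib['Rc'] @ (p + calib['t'])
```

With an identity calibration set the sketch degenerates to the uncalibrated Formula (3), which is a useful sanity check.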

The result of the calibration can be summarized into the following calibration set:

This set provides all information necessary for calibration. If the calibration process were implemented by the manufacturer, this set would be provided in the product calibration protocol.

5. Verification of the Proposed LRF Calibration

This section describes the evaluation of the proposed calibration. It is important to determine how much the calibration improves the spatial precision of the computed points in the point cloud, because the success of further point cloud processing depends mainly on the quality of the acquired data.

Therefore, we describe the acquired data set, a statistical comparison of calibrated and uncalibrated data, and a few practical results of processing the calibrated and uncalibrated point clouds.

5.1. Data Set Description

For the verification we prepared a data set of 100 point clouds. These point clouds were acquired inside a building in a special experimental area with known dimensions and shapes, which is used for various photogrammetric experiments. The measured area is 26 meters long, and each measurement was performed at a different position. The ceiling, all walls, and the floor are mutually perpendicular and all surfaces are flat.

5.2. Statistical Evaluation

The main problem of measurements performed by an uncalibrated LRF was mentioned in Section 2.1: it typically manifests as a twist of the surfaces in the measured environment. This twist significantly complicates or even prevents further data processing. Therefore we verify the quality of the calibration by observing the flatness of the measured surfaces.

The data for the statistical evaluation were prepared in the following steps:
(1) From each point cloud, both uncalibrated and calibrated, we selected the largest horizontally oriented segment and the largest vertical one. As a result we had 200 pairs of calibrated and uncalibrated segments, each pair containing the same set of points.
(2) An ideal virtual plane was fitted to each calibrated and uncalibrated segment, and the signed (oriented) distances between the points and the ideal plane were computed.
(3) The deviations of these distances in the calibrated and uncalibrated segments were statistically evaluated.
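Step (2) can be sketched with a least-squares plane fit. The paper does not specify the fitting method; a standard choice (our assumption) is the singular value decomposition of the centred points, whose last right singular vector is the plane normal.

```python
import numpy as np

def plane_deviations(points):
    """Fit a least-squares plane to an (N, 3) array of points and return
    the signed (oriented) point-to-plane distances, as in step (2)."""
    centroid = points.mean(axis=0)
    # the right singular vector of the smallest singular value is the
    # normal of the best-fit plane through the centroid
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return (points - centroid) @ normal
```

The absolute values of these deviations are exactly the quantities summarized by the boxplot in Figure 17.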

The result of the statistical evaluation is visible in Figure 17. The boxplot compares the absolute deviations for the horizontal calibrated and uncalibrated segments, denoted HC and HU, and likewise for the vertical segments, denoted VC and VU.

The boxplot indicates a decrease of deviations in both the horizontal and the vertical point clouds.

Further statistical analysis shows the statistically significant differences between the uncalibrated and calibrated point clouds.

An alternative view of the evaluation is given in Figures 18 and 19. Histograms of the signed distances were calculated for the uncalibrated and calibrated point clouds and approximated with kernel density estimation. The figures show that the vast majority of points in the calibrated point clouds lie in a small range from −20 to 20 mm, whereas the deviations in the uncalibrated point clouds are scattered over a much greater range.
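The smoothing used for Figures 18 and 19 can be sketched with a plain Gaussian kernel density estimate. This is our stand-in for the paper's (unspecified) estimator, and the 5 mm bandwidth is our assumption.

```python
import numpy as np

def kde_of_deviations(deviations, grid, bandwidth=5.0):
    """Gaussian kernel density estimate of signed plane deviations (mm),
    evaluated at each point of `grid`. Bandwidth in mm is an assumed
    value, not taken from the paper."""
    # one Gaussian kernel per sample, summed and normalized to a density
    z = (grid[:, None] - deviations[None, :]) / bandwidth
    k = np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
    return k.sum(axis=1) / (deviations.size * bandwidth)
```

Evaluating this on a grid from −100 to 100 mm for the calibrated and uncalibrated deviations reproduces the kind of comparison shown in the figures: a narrow peak for calibrated data and a widely spread density for uncalibrated data.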

The statistical evaluation proved that the proposed calibration process significantly improves the results.

5.3. Practical Contribution of Calibration

The main reason that forced us to calibrate the LRF was the poor usability of the acquired data for further processing, as illustrated in Figure 2. From the practical point of view we are interested in whether the calibrated point clouds reflect the measured environment without undesirable deviations or deformations.

A detail of a single measurement is depicted in Figure 20. Figure 20(a) shows a detail of the ceiling (a horizontal segment) made up from the uncalibrated point cloud.

The ceiling is split into two parts by a wedge-shaped gap, similar to that seen in Figure 2, whereas Figure 20(b) shows the calibrated point cloud, where the ceiling is formed correctly as a continuous surface without any unwanted twists or gaps.

Similar results were achieved for all 100 point clouds of our data set. Such data can be processed further without problems.

An additional positive practical impact of the calibration on further point cloud processing is illustrated by Figure 21. Figure 21(a) shows the result of segmenting a single uncalibrated point cloud: the segmentation process separated the points into 9 segments (each segment is marked by a different colour). Figure 21(b) shows the result of the segmentation for the same measurement with the calibrated point cloud: the points were separated into only the 5 segments that were expected.

We have demonstrated the benefits of the calibration on the example of point cloud segmentation, where it significantly reduces the number of false segments. Nevertheless, segmentation is not the only process that benefits from the calibration results.

6. Conclusion

This paper has presented a calibration technique suitable for short range LRFs. The purpose of the calibration is to compensate for typical deviations caused by the manufacturing process. For each LRF the calibration process creates a set of characteristic calibration parameters. Applying these parameters to the measured data yields significantly higher accuracy and spatial precision. The efficiency of the calibration process was statistically evaluated: the deviations from the ground truth were significantly lower for flat surfaces, and the application of the calibration in the segmentation process greatly reduced the number of false segments.

The proposed calibration process can be easily integrated into the manufacturing process and thus provide system integrators with the set (see (16)) of calibration parameters needed to achieve higher accuracy. Despite its complexity, the calibration procedure requires only standard equipment such as a stepper motor, a camera, and a laser level. With the correct setup of the measuring equipment, the calibration of a single unit takes only a few minutes.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was partially supported by SGS Grant no. SP2015/141, VŠB-Technical University of Ostrava, Czech Republic.


Copyright © 2016 Petr Olivka et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
