Journal of Robotics
Volume 2011 (2011), Article ID 168481, 11 pages
A Robotic System to Scan and Reproduce Object
Department of Mechanical Engineering for Energetics, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
Received 22 February 2012; Revised 7 December 2011; Accepted 13 December 2011
Academic Editor: Gordon R. Pennock
Copyright © 2011 Cesare Rossi and Sergio Savino. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
An application of a robotic system integrated with a vision system is presented. The robot is a three-axis revolute prototype, while the vision system essentially consists of a laser scanner made up of a camera and a linear laser projector. Both the robot and the vision system were designed and built at DIME (Department of Mechanical Engineering for Energetics), University of Naples Federico II. In the presented application, the laser scanner is installed on the robot arm; the scanner scans a 3D surface, and the data are converted into a cloud of points in the robot's workspace. Then, starting from those points, the end-effector trajectories needed to replicate the scanned surface are calculated, so that the same robot, equipped with a tool, can reproduce the scanned object. The software was also developed at DIME. The adopted tool was a high-speed drill with a spherical milling cutter, installed on the last link of the robot arm, in order to obtain sufficiently accurate surfaces from the data represented by the cloud of points. An algorithm to interpolate the paths and to plan the trajectories was also developed and successfully tested.
1. Introduction
An excellent discussion of the sensors suitable for work-environment recognition is presented in . For building 3D maps, a laser scanner is one of the best choices.
Laser scanning technology allows the digital acquisition of three-dimensional objects as clouds of points. The digital geometric description of the object is discrete: the resolution set for the acquisition defines the density of the cloud of points and thus the detail of the representation. Each point is represented by a position in 3D Cartesian space, in a frame of the scanner; by means of those points it is possible to obtain a 3D model useful for interacting with the work environment. For this reason, the use of laser scanning systems is becoming ever more common in robotic applications.
2D laser scanners are widely used in mobile robotics applications and have been applied to object following and obstacle avoidance , feature extraction , map building [4–7], and self-localization .
Biber et al.  present a method to acquire a realistic 3D model of indoor office environments by means of a mobile robot equipped with a laser range scanner and a panoramic camera.
Borangiu et al.  presented a simulation environment that integrates a short-range 3D laser scanning probe with a robotic system made up of a vertical articulated robot arm with 6 d.o.f. and a rotary table. Essentially,  studies the feasibility of a system based on a robotic arm moving around the object to be scanned, using computer-generated adaptive scanning paths; the latter are computed in real time while the scanner is recognising the object features.
In , Larsson and Kjellander show how a standard industrial robot, equipped with a laser profile scanner, can be used as a measuring device that is free to move along arbitrary paths and, hence, can make measurements from suitable directions.
All these technologies have eased the development of rapid prototyping and of the robotic reproduction of 3D solids; this research area has been developing for several years, as evidenced by works such as .
In this paper, we describe an application of an optical laser scanner integrated on a robot arm. The main aim of the research is to study the possibility of using a robot to increase the performance of a laser scanner. At the same time, the laser scanner data are used as input for a new robot trajectory planning technique that also allows the scanned object to be reproduced.
The developed device permits surfaces to be digitized and then reproduced by means of mechanical tooling. Such a device makes shape acquisition flexible and, at the same time, repeatable, because the camera is given an accurately controlled motion by the robot. Unlike static acquisition equipment, this equipment can move the camera system around the object to be analyzed without introducing data-matching problems. In this application, the vision system becomes an integrated device with the role of position transducer and of shape and volume recognition; in this way, it is possible to increase the robot performance by inserting the vision system into the robot control loop.
A revolute robot with three axes, designed and developed in the laboratories of DIME, was used. The robot arm and its characteristics have already been described; see, for example, [13–16]. The robot control system was modified, both in hardware and in software, to be able to assign the appropriate laws of motion to the joints.
Purpose-developed software acquires, in real time, the cloud of points related to the examined surface, while the vision system, moved by the robot in the workspace, observes the same surface from different angles in order to capture its morphological characteristics.
It is possible to run acquisition cycles along established paths or to move the robot directly so as to capture the surface to be analyzed in detail. The information recorded by the acquisition system is used to plan a robot trajectory that can replicate the surface, possibly with a scale factor. The proposed trajectory planning algorithm replicates the surface through subsequent carving stages. The trajectory is planned by giving the real geometric path that the tool must follow in terms of position, velocity, and acceleration. This type of planning has the advantage of guaranteeing paths that are more faithful to the real profile to be reproduced.
Hence, the main aim of the research is to evaluate the techniques and algorithms that we propose in order to:
(1) use a robot to increase the performance of a laser scanner;
(2) use the laser scanner data as input for robot trajectory planning that permits the scanned object to be reproduced.
2. The Laser Scanning System
The laser scanner system for the acquisition of forms consists of a webcam and a laser emitter fixed on the last link of the robot (Figures 1 and 2). The laser module emits light at a fixed wavelength of 635 nm; an optical bandpass filter centered on the laser wavelength was also used, so the vision system can better observe the laser beam, making the edge detection operation simpler and more robust.
Since the laser scanner is linked to the robot, it is possible to use the coordinates of the robot joints to determine the position and orientation of the scan window in the robot reference system, and thus in the workspace. To achieve this, it is necessary to know the relationship between the camera frame and the frame of the robot's last link (see Figure 1). Then, by means of the Denavit-Hartenberg matrix [DH], it is possible to determine, in the robot base frame, the coordinates of the points obtained by triangulation between the camera and the laser module.
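This chain of transformations can be sketched as follows. The function names, the DH parameter layout, and the assumption of three revolute joints are illustrative (the paper does not list the robot's actual DH table); the calibrated camera-to-link transform is assumed to be already available.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def camera_point_to_base(p_cam, joint_angles, dh_params, T_link_cam):
    """Map a point from the camera frame to the robot base frame.

    joint_angles : encoder readings (rad) for the revolute joints
    dh_params    : list of (d, a, alpha) per link (known from the design)
    T_link_cam   : 4x4 camera -> last-link transform found by calibration
    """
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_matrix(theta, d, a, alpha)
    p_h = np.append(p_cam, 1.0)           # homogeneous coordinates
    return (T @ T_link_cam @ p_h)[:3]
```

Each triangulated point, expressed in the camera frame, is pushed through the forward kinematics evaluated at the encoder readings taken at the moment of acquisition.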
The calibration procedure of the robot-laser scanner system essentially consists in determining the relative positions between the laser module and the camera, and between the camera and the robot. The relationship between the laser module and the camera is obtained through a procedure in which the robot is kept fixed in front of a moving target (Figure 2) . One set of images is acquired for different positions of the target. At each step, the coordinates in the image plane of the points of the laser line projected on the target are evaluated. By means of an optimization procedure, all the parameters of the transformation that describes the relationship between laser and camera can be identified.
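The core of this optimization can be illustrated with a minimal sketch: once the laser-line points observed on the target at the different poses are expressed as 3D points in the camera frame, they all lie on the laser plane, which can be fitted in a least-squares sense. The plane parameterization n · p = c is an assumption of this sketch, not necessarily the parameterization used by the authors.

```python
import numpy as np

def fit_laser_plane(points_cam):
    """Least-squares fit of the laser plane n . p = c to 3D points in the
    camera frame, gathered over all poses of the calibration target.

    Returns the unit normal n and the offset c; the normal is taken as the
    direction of least variance of the centered points (smallest singular
    vector of the SVD)."""
    centroid = points_cam.mean(axis=0)
    _, _, vt = np.linalg.svd(points_cam - centroid)
    n = vt[-1]                    # direction of least variance = plane normal
    return n, float(n @ centroid)
```

With the plane identified, any detected laser pixel can later be triangulated by intersecting its viewing ray with this plane.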
The relative position between the camera and the reference frame of the robot's last link is obtained by means of a procedure in which the robot is moved in front of a known grid. As the camera moves through various configurations with a known law, defined by the kinematics of the robot, and as the position of the grid is known in the robot base frame, it is possible to determine the camera position in the frame of the robot's last link.
After the calibration procedure, all the information necessary to locate the vision system in the workspace of the robot is available. In this way, it is possible to use the positioning information provided by the robot, with its repeatability and precision, in the triangulation algorithm of the laser scanning system; thus it is possible to rebuild the surfaces in real time, acquiring the angular positions of the links from the encoders and extracting the laser line from the rest of the image .
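The triangulation step itself can be sketched as a ray-plane intersection: the viewing ray through a laser pixel is intersected with the calibrated laser plane, yielding the 3D point in the camera frame (which the forward kinematics then maps to the base frame). The intrinsic matrix K and the plane parameterization are assumptions of this sketch.

```python
import numpy as np

def triangulate_pixel(pixel, K, plane_n, plane_c):
    """Back-project an image pixel onto the calibrated laser plane.

    pixel            : (u, v) coordinates of a laser-line pixel
    K                : 3x3 camera intrinsic matrix
    plane_n, plane_c : laser plane n . p = c in the camera frame
    Returns the 3D point, in the camera frame, where the viewing ray
    through the pixel meets the laser plane.
    """
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = plane_c / (plane_n @ ray)   # the ray is p = t * ray; solve n.(t*ray) = c
    return t * ray
```

Running this for every laser pixel of every frame, at the corresponding encoder readings, produces the cloud of points in the robot workspace.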
3. Surface Acquisition
The laser scanner captures the contours of the object surface. During a scanning procedure, the laser scanner is moved by the robot to capture images of the profile of the object from different positions and orientations, depending on the shape of the object. Software with a GUI was developed that allows the user to operate the system easily (Figure 3).
By means of this GUI, it is possible to load calibration data and to set camera parameters, such as brightness and saturation, as well as the threshold value of red intensity used to extract the laser line from the rest of the image. It is possible to monitor the robot joint variables, the original image, and the elaboration of the laser line. A scan path can be scheduled for the robot, and a 3D surface can be captured in real time with the “Continuous 3D generation” option. Another routine, “3D generation,” elaborates the acquired and saved data with a triangulation algorithm.
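The red-intensity thresholding described above can be sketched as follows: for each image column, pixels whose red channel exceeds the threshold are laser candidates, and the line is localized at their intensity-weighted mean row (subpixel localization is an assumption of this sketch; the paper only states that a threshold is used).

```python
import numpy as np

def extract_laser_line(image_rgb, red_threshold=180):
    """Extract the laser line from an RGB image as one (row, col) point
    per column.

    For every image column, pixels whose red intensity exceeds the
    threshold are laser candidates; the line is taken at their
    intensity-weighted mean row. Columns with no candidates are skipped.
    """
    red = image_rgb[:, :, 0].astype(float)
    points = []
    for col in range(red.shape[1]):
        column = red[:, col]
        mask = column > red_threshold
        if mask.any():
            rows = np.nonzero(mask)[0]
            weights = column[rows]
            points.append((rows @ weights / weights.sum(), col))
    return np.array(points)
```

The bandpass filter described in Section 2 makes this simple per-column threshold robust, since most non-laser light is already suppressed.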
3.1. 3D Reconstruction Results
The results of the elaboration are saved as a cloud of points that can be analyzed with standard CAD software.
In Figure 4, it is possible to observe the acquisition operations of two test specimens.
With the CATIA software, it was possible to construct the surfaces of the two objects, obtaining the CAD models; this step of the 3D reconstruction method can be seen as a real reverse engineering process. The elaboration of the cloud of points allows the removal of imperfections that could make the task of surface rebuilding difficult.
4. Surface Reproduction
Starting from the surface data, a procedure has been developed to replicate the surface itself from a block of raw material, by using the robot.
The available points are obtained by the 3D reconstruction system using the robot-laser scanner ; these points must first be processed to remove any reconstruction imperfections; then the trajectory to move the robot is generated; finally, the machining instructions are assigned to the robot.
The trajectory, which must be assigned to the robot, can be planned both to move it on the object surface and to replicate the acquired surface by machining a block with a tool. Naturally, it is also possible to obtain the mold needed to replicate the object. The data can also be used to reproduce the surface with a scale factor.
4.1. Path Planning
In the following, the steps to obtain a trajectory to be assigned to the robot starting from the reconstructed points are described.
The first step, starting from the cloud of points, is to eliminate any imperfections, identifying the geometry of the path that the tool should follow.
Again, for the whole procedure of obtaining the trajectory, software with a GUI was developed (Figure 6).
Starting from the cloud of points resulting from the laser scanner acquisition (see Figure 7(a)), it is possible to reduce the acquired points and to eliminate those due to reconstruction errors. In this way, a new cloud of points is obtained (Figure 7(b)).
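One common way to eliminate such spurious points, offered here as a sketch (the paper does not specify its filtering criterion), is statistical outlier removal: a point is discarded when its mean distance to its nearest neighbours is far above the cloud-wide average.

```python
import numpy as np

def remove_outliers(cloud, k=4, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the cloud-wide average.

    Brute-force O(N^2) distances; adequate for the modest clouds produced
    by a single scan."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip column 0 (self-distance)
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return cloud[keep]
```

Isolated reconstruction artifacts sit far from the surface and are removed, while densely sampled surface points survive.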
The second step consists in selecting the portion of the surface that is to be realized; for example, in Figure 8 a portion of the head is shown.
With another software tool (Figure 9), it is possible to move and orient the cloud of points in the robot workspace; it is also possible to scale it in order to reproduce a scaled surface.
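The placement operation amounts to a similarity transform of the cloud. The sketch below assumes a rotation about the vertical axis only, which fits a surface resting on the work table; the parameter names are illustrative.

```python
import numpy as np

def place_in_workspace(cloud, scale=1.0, yaw=0.0, translation=(0.0, 0.0, 0.0)):
    """Scale the cloud, rotate it about the z axis by `yaw` (rad), and
    translate it to the desired position in the robot workspace."""
    c, s = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return scale * cloud @ Rz.T + np.asarray(translation)
```

Setting `scale` different from 1 reproduces the surface at a different size, as mentioned in Section 4.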
Starting from the cloud of points and its position in the robot workspace, it is necessary to implement a processing cycle.
The path planning algorithm involves the construction of the surface by steps. The trajectory is planned by assigning the real geometric path that the tool must follow, in terms of position, velocity, and acceleration .
For example, consider an initial volume with the shape of a box (which could be the piece of material on which the surface must be reproduced), and fix a frame on it (Figure 10).
If $P_B$ is the set of points of the initial box, these points must be internal to the robot workspace.

If $P_S = \{(x_i, y_i, z_i)\}$ is the set of the coordinates of the points that belong to the surface, $P_S$ must be oriented so that

$$0 \le x_i \le D_x, \quad 0 \le y_i \le D_y, \quad 0 \le z_i \le D_z. \tag{1}$$

In (1), $D_x$, $D_y$, and $D_z$ are the sizes of the initial volume along the respective axes.

The level curves of $P_S$, lying in planes parallel to the $x$-$y$ plane, can be defined with a distance $\Delta z$ between them that depends on the geometry of the tool. If $N_c$ is the number of curves, for each specific height $z_k$ we have

$$C_k = \{(x_i, y_i, z_i) \in P_S : z_k - \Delta z < z_i \le z_k\}, \quad k = 1, \dots, N_c. \tag{2}$$

The depth $\delta$ of each cut along the $z$ axis is chosen, and the number $n$ of carving passes is equal to the smallest integer greater than the ratio

$$n > \frac{\Delta z}{\delta}; \tag{3}$$

then, for each level curve, the points that define the path that must be assigned to the tool are obtained by lowering the curve points in successive passes, without cutting below the surface:

$$P_{k,j} = \{(x_i, y_i, \max(z_i,\, z_k - j\,\delta)) : (x_i, y_i, z_i) \in C_k\}, \quad j = 1, \dots, n. \tag{4}$$

These points are sorted from the maximum value to the minimum along one axis, alternating an increasing-decreasing ordering along the other, as shown in Figure 11.

A point that has the $x$ and $y$ coordinates of the last point of $C_k$ and a $z$ coordinate above the initial volume is added to the set of points that describes the path of each level curve $C_k$, so that the tool can be lifted over the initial volume and moved to the next curve $C_{k+1}$.

The path $P$ to be assigned to the tool to replicate the surface is represented by the union of the level curves $C_k$:

$$P = \bigcup_{k=1}^{N_c} C_k. \tag{5}$$
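The slicing and ordering procedure above can be sketched in code. The band thickness, the boustrophedon axis choice (rows of constant y, alternating x direction), and all function names are assumptions of this sketch; the paper specifies only that the points are sorted alternately and that a lift point is appended between curves.

```python
import math
import numpy as np

def level_curves(cloud, delta_z):
    """Group surface points into level curves: z-bands of thickness
    delta_z, parallel to the x-y plane (delta_z depends on the tool)."""
    z_top, z_bot = cloud[:, 2].max(), cloud[:, 2].min()
    n_curves = math.ceil((z_top - z_bot) / delta_z)
    curves = []
    for k in range(n_curves):
        hi, lo = z_top - k * delta_z, z_top - (k + 1) * delta_z
        band = cloud[(cloud[:, 2] <= hi) & (cloud[:, 2] > lo - 1e-12)]
        if band.size:
            curves.append(band)
    return curves

def zigzag(curve, lift_z):
    """Order one curve's points row by row (max to min y), alternating the
    x direction, then append a lift point so the tool clears the stock."""
    ordered = []
    for i, y in enumerate(np.unique(curve[:, 1])[::-1]):
        row = curve[curve[:, 1] == y]
        row = row[np.argsort(row[:, 0])]          # increasing x
        ordered.append(row if i % 2 == 0 else row[::-1])
    path = np.vstack(ordered)
    lift = [path[-1, 0], path[-1, 1], lift_z]     # clear the stock
    return np.vstack([path, [lift]])

def tool_path(cloud, delta_z, lift_z):
    """Union of the ordered level curves: the complete carving path."""
    return np.vstack([zigzag(c, lift_z) for c in level_curves(cloud, delta_z)])
```

The resulting array of points is then interpolated by the spline described next, before being converted into joint-space commands.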
The next step is to define a parametric curve that interpolates the path points. A parametric curve in 3D space is a continuous map $\mathbf{c}$ from an interval $I \subset \mathbb{R}$ to $\mathbb{R}^3$:

$$\mathbf{c}(t) = \big(x(t),\, y(t),\, z(t)\big), \quad t \in I. \tag{6}$$

In order to define the different properties associated with the curve, at least class $C^1$ regularity is required (i.e., the regularity of each component). A curve is regular at a point if the vector $\mathbf{c}'(t)$ is not a null vector. Equation (7) gives the tangent direction to the curve at each of its points (the prime indicates differentiation with respect to $t$):

$$\mathbf{c}'(t) = \big(x'(t),\, y'(t),\, z'(t)\big). \tag{7}$$

The loss of regularity at a point of the curve is associated with the existence of singular points.

In order to obtain a parametric curve, a cubic spline interpolating function is used. If $p_0, p_1, \dots, p_m$ are the path points of each coordinate of $P$, an interpolating function must be defined for each coordinate.

As an example, the cubic spline of a function $f$ on the nodes $t_0 < t_1 < \dots < t_m$ is a function $s$ such that each restriction of $s$ to $[t_j, t_{j+1}]$ is a polynomial of degree $\le 3$, $s$ is of class $C^2$, and

$$s(t_j) = f(t_j) \quad \text{for } j = 0, \dots, m. \tag{8}$$

In this way, we can obtain a parametric curve in 3D space:

$$\mathbf{c}(t) = \big(s_x(t),\, s_y(t),\, s_z(t)\big). \tag{9}$$

For each point of the function (9), it is possible to define the tangent versor:

$$\hat{\mathbf{t}}(t) = \frac{\mathbf{c}'(t)}{\|\mathbf{c}'(t)\|}. \tag{10}$$
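The component-wise spline interpolation and the tangent versor can be sketched as follows, using a natural cubic spline (the natural end conditions are an assumption of this sketch; the paper does not state which boundary conditions it uses).

```python
import numpy as np

def natural_cubic_spline(t, y):
    """Second derivatives M of the natural cubic spline through (t_j, y_j)."""
    n = len(t) - 1
    h = np.diff(t)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0          # natural end conditions: M0 = Mn = 0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    return np.linalg.solve(A, b)

def spline_eval(t, y, M, tq):
    """Evaluate the spline and its first derivative at tq."""
    i = max(min(np.searchsorted(t, tq, side='right') - 1, len(t) - 2), 0)
    h = t[i + 1] - t[i]
    a, b = t[i + 1] - tq, tq - t[i]
    s = (M[i] * a**3 + M[i + 1] * b**3) / (6 * h) \
        + (y[i] / h - M[i] * h / 6) * a + (y[i + 1] / h - M[i + 1] * h / 6) * b
    ds = (-M[i] * a**2 + M[i + 1] * b**2) / (2 * h) \
        - (y[i] / h - M[i] * h / 6) + (y[i + 1] / h - M[i + 1] * h / 6)
    return s, ds

def tangent_versor(t, xyz, M_xyz, tq):
    """Unit tangent to the curve c(t) = (x(t), y(t), z(t)) at tq."""
    d = np.array([spline_eval(t, xyz[:, j], M_xyz[j], tq)[1] for j in range(3)])
    return d / np.linalg.norm(d)
```

One spline is built per Cartesian coordinate, and the normalized derivative gives the tangent versor of equation (10) at any parameter value.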
4.2. The Trajectory Planning
Once the path is defined, it is possible to plan the trajectory. The trajectory planning essentially consists in planning the movement of each of the robot links while the end effector moves from the initial position to the final one.
The path is the locus of points in the joint space that the robot must follow; the path therefore involves only a geometric description. The trajectory, instead, indicates a path together with its law of motion, that is to say, the time dependence of positions, velocities, and accelerations.
Hence, an algorithm to plan the trajectories must take into account the definition of the path, the constraints of the path, and the constraints due to the kinematic and dynamic limits of the manipulator. As output, the algorithm gives the trajectories in terms of joint positions, velocities, and accelerations. The definition of the path can be done either in the joint space or in the workspace.
An algorithm was conceived and developed based on the following concept: the path is assigned not simply by a number of points, but by a number of points together with the corresponding tangent vectors to the path; the tangent at each of the points is given by the velocities of the joints [19, 20].
The proposed algorithm can be summarized in the following steps.
(1) The speed module along the geometric path is set equal to the desired speed of the end effector, that is, the speed of the tool that must replicate the surface. If $v$ is the speed module, the speed components at each point of the curve are obtained by projecting $v$ along the tangent versor.
(2) The speed direction is that of the tangent line to the trajectory at each of its given points. Figure 14 shows an example of a trajectory obtained by imposing the speed vector on the points belonging to it.
(3) It is now possible, by means of inverse kinematics, to compute the velocities of each of the joints. If each joint has the computed velocity at each of the given points of the trajectory, the end effector will reach and leave each path point with the planned tangent.
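Step (3) can be sketched for a planar 2R arm (the paper's robot is a three-axis prototype; the 2R arm, link lengths, and function names below are purely illustrative): the Cartesian velocity, built as the planned speed times the unit tangent, is mapped to joint rates through the inverse of the geometric Jacobian.

```python
import numpy as np

def jacobian_2r(q, l1, l2):
    """Geometric Jacobian of a planar 2R arm: end-effector (x, y) velocity
    as a function of the two joint rates."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def joint_velocities(q, tangent, speed, l1, l2):
    """Joint rates giving the end effector the planned speed along the
    unit tangent of the path (the inverse-kinematics step (3))."""
    v = speed * np.asarray(tangent) / np.linalg.norm(tangent)
    return np.linalg.solve(jacobian_2r(q, l1, l2), v)
```

Evaluating this at every path point yields, for each joint, the position-velocity pairs that are then handed to the control system.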
Thus, a set of position and speed values is obtained for each joint of the manipulator and given to the control system. Each robot actuator follows a law of motion that varies instant by instant, and the end effector moves through each trajectory point with the assigned speed in module and direction.
In this way, it is possible to obtain trajectories that are more precise than those obtained by methods in which the trajectory is made of a set of points linked by joint displacements that follow a fixed law of motion .
As an example of the proposed trajectory planning, Figure 15 shows a comparison between a sinusoidal trajectory obtained by the described method and a trajectory obtained by point-to-point control, giving the coordinates of 9 points in the workspace.
5. Experimental Results
Two tests are presented to show the results of the reconstruction technique. The first one is related to industrial applications and concerns an axial compressor blade; the second one is an example of reconstruction in the artistic field and concerns a mock-up head. In Figures 16 and 17, the two test objects and the parts of their surfaces replicated at different scales are shown.
Each of the two surfaces was obtained with two different types of trajectory: a rough one and a finishing one; the latter was obtained from the first by interpolating the data and giving a lower feed speed to the tool. Of course, it is possible to work different materials by choosing the most adequate devices to machine them.
6. Conclusions
A 3D reconstruction process was studied and tested with which, by means of a known manipulator, it is possible to obtain satisfactory results, even without adopting a high-definition sensor. Laser triangulation was adopted as the shape acquisition technique, so that the reconstruction is performed in real time, with accuracy, robustness, and repeatability.
A calibration technique was also tuned ; it permits the calibration of the whole robot-laser scanner system and, hence, reduces the errors due to the relative position between the devices.
Finally, a new trajectory planning technique was also studied and applied in order to obtain a more accurate reproduction of the scanned surfaces of the objects. Since the control system was assembled in our laboratory, it was possible to implement software for the reproduction of the acquired surface by fixing a high-speed drill with a spherical milling cutter on the robot.
The realized system permits objects to be scanned and reproduced at any scale with extreme flexibility. It is also possible to acquire the machined shape in order to analyze the quality of the reproduction.
This system can be used in many applications where a 3D shape must be acquired without contact and then reproduced to make a copy. It is possible not only to reproduce the acquired surface but also to make the die needed to obtain a number of copies. A development in progress consists in installing a wrist on the robot; by adding further degrees of freedom, this will permit more complex surfaces to be analyzed and reproduced.
- M. Hebert, “Active and passive range sensing for robotics,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 102–110, San Francisco, Calif, USA, April 2000.
- J. L. Martinez, A. Pozo-Ruz, S. Pedraza, and R. Fernandez, “Object following and obstacle avoidance using a laser scanner in the outdoor mobile robot Auriga-α,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 204–209, October 1998.
- S. I. Roumeliotis and G. A. Bekey, “SEGMENTS: A layered, dual-Kalman filter algorithm for indoor feature extraction,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 454–461, Takamatsu, Japan, November 2000.
- Y. D. Kwon and J. S. Lee, “Stochastic map building method for mobile robot using 2-D laser range finder,” Autonomous Robots, vol. 7, no. 2, pp. 187–200, 1999.
- A. Scott, L. E. Parker, and C. Touzet, “Quantitative and qualitative comparison of three laser-range mapping algorithms using two types of laser scanner data,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 1422–1427, October 2000.
- J. Guivant, E. Nebot, and S. Baiker, “High accuracy navigation using laser range sensors in outdoor applications,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 3817–3822, April 2000.
- L. A. Albuquerque and J. M. S. T. Motta, “Implementation of 3D shape reconstruction from range images for object digital modeling,” ABCM Symposium Series in Mechatronics, vol. 2, pp. 81–88, 2006.
- L. Zhang and B. K. Ghosh, “Line segment based map building and localization using 2D laser rangefinder,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '00), pp. 2538–2543, April 2000.
- P. Biber, H. Andreasson, T. Duckett, and A. Schilling, “3D modeling of indoor environments by a mobile robot with a laser scanner and panoramic camera,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, October 2004.
- T. Borangiu, A. Dogar, and A. Dumitrache, “Modeling and simulation of short range 3D triangulation-based laser scanning system,” International Journal of Computers, Communications & Control, vol. 3, supplement, pp. 190–195, 2008.
- S. Larsson and J. A. P. Kjellander, “Motion control and data capturing for laser scanning with an industrial robot,” Robotics and Autonomous Systems, vol. 54, no. 6, pp. 453–460, 2006.
- R. A. Jarvis and Y. L. Chiu, “Robotic replication of 3D solids,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS '96), pp. 89–95, November 1996.
- R. Brancati, C. Rossi, and S. Scocca, “An experimental planar manipulator controlled in the joint space,” in Proceedings of the 4th IEEE International Conference on Intelligent Engineering Systems (INES '00), pp. 125–128, Portoroz, Slovenia, September 2000.
- S. Pagano, C. Rossi, and F. Timpone, “A low cost 5 axes revolute industrial robot design,” in Proceedings of the Workshop On Robotics in Alpe-Adria-Danube (RAAD '02), pp. 117–122, Balatonfüred, Hungary, June 2002.
- R. Brancati, C. Rossi, S. Savino, and G. Vollono, “A robot prototype for advanced didatic,” in Proceedings of the 13th International Workshop On Robotics in Alpe-Adria-Danube (RAAD '04), pp. 293–298, Brno, Czech Republic, June 2004.
- R. Brancati, C. Rossi, and S. Savino, “A method for trajectory planning in the joint space,” in Proceedings of the 14th International Workshop on Robotics in Alpe-Adria-Danube Region, pp. 81–85, Bucharest, Romania, May 2005.
- V. Niola, C. Rossi, S. Savino, and S. Strano, “A method for the calibration of a 3-D laser scanner,” in Proceedings of the 19th International Conference on Flexible Automation and Intelligent Manufacturing, 2009.
- V. Niola, C. Rossi, S. Savino, and S. Strano, “A new real time shape acquisition system with a laser scanner: first test results,” in Proceedings of the 19th International Conference on Flexible Automation and Intelligent Manufacturing, 2009.
- V. Niola, C. Rossi, S. Savino, and S. Strano, “Robot trajectory planning by points and tangents,” in Proceedings of the 10th WSEAS International Conference On Robotics, Control And Manufacturing Technology, Hangzhou, China, April 2010.
- C. Rossi, S. Savino, and S. Strano, “3D object reconstruction using a robot arm,” in Proceedings of the 2nd European Conference on Mechanism Science, Cassino, Italy, 2008.
- C. Rossi, Lectures notes in Mechanics of Robot, Edizioni Scientifiche ed Artistiche, Naples, Italy, 2008, ISBN 978-88-95430-118-8, A.Y. 2008-2009.