Journal of Robotics
Volume 2019, Article ID 5304267, 8 pages
https://doi.org/10.1155/2019/5304267
Research Article

Automated Scanning Techniques Using UR5

1Department of Industrial, Systems and Manufacturing Engineering, Wichita State University, Wichita, KS, USA
2Department of Mechanical Engineering, Wichita State University, Wichita, KS, USA

Correspondence should be addressed to Yimesker Yihun; yimesker.yihun@wichita.edu

Received 14 August 2018; Accepted 22 November 2018; Published 3 February 2019

Academic Editor: Raffaele Di Gregorio

Copyright © 2019 Homar Lopez-Hawa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This study seeks to advance technologies for integrating low-cost collaborative robots into scanning operations where moderate accuracy is sufficient. Part inspection is a nearly universal aspect of manufacturing that traditionally requires human observation. Advanced metrology techniques, such as scanning, offer greater inspection capability but still require a human operator and significant capital investment. Using off-the-shelf line scanners in conjunction with small collaborative robots can fully automate the inspection process while minimizing cost. This project investigates the feasibility of using a UR5 robot with a Keyence line scanner for scanning inspection in an industrial setting. Data from the line scanner are gathered along with the position and orientation of the robot's end-effector; the data are then combined and analyzed in MATLAB to generate the surface geometry. A user interface allows viewing of the specific points gathered, expedites product inspection during manufacturing, and involves humans in higher skill-based decision-making tasks. A professional-grade scan of the test part is used for comparison with the experimentally gathered data. Feasibility is assessed on cost, effectiveness, ease of programming and operation, and development difficulty. Preliminary results show that the UR5 and line scanner provide an inexpensive, easily programmed, and automated solution for line inspection; however, effectiveness and development difficulty may pose challenges that require future research.

1. Introduction

Robotic line scanners are used today in a wide variety of applications and industries. One such area is manufacturing, where inspection systems are needed to examine component surfaces as manufactured parts become more complex and higher quality is demanded. Coordinate Measuring Machines (CMM) are normally used for such inspection, with the machine making physical contact with each point on the surface (Figure 1(a)) [1]. The major drawback of such a contact-based method is the slow rate at which it acquires data from a surface. As a solution, laser line scanning technology was proposed (Figure 1(c)). The main advantage of laser line scanners over the CMM is that they are a non-contact scanning method that can obtain large amounts of data in a shorter time using a technique known as “high resolution of digitization and inspection” [2]. For this reason, visual scanning technologies have grown in recent years [3, 4]. Attention has particularly been given to automated laser line scanning [2, 5] to mitigate errors caused by manual scanning and to increase the efficiency of the scanned data.

Figure 1: (a) Manual inspection. (b) Manual probing. (c) 2D automated scanning. (d) 3D automated scanning.

Further studies have enhanced robotic scanning technologies in Nondestructive Testing (NDT) applications to examine microdefects in manufactured parts. Robotic NDT uses a 6-degree-of-freedom industrial robot to position the probe normal to the test surface. There are two types of NDT robotic scanning methods: Ultrasonic Probe Grabbed by Robot (UPGR), similar to Figure 1(d), and Test Object Grabbed by Robot (TOGR). The first method had drawbacks because of the difficulty of positioning the probe around complex shapes, and for this reason the TOGR method was tested. Unlike the UPGR method, where the end effector grabs the NDT probe, the TOGR method grabs the test specimen and manipulates it around a stationary probe. The key advantage of this method is that, unlike other methods requiring several joint motions, it may need only one joint motion, the end effector's, to manipulate the specimen around the probe [5]. TOGR has enabled robots to perform high-speed automatic inspection of complex structures, particularly ones with curved surface profiles such as turbine blades and milling tools.

Another novel area where laser line scanning has gained attention is underwater 3D reconstruction of objects [6]. The conventional method of detecting objects underwater uses acoustic technology; however, it has relatively low precision for reconstruction. To overcome this, laser line scanning has attracted attention for its high precision and antijamming capability. Compared with traditional light sources, laser light can reduce backscatter and forward scatter underwater in environments of low light and high turbidity. Experiments for this purpose have been conducted; in one case a laser line scanner scanned a keyboard first on land and then in a sink filled with water [7]. The experiment showed promising results but still had issues with calibration accuracy and laser extraction, and therefore remains at an experimental stage. An experiment of this kind could be integrated into our project and the results compared. Line scanning is also used in space applications, particularly continuous scanning of low earth orbit (LEO). CMOS-TDI line sensors, a sensor type used in space applications, convert an incident light signal into an electrical signal [8]. Concisely, the scanner has several pixels, each collecting photons from the same region at different times, and the time differences are used to produce a better scan of low-earth-orbit images [8]. The field of archeology uses scanning technologies to study artefacts [7, 9]. The most common technique is hyperspectral line scanning, which exploits the spectral reflectance of the artefacts: since each material has its own spectral signature, data from hyperspectral scans reveal what material the artefacts are made from, along with many other material properties.
From industrial applications on land, to underwater, to outer space, line scanning has many benefits, as the above examples show.

In robotic scanning of objects, proper sensing of the object is another important feature that should not be ignored. As discussed earlier, hyperspectral line scanning is one sensing technology used to study artefacts in archeology. Hyperspectral imaging uses the reflectance of a material surface in many spectral bands within the visible and near-infrared spectrum, and this reflectance information helps determine the materials the surface is made from [9, 10]. Ultrasonic sensors are another type used in robotic scanning; ultrasonic testing detects microdefects in parts. The ultrasonic time-domain reflectometry (UTDR) technique is employed in NDT, in which the UT probe scans the sample and detects a flaw echo wave if there is a microdefect in the material; a sample with no defects gives only a surface echo wave and a backwall echo wave [5]. Thermographic imaging is a sensing technology used on the automotive final assembly line to reduce the cost of manual labor [11]. On the final assembly line, a water-leak test investigates moisture leaks in the vehicle interior; thermographic images can detect interior moisture for water-drop diameters as small as 1 mm [12]. Because of its relatively high accuracy, thermographic imaging is a useful sensing technology. Line laser scanners have become more popular as they provide faster output than a conventional CMM [13]. This type of sensor consists of a laser source and an optical sensor.

Complementary metal oxide semiconductor (CMOS) and charge-coupled device (CCD) sensors are the most popular imaging sensors. Thanks to its small size, low cost, and good performance, CMOS is widely accepted as the preferred technology [14]. However, this type of sensor has drawbacks: systematic error occurs due to surface reflection, lens defects, the laser source, and nonuniform environmental illumination, which increases uncertainty [13]. To reduce or correct the systematic error of laser scanning, a global error model is being studied and developed [15]. This global model takes into consideration errors that affect the mechanical parts of the system; with it, systematic error has been reduced by half compared with reference values.

In many different areas, the need for a digital reproduction of a real-world object, in order to analyze that object, is a topic of interest. In the medical world, an accurate model of the residual shape and volume of an amputee's limb leads to a better product when a prosthesis is designed [16]. For anthropologists, too, this is an appealing technology because of its ability to digitize cultural heritage artifacts [17]. Even with the impressive technological evolution of recent years, 3D scanning remains a rather expensive and not commonly implemented process. The facts that regular 3D scanning operations require significant human intervention and that setup can be time consuming reduce the appeal of this technology [18]. The emergence of Industry 4.0 promises high work productivity through successful implementation of human-robot cooperative systems, in which a human interacts and collaborates with robots. However, such a human-robot work environment at industrial scale presents unique challenges, requiring human workers to adapt to changing and evolving technologies with minimal retraining and new-skill effort. Also, moving from traditional industrial robots to collaborative robots typically limits the robot's speed to 250 mm/s to prevent a human from being injured before a collision is detected [19]. This poses a large limitation on the production rate, as cycle times are extremely important in manufacturing processes, impacting efficiency, quality, and overall productivity. For some industrial tasks, such as lifting and fitting in an assembly operation and quality inspection, a human can make important decisions through a simplified GUI-based HRI platform while running the robot at higher speed and maintaining a higher production rate.
This study investigates the feasibility of utilizing a UR5 robot with a Keyence line scanner for scanning inspection in an industrial setting. The ultimate goal is to develop a fully automated, low-cost 3D scanner that can scan objects in a short time with little or no human assistance.

2. Methodology

In this study, an affordable and easily operated robotic system that can perform moderately accurate scanning inspection is developed using a UR5 and a line scanner. The UR5 position data (the end-effector position and its roll, pitch, and yaw orientation) are integrated with the line scanner data, and the robot's forward kinematics are employed to construct the point cloud of the scanned object. The data from the UR5 are sent to the computer via an Ethernet socket connection. At the same time, the line scanner sends the raw scan data to the scanner controller box, which processes it (Figure 2). A USB cable sends the processed scan data to the computer interface. The inputs from both the UR5 and the scanner must then be converted to the global reference frame to generate the cloud-point data for the scanned object.

Figure 2: Flowchart for scanning process.
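The frame conversion described above, pairing each scanner profile with the end-effector pose to build cloud points in the robot's base frame, can be sketched as follows. This is an illustrative Python/NumPy version (the study's implementation is in MATLAB); the Z-Y-X roll-pitch-yaw convention and the assumption that the scanner measures points in its local x-z plane are modeling assumptions, not details taken from the paper.

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def profile_to_global(profile_xz, position, rpy):
    """Map a 2D scanner profile (points assumed to lie in the scanner's
    x-z plane) into the robot base frame using the end-effector pose."""
    R = rpy_to_matrix(*rpy)
    # Lift 2D profile points (x, z) into 3D scanner-frame coordinates (x, 0, z).
    pts = np.column_stack([profile_xz[:, 0],
                           np.zeros(len(profile_xz)),
                           profile_xz[:, 1]])
    # Rotate into the base frame, then translate by the end-effector position.
    return pts @ R.T + np.asarray(position)
```

In practice, one such transform is applied per scan line, using the pose recorded at the moment that profile was captured, and the results are concatenated into the full point cloud.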

3. Data Generation and Communication

3.1. Scanning Operation

To conduct this experiment, a line scanner is mounted onto a UR5 robot. A converter plate is designed to hold the scanner firmly in place during the scanning process. Two plates were designed in CAD software and 3D-printed so the line scanner can easily be attached to the UR5, as shown in Figure 3. An object to be scanned is located on the production line, within the workspace of the robot. A path is then programmed for the robot to follow so that it can access the features of the reference piece. This also allows the outcome of the process to be predicted to some degree, which makes it easier to troubleshoot if something goes wrong or the data are not as expected.

Figure 3: Assembly of the plate, line scanner, and UR5 end-effector with the object.
3.2. Data Transmission/Communication

After the UR5 follows the predetermined path and the line scanner gathers the data, the information from the robot and the scanner must be combined. To do so, communication has to be established between the UR5, the line scanner, and the computer. The UR5 has Ethernet capability and can send either joint-angle data or joint positions. Allowing communication between the line scanner and the computer was a challenging task: the scanner must be connected to a controller box, which powers the scanner, and communication between this controller and the computer uses a USB connection, as shown in Figure 2. Keyence provides toolboxes for data acquisition from various programming languages; MATLAB is not one of them, so another language, such as Visual Basic, must be used. The easiest solution is to alter a preexisting example program provided by Keyence to capture the data and save it as a data file, which can then be imported into MATLAB to automate the process.
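As a rough sketch of the merging step, the snippet below pairs each scanner profile with the robot pose recorded closest in time. It is illustrative Python (the study uses Visual Basic and MATLAB), and the CSV layouts, `t, x, z` for scan rows and `t, px, py, pz, roll, pitch, yaw` for pose rows, are hypothetical placeholders, not the actual Keyence or UR5 export formats.

```python
import csv
import io

def merge_scan_and_pose(scan_csv, pose_csv):
    """Pair each scanner sample with the robot pose nearest in time.

    scan_csv / pose_csv: file-like objects. Assumed (hypothetical) formats:
    scan rows are (t, x, z); pose rows are (t, px, py, pz, roll, pitch, yaw).
    """
    poses = [list(map(float, row)) for row in csv.reader(pose_csv)]
    merged = []
    for row in csv.reader(scan_csv):
        t, x, z = map(float, row)
        # Nearest-timestamp match; interpolation would be more accurate
        # if the two streams are sampled at different rates.
        nearest = min(poses, key=lambda p: abs(p[0] - t))
        merged.append((t, x, z, nearest[1:]))
    return merged
```

A timestamp-based merge like this assumes both devices share a clock; in the actual setup, where both streams arrive at one computer, the receive time can serve as the common reference.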

3.3. Data Processing

For the development of this work, a MATLAB graphical user interface (GUI) has been created; an overview is shown in Figure 4. The GUI can be used to model any robot with up to seven revolute joints: it takes the Denavit-Hartenberg parameters (DH parameters) as input and generates a sketch of the robot accordingly. This feature provides flexibility and the opportunity to adapt the scanning system to a given robot geometry. Section 3.3.1 describes the calculations and robot structure behind the MATLAB GUI code.

Figure 4: Overview of the graphical user interface.
3.3.1. DH Table Creation

For the creation of the robot skeleton, the DH parameters are used to generate the homogeneous transformation matrices $^{i-1}T_i$. This is done in the MATLAB code using the symbolic operations included in the software, with a for-loop that repeats for the number of joints selected; the resulting matrices are saved in a structure-type variable that contains all the information regarding the robot. The general structure of these matrices is

$$^{i-1}T_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{1}$$

where $a_i$ is the link length (the distance between frames along $x_i$), $\alpha_i$ is the link twist (the rotation about the $x_i$ axis), $d_i$ is the link offset (the distance along $z_{i-1}$ between frames), and $\theta_i$ is the joint angle (the rotation about the $z_{i-1}$ axis).

When the user provides the number of joints of the robot, the corresponding number of rows becomes available in the DH table on the GUI. With the information filled into the table, the homogeneous transformation matrices are calculated. For this calculation, the forward kinematic equations with joint variables $\theta_i$ and link dimensions $a_i$, $\alpha_i$, and $d_i$ are used as shown in (2), and each matrix is stored in the structure-type variable:

$$^{0}T_n = {}^{0}T_1\,{}^{1}T_2 \cdots {}^{n-1}T_n \tag{2}$$
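Numerically, the symbolic pipeline behind the GUI reduces to building one standard DH matrix of the form in (1) per joint and chaining the matrices as in (2). A minimal Python/NumPy sketch of this computation (the paper's own code is MATLAB and symbolic) might look like:

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform between consecutive link frames from
    standard Denavit-Hartenberg parameters, matching equation (1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, joint_angles):
    """Chain the per-link transforms as in equation (2) to obtain the
    base-to-end-effector transform. dh_rows holds (a, alpha, d) per joint."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_rows, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T
```

For the UR5 itself, `dh_rows` would be filled with the robot's published DH parameters, which should be taken from the manufacturer's documentation for the specific arm.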

After each matrix is calculated in symbolic notation and the “Generate Robot” button is pressed, the code proceeds to generate a schematic of the robot in an initial position based on the DH table. To render the plot, the points of a cylinder are generated using the built-in MATLAB function “cylinder.” The transformation matrix is applied to the original cylinder, and each transformed cylinder is then plotted to represent each link and joint, as shown in Figure 5.

Figure 5: Example of the GUI after sketching the UR5.

4. Data Validation

The cloud points obtained from the line scanner are transformed using the joint information from the UR5, and the resulting data are plotted in the GUI. To test the system integration, a Professional Quality Tool Hardened and Precision Ground Block [20] is used; the dimensions of the block are known and are compared with its scanned output. For more robustness, the RMS values are calculated.
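The RMS comparison can be sketched as below. This is an illustrative Python helper, assuming the measured values (e.g., scanned point heights) have already been paired with their nominal counterparts:

```python
import numpy as np

def rms_error(measured, nominal):
    """Root-mean-square deviation between measured dimensions (or point
    heights) and their nominal values, in the same units as the inputs."""
    measured = np.asarray(measured, dtype=float)
    nominal = np.asarray(nominal, dtype=float)
    return float(np.sqrt(np.mean((measured - nominal) ** 2)))
```

A single RMS figure summarizes scan quality across all sampled points, which makes it a convenient basis for comparing successive scans of the same reference block.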

5. Results and Discussions

The main objective of this project was to identify the communication capabilities between the robot, line scanner, and computer needed to automate the scanning process in an industrial setting. This communication was identified and established using a combination of Ethernet socket communication and USB connections, so that all scanning data and robot position data can be acquired automatically (Figure 6).

Figure 6: Communication setup and data flow.

Another key objective was to generate a MATLAB user interface to simulate the process and to apply the correct transformation to the data gathered from the line scanner. This interface was successfully created, allowing a human operator to visualize the scanning process. The code takes the robot's position information (end-effector position and orientation) and combines it with the line scanner data to obtain the cloud points of the scanned object and to simulate the scan operation, as shown in Figure 7. A zoomed-in view of the scan result is shown in Figure 8. To obtain a clear image and show the details of the object, more cloud points need to be taken; however, if only the overall size is required, few cloud points are needed and the scanning process is expedited.

Figure 7: Simulation of the scan process.
Figure 8: Isometric view of points from scanned object.

The accuracy of the scan is verified based on the overall dimensions, as shown in Figure 9, using only a few cloud points; taking two reference cloud points and differencing their values along the vertical direction gives the height of the scanned block, which is compared against the known height of the original object. By adding a few more scanned cloud points on the top surface, the object shown in Figure 8 can be partially recreated, including its shape and the holes in the middle. However, more cloud points and further calibration are required for full reconstruction of the object. The system can be calibrated for accuracy and repeatability based on trends across successive scans of the same object. Another factor, which is not necessarily a problem but a suggestion for improvement, is the communication channel. In this study, communication between the computer and the UR5 and between the computer and the scanner was achieved by Ethernet socket communication and a USB cable, respectively. As cables create clutter, wireless communication could reduce clutter while transmitting a higher range of data [21].

Figure 9: Front view of the scanned object.
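The height check described above amounts to differencing cloud-point coordinates along the vertical axis. The Python sketch below generalizes the two-point comparison by averaging several points per face to suppress per-point scanner noise; the function name and the point values in the example are hypothetical, not measurements from the study.

```python
import numpy as np

def block_height(top_points, base_points):
    """Estimate a block's height from scanned cloud points as the
    difference between the mean z of top-face points and the mean z
    of base-plane points (points given as (x, y, z) rows)."""
    top_z = np.mean(np.asarray(top_points, dtype=float)[:, 2])
    base_z = np.mean(np.asarray(base_points, dtype=float)[:, 2])
    return float(top_z - base_z)
```

With only the two reference points used in the study, the same function reduces to a direct z-difference; more points simply tighten the estimate.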

6. Conclusion

The solution for automated part inspection provides a variety of benefits. The UR5 and line scanner cost approximately $35,000, which is significantly cheaper than other metrology hardware that can run upwards of $100,000. The collaborative nature of the robot allows humans to work in its vicinity without risk of injury, a large advantage over conventional industrial robots, which require a large footprint for safety guarding. The UR5 also allows great flexibility in positioning the line scanner to reach around, and even inside, parts to be scanned, a clear advantage over stationary line scanners mounted above the part. Operators can be trained quickly, as paths can easily be retaught for a new part using the robot's free-drive mode. This scanning method has also been shown to be moderately accurate, which is acceptable for applications that do not require the high accuracy of professional metrology equipment. The code generated for this robot can be generalized to other robots, as any DH table can be used as input to the program. In the future, several areas must be investigated to create an improved, competitive part-inspection tool for the manufacturing sector.

7. Future Works

This project currently lacks an automatic pass/fail feature. The program can be written so that, if the scan is an accurate representation of the object, a message such as “Object Passed Scan” appears; if the generated scan is a poor representation, a message such as “Failed Scan” appears and the scan is discarded. Another problem encountered during scanning was error in the generated data due to reflection of the laser beam, which produced inaccurate data at certain cloud points. This error grows with the reflectivity of the object being scanned, so further research is required into methods to eliminate or minimize it. More specifically, a correction needs to be made to account for differences in the reflectivity of the laser beam depending on the surface being scanned.
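A pass/fail gate of the kind proposed could be as simple as thresholding the scan's RMS deviation against a part-specific tolerance. The sketch below is hypothetical: the function name, the RMS input, and the 0.5 mm default tolerance are illustrative assumptions, not values from the study.

```python
def inspect_scan(rms_mm, tolerance_mm=0.5):
    """Hypothetical pass/fail gate: accept the scan if its RMS deviation
    from nominal (in mm) is within the part's tolerance. The 0.5 mm
    default is an illustrative placeholder, not a value from the study."""
    return "Object Passed Scan" if rms_mm <= tolerance_mm else "Failed Scan"
```

In a deployed system the tolerance would come from the part drawing, and a failed result would both flag the part and discard the stored scan.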

Data Availability

The research data related to the graphical user interface (GUI) development, equipment, and experimental setups used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. H. Kunzmann, F. Wäldele, and E. Saljé, “On Testing Coordinate Measuring Machines (CMM) with Kinematic Reference Standards (KRS),” CIRP Annals - Manufacturing Technology, vol. 32, no. 1, pp. 465–468, 1983.
  2. K. Deshmukh, J. L. Rickli, and A. Djuric, “Kinematic Modeling of an Automated Laser Line Point Cloud Scanning System,” Procedia Manufacturing, vol. 5, pp. 1075–1091, 2016.
  3. F. Remondino and S. El-hakim, “Image-based 3D modelling: A review,” The Photogrammetric Record, vol. 21, no. 115, pp. 269–291, 2006.
  4. G. Sansoni, M. Trebeschi, and F. Docchio, “State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation,” Sensors, vol. 9, no. 1, pp. 568–601, 2009.
  5. Z. Xiao, C. Xu, D. Xiao, F. Liu, and M. Yin, “An Optimized Robotic Scanning Scheme for Ultrasonic NDT of Complex Structures,” Experimental Techniques, vol. 41, no. 4, pp. 389–398, 2017.
  6. G. Bianco, A. Gallo, F. Bruno, and M. Muzzupappa, “A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects,” Sensors, vol. 13, no. 8, pp. 11007–11031, 2013.
  7. S. Jiang, F. Sun, Z. Gu, H. Zheng, W. Nan, and Z. Yu, “Underwater 3D reconstruction based on laser line scanning,” in Proceedings of the OCEANS 2017 - Aberdeen, pp. 1–6, UK, June 2017.
  8. O. Cohen, N. Ben-Ari, I. Nevo et al., “Backside illuminated CMOS-TDI line scanner for space applications,” in Proceedings of the International Conference on Space Optics, ICSO 2016, vol. 10562, France, October 2016.
  9. V. Miljković and D. Gajski, “Adaptation of industrial hyperspectral line scanner for archaeological applications,” in Proceedings of the 23rd International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Congress, ISPRS 2016, pp. 343–345, Czech Republic, July 2016.
  10. P. J. Cutler, M. D. Malik, S. Liu, J. M. Byars, D. S. Lidke, and K. A. Lidke, “Multi-Color Quantum Dot Tracking Using a High-Speed Hyperspectral Line-Scanning Microscope,” PLoS ONE, vol. 8, no. 5, p. e64320, 2013.
  11. S. M. Shepard, J. R. Lhota, B. A. Rubadeux, D. Wang, and T. Ahmed, “Reconstruction and enhancement of active thermographic image sequences,” Optical Engineering, vol. 42, no. 5, pp. 1337–1342, 2003.
  12. R. Müller, M. Vette, and M. Scholer, “Inspector robot - A new collaborative testing system designed for the automotive final assembly line,” Assembly Automation, vol. 34, no. 4, pp. 370–378, 2014.
  13. M. A. Isa and I. Lazoglu, “Design and analysis of a 3D laser scanner,” Measurement, vol. 111, pp. 122–133, 2017.
  14. J. Molleda, R. Usamentiaga, D. F. García et al., “An improved 3D imaging system for dimensional quality inspection of rolled products in the metal industry,” Computers in Industry, vol. 64, no. 9, pp. 1186–1200, 2013.
  15. A. Isheil, J.-P. Gonnet, D. Joannic, and J.-F. Fontaine, “Systematic error correction of a 3D laser scanning measurement device,” Optics and Lasers in Engineering, vol. 49, no. 1, pp. 16–24, 2011.
  16. E. Seminati, D. C. Talamas, M. Young, M. Twiste, V. Dhokia, and J. L. J. Bilzon, “Validity and reliability of a novel 3D scanner for assessment of the shape and volume of amputees’ residual limb models,” PLoS ONE, vol. 12, no. 9, p. e0184498, 2017.
  17. M. Levoy, K. Pulli, B. Curless et al., “The Digital Michelangelo Project: 3D scanning of large statues,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 131–144, ACM Press/Addison-Wesley Publishing Co., 2000.
  18. M. Callieri, A. Fasano, and G. Impoco, “RoboScan: an automatic system for accurate and unattended 3D scanning,” in Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT '04), pp. 805–812, IEEE, September 2004.
  19. S. Pellegrinelli, A. Orlandini, N. Pedrocchi, A. Umbrico, and T. Tolio, “Motion planning and scheduling for human and industrial-robot collaboration,” CIRP Annals - Manufacturing Technology, vol. 66, no. 1, pp. 1–4, 2017.
  20. W. Cai, S. J. Hu, and J. Yuan, “A variational method of robust fixture configuration design for 3-d workpieces,” Journal of Manufacturing Science and Engineering, vol. 119, no. 4, pp. 593–602, 1997.
  21. M. A. K. Yusoff, R. E. Samin, and B. S. K. Ibrahim, “Wireless mobile robotic arm,” Procedia Engineering, vol. 41, pp. 1072–1078, 2012.