Journal of Sensors
Volume 2017 (2017), Article ID 4207432, 12 pages
https://doi.org/10.1155/2017/4207432
Research Article

An Automatic Assembling System for Sealing Rings Based on Machine Vision

Department of Electronics and Information, Hangzhou Dianzi University, Hangzhou, China

Correspondence should be addressed to Yuxiang Yang; yyx@hdu.edu.cn

Received 16 January 2017; Accepted 30 April 2017; Published 24 May 2017

Academic Editor: Gaetano Sequenzia

Copyright © 2017 Mingyu Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In order to grab and place the sealing rings of the battery lid quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically by the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally, the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can successfully grab the sealing rings and place them on the sealing ports of the fast-moving battery lid. More importantly, the proposed system significantly improves the efficiency of the battery production line.

1. Introduction

Faced with the social reality of an aging population, high labor costs and a shrinking demographic dividend are receiving more and more attention, and the industrial reform of China has become urgent. To promote China's industrial upgrading, industrial robots have been designated as a key development area. Hence, more and more companies are beginning to replace their traditional manual assembly models with industrial robots. Robot-based production is an essential part of the industrial manufacturing automation reform [1]. Traditional applications of industrial robots in industrial production are generally open-loop mechanisms with fixed trajectories and motions based on teaching and off-line programming [2, 3]. However, such robots can only repeat the programmed motions; their trajectories and motions cannot be adjusted adaptively according to the processing objects and the working environments. Some specific industrial processing procedures therefore cannot be handled by these fixed-track robots. Machine vision is a technology for recognizing objects and extracting and analyzing object information from digital images [4–6]. Thus, machine vision [7] has been widely applied in the field of industrial robots to improve their flexibility and adaptability [8, 9]. With visual feedback, adaptive adjustments can be achieved according to the processing objects and the working environments [10].

In recent years, robot systems combined with machine vision modules have been used in more and more fields [11, 12]. By using an optic camera to measure the bead geometry (width and height), a real-time computer vision algorithm [13] was proposed to extract training patterns and enable an industrial robot to acquire and learn the welding skill autonomously. With the development of robot technology, domestic and foreign scholars have done a lot of research on vision-based robots. A pneumatic manipulator based on vision positioning is developed in [14], whose core is the visual calibration of static objects. A sorting technology for industrial robots based on machine vision is proposed in [15], in which the recognition of the target contour is the key process. An eye-in-hand robot system [16] is built by mounting the camera on the end effector of the robot.

In the battery production industry of China, labor intensity is very high and production processes are very specific. As shown in Figure 1(a), traditional battery manufacturers in China rely entirely on manual labor to grab rubber sealing rings and place them on the sealing ports of the battery lid. Such manual operations require large labor and time costs, and the production efficiency is very low. Hence, industrial robots based on machine vision are urgently needed to improve production efficiency. In [17], a noncalibration scanning method is proposed to locate and grab the sealing rings using a camera fixed at the end effector of the robot. However, with such a method the scanning range is small and the scanning speed is slow, so it is unable to place the sealing rings on the fast-moving battery lid. In this paper, a novel industrial robot assembling system for sealing rings based on machine vision is developed for the lead battery production line. Specifically, the sealing rings are recognized and located automatically by the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module, and the sealing ports on the battery lid are identified and located by the joint work of the contour recognition and fitting algorithms. Finally, the sealing rings are placed on the sealing ports of the battery lid automatically by the industrial robot. With the proposed system, the sealing rings can be grabbed and placed on the sealing ports accurately and automatically. More importantly, the proposed system significantly improves the efficiency of the battery production line.

Figure 1: (a) The traditional assembling process of battery lids. (b) The sealing rings and the battery lid.

The rest of the paper is organized as follows: The overview of the system is given in Section 2. The proposed target recognition and tracking algorithms are described in Section 3. In Section 4, experimental results are given to demonstrate the superiority of our system. Finally, conclusions are made in Section 5.

2. System Composition and Design

As shown in Figure 2, the system mainly consists of a 4-degree-of-freedom industrial robot, a grab-side visual processing module, a place-side visual processing module, and an air pump. The grab-side visual processing module includes a light source, a grab-side camera, and a grab-side processing unit; the place-side visual processing module likewise includes a light source, a camera, and a processing unit. Specifically, the grab-side visual processing module recognizes and locates the coordinates of the sealing rings. The place-side visual processing module calibrates the coordinates of the fast-moving battery lid on the conveyor belt. The coordinates of the sealing ring and the battery lid are transferred to the 4-DOF robot controller through the serial ports. Then, the 4-DOF robot controller implements the motion and trajectory control to achieve the grabbing and placing processes with an air pump. The concrete structure of the proposed system is described in Figure 3, and an image of the real system is shown in Figure 11.

Figure 2: System block diagram.
Figure 3: The compositions of the system. ①, ③: crosslink Ethernet cable; ②: grab-side control unit; ④: serial line; ⑤: robot controller; ⑥: place-side control unit; ⑦: robot control line; ⑧, ⑪: light source of camera; ⑨: place-side camera; ⑭: grab-side camera; ⑩: battery lid; ⑬: conveyor belt; ⑫: 4-degree-of-freedom robot; ⑮: air pump; ⑯: circuit board of air pump driver; ⑰: tray of sealing rings; ⑱: back view of sealing ring; ⑲: front view of sealing ring.

As shown in Figure 3, the transmission device is constructed to simulate the mode of the industrial production line and realize the high-speed movement of the battery lid. The place-side and grab-side cameras are fixed above the robot working area to collect the images of the different target regions (the sealing ring region and the battery lid region), respectively. Furthermore, a novel parallel light source [18] is provided for each target region. The specific design of the light sources is shown in Figure 4(a); the designed light sources are hung on both sides of the camera. The size of each light board is 250 × 250 mm, and 200 LED lamps are evenly distributed on each light board. The place-side and grab-side vision ranges are shown in Figure 4(b); the regional ranges are 650 × 400 mm and 100 × 100 mm, respectively.

Figure 4: Mechanical design of the light sources: (a) camera side view; (b) camera vision range.

Task scheduling [19] of the proposed system is shown in Figure 5. The robot controller asks the grab-side control unit for the grabbing coordinates. After receiving the inquiry, the grab-side control unit immediately processes the real-time pictures transmitted by the corresponding camera, and the coordinates are fed back to the robot controller. When receiving the feedback coordinates, the controller moves the robot to the corresponding coordinates and opens the pump drive via an internal GPIO. The sealing ring under the end of the robot is then drawn up by suction to complete the grabbing action. After grabbing, the robot controller asks the place-side control unit for the placing coordinates. A novel dynamic target tracking algorithm is applied in the place-side control unit to obtain the coordinates of the fast-moving battery lid, and the tracking coordinates are fed back to the robot controller. When receiving these coordinates, the controller moves the robot to the corresponding coordinates and turns off the pump drive. The sealing ring is then put down to complete the placing action.

Figure 5: The diagram of the task scheduling.
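The scheduling cycle above can be sketched as a short simulation. This is an illustrative stand-in only: the class names, stub coordinates, and the `run_cycle` helper are assumptions, not the authors' actual controller interfaces.

```python
# Hypothetical sketch of the grab/place task-scheduling cycle of Figure 5.
# GrabSideUnit, PlaceSideUnit, and the pump log entries are illustrative
# stand-ins for the real control units and GPIO pump drive.

class GrabSideUnit:
    """Stub for the grab-side control unit: returns sealing-ring coordinates."""
    def query_coordinates(self):
        return (120.0, 85.0)          # a located front-view sealing ring

class PlaceSideUnit:
    """Stub for the place-side control unit: returns tracked lid coordinates."""
    def query_coordinates(self):
        return (430.0, 210.0)         # a tracked sealing-port position

def run_cycle(grab_unit, place_unit, log):
    # 1. Ask the grab-side unit for the grabbing coordinates.
    gx, gy = grab_unit.query_coordinates()
    log.append(("move", gx, gy))
    log.append(("pump", "on"))        # suction grabs the sealing ring
    # 2. Ask the place-side unit for the (tracked) placing coordinates.
    px, py = place_unit.query_coordinates()
    log.append(("move", px, py))
    log.append(("pump", "off"))       # releasing completes the placing action
    return log

actions = run_cycle(GrabSideUnit(), PlaceSideUnit(), [])
```

The ordering of the log entries mirrors the ask/feedback/move sequence described above; in the real system each `query_coordinates` call is a serial-port exchange.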

3. Robot Dynamic Target Tracking

It can be seen that how to place the sealing rings on the sealing ports of the fast-moving battery lid quickly and effectively is the key problem of the developed system. Firstly, a visual calibration technique should be applied to achieve precise positioning of the sealing ring and the sealing port on the battery lid. Then, through accurate position feedback, we can adjust the robot trajectory and realize coordinate tracking of the moving objects on the conveyor belt. Zhang's calibration method [20] has the advantages of strong robustness and high accuracy and has been widely used in various applications. Reference [21] showed that Zhang's calibration method gives the most accurate model in experiments. Thus, we adopt Zhang's calibration method to complete the camera calibration in this paper.

3.1. Calibration

Zhang's calibration method is a flexible calibration method based on a planar calibration template. Images of the calibration plate are captured from different directions, and then, through the corresponding relations between the feature points of the calibration board, the camera calibration can be completed.

After the operations above, the matrix M related to the camera's internal structure can be calculated. M is called the camera intrinsic parameter matrix and is defined by f_x, f_y, u_0, and v_0. f_x and f_y are the effective focal lengths of the camera along the u- and v-axes, and (u_0, v_0) is the origin of the image pixel coordinate system. The relationship between the image pixel coordinates (u, v) and the camera coordinates (x_c, y_c, z_c) can be represented by the intrinsic matrix M in the following formula:

z_c · [u, v, 1]^T = M · [x_c, y_c, z_c]^T,  M = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]]  (1)

As shown in formula (2), the relationship between the camera coordinates (x_c, y_c, z_c) and the world coordinates (x_w, y_w, z_w) can be represented by a rotation matrix R and a translation vector T, where R is a 3 × 3 unit orthogonal matrix:

[x_c, y_c, z_c]^T = R · [x_w, y_w, z_w]^T + T  (2)

Since we only consider two-dimensional calibration, z_w = 0. Furthermore, we write (x, y) for (x_w, y_w). Hence, combining formulas (1) and (2), u and v can be expressed in terms of x, y, M, R, and T.
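The composition of formulas (1) and (2) for a point on the calibration plane can be sketched as follows. The numeric intrinsic and extrinsic values are made up for illustration (a camera looking straight down from 500 mm); they are not the calibrated values of this system.

```python
# Minimal sketch of formulas (1)-(2): projecting a world point on the
# calibration plane (z_w = 0) into pixel coordinates. All numbers below
# are illustrative assumptions, not the paper's calibration results.
def matvec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def world_to_pixel(M, R, T, xw, yw):
    # Formula (2): camera coordinates from world coordinates (z_w = 0).
    xc, yc, zc = [c + t for c, t in zip(matvec(R, [xw, yw, 0.0]), T)]
    # Formula (1): pixel coordinates via the intrinsic matrix M.
    u = M[0][0] * xc / zc + M[0][2]
    v = M[1][1] * yc / zc + M[1][2]
    return u, v

# Illustrative parameters: f_x = f_y = 800 px, principal point (320, 240),
# camera axis-aligned and 500 mm above the calibration plane.
M = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = [0.0, 0.0, 500.0]

u, v = world_to_pixel(M, R, T, 10.0, -5.0)   # world point at (10, -5) mm
```

Inverting this mapping (pixel to plane coordinates) is what the calibration actually enables; the forward direction is shown here because it follows formulas (1) and (2) term by term.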

Then, we also complete the transformation between the fixed coordinate system of the camera and the fixed coordinate system of the robot. To facilitate the calculation, the reference coordinate system is made to coincide with the world coordinate system when the external parameters are obtained, and the z-axis of the reference coordinate system is parallel to the z-axis of the robot coordinate system. Since the fixed height of the target in the robot coordinate system can be determined directly by the teaching method, it is necessary to consider only the rotation and translation in the x and y directions. A point P is taken on the console arbitrarily, whose coordinates (x_w, y_w) are measured directly in the world coordinate system (the reference coordinate system). At the same time, the coordinates of the origin O of the reference coordinate system and of the point P are obtained by the teaching method in the robot coordinate system and are (x_0, y_0) and (x_r, y_r), respectively. Figure 6(a) shows the calculation of the transformation between the coordinate systems.

Figure 6: The calculation matrix of the transformation between coordinate matrices.

As shown in Figure 6(b), the x-axis and the y-axis of the reference coordinate system are translated so that the origins of the two coordinate systems coincide at (x_0, y_0). At this time, the coordinates (x_w, y_w) of point P in the reference coordinate system remain unchanged, but its coordinates relative to the robot coordinate system become (x_r − x_0, y_r − y_0), which gives the relationship in formula (4). The angle θ between the x-axis of the reference coordinate system and the x-axis of the robot coordinate system is the rotation angle between the two coordinate systems. So, according to the triangle formula, θ can be obtained by comparing the directions of P in the two frames:

θ = arctan((y_r − y_0)/(x_r − x_0)) − arctan(y_w/x_w)  (4)

Therefore, we can obtain the corresponding coordinates of any point in the reference coordinate system with respect to the robot coordinate system.
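The reference-to-robot mapping just derived (a rotation by θ followed by the translation to the robot-frame origin) can be sketched as a single function. The numeric values in the example are illustrative, not the calibrated ones.

```python
# Sketch of the reference-to-robot coordinate transformation described above:
# rotate a reference-frame point by the calibrated angle theta, then translate
# by the robot-frame coordinates (x0, y0) of the reference origin.
import math

def reference_to_robot(x, y, theta, x0, y0):
    """Map a point (x, y) in the reference frame into the robot frame."""
    xr = x * math.cos(theta) - y * math.sin(theta) + x0
    yr = x * math.sin(theta) + y * math.cos(theta) + y0
    return xr, yr

# Example: reference frame rotated 90 degrees relative to the robot frame,
# reference origin located at (100, 50) in robot coordinates (made-up values).
xr, yr = reference_to_robot(10.0, 0.0, math.pi / 2, 100.0, 50.0)
```

A point 10 mm along the reference x-axis lands 10 mm along the robot y-axis, as expected for a 90-degree rotation.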

After the calculations above, for the place-side camera, we can obtain the intrinsic parameter matrix M_1, the rotation matrix R_1, the translation vector T_1, and the rotation angle θ_1. Similarly, for the grab-side camera, the intrinsic parameter matrix M_2, rotation matrix R_2, translation vector T_2, and rotation angle θ_2 are obtained in the same way.

3.2. Target Recognition Algorithms

The recognition of the sealing ring and the sealing port is another key problem. The front and back recognition of the sealing ring is achieved using the algorithm proposed in [17]. Firstly, some parameters of the Hough transform should be set through a series of experiments, such as dp, min_dist, param1, param2, min_radius, and max_radius. The specific meanings and values of these parameters are shown in Table 1. Then the gray image is processed by the Hough transform with these parameters, and the circular appearance of the sealing ring is found through the voting algorithm within the grab-side vision range.

Table 1: The parameters of related algorithm.
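The voting idea behind the Hough circle detection can be illustrated with a toy grid search. This is a simplified stand-in, not the parameterised transform of Table 1: the synthetic edge image, fixed radius, and 10-degree angular step are all assumptions for the demonstration.

```python
# Toy illustration of Hough-transform voting for circle centres: each edge
# pixel votes for all candidate centres lying at the known radius from it,
# and the most-voted cell is taken as the circle centre.
import math

# Synthetic edge pixels on a circle of radius 10 centred at (20, 20).
edges = [(20 + round(10 * math.cos(a * math.pi / 18)),
          20 + round(10 * math.sin(a * math.pi / 18))) for a in range(36)]

def hough_circle(edge_pts, radius, size=40):
    """Vote for circle centres at a fixed radius; return the best centre."""
    acc = {}
    for (x, y) in edge_pts:
        for a in range(36):                       # 10-degree voting steps
            t = a * math.pi / 18
            cx = round(x - radius * math.cos(t))  # candidate centre for this
            cy = round(y - radius * math.sin(t))  # edge point and angle
            if 0 <= cx < size and 0 <= cy < size:
                acc[(cx, cy)] = acc.get((cx, cy), 0) + 1
    return max(acc, key=acc.get)

center = hough_circle(edges, radius=10)
```

The true centre accumulates one vote from nearly every edge point, while spurious cells collect far fewer; the dp and param1/param2 settings in Table 1 tune exactly this accumulator resolution and vote threshold in the real detector.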

Secondly, because only a front-view sealing ring can be placed on the sealing port as shown in Figure 1(b), the front and back of the sealing ring must be distinguished. As shown in Figure 7(a), there are some differences between the front and back of the sealing ring: the front-view sealing rings are brighter and relatively smooth, while the back-view sealing rings are darker, with numbers and letters printed on them. Image binarization has a large effect on the rest of the document image analysis processes in character recognition [22]. So an obvious difference between the front and back of the sealing ring can be found after binarization, with the threshold set by observing the results of the experiments. As shown in Figure 7(b), there are almost no black pixels in the front binarized image, while there are many black pixels in the back binarized image. Hence, the front and back recognition can be achieved by counting the black pixels in the image:

N = Σ_{(x, y) ∈ S} [I(x, y) = 0]

Here, S is the identified circular area, (a, b) are the coordinates of the center of the sealing ring, and N is the number of black pixels within S. The results of recognition are shown in Figure 7(c). Then the circular feature of the sealing ring is extracted to obtain the center coordinates. After the front and back recognition, the first identified front-view sealing ring is grabbed by the robot.

Figure 7: The front and back recognition: (a) the grab-side image, (b) the binary image, and (c) the results of recognition.
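The black-pixel test above can be sketched directly on a small grayscale image. The threshold (128), the 5% decision ratio, and the synthetic images are illustrative assumptions, not the experimentally tuned values.

```python
# Sketch of the front/back test: binarise the gray image with a fixed
# threshold, count black pixels inside the detected ring circle, and decide.
# Threshold, ratio, and images are illustrative stand-ins.

def classify_ring(gray, a, b, r, threshold=128):
    """Return 'back' if the circular area holds many black pixels, else 'front'."""
    black = 0
    area = 0
    for y, row in enumerate(gray):
        for x, val in enumerate(row):
            if (x - a) ** 2 + (y - b) ** 2 <= r ** 2:
                area += 1
                if val < threshold:      # binarisation: below threshold -> black
                    black += 1
    # Back-view rings carry dark printed characters; front views are smooth.
    return "back" if black > 0.05 * area else "front"

# A bright (front-view) ring image and one with dark printed marks (back view).
front_img = [[200] * 9 for _ in range(9)]
back_img = [row[:] for row in front_img]
for x in range(2, 7):
    back_img[4][x] = 30                  # dark "lettering" across the ring

front_result = classify_ring(front_img, 4, 4, 3)
back_result = classify_ring(back_img, 4, 4, 3)
```

In the real system the circle (a, b, r) comes from the Hough detection of the previous step rather than being given by hand.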

As for overlapped and overturned sealing rings, the tray of the sealing rings will be improved in the future. The tray can be shaken to flip the sealing rings over, and a rod on the tray can be rotated to limit the height of the sealing rings so as to avoid overlap.

Different from the sealing rings, the battery lid is a dynamic target. Hence the place-side camera takes images of the battery lid at different positions on the conveyor belt, which results in different illumination conditions. However, the Hough transform algorithm [23, 24] is sensitive to the parameters and the illumination conditions. Therefore, a novel circle fitting algorithm is proposed in this paper to recognize the sealing ports on the battery lid. The proposed circle fitting algorithm is as follows.

In the beginning, the image of the battery lid on the conveyor belt is collected in real time under a certain intensity of vertical illumination. Then binarization is carried out, with the threshold set by observing the results of the experiments.

The parameter i represents the ith row of the image plane and the parameter j represents the jth column. I(i, j) denotes the gray value of the pixel at the ith row and jth column of the image plane, where 0 is the black gray value and 255 is the white gray value. Figure 8(a) shows the binarized images of the battery lid viewed obliquely and laterally.

Figure 8: The contour extraction of the sealing port: (a) binary image; (b) contour searching; (c) contour selection.

Then, the edges of the binarized images are detected using the Canny operator [25], and the pixels of the boundary of the contour can be detected according to the differences among the pixels. And then all the inner contours can be extracted on the binarized images, which are the white regions surrounded by black pixels as shown in Figure 8(a). Of course, the results of edge detection and contour searching include not only the approximate contour of the sealing ports, but also other interference profiles of the object or background. Figure 8(b) shows the pictures with the interference profiles. And the areas enclosed by the red line are the extracted contours in the binarized image.

To remove the interference profiles mentioned above, a set of constraints is given by setting the area range of the contour, the number of white pixels within the contour, and the number of points that make up the contour boundary. In this way, the outlines of the sealing ports can be found by selecting the qualified contours with thresholds. Figure 8(c) shows the filtered images of the contours of the sealing ports.

Through the above operations, we can get the initial circular contours. The contours are then fitted to circles by the least-squares circle fitting algorithm [26], from which the center and radius of each circle can be obtained. Furthermore, the sealing ports can be further selected by setting the range of the radius, and the coordinates of the centers of the sealing ports can be obtained through visual calibration. As shown in Figure 9, the profiles of the oblique and lateral sealing ports are fitted, respectively, and the centers of the sealing ports on the battery lid are found accurately.

Figure 9: The contour fitting of the sealing port: (a) the oblique sealing ports; (b) the lateral sealing ports.
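The least-squares circle fit can be sketched with the classical linear (Kåsa-style) formulation: fit x² + y² + Dx + Ey + F = 0 by solving the 3 × 3 normal equations. This is one common variant of least-squares circle fitting; the paper does not specify which formulation [26] uses, so treat this as an illustrative sketch.

```python
# Sketch of a linear least-squares circle fit applied to an extracted
# contour: solve the normal equations for D, E, F in
# x^2 + y^2 + D*x + E*y + F = 0, then recover centre and radius.
import math

def fit_circle(points):
    """Fit a circle to (x, y) points; return centre (a, b) and radius r."""
    n = float(len(points))
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * (p[0] ** 2 + p[1] ** 2) for p in points)
    syz = sum(p[1] * (p[0] ** 2 + p[1] ** 2) for p in points)
    sz = sum(p[0] ** 2 + p[1] ** 2 for p in points)
    # Normal equations A @ [D, E, F] = rhs, solved by Cramer's rule.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [-sxz, -syz, -sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    sol = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = rhs[i]
        sol.append(det3(Ak) / d)
    D, E, F = sol
    ca, cb = -D / 2.0, -E / 2.0
    return ca, cb, math.sqrt(ca * ca + cb * cb - F)

# Contour points sampled on a circle of radius 12 centred at (30, 40).
pts = [(30 + 12 * math.cos(t / 10), 40 + 12 * math.sin(t / 10)) for t in range(63)]
a, b, r = fit_circle(pts)
```

For exact contour points the fit recovers the centre and radius exactly; for noisy contours it minimises the algebraic residual, which is why the radius-range check described above is still applied afterwards.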

So the multiple constraints increase the accuracy and reliability of the identification of the sealing ports. Moreover, even if some sealing ports are not recognized because of the visual angle and other reasons at the beginning, this does not affect later identification. The sealing ports can be reidentified and located continuously. In addition, the number of black pixels in the sealed port is more than the unsealed one as shown in Figure 9(a). Therefore, the repetitive recognition of the sealed ports can be avoided by the limitation of the number of pixels in the contour fitting algorithm.

3.3. Dynamic Target Tracking Algorithm

Through the visual calibration and the target recognition algorithms, we are able to recognize and localize the rubber rings and the sealing ports of the battery lid in real time. But when the robot moves and completes the placing action, a time delay is produced. Furthermore, the robot will interfere with the place-side camera during motion. So an efficient dynamic target tracking algorithm is proposed, combining the robot motion control with visual feedback.

As shown in Figure 10(a), the initial position of the end of the robot is P_0, and the initial position of the moving target is Q_0. Without tracking, while the robot moves from P_0 to Q_0, the original target will have moved to Q_1 along the direction of the conveyor speed v. Therefore, the time error of the tracking process (from Q_0 to Q_1) needs to be taken into account. The core of the tracking algorithm is how to use the known conditions to predict or calculate the coordinates of Q_1 and adjust the robot trajectory.

Figure 10: Kinematic analysis of the tracking algorithm: (a) tracking mathematical model; (b) decomposed schematic of v.
Figure 11: The experimental system.

After the completion of the grabbing action, namely, at time t_0, the known speed of the conveyor belt is v. If we assume the speed of the robot end effector is v_1, then through the real-time feedback of the visual system we can know the coordinates of P_0 and Q_0. So we can get the distance d the robot moves from P_0 to Q_0, d = |P_0 Q_0|, and the time t from P_0 to Q_0: t = d/v_1. Then, as shown in Figure 10(b), the angle between the conveyor belt and the x-axis of the robot coordinates is α. Hence, we decompose v to get v_x = v·cos α and v_y = v·sin α. Furthermore, we can calculate the offset coordinates of the battery lid over time t: Δx = v_x·t and Δy = v_y·t. Through this series of projections, we can calculate the position coordinates of the fast-moving battery lid: Q_1 = (x_{Q_0} + Δx, y_{Q_0} + Δy). Finally, the robot is controlled to move from P_0 to Q_1 directly.

In the above analysis, the speed v_1 of the robot end effector is an assumed known variable. But, in the robot control system, the robot movement is a complex process of changing speed. Hence, obtaining v_1 is cumbersome and would increase the complexity of the tracking algorithm, while our system requires a high response rate. Based on the above analysis, an efficient target tracking solution is proposed in this paper.

Firstly, through the real-time feedback of the visual system, we can know the coordinates of P_0 and Q_0. When the place-side control unit transfers the coordinates of Q_0 to the robot controller, the robot controller sends the start flag of timing to the place-side control unit and starts moving from P_0 to Q_0.

Secondly, when the robot arrives at Q_0, the end flag of timing is sent to the place-side control unit. Hence, we can calculate the time t of the robot movement from P_0 to Q_0.

Finally, the position coordinates of the moving battery lid can be calculated as Q_1 = (x_{Q_0} + Δx, y_{Q_0} + Δy). Then the robot is controlled to move to Q_1, and the placing action is completed. The offset coordinates of the battery lid are calculated as

Δx = v_x·t + e_x,  Δy = v_y·t + e_y

Here v_x and v_y are the decompositions of v, v is the speed of the conveyor belt, and t is the time of the robot movement from P_0 to Q_0. Because there exists a time error when the robot moves from Q_0 to Q_1 and places the sealing ring, e_x and e_y are, respectively, the correction errors in the horizontal and vertical directions. The time error is short, so the correction errors in the horizontal and vertical directions can be regarded as fixed values. Because the conveyor belt is parallel to the x-axis of the robot coordinates, the error exists only in the x-axis direction. What is more, the correction errors differ when v and t differ.
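The final tracking step can be sketched as a single prediction function: the measured travel time, the belt speed decomposed along the belt angle, and fixed correction errors give the predicted placing position. The numeric values (0.8 s travel time, the sample port position) are illustrative; the belt speed 25 mm/s and the 13 mm / 0 mm correction errors follow the experimental section.

```python
# Sketch of the dynamic-tracking prediction: predict where the sealing port
# will be after the robot's measured travel time t, given the belt speed v
# (decomposed along the belt angle alpha) and fixed correction errors.
# Here alpha = 0, since the belt is parallel to the robot x-axis.
import math

def predict_place_position(q0, v, t, alpha=0.0, ex=13.0, ey=0.0):
    """Predict the placing position Q1 from the port position Q0 seen at timing start."""
    vx = v * math.cos(alpha)          # belt-speed components in the robot frame
    vy = v * math.sin(alpha)
    dx = vx * t + ex                  # offsets plus fixed correction errors
    dy = vy * t + ey
    return (q0[0] + dx, q0[1] + dy)

# Belt at 25 mm/s, measured robot travel time 0.8 s (illustrative),
# sealing port seen at (400, 210) in robot coordinates (illustrative).
q1 = predict_place_position((400.0, 210.0), v=25.0, t=0.8)
```

Because alpha = 0 in this setup, the entire offset falls on the x-axis, matching the observation above that the correction error exists only in the x direction.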

The robot controller and the place-side control unit communicate through serial communication in a manner similar to the TCP/IP three-way handshake. Using such serial communication, we can guarantee the timing synchronization of each subsystem, as well as the real-time performance and reliability of the communication. Through the real-time feedback of the measured moving time and the calculation of the kinematic model, we can modify the trajectory of the robot motion dynamically and realize dynamic target tracking.

4. Experimental Results

We build an experimental system as shown in Figure 11. The system utilizes a 4-DOF industrial robot made by Denso, Japan (http://www.globaldenso.com), and two Gigabit Ethernet industrial digital cameras made by AVT, Germany (https://www.alliedvision.com). The conveyor belt and the designed light sources are also shown in Figure 11. The set ratio of the robot speed is 100%. The conveyor belt is parallel to the x-axis of the robot coordinates, and the set speed is 25 mm/s. Various experiments are carried out to test the proposed system. Furthermore, based on the system platform environment, such as the specific light source and vision range, the parameters of the related algorithms are obtained by comparing the results of a series of experiments and are listed in Table 1.

On the basis of the above experimental conditions, the horizontal correction error e_x is 13 mm and the vertical correction error e_y is 0 mm after several tests. Because of the lens distortion in the visual system, a target far away from the camera scanning center will have a calibration error. So, in order to reduce the error and improve the accuracy of the calibration, we separate the grabbing and placing areas. As shown in Figure 12, the G area is the grabbing area for the robot, which is a 10 × 10 cm rectangular area. For the placing area, the conveyor belt is divided into three regions P1, P2, and P3, with lengths, respectively, of 5 cm, 25 cm, and 10 cm. P2 is the middle area of the camera scanning, so the calibration of area P2 is the most accurate, and most of the work during the placing process is completed in this area. However, because the P2 region is too short, we extend it into the P1 and P3 regions, where the calibration error is also small.

Figure 12: The actual grabbing and placing operations: (a) grabbing operation, (b) placing operation (the battery lid is in horizontal direction), and (c) placing operation (the battery lid is in inclined direction).

On the basis of accurate identification, the visual calibration of the sealing port and the sealing ring is tested. With the conveyor belt static, the calibration coordinates are obtained through the visual calibration and compared with the actual coordinates obtained by moving the robot to the corresponding sealing port and sealing ring in the teaching mode, as shown in Table 2.

Table 2: The test results of the visual calibration.

Because the sealing ring is made of rubber, the system can complete the seal within a certain tolerance; in other words, a certain error is allowed between the calibration coordinates and the actual coordinates. The value of this tolerance is 0.5 mm after multiple experiments. Hence, the calibration coordinates of P2 and G meet the requirements of the seal. After actual measurement and statistics, we apply an offset to correct the error caused by the lens distortion in the P1 and P3 regions: the corrected coordinates are calculated by setting the offset coordinates as (−0.5 mm, 1 mm) and (0.5 mm, 1.4 mm) for the P1 and P3 regions, respectively.

Under these test conditions and error corrections, we use five different sets of conveyor speeds. The test results are shown in Table 3. A test is successful when all ports of the battery lid are covered with sealing rings, and the time in Table 3 is the average time of several tests. As shown in Table 3, with the designed light sources, the proposed system can successfully grab the sealing rings and place them on the sealing ports of the fast-moving battery lid.

Table 3: The test results with different conveyor speeds.

As shown in Figure 12, the robot grabs the front-view sealing ring successfully in Figure 12(a). As shown in Figure 12(b), the battery lid is placed on the conveyor belt horizontally. The proposed system can well track the coordinates and place sealing rings on the sealing ports of the fast-moving battery lid. As shown in Figure 12(c), the battery lid is placed obliquely. The proposed system can also well track their coordinates and complete the placing operation.

5. Summary

In this paper, an automatic assembling system for sealing rings is developed based on robot motion planning and machine vision. It can not only grab the sealing rings but also track the fast-moving battery lid and complete the placing action. Firstly, a target within a certain range can be positioned accurately through visual calibration and image processing. Secondly, once a front-view sealing ring is grabbed successfully, the coordinates of the fast-moving battery lid on the conveyor belt are tracked by the dynamic target tracking algorithm with error correction. Finally, the sealing ring is placed on the sealing port.

Experimental results show that the system can accurately calibrate the sealing rings and the sealing ports on the fast-moving battery lid, and the robot can realize dynamic target tracking. Compared with a robot without a machine vision system, or with machine vision only for static targets, our robot has higher working efficiency and stronger adaptability, so that it can adjust its motion in real time in response to changes in the external environment. More importantly, the system can adapt to the requirements of the current production mode of the production line.

The system can be further improved in the future. For example, we will add one more conveyor belt to achieve a two-way work mode, so that the utilization rate of the robot and the efficiency of the production will be improved. Furthermore, through the necessary modifications of the hardware and software, a similar system can also be used in component assembly and sorting production lines.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by Zhejiang Natural Science Foundation of China under Grant LY17F010020 and Natural Science Foundation of China (61401129 and U1609216).

References

  1. M. R. Pedersen, L. Nalpantidis, R. S. Andersen et al., “Robot skills for manufacturing: from concept to industrial deployment,” Robotics and Computer-Integrated Manufacturing, vol. 37, pp. 282–291, 2016.
  2. F. Tang, P. Zhang, and F. Li, “The fuzzy feedback scheduling of real-time middleware in cyber-physical systems for robot control,” Journal of Sensors, vol. 2016, Article ID 3251632, 10 pages, 2016.
  3. G. C. Smith and R. A. Smith, “A non-contact method for detecting on-line industrial robot position errors using a microwave doppler radar motion detector,” International Journal of Advanced Manufacturing Technology, vol. 29, no. 5, pp. 605–615, 2006.
  4. W. Kong, L. Zhou, Y. Wang, J. Zhang, J. Liu, and S. Gao, “A system of driving fatigue detection based on machine vision and its application on smart device,” Journal of Sensors, vol. 2015, Article ID 548602, 11 pages, 2015.
  5. X. W. Ye, C. Z. Dong, and T. Liu, “A review of machine vision-based structural health monitoring: methodologies and applications,” Journal of Sensors, vol. 2016, Article ID 7103039, 10 pages, 2016.
  6. H. Hong, X. Yang, Z. You, and F. Cheng, “Visual quality detection of aquatic products using machine vision,” Aquacultural Engineering, vol. 63, pp. 62–71, 2014.
  7. W. Zhang, J. Mei, and Y. Ding, “Design and development of a high speed sorting system based on machine vision guiding,” Physics Procedia, vol. 25, pp. 1955–1965, 2012.
  8. K. H. Ghazali and S. Razali, “Machine vision system for automatic weeding strategy in oil palm plantation using image filtering technique,” in Proceedings of the 3rd International Conference on Information and Communication Technologies: From Theory to Applications, ICTTA, pp. 1–5, IEEE, Damascus, Syria, 2008.
  9. L. Ren, L. Wang, J. K. Mills, and D. Sun, “3-D automatic microassembly by vision-based control,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pp. 297–302, IEEE, San Diego, CA, USA, 2007.
  10. L. Pérez, Í. Rodríguez, N. Rodríguez, R. Usamentiaga, and D. F. García, “Robot guidance using machine vision techniques in industrial environments: a comparative review,” Sensors, vol. 16, no. 3, 2016.
  11. J.-K. Oh, G. Jang, S. Oh et al., “Bridge inspection robot system with machine vision,” Automation in Construction, vol. 18, no. 7, pp. 929–941, 2009.
  12. X. Wei, K. Jia, J. Lan et al., “Automatic method of fruit object extraction under complex agricultural background for vision system of fruit picking robot,” Optik, vol. 125, no. 19, pp. 5684–5689, 2014.
  13. J. F. Aviles-Viñas, I. Lopez-Juarez, and R. Rios-Cabrera, “Acquisition of welding skills in industrial robots,” Industrial Robot, vol. 42, no. 2, pp. 156–166, 2015. View at Publisher · View at Google Scholar · View at Scopus
  14. Y.-Z. Yang, C.-Y. Wu, and X.-D. Hu, “Study of web-based integration of pneumatic manipulator and its vision positioning,” Journal of Zhejiang University: Science, vol. 6, no. 6, pp. 543–548, 2005. View at Google Scholar · View at Scopus
  15. Z.-Y. Liu, B. Zhao, and H.-B. Zhu, “Research of sorting technology based on industrial robot of machine vision: image processing and machine vision,” in Proceedings of the 5th International Symposium on Computational Intelligence and Design, ISCID, vol. 1, pp. 57–61, IEEE, Hangzhou, China, 2012. View at Publisher · View at Google Scholar · View at Scopus
  16. U. Khan, I. Jan, N. Iqbal, and J. Dai, “Uncalibrated eye-in-hand visual servoing: an LMI approach,” Industrial Robot, vol. 38, no. 2, pp. 130–138, 2011. View at Publisher · View at Google Scholar · View at Scopus
  17. G. Ma, Y. Lou, Z. Li et al., “A machine vision based sealing rings automatic grabbing and putting system,” in Proceedings of the 14th International Conference on Industrial Informatics, INDIN, IEEE, Poitiers, France, 2016. View at Publisher · View at Google Scholar
  18. A. W. Catalano, “Light source for uniform illumination of a surface,” 2016, United States Patent Application 20160097511, A1.
  19. P. T. Zacharia, E. K. Xidias, and N. A. Aspragathos, “Task scheduling and motion planning for an industrial manipulator,” Robotics and Computer-Integrated Manufacturing, vol. 29, no. 6, pp. 449–462, 2013. View at Publisher · View at Google Scholar · View at Scopus
  20. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. View at Publisher · View at Google Scholar · View at Scopus
  21. W. Zhang, Y. Bian, and F. Dong, “Comparation of calibration methods for bubbly flow video image,” in Proceedings of the 24th Chinese Control and Decision Conference, CCDC, pp. 2602–2606, IEEE, Taiyuan, China, 2012. View at Publisher · View at Google Scholar · View at Scopus
  22. E. H. B. Smith and C. An, “Effect of ‘ground truth’ on image binarization,” in Proceedings of the 10th IAPR International Workshop on Document Analysis Systems, DAS, pp. 250–254, IEEE, QLD, Australia, 2012. View at Publisher · View at Google Scholar · View at Scopus
  23. Z. Yao and W. Yi, “Curvature aided Hough transform for circle detection,” Expert Systems with Applications, vol. 51, pp. 26–33, 2016. View at Publisher · View at Google Scholar · View at Scopus
  24. A. Jagani and V. Pandey, “Evaluation of circle fitting algorithm for coordinate measuring machine,” International Journal of Material Research, Electronics and Electrical Systems, vol. 4, no. 1-2, pp. 39–57, 2011. View at Google Scholar
  25. S. Qiang, L. Guoying, M. Jingqi, and Z. Hongmei, “An edge-detection method based on adaptive canny algorithm and iterative segmentation threshold,” in Proceedings of the 2nd International Conference on Control Science and Systems Engineering, ICCSSE, IEEE, Singapore, Singapore, 2016. View at Publisher · View at Google Scholar
  26. C. Fan, C. Liu, G. Ding, and L. Xu, “A method of circle curve fitting based on the cumulative error of the radius error,” in Proceedings of the 11th International Conference on Computational Intelligence and Security, CIS, pp. 211–214, IEEE, Shenzhen, China, 2015. View at Publisher · View at Google Scholar · View at Scopus