Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 347921, 7 pages
http://dx.doi.org/10.1155/2013/347921
Research Article

Multicamera Fusion-Based Leather Defects Marking System

1Department of Mechanical Engineering, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
2Mechanical and Systems Research Laboratories, Industrial Technology Research Institute, Hsinchu 31040, Taiwan

Received 12 July 2013; Accepted 28 October 2013

Academic Editor: Liang-Chia Chen

Copyright © 2013 Chao-Ching Ho et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Real-time acquisition of ultra-wide-view video images is essential in leather defects marking systems because it enables either the leather manufacturing process or an inspector to identify leather defects. The major challenge in this work is blending and stitching the multiple camera views to form a single fused image map. In the fused video image, the viewer is quite sensitive to incorrect stitching and blending when the marking pen passes through the border between views. Hence, a set of geometric and photometric corrections has to be applied to each camera view, in addition to careful blending of the image borders. In this paper, we present a real-time image capturing system that uses four cameras at 30 fps and stitches their views together to create a panoramic video with a resolution of  px. The real-time fused video image can be observed on a large screen next to the leather defects marking system, and no manual alignment is necessary, resulting in savings in both labor and time.

1. Introduction

The leather defects marking process relies on the assessment of human experts to classify leather according to its surface quality. Nowadays, the leather manufacturing process is digitized using an image processing system. Consequently, leather image analysis and surface defect detection require the establishment of a coordinate system in order to evaluate the location of defects. However, real natural leather is quite large, and so it is difficult to detect all of the defects that may exist on the leather surface using a single camera, which typically provides only a limited field of view. A piece of leather can be as large as 2 m × 3 m, which means that the resolution of the leather image provided by a single camera is not very high. In recent years, researchers have made great efforts to automate the leather manufacturing process by applying machine vision.

Lerch and Chetverikov [1, 2] proposed a machine vision system for the analysis of hide images for a computer-aided layout design system used in the leather industry. Aranda Penaranda et al. [3] implemented an automatic artificial vision-based system for the inspection, distribution, and water-jet cutting of leather. Anand et al. [4] used a machine vision system as a front end to acquire the image of each irregular sheet and part and thereby solve the two-dimensional stock cutting problem in the leather and apparel industries. Paakkari et al. [5] employed machine vision to provide accurate real-time data on the outline and position of the preform utilized in the nesting process, leading to material and cost savings. Lanzetta and Tantussi [6] proposed a vision-based laboratory prototype for leather trimming to increase the automation level in the leather industry. Yeh and Perng [7] established a reference standard of defects for inspectors to classify leather. Krastev and Georgieva [8, 9] proposed using fuzzy neural networks to determine leather quality; however, in their proposal, exact determination of feature values is difficult to achieve, and misclassification errors pose a problem. Governi et al. [10] designed and built a machine vision-based system for automatically dyeing free-form leather patch contours. He et al. [11] proposed using the wavelet transform and the energy of the wavelet coefficients distributed in different frequency channels for leather inspection.

The research cited above deals with many aspects of the leather manufacturing process associated with vision; however, to the best of our knowledge, a multiple camera calibration method capable of increasing the resolution of the systems discussed above is not yet available. For large leather machines, multiple camera fusion is the solution for constructing high-quality, high-resolution image maps [12]. In addition to the development of an effective defect classification algorithm, the other main task tackled in our work is the development of a highly accurate multiple camera fusion system. Detection accuracy is affected by the distortion of the camera lenses; further, high fusion quality must be ensured when processing a number of leather patches with multiple cameras.

In this paper, to address these problems, we propose a multiple-image fusion method that utilizes four cameras and homography matrices to calculate overlapping pixels and finally implements boundary resampling to blend the images. The rest of this paper is organized as follows. A camera calibration method capable of compensating for optical distortions caused by the camera lenses is presented in Section 2. Multiple camera fusion and the results obtained from evaluation of our proposed system are discussed in Section 3. Finally, we conclude this paper in Section 4.

2. Calibration of the Leather Defects Marking System

Our proposed leather defects marking system comprises the following main parts: a machine vision system consisting of illumination devices and four acquisition cameras that facilitate the acquisition of images of any piece of leather placed on the working plane. The illumination devices are designed to highlight the differences between the leather and the supporting working plane. Image acquisition is conducted using four commercial cameras placed above the working plane at a distance of approximately 1200 mm. The size of the working plane is 2 m × 3 m; hence, the spatial resolution of the cameras is approximately 2 mm/px. Each of the cameras is connected to a graphics card, with a resolution of  px, in a personal computer (PC). The overall system is depicted in Figure 1. The leather is positioned on a bright white panel table to enhance the contrast with most leather types.

Figure 1: The proposed leather defects marking system.

The system requires precalibration to compensate for lens distortions and camera misalignments. In our proposed leather defects marking system, the video signal capturing process is conducted using four cameras with a pixel resolution of . The captured synchronized video frames are transmitted via a grabber card to a PC, which functions as the image processor, and are then buffered in the PC's system memory before processing. However, camera calibration is a fundamental step that must be completed before any reliable image processing, let alone defect marking, can be performed. Camera calibration is used to obtain the intrinsic parameters of the cameras. The intrinsic parameters, which are independent of a camera's position in the physical environment, describe the camera's focal length, principal point, and distortion coefficients. Distortion compensation is carried out by means of a flat calibration pattern. The calibration methods used here employ digital checkerboard-patterned images; the pattern is a checkerboard whose checkers have a 100 mm edge length. A set of 10 images of the calibration pattern in different positions is acquired for each individual camera, as illustrated in Figure 2. To verify the calibration results, raw images of another checkerboard are captured for each camera. The raw images from the individual cameras, positioned at top left, top right, bottom left, and bottom right, are shown in Figure 3. The undistorted images for the individual cameras, obtained following distortion compensation, are shown in Figure 4.
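
The distortion coefficients mentioned above enter through the standard radial (Brown-Conrady) model. The following is a minimal sketch of how distortion compensation can be performed on normalized image coordinates; the coefficient values k1 and k2 and the sample point are hypothetical, not the values estimated for our cameras:

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply the Brown-Conrady radial distortion model to normalized
    image coordinates xy (shape (N, 2))."""
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted point by the distortion factor evaluated at
    the current undistorted estimate."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy ** 2, axis=1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy

# Hypothetical coefficients and a sample normalized point.
k1, k2 = -0.25, 0.07
p = np.array([[0.4, 0.3]])
p_d = distort(p, k1, k2)       # where the lens actually images the point
p_u = undistort(p_d, k1, k2)   # compensated position, close to p
```

The fixed-point inversion converges quickly for moderate distortion; production calibration toolchains estimate k1 and k2 (and tangential terms) from the checkerboard views rather than assuming them.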

Figure 2: System calibration to recognize intrinsic parameters and homography matrices of cameras.
Figure 3: Raw images from the individual cameras: (a) top left, (b) top right, (c) bottom left, and (d) bottom right.
Figure 4: Undistorted images for the individual cameras, after distortion compensation: (a) top left, (b) top right, (c) bottom left, and (d) bottom right.

Large pieces of leather are partially captured by the four separate cameras; the image of each leather object can then be recombined by joining the multiple images from the four cameras in the same coordinate system for better recognition. To stitch four images that are not aligned on the same coordinate plane as a result of camera placement, registration and alignment matrices have to be calculated to facilitate their matching, as shown in Figure 5. The homography matrix of each calibrated camera can be calculated linearly with the help of one array that stores the coordinates of reference points chosen in the digital checkerboard images and another array that contains the corresponding points in the physical world. The collection of reference points is selected on the checker-patterned model plane for the four cameras used in this paper. A patterned board whose checkers have a 100 mm edge length is placed on the working plane surface, as depicted in Figure 6.

Figure 5: A large piece of leather is partially captured by the four separate cameras, producing four images that are not aligned on the same coordinate plane as a result of camera placement.
Figure 6: A patterned board whose checkers have a 100 mm edge length is placed on the working plane surface.

Part of the planar checker pattern positioned in front of the four cameras is acquired by each camera. The transformation calculation comprises the calculation of a homography matrix between the partial checker pattern plane and the image from each of the four cameras (top left, top right, bottom left, and bottom right). To express the homography mapping in terms of matrix multiplication, homogeneous coordinates are used to express both the viewed point and the point on the image plane to which it is mapped. Suppose that M' = [X, Y, 1]^T and m' = [u, v, 1]^T are the homogeneous coordinates of an arbitrary corner point on the checker pattern plane and in the image plane, respectively. The homography can then be expressed as

s m' = H M',

where s is an arbitrary scale factor. After each checker pattern corner is located, the corresponding indices are stored for the homography calculation. The homography transform operates on homogeneous coordinates. Without loss of generality, we can define the planar scene so that points on the pattern plane have zero depth (i.e., Z = 0); a coordinate defined on the plane is then fully described by (X, Y). The homography matrix H that maps a planar object's points onto the image plane is a three-by-three matrix. On the basis of homography, the four cameras in different locations and orientations are registered to recover the rigid rotation and translation of the planar pattern; the homography matrices of the other three cameras are expressed in the same way. Only reasonable overlapping of the planar pattern in the field of view among the four neighboring cameras is necessary, and therefore the images from the individual cameras can be registered and aligned.
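
The linear homography calculation described above can be sketched with the direct linear transform (DLT): each correspondence between a pattern point (X, Y) and an image point (u, v) contributes two rows to a system whose null-space vector is the homography. The pixel coordinates in `dst` below are hypothetical observations, not values measured by our system:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with s*[u, v, 1]^T = H*[X, Y, 1]^T
    from >= 4 point correspondences via the direct linear transform."""
    A = []
    for (X, Y), (u, v) in zip(src, dst):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # H (stacked row-wise) is the last right-singular vector of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # remove the arbitrary scale factor s

def apply_h(H, pt):
    """Map a planar point through H and dehomogenize."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]

# Four corners of a 100 mm checker square (pattern plane, in mm) and
# hypothetical pixel locations observed by one camera.
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(12.0, 15.0), (118.0, 18.0), (121.0, 124.0), (9.5, 120.0)]
H = homography_dlt(src, dst)
```

With exactly four correspondences the fit is exact; in practice many checker corners are used and the SVD gives the least-squares solution.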
The fused image is produced by stitching the registered and aligned images; the overlapping areas between two adjacent regions are then calculated, and a new pixel set is obtained through weighted linear interpolation between the two overlapping pixel sets. The seamlessly blended images, based on the four homography matrices, provide a complete image map with a large field of view, as shown in Figure 7.
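
The weighted linear interpolation over an overlap can be sketched as follows for two horizontally adjacent grayscale strips; the strip sizes and intensities are illustrative only:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Blend two horizontally adjacent image strips whose last/first
    `overlap` columns cover the same scene region. Inside the overlap
    the weight ramps linearly from the left image to the right one."""
    w = np.linspace(0.0, 1.0, overlap)            # 0 -> left, 1 -> right
    mixed = (1.0 - w) * left[:, -overlap:] + w * right[:, :overlap]
    return np.hstack([left[:, :-overlap], mixed, right[:, overlap:]])

# Two hypothetical 4x6 grayscale strips with a 3-column overlap.
a = np.full((4, 6), 100.0)
b = np.full((4, 6), 200.0)
fused = blend_overlap(a, b, overlap=3)   # 4x9 result, ramping 100 -> 200
```

The linear ramp suppresses visible seams from residual photometric differences between cameras; the same scheme is applied along the vertical borders between the top and bottom camera pairs.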

Figure 7: The images from the four cameras are fused together to provide a complete image map.

3. Results and Discussion

We used the 400 checkers on the patterned board to verify our calibration results. As shown in Figure 8, the digital checkerboard images are characterized by known geometric entities, namely, the maximum width, half of the width, the maximum height, the checker edge length, and the two diagonal lengths. A conventional global alignment approach that employs a manual fusion process, after hours of careful alignment along the vertical planes of the checkerboard, obtains a fusion accuracy with a mean error of 17.28 mm and a standard deviation of 12.20; its mean absolute percentage error is 1.1%. Our proposed automatic registration approach, which employs the homography-based fusion process, results in a fusion accuracy with a mean error of 0.37 mm and a standard deviation of 0.41, and its mean absolute percentage error is 0.1%. A comparison of our automatic registration approach with the manual fusion process is given in Table 1.
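
The error statistics above (mean error, standard deviation, and mean absolute percentage error) can be computed from the measured versus known checker dimensions as sketched here; the two measurements in the example are hypothetical, not the values behind Table 1:

```python
import numpy as np

def fusion_errors(measured, true):
    """Absolute errors of measured checker dimensions against the known
    geometry, plus the mean absolute percentage error (MAPE)."""
    measured = np.asarray(measured, dtype=float)
    true = np.asarray(true, dtype=float)
    err = np.abs(measured - true)              # absolute error per entity, mm
    mape = 100.0 * np.mean(err / true)         # percentage error, %
    return err.mean(), err.std(), mape

# Hypothetical measurements (mm) of a 100 mm checker edge and a
# 2000 mm board width taken from the fused image map.
mean_err, std_err, mape = fusion_errors([100.4, 1999.2], [100.0, 2000.0])
```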

Table 1: Comparison of the manual fusion process to our automatic registration approach.
Figure 8: The digital checkerboard images are characterized by known geometric entities, namely, the maximum width, half of the width, the maximum height, the checker edge length, and the two diagonal lengths.

Following completion of the fused image map, the defect images are segmented, and the areas of differing quality are marked manually with a defect marking pen. Tracking the marking pen in a complex environment and locating the marking dot are trivial tasks for humans, even if the background is cluttered and colorful. To determine the characteristics of the marking pen, image preprocessing is required in order to search for the marking dot. To simplify image analysis, the preprocessing steps use morphological subroutines (HSV color transfer, erosion, dilation, and majority thresholding) to reduce disturbances, all of which are performed prior to motion determination. Many other objects with various colors complicate the identification process, and it is difficult for a computer to understand the concept of each color; therefore, we applied the HSV color model. In order to distinguish the target object from the background environment, information about the hue level of the marking pen must be obtained. After acquisition, each image is individually processed to determine the background and follow the path of the marking pen. At each acquisition, the dots marked by the marking pen are converted from pixels to millimeters and from the image reference system to the machine absolute system. Processing the fused image takes less than 20 ms. As shown in Figure 9, our proposed machine vision system has been fully integrated into the commercial CAD program used in the leather industry via a DXF link. Demonstration videos of our experiments are available at http://youtu.be/z_TD8L9EH80 and http://youtu.be/Kfu8vyvd6-4. Following the marking of the defects by the marking system, the data are finally passed to the automated process of the extended nesting module we created for the leather production process, as shown in Figure 10.
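
The hue-based search for the marking dot can be sketched as follows. For portability the sketch uses the Python standard library's colorsys rather than our actual preprocessing pipeline, and the hue window, saturation/value floors, and test frame are all hypothetical:

```python
import colorsys
import numpy as np

def pen_dot_centroid(rgb, hue_lo, hue_hi, min_s=0.4, min_v=0.2):
    """Locate the marking dot in an RGB image (H, W, 3, values in [0, 1])
    by thresholding in HSV space and taking the centroid of the mask."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            hue, sat, val = colorsys.rgb_to_hsv(*rgb[y, x])
            # Keep pixels whose hue matches the pen and that are
            # saturated/bright enough to reject the gray background.
            mask[y, x] = hue_lo <= hue <= hue_hi and sat >= min_s and val >= min_v
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Hypothetical 8x8 frame: gray background with a red 2x2 marking dot.
img = np.full((8, 8, 3), 0.5)
img[3:5, 5:7] = [1.0, 0.0, 0.0]          # pure red: hue 0, full saturation
cx, cy = pen_dot_centroid(img, hue_lo=0.0, hue_hi=0.05)
```

In the real system the mask would additionally pass through erosion/dilation before the centroid is taken, and the centroid would then be converted from pixels to millimeters via the calibrated mapping.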

Figure 9: The proposed multicamera fused leather defects marking system is integrated with a CAD system in the leather industry.
Figure 10: The nesting module developed by us for the leather production process.

4. Conclusions

In this paper, we proposed an approach that solves the problem of real-time video fusion in a multicamera system for leather defect detection. We achieve complete multicamera calibration by applying the homography transformation, employing homography matrices to calculate overlapping pixels and finally implementing boundary resampling to blend the images. Experimental results show that our proposed homography-based registration and stitching method is effective for multicamera fused leather defects marking and nesting systems. Our method improves the mean absolute percentage error from the 1.1% obtained by the manual process to 0.1% with automatic registration.

Acknowledgments

This research was partially supported by the National Science Council in Taiwan under Grant NSC 101-2221-E-224-008 and Industrial Technology Research Institute under Grants 102-283 and 102-289. Special thanks are due to anonymous reviewers for their valuable suggestions.

References

  1. A. Lerch and D. Chetverikov, “Knowledge-based line-correction rules in a machine-vision system for the leather industry,” Engineering Applications of Artificial Intelligence, vol. 4, no. 6, pp. 433–438, 1991.
  2. A. Lerch and D. Chetverikov, “Correction of line drawings for image segmentation in leather industry,” in Proceedings of the 11th IAPR International Conference on Computer Vision and Applications, vol. 1, pp. 45–48, 1992.
  3. J. D. Aranda Penaranda, J. A. Ramos Alcazar, L. M. Tomas Balibrea, J. L. Munoz Lozano, and R. Torres Sanchez, “Inspection and measurement of leather system based on artificial vision techniques applied to the automation and waterjet cut direct application,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 863–867, October 1994.
  4. S. Anand, C. McCord, R. Sharma, and T. Balachander, “An integrated machine vision based system for solving the nonconvex cutting stock problem using genetic algorithms,” Journal of Manufacturing Systems, vol. 18, pp. 396–414, 1999.
  5. J. Paakkari, H. Ailisto, M. Niskala, M. Mäkäräinen, and K. Väinämö, “Machine vision guided waterjet cutting,” in Diagnostic Imaging Technologies and Industrial Applications, vol. 3827 of Proceedings of SPIE, pp. 44–51, Munich, Germany, 1999.
  6. M. Lanzetta and G. Tantussi, “Design and development of a vision based leather trimming machine,” in Proceedings of the 6th International Conference on Advanced Manufacturing Systems and Technology, pp. 561–568, 2002.
  7. C. Yeh and D.-B. Perng, “A reference standard of defect compensation for leather transactions,” The International Journal of Advanced Manufacturing Technology, vol. 25, no. 11-12, pp. 1197–1204, 2005.
  8. K. Krastev and L. Georgieva, “Identification of leather surface defects using fuzzy logic,” in Proceedings of the International Conference on Computer Systems and Technologies, Varna, Bulgaria, 2005.
  9. K. Krastev and L. Georgieva, “A method for leather quality determination using fuzzy neural networks,” in Proceedings of the International Conference on Computer Systems and Technologies, Veliko Tarnovo, Bulgaria, 2006.
  10. L. Governi, Y. Volpe, M. Toccafondi, and M. Palai, “Automated dyeing of free-form leather patch edges: a machine vision based system,” in Proceedings of the International conference on Innovative Methods in Product Design, Venice, Italy, 2011.
  11. F. Q. He, W. Wang, and Z. C. Chen, “Automatic visual inspection for leather manufacture,” Key Engineering Materials, vol. 326–328, pp. 469–472, 2006.
  12. M. Lanzetta and G. Tantussi, “Design and development of a vision based leather trimming machine,” in Proceedings of the 6th International Conference on Advanced Manufacturing Systems and Technology, pp. 561–568, 2002.