Abstract

In order to improve the visual effect of the around view monitor (AVM), we propose a novel ring fusion method to reduce the brightness difference among fisheye images and achieve a smooth transition around the stitching seams. Firstly, an integrated corner detection method is proposed to automatically detect corner points for image registration. Then, equalization processing is used to reduce the brightness difference among images, and the colors of the images are matched according to the ring fusion method. Finally, distance weighting is used to blend the images around the stitching seams. Based on this algorithm, we have built a Matlab toolbox for image blending. 100% of the required corners are detected accurately and fully automatically. The transition around the stitching seams is very smooth, with no obvious stitching trace.

1. Introduction

In the past decades, because of the rapid growth of road transportation and private cars, road traffic safety has become an important problem in society [1–3]. Statistics on national traffic accidents show that accidents caused by drivers' limited vision, delayed reaction, judgment errors, and improper operation account for up to 40% of all accidents [3–5]. To address this problem, advanced driving assistance systems (ADAS) have received more and more attention, such as the lane departure warning system (LDWS), forward collision warning system (FCWS), blind spot monitoring (BSD), and around view monitor (AVM) [6, 7]. Among them, the AVM provides the driver with 360-degree video information around the vehicle body. In parking lots and in crowded city traffic, it reduces the driver's visual blind area, helps the driver better judge the road conditions around the vehicle, avoids collisions with nearby pedestrians and vehicles, and makes driving safer and more convenient [8–10].

The key technologies of AVM are fisheye image correction and image fusion. In this paper we focus on image fusion, which includes image registration and blending. There are three main types of registration methods: region matching methods, transform domain based methods, and feature matching methods. Among them, feature based registration is fast and accurate, and some features are robust to image deformation, illumination change, and noise, so it is a common method for image registration. In [11, 12], the overlapping points of two corrected images are extracted by matching SIFT features, which are invariant to scale and direction. The feature operators usually used to extract overlapping corner points include Harris [13, 14], Canny [15, 16], and Moravec [17, 18]. These feature operators are mainly used in the registration of general image mosaics. However, in an AVM system, we match images by scene calibration. The methods in [19, 20] detect the corner points of checkerboard patterns by quadrilateral join rules. In [21, 22], the initial corner set is obtained by an improved Hessian corner detector, and false points in the initial set are eliminated using the intensity and geometric features of the chessboard pattern. But these two methods are mainly designed for the chessboard pattern used in camera calibration and do not meet the requirements of the calibration pattern used in scene calibration. Due to the inherent differences among cameras and their installation positions, there are exposure differences between the images, which lead to obvious stitching traces. Therefore, image blending is necessary after registration. In [23, 24], the optimal seam is found by minimizing the mean square variance of pixels in the region, and adjacent interpolation is used to smooth the stitching, but this method is not suitable for scenes with too large an exposure difference. In [25, 26], several images captured by the same camera with different parameters from the same viewpoint are fused by a weighting method, but this method can only adjust brightness and cannot reduce color difference. In [27, 28], seamless stitching is achieved by tone compensation near the optimal seam, but the stitching seams of an around view system are fixed, so the AVM image cannot be fully fused by this method.

Therefore, to fully meet the needs of scene calibration, we propose an integrated corner detection method that automatically detects all corners during image registration. To fully account for the inherent differences among cameras and their installation positions, we propose a ring fusion method to blend the images from the 4 fisheye cameras. The main contributions of this paper are the following: limitation conditions on minimum area and shape are used to successfully remove redundant contours during corner detection; the accuracy of corner position extraction is improved by first detecting corners in the fisheye images and then calculating the corresponding positions in the corrected images; a color matching method and a ring-shaped scheme are used in image blending to achieve a smoother transition, which makes it possible to seamlessly fuse images with large differences in exposure; and a Matlab toolbox for image blending in the AVM system is designed.

The rest of this paper is organized as follows: Section 2 introduces AVM architecture. Section 3 describes the methodology of image registration and blending in detail. Section 4 describes the experiment result of our method. Conclusions are offered in Section 5.

2. AVM Architecture

The algorithm flow of the AVM is shown in Figure 1. Firstly, we input fisheye images of the calibration scene and detect corner points. The positions of these corner points in the corrected images are then calculated by a correction model; meanwhile, a Look Up Table (LUT) is used to correct the fisheye images and obtain the corrected images. Secondly, the target positions of the corner points in the output image are calculated from the size data of the calibration scene. The positions of the corner points in the corrected images and their target positions are then used to compute the homography matrix H. Finally, we project the corrected images into the coordinate system of the output image using the homography matrix H and blend them with the ring fusion method, which is the focus of this paper.

In our experiment, a Volkswagen Magotan has been used. The length of the vehicle is 4.8 m, and the width is 1.8 m. We use fisheye cameras with a 180-degree view angle and a focal length of 2.32 mm. The 4 fisheye cameras are mounted on the front, rear, left, and right sides of the vehicle. Each fisheye camera captures images at a fixed resolution, and the size of the AVM output image is defined by the user. The proposed method was developed on a PC; the simulation processor is an Intel(R) Core(TM) i7-6700HQ CPU at 2.60 GHz and the simulation software is MATLAB.

3. Methodology

3.1. Scene Calibration

The calibration scene is set up for image registration in the next step. The distance between the vehicle body and the calibration pattern is 30 cm at the front and rear positions and 0 cm at the left and right positions. The reference point of the front pattern is A, and that of the rear pattern is F. Point A is made collinear with the left side of the vehicle body and F with the right side. There are 12 point positions needed in every view angle, as shown in Figure 2.

The size data which need to be measured include the following:
(1) Car length: the length of line AE.
(2) Car width: the length of line AB.
(3) Offset: the length of line AC or line BD.

After the measurement of the size data, the target positions of the corner points in the coordinate system of the output image are calculated from the following parameters: the size data measured above, the size of the output image defined by the user, and the sizes of the calibration pattern and the vehicle. The calculation process is the same for all points, so we take the target position of point 1 (as shown in Figure 1) as an example. Firstly, we calculate the position of point 1 in the calibration scene, as shown in (1), where the origin of the calibration scene is located at its center, as shown in Figure 1; $(x_1, y_1)$ denotes the position of point 1, $W$ denotes the vehicle width, $L$ denotes the vehicle length, $w_e$ is the white edge width, and $w_b$ is the width of the big black box.

Secondly, we use the position in the calibration scene to calculate the position in the coordinate system of the output image, as shown in

$$(x_o, y_o) = s\,(x_s, y_s), \qquad s = \frac{W_o}{W_s} \tag{2}$$

where $s$ denotes the scaling factor from the calibration scene to the coordinate system of the output image, $W_o$ denotes the width of the output image, $W_s$ denotes the width of the calibration scene, $(x_o, y_o)$ denotes the position in the coordinate system of the output image, and $(x_s, y_s)$ denotes the position in the calibration scene.
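As a minimal sketch of the mapping in (2), assuming a square output image, an origin shift from the scene center to the image corner, and hypothetical dimensions (W_SCENE and W_OUT are placeholders, not the paper's values):

```python
import numpy as np

# Minimal sketch of the scene-to-output mapping in equation (2).
W_SCENE = 800.0      # width of the calibration scene in cm (assumed)
W_OUT = 1000         # width of the output image in pixels (assumed)
S = W_OUT / W_SCENE  # scaling factor s = W_o / W_s

def scene_to_output(x_s, y_s):
    """Map a point from scene coordinates (origin at the scene center)
    to pixel coordinates in the output image (origin at the top-left)."""
    x_o = S * x_s + W_OUT / 2.0  # scale, then shift origin to the image corner
    y_o = S * y_s + W_OUT / 2.0  # assumes a square output image
    return x_o, y_o

# Example: a corner point 250 cm left of and 300 cm ahead of the scene center.
print(scene_to_output(-250.0, -300.0))  # -> (187.5, 125.0)
```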

3.2. Image Registration Based on Corner Detection
3.2.1. Detect and Calculate the Corner Point Positions

Firstly, the corners are detected automatically in the fisheye image by the integrated corner detection method. Secondly, the corresponding positions of these corners in the corrected image are calculated using the correction model. Finally, we save the positions in the corrected image for the subsequent computation of the homography matrix.

Algorithm steps of the integrated corner detection method are as follows (a Python sketch of steps (4) and (5) is given after this list):
(1) Input fisheye images of the calibration scene from all 4 cameras.
(2) Use the Rufli corner detection method to detect the corners in the chessboard array.
(3) Based on the relative position between the big black box and the chessboard array, use the corners detected in step (2) to obtain the Region Of Interest (ROI) of the big black box.
(4) Preprocess the ROI by adaptive binarization, using the "adaptiveThreshold" function in OpenCV, and a morphological closing operation to denoise.
(5) Obtain the contour of the big black box from the ROI and the positions of the contour vertices with the "findContours" function in OpenCV. Then remove redundant contours by the following two limits:
(a) Limit the minimum area of the contour: according to the size ratio of the chessboard array to the big black box and their relative positions, the threshold of minimum area is calculated as

$$T = k \cdot \bar{s} \tag{3}$$

where $T$ denotes the threshold of the big black box area, $\bar{s}$ denotes the average area of a small box in the chessboard array, and $k$ is the known area ratio between the big black box and a small box.
(b) Limit the contour shape: according to the location of the big black box and the imaging features of the fisheye camera, the big black box should have a fixed shape. The shape restrictions take the form

$$\left| \frac{d_1}{d_2} - 1 \right| < T_d, \qquad \left| \frac{a_1}{a_2} - 1 \right| < T_a, \qquad \frac{C^2}{S} < T_c, \qquad \frac{S}{S_e} > T_s \tag{4}$$

where $d_1$ and $d_2$ denote the diagonal lengths of the contour, $a_1$ and $a_2$ denote the lengths of adjacent sides of the contour, $C$ denotes the perimeter of the contour, $S$ denotes the area of the contour, and $S_e$ denotes the area of the envelope of the contour.
(6) Use the SUSAN method to locate the exact positions of the contour vertices around the positions obtained in step (5).
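The following is a rough OpenCV sketch of steps (4) and (5). The thresholds (AREA_RATIO, the binarization block size, and the 0.9 hull-fill ratio) are illustrative assumptions, not the paper's calibrated values:

```python
import cv2
import numpy as np

AREA_RATIO = 20.0  # assumed area ratio of big black box to one chessboard square

def find_big_black_box(roi_gray, avg_square_area):
    # Step (4): adaptive binarization plus morphological closing to denoise.
    binary = cv2.adaptiveThreshold(roi_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Step (5): extract contours and remove redundant ones (OpenCV 4.x API).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    min_area = AREA_RATIO * avg_square_area  # minimum-area limit, eq. (3)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue  # too small: redundant contour
        # Shape limit: a quadrilateral whose area nearly fills its convex hull.
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        hull_area = cv2.contourArea(cv2.convexHull(c))
        if len(approx) == 4 and hull_area > 0 and area / hull_area > 0.9:
            candidates.append(approx.reshape(-1, 2))  # contour vertices
    return candidates
```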

3.2.2. Image Registration and Coordinate Unification

After the scene calibration (Section 3.1) and corner detection, the corner positions in the coordinate systems of the corrected images and their target positions in the coordinate system of the output image are obtained. Then we need to unify the coordinate systems of the 4 corrected images into the coordinate system of the output image, as shown in Figure 3. The specific process is as follows. Firstly, we calculate the homography transform matrix, as shown in (5); the form of this matrix is shown in (6):

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{5}$$

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \tag{6}$$

where $(x, y)$ denotes a corner position in the coordinate system of the corrected image and $(x', y')$ denotes its target position in the coordinate system of the output image.

Secondly, we project every pixel of the 4 corrected images into the coordinate system of the output image, as shown in

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} \sim H \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \tag{7}$$

where $(u, v)$ denotes a pixel position in the coordinate system of the corrected image and $(u', v')$ denotes the corresponding pixel position in the coordinate system of the output image.
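A minimal OpenCV sketch of equations (5)-(7) follows; the 4 point correspondences, the stand-in image, and the output size are synthetic placeholders (the paper uses 12 calibration points per camera):

```python
import cv2
import numpy as np

# Estimate H from point correspondences, then warp one corrected image
# into the output-image coordinate system.
src_pts = np.float32([[100, 200], [400, 210], [120, 420], [410, 430]])  # corrected image
dst_pts = np.float32([[150, 100], [450, 100], [150, 400], [450, 400]])  # output image

H, _ = cv2.findHomography(src_pts, dst_pts)  # homography fit, eqs. (5)-(6)

corrected = np.zeros((600, 800, 3), np.uint8)             # stand-in for a corrected image
warped = cv2.warpPerspective(corrected, H, (1000, 1000))  # projection, eq. (7)
```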

3.3. Image Blending

As the corrected images from the 4 cameras differ from each other in brightness, saturation, and color, we blend them using the ring fusion method to improve the visual effect of the output image.

The detailed process is shown as follows.

(1) Equalization Preprocessing. The "imadjust" function in Matlab is used for equalization preprocessing to reduce the brightness difference among images. For example, the original image of the left view angle is shown in Figure 4(a) and the processing result in Figure 4(b).
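The paper uses Matlab's imadjust; a rough numpy analogue, stretching each channel linearly between its 1st and 99th percentiles (the percentile limits are an assumption, not the paper's setting), could look like this:

```python
import numpy as np

def imadjust_like(img):
    """img: uint8 array of shape (H, W, C). Returns the stretched uint8 image."""
    out = np.empty(img.shape, dtype=np.float32)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], (1, 99))
        # Map [lo, hi] to [0, 1], clipping the saturated tails.
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```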

(2) Ring Color Matching

Step 1 (spatial transformation). As the RGB space has strong inter-channel correlation, it is not suitable for image color processing. So we transform the RGB space into the lαβ space, where the correlation between the three channels is smallest. The space conversion consists of three transformations, namely, RGB → CIE XYZ → LMS → lαβ.
Firstly, from RGB space to CIE XYZ space, one has

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.5141 & 0.3239 & 0.1604 \\ 0.2651 & 0.6702 & 0.0641 \\ 0.0241 & 0.1228 & 0.8444 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{8}$$

Secondly, from CIE XYZ space to LMS space, one has

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.3897 & 0.6890 & -0.0787 \\ -0.2298 & 1.1834 & 0.0464 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{9}$$

Since the data are scattered in the LMS space, they are further converted to a logarithmic space with base 10, as shown in (10). This makes the data distribution not only more convergent but also in line with the results of psychophysical research on human color perception:

$$\hat{L} = \log_{10} L, \qquad \hat{M} = \log_{10} M, \qquad \hat{S} = \log_{10} S \tag{10}$$

Finally, from LMS space to lαβ space, one has (11). This transformation is based on principal component analysis (PCA) of the data, where l is the first principal component, α is the second, and β is the third:

$$\begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 1/\sqrt{3} & 0 & 0 \\ 0 & 1/\sqrt{6} & 0 \\ 0 & 0 & 1/\sqrt{2} \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -2 \\ 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} \hat{L} \\ \hat{M} \\ \hat{S} \end{bmatrix} \tag{11}$$

After the above three steps, the conversion from RGB to lαβ space is complete.
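For concreteness, a minimal numpy sketch of (8)-(11) follows. It uses the combined RGB→LMS matrix of the standard lαβ (Reinhard et al.) color transform, which we assume is the transform intended here:

```python
import numpy as np

RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)]) @ \
          np.array([[1.0, 1.0, 1.0],
                    [1.0, 1.0, -2.0],
                    [1.0, -1.0, 0.0]])

def rgb_to_lab(img):
    """img: float array (H, W, 3), RGB in [0, 1]. Returns l-alpha-beta values."""
    lms = img @ RGB2LMS.T                      # eqs. (8)-(9) combined
    log_lms = np.log10(np.maximum(lms, 1e-6))  # eq. (10), guarding log10(0)
    return log_lms @ LMS2LAB.T                 # eq. (11)
```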

Step 2 (color registration). Firstly, the mean and standard deviation of every channel in lαβ space are calculated according to

$$\mu = \frac{1}{N} \sum_{i=1}^{N} p_i, \qquad \sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (p_i - \mu)^2} \tag{12}$$

where $\mu$ denotes the mean value, $N$ denotes the total number of pixels, $p_i$ denotes the value of pixel $i$, and $\sigma$ indicates the standard deviation.
Secondly, the color matching factors are calculated according to

$$k_{BA}^{c} = \frac{\sigma_B^{c}}{\sigma_A^{c}} \tag{13}$$

where $k_{BA}^{c}$ denotes the factor that matches the color of image A to image B in channel $c$, $\sigma_A^{c}$ denotes the standard deviation of image A in channel $c$, and $\sigma_B^{c}$ denotes the standard deviation of image B in channel $c$; the l, α, and β channels are treated alike.
Finally, we match the color of the images, as shown in

$$\hat{p}_A^{c}(i) = k_{BA}^{c} \left( p_A^{c}(i) - \mu_A^{c} \right) + \mu_B^{c} \tag{14}$$

where $\hat{p}_A^{c}(i)$ denotes the pixel value of image A after color matching in channel $c$, $k_{BA}^{c}$ denotes the color matching factor in channel $c$, $p_A^{c}(i)$ denotes the original pixel value of image A in channel $c$, $\mu_A^{c}$ denotes the average pixel value of image A in channel $c$, and $\mu_B^{c}$ denotes the average pixel value of image B in channel $c$; the other channels are treated in the same way.

Step 3 (global optimization). Then, we match the colors of the images from the 4 cameras anticlockwise as follows to reach a globally optimized result. Denoting the 4 corrected images by $I_1$, $I_2$, $I_3$, and $I_4$ in anticlockwise order, we first match the color of $I_2$ to $I_1$, then $I_3$ to $I_2$, then $I_4$ to $I_3$, and finally $I_1$ to $I_4$, which forms a ring, as shown in Figure 5. The processing result for the left view is shown in Figure 4(c).
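The following numpy sketch combines Steps 2 and 3: match_color implements (13)-(14), and ring_match applies it anticlockwise around the ring. The image list and its ordering are illustrative assumptions; inputs are taken to be float arrays in lαβ space (e.g., from rgb_to_lab above):

```python
import numpy as np

def match_color(src, ref):
    """Match the channel-wise mean and standard deviation of src to ref."""
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        mu_s, sigma_s = src[..., c].mean(), src[..., c].std()
        mu_r, sigma_r = ref[..., c].mean(), ref[..., c].std()
        k = sigma_r / max(sigma_s, 1e-6)               # matching factor, eq. (13)
        out[..., c] = k * (src[..., c] - mu_s) + mu_r  # eq. (14)
    return out

def ring_match(images):
    """images: [I1, I2, I3, I4] in anticlockwise order."""
    for i in range(1, 4):
        images[i] = match_color(images[i], images[i - 1])  # I2->I1, I3->I2, I4->I3
    images[0] = match_color(images[0], images[3])          # finally I1 -> I4
    return images
```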

(3) Weighted Blending. After color matching, the visual effect of the output image is greatly improved, but around the stitching seams between different corrected images it is still not satisfactory. Therefore, we use (15) to ensure a smooth transition:

$$P(i) = \frac{d(i)}{D} P_A(i) + \left( 1 - \frac{d(i)}{D} \right) P_B(i) \tag{15}$$

where $P(i)$ denotes the pixel value in the output image and $i$ is the position index of the pixel, $P_A(i)$ and $P_B(i)$ denote the corresponding pixel values in corrected images A and B, $d(i)$ denotes the distance from pixel $i$ to the seam, and $D$ denotes the width of the transition field, as shown in Figure 5. The interpolation result of the left view angle image is shown in Figure 4(d).
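A minimal sketch of (15) for a single vertical seam; the seam position seam_x and the transition width D are hypothetical values:

```python
import numpy as np

def blend_across_seam(img_a, img_b, seam_x, D=40):
    """Distance-weighted blend of two aligned images across a vertical seam."""
    h, w = img_a.shape[:2]
    x = np.arange(w, dtype=np.float32)
    # Weight rises linearly from 0 to 1 over the transition field of width D.
    wgt = np.clip((x - (seam_x - D / 2.0)) / D, 0.0, 1.0)[None, :, None]
    return ((1.0 - wgt) * img_a + wgt * img_b).astype(img_a.dtype)
```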

4. Experiment Result

Some details of the experiment have been provided in Section 2 of this paper, so in this section we only present the results. The fisheye images captured by the 4 cameras are shown in Figure 6, and their corresponding corrected images are shown in Figure 7. The corner detection and calculation results are shown in Figure 8, where Figure 8(a) shows the corner positions detected in the distorted image and Figure 8(b) shows the corresponding positions calculated in the corrected image. The integrated corner detection algorithm is compared with several other corner detection algorithms in Table 1.

In Table 1, $N_1$ denotes the number of corner points correctly detected in the chessboard, $N_2$ denotes the number of corner points correctly detected in the big black box, and $N$ denotes the total number of corner points detected in the calibration scene. The Rufli method cannot detect the vertices of the big black box. The Harris and Shi-Tomasi methods cannot detect all target vertices and generate a lot of redundant corners. The integrated corner detection algorithm, in contrast, accurately extracts all the target corner points of the calibration pattern in the scene. These results show that the proposed integrated corner detection algorithm is effective.

The output image is shown in Figure 9: Figure 9(a) is the result before image blending, and Figure 9(b) is the result after image blending. The experimental results show that the proposed algorithm achieves a smooth visual effect around the stitching seams, which proves that our ring fusion method is effective.

5. Conclusion

This paper has proposed a ring fusion method to obtain a better visual effect in an AVM system for intelligent driving. To this end, an integrated corner detection method for image registration and a ring-shaped scheme for image blending have been presented. Experimental results show that the designed approach is satisfactory: 100% of the required corners are detected accurately and fully automatically, and the transition around the fusion seams is smooth, with no obvious stitching trace. However, the images processed in this experiment are static, so in future work we will port the algorithm to a development board for dynamic real-time testing and try to apply the ring fusion method to other applications.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work was supported by the National High Technology Research and Development Program (“973” Program) of China under Grant no. 2016YFB0100903, Beijing Municipal Science and Technology Commission special major under Grant nos. D171100005017002 and D171100005117002, the National Natural Science Foundation of China under Grant no. U1664263, Junior Fellowships for Advanced Innovation Think-Tank Program of China Association for Science and Technology under Grant no. DXB-ZKQN-2017-035, and the project funded by China Postdoctoral Science Foundation under Grant no. 2017M620765.