Abstract

The traditional image stitching method suffers from shortcomings such as ghosting, chromatic aberration, and stitching seams. In view of this, this paper proposes a power function-weighted image stitching method that combines an optimized SURF algorithm with improved cell acceleration. First, the method uses cosine similarity to preliminarily judge the similarity of the feature points and then uses two-way consistency mutual selection to filter the feature point pairs again, eliminating the incorrect matching points found in the reverse matching. The MSAC algorithm is then used to perform fine matching. Finally, the power function-weighted fusion algorithm is used to calculate the weight of each cell's center point, and this cell-accelerated power function weight is used to perform the final image fusion. The experimental results show that the matching accuracy of the proposed method is about 11 percentage points higher than that of the traditional SURF algorithm, and the matching time is reduced by about 1.6 s. In the image fusion stage, images with different brightness, angles, resolutions, and scales are first selected to verify the effectiveness of the proposed method. The results show that the proposed method effectively eliminates ghosting and stitching seams. Compared with the traditional fusion algorithms, the time consumption is reduced by at least 2 s, the mean squared error is reduced by about 1.32%∼1.48%, and the information entropy is improved by about 0.98%∼1.70%. The proposed method performs better in matching accuracy and fusion effect and achieves better stitching quality.

1. Introduction

Image stitching technology includes image registration and image fusion. The goal of image stitching is to form a seamless panoramic image with a wide viewing angle from multiple partially overlapping images while ensuring that the image is not distorted [1]. These images can be taken at different times, from different heights and viewing angles, and with different sensors. Image stitching technology is widely used in computer graphics, computer vision, virtual reality, medical image analysis, and other fields [2].

Image registration is the core of image stitching; it includes the extraction and matching of feature points. There are many ways to extract feature points, such as Harris, SIFT, SURF, FAST, and ORB. The SURF algorithm can not only adapt to image rotation and zooming but also remain stable under changes in viewing angle, scale, and illumination. Therefore, the proposed method uses the SURF algorithm [3] to extract feature points. SURF is an improvement of SIFT (scale-invariant feature transform); it is unaffected by image rotation, scaling, and brightness changes and has a faster calculation speed [4]. In 2012, Pang et al. proposed a fully radiation-invariant SURF algorithm that combines the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and affine SIFT (ASIFT), exploiting the affine invariance of ASIFT and the high efficiency of SURF [5]. An improved SURF algorithm was proposed that uses the sign of the Hessian matrix trace, which saves time in feature point detection and matching [6]. Another improved SURF algorithm reduced the SURF feature descriptor from 64 to 36 dimensions; however, that algorithm is not universal and is only suitable for images with large rotation angles and low brightness [7]. In view of this, an improved feature-point-based SURF algorithm was proposed that uses the Hessian matrix trace to extract the feature points of the image, which improves the speed and accuracy of the algorithm; however, it is complicated and relatively time-consuming [8]. To improve registration efficiency, a fast image stitching method based on phase correlation and an improved SURF algorithm was proposed. That algorithm reduces both computational complexity and mismatches, but it also tends to eliminate some correct matching points, which can cause stitching seams in the later image fusion process [9]. In 2021, Fernandez-Fabeiro et al. evaluated the performance of the traditional Hitmap implementation on comparable GPU clusters and proposed extending it by adding new functions to the latest version of Hitmap to support arbitrary load distribution on multinode heterogeneous GPU clusters [10]. Pasha Hosseinbor et al. proposed a 2D point-set registration algorithm for unlabeled data based on linear least squares, subsequently applied to minutia-based fingerprint matching. The matcher considers all possible minutia pairings and iteratively aligns the two sets until the number of minutia pairs does not exceed the maximum number of allowable one-to-one pairings. The first alignment establishes a region of overlap between the two minutia sets, which is then iteratively refined by each successive alignment; after each alignment, minutia pairs that exhibit weak correspondence are discarded [11].

Image fusion plays an important role in image stitching. Because there may be changes in brightness, angle, and scale between the images to be stitched, direct fusion will cause problems such as ghosting and stitching gaps. A Markov Random Field (MRF) fusion model was proposed in which the salient structure of the input images is fused in the gradient domain and the final result is reconstructed by solving the Poisson equation, which makes the gradient of the fused image close to the fused gradient [12]. An improved method based on relative orientation and small-area fusion was proposed, which selects an appropriate stitching surface according to a half-angle correction method and then selects a strip area in the middle of the image overlap region as the transition area for gradual fusion [13]. In 2017, an adaptive weighted image fusion method was proposed that combines fuzzy-theory ideas with the curvelet transform [14]. Additionally, an S-shaped nonlinear fusion strategy was proposed for image fusion and stitching, which effectively solved the problems of stitching gaps and ghosting [15]. Jung and Hong proposed a method for quantifying the parallax level of the input images and clustering them accordingly, which facilitates a quantitative assessment of the various stitching methods at each parallax level; the parallax levels are grouped based on the magnitude and variation of the planar parallax, as estimated with a proposed metric using matching errors and patch similarity [16]. In [17], a robust and reliable image stitching methodology (l, r-Stitch Unit) was introduced that takes multiple nonhomogeneous image sequences as input to generate a reliable panoramically stitched wide view, together with a novel convolutional encoder-decoder deep neural network (l, r-PanoED-network) with a unique split-encoding-network methodology to stitch noncoherent left-right stereo image pairs [17]. Xue et al. proposed a high-quality fisheye image stitching algorithm (LLBI-AW). In the correction part, because the traditional bilinear interpolation algorithm leads to blurred edges, an interpolation algorithm based on latitude and longitude (LLBI) is proposed; in the fusion part, a weighted fusion algorithm based on an arc function is proposed for the ghosting problem in the overlapping area after stitching, where an arc function determines the weights of the left and right images before the fusion calculation [18]. Vishwakarma and Bhuyan proposed an automatic classification algorithm and an image stitching algorithm based on Harris features of local differences. In the image mosaic process, the automatic classification results are distorted by image noise and pseudoperiod; using the structural similarity index solves this problem, and replacing the standard Sobel edge detector with a local-difference operation reduces the high time complexity of conventional corner detectors [19]. Although the above methods bring improvements, some image stitching quality problems remain unsolved, and stitching efficiency also needs to be improved.

Aiming at the problems of low matching accuracy and a large amount of calculation in the image registration process, this paper combines an improved SURF algorithm with SURF optimization to improve the matching accuracy. The main contributions of this paper are as follows:

(1) First, cosine similarity is used for preliminary screening of the feature points. If the cosine similarity is greater than the threshold K, the feature points are examined by two-way consistency detection, and the mismatched points are filtered out again.

(2) Second, the MSAC algorithm is used for fine matching to determine the final matching pairs.

(3) Third, aiming at the ghosting and stitching seams that are likely to occur in the image fusion process, a cell-accelerated power function-weighted fusion algorithm is proposed, which uses the power function-weighted fusion algorithm to obtain the weight of the cell center point during cell acceleration. Finally, an image stitching technology with good stitching quality and high efficiency is realized.

2. Optimized SURF Algorithm

2.1. Extraction of Feature Points

The SURF algorithm uses the Hessian matrix to detect feature points, and a Hessian matrix can be computed for every pixel. Thus, the Hessian matrix at coordinate x and scale σ can be expressed as

$$H(x,\sigma)=\begin{bmatrix} L_{xx}(x,\sigma) & L_{xy}(x,\sigma) \\ L_{xy}(x,\sigma) & L_{yy}(x,\sigma) \end{bmatrix}\tag{1}$$

where x is the image pixel coordinate, σ is the scale of the image, and $L_{xx}(x,\sigma)$, $L_{xy}(x,\sigma)$, and $L_{yy}(x,\sigma)$ are the convolutions of the Gaussian second-order partial derivatives with the image at point x.

The SURF algorithm uses a box filter instead of the Gaussian second-order partial derivative. A multiscale spatial function is constructed by changing the size of the box filter. Then, the multiscale spatial function is convolved with the original image in different directions.

When the box filter approximates the Gaussian second-order partial derivatives, the convolutions $L_{xx}$, $L_{xy}$, and $L_{yy}$ in equation (1) are replaced by $D_{xx}$, $D_{xy}$, and $D_{yy}$, so the determinant of the Hessian matrix can be simplified as

$$\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(wD_{xy})^{2}\tag{2}$$

where w is the weight coefficient compensating for the box-filter approximation; it is generally 0.9.

First, the Hessian matrix is used to find the local maximum pixel points, which are marked as interest points; the corresponding two-dimensional array subscript is the position of the interest point in the image. Then, the Haar wavelet is used to determine the main direction of each interest point. Finally, a 64-dimensional SURF feature descriptor is constructed.
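As an illustration only, the following minimal sketch shows this extraction step using the SURF implementation in OpenCV's contrib modules (assuming opencv-contrib-python built with the nonfree xfeatures2d.SURF module; the file name is hypothetical), rather than the paper's MATLAB environment:

```python
# Sketch of SURF feature extraction; assumes opencv-contrib-python with
# the nonfree xfeatures2d SURF module enabled. "left.jpg" is hypothetical.
import cv2

img = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)

# hessianThreshold sets how strong the Hessian response must be for a
# point to be kept; extended=False keeps the 64-dimensional descriptor.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=False)
keypoints, descriptors = surf.detectAndCompute(img, None)
print(len(keypoints), descriptors.shape)  # N keypoints, N x 64 descriptors
```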

2.2. Improved Feature Point Matching

For the traditional SURF algorithm, there are two steps to determine the matching degree between feature points: first, the Euclidean distance between two feature points is calculated, and then the signs of the Hessian matrix traces are compared to determine whether the feature points match. Although this has a great advantage over the SIFT algorithm, a large number of mismatched points still affect the stitching effect, and the matching takes a long time and is inefficient. In view of the above problems, this paper proposes a matching method that combines cosine similarity, two-way consistency, and the MSAC algorithm: cosine similarity and two-way consistency are used for rough matching, and then the MSAC algorithm is used for fine matching.

2.2.1. Cosine Similarity

Cosine similarity measures the degree of similarity between feature points. According to the coordinates of the feature points, the cosine of the angle between their vectors is determined; this cosine value expresses the similarity between the two feature points. The smaller the cosine value, the larger the angle between the two feature vectors and the smaller the matching similarity, and vice versa.

First, the Euclidean distance is used to select preliminary feature point pairs, and then the cosine similarity function is used to refine them. If the cosine value of the two vectors is greater than the threshold K, the pair is retained; otherwise, it is discarded. The cosine similarity of vectors a and b is obtained by

$$\cos\theta=\frac{a\cdot b}{\|a\|\,\|b\|}=\frac{\sum_{i=1}^{n}a_i b_i}{\sqrt{\sum_{i=1}^{n}a_i^{2}}\sqrt{\sum_{i=1}^{n}b_i^{2}}}\tag{3}$$

The cosine similarity is used to test four types of images: the first type has scaling and rotation changes, the second has brightness and contrast changes, the third has blur changes, and the fourth has viewing-angle changes. According to Table 1, the average is selected as the final threshold, K = 0.975.
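A minimal sketch of this cosine-similarity pre-filter is given below, assuming desc1 and desc2 are SURF descriptor arrays and matches is a list of candidate index pairs (all names are illustrative, not the paper's implementation):

```python
import numpy as np

K = 0.975  # threshold selected from Table 1

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cosine of the angle between two descriptor vectors, equation (3)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_by_cosine(matches, desc1, desc2, k=K):
    # keep only candidate pairs whose descriptors are nearly parallel
    return [(i, j) for i, j in matches
            if cosine_similarity(desc1[i], desc2[j]) >= k]
```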

2.2.2. Two-Way Consistency

After matching with cosine similarity, some mismatches will remain. In order to improve the matching accuracy, this paper uses a two-way consistency constraint to eliminate mismatched point pairs. First, the matching pairs are saved after one-way matching. Second, the roles of the matching image and the original image are exchanged, reverse matching is performed, and the common matching pairs are selected as the final matching result; pairs found in only one direction are deleted. Calculating the relative position relationship of the matching point pairs can effectively improve the accuracy of the matching result. The two-way consistency constraint matching is shown in Figure 1. Assume that the corresponding point of point p in the forward matching is q; reverse registration is then conducted. If the corresponding point of q is again p, the pair meets the two-way consistency constraint; otherwise, the pair is regarded as a mismatch and is eliminated.
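A minimal sketch of this two-way check, assuming nearest-neighbor matching by Euclidean distance over descriptor arrays (illustrative names, not the paper's implementation):

```python
import numpy as np

def nearest_neighbors(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    # index of the closest descriptor in desc_b for every row of desc_a
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return d.argmin(axis=1)

def mutual_matches(desc1: np.ndarray, desc2: np.ndarray):
    fwd = nearest_neighbors(desc1, desc2)  # forward: image 1 -> image 2
    bwd = nearest_neighbors(desc2, desc1)  # reverse: image 2 -> image 1
    # keep (i, j) only if i's best match is j AND j's best match is i
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]
```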

2.2.3. MSAC Algorithm

The MSAC algorithm is an optimization of the RANSAC algorithm: by modifying the RANSAC cost function, it resolves RANSAC's sensitivity to the threshold. This paper uses the MSAC algorithm for image registration; it eliminates the mismatched point pairs remaining after rough matching. The homography matrix H between two adjacent images is calculated from the correct matching point pairs. Finally, H is used to map each point of the image to be registered into the reference coordinate system, and bilinear interpolation is used to achieve precise image registration.
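The following is a simplified MSAC sketch for homography estimation, not the paper's exact implementation: a truncated quadratic loss replaces RANSAC's constant inlier cost, so the score is less sensitive to the threshold. src and dst are assumed N×2 arrays of matched coordinates with N ≥ 4:

```python
import numpy as np

def dlt_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    # Direct Linear Transform from >= 4 point correspondences
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)

def msac_homography(src, dst, thresh=3.0, iters=1000, seed=None):
    rng = np.random.default_rng(seed)
    n, t2 = len(src), thresh ** 2
    best_cost, best_h = np.inf, None
    ones = np.ones((n, 1))
    for _ in range(iters):
        sample = rng.choice(n, 4, replace=False)
        h = dlt_homography(src[sample], dst[sample])
        proj = np.hstack([src, ones]) @ h.T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = proj[:, :2] / proj[:, 2:3]
        r2 = np.sum((proj - dst) ** 2, axis=1)
        r2 = np.where(np.isfinite(r2), r2, np.inf)  # guard degenerate fits
        cost = np.minimum(r2, t2).sum()  # MSAC: residual cost capped at t2
        if cost < best_cost:
            best_cost, best_h = cost, h
    return best_h
```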

3. Improved Image Fusion

Traditional image fusion algorithms fuse the registered images directly, which produces obvious seams and color differences and, in severe cases, uneven color or artifacts. Some improved methods have been proposed, but these algorithms are cumbersome and time-consuming. In view of this, this paper proposes a cell-accelerated power function-weighted fusion algorithm.

3.1. Gradually Weighted Fusion Algorithm

The gradual weighted fusion algorithm is an improvement of the traditional fusion algorithm. Instead of the traditional single weighting coefficient, the weight is determined by the distance between a pixel in the overlapping area and the boundary of the overlapping area, thereby achieving a slow and smooth transition and eliminating the stitching gap of the stitched image [20]:

$$f(x,y)=\begin{cases} f_{1}(x,y), & (x,y)\in f_{1} \\ d_{1}f_{1}(x,y)+d_{2}f_{2}(x,y), & (x,y)\in f_{1}\cap f_{2} \\ f_{2}(x,y), & (x,y)\in f_{2} \end{cases}\tag{4}$$

where $f_1$ is the reference image, $f_2$ is the original image to be stitched, and $d_1+d_2=1$ with $d_1,d_2\in(0,1)$.

Figure 2 shows that the gradual weighted fusion algorithm is easy to implement and the stitching speed is fast. The weights $d_1$ and $d_2$ vary linearly from 0 to 1, and the algorithm performs better than direct fusion. However, the overlapping region can still show ghosting after registration because the two weighting functions intersect at the same point and the fusion function has an inflection point there. Moreover, the effect is poor when the brightness of the two stitched images differs greatly. It can be seen that the gradual-in gradual-out weighting method is no longer suitable for many image fusion situations.
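A minimal sketch of this gradual-in gradual-out blending, assuming img1 and img2 are already registered into the same frame and the overlap spans columns x_left to x_right (illustrative assumptions):

```python
import numpy as np

def linear_blend(img1: np.ndarray, img2: np.ndarray,
                 x_left: int, x_right: int) -> np.ndarray:
    out = img1.astype(np.float64).copy()
    for x in range(x_left, x_right + 1):
        d1 = (x_right - x) / (x_right - x_left)  # weight of reference image
        d2 = 1.0 - d1                            # weight of image to stitch
        out[:, x] = d1 * img1[:, x] + d2 * img2[:, x]
    out[:, x_right + 1:] = img2[:, x_right + 1:]  # right of the overlap
    return np.clip(out, 0, 255).astype(np.uint8)
```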

3.2. Cell-Accelerated Power Function-Weighted Fusion Algorithm

In order to solve the problems of stitching seams and ghosts in image stitching due to brightness differences and noise generated in image acquisition, this paper proposes a cell-accelerated power function-weighted fusion algorithm, which not only solves the ghosting problem but also saves time and improves stitching efficiency.

According to the Weber–Fechner law [21], the human eye's perceived response follows a nonlinear power-function law as the background brightness stimulus increases. This law also fits the trajectory of plant growth well. Therefore, this paper uses the Logistic and Cubic functions commonly used in plant growth models to derive the power function-weighted fusion model, as shown in equations (5) and (6) [15], where t is the time, T is the plant growth cycle, and $y_0$ and $y_{\max}$ are the initial value and the maximum value, respectively. When t = T, the plant growth is close to the maximum value.
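For orientation only, the standard Logistic growth curve of the kind cited takes the following form (this is an assumed conventional parameterization; the paper's exact equations (5) and (6) follow [15] and are not reproduced here):

$$y(t)=\frac{y_{\max}}{1+\left(\dfrac{y_{\max}}{y_0}-1\right)e^{-rt}}$$

with $y(0)=y_0$ and $y(t)\to y_{\max}$ as $t$ grows.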

According to equations (5) and (6), the weighting function of the power function model is obtained as equation (7).

According to equation (7), an appropriate exponent r is selected so that the weighting function of the power function model is more suitable.

Finally, considering that the power function weighting function in this paper changes from 1 to 0, equation (8) is adjusted to obtain equation (9), yielding the final improved weighted fusion process, as shown in Figure 3(a).

In order to clearly express the changing trend of the weighting function, the derivative of equation (9) is taken, giving equation (10).

From the properties of the parabola, when x = 0 we have y < 0, and the coefficient of the quadratic term is less than 0; the discriminant of the quadratic equation then shows that there is no intersection with the x-axis, so a rough graph of the derivative can be drawn, as shown on the right of Figure 3.

In combination with Figure 3, it can be seen that, as we move from the edge of the overlapping area toward the middle, the derivative of the power function weighting function first increases and then decreases. This makes the power function weighting function change first quickly and then slowly. The fast change means that, during the fusion of the left and right images, the proportion of pixels changes correspondingly, which alleviates the difference between two images with large chromatic aberration during fusion and also solves the stitching-seam problem.

Since each pixel in the overlapping area needs its own weighting coefficient and the weighting coefficient does not change linearly, the calculation becomes complicated and time-consuming. In response, a cell-based acceleration strategy is proposed. The specific steps are as follows (a sketch follows this list):

(1) Divide the overlapping area into as many equal-sized small squares as possible, as shown in Figure 4; these squares are defined as cells.

(2) Take the center point P of each cell (or the point closest to the center), calculate its value by equation (9), and use the weighting coefficient of point P as the weighting coefficient of every pixel in that cell.

(3) Points close to the edge of the overlapping area that do not belong to any cell need their weighting coefficients calculated separately. An appropriate cell size n can be selected according to the size and shape of the overlapping area between the images to be stitched.
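The sketch below illustrates the cell-acceleration idea for a horizontal overlap. Since equation (9) is not reproduced here, an assumed S-shaped stand-in w(x) = 1 − 3x² + 2x³ (decreasing from 1 to 0, with a derivative whose magnitude first rises and then falls) takes its place; one weight per cell replaces one weight per pixel:

```python
import numpy as np

def power_weight(x: float) -> float:
    # stand-in for the paper's equation (9): an S-shaped curve that
    # decreases from 1 at x = 0 to 0 at x = 1
    return 1.0 - 3.0 * x ** 2 + 2.0 * x ** 3

def cell_blend(img1, img2, x_left, x_right, n=8):
    # one weight per n-pixel-wide cell, evaluated at the cell center,
    # instead of one weight per pixel of the overlap
    out = img1.astype(np.float64).copy()
    width = x_right - x_left
    for start in range(x_left, x_right, n):
        stop = min(start + n, x_right)
        center = (start + stop) / 2.0
        d1 = power_weight((center - x_left) / width)
        out[:, start:stop] = (d1 * img1[:, start:stop]
                              + (1.0 - d1) * img2[:, start:stop])
    out[:, x_right:] = img2[:, x_right:]  # right of the overlap
    return np.clip(out, 0, 255).astype(np.uint8)
```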

As a result, the number of weighting-coefficient calculations is reduced exponentially, and the fusion efficiency is significantly improved.

4. Experimental Results and Analysis

The experiments in this paper use Matlab 2014 for programming and image stitching under a Windows 10 system with an Intel i5 processor and 12 GB of memory, which verifies the feasibility and superiority of the proposed method. The first comparison is between the traditional SURF algorithm and the proposed method in the image registration stage; the second is between the proposed fusion algorithm and two other algorithms in the image fusion stage.

4.1. Registration Analysis of Feature Points

In order to verify the universality of the proposed method, outdoor parking lot and indoor hall images are used for image stitching, as shown in Figures 5 and 6. The proposed method extracts and matches the feature points of adjacent images and is compared with the traditional SURF algorithm.

Using the proposed method and the traditional SURF algorithm, the feature points of the outdoor parking lot images are extracted and matched, as shown in Figure 5. The traditional SURF algorithm produces a small number of mismatched point pairs, as shown by the red circle and purple square in Figure 5(a). The red circle marks the ground in the two images to be stitched; the mismatch occurs because the Euclidean distance between the ground feature points is extremely small, which results in high similarity between the feature points. The purple square in Figure 5(a) shows the matching result on the backs of two white cars, where the Hessian matrix calculation produces the same trace signs, which also leads to mismatches. In Figure 5(b), the matching algorithm used in this paper exhibits no mismatches and generates more stable interest points as matching points, which improves the efficiency of the algorithm.

Similarly, the feature points of the indoor hall images are extracted and matched with the proposed method and the traditional SURF algorithm; the result is shown in Figure 6. The red circle in Figure 6(a) shows a mismatch between the left pillar of one image to be stitched and the right wall of the other; the purple box in Figure 6(a) shows a mismatch between the left oblique pillar and the wall. In Figure 6(b), the proposed algorithm eliminates the mismatched point pairs.

In order to ensure the accuracy of the conclusion, this paper analyzes the traditional algorithm and the proposed algorithm to further verify the superiority of the proposed method. The correct matching rate [22] is used as the evaluation criterion:

$$\text{precision}=\frac{\text{correctmatches}}{\text{correctmatches}+\text{falsematches}}\tag{11}$$

where precision is the correct matching rate, falsematches is the number of wrong matches, and correctmatches is the number of correct matches.

Equation (11) is used to calculate the matching precision for the outdoor parking lot and indoor hall images, as shown in Table 2.
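For reference, equation (11) amounts to the following one-liner (the counts of correct and false matches are assumed to be tallied by inspection, as in Table 2):

```python
def precision(correct_matches: int, false_matches: int) -> float:
    # correct matching rate, equation (11)
    return correct_matches / (correct_matches + false_matches)
```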

From Table 2, it can be seen that the matching precision of the proposed method on the outdoor parking images is greater than 80%, whereas the matching precision on the indoor hall image is 76.8%, lower than that of the outdoor parking lot. The reason is that the features of the outdoor parking lot are more distinctive than those of the indoor hall image; the same phenomenon appears for the traditional algorithm. The matching precision of the traditional SURF algorithm is about 11 percentage points lower than that of the proposed method, which illustrates that the proposed method reduces wrong matches and increases correct matches. Additionally, the matching time of the proposed method is shorter than that of the traditional SURF algorithm, so its registration effectiveness is superior. Therefore, the proposed method has obvious advantages in both the speed and accuracy of image registration.

4.2. Analysis of Image Fusion

In order to verify the effectiveness of the proposed method in image fusion, this paper compares the proposed method in multiple experiments with the gradual-in gradual-out weighted fusion algorithm (using the same improved SURF matching) and with the algorithm used in [23]. In order to verify the universality of the proposed method, images with different brightness, different angles, different resolutions, and different scales are selected for fusion.

4.2.1. Effect Analysis of Fusion Results

(1) Different Brightness. For the cameras deployed in different positions of the parking lot, the adjacent cameras are used to collect parking lot images at different times, as shown in Figure 7. Figure 7(a) is a parking lot image taken on a cloudy day, and Figure 7(b) is a parking lot image taken on a sunny day. It can be seen that the brightness of the two images is obviously different.

Feature points of the two images with different brightness are extracted and matched using the optimized SURF algorithm. First, the Hessian matrix is used to detect the feature points and form 64-dimensional feature descriptors. Then, cosine similarity is used to preliminarily judge the similarity between feature points, and two-way consistency mutual selection is used to filter the feature point pairs again, eliminating pairs with reverse matching errors or without reverse matches. The MSAC algorithm is used for fine matching to determine the final feature points. Finally, the proposed method is compared with the other two algorithms, and the experimental results are shown in Figure 8.
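Chaining the earlier sketches gives an illustrative (not the paper's) end-to-end matching pipeline; surf, mutual_matches, filter_by_cosine, and msac_homography are the hypothetical helpers sketched in Sections 2.1-2.2, and the filtering order is simplified relative to the text:

```python
import numpy as np

# detect and describe (see the SURF sketch in Section 2.1)
kp1, d1 = surf.detectAndCompute(img1, None)
kp2, d2 = surf.detectAndCompute(img2, None)

pairs = mutual_matches(d1, d2)                    # two-way consistency
pairs = filter_by_cosine(pairs, d1, d2, k=0.975)  # cosine pre-filter
src = np.float64([kp1[i].pt for i, _ in pairs])
dst = np.float64([kp2[j].pt for _, j in pairs])
H = msac_homography(src, dst)                     # fine matching (MSAC)
```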

Figure 8(a) shows the result of image fusion by the traditional gradual weighted fusion algorithm: two obvious stitching gaps are visible, and the chromatic aberration after fusion is serious; these defects are marked with red circles. Figure 8(b) shows that the stitching gap is largely eliminated, although a slight chromatic aberration remains after fusion. Figure 8(c) shows that the proposed method effectively eliminates both the stitching seams and the chromatic aberration.

(2) Different Angles. A camera at a fixed position in the indoor hall is rotated to take images of the hall with overlapping areas, as shown in Figure 9. Figure 9(a) shows the view facing the hall, and Figure 9(b) shows the image obtained by rotating the camera 30° left-right and 15° up-down.

The Hessian matrix is used to extract feature points from two images to be stitched at different angles, and the feature points are matched according to Section 2.2 of this article to eliminate mismatched points. Finally, the proposed method is compared with the other two algorithms, and the experimental results are shown in Figure 10.

There are obvious stitching seams with the traditional gradual-out algorithm, as shown by the blue mark in Figure 10(a), where a fusion gap is clearly visible. Figure 10(b) shows the result of the fusion algorithm in [23]; because its weighting coefficient follows a nonlinear trend, it solves the seam problem well. Figure 10(c) is the fusion result obtained by the proposed method. It is similar to the fusion algorithm in [23] in terms of visual effect, but the stitching efficiency is greatly improved, as discussed below.

(3) Different Resolutions. The images to be matched in this experiment are from the Internet, and the original images to be stitched are set to different resolutions, as shown in Figure 11. The resolution of Figure 11(a) is , and the resolution of Figure 11(b) is .

The images to be stitched are matched according to the method of Section 2, and the wrong matching point pairs are eliminated. Then, image fusion is performed according to the method proposed in Section 3, and the fusion result is shown in Figure 12.

Figure 12(a) shows three stitching seams produced by the traditional gradual fusion algorithm. Enlarging the blue square mark on the upper left shows obvious misalignment of the roof, and the lower left and right of the blue square mark show a wide stitching gap; these errors are marked with red circles. Figure 12(b) shows the effect of the fusion algorithm used in [23]; from the left blue box, it can be seen that the algorithm eliminates the misalignment and stitching problems. Figure 12(c) shows the result of the proposed method, which also effectively eliminates the misalignment and stitching gaps.

(4) Different Scales (Different Heights). This article uses drones flying at different heights to obtain images to be stitched with overlapping areas, as shown in Figure 13. Figure 13(a) is taken at a height of 90 m, and Figure 13(b) is taken at a height of 100 m. The two images therefore have different scales and thus different resolutions; because their pixel sizes differ, their pixels cannot be merged directly, which causes stitching seams.

The original images taken by the drone are first preprocessed to suppress the noise generated during shooting. Two images with overlapping areas are selected for feature point extraction and matching according to the matching algorithm proposed in this paper. Then, the processed images are fused using the gradual-in gradual-out algorithm, the fusion algorithm used in [23], and the proposed method. The comparison of the experimental results is shown in Figure 14.

Figure 14 shows the stitching effect on different-scale pictures taken by drones. Figure 14(a) is the result of the traditional gradual weighted fusion algorithm: enlarging the area in the red box shows that the fused image is slightly misaligned, and there are obvious gaps inside the blue box. Figure 14(b) shows the result of the fusion algorithm in [23]: there are slight gaps in the red area, but the stitching gaps at the blue mark are effectively eliminated, and there is no misalignment. Figure 14(c) is the result of the proposed method: the stitching misalignment in the red area is effectively resolved, and the stitching gap in the blue area is effectively eliminated, which avoids the ghosting or seams caused by incorrect weight ratios during fusion.

4.2.2. Quality Analysis of the Fusion Image

From the image stitching experiments with different brightness, angles, resolutions, and scales, it is difficult to determine visually whether the proposed method is better than the fusion algorithm in [23]. Therefore, time consumption, mean squared error, and information entropy are used to evaluate the effectiveness of the proposed method (a sketch of the two image-quality metrics follows this list):

(1) Time consumption: as shown in Figure 15, since the weighting coefficients of the gradual-out algorithm change linearly, it takes relatively little time. Over 4 sets of experiments, the total time consumed by the algorithm used in [23] is more than 2 s longer than that of the gradual-out algorithm, because its weighting coefficients are more complicated to compute. Since the proposed method uses cells to accelerate the power-function weighting, the number of weighting-coefficient calculations is greatly reduced. Therefore, the total time consumption of the proposed method is more than 2 s less than that of the fusion algorithm in [23] and is relatively close to that of the gradual-out weighted fusion algorithm.

(2) Mean squared error (MSE): the smaller the MSE, the closer the result is to the original image and the better the fusion effect. As shown in the histogram of Figure 16, the MSE of the proposed method is significantly smaller than that of the gradual-out weighted fusion method and slightly smaller than that of the algorithm used in the literature [24]; over the four sets of experimental data, it is reduced by 1.32%, 1.48%, 1.39%, and 1.33%. The algorithm used in [23] is only about 1% smaller than the gradual-out algorithm. From the comparison of the MSE data, it can be seen that the proposed method not only repairs the fusion seam but also significantly reduces the error with respect to the original image.

(3) Information entropy: it describes the average amount of information in an image and measures the richness of the image information [25]. The greater the information entropy, the greater the amount of information contained in the fused image:

$$H=-\sum_{i=0}^{L-1}p_i\log_2 p_i\tag{12}$$

where $p_i$ is the ratio of the number of pixels with gray value i to the total number of pixels and L is the total number of gray levels.
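A minimal sketch of the two image-quality metrics, assuming 8-bit grayscale images given as equal-shape NumPy arrays:

```python
import numpy as np

def mse(ref: np.ndarray, fused: np.ndarray) -> float:
    # mean squared error against the reference image: smaller is better
    diff = ref.astype(np.float64) - fused.astype(np.float64)
    return float(np.mean(diff ** 2))

def entropy(img: np.ndarray, levels: int = 256) -> float:
    # information entropy H = -sum p_i log2 p_i, equation (12)
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well defined
    return float(-(p * np.log2(p)).sum())
```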

Figure 17 shows that the information entropy of the gradual-out weighted fusion algorithm is almost equal to that of the algorithm used in [23]. Compared with the gradual-out algorithm, the proposed method increases the information entropy by 1.70%, 0.98%, 1.42%, and 1.04% in the four groups of experiments, whereas the algorithm in [23] increases it by only 0.17%, 0.16%, 0.09%, and 0.13%. After stitching, the proposed method enriches the image information and eliminates ghosting; simultaneously, the overall image information is significantly improved.

According to the above analysis, the proposed method is not only applicable to changes in brightness, angle, resolution, and scale but also solves the ghosting and stitching-seam problems that often occur with traditional gradual-out algorithms. The proposed method is less time-consuming and preserves better detail and richer information content. Compared with the other two fusion algorithms, the proposed method achieves higher stitching and fusion quality.

5. Conclusion

The proposed method uses cosine similarity, two-way consistency selection, and the MSAC algorithm to match feature points, which effectively reduces mismatched points. In the image fusion stage, this paper proposes a power function-weighted fusion algorithm based on cell acceleration, which not only eliminates the ghosting and stitching seams of traditional fusion algorithms but also improves the efficiency of the overall image stitching. Comparison of several sets of experimental results shows that the proposed method achieves a higher correct matching rate than the traditional SURF algorithm while taking less time. In the image fusion stage, three indicators, total time consumption, mean squared error, and information entropy, are used for evaluation. Compared with the other fusion algorithms, the total time consumed by the proposed method is reduced by at least 2 s, the mean squared error is reduced by about 1.32%∼1.48%, and the information entropy is increased by about 0.98%∼1.70%. The proposed method performs better in matching accuracy and fusion effect and has better stitching image quality and universality than traditional fusion algorithms.

In the future, the registration and fusion of very low-resolution images should be studied. The key to image stitching is the determination of image feature points: the efficiency of this determination directly affects the speed of image stitching, and its precision affects the precision of image registration. Therefore, we will research the fast and accurate determination of image feature points and the fast stitching of video streams.

Data Availability

All the data generated or analyzed during this study are included within this article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was financially supported by the Youth Science Foundation Project (51706165).