Abstract

Image fusion is an image enhancement method in modern artificial intelligence, which can reduce the pressure of data storage and yield better image information. Due to different imaging principles, the information in infrared and visible images is both complementary and redundant. An infrared image can be fused with a visible image to obtain both the high-resolution texture details of the visible image and the clear edge contours of the infrared image. In this paper, the fusion algorithm for forest sample images is studied at the feature level, aiming to accurately extract tree features through information fusion, ensure data stability and reliability, and improve the accuracy of target recognition. The main research contents of this paper are as follows: (1) the teaching-learning-based optimization (TLBO) algorithm was used to optimize the weight coefficients in the fusion process, and the value ranges of the random parameters in the model were adjusted to optimize the fusion effect; compared with before optimization, image information increased by 2.05%, and spatial activity increased by 15.27%. (2) Experimental data show that the target recognition accuracy of the feature-level fusion results was 93.6%, 13.9% higher than that of the original infrared sample images and 18.8% higher than that of the original visible sample images. Pixel-level and feature-level fusion each have their own characteristics and application scopes. The method can improve the quality of a specified region in the image and is suitable for intelligent information detection in forest regions.

1. Introduction

With the rapid development of sensor technology, the single visible light mode has gradually developed into a variety of sensor modes. These sensors differ in imaging mechanism, working environment, requirements, and functions, and they work in different wavelength ranges. Because the information acquired by a single sensor is limited, it is often difficult to meet the needs of applications, while more comprehensive and reliable information about observation targets can be obtained by using multisource data. Therefore, in order to take full advantage of increasingly complex source data, various data fusion techniques have been rapidly developed with the aim of incorporating into a new data set more supplementary information than can be obtained from any single sensor [1]. Image fusion technology, as a very important branch of multisensor and visual information fusion, has aroused widespread concern and a research upsurge around the world over the past twenty years. The main idea of image fusion is to combine multisource images from multiple sensors into a new image by using algorithms, so that the fused image has higher reliability, less uncertainty, and better comprehensibility [2].

Image fusion technology was first used in remote sensing image analysis and processing. In 1979, Daily et al. first applied the composite of a radar image and a Landsat-MSS image to geological interpretation, and this processing can be regarded as the simplest image fusion [3]. In 1981, Laner and Todd conducted a fusion experiment on Landsat-RBV and MSS image information [4]. In the middle and late 1980s, image fusion technology was applied to the analysis and processing of remote sensing multispectral images and began to attract attention. It was not until the end of the 1980s that image fusion technology began to be applied to general image processing (visible images, infrared images, etc.) [5]. Since the 1990s, research on image fusion technology has been on the rise, showing great application potential in the fields of automatic recognition, computer vision, remote sensing, robotics, medical image processing, and military applications. For example, the fusion of infrared and low-light images helps soldiers see targets in the dark [6]. The fusion of CT and MRI images helps doctors diagnose diseases accurately [7]. Jin et al. extracted more accurate and reliable feature information from images by fusing infrared and visible images, thus achieving accurate face recognition [8]. Using image fusion, Liu et al. made images with different focal lengths complement each other and improved the resolution of the fusion results [9]. In recent years, image fusion has become an important and useful technique for image analysis and computer vision.

The main purpose of this paper is to find an image fusion algorithm suitable for forest environment perception: using the fusion of visible light images and infrared thermal images, the collected images are fused and processed to improve the fusion effect, accurately extract effective forest information, and provide information for intelligent forest detection. The main research contents are as follows:
(1) The fusion background of visible and infrared images, different image processing methods, and the effects of different image fusion processing are introduced.
(2) The process of fusion coefficient optimization based on the teaching-learning-based optimization (TLBO) algorithm is introduced. The random parameters in the model are set by the TLBO optimization algorithm to optimize the fusion effect. Forest images are used for image fusion experiments, and the fusion results are evaluated by objective evaluation indexes.
(3) In order to enhance the search ability of the algorithm and improve the evaluation index values to a greater extent, the value ranges of the two random coefficients of the TLBO algorithm (denoted $r_1$ and $r_2$ below) are further set according to the entropy value, and the evaluation indexes are then used for the corresponding evaluation.

The multisource image fusion algorithm also has broad application prospects in the field of forestry intelligent detection. Using a feature-level image fusion algorithm, Bulanon et al. extracted data on fruits in orchards and monitored fruit growth status in real time in 2009 [10]. In 2013, Lei et al. identified obstacles in forest images by combining fusion results with two-dimensional laser data, intelligently and accurately distinguishing trees, rocks, and animals in the images with an accuracy rate of more than 93.3% [11]. Furthermore, by improving the fusion algorithm, the data accuracy for objects such as trees in the image was improved, and the accuracy of target recognition increased to 95.3% [12]. The quality of information fusion directly affects the accuracy of forest information detection and is an important part of research on artificial intelligence. This paper addresses an important branch of information fusion research: the fusion algorithm for infrared and visible images. Due to different imaging principles, the information in infrared and visible images is complementary and redundant. The target in an infrared image has clear edge features and is easy to segment and extract, while the texture details and background information of a visible image are more prominent, although the target information is difficult to extract because of complex image content. Therefore, the purpose of fusion is to synthesize complementary information, reduce redundancy, improve image quality, and express and extract useful features in images more succinctly and accurately. In this paper, effective forest information is extracted accurately by fusing infrared and visible images, and the obtained information is used for intelligent forest detection.

3. Materials and Methods

The detailed process of visible and near-infrared image sample fusion is shown in Figure 1. The ultimate goal is to enhance the search ability of the algorithm, further improve the evaluation index values, and obtain a fused image better suited to the intelligent detection of forest information.

3.1. Data Modeling Methods
3.1.1. Pixel Level Image Fusion Algorithm

(1) Image Fusion Algorithm Based on Wavelet Transform. Wavelet transform theory was first proposed by Morlet and Grossmann in 1984, and its principle was developed on the basis of the Fourier transform. Unlike the Fourier transform, the wavelet transform is a local transform in frequency, which can effectively extract the signal in the image. Its advantage is that it carries out multiscale analysis of the image without losing information [13]. Mallat proposed the fast discrete wavelet transform and built a bridge between the wavelet transform and multiscale image fusion [14].

Two-dimensional image samples after wavelet decomposition can be represented by four subband components:
\[
f_{j}(x,y) = A_{j}f(x,y) + D_{j}^{H}f(x,y) + D_{j}^{V}f(x,y) + D_{j}^{D}f(x,y)
\]
where $f$ represents the original sample, $j$ represents the decomposition level of this layer, $A_{j}f$ represents the low-frequency component, and $D_{j}^{H}f$, $D_{j}^{V}f$, and $D_{j}^{D}f$ represent the high-frequency components in different directions. The scaling function $\varphi(x)$ is the scale coefficient that makes up the canonical orthogonal basis of the scale space, and the wavelet function $\psi(x)$ makes up the canonical orthogonal basis of the detail space.
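As an illustration, a one-level wavelet fusion can be sketched in Python with PyWavelets; the `db2` basis and the averaging/maximum-magnitude rules are illustrative assumptions, not necessarily the settings used in this paper.

```python
# A minimal sketch of one-level 2-D wavelet decomposition and a simple
# fusion rule: average the low-frequency subbands and keep the larger
# high-frequency coefficients. Assumes grayscale float arrays of equal
# size; 'db2' is an illustrative choice of basis.
import numpy as np
import pywt

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    # Decompose each source into A (low-frequency) and H, V, D (high-frequency) subbands.
    A_a, (H_a, V_a, D_a) = pywt.dwt2(img_a, wavelet)
    A_b, (H_b, V_b, D_b) = pywt.dwt2(img_b, wavelet)

    # Low-frequency: averaging preserves the overall brightness of both sources.
    A_f = (A_a + A_b) / 2.0

    # High-frequency: keep the coefficient with the larger magnitude (sharper detail).
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    fused = (A_f, (pick(H_a, H_b), pick(V_a, V_b), pick(D_a, D_b)))

    # Reconstruct the fused image by the inverse transform.
    return pywt.idwt2(fused, wavelet)
```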

(2) Image Fusion Algorithm Based on PCA Transform. The principal component analysis (PCA) transform, mathematically known as the Karhunen–Loève (K–L) transform, is a multidimensional linear transform based on the statistical characteristics of images; it centralizes variance information and compresses the data volume.

The PCA transformation and fusion process of multisensor images is as follows (a sketch follows the list):
(1) PCA was applied to the low-resolution multispectral image to obtain three principal components: P1, P2, and P3.
(2) The high-resolution image was stretched and made to have the same mean and variance as the first principal component P1 of the multispectral image.
(3) The stretched high-resolution image was used to replace P1 as the first principal component, and a new fusion image P was generated with components P2 and P3 through the inverse PCA transformation.
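A minimal sketch of these three steps, assuming a co-registered (H, W, 3) multispectral float array and an (H, W) panchromatic (high-resolution) array:

```python
# Sketch of the three-step PCA fusion described above. The array layout
# and band count are assumptions for illustration.
import numpy as np

def pca_fuse(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    h, w, bands = ms.shape
    X = ms.reshape(-1, bands).astype(np.float64)

    # Step 1: PCA on the multispectral bands -> principal components P1..P3.
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    V = eigvecs[:, np.argsort(eigvals)[::-1]]  # sort by decreasing variance
    P = Xc @ V                                 # columns are P1, P2, P3

    # Step 2: stretch the panchromatic image to the mean/variance of P1.
    p1 = P[:, 0]
    pan_flat = pan.reshape(-1).astype(np.float64)
    pan_stretched = (pan_flat - pan_flat.mean()) / pan_flat.std() * p1.std() + p1.mean()

    # Step 3: replace P1 with the stretched pan image and invert the PCA.
    P[:, 0] = pan_stretched
    fused = P @ V.T + mean
    return fused.reshape(h, w, bands)
```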

(3) Image Fusion Algorithm Based on Contourlet Transform. The contourlet transform is a multiresolution image representation method proposed by Do and Vetterli in 2002 [15]. In the contourlet transform, multiscale decomposition of the image is realized by Laplacian pyramid decomposition (LP) [16]. The multiscale decomposition of an approximate image can be obtained by repeated Laplacian pyramid decomposition [17]. However, in the process of image decomposition and reconstruction by the contourlet transform, the image needs to be repeatedly downsampled and upsampled, which deprives the contourlet transform of shift-invariance [18]. As a result, the spectrum of the signal overlaps to some extent, and the Gibbs phenomenon is obvious in image fusion.

(4) Low-Frequency Coefficient Processing Based on PCNN. In the low-frequency domain of image fusion, the modified Laplacian energy, used as the excitation input to PCNN, is computed as
\[
\nabla_{ML}^{2}f(x,y) = \bigl|\, 2f(x,y) - f(x-\mathrm{step},y) - f(x+\mathrm{step},y) \,\bigr| + \bigl|\, 2f(x,y) - f(x,y-\mathrm{step}) - f(x,y+\mathrm{step}) \,\bigr|
\]
where step represents the variable distance between the coefficients (in this paper, $\mathrm{step}=1$), and $f(x,y)$ is the coefficient at point $(x,y)$.

In order to eliminate the block effect or grayscale distortion that may be caused by the boundary discontinuity at the junction between the clear area and the fuzzy area, the sum of modified Laplacian (SML) in the neighborhood centered on point $(x,y)$ is defined as
\[
\mathrm{SML}(x,y) = \sum_{m=-M}^{M}\sum_{n=-N}^{N} W(m,n)\, \nabla_{ML}^{2}f(x+m,\,y+n)
\]
where $W(m,n)$ is the corresponding window function. Experience shows that, for the best highlighting of the window center pixel and its changing boundary, the window should be set as the center-weighted template
\[
W = \frac{1}{16}
\begin{bmatrix}
1 & 2 & 1 \\
2 & 4 & 2 \\
1 & 2 & 1
\end{bmatrix}
\]

The sum of modified Laplacian can well represent the edge details of the image, reflect the sharpness of the image, and show superior fusion performance in the fusion image.
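A sketch of the ML and SML computations above, assuming $\mathrm{step}=1$ and the center-weighted 3×3 window; `scipy.ndimage.convolve` handles the windowed sum.

```python
# Sketch of the modified Laplacian (ML) and its windowed sum (SML),
# following the definitions above.
import numpy as np
from scipy.ndimage import convolve

def modified_laplacian(coeff: np.ndarray, step: int = 1) -> np.ndarray:
    c = np.pad(coeff, step, mode="reflect")
    center = c[step:-step, step:-step]
    # |2f(x,y) - f(x-step,y) - f(x+step,y)| + |2f(x,y) - f(x,y-step) - f(x,y+step)|
    ml_x = np.abs(2 * center - c[step:-step, :-2 * step] - c[step:-step, 2 * step:])
    ml_y = np.abs(2 * center - c[:-2 * step, step:-step] - c[2 * step:, step:-step])
    return ml_x + ml_y

def sml(coeff: np.ndarray) -> np.ndarray:
    w = np.array([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]], dtype=float) / 16.0   # window W from the text
    return convolve(modified_laplacian(coeff), w, mode="reflect")
```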

(5) High-Frequency Coefficient Processing Based on PCNN. The high-frequency subband image represents the edge details of the image, so the coefficients decomposed by NSCT can be directly input into PCNN as excitation in the process of high-frequency coefficient fusion. The specific steps are as follows:

(1) In the high-frequency subband images, the normalized gray value of each pixel is directly taken as the external input of PCNN, and the ignition (firing) times of each input excitation are accumulated over the iterations.
(2) The fused coefficients are then selected in the same way as for the low-frequency coefficients, by comparing the firing times:
\[
F_{l}^{k}(x,y) =
\begin{cases}
A_{l}^{k}(x,y), & T_{A}^{k}(x,y) \ge T_{B}^{k}(x,y) \\[2pt]
B_{l}^{k}(x,y), & T_{A}^{k}(x,y) < T_{B}^{k}(x,y)
\end{cases}
\]
where $F$, $A$, and $B$, respectively, represent the gray values of the fusion image and the original images $A$ and $B$; $T_{A}$ and $T_{B}$ are the accumulated firing times; and $k$ represents the $k$-th layer of the NSCT decomposition. After each fused subband image is obtained, the fusion image is obtained by the inverse NSCT transformation.
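A simplified PCNN firing-count rule for one pair of high-frequency subbands might look as follows; the linking kernel and the parameters `beta`, `alpha_theta`, and `v_theta` are illustrative assumptions, not the paper's settings, and a full implementation would run this per NSCT subband before the inverse transform.

```python
# Simplified PCNN sketch for the high-frequency rule above: each
# normalized coefficient drives a neuron, firing counts are accumulated
# over n_iter iterations, and the source with more firings wins.
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(stimulus: np.ndarray, n_iter: int = 50, beta: float = 0.1,
                     alpha_theta: float = 0.2, v_theta: float = 20.0) -> np.ndarray:
    lo, hi = stimulus.min(), stimulus.max()
    S = (stimulus - lo) / (hi - lo + 1e-12)        # normalize input to [0, 1]
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])           # linking weights to 8 neighbors
    Y = np.zeros_like(S)                           # output pulses
    theta = np.ones_like(S)                        # dynamic threshold
    fires = np.zeros_like(S)                       # accumulated firing counts
    for _ in range(n_iter):
        L = convolve(Y, kernel, mode="constant")   # linking input from neighbors
        U = S * (1.0 + beta * L)                   # internal activity
        Y = (U > theta).astype(float)              # fire where activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * Y   # decay, then raise after firing
        fires += Y
    return fires

def fuse_highfreq(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    # The coefficient whose neuron fired more often carries the stronger edge.
    fa = pcnn_fire_counts(np.abs(band_a))
    fb = pcnn_fire_counts(np.abs(band_b))
    return np.where(fa >= fb, band_a, band_b)
```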

3.1.2. Feature-Level Fusion Algorithm

(1) Low-Frequency Domain Fusion Rule Based on Fuzzy Logic. Fusion based on fuzzy rules can be divided into two types: spatial domain fusion and frequency domain fusion. Teng et al. fuzzified all pixel points into five fuzzy subsets based on the gray values of the image, determined the membership degree of each fuzzy subset in the corresponding domain by a triangular membership function, and formulated fusion rules on this basis to obtain the fusion results [19]. Cai and Wei first decomposed the source image into the frequency domain and then formulated the fusion rules in the low-frequency domain by using the fuzzy logic criterion to maximize the information content of the fused subband image in the low-frequency domain [20]. In this paper, a Gaussian membership function is used to determine the weight coefficients of image fusion, defined as
\[
\mu(x,y) = \exp\!\left( -\,\frac{\bigl( C(x,y) - \bar{C} \bigr)^{2}}{2\,(k\sigma)^{2}} \right)
\]
where $\sigma$ is the standard deviation of the subband image, $C(x,y)$ is the low-frequency decomposition coefficient at point $(x,y)$, $\bar{C}$ is the average value of the decomposition coefficients, and $k$ is a constant.
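A minimal sketch of this weighting; the pointwise normalization of the two sources' weights and the default value of the constant $k$ are assumptions for illustration.

```python
# Gaussian-membership weighting for low-frequency coefficients,
# following the definition above.
import numpy as np

def gaussian_membership(coeff: np.ndarray, k: float = 1.0) -> np.ndarray:
    sigma = coeff.std()    # standard deviation of the subband
    mean = coeff.mean()    # average decomposition coefficient
    return np.exp(-((coeff - mean) ** 2) / (2.0 * (k * sigma) ** 2 + 1e-12))

def fuse_lowfreq(low_a: np.ndarray, low_b: np.ndarray, k: float = 1.0) -> np.ndarray:
    # Weight each source by its membership and normalize pointwise.
    w_a = gaussian_membership(low_a, k)
    w_b = gaussian_membership(low_b, k)
    return (w_a * low_a + w_b * low_b) / (w_a + w_b + 1e-12)
```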

(2) High-Frequency Domain Fusion Rules Based on Segmentation Results. The role of high-frequency fusion rules is to solve the problem that the target is not significant in the pixel-level fusion results and then improve the texture energy and other features of trees in the fusion results, so that the tree features extracted from the fusion results are more accurate. The flow chart is shown in Figure 2.

(3) Fusion Coefficient Optimization Algorithm. The teaching-learning-based optimization (TLBO) algorithm is a swarm intelligence algorithm proposed by Rao et al. in 2011 [21]. It imitates the learning process of a class of students and can be divided into two phases: teaching and learning. In 2014, Jin and Wang first applied the TLBO optimization algorithm to image fusion to optimize the fusion coefficients and improve the image quality evaluation indexes [22].
(1) Teaching phase

In the teaching phase, the overall optimization can be achieved by encouraging top students. Figure 3 shows the schematic diagram of the overall optimization process in this phase.

As shown by the curve of a group of students' overall academic records in Figure 3, the results follow a normal distribution, with the average representing the students' overall level. In each optimization round, the student with the best score is defined as the teacher, and the overall level of the students is then optimized toward the teacher's level through the teacher's influence.
(2) Learning phase

In the learning phase, the target function index can be improved through mutual learning among individuals. The process is carried out according to the following rules:
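Since the paper's own formulas were lost in typesetting, the standard TLBO update rules of Rao et al. [21] are reproduced here for reference. In the teaching phase, each learner $X_i$ moves toward the teacher:
\[
X_i^{new} = X_i^{old} + r \left( X_{teacher} - T_F \bar{X} \right), \qquad T_F = \operatorname{round}\bigl( 1 + \mathrm{rand}(0,1) \bigr)
\]
where $r \in [0,1]$ is a random number, $\bar{X}$ is the mean of the class, and $T_F$ is the teaching factor. In the learning phase, learner $X_i$ interacts with a randomly chosen peer $X_j$ and moves toward the better of the two (for a maximization objective $f$):
\[
X_i^{new} =
\begin{cases}
X_i^{old} + r \left( X_i - X_j \right), & f(X_i) > f(X_j) \\[2pt]
X_i^{old} + r \left( X_j - X_i \right), & \text{otherwise}
\end{cases}
\]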

Compared with PSO, GA, and other optimization algorithms, the control coefficients in TLBO have less influence on the optimization effect; the algorithm converges better and requires less optimization time.

The detailed optimization steps are as follows (see the sketch after this list):
(a) The weight coefficients determined by the Gaussian membership function during image fusion were converted into a row vector, which was used as one sample. Another 9 groups of vectors of the same size were randomly generated to form the model to be tested.
(b) The entropy value of the fused image was selected as the objective function.
(c) The model was put into the TLBO system, and the fusion coefficient group under the optimal entropy value was obtained through the cycle until the convergence of the objective function.
(d) The cycle was terminated.
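A compact sketch of steps (a)-(d), assuming a population of 10 coefficient vectors and an entropy objective to be maximized; the helper `objective` (entropy of the image fused with a given coefficient vector) is left abstract here.

```python
# TLBO sketch: greedy teaching and learning phases over a small
# population of fusion-coefficient vectors.
import numpy as np

def tlbo(objective, population: np.ndarray, n_iter: int = 100) -> np.ndarray:
    pop = population.copy()
    scores = np.array([objective(x) for x in pop])   # maximize entropy
    for _ in range(n_iter):
        # Teaching phase: everyone moves toward the current teacher.
        teacher = pop[scores.argmax()]
        tf = np.round(1 + np.random.rand())          # teaching factor: 1 or 2
        for i in range(len(pop)):
            cand = pop[i] + np.random.rand(*pop[i].shape) * (teacher - tf * pop.mean(axis=0))
            s = objective(cand)
            if s > scores[i]:
                pop[i], scores[i] = cand, s          # greedy acceptance
        # Learning phase: each learner studies a random peer.
        for i in range(len(pop)):
            j = np.random.choice([k for k in range(len(pop)) if k != i])
            direction = pop[i] - pop[j] if scores[i] > scores[j] else pop[j] - pop[i]
            cand = pop[i] + np.random.rand(*pop[i].shape) * direction
            s = objective(cand)
            if s > scores[i]:
                pop[i], scores[i] = cand, s
    return pop[scores.argmax()]
```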

(4) Improved TLBO Parameter Optimization Algorithm. The basic TLBO algorithm can find the global optimum when solving simple low-dimensional problems, but when solving complex multimodal high-dimensional problems, it easily falls into local optima and cannot find values adjacent to the global optimum. Many scholars have improved the TLBO algorithm. Rao et al. supplemented and improved the structure of the TLBO algorithm, and Gao et al. introduced the crossover operation of the differential evolution algorithm to further improve its local search ability [23]. All the indexes of the image optimized by the basic TLBO algorithm were improved, but not very significantly. Therefore, in order to enhance the search ability of the algorithm and improve the evaluation index values to a greater extent, the value ranges of the two random coefficients of the TLBO algorithm, $r_1$ and $r_2$, were further adjusted.

The detailed optimization steps of the improved TLBO are as follows:
(1) The weight coefficients determined by the Gaussian membership function during image fusion were converted into a row vector, which was used as one sample. Another 9 groups of vectors of the same size were randomly generated to form the model to be tested.
(2) The entropy value of the fused image was selected as the objective function.
(3) The value range of $r_2$ was kept unchanged, several ranges of the parameter $r_1$ were set to compare the influence of $r_1$ on the image entropy, and the optimal $r_1$ was selected.
(4) The model was put into the TLBO system, and the fusion coefficient group under the optimal entropy value was obtained through the cycle until the convergence of the objective function.
(5) The value range of $r_1$ was then kept unchanged, several ranges of the parameter $r_2$ were set to compare the influence of $r_2$ on the image entropy, and the optimal $r_2$ was selected.
(6) The model was again put into the TLBO system, and the fusion coefficient group under the optimal entropy value was obtained through the cycle until the convergence of the objective function.
(7) The cycle was terminated.

3.1.3. Evaluation Indexes

In this paper, information entropy, mean gradient, standard deviation, spatial frequency, and interactive information are selected as image evaluation indexes.

(1) Information Entropy. Information entropy is the most widely used objective evaluation index of images at present, which quantitatively describes the information contained in images, and its mathematical definition is expressed as
\[
H = -\sum_{i=0}^{L-1} p_{i}\log_{2}p_{i}
\]
where $H$ represents information entropy, and $p_{i}$ represents the proportion of the number of pixels with gray value $i$ in the total pixel points. The larger the information entropy is, the more scattered the gray values of image pixels are, the richer the content is, the larger the amount of information is, and the better the fusion effect is.

(2) Average Gradient. The average gradient reflects the difference between adjacent pixels in the image. The larger the average gradient is, the greater the image contrast is, the more obvious the edge effect of objects in the image is, and the clearer the texture details are. The average gradient is defined as
\[
\bar{G} = \frac{1}{(M-1)(N-1)} \sum_{x=1}^{M-1}\sum_{y=1}^{N-1} \sqrt{ \frac{ \bigl( f(x+1,y)-f(x,y) \bigr)^{2} + \bigl( f(x,y+1)-f(x,y) \bigr)^{2} }{2} }
\]
where $\bar{G}$ represents the average gradient; $M$ and $N$ are the size of the image; and $f(x,y)$ represents the pixel gray value at coordinate $(x,y)$.

(3) Standard Deviation. Standard deviation represents the dispersion degree of the pixel gray value distribution. The larger the value is, the more discrete the gray value distribution of image pixels is, and the stronger the contrast is. The mathematical expression of standard deviation is
\[
\mathrm{STD} = \sqrt{ \frac{1}{MN} \sum_{x=1}^{M}\sum_{y=1}^{N} \bigl( f(x,y) - \mu \bigr)^{2} }
\]
where STD stands for standard deviation, $f(x,y)$ stands for the pixel value at point $(x,y)$, and $\mu$ stands for the mean of all pixel values.

(4) Spatial Frequency. Spatial frequency is a parameter used to represent the activity degree of images in space. The higher the value is, the higher the activity degree of the image in space is and the better the quality of the image is. The formula of spatial frequency is expressed as
\[
SF = \sqrt{RF^{2} + CF^{2}}
\]
with
\[
RF = \sqrt{ \frac{1}{MN} \sum_{x=1}^{M}\sum_{y=2}^{N} \bigl[ f(x,y) - f(x,y-1) \bigr]^{2} }, \qquad
CF = \sqrt{ \frac{1}{MN} \sum_{x=2}^{M}\sum_{y=1}^{N} \bigl[ f(x,y) - f(x-1,y) \bigr]^{2} }
\]
where $SF$ is the spatial frequency; $RF$ and $CF$ represent the spatial row frequency and spatial column frequency, respectively; $M$ and $N$ represent the number of rows and columns of the image; and $f(x,y)$ is the gray value of pixel point $(x,y)$.

(5) Interactive Information. Interactive information, also known as mutual information, is usually used to demonstrate the correlation between multiple variables. In the image quality evaluation system, it is used to evaluate the correlation between the fusion result and the original samples. The greater the amount of interactive information, the higher the correlation between the fusion result and the original sample, and the more information can be obtained from the original sample:
\[
MI_{A,F} = H(A) + H(F) - H(A,F)
\]
where $MI_{A,F}$ is the interactive information, $A$ represents the original sample, $F$ is the fusion result, and $H(\cdot)$ is the image entropy value, with $H(A,F)$ the joint entropy of the two images.
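For reference, the five indexes can be sketched directly from the definitions above, assuming 8-bit grayscale images stored as float arrays:

```python
# Evaluation indexes: entropy, average gradient, standard deviation,
# spatial frequency, and mutual (interactive) information.
import numpy as np

def entropy(img: np.ndarray) -> float:
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def avg_gradient(img: np.ndarray) -> float:
    gx = np.diff(img, axis=1)[:-1, :]    # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]    # vertical differences
    return float(np.sqrt((gx**2 + gy**2) / 2.0).mean())

def std_dev(img: np.ndarray) -> float:
    return float(img.std())

def spatial_frequency(img: np.ndarray) -> float:
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf**2 + cf**2))

def mutual_information(src: np.ndarray, fused: np.ndarray) -> float:
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(),
                                 bins=256, range=[[0, 256], [0, 256]])
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px[:, None] * py[None, :])[nz])).sum())
```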

3.2. Data Analysis Materials

Nearly 400 groups of forest infrared and visible images were collected in this study. The equipment used was a Fluke TI55 infrared thermal imager. The collection periods selected for this experiment were the morning and evening, when the temperature difference is large, and the afternoon, when the visual effect is easily affected. The image sample collection experiment and the experimental equipment of this study are shown in Figure 4, and the technical parameters are shown in Table 1.

The infrared lens captures spectral information in the 8–14 μm band, i.e., mid- and far-infrared images. The contours of forest edges in infrared samples are obvious and thus easy to segment and extract, but the accuracy of target recognition cannot be guaranteed due to the lack of details such as texture. Visible light samples contain rich texture details, but the gray difference between trees and the background area is very small and lacks pronounced characteristics, so it is difficult to achieve stable extraction of tree information from visible images alone. Therefore, the purpose of this study is to improve image quality and accurately extract forest tree feature information by integrating the characteristics of infrared and visible images through fusion processing, so as to ensure the accuracy of recognition.

4. Results

4.1. Pixel-Level Image Fusion Results

Figure 5 shows samples of infrared and visible forest images. Figure 6 shows the results of the wavelet decomposition algorithm, the PCA fusion algorithm, and the contourlet + PCNN fusion algorithm, respectively. In wavelet transform, the image is decomposed into a low-frequency domain and high-frequency domains in three directions: horizontal, vertical, and oblique. Wavelet decomposition can overcome the instability of Laplacian decomposition and effectively reduce the influence of noise on the image. However, due to the defect of the wavelet decomposition basis, jagged block errors are likely to occur when processing smooth curves. In PCA fusion, most of the information is concentrated in the first principal component; replacing it with the gray values of the panchromatic image and then applying the inverse transform to enhance the multispectral bands makes the information vulnerable to loss. Contourlet decomposition + PCNN transform can avoid block effect and grayscale distortion while improving image definition and contrast, avoiding new noise, and expanding the information of a single image.

From the perspective of subjective evaluation, the fusion results synthesize the features of the source images, expand the information content of a single image, and improve image clarity. The algorithm is based on the nonsubsampled contourlet decomposition, a multiscale and multidirectional computing framework for discrete images; it can be regarded as an enhancement of contourlet decomposition that can carry out multidirectional decomposition and multiscale decomposition of images, respectively. With this improvement, the contourlet decomposition + PCNN transform can eliminate the aliasing effect caused by the standard contourlet transform and provide a good, stable input signal for the subsequent fusion.

In order to quantitatively evaluate the quality of the fusion results, the images were quantitatively analyzed, as shown in Table 2. As can be seen from the table, the distortion in the standard deviation and mean gradient of wavelet fusion is caused by the fact that wavelet transform cannot effectively process smooth curves in the image, which is likely to produce jagged noise that interferes with the statistical characteristics of the image. Contourlet decomposition + PCNN transform can avoid block effect and grayscale distortion while improving image definition and contrast, avoiding new noise, and expanding the information of a single image; it can also more accurately describe the tree information of the forest area and its scene details. In terms of quantitative analysis, the results obtained by contourlet combined with the PCNN algorithm are better than those of the other two algorithms in entropy, spatial resolution, and interactive information; their standard deviation and mean gradient are slightly lower than those of the traditional wavelet algorithm but higher than those of PCA.

Therefore, the fusion result of contourlet decomposition + PCNN transform has a larger amount of information, stronger contrast, and better visual effect. At the same time, it can effectively avoid the grayscale distortion and block effect easily caused by the fusion of forest infrared and visible images while suppressing the influence of noise. Compared with common pixel-level fusion algorithms, this algorithm therefore performs better in improving image information and sharpness.

4.2. Feature-Level Image Fusion Results

Compared with the pixel-level image fusion algorithm, the high-frequency domain fusion algorithm proposed above improves the visual effect of the image, as shown in Figure 7. In the forest image, the background area has little influence on the target area, which reduces the subsequent block and ringing effects; the contour is clearer, and more information about the target area is retained. Compared with the pixel-level fusion image, the block effect is significantly reduced. As can be seen from Table 3, pixel-level fusion images have the best data in terms of entropy, mean gradient, standard deviation, and spatial frequency, indicating that pixel-level fusion images are better in terms of information content, contrast, and spatial activity. The spatial resolution and interactive information of feature-level fusion images are optimal, which indicates that the image is better than other fusion images in terms of the degree of association with the source image and the prevention of noise interference.

As can be seen from the table, the result obtained by the feature-level fusion algorithm has a larger amount of information and a better visual effect. It effectively avoids the grayscale distortion and block effect easily caused by the fusion of forest infrared and visible images while suppressing the influence of noise. Pixel-level and feature-level fusion results each perform better on different evaluation indexes, so the fusion method can be chosen according to the requirements.

4.3. Results of TLBO Algorithm

The feature-level image fusion results of the infrared and visible images mentioned above are shown in Table 4. It can be seen from the table that the entropy value was 7.3981 before optimization and 7.5121 after optimization with the original TLBO model. Figure 8 shows the comparison between the optimized image and the original feature-level fusion image.

The entropy value, standard deviation, and mean gradient increased by 1.16%, 7%, and 17.69%, respectively. All the indexes of the optimized image were improved, though not very significantly.

The feature-level image fusion results of the other group of infrared and visible images are shown in Table 5. It can be seen from the table that the entropy value was 7.2378 before optimization and 7.3910 after optimization with the original TLBO model. Figure 9 shows the comparison between the optimized image and the original feature-level fusion image.

The entropy value, standard deviation, and mean gradient increased by 2.12%, 18.84%, and 3.10%, respectively. Except for the interactive information, all the indexes of the optimized image were improved, though not very significantly. The interactive information represents the correlation between the fusion image and the source image; the larger the value, the better the fusion effect. However, in the fused image, more infrared information is needed to obtain a more obvious contour and detailed texture with a better entropy value, so the interactive information value is relatively low.

4.4. Results of Improved TLBO Algorithm

Figures 10(a) and 10(b) are a group of infrared and visible light sample images named UN-CAMP, which have been widely used for comparison in the domestic and foreign literature on image fusion algorithms. Figure 10(c) is the feature-level fusion image. Figures 10(d) and 10(e) are, respectively, the optimized results of the original TLBO algorithm and of the improved TLBO algorithm after the random parameter setting. Table 6 shows the corresponding index evaluation results.

The data in the above table show that all quantitative evaluation indexes were improved after optimization of the fusion parameters. Compared with before optimization, the amount of information and the spatial activity of the image increased by 2.05% and 15.27%, respectively, and the standard deviation and mean gradient, which reflect image sharpness and visual effect, increased by 13.27% and 28.87%, respectively.

For the infrared and visible image samples selected above, after the same random parameter setting process, the entropy value of the fused image reached the optimal value when the value ranges of the random parameters $r_1$ and $r_2$ were fixed at [0.4, 0.9] and [0.5, 1], respectively. Figure 11 shows the feature-level fusion image and the optimized results of the original TLBO algorithm and the improved TLBO algorithm after random parameter setting.

Table 7 shows the evaluation results of the corresponding indicators. For this sample, while the entropy, standard deviation, and mean gradient were all improved, the spatial resolution and interactive information decreased compared with the feature-level fusion results. Due to complex background information, the improved TLBO algorithm is better in terms of the information content, contrast, and spatial activity of the optimized image. However, the processing results differ from the source image in visual effect because of the excessive influence of background information. In general, the algorithm is quite effective in improving the quality of fusion images and execution efficiency and achieves better extraction results of target forest images than other algorithms.

For the other group of infrared and visible images, after the same random parameter setting, the entropy value of the fused image reached the optimal value when the value ranges of the random parameters $r_1$ and $r_2$ were fixed at [0.3, 0.8] and [0.5, 1], respectively. Figure 12 shows the feature-level fusion image and the optimized results of the original TLBO algorithm and the improved TLBO algorithm after random parameter setting.

Table 8 shows the evaluation results of the corresponding indicators. As can be seen from the table, while the entropy and standard deviation indicators were improved, the spatial resolution and mean gradient became lower than the original optimization results but remained higher than those of the feature-level fusion image. With this algorithm, the information content and contrast of the optimized image are better. Comparing the results obtained from multiple sets of data shows that, for different image samples, the algorithm is relatively effective in improving the quality of fusion images and execution efficiency and achieves better extraction results of target forest images than other algorithms.

5. Discussion and Conclusions

In the pixel-level image fusion research, a pulse coupled neural network model relying on the contourlet transform is applied to avoid the block effect and grayscale distortion caused by the fusion of infrared and visible images. Given the significant difference in gray level between infrared and visible forest images, a reasonable threshold value is selected in the low-frequency domain fusion processing; points with different output pulse signals are treated differently, and the fusion rules are explicitly formulated. Thus, grayscale distortion and block effect are not only avoided, but the quality of the fusion image is effectively improved, and all evaluation indexes of the image are improved to some extent. In the feature-level image fusion research, the PCNN model was used to eliminate the influence of noise in path optimization. The segmentation results are sufficient to meet the needs of feature-level fusion research even though they are disturbed by confusable factors. Based on fuzzy logic rules, the fusion rules in the low-frequency domain are formulated by calculating the degree of dissimilarity between corresponding points of the source images, and the fusion rules in the high-frequency domain are determined by combining the image segmentation results. While ensuring the visual effect of the fused image, the detailed characteristic information of the target region is displayed, making the research on image fusion more targeted and purposeful [24, 25]. The experimental results show that the feature-level image fusion algorithm ensures image quality, achieves a detailed display of the tree target area in the image, and improves the quality evaluation indexes. Compared with the pixel-level fusion results, the tree texture obtained by this method is more evident, with more apparent edges.

In the research of feature-level image optimization, the teaching-learning-based optimization (TLBO) parameter optimization algorithm is introduced to optimize the fusion coefficients in the fusion process and thereby improve the index data of the fused image. In order to obtain better image results, the value ranges of the random parameters in the TLBO model were set so as to find the optimal parameter combination for each image group, and the quantitative evaluation indexes of the fused images were improved. Pixel-level and feature-level fusion algorithms each have advantages for different occasions. Pixel-level fusion has advantages in improving image information and sharpness, but it takes twice as long to process information as feature-level fusion. Feature-level fusion has broader application space in forestry intelligent information detection, as it can highlight the target area and reduce computation. The setting of algorithm parameters has an important influence on optimization ability. The teaching factor of the basic TLBO algorithm takes only the two values 1 and 2, which limits the optimization performance of the algorithm. Therefore, an improved TLBO optimization algorithm is proposed that designs the teaching factor with a segmented strategy to process the image. Experimental results verify the algorithm's effectiveness, showing good search ability and fusion image quality. In the future, the proposed method will be extended to theoretical research and practical applications including time-series prediction and pattern recognition [16, 26-28].

Data Availability

All data included in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Authors’ Contributions

Conceptualization was done by Jinghua Wang and Lei Yan; methodology was done by Jinghua Wang; software was done by Jinghua Wang; validation was done by Jinghua Wang; formal analysis was done by Lei Yan and Fan Wang; investigation was done by Lei Yan and Fan Wang; resources was done by Lei Yan; data curation was done by Jinghua Wang and Lei Yan; writing—original draft preparation was done by Jinghua Wang; writing—review and editing was done by Jinghua Wang and Shulin Li; visualization was done by Jinghua Wang and Shulin Li; supervision was done by Lei Yan; project administration was done by Lei Yan; funding acquisition was done by Lei Yan. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This research was funded by the National Key Research and Development Program of China (no. 2021YFD2100605), the National Natural Science Foundation of China (nos. 62006008, 62173007, and 31770769), and the Fundamental Research Funds for the Central Universities (no. 2015ZCQ-GX-03).