Abstract
Landslides are harmful to economic development and people's lives and cause irreparable losses to the environment. By applying image detection technology and intelligent algorithms, a new approach to landslide detection is proposed to achieve effective detection and identification of hazards. This paper takes landslide images as the data set, applies noise reduction, image dilation, and image segmentation to the images, and extracts object region information. A quantitative description of the azimuth and displacement changes of the crack curve is also given. The method is applied to a 3D simulation model, a sand-and-stone model, a soil model, and the landslide test results of the Panzhihua airport, which shows that the designed method is effective. Experiments show that when a landslide occurs, the texture and color become chaotic, whereas a normal mountain stays in a more regular state, so the extracted features differ greatly. The method recognizes hillsides covered with vegetation well, the recognition time is short, and the recognition rate reaches 90%.
1. Introduction
As a common type of geological disaster [1], landslides are occurring more frequently year by year. According to the National Geological Disaster Bulletin, 9,710 geological disasters occurred in China in 2020, of which 7,403 were landslides, accounting for 76.2% of the total. Therefore, implementing landslide monitoring and alarming is very important for disaster prevention and mitigation [2, 3].
Intermittent cracks appear at the trailing edge; if the crack length remains unchanged, the slope is only beginning to slide. If the trailing-edge cracks appear continuously and their length tends to expand, the sliding is gradually becoming severe [4]. Therefore, monitoring the changing trend of cracks at the landslide edge and reflecting the displacement trajectory of the slide in time is of great significance for landslide monitoring and alarming. Independent component analysis has been used for feature extraction, with basis functions serving as pattern templates for natural image feature detection [5]; this method has been applied successfully to edge detection and texture segmentation. Remote sensing technology is used to monitor disasters, thereby improving efficiency. In this paper, a CNN together with texture change analysis is proposed to detect landslides intelligently [6].
At present, landslide monitoring methods mainly include displacement monitoring, physical field monitoring, groundwater monitoring, and monitoring of external triggering factors. Surface displacement is an important basis for judging slope stability and an important indicator for studying the evolution of a landslide and managing hidden danger areas, so the accuracy and validity of displacement data are particularly important. Technical schemes for landslide disaster monitoring are gradually becoming more accurate, intelligent, and real-time. Landslide monitoring still faces difficulties: sensors suffer from signal errors and are not truly real-time. Image processing technology, in contrast, gives better recognition of vegetation-covered hillsides, shorter recognition time, and a higher recognition rate. Landslide movement is highly complex and affected by many factors, so it is currently difficult to understand the internal characteristics of every landslide thoroughly, but monitoring helps to grasp and analyze the evolution and characteristics of the landslide body. With the continuous progress of landslide monitoring technology, obtaining more detailed landslide data and understanding landslides more deeply require a universal, easy-to-install landslide early warning system.
2. Identification of Crack Curve at the Trailing Edge of Landslide
2.1. Characteristics of Cracks at the Trailing Edge of Landslide
After a landslide disaster [7], soil and rock that were originally part of the mountain separate from the main body under gravity. The main characteristics of the cracks formed between the trailing edge of the landslide and the stationary mountain are shown in Table 1.
2.2. Image Recognition
Image processing technology covers operations on images such as noise removal [8], enhancement, restoration, segmentation, and feature extraction. The main contents of image processing are shown in Table 2.
Image processing technology is mature, is widely used in various industries, and greatly improves work efficiency [9]:
(1) In road inspection, image processing technology is used to quickly detect the position and width of road cracks [10].
(2) In residential buildings, image processing technology is used to detect crack information in concrete [11].
(3) In bridge monitoring, image processing technology is used to monitor whether there are cracks at the bottom of the bridge [12].
(4) In tunnels, image processing technology is used to inspect the tunnel structure.
(5) In slope monitoring, image processing technology is used to monitor changes of the mountain and obtain them in real time; however, it is difficult to record and compare the changes of the whole mountain, and it is difficult to analyze finer changes with methods such as segmentation and enhancement alone, so accurate and comprehensive monitoring data are hard to obtain [13].
(6) Image processing technology can also be used to measure small displacement motion.
2.3. Color Model
2.3.1. RGB Model
Because the RGB model quantitatively represents the brightness of the three basic colors, red, green, and blue, it is also called the additive color mixing model [14]. When the brightness values of the three primary colors are all at their lowest (0), the result is black; when they all reach the highest value (255), the result is white [15].
Additive mixing combines colors by adding their components: red + green = yellow, green + blue = cyan, and red + green + blue = white.
The color matching equation of the additive mixing model is
$F = r[R] + g[G] + b[B]$,
where $[R]$, $[G]$, and $[B]$ denote the three primaries and $r$, $g$, and $b$ are their mixing proportions.
2.3.2. HSV Model
HSV is built from three visually meaningful color parameters: hue (H), saturation (S), and value (V) [16]. Hue H: hue is measured on a 360° color circle, counted counterclockwise from 0°; common reference points are red at 0°, green at 120°, and blue at 240°. Saturation S: saturation is defined relative to the spectral color; the closer a color is to the spectral color, the higher its saturation, and the farther away, the lower. Value V: value indicates the brightness of the color and depends on the brightness of the light source.
In the landslide monitoring system, images with different RGB proportions show different visual effects, and analyzing the monitoring images in a suitable color space can improve processing quality and analysis accuracy. Hue and saturation together are usually called chromaticity and express the category and depth of a color. Because human vision is more sensitive to brightness than to shades of color, the HSI color space is often used for color processing and recognition; it matches human visual characteristics better than the RGB color space. In image processing and computer vision, a large number of algorithms can work in the HSI color space, and its components can be processed separately and independently of each other, which greatly simplifies the workload of image analysis and processing.
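As a minimal sketch of this color-space conversion (assuming Python with OpenCV and NumPy, which the paper does not use; its experiments are implemented in MATLAB, and the file name is hypothetical), a monitoring image can be converted from RGB to the HSV family of spaces as follows:

```python
import cv2

# Hypothetical input file; OpenCV reads images in BGR channel order.
bgr = cv2.imread("slope.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # H in [0, 179], S and V in [0, 255]

h, s, v = cv2.split(hsv)
print("mean hue:", h.mean())          # dominant color family of the scene
print("mean saturation:", s.mean())   # how far the colors are from gray
print("mean value:", v.mean())        # overall brightness
```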
2.4. Image Preprocessing
2.4.1. Image Digitization
The purpose of image digitization is to convert a continuous analog image into a discrete digital image [17]. Sampling, quantization, and coding are used to convert the originally continuous space and brightness into discrete values.
As shown in the digital image sampling diagram of Figure 1(a), the original image is divided in two-dimensional space into an M × N array of cells of fixed size, generating the "dot" image of Figure 1(b); the number of dots obtained determines the sampling resolution. The output efficiency is high, and the images are easily exchanged among various system platforms.
Figure 1: Image digitization: (a) original image; (b) sampled dot-array image.
2.4.2. Neighborhoods and Connected Domains
Since the pixels of a digital image are arranged in a two-dimensional array, the pixels surrounding a pixel of interest are usually described by its 4-neighborhood (upper, lower, left, and right pixels) and its diagonal (D) neighborhood; together these form the 8-neighborhood. As shown in Figure 2, the pixel of interest is P, with coordinates (x, y), and its nearest eight pixels constitute its 8-neighborhood.
Figure 2: Neighborhoods of the pixel of interest P.
If two pixels in the image are adjacent and their gray values satisfy a given similarity criterion (for example, are equal), the two pixels are said to be connected. As shown in Figure 3, p and q constitute a connected region.
Figure 3: Pixels p and q forming a connected region.
2.4.3. Grayscale
Because the CCD image is a color image, processing it directly takes a lot of computation time, so the CCD image is usually converted into a corresponding grayscale image.
(1) Component Method. The three channel components of the color image are each used as the gray value of a corresponding grayscale image:
$Gray_1(x, y) = R(x, y)$, $\quad Gray_2(x, y) = G(x, y)$, $\quad Gray_3(x, y) = B(x, y)$.
The component giving the best conversion effect is chosen.
(2) Maximum Value Method. The maximum of the three channel components of the color image is used as the gray value of the corresponding grayscale image:
$Gray(x, y) = \max\{R(x, y), G(x, y), B(x, y)\}$.
(3) Average Method. The average of the three channel components of the color image is used as the gray value of the corresponding grayscale image:
$Gray(x, y) = [R(x, y) + G(x, y) + B(x, y)]/3$.
(4) Weighted Average Method. A weighted average of the three channel components of the color image is used as the gray value of the corresponding grayscale image; with the commonly used weights,
$Gray(x, y) = 0.299R(x, y) + 0.587G(x, y) + 0.114B(x, y)$.
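The four conversion rules above can be summarized in a short sketch (assuming NumPy; the function name and the BT.601 weights used for the weighted average are common conventions, not values taken from the paper):

```python
import numpy as np

def to_gray(rgb: np.ndarray, method: str = "weighted") -> np.ndarray:
    """Convert an H x W x 3 RGB image (uint8) to grayscale with one of the
    four rules described in the text."""
    r, g, b = (rgb[..., k].astype(np.float64) for k in range(3))
    if method == "component":      # take a single channel, e.g. the green one
        gray = g
    elif method == "maximum":      # Gray = max(R, G, B)
        gray = np.maximum(np.maximum(r, g), b)
    elif method == "average":      # Gray = (R + G + B) / 3
        gray = (r + g + b) / 3.0
    else:                          # weighted average with the usual BT.601 weights
        gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```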
2.4.4. Histogram Correction
Histogram correction belongs to the category of image enhancement; its essence is to stretch the gray-level range or make the gray-level distribution uniform. Histogram equalization and histogram specification are the two commonly used correction methods.
(1) Histogram Equalization. Histogram equalization applies a single gray-level mapping function to the pixels of the original image so that the gray-level probability distribution of the resulting image is uniform:
(1) Let the gray levels of the original image and the target image be r and s, respectively, with 0 ≤ r, s ≤ 1 and s = T(r), where T is the gray-level mapping function.
(2) If the gray-level probability density of the target image is denoted $p_s(s)$, then
$p_s(s) = p_r(r)\left|\dfrac{dr}{ds}\right|$.
(3) The mapping function is taken as the cumulative distribution
$s = T(r) = \int_0^r p_r(w)\,dw$.
(4) Differentiating s with respect to r in the mapping function gives
$\dfrac{ds}{dr} = p_r(r)$.
(5) Substituting (4) into (2) gives $p_s(s) = 1$ for 0 ≤ s ≤ 1, that is, a uniform gray-level distribution.
(2) Histogram Specification. Histogram specification transforms the gray histogram of the image into a target histogram over a specified gray range by means of a gray mapping function:
(1) Form the cumulative distributions of the gray probability density $p_r(r)$ of the original image and $p_z(z)$ of the target image:
$s = T(r) = \int_0^r p_r(w)\,dw$, $\qquad v = G(z) = \int_0^z p_z(w)\,dw$.
(2) The target gray level is obtained from the inverse transformation
$z = G^{-1}(v)$.
(3) Replacing v in the inverse transformation with the equalized gray level of the original image gives
$z = G^{-1}(T(r))$.
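A compact sketch of histogram equalization following the derivation above (assuming NumPy; the discrete mapping rounds 255·T(r) and is a generic illustration, not the paper's code):

```python
import numpy as np

def equalize_histogram(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization of an 8-bit grayscale image via s = T(r)."""
    hist = np.bincount(gray.ravel(), minlength=256)   # gray-level counts n_i
    p = hist / gray.size                              # probabilities p_r(r_i)
    cdf = np.cumsum(p)                                # discrete T(r)
    mapping = np.round(255 * cdf).astype(np.uint8)    # stretch to [0, 255]
    return mapping[gray]                              # apply the mapping per pixel
```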
2.4.5. Image Noise Reduction Processing
Because of the limitations of shooting conditions, outdoor landslide images usually contain various kinds of noise, and recognizing these images directly may not achieve the desired effect.
Image noise reduction improves image quality and subsequent recognition. Common noise reduction schemes are mean filtering and median filtering, shown in Figure 4.
Figure 4: Mean filtering and median filtering.
In general, let the discrete sequence within a filter window be {X0, X1, …, X8}, with corresponding non-negative integer weights {W0, W1, …, W8}. Weighted median filtering is defined as Y = Med{W0◇X0, W1◇X1, …, W8◇X8}, where Y is the filter output, Wi◇Xi denotes Wi copies of the sample Xi, and Med takes the median of the enlarged sample set.
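A direct, unoptimized sketch of the weighted median filter defined above (assuming NumPy; plain median filtering is the special case in which all weights equal one):

```python
import numpy as np

def weighted_median_filter(gray: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """3 x 3 weighted median: each neighbour X_i is repeated W_i times and the
    median of the enlarged sample set becomes the output value."""
    assert weights.shape == (3, 3) and weights.dtype.kind in "iu"
    padded = np.pad(gray, 1, mode="edge")
    out = np.empty_like(gray)
    w = weights.ravel()
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = padded[i:i + 3, j:j + 3].ravel()
            out[i, j] = int(np.median(np.repeat(window, w)))
    return out
```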
2.4.6. Image Binarization Processing
In image binarization, pixels whose gray value is below a preset threshold are set to 0 and the remaining pixels are set to 255, so that the image becomes purely black and white and the foreground region can be distinguished from the background region.
Let the image contain L gray levels, and let $n_i$ be the number of pixels with gray level i. The total number of pixels in the image is
$N = \sum_{i=0}^{L-1} n_i$.
The frequencies of pixels with different gray values differ; the frequency of gray level i is
$p_i = n_i / N$.
A threshold T divides the image into foreground and background, whose frequencies are
$\omega_0(T) = \sum_{i=0}^{T} p_i, \qquad \omega_1(T) = \sum_{i=T+1}^{L-1} p_i$.
The mean gray values of the foreground and background are
$\mu_0(T) = \dfrac{1}{\omega_0}\sum_{i=0}^{T} i\,p_i, \qquad \mu_1(T) = \dfrac{1}{\omega_1}\sum_{i=T+1}^{L-1} i\,p_i$.
The mean gray value of all image pixels is
$\mu = \omega_0\mu_0 + \omega_1\mu_1$.
The between-class variance of foreground and background is
$\sigma^2(T) = \omega_0(\mu_0-\mu)^2 + \omega_1(\mu_1-\mu)^2 = \omega_0\omega_1(\mu_0-\mu_1)^2$.
The optimal threshold is the T that maximizes this variance, namely
$T^{*} = \arg\max_{0 \le T < L} \sigma^2(T)$,
where $\omega_0$, $\omega_1$, $\mu_0$, $\mu_1$, and $\sigma^2$ are all functions of the threshold T.
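The threshold search can be written directly from the formulas above; this is a generic sketch of Otsu's method in NumPy (OpenCV's cv2.threshold with the THRESH_OTSU flag computes the same threshold):

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the threshold T maximizing the between-class variance sigma^2(T)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                       # gray-level frequencies p_i
    levels = np.arange(256, dtype=np.float64)
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0   # foreground mean
        mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1   # background mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2        # sigma^2(T)
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# binary = np.where(gray > otsu_threshold(gray), 255, 0).astype(np.uint8)
```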
2.5. Image Morphological Processing
Non-morphological algorithms achieve their effect through function modeling, convolution transformation, and similar methods and work well for correcting the pixels of many images, but for some images they are less effective, for example when abnormal points are corrected only through the consistency of a fitted function. Morphological processing instead operates with pixel units called structuring elements, which are usually relatively small sets of pixel points.
The two basic operations of mathematical morphology are erosion and dilation, and the morphological algorithms derived from them include the opening and closing operations.
2.5.1. Dilation
Let the original image be defined on $Z^2$. In morphological dilation, the structuring element D scans the pixel set X; the set of all translations z for which the translated (reflected) structuring element intersects X is the dilation result:
$X \oplus D = \{\, z \in Z^2 \mid (\hat{D})_z \cap X \neq \varnothing \,\}$.
Dilation enlarges the bright (white) regions of the image: all background points in contact with the foreground region are merged into the object, so holes and narrow gaps in the foreground region are filled and intermittent parts of the image are connected.
2.5.2. Erosion
Let the original image be defined on $Z^2$. In morphological erosion, the structuring element E scans the pixel set X; the set of all translations z for which the translated structuring element is entirely contained in X is the erosion result:
$X \ominus E = \{\, z \in Z^2 \mid E_z \subseteq X \,\}$.
2.5.3. Open Operation
The opening operation first erodes the image and then dilates it. The arithmetic expression is
$X \circ B = (X \ominus B) \oplus B$.
Opening filters out protrusions smaller than the structuring element B, breaks slender connections, and smooths the boundary of the object region. Care must therefore be taken, because it can also remove the fine cracks at the trailing edge of the landslide.
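A small sketch of erosion, dilation, and opening with OpenCV (the synthetic image is purely illustrative; morphologyEx implements the same opening as the erode-then-dilate sequence):

```python
import cv2
import numpy as np

# Synthetic binary image: a solid block plus an isolated speck that opening removes.
binary = np.zeros((20, 20), dtype=np.uint8)
binary[5:15, 5:15] = 255
binary[2, 2] = 255

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))  # structuring element B
eroded = cv2.erode(binary, kernel)                          # X eroded by B
opened = cv2.dilate(eroded, kernel)                         # (X erosion B) then dilation B
assert np.array_equal(opened, cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel))
assert opened[2, 2] == 0                                    # the isolated speck is gone
```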
2.6. Boundary Detection
To obtain the crack curve at the trailing edge of the landslide, cracks must be extracted along the junction between the trailing edge (rock and soil) and the stationary mountain (green vegetation). Because of the difference in color and gray level between the trailing edge and the stationary mountain, the gray gradient between edge pixels and their neighbors is large, so an edge detection algorithm can be used to extract the trailing-edge crack curve.
Global-search edge detection focuses on the calculation of edge strength: the gradient magnitude measures the edge strength, and the gradient direction is used in place of the local edge direction. First-order global-search operators include the Roberts operator and the Sobel operator.
2.6.1. Roberts Edge Detection Algorithm
The Roberts operator was proposed by Lawrence Roberts in 1963. It uses local differences to find edges, as shown in Figure 5.
Figure 5: The Roberts operator.
The gradient of f(x, y) approximated by the vertical and horizontal differences has magnitude
$G(x, y) = \sqrt{[f(x+1, y) - f(x, y)]^2 + [f(x, y+1) - f(x, y)]^2}$.
The cross-difference (Roberts) approximation of the gradient magnitude of f(x, y) is
$G(x, y) = |f(x+1, y+1) - f(x, y)| + |f(x+1, y) - f(x, y+1)|$.
When G(x, y) is greater than a preset threshold, the point (x, y) is regarded as an edge point.
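An illustrative Roberts implementation under the convention above (assuming NumPy and SciPy; the default threshold is a free parameter, not a value reported in the paper):

```python
import numpy as np
from scipy.ndimage import convolve

def roberts_edges(gray: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Roberts cross operator: diagonal differences approximate the gradient,
    and points whose magnitude exceeds the threshold are marked as edges."""
    f = gray.astype(np.float64)
    kx = np.array([[1.0, 0.0], [0.0, -1.0]])
    ky = np.array([[0.0, 1.0], [-1.0, 0.0]])
    gx, gy = convolve(f, kx), convolve(f, ky)
    magnitude = np.abs(gx) + np.abs(gy)          # |g1| + |g2| approximation
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)
```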
2.6.2. Sobel Edge Detection Algorithm
The Sobel operator was proposed by Irwin Sobel in 1973. As a weighted-average edge detection operator, it assumes that neighboring pixels do not influence the current pixel equally: pixels at different distances are given different weights and therefore contribute differently to the result.
The Sobel convolution kernels are
$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$.
The operator consists of two 3 × 3 kernels representing the horizontal and vertical directions; each is convolved with the 3 × 3 neighborhood centered on f(x, y) to estimate the derivatives in the x and y directions.
Let the image be I and the threshold be T. The gradient magnitude is
$G = \sqrt{(G_x * I)^2 + (G_y * I)^2}$.
f(x, y) is regarded as an edge point when G(x, y) is greater than the threshold T.
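The corresponding Sobel sketch, using the two 3 × 3 kernels written out above (again with SciPy convolution and a hypothetical threshold):

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T    # [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_edges(gray: np.ndarray, threshold: float = 100.0) -> np.ndarray:
    """Sobel operator: weighted 3 x 3 differences in x and y, thresholded magnitude."""
    f = gray.astype(np.float64)
    gx, gy = convolve(f, SOBEL_X), convolve(f, SOBEL_Y)
    magnitude = np.hypot(gx, gy)                 # sqrt(gx^2 + gy^2)
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)
```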
2.6.3. Gauss–Laplacian Edge Detection Algorithm
A single differentiation only yields a local extremum of the gradient and cannot by itself locate the edge, so the derivative of the first derivative, that is, the second derivative, is examined: the edge corresponds to the zero crossing of the second derivative, which lies between a peak and a trough. Smoothing the image with a Gaussian before applying the Laplacian gives the Laplacian-of-Gaussian (LoG) operator:
$\nabla^2 G(x, y) = \dfrac{1}{\pi\sigma^4}\left(\dfrac{x^2 + y^2}{2\sigma^2} - 1\right) e^{-\frac{x^2 + y^2}{2\sigma^2}}$.
2.6.4. Canny Edge Detection Algorithm
The Canny operator was proposed by John F. Canny in 1986. It is among the most commonly used edge detection methods, and its steps are as follows.
(1) Noise Removal. As with the Laplacian of Gaussian, noise removal must first be applied to the target image to reduce the interference of noise with the processing result.
The Gaussian kernel is defined on a 3 × 3 neighborhood whose center has coordinates (0, 0); x and y are integers, and the index i runs from 0 to 8. σ is the standard deviation: the larger σ is, the stronger the smoothing effect (and the more the edges are blurred). The two-dimensional Gaussian function is
$G(x, y) = \dfrac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$.
The gray values of the 3 × 3 region are denoted $Z_0$–$Z_8$, and each is multiplied by the corresponding normalized Gaussian weight
$w_i = \dfrac{G(x_i, y_i)}{\sum_{j=0}^{8} G(x_j, y_j)}$.
Summing these nine weighted values gives the Gaussian-blurred value of the center pixel:
$Z' = \sum_{i=0}^{8} w_i Z_i$.
(2) The Amplitude and Direction of the Gradient Are Calculated by Finite Differences of the First Partial Derivatives. Image edge detection involves two attributes, direction and amplitude: along the direction of the edge the gray value changes slowly, while perpendicular to the edge direction the gray value changes sharply.
In the Gaussian-filtered image, the gradients in the x and y directions are calculated over a 2 × 2 region by first-order finite difference approximation, as shown in Figure 6.
Figure 6: The 2 × 2 region used for the finite-difference gradient.
The finite-difference approximations of the gradients in the x and y directions are
$P_x(i, j) = \dfrac{f(i, j+1) - f(i, j) + f(i+1, j+1) - f(i+1, j)}{2}$,
$P_y(i, j) = \dfrac{f(i+1, j) - f(i, j) + f(i+1, j+1) - f(i, j+1)}{2}$.
From these, the amplitude and direction of the gradient at the point are obtained:
$M(i, j) = \sqrt{P_x(i, j)^2 + P_y(i, j)^2}, \qquad \theta(i, j) = \arctan\dfrac{P_y(i, j)}{P_x(i, j)}$.
(3) Suppressing the Nonmaximum Values of the Gradient Amplitude. Within the eight neighbors of a 3 × 3 region, as shown in Figure 7, the gradient direction is quantized to one of four directions: 0°, 45°, 90°, and 135°; a pixel is retained as an edge candidate only if its gradient amplitude is not smaller than those of its two neighbors along the gradient direction.
Figure 7: The four quantized gradient directions in the 3 × 3 neighborhood.
(4) Edges Are Detected and Connected with a Double-Threshold Algorithm. Two thresholds T1 and T2 are set with T2 = 2T1, giving two edge images N1 and N2. N1, obtained with the low threshold, contains many false edges; N2, obtained with the high threshold, contains fewer false edges but is intermittent (broken). For edge linking, whenever a break point N2[x, y] appears in N2, the algorithm searches the eight neighbors of the corresponding position in N1 for an edge point that can bridge the break, and continues until the contour in N2 is closed. Figure 8 shows the flow of closing the N2 image.
Figure 8: Flowchart for closing the N2 edge image.
2.6.5. Comparison of Edge Detection Operators
Table 3 shows a comparison of the advantages and disadvantages of each edge detection operator.
The images used in the edge detection step are binary images, and the positions of edge points need to be determined accurately. Weighing the advantages and disadvantages of these operators, the Canny operator is adopted here for edge detection.
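Since the Canny operator is adopted, the whole chain of Section 2.6.4 (Gaussian smoothing, gradient computation, non-maximum suppression, and double-threshold linking) is available as a single library call; a sketch with OpenCV, using the T2 = 2·T1 relation from the text, a hypothetical low threshold, and a hypothetical file name:

```python
import cv2

gray = cv2.imread("slope.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
t1 = 50                                               # low threshold (assumed value)
t2 = 2 * t1                                           # high threshold, T2 = 2 * T1
edges = cv2.Canny(gray, t1, t2)                       # binary edge image after linking
```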
2.7. Feature Parameter Setting
An image feature is a property that distinguishes one class of objects from other classes. Here, the feature parameter is defined as the proportion that an object's extent occupies in the whole image, a number ranging from 0 to 1.
Because the connected regions delineated by the Canny edge detection operator include both interference regions (formed by branches, boulders, and the like) and the crack curve, feature parameters must be set to remove the interference regions.
The length of a connected foreground region projected onto the image width, divided by the full image width, defines a projection ratio. If this ratio is larger than a preset feature parameter, the connected region is considered to contain the trailing-edge crack curve.
As shown in Figure 9, the image width and height are X and Y, respectively, and the two connected regions A and B are enclosed by their bounding rectangles.
Figure 9: Connected regions A and B and their bounding rectangles.
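The projection-ratio test can be sketched with connected-component statistics (assuming OpenCV; the 0.5 feature parameter is a placeholder, not the value used by the authors):

```python
import cv2
import numpy as np

def crack_candidate_mask(edges: np.ndarray, ratio_threshold: float = 0.5) -> np.ndarray:
    """Keep connected edge regions whose bounding-box width covers a large
    fraction of the image width; other regions are treated as interference."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    image_width = edges.shape[1]
    keep = [i for i in range(1, n)                       # label 0 is the background
            if stats[i, cv2.CC_STAT_WIDTH] / image_width > ratio_threshold]
    return np.isin(labels, keep).astype(np.uint8) * 255
```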
3. Experimental Process and Analysis
This experiment uses the MATLAB development platform. The hardware is a computer with a Pentium CPU at 3.00 GHz and 2.0 GB of memory. A total of 173 mountain images without landslides and 110 images with landslides were collected, mainly from network searches. When the system detects movement at the marked position in the image, the digital image processing module is executed. According to the algorithm design, the digital image processing module uses subroutines to complete RGB color feature extraction, HSI color feature extraction, and gray-level co-occurrence matrix texture feature extraction, and finally a subroutine completes the recognition of the landslide state.
Taking images of four typical mountains and four normal mountains as examples, the experimental analysis is carried out. The experiment consists of three stages: image preprocessing, feature extraction, and classifier design, as shown in Figures 10 and 11.
Figure 10: Sample images (a)–(d).
Figure 11: Sample images (a)–(d).
The purpose of size normalization is to scale all images to the same size so that features can be extracted more easily.
3.1. Texture Feature Extraction Experiment
Tables 4 and 5 show that the texture features of each subregion do not change greatly, and the feature values are relatively stable. If the mountain slides, the texture characteristics of the slope will change greatly.
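Gray-level co-occurrence matrix features of the kind reported in Tables 4 and 5 can be computed, for example, with scikit-image (the distance, angles, and the four properties chosen here are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray: np.ndarray) -> dict:
    """GLCM statistics of one sub-region (gray must be a uint8 image)."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "energy", "homogeneity", "correlation")}
```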
3.2. HSI Color Feature Extraction Experiment
Tables 6 and 7 show the color features obtained from the dominant color of each subregion, determined from its histogram.
Analysis of Tables 6 and 7 shows that there is no significant change in the color characteristics of each subregion of a typical mountain image. If a landslide occurs, the color characteristics of the subarea of the slope will change greatly, and the dispersion will increase significantly.
3.3. Design of the BP Neural Network Classifier
In the identification of landslide disasters along the railway line, there are only two outputs, so the output layer has p = 1 node and the hidden layer has h = 5–14 nodes. The network structure and the selected training parameters are shown in Table 8.
3.4. Landslide Identification Experiment Results
From the 173 mountain images without landslides and the 110 images with landslides, 40 images were randomly selected from each set (80 in total) for testing, and the rest were reserved as training images. The remaining 133 normal mountain images constitute the positive training sample set, and the remaining 70 landslide images constitute the negative training sample set used to train the classifier. Tables 9 and 10 show the recognition results for the 80 test images.
The analysis and experimental results show that the recognition rate of the SVM method is 90%, while that of the BP neural network is only 77.5%. In terms of recognition time after training the classifiers, the SVM algorithm also compares favorably with the BP neural network for a single 200 × 150 (or 150 × 200) image, and the recognition time is within the allowable range. Recognition is more accurate for images of mountains without landslides than for landslide images, and it varies with the condition of the mountain, the shape of the slope, and the quality of the photography; the extent and area of landslides also differ greatly.
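For comparison purposes, SVM and BP (multilayer perceptron) classifiers can be trained on the extracted feature vectors with scikit-learn; the sketch below uses random placeholder features of a hypothetical dimension 16 in place of the paper's data set, so the printed accuracies are meaningless and only the workflow is illustrated:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder feature matrices: 133 + 70 training samples and 80 test samples,
# each with 16 texture/color features (shapes are assumptions, not real data).
X_train = rng.random((203, 16))
y_train = np.r_[np.zeros(133, int), np.ones(70, int)]
X_test = rng.random((80, 16))
y_test = np.r_[np.zeros(40, int), np.ones(40, int)]

svm = SVC(kernel="rbf").fit(X_train, y_train)
bp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X_train, y_train)

print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))
print("BP accuracy:", accuracy_score(y_test, bp.predict(X_test)))
```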
Based on high-resolution images, 30 m resolution SRTM elevation data, and the slope gradient data derived from it, landslides are extracted by an object-oriented method. First, the optimal segmentation scale of the image is determined from the local variance statistics, and the LV-ROM curve of the image is shown in Figure 12. The image is segmented at this scale, with the weight of each band set to 1, the shape index set to 0.3, and the compactness index set to 0.3.
Figure 12: LV-ROM curve of the image.
4. Conclusion
In this paper, image processing technology is used to analyze hillsides covered with vegetation; the recognition effect is good, the recognition time is short, and the recognition rate is high. Using image processing together with the SVM algorithm, the recognition rate for landslides is higher than that of the BP neural network. After training the classifiers, the proposed method is also superior to the BP neural network in recognition time for a single 200 × 150 image, and the recognition time is within the allowable range. The method achieves a high recognition rate, but further research should analyze the terrain, taking into account different mountain characteristics, soil, and other factors; through a comprehensive analysis of different locations and different characteristics of mountains, corresponding identification methods can then be obtained.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.