Abstract

This paper adopts the matrix reconstruction method to analyze digital images of urban landscape information in depth and uses it to study and design the visual communication of urban landscape construction. Based on the special structure and properties of Toeplitz matrices, a stepwise structured Augmented Lagrange Multiplier (SALM) algorithm for Toeplitz matrix filling is proposed by introducing structured operators. The main idea of the algorithm is to structure the iteration matrix at each step, that is, to reassign the elements on each diagonal of the matrix by the operator. In this way, the approximation matrix always maintains the Toeplitz structure during the iterative process, and the fast singular value decomposition of a Toeplitz matrix can be exploited, thus saving time. The convergence theory of the new algorithm is further discussed. A lightweight progressive feature fusion module is designed to improve network learning efficiency; it consists of two components: progressive local connectivity and feature attention. Specifically, progressive local connectivity extracts multilevel features for fusion by layer-by-layer separation and local splicing, while feature attention evaluates the importance of features using both channel and spatial attention modules. The Oculus Rift virtual reality device is used, with the OSG graphics rendering engine as the basis for transferring the scene data of the overall 3D cityscape to the virtual reality headset display device. With the support of the Oculus SDK, secondary rendering of the overall 3D cityscape in the immersive virtual reality module is carried out, and a corresponding OSG camera browsing interface is constructed to realize two-eye immersive virtual display of the 3D cityscape in the virtual reality headset display device, helping to connect the virtual cityscape to reality.

1. Introduction

With the rapid development of visual communication and image processing technologies, the digital image data available to people has exploded, greatly contributing to the rapid development of the field of computer vision. Digital images, as information carriers for recording visual tasks, have become closely related to people’s daily lives. In recent years, scientific research based on computer vision has become increasingly important, and digital images are widely used in security surveillance, medical recognition, satellite detection, and people’s daily lives [1]. The change of the landscape pattern index at the landscape category level of farmland, wetland, and forest land was analyzed, with emphasis on the ecological land types with the top three area ratios. As a result, there is an urgent need for continuous improvement and refinement of digital image processing technology. However, the imaging process is limited by factors such as the quality of the sensor, which may result in captured digital images that are low-resolution and blurred, preventing the unambiguous recording of information in the scene [2]. The intuitive impact of this is that the visual effect of the image does not meet the requirements of human perception, limiting the value of digital image applications and affecting the processing of other visual tasks that follow. Therefore, it is important to conduct in-depth research on how to recover reliable high-definition images from low-resolution images, that is, research on image superresolution techniques [3]. A reasonable landscape configuration represents the level of a city’s comprehensive construction and its concept of a quality urban living environment, improving the ecological structure of the city while enhancing its image and promoting its environmental planning process.
In terms of grasping the effect of landscaping in the city, traditional landscape planning and presentation use a large number of planning and design drawings, final effect drawings, and planning and design books to express the intention of landscape design and the effect of landscaping, but there are still various implementation problems in the process of field construction, such as the problem of standardizing the expression of planning drawings and the problem of recalibrating ground points.

With the development of computer technology, computer visualization technology, and virtual reality technology, the more flexible use of computer equipment and various types of advanced display equipment for the construction and virtual display of three-dimensional landscapes has gradually become the mainstream idea of landscape expression. Cities rely on a large amount of energy and resources and are increasingly dependent on the surrounding ecosystem. As the level of economic development increases, the population is gradually clustering in cities, and urban space is expanding outwards [4]. As a result of these enormous resource flows, cities have become barely sustainable yet paradoxically resilient networks, and because of these challenges, the capacity of the Earth’s life-support systems is declining. Of the many human activities that have led to the loss of biological habitats, urban development has contributed most to local species extinction rates and has often led to the disappearance of most local species. Levels of urbanization that exceed the ecological carrying capacity also reduce environmental habitability and the yield efficiency of energy resources. The characteristics of landscape patterns, often referred to as landscape or spatial heterogeneity, influence ecological processes and biodiversity and have far-reaching effects on ecological, social, and economic functions, and changes in landscape patterns are most readily observed through land use/land cover (LULC) [5].

In practice, data can often be missing or corrupted: file compression and transmission can lead to data loss; in fingerprint verification, the fingerprint of the subject to be identified can be partially missing due to wear and tear; in face recognition, the face to be identified can be partially missing due to occlusion, distortion, shadows, and make-up. These unfavorable factors often lead to the failure of traditional analysis and processing methods, which places greater demands on new theories and algorithms. It is therefore particularly important to recover missing data as efficiently and reasonably as possible, and the matrix filling technique is a tool for recovering the data matrix accurately and efficiently. Structure-from-motion photogrammetry, which originated in the field of computer 3D vision, is a technique for obtaining a 3D model of a real target from just a few images. The imaging device used is an ordinary optical camera or a smartphone with a photo function; it can be mounted on ground platforms and low-altitude UAV platforms and is the latest advancement in near-earth remote sensing photogrammetry technology, with promising applications in geological landscape reconstruction.

Deep learning networks with different architectures have been designed for different image key information extraction tasks [6]. Dong et al. proposed a multiscale convolutional neural network for the task of scene recognition [7]. Since scenes are object-related, this neural network extracts both scene-centric and object-centric information from the image and performs scene recognition by combining the two types of information [8]. In addition, since the scene information and the object information in an image are usually at different scales, a multiscale convolutional neural network is designed to improve recognition accuracy by carefully combining them. Oh et al. proposed the residual attention network, a convolutional neural network using an attention mechanism, for image classification [9]. Halawani et al. proposed a new backbone neural network specifically designed for object detection [10]. They argue that the high downsampling rate between layers of a neural network makes the effective receptive field in the deep layers small, which is not conducive to extracting object location information from images. By maintaining a high spatial resolution in the deep layers, the network can extract more accurate object location information and thus improve the performance of object detection [11].

Changes in driving factors, ecological change impact mechanisms, and spatial scales can cause a lag in ecological effects [12]. In addition, the interaction between humans and the geographical environment in urban areas is intense, and urban ecosystems are dynamic and complex; the evaluation of ecosystems needs to be based on long-term ecological research, and both temporal and spatial scales need to be considered. Therefore, LUCC research requires data sources with continuity in time to effectively reveal the process of landscape change and the ecological effects produced. Landscape change alters water cycle processes in urban areas, rainfall-runoff processes, and the state of rivers, and thus distributed hydrological models such as SWAT also use landscape type as an important factor in simulating hydrological processes. SWAT can predict water resources, sediment, and agricultural pesticide loads in ungauged watersheds and can effectively simulate long time series [13]. Many scholars have analyzed the hydrological processes in watersheds under different landscape types, focusing on the increase in runoff coefficients, the water balance of the watershed, and changes in water quality under different landscape type transitions.

Existing research has approached the problem from the perspective of the coupling mechanism between urbanization and ecology, revealing the influencing factors of urban ecology at the macro level and providing guidance for the formulation of urban development policies, but there is still a lack of guidance on how to steer urban ecological construction from a spatial perspective. There are many obstacles to using scientific theory to guide ecological conservation policy and practice; scientists hope to develop management-related science that balances vivid presentation of results with scientific rigor, helping decision-makers to decide on the basis of scientific judgment rather than acting despite uncertainty. Landscape ecology has developed a mature research theory and methodology based on ecosystem patterns. Another type of corridor is the hidden corridor, a potential network of material and energy exchange within a study area, such as a wind corridor, which is often overlooked because it is not directly observable but which plays a vital role in the operation of ecological flows and the maintenance of urban ecosystems.

3. Analysis of Digital Image Matrix Reconstruction for Constructing Visual Communication of Urban Landscape Information

3.1. Digital Image Matrix Reconstruction Algorithm Design

Digital images, as input information for computer vision tasks, are affected by imaging sensors as well as the acquisition environment, resulting in blurred captured images, which seriously affects the application of digital images to vision tasks such as security, medical imaging, and remote sensing. Therefore, effective recovery of low-resolution or hazy, blurred images is particularly important for subsequent vision task processing. Currently, image restoration mainly consists of tasks such as image superresolution, image defogging, image deraining, and image denoising. Among them, this chapter focuses on the key processes and techniques of current deep learning-based single-image superresolution and defogging algorithms. Based on the powerful feature extraction capabilities of convolutional neural networks, deep learning methods are widely used in image superresolution tasks [14]. This section first describes the definition of the image superresolution problem, then specifies the classical structure and several basic modules of superresolution networks, and finally compares and analyzes the commonly used upsampling modules in superresolution networks.

Image superresolution aims to reconstruct the corresponding high-definition image from a low-resolution input image. Typically, a low-resolution image I_l is modelled as the output of a specific degradation process, as shown in the following equation: I_l = D(I_h; δ), where D denotes the degradation mapping function, and I_h and δ are the high-resolution image and the parameters of the degradation process corresponding to the low-resolution image I_l, respectively. A common instantiation is I_l = (I_h ⊗ k)↓_s + n, where ⊗ k denotes convolution with a blur kernel, ↓_s indicates downsampling according to the s-fold factor, and n is additive noise. In general, the degradation mapping function D and the parameters δ are unknown, and only the low-resolution image I_l is provided. In this case, also known as blind image superresolution, the researcher needs to reconstruct from the low-resolution image a high-resolution approximation Î_h of the labelled image I_h, as shown in the following equation: Î_h = F(I_l; θ), where F is the superresolution model and θ its parameters.
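The degradation model above can be made concrete with a small sketch. This is an illustration only: the function name and the 3 × 3 box blur are stand-ins for the unknown blur kernel k, not the actual pipeline used in the paper.

```python
import numpy as np

def degrade(hr, scale=2, noise_sigma=0.01, rng=None):
    """Toy degradation model I_l = (I_h * k) downsampled by s, plus noise.

    hr : 2-D float array (high-resolution image).
    A 3x3 box kernel stands in for the (unknown) blur kernel k.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # pad and blur with a 3x3 box kernel
    p = np.pad(hr, 1, mode="edge")
    blurred = sum(p[i:i + hr.shape[0], j:j + hr.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    # downsample according to the s-fold factor
    lr = blurred[::scale, ::scale]
    # additive noise n
    return lr + noise_sigma * rng.standard_normal(lr.shape)

hr = np.ones((8, 8))
lr = degrade(hr, scale=2, noise_sigma=0.0)
# a constant image stays constant; the size shrinks by the scale factor
```

The superresolution model F is then trained to approximately invert this mapping from lr back to hr.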

Existing single-image superresolution algorithms mainly use interpolation-based and learning-based approaches. Unlike traditional interpolation-based and model-based approaches with limited scope of application, deep learning approaches have tremendous advantages in feature extraction, and therefore designing an end-to-end single-image superresolution network has become the mainstream approach. The classical network structure of deep learning-based single-image superresolution algorithms consists of four main parts: shallow feature extraction, deep feature extraction, an upsampling module, and a reconstruction part. Shallow feature extraction generally uses a single 3 × 3 convolutional layer for feature characterization. Deep feature extraction consists of a deep network that provides a large receptive field for pixel regression and a nonlinear mapping of features. The upsampling module is responsible for upsampling the deep feature map to the target image size. The upsampled feature map is passed through a reconstruction network to obtain the final superresolution image.

The presence of outlier samples can cause difficulties in the convergence of the network during training due to the large variation in values of the input low-resolution images. To alleviate this, the input image data needs to be preprocessed in a normalized manner before the images are fed into the network, making the distribution more reasonable and eliminating the effect of extreme values. Taking z-score normalization as an example, given an input image X, the specific implementation is shown in the following formula: X̂ = (X − μ)/σ, where X̂ is the result of image preprocessing, and μ and σ are the mean and standard deviation of the input image data, respectively. Through this preprocessing, the input data is normalized to a distribution with a mean of 0 and a standard deviation of 1. Keeping the input data homogeneously distributed also has the effect of accelerating the convergence of the network.
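A minimal implementation of this preprocessing step might look as follows. The helper name is illustrative, and the small epsilon guard against constant images is an added safeguard not mentioned in the text.

```python
import numpy as np

def standardize(x, eps=1e-8):
    """Zero-mean / unit-variance normalization of an input image."""
    mu, sigma = x.mean(), x.std()
    return (x - mu) / (sigma + eps)

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(32, 32))  # raw 8-bit-range intensities
z = standardize(img)
# z now has mean ~0 and standard deviation ~1
```

In practice the mean and standard deviation are usually computed once over the training set rather than per image, so that the same affine transform can be inverted on the network output.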

With the optimization of feature extraction networks for other vision tasks, network design for single-image superresolution algorithms is gradually introducing these approaches. Among them, the design structure of a deep feature extraction network contains three main approaches: residual learning, recursive learning, and dense connectivity, as shown in Figure 1.

The transposed convolution layer, also known as deconvolution, aims to invert the regular convolution operation [15]: the input features of the target size are inferred from the features of the convolution output, thereby increasing the resolution of the image. For example, with a 3 × 3 input feature map, the input is first expanded by a factor of 2 by inserting zero pixels, and a convolution is then applied to the expanded map. Compared to interpolation, transposed convolution scales the image size by convolutional learning, which is more effective and makes use of the convolutional layers, so this approach is widely used in image superresolution models. However, using transposed convolution layers to upsample features can cause problems such as a checkerboard pattern of varying sizes, caused by uneven multiplicative overlaps along the different axes, which can affect image superresolution performance.
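The zero-insertion view of transposed convolution, and how uneven kernel overlap produces a repeating artifact pattern, can be illustrated with a small numpy sketch (a naive loop implementation for clarity, not an efficient one; the function name is illustrative):

```python
import numpy as np

def transposed_conv2x(x, k):
    """Stride-2 transposed convolution viewed as zero insertion + convolution.

    x : 2-D input feature map, k : 3x3 kernel.
    """
    h, w = x.shape
    up = np.zeros((2 * h, 2 * w))
    up[::2, ::2] = x                     # insert zeros between input pixels
    p = np.pad(up, 1)
    out = np.zeros_like(up)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (p[i:i + 3, j:j + 3] * k).sum()
    return out

x = np.ones((4, 4))
k = np.ones((3, 3))
y = transposed_conv2x(x, k)
# even/odd output positions see different numbers of non-zero taps
# (1, 2, or 4 here), which is exactly the repeating artifact pattern
```

With a constant input and a constant kernel, the interior output alternates between three distinct values purely because of the overlap geometry, which is why careful kernel/stride choices (or interpolation followed by a plain convolution) are often preferred.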

Image defogging helps mitigate image distortion caused by environmental conditions and its impact on various visual analysis tasks, which is essential for the development of more robust intelligent surveillance systems. Deep learning networks are becoming increasingly mainstream due to their powerful end-to-end feature learning capabilities compared with traditional defogging methods. This section first presents the definition of the image defogging problem, followed by a comparative analysis of the classical network structures of current algorithms.

Due to the presence of smoke, dust, and other particles floating in the atmosphere, images taken in such environments tend to suffer from color distortion, blurring, low contrast, and other degradation of visible image quality. The input of such blurred images makes it difficult to solve other vision tasks such as classification, tracking, target detection, and pedestrian reidentification. For this reason, image defogging aims to recover clean images from corrupted inputs and serves as a preprocessing step for advanced vision tasks. The degradation is commonly described by the atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)), where I(x) is the observed blurred image, A is the global atmospheric ambient light, t(x) is the medium transmission map, and J(x) is the fog-free image. For the matrix filling algorithm, the general singular value decomposition of a matrix can take a lot of time, and since the matrix formed by the reorganization does not have a Toeplitz structure, the subsequent calculations can also be slow. Therefore, by applying the Toeplitz structuring operator to the iterate at each step, a Toeplitz-structure-preserving matrix sequence is formed; the fast singular value decomposition of Toeplitz matrices can then be used, saving time and optimizing the algorithm.
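The structuring operator itself is simple: replacing every diagonal of the current iterate with its mean value is the orthogonal (Frobenius) projection onto the set of Toeplitz matrices. A sketch follows; the function name is illustrative, and the full SALM iteration around it is omitted.

```python
import numpy as np

def toeplitz_structure(m):
    """Project a square matrix onto the Toeplitz matrices by replacing
    every diagonal with its mean value (the structuring operator)."""
    n = m.shape[0]
    out = np.empty_like(m, dtype=float)
    for k in range(-(n - 1), n):
        d = np.diagonal(m, k)          # k-th diagonal of the iterate
        idx = np.arange(len(d))
        if k >= 0:
            out[idx, idx + k] = d.mean()
        else:
            out[idx - k, idx] = d.mean()
    return out

m = np.array([[1.0, 5.0],
              [3.0, 2.0]])
t = toeplitz_structure(m)
# every diagonal is now constant: [[1.5, 5.0], [3.0, 1.5]]
```

Because the result is Toeplitz, applying the operator twice changes nothing, which is the idempotence one expects of a projection; this is the property that keeps the whole iterate sequence inside the Toeplitz set.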

The first step in constructing a 3D scene is therefore to extract and match feature points on the image, as shown in Figure 2.

Since images lose their depth information after photography, it is difficult to restore a three-dimensional scene using only one image [16]. The system realizes the model import, rendering, and visualization module, renders and draws the parametrically controlled high-fidelity scene model, builds a complete three-dimensional garden landscape, and realizes real-time browsing and roaming based on different garden scene organizations. The most natural way is to imitate the human eye by setting up two observation points and using at least two images to complete the 3D reconstruction of the object. There is an epipolar geometric relationship between the two images (described in detail in the next section), and if at least eight pairs of matching points can be obtained between the two images, the rotation and translation matrices of the two images can be calculated based on this constraint, allowing the relative position of the camera in the 3D scene to be located. In combination with triangulation, the position of the points in 3D space can then be obtained.
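The eight-point computation mentioned above can be sketched as the classical linear estimate of the fundamental matrix from at least eight correspondences. This is a minimal, noise-free illustration under an assumed synthetic two-camera setup; coordinate normalization, which a practical implementation would add, is omitted.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def eight_point(x1, x2):
    """Linear eight-point estimate of the fundamental matrix F from
    N >= 8 correspondences satisfying x2^T F x1 = 0 (x1, x2: (N, 2))."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    ones = np.ones_like(u1)
    # each correspondence contributes one row of the homogeneous system A f = 0
    A = np.stack([u2 * u1, u2 * v1, u2,
                  v2 * u1, v2 * v1, v2,
                  u1, v1, ones], axis=1)
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, s, Vt = np.linalg.svd(F)                 # enforce the rank-2 constraint
    F = U @ np.diag([s[0], s[1], 0.0]) @ Vt
    return F / np.linalg.norm(F)

# synthetic two-view setup: camera 2 rotated about the y-axis and translated
rng = np.random.default_rng(0)
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([1.0, 0.2, 0.1])
F_true = skew(t) @ R                            # fundamental matrix of the pair
X = rng.uniform([-1, -1, 4], [1, 1, 6], size=(20, 3))   # 3D scene points
x1 = X[:, :2] / X[:, 2:]                        # projections in view 1
Xc2 = X @ R.T + t
x2 = Xc2[:, :2] / Xc2[:, 2:]                    # projections in view 2
F = eight_point(x1, x2)
x1h = np.hstack([x1, np.ones((20, 1))])
x2h = np.hstack([x2, np.ones((20, 1))])
residual = np.abs(np.einsum("ni,ij,nj->n", x2h, F, x1h)).max()
```

Once the rotation and translation are recovered from this matrix, triangulation of the matched points yields their 3D positions, as the paragraph above describes.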

However, there are still several problems with existing single-image superresolution algorithms. Firstly, these methods mainly focus on single-level features for reconstruction and do not sufficiently mine multilevel features. In addition, global contextual information is particularly important for image superresolution reconstruction, yet current superresolution networks mainly rely on network depth to provide a large receptive field, and this way of aggregating contextual information is inefficient.

3.2. Analysis of Urban Landscape Information Construction

The construction and overall organization of parametric landscape vegetation models, terrain models, building models, and other scene models are explained [17]. By acquiring field landscape data, 3D scene models are constructed; the import, rendering, and visualization modules of the models are realized; rendering and drawing of parametrically controlled, highly realistic scene models are achieved; a complete 3D garden landscape is constructed and organized; and real-time browsing and roaming based on the organization of different garden scenes is realized. On this basis, the GPU rendering acceleration strategy for tree models is studied.

Typically, terrain elevation features acquired by remote sensing satellites are suitable for large-scale geographical analysis, with resolutions generally ranging from 10 m to 30 m. Laser point cloud data, by contrast, acquires terrain with high accuracy and samples detailed characteristics, so it can precisely capture ground point elevations and express the undulating features of the garden surface. The disadvantage is the need for accurate three-dimensional elevation point cloud data in different areas: it expresses the terrain features of a small area accurately but requires separate laser scanning between different sites, which is impractical for a typical garden landscape covering several square kilometers. This paper therefore applies a technique based on UAV-acquired laser point clouds to obtain garden terrain features, as shown in Figure 3.

The inverse distance weighting method stipulates that, within the search radius, observation points closer to the unknown point contribute more to it, the contribution being inversely proportional to the distance between the observation point and the unknown point; hence the name [18]. The inverse distance weighted spatial interpolation algorithm determines the value at an interpolated location by assuming that the unknown terrain point P is influenced more by points within the immediate radius than by terrain points further away, weighting them through the inverse power function of distance.
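A minimal sketch of this interpolation scheme follows; the helper name, the power-2 default, and the optional search-radius handling are illustrative choices rather than the paper's exact configuration.

```python
import numpy as np

def idw(points, values, query, power=2, radius=None, eps=1e-12):
    """Inverse-distance-weighted interpolation of terrain elevations.

    points : (N, 2) known (x, y) locations, values : (N,) elevations,
    query  : (2,) unknown location. Closer observations get larger weights
    via the inverse power function of distance, w_i = 1 / d_i**power.
    """
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(points - query, axis=1)
    if radius is not None:                 # restrict to the search radius
        mask = d <= radius
        d, values = d[mask], values[mask]
    exact = d < eps
    if exact.any():                        # query coincides with a sample
        return float(values[exact][0])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 20.0, 30.0])
# symmetric centre point: every sample contributes equally
center = idw(pts, z, np.array([0.5, 0.5]))  # 20.0
```

Note that the interpolated value is always a convex combination of the observed elevations, so IDW never extrapolates outside the observed range, which suits smooth terrain surfaces.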

The fragment shader is like the vertex shader in that it is essentially a program that runs on the GPU through shader source code; it is responsible for processing fragments and outputting the final color of each triangle pixel. Unlike the vertex shader, the input to the fragment shader is the output stream of the vertex shader; the fragment shader computes the color of the triangle fragment pixels, including calculating vertex attribute colors for the shaded vertex geometry and texture coordinates for the texture. The fragment shader can generate a vast array of different lighting and material effects that ultimately determine how the drawing pipeline renders on screen, and it is therefore the core code component of the programmable drawing pipeline [19]. As such, the fragment shader can do far more than simply shade graphics or assign textures; it can generate complex, realistic 3D effects, including dynamic light sources, complex material rendering, and much more. In landscapes, effects such as trees swaying in the wind, fluctuating water bodies, environment mapping of water bodies, and reflections from light sources are all suited to efficient GPU-based rendering using fragment shaders.

To improve the realism of the model and make the 3D effect more convincing, the resulting mesh model needs to be texture mapped. Texture mapping is the process of mapping an object’s texture information onto the surface of a 3D model to simulate the patterned details of the object’s surface. More simply put, texture mapping “pastes” a 2D image onto the surface of a mesh model, giving the 3D model richer visual information, as shown in Figure 4.
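The core lookup behind this “pasting”, fetching a texel for a UV coordinate with bilinear filtering, can be sketched on the CPU as a software analogue of what the GPU sampler does (names are illustrative; real samplers also handle wrap modes and mipmaps):

```python
import numpy as np

def sample_texture(tex, u, v):
    """Bilinear lookup of a 2-D texture at UV coordinates in [0, 1]^2."""
    h, w = tex.shape[:2]
    x = u * (w - 1)                       # map UV to texel space
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0               # fractional position in the cell
    top = (1 - fx) * tex[y0, x0] + fx * tex[y0, x1]
    bot = (1 - fx) * tex[y1, x0] + fx * tex[y1, x1]
    return (1 - fy) * top + fy * bot      # blend the two rows

tex = np.array([[0.0, 1.0],
                [2.0, 2.0][0:1][0] * 0 + np.array([1.0, 2.0])])  # 2x2 texture
```

During rasterization, the UV coordinates interpolated across each triangle are fed to exactly this kind of lookup for every covered pixel.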

The 3D geometric tree models are constructed according to the requirements of garden vegetation landscape construction in different seasons. Tree models for different seasonal periods provide rich expression for building the garden vegetation landscape across the year, realizing diverse expressions of the vegetation landscape, offering more choices of 3D garden vegetation models, and facilitating the work of garden landscape configuration. Model instancing is one of the ways to improve the efficiency of drawing 3D geographical scenes, and integrating different types of tree models forms more realistic models of garden vegetation areas. The combination of various tree morphological structure models enriches the 3D garden landscape, while different tree models can be selected and matched according to different needs to show a more realistic 3D garden landscape with regional characteristics, as shown in Figure 5.

As the parametric vector layer can contain not only spatial location relationship data but also vegetation attribute data such as vegetation density and the spatial distribution of vegetation, the organization of the vegetation landscape scene can be greatly accelerated by automatic acquisition [20]. Garden vegetation includes both regional green-area vegetation and linear street tree vegetation, and the spatial locations of tree vegetation need parameterized layout management, planning, and organization. This study performs constrained planting of vegetation based on parameterized data such as polygonal green-area vegetation regions or linear road vectors, where the vector data specifically includes the following attributes: vegetation type, vegetation distribution, area of the region, planting interval, boundary information, etc.
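Constrained planting from such a parametric vector layer can be sketched as generating a regular grid at the planting interval and keeping only the points inside the green-area polygon. This is a simplified illustration using a ray-casting point-in-polygon test; a production tool would treat boundary points and irregular spacing more carefully.

```python
import numpy as np

def point_in_polygon(p, poly):
    """Ray-casting test: is point p inside the polygon (sequence of vertices)?"""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def plant_positions(poly, interval):
    """Regular planting grid constrained to a green-area polygon."""
    poly = np.asarray(poly, dtype=float)
    xmin, ymin = poly.min(axis=0)
    xmax, ymax = poly.max(axis=0)
    pts = []
    for x in np.arange(xmin, xmax + 1e-9, interval):
        for y in np.arange(ymin, ymax + 1e-9, interval):
            if point_in_polygon((x, y), poly):
                pts.append((x, y))
    return pts

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
trees = plant_positions(square, 2.0)   # candidate planting locations
```

Each resulting position can then be assigned a tree model instance according to the vegetation-type and density attributes carried by the vector layer.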

A segmentation threshold t that separates the rows into two categories according to the product of the within-row mean and standard deviation can be obtained. When the product of the mean and standard deviation of a row is greater than the segmentation threshold t, that row is the row where the water level is located, and its vertical coordinate is the vertical coordinate of the water level.
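A sketch of this row-wise segmentation follows. The text does not specify how t is derived, so taking it as the mean of the per-row scores (scaled by an assumed factor k) is a hypothetical choice for illustration.

```python
import numpy as np

def water_level_row(img, k=1.0):
    """Locate the water-level row by thresholding the product of each
    row's mean and standard deviation (threshold choice is illustrative)."""
    score = img.mean(axis=1) * img.std(axis=1)   # per-row mean * std
    t = k * score.mean()                          # assumed segmentation threshold
    rows = np.nonzero(score > t)[0]
    return int(rows[0]) if rows.size else -1      # vertical coordinate, or -1

# synthetic frame: one bright, high-variance stripe at row 12
img = np.full((32, 64), 10.0)
img[12, ::2] = 200.0
row = water_level_row(img)
```

The water-level line stands out because it combines high brightness with high within-row variance, so its mean-times-std score dominates the otherwise uniform rows.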

4. Results and Analysis

4.1. Analysis of Algorithm Performance Results

In terms of average error, the differential edge detection method has the largest average error, but it is within 5 cm, which is acceptable; the deep learning classification method has the best accuracy, and its average error is almost negligible. In terms of error variance, the variance of the differential edge detection method is the largest, indicating that the stability of the algorithm is poor, while the stability of the two classification methods is better. One time instant of the above experiment was chosen to plot a line graph of the water level calculated by the three algorithms against the true water level value, as shown in Figure 6.

The two classification methods, on the other hand, have relatively small errors, with maximum errors within 10 cm, and the deep learning classification method performs somewhat better. The advantages and shortcomings of several classical network structures and base modules are compared and analyzed; then, the open standard datasets used by current algorithms, the evaluation criteria for image recovery quality, and the optimized loss functions used for training several models are introduced in detail.

The algorithms with the best PSNR and SSIM performance metrics on each dataset are bolded, and the next best algorithms are underlined. The algorithm in this chapter achieved the best current PSNR and SSIM results on multiple scale factors in all five open standard test sets under the self-integration evaluation strategy, as shown in Figure 7. Even without the self-integration strategy, the algorithm achieves experimental results similar to the SAN method and outperforms other methods.

The algorithm outperforms the RCAN algorithm in terms of superresolution on multiple scale factors across the five datasets. Unlike the SAN algorithm, which uses second-order statistical information, this algorithm achieves better superresolution results on datasets rich in texture and edge information, such as Urban100 and Manga109, by efficiently fusing multilevel features and aggregating global contextual information.

A single-image superresolution algorithm based on cascaded feature pyramids is proposed. The proposed cascaded feature pyramid module can be conveniently embedded in an image superresolution network; it uses a structure of cascaded convolutional kernels with layer-by-layer fusion to alleviate the excessive semantic gap between adjacent layer features of the ASPP structure and to broaden the range of multiscale feature representation, effectively enhancing feature fusion. In addition, the algorithm in this chapter efficiently aggregates the global contextual information in the network through a proposed improved asymmetric residual nonlocal module, which further improves image superresolution performance while maintaining the advantage of being lightweight.

4.2. Design Results for Visual Communication of Urban Landscape Information

Since both the 3D roaming and the virtual reality immersive roaming of this system are static browsing methods, that is, the relative position of the user is fixed, interactive devices are needed to control movement through the scene. When the user wears the headset, the field of view is completely covered and obscured, the mouse cannot be seen, and it is difficult to control the scene with it; therefore, a simpler keyboard-interactive roaming operation is adopted. Controlling the movement of the viewpoint in the immersive scene through simple keystroke operations is thus an effective way to enhance the user experience.

Based on the different view distances, Viewpoints 1 and 2 were analyzed as scenes with a larger data volume and Viewpoints 3 and 4 as scenes with a smaller data volume, each independently; the results are shown in Figure 8.

The results in Figure 8 show that for Viewpoints 1 and 2, which carry a large amount of data, the overall rendering frame rate of the scene fluctuates around 4 frames per second, which is far from ideal for scene drawing. Landscape pattern refers to the spatial layout of landscape elements of different sizes and shapes; changes in landscape pattern are largely driven by natural factors and human activities. It can therefore be concluded that the GPU-accelerated rendering method based on GLSL does not meet the demand for real-time rendering of 3D landscapes when the overall data volume of the scene is large; it is limited by the performance of the hardware device, and various factors must be weighed.

With an acceptable amount of data, the rendering frame rate of the woodland scenes performed better, rising from 35–42 frames per second under the traditional OSG rendering method to around 60 frames per second, an increase of around 30%. The ceiling of 60 frames per second was mainly due to the refresh rate limit of the desktop display. In a virtual reality headset display device, where the lens display has a high refresh rate, the GPU-accelerated rendering method based on GLSL can significantly improve the rendering efficiency of a 3D landscape scene containing many trees and vegetation by specifying a smaller first-person field of view for the user. Therefore, applying GLSL programming and GPU-accelerated drawing to tree models with many facets can effectively improve the rendering efficiency of 3D scenes when the user's view range is small and few view-frustum culling operations are required, which to a certain extent supports the immersive virtual reality display.
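The benefit of a smaller user field of view can be illustrated with a simple horizontal field-of-view test, a toy stand-in for the view-frustum culling the rendering pipeline performs; the function, the tree coordinates, and the angles below are illustrative assumptions, not part of the OSG/GLSL implementation.

```python
import math

def in_horizontal_fov(eye, forward_deg, fov_deg, point):
    """True if a 2D point lies within the camera's horizontal field of view."""
    ang = math.degrees(math.atan2(point[1] - eye[1], point[0] - eye[0]))
    diff = (ang - forward_deg + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
    return abs(diff) <= fov_deg / 2.0

# A narrower field of view leaves fewer tree positions to draw each frame.
trees = [(5.0, 1.0), (5.0, -1.0), (3.0, 2.0), (-3.0, 0.0), (0.5, 4.0)]
visible_90 = [t for t in trees if in_horizontal_fov((0.0, 0.0), 0.0, 90.0, t)]
visible_30 = [t for t in trees if in_horizontal_fov((0.0, 0.0), 0.0, 30.0, t)]
```

Narrowing the field of view from 90° to 30° culls additional trees before they reach the GPU, which is why a small first-person view range lightens the per-frame rendering load.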

In terms of the landscape pattern index of the different areas, the urban fringe has the smallest value, indicating that the patches in the fringe landscape are more fragmented and less connected. Landscape is the material component of the environment and a specific component of land in the broad sense. The core area has the largest value, mainly because the built-up land patch type in the core area forms good patch connectivity and is close to homogeneous. The small value for the urban area indicates that the landscape within it is close to natural conditions, while the larger value for the core area indicates that connectivity between the different patch types is homogeneous, a typical artificial landscape characteristic. Within the 300 m measurement range, the index of the urban core is larger and its patches more physically connected than those of the urban fringe, as shown in Figure 9.

Ecological land is an important landscape type for maintaining regional ecological balance and providing ecological goods and services. Therefore, building on the analysis of landscape-level pattern changes, this study focuses on changes in the landscape pattern index at the category level for the three ecological land types with the largest area proportions: agricultural land, wetland, and woodland. In the moving window method, for rasterized landscape type data and a given window size, the landscape index within the window is calculated and assigned to the window's central cell; the index for each point is obtained by moving the window one raster cell at a time, row by row and column by column. The spatialized landscape pattern index is useful for visualizing the characteristics of the index in different zones, interpreting the characteristics and driving factors of landscape pattern change, and linking them to natural, economic, and social development processes.
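The moving-window procedure described above can be sketched as follows; Shannon diversity is used here as an example index, and the 3×3 window size is an assumption rather than necessarily the study's configuration. Edge cells whose window would fall outside the raster are left undefined.

```python
import numpy as np

def shannon_diversity(window):
    """Shannon diversity of landscape type codes inside one window."""
    _, counts = np.unique(window, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def moving_window_index(raster, size=3):
    """Slide a size x size window one raster cell at a time, row by row and
    column by column, assigning each window's index to its central cell."""
    r = size // 2
    h, w = raster.shape
    out = np.full((h, w), np.nan)
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i, j] = shannon_diversity(raster[i - r:i + r + 1,
                                                 j - r:j + r + 1])
    return out
```

The resulting grid is the spatialized landscape pattern index: a map in which each cell records the index of its local neighborhood, ready for zonal comparison.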

The choice of scale is an issue that requires special attention in landscape pattern analysis. Scale comprises both extent and granularity: the span of a landscape in time or space is called its extent (magnitude), and the length and area of the smallest distinguishable unit in the landscape are called its granularity (grain). Differences in spatial extent or granularity alter the results of landscape pattern analysis. The interspersion and juxtaposition index forms three concentric zones from the center of the study area outwards: a low-value zone, a high-value zone, and a low-value zone. The index is higher in the urban fringe, lower in the urban core, and lower again in the peripheral areas beyond the fringe, showing a clear gradient from the inside out.
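Granularity sensitivity can be illustrated by resampling a raster to a coarser grain; the majority-rule aggregation below is one common choice, used here purely as a sketch with made-up class codes.

```python
import numpy as np

def coarsen_majority(raster, factor=2):
    """Coarsen granularity: each factor x factor block becomes one cell
    holding the block's most frequent landscape type code."""
    h, w = raster.shape
    out = np.empty((h // factor, w // factor), dtype=raster.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = raster[i * factor:(i + 1) * factor,
                           j * factor:(j + 1) * factor]
            vals, counts = np.unique(block, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out
```

Minority classes occupying less than half of a block vanish at the coarser grain, so indices such as patch richness or diversity computed on the coarse raster differ from their fine-grain values, which is exactly the scale dependence the analysis must account for.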

5. Conclusion

The results of the model show that ecological flows in large ecological spaces such as South Lake Park and the Flora and Fauna Park encounter less resistance to expansion and are well connected within the city but lack channels for external connectivity. The software running environment has been extended with interactive organization and management functions for the vegetation needed to construct 3D landscapes, construction functions for real-time fluctuating water bodies, construction functions for multichannel landscape architecture, and organization functions for landscape elements based on the integrated management of geometric vector element layers, forming an integrated system capable of flexibly building 3D virtual landscapes. Based on actual measurement data, highly realistic vegetation, water bodies, and buildings can be built, providing support for the virtual simulation of 3D landscapes and more convenient technical means for users to appreciate, evaluate, and visit landscapes. Combining the theories of reception aesthetics and perceptual phenomenology, an evaluation index system for the reception effect of public art is constructed, consisting of expectation effect, call effect, communication effect, and body effect indices. According to the results of expert and public evaluations, the expectation effect and communication effect indicators are the key dimensions influencing the evaluation of the reception effect, followed by the call effect and body effect indicators, which shows that the public is more receptive to public art landscape environments with obvious regional characteristics, closeness to everyday life, and strong interactive experiences.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.