Abstract

Public art planning and design in the context of smart cities needs to keep pace with the times, yet the integrity of the original scene must be preserved during the design process. This paper therefore combines scene elements with the Internet of Things (IoT) smart city to study public art planning and design. Based on a multimedia IoT environment, it analyzes the effect of virtual reality (VR) technology in urban public art planning and design and proposes an overall optimization approach for organizing and rendering VR scene data. It then studies optimization methods for organizing and rendering the terrain model and the scene model, respectively. The experimental results show that the smart city public art planning and design system designed in this paper for the multimedia IoT environment achieves a good smart city public art planning and design effect.

1. Introduction

In recent years, public art, which originated in the West, has gradually spread and flourished worldwide, and public art works in various forms are being presented to the public. Public art design should more precisely be called public space art design; it is a general term covering environmental landscape design, public sculpture, mural art, ornament and decoration design, craft design, video media design, and so on. Public art is also an emerging art category in China. In recent years it has gradually prospered, many new artists have emerged, and many outstanding public art works have been presented to the public [1].

The content of public art is quite extensive. In a broad sense, all visual and auditory art that relates to the public in time or space belongs to the category of public art, such as drama, film, dance, and performance art. However, because of differences in cultural history and social reality among nations, different countries, regions, and peoples often place different emphases when implementing public art design, so public art does not have a single, clear theoretical definition [2]. The full name of “public art” is “public space art design.” The role of public art in urban public space is not only that of a material structure; it also acts as a catalyst of urban cultural spirit through events, performances, plans, festivals, and the occasional or derived stories of the city [3].

Public art design plays an important role in beautifying the city, and its decorative function has never disappeared from ancient times to the present. The foundation of any art form is inseparable from decoration. The first aspect is the decorativeness of the layout. In painting, attention is paid to the coordination and completeness of the picture to produce a reasonable composition, so that the picture yields different effects and gives people different visual feelings. The same is true for public art. When conceiving and designing a public space, the first consideration is a reasonable spatial layout: a stable horizontal layout, a solemn vertical layout, an elegant curved layout, a sharp triangular layout, a tense radial layout, a theme-centered layout with a clear focus, a progressive layout with a sense of rhythm, a free three-point layout, and so on. These layouts make the public space decorative and bring people different visual experiences. The second aspect is the decorativeness of the expression technique. Whether in black-and-white painting or decorative painting, different means of expression are needed to achieve decorativeness, and the same holds for public art. Different colors bring people different psychological feelings, and public art expresses its decorative effect through changes of color or the attributes of its materials.

Public art planning and design in the context of smart cities needs to keep pace with the times, but the integrity of the original scene must be maintained in the process of public art design. Therefore, this paper combines scene elements and integrates the IoT smart city to conduct public art planning and design research.

There are various problems in the practice of urban public art, and in the implementation of urban construction many practical issues must be faced and solved. For example, it is necessary to safeguard the comprehensive interests of the locality and even the country in the political, economic, and cultural fields; to respect the customs that reflect local geography, history, and reality; to follow the spatial order and functions appropriate to the established or planned city; and to follow the aesthetic laws, technical requirements, and evaluation standards of art itself. It is also necessary to establish rules covering decision-making and public opinion exchange, qualification identification and authority norms for the creative subject, conventional procedures, related laws and regulations, and the whole set of internal operating mechanisms [4]. With the development of society and the improvement of urban economic strength, China’s cities have taken on a new look, and the quality of urban public space has been greatly improved. However, compared with developed countries, there are still large gaps and many problems in urban public art design in China [5]. For example, although some urban public spaces and landscapes have improved, the overall quality of the urban public environment has declined; there has been much destructive construction, resulting in new visual pollution; urban public art design emphasizes construction and neglects protection, so many historical and cultural heritages have been destroyed; and urban public construction emphasizes appearance over substance and is eager for quick success and instant benefits [6].

2. Related Work

Literature [7] holds that public art is a concept with strong sociological and cultural dimensions rather than a concept of pure art; although critics and artists have raised the issue of the publicness of art in recent years, public art in the strict sense has not yet appeared in China. Literature [8] holds that public art is not equal to urban sculpture or landscape art; the core of public art is the publicness of art, the premise of publicness is respect for the individual, and publicness also implies communication and exchange, emphasizing a common social order and personal social responsibility. Literature [9] proposes that, in a broad sense, all artworks in public spaces can be called public art, while in a narrow sense public art may be limited to works created under percent-for-art programs. Literature [10] mentions that public art refers to art located in public space that can reflect the characteristics of the site and respond to the surrounding environment; it is given the task of conveying social and cultural information and meaning so that the general public can understand it, and it also stimulates the vitality of the area or place and promotes activity. In other words, the activities of daily life, people’s emotions, and the environment can all be conveyed through the medium of public art. Literature [11] proposes that public art can be discussed in the following four respects: (1) it is placed in a public space, and through the existence of artworks it emphasizes the importance of public welfare; (2) the artist directly faces an unspecified third party; (3) there is a process of consciousness formation (from top to bottom or from bottom to top); and (4) decision-making should include the concept of “user participation.” The definition of public art is formed through the intersection of these four aspects. Literature [12] mentions that public art has the following characteristics: (1) public art is an artistic creation that emphasizes teamwork and must be carried out in collaboration with architects, landscape architects, engineers, and other professionals; (2) public art aims to serve the public and is an artistic creation involving the public; (3) public art has a closer relationship with people, everyone has the opportunity to appreciate and encounter it, and it is almost a part of people’s daily life, an art of living; (4) public art often needs to be created according to the location where it is placed, sometimes even becoming part of the overall environment, and thus has a site-specific character; (5) the content and materials of public art are diversified, and there are no fixed factors limiting the content of creation; (6) public art initially aims at beautifying the environment and focuses on visual aesthetics, and it can be one of the tools of government construction; (7) one of the characteristics of public art is that it does not stand for politicians or for the elites of past society; it is contemporary civilian art.

3. Smart City Public Art Planning and Design Technology

A height map is a storage method that uses gray values to represent terrain elevation information, and it is widely used in virtual simulation modeling. Using height maps to build terrain is a common approach on VR platforms, so constructing the terrain scene from a height map is relatively efficient.

The height map production process is shown in Figure 1 [13].

(1) First, the DEM data is converted from WGS84 coordinates to rectangular coordinates to match the coordinate format of the VR platform. The following formulas convert the WGS84 latitude and longitude coordinates (for the northern hemisphere, where China lies) into rectangular coordinates:

X = (N + H) cos B cos L, Y = (N + H) cos B sin L, Z = [N(1 − e²) + H] sin B. (1)

Among them, X, Y, and Z are the converted rectangular coordinates; L, B, and H are the longitude, latitude, and height in the WGS84 coordinate system; N is the radius of curvature in the prime vertical; and a and b are the semimajor and semiminor axes of the reference ellipsoid, respectively. N is calculated as

N = a / √(1 − e² sin² B). (2)

Among them, e is the first eccentricity of the reference ellipsoid, calculated as

e = √(a² − b²) / a. (3)

ArcGIS and similar software can then generate data that satisfies the terrain modeling requirements of the VR platform.

(2) The DEM data in the converted coordinate system is used to make a grayscale image whose gray values correspond to the terrain elevation. The specific correspondence is shown in formula (4) [14]:

grid = (H_grid − Hmin) / (Hmax − Hmin) × (max − min) + min. (4)

Among them, H_grid is the actual elevation value of a DEM grid cell, Hmax and Hmin are the maximum and minimum actual elevation values of the DEM grid, max and min are the maximum and minimum gray values in the grayscale image, and grid is the resulting gray value of the DEM grid cell. Using this formula, the relationship between the DEM data and the grayscale image is established, and the grayscale image can be produced (an illustrative sketch of this conversion and mapping is given after this list).

(3) Finally, since the initial grayscale image is in TIFF format while the VR platform requires a height map in RAW format, the format needs to be converted. This paper uses the commercial software Photoshop to perform the conversion, and no data is lost in the process.
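As an illustration of steps (1) and (2), the following Python sketch converts WGS84 coordinates to rectangular coordinates and linearly maps DEM elevations to gray values. The array shapes and function names are illustrative, not the authors' implementation; the WGS84 semiaxes are the standard published constants.

import numpy as np

# Standard WGS84 reference ellipsoid semiaxes (meters)
A = 6378137.0          # semimajor axis a
B_AXIS = 6356752.3142  # semiminor axis b
E2 = (A**2 - B_AXIS**2) / A**2   # square of the first eccentricity e, formula (3)

def wgs84_to_rectangular(L_deg, B_deg, H):
    """Convert longitude L, latitude B (degrees) and height H (m) to X, Y, Z, formulas (1)-(2)."""
    L = np.radians(L_deg)
    B = np.radians(B_deg)
    N = A / np.sqrt(1.0 - E2 * np.sin(B) ** 2)   # radius of curvature in the prime vertical
    X = (N + H) * np.cos(B) * np.cos(L)
    Y = (N + H) * np.cos(B) * np.sin(L)
    Z = (N * (1.0 - E2) + H) * np.sin(B)
    return X, Y, Z

def dem_to_gray(dem, g_min=0.0, g_max=255.0):
    """Linearly map DEM grid elevations to gray values, formula (4)."""
    dem = np.asarray(dem, dtype=float)
    h_min, h_max = dem.min(), dem.max()
    return (dem - h_min) / (h_max - h_min) * (g_max - g_min) + g_min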

In the VR platform, the steps of constructing the terrain from the height map include importing the height map, setting parameters, generating the terrain, and texture mapping. The specific steps are shown in Figure 2 and described in detail below.

First, the completed height map is imported into the VR platform. At the same time, related parameters, mainly including the depth, terrain size, width, and height, are set according to the actual attributes of the terrain so as to avoid changing its actual size; the terrain size is set according to the length and width of the real terrain. Then, the VR platform reads the height map grid and its values to automatically generate the terrain model. Finally, the texture mapping relationship is established according to the height map index, the terrain texture mapping is completed, and the urban public art VR terrain scene is constructed.
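For illustration only, a height map exported in RAW format could be read back and rescaled to actual terrain heights roughly as in the following sketch. The resolution, bit depth, and scaling convention are assumptions that must match the actual export settings and the VR platform's import parameters.

import numpy as np

def load_raw_heightmap(path, resolution=1025, depth_bits=16):
    """Read a square RAW height map back into a 2-D array.
    resolution and depth_bits are assumptions and must match the export settings."""
    dtype = np.uint16 if depth_bits == 16 else np.uint8
    data = np.fromfile(path, dtype=dtype)
    return data.reshape((resolution, resolution))

def grid_to_world_height(grid, terrain_height, depth_bits=16):
    """Scale the stored gray values to actual terrain elevations in meters."""
    return np.asarray(grid, dtype=float) / (2 ** depth_bits - 1) * terrain_height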

3.1. Principle of the Stencil Test

The stencil test occurs in the per-fragment operation stage of the GPU, which is the last step of the rendering pipeline. One of the most important tasks in this stage is to determine the visibility of each fragment, which involves a series of tests, including the pixel ownership test, scissor test, alpha test, stencil test, depth test, and so on. The complete process is shown in Figure 3. Only after a fragment has passed all the tests can it be blended with the colors of the pixels already in the buffer and finally written into the buffer for rendering. The fusion display method in this paper uses the stencil test to discard fragments in the fusion area in order to achieve a fused display of urban building openings and the terrain. The specific process of the stencil test is described in detail below.

The stencil test is a relatively complicated process whose details differ between graphics interfaces, and it is usually used to limit the rendered area. When the stencil test is performed, the GPU first reads the stencil value at the fragment’s position in the stencil buffer. It then compares this value with the reference value; the developer can specify the comparison function, for example, discarding the fragment when its stencil value is less than the reference value. If a fragment does not pass the stencil test, it is discarded and does not enter the subsequent depth test, and its color value is not merged with the color already stored in the color buffer, thereby achieving the effect of limiting rendering. The process of the stencil test is shown in Figure 4.
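As a minimal illustration of the comparison described above, the following Python sketch simulates the per-fragment stencil comparison. The buffer layout, comparison functions, and masking follow the common OpenGL-style convention and are assumptions rather than the platform's actual implementation.

import operator

# A subset of the comparison functions that graphics APIs typically offer
COMPARE = {
    "NEVER": lambda ref, stored: False,
    "LESS": operator.lt,
    "EQUAL": operator.eq,
    "NOTEQUAL": operator.ne,
    "ALWAYS": lambda ref, stored: True,
}

def stencil_test(stencil_buffer, x, y, ref, func="EQUAL", read_mask=0xFF):
    """Return True if the fragment at (x, y) passes the stencil test.

    Following the usual convention, the test is (ref & mask) FUNC (stored & mask);
    a fragment that fails is discarded before the depth test and is never blended."""
    stored = stencil_buffer[y][x]
    return COMPARE[func](ref & read_mask, stored & read_mask)

# Example: with func="NOTEQUAL", fragments whose stored value equals ref
# (e.g., the fusion area) fail the test and are discarded.
buffer = [[0, 1], [1, 0]]
passed = [stencil_test(buffer, x, y, ref=1, func="NOTEQUAL")
          for y in range(2) for x in range(2)]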

3.2. Calculation Method of Fusion Area

(1) The curved wall section is shown in Figure 5; its outer contour is composed of a straight line and a circular arc. The points on the straight segment (P and Pn+z) are easy to obtain and are not described in detail here. The coordinate system XOY is established based on the urban building center lines O and P; then, any point P(X, Y) on the outer contour of the urban building can be calculated by formula (5) [15].

(2) The straight wall section is shown in Figure 6. Its outer contour also includes straight and curved segments, and the points on the straight segments are easy to calculate.

Here, d is the distance between o and O, H is the distance between point P and point Pn+z, R is the radius of the arc, point P is any point on the arc, and αx is the coordinate azimuth angle between the two points o1 and Pi. We establish the coordinate system XOY based on the center lines O and P of the city building; then, any point on the outer contour of the city building can be calculated by formula (6) [16].

After obtaining the plane coordinates of any point on the outer contour of the city building’s cross section, the Z-axis coordinate of the cross section must also be calculated. According to the orthogonal relationship between the cross section and the terrain slope, the Z value can be calculated by projecting the cross section of the city building along the orthogonal direction, as shown in Figure 7 [17].

We take the orthogonal direction of the urban building’s cross section as the Z-axis direction and obtain the slope s of the urban building from the design data. The Z value of point P can then be calculated according to formula (7).
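Formulas (5)-(7) are not reproduced above, so the following Python sketch only illustrates the general idea under assumed parameterizations: a point on the arc portion of the contour expressed through the arc center, the radius R, and the azimuth angle, and a Z value obtained from a base elevation plus the slope s times a projected distance. The names o1_xy, z0, and distance are hypothetical and do not come from the original formulas.

import math

def arc_point(o1_xy, radius, azimuth):
    """A point on the circular-arc part of the outer contour, expressed through the
    arc center, radius, and azimuth angle (assumed parameterization, not formula (5)/(6))."""
    x = o1_xy[0] + radius * math.cos(azimuth)
    y = o1_xy[1] + radius * math.sin(azimuth)
    return x, y

def section_z(z0, slope, distance):
    """Z coordinate of a cross-section point: base elevation plus slope times the
    distance projected along the orthogonal direction (assumed form of formula (7))."""
    return z0 + slope * distance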

Through the above steps, the intersection area between the urban building entrance and the terrain, that is, the fusion area, can be obtained. Rendering of the terrain in the fusion area is then restricted based on the stencil test principle, achieving an integrated display of the urban building entrance.

The fused display of the urban building entrance generally consists of three parts: fusion area calculation (the area where the urban building opening intersects the terrain), the stencil test, and updated terrain rendering. The specific steps are shown in Figure 8.

First, before starting the fused display, it is necessary to judge whether fusion is required according to the name and location information of the model. If fusion is not required, the fused display ends. If fusion is required, the terrain grid of the fusion area is calculated according to the fusion area calculation method and the slope parameters of the urban building opening. Then, a reference value is established for the fusion area. During the stencil test, the stencil value of each terrain fragment is compared with the reference value of the fusion area to determine whether the fragment lies in the fusion area: if it does, the terrain fragment is discarded; if it does not, the terrain fragment is retained. Finally, rendering of the terrain in the fusion area is precisely restricted, realizing a dynamic fused display of the urban building opening.
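The decision flow in Figure 8 can be illustrated with the following self-contained toy sketch, in which the terrain grid, the rectangular fusion area, and the reference value are all made-up inputs rather than the system's actual data structures.

import numpy as np

def render_with_fusion(terrain_rows=8, terrain_cols=8, fusion_box=(2, 2, 4, 4), ref=1):
    """Toy version of the flow in Figure 8: mark the fusion area in a stencil
    buffer, then keep only terrain cells whose stencil value differs from ref.
    fusion_box is (row, col, height, width); all inputs are illustrative."""
    stencil = np.zeros((terrain_rows, terrain_cols), dtype=np.uint8)
    r, c, h, w = fusion_box
    stencil[r:r + h, c:c + w] = ref   # write the reference value into the fusion area
    keep = stencil != ref             # stencil test: cells equal to ref are discarded
    return keep                       # True = terrain cell is rendered

mask = render_with_fusion()
print(mask.astype(int))               # zeros mark the opening where terrain is not drawn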

The viewing frustum refers to the cone-shaped region visible to the camera in the scene. For terrain scheduling based on the camera frustum, the range of the frustum must be calculated in real time according to the camera position, and the terrain bounding box of each node must be determined. Whether the bounding box lies within the camera’s field of view is then detected in real time to determine which terrain data should be retrieved. The frustum calculation and bounding box detection are described separately below.

3.2.1. Frustum Calculation

Figure 9 shows a schematic diagram of the camera frustum. The viewing frustum is the space visible to the camera, bounded by the top, bottom, left, right, near clipping, and far clipping planes. The four side planes (top, left, bottom, and right) correspond to the four boundaries of the screen. The near clipping plane is set to prevent objects from getting too close to the camera, and the far clipping plane is set so that objects too far from the camera are not rendered; only objects within the six planes are rendered. The calculation of the frustum planes is described in detail below [18].

The first step is to obtain the camera’s opening angle in the vertical direction, that is, the vertical opening angle shown in Figure 9. The second step is to calculate the aspect ratio (aspect in Figure 9); its calculation is also shown in Figure 9. Then, the parameter fY is calculated, which represents the offset between the upper and lower planes of the viewing frustum and the XZ plane, as shown in formula (8). In the same way, the parameter fX is calculated to represent the offset between the left and right planes of the viewing frustum and the YZ plane, as shown in formula (9).

The vertical offset relative to the XZ plane is calculated as follows:

fY = tan(fv / 2). (8)

The horizontal offset relative to the YZ plane is calculated as follows:

fX = tan(fv / 2) × aspect. (9)

In the formulas, fv represents the vertical opening angle in radians. Since the camera’s vertical angle fv spans fv/2 above and below the XZ plane, it is divided by 2. aspect is the width-to-height ratio, that is, the ratio between the horizontal and vertical offsets.

Finally, the three-dimensional coordinates of the eight vertices of the viewing frustum are calculated, as shown in formulas (11) and (12).

The side direction vectors are calculated as follows [19]:

The vertices of the far clipping plane are calculated as follows:

The vertices of the near clipping plane are calculated as follows:

In the formulas, Matrix is the matrix for transforming local coordinates to world coordinates, fX is the horizontal offset relative to the YZ plane, fY is the vertical offset relative to the XZ plane, P is the three-dimensional coordinate of the camera, and the direction vectors point along the four side directions of the frustum. The far and near distances are the distances from the camera to the far and near clipping planes, and the remaining symbols are the coordinates of the lower-left, upper-left, lower-right, and upper-right vertices of the far and near clipping planes, respectively.
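As a minimal sketch of formulas (8)-(12), assuming the frustum is expressed directly in world space with given unit forward, up, and right vectors (the local-to-world matrix of the original formulas is folded into these vectors), the eight frustum corners can be computed as follows; the function and parameter names are illustrative.

import math
import numpy as np

def frustum_vertices(cam_pos, forward, up, right, fov_v, aspect, near, far):
    """Compute the eight frustum corners.
    fov_v is the vertical opening angle in radians, aspect is width/height,
    and forward/up/right are assumed to be orthogonal unit vectors."""
    f_y = math.tan(fov_v / 2.0)   # vertical offset per unit distance, formula (8)
    f_x = f_y * aspect            # horizontal offset per unit distance, formula (9)
    cam_pos, forward, up, right = (np.asarray(v, float) for v in (cam_pos, forward, up, right))
    corners = {}
    for name, dist in (("near", near), ("far", far)):
        center = cam_pos + forward * dist          # center of the near/far clipping plane
        dy, dx = up * (f_y * dist), right * (f_x * dist)
        corners[name] = {
            "lower_left":  center - dx - dy,
            "upper_left":  center - dx + dy,
            "lower_right": center + dx - dy,
            "upper_right": center + dx + dy,
        }
    return corners

# Example: a camera at the origin looking along +Z with a 60-degree vertical opening angle
corners = frustum_vertices([0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0],
                           math.radians(60), 16 / 9, 0.3, 1000.0)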

3.2.2. Bounding Box Detection

The function of bounding box detection is to determine the terrain data to be rendered, and the key issue is to determine whether a terrain node lies inside the frustum. The space plane equations of the six faces of the viewing frustum can be established from the calculated frustum vertices (formula (13)), and formula (14) determines on which side of a frustum plane a point of the bounding box lies: in case ①, the point is on the plane; in case ②, the point is on one side of the plane; and in case ③, the point is on the other side. By combining the judgments for the six faces of the viewing frustum, it can be concluded whether a point of the bounding box is inside the frustum [20].

By judging all the vertices of the bounding box, it can be concluded whether the node is in the frustum; there are three situations (an illustrative sketch of this test follows the list):

(1) If all vertices are within the frustum, the bounding box of the node to be judged must be within the frustum.

(2) If some of the vertices are within the frustum, the bounding box of the node intersects the frustum; this situation is treated as visible.

(3) If none of the vertices are within the frustum, the bounding box of the node is generally outside the frustum. There is one exception, namely that the frustum lies entirely inside the bounding box, which is distinguished by setting a threshold.
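The following Python sketch illustrates the plane-side judgment and the three cases above. It assumes the six frustum planes are supplied as inward-facing normals with plane constants; how those planes are derived from the frustum vertices corresponds to formula (13) and is not shown here.

import numpy as np

def point_side(plane, p):
    """Sign of the plane equation A*x + B*y + C*z + D at point p:
    0 -> on the plane, >0 -> one side, <0 -> the other side (formula (14))."""
    normal, d = plane
    return float(np.dot(normal, p) + d)

def box_in_frustum(planes, box_vertices):
    """Classify a bounding box against six inward-facing frustum planes.
    Returns 'inside', 'intersect' (treated as visible), or 'outside'."""
    inside_count = 0
    for v in box_vertices:
        if all(point_side(pl, v) >= 0 for pl in planes):
            inside_count += 1
    if inside_count == len(box_vertices):
        return "inside"
    if inside_count > 0:
        return "intersect"
    return "outside"   # the box may still contain the frustum; a threshold check handles that case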

Figure 10 is a flowchart of octree bounding box detection and terrain scheduling. To improve efficiency, it is first determined whether this is the first judgment. If it is, the search starts from the root node of the entire scene to find the largest nodes contained in the frustum, which are saved in the terrain node library. If it is not the first judgment (i.e., the viewpoint is moving), the largest node contained in the frustum is first looked up among the recorded terrain nodes, other nodes are then searched on this basis, and the node is recorded to update the node library. At the same time, the terrain data within the frustum saved in the node library is drawn in real time.

Model scheduling based on VR ray collision detection mainly includes three parts: VR ray creation, model collision detection, and model scheduling, which are explained separately below.

(1) Ray creation: this refers to generating rays from the camera into the viewing frustum; the end of a ray is the farthest position the viewpoint can see. Using the RayCast function, rays of uniform density are created with the camera center as the origin and the viewing frustum as the boundary.

(2) Model collision detection: the key to model collision detection is to determine whether a model intersects the rays. Since the basic unit of a three-dimensional model is the triangle, judging whether the model intersects a ray amounts to judging whether the ray intersects a triangle. In this paper, the intersection is determined by solving the parametric equations of the ray and the triangle:

O + tD = (1 − u − v)V0 + uV1 + vV2.

The left side of the equation is the parametric equation of the ray, and the right side is the parametric equation of the triangle [21]. Among them, O is the starting point of the ray, D is its direction, and V0, V1, and V2 are the three vertices of the triangle. The coefficients t, u, and v can be obtained using Cramer’s rule (a minimal implementation sketch is given after this list). The RaycastHit class is used to save information about the collided model so that the model can be scheduled based on this information, mainly the distance between the ray origin and the collided model and the three-dimensional coordinates of the model’s position.

(3) Model scheduling: the main idea of model scheduling is to dynamically schedule the scene models into the rendering pipeline for rendering according to the results of VR ray collision detection, as shown in Figure 11. Based on the information obtained from collision detection, it is judged whether each model is inside the frustum.
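The following Python sketch is a standard implementation of the ray-triangle test described in step (2), solving the parametric equation for t, u, and v in the Moller-Trumbore (Cramer's rule) form; it is an illustration rather than the platform's built-in RayCast.

import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Solve O + t*D = (1 - u - v)*V0 + u*V1 + v*V2 for t, u, v.
    Returns the distance t on a hit, otherwise None."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    edge1, edge2 = v1 - v0, v2 - v0
    h = np.cross(direction, edge2)
    det = np.dot(edge1, h)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = inv_det * np.dot(s, h)
    if u < 0.0 or u > 1.0:             # intersection lies outside the triangle
        return None
    q = np.cross(s, edge1)
    v = inv_det * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:         # intersection lies outside the triangle
        return None
    t = inv_det * np.dot(edge2, q)     # distance from the ray origin to the hit point
    return t if t > eps else None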

In Figure 11, the solid red line represents the VR ray. Model scheduling based on the collision detection results covers several situations: if the model is not in the viewing frustum and no collision occurs, the model is not scheduled or drawn; if the model is in the viewing frustum but occluded and no collision occurs, the model is likewise not scheduled or drawn; if the model is in the viewing frustum, not occluded, and a collision occurs, the model is scheduled and drawn.

4. Smart City Public Art Planning and Design in a Multimedia Internet of Things Environment Integrating Scene Elements

Smart city public art is the product of the combination of design languages such as the form, color, and material quality of the product entity and the way it is set in space; its connotation cannot be created out of thin air apart from the facility entity and the space. It mainly refers to values on the spiritual level, such as the city image, the city spirit, and public emotion, that are attached to the material entity of the facility and that can be perceived, read, and arouse people’s emotional resonance. When new fashion trends enter the daily life of the public in the form of styles and concepts, the connotation of facilities usually takes on new forms and styles. By integrating multiple cultural elements, the facility reflects the emotional needs of the contemporary public and provides new vitality for the formation of new art types and for the extension and expansion of connotation in the fashion field. Figure 12 shows the planning and design of smart city public art in a multimedia Internet of Things environment that integrates scene elements.

In this study, 20 sets of image adjectives are used to evaluate the public art modeling samples. Through factor analysis, the 20 sets of image adjectives are reduced to two factors, which are renamed according to their meaning and characteristics as the “innovative factor” and the “affinity factor.” These two factors are then used to locate each sample in the image perception space, and the relative relationships between samples are discussed based on their distances in this space, in order to better understand the image perception and meaning presented by each modeling sample. The image recognition map of public art modeling is shown in Figure 13 [22].
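For illustration, the dimensionality reduction described above could be carried out roughly as in the following sketch using scikit-learn's FactorAnalysis; the rating matrix, sample count, and rating scale are placeholders, not the study's data.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# ratings: rows = public art modeling samples, columns = 20 image-adjective scores
ratings = np.random.default_rng(0).uniform(1, 7, size=(30, 20))  # placeholder data

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(ratings)   # each sample's position in the 2-D image perception space
loadings = fa.components_            # how each adjective loads on the two factors

# scores[:, 0] would correspond to the "innovative factor" axis and
# scores[:, 1] to the "affinity factor" axis of the image recognition map.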

On the basis of the above model, the effect of this model in public art planning and design is studied. First, the effect of scene element fusion is evaluated, and the result shown in Figure 14 is obtained.

The above results show that the smart city public art planning and design system in the multimedia Internet of Things environment designed in this paper achieves a good fusion effect for scene elements. Next, the smart city public art planning and design effect of the system is evaluated, and the results shown in Figure 15 are obtained.

These results show that the smart city public art planning and design system designed in this paper, which integrates scene elements in a multimedia Internet of Things environment, achieves a good smart city public art planning and design effect.

5. Conclusion

The practicality of public art can be divided into two categories, public landscapes and indoor landscapes, which are distributed in every corner of urban space. Sculptures, installations, and plant shapes of all sizes are scattered throughout streets and alleys, while signs and floor coverings occupy the walls and floors of public places. Bionics is a design method often used in public landscapes: with the help of a certain biological feature, the designer imitates its form in the design of public service facilities in public space, thereby adding a sense of interest while remaining practical. In addition, a simple relief or floor covering can also serve as a group of visual guides; such signage guides the eye, not only beautifying the public space environment but also fully reflecting the practical function of public art. Based on the integration of scene elements, this paper combines the Internet of Things smart city to conduct public art planning and design research. The experimental results show that the smart city public art planning and design system proposed in this paper, which integrates scene elements in a multimedia Internet of Things environment, achieves a good smart city public art planning and design effect.

Data Availability

The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This study was sponsored by Hebei Academy of Fine Arts.