Abstract

The study aims to solve a bottleneck problem in the field of intelligent navigation visual sensing and to build a cyber-physical network system, realizing a visual sensor in which the depth and color of a scene correspond one to one. It aims to improve the robustness and accuracy of color recognition of color structured light in complex environments. Based on digital twins (DTs) technology, the logistics process and the physical entity are effectively transformed into a quasi-real-time digital image. The omnidirectional vision sensing technology of a single viewpoint and the panoramic color volume structured light generation technology of a single emission point are integrated, yielding a new active 3D panoramic vision sensor that develops current visual sensing technology toward volumetric visual perception. This technology is adopted in the preliminary design of environmental art in scenic spots: it can predict the feasibility of an environmental art design, largely avoid mistakes in decision-making, and save human and material resources. It can also analyze and predict possible dangerous situations, greatly improving the environmental safety factor of the scenic spot. In the improvement stage of environmental art design in the scenic spot, active 3D vision sensing technology can obtain more comprehensive information and is more conducive to the selection of design schemes.

1. Introduction

With the integration and application of a new generation of information technology (such as cloud computing, the Internet of things (IoT), big data, and spatial computing) in industry, countries around the world have issued their own advanced manufacturing development strategies to interconnect and intelligently operate the industrial physical world and the information world, thereby realizing intelligent industry. The Notice on Accelerating the Digital Transformation of State-owned Enterprises, issued by the State-owned Assets Supervision and Administration Commission of the State Council in September 2020, clearly pointed out that it is necessary to use 5th-generation mobile communication technology (5G), cloud computing, blockchain, artificial intelligence, digital twins (DTs), BeiDou communication, and other new-generation information technologies to explore and build new IT architecture modes such as the “data center” and “business center” that meet the business characteristics and development needs of enterprises, and to build an agile, efficient, and reusable new generation of digital technology infrastructure. DTs create a virtual model of a physical entity in a digital way and simulate the entity's behavior in the real environment with the help of data. They add or expand new capabilities for physical entities through virtual-real interactive feedback, data fusion analysis, decision iterative optimization, and other means. DT products cover the whole product life cycle, act as a bridge and link between the physical world and the information world, and provide more real-time, efficient, and intelligent services. At present, the main way for humans to obtain information is vision, and vision gives people intuitive images that are easy to accept and understand.
However, human visual perception is not unlimited. As an extension of human vision, visual sensing technology has the advantages of strong adaptability, uninterrupted operation, high speed, high efficiency, and objective judgment standards. Thus, visual sensors are more and more widely used in all walks of life in contemporary society. With the development of society, visual sensing technology is constantly improving, and its role in the environmental art design of scenic spots is also changing, but some technical problems remain to be solved. For example, how to realize intelligent visual perception has become a key technical problem to be solved urgently in China’s social development.

At present, intelligent visual perception technology faces a bottleneck problem: how to improve the signal source quality of the acquired video image. When a three-dimensional space scene is converted into a two-dimensional image by the system, important information such as the object point depth and azimuth is lost, which affects judgments about the real environment. The ideal visual sensing technology can help obtain a three-dimensional panoramic image “centered on the visitors in the scenic spot” [1]. At present, the technology widely used in product quality detection, target detection and tracking, reverse engineering, obstacle detection, robot positioning, and navigation is passive stereo vision perception technology. It has many advantages, such as simple structure, rich information, and convenient use. However, it faces a bottleneck problem in the field of computer vision: how to improve the accuracy of binocular stereo matching while analyzing and calculating quickly. An effective way to solve this problem is to use active vision technology instead of passive stereo vision perception technology [2, 3]. However, an urgent problem remains in active vision technology: how to improve the accuracy and robustness of color recognition of color structured light in complex environments.

The study mainly explores the following two aspects: (1) by combining the omnidirectional vision sensor and the panoramic color structured light generator, current visual sensing technology is further developed into the visual perception of body structure; (2) through research, the accuracy of color recognition of projected light in the environment is further improved, achieving a panoramic vision sensor in which scene depth corresponds to color one by one. Based on the above, the study greatly improves the perception ability of visual sensing technology and applies it more effectively to the design of environmental art in scenic spots.

2. Materials and Methods

The active 3D panoramic vision sensor is obtained by combining ODVS (omnidirectional vision sensor) and PCSLG (panoramic color structured light generator). By applying the resulting ASODVS (active stereo omnidirectional vision sensor) in practice, the color map and scene depth corresponding to the actual environment can be obtained.

The omnidirectional vision sensor with a single viewpoint and the panoramic color volume structured light source with a single emission point are fused and assembled on the same main axis by a suitable connection mode. According to the working principle of ODVS and the definition of epipolar geometry, a ray whose origin is the center point of the panoramic image corresponds to an epipolar line of the stereo image pair obtained by ODVS; this correspondence can serve as a constraint condition for stereo matching [4]. Active stereo vision can thus simplify the epipolar matching step of passive panoramic stereo vision. Because the color light emitted by PCSLG and the reflected light received by ODVS lie on the same epipolar plane, and because time-sharing control technology is used, the collected color of an object point differs with the power supply state of the panoramic color projection light source. When the power state is ON, the color information of object point A contains the information of the projected light source; when the power state is OFF, the color information of object point B is that of the object point itself [5], as shown in Figure 1.
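The ON/OFF time-sharing idea above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes the two frames are captured in quick succession and pixel-registered, and all names and pixel values are illustrative.

```python
import numpy as np

def isolate_projected_color(frame_on, frame_off):
    """Separate the projected-light color from the object's own color.

    With the PCSLG light source ON, a pixel records object color plus
    projected light; with it OFF, it records the object color alone.
    Subtracting the OFF frame from the ON frame leaves (approximately)
    the projected light's contribution.
    """
    diff = frame_on.astype(np.float32) - frame_off.astype(np.float32)
    return np.clip(diff, 0.0, 255.0).astype(np.uint8)

# Example: a pixel lit by a red projected stripe.
frame_off = np.full((2, 2, 3), (40, 40, 40), dtype=np.uint8)   # light source OFF
frame_on = np.full((2, 2, 3), (200, 45, 42), dtype=np.uint8)   # light source ON
print(isolate_projected_color(frame_on, frame_off)[0, 0])      # red-dominant residual (160, 5, 2)
```

In practice any scene motion or ambient-light change between the two frames corrupts the difference, which is why the time-sharing control must switch the light source quickly.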

2.1. DTs

With the development of DTs technology, remarkable results have been achieved in various industries. Meanwhile, it effectively accommodates technologies such as big data, 5G, and artificial intelligence, promoting China’s construction to a new level, as shown in Figure 2.

Generally, the information between the human interaction interface and the database needs to be retrieved or interconnected, which breaks information isolation more conveniently and in a more timely way. Nowadays, with the development of related technologies, the industrial IoT has been realized as a DTs communication system. Its application is more conducive to real-time, intelligent, independent access and to effectively collecting and analyzing processes, service information, and products in the actual industrial environment. The industrial IoT is connected by multiple communication software layers; on this basis, it can also monitor, exchange, collect, and analyze information at the equipment layer and at all levels of the equipment-layer network. In practice, enterprises can simultaneously use sensors, software, and machine learning to collect and analyze big data streams from physical objects through a series of technical means; once collection and analysis are completed, the data can be managed and operated on.

The above is an overall description of the DTs application technology, which will give more new vitality to the environmental management system of the scenic spot. Later, a virtual digital environment will be built based on this technology to realize the seamless switching of multiple applications of the environmental management system of the scenic spot.

2.2. ODVS Structure Analysis

As one of the components of an active 3D panoramic vision sensor, an omnidirectional vision sensor can collect video information of the whole environment for further analysis by the system. This design adopts the single-viewpoint imaging method, which has the advantage of ensuring that the perspective image of any point in the scene is unique, so the whole process conforms to the imaging principle [6, 7]. In practice, knowledge of the relevant parameters of the omnidirectional vision sensor is then sufficient: the incident-angle information of any point in the scene can be calculated by an inverse operation.

When placing the perspective camera, it should be noted that the optical center of the camera and the focus of the mirror must coincide.

The optical imaging process of ODVS can be expressed by equations (1)–(5):

X, Y, Z are the space coordinates of a point on the hyperboloid; x, y are the imaging point coordinates; c is the distance between the focus and the origin of the hyperboloid mirror; 2c is the distance between the two focal points; a, b are the lengths of the real and imaginary axes of the hyperboloid mirror; β is the included angle of the incident light on the projection plane; λ is the angle between the incident light and the horizontal plane; α is the angle between the principal axis of the hyperboloid and the catadioptric light of the space point. The imaging principle of a single viewpoint is shown in Figure 3.
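The bodies of equations (1)–(5) are not reproduced in the text above. For readability, a commonly used form of the single-viewpoint hyperboloid catadioptric model, written with the symbols just defined, is sketched below; this is a hedged reconstruction of the standard model (with f denoting the camera focal length, a symbol not defined in the text), not necessarily the paper's exact equations.

```latex
\begin{align}
  \frac{X^2+Y^2}{a^2}-\frac{Z^2}{b^2} &= -1, \qquad c=\sqrt{a^2+b^2} \\
  \beta &= \tan^{-1}\!\left(\frac{Y}{X}\right) \\
  \tan\lambda &= \frac{Z}{\sqrt{X^2+Y^2}} \\
  \tan\alpha &= \frac{(b^2+c^2)\sin\lambda-2bc}{(b^2-c^2)\cos\lambda} \\
  x = f\tan\alpha\cos\beta, &\qquad y = f\tan\alpha\sin\beta
\end{align}
```

Under this model, once λ and β of an incident ray are known, the imaging point (x, y) follows, and conversely the incident ray can be recovered from the pixel by the inverse operation mentioned above.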

As Figure 3 suggests, the ray from object point G strikes the hyperboloid mirror at point P and is reflected toward the camera optical center; the reflected ray then intersects the imaging plane at an image point. The intersection of the main axis of ODVS with the reflected ray defines the included angle α, and the intersection of the incident ray with the reflected ray on the epipolar plane defines a second included angle. If these two angles are known in practice, the incident ray can be fully determined. The camera of the omnidirectional vision sensor is placed behind the hyperboloid mirror, with the camera lens at the virtual focus generated by the catadioptric mirror. The advantage is that most object points can be monitored in real time [8].

Before calculating the depth of the object point, it is necessary to determine the incident angle of the corresponding light, which can be determined by calibrating the position of the pixel in the panorama [9].

The imaging steps of single-viewpoint ODVS are as follows: first, the light is reflected by the mirror onto the sensor plane, then mapped to the image plane, and the image is then integrated and analyzed. Two planes must therefore be considered: the image plane and the sensor plane. Suppose the observation point X is projected to a point on the sensor plane of the camera; the corresponding point on the image plane has coordinates related to it by the camera mapping.

The imaging model is constructed according to the principle of single view imaging, as shown in Figure 4.

In the actual operation process, for various reasons, the camera focus and the mirror axis are only approximately collinear, which introduces a certain degree of error. Further, a certain offset occurs between the sensor center point and the image center point. The relationship between the sensor center point and the image center point can be expressed by

In (8), the two unknowns are a fixed transformation matrix and a translation vector. In Figure 3, a sensor-plane point and the center point together form a vector, and this space vector is projected through the optical center onto the plane. The relationship between the spatial point and the space vector can be described by

In (9), a projection matrix appears together with a function describing the mirror shape; a further function describes the relationship between the sensor-plane and image-plane points. All of the above functions are related to the parameters of the catadioptric mirror of the camera.
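The bodies of equations (8) and (9) are not reproduced above. As a small illustration of the sensor-plane/image-plane relation just described, the following sketch applies an assumed fixed transformation matrix and translation vector, in the general form u'' = A u' + t used by Scaramuzza-style omnidirectional camera models; the numerical values are purely illustrative.

```python
import numpy as np

# Illustrative values only: a near-identity misalignment matrix and a
# small pixel offset between the sensor center and the image center.
A = np.array([[1.0, 0.001],
              [0.0, 1.0]])        # fixed transformation matrix (assumed)
t = np.array([2.5, -1.8])         # translation vector in pixels (assumed)

def image_to_sensor(point_img):
    """Map an image-plane point to the sensor plane via u'' = A @ u' + t."""
    return A @ np.asarray(point_img, dtype=float) + t

print(image_to_sensor([100.0, 50.0]))   # input shifted by the misalignment and offset
```

In calibration, A and t are estimated together with the mirror-shape function rather than assumed, which is what compensates the misalignment described above.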

Scaramuzza further specified these functions on the basis of the perspective projection model and proposed that a single function can replace the pair of functions. The equation can then be changed to

Assuming that the function has rotational symmetry and that the plane of the catadioptric mirror and the sensor is perpendicular to the camera lens, the error caused by the approximate collinearity of the camera focus and the mirror central axis described above can be compensated. Taylor polynomials [10, 11] can be used to represent the function, i.e., f(ρ) = a0 + a1ρ + a2ρ^2 + … + aNρ^N, where ρ is the distance from a sensor-plane point to the sensor center point.

The parameters in the above equation can be solved by using the relationship between given reference points and their corresponding imaging points. The incident angle of each pixel in the panoramic picture can then be obtained through equation (12) and the relevant parameters of ODVS. The specific equation is as follows:
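Equation (12) itself is not reproduced in the text above. As an illustration of how such a calibration polynomial is used, the sketch below maps a panorama pixel to an incidence angle; the coefficient values, the image center, and the atan2 form are assumptions in the spirit of the Scaramuzza-style model, not the paper's calibrated values.

```python
import math

# Example polynomial coefficients; real values come from calibration
# with known reference points, as described above.
coeffs = [-180.0, 0.0, 1.2e-3]   # g(rho) = a0 + a1*rho + a2*rho^2

def incidence_angle(u, v, center=(320.0, 240.0)):
    """Incidence angle (degrees) of the ray behind panorama pixel (u, v)."""
    rho = math.hypot(u - center[0], v - center[1])   # distance to image center
    z = sum(a * rho**k for k, a in enumerate(coeffs))
    # Elevation of the ray direction (rho, z) relative to the image plane.
    return math.degrees(math.atan2(z, rho))

print(round(incidence_angle(520.0, 240.0), 2))
```

A pixel at the image center (ρ = 0) maps straight down the mirror axis, and the angle sweeps outward as ρ grows, which is exactly the inverse operation used to recover the incident ray of each panorama pixel.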

2.3. PCSLG Structure Analysis

The panoramic color structured light generator is an important part of ASODVS. It can obtain more three-dimensional information from a given image while easing the object-point matching problem and simplifying sensor calibration [12, 13]. The panoramic color structured light generator used here has a single emission point, because a single emission point helps solve the problems of accuracy, real-time performance, and robustness in the field of visual sensing technology [14].

PCSLG can be realized in the following three ways. (1) A shaped wavelength-variable filter is fused with the mirror where the catadioptric reflection occurs, and a white visible light source placed at its focus irradiates it; the color peak wavelength of the circular wavelength-variable filter, emitted along different angular positions of the circular substrate, varies linearly. Panoramic color light coding is formed after refraction or reflection through the plane, and each color peak wavelength then corresponds to a unique projection angle. (2) LED lights of different colors and wavelengths are placed on different longitude and latitude lines of a spherical surface; although the normal of each LED light passes through the center of the sphere, the light wavelength generated by LEDs of different colors corresponds to a distinct projection angle. (3) This method is similar to method (2), except that, because a laser semiconductor has the advantages of long propagation distance and good performance, laser semiconductors replace the LEDs as the emission light source; the basic principle is unchanged [15].
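The linear color coding of method (1) can be inverted to recover the projection angle from a measured peak wavelength. The sketch below assumes an illustrative 400–700 nm filter range and a ±90° projection span; the paper states only that the wavelength varies linearly with angular position, so these endpoints are assumptions.

```python
# Assumed filter range and projection span (illustrative values).
WL_MIN, WL_MAX = 400.0, 700.0     # peak wavelength range, nm
ANG_MIN, ANG_MAX = -90.0, 90.0    # projection angle span, degrees

def wavelength_to_angle(wl_nm):
    """Invert the linear color code: measured peak wavelength -> projection angle."""
    frac = (wl_nm - WL_MIN) / (WL_MAX - WL_MIN)
    return ANG_MIN + frac * (ANG_MAX - ANG_MIN)

print(wavelength_to_angle(550.0))   # mid-band wavelength -> 0.0 degrees
```

Because the mapping is strictly monotonic, each recovered wavelength identifies exactly one projection angle, which is the property the structured-light coding relies on.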

The study verifies, through experiments [16], the influence of wavelength and chromaticity on the brightness of LED lamps. Red and blue LED lights are used to illuminate the wall, respectively, and the points within the aperture emitted by the LED lights are measured. The specific experimental process is shown in Figure 5.

The red “×” in the figure is the set measuring point.

2.4. ASODVS Imaging Principle Analysis

The spatial point is determined by combining ODVS and PCSLG: once the projection angle and incidence angle of the point are determined, the relevant depth information of the point is obtained. Theoretically, the different colors generated by the light source emitted by PCSLG after plane projection will include the color of each pixel in ODVS. By analyzing the color of the light generated by the panoramic color structured light source after plane projection, the projection angle of each pixel can be determined. If the calibration parameters of ODVS are known, the incident angle of the reflected light corresponding to the pixel's projected light source can be calculated from those parameters. The imaging principle of ASODVS is verified through the following example. Place the light emitted by PCSLG at 30° north latitude; the emitted light source is red with a wavelength of 650 nm. It is refracted and reflected through the object point and then imaged. From the known correspondence between the color emitted by PCSLG and the projection angle, the projection angle of the object point is 30°, and the incident angle of the object point can be calculated from the calibration parameters of ODVS. If the baseline distance is also known, the distance between the object point and the central eye point can be calculated according to [17, 18], where the quantities involved are the baseline distance, the incident angle of the light, and the emission angle.
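The depth equation itself is not reproduced in the text above, so the following is a hedged sine-rule reconstruction of the triangulation it describes: the emission point and the viewpoint are separated by the baseline, and the projected and reflected rays close a triangle with it. The angle conventions (both angles measured from the baseline) are an assumption.

```python
import math

def triangulate(baseline, incidence_deg, emission_deg):
    """Range from the ODVS viewpoint to the object point.

    Law of sines on the triangle formed by the baseline, the projected
    ray, and the reflected ray; the third angle of the triangle is
    pi - incidence - emission.
    """
    a = math.radians(incidence_deg)
    e = math.radians(emission_deg)
    return baseline * math.sin(e) / math.sin(a + e)

# Symmetric case: both angles 60 degrees gives an equilateral triangle,
# so the range comes out equal to the baseline.
print(triangulate(0.2, 60.0, 60.0))
```

As the two rays approach parallel (incidence + emission near 180°), sin(a + e) tends to zero and the depth estimate becomes ill-conditioned, which is why a sufficiently long baseline matters in practice.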

In addition, the azimuth of an object point relative to the central eye [19] of the active 3D panoramic vision sensor can be calculated from the coordinate values of the point's pixel on the panorama and the coordinate values of the center-point pixel.
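The azimuth computation above amounts to taking the polar angle of the object point's pixel around the panorama center. The paper's own equation is not reproduced in the text, so the atan2 form and axis convention below are assumptions.

```python
import math

def azimuth_deg(u, v, u0, v0):
    """Azimuth of the pixel (u, v) around the panorama center (u0, v0)."""
    return math.degrees(math.atan2(v - v0, u - u0))

print(azimuth_deg(420.0, 240.0, 320.0, 240.0))   # along the +u axis -> 0.0
print(azimuth_deg(320.0, 340.0, 320.0, 240.0))   # along the +v axis -> ~90
```

Together with the depth from triangulation and the incidence angle, this azimuth fixes the object point's full 3D position in the sensor-centered coordinate frame.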

3. Results

3.1. Experimental Results

The ODVS parameter results obtained through experiments are shown in Figure 6.

In this study, the calculation method of equation (12) is verified by setting measured points in a real scene. The experimental environment is shown in Figure 7.

The measured points set in Figure 7 are the black spots on the left white wall. The actual incident angle is calculated through the geometric relationship between ODVS and the set pixel points, and the theoretical incident angle is calculated according to the relevant parameters of ODVS. The relevant data are recorded as shown in Figure 8.

Figure 8 shows that the difference between the theoretical and actual incidence angles is small, and the resulting absolute error is correspondingly small. Figure 8 is drawn using the data measured after illumination with red and blue LED lights. The test data of red and blue LED lamps under different brightness levels are shown in Figure 9.

When designing the panoramic color structured light generator, the aim is the ideal effect that the color of the light generated after plane projection changes with the position of the emitting light source. In short, each emission angle generated by PCSLG corresponds one to one with a color in the image, as shown in Figure 10.

3.2. Existing Problems

Although many researchers are studying active vision systems, some problems remain without suitable solutions. For example, current technology cannot recognize the color of projected light in more complex environments. With further research, the projection angle could in the future be determined from the correspondence between the color of the projection light source and the color recognized in the image [20–22]. However, this is only an expectation, and it is difficult to achieve in practical applications. The reflected light received by any pixel in ASODVS contains two kinds of information: the color of the projected light (panoramic color volume structured light), and the object's own color combined with the ambient color. In addition to the above problems, the recognition of the projected-light color is also affected by factors such as the coupling difference between the imaging equipment and the projection equipment. Therefore, in practical application, in order to obtain spatial position attributes (such as object point depth), interference from these other factors should be avoided so that the obtained color information of the light is relatively accurate.

3.3. Application

The application of visual sensing technology in the environmental art design of scenic spots is not limited to a certain stage; it runs through the whole design process [23–26].

In the early-stage design of environmental art in a scenic spot, any small mistake may cause a waste of human and material resources. If the indicators are estimated by manpower alone, errors are inevitable, resulting in unnecessary waste. However, if visual sensing technology is used in the early stage to predict the feasibility of the environmental art design, mistakes in decision-making can largely be avoided and human and material resources saved. In the early stage of design, the safety of the environmental art project also needs to be inspected and evaluated in advance; to ensure safety, people generally do not experience it in person, so visual sensing technology is particularly important at this time. It can detect the environment in real time and analyze and predict possible dangerous conditions, greatly improving the safety factor of the scenic spot environment.

The design of environmental art in scenic spots is not achieved overnight; it often requires continuous deliberation and iterative improvement. In practice, visual sensing technology can be used to compare scheme designs. For example, although elements such as the water body and greening in a scenic spot design are fixed, the feelings of visitors keep changing. As another example, visitors get completely different impressions of a large-scale sculpture from different angles, so visual sensing technology can obtain more comprehensive information, which is more conducive to the selection of design schemes.

Through DTs application technology, the top-level planning of the energy management system can replan matching relationships such as pipeline layout and energy type in a holographic mirror display environment, and it can also be integrated with additive manufacturing. This will shorten the cycle of remote diagnosis, operation, and maintenance upgrading of the energy management system. The data of each piece of energy equipment can be presented in the DTs system, including fault type data, historical data, and real-time data. On this basis, the system can be operated and maintained more intuitively, providing support for remote diagnosis, operation, and maintenance. The energy management system is multi-business and collaborative, and the future development of the smart park will be more intelligent and digital; as an important part, the energy management system will play an important role in realizing synergy with other management business systems.

4. Conclusion

Human beings perceive the real world through vision and perceive the depth of space objects from the parallax of the two eyes. The ideal visual perception device is a large-field-of-view sensor that can obtain scene depth and a color map corresponding one to one to the actual space objects. However, current imaging technology loses the depth information of a space object in the process of obtaining its image. Stereo vision technology in computer vision can enable machines to perceive the stereo information of objects as humans do. Therefore, stereo vision has become a research hotspot of computer vision.

Based on DTs technology, a new active three-dimensional panoramic vision sensor is realized by combining ODVS with PCSLG, and its architecture and stereo imaging principle are introduced. DTs technology is applied to environmental art design. The analysis shows that visual sensing technology is indeed widely used in the environmental art design of scenic spots. On the basis of DTs technology and visual sensing technology, the human-computer interaction interface can realize real-time information interaction between all personnel and the DTs, which can greatly improve work efficiency; save human, material, and financial resources; and predict some possible dangerous situations in advance. However, in practical application, the influence of the projected light source and the ambient light on the object-surface illumination model has not been considered, which remains a problem for the active vision sensor. In further research, these factors need to be analyzed through modeling. Many problems remain for future research; for example, the current experiments have not deeply studied the inherent color of the object, ambient natural light, and specular reflection in the panoramic case. Future experimental research needs to solve the rapid differentiation of the solid color, conditional color, and light-source color of the object in the panoramic case. In the specific implementation, a correction-model algorithm needs to be added to the three-dimensional detection algorithm to further improve the accuracy and robustness of light-source-color detection.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the Sichuan landscape and Recreation Research Center of Planning and Construction of Urban Commercial Recreation Area in Sichuan (Project No.: GAYP2016041).