Mobile Information Systems

Special Issue: Artificial Intelligence and Edge Computing in Mobile Information Systems

Research Article | Open Access


Lijia Zeng, Xiang Dong, "Artistic Style Conversion Based on 5G Virtual Reality and Virtual Reality Visual Space", Mobile Information Systems, vol. 2021, Article ID 9312425, 8 pages, 2021. https://doi.org/10.1155/2021/9312425

Artistic Style Conversion Based on 5G Virtual Reality and Virtual Reality Visual Space

Academic Editor: Sang-Bing Tsai
Received: 22 Apr 2021
Accepted: 22 May 2021
Published: 30 May 2021

Abstract

With the rapid development of digital information technology, virtual reality (VR) and VR visual space technology have become important branches of 5G-era computing. Their application research has attracted increasing attention, and their practical value and application prospects are broad. This paper studies artistic style conversion based on 5G VR and VR visual space. Starting from the two concepts of VR technology and VR vision, it analyzes their development and characteristics, discusses the possibility and inevitability of their fusion, and introduces the space art produced by that fusion, an art space that gives people an “immersive” experience. The paper analyzes several immersive works, examines their multisensory, multitechnology forms of spatial art style transformation, and summarizes the advantages and disadvantages of current art style conversion based on 5G VR and VR visual space as a reference for the future development of VR immersion. Ease-of-use indicators are also analyzed. The experimental results show that, except for the sensory experience indicator, the average values of all indicators are less than 1, indicating a project with good ease of use, and that 5G VR and VR vision technology can improve the transformation of spatial art style.

1. Introduction

With the continuous development of science, technology, and information technology, a variety of new display methods have begun to be used in space art, allowing exhibitions to keep innovating in their ways and means of display. With the emergence of 5G virtual reality (VR) technology, science and technology are constantly being injected into the design of space art exhibitions. This removes the sense of monotony of traditional space art exhibitions while greatly improving their interest and visibility.

Virtual reality technology intervenes directly, at the technical level, in fields related to space art, affecting the production, materials, and performance techniques of space art and changing the way space art is experienced. This intervention and influence is happening now and will increase in the future [1]. This paper focuses on the characteristics of 5G virtual reality, combining spatial art expression methods with scientific research on art space in the field of VR vision and then combining traditional art with modern visual display. On the basis of a more reasonable understanding of the concept of space, it is hoped that modern technology can realize the perception of space on more levels [2, 3].

This paper describes virtual reality technology and its research status on the basis of the consulted literature and applies 5G virtual reality and VR vision technology to space creation. The generation and transformation of artistic space style are introduced, and the virtual reality system and its key technologies are emphasized. This paper is divided into six sections, arranged as follows. Section 1 introduces relevant background on 5G virtual reality technology, expounds the research background and research status of this technology, and determines the research direction of the thesis. Section 2 introduces related work on 5G virtual reality technology and the innovations of this paper. Section 3 introduces the basic principles of VR stereo vision, the composition of the virtual reality system, virtual design in the time and space of multimedia art, and the evaluation indicators of visual variables. Section 4 discusses virtual reality technology and applies it to VR visual variable spatial cognition experiments. Section 5 presents the data analysis of artistic style conversion based on 5G virtual reality and VR visual space. Section 6 summarizes the research work, analyzes existing problems, and puts forward prospects for future research on virtual reality technology.

2. Related Work

At present, many scholars have carried out in-depth research on 5G virtual reality technology. Bastug et al. argue that the success of immersive VR experiences depends on solving numerous challenges across multiple disciplines. They emphasized the importance of VR as a disruptive use case for 5G (and beyond), leveraging the latest developments in storage/memory, fog/edge computing, computer vision, artificial intelligence, and so on. In particular, they described the main requirements of wirelessly interconnected VR and introduced some of its key elements, then presented their research approach and its potential major challenges. They also studied three VR case studies but did not provide numerical results under various storage, computation, and network configurations. Finally, they revealed the limitations of current networks and argued for more theory and for public initiatives to take the lead in VR innovation [4]. Elbamby et al. believe that virtual reality is expected to become one of the killer applications in 5G networks. However, many technical bottlenecks and challenges need to be overcome to promote its widespread adoption. In particular, the demands of VR in terms of high throughput, low latency, and reliable communications require innovative solutions and basic research across multiple disciplines. In light of this, they discussed the challenges and enablers of ultrareliable and low-latency VR. In addition, in a case study of an interactive VR game arcade, they showed that an intelligent network design using mmWave communication, edge computing, and proactive caching can realize the future vision of wireless VR [5]. Due to the increasing popularity of augmented reality and virtual reality (AR/VR) applications, Sukhmani et al. surveyed the efforts made to bring AR/VR to mobile users.
The purpose of their survey is to introduce the latest technologies related to edge caching and computing, with a focus on AR/VR applications and the tactile Internet, and to discuss applications, opportunities, and challenges in this emerging field. Both AR/VR and tactile Internet applications require substantial computing power, high communication bandwidth, and ultralow latency, which current wireless mobile networks cannot provide. By 2020, long-term evolution (LTE) networks will begin to be replaced by 5G networks. Edge caching and mobile edge computing are among the potential 5G technologies that bring content and computing resources closer to users and reduce latency and backhaul load [6]. Roche and Townes argue that the next generation of wireless communication (“5G”) is expected to revolutionize the world’s communication methods. Autonomous vehicles, robots, virtual reality, drones, the “Internet of Things” (IoT), and all mobile communications will be realized through new radio spectrum from 6 to 300 GHz. The physical network infrastructure will be separated from the logical or “virtual” infrastructure. Software-defined networks (SDN) will be established and dismantled as needed, and network management will use artificial intelligence. The new standard being developed by the International Telecommunication Union will open network control and circuit provisioning to the outside world, including competitors and users. The complexity of the emerging 5G architecture will provide many opportunities for practitioners and open up exciting new prospects for research [7].

The main innovative work of this paper includes the following aspects. (1) This paper approaches the topic from the perspective of art, combining artistry and technology to study the spatial art form of immersive art, examining past interactive forms of immersive art and various types of immersive interaction. It summarizes some shortcomings of current interaction and, on this basis, predicts the future development of immersive interactive art. (2) The application of VR technology and the spatial attributes of virtual reality space are summarized on the basis of existing applications. With the birth of virtual reality technology, designers and audiences can have a virtual experience produced by space artworks.

3. 5G Virtual Reality and VR Vision

3.1. Basic Principles of VR Stereo Vision

Stereo vision denotes the perception of depth and three-dimensional structure based on visual information. In humans this is binocular stereo vision, obtained with two eyes. Because the eyes are located at different positions on the face, binocular vision generates two slightly different images; the visual cortex of the brain processes the differences between them and generates depth perception [8, 9].

Stereo vision is the core of the interactive vision of a VR system; through it, people can experience the virtual world. A stereo image is made as a pair showing the same scene from two different angles, corresponding to the angles from which a person’s two eyes observe an object. Stereo rendering technology is a two-dimensional projection technology used to display discrete three-dimensional sample data sets [10, 11]. Stereo vision is an important part of binocular vision, allowing people to perceive depth directly, with each eye receiving the visual stimulus from a slightly different aspect. Generally speaking, animals and humans with overlapping visual fields have this ability. Stereo vision can also be seen as a line-intersection problem. When a light source emits light, part of the light is reflected by an object toward the human eyes, and a faint ray enters the pupil, where its journey ends [12]. At this point there are two pieces of information to be processed: the position at which the light entered the eye and the corresponding response on the retina. The problem for the visual cortex and the processing parts of the brain is to decide where the light came from.

To solve this problem, the direction of gaze of each eye must be determined. Assuming that light enters the eye in a straight line, the visual cortex effectively traces the ray back in the opposite direction. Doing this for both eyes determines the intersection of the two lines. If the light really traveled in straight lines, the intersection is the point where the light was reflected by the object. By solving this line-intersection problem for the visible light, the brain reconstructs the depth of the scene we see. Accordingly, before considering the function of the lens, a virtual object is projected on the display, the corresponding pixels are filled with the object’s color values, and the light from these pixels on the screen is projected into the eyes [13, 14].
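The back-tracing idea described above amounts to triangulating the intersection of two gaze rays. The following sketch is not from the paper; the eye positions and the 6.4 cm interpupillary distance are illustrative assumptions. It estimates the fixation point as the midpoint of the shortest segment between the two rays:

```python
import numpy as np

def triangulate_gaze(eye_l, dir_l, eye_r, dir_r):
    """Estimate the 3D fixation point as the midpoint of the shortest
    segment between the two gaze rays (one ray per eye)."""
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    w0 = eye_l - eye_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # ~0 when the rays are parallel (no depth cue)
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = eye_l + t1 * d1           # closest point on the left ray
    p2 = eye_r + t2 * d2           # closest point on the right ray
    return (p1 + p2) / 2

# Eyes 6.4 cm apart, both converging on a point 2 m straight ahead.
eye_l = np.array([-0.032, 0.0, 0.0])
eye_r = np.array([ 0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 2.0])
p = triangulate_gaze(eye_l, target - eye_l, eye_r, target - eye_r)
print(p)  # ≈ [0, 0, 2]
```

When the two rays genuinely intersect, the midpoint coincides with the intersection; with noisy gaze directions it degrades gracefully to the nearest mutual point.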

3.2. Composition of the VR System

In order to realize the artistic style of simulated space, this paper adopts the currently popular 5G VR technology. VR is a high technology with computer technology at its core that can generate a highly realistic virtual environment of vision, hearing, and touch. The experiencer can use related equipment to interact with objects in the virtual environment in a simple and natural way, giving the user the feeling of being in a real environment [15, 16].

Desktop VR uses computer monitors to let users observe the VR world and provides a full-bodied VR experience that feels like being in a real environment. Immersive VR uses a display or other equipment installed in a helmet to completely surround the experiencer’s vision, hearing, and other senses, providing a brand-new and highly realistic virtual feeling space. Distributed VR connects two or more users in different virtual environments to each other over a network [17]; the connected users can share information in the same environment. A typical VR system usually consists of the main parts shown in Figure 1:
(1) Detection module: detects input commands such as user operation information and sends them to the virtual simulation environment through the system’s human-computer interaction interface.
(2) Feedback module: receives output information from the human-computer interaction interface and feeds it back to the user.
(3) Human-computer interaction module: usually with hardware support, sends information to the virtual environment and returns the information received and processed by the system to the user in an acceptable form [18, 19].
(4) Control module: processes information from the human-computer interaction interface and runs the simulation according to specific mathematical and physical control models, affecting the virtual environment, the user, and the real world.
(5) 3D model library module: supplies the virtual scene displayed on the front end.
(6) Modeling module: obtains the detection information and data input by the user and establishes the corresponding simulation model.
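As a rough illustration of how such modules could be wired together, the following sketch implements hypothetical pass-through versions of four of them in a minimal detect → model → control → feedback loop. All class and method names here are assumptions for illustration, not the system’s actual API:

```python
class DetectionModule:
    """Captures raw user input (e.g., a controller or gaze event)."""
    def read(self, raw_event):
        return {"type": "user_input", "payload": raw_event}

class ModelingModule:
    """Updates the simulation model (world state) from detected input."""
    def update(self, event, world):
        world.setdefault("events", []).append(event["payload"])
        return world

class ControlModule:
    """Applies (placeholder) mathematical/physical rules to the world state."""
    def step(self, world):
        world["ticks"] = world.get("ticks", 0) + 1
        return world

class FeedbackModule:
    """Turns the updated world state into user-facing feedback."""
    def render(self, world):
        return f"tick {world['ticks']}: {len(world.get('events', []))} event(s)"

# Minimal interaction loop: detect -> model -> control -> feedback.
world = {}   # stands in for the 3D model library / scene state
detect, model, control, feedback = (DetectionModule(), ModelingModule(),
                                    ControlModule(), FeedbackModule())
for raw in ["grab", "point"]:
    world = control.step(model.update(detect.read(raw), world))
    print(feedback.render(world))  # "tick 1: 1 event(s)", then "tick 2: 2 event(s)"
```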

3.3. Virtual Design in the Time and Space of Multimedia Art

Virtual design is a comprehensive product of computer graphics, artificial intelligence, computer networks, information processing, and mechanical design and manufacturing, developed at the end of the 20th century. Multimedia art is a new category of art derived from computer technology. Computers can create entities that people can feel but that are actually virtual rather than real. The virtual space-time of multimedia art is the space-time of virtual fantasy produced by the various senses; it is a means of transmitting information and creating art [20]. For example, at the entrance of a large exhibition hall, a multimedia art design can display the historical charm of the content in virtual time and space, presenting the virtual scene clearly before the audience; the audience’s immersion and actual experience can thus transcend the limits of both time and space.

The virtual space-time of multimedia art design is embodied mainly in the following aspects: first, the virtual space-time system generated by graphics, video, animation, and text; second, the space-time system produced by sound and music; and third, the interactive space-time system. Time and space are the basic forms of material existence, and multimedia art is likewise an art of time and space. When designing multimedia works, actual objects are first used as references and then converted into virtual environment entities. In other words, this includes real-time, multidimensional informational space-time mapping such as modeling, real-time tracking, sound positioning, visual tracking, and viewpoint sensing. Modeling based on actual objects produces a three-dimensional virtual world constructed by computer technology, in which part of the objects’ physical characteristics is maintained as required. Seasonal changes such as spring, summer, autumn, and winter, changes in weather, obstacles or encounters in different environments, and the different actions of people allow one to experience realistic moods in the virtual space-time environment. Sound and music also constitute virtual time and space; for example, the time difference, phase difference, and sound pressure difference with which various sound sources arrive at a specific location create different feelings of time and space.
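The time difference mentioned above (the interaural time difference between a sound’s arrival at the two ears) can be approximated with Woodworth’s classical spherical-head model. This sketch is an illustration, not part of the paper; the head radius and speed of sound are assumed typical values:

```python
import math

def interaural_time_difference(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference (in seconds) for a distant source at the given azimuth.
    head_radius (m) and speed of sound c (m/s) are assumed typical values."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# A source straight ahead gives no time difference; a source at 90° gives the
# maximum difference of roughly 0.66 ms.
for az in (0, 45, 90):
    print(az, round(interaural_time_difference(az) * 1e6), "µs")
```

Spatial audio engines use cues like this one, together with level and phase differences, to place virtual sound sources around the listener.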

3.4. Visual Variable Test Evaluation Index

The gaze-tracking module integrated into the VR device can record the eye movements of test subjects unobtrusively. This research focuses on the effectiveness and efficiency of three visual variables (color, size, and spatial posture) and investigates their impact on human visual behavior in a VR environment. Effectiveness and efficiency can be evaluated by two quantitative indicators, the gaze point proportion and the reaction time. The gaze point proportion (GPP) is calculated from the number of fixation points distributed over the seven colored cubes:

GPP = g_c / G. (1)

In formula (1), g_c is the number of gaze points falling on a given color cube, and G is the total number of gaze points over all color cubes; the gaze point data are obtained through collision detection between the eye-movement ray and the model. The reaction time (RT) is the time elapsed before the initial fixation on a colored cube. To account for differences between test subjects, the average response time over all subjects is used to measure efficiency:

RT = (1/n) Σ_{i=1}^{n} t_i. (2)

In formula (2), RT is the average response time, t_i is the response time for the i-th object, and n is the total number of objects. The choice of the RT index follows the research methodology of areas of interest (AOI).
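Formulas (1) and (2) can be computed directly from logged gaze data. The sketch below assumes hypothetical input formats (a list of per-sample cube labels and a list of per-subject first-fixation times), which are not specified in the paper:

```python
from collections import Counter

def gaze_point_proportion(gaze_hits):
    """Formula (1): share of gaze points falling on each colored cube.
    `gaze_hits` is a list of cube labels, one per recorded gaze point."""
    total = len(gaze_hits)
    counts = Counter(gaze_hits)
    return {cube: n / total for cube, n in counts.items()}

def mean_reaction_time(times):
    """Formula (2): average of the per-subject times to first fixation."""
    return sum(times) / len(times)

# Hypothetical data: each gaze sample labeled with the cube it hit.
hits = ["red", "red", "orange", "red", "purple"]
print(gaze_point_proportion(hits))  # {'red': 0.6, 'orange': 0.2, 'purple': 0.2}
print(mean_reaction_time([1.2, 0.9, 1.5]))
```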

4. VR Visual Variable Spatial Cognition Experiment

4.1. Experimental Conditions and Data Collection

In order to ensure the consistency of system display and interaction effects, all participants in the experiment used the same Pico VR device, integrated with the same type of Android smartphone, to record the overall test data in real time, and Excel was used for data analysis and to filter out invalid data. To ensure the objectivity and validity of this quantitative experiment, and taking into account the influence of environmental factors on the experimenters, participants completed the experiment independently in a relatively quiet environment. The experimental site was the Provincial Institute of Media Technology and Art.

4.2. VR Visual Variable Spatial Cognition Experiment Design

In this experiment, we first briefly introduce the experiment to the test subjects and ask each subject to sit at the origin of the VR helmet coordinates. Because eye characteristics vary from person to person, each subject must calibrate the eye tracker before the experiment to ensure the accuracy of the line-of-sight results. The task of this research is divided into three parts. In the first step, the test subjects wear HTC Vive VR helmets and freely view the models in the scene, which contains only 7 colored cubes (red to purple). Eye movement data are collected to identify each participant’s areas of interest and to calculate the effectiveness of the visual variables. In the second step, the participants are shown the scene with the models in disordered arrangement. The tasks of this step are to find the 7 colored cylinders (red to purple) and to identify the largest and smallest cylinders required by the experiment; the response time for the task is recorded. The last step repeats the second step in the scene with the models in ordered arrangement. There is no time limit for the experiment.

4.3. Realization of Virtual Design Platform

The space digital virtual design platform must support information sharing and transmission, so the virtual design platform is realized using FTP (File Transfer Protocol). FTP is an Internet-based application for two-way transfer of files. Using FTP, the audience can connect their computers to any server in the world that implements the FTP protocol, access the information on the server, and realize the sending and sharing of information. The realization and design of the virtual design platform are analyzed as follows:
(1) Design of the display interface of the virtual design platform. The design of the platform is based on the principles of learning and communication. The platform interface is one of the visual means of realizing information exchange and sharing; a direct, friendly, and user-friendly interface facilitates operation by the audience and helps users find the information they need while maximizing information sharing. The display interface of the digital virtual design platform is realized as Internet-based web pages. The communication area is usually embodied as an independent web forum, which encourages audience members to communicate with and learn from each other.
(2) FTP upload and download interface. FTP applications have two kinds of interface, a text interface and a graphical interface. An FTP application with a graphical interface is simple and easy to use, while the FTP commands of the text interface are much more complicated. Therefore, the virtual design platform used in digital virtual design is constructed with an FTP application that has a graphical interface.
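A minimal sketch of such FTP-based upload and download using Python’s standard ftplib is given below. The host, credentials, and the set of shared file types are illustrative assumptions, not details of the actual platform:

```python
from ftplib import FTP
from pathlib import Path

# Hypothetical set of file types the platform is willing to share.
DESIGN_EXTENSIONS = {".obj", ".fbx", ".jpg", ".mp4"}

def is_design_file(name):
    """Filter the files the platform accepts for sharing."""
    return Path(name).suffix.lower() in DESIGN_EXTENSIONS

def upload_design(host, user, password, local_path):
    """Upload one design file to the platform's FTP server."""
    local = Path(local_path)
    if not is_design_file(local.name):
        raise ValueError(f"unsupported file type: {local.suffix}")
    with FTP(host) as ftp:                 # control connection
        ftp.login(user, password)
        with open(local, "rb") as fh:      # binary mode for media assets
            ftp.storbinary(f"STOR {local.name}", fh)

def download_design(host, user, password, remote_name, dest_dir="."):
    """Download a shared design file from the platform."""
    dest = Path(dest_dir) / remote_name
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(dest, "wb") as fh:
            ftp.retrbinary(f"RETR {remote_name}", fh.write)
    return dest

# Example (requires a reachable server; host and credentials are placeholders):
# upload_design("ftp.example.com", "viewer", "secret", "gallery_scene.obj")
```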

5. 5G VR and VR Visual Space Art Style Conversion Data Analysis

5.1. Somatosensory Data Analysis of Space Art of VR Equipment

First, the scores of the 20 subjects on the VR devices in the original data table were summed, the average values were calculated, and the scores were ranked. Then, based on the usability evaluation indexes, the scores of all questions under each index were summed and averaged, and the four usability indexes were ranked comprehensively, yielding the averages and rankings of the original data shown in Table 1. The ranking analysis of the original data is shown in Figure 2, and the ease-of-use indicators are shown in Figure 3. By design, the average values in the ranking results reflect the users’ recognition of the ease of use of the VR device space art somatosensory interaction system.


Table 1: Total score, mean, and rank of each question, grouped under the four ease-of-use indicators (fluency, sensory experience, ease of learning, and efficiency); the numeric entries are not reliably recoverable from the source.
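The summarization procedure behind Table 1 (sum each question’s scores across subjects, average them, and rank by mean) can be sketched as follows; the sample scores are hypothetical, not the study’s data:

```python
def summarize_scores(score_table):
    """Total, mean, and rank (1 = highest mean) for each question's scores
    across all subjects, mirroring the procedure used to build Table 1."""
    stats = {q: (sum(v), sum(v) / len(v)) for q, v in score_table.items()}
    order = sorted(stats, key=lambda q: stats[q][1], reverse=True)
    return {q: {"total": stats[q][0], "mean": round(stats[q][1], 2),
                "rank": order.index(q) + 1} for q in stats}

# Hypothetical scores from three subjects on three questionnaire items.
scores = {"Q1": [7, 8, 6], "Q2": [3, 4, 2], "Q3": [9, 9, 8]}
print(summarize_scores(scores))  # Q3 ranks 1st (mean 8.67), Q1 2nd, Q2 3rd
```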

First, we analyze the ease-of-use indicators. Except for the sensory experience indicator, the average values of all indicators are less than 1, which marks a project with good ease of use. The fault tolerance score is only 0.32, indicating that the system recovers well from errors, and the average subjective satisfaction score is only 0.36, indicating that users are highly satisfied with the system. The experimental data show that the average for sensory experience is 12.87 and that, except for items 2, 3, and 8, the average value of the other questions is less than 10. This shows that users experience noticeable dizziness with this system and that the somatosensory interaction burden lies mainly in the building modules. On the whole, the ease of use of the system is good, with the scores of all indicators below 10 except for sensory experience. It is worth noting that, within the poorly performing sensory experience indicator, the visual and auditory effect scores are low, which shows that the audiovisual effect of the system is very good. The poor overall sensory experience is mainly due to the high scores of the dizziness-related questions, which raise the overall index value. Dizziness is a common problem for all VR products: because the experience environment of a VR product is special, the user must wear a specific device, and at this stage the hardware and experience quality cannot fully resolve vertigo. As far as the system itself is concerned, ease of use has reached the target set for all indicators.

5.2. Analysis of Experimental Results of Visual Simulation Degree Verification

The visual simulation degree verification experiment adopts a Likert scale with 5 grades for sense of space, realism of material and illumination, distance error, and movement perception, each scored 5–15 points. The data obtained from the subjective experiment are averaged according to the gender of the subjects. The average values of the subjective indexes of the visual simulation degree verification experiment are shown in Table 2 and in Figure 4.


          Sense of space   Authenticity   Distance error   Mobile sensing
Male      10.58            6.85           8.93             9.65
Female    8.65             7.67           12.09            9.53

It can be seen from Table 2 that, in terms of vision, both males and females reached very high values, and the realism of model materials and lighting is the basis for achieving immersion. The sense of space is almost completely consistent between the two groups. The average subjective judgment of the distance error is about 5, indicating that the distance error of the virtual space is less than 1.5 cm, which also ensures that there are no excessive errors in conversation distance in the language conversation experiment. In terms of movement perception, there is no large difference between movement in virtual space and in real space. Although both men and women reached high values, there are also certain differences: women pay more attention to detail than men, and all of their indicators except distance error are lower than men’s, which also shows that men are more sensitive to distance. Therefore, gender is an important control factor in the language conversation privacy experiment in this paper. Among the 20 subjects, no abnormal sensations such as dizziness occurred. However, the experiments show that sessions with VR equipment should be kept within 25 minutes as far as possible; otherwise, fatigue may occur and affect the conclusions drawn from the data.
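The per-gender averaging used to produce Table 2 can be sketched as follows; the subject records shown are hypothetical, not the experiment’s data:

```python
def mean_by_gender(records, metric):
    """Average one subjective metric separately for male and female subjects,
    as done for Table 2. `records` is a list of per-subject dicts."""
    groups = {}
    for r in records:
        groups.setdefault(r["gender"], []).append(r[metric])
    return {g: round(sum(v) / len(v), 2) for g, v in groups.items()}

# Hypothetical Likert responses (5-15 points per metric).
subjects = [
    {"gender": "male", "sense_of_space": 11, "authenticity": 7},
    {"gender": "male", "sense_of_space": 10, "authenticity": 6},
    {"gender": "female", "sense_of_space": 9, "authenticity": 8},
]
print(mean_by_gender(subjects, "sense_of_space"))  # {'male': 10.5, 'female': 9.0}
```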

5.3. Feasibility Analysis of VR of Space Art Works

The concept of VR is not only an exploration of technology but also a technological innovation in the field of art; only by integrating technology and art can the charm of VR design be fully expressed. Compared with the artistic expression of traditional physical forms, digital VR achieves breakthroughs in two respects. First, recognizing the actions of the experiencer at their location and strengthening the feedback of single or multiple senses produces the user’s sense of presence. Second, through VR technology the experiencer does not passively accept information from the outside world but can interact with virtual objects in the scene; this interaction refers to the degree of interplay between the user and the objects in the simulated environment and the degree of feedback obtained from the environment, based mainly on the two forms of vision and movement. A comparison of the applications of traditional technology and VR technology is shown in Figure 5. It can be seen from the figure that VR as a form of art practice is not limited to photos, framed pictures, and specific audio but is inclusive. As we enter the era of universal “experience” VR, with the advancement of human creativity, existing lifestyles have also changed. The practice scene of traditional art must be based on the forms of the objective world, whereas the art practice place of VR only needs to give the creator a subjective virtual space; compared with traditional art, it provides creators with much wider room for imagination.

6. Conclusions

This paper first introduces the theory of VR vision technology, analyzes several problems in VR technology, and gives a variety of solutions. Addressing the causes of these problems, this paper divides the VR system into six modules: the detection module, feedback module, human-computer interaction module, control module, 3D model library module, and modeling module. The paper focuses on 5G virtual technology and spatial art style conversion in VR vision.

This paper combines VR vision and VR technology to design a VR space system based on VR vision and applies the 5G VR-based visual variable test evaluation index proposed in this paper to the real-time optimization of the camera pose in the system. An experimental platform for the system was built, and the experimental results are presented in detail. Users can experience synchronous interaction between real space and virtual space through this system. This paper has studied the visual variable test evaluation indicators in VR vision in depth and found that their working principles and applicable scenes differ. Traditional methods estimate the camera pose by extracting image features and matching feature point pairs but fail when image features are lacking; likewise, without a VR environment and VR vision methods, traditional methods cannot predict the camera's dynamics.

This paper attempts to improve the simulation effect of VR by addressing the concept of space art design, that is, the relationship between art and technology based on human needs. Improving the simulation effect depends on the user’s needs and on improving the consistency of the attributes of the virtual world. The research results of this paper reveal the characteristics of VR art design. The “creative selection process” of the author’s creative function theoretically provides space for artistic design, while the “multiple insights” of the appreciation function determine the way people understand the virtual world. The research done in this paper can provide only a certain reference value.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. T. Cao, C. Xu, M. Wang et al., “Stochastic optimization for green multimedia services in dense 5G networks,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 15, no. 3, pp. 1–22, 2019.
  2. B. Kwintiana and D. Roller, “Ubiquitous virtual reality: the state-of-the-art,” Journal of Computer Science and Technology, vol. 8, no. 7, pp. 16–26, 2019.
  3. J. Yan, D. Wu, H. Wang, and R. Wang, “Multipoint cooperative transmission for virtual reality in 5G new radio,” IEEE Multimedia, vol. 26, no. 1, pp. 51–58, 2019.
  4. E. Bastug, M. Bennis, M. Medard, and M. Debbah, “Toward interconnected virtual reality: opportunities, challenges, and enablers,” IEEE Communications Magazine, vol. 55, no. 6, pp. 110–117, 2017.
  5. M. S. Elbamby, C. Perfecto, M. Bennis, and K. Doppler, “Toward low-latency and ultra-reliable virtual reality,” IEEE Network, vol. 32, no. 2, pp. 78–84, 2018.
  6. S. Sukhmani, M. Sadeghi, M. Erol-Kantarci, and A. El Saddik, “Edge caching and computing in 5G for mobile AR/VR and tactile Internet,” IEEE MultiMedia, vol. 26, no. 1, pp. 21–30, 2019.
  7. E. M. Roche and L. W. Townes, “Millimeter Wave “5G” wireless networks to drive new research agenda,” Journal of Information Technology Case and Application Research, vol. 19, no. 1, pp. 190–198, 2017.
  8. A. M. French, M. Risius, and J. P. Shim, “The interaction of virtual reality, blockchain, and 5G new radio: disrupting business and society,” Communications of the Association for Information Systems, vol. 46, no. 25, pp. 603–618, 2020.
  9. I. Brinkman and B. Clist, “Spaces of correlation. The art of conversion: Christian visual culture in the Kingdom of Kongo. By Cécile Fromont,” Journal of African History, vol. 57, no. 1, pp. 163–164, 2016.
  10. A. Das, S. Kottur, K. Gupta et al., “Visual dialog,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 5, pp. 1242–1256, 2019.
  11. D. Martín-Sacristán, C. Herranz, J. F. Monserrat et al., “5G visualization: the METIS-II project approach,” Mobile Information Systems, vol. 2018, no. 3, Article ID 2084950, 8 pages, 2018.
  12. A. O. Al-Abbasi, V. Aggarwal, and M.-R. Ra, “Multi-tier caching analysis in CDN-based over-the-top video streaming systems,” IEEE/ACM Transactions on Networking, vol. 27, no. 2, pp. 835–847, 2019.
  13. A. Hamacher, “Engineering a future-proof education,” Professional Engineering, vol. 30, no. 8, pp. 24–28, 2017.
  14. L. Anatole, “Playing with senses in VR: alternate perceptions combining vision and touch,” IEEE Computer Graphics and Applications, vol. 37, no. 1, pp. 20–26, 2017.
  15. A. S. Priyanka, K. Nag, V. R. Hemanth Kumar, D. R. Singh, S. Kumar, and T. Sivashanmugam, “Comparison of king vision and truview laryngoscope for postextubation visualization of vocal cord mobility in patients undergoing thyroid and major neck surgeries: a randomized clinical trial,” Anesthesia, Essays and Researches, vol. 11, no. 1, pp. 238–242, 2017.
  15. A. S. Priyanka, K. Nag, V. R. Hemanth Kumar, D. R Singh, S Kumar, and T. Sivashanmugam, “Comparison of king vision and truview laryngoscope for postextubation visualization of vocal cord mobility in patients undergoing thyroid and major neck surgeries: a randomized clinical trial,” Anesthesia, Essays and Researches, vol. 11, no. 1, pp. 238–242, 2017. View at: Publisher Site | Google Scholar
  16. L. Yin, “Virtual design method of indoor space environment based on VR technology,” Paper Asia, vol. 34, no. 5, pp. 39–43, 2018. View at: Google Scholar
  17. F. Adnan, “The design of augmented reality android-based application as object introduction Media learning to the children,” Advanced Science Letters, vol. 23, no. 3, pp. 2392–2394, 2017. View at: Publisher Site | Google Scholar
  18. A. Vankipuram, P. Khanal, A. Ashby et al., “Design and development of a virtual reality simulator for advanced cardiac life support training,” IEEE Journal of Biomedical & Health Informatics, vol. 18, no. 4, pp. 1478–1484, 2014. View at: Publisher Site | Google Scholar
  19. A. Selivanova, E. Fenwick, R. Man, W. Seiple, and M. L. Jackson, “Outcomes after comprehensive vision rehabilitation using vision-related quality of life questionnaires: impact of vision impairment and national eye Institute visual functioning questionnaire,” Optometry and Vision Science, vol. 96, no. 2, pp. 87–94, 2019. View at: Publisher Site | Google Scholar
  20. S. Feng and A.-H. Tan, “Towards autonomous behavior learning of non-player characters in games,” Expert Systems with Applications, vol. 56, pp. 89–99, 2016. View at: Publisher Site | Google Scholar

Copyright © 2021 Lijia Zeng and Xiang Dong. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.