Special Issue: Complexity Problems Handled by Advanced Computer Simulation Technology in Smart Cities 2021

Research Article | Open Access

Xiaojuan Xu, Jin Zhu, "Artistic Color Virtual Reality Implementation Based on Similarity Image Restoration", Complexity, vol. 2021, Article ID 7572654, 12 pages, 2021.

Artistic Color Virtual Reality Implementation Based on Similarity Image Restoration

Academic Editor: Zhihan Lv
Received: 24 Apr 2021
Revised: 20 May 2021
Accepted: 28 May 2021
Published: 22 Jun 2021


In this paper, exploratory and innovative research is done on the implementation technique of artistic color virtual reality based on similarity image restoration. A nonlocal natural-image prior regularization term built on similarity images is proposed to handle the single-image blind deblurring problem. This paper designs a new artistic color virtual reality realization technology based on similarity image restoration, which exploits the low-rank property among nonlocal similar blocks in images and adds a strongly convex term to enhance the convexity of the artistic color virtual reality model. We analyze virtual reality interaction design from the perspective of artistic color design, sort out the concept and content of the design, analyze the elements, design principles, and evaluation criteria of virtual reality interactive art color design, and explore its conceptual principles. A full understanding of the characteristics of the virtual reality interaction medium can help us use it as a tool to create works that deliver higher quality and a richer experience through a perceptual communication method closer to natural interaction. Combining the power of technology with artistic color thinking and a design approach paves the way forward. The study shows that virtual reality technology can effectively improve the status quo and promote the cultivation of professional practice ability in art color design, which is conducive to the cultivation of applied design talents.

1. Introduction

Similarity image restoration is an important branch in the field of image processing, and its study has strong practical significance. Most acquired images inevitably suffer from different degrees of noise pollution, and how to eliminate this noise has long been a focus of attention [1]. In practical applications of digital images, acquired images also suffer from various kinds of blur degradation, and blur removal has received equal attention. The common types of blur include Gaussian blur, defocus blur, motion blur, and atmospheric turbulence blur [2]. When there is relative motion between the imaging device and the target, the imaging rate lags behind the rate of object motion, so the captured moving target is superimposed on itself; the resulting degradation is called motion blur [3]. Since most imaging equipment is composed of many optical components, inaccurate focusing of the lens during imaging often leaves part of the image out of focus; the blur caused by such equipment problems is called defocus blur. The process of similarity image recovery is objective; that is, the original image is assumed to exist objectively, and it is recovered from the given degraded image [4]. Similarity image recovery generally uses a priori information about the image before degradation to recover the original image [5, 6].

A large proportion of current traditional similarity image restoration methods are driven by degradation models, which usually require modeling the degradation process and solving the resulting inverse problem in order to restore a potentially clear image. Such problems are often ill-posed, and thus the objective function needs to be formulated so as to exploit as much as possible of the valid information contained in the observed degraded images, or the inherent features of natural images [7]. At the same time, from the perspectives of technology, artistic concepts, technological progress, and human spiritual needs, we systematically sort out the background and significance of the development of virtual reality, the process of its formation, and its composition, and analyze the technology involved in virtual reality art, its artistic expression, and the fusion of artistic concepts from the flat to the panoramic view, as the conditions and background of interaction in virtual environments. The development of Western art and technology is the main line along which we trace the development of virtual reality art [8]. On this basis, this paper attempts to explore the complementary relationship between the virtual and the real in the creation of visual art, from the perspective of cultural and artistic hybridity and with reference to the Eastern perspective. From flat art to art formed in space, virtual reality art develops along a hybrid path. We explore the deeper origins of the fusion and formation of virtual reality art concepts, the development of virtual reality technology in concert with art, and a deeper understanding of this medium [9].

This paper takes similarity image restoration as its technical support and combines it with artistic color virtual reality to meet the realization requirements. The proposed approach is an image restoration method based on the combination of similar image features and graph-cut technology. In the restoration process, the filling order affects the restoration results. Sample-block-based image restoration techniques use a priority measure to determine the order of image repair. Rich image content often interferes with the priority computation, and the stability of the repair order can be enhanced if priority is computed on multiscale images. The first chapter is the introduction, which presents the research background and significance of image restoration in artistic color virtual reality implementation techniques and briefly describes the main contents, contributions, and organization of this paper. The second chapter is the related work section, which mainly analyzes the status of domestic and international research. The third chapter proposes a local neighbourhood-level image-guided filtering enhancement method and solves a variety of image enhancement problems by combining it with a simple image decomposition model; the model studied in this paper is also applied to the design of artistic color virtual reality implementation. The fourth chapter demonstrates the value of this paperʼs research through analysis of the art color realization algorithm, the art color realization model, and the color virtual realization effect. The fifth chapter summarizes the main research work, innovations, and contributions of this paper and gives an outlook on future research directions.

2. Related Work

The physical-model-based similarity image restoration method improves the degraded image from a physical point of view; its purpose is to restore the degraded image to its original appearance with maximum fidelity to the observed degraded image, combined with a priori knowledge [10]. For specific applications, scholars at home and abroad have researched similarity image restoration methods, which after years of development can be divided into nonblind and blind restoration methods according to whether the point spread function is known [11]. The blind image recovery method of Padcharoen et al. first estimates the point spread function from the observed image and then uses a nonblind recovery method to realize similarity image recovery [12]. Vergel et al. proposed a constrained model based on the TV model combined with higher-order partial differential filtering to address the staircase effect caused by the ROF model [13]. Cassidy et al. use imitation and simulation as keywords to create and share the art of illusion from a technical perspective [14]. From panoramic wallpapers, panoramic paintings combined with artificial topography, and stereoscopic movies, to Monetʼs water lily panoramas, early immersive 3D movies, and stereoscopic glasses, to 3D digital images realized by helmets and head-tracking devices — these are the clues to the evolution of todayʼs immersive interactive virtual reality from scientific and technological perspectives [15].

Various restoration methods have been proposed for specific problems, and maximizing the advantages of each method is one of the main concerns of most scholars [16]. The image is separated into texture and structure components; the structural component is restored by a variational restoration method, while the texture component is restored by texture synthesis, and the two are combined after each restoration is complete [17]. Yang et al. proposed the PatchMatch method, which uses EM to estimate pixel values and then uses a block matching strategy to find similar sample blocks [18]. Singh et al. use multiresolution features for image restoration, working first on low-resolution images and then on high-resolution images, and optimize the restoration order for single-layer images [19]. There are many similar restoration processing methods. However, a common drawback of such methods is that an error made when repairing the low-resolution image propagates to other layers, resulting in an unsatisfactory final restoration result [20].

With increasing demand, more and more scholars are joining image restoration research, and more image restoration methods are bound to emerge. Currently, image restoration based on deep learning is a hot research topic [21]. Training networks on large sample resources limits the applicability of image restoration; combining traditional restoration methods with deep features, or training on small samples to improve restoration quality, will be the direction of further research, and better restoration methods can be expected. Research on virtual reality interaction and interaction design, judging from the available information, has focused more on hardware, software technology, psychology, and human-computer interaction, which reflects the path and process of virtual reality development [22]. Interaction and interaction design in immersive virtual reality is a comprehensive cross-disciplinary synthesis of expertise in technology, art, and science, closely interlinked and developed gradually on a technological basis. At the present stage, when the technology has matured to a certain extent, understanding the characteristics of the medium and improving the quality of works and the interaction experience become the key issues in virtual reality art design.

2.1. Artistic Color Virtual Reality Implementation Technology Research
2.1.1. Artistic Color Feature Extraction Based on Similarity Image Recovery

In the field of computer vision, extracting features with scale invariance is a crucial step. Traditional feature detection algorithms extract feature points on linear Gaussian pyramids; for example, the SIFT algorithm constructs a difference-of-Gaussians scale-space structure, and the SURF algorithm uses box filters to approximate Gaussian derivatives [23]. These methods use Gaussian blurring when constructing the image pyramid, but Gaussian blurring does not preserve the boundary information of objects and smooths details and noise to the same degree at all scales, which sacrifices localization accuracy. To process noisy, blurred image data without affecting details and boundaries, methods for feature detection and description in nonlinear scale space have been proposed [24]. However, the traditional approach solves the nonlinear diffusion equation with the forward Euler method, which requires small step sizes and therefore many iterations, long run times, and high computational complexity. To solve these problems, this paper introduces and implements an algorithm based on nonlinear scale space that can detect and match features in two-dimensional images at multiple scales, and improves the original algorithm with an improved numerical method and an improved feature descriptor, thereby improving the performance of feature detection and matching. Figure 1 shows the system block diagram of the nonlinear algorithm.

In nonlinear diffusion filtering, the change in image brightness with increasing scale is described as the divergence of a flow function, and the diffusion process can be controlled through this function. From the nature of the nonlinear partial differential equation, it is known that the nonlinear scale space diffuses the brightness of the image. Thus, nonlinear diffusion filtering can be expressed by the nonlinear partial differential equation

∂L/∂t = div(c(x, y, t) · ∇L),  (1)

where div and ∇ denote the divergence and gradient operators, respectively, L is the brightness of the image, and c(x, y, t) is the conduction function, which can be in vector or tensor form. The time t is the scale parameter, which determines the complexity of the image representation.

The smaller the value of t in equation (1), the more complex the representation L. In anisotropic diffusion, the gradient of the image controls the diffusion at each scale; the conduction function can therefore be defined as

c(x, y, t) = g(|∇L_σ(x, y, t)|),  (2)

where ∇L_σ is the gradient of the original image L after Gaussian smoothing with standard deviation σ.

Since the nonlinear partial differential equation has no analytical solution, it must be approximated using numerical analysis [25]. The discretized diffusion equation can be written in implicit or semi-implicit form as

(L^(i+1) − L^i) / τ = Σ_l A_l(L^i) · L^(i+1),  (3)

where A_l is the image conductivity matrix in dimension l and τ is the time step.

Given an input image, the original image is first smoothed with a Gaussian function of a given standard deviation to reduce the effect of noise and other unfavorable factors. The gradient histogram of the smoothed image is then computed to obtain the contrast factor k. Finally, from the k value and a set of evolution times t_i, the nonlinear scale space is constructed with the AOS algorithm using the simple iteration

L^(i+1) = (1/m) Σ_{l=1}^{m} (I − m · τ · A_l(L^i))^(−1) · L^i,  (4)

where m is the number of image dimensions and I is the identity matrix.
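A minimal sketch of the AOS iteration in equation (4) follows, assuming m = 2 dimensions and a dense tridiagonal solve for clarity (a production implementation would use the Thomas algorithm); the neighbour-averaged conductivities are an illustrative discretization choice.

```python
import numpy as np

def aos_step_1d(u, c, tau):
    """Semi-implicit 1-D step: solve (I - tau * A(c)) u_next = u,
    where A is the tridiagonal conductivity matrix built from c."""
    n = len(u)
    A = np.zeros((n, n))
    for i in range(n - 1):
        w = 0.5 * (c[i] + c[i + 1])          # conductivity between pixels i, i+1
        A[i, i] -= w;         A[i, i + 1] += w
        A[i + 1, i + 1] -= w; A[i + 1, i] += w
    return np.linalg.solve(np.eye(n) - tau * A, u)

def aos_step_2d(L, c, tau):
    """Additive operator splitting (eq. (4) style, m = 2): average the
    semi-implicit solves along rows and columns, each with step m * tau."""
    rows = np.array([aos_step_1d(L[i], c[i], 2 * tau) for i in range(L.shape[0])])
    cols = np.array([aos_step_1d(L[:, j], c[:, j], 2 * tau)
                     for j in range(L.shape[1])]).T
    return 0.5 * (rows + cols)
```

The semi-implicit solve is unconditionally stable, which is why the AOS scheme can take much larger steps than the forward Euler method criticized above.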

The same nearest neighbour and second nearest neighbour distance ratio methods are used in the algorithm as the similarity measure criterion for feature point matching. By calculating the Hamming distance between each feature descriptor in the target image and all feature descriptors in the reference image, the nearest neighbour matching points and the second nearest neighbour matching points are selected. When the ratio of the Hamming distance of the nearest neighbour to the Hamming distance of the second nearest neighbour is less than a set threshold, it is considered as a candidate matching point [26]. Then, we use the algorithm to eliminate the false match points and finally achieve the correct matching of the image.
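The nearest-/second-nearest-neighbour Hamming ratio test described above can be sketched as follows; the binary descriptors, the brute-force search, and the 0.8 threshold are illustrative assumptions (the paper does not state its threshold value).

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def ratio_match(query, reference, ratio=0.8):
    """Accept a match only when the nearest-neighbour Hamming distance is
    clearly smaller than the second-nearest one (Lowe-style ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, r), ri) for ri, r in enumerate(reference))
        (d1, r1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:              # unambiguous nearest neighbour
            matches.append((qi, r1))
    return matches
```

A query descriptor that is roughly equidistant from its two best reference descriptors is rejected as ambiguous, which is exactly the candidate-filtering behavior the text describes before the final false-match elimination step.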

2.1.2. Artistic Color Virtual Reality Realization Model Construction

The different colors of light in nature are electromagnetic waves of different frequencies, and the different colors people observe are the result of the human visual system perceiving electromagnetic waves of different frequencies as different colors. It is not intuitive for the human visual system to distinguish the various colors in the spectrum by their wavelengths [27–33]. Therefore, hue, saturation, and luminance are usually used to represent the colors perceived by the human eye. Hue depends on the wavelength at which the color lies in the spectrum and indicates the type of color. Saturation is related to the purity of the hue. Luminance depends on the light intensity of the color, is proportional to the reflectance of the object itself, and characterizes the brightness of the color. To use color correctly, various methods of representing color, i.e., color models, have been proposed. The most widely used are the RGB model and the HSI model. In this paper, we need to convert between these two color models when processing artistic images; the model diagram is shown in Figure 2.

An RGB image can be thought of as consisting of three image components, R, G, and B. When fed into the red, green, and blue inputs of a color monitor, they mix on the screen to produce a color image. In the RGB model, any color image can be divided into three independent planes of R, G, and B, so this model is well suited to processing systems that represent images as three planes; for this reason, the RGB model is commonly used in multispectral satellite remote sensing image processing systems. The RGB model is an additive color process in which the three primary colors are mixed and superimposed in different proportions to produce different colors; more than sixteen million different colors can be expressed in a 24-bit RGB image.

In the RGB color model, the hue is affected by the ratio of the three color components R, G, and B. If the three components of the RGB model are processed directly, different degrees of variation in the three components will lead to severe color distortion. For color art images, the RGB image is therefore first converted to the HSI color space. In the HSI color model, H, S, and I represent hue, saturation, and luminance, respectively. Hue and saturation describe the chromaticity information of the color image, while the luminance component carries no color information and only determines the brightness of the pixel. In this paper, the CLAHE method operates only on the luminance component and outputs an RGB model at the end, so that the color information of the image is maintained and the processed image keeps the same hue as the original. Given a color image in the RGB model, it can be converted into the HSI color space by

I = (R + G + B) / 3,
S = 1 − 3 · min(R, G, B) / (R + G + B),
H = θ if B ≤ G, otherwise 360° − θ, where θ = arccos{[(R − G) + (R − B)] / [2 · sqrt((R − G)² + (R − B)(G − B))]}.  (5)
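The standard RGB-to-HSI conversion can be sketched per pixel as follows; this is the textbook formulation with normalized inputs in [0, 1] and H in degrees, and the zero-saturation (gray) convention of reporting H = 0 is an assumption for illustration.

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert one normalized RGB pixel in [0, 1] to (H, S, I), H in degrees."""
    i = (r + g + b) / 3.0                       # intensity: average of channels
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # saturation: distance from gray
    if s == 0:
        return 0.0, 0.0, i                      # hue undefined for gray; report 0
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta      # reflect hue when B > G
    return h, s, i
```

For example, pure red maps to H = 0° and pure green to H = 120°, matching the hue wheel, while any gray pixel has S = 0, confirming that luminance is separated from the chromatic components as the text requires.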

If the region of interest occupies only a small percentage of the image, plain histogram equalization can hardly enhance that region; the clipped histogram overcomes this problem by limiting the degree of enhancement. The degree of image enhancement is proportional to the slope of the cumulative distribution function, whose discrete mapping is

s_k = (L − 1) · Σ_{j=0}^{k} p_r(r_j),  (6)

where p_r(r_j) is the probability of gray level r_j.

The degree of image enhancement can be controlled by adjusting the clipping threshold. To limit the contrast to the desired degree, the CLAHE algorithm sets a clipping threshold β, defined as

β = (N / L) · (1 + (α / 100) · (s_max − 1)),  (7)

where N and L denote the total number of pixels and the number of gray levels within each subblock, respectively, s_max denotes the maximum allowed slope, and α denotes the clip factor.
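The clip-and-redistribute step of CLAHE implied by equation (7) can be sketched as below; the uniform redistribution of the clipped excess and the default values of s_max and α are illustrative assumptions (real implementations often redistribute iteratively).

```python
import numpy as np

def clip_histogram(hist, n_pixels, n_levels, s_max=4.0, alpha=100.0):
    """Clip a subblock histogram at beta = (N/L)(1 + (alpha/100)(s_max - 1))
    and spread the clipped excess uniformly over all bins (eq. (7) style)."""
    beta = (n_pixels / n_levels) * (1.0 + (alpha / 100.0) * (s_max - 1.0))
    excess = np.maximum(hist - beta, 0.0).sum()   # total mass above the threshold
    clipped = np.minimum(hist, beta)              # cap every bin at beta
    return clipped + excess / n_levels            # uniform redistribution
```

Capping each bin at β bounds the slope of the cumulative distribution function, which is precisely how the clipping threshold limits contrast amplification (and hence noise amplification) in near-uniform regions.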

The power transform is also very helpful for image contrast enhancement and improves the color fidelity of the image, so it is effective to apply a power transform to the image after CLAHE processing. The basic form of the power transform is

s = c · v^r,  (8)

where v is the input gray value and c and r are positive constants. When r < 1, the power transform maps a narrow range of dark input values to a wider output interval and a wide range of light input values to a narrower output interval. When r > 1, the opposite holds: a wide range of dark input values is mapped to a narrow output range, and a narrow range of light input values to a wide output range. When r = 1, the transform is a simple proportional mapping. In this paper, because the clipped histogram limits the contrast enhancement of artistic color images in the CLAHE steps, the low gray-value areas need to be compressed to further improve the contrast. Therefore, after the CLAHE operation, a power transform with parameter r > 1 is applied to the image to further improve the contrast and highlight local details, producing a satisfactory output image.
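The power-law (gamma) transform of equation (8) is one line in NumPy; the [0, 1] input range and the default constants are assumptions for illustration.

```python
import numpy as np

def power_transform(img, c=1.0, r=2.0):
    """Power-law transform s = c * v**r on an image in [0, 1] (eq. (8) style).
    r > 1 compresses dark values and stretches light ones; r < 1 the reverse."""
    return c * np.power(np.clip(img, 0.0, 1.0), r)
```

With r = 2, a mid-gray value 0.5 maps to 0.25, compressing the dark end exactly as the r > 1 case in the text describes.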

The input and output in virtual reality interaction can be analyzed as visual, auditory, dynamic, gesture, touch, etc. from the perspective of perception and hardware.

2.1.3. Artistic Color Virtual Reality Realization Design Evaluation

Through the art design practice of virtual reality works and the study of excellent works, the design principles are considered and organized, combined with prior experience in evaluating art design works across media categories, sorted out, summarized, and verified in the iteration of works. This yields a comprehensive rule: empathy, usability, reasonable surprise, feedback, testing, iteration, artistry, and physical characteristics are the basic principles and reference points of the immersive virtual reality interactive art design process. Design evaluation is an important method to improve and advance the design. The advancement of virtual reality interactive art design is achieved through continuous testing and iteration, and its evaluation is an important step that disassembles the comprehensive experience and feeds it back into the art design work. In addition to referring to relevant evaluation methods for 3D user interfaces, we structure the immersive evaluation criteria of virtual reality interactive art design from the perspective of the art design and comprehensive experience of the work, combining the associated factors in the work.

Based on their subjective perception of the image, evaluators give their respective evaluation scores according to the level, and on this basis, all the scores are weighted and averaged according to the weight coefficient, and the result of the weighted average is the result of the subjective evaluation. According to the reference system of evaluation images, subjective evaluation is divided into absolute evaluation and relative evaluation. Absolute evaluation is performed on a single image, and the evaluator evaluates the absolute good or bad quality of the image to be evaluated according to his subjective feeling and gives a direct quality evaluation score. Relative evaluation is to compare a group of images to be evaluated with each other and judge the order of merit of each image, based on which the evaluation level of all images is given, as shown in Table 1. The image quality obtained from the subjective evaluation is based on human subjective perception and reflects human visual factors, but it cannot be described by mathematical models or reflected by specific technical indexes, resulting in certain limitations in its quality evaluation.

Evaluation ID   Evaluation grade   Absolute evaluation   Relative evaluation   Evaluation score
1               First level        Very bad              Worst                 <60
2               Second level       Poor                  Poor                  60–70
3               Third level        General               Average               70–80
4               Fourth level       Better                Better                80–90
5               Fifth level        Well                  The best              90–100

Due to the limitations of subjective evaluation, objective evaluation has attracted great interest in practical research. There are three types of objective evaluation: full-reference, reduced-reference, and no-reference. Full-reference evaluation requires complete information about the original image, reduced-reference evaluation is used when partial information about the original image is known, and no-reference evaluation uses no information about the original image. In practical engineering problems, the original image is in most cases unknown, and no-reference evaluation must be used. The contrast and texture characteristics of an image can be indicated by the gray-level average gradient: a larger value indicates better image quality. For an image f of M × N pixels, the average gradient C can be expressed as

C = (1 / (M · N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} sqrt{[(∂f/∂x)² + (∂f/∂y)²] / 2},  (9)

where the partial derivatives are evaluated at pixel (i, j); the derivative of the image signal with respect to the spatial variables is the gradient of the image.
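A sketch of the no-reference average-gradient metric of equation (9) follows; the forward-difference approximation of the partial derivatives with a replicated border is an illustrative discretization choice.

```python
import numpy as np

def average_gradient(img):
    """Gray-level average gradient C for an M x N image (eq. (9) style):
    C = mean over pixels of sqrt(((df/dx)^2 + (df/dy)^2) / 2)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward difference in x
    gy = np.diff(img, axis=0, append=img[-1:, :])   # forward difference in y
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A flat image scores exactly zero, while any texture or edge content raises C, which is why the metric can rank sharpness without a reference image.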

Regarding the discussion of artistry, different people will always have different experiences under the changing concepts brought about by the background of different times and the different forms of expression caused by the characteristics of the medium. However, there is no doubt that artistry and aesthetic level are the ultimate and highest goal to enhance the sense of experience. Virtual reality art belonging to the digital era has the characteristic of remixing, in which artistry contains multisensory feelings, sound, visuals, and other emotional resonance. Unlike the classical aesthetic experience in traditional art, due to its digital media characteristics, it brings a new aesthetic system, where both ink and cyber styles can be virtualized through computer systems, and the creation of artistry falls on the sense of order, coordination or contrast, and the completion of the vision, whether through audio-visual or other perceptions, bringing a harmonious enjoyment to the experiencer from an aesthetic point of view. Efforts to improve artistry can always be made with higher demands.

3. Results and Analysis

3.1. Artistic Color Realization Evaluation Analysis

When analyzing and comparing the performance of different image processing methods, in addition to subjective observation, it is important to evaluate the processing results objectively. This section uses mean gradient (MG), peak signal-to-noise ratio (PSNR), information entropy (IE), and absolute mean brightness error (AMBE) to objectively evaluate the enhanced artistic color virtual imaging images (Figure 3), corresponding to the laboratory artistic color imaging image (Figure 3(a)) and the model standard imaging image (Figure 3(b)), respectively. Compared with the HE and CLAHE algorithms, the algorithm proposed in this paper produces the highest IE value; information entropy indicates the amount of image information, and in general a larger value means richer detail, so our algorithm performs best in detail enhancement. The PSNR of the HE algorithm is the lowest because global histogram equalization is prone to overenhancement and amplification of background noise; the PSNR of our algorithm is slightly lower than that of the CLAHE algorithm. The AMBE value of the HE algorithm is the largest and that of the CLAHE algorithm the second largest, owing to the overenhancement caused by the HE algorithm; the brightness retention ability of the CLAHE algorithm is the best. In the MG comparison, although the HE algorithm has the largest value, it attains it at the cost of overenhancement, which causes a serious color bias. Across the remaining indicators, our algorithm attains the maximum values, indicating good local contrast enhancement and high clarity. Taken together, the algorithm in this paper performs best in enhancing contrast and highlighting detail while maintaining image brightness to a certain extent and effectively suppressing background noise.
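The three scalar metrics used in this section can be sketched as follows; the 8-bit peak value of 255 and the 256-bin histogram are standard assumptions, not values stated by the paper.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def information_entropy(img, n_levels=256):
    """Shannon entropy (bits) of the gray-level histogram: richer detail,
    higher entropy."""
    hist, _ = np.histogram(img, bins=n_levels, range=(0, n_levels))
    p = hist / hist.sum()
    p = p[p > 0]                         # ignore empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

def ambe(ref, test):
    """Absolute mean brightness error: how well mean brightness is preserved."""
    return float(abs(ref.astype(float).mean() - test.astype(float).mean()))
```

Under these definitions, higher PSNR and IE and lower AMBE are better, matching the way the comparison in the text reads the three indicators.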

Figure 4 presents the objective evaluation metrics of the artistic color virtual image tested with different enhancement methods, including HE, BOHE, POSHE, MLBOHE, BBHE, RMSHE, and the proposed CLAHE-PL and POSHEOC methods, with objective metrics including PMGSIM, PSNR, IE, and AMBE. Compared with the other methods, the POSHEOC algorithm proposed in this paper produces the highest PMGSIM value, followed by CLAHE-PL. The PSNR and AMBE data show that POSHEOC has the best values except for the RMSHE algorithm. During RMSHE processing, the mean brightness of the output image grows with the number of iterations and gradually approaches the mean brightness of the input image, so the mean brightness of the original image is preserved and noise or artificial artefacts are largely reduced. Therefore, the RMSHE algorithm can obtain high PSNR and low AMBE values; however, its contrast enhancement capability is insignificant, which leads to low-quality enhancement results. The BOHE and POSHE algorithms have higher IE values than the other methods because uniformly distributed histograms have the largest information entropy; these two methods owe their high IE values to the uniformity of their grayscale distribution. The POSHEOC algorithm proposed in this paper has medium IE values and obtains better visual effects, since values of these metrics that are too high or too low result in overenhanced or underenhanced images. While the CLAHE-PL and POSHEOC algorithms produce similar visual effects, the objective metrics of the latter are better than those of the former, except for IE, which is slightly lower. This shows that the proposed method has a better enhancement effect.

3.2. Artistic Color Realization Model Analysis

The CMFGCT method was compared with other image restoration methods in terms of time consumption, and the experimental results are shown in Figure 5. From Figure 5, it can be seen that the Criminisi and Deng methods took less time, the Darabi method took the longest, and the CMFGCT method also consumed considerable time. This is because searching over multiple image resources takes a certain amount of time, and the time consumed increases dramatically when the images are large.

Although the proposed model achieves good reconstruction results, it requires the adjustment of two parameters, the shape parameter α and the scale parameter β, compared with other models. To analyze the effects of the two parameters on the reconstructed images, the reconstruction of 20 angles of MSL and 30 angles of NACT is examined in this paper. The PSNR variations of the reconstructed images for different combinations of shape and scale parameters over 6000 iterations are depicted in Figure 6. It can be seen from Figure 6(a) that the MSL reconstruction based on multiple angles achieves the highest PSNR when α = 1.1 and β = 8. In Figure 6(b), the highest PSNR for the NACT reconstruction based on 30 angles is achieved when α = 1.2 and β = 8.

3.3. Analysis of the Effect of Virtual Realization of Artistic Colors

In this paper, we compare the algorithm performance of different algorithms in artistic color virtual reality implementation, as shown in Figure 7. Figure 7(a) shows the comparison of different algorithm performance under scale and rotation changes, Figure 7(b) shows the comparison of different algorithm performance under viewpoint changes, and Figure 7(c) shows the comparison of different algorithm performance under noise changes.

As can be seen from Figure 7(a), the nonlinear KAZE algorithm and the improved KAZE algorithm exceed the other, linear algorithms in both the total number of matches extracted and the correct matching rate, with the SIFT algorithm second. In terms of running time, the SIFT algorithm, the KAZE algorithm, and the improved algorithm are higher than the other algorithms by nearly an order of magnitude. However, the improved KAZE algorithm improves the correct rate and shortens the running time by about one order of magnitude compared with the original KAZE algorithm. Thus, the improved KAZE algorithm not only extracts more matching points with a high correct rate but also offers better real-time performance. Therefore, under scale and rotation changes, the correct matching rate of the nonlinear algorithm is better than that of the linear algorithms, and the comprehensive performance of the improved algorithm is better than that of the original.

As Figure 7(b) shows, the nonlinear KAZE algorithm and the improved KAZE algorithm again exceed the other, linear algorithms in the total number of matches extracted and the correct matching rate. In running time, the SIFT, KAZE, and improved KAZE algorithms are significantly slower than the other algorithms. However, the improved KAZE algorithm raises the correct rate by nearly 1.67% over the KAZE algorithm, shortens the running time by about 80.5% relative to the KAZE algorithm, and processes faster than the SIFT algorithm. Under viewpoint change, therefore, the matching accuracy of the nonlinear algorithms is higher than that of the linear algorithms, and the overall performance of the improved KAZE algorithm is superior to that of the original KAZE algorithm.

Figure 7(c) shows that, in terms of noise immunity, the linear algorithms (BRISK, ORB, FREAK, SIFT, and SURF) each extract fewer than 300 total matching points with correct rates below 74.4%, whereas the nonlinear algorithms extract more than 780 total matching points with correct rates above 75.5%. This is because nonlinear diffusion filtering preserves object boundary information while smoothing noise, whereas Gaussian smoothing treats noise and detail alike and loses boundary information. Among the nonlinear algorithms, the improved KAZE algorithm raises the correct rate by nearly 3.51% over the original and reduces the processing time by nearly 72.1%. Although the nonlinear algorithms require longer running times than the others, the improved KAZE algorithm improves the correct rate by about 37.2% over BRISK, nearly 8.5% over ORB and FREAK, and nearly 16.3% over SIFT and SURF. The noise immunity of the nonlinear algorithms is therefore better than that of the linear algorithms, and the improved KAZE algorithm, being insensitive to noise changes, is the most robust and performs best.
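The edge-preserving behaviour attributed to nonlinear diffusion above can be seen in a minimal Perona-Malik iteration, the kind of scheme underlying KAZE-style nonlinear scale spaces. This is an illustrative sketch (periodic borders via `np.roll`, chosen for brevity), not the authors' implementation: the conductance vanishes at strong gradients, so smoothing acts inside regions but stalls at object boundaries, unlike Gaussian smoothing.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=20.0, step=0.2):
    """Minimal Perona-Malik anisotropic diffusion on a 2D float image.

    The conductance g = exp(-(|grad|/kappa)^2) approaches zero at strong
    edges, so noise is smoothed while boundaries are preserved.
    """
    u = img.astype(np.float64).copy()

    def g(d):
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # Finite differences to the four neighbours (periodic borders).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

For a sharp step edge far larger than `kappa`, the flux across the edge is essentially zero and the step survives intact, while small-amplitude fluctuations (below `kappa`) are diffused away.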

To assess the algorithms' performance under combined variations, this section uses two images captured by an actual camera, subject to scale and rotation changes, viewpoint changes, and image distortion. Figure 8 compares the performance of the different algorithms under these combined variations. It shows that the number of correct feature points extracted by the BRISK and improved KAZE algorithms stays below 100 under scale and rotation changes, viewpoint changes, and image distortion; the correct rates of the nonlinear KAZE and improved KAZE algorithms remain above 40%, while those of the other, linear algorithms fall below 35%. In running time, the improved KAZE algorithm is about 76% shorter than the KAZE algorithm while improving the correct rate by 3%. The combined-variation performance of the nonlinear algorithms therefore exceeds that of the linear algorithms, and the improved KAZE algorithm outperforms the KAZE algorithm in both running time and correctness.
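Running-time comparisons such as those in Figure 8 are typically collected with a simple wall-clock timing harness. The sketch below is a generic, hypothetical harness (the timed function is a placeholder, not the authors' measurement code); best-of-N timing is used to reduce the influence of transient system load.

```python
import time

def time_algorithm(fn, *args, repeats=5):
    """Best-of-N wall-clock timing, in seconds, for one call of fn(*args)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best
```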

In summary, the nonlinear image feature extraction and matching algorithms outperform the linear algorithms under scale change, rotation and scaling, viewpoint change, blur, illumination change, compression change, and image distortion, and both the nonlinear KAZE algorithm and the improved algorithm are more robust. Although the original algorithm can guarantee the correct rate, its processing is too time consuming to meet real-time requirements, while the improved algorithm both shortens the processing time and improves the matching accuracy. The improved nonlinear algorithm therefore has the best overall performance.

4. Conclusion

In this paper, an elastic net rank prior based on nonlocal self-similarity is proposed and used to solve the blind single-frame blurred-image recovery problem. Statistical analysis shows that the prior favours clear images over blurred ones, which helps avoid the tendency of certain traditional methods toward trivial solutions. For the blur-kernel estimation model containing mixed parametric terms, an efficient solution method is given, with a numerical convergence analysis on a benchmark test set. The method does not require the edge selection step that is critical in many existing methods, and experimental results on synthetic and real blurred image sets demonstrate its effectiveness. With the support of relevant theories, a method for cultivating artistic color design practice ability based on virtual reality technology is proposed by combining the characteristics of such cultivation with those of virtual reality technology, and a virtual evaluation system serving this cultivation is designed and produced under the guidance of this strategy. Developing a unified virtual reality standard means establishing common specifications for current virtual reality devices: at present, standards are not unified, the many control devices on the market follow different specifications, and devices cannot interconnect well. Technology is visible and the humanities are hidden, but it is always the hidden things that drive the development of the world; the power of the humanities is eternal. What we can do is combine the power of technology, the thinking of art, and the method of design to pave the way forward.

Data Availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Consent

Informed consent was obtained from all individual participants included in the study.

Conflicts of Interest

The authors declare that there are no conflicts of interest.


Acknowledgments

This work was supported by the Project of Social Sciences Federation of Henan Province (Explore the New Mission of Aesthetic Education in the New Era from the Perspective of Art Popularization, no. SKL-2019-1719) and the Educational Curriculum Reform Research Project of Henan Provincial Department of Education (Research and Practice of Chinese Traditional Art Curriculum for Professional Undergraduate Preschool Education Major, no. 2020-JSJYYB-107).



Copyright © 2021 Xiaojuan Xu and Jin Zhu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
