Abstract

This paper studies realistic three-dimensional models from two aspects, ink art style simulation and three-dimensional display technology, explores a three-dimensional display model for ink-style three-dimensional models, and verifies the feasibility of the model through experiments on a software development platform with auxiliary software. To address the problem of real-time rendering of large-scale 3D scenes in the model, an efficient visibility-culling method and a multiresolution fast rendering method were designed to realize the rapid construction and rendering of ink art 3D virtual reality scenes in a big data environment. A two-dimensional cellular automaton is used to simulate a brushstroke model with an ink and wash style, and outlines are drawn along the path of the brushstroke to obtain an effect close to the artistic style of ink and wash painting. The model surface is given ink-style brushstroke texture patterns, and the drawing effect is simulated by procedural texture mapping with reference to the depth map, normal map, and curvature map of the model. Example verification shows that the rapid visualization analysis model of ink art big data designed in this paper meets the prediction requirements of the ink art big data three-dimensional display indicators. The fast visibility-culling method achieves high efficiency on large-scale three-dimensional ink art virtual reality scenes in a big data environment, and the multiresolution fast rendering method preserves the appearance of the prediction model without major deformation.

1. Introduction

Three-dimensional display technology has developed rapidly, matured, and gradually become a part of people's lives. However, current stereoscopic display technology is mostly used to display realistic scenes and is rarely applied in nonrealistic fields. Ink art is one of the representative forms of painting, and using three-dimensional display technology to present ink art in three dimensions helps to show the artistic expressiveness of ink painting, express the artist's emotion, and convey the charm of oriental culture. Traditional visual effects based on two-dimensional planes are gradually becoming unable to meet people's needs for a higher quality of life [1], and more expressive three-dimensional displays are increasingly appearing in daily life; 3D games and 3D movies have begun to appear. In recent years, three-dimensional display technology has developed rapidly, and some general technical standards have been formed. This paper presents a three-dimensional display model for ink and wash style three-dimensional models: starting from a realistic three-dimensional model, a computer ink and wash simulation model is combined with a three-dimensional display to finally produce a three-dimensional image in the ink and wash style [2]. A three-dimensional display in the ink and wash style can more richly and intuitively convey the aesthetic appeal and thought of Chinese ink painting, and applying stereoscopic display technologies in nonrealistic fields is also an attempt at and exploration of stereoscopic display itself [3].

Ink style rendering belongs to the research field of nonphotorealistic rendering (NPR). Unlike photorealistic rendering, which pursues realistic, photograph-like results, nonphotorealistic rendering emphasizes the artistic effect and expressive meaning of the picture, generating images with an artistic style by simulating traditional painting styles and techniques such as oil painting, watercolor, pastel, pencil drawing, and ink painting. The research content of ink painting simulation roughly includes simulating the diffusion effect of ink on rice paper, using two-dimensional reference images or three-dimensional models to generate images or image sequences in the ink art style, and simulating brush input [4]. From an application point of view, ink and wash simulation can be divided into two main directions: ink and wash simulation drawing systems and real-time rendering in the ink and wash style. Ink and wash simulation drawing systems mainly use computer hardware and software to digitalize ink and wash painting for artistic work, providing artists with digital ink painting creation tools; real-time rendering of ink painting styles is more oriented toward automatic generation, producing ink-style images automatically from given input photos or 3D models [5].

The main research goal of this paper is to realize an ink and wash style rendering method for three-dimensional objects from the perspective of real-time rendering technology: on general consumer-grade computers, generate ink-style two-dimensional images from three-dimensional scene input, and make the rendering method meet real-time performance requirements so that it can render ink and wash animation sequence frames in real time. The first chapter is the introduction, which explains the research background and significance of this thesis and analyzes the aesthetics of and market demand for ink painting. The second chapter covers the construction and realization of three-dimensional data models of ink art. We analyze the current research status at home and abroad, study several block-based and pixel-based texture synthesis methods in detail, analyze and improve these models, and propose a parameter-based adaptive ink texture synthesis model [5]. The space colonization model is used to automatically extract the three-dimensional skeleton of a tree, the skeleton points are traversed to establish a skeleton point sequence and determine the topological structure of the branches, the branch thickness is calculated based on the pipe model, some skeleton points are pruned according to the included angle of adjacent skeleton line segments, generalized cylinders are determined by the retained key points, and finally a three-dimensional tree model is constructed. The third chapter is a detailed analysis of the research results. By applying big data analysis tasks to the model designed in this paper, visual analysis combining data and scenes is carried out to verify the feasibility and effectiveness of the model, and the ink and wash rendering effect is demonstrated and analyzed. Finally, the paper summarizes the work carried out on this topic, the proposed innovations, the remaining problems, the follow-up research plan, and the expectations and prospects for research on three-dimensional big data stereoscopic display of ink art.

2. Research on the Predictive Model of Ink Art Three-Dimensional Display Index

2.1. Related Work

Early research on ink and wash rendering mostly focused on the establishment of brush models, the structure of rice paper, and the principle of ink diffusion. For ink painting style rendering, existing stylized drawing models based on 3D models usually follow a similar framework: first extract the surface feature lines of the 3D model, including contour lines, fold lines, and borders; then stylize the extracted feature lines to obtain an artistic outline of the three-dimensional model; after the model is outlined, the interior of the model is shaded according to a lighting model to depict light, shade, and shadow [6]. Wang et al. proposed a dynamic balance model based on physical and chemical principles, which explains the diffusion characteristics of ink on rice paper [7]; they pointed out that ink diffusion is mainly driven by the interactions between water molecules and between water molecules and carbon molecules. Charrier and others believe that, in addition to intermolecular forces and gravity, the porosity of the capillary fiber structure in rice paper also affects the diffusion of ink [8]. Combined with the capillary fiber structure of rice paper, a filter model was proposed that determines the direction of ink flow according to the direction and gaps of the rice paper texture [9]. Wallis established a paper model with a grid structure by dividing the rice paper into multiple rectangular unit areas and calculating the fiber distribution in each rectangular unit [10], and simulated the ink diffusion effect with a cellular automaton. Schwab et al. proposed a fluid flow model based on the Lattice Boltzmann Equation (LBE) and used it to achieve a good ink diffusion effect in a real-time rendering system, but the method converges too slowly to be suitable for real-time rendering systems for 3D objects [11].

The simulation and prediction of ink is generally based on mixed simulation of ink particles and water particles, and their motion behavior among paper molecules is simulated for different properties such as ink color. The paper model is also an indispensable foundation for simulating the ink diffusion effect. Goodstadt et al. proposed an automatic calculation model for water and ink particles based on the movement of paper elements and applied it to rendering trees with different parameters [12]. Vempati et al. proposed using texture synthesis based on a probability model to render 3D tree models, and Yeh and Ming studied the automatic rendering of 3D animal models [13]. Shick et al. proposed a fluid simulation method based on the Lattice Boltzmann Equation to simulate ink diffusion [14]. Here, the constraint of joint sparsity across different tasks can provide additional useful information for the classification problem, because different tasks may favor different sparse representation coefficients, and joint sparsity may enhance the robustness of coefficient estimation. The color feature mainly represents the color information contained in the image and is more inclined to describe the overall characteristics of the image, which makes it a feature that cannot be ignored. Color and texture features can represent the information contained in an image well and are also the most frequently used features in the current classification and recognition of Chinese painting images. The diffusion movement of ink particles on paper is also a natural movement process, which follows certain physical diffusion laws and exhibits self-similarity [15]. Attarin used Perlin noise to realize the wrinkles of rice paper. Stroke-based rendering is a focus of nonrealistic rendering of computer images [16]; it can achieve rendering in cartoon, oil painting, sketch, and other styles. By imitating the painter's drawing process, elements such as the shape, color, and direction of the brush can be controlled to produce various artistic styles. This research takes trees as the research object, takes the lighting model into consideration, and uses a brushstroke-based drawing method to realize nonrealistic drawing of trees [17].

As noted in the introduction, stereoscopic display technology is mostly used for realistic scenes and is rarely applied in nonrealistic fields such as ink and wash art, even though displaying ink art in three dimensions helps convey the expressiveness of ink painting, the artist's emotion, and the charm of oriental culture. The model framework proposed in this paper is divided into two parts. The first part renders an ink-style image of the 3D model: starting from the realistic three-dimensional model, the parameters of the projection matrix are changed and an image containing binocular vision is obtained by rendering; an existing ink and wash simulation model is then used to give the image an ink and wash style. The second part processes the ink and wash style three-dimensional model to realize a stereoscopic display: first, based on the ink and wash style image, texture mapping is used to map the ink and wash image onto the surface of the three-dimensional model so that the model takes on the artistic effect of ink and wash; then the projection matrices of the left and right viewpoints are set, the 3D model is rendered for each viewpoint to obtain left-eye and right-eye images, and the stereoscopic display is completed.

2.2. 3D Model Construction Algorithm

The three-dimensional model is displayed on the screen, its rendering is completed, and a two-dimensional image is obtained; however, this image is observed from a single viewpoint, so when the ink image is mapped back onto the three-dimensional model, the resulting ink and wash model can only be viewed correctly from the original viewpoint, and holes appear once the viewpoint changes. Therefore, the first step in making a 3D model texture image is to render an image containing information from both viewpoints. The ink coloring method in this article is built entirely on the GPU programmable rendering pipeline wrapped by Unity; the relevant formula calculations are implemented through Unity Shader programming and are mainly executed in the vertex shader and fragment shader stages of the GPU pipeline. The ink color rendering flowchart is shown in Figure 1. In this paper, the three effects of hooking, scorching, and dyeing are realized in one pass, and the shading results of the three effects are then combined using alpha blending and output to the screen.
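To make the combination of the three effect layers concrete, the following minimal numpy sketch composites three per-pixel effect results with straight alpha blending; the layer names and alpha weights are illustrative assumptions standing in for the outputs of the hook, scorch, and dye passes, not the Unity shader itself.

```python
import numpy as np

def blend_ink_layers(hook, scorch, dye, alpha_hook=0.9, alpha_scorch=0.6, alpha_dye=0.4):
    """Composite three grayscale effect layers (H, W arrays in [0, 1]) with
    straight alpha blending, back to front: dye, then scorch, then hook.
    The layer names and alpha weights are illustrative assumptions."""
    out = np.ones_like(dye)                      # white paper background
    for layer, a in ((dye, alpha_dye), (scorch, alpha_scorch), (hook, alpha_hook)):
        out = (1.0 - a) * out + a * layer        # standard "over" blend
    return np.clip(out, 0.0, 1.0)

# Example: random stand-ins for the three per-pixel effect results.
rng = np.random.default_rng(0)
h, w = 64, 64
frame = blend_ink_layers(rng.random((h, w)), rng.random((h, w)), rng.random((h, w)))
print(frame.shape, float(frame.min()), float(frame.max()))
```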

In OpenGL, the 3D model is clipped in the canonical view volume (CVV), a cube whose x, y, and z values all lie in [−1, 1]. The method of using original training samples directly to form dictionaries is simpler, but the dictionary capacity grows with the number of training samples or categories, which increases the computational burden, decreases efficiency, and fails to express the redundant information of the original signal effectively, thereby lowering detection accuracy. Dictionaries obtained by learning are morphologically richer, structured, and discriminative, with better flexibility to adapt to different image data and obtain sparser representations. The depth value also lies in [−1, 1]. When the z value is transformed so that the auxiliary viewpoint can see the range covered by both binocular viewpoints, the z value is inverted; this means that after the 3D model is projected, vertices that were originally unoccluded become occluded, and vertices that were originally occluded become unoccluded. After the depth value z is inverted, the projection matrix becomes equations (1) and (2).
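For illustration, the numpy sketch below builds a standard OpenGL-style perspective matrix and negates the row that produces clip-space depth; it is only an illustrative stand-in for equations (1) and (2), whose exact form is not reproduced here.

```python
import numpy as np

def perspective(n, f, l, r, b, t):
    """Standard OpenGL perspective projection matrix (column-vector convention)."""
    return np.array([
        [2*n/(r-l), 0,          (r+l)/(r-l),   0],
        [0,         2*n/(t-b),  (t+b)/(t-b),   0],
        [0,         0,         -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0,         0,         -1,             0],
    ])

def flip_depth(P):
    """Negate the row that produces clip-space z, inverting the depth values so
    that formerly occluded vertices win the depth test and formerly unoccluded
    vertices lose it. Illustrative stand-in for equations (1) and (2)."""
    Q = P.copy()
    Q[2, :] *= -1.0
    return Q

P = perspective(n=0.1, f=100.0, l=-0.1, r=0.1, b=-0.1, t=0.1)
v = np.array([0.0, 0.0, -5.0, 1.0])                   # a point in view space
clip, clip_flipped = P @ v, flip_depth(P) @ v
print(clip[2] / clip[3], clip_flipped[2] / clip_flipped[3])   # NDC depths differ in sign
```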

In the ink stylization of images, the most important content is the simulation of ink brush strokes and of the ink diffusion effect. At this stage, simulation models of the ink diffusion effect are mainly based on physical modelling, reproducing the diffusion effect by considering the movement mechanism of ink particles, but physics-based methods are complex and time-consuming and are difficult to apply to the automatic conversion of images into the ink style.

When ink spreads on paper, the paper absorbs the ink and reduces the ink content, so the edge of a stroke gradually fades. By observing and analyzing real ink brush strokes, the simulation of the ink brush stroke effect is realized in three steps: first, generate the main diffusion area of the stroke; then, generate the diffusion edge area at the boundary of the main diffusion area; finally, calculate the local color fluctuation error from a sample image and use this error to adjust the color of the stroke to form a rough, textured effect. The calculation formula is shown in (3).
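A minimal sketch of these three steps is given below; the stroke path, brush width, and jitter amplitude are assumed parameters, and the random color fluctuation only stands in for equation (3) rather than reproducing it.

```python
import numpy as np

def simulate_stroke(path, width, canvas_shape, jitter=0.08, seed=0):
    """Sketch of the three-step stroke described above, under assumed parameters:
    (1) rasterize a main diffusion area around the stroke path,
    (2) grow a lighter, irregular edge band around it,
    (3) perturb the colour with a local fluctuation term standing in for equation (3)."""
    rng = np.random.default_rng(seed)
    ink = np.zeros(canvas_shape)

    # Step 1: main diffusion area -- stamp a disc at every path sample.
    yy, xx = np.mgrid[0:canvas_shape[0], 0:canvas_shape[1]]
    for (cy, cx) in path:
        ink = np.maximum(ink, ((yy - cy) ** 2 + (xx - cx) ** 2 <= width ** 2) * 1.0)

    # Step 2: edge diffusion area -- dilate the mask by one pixel and keep the ring.
    grown = np.zeros_like(ink)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            grown = np.maximum(grown, np.roll(np.roll(ink, dy, 0), dx, 1))
    edge = (grown - ink) * rng.uniform(0.3, 0.7, canvas_shape)   # irregular, lighter ring

    # Step 3: local colour fluctuation (stand-in for equation (3)).
    stroke = np.clip(ink + edge, 0, 1)
    return np.clip(stroke * (1.0 + jitter * rng.standard_normal(canvas_shape)), 0, 1)

stroke = simulate_stroke(path=[(32, x) for x in range(8, 56)], width=4, canvas_shape=(64, 64))
print(stroke.shape, round(float(stroke.max()), 3))
```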

The edge diffusion zone lies at the edge of the main diffusion zone and continues to diffuse outward to form an irregular edge. The decoder restores the data distribution of the region to be repaired based on the compressed feature information in the hidden space. During training, the encoder and decoder adjust their parameters according to the loss function, so that the encoder encodes and compresses the features common to the data and the decoder restores the data distribution of the region to be repaired from these encoded features. Using a method similar to cellular automata, starting from the edge pixels of the main diffusion zone, each pixel is treated as the basic unit for absorbing ink and absorbs ink in a certain proportion from the boundary pixels in its connected neighborhood.
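The absorption rule can be sketched as a simple cellular automaton update, as below; the absorption ratio, the 4-connected neighborhood, and the step count are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def diffuse_edge(ink, steps=5, absorb=0.15):
    """Cellular-automaton-style edge diffusion sketch: every cell absorbs a fixed
    proportion of ink from its 4-connected neighbours that still hold ink.
    The absorption ratio and step count are illustrative assumptions."""
    ink = ink.astype(float).copy()
    for _ in range(steps):
        # Sum of ink held by the four neighbours of each cell.
        neighbours = (np.roll(ink, 1, 0) + np.roll(ink, -1, 0) +
                      np.roll(ink, 1, 1) + np.roll(ink, -1, 1))
        gain = absorb * neighbours / 4.0          # ink absorbed from neighbours
        loss = absorb * ink                        # ink given away to neighbours
        ink = np.clip(ink + gain - loss, 0.0, 1.0)
    return ink

blot = np.zeros((32, 32))
blot[12:20, 12:20] = 1.0                           # main diffusion zone
print(diffuse_edge(blot)[11, 15] > 0)              # edge cells have picked up ink
```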

The depth of the ink in a superimposed area is related to the ink content of the superimposed strokes: the higher the ink content of the strokes, the darker the color of the superimposed area. Based on statistical analysis of the color change and color difference in overlapping areas, the calculation formula for the color change of the overlapping area is as follows:

The data and three-dimensional spatial information of the target image are obtained by scanning the real ink art image with an optical scanning device, and the three-dimensional model is reconstructed. Three-dimensional laser scanning mainly uses the principle of laser ranging to sample the surface of an object in space; it can measure the length, width, and height of the measured object and obtain the structural information of the real ink image in a noncontact manner, with the advantages of fast, real-time data acquisition, large data volume, and high precision. Considering that there are many three-dimensional ink art images and that ink strokes vary in thickness, using laser scanning equipment to acquire three-dimensional images of ink art and then constructing the model can present the model of the three-dimensional ink art image more accurately and completely.

2.3. Stereoscopic Display Index Prediction Model

The stereo display model is a mathematical model built on the principle of binocular stereo vision, and different scenes require different binocular stereo vision models to achieve the expected stereo effect. At this stage, two vision models are widely used: the parallel binocular stereo vision model and the convergent binocular stereo vision model. The two models are similar and describe two different situations when the eyes view a scene: the parallel binocular model simulates the case where the lines of sight of the two eyes are parallel when observing an object, while the convergent binocular model simulates the case where the lines of sight of the two eyes converge on the object. The prediction model for the three-dimensional display indicators of ink art three-dimensional big data constructed in this paper is shown in Figure 2.
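For illustration, the sketch below sets up the two camera configurations; the eye separation and fixation point are assumed values, not parameters taken from the model in Figure 2.

```python
import numpy as np

def _unit(v):
    return v / np.linalg.norm(v)

def binocular_cameras(eye_sep=0.065, target=(0.0, 0.0, -2.0), convergent=True):
    """Sketch of the two binocular configurations described above, with an assumed
    6.5 cm eye separation: in the parallel model both cameras look straight ahead
    along -z; in the convergent model both look at a shared fixation point."""
    target = np.asarray(target, dtype=float)
    left_pos = np.array([-eye_sep / 2, 0.0, 0.0])
    right_pos = np.array([+eye_sep / 2, 0.0, 0.0])
    if convergent:
        left_dir, right_dir = _unit(target - left_pos), _unit(target - right_pos)
    else:
        left_dir = right_dir = np.array([0.0, 0.0, -1.0])
    return (left_pos, left_dir), (right_pos, right_dir)

for mode in (False, True):
    (lp, ld), (rp, rd) = binocular_cameras(convergent=mode)
    print("convergent" if mode else "parallel", ld.round(3), rd.round(3))
```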

When the paper and ink model is established, the pore structure inside a paper element is abstracted into a support structure composed of smooth, regular capillaries. Data augmentation plays a large role in the training of neural networks: it adds multiple copies of a single image, improves the utilization of the image, and effectively prevents the network from overfitting to the structure of one image. Images contain a great deal of redundant information, and data augmentation can create different kinds of noise; if the neural network can overcome this noise, its generalization performance is bound to be good. Each capillary constituting the pore structure of a paper element has the same length and radius, the length of a capillary equals the width of the paper element, the capillaries are evenly distributed across the three dimensions, and the capillaries in each dimension are parallel to each other.
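A minimal data structure reflecting these assumptions about the pore structure might look as follows; the capillary counts and sizes are placeholders, not measured values.

```python
import math
from dataclasses import dataclass

@dataclass
class PaperElement:
    """Sketch of the pore structure described above: each paper element holds a
    bundle of identical capillaries, evenly split across the three axes. The
    counts and sizes below are illustrative assumptions, not measured values."""
    width: float                 # element width == capillary length
    capillary_radius: float
    capillaries_per_axis: int    # same number of parallel tubes along x, y and z
    ink_volume: float = 0.0      # ink currently held inside the pore space

    def pore_volume(self) -> float:
        # 3 axes * n tubes * (pi * r^2 * length), all capillaries identical.
        tube = math.pi * self.capillary_radius ** 2 * self.width
        return 3 * self.capillaries_per_axis * tube

    def remaining_capacity(self) -> float:
        return max(self.pore_volume() - self.ink_volume, 0.0)

cell = PaperElement(width=0.5, capillary_radius=0.01, capillaries_per_axis=20)
print(round(cell.pore_volume(), 6), round(cell.remaining_capacity(), 6))
```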

Paper penetration is the process by which ink flows from the surface of a paper element into its pore structure [18]. The model traverses the surfaces of all paper elements in the rice paper model; when ink exists on the surface of a paper element, the ink flows into the paper element from the paper surface under the influence of gravity.

In this process, the volume of ink flowing into the paper element from the paper surface at any time must not exceed the volume of the remaining pore space inside the paper element; that is, once the ink inside paper element i reaches a saturated state, the ink on the surface of paper element i stops flowing into it.
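This saturation constraint can be sketched as a simple per-step update, as below; the constant inflow rate is an assumption introduced only for illustration.

```python
def absorb_from_surface(surface_ink: float, element_ink: float,
                        pore_volume: float, inflow_rate: float = 0.2):
    """Sketch of the surface-penetration rule described above: per step a fraction
    of the surface ink flows into the element under gravity, but never more than
    the remaining pore space (saturation). The inflow rate is an assumed constant."""
    capacity_left = max(pore_volume - element_ink, 0.0)
    flow = min(inflow_rate * surface_ink, capacity_left)
    return surface_ink - flow, element_ink + flow

surface, inside = 1.0, 0.0
for _ in range(10):
    surface, inside = absorb_from_surface(surface, inside, pore_volume=0.5)
print(round(surface, 3), round(inside, 3))   # inside never exceeds the pore volume
```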

The ink used for diffusion is generally formed by mixing pure ink and water in a certain ratio; for convenience of simulation, the paper ink model regards the ink as an ideal fluid that obeys Newton's law of viscosity. The accuracy of spatial segmentation results obtained by an edge detection algorithm depends heavily on the postprocessing; in the face of a complex spatial domain, edge detection is arguably the most intuitive approach, and space segmentation based on edge detection is also one of the most studied methods. From the above analysis, the ink concentration is the main factor that affects the result of ink diffusion; in the paper ink model, the ratio of the volume of pure water to the volume of pure ink is used to describe the concentration of ink in a paper element.

Assuming that the ink in a paper element can only flow horizontally between adjacent paper elements, the net change in ink volume of a paper element equals the sum of the ink volumes flowing in and out through the boundary contact surfaces it shares with its four adjacent paper elements, namely,
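As an illustration of this conservation rule, the sketch below exchanges ink between 4-connected paper elements on a small grid; the exchange rate proportional to the concentration difference is an assumed form, not the equation referred to above.

```python
import numpy as np

def horizontal_exchange(ink, rate=0.1):
    """Sketch of the horizontal flow rule described above: each paper element
    exchanges ink only with its four neighbours, and its volume change is the sum
    of inflows minus outflows across the four contact faces. The exchange rate
    proportional to the concentration difference is an assumed form."""
    ink = ink.astype(float)
    delta = np.zeros_like(ink)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        neighbour = np.roll(ink, shift, axis)
        delta += rate * (neighbour - ink)          # inflow minus outflow per face
    return ink + delta

grid = np.zeros((5, 5))
grid[2, 2] = 1.0
after = horizontal_exchange(grid)
print(round(float(after.sum()), 6))                # total ink volume is conserved
```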

When simulating the fog effect, it is necessary to create nodes with the particle system and then write the specific parameters with the fog function, using the particle system as a node for scene management and organization to produce cloud and fog effects in the 3D terrain model [19]. To observe more fog effects in the experiment, the properties of the fog function can be set; the main related property settings are shown in Table 1.
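The attenuation computed by such a fog function can be illustrated with the standard exponential fog factors below; the density value is an assumption, and the specific property names of Table 1 are not reproduced here.

```python
import math

def fog_factor(distance: float, density: float = 0.05, mode: str = "exp") -> float:
    """Standard fog attenuation factors (1 = no fog, 0 = fully fogged); the density
    value is an illustrative assumption."""
    if mode == "exp":
        return math.exp(-density * distance)
    if mode == "exp2":
        return math.exp(-(density * distance) ** 2)
    raise ValueError("mode must be 'exp' or 'exp2'")

for d in (10, 50, 100):
    print(d, round(fog_factor(d), 3), round(fog_factor(d, mode="exp2"), 3))
```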

The key to texture mapping is to find the mapping relationship between the vertices of the 3D model and their texture coordinates. The ink-style image is obtained by applying ink-style processing after the 3D model is projected, so the mapping relationship is known from the projection, and texture mapping can then easily paste the image onto the surface of the 3D model. When rendering the 3D model to obtain the left-eye and right-eye images, the left and right viewpoint parameters need to be set. After the ink and wash style image is pasted onto the surface of the three-dimensional model, the model takes on the ink and wash effect. In the ink and wash scene, rendering the three-dimensional model from the left and right viewpoints respectively yields left-eye and right-eye images in the ink and wash style; after the two images are synthesized, the three-dimensional display effect of the ink can be viewed with the assistance of stereo glasses.
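The projection-based mapping can be sketched as follows: each vertex is projected with the same matrix used to render the ink-styled image, and the clip-space coordinates are remapped to texture space. The toy matrix and vertices are placeholders, not data from the paper.

```python
import numpy as np

def projective_uvs(vertices, proj_view):
    """Sketch of the mapping described above: project each model vertex with the
    same matrix used to render the ink-styled image, then remap clip coordinates
    from [-1, 1] to [0, 1] texture space. proj_view is an assumed 4x4 matrix."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])    # (N, 4)
    clip = homo @ proj_view.T
    ndc = clip[:, :2] / clip[:, 3:4]                              # perspective divide
    return 0.5 * (ndc + 1.0)                                      # uv in [0, 1]

# Toy example: three vertices and an identity "projection" stand-in.
verts = np.array([[0.0, 0.0, -1.0], [0.5, 0.5, -1.0], [-0.5, -0.25, -1.0]])
print(projective_uvs(verts, np.eye(4)).round(3))
```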

3. Results Analysis

3.1. Model Analysis

In terms of performance, we compared the frame rates of the two methods at different simulation resolutions. As shown in Figure 3, at resolutions of 512 and 1024 the method in this paper is slightly better than the seepage model, and both can meet real-time performance requirements. At a resolution of 2048, the frame rates of both the method in this paper and the percolation model fall below 60 fps, which cannot meet real-time requirements. The ink diffusion effect generated by the method in this paper has smoother color transitions inside the ink, and the intensity of the ink color changes more obviously with the ink concentration. The simulation results for the groups with ink volume fractions of 15 and 25 are almost consistent with those of the seepage model. In the group with an ink volume fraction of 50, the results of this paper show more obvious water erosion of the ink. In the groups with ink volume fractions of 70 and 90, the simulation results of this paper show that the carbon particles in the ink diffuse more uniformly without forming a clear black ring on the diffusion boundary, and the color of the ink after diffusion is lighter.

Figure 4 shows the performance of the proposed method under different rice paper structure resolutions and different numbers of diffusion simulation iterations. The method in this paper consists of two main parts: nonphotorealistic shading and ink effect simulation. The computational complexity of the shading part is directly related to the number of vertices of the input three-dimensional object, while the main performance cost of the ink effect simulation is concentrated in the seepage simulation, whose computational complexity depends on the resolution of the rice paper structure and the number of diffusion simulation iterations. Analysis of the data shows that both the rice paper structure resolution and the number of diffusion iterations have a large influence on rendering efficiency. The worsening of the error with depth is a side effect of continually deepening any deep network, and it is also the reason deep networks cannot be deepened and made more complex indefinitely; in this process, vanishing gradients make training impossible. In contrast, the number of model vertices has little effect on rendering efficiency, because the parallel structure of the GPU greatly optimizes the shading calculations. The main performance cost of the rendering method in this paper is therefore concentrated in the diffusion simulation. According to the data, at a rice paper structure resolution of 512, every group meets the frame rate requirements of real-time rendering; at a resolution of 1024, when the number of diffusion simulation iterations is less than 10, the rendering frame rate is about 80 fps, which can meet the performance requirements of real-time rendering.

The rendering method proposed in this paper meets real-time performance requirements. In terms of visual effects, it achieves a convincing smudging effect of ink and wash and also reflects certain characteristics of hand painting. This paper implements a real-time ink and wash rendering method that generates ink attribute information through shading based on nonphotorealistic rendering and uses a percolation model to simulate ink diffusion. The experimental results show that this method produces better ink color transitions and a better feathering effect between different shades of ink. In terms of performance, the method meets real-time rendering requirements.

3.2. Big Data Analysis

In practical big data applications, some 3D scenes can be extremely large; for example, the ink art 3D big data stereoscopic display index prediction model needs to be visualized in the 3D scene together with the ink art images. First, all parts except the generator are kept the same, and models at different stages are then selected to perform the repair task on the defective region separately, to verify whether the improved generator performs better than the unimproved one. In the current scene, the number of models, the number of objects, the total number of vertices, and the number of triangles are all at a large order of magnitude. The model in this paper is used to randomly select several viewpoints in this large-scale scene to test occlusion query efficiency, using 100,000 rays per occlusion query. The effectiveness of this model is compared with a voxel-based conservative model by testing the number of visible objects returned by each method's occlusion query against the actual number of visible objects. The results are shown in Figure 5: the accuracy of the model in this paper is relatively high, and in some cases its effectiveness is much higher than that of the voxel method.
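As an illustration of this kind of sampling-based occlusion query, the sketch below casts random rays from a viewpoint and keeps, per ray, the nearest object hit; bounding spheres stand in for the real scene geometry, and the ray count is reduced for the toy example.

```python
import numpy as np

def visible_objects(viewpoint, centers, radii, n_rays=100_000, seed=0):
    """Sketch of a sampling-based occlusion query: cast random rays from the
    viewpoint and record, per ray, the nearest bounding sphere it hits; the union
    of hit objects approximates the visible set. Bounding spheres stand in for
    the real scene geometry (an assumption)."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    visible = set()
    oc = centers - viewpoint                                   # (M, 3)
    for d in dirs:
        t_along = oc @ d                                       # projection onto the ray
        closest = np.linalg.norm(oc - np.outer(t_along, d), axis=1)
        hits = np.where((closest <= radii) & (t_along > 0))[0]
        if hits.size:
            visible.add(int(hits[np.argmin(t_along[hits])]))   # nearest hit occludes the rest
    return visible

centers = np.array([[0, 0, 5.0], [0, 0, 10.0], [5, 0, 0.0]])
radii = np.array([1.0, 1.0, 1.0])
print(visible_objects(np.zeros(3), centers, radii, n_rays=2000))   # the middle sphere is occluded
```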

Since a basic component model may be used many times and has multiple instances in the scene, parsing its ink art image data repeatedly would incur multiple I/O costs and increase memory usage, which is unreasonable and unnecessary. The moving objects in this video scene include pedestrians and people riding electric vehicles in addition to vehicles, but because of the shooting angle such moving objects are small and effective features cannot be extracted in the feature extraction stage; therefore, in the moving object detection stage, moving objects whose pixel area is too small are actively filtered out, and only large moving vehicles are detected. Different instances of the same model share the same stereo display indicators and differ only in their prediction results. Therefore, when reading the scene description file, if the same model appears repeatedly, the corresponding ink art image data should not be parsed again; the already-parsed data and its stereo display indicators held in memory are reused, and the prediction result information read at the same time is associated with the corresponding instance in the scene description file. This not only reduces the number of I/O operations but also reduces the number of prediction results maintained in memory. In the 3D scene model studied in this paper, the total number of model vertices in the scene is 576,141 and the total number of triangles is 1,757,096. The 3D model prediction efficiency of this paper under different indicators is shown in Figure 6.
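The reuse rule described above amounts to a per-model cache keyed by the model identifier; a minimal sketch is given below, where the loader function is a hypothetical placeholder for parsing the ink art image data.

```python
class ModelCache:
    """Sketch of the reuse rule described above: parse each distinct base model
    (and its stereo display indicators) once, and let every instance in the scene
    description share the parsed data instead of re-reading it. The loader
    function passed in is a hypothetical placeholder."""

    def __init__(self, load_fn):
        self._load_fn = load_fn      # e.g. parses ink-art image data from disk
        self._cache = {}
        self.io_calls = 0

    def get(self, model_id):
        if model_id not in self._cache:
            self._cache[model_id] = self._load_fn(model_id)
            self.io_calls += 1       # only the first instance pays the I/O cost
        return self._cache[model_id]

cache = ModelCache(load_fn=lambda mid: {"id": mid, "indicators": "..."})
for instance in ["tree", "tree", "rock", "tree", "rock"]:
    cache.get(instance)
print(cache.io_calls)                # 2 parses for 5 instances
```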

3.3. Forecast Result Analysis

After the texture block size k and the degree of overlap c are determined, ink texture synthesis is carried out according to the model steps in this article; texture synthesis experiments on different ink textures yield relatively ideal results. Figure 7 shows the best synthesis parameters and synthesis results corresponding to different ink texture samples. The optimal texture block size and overlap degree differ between texture samples; if the same parameters were used for all samples, the synthesis result would inevitably fail to produce texture images with good structure and strong diversity. This paper adopts a model that adaptively determines the synthesis parameters, and quantitative analysis guarantees that the parameters are determined scientifically, so the model can be applied to different texture samples with good experimental results.
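A minimal sketch of block-based synthesis with block size k and overlap c is given below; it synthesizes only a single row of blocks and chooses among a fixed number of random candidates, which is a simplification of the adaptive scheme described above.

```python
import numpy as np

def synthesize_row(sample, block=16, overlap=4, out_blocks=4, seed=0):
    """Minimal sketch of block-based synthesis with the parameters k (block size)
    and c (overlap) described above: candidate blocks are drawn from the sample,
    and the one whose left overlap strip best matches the already-synthesised
    strip is pasted next. Only a single row is synthesised, and the candidate
    count is an assumed simplification of the adaptive scheme."""
    rng = np.random.default_rng(seed)
    h, w = sample.shape

    def random_block():
        y, x = rng.integers(0, h - block), rng.integers(0, w - block)
        return sample[y:y + block, x:x + block]

    out = np.zeros((block, out_blocks * (block - overlap) + overlap))
    out[:, :block] = random_block()
    for b in range(1, out_blocks):
        x0 = b * (block - overlap)
        target = out[:, x0:x0 + overlap]                       # existing overlap strip
        candidates = [random_block() for _ in range(50)]
        errors = [np.sum((c[:, :overlap] - target) ** 2) for c in candidates]
        best = candidates[int(np.argmin(errors))]
        out[:, x0 + overlap:x0 + block] = best[:, overlap:]    # paste the non-overlap part
    return out

sample = np.random.default_rng(1).random((64, 64))
print(synthesize_row(sample).shape)
```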

Figure 8 shows that the model greatly improves the frame rate while greatly reducing the number of vertices and triangles in the mesh. When simplified to 0% (that is, the original, unoptimized state), the 3D scene is rich in detail, with a frame rate of 3.91 frames per second; when the simplification ratio is 70.01%, the frame rate rises to 15.5 frames per second with almost no loss of detail, and all kinds of equipment remain complete, so the scene optimization effect is best at this point. When the simplification ratio reaches 85.23%, much detail is lost, but the general shape of the main equipment in the scene can still be maintained; at this point the dynamically optimized frame rate reaches 18 frames per second, already relatively close to the human visual continuity threshold of 28 fps, which can meet the needs of human-computer interaction and scene roaming. In practical applications, the focus is usually not on observing the panorama but on a specific area and the equipment within it.

This article introduces texture synthesis methods and related algorithms, summarizes their advantages and disadvantages, and, based on a comparison of their effects on ink texture synthesis, chooses the block-based texture synthesis method, improves it, and designs a texture synthesis algorithm that adaptively determines the texture block size and the degree of overlap. Experiments show that the algorithm can be applied to different ink texture samples and that the synthesis effect is good.

4. Conclusion

This paper combines the current development of computer two-dimensional ink art simulation with three-dimensional display technology: starting from a realistic three-dimensional model, the model is stylized with ink, and three-dimensional display technology is applied to complete the stereoscopic display. When acquiring the ink-style three-dimensional model, projection is used to obtain a realistic image of the three-dimensional model containing the binocular field of view, an existing ink stylization model is then used to stylize the image, and finally texture mapping is used to paste the ink-style image onto the surface of the 3D model, achieving the effect of stylizing the 3D model with ink. We designed a system framework for rendering 3D models in the ink style: the ink and wash style rendering of the three-dimensional terrain is carried out through texture mapping, and the ink-wash rendering process is applied to the feature lines, achieving a good rendering effect. In this study, the 3D scene part only involves acceleration and optimization at the geometric level; the research on texture and lighting is not deep, and many problems remain to be studied. The focus of this research is acceleration and real-time performance; if ease of use is taken as the starting point, for example improving the realism of the scene, there is still much to study, such as a real-time ray tracing optimization method for ink and wash images based on big data to achieve a higher degree of simulation of the three-dimensional scene.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request [20].

Conflicts of Interest

The authors declare that they have no known conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.