Abstract

Information has always been an indispensable part of the development of human society, and extracting useful knowledge from massive amounts of data can effectively solve many real-life problems. At the same time, with the continuous improvement of modern technology and computer hardware, the demands placed on information systems keep rising, research is increasingly multidisciplinary, and scholars in the information disciplines have proposed ever more demanding theories and methods. As things constantly change and are updated in the course of social development, data mining technology has attracted more and more attention. This article first describes the basic theory of 3D reconstruction; it then analyzes big data platform technology, covering the big data platform architecture, the Hadoop distributed architecture, and the HBase nonrelational database; finally, it studies content-based video image feature extraction and, on this basis, the design of a big data image processing platform for 3D reconstruction models.

1. Introduction

Before constructing and processing data, the analyst needs to examine the dependencies in the original data. If the original variables are correlated to some degree, they are modeled as an object independent of the real world. Traditionally this is an equivalence relation: two or more separately existing objects are linked in some required way to another object, and in data construction and processing this is usually achieved by representing, describing, or expressing the properties of entities in a graphical language [1]. When the volume of data is large, traditional methods are often applied to scattered and fragmented information, but such scattered, low-quality, unstructured descriptions are difficult to handle by conventional means [2].

In data processing, meaningful (or apparently useless) information must be transformed into images that have a specific meaning and can be understood and used by people. An object-oriented model has been developed on the basis of big data technology. The approach counts and computes the results of a large number of user inputs and outputs and then chooses the appropriate way to reach a conclusion according to the situation [3, 4]. In this process, a visual language is used to transform the information; finally, according to the results of the data analysis, different objects are classified, modeled, and predicted. Research shows that even a simple model based on big data technology can achieve accurate interaction between pieces of information, and the method has been widely used both at home and abroad [5].

In the era of big data, people have gradually come to recognize the value of processing and exploiting information, and some scholars have begun to use image processing technology to solve practical problems; papers on large database research were already being published in the United States in the early 1980s. With the rapid development of computer hardware and software, networks, and other information technology, and the growing popularity of the Internet, many complex tasks have become simpler and more intelligent [6]. At the same time, many new approaches such as cloud computing have been proposed. Big data has become a new point of economic growth, and with the continuous development of computer and network information technology, higher requirements are placed on the information and Internet industries. In this context, it is particularly important to develop tools that meet the needs of the times, that can obtain relevant data accurately and quickly, and that can efficiently handle large volumes of complex data [7]. Dependency-based 3D modeling methods are widely used in many fields because they can express model relationships and attribute characteristics well. This paper builds on BP neural network theory, which can realize the conversion between 3D model information and physical information and can, to a certain extent, reflect the essential characteristics of things [8, 9].

2. 3D Reconstruction Model and Algorithm Research

The processing functions of the big data image processing system are mainly based on 3D reconstruction algorithms. With these algorithms, the system can not only build models of a variety of objects but also build an object model database and identify the objects appearing in video images. The system also uses image feature point detection and matching algorithms and a 3D morphable model algorithm.

2.1. 3D Object Model

Epipolar geometry depends only on the internal parameters and relative pose of the two cameras; it does not depend on the structure of the scene. The straight line connecting the centers of the two cameras is called the baseline, and the epipolar geometry between two views is the geometry of the intersection of the image planes with the pencil of planes having the baseline as its axis. A schematic diagram is shown in Figure 1.

The intersection of the line connecting the two camera centers with an image plane is called the epipole. Equivalently, the epipole is the image in one view of the camera center of the other view; it is also the vanishing point of the baseline direction. Any plane containing the baseline is called an epipolar plane, and these planes form a one-parameter family [10].

The algebraic representation of epipolar geometry is called the fundamental matrix. The algebraic derivation of the fundamental matrix is given below.

For a point $x$ in the first image, the back-projected ray is the one-parameter family of solutions of $PX = x$, which can be written as

$$X(\lambda) = P^{+}x + \lambda C,$$

where $P^{+}$ is the pseudoinverse of $P$, that is, $PP^{+} = I$, and $C$ is the center of the first camera, that is, the null vector of $P$, defined by $PC = 0$. The ray is parameterized by the scalar $\lambda$. Two special points on the ray are $P^{+}x$ (when $\lambda = 0$) and the center of the first camera $C$ (when $\lambda = \infty$). These two points are projected by the second camera $P'$ to the points $P'P^{+}x$ and $P'C$ in the second view [11]. The line joining these two projected points is the epipolar line

$$l' = (P'C) \times (P'P^{+}x).$$

The point $e' = P'C$ is the epipole of the second image, that is, the projection of the center of the first camera. The epipolar line can therefore be written as

$$l' = [e']_{\times}(P'P^{+})\,x = Fx.$$

Solving for the fundamental matrix gives

$$F = [e']_{\times}\,P'P^{+}.$$

The essential matrix can be regarded as the specialization of the fundamental matrix to normalized image coordinates; conversely, the fundamental matrix removes the assumption that the cameras are calibrated. Compared with the fundamental matrix, the essential matrix has fewer degrees of freedom but satisfies additional constraints. Using normalized image coordinates $\hat{x} = K^{-1}x$ and $\hat{x}' = K'^{-1}x'$, the essential matrix can be computed directly from

$$\hat{x}'^{\top}E\hat{x} = 0, \qquad E = [t]_{\times}R.$$

Alternatively, the essential matrix can be computed from the fundamental matrix as

$$E = K'^{\top}FK.$$
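To make the derivation concrete, the following minimal NumPy sketch (not code from the paper) builds $F = [e']_{\times}P'P^{+}$ from two projection matrices and checks the epipolar constraint on a synthetic correspondence; the camera parameters are arbitrary assumptions.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def fundamental_from_cameras(P, P2):
    # The camera center C is the null vector of P (P C = 0).
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    e2 = P2 @ C                               # epipole in the second view
    return skew(e2) @ P2 @ np.linalg.pinv(P)  # F = [e']_x P' P^+

# Two synthetic cameras (K, R, t chosen arbitrarily for this sketch).
K = np.diag([800.0, 800.0, 1.0])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])

F = fundamental_from_cameras(P, P2)
X = np.array([0.2, -0.1, 3.0, 1.0])           # a 3D point, homogeneous coordinates
x, x2 = P @ X, P2 @ X
print(x2 @ F @ x)                             # epipolar constraint, close to 0
```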

2.2. Detection and Matching Algorithms of Image Feature Points

In the entire 3D reconstruction pipeline, the detection and matching of feature points is one of the key steps and also one of the steps that limits reconstruction efficiency. The feature point detection algorithm used in this paper differs according to the reconstruction method [12]. In methods based on structure from motion, detectors such as SIFT, SURF, or ORB are effective; in reconstruction based on a deformable (morphable) model, alignment relies on detecting a set of specific, predefined key points.

Traditional feature point detection algorithms generally use SIFT, SURF, or ORB. This paper mainly uses SiftGPU for feature point detection. The SIFT algorithm was proposed in 2004 and is scale-invariant in a multiscale space. SURF is an improved version of SIFT that replaces the construction of the SIFT scale space with integral images, which greatly reduces the amount of computation. SiftGPU reimplements SIFT on the GPU to speed it up.

SIFT feature vector matching refers to computing the similarity between the SIFT descriptors of two images. For each local feature point of the first image, its nearest neighbor in the feature point set of the image to be matched is found. The similarity of two 128-dimensional descriptors $p$ and $q$ is measured by the Euclidean distance

$$d(p, q) = \sqrt{\sum_{i=1}^{128}(p_{i} - q_{i})^{2}}.$$

At the same time, in order to eliminate feature points that have no true match because of background clutter or occlusion in the image, the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is used to reject mismatches:

$$\frac{d(p, q_{1})}{d(p, q_{2})} < T,$$

where $q_{1}$ and $q_{2}$ are the nearest and second-nearest neighbors of $p$ and $T$ is a fixed threshold.
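As an illustration of this matching step, the following hedged sketch (assuming OpenCV 4.4 or later, where SIFT is available in the main module; the image paths and the 0.8 ratio are illustrative) detects SIFT key points and applies the nearest-neighbor ratio test described above.

```python
import cv2

def match_sift(path1, path2, ratio=0.8):
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)   # 128-D descriptors
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matcher with Euclidean (L2) distance, two nearest neighbors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)

    # Keep a match only if the nearest neighbor is clearly better than the
    # second-nearest one (the ratio test above).
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp1, kp2, good
```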

The detection algorithm based on the alignment of specific key points needs to be trained on a particular class of objects to learn the structural or regional characteristics of that class. This article mainly considers the detection of facial key points. For a given class of objects, a key point sequence applicable to that class must be designed; for the face, for example, the 68 landmark points commonly used at home and abroad are adopted [13].

For a specific object class, the feature points are defined in advance, so no feature matching is needed; only a series of accuracy estimates is made on the basis of detection, and points with low accuracy are discarded. Face alignment methods can be divided into three categories according to whether individual points are detected locally or the whole shape is estimated: local-based methods, holistic-based methods, and hybrid methods.

Local-based methods include feature-point response methods and constrained local models. These methods are computationally intensive, and it is difficult to balance global constraints with local responses. Holistic methods mainly use cascaded regression and coarse-to-fine refinement. The cascade approach is similar to the traditional AAM algorithm: the coordinates of the feature points are concatenated in turn to describe the face shape, and the estimate is refined iteratively, step by step, to obtain the final result.

Since every mesh model has the same topology, with the same number of vertices, corresponding coordinates, and corresponding RGB colors, each mesh with $n$ vertices can be written as a shape vector

$$S = (x_{1}, y_{1}, z_{1}, \ldots, x_{n}, y_{n}, z_{n})^{\top},$$

with an analogous texture vector for the RGB values.

Principal component analysis is applied to the data matrix formed by the $m$ example meshes, and $m-1$ eigenvectors $s_{i}$ are obtained. Their corresponding variances are $\sigma_{i}^{2}$, and the average model is $\bar{S}$ [14]. Therefore, any human face can be approximated as a linear combination of these modes of variation:

$$S = \bar{S} + \sum_{i=1}^{m-1}\alpha_{i}s_{i}.$$
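This linear combination can be evaluated directly in code. The NumPy sketch below uses randomly generated placeholders for the mean shape and the PCA modes purely to illustrate $S = \bar{S} + \sum_i \alpha_i s_i$; the mesh size and scan count are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, m = 5000, 200                         # hypothetical mesh size, scan count

S_mean = rng.normal(size=3 * n_vertices)          # average shape, length-3n vector
modes = rng.normal(size=(3 * n_vertices, m - 1))  # m-1 PCA eigenvectors s_i
sigma = np.linspace(1.0, 0.01, m - 1)             # per-mode standard deviations

alpha = rng.normal(size=m - 1) * sigma            # shape coefficients alpha_i
S = S_mean + modes @ alpha                        # S = S_mean + sum_i alpha_i s_i
vertices = S.reshape(n_vertices, 3)               # back to (x, y, z) per vertex
print(vertices.shape)
```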

Assuming that the three-dimensional morphable model is generated from $m$ scans, only $m-1$ principal components are available for modeling. When the number of scanned models is relatively small, accurate reconstruction cannot be obtained. The most direct remedy is to acquire more scan data, but this is often impractical. Another approach is to split the model into separate, nonoverlapping regions such as the nose, mouth, and eyes; splitting the model increases its flexibility but makes the fitting process more complicated and time-consuming.

The $i$th three-dimensional vertex of the deformable model is $v_{i} = (x_{i}, y_{i}, z_{i})^{\top}$, and the texture of the corresponding point is $t_{i} = (r_{i}, g_{i}, b_{i})^{\top}$. Both can be transformed into a two-dimensional coordinate system.

In order to form a texture image, the three-dimensional vertices must be mapped from the reference frame onto the two-dimensional image, which is done with a projection matrix. In this paper, cameras are divided into orthographic, weak-perspective, and perspective cameras. In each case, the camera matrix contains the camera's rotation about the three axes, together with a scaling and a translation; for example, under scaled orthographic projection a vertex $v_{i}$ is mapped to

$$p_{i} = s\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\end{pmatrix}R\,v_{i} + t,$$

where $R$ is the rotation matrix, $s$ the scale, and $t$ the two-dimensional translation.

According to the lighting model, the model texture is affected by the number, direction, and color of the light sources, which change the pixel values; the model texture therefore has to be constructed according to a lighting model. There are two main approaches: physical modeling and statistical modeling. Physical modeling comes from the field of graphics and models the lighting effect in an intuitive and very accurate way. Statistical modeling regards Lambertian illumination as a low-pass filter and treats the high-frequency components as shadow effects. This article mainly models the face, so the pixel value at any point of the face model is expressed as

$$I(x) = \rho(x)\,L_{a} + \rho(x)\,L_{d}\max(\langle n(x), l\rangle, 0) + k_{s}(x),$$

where $L_{d}$ and $l$ are the intensity and direction of the light source, $n(x)$ is the surface normal, $\rho(x)$ is the reflectivity, $L_{a}$ is the intensity of the ambient light, and $k_{s}(x)$ is the simulated specular highlight. The schematic diagram of the lighting model is shown in Figure 2.
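The pixel model above can be sketched as per-vertex shading. In the following NumPy sketch, the variable names and the default ambient and diffuse intensities are illustrative assumptions, not values from the paper.

```python
import numpy as np

def shade_vertices(normals, albedo, light_dir,
                   L_ambient=0.2, L_diffuse=0.8, specular=0.0):
    """normals: (n, 3) unit normals; albedo: (n,) reflectivity per vertex."""
    l = light_dir / np.linalg.norm(light_dir)   # unit light direction
    lambert = np.clip(normals @ l, 0.0, None)   # max(<n, l>, 0) per vertex
    return albedo * (L_ambient + L_diffuse * lambert) + specular

# Example: a single upward-facing vertex lit from directly above.
print(shade_vertices(np.array([[0.0, 0.0, 1.0]]),
                     np.array([0.9]),
                     np.array([0.0, 0.0, 1.0])))
```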

2.3. Analysis of Video Images Based on Big Data Platform

The core of video image analysis on a big data platform is the platform itself and the extraction of features from the video images. The platform mainly uses a master-slave architecture or a P2P architecture; the database is the nonrelational HBase database, whose large storage capacity makes it possible to store a large number of videos. Video image extraction mainly uses content-based feature extraction, which can not only extract features effectively but also compare them with the video database for video recognition, as described below.

2.4. Analysis of Big Data Platform Architecture

A typical example of the master-slave architecture is Hadoop's HDFS distributed file system, whose architecture is shown in Figure 3. An HDFS cluster has only one NameNode and multiple DataNodes; the NameNode acts as the master and the DataNodes as slaves. For the whole cluster, HDFS provides users with a single namespace through the NameNode, which manages all file system metadata and all file read and write access. Each DataNode is associated with a different physical host; it manages the data stored on that node and periodically reports its block information to the NameNode [15–17].

In this architecture, in order to ensure the reliability of data storage, the NameNode manages block replication across the DataNodes, so that every block has replicas on several DataNodes; as a result, the failure of a single DataNode does not affect the normal operation of the entire cluster.

In this master-slave architecture, since the central master node manages all read and write operations of the system, read and write behavior is stable. However, handing read and write control to a single central master means that the entire system stops operating when the master goes down, which reduces availability. Moreover, because the metadata of the entire cluster is stored on the master node, the performance of the master node limits both the scale of the cluster and the performance of the whole system [18].

A typical P2P distributed architecture is the Cassandra system, a fully peer-to-peer architecture based on consistent hashing; its architecture is shown in Figure 4. In a Cassandra cluster, there is no notion of a master node, and every node plays the same role. Through the gossip protocol, the nodes communicate peer to peer and form a ring, which completely avoids the instability that a single central point would bring to the system. Cassandra uses multiple data centers to improve cluster availability, but it sacrifices strong consistency and can only provide eventual consistency for the cluster [19].

The P2P-based distributed architecture uses consistent hashing to avoid large changes in the mapping relationship when nodes are added or removed. The consistent hashing scheme is shown in Figure 5. In the consistent hash ring, if the hash value of a file to be stored falls between two nodes, the file is placed on the first node encountered clockwise. Since every file and every node obtains a fixed value from the hash function, each has a fixed position on the ring. When nodes are added to or removed from the ring, only part of the mapping changes. If a node 5 is added between node 2 and node 4, only the files whose hashes fall between node 2 and node 5 are affected: their data moves from node 4 to node 5. If node 2 is removed from the ring, only the data mapped between node 1 and node 2 is affected: it is remapped from node 2 to node 4 [20].
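A minimal consistent-hash-ring sketch in Python (illustrative only, not Cassandra's implementation) shows why adding or removing a node remaps only the keys in the affected arc of the ring.

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes=()):
        self._ring = []                       # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        bisect.insort(self._ring, (self._hash(node), node))

    def remove_node(self, node):
        self._ring.remove((self._hash(node), node))

    def get_node(self, key):
        """Return the first node clockwise from the key's position."""
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, ""))
        return self._ring[idx % len(self._ring)][1]

ring = HashRing(["node1", "node2", "node4"])
print(ring.get_node("video_frame_0001.jpg"))
ring.add_node("node5")      # only keys hashing into the new node's arc move
print(ring.get_node("video_frame_0001.jpg"))
```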

Hadoop is a software framework developed by the Apache Software Foundation for the distributed processing of large amounts of data. By design, it scales from a single server to clusters of thousands of machines, each of which performs computation and storage locally. Hadoop mainly includes the Hadoop Common, YARN, HDFS, and MapReduce modules. In addition, Hadoop has a complete ecosystem, including the data warehouse tool Hive, the column-oriented database HBase, and Pig, a platform for analyzing and evaluating large data sets.

HBase is a distributed column-oriented storage system built on HDFS and modeled after Google Bigtable. It is a nonrelational key-value store that can hold both structured and unstructured data [21] and provides random access with high concurrency, strong real-time performance, and good scalability. The storage structure of HBase is shown in Figure 6.
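As a sketch of how extracted image features could be written to and read back from HBase from Python, the following uses the third-party happybase client over the HBase Thrift gateway; the table name, column family, and row-key scheme are assumptions chosen for illustration.

```python
import happybase
import numpy as np

def store_frame_features(frame_id: str, descriptors: np.ndarray):
    connection = happybase.Connection("localhost")    # Thrift gateway address
    table = connection.table("video_features")        # assumed table name
    table.put(
        frame_id.encode(),
        {
            b"cf:descriptors": descriptors.astype(np.float32).tobytes(),
            b"cf:count": str(descriptors.shape[0]).encode(),
        },
    )
    connection.close()

def load_frame_features(frame_id: str) -> np.ndarray:
    connection = happybase.Connection("localhost")
    row = connection.table("video_features").row(frame_id.encode())
    connection.close()
    data = np.frombuffer(row[b"cf:descriptors"], dtype=np.float32)
    return data.reshape(-1, 128)                      # 128-D SIFT descriptors
```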

2.5. Content-Based Video Image Feature Extraction

SIFT feature extraction algorithm: SIFT (scale-invariant feature transform) is a local feature descriptor that was proposed in 1999 and summarized in its complete form in 2004. SIFT describes a local region of an image and produces features that are invariant to rotation, scale change, and brightness change between images, so matching can be performed under translation, rotation, and similar transformations, and the algorithm is stable. However, in large regions of similar texture, SIFT still produces many false matches when Euclidean distance is used for feature point matching. The Euclidean distance here is simply the straight-line distance between two descriptor vectors in feature space; it reflects changes in relative position, but its discriminative power is limited in such regions. Researchers therefore combine SIFT with the NCC (normalized cross-correlation) algorithm to further verify matches in large areas of similar grayscale, using the NCC coefficient as a measure to eliminate mismatched points.

In addition, the traditional SIFT algorithm uses a square window, and its feature description vector has 128 dimensions, which makes the algorithm complex; the number of feature points obtained is also very large, so the algorithm is time-consuming and not real-time. In image processing, the amount of information obtained is large, timeliness is poor, and the accuracy of analyzing complex images is low. To address this, a method combining image reconstruction and recognition, feature extraction, and a classifier has been proposed on the basis of SIFT, which can efficiently process and describe mathematical models of complex images and thereby improve accuracy. The method takes pixels as its core, achieves global optimization, and quickly reflects the distribution of real-world information. Furthermore, when SIFT is applied to large numbers of images, the lack of a way to measure the relationship between the relevant parameters and the model data means that the sampling of the sample set strongly influences the modeling results; a big data modeling method is therefore adopted, in which a model is built and the relevant parameters and measured sample set are used to obtain effective information and improve modeling accuracy to some extent.

The essence of the SIFT algorithm is the extraction of SIFT key points from an image: points are compared across different scales and locations to search for key points. Feature point extraction in SIFT can be divided into four steps: scale space construction, key point localization, key point orientation assignment, and key point descriptor generation.
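The first of these steps, scale space construction, can be sketched as follows: one octave of Gaussian-blurred images and their differences (DoG) is built with OpenCV. The parameter choices follow common defaults and are assumptions rather than settings from the paper.

```python
import cv2
import numpy as np

def dog_octave(gray, num_scales=5, sigma0=1.6):
    """Build one octave of Gaussian images and their difference-of-Gaussians."""
    k = 2 ** (1.0 / (num_scales - 3))          # scale multiplier between levels
    gaussians = [cv2.GaussianBlur(gray, (0, 0), sigma0 * (k ** i))
                 for i in range(num_scales)]
    dogs = [g2.astype(np.float32) - g1.astype(np.float32)
            for g1, g2 in zip(gaussians, gaussians[1:])]
    return gaussians, dogs

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame
if img is not None:
    _, dogs = dog_octave(img)
    print(len(dogs), "DoG layers")   # extrema of these layers are key point candidates
```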

The traditional SIFT algorithm uses a square window, and its 128-dimensional feature description vector makes the algorithm complex. Researchers have therefore proposed an improved SIFT algorithm whose descriptor uses a circular window and whose feature vector is reduced from 128 to 48 dimensions. The improved algorithm retains good rotation invariance while reducing the complexity of the original algorithm and improving matching speed.

Because the number of feature points obtained by SIFT is very large, the algorithm is time-consuming. Researchers therefore proposed an improved SIFT feature extraction and matching algorithm accelerated on the GPU; while maintaining matching accuracy, its speed advantage grows as image complexity increases. In addition, to address the non-real-time nature of SIFT, a marker-free tracking method based on hybrid SIFT+KLT features has been proposed, which effectively solves the real-time problem of the SIFT algorithm.

The advantages of the SIFT algorithm are as follows: (1) because sample points exhibit both similarity and heterogeneity, the resulting data sets can be modeled and analyzed; (2) the feature parameters selected when building the model can be used as a training set, so that if a neural network is applied to images with similar attributes, the same size, or a high degree of similarity and good discrimination, images of the same type can be classified and identified by labeling, merging, and similar operations.

Locality sensitive hashing: the locality sensitive hashing (LSH) algorithm is a very commonly used method for approximate nearest-neighbor queries on high-dimensional vectors. Its basic idea is to choose hash functions under which similar items collide with high probability, so that vectors that are close in the original space are mapped into the same bucket while dissimilar vectors are separated. The algorithm can effectively handle some nonlinear problems, mitigates the dimensionality catastrophe caused by high-dimensional vectors, and reduces the time complexity of the nearest-neighbor problem to nearly linear. It is often used to judge the similarity of text, video, and images.

However, this method has some drawbacks:

First, the traditional locality sensitive hashing algorithm requires each bucket to store only one piece of data, which greatly reduces retrieval speed, and the algorithm cannot be used directly on real numbers, only on nonnegative integers. Researchers therefore proposed a locality sensitive hashing algorithm based on Hamming distance. Compared with the traditional algorithm, it does not require each bucket to store only one item, which greatly improves retrieval speed; it converts the distance calculation on the original data into a Hamming distance calculation by encoding the data coordinates as sequences containing only 0 and 1. Because the traditional algorithm still cannot be applied directly to real numbers, researchers further proposed a locality sensitive hashing algorithm based on $p$-stable distributions, which can be used directly on real-valued data.
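A minimal sketch of the p-stable variant (E2LSH-style hashing with Gaussian, i.e. 2-stable, projections, h(v) = floor((a·v + b)/w)) is given below; the number of hash functions and the bucket width w are illustrative assumptions.

```python
import numpy as np

class PStableLSH:
    def __init__(self, dim, num_hashes=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(num_hashes, dim))   # Gaussian is 2-stable
        self.b = rng.uniform(0.0, w, size=num_hashes)
        self.w = w
        self.buckets = {}                             # signature -> list of keys

    def signature(self, v):
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def add(self, key, v):
        self.buckets.setdefault(self.signature(v), []).append(key)

    def query(self, v):
        """Candidate neighbors: items that fall into the same bucket."""
        return self.buckets.get(self.signature(v), [])

rng = np.random.default_rng(1)
lsh = PStableLSH(dim=128)
base = rng.normal(size=128)
lsh.add("frame_a", base)
print(lsh.query(base + 0.001 * rng.normal(size=128)))  # should find "frame_a"
```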

Second, the hashed representation cannot fully reflect the real situation: when image quality is poor and texture information is seriously lost, the image texture cannot be described accurately. Researchers therefore propose an improved modeling method that constructs a model reflecting the attribute types and feature distribution of the big data objects and accurately describing these attributes and their relationship with the original sample points, which improves the usefulness of the locality sensitive hashing algorithm in practical applications.

Finally, practical problems often require a large and accurate sample to handle data with complex objects or attribute types, and the sheer number of raw data points further limits the practicality of locality sensitive hashing. To address this, researchers propose the same kind of modeling approach described above, which reflects the distribution of attribute types and features of large data objects and accurately describes these attributes and their relationships with the original sample points, thereby improving the practicality of the algorithm.

The LSH algorithm has the following advantages: (1) it has little impact on the reconstruction of the original image; (2) its accuracy is relatively high; (3) the equivalent coefficients are close to zero, there is no obvious distortion or tendency to degrade, and more accurate 3D data can be obtained by processing the original image; (4) a complete dependency-degree model can be realized using a BP neural network and Python programming.

Disadvantages of the locality sensitive hashing (LSH) algorithm: the number of data points to be extracted for the local data model is too large, which leads to excessive computation or overly long computing time. Researchers have therefore used the ABVQI algorithm based on rough set theory for modeling and analysis, together with a big data image processing method based on a coefficient-dependent 3D reconstruction model, which alleviates the distortion and long computation times caused by the large amount of local data.

Extraction of video content features: in the shot-based method, a 3D model is obtained by digitally processing a 2D image to obtain its projection on the computer, then using the camera to obtain the corresponding data, and finally converting the obtained information into a graphical image. This method requires extensive computation of the distance between each pixel and the focal-distance error value.

However, the shot-based method selects only one key frame per shot and cannot handle shots with intense motion. Researchers therefore proposed a shot boundary detection method based on a "dual threshold" scheme, which uses two different thresholds on the color histogram difference to detect abrupt cuts and gradual transitions. This effectively handles shots containing strong motion.
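A hedged Python sketch of such a dual-threshold detector on per-channel color-histogram differences is given below; the two thresholds are illustrative assumptions rather than values from the paper.

```python
import cv2
import numpy as np

def hist_diff(frame_a, frame_b, bins=32):
    """Sum of absolute differences between normalized per-channel histograms."""
    diff = 0.0
    for ch in range(3):
        ha = cv2.calcHist([frame_a], [ch], None, [bins], [0, 256]).ravel()
        hb = cv2.calcHist([frame_b], [ch], None, [bins], [0, 256]).ravel()
        diff += np.abs(ha / ha.sum() - hb / hb.sum()).sum()
    return diff

def detect_shot_boundaries(video_path, t_high=1.2, t_low=0.4):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    boundaries, gradual_acc, idx = [], 0.0, 0
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        d = hist_diff(prev, frame)
        if d > t_high:                        # abrupt cut
            boundaries.append(idx)
            gradual_acc = 0.0
        elif d > t_low:                       # possible gradual transition
            gradual_acc += d
            if gradual_acc > t_high:
                boundaries.append(idx)
                gradual_acc = 0.0
        else:
            gradual_acc = 0.0
        prev = frame
    cap.release()
    return boundaries
```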

Content-based analysis is an approach to describing and explaining things: it integrates multiple sources of information and multidimensional variable relationships to process large amounts of data and obtain useful results. Its main advantages are as follows: (1) effective modeling is achieved by feeding the original image into the construction of a complete and ordered structural model; (2) once a matrix of multiple attribute values is established, the required results can be obtained easily, and the linkage between attributes is stronger than when a single algorithm is used alone; at the same time, content-based analysis better reflects the essential characteristics of things, making it possible to recognize their essence and to simplify complex problems so that they are easy to understand. Two image processing methods commonly used in reconstruction-oriented research are introduced here: superposition and collision. Superposition enhances image contrast by exploiting the same or similar distances between corresponding pixels across different scales, shapes, sizes, or coordinate systems; collision uses certain algorithms to score and correct the original image and then produces a gray, bright image from it, thereby improving photo quality.

However, the key frames selected by content-based analysis may not be representative, and too many key frames are easily selected when the camera moves. Researchers therefore proposed a motion-based method that selects an appropriate number of key frames according to the structure of the shot.

2.6. Big Data Image Processing Platform Architecture

This platform provides the distributed storage services required for massive video image data and, on this basis, processes and retrieves the video image data. As shown in Figure 7, the big data processing platform is divided into three parts: video image collection, the distributed cluster, and video image retrieval.

In the video image big data processing platform, the distributed cluster is the core component; it is mainly composed of the HDFS file system in Hadoop, the MapReduce computing framework, and the nonrelational HBase database, as shown in Figure 8. All the storage and computing functions of the platform are carried out by the distributed cluster.

3. Experiment and Analysis

3.1. Experimental Platform

This experiment mainly uses MATLAB, applying a substantial amount of mathematical theory to analyze the 3D modeling of big data across different regions, times, and spatial dimensions.

(1) Time dimension description: MATLAB is used to establish a mapping relationship between the left-to-right coordinate axis and the direction map, and the time-domain model method is used to obtain the size and distance of the corresponding area when each element in the image is related to the location of the point information.

(2) Spatial distribution description: SPSS 22.0 is used to model the data sets of different regions and different time periods and to generate different three-dimensional big data tables. Spatial analysis of the model with SPSS 22.0 shows that, across regions and time periods, the sample set splits into two categories on the time scale: one is a big data table on the time scale, including time series, basic variables, and boundary conditions; the other reflects the fact that different regions have different requirements for image quality. When extracting for different regions with SPSS 22.0, the offline averaging method and a sample set based on statistical analysis are used for modeling; the resulting model is representative, has high similarity to the reference model, and satisfies the required characteristic attributes (such as object shape and size). By comparing the information content of images of the same area, different time periods, and the same type, it is found that these images are distributed fairly evenly.

3.2. Experimental Parameter Settings

Since the data used in the experiment all come from images collected in MATLAB, and since the 3D reconstruction is built linearly on the original samples, the corresponding parameters must be set for a given quantity and quality of data.

(1) Pick a measurement sample. The locations of the sample points and their coordinate information are determined according to the previous results; these known characteristic parameters are then converted into a digital data format that satisfies the law of large numbers and entered into the computer for processing and analysis. Finally, MATLAB is used to generate valid numerical images, correct and incorrect values are obtained, and the images are analyzed to extract the features that meet the requirements. The experimental results are compared with the theoretical data to verify the effectiveness of the proposed method.

(2) Select appropriate measurement model parameters. The scales, angles, and other factors relevant to modeling are determined, and the corresponding variables are selected according to the large-sample principle to establish an effective numerical three-dimensional model; the corresponding function values in different dimensions are then computed, the final accurate values are obtained, and the analysis, conclusions, and recommendations follow.

4. Conclusion

By analyzing the basic principles and steps of 3D modeling with big data, as well as the state of research at home and abroad, this paper summarizes several commonly used extraction algorithms and related technologies based on different information sources. A simple, convenient, fast, effective, and high-precision orientation map is used to obtain sample point images; after the target area is obtained, the original image is processed further and the model is used to calculate the required depth information. With the rapid development of the Internet and the mobile Internet, multimedia data such as video images has begun to grow explosively. Multimedia data contains information of great value, and mining it can bring users considerable convenience. This article describes a big data image processing platform based on 3D image reconstruction technology, which extracts the content features of video images and stores them in a distributed manner, thereby providing a retrieval solution based on video image content. Moreover, the whole platform has good fault tolerance and scalability and can adapt well to the massive growth of video image data. Through the sorting and screening of data sets and attributes, improvements can be made in the following respects: first, a complete and reliable 3D big data system should be established to screen large amounts of data and integrate information from many sources; second, a representative feature extraction method with a high signal-to-noise ratio should be selected under a large sample size.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares no conflicts of interest.