#### Abstract

Since 3D models can intuitively display real-world information, they have potential applications in many fields, such as architectural models and medical organ models. However, a 3D model shared through the internet can be easily obtained by an unauthorized user. In order to solve the security problem of 3D models in the cloud, a reversible data hiding method for encrypted 3D models based on prediction error expansion is proposed. In this method, the original 3D model is preprocessed, and the vertices of the 3D model are encrypted by using the Paillier cryptosystem. In the cloud, in order to improve the accuracy of data extraction, a dyeing method is designed to classify all vertices into the embedded set and the referenced set. After that, secret data is embedded by expanding the direction of the prediction error with a direction vector. The prediction error of a vertex in the embedded set is computed by using the referenced set, and the direction vector is obtained according to a mapping table, which is designed to map several bits to a direction vector. Secret data can be extracted by comparing the angles between the direction of the prediction error and the direction vectors, and the original model can be restored using the referenced set. Experimental results show that, compared with existing data hiding methods for encrypted 3D models, the proposed method has higher data hiding capacity and improved accuracy of data extraction. Moreover, the directly decrypted model has less distortion.

#### 1. Introduction

With the rapid advancement of multimedia processing technologies on the internet, data hiding methods have been developed for the security of three-dimensional (3D) models [1–3]. Data hiding methods achieve integrity authentication and copyright protection by embedding secret data into the content of the original carrier [4, 5]. However, for applications with high data security requirements, such as medical images and judicial certification, no modification is allowed on the original carrier. Therefore, reversible data hiding (RDH) has attracted many researchers for its potential applications [6–8].

Traditional RDH methods can be divided into three categories: difference expansion (DE), histogram shifting (HS), and lossless compression (LC). DE-based RDH methods embed secret data into the carrier image by expanding the difference among adjacent pixels [9, 10]. Prediction error expansion (PEE), which belongs to difference expansion, embeds secret data by expanding the difference between the actual value and the prediction value of a pixel [11–14]. HS-based RDH methods generate a feature histogram of the original image and embed secret data into the peak point of the histogram [15, 16]. LC-based RDH methods compress a specified area of the carrier image and embed secret data into the compressed area [17].
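As a concrete illustration of PEE on a single pixel, a minimal Python sketch follows (not taken from any cited method; the predictor is assumed to be supplied externally):

```python
def pee_embed(pixel, pred, bit):
    # Classic prediction-error expansion: expand the error
    # e = pixel - pred to 2*e and append the secret bit in its LSB.
    e = pixel - pred
    return pred + 2 * e + bit

def pee_extract(marked, pred):
    # The LSB of the expanded error is the secret bit; halving
    # the error (floor division) restores the original pixel.
    e2 = marked - pred
    return e2 % 2, pred + (e2 >> 1)

bit, restored = pee_extract(pee_embed(100, 98, 1), 98)
```

Note that the floor-style shift makes recovery exact even for negative prediction errors, which is why PEE is reversible.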

With the development of outsourced storage in the cloud, reversible data hiding in the encrypted domain (RDH-ED) has been studied for the security of multimedia files in the cloud. The existing RDH-ED methods are mainly classified into reserving room before encryption (RRBE) and vacating room after encryption (VRAE). RRBE methods reserve embedding room before encrypting the original image. For example, Ma et al. [18] proposed a reversible data hiding method in which room is reserved by self-embedding before encryption. Zhang [19] constructed the histogram of prediction errors and reserved room by HS, which is a popular approach for reversible data hiding. Cao et al. improved the methods of [18, 19] by generating prediction errors with small entropy, so that the reserved room can be increased and the data hiding capacity can be improved [20].

VRAE methods directly implement data embedding by modifying the encrypted image [21, 22]. For example, Zhang embedded secret data by flipping the three least significant bits of a pixel [19]. With the help of the spatial correlation of natural images, the receiver extracted the secret data through an evaluation function of texture complexity. Based on the method of [19], Hong et al. increased the data hiding capacity by improving the evaluation function [23]. Zhang [19] also vacated room for data embedding by the typical manner of cipher-text compression.

However, the above RDH methods are designed for images and cannot be directly applied to 3D models because the data structure of a 3D model differs from that of an image. Therefore, Wu and Cheung proposed an RDH method for 3D models based on DE, which embeds secret data by modifying the differences among adjacent vertices [24]. Jhou et al. [25] constructed the histogram of the distances between all vertices and the center of the 3D model and embedded secret data by histogram shifting. However, these methods cannot be implemented in the encrypted domain. Jiang et al. proposed an RDH-ED method for 3D models based on stream cipher encryption [26]. By flipping several least significant bits of the vertex coordinates, one bit was embedded. The receiver extracted the secret data by using the spatial correlation of the original 3D model. Shah and Zhang proposed a watermarking method based on the Paillier cryptosystem, which vacated space before encryption [27]. Wang et al. embedded secret data in the encrypted domain by constructing a direction histogram and applying histogram shifting [28].

Combining homomorphic encryption and prediction error expansion, an RDH method for encrypted 3D models is proposed in this paper. In this method, the original 3D model is preprocessed, and the vertices of the 3D model are encrypted by using the Paillier cryptosystem. In the cloud, in order to improve the accuracy of data extraction, a dyeing method is designed to classify all vertices into the embedded set and the referenced set. After that, the secret data is embedded by expanding the direction of the prediction error with a direction vector. The prediction error of a vertex is computed by using the referenced set, and the direction vector is obtained according to the mapping table, which is constructed to map several bits to a direction vector. Secret data can be extracted by comparing the angles between the direction of the prediction error and the direction vectors, and the original model can be restored using the referenced set. The main contributions of this paper are as follows:

1. By designing the dyeing method to classify all vertices into the embedded set and the referenced set, the error rate of data extraction is reduced.
2. The mapping table is constructed to map several bits to a direction vector, so that several bits can be embedded into a single vertex, which improves the data hiding capacity.
3. Compared with the existing RDH-ED methods, the proposed method has higher capacity and a lower bit error rate, and the directly decrypted 3D model has less distortion.

The rest of this paper is organized as follows. The Paillier cryptosystem is briefly introduced in Section 2. The related reversible data hiding method is proposed in Section 3. The experimental results are shown in Section 4. The conclusions are discussed in Section 5.

#### 2. Related Algorithm

The Paillier cryptosystem [29], which has been widely used in encrypted signal processing, is both homomorphic and probabilistic. Homomorphism means that the product of two ciphertexts corresponds to the sum of the two underlying plaintexts. Probabilistic means that encrypting the same plaintext with different random parameters yields different ciphertexts, all of which decrypt to the same plaintext. The following describes the processes of key generation, encryption, and decryption, the two characteristics, and the subtraction homomorphism expansion.

##### 2.1. Key Generation

Randomly pick two large prime numbers $p$ and $q$. Calculate $n = pq$ and $\lambda = \operatorname{lcm}(p-1, q-1)$, where $\operatorname{lcm}$ stands for the lowest common multiple. Afterwards, select $g \in \mathbb{Z}_{n^2}^{*}$ randomly, which satisfies

$$\gcd\left(L\left(g^{\lambda} \bmod n^2\right), n\right) = 1,$$

where $L(x) = (x-1)/n$, and $\gcd$ means the greatest common divisor of two inputs. $\mathbb{Z}_{n}^{*}$ and $\mathbb{Z}_{n^2}^{*}$ are the sets of numbers coprime with $n$ and $n^2$, respectively. Finally, we get the public key $(n, g)$ and the corresponding private key $\lambda$.

##### 2.2. Encryption

Select a parameter $r \in \mathbb{Z}_{n}^{*}$ randomly. The plaintext $m \in \mathbb{Z}_{n}$ can be encrypted to the corresponding ciphertext $c$ by

$$c = E(m) = g^{m} \cdot r^{n} \bmod n^2,$$

where $E(\cdot)$ denotes the encryption function.

##### 2.3. Decryption

The original plaintext can be obtained by

$$m = D(c) = \frac{L\left(c^{\lambda} \bmod n^2\right)}{L\left(g^{\lambda} \bmod n^2\right)} \bmod n,$$

where $D(\cdot)$ denotes the decryption function.

Moreover, two important characteristics, which are applied in the proposed method, are described as follows.

##### 2.4. Lemma One

For two plaintexts $m_1$ and $m_2$, compute the corresponding ciphertexts $c_1$ and $c_2$ with parameters $r_1$ and $r_2$ according to Equation (1), respectively. The equation $c_1 = c_2$ holds if and only if $m_1 = m_2$ and $r_1 = r_2$.

##### 2.5. Homomorphic Multiplication

For two plaintexts $m_1, m_2 \in \mathbb{Z}_n$, the corresponding ciphertexts satisfy

$$D\left(E(m_1) \cdot E(m_2) \bmod n^2\right) = (m_1 + m_2) \bmod n.$$

The original Paillier encryption system only has addition homomorphism and multiplication homomorphism. The subtraction homomorphism of the Paillier encryption system can be realized as follows.

##### 2.6. Subtraction Homomorphic Expansion

In order to calculate the subtraction of two numbers in the encrypted domain, a negative number $-m$ should be expressed by the positive number $n - m$. Suppose that $c = E(m)$ denotes the ciphertext of $m$; the modular multiplicative inverse $c^{-1} \bmod n^2$ can be calculated by the extended Euclidean algorithm [30]. Hence, the corresponding result of $m_1 - m_2$ in the encrypted domain can be obtained with $E(m_1) \cdot E(m_2)^{-1} \bmod n^2$.
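The key generation, encryption, decryption, and both homomorphisms described above can be sketched in Python. This is a toy implementation with small, insecure primes chosen purely for illustration:

```python
import math
import random

def keygen(p=1117, q=1213):
    # Toy primes for illustration only; real deployments need large primes.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                                           # a common valid choice of g
    n2 = n * n
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)                 # precomputed decryption factor
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)       # fresh randomness -> probabilistic ciphertexts
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pk, sk = keygen()
n2 = pk[0] ** 2
c1, c2 = encrypt(pk, 123), encrypt(pk, 45)
added = decrypt(pk, sk, c1 * c2 % n2)                 # homomorphic addition
subbed = decrypt(pk, sk, c1 * pow(c2, -1, n2) % n2)   # subtraction via modular inverse
```

Multiplying ciphertexts adds the plaintexts (`added` is 168), and multiplying by the modular inverse of a ciphertext subtracts (`subbed` is 78), which is exactly the subtraction homomorphism expansion above. The three-argument `pow(x, -1, n)` form requires Python 3.8+.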

#### 3. The Proposed Reversible Data Hiding Method

In order to protect 3D models in the cloud, a reversible data hiding (RDH) method for encrypted 3D models is proposed. Figure 1 shows the flowchart of the proposed method. An original 3D model is preprocessed, and the vertices of the 3D model are encrypted by using the Paillier cryptosystem. In the cloud, all vertices are classified into the embedded set and the referenced set. Then, the secret data is embedded by expanding the direction of the prediction error with a direction vector. The prediction error of a vertex is computed by using the referenced set, and the direction vector is obtained according to the mapping table. The receiver can extract the secret data by comparing the angles between the prediction error and the direction vectors, and the original model can be restored using the referenced set.

##### 3.1. Preprocessing

Because the input of the Paillier cryptosystem should be a positive integer, the vertex coordinates are first converted from decimals to positive integers.

3D models consist of vertex data and connectivity data. The vertex data includes the coordinates of each vertex in the spatial domain. The connectivity data reflects the connection relationships between vertices. The 3D model Fairy and one of its local regions are illustrated in Figure 2, and the corresponding format file is shown in Table 1. Each vertex and each face of the 3D model has a corresponding index. For a 3D model, let $V = \{v_1, v_2, \ldots, v_N\}$ represent the sequence of vertices, where $v_i = (v_{i,x}, v_{i,y}, v_{i,z})$ and $N$ is the number of vertices. Note that each coordinate is a decimal number, and the number of significant digits of each coordinate is 6.

Normally, uncompressed vertices are stored as 32-bit floating point numbers with a precision of 6 significant digits. Therefore, each vertex coordinate is converted into an integer by scaling it with a power of ten determined by the number of reserved decimal digits.

Moreover, all vertex coordinates should be converted to positive integers before encryption.

If the number of reserved digits is greater than 6, the model can be restored losslessly; however, the time cost of encryption and decryption is large. If it is less than 6, the time cost is small, but the model cannot be restored losslessly. In order to balance the time cost and the distortion of the 3D model, the best number of reserved digits is selected through experiments.

##### 3.2. Encryption

Referring to Equation (2), an integer $r$ can be randomly selected to encrypt each vertex coordinate with the public key, yielding the ciphertext of each plaintext coordinate.

##### 3.3. Data Embedding

Firstly, the dyeing method is designed to classify all vertices into the embedded set and the referenced set. Secondly, the prediction error of each vertex in the embedded set is computed using the referenced set. Thirdly, a one-to-one mapping is constructed to map several bits to a direction vector. Finally, for embedding the secret data, the direction vector and the embedding key are used to expand the direction of the prediction error. In addition, the embedding key is calculated to improve the accuracy of data extraction.

###### 3.3.1. Classifying the Vertices

For a vertex of the 3D model, the 1-ring neighborhood of the vertex refers to the vertices that are directly adjacent to it. As shown in Figure 3, each such vertex is adjacent to the central vertex. If a vertex is modified to embed secret data, its 1-ring neighborhood cannot be modified, mainly because the 1-ring neighborhood is required to calculate the prediction value of the vertex. Therefore, the dyeing algorithm is designed to classify all vertices into the embedded set and the referenced set, since the colors of adjacent vertices cannot be the same. The vertices in the referenced set can be used to calculate the prediction values of the vertices in the embedded set, because the referenced set contains the 1-ring neighborhoods of all vertices in the embedded set, while the vertices in the embedded set are modified to embed secret data.

One set is the embedded set, and the other is the referenced set. In order to obtain a large data hiding capacity, the number of vertices in the embedded set should be as large as possible. The embedded set is a nonadjacent (independent) vertex set. According to graph theory, polynomial-time algorithms can find the largest nonadjacent vertex set if the graph is bipartite. However, the connectivity data of a 3D model cannot be represented by a bipartite graph, since the faces of a 3D model form cycles of length 3. Hence, finding the largest embedded set in a 3D model is an NP-hard problem. In order to increase the data hiding capacity, the dyeing algorithm is designed to increase the number of vertices in the embedded set.

Referring to the four-color theorem, different colors are required for any two adjacent vertices. For 3D models, at least seven colors are needed to dye the vertices so that the colors of adjacent vertices are different. After completing the dyeing process, the color with the most vertices is selected: all vertices of this color are regarded as the embedded set, and the remaining vertices are regarded as the referenced set. A global color set is used to dye all vertices, and for each vertex the set of colors already used by its 1-ring neighborhood is recorded. The steps of vertex classification using the dyeing algorithm are listed as follows.

*Step 1. *Initialize the color of all vertices to black (uncolored), and traverse all vertices in order of the vertex index.

*Step 2. *For an uncolored vertex, collect the color set of its 1-ring neighborhood.

*Step 3. *Dye the vertex with the first color in the global color set that does not appear in its 1-ring neighborhood.

*Step 4. *Determine whether any vertex is still black. If no vertex is black, the classification ends; otherwise, loop from Step 2 until no vertex is black.

*Step 5. *Select the most frequently used color, add all vertices of this color to the embedded set, and add the remaining vertices to the referenced set.

Since all vertices are traversed only once, the time complexity of the dyeing algorithm is $O(N)$, where $N$ is the number of vertices. For example, as illustrated in Figure 4, each vertex is traversed in order of the vertex index and dyed with the first color that does not appear in its 1-ring neighborhood. After traversing all vertices, the most frequently used color, green, is selected. The vertex set {2,6,7,11} of color green is regarded as the embedded set, while the remaining vertices form the referenced set.
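The dyeing steps above can be sketched in Python; the triangle-face input format and the names below are illustrative assumptions, while the greedy first-free-color rule follows the description:

```python
from collections import Counter, defaultdict

def classify_vertices(num_vertices, faces):
    # Build each vertex's 1-ring neighborhood from the triangle faces.
    ring = defaultdict(set)
    for a, b, c in faces:
        ring[a] |= {b, c}
        ring[b] |= {a, c}
        ring[c] |= {a, b}
    # Steps 1-4: traverse vertices in index order; dye each with the
    # first color not already used in its 1-ring neighborhood.
    color = {}
    for v in range(num_vertices):
        used = {color[u] for u in ring[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    # Step 5: the most frequent color forms the embedded set.
    best = Counter(color.values()).most_common(1)[0][0]
    embedded = [v for v in range(num_vertices) if color[v] == best]
    referenced = [v for v in range(num_vertices) if color[v] != best]
    return embedded, referenced
```

On a two-triangle strip `[(0, 1, 2), (1, 2, 3)]` this yields the embedded set `[0, 3]`: no two embedded vertices are adjacent, so each keeps its full 1-ring neighborhood inside the referenced set.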

###### 3.3.2. Computing of Prediction Error

The accuracy of the prediction directly affects the performance of data hiding. As shown in Figure 3, the 1-ring neighborhood is used to predict the central vertex. According to the correlation of adjacent vertices, the prediction value can be calculated as the average of the 1-ring neighborhood:

$$p_i = \frac{1}{n_i} \sum_{v_j \in N(v_i)} v_j,$$

where $N(v_i)$ denotes the 1-ring neighborhood of the vertex $v_i$ and $n_i$ denotes the number of adjacent vertices of the vertex $v_i$.

The prediction error of the vertex $v_i$ can be calculated by

$$e_i = v_i - p_i,$$

where $e_i$ denotes the prediction error of the vertex $v_i$, a three-dimensional vector with arbitrary direction. Due to the spatial correlation, the modulus length $|e_i|$ is usually small, and experimental results show that the modulus length has a maximum $d_{\max}$. Hence, the range of $|e_i|$ is $[0, d_{\max}]$.

###### 3.3.3. Constructing the Mapping Table

In order to embed several bits into a vertex in the embedded set, several bits should be mapped to a direction vector.

The data embedder converts the secret data into several groups of $t$ bits, where $t$ is a shared parameter. Let a group of $t$ bits be denoted as $(b_1, b_2, \ldots, b_t)$. Suppose that $s$ denotes the weighted sum of the group, calculated as

$$s = \sum_{j=1}^{t} b_j \cdot 2^{t-j}.$$

In order to embed the weighted sum into a vertex, a corresponding direction vector should be constructed, and the weighted sum is embedded by expanding the direction of the prediction error with this direction vector. The one-to-one mapping between weighted sums and direction vectors is constructed as shown in Table 2. If the components of a direction vector are taken from {-1, 1}, eight direction vectors can be constructed; the mapping between these direction vectors and weighted sums is shown in the top three rows of Table 2. If the components are taken from {-1, 0, 1} and the zero vector is excluded, twenty-six direction vectors can be constructed, as shown in the fifth row of Table 2. With a further enlarged component set, 110 direction vectors can be constructed. As shown in Table 2, the direction vectors are sorted according to a direction weight, which is calculated for each direction vector as follows.

Hence, once the shared parameter is obtained, the mapping table between weighted sums and direction vectors can be constructed. For any weighted sum, the corresponding direction vector can be found through the mapping table.

For example, if the shared parameter is 2, four direction vectors should be constructed to represent the four weighted sums 0, 1, 2, and 3, respectively. With components taken from {-1, 1}, eight direction vectors can be constructed, and the first four direction vectors are selected according to the direction weight. As a result, the four direction vectors (-1,-1,-1), (-1,-1,1), (-1,1,-1), and (-1,1,1) are constructed to represent 0, 1, 2, and 3, respectively.
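For the case where the shared parameter equals 2, the mapping in this example can be sketched as follows. The lexicographic ordering of the eight {-1, 1} vectors is an assumption standing in for the paper's direction-weight ordering, but it reproduces the four vectors listed above:

```python
from itertools import product

def build_mapping(t):
    # Candidate direction vectors: the 8 corners of {-1, 1}^3, in
    # lexicographic order (assumed ordering; matches the t = 2 example).
    candidates = sorted(product((-1, 1), repeat=3))
    assert 2 ** t <= len(candidates), "t > 3 needs the larger vector sets"
    return {s: candidates[s] for s in range(2 ** t)}

table = build_mapping(2)   # weighted sums 0..3 -> direction vectors
```

Larger shared parameters would draw from the 26- or 110-vector candidate sets described in the text.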

###### 3.3.4. Data Embedding

In order to embed the secret data into a vertex, the vertex coordinates are modified by using the direction vector and the embedding key. The weighted sum $s$ is embedded into the vertex by

$$v_i' = v_i + b \cdot d_s,$$

where $v_i'$ is the modified vertex coordinate of $v_i$, the parameter $b$ is the embedding key, and $d_s$ is the direction vector corresponding to $s$.

Referring to Equation (13), data embedding in the encrypted domain is performed component by component as

$$E(v_i') = E(v_i) \cdot E(b \cdot d_s) \bmod n^2,$$

where $E(v_i')$ is the ciphertext of $v_i'$, and $E(v_i)$ is the ciphertext of $v_i$. Moreover, if a component of $b \cdot d_s$ is positive, the addition homomorphism of the Paillier cryptosystem is utilized to embed the secret data; if it is negative, the subtraction homomorphism expansion of the Paillier cryptosystem is utilized.

After data embedding, the corresponding change of the prediction error in the plaintext domain is

$$e_i' = v_i' - p_i = e_i + b \cdot d_s.$$

After this change, the angle between the direction of the prediction error and the direction vector becomes small.

In order to improve the accuracy of data extraction, two inferences about vectors are provided to calculate the embedding key.

*Inference 1. *Suppose that $u$ and $w$ are two three-dimensional vectors. If the directions of $u$ and $w$ are the same, the modulus length $|u+w|$ has its maximum. If the directions of $u$ and $w$ are opposite, the modulus length $|u+w|$ has its minimum. The proof is listed as follows:
$$|u+w|^{2} = |u|^{2} + |w|^{2} + 2\,|u||w|\cos\theta,$$
where $\theta$ is the angle between $u$ and $w$. Since $|u+w|$ increases monotonically with $\cos\theta$, the above inference holds.

*Inference 2. *For two unit vectors $u$ and $w$, let $\theta$ denote the angle between the two vectors, and let $\theta'$ denote the angle between $u$ and $u+w$. $\theta$ and $\theta'$ are positively related: the smaller $\theta$, the smaller $\theta'$. In addition, $\theta' = \theta/2$. The proof is listed as follows:
$$\sin\theta' = \frac{|u \times (u+w)|}{|u|\,|u+w|} = \frac{|u \times w|}{|u+w|},$$
where $\times$ denotes the cross product. Since $u$ and $w$ are unit vectors, $|u \times w| = \sin\theta$ and $|u+w| = 2\cos(\theta/2)$, so
$$\sin\theta' = \frac{2\sin(\theta/2)\cos(\theta/2)}{2\cos(\theta/2)} = \sin\frac{\theta}{2}.$$
Hence $\theta' = \theta/2$, and $\theta'$ increases with $\theta$. The above inference holds.

###### 3.3.5. Embedding Key Calculation

The embedding key influences the accuracy of data extraction. If the embedding key satisfies a certain condition, secret data can be extracted correctly.

Figure 5 shows the angles between the prediction error and three direction vectors. Suppose that $\alpha$ denotes the angle between the prediction error and the direction vector corresponding to the secret data, and $\beta$ denotes the angle between the prediction error and any other direction vector. The angle between two vectors $u$ and $w$ is computed as

$$\langle u, w \rangle = \arccos\frac{u \cdot w}{|u|\,|w|}.$$

Since the secret data is extracted by finding the smallest angle between the prediction error and the direction vectors, in order to improve the accuracy of data extraction, the angle to the direction vector corresponding to the secret data, $\alpha$, should be smaller than the angle to any other direction vector, $\beta$. That is, $\alpha < \beta$ should be satisfied after data embedding.

The following inequality can be derived by using the Law of Cosines.

It can be derived to the following equation.

Suppose that the vector , and Equation (22) can be simplified as

According to Inference 2, the angle between and is smaller than , so . Equation (24) can be derived.

Since the modulus length , Equation (25) can be derived according to Inference 1.

It can be further derived by using the Law of Cosines.

Suppose that denotes the angle between and , in order to ensure that is smaller than , should satisfy the following equation.

If satisfies Equation (27), secret data can be extracted correctly. According to Inference 2 and Equation (27), has the maximum if the angle between and is smallest.

Since the direction vectors are related to the shared parameter, the shared parameter influences the value of the embedding key. According to Equation (27), the relationship between the embedding key and the shared parameter is obtained as follows.

For example, for a clear description, consider the condition in which the angle between the prediction error and its direction vector is the smallest. In this condition, the embedding key can be calculated by the following equation.

In order to explain the whole process, a data embedding example is given as follows. For convenience, suppose that the shared parameter is 2, and a vertex, its prediction value, and a 2-bit group of secret data are given. The weighted sum of the group is computed according to Equation (11), and four direction vectors can be obtained according to the mapping table: (-1,-1,-1), (-1,-1,1), (-1,1,-1), and (-1,1,1), corresponding to the weighted sums 0, 1, 2, and 3, respectively. The embedding key is obtained by Equation (28), and the prediction error is computed according to Equation (9). Initially, the angles between the prediction error and the four direction vectors are computed. After data embedding, the angle between the new prediction error and the direction vector corresponding to the embedded weighted sum becomes the smallest among the four, so the secret data can be extracted by finding the smallest angle.
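The embed-and-extract round trip in the plaintext domain can be sketched with hypothetical numbers. The vertex, prediction, and key values below are invented for illustration; the mapping table is the one from the shared-parameter-2 example:

```python
import math

# Mapping table for shared parameter t = 2 (from the text's example).
TABLE = {0: (-1, -1, -1), 1: (-1, -1, 1), 2: (-1, 1, -1), 3: (-1, 1, 1)}

def angle(u, w):
    dot = sum(a * b for a, b in zip(u, w))
    return math.acos(dot / (math.hypot(*u) * math.hypot(*w)))

def embed(vertex, pred, bits, b):
    s = bits[0] * 2 + bits[1]          # weighted sum of the 2-bit group
    d = TABLE[s]
    marked = tuple(v + b * di for v, di in zip(vertex, d))  # v' = v + b*d
    return marked, s

def extract(marked, pred):
    e = tuple(m - p for m, p in zip(marked, pred))  # prediction error
    return min(TABLE, key=lambda s: angle(e, TABLE[s]))

def recover(marked, s, b):
    return tuple(m - b * di for m, di in zip(marked, TABLE[s]))

vertex, pred = (1205, 3316, 2407), (1198, 3302, 2413)  # hypothetical integers
marked, s = embed(vertex, pred, [1, 0], b=250)
```

`extract(marked, pred)` returns the embedded weighted sum 2, because the expanded error now points closest to `(-1, 1, -1)`, and `recover(marked, 2, 250)` restores the original vertex exactly.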

Figure 6 shows the changes of the angles between the prediction errors of 100 vertices and the direction vectors corresponding to the secret data, before and after data embedding, when the shared parameter is 3. The result shows that the angle becomes smaller after data embedding.

In addition, a large embedding key will cause obvious distortion of the 3D model. Hence, the embedding key is discussed specifically in Section 4 to balance the distortion of the directly decrypted model and the accuracy of data extraction.

##### 3.4. Data Extraction and Model Recovery

After receiving the encrypted model with secret data, the receiver can decrypt the 3D model with the private key and obtain the directly decrypted 3D model. The 3D model is decrypted vertex by vertex according to the decryption function of the Paillier cryptosystem.

The directly decrypted 3D model is similar to the original model because only the coordinates of some vertices are modified slightly during data embedding.

After decrypting the 3D model, all vertices are first classified into the embedded set and the referenced set using the dyeing algorithm. Secondly, the prediction error of each vertex in the embedded set is computed, and the mapping table is constructed with the shared parameter. Then, the angles between the prediction error and all direction vectors are computed, and the smallest angle is selected. Finally, the weighted sum corresponding to the direction vector with the smallest angle is obtained, and the corresponding bit group can be found by using the mapping table.

The weighted sum is converted back into the bit group by inverting Equation (11).

After data extraction, the embedding key $b$ and the direction vector $d_s$ can be used to recover the original 3D model by using

$$v_i = v_i' - b \cdot d_s.$$

#### 4. Experiment Results and Discussion

The proposed method was implemented in Matlab 2016b under Windows 7. We carried out the following experiments on 100 3D models and calculated the average over the 100 models. Figure 7 shows a group of experimental results for five 3D models at different phases. The phases from left to right are the original 3D model, the encrypted 3D model, the data-embedded 3D model, the directly decrypted 3D model, and the recovered 3D model. The directly decrypted 3D models have low distortion, and the recovered 3D models have high similarity to the original 3D models.

**(a)**

**(b)**

**(c)**

**(d)**

**(e)**

The similarity of the disturbed 3D models is evaluated by the signal-to-noise ratio (SNR). The higher the SNR, the better the imperceptibility after embedding the watermark. SNR is computed as

$$\mathrm{SNR} = 10 \log_{10} \frac{\sum_{i=1}^{N}\left[(x_i - \bar{x})^2 + (y_i - \bar{y})^2 + (z_i - \bar{z})^2\right]}{\sum_{i=1}^{N}\left[(x_i - x_i')^2 + (y_i - y_i')^2 + (z_i - z_i')^2\right]},$$

where $(\bar{x}, \bar{y}, \bar{z})$ are the means of the vertex coordinates, $(x_i, y_i, z_i)$ are the original vertex coordinates, and $(x_i', y_i', z_i')$ are the coordinates of the disturbed 3D model.

$\mathrm{SNR}_1$ is used to evaluate the similarity of the directly decrypted models, and $\mathrm{SNR}_2$ is used to evaluate the similarity of the recovered models. In addition, the bit error rate (BER) is used to measure the error rate of data extraction. The lower the BER, the higher the accuracy of data extraction.
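The SNR above can be computed directly from the two vertex lists; the sketch below assumes the formula as reconstructed, with the signal measured as deviation from the centroid:

```python
import math

def snr(original, disturbed):
    # Signal: squared deviation of the original vertices from their centroid.
    # Noise: squared coordinate error introduced by the disturbance.
    n = len(original)
    cx = sum(v[0] for v in original) / n
    cy = sum(v[1] for v in original) / n
    cz = sum(v[2] for v in original) / n
    signal = sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                 for x, y, z in original)
    noise = sum((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2
                for (x, y, z), (a, b, c) in zip(original, disturbed))
    return 10 * math.log10(signal / noise)
```

The same function yields $\mathrm{SNR}_1$ when given the directly decrypted vertices and $\mathrm{SNR}_2$ when given the recovered vertices.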

##### 4.1. Decimal Reservation Digits

In order to observe the effect of the number of decimal reservation digits on the quality of the decrypted model and on the time cost, we changed its value from 1 to 8 and performed encryption and decryption on the 3D models. As shown in Figure 8(a), 6 is a threshold that determines whether a 3D model can be recovered losslessly. Since the number of significant digits of the vertex coordinates is 6, permanent distortion is easily caused if fewer than 6 digits are reserved. The distortion of the directly decrypted model ($\mathrm{SNR}_1$) is calculated, as shown in Figure 8(b). The time cost of encryption and decryption is related to the number of reserved digits, as shown in Figure 8(c). In order to obtain high quality of the decrypted models and reduce the time cost, the number of reserved digits is set to 4 in the following experiments.

**(a)**

**(b)**

**(c)**

##### 4.2. The Maximum of the Modulus Length

$d_{\max}$ is the maximum of the modulus length of the prediction error. Due to the spatial correlation of natural 3D models, the actual value of a vertex coordinate and its prediction value are relatively close. Hence, the modulus length of the prediction error is always small. In order to determine the range of the prediction error, the experiment is performed on 40 3D models. Figure 9 shows the maximum modulus lengths of the 40 3D models. It can be observed that all maximum modulus lengths are less than 250. Hence, $d_{\max}$ is set to 250 in the following experiments.

##### 4.3. The Choice of the Embedding Key and the Shared Parameter

According to Equation (13), the embedding key directly affects the distortion of the decrypted model. If the embedding key is large and satisfies Equation (27), the secret data will be extracted correctly, but the quality of the 3D models will decrease obviously. If the embedding key is small, the change of the prediction error will also be small, and the accuracy of data extraction will be reduced. However, several existing techniques, such as error-correcting codes (e.g., BCH codes), can improve the accuracy of data extraction. Hence, in order to balance the accuracy of data extraction and the quality of the decrypted model, the experiment is carried out by selecting the embedding key from 50 to 350 with an interval of 20.

The larger the shared parameter, the greater the embedding capacity. However, a larger shared parameter leads to more direction vectors being constructed, which affects the accuracy of data extraction. The experiment is carried out by selecting the shared parameter from 1 to 6.

Table 3 shows the effect of the embedding key and the shared parameter on the BER of data extraction. Figure 10(a) shows their effect on $\mathrm{SNR}_1$ of the decrypted model, and Figure 10(b) shows their effect on $\mathrm{SNR}_2$. As the shared parameter changes from 1 to 4, corresponding values of the embedding key can be found that give high accuracy and low distortion. It is observed that the proposed method achieves high accuracy, high embedding capacity, and low distortion for suitable combinations of the shared parameter and the embedding key.

**(a)**

**(b)**

Figure 11 shows the decrypted models when the shared parameter is fixed and the embedding key changes from 90 to 150. Figure 12 shows the decrypted models when the shared parameter changes from 1 to 4.

**(a)**

**(b)**

**(c)**

**(d)**

**(a)**

**(b)**

**(c)**

**(d)**

##### 4.4. The Effect of the Number of Vertices on $d_{\max}$

Table 4 shows the relationship between the number of vertices and $d_{\max}$. If a 3D model has a large number of vertices, the distances between adjacent vertices become small, which gives the 3D model a small $d_{\max}$. In addition, $d_{\max}$ directly influences the embedding key because of the relationship between the embedding key and $d_{\max}$. Hence, if a 3D model has more vertices, the value of the embedding key will be small, which means that the decrypted model has low distortion.

##### 4.5. Performance Comparison

In order to show the performance of the proposed method, we compare it with the existing methods in [26] and [28], as shown in Table 5. Compared with the method in [26], the proposed method has three times the capacity and lower distortion. Moreover, the proposed method has a lower BER, which can be reduced to zero by making the embedding key satisfy Equation (27).

Compared with the method in [28], the embedding room per vertex of the proposed method is smaller. However, since several bits can be mapped to one direction vector using the mapping table, several bits can be embedded into a single vertex. Hence, the proposed method has a higher overall embedding capacity.

In the proposed method, the distortion and the accuracy can be adjusted through the embedding key. When the embedding key is 110, the 3D models after data hiding have lower distortion. When the embedding key is 250, the BER of data extraction can be reduced to zero. In general, the distortion increases as the embedding key increases, and the BER increases as the embedding key decreases.

#### 5. Conclusion

The proposed method preserves privacy and protects the copyright of 3D models. Moreover, it has good potential for practical applications, since the directly decrypted models have low distortion compared with the original models. The original 3D model is preprocessed, and the vertices of the 3D model are encrypted by using the Paillier cryptosystem. In the cloud, the dyeing method is designed to classify all vertices into the embedded set and the referenced set. After that, secret data is embedded by expanding the direction of the prediction error with a direction vector. The prediction error of a vertex in the embedded set is computed by using the referenced set, and the direction vector is obtained according to the mapping table. Secret data can be extracted by comparing the angles between the direction of the prediction error and the direction vectors, and the original model can be restored using the referenced set. The proposed method can protect the copyright of 3D models in the cloud even when the cloud administrator does not know the content of the 3D models. Moreover, the proposed method has higher capacity and lower distortion than the existing methods.

For future work on RDH-ED methods, we will investigate the following two possible research directions: (1) extending data extraction from the plaintext domain to both the plaintext and ciphertext domains; (2) further improving the similarity between the directly decrypted model and the original model.

#### Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

#### Conflicts of Interest

The authors declare no conflict of interest.

#### Authors’ Contributions

The conceptualization and funding acquisition are credited to Li Li. The methodology and writing-original draft are due to Shengxian Wang. The conceptualization and supervision are credited to Ting Luo. The writing-review and editing are credited to Ching-Chun Chang. English grammar is credited to Qili Zhou. Formal analysis and investigation are originated by Hui Li. All authors read and approved the final manuscript.

#### Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (No. 61370218 and No. 61971247), Public Welfare Technology and Industry Project of Zhejiang Provincial Science Technology Department (No. LGG19F0-20016), and Ningbo Natural Science Foundation (No. 2019A610100).