Abstract

Although much research has been carried out on the transmission of 3D data through sensor networks, the security of such transmission remains unsolved. It is important to develop systems for copyright protection and digital rights management (DRM). In this paper, a blind watermarking algorithm is proposed to protect the transmission security of 3D polygonal meshes through sensor networks. Our method is based on selecting prominent feature vertices (prongs) on the mesh and then embedding the same watermark into their neighborhood regions. The embedding algorithm modifies the distribution of vertex norms by using quadratic programming (QP). Decoding results are obtained by a majority voting scheme over the neighborhood regions of these prongs. Assuming that cropping cannot remove all prongs, we achieve robustness against the cropping attack both theoretically and experimentally. Experiments indicate that the proposed method is also robust against noise, smoothing, and mesh simplification. The proposed method thus provides a solution for 3D polygonal mesh watermarking with the potential to withstand a variety of attacks.

1. Introduction

Nowadays, the processing, transmission, and visualization of 3D objects are realistic functionalities over sensor networks [1]. Established 3D processing techniques exist, and a large scientific community works on open problems and new challenges, including progressive transmission, fast access to huge 3D databases, and content security management. Although much research has been carried out on the transmission of 3D data through sensor networks, the security of such transmission remains unsolved. 3D objects can be duplicated, modified, transformed, and shared easily during the transmission process. In this context, it is important to develop systems for copyright protection and digital rights management (DRM).

Watermarking is a promising approach for reinforcing the security of 3D object transmission and has received much attention in the past years, as summarized in [2, 3]. 3D objects can be represented by polygonal meshes [1], nonuniform rational B-splines (NURBS) [4], point-sampled surfaces [5], and voxel representations [6]. Among these structures, the polygonal mesh is the most popular one due to its simplicity and the ease with which it can be converted to other representations. A 3D polygonal mesh is represented by a set of vertices and their connections. Consequently, a watermark can be embedded by modifying the positions or connections of these vertices.

Watermarking algorithms can be classified into blind and nonblind ones. In blind watermarking, only the watermarked object is needed for the decoding process, while in nonblind watermarking, both the original and watermarked objects are needed. Blind watermarking has wider applications, but it is generally more difficult to design and not as robust as nonblind watermarking in resisting attacks. Algorithms for 3D mesh watermarking can also be classified into spatial domain methods [7–13] and transformed domain ones [14–19]. In the spatial domain methods, the vertices, the normals, and geometrical invariants are modified for embedding, while in the transformed domain methods, 3D watermarking is treated as a common signal processing problem: conventional signal processing concepts, such as frequency analysis and wavelet decomposition, are applied to watermarking.

A watermarked mesh is likely to be attacked by a malicious user with the aim of eliminating the embedded watermark. In addition, channel noise can degrade the watermark during the transmission process. Thus, robustness against attacks and transmission channels is one of the major concerns in designing a watermarking system. Common attacks include affine transforms (rotation, scaling, and translation), noise, smoothing, connectivity attacks (modification of vertex connectivity), vertex reordering (reordering the sequence of vertices), simplification (removing vertices and faces while keeping the 3D shape unchanged), remeshing (resampling the 3D object to obtain a new mesh), and cropping (removing part of the 3D mesh from the original mesh) [13]. Each algorithm has its own strengths in resisting attacks. For example, compared with the spatial domain approaches, the transformed domain ones are more robust against noise. However, most transformed domain methods are not robust against connectivity attacks because they use connectivity information for watermarking.

Cropping is a special attack which aims at removing part of the mesh. It is possible to design a watermarking system which is robust against cropping by repetitively embedding the same watermark into different patches of the mesh. If several patches remain after the cropping attack, the watermark can be recovered from these unaffected patches. The work of Ohbuchi et al. [20] was one of the first to propose a nonblind approach based on repetitive embedding to resist the cropping attack. Other nonblind approaches are proposed in [21, 22]. For nonblind watermarking, the original mesh serves as a reference to indicate the embedding regions. By using synchronization and registration techniques, it is not difficult to extract the embedded watermark. For blind watermarking, however, the situation is more difficult because of the lack of a reference indicating where watermarking occurred. The basic idea is to segment the mesh into patches with special geometrical and topological properties, which serve as references for watermark embedding, with the expectation that the same patches can be extracted during the decoding process [10, 12, 23]. Such an approach raises the causality problem: the watermarking algorithm should produce the same patches before and after embedding, which is difficult because embedding may change mesh properties that the segmentation algorithm depends on. To our knowledge, this problem has not been fully solved by previous research works.

In this paper, we propose a blind watermarking scheme which is robust to various routine attacks during the transmission of 3D polygonal meshes through sensor networks, for example, noise, smoothing, simplification, and cropping. The basic idea follows our previous work [36] and the work of Rondao-Alface et al. [25]. Firstly, protrusive feature vertices (referred to as "prongs" in this paper) of the mesh are selected as references for segmentation. Secondly, the watermark is repetitively embedded into the neighborhood regions of these prongs. Because the selection procedure is local, prongs are evenly distributed on the mesh. If an attacker crops all the prongs, he/she will most probably obtain a meaningless mesh, so several prongs are likely to remain after the cropping attack. For decoding, prongs are retrieved and their neighborhood regions are obtained. A watermark is decoded from the neighborhood region associated with each prong, and a majority voting scheme is then used to obtain the final decoding results.

The rest of this paper is organized as follows. The proposed watermarking scheme is described in Section 2, which includes prong selection, watermark embedding, and decoding. Robustness of the proposed method against the cropping attack is shown in Section 3. Section 4 theoretically shows that the performance of the watermarking scheme is a function of the number of correct prongs and the total number of prongs. Section 5 shows simulation results of the proposed method against noise, smoothing, and simplification attacks. Finally, Section 6 concludes this paper.

2. The Proposed Watermarking Method

Figures 1 and 2 illustrate the watermark embedding and decoding processes, which are described in the following subsections.

2.1. Selecting Prongs

The first step of watermark embedding is to select prominent feature vertices, or "prongs", from the 3D mesh. Feature vertices are common descriptors of 3D surfaces, which have been used for mesh segmentation [26, 27] and object recognition [28]. In this application, we require feature vertices to be protrusive because protrusive regions contain most of the information of the shape [29]. If an attacker removes all protrusive regions, he/she would most probably obtain a meaningless shape. We associate a prongness value with each vertex. In this paper, the prongness value is an indicator of how protrusive a vertex is. The prongness value of a vertex $v_i$ is calculated by adding up the dot products between the normal direction of this vertex and the unit vectors from this vertex to its nearest neighbors:

$$\operatorname{prong}(v_i) = \sum_{v_j \in \mathcal{N}(v_i)} \left\langle \mathbf{n}_i, \frac{v_j - v_i}{\left\| v_j - v_i \right\|} \right\rangle, \qquad (1)$$

where $\mathbf{n}_i$ represents the normal direction of vertex $v_i$ and $\mathcal{N}(v_i)$ is the set of the $N_1$ vertices which are closest to $v_i$ in geodesic distance. Various methods have been proposed to calculate normal directions in 3D meshes [30, 31]. In this paper, we use a simple and common one. The first step is to calculate the surface normal of each polygon in the neighborhood of $v_i$. The surface normal of a polygon is calculated as the vector product of two of its edge directions divided by the vector length. The normal direction at vertex $v_i$ is then taken as the average of the surface normals of its adjacent polygons; that is,

$$\mathbf{n}_i = \frac{1}{|V_i|} \sum_{j=1}^{|V_i|} \frac{(v_j - v_i) \times (v_{j+1} - v_i)}{\left\| (v_j - v_i) \times (v_{j+1} - v_i) \right\|}, \qquad (2)$$

where $V_i = \{v_1, \ldots, v_{|V_i|}\}$ is the set of vertices which are connected with $v_i$ (ordered around $v_i$, with $v_{|V_i|+1} = v_1$), and $|V_i|$ is the total number of these vertices; each term is the unit surface normal of the adjacent polygon spanned by $v_i$, $v_j$, and $v_{j+1}$.
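To make the computation concrete, the following Python sketch (numpy-based; the function names and data layout are ours, not from the paper) evaluates the vertex normal of (2) for a triangle mesh by averaging unit face normals and then evaluates the prongness of (1), given a precomputed list of a vertex's geodesically nearest neighbors.

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Per-vertex normal: average of the unit normals of adjacent triangles, cf. (2)."""
    face_n = np.cross(vertices[faces[:, 1]] - vertices[faces[:, 0]],
                      vertices[faces[:, 2]] - vertices[faces[:, 0]])
    face_n /= np.linalg.norm(face_n, axis=1, keepdims=True)
    normals = np.zeros_like(vertices)
    counts = np.zeros(len(vertices))
    for f, n in zip(faces, face_n):
        normals[f] += n                 # accumulate on the three corner vertices
        counts[f] += 1
    normals /= counts[:, None]          # average over adjacent polygons
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

def prongness(i, vertices, normals, neighbors):
    """Sum of cosines between the normal of vertex i and the unit vectors to its
    geodesically nearest neighbors, cf. (1); low values indicate protrusive vertices."""
    d = vertices[neighbors] - vertices[i]
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return float((d @ normals[i]).sum())
```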

An algorithm to calculate geodesic distances was presented by Dijkstra [32]. In 1998, Kimmel and Sethian proposed the fast marching algorithm, which runs in $O(n \log n)$ time [33]. The fast marching algorithm was modified in our previous work to increase its speed [34, 35]. In this paper, we use the method of our previous work to calculate geodesic distances. The prongness calculation in (1) adds up the cosines between $\mathbf{n}_i$ and $v_j - v_i$ for all vertices $v_j$ which are geodesically close to $v_i$, as shown in Figure 3.
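The paper relies on the accelerated fast marching method of [34, 35]; as a simpler stand-in, the sketch below approximates geodesic distances by running Dijkstra's algorithm on the mesh edge graph and stopping once the k nearest vertices have been settled.

```python
import heapq
import numpy as np

def k_nearest_geodesic(source, vertices, adjacency, k):
    """Indices of the k vertices nearest to `source` by edge-graph (approximate
    geodesic) distance, in order of increasing distance."""
    dist = {source: 0.0}
    order, seen = [], set()
    heap = [(0.0, source)]
    while heap and len(order) < k + 1:              # +1: the source itself is settled first
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        order.append(u)
        for v in adjacency[u]:                      # adjacency[u]: vertices sharing an edge with u
            nd = d + np.linalg.norm(vertices[u] - vertices[v])
            if nd < dist.get(v, np.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return order[1:]                                # exclude the source vertex
```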

Prongs are selected as local minima of prongness values. In this paper, a vertex is selected as a prong if it has the lowest prongness value compared with its neighborhood vertices. We use $\mathcal{C}(v_i)$ to represent the set of $v_i$'s neighborhood vertices with which $v_i$ is compared, and we choose $\mathcal{C}(v_i)$ as the set of the first $N_2$ vertices which are closest to $v_i$ in geodesic distance. In other words, a vertex is selected as a prong if it has the lowest prongness value among its $N_2$ closest vertices in geodesic distance.
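Reusing the two helpers above, a minimal prong-selection sketch (again with our own names and data layout) could look as follows.

```python
import numpy as np

def select_prongs(vertices, adjacency, normals, n1, n2):
    """Vertices whose prongness is the lowest among their n2 geodesically nearest neighbors."""
    values = np.array([
        prongness(i, vertices, normals, k_nearest_geodesic(i, vertices, adjacency, n1))
        for i in range(len(vertices))
    ])
    prongs = []
    for i in range(len(vertices)):
        neighborhood = k_nearest_geodesic(i, vertices, adjacency, n2)
        if values[i] < values[neighborhood].min():   # strict local minimum over the n2-neighborhood
            prongs.append(i)
    return prongs
```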

Figure 4 shows the selected prongs of the bunny, head, and hand models. Each prong is represented as a red point on the mesh. There are also prongs at the back of each model, which are not shown here. We choose $N_1 = 200$, which means that we take the closest 200 vertices to calculate the prongness value. For the head and bunny models, which have 11,703 and 14,007 vertices, respectively, we choose $N_2 = 1000$. In other words, a prong is selected by comparing its prongness value with those of its 1000 neighborhood vertices in geodesic distance. For the hand model, which has 38,219 vertices, we set $N_2$ slightly greater than for the bunny and head models to limit the number of selected prongs. We can see that prongs are scattered in protrusive regions and evenly distributed over the whole mesh.

2.2. Segmentation Based on Prongs

The next step is to segment patches for watermarking based on the obtained prongs. These patches are geodesic circles centered on the prongs. For each prong, we segment the patch by taking the first $M$ vertices which are geodesically closest to it. Here $M$ is a predefined value, which is decided by considering the number of prongs, the watermarking algorithm, and the number of watermarking bits to be embedded. For example, the value of $M$ for the bunny model can be chosen so that 32 bits can be embedded by the histogram-based approach, because the histogram-based algorithm obtains good results if approximately 20 vertices are used to embed each bit [13].
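A sketch of the patch construction follows; the dictionary layout and the explicit overlap set are our own choices, made so that the QP in Section 2.3 can pin overlapping vertices.

```python
def segment_patches(prongs, vertices, adjacency, m):
    """Each patch is the set of the m vertices geodesically closest to a prong;
    vertices claimed by more than one patch are reported as overlapping."""
    patches = {p: set(k_nearest_geodesic(p, vertices, adjacency, m)) | {p}
               for p in prongs}
    claimed = {}
    for verts in patches.values():
        for v in verts:
            claimed[v] = claimed.get(v, 0) + 1
    overlapping = {v for v, c in claimed.items() if c > 1}
    return patches, overlapping
```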

Figure 5 shows the patches of the bunny model for a particular choice of $N_1$, $N_2$, and $M$, where each patch is represented by a unique color. Because there are overlapping regions belonging to more than one patch, we use the blue color to represent them.

The next step is to embed the same watermark into each patch. The embedding algorithm is an extension of the histogram-based approach proposed by [13], which is based on modifying the distribution of vertex norms. We have extended their work by minimizing distortions of vertex norms using quadratic programming (QP) and thus further improved its robustness [36]. In this paper, we use the proposed QP method for watermark embedding.

Another advantage of the QP method is the simplicity of dealing with overlapping vertices (the vertices shown in blue in Figure 5). Because overlapping vertices belong to more than one patch, they would be assigned different displacements during the watermarking process; embedding into one patch may thus destroy the watermarks of other patches. By using the QP method, the overlapping vertices can be constrained to their original positions to ensure correct embedding.

2.3. The Embedding Algorithm

In this section, we describe our embedding algorithm. Part of this section has been published in [36]. The main difference is that we have added a scheme to deal with overlapping vertices belonging to more than one patch.

For embedding the watermark into each patch, firstly, the Cartesian coordinates $(x_i, y_i, z_i)$ of the vertices in that patch are converted into spherical coordinates $(\rho_i, \theta_i, \phi_i)$:

$$\rho_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2 + (z_i - z_c)^2}, \quad \theta_i = \tan^{-1}\frac{y_i - y_c}{x_i - x_c}, \quad \phi_i = \cos^{-1}\frac{z_i - z_c}{\rho_i}, \qquad (3)$$

where $i = 1, \ldots, P$, $P$ is the number of vertices in the patch, $\rho_i$ is the $i$th vertex norm, and $(x_c, y_c, z_c)$ is the patch's center of gravity, which can be calculated as

$$(x_c, y_c, z_c) = \frac{1}{P} \sum_{i=1}^{P} (x_i, y_i, z_i). \qquad (4)$$
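A numpy sketch of the conversions in (3) and (4); np.arctan2 is used in place of the plain arctangent for numerical robustness, and only the norms are later modified by the embedding.

```python
import numpy as np

def to_spherical(patch_vertices):
    """Spherical coordinates of patch vertices about the patch center of gravity, cf. (3)-(4)."""
    center = patch_vertices.mean(axis=0)
    d = patch_vertices - center
    rho = np.linalg.norm(d, axis=1)                          # vertex norms
    theta = np.arctan2(d[:, 1], d[:, 0])
    phi = np.arccos(np.clip(d[:, 2] / rho, -1.0, 1.0))
    return rho, theta, phi, center

def to_cartesian(rho, theta, phi, center):
    """Inverse conversion, applied after the vertex norms have been watermarked."""
    return center + np.stack([rho * np.sin(phi) * np.cos(theta),
                              rho * np.sin(phi) * np.sin(theta),
                              rho * np.cos(phi)], axis=1)
```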

Secondly, the vertex norms are divided into $B$ distinct bins according to their magnitude. Each bin is used to hide one bit of the watermark. In this paper, we use $w_n \in \{0, 1\}$ to represent the watermarking bit to be embedded into the $n$th bin.

Here $\rho_{\min}$ and $\rho_{\max}$ represent the minimum and maximum values of all vertex norms in the patch. The $n$th bin is defined as follows ($n = 1, \ldots, B$):

$$\mathrm{Bin}_n = \left\{ \rho_{n,j} : a_n \le \rho_{n,j} < b_n \right\}, \quad a_n = \rho_{\min} + \frac{n-1}{B}(\rho_{\max} - \rho_{\min}), \quad b_n = \rho_{\min} + \frac{n}{B}(\rho_{\max} - \rho_{\min}). \qquad (5)$$

Here $a_n$ and $b_n$ are the lower and upper boundaries of the $n$th bin and $\rho_{n,j}$ is the $j$th vertex norm in the $n$th bin. In this paper, we also use $\theta_{n,j}$ and $\phi_{n,j}$ to represent the spherical angles of the $j$th vertex norm in the $n$th bin. In addition, we use $P_n$ to represent the number of vertex norms belonging to the $n$th bin.

The third step is to map the vertex norms belonging to the $n$th bin to the normalized range $[0, 1]$:

$$\tilde{\rho}_{n,j} = \frac{\rho_{n,j} - a_n}{b_n - a_n}, \qquad (6)$$

where $\tilde{\rho}_{n,j}$ is the normalized $j$th vertex norm in the $n$th bin. The aim of the watermarking process is to slightly modify the $\tilde{\rho}_{n,j}$ so that the mean of the vertex norms in each bin is moved into a specific range according to the watermarking bit to be embedded. We introduce the normalized distortion of the $j$th vertex in the $n$th bin, which is represented by $e_{n,j}$. Our aim is to calculate each $e_{n,j}$. After $e_{n,j}$ is obtained, we can calculate the new vertex norm by adding the distortion to the previous one:

$$\tilde{\rho}'_{n,j} = \tilde{\rho}_{n,j} + e_{n,j}. \qquad (7)$$
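The binning and normalization of (5) and (6) can be sketched as follows; the equally spaced boundaries are our reading of the histogram construction in [13].

```python
import numpy as np

def bin_and_normalize(rho, num_bins):
    """Assign each vertex norm to one of `num_bins` equally spaced bins and map it to [0, 1]."""
    edges = np.linspace(rho.min(), rho.max(), num_bins + 1)              # boundaries a_n, b_n
    idx = np.clip(np.searchsorted(edges, rho, side='right') - 1, 0, num_bins - 1)
    a, b = edges[idx], edges[idx + 1]
    rho_tilde = (rho - a) / (b - a)                                      # normalized norms, cf. (6)
    return idx, rho_tilde, edges
```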

Then we need to transform the normalized vertex norms back to the original scale by (8), which is the inverse transformation of (6):

$$\rho'_{n,j} = a_n + (b_n - a_n)\, \tilde{\rho}'_{n,j}. \qquad (8)$$

The watermark embedding process is completed by converting the spherical coordinates back to Cartesian coordinates. Let $\rho'_i$ be the $i$th watermarked vertex norm. A watermarked mesh consisting of the vertices $(x'_i, y'_i, z'_i)$ is obtained by

$$x'_i = x_c + \rho'_i \sin\phi_i \cos\theta_i, \quad y'_i = y_c + \rho'_i \sin\phi_i \sin\theta_i, \quad z'_i = z_c + \rho'_i \cos\phi_i. \qquad (9)$$

Our aim is to minimize the sum of squares of the distortions $e_{n,j}$:

$$\min_{\{e_{n,j}\}} \; \sum_{n=1}^{B} \sum_{j=1}^{P_n} e_{n,j}^2. \qquad (10)$$

Three constraints are applied to ensure that the embedded watermarking bits can be correctly decoded later. The first constraint limits the distortion of each vertex to a reasonable range. If a vertex belongs to more than one patch (a vertex shown in blue in Figure 5), then its displacement is set to zero; for the nonoverlapping vertices, we limit the transformed normalized vertex norm to the range $[\delta, 1 - \delta]$. Here $\delta$ is a parameter to control the distance gap between adjacent bins. The constraint is given as follows.

Constraint 1. For every $n$ and $j$, if the $j$th vertex of the $n$th bin is an overlapping vertex which belongs to more than one patch, then

$$e_{n,j} = 0; \qquad (11)$$

else

$$\delta \le \tilde{\rho}_{n,j} + e_{n,j} \le 1 - \delta. \qquad (12)$$

As discussed in Section 2.2, the overlapping vertices have to be constrained to their original positions to ensure correct embedding, and the nonoverlapping vertices are modified according to this constraint.

We can see from (7) and (12) that, after watermarking, $\tilde{\rho}'_{n,j}$ will be in the range $[\delta, 1 - \delta]$ if the above constraint is satisfied, so Constraint 1 ensures that vertices belonging to the $n$th bin still belong to that bin after the watermarking process, which is also implied in [13].

The second constraint is directly derived from [13] and ensures that the mean of the transformed vertex norms in the $n$th bin is greater (or smaller) than a reference value $R$ when the embedded watermarking bit $w_n = 1$ (or $w_n = 0$). This constraint must be satisfied to ensure that the embedded watermarking bits can be correctly extracted later. Our aim is to make the mean of the vertex norms in the $n$th bin,

$$\bar{\rho}_n = \frac{1}{P_n} \sum_{j=1}^{P_n} \left( \tilde{\rho}_{n,j} + e_{n,j} \right), \qquad (13)$$

greater than $R + \alpha$ (or smaller than $R - \alpha$) when $w_n = 1$ (or $w_n = 0$). Here $\alpha$ is a strength factor to control the watermarking effect. The second constraint is given as follows.

Constraint 2. For every $n$: (1) if $w_n = 1$, then

$$\bar{\rho}_n \ge R + \alpha; \qquad (14)$$

(2) if $w_n = 0$, then

$$\bar{\rho}_n \le R - \alpha. \qquad (15)$$

It can be deduced from (7) and (13) that, when Constraint 2 is satisfied, $\bar{\rho}_n$ is greater than $R + \alpha$ (or smaller than $R - \alpha$) when $w_n = 1$ (or $w_n = 0$).

We propose another constraint to guarantee that the center of gravity of the watermarked patch is the same as that of the original one. If the center of gravity were changed, by (3), the vertex norms would also be changed, and the decoding process might fail to extract the embedded bits. Such a problem is not addressed in [13]. Here we propose the following constraint to solve it.

Constraint 3. The radial displacements of all vertices in the patch sum to zero, so that the center of gravity is preserved:

$$\sum_{n=1}^{B} \sum_{j=1}^{P_n} (b_n - a_n)\, e_{n,j} \begin{pmatrix} \sin\phi_{n,j} \cos\theta_{n,j} \\ \sin\phi_{n,j} \sin\theta_{n,j} \\ \cos\phi_{n,j} \end{pmatrix} = \mathbf{0}. \qquad (16)$$

Thus, we have changed the problem of assigning distortions to an optimization problem, with a quadratic objective function and three linear constraints. This is exactly a quadratic programming problem and can be solved efficiently [37, Chapter 4].
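The per-bin part of this QP can be written down in a few lines with cvxpy. The sketch below is illustrative rather than the paper's exact formulation: we assume a reference value R = 0.5 for norms normalized to [0, 1], the default values of alpha and delta are our own, and Constraint 3, which couples all bins of a patch through the center of gravity, is omitted for brevity.

```python
import cvxpy as cp
import numpy as np

def embed_bin(rho_tilde, bit, overlap_mask, alpha=0.05, delta=0.05):
    """Minimize the squared distortions e for one bin subject to Constraints 1 and 2."""
    m = len(rho_tilde)
    e = cp.Variable(m)
    pinned = np.flatnonzero(overlap_mask)              # overlapping vertices
    free = np.flatnonzero(~np.asarray(overlap_mask))   # nonoverlapping vertices
    constraints = []
    if pinned.size:
        constraints.append(e[pinned] == 0)                       # Constraint 1, eq. (11)
    if free.size:
        constraints += [rho_tilde[free] + e[free] >= delta,      # Constraint 1, eq. (12)
                        rho_tilde[free] + e[free] <= 1 - delta]
    mean = cp.sum(rho_tilde + e) / m                             # bin mean, cf. (13)
    if bit == 1:
        constraints.append(mean >= 0.5 + alpha)                  # Constraint 2, w_n = 1
    else:
        constraints.append(mean <= 0.5 - alpha)                  # Constraint 2, w_n = 0
    cp.Problem(cp.Minimize(cp.sum_squares(e)), constraints).solve()
    return rho_tilde + e.value                                   # watermarked normalized norms
```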

2.4. Solving the Causality Problem

Because watermark embedding modifies the positions of vertices which are close to each prong, the local minimum of the prongness value is also likely to be changed. If this situation occurs, the prongs cannot be retrieved during the decoding process. Such a problem, referred to as the causality problem in [3], is illustrated in Figure 6(a).

Suppose that, after the watermarking process, the prong and its neighboring vertices have been moved to the new positions shown in Figure 6(a). It can be seen that the prongness value of the prong increases because the angles between its normal and the vectors to its neighbors become smaller. Thus, after watermarking, another vertex may substitute for the prong as the new local minimum in prongness value.

We propose an iterative approach to solve the causality problem, that is, to ensure that the same prongs can be retrieved after the watermarking process. Firstly, we embed the watermark into the neighborhood region of each prong. After embedding, we check whether the prongness value of each prong is still the local minimum among its neighborhood vertices. If so, the embedding is successful and the iteration finishes; otherwise, we slightly move the prong along its normal direction to decrease its prongness value and make it the local minimum again, as shown in Figure 6(b).

The iteration continues until the watermark is embedded and all prongs simultaneously take local minimum prongness values. We also set a maximum iteration number $T_{\max}$: if the process does not converge after $T_{\max}$ rounds, watermarking is stopped and embedding fails on that prong. Figure 7 illustrates the procedure of the iterative process. In experiments, only a small fraction of the prongs fail to be embedded. Considering that there is more than one prong in a mesh, this ratio does not significantly influence the performance of the whole scheme.
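In Python-flavored pseudocode, the iteration might look as follows; embed_patch and is_local_minimum stand for the embedding of Section 2.3 and the prongness test of Section 2.1, and the step size and iteration limit are placeholders rather than values from the paper.

```python
def embed_with_causality_check(prong, patch, vertices, normals,
                               embed_patch, is_local_minimum,
                               step=1e-3, max_iter=10):
    """Re-embed and nudge the prong outward along its normal until it is again the
    local minimum of prongness among its neighbors, or give up after max_iter rounds."""
    for _ in range(max_iter):
        embed_patch(patch, vertices)                 # QP-based embedding (Section 2.3)
        if is_local_minimum(prong, vertices):        # prongness test (Section 2.1)
            return True                              # embedding succeeded, prong preserved
        vertices[prong] += step * normals[prong]     # push the prong outward to lower its prongness
    return False                                     # embedding failed on this prong
```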

Figure 8 shows the watermarking effects of the bunny, head, and hand models. In each model, 32 bits are repetitively embedded into each patch.

2.5. Watermark Decoding

The watermark decoding process is as follows. We assume that the decoder knows the values of $N_1$, $N_2$, and $M$. These values can be transmitted with the watermarked mesh or predefined. Based on $N_1$, $N_2$, and $M$, we can obtain the prongs and patches by using the same method as in the embedding process. Then we can extract a watermark from each patch. Firstly, the center of gravity of each patch is calculated by (4); then the Cartesian coordinates are converted to spherical coordinates by (3). After obtaining the maximum and minimum vertex norms, the vertex norms are classified into $B$ bins and mapped onto the range $[0, 1]$ by (5) and (6). Then, the mean of the $n$th bin is calculated by (13) and compared with the reference value $R$. The watermark bit hidden in the $n$th bin, represented by $\hat{w}_n$, is extracted by

$$\hat{w}_n = \begin{cases} 1, & \bar{\rho}_n > R, \\ 0, & \bar{\rho}_n \le R. \end{cases} \qquad (17)$$

Each patch produces a series of decoded bits. The final result is obtained by a majority voting scheme. Each bit is decoded by counting the 0’s and 1’s of all the series at that bit. If there are more 0’s than 1’s, the bit is decoded as 0; otherwise it is decoded as 1.
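A sketch of the per-patch bit extraction and the final vote, reusing to_spherical and bin_and_normalize from the sketches above (R = 0.5 is again our assumed reference value; ties decode as 1, as stated in the text):

```python
import numpy as np

def extract_bits(patch_vertices, num_bins, reference=0.5):
    """Decode one bit per bin by comparing the bin mean of normalized norms with the reference."""
    rho, _, _, _ = to_spherical(patch_vertices)
    idx, rho_tilde, _ = bin_and_normalize(rho, num_bins)
    return np.array([1 if rho_tilde[idx == n].mean() > reference else 0
                     for n in range(num_bins)])

def majority_vote(bit_series):
    """bit_series: one decoded bit array per patch; returns the majority bit per position."""
    votes = np.sum(np.asarray(bit_series), axis=0)          # number of 1-votes for each bit
    return (2 * votes >= len(bit_series)).astype(int)       # ties decode as 1
```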

3. Robustness Test against the Cropping Attack

Because watermark embedding is a local process, several prongs remain unchanged after the cropping attack. It is also possible that cropping introduces new prongs around which no watermark is embedded. However, the majority voting scheme ensures that if the number of incorrect prongs is less than half of the total, the watermark can be extracted without errors. In Section 4, we will show that low bit error rates can also be achieved even if the number of incorrect prongs is more than half. Figure 9 shows the bunny, head, and hand models after part of the vertices of each model has been cropped. Compared with the original meshes in Figure 8, it can be seen that many prongs remain on the cropped meshes, but cropping also introduces new prongs.

We embed 32 bits into each mesh. The settings of $N_1$, $N_2$, and $M$ for each model are the same as in Section 2. The final decoding results are shown in Table 1, which lists the bit error rate (BER) of each model at three cropping ratios. The BER is calculated as the ratio of incorrectly decoded bits to all embedded bits. We also list the total number of prongs in the cropped mesh and the number of prongs in which we have embedded the watermark (denoted as "correct prongs").

From Table 1, we can see that the watermark can be correctly decoded even when a large portion of the mesh has been cropped. The robustness is determined by the number of correct prongs and the number of all detected prongs. If the number of correct prongs is more than half of the number of all detected prongs, the embedded bits can be decoded without errors. Otherwise there will be decoding errors, as in the first line of the bunny model, where only 2 out of 6 prongs are correct prongs and the BER is 12.5% in this situation.

4. Theoretical Analysis of the Decoding Scheme

In this section, we theoretically show that the performance of the watermarking scheme is a function of the number of correct prongs and the total number of prongs. Suppose the cropped mesh produces $K$ prongs in total, among which there are $C$ correct prongs and $K - C$ incorrect prongs. We further assume that each of the $C$ correct prongs decodes the watermarking bits without any error, and that the incorrect prongs randomly guess the watermarking bits (i.e., half correct and half wrong). Then the probability that a bit can be correctly decoded in this situation, denoted by $P_c$, can be obtained as follows: if $K$ is odd, then

$$P_c = \left(\frac{1}{2}\right)^{K-C} \sum_{i = (K+1)/2 - C}^{K-C} \binom{K-C}{i}; \qquad (18)$$

if $K$ is even, then

$$P_c = \left(\frac{1}{2}\right)^{K-C} \left[ \sum_{i = K/2 + 1 - C}^{K-C} \binom{K-C}{i} + \frac{1}{2} \binom{K-C}{K/2 - C} \right].$$

The deduction of the above equations is as follows. When $K$ is odd, by the rule of majority voting, a watermarking bit can be correctly decoded if at least $(K+1)/2$ prongs correctly decode that bit. Because the $C$ correct prongs always produce the correct bit, at least $(K+1)/2 - C$ of the remaining $K - C$ prongs should produce the correct bit. We further assume that the probability of correct decoding for each of the remaining $K - C$ prongs is $1/2$ (i.e., half correct and half wrong), so the total probability that one bit can be correctly decoded is $(1/2)^{K-C}$ times the sum of $\binom{K-C}{i}$, with $i$ varying from $(K+1)/2 - C$ to $K - C$, as indicated in (18).

When $K$ is even, the probability that the watermarked bit can be correctly decoded is considered under two situations.

Situation 1. Correct decoding is obtained if at least $K/2 + 1$ prongs correctly decode the bit. By a similar deduction, the probability corresponding to this situation is

$$P_1 = \left(\frac{1}{2}\right)^{K-C} \sum_{i = K/2 + 1 - C}^{K-C} \binom{K-C}{i}.$$

Situation 2. When exactly $K/2$ prongs correctly decode the bit, the decoder randomly guesses the watermarking bit. We further assume that the probability of a correct guess is $1/2$. Then the probability corresponding to this situation is

$$P_2 = \frac{1}{2} \left(\frac{1}{2}\right)^{K-C} \binom{K-C}{K/2 - C}.$$

If we sum up the probabilities of these two situations, we obtain the probability of correct decoding when $K$ is even:

$$P_c = P_1 + P_2.$$
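The two cases above reduce to a short binomial computation. The function below evaluates the probability under the stated assumptions (correct prongs always decode correctly, incorrect prongs guess uniformly, ties are broken by a fair coin); for example, p_correct(6, 2) corresponds to the setting discussed for the first line of Table 1.

```python
from math import comb

def p_correct(total_prongs, correct_prongs):
    """Probability that one bit survives majority voting over `total_prongs` patches
    when `correct_prongs` of them always decode it correctly and the rest guess."""
    wrong = total_prongs - correct_prongs
    p = 0.0
    for i in range(wrong + 1):                   # i of the wrong prongs guess the right bit
        right_votes = correct_prongs + i
        if 2 * right_votes > total_prongs:       # strict majority of correct votes
            p += comb(wrong, i)
        elif 2 * right_votes == total_prongs:    # tie: the decoder guesses, half correct
            p += 0.5 * comb(wrong, i)
    return p / 2 ** wrong
```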

Figure 10 plots $P_c$ for different combinations of $C$ and $K$. It can be observed that the theoretical analysis coincides with the experimental results. For example, it can be seen from the first line of Table 1 that the BER is 12.5% when the number of correct prongs is 2 and the number of all prongs is 6, while the theoretical curve in Figure 10 gives the corresponding value of $P_c$ for $C = 2$ and $K = 6$; the theoretical BER $1 - P_c$ is consistent with the experimental result.

From the above analysis, we can see that the performance of watermark decoding improves when $K$ takes a smaller value and $C$ takes a greater value. The number of correct prongs increases if we decrease the size of the neighborhood associated with each prong. However, with a smaller neighborhood, the number of vertices which can be used for embedding decreases, so the watermarking capacity also decreases. In practice, we need to make a compromise between robustness and watermarking capacity. We can also decrease $K$ by discarding incorrect prongs. It can be seen from Figure 9 that the incorrect prongs are distributed near the cropping boundary. If the cropping boundary is known beforehand or can be estimated, we can discard prongs which are close to the cropping boundary in order to improve robustness. Normally, if the cropping boundary is simple, we can estimate it from experience. For example, we can easily identify the cropping planes in Figure 9 even if we do not see the original models, because we know from experience that a rabbit should have legs and so forth. We can directly discard prongs close to the cropping boundary to improve performance. However, it is difficult to build algorithms that automatically estimate cropping boundaries due to the complexity of the human visual system (HVS).

5. Simulations on Other Attacks

Because of the good properties of histogram-based approaches, our system is invariant to rotation, scaling, and translation (RST) attacks and to vertex reordering. We then test the robustness of the system under four distortion attacks: noise, smoothing, simplification, and a mixed attack (50% cropping + 0.1% noise). As in the previous sections, we use the bunny, head, and hand models for this test. We embed 32 bits into each model. The settings of $N_1$, $N_2$, and $M$ are the same as those in Section 2. The similarity between the original mesh and the attacked mesh is measured by the Hausdorff distance, which is calculated by Metro [38].

Examples of the noise, smoothing, and simplification attacks are shown in Figure 11. Prongs are also shown on these attacked models. For the noise and smoothing attacks, $N_1$, $N_2$, and $M$ are the same as in Section 2; for simplification attacks, they need to be adjusted according to the reduction ratio. For example, if $r$ percent of the vertices vanish after simplification, the values of $N_1$, $N_2$, and $M$ should be reduced by $r$ percent accordingly. Compared with the prongs in Figure 4(a), we can see that most prongs are preserved, while some prongs are missing and new prongs appear after these attacks.

The experimental results are shown in Tables 2, 3, 4, and 5. Generally, decoding errors of the system are caused by two factors. The first is that attacks may change the positions of prongs, delete prongs, or introduce new prongs, so that we cannot obtain the same patches as in the unattacked mesh. The second is that the vertex positions within patches can also be modified by attacks, which likewise produces decoding errors. Because of the interaction of these two factors, the proposed method is not as robust as one which does not take prongs into consideration (such as the watermarking system proposed in [36]). This fact indicates that improving robustness to the cropping attack comes at the cost of some robustness against other attacks.

The robustness against the noise attack is shown in Table 2. Gaussian noise is added to each of the vertices in the watermarked mesh. The mean of the Gaussian noise is zero, and its variance is proportional to the maximum vertex norm in the mesh. We define the noise level as the ratio of the noise variance to the maximum vertex norm in the mesh. In order to filter out randomness, we repeat the noise-adding process five times and report the bit error rates at each noise level. Three noise levels have been tested in the experiments. It can be seen that the BERs increase as the noise level increases.
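For completeness, the noise attack can be sketched as below; the paper states that the noise variance is proportional to the maximum vertex norm, and this sketch takes the noise level to scale the standard deviation, which is our own simplification.

```python
import numpy as np

def add_noise(vertices, level, rng=None):
    """Add zero-mean Gaussian noise whose spread is `level` times the largest vertex norm."""
    rng = rng or np.random.default_rng()
    sigma = level * np.linalg.norm(vertices, axis=1).max()
    return vertices + rng.normal(0.0, sigma, size=vertices.shape)
```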

Table 3 shows the performance of the watermarking scheme after the smoothing attack [39]. The relaxation parameter is set to 0.03 and three different numbers of iterations are applied. Because the head and hand models are smoother than the bunny model, they are more robust against smoothing attacks.

For simplification attacks, the watermarked models are simplified at three reduction ratios: 5%, 10%, and 15%. The reduction ratio is defined as the percentage of vanished vertices relative to the total number of vertices. In order to obtain patches similar to those of the original mesh, $N_1$, $N_2$, and $M$ are adjusted according to the reduction ratio. We use $r$ to represent the reduction ratio. Then the adjusted values of $N_1$, $N_2$, and $M$ can be obtained by

$$N_1' = (1 - r)\, N_1, \qquad N_2' = (1 - r)\, N_2, \qquad M' = (1 - r)\, M.$$

Here $N_1'$, $N_2'$, and $M'$ are used for watermark decoding, and the results are shown in Table 4.

The proposed method is not very robust against simplification because the modification of $N_1$, $N_2$, and $M$ is based on the assumption that vertices are evenly distributed on the mesh, so that after simplification approximately the same number of vertices vanishes in the neighborhood region of each prong. However, most meshes have relatively dense and sparse regions, which introduces large errors in patch estimation after simplification. Another observation is that the Hausdorff distance between the simplified mesh and the original mesh changes little with different reduction ratios. This is because the Metro software interpolates vertices into the simplified mesh before the calculation and thus obtains relatively similar Hausdorff distances.

Finally, robustness against a mixture of cropping and noise attacks is shown in Table 5. Here 50% of the vertices of the watermarked models are cropped; then 0.1% noise is added to the cropped models. We repeat the noise-adding process five times to obtain the BERs. It can be seen that the BERs for the mixed attack are around 30% for these three models.

In summary, our method is robust against the cropping attack, as well as against other attacks such as RST attacks, connectivity attacks, noise, smoothing, simplification, and mixtures of these attacks. Although the robustness against some attacks still needs to be improved, it is one of the first blind watermarking schemes which can withstand such a variety of attacks.

6. Conclusions

Although much research has been carried out on the transmission of 3D data through sensor networks, the security of such transmission remains unsolved. In this context, it is important to develop systems for copyright protection and digital rights management (DRM). In this paper, a blind watermarking algorithm has been proposed to protect the transmission security of 3D polygonal meshes through sensor networks. Our method is based on selecting prominent feature vertices (prongs) on the mesh and then embedding the same watermark into their neighborhood regions. The embedding algorithm modifies the distribution of vertex norms by using quadratic programming (QP). Decoding results are obtained by a majority voting scheme over the neighborhood regions of these prongs. Assuming that cropping cannot remove all prongs, we achieve robustness against the cropping attack both theoretically and experimentally. Experiments indicate that the proposed method is also robust against noise, smoothing, and mesh simplification. The proposed method thus provides a solution for 3D polygonal mesh watermarking with the potential to withstand a variety of attacks.

In this paper, the watermark is retrieved by a majority voting scheme under the assumption that most prongs remain after the cropping attack. We also tested our method against other attacks such as noise, smoothing, simplification, and a mixture of these attacks. Experiments indicate that our method is robust against these attacks. Although the robustness against some attacks still needs to be improved, the simulation results demonstrate a blind watermarking scheme for 3D polygonal meshes which can resist a wide spectrum of attacks.

Conflict of Interests

The authors declare that they have no competing interests regarding the publication of this paper.

Acknowledgments

This work is supported by the Natural Science Foundation of China (Grant no. 61202400), the National Research Foundation for the Doctoral Program of Higher Education of China (Grant no. 20110101120053), the Fundamental Research Funds for the Central Universities (Grant no. 2011FZA5003), and the Natural Science Foundation of Zhejiang Province (Grant no. LQ12F02014). Part of this work was done while Roland Hu was a postdoctoral researcher at the Université catholique de Louvain, Belgium. Roland Hu is indebted to Benoit Macq, Patrice Rondao-Alface, and Joachim Giard for their ideas and suggestions in finishing this paper.