Security and Communication Networks

Special Issue: Machine Learning for Wireless Multimedia Data Security 2020

Research Article | Open Access


Shujiang Xu, Qixian Hao, Bin Ma, Chunpeng Wang, Jian Li, "Accurate Computation of Fractional-Order Exponential Moments", Security and Communication Networks, vol. 2020, Article ID 8822126, 16 pages, 2020. https://doi.org/10.1155/2020/8822126

Accurate Computation of Fractional-Order Exponential Moments

Academic Editor: Zhaoqing Pan
Received: 27 Mar 2020
Revised: 19 Jun 2020
Accepted: 04 Jul 2020
Published: 03 Aug 2020

Abstract

Exponential moments (EMs) are important radial orthogonal moments with good image description ability and less information redundancy than other orthogonal moments, and they have therefore been used in various fields of image processing in recent years. However, EMs can only take integer orders, which limits their reconstruction and antinoising attack performance. The extension to fractional-order exponential moments (FrEMs) effectively alleviates the numerical instability problem of EMs; however, the numerical integration errors generated by the traditional calculation methods of FrEMs still affect their accuracy. Therefore, Gaussian numerical integration (GNI) is used in this paper to propose an accurate calculation method for FrEMs, which effectively alleviates the numerical integration error. Extensive experiments are carried out to show that the GNI method significantly improves the performance of FrEMs in many aspects.

1. Introduction

Research on image retrieval began in the middle and late last century. Early systems were text-based: image features were described by associated textual information. Image retrieval technology later evolved into content-based image retrieval, which analyzes the color, texture, and layout of images. Shape is a basic image feature used in content-based image retrieval systems, and image moments are robust and effective shape features. Image moments are excellent image descriptors with strong geometric invariance and global feature description ability. Therefore, image moments have been widely used in the field of image processing [1], including object recognition, image reconstruction, image encryption, and information hiding [2].

Existing image moments are mainly divided into nonorthogonal moments and orthogonal moments. Nonorthogonal moments, such as Hu moments [3] and complex moments [4], project the image onto a set of nonorthogonal polynomials. Invariants to translation, rotation, and scale change can be constructed from nonorthogonal moments. However, because their basis functions are not orthogonal, nonorthogonal moments carry large redundancy, image reconstruction from them is difficult, and they are more sensitive to image noise. Orthogonal moments are the projection coefficients of the image onto a set of orthogonal polynomials, and an image can be reconstructed from its orthogonal moments. Research shows that they are highly robust to image noise, image blur, and other related operations [5]. Orthogonal moments are divided into discrete orthogonal moments and continuous orthogonal moments. Continuous orthogonal moments use continuous functions as basis functions and possess rotation, scaling, and translation invariance; they have developed rapidly in recent years and include Legendre moments (LMs) [6], Zernike moments (ZMs) [7], pseudo-Zernike moments (PZMs) [7], orthogonal Fourier–Mellin moments (OFMMs) [8], Chebyshev–Fourier moments (CHFMs) [9], radial harmonic Fourier moments (RHFMs) [10], Bessel–Fourier moments (BFMs) [11], polar harmonic transforms (PHTs) [12], and exponential moments (EMs) [13]. Among them, EMs have good antinoise performance and little information redundancy, and their basis functions have a simple form, low computational complexity, and good image description performance [14]. However, EMs suffer from various errors and numerical instabilities at high orders, which affects their accuracy [15].
These ubiquitous errors have a very negative impact on image analysis and reconstruction [16], so that when the order of EMs reaches a critical value, the reconstruction errors are too large for the image to be recovered [17]. The later extension to fractional-order exponential moments (FrEMs) effectively compensated for the numerical instability of EMs [18] and improved their reconstruction and antinoise performance. In the study of fractional-order moments, scholars first define a fractional parameter $t$ and then use $r^{t}$ to replace $r$ in the radial basis function of the orthogonal moment. The radial basis function is further modified to maintain the orthogonality of the moment [19]. An orthogonal moment promoted to fractional order can adjust the gradient of its radial basis function by assigning different values to the fractional parameter $t$, which further alleviates the problem of numerical instability [20]. Existing fractional-order moments include fractional-order Legendre–Fourier moments (FrOLFMs) [21], orthogonal fractional-order Fourier–Mellin moments (FrOFMMs) [22], fractional-order Zernike moments (FrZMs) [23], fractional-order polar harmonic transforms (FrPHTs) [24], fractional-order orthogonal Chebyshev–Fourier moments (FrCFMs) [25], and fractional-order radial harmonic Fourier moments (FrRHFMs) [26].

Although FrEMs have excellent image description ability, the various errors generated by the traditional calculation method still affect their accuracy, and calculation accuracy restricts the development and application of continuous orthogonal moments in pattern recognition and image processing. Among these errors, the numerical integration error is especially prominent. Since a digital image is stored in Cartesian coordinates in computers and other devices, directly calculating the continuous orthogonal moments of the image in the Cartesian coordinate system cannot obtain the exact integral value of the polynomial; it can only be replaced with an estimate, and the numerical integration error arises in this process [27]. The numerical integration error is more pronounced when high-order moments are calculated. To address the numerical integration error of image moments, Liao and Pawlak proposed a method based on numerical integration over a unit disk with a reduced radius [28]. The radius is reduced to ensure that the sampling points used in the numerical integration do not cross the boundary of the unit disk, which would make the radial basis function unbounded. Because the integration domain is then a disk with a reduced radius, this method causes a geometric error; they therefore concluded that the geometric error and the numerical integration error cannot be reduced at the same time. Later, Singh et al. proposed a technique based on Gaussian numerical integration (GNI) that can reduce the geometric error and the numerical integration error simultaneously [29]. Following this idea, this paper uses GNI to propose an accurate calculation method for FrEMs, which provides very accurate FrEMs and reduces the reconstruction error.

From the above description of image moments, we summarize two problems: (1) the traditional algorithm calculates image moments mainly with the zeroth-order approximation (ZOA) method, which produces numerical integration errors and affects the calculation accuracy of the moments; (2) the numerical instability of continuous orthogonal moments is common at high orders and affects their accuracy. The goal of this paper is to take EMs as an example and solve these two problems for EMs; experiments prove that the accuracy of EMs is improved with the new method. The main contributions of this paper are as follows: (1) an accurate calculation method for FrEMs is proposed, and the properties of the GNI method and the traditional calculation method are analyzed and compared in depth; (2) experimental results show that FrEMs computed with the accurate GNI method have stronger image reconstruction and antinoising attack performance than FrEMs computed with the traditional method.

The rest of this paper is described as follows: in Section 2, we introduce the construction process of FrEMs in detail; Section 3 mainly introduces the traditional calculation methods and GNI methods of FrEMs; Section 4 conducts detailed experiments and discussions on image reconstruction, antinoising attack, antirotation attack, antiscaling attack, antifiltering attack, and anti-JPEG compression attack performance; and Section 5 summarizes the full text.

2. Proposed FrEMs

2.1. Definition of EMs

EMs are the mapping of the image onto the basis functions. The basis function of EMs is composed of a radial basis function and an angular Fourier factor. The radial basis function of EMs is defined as follows [30]:

$$A_n(r) = \sqrt{2/r}\, e^{j2n\pi r},$$

where $n$ is the order with $n = 0, \pm 1, \pm 2, \ldots$, and the range of $r$ is $0 \le r \le 1$. The definition expression of EMs is

$$E_{nm} = \frac{1}{4\pi} \int_0^{2\pi}\!\!\int_0^1 f(r,\theta)\, [A_n(r)]^{*}\, e^{-jm\theta}\, r\, \mathrm{d}r\, \mathrm{d}\theta,$$

where $m$ is the repetition with $m = 0, \pm 1, \pm 2, \ldots$, $[\cdot]^{*}$ denotes the complex conjugate, $f(r,\theta)$ is the image function in polar coordinates, $e^{jm\theta}$ is the angular Fourier factor, and $\theta$ is the polar angle with the value range $0 \le \theta < 2\pi$.
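This radial basis function can be prototyped directly. A minimal NumPy sketch, assuming the commonly used form $A_n(r) = \sqrt{2/r}\,e^{j2n\pi r}$ (normalization constants vary between formulations in the literature):

```python
import numpy as np

def em_radial(n, r):
    """EM radial basis, assumed form: A_n(r) = sqrt(2/r) * exp(j*2*n*pi*r)."""
    r = np.asarray(r, dtype=float)
    return np.sqrt(2.0 / r) * np.exp(1j * 2 * np.pi * n * r)
```

Under this normalization, a midpoint-rule check of $\int_0^1 A_n(r)[A_k(r)]^{*} r\,\mathrm{d}r$ gives approximately 2 for $n = k$ and approximately 0 otherwise.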

EMs have good reconstruction performance: a limited number of EMs can reconstruct the original image. The reconstruction formula is as follows:

$$f(r,\theta) \approx \sum_{n=-n_{\max}}^{n_{\max}} \sum_{m=-m_{\max}}^{m_{\max}} E_{nm}\, A_n(r)\, e^{jm\theta}.$$

2.2. Definition of FrEMs

In order to improve the performance of EMs, in this section we extend EMs to fractional order and construct FrEMs. The radial basis function of FrEMs is defined as follows [31]:

$$A_n^{t}(r) = \sqrt{2t\, r^{t-2}}\, e^{j2n\pi r^{t}},$$

where the fractional parameter $t > 0$, and the basis function of FrEMs is defined as follows:

$$P_{nm}^{t}(r,\theta) = A_n^{t}(r)\, e^{jm\theta}.$$

The definition of FrEMs is

$$E_{nm}^{t} = \frac{1}{4\pi} \int_0^{2\pi}\!\!\int_0^1 f(r,\theta)\, [A_n^{t}(r)]^{*}\, e^{-jm\theta}\, r\, \mathrm{d}r\, \mathrm{d}\theta.$$

It can be seen from the two radial basis functions that when $t = 1$, the radial basis functions of FrEMs reduce to those of EMs; therefore, EMs can be deemed a special case of FrEMs. The radial basis functions of EMs are orthogonal within the range $0 \le r \le 1$:

$$\int_0^1 A_n(r)\, [A_k(r)]^{*}\, r\, \mathrm{d}r = 2\,\delta_{nk},$$

where $\delta_{nk}$ is the Kronecker delta. From the properties of the angular Fourier factor and the radial basis function, it can be known that the basis functions of EMs are orthogonal over the unit circle:

$$\int_0^{2\pi}\!\!\int_0^1 P_{nm}(r,\theta)\, [P_{kl}(r,\theta)]^{*}\, r\, \mathrm{d}r\, \mathrm{d}\theta = 4\pi\,\delta_{nk}\,\delta_{ml}.$$

From the definition of the radial basis function of FrEMs, it can be known that

$$\int_0^1 A_n^{t}(r)\, [A_k^{t}(r)]^{*}\, r\, \mathrm{d}r = \int_0^1 2t\, r^{t-1}\, e^{j2(n-k)\pi r^{t}}\, \mathrm{d}r,$$

and then, substituting $u = r^{t}$,

$$\int_0^1 2t\, r^{t-1}\, e^{j2(n-k)\pi r^{t}}\, \mathrm{d}r = 2\int_0^1 e^{j2(n-k)\pi u}\, \mathrm{d}u = 2\,\delta_{nk}.$$

The basis functions of FrEMs satisfy the following orthogonality relationship [32]:

$$\int_0^{2\pi}\!\!\int_0^1 P_{nm}^{t}(r,\theta)\, [P_{kl}^{t}(r,\theta)]^{*}\, r\, \mathrm{d}r\, \mathrm{d}\theta = 4\pi\,\delta_{nk}\,\delta_{ml}.$$
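The substitution argument above can be checked numerically. A sketch assuming the fractional basis form $A_n^{t}(r) = \sqrt{2t\,r^{t-2}}\,e^{j2n\pi r^{t}}$, which reduces to the EM basis at $t = 1$; the midpoint-rule check confirms $\int_0^1 A_n^{t}(r)[A_k^{t}(r)]^{*} r\,\mathrm{d}r \approx 2\delta_{nk}$ under this normalization:

```python
import numpy as np

def frem_radial(n, r, t):
    """Fractional-order radial basis, assumed form:
    A_n^t(r) = sqrt(2*t*r**(t-2)) * exp(j*2*n*pi*r**t); at t = 1 it is the EM basis."""
    r = np.asarray(r, dtype=float)
    return np.sqrt(2.0 * t * r ** (t - 2)) * np.exp(1j * 2 * np.pi * n * r ** t)
```

The integrand $2t\,r^{t-1}e^{j2(n-k)\pi r^{t}}$ is smooth on $(0, 1]$, so a fine midpoint rule reproduces the analytic value closely.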

FrEMs have very strong image reconstruction ability, and the reconstruction formula is as follows:

$$f(r,\theta) \approx \sum_{n=-n_{\max}}^{n_{\max}} \sum_{m=-m_{\max}}^{m_{\max}} E_{nm}^{t}\, A_n^{t}(r)\, e^{jm\theta}.$$
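The reconstruction sum can be sketched as follows; the `moments` dictionary and the basis form are illustrative assumptions, not the authors' code:

```python
import numpy as np

def frem_reconstruct(moments, r, theta, t):
    """Evaluate f(r, theta) ~ sum_nm E_nm^t * A_n^t(r) * exp(j*m*theta).
    `moments` maps (n, m) -> E_nm^t; assumed basis A_n^t(r) = sqrt(2*t*r**(t-2))*exp(j*2*n*pi*r**t)."""
    r = np.asarray(r, dtype=float)
    f = np.zeros(np.broadcast(r, np.asarray(theta)).shape, dtype=complex)
    for (n, m), e in moments.items():
        a = np.sqrt(2 * t * r ** (t - 2)) * np.exp(1j * 2 * np.pi * n * r ** t)
        f = f + e * a * np.exp(1j * m * np.asarray(theta))
    return f.real  # the imaginary part cancels for a complete conjugate-symmetric moment set
```

In practice the truncated sum is evaluated at every pixel's polar coordinates to produce the reconstructed image.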

2.3. Analysis of Radial Basis Function

In this section, we analyze the influence of the fractional parameter $t$ on the radial basis function of FrEMs. Figure 1 shows the radial basis function over $0 \le r \le 1$ when $t$ is taken as 1, 1.3, 1.6, and 1.9, respectively, with the order $n = 30$. From Section 2.2, when the fractional parameter $t = 1$, the fractional radial basis function is equivalent to the traditional radial basis function. As can be seen from Figure 1(a), when $t = 1$, the traditional radial basis function varies over a large range near $r = 0$; this large rate of change near $r = 0$ leads to numerical instability and large errors. Figure 1 also shows that the rate of change of the radial basis function gradually becomes more moderate as the fractional parameter $t$ increases. Therefore, the numerical instability of EMs can be alleviated by adjusting the fractional parameter $t$. However, different fractional parameters also emphasize different regions in the computation, so the intended application of FrEMs should be considered when selecting $t$.
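The moderating effect of $t$ near $r = 0$ is visible in the amplitude envelope of the assumed basis form, $|A_n^{t}(r)| = \sqrt{2t\,r^{t-2}}$: at $t = 1$ it grows like $r^{-1/2}$ as $r \to 0$, while for $t$ close to 2 it is nearly flat:

```python
import numpy as np

def envelope(r, t):
    """Amplitude envelope of the assumed FrEM radial basis: |A_n^t(r)| = sqrt(2*t*r**(t-2))."""
    return np.sqrt(2.0 * t * r ** (t - 2))

# Near r = 0 the t = 1 (EM) envelope blows up much faster than the t = 1.9 one:
print(envelope(0.01, 1.0), envelope(0.01, 1.9))
```

This matches the qualitative behavior described above: larger $t$ flattens the basis near the disk center at the cost of shifting emphasis toward other radii.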

3. Accurate Computation Method of FrEMs

3.1. Traditional Method

When the image moments of a digital image are calculated by computer, the integral expression must first be discretized, converting the integral into a sum [8]. Discrete integration first divides the integration area into small regions; the center point of each small region serves as the sampling point of the integrand, the area of each small region is multiplied by the integrand value at its sampling point, and the products over all small regions are summed to give the approximate integral value [33]. Since FrEMs are defined in the polar coordinate system while the image is defined in the rectangular coordinate system, the traditional calculation method first converts FrEMs to the rectangular coordinate system and then computes them there. The polar and rectangular coordinates are related by

$$r = \sqrt{x^{2} + y^{2}}, \qquad \theta = \arctan\!\left(\frac{y}{x}\right).$$

The infinitesimal relationship between rectangular and polar coordinates is

$$\mathrm{d}x\, \mathrm{d}y = r\, \mathrm{d}r\, \mathrm{d}\theta.$$

The definition of FrEMs in the rectangular coordinate system is obtained as follows:

$$E_{nm}^{t} = \frac{1}{4\pi} \iint_{x^{2}+y^{2}\le 1} f(x,y)\, \big[A_n^{t}\big(\sqrt{x^{2}+y^{2}}\big)\big]^{*}\, e^{-jm\arctan(y/x)}\, \mathrm{d}x\, \mathrm{d}y.$$

The integration domain is $x^{2} + y^{2} \le 1$, so the image needs to be mapped into the unit circle when FrEMs are calculated in the rectangular coordinate system. Since FrEMs calculated by the circumscribed-circle mapping method do not have rotation invariance, this paper uses a calculation method based on the inscribed circle, as shown in Figure 2.

The formula for mapping the inscribed-circle portion of an $N \times N$ grayscale image into the unit circle is as follows:

$$x_i = \frac{2i - N + 1}{N}, \qquad y_j = \frac{2j - N + 1}{N}, \qquad i, j = 0, 1, \ldots, N-1.$$

The above mapping relationship is shown in Figure 2(b): the image center is mapped to the center of the unit circle, and $(x_i, y_j)$ represents the center of the small image region of size $\Delta x \times \Delta y$, where $\Delta x = \Delta y = 2/N$. The discrete summation form of FrEMs can be obtained as follows:

$$E_{nm}^{t} \approx \frac{1}{4\pi} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} f(x_i, y_j)\, \big[A_n^{t}(r_{ij})\big]^{*}\, e^{-jm\theta_{ij}}\, \Delta x\, \Delta y,$$

where $r_{ij} = \sqrt{x_i^{2} + y_j^{2}} \le 1$ and $\theta_{ij} = \arctan(y_j / x_i)$.
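The traditional (zeroth-order approximation) computation described above can be sketched as follows, again under the assumed basis normalization; pixel centers are mapped into $[-1, 1]^2$ and only samples inside the inscribed disk contribute:

```python
import numpy as np

def frem_zoa(img, n, m, t):
    """Zeroth-order-approximation FrEMs of a square grayscale image (sketch).
    Inscribed-circle mapping; assumed basis A_n^t(r) = sqrt(2*t*r**(t-2))*exp(j*2*n*pi*r**t)."""
    N = img.shape[0]
    x = (2 * np.arange(N) - N + 1) / N            # pixel-centre coordinates in [-1, 1]
    X, Y = np.meshgrid(x, x)                      # X varies along columns, Y along rows
    R = np.hypot(X, Y)
    TH = np.arctan2(Y, X)
    mask = (R > 0) & (R <= 1)                     # inscribed unit disk; skip the singular origin
    A = np.sqrt(2 * t * R[mask] ** (t - 2)) * np.exp(1j * 2 * np.pi * n * R[mask] ** t)
    d = 2.0 / N                                   # Δx = Δy = 2/N
    return (d * d / (4 * np.pi)) * np.sum(img[mask] * np.conj(A) * np.exp(-1j * m * TH[mask]))
```

Because the sampling grid maps onto itself under a 90° rotation, the moment magnitude is preserved exactly under `np.rot90`, which is a convenient sanity check of the implementation.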

3.2. GNI Method

The traditional calculation method has the numerical integration problem [34]. In view of this defect, an accurate calculation method of FrEMs is proposed in this section by using GNI.

For a one-dimensional function $g(x)$, the integral over the interval $[a, b]$ can be expressed as

$$I = \int_a^b g(x)\, \mathrm{d}x;$$

denoting

$$x = \frac{b-a}{2}\,u + \frac{a+b}{2},$$

and then,

$$\mathrm{d}x = \frac{b-a}{2}\, \mathrm{d}u,$$

so $I = \dfrac{b-a}{2} \displaystyle\int_{-1}^{1} g\!\left(\dfrac{b-a}{2}\,u + \dfrac{a+b}{2}\right) \mathrm{d}u$, and it can be obtained that

$$I \approx \frac{b-a}{2} \sum_{s=0}^{p-1} w_s\, g\!\left(\frac{b-a}{2}\,u_s + \frac{a+b}{2}\right),$$

where $w_s$ and $u_s$ are the weight and position of the $s$th Gauss–Legendre sampling point, respectively, and $p$ is the order of GNI.
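The change-of-interval Gauss–Legendre rule above is available directly from NumPy's `leggauss`; a minimal sketch:

```python
import numpy as np

def gni_1d(g, a, b, p=5):
    """p-point Gauss-Legendre quadrature of g over [a, b]:
    (b-a)/2 * sum_s w_s * g((b-a)/2 * u_s + (a+b)/2)."""
    u, w = np.polynomial.legendre.leggauss(p)     # nodes u_s and weights w_s on [-1, 1]
    return (b - a) / 2.0 * np.sum(w * g((b - a) / 2.0 * u + (a + b) / 2.0))
```

A $p$-point rule integrates polynomials up to degree $2p - 1$ exactly, which is why very few nodes per pixel suffice in the moment computation below.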

Similarly, for a two-dimensional function $g(x, y)$, the double GNI over the integration area $[a, b] \times [c, d]$ can be expressed as

$$\int_c^d\!\!\int_a^b g(x,y)\, \mathrm{d}x\, \mathrm{d}y \approx \frac{(b-a)(d-c)}{4} \sum_{s=0}^{p-1} \sum_{q=0}^{p-1} w_s\, w_q\, g\!\left(\frac{b-a}{2}\,u_s + \frac{a+b}{2},\ \frac{d-c}{2}\,u_q + \frac{c+d}{2}\right).$$

Now, we use the double GNI method to calculate FrEMs precisely. Applying it to the rectangular-coordinate definition of FrEMs over each pixel region gives

$$E_{nm}^{t} \approx \frac{1}{4\pi} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} f(x_i, y_j)\, h_{nm}^{t}(x_i, y_j),$$

where

$$h_{nm}^{t}(x_i, y_j) = \frac{\Delta^{2}}{4} \sum_{s=0}^{p-1} \sum_{q=0}^{p-1} w_s\, w_q\, \big[A_n^{t}(r_{sq})\big]^{*}\, e^{-jm\theta_{sq}},$$

with $\Delta = 2/N$, the quadrature nodes $\big(x_i + \tfrac{\Delta}{2} u_s,\ y_j + \tfrac{\Delta}{2} u_q\big)$, and $(r_{sq}, \theta_{sq})$ the polar coordinates of each node, subject to the constraint

$$\left(x_i + \frac{\Delta}{2} u_s\right)^{2} + \left(y_j + \frac{\Delta}{2} u_q\right)^{2} \le 1.$$

This constraint on the quadrature nodes is an improvement over the constraint used in the zeroth-order approximation for the inscribed circular disk: it also allows grid cells whose centers fall outside the unit circle to take part in the computation, as long as some of their quadrature nodes fall inside it.
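Combining the per-pixel double GNI with the node-inside-disk constraint gives the following sketch (`frem_gni` is a hypothetical helper under the assumed basis normalization, not the authors' code):

```python
import numpy as np

def frem_gni(img, n, m, t, p=5):
    """GNI-based FrEMs (sketch): per-pixel p x p Gauss-Legendre quadrature of the
    assumed basis A_n^t(r) = sqrt(2*t*r**(t-2)) * exp(j*2*n*pi*r**t).
    A cell contributes through every quadrature node inside the unit disk,
    even when the cell centre lies outside it."""
    N = img.shape[0]
    d = 2.0 / N                                    # pixel side length in unit coordinates
    u, w = np.polynomial.legendre.leggauss(p)      # nodes and weights on [-1, 1]
    c = (2 * np.arange(N) - N + 1) / N             # pixel-centre coordinates
    E = 0.0 + 0.0j
    for i in range(N):
        for j in range(N):
            X, Y = np.meshgrid(c[j] + d / 2 * u, c[i] + d / 2 * u)
            R = np.hypot(X, Y)
            TH = np.arctan2(Y, X)
            inside = (R > 0) & (R <= 1)            # node-inside-disk constraint
            if not inside.any():
                continue
            W = (d / 2) ** 2 * np.outer(w, w)[inside]
            A = np.sqrt(2 * t * R[inside] ** (t - 2)) \
                * np.exp(1j * 2 * np.pi * n * R[inside] ** t)
            E += img[i, j] * np.sum(W * np.conj(A) * np.exp(-1j * m * TH[inside]))
    return E / (4 * np.pi)
```

For a constant image and $t = 1$, the analytic value of $E_{00}$ under this normalization is $\frac{1}{4\pi}\int_0^{2\pi}\!\int_0^1 \sqrt{2/r}\, r\,\mathrm{d}r\,\mathrm{d}\theta = \sqrt{2}/3$, which the quadrature reproduces closely even at modest $N$ and $p$.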

4. Experiment and Result Analysis

In this section, the image reconstruction, antinoising attack, antirotation attack, antiscaling attack, antifiltering attack, and anti-JPEG compression attack performance of the accurate calculation method of FrEMs was tested experimentally. Thirty grayscale images of equal size were used as the image library; the ten images shown in Figure 3 were selected randomly from it. For convenience, TFrEMs and GFrEMs denote FrEMs computed by the traditional method and the GNI method, respectively; GFrPHFMs, GFrRHFMs, GFrPCETs, GFrPCTs, and GFrPSTs denote FrPHFMs, FrRHFMs, FrPCETs, FrPCTs, and FrPSTs calculated by the GNI method. The order of GNI $p$ was fixed throughout the experiments.

4.1. Rotation Angle Estimation

Let the rotation angle of the image be $\varphi$. Assuming that the moments of the rotated image satisfy $E_{nm}^{t\,\prime} = E_{nm}^{t}\, e^{-jm\varphi}$, and then

$$\tan(m\varphi) = -\,\frac{\mathrm{Im}\big(E_{nm}^{t\,\prime}/E_{nm}^{t}\big)}{\mathrm{Re}\big(E_{nm}^{t\,\prime}/E_{nm}^{t}\big)},$$

where $\mathrm{Re}(\cdot)$ refers to the real part and $\mathrm{Im}(\cdot)$ refers to the imaginary part of the ratio. According to this formula, the rotation angle can be estimated from either part by the inverse trigonometric function, and GFrEMs of any order can be used to estimate it [35]. From Section 2.3, the selection of $t$ leads to different emphasized regions; after extensive experiments, the fractional parameter $t$ giving the best angle estimation results for GFrEMs was selected. For each maximum moment order used in the estimation, we selected moments of all repetitions $m$ to estimate $\varphi$ and took the average of the resulting estimates as the final result for that maximum moment order. The experiment used a Lena grayscale image. Denoting the estimated angle by $\hat{\varphi}$ and using the mean relative error (MRE) as the measurement standard, the experimental results were as follows.
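The angle-recovery step can be sketched as a one-liner; the sign convention $E'_{nm} = E_{nm}\,e^{-jm\varphi}$ is an assumption (conventions differ across papers), and the estimate is unambiguous only for $|m\varphi| < \pi$:

```python
import numpy as np

def estimate_angle(e_orig, e_rot, m):
    """Estimate the rotation angle phi from one moment pair, assuming
    e_rot = e_orig * exp(-j*m*phi); valid only while |m*phi| < pi."""
    return np.angle(e_orig * np.conj(e_rot)) / m
```

Averaging such estimates over all repetitions $m$ of a given maximum moment order, as described above, reduces the influence of any single noisy moment.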

Figure 4 shows the estimated rotation angles after the original image was rotated by 30° and 60°, respectively. As can be seen from Figure 4, the angles estimated by GFrEMs are relatively accurate at low maximum moment orders, and some deviations occur as the maximum moment order increases, but the MRE can still be kept small. Table 1 shows that when the image was rotated by 30°, the MREs of the rotation angles estimated from the real and imaginary parts of the moments were 0.0688 and 0.0425, respectively; when it was rotated by 60°, the corresponding MREs were 0.0247 and 0.0399. This verifies that rotation angle estimation using GFrEMs is relatively accurate.


Table 1: Estimated rotation angles (degrees) over the tested maximum moment orders, with the MRE of each estimate series.

Rotated 30°, real part (MRE = 0.0688):
30.0506 29.7332 30.6762 30.5036 30.7033 31.4123 31.5275 31.3200 30.2478 30.2141
31.1322 31.4733 31.8574 33.1126 33.5591 33.5929 32.8641 33.2563 33.5570 33.7286

Rotated 30°, imaginary part (MRE = 0.0425):
30.5333 30.5099 30.3428 30.1927 30.2976 30.3529 30.1519 30.9068 30.8557 31.0208
30.9644 30.9167 30.2841 32.3178 32.9716 32.3165 32.2609 32.0359 31.7423 32.3871

Rotated 60°, real part (MRE = 0.0247):
60.5400 60.2754 59.7343 59.7744 59.5368 59.9314 59.9408 59.6682 56.6239 57.0171
57.2264 57.6340 57.9344 58.1381 58.1072 57.2439 58.5289 58.5149 57.7614 57.8638

Rotated 60°, imaginary part (MRE = 0.0399):
60.6864 60.9414 59.9179 59.1823 58.7855 59.1819 58.5824 57.7474 56.8381 58.3867
57.3305 56.6823 56.1256 56.9727 56.2941 55.9437 57.0636 56.7532 57.0363 56.6987

4.2. Rotation Invariance

The rotation invariance of GFrEMs was tested in this section. A Lena grayscale image was rotated by five different angles, and the GFrEMs amplitudes of the original image were compared with those of the rotated images. MRE was used to represent the rate of change of the GFrEMs amplitudes of a rotated image relative to the original image. The selection of the fractional parameter $t$ affects the zero distribution of the radial basis function and hence its rate of change; after many experiments, the value of $t$ that gave the smallest MRE was selected here. Figure 5 shows the experimental images after rotation by the different angles, and the experimental results obtained are shown in Table 2.


Table 2: GFrEMs amplitudes of the original and rotated images, with the MRE of each rotated image.

Original image:  4.6989 2.3031 2.8317 2.0184 1.7630 0.4170 2.2383 0.6868 0.4759 (MRE = 0)
Rotated image 1: 4.7039 2.3149 2.8348 2.0164 1.7674 0.4209 2.2310 0.6862 0.4738 (MRE = 0.0052)
Rotated image 2: 4.6935 2.3207 2.8286 2.0275 1.7591 0.4159 2.2215 0.7015 0.4816 (MRE = 0.0071)
Rotated image 3: 4.6773 2.3040 2.8156 2.0327 1.7432 0.3973 2.2046 0.7084 0.4918 (MRE = 0.0124)
Rotated image 4: 4.6809 2.3030 2.8058 2.0302 1.7386 0.3983 2.2122 0.7120 0.4879 (MRE = 0.0105)
Rotated image 5: 4.6933 2.3010 2.8155 2.0214 1.7504 0.4006 2.2240 0.6940 0.4838 (MRE = 0.0082)

It can be seen from the results that the MRE was less than 0.02 under all rotation angles, which indicates that the GFrEMs amplitudes of the rotated images were approximately the same as those of the original image and verifies the rotation invariance of GFrEMs.
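One plausible reading of the MRE used in these tables (the paper does not spell out the formula) is the mean of the per-moment relative amplitude errors:

```python
import numpy as np

def mre(orig, attacked):
    """Mean relative error between two vectors of moment amplitudes
    (assumed definition: mean of |attacked - orig| / |orig|)."""
    orig = np.asarray(orig, dtype=float)
    attacked = np.asarray(attacked, dtype=float)
    return float(np.mean(np.abs(attacked - orig) / np.abs(orig)))
```

Under this reading, an MRE of 0 for the unattacked row and small values for the attacked rows correspond directly to the near-invariance reported in Tables 2–5 and 9.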

4.3. Scaling Invariance

GFrEMs were calculated for a set of scaled images. In this experiment, the Lena grayscale image was scaled by 0.75, 1.25, 1.5, 1.75, and 2 times, and the GFrEMs amplitudes of the scaled images were calculated and compared with those of the original image; the same fractional parameter $t$ was used here. Figure 6 shows the experimental images after scaling by the different factors. The experimental results obtained are shown in Table 3.


Table 3: GFrEMs amplitudes of the scaled images, with the MRE of each.

Scaling 1:    4.6989 2.3031 2.8317 2.0184 1.7630 0.4170 2.2383 0.6868 0.4759 (MRE = 0)
Scaling 0.75: 4.7065 2.3117 2.8366 2.0243 1.7587 0.4179 2.2301 0.6807 0.4876 (MRE = 0.0142)
Scaling 1.25: 4.7098 2.3048 2.8243 2.0201 1.7773 0.4161 2.2281 0.6760 0.4575 (MRE = 0.0089)
Scaling 1.5:  4.7014 2.3052 2.8208 2.0214 1.7717 0.4056 2.2178 0.6949 0.4643 (MRE = 0.0088)
Scaling 1.75: 4.7015 2.3100 2.8141 2.0235 1.7776 0.4092 2.2195 0.6956 0.4629 (MRE = 0.0111)
Scaling 2:    4.7041 2.3094 2.8162 2.0241 1.7820 0.4100 2.2184 0.6893 0.4542 (MRE = 0.0129)

In the experiment, the original image was scaled by different factors and the moment values of the scaled images were calculated. As can be seen from Figure 6, the scaled images were blurred to different degrees. The experimental data show that the amplitudes of the same GFrEMs of each scaled image were approximately equal, which verifies the scaling invariance of GFrEMs.

4.4. Filtering Attack

Filtering attacks blur the edges of an image [36]; they include median filtering, Gaussian filtering, and average filtering. Here, we applied 3 × 3 and 5 × 5 median, Gaussian, and average filtering attacks to the original Lena grayscale image, using the same fractional parameter $t$ as above. The filtered images are shown in Figure 7, and the experimental results obtained are shown in Table 4.


Table 4: GFrEMs amplitudes under filtering attacks, with the MRE of each.

Original image:           4.6989 2.3031 2.8317 2.0184 1.7630 0.4170 2.2383 0.6868 0.4759 (MRE = 0)
Median filtering 3 × 3:   4.8520 2.4364 2.9286 2.0692 1.8443 0.4040 2.2637 0.7785 0.5097 (MRE = 0.0462)
Median filtering 5 × 5:   4.8887 2.4927 3.0907 2.1234 1.9249 0.4187 2.2692 0.7859 0.5636 (MRE = 0.0730)
Gaussian filtering 3 × 3: 4.6969 2.3042 2.8245 2.0224 1.7603 0.4115 2.2223 0.6867 0.4757 (MRE = 0.0042)
Gaussian filtering 5 × 5: 4.6969 2.3042 2.8245 2.0224 1.7603 0.4115 2.2222 0.6868 0.4757 (MRE = 0.0043)
Average filtering 3 × 3:  4.6935 2.3084 2.8107 2.0298 1.7580 0.4014 2.1893 0.6840 0.4742 (MRE = 0.0133)
Average filtering 5 × 5:  4.6835 2.3156 2.7835 2.0415 1.7536 0.3846 2.0962 0.6685 0.4606 (MRE = 0.0375)

As can be seen from Figure 7, filtering attacks do blur the edges of images, and different filtering attacks have different effects. Table 4 shows that as the filtering window grew, the MRE of the GFrEMs amplitudes of the filtered images also gradually increased, showing that filtering attacks do affect image quality. However, the MRE of the GFrEMs amplitudes after the 3 × 3 and 5 × 5 Gaussian filtering attacks remained below 0.005, which indicates that GFrEMs resist Gaussian filtering well. For median filtering and average filtering, the MRE stayed below 0.1, indicating that GFrEMs have a certain degree of resistance to median and average filtering.

4.5. JPEG Compression Attack

The JPEG compression attack is a common image attack [37]. Compression reduces the amount of data and improves transmission efficiency, but image information is lost in the process. Here, using the same fractional parameter $t$, we performed JPEG compression attacks on the original image with quality factors of 10, 20, …, 90 and then compared the GFrEMs amplitudes of the attacked images with those of the original image. Figure 8 shows the Lena image after JPEG compression with different quality factors, and the experimental results are shown in Table 5.


Table 5: GFrEMs amplitudes under JPEG compression attacks, with the MRE of each.

Original image: 4.6989 2.3031 2.8317 2.0184 1.7630 0.4170 2.2383 0.6868 0.4759 (MRE = 0)
JPEG 10:        4.7325 2.3606 2.9133 1.9455 1.6814 0.4932 2.1843 0.6832 0.5910 (MRE = 0.0534)
JPEG 20:        4.7013 2.3034 2.8232 2.0356 1.8330 0.3595 2.2649 0.6527 0.4986 (MRE = 0.0283)
JPEG 30:        4.7224 2.2860 2.7963 2.0299 1.7613 0.4511 2.2419 0.6887 0.4735 (MRE = 0.0155)
JPEG 40:        4.6783 2.3061 2.8149 2.0235 1.7930 0.4254 2.2350 0.6998 0.4705 (MRE = 0.0102)
JPEG 50:        4.7168 2.3028 2.8158 1.9998 1.7560 0.4275 2.2186 0.6718 0.4940 (MRE = 0.0109)
JPEG 60:        4.6797 2.2890 2.8320 2.0318 1.7796 0.4227 2.2731 0.6786 0.4786 (MRE = 0.0069)
JPEG 70:        4.6926 2.2923 2.8376 2.0192 1.7574 0.4217 2.2408 0.6816 0.4718 (MRE = 0.0048)
JPEG 80:        4.6986 2.3020 2.8258 2.0254 1.7635 0.4179 2.2386 0.6955 0.4670 (MRE = 0.0041)
JPEG 90:        4.6981 2.3017 2.8327 2.0138 1.7612 0.4181 2.2413 0.6809 0.4763 (MRE = 0.0016)

As can be seen from Figure 8, JPEG compression with different quality factors degrades image quality to different degrees. As Table 5 shows, the MRE becomes smaller as the quality factor becomes larger. From a quality factor of 30 upward, the MRE of the GFrEMs amplitudes of the compressed images stayed below 0.02, which indicates that GFrEMs have strong resistance to JPEG compression attacks.

4.6. Image Reconstruction

Image reconstruction performance is an important property of orthogonal image moments and reflects their accuracy. For an $N \times N$ image $f(i, j)$ and its reconstructed image $\hat{f}(i, j)$, the normalized mean square error was used in this paper to measure the reconstruction error [38]:

$$\varepsilon = \frac{\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \big[f(i,j) - \hat{f}(i,j)\big]^{2}}{\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} f(i,j)^{2}}.$$
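Under the normalized mean-square-error convention common in the moment literature (assumed here; the paper's exact normalization may differ), the measure can be computed as:

```python
import numpy as np

def reconstruction_error(f, f_hat):
    """Normalized mean-square reconstruction error (assumed convention):
    eps = sum((f - f_hat)^2) / sum(f^2)."""
    f = np.asarray(f, dtype=float)
    f_hat = np.asarray(f_hat, dtype=float)
    return float(np.sum((f - f_hat) ** 2) / np.sum(f ** 2))
```

A perfect reconstruction gives 0, and the values in Tables 6–8 can be read on this scale.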

4.6.1. Experiment 1

GFrEMs have good image reconstruction ability. GFrEMs, TFrEMs, GFrPHFMs, GFrRHFMs, GFrPCETs, GFrPCTs, and GFrPSTs were compared in this section. The Lena grayscale image was used in the experiment, the maximum moment order was varied from 10 to 50, and the fractional parameter $t$ was fixed. The experimental results are shown in Table 6.


Table 6: Reconstruction errors for maximum moment orders 10–50.

Method   | 10     | 20     | 30     | 40     | 50
TFrEMs   | 0.0444 | 0.0250 | 0.0189 | 0.1965 | 0.5314
GFrEMs   | 0.0444 | 0.0248 | 0.0172 | 0.0124 | 0.0095
GFrPHFMs | 0.1087 | 0.1010 | 0.0996 | 0.0974 | 0.0963
GFrRHFMs | 0.0502 | 0.0287 | 0.0203 | 0.0153 | 0.0121
GFrPCETs | 0.0852 | 0.0576 | 0.0450 | 0.0382 | 0.0334
GFrPCTs  | 0.0790 | 0.0521 | 0.0521 | 0.0333 | 0.0291
GFrPSTs  | 0.1902 | 0.1311 | 0.1061 | 0.0904 | 0.0783

As can be seen from Table 6, the images reconstructed by TFrEMs and GFrEMs kept small errors at low orders, but once the maximum moment order reached a certain value, the images reconstructed by TFrEMs showed large errors: both the edge region and the center region of the TFrEMs reconstructions deteriorated. In contrast, the errors of the images reconstructed by GFrEMs kept decreasing; there was no deterioration in the edge or center areas, and the image quality improved as the maximum moment order increased. For a more intuitive illustration, the line chart of the reconstruction error is shown in Figure 9. The chart shows that the images reconstructed by GFrEMs always maintained a small error, indicating that the GNI method further improves the accuracy of FrEMs. From Figure 9, the mean square errors of the images reconstructed by GFrEMs and GFrRHFMs were similar, and both maintained good reconstruction quality. GFrPCETs and GFrPCTs had similar reconstruction quality and kept the mean square error small, but their overall reconstruction quality was worse than that of GFrEMs, while GFrPHFMs and GFrPSTs had the worst reconstruction quality among the fractional-order moments calculated by the GNI method. On the whole, the reconstruction error of GFrEMs was always smaller than that of the other fractional-order moments calculated by the GNI method, which again verifies the reconstruction performance of GFrEMs.

4.6.2. Experiment 2

Since noise seriously affects the reconstruction performance of images [39], this experiment compared the image reconstruction errors of GFrEMs, TFrEMs, GFrPHFMs, GFrRHFMs, GFrPCETs, GFrPCTs, and GFrPSTs after adding salt-and-pepper noise. The experiment used the average over thirty images, with the maximum moment order varied from 10 to 50 and the fractional parameter $t$ fixed. The reconstruction results after adding the salt-and-pepper noise are shown in Table 7, and the error line chart is shown in Figure 10.


Table 7: Reconstruction errors with salt-and-pepper noise for maximum moment orders 10–50.

Method   | 10     | 20     | 30     | 40     | 50
TFrEMs   | 0.0589 | 0.0380 | 0.0281 | 0.2010 | 0.5298
GFrEMs   | 0.0565 | 0.0353 | 0.0259 | 0.0212 | 0.0168
GFrPHFMs | 0.1216 | 0.1146 | 0.1121 | 0.1079 | 0.1059
GFrRHFMs | 0.0626 | 0.0418 | 0.0342 | 0.0254 | 0.0195
GFrPCETs | 0.0985 | 0.0705 | 0.0551 | 0.0479 | 0.0433
GFrPCTs  | 0.0907 | 0.0643 | 0.0505 | 0.0437 | 0.0380
GFrPSTs  | 0.1989 | 0.1437 | 0.1158 | 0.1011 | 0.0873

Table 7 shows that the overall reconstruction quality became worse with the addition of noise, confirming that noise does affect image reconstruction. However, Figure 10 shows that the reconstruction quality of GFrEMs was always better than that of TFrEMs and the other fractional-order moments, indicating that GFrEMs further improve the antinoise performance of FrEMs. This again verifies that the GNI method is superior to the traditional method in image reconstruction performance.

4.7. Application of GFrEMs in Medical Images

In this section, the image reconstruction, antinoising attack, antirotation attack, antiscaling attack, antifiltering attack, and anti-JPEG compression attack performance of GFrEMs applied to medical images was tested experimentally [40]. Seventy grayscale images of equal size were used as the image library; the ten images shown in Figure 11 were selected randomly from it.

4.7.1. Experiment 1

In this section, GFrEMs are applied to the reconstruction of grayscale medical images, and salt-and-pepper noise was added separately at several levels to test the antinoising attack performance. The maximum moment order was varied from 10 to 50, and the fractional parameter $t$ was fixed. The experimental results are shown in Table 8.


Table 8: Reconstruction errors of GFrEMs on medical images for maximum moment orders 10–50.

10     | 20     | 30     | 40     | 50
0.0848 | 0.0417 | 0.0224 | 0.0134 | 0.0093
0.1031 | 0.0607 | 0.0405 | 0.021  | 0.0213
0.1013 | 0.0606 | 0.0391 | 0.0273 | 0.0207
0.1044 | 0.0598 | 0.0335 | 0.0280 | 0.0208

After adding the salt-and-pepper noise, the reconstruction quality of the images became worse and the error increased, which proves once again that salt-and-pepper noise seriously affects the reconstruction performance of image moments. Nevertheless, Table 8 shows that GFrEMs applied to the reconstruction of medical images still maintained a small error, which again verifies the reconstruction and antinoising attack performance of GFrEMs.

4.7.2. Experiment 2

In this section, we test the robustness of GFrEMs applied to medical images against common attacks. The GFrEMs amplitudes of the original image were compared with those of the attacked images [41], with the fractional parameter $t$ fixed. The attacks were as follows: median filtering with window size 3 × 3; Gaussian filtering with window size 3 × 3; average filtering with window size 3 × 3; JPEG compression with quality factors 90, 80, …, 10; image rotation by five angles; and image scaling with factors 0.5, 0.75, 1.5, and 2. The experimental results are shown in Table 9.


Table 9: GFrEMs amplitudes of the attacked medical image, with the MRE of each.

Original image:           2.7269 4.5518 2.7743 3.1475 2.4092 1.8828 2.4346 1.4592 0.5893 (MRE = 0)
Rotation 1:               2.7302 4.5458 2.7697 3.1432 2.3974 1.8810 2.4241 1.4498 0.5873 (MRE = 0.0035)
Rotation 2:               2.7263 4.5496 2.7645 3.1476 2.4004 1.8807 2.4292 1.4519 0.5869 (MRE = 0.0034)
Rotation 3:               2.7154 4.5444 2.7546 3.1560 2.3968 1.8862 2.4357 1.4475 0.5939 (MRE = 0.0057)
Rotation 4:               2.7143 4.5466 2.7506 3.1639 2.4014 1.8903 2.4358 1.4433 0.5898 (MRE = 0.0061)
Rotation 5:               2.7232 4.5460 2.7411 3.1541 2.3963 1.8935 2.4342 1.4565 0.5984 (MRE = 0.0061)
JPEG 90:                  2.7170 4.5359 2.7673 3.1415 2.4018 1.8745 2.4138 1.4466 0.5930 (MRE = 0.0054)
JPEG 80:                  2.7111 4.5286 2.7568 3.1407 2.4027 1.8614 2.3984 1.4361 0.5933 (MRE = 0.0085)
JPEG 70:                  2.7080 4.5053 2.7515 3.1499 2.4062 1.8828 2.3791 1.4580 0.5972 (MRE = 0.0080)
JPEG 60:                  2.7115 4.5223 2.7450 3.1302 2.3968 1.8709 2.3609 1.4350 0.5910 (MRE = 0.0111)
JPEG 50:                  2.6942 4.5107 2.7673 3.1265 2.3894 1.8607 2.3518 1.4157 0.5901 (MRE = 0.0162)
JPEG 40:                  2.7024 4.5454 2.7428 3.1117 2.3790 1.8496 2.3502 1.4251 0.6031 (MRE = 0.0194)
JPEG 30:                  2.6388 4.5037 2.7269 3.1086 2.3940 1.8565 2.3656 1.4065 0.5876 (MRE = 0.0160)
JPEG 20:                  2.7084 4.4552 2.8115 3.0255 2.3861 1.8364 2.2966 1.3742 0.5975 (MRE = 0.0316)
JPEG 10:                  2.5927 4.4060 2.7480 3.1508 2.3103 1.7518 2.3970 1.2439 0.5875 (MRE = 0.0378)
Scaling 0.5:              2.7377 4.6098 2.7222 3.0443 2.4141 1.9277 2.3103 1.5137 0.6864 (MRE = 0.0405)
Scaling 0.75:             2.7279 4.5653 2.7606 3.1183 2.3982 1.8971 2.3995 1.4640 0.6250 (MRE = 0.0147)
Scaling 1.5:              2.7224 4.5211 2.7663 3.1732 2.3820 1.8699 2.4440 1.4165 0.5607 (MRE = 0.0131)
Scaling 2.0:              2.7250 4.5064 2.7722 3.1869 2.3694 1.8581 2.4536 1.3977 0.5406 (MRE = 0.0184)
Median filtering 3 × 3:   2.9419 4.6767 2.8092 3.2498 2.5348 1.8274 2.5845 1.3654 0.4837 (MRE = 0.0575)
Gaussian filtering 3 × 3: 2.7266 4.5450 2.7663 3.1469 2.3941 1.8820 2.4249 1.4457 0.5893 (MRE = 0.0045)
Average filtering 3 × 3:  2.7265 4.5302 2.7481 3.1446 2.3622 1.8811 2.4033 1.4173 0.5903 (MRE = 0.0141)

As can be seen from Table 9, the MRE increased with the rotation angle of the original image, but it always stayed within 0.01. As the JPEG quality factor decreased, the MRE grew, yet it always stayed within 0.04. For scaling, however, factors much larger or smaller than 1 led to larger errors: when the original image was scaled by a factor of 0.5, the MRE reached 0.0405, which shows that scaling attacks do affect the image quality. Table 9 also shows that GFrEMs are highly resistant to Gaussian filtering attacks and reasonably resistant to average filtering attacks. Their resistance to median filtering attacks is comparatively weak, but the error remains small. On the whole, GFrEMs keep small errors after the various attacks, which demonstrates that GFrEMs applied to medical images are highly robust.
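The amplitude comparison above can be sketched numerically. The excerpt does not spell out the MRE formula, so the definition below — the mean of |A − A′| / |A| over the moment amplitudes — is an assumption; moreover, the tabulated MRE is presumably computed over the full moment set rather than only the nine displayed amplitudes, so the value obtained here need not match Table 9 exactly.

```python
# Assumed form of the mean relative error (MRE) between the GFrEMs
# amplitudes of an original image and an attacked copy.
import numpy as np

def mre(orig_amps, attacked_amps):
    """Mean of |A - A'| / |A| over the selected moment amplitudes."""
    orig = np.asarray(orig_amps, dtype=float)
    att = np.asarray(attacked_amps, dtype=float)
    return float(np.mean(np.abs(orig - att) / np.abs(orig)))

# Amplitudes of the original image and the 0.5-scaled image, from Table 9.
orig = [2.7269, 4.5518, 2.7743, 3.1475, 2.4092, 1.8828, 2.4346, 1.4592, 0.5893]
scaled_05 = [2.7377, 4.6098, 2.7222, 3.0443, 2.4141, 1.9277, 2.3103, 1.5137, 0.6864]
print(round(mre(orig, scaled_05), 4))
```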

5. Conclusion

EMs have good image description ability, but they suffer from numerical instability, which seriously affects their accuracy. Moreover, the traditional calculation method of EMs generates numerical integration errors, which restricts the development and application of EMs in pattern recognition and image processing. The main contributions of this paper are as follows: (1) the extension to FrEMs effectively alleviates the numerical instability problem and improves the antinoising attack performance and reconstruction performance of EMs to a certain degree; (2) the accurate calculation method based on GNI proposed in this paper effectively reduces the numerical integration error. The experiments show that FrEMs computed with this method perform strongly in image reconstruction, rotation angle estimation, geometric invariance, and resistance to filtering attacks and JPEG compression attacks. Although the proposed method has the above advantages, it takes a long time to calculate. Because the core of the proposed scheme is the GFrEMs, the number of multiplications required to compute a single moment follows from (24), and this cost is multiplied by the number of moments up to the maximum order; the overall computational complexity of the proposed scheme is therefore very high. Thus, our future work is to develop a fast calculation method for this scheme.
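The accuracy gain that GNI delivers over the traditional zeroth-order approximation can be illustrated on a 1-D stand-in integral. This is a sketch of the general Gauss–Legendre quadrature idea only, not the paper's 2-D moment computation; the oscillatory integrand, the cell count, and the quadrature order are chosen purely for illustration.

```python
# Sketch: replacing the zeroth-order (one sample per cell) approximation of
# an integral with per-cell Gauss-Legendre quadrature, the idea behind GNI.
import numpy as np

def midpoint_rule(f, n_cells):
    # Zeroth-order scheme: one sample at the center of each cell of [0, 1].
    x = (np.arange(n_cells) + 0.5) / n_cells
    return f(x).sum() / n_cells

def gauss_rule(f, n_cells, order=3):
    # Gauss-Legendre nodes/weights on [-1, 1], mapped into each cell.
    nodes, weights = np.polynomial.legendre.leggauss(order)
    h = 1.0 / n_cells
    total = 0.0
    for k in range(n_cells):
        center = (k + 0.5) * h
        total += (h / 2.0) * np.sum(weights * f(center + nodes * h / 2.0))
    return total

f = lambda x: np.cos(20 * x)      # oscillatory, like a high-order basis kernel
exact = np.sin(20.0) / 20.0       # exact value of the integral over [0, 1]
err_mid = abs(midpoint_rule(f, 16) - exact)
err_gni = abs(gauss_rule(f, 16) - exact)
print(err_mid, err_gni)           # the Gauss-rule error is far smaller
```

With the same 16 cells, the per-cell Gauss rule integrates polynomials up to degree 2·order − 1 exactly, which is why it suppresses the integration error that the zeroth-order scheme leaves behind.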

Data Availability

The data used to support the findings of this study are available at https://download.csdn.net/download/weixin_40394701/10228207.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was supported by the National Key Research and Development Program of China (2018YFB0804104), National Natural Science Foundation of China (61802212, 61872203, 61806105, 61701212, 61701070, and 61672124), Shandong Provincial Natural Science Foundation (ZR2019BF017), Project of Shandong Province Higher Educational Science and Technology Program (J18KA331), Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Projects of Shandong Province) (2019JZZY010127, 2019JZZY010132, and 2019JZZY010201), Jinan City “20 Universities” Funding Projects Introducing Innovation Team Program (2019GXRC031), Password Theory Project of the 13th Five-Year Plan National Cryptography Development Fund (MMJJ20170203), and Key Research and Development Program of Shandong Academy of Science.

References

  1. C. Wang, X. Wang, Z. Xia, B. Ma, and Y.-Q. Shi, “Image description with polar harmonic Fourier moments,” IEEE Transactions on Circuits and Systems for Video Technology, 2020, in press.
  2. B. Ma and Y. Q. Shi, “A reversible data hiding scheme based on code division multiplexing,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 9, pp. 1914–1927, 2016.
  3. M.-K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.
  4. Y. S. Abu-Mostafa and D. Psaltis, “Recognitive aspects of moment invariants,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, no. 6, pp. 698–706, 1984.
  5. C. Wang, X. Wang, Y. Li, Z. Xia, and C. Zhang, “Quaternion polar harmonic Fourier moments for color images,” Information Sciences, vol. 450, pp. 141–156, 2018.
  6. M. R. Teague, “Image analysis via the general theory of moments,” Journal of the Optical Society of America, vol. 70, no. 8, pp. 920–930, 1980.
  7. C.-H. Teh and R. T. Chin, “On image analysis by the methods of moments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, pp. 496–513, 1988.
  8. Y. Sheng and L. Shen, “Orthogonal Fourier–Mellin moments for invariant pattern recognition,” Journal of the Optical Society of America A, vol. 11, no. 6, pp. 1748–1757, 1994.
  9. Z. Ping, R. Wu, and Y. Sheng, “Image description with Chebyshev–Fourier moments,” Journal of the Optical Society of America A, vol. 19, no. 9, pp. 1748–1754, 2002.
  10. C. Wang, X. Wang, Z. Xia, and C. Zhang, “Ternary radial harmonic Fourier moments based robust stereo image zero-watermarking algorithm,” Information Sciences, vol. 470, pp. 109–120, 2019.
  11. B. Xiao, J.-F. Ma, and X. Wang, “Image analysis by Bessel–Fourier moments,” Pattern Recognition, vol. 43, no. 8, pp. 2620–2629, 2010.
  12. P.-T. Yap, X. Jiang, and A. C. Kot, “Two-dimensional polar harmonic transforms for invariant image representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 7, pp. 1259–1270, 2009.
  13. H.-T. Hu, Y.-D. Zhang, C. Shao, and Q. Ju, “Orthogonal moments based on exponent functions: exponent-Fourier moments,” Pattern Recognition, vol. 47, no. 8, pp. 2596–2606, 2014.
  14. H.-Y. Yang, S.-R. Qi, C. Wang, S.-B. Yang, and X.-Y. Wang, “Image analysis by log-polar exponent-Fourier moments,” Pattern Recognition, vol. 101, Article ID 107177, 2020.
  15. T. Wang and S. Liao, “Computational aspects of exponent-Fourier moments,” Pattern Recognition Letters, vol. 84, pp. 35–42, 2016.
  16. H.-Y. Yang, S.-R. Qi, P.-P. Niu, and X.-Y. Wang, “Color image zero-watermarking based on fast quaternion generic polar complex exponential transform,” Signal Processing: Image Communication, vol. 82, Article ID 115747, 2020.
  17. C. Singh and R. Upneja, “Error analysis in the computation of orthogonal rotation invariant moments,” Journal of Mathematical Imaging and Vision, vol. 49, no. 1, pp. 251–271, 2014.
  18. H. Zhang, Z. Li, and Y. Liu, “Fractional orthogonal Fourier–Mellin moments for pattern recognition,” in Communications in Computer and Information Science, pp. 766–778, Springer, Singapore, 2016.
  19. P. Kaur, H. S. Pannu, and A. K. Malhi, “Plant disease recognition using fractional-order Zernike moments and SVM classifier,” Neural Computing and Applications, vol. 31, no. 12, pp. 8749–8768, 2019.
  20. M. Yamni, A. Daoui, O. El ogri et al., “Fractional Charlier moments for image reconstruction and image watermarking,” Signal Processing, vol. 171, Article ID 107509, 2020.
  21. K. M. Hosny, M. M. Darwish, and T. Aboelenen, “New fractional-order Legendre–Fourier moments for pattern recognition applications,” Pattern Recognition, vol. 103, Article ID 107324, 2020.
  22. H. Zhang, Z. Li, and Y. Liu, “Fractional orthogonal Fourier–Mellin moments for pattern recognition,” in Proceedings of the Chinese Conference on Pattern Recognition, pp. 766–778, Springer, Chengdu, China, November 2016.
  23. J. Yang, D. Jin, and Z. Lu, “Fractional-order Zernike moments,” Journal of Computer Aided Design & Computer Graphics, vol. 29, pp. 479–484, 2017.
  24. K. M. Hosny, M. M. Darwish, and T. Aboelenen, “Novel fractional-order polar harmonic transforms for gray-scale and color image analysis,” Journal of the Franklin Institute, vol. 357, no. 4, pp. 2533–2560, 2020.
  25. R. Benouini, I. Batioua, K. Zenkouar, A. Zahi, S. Najah, and H. Qjidaa, “Fractional-order orthogonal Chebyshev moments and moment invariants for image representation and pattern recognition,” Pattern Recognition, vol. 86, pp. 332–343, 2019.
  26. K. M. Hosny, M. M. Darwish, and M. M. Eltoukhy, “Novel multi-channel fractional-order radial harmonic Fourier moments for color image analysis,” IEEE Access, vol. 8, pp. 40732–40743, 2020.
  27. R. Upneja, M. Pawlak, and A. M. Sahan, “An accurate approach for the computation of polar harmonic transforms,” Optik, vol. 158, pp. 623–633, 2018.
  28. S. X. Liao and M. Pawlak, “On the accuracy of Zernike moments for image analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1358–1364, 1998.
  29. C. Singh, E. Walia, and R. Upneja, “Accurate calculation of Zernike moments,” Information Sciences, vol. 233, pp. 255–275, 2013.
  30. Y. J. Jiang, Exponent Moments and Its Application in Pattern Recognition, Beijing University of Posts and Telecommunications, Beijing, China, 2011.
  31. B. Chen, M. Yu, Q. Su, H. J. Shim, and Y.-Q. Shi, “Fractional quaternion Zernike moments for robust color image copy-move forgery detection,” IEEE Access, vol. 6, pp. 56637–56646, 2018.
  32. B. He, J. Cui, B. Xiao, and Y. Peng, “Image analysis using modified exponent-Fourier moments,” EURASIP Journal on Image and Video Processing, vol. 2019, no. 1, Article ID 72, pp. 1–27, 2019.
  33. K. M. Hosny and M. M. Darwish, “Accurate computation of quaternion polar complex exponential transform for color images in different coordinate systems,” Journal of Electronic Imaging, vol. 26, Article ID 023021, 2017.
  34. C. Singh and R. Upneja, “A computational model for enhanced accuracy of radial harmonic Fourier moments,” in Proceedings of the World Congress of Engineering, pp. 1189–1194, London, UK, July 2012.
  35. B. Yang, J. Kostková, J. Flusser, T. Suk, and R. Bujack, “Rotation invariants of vector fields from orthogonal moments,” Pattern Recognition, vol. 74, pp. 110–121, 2018.
  36. B. Ham, M. Cho, and J. Ponce, “Robust guided image filtering using nonconvex potentials,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 1, pp. 192–207, 2018.
  37. B. Ma, L. Chang, C. Wang, J. Li, X. Wang, and Y.-Q. Shi, “Robust image watermarking using invariant accurate polar harmonic Fourier moments and chaotic mapping,” Signal Processing, vol. 172, Article ID 107544, 2020.
  38. C. Singh, E. Walia, R. Pooja, and R. Upneja, “Analysis of algorithms for fast computation of pseudo Zernike moments and their numerical stability,” Digital Signal Processing, vol. 22, no. 6, pp. 1031–1043, 2012.
  39. A. Daoui, M. Yamni, O. El ogri, H. Karmouni, M. Sayyouri, and H. Qjidaa, “Stable computation of higher order Charlier moments for signal and image reconstruction,” Information Sciences, vol. 521, pp. 251–276, 2020.
  40. Y.-Q. Shi, X. Li, X. Zhang, H.-T. Wu, and B. Ma, “Reversible data hiding: advances in the past two decades,” IEEE Access, vol. 4, pp. 3210–3237, 2016.
  41. Z. Xia, X. Wang, W. Zhou, R. Li, C. Wang, and C. Zhang, “Color medical image lossless watermarking using chaotic system and accurate quaternion polar harmonic transforms,” Signal Processing, vol. 157, pp. 108–118, 2019.

Copyright © 2020 Shujiang Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

