#### Abstract

A pixel-based pixel-value-ordering (PPVO) scheme has been used for reversible data hiding to achieve large embedding capacity and high-fidelity marked images. The original PPVO introduced an effective prediction strategy that operates in a pixel-by-pixel manner. This paper extends PPVO and proposes an obtuse angle prediction (OAP) scheme, in which each pixel is predicted by context pixels with a better spatial distribution. Moreover, to evaluate prediction power, a mathematical model is constructed and three factors, namely the context vector dimension, the maximum prediction angle, and the current pixel location, are analyzed in detail. Experimental results show that the proposed OAP approach achieves higher PSNR values than PPVO and other state-of-the-art methods, especially at moderate and large payload sizes.

#### 1. Introduction

Reversible data hiding (RDH) originated from a US patent by Barton in 1997 [1]. This technique can not only hide secret information in a cover medium but also restore the host medium perfectly while decoding the embedded data. This remarkable feature quickly attracted attention across the research community. Over the years, many applications have been developed in a wide range of fields, such as secret message transmission, authentication, court evidence, military affairs, and intellectual property protection. Moreover, the cover medium may be an image, text, audio, or even video. This paper focuses on the image RDH technique.

Over the past two decades, RDH has developed rapidly. Several application examples have been reported in the literature, such as image authentication [1, 2], integrity checking [3], trusted cloud data coloring [4], medical image processing [5, 6], and stereo image coding [7, 8]. All these practical applications benefit from a number of outstanding techniques, such as lossless compression, least significant bit (LSB) replacement, difference expansion (DE), integer wavelet transform (IWT), histogram shifting (HS), prediction error expansion (PEE), and sorting. For clarity, four categories are designated and elaborated below for a brief review in terms of the major technical routes.

Firstly, lossless-compression-based methods [9–13] hide data bits in space created by losslessly compressing a subset of the cover image. Fridrich et al. [9] released space by compressing a bit-plane of the cover image; the lowest such plane was chosen to hide a 128-bit hash value. Goljan et al. [10] proposed the so-called R-S scheme, in which the cover image was classified into three block categories, regular (R), singular (S), and unusable (U), on the basis of the smoothness captured by a discrimination function. R-blocks and S-blocks were prepared for data embedding using a flipping function, while U-blocks were skipped. Xuan et al. [12] proposed a high-capacity RDH scheme based on IWT, which utilized the coefficients of high-frequency subbands to carry data bits; furthermore, they compressed selected middle bit-planes of the IWT coefficients to make space. Celik et al. [13] put forward a compression method named generalized least significant bit (G-LSB), which took advantage of unaltered portions of the cover data as side information to improve the compression efficiency.

Secondly, DE-based schemes expand certain pixel differences to make space for data hiding. Tian [14, 15] presented the first DE method using pair-wise pixel differences based on the integer Haar wavelet transform [16]. Afterwards, Alattar [17] generalized DE from pixel pairs to arbitrarily sized pixel blocks, which improved the embedding rate measured in bits per pixel (BPP). From the viewpoint of computational complexity, Coltuc and Chassery [18] proposed a reversible contrast mapping scheme. Considering the magnitude of the difference value, Weng et al. [19] adjusted the sum of pixel pairs and the pair-wise differences to exploit an invariability property. Wang et al. [20] reformulated DE as a general integer transformation and extended this transformation. Coltuc [21] reduced the impact of the location map for low-distortion embedding.

Thirdly, HS-based approaches are among the most successful for image RDH. Ni et al. [22] introduced the fundamental HS scheme, and van Leest et al. [23] proposed a similar idea. Later on, Ni et al.'s HS scheme was extensively investigated and many improved works appeared [24]. Fallahpour and Sedaaghi [25] applied HS to specified blocks instead of the whole cover image; this trick increased EC and reduced distortion. Lee et al. [26] used a difference histogram rather than the conventional pixel intensity histogram. Experiments demonstrated that this Laplacian-like distribution, having a much higher peak point, performed better than the plain intensity histogram; hence, pixel spatial correlation was well exploited. Xuan et al. [27] modified the histogram of IWT high-frequency coefficients, which made distortion minimization feasible by selecting appropriate histogram pairs. Li et al. [28] carefully investigated many previous HS schemes [29], treated them as special cases, and constructed a general framework for HS-based RDH; in this way, one can derive a new RDH scheme simply by configuring its elements.

Finally, PEE-based algorithms, derived from the DE technique, have attracted considerable attention and have developed into a large branch of RDH. The key idea is that the computed prediction errors, rather than the raw pixel differences, are utilized for expansion. Thodi and Rodríguez [30] initiated PEE and embedded a large EC payload with low distortion. Incorporating HS, PEE exploited much of the inherent pixel correlation and performed well in distortion control. Hu et al. [31] designed an optimized payload-dependent overflow location map with a looser pixel shifting constraint than conventional location maps; it was constructed in two parts, embedding and shifting, which made the auxiliary bit stream highly compressible and freed space for the watermarking payload. Combining prediction error modification with the above predictor, Hong et al. [32] revised the histogram of prediction errors to vacate positions, enhancing EC efficiently, even several times higher than that of Ni et al. [22]. Sachnev et al. [33] invented a double-layer embedding method uniting HS and PEE, which greatly improved the accuracy of pixel prediction.

Recently, Li et al. [34] combined sorting, PEE, and HS into a high-fidelity RDH method called pixel-value-ordering (PVO). PVO displayed impressive performance for moderate payload sizes. Soon afterwards, Peng et al. [35] and Ou et al. [36] extended this predictor and produced the improved pixel-value-ordering (IPVO) and PVO-K techniques. All these predictors obtain prediction values in a block-by-block manner; at most two pixels can be predicted in each block, which constrains the embedding capacity (EC) to some extent. Qu and Kim [37] created a novel pixel-based predictor in raster-scanning order, which predicts pixels in a pixel-by-pixel manner. In this method, the current pixel is determined first and then predicted by its context pixels. The strategy has been verified as an effective prediction for high EC and peak signal-to-noise ratio (PSNR). However, PPVO only uses part of the available reference pixels, namely those within a right angle zone. To remedy this limitation, a novel obtuse angle prediction (OAP) method is proposed in this paper. Furthermore, the prediction factors and their power weights are analyzed in detail.

The rest of this paper is organized as follows. Section 2 presents four types of classic predictors: dynamic prediction (DP), half-enclosing prediction (HEP), full-enclosing prediction (FEP), and right angle prediction (RAP), with PPVO analyzed as a typical example of RAP. Section 3 specifies the proposed OAP method and its influencing factors in detail. Section 4 presents extensive experiments demonstrating the advantage of OAP. Section 5 concludes the paper.

#### 2. Related Works

In this section, four prediction schemes based on PEE including dynamic prediction [34–36], half-enclosing prediction [30–32], full-enclosing prediction [33, 40], and PPVO prediction [37] are analyzed as follows.

##### 2.1. Dynamic Prediction

Dynamic prediction obtains prediction values on the basis of context pixels with uncertain locations. Li et al. [34] introduced this type of prediction framework with PVO. Afterwards, Peng et al. [35] proposed IPVO and Ou et al. [36] put forward PVO-K as extensions. In general, PVO-series methods predict pixels in a block-by-block manner.

Firstly, the host image is cropped to a size suitable for division into a large number of nonoverlapping equal-sized blocks. As shown in Figure 1, a block B of general size n1 × n2 is considered. For simplicity, B can be written as a vector (x_1, x_2, ..., x_n) in raster-scan order, where n = n1 × n2.

Secondly, the scanned pixel vector is sorted into a new one in ascending order, yielding the ordered block. The sorting defines a one-to-one mapping σ such that x_σ(1) ≤ x_σ(2) ≤ ... ≤ x_σ(n). Ties are broken by position: if x_σ(i) = x_σ(j) and i < j, then σ(i) < σ(j).
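The sorting step with position-based tie-breaking can be sketched as follows (a minimal Python illustration, not the paper's code; the block values are hypothetical):

```python
# Sketch of the PVO sorting step: a block is flattened in raster-scan
# order and sorted ascending, breaking ties by original position so
# that the mapping sigma is unique and invertible (required for
# reversibility at the decoder).

def pvo_sort(block):
    """Return sigma such that block[sigma[0]] <= ... <= block[sigma[-1]],
    with ties broken by the original raster-scan index."""
    return sorted(range(len(block)), key=lambda i: (block[i], i))

block = [162, 158, 158, 160]   # pixel values in raster-scan order
sigma = pvo_sort(block)
print(sigma)                   # [1, 2, 3, 0] -> values 158, 158, 160, 162
```

Because the tie-break uses the original index, the decoder, holding the same (unmodified extreme) values, can reproduce σ exactly.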

Then, a prediction benchmark is established for the location of the current pixel or pixels; at most two pixels in a block can be designated as current pixels. Note that the PVO variants define different prediction errors and expansion methods. Here, the generation of the prediction error is the main point explained.

In the conventional PVO method, the second largest pixel x_σ(n−1) and the second smallest pixel x_σ(2) are determined as the two current pixels, which are predicted by the largest pixel x_σ(n) and the smallest pixel x_σ(1), respectively. Two prediction errors are obtained as e_max = x_σ(n) − x_σ(n−1) and e_min = x_σ(1) − x_σ(2).
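Assuming the standard PVO definitions above (the largest value predicts the second largest, and the smallest predicts the second smallest), the two errors can be computed with a short sketch:

```python
# Sketch of the conventional PVO prediction errors for one block.
# e_max is non-negative and e_min is non-positive by construction.

def pvo_errors(block):
    sigma = sorted(range(len(block)), key=lambda i: (block[i], i))
    x = [block[i] for i in sigma]      # sorted values
    e_max = x[-1] - x[-2]              # largest minus second largest
    e_min = x[0] - x[1]                # smallest minus second smallest
    return e_max, e_min

print(pvo_errors([162, 158, 158, 160]))   # (2, 0)
```

Bins e_max = 1 and e_min = −1 are typically the expandable ones in PVO-style schemes, which is why flat blocks yield high capacity.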

In the IPVO method, the two current pixels are decided in the same way as in PVO, but the two prediction errors are generated as d_max = x_u − x_v and d_min = x_s − x_t, where u = min(σ(n), σ(n−1)), v = max(σ(n), σ(n−1)), s = min(σ(1), σ(2)), and t = max(σ(1), σ(2)).

Thus, each of the corresponding prediction errors may take any integer value, including zero and negative values.

In the PVO-K method, the determination of the two prediction errors is more complicated than in PVO and IPVO. The following conditions are assumed for confirming the current pixels:
(i) The smallest and the largest pixel values of the block are identified.
(ii) The second smallest value is no less than the second largest value, and these two second minimum and maximum values come from different pixels.

Accordingly, the second smallest pixel and the second largest pixel are considered to be the current pixels, and the two prediction errors are derived from them.

PVO, IPVO, and PVO-K share a common feature in prediction confirmation: the current pixels can be determined only after pixel sorting. Reference pixels cannot be identified independently before sorting, and neither can the context pixels. Since both the current pixels and the context pixels are dynamic, this type of prediction is called dynamic prediction.

##### 2.2. Half-Enclosing Prediction

Thodi and Rodríguez [30] proposed the first PEE-based method, in which histogram shifting was combined to embed secret data. It introduced a simple predictor that exploits the inherent correlation of the context pixels well. Figure 2 describes a pixel block, in which the red bottom-right pixel is appointed as the current one and the other three blue ones are the context pixels. Thus, only pixels preceding the current pixel can be used for prediction. Here, this is categorized as half-enclosing prediction.

The predictor assumes that the three context pixels near the current pixel capture the local image features. Its output follows the median edge detector: with left neighbor a, upper neighbor b, and upper-left neighbor c, the prediction of the current pixel is min(a, b) if c ≥ max(a, b), max(a, b) if c ≤ min(a, b), and a + b − c otherwise.
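As an illustration, a MED-style predictor of this half-enclosing kind can be sketched as follows (a common formulation from predictive coding; the pixel values in the example are hypothetical):

```python
def med_predict(a, b, c):
    """Median-edge-detector style prediction from three causal neighbors:
    a = left, b = above, c = above-left of the current pixel."""
    if c >= max(a, b):
        return min(a, b)     # edge detected: take the smaller neighbor
    if c <= min(a, b):
        return max(a, b)     # edge detected: take the larger neighbor
    return a + b - c         # smooth region: planar extrapolation

print(med_predict(100, 104, 102))   # smooth case: 100 + 104 - 102 = 102
```

The three branches switch between edge-following and planar extrapolation, which is what makes this simple constructor exploit local correlation so well.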

##### 2.3. Full-Enclosing Prediction

A rhombus prediction scheme was presented by Sachnev et al. [33], which predicts pixels with high accuracy. The core idea is to use all four closely surrounding pixels to compute the prediction value. Figure 3 depicts such a case. Pixels are partitioned into two sets, the cross set and the dot set. In a single pass of embedding or restoring, if the cross set is assigned the current pixels, then the dot set serves as the context pixels.

As shown in Figure 3, the current cross pixel x is surrounded by four neighboring dot pixels v1, v2, v3, and v4. The predicted center pixel value is computed as the rounded-down average of the four neighbors, and the prediction error e is the difference between x and this prediction. The expanded prediction error carries a data bit as in the difference expansion scheme by Tian [14], e′ = 2e + b, where b is the to-be-hidden data bit.
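A minimal sketch of the rhombus prediction and error expansion, assuming the floor-of-average prediction described above (sample values are hypothetical):

```python
import math

def rhombus_embed(x, neighbors, bit):
    """Predict the center pixel from its four rhombus neighbors, then
    expand the prediction error by one data bit (e' = 2e + b)."""
    pred = math.floor(sum(neighbors) / 4)   # rounded-down average
    e = x - pred
    e_marked = 2 * e + bit                  # expanded error carries the bit
    return pred + e_marked                  # marked pixel value

x, nbrs = 121, [120, 122, 119, 121]         # neighbor sum 482 -> pred 120
print(rhombus_embed(x, nbrs, 1))            # e = 1, e' = 3 -> pixel 123
```

Decoding inverts this exactly: the neighbors are untouched in this pass, so the decoder recomputes `pred`, reads the bit as `e' mod 2`, and restores `e = e' // 2`.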

Besides this predictor, the histogram shifting technique was also utilized to improve embedding performance in terms of low distortion. Two thresholds, a negative one Tn and a positive one Tp, were introduced to separate the expandable set from the shifting set. Prediction errors belonging to [Tn, Tp] are expanded, and those in the outer regions are shifted towards the right or left. Thus, the complete error expansion algorithm is expressed as e′ = 2e + b if Tn ≤ e ≤ Tp, e′ = e + Tp + 1 if e > Tp, and e′ = e + Tn if e < Tn.
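The threshold-controlled expand-or-shift rule can be sketched as follows (assuming thresholds Tn < 0 ≤ Tp as described above; the example values are illustrative):

```python
def expand_or_shift(e, bit, tn, tp):
    """Expand errors inside [tn, tp]; shift errors outside so the two
    ranges stay separable at the decoder."""
    if tn <= e <= tp:
        return 2 * e + bit        # expandable: carries one data bit
    if e > tp:
        return e + tp + 1         # shift right, no data carried
    return e + tn                 # e < tn: shift left

print(expand_or_shift(0, 1, -1, 1))   # expanded: 1
print(expand_or_shift(5, 0, -1, 1))   # shifted right: 7
```

Expanded errors land in [2Tn, 2Tp + 1] while shifted ones land strictly outside that interval, so the decoder can tell the two cases apart from the marked error alone.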

##### 2.4. PPVO Prediction

Qu and Kim [37] presented PPVO recently. This method predicts pixels in a pixel-by-pixel manner. Every pixel, except a very small fraction, has a chance to be predicted, and thus the proportion of predictable pixels is larger than in block-based prediction schemes.

All the pixels are scanned in sequential raster-scanning order, from left to right and then from top to bottom. The top-left pixel of each context block is taken as the current pixel and the remaining pixels of the block are responsible for computing its prediction value. The block size is fixed in advance. Figure 4 draws the pixel distribution of PPVO.
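A simplified sketch of the pixel-by-pixel prediction idea can clarify how a context block drives embedding; this is a hedged illustration of the max/min comparison principle, not the exact PPVO rules of [37]:

```python
def ppvo_predict_embed(x, context, bit):
    """Simplified pixel-by-pixel prediction sketch: the current pixel is
    compared against the max/min of its context vector; only pixels at
    the extremes (prediction error 0) are expanded by one bit, pixels
    beyond the extremes are shifted, and inner pixels stay unchanged."""
    cmax, cmin = max(context), min(context)
    if cmin != cmax:
        if x >= cmax:
            return x + bit if x == cmax else x + 1
        if x <= cmin:
            return x - bit if x == cmin else x - 1
    return x   # inner pixel (or degenerate flat context): no change here

print(ppvo_predict_embed(160, [160, 159, 158], 1))   # e = 0 at max -> 161
```

Because the context pixels lie after the current pixel in raster order, they are still unmodified when the decoder (running in reverse) needs them, which is what makes the pixel-by-pixel scan reversible.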

#### 3. Proposed Method

##### 3.1. Obtuse Angle Prediction (OAP)

A pixel-based scheme predicts pixels following certain principles. In general, context pixels are employed to predict the current pixel. For reversibility, in raster-scanning order, the to-be-predicted pixel must be located before its context pixels. Figure 5 illustrates the pixel distributions within an angle zone: the red circle indicates the current pixel and the blue circles denote the context pixels. For clarity, pixels are processed from left to right and then from top to bottom. The minimum straight-line pixel distance is taken as the unit. From the viewpoint of location relations, three prediction structures are defined as follows.

**(a)**

**(b)**

**(c)**

*Definition 1 (prediction angle (PA)). *PA refers to the angle formed by the border sides between the current pixel and the context pixels. In Figure 5, PA is marked with green arcs.

*Definition 2 (right angle prediction (RAP)). *RAP stands for the pixel distribution whose PA is a right angle, in which the current pixel is located at the top-left position and the context pixels lie within the PA region. As shown in Figure 5(a), in the coordinate system, the current pixel sits at the vertex and two context pixels sit on the two boundary lines. In total, all the context pixels lie in the same quadrant or on the two bounding axes. Conventional PPVO belongs to RAP.

*Definition 3 (acute angle prediction (AAP)). *AAP refers to the type of prediction whose PA is an acute angle, in which the current pixel lies at the top-left location and the context pixels are located within the PA area. As shown in Figure 5(b), in the coordinate system, the current pixel sits at the vertex and two context pixels sit on the boundary lines. In total, all the context pixels lie inside one quadrant or on the negative axis only.

*Definition 4 (obtuse angle prediction (OAP)). *OAP refers to the type of prediction whose PA is an obtuse angle, in which the current pixel lies near the top-left location and the context pixels are located within the PA area. As shown in Figure 5(c), in the coordinate system, the current pixel sits at the vertex and two context pixels sit on the boundary lines. In total, the context pixels span two different quadrants or lie on the *x* positive axis and *y* negative axis. For convenience, if the current pixel is located immediately after the top-left position, the structure is denoted OAP-I. As the current pixel moves further from the top-left position, the structures are marked OAP-II, OAP-III, and so on; if the number of pixels in the first row is denoted by n1, the maximum index, and hence the highest OAP mark, is determined by n1.

It is easy to see that the difference between the three types of prediction is the current pixel's location, which causes different context pixel numbers and distributions. In terms of quantity, in RAP all the pixels in the first row except the current one are context pixels; in AAP there is no context pixel in the first row at all; and in OAP there is more than one context pixel in the first row. Hence, the valid context pixel quantities of the three predictions satisfy N_AAP < N_OAP < N_RAP. In terms of prediction angle, from the aforementioned pixel distributions, the layout advantage can be ordered as AAP < RAP < OAP. Overall, OAP and RAP each have different strengths, and AAP behaves the worst.

##### 3.2. RDH Algorithm for OAP

###### 3.2.1. Data Embedding Procedure

Figure 6 summarizes the embedding procedure with the following four steps.

*Step 1 (image preprocessing). *There are two main points. One is to modify marginal pixels that might overflow the gray-value range. For an eight-bit gray-scale image, the intensity range is [0, 255]. Since the maximum pixel modification is one, pixel values 0 and 255 should be adjusted to 1 and 254 before prediction. The other point is to prepare parameters and data for the follow-up procedure. A location map is used to mark the modified pixels, which keeps the workflow reversible; to reduce its size, it is usually compressed losslessly.
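The boundary adjustment and location map of Step 1 can be sketched as follows (a minimal illustration on a flat pixel list):

```python
def preprocess(pixels):
    """Clamp boundary values 0/255 to 1/254 so a +/-1 modification cannot
    overflow, and record which pixels were changed in a location map."""
    location_map = [1 if p in (0, 255) else 0 for p in pixels]
    adjusted = [1 if p == 0 else 254 if p == 255 else p for p in pixels]
    return adjusted, location_map

pix = [0, 37, 255, 200]
adj, lmap = preprocess(pix)
print(adj)    # [1, 37, 254, 200]
print(lmap)   # [1, 0, 1, 0]
```

At restoration time the map tells the decoder exactly which 1s and 254s must be mapped back to 0 and 255, which is why it must be embedded (compressed) along with the payload.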

*Step 2 (testing models). *OAP offers several optional schemes when the block size is large enough. For a given model, the context pixel distribution and the noise level of the current pixel's neighborhood have to be computed according to the EC demand. Generally, the iterative computation uses a unit step size.

*Step 3 (determining parameters). *A successful embedding happens only when EC is larger than the payload size. The minimum noise level threshold (NLT) and the context pixel number (CN) are the most important parameters. Auxiliary information (AI) makes the scheme reversible: the least significant bits (LSB) of the first AI-sized group of pixels are extracted to leave room for AI storage. Thus, the extracted LSBs, the compressed location map, and the payload are combined to form the to-be-embedded data sequence (DS). Assume that the cover image has a given pixel size. The AI contents are listed as follows:
(i) The length of the compressed location map.
(ii) The noise level threshold NLT (8 bits).
(iii) The context pixel number CN (8 bits).
(iv) The location of the last embedded pixel.

*Step 4 (data embedding). *The context pixels make up a vector C. The maximum and minimum values of this vector are denoted max(C) and min(C), respectively, and the current pixel is predicted from these two extremes.

The output prediction error is then computed from the current pixel and its prediction value.

A data bit shall be hidden into the current pixel according to the expanded prediction error, where the hidden bit is the to-be-embedded data bit.

The pixel then becomes the corresponding marked value.

Embed all the DS bits into the cover image according to (11)–(14). Substitute the LSBs at the AI locations with the AI to obtain the marked image. Calculate the PSNR values and select the highest one as the final result, which corresponds to the best embedding quality.
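The PSNR calculation used to pick the final result can be sketched as follows (standard definition for 8-bit images, on flat pixel lists):

```python
import math

def psnr(cover, marked, peak=255):
    """PSNR in dB between a cover and a marked image (flat pixel lists)."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, marked)) / len(cover)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

cover  = [100, 101, 102, 103]
marked = [100, 102, 102, 104]          # two pixels changed by 1
print(round(psnr(cover, marked), 2))   # MSE = 0.5 -> about 51.14 dB
```

Since every embedding change here is at most ±1, the MSE stays small and PSNR stays high, which is the quantitative basis of the fidelity claims in Section 4.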

###### 3.2.2. Data Extraction and Image Restoration Procedure

Figure 7 describes the data extraction and image restoration procedure with the following four steps.

*Step 1 (extracting AI for embedding parameters). *AI has a fixed length. Extract the LSBs of the first AI-sized group of pixels to recover the AI. Thus, four parameters are obtained: the length of the compressed location map, NLT, CN, and the location of the last embedded pixel.

*Step 2 (extracting DS). *Predict pixels with the same method as in (11), from the first embedding location to the last embedded pixel. To distinguish the extraction from the embedding procedure, the marked pixel and its prediction value are denoted separately.

The output prediction error is computed in the same manner as in embedding.

A data bit shall be extracted from the current pixel according to the expanded prediction error, yielding the to-be-restored data bit.

The pixels are then restored to their original values.

If each restored pixel equals the original one, a reversible data hiding scheme is obtained.
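As an illustration of this round trip, a hypothetical extractor that inverts a max/min expansion rule of the kind described in Step 4 of Section 3.2.1 can be sketched (a simplified sketch, not the paper's exact equations):

```python
def extract_pixel(y, context):
    """Invert a max/min expansion on a marked pixel: recover both the
    embedded bit (if any) and the original pixel value."""
    cmax, cmin = max(context), min(context)
    if y >= cmax:
        e = y - cmax
        if e in (0, 1):
            return y - e, e            # a bit was embedded at the max side
        return y - 1, None             # pixel was only shifted
    if y <= cmin:
        e = cmin - y
        if e in (0, 1):
            return y + e, e            # a bit was embedded at the min side
        return y + 1, None
    return y, None                     # inner pixel: untouched

print(extract_pixel(161, [158, 159, 160]))   # (160, 1): bit 1 recovered
print(extract_pixel(163, [158, 159, 160]))   # (162, None): shift undone
```

The context pixels are identical at embedding and extraction time, so the case analysis (embedded vs. shifted vs. untouched) is unambiguous, which is exactly the reversibility condition stated above.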

*Step 3 (decompressing the location map). *The extracted data includes the compressed location map, which must be decompressed with the matching lossless decoding method.

*Step 4 (restoring the cover image). *According to the decompressed location map, recover the modified pixels to their original values. Hence, blind extraction and image restoration are accomplished.

##### 3.3. Prediction Factors

To balance embedding capacity and image fidelity, the pixel block should be neither too small nor too large. In the literature [34–37, 40], block sizes in a moderate range are proposed. By Definition 2, the minimum size does not satisfy the OAP condition, and the maximum-size block may be somewhat complicated. Therefore, a medium-sized block is taken as an example for further analysis.

OAP-I and OAP-II are both depicted in Figure 8. Only the pixels after the current pixel are involved in the context vector, and their number equals its dimension. The dimensions for RAP, OAP-I, and OAP-II are denoted accordingly; the quantity advantage of context pixels decreases monotonically from RAP through OAP-I to OAP-II.

**(a)**

**(b)**

**(c)**

As observed from Figures 5 and 8, the current pixel and the two side boundary lines form a sector, which can be called the prediction sector. The sector angle and radius characterize the context pixel distribution. For further comparison of prediction capability, the sector radius is defined explicitly in Definition 5.

*Definition 5 (prediction sector radius (PSR)). *PSR refers to the degree of closeness between the current pixel and the active context pixels, measured in units of the minimum pixel distance. Unlike the plain unit pixel distance used before, it actually represents a range of distances.

In the blocks of Figure 8, the red pixel is the current pixel. The closest neighboring green pixels are called 1-radius context pixels, the next closest yellow pixels are 2-radius context pixels, and the outermost pixels are 3-radius context pixels. From this, the context vectors of RAP, OAP-I, and OAP-II within the different sector radii can be obtained.

Predicting the image Lena following (11) yields the active context vector dimensions within the different sector radii for each of the three structures.

Table 1 illustrates the pixel block sizes within different PSR values, and Table 2 lists the embedding capacities in different PSR values. Four features deserve to be noted as below.

(a) In all the three practical sector radii, the context pixel quantity and the maximum prediction angle in RAP are both inferior to those in OAP-I and OAP-II. Accordingly, the former's EC is larger than that of the latter two. Comparing OAP-I with OAP-II, the former has a bigger quantity and the latter has a slightly larger maximum prediction angle; thus, the two methods output EC at almost the same level.

(b) As the sector radius grows, the embedding capacities all decrease progressively.

(c) When the active radius equals the minimum one, OAP-I and OAP-II have the same context pixels and the same maximum prediction angle, yet the former's embedding capacity is larger than the latter's. This may be caused by the different locations of the current pixel.

(d) When the active radius is equal to the middle one or the maximum one, OAP-I has more context pixels than OAP-II, while the former's embedding capacity is lower than the latter's. This should be attributed to the more prepositive pixels within the sector radius of OAP-II.

The foregoing conclusions show that the context pixel quantity, the maximum prediction angle, and the current pixel location all influence the pixel prediction power. An appropriate prediction method should be adopted according to the required payload size in practice.

Referring to literature [34], the shifting bit ratio can generally measure the embedding distortion to the host image.

Here, “#” denotes the cardinality of a set, #E denotes the maximum number of embeddable bits, and #S denotes the maximum number of shifted bits, so the ratio is R = #S/#E. The smaller R is, the better the fidelity.
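Assuming the ratio is indeed the shifted count over the embeddable count (consistent with "the smaller, the better fidelity"), it can be computed from a histogram of prediction errors as follows (illustrative error values):

```python
def shifting_ratio(errors, tn=0, tp=0):
    """Shifting bit ratio: shifted bins over embeddable bins. Smaller is
    better, since shifted pixels are distorted without carrying data."""
    embedded = sum(1 for e in errors if tn <= e <= tp)
    shifted = len(errors) - embedded
    return shifted / embedded

errs = [0, 0, 1, 0, 2, -1, 0]          # 4 embeddable zeros, 3 shifted
print(shifting_ratio(errs))            # 3 / 4 = 0.75
```

The thresholds `tn`/`tp` are exposed as parameters because wider expandable bins trade higher capacity against more per-pixel distortion.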

Table 3 compares the shifting ratios in different sector radii. Some conclusions can be summarized as follows.

(a) In all the three practical sector radii, the context pixel quantity and the maximum prediction angle in RAP are both inferior to those in OAP-I and OAP-II. Accordingly, the former's shifting ratio is larger than that of the latter two. Comparing OAP-I with OAP-II, the former has a bigger quantity and the latter has a slightly larger maximum prediction angle; thus, the two methods output shifting ratios at almost the same level.

(b) As the sector radius grows, the shifting ratios all decrease progressively.

(c) When the active radius equals the minimum one, OAP-I and OAP-II have the same context pixels and the same maximum prediction angle; thus, they get almost the same shifting ratios. Evidently, the different locations of the current pixel in the two methods have negligible influence on the shifting ratio. This behavior differs from that of the embedding capacity in Table 2.

(d) When the active radius is equal to the middle one or the maximum one, OAP-I has more context pixels than OAP-II, while the former's shifting ratio is higher than the latter's. This should be attributed to the more prepositive pixels within the sector radius of OAP-II.

To sum up, at the minimum sector radius, the current pixel location has nearly no impact on the shifting ratio; however, at the middle and maximum sector radii, its effect is obvious.

##### 3.4. Prediction Factor Evaluation

Prediction power is mainly determined by three factors: the context vector dimension, the maximum prediction angle, and the location of the current pixel. To measure the factor weights, the prediction power P is modeled as a combination of the three factors with coefficients a, b, and c, where n is the context vector dimension, θ is the maximum prediction angle, and γ is the location measurement of the current pixel. By experience, the location measurements from the first to the third current pixel take increasing values. A weight vector (w1, w2, w3) is then derived by (23), where w1, w2, and w3 denote the respective factor weights and sum to 1. The bigger the value, the greater the impact; conversely, a small weight means the factor contributes less to prediction power.
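One plausible reading of this evaluation, offered here only as a hedged sketch (the paper's exact model and equation (23) are assumed, not quoted, and the sample numbers are invented): treat P as linear in the three factors, solve for the coefficients from three observations, and normalize their absolute values into weights.

```python
# Hedged sketch: fit P = a*n + b*theta + c*gamma from three observations
# (n = context dimension, theta = max prediction angle, gamma = location
# measurement), then normalize |a|, |b|, |c| into a weight vector.

def solve3(A, y):
    """Solve a 3x3 linear system A x = y by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    return [det([[y[i] if k == j else A[i][k] for k in range(3)]
                 for i in range(3)]) / d for j in range(3)]

# Illustrative observations only, NOT values from the paper's tables.
A = [[3.0,  90.0, 1.0],
     [2.0, 135.0, 2.0],
     [2.0, 120.0, 3.0]]
P = [1500.0, 1800.0, 1750.0]
a, b, c = solve3(A, P)
total = abs(a) + abs(b) + abs(c)
weights = [abs(a) / total, abs(b) / total, abs(c) / total]
print(round(sum(weights), 6))   # normalized weights sum to 1.0
```

Three observations determine three coefficients exactly, which matches the paper's procedure of deriving one weight vector per sector radius.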

Take the data in Table 1, for example, to calculate the prediction powers and factor weights. The predicted embedding capacities are expressed in terms of the model above. In the case of the minimum radius, solving the prediction equations yields the coefficients and hence the factor weights.

Similarly, as shown in Table 4, factor weights in the middle and the maximum radius can be obtained.

Table 4 gives the following results.

(a) Regardless of the sector radius, all three factors have an impact on embedding capacity. Moreover, the maximum prediction angle has the largest weight; in the three cases shown in the table, this indicator is greater than 0.5.

(b) The influence of the current pixel location on embedding capacity increases as the sector radius grows.

(c) At the minimum sector radius, the location measurement is the least important factor; however, at the middle and maximum sector radii, it exceeds the context vector dimension.

Take the data in Table 2 as the case to examine the prediction power for the shifting ratio. For the minimum sector radius, the prediction power is calculated in the same way, yielding the corresponding coefficients and factor weights.

The factor weights for the middle and the maximum sector radius can be obtained likewise; they are listed in Table 5.

The following deductions can be drawn from Table 5.

(a) No matter what the sector radius is, all three factors affect the shifting ratio prediction as well. The maximum prediction angle has the greatest weight, exceeding 0.5.

(b) As the predicted sector radius increases, the weight of the prediction vector dimension first decreases and then rises: at the minimum and maximum radii it is about 0.25, while at the middle radius it drops below 0.10.

(c) As the sector radius extends, the location measurement for the current pixel first goes up and then drops down: at the minimum radius it is about 0.17, while at the middle and maximum radii the same index becomes larger than 0.25.

##### 3.5. Prediction Power Test Procedure

As shown in Figure 9, the four steps of prediction power test procedure are listed below.

*Step 1. *To acquire large embedding capacity and high image fidelity, the block should be set to a proper size. A bigger block, having many more context pixels, leads to more accurate prediction and lower distortion; meanwhile, a smaller block allows more pixels to be predicted, which may produce larger embedding capacity. In this paper, a minimum and a maximum block size are selected accordingly.

*Step 2 (confirming the location of the current pixel). *According to the alternative methods, the current pixel should be located at the top-left position in RAP, at the next position to the right in OAP-I, and at the third position to the right in OAP-II.

*Step 3 (selecting the prediction sector radius and the corresponding context vector). *Enlarge the radius from the minimum value to the maximum one to obtain the pixel prediction values.

*Step 4 (embedding the payload and auxiliary bit stream and analyzing the factor weight distribution). *After the pixel predictions, follow the same procedure as PPVO to embed data into the host image. Compare the marked image with the original host image to get the output PSNR, and then investigate the factor weights via (26) and (27).

#### 4. Experimental Results

Two balancing indexes, EC and PSNR, are used to evaluate the performance of the proposed method. The six gray-scale test images shown in Figure 10 are all selected from the SIPI image set. The payload is the same random bit stream, and Matlab 2013a is adopted for all the experiments.

**(a)**

**(b)**

**(c)**

**(d)**

**(e)**

**(f)**

##### 4.1. EC Performance of Full Rank Prediction

Table 6 lists the predicted EC with full-rank context pixel vectors in different sector radii. Table 7 compares the prediction factor weights for EC. In the minimum and the middle sector radii, conclusions similar to those of Table 4 can be drawn: each factor weight is highly consistent, which indicates that the prediction factors have the same effect regardless of whether the image is rough or smooth. However, in the maximum radius case, the factor weights fluctuate greatly, indicating a noticeable dependence on image content under this condition. For this reason, Figure 11 portrays the concrete factor weight curves.

In Figure 11, the factors are labeled on the horizontal axis and the corresponding weights on the vertical axis; the coordinate positions map to the factors at different sector radii. At the maximum radius, the factors have distinct weight values: the most significant factor is the maximum prediction angle, the second is the current pixel location, and the least is the context vector dimension. Lena, Barbara, and Peppers are typical smooth images with similar gray-value distributions, and their factor weights are basically the same: the context vector dimensions have the least weight and the smallest variation range, the maximum prediction angles have the largest weight, and the current pixel locations have the middle weight, with the latter two varying within almost the same range. Baboon and Boat are rough images with strongly varying pixels and complex distributions: the context vector dimensions have the least influence, the maximum prediction angles take values close to the former, and the current pixel locations have the largest influence. Airplane is a smooth image with gently varying pixels: its least weight is the context vector dimension, yet it is higher than that of Lena, Barbara, and Peppers; its middle weight is the maximum prediction angle, below that of the common smooth images and exceeding that of the rough images; and its largest weight is the current pixel location, higher than that of the common smooth images.

In brief, four observations can be made.

(a) For the minimum and middle sector radii, the factor weights for predicted EC are highly consistent and independent of image content.

(b) At the minimum sector radius, the maximum prediction angle has the largest weight, nearly 0.60, and the context vector dimension has the smallest, about 0.17.

(c) At the middle sector radius, the most heavily weighted factor is the maximum prediction angle, the second is the current pixel location, and the least is the context vector dimension; their weights are about 0.65, 0.27, and 0.08, respectively.

(d) At the maximum sector radius, the factors differ markedly. In terms of variation range, the maximum prediction angle plays the most prominent role, the current pixel location takes second place, and the context vector dimension is the weakest.
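The paper's exact weighting model is not reproduced in this section, so as an illustrative assumption the following sketch derives factor weights by range analysis, a common technique in orthogonal-experiment settings: each factor's weight is the range (max − min) of its level means, normalized so the weights sum to 1. The trial data below are toy values, not measurements from the paper.

```python
# Sketch: factor weights via range analysis (an assumed model, not the
# paper's own formulation). Each trial row is
# (dimension_level, angle_level, location_level, predicted_EC).
from collections import defaultdict

def factor_weights(trials):
    """Return normalized weights for the three prediction factors."""
    ranges = []
    for idx in range(3):  # factor columns: dimension, angle, location
        sums, counts = defaultdict(float), defaultdict(int)
        for row in trials:
            level, response = row[idx], row[3]
            sums[level] += response
            counts[level] += 1
        means = [sums[lv] / counts[lv] for lv in sums]
        ranges.append(max(means) - min(means))  # spread of level means
    total = sum(ranges)
    return [r / total for r in ranges]         # normalize to sum to 1

# Toy data: the angle factor varies the response most, so it should
# receive the largest weight, matching the ordering reported above.
trials = [
    (1, 1, 1, 100), (1, 2, 2, 162), (2, 1, 2, 110),
    (2, 2, 1, 150), (3, 1, 1, 104), (3, 2, 2, 156),
]
w_dim, w_angle, w_loc = factor_weights(trials)
print(w_dim, w_angle, w_loc)
```

With these toy trials the ordering comes out as angle > location > dimension, mirroring the maximum-radius ranking described in the text.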

##### 4.2. Shifting Ratio Performance of Full Rank Prediction

Table 8 illustrates the predicted shifting ratio with full-rank context pixel vectors at different sector radii. Table 9 compares the prediction factor weights for the shifting ratio. For the minimum and middle sector radii, outcomes similar to those of Table 5 can be summarized: each factor weight is highly consistent, which confirms that the factors are independent of image content, whether rough or smooth. In the maximum-radius case, however, the factor weights change sharply, indicating a strong image dependence in this situation. Therefore, Figure 12 plots the corresponding weight curves.

In Figure 12, the horizontal and vertical axes have the same meaning as in Figure 11. It is easy to see that the factors' impact on the predicted shifting ratio differs widely. In the maximum-radius case, the factors are related to image content. From largest to smallest, the factor weights are ordered as the maximum prediction angle, the context vector dimension, and the current pixel location. Airplane, Lena, Barbara, and Peppers follow the same pattern.

For these images, the maximum prediction angle has the largest weight, the context vector dimension takes second place, and the current pixel location has the smallest weight; all three factors vary only slightly. For Baboon, the roughest image, the factors rank from largest to smallest as the current pixel location, the context vector dimension, and the maximum prediction angle, with closely spaced values. For Boat, a typical rough image, the factors fall between Baboon and the smooth images: the smallest weight is the context vector dimension, the intermediate weight is the maximum prediction angle, and the largest is the current pixel location.

From the above, the following observations emerge.

(a) For the minimum and middle sector radii, the factor weights for the predicted shifting ratio are nearly uniform and independent of image content.

(b) At the minimum sector radius, the maximum prediction angle has the largest weight, nearly 0.60, the context vector dimension has the smallest, about 0.17, and the current pixel location weighs about 0.25.

(c) At the middle sector radius, the most heavily weighted factor is the maximum prediction angle, the second is the current pixel location, and the least is the context vector dimension; their weights are about 0.65, 0.27, and 0.09, respectively.

(d) At the maximum sector radius, the factors vary considerably. Unlike the ordering observed for EC, the maximum prediction angle still plays the most significant part, but it is followed by the context vector dimension and then the current pixel location.

##### 4.3. Optimal Performance

In this section, the optimal PSNR performance is compared for six RDH methods: four existing approaches from the literature [34, 37–39] and the two proposed ones. To ensure objectivity, the same payload bit stream is used for all methods. OAP-I and OAP-II predict pixels at obtuse angles over a range of suitable block sizes. Hence, Qu and Kim's RAP, OAP-I, and OAP-II use at least 2 context pixels and at most 15, 14, and 13 context pixels, respectively. The highest PSNR value over the candidate configurations is taken as the optimal value.
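The "optimal value" selection above amounts to a search over candidate block sizes, keeping the marked image with the highest PSNR. A minimal sketch of that loop follows; the `embed` and `psnr` callables are hypothetical stand-ins (the actual OAP embedder is not reproduced here), and only the search structure reflects the procedure in the text.

```python
# Sketch of optimal-PSNR selection over block sizes. `embed` and `psnr`
# are hypothetical stand-ins, injected as parameters for illustration.
def optimal_psnr(cover, payload, block_sizes, embed, psnr):
    """Try each block size and return (best_size, best_psnr)."""
    best = None
    for size in block_sizes:
        marked = embed(cover, payload, size)   # hypothetical embedder
        if marked is None:                     # payload did not fit
            continue
        quality = psnr(cover, marked)
        if best is None or quality > best[1]:
            best = (size, quality)
    return best

# Toy demonstration with stand-in functions and made-up quality values:
sizes = [(2, 2), (2, 3), (3, 3)]
fake_quality = {(2, 2): 54.1, (2, 3): 55.3, (3, 3): 53.8}
best = optimal_psnr(
    cover=None, payload=None, block_sizes=sizes,
    embed=lambda c, p, s: s,          # "marked image" is just the size tag
    psnr=lambda c, m: fake_quality[m],
)
print(best)  # -> ((2, 3), 55.3)
```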

Table 10 lists the top embedding capacities of PVO, RAP, OAP-I, and OAP-II. On the whole, the EC of OAP-I or OAP-II is slightly less than that of RAP, though the values are very close; PVO clearly yields the smallest EC. Specifically, the average EC gains of RAP, OAP-I, and OAP-II over PVO are 12774 bits, 12567 bits, and 12482 bits, respectively.

Tables 11 and 12 compare the optimal PSNR values for payloads of 10000 bits and 20000 bits. Baboon is omitted from Table 12 because its capacity is below 20000 bits. As Table 11 shows, at 10000 bits OAP-I improves on RAP by 0.34 dB, 0.05 dB, −0.45 dB, 0.58 dB, 0.04 dB, and 0.32 dB, an average of 0.14 dB, while OAP-II improves on RAP by 0.41 dB, 0.01 dB, −0.48 dB, 0.53 dB, 0.11 dB, and 0.23 dB, an average of 0.09 dB. From Table 12, at 20000 bits OAP-I improves on RAP by 0.03 dB, 0.05 dB, 0.01 dB, 0.22 dB, and 0.29 dB, an average of 0.32 dB, while OAP-II improves on RAP by 0.12 dB, 0.02 dB, −0.04 dB, −0.08 dB, and 0.20 dB, an average of 0.25 dB. Thus, OAP-I slightly outperforms OAP-II.

Figure 13 compares the optimal PSNR performance. The minimum payload is 5000 bits and the step is 2000 bits. The results can be described from four aspects.

(a) All the tested methods produce marked images of good fidelity, and the PSNR values depend on image content. In Figures 13(a)–13(e), Luo et al. [38] yield the lowest (purple) curve, and Hong [39] produces a slightly higher (cyan) curve, especially at low-to-medium payloads. Qu and Kim's RAP [37] (green curve) exceeds both at all payload sizes. The black curve of Li et al.'s PVO [34] is close to the RAP curve at low payloads but drops quickly. For Lena, Barbara, Boat, and Peppers, PVO attains higher PSNR than Hong's method at medium-to-high payloads; for Barbara and Peppers, it even exceeds RAP. The OAP-I (red) and OAP-II (blue) curves are drawn in striking colors; the results demonstrate that the OAP-type methods visibly improve the optimal PSNR, especially at medium-to-high payloads. In Figure 13(f), the curves of Hong's and Luo et al.'s methods cross at roughly 20000 bits, after which the former declines more quickly than the latter.

(b) As the payload increases uniformly, PSNR follows the same trend for all methods but with differing severity: the decrease is nearly linear at small payloads and markedly nonlinear at large ones. PVO drops sharply upon entering the medium-payload range and then levels off. For Lena, Airplane, Barbara, Boat, and Peppers, PVO falls by 0.18 dB, 0.08 dB, 0.50 dB, 0.32 dB, and 0.48 dB at payloads of 23000, 27000, 19000, 19000, and 21000 bits, respectively.

(c) OAP-I and OAP-II hold a clear advantage, especially at moderate and large payload sizes, where they visibly improve on RAP.

(d) For a rough image such as Baboon, the methods differ considerably. OAP-I, OAP-II, RAP, and Hong's method remain nearly linear at small payloads and tend to coincide at moderate-to-large payloads. PVO is best at the beginning but falls off quickly in the middle and later stages. Although Luo et al.'s method gives the lowest values, it still exceeds 50 dB with gentle variation.


#### 5. Conclusion

In this paper, an OAP prediction scheme that fully exploits the location relevance of context pixels has been presented. The main contributions can be summarized in two key points. First, OAP extends the pixel-wise prediction method to achieve greater pixel distribution coherence; results demonstrate higher PSNR performance than the conventional PPVO method. Second, a mathematical model for evaluating pixel prediction ability is constructed, with prediction power as the evaluation index; three factors are analyzed in detail, namely the context vector dimension, the maximum prediction angle, and the current pixel location. Furthermore, compared with four other state-of-the-art methods, the proposed OAP scheme performs prominently.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61302106 and 61201393), the Fundamental Research Funds for the Central Universities (nos. 2014MS105 and 13MS66), and the Hebei Province Natural Science Foundation of China, Youth Science Fund (no. E2013502267). The authors are grateful to these foundations, their responsible persons, and all their lab associates.