Abstract

In recent years, thanks to Internet services, daily activities that used to require physical movement have become accessible to any user. As a result of such interconnection, millions of people from different countries are now able to communicate among themselves through the Internet, generating a great flow of data and classified information. Information on the Internet can be stolen, intercepted, anonymized, or even destroyed, resulting in cases of infringement of intellectual property rights and the loss or damage of data. In such a globalized and interconnected world, solid security measures have become increasingly important to ensure data privacy and confidentiality during transit. Nowadays, there is a variety of security mechanisms such as steganography, an information hiding technique that protects intellectual property by allowing the transmission of hidden data without drawing any suspicion. In order to achieve these goals, an adaptation of the nonlocal maximum likelihood filter is proposed. Filters of this class are generally used in images that require a high level of irregular pattern detection, based on the statistical dependence of the underlying pixels in the analysis area. Here, the filter is used in the wavelet domain as a detector of edges and/or discontinuities in images, in order to obtain greater selectivity when inserting information into the image. It strengthens the detection of the areas with the highest probability of containing noise, which are suitable areas to insert information so that it remains imperceptible in both a quantitative and a qualitative manner, as presented in the Results and Discussion.

1. Introduction

Steganography is the science of hiding information by means of a cover medium in such a way that even the presence of the message is invisible to any eavesdropper. The object, apparently harmless, is known as the “host” and the contained information as the “payload”. The variety of host objects can range from text files to images, audio, and/or video. The most common example is the use of images; they are used as hosts due to their omnipresence in our day-to-day activities, as well as their high level of redundancy in their representation. The image steganography techniques are classified according to their domain; the most frequently used are the spatial and the frequency domain ones. However, there are now some techniques that combine both domains, with the advantage of being adaptable to the nature of the image [1].

The spatial domain techniques are used by directly manipulating the pixels of the image to hide the information. They are also characterized by having the simplest schemes, a short implementation time [1], a reduced hardware requirement, and a low time complexity.

In the spatial domain, a steganographic algorithm directly modifies the data of the host image; the most representative algorithm in this domain is the substitution of the least significant bit (LSB). Although this method is simple, it has a greater impact on the image in comparison with other methods.

In general, the insertion mechanism is carried out from the LSB up to the 4th LSB. It can be assumed that inserting in the 4th LSB generates greater visual distortion in the host image, since the hidden information is seen as “unnatural”. Similarly, the distortion appears at the time of recovery of the inserted image.

This algorithm has been refined in order to decrease the distortions present both in the host image and in the recovered image; one of the most widely used methods is LSB-OPAP [2], which adapts the insertion of the information following the considerations of the LSB algorithm. It consists of the following steps: the information is inserted using the LSB method to obtain the stego-image C′; in parallel, the steganographic algorithm is performed using the technique in [2], obtaining the refined stego-image C′′. Then consider p_i, p_i′, and p_i′′, the pixel values corresponding to the i-th pixel in the host image C, the stego-image C′ obtained by the simple LSB substitution method, and the refined stego-image C′′ obtained after the OPAP. Let δ_i = p_i′ − p_i be the insertion error between p_i and p_i′. According to the embedding process of the simple LSB substitution method described above, p_i′ is obtained by directly replacing the k least significant bits of p_i with k message bits, so that −2^k < δ_i < 2^k. As a result, OPAP shows a slight improvement over the traditional LSB.
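
For clarity, the following sketch illustrates k-bit LSB substitution followed by the OPAP adjustment described above; it is a minimal Python illustration of the general technique, not the exact implementation of [2], and the function names are ours.

```python
import numpy as np

def lsb_embed(pixel, message_bits, k):
    """Replace the k least significant bits of an 8-bit pixel value
    with k message bits (given as an integer in [0, 2**k))."""
    return (int(pixel) & ~((1 << k) - 1)) | int(message_bits)

def opap_adjust(p, p_prime, k):
    """Optimal pixel adjustment: move the stego value closer to the original
    by +/- 2**k, which leaves the embedded k LSBs untouched."""
    delta = p_prime - p
    if delta > 2 ** (k - 1) and p_prime - 2 ** k >= 0:
        return p_prime - 2 ** k
    if delta < -(2 ** (k - 1)) and p_prime + 2 ** k <= 255:
        return p_prime + 2 ** k
    return p_prime

# Example: hide the bits 0b101 in the 3 LSBs of a pixel with value 200.
p = 200
p_prime = lsb_embed(p, 0b101, k=3)            # plain LSB substitution -> 205
p_double_prime = opap_adjust(p, p_prime, 3)   # OPAP refinement -> 197 (same 3 LSBs)
```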

Recently, to improve visual quality and security against histogram attacks, an approach based on LSB with a capacity of 1 bpp was proposed, reducing the probability of pixel changes to a modification of roughly 1/3 of the pixels. Due to the smaller modification of the stego-image pixels, it improves visual imperceptibility and also resists LSB-based detection attacks, such as HCF-COM steganalysis [3].

Yuan et al. [4] proposed a method based on multilayer adaptive steganography. The insertion of the secret image adapts to regions with different textures in the host image. The insertion of the information is done using the LSB algorithm, and the information can be extracted using an XOR-based operation. This method resists modern attacks with steganalyzers such as SPAM and AUS. One of the main problems of LSB-based methods is that, although they are simple to understand and apply and even flexible to integrate with other methods, their main vulnerability is the direct relationship between the insertion capacity and the visual quality of the stego-image: the latter degrades as the insertion approaches the maximum LSB level in a pixel.

The frequency domain, or transform domain, is another approach, which consists of diverse transformations used to represent the image information in the frequency or time-frequency domain.

To avoid the problems presented in the spatial domain, the processing in the frequency domain has been an adequate tool for signal filtering, pattern recognition, and image compression. A complicated problem to solve in the spatial domain becomes easy to deal with in the frequency domain, because the sharp edges and transitions in an image contribute significantly to the content of the high frequency of its transformation.

These frequency domain techniques insert the information through transforms that extract the frequency components of the image, so the places or zones in which the “visual quality” of the image will not be affected can be identified more precisely. These techniques, which are also common in compression, are often used because they extract characteristics of the host image that represent its high and low frequencies, where the high frequencies represent the edges or contours of the host image, thus allowing values of the host image to be exchanged for values of the image to hide. Therefore, to search for the right pixels in which to hide data, transform-based schemes are a reasonable approach. In these schemes, the host image is transformed in order to extract its main frequency characteristics.

Among these, there is the Discrete Cosine Transformation (DCT) technique and the Discrete Wavelet Transformation (DWT) technique [5, 6].

Some popular steganographic algorithms on the Internet that apply DCT are the following.

The Jsteg/JPHide algorithm has the following characteristics for the insertion of information [7].

Jsteg is (1) a steganographic tool based on LSB insertion; (2) the insertion is done by replacing the LSBs of the nonzero quantized DCT coefficients with the secret message bits. In JPHide, (1) the quantized coefficients are randomly selected with the help of a pseudo-random number generator that can be controlled with a key, and (2) the second LSB can also be modified. (3) The Jsteg capacity is equal to the number of DCT coefficients whose values are not equal to 0, 1, and -1 (this condition is selected to avoid ambiguity in the extraction of secret bits).

Another known algorithm is YASS (Yet Another Steganographic Scheme), which is explained below [8]:
(1) The input image in the spatial domain is divided into blocks of fixed size known as large blocks (B blocks). Within each large block, an 8 × 8 subblock called the host block (H block) is randomly selected.
(2) The bits of the secret message are embedded into the DCT coefficients of the H block by quantization index modulation (QIM), as sketched after this list.
(3) With the help of the Inverse Discrete Cosine Transform (IDCT) of the H block, a JPEG image can be obtained.
(4) Its advantages include the survival of the message bits in the active warden scenario; it also performs well against self-calibration-based steganalysis.
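
As a rough illustration of step (2), the following Python sketch shows generic QIM embedding in the DCT coefficients of an 8 × 8 block; the choice of coefficient positions and quantization step is ours and does not reproduce the exact YASS parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def qim_embed_block(block, bits, step=8.0):
    """Embed payload bits into an 8x8 block via quantization index modulation:
    each selected DCT coefficient is quantized onto an even (bit 0) or odd
    (bit 1) multiple of the step."""
    coeffs = dctn(block.astype(float), norm="ortho")
    positions = [(2, 1), (1, 2), (2, 2), (3, 1)][:len(bits)]  # illustrative mid frequencies
    for bit, (i, j) in zip(bits, positions):
        q = int(np.round(coeffs[i, j] / step))
        if q % 2 != bit:
            # move to the nearest quantization level with the required parity
            q += 1 if coeffs[i, j] / step >= q else -1
        coeffs[i, j] = q * step
    return idctn(coeffs, norm="ortho")
```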

For example, using the DWT, Subhedar et al. (2014) [1] obtain as their best result a value of PSNR = 54.819 dB with a 256x256 secret image. The authors propose a steganographic algorithm using an adaptation of the DWT called the redundant discrete wavelet transform (RDWT) together with QR factorization.

For the implementation of the DWT, Abdulaziz and Pang [1] use the vector quantization technique known as Linde-Buzo-Gray (LBG) together with the block codes known as BCH codes and the Haar discrete wavelet decomposition at one decomposition level. Their results indicate that the algorithm presents good quality with few perceptual defects.

Nowadays, there are also techniques adaptable to the nature of the image, in which spatial and frequency domain techniques are combined [5, 6].

Adaptive steganography is a special case of the two previous methods. It is also known as “insertion based on statistics”, “masking”, or “based on models”. This method takes global statistical characteristics of the image before attempting to interact with LSB/DCT/DWT coefficients.

The statistical values obtained from the images define where to make the changes. This approach is characterized by a random adaptive selection of pixels according to the host image and by the selection of pixels in blocks with a large local standard deviation. The latter is intended to avoid areas of uniform color (smooth areas). This behavior leads adaptive steganography to seek images with noise, either existing or deliberately added, that exhibits color complexity.

Wayner [5] coined the phrase “life in noise” in his book, pointing out the usefulness of inserting data in noise. It has been shown that this method is robust with respect to compression, cropping, and image processing [5].

Chin-Chen et al. [9] proposed an adaptive technique applied to the LSB substitution method. Their idea is to exploit the correlation between neighboring pixels to estimate the degree of statistical belonging. They discuss options with two to four crossing lines. The payload (embedding capacity) obtained is 355588 bits.

Hioki et al. [10] present an adaptive method called “A Block Complexity based Data Embedding” (ABCDE). The insertion is done by replacing selected blocks of the image that have a high noise content with other noise-like blocks that carry the embedded data. Block suitability is identified by two complexity measures that adequately discriminate between simple and complex blocks, based on run-length irregularity and on edges considered as noise. The hidden message becomes part of the noise of the image.

Regardless of their field of application, all image steganography techniques should focus on the following three main points: where to hide the information inside the image, the security level when embedding the information into the image, and the security level of the payload in case of intrusion. There are many steganography algorithms, and each one addresses these points differently.

In order to use steganography in images, it is necessary to select the specific regions in which to embed the image; these regions will be referred to as Possible Embedding Regions (PEB). A PEB can be any section or object inside the image that produces the minimum possible distortion. Appropriate PEBs can be recognized by abrupt changes in the values of surrounding pixels, which are interpreted as the edges of the objects inside the image. The edges are considered appropriate sections for hiding information because human sight is less sensitive to shape or color distortions in the peripheral areas of an object, combined with the fact that pixel values there are randomly distributed. The random pixel distribution allows the payload to be dispersed in the stego-image, reducing its detectability. This paper presents an adaptive steganography mechanism that employs three security levels for the retrieval of the embedded information. This embedding mechanism uses the spatial as well as the frequency domain to detect the edges of the PEB. The three levels require a primary key for each embedded datum and additionally verify whether the data is correct or not. If the three security levels are not met, the retrieval of the information can be blocked, which provides an additional layer of protection. Finally, the quantitative results of its performance will be shown. In this work, the cover images have 2 different dimensions, 1024 x 768 and 256 x 256 pixels, while the images to be hidden have 4 different dimensions: 712 x 534, 1024 x 768, 576 x 768, and 256 x 256 pixels.

2. Theory

In this section, the proposed mechanism for using steganography in images will be described in detail. The proposed mechanism is adaptive, which means that it analyses the spatial and the frequency domains of the PEB edges for the possible embedding of information.

2.1. Discrete Wavelet Transformation (DWT)

The DWT is used to analyze an image in both its spatial and frequency domains. It provides a time-frequency representation of the image. The DWT is computed by repeatedly filtering the image along each row and each column in order to obtain the different DWT coefficients. The DWT is useful because it analyses the information at high and at low frequencies in each pixel. The cover image goes through a filter bank, where the output of each filter is downsampled by two (wavelet transform); the filters have a finite impulse response.

The image processed with the low-pass filter provides a smooth (approximation) wavelet coefficient version of the input image, while the high-pass filter results in a version containing the edges of the image [11].

We have used the Haar DWT to decompose the cover image and, in the case of the image to be hidden, a Haar decomposition followed by a db4 decomposition is used [12].

2.2. Fourth Moment Wavelet

During the wavelet transform of the image, four subimages are obtained in different frequency bands. The submatrices obtained are considered as random variables. The fourth moment wavelet (FMW) is computed on these submatrices, considering the following model [6, 7]:

g(x, y) = f(x, y) + n(x, y),   (1)

where g(x, y) is the image with additive noise, f(x, y) is the original image, n(x, y) represents the white Gaussian noise, and (x, y) is the current position of the pixel. From (1), it can be understood that n is a 2D random vector with N consecutive samples of a real process following a Gaussian distribution with a mean equal to zero. From this consideration, the FMW is obtained as follows:

FMW = E[(W − μ_W)^4],   (2)

μ_W = E[W] = Σ_i w_i p(w_i),   (3)

where W is the coefficient submatrix, E represents the submatrix expected value, and p represents the probability of occurrence of each value in the sample. From (3), the mean and the FMW standard deviation are obtained; for a zero-mean Gaussian process with standard deviation σ, the FMW reduces to 3σ^4 [18, 19]. For the proposed method, the variability of the probability distributions is considered, and a selection threshold is finally chosen, based on the traditional considerations for hard and soft thresholding. The FMW obtained from the detail-decomposition coefficients will have a value higher than 3σ_n^4, where σ_n is the noise standard deviation. This is due to the conditions of variability of occurrence of the components of the image [6, 7].
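
As an illustration, a detail subband can be tested against the fourth moment of pure Gaussian noise (3σ^4); the snippet below is a minimal sketch of that idea, using a standard median-based σ estimate rather than the exact estimator of the proposed method.

```python
import numpy as np
import pywt

def fourth_moment(coeffs):
    """Fourth central moment (FMW) of a wavelet-coefficient submatrix."""
    c = coeffs.astype(float).ravel()
    return np.mean((c - c.mean()) ** 4)

layer = np.random.rand(256, 256)                 # placeholder grayscale layer
LL, (LH, HL, HH) = pywt.dwt2(layer, "haar")      # first-level decomposition
sigma_n = np.median(np.abs(HH)) / 0.6745         # robust noise SD estimate
# Detail coefficients that carry structure exceed the Gaussian-noise FMW of 3*sigma^4.
has_structure = fourth_moment(HH) > 3.0 * sigma_n ** 4
```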

2.3. Noise-Level Estimation Mechanism

The noise-level estimation in images requires improved accuracy in the filters in order to distinguish the edges and borders of the image and, thus, separate the noise from the edges. Quality in image manipulation allows accurate insertion in the possible embedding regions, detected as edges and noise [20]. In this research, a nonlocal filter is used to detect noise in the images, which consists of the progressive selection of image regions through each layer of their spatial composition. This filter is a multispectral extension of the nonlocal maximum likelihood (NLML) filter [21].

Besides, given that the noise standard deviation (SD) is an important reference for all nonlocal filters, an adaptation of the Maximum Likelihood Estimation (MLE) of noise levels is presented and compared with local and nonlocal MLE methods.

The SD is an important parameter for Nonlocal Means (NLM) and NLML filtering [22]. Accordingly, several noise estimation methods have been developed based on the intensity of the underlying pixels of the image being analyzed. However, this might sometimes not be enough to guarantee an accurate identification of the edges of the image. Because of this, the method in [22] is used, with the pertinent adaptations for edge estimation. In this regard, a Linearized Maximum Likelihood (LML) method has been proposed as an edge detector in images [23].

2.4. Estimation of Noise Standard Deviation

The precise estimation of σ is essential for the filtering quality, as well as for other image processing tasks such as the segmentation and the estimation of parameters to detect edges [24]. The LML approach has been proposed when the image information does not provide the information needed to detect edges and to determine the precise value of σ [25–27].

For this paper, we propose the use of a modified Noise Estimation filter using Local Maximum Likelihood (NE-LML) to detect thresholds, employing the FMW as a selection process. During the edge detection, the adaptation of the NE-LML is employed. The estimation of σ is made for each layer of the image, denoted k, by maximizing the likelihood of the Rician distribution with respect to the unknown values of the underlying intensity A_k and σ_k through the following equation [28]:

(Â_k, σ̂_k^2) = arg max_{A, σ^2} Σ_i [ ln(m_i / σ^2) − (m_i^2 + A^2) / (2σ^2) + ln I_0(A·m_i / σ^2) ],

where I_0 is the modified Bessel function of the first kind, m_i represents the intensity of the image in the local neighborhood, k represents the decomposed layer, and A_k represents the range (underlying signal) of the image in layer k. The combination of the FMW and the NE-LML filter allows the detection of appropriate areas to embed information in order to obtain the stego-image.
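
A small sketch of local maximum likelihood estimation of σ under a Rician model is shown below; it is a generic illustration of the estimator maximized above (using SciPy's optimizer and the numerically stable i0e Bessel routine), not the authors' exact NE-LML adaptation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

def rician_neg_log_likelihood(params, m):
    """Negative Rician log-likelihood of the intensities m for (A, sigma)."""
    A, sigma = params
    if A < 0 or sigma <= 0:
        return np.inf
    s2 = sigma ** 2
    z = A * m / s2
    log_i0 = np.log(i0e(z)) + z          # log I0(z), computed without overflow
    return -np.sum(np.log(m / s2) - (m ** 2 + A ** 2) / (2.0 * s2) + log_i0)

def estimate_sigma_local_ml(window):
    """ML estimate of the noise SD from a small neighborhood (e.g., 3x3)."""
    m = window.astype(float).ravel()
    m = m[m > 0]                          # Rician-distributed magnitudes are positive
    start = np.array([m.mean(), m.std() + 1e-6])
    res = minimize(rician_neg_log_likelihood, start, args=(m,), method="Nelder-Mead")
    return abs(res.x[1])
```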

3. Methods and Materials

3.1. Wavelet Decomposition

Suppose a digital image I is composed of a set of layers k, with each layer containing a set of samples s_1, s_2, ..., s_N that take integer values between 0 and 2^(J+1) − 1 (typically J = 7, i.e., 0 to 255). In order to improve the steganography performance on mobile devices, we carried out the wavelet decomposition through the Haar Discrete Wavelet Transformation Technique (HDWT) and the Daubechies Discrete Wavelet Transformation Technique (DDWT). The HDWT is frequently used for detecting edges, compressing images, coding, etc. Because the HDWT requires few computational resources, it has been principally used for image processing and pattern recognition; this low computational demand provides advantages when implementing complex algorithms in limited technologies such as mobile devices. The basic operation of the HDWT when applied to a two-dimensional signal that contains NxN samples is the following: each row of the image is filtered with a low-pass and a high-pass filter (LPF and HPF) and the output of each filter is downsampled by two in order to produce the images known as L and H. L is the image filtered with the LPF and downsampled in the x direction, and H is the image filtered with the HPF and downsampled in the x direction.

Afterwards, each column of the new images is filtered with the LPF and HPF and downsampled by two to produce 4 subimages (LL, LH, HL, and HH). LL is the original image filtered with the LPF in the horizontal and vertical directions and downsampled by two. LH is the original image filtered with the LPF in the vertical direction (and the HPF in the horizontal direction), downsampled by two. HL is the original image filtered with the HPF in the vertical direction (and the LPF in the horizontal direction), downsampled by two. HH is the original image filtered with the HPF in the horizontal and vertical directions, downsampled by two. The four subband images contain all the information present in the original image, but the sparse nature of LH, HL, and HH makes them well suited to compression [29].
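
The subband structure just described can be reproduced with PyWavelets; a minimal example is given below (PyWavelets returns the details in the order horizontal, vertical, diagonal, which is matched here to the LH, HL, HH naming used in the text).

```python
import numpy as np
import pywt

layer = np.random.randint(0, 256, (512, 512)).astype(float)  # one 8-bit layer

# One-level 2D Haar decomposition: approximation plus three detail subbands.
LL, (LH, HL, HH) = pywt.dwt2(layer, "haar")
print(LL.shape)   # (256, 256): half the rows and half the columns

# The four subbands contain all the information of the original layer.
restored = pywt.idwt2((LL, (LH, HL, HH)), "haar")
assert np.allclose(restored, layer)
```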

In the case of the DDWT, it is defined in the same manner as the HDWT. If the input signal f has N values, then the level-1 db4 transform maps the signal f to its approximation and detail subbands. The main difference between the HDWT and the DDWT lies in the definition of the scaling function and the wavelets. The DDWT belongs to the orthogonal wavelet family, is defined in a discrete manner, and is characterized by the number of vanishing moments for a given support. Each wavelet of this type generates a multiresolution analysis of different signal frequencies.

3.2. Detection of Threshold Noise Estimator

White Gaussian noise is spread in a generalized way across all the frequency components in which it occurs. In the case of images, this type of noise presents a normal distribution with a mean equal to zero and an unknown variance σ^2. In this specific case, to detect this kind of noise, we used the Gaussian white noise estimator T = σ̂ √(2 ln n) proposed by [30], where σ̂ is the noise standard deviation and n is the signal length. The SD is estimated using the first wavelet decomposition level, which contains a high frequency band of the image and a large number of noisy coefficients. The main aim of the noise estimator is to quantify the noisy coefficients inside the decomposed image subbands. To achieve this, estimation methods are used to provide a coefficient reduction. The main idea of a noise estimator is to detect the noisy coefficients in order to preserve the information related to the image. We propose the implementation of the FMW, which serves as a threshold for noise discrimination along with the adapted NE-LML.
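
Assuming the estimator in [30] is the widely used median-based rule with the universal threshold σ √(2 ln n), a compact sketch is:

```python
import numpy as np
import pywt

def noise_sd_estimate(layer):
    """Robust Gaussian noise SD estimate from the first-level diagonal details."""
    _, (_, _, HH) = pywt.dwt2(layer.astype(float), "haar")
    return np.median(np.abs(HH)) / 0.6745

def universal_threshold(layer):
    """Universal threshold sigma * sqrt(2 ln n), with n the signal length."""
    return noise_sd_estimate(layer) * np.sqrt(2.0 * np.log(layer.size))
```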

The FMW of an image can be related to its noise component n1, where n1 represents the noise in the host image obtained from the first wavelet transform [18, 19].

The FMW of the subband LL contains information larger than the FMW of pure noise, 3σ_LL^4, where σ_LL^2 denotes the noise power of the subband LL. Using this, the coefficients that represent noise can be localized and the threshold for the selection of the embedding region can be proposed.

We can estimate the noise power as σ_n^2 ≈ σ_C^2, where σ_C is the standard deviation of the cover image.

Finally, the power of the noise after going through the wavelet decomposition is scaled by the gains G of the low- and high-pass filters. Generalizing the formula, the noise power of the subband LL at any decomposition level j can be written as σ_{LL,j}^2 = G^{2j} σ_n^2; if the filter gain is taken as in [19], the noise power is preserved across levels, so σ_{LL,j}^2 = σ_n^2. Thus, to detect the noisy coefficients, the condition FMW ≤ 3σ_{LL,j}^4 is applied, finally obtaining the threshold value for the embedding of information using the noise estimator function of the NE-LML filter.

The function proposed for the embedding of information is named the noise estimator for information embedding based on the local maximum likelihood of the image (NEII-LML); from this function, the threshold criterion used to select the embedding coefficients is finally derived.

4. Proposed Method

In summary, the algorithm for the information concealment process is presented in Algorithm 1.

1: Preprocessing: The RGB cover image and the image to be hidden are separated into their respective layers. Each layer is an 8-bit grayscale image.
2: for each layer do
3: Cover layer decomposition: The Haar discrete wavelet transformation is applied to the cover layer, producing the subbands LL, LH, HL, and HH.
4: Hidden layer decomposition: The Haar discrete wavelet transformation is applied to the layer to be hidden, producing the subbands ll, lh, hl, and hh; the Daubechies 4 transformation is then applied to the approximation subband ll of the first decomposition level, producing the subbands ll1, lh1, hl1, and hh1.
5: Adjust information of subband: The scaling adjustment operation is applied to the subband ll1.
6: Noise detection: Using a 3x3 kernel as the noise detection mechanism based on the local maximum likelihood, the change threshold is determined.
7: if the element of the kernel meets the threshold condition then
8: Make the change in the subband New_HH for the element of the subband ll1.
9: Make the change in the subband New_LH for the element of the subband HL.
10: if the changed element in the subband meets the mapping condition then
11: Change the element in the subband New_HL according to (15).
12: else
13: Change the element in the subband New_HL according to (16).
14: end if
15: end if
16: Generate stego layer: Obtain the modified subband coefficients and apply the inverse Haar discrete wavelet transform to form the stego layer.
17: end for
18: Generate Stego Image: The resulting layers are combined to obtain an RGB image.

The proposed concealment method consists of two sections, which are concealment, and information mapping, generating the stego-image. Now, the sections of the proposed steganography algorithm will be explained in detail.

4.1. Information Concealment

To analyze the concealment process of an image with the proposed algorithm, it is necessary to separate the cover image and the image to be hidden into their respective layers; in this case, we worked in the RGB color space. Therefore, the layers obtained from each image correspond to the colors red (R), green (G), and blue (B). Afterwards, one level of “Haar” wavelet decomposition is applied to each layer of the cover image (IC); as a result, four subimages are obtained: the approximation (LL), the horizontal details (LH), the vertical details (HL), and the diagonal details (HH).

For the image to be hidden (IO), each of its layers receives two levels of wavelet decomposition: in the first level, the “Haar” wavelet is employed, obtaining 4 subbands (ll, lh, hl, and hh) for each layer of the image to be hidden; in the second decomposition level, the Daubechies 4 (db4) wavelet is applied to the approximation (ll), obtaining 4 new subbands: the approximation (ll1), the horizontal details (lh1), the vertical details (hl1), and the diagonal details (hh1).
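
With PyWavelets, the two-level decomposition of a hidden-image layer described above can be sketched as follows; this is an illustration of the decomposition scheme, not the exact implementation, and the subband naming follows the text.

```python
import numpy as np
import pywt

def decompose_hidden_layer(layer):
    """First level with Haar, then Daubechies 4 on the resulting approximation."""
    ll, (lh, hl, hh) = pywt.dwt2(layer.astype(float), "haar")
    ll1, (lh1, hl1, hh1) = pywt.dwt2(ll, "db4")
    return (ll, lh, hl, hh), (ll1, lh1, hl1, hh1)

def decompose_cover_layer(layer):
    """Single Haar level applied to a cover-image layer."""
    LL, (LH, HL, HH) = pywt.dwt2(layer.astype(float), "haar")
    return LL, LH, HL, HH
```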

Once they are obtained, the subbands to be employed are separated as follows: from the layers of the cover image, LH, HL, and HH are used; they contain the coefficient values of the horizontal, vertical, and diagonal details; that is to say, they keep the information from the edges of the image, which, when modified, is not noticeably affected in comparison with the original. In the case of the layers of IO, ll1 is used, which contains the approximation coefficient values corresponding to the second level of decomposition. This subband is chosen because the information it contains defines most of the image to conceal [31].

Prior to the information concealment, a scaling adjustment is performed, based on the work in [24]. This process is carried out on ll1 to avoid that the inserted values visually alter the information in the IC and thus provoke changes in the resulting image. Because we are working in an RGB space, the employed images have a depth of 24 bits. Taking this into consideration, the adjustment operation is described as in the following equation [13]: the ll1 coefficients are divided by 2^24, where ll1 is the approximation coefficients subband of the image to conceal from its second level of wavelet decomposition and 2^24 is the factor corresponding to base 2 (bit) raised to the image depth (24 bits).

As a result of the scaling adjustment, an adjusted version of the ll1 subband values is obtained; the new cover subbands are identified as New_LH, New_HL, and New_HH. These new information subbands are traversed through a windowing process, using a 3x3 kernel detection mechanism (Kernel(i, j), with elements indexed 1, 2, 3, ..., 9), which slides over the subband to detect in detail the values prone to be replaced. This kernel acts as a noise detector in the subband New_HH, using the following condition: “the corresponding noise threshold is calculated every time the kernel positions itself in the subband; if any of the values is less than or equal to the obtained threshold, this value is exchanged for one of the values to hide from subband ll1”, where ll1 is the second level of wavelet decomposition of IO, (i, j) is the current position of the kernel in the subband, and LL is the first level of decomposition of IC.
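
The sliding-kernel substitution can be sketched as below; the local threshold function, the kernel step, and the bookkeeping of the New_LH/New_HL map and key (equations (15) and (16)) are only stubbed here, since their exact forms are specific to the proposed method.

```python
import numpy as np

def embed_by_kernel(new_hh, ll1_adjusted, local_threshold):
    """Slide a 3x3 kernel over the diagonal-detail subband and replace
    noise-like coefficients with the (scaled) values to hide.

    new_hh          : copy of the cover HH subband, modified and returned
    ll1_adjusted    : scaled ll1 coefficients of the image to hide
    local_threshold : callable giving the noise threshold of a 3x3 window
                      (stand-in for the NE-LML / NEII-LML estimator)
    Returns the modified subband and the embedding positions (the map).
    """
    payload = ll1_adjusted.ravel()
    out = new_hh.copy()
    positions, nxt = [], 0
    rows, cols = out.shape
    for r in range(0, rows - 2, 3):          # non-overlapping steps assumed here
        for c in range(0, cols - 2, 3):
            if nxt >= payload.size:
                return out, positions
            window = out[r:r + 3, c:c + 3]
            tau = local_threshold(window)
            for i, j in zip(*np.nonzero(window <= tau)):
                if nxt >= payload.size:
                    break
                out[r + i, c + j] = payload[nxt]
                positions.append((r + i, c + j))
                nxt += 1
    return out, positions
```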

4.2. Mapping Embedded Information

To ensure that the embedded data can be recovered, the substitution of the values proposed in this research is done in the subband New_HH, in order not to alter the calculation of the threshold value at the moment of applying the Inverse Discrete Wavelet Transformation (IDWT). In the subband New_LH, a change is made to store the original value of the HL subband in the corresponding position. In the New_HL subband, the change is made according to a mapping function in which HL(x, y) is the current value of subband HL, ll1(x1, y1) is the current value of subband ll1, New_HL(x, y) is the new value of the subband, (x, y) is the current position in the subband obtained from IC, and (x1, y1) is the current position in the subband obtained from IO.

The mapping condition was proposed to guarantee minimal distortion and the retrieval of the information, so that the complement of the operation between the original HL subband and the value to hide from subband ll1 is stored in subband New_HL, according to the condition given by (15).

Meanwhile, the complement of the obtained value (the key) is stored in subband New_LH for New_HL, according to the condition given by (16). In this way, subband New_HL is used as another level of access security to the information, called the key. With the generated key and applying the complement with New_LH, the positions where the embedding was carried out are secured.

Once the subband ll1 values are hidden inside subband New_HH, the reconstruction process through the “Haar” wavelet is carried out, using the subbands LL, New_LH, New_HL, and New_HH, in each layer obtained according to the color space, in this case RGB. As a result, a new image, known as the stego-image, is obtained.

4.3. Image Recovery

In summary, the algorithm for the image recovery process is shown in Algorithm 2.

1: Preprocessing: The RGB stego-image is separated into its respective layers.
2: for each layer do
3: Stego layer decomposition: The Haar discrete wavelet transformation is applied to the stego layer, producing the subbands LL, LH (map), HL (key), and HH.
4: Extraction process: A 3x3 kernel used as a reconstruction detector is applied to the subband LH (map).
5: if the element in the kernel meets the verification condition then
6: Calculate the confirmation element using the verification function.
7: else
8: Calculate the confirmation element using the complementary verification function.
9: end if
10: if the difference between the confirmation element and the map value lies within the range [-1, 1] then
11: Extract the element of the subband HH into the recovered subband New_LL.
12: end if
13: Adjust information of subband: The adjustment operation is applied to the information in the recovered subband New_LL.
14: Generate recovered layer: The inverse Daubechies 4 discrete wavelet transform is applied to the recovered subbands to form the recovered subband ll; the inverse Haar discrete wavelet transform is then applied to the recovered subband ll and the remaining subbands to form the recovered layer.
15: end for
16: Generate Recovered Image: The resulting layers are combined to obtain an RGB image.

In order to carry out the extraction process using the proposed algorithm, it is necessary to separate the RGB steganographic image (SI) into its layers; the separation produces the red (RSI), green (GSI), and blue (BSI) layers. Afterwards, each layer receives one level of wavelet decomposition, employing the “Haar” wavelet. As a result of the decomposition of the RGB layers, four subbands are obtained, corresponding to the approximation coefficients (LLSI), the horizontal detail coefficients (LHSI), the vertical detail coefficients (HLSI), and the diagonal detail coefficients (HHSI).

Once the first level of wavelet decomposition is applied, the corresponding subbands are obtained; the subband LHSI stores the recovery map, which allows us to identify the areas where the information of the IO is stored.

Subband HL is used as a key, which allows confirming that the values located in the positions given by the map correspond to the hidden image; finally, subband HH stores the original values of the hidden image.

With the purpose of identifying the correctly embedded information, it is necessary to use the values stored in subband New_HL, because the generated key and the value stored in subband New_HH have to coincide; this corresponds to one of the original values of IO, obeying the verification condition described below. Next, the subband identified as New_LL is created, in which the extracted values will be stored. From the subband separation, the 3x3 kernel is defined; it is used to extract the position values where there might be a value corresponding to the hidden image. This kernel runs through the subband LH as a sliding window to extract the values in the correct order for the reconstruction.

Every time the kernel is positioned in the subband, a verification is executed on each of the values contained in the kernel; in this verification, the key obtained from the subband HL is used. If a hidden value is found in the corresponding position, the complement of the operation performed when the image was hidden must be recovered, while the subband HH stores the original values of the hidden image. Thus, the result should be equal or approximately equal to the original value stored in subband LH; the latter is used as the map. The verification function is defined in terms of the following quantities: N is the value to be corroborated with the map (subband LH) to obtain the position of the value to be extracted; HL(x, y) is the value of subband HL in the current position and stores the complement of the operation performed when the image was hidden; HH(x, y) is the value of subband HH in the current position and stores the original value of the hidden image; Kernel(i, j) is the kernel value in the current position.

Once the verification result is obtained, it is corroborated with the value of the map contained in subband LH through the kernel. If the difference between these values lies within the range [-1, 1], then the value stored in subband HH at the current position corresponds to a value of the hidden image, and the identified value is copied into the subband New_LL.

Once all the values of the hidden image are extracted, an adjustment operation is carried out to recover the original values of the hidden image.

Due to the fact that all of this is done in the RGB color space and that all of the images have a depth of 24 bits, the adjustment operation multiplies the extracted coefficients by the adjustment factor 2^24, where New_LL is the approximation coefficients subband of the extracted image at the second level of wavelet decomposition and the factor corresponds to base 2 (bit) raised to the image depth (24 bits). As a result of applying the adjustment operation, a new subband New_LL is obtained.

This is used, along with the second-level detail subbands of the hidden image (lh1, hl1, and hh1), to perform the wavelet recomposition.

This process employs the wavelet “db4”, and, as a result, the approximation ll coefficients subband, corresponding to the first level, is obtained. Then, the recomposition process is performed again, using the resulting subband ll, and subbands lh, hl, and hh. For this level, the “Haar” wavelet is used.
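
A minimal sketch of this two-step recomposition (scaling back by 2^24, then inverse db4 and inverse Haar) is given below; it assumes the scaling adjustment is the multiplicative inverse of the one applied at embedding time.

```python
import numpy as np
import pywt

def rebuild_hidden_layer(new_ll, second_level_details, first_level_details,
                         bit_depth=24):
    """Reconstruct one hidden-image layer from the extracted approximation.

    new_ll               : extracted (still scaled-down) approximation subband
    second_level_details : (lh1, hl1, hh1) from the hidden image's db4 level
    first_level_details  : (lh, hl, hh) from the hidden image's Haar level
    """
    ll1 = new_ll * (2.0 ** bit_depth)                      # undo the embedding-time scaling
    ll = pywt.idwt2((ll1, second_level_details), "db4")    # second-level recomposition
    return pywt.idwt2((ll, first_level_details), "haar")   # first-level recomposition
```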

The layers obtained in the reconstruction process are combined according to the color space in which the process was performed. As a result, a new image is obtained, which is called recovered image.

5. Results and Discussion

5.1. Testing and Results

In this section, the robustness, the embedded data imperceptibility, and the information retrieval are validated, as well as the quality of the stego-image and of the recovered one.

The experiments were done with a group of images ordered in pairs. In each pair, one image is used as the cover object, while the other is the image to be hidden. The pairs were formed randomly, resulting in the pairing shown in Figure 1.

The images selected for the validation tests have a .jpg format, with a depth of 24 bits in the RGB color space. Each test was done following the procedure described next:

(1) The proposed algorithm was used to hide the selected image inside the cover image, obtaining a new image tagged as the stego-image.

(2) Once the stego-image was obtained, the cover and the stego-image were compared with the following criteria:

(a) Correlation: it represents the statistical dependency; it establishes the linear relation between the change in magnitude and direction between two signals. The correlation between images A and B is defined as

r = Σ_x Σ_y (A(x, y) − Ā)(B(x, y) − B̄) / sqrt( [Σ_x Σ_y (A(x, y) − Ā)^2] [Σ_x Σ_y (B(x, y) − B̄)^2] ).   (20)

(b) The mean square error (MSE): the mean square error is a risk function between two images that quantifies the squared loss between the expected value and the obtained value; that is to say, it reflects the difference between both images. Equation (21) served to calculate this coefficient,

MSE = (1 / (M·N)) Σ_{x=1}^{M} Σ_{y=1}^{N} [I(x, y) − I′(x, y)]^2,   (21)

where MSE is the mean square error, I is the original image of size M × N, and I′ is the obtained image of the same size. The MSE serves as a base to calculate the next metric used. The peak signal-to-noise ratio (PSNR) is a term used to define the relation between the maximum possible power of an image and the noise that affects it. In this case, the noise is the presence of the new information embedded during the concealment, or the loss of information during the retrieval of the information from the stego-image. With a lower MSE, the PSNR will tend to infinity, which means that the image compared with the original is a faithful copy. Equation (22) was used to obtain this coefficient,

PSNR = 10 log_10 (MAX_I^2 / MSE),   (22)

where PSNR is the peak signal-to-noise ratio coefficient and MAX_I is the maximum value in layer I. For the RGB color space, the maximum value is 255.

(c) Root mean square error is given by

RMSE = sqrt(MSE).   (23)

(d) Normalized absolute error and image fidelity can be expressed as

NAE = Σ_{x,y} |I(x, y) − I′(x, y)| / Σ_{x,y} |I(x, y)|,   (24)

IF = 1 − Σ_{x,y} [I(x, y) − I′(x, y)]^2 / Σ_{x,y} I(x, y)^2.   (25)

(e) Histogram: the histogram of the images allows having a notion of their spectrum and how it is affected during the hiding and retrieval process. Besides, it allows the calculation of other metrics, as is the case of the standard deviation.

(f) Standard deviation: the standard deviation of an image reflects the dispersion of the values regarding its mean value; this value is obtained as the square root of the variance of the image. In order to obtain the variance out of the histogram of the image, (26) is employed,

σ_I^2 = Σ_i p(i) (i − μ_I)^2,   (26)

where σ_I^2 is the variance of the image, p(i) is the appearance frequency of the intensity value i, i is the pixel intensity, and μ_I is the average intensity of the image.
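
The metrics in (20)–(22) and (26) can be computed directly; the following helper functions are a straightforward Python rendering of those definitions, given here only as an illustration.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images of equal size, Eq. (21)."""
    a, b = a.astype(float), b.astype(float)
    return np.mean((a - b) ** 2)

def psnr(a, b, max_value=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (22); tends to infinity as MSE -> 0."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(max_value ** 2 / err)

def correlation(a, b):
    """Correlation coefficient between two images, Eq. (20)."""
    return np.corrcoef(a.astype(float).ravel(), b.astype(float).ravel())[0, 1]

def std_from_histogram(layer):
    """Standard deviation of an 8-bit layer computed from its histogram, Eq. (26)."""
    hist, _ = np.histogram(layer, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    mu = np.sum(levels * p)
    return np.sqrt(np.sum(p * (levels - mu) ** 2))
```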

Each of these measures was compared for each layer and for the image in general. The recovery process of the hidden image followed, obtaining a new image tagged as the recovered image.

Having obtained the recovered image, a second comparison was made between the latter and the image to be hidden, using the metrics described in point (2) for the case of the cover and the stego-image. Finally, the cover image and the image to be hidden were rotated, repeating the process from point (1). The rotation applied to the images was done according to what is established in Table 1, where the rounds column specifies the type of rotation applied to each image. The set of rounds is applied to each pair of images; because there are only 2 types of rotation, applied to the 8 pairs of images, 16 images are obtained in total.

From Figure 1, 4 sets in total are obtained. The rotation column refers to the rotation in degrees of each of the images. In the end, in the first set each image received a rotation of 0°; in the second round, the cover image is rotated 45° and the image to be hidden 0°; in the third round, the cover image is rotated 0° and the hidden image 180°; finally, in the fourth round, the images are rotated 45° and 180°, respectively.

As previously mentioned, the proposed algorithm was evaluated by means of image quality metrics applied to each obtained steganographic image and to each recovered image. Figure 2 shows a group of steganographic images obtained from the first round of tests together with the quality metrics of each one, from which it can be seen that no alteration in the images is visible.

From (20), (21), (22), (23), (24), (25), and (26), the algorithm performance is calculated, comparing the newly obtained images with the original ones. From the correlation between the images, it is possible to establish their degree of similarity, based on the fact that the correlation coefficient between the images reflects the level of linear relation between them, that is to say, the existing relation between the quantities of energy that they possess. When the coefficient value approaches 1, it reflects that the changes in energy, in terms of magnitude and direction, are similar; thus, it can be said that they are the same image. When it approaches 0, the coefficient value reflects a drastic change between the images, which implies that the original image was drastically altered. This coefficient is obtained with (20).

In Tables 2 and 3, the correlation coefficients obtained are shown. Based on these data, the worst result in the concealment process was in test 6 during round 3, in which the stego-image presents a difference of 1.5% with respect to the cover image. The best result was test 7 in rounds 1 and 3, in which the stego-image differs by barely 0.02% with respect to the cover image. Despite the fact that the stego-image differs in all the tests, the change is not visually perceptible. The best correlation per layer is in the red layer in test 7 in both round 2 and round 4, the best overall result is in test 7 in rounds 1 and 4, and it was also identified that the layer with the best correlation between the original image and the stego-image is the red layer. The worst layer correlation was found in test 6 in round 3; in the same test and round, the worst general correlation was presented; the layer with the worst correlation between the original image and the stego-image is the blue one.

In contrast with the results of the comparison between the cover and the stego-image, the results of the comparison between the recovered image and the image to be hidden shown in Table 3 reflect that, in 75% of the cases, the same image was recovered. However, in 81.25 % of the cases, the images are visually perceived as identical. The worst result was in test 6, during round 3, in which the recovered image differs by 19.5% from the image to be hidden. This case coincides with the worst case identified in the concealment process. In this table, the best results were not marked because in most cases they present the best possible result 1, which indicates that the stored image was recovered exactly; the layer with the best correlation identified is the red layer.

In order to validate the results, the MSE and PSNR metrics were used. The MSE allows identifying the alteration level of the obtained image with regard to the original image, whereas the PSNR, when used with images, allows us to measure the alteration level of the information with regard to its format. The PSNR of an RGB image in the JPG format typically lies between 35 dB and 55 dB; when presenting values outside this range, the images show drastic visual alterations, and so does their perception.

In Tables 4 and 5, the MSE results between the cover and the stego-image and between the recovered image and the image to be hidden, respectively, are shown. The obtained data confirm what was indicated by the correlation: with an MSE lower than 10, the image distortion is not noticeable to the naked eye. The obtained data show that the most drastic distortions occurred in test 6, which is also confirmed by the PSNR data.

As can be seen in Table 4, the lower the MSE, the lower the distortion in the resulting image and, therefore, the higher the quality of the stego-image. The lowest MSE was detected in test 2, round 2, green layer, in which the best overall result of the stego-image according to this metric was also obtained. On the other hand, the worst result was presented in test 3, round 1, green layer; likewise, the worst overall result of this metric was detected in the same test and round. When analyzing each layer, it was detected that, on average, the layer that presents the lowest MSE is the blue layer and the one that presents the worst results is the green layer; these results are at the end of Table 4.

In the case of Table 5, the results obtained allow us to observe that the MSE between the recovered image and the original image to be hidden is 0 in 75% of the cases; in other words, the best possible result is obtained. The worst result was presented in test 6, round 3, in the blue layer; likewise, the worst general result was obtained in the same test and round. When an analysis of each layer was made, it was detected that, on average, the layer with the lowest MSE and the best quality is the red layer and the one with the highest MSE and the worst quality is the blue layer; these results are shown at the end of Table 5.

Tables 6 and 7 show the PSNR results between the cover and the stego-image and between the recovered image and the image to be hidden, respectively. These values confirm the distortion presented by the images. In the case of the cover images, as the values lie in a range between 35 dB and 55 dB, the distortion is almost imperceptible, the most noticeable case being when the value approaches the lower limit. In the case of the recovered images, when exactly the same image is recovered, an MSE of 0 is obtained, so the PSNR tends to infinity, proving that the information is not lost and that the image has not been distorted; the exceptions are the cases in which the MSE is very large and the PSNR falls outside the range, making the visual distortion noticeable.

The results of Table 6 allow us to observe that there is a change between the cover images and the stego-images; however, the higher the PSNR, the lower the change that can be perceived quantitatively. The best result was detected in the green layer in round 2, test 2; in the same test and round, the best overall result was obtained. The worst result was detected in test 3, round 1, in the red layer; in turn, the worst overall result was detected in the same test and round. On average, as shown in Table 6, the red layer is the one that presents the least similarity according to this metric, while the blue layer shows the greatest similarity; these results are shown in the lower part of Table 6.

In Table 7, it can be seen that the best possible result of the PSNR is when it tends to infinity, meaning that the two compared images are the same. The worst result obtained in the recovery process was presented in test 6, round 3, in the blue layer; likewise, the worst overall result was obtained in the same test and round.

From the histograms of each layer of the images, it was identified that, in the recovered images where distortion is present, the cause is that the cover image has two characteristics that limit its random behavior and reduce the areas where the information can be hidden without causing distortions: the spectrum of the cover image is very narrow, and the distribution of its energy is mostly concentrated in a single area. In the case of the cover images that are distorted, it was identified that the images to be hidden present a very broad spectrum with the energy skewed after an abrupt change in its distribution.

The histograms of the cover images and stego-images are shown in Figure 3; for each pair of images, the corresponding histogram was plotted for each layer that composes them, each one shown in the color of its layer. The histograms corresponding to the cover images are shown in a light tone, while the histograms of the stego-images are shown in a darker tone. Plotting them in an overlapping way makes it possible to identify the changes in energy that may occur between the images; as a sign of the imperceptibility of the proposed algorithm, changes in intensity are not perceptible in most cases. The most noticeable alterations were detected in tests 4 (d), 6 (f), and 8 (h); in these cases a visual distortion was perceived between the cover image and the stego-image.

The histograms of the images to be hidden and the recovered images are shown in Figure 4; as with the cover images and stego-images, the histograms of each layer that composes the images were plotted. The light tone histograms correspond to the original images, while the dark tone histograms correspond to the recovered images. The comparison of the histograms of these pairs of images allows us to evaluate the integrity of the recovery process. By analyzing the histograms obtained, the results previously obtained by means of metrics such as correlation, MSE, and PSNR were reaffirmed. Among the results obtained, the histogram corresponding to test 6 (f) is the one in which the greatest number of discrepancies between the original image and the recovered image is distinguished.

Lastly, the metric that confirmed how the distribution of the obtained images was affected in relation to the originals, the standard deviation, was calculated from the information in the histograms. Table 8 shows the comparison of the standard deviation between the cover image and the stego-image; Table 9 shows the comparison of the standard deviation between the recovered image and the one to be hidden.

The results in Table 8 allow us to identify the difference between the standard deviation of the cover image and that of the stego-image. The smaller this difference, the smaller the changes in the information contained in each image; the smallest difference was detected in test 2, round 3, in the red layer, and the best general result was detected in the same test and round. The greatest difference was detected in test 6, round 4, in the blue layer; likewise, the greatest general difference was detected in the same test and round.

The results shown in Table 9 allow us to observe that, in 75% of the cases, quantitatively no change in the distribution of information between the recovered image and the original image was presented. The greatest difference was detected in test 6, round 2, in the green layer, and likewise the greatest general difference between the images was detected in the same test and round.

Through the standard deviation, the structural change suffered by the obtained images was identified. The value distribution was affected, as indicated by the histogram, which is reflected in the visual distortion, even though the structure is preserved. When there are changes in the energy distribution, the perception of ghost images is provoked, as shown in Figure 5.

The recovered images as well as the cover images were subjected to image quality metrics to check the efficiency of the algorithm proposed in the recovery process. Figure 6 shows the recovered images as well as the metrics corresponding to each one; with these metrics it can be perceived that the quality of the images is maintained in 87.5% of the cases.

Figure 7 shows a comparison of the cover images and the obtained stego-images. As can be noticed visually, in most cases no alteration in the image can be appreciated, so it cannot be detected that they contain hidden information inside.

Figure 8 shows the comparison between the recovered images and the original images to be hidden. In contrast to the concealment process, during the retrieval process the case of test 6 presented a noticeable visual distortion, while in the rest of the cases distortions, if any, were imperceptible.

In addition to the validation tests, an insertion capacity test was carried out. During this test, the amount of information inserted in the cover image corresponding to the image to be hidden was measured; this information corresponds to the values of each pixel of each layer, as well as the values of the key and map employed. Equation (27) describes the measurement of the insertion capacity, where the inserted information is computed from the amount of data to be inserted (pixels, key, and map) and the number of layers that make up the image, which in this case is equal to 3 for the RGB color space.
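
Since the exact form of (27) is not reproduced above, the following is only an illustrative approximation of the capacity measure: it counts the embedded ll1 coefficients plus one map and one key entry per coefficient, relative to the size of one cover layer.

```python
def insertion_capacity_percent(hidden_ll1_shape, cover_layer_shape,
                               overhead_per_value=2):
    """Approximate percentage of the cover layer occupied by the payload.

    hidden_ll1_shape   : (rows, cols) of the embedded ll1 subband
    cover_layer_shape  : (rows, cols) of one cover-image layer
    overhead_per_value : map and key entries stored per hidden coefficient
                         (an assumption about the bookkeeping cost)
    """
    payload = hidden_ll1_shape[0] * hidden_ll1_shape[1] * (1 + overhead_per_value)
    return 100.0 * payload / (cover_layer_shape[0] * cover_layer_shape[1])
```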

Table 10 shows the results of this test.

With this test, it is verified that the proposed algorithm inserts the information into a small fraction of the cover image; on average, the information of the image to be hidden occupies 5.01% of the cover image. The smallest result identified is 3.03% in tests 1 and 2; in the test with the best results, 3.51% was obtained, and in the worst result, 6.25% was obtained. This indicates that, despite obtaining efficient quantitative measurements, qualitative errors can appear, as was shown in the previous statements.

5.2. Comparison

Table 11 shows different proposed steganographic algorithms together with their results on classical test images used in image processing. Carvajal et al. (2013) [27] proposed the use of color complexity analysis (CLCES) accompanied by filters for the insertion of information, with a PSNR = 49.1956 dB, a similarity index of 0.9962, and a maximum capacity for the secret message of 4.3204e03 Kb. That work presents a steganalysis analysis through the IQM method, with a best result of 1/9 images detected as a possible carrier of information.

Carvajal et al. (2014) [14] proposed an adaptation of the algorithm based on variance estimation (VFES), presenting a result with PSNR = 34.3736 dB, a similarity index of 0.9969, and an insertion capacity of 165.336 Kb. That work does not show results of attacks performed on the host images.

Sidhik et al. (2015) [15] proposed a steganographic algorithm based on the wavelet fusion technique, which obtains a PSNR = 37.45 dB, does not report a similarity index, and shows an insertion capacity of 600x600 bits. That work does not show results of attacks performed on the host images.

Nazari S. et al. (2015) [16] proposed a steganographic algorithm which maps the cover image into a morphological representation containing morphological coefficients and then inserts the bits of the secret message applying a permutation and a coding matrix. The results obtained in that work are a PSNR = 49.38 dB; it does not report the similarity index and it obtains an insertion capacity of 4096 bits. That work does not show results of attacks performed on the host images.

Gulve et al. (2015) [17] proposed a steganographic algorithm which combines the spatial and frequency domains for the insertion of information. The results obtained in that work are a PSNR = 40.29 dB, with a similarity index of 0.800 and an insertion capacity of 650.608 bits. That work does not show results of attacks performed on the host images.

Subhedar M. et al. (2016) [11] proposed a steganographic algorithm applying the redundant discrete wavelet transform (RDWT) and QR factorization. The results obtained in that work are a PSNR = 54.8019 dB, with a similarity index of 0.9997 and an insertion capacity of 256x256 bits. In that work, an analysis of attacks is carried out through various tests such as cropping, rotation, edge sharpening, noise attacks, filtering, equalization, and compression. These techniques are not concentrated in a steganalyzer, such as IQM; however, the mentioned tests are contained in the IQM metrics. The authors present resistance against attacks of 52% for attacks with the DWT and 59% for attacks applying the contourlet transform.

The algorithm proposed in this paper uses an adaptation of the nonlocal maximum likelihood filter in the wavelet domain as a filter detector of edges, reliefs, or abrupt changes within the host image, accompanied by the noise detector threshold using the fourth moment wavelet, which is obtained from first-order statistics.

The information to be inserted is mapped into the submatrices obtained in the wavelet domain in such a way that the total recovery of the hidden information is guaranteed, together with the recovery criteria described in this work.

The main difference of this work lies in the adaptation of the nonlocal maximum likelihood filter in the wavelet domain and its sensitivity to noise in images, which allows selecting optimal zones for the insertion of information. By preserving the conditions of maximum adaptability to the medium, the visual quality measured by the PSNR is guaranteed; the PSNR expresses the ratio between the power of the original image information and the power of the noise (everything that is not part of the original image). In the present work, the PSNR is 56.1082 dB for the Splash test image.
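For reference, the PSNR figure quoted above follows the standard definition; a minimal sketch, assuming 8-bit grayscale images, is shown below.

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between a cover and a stego-image."""
    mse = np.mean((np.asarray(cover, dtype=float) - np.asarray(stego, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR tends to infinity
    return 10.0 * np.log10(peak ** 2 / mse)
```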

5.3. Steganalysis

Steganalysis is the reverse process of information hiding: it comprises techniques for identifying and detecting information that is not coherent with the context in which it is found.

In the application of steganalysis to digital images, increasingly advanced steganographic algorithms have been developed, making detection a complicated task, especially when current steganographic algorithms focus on hiding the information within noise.

Even when the images generated after the insertion appear to be of good visual quality, so that the changes are not identifiable at first sight, the insertion can affect the statistical behavior of the image, as well as its behavior in its different frequency decompositions.

Stego-images may travel through different types of communication channels, and these channels may be subject to different types of surveillance: (i) passive, in which the information sent through the channel is not reviewed; (ii) active, in which the channel is continuously under review; and (iii) a passive-active mix, in which the channel may or may not be monitored.

For this work, the analysis of the Image Quality Measures (IQM) was chosen. It consists of 10 tests based on characteristics of the image that are related through a function which must correlate well with the degree of satisfaction of an observer [12]. These quality measures have been used as functions for coding, evaluation, and prediction of the performance of image quality algorithms, as well as for measuring loss of image information.

The 10 IQM values are also classified according to the type of possible attack to which the stego-images may be exposed; they are listed below:

For active surveillance of the stego-images, the following are used: M1 = mean absolute error (MAE), M2 = mean square error (MSE), M3 = Czekanowski correlation measure, M5 = image fidelity (IF), M6 = cross correlation, M7 = magnitude of the spectral distance, and M10 = normalized mean square HVS error.

For passive surveillance of the stego-images, the following are used: M4 = average angle, M8 = average block distance in the spectral phase, M9 = average block distance in the spectral weight, and M10 = normalized mean square HVS error.

For mixed (active-passive) surveillance of the stego-images, the following are used: M1 = mean absolute error (MAE), M2 = mean square error (MSE), M3 = Czekanowski correlation measure, M7 = magnitude of the spectral distance, M8 = average block distance in the spectral phase, M9 = average block distance in the spectral weight, and M10 = normalized mean square HVS error.
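As an illustration of the simplest of these measures, the sketch below computes M1 (MAE), M2 (MSE), and M3 (Czekanowski) for a pair of 8-bit grayscale images; the spectral and HVS-weighted measures are omitted, and the small epsilon is an assumption added here to avoid division by zero.

```python
import numpy as np

def iqm_basic(cover, stego, eps=1e-12):
    """Pixel-difference IQM measures M1-M3 (illustrative, grayscale only)."""
    c = np.asarray(cover, dtype=float)
    s = np.asarray(stego, dtype=float)
    m1_mae = np.mean(np.abs(c - s))          # M1: mean absolute error
    m2_mse = np.mean((c - s) ** 2)           # M2: mean square error
    # M3: Czekanowski distance, 0 when both images are identical
    m3_czek = np.mean(1.0 - 2.0 * np.minimum(c, s) / (c + s + eps))
    return m1_mae, m2_mse, m3_czek
```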

For this work, mixed surveillance was chosen, and the results obtained are shown in Table 12.

Of the 8 tests performed, 7 achieved 85.71% message imperceptibility and 1 achieved 100% imperceptibility.

The measure whose result can be interpreted as noise added to the image is M10, the normalized mean square HVS error of the stego-image. The next measure that can be interpreted as noise is M7, which reflects the alteration of the spectrum of the image. However, these contributions may be considered negligible because the values obtained are small.

5.4. Discussion

In this work, a steganographic algorithm was proposed that applies filtering techniques to multispectral images, modifying the discrimination criteria of the nonlocal maximum likelihood filter models and computing the fourth moment in the wavelet domain; in this way it is possible to build a detector that selects more precisely the candidate points for inserting information within the host image.

The use of the wavelet domain in the implementation of the steganographic algorithm helped to decrease the distortion of the stego-image, because each subimage obtained represents a frequency level which highlights or attenuates the edges and reliefs of the image, as illustrated in the sketch below.
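The sketch perturbs only the diagonal detail subband of a one-level DWT and reconstructs the image; the perturbation used here is a placeholder, not the insertion rule of the proposed algorithm, and the random host image stands in for a real cover image.

```python
import numpy as np
import pywt

# Placeholder host image; in practice this would be the cover image.
host = np.random.randint(0, 256, (256, 256)).astype(float)

# One-level 2-D DWT: cA holds the coarse content, cH/cV/cD the detail
# (edges and reliefs) at the corresponding orientations.
cA, (cH, cV, cD) = pywt.dwt2(host, "haar")

# Small perturbation confined to the diagonal detail subband
# (a stand-in for hidden bits, not the paper's insertion rule).
cD_marked = cD + 0.5 * np.sign(cD)
stego = pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")

# The distortion stays small because only one high-frequency subband changed.
print("MSE after perturbing cD:", np.mean((host - stego) ** 2))
```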

To evaluate the performance of the algorithm as well as the quality of the images obtained, correlation, MSE, and PSNR measurements were used; the correlation reaches values close to 1 and the PSNR tends to infinity, which indicates that the difference between the original image and the stego-image is not perceived quantitatively. The results of the proposed steganographic algorithm also hold under variations in the viewing angle. They show that the adaptations of the filters, as well as the use of the fourth wavelet moment as a noise discriminant, keep the distortion of the stego-image to a minimum.
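For completeness, a brief sketch of the remaining comparisons used alongside MSE and PSNR, namely the correlation coefficient and the difference of standard deviations, is given below; both operate on grayscale arrays and are illustrative only.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation between two images (1.0 when they are identical)."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return np.corrcoef(a, b)[0, 1]

def sd_difference(a, b):
    """Absolute difference between the standard deviations of two images."""
    return abs(np.asarray(a, dtype=float).std() - np.asarray(b, dtype=float).std())
```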

The proposed steganographic algorithm was also subjected to steganalysis tests to evaluate its resistance with respect to various factors such as image recovery capacity, similarity between stego-image and host image, invariance to time, frequency, and phase changes, and fidelity of the image, showing results above 85%, which demonstrates the degree of reliability of information recovery as well as the imperceptibility of the inserted information.

6. Conclusions

The proposed algorithm adapts nonlocal maximum likelihood filters in the wavelet domain, using the fourth wavelet moment to obtain a discriminant with considerable noise-detection capability in the host image, with the aim that the inserted information be as imperceptible as possible both to the human eye and to quantitative estimators of image quality. From the results obtained with each of the aforementioned metrics (correlation, MSE, PSNR, and standard deviation), it was observed that, for the correlation between the cover image and the stego-image, the lowest performance occurred in test 6 round 3 with a correlation of 0.985136, while the best performance was obtained in test 7, in both round 1 and round 3, with a correlation of 0.999895. On average, the correlation between the cover image and the stego-image was 0.99687697.

For the correlation between the recovered image and the image to be hidden, the lowest value, 0.805666, was obtained in test 6 round 3; on the other hand, the best performance, a correlation of 1, was obtained in test 1 in all rounds, test 2 rounds 1 and 2, test 3 rounds 3 and 4, test 4 in all rounds, test 5 in all rounds, test 7 in all rounds, and test 8 in all rounds. On average, considering all scenarios, the correlation between the recovered image and the image to be hidden is 0.98095813.

For the MSE between the cover image and the stego-image, the lowest value, 0.260410, was obtained in test 2 round 4, and the highest value, 8.276878, in test 3 round 1. The average MSE between the cover image and the stego-image, taking into account all test scenarios, was 2.60873384.

For the MSE between the recovered image and the image to be hidden, the lowest value, 0, was obtained in test 1 in all rounds, test 2 in rounds 1 and 2, test 3 in rounds 3 and 4, test 4 in all rounds, test 5 in all rounds, test 7 in all rounds, and test 8 in all rounds; on the other hand, the highest value, 41.232236, was obtained in test 6 round 3. The average MSE between the recovered image and the image to be hidden, taking into account all test scenarios, was 5.69734713.

For the PSNR between the cover image and the stego-image, the highest value, 53.974364 dB, was obtained in test 2 round 4, while the lowest, 38.952281 dB, was obtained in test 3 round 1. The average PSNR, taking into consideration all test scenarios, was 46.3926254 dB.

For the PSNR between the recovered image and the image to be hidden, the highest performance tends to infinity; this value was obtained in test 1 in all rounds, test 2 rounds 1 and 2, test 3 in all rounds, test 4 in all rounds, test 5 in all rounds, test 7 in all rounds, and test 8 in all rounds; on the other hand, the lowest performance, 32.245378 dB, was obtained in test 6 round 3. In this case, the average PSNR, considering all tests, tends to infinity; this follows from the fact that there is no difference between the signal power of the original image and that of the stego-image, and, in the same way, between the image to be hidden and the recovered image.

For the SD metric between the cover image and the stego-image, the largest difference detected between both images, 0.005126, was obtained in test 6 rounds 2 and 4, while the smallest difference, 0.000010, was obtained in test 6 rounds 1 and 3. On average, the difference between the SD of the cover image and the SD of the stego-image, considering all test scenarios, is 0.00052246.

For the SD between the recovered image and the image to be hidden, the largest difference detected, 0.000814, was obtained in test 6 round 2, while the smallest difference, 0, was obtained in test 1 in all rounds, test 2 rounds 1 and 2, test 3 rounds 3 and 4, test 4 in all rounds, test 5 in all rounds, test 7 in all rounds, and test 8 in all rounds. On average, the difference between the SD of the recovered image and that of the image to be hidden is 0.00006318.

The algorithm proposed in this document was also subjected to resistance tests based on the IQM measures, which showed a performance above 85%, thus demonstrating the imperceptibility of the data to possible wardens of active channels, such as current mobile communications. Likewise, an insertion capacity of 1,179,648 bits was obtained, occupying only 6.25% of the image, demonstrating that the three important factors of a steganographic algorithm were maintained: insertion capacity, stego-image quality, and message retrieval.

Data Availability

The images, data, and datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Blanca E. Carvajal-Gámez presented the main idea, conceived the experiments, interpreted the results, and wrote the paper. Manuel A. Díaz-Casco carried out the simulation; he also contributed to the interpretation of the results. All authors read and approved the final manuscript for publication.

Acknowledgments

The authors express their gratitude to Instituto Politécnico Nacional (SIP-IPN 20180410) for the support during this work, in particular to the Secretaria de Ciencia, Tecnología e Innovación of CDMX (SECITI/072/206) for all the facilities granted.