Advances in Multimedia

Volume 2016, Article ID 4985313, 10 pages

http://dx.doi.org/10.1155/2016/4985313

## Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

^{1}College of Computer and Communication, Hunan University of Technology, Hunan 412007, China
^{2}Intelligent Information Perception and Processing Technology, Hunan Province Key Laboratory, Hunan 412007, China
^{3}Department of Computer Science, China University of Geosciences, Wuhan, Hubei 430074, China

Received 21 January 2016; Revised 22 March 2016; Accepted 18 April 2016

Academic Editor: Stefanos Kollias

Copyright © 2016 Zhigao Zeng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and achieves a high classification accuracy rate, with the added benefit of robustness against noise.

#### 1. Introduction

As a popular image processing technology, digital halftoning [1] has found wide application in converting a continuous tone image into a binary halftone image for better display on binary devices, such as printers and computer screens. Usually, binary halftone images can only be obtained in the process of printing, image scanning, and fax, from which the original continuous tone images need to be reconstructed [2, 3] using an inverse halftoning algorithm [4] for further image processing, for example, image classification, image compression, image enhancement, and image zooming. However, it is difficult for inverse halftoning algorithms to obtain optimal reconstruction quality because the halftoning pattern is unknown in practical applications. Furthermore, a basic drawback of the existing inverse halftoning algorithms is that they do not distinguish the types of halftone images, or can only coarsely divide halftone images into the two major categories of error-diffused halftone images and orderly dithered halftone images. This inability to exploit prior knowledge of the halftone images largely weakens the flexibility, adaptability, and effectiveness of inverse halftoning techniques, making the study of halftone image classification imperative, not only for optimizing the existing inverse halftoning schemes, but also for guiding the establishment of adaptive schemes for halftone image compression, halftone image watermarking, and so forth.

Motivated by the significance of classifying halftone images, several halftone image classification methods have been proposed. In 1998, Chang and Yu [5] classified halftone images into four types using an enhanced one-dimensional correlation function and a backpropagation (BP) neural network, for which the data sets in the experiments are limited to the halftone images produced by clustered-dot ordered dithering, dispersed-dot ordered dithering, constrained average, and error diffusion. Kong et al. [6, 7] used an enhanced one-dimensional correlation function and a gray level cooccurrence matrix to extract features from halftone images, based on which the halftone images are divided into nine categories using a decision tree algorithm. Liu et al. [8] combined support region and least mean square (LMS) algorithms to divide halftone images into four categories. Subsequently, they [9] used LMS to extract features from the Fourier spectra of nine categories of halftone images and classified these halftone images using naive Bayes. Although these methods work well in classifying some specific halftone images, their performance decreases considerably when classifying error-diffused halftone images produced by the Floyd-Steinberg filter, Sierra filter, Burkes filter, Jarvis filter, and Stevenson filter, respectively. These filters are described as follows.

*Different Error Diffusion Filters.*
In each template below, the entry marked $\ast$ denotes the pixel currently being processed, and the remaining nonzero entries give the fractions of its quantization error diffused to the neighboring pixels:

(a) Floyd-Steinberg filter:

$$\frac{1}{16}\begin{bmatrix} - & \ast & 7 \\ 3 & 5 & 1 \end{bmatrix}$$

(b) Sierra filter:

$$\frac{1}{32}\begin{bmatrix} - & - & \ast & 5 & 3 \\ 2 & 4 & 5 & 4 & 2 \\ - & 2 & 3 & 2 & - \end{bmatrix}$$

(c) Burkes filter:

$$\frac{1}{32}\begin{bmatrix} - & - & \ast & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \end{bmatrix}$$

(d) Jarvis filter:

$$\frac{1}{48}\begin{bmatrix} - & - & \ast & 7 & 5 \\ 3 & 5 & 7 & 5 & 3 \\ 1 & 3 & 5 & 3 & 1 \end{bmatrix}$$

(e) Stevenson filter:

$$\frac{1}{200}\begin{bmatrix} - & - & - & \ast & - & 32 & - \\ 12 & - & 26 & - & 30 & - & 16 \\ - & 12 & - & 26 & - & 12 & - \\ 5 & - & 12 & - & 12 & - & 5 \end{bmatrix}$$

*The Error Diffusion of Stucki.* (a) The error kernel of the Stucki filter is

$$\frac{1}{42}\begin{bmatrix} - & - & \ast & 8 & 4 \\ 2 & 4 & 8 & 4 & 2 \\ 1 & 2 & 4 & 2 & 1 \end{bmatrix},$$

where (b) the matrix of coefficients is the integer template normalized by 42, and (c) the entry marked $\ast$ denotes the pixel being processed, while the surrounding nonzero entries indicate the neighborhood pixels that receive portions of the diffused error.

All six filters produce halftone images based on different error diffusion kernels, as summarized in [10-12]. Moreover, the existing studies did not consider all types of error diffusion halftone images. For example, only three error diffusion filters are included in [6, 7, 9] and only one is involved in [5, 8]. The idea of halftoning is quite similar for the six error diffusion filters, with the only difference lying in the templates used (shown in the different error diffusion filters and the error diffusion of Stucki described above). It is difficult to classify the error-diffused halftone images because the differences among the halftone features extracted from images produced by these six error diffusion filters are almost inconspicuous. However, as a scalable algorithm, error diffusion has gradually become one of the most popular halftoning techniques, due to its ability to provide a solution of good quality at a reasonable cost [13]. This raises an urgent need to study the classification mechanism for various error diffusion algorithms, with the hope of promoting the existing inverse halftoning techniques widely used in different application fields of graphics processing.
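As a concrete illustration of the templates above, the following is a minimal Python sketch of Floyd-Steinberg error diffusion on a normalized grayscale image. The function name and the plain left-to-right scan order are our own choices for illustration, not specified by the paper.

```python
import numpy as np

# Floyd-Steinberg error kernel: the pixel marked * diffuses its
# quantization error to 4 neighbours with the weights /16:
#        *  7
#     3  5  1
def floyd_steinberg(gray, threshold=0.5):
    """Halftone a grayscale image in [0, 1] by Floyd-Steinberg error diffusion."""
    img = np.asarray(gray, dtype=float).copy()
    rows, cols = img.shape
    out = np.zeros_like(img)
    for m in range(rows):
        for n in range(cols):
            old = img[m, n]
            out[m, n] = 1.0 if old >= threshold else 0.0
            err = old - out[m, n]            # quantization error of this pixel
            if n + 1 < cols:
                img[m, n + 1] += err * 7 / 16
            if m + 1 < rows:
                if n > 0:
                    img[m + 1, n - 1] += err * 3 / 16
                img[m + 1, n] += err * 5 / 16
                if n + 1 < cols:
                    img[m + 1, n + 1] += err * 1 / 16
    return out
```

The other five filters differ only in the weight template, so the inner update block is the only part that changes per filter.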

This paper proposes a new algorithm to classify error-diffused halftone images. We first extract the feature matrices of pixel pairs from the error-diffused halftone image patches, according to statistical characteristics of these patches. The class feature matrices are then subsequently obtained, using a gradient descent method, based on the feature matrices of pixel pairs [14]. After applying the spectral regression kernel discriminant analysis to realize the dimension reduction in the class feature matrices, we finally classify the error-diffused halftone images using the idea similar to the nearest centroids classifier [15, 16].

The structure of this paper is as follows. Section 2 presents the method of kernel discriminant analysis. Section 3 describes how to extract the features of pixel pairs from the error-diffused halftone images. Section 4 describes the proposed classification method for error-diffused halftone images based on spectral regression kernel discriminant analysis. Section 5 shows the experimental results. Some concluding remarks and possible future research directions are given in Section 6.

#### 2. An Efficient Kernel Discriminant Analysis Method

It is well known that linear discriminant analysis (LDA) [17, 18] is effective in solving classification problems, but it fails for nonlinear problems. To deal with this limitation, the approach called kernel discriminant analysis (KDA) [19] has been proposed.

##### 2.1. Overview of Kernel Discriminant Analysis

Suppose that we are given a sample set $\{x_1, x_2, \ldots, x_n\}$ of the error-diffused halftone images, with the samples belonging to $c$ different classes. Using a nonlinear mapping function $\phi$, the samples of the halftone images in the input space $X$ can be projected to a high-dimensional separable feature space $F$; namely, $\phi: X \to F$, $x \mapsto \phi(x)$. After extracting features, the error-diffused halftone images will be classified along the projection direction along which the within-class scatter is minimal and the between-class scatter is maximal. For a proper $\phi$, an inner product $\langle \cdot, \cdot \rangle$ can be defined on $F$ to form a reproducing kernel Hilbert space. That is to say, $\langle \phi(x), \phi(y) \rangle = K(x, y)$, where $K(\cdot, \cdot)$ is a positive semidefinite kernel function. In the feature space $F$, let $S_b$ be the between-class scatter matrix, let $S_w$ be the within-class scatter matrix, and let $S_t = S_b + S_w$ be the total scatter matrix:

$$S_b = \sum_{k=1}^{c} n_k \left( \mu^{(k)} - \mu \right) \left( \mu^{(k)} - \mu \right)^{T}, \qquad
S_w = \sum_{k=1}^{c} \sum_{i=1}^{n_k} \left( \phi\left(x_i^{(k)}\right) - \mu^{(k)} \right) \left( \phi\left(x_i^{(k)}\right) - \mu^{(k)} \right)^{T},$$

where $n_k$ is the number of samples in the $k$th class, $\phi(x_i^{(k)})$ is the $i$th sample of the $k$th class in the feature space $F$, $\mu^{(k)}$ is the centroid of the $k$th class, and $\mu$ is the global centroid. In the feature space, the aim of the discriminant analysis is to seek the best projection direction, namely, the projective function $v$, to maximize the following objective function:

$$v_{\mathrm{opt}} = \arg\max_{v} \frac{v^{T} S_b v}{v^{T} S_t v}. \qquad (10)$$

Equation (10) can be solved by the eigenproblem $S_b v = \lambda S_t v$. According to the theory of reproducing kernel Hilbert spaces, we know that the eigenvectors are linear combinations of $\phi(x_i)$ in the feature space $F$: there exist weight coefficients $\alpha_i$ such that $v = \sum_{i=1}^{n} \alpha_i \phi(x_i)$. Let $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \ldots, \alpha_n)^{T}$; then it can be proved that (10) can be rewritten as follows:

$$\boldsymbol{\alpha}_{\mathrm{opt}} = \arg\max_{\boldsymbol{\alpha}} \frac{\boldsymbol{\alpha}^{T} K W K \boldsymbol{\alpha}}{\boldsymbol{\alpha}^{T} K K \boldsymbol{\alpha}}. \qquad (11)$$

The optimization problem of (11) is equivalent to the eigenproblem

$$K W K \boldsymbol{\alpha} = \lambda K K \boldsymbol{\alpha}, \qquad (12)$$

where $K$ is the kernel matrix with $K_{ij} = K(x_i, x_j)$ and $W$ is the weight matrix defined as follows:

$$W_{ij} = \begin{cases} 1/n_k, & \text{if } x_i \text{ and } x_j \text{ both belong to the } k\text{th class}, \\ 0, & \text{otherwise}. \end{cases} \qquad (13)$$

For a sample $x$, the projective function in the feature space $F$ can be described as

$$f(x) = \langle v, \phi(x) \rangle = \sum_{i=1}^{n} \alpha_i K(x, x_i). \qquad (14)$$
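The class-dependent weight matrix $W$ used in the eigenproblem above can be built directly from the class labels. A minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def class_weight_matrix(labels):
    """Block-diagonal weight matrix W of kernel discriminant analysis:
    W[i, j] = 1/n_k if samples i and j both belong to class k, else 0."""
    labels = np.asarray(labels)
    n = len(labels)
    W = np.zeros((n, n))
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)   # indices of class k
        W[np.ix_(idx, idx)] = 1.0 / len(idx)
    return W
```

Note that every row of $W$ sums to 1, so the all-ones vector is an eigenvector of $W$ with eigenvalue 1; this is the fact that spectral regression (Section 2.2) exploits.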

##### 2.2. Kernel Discriminant Analysis via Spectral Regression

To efficiently solve the eigenproblem of the kernel discriminant analysis in (12), the following theorem will be used.

Theorem 1. *Let $\mathbf{y}$ be an eigenvector of the eigenproblem $W\mathbf{y} = \lambda \mathbf{y}$ with eigenvalue $\lambda$. If $K\boldsymbol{\alpha} = \mathbf{y}$, then $\boldsymbol{\alpha}$ is an eigenvector of eigenproblem (12) with the same eigenvalue $\lambda$.*

According to Theorem 1, the projective function of the kernel discriminant analysis can be obtained according to the following two steps.

*Step 1. *Obtain $\mathbf{y}$ by solving the eigenproblem $W\mathbf{y} = \lambda \mathbf{y}$.

*Step 2. *Search for the eigenvector $\boldsymbol{\alpha}$ which satisfies $K\boldsymbol{\alpha} = \mathbf{y}$, where $K$ is the positive semidefinite kernel matrix.

As we know, if $K$ is nonsingular, then, for any given $\mathbf{y}$, there exists a unique $\boldsymbol{\alpha}$ satisfying the linear equation described in Step 2. If $K$ is singular, then the linear equation may have infinitely many solutions or no solution at all. In this case, we can approximate $\boldsymbol{\alpha}$ by solving the following equation:

$$(K + \delta I)\,\boldsymbol{\alpha} = \mathbf{y}, \qquad (15)$$

where $\delta > 0$ is a regularization parameter and $I$ is the identity matrix. Combined with the projective function described in (14), we can easily verify that the solution $\boldsymbol{\alpha} = (K + \delta I)^{-1}\mathbf{y}$ given by (15) is the optimal solution of the following regularized regression problem:

$$f_{\mathrm{opt}} = \arg\min_{f \in \mathcal{H}} \sum_{i=1}^{n} \left( f(x_i) - y_i \right)^2 + \delta \left\| f \right\|_{K}^{2}, \qquad (16)$$

where $y_i$ is the $i$th element of $\mathbf{y}$ and $\mathcal{H}$ is the reproducing kernel Hilbert space induced from the Mercer kernel $K$, with $\| \cdot \|_{K}$ being the corresponding norm. Due to the essential combination of spectral analysis and regression techniques in the above two-step approach, the method is named spectral regression (SR) kernel discriminant analysis.
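The two-step procedure can be sketched as follows, assuming a precomputed kernel matrix `K`. The helper name is ours, and for brevity the class-indicator responses are merely centered against the all-ones vector (which preserves the eigenvector property) rather than fully Gram-Schmidt orthogonalized:

```python
import numpy as np

def srkda(K, labels, delta=0.01):
    """Spectral-regression KDA sketch.
    Step 1: responses y are (centered) class indicators, which are
            eigenvectors of the class weight matrix W with eigenvalue 1.
    Step 2: ridge-regularized solve (K + delta*I) alpha = y."""
    labels = np.asarray(labels)
    n = len(labels)
    # one indicator column per class
    Y = np.stack([(labels == k).astype(float) for k in np.unique(labels)], axis=1)
    ones = np.ones((n, 1))
    Y = Y - ones @ (ones.T @ Y) / n   # remove the trivial all-ones direction
    Y = Y[:, :-1]                     # c-1 independent responses remain
    alphas = np.linalg.solve(K + delta * np.eye(n), Y)
    return alphas                     # columns define the projective functions
```

A test sample $x$ is then projected as $f(x) = \sum_i \alpha_i K(x, x_i)$, that is, by multiplying its kernel row against the returned coefficient columns.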

#### 3. Feature Extraction of the Error-Diffused Halftone Images

Since its introduction in 1976, the error diffusion algorithm has attracted widespread attention in the field of printing applications. It deals with the pixels of halftone images using neighborhood processing algorithms instead of point processing algorithms. We now extract the features of the error-diffused halftone images produced by the six popular error diffusion filters mentioned in Section 1.

##### 3.1. Statistic Characteristics of the Error-Diffused Halftone Images

Assume that $g(m, n)$ is the gray value of the pixel located at position $(m, n)$ in the original image and $h(m, n)$ is the value of the pixel located at position $(m, n)$ in the error-diffused halftone image. For the original image, all the pixels are firstly normalized to the range $[0, 1]$. Then, the pixels of the normalized image are converted to the error-diffused image line by line; that is to say, if $g(m, n) \ge T$, then $h(m, n)$, the value of the pixel located at position $(m, n)$ in the error-diffused image, is 1; otherwise, $h(m, n)$ is 0, where $T$ is the threshold value. The error $e(m, n) = g(m, n) - h(m, n)$ between $g(m, n)$ and $h(m, n)$ is diffused ahead to some subsequent pixels that have not yet been processed. Therefore, for such a subsequent pixel, the comparison will be implemented between $T$ and the sum of its original gray value and the diffusion errors it has received. A template matrix can be built using the error diffusion modes and the error diffusion coefficients, as shown in the error diffusion of Stucki described above: (a) the error diffusion filter and (b) the error diffusion coefficients, which represent the proportions of the diffused errors. If a coefficient is zero, then the corresponding pixel does not receive any diffusion error. According to the error diffusion of Stucki described above, a neighboring pixel with a larger coefficient suffers from more diffusion error than one with a smaller coefficient; that is to say, it has a larger probability of forming a 1-0 pixel pair with the processed pixel. The reasons are as follows. Suppose that the pixel being processed has value $g(m, n)$ and has been processed by the thresholding method according to the following equation:

$$h(m, n) = \begin{cases} 1, & g(m, n) \ge T, \\ 0, & g(m, n) < T. \end{cases} \qquad (17)$$

In general, threshold $T$ is set as 0.5. According to the template shown in the error diffusion of Stucki described above, the diffusion error is $e(m, n) = g(m, n) - h(m, n)$, the new value of the right neighbor is $g(m, n+1) + \frac{8}{42} e(m, n)$, and the new value of the lower-right diagonal neighbor is $g(m+1, n+1) + \frac{4}{42} e(m, n)$, where $g(m, n+1)$ and $g(m+1, n+1)$ are the original values of those pixels, respectively.

Since the value of each pixel in the error-diffused halftone image can only be 0 or 1, there are 4 kinds of pixel pairs in the halftone image: 0-1, 0-0, 1-0, and 1-1. Pixel pairs 0-1 and 1-0 are collectively known as 1-0 pixel pairs because of their exchangeability. Therefore, there are essentially only three kinds of pixel pairs: 0-0, 1-0, and 1-1. In this paper, three statistical matrices of size $k \times k$ are used to store the numbers of the different pixel pairs with different neighboring distances and different directions; they are referred to as $M_{00}$, $M_{10}$, and $M_{11}$, respectively ($k = 2d + 1$ is an odd number, where $d$ is the maximum neighboring distance). Suppose that the center entry of the statistical matrix template covers pixel $h(m, n)$ of the error-diffused halftone image and the other entries overlap the neighborhood pixels $h(m+p, n+q)$ with $-d \le p, q \le d$. Then, we can compute the three statistics on the 1-0, 1-1, and 0-0 pixel pairs within the scope of this statistical matrix template. As the position of pixel $h(m, n)$ changes continually, the matrices $M_{00}$, $M_{10}$, and $M_{11}$, with zero being the initial values, can be updated according to

$$M_{uv}(p + d + 1,\ q + d + 1) \leftarrow M_{uv}(p + d + 1,\ q + d + 1) + 1 \quad \text{if } \{h(m, n),\, h(m+p, n+q)\} \text{ is a } u\text{-}v \text{ pair}, \qquad (18)$$

where $uv \in \{00, 10, 11\}$, $-d \le p, q \le d$, $(p, q) \ne (0, 0)$, and $(m, n)$ ranges over all pixel positions of the image. After normalization, the three statistical matrices can be ultimately obtained as the statistical feature descriptor of the error-diffused halftone images.
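The pair-counting step can be sketched as follows, denoting the three statistic matrices `M00`, `M10`, and `M11` and indexing them by the neighborhood offset (the function name and offset convention are our own choices):

```python
import numpy as np

def pair_statistics(h, d=2):
    """Count 0-0, 1-0, and 1-1 pixel pairs between each pixel and its
    neighbours at every offset within distance d (template size 2d+1).
    Returns (2d+1)x(2d+1) matrices normalized per offset."""
    h = np.asarray(h)
    k = 2 * d + 1
    M00 = np.zeros((k, k)); M10 = np.zeros((k, k)); M11 = np.zeros((k, k))
    rows, cols = h.shape
    for m in range(rows):
        for n in range(cols):
            for p in range(-d, d + 1):
                for q in range(-d, d + 1):
                    mm, nn = m + p, n + q
                    if (p == 0 and q == 0) or not (0 <= mm < rows and 0 <= nn < cols):
                        continue
                    pair = h[m, n] + h[mm, nn]
                    if pair == 0:
                        M00[p + d, q + d] += 1
                    elif pair == 2:
                        M11[p + d, q + d] += 1
                    else:                 # 0-1 and 1-0 pairs are merged
                        M10[p + d, q + d] += 1
    total = M00 + M10 + M11
    total[total == 0] = 1                 # avoid dividing by zero at the centre
    return M00 / total, M10 / total, M11 / total
```

Each offset cell of the three returned matrices then holds the relative frequency of that pair type at that displacement.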

##### 3.2. Process of Statistical Feature Extraction of Halftone Images

According to the analysis described above, the process of statistical feature extraction of the error-diffused halftone images can be represented as follows.

*Step 1. *Input the error-diffused halftone image $H$, and divide $H$ into several blocks of the same size.

*Step 2. *Initialize the statistical feature matrix $M$ (including $M_{00}$, $M_{10}$, and $M_{11}$) as the zero matrix, and let the block index $t = 1$.

*Step 3. *Obtain the statistical matrix $M_t$ of block $t$ according to (18), and update $M$ using the equation $M = M + M_t$.

*Step 4. *Set $t = t + 1$. If $t$ does not exceed the number of blocks, then return to Step 3. Otherwise, go to Step 5.

*Step 5. *Normalize $M$ such that its entries sum to 1, so that the descriptors are comparable across images produced by the different error diffusion filters, whose template matrices have different sizes.
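Steps 1-5 can be sketched as a driver loop. The per-block statistic function is stubbed here (a real implementation would count pixel pairs as in Section 3.1), and the block size of 32 is an arbitrary choice of ours:

```python
import numpy as np

def extract_features(H, block=32):
    """Split halftone image H into blocks, accumulate the per-block
    statistic matrices (Step 3: M = M + M_t), then normalize (Step 5)."""
    def block_statistics(b):
        # stub standing in for the pair-counting statistics of a block
        return np.outer(b.sum(axis=0), b.sum(axis=1))
    rows, cols = H.shape
    M = None
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            Mt = block_statistics(H[r:r + block, c:c + block])
            M = Mt if M is None else M + Mt
    return M / M.sum() if M.sum() else M     # entries sum to 1
```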

According to the process described above, we know that the statistical features of the error-diffused halftone image are extracted by dividing the image into patches, which is significantly different from other patch-based feature extraction methods. For example, in [20], the brightness and contrast of the image patches are normalized by a z-score transformation, and whitening (also called "sphering") is used to rescale the normalized data to remove the correlations between nearby pixels (i.e., low-frequency variations in the images), because these correlations tend to be very strong even after brightness and contrast normalization. However, in this paper, features of the patches are extracted by counting statistical measures of the different pixel pairs (0-0, 1-0, and 1-1) within a moving statistical matrix template and are optimized using the method described in Section 3.3.

##### 3.3. Extraction of the Class Feature Matrix

The statistical matrices $M_{00}$, $M_{10}$, and $M_{11}$, after being extracted, can be used as the input of other algorithms, such as support vector machines and neural networks. However, the curse of dimensionality could occur due to the high dimension of $M_{00}$, $M_{10}$, and $M_{11}$, making the classification effect possibly insignificant. Thereby, six class feature matrices are designed in this paper for the error-diffused halftone images produced by the six error diffusion filters mentioned above. Then, a gradient descent method can be used to optimize these class feature matrices.

Error-diffused halftone images can be derived from the original images using the six error diffusion filters, respectively. Then, statistical matrices can be extracted as samples from these error-diffused halftone images using the algorithm described in Section 3.2. Subsequently, we label each matrix to denote the type of error diffusion filter used to produce the corresponding error-diffused halftone image. Given the $i$th sample $M_i$ as the input, the target output vector $(t_1, \ldots, t_6)$, and the class feature matrices $C_j$ ($j = 1, \ldots, 6$), the square error between the actual output and the target output can be derived according to

$$E_i = \frac{1}{2} \sum_{j=1}^{6} \left( o_j - t_j \right)^2, \qquad (19)$$

where

$$o_j = \operatorname{sum}\left( M_i \bullet C_j \right). \qquad (20)$$

The derivatives of $E_i$ in (19) can be explicitly calculated as

$$\frac{\partial E_i}{\partial C_j} = \left( o_j - t_j \right) M_i, \qquad (21)$$

where $j = 1, \ldots, 6$ and $\bullet$ is the dot product of matrices defined, for any matrices $A$ and $B$ with the same size $m \times n$, as

$$(A \bullet B)(p, q) = A(p, q)\, B(p, q), \quad 1 \le p \le m,\ 1 \le q \le n. \qquad (22)$$

The dot product of matrices satisfies the commutative law and the associative law; that is to say, $A \bullet B = B \bullet A$ and $(A \bullet B) \bullet C = A \bullet (B \bullet C)$. Then, the following iteration equation can be obtained using the gradient descent method:

$$C_j^{(s+1)} = C_j^{(s)} - \eta \left( o_j - t_j \right) M_i, \qquad (23)$$

where $\eta$ is the learning factor and $s$ denotes the $s$th iteration. The purpose of learning is to seek the optimal matrices $C_j$ by minimizing the total square error $E = \sum_i E_i$, and the process of seeking the optimal matrices can be described as follows.

*Step 1. *Initialize parameters: the iteration limits *inner* and *outer*, the iteration variables $s$ and $k$, the nonnegative thresholds $\varepsilon_1$ and $\varepsilon_2$ used to indicate the end of the iterations, the learning factor $\eta$, the total number of samples $L$, and the class feature matrices $C_j$.

*Step 2. *Input the statistical matrix $M_i$, and let $s = 0$. The following three substeps are executed. (1) According to (23), $C_j^{(s+1)}$ can be computed. (2) Compute the square error $E_i^{(s+1)}$ and $\Delta E = |E_i^{(s+1)} - E_i^{(s)}|$. (3) If $\Delta E < \varepsilon_1$ or $s = $ *inner*, then go to Step 3; otherwise, set $s = s + 1$ and return to substep (1).

*Step 3. *Set $i = i + 1$. If $i > L$, then go to Step 4. Otherwise, return to Step 2.

*Step 4. *Compute the total error $E = \sum_{i=1}^{L} E_i$. If $E < \varepsilon_2$ or $k =$ *outer*, then end the algorithm. Otherwise, set $k = k + 1$, reset $i = 1$, and go to Step 2.
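Under the assumption that the actual output for class $j$ is the sum of the elementwise product of the sample matrix and the class feature matrix (our reading of the matrix dot product above), the learning loop can be sketched as follows. The function name, target encoding (1 for the true class, 0 otherwise), and the omission of the inner convergence test are our simplifications:

```python
import numpy as np

def train_class_matrices(samples, labels, n_classes, eta=0.1, outer=200):
    """Gradient-descent sketch for the class feature matrices C_j.
    Output for class j is o_j = sum(M * C_j) (elementwise product);
    the update is C_j <- C_j - eta * (o_j - t_j) * M, as in (23)."""
    shape = samples[0].shape
    C = [np.zeros(shape) for _ in range(n_classes)]
    for _ in range(outer):                       # outer sweeps over the data
        for M, label in zip(samples, labels):
            for j in range(n_classes):
                o = np.sum(M * C[j])             # actual output o_j
                t = 1.0 if j == label else 0.0   # assumed target encoding
                C[j] -= eta * (o - t) * M        # gradient-descent update
    return C
```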

#### 4. Classification of Error-Diffused Halftone Images Using Nearest Centroids Classifier

This section describes the details on classifying error-diffused halftone images using the spectral regression kernel discriminant analysis as follows.

*Step 1. *Input the error-diffused halftone images produced by the six error diffusion filters, and extract the statistical feature matrices $M_{00}$, $M_{10}$, and $M_{11}$ using the method presented in Section 3.2.

*Step 2. *According to the steps described in Section 3.3, all the statistical feature matrices ($M_{00}$, $M_{10}$, and $M_{11}$) are converted to the corresponding class feature matrices. Then, convert the class feature matrices into one-dimensional vectors by concatenating their columns, respectively.

*Step 3. *All the one-dimensional class feature vectors are used to construct the sample feature matrices, one row per sample, respectively.

*Step 4. *A label matrix *information* is built to record the type to which each error-diffused halftone image belongs.

*Step 5. *The first part of each sample feature matrix is taken as the training samples (the features derived from $M_{00}$, $M_{10}$, $M_{11}$, or from their composition, can all be used as training samples). Reduce the dimension of these training samples using the spectral regression discriminant analysis. The process of dimension reduction can be described by the following three substeps.

(1) Produce orthogonal vectors. Let $\mathbf{y}_0 = (1, 1, \ldots, 1)^T$ be the vector with all elements being 1; it is the first eigenvector of the weight matrix $W$. Use the Gram-Schmidt process to orthogonalize the other eigenvectors of $W$ against it. Remove the vector $\mathbf{y}_0$, leaving $c - 1$ eigenvectors of $W$ denoted as follows:

$$\left\{ \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_{c-1} \right\}. \qquad (24)$$

(2) Add an element of 1 to the end of each input data vector $\mathbf{x}$, and obtain the vectors $\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_{c-1}$, where $\boldsymbol{\alpha}_k$ is the solution of the following regularized least squares problem:

$$\boldsymbol{\alpha}_k = \arg\min_{\boldsymbol{\alpha}} \sum_{i=1}^{n} \left( \boldsymbol{\alpha}^T \mathbf{x}_i - y_{k,i} \right)^2 + \delta \left\| \boldsymbol{\alpha} \right\|^2. \qquad (25)$$

Here $y_{k,i}$ is the $i$th element of eigenvector $\mathbf{y}_k$ and $\delta$ is the contraction parameter.

(3) Let $A = [\boldsymbol{\alpha}_1, \boldsymbol{\alpha}_2, \ldots, \boldsymbol{\alpha}_{c-1}]$ be the transformation matrix. Perform dimension reduction on the sample features $\mathbf{x}$ by mapping them to the $(c-1)$-dimensional subspace as follows:

$$\mathbf{z} = A^T \mathbf{x}. \qquad (26)$$

*Step 6. *Compute the mean values of the samples in the different classes according to the following equation:

$$\mathbf{m}_k = \frac{1}{n_k} \sum_{\mathbf{z}_i \in \text{class } k} \mathbf{z}_i, \qquad (27)$$

where $n_k$ is the number of samples in the $k$th class, $k = 1, 2, \ldots, 6$. Let $\mathbf{m}_k$ be the class-centroid of the $k$th class.

*Step 7. *The remaining samples are taken as the testing samples, and the dimension reduction is implemented for them using the method described in Step 5.

*Step 8. *Compute the square of the distance $d^2(\mathbf{z}, \mathbf{m}_k) = \| \mathbf{z} - \mathbf{m}_k \|^2$ between each testing sample $\mathbf{z}$ and each class-centroid $\mathbf{m}_k$. According to the nearest centroids classifier, the sample $\mathbf{z}$ is assigned to the class $k^*$ if $k^* = \arg\min_k d^2(\mathbf{z}, \mathbf{m}_k)$.

In Step 8, the weak classifier (i.e., the nearest centroid classifier) is used to classify the error-diffused halftone images because this classifier is simple and easy to implement. Moreover, in order to show that the class feature matrices, which are extracted according to the method described in Section 3 and processed by the spectral regression discriminant analysis, are well suited to the classification of error-diffused halftone images, this weak classifier is used in this paper instead of a strong classifier [20], such as a support vector machine classifier or a deep neural network classifier.
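The centroid computation and assignment of Steps 6-8 can be sketched in a few lines (the helper name is ours):

```python
import numpy as np

def nearest_centroid(train, labels, test):
    """Assign each test vector to the class with the nearest centroid
    under the squared Euclidean distance, as in Steps 6-8."""
    train, labels, test = map(np.asarray, (train, labels, test))
    classes = np.unique(labels)
    # Step 6: per-class mean vectors (class-centroids)
    centroids = np.stack([train[labels == k].mean(axis=0) for k in classes])
    # Step 8: squared distances |z - m_k|^2 for every test sample / centroid
    d2 = ((test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d2, axis=1)]
```

Because the centroids live in the reduced $(c-1)$-dimensional subspace, both training and test samples must be projected by the same transformation matrix before this function is applied.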

#### 5. Experimental Analysis and Results

We implement various experiments to verify the efficiency of our method in classifying error-diffused halftone images. The computer processor is an Intel(R) Pentium(R) CPU G2030 @ 3.00 GHz, the memory of the computer is 2.0 GB, the operating system is Windows 7, and the experimental simulation software is MATLAB R2012a. In our experiments, all the original images are downloaded from http://decsai.ugr.es/cvg/dbimagenes/ and http://msp.ee.ntust.edu.tw/. About 4000 original images have been downloaded, and they are converted into 24000 error-diffused halftone images produced by the six different error diffusion filters.

##### 5.1. Classification Accuracy Rate of the Error-Diffused Halftone Images

###### 5.1.1. Effect of the Number of the Samples

This subsection analyzes the effect of the number of feature samples on classification. When the feature matrices $M_{00}$, $M_{10}$, and $M_{11}$ are taken as the input data, respectively, the accuracy rate of classification under different conditions is shown in Tables 1 and 2. Table 1 shows the classification accuracy rates under different numbers of training samples when the total number of samples is 12000. Table 2 shows the classification accuracy rates under different numbers of training samples when the total number of samples is 24000. The digits in the first line of each table are the sizes of the training sample sets.