Reduction of Multidimensional Image Characteristics Based on Improved KICA
Existing studies, both domestic and overseas, handle redundant features and noise poorly during dimension reduction, with low efficiency and accuracy. This paper proposes a dimensionality reduction and optimization method for characteristic parameter models based on improved kernel independent component analysis: independent primitives are obtained by the KICA (kernel independent component analysis) algorithm to construct an independent basis subspace, while the 2DPCA (2D principal component analysis) algorithm removes second-order correlations in the data and further reduces the dimension. In addition, an evaluation method for the optimization effect, based on the Amari error and the average correlation degree, is presented. Comparative simulation experiments show that the Amari error is less than 6%, the average correlation degree remains stable at 97% or more, and the proposed parameter optimization method effectively reduces the dimension of multidimensional characteristic parameters.
1. Introduction
As a result of technological advances, the data objects to be processed are becoming increasingly complex [1–3], and their dimensionality increasingly high. The feature space of multidimensional data usually contains many redundant and even noisy features, which increase the learning and training time and the space complexity and reduce the accuracy of the analysis. Therefore, before analyzing multidimensional data, dimension reduction preprocessing should be carried out.
Combining the dimension reduction technique of principal component analysis with the blind source separation technique of independent component analysis, one line of work proposed a dimensional reduction method for multidimensional mixed-signal characteristics, known as the PCA-ICA method. Another reference proposed a fused dimensional reduction algorithm based on 2DPCA and ICA, which can be used for vehicle recognition in combination with a support vector machine model. A further reference proposed a data dimensional reduction method based on principal component analysis and kernel independent component analysis (KICA), used to reduce signals such as mechanical vibration. Another reference proposed a linear dimensional reduction method based on a two-step adaptive process; it adapts to different types of data, but its performance needs further improvement when the data set contains more, and stronger, abnormal data. Finally, a dimensional reduction method based on correlation statistics has strong stability and can substitute for other dimensional reduction methods in some cases; however, owing to the introduction of a new parameter, it still needs to be validated on larger data sets.
To a certain degree, the methods above can reduce the dimension of multidimensional characteristic signals within a specific range of redundancy and noise, but when the data contain many redundant features or much noise, their efficiency and accuracy need further optimization.
Based on the analysis above, this paper proposes a reduction method for multidimensional image characteristics based on improved KICA. In this method, independent parameters are obtained by the KICA (kernel independent component analysis) algorithm to construct an independent basis subspace, while the 2DPCA (2D principal component analysis) algorithm removes second-order correlations and further reduces the dimension. In addition, the paper presents an evaluation method for the dimension reduction effect that uses the Amari error and the average correlation degree as evaluation criteria.
2. Dimensional Reduction Algorithm Based on Improved KICA Model
2.1. The Basic Principles of the Model
The KICA method is a recent independent component analysis method based on "kernel tricks"; it is a multivariate data processing method based on higher-order statistics. It can decompose an image signal into a number of relatively independent components, which can be used for feature extraction and recognition of target images. However, the method fails to reduce dimension and noise effectively when the multidimensional feature parameters contain certain abnormal data and redundancy (Figure 2).
The 2DPCA algorithm [10, 11] builds on the traditional linear dimensional reduction method, PCA, but does not need to convert the image matrix into a vector: it computes the covariance matrix directly from the two-dimensional image matrix. Compared with PCA, 2DPCA therefore simplifies the calculation of eigenvalues and eigenvectors and retains the structural information of the image, which significantly improves computational efficiency and shortens computation time, with better noise removal and dimensional reduction capability.
Based on the analysis above, this paper combines 2DPCA with KICA to reduce the dimension of multidimensional characteristic parameters; the combination is called the 2DPCA-KICA method. The method uses the KICA (kernel independent component analysis) algorithm to obtain the independent parameters of the multidimensional characteristics and construct an independent basis subspace, while the 2DPCA (2D principal component analysis) algorithm removes second-order correlations and further reduces the dimension, projecting the multidimensional signal feature samples onto a low-dimensional independent basis space and thereby achieving effective dimensional reduction of the multidimensional characteristic parameters. The method can not only extract the main signal from the multidimensional characteristic sample signal but also approximately estimate the source signal, effectively balancing efficiency and noise immunity.
The KICA method consists of two contrast functions: KCCA (kernel canonical correlation analysis) and KGV (kernel generalized variance). Correspondingly, the 2DPCA-KICA method can also be divided into 2DPCA-KCCA algorithm and 2DPCA-KGV algorithm. The overall flowchart of the algorithm is shown in Figure 1.
2.2. Analysis of the Optimization Process of Characteristic Parameters
Assume that the sample matrix of multidimensional signal characteristic is a matrix:
So, we can get the mathematical model of 2DPCA-KICA dimensional reduction method process as follows:
By the projection , the multidimensional feature signal samples in the -dimensional space are projected into the -dimensional feature space (). The 2DPCA-KICA dimensional reduction process can be divided into two steps.
Step 1. 2DPCA Dimensional Reduction. Reduce -dimensional data into -dimensional data by projection .
Step 2. KICA Projection. Project the -dimensional data into the feature space by the projection , use the kernel idea to estimate the source signals in the feature space , and construct an independent basis subspace.
The concrete process of Step 2 will be described in the 2DPCA-KICA algorithm in the next section; the process of Step 1 is shown below.
Calculate the average of the training sample matrix: where is the training samples and is the number of training samples .
Calculate the covariance matrix of the training sample matrix:
PCA calculation of training samples is as follows.
Calculate the eigenvalues of , , , and the corresponding orthonormal eigenvectors , so that where represents the diagonal matrix consisting of the eigenvalues:
Determine the projection matrix as follows.
Select the orthonormal eigenvectors corresponding to the largest eigenvalues among the first values in step (3) to construct a vector group, and use it as the projection matrix:
Calculate the projection features of training samples: where is the projection feature of training samples, expressed as .
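The 2DPCA projection of Step 1 can be sketched as below. This is a minimal NumPy illustration of the standard 2DPCA procedure described above (the function and variable names are our own, not the authors' Matlab implementation):

```python
import numpy as np

def twodpca_project(images, k):
    """2DPCA (Step 1): project each image matrix onto the top-k
    eigenvectors of the image covariance matrix, without vectorising
    the images first."""
    mean = np.mean(images, axis=0)            # average training sample
    n_cols = images.shape[2]
    G = np.zeros((n_cols, n_cols))
    for A in images:                          # covariance of image matrices
        D = A - mean
        G += D.T @ D
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)            # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                  # top-k orthonormal eigenvectors
    Y = np.array([A @ W for A in images])     # projection features
    return Y, W
```

Each m-by-n image is thereby reduced to an m-by-k feature matrix, which is the input to the KICA stage of Step 2.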
2.3. Dimensional Reduction Algorithm Progress of Multidimensional Feature Architecture
For convenience in the dimension reduction and in the error comparison of the characteristic parameters, we assume to be the sample matrix after centering.
Specific steps are as follows.
Step 1. Standardize the sample signal matrix of multidimensional characteristics: In the above formula, is the inverse square root of the sample covariance matrix of .
Step 2. Calculate the eigenvalues of the sample covariance matrix of and the corresponding eigenvectors , where the eigenvalues are arranged in descending order.
Step 3. Determine the number of retained principal components based on the selection criterion of 2DPCA and calculate the first principal components, expressed as
In the above formula, is the transpose of the matrix composed of the first eigenvectors, and is the new matrix composed of the first principal components after dimensional reduction of by the 2DPCA method.
Step 4. Apply the whitening process to ; we get In the above formula, is the whitening transformation matrix and is the whitened matrix of , with , where is the identity matrix.
Step 5. Select the kernel function (this paper chooses the Gaussian radial basis function ) according to the KICA algorithm (KCCA or KGV), determine the contrast function, and search step by step for the unmixing matrix by minimizing the contrast function.
Step 6. Estimate the source signals, which form the independent basis subspace, from the unmixing matrix; this is expressed as follows:
In the above formula, and are the unmixing matrices obtained by the KCCA algorithm and the KGV algorithm, respectively, and is the independent basis matrix.
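Two building blocks of the steps above can be made concrete: the whitening transform (Steps 1 and 4) and the centered Gram matrix of the Gaussian RBF kernel on which the KCCA/KGV contrast functions operate (Step 5). The sketch below uses our own names, and the kernel width `sigma` is a free parameter, not a value from the paper:

```python
import numpy as np

def whiten(X):
    """Whitening (Steps 1 and 4): X is (d, n) centered data.
    Returns the whitening matrix V and Z = V X, whose sample
    covariance equals the identity matrix."""
    n = X.shape[1]
    C = X @ X.T / n                           # sample covariance matrix
    vals, E = np.linalg.eigh(C)
    V = np.diag(1.0 / np.sqrt(vals)) @ E.T    # V = D^{-1/2} E^T
    return V, V @ X

def centered_gram(x, sigma=1.0):
    """Centered Gram matrix of the Gaussian RBF kernel (Step 5):
    K_c = H K H with H = I - (1/n) 1 1^T, so the mapped data
    have zero mean in the feature space."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    K = np.exp(-(x - x.T) ** 2 / (2.0 * sigma ** 2))
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H
```

The KCCA/KGV contrast functions are then built from generalized eigenvalue problems on such centered Gram matrices; that machinery is omitted here.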
3. Simulation Results and Analysis
3.1. Evaluation Method for the Effect of Parameter Optimization
The previous section described the proposed parameter optimization method based on the improved KICA algorithm. To compare it with the PCA, 2DPCA, PCA-ICA, and 2DPCA-ICA algorithms [12, 13] and to verify its correctness, we need to compare the dimensional reduction effects. This section introduces two criteria for comparing the dimensional reduction effect; the comparative results are described in detail in the analysis of the experiment.
(1) The Amari Error. The Amari error represents the similarity between two matrices and can be used to measure the accuracy of the estimate of a matrix . Therefore, when the mixing matrix and the unmixing matrix are known, the dimensional reduction effects of different methods can be compared by calculating the Amari error between them.
The Amari error between the matrix and the matrix is defined as follows:
In the above formula, . The smaller the value of , the smaller the difference between the matrix and the matrix ; the two matrices are the same only when .
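A common form of the Amari error can be computed as below. We follow the widely used definition based on the product of the unmixing and mixing matrices; the normalization by 2n(n-1) is one common choice (so the value is 0 for perfect recovery up to permutation and scaling, and at most 1), and may differ from the paper's exact scaling:

```python
import numpy as np

def amari_error(W, A):
    """Amari error between an unmixing matrix W and a mixing matrix A.
    Zero if and only if W @ A is a permutation of a diagonal matrix,
    i.e. the sources are perfectly recovered up to order and scale."""
    P = np.abs(W @ A)
    n = P.shape[0]
    rows = (P.sum(axis=1) / P.max(axis=1) - 1.0).sum()
    cols = (P.sum(axis=0) / P.max(axis=0) - 1.0).sum()
    return (rows + cols) / (2.0 * n * (n - 1))
```

For example, `amari_error(np.linalg.inv(A), A)` is exactly zero, since the product is the identity matrix.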
For a linear mixed-signal matrix, the Amari error between the equivalent mixing matrix and the unmixing matrix can effectively measure the dimensional reduction effect of different methods. However, for a nonlinear mixed-signal matrix, the equivalent mixing matrix cannot be found, so the Amari error criterion cannot be used to compare dimensional reduction methods on nonlinear mixed signals. As a result, this paper introduces the concept of correlation.
(2) The Correlation Coefficient and Correlation Degree. The correlation coefficient  indicates the degree of linear correlation between two random variables, so it can be used to describe the correlation between sample characteristic parameters. For the characteristic data after dimension reduction and the estimated characteristic data , the correlation is described as follows: This yields the following judgments:
(1) When the coefficient is positive, and have the same trend.
(2) When the coefficient is negative, and have opposite trends.
(3) When the coefficient is zero, and are uncorrelated.
According to these properties of the correlation coefficient, we define the absolute value of the correlation coefficient as the measure of correlation between two data sequences. It is expressed as follows: This is called the single-signal correlation degree.
When comparing the correlation degree between two signal matrices, we can use the average correlation between the source signals and the estimated signals as the standard; it is expressed as follows:
This is called the average correlation degree. This paper uses the average correlation degree to compare the dimensional reduction effects of different methods on multidimensional feature signal matrices, whether linear or nonlinear.
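The single-signal and average correlation degrees can be sketched as below. Row i of `S_hat` is assumed to estimate row i of `S`; in practice ICA outputs may need to be matched up to permutation, while the absolute value already handles sign flips:

```python
import numpy as np

def average_correlation(S, S_hat):
    """Average correlation degree between source signals S and their
    estimates S_hat, both of shape (k, n): the mean of the absolute
    correlation coefficient between each matched pair of rows."""
    degrees = [abs(np.corrcoef(s, sh)[0, 1])   # single-signal correlation degree
               for s, sh in zip(S, S_hat)]
    return float(np.mean(degrees))
```

Because the measure is the absolute correlation, an estimate that is a scaled and sign-flipped copy of the source still scores a correlation degree of 1.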
3.2. Experimental Procedure and Analysis
This paper is based on multiangle feature recognition. The experimental sample data consist of 300 different models from 30 car brands, including Volkswagen, Hyundai, GM, Audi, DongFeng, Nissan, Honda, and Toyota. To reduce errors caused by image capture, 30 groups of samples were selected for each model, giving 9000 groups in total. From each model, 15 groups of images (4500 samples in total) were selected as training samples, and the remaining samples were used as test samples. Without loss of generality, each model feature starts from an initial set of 36 parameters. The following dimensional reduction of parameters is calculated on 30 sets of front, side, and tail views of the Volkswagen Bora; similar conclusions are obtained for the other models by the same method.
The experimental environment is a PC with an Intel Core i5-2430M 2.4 GHz CPU, 2 GB DDR memory, and Windows 7; the software is Matlab 2010a. Since a video sequence can be treated as a seamless overlay of multiple static images, the test images in this paper were taken as static shots in order to simplify the amount of data.
We use the PCA, 2DPCA, PCA-ICA, PCA-KICA, and improved KICA (2DPCA-KCCA and 2DPCA-KGV) methods, respectively, to reduce the dimension of the original multidimensional characteristic signal , and then calculate the estimated signals as well as the single-signal correlation degree, the average correlation degree, and the Amari error between the original signal and the estimated signals.
The improved algorithm is used to reduce the dimension of the original multidimensional feature data; the finally selected number of principal components is 10. The results, labeled , , and Amari, are shown in Table 1. The dimensional reduction effects of the different methods on the multidimensional feature samples are compared in Table 2.
To reduce the volatility of the dimensional reduction results and give a more objective comparison under the two evaluation criteria, we carry out the dimensional reduction process times on the original multidimensional feature signal and calculate the average correlation degree and the Amari error, as shown in Tables 3 and 4.
The Amari errors and average correlation degrees of the six dimensional reduction methods above are labeled Amari1, Amari2, Amari3, Amari4, Amari5, and Amari6 and , and . Comparison of the Amari errors in Tables 1 and 3 shows that 2DPCA-KICA performs best, with 2DPCA-KGV being the optimal method; comparison of the average correlation degrees in Tables 1 and 4 leads to the same conclusion. Since this agrees with the result obtained using the Amari error as the evaluation criterion, the average correlation degree is an effective evaluation criterion for dimensional reduction. As shown in Table 2, the dimensional reduction method based on 2DPCA-KICA also has the highest efficiency, saving 0.439 s on average compared with the PCA-KICA model.
The combined comparison above shows that the 2DPCA-KICA dimensional reduction method proposed in this paper has higher accuracy, fewer errors, and faster operation, which demonstrates the validity of the improved model.
4. Conclusion
This paper studies and analyzes the reduction method for multidimensional features based on 2DPCA-KICA. It offers the following innovations and benefits. First, it effectively balances dimensional reduction efficiency and processing accuracy, meeting the need for multidimensional reduction under various levels of redundancy and noise with strong robustness. Second, it proposes a dimensional reduction evaluation method that uses the Amari error and the average correlation degree as evaluation criteria. Simulation results show that the method achieves better dimensional reduction and meets real-time requirements. In future work, we will focus on an effective classifier so that the algorithm can be applied in a specific classification model.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
W. C. Sun, Y. Z. Zhao, J. Li, L. Zhang, and H. Gao, "Active suspension control with frequency band constraints and actuator input delay," IEEE Transactions on Industrial Electronics, vol. 59, no. 1, pp. 530–537, 2012.
W. C. Sun, Z. Zhao, and H. Gao, "Saturated adaptive robust control for active suspension systems," IEEE Transactions on Industrial Electronics, vol. 60, no. 9, pp. 3889–3896, 2013.
W. C. Sun, H. Gao, and O. Kaynak, "Adaptive backstepping control for active suspension systems with hard constraints," IEEE/ASME Transactions on Mechatronics, vol. 18, no. 3, pp. 1072–1079, 2013.
L. J. P. van der Maaten, E. O. Postma, and H. J. van den Herik, "Dimensionality reduction: a comparative review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 2, pp. 1–35, 2007.
W. J. Li, J. H. Sun, L. H. Wei, and X. Li, "A novel method for vehicle-logo recognition based on 2DPCA-ICA and SVM," Journal of Liaoning Normal University, vol. 34, no. 2, pp. 165–169, 2011.
S. J. Liang, Z. H. Zhang, L. L. Cui, and Q. H. Zhong, "Dimensionality reduction method based on PCA and KICA," Systems Engineering and Electronics, vol. 33, no. 9, pp. 2144–2148, 2011.
K. Luebke and C. Weihs, "Linear dimension reduction in classification: adaptive procedure for optimum results," Advances in Data Analysis and Classification, vol. 5, no. 3, pp. 201–213, 2011.
K. Lee, A. Gray, and H. Kim, "Dependence maps, a dimensionality reduction with dependence distance for high-dimensional data," Data Mining and Knowledge Discovery, vol. 26, no. 3, pp. 512–532, 2013.
F. R. Bach and M. I. Jordan, "Kernel independent component analysis," Journal of Machine Learning Research, vol. 3, no. 1, pp. 1–48, 2003.
T. Raiko, A. Ilin, and J. Karhunen, "Principal component analysis for sparse high-dimensional data," in Neural Information Processing, pp. 566–575, Springer, Berlin, Germany, 2008.
S. Jegelka and A. Gretton, Brisk Kernel ICA, MIT Press, Boston, Mass, USA, 2007.
J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
B. A. Draper, K. Baek, M. S. Bartlett, and J. R. Beveridge, "Recognizing faces with PCA and ICA," Computer Vision and Image Understanding, vol. 91, no. 1-2, pp. 115–137, 2003.
M. Kato, Y. W. Chen, and G. Xu, "Articulated hand motion tracking using ICA-based motion analysis and particle filtering," Journal of Multimedia, vol. 1, no. 3, pp. 52–60, 2006.
S. Wold, K. Esbensen, and P. Geladi, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1–3, pp. 37–52, 1987.