Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 2173914, 6 pages

http://dx.doi.org/10.1155/2016/2173914

## An Inexact Update Method with Double Parameters for Nonnegative Matrix Factorization

^{1}School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004, China
^{2}Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
^{3}Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
^{4}Guangxi Key Laboratory of Cryptography and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
^{5}School of Mathematics and Information, Beifang University of Nationalities, Yinchuan 710021, China

Received 14 July 2016; Revised 15 October 2016; Accepted 16 October 2016

Academic Editor: Elisa Francomano

Copyright © 2016 Xiangli Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Nonnegative matrix factorization (NMF) has been used as a powerful data representation tool in real-world applications, because the nonnegativity of matrices is often required. In recent years, many new methods have become available for solving NMF in addition to the multiplicative update algorithm, such as gradient descent algorithms, the active set method, and alternating nonnegative least squares (ANLS). In this paper, we propose an inexact update method with two parameters, which ensures that the objective function decreases at every iteration until the optimal solution is found. Experimental results show that the proposed method is effective.

#### 1. Introduction

Nonnegative matrix factorization (NMF) [1] is not only a well-known matrix decomposition approach but also a useful and efficient feature extraction technique. NMF was first put forward by Lee and Seung. Recently, NMF has been successfully applied in many fields, including face verification [2], text mining [3], gene expression data analysis [4], blind source separation [5], and signal processing [6].

A nonnegative matrix $V \in \mathbb{R}_{+}^{m \times n}$ is decomposed into two low-rank nonnegative matrices $W$ and $H$ such that $WH$ is approximately equal to $V$, denoted by
$$V \approx WH, \tag{1}$$
where $W \in \mathbb{R}_{+}^{m \times r}$ and $H \in \mathbb{R}_{+}^{r \times n}$ with $r < \min(m, n)$. $W$ and $H$ mean different things in different applications; for example, in blind source separation (BSS), $W$ and $H$ are called the mixing matrix and the source signal matrix, respectively.

In order to decrease the approximation error between $V$ and $WH$, the Euclidean distance-based model is employed in this paper. Namely, NMF can be expressed in the following optimization form:
$$\min_{W \geq 0,\, H \geq 0} f(W, H) = \frac{1}{2}\|V - WH\|_F^2, \tag{2}$$
where $\|\cdot\|_F$ is the Frobenius norm and $W, H \geq 0$ means that all elements of the matrices are nonnegative. Clearly, it is difficult to find a global optimal solution because the objective function is nonconvex. Therefore, the remaining issue is how to solve this nonconvex problem.
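The Euclidean objective above is straightforward to evaluate numerically. The following sketch (the function name `nmf_objective` is illustrative, not from the paper) computes $\frac{1}{2}\|V - WH\|_F^2$ for given nonnegative factors:

```python
import numpy as np

def nmf_objective(V, W, H):
    """Euclidean NMF objective: f(W, H) = 0.5 * ||V - W H||_F^2.

    The value is zero exactly when W H reproduces V, and strictly
    positive otherwise.
    """
    residual = V - W @ H
    return 0.5 * np.sum(residual * residual)

# Small illustration with random nonnegative data.
rng = np.random.default_rng(0)
V = rng.random((6, 5))
W = rng.random((6, 2))   # m x r factor
H = rng.random((2, 5))   # r x n factor
value = nmf_objective(V, W, H)
```

Because the objective is a sum of squares, it is bounded below by zero, which is what makes the monotone-descent arguments later in the paper meaningful.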

In 2001, Lee and Seung tried to find a local optimal solution instead of a global one and proposed the multiplicative update algorithm [7]. The multiplicative update algorithm is widely used as an efficient computational method for NMF. The update rules for (2) are given as follows:
$$H^{k+1} = H^{k} \odot \frac{(W^{k})^{T} V}{(W^{k})^{T} W^{k} H^{k}}, \qquad W^{k+1} = W^{k} \odot \frac{V (H^{k+1})^{T}}{W^{k} H^{k+1} (H^{k+1})^{T}},$$
where $k$ represents the iteration count and $\odot$ and the fraction bar denote element-wise multiplication and division. Later, many new methods became available for solving NMF in addition to the multiplicative update algorithm, such as gradient descent algorithms [8, 9], the active set method [10], and alternating nonnegative least squares (ANLS) [11–13].

For the sake of solving the minimization problem (2), Hien et al. [14] proposed a novel algorithm in which the update rule contains only one parameter, and the parameter selection is exact in some cases. Compared with the multiplicative update algorithm, the novel algorithm [14] has a faster convergence speed and a smaller decomposition error.

Inexact techniques are widely applied to large-scale optimization problems. Based on this idea, we employ an inexact parameter instead of the exact parameter of [14]. In the meantime, we add another parameter to accelerate the decrease of the objective function. Hence, we present an inexact update algorithm with two parameters. The proposed method also updates the elements of the two factor matrices one by one. Under some assumptions, the proposed method ensures that the objective function decreases at every iteration until the optimal solution is found.

Similar to the multiplicative update algorithm, the proposed method has many advantages, including efficient storage, simple calculation, and good results. In the multiplicative update algorithm, the descent property is established by means of an auxiliary function; by contrast, the proposed method is descent by the local monotonicity of a quadratic function, and it has a faster convergence speed. Besides, the main idea of ANLS is to solve two optimization subproblems alternately; by contrast, the proposed method is easier to implement.

The remainder of this paper is organized as follows. In Section 2, we present the procedure for updating a single element of the factor matrices. In Section 3, we give an inexact update method with double parameters for NMF and establish its convergence properties. In Section 4, experimental results demonstrate the validity of the method. Finally, we conclude the paper.

#### 2. Algorithm for Updating an Element of Matrix

In this section, we discuss the procedure for updating a single element of the factor matrices. In [14], an element is adjusted by adding a parameter: where

Motivated by the above work, we give the following adjustment by two parameters, one of which can be viewed as a constant as well as a function. We define the parameter by a certain value, where . Similar to [14], we have that if ; otherwise, . Next, we deduce where

In order to ensure that the function possesses the descent property, we should define the parameter so that the change is nonpositive. For a given value, it is expressed as follows: Since it is a quadratic function, or . In other words, the value can be "inexact." If, in particular, it reaches the minimum value, the decline is obviously the steepest; in that case, the parameter is said to be "exact."
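The "exact versus inexact" distinction rests on the local monotonicity of a one-variable quadratic. The sketch below (notation `a`, `b`, `s` is illustrative, not the paper's) shows that for $g(s) = as^2 + bs$ with $a > 0$, the exact step $s^{*} = -b/(2a)$ gives the steepest decrease, while any inexact step strictly between $0$ and $2s^{*}$ still yields $g(s) < 0$, i.e. descent:

```python
def quadratic_decrease(a, b, s):
    """Change g(s) = a*s**2 + b*s of a quadratic along a step s,
    with curvature a > 0.  The exact minimizer is s* = -b / (2*a);
    any s strictly between 0 and 2*s* keeps g(s) < 0, so an
    'inexact' step preserves the descent property."""
    return a * s * s + b * s

a, b = 2.0, -3.0               # example curvature and slope
s_exact = -b / (2 * a)         # exact step, here 0.75
drop_exact = quadratic_decrease(a, b, s_exact)        # steepest decrease
drop_inexact = quadratic_decrease(a, b, 0.5 * s_exact)  # still negative
```

At $s = 2s^{*}$ the decrease vanishes, which is why the admissible interval for an inexact parameter is open at both ends.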

Lemma 1. *If does not satisfy the KKT conditions, then ; otherwise, ; this implies .*

*Proof.* Clearly, , , and are the KKT conditions of (2). By the definition, we have that if (12) is not satisfied, we can obtain the claim under the condition or and . Therefore, .

Conversely, if and , we have ; if and , it is easy to see that and . In either case, .

Similarly, we can deduce the following update rule and Lemma 2: Let For a given value, it is expressed as follows: where or .

Lemma 2. *If does not satisfy the KKT conditions, then ; otherwise, ; this implies .*

#### 3. The Proposed Algorithm

Based on the above analysis, in this section, we report our algorithm as follows.

*Algorithm 3.*
(1) Give the starting point and ; set .
(2) Update by using the following formula: for , and is computed from (11).
(3) Update by using the following formula: for , and is computed from (15).
(4) Let . Go to (2).
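The paper's own per-element formulas (11) and (15), with their two parameters, were lost in extraction and are not reproduced here. As a hedged illustration of the general element-by-element scheme that Algorithm 3 follows, the sketch below performs one sweep of plain exact coordinate updates (the special case where the "exact" parameter choice is used throughout), with each entry moved to the minimizer of its one-variable quadratic restriction and clamped to stay nonnegative:

```python
import numpy as np

def elementwise_nmf_step(V, W, H):
    """One sweep of element-by-element updates for
    min 0.5*||V - W H||_F^2  s.t.  W, H >= 0.

    For each entry, the restriction of the objective is a 1-D
    quadratic; we step to its minimizer (gradient / curvature)
    and project back onto the nonnegative orthant, so the
    objective is non-increasing."""
    m, r = W.shape
    n = H.shape[1]
    for a in range(r):
        for j in range(n):                    # update H[a, j]
            grad = W[:, a] @ (W @ H[:, j] - V[:, j])
            curv = W[:, a] @ W[:, a]          # (W^T W)[a, a]
            if curv > 0:
                H[a, j] = max(0.0, H[a, j] - grad / curv)
        for i in range(m):                    # update W[i, a]
            grad = (W[i, :] @ H - V[i, :]) @ H[a, :]
            curv = H[a, :] @ H[a, :]          # (H H^T)[a, a]
            if curv > 0:
                W[i, a] = max(0.0, W[i, a] - grad / curv)
    return W, H
```

Because every coordinate step minimizes its own quadratic restriction, each sweep can only decrease the objective, mirroring the descent guarantee established by Lemmas 1 and 2.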

In order to ensure the monotonic decrease of the objective function, we give the next theorem, which follows directly from Lemmas 1 and 2 and Algorithm 3.

Theorem 4. *Suppose that is generated by Algorithm 3; then is monotonically decreasing; that is, *

Corollary 5. *If the sequence is bounded, then it is a convergent sequence; namely, there exists a positive constant C such that for all large enough.*

The above corollary follows directly from the theorem. Next, another convergence property of Algorithm 3 is given.

Theorem 6. *Suppose is a limit point of and Then is the stationary point of problem (2).*

Since the above theorem corresponds to Theorem 3.2.2 of [14] and the proof is the same as the one given there, we do not repeat it here.

#### 4. Numerical Experiments

In this section, we give some numerical experiments on Algorithm 3 and compare its behavior with the method of [14] (NNMF). All code is written in MATLAB, run in MATLAB 7.10, and carried out on a PC (CPU 2.13 GHz, 2 GB memory) with the Windows 7 operating system.

In the experiments, we compare the following statistics: the number of iterations (Iter), the CPU time in seconds (Time), the speed of convergence to a stationary point, and the minimum value of the objective function (Fun). As in [14], the speed of convergence is measured by the following formula: where is the number of elements of the set and is the number of elements of the set ; the quantity is called a normalized KKT residual.
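The paper's exact normalization from [14] was lost in extraction, so the sketch below computes a common form of normalized KKT residual for problem (2) as an assumption-laden stand-in: for each variable, the violation $|\min(x, \partial f/\partial x)|$ is zero exactly when the KKT conditions $x \geq 0$, $\partial f/\partial x \geq 0$, $x \cdot \partial f/\partial x = 0$ hold, and the total violation is averaged over all entries:

```python
import numpy as np

def normalized_kkt_residual(V, W, H):
    """A common normalized KKT residual for the Euclidean NMF
    problem min 0.5*||V - W H||_F^2, W, H >= 0.  Returns the mean
    per-entry violation |min(x, grad)|, which vanishes exactly at
    KKT points (illustrative form; not the paper's own formula)."""
    gW = (W @ H - V) @ H.T       # gradient with respect to W
    gH = W.T @ (W @ H - V)       # gradient with respect to H
    violation = (np.abs(np.minimum(W, gW)).sum()
                 + np.abs(np.minimum(H, gH)).sum())
    return violation / (W.size + H.size)
```

At an exact nonnegative factorization $V = WH$ both gradients vanish, so the residual is zero, which is consistent with the residual's role as a stationarity measure.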

In order to avoid the effect of the initial point on the numerical results, in every experiment we use 20 randomly generated initial points. We list the average values of Iter, Fun, Time, and the normalized KKT residual, respectively.

In Table 1, the relevant parameters of Algorithm 3 are specified as follows: