Abstract

Minimizing a sum of Euclidean norms (MSEN) is a classic minimization problem widely used in several applications, including the determination of single and multifacility locations. The objective of the MSEN problem is to find a vector that minimizes a sum of Euclidean norms of systems of equations. In this paper, we propose a modification of the MSEN problem, which we call the problem of minimizing a sum of squared Euclidean norms with rank constraint, or simply the MSSEN-RC problem. The objective of the MSSEN-RC problem is to obtain a vector and rank-constrained matrices such that they minimize a sum of squared Euclidean norms of systems of equations. Additionally, we present an algorithm based on the regularized alternating least-squares (RALS) method for solving the MSSEN-RC problem. We show that, given the existence of critical points of the alternating least-squares method, the limit points of the converging sequences of the RALS method are critical points of the objective function. Finally, we present numerical experiments that demonstrate the efficiency of the RALS method.

1. Introduction

The problem of minimizing a sum of Euclidean norms (MSEN) is an optimization problem whose goal is to find a vector that minimizes a sum of Euclidean norms of systems of equations , where and , for . In the single-facility location problem, the given vectors represent the coordinates of existing facilities in the plane, and the unknown vector represents the coordinates of a new facility placed at the position that minimizes the weighted sum of distances from the new facility to each existing facility. The multifacility location problem arises when the number of new facilities is greater than one. The weighted sum of norms to be minimized then involves the distances between each pair of new facilities and the distances between the new and existing facilities.

The MSEN problem was studied by Fermat in the seventeenth century for the special case in which each matrix is the identity and there are three given points. Here, the goal is to find the point in the plane that minimizes the sum of its distances to the three given points. Moreover, various names are associated with the MSEN problem, for example, the general Fermat problem, the Weber problem, and the weighted single-facility location problem.

The MSEN problem arises in many applications, such as the superresolution mapping based on spatial-spectral correlation to overcome the influence of linear and nonlinear imaging conditions [1]. Specifically, the spatial correlation is obtained using the mixed spatial attraction model based on the linear Euclidean distance. Other applications of the MSEN problem are the Euclidean facilities location problem applied in transportation and logistics and the Steiner minimal tree problem under a given topology [24].

Several techniques solve the MSEN problem. For example, the predictor-corrector algorithm developed by Andersen et al. in [2] is derived from a primal-dual interior-point algorithm by applying Newton's method directly to a system of nonlinear equations representing primal and dual feasibility and a perturbed complementarity condition. Furthermore, Qi et al. [5] develop a primal-dual algorithm based on an augmented smoothing technique for solving a nonsmooth equation. By transforming the MSEN problem into a classic convex programming problem in conic form, Xue and Ye show in [6] that an ε-optimal solution of the MSEN problem can be estimated efficiently using interior-point algorithms.

The MSEN problem is convex but not differentiable [3]. Moreover, the accuracy achieved by this problem may still be unsatisfactory because the solution vector is not a solution of all the linear systems , which produces a higher value of the objective function of the MSEN problem.

In this paper, we wish to develop and justify a new optimization problem that provides better accuracy than the MSEN problem. We propose a new optimization problem based on the classical MSEN problem, which we call the problem of minimizing a sum of squared Euclidean norms with rank constraint, or simply the MSSEN-RC problem. The goal of the MSSEN-RC problem is to obtain a vector and rank-constrained matrices such that they minimize a sum of squared Euclidean norms of systems of equations .

The proposed MSSEN-RC problem is important because it extends the use of linear forms in the MSEN problem to bilinear forms . The use of bilinear forms increases the accuracy of the objective function by increasing the number of parameters to be optimized. Augmenting the number of optimized parameters to improve accuracy is used in various minimization problems (see, e.g., [7–10]). Moreover, the objective function of the MSSEN-RC problem is differentiable, unlike that of the MSEN problem. This feature is essential because we can use a gradient method to estimate a solution and thus improve the estimate by updating the current guess in the direction of its gradient, for example, with the BFGS algorithm [11].

There are two significant differences between the MSEN and the MSSEN-RC problems. First, the MSSEN-RC problem finds the optimal vector and rank-constrained matrices , while the MSEN problem finds the optimal vector . This modification is essential to obtain equivalent linear systems . Second, the MSSEN-RC problem considers a least-squares problem instead of a least-absolute problem as in the MSEN problem. This adaptation permits us to obtain a stable solution because the objective function in the MSSEN-RC problem is a differentiable function, unlike the objective function in the MSEN problem.

Additionally, in this paper, we propose an algorithm to obtain a solution for the MSSEN-RC problem. This algorithm is based on the regularized alternating least-squares (RALS) method [13–15] (in [12, 13], the RALS method is known as a proximal point modification of the Gauss-Seidel method). The RALS method is an iterative method that adds a regularization term to the classical alternating least-squares method [14]. This additional term does not address the degeneracy problem, however. Moreover, the limit points of the RALS method will be shown to be critical points of the original optimization problem and not of the regularized version (see Section 4.4). The RALS method is utilized in related fields, such as remote sensing and computer vision. For example, this iterative method is used to estimate the solution of a nonnegative coupled hyperspectral image superresolution model for image reconstruction [16].

The RALS method is based on two known techniques. First, it uses an update to the classical Broyden-Fletcher-Goldfarb-Shanno (BFGS) method as presented by Li and Fukushima in [17]. The BFGS method estimates a vector that minimizes the objective function when the matrices are known. The BFGS method is part of the Quasi-Newton family and is an iterative method for solving unconstrained nonlinear optimization problems (see Section 4.1). Second, the RALS method uses a solution to the generalized low-rank matrix approximation problem [18] to compute rank-constrained matrices that minimize when is known (see Section 4.2). We show that the limit points of the converging sequences of the RALS method are critical points of . Finally, in this paper, we present several numerical simulations that illustrate the advantages of the proposed RALS method for estimating a solution to the MSSEN-RC problem.

The main motivation of this paper is to expand the theoretical framework developed around the classical MSEN problem (see, e.g., [2, 4]). Further, the main contributions of this paper are as follows:
(i) We present a new optimization problem, the so-called MSSEN-RC problem, which generalizes the MSEN problem.
(ii) A brief theoretical study of the MSSEN-RC problem is provided in this work.
(iii) We develop an algorithm to estimate a solution for the MSSEN-RC problem. Additionally, we present its convergence analysis.
(iv) Numerical experiments have been performed to demonstrate the effectiveness of the proposed algorithm.

The paper is organized as follows. In Section 2, we present the notation used in this paper. Additionally, we introduce the mathematical formulation of the MSEN problem. A theoretical study for the MSSEN-RC problem is developed in Section 3. Section 4 presents the RALS method to estimate a solution to the MSSEN-RC problem. Here, we explain in detail the two methods on which the RALS method is based: the BFGS method (in Section 4.1) and the low-rank matrix approximation technique (in Section 4.2). In Section 4.3, we analyze two approaches to compute regularization parameters. The convergence analysis of the proposed method is presented in Section 4.4. Section 5 presents numerical experiments based on random matrices. Finally, Section 6 contains some concluding remarks.

2. Preliminaries

2.1. Notations

Throughout this paper, we use the following notation. Let be the set of all real matrices of rank at most . Let be the -ary Cartesian product over the set of all real matrices, i.e.,

We define as the -ary Cartesian product over the set of all real matrices of rank at most , for , i.e.,

To simplify notation, we will omit the subscript in , i.e., . We consider function such that where and . Based on equation (119) in [19], is differentiable. The gradient vector is defined by where

Finally, it follows from Section 1.9 in [14] that a norm of is defined by where is the Frobenius norm.

2.2. The MSEN Problem

Given matrices and vectors , for , the MSEN problem finds a vector such that

The MSEN problem emerges in many applications, such as superresolution mapping, VLSI design, transportation and logistics, and the Steiner minimal tree problem. Furthermore, there are several algorithms to estimate the optimal in the MSEN problem, for example, the predictor-corrector method developed in [2], the primal-dual algorithm created in [5], and the interior-point technique presented in [6].

Note that if is a solution of problem (7) such that then is a solution of the equivalent linear systems . However, a solution to problem (7) does not always satisfy condition (8) because the accuracy of the MSEN problem is not yet satisfactory. To increase this accuracy, we also need to increase the number of parameters to optimize. Therefore, in the next section, we propose a new minimization problem called the problem of minimizing a sum of squared Euclidean norms with rank constraint, or simply the MSSEN-RC problem.

3. A Theoretical Study for the MSSEN-RC Problem

The goal of the MSSEN-RC problem is to estimate a vector and rank-constrained matrices , for , such that they minimize a sum of squared Euclidean norms. Mathematically, given vectors and positive integers , for , the MSSEN-RC problem finds and , for , such that
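For concreteness, the following sketch (in Python with NumPy) evaluates the two objective functions involved, assuming the standard formulations min over x of the sum of ||A_i x - b_i||_2 for the MSEN problem (7) and min over x and the Z_i of the sum of ||Z_i x - b_i||_2^2 for the MSSEN-RC problem (9); the symbols A_i, Z_i, b_i, and x are placeholder names of this sketch and not necessarily the paper's notation.

import numpy as np

def msen_objective(x, A_list, b_list):
    # Sum of Euclidean norms, sum_i ||A_i x - b_i||_2, as in problem (7)
    return sum(np.linalg.norm(A @ x - b) for A, b in zip(A_list, b_list))

def mssen_rc_objective(x, Z_list, b_list):
    # Sum of squared Euclidean norms, sum_i ||Z_i x - b_i||_2^2, as in problem (9);
    # the rank constraints rank(Z_i) <= r_i are enforced by how the Z_i are built
    return sum(np.linalg.norm(Z @ x - b) ** 2 for Z, b in zip(Z_list, b_list))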

In Theorem 1 below, we establish that the MSSEN-RC problem (9) has an infinite number of solutions.

Theorem 1. The MSSEN-RC problem has a global minimizer, and it is not unique.

Proof. A solution of the MSSEN-RC problem is given by any nonnull vector and for , where is the Moore-Penrose inverse of and is an arbitrary matrix such that . Based on the fact that (see, e.g., [20]), it is clear that and each given by (10) are a global minimizer of problem (9) because

Finally, solution of problem (9) is not unique because and are arbitrarily chosen.

Some relevant remarks on the MSSEN-RC problem are presented below.

Remark 2. Note that if is the null matrix in (10), then is a solution for problem (9), where , for all . However, finding matrices in (10) such that and is not straightforward. Therefore, in this paper, we consider the MSSEN-RC problem when and , for all .
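As a small numerical illustration of Remark 2 (the case in which the arbitrary matrix in (10) is the null matrix), the sketch below forms the rank-one matrices obtained from the Moore-Penrose inverse of a nonzero vector and checks that they solve the equivalent linear systems; the names and dimensions are illustrative assumptions.

import numpy as np

def rank_one_solution(x, b_list):
    # Z_i = b_i x^+, where x^+ = x^T / (x^T x) is the Moore-Penrose inverse of the
    # nonzero column vector x; each Z_i satisfies Z_i x = b_i and rank(Z_i) <= 1
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    x_pinv = x.T / float(x.T @ x)
    return [np.asarray(b, dtype=float).reshape(-1, 1) @ x_pinv for b in b_list]

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
bs = [rng.standard_normal(3) for _ in range(2)]
Zs = rank_one_solution(x, bs)
print([np.allclose(Z @ x, b) for Z, b in zip(Zs, bs)])   # [True, True]
print([np.linalg.matrix_rank(Z) for Z in Zs])            # [1, 1]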

Remark 3. Proof of Theorem 1 shows the existence of a global solution to problem (9). Additionally, it follows from (11) that if are given, then (8) is true and, therefore, , for all . Thus, we conclude that the MSSEN-RC problem allows us to obtain equivalent linear systems such that each is rank-constrained.

Remark 4. There are two differences between the MSEN problem and the MSSEN-RC problem. First, the MSSEN-RC problem estimates the optimal vector and rank-constrained matrices , while the MSEN problem only estimates the optimal vector . This modification is necessary to obtain equivalent linear systems. Second, we consider a least-squares problem in (9) instead of a least-absolute problem as in (7). This modification allows us to obtain a stable solution in (9) because the objective function in the MSSEN-RC problem is a differentiable function, unlike the objective function in the MSEN problem.

4. A Regularized Alternating Least-Squares Method to Solve (9)

The MSSEN-RC problem can be solved by the alternating least-squares method, i.e., by alternately solving the following two subproblems: for and . Generally, an alternating least-squares method can produce a converging sequence with limit points that are critical points of the problem under certain conditions. Two of these conditions are that each objective function in the alternating minimization method must have a unique solution (see, e.g., Chapter 14 in [14]) or it must be strictly convex (see, e.g., Proposition 5 in [13]). These conditions are not satisfied in (12). Note, for example, that there is no dependency between the matrices in (3) and, therefore, problem (12) can be rewritten as for . A solution of problem (12) is given by (10), i.e., where is an arbitrary matrix such that . Therefore, the solution of (12) is not unique.

To resolve the above issue, we add to (12) regularization terms that penalize the difference from the previous iterates, which themselves need not form a bounded sequence. Therefore, a regularized alternating least-squares (RALS) method to estimate a solution to the MSSEN-RC problem is given by for and . Here, and are regularization parameters such that and are sequences converging to zero. The regularization terms and are the fitting terms for and , respectively. The additional regularization terms in (15) penalize the difference from the previous iterates. This approach does not address the degeneracy problem. Moreover, assumptions of convexity and uniqueness of solutions are not required if we are only interested in the critical points (see Section 7 in [13]). The limit points of the RALS algorithm will be shown to be critical points of (3) and not of the regularized version.

In Sections 4.1 and 4.2 below, we present two algorithms used by the RALS method to estimate a solution to subproblems (15), respectively. The first algorithm is an update to the BFGS method, part of the Quasi-Newton family. The second algorithm uses a generalized low-rank matrix approximation problem to estimate rank-constrained matrices.

4.1. A BFGS Method to Solve First Problem in (15)

To solve the first problem in (15), we define a function such that , and the minimization problem

Note that function can be simplified as follows:

where and such that and , for all . Therefore, function is represented by

Additionally, we consider the gradient function defined by

Note that if then and, therefore, a solution for problem (16) is given by the least-squares solution (see, e.g., [20] for more details). Furthermore, we might use Tikhonov regularization to give preference to a particular solution with desirable properties (see, e.g., [21] for more details). However, in both cases the computational complexity increases when and are huge. Moreover, if is ill-conditioned, then the estimate of is not very accurate. Therefore, in this paper, we use an iterative method to compute a solution of problem (16).
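For moderate dimensions, the direct route mentioned above can be written explicitly. The sketch below solves the regularized normal equations for the first subproblem in (15), under the assumption that this subproblem has the quadratic form min over x of sum_i ||Z_i x - b_i||^2 + lam ||x - x_prev||^2; the function and variable names are ours, and, as noted, this approach becomes expensive and inaccurate for large or ill-conditioned problems, which motivates the BFGS iteration described next.

import numpy as np

def solve_x_direct(Z_list, b_list, lam, x_prev):
    # Normal equations of the regularized least-squares subproblem:
    #   (sum_i Z_i^T Z_i + lam I) x = sum_i Z_i^T b_i + lam x_prev
    n = x_prev.size
    G = lam * np.eye(n) + sum(Z.T @ Z for Z in Z_list)
    h = lam * x_prev + sum(Z.T @ b for Z, b in zip(Z_list, b_list))
    return np.linalg.solve(G, h)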

To estimate a solution to problem (16), we use an update to the classical BFGS method presented by Li and Fukushima in [17]. The classical BFGS method is a well-known Quasi-Newton method for solving unconstrained optimization problems. Moreover, the BFGS method determines the descent direction by preconditioning the gradient with curvature information [22]. Li and Fukushima propose a cautious BFGS update and prove that, if the objective function to be minimized has Lipschitz continuous gradients, then the new BFGS method converges globally using an Armijo-type line search [17]. Note that in (17) is Lipschitz continuous because, for all , where . The last inequality follows from Fact 9.8.41 in [23]. The BFGS method in [24] to estimate a solution of problem (16) is based on the following steps:
(i) Step 1: choose an initial vector , an initial symmetric and positive-definite matrix , and constants and .
(ii) Step 2: determine the BFGS direction by solving the linear system .
(iii) Step 3: find a stepsize that is the largest value in the set such that the inequality is satisfied. This step is the Armijo-type line search.
(iv) Step 4: calculate .
(v) Step 5: compute and .
(vi) Step 6: if , then update with .
(vii) Step 7: replace by .
(viii) Step 8: if , then quit. Otherwise, go to Step 2.

Input:
Output:
1 procedure: BFGS_method
2   and
3  for do
4     and
5  for do
6    determine such that
7   find smallest such that
              
8   
9   if
10    
11    break
12   and
13   if then
14    
15   else
16    
17  return

As mentioned in step 2, the initial matrix is any arbitrary symmetric and positive-definite matrix. Thus, in this paper, we consider as initial matrix the identity matrix of order , i.e., . Moreover, the constants can also be chosen arbitrarily in the BFGS method. Thus, without loss of generality, we use the constants , , and . A pseudocode of the BFGS method in [17] is presented in Algorithm 1 below.
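To make the eight steps above concrete, the following is a compact sketch of a cautious BFGS iteration of the kind proposed in [17], with an Armijo-type backtracking line search; the parameter names, default values, and the exact form of the cautious test are illustrative assumptions, not a transcription of Algorithm 1.

import numpy as np

def cautious_bfgs(f, grad, x0, tol=1e-6, max_iter=200,
                  sigma=1e-4, rho=0.5, eps_cautious=1e-6, alpha=1.0):
    # Step 1: initial point, initial symmetric positive-definite matrix, constants
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:          # Step 8: stopping test
            break
        d = np.linalg.solve(B, -g)            # Step 2: BFGS direction, B d = -g
        t = 1.0                               # Step 3: Armijo-type line search
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= rho
        x_new = x + t * d                     # Step 4: new iterate
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g           # Step 5: curvature pair
        # Step 6: cautious update, applied only when the curvature is sufficiently positive
        if y @ s / (s @ s) >= eps_cautious * np.linalg.norm(g) ** alpha:
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        x, g = x_new, g_new                   # Step 7: advance the iteration
    return x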

Some relevant remarks of the BFGS method in Algorithm 1 are presented below.

Remark 5. If the level set is bounded, then the BFGS method in [17] converges globally, i.e., This result follows from Theorem 3.3 in [17] and the fact that in (17) is Lipschitz continuous (see (18) above). Additionally, Theorem 3.3 in [17] shows that there exists a subsequence of converging to a stationary point of (16).

Remark 6. In step 7 of Algorithm 1, we use the Armijo-type line search to estimate the optimal . However, Li and Fukushima show in [17] that a Wolfe-type inexact line search can also be used in the BFGS method while still obtaining global convergence.

Remark 7. The classical BFGS method updates the matrix using the formula given in step 14 of Algorithm 1, even when

In this case, if inequality (20) holds, then is not necessarily positive-definite, even if is positive-definite. Therefore, Li and Fukushima present in [17] the cautious update shown in steps 13-16 of Algorithm 1 to ensure the positive-definiteness of . This condition is necessary to ensure global convergence. Additionally, the linear system in step 6 of Algorithm 1 can be solved using a fast algorithm for symmetric and positive-definite matrices, for example, the Cholesky decomposition (see, e.g., Theorem 4.18, page 97, in [25]).
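For illustration, the linear system of step 6 could be solved with a Cholesky factorization as follows (a sketch using SciPy's cho_factor and cho_solve; the helper name is ours, and the approach relies on the positive definiteness guaranteed by the cautious update).

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def bfgs_direction(B, g):
    # Solve B d = -g; valid because the cautious update keeps B symmetric
    # positive-definite, so a Cholesky factorization exists
    factor = cho_factor(B)
    return cho_solve(factor, -g)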

Remark 8. In this paper, we choose the BFGS method developed in [17] for its easy computational implementation and its speed and efficiency in estimating a solution of problem (16). However, there are other methods to estimate a solution of the MSEN problem (see, e.g., [26]).

4.2. A Low-Rank Matrix Approximation Problem to Solve Second Problem in (15)

To solve the second problem in (15), we consider the optimization problem for . Let be a partitioned matrix. Note that

Therefore, if and , then problem (21) can be reformulated as follows:

A solution of problem (22), proposed and justified by Friedland and Torokhti in [18], is given by for , where and is the -truncated singular value decomposition (SVD) of , i.e., if is the SVD of , then the -truncated SVD of is defined by where and are formed with the first columns of and , respectively, and is formed with the first rows and columns of . A pseudocode to solve problem (22), and hence problem (15), is presented in Algorithm 2 below.
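A minimal sketch of this construction follows, under the assumption that the reformulated problem (22) reads min over rank(Z) <= r of ||Z C - D||_F with the partitioned matrices C = [x, sqrt(lam) I] and D = [b, sqrt(lam) Z_prev], solved with the Friedland-Torokhti formula Z = (D C^+ C)_r C^+, where (.)_r denotes the r-truncated SVD; the helper names are placeholders.

import numpy as np

def truncated_svd(M, r):
    # Best rank-r approximation of M via the r-truncated SVD
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def solve_Z_subproblem(x, b, lam, Z_prev, r):
    # min_{rank(Z) <= r} ||Z x - b||^2 + lam ||Z - Z_prev||_F^2,
    # rewritten as min ||Z C - D||_F^2 and solved by Z = (D C^+ C)_r C^+
    n = x.size
    C = np.hstack([x.reshape(-1, 1), np.sqrt(lam) * np.eye(n)])   # n x (n + 1)
    D = np.hstack([b.reshape(-1, 1), np.sqrt(lam) * Z_prev])      # m x (n + 1)
    C_pinv = np.linalg.pinv(C)
    return truncated_svd(D @ (C_pinv @ C), r) @ C_pinv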

Remark 9. It follows from Theorem 3.1 in [27] that if then for all . Moreover, it follows from Fact 6.4.2 in [23] that if and are not in the range space of and , respectively, then

Thus, under the above-mentioned conditions, (24) is true and, therefore, for any . In particular, if we choose , then for all and . Our numerical simulations in Section 5 show that inequality (24) is always true and, therefore, for all and .

Input: , , , ,
Output:
1 procedure: low_rank_method
2 for do
3 
4 
5 
6 
7 
8 return
4.3. RALS Method and Regularization Parameters and

Based on iterative procedure (15) and Algorithms 1 and 2, we show in Algorithm 3 a pseudocode of the RALS method to estimate a solution of the MSSEN-RC problem (9).

Input: , , , , ,
Output: ,
1procedure: RALS_method , , , , , ,
2  
3  for
4    compute
5    = BFGS_method , , , , ,
6    compute
7    =low_rank_method
8    if , then
9      
10      break
11 return ,
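For orientation, a compact sketch of the overall RALS iteration, in the spirit of Algorithm 3, is given below. It reuses the helper sketches from Sections 4.1 and 4.2 (solve_x_direct or cautious_bfgs, and solve_Z_subproblem) together with a geometrically decaying regularization parameter, anticipating the choice discussed in this section; all names, default values, and the stopping test are illustrative assumptions.

import numpy as np

def rals_mssen_rc(b_list, ranks, x0, Z0_list, lam0=0.5, q=0.5,
                  tol=1e-8, max_iter=100):
    # Alternate between the regularized x-subproblem and the rank-constrained
    # Z-subproblems of (15), with lam_k = lam0 * q**k decaying geometrically
    x = np.asarray(x0, dtype=float)
    Z_list = [np.asarray(Z, dtype=float) for Z in Z0_list]
    f_prev = mssen_rc_objective(x, Z_list, b_list)
    for k in range(max_iter):
        lam = lam0 * q ** k
        x = solve_x_direct(Z_list, b_list, lam, x)     # or cautious_bfgs on the subproblem
        Z_list = [solve_Z_subproblem(x, b, lam, Z, r)
                  for b, Z, r in zip(b_list, Z_list, ranks)]
        f_curr = mssen_rc_objective(x, Z_list, b_list)
        if abs(f_prev - f_curr) <= tol:                # simple stopping test
            break
        f_prev = f_curr
    return x, Z_list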

Moreover, recall that the goal of regularization is to produce a solution as close as possible to the exact solution. Therefore, in this section, we analyze two approaches to compute the regularization parameters and used in steps 4 and 6, respectively, of Algorithm 3.

1. Optimal estimation: first, note that optimization problem in (16) can be reformulated as where and . A solution for problem (16) is given by , where is a solution for problem (25). We consider two cases to compute optimal :

If the variance of the data is unknown, we can use the method of generalized cross-validation (GCV) [28] to get an approximate optimal value for . The GCV estimate of is given by where is the minimizer of where
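As an illustration of this choice, the sketch below evaluates the standard GCV functional for a Tikhonov problem of the form min ||A xi - c||^2 + mu ||xi||^2 via the SVD of A and picks the minimizer over a grid; the formula is the usual GCV criterion, and the names and search grid are assumptions of this sketch.

import numpy as np

def gcv_parameter(A, c, grid=np.logspace(-8, 2, 200)):
    # GCV criterion: G(mu) = ||A xi_mu - c||^2 / trace(I - A (A^T A + mu I)^{-1} A^T)^2
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ c
    res0 = np.linalg.norm(c) ** 2 - np.linalg.norm(beta) ** 2   # residual outside range(A)
    m = A.shape[0]
    def G(mu):
        w = mu / (s ** 2 + mu)                                   # 1 - filter factors
        residual = np.sum((w * beta) ** 2) + res0
        trace = m - np.sum(s ** 2 / (s ** 2 + mu))
        return residual / trace ** 2
    return min(grid, key=G)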

If the variance of the data is known, then consists of the true data plus random noise, i.e., , where is a random vector from a probability distribution with mean and standard deviation . Let be the SVD of , where and are orthogonal matrices, and is a generalized diagonal matrix. It follows from [29] that the optimal can be estimated by , where is the minimizer of where and is a suitable index, depending on the standard deviation . Other methods to estimate the optimal parameter, when the variance of the data is known, are presented in [24, 30].

Otherwise, to compute the optimal in (21), for , we need to solve the problem where is the solution of the regularized problem (21), which is given by (23), and is a solution of the unregularized problem, which is given by (14).

2. Approximate estimation: the iterative procedure (15) is an iterative Tikhonov regularization method. As mentioned in [31], this method can be understood in two ways: either the regularization parameters and are fixed and the iteration number is the main steering parameter, or the iteration number is set a priori and and are tuned. If the tuning is based on a parameter choice rule, both procedures give a convergent regularization. The parameter choice must be related to the noise level in the data according to a well-known result by Bakushinskii [32, 33]. Otherwise, a choice independent of the noise level cannot be expected to give convergent results in the worst case. As mentioned in [31], the two schemes above can be mixed to approximate the optimal and by choosing a different regularization parameter at each iteration, e.g., a geometrically decaying sequence with , for all . In fact, this is the strategy that we use in the numerical experiments in Section 5 with .

4.4. Convergence Analysis

In Theorems 10 and 11 below, we present the convergence analysis of the proposed RALS method in Algorithm 3. In both theorems, we use the following notation. We define and such that for , and . Let and be the zero vector in and the zero matrix in , respectively. We define the zero element in as .

Theorem 10. Let be a sequence obtained from Algorithm 3. Then, sequence is convergent.

Proof. Algorithm 3 is based on subproblems (15). These subproblems provide the following inequalities: for . It follows from (26) and (27) that and therefore for all . Using inequalities (28)-(29) repeatedly, we have and therefore for all . It follows from (30) that is a decreasing sequence. Moreover, is bounded below because , for all . Thus, by the monotone convergence theorem (see, e.g., Theorem 2.3, page 39, in [34]), we conclude that is convergent.

Theorem 11. Let be a sequence obtained from Algorithm 3. If as , then:
1. as
2. is a critical point of function , i.e., .

Proof. 1. From inequality (30), we obtain that for all . It follows from the fact that as , inequalities in (31), the continuity of and the squeeze theorem that for all , i.e., as .
2. It follows from (32) that, taking limits in (26) and (27), we have which implies for . From (15), we deduce that and minimize and , respectively. Therefore, for . Then, taking in (34), using the arguments in (33), the continuity of , and the fact that and are sequences converging to zero, we obtain for . Finally, it follows from (35) that is a critical point of function .
Thus, the proof of Theorem 11 is completed.

Some remarks of Theorem 11 and proposed RALS method in Algorithm 3 are presented below.

Remark 12. As a result of Theorem 11, the RALS method minimizes the same cost function as the alternating least-squares method. Moreover, we proved that the limit point obtained from the RALS algorithm is a critical point of . However, our numerical simulations in Section 5 show that this critical point is typically also globally optimal.

Remark 13. Theorem 11 is a conditional convergence result, hinging on the existence of the limit point . Thus, this result does not address the degeneracy problem. An analysis of the existence of the limit of the RALS method is not included in this work, as it is a challenging problem that would require careful study.

5. Numerical Experiments

In this section, we show some numerical experiments with the RALS method in Algorithm 3 to estimate a solution to the MSSEN-RC problem (9). Algorithm 3 was implemented in GNU Octave 6.1.0. The numerical experiments were run on a desktop computer with a 2.20 GHz processor (Intel Xeon E5-2660 v2) and 48.00 GB RAM.

As mentioned in Section 1, the proposed MSSEN-RC problem in (9) is a new optimization problem. For that reason, we cannot compare Algorithm 3 with another method that solves that problem. However, the MSSEN-RC problem generalizes the classical MSEN problem given in (7). In the following experiments, we compare Algorithm 3 with the predictor-corrector method given in [2], which estimates a solution of the MSEN problem. This predictor-corrector technique is derived from a primal-dual interior-point algorithm by applying Newton's method directly to a system of nonlinear equations characterizing primal and dual feasibility and a perturbed complementarity condition. The stopping condition of this method is given by the duality gap, i.e., where is a dual variable of the MSEN problem and , for all (more details of the predictor-corrector algorithm are given in [2]). Based on Algorithm 3 and the predictor-corrector method, we show numerically that the minimum obtained in the MSSEN-RC problem is less than that of the MSEN problem, i.e., where and .

5.1. Experiment 1

We consider the first six numerical examples given in Section 7 in [35]. These examples are used to solve the MSEN problem (7). Therefore, we generalize the numerical results in [35] to the proposed MSSEN-RC problem (9) and Algorithm 3. We defined , , and , for , such that , , , and , where . In Algorithm 3, we use , , , and .

Our numerical results are summarized in Tables 1, 2, 3, and 4. In Tables 1 and 2, is the number of iterations. In Tables 3 and 4, we present the solutions to each numerical simulation given by Algorithm 3 and the predictor-corrector method in [2], respectively. Additionally, Figure 1 shows diagrams of the errors associated with the numerical simulations in Table 1.

The results listed in Tables 1 and 2 show that Algorithm 3 is extremely promising. Moreover, it follows from Table 1 and Figure 1 that Algorithm 3 is able to approximate numerical solutions for all the experiments within a reasonable number of iterations. In particular, Figure 1 illustrates that the error decreases to zero as the iterations increase, i.e., as . This behavior follows from Theorem 11.

Furthermore, note that the matrices , , and in Table 3 are full-rank, i.e., . This result follows from inequality (19) in Remark 9, which holds in this experiment, for all and .

Finally, it follows from Tables 1 and 2 that , i.e., the minimum obtained in the MSSEN-RC problem is less than that of the MSEN problem.

5.2. Experiment 2

We evaluate the efficiency of Algorithm 3 to approximate a solution of the MSSEN-RC problem (9), using random matrices. Specifically, for , we consider matrices and vectors generated from a normal distribution with zero mean and standard deviation . In Algorithm 3, we use , is the vector of ones, , for and .
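As a hypothetical usage example mirroring this setup, one could generate the random data and call the RALS sketch from Section 4.3 as follows; the dimensions, seed, initialization of the matrices, and function names are assumptions for illustration only, not the exact experimental configuration.

import numpy as np

rng = np.random.default_rng(42)
N, m, n, r = 5, 50, 30, 10
A_list = [rng.standard_normal((m, n)) for _ in range(N)]   # used here as the initial Z_i
b_list = [rng.standard_normal(m) for _ in range(N)]
x0 = np.ones(n)                                             # vector of ones as initial guess

x_est, Z_est = rals_mssen_rc(b_list, ranks=[r] * N, x0=x0, Z0_list=A_list)
print(mssen_rc_objective(x_est, Z_est, b_list))             # final objective value to monitor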

Tables 5 and 6 summarize the numerical results obtained. In both tables, , , and specify the problem dimensions, denotes the number of iterations, and is the rank constraint of each , for . Figure 2 shows diagrams of the errors associated with the numerical simulation in Table 5. Similarly to the experiment in Section 5.1, the results presented in Table 5 show that Algorithm 3 is promising. Moreover, Figure 2 shows that the error decreases to zero as the iterations increase, i.e., as . This behavior follows from Theorem 11. Based on the numerical data in Table 5 and Figure 2, Algorithm 3 approximates a solution for all the experiments using only a few iterations.

Moreover, we obtain that , for each numerical simulation in this experiment. This result follows from inequality (24) in Remark 9, which is true for all .

Finally, it follows from Tables 5 and 6 that , i.e., the minimum obtained in the MSSEN-RC problem is less than that of the MSEN problem.

6. Conclusion

In this work, we proposed a modification of the classical MSEN problem, which we call the MSSEN-RC problem. The objective of the MSSEN-RC problem is to obtain and , for , such that they minimize the function in (3). The proposed MSSEN-RC problem allows the creation of equivalent linear systems such that each is a rank-constrained matrix. Additionally, we developed an iterative method in Algorithm 3 to estimate a solution to the MSSEN-RC problem. Algorithm 3 is based on a regularized alternating least-squares (RALS) method. We considered two numerical techniques in the RALS method: the BFGS method, part of the Quasi-Newton family, and a generalized low-rank matrix approximation problem to estimate rank-constrained matrices. In Theorem 11, we showed that, under some conditions, Algorithm 3 converges to a critical point of the function . The theoretical development and numerical experiments in this paper demonstrate the advantages of the proposed RALS method in Algorithm 3. As future work, we will investigate applying the MSSEN-RC problem to solve a real-world inverse problem.

Data Availability

No data are available

Conflicts of Interest

The author declares that there are no conflicts of interest.

Acknowledgments

This work was financially supported by Vicerrectoría de Investigación y Extensión from Instituto Tecnológico de Costa Rica (research #1440042).