Mathematical Problems in Engineering
Volume 2017 (2017), Article ID 6843614, 11 pages
https://doi.org/10.1155/2017/6843614
Research Article

Solving a Class of Nonlinear Inverse Problems Using a Feedback Control Approach

Mahmoud Tarokh
Department of Computer Science, San Diego State University, San Diego, CA 92128-7720, USA

Correspondence should be addressed to Mahmoud Tarokh; mtarokh@mail.sdsu.edu

Received 28 February 2017; Revised 8 April 2017; Accepted 10 April 2017; Published 28 May 2017

Academic Editor: J.-C. Cortés

Copyright © 2017 Mahmoud Tarokh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Inverse problems have applications in many branches of science and engineering. In this paper we propose a new approach to solving inverse problems which is based on using concepts from feedback control systems to determine the inverse of highly nonlinear, discontinuous, and ill-conditioned input-output relationships. The method uses elements from least squares solutions that are formed within a control loop. The stability and convergence of the inverse solution are established. Several examples demonstrate the applicability of the proposed method.

1. Introduction

Inverse problems form an interdisciplinary field encompassing science and engineering. Applications of inverse problems are diverse and include robotics, optics, nondestructive detection, geophysics, imaging, acoustics, and civil and mechanical engineering, to list just a few. Briefly and roughly speaking, "forward" problems estimate the effect (output) from the cause (input). In contrast, "inverse" problems require estimating the cause or the parameters from the effect. For example, in robot manipulators the forward kinematics equations are readily available and relate the robot manipulator joint angles to the end effector (hand) position and orientation. However, the fundamental problem in robotics is the inverse kinematics; that is, given the hand position and orientation, determine the joint angles. The mathematical formulation of inverse problems leads to models that are typically ill-posed, meaning that a solution may not exist, the solution may not be unique, or the solution is sensitive to small changes in the data, causing instability. In the robot manipulator example, this situation often happens when, during the robot motion, the joint angles form a configuration in which the so-called manipulator Jacobian matrix is singular.

Inverse problems can be categorized into two groups: linear and nonlinear. Linear inverse problems are usually formulated as $y = Ax$, where $y$ is the vector of (possibly noisy) data and the input vector $x$ is to be extracted, given that the matrix $A$ can be ill-conditioned, such that its inverse either does not exist or is meaningless and does not reflect the actual physical problem. In order to overcome these difficulties, one has to use regularization, which replaces an ill-posed problem by a neighbouring well-posed one. There are two main related regularization approaches, namely, Tikhonov and truncated singular value decomposition. In the Tikhonov approach a small constant is included in the pseudo-inverse computation of the matrix $A$ to limit the magnitude of $x$. In the truncated SVD method, tiny singular values of $A$ are removed to prevent the pseudo-inverse of $A$ from becoming excessively large. Variations and improvements of these two basic approaches have been proposed (e.g., [1–3]).

Many real-world inverse problems are nonlinear and, unlike the linear ones, have not been fully explored due to the complexity of the problem. Many of the existing techniques are based on a least squares formulation of the ill-posed nonlinear problem, that is, minimizing $\|f(x) - y\|^{2}$ [4–8]. Most damped least squares solutions are based on the Levenberg–Marquardt iterative solution, which is further explored in [9]. A more recent work describes a two-stage method combining Levenberg–Marquardt regularization of the linearized problems at each stage [10]. When the solution is assumed to have a sparse expansion, an iterative method has been proposed to solve the nonlinear inverse problem [11]. This method shows good results for colour image inpainting applications.

A recent book describes the application of inverse problems to imaging [12]. A survey of nonlinear inverse problems can be found in [13] and the references therein. Neural networks have also played a major role in the solution of inverse problems (see, e.g., [14] and references therein), owing to the fact that neural networks are considered universal approximators. A method using function decomposition and approximation that is applicable to a wide class of nonlinear inverse problems has been proposed by the author [15].

Many approaches to inverse problems propose solutions for specific and often simple cases and thus are not generally applicable to situations and applications other than those for which they were developed. In this paper we present a new approach to inverse problems that have complex structures with highly nonlinear and discontinuous behaviours as well as being ill-conditioned. We employ concepts from feedback control systems and stability theory to prove the convergence and accuracy of the solutions. Examples are provided to demonstrate the concept.

2. Statement of the Problem

Consider the input vector $x$ of dimension $n$ and the corresponding output vector $y$ of dimension $m$, with the forward relationship between $x$ and $y$ of the form

$y = f(x).$  (1)

It is noted that (1) also covers cases where the input-output relationship is expressed in the integral form

$y(t) = \int k(t, s)\, x(s)\, ds.$  (2)

We assume that the forward relationship of the form (1) or (2) is known. The inverse problem is: given $y$, find $x$. In most applications there is no analytical solution to the inverse problem; that is, $x$ cannot be expressed in analytical form in terms of $y$.

There are a number of challenges to inverse problems. Noise can corrupt the data, the forward and/or the inverse relationship can be discontinuous, and the problem can be ill-conditioned. Furthermore, in some applications the forward relationship can be very complex, and the inverse can have multiple solutions for a particular output value. This is especially the case for many practical problems, such as the robotic inverse kinematics problem. In such an application, even the forward equations consist of highly nonlinear trigonometric functions with severe discontinuities due to joint limits and other physical constraints, as well as a varying number of inverse solutions.

We propose to solve the above inverse problem using a feedback control approach and a generalized pseudo-inverse method, as well as optimizing a performance index in some cases. Suppose that the desired output vector is $y_d$ and we want to find an input $x_d$ such that $y_d = f(x_d)$. Suppose further that, for an initial value $x_0$, for example, randomly chosen, the corresponding output is obtained from (1) as $y_0 = f(x_0)$. The problem is then to transfer the output from $y_0$ to $y_d$. We propose to do this transition through a smooth reference trajectory $r(t)$ parameterized by time $t$. Among other possible functions, such as the quintic, we propose a cycloid function defined by

$r(t) = y_0 + (y_d - y_0)\left[\frac{t}{T} - \frac{1}{2\pi}\sin\frac{2\pi t}{T}\right]$ for $0 \le t \le T$, and $r(t) = y_d$ for $t > T$,  (3)

where $T$ is the time to move from the initial value $y_0$ to the desired value $y_d$. It is noted that the first, second, and higher order time derivatives of (3) are all continuous, smooth, and finite, which is an important characteristic of this particular trajectory. Higher values of $T$ correspond to a slowly varying trajectory and have a favorable effect on the stability of the system, as will be seen later. The cycloid function has a simple form and is parameterized by only three quantities, namely, $y_0$, $y_d$, and $T$, each of which has a meaningful interpretation. On the other hand, other smooth functions are described by more parameters whose roles in the overall form of the function are not easily recognizable. For example, the quintic function requires six parameters, and it is not easy to determine these six parameters to achieve the desired transition time, final value, and maximum value of its derivative, all of which are needed later in Sections 3 and 4. Furthermore, a simple step function is undesirable since its first and higher order derivatives are infinite, resulting in instability and a large steady-state error for this application, as will be seen in Sections 3 and 4. In addition, a step function has no parameters to adjust the transition time between the initial and final values.
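As a concrete illustration, the scalar form of the cycloid reference (3) can be sketched in a few lines (a minimal sketch; it generalizes componentwise to vector outputs):

```python
import math

def cycloid(y0, yd, T, t):
    """Cycloid reference r(t): r(0) = y0, r(T) = yd, held at yd for t > T.
    All derivatives are finite; the maximum slope 2*(yd - y0)/T occurs at t = T/2."""
    if t >= T:
        return yd
    s = t / T
    return y0 + (yd - y0) * (s - math.sin(2 * math.pi * s) / (2 * math.pi))
```

The trajectory starts at `y0`, passes through the midpoint at `t = T/2`, and settles at `yd`, so `T` alone controls how slowly the reference varies.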

Since the problem is nonlinear, we use an incremental approach by taking the time derivative of (1) to obtain

$\dot{y} = J(x)\,\dot{x},$  (4)

where $J(x) = \partial f/\partial x$ is the Jacobian matrix. The partial derivative is computed numerically. When the trajectory passes through points where the function $f$ is discontinuous, some or all elements of $J$ become infinite at certain values of $x$. We assume that the desired $x_d$ is not at a point of discontinuity. Consequently, at the points where the derivative is infinite, we set the corresponding elements of $J$ to a large but finite value to prevent numerical difficulties when the inverse of $J$ is found. Since we seek the value of $x$ at steady state, high values of the derivative can only occur during the transition, that is, at times $t < T$. As a result, even though $y$ can be very different from the desired $y_d$ during the transient, it will become arbitrarily close to $y_d$ at steady state, that is, as $t \gg T$, as will be shown in Section 3.
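The Jacobian in (4) can be computed by finite differences, with the clipping the text prescribes near discontinuities (a sketch; the step size `h` and clipping level are illustrative choices, not values from the paper):

```python
import numpy as np

def numerical_jacobian(f, x, h=1e-6, clip=1e6):
    """Forward-difference Jacobian of f at x. Entries whose magnitude exceeds
    `clip` (e.g., near a discontinuity) are clipped so that the pseudo-inverse
    stays finite."""
    x = np.asarray(x, dtype=float)
    y = np.atleast_1d(f(x)).astype(float)
    J = np.zeros((y.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = h
        J[:, j] = (np.atleast_1d(f(x + step)) - y) / h
    return np.clip(J, -clip, clip)
```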

Equation (4) is a differential equation relating the incremental change in the input to the incremental change in the output. The generalized inverse solution to (4) is [16]

$\dot{x} = J^{+}\dot{y} + \alpha (I - J^{+}J)\, z,$  (5)

where $J^{+}$ is the pseudo-inverse of $J$, $I$ is the $n \times n$ identity matrix, $z$ is an arbitrary free vector, and $\alpha$ is an arbitrary positive scalar. The free vector $z$ and the scalar $\alpha$ will be used for optimization, as will be discussed in Section 4. The second term on the right-hand side of (5) is orthogonal to the first term. It is possible to substitute for $\dot{y}$ from (3) into (5) and then integrate to obtain $x(t)$, whose steady-state value corresponds to the desired inverse. However, integration of (5) is not a viable solution due to (i) integration drift and numerical difficulties and (ii) the ill-conditioned matrix $J$. This means that $J$ can have singular values close to zero, and as a result the inverse or pseudo-inverse $J^{+}$ can produce solutions that are grossly in error, especially when noise is present. To overcome (i), we propose to form an error defined by

$e(t) = r(t) - y(t)$  (6)

and apply a feedback signal $Ge$, where $G$ is the gain matrix. To remedy (ii), we decompose $J$ into two matrices as follows:

$J = \hat{J} + \Delta J,$  (7)

where $\hat{J}$ is well-conditioned with a desired conditioning number and $\Delta J$ is a residual ill-conditioned matrix; the decomposition can be found in a number of different ways, two of which will be described shortly. Thus, (5) is modified to

$\dot{x} = \hat{J}^{+}(\dot{r} + Ge) + \alpha (I - \hat{J}^{+}\hat{J})\, z.$  (8)

We will prove that the steady-state solution to the dynamic equation (8) will be the inverse solution for a desired $y_d$ when a suitable feedback matrix $G$ is determined. The feedback block diagram representing (2) and (8) is shown in Figure 1.

Figure 1: Block diagram of the feedback system for the inverse problem.

It is noted that since, in the proposed control scheme, $y$, which has dimension $m$, is the quantity to be controlled, and the number of adjustable variables is $n$, for controllability we must have $n \ge m$. When $n > m$, the additional degrees of freedom can be used for an optimization task, as will be seen in Section 4. When $n = m$, the second term in (8) vanishes and it becomes

$\dot{x} = \hat{J}^{+}(\dot{r} + Ge),$  (9)

where the arguments of the quantities in (9) have been dropped for convenience. The inverse problem in the above control system setting may be stated as follows. Design the controller $G$ such that the control system is stable and the error (6) tends asymptotically to a sufficiently small value $\varepsilon$; that is,

$\|e(\infty)\| \le \varepsilon,$  (10)

giving $\|y(\infty) - y_d\| \le \varepsilon$, where the bound on $\varepsilon$ will be established in Section 3.

We now discuss how the decomposition in (7) can be achieved. The first method is based on assigning the small singular values of $J$ to $\Delta J$, so that the well-conditioned $\hat{J}$ is a truncated version of $J$. Let the singular value decomposition (SVD) of the matrix $J$ be expressed as

$J = U \Sigma V^{T} = \sum_{i=1}^{m} \sigma_i u_i v_i^{T},$  (11)

where $U = [u_1, \ldots, u_m]$ is a unitary matrix whose columns have the property $u_i^{T} u_j = \delta_{ij}$. Similarly, $V = [v_1, \ldots, v_n]$ is a unitary matrix with the same property as $U$. The singular value matrix $\Sigma$ is a diagonal matrix $\mathrm{diag}(\sigma_1, \ldots, \sigma_m)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_m \ge 0$, where $\sigma_i$ are the singular values. When the matrix $J$ is ill-conditioned, one or more singular values are very small or even zero. This situation causes the inverse of the matrix to produce erroneous results. One common method to deal with this problem is to decompose $J$ and separate the terms involving small singular values in (11) as [3]

$J = \sum_{i=1}^{r} \sigma_i u_i v_i^{T} + \sum_{i=r+1}^{m} \sigma_i u_i v_i^{T},$  (12)

where the second summation on the right-hand side of (12) contains the small singular values. In the truncated singular value decomposition method, the matrix $J$ is replaced by

$\hat{J} = \sum_{i=1}^{r} \sigma_i u_i v_i^{T},$  (13)

which is not ill-conditioned and whose pseudo-inverse is

$\hat{J}^{+} = \sum_{i=1}^{r} \sigma_i^{-1} v_i u_i^{T}.$  (14)

The amount of truncation, that is, the value of $r$, can be determined by specifying a desired conditioning number $\kappa_d$ for $\hat{J}$, which is the ratio of the largest singular value to the smallest retained singular value of $\hat{J}$; that is, $\kappa_d = \sigma_1 / \sigma_r$. Thus, given $\kappa_d$ and the known maximum singular value $\sigma_1$, the minimum singular value $\sigma_r$ and thus the truncation index $r$ are found.
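A sketch of the truncated pseudo-inverse, with the truncation index picked from a desired conditioning number as described above (the default threshold is an illustrative assumption):

```python
import numpy as np

def truncated_pinv(J, kappa_d=100.0):
    """Pseudo-inverse of the well-conditioned part Jhat of J: singular values
    smaller than sigma_1 / kappa_d are assigned to Delta J and dropped."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)  # s is sorted descending
    keep = s >= s[0] / kappa_d
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return (Vt.T * s_inv) @ U.T
```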

The second method to obtain $\hat{J}$ is via the Tikhonov method; that is,

$\hat{J}^{+} = J^{T}\left(J J^{T} + \mu^{2} I\right)^{-1},$  (15)

where $\mu$ is a small scalar. It can be shown that [3]

$\hat{J}^{+} = \sum_{i=1}^{m} \frac{\sigma_i}{\sigma_i^{2} + \mu^{2}}\, v_i u_i^{T}.$  (16)

Thus,

$J = \sum_{i=1}^{m} \frac{\sigma_i^{2} + \mu^{2}}{\sigma_i}\, u_i v_i^{T} - \sum_{i=1}^{m} \frac{\mu^{2}}{\sigma_i}\, u_i v_i^{T}.$  (17)

Since we defined $\Delta J$ in (7) as $\Delta J = J - \hat{J}$, and the first term on the right-hand side of (17) is $\hat{J}$, we have for the Tikhonov method

$\Delta J = -\sum_{i=1}^{m} \frac{\mu^{2}}{\sigma_i}\, u_i v_i^{T}.$  (18)

It is noted that the conditioning number of $\hat{J}$ obtained from the Tikhonov method is the ratio of the largest to the smallest of the modified singular values $(\sigma_i^{2} + \mu^{2})/\sigma_i$.  (19)

Since $\sigma_1$ and $\sigma_m$ are known, $\mu$ can be found for a desired conditioning number $\kappa_d$. For the case of single-input single-output systems, there is only one singular value, which cannot be truncated, and in this case only the Tikhonov method can be used. It is important to note that $\hat{J}$ obtained in (13) by the truncated singular values and $\hat{J}$ found from (16) using the Tikhonov method are not necessarily the same. However, using the above procedures, both will be well-conditioned with a desired conditioning number.
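The Tikhonov counterpart of the previous sketch, again with illustrative parameter values:

```python
import numpy as np

def tikhonov_pinv(J, mu):
    """Tikhonov-regularized pseudo-inverse J^T (J J^T + mu^2 I)^{-1};
    each singular value sigma is inverted as sigma / (sigma^2 + mu^2),
    so tiny singular values are damped instead of blowing up."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + mu ** 2 * np.eye(m))
```

For a well-conditioned matrix and tiny `mu` this reduces to the ordinary inverse; for a nearly singular matrix the damping keeps the result bounded.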

Having obtained $\hat{J}$ by either method, we premultiply both sides of (9) by $\hat{J}$, and noting that $\hat{J}\hat{J}^{+} = I$, we obtain

$\hat{J}\dot{x} = \dot{r} + Ge.$  (20)

Substituting $\hat{J} = J - \Delta J$ from (7) into (20), we obtain

$J\dot{x} - \Delta J\,\dot{x} = \dot{r} + Ge.$  (21)

Since $\dot{y} = J\dot{x}$ by (4), we have

$\dot{y} - \Delta J\,\dot{x} = \dot{r} + Ge.$  (22)

We substitute for $\dot{x}$ from (9) into (22) and, noting from (6) that $\dot{e} = \dot{r} - \dot{y}$, we obtain the error dynamics as

$\dot{e} = -Ge - \Delta J\,\hat{J}^{+}(\dot{r} + Ge).$  (23)

We set the controller gain to

$G = kI,$  (24)

where $k$ is a positive scalar gain. Now (23) becomes

$\dot{e} = -ke - \Delta J\,\hat{J}^{+}(\dot{r} + ke).$  (25)

The above equation represents the dynamics of the error. We will discuss the effect of the gain $k$ on the performance of the system in Section 3.

To summarize this section, we have formulated the nonlinear inverse problem using incremental changes and the associated Jacobian matrix. When the problem is ill-conditioned, the Jacobian matrix will have singular values close to zero. The Jacobian matrix is expressed as the sum of two matrices, an ill-conditioned matrix and a well-conditioned matrix. Employing a feedback control approach that uses only the well-conditioned part of the Jacobian matrix, the problem has been transformed into a dynamic system that describes the error dynamics, that is, the difference between the reference and the actual output. In the next section, we prove that the error dynamics are stable and that the error can be made sufficiently small, thus providing an accurate inverse solution.
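Putting the pieces together, the scheme of Figure 1 can be sketched for the square case n = m. All numerical settings here (gain, trajectory time, step size, conditioning threshold) are illustrative assumptions, not the paper's exact values:

```python
import math
import numpy as np

def solve_inverse(f, x0, y_d, T=10.0, k=6.0, dt=1e-3, kappa_d=100.0):
    """Feedback-control inverse solver, square case n = m: Euler-integrate
    x' = Jhat^+ (r' + k e) while the cycloid reference r(t) moves from
    y(x0) to y_d, then run past T so the error settles."""
    x = np.asarray(x0, dtype=float)
    y0 = np.atleast_1d(f(x)).astype(float)
    yd = np.atleast_1d(y_d).astype(float)
    t = 0.0
    while t < 1.5 * T:
        y = np.atleast_1d(f(x)).astype(float)
        # finite-difference Jacobian
        J = np.zeros((yd.size, x.size))
        for j in range(x.size):
            step = np.zeros_like(x)
            step[j] = 1e-6
            J[:, j] = (np.atleast_1d(f(x + step)) - y) / 1e-6
        # pseudo-inverse of the well-conditioned part (truncated SVD)
        U, s, Vt = np.linalg.svd(J, full_matrices=False)
        keep = s >= s[0] / kappa_d
        s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
        Jp = (Vt.T * s_inv) @ U.T
        # cycloid reference and its derivative
        if t >= T:
            r, rdot = yd, np.zeros_like(yd)
        else:
            s_t = t / T
            r = y0 + (yd - y0) * (s_t - math.sin(2 * math.pi * s_t) / (2 * math.pi))
            rdot = (yd - y0) / T * (1 - math.cos(2 * math.pi * s_t))
        # feedback update: x' = Jhat^+ (r' + k e)
        x = x + dt * (Jp @ (rdot + k * (r - y)))
        t += dt
    return x
```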

3. Stability and Convergence of Solutions

In this section we show that the dynamic equation (25) is asymptotically stable, meaning that the error converges to small values, allowing the inverse corresponding to $y_d$ to be determined precisely in the steady state at the output of the system of Figure 1.

In order to establish the stability of the nonlinear error dynamics (25) and its convergence, we consider the Lyapunov function candidate

$V = \tfrac{1}{2}\, e^{T} e,$  (26)

whose derivative using (25) is

$\dot{V} = e^{T}\dot{e} = -k\|e\|^{2} - e^{T}\Delta J\,\hat{J}^{+}(\dot{r} + ke) \le -k\|e\|^{2} + \bar{\sigma}\,\|e\|\left(\dot{r}_{\max} + k\|e\|\right),$  (27)

where $\|\cdot\|$ denotes the 2-norm, $\bar{\sigma} = \|\Delta J\,\hat{J}^{+}\|$, and $\dot{r}_{\max} = 2\|y_d - y_0\|/T$ is the maximum value of $\|\dot{r}\|$, which is obtained from (3), where $y_d - y_0$ is the difference between the initial and desired values of the output. The 2-norm of a matrix is equal to its maximum singular value, and thus, for the truncated SVD, with $\hat{J}$ and $\Delta J$ given by (13) and the second summation in (12), we have $\|\Delta J\| = \sigma_{r+1}$ and $\|\hat{J}^{+}\| = 1/\sigma_r$, and thus $\bar{\sigma}$ in (27) satisfies

$\bar{\sigma} \le \|\Delta J\|\,\|\hat{J}^{+}\| = \sigma_{r+1}/\sigma_r.$  (28)

Note that $\sigma_{r+1} \le \sigma_r$ and therefore $\bar{\sigma} \le 1$. For the Tikhonov method, with $\hat{J}^{+}$ and $\Delta J$ given by (16) and (18), respectively, we obtain $\|\Delta J\| = \mu^{2}/\sigma_m$ and $\|\hat{J}^{+}\| \le 1/(2\mu)$, so that $\bar{\sigma} \le \mu/(2\sigma_m) \le 1$ if we set $\mu \le 2\sigma_m$. As a result $\bar{\sigma} \le 1$. Thus, in both cases of the truncated singular values and Tikhonov methods, $\bar{\sigma} \le 1$ and (27) becomes

$\dot{V} \le -k(1 - \bar{\sigma})\|e\|^{2} + \bar{\sigma}\,\dot{r}_{\max}\,\|e\|.$  (29)

Note from (26) that $\|e\| = \sqrt{2V} = W$, where $W$ is the magnitude of the error. Substituting for $\|e\|$ in (29), we obtain

$\dot{W} \le -k(1 - \bar{\sigma})\,W + \bar{\sigma}\,\dot{r}_{\max},$  (30)

whose solution is bounded by $W(t) \le W(0)\, e^{-k(1-\bar{\sigma})t} + \bar{\sigma}\,\dot{r}_{\max}/[k(1-\bar{\sigma})]$, where $W(0)$ is the initial value. Thus,

$\|e(t)\| \le \|e(0)\|\, e^{-k(1-\bar{\sigma})t} + \frac{2\bar{\sigma}\,\|y_d - y_0\|}{k(1-\bar{\sigma})\,T}.$  (31)

It is seen that the magnitude of the error decreases exponentially with time and its value is finite at all times. The steady-state value of the error, that is, $\varepsilon = \|e(\infty)\|$, is bounded by the second term of (31). It is noted that, since $\dot{r}_{\max} = 2\|y_d - y_0\|/T$, the closeness of the initial output $y_0$ to the desired value $y_d$ reduces the steady-state error. It is therefore advisable to try several initial values $x_0$ in (1) and pick the resulting $y_0$ that is closest to $y_d$. Regardless of this, the steady-state error can be made as small as desired by increasing the gain $k$ and the trajectory time $T$. Observe that the large gain is applied to the part of the system (Figure 1) that contains the well-conditioned inverse $\hat{J}^{+}$ and does not therefore create a problem. Furthermore, higher values of the gain will also make the decay of the error faster, as seen from (30). The steady-state value of $x$, that is, $x(\infty)$, which is obtained at the output of the control system of Figure 1, is the required inverse solution. Furthermore, when $J$ is regular (not ill-conditioned), then $\Delta J = 0$. Consequently, (25) reduces to $\dot{e} = -ke$, whose solution is

$e(t) = e(0)\, e^{-kt},$  (32)

and thus the steady-state value of the error is zero.

When noise $n(t)$ is present, (4) becomes $\dot{y} = J\dot{x} + \dot{n}$. Assuming a bound exists on the noise such that $\|\dot{n}\| \le \delta$, the truncation index $r$ in (12) is chosen by the "discrepancy principle" such that $\sigma_r \ge c\delta > \sigma_{r+1}$, where $c > 1$. The details of the discrepancy principle are provided in [3, 17]. In this case the truncation amount increases (or, equivalently, $r$ decreases) with $\delta$. In other words, the higher the value of the noise norm $\delta$, the fewer singular values will be included in $\hat{J}$. A similar procedure is used to determine $\mu$ in the Tikhonov method. In this case, $\mu$ increases with an increase in $\delta$ [17]. Furthermore, it is well known that, in a closed loop system, feedback in general reduces the effect of noise on the output of the system, that is, $y$ in Figure 1. To demonstrate this, we consider the above noise model. The error dynamics (25) with noise change to $\dot{e} = -ke - \Delta J\,\hat{J}^{+}(\dot{r} + ke) - \dot{n}$, and, considering the above bound on the magnitude of the noise and following the procedure leading to (31), it can readily be shown that the steady-state term in (31) is modified to $\left[2\bar{\sigma}\|y_d - y_0\|/T + \delta\right]/\left[k(1-\bar{\sigma})\right]$. Thus, the effect of the noise on the error can be reduced by increasing the gain and the trajectory final time.
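A minimal sketch of the truncation rule under noise (the constant `c` and this simple reading of the discrepancy rule are assumptions; the precise statement is in [3, 17]):

```python
def truncation_index(singular_values, delta, c=1.5):
    """Number r of singular values to keep: those at least c * delta, so a
    larger noise bound delta keeps fewer singular values in Jhat."""
    return sum(1 for s in singular_values if s >= c * delta)
```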

A note is appropriate for the situation where the inverse problem has multiple solutions for a given value of $y_d$. In such a case, the particular solution that the above method finds depends on the initial values $x_0$ or $y_0$. If the number of solutions is not too large, the various solutions can generally be found by assigning random initial values within the range of interest and applying the method for each of them. This aspect will be explored in Section 5.

In summary, in this section using the Lyapunov stability approach, we have proven that the dynamics of the error, that is, the difference between the desired output and the actual output , is stable and that the steady-state error is sufficiently small, yielding an accurate inverse solution corresponding to the desired .

4. Optimization

For the case when $n > m$, there are $n - m$ degrees of freedom that can be used for optimization of a performance index $g(x)$. For example, if the magnitude of $x$ is to be minimized, we set $g(x) = \|x\|^{2}$. Alternatively, one might be interested in maximizing the signal-to-noise ratio.

Suppose that the function $g(x)$ is to be minimized, in which case we set the free vector $z$ in (5) as

$z = -\nabla g(x).$  (33)

In order to show that $g$ is minimized, we take its time derivative as

$\dot{g} = (\nabla g)^{T}\dot{x} = (\nabla g)^{T}\left[\hat{J}^{+}(\dot{r} + Ge) - \alpha (I - \hat{J}^{+}\hat{J})\nabla g\right],$  (34)

where $\dot{x}$ has been substituted from (5). Furthermore, (34) can be manipulated to obtain

$\dot{g} = -\alpha\, (\nabla g)^{T}(I - \hat{J}^{+}\hat{J})\nabla g + (\nabla g)^{T}\hat{J}^{+}\dot{r} + (\nabla g)^{T}\hat{J}^{+}Ge.$  (35)

The matrix $I - \hat{J}^{+}\hat{J}$ is both symmetric and idempotent, and therefore $(\nabla g)^{T}(I - \hat{J}^{+}\hat{J})\nabla g = \|(I - \hat{J}^{+}\hat{J})\nabla g\|^{2} \ge 0$. As a result, the first term in (35) is negative, and its magnitude can be made as large as desired by the free parameter $\alpha$. Since $\|\dot{r}\| \le 2\|y_d - y_0\|/T$, the second term can be made small by choosing a high trajectory time $T$. Finally, the term involving the error $e$ is finite and small, as shown before. Considering these, the right-hand side of (35) becomes negative, implying that the performance index decreases monotonically and minimization is achieved. For maximization of the performance index, the free vector is chosen as $z = \nabla g(x)$.
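The null-space term can be sketched as a projected gradient step (a sketch; the projector I - J^+J is the symmetric, idempotent matrix discussed above, and the step size `alpha` is illustrative):

```python
import numpy as np

def null_space_step(J, grad_g, alpha=0.5):
    """Step -alpha * (I - J^+ J) grad_g: reduces g(x) without changing the
    output y = f(x) to first order, since J @ step = 0."""
    P = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J  # null-space projector
    return -alpha * (P @ grad_g)
```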

Finally, we observe that all developments in Sections 2–4 are applicable to a linear relationship of the form $y = Ax$, in which case the Jacobian $J$ is the constant matrix $A$.

5. Examples

In this section we present two examples to demonstrate the features of the proposed method for solving challenging inverse problems.

5.1. Example 1

We consider the Gamma function, described by

$y = \Gamma(x) = \int_{0}^{\infty} \tau^{x-1} e^{-\tau}\, d\tau.$  (36)

As shown in Figure 2, the Gamma function is highly nonlinear and discontinuous and has multiple solutions $x$ for a given $y$, and therefore provides a challenge as an inverse problem. This problem is ill-conditioned since (i) for a given $y$, the corresponding $x$ is not unique; for example, in the range plotted in Figure 2 it can have up to six solutions; and (ii) the solution does not change continuously with the initial condition, since a small change in the initial condition can change the solution substantially due to jumping from one branch of the function to another, as is evident from Figure 2.

Figure 2: The Gamma function showing a highly nonlinear and discontinuous function.

As seen from (31), the steady-state error is inversely proportional to the product of the gain and the trajectory time and is proportional to the difference between the desired and initial outputs, $\|y_d - y_0\|$. Since $\|y_d - y_0\|$ is known, we can determine $k$ and $T$ for a desired bound on the error. It is also noted that the rate at which the error decays with time depends on the gain, as seen from (30) and (32). Higher values of the gain produce a faster response but can also produce overshoot and possible instability in discontinuous systems. These observations provide a general guideline for the selection of the gain and time constant. We consider the following three cases, for all of which we choose the trajectory time $T$ in (3) and the feedback gain $k$ such that $kT = 60$; thus the error per unit of $\|y_d - y_0\|$ becomes less than $1/60 \approx 0.017$ when the system is ill-conditioned. Note that, due to the very conservative bounds that were established in obtaining (31), the actual errors will be much smaller, as will be seen in the following cases. The set of values for $k$ and $T$ is not unique, and other values satisfying the desired steady-state error and decay time can be chosen. The errors tend to zero when the system is not ill-conditioned, as seen from (32). The regularization parameter $\mu$ in (15) is chosen small in general and is increased when the trajectory is near or enters the singularity (discontinuity).
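For the scalar Gamma problem the feedback loop specializes as follows. This is a sketch only: the target y_d = 6, initial x0 = 3.5, and all numeric settings (gain, time, damping) are illustrative assumptions, not the paper's cases:

```python
import math

def inverse_gamma(y_desired, x0, T=10.0, k=6.0, dt=1e-3):
    """Scalar feedback inversion of the Gamma function: track a cycloid
    reference from Gamma(x0) to y_desired, with a clipped numerical
    derivative and a Tikhonov-damped inverse."""
    x = x0
    y0 = math.gamma(x0)
    t = 0.0
    while t < 1.5 * T:
        if t >= T:
            r, rdot = y_desired, 0.0
        else:
            s = t / T
            r = y0 + (y_desired - y0) * (s - math.sin(2 * math.pi * s) / (2 * math.pi))
            rdot = (y_desired - y0) / T * (1 - math.cos(2 * math.pi * s))
        y = math.gamma(x)
        J = (math.gamma(x + 1e-6) - y) / 1e-6   # numerical derivative
        J = max(min(J, 1e6), -1e6)              # clip near singularities
        Jinv = J / (J * J + 1e-4)               # Tikhonov-damped inverse
        x += dt * Jinv * (rdot + k * (r - y))
        t += dt
    return x
```

From x0 = 3.5 the loop settles at the solution of Gamma(x) = 6, that is, x = 4.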

Case 1. Let the desired output be $y_d$, and suppose the initial value $x_0$ is chosen with corresponding output $y_0$; this point is located on the rightmost branch at point A in Figure 2. The trajectories of the desired output $r$, the actual output $y$, the error $e$, and the acquired $x$ are shown in Figure 3. It is seen that the error decreases to practically zero, and the final value of $x$ is the desired inverse solution. This solution is located to the right of point A and is the closest to the initial point.

Figure 3: Case 1: the reference trajectory (a), actual trajectory (b), error (c), and inverse solution (d). The maximum transient error is small, and the steady-state error is 0.

Case 2. Let us now choose a different initial value $x_0$ with its corresponding $y_0$, and the same desired $y_d$ as in Case 1. The trajectories (not shown) are similarly smooth. It is noted that the solution is now to the left of the initial point A and is the closest point to the initial value, that is, minimum change in $x$.

Case 3. Now suppose we use the same initial values as in Case 1, that is, point A in Figure 2, but we change the desired value $y_d$. As seen from Figure 2, this requires passage through one or more discontinuities/singularities. The resulting trajectories are shown in Figure 4. It is seen that the value of $y$ approaches the minimum value of the current branch, that is, point B in Figure 2. As a result, the system oscillates to the right and left of the minimum point B to reduce the error. However, the values of $y$ around B are all positive, and the desired $y_d$, which lies below this minimum, cannot be reached on this branch. As a result, the error increases (Figure 4) until the feedback control causes a jump to point C in Figure 2. The error then decreases until $y$ reaches the desired $y_d$ with negligible steady-state error. The amplitude of the jump depends on the gain $k$: lower values of the gain make the jump land to the right of point C, and higher gains make it land to the left of point C, both of which are close to the desired solution. This was verified, and in each case an almost exact solution was found.
The above case studies, and many more carried out but not reported here, demonstrate the effectiveness of the proposed method in dealing with ill-conditioned cases, including severe nonlinearities and discontinuities.

Figure 4: Case 3: the reference trajectory (a), actual trajectory (b), error (c), and inverse solution (d).
5.2. Example 2

We consider a complex two-input, two-output system described by a pair of nonlinear functions, (37), whose plots are shown in Figure 5. For a desired pair of outputs, there can be multiple inverse solutions, and arriving at a particular solution depends on the initial values.

Figure 5: Three-dimensional views of the two output functions ((a) and (b)).

Case 1. With the chosen desired outputs and initial values, we use the truncated singular value method to remove one singular value whenever it becomes smaller than 0.01. The responses are shown in Figure 6; the steady-state value of the output is exactly equal to the desired values, and the corresponding input vector is the inverse solution.
We have also computed the inverses $x_1$ and $x_2$ for all values of the first output in the range of −20 to 60 and the second output from −1 to 1. The plots of the inverses against the two outputs are shown in Figure 7.

Figure 6: Case 1, two-input, two-output system: reference trajectories (a), actual trajectories (b), errors (c), and inverse solutions (d).
Figure 7: Three-dimensional views of the inverse functions: (a) the first input as a function of the two outputs; (b) the second input as a function of the two outputs.

Case 2. The purpose of this case is to demonstrate the optimization discussed in Section 4. Consider the two-input, single-output system given by the second equation in (37). Let the desired value $y_d$ and the initial values of $x$ be as before; the corresponding initial output follows from (37). The optimization criterion is chosen as the minimization of $\|x\|$ while achieving the desired output. Thus, $g(x) = \|x\|^{2}$ and $z = -\nabla g = -2x$, and the parameters $k$, $T$, and $\alpha$ are suitably chosen. The trajectories are shown in Figure 8 and indicate a smooth transition to the final values with the desired output achieved. It is noted that, with the same initial values but without optimization, $\|x\|$ is larger. Thus, the optimization has minimized the change in $x$ while achieving the desired value.

Figure 8: Case 2, two-input, one-output system with optimization: the reference trajectory (a), actual trajectory (b), error (c), and inverse solutions (d).

6. Conclusions

A novel approach to solving a class of nonlinear inverse problems has been proposed. The solution is based on a feedback system approach, which ensures the stability of the solution using classical control theory. Convergence of the solution is thus achieved, which guarantees a close-to-zero steady-state error, and therefore highly accurate inverse solutions are obtained. Two methods of regularization have been investigated within the feedback system: one based on truncation and the other utilizing the Tikhonov method. Both have been shown to produce inverse solutions with a desired accuracy. It is shown that the proposed feedback scheme can also reduce the effect of noise to a desired level. The proposed method has a number of features and advantages. A main advantage is its ability to deal with highly nonlinear and discontinuous inverse problems. Furthermore, all results obtained for linear inverse problems, such as dealing with ill-conditioned cases and regularization, can also be achieved with the proposed formulation as a special case. The extension of the method to other classes of nonlinear inverse problems is currently being investigated.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. S. Morigi, L. Reichel, and F. Sgallari, "A truncated projected SVD method for linear discrete ill-posed problems," Numerical Algorithms, vol. 43, no. 3, pp. 197–213, 2007.
  2. L. Reichel and Q. Ye, "Simple square smoothing regularization operators," Electronic Transactions on Numerical Analysis, vol. 33, pp. 63–83, 2009.
  3. M. Fuhry and L. Reichel, "A new Tikhonov regularization method," Numerical Algorithms, vol. 59, no. 3, pp. 433–445, 2012.
  4. R. Ramlau, "TIGRA - an iterative algorithm for regularizing nonlinear ill-posed problems," Inverse Problems, vol. 19, no. 2, pp. 433–465, 2003.
  5. A. Rieder, "On convergence rates of inexact Newton regularizations," Numerische Mathematik, vol. 88, no. 2, pp. 347–365, 2001.
  6. H. W. Engl and O. Scherzer, "Convergence rates results for iterative methods for solving nonlinear ill-posed problems," in Surveys on Solution Methods for Inverse Problems, D. Colton, Ed., Springer, Vienna, Austria, 2000.
  7. M. Hanke, "A regularizing Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems," Inverse Problems, vol. 13, no. 1, pp. 79–95, 1997.
  8. M. Hanke, "Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems," Numerical Functional Analysis and Optimization, vol. 18, no. 9-10, pp. 971–993, 1997.
  9. J. Pujol, "The solution of nonlinear inverse problems and the Levenberg-Marquardt method," Geophysics, vol. 72, no. 4, pp. W1–W16, 2007.
  10. M. G. Gasparo, A. Papini, and A. Pasquali, "A two-stage method for nonlinear inverse problems," Journal of Computational and Applied Mathematics, vol. 198, no. 2, pp. 471–482, 2007.
  11. G. Teschke and R. Ramlau, "An iterative algorithm for nonlinear inverse problems with joint sparsity constraints in vector-valued regimes and an application to color image inpainting," Inverse Problems, vol. 23, no. 5, pp. 1851–1870, 2007.
  12. J. K. Seo and E. J. Woo, Nonlinear Inverse Problems in Imaging, Wiley, 2012.
  13. H. W. Engl and P. Kugler, "Nonlinear inverse problems: theoretical aspects and some industrial applications," in Multidisciplinary Methods for Analysis, Optimization and Control of Complex Systems, V. Capasso and J. Periaux, Eds., Springer, 2005.
  14. G. Marsala, G. Cirrincione, M. Pucci, and M. Cirrincione, "The GMR neural networks for inverse problems," Journal of Electrical Systems, vol. 3, no. 3, pp. 176–188, 2007.
  15. M. Tarokh, "Solving inverse problems by decomposition, classification and simple modeling," Information Sciences, vol. 218, pp. 51–60, 2013.
  16. Y. Nakamura, Advanced Robotics: Redundancy and Optimization, Addison Wesley, 1991.
  17. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, The Netherlands, 2000.