Journal of Function Spaces
Volume 2018, Article ID 2835175, 10 pages
https://doi.org/10.1155/2018/2835175
Research Article

The Tensor Padé-Type Approximant with Application in Computing the Tensor Exponential Function

Chuanqing Gu and Yong Liu

1Department of Mathematics, Shanghai University, Shanghai 200444, China
2Changzhou College of Information Technology, Changzhou 213164, China

Correspondence should be addressed to Chuanqing Gu; cqgu@staff.shu.edu.cn

Received 21 April 2018; Accepted 22 May 2018; Published 19 June 2018

Academic Editor: Liguang Wang

Copyright © 2018 Chuanqing Gu and Yong Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The tensor exponential function is an important and widely used function. In this paper, the tensor Padé-type approximant (TPTA) is defined for the first time by introducing a generalized linear functional. The expression of the TPTA is provided in generating-polynomial form. Moreover, by means of formal orthogonal polynomials, we propose an efficient algorithm for computing the TPTA. As an application, the TPTA for computing the tensor exponential function is presented. Numerical examples are given to demonstrate the efficiency of the proposed algorithm.

1. Introduction

The tensor exponential function is an important and widely used function, owing to its key role in the solution of tensor differential equations [1–4]. For instance, the Markovian master equation can be written as a tensor differential equation whose unknown is a probability tensor [5]. Consider the initial value problem defined by the tensor ordinary differential equation [6, 7]
$$\dot{\mathcal{X}}(t) = \mathcal{A} * \mathcal{X}(t), \qquad \mathcal{X}(0) = \mathcal{X}_0, \tag{1}$$
where the superimposed dot denotes differentiation with respect to $t$, and $\mathcal{A}$ and $\mathcal{X}_0$ are given constant tensors. The solution to system (1) is $\mathcal{X}(t) = e^{t\mathcal{A}} * \mathcal{X}_0$, where $e^{t\mathcal{A}}$ is the tensor exponential function.

To solve (1), we need to calculate an exponential function of the tensor $\mathcal{A}$. Recently, tensor computation, especially the computation of eigenvalues of tensors, has attracted the attention of many scholars, and some important results can be found in the current literature [8–15]. For instance, the computation of $e^{\mathcal{A}}$ is required in applications such as finite strain hyperelastic-based multiplicative plasticity models [7, 16–19]. Explicitly, for a generic tensor $\mathcal{A}$, the tensor exponential can be expressed by means of its series representation [7] (see p. 749)
$$e^{\mathcal{A}} = \sum_{n=0}^{\infty} \frac{1}{n!} \mathcal{A}^n. \tag{2}$$
The preceding series is absolutely convergent for any argument and, like its scalar counterpart, can be used to evaluate the tensor exponential function to any prescribed degree of accuracy [16]. The computation of (2) is carried out by simply truncating the infinite series after finitely many terms, the number of terms being chosen so that the norm of the last added term falls below a prescribed tolerance.

However, the accuracy and effectiveness of the preceding algorithm are limited by round-off error and by the choice of termination criterion [16]. The Padé approximant has become by far the most widely used tool in the calculation of exponential functions and formal power series, for two reasons: first, the series may converge too slowly to be of any use, and the approximation can accelerate its convergence; second, only a few coefficients of the series may be known, and a good approximation to the series is needed to obtain properties of the function it represents [20]. For instance, the matrix Padé-type approximant (MPTA) [21] can be used to simplify a high-degree multivariable system by approximating its transfer function matrix, which can be expanded into a power series with matrix coefficients. The key to constructing the TPTA is to keep the order of the tensor the same under different powers. For this issue, we introduce the t-product [22, 23] of two tensors. In addition, in order to give the definition of the TPTA, we introduce a generalized linear functional on the tensor space for the first time.

This paper is organized as follows. In Section 2, we provide some preliminaries: first we introduce the t-product of two tensors, and then we give the definitions of the tensor exponential function and of the Frobenius norm of a tensor. In Section 3.1, we define the tensor Padé-type approximant by using a generalized linear functional; the expression of the TPTA takes the form of a tensor numerator and a scalar denominator. We then introduce the definition of an orthogonal polynomial with respect to the generalized linear functional and sketch an algorithm to compute the TPTA in Section 3.2. Numerical examples are given and analyzed in Section 4. Finally, we finish the paper with concluding remarks in Section 5.

2. Preliminaries

One main problem arises in approximating the tensor exponential function: how to expand $e^{\mathcal{A}}$ into a power series for order-$p$ ($p \ge 3$) tensors. For a symmetric second-order tensor $\mathcal{A}$, higher powers of $\mathcal{A}$ can be computed by the Cayley–Hamilton theorem [24], but this fails for order-$p$ ($p \ge 3$) tensors. Therefore, in this section we utilize the t-product to obtain higher powers of order-$p$ ($p \ge 3$) tensors. First, we introduce some notation and basic definitions which will be used in the sequel. Throughout this paper, tensors are denoted by calligraphic letters (e.g., $\mathcal{A}$, $\mathcal{B}$), capital letters represent matrices, and lowercase letters refer to scalars.

An order-$p$ tensor $\mathcal{A}$ can be written as
$$\mathcal{A} = (a_{i_1 i_2 \cdots i_p}) \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}.$$

Thus, a matrix is considered a second-order tensor, and a vector is a first-order tensor [22]. For $1 \le k \le p$, we denote by $\mathcal{A}_{(i_k = j)}$ the tensor whose order is $p-1$ and which is created by holding the $k$th index of $\mathcal{A}$ fixed at $j$. For example, consider a third-order tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times 3}$. Fixing the 3rd index of $\mathcal{A}$, we can get three matrices, namely, order-2 tensors, which are $A_1 = \mathcal{A}_{(i_3 = 1)}$, $A_2 = \mathcal{A}_{(i_3 = 2)}$, and $A_3 = \mathcal{A}_{(i_3 = 3)}$, with elements $(A_j)_{i_1 i_2} = a_{i_1 i_2 j}$, respectively.

Now, we will define the t-product of two tensors.

Definition 1 (see [22, 23]). Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_p}$, and let $\mathcal{A}_1, \dots, \mathcal{A}_{n_p}$ denote its order-$(p-1)$ slices obtained by fixing the last index. Then the block circulant pattern tensor of $\mathcal{A}$ is denoted by $\operatorname{bcirc}(\mathcal{A})$, where
$$\operatorname{bcirc}(\mathcal{A}) = \begin{pmatrix} \mathcal{A}_1 & \mathcal{A}_{n_p} & \cdots & \mathcal{A}_2 \\ \mathcal{A}_2 & \mathcal{A}_1 & \cdots & \mathcal{A}_3 \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{A}_{n_p} & \mathcal{A}_{n_p-1} & \cdots & \mathcal{A}_1 \end{pmatrix}.$$

Define $\operatorname{unfold}(\mathcal{A})$, taking an $n_1 \times n_2 \times n_3 \times \cdots \times n_p$ tensor to an $n_1 n_p \times n_2 \times n_3 \times \cdots \times n_{p-1}$ block tensor, by stacking the slices $\mathcal{A}_1, \dots, \mathcal{A}_{n_p}$ one on top of the other. If $\mathcal{A}$ is an order-3 tensor, then $\operatorname{unfold}(\mathcal{A}) = (A_1^T, A_2^T, \dots, A_{n_3}^T)^T$ is a block vector. Similarly, define $\operatorname{fold}$ as the inverse operation, which takes an $n_1 n_p \times n_2 \times n_3 \times \cdots \times n_{p-1}$ block tensor and returns an $n_1 \times n_2 \times n_3 \times \cdots \times n_p$ tensor; then
$$\operatorname{fold}\bigl(\operatorname{unfold}(\mathcal{A})\bigr) = \mathcal{A}.$$

Definition 2 (see [23]). Let $\mathcal{A}$ be $n_1 \times n_2 \times n_3 \times \cdots \times n_p$ and $\mathcal{B}$ be $n_2 \times m \times n_3 \times \cdots \times n_p$. Then the t-product $\mathcal{A} * \mathcal{B}$ is the $n_1 \times m \times n_3 \times \cdots \times n_p$ tensor defined recursively as
$$\mathcal{A} * \mathcal{B} = \operatorname{fold}\bigl(\operatorname{bcirc}(\mathcal{A}) * \operatorname{unfold}(\mathcal{B})\bigr).$$

Remark 3. If $\mathcal{A}$ and $\mathcal{B}$ are order-2 tensors, then the product “$*$” can be replaced by standard matrix multiplication.

Remark 4. The $k$th power of $\mathcal{A}$ is defined as $\mathcal{A}^k = \mathcal{A} * \mathcal{A} * \cdots * \mathcal{A}$ ($k$ times), where “$*$” denotes the t-product.

Example 5. Letting $\mathcal{A}$ and $\mathcal{B}$ be two given third-order tensors, then, from Definition 2, the t-product $\mathcal{A} * \mathcal{B}$ can be computed explicitly.
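To make Definition 2 concrete, the following is a minimal NumPy sketch for the third-order case. It relies on the well-known fact that the block circulant action is diagonalized by the discrete Fourier transform along the third mode [22]; all function names are ours, not the paper's.

```python
import numpy as np

def t_product(A, B):
    # t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x m x n3).
    # The block-circulant multiplication of Definition 2 becomes slice-wise
    # matrix multiplication after an FFT along the third mode.
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.stack([Af[:, :, k] @ Bf[:, :, k] for k in range(n3)], axis=2)
    return np.real(np.fft.ifft(Cf, axis=2))

def t_power(A, k):
    # k-th t-product power of A (Remark 4); assumes k >= 1.
    P = A
    for _ in range(k - 1):
        P = t_product(P, A)
    return P
```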

Remark 6. One of the characteristic features of the t-product is that it ensures that the order of the product of two tensors does not change, whereas other tensor multiplications do not have this feature; that is why we choose the t-product as the multiplication of tensors.

The tensor exponential function is a function of a tensor argument analogous to the ordinary exponential function; it can be defined as follows.

Definition 7. Let $\mathcal{A}$ be an $n \times n \times n_3 \times \cdots \times n_p$ real or complex tensor. The tensor exponential function of $\mathcal{A}$, denoted by $e^{\mathcal{A}}$ or $\exp(\mathcal{A})$, is the tensor given by the power series
$$e^{\mathcal{A}} = \sum_{k=0}^{\infty} \frac{1}{k!} \mathcal{A}^k,$$
where $\mathcal{A}^0$ is defined to be the identity tensor $\mathcal{I}$ (see Definition 8) with the same orders as $\mathcal{A}$.

Definition 8 (see [23]). The order-$p$ identity tensor $\mathcal{I} \in \mathbb{R}^{n \times n \times n_3 \times \cdots \times n_p}$ is the tensor such that $\mathcal{I}_1$ (the slice obtained by fixing the last index at 1) is the order-$(p-1)$ identity tensor, and $\mathcal{I}_j$ is the order-$(p-1)$ zero tensor for $j = 2, \dots, n_p$.
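A short sketch of Definition 8 for the third-order case, in the same hypothetical NumPy style as above:

```python
def t_eye(n, n3):
    # Order-3 identity tensor of Definition 8: the first frontal slice is
    # the n x n identity matrix and all remaining slices are zero, so that
    # t_product(t_eye(n, n3), B) == B.
    E = np.zeros((n, n, n3))
    E[:, :, 0] = np.eye(n)
    return E
```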

By Definition 8, we can define tensor inverse, transpose, and orthogonality. However, we do not discuss these notions here, as they are beyond the scope of the present work. For the details of these definitions, we refer the reader to [22, 23, 25] and the references therein.

Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$; then the norm of the tensor $\mathcal{A}$ is the square root of the sum of the squares of all its entries [25]; i.e.,
$$\|\mathcal{A}\| = \sqrt{\sum_{i_1=1}^{n_1} \sum_{i_2=1}^{n_2} \cdots \sum_{i_p=1}^{n_p} a_{i_1 i_2 \cdots i_p}^2}. \tag{13}$$

This is analogous to the matrix Frobenius norm. The inner product of two same-sized tensors $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$ is the sum of the products of their elements [25]; i.e.,
$$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1=1}^{n_1} \sum_{i_2=1}^{n_2} \cdots \sum_{i_p=1}^{n_p} a_{i_1 i_2 \cdots i_p}\, b_{i_1 i_2 \cdots i_p}. \tag{14}$$
It follows immediately that $\langle \mathcal{A}, \mathcal{A} \rangle = \|\mathcal{A}\|^2$.
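These two operations translate directly into code; a minimal sketch consistent with (13) and (14):

```python
def t_inner(A, B):
    # Inner product (14): sum of entrywise products of two same-sized tensors.
    return float(np.sum(A * B))

def t_norm(A):
    # Tensor Frobenius norm (13), via the identity <A, A> = ||A||^2.
    return np.sqrt(t_inner(A, A))
```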

3. Tensor Padé-Type Approximant

Let $f$ be a given power series with tensor coefficients; i.e.,
$$f(t) = \sum_{i=0}^{\infty} \mathcal{C}_i\, t^i, \qquad \mathcal{C}_i \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}. \tag{15}$$

Let $\mathcal{P}$ denote the set of scalar polynomials in one real variable whose coefficients belong to the real field, and let $\mathcal{P}_k$ denote the set of elements of $\mathcal{P}$ of degree less than or equal to $k$.

Let $c$ be a linear functional on $\mathcal{P}$. Let it act on $x^i$ by
$$c(x^i) = \mathcal{C}_i, \qquad i = 0, 1, 2, \dots$$
Then, by the linearity of $c$, we have (formally)
$$c\!\left(\frac{1}{1 - xt}\right) = \sum_{i=0}^{\infty} c(x^i)\, t^i = f(t).$$

3.1. Definition of the Tensor Padé-Type Approximant

Let $v(x) = b_0 + b_1 x + \cdots + b_m x^m$ be a scalar polynomial of $\mathcal{P}_m$ of exact degree $m$. In this case, $v$ is said to be quasi-monic. Define the tensor polynomial $W$ associated with $v$, with tensor-valued coefficients, by
$$W(t) = c\!\left(\frac{v(x) - v(t)}{x - t}\right), \tag{18}$$
where $c$ acts on the variable $x$. It is easily seen that $W$ is a tensor polynomial of exact degree $m - 1$ in $t$. Set
$$\widetilde{v}(t) = t^m\, v(t^{-1}), \tag{19}$$
$$\widetilde{W}(t) = t^{m-1}\, W(t^{-1}). \tag{20}$$
Then, the polynomials $\widetilde{v}$ and $\widetilde{W}$ are obtained from $v$ and $W$, respectively, by reversing the numbering of the coefficients. By the procedure given above, the following conclusion is obtained.

Theorem 9. Let $\widetilde{v}$ and $\widetilde{W}$ be given by (19) and (20); then
$$f(t) - \frac{\widetilde{W}(t)}{\widetilde{v}(t)} = O(t^m).$$

Proof. Expanding $(v(x) - v(t))/(x - t)$ in (18) and applying $c$ yields
$$W(t) = \sum_{j=1}^{m} b_j \sum_{i=0}^{j-1} \mathcal{C}_i\, t^{j-1-i}.$$
Computing $\widetilde{v}(t) f(t)$, we get that the coefficient of $t^l$ with $0 \le l \le m-1$ is $\sum_{j \ge m-l} b_j\, \mathcal{C}_{l-m+j}$, which is exactly the coefficient of $t^l$ in $\widetilde{W}(t)$. Thus,
$$\widetilde{v}(t) f(t) - \widetilde{W}(t) = O(t^m),$$
and the assertion follows after division by $\widetilde{v}(t)$.

Definition 10. $\widetilde{W}(t)/\widetilde{v}(t)$ is called a tensor Padé-type approximant with order $O(t^m)$ for the given power series (15) and is denoted by $(m-1/m)_f(t)$.

Remark 11. The polynomial $v$, called the generating polynomial of $(m-1/m)_f(t)$ with respect to the power series $f$, can be arbitrarily chosen.

Remark 12. The tensor Padé-type approximant $(m-1/m)_f(t)$ possesses a degree constraint, which is caused by its construction process. The constraint implies that the method does not construct a tensor Padé-type approximant of type $(n/m)$ when $n$ is different from $m-1$.

To fill this gap, we define a new tensor Padé-type approximant by introducing a generalized linear functional.

Let $c^{(k)}$ be a generalized linear functional on $\mathcal{P}$. Let it act on $x^i$ by
$$c^{(k)}(x^i) = \mathcal{C}_{k+i}, \qquad i = 0, 1, 2, \dots \tag{24}$$
Similarly to what was done for $c$, we consider the polynomial associated with $v$ and $c^{(n-m+1)}$, defined by
$$W^{(n)}(t) = c^{(n-m+1)}\!\left(\frac{v(x) - v(t)}{x - t}\right). \tag{25}$$
Set
$$\widetilde{W}^{(n)}(t) = t^{m-1}\, W^{(n)}(t^{-1}) \tag{26}$$
and define
$$R_{n,m}(t) = \sum_{i=0}^{n-m} \mathcal{C}_i\, t^i + t^{n-m+1}\, \frac{\widetilde{W}^{(n)}(t)}{\widetilde{v}(t)}. \tag{27}$$
Then we have the following conclusion.

Theorem 13. Let $R_{n,m}(t)$ be defined by (27); then
$$f(t) - R_{n,m}(t) = O(t^{n+1}).$$

Proof. Let $g$ be the formal power series
$$g(t) = \sum_{i=0}^{\infty} \mathcal{C}_{n-m+1+i}\, t^i.$$
Then
$$f(t) = \sum_{i=0}^{n-m} \mathcal{C}_i\, t^i + t^{n-m+1} g(t). \tag{29}$$
Expanding (25) and using (26), we obtain that $\widetilde{W}^{(n)}(t)/\widetilde{v}(t)$ is precisely the tensor Padé-type approximant $(m-1/m)_g(t)$ of $g$. By Theorem 9, one has
$$g(t) - \frac{\widetilde{W}^{(n)}(t)}{\widetilde{v}(t)} = O(t^m).$$
Then we deduce from (27) and (29) that
$$f(t) - R_{n,m}(t) = t^{n-m+1}\left(g(t) - \frac{\widetilde{W}^{(n)}(t)}{\widetilde{v}(t)}\right) = O(t^{n+1}).$$

Now, we can achieve an approximant of type $(n/m)$ by the above procedure, and it will be denoted by $(n/m)_f(t)$.

Definition 14. $R_{n,m}(t)$ is called a TPTA with order $O(t^{n+1})$ and is denoted by $(n/m)_f(t)$.

Algorithm 15 (compute $(n/m)_f(t)$ with $v$ being arbitrarily chosen).
(1) Set $k = n - m + 1$ and choose a quasi-monic polynomial $v \in \mathcal{P}_m$.
(2) Use (19) to compute $\widetilde{v}(t)$.
(3) Compute $W^{(n)}(t)$ and $\widetilde{W}^{(n)}(t)$ by (25) and (26), respectively.
(4) Substitute $\widetilde{v}(t)$ and $\widetilde{W}^{(n)}(t)$ into (27) to compute $R_{n,m}(t)$.
(5) Set $(n/m)_f(t) = R_{n,m}(t)$.
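The construction (24)–(27) is linear in the tensor coefficients, so Algorithm 15 translates into a few lines of NumPy. The following is a hedged sketch with our own variable names; it assumes the coefficients $\mathcal{C}_i$ are supplied as same-sized arrays and that $n \ge m - 1$.

```python
def tpta(C, b, n, t):
    # Evaluate an (n/m) tensor Pade-type approximant at a scalar t,
    # following (24)-(27).
    #   C : tensor coefficients C_0, C_1, ... of f (NumPy arrays), len(C) > n
    #   b : coefficients b_0..b_m of the chosen quasi-monic v(x)
    m = len(b) - 1
    k = n - m + 1                                   # shift of c^(k) in (24)
    # Coefficient of t^(m-1-l) in the reversed numerator (26).
    w = [sum(b[j] * C[k + j - 1 - l] for j in range(l + 1, m + 1))
         for l in range(m)]
    vt = sum(b[j] * t ** (m - j) for j in range(m + 1))   # (19)
    Wt = sum(w[l] * t ** (m - 1 - l) for l in range(m))   # (26)
    head = sum(C[i] * t ** i for i in range(k))           # partial sum in (27)
    return head + t ** k * Wt / vt
```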

Example 16. Let $f$ be a given power series of the form (15) with prescribed tensor coefficients $\mathcal{C}_0, \mathcal{C}_1, \dots$

Now we apply Algorithm 15 to compute the TPTA of type $(n/m)$ for this example. Choose a quasi-monic polynomial $v$; use (19) to compute $\widetilde{v}(t)$; by using (25) and (26) we get $W^{(n)}(t)$ and $\widetilde{W}^{(n)}(t)$; substituting $\widetilde{v}(t)$ and $\widetilde{W}^{(n)}(t)$ into (27), we obtain $R_{n,m}(t)$; and we set $(n/m)_f(t) = R_{n,m}(t)$. It is easy to verify that
$$f(t) - (n/m)_f(t) = O(t^{n+1}).$$

3.2. Algorithm for Computing TPTA

Generally, the precision of the TPTA is limited, since the denominator polynomial of the TPTA is arbitrarily prescribed. In this subsection, in order to improve the precision of approximation, we propose an algorithm for computing the denominator polynomial and illustrate the efficiency of this algorithm in the next section.

First, we give the following conclusion.

Theorem 17 (error formula).
$$f(t) - (n/m)_f(t) = \frac{t^{n+1}}{\widetilde{v}(t)}\, c^{(n-m+1)}\!\left(\frac{v(x)}{1 - xt}\right).$$

Proof. Note that $c^{(n-m+1)}$ is a linear functional on $\mathcal{P}$, acting only on the variable $x$. From (18) and (20) we deduce that
$$\widetilde{v}(t)\, g(t) - \widetilde{W}^{(n)}(t) = t^m\, c^{(n-m+1)}\!\left(\frac{v(x)}{1 - xt}\right),$$
with $g$ as in the proof of Theorem 13; multiplying by $t^{n-m+1}/\widetilde{v}(t)$, this error formula holds.

In terms of the error formula, expanding $1/(1 - xt)$, it holds that
$$f(t) - (n/m)_f(t) = \frac{t^{n+1}}{\widetilde{v}(t)} \Bigl[ c^{(n-m+1)}\bigl(v(x)\bigr) + c^{(n-m+1)}\bigl(x\, v(x)\bigr)\, t + c^{(n-m+1)}\bigl(x^2 v(x)\bigr)\, t^2 + \cdots \Bigr]. \tag{42}$$
If we impose that $v$ satisfies the condition $c^{(n-m+1)}(v(x)) = \mathcal{O}$ (the zero tensor), then the first term of (42) disappears, and the order of approximation becomes $O(t^{n+2})$. If, in addition, we also impose the condition $c^{(n-m+1)}(x\,v(x)) = \mathcal{O}$, the second term in the expansion of the error also disappears, and the order of approximation becomes $O(t^{n+3})$, and so on. We indicate that $v$ depends on $m+1$ arbitrary constants; however, on the other side, a rational function is defined apart from a multiplying factor in its numerator and its denominator. It implies that $(n/m)_f(t)$ depends on $m$ arbitrary constants. So let us take $v$ such that
$$c^{(n-m+1)}\bigl(x^i\, v(x)\bigr) = \mathcal{O}, \qquad i = 0, 1, \dots, m-1. \tag{43}$$

Definition 18. The polynomial $v$ in (43) is called an orthogonal polynomial with respect to the linear functional $c^{(n-m+1)}$, and $(n/m)_f(t)$ in (42) is also called a TPTA for the given power series (15) when (43) is satisfied.

From (43) we obtain
$$\sum_{j=0}^{m} b_j\, \mathcal{C}_{n-m+1+i+j} = \mathcal{O}, \qquad i = 0, 1, \dots, m-1. \tag{44}$$
Let $b_m = 1$ in (44); then it follows that
$$\sum_{j=0}^{m-1} b_j\, \mathcal{C}_{n-m+1+i+j} = -\,\mathcal{C}_{n+1+i}, \qquad i = 0, 1, \dots, m-1. \tag{45}$$
Forming the scalar product (14) of both sides of (45) with a fixed tensor $\mathcal{Z}$, respectively, we get
$$\sum_{j=0}^{m-1} b_j\, \bigl\langle \mathcal{C}_{n-m+1+i+j}, \mathcal{Z} \bigr\rangle = -\,\bigl\langle \mathcal{C}_{n+1+i}, \mathcal{Z} \bigr\rangle, \qquad i = 0, 1, \dots, m-1. \tag{46}$$
Denote $h_l = \langle \mathcal{C}_l, \mathcal{Z} \rangle$ and
$$H = \begin{pmatrix} h_{n-m+1} & h_{n-m+2} & \cdots & h_{n} \\ h_{n-m+2} & h_{n-m+3} & \cdots & h_{n+1} \\ \vdots & \vdots & & \vdots \\ h_{n} & h_{n+1} & \cdots & h_{n+m-1} \end{pmatrix}, \tag{47}$$
and call $\det H$ the Hankel determinant of $f$ with respect to the coefficients $\mathcal{C}_{n-m+1}, \dots, \mathcal{C}_{n+m-1}$.

Then (46) is converted into
$$H\, b = h^{\ast}, \tag{48}$$
where $b = (b_0, b_1, \dots, b_{m-1})^T$ and $h^{\ast} = -\,(h_{n+1}, h_{n+2}, \dots, h_{n+m})^T$.

In the case of the TPTA, $v$ is not arbitrarily chosen any more but is determined by the preceding system. This choice of $v$ helps to improve the accuracy of approximation, but unfortunately we have not been able to guarantee, so far, that a solution of system (48) exists. We only give the following basic theorem about system (48) on the basis of linear algebra.
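A minimal sketch of solving (48) numerically, assuming the scalar moments $h_l$ have already been formed as in (46); the normalization $b_m = 1$ and the container for the moments are our choices.

```python
def denominator_from_moments(h, n, m):
    # Solve the Hankel system (48) for b_0..b_{m-1}, with b_m = 1.
    #   h : scalar moments h_l = <C_l, Z>, indexable for l = n-m+1 .. n+m.
    H = np.array([[h[n - m + 1 + i + j] for j in range(m)] for i in range(m)])
    rhs = -np.array([h[n + 1 + i] for i in range(m)])
    b = np.linalg.solve(H, rhs)     # unique when det H != 0 (Theorem 19)
    return np.append(b, 1.0)        # returns b_0, ..., b_{m-1}, b_m
```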

Theorem 19. The solution of (48) exists if and only if $\operatorname{rank}(H) = \operatorname{rank}(H, h^{\ast})$. Moreover, the solution is unique if $\det H \neq 0$.

Proof. The proof of the assertion follows from the simple fact that, for a system of linear equations described by $Ax = y$, where $A$ is an $m \times m$ matrix and $x$, $y$ are vectors, a solution exists if and only if $\operatorname{rank}(A) = \operatorname{rank}(A, y)$; i.e., the right-hand vector must be in the vector space spanned by the columns of the coefficient matrix $A$. Moreover, if $\det A \neq 0$, according to Cramer's rule, the solution is unique.

Theorem 20 (existence). Let $f$ be the given power series (15); then the numerator and denominator of $(n/m)_f(t)$ exist and are unique if and only if $\det H \neq 0$.

Proof. "$\Leftarrow$" By Theorem 19, if $\det H \neq 0$, the nonhomogeneous equation (48) has a unique solution, which determines the generating polynomial $v$. From (19), it also means that $\widetilde{v}$ and $\widetilde{W}^{(n)}$ exist. Hence, by the construction of $(n/m)_f(t)$, existence holds.
"$\Rightarrow$" Let $(n/m)_f(t)$ exist and be unique, and suppose $\det H = 0$. If $\operatorname{rank}(H) = \operatorname{rank}(H, h^{\ast})$, then equation (46) has infinitely many solutions; namely, we can construct infinitely many generating polynomials, which is contradictory to the uniqueness of $(n/m)_f(t)$.
The proof of existence and uniqueness of the numerator is similar to the preceding process.

Theorem 21. Let $\det H \neq 0$; then $(n/m)_f(t) = R_{n,m}(t)$, where the generating polynomial $v$ is given by
$$v(x) = \frac{1}{\det H} \begin{vmatrix} h_{n-m+1} & h_{n-m+2} & \cdots & h_{n+1} \\ h_{n-m+2} & h_{n-m+3} & \cdots & h_{n+2} \\ \vdots & \vdots & & \vdots \\ h_{n} & h_{n+1} & \cdots & h_{n+m} \\ 1 & x & \cdots & x^m \end{vmatrix}, \tag{50}$$
where $h_l = \langle \mathcal{C}_l, \mathcal{Z} \rangle$, and $\widetilde{v}$ and $\widetilde{W}$ are given by (19) and (20), respectively.

Now, we can derive an algorithm to calculate $(n/m)_f(t)$ using (26), (27), and (50).

Algorithm 22 (compute $(n/m)_f(t)$).
(1) Use (14) to calculate the scalar moments $h_l$, $l = n-m+1, \dots, n+m$.
(2) Use (50) and (19) to compute $v(x)$ and $\widetilde{v}(t)$, respectively.
(3) Set $k = n - m + 1$ and compute $W^{(n)}(t)$ and $\widetilde{W}^{(n)}(t)$ by (25) and (26).
(4) Compute the numerator of the TPTA by (27).
(5) Obtain $(n/m)_f(t) = R_{n,m}(t)$.
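Putting the pieces together, a hypothetical end-to-end sketch of Algorithm 22, reusing t_inner, denominator_from_moments, and tpta from above; the choice of the scalarizing tensor Z is our assumption.

```python
def tpta_via_orthogonality(C, Z, n, m, t):
    # Steps (1)-(5) of Algorithm 22: form the moments h_l = <C_l, Z> by (14),
    # solve (48) for the generating polynomial, then evaluate via (27).
    h = {l: t_inner(C[l], Z) for l in range(n - m + 1, n + m + 1)}
    b = denominator_from_moments(h, n, m)
    return tpta(C, b, n, t)
```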

4. Application for Computing the Tensor Exponential Function

The method of truncated infinite series has broad applications in finite single crystal plasticity for computing the tensor exponential function [16]. However, the accuracy and effectiveness of such an algorithm are limited by round-off error and by the choice of termination criterion. In this section, we utilize the TPTA method to compute the tensor exponential function. We start by briefly reviewing some basic equations that model the behaviour of single crystals in the finite strain range [16].

Consider a single crystal model with the multiplicative split of the deformation gradient
$$F = F^e F^p,$$
where $F^e$ and $F^p$ denote the elastic part and the plastic part, respectively.

For a single crystal with a total number $N$ of slip systems, the evolution of the inelastic deformation gradient $F^p$ is defined by means of the following rate form:
$$\dot{F}^p = \left(\sum_{\alpha=1}^{N} \dot{\gamma}^{\alpha}\, s^{\alpha} \otimes n^{\alpha}\right) F^p,$$
where $\dot{\gamma}^{\alpha}$ denotes the contribution of slip system $\alpha$ to the total inelastic rate of deformation. The vectors $s^{\alpha}$ and $n^{\alpha}$ denote, respectively, the slip direction and normal direction of slip system $\alpha$.

The above tensor differential equation can be discretized in an implicit fashion with use of the tensor exponential function. The implicit exponential approximation to the inelastic flow equation results in the following discrete form:
$$F^p_{k+1} = \exp\!\left(\Delta t \sum_{\alpha=1}^{N} \dot{\gamma}^{\alpha}_{k+1}\, s^{\alpha} \otimes n^{\alpha}\right) F^p_{k}.$$

The above formula is analogous to the exact solution of initial value problem (1), and it is necessary to calculate
$$\exp(\mathcal{A}), \qquad \mathcal{A} = \Delta t \sum_{\alpha=1}^{N} \dot{\gamma}^{\alpha}_{k+1}\, s^{\alpha} \otimes n^{\alpha}. \tag{56}$$

In [7], the authors used Algorithm 23 to calculate (56).

Algorithm 23 (truncated infinite series method [7] (p. 749)).
(1) Given the tensor $\mathcal{A}$, initialise $n := 0$, $\mathcal{T}_0 := \mathcal{I}$, and $\exp(\mathcal{A}) := \mathcal{I}$.
(2) Increment the counter: $n := n + 1$.
(3) Compute $n!$ and the term $\mathcal{T}_n := \mathcal{A}^n / n!$.
(4) Add the new term to the series: $\exp(\mathcal{A}) := \exp(\mathcal{A}) + \mathcal{T}_n$.
(5) Check convergence: if $\|\mathcal{T}_n\| \le \epsilon_{\mathrm{tol}}$, stop; otherwise return to step (2).
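A compact NumPy sketch of this truncation strategy for third-order tensors, reusing t_product and t_eye from Section 2; the tolerance and the term cap are our defaults.

```python
def tensor_exp_series(A, tol=1e-12, max_terms=100):
    # Truncated-series tensor exponential in the spirit of Algorithm 23:
    # accumulate A^n/n! (t-product powers) until the norm of the newly
    # added term falls below tol.
    n, _, n3 = A.shape
    term = t_eye(n, n3)                 # A^0 = identity tensor
    E = term.copy()
    for k in range(1, max_terms):
        term = t_product(term, A) / k   # A^k/k! from A^(k-1)/(k-1)!
        E = E + term
        if np.linalg.norm(term) < tol:  # convergence check, step (5)
            break
    return E
```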

Example 24. Consider a tensor exponential function $\exp(\mathcal{A}t)$, where the tensor $\mathcal{A}$ has four prescribed nonzero entries and is zero elsewhere.

To find a tensor Padé-type approximation of type $(n/m)$ for the tensor exponential function, first we should expand $\exp(\mathcal{A}t)$ into a power series by means of Definition 7, which gives the coefficients $\mathcal{C}_i = \mathcal{A}^i / i!$. By Algorithm 22, the following can be done. Use (14) to compute the moments $h_l$ and assemble $H$ and $h^{\ast}$. Use (50) to calculate $v(x)$, and compute $\widetilde{v}(t)$ by (19). Set $k = n - m + 1$ and compute $W^{(n)}(t)$ and $\widetilde{W}^{(n)}(t)$ by (25) and (26). Substitute $\widetilde{v}(t)$ and $\widetilde{W}^{(n)}(t)$ into (27) to compute $R_{n,m}(t)$. Obtain $(n/m)_f(t) = R_{n,m}(t)$.

In Table 1 we compare the values given by the method of TPTA with the corresponding exact values of $\exp(\mathcal{A}t)$ at selected entries. We also compute the norm of the absolute residual tensor (denoted by $r$). Here,
$$r = \bigl\| \exp(\mathcal{A}t) - (n/m)_f(t) \bigr\|,$$
where the norm $\|\cdot\|$ is defined by (13).

Table 1: Numerical results of Example 24 at different points by using Algorithm 22.

From Table 1, it is observed that the estimates from TPTA can reach the desired accuracy.
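As a hypothetical reproduction of this kind of experiment, the sketch below wires together the pieces defined earlier on a small random third-order tensor; the sizes, the seed, the scalarizing tensor $\mathcal{Z} = \mathcal{A}$, and the type (2/2) are all our choices, not the paper's data.

```python
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((2, 2, 2))
C = [t_eye(2, 2)]
for i in range(1, 6):                   # series coefficients C_i = A^i / i!
    C.append(t_product(C[-1], A) / i)
exact = tensor_exp_series(A)            # reference value of exp(A)
approx = tpta_via_orthogonality(C, A, n=2, m=2, t=1.0)
print(np.linalg.norm(exact - approx))   # residual norm r, as in Table 1
```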

Example 25. Let the tensor $\mathcal{A}$ be given as in Example 24.

By applying Algorithm 22 to the preceding example again, we calculate $(n/m)_f(t)$. The exact values and the approximant values associated with selected entries of $\exp(\mathcal{A}t)$ are listed in Tables 2 and 3, respectively.

Table 2: The exact value of .
Table 3: Numerical approximations of using Algorithm 22 for Example 25.

From Table 3, we can see which type of TPTA has the best approximation for this example. We also compute $\exp(\mathcal{A}t)$ by using Algorithm 23, and the corresponding numerical results are listed in Table 4. By comparison of Table 3 with Table 4, we find that Algorithm 22 requires at most 6 coefficients of the power series expansion of $\exp(\mathcal{A}t)$ to achieve the reported error, while Algorithm 23 requires 11 coefficients. It is straightforward to see that Algorithm 23 is more expensive than Algorithm 22, especially for higher-order tensor exponential functions. In practical applications, only a few coefficients of the series may be known, so we may still obtain the desired results by means of the TPTA. Thus the effectiveness of the proposed Algorithm 22 is verified.

Table 4: Numerical results of Example 25 by using Algorithm 23.

5. Conclusion

In this paper, we presented a tensor Padé-type approximant (TPTA) method for computing the tensor exponential function; the expression of the TPTA takes the form of a tensor numerator and a scalar denominator. In order to obtain a tensor Padé-type approximant with the highest possible precision of approximation, we proposed an algorithm for computing the denominator polynomial of the TPTA, and its effectiveness has been investigated in an example of the tensor exponential function. The key to applying the TPTA to the tensor exponential function is that the latter can be expanded into a power series with tensor coefficients of the same order by means of the t-product. Of course, there are several other ways to multiply tensors [26–30], but the order of the resulting tensor may change; for example, the contracted product [26] of two third-order tensors is, in general, a tensor of a different order. So the choice of the multiplication of two tensors is an open question for expanding the tensor exponential function, and the corresponding tensor Padé approximant theory is a subject of further research.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (11371243) and the Key Disciplines of Shanghai Municipality (S30104).

References

  1. S. Dolgov and B. Khoromskij, “Simultaneous state-time approximation of the chemical master equation using tensor product formats,” Numerical Linear Algebra with Applications, vol. 22, no. 2, pp. 197–219, 2015.
  2. N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland Personal Library, Elsevier B.V., 3rd edition, 2007.
  3. P. Gelß, S. Matera, and C. Schütte, “Solving the master equation without kinetic Monte Carlo: tensor train approximations for a CO oxidation model,” Journal of Computational Physics, vol. 314, pp. 489–502, 2016.
  4. W. Ding, K. Liu, E. Belyaev, and F. Cheng, “Tensor-based linear dynamical systems for action recognition from 3D skeletons,” Pattern Recognition, pp. 75–86, 2017.
  5. P. Gelß, S. Klus, S. Matera, and C. Schütte, “Nearest-neighbor interaction systems in the tensor-train format,” Journal of Computational Physics, vol. 341, pp. 140–162, 2017.
  6. C. Gu, Y. Z. Huang, and Z. B. Chen, Continued Fractional Recurrence Algorithm for Generalized Inverse Tensor Padé Approximation, Control and Decision, 2018, http://kns.cnki.net/kcms/detail/21.1124.TP.20180416.0932.036.html.
  7. E. A. de Souza Neto, D. Perić, and D. R. J. Owen, Computational Methods for Plasticity: Theory and Applications, Wiley, 2008.
  8. H. Chen, Y. Chen, G. Li, and L. Qi, “A semidefinite program approach for computing the maximum eigenvalue of a class of structured tensors and its applications in hypergraphs and copositivity test,” Numerical Linear Algebra with Applications, vol. 25, no. 1, e2125, 16 pages, 2018.
  9. G. Zhou, G. Wang, L. Qi, and M. Alqahtani, “A fast algorithm for the spectral radii of weakly reducible nonnegative tensors,” Numerical Linear Algebra with Applications, vol. 25, no. 2, e2134, 10 pages, 2018.
  10. H. Chen and Y. Wang, “On computing minimal H-eigenvalue of sign-structured tensors,” Frontiers of Mathematics in China, vol. 12, no. 6, pp. 1289–1302, 2017.
  11. G. Wang, G. Zhou, and L. Caccetta, “Z-eigenvalue inclusion theorems for tensors,” Discrete and Continuous Dynamical Systems - Series B, vol. 22, no. 1, pp. 187–198, 2017.
  12. K. Zhang and Y. Wang, “An H-tensor based iterative scheme for identifying the positive definiteness of multivariate homogeneous forms,” Journal of Computational and Applied Mathematics, vol. 305, pp. 1–10, 2016.
  13. Y. Wang, K. Zhang, and H. Sun, “Criteria for strong H-tensors,” Frontiers of Mathematics in China, vol. 11, no. 3, pp. 577–592, 2016.
  14. H. Chen, L. Qi, and Y. Song, “Column sufficient tensors and tensor complementarity problems,” Frontiers of Mathematics in China, vol. 13, no. 2, pp. 255–276, 2018.
  15. Y. Wang, L. Caccetta, and G. Zhou, “Convergence analysis of a block improvement method for polynomial optimization over unit spheres,” Numerical Linear Algebra with Applications, vol. 22, no. 6, pp. 1059–1076, 2015.
  16. E. A. de Souza Neto, “The exact derivative of the exponential of an unsymmetric tensor,” Computer Methods in Applied Mechanics and Engineering, vol. 190, no. 18-19, pp. 2377–2383, 2001.
  17. A. Cuitino and M. Ortiz, “A material-independent method for extending stress update algorithms from small-strain plasticity to finite plasticity with multiplicative kinematics,” Engineering Computations, vol. 9, no. 4, pp. 437–451, 1992.
  18. A. L. Eterovic and K. Bathe, “A hyperelastic-based large strain elasto-plastic constitutive formulation with combined isotropic-kinematic hardening using the logarithmic stress and strain measures,” International Journal for Numerical Methods in Engineering, vol. 30, no. 6, pp. 1099–1114, 1990.
  19. J. C. Simo, “Algorithms for static and dynamic multiplicative plasticity that preserve the classical return mapping schemes of the infinitesimal theory,” Computer Methods in Applied Mechanics and Engineering, vol. 99, no. 1, pp. 61–112, 1992.
  20. C. Brezinski, Padé-Type Approximation and General Orthogonal Polynomials, vol. 50 of International Series of Numerical Mathematics, Birkhäuser, Basel, Switzerland, 1980.
  21. C. Gu, “Matrix Padé-type approximant and directional matrix Padé approximant in the inner product space,” Journal of Computational and Applied Mathematics, vol. 164, pp. 365–385, 2004.
  22. M. E. Kilmer, K. Braman, N. Hao, and R. C. Hoover, “Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging,” SIAM Journal on Matrix Analysis and Applications, vol. 34, no. 1, pp. 148–172, 2013.
  23. C. D. Martin, R. Shafer, and B. Larue, “An order-p tensor factorization with applications in imaging,” SIAM Journal on Scientific Computing, vol. 35, no. 1, pp. A474–A490, 2013.
  24. M. Itskov, “Computation of the exponential and other isotropic tensor functions and their derivatives,” Computer Methods in Applied Mechanics and Engineering, vol. 192, no. 35-36, pp. 3985–3999, 2003.
  25. T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
  26. M. E. Kilmer and C. D. Martin, “Factorization strategies for third-order tensors,” Linear Algebra and its Applications, vol. 435, no. 3, pp. 641–658, 2011.
  27. B. W. Bader and T. G. Kolda, “Algorithm 862: MATLAB tensor classes for fast algorithm prototyping,” ACM Transactions on Mathematical Software, vol. 32, no. 4, pp. 635–653, 2006.
  28. H. A. L. Kiers, “Towards a standardized notation and terminology in multiway analysis,” Journal of Chemometrics, vol. 14, no. 3, pp. 105–122, 2000.
  29. L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.
  30. D. Liu, W. Li, and S.-W. Vong, “The tensor splitting with application to solve multi-linear systems,” Journal of Computational and Applied Mathematics, vol. 330, pp. 75–94, 2018.