The tensor product (TP) model transformation defines and numerically reconstructs the Higher-Order Singular Value Decomposition (HOSVD) of functions; it plays the same role for functions as the HOSVD does for tensors (and the SVD for matrices). The need for advantageous features such as rank/complexity reduction, trade-offs between complexity and accuracy, and the manipulation power of the TP form has motivated novel concepts in TS fuzzy model based modelling and control. The latest extensions of the TP model transformation, called the multi- and generalised TP model transformations, are applicable to a set of functions whose output dimensionalities may differ, but whose input dimensionalities are strictly required to be the same. This paper proposes an extended version that is applicable to a set of functions in which both the input and output dimensionalities may differ. This makes it possible to transform complete multicomponent systems to TS fuzzy models while retaining the above-mentioned advantages.

1. Introduction

The appearance of the Singular Value Decomposition (SVD) was one of the largest breakthroughs in matrix algebra [1]. Its applicability was extended to tensors in the form of the Higher-Order SVD [2] around 2000. Recently, a further extension of the SVD and HOSVD concept, known as the tensor product (TP) model transformation, was proposed for functions in control theory [3]. A comprehensive overview is given in [4]. Various extensions of the TP model transformation such as the bilinear-, pseudo-, multi-, and generalised TP model transformation, as well as the concept of the HOSVD canonical form of TS fuzzy or TP models, were proposed in [4–7], with a special focus on TS fuzzy models in [8]. The approximation power of the TP model transformation applied to TS fuzzy models is investigated in [9].

The above-mentioned extensions and variations of the TP model transformation were primarily applied to fuzzy model complexity reduction [10, 11] and in the widely used TS fuzzy model based PDC (Parallel Distributed Compensation) control theories [12–14]. More generally, it has been applied in polytopic model, TP/TS fuzzy model, and LMI (Linear Matrix Inequality [15]) based control theories. The most important features of the TP model transformation are guaranteed by the key transformation step, whereby a numerically reconstructed HOSVD structure is determined. Key features of the transformation are as follows:
(i) It is executable on models given by equations or by soft computing based representations, such as fuzzy rules, neural networks, or other black-box models. The only requirement is that the model must provide an output for each input (at least on a discrete scale; see Section 4, Step 1).
(ii) It finds the minimal complexity, namely, the minimal number of rules of the TS fuzzy model. If further complexity reduction is required, it provides one of the best trade-offs between the number of rules and the approximation error.
(iii) It works like a principal component analysis, in that it determines the order of the components/fuzzy rules according to their importance.
(iv) It is capable of deriving the antecedent fuzzy sets according to various constraints. For instance, it can be used to define different convex hulls, a capability which has recently been shown to play an important role in control theory.
(v) It is capable of transforming the given model to predefined antecedent fuzzy sets (pseudo-TP model transformation).
(vi) It is capable of transforming a set of models simultaneously, while common antecedent fuzzy sets are derived for all models.

Based on the above, various theories and applications have emerged using the TP model transformation. Further computational improvements were proposed in [16, 17]. It has been proved in [5, 18–20] that LMI based control design theories are very sensitive to the convex hulls defined by the consequents (vertices) of TS fuzzy models. Thus, the convex hull manipulation capability of the TP model transformation is an important and necessary step in LMI based control design. Very effective convex hull manipulation methods were incorporated into the TP model transformation in [21–23]. Further useful control approaches and applications were published in the field of control theory [24–41]. Many powerful approaches have been published in the field of sliding mode control [29, 42, 43]. The usability of the TP model transformation has been demonstrated in physiological control as well [44–49]. Various further theories and applications are studied in [50–87].

One of the key advantages of the TP model transformation is that it is capable of finding the minimal complexity of all components of the system and guarantees the same antecedent system for all components. This is a typical requirement in design or stability verification methodologies: the model, controller, and observer need to have the same antecedent system and, hence, the same convex representation. Therefore, the simultaneous manipulation of the components with the multi-TP model transformation or the generalised TP model transformation (which combines all variants of the TP model transformation) yields further possibilities for control performance optimisation [18–20].

Despite the above advantages, a crucial limitation of the generalised TP model transformation is that it can only be applied to a set of systems which have the same number of inputs. For instance, consider four different systems given in different representations, as shown in Figure 1. S1 is a fuzzy logic model; S2 is a neural network; S3 is given by an equation; and S4 is a black-box model. All of these models have the same inputs but may have different sized output tensors. The multi-TP model transformation is capable of simultaneously transforming all systems to TP or TS fuzzy model form, such that the same antecedent sets are defined on the inputs. The generalised TP model transformation can also transform to predefined antecedent fuzzy sets.

A further generalisation proposed in this paper can be applied to systems like in the example given in Figure 2. Here each system may be given by different representations (like in the above case) but may also have different numbers of inputs. The transformation can simultaneously convert all of the systems to TS fuzzy model form, such that the antecedent fuzzy sets will either be the same or assume a predefined structure. From all other perspectives, the proposed TP model transformation inherits all of the advantageous features of the previous TP-based approaches.

Recently proposed SOS-type (Sum-of-Squares) TS fuzzy LPV models are also widely applied in fuzzy control theories [88, 89]. Extending the TP model transformation to such systems is a welcome direction for future work.

2. Notation and Concepts

2.1. Notation

The following notations are used in the paper:
(i) Scalar:  is a scalar.
(ii) Vector:  contains elements .
(iii) Matrix:  contains elements .
(iv) Tensor:  contains elements .
(v) Set: , for example, .
(vi) Index: the upper bounds of the indices are denoted by the corresponding uppercase letter, for example, .
(vii) Index  denotes that the index takes the elements of set , respectively;  is understood by default.
(viii) Interval: .
(ix) Space:  is an -dimensional hypercube.
(x)  expresses that vector  lies within the space ; the dimensions of  and  are the same.
(xi)  denotes a dimensionality-reduced subset in general, as follows:
(a) In the case of spaces:  states that  is a hypercube with the same sized intervals as , but with a smaller number of dimensions.
(b) In the case of vectors: , where  and , means that  and .
(c) In the case of tensors:  means, for instance, that  is obtained by deleting complete dimensions from tensor .
(xii) Grid:  is a rectangular hyper grid (tensor), where  defines the locations of the grid points in increasing order.
(xiii) Pair : space  and grid  form a pair, meaning that  and .
(xiv) Discretised function:  denotes the sampling of  over the pair . Thus it is a tensor of size  with entries
(xv)  is the tensor product (TP); for details, refer to [4, 5, 8]. A slight difference in notation here is that the subscript of the tensor product operator is only a set of numbers to which the product is applied.
(xvi)  represents the TP function , where  is called the weighting function system.
(xvii) Types of the weighting functions are as follows:
(a) SN: sum normalised
(b) NN: nonnegative
(c) NO: normalised
(d) CNO: close to normalised
(e) RNO: relaxed normalised
(f) INO: inverse normalised
(g) IRNO: inverse relaxed normalised.
For further details, refer to [4, 5].
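The discretisation in item (xiv) can be sketched numerically. In the minimal sketch below, the function, the intervals, and the grid densities are purely illustrative assumptions; only the layout of the resulting tensor (input dimensions first, output dimension last) follows the notation above.

```python
import numpy as np

# Hypothetical two-input function with a two-element output vector.
def f(x1, x2):
    return np.array([np.sin(x1) * x2, x1 + x2 ** 2])

# Rectangular discretisation grid over the assumed hypercube [0, 1] x [0, 2].
g1 = np.linspace(0.0, 1.0, 5)   # grid points in dimension 1
g2 = np.linspace(0.0, 2.0, 7)   # grid points in dimension 2

# Discretised function: a tensor of size 5 x 7 x 2 sampling f over the grid.
F = np.array([[f(a, b) for b in g2] for a in g1])
print(F.shape)  # (5, 7, 2)
```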

3. The Proposed TP Model Transformation

Assume that a set of functions is given as , ; thus , . The output tensor of each function may differ in its number of dimensions and in its size, , where  denotes the number of dimensions of the output and  denotes the number of elements in dimension .

The goal of the TP model transformation is to transform into TP function form as

under the following constraints given on the weighting functions.

(i) Unified Constraints for . All resulting TP functions will have the same weighting function system on each dimension defined by the set (obviously, only if the function has that input dimension):
(a) Weighting function systems , , are predefined.
(b) Weighting function systems , , will be derived by the transformation; only their types are predefined (i.e., SN, NN, NO, CNO, RNO, INO, and IRNO). Furthermore, the number of weighting functions is minimised.

(ii) Different Constraints for Each . The resulting TP functions have different weighting functions on dimensions :
(a) Weighting function systems , , are predefined for each .
(b) The types (i.e., SN, NN, NO, CNO, RNO, INO, and IRNO) of the weighting function systems are predefined for dimensions  of each .

Thus, (2) can be given as follows:

4. The Computation of the Proposed TP Model Transformation

Step 1 (discretisation). (i) Discretisation of all  results in tensor , (). The size of  in dimension  is .
(ii) Discretise the predefined weighting functions over the dimensions of :

Remark 1. This step is executed in the same way as in the case of the original TP model transformation; see [4, 5, 8].

Step 2 (defining TP structures). Execute the following steps in each dimension :
(i) Lay out tensors  in dimension  if vector  has the following dimension:
(ii) If  then create
Execute SVD on  and perform the SN, NN, NO, CNO, and complexity trade-off by discarding singular values in the same way as in the original TP model transformation, which results in
If nonzero singular values are discarded, the result is only an approximation. Let
(iii) If  then execute SVD on  as
and, according to the conditions, execute the SN, NN, NO, CNO, and complexity trade-off by discarding singular values in the same way as in the original TP model transformation:
Again, if nonzero singular values are discarded, the result is only an approximation. Let
(iv) Finally,
where
and  denotes the pseudoinverse.
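The unfold-SVD-pseudoinverse pattern of this step can be sketched in NumPy. This is a minimal plain-HOSVD sketch only: the function name, the random test tensor, and the tolerance are illustrative, and the SN/NN/NO/CNO convex-hull transformations of the full method are omitted.

```python
import numpy as np

def mode_n_svd(T, n, rank=None):
    """Mode-n unfolding of tensor T followed by a (possibly truncated) SVD.

    Returns the matrix U (whose columns are discretised weighting functions)
    and the core tensor obtained by contracting T with pinv(U) along
    dimension n.  Sketch only; the convex-hull transformations are omitted.
    """
    Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)   # mode-n unfolding
    U, s, _ = np.linalg.svd(Tn, full_matrices=False)
    r = rank if rank is not None else int(np.sum(s > 1e-12))
    U = U[:, :r]                                        # keep the leading singular vectors
    # Core tensor: project T onto U via the pseudoinverse along dimension n.
    core = np.tensordot(np.linalg.pinv(U), np.moveaxis(T, n, 0), axes=(1, 0))
    return U, np.moveaxis(core, 0, n)

T = np.random.rand(6, 5, 4)
U, S = mode_n_svd(T, 0)
# Exact reconstruction when no nonzero singular value is discarded:
print(np.allclose(np.tensordot(U, S, axes=(1, 0)), T))  # True
```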

Step 3 (reconstruction of the weighting functions). This step is the same as in the multi-TP model transformation [4, 5, 8]. Having the results of the above steps,  and , we can recalculate the weighting functions at any point. The first two steps may be computed over a relatively sparse grid, while the weighting functions are computed over a very dense grid (as suggested in [5]) and then turned into piecewise linear functions. As a result we have  and .
We have thus achieved the goal: the TP model form of all functions under the given constraints,
or
if a complexity trade-off is executed (nonzero singular values are discarded), where
In other words,
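The piecewise linear reconstruction described in Step 3 can be sketched as follows, assuming a one-dimensional grid and two SN/NN weighting functions; the grid, the samples, and all names are illustrative.

```python
import numpy as np

# Discretised weighting functions: column j holds w_j sampled at grid points g.
g = np.linspace(0.0, 1.0, 11)          # assumed discretisation grid
W = np.column_stack([1.0 - g, g])      # two SN/NN weighting functions (sum to 1)

def w(x, j, g=g, W=W):
    """Piecewise linear weighting function w_j recovered from its samples."""
    return np.interp(x, g, W[:, j])

# The recovered functions can now be evaluated at any point of the interval.
print(w(0.25, 0), w(0.25, 1))  # approximately 0.75 and 0.25
```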

Remark 2. The convex hull manipulation and the complexity trade-off are performed in the second step; the approximation accuracy is therefore controlled there, since the discarded nonzero singular values lead to the approximation error. If the given weighting function system is not sufficient (i.e., the number of weighting functions is less than the rank of that dimension), then only an approximation is obtained. The use of the pseudoinverse guarantees, however, that it is the best such approximation.
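The matrix case illustrates how the discarded singular values control the accuracy. The sketch below uses a random matrix (purely illustrative); by the Eckart-Young theorem the spectral-norm error of the rank-r truncation equals the largest discarded singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
U, s, Vt = np.linalg.svd(A)

r = 4  # keep the r largest singular values, discard the rest
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Spectral-norm error of the rank-r approximation equals the largest
# discarded singular value, so the trade-off is fully controlled by
# the singular values that are dropped.
print(np.isclose(np.linalg.norm(A - A_r, 2), s[r]))  # True
```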

5. Example

5.1. The System

Consider a multicomponent system with input vector , where and . The system has four subsystems, as shown in Figure 1.

System 3. In order to have a systematic notation, we denote the input vector of System 3 as , that is, . It is a neural network; see Figure 3:
where  is the activation function of the neurons (let it be a very simple one in the present case: ) and  are the weights connecting the th input neuron to the th output neuron. Thus the output of the system is
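A one-layer network of this form can be sketched as follows. The identity activation and the weight values are assumptions for illustration only; the actual activation and weights of System 3 are not reproduced here.

```python
import numpy as np

# One-layer feedforward net: W[i, j] connects input neuron i to output neuron j.
def neural_net(x, W, f=lambda z: z):
    # Assumed identity activation f applied to the weighted sums.
    return f(x @ W)

x = np.array([1.0, 2.0, 3.0])          # three inputs, as for System 3
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])             # purely illustrative weights
print(neural_net(x, W))  # [4. 5.]
```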

System 4. The input vector of System 4 is , where and .
This system is given by formulas such as

System 5. The input vector of System 5 is , where and .
This is given by a fuzzy logic model. Assume that two rules are given ():
Further assume that the membership functions form a Ruspini partition:
and the consequent sets are singleton sets located at elements 5 and 6 of the output universe. It is a TS fuzzy model; therefore, the transfer function (product-sum-gravity) of the model is
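Under the assumption that the input universe is [0, 1] (the interval itself is not specified above), the two-rule product-sum-gravity model with Ruspini-partition antecedents and singleton consequents at 5 and 6 can be sketched as:

```python
def ts_output(x):
    """Two-rule TS fuzzy model on the assumed universe [0, 1]:
    Ruspini-partition antecedents and singleton consequents at 5 and 6,
    evaluated by product-sum-gravity inference."""
    mu1, mu2 = 1.0 - x, x          # Ruspini partition: mu1 + mu2 == 1
    return mu1 * 5.0 + mu2 * 6.0   # weighted sum of the singleton consequents

print(ts_output(0.0), ts_output(0.5), ts_output(1.0))  # 5.0 5.5 6.0
```

Because the memberships sum to one, no explicit normalisation is needed in the defuzzification.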

System 6. The input vector of System 6 is , where and .
This is a black-box model that can provide an output for any input . (In order to follow all computational steps of the example, let us reveal the output of the black box: .)

5.2. Conditions of the TP Model Transformation

The goal of the example is to transform all four systems to TS fuzzy representations (or TP models if the resulting weighting functions cannot be represented as antecedent fuzzy sets), under the following conditions:
(i) All systems must have the same antecedent function system on the input interval of . The antecedent functions must form a Ruspini partition, namely, be of SN and NN type. In order to have a complexity-minimised representation, a further requirement is that the number of antecedent functions be minimal.
(ii) The same antecedent function system of variable  is predefined for all systems:
where “” denotes “predefined,” and
(iii) The only requirement for the weighting function system of the input  of each system is that they must be the singular functions of the HOSVD canonical form (an orthonormal system ordered by the higher-order singular values). These functions are not representable as antecedent functions of fuzzy sets, since they may take negative values as well. Obviously, they will not be the same for all systems.

5.3. Execution of the Proposed TP Model Transformation

It is worth emphasizing again that the previous methods for TP model representation cannot be applied in the present case, since the elements of the input vectors are different.

Step 1. (i) Let us define grid  for :
Thus the number of points on the discretisation grid is .
(ii) Let us discretise the systems over the rectangular grid defined by vectors , . The discretisation of System  results in , . In the case of System 3,
where the first three dimensions are assigned to the input variables and the last dimension is assigned to the output vector. The discretisation of System 4 yields
where the first two dimensions are assigned to the input variables and the last two dimensions are assigned to the output matrix. The discretisation of System 5 yields the following vector:
The discretisation of System 6 results in
where the first two dimensions are assigned to the input variables and the last dimension is assigned to the output vector. Let us discretise the predefined weighting function as well:

Step 2. (i) Dimension . Lay out tensors  in the dimension assigned to :
Create
Execute SVD on , incorporating the SN and NN conditions [4] (only nonzero singular values are kept):
The result of this step, to be used later, is .
(ii) Dimension . Let
(iii) Dimension . Lay out tensors  in the dimension assigned to :
Then execute HOSVD on each  (only the nonzero singular values are kept):
The result of this step is .
(iv) Reconstruct the core tensors:

Step 3. Let .
Then, having the discretised tensors and weighting functions of all systems, we can numerically reconstruct the weighting functions [4, 5] as:
Thus, we have achieved our goal:

6. Conclusion

The proposed TP model transformation can be executed on a set of models in which the dimensionality of the inputs may differ. It inherits all the advantages of the previous variants, including easy convex hull manipulation, complexity trade-offs, pseudo-TP model transformation, and automatic numerical execution.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments
This work was supported by the FIEK Program (Center for Cooperation between Higher Education and the Industries at the Széchenyi István University, GINOP-2.3.4-15-2016-00003).