Abstract

The dynamic behaviour of a system may be described by models of different forms: thermal (RC) networks, state-space representations, transfer functions, and ARX models. These models, which describe the same process, are used in design, simulation, optimal predictive control, parameter identification, fault detection and diagnosis, and so on. Since several forms are available, it is worth knowing which one is the most suitable by estimating how sensitive each model is when transformed into a physical model, represented by a thermal network. A procedure for studying error propagation by Monte Carlo simulation and for factor prioritization is exemplified on a simple, but representative, thermal model of a building. The analysis of the propagation of errors, and of their influence on parameter estimation, shows that the transformation from state-space representation to transfer function is more robust than the other way around. Therefore, if only one model is to be chosen, the state-space representation is preferable.

1. Introduction

Dynamic models of the thermal behaviour of buildings are of interest for model predictive control (MPC) and for the measurement of energy performance. These applications use different representations (thermal networks, state-space models, autoregressive models, etc.) of the same physical system. However, the propagation of errors through the transformations between these representations needs further clarification.

According to the literature of the last few years, MPC is a widespread strategy with promising energy-saving potential [1–3]. The algorithm minimizes energy consumption while optimizing thermal comfort by incorporating the weather forecast, the occupancy schedule, and other factors. Since MPC is a model-based strategy, its performance relies on the accuracy of the model; therefore the model must be capable of capturing the building dynamics. The information contained in the building model may be used to separate the influence on energy consumption of inputs such as weather and occupancy. This allows distinguishing the intrinsic part of the model from its disturbances. It is therefore interesting to keep the physical meaning in the model for two reasons: (1) calibration of the model on actual data by parameter identification based on in situ measurements and (2) physical interpretation of the parameters of the building, such as the time constants and the envelope thermal conductance. This strategy is attractive because nowadays we are not able to measure the energy efficiency of a building. Indeed, current methods present discrepancies from 50% to 200% between the estimated consumption and the real one [4–6]. In the design phase, simulations are used to determine whether a new or a refurbished building matches the performance specifications. If it does not, the building parameters are optimized until the requirements are met. Nevertheless, these detailed simulations use conventional data such as weather records, occupancy schedules, and set-point temperatures, which can be far from reality, as shown by the magnitude of the gaps between the simulation results and the measurements. This problem cannot be fixed in the design phase, since in situ measurements are not possible when the building is not even built. The same methodology is used to quantify the energy efficiency gained by retrofitting, which naturally raises the question of the relevance of the predicted gains and of how the return on investment can be certified. A solution could be to measure the energy consumption in a reference situation, before and after refurbishment, and to determine the savings [7]. During the baseline period, the model is calibrated on measurements. Thereafter, in the reporting period, the difference between the consumption measured by sensors and that estimated by the model represents the energy savings due to the refurbishment.

The bottleneck in these applications is the experimental identification of the model. Two model structures are widely used for control analysis and synthesis: state-space and transfer function. When the former is established from a thermal circuit, the relations with the physical parameters (i.e., the thermal resistances, $R$, and capacities, $C$) are explicit, contrary to the transfer function, whose coefficients have to undergo some transformations to recover the values of the physical parameters. Figure 1 summarises the feasible transformations and the process to follow. Both state-space representations and transfer functions are suitable for control strategies and for fault detection and diagnosis. The question is which one is better, since their parameters are identified by different methods. Therefore, we are interested in studying the propagation of error through the transformations from one form of the model to another.

According to Figure 1, we consider three model types. For ease of notation, we refer to the thermal network as RC, to the continuous state-space as SS, and to the ARX model as ARX. For instance, if a thermal network is put into state-space representation, this is an RC to SS transformation.

2. Error Propagation by Monte Carlo Simulation

The transformations in Figure 1 can be represented by a function of the form $y = f(\mathbf{x})$, which relates the input parameters, $\mathbf{x}$, of an initial representation to an output parameter, $y$, of a new representation. In practice, the input parameter vector $\mathbf{x}$ is often estimated from noisy measurements and is consequently uncertain.

How are the uncertainties of an estimated parameter propagated from one representation to another? Is it better to identify the parameters of a state-space model and then convert it into a transfer function, or to proceed the other way around? Which transformation is the most robust to uncertainties? The error propagation between model transformations is quantified by Monte Carlo simulations in which the input vector $\mathbf{x}$ is randomly drawn such that

$$\mathbf{x} \sim \mathcal{N}\left(\mathbf{x}^{\circ},\ p\,\mathbf{x}^{\circ}\right), \quad (1)$$

where $\mathcal{N}(\mu, \sigma)$ is a normal distribution with mean $\mu$ and standard deviation $\sigma$, $\mathbf{x}^{\circ}$ is the true input parameter vector, and $p$ represents the percentage of $\mathbf{x}^{\circ}$ introduced as variance.

Then, given $\mathbf{x}$, the output parameter $y = f(\mathbf{x})$ is obtained. This process is repeated $N$ times, with $N$ sufficiently large that the input parameter space is well explored and the output parameter distribution is well defined.
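As an illustration, the procedure can be sketched in a few lines of Python with NumPy; the names transform and x_true are hypothetical placeholders standing for one of the transformations of Figure 1 and for the true parameter vector of the initial representation.

import numpy as np

def monte_carlo_propagation(transform, x_true, p, n_sim=10_000, seed=0):
    """Propagate input-parameter uncertainty through a model transformation
    (sketch of eq. (1) and of the repeated evaluation described above)."""
    rng = np.random.default_rng(seed)
    x_true = np.asarray(x_true, dtype=float)
    outputs = []
    for _ in range(n_sim):
        # Draw each input around its true value with a spread of p * |true value|.
        x = rng.normal(loc=x_true, scale=p * np.abs(x_true))
        outputs.append(np.atleast_1d(transform(x)))
    return np.asarray(outputs)   # shape: (n_sim, number of output parameters)

The returned sample can then be summarized by the criteria introduced in Section 2.3.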

The transformations between different model representations are first presented, starting from a dynamic thermal network in which the parameters have a physical meaning.

2.1. Transformation between Thermal Network and State-Space Model

To illustrate the methodology presented in this section [8], the second-order thermal network in Figure 2 is considered. The heat transfers through the envelope and through ventilation are modelled by the following heat transfer rates:

(i) $q_1$: infiltration and windows conduction (W);
(ii) $q_2$: outside-air convection (W);
(iii) $q_3$: wall conduction (W);
(iv) $q_4$: wall conduction and inside-air convection (W).

The thermal capacities $C_1$ and $C_2$ represent, respectively, the energy accumulated in the envelope and the energy accumulated in the medium and the indoor air.

A temperature source and two heat rate sources are considered as inputs:

(i) $T_o$: outdoor temperature (°C);
(ii) solar radiation on the outside-wall surfaces (W);
(iii) heat flow from the HVAC system and internal gains (W).

For the thermal network in Figure 2, the temperature differences over each resistance can be written in matrix form:

$$\Delta\theta = -A\,\theta + b, \quad (2)$$

where $\Delta\theta$ is the vector of temperature drops over the thermal resistances, $\theta$ is the vector of temperatures in the nodes, $A$ is the incidence matrix of the network, and $b$ is the vector of temperature sources on the branches.

The heat transfer rates in the branches can be expressed in matrix form:

$$q = G\,\Delta\theta, \quad (3)$$

where $q$ is the vector of heat rates in the branches and $G$ is the diagonal matrix of thermal conductances.

The heat balance in each temperature node gives

$$C\,\dot{\theta} = -A^{T} G A\,\theta + A^{T} G\, b + f, \quad (4)$$

where $\dot{\theta} = \mathrm{d}\theta/\mathrm{d}t$, $C$ is the diagonal matrix of thermal capacities, and $f$ is the vector of heat rate sources connected to the temperature nodes.

The matrix $C$ partitions the set of equations (4) into nodes with thermal capacities and nodes without thermal capacities:

$$\begin{bmatrix} C_{1} & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \dot{\theta}_{1} \\ \dot{\theta}_{2} \end{bmatrix} = \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}\begin{bmatrix} \theta_{1} \\ \theta_{2} \end{bmatrix} + \begin{bmatrix} K_{b,1} \\ K_{b,2} \end{bmatrix} b + \begin{bmatrix} f_{1} \\ f_{2} \end{bmatrix}, \quad (5)$$

where the subscript 1 refers to the nodes holding a thermal capacity, the subscript 2 to the nodes without capacity, $K = -A^{T} G A$, and $K_{b} = A^{T} G$. The heat balance equation can then be transformed into a state-space model by eliminating the nodes without thermal capacity, thereby separating the differential and the algebraic equations.

The state-space model of the thermal network is then given by

$$\dot{\theta}_{1} = A_{s}\,\theta_{1} + B_{s}\,u, \quad (6)$$

where the state matrix

$$A_{s} = C_{1}^{-1}\left( K_{11} - K_{12} K_{22}^{-1} K_{21} \right) \quad (7)$$

and the input matrix $B_{s}$ (8) result from the elimination of $\theta_{2}$ in (5), and $u$ (9) is the input vector formed by the temperature sources $b$ and the heat rate sources $f$. The input matrix (8) can be reduced by keeping only the columns corresponding to nonzero elements of the input vector (9) and by adding the first and the second columns, which correspond to the same input, the outdoor temperature $T_{o}$; this yields the reduced model (10). This transformation maps six physical parameters into eight state-space parameters, which means that two parameters are dependent on the others, such as an element $a_{ij}$ of the state matrix and an element $b_{ij}$ of the input matrix, where $a_{ij}$ and $b_{ij}$ denote, respectively, the parameters of the state matrix and of the input matrix in row $i$ and column $j$. Reciprocally, two parameters which do not act directly on the solution have to be removed to balance the system; expressing the two unused parameters as functions of the others gives the expressions (11). Finally, the state-space model (10) is discretized with the forward Euler method in order to conserve the explicit relations with the physical parameters:

$$x_{k+1} = A_{d}\,x_{k} + B_{d}\,u_{k}, \qquad A_{d} = I + \Delta t\, A_{s}, \qquad B_{d} = \Delta t\, B_{s}, \quad (12)$$

where $x_{k}$ is the state vector $\theta_{1}$ at sampling instant $k$ and $\Delta t$ is the sampling time; a fixed sampling time is considered for the rest of the paper.
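A minimal Python/NumPy sketch of this assembly and discretization is given below. It assumes the incidence-matrix description of the network used above; the function names, the stacking of the inputs as u = [b; f], and the treatment of the massless nodes are illustrative choices rather than the exact formulation of (6)-(12).

import numpy as np

def rc_to_state_space(A, G, C):
    """Build a continuous state-space model from a thermal network.
    A : incidence matrix (branches x nodes)
    G : diagonal matrix of branch conductances (1/R)
    C : diagonal matrix of nodal thermal capacities (zero on massless nodes)
    Inputs are stacked as u = [b; f] (temperature sources, then heat-rate sources)."""
    K = -A.T @ G @ A                 # nodal conductance matrix
    Kb = A.T @ G                     # effect of temperature sources b on the nodes
    cap = np.diag(C) != 0            # mask of nodes holding a thermal capacity
    i1, i2 = np.where(cap)[0], np.where(~cap)[0]

    K11, K12 = K[np.ix_(i1, i1)], K[np.ix_(i1, i2)]
    K21, K22 = K[np.ix_(i2, i1)], K[np.ix_(i2, i2)]
    Kb1, Kb2 = Kb[i1, :], Kb[i2, :]
    C1 = C[np.ix_(i1, i1)]

    # Eliminate the algebraic equations of the massless nodes.
    K22inv = np.linalg.inv(K22)
    As = np.linalg.solve(C1, K11 - K12 @ K22inv @ K21)
    Bb = np.linalg.solve(C1, Kb1 - K12 @ K22inv @ Kb2)   # temperature sources
    n_nodes = K.shape[0]
    Bf = np.zeros((len(i1), n_nodes))
    Bf[:, i1] = np.linalg.solve(C1, np.eye(len(i1)))     # f on capacitive nodes
    Bf[:, i2] = np.linalg.solve(C1, -K12 @ K22inv)       # f on massless nodes
    Bs = np.hstack([Bb, Bf])
    return As, Bs

def forward_euler(As, Bs, dt):
    """Forward Euler discretization, as in eq. (12)."""
    return np.eye(As.shape[0]) + dt * As, dt * Bs

The explicit elimination keeps a one-to-one link between the entries of As, Bs and the resistances and capacities of the network, which is the property exploited in the rest of the paper.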

2.2. Transformation between State-Space Model and ARX Model

The purpose of this section is to find a relationship between the parameters of the state-space model and the parameters of the discrete transfer function. Although there is a well-known formula to pass from a state-space model to a transfer function, the reverse is not unique, since the input-output relationship is invariant under similarity transformations [9]. The state-space model (12) can be transformed into a new representation if a new state vector is chosen according to the relation

$$\tilde{x}_{k} = T\,x_{k}, \quad (13)$$

where $T$ is an invertible matrix and $x_{k}$ is the state vector of the discretized state-space model.

The new representation is linked to the original state-space model by

$$\tilde{A}_{d} = T A_{d} T^{-1}, \qquad \tilde{B}_{d} = T B_{d}, \qquad \tilde{C}_{d} = C_{d} T^{-1}, \qquad \tilde{O} = O\,T^{-1}, \quad (14)\text{–}(17)$$

where $O$ is the observability matrix computed by

$$O = \begin{bmatrix} C_{d} \\ C_{d} A_{d} \\ \vdots \\ C_{d} A_{d}^{\,n-1} \end{bmatrix}, \quad (18)$$

with $n$ the number of states.
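These relations involve only elementary matrix operations; a short Python sketch (with illustrative function names) of the observability matrix (18) and of the change of coordinates is:

import numpy as np

def observability_matrix(Ad, Cd):
    """Observability matrix O = [Cd; Cd Ad; ...; Cd Ad^(n-1)], eq. (18)."""
    n = Ad.shape[0]
    return np.vstack([Cd @ np.linalg.matrix_power(Ad, k) for k in range(n)])

def similarity_transform(Ad, Bd, Cd, T):
    """Change of state coordinates x_new = T x, eqs. (13)-(17)."""
    Tinv = np.linalg.inv(T)
    return T @ Ad @ Tinv, T @ Bd, Cd @ Tinv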

One of the easiest ways to get an input-output representation is to describe it as a linear difference equation in which the next output depends on previous observations (outputs and inputs) [10]. Since observed data are always collected by sampling, only the discrete form is considered. The system can be represented by a discrete transfer function, where the numerator and the denominator are polynomials in the shift operator $q^{-1}$; this type of model is called autoregressive with exogenous inputs (ARX).

A multi-input multi-output (MIMO) system can be described by a difference equation [11]:

$$y_{t} + A_{1} y_{t-1} + \cdots + A_{n_a} y_{t-n_a} = B_{1} u_{t-1} + \cdots + B_{n_b} u_{t-n_b}, \quad (19)$$

where $u$ is the input, $y$ is the output, $y_{t}$ is the output at time instant $t$, $y_{t-k}$ is the output delayed by $k$ samples, and $A_{1}, \ldots, A_{n_a}$ and $B_{1}, \ldots, B_{n_b}$ are the coefficient matrices.
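As an illustration of (19), a one-step-ahead evaluation of such a difference equation may be written as follows (a sketch; the coefficient lists and the ordering of the past samples, most recent last, are conventions of this example):

import numpy as np

def arx_one_step(A_coeffs, B_coeffs, y_past, u_past):
    """One-step-ahead output of the MIMO ARX difference equation (19).

    A_coeffs : list of matrices [A1, ..., Ana]
    B_coeffs : list of matrices [B1, ..., Bnb]
    y_past, u_past : past outputs and inputs, most recent sample last
    """
    y_next = np.zeros_like(np.asarray(y_past[-1], dtype=float))
    for k, Ak in enumerate(A_coeffs, start=1):
        y_next -= Ak @ y_past[-k]      # autoregressive part
    for k, Bk in enumerate(B_coeffs, start=1):
        y_next += Bk @ u_past[-k]      # exogenous part
    return y_next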

By using the properties of the shift operator $q^{-1}$, (19) is rewritten as

$$\left(I + A_{1} q^{-1} + \cdots + A_{n_a} q^{-n_a}\right) y_{t} = \left(B_{1} q^{-1} + \cdots + B_{n_b} q^{-n_b}\right) u_{t}. \quad (20)$$

By defining

$$A(q) = I + A_{1} q^{-1} + \cdots + A_{n_a} q^{-n_a}, \qquad B(q) = B_{1} q^{-1} + \cdots + B_{n_b} q^{-n_b}, \quad (21)$$

the difference equation (20) is transformed into the ARX model

$$A(q)\, y_{t} = B(q)\, u_{t}. \quad (22)$$

One solution to convert a state-space model into an ARX model, and vice versa, could be to obtain the transfer function from the continuous state-space and then discretize it, or to discretize the state-space before the transformation, which gives the structure to impose on the ARX model. Relations between both transfer functions can then be found.

This technique presents some limitations. First, the transfer function must be found to know the structure of the ARX model, and the number of ARX parameters might not be equal to the number of state-space parameters. This is the case for system (10), where the state-space model has eight parameters and the ARX model has seven. An internal relation in the state-space model then has to be found to balance the system before solving it; this method is neither elegant nor unique, as mentioned before. In order to keep the physical meaning of the state-space model, the ARX model must instead be transformed into the same coordinate system as the state-space model. This is possible if the state-space model is transformed into the observability canonical form, in which the ARX model coefficients appear explicitly; the observability canonical form has the structure (23), with the output matrix given by (24). The canonical state-space (23) can be projected onto the physical state-space coordinates by using a similarity transformation. The observability matrices of the physical state-space and of the canonical state-space model are linked by relation (17). Furthermore, the observability matrix of the canonical state-space is the identity matrix; therefore the similarity matrix reduces to the observability matrix of the physical state-space,

$$T = O. \quad (25)$$

By using the knowledge of the physical state-space, a transformation which projects a realization onto a given coordinate system, and hence keeps the physical meaning, can thus be found. Moreover, the transformation works for MIMO systems and remains straightforward even if the order of the model increases, because only matrix operations are involved. In the next section, the error propagation through model conversion is studied in order to test the robustness of the method.
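The projection described above can be sketched as follows, assuming the observability matrix of the physical discretized state-space is available (function and variable names are illustrative):

import numpy as np

def canonical_to_physical(A_can, B_can, C_can, O_phys):
    """Project a canonical (ARX-based) realization back onto the physical
    state coordinates.  With the convention x_can = T x_phys and an identity
    observability matrix for the canonical form, T reduces to O_phys, eq. (25)."""
    T = O_phys
    Tinv = np.linalg.inv(T)
    return Tinv @ A_can @ T, Tinv @ B_can, C_can @ T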

2.3. Simulation for Error Propagation

The input parameters are normally distributed according to (1); a larger percentage $p$ is used for the transformations from RC towards ARX than in the other direction. The following results are given for $N$ Monte Carlo simulations.

Three criteria are used to characterize the parameter transformations: the expected value of the output parameter distribution, which gives the location of the highest occurrence; the mean absolute error (MAE) between the true output parameter value $y^{\circ}$ and the simulated output parameters $y_{i}$,

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left| y_{i} - y^{\circ} \right|; \quad (26)$$

and the normality of the output distribution. The Kolmogorov-Smirnov test [12] is used to check whether the output parameters are normally distributed. The test finds the maximum distance between two curves:

$$D = \max_{x}\left| F_{N}(x) - F(x) \right|, \quad (27)$$

where $D$ is the maximum absolute difference between $F_{N}(x)$, the empirical cumulative distribution function calculated from the $N$ simulated output parameters, and $F(x)$, the cumulative distribution function (CDF) of the hypothetical distribution. $D$ is then compared to a critical value determined by the significance level $\alpha$, the probability of rejecting the hypothesis when it is true. A value of $D$ above the critical value indicates the rejection of the hypothesis at significance level $\alpha$, whereas a value below it indicates a failure to reject the hypothesis, which means that the data can be considered to belong to the hypothetical distribution. The Kolmogorov-Smirnov test is illustrated in Figure 3 for checking whether a parameter is normally distributed. The red line is the empirical cumulative distribution function of the data and the black dotted lines represent the critical level boundaries. If the empirical distribution function crosses a critical level boundary, the test rejects the null hypothesis; that is, the data set is not normally distributed. A modified significance level has been used to widen the critical level boundaries, only for the illustration.
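These criteria can be computed with a few lines of Python; the sketch below uses scipy.stats.kstest against a normal distribution fitted to the sample, which is one possible way of carrying out the normality check described above (the significance level alpha = 0.05 is an illustrative choice):

import numpy as np
from scipy import stats

def evaluate_transformation(samples, y_true, alpha=0.05):
    """Summarize a Monte Carlo output-parameter sample with the three criteria
    of Section 2.3: expected value, mean absolute error, and normality."""
    samples = np.asarray(samples, dtype=float)
    expected = samples.mean()
    mae = np.mean(np.abs(samples - y_true))
    # Compare the empirical CDF with a normal CDF fitted to the sample.
    d_stat, p_value = stats.kstest(samples, "norm",
                                   args=(samples.mean(), samples.std(ddof=1)))
    is_normal = p_value > alpha      # failure to reject normality
    return expected, mae, d_stat, is_normal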

The simulation results for the six transformations are given in Figure 4 and Table 1 for the RC to SS transformation, in Figure 5 and Table 2 for the SS to ARX transformation, in Figure 6 and Table 3 for the RC to ARX transformation, in Figure 7 and Table 4 for the ARX to SS transformation, in Figure 8 and Table 5 for the SS to RC transformation, and in Figure 9 and Table 6 for the ARX to RC transformation. If a histogram is not centred on its horizontal axis, it means that outliers are present but are not visible because of their low occurrence.

The three transformations from RC towards ARX have expected values very close to the true parameter values, and all the mean absolute errors are of the order of the variance introduced in the input parameters. The majority of the distributions are slightly skewed but close to the normal distribution. The worst case is one parameter of the RC to ARX transformation (Figure 6, Table 3), whose mean absolute error, although larger, is still a satisfactory result. Indeed, looking at its analytical expression, this parameter depends on five input parameters, each carrying the introduced variance. Furthermore, it is more difficult to obtain accurate results for parameters with very small true values.

In the other direction, from ARX to RC, the transformations are more sensitive to parameter deviations; this is why the variance in (1) has been scaled with a smaller percentage than previously. The histograms of three parameters in Figure 8 are zoomed in to show most of the distribution; otherwise, outliers far from the true parameter values would give an improper graphical scale. The same situation occurs for four parameters in Figure 9.

From ARX to SS, two parameters are not present in the results (Figure 7 and Table 4) because they are not functions of the ARX parameters; they are known directly from the similarity matrix $T$. In this transformation, all the parameters are normally distributed, but the mean absolute errors of three of them are far greater than their respective true values. These transformations depend on the poles, which are several orders of magnitude higher than the output parameters; consequently, the uncertainties introduced in the input parameters have a strong impact on the output parameters.

The transformations from SS to RC involve operations on values that are relatively small and of different orders of magnitude, which explains the skewed distributions of three parameters in Figure 8 and the presence of outliers. Moreover, two of the parameters have complicated transformations involving six input parameters.

The problem gets even worse when both transformations are combined in the ARX to RC transformation. The expected values of four parameters are far from their respective true values, and the normal distribution shapes of two of them are completely lost and therefore cannot be shown properly by histograms. Only two parameters out of six are robust against uncertainties, and some resistance and capacity values are negative, which has no physical meaning. These results show that the transformation from ARX to RC is not suitable, even when only a small percentage of the parameter values is introduced as variance.

In this section, the relations that are most sensitive to parameter deviations have been located. However, the functions relating input and output parameters are sometimes complicated, and it is hard to identify which input parameters are most responsible for the output variance.

3. Factor Prioritization

As in Section 2, the transformations are represented by a function of the form $y = f(\mathbf{x})$, which relates the uncertain input parameters, $\mathbf{x}$, to the output parameter, $y$. This section introduces a variance-based method which ranks the input parameters, $x_{i}$, according to their contribution to the output parameter variance [13]. This information allows fixing parameters with negligible influence and, on the other hand, focusing on the most sensitive parameters to reduce the output variance. Indeed, constraints can be added to these parameters in the identification process, to increase the robustness of the transformations.

The variance of the output, $V(y)$, can be decomposed into summands:

$$V(y) = \sum_{i} V_{i} + \sum_{i}\sum_{j>i} V_{ij} + \cdots + V_{12\ldots k}, \quad (28)$$

where $x_{i}$ is the $i$th input, $\mathbf{x}_{\sim i}$ denotes the vector of all inputs but $x_{i}$, $E_{\mathbf{x}_{\sim i}}(y \mid x_{i})$ is the expectation of $y$ taken over all possible values of $\mathbf{x}_{\sim i}$ while keeping $x_{i}$ fixed, and $V_{x_{i}}$ is the variance over all possible values of $x_{i}$; the first-order terms are $V_{i} = V_{x_{i}}\!\left(E_{\mathbf{x}_{\sim i}}(y \mid x_{i})\right)$ and the higher-order terms account for the interactions between inputs.

Dividing both sides of (28) by $V(y)$ yields

$$\sum_{i} S_{i} + \sum_{i}\sum_{j>i} S_{ij} + \cdots + S_{12\ldots k} = 1, \quad (29)$$

where $S_{i} = V_{i}/V(y)$, $S_{ij} = V_{ij}/V(y)$, and so on, are the sensitivity indices, which express the share of the variance of $y$ that is due to a given input ($S_{i}$) or to an input combination ($S_{ij}$, ...).

The number of sensitivity indices of a model with $k$ inputs grows as $2^{k} - 1$, since all the input interactions are taken into account. A solution to overcome this cumbersome approach is to compute only the first-order ($S_{i}$) and total ($S_{Ti}$) sensitivity indices. These two indices are sufficient to describe synthetically the sensitivity pattern of the model:

$$S_{i} = \frac{V_{x_{i}}\!\left(E_{\mathbf{x}_{\sim i}}(y \mid x_{i})\right)}{V(y)}, \qquad S_{Ti} = \frac{E_{\mathbf{x}_{\sim i}}\!\left(V_{x_{i}}(y \mid \mathbf{x}_{\sim i})\right)}{V(y)}, \quad (30)$$

where $V_{x_{i}}\!\left(E_{\mathbf{x}_{\sim i}}(y \mid x_{i})\right)$ is the expected reduction in variance that would be obtained if $x_{i}$ could be fixed and $E_{\mathbf{x}_{\sim i}}\!\left(V_{x_{i}}(y \mid \mathbf{x}_{\sim i})\right)$ is the expected variance that would be left if all factors but $x_{i}$ could be fixed. $S_{Ti}$ represents $S_{i}$ plus all the interactions linked to $x_{i}$.

The indices $S_{i}$ and $S_{Ti}$ can be computed efficiently by the following sampling procedure [14]:

(a) Generate two independent sample matrices $A$ and $B$ of size $N \times k$, with $k$ being the number of inputs and $N$ the number of simulations. Each sample matrix is defined in the $k$-dimensional unit hypercube.
(b) Generate $k$ further matrices $A_{B}^{(i)}$, for $i = 1, \ldots, k$, such that all columns are from $A$ except the $i$th column, which is from $B$. This resampling strategy is called radial design.
(c) Compute one model evaluation for each row of the matrices $A$, $B$, and $A_{B}^{(i)}$, which gives, respectively, $f(A)$, $f(B)$, and $f(A_{B}^{(i)})$.
(d) Compute the first-order (31) and total (32) sensitivity indices using

$$S_{i} \approx \frac{1}{V(y)}\,\frac{1}{N}\sum_{j=1}^{N} f(B)_{j}\left( f\!\left(A_{B}^{(i)}\right)_{j} - f(A)_{j} \right), \quad (31)$$

$$S_{Ti} \approx \frac{1}{V(y)}\,\frac{1}{2N}\sum_{j=1}^{N}\left( f(A)_{j} - f\!\left(A_{B}^{(i)}\right)_{j} \right)^{2}. \quad (32)$$

The factor prioritization is now applied to the transformations presented in Section 2 in order to better explain the results of Section 2.3.
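A compact Python sketch of steps (a)-(d) is given below. The uniform sampling of the inputs between lower and upper bounds and the exact form of the estimators are assumptions consistent with (31)-(32), not necessarily identical to the implementation used for the reported results.

import numpy as np

def sobol_indices(model, x_lo, x_hi, n_sim=10_000, seed=0):
    """First-order (S1) and total (ST) sensitivity indices, steps (a)-(d)."""
    rng = np.random.default_rng(seed)
    x_lo, x_hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    k = x_lo.size
    # (a) two independent N x k sample matrices in the unit hypercube,
    #     rescaled here to the assumed parameter ranges [x_lo, x_hi]
    A = x_lo + (x_hi - x_lo) * rng.random((n_sim, k))
    B = x_lo + (x_hi - x_lo) * rng.random((n_sim, k))
    yA = np.apply_along_axis(model, 1, A)
    yB = np.apply_along_axis(model, 1, B)
    V = np.var(np.concatenate([yA, yB]), ddof=1)     # total output variance
    S1, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # (b) radial resampling
        yABi = np.apply_along_axis(model, 1, ABi)    # (c) model evaluations
        S1[i] = np.mean(yB * (yABi - yA)) / V        # (d) estimator (31)
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / V  # (d) estimator (32)
    return S1, ST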

3.1. Factor Prioritization Applied to Model Transformation

All the sensitivity indices were estimated with a number of simulations large enough to converge to an accuracy of three decimals. If the model transformation is additive, the sums of the first-order and of the total indices are both equal to one. If the sum of the first-order indices is less than one, there are interactions between the inputs and, consequently, the sum of the total sensitivity indices is greater than one. The gap between the actual sum and one is an indicator of the inputs' interactions. In the following tables, the parameters in the left column (outputs) are functions of the parameters in the header (inputs); each output has two lines: the first one gives the first-order indices and the second one, in bold, the total-order indices.

The factor prioritization results for the six transformations are given in Table 7 for the RC to SS transformation, in Table 8 for the SS to ARX transformation, in Table 9 for the RC to ARX transformation, in Table 11 for the ARX to SS transformation, in Table 10 for the SS to RC transformation, and in Table 12 for the ARX to RC transformation.

It has been highlighted in Section 2.3 that the transformation from ARX to RC is highly sensitive to parameter deviations but, thanks to the factor prioritization, the influence of each parameter on the output variance is known.

It has been shown in Section 2.3 that the transformation from ARX to SS is very sensitive: good results are obtained only for two parameters, because they depend, respectively, on two and three input parameters (Table 12). The parameters with the worst results are, according to Table 12, the only ones that are functions of the poles. Even if the impact of the poles is very small, the difference of orders of magnitude between the parameters is unfavourable. Two of the parameters are functions of two and three input parameters, respectively (Table 12); however, almost all of their output variance is due to a single ARX parameter. If a priori information is available for these parameters, boundaries on the ARX parameters can be determined such that the physical meaning is not violated. Another example is the parameter of the RC to ARX transformation identified as the worst case in Section 2.3. Both thermal capacities in this transformation, $C_1$ and $C_2$, account for a large share of the output variance; constraints can be added in the identification of the state-space model to limit their variations and therefore make the transformation into the ARX model more robust.

4. Conclusion

The possibility of switching from a thermal network to a state-space representation and to an autoregressive model is appealing, since the advantages of each form can be combined. The transfer function is practical and compact, whereas the state-space representation carries physical information related to the thermal network. Both forms of the model can be used for control strategies and diagnosis, but which one is more robust to error transmission? To answer this question, this study proceeded in two steps. First, Monte Carlo simulations were used to characterize the transformations and, then, sensitivity analysis was used to rank the influence of the parameters on the output variance. It appears that the transformation from the ARX model to the RC parameters is the least robust to uncertainties. We conclude that identifying the parameters on the state-space representation is the better choice, since the results indicate that the transformations starting from this form are more robust against parameter deviations.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was financially supported by BPI France (FUI Project COMETE).