Abstract

A new multipolynomial approximation algorithm (the MPA algorithm) is proposed for estimating the state vector of virtually any dynamical (evolutionary) system. The input of the algorithm consists of discrete-time observations. Adjusting the algorithm requires the generation of arrays of random sequences of state vectors and observation scalars corresponding to a given sequence of time instants. The distributions of the random factors (the vectors of the initial states and of random perturbations of the system, the scalars of random observational errors) can be arbitrary but have to be prescribed beforehand. The output of the algorithm is a vector polynomial series in products of nonnegative integer powers of the results of real observations or of some functions of these results; the sum of the powers does not exceed a given integer $m$. The series is a vector polynomial approximation of the conditional expectation of the vector under evaluation (or of given functions of the components of that vector). The vector coefficients of the polynomial series are constructed in such a way that the approximation errors uniformly tend to zero as the integer $m$ increases. These coefficients are found by the Monte-Carlo method and a process of recurrent calculations that does not require matrix inversion.

1. Introduction

Consider a dynamical system whose mathematical model has the form
$$x_{k+1} = f(k, x_k, w_k), \quad k = 0, 1, \dots, N-1. \tag{1.1}$$
Here, the integers $k$ refer to time instants $t_k$; $x_k$ is the state vector of the dynamical system at the instant $t_k$; $w_k$ is the vector of random perturbations; $f$ is a given function (nonlinearly depending on its arguments, in general); $x_0, w_0, \dots, w_{N-1}$ are random vectors with given distributions.

Let the results of observations at the instants $t_k$ be described by a scalar sequence $y_1, \dots, y_N$ generated by the mathematical model
$$y_k = h(k, x_k, v_k), \tag{1.2}$$
where $h$ is a given function of its arguments, and the distribution is given for the sequence of random variables $v_1, \dots, v_N$.

In what follows, it is assumed that $y_1, \dots, y_N$ are components of the observation vector $Y_N$.
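To make the setting concrete, the following minimal Python sketch shows what the models (1.1) and (1.2) look like in code; the particular functions f and h, the dimensions, and the noise levels are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(k, x, w):
    # Illustrative state transition (1.1): a mildly nonlinear map plus perturbation.
    return x + 0.1 * np.sin(x) + w

def h(k, x, v):
    # Illustrative scalar observation model (1.2).
    return float(x[0] ** 2 + v)

def simulate(N, n=2, sigma_w=0.01, sigma_v=0.05):
    """Generate one realization of states x_1..x_N and observations y_1..y_N."""
    x = rng.uniform(-1.0, 1.0, size=n)   # random initial state with a prescribed prior
    xs, ys = [], []
    for k in range(N):
        x = f(k, x, rng.normal(0.0, sigma_w, size=n))
        xs.append(x.copy())
        ys.append(h(k, x, rng.normal(0.0, sigma_v)))
    return np.array(xs), np.array(ys)
```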

A parameter vector $q$ is defined as a vector whose components coincide with some components (or given functions of these) of the state vectors of the dynamical system at given time instants. In the present paper, we describe a new algorithm for estimating the vector $q$.

The problem of estimating the vector $q$ is called [1, 2] the problem of smoothing if the estimated instant precedes the last observation instant; the problem of filtration if it coincides with the last observation instant; and the problem of extrapolation if it lies after the last observation instant.

It is well known that the mean-square optimal estimate of the random vector $q$ on the basis of observations of the vector $Y_N$ is the vector of conditional expectation $E[q \mid Y_N]$. Therefore, we consider the problem of creating the MPA algorithm for the construction of approximations converging to the vector $E[q \mid Y_N]$.

There is only one requirement with regard to computer power and the expressions (1.1) and (1.2): for the given distributions of the random variables, it must be possible to generate a set consisting of a considerable number (several thousand) of sequences of random state vectors and observation results satisfying (1.1) and (1.2).
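In code, this requirement amounts to nothing more than being able to call the simulator repeatedly; a sketch, reusing the hypothetical simulate function above:

```python
def generate_ensemble(L, N):
    """Monte-Carlo set of L independent realizations of (states, observations),
    later used to adjust (train) the MPA algorithm."""
    states, obs = [], []
    for _ in range(L):
        xs, ys = simulate(N)
        states.append(xs)
        obs.append(ys)
    return np.array(states), np.array(obs)

states, obs = generate_ensemble(L=5000, N=10)   # several thousand sequences
```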

Elements of the observation sequence are sent to the input of the algorithm at the instants $t_1, \dots, t_N$.

Let us briefly consider some known algorithms that give an approximate solution to the above estimation problems.

(1) The algorithm of the nonlinear least-squares method should be regarded as the most general method of solution. This algorithm is mentioned in a huge number of publications such as, for instance, [3]. In the smoothing problem, the algorithm determines the estimate vector that approximately minimizes a heuristic quality functional under the constraints (1.1). The estimation quality functional is the sum of squared residuals
$$J(\hat{x}_0) = \sum_{k=1}^{N} \bigl(y_k - h(k, x_k)\bigr)^2. \tag{1.3}$$
Here, the vectors $x_k$ are obtained by consecutive calculations of state vectors on the basis of (1.1) with the initial data $\hat{x}_0$. For the numerical solution of this problem, one utilizes numerous versions of the gradient method or the Newton method. In order to apply the Newton method, one has to solve the system of algebraic equations obtained by equating to zero the partial derivatives of the right-hand side of (1.3) with respect to the components of the vector $\hat{x}_0$.
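A minimal sketch of this approach, assuming the illustrative model functions f, h, and simulate from Section 1 and using a standard gradient-based solver; the choice of scipy's least_squares and of a zero initial guess are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x0, ys):
    """Residuals y_k - h(k, x_k, 0), with x_k propagated from the trial initial
    state x0 by (1.1); perturbations are set to zero inside the optimizer."""
    x = np.asarray(x0, dtype=float)
    res = []
    for k, y in enumerate(ys):
        x = f(k, x, np.zeros_like(x))
        res.append(y - h(k, x, 0.0))
    return np.array(res)

_, ys = simulate(N=10)
sol = least_squares(residuals, x0=np.zeros(2), args=(ys,))  # local gradient search
x0_hat = sol.x   # estimate of the initial state; quality depends on the first guess
```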

The basic drawbacks of the nonlinear least-squares method are the following:

(i) the quality functional in the form of the sum of squared residuals is theoretically adequate as an estimate quality criterion only in the unrealistic case of the residuals being independent random variables with Gaussian distribution;
(ii) the versions of the gradient method and the Newton method are applicable only if the functions in the mathematical models (1.1) and (1.2) are differentiable; these methods require the existence of a good first approximation and the construction of a complicated iterative computation process if the functional happens to have a set of local minima;
(iii) an idea of the statistical characteristics of the estimation errors can be obtained only by means of the Monte-Carlo method (there are no explicit formulas for the estimate error variance).

(2) The extended Kalman filter algorithm (EKF algorithm) yields a heuristic solution to the filtration problem (there are many publications dedicated to the exposition of the EKF algorithm; it suffices to mention [4]). The EKF algorithm is based on a sequence of linearizations of the nonlinear functions involved in the mathematical model of a dynamical system, the linearizations being constructed in a neighborhood of a sequence of estimate vectors.

The recurrent scheme of the EKF algorithm has a two-step structure.

Step 1. Suppose that after the instant $t_k$, approximations have been found for the first and the second statistical moments of the state vector at that instant (the estimate vector for the state vector coincides with the first statistical moment). On the time interval from $t_k$ to $t_{k+1}$ (prior to the observation results at the instant $t_{k+1}$), a prediction is made whose results are approximations for the first and the second statistical moments of the state vector at the instant $t_{k+1}$. The vector and the matrix of the prediction are found by the calculation of a matrix formed by partial derivatives of the state vector (the Jacobian matrix). Therefore, realization of the prediction requires the differentiability of the right-hand side of (1.1) and the admissibility of its linearization with respect to the components of the state vector increments arising between $t_k$ and $t_{k+1}$.

Step 2. After actual observations at the instant $t_{k+1}$ and linearization of the right-hand side of (1.2), a mean-square optimal linear correction of the first statistical moment of the state vector at the instant $t_{k+1}$ is implemented, as well as the corresponding correction of the second statistical moment. This step results in approximations for the first and the second statistical moments of the state vector components at the instant $t_{k+1}$.
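The two-step cycle can be sketched as follows for a scalar observation; f_det and h_det denote the deterministic parts of (1.1) and (1.2), and F, H their Jacobians, all supplied by the user (the names are illustrative assumptions):

```python
import numpy as np

def ekf_step(x_hat, P, y, f_det, h_det, F, H, Q, R):
    """One predict/correct cycle of the extended Kalman filter."""
    # Step 1: prediction of the first two statistical moments.
    x_pred = f_det(x_hat)
    Fk = F(x_hat)
    P_pred = Fk @ P @ Fk.T + Q
    # Step 2: linearized mean-square optimal correction by the new observation y.
    Hk = H(x_pred)                          # 1 x n Jacobian row (scalar observation)
    S = float(Hk @ P_pred @ Hk.T + R)       # innovation variance
    K = (P_pred @ Hk.T) / S                 # n x 1 Kalman gain
    x_new = x_pred + (K * (y - h_det(x_pred))).ravel()
    P_new = P_pred - K @ Hk @ P_pred
    return x_new, P_new
```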

The two-step scheme of the recurrent EKF algorithm is quite credible and gives satisfactory results in many practically important cases.

The basic drawbacks of the EKF algorithm are similar to those of the nonlinear least-squares method:

(i) the estimate quality functional is defined and coincides with the estimation error variance vector only if the functions in the right-hand sides of (1.1) and (1.2) depend linearly on their arguments (in this situation, the EKF turns into the standard discrete Kalman filter);
(ii) the EKF algorithm is applicable only if the functions involved in the mathematical models (1.1) and (1.2) are differentiable;
(iii) if the functions in (1.1) and (1.2) are essentially nonlinear, it is necessary to have a fairly good first approximation in order to ensure a convergent calculation process (the notion of a "fairly good approximation" is heuristic and is determined on the basis of numerical experiments);
(iv) there are no explicit formulas for the estimate error variance.

(3) In the last ten years, a significant number of studies have been published in which the basic solution of the problem is represented via the conditional expectation of the parameter vector of the mathematical model of a nonlinear dynamical system [1, 5-10]. By repeated application of Bayes' formula to the vectors from (1.1) and (1.2) and by using numerical quadratures, recurrent equations are easily derived for the probability density function (pdf) of the state vectors of the dynamical system. The actual solution of these recurrent equations is, however, unfeasible because of the unwieldy dimensions of the integrals involved [10]. Therefore, several alternative strategies have been developed. One set of alternatives consists of suboptimal filtering methods, such as those based on linearization or transformations; the other consists of methods that employ Monte-Carlo simulation strategies (e.g., estimation using the sequential importance sampling particle filter) to approximately evaluate the multidimensional integrals in a recursive manner.

This direction is promising but cumbersome and ill-suited to the solution of practical applied (rather than model!) problems of nonlinear identification.

2. Schematic Diagram of MPA Algorithm

Let us describe the basic structure of the MPA algorithm [11-14]. Denote by $q$ the vector of dimension $r$ to be estimated upon fixing the observation vector $Y_N$.

The MPA algorithm is a new recurrent algorithm that asymptotically and accurately solves the nonlinear problem of constructing the conditional expectation vectors of the parameter vector of the mathematical model of a nonlinear dynamical system, for arbitrary random errors and perturbations with given distributions. The MPA algorithm thus approximately solves the problem of optimal mean-square estimation.

The MPA algorithm differs in principle from all the above-mentioned algorithms since:

(i) the MPA algorithm performs no linearizations, does not attempt to find the posterior probability density function of the state vectors of the dynamical system, and never uses Bayes' formula, although it is essentially Bayesian: it uses a priori information about the prior distributions. For a given observation vector, the MPA algorithm builds, in one pass, approximations to the posterior expectation vectors of the estimated parameter vector. The approximation is determined in the form of segments of a vector power series in products of nonnegative integer powers of the components of the observation vector. For every term of the series, the sum of the powers is bounded above by a given integer $m$. The approximations converge uniformly to the posterior expectation vectors of the estimated parameter vector as $m$ grows;
(ii) the vector coefficients of the power series are defined by means of an adjustment of the MPA algorithm. The adjustment consists in determining the first and second statistical moments of a basic random vector whose components are the estimated parameters and the mentioned powers of components of the observation vector. This determination is realized by the Monte-Carlo method on the basis of (1.1) and (1.2). However, after these vector coefficients have been calculated and stored in computer memory, the estimate vectors are determined by simple calculations for any new observations;
(iii) there is a sequence of vectors, each obtained by appending one more component of the observation vector. After adjustment for a given $m$, the MPA algorithm recurrently determines approximations to the conditional expectations of these vectors and the covariance matrices of the errors of these approximations. The evaluation formulas are simple and require neither matrix inversion nor the evaluation of quadratures.

The new MPA algorithm proposed in this paper for estimating a parameter vector (and, in particular, the state vector) of a nonlinear dynamical system is, for the most part, free of the drawbacks mentioned above.

Step 1. Suppose that $m$ is a given positive integer and that the set of multi-indices $(i_1, \dots, i_N)$ consists of all nonnegative integer solutions of the inequality $1 \le i_1 + \dots + i_N \le m$; denote their number by $M$. The value of $M$ is given by a recurrent formula proved by induction (it equals $\binom{N+m}{m} - 1$). We obtain the vector of dimension $M$ whose components are all possible products of the form $y_1^{i_1} \cdots y_N^{i_N}$ that represent powers of the measured values.
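The construction of the monomial vector and the count $M$ can be sketched directly; the closed form $\binom{N+m}{m} - 1$ (constant term excluded) is a standard combinatorial identity, used below as a consistency check:

```python
import numpy as np
from itertools import combinations_with_replacement
from math import comb

def monomial_vector(y, m):
    """All products y_1^{i_1}...y_N^{i_N} with 1 <= i_1+...+i_N <= m,
    listed degree by degree (the constant term is excluded)."""
    y = np.asarray(y, dtype=float)
    terms = []
    for d in range(1, m + 1):
        for idx in combinations_with_replacement(range(len(y)), d):
            terms.append(np.prod(y[list(idx)]))
    return np.array(terms)

N, m = 12, 2
M = len(monomial_vector(np.ones(N), m))
assert M == comb(N + m, m) - 1   # 90 monomials for N = 12, m = 2
```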

Next, we define a basic vector of dimension $r + M$ whose components are the estimated parameters and the products just described.

Step 2. We use a known statistical generator of random vectors to solve repeatedly the Cauchy problem for (1.1) with given initial conditions, a control law, and various realizations of the random vectors. As a result, the computer memory will contain a set of realizations of random basic vectors sufficient for the calculation of the statistical characteristics of the basic vector. We apply the Monte-Carlo method to find the prior first and second statistical moments of the basic vector, that is, its mathematical expectation and its covariance matrix.
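A sketch of this adjustment step, reusing the hypothetical generate_ensemble and monomial_vector functions above; taking the estimated parameters to be the final state of each realization is an illustrative assumption:

```python
def basic_vector(q, y, m):
    """Basic vector: estimated parameters q followed by the monomials of y."""
    return np.concatenate([np.asarray(q, dtype=float), monomial_vector(y, m)])

states, obs = generate_ensemble(L=5000, N=10)
G = np.array([basic_vector(xs[-1], ys, m=2) for xs, ys in zip(states, obs)])

mean = G.mean(axis=0)            # prior first statistical moments
cov = np.cov(G, rowvar=False)    # prior second (centered) statistical moments
```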

Implementation of Step 2 is a learning process for the algorithm, adjusting it to solve the particular problem described by (1.1) and (1.2).

Step 3. For given $N$ and $m$ and a fixed vector $Y_N$, we assign the vector $\hat q_m(Y_N)$ to be the solution to the estimation problem. This vector gives an approximate estimate of the vector $q$ that is optimal in the root-mean-square sense on the set of linear combinations of components of the monomial vector. Then the vector $\hat q_m(Y_N)$ is an element of this set and is an approximate mean-square optimal estimate for the vector of the conditional expectation $E[q \mid Y_N]$.
The vector $\hat q_m(Y_N)$ can be represented in the following form:
$$\hat q_m(Y_N) = c_0 + \sum_{1 \le i_1 + \dots + i_N \le m} c_{i_1 \dots i_N}\, y_1^{i_1} \cdots y_N^{i_N}, \tag{2.3}$$
where the vectors $c_{i_1 \dots i_N}$ are weight coefficients.

The a priori mean vector and covariance matrix are the initial conditions for the process of recurrent calculations that realizes the principle of observation decomposition [6] and consists of successive steps. Once the final step is performed, we obtain the vector weight coefficients for (2.3). Moreover, we determine the covariance matrix of the estimation errors for the vector of conditional mathematical expectation estimated by the algorithm. Calculating the elements of this matrix, we have a method of preliminary (prior to the actual flight) analysis of the observability of the identified parameters for the given control law, structure of measurements, and their expected random errors. The recurrent calculations do not require matrix inversion and indicate the situations when the next component of the basic vector is close to a linear combination of its previous components. To implement the recursion, we process the components of the basic vector one after another. However, the adjustment of the algorithm performed by applying the Monte-Carlo method to find the a priori mean vector and covariance matrix takes into account a priori ideas on the stochastic structure of the whole set of possible observation vectors that can appear in any realizations of the random vectors allowed by the a priori conditions. This adjustment is the price we have to pay if we want the MPA algorithm to solve nonlinear identification problems efficiently. This is what makes the MPA algorithm fundamentally different from, for instance, the standard Kalman filter designed to solve linear identification problems only, or from the multiple variations of algorithms resulting from attempts to extend the Kalman filter to nonlinear filtration problems.

Using the multidimensional analog of the Weierstrass theorem (a corollary of the Stone theorem [15]), we prove that with the increase of the integer $m$ the estimate error tends to zero uniformly in some domain.

The basic structure of the MPA algorithm described above allows us to use a strict quality criterion for the estimates obtained, since on each step the algorithm tends to ensure the minimal mean-square error of the estimate for the conditional expectation vector with a given volume of observations. Its closeness to the minimum increases with the increase of the integer $m$ and with the increase of the number of realizations used in the Monte-Carlo method.

If the number of terms of the approximating polynomial series is not small, a large volume of calculations has to be performed (after the initial choice of the integers $N$ and $m$) for the determination of the vector coefficients of (2.3).

3. Mean-Square Optimal Estimate for the Conditional Expectation Vector

Consider the principal algorithm (PA) for solving the problem of the mean-square optimal estimation of the vector $q$. It is known that the conditional expectation $E[q \mid Y_N]$ is the mean-square optimal estimate for the vector $q$ after the vector $Y_N$ has been fixed. Therefore, it makes sense that the PA should estimate the conditional expectation vector.

We are going to construct a PA that gives a polynomial approximation of the vector $E[q \mid Y_N]$. To that end, we obtain an estimate for the vector $q$ which is linear with respect to the components of the monomial vector $Y$ (the vector of products of powers of the observations defined in Section 2) and is mean-square optimal.

In what follows, the vector of that estimate is denoted by $\hat q$.

An explicit expression for the estimate vector is obtained after the calculation of the mean vector and the covariance matrix of the basic vector, which coincide with its first and second (centered) statistical moments. This vector and this matrix can be divided into blocks whose structure can be represented as follows:
$$E\begin{pmatrix} q \\ Y \end{pmatrix} = \begin{pmatrix} m_q \\ m_Y \end{pmatrix}, \qquad \operatorname{cov}\begin{pmatrix} q \\ Y \end{pmatrix} = \begin{pmatrix} P_{qq} & P_{qY} \\ P_{qY}^{\top} & P_{YY} \end{pmatrix}. \tag{3.1}$$
The right-hand sides of the above blocks coincide with the first and the second (centered) statistical moments determined by the Monte-Carlo method. On the other hand, the left-hand sides of the same blocks coincide with the first and the second (centered) statistical moments of the components of the conditional expectation vector. Therefore, the said statistical moments can be found experimentally on the basis of the mathematical models (1.1) and (1.2) also for the conditional expectation vectors. This obvious statement is crucial for the practical numerical procedure of estimating the conditional expectation vector $E[q \mid Y_N]$.

Let
$$\hat q = m_q + K (Y - m_Y), \tag{3.2}$$
where $K$ is an $r \times M$ matrix satisfying the equation
$$K P_{YY} = P_{qY}. \tag{3.3}$$
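In code, the estimate (3.2) follows directly from the block moments; the sketch below solves (3.3) by least squares rather than by explicit inversion, which also covers the case of a singular or ill-conditioned $P_{YY}$ (function and variable names are assumptions):

```python
import numpy as np

def linear_mmse(mean, cov, y_mono, r):
    """Estimate q from the monomial vector y_mono using the moments of the
    basic vector; the first r components of the basic vector are q."""
    m_q, m_Y = mean[:r], mean[r:]
    P_qY = cov[:r, r:]
    P_YY = cov[r:, r:]
    # Solve K P_YY = P_qY; P_YY is symmetric, so lstsq on P_YY X = P_qY^T gives K = X^T.
    K = np.linalg.lstsq(P_YY, P_qY.T, rcond=None)[0].T
    q_hat = m_q + K @ (np.asarray(y_mono) - m_Y)   # estimate (3.2)
    P_err = cov[:r, :r] - K @ P_qY.T               # error covariance (3.6)
    return q_hat, P_err
```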

Let $\tilde q = b + \tilde K (Y - m_Y)$, where $b$ is an arbitrary $r$-vector and $\tilde K$ is an arbitrary $r \times M$ matrix. Let $P$ and $\tilde P$ be the error covariance matrices for the vector $q$ corresponding to the estimate vectors $\hat q$ and $\tilde q$.

Lemma 3.1. The matrix $\tilde P - P$ is nonnegative definite: $\tilde P - P \ge 0$.

Lemma 3.1 follows from the identity
$$\tilde P = P + (K - \tilde K)\, P_{YY}\, (K - \tilde K)^{\top} + (m_q - b)(m_q - b)^{\top}.$$

Corollary from Lemma 3.1
The vector $\hat q$ is a mean-square optimal estimate for the vector $q$ on the set of estimates linear with respect to the components of the vector $Y$.

If $P_{YY}$ is nonsingular, then the estimate vector is unique and $K = P_{qY} P_{YY}^{-1}$.

The covariance matrix for the error of the estimate of the vector $q$ is defined by
$$P = P_{qq} - K P_{qY}^{\top}. \tag{3.6}$$

If $P_{YY}$ is singular, then the matrices $K$ giving the linear mean-square optimal estimate are nonunique, but the variance of the components of the difference of the resulting estimates is equal to zero.

Formula (3.2) gives explicit expressions for the vector coefficients in (2.3). These expressions are obtained by equating the right-hand side of (3.2) to the right-hand side of (2.3) and writing out explicit expressions for the components of the vector $Y$.

Let us examine asymptotic errors of the estimates obtained when using formula (3.2).

For a given vector $q$, suppose that the conditional expectation vector $E[q \mid Y_N]$ is defined as a function of the argument $Y_N$ in some a priori given compact domain and is continuous in that domain. Then the following result holds.

Theorem 3.2. As $m \to \infty$, the mean-square error of the estimate $\hat q_m$ relative to the conditional expectation vector tends to zero:
$$\lim_{m \to \infty} E\,\bigl\|E[q \mid Y_N] - \hat q_m(Y_N)\bigr\|^2 = 0. \tag{3.8}$$

Proof. According to the multidimensional version of the Weierstrass theorem [7], for any $\varepsilon > 0$ there is a multidimensional polynomial whose deviation from the conditional expectation vector is less than $\varepsilon$ uniformly on the compact domain. This relation can be rewritten in mean-square form. Let (3.10) define the covariance matrix of the corresponding random error vector. From (3.10), it follows that the mean-square error of the polynomial approximation is bounded (3.12). By construction, the vector $\hat q_m$ gives a mean-square optimal estimate for the vector $q$, and this estimate is linear with respect to the components of the vector $Y$. But Lemma 3.1 implies that for any other nonoptimal linear estimate, in particular one like the Weierstrass polynomial, the error covariance is not smaller. Hence, taking into account (3.12), we get (3.13). The statement (3.13) is equivalent to (3.8) if we take into account (3.14), in which the joint distribution density of the random vectors is involved. Theorem 3.2 is proved.

Thus, the PA, by virtue of (2.3), determines a vector series which, with the increase of the number of its terms, approximates the conditional expectation of the vector of the estimated parameters with arbitrarily small uniform mean-square error.

4. Recurrent MPA Algorithm

Suppose that for a given observation vector $Y_N$ and an integer $m$ we have constructed the basic vector with the components described in Section 2.

The implementation of the algorithm starts with the approximate calculation (by the Monte-Carlo method) of the multidimensional integrals that determine the components of the mean vector and the covariance matrix of the basic vector. Further, splitting this vector and this matrix into suitable blocks and using (2.3), we find the desired estimate vector and the estimate error covariance matrix.

However, this direct approach is unreasonable, since it requires the inversion of the matrix $P_{YY}$, which is a difficult task when the matrix has a large dimension or is close to singular.

Below we describe a recurrent calculation process based on the principle of decomposition of observations expounded in [11].

We are going to construct a recurrent MPA algorithm that does not involve matrix inversion and consists of steps of calculations of the first and the second statistical moments for a sequence of special vectors after the a priori moments of the basic vector have been found.

Denote by the first reduced vector the vector formed by the components of the basic vector remaining after the elimination of the first observation component; by the second reduced vector the vector formed by the components of the first reduced vector remaining after the elimination of the second observation component; and so forth.

The last observation component is eliminated on the final step, and the last reduced vector turns out to be equal to the estimate vector.

On Step 1, formulas (3.2) and (3.6) are used for the calculation of the following quantities:

(i) the vector that yields an estimate for the first reduced vector which is mean-square optimal and linear with respect to the first observation component;
(ii) the estimate error covariance matrix.

The estimate vector consists of the conditional expectation estimate vector and a vector of predictions. For the first observation component fixed, the latter coincides with the vector of statistical prediction of the mean future values of the remaining observation components.

Note that the calculations of Step 1 are based on a priori values found beforehand.

In a similar way, suppose that on the first $i - 1$ steps of the calculation process, the corresponding estimate vector and covariance matrix have been found after fixing the first $i - 1$ observation components.

On step $i$, the following quantities are calculated by means of (3.2) and (3.6):

(i) the vector of the estimate of the $i$th reduced vector, which is mean-square optimal and linear with respect to the $i$th observation component;
(ii) the estimate error covariance matrix.

The estimate vector is composed of the conditional expectation estimate and a vector which, after the first $i$ observation components have been fixed, is the vector of statistical prediction for the mean values of the "future" observation components.

Note that the calculations of step $i$ are based on the already determined quantities, which it is natural to call the a priori first and second statistical moments of the "future" components. The current estimate vector and covariance matrix represent a priori data regarding the statistical moments of the components of the reduced vector before the $i$th observation component is sent to the input of the algorithm.

The formulas of the computation process described above have the following ingredients. The scalar subtracted in the correction term is the $i$th component of the current mean vector (this scalar is a linear mean-square optimal estimate for the $i$th component after the algorithm has used the preceding components); the new mean vector is obtained from the corrected vector by eliminating its $i$th component; the scalar divisor is the $i$th diagonal element of the current covariance matrix (this scalar is the variance of the estimate error for the $i$th component after the preceding components have been used); the new covariance matrix is obtained from the corrected matrix by eliminating its $i$th row and $i$th column; the gain vector coincides with the $i$th column of the matrix with the $i$th component excluded.

If the variance scalar happens to be close to zero, then the corresponding component is close to a linear combination of the previously processed components. In this case, the component gives no new information and should be dropped from the computation process.
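A sketch of one recurrent step under these rules; conditioning the running moments on a single scalar component requires only division by that component's variance, and a near-zero variance triggers the drop described above (names and the eps threshold are assumptions):

```python
import numpy as np

def condition_on_component(mean, cov, j, y_obs, eps=1e-12):
    """Condition the running moments on the observed value of component j,
    then remove that component; no matrix inversion is involved."""
    d_jj = cov[j, j]                                  # variance of component j
    keep = [i for i in range(len(mean)) if i != j]
    mean_rest, cov_rest = mean[keep], cov[np.ix_(keep, keep)]
    if d_jj < eps:
        # Component j is close to a linear combination of those already
        # processed: it carries no new information and is dropped.
        return mean_rest, cov_rest
    p = cov[keep, j]                                  # covariances with component j
    mean_new = mean_rest + p * (y_obs - mean[j]) / d_jj
    cov_new = cov_rest - np.outer(p, p) / d_jj
    return mean_new, cov_new

def mpa_recurrent(mean, cov, y_mono, r):
    """Process the monomial components one after another; the first r entries
    of the running mean always hold the current estimate of q. The order of
    y_mono must match the monomial ordering used in the basic vector."""
    mu, P = mean.copy(), cov.copy()
    for value in y_mono:            # component r is always the next one to use
        mu, P = condition_on_component(mu, P, r, value)
    return mu[:r], P[:r, :r]        # estimate and its error covariance
```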

Note that the sequence of random variables processed in this way forms an innovations sequence.

The top left block of the final covariance matrix coincides with the estimate error covariance matrix for the vector $q$ after the whole observation vector has been used by the algorithm.

Denote by $Y^{(i)}$ the vector formed by the first $i$ components of the vector $Y$. The evolution of the covariance matrix as a function of $i$ (the number of processed components of the vector $Y$) is described by iterating the covariance update above.

In what follows, the above MPA algorithm is tested in several numerical problems of estimating components of state vectors of essentially nonlinear dynamical systems. The components to be estimated are unknown random constant parameters of a dynamical system. The specific applied problems considered below are those of smoothing and filtration.

The Monte-Carlo method is used with the number of random realizations between 5000 and 10000. This number has no strong effect on the estimate errors arising in the MPA algorithm. It is assumed that the estimated random parameters are statistically independent and have uniform a priori distributions.

The theoretical mean-square deviation is determined by the calculation of variances, that is, of the diagonal elements of the estimate error covariance matrix.

The experimental mean-square deviation is determined by the Monte-Carlo method with the number of realizations being 10000.

The fact that the experimental and the theoretical mean-square deviations are approximately the same confirms the correctness of the MPA algorithm formulas specified above.

5. Estimates for the Principal Moments of Inertia and Orientation Angles of the Principal Axes of Inertia of a Solid

Consider a solid body in space (e.g., a satellite) with a body-fixed reference trihedron, that is, orthogonal axes with a common origin fixed in the body. The projections of the angular velocity of the body onto these axes are measured by angular velocity sensors. The projections of the moment of external forces are assumed known.

In the general situation, a solid has 6 moments of inertia relative to a rectangular reference frame, among which three are called centrifugal moments. It is known that for any point there is a rectangular coordinate system whose axes coincide with the principal axes of inertia. In this coordinate system, the centrifugal moments of inertia are equal to zero, and the remaining ones, $J_1, J_2, J_3$, are called the principal moments of inertia relative to that point.

Let $\psi, \theta, \varphi$ be the Euler angles that determine the angular position of the principal axes with respect to the sensor axes, and let $A(\psi, \theta, \varphi)$ be the orthogonal matrix of the direction cosines of these angles.

Generally, a solid in outer space is a complex nonsymmetric structure whose parts play different roles and are constructed of different materials. Therefore, it is difficult to calculate the quantities on the basis of design diagrams of these parts and their junctures. However, for the analysis of the dynamics of a rotating solid it is convenient to use the coordinate axes that coincide with the principal axes of inertia, since the equations of motion (dynamical Euler equations) have a simple form in these axes.

Thus, it is reasonable to state the problem as follows: using the MPA algorithm, find estimates for the principal moments of inertia and the Euler angles, provided that the solid is rotating in space and the projections of the angular velocity onto the sensor axes are observed as discrete-time functions.

In the continuous-time model, the equations of motion of the nonlinear dynamical system are the dynamical Euler equations
$$J_1 \dot\omega_1 = (J_2 - J_3)\,\omega_2\omega_3 + M_1, \quad J_2 \dot\omega_2 = (J_3 - J_1)\,\omega_3\omega_1 + M_2, \quad J_3 \dot\omega_3 = (J_1 - J_2)\,\omega_1\omega_2 + M_3. \tag{5.1}$$
Here, $\omega_i$ and $M_i$ are the projections of the angular velocity vector and of the moment of external forces onto the principal axes. Further, assume that observations of the angular velocity projections take place at intervals of 0.1 second and have a duration of 10 seconds. In order to create a mathematical model (1.1) with discrete time, we use numerical integration by the fourth-order Runge-Kutta method with constant step 0.2 second between observation instants and the given initial conditions.
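A sketch of the discretization, with the standard dynamical Euler equations integrated by one RK4 step (treating the external moment as constant over a step is an illustrative assumption):

```python
import numpy as np

def euler_rhs(w, J, M):
    """Right-hand side of the dynamical Euler equations (5.1) in principal axes."""
    J1, J2, J3 = J
    return np.array([((J2 - J3) * w[1] * w[2] + M[0]) / J1,
                     ((J3 - J1) * w[2] * w[0] + M[1]) / J2,
                     ((J1 - J2) * w[0] * w[1] + M[2]) / J3])

def rk4_step(w, J, M, dt):
    """One fourth-order Runge-Kutta step for the angular velocity vector."""
    k1 = euler_rhs(w, J, M)
    k2 = euler_rhs(w + 0.5 * dt * k1, J, M)
    k3 = euler_rhs(w + 0.5 * dt * k2, J, M)
    k4 = euler_rhs(w + dt * k3, J, M)
    return w + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```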

Thus, at each instant the components of the primary observation vector (the measured angular velocity projections) are related to the nonobservable projections of the angular velocity onto the principal axes, and hence to the nonobservable parameters, through the direction-cosine matrix of the Euler angles.

Using the sequence of primary observation vectors and (5.1), it is required to estimate the six parameters: the principal moments of inertia and the Euler angles.

The secondary observations, which serve as the input data of the estimation algorithm, are found by componentwise summation of the vectors of consecutive primary observations. In our model, it is assumed that the array of 300 primary observations has been converted to an array of 12 scalar secondary observations.
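Assuming, for illustration, that the 300 primary scalars are 100 three-component vectors summed componentwise in 4 blocks of 25 (one plausible grouping that yields 12 scalars), the compression is a one-liner:

```python
import numpy as np

primary = np.zeros((100, 3))   # 100 primary 3-component observation vectors
secondary = primary.reshape(4, 25, 3).sum(axis=1).ravel()   # 12 secondary scalars
```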

For the operation of the MPA algorithm it is necessary to find the a priori first and second statistical moments of the basic 18-dimensional vector formed by the estimated quantities and the secondary observations. These data were found by the Monte-Carlo method under the assumption that the random parameters to be estimated have uniform distributions on a priori given segments.

A scale parameter describes the range in which the a priori errors of the estimated parameters may vary. In our model, three values of this parameter were used, corresponding to large, medium, and small a priori errors.

Let the a priori mean-square deviation (MSD) describe the scattering of an unknown parameter, and let the a posteriori MSD describe the error of estimating this parameter after the utilization of the MPA. In what follows, the quality of the estimate of a parameter is characterized by the ratio of these MSDs, which shows how many times the a posteriori (after observation) dispersal of the estimated parameter is smaller than the a priori dispersal.

Below, for the different a priori error classes, we give the values of the said ratios for the MPA algorithm operating with $m = 2$ (the estimate is realized by second-order polynomials with respect to the 12 secondary scalar observations; the number of terms of the polynomials is equal to 90):

These data show that for the chosen characteristics of the vectors of primary and secondary observations, the a posteriori dispersal of estimate errors is about 100 or more times less than the a priori dispersal of the unknown parameters, provided that the a priori errors of the parameters belong to the classes of medium or small errors. For large a priori errors, the a posteriori dispersal of the parameters is only about 2 times less than the a priori dispersal. To increase the accuracy of the estimates in that case, it seems necessary to use larger values of $m$, as well as other characteristics of the vectors of primary and secondary observations. It is also possible to use iterations that on each step decrease the intervals of possible a priori errors.

For the general reduction of estimate errors it seems reasonable to split a priori error intervals into smaller intervals. Thus, suppose that we have large a priori errors. In order to obtain estimate errors corresponding to the class of medium a priori errors it suffices to divide every large a priori interval into two parts and apply the MPA algorithm to each of the 64 resulting families of a priori intervals from the class of medium error intervals. The MPA algorithm calculation of the MSDs of a posteriori dispersal allows us to find a family of a priori intervals with the minimal values of these MSDs.

6. Estimates of the State Vector of a Satellite by Means of Handling the Information on Viewing Angles

Many results of astronomical observations depend on measurements and subsequent processing of digital data with regard to a sequence of viewing angles of space objects obtained in the optical frequency band. The viewing angles are measured by an optical device or on the basis of images on a photographic plate.

These angles are often the only available data regarding high-orbit sun-exposed satellites with onboard radio communication system failure (the energy of terrestrial radio band viewing facilities is insufficient because of the long distances to a satellite). Therefore, it makes sense to develop an algorithm that would give fairly precise estimates for state vectors of objects in space on the basis of processing the data about viewing angles.

For high-orbit satellites such data is obtained by specially constructed optical-electronic systems (e.g., the optical-electronic system Window [16] which monitors near-the-earth-space in optical frequency band and gives viewing information about space bodies at distances up to 40000 km).

In an inertial geocentric coordinate system, the equations of motion of a body whose mass is not very large have the form
$$\ddot x_i = -\mu\, \frac{x_i}{r^3}, \quad i = 1, 2, 3, \qquad r = \sqrt{x_1^2 + x_2^2 + x_3^2}. \tag{6.1}$$
Here, $\mu$ is the gravity constant and $x_1, x_2, x_3$ are rectangular coordinates of the body.

The viewing angles are the two angles that determine (with respect to the earth) the direction of the viewing vector, that is, the direction of the straight line from the observer to an object in space.

We assume that at the current time instant the center of viewing is located at a point of the Earth's surface with given geographical coordinates: longitude (which, in the inertial frame, increases with the angular speed of the Earth's rotation), latitude, and altitude. In the rotating right geocentric coordinate system, the coordinates of the center of viewing are expressed through these quantities and the Earth's radius.

The two angles of orientation of the viewing vector are represented by an azimuth angle in the local horizon plane of the viewing center and by an elevation angle in the local vertical plane. These angles are defined by relations (6.2), which express them through the components of the viewing vector in the topocentric coordinate system of the viewing center.
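A sketch of the observation model (6.2): given the topocentric unit vectors of the viewing center (assumed supplied), the azimuth and elevation of the viewing vector can be computed as follows (the function and variable names are illustrative):

```python
import numpy as np

def viewing_angles(obj_xyz, site_xyz, east, north, up):
    """Azimuth and elevation of the viewing vector in the local frame of the
    viewing center; east, north, up are its topocentric unit vectors."""
    d = np.asarray(obj_xyz) - np.asarray(site_xyz)   # viewing vector
    e, n, u = d @ east, d @ north, d @ up            # local components
    azimuth = np.arctan2(e, n)                       # angle in the horizon plane
    elevation = np.arctan2(u, np.hypot(e, n))        # angle in the vertical plane
    return azimuth, elevation
```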

In what follows, we assume that the algorithm processing the data on viewing angles should solve a filtration problem: in an inertial geocentric coordinate system at certain current time instants, the algorithm should estimate three rectangular coordinates of an object in space and three rectangular components of its velocity vector.

The possibility of creating such an algorithm is based on the existence of a correlation between the varying viewing angles and the varying rectangular coordinates of a space object. This correlation is due to the law of gravitation: the gravity acceleration vector, which changes the trajectory of the motion and hence the viewing angles, is a known function of the rectangular coordinates of the object.

In the absence of gravity acceleration, it would be impossible to estimate the parameters of motion by measuring the viewing angles. Indeed, the law of variation of the viewing angles is the same for all bodies in uniform rectilinear motion with parallel velocity vectors and the same constant ratio of the distance and the velocity modulus. Therefore, in this situation the parameters of motion are unobservable and cannot be estimated on the basis of information about the viewing angles.

The problem of estimating the current components of the state vector satisfying the system of six nonlinear differential equations (6.1) is a problem of nonlinear filtration. The input of the algorithm consists of a sequence of observation results (6.2) depending nonlinearly on the current coordinates.

Let a fixed time interval separate two consecutive observations of pairs of viewing angles of the space object. After the first observations, we construct integral observation results: the first integral azimuth is the sum of a fixed number of the first observed azimuths and, similarly, the first integral elevation is the sum of the same number of the first observed elevations. In like manner, we construct the subsequent integral pairs, and so forth. The sequence of these random variables is the sequence of inputs to the recurrent algorithm at the corresponding instants. The summation operator reduces the effect of random tracking errors on the accuracy of the estimates.

Choosing an integer, we introduce the following definitions: let us divide the sequence of pairs of integral observation results into a sequence of segments, each containing that number of pairs.

The iteration vector 1 is the estimate vector obtained after the first segment of pairs has been sent to the input of the recurrent algorithm; it is determined at the instant of the last observation of viewing angle pairs in that segment.

The iteration vector 2 is the estimate vector obtained after sending to the input the second segment of the pairs of integral observation results; it is determined at the instant of the last observation in the second segment.

Iteration vectors 3, 4, and so forth, are determined in a similar manner.

For the construction of iteration vector 1, it is necessary to find the vector and the matrix of a priori data: the first and the second statistical moments of the basic random vector 1 whose first six components coincide with the components of the actual (unknown) initial state vector of the space object; its other components have the form of products of integer powers of integral observation results, the sum of these powers not exceeding the given integer $m$. For $m = 1$, the dimension of the basic vector 1 is equal to 18; for $m = 2$, the dimension of the basic vector 1 is equal to 96, and so forth.

A priori scattering of the initial position and velocity vectors determines the a priori dispersal of the other components of the basic vector 1 by means of (6.1) and (6.2). Next, we center the a priori distribution at the nominal initial radius-vector of the observed space object and the nominal initial vector of its velocity, the latter taken as the velocity vector of an object moving on a circular orbit of the nominal radius. Uniform scattering of the components of the random initial vectors is determined by the inequalities (6.4).

As the modeling process goes on, the errors in the determination of the initial coordinates reach several thousand km, and the errors in the determination of the initial velocity vector reach several hundred m/sec. In this situation, the extended Kalman filter based on linearization is inapplicable for estimation, since it diverges at a fast rate.

The a priori first and second statistical moments of the components of the basic vector 1 are determined by integrals over the a priori domains, which are calculated by the Monte-Carlo method.

After the a priori statistical characteristics of the basic vector 1 have been defined, random variables are sent to the input of the recurrent algorithm, namely, the integral observation results for the viewing angles on the first time interval. After the corresponding steps of calculations, the recurrent algorithm determines the iteration vector 1, which is the estimate vector for the state vector of the space object at the end of that interval.

The accuracy of the estimate is described by the estimate error covariance matrix calculated on each step of the recurrent algorithm.

The determination of iteration vector 2 is similar to the above.

The components of the a priori initial state vector correspond to a satellite on a circular orbit.

The viewing system is defined by the given coordinates of the center of viewing.

The components of the actual initial state vector are determined by the inequalities (6.4).

In the process of simulation, the recurrent algorithm was operating under conditions in which each 20 pairs of viewing angles observed during successive time intervals are replaced by pairs of integral observation results. Iteration vectors form the output of the algorithm at the corresponding instants.

In the process of simulation, the method of polynomial approximation for the components of the estimate vector was used for $m = 1$ and $m = 2$. For $m = 1$, on each step of the recurrent algorithm, the components of the estimate vector are linear combinations of the last 12 components of the basic vector, which consist of the 12 integral observation results. For $m = 2$, on each step of the recurrent algorithm, the components of the estimate vector are linear combinations of 90 terms, which coincide with the last 90 components of the basic vector and consist of products of powers of the 12 integral observation results.

Below, we give the MSD values for the coordinates in meters and for the velocity components in meters per second attained in the process of several iterations.

Starting from the a priori MSDs of the components of the state vector of the space object at the initial instant, iterations 1 through 4 were performed. The simulation data show that each iteration decreases, about 100 times, the MSD of the dispersal of the components of the state vector of the space object. By the end of iteration 4, its coordinates and the components of the velocity vector are practically precise.

7. Identification of Parameters of a Nonlinear Servomechanism

We consider the problem of identifying the parameters of a nonlinear servomechanism, which consists, connected in series, of an inertial link with a given time constant, an amplifier with a given gain, integrating link 1, nonlinear link "gap 1" (a backlash of given magnitude), a second amplifier with a given gain, integrating link 2, nonlinear link "gap 2" (a backlash of given magnitude), and a feedback transmitter producing the feedback variable. The input of the servomechanism is a command variable that the output of the servomechanism should reproduce.

The block diagram of the servomechanism is similar to that of the hydraulic booster used to turn the steering surfaces of an aircraft [17].

The equations of motion of the servomechanism have the form (7.1), where the outputs of the gap links are defined piecewise: the output of a gap link follows its input while the input presses against a boundary of the gap and remains constant while the input moves inside the gap; the feedback term is absent if the system is open loop and present if the system is closed loop.
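A sketch of this chain with explicit backlash links, integrated by a simple Euler step; the state layout, the parameter names, and the backlash formulation are assumptions made for illustration, not the paper's equations (7.1):

```python
def backlash(u, y_prev, gap):
    """Gap (backlash) link: the output follows the input only while the input
    presses against a boundary of the dead zone of half-width `gap`."""
    if u - y_prev > gap:
        return u - gap
    if u - y_prev < -gap:
        return u + gap
    return y_prev

def servo_step(state, u_in, dt, T, k1, k2, gap1, gap2, closed_loop=True):
    """One Euler integration step of the chain: inertial link -> amplifier ->
    integrator 1 -> gap 1 -> amplifier -> integrator 2 -> gap 2 -> feedback."""
    x_in, x1, x2, g1, g2 = state
    err = u_in - (g2 if closed_loop else 0.0)   # feedback closes the loop
    x_in += dt * (err - x_in) / T               # inertial link, time constant T
    x1 += dt * k1 * x_in                        # integrating link 1
    g1 = backlash(x1, g1, gap1)                 # gap 1
    x2 += dt * k2 * g1                          # integrating link 2
    g2 = backlash(x2, g2, gap2)                 # gap 2: the servo output
    return (x_in, x1, x2, g1, g2)
```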

We suppose that the input of the servomechanism is a harmonic motion with frequency 1 hertz and amplitude 1; a priori data are given for the magnitudes of the gaps, and the remaining a priori data are given in terms of the nominal (a priori) magnitudes of the parameters. Further, we suppose that the parameters are known with errors not exceeding 20 percent.

The results of observations are variables measured at a frequency of 1000 hertz; they are used for the identification of the parameters. Random observation errors are uniformly distributed on a given segment. The sequence of observations is decomposed into segments of equal length. The sums of the observation results belonging to each segment form the sequence of inputs of the MPA algorithm, which estimates the 5 parameters.

The accuracy of identification is characterized by the theoretical and experimental MSD ratios defined in Section 4.

For various durations of observation, the corresponding ratios were obtained in the simulation.

8. Conclusion

The algorithm described above gives a sequence of convergent estimates for the vector of conditional expectation and the corresponding estimate error covariance matrices in practically any situation, provided that one knows the mathematical models of the system and of the observations necessary for the learning (adjustment) of the algorithm on the basis of the Monte-Carlo method.

It is convenient to compress data by replacing the high-dimensional vector of primary observations with a vector of substantially smaller dimension, whose components are functions of the primary observations. Such a replacement is likely to cause only an insignificant increase of the theoretical estimate error variance.

Thus, in Sections 5 and 6 data compression was implemented by passing from a 200-dimensional vector of primary observations to a 12-dimensional vector whose components are formed by sums of 20 consecutive components of the primary vector.

The development of rational methods for the compression of primary data is necessary for the practical utilization of the proposed general algorithm, which is a unique example of a situation in which the Monte-Carlo method provides the synthesis of a new efficient method of nonlinear estimation.

Acknowledgment

This work has been supported by the Russian Foundation for Basic Research.