#### Abstract

Our knowledge of our surroundings is obtained through observations and measurements, both of which are affected by errors (noise). One of the first tasks is therefore to reduce the noise by constructing instruments of high accuracy. However, any real observed and measured system is characterized by natural limits due to the deterministic nature of the measured information. The present work is dedicated to the identification of these limits. We analyze several algorithms for selection and estimation based on statistical hypotheses, and we develop a theoretical method for their validation. A classical (non-quantum) algorithm for observations and measurements of an optical field, based on statistical strategies, is presented in detail. A generalized statistical strategy for observations and measurements on nuclear particles is then built on these results, taking into account also the particular type of statistics resulting from the measuring process.

#### 1. Introduction

The methods of testing statistical hypotheses and of parameter estimation, built up within mathematical statistics, represent algorithms that confirm the "functionality" of experimental systems [1–4]. The aim of this paper is to identify natural limits by building "observation" and "estimation" algorithms based on "statistical strategies" for the "assessment and control" of these limits. In experimental systems such as optical communications, considerable interest is focused on the observation and measurement of signals whose entropy exceeds the noise level. Thus, the signal/noise ratio is used as the main observable for validating the correct operation of a communication system [5–12].

A classical (non-quantum) algorithm based on statistical strategies for an optical field is presented in detail. A generalized statistical strategy based on observations and measurements on nuclear particles such as neutrinos can also be developed [13, 14]. Neutrino physics and engineering are very closely related to the physics of stars. The chemical composition of the solar interior is one of the frontiers of solar neutrino spectroscopy. Neutrinos also play a decisive role as an energy-loss channel in the understanding of stellar evolution. Observed astrophysical neutrino sources other than the Sun help us understand supernova physics, such as stellar core collapse, as well as the dynamics of supernova explosions and nucleosynthesis [15–24].

The methods of statistical physics discussed in this paper are inseparably intertwined with the strategy for observations and measurements on nuclear particles such as neutrinos.

High-statistics neutrino observations provide very important data about other low-mass particles and motivate large-scale experiments in which new types of particle detectors will be developed and built. Alongside the neutrino observations, a great deal of theoretical and numerical work remains to be done; based on statistical physics methods, it provides crucial information about the accuracy of the experiments to be developed and built.

Two application examples are given: one based on the bilateral test for the validation of a statistical hypothesis (validation of the mean value for a given dispersion) and another for the validation of the mean value for an unknown dispersion [6–12, 15, 25].

Although noise can be reduced by constructing instruments of high accuracy, it is very important to mention that the statistical validation of communication systems based on control statistical strategies shows that the essential parameter characterizing such a system is not the signal/noise ratio but the structure of the statistical strategy. Therefore, a high signal/noise ratio alone does not guarantee the validation (correct operation) of such a system from the point of view of the multistochastic processes that generate the noise.

#### 2. Theoretical Considerations

We define the set of measurements for the considered signal. Starting from this, we intend to calculate the false alarm probability, the detection probability, and the physical system state [7]. We assume that a physical system can be in one of several statistical states, each described by a statistical hypothesis. If the physical system is characterized by a given statistical state, the detected signal is the sum of the useful signal and a random signal [8].

We assume that the observation vector is a random variable described by a probability density function.

Thus, the "strategy" consists in associating a statistical hypothesis to the observed event, with a given risk probability (the classic measurement operator).

The decision functions represent a "random strategy" for choosing the best statistical hypothesis. We define the probability of choosing one statistical hypothesis when the physical system is characterized by another statistical hypothesis [9].

The statistical event is described by a risk. Using the prior probabilities, the average risk for an immediate strategy is evaluated. We then define the risk function for choosing a statistical hypothesis and, using (6), express the value of the average risk. The problem consists in finding the set of decision functions that satisfy the minimization conditions; the value of the minimum risk can then be written in several equivalent forms. The risk functions are directly proportional to the risks defined "a posteriori", where the a posteriori probability of each statistical hypothesis is obtained from the prior probabilities and the probability density functions. The a posteriori probability can also be defined in terms of the verisimilitude (likelihood) ratio.
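As a numerical illustration of minimizing the average (a posteriori) risk, the following Python sketch picks, for an observed value, the hypothesis with the smallest posterior expected risk. The priors, likelihood model, cost matrix, and all names here are illustrative assumptions, not taken from the paper.

```python
import math

def bayes_decision(x, priors, likelihoods, cost):
    """Pick the hypothesis index minimizing the a posteriori expected risk.

    priors[k]      -- prior probability of hypothesis H_k
    likelihoods[k] -- function returning p(x | H_k)
    cost[i][k]     -- cost of deciding H_i when H_k is actually true
    """
    # Unnormalized a posteriori probabilities: prior * likelihood
    post = [q * p(x) for q, p in zip(priors, likelihoods)]
    total = sum(post)
    post = [w / total for w in post]
    # A posteriori risk of deciding H_i
    risks = [sum(cost[i][k] * post[k] for k in range(len(post)))
             for i in range(len(post))]
    return min(range(len(risks)), key=risks.__getitem__)

def gauss_pdf(mu):
    """Unit-variance Gaussian density centred at mu (illustrative model)."""
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# Two equiprobable hypotheses with 0/1 loss: the rule reduces to the
# maximum a posteriori choice, with decision boundary at x = 0.5.
choice = bayes_decision(0.9, [0.5, 0.5],
                        [gauss_pdf(0.0), gauss_pdf(1.0)],
                        [[0, 1], [1, 0]])
```

With equal priors and symmetric 0/1 loss the rule is simply "pick the hypothesis with the larger likelihood"; asymmetric costs shift the boundary toward the cheaper error.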

#### 3. Structure of Statistical Strategy for Two Statistical Hypotheses

Let us consider two statistical hypotheses. From the measurements, we obtain the probability densities associated with each statistical hypothesis [10]. The risk functions are then calculated, and Bayes' strategy consists in choosing a statistical hypothesis when the corresponding relation between the risks is valid (see [10]).

From (18) and (19) one obtains the a posteriori probability. We calculate the probability of choosing one hypothesis for a system that is actually in the other hypothesis, as well as the probability of choosing, for a selected physical system, the hypothesis in which it is actually found. The strategy then consists in maximizing the detection probability for a given value of the false alarm probability (the Neyman-Pearson criterion).
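The Neyman-Pearson criterion can be sketched for the standard Gaussian mean-shift model (noise-only mean 0 versus signal mean s): fix the false alarm probability at a chosen level and compute the resulting detection probability. This is a minimal sketch under assumed parameter names, not the paper's own code; it uses Python's `statistics.NormalDist` for the normal quantile and distribution functions.

```python
import math
from statistics import NormalDist

def neyman_pearson_gaussian(alpha, s, sigma, n=1):
    """Most powerful test of H0: mean 0 versus H1: mean s for Gaussian
    noise of standard deviation sigma, based on the average of n samples.
    The false-alarm probability is fixed at alpha; returns the decision
    threshold and the resulting detection probability."""
    se = sigma / math.sqrt(n)                       # std of the sample mean
    thr = NormalDist(0.0, se).inv_cdf(1.0 - alpha)  # P(mean > thr | H0) = alpha
    p_d = 1.0 - NormalDist(s, se).cdf(thr)          # P(mean > thr | H1)
    return thr, p_d

thr, p_d = neyman_pearson_gaussian(alpha=0.05, s=1.0, sigma=1.0, n=9)
```

For example, with alpha = 0.05, unit signal and noise amplitudes, and n = 9 measurements, the detection probability comes out near 0.91.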

In this case, if the verisimilitude ratio has the stated form, then from (22) and (23) we obtain the corresponding expressions [11]. Let us define a phase space with a parameter function. This is a simple convex space: D (a simple convex region) is the field of possible values of the false alarm and detection probabilities, as shown in Figure 1.

A reliability assessment of equipment compliance indicators is performed based on these calculations. The values are chosen according to the test plan for testing (with a standardized coefficient calculated in one step).

At the limits of compliance, the corresponding value is calculated, the number of failures is determined, and, in a rectangular coordinate system, a trace line through the two points is obtained (Figure 2(a)).


For the areas of inadequacy, the number of failures is determined, the corresponding value is calculated, and a trace line through the two points is obtained (Figure 2(b)).

The achieved line is represented as a process that begins at the zero point and coincides with the horizontal axis.

Let the equation of the curve that bounds the region D from above be given. Since the region D is convex, no tangent to this curve crosses the region D.

Let a point of tangency be given in these coordinates. The equation of the line with the corresponding slope can then be written. Whatever the values are, the points belonging to the region D fulfill this condition, which can be rewritten accordingly. In this case, the statistical strategy consists in the maximization of the integral in (30). From (31) we obtain the parameter function which maximizes the detection probability; it can take a set of possible values, and the decision rule follows. In the case of equality, we choose the hypothesis with a prescribed probability, which covers the uncertainty region.

In this case, the values of the false alarm and detection probabilities have the corresponding expressions. For a continuous structure on the observable space, (34) and (35) become integral expressions in which the boundary region has null probability under any hypothesis.

#### 4. Calculation Algorithm of the Statistical Strategy (Classical Case) for the Observation and Measurement of an Optical Signal in the Presence of Gaussian Fluctuations

Let us suppose that we consider the case of two statistical hypotheses, in which one term represents the Gaussian-distributed random signal (the Gaussian noise) and the other the detectable useful signal.

We introduce an averaging operator for each statistical hypothesis, with calculation rules in which the correlation function of the Gaussian noise appears.

The signal so defined must be "observed" and "measured" within a given period.

To determine the observables, which are represented by latent vectors and fundamental functions, we define the corresponding expansions; from (37), the spectral expansion (39), defined by the associated quantities, results.

Let an observable space defined by Gaussian fluctuations be characterized by the probability functions of the two hypotheses. We therefore define the corresponding quantities and build the correlation matrices for the Gaussian noise. The correlation operator is defined together with its eigenvalue equation. We then obtain a scalar (diagonal) matrix whose elements represent the measure of dispersion for every measurement. The distribution functions acquire a factorized expression (as if every sampled quantity were Gaussian distributed).

Thus, the verisimilitude ratio can be written explicitly, and the false alarm and detection probabilities have the corresponding expressions, in which the significance threshold of the statistical strategy appears.

If we perform only one measurement, the signal/noise ratio can be considered as in [12], but only under the stated condition; the result then follows, and the signal/noise ratio takes its explicit form. If the probability distributions for the two statistical hypotheses are characterized by a parameter, then we can define the corresponding probability over the critical domain in the observable space.
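The single-measurement case is easy to cross-check by Monte Carlo: simulate measurements under each hypothesis and count threshold crossings to estimate the false alarm and detection probabilities. The threshold and parameter values below are illustrative assumptions, not values from the paper.

```python
import random

def simulate_pf_pd(thr, s, sigma, trials=200_000, seed=7):
    """Monte Carlo estimate of the false-alarm and detection probabilities
    for the rule 'decide the signal hypothesis when a single measurement
    exceeds thr', under Gaussian noise of standard deviation sigma."""
    rng = random.Random(seed)
    # Noise-only measurements exceeding the threshold -> false alarms
    p_f = sum(rng.gauss(0.0, sigma) > thr for _ in range(trials)) / trials
    # Signal-plus-noise measurements exceeding the threshold -> detections
    p_d = sum(rng.gauss(s, sigma) > thr for _ in range(trials)) / trials
    return p_f, p_d

p_f, p_d = simulate_pf_pd(thr=1.645, s=2.0, sigma=1.0)
```

With thr = 1.645 and unit-variance noise, the estimated false alarm probability should land near the nominal 0.05 of the one-sided 5% threshold.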

Finally, we can write the equations of the statistical strategy, which consists in determining the optimum critical domain in the observable space such that the stated relation holds with respect to any other critical domain. We specify (by estimation) the parameters of the probability distribution. If the alternative hypothesis is not a simple hypothesis but a composite one, then a validation test that is most powerful against every other test does not always exist (so there is no uniformly most powerful validation test).

If we verify the null hypothesis against a simple alternative hypothesis, then the most powerful test will exist.

If, for the alternative hypothesis, there is a value which satisfies the stated condition, then the best test is the one for which the values determining the critical range satisfy the corresponding inequality (the Neyman-Pearson auxiliary theorem). For Gaussian noise, from (69) it results that the statistic increases monotonically; thus, the highest value that fulfills (68) is the one that satisfies the equality (70). By expanding (70), the best critical domain (critical value) results. Using special functions we can calculate, for instance, the required quantities; from (74) further relations follow, and from expressions (76) and (77) we obtain the result, which can also be written in another form. Similarly, from (80) we can determine the signal/noise ratio, which can be written in different forms, and in the limiting case simplified expressions are obtained. Let the empirical average of the measurements be defined; the statistical hypothesis then has a structure expressed in terms of the variable of the distribution function.

The distribution functions are now written in terms of their respective parameters.

Therefore, we can write the probability equation and its consequences; from (88) and (89) the corresponding result follows, and the final expressions for the empirical average are obtained.

##### 4.1. Application for a Particular Case

We consider the following expressions as known, while the remaining parameters need to be determined.

The signal/noise ratio and the detection probabilities are then calculated, and we look for the best characteristic region.

The graph in Figure 3 indicates that the detection of low-amplitude signals requires more measurements (larger values of *n*) than the detection of high-amplitude signals, for which the number of measurements can be small.
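The trend described for Figure 3 can be reproduced from the standard sample-size formula for a one-sided Gaussian test: the required number of measurements grows as the inverse square of the signal amplitude. The following is a hedged sketch; the function and parameter names are mine, not the paper's.

```python
import math
from statistics import NormalDist

def samples_needed(s, sigma, alpha=0.05, beta=0.1):
    """Number of independent measurements n such that a one-sided test on
    the empirical mean has false-alarm probability alpha and
    missed-detection probability beta for a signal of amplitude s in
    Gaussian noise of standard deviation sigma."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - alpha) + nd.inv_cdf(1.0 - beta)
    # n >= ((z_alpha + z_beta) * sigma / s)**2, rounded up
    return math.ceil((z * sigma / s) ** 2)
```

For unit-variance noise, alpha = 0.05 and beta = 0.1, a signal of amplitude 0.5 needs 35 measurements while an amplitude of 2 needs only 3, matching the qualitative behaviour stated above.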

##### 4.2. Algorithm Regarding the Bilateral Test for Validation of a Statistical Hypothesis (Validation of the Mean Value for a Given Value of the Dispersion)

Let us put the matrix in diagonal form, so that the repartition (distribution) functions take a factorized expression. We formulate the hypotheses and, in general, define a verisimilitude function for the correspondence. The maximum verisimilitude estimation follows, and we then calculate the verisimilitude functions and their ratio. The critical region and its limit are determined, leading to the final result. The test significance threshold equation reflects the fact that there are two values with two equal associated areas (the two tails of the bilateral test).

From (109), the best critical domain results, and the structure of the statistical strategy is characterized accordingly. Considering the stated approximation, the final expressions are obtained.
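The bilateral (two-sided) test with known dispersion can be condensed into a few lines: reject the null hypothesis on the mean when the standardized empirical average exceeds the (1 - alpha/2) normal quantile, so that the two tails carry equal areas as described above. This is an illustrative sketch with assumed names, not the paper's notation.

```python
import math
from statistics import NormalDist, fmean

def z_test_two_sided(xs, mu0, sigma, alpha=0.05):
    """Bilateral test of H0: mean == mu0 when the dispersion sigma**2 is
    known. Rejects H0 when |z| exceeds the (1 - alpha/2) normal quantile,
    i.e., when each tail carries probability alpha/2."""
    n = len(xs)
    z = (fmean(xs) - mu0) * math.sqrt(n) / sigma   # standardized average
    z_crit = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return abs(z) > z_crit, z

# A sample far from mu0 = 0 is rejected; a sample close to it is not.
reject_far, z_far = z_test_two_sided([2.0] * 16, 0.0, 1.0)
reject_near, z_near = z_test_two_sided([0.1] * 16, 0.0, 1.0)
```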

##### 4.3. Algorithm Regarding the Bilateral Test for Validation of a Statistical Hypothesis (Validation of the Mean Value for an Unknown Value of the Dispersion)

Let us consider the following setting: we define the null hypothesis and the alternative hypothesis and write the general form of the verisimilitude function. From the system of equations the estimates result, so (121) takes the stated form. Therefore, the maximum likelihood estimation of the dispersion in the case of the null hypothesis is obtained from (122) and (123); the estimated value of the dispersion follows in the limit case. Then, by definition, the verisimilitude ratio results. Introducing the dispersion experimentally determined from these values, the verisimilitude ratio takes a simpler form, and the corresponding limit is verified. The best critical region is given by an inequality; from the verisimilitude threshold expression the best critical domains are determined, so the signal/noise ratio and the final result follow.
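When the dispersion is unknown and must be estimated from the sample, the analogous bilateral test uses the Student t statistic. The Python standard library has no Student t quantile function, so in this sketch the critical value is passed in from tables (for example, about 2.262 for alpha = 0.05 and 9 degrees of freedom); all names are illustrative assumptions.

```python
import math
from statistics import fmean, stdev

def t_test_two_sided(xs, mu0, t_crit):
    """Bilateral test of H0: mean == mu0 when the dispersion is unknown
    and estimated from the sample. t_crit is the (1 - alpha/2) quantile
    of the Student t distribution with len(xs) - 1 degrees of freedom,
    taken from statistical tables."""
    n = len(xs)
    # stdev uses the (n - 1)-denominator sample standard deviation,
    # which is the estimator the t statistic requires.
    t = (fmean(xs) - mu0) * math.sqrt(n) / stdev(xs)
    return abs(t) > t_crit, t

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
reject_a, t_a = t_test_two_sided(data, 5.5, 2.262)  # true mean: keep H0
reject_b, t_b = t_test_two_sided(data, 0.0, 2.262)  # wrong mean: reject H0
```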

#### 5. Conclusions

Our knowledge is achieved by observations and measurements of systems, operations which are affected by errors. The aim of this paper has been to identify the corresponding natural limits by developing observation and assessment algorithms based on statistical strategies of control and checking.

It is very important to mention that the statistical validation of some communication systems based on control statistical strategies points out that the essential parameter characterizing such a system is not the signal/noise ratio but the structure of the statistical strategy, which in its simplest and most relevant form contains the false alarm probability and the detection probability. Therefore, a system with a high signal/noise ratio will not by itself ensure validation (correct operation of the system) from the point of view of the multistochastic processes that generate the noise.

Using the algorithms described in the paper, an algorithm based on the bilateral test for the description of the unknown dispersion can be further developed.

A generalized statistical strategy for observations and measurements on nuclear particles can be based on these results, taking into account also the particular type of statistics resulting from the measuring process.

#### Acknowledgments

This work was supported by a grant of the Romanian National Authority for Scientific Research, CNDI-UEFISCDI, Project no. PN-II-PT-PCCA-2011-3.2-1007 (Contract no. 184/2012), and is dedicated to the memory of our former colleague Vasile Babin, Ph.D.