Abstract

This paper discusses the fundamentals of negative probabilities and fractional calculus. The historical evolution and the main mathematical concepts are discussed, and several analogies between the two apparently unrelated topics are established. Based on this new conceptual perspective, some experiments are performed, shedding new light on possible future progress.

1. Introduction

Scientific knowledge evolves by means of two distinct and independent processes. The most common consists of small incremental steps that improve existing methods and tools. The second process, rarer and frequently not well accepted, consists of sudden changes towards unpredictable, strange directions. These “quantum leaps” often retain the reputation of being “exotic” and of limited usefulness. History demonstrates that such prejudices lead to some perplexity later on, when the community realizes the possibilities that these concepts reveal in nature.

Probability started with the correspondence between Pierre de Fermat and Blaise Pascal in 1654, but the modern theory of probability is usually credited to Andrey Kolmogorov (1933). Standard probabilities take values between 0 and 1, interpreted as chances between 0% and 100% that some event will happen. Therefore, conceiving a negative probability (NP) or, even, values lying outside that interval seems to be an “error.” Yet, the Nobel laureates Paul Dirac and Richard Feynman discussed this concept in the scope of quantum physics. In spite of this, the concept remained limited to a few studies and only recently started to emerge in some applications.

Today’s differential calculus is credited to Isaac Newton (1643–1727) and Gottfried Leibniz (1646–1716), who independently developed frameworks for derivatives and integrals. The standard operations of differentiation and integration are a priori interpreted as being applied an integer number of times. Again, the idea of a half-derivative seems an “error,” and yet it emerged together with classical calculus, in the ideas of Leibniz. Furthermore, many important mathematicians, such as Euler, Fourier, Liouville, and Riemann, contributed to its development. Nevertheless, the area remained “exotic” until a few decades ago, when scientists recognized fractional calculus (FC) to be an important tool for describing complex phenomena.

Both NP and FC generalize standard concepts and extend them towards real values outside the usual domains. We can talk about fractional coins, or about fractional derivatives, but, surprisingly, in spite of the resemblances in their histories as well as in their mathematical concepts, the two areas remain apart. This paper debates this state of affairs and establishes a first link between the two areas, so that future developments can benefit from the synergies of possible analogies.

Bearing these thoughts in mind, Section 2 introduces the historical evolution and the main mathematical concepts involved in the generalization towards NP and FC. Section 3 develops an analogy between the fields of probability and calculus and, based on the new ideas, presents several experiments in the area of control theory and discusses the results. Finally, Section 4 draws the main conclusions.

2. Fundamental Concepts

This section introduces the fundamental concepts underlying the study developed in the sequel. Section 2.1 presents the evolution of the concept of NP and of fractional coins. Section 2.2 outlines the history of fractional calculus and the concept of fractional derivatives (FD).

2.1. Negative Probabilities

In 1942, Dirac wrote a paper [1] introducing the concepts of negative energies and negative probabilities (NPs). Dirac wrote: “Negative energies and probabilities should not be considered as nonsense. They are well-defined concepts mathematically, like a negative of money.” Later, Feynman [2, 3] explored the idea in the scope of quantum mechanics. He pointed out that no one objects to the use of negative numbers in calculations, although “minus three apples” is not a valid concept in the real world. Furthermore, he argued not only for NP, but also that probabilities above one could be useful in calculations. In 1945, Bartlett [4] developed the first efforts towards a formal treatment of NP.

In 2005, Székely [5] introduced the concept of “half-coins” as conceptual objects leading to NP. He started with a fair coin having two sides, denoted as “0” and “1,” with identical probabilities. Let us recall that for a discrete random variable $X$, taking values $k = 0, 1, 2, \ldots$, the probability generating function (pgf) is defined as $G(x) = E\left[x^{X}\right] = \sum_{k=0}^{\infty} p_k x^{k}$, where $p_k$ is the probability mass function of $X$. Therefore, the pgf of a fair coin is $G(x) = \frac{1}{2}(1 + x)$. The addition of independent random variables corresponds to the multiplication of their pgfs, and, therefore, the pgf of the sum of $n$ fair coins is $\left[\frac{1}{2}(1 + x)\right]^{n}$. Székely proposed the generalization of the pgf and defined the “half-coin” as the one producing the pgf:

$$G(x) = \left(\frac{1 + x}{2}\right)^{1/2}. \qquad (1)$$

This strange object makes a complete coin, because if we flip two half-coins, then the sum of the outcomes is 0 or 1, with probability $\frac{1}{2}$ each, as if we flipped a fair coin. Furthermore, expanding (1), we verify that the “half-coin” reveals an infinite number of sides, some having negative probabilities. In fact, we have $p_0 = \frac{1}{\sqrt{2}} \approx 0.71$, $p_1 = \frac{1}{2\sqrt{2}} \approx 0.35$, $p_2 = -\frac{1}{8\sqrt{2}} \approx -0.09$, and then a series of positive and negative probabilities. Székely mentioned also biased coins and dice, as well as any $n$th root instead of the square root in (1).
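As a numerical illustration (not part of Székely’s original construction), the following Python sketch expands the half-coin pgf (1) into its first coefficients and checks that adding two independent half-coins reproduces a fair coin; the truncation order N is an arbitrary choice.

```python
# A minimal sketch: expand the half-coin pgf G(x) = ((1 + x)/2)^(1/2) into its
# coefficients p_k and check that adding two independent half-coins
# (equivalently, convolving the coefficient sequences) gives a fair coin.
import numpy as np

def gen_binom(alpha, k):
    """Generalized binomial coefficient C(alpha, k) for real alpha."""
    c = 1.0
    for i in range(k):
        c *= (alpha - i) / (i + 1)
    return c

N = 12                                           # truncation order (assumed)
p = np.array([gen_binom(0.5, k) / np.sqrt(2) for k in range(N)])
print(np.round(p[:5], 4))       # [0.7071, 0.3536, -0.0884, 0.0442, -0.0276]

# Sum of two independent half-coins = product of pgfs = convolution of p with p.
print(np.round(np.convolve(p, p)[:N], 6))        # ~[0.5, 0.5, 0, 0, ...]
```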

NPs are allowed by defining quasiprobability distributions. Quasiprobability distributions relax some of the axioms of probability theory; they share some of the features of standard probabilities (e.g., the expectation value), but they violate the first and third probability axioms.

NPs have been discussed mainly in physics, and we can mention [6–11], but the topic remained somewhat untouched in other scientific areas. Nevertheless, in 2004, Haug [12] applied NP in the area of mathematical finance. In 2007, Tijms and Staats [13] mentioned negative probabilities in the computation of waiting-time distributions. More recently, Burgin and Meissner [14] addressed the application of NP to financial option pricing.

2.2. Fractional Calculus

In 1695, Gottfried Leibniz sent a letter to Guillaume l’Hôpital raising the question “Can the meaning of derivatives with integer order be generalized to derivatives with noninteger orders?” l’Hôpital replied with another question: “What if the order will be 1/2?” Leibniz answered: “It will lead to a paradox, from which one day useful consequences will be drawn.”

In 1738, Euler noticed that the evaluation of $\frac{d^{n} x^{a}}{dx^{n}}$ has meaning for noninteger values of $n$, but the first mention of a FD appears in 1819, in a text of S. F. Lacroix. For the power function he gives the example:

$$\frac{d^{1/2}}{dx^{1/2}}\, x = \frac{2\sqrt{x}}{\sqrt{\pi}}. \qquad (2)$$

FC has been a research topic for three centuries, and many important mathematicians worked in the area, namely, Fourier, Abel, Liouville, and Riemann. Historical surveys can be found in the books by Oldham and Spanier [15], Miller and Ross [16], and Samko et al. [17].

In 1893, Heaviside [18], in connection with his research in electromagnetism and operational calculus, briefly addressed the application of FC. Nevertheless, only in the last few decades [19] did FC emerge as a useful mathematical tool in several applications [20–24]. We can mention control theory, signal processing, chemical physics, anomalous diffusion, and many other areas, where FC has revealed results superior to those of classical calculus [25–36].

There are several definitions of FD, namely, the Riemann-Liouville, Caputo, and Grünwald-Letnikov formulations [28, 37]. In this paper, the Grünwald-Letnikov definition of a FD of order $\alpha$ is adopted, given by

$$D^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{k=0}^{\lfloor t/h \rfloor} (-1)^{k} \frac{\Gamma(\alpha + 1)}{\Gamma(k + 1)\,\Gamma(\alpha - k + 1)}\, f(t - k h), \qquad (3)$$

where $\Gamma(\cdot)$ is Euler’s gamma function, $\lfloor \cdot \rfloor$ means the integer part of its argument, and $h$ is the time step increment.
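For illustration, the following Python sketch evaluates the truncated Grünwald-Letnikov sum in (3) for the assumed test case $f(t) = t$ and $\alpha = 1/2$, whose half-derivative has the closed form $2\sqrt{t/\pi}$; the step $h = 0.01$ is an arbitrary choice.

```python
# A minimal numerical sketch of the Grünwald-Letnikov sum in (3), truncated at
# a finite step h, applied to the assumed test case f(t) = t with alpha = 1/2.
import math

def gl_derivative(f, t, alpha, h):
    """(1/h^alpha) * sum_k (-1)^k C(alpha, k) f(t - k*h), k = 0..floor(t/h)."""
    # math.gamma(k + 1) overflows for k > 170, so keep t/h moderate
    total = 0.0
    for k in range(int(t / h) + 1):
        w = ((-1) ** k * math.gamma(alpha + 1)
             / (math.gamma(k + 1) * math.gamma(alpha - k + 1)))
        total += w * f(t - k * h)
    return total / h ** alpha

t = 1.0
print(gl_derivative(lambda x: x, t, alpha=0.5, h=0.01))  # ~1.127
print(2 * math.sqrt(t / math.pi))                        # 1.1284 (closed form)
```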

The geometrical interpretation of FD has been the subject of debate, and several perspectives have been put forward [38–41].

Using the Laplace transform, for zero initial conditions, we have the expression:

$$\mathcal{L}\left\{D^{\alpha} f(t)\right\} = s^{\alpha} F(s), \qquad (4)$$

where $s$ and $\mathcal{L}$ denote the Laplace variable and operator, respectively.

The Grünwald-Letnikov definition (3) is often adopted in signal processing and control systems [42, 43] because it leads directly to a discrete-time algorithm based on the approximation of the time increment $h$ by the sampling period $T$:

$$s^{\alpha} \approx \left(\frac{1 - z^{-1}}{T}\right)^{\alpha}, \qquad (5)$$

where $z$ and $T$ represent the $z$-transform variable and the sampling period, respectively.
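The next Python sketch (with assumed values of $T$ and of the truncation order, not those of the experiments reported below) lists the coefficients of the truncated operator (5) for $\alpha = 1/2$ and checks that cascading two half-derivatives reproduces the usual first difference $(1 - z^{-1})/T$.

```python
# Sketch with assumed values of T and r: FIR coefficients of the truncated
# operator ((1 - z^{-1})/T)^alpha in (5), for alpha = 1/2, and a check that
# two cascaded half-derivatives give the first difference (1 - z^{-1})/T.
import numpy as np

def gl_filter(alpha, T, r):
    """Coefficients h_k, k = 0..r, of ((1 - z^{-1})/T)^alpha."""
    h = np.zeros(r + 1)
    h[0] = 1.0
    for k in range(1, r + 1):
        h[k] = h[k - 1] * (k - 1 - alpha) / k    # equals (-1)^k C(alpha, k)
    return h / T ** alpha

T, r = 0.1, 8                                    # assumed period [s] and order
h_half = gl_filter(0.5, T, r)
print(np.round(h_half, 4))
print(np.round(np.convolve(h_half, h_half)[:r + 1], 6))  # ~[1/T, -1/T, 0, ...]
```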

3. Synergies in the Fractional Perspectives

In the previous sections, the concepts of NP and FD were presented briefly. It is worth noting that, in both cases, the generalization of the classical concepts emerges naturally when thinking of the transforms, that is, when using the pgf (in the probability world) and the Laplace or $z$ operators in differential calculus. A closer look reveals that the NP in the “half-coin” (1) and the FD of a function (3) share common features. In fact, at a more general level, we can say that both follow the expression:

$$\left(a_0 + a_1 x\right)^{\pm 1/2} = \sum_{k=0}^{\infty} h_k x^{k}, \qquad (6)$$

where $a_0$, $a_1$, and $h_k$ are weights.

For a fair coin, we have identical positive weights (probabilities), that is, $a_0 = a_1 = \frac{1}{2}$, while for a derivative we have symmetrical coefficients, that is, $a_1 = -a_0$. The weights are “natural” in each of the native worlds, but “strange” in the other counterpart. For example, the factor $a_1 = -a_0$ in the derivative can be interpreted as the probability of an “antievent.” Let us now suppose that we allow the exponent in (6) to have not only positive, but also negative values. In the differential calculus world it means simply that we have an integral instead of a derivative. However, in the probability world it means the inverse action of flipping a half-coin, or, let us say, the “antiflipping.” In other words, while flipping two half-coins is the same as flipping one coin, flipping one half-coin and “antiflipping” one half-coin is identical to doing nothing! Again, what is common on one side seems uncommon on the other. Figure 1, showing the worlds of probability and differential calculus, summarizes these concepts.

Let us check the four expansions of (6) resulting from $a_0 = 1$, $a_1 = \pm 1$, and exponents $\mp 1/2$:

$$(1 - x)^{-1/2} = 1 + \tfrac{1}{2} x + \tfrac{3}{8} x^{2} + \tfrac{5}{16} x^{3} + \cdots, \qquad (7)$$

$$(1 - x)^{+1/2} = 1 - \tfrac{1}{2} x - \tfrac{1}{8} x^{2} - \tfrac{1}{16} x^{3} - \cdots, \qquad (8)$$

$$(1 + x)^{-1/2} = 1 - \tfrac{1}{2} x + \tfrac{3}{8} x^{2} - \tfrac{5}{16} x^{3} + \cdots, \qquad (9)$$

$$(1 + x)^{+1/2} = 1 + \tfrac{1}{2} x - \tfrac{1}{8} x^{2} + \tfrac{1}{16} x^{3} - \cdots. \qquad (10)$$
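The following Python sketch (an illustration only) generates the first coefficients of the four expansions (7)–(10) by the standard recursion for generalized binomial coefficients and verifies numerically that “flipping” (10) followed by “antiflipping” (9) amounts to doing nothing, since the two coefficient sequences convolve to the identity.

```python
# Sketch: first coefficients of the four expansions (7)-(10), and a check that
# "flipping" (10) followed by "antiflipping" (9) is equivalent to doing
# nothing: the two coefficient sequences convolve to [1, 0, 0, ...].
import numpy as np

def expand(sign_x, exponent, r=8):
    """Coefficients of (1 + sign_x * x)^exponent up to order r."""
    c = np.zeros(r + 1)
    c[0] = 1.0
    for k in range(1, r + 1):
        c[k] = c[k - 1] * (exponent - k + 1) / k * sign_x
    return c

cases = {"(7)": (-1, -0.5), "(8)": (-1, +0.5), "(9)": (+1, -0.5), "(10)": (+1, +0.5)}
for label, (sgn, a) in cases.items():
    print(label, np.round(expand(sgn, a), 4))

flip, antiflip = expand(+1, +0.5), expand(+1, -0.5)
print(np.round(np.convolve(flip, antiflip)[:9], 6))      # ~[1, 0, 0, ...]
```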

Expressions (7) and (8), with the subtraction of the terms $1$ and $x$, lead to the fractional integral and the fractional derivative, respectively. Expressions (9) and (10), with the sum of the terms $1$ and $x$, represent the antiflipping and flipping of a half-coin, respectively. However, in the probability world, expressions (7) and (8) describe the antiflipping and flipping of a half-coin with an “antiface,” respectively. On the other hand, in the world of differential calculus, expressions (9) and (10) seem even more difficult to interpret.

So, the question arises of whether there is some usefulness in establishing an analogy between these two apparently unrelated fields. In other words, is there some hidden connection between NP and FD, or is this merely an abstract manipulation of expressions and concepts? Though it is not the subject of this paper to explore all possibilities, we considered the application of these concepts in the field of control theory. Therefore, in the sequel, we analyse the time response of a unit feedback closed-loop control system with transfer function $G(s)$ in the direct loop, under the action of a discrete-time controller $C(z)$ consisting of one of the series truncated at order $r$, where $r$ denotes the truncation order. The control algorithm is inspired by (7)–(10), interpreted as expressions in the $z$ domain (with $x = z^{-1}$). Furthermore, in the four cases a control gain $K$, a sampling period $T$, and a unit step reference input are considered. No special tuning technique was used for the control gain $K$, which therefore remains identical in all experiments in order to ease the comparison. Figure 2 depicts the closed-loop time response of the four algorithms with truncation of the series up to term $r$.
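Since the plant and the numerical values of $K$, $T$, and $r$ are not reproduced here, the following closed-loop sketch should be read only as a qualitative illustration: it assumes a first-order plant $G(s) = 1/(s + 1)$, discretized by the backward Euler rule, and uses the truncated series (7)–(10) as FIR controllers.

```python
# Qualitative closed-loop sketch only: the plant and the values of K, T and r
# used in the experiments are not reproduced here, so G(s) = 1/(s + 1)
# (discretized by backward Euler), K = 1, T = 0.05 s and r = 10 are assumptions.
import numpy as np

def series(sign_x, exponent, r):
    """Coefficients of (1 + sign_x * x)^exponent up to order r (x = z^{-1})."""
    h = np.zeros(r + 1)
    h[0] = 1.0
    for k in range(1, r + 1):
        h[k] = h[k - 1] * (exponent - k + 1) / k * sign_x
    return h

K, T, r, n = 1.0, 0.05, 10, 400
cases = {"(7)": (-1, -0.5), "(8)": (-1, +0.5), "(9)": (+1, -0.5), "(10)": (+1, +0.5)}

for label, (sgn, a) in cases.items():
    h = series(sgn, a, r)
    e, y, out = np.zeros(n), 0.0, []
    for k in range(n):
        e[k] = 1.0 - y                               # unit step reference
        past = e[max(0, k - r):k + 1][::-1]          # e[k], e[k-1], ..., e[k-r]
        u = K * np.dot(h[:past.size], past)          # truncated-series controller
        y = (y + T * u) / (1 + T)                    # backward-Euler model of 1/(s+1)
        out.append(y)
    print(label, "final output ~", round(out[-1], 3))
```

Under these assumptions, the four loops settle at different steady-state values, since none of the truncated series contains a true integral action; the intent is only to show how the coefficient sequences of (7)–(10) can be plugged into the same control structure.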

Figure 3 depicts the frequency response of the four control algorithms, for the same values of $K$, $T$, and truncation order $r$.

We verify that (9) and (10) “interpolate” the half-order integral and derivative, (7) and (8), respectively. In fact, this result should be expected since, observing (9) and (10), we verify that the series have alternating positive and negative terms, in contrast with (7) and (8), where the terms are always positive (for the integral) and, beyond the leading one, always negative (for the derivative). In conclusion, the NP-inspired series are something “in the middle” of a fractional PID controller.

4. Conclusions

This paper presented the historical evolution and the main concepts supporting two “exotic” areas, namely, negative probabilities and fractional calculus. The observation of simple examples and of the corresponding mathematical models reveals new possibilities that remain hidden when each area is addressed separately. Based on the new perspective, new algorithms in the area of control systems were explored. The example is intended merely as a first step in taking advantage of the synergies that emerge from the analogies, from which one day useful consequences will be drawn.