
Research Article | Open Access

Volume 2020 | Article ID 7163254 | https://doi.org/10.1155/2020/7163254

Alexander K. Vidybida, "Calculating Permutation Entropy without Permutations", Complexity, vol. 2020, Article ID 7163254, 9 pages, 2020. https://doi.org/10.1155/2020/7163254

# Calculating Permutation Entropy without Permutations

Academic Editor: Eric Campos
Received: 05 May 2020
Revised: 14 Jul 2020
Accepted: 05 Aug 2020
Published: 23 Oct 2020

#### Abstract

A method for analyzing sequential data sets, similar to the permutation entropy one, is discussed. The characteristic features of this method are as follows: it preserves information about equal values, if any, in the embedding vectors; it is exempt from combinatorics; and it delivers the same entropy value as does the permutation method, provided the embedding vectors do not have equal components. In the latter case, this method can be used instead of the permutation one. If embedding vectors have equal components, this method could be more precise in discriminating between similar data sets.

#### 1. Introduction

Because of technical progress in the areas of sensors and storage devices, a huge amount of raw data about the time course of different processes, such as ECG, EEG, climate data recordings, and stock market data, has become available. These data are redundant. Data processing and classification, aimed at extracting characteristics meaningful to a nonspecialist, are based on reducing this excess of redundancy. As a result, new data are obtained, small in size and digestible by a human being. Examples of such reduced data for time series are the mean value, variance, Lyapunov exponents, correlation dimension, and attractor dimension.

A remarkable method suitable for reducing the excess of redundancy in time series, known as permutation entropy, has been proposed by Bandt and Pompe in [1]. This method is simple and transparent, is robust with respect to monotonic distortions of the raw data, and is suitable for estimating the dynamical complexity of the underlying dynamical process. Many interesting results have been obtained with straightforward application of the permutation entropy methodology in its initial form, as described in [1]. Nevertheless, this method has been subjected to critique for not taking into account absolute values of the raw data and for not treating properly the possibility of having equal values in the embedding vector (ties) [6, 7]. In this connection, it should be taken into account that any redundancy reduction method leaves out some types of information, which may be useless for one process or task and may carry useful information for another one. In the latter case, the bare idea in [1] about how to treat equal values can, and should, be modified in order to meet the purpose of a concrete situation. Examples of such modifications can be found in [8, 9] for taking into account absolute values, or in [10, 11] for treating equal values. An interesting modification of the permutation entropy method has been proposed for 3-tuple EEG data.

In the standard permutation entropy methodology, it is preferable that embedding vectors have all their components different. Otherwise, they cannot be plainly symbolized by a permutation without using additional rules, which actually treat equal values as not being such. A situation with equal values in the embedding vector may arise for a high embedding dimension, for crude quantization of measured data, for very long data sequences, and when the observed dynamical system has intrinsically only a small number of possible outputs.

This note is aimed at discussing a slightly different symbolization technique for embedding vectors, which does not refer to combinatorics and which is capable of preserving information about equal values in embedding vectors. Instead of a permutation, an embedding vector is symbolized with a single integer written in base $D$, where $D$ is the embedding dimension. In the case of no ties (no equal components in the embedding vectors), the technique is equivalent to the standard permutation entropy methodology. In the opposite case, it may discriminate between similar data sets better than the permutation entropy method does.

#### 2. Permutation Entropy

Consider a finite sequence of measurements
$$x = (x_0, x_1, \dots, x_{N-1}). \qquad (1)$$
By choosing the embedding dimension $D$, the data (1) can be embedded into a $D$-dimensional space by picking out consecutive $D$-tuples from (1). As a result, a set of $D$-dimensional embedding vectors is obtained:
$$\mathcal{V} = \{V_i \mid i = 0, 1, \dots, N - D\}, \qquad (2)$$
where each vector $V_i$ has the following form:
$$V_i = (x_i, x_{i+1}, \dots, x_{i+D-1}). \qquad (3)$$

An additional parameter of the embedding procedure is the delay $\tau$. In the above definition, we put $\tau = 1$ for simplicity. With $\tau > 1$, one would have $V_i = (x_i, x_{i+\tau}, \dots, x_{i+(D-1)\tau})$ instead of (3).
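The embedding step can be sketched in C++ as follows (an illustrative helper, not part of the paper's code; the function name `embed` and the use of `std::vector` are assumptions):

```cpp
#include <cstddef>
#include <vector>

// Build the set of embedding vectors (2)/(3) from a raw data sequence.
// D is the embedding dimension, tau is the delay.
std::vector<std::vector<double>> embed(const std::vector<double>& x,
                                       int D, int tau)
{
    std::vector<std::vector<double>> V;
    if (x.size() < static_cast<std::size_t>((D - 1) * tau + 1)) return V;
    std::size_t count = x.size() - (D - 1) * tau; // number of vectors
    for (std::size_t i = 0; i < count; ++i) {
        std::vector<double> v(D);
        for (int k = 0; k < D; ++k) v[k] = x[i + k * tau]; // (x_i, x_{i+tau}, ...)
        V.push_back(v);
    }
    return V;
}
```

For a sequence of length $N$, this yields $N - (D-1)\tau$ vectors, in agreement with (2) for $\tau = 1$.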

The data represented in (2) and/or (3) are even more redundant than those represented in (1) since, for $\tau = 1$, most data values from (1) are represented in (3) $D$ times. In the permutation entropy technique [1], each embedding vector from (2) and/or (3) is replaced with a permutation of the integers $\{0, 1, 2, \dots, D-1\}$, which is defined by the order pattern of the values composing the vector. Denote the components of $V_i$ as $v_0, v_1, \dots, v_{D-1}$, with $v_k = x_{i+k}$. For any embedding vector $V_i$, the permutation $\pi_i$ which symbolizes it is calculated as follows. Arrange all components of $V_i$ either in the descending order,
$$v_{j_0} > v_{j_1} > \dots > v_{j_{D-1}}, \qquad (4)$$
or in the ascending order [11, 14],
$$v_{j_0} < v_{j_1} < \dots < v_{j_{D-1}}, \qquad (5)$$
keeping their subscripts unchanged. (Actually, in (4) and (5), equal values (ties) are admitted as well; here, we exclude such a possibility for the sake of clarity. Equal values are discussed in the next section.) The permutation $\pi_i$ which corresponds to $V_i$ is obtained as the row of the subscripts in the rearranged vector from either (4) or (5):
$$\pi_i = (j_0, j_1, \dots, j_{D-1}). \qquad (6)$$

From the set $\mathcal{V}$ of embedding vectors, calculate a new set $\Pi$ of order patterns by replacing each vector in (2) by the corresponding permutation:
$$\Pi = \{\pi_i \mid i = 0, 1, \dots, N - D\}. \qquad (7)$$

Now, the empirical probability $p(\pi)$ of each permutation $\pi$ can be obtained by dividing the number of occurrences of $\pi$ in $\Pi$ by the total number of elements in $\Pi$. The permutation entropy of (1) is the Shannon entropy of the probability distribution $p$:
$$PE = -\sum_{k=1}^{M} p(\pi_k) \log_2 p(\pi_k), \qquad (8)$$
where $M$ is the number of different permutations in $\Pi$.
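The counting-and-entropy step can be sketched as follows (a minimal illustration; the function name and the use of plain `int` symbols are assumptions — in practice the symbols would be the permutations of Section 2 or the integers of Section 3):

```cpp
#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

// Shannon entropy (in bits) of the empirical distribution of symbols:
// count occurrences, divide by the total, and sum -p log2 p.
double empirical_entropy(const std::vector<int>& symbols)
{
    std::map<int, std::size_t> count;
    for (int s : symbols) ++count[s];
    double H = 0.0;
    double n = static_cast<double>(symbols.size());
    for (const auto& kv : count) {
        double p = kv.second / n;
        H -= p * std::log2(p);
    }
    return H;
}
```

For instance, a symbol stream with two equally probable symbols yields 1 bit, and a constant stream yields 0 bits.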

##### 2.1. Treatment of Equal Values

Equal values in an embedding vector are, to an extent, inconvenient. Indeed, if $v_j = v_k$ for some $j \ne k$ in a vector $V_i$, then $j$ and $k$ should be placed side by side in the permutation (6), but which one should go first? Due to the sameness of the values, it is impossible to uniquely determine a corresponding permutation without introducing additional rules. In some cases, the possibility of equal values can be ignored due to their low probability. This is reasonable when the embedding dimension is low and/or the data of a chaotic process are recorded with high precision [1, 15, 16]. If equal values are inevitable, the following rule is applied (in some cases, e.g., [10, 11], the opposite inequality sign is used here):
$$v_{j_m} = v_{j_{m+1}} \ \Longrightarrow\ j_m < j_{m+1}. \qquad (9)$$

The rule (9) has a different meaning depending on whether convention (4) or (5) is used. Namely, in the case of (4), an embedding vector with all components equal will be equivalent to a vector with monotonically ascending components. If (5) is adopted, then that same vector will be equivalent to a vector with monotonically descending components (Figure 1).

Without knowing the real system, it is not clear which case is better, and whether it is good or bad to label a sequence of equal values as decreasing or increasing. Actually, the permutation symbolization technique aims at reducing redundancy, and discrimination between constant and either increasing or decreasing sequences of data may appear excessive in some cases. On the contrary, when the system generating the data has only a few possible outputs, the data were subjected to crude quantization, or the embedding dimension is large, it may be useful if the presence of equal values in the embedding vector results in an order pattern preserving this fact. One possible approach to doing this is discussed in the next section.

#### 3. Arithmetic Entropy

##### 3.1. Symbolization

The following symbolization is aimed at keeping information about equal values in embedding vectors. Having a vector $V_i$, construct a sequence of integers
$$w_i = (w_0, w_1, \dots, w_{D-1}) \qquad (10)$$
by using the following rule: find the smallest component, $u_0$, in $V_i$. If $u_0$ is found at several places, put the number 0 at those places in $w_i$. Find the next smallest component, $u_1$, in $V_i$, and put the number 1 in $w_i$ at all the places where $u_1$ is found. Proceed this way until the components of $V_i$ are exhausted. At this stage, all components of $w_i$ will be determined. The $w_i$ obtained this way is used as the symbol of the embedding vector $V_i$.

For example, consider $V_i = (4, 2, 7, 2, 5)$. The corresponding symbol, or order pattern, is $w_i = (1, 0, 3, 0, 2)$. Here, information about equal values and their positions is preserved.
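This rank-with-ties rule can be sketched compactly (an illustrative re-implementation; the author's own C++ version appears in Section 3.4):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Order pattern of Section 3.1: equal components receive equal symbols.
// Each component is replaced by the rank of its value among the unique
// sorted values of V.
std::vector<int> order_pattern(const std::vector<double>& V)
{
    std::vector<double> u(V);                 // unique sorted values
    std::sort(u.begin(), u.end());
    u.erase(std::unique(u.begin(), u.end()), u.end());
    std::vector<int> w(V.size());
    for (std::size_t k = 0; k < V.size(); ++k)
        w[k] = static_cast<int>(
            std::lower_bound(u.begin(), u.end(), V[k]) - u.begin());
    return w;
}
```

With the vector (4, 2, 7, 2, 5) this yields the pattern (1, 0, 3, 0, 2).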

If $V_i$ has no equal components, it can be proven (Appendix A) that $w_i = \pi_i^{-1}$. This means that $w_i$ is the inverse of the permutation obtained for $V_i$ if convention (5) is used. Since the correspondence between permutations and their inverses is one-to-one, it does not matter which one, $\pi_i$ or $\pi_i^{-1}$, is used for calculating entropy. This further means that, for a data set and embedding method which do not deliver equal values in the embedding vectors, the symbolization used here is equivalent to the permutation one while calculating entropy. (A symbolization similar to the one described here appears to have been used before, e.g., in [12, 17]; there, however, the issue of equal values is not addressed.)

##### 3.2. Arithmetization

Suppose that the embedding vector $V_i$ in (10) has exactly $d$ unique components, where $1 \le d \le D$. In this case, the corresponding symbol $w_i$ will be a sequence of numbers chosen from the set $\{0, 1, \dots, d-1\}$ in such a way that no element of this set is missed. The latter can be formulated as the following condition:
$$\forall\, m \in \{0, \dots, d-1\}\ \ \exists\, k:\ w_k = m. \qquad (11)$$

The sequence $w_i$ can be considered as a single integer $g_i$, written in a base-$D$ positional numeral system with the digits $w_0, w_1, \dots, w_{D-1}$:
$$g_i = \sum_{k=0}^{D-1} w_k\, D^k. \qquad (12)$$

(For a single embedding vector, $d$ might be chosen as the radix instead of $D$. But $d$ may be different for different vectors, and the same integer may have different representations in different bases with (11) satisfied, e.g., $0112_3 = 1110_2$.) It is clear that there is a one-to-one correspondence between order patterns and integers obtained as shown in (12). Therefore, a set of order patterns, constructed as described in Section 3.1, can be replaced with a set of integers obtained as shown in (12):
$$G = \{g_i \mid i = 0, 1, \dots, N - D\}. \qquad (13)$$
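The arithmetization step is then a plain positional evaluation (an illustrative sketch; a plain `long long` suffices for small $D$, while the code in Section 3.4 uses a multiple-precision integer):

```cpp
#include <vector>

// Encode an order pattern w as a single base-D integer, as in (12):
// g = sum_k w[k] * D^k, with D = w.size().
long long pattern_to_int(const std::vector<int>& w)
{
    const long long D = static_cast<long long>(w.size());
    long long g = 0, digval = 1; // digval runs through D^0, D^1, ...
    for (int wk : w) {
        g += wk * digval;
        digval *= D;
    }
    return g;
}
```

For the pattern (1, 0, 3, 0, 2) with $D = 5$, this gives $1 + 3\cdot 25 + 2\cdot 625 = 1326$.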

The empirical probability $p(g)$ of finding an integer $g$ among those in $G$ can be calculated as usual, and we have for the arithmetic entropy:
$$AE = -\sum_{k=1}^{M} p(g_k) \log_2 p(g_k), \qquad (14)$$
where $M$ is the number of different integers in $G$.

For a data sequence and embedding method which do not deliver equal values in the embedding vectors, all $d = D$, and the integers $g_i$ represent the corresponding permutation order patterns unambiguously. In this case, $g_{\min} \le g_i \le g_{\max}$, where $g_{\min}$ corresponds to the pattern $(D-1, D-2, \dots, 1, 0)$:
$$g_{\min} = \sum_{k=0}^{D-1} (D-1-k)\, D^k. \qquad (15)$$

And $g_{\max}$ corresponds to the pattern $(0, 1, \dots, D-1)$:
$$g_{\max} = \sum_{k=0}^{D-1} k\, D^k. \qquad (16)$$

In this case, only $D!$ integers from the interval $[g_{\min}, g_{\max}]$ will be used, due to condition (11).

##### 3.3. How Many New Possible Order Patterns Are Obtained?

If it is decided to treat the order patterns generated from embedding $D$-vectors with some components equal as not equivalent to those from vectors with all components different, then the number of all possible patterns will be greater than $D!$. Here, we attempt to estimate how many new patterns can be obtained.

Any new pattern appears from an embedding $D$-vector with $d$ different components, where $1 \le d < D$. So, with $d$ fixed, the number of corresponding new patterns is equal to the number of $D$-digit, base-$d$ integers constructed from the digits $\{0, \dots, d-1\}$ in such a way that each of the $d$ digits is used at least once. This number can be calculated as
$$n(D, d) = d!\, S(D, d), \qquad (17)$$
where the $S(D, d)$ are the Stirling numbers of the second kind ([18], Part 5, Section 2). Considering all possible values of $d$, we have for the total number of possible new patterns:
$$n_{new}(D) = \sum_{d=1}^{D-1} d!\, S(D, d) = a(D) - D!, \qquad (18)$$
where the $a(D) = \sum_{d=1}^{D} d!\, S(D, d)$ are known as the ordered Bell numbers; see ([19], p. 337) for a naming discussion. Calculating $n_{new}(D)$ for $D = 2, \dots, 7$ (the Stirling numbers were calculated with the stirling2(D, d) function in the "maxima" computer algebra system (http://maxima.sourceforge.net/)), we see that the number of new patterns is normally greater than $D!$; see Figure 2 and also Table 1. Of course, the possible new patterns may only be significant when they can actually be observed; this depends on the process under study and the embedding method.

 D                    2   3    4    5     6      7
 Ordered Bell number  3   13   75   541   4683   47293
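The counts of Section 3.3 can be reproduced with a short calculation (an illustrative sketch using the standard recurrence for the Stirling numbers of the second kind; the function names are assumptions):

```cpp
#include <vector>

// Stirling numbers of the second kind via the recurrence
// S(n, k) = k*S(n-1, k) + S(n-1, k-1).
long long stirling2(int n, int k)
{
    std::vector<std::vector<long long>> S(n + 1,
                                          std::vector<long long>(k + 1, 0));
    S[0][0] = 1;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= k && j <= i; ++j)
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1];
    return S[n][k];
}

// Ordered Bell number a(D) = sum_{d=1}^{D} d! * S(D, d):
// the total number of arithmetic order patterns for dimension D.
long long ordered_bell(int D)
{
    long long total = 0, fact = 1;
    for (int d = 1; d <= D; ++d) {
        fact *= d; // d!
        total += fact * stirling2(D, d);
    }
    return total;
}
```

For example, ordered_bell(5) reproduces the value 541 from Table 1, and the number of new patterns is ordered_bell(D) minus D!.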
##### 3.4. Coding

Certainly, there are several possible implementations of the algorithm discussed in Sections 3.1 and 3.2. Here, the one used for the examples in Section 4 and Appendix C is shown. It is a C++ program. It is expected that the sequence (1) is organized into a one-dimensional array X[N]. For calculating the arithmetic order pattern of the vector $V_i$ shown in (3), it is necessary to pass a pointer to X[i] to the function get_numerical_pattern below as its third argument: data_point = X + i.

In the example below, X[i] is declared as double, but it can be of any type with appropriate sorting defined. The returned value is declared as mpz_class, which is a GNU multiple-precision integer (https://gmplib.org/). This is used because, for large embedding dimensions, the returned number representing an order pattern may exceed 64 bits in size (it makes sense to use large embedding dimensions only for very long sequences of data; otherwise, any observed pattern appears only once, which is unfavorable for estimating probabilities). For smaller $D$, mpz_class can be replaced with int or long everywhere in the code.

```cpp
#include <gmp.h>
#include <gmpxx.h>
#include <forward_list>

/*
 Function calculates the numerical representation of the order pattern
 of an embedding vector V_i = (x_i, x_{i+tau}, ...).
 Here D is the embedding dimension and tau is the delay.
 The data_point points to the first component of V_i in the
 array of raw data.
*/
mpz_class get_numerical_pattern (int D, int tau, const double *data_point)
{
  int k;
  std::forward_list<double> FL;
  auto it = FL.before_begin ();
  for (k = 0; k < D; k++) it = FL.emplace_after (it, data_point [k * tau]);
  FL.sort ();   // ascending order
  FL.unique (); // keep unique values only

  int *pDpnm = new int [D]; // order pattern will be here
  int tag = 0;
  for (auto it = FL.begin (); it != FL.end (); ++it)
    {
      for (k = 0; k < D; k++)
        if (*it == data_point [k * tau]) pDpnm [k] = tag;
      tag++;
    }

  mpz_class pnum = 0;   // arithmetic order pattern (initial value)
  mpz_class digval = 1; // initial value of a single digit
  for (k = 0; k < D; k++)
    {
      pnum += pDpnm [k] * digval;
      digval *= D;
    }
  delete [] pDpnm; // free the scratch array
  return pnum;
}
```

This code is transparent and does not refer to combinatorics. At the same time, provided an embedding vector does not have equal components, when the loop filling pDpnm (the second for loop above) is complete, we obtain in the array pDpnm[D] a permutation $\pi^{-1}$, where $\pi$ is the permutation for that vector obtained in accordance with the standard rules reproduced in Section 2, with (5) adopted.

#### 4. Example

The discussed methodology has been tested on two surrogate sequences. The purpose was to demonstrate that, for a pair of sequences, the standard permutation entropy method may give roughly the same entropy, whereas the arithmetic entropy may be considerably different.

For calculating the standard permutation entropy in a situation when equal components in embedding vectors are possible, we replace the following fragment:

```cpp
for (auto it = FL.begin (); it != FL.end (); ++it)
  {
    for (k = 0; k < D; k++)
      if (*it == data_point [k * tau]) pDpnm [k] = tag;
    tag++;
  }
```

in the code of Section 3.4 with the following one:

```cpp
for (auto it = FL.begin (); it != FL.end (); ++it)
  {
    for (k = D - 1; k >= 0; k--)
      if (*it == data_point [k * tau]) pDpnm [k] = tag++;
  }
```

With such a replacement, we get in the array pDpnm[D] the permutation which is inverse to the one obtained for the vector in the standard permutation entropy symbolization with rules (5) and (9) adopted. As mentioned above, the usage of inverse permutations instead of the initial ones delivers the same value for the standard permutation entropy.

The two sequences, S1 and S2, are obtained as follows: by means of the function gsl_rng_uniform_int from the GNU Scientific Library [20], we generate equally probable random numbers from the set {0, 1, 2, 3, 4}. Each obtained random number val is written into S1. The same number is written into S2, provided it is not equal to the number written to S2 at the previous step; if it is, then the number (val + 1) mod 5 is written instead. This introduces a nonzero correlation between consecutive values in S2; for example, in S2, any two consecutive values are always different.
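The construction of S1 and S2 can be sketched as follows (the original uses gsl_rng_uniform_int from the GNU Scientific Library; here std::mt19937 is substituted to keep the sketch self-contained, and the function name and seed are assumptions):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Generate the two surrogate sequences of Section 4:
// S1 is i.i.d. uniform on {0,...,4}; S2 repeats S1 except that a value
// equal to the previous S2 entry is replaced by (val + 1) mod 5.
void make_sequences(std::size_t n, std::vector<int>& S1,
                    std::vector<int>& S2, unsigned seed = 12345)
{
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> uni(0, 4);
    S1.clear();
    S2.clear();
    for (std::size_t i = 0; i < n; ++i) {
        int val = uni(rng);
        S1.push_back(val);
        if (!S2.empty() && val == S2.back()) val = (val + 1) % 5;
        S2.push_back(val); // consecutive S2 values are always different
    }
}
```

By construction, no two adjacent entries of S2 coincide, which is the correlation the example exploits.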

1 000 000-long S1 and S2 sequences were produced, and both the permutation and the arithmetic entropies have been calculated. The results are shown in Tables 2 and 3.

      PE     AE           PE     AE
 S1   2.497  3.684   S1   4.390  6.165
 S2   2.368  2.919   S2   4.187  5.238
Entropy is given in bits.
      PE     AE           PE     AE
 S1   2.498  3.684   S1   4.393  6.166
 S2   2.407  3.676   S2   4.224  6.098

Notice that the arithmetic entropy is considerably greater than the permutation one. This is due to the high frequency of embedding vectors with equal components. Also, from Table 2, it can be seen that the arithmetic entropy discriminates better between S1 and S2. However, the case with delay $\tau = 2$ shown in Table 3 is not similarly conclusive. This might be due to the construction method of the S2 sequence: namely, by pulling from S2 embedding vectors with delay 2, we may get vectors with equal adjacent components, similarly to the S1 case. This alleviates the difference between S1 and S2. For $\tau = 1$, embedding vectors for S2 do not have equal adjacent components. One more example is given in Appendix C.

#### 5. Conclusions and Discussion

In this note, we have discussed a method for calculating entropy in a sequence of data, which is similar to the permutation entropy method. The characteristic features of this method are as follows:
(i) It treats equal components in the embedding vectors as being equal instead of ordering them artificially.
(ii) It is entirely exempt from combinatorics, labeling order patterns by integers instead of permutations.
(iii) If embedding vectors do not have equal components, this method delivers exactly the same value for the entropy as does the standard permutation entropy one.

In the symbolization procedure discussed in Section 3.1, new order patterns may appear as compared to the standard permutation method (Section 3.3). Those new patterns arise from embedding vectors with some components being equal to each other. In the standard permutation entropy method, the embedding vectors characterized by those new patterns, if any, are labeled by permutations as if there were no equal components. This is made possible through ordering equal values in accordance with the rule (9).

Mathematically, replacing embedding vectors with their order patterns means constructing a quotient set from the set of all embedding vectors with respect to some equivalence relation [10, 21, 22]. In the case of permutation entropy, the corresponding equivalence relation is defined by using (9) and either (4) or (5); denote it by $\sim_{PE}$. For arithmetic entropy, the corresponding equivalence relation is defined by the algorithm described in the first paragraph of Section 3.1; denote it by $\sim_{AE}$. It is clear that for two embedding vectors $V_1$ and $V_2$, if $V_1 \sim_{AE} V_2$, then $V_1 \sim_{PE} V_2$: if $V_1$ and $V_2$ have the same arithmetic order pattern, then they do have the same permutation order pattern. That means that $\sim_{PE}$ is a coarser relation than $\sim_{AE}$. Other equivalence relations could be offered, which are coarser than $\sim_{PE}$, finer than $\sim_{AE}$, lying in between, or incomparable with both. A symbolization which still uses permutations, but is equivalent to the one discussed here as regards the treatment of equal values in embedding vectors, has been proposed in [11]; see the discussion in Appendix B. Which one is better depends on the data sequence and on which kind of redundancy one intends to strip.

#### A. Equivalence with Permutations

The following theorem proves the statement made in Section 3.1.

Theorem A.1. Suppose that an embedding vector $V$ does not have equal components. Then its symbolic pattern $w$, obtained as described in Section 3.1 after equation (10), represents the permutation which is inverse to $\pi$, the permutation obtained in the standard permutation entropy approach with convention (5) adopted:
$$w = \pi^{-1}. \qquad (A.1)$$

Proof. Since $V$ has no equal components, $w$ represents some permutation of the sequence $(0, 1, \dots, D-1)$. Furthermore, the procedure of obtaining $w$ from $V$ does not change the rank order: for any $k$, $l$, if $v_k < v_l$, then $w_k < w_l$, and vice versa. If so, then $w$ can be used instead of $V$ for calculating the standard permutation $\pi$:
$$\pi(V) = \pi(w). \qquad (A.2)$$
In this course, after arranging the elements of $w$ as required in (5), one obtains
$$w_{j_0} < w_{j_1} < \dots < w_{j_{D-1}}, \qquad w_{j_m} = m. \qquad (A.3)$$
The obtained permutation $\pi = (j_0, j_1, \dots, j_{D-1})$ acts as follows:
$$\pi(m) = j_m. \qquad (A.4)$$
Now, take into account that, due to (A.3), $w$ has the number $m$ at position $j_m$. That means that the order pattern $w$, if treated as a permutation, acts as follows:
$$w(j_m) = m. \qquad (A.5)$$
The latter just means that $w = \pi^{-1}$.
Due to this theorem, the method discussed in this note is equivalent to the standard permutation entropy method if in any embedding vector, any two components are different.

#### B. Comparison with Modified Permutation Entropy

Several versions of the modified permutation entropy symbolization have been proposed. We analyze here those proposed in [10, 11]. Considering firstly [10], the symbolization proposed there is obtained as follows: having an embedding vector $V$, arrange its components as shown in (5), with their subscripts retained; if there are equal components, arrange their subscripts similarly to the rule (9), or in any other way. Before fetching the row of subscripts in the resulting vector as the modified symbol, do the following preparation: if there is a group of equal components in $V$, replace all subscripts in this group by the smallest among them; do this with all groups of equal components in $V$. Use the row of subscripts in the vector modified this way as the modified symbol of $V$. A symbolization modified this way retains some information about equal components in $V$. Let us denote this type of symbolization as MPE.

By comparing the values presented in Table 1 with the data of Table 1 in [10], we see that the total number of possible patterns is bigger here. Therefore, it could be expected that the MPE symbolization used in [10] is coarser than that discussed in this note. An additional hint in the same direction is that, for some embedding vectors, the symbolization in [10] gives the same result, while the method discussed in this note gives two different results. Here is one example: $V_1 = (1, 2, 1, 2)$ and $V_2 = (1, 2, 2, 1)$. The MPE symbolization in [10] gives for both $V_1$ and $V_2$ the same order pattern $(0, 0, 1, 1)$, whereas the AE symbolization gives $(0, 1, 0, 1)$ for $V_1$ and $(0, 1, 1, 0)$ for $V_2$, resulting in two different numerical patterns, 68 and 20, calculated for $D = 4$ as shown in (12). Notice now that for any two embedding vectors $V_1$ and $V_2$,
$$w(V_1) = w(V_2) \ \Longrightarrow\ \mu(V_1) = \mu(V_2), \qquad (B.1)$$
where $w$ denotes the AE order pattern and $\mu$ the MPE symbol.

Indeed, the MPE symbol of any vector $V$ is obtained through rearranging the components of $V$ in accordance with their rank order. The symbol $w(V)$, if considered as a vector, has the same rank order of its components as $V$ does. Therefore, $w(V)$ can be used for calculating the MPE symbol instead of $V$ itself. If so, then (B.1) becomes evident. The above reasoning proves that the MPE and AE methods of symbolization are comparable, and AE is finer than MPE.

Consider now the symbolization used for the modified permutation entropy proposed in [11]. In this symbolization, each embedding vector $V$ is symbolized with a composite symbol $\phi(V)$ of the following structure:
$$\phi(V) = (\pi;\ b_1, b_2, \dots, b_{D-1}), \qquad (B.2)$$
where $\pi$ is a permutation. The second half in (B.2), $(b_1, \dots, b_{D-1})$, keeps information about equal components in $V$. Call this symbolization MPE2. The symbol $\phi(V)$ is obtained as follows: arrange the components of $V$ in the ascending order, keeping their subscripts. As a result, we obtain a sequence of groups consisting of equal components. Each group may have from one to $D$ elements; of course, in the latter case, there will be only one group. The value composing each group in the sequence increases from left to right. Arrange the subscripts in each group in the ascending order. Denote the sequence of components prepared this way, with their initial subscripts, as $V^s$. The row of subscripts in $V^s$ is the $\pi$ from (B.2). This is the standard PE symbol with (5) adopted, with the only difference that in the rule (9), the opposite inequality sign is used. The sequence $(b_1, \dots, b_{D-1})$ is composed of zeros and ones by the following rule: if the components of $V^s$ at positions $m$ and $m+1$ are equal, then $b_m = 1$; otherwise, $b_m = 0$.

Theorem B.2. Symbolization MPE2 produces the same partition of a set of embedding vectors as does the AE one described in Section 3.1.

Proof. In order to prove this statement, we need to show that for any $V_1$ and $V_2$, the following equivalence holds:
$$\phi(V_1) = \phi(V_2) \iff w(V_1) = w(V_2). \qquad (B.3)$$
It is easily seen that $\phi(V)$ can be unambiguously recovered from $w(V)$. Indeed, $V$ and $w(V)$, considered as a vector, have the same rank order of components, and the calculation of $\phi$ is based exclusively on the rank order. Therefore,
$$\phi(V) = \phi(w(V)). \qquad (B.4)$$
Thus, vectors with the same $w$ will have the same $\phi$. This proves one half of (B.3). In order to prove the second half, we need to show how $w(V)$ can be unambiguously recovered from $\phi(V)$. For this purpose, we use the equality (B.4). So, if we arrange the components of $w(V)$ in the ascending order retaining subscripts, we obtain, instead of $V^s$ above, a vector $w^s$. This vector consists of groups of equal values: the first group has only zeros, the second one has only 1s, and the last one has only $(d-1)$s, where $d$ is the number of unique components in $V$ or $w(V)$. The sequence of subscripts in $w^s$ is the permutation which constitutes the first part of $\phi(V)$. If one would have $w^s$ without the subscripts inherited from $w(V)$, in the form of a sequence $(u_1, u_2, \dots, u_D)$, the required $w(V)$ might be obtained by applying the permutation $\pi$ to it: namely, $w_{\pi(m)} = u_m$, where $\pi(m)$ is taken from the permutation $\pi = (\pi(1), \dots, \pi(D))$. The required sequence $(u_1, \dots, u_D)$ can be recovered from the second part of $\phi(V)$ as follows: put $u_1 = 0$; then, for $m = 2, \dots, D$, if $b_{m-1} = 1$, put $u_m = u_{m-1}$; otherwise, put $u_m = u_{m-1} + 1$. This way, the required sequence $(u_1, \dots, u_D)$ is obtained, which completes the proof.

#### C. Example

Here, we consider the sequence of digits in the decimal expansion of $\sqrt{2}$ (see [23], Section 4.1). The first 1 million digits in the decimal expansion of $\sqrt{2}$ have been downloaded from https://catonmat.net/tools/generate-sqrt2-digits and https://apod.nasa.gov/htmltest/gifcity/sqrt2.1mil.

Denote this sequence S1. The first 10 million digits in the decimal expansion of $\sqrt{2}$ have been downloaded from https://apod.nasa.gov/htmltest/gifcity/sqrt2.10mil.

Denote this sequence S10. Both PE and AE were calculated for both S1 and S10 for different embedding dimensions $D$ with delay $\tau = 1$. The values of $D$ were chosen based on the number of occurrences of different order patterns in S1 (Table 4). Based on the data of Table 4, we skip the $D = 6$ and $D = 7$ cases because the number of occurrences of some arithmetic entropy patterns is too small for calculating probabilities. The values obtained for the entropy are presented in Tables 5 and 6.

 D       2        3        4       5      6      7
 n_min   100169   10055    1024    105    14     1
 n_max   450143   120545   21274   2661   300    47
 n_pat   3        13       75      541    4683   47293
n_min denotes the smallest number of repetitions in S1 for a pattern, n_max is the biggest number of repetitions, and n_pat is the total number of patterns found.
 D     2       3       4       5
 AE    1.369   3.477   6.067   8.992
 PE    0.993   2.563   4.536   6.816
 NAE   0.864   0.940   0.974   0.990
 NPE   0.993   0.992   0.989   0.987
Entropy is given in bits. NAE is calculated as AE$/\log_2 n_{pat}$, where $n_{pat}$ is taken from the bottom row of Table 4; NPE is calculated similarly as PE$/\log_2 D!$.
 D     2       3       4       5
 AE    1.369   3.476   6.066   8.991
 PE    0.993   2.563   4.537   6.817
 NAE   0.864   0.939   0.974   0.990
 NPE   0.993   0.992   0.990   0.987

The data, which are obtained numerically, can be checked analytically. Indeed, the number $\sqrt{2}$ is believed to be base-10 normal [23]. This means that any combination of $m$ digits can be found in the expansion with probability $10^{-m}$. For example, if $D = 2$, there are 10 two-digit combinations with AE pattern $(0, 0)$, 45 combinations with AE pattern $(0, 1)$, and the same amount with AE pattern $(1, 0)$. This gives for the probabilities $p(0,0) = 0.1$, $p(0,1) = 0.45$, and $p(1,0) = 0.45$, and $AE = -(0.1 \log_2 0.1 + 2 \cdot 0.45 \log_2 0.45) \approx 1.369$. In the PE symbolization, both $(0, 0)$ and $(1, 0)$ correspond to one permutation, and $(0, 1)$ corresponds to the other (we use here the rule (9) with the inverse inequality sign). This gives $PE = -(0.55 \log_2 0.55 + 0.45 \log_2 0.45) \approx 0.993$, in agreement with Tables 5 and 6.
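This analytic check can be reproduced directly (an illustrative sketch; the function names are assumptions):

```cpp
#include <cmath>

// AE of digit pairs under the base-10 normality assumption: pattern
// probabilities are 0.1 for (0,0) and 0.45 each for (0,1) and (1,0).
double ae_pairs()
{
    double p00 = 10.0 / 100.0, p01 = 45.0 / 100.0, p10 = 45.0 / 100.0;
    return -(p00 * std::log2(p00) + p01 * std::log2(p01)
             + p10 * std::log2(p10));
}

// PE of digit pairs: the tied pattern is merged with one of the strict
// ones, leaving two permutations with probabilities 0.55 and 0.45.
double pe_pairs()
{
    double p0 = 55.0 / 100.0, p1 = 45.0 / 100.0;
    return -(p0 * std::log2(p0) + p1 * std::log2(p1));
}
```

These evaluate to approximately 1.369 and 0.993 bits, matching the $D = 2$ column of Tables 5 and 6.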

From Tables 5 and 6, we see that AE is usually bigger than PE. This could be explained by the bigger total number of patterns available in the AE symbolization. Perhaps, for the same reason, the normalized AE is smaller than NPE for small $D$. What seems unexpected is the opposite behavior of NPE and NAE with growing $D$: namely, NAE is an increasing and NPE a decreasing function of $D$ for the parameter set considered ($D$ = 7 and 8 were also considered for S10; the results, as regards decreasing and increasing, support those observed for smaller $D$). As illustrated in the previous paragraph, the $D$-tuples of digits from the expansion sequence are distributed unevenly between different order patterns, both for PE and AE; this might explain the dispersion of the patterns' frequencies observed elsewhere. The abovementioned behavior with increasing $D$ suggests that the unevenness decreases for AE and increases for PE, at least in some "normalized" sense. This is for the $\sqrt{2}$ expansion. Whether a similar behavior takes place for other sequences, and a possible practical utilization of this fact, require additional study.

#### Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

#### Disclosure

In this paper, the following free software has been used: (i) the Linux operating system (https://getfedora.org/); (ii) the GNU Scientific Library [20] (https://www.gnu.org/software/gsl/); (iii) the GNU Multiple Precision Arithmetic Library (https://gmplib.org/); (iv) Maxima, a free Computer Algebra System (http://maxima.sourceforge.net/); and (v) RefDB, a free Reference Manager created by Markus Hoenicka (http://refdb.sourceforge.net/).

#### Conflicts of Interest

The author declares that there are no conflicts of interest.

#### Acknowledgments

The work was partially supported by the Program of Fundamental Research of the Department of Physics and Astronomy of the National Academy of Sciences of Ukraine “Mathematical models of nonequilibrium processes in open systems” (N 0120U100857).

1. C. Bandt and B. Pompe, “Permutation entropy: a natural complexity measure for time series,” Physical Review Letters, vol. 88, no. 17, Article ID 174102, 2002. View at: Publisher Site | Google Scholar
2. A. Porta, S. Guzzetti, N. Montano, R. Furlan, M. Pagani et al., “Entropy, entropy rate, and pattern classification as tools to typify complexity in short heart period variability series,” IEEE Transactions on Biomedical Engineering, vol. 48, no. 11, pp. 1282–1291, 2001. View at: Publisher Site | Google Scholar
3. M. Zanin, L. Zunino, O. A. Rosso, and D. Papo, “Permutation entropy and its main biomedical and econophysics applications: a review,” Entropy, vol. 14, no. 8, pp. 1553–1577, 2012. View at: Publisher Site | Google Scholar
4. A. F. Bariviera, M. B. Guercio, L. B. Martinez, and O. A. Rosso, “A permutation information theory tour through different interest rate maturities: the libor case,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 373, no. 2056, Article ID 20150119, 2015. View at: Publisher Site | Google Scholar
5. L. Tylová, J. Kukal, V. Hubata-Vacek, and O. Vyšata, “Unbiased estimation of permutation entropy in EEG analysis for Alzheimer’s disease classification,” Biomedical Signal Processing and Control, vol. 39, pp. 424–430, 2018. View at: Publisher Site | Google Scholar
6. L. Zunino, F. Olivares, F. Scholkmann, and O. A. Rosso, “Permutation entropy based time series analysis: equalities in the input signal can lead to false conclusions,” Physics Letters A, vol. 381, no. 22, pp. 1883–1892, 2017. View at: Publisher Site | Google Scholar
7. D. Cuesta–Frau, M. Varela–Entrecanales, A. Molina–Picó, and B. Vargas, “Patterns with equal values in permutation entropy: do they really matter for biosignal classification?” Complexity, vol. 2018, Article ID 1324696, 15 pages, 2018. View at: Publisher Site | Google Scholar
8. H. Azami and J. Escudero, “Amplitude-aware permutation entropy: illustration in spike detection and signal segmentation,” Computer Methods and Programs in Biomedicine, vol. 128, pp. 40–51, 2016. View at: Publisher Site | Google Scholar
9. Z. Chen, Y. Li, H. Liang, and Y. Jing, “Improved permutation entropy for measuring complexity of time series under noisy condition,” Complexity, vol. 2019, Article ID 1403829, 12 pages, 2019. View at: Publisher Site | Google Scholar
10. C. Bian, Q. Chang, D. Qianli, Y. Ma, and Q. Shen, “Modified permutation-entropy analysis of heartbeat dynamics,” Physical Review E, vol. 85, no. 2, Article ID 021906, 2012. View at: Publisher Site | Google Scholar
11. T. Haruna and K. Nakajima, “Permutation approach to finite-alphabet stationary stochastic processes based on the duality between values and orderings,” The European Physical Journal Special Topics, vol. 222, no. 2, pp. 383–399, 2013. View at: Publisher Site | Google Scholar
12. S. Berger, G. Schneider, F. E. Kochs, and D. Jordan, “Permutation entropy: too complex a measure for EEG time series?” Entropy, vol. 19, no. 12, Article ID 692, 2017. View at: Publisher Site | Google Scholar
13. K. Keller, A. Unakafov, and V. Unakafova, “Ordinal patterns, entropy, and EEG,” Entropy, vol. 16, no. 12, pp. 6212–6239, 2014. View at: Publisher Site | Google Scholar
14. T. Gutjahr and K. Keller, “Ordinal pattern based entropies and the Kolmogorov–Sinai entropy: an update,” Entropy, vol. 22, no. 1, Article ID 63, 2020. View at: Publisher Site | Google Scholar
15. C. Bandt, “Ordinal time series analysis,” Ecological Modelling, vol. 182, no. 3-4, pp. 229–238, 2005. View at: Publisher Site | Google Scholar
16. W. Aziz and M. Arif, “Multiscale permutation entropy of physiological time series,” in Proceedings of the 2005 Pakistan Section Multitopic Conference, pp. 1–6, Karachi, Pakistan, December 2005. View at: Publisher Site | Google Scholar
17. C. W. Kulp and L. Zunino, “Discriminating chaotic and stochastic dynamics through the permutation spectrum test,” Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 24, Article ID 033116, 2014. View at: Publisher Site | Google Scholar
18. J. Riordan, An Introduction to Combinatorial Analysis, John Wiley, Hoboken, NJ, USA, 1958.
19. N. Pippenger, “The hypercube of resistors, asymptotic expansions, and preferential arrangements,” Mathematics Magazine, vol. 83, no. 5, pp. 331–346, 2010. View at: Publisher Site | Google Scholar
20. M. Galassi, J. Davies, J. Theiler et al., GNU scientific library reference manual, Network Theory Ltd, 2009, https://www.freetechbooks.com/network-theory-ltd-p1818.html.
21. K. Keller, M. Sinn, and J. Emonds, “Time series from the ordinal viewpoint,” Stochastics and Dynamics, vol. 7, no. 2, pp. 247–272, 2007. View at: Publisher Site | Google Scholar
22. A. B. Piek, I. Stolz, and K. Keller, “Algorithmics, possibilities and limits of ordinal pattern based entropies,” Entropy, vol. 21, no. 6, Article ID 547, 2019. View at: Publisher Site | Google Scholar
23. M. Queffélec, “Old and new results on normality,” in Dynamics & Stochastics, Lecture Notes–Monograph Series, D. Denteneer, F. den Hollander, and E. Verbitskiy, Eds., pp. 225–236, Institute of Mathematical Statistics, 48 edition, 2006, https://imstat.org/. View at: Google Scholar