Journal of Mathematics


Research Article | Open Access

Volume 2021 | Article ID 6651445 | https://doi.org/10.1155/2021/6651445

Amirul Aizad Ahmad Fuad, Tahir Ahmad, "Ordering of Transformed Recorded Electroencephalography (EEG) Signals by a Novel Precede Operator", Journal of Mathematics, vol. 2021, Article ID 6651445, 14 pages, 2021. https://doi.org/10.1155/2021/6651445

Ordering of Transformed Recorded Electroencephalography (EEG) Signals by a Novel Precede Operator

Academic Editor: Georgios Psihoyios
Received: 19 Nov 2020
Revised: 13 Apr 2021
Accepted: 20 Apr 2021
Published: 08 May 2021

Abstract

Recorded electroencephalography (EEG) signals can be represented as square matrices, which have been extensively analyzed using mathematical methods to extract invaluable information concerning brain function in terms of observed electrical potentials; such information is critical for diagnosing brain disorders. Several studies have revealed that certain of these square matrices, in particular those related to so-called "elementary EEG signals," exhibit properties similar to those of prime numbers, in that every square EEG matrix can be regarded as a composite of these elementary signals. A new approach to ordering square matrices is pivotal to extending this view of square matrices as composite numbers. In this paper, several ordering concepts are investigated and a new technique for ordering matrices is introduced. Finally, some properties of this matrix order are presented, and potential applications of the technique to analyzing EEG signals are discussed.

1. Introduction and Motivation

Epilepsy is a common neurological disease that affects 1% of the world's population [1]. People of all ages can be affected by this chronic brain disorder [2], which lowers sufferers' quality of life, since a seizure may occur at any time. Early diagnosis can not only improve quality of life but also prevent patients from experiencing accidents. The diagnosis of epilepsy is often not straightforward, and misdiagnosis does occasionally occur [3]. A detailed and reliable eyewitness account of the event is the most crucial piece of information for indicative assessment, but this may not be accessible [4]. In most cases, electroencephalography (EEG) is an essential diagnostic test for assessing patients with possible epilepsy. Besides providing diagnostic support [5], EEG can also assist in classifying the underlying epileptic syndrome [6].

Mathematical analysis of EEG signals offers medical professionals vital information regarding brain activity during a seizure, thus increasing the understanding of complex brain function [7, 8]. In general, relevant information is extracted from EEG signals via two main methods: linear and nonlinear. Linear analysis (e.g., Fourier and wavelet transforms) has been successfully executed and has produced several good results [9–15]. However, since linear methods disregard the underlying nonlinear EEG dynamics, the results can provide only a limited amount of information concerning the brain's electrical activity. By contrast, it is commonly accepted that the brain is a chaotic dynamical system; therefore, the EEG signals it generates are likewise considered to be chaotic [16–18]. Additional information can be extracted from EEG signals by progressively incorporating nonlinear analysis, which reveals features that cannot be measured via linear methods [19].

Crucial to diagnosing the disorder is solving the neuromagnetic inverse problem to identify the location of epileptic foci [20]. Therefore, fuzzy topographic topological mapping (FTTM) was introduced in [21] to determine the epileptic foci. More recently, FTTM has been extensively utilized to study the features of the recorded EEG signals of seizure patients (see [22–27]). Most notably, Yun in [28] claimed that one of the components of FTTM, namely, the magnetic contour, obeys the associative law, which is also satisfied in turn by events in time [29]. The author concluded that the magnetic contour is a plane containing information. This prompted Binjadhnan in [25, 30] to perform Krohn–Rhodes decomposition on a set of square matrices of EEG signals. The scholar found a remarkable result: the EEG signals taken during an epileptic seizure (henceforth called EEG-signal square matrices for the remainder of this paper) are not chaotic, but rather exhibit ordered patterns in the form of simple algebraic structures, as expressed by Theorem 1.

Theorem 1 (see [30]). Any invertible square matrix of EEG-signal readings during an epileptic seizure at a given time can be written as a product of elementary EEG signals during an epileptic seizure in one and only one way.

Theorem 1 states that the elementary EEG signals (i.e., unipotent and diagonal EEG signals) constitute the building blocks of all EEG signals. This theorem, to a certain extent, is similar to the fundamental theorem of arithmetic, which holds that prime numbers are the multiplicative building blocks of the integers [31]. Equally significant are results indicating that these matrices have properties resembling those of prime numbers via the Jordan–Chevalley decomposition [32]. The well-ordering property of the positive integers is vital in producing one of the most beautiful results in the study of prime numbers, namely, the infinitude of primes. Therefore, a technique for ordering matrices is required to extend the work of viewing the elementary EEG signals as prime numbers. The analogy between elementary EEG signals and prime numbers is of vital importance, since the pattern of EEG signals can then be investigated in terms of the pattern of prime numbers. Hence, the goal of this paper is to introduce a technique for ordering transformed EEG signals (in terms of square matrices).

The remainder of this paper is organized as follows. In Section 2, a brief review of a few concepts and techniques for ordering matrices is presented, along with their viability for ordering transformed EEG signals. In Section 3, a new technique for ordering matrices, namely, the precede operator, is introduced, allowing several ordering properties to be obtained. Next, in Section 4, this binary relation is shown to fulfill the partial-order properties and, beyond that, to be a total order. In Section 5, several results are obtained when the order is applied to symmetric matrices. Then, the implementation of the precede operator on real EEG-signal data is presented in Section 6. The interpretation of the results and their connection with the prime numbers are discussed in Section 7. Finally, we bring the paper to a close with concluding remarks concerning the need for such a partial order. Throughout the following sections, every matrix is considered to be a square matrix unless otherwise stated.

2. Concepts for Ordering Matrices

Over the past few decades, mathematicians and applied scientists alike have taken a deep interest in the ordering of matrices. Several order relations for matrix algebra have been produced in connection to a series of applications relevant to different branches of mathematics and its applications. These order relations include minus partial order [33], star partial order [34], sharp partial order [35], and matrix majorization [36]. Mitra et al. [37] wrote a comprehensive monograph in which they presented developments in the field of matrix ordering and shorted operators for finite matrices in a unified way, thus sparking research interest in this topic.

Matrix partial ordering has applications in many different areas; for instance, Liu [38] developed applications for comparing linear models. Moreover, in the field of statistics, Baksalary and Puntanen [39] presented the best linear unbiased estimator in a general Gauss-Markov model. At the same time, Dahl et al. [40] characterized a binary relation involving stochastic matrices (namely, matrix majorization), which is very useful for comparisons of statistical experiments. Additionally, in the field of finance, Fontanari et al. [41] proposed a technique called quantum majorization to compare and rank correlation matrices such that portfolio risk can be more significantly assessed.

The minus partial order is the fundamental matrix partial order, of which almost all subsequent partial orders (including the star and sharp orders) constitute extensions. Such extensions have been created through the addition of restrictions to the minus partial order. The minus partial order (which was originally called the plus order) was established by Hartwig in [33] and independently by Nambooripad in [42] to generalize conventional partial orders on semigroups. Antezana et al. [43] and Šemrl [44] extended this partial order such that it could be applied in an objective way to operators on infinite-dimensional spaces. Djikić et al. [45] documented a new representation of the minus order on the algebra of bounded linear operators on a Hilbert space. The natural partial order of Vagner on inverse semigroups and the star order of Drazin can be extended through minus order [37]. One key feature to note is that these partial orders are defined via the method of generalized inverses.

Another essential ordering concept for matrices is majorization, which has been applied across many fields including economics [46], statistics [47, 48], and, most recently, quantum mechanics [49]. This concept was first introduced in a classical book by Hardy et al. [50]. Later, Marshall et al. [36] extensively treated both the theory and application of majorization. Torgersen in [51, 52] studied the generalization of vector majorization and developed the theory of statistical-experiment comparison. This theory is intended to answer the question “What conditions must be fulfilled in order to say that one statistical experiment provides more information than another?” A simple experiment can be found in [53], in which the conventional notion introduced by the author is closely related to that of vector majorization. However, while these generalizations evolved from statistical studies, they are not regularly discussed in the linear-algebra literature. Dahl [54] introduced and studied the generalization of (vector) majorization as it applies in the notable case of matrices with rows. The classical concept of majorization between vectors can be generalized via matrix majorization [55].
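The vector-majorization relation underlying these generalizations can be checked directly. The following sketch (the function name and tolerance handling are our own, not from the paper or the cited monographs) tests whether one vector is majorized by another in the classical Hardy-Littlewood-Pólya sense:

```python
import numpy as np

def is_majorized(x, y, tol=1e-9):
    """Return True if vector x is majorized by y: the two vectors have
    equal totals, and every partial sum of the decreasingly sorted
    entries of x is bounded by the corresponding partial sum for y."""
    xs = np.sort(x)[::-1]  # entries of x in decreasing order
    ys = np.sort(y)[::-1]  # entries of y in decreasing order
    if abs(xs.sum() - ys.sum()) > tol:
        return False
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))
```

For instance, the uniform vector [2, 2, 2] is majorized by [1, 2, 3] (same total, more "spread out"), but not conversely; this asymmetry is exactly what makes majorization a useful comparison of spread or inequality.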

Some of the techniques of ordering matrices found in the literature, along with their real-world applications, advantages of the techniques, and their limitations (with respect to the purpose of ordering transformed EEG signals), are summarized in Table 1.


Table 1: Techniques for ordering matrices, with their real-world applications, advantages, and limitations.

Multivariate majorization
  Real-world applications: (i) measuring income inequalities and comparing the contents of experiments [36, 40]; (ii) comparing the information of classical or quantum physical states [56]; (iii) network flow theory [55]; (iv) measuring income inequalities [57–59]; (v) measuring experimental designs and survey sampling [60].
  Advantages: (i) more than one attribute of a system, such as income inequality, can be compared; (ii) matrices of different dimensions can be compared.
  Limitations: requires the existence of a doubly stochastic matrix.

Quantum majorization
  Real-world applications: (i) comparing and ranking correlation matrices to assess portfolio risk in a unified framework [41]; (ii) comparing quantum processes, whereby a complete set of entropic conditions for state transformations in resource theories of asymmetry and quantum thermodynamics is derived [49].
  Advantages: (i) it is a generalization of matrix majorization; (ii) the technique can be applied to all quantum states, whereas previous results are limited to a restricted family of states; (iii) quantum majorization is preferred mainly for two reasons: (a) verification of the data can be done easily and (b) the axiomatic approach commonly used in financial and actuarial mathematics is satisfied.
  Limitations: (i) requires the existence of completely positive and trace-preserving (CPTP) maps; (ii) additional tools are required for the case of approximate transformations.

Loewner's ordering
  (a) Multivariate analysis [61]. Advantage: generalizes univariate statistical analysis. Limitation: limited to symmetric matrices of the same order.
  (b) Matrix ordering of special C-matrices for statistical analysis [62]. Advantage: facilitates the comparison of information matrices between corresponding block designs and of the dispersion of two multinomial distributions. Limitation: limited to the special case of a C-matrix in experimental design theory.
  (c) Image processing [63–65]. Advantages: (i) fundamental concepts of mathematical morphology can be transferred to matrix-valued data; (ii) the ordering technique can be applied to higher-dimensional tensor data. Limitation: limited to the set of symmetric matrices.

Partial order induced by affine-invariant geometry
  Real-world applications: information geometry for performing statistical analysis [66].
  Advantages: (i) the ordering technique is critical for studying the monotonicity of functions; (ii) it can be applied to study dynamical systems and the convergence analysis of algorithms defined on matrices.
  Limitations: limited to the set of positive definite matrices derived from the affine-invariant geometry.

Sharp partial order
  Real-world applications: autonomous linear systems [67, 68].
  Advantages: enables a comparison between two autonomous systems and the extraction of much more information.
  Limitations: requires the existence of group inverses.

Minus partial order
  Real-world applications: compartmental control systems [69].
  Advantages: (i) the performance and efficiency of compartmental control system models, such as models of infectious disease evolution, are improved; (ii) a reachable successor system can be obtained from a nonreachable one.
  Limitations: requires the existence of generalized inverses.

The techniques for ordering matrices summarized in Table 1 have been deemed unsuitable for extending the work of Binjadhnan and Ahmad [25, 30] and Fuad and Ahmad [32], since they require certain conditions to be fulfilled, and some are limited to special classes of matrices. This opens the possibility of introducing and investigating a new partial order on square matrices, as discussed in Section 3.

3. Precede Operator

As mentioned earlier, the set of square matrices of EEG signals during a seizure has properties similar to those of prime numbers. Therefore, EEG-signal square matrices can be regarded as analogs of the natural numbers. It should then be possible to say that one matrix is "greater" than another, just as any natural number is either greater than or less than any other natural number, since the reals form a complete ordered field containing the naturals. With this in mind, the precede operator, denoted by ≺, is introduced in Definition 1.

Definition 1. Let A = (a_ij) and B = (b_ij) be n × n matrices. Matrix A is said to precede B, written A ≺ B, whenever the first entry with a_ij < b_ij exists for some i, j. The comparison must be made in the sequence of rows, i.e., i = 1, 2, ..., n, until the first a_ij ≠ b_ij is discovered. Otherwise, if b_ij < a_ij, then B ≺ A. When A = B, i.e., all the corresponding entries of the two matrices are the same, then A ≺ B and B ≺ A.
In other words, the definition can be realized as a composition of mappings: first, extract the corresponding rows of A and B in sequence; next, form the entrywise difference of the two matrices; then, rearrange and reidentify the entries of the difference matrix as a single sequence; finally, map the first nonzero term of that sequence to a real number. Definition 1 is thus introduced as a map into the real numbers, since the reals form a complete ordered set.

Example 1. Consider two matrices A and B. Scanning the entries row by row, the first pair with a_ij ≠ b_ij is found; in this case, a_ij < b_ij. Then, A ≺ B.

Theorem 2. The mapping is well-defined.

Proof. Let (A, B) and (A′, B′) be pairs of matrices such that (A, B) = (A′, B′). Then the corresponding entries, and hence the first differing entries, coincide, so the images under the composition of mappings are equal. Hence, the mapping is well-defined.
The execution of this definition can be summarized by Algorithm 1.
In short, the precede operator is a composition of mappings; this composition is best illustrated by Figure 1.

Step 1: given matrices A and B, determine row i for A and B, i.e., A_i and B_i.
Step 2: determine the matrix of entrywise differences of A and B.
Step 3: rearrange the entries of the difference matrix into a single sequence.
Step 4: determine the first nonzero term of the sequence; its sign decides whether A ≺ B or B ≺ A.
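Algorithm 1 amounts to a row-major, entry-by-entry comparison. A minimal sketch in Python (the function name and the three-way return convention are our own; the paper defines the operator abstractly):

```python
import numpy as np

def precede(A, B):
    """Three-way comparison sketch of the precede operator: scan the
    entries of A and B in row-major order and decide on the first
    entry where they differ."""
    for a, b in zip(A.flat, B.flat):  # row 1 first, then row 2, ...
        if a < b:
            return -1  # A precedes B
        if a > b:
            return 1   # B precedes A
    return 0           # all corresponding entries are equal
```

For instance, with A = [[1, 2], [3, 4]] and B = [[1, 3], [0, 0]], the first differing entries are 2 and 3, so A precedes B regardless of the remaining entries.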

4. Ordered Matrices

In this section, several results are obtained to show that the square matrices, together with the precede operator, form an ordered set.

Lemma 1. If A ≺ B, then B ⊀ A.

Proof. Let A ≺ B. Then, there exist a first a_ij and a first b_ij, for some i, j, such that a_ij < b_ij. Suppose that B ⊀ A is false; then, B ≺ A is true. Therefore, there exist a first b_ij and a first a_ij, for some i, j, such that b_ij < a_ij. This is impossible, since a_ij < b_ij according to Definition 1, and it also contradicts the assumption that there exist a first a_ij and a first b_ij with a_ij < b_ij, as noted earlier.

Lemma 2. If A ≺ B and B ≺ A, then A = B.

Proof. Suppose A ≺ B; then, there exist a first a_ij and a first b_ij, for some i, j, such that a_ij ≤ b_ij. Similarly, if B ≺ A, then there exist a first b_ij and a first a_ij, for some i, j, such that b_ij ≤ a_ij. This is only possible if a_ij = b_ij for all i, j. In other words, A = B, by Definition 1.

Lemma 3. If A ≺ B and B ≺ C, then A ≺ C.

Proof. Let A ≺ B. Then, there exist a first a_ij and a first b_ij, for some i, j, such that a_ij < b_ij. If B ≺ C, then there exist a first b_kl and a first c_kl, for some k, l, such that b_kl < c_kl. Suppose that A ≺ C is false; therefore, C ≺ A. In other words, there exist a first c_st and a first a_st, for some s, t, such that c_st < a_st. There are three cases to consider:
(i) If position (s, t) comes before both (i, j) and (k, l) in the row-by-row scan, then c_st < a_st is a contradiction, since before (i, j) the entries of A and B agree and before (k, l) those of B and C agree, so a_st = b_st = c_st, and a_st is no longer the first differing entry, according to Definition 1.
(ii) If position (s, t) comes after (i, j) or (k, l), then c_st < a_st is a contradiction, since c_st is no longer the first entry found such that c_st ≠ a_st.
(iii) If (s, t) = (i, j) = (k, l), then a_st < b_st, since A ≺ B, and b_st < c_st, since B ≺ C. Therefore, a_st < c_st, which immediately implies A ≺ C. In other words, C ⊀ A, which is a contradiction.
All three cases lead to contradictions; thus, if A ≺ B and B ≺ C, then A ≺ C.
Consequently, the binary relation ≺ is a partial order.

Theorem 3. The set of square matrices, together with ≺, is a partially ordered set.

Proof. The set is a partially ordered set since
(i) ≺ is reflexive, since A ≺ A by Definition 1;
(ii) by Lemma 2, ≺ is antisymmetric;
(iii) by Lemma 3, ≺ is transitive.
Of equal importance, the set is totally ordered as well.

Theorem 4. The set of square matrices is totally ordered by ≺.

Proof. According to Theorem 3, the set is partially ordered. Next, for any two matrices, either A ≺ B or B ≺ A. Consider the remaining case, to which the contrapositive of Lemma 2 is applied; in other words, this means that A = B. In summary, A ≺ B, or B ≺ A, or A = B. Hence, the set is totally ordered.
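Because the relation is total, any finite collection of equally sized matrices can be sorted by it. A hedged illustration (the comparator-based sorting and the three sample matrices are our own demonstration, not the paper's):

```python
from functools import cmp_to_key
import numpy as np

def precede(A, B):
    # Row-major scan; decide on the first differing entry.
    for a, b in zip(A.flat, B.flat):
        if a != b:
            return -1 if a < b else 1
    return 0

# Three hypothetical 2x2 matrices; totality guarantees sorted() succeeds.
mats = [np.array([[2, 0], [0, 2]]),
        np.array([[1, 5], [0, 0]]),
        np.array([[1, 3], [9, 9]])]
ordered = sorted(mats, key=cmp_to_key(precede))
```

Here the matrix whose first differing entry is smallest comes first: [[1, 3], [9, 9]] precedes [[1, 5], [0, 0]] (decided at 3 versus 5), which precedes [[2, 0], [0, 2]] (decided at 1 versus 2).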

5. Precede Ordering for Symmetric Matrices

When the precede operator is applied to special matrices (namely, symmetric matrices), several properties are obtained.

Proposition 1. If A and B are symmetric matrices and A ≺ B, then A^T ≺ B^T (i.e., matrix transposition preserves ≺ for symmetric matrices).

Proof. Suppose that A and B are symmetric matrices and A ≺ B. It immediately follows that A^T ≺ B^T, since A^T = A and B^T = B.

Proposition 2. If A and B are symmetric matrices and A ≺ B, then −B ≺ −A.

Proof. Suppose that A ≺ B; then, there exist a first a_ij and a first b_ij, for some i, j, such that a_ij < b_ij. Nevertheless, −b_ij < −a_ij, since a_ij < b_ij, and −b_ij and −a_ij are the first entries of −B and −A, respectively, that exhibit such a condition. Consequently, −B ≺ −A.

Proposition 3. If A ≺ B, then kA ≺ kB for every scalar k > 0 (i.e., multiplication by a positive scalar preserves ≺).

Proof. Suppose that A ≺ B; then, there exist a first a_ij and a first b_ij, for some i, j, such that a_ij < b_ij. Then, ka_ij < kb_ij, since k > 0. Hence, kA ≺ kB.

Proposition 4. If A and B are skew-symmetric matrices and A ≺ B, then B^T ≺ A^T.

Proof. Suppose that A and B are skew-symmetric matrices; therefore, A^T = −A and B^T = −B. Nevertheless, if A ≺ B, then −B ≺ −A, by Proposition 2. Consequently, B^T ≺ A^T, since A and B are skew-symmetric matrices.
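Order reversal under negation (Proposition 2) and preservation under positive scaling (Proposition 3) are easy to verify numerically. The comparator below is a sketch of the operator, and the two symmetric matrices are illustrative, not from the paper:

```python
import numpy as np

def precede(A, B):
    # Row-major scan; decide on the first differing entry.
    for a, b in zip(A.flat, B.flat):
        if a != b:
            return -1 if a < b else 1
    return 0

A = np.array([[1.0, 2.0], [2.0, 5.0]])  # symmetric
B = np.array([[1.0, 3.0], [3.0, 7.0]])  # symmetric; A precedes B (2 < 3)
assert precede(A, B) == -1
assert precede(-B, -A) == -1        # Proposition 2: negation reverses the order
assert precede(3 * A, 3 * B) == -1  # Proposition 3: positive scaling preserves it
```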

Theorem 5. Suppose that A, B, C, and D are positive symmetric matrices, such that A ≺ B and C ≺ D; then, A + C ≺ B + D.

Proof. Suppose that A, B, C, and D are positive symmetric matrices. A ≺ B implies that there exist a first a_ij and a first b_ij, for some i, j, such that a_ij < b_ij. Similarly, C ≺ D implies that there exist a first c_kl and a first d_kl, for some k, l, such that c_kl < d_kl. Now, at the first position where A + C and B + D differ, the entry of A + C is smaller, since the reals form an ordered field. This implies that A + C ≺ B + D.
Theorem 5 is best illustrated by Example 2.

Example 2. Consider positive symmetric matrices A, B, C, and D. By Definition 1, A ≺ B and C ≺ D are obtained. Next, forming the sums A + C and B + D and applying Definition 1 again, A + C ≺ B + D is obtained.

Corollary 1. Suppose that A and B are positive symmetric matrices. If A ≺ B, then A + A^T ≺ B + B^T.

Proof. Suppose that A and B are positive symmetric matrices and that A ≺ B. Then, A^T ≺ B^T, according to Proposition 1. By taking C = A^T and D = B^T in Theorem 5, we obtain A + A^T ≺ B + B^T.

Corollary 2. Suppose that A and B are positive symmetric matrices such that A ≺ B; then, A − B ≺ B − A.

Proof. Suppose that A and B are positive symmetric matrices and A ≺ B. Then, −B ≺ −A, by Proposition 2. By taking C = −B and D = −A in Theorem 5, we obtain A − B ≺ B − A.

Example 3. Suppose that A and B are two symmetric matrices with A ≺ B. Then, by Definition 1, −B ≺ −A. Next, forming A − B and B − A, Definition 1 implies that A − B ≺ B − A.

Theorem 6. −A ≺ A for every positive symmetric matrix A (i.e., every positive symmetric matrix is preceded by its negation).

Proof. Suppose that A and B are positive symmetric matrices and that A ≺ B. By Corollary 2, A − B ≺ B − A = −(A − B). Notice that if A and B are positive symmetric matrices, then A − B is a symmetric matrix, since (A − B)^T = A^T − B^T = A − B. Next, choose B = 2A; note that A ≺ 2A for a positive matrix A, by Definition 1. Then A − B = −A, and replacing this in the comparison above, −A ≺ A is obtained as required.

Example 4. Let A be a positive symmetric matrix. Then, by Definition 1, we obtain −A ≺ A.
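Under the entrywise comparison of Definition 1, this is decided at the very first entry: a matrix with positive entries has a_11 > 0 > −a_11, so its negation compares first. A quick check with an illustrative matrix (the comparator is our sketch of the operator):

```python
import numpy as np

def precede(A, B):
    # Row-major scan; decide on the first differing entry.
    for a, b in zip(A.flat, B.flat):
        if a != b:
            return -1 if a < b else 1
    return 0

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric with positive entries
assert precede(-A, A) == -1  # decided at the (1, 1) entry: -2.0 < 2.0
```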

Theorem 7. Suppose that A, B, and C are positive symmetric matrices. If A ≺ B and B ≺ C, then −A ≺ C.

Proof. Suppose that A, B, and C are positive symmetric matrices. Assume that A ≺ B and B ≺ C. According to Theorem 6, −A ≺ A. Since A ≺ B and B ≺ C imply that A ≺ C by Lemma 3, we consequently obtain −A ≺ A and A ≺ C, and transitivity (Theorem 3) yields −A ≺ C.

Example 5. Suppose that A, B, and C are symmetric matrices with A ≺ B and B ≺ C. Clearly, by Definition 1, A ≺ C, and hence −A ≺ C.

6. Implementation

As an example of the implementation of the precede operator on real data, two readings of EEG-signal square matrices are presented. The EEG signals of epileptic seizure patients can be recorded and composed into a set of square matrices (see Figure 2). First, the EEG data are gathered from the hospital by EEG technologists, and the three-dimensional data are transformed into two-dimensional data. This transformation of the EEG data into a lower-dimensional form is executed via a novel technique called flattening the EEG, whereby the information is preserved and can be conveniently analyzed [70].

The coordinate system of EEG signals, depicted in Figure 3(a), is defined as

Moreover, a function (see Figure 3(b)) is defined as

The mapping is an injective mapping of a conformal structure, since both surfaces were designed and proven to be manifolds [70]. Hence, the mapping preserves the angle and orientation of the surface throughout the recorded EEG signals. The technique of flat EEG was executed on three groups of EEG signals recorded from three different epileptic patients [70]. The author digitized the EEG signals during epileptic seizures at 256 samples per second using the Nicolet One EEG software. Next, each average potential difference (APD) at every second was stored in a file that contained the positions of the electrodes on a magnetic-contour plane. Subsequently, the stored data were used to compose a set of square matrices.

Differences in surface potential can be recorded using an array of electrodes attached to the scalp; the voltages computed between pairs of electrodes are then filtered, amplified, and recorded. The most widely used system of electrode placement is the International Ten-Twenty System, a recommended standard method for characterizing the locations of electrodes on the head for recording scalp EEG [71]. The Ten-Twenty system depends upon the relationship between the position of an electrode and the underlying area of the cerebral cortex (the "Ten" and "Twenty" refer to 10% and 20% interelectrode distances, respectively) [72]. The electrode positions of this system are shown schematically in Figure 4.

Figure 4(a) illustrates the case in which almost all of the electrodes are positioned at or below fixed fractions of the distance from the vertex. By contrast, Figure 4(b) shows the electrode positions from the top view of the head, modeling the head as a sphere; we assume that the upper hemisphere corresponds to the top of the head [70]. In other words, the front-to-back line runs from the nasion to the inion, and the left-to-right line runs from the left preauricular point to the right preauricular point. In general, every APD at each second is stored in a file containing the positions of the electrodes on the magnetic-contour plane, as tabulated in Table 2.


Sensor | APD


Then, the readings in Table 2 are rewritten in terms of a matrix, as shown below.

Let and a function be defined as

The mapping can be rewritten as the following matrix:

Specifically,

The corresponding square matrix is generated by substituting the corresponding average potential difference for every element of the above matrix. In particular, every single second of APD readings is stored in a square matrix that reflects the positions of the electrodes on the magnetic-contour plane.

Therefore, the plane becomes a set of matrices (EEG signals), written as follows, where each entry is a potential-difference reading for the EEG signal from a particular sensor at a given time. For instance, the recorded EEG-signal data during seizures at two time instants are tabulated in Tables 3 and 4.
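Under this scheme, each second of recording yields one square matrix of APD values, and the matrices from consecutive seconds can be compared with the precede operator. A sketch with readings fabricated for illustration (the 2x2 sensor grid, layout, and values are hypothetical, not the patients' data):

```python
import numpy as np

def precede(A, B):
    # Row-major scan; decide on the first differing entry.
    for a, b in zip(A.flat, B.flat):
        if a != b:
            return -1 if a < b else 1
    return 0

# Hypothetical APD readings (in microvolts) on a 2x2 sensor grid
# at two consecutive seconds of a recording.
apd_t1 = np.array([[12.4,  9.8], [11.1, 10.3]])
apd_t2 = np.array([[12.4, 10.6], [ 8.9,  9.7]])

cmp = precede(apd_t1, apd_t2)  # decided at the entries 9.8 vs 10.6
```

Here cmp == -1, so the matrix at the first second precedes the one at the second; the remaining entries play no role once the first differing reading is found.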


Sensor | APD