Computational and Mathematical Methods in Medicine


Research Article | Open Access

Volume 2021 |Article ID 6676681 | https://doi.org/10.1155/2021/6676681

Marius Georgescu, Laura Haidar, Alina-Florina Serb, Daniela Puscasiu, Daniel Georgescu, "Mathematical Modeling of Brain Activity under Specific Auditory Stimulation", Computational and Mathematical Methods in Medicine, vol. 2021, Article ID 6676681, 20 pages, 2021. https://doi.org/10.1155/2021/6676681

Mathematical Modeling of Brain Activity under Specific Auditory Stimulation

Academic Editor: Raul Alcaraz
Received: 14 Nov 2020
Revised: 28 Feb 2021
Accepted: 10 Mar 2021
Published: 22 Apr 2021

Abstract

Understanding the connection between different stimuli and the brain response represents a complex research area. However, the use of mathematical models for this purpose is relatively unexplored. The present study investigates the effects of three different auditory stimuli on cerebral biopotentials by means of mathematical functions. The effects of acoustic stimuli (S1, S2, and S3) on cerebral activity were evaluated by electroencephalographic (EEG) recording on 21 subjects over 20 minutes of stimulation, with a 5-minute period of silence before and after stimulation. For the construction of the mathematical models used for the study of the EEG rhythms, we used the Box-Jenkins methodology. Characteristic mathematical models were obtained for the main frequency bands and were expressed by 2 constant functions, 8 first-degree functions, a second-degree function, a fourth-degree function, 6 recursive functions, and 4 periodic functions. The values obtained for the variance estimator are low, indicating that the obtained models are adequate. The resulting mathematical models allow us to objectively compare the EEG response to the three stimuli, both among the stimuli themselves and between each stimulus and the prestimulation period.

1. Introduction

Understanding how the brain functions is one of the greatest scientific challenges of all time. Interdisciplinary research in the fields of neuroscience, physics, biology, neurochemistry, genetics, molecular biology, and psychology has made exciting progress on a wide range of issues. However, as researchers discover ever more functions and locations of brain activity, other scientific concerns arise, and no general theory of brain function has been entirely accepted.

Several common techniques exist to measure brain activity: direct imaging techniques—electroencephalography (EEG) and magnetoencephalography (MEG)—which directly measure the electrical or magnetic signals generated by neuronal activity, and indirect imaging techniques—functional magnetic resonance imaging (fMRI) and positron emission tomography (PET)—which infer neuronal activity from neuronal oxygen consumption. In the case of EEG and MEG, one joint approach has been to investigate how the time course of brain electrical potentials is influenced by specific stimuli. Combined with controlled sensory stimulation, these methods allow exploration of the sensory and perceptual processes. In this specific area, EEG has been recommended for its noninvasiveness, high temporal resolution, relatively low setup cost, portability, and relative ease of use. The influence of different types of external stimuli, visual or acoustic, on cortical EEG has been detailed in various studies [1–8].

Acoustic stimulation through repetitive stimuli, either administered alone or associated with diverse activities, can provide new data about complex mental processes [9–11].

Music is a particular type of auditory stimulus because it is a combination of frequency, beat, density, tone, rhythm, repetition, amplitude, and lyrics. Researchers mapped the music-evoked areas of the brain and suggested that music is able to modulate activity in the core areas of emotion, revealing that distinct parts of the brain are activated by music as a function of tonality [12–21].

It has been demonstrated that persistent negative emotional states can increase one’s susceptibility to viral infections, yeast infestations, heart attacks, high blood pressure, and other diseases [22]. It is likely that music therapy can influence the autonomic nervous system and reduce stress and stress-related health problems [23], rebalancing the immune system, particularly when the music is known and pleasing to the individual [15, 24, 25]. The effect of music on patients suffering from various neurological disorders or other pathologies has been extensively studied, and positive effects have been observed, making music a valuable adjunct to medical practice [26–32].

In addition, classification of emotions based on the EEG while listening to music has currently gained increasing attention due to its potential applications in fields such as music therapy, musical affective brain-computer interface (BCI), neuromarketing, and multimedia tagging and triggering [33].

The profound influence of music training on the functional and structural architecture of auditory-related cerebral areas has been documented by a large number of studies, which also highlighted the often-observed cognitive advantages of music experts in a variety of cognitive domains, including verbal learning, memory, and attention [34–42].

The most noticeable connection between music and increased performance or altered neuropsychological activity was shown by studies involving Mozart’s music, from which the theory of “The Mozart Effect” [43] was derived. The outcomes of many studies showed that listening to music, especially Mozart’s compositions (e.g., the sonata K. 448), can enhance cognitive performance, motor skills, and recovery after brain injury [15, 44–46].

A good part of the EEG studies carried out on Mozart’s music showed that listening to Mozart sonata K 448 decreased alpha power which may indicate cortical activation and offer helpful evidence of the Mozart Effect. In addition, significantly decreased EEG theta and beta power were observed [47]. Literature data show that alpha power is regarded as a sensitive indicator of cortical activity and is inversely related to cortical function, decreased values being associated with activations in cortical structures that govern goal-directed cognition and behavior [47, 48]. However, other investigations in this area reported opposite findings, also showing increases in the alpha band in response to music [49, 50]. Despite this, EEG surveys of cerebral activity under acoustic stimulation and, in particular, music, are scarce.

Increased understanding of the relationship between an acoustic stimulus and the brain response will accelerate research on the analysis of the brain reaction. Different mathematical and computational methods have been used for the analysis of the EEG signal under external stimuli [51–56]. Unlike the stereotypical EEG response produced following a short auditory event such as a click or the onset of a sound, cortical activity associated with continuing stimulation is harder to interpret, as responses to the individual events overlap in time and the lack of repetition prevents simple averaging over trials. Therefore, the development of models from data can be a formidable task, especially in the field of clinical neurophysiology. Thus, several studies focused on characterizing, discriminating, or clustering time series based on different types of measures applied to preictal EEG segments in order to predict seizure onset in patients with epilepsy [57–59]. These measures are structured in three main groups: linear, nonlinear, and “other” measures, each group being subdivided into subgroups. The group of linear measures encompasses the subgroups of correlation measures, frequency-based measures, and model-based measures. The standard linear models for time series are the autoregressive model (AR) and the autoregressive moving average model (ARMA) [60]. Parametric modeling has long been acknowledged as a versatile tool for the analysis of EEG data [61–65].
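As a minimal illustration of the AR model class mentioned above, the sketch below simulates an AR(2) process in Python. The coefficients are hypothetical and are not fitted to the study's data.

```python
import random

def simulate_ar2(phi1, phi2, n, sigma=1.0, seed=0):
    """Simulate an AR(2) process X_t = phi1*X_{t-1} + phi2*X_{t-2} + e_t,
    where e_t is Gaussian white noise with standard deviation sigma."""
    rng = random.Random(seed)
    x = [0.0, 0.0]                      # zero initial conditions
    for _ in range(n):
        x.append(phi1 * x[-1] + phi2 * x[-2] + rng.gauss(0.0, sigma))
    return x[2:]                        # drop the artificial start-up values

# Coefficients chosen inside the stationarity region, for illustration only.
series = simulate_ar2(phi1=0.4, phi2=-0.6, n=200)
```

An ARMA model extends this recursion with a moving-average combination of the past noise terms, as defined in Section 2.5.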

Modeling of brain activity is a dynamic area of research, and several open issues need to be addressed in order to successfully implement these techniques, especially for practical applications such as EEG-driven BCI systems. Potential applications in this direction include BCI-based music recommendation systems, BCI-based music therapy, and the prediction and diagnosis of epilepsy or other neurological impairments. However, literature data on the use of mathematical models in this area are limited.

In this context, our study is aimed at investigating cerebral electrical activity under the influence of different auditory stimuli, both recorded from nature and artificial, differing in frequencies, amplitudes, and tonality. For the first time, we developed mathematical models whose functions make it possible to study the evolution of the EEG spectral components, with comparisons to the prestimulation period. Moreover, our data could guide ongoing efforts to develop additional representative models of the brain response to other external stimuli, as well as to other brain states.

2. Materials and Methods

2.1. Subjects

The experiments were carried out at the University of Medicine and Pharmacy of Craiova on 21 males (most of them students), all right-handed, of average age 23 (), without previous musical training, and homogeneous regarding professional and extraprofessional activity. Subjects with a history of neurological disturbances or of drug or ethanol abuse were excluded from the study.

To standardize the group in terms of the degree of fatigue of the subjects, the experiments were performed in the evening, at 8 pm, under low levels of noise and natural light.

To select the study group while ensuring that it was uniform in terms of the perception of sound stimuli, we used a sinusoidal frequency generator coupled to an audio amplifier. The sound provided by the amplifier was delivered to the tested subjects through headphones, the same ones used in the EEG recording procedure under auditory stimulation. Each subject was first presented with the minimum frequency (45 Hz) at a minimum sound intensity (pressure level). Once the subject confirmed the presence of the stimulus, we increased the frequency. When the tested subject signaled the disappearance of the sound stimulus, the intensity was increased up to the limit of 40 dB, which was chosen as the sound intensity level for our study. If the subject still did not perceive the sound, he was excluded from the study group. The test ended when we reached the maximum frequency of 16500 Hz. This initial procedure was carried out because the three variants of auditory stimulation we chose encompass a wide range of frequencies at different intensities.

Starting 12 hours prior to the EEG recording, subjects were not allowed to consume any of the following: alcohol, caffeine, tea, chocolate, B-group vitamins, hormones, hypotensive drugs, sedatives, tranquilizers, or sleeping pills. Approval for experiments with human subjects for scientific purposes was obtained from The Ethical Commission of the University of Craiova, Romania. Each subject was provided with detailed information about the aims of the ongoing study and gave his written consent to participate in it.

2.2. Experimental Stimuli

Auditory stimulation was performed using three different stimuli: S1, an automobile moving on a rough surface; S2, rainfall recorded in a rainforest; and S3, Mozart’s Sonata for Two Pianos, K. 448. The three types of acoustic stimuli were chosen as representative of monotonous auditory stimulation: subjectively disturbing in the case of S1, soothing in the case of S2, and pleasant but tensing in the case of S3 [8, 66, 67] (Figure 1).

The three sounds have different characteristics. In S1, the uniformity of stimulation and the presence of low frequencies, with values between 75 Hz and 325 Hz, are noticeable. S2 differs markedly from S1, being much richer in frequencies (three frequency groups can be observed: one below 600 Hz, one around 2000 ± 250 Hz, and the last between 3100 and 3800 Hz) and in variations of amplitude and tone. S3, while not a flat, monotonous signal, is also not entirely unpredictable: repetition intervals can still be detected, although they do not occur at equal spacing, and its frequency range is comparable to that of the S2 signal, with some uniformly distributed components.

The EEG was recorded while the subjects were undergoing continuous auditory stimulation, for a duration of 20 minutes, using a pair of headphones (frequency range: 20-20000 Hz) connected to a laptop powered by its own batteries to avoid parasitic currents. The intensity of the sound was measured with an NM102 Noise Meter and was maintained at a medium level of around 40 dB, which was considered safe for the 20-minute stimulation period of our study. We considered the monotony condition to be achieved with stimuli lasting 20 minutes. Longer stimulation periods led to a decrease in recording quality, due to the prolonged discomfort caused by sitting immobile in the chair, the electrodes on the scalp, and the headphones in the ears, or, conversely, to subjects falling asleep.

2.3. EEG Recording

The acquisition of cortical biopotentials was made using an industrially produced electroencephalograph, Nihon Kohden EEG-9200. A transformer was used to separate it from the public electricity supply network. The electrodes were placed in the international standard 10–20 system, bipolar acquisition montage, references on the 2 ears, and the extra ECG lead (both hands and the left foot) with the main role of signal quality control. Figure 2 presents the montage corresponding to the pattern 3 of the collection of cerebral micropotentials.

A great advantage of the Nihon-Kohden EEG-9200 electroencephalograph is that after a bipolar collection, at the time of the acquisition, the data is stored at the sampling rate set initially. This allows adaptation of both the filtration and the collection patterns of the micropotentials according to the requirements of processing. Processing does not affect the original form of the stored data. Spectral analysis can be performed by the program on an artifact-free portion of the EEG, selected by the operator.

During signal inspection, we complied with the requirements of the QP-220AK spectral analysis and mapping program (collection of bipolar brain micropotentials). For a high-fidelity study, the acquisition of signals was made at a high sampling rate (500 Hz), which yields an EEG that also contains relatively high frequencies (120 Hz), as well as the possibility of performing a Fast Fourier transform (FFT) with a Hanning window on 1024 points for 20 seconds of the signal.
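The windowed-FFT step can be sketched as follows. The 10 Hz test tone is illustrative data, and a naive DFT loop stands in for the optimized FFT routine (e.g., numpy.fft.rfft) that would normally be used; only the sampling rate and window length match the settings above.

```python
import math, cmath

FS = 500    # sampling rate (Hz), as in the recordings
N = 1024    # number of points per transform window, as in the analysis program

# A 10 Hz test tone standing in for one EEG epoch (illustrative data).
signal = [math.sin(2 * math.pi * 10.0 * n / FS) for n in range(N)]

# Hanning window applied before the transform.
window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]
windowed = [s * w for s, w in zip(signal, window)]

# Naive DFT of the non-negative frequency bins (spelled out for clarity).
spectrum = [abs(sum(windowed[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2 + 1)]

peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
peak_freq = peak_bin * FS / N    # resolution: 500/1024, about 0.49 Hz per bin
```

At these settings each spectral bin is about 0.49 Hz wide, which comfortably separates the rhythm bands of Table 1.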

For our study, the frequency band was reduced to no more than 60 Hz, and the time constant (high-pass filter) was set to 0.3 s, thus ensuring a sufficiently large bandwidth for the investigated beta, alpha, theta, and delta cerebral rhythms (Table 1).


EEG rhythm   Frequency band (Hz)   High-pass filter (Hz)   Low-pass filter (Hz)

Delta        2–4                   2                       4
Theta        4–8                   4                       8
Alpha 1      8–10                  8                       10
Alpha 2      10–13                 10                      13
Beta 1       13–20                 13                      20
Beta 2       20–30                 20                      30
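The band limits of Table 1 can be used to summarize one spectrum into per-rhythm power values. The sketch below assumes the spectrum is available as parallel frequency/power lists, a simplification of the analysis program's output; the synthetic test spectrum is illustrative.

```python
# Frequency bands from Table 1 (Hz).
BANDS = {
    "delta":   (2, 4),
    "theta":   (4, 8),
    "alpha 1": (8, 10),
    "alpha 2": (10, 13),
    "beta 1":  (13, 20),
    "beta 2":  (20, 30),
}

def band_powers(freqs, psd, bands=BANDS):
    """Sum the spectral power falling inside each band of Table 1.
    freqs and psd are parallel lists: the frequency axis and the
    corresponding power values of one spectrum."""
    powers = {name: 0.0 for name in bands}
    for f, p in zip(freqs, psd):
        for name, (lo, hi) in bands.items():
            if lo <= f < hi:
                powers[name] += p
    return powers

# Synthetic spectrum peaking at 9 Hz, so "alpha 1" should dominate.
freqs = [0.5 * i for i in range(80)]                 # 0 .. 39.5 Hz
psd = [1.0 / (1.0 + (f - 9.0) ** 2) for f in freqs]
powers = band_powers(freqs, psd)
```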

All recordings were made under identical experimental conditions: subjects with the same degree of physical and psychological tiredness (assessed both subjectively by the examinee and objectively, e.g., by the number of hours slept before the experiment), sitting immobile in a relaxed position, eyes closed, with no ambient sound or lighting and no disruptive ambient electrical fields. As an additional measure, a frontal grounding electrode was used. The contact impedance during EEG recordings was kept below 30 kΩ, and saline solution was used to reduce it, maintaining the contact noise at generally acceptable values.

The procedure was carried out as follows—3 valid recordings for each subject, each made under a different one of the three stimuli (S1, S2, and S3) (see Figure 3 for the experimental design), with the following well-specified steps:
(i) Switching on the equipment
(ii) Placement of electrodes and headphones
(iii) Entering the data corresponding to the registration in the EEG program
(iv) Checking the contact impedance and viewing the EEG for control
(v) Reduction of ambient light
(vi) Start of recording
(vii) Subjects close their eyes at the operator’s command
(viii) After 5 minutes of silence (L1 period), the operator begins the auditory stimulation
(ix) After 20 minutes of stimulation (S period), the operator stops the auditory stimulation
(x) After 5 minutes of silence (L2 period), the operator stops the recording
(xi) The operator stores the recording on the computer
(xii) The subject may open his eyes and is released

To avoid inducing rhythm modulation, subjects were not instructed to follow any particular mental activity, or lack thereof, and were given complete freedom.

Each experiment was carried out in accordance with the general working conditions and the stages set out in the protocol, and the recordings performed on the same subject under the influence of auditory stimuli were made on consecutive days (day 1: sound S1, day 2: sound S2, and day 3: sound S3) after 8 pm, in identical working conditions.

Each experiment was followed by a verification stage, in which the signals were analyzed for validation; in the case of defective recordings, repetitions (tests at another time) were performed.

2.4. EEG Data Processing for Mathematical Modeling

Because our study was limited to the effects of sound stimulation, we only analyzed the data collected from electrodes P3-A1, P4-A2, O1-A1, and O2-A2 (Table 2), by averaging, obtaining a single data series to characterize the sound projection area.


             Median value for EEG rhythms                            Indices for the whole spectrum
             Delta   Theta   Alpha 1   Alpha 2   Beta 1   Beta 2   Total    Edge     Av      Median   Peak

P3-A1        2.766   2.8     4.785     3.615     2.088    1.515    17.568   19.141   10.16   9.57     10.16
P4-A2        3.727   3.78    6.639     3.854     1.521    0.783    20.304   13.672   8.789   9.18     10.16
O1-A1        2.257   2.971   13.644    16.39     2.591    1.895    39.745   14.453   10.35   9.961    10.16
O2-A2        2.416   4.028   13.17     13.79     1.93     1.398    36.729   11.914   9.961   9.961    10.16

Edge (edge frequency): the frequency at which the ratio of the area to its left to the whole spectrum area equals a value initially set by the operator (90% in our case); Av (average frequency): the frequency corresponding to the center of gravity of the spectrum area; Median (median frequency): the frequency that divides the area into two equal parts; Peak (peak frequency): the frequency of maximum energy.

Following spectral analysis, we normalized the data. Since we are interested in the change of EEG frequencies under external sound stimulation, we compared the values obtained to those of the period without auditory stimulation (recorded before the initiation of sound stimulation). For each electrode, we computed a normalization coefficient (the average value during the period before stimulation) and then normalized each electrode separately by dividing every recorded value by this coefficient.

The mathematical formulas used in the normalization process for the studied derivations are presented below:

The standardized data series formed the basis for mathematical modeling.
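The normalization step described above can be sketched as follows, assuming each electrode's spectral values are held in a plain list whose first prestim_len entries cover the L1 period (the function and variable names are illustrative):

```python
def normalize(values, prestim_len):
    """Divide each value of one electrode's series by the normalization
    coefficient: the average over the first prestim_len samples,
    i.e., the silent period recorded before stimulation."""
    coeff = sum(values[:prestim_len]) / prestim_len
    return [v / coeff for v in values]

# Toy series: the first two samples are the prestimulation baseline.
normalized = normalize([2.0, 4.0, 6.0, 8.0], prestim_len=2)
```

By construction, the normalized prestimulation period averages to 1, so values during stimulation read directly as multiples of the baseline activity.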

2.5. Mathematical Modeling

When building the mathematical models for the study of the evolution of the total spectrum, as well as of the alpha, beta, delta, and theta rhythms, we used the Box-Jenkins methodology [68].

In modeling a sample of data as a time series, we searched for models in which the process, denoted X_t, can be described using a white noise process. A stochastic process (ε_t) is called white noise if the variables ε_t are uncorrelated for distinct times, with zero mean and constant variance σ².

We will encounter two categories of such time series: autoregressive time series, AR(p), and moving average time series, MA(q).

A p-th order autoregressive time series, AR(p), is given by the equation X_t = φ_1X_{t-1} + φ_2X_{t-2} + ... + φ_pX_{t-p} + ε_t, in which (ε_t) is a white noise process.

A time series is a q-th order moving average time series, MA(q), if given by the equation X_t = ε_t + θ_1ε_{t-1} + ... + θ_qε_{t-q}.

In addition to these two major categories, there are also combined time series, the so-called ARMA(p, q) models, whose equation is X_t = φ_1X_{t-1} + ... + φ_pX_{t-p} + ε_t + θ_1ε_{t-1} + ... + θ_qε_{t-q}.

In applications, to find the appropriate serial model for a data sample, a first step is to transform the data series into one that resembles an ARMA model. The most commonly applied method is to model the overall trend of the series and the seasonal component, if any. The global trend can be modeled using the regression method. The method was applied effectively using the program Minitab 16.
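The study used Minitab 16; as a rough stand-in, the AR-fitting step of the Box-Jenkins workflow can be sketched with the textbook Yule-Walker equations for an AR(2) model. This is an illustrative estimator, not the exact algorithm Minitab applies.

```python
import random

def autocorr(x, k):
    """Sample autocorrelation of the series x at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[i] - m) * (x[i + k] - m) for i in range(n - k))
    return ck / c0

def fit_ar2(x):
    """Yule-Walker estimates of (phi1, phi2) for an AR(2) model,
    obtained by solving the 2x2 Yule-Walker system in closed form."""
    r1, r2 = autocorr(x, 1), autocorr(x, 2)
    phi1 = r1 * (1 - r2) / (1 - r1 ** 2)
    phi2 = (r2 - r1 ** 2) / (1 - r1 ** 2)
    return phi1, phi2

# Sanity check on a simulated AR(2) series with known coefficients.
rng = random.Random(1)
x = [0.0, 0.0]
for _ in range(3000):
    x.append(0.4 * x[-1] - 0.6 * x[-2] + rng.gauss(0.0, 1.0))
phi1, phi2 = fit_ar2(x[2:])
```

On a long simulated series, the estimates land close to the true coefficients (0.4 and -0.6).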

The following parameters were used for the modified Box-Pierce (Ljung-Box) chi-square statistics: the lag, the p-value, the chi-square statistic, and DF.

The lag represents the time period separating the time-ordered data points used to calculate the partial autocorrelation coefficient. Minitab displays lags in multiples of 12.

The p-value is a probability that measures the evidence against the null hypothesis. Lower probabilities provide stronger evidence against the null hypothesis.

Chi-square is the test statistic that Minitab uses to determine whether the residuals are independent, by calculating the p-value and comparing it to the significance level for each chi-square statistic.

DF (the degrees of freedom) represents the amount of information in the presented data, which is used by the Minitab program to calculate the p-value for the chi-square statistics.
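The modified Box-Pierce (Ljung-Box) statistic described above can be computed directly. This sketch implements the textbook formula Q = n(n+2) Σ r_k²/(n−k) and is not Minitab's exact output; the alternating test series is illustrative.

```python
def ljung_box(residuals, lags):
    """Modified Box-Pierce (Ljung-Box) statistic
    Q = n*(n+2) * sum_{k=1..lags} r_k**2 / (n - k),
    where r_k is the lag-k sample autocorrelation of the residuals.
    Q is compared to a chi-square distribution whose DF is the number
    of lags minus the number of fitted model parameters."""
    n = len(residuals)
    m = sum(residuals) / n
    c0 = sum((v - m) ** 2 for v in residuals)
    q = 0.0
    for k in range(1, lags + 1):
        ck = sum((residuals[i] - m) * (residuals[i + k] - m)
                 for i in range(n - k))
        q += (ck / c0) ** 2 / (n - k)
    return n * (n + 2) * q

# Strongly autocorrelated residuals give a large Q (hence a small p-value),
# rejecting the hypothesis that they form a white noise process.
alternating = [(-1.0) ** i for i in range(40)]
q = ljung_box(alternating, lags=1)
```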

The parameters used for the regression equation/analysis are the SE coefficient, the t-value, and the p-value.

SE coefficient (standard error of the coefficient) is used to measure the precision of the estimate of the coefficient. The smaller the standard error, the more precise the estimate. Dividing the coefficient by its standard error yields a t-value. If the p-value associated with this t-statistic is less than the significance level, the coefficient is statistically significant.

The t-value measures the ratio between the coefficient and its standard error. Minitab uses the t-value to calculate the p-value, which is used to test whether the coefficient is significantly different from 0.
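The relation between a coefficient, its standard error, the t-value, and the p-value can be illustrated with the AR 1 row of Table 4. The normal approximation below is an assumption for simplicity; Minitab uses the exact t distribution, so its p-values are slightly larger for small samples.

```python
import math

def t_value(coef, se):
    """t-value: the coefficient divided by its standard error."""
    return coef / se

def p_value_normal(t):
    """Two-sided p-value from a normal approximation (via the error
    function); an approximation only, not Minitab's exact t-based value."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

# AR 1 row of Table 4: coef 0.4075, SE 0.1823.
t = t_value(0.4075, 0.1823)   # about 2.24, matching the table
p = p_value_normal(t)
```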

Mathematically, only the signals obtained from the occipital area were modeled, because the projection in the occipital area is what provides the information for sound perception.

3. Results

3.1. Delta Rhythm
3.1.1. Mathematical Model in S1 Signal Stimulation

The S1 D time series appears to have a quadratic global trend (Figure 4(a)), which allows for the use of two types of regression models: linear and quadratic, of which the quadratic regression model is the more adequate.

The results of the regression analysis are presented in Table 3:


Term      Coef       SE Coef    T        P

Constant  0.943701   0.0397789  23.7236  0.001
C1        -0.018054  0.0087241  -2.0695  0.054
C1*C1     0.001142   0.0004035  2.8289   0.012

The regression equation, assembling the coefficients from Table 3, is S1 D_t = 0.943701 - 0.018054t + 0.001142t^2.

The p-value of the linear coefficient is marginal (p = 0.054), and we decided to retain the term, since the squared correlation coefficient, R², which reflects the percentage of variation explained in the S1 D variable, is higher than when the coefficient of the linear term is set to zero.

We continue with the analysis of the residual series, namely the S1 D series from which we subtracted the regression model. This series is denoted below as Z_t.

Since the coefficient for lag 2 is quite large, the proposed model for the time series is a second-order autoregressive one, AR(2), of the form Z_t = φ_1Z_{t-1} + φ_2Z_{t-2} + ε_t, in which (ε_t) is white noise.

The results of the Box-Jenkins analysis are presented in Tables 4 and 5.


Type   Coef     SE Coef  T      P

AR 1   0.4075   0.1823   2.24   0.038
AR 2   -0.6339  0.1823   -3.48  0.003


Lag          12      24  36  48

Chi-square   8.8
DF           10
P-value      0.549

We observed that both coefficients are significant (the p-values are very low for AR 1 and AR 2) and the p-value for the Box-Pierce statistic is high, confirming that the residual process can be viewed as white noise. By combining the result of the regression equation (7) and that of the Box-Jenkins analysis, we obtained the model S1 D_t = 0.943701 - 0.018054t + 0.001142t^2 + Z_t, with Z_t = 0.4075Z_{t-1} - 0.6339Z_{t-2} + ε_t.
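Combining the quadratic trend (Table 3) with the AR(2) residual model (Table 4), a one-step-ahead fitted value can be sketched as below; the function names are illustrative, and only the coefficients come from the tables.

```python
# Coefficients read off Tables 3 and 4.
B0, B1, B2 = 0.943701, -0.018054, 0.001142   # quadratic global trend
PHI1, PHI2 = 0.4075, -0.6339                 # AR(2) model of the residuals

def trend(t):
    """Quadratic global trend of the S1 delta series."""
    return B0 + B1 * t + B2 * t * t

def one_step_forecast(series, t):
    """Forecast the value at time t: the trend plus the AR(2)
    prediction computed from the two previous residuals."""
    z1 = series[t - 1] - trend(t - 1)
    z2 = series[t - 2] - trend(t - 2)
    return trend(t) + PHI1 * z1 + PHI2 * z2

# On a series that follows the trend exactly, the residuals are zero
# and the forecast reduces to the trend itself.
fitted = [trend(i) for i in range(5)]
forecast = one_step_forecast(fitted, 4)
```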

3.1.2. Mathematical Model in S2 Signal Stimulation

The S2 D series is represented in Figure 4(b). The global trend is linear and ascending, and the series also presents a somewhat periodic behavior whose period is difficult to define, which is why we used a linear regression. The regression equation is

If we subtract from the S2 D series the model given by equation (10), we obtain the series RESI7, represented in the graph in Figure 4(c).

Next, we model the seasonal component of the series, using a trigonometric function:

The RESI7 series, together with the model proposed in equation (11), is represented in Figure 4(d). The pattern given by equation (11) is shown in red in the graph.

We noted the residuals of the RESI7 series after the elimination of the sinusoidal model as RESI9.

Thus, the model found by this method is

We estimate the variance of white noise as being .

3.1.3. Mathematical Model in S3 Signal Stimulation

Although the graph of the S3 D series appears to show a quadratic global trend (see Figure 4(e)), we opted to use a linear model, because the quadratic model had a non-significant second-order coefficient.

For the modeling of residuals, we chose a 1st-order moving average, MA (1).

The obtained model is significant; the autocorrelation and partial autocorrelation functions show that the model is suitable, and an advantage of such a model is that it was obtained with a minimum number of parameters (coefficients).

In conclusion, the model obtained is

3.2. Theta Rhythm
3.2.1. Mathematical Model in S1 Signal Stimulation

The S1 theta (T) time series has an ascending global trend (see Figure 5(a)), with a slight increase in variance and without a seasonal component, which is why the most suitable modeling option is a second-degree polynomial function, from which we can also estimate the variance of the white noise.

3.2.2. Mathematical Model in S2 Signal Stimulation

The S2 T time series (Figure 5(b)) has an ascending, linear global trend, without seasonal component. The proposed model for S2 T is

The estimated value for the variance of the white noise process is

3.2.3. Mathematical Model in S3 Signal Stimulation

The S3 T time series has an ascending, linear global trend, with a tendency for the variance to increase. The series also seems to have a seasonal component of period 6 (Figure 5(c)).

We transformed the series in order to obtain a constant-variance series.

Usually, the transformation used in such cases is a power function. Here, we applied a power transformation to the original data series, S3 T.

Next, we modeled the global trend of the series with a linear model:

The graph of the residuals obtained following the elimination of the global trend (named RESI6) is presented in Figure 5(d).

The model found for the residual series is a first-order seasonal moving average, SMA(1).

The results of the analysis are presented in Tables 6 and 7.


Type    Coef     SE Coef  T      P

SMA 6   -0.7330  0.2948   -2.49  0.022


Lag          12      24  36  48

Chi-square   11.8
DF           11
P-value      0.376

We observed that the p-value for the model coefficient is small; thus, the coefficient is significant.

On the other hand, the p-value for the Box-Pierce statistic is high, confirming that the residual process can be seen as white noise.

The model found for S3 T is

3.3. Alpha Rhythm
3.3.1. Mathematical Model for Alpha 1 Rhythm in S1 Signal Stimulation

The S1 alpha 1 (A1) series is graphically represented in Figure 6(a).

We observed that the series has an approximately constant global trend and also seems to have a seasonal component of period 6. Calculating the slope of the regression line, we found it to be non-significant (). This led us to propose a constant regression line, the value of the constant being the average of the S1 A1 series.

The indexes of the seasonal component, identified using the Minitab software, are the following:

, , , , , .

The residuals of the model thus constructed have the appearance of a white noise process (which is confirmed by the graphs of the autocorrelation and partial autocorrelation functions), so the proposed model for the S1 A1 series is the constant plus the seasonal component, where the seasonal index is selected by the remainder of dividing t by 6. The estimated value for the variance of the white noise process is

3.3.2. Mathematical Model for Alpha 1 Rhythm in S2 Signal Stimulation

The S2 A1 time series has an ascending, linear global trend, without a seasonal component (Figure 6(b)). The model constructed with linear regression is ; however, the p-value for the slope of the line is marginally non-significant ().

The second model we tested is a constant line, in which the value of the constant is the average of the S2 A1 series. The residuals obtained with this global trend model present autocorrelation and partial autocorrelation functions corresponding to a white noise process, and therefore we can model the S2 A1 series as this constant plus white noise, in which the variance of the white noise process is estimated to be

3.3.3. Mathematical Model for Alpha 1 Rhythm in S3 Signal Stimulation

The S3 A1 series is presented in Figure 6(c). We observed an ascending trend, possibly quadratic, with a slight increase in variance. A seasonal component is possible; however, the period is difficult to define.

A first attempt to model the global trend through a second-degree polynomial showed that such a model is non-significant, since the coefficients have very high p-values. A second attempt was to consider the series of first-order differences, W_t = X_t - X_{t-1}, the graph of which is presented in Figure 6(d).

We observed that the series has a constant global trend; the variance appears to be constant, and there does not appear to be a seasonal component. An analysis of the autocorrelation and partial autocorrelation functions suggests a first-order moving average model, MA(1). The result of the Box-Jenkins analysis is presented below. The model coefficient is significant and the model is adequate, as shown by the p-value for the Box-Pierce statistic (Tables 8 and 9).


Type   Coef     SE Coef  T      P

MA 1   0.7101   0.1887   3.76   0.001


Lag          12      24  36  48

Chi-square   10.5
DF           11
P-value      0.483

Also to be considered is that the graphs of the autocorrelation and partial autocorrelation functions of the model residuals confirm that the residuals are a white noise process. We conclude that an appropriate model is

The estimated variance of the white noise process is .
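First-order differencing, and the one-step forecast implied by an MA(1) model on the differences, can be sketched as below. The coefficient comes from Table 8, but Minitab's sign convention for MA terms is an assumption, and the function names are illustrative.

```python
def first_differences(series):
    """Series of first-order differences W_t = X_t - X_{t-1}."""
    return [series[i] - series[i - 1] for i in range(1, len(series))]

THETA = 0.7101   # MA(1) coefficient from Table 8

def next_value(last_x, last_resid):
    """One-step forecast of the original (undifferenced) series under
    X_t = X_{t-1} + e_t - THETA*e_{t-1}
    (Minitab's sign convention for MA coefficients is assumed)."""
    return last_x - THETA * last_resid

diffs = first_differences([1.0, 3.0, 6.0, 10.0])
```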

3.3.4. Mathematical Model for Alpha 2 Rhythm in S1 Signal Stimulation

The S1 A2 series has an almost constant global trend, without seasonal component; however, we did observe a sudden drop at time (Figure 7(a)).

We used a linear regression model for the global trend of the series:

We estimate the variance of the white noise process as

3.3.5. Mathematical Model for Alpha 2 Rhythm in S2 Signal Stimulation

The S2 A2 series has a quadratic global trend, with an ascending variance, and appears to have a seasonal pattern (Figure 7(b)); however, the data is insufficient to confirm whether the pattern is repeated periodically or not.

We transformed the series by taking the first-order differences, thus defining a new series (Figure 7(c)).

We observed that the new series seems to present constant variance. A first-order seasonal moving average model (with period 4, SMA 4) produces the results shown in Tables 10 and 11.


Type    Coef     SE Coef  T      P

SMA 4   0.8219   0.2150   3.82   0.001


Lag          12      24  36  48

Chi-square   13.7
DF           11
P-value      0.250

The model coefficient is significant, and the Box-Pierce statistics show that the model residuals may belong to a white noise process (p = 0.250). We therefore conclude that a plausible model for the S2 A2 series is

We can estimate the variance of the white noise process:

3.3.6. Mathematical Model for Alpha 2 Rhythm in S3 Signal Stimulation

The S3 A2 series appears to have a global tendency similar to a sinusoid, with an approximately constant variance and without a seasonal component (Figure 7(d)).

After transforming the series, and considering the first-order differences, we obtained a series which can be interpreted as a white noise process (Figure 7(e)).

Thus, a plausible model is , with the white noise variance estimated as being .
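The differencing transformation used for the S2 A2 and S3 A2 series can be illustrated as follows. The series below is synthetic (a sinusoid-like trend plus noise, loosely imitating the behaviour described for Figure 7(d)), so only the operation itself, taking first-order differences, reflects the text.

```python
import numpy as np

# Synthetic stand-in for a 20-point series with a sinusoid-like global
# trend; only the differencing operation reflects the analysis above.
rng = np.random.default_rng(1)
t = np.arange(20)
x = 1.0 + 0.3 * np.sin(2 * np.pi * t / 20) + 0.05 * rng.standard_normal(20)

# First-order differences Y_t = X_t - X_{t-1}; one value is lost.
y = np.diff(x)

# The differenced series fluctuates around zero with roughly constant
# variance, which is the behaviour expected of a white noise process.
print(len(y), float(np.round(np.mean(y), 3)))
```

After differencing, the white noise variance is estimated directly from the sample variance of the differenced series.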

3.4. Beta Rhythm
3.4.1. Mathematical Model for Beta 1 Rhythm in S1 Signal Stimulation

The series has an overall decreasing trend, without a seasonal component (Figure 8(a)). By linear regression, we modeled the global trend with a line:

The result of the regression analysis shows that both coefficients are significant (Table 12).


Table 12
Predictor   Coef        SE Coef    T       P
Constant    0.96655     0.02746    35.19   0.001
C1          -0.007679   0.002293   -3.35   0.004

The regression equation is

After eliminating the global trend, the residuals have autocorrelation and partial autocorrelation functions whose graphs can be interpreted as belonging to a white noise process. Thus, we can model the S1 B1 series as with the white noise variance estimated to be .
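The detrending step can be sketched in code. The 20-point series below is synthetic, generated around the coefficients reported in Table 12; only those two coefficients come from the paper, and `np.linalg.lstsq` stands in for whatever regression software was actually used.

```python
import numpy as np

# Sketch of the detrending step: fit x_t = b0 + b1 * t by ordinary least
# squares, then inspect the residuals. Only the two coefficients in
# Table 12 are taken from the paper; the noise level is an assumption.
rng = np.random.default_rng(2)
t = np.arange(1, 21)                       # time index C1 = 1 .. 20
x = 0.96655 - 0.007679 * t + 0.05 * rng.standard_normal(20)

A = np.column_stack([np.ones_like(t, dtype=float), t])
(b0, b1), *_ = np.linalg.lstsq(A, x, rcond=None)

resid = x - (b0 + b1 * t)
sigma2 = float(resid @ resid) / (len(t) - 2)   # residual variance, df = n - 2
print(round(b0, 3), round(b1, 4), round(sigma2, 4))
```

If the residuals' autocorrelation and partial autocorrelation functions stay within the confidence band, the residuals can be treated as white noise and sigma2 serves as the variance estimator.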

3.4.2. Mathematical Model for Beta 1 Rhythm in S2 Signal Stimulation

The S2 B1 series has an ascending global trend, with a seasonal component of period 6, which is presented in Figure 8(b).

The equation for the global trend was obtained by linear regression:

The seasonal component of period 6 has the following values:

The original series, together with the model given by the global trend and the seasonal component, are represented in Figure 8(c).

The residuals series has the characteristics of a white noise process, and thus, we can conclude that the model of the series is
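The decomposition into a linear trend plus a period-6 seasonal component can be sketched as below. The data are synthetic, and the way the seasonal indices are computed (phase-wise means of the detrended series, centred to sum to zero) is one standard convention, assumed here rather than taken from the paper.

```python
import numpy as np

# Synthetic series: linear trend + a period-6 seasonal pattern + noise.
# Only the structure (trend plus period-6 seasonality) mirrors the text.
rng = np.random.default_rng(3)
t = np.arange(24)
true_season = np.tile([0.10, -0.05, 0.20, -0.10, -0.05, -0.10], 4)
x = 0.9 + 0.004 * t + true_season + 0.01 * rng.standard_normal(24)

# Step 1: remove the linear global trend (ordinary least squares).
A = np.column_stack([np.ones_like(t, dtype=float), t])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
detrended = x - A @ coef

# Step 2: seasonal indices = mean of the detrended values at each of the
# six phases, centred so that the indices sum to zero.
s = np.array([detrended[k::6].mean() for k in range(6)])
s -= s.mean()
print(np.round(s, 2))
```

Subtracting both the trend and the seasonal indices leaves a residual series that, in the paper's case, behaved as white noise.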

3.4.3. Mathematical Model for Beta 1 Rhythm in S3 Signal Stimulation

The series has an ascending global trend, with a slight increase in variance which is difficult to verify, given the size of the data sample (Figure 8(d)).

The global trend can be modeled using a linear regression:

After eliminating the global trend, the residuals series can be seen as a white noise process, so a suitable model for the series is

3.4.4. Mathematical Model for Beta 2 Rhythm in S1 Signal Stimulation

In Figure 9(a) we present the evolution in time of the series of values corresponding to the beta 2 rhythm, under auditory stimulation with the S1 signal. This series (S1 B2) has a decreasing global trend, without a seasonal component. As a global trend model, we propose the linear regression line: .

The series of residuals obtained after subtracting the linear model (RESI13) appears to be a white noise process, although an analysis of the autocorrelation and partial autocorrelation functions shows the autocorrelation coefficients increasing with the lag (while remaining within the 95% confidence band).

Given that we used a fairly short series, it is difficult to establish how the autocorrelation function behaves at larger lags. Thus, a first model that we propose is

A second model we propose is the following: first, we consider the series of first-order differences. Then, since the lag-1 autocorrelation coefficient was high and the coefficients then decreased sharply, a first-order moving average model, MA(1), is advised.

The model is

3.4.5. Mathematical Model for Beta 2 Rhythm in S2 Signal Stimulation

The S2 B2 series is represented in Figure 9(b).

We distinguish an ascending global trend, with a seasonal character of period 6. The equation for the global trend was obtained by a linear regression:

The seasonal component of period 6 has the following values: , , , , , and .

The original series, together with the model, are represented in Figure 9(c). The series of residuals has the characteristics of a white noise process; we can decide that the series model is

3.4.6. Mathematical Model for Beta 2 Rhythm in S3 Signal Stimulation

The S3 B2 series is very similar to the S3 B1 series: the same sudden increase is observed at the 16th value in the series. Beyond this, the series seems to have a decreasing global trend, without a seasonal aspect (Figure 9(d)). The equation given by linear regression is

The Box-Jenkins analysis shows that the MA(2) model is adequate: the coefficients are significant (very low p-values) and the p-value for the Box-Pierce statistic is high (Tables 13 and 14).


Table 13
Type   Coef      SE Coef   T       P
MA 1   -1.0622   0.2281    -4.66   0.001
MA 2   -0.8011   0.2220    -3.61   0.002

Table 14
Lag          12      24   36   48
Chi-square   5.3
DF           10
P-value      0.869

We can thus decide that the S3 B2 series model is

4. Discussion

In this paper, we estimated the variance from the residuals of the model fitted to each signal series. Therefore, the smaller the variance estimate, the better the fit to the series. For most of the obtained models, the variance is sufficiently small, with the exception of three cases in which a value could not be obtained.

The resulting mathematical functions offer the possibility to study, during the 20 minutes of stimulation, the evolution in time, compared to the period prior to stimulation, of the spectral composition of the EEG.

In Table 15 the mathematical models of the alpha, beta, delta and theta rhythms corresponding to the S1 sound are presented.


Table 15
EEG spectrum   Mathematical model   Variance estimator
Delta
Theta                               0.0073
Alpha 1                             0.0158
Alpha 2                             0.1943
Beta 1                              0.054
Beta 2                              0.079

We observe that the alpha rhythm (A1 and A2) has the largest variance and, generally, the lowest amplitude, while the theta rhythm has the highest amplitude, at least towards the end of the period when the S1 stimulation was applied. Indeed, the mean values of the alpha waves (Table 16) are among the lowest (0.727 for alpha1 and 0.889 for alpha2), while the mean value of the theta wave is 1.023. Also, the amplitude of the theta waves tends to increase towards the end of the period when the S1 stimulus was applied, reaching a value of about 1.2, compared to the minimum amplitude of about 0.55 reached by the alpha1 wave.


Table 16
Series   N    Average   St. dev.   Minimum   Q1       Median   Q3       Maximum
S1 A1    20   0.7270    0.1256     0.5418    0.6369   0.6928   0.8359   1.0130
S1 A2    20   0.8894    0.1943     0.5973    0.7056   0.9512   1.0671   1.1849
S1 B1    20   0.8859    0.0733     0.7710    0.8057   0.8989   0.9501   1.0026
S1 B2    20   0.9655    0.0890     0.8055    0.9053   0.9582   1.0291   1.1430
S1 D     20   0.9179    0.0706     0.7861    0.8788   0.9161   0.9624   1.0648
S1 T     20   1.0232    0.0852     0.9281    0.9655   0.9904   1.0703   1.1992

Regarding the S2 sound (Table 17), the alpha waves again show the lowest amplitude, while the delta rhythm has the highest amplitude, increasing in the second half of the period when the sound was applied and reaching a value of about 2.15. Analyzing the standard deviations in Table 18, the delta rhythm has the highest value (0.405), while the smallest deviation is that of the beta2 wave (0.062). We also notice that the beta1 and beta2 frequency bands are almost identical, a fact confirmed by the similarity of the two models found by the Box-Jenkins analysis.


Table 17
EEG spectrum   Mathematical model   Variance estimator
Delta                               0.1642
Theta                               0.0218
Alpha 1                             0.099
Alpha 2                             0.019
Beta 1                              0.074
Beta 2                              0.038


Table 18
Series   N    Average   St. dev.   Minimum   Q1       Median   Q3       Maximum
S2 A1    20   0.7640    0.0988     0.6129    0.7118   0.7404   0.8421   0.9440
S2 A2    20   0.9150    0.1913     0.5319    0.7707   0.9289   1.0382   1.2770
S2 B1    20   0.9255    0.0823     0.8165    0.8616   0.9166   0.9727   1.0946
S2 B2    20   0.9236    0.0620     0.8311    0.8808   0.9202   0.9575   1.1015
S2 D     20   1.4445    0.4053     0.8183    1.0530   1.4260   1.7606   2.1503
S2 T     20   1.1575    0.1477     0.8869    1.0406   1.1780   1.2582   1.4131

The graphs show the greatest separation between the responses to the S2 and S1 sounds.

Mathematical models for brain rhythms corresponding to S3 stimulation are given in Table 19. As with the S2 stimulus, delta and theta rhythms appear to dominate in the second half of the time interval. It is interesting to note in Table 20 that the standard deviations are mostly close to 0.2, except for the beta1 and beta2 frequency bands which have values close to 0.13.


Table 19
EEG spectrum   Mathematical model   Variance estimator
Delta                               0.0226
Theta
Alpha 1                             0.036
Alpha 2                             0.074
Beta 1                              0.0172
Beta 2                              0.0066


Table 20
Series   N    Average   St. dev.   Minimum   Q1       Median   Q3       Maximum
S3 A1    20   0.9016    0.1933     0.6648    0.7723   0.8651   0.9654   1.5343
S3 A2    20   1.0274    0.2193     0.7108    0.8060   1.0343   1.2096   1.4195
S3 B1    20   0.9726    0.1312     0.8473    0.8895   0.9405   1.0026   1.4151
S3 B2    20   0.9628    0.1357     0.8224    0.8597   0.9386   1.0052   1.3536
S3 D     20   1.3981    0.2027     0.9821    1.2878   1.3515   1.5804   1.8019
S3 T     20   1.1830    0.2019     0.9041    1.0132   1.1431   1.3218   1.6730

If we compare the results obtained after the stimulation with the three types of complex sounds, grouping by frequency band, we notice that for both the alpha1 and alpha2 frequency bands the lowest average values (0.727 and 0.889, respectively) are recorded for the S1 stimulus and the highest mean values are for the S3 stimulus (0.901 and 1.027, respectively). In terms of standard deviation, it is higher for S3 sound for both alpha1 and alpha2 waves.

For the beta rhythm, the mean values are much closer than in the case of the alpha waves, ranging between 0.8859 (S1 B1) and 0.9726 (S3 B1). Standard deviations are also lower, ranging from 0.0733 (S1 B1) to 0.1357 (S3 B2). Thus, for beta1, the sound S3 has the highest average on the considered interval, while for beta2 the sound S1 has the highest average, slightly exceeding S3.

The delta rhythm shows a much more interesting behavior: the average values increase from 0.9179 for the S1 sound to 1.4445 for the S2 sound and 1.3981 for the S3 sound. Standard deviations also increase significantly, from 0.07 for S1 to 0.405 for S2 and 0.2027 for S3. A clear domination of the waves corresponding to the signals S2 and S3 is observed.

The same increase is observed for the theta rhythm: the average values increase from 1.0232 for S1 to 1.183 for the S3 stimulus, but the increase is much smaller than that recorded for the delta or alpha rhythms. The theta rhythm corresponding to the sounds S2 and S3 dominates the one corresponding to S1. The standard deviation is, as for the other studied rhythms, higher for S2 and S3 (0.1477 and 0.2019, respectively) compared to 0.0852 for S1.

5. Conclusions

In the present study, a model was obtained for the three types of stimulation signals S1, S2, and S3, which generated mathematical functions for the main waves of the electroencephalogram: alpha, beta, delta, and theta. Mathematical models give us the possibility to compare, simply but objectively, the response of the electroencephalogram to the stimuli S1, S2, and S3.

In summary, the mathematical models obtained are expressed by 2 constant functions, 8 first-degree (linear) functions, a second-degree function, a fourth-degree function, 6 recursive functions, and 4 periodic functions.

Each sound stimulation produced a characteristic pattern of changes in cortical micropotentials: S3 predominantly influences the low-frequency bands (for theta ), S1 influences those of higher frequencies (for beta ), and S2 exerts a moderate influence on both bands, with a slight predominance over the low-frequency ones.

In most models for the residuals, the estimate for the variance is rather small, indicating that the signal series can be modeled quite accurately.

The resulting mathematical functions offer the possibility of studying, during the 20 minutes of stimulation, the evolution in time of the spectral composition of the EEG, compared to the period before stimulation.

The development of a mathematical model that allows the study of the evolution of the spectral EEG composition is an original aspect of this study; it highlights the practical importance, in the case of monotonous auditory stimulation, of the time interval in which synchronization of cerebral activity, depending on the type of stimulation, may occur.

Data Availability

All data used in this study are included within the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Authors’ Contributions

Marius Georgescu and Laura Haidar contributed equally to this work.

Acknowledgments

This paper is dedicated to the memory of Professor Dr. Anda Gadidov, Kennesaw State University, Kennesaw, USA. Part of the research was done at the Center of Genomic Medicine from the “Victor Babes” University of Medicine and Pharmacy of Timisoara (POSCCE 185/48749, contract 677/09.04.2015).


Copyright © 2021 Marius Georgescu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
