Computational and Mathematical Methods in Medicine

Volume 2015 (2015), Article ID 652030, 7 pages

http://dx.doi.org/10.1155/2015/652030

## Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

Department of Neurosurgery, University Hospital Regensburg, 93042 Regensburg, Germany

Received 13 August 2015; Revised 20 October 2015; Accepted 25 October 2015

Academic Editor: Anne Humeau-Heurtier

Copyright © 2015 Rupert Faltermeier et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method at an ICU it is essential to tune this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and for a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.

#### 1. Introduction

The course of severe neurological events like subarachnoid hemorrhage (SAH) and traumatic brain injury is influenced by two main pathophysiological principles: (A) the primary injury, sustained at the time of impact, which is mostly irreversible and therefore not a primary target of treatment [1]; (B) the secondary injury, consisting of cytotoxic and vasogenic edema with increased intracranial pressure (ICP), reduced cerebral blood flow with consecutive brain ischemia, and insufficient oxygenation leading to programmed cell death of neurons, which can be detected from hours to days following injury and may contribute to neurological dysfunction [2–4]. The primary goal of neurointensive care treatment is therefore to avoid secondary brain injury by providing an optimal physiological and biochemical environment [5]. Since the biological changes leading to secondary injury are highly individual [6], a recent consensus has defined the necessity for patient-specific treatment protocols in contrast to a rigid one-size-fits-all approach [7]. In this context, the cerebral pressure autoregulation, which maintains a continuous cerebral blood flow despite variations of systemic arterial pressure, is of paramount importance [8, 9]. Under physiological conditions, an increase of ABP will not induce higher ICP levels. However, if the autoregulation is disturbed, a positive correlation between ABP and ICP will occur [10]. Therefore, if the autoregulation is intact, enhancing the systemic blood pressure leads to improved cerebral perfusion pressure (CPP) and appropriate cerebral blood supply. Conversely, in a patient with impaired autoregulation, augmentation of CPP may cause brain swelling and worse outcome [11]. Recent studies indicated that a deviation from the putatively optimal CPP, based on the function of cerebral autoregulation, will lead to significantly worse outcome of the patients [12].
Therefore an individualized treatment strategy accounting for the autoregulation status of the patient is necessary [13]. This, however, requires an array of different monitoring techniques for the assessment of intracranial pressure (ICP), oxygenation status, and metabolism [14, 15], leading to an immense volume of multimodal datasets that frequently overwhelms the treating physician [16]. To address this problem, we have recently developed a mathematical tool set termed selected correlation analysis that unmasks deterioration of the cerebral autoregulation [17] and indicates reduced intracranial compliance [18]. However, this approach needs to be validated in a prospective study allowing the adjustment of treatment, ultimately leading to improved outcome of the affected patients [14]. The goal of our study was to optimize the parameters of the selected correlation analysis in order to provide the most sensitive and specific tool set for a randomized clinical trial assessing the benefit of the proposed method.

#### 2. Methods

##### 2.1. Patient Population

The study was conducted in accordance with the ethical guidelines of the University of Regensburg Institutional Review Board. Informed consent was obtained from the patients' relatives; all study results were stored and analyzed in an anonymized fashion. We prospectively investigated a cohort of 25 adult patients (13 females, 12 males) who were treated at the neurosurgical intensive care unit for traumatic brain injury (TBI) or subarachnoid hemorrhage in 9 and 16 cases, respectively. We exclusively included patients with critical neurological diseases in our study, since multimodal brain monitoring is clinically indicated only in this patient subgroup. The mean age was 43.4 years (range: 18.4–72.4); the median Glasgow Coma Scale (GCS) at the time of admission was 6 (range: 3–15). Follow-up was completed up to January 2015 by reviewing outpatient records and contacting the patient's family members or the patient's primary physician. The mean follow-up time was 39.8 months; no patient was lost to follow-up. The neurological outcome was measured by the Glasgow Outcome Scale at last follow-up; the median score at last follow-up was 3 (range: 1–5). All patients were sedated and mechanically ventilated during the observation period and received an intra-arterial catheter for the continuous measurement of arterial blood pressure as part of the standard treatment procedure in our institution. ICP monitoring was performed continuously using either an external ventricular drain equipped with an electronic pressure device (EVD) or a parenchymal ICP probe (both from Raumedic, Helmbrechts, Germany). The ABP and ICP data were acquired continuously using a data logger (Daq USB 6210, National Instruments, Munich, Germany) with a sample frequency of 1000 Hz. For the correlation analysis, the data were resampled to 0.2 Hz (one data point every five seconds) to reduce noise effects and to smooth out fast oscillations and spikes.
Additionally, this resampling rate ensures that the slow homeostatic variations of the data are contained within the window sizes we will discuss.
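The text does not specify the resampling algorithm; a straightforward and plausible choice is block averaging, which downsamples and smooths in one step. The sketch below illustrates this assumption (the function name and defaults are ours, not the study's):

```python
import numpy as np

def resample_by_block_mean(signal, fs_in=1000.0, fs_out=0.2):
    """Downsample by averaging non-overlapping blocks.

    At fs_in = 1000 Hz and fs_out = 0.2 Hz, each output sample is the
    mean of a 5-second block (5000 raw samples), which suppresses noise
    and fast oscillations as described in the text.
    """
    block = int(round(fs_in / fs_out))           # samples per output point
    n_blocks = len(signal) // block              # drop any incomplete tail
    trimmed = np.asarray(signal[: n_blocks * block], dtype=float)
    return trimmed.reshape(n_blocks, block).mean(axis=1)
```

Averaging (rather than decimation by selection) acts as a crude low-pass filter, which matches the stated goal of smoothing out spikes.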

##### 2.2. Correlation Index Calculations

In the following we will roughly sketch the mathematical framework used by the selected correlation analysis. For a more detailed description of the applied quantities, especially the calculation of the error rates, please see [17].

To identify the above-mentioned positive correlation between ABP and ICP in monitoring data from the ICU, we use a windowing approach combined with the multitaper method (mtm [19]) to determine the coherence between segments of two time series that were synchronously recorded with a sampling rate of 0.2 Hz. From the isochronous time series $X = (x_1, \dots, x_N)$ and $Y = (y_1, \dots, y_N)$, we select windows $W^X_s$, $W^Y_{s'}$ of fixed size $L$ with

$$W^X_s = (x_s, \dots, x_{s+L-1}), \qquad W^Y_{s'} = (y_{s'}, \dots, y_{s'+L-1})$$

and potentially different starting points $s$ and $s'$ and then calculate the mtm-spectra $S_X(f_k)$, $S_Y(f_k)$ and the mtm coherence $C_{XY}(f_k)$ between the windows. As the mtm provides a built-in significance test, each single frequency $f_k$ is tested for significance. Building on this, we define the pointwise selected correlation (PSC), assuming a fixed significance level $\alpha$ for the built-in significance test:

$$\mathrm{PSC}(f_k) = \begin{cases} 1, & \text{if } f_k \text{ is significant in } S_X,\ S_Y, \text{ and } C_{XY}, \\ 0, & \text{otherwise.} \end{cases}$$

The requirement of being significant for a frequency $f_k$ in both spectra guarantees that only frequencies are considered that essentially contribute to the original signals, whereas, in the case of the coherence, the requirement assures that $f_k$ specifically exhibits a correlation between the input signals. Repeating the PSC calculations for $n$ pairs of isochronous windows leads to the mean pointwise selected correlation (MPSC):

$$\mathrm{MPSC}(f_k) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{PSC}_i(f_k).$$

The elements of the MPSC list represent the percentage of significant occurrence in both spectra and the coherence calculation for each single frequency $f_k$. With MPSC we are able to determine frequency intervals that contain relevant correlations within a whole dataset. After having identified such a frequency interval $F$ by examining several different datasets, we want to determine periods in the dataset where strong correlation with respect to $F$ occurs. Therefore we first estimate the degree of correlation of a distinct pair of windows with respect to $F$ by calculating the sum of all elements of PSC belonging to the frequency band $F$. This sum divided by the length of $F$ is called selected correlation (sc):

$$\mathrm{sc} = \frac{1}{\lvert F \rvert} \sum_{f_k \in F} \mathrm{PSC}(f_k).$$

A pair of windows is called selected correlated if $\mathrm{sc} \geq \mathrm{lim}_{sc}$ for a predefined threshold $\mathrm{lim}_{sc}$.
The value $\mathrm{sc}$ therefore serves as a measure for the degree of correlation of a pair of data windows with respect to a specific frequency range $F$. To obtain time-resolved information about the selected correlation, we determine the index $\mathrm{sc}$ for isochronous windows ($s = s'$) while shifting the starting point $s$ along the time axis. Additionally, we use a statistical test, calculating error rates of false positives, to determine the significance of the threshold $\mathrm{lim}_{sc}$.
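SciPy has no built-in multitaper estimator, so the following sketch substitutes Welch-based coherence with a simple magnitude threshold in place of the mtm significance tests; the frequency band and threshold defaults are illustrative assumptions, not the study's values:

```python
import numpy as np
from scipy.signal import coherence

def selected_correlation(wx, wy, fs=0.2, band=(0.003, 0.05),
                         coh_thresh=0.5, nperseg=64):
    """Simplified sc index for one pair of windows.

    A frequency bin counts as "selected" when its coherence exceeds
    coh_thresh (a stand-in for the multitaper significance tests of the
    original method); sc is the selected fraction of bins in `band`.
    """
    f, cxy = coherence(wx, wy, fs=fs, nperseg=nperseg)
    in_band = (f >= band[0]) & (f <= band[1])
    if not np.any(in_band):
        return 0.0
    psc = cxy[in_band] >= coh_thresh      # pointwise "selected" flags
    return float(psc.mean())              # fraction of selected frequencies
```

As in the text, sc lies between 0 (no selected frequencies in the band) and 1 (every frequency in the band selected).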

##### 2.3. Statistical Test

The statistical test for significance of $\mathrm{lim}_{sc}$, a kind of perturbation test, is based on the model prediction of isochronous correlations between ABP and ICP: two segments should not be correlated if their starting points lie far apart from each other. Assuming that a $\mathrm{sc}$ value is meaningful if it is higher than the predefined threshold $\mathrm{lim}_{sc}$, we can count how often such separated windows produce $\mathrm{sc}$ values higher than $\mathrm{lim}_{sc}$. The number of these wrong hits is interpreted as the error rate of $\mathrm{sc}$ with respect to $\mathrm{lim}_{sc}$ and therefore determines the significance of $\mathrm{sc}$ with respect to $\mathrm{lim}_{sc}$. To identify a sufficient offset $\Delta$ between the input windows, we use the so-called mean windowed autocorrelation (mwa):

$$\mathrm{mwa}(\Delta) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{sc}\left(W^X_{s_i}, W^X_{s_i + \Delta}\right).$$

If the time shift $\Delta$ is large enough to exclude autocorrelation artifacts, the subsequent mwa values should be small and stable. With this offset we can calculate the error index, indicating whether the selected correlation of a shifted pair is higher than the predefined limit $\mathrm{lim}_{sc}$, and the error rate $\mathrm{err}_{sc}$, that is, the rate of obviously wrong hits with respect to $\mathrm{lim}_{sc}$:

$$\mathrm{err}_{sc} = \frac{\#\{\text{shifted pairs with } \mathrm{sc} \geq \mathrm{lim}_{sc}\}}{\#\{\text{shifted pairs}\}}.$$

Accordingly, a pair of data segments is called significantly correlated if the sc value of this pair is higher than or equal to the predefined limit $\mathrm{lim}_{sc}$. The significance of this correlation is specified by the appropriate error rate.
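The perturbation test can be sketched as follows; `sc_func`, the parameter names, and the sampling of starting points are our illustrative assumptions:

```python
import numpy as np

def error_rate(x, y, win, offset, lim_sc, sc_func, n_pairs=100, rng=None):
    """Perturbation-style error rate.

    Counts how often deliberately time-shifted (hence presumably
    uncorrelated) window pairs still reach lim_sc.  sc_func(wx, wy)
    must return an sc value in [0, 1].
    """
    rng = np.random.default_rng(rng)
    max_start = len(x) - win - offset
    hits = 0
    for _ in range(n_pairs):
        s = int(rng.integers(0, max_start))
        wx = x[s : s + win]
        wy = y[s + offset : s + offset + win]   # shifted partner window
        if sc_func(wx, wy) >= lim_sc:
            hits += 1                           # a "wrong hit"
    return hits / n_pairs
```

A small error rate indicates that exceeding `lim_sc` on isochronous windows is unlikely to be a false positive.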

##### 2.4. Hilbert Phase Differences

Having identified a pair of windows exhibiting a sufficiently high correlation index $\mathrm{sc}$, we have to determine the phasing between the two data windows. This is done by calculating the mean Hilbert phase difference (mhpd) of the corresponding data segments, leading to values of mhpd between 0 and 180 deg [17]. The above-described error rate calculations for the $\mathrm{sc}$ value can easily be adapted to calculate the error rates of mhpd by substituting the criterion $\mathrm{sc} \geq \mathrm{lim}_{sc}$ with an appropriate criterion based on a limit $\mathrm{lim}_{mhpd}$. If $\mathrm{mhpd} \leq \mathrm{lim}_{mhpd}$, the correlation between the data will be called positive.
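A minimal sketch of the mhpd computation, using SciPy's FFT-based analytic signal (the mean-removal step is our assumption; the original implementation is not specified at this level of detail):

```python
import numpy as np
from scipy.signal import hilbert

def mean_hilbert_phase_difference(wx, wy):
    """Mean absolute instantaneous phase difference in degrees (0..180).

    The analytic signal of each (mean-removed) window yields an
    instantaneous phase; differences are wrapped to (-pi, pi] before
    averaging their magnitudes.
    """
    phase_x = np.angle(hilbert(wx - np.mean(wx)))
    phase_y = np.angle(hilbert(wy - np.mean(wy)))
    diff = np.angle(np.exp(1j * (phase_x - phase_y)))   # wrap to (-pi, pi]
    return float(np.degrees(np.mean(np.abs(diff))))
```

In-phase signals give values near 0 deg (a "positive" correlation under the mhpd criterion), anti-phase signals values near 180 deg.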

##### 2.5. Parameter Optimization

With the above-described tools we are now able to calculate the percentage of pairs of windows that are significantly positively correlated for each individual patient. This percentage is called selected correlation positive (SCP). As SCP describes the percentage of measurement time in which the cerebral regulatory systems, autoregulation and compliance, are distinctly disturbed, this index is a reliable predictive value for the patient's outcome [17]. But the magnitude of an individual SCP depends on several parameters needed by the above-mentioned mathematical tools. In detail, these parameters are the significance level $\alpha$ of the mtm built-in statistical test, the window size $L$ of the data pairs, the frequency interval $F$ used for the sc calculations, and the limits $\mathrm{lim}_{sc}$ and $\mathrm{lim}_{mhpd}$ for the selected correlation and the mean Hilbert phase of the data. To find the best set of parameters, we first vary all parameters belonging to $\mathrm{sc}$ within some natural limits (see Table 1) and calculate for each resulting parameter set the SCP of each patient, assuming a $\mathrm{lim}_{mhpd}$ of 50 deg and an appropriate offset $\Delta$ for the error rate calculations, as used in our previous study [17]. Then we determine the predictive capability of a specific parameter set with respect to our patient cohort by calculating the $p$ value of the Pearson correlation between the patients' SCP and GOS. Additionally, we calculate a parameter called yield, which is the SCP value of the complete dataset. In other words, yield describes the sum of all SCP values derived from the entire patient population, weighted by the patients' individual observation times, and therefore serves as a measure of the sensitivity of the method. Having found an optimal set of parameters for the sc calculations, we subsequently vary $\mathrm{lim}_{mhpd}$ for this fixed sc parameter set and test the impact on the $p$ value and yield exactly as described above.
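The grid search over parameter sets can be sketched as follows; `scp_func` stands in for the full selected-correlation pipeline, and the data layout is our illustrative assumption:

```python
import itertools
import numpy as np
from scipy.stats import pearsonr

def optimize_parameters(patients, grid, scp_func):
    """Grid search over sc parameter sets.

    patients : list of (data, gos, observation_time) tuples
    grid     : dict mapping parameter name -> list of candidate values
    scp_func : scp_func(data, params) -> SCP in [0, 1] (placeholder for
               the full selected-correlation pipeline)

    Returns (params, p, yield) for the parameter set with the smallest
    Pearson p value between per-patient SCP and GOS; yield is the
    observation-time-weighted mean SCP over the whole cohort.
    """
    best = None
    names = sorted(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        scps = [scp_func(d, params) for d, _, _ in patients]
        gos = [g for _, g, _ in patients]
        _, p = pearsonr(scps, gos)
        times = np.array([t for _, _, t in patients], dtype=float)
        yield_ = float(np.dot(scps, times) / times.sum())
        if best is None or p < best[1]:
            best = (params, p, yield_)
    return best
```

Selecting by p value favors parameter sets whose SCP ranking of patients matches the GOS ranking, while the yield tracks how much of the total measurement time the method flags at all.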