
Advances in Artificial Neural Systems

Volume 2012 (2012), Article ID 190359, 12 pages

http://dx.doi.org/10.1155/2012/190359

## Activation Detection on fMRI Time Series Using Hidden Markov Model

Rong Duan^{1} and Hong Man^{2}

^{1}AT&T Labs, Florham Park, NJ 07932, USA
^{2}Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA

Received 16 March 2012; Revised 23 June 2012; Accepted 23 June 2012

Academic Editor: Anke Meyer-Baese

Copyright © 2012 Rong Duan and Hong Man. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper introduces two unsupervised learning methods for analyzing functional magnetic resonance imaging (fMRI) data based on the hidden Markov model (HMM). The HMM approach focuses on capturing the first-order statistical evolution among the samples of a voxel time series, and it can provide a complementary perspective on the BOLD signals. A two-state HMM is created for each voxel, and the model parameters are estimated from the voxel time series and the stimulus paradigm. Two different activation detection methods are presented in this paper. The first method is based on the likelihood and likelihood-ratio test, in which an additional Gaussian model is used to enhance the contrast of the HMM likelihood map. The second method is based on certain distance measures between the two state distributions, in which the most likely HMM state sequence is estimated through the Viterbi algorithm. The distance between the on-state and off-state distributions is measured either through a *t*-test or using the Kullback-Leibler distance (KLD). Experimental results on both a normal subject and a brain tumor subject are presented. The HMM approach appears to be more robust in detecting supplemental active voxels compared with SPM, especially for the brain tumor subject.

#### 1. Introduction

Functional magnetic resonance imaging (fMRI) is a well-established technique for monitoring brain activity in the field of cognitive neuroscience. The temporal behavior of each fMRI voxel reflects variations in the concentration of oxyhemoglobin and deoxyhemoglobin, measured through blood oxygen level-dependent (BOLD) contrast. The BOLD signal is generally considered an indirect indicator of brain activity, because neural activation may increase blood flow in certain regions of the brain.

##### 1.1. Characteristics of fMRI Data

fMRI data are collected as a time series of 3D images. Each point in the 3D image volume is called a voxel. fMRI data have four important characteristics: (1) large data volume; (2) relatively low SNR; (3) hemodynamic delay and dispersion; (4) fractal properties. Typically, one fMRI data set includes tens of thousands of voxels from a whole-brain scan and therefore contains as many time series. The observed time sequences are combinations of different types of signals, such as task-related, function-related, and transiently task-related signals (different kinds of transiently task-related signals coming from different regions of the brain). These are the signals that convey brain activation information. There are also many types of noise, which can be physiology-related, motion-related, or scanning-related. The signal-to-noise ratio (SNR) in a typical fMRI time series can be quite low, and the SNR level also varies significantly across regions and trials. This noise characteristic causes major difficulty in signal analysis. Hemodynamic delay and dispersion further increase the complexity of the fMRI signal structure. Special efforts have been made to construct flexible hemodynamic response functions (HRFs) that can model the hemodynamic delay and dispersion across various regions and different subjects [1, 2]. fMRI data also have fractal properties, meaning that the data are approximately scale invariant, or scale free.

##### 1.2. Methodology of Analyzing fMRI Data

Two areas of fMRI-based neural systems study have attracted considerable attention over the past two decades: functional activity detection and functional connectivity detection. Functional activity detection aims to locate the spatial areas that are associated with certain psychological tasks, commonly specified by a predefined paradigm. Functional connectivity detection focuses on finding spatially separated areas that have high temporal correlations [3]. Generally, functional connectivity detection is conducted under the resting-state condition. The differences in the biophysical motivations and experiment designs of these two studies are reflected in their methodologies. Functional activity detection compares the temporal series of each voxel with the excitation paradigm, whereas functional connectivity detection compares the voxel time series with other series in a predefined spatial region, that is, a region of interest (ROI) or “seed” region. Functional activity detection thus emphasizes temporal correlation, while functional connectivity detection emphasizes spatial correlation. Even though the two areas have some differences, they share many common statistical modelling strategies. Both attempt to build models that abstract the spatial and temporal relations from the observed fMRI data. Choosing a model that can capture the properties of the time series accurately and efficiently is essential in both study areas.

A large number of methods have been proposed to analyze fMRI data. Most of them fall into one of two categories: modelling-based approaches and data-driven approaches. Reference [4] provided a detailed review. Even though the authors presented the methodologies as functional connectivity detection, many of the reviewed methods are commonly used in functional activity detection as well. This paper focuses on constructing a model to extract voxel temporal characteristics and test voxel activity, using functional activity detection as the example. Unless otherwise noted, all models discussed below address this task.

An established software package called statistical parametric mapping (SPM) is a typical functional activity detection modelling package based on the general linear model (GLM) [5]. The general linear model transforms a voxel time series into a space spanned by a set of basis vectors defined in the design matrix. These basis vectors include a set of paradigm waveforms convolved with a hemodynamic response function (HRF), as well as several low-frequency DCT bases. The residual errors of this linear transform are modelled as a Gaussian pdf. The key component of this method is how to constitute the design matrix so that it accurately models the brain activation effects and separates out noise. Using GLM to analyze fMRI data carries the following intrinsic assumptions:
(i) the activation patterns are spatially distributed in the same way for all subjects;
(ii) the response between input stimulation and brain response is linear;
(iii) the HRF is the same for every voxel;
(iv) the time series observation has a known Gaussian distribution;
(v) the variance and covariance between repeated measurements are invariant;
(vi) the time courses of different factors affecting the variance of fMRI signals can be reliably estimated in advance;
(vii) the signals at different voxels are independent;
(viii) the intensity distribution of the background (nonactive areas) is known, whereas the distribution of active areas is not.
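As a concrete illustration, the core GLM computation can be sketched in a few lines of NumPy. This is a toy sketch, not SPM's actual implementation: the block layout, the gamma-shaped HRF stand-in, and the noise level are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 68                                    # number of time samples
paradigm = np.zeros(T)
for start in range(8, T, 16):             # hypothetical 8-on / 8-off blocks
    paradigm[start:start + 8] = 1.0

# Crude gamma-shaped HRF stand-in (SPM uses a canonical double-gamma HRF).
tax = np.arange(16.0)
hrf = tax ** 3 * np.exp(-tax / 1.5)
hrf /= hrf.sum()
regressor = np.convolve(paradigm, hrf)[:T]

# Design matrix: constant, linear trend, and HRF-convolved paradigm.
X = np.column_stack([np.ones(T), np.linspace(-1, 1, T), regressor])
y = 0.8 * regressor + rng.normal(0.0, 0.3, T)   # simulated "active" voxel

beta, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res[0] / (T - X.shape[1])              # residual variance
c = np.array([0.0, 0.0, 1.0])                   # contrast: paradigm effect
t_stat = c @ beta / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
```

A large `t_stat` for the paradigm regressor is what flags the voxel as active in this framework.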

Reference [6] substituted the paradigm with the average temporal series of an ROI as a seed and applied SPM to functional connectivity detection. The most recent release of SPM incorporates dynamic causal modelling to infer interregional coupling, but it is oriented more toward EEG and MEG data [7].

There are some other methods that can be considered special cases of GLM. For example, the direct subtraction method subtracts the average of the “off” period from the average of the “on” period; voxels with a significant difference are then identified as active. Student's *t*-test can be used to measure the difference in the means, scaled by the standard deviations of the “off” and “on” periods. The larger the *t*-value, the larger the “on-off” difference, and the more active the voxel. The correlation coefficient is another special case of GLM: it measures the correlation between a reference function waveform and each voxel's temporal signal waveform, and voxels with large correlation coefficients are considered connected. If the reference function waveform is defined by the paradigm [8], this is functional activity detection; if it is defined by some seed time course, it is functional connectivity detection [9].
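The direct-subtraction and *t*-test idea above can be sketched as follows; the sample values, block sizes, and noise level are synthetic assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
on = rng.normal(1.0, 0.5, 30)     # voxel samples from the "on" periods
off = rng.normal(0.0, 0.5, 30)    # voxel samples from the "off" periods

diff = on.mean() - off.mean()     # direct subtraction
# Two-sample t statistic: mean difference scaled by the standard errors.
t = diff / np.sqrt(on.var(ddof=1) / on.size + off.var(ddof=1) / off.size)
```

Thresholding `t` across all voxels of a slice yields the activation map described in the text.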

There are some problems with GLM-based methods. For example, GLM assumes one HRF for all voxels. The BOLD signal is only an indirect indicator of neural activity, and many nonneural changes in the body can also influence it. Different brain areas may have different hemodynamic responses, which would not be accurately reflected by the general linear model. Also, the GLM method typically requires grouping or averaging data over several task/control blocks, which reduces sensitivity for detecting transient task-related changes and makes it insensitive to significant changes not consistently time-locked to the task block design. Low SNR makes it possible for nontask-relevant components to overshadow task-relevant components, further reducing sensitivity and specificity. GLM considers only the time series and ignores relationships between voxels, hindering the detection of brain regions acting as functional units during the experiment. Another problem is that the GLM method does not extract the intrinsic structure of the data, which may significantly weaken its effectiveness when prior knowledge of the fMRI signal response to the experimental events is unavailable or the response is not constant across all voxels.

Besides GLM-based methods, a few methods have been proposed to improve the accuracy of modelling fMRI temporal signals. Reference [10] introduced a Bayesian modelling method that used a two-state HMM to infer an optimal state sequence through Markov chain Monte Carlo (MCMC) sampling. The method assumed that the observation is a linear combination of two-state HMMs, which infer hidden psychological states, plus a constant and a trend; the model is designed as the combination of an offset, a linear trend, and a set of two-state HMM state sequences starting at different time points. MCMC is used to estimate the optimal state sequence for each voxel. This method is good at interpreting the dynamics in each voxel, but the computational complexity is a big concern, as mentioned by the authors, and the totally paradigm-free approach might also introduce noise irrelevant to the experiment design. Reference [11] applied a state-space model and Kalman filter to model the baseline and stimulus effect without any parametric constraint. Reference [12] employed multiple reference functions with 100 ms shifts to find the reference function with the highest correlation coefficient for each voxel, which avoids the common practice of using a single reference function for all voxels. Reference [13] proposed Gaussian mixture models to describe the mutually exclusive fMRI time sequence. Reference [14] introduced an unsupervised learning method based on hidden semi-Markov event sequence models (HSMESMs), which has the advantage of explicitly modelling the state occupancy duration. The method decomposed an observation into true positive events, false positive events, and missing observations. The “off-on” paradigm transitions were modelled as left-to-right HMM true positive states, and the other periods were treated as semi-Markov false positive states. The likelihood of the HSMESM was calculated iteratively to detect activity based on a predefined threshold.
Reference [15] used a first-order Markov chain to estimate the time series, and a *t*-test and mutual information to detect activations. All these methods are essentially two-stage approaches: the temporal property is modelled at each voxel independently, and spatial modelling is then performed based on the summary statistics from the temporal analysis. Fully Bayesian spatiotemporal modelling [16] considers spatial and temporal information together. This method decomposed the observed data into spatiotemporal signal and noise, and a space-time simultaneously specified autoregressive model (STSAR) was employed to construct the noise model. A half-cosine HRF model and an activation height model were used to construct the fMRI signal models.

Granger causality analysis [17] and dynamic causal modelling [18] are two popular methods for functional connectivity detection in recent years. A series of comments and controversies [19–22] has been dedicated to comparing these two methods on model selection, causality, and deconvolution from a biophysiological view. Reference [23] criticized dynamic causal modelling on computational complexity and model validation from a mathematical perspective. The advantage of these two methods is that they consider spatial and temporal information at the same time, but the disadvantage is their model complexity.

All the modelling-based approaches mentioned above are either too simple to capture the temporal or spatial dynamics across different voxels and subjects, or too complicated for accurate parameter estimation and easy inference.

In addition to these modelling-based methods, there are also data-driven methods for analyzing fMRI data. One popular example is independent component analysis (ICA). ICA decomposes a 4D fMRI data volume (3D spatial and 1D temporal) into a set of maximally temporally or spatially independent components by minimizing the mutual information between these components. ICA does not require knowledge of the stimulus or paradigm in the data decomposition, and similar voxel activation patterns will usually appear in the same component. ICA is also called blind source separation, because it needs no prior knowledge, and it is able to identify “transient task-related” components that cannot be easily identified from the paradigm. The first application of ICA to fMRI data was spatial ICA (sICA) [24]. Temporal ICA (tICA) was introduced later by [25]. Reference [26] compared sICA with tICA and reported that the benefit of each method depends on the independence of the underlying spatial or temporal signals: sICA maximizes independence spatially, while the corresponding temporal components might be highly correlated, and vice versa for tICA. To consider the mutual independence of space and time simultaneously, [27] proposed spatiotemporal independent component analysis (stICA). Extending the entropy-based one-dimensional ICA decomposition introduced in [28], the authors embedded the spatial and temporal components at the same time and incorporated a spatially skewed probability density function to replace the kurtosis-based, symmetric probability density function in decomposing the independent signals. As pointed out in [29], the disadvantages of the Infomax and entropy-based stICA algorithms used in [27] are that the number of parameters to be estimated is large, and that the gradient-descent optimization is prone to local minima and sensitive to noise. To improve stability and robustness and to reduce computational complexity, [29] adopted generalized eigenvalue decomposition and joint diagonalization of both the spatial and temporal autocorrelations to achieve spatially and temporally independent signals simultaneously. ICA methods have shown promising results in fMRI analysis, but, as with all other data-driven methods, the output is hard to interpret, and interpretation usually requires special knowledge and human intervention. Also, ICA does not specify which component, among many output components, is the activation component, and there is no statistical confidence level for each extracted component.
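To make the blind-source-separation idea concrete, here is a minimal FastICA-style sketch on two synthetic sources (a task-like square wave and a sinusoid). The mixing matrix, `tanh` nonlinearity, and iteration count are illustrative choices for this toy example, not any published fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t = np.arange(n)
s1 = np.sign(np.sin(2 * np.pi * t / 100))   # square wave (task-like source)
s2 = np.sin(2 * np.pi * t / 37)             # sinusoid (e.g., physiological)
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S  # observed mixtures

# Whiten the observations (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA fixed-point iteration with g = tanh.
W = rng.normal(size=(2, 2))
for _ in range(200):
    g = np.tanh(W @ Xw)
    W_new = g @ Xw.T / n - np.diag((1 - g ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)
    W = u @ vt                              # symmetric decorrelation
recovered = W @ Xw                          # estimated sources (up to sign/order)
```

The sign and ordering ambiguity visible here is exactly why, as noted above, ICA cannot by itself say which output component is the activation component.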

Clustering is another well-developed data-driven approach for brain activity and connectivity detection. It has been used to identify regions with similar activation patterns. Common clustering algorithms include hierarchical clustering, crisp clustering, K-means, self-organizing maps (SOM), and fuzzy clustering. The major drawback of most clustering methods is that they make assumptions about cluster shapes and sizes, which may not match the observed data structures. The optimization techniques used in clustering may also get stuck in local maxima and produce unstable results. In addition, the number of clusters is frequently determined heuristically, and the clusters are randomly initialized, which makes the output inconsistent across trials.
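As an illustration of clustering voxel time series, here is a minimal K-means sketch in NumPy on synthetic “active” and “inactive” voxels; the data, choice of k, and iteration count are assumptions for the example, and the random initialization it uses is precisely the source of the run-to-run inconsistency noted above.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 60
paradigm = np.tile(np.r_[np.zeros(10), np.ones(10)], 3)   # 3 on/off cycles
active = paradigm + rng.normal(0, 0.3, (40, T))           # task-following voxels
inactive = rng.normal(0, 0.3, (60, T))                    # noise-only voxels
X = np.vstack([active, inactive])

k = 2
centers = X[rng.choice(len(X), k, replace=False)]         # random initialization
for _ in range(50):
    dist = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
    labels = dist.argmin(axis=1)                          # assign to nearest center
    centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
```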

In this paper, we propose a simple dynamic state-space model, which attempts to model the voxel time series as a random process driven by the experimental paradigm or some ROI seed series. For a given voxel, its behavior is described by a two-state hidden Markov model with certain state distributions and state transitions. The HMM parameters are estimated from the prior statistics of the paradigm as well as from the testing time series. Two methods are introduced to detect voxel activation based on the estimated HMM. The first method calculates the likelihood of each time series given its HMM and forms a likelihood map over all voxels residing in an fMRI slice. A simple Gaussian model is also used to improve the contrast of this likelihood map. The second method uses the *t*-test or the Kullback-Leibler distance (KLD) to measure the distance between the on-state distribution and the off-state distribution. These distributions are estimated from the most likely HMM state sequence, which is calculated through the Viterbi algorithm. The contribution of the method is that it unifies robustness, stability, and reliability under the same framework for estimating paradigm-driven fMRI studies. First, it incorporates the dynamic characteristics of the fMRI time series by adopting a 2-state HMM, which is robust in detecting active voxels with different delay and dispersion behaviors. Second, the proposed method utilizes paradigm prior knowledge in parameter estimation, which not only simplifies the computation compared with the approach in [10], but also improves the stability and reliability of the output owing to the stability of the paradigm.

The rest of this paper is organized as follows. In Section 2, we introduce the two-state hidden Markov model approach for fMRI data and discuss activation detection methods based on the estimated HMM. In Section 3, we present the experimental results on two sets of fMRI data, one from a normal subject and the other from a brain tumor subject, and compare the results with the GLM-based statistical parametric mapping (SPM) package [5]. Section 4 concludes the paper.

#### 2. Hidden Markov Model for fMRI Time Series

##### 2.1. Hidden Markov Model

HMM is a very efficient stochastic method for modelling sequential data whose distribution patterns tend to cluster and alternate among different clusters [30]. A hidden Markov model consists of a finite set of states. In a traditional Markov chain, the state is directly visible to the observer, and the state transition probabilities are the only parameters. In an HMM, only the observations influenced by the state are visible. Each hidden state is associated with a probability distribution, and transitions among the states are governed by transition probabilities. The most common, first-order HMM implies that the state at any given time depends only on the state at the previous time step.

HMM is well developed in temporal pattern recognition applications. It was first applied to speech recognition [31] and is now widely used in multimedia [32], bioinformatics [33, 34], information retrieval [35], and so forth.

An HMM can be described by the following elements [31]: (1) a set of observations $O = (o_1, o_2, \ldots, o_T)$, where $T$ is the number of time samples; (2) a set of states $S = \{s_1, s_2, \ldots, s_N\}$, where $N$ is the number of states; (3) a state-transition probability distribution $A = \{a_{ij}\}$, where $a_{ij} = P(q_{t+1} = s_j \mid q_t = s_i)$; (4) an observation probability distribution $B = \{b_j(v)\}$ for each state, where $b_j(v) = P(o_t = v \mid q_t = s_j)$ and $v$ is a possible observation value; (5) an initial state distribution $\pi = \{\pi_i\}$, where $\pi_i = P(q_1 = s_i)$, $1 \le i \le N$.

An HMM is therefore denoted by $\lambda = (A, B, \pi)$. We further model each state distribution as a Gaussian pdf:

$$ b_j(o) = \frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\!\left(-\frac{(o - \mu_j)^2}{2\sigma_j^2}\right). $$

Let $Q = (q_1, q_2, \ldots, q_T)$ be a possible state sequence, and assume that the observation samples are independent; the likelihood of an observed sequence $O$ given this HMM can be calculated as

$$ P(O \mid \lambda) = \sum_{Q} P(O \mid Q, \lambda)\, P(Q \mid \lambda) = \sum_{q_1, \ldots, q_T} \pi_{q_1}\, b_{q_1}(o_1) \prod_{t=2}^{T} a_{q_{t-1} q_t}\, b_{q_t}(o_t). $$

Given the observation $O$ and the HMM $\lambda$, the most likely state sequence $Q^{*} = \arg\max_Q P(Q \mid O, \lambda)$, which maximizes the likelihood, can be calculated through the Viterbi algorithm [36]. The Viterbi path score function is defined as

$$ \delta_t(i) = \max_{q_1, \ldots, q_{t-1}} P(q_1 \cdots q_{t-1},\, q_t = s_i,\, o_1 \cdots o_t \mid \lambda), $$

where $\delta_t(i)$ is the probability of the most probable path ending in state $s_i$ at time $t$. The induction can be expressed as

$$ \delta_{t+1}(j) = \left[\max_i \delta_t(i)\, a_{ij}\right] b_j(o_{t+1}). $$
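The recursion above translates directly into code. Below is a log-domain sketch of the Viterbi algorithm for an HMM with Gaussian state pdfs; the parameter values at the bottom are illustrative, with state 0 taken as the off state and state 1 as the on state.

```python
import numpy as np

def viterbi(x, mu, var, A, pi):
    """Most likely state sequence for observations x (log domain)."""
    T, N = len(x), len(mu)
    logA = np.log(A)
    logpi = np.log(np.maximum(pi, 1e-300))   # tolerate zero entries in pi
    # Per-sample Gaussian log-likelihoods log b_j(x_t), shape (T, N).
    logb = -0.5 * (np.log(2 * np.pi * var) + (x[:, None] - mu) ** 2 / var)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = logpi + logb[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA    # (from state, to state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logb[t]
    q = np.zeros(T, dtype=int)                   # backtrack
    q[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        q[t] = psi[t + 1, q[t + 1]]
    return q

# Illustrative 2-state parameters: state 0 = off, state 1 = on.
mu, var = np.array([0.0, 1.0]), np.array([0.25, 0.25])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([1.0, 0.0])
```

Working in the log domain avoids the numerical underflow that multiplying many small probabilities would cause on long series.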

In a typical application of HMMs, multiple HMMs are trained on different groups of labeled data, the HMM parameters are estimated from these training data, and each test sequence is then assigned to the model with the maximum likelihood.

##### 2.2. Brain Activation Detection

###### 2.2.1. HMM Likelihood Methods

In our unsupervised learning methods, HMM parameters are estimated directly from the experimental paradigm or the voxel time series under examination. This is different from conventional HMM applications, where HMM parameters are usually estimated from some training data. The motivation for avoiding a training process is that the true activation behavior varies from voxel to voxel and from patient to patient; therefore, it is not advisable to use parameters from one set of voxels to characterize other voxels.

Since the simple block paradigm has only two levels, “on” and “off,” in this work we let the number of states $N = 2$, that is, an on-state $s_{\mathrm{on}}$ and an off-state $s_{\mathrm{off}}$.

Because of the first-order Markov assumption, that is, $P(q_{t+1} \mid q_t, q_{t-1}, \ldots, q_1) = P(q_{t+1} \mid q_t)$, the distribution of a state duration is exponential (geometric in discrete time), and the expected value of a state duration can be expressed as

$$ E[d_i] = \sum_{d=1}^{\infty} d\, a_{ii}^{\,d-1}(1 - a_{ii}) = \frac{1}{1 - a_{ii}}. $$

Given an experimental paradigm, let the length (i.e., number of time samples) of one ON period be $L_{\mathrm{on}}$ and the length of one off period be $L_{\mathrm{off}}$; the transition matrix can then be estimated as

$$ A = \begin{pmatrix} 1 - 1/L_{\mathrm{off}} & 1/L_{\mathrm{off}} \\ 1/L_{\mathrm{on}} & 1 - 1/L_{\mathrm{on}} \end{pmatrix}, $$

with the first row and column corresponding to the off state.

The parameters in $B$ can be estimated from the voxel time series $x = (x_1, \ldots, x_T)$. Assuming that the time samples are normalized, let $P_{\mathrm{on}}$ denote the paradigm ON periods and $P_{\mathrm{off}}$ the paradigm off periods. The off-state Gaussian parameters are

$$ \mu_{\mathrm{off}} = \frac{1}{T_{\mathrm{off}}} \sum_{t \in P_{\mathrm{off}}} x_t, \qquad \sigma_{\mathrm{off}}^2 = \frac{1}{T_{\mathrm{off}}} \sum_{t \in P_{\mathrm{off}}} (x_t - \mu_{\mathrm{off}})^2, $$

and the on-state Gaussian parameters are

$$ \mu_{\mathrm{on}} = \frac{1}{T_{\mathrm{on}}} \sum_{t \in P_{\mathrm{on}}} x_t, \qquad \sigma_{\mathrm{on}}^2 = \frac{1}{T_{\mathrm{on}}} \sum_{t \in P_{\mathrm{on}}} (x_t - \mu_{\mathrm{on}})^2, $$

where $T_{\mathrm{off}}$ is the total number of time samples in the off periods and $T_{\mathrm{on}}$ is the total number in the ON periods.

Because the paradigm always starts in the off state, the parameters in $\pi$ are set as $\pi_{\mathrm{off}} = 1$ and $\pi_{\mathrm{on}} = 0$.
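Putting the three estimates together, the HMM for one voxel can be set up as follows. The block lengths, paradigm layout, and synthetic series are assumptions made for the sketch; in the actual method the means and variances come from the real voxel series.

```python
import numpy as np

L_off, L_on = 8, 8                        # samples per off / ON block (assumed)
paradigm = np.tile(np.r_[np.zeros(L_off), np.ones(L_on)], 4).astype(bool)

# Expected state duration is 1/(1 - a_ii), so set self-transitions to 1 - 1/L.
A = np.array([[1 - 1 / L_off, 1 / L_off],        # row 0: off state
              [1 / L_on, 1 - 1 / L_on]])         # row 1: on state
pi = np.array([1.0, 0.0])                        # paradigm starts in off state

rng = np.random.default_rng(4)
x = paradigm + rng.normal(0.0, 0.3, paradigm.size)   # synthetic voxel series
# State Gaussians from the paradigm-labeled samples.
mu = np.array([x[~paradigm].mean(), x[paradigm].mean()])
var = np.array([x[~paradigm].var(), x[paradigm].var()])
```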

Given a 2-state HMM as specified, if an observation sequence does have two distinguishable states consistent with the paradigm states, the resulting on-state distribution will be clearly different from the off-state distribution, and the likelihood of such a sequence given this model will be relatively high. If an observation sequence does not have such a clear 2-state characteristic, the corresponding state transitions will be somewhat random and will not fit the specified transition matrix well; in this situation, the likelihood of the sequence will be relatively low. Therefore, the likelihood of a voxel sequence can provide an indication of the activation of that voxel. A likelihood test on an fMRI slice produces a likelihood map, with each point representing the likelihood of a voxel on the slice.

To enhance the contrast of this likelihood map, we introduce a simple Gaussian model for the samples. This model is consistent with the state distributions in the 2-state HMM. The likelihood of the entire sequence is calculated based on this model. The expectation is that if a voxel is nonactive, its distributions in the ON periods and off periods should be similar, and therefore its likelihood under this model should be relatively high; on the other hand, if the voxel is active, its distribution in the ON periods will be quite different from that in the off periods, and the likelihood of the whole sequence under this model will be relatively low. The subtraction of the Gaussian log-likelihood map from the HMM log-likelihood map is equivalent to a generalized likelihood-ratio test, and it provides an activation map with enhanced contrast.
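The likelihood-ratio computation can be sketched as below: the HMM log-likelihood comes from the forward algorithm with per-step scaling, and the Gaussian log-likelihood from a single pdf fitted to the whole series. For simplicity the HMM state parameters are fixed here rather than estimated per voxel, and the series are synthetic, so this only illustrates the shape of the computation.

```python
import numpy as np

def hmm_loglik(x, mu, var, A, pi):
    """Forward algorithm with per-step scaling; returns log P(x | HMM)."""
    b = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    alpha, ll = pi * b[0], 0.0
    for t in range(len(x)):
        if t > 0:
            alpha = (alpha @ A) * b[t]
        c = alpha.sum()
        ll += np.log(c)
        alpha = alpha / c            # rescale to avoid underflow
    return ll

def gauss_loglik(x):
    """Log-likelihood under one Gaussian fitted to the whole series."""
    m, v = x.mean(), x.var()
    return -0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)

A = np.array([[7 / 8, 1 / 8], [1 / 8, 7 / 8]])
pi = np.array([1.0, 0.0])
mu, var = np.array([0.0, 1.0]), np.array([0.09, 0.09])

def llr(x):
    """Log-likelihood-ratio score; higher suggests an active voxel."""
    return hmm_loglik(x, mu, var, A, pi) - gauss_loglik(x)
```

Evaluating `llr` over all voxels of a slice yields the contrast-enhanced map described above.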

###### 2.2.2. State Distribution Distance Methods

If a voxel is active, its fMRI time series can be partitioned into segments associated with the two states, and each state can be described by a distribution. The assumption is that if the on-state distribution is significantly different from the off-state distribution, we can declare the voxel active with high confidence, and vice versa. Therefore, our second method attempts to measure the distance between the presumed on-state and off-state distributions.

There are many techniques available for measuring the distance between two distributions. We study two such measures in this work: the *t*-test and the Kullback-Leibler divergence. Both the on-state and off-state distributions are modelled as simple Gaussian pdfs.

Given the Gaussian parameters $(\mu_{\mathrm{on}}, \sigma_{\mathrm{on}}^2)$ and $(\mu_{\mathrm{off}}, \sigma_{\mathrm{off}}^2)$, the *t*-test calculates the difference of the two mean values normalized by their variances:

$$ t = \frac{\mu_{\mathrm{on}} - \mu_{\mathrm{off}}}{\sqrt{\sigma_{\mathrm{on}}^2 / T_{\mathrm{on}} + \sigma_{\mathrm{off}}^2 / T_{\mathrm{off}}}}. $$

A *t*-map is produced after the *t*-test is applied to all the voxels on an fMRI slice. High values in the map usually indicate active voxels.

The Kullback-Leibler divergence [37] is frequently used as a distance measure between two probability densities, although in theory it is not a true distance measure because it is not symmetric. In general, it is defined in the form of “relative entropy,”

$$ \mathrm{KLD}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx. $$

For two Gaussian pdfs $p = \mathcal{N}(\mu_p, \sigma_p^2)$ and $q = \mathcal{N}(\mu_q, \sigma_q^2)$, a closed-form expression for the KLD is available:

$$ \mathrm{KLD}(p \,\|\, q) = \log \frac{\sigma_q}{\sigma_p} + \frac{\sigma_p^2 + (\mu_p - \mu_q)^2}{2\sigma_q^2} - \frac{1}{2}. $$
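Both distance measures reduce to one-line formulas once the state Gaussians are estimated; a sketch, with the function signatures chosen for this example:

```python
import numpy as np

def t_stat(mu_on, var_on, n_on, mu_off, var_off, n_off):
    """Two-sample t between the on-state and off-state Gaussians."""
    return (mu_on - mu_off) / np.sqrt(var_on / n_on + var_off / n_off)

def kld_gauss(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(p || q) for univariate Gaussians p and q."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)
```

For identical Gaussians the KLD is zero, and it grows with the mean separation, which is what makes it usable as an activation score.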

These are well-established methods. However, a critical issue in fMRI analysis is how to estimate the correct on-state and off-state distributions. A simple assumption is to let all time samples in the paradigm ON periods be the on-state samples and all samples in the paradigm off periods be the off-state samples. We refer to this approach as the “paradigm state” approach. SPM takes a similar approach, except that the block paradigm is convolved with an HRF, which is normally a low-pass filter characterizing the natural voxel response to a stimulus. The state means are obtained by projecting the time series onto the HRF-convolved paradigm waveform, and the state variances are set equal to model the residual error between the voxel time series and the weighted paradigm waveform.

We take a different approach by applying the 2-state HMM to each voxel series and calculating the most likely state sequence using the Viterbi algorithm. We refer to this approach as the “Viterbi path” approach. The on-state and off-state statistics are then calculated according to the optimal state assignment of each time sample: the off-state parameters are obtained from all samples assigned to the off state, and the on-state parameters from all samples assigned to the on state.

#### 3. Experimental Results

##### 3.1. Normal Subject

The data set is collected from a test with a self-paced bilateral sequential thumb-to-digits opposition task. The task paradigm consists of a 32 sec baseline followed by 4 cycles of 30 sec ON and 30 sec OFF. The time series is sampled at 0.25 Hz, which produces 68 time samples for each voxel. The first four samples are ignored during analysis because of initially unstable measurement. The BOLD images are acquired on a 1.5 T GE EchoSpeed Horizon scanner with the following parameters: TR/TE = 4000/60, FOV = 24 cm, slice thickness 5 mm without gap, and 28 slices to cover the entire brain. Following acquisition of the functional data, a set of 3 mm slice thickness, high-resolution, gadolinium-enhanced images is also obtained according to the clinical imaging protocol. The data are aligned to remove the limited motion between data sets and then smoothed with a Gaussian kernel before further processing [5]. We further normalize each time series with the mean and variance of its paradigm off period. To compensate for DC drifting in many voxels, each time series is partitioned into four equal-length segments, and normalization is performed separately on each segment. In the reported results, only one fMRI transverse slice is shown.
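The segment-wise normalization described above can be sketched as follows; note that, unlike the paper's exact procedure (which normalizes with the mean and variance of the paradigm off period), this simplified version z-scores each segment as a whole.

```python
import numpy as np

def normalize_segments(x, n_seg=4):
    """Split a series into n_seg equal parts and z-score each part,
    which removes segment-wise DC drift."""
    segs = np.array_split(np.asarray(x, dtype=float), n_seg)
    return np.concatenate([(s - s.mean()) / s.std() for s in segs])
```

Normalizing per segment rather than over the whole series is what compensates for slow DC drift across the run.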

We first compare three methods based on two different distribution distance measures: SPM with a *t*-test or an *F*-test, and our HMM Viterbi path method with a *t*-test or a KLD measure. The results are shown in Figure 1. From these results we can see that the primary and secondary motor areas are effectively highlighted by all of these methods. We also make the following observations: (1) the HMM Viterbi path methods produce more compact and clearly highlighted regions, which indicates that Viterbi path estimation is more accurate than paradigm state estimation; (2) the HMM Viterbi path *t*-test method performs similarly to the SPM *t*-test, with some minor differences mostly along the outer frontal regions; (3) the KLD method resembles the SPM *F*-test in the sense that its results are purely positive, while *t*-test results are signed.

To test the effectiveness of our HMM likelihood-ratio method, we compare its result with an SPM *t*-test result. In Figure 2, panel (a) shows the two-state HMM log-likelihood map; (b) shows the Gaussian log-likelihood map of the same slice; and (c) shows the log-likelihood-ratio test map. It can be seen that the HMM log-likelihood map is almost the reverse of the Gaussian log-likelihood map, which validates our expectation in Section 2.2.1. Panel (c) is similar to (a), yet with enhanced contrast. This result resembles the SPM *t*-test result, although their magnitude scales are quite different.

We examine several active voxels detected by SPM and by the HMM likelihood-ratio test. In Figure 3, the SPM *t*-map and the HMM log-likelihood-ratio map are thresholded at levels that yield similar numbers of active voxels. The corresponding voxel time series, marked “A,” “B,” “C,” and “D,” are shown in Figure 4. Voxels “A” and “B” are detected by both SPM and the HMM likelihood-ratio test. Voxels “C” and “D” are highlighted only by the HMM likelihood-ratio test.

##### 3.2. Brain Tumor Subject

Functional MRI is used not only for normal brain function mapping but also widely for neurosurgical planning and neurologic risk assessment in the treatment of brain tumors. The growth of a tumor can cause functional areas to shift from their original locations, and large tumors can shift these critical regions dramatically. Localizing the motor strip and coregistering the results to a surgical scan prior to a neurosurgical intervention can help guide direct cortical stimulation during an awake craniotomy and possibly shorten operation time. In some cases, using fMRI to confirm the expected location of the motor strip may avoid awake neurosurgery altogether.

The HRF for a brain tumor patient is more complicated than that for normal healthy subjects. We compare our unsupervised 2-state HMM model with GLM-based SPM on a brain tumor patient and find that the 2-state HMM model is more robust to HRF variation and more sensitive in detecting supplemental motor activation.

The machine specification and functional data acquisition for the tumor patient are the same as for the normal subject described above; the experiment design is different. The test is a self-paced bilateral sequential thumb-to-digits opposition task, with a task paradigm consisting of a baseline period followed by repeated cycles of ON and off blocks. The first few samples are ignored during analysis because of initially unstable measurement. The patient has a tumor in his left frontal lobe, as seen in the high-resolution fMRI image in Figure 5(a).

Figure 5(b) is a thresholded SPM *t*-test map: both left and right motor areas are detected by SPM, and there are no supplemental voxels. The SPM *t*-test thus suggests that the tumor does not impact the patient's motor area. The thresholded HMM likelihood ratio test result shown in Figure 5(c) indicates some weak activation in the left motor area; in addition, some supplemental motor activation is detected in surrounding areas. We further study several active voxels from Figures 5(b) and 5(c). The locations of the selected active voxels are marked "A," "B," "C," "D," "E," "F," "G," and "H" in each figure, respectively, and the corresponding voxel time series are shown in Figure 6. From these results, we can see three types of voxels. Voxels "A" and "B" are detected by both SPM and the HMM likelihood ratio test and exhibit strong activation patterns. Voxels "C," "G," and "H" are detected only by SPM, and in fact they are either very weak activations or false positives. Voxels "D," "E," and "F" are detected only by the HMM likelihood ratio test, and their time series show strong activation patterns related to the paradigm but with different delay and dispersion. These results reaffirm our understanding that SPM has difficulty locating active voxels with unexpected delay and dispersion behaviors.

#### 4. Concluding Remarks and Future Work

In this paper we have presented an HMM-based method to detect active voxels in fMRI data. A 2-state HMM model is built based on the paradigm on/off periods, and a 1-state HMM model is built based on the paradigm off period. A log-likelihood ratio map is generated from the two log-likelihoods. The Viterbi path is obtained for the 2-state HMM model, and from this path a *t*-test map and a KLD map are generated. From the experiments we see that the HMM methods are as effective as the SPM method, and sometimes the HMM methods can detect supplemental active voxels that SPM may miss, especially in complicated cases such as tumor patients. Overall we consider the HMM methods complementary to the SPM method: SPM focuses on capturing fMRI signal waveform characteristics, while the HMM methods attempt to describe the stochastic behavior of fMRI signals. In other words, the HMM methods can provide a second opinion on the SPM test results, which can be very helpful in practical situations.
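The Viterbi-based pipeline summarized above can be sketched for a single voxel. This is a minimal illustration, not the authors' implementation: the HMM parameters (initial probabilities, transition matrix, state means and variances) and the synthetic on/off time series are assumed for demonstration, whereas in the paper they are estimated from the voxel data and the stimulus paradigm. The sketch decodes the most likely state sequence, then computes a Welch *t* statistic between the on-state and off-state samples and a closed-form KL divergence between the two fitted Gaussian state distributions.

```python
import math
import random

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def viterbi_2state(obs, pi, A, means, sigmas):
    """Most likely state path for a 2-state Gaussian-emission HMM (log domain)."""
    T = len(obs)
    delta = [[0.0, 0.0] for _ in range(T)]  # best log score ending in each state
    psi = [[0, 0] for _ in range(T)]        # backpointers
    for s in range(2):
        delta[0][s] = math.log(pi[s]) + gauss_logpdf(obs[0], means[s], sigmas[s])
    for t in range(1, T):
        for s in range(2):
            cand = [delta[t - 1][r] + math.log(A[r][s]) for r in range(2)]
            psi[t][s] = 0 if cand[0] >= cand[1] else 1
            delta[t][s] = cand[psi[t][s]] + gauss_logpdf(obs[t], means[s], sigmas[s])
    path = [0] * T
    path[-1] = 0 if delta[-1][0] >= delta[-1][1] else 1
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path

def kld_gauss(mu0, s0, mu1, s1):
    """KL(N(mu0, s0^2) || N(mu1, s1^2)) for univariate Gaussians."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Synthetic voxel: 5 cycles of 10 "off" samples then 10 "on" samples
# (assumed means/variances, for illustration only).
random.seed(1)
obs = []
for _ in range(5):
    obs += [random.gauss(0.0, 1.0) for _ in range(10)]  # off period
    obs += [random.gauss(2.0, 1.0) for _ in range(10)]  # on period

path = viterbi_2state(obs, pi=[0.5, 0.5],
                      A=[[0.9, 0.1], [0.1, 0.9]],
                      means=[0.0, 2.0], sigmas=[1.0, 1.0])

on  = [x for x, s in zip(obs, path) if s == 1]
off = [x for x, s in zip(obs, path) if s == 0]

# Welch t statistic and KLD between the decoded on/off state distributions.
t_stat = (mean(on) - mean(off)) / math.sqrt(var(on) / len(on) + var(off) / len(off))
kld = kld_gauss(mean(on), math.sqrt(var(on)), mean(off), math.sqrt(var(off)))
```

A voxel whose decoded on-state and off-state distributions are well separated (large `t_stat` or `kld`) would be flagged as active; applying this per voxel yields the *t*-test and KLD maps described above.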
