Advances in Artificial Intelligence
Volume 2009 (2009), Article ID 219743, 11 pages
http://dx.doi.org/10.1155/2009/219743
Research Article

Bayesian Unsupervised Learning of DNA Regulatory Binding Regions

1Department of Mathematics, Åbo Akademi University, 20500 Turku, Finland
2Department of Mathematics, University of Linköping, 58183 Linköping, Sweden
3Department of Mathematics, The Royal Institute of Technology, 100 44 Stockholm, Sweden

Received 13 February 2009; Revised 6 June 2009; Accepted 2 July 2009

Academic Editor: Djamel Bouchaffra

Copyright © 2009 Jukka Corander et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Identification of regulatory binding motifs, that is, short specific words, within DNA sequences is a commonly occurring problem in computational bioinformatics. A wide variety of probabilistic approaches have been proposed in the literature to either scan for previously known motif types or to attempt de novo identification of a fixed number (typically one) of putative motifs. Most approaches assume the existence of reliable biodatabase information to build a probabilistic a priori description of the motif classes. Examples of attempts to do probabilistic unsupervised learning about the number of putative de novo motif types and their positions within a set of DNA sequences are very rare in the literature. Here we show how such a learning problem can be formulated using a Bayesian model that aims to simultaneously maximize the marginal likelihood of sequence data arising under multiple motif types as well as under the background DNA model, which is a variable-length Markov chain. It is demonstrated how the adopted Bayesian modelling strategy combined with recently introduced nonstandard stochastic computation tools yields a more tractable learning procedure than is possible with the standard Monte Carlo approaches. Improvements and extensions of the proposed approach are also discussed.

1. Introduction

A major body of genomic information coded in the DNA is represented by the regulatory regions, which control the chronology, extent, and way in which genes are expressed. A promoter is the most important regulatory region as it controls the first step of gene expression, that is, the transcription of messenger RNA. A brief summary of and introduction to the biology of promoters are given in [1]. The basic characteristic of a promoter is that it contains binding sites for proteins called transcription factors (TFs). The binding sites in the DNA reside near the gene controlled by the promoters. The interaction of several TFs with their corresponding binding sites regulates the activation or repression of the gene. Hence, the promoter architecture is fundamental for understanding gene expression patterns, regulation networks, and cell specificity. The binding sites of one single TF are not identical, but contain (i.e., tolerate) some variation. The shared content of the binding sites representing the same TF is typically summarized by specifying statistically the degree of conservation in a short (5–20 bases) DNA pattern. The DNA pattern can be understood as a word or a string on a 4-letter alphabet which may contain some variation over its positions when multiple realizations from the same generating source are considered. The substrings of DNA that correspond to the multiple realizations are here called motif instances from a gene regulatory binding motif. A set of long DNA strings may simultaneously harbor multiple different such motifs, that is, TF binding sites for different genes. In [2, 3] motifs are defined as sets of words of certain length, such that the number of mismatches to a consensus word is smaller than a prescribed threshold. Here, we use instead a probabilistic classification framework to specify the motifs by assigning them to classes in an unsupervised manner.

An important task in computational biology is to detect novel motifs using algorithms that are capable of reasoning from noisy measurements in terms of artificial intelligence. The core of the motif detection problem is to discover novel words whose length is within a biologically meaningful range, such that multiple copies of the words are hidden inside the promoter regions of a set of genes, when the genes have shared properties of, for example, the expression profile. Putative motifs found in a computational manner could thus correspond to binding sites of some common transcription factors regulating a particular set of genes. The algorithms for motif discovery must also be able to avoid suggesting as novel motifs such DNA patterns that represent the background variation in the sequences. In a nutshell, the discovery problem consists of detecting multiple copies of only partially conserved short words in a long string of the characters {𝐴,𝐶,𝐺,𝑇}, such that the words are highly unlikely to occur under a stochastic null model describing general variation over consecutive letters within the string.

The problem may appear rather straightforward at first sight. However, several of the involved computational challenges make motif discovery and identification a genuinely complex task [4]. The main obstacles can be listed as follows. A motif exhibits a certain degree of random variation (lack of conservation) in its contents, word lengths are unknown, the background (i.e., areas not containing the target words) structure of the DNA sequences is not random, and the space of putative candidates has an astronomic size. In addition, the motifs may appear in composite patterns, for example, as pairs of words with a fixed distance between them.

A considerable amount of effort has been devoted to solving such problems by statistical model families and by scanning algorithms detecting DNA pattern overrepresentation under a null background model; see, for example, [5–17]. Several additional related methodological papers can be found in the references of the cited papers. In broad terms the methodological efforts for the above-mentioned problem can be divided into two categories, one where the target is to scan for the existence of a priori determined motifs within a set of sequences, and the other where one attempts to identify novel motifs in a given set of sequences. Our approach is solely concerned with the latter category, that is, we consider an unsupervised pattern recognition problem, although some brief remarks on the possibility of extending our method to a partially supervised situation are given in the final section.

Most statistical models for motif discovery use a Markovian probabilistic machinery, for example, ordinary Markov chains or hidden Markov models, to describe the observed patterns of the background DNA and to improve the motif specificity. However, some of the recent works, such as [11, 14], have demonstrated the particular potential of so-called variable-length (or variable-order) Markov (VLMC) models. Such models exemplify a rich class of probabilistic structures for which statistical learning theory has been developed much earlier in the general context [18–20], and which are able to compress information efficiently through a relatively sparse parameterization, without sacrificing the expressiveness of the probabilistic characterization.

In the current paper, the VLMC model is used in two ways. Firstly, candidates of motif instances are obtained by fitting the VLMC model to the sequence data using the algorithm from [20] with the implementation due to [19], and then calculating probabilities for word counts using a compound Poisson approximation. The words which are most improbable under the null background model represent natural candidates of motif instances and can thus be used for an efficient initialization of an unsupervised learning process. Special attention has been devoted in the motif identification literature to the calculation of word count probabilities associated with any particular substrings within the investigated sequences under a null background DNA model; see [17, 21–23]. However, the earlier works have not considered the calculation of such probabilities under a VLMC model, but only under ordinary Markov models of fixed order. Secondly, sufficient statistics (multinomial counts) are obtained from the sequence data under the VLMC context tree identified by the algorithm of [19], and these are used in the Bayesian unsupervised model learning to identify different types of motifs. However, it should be noticed that the sufficient statistics under the VLMC model are recalculated in the unsupervised stochastic learning steps when putative motif instances are shifted along the sequence and when they are reinserted into the background model.

Many Bayesian pattern recognition methods are principally based on the concept of latent class models, which were used early on in motif detection [5, 8]. Standard Bayesian Markov chain Monte Carlo (MCMC) computation tools, such as the Gibbs sampler [24], are available for models with a priori fixed numbers of classes. However, numerical convergence and mixing problems for such methods are notorious for challenging applications, and motif discovery is not an exception in this respect; see, for example, the discussion in [8]. The computational problems may arise already in a situation where only a single motif type is considered in the de novo detection context. Such problems are further accentuated in the currently considered framework, where the aim is to make formal statistical inference about the simultaneous presence of multiple putative a priori unknown motif types. Bayesian model-based inference in such a setting has been previously considered only by [15], where a reversible jump MCMC algorithm [25] was used for model learning. Motivated by the findings of [26] on general convergence issues with reversible Markov chains, we apply here instead the nonreversible Metropolis-Hastings algorithm they introduced for complex model learning applications.

The structure of this paper is as follows. First, we introduce the background DNA model based on a variable-length Markov process and consider a framework for obtaining putative candidates of motif instances by calculating DNA pattern probabilities under the background model. Thereafter, the unsupervised classification model is derived and in Section 4 we develop a parallel MCMC algorithm for making posterior inferences using a general class of nonreversible Metropolis-Hastings algorithms. Section 5 provides some illustrations of the developed method, and some remarks are given in the final section.

2. A Variable Length Markov Process as the Background DNA Model

Markov chains of higher-order memory were proposed a while ago for computing the expected frequencies of the nucleotides observed outside the motifs, that is, the background information in DNA; see [27] and the references therein. In particular, a third order stationary Markov chain is often chosen; see, for example, [8]. Here we consider a more general model that can handle longer Markov memories in a parsimonious manner. The background model will be formulated by means of the notion of a context of a symbol. This notion has been introduced to define a variable-length Markov chain (VLMC) as well as a universal finite memory source [18–20, 28]. Only brief details of VLMC models are given here; for a comprehensive treatment we refer the reader to the original references.

We consider a DNA sequence 𝐱, which can be a long concatenation of strings of possibly varying length. Let x_r denote a finite substring in 𝐱. The values of this string are denoted as x_{rj} ∈ {1, 2, 3, 4} = 𝒳, where j refers to an arbitrary position within the string x_r, and 𝒳 refers to the set of bases in the DNA. Let z_n^m = z_m, …, z_n be a string in 𝒳^{m-n+1}, which starts at position t = n and ends at position t = m. Notice that for m > n the string is written in the reverse order. In the sequel both t and r will be used as generic indices of sequence positions when defining random processes. A repeated use of the chain rule for a random process yields (written using the short notation P(Z = z) = P(z))

P(z_1^n) = ∏_{t=1}^{n} P(z_t ∣ z_1^{t-1}).  (1)

We introduce next the idea of a memory of variable order, in the sense that the conditional probabilities in the above product can depend only on a context relevant for them. Formally, we define the context as a (variable projection) function which maps the whole sequence past into a possibly shorter string.

Definition 1. A context is a map h: z_{-∞}^0 ↦ h(z_{-∞}^0) = z_{-q+1}^0, where q (the context length) is defined by

q = q(z_{-∞}^0) = min{ k : P(z_1 ∣ z_{-∞}^0) = P(z_1 ∣ z_{-k+1}^0) for all z_1 ∈ 𝒳 }.

Note that the superscript in z_{-∞}^0 and z_{-q+1}^0 refers to the first element of any string that is given as an input argument to the function h. Such an input string itself may in the sequel be indexed with arbitrary sub- and superscripts, such as z_1^r, when it refers to a particular part of a string z. It should also be noted that the context is independent of t, and it can thus be called a stationary context.

Definition 2. Let

o = sup_{z_{-∞}^0} q(z_{-∞}^0).

If o < ∞, then {Z_t}_{t ∈ ℤ} is a stationary variable-length Markov chain (VLMC) of order o.

Since the context is stationary, (1) can be written as

P(z_1^n) = ∏_{t=1}^{n} P(z_t ∣ h(z_1^{t-1})).  (4)

We recall the kth-order Markov property, defined by the equality

P(z_1 ∣ z_{-∞}^0) = P(z_1 ∣ z_{-k+1}^0)

of the conditional probabilities for all z_{-∞}^0. If the context function turns out to be h(z_{-∞}^0) = z_{-k+1}^0 for all z_{-∞}^0, then the process {Z_t}_{t ∈ ℤ} is a full Markov chain of order k. In other words, a VLMC of finite order o is formally also a full Markov chain of order o with a variable memory, and the variable memory might seem to make no difference. However, in practice, the minimal state space represented by the VLMC contexts has a considerable impact, since in the genomic applications an estimated VLMC typically has an order of 7 while the size of the set of contexts |{h(z_{-∞}^0)}| is small. In contrast, the memory requirements for the implementation of a full 7th-order ordinary Markov chain are enormous, which hampers the estimation and use of such models.
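To make the variable-length memory concrete, the following toy sketch stores a small context tree as a Python dictionary and looks up the context of a given past. The tree, its probabilities, and the function names are invented for illustration only; they are not the estimated model or the implementation of [19].

# Illustrative sketch of a variable-length context lookup (not the VLMC code of [19]).
# Contexts are written with the most recent symbol first, mirroring the paper's
# reverse-order notation; each context maps to a next-symbol distribution.
CONTEXT_TREE = {
    "A": {"A": 0.30, "C": 0.20, "G": 0.25, "T": 0.25},
    "G": {"A": 0.20, "C": 0.30, "G": 0.30, "T": 0.20},
    "T": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
    # When the most recent symbol is C, the memory extends one step further back:
    "CA": {"A": 0.10, "C": 0.40, "G": 0.30, "T": 0.20},
    "CC": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
    "CG": {"A": 0.40, "C": 0.10, "G": 0.10, "T": 0.40},
    "CT": {"A": 0.20, "C": 0.20, "G": 0.40, "T": 0.20},
}

def context_of(past: str) -> str:
    """Longest context (most recent symbol first) matching the given past."""
    recent_first = past[::-1]
    match = recent_first[:1]
    for q in range(2, len(recent_first) + 1):
        if recent_first[:q] in CONTEXT_TREE:
            match = recent_first[:q]
    return match

def next_symbol_prob(past: str, symbol: str) -> float:
    """P(symbol | context(past)), one factor in the product form of (1)."""
    return CONTEXT_TREE[context_of(past)][symbol]

# For the past ...G, A, C (C observed last) the matching context is "CA":
print(context_of("GAC"), next_symbol_prob("GAC", "T"))  # -> CA 0.2

Only 7 contexts are stored here, whereas a full second-order chain on four bases would need 16 states; this is the parsimony referred to above.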

Using the contexts {h(⋅)} from a VLMC as such may lead to certain problems in a practical implementation with regard to predicting the next context based on the last one. Such problems are illustrated by the following small example.

Example 1. Let z_1^5 = 43412. If q(z_1^4) = 1 and q(z_1^5) = 4, then h(z_1^4) = 3 and h(z_1^5) = 4341. However, we cannot determine the probability of the transition from the context h(z_1^4) = 3 to the context h(z_1^5) = 4341. Note that the superscript in the argument of q(⋅) refers to a position in the original string, whereas the superscript in Definition 1 refers to the first element in an arbitrary input argument.

A definition that holds for finite data is also needed to extend the VLMC model to a finite string, such that probability transitions between the contexts can be calculated. The finiteness of real data and the transition problems in Example 1 are handled by replacing the context length function q with q̃ according to the following definition.

Definition 3. Let q̃(z_1^γ) be a modified context length function defined as in (6), such that the context corresponding to q̃(z_1^γ) replaces h(z_1^γ).

Note that q̃(z_1^γ) = q(z_1^γ) for the largest contexts, implying that q̃(z_1^γ) can be constructed recursively starting from the largest contexts. The presence of γ in (6) ensures that the context length is well defined for finite sequences, and the part with the maximum ensures that the next context h(z_1^{γ+1}) can be predicted from the current context h(z_1^γ).

The motif instance candidates are identified by assuming the corresponding words to be overrepresented in the string 𝐱 with respect to the background VLMC model. Let R_n be the stochastic number of nonoverlapping instances of a word, counted from the left; more specifically, R_n(z̃_1^l) equals the number of occurrences of z̃_1^l such that no prefix of one occurrence is a suffix of another occurrence in Z_1^n. We then seek to compute

P(R_n(z̃_1^l) ≥ r_n)  (7)

for the observed occurrence counts r_n = r_n(z_j^{j+l-1}), within the range l ∈ [l_min, l_max] and 1 ≤ j ≤ n + 1 - l.

While it is theoretically possible to calculate P(R_n(z̃_1^l) ≥ r_n) exactly for small n as shown in [29], this does not work for the values of n encountered in practice, partly because of the computational cost and, more importantly, because of numerical difficulties in the techniques currently available. Instead, we approximate (7) using a compound Poisson distribution by invoking the results in [30]. The compound Poisson distribution CP(λ_1, λ_2, …) refers to the probability distribution of the sum of a random number T of random variables T_i, where {T_i}_{i≥1} are independent, T is independent of the T_i, P(T_i = k) = λ_k/λ for any positive integer k and every i, and T is Poisson distributed, that is, T ~ Po(λ) with λ = Σ_k λ_k.

The compound Poisson approximation is here implemented for sequences that are similar to those considered in [31]. We recall that the background model can be seen as an oth-order Markov chain. Any oth-order Markov chain with state space Ω can be transformed into a first-order Markov chain on the state space Ω^o. In the actual implementation the state space Ω = {h(z_{-∞}^0)} is designed with the target of keeping the number of states low. However, for the discussion below, we simply assume that the model is represented with a state space Ω such that the corresponding random variable Y ∈ {1, …, |Ω|} can be constructed to imply a first-order Markov chain {Y_1, …, Y_n} with respect to Ω (see Example 2 below).

It is assumed that any putative motif instance z̃_1^l can be unambiguously defined as a sequence of states w_1, …, w_a (i.e., strings), where any individual element w_i ∈ Ω. Additional states explicitly representing the transitions in the path w_1w_2, w_2w_3, …, w_{a-1}w_a are added to the state space Ω, which leads to the augmented state space Ω*. The probability of generating the sequence w_1w_2, w_2w_3, …, w_{a-1}w_a from other states than these special states is set to 0. Then, every time the first-order Markov chain visits the end state of the path, a motif instance candidate has been visited in the DNA sequence. This generic principle is illustrated in Example 2, where the candidate string equals z̃_1^2 = 11.

Example 2. Suppose that Ω = {0, 1} and z̃_1^2 = 11. Then Ω* = {0, 1, 11}, and the original transition probability matrix is transformed into a matrix on Ω* for which P(0 ∣ 11) = P(0 ∣ 1) and P(1 ∣ 11) = P(11 ∣ 1) = P(1 ∣ 1).
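A numerical version of this construction is sketched below; the particular matrix layout, including how the chain restarts after a completed occurrence of the word, is our reading of the example and should be taken as an assumption rather than as the paper's own equation.

import numpy as np

# Original first-order chain on Omega = {0, 1}; rows are "from", columns are "to".
P = np.array([[0.7, 0.3],    # P(0|0), P(1|0)
              [0.4, 0.6]])   # P(0|1), P(1|1)

# Augmented chain on Omega* = {0, 1, 11}.  State 11 marks a completed occurrence of
# the word 11; it is entered from state 1 when a further 1 is emitted, and it then
# behaves like state 1 (non-overlapping counting restarts the word).  This layout
# reproduces the relations P(0|11)=P(0|1) and P(1|11)=P(11|1)=P(1|1) of Example 2
# under our restart assumption.
P_star = np.array([
    [P[0, 0], P[0, 1], 0.0],       # from 0: a single 1 does not complete the word
    [P[1, 0], 0.0,     P[1, 1]],   # from 1: emitting 1 completes the word -> state 11
    [P[1, 0], P[1, 1], 0.0],       # from 11: restart, behave like state 1
])

assert np.allclose(P_star.sum(axis=1), 1.0)
# Every visit to state 11 corresponds to one counted occurrence of the candidate word.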

As seen from the above, the compound Poisson random variable R_n = Σ_{i=1}^{T} T_i approximates the number of occurrences of z̃_1^l in a sequence. In the implemented model the first success distribution (Fs) is used for the T_i, that is, T_i ~ Fs(θ) and T ~ Po(λ). Let τ_{y_i} = inf{j > 1 : Y_j = y_i} and set the compound Poisson approximation parameters as in [30], that is, θ = P(τ_{y_l} < τ_{y_k} ∣ Y_1 = y_k) and λ = nP(Y_1 = y_k)P(τ_{y_l} < τ_{y_k} ∣ Y_1 = y_k). In general y_l can be chosen rather freely, for example, to minimize the bound in [30]. In the practical implementation y_l is instead set to the random variable representing the starting context of the motif, as the execution time of the currently available algorithm for evaluating the approximation bound in [30] is about one day on a standard computer, and in this context the typical number of putative motif instance candidates to be evaluated is larger than 10000.
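Once λ and θ are available, the tail probability (7) can be approximated, for instance, by direct simulation of the compound Poisson variable, as in the following sketch; here the first-success distribution is taken to be the geometric distribution on {1, 2, …} with success probability θ, and the numerical values are arbitrary rather than taken from the paper.

import numpy as np

def compound_poisson_tail(lam: float, theta: float, r_n: int,
                          n_sim: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo estimate of P(R_n >= r_n), where R_n = sum_{i=1}^T T_i with
    T ~ Po(lam) and T_i ~ Fs(theta) (geometric on {1, 2, ...})."""
    rng = np.random.default_rng(seed)
    t = rng.poisson(lam, size=n_sim)                     # Poisson number of clumps
    r = np.zeros(n_sim)
    pos = t > 0
    # A sum of t geometric({1,2,...}) variables equals t plus NegBin(t, theta) failures.
    r[pos] = t[pos] + rng.negative_binomial(t[pos], theta)
    return float(np.mean(r >= r_n))

# Arbitrary illustrative values (not taken from the paper):
print(compound_poisson_tail(lam=2.5, theta=0.8, r_n=8))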

3. An Unsupervised Stochastic Learning Framework for Motif Discovery

The present learning model for motif detection operates by sequentially inserting randomly positioned contiguous windows onto the considered total DNA string and classifying their contents into groups representing putative motif types, such that the number of groups is unknown a priori. From an evolutionary perspective, the substrings inside the windows can also be viewed as fragments inserted into specific positions of a previously existing DNA sequence.

We now summarize the concepts for the motif detection task as follows.

(i) A putative gene regulatory motif can be interpreted as a statistical model of a binding site, which defines for any particular DNA string of an appropriate length how well it fits to that site.

(ii) A motif instance can be interpreted as an actual realization of the DNA under a particular regulatory motif model, which specifies a probability distribution over the bases at the positions belonging to the motif. The unknown (multinomial) parameters that specify this joint distribution are nuisance parameters and they are thus integrated out when making Bayesian inference.

(iii) A motif type refers here to any particular motif. The type can be represented by a class of motif instances that are judged by the Bayesian model to correspond to the same motif. We assume that the number of motif models discoverable from the considered DNA sequence is a priori unknown and a primary target of the statistical learning. A putative gene regulatory motif type will be here indexed by c, c = 1, …, K, such that K ∈ [0, k_max] is a stochastic number of motif types, each of which corresponds to an a priori unknown number of motif instances that are localized in 𝐱.

A representation of the components included in our unsupervised model is shown in Figure 1, with the used notation explained in detail below. Now, given K = k, let U_c, c = 1, …, k, be a stochastic number, or abundance, of motif instances representing the motif type c, governed by the probability P(U_c = u_c). Further, we let the common length l_c of the U_c motif instances be within the range [l_min, l_max], such that l_min and l_max are positive integers with l_min < l_max. Here l_min and l_max are the lower and upper bounds, respectively, on the motif length that can be modified using biological domain knowledge; see, for example, [11].

Figure 1: A schematic representation of the unsupervised motif discovery model.

Given the k outcomes of (u_c, l_c), there are in total Σ_{c=1}^{k} u_c motif instances to be allocated onto 𝐱, such that an arbitrary motif instance of type c contains l_c contiguous positions. Let i_c ∈ {1, …, u_c} index an arbitrary motif instance of type c. Within 𝐱, a motif instance is represented by the pair (r_{i_c}, l_c) identifying a substring x_r which contains the motif instance i_c of length l_c.

The substring representing the motif instance is explicitly denoted as x_{i_c} = x_{r_{i_c}} x_{r_{i_c}+1} ⋯ x_{r_{i_c}+l_c-1}. The random variables corresponding to the bases observed within a motif instance are governed by the probabilities p_{cj} = (p_{cja})_{a ∈ 𝒳}, such that p_{cja} represents the probability of observing base a at position j among the instances of type c. Generally, this parametric construction is referred to as a weight matrix in the bioinformatics literature. Given the motif types with the unknown underlying distributions, the bases observed at each position are here modeled as conditionally independent multinomial trials with respect to any type.

An allocation of the motif instances onto 𝐱 implies that the sequence 𝐱 is split into two distinct parts, one containing the Σ_{c=1}^{k} u_c strings x_{i_c}, and the other containing the remainder of 𝐱, representing the background. Hence, we consider the sequence structure in terms of a model of randomly inserted motif instances as in [32].

Let now m denote a particular configuration of a motif allocation model as described above. Thus, m designates u_c motif instances of type c, c = 1, …, k, allocated in random positions in a generic sequence. In probabilistic terms, m partitions the data 𝐱 into conditionally independent groups. Let ℳ be the space of all eligible such partitions. The partitions are defined using a set of random variables Y_i; both the motif instance classification structure and the sequence realizations are thus embedded into the set of the Y_i. An arbitrary realization of these is denoted in the sequel by 𝐲. Note here that the motif data depend only on the motif type and the position within the motif instance, not on the surrounding data. The background model is restarted, with the same probability law, after every motif instance, but we do not burden the notation further by this.

The unsupervised motif discovery model is now formally defined by the joint probability P(𝐲, m) = P(𝐲 ∣ m)P(m), where P(𝐲 ∣ m) is the marginal distribution (likelihood) of the data given the model m and P(m) is a probability distribution representing our prior uncertainty about different models.

Here, the motif discovery task is formulated as the joint maximization of the posterior distribution of the background model and of the partition of the motif instance candidates into classes representing distinct motif types, which leads to finding the posterior mode m̂ = arg max_{m ∈ ℳ} P(m ∣ 𝐲), where P(m ∣ 𝐲) ∝ P(𝐲 ∣ m)P(m). The joint maximization reflects the fact that the marginal likelihood of the sequence data under the background model increases when substrings that have a low probability of occurring are instead considered as motif instances. On the other hand, increasing the number of motif types and their lengths increases the number of multinomial parameters that are needed to represent the probabilities of observing the different bases over the motifs. Thus, the model must target a balance between these two aspects when aiming at a simultaneous identification of the posterior optimal number of motif types k, as well as allocation and alignment of motifs into the classes representing the different types.

To enable efficient identification of the posterior optimal model structure we will use a constant prior P(m) = 1/|ℳ| for the structural layer of our model. Notice that the space of eligible model structures depends on the a priori limits specified for motif lengths and number of motif types. A constant prior can be described to conceptually arise through the following generating model for a set of fragments randomly inserted within a DNA sequence. Consider an initial sequence of length w. Given a stochastic total number U of fragments (i.e., all motif instances), one can choose the first positions for the fragments in C(w, U) ways (C denoting the binomial coefficient), such that for each of them, a corresponding motif instance of a certain length l_c will be inserted into the sequence between that position and the subsequent one. The probability for any particular arrangement of the fragments is thus 1/C(w, U). Furthermore, every fragment can now be chosen with the probability 1/k from the k alternative sources, which specifies the generating model for the prior probabilities of the structural layer. By specifying suitable distributions for U and k, a uniform distribution for the structure m is implied, similarly to the random urn model considered in [33].

Next we derive an expression for the marginal likelihood under the classification model using an approach similar to that adopted in [33]. Let I_{x_{r_{i_c}},j}(a) be an indicator function that has value one if the base a ∈ 𝒳 is found at position j in the motif instance x_{i_c} and has value zero otherwise. Then

n_{cja} = Σ_{i_c=1}^{u_c} I_{x_{r_{i_c}},j}(a)

is the number of times the symbol a appears at position j in the motif instances of class c, and n_{cj} = Σ_{a ∈ 𝒳} n_{cja} = n_c is the total number of motif instances of type c. Similarly, let n_{ga} and n_g be the number of times a appears after the context denoted by g and the number of times the context g appears, respectively, where the context can have parts belonging to both the background and one or several motif instances. Recall that the context g refers to a particular string with elements from the alphabet 𝒳. Now we can continue with the actual calculation of P(𝐲 ∣ m).

We use different priors for the probabilities in the background and in the motifs. Both are Dirichlet, with hyperparameters α (for the background) and λ (for the motifs). The Dirichlet distribution is the standard choice of prior in Bayesian modelling of DNA sequences, and in addition to the computational convenience related to this prior, there are theoretical arguments supporting it in a variety of different contexts; see, for example, the discussion in [33, 34]. A Dirichlet density is assigned on the simplex 𝒱_g of transition probabilities β_g associated with each background context g, and another Dirichlet prior is assigned on the simplex of position-wise base probabilities p_{cj} for each motif type c; the joint prior density of the background and motif parameters is denoted by φ(β, p) in (23). Consider now the conditional probability P(Y_{o+1}^n = z_{o+1}^n ∣ z_1^o, β, p), which can be factorized using the chain rule of probabilities as in (25); here (12) and the VLMC property in the sense of (4) are used in the factorization. By combining the above distributions, it is possible to derive the marginal data distribution analytically for our classification model. The formula for the marginal distribution is given in Theorem 1, for which a proof is provided in the appendix.

Theorem 1. If P(Y_{o+1}^n = z_{o+1}^n ∣ z_1^o, β, p) is given in (25) and φ(β, p) is as in (23), then the marginal likelihood P(𝐲 ∣ m) under the classification model equals the expression (26), a product of Dirichlet-multinomial factors, one for each background context g and one for each motif type c and position j.
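As an illustration of how such a marginal likelihood is evaluated computationally, the following sketch sums Dirichlet-multinomial log factors over the background contexts g (counts n_ga) and over the motif positions (counts n_cja). The symmetric hyperparameters, the function names, and the toy counts are our own simplifying assumptions and not the exact expression (26).

import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_logml(counts: np.ndarray, alpha: float) -> float:
    """log of the Dirichlet-multinomial integral for one count vector,
    assuming a symmetric Dirichlet(alpha, ..., alpha) prior."""
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    return (gammaln(k * alpha) - gammaln(counts.sum() + k * alpha)
            + np.sum(gammaln(counts + alpha)) - k * gammaln(alpha))

def log_marginal_likelihood(background_counts, motif_counts, alpha=1.0, lam=1.0):
    """Sum of the background factors (one per context g) and the motif factors
    (one per type c and position j)."""
    logml = sum(dirichlet_multinomial_logml(n_g, alpha) for n_g in background_counts)
    for motif in motif_counts:                       # motif: matrix of shape (l_c, 4)
        logml += sum(dirichlet_multinomial_logml(position_counts, lam)
                     for position_counts in motif)
    return logml

# Toy example: two background contexts and one motif of length 3.
bg = [np.array([12, 3, 4, 9]), np.array([2, 2, 15, 1])]
motifs = [np.array([[9, 0, 1, 0], [0, 8, 1, 1], [1, 0, 0, 9]])]
print(log_marginal_likelihood(bg, motifs))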

4. Learning Sequence Partitions by Stochastic Search Operators

Standard MCMC algorithms such as the Gibbs sampler or the Metropolis-Hastings algorithm [24] have regularly been used for the earlier inferential problems associated with motif discovery [5]. However, numerical convergence and mixing problems for such methods burden the motif discovery task. Furthermore, as the objectives here are even more challenging due to the a priori unknown number of motif types, standard stochastic computation is not expected to provide a feasible strategy. The unsupervised learning model for motif discovery developed in the previous section has characteristics that enable the use of the nonstandard Bayesian computational strategy developed in [26]. We will now describe the search operators embedded in the nonreversible algorithm.

Let p_prop(m) denote a generic fixed distribution that assigns probabilities on the space ℳ, conditional on the model structure m. A nonreversible Metropolis-Hastings algorithm may then be constructed by defining its transition kernel according to the acceptance probability

min( 1, [P(𝐲 ∣ m*)P(m*)] / [P(𝐲 ∣ m)P(m)] ),  (27)

where m* is a candidate state (model structure), generated from the current state m under p_prop(m). The important difference between this algorithm and the ordinary reversible Metropolis-Hastings algorithm is the lack of the ratio of the proposal probabilities p_prop(m ∣ m*)/p_prop(m* ∣ m), which in the reversible case ensures that the stationary distribution of the generated Markov chain equals the sought posterior distribution. However, as shown by [26], consistent estimates of the model posterior probabilities can still be constructed from a realization of the nonreversible chain. The major strength of this algorithm is that it can use more complex proposal mechanisms than the ordinary reversible MCMC algorithms, when it is not feasible in practice to calculate the proposal probabilities analytically. Also, the direct utilization of the analytically calculated marginal likelihood P(𝐲 ∣ m) avoids the effects of Monte Carlo error in the estimation of the underlying parameters, for example, as compared to ordinary Gibbs sampler estimation where values for all model parameters are sequentially sampled. The latter advantage has also been discussed in the earlier motif discovery literature, where collapsed Gibbs samplers have been used for the model learning [5, 8].

In addition to the nonreversible transition kernel of the algorithm, we utilize 𝑛 parallel interacting search processes analogously to [26]. The search process as a whole can be defined as follows.

Let {m_{tj}, t = 0, 1, …; j = 1, …, n} and {I_t, t = 0, 1, …} be n + 1 stochastic processes defined as follows.

(1) Define a sequence of strictly decreasing probabilities {α_t, t = 1, 2, …}, such that α_t > α_{t+1} and α_t → 0 as t → ∞.

(2) Define the stochastic process {I_t, t = 0, 1, …} by I_0 = 0 and P(I_t = 1) = α_t, P(I_t = 0) = 1 - α_t, independently for t = 1, 2, ….

(3) Let m_{0j}, j = 1, …, n, be arbitrary initial states of {m_{tj}, t = 0, 1, …; j = 1, …, n}. Given a realization {I_t, t = 0, 1, …}, the transition mechanism of the processes {m_{tj}, t > 0; j = 1, …, n} depends on the values of I_t according to steps (4) and (5).

(4) For each t such that I_t = 0, the transition from m_{tj} to the next state m_{(t+1)j} is determined according to the acceptance probability (27) under the proposal distribution p_prop(m), for j = 1, …, n.

(5) For each t such that I_t = 1, the transition from m_{tj} to the next state m_{(t+1)j} is determined according to the following distribution over the set {m_{ti}, i = 1, …, n} of candidate states:

P(m_{(t+1)j} = m_{ti}) = P(𝐲 ∣ m_{ti}) / Σ_{i'=1}^{n} P(𝐲 ∣ m_{ti'}),  i = 1, …, n,  (28)

independently for j = 1, …, n.

The n processes {m_{tj}, t = 0, 1, …; j = 1, …, n} defined above are not time-homogeneous Markov chains. However, as t → ∞, their transition probabilities converge to those of the time-homogeneous Markov chain defined by (27). The parallel interacting processes are defined to yield an efficient, yet consistent, scheme for exchange of information between them. The processes have a tendency to coalesce towards the states which are associated with higher marginal likelihoods. Also, when multiple model structures with roughly equal marginal likelihoods are present, the probabilities (28) lead to a more dispersed proposal distribution.
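A compact skeleton of the scheme defined in steps (1)-(5) could look as follows; log_ml and propose are placeholders for the marginal likelihood and the proposal operators described below, and the particular schedule α_t = 1/(1 + t) is an arbitrary choice satisfying step (1), not the one used in the experiments.

import math
import random

def parallel_nonreversible_search(initial_states, log_ml, propose, n_steps=1000):
    """Sketch of the parallel nonreversible Metropolis-Hastings search.
    `initial_states` is a list of model structures m_{0j}, `log_ml(m)` returns
    log P(y | m) (the uniform prior cancels), and `propose(m)` draws a candidate
    from p_prop(. | m)."""
    states = list(initial_states)
    n = len(states)
    for t in range(1, n_steps + 1):
        alpha_t = 1.0 / (1.0 + t)                      # strictly decreasing, -> 0
        if random.random() < alpha_t:
            # Interaction step (I_t = 1): each process jumps to one of the current
            # states, chosen with probability proportional to its marginal likelihood.
            logs = [log_ml(m) for m in states]
            mx = max(logs)
            weights = [math.exp(l - mx) for l in logs]
            states = [random.choices(states, weights=weights)[0] for _ in range(n)]
        else:
            # Ordinary step (I_t = 0): nonreversible acceptance without proposal ratio.
            for j in range(n):
                cand = propose(states[j])
                if math.log(random.random()) < log_ml(cand) - log_ml(states[j]):
                    states[j] = cand
    return max(states, key=log_ml)                     # posterior-mode candidate

In a real implementation the marginal likelihood values would be cached rather than recomputed at every step.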

One of the major challenges in the MCMC computation is the need to design proposal operators for new models. Therefore, our emphasis is on developing intelligent proposals that are able to discriminate the information content of different segments of the data. We now specify the search operators that generate candidate states according to the distribution p_prop(m), which in fact consists of several components, as is typical in MCMC. Some of these are intelligent operators similar to those used in protein sequence classification by [35], and some are locally random operators similar to those used in [26].

Assume first that a nonempty set {x_{r_i}, i = 1, 2, …} of initial candidates of motif instances has been made available out of the total sequence 𝐱. Such candidates could also be continuously chosen and/or discarded as a part of the nonreversible algorithm according to a stochastic search operator under our learning model. However, to limit the computational burden, we chose in our experiments to utilize a number of results from renewal theory and compound Poisson approximations to provide a set of putative motif instances (as explained in Section 2).

To identify an optimal classification of the candidate motif instances we use for each process repeatedly the following four search operators in the transition kernel: Merge, Split, Slide, and Move. These are defined as follows.

(1) Merge. Let c_max be the motif type with the largest index. For each pair of motif types, find the optimal alignment of the motif instances with respect to the marginal likelihood when the instances are merged into a single group. When the lengths of the motif instances in the two classes are distinct, the bases in the shorter strings that are lacking in a column of any particular alignment are treated as missing data. The marginal likelihood P(𝐲 ∣ m_{tj}) can then still be calculated as in (26), because the expression is based on the sufficient statistics (counts) arising for each column in the alignment. If any of the C(c_max, 2) mergings results in a higher marginal likelihood than P(𝐲 ∣ m_{tj}), use that model structure as the proposal value m*_j, else use m_{tj}.

(2) Split. Choose a motif type c randomly. Calculate the pairwise Hamming distances of the corresponding motif instances and cluster them using the standard hierarchical single linkage algorithm. Split the group of motif instances optimally into two subgroups according to the hierarchical clustering (a sketch of this step is given after this list).

(3) Slide. Choose a motif type c randomly. Slide the corresponding motif instances randomly backwards or forwards and change their length randomly. Both the sliding and the length change are performed with respect to a simple uniform proposal distribution. The sliding step relocates the motif instance by at most 5 bases backwards or forwards with equal probabilities for all possible configurations, apart from those that would place the motif instance outside of the sequence. The length is changed by 1 or 2 bases, by randomly either shortening or extending the motif instances with equal probabilities. If the resulting sequence does not satisfy the prior limits l_min, l_max, the proposal is discarded and a new value is generated from the original motif instance.

(4) Move. Choose a motif instance randomly among all instances and choose a motif type c randomly among the remaining c_max - 1 types. Slide the motif instance randomly backwards or forwards and group it with the motif type c.
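A possible realization of the Split step, using standard single-linkage clustering routines, is sketched below; the integer encoding of the bases, the SciPy calls, and the toy data are our implementation choices and not the authors' code.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def split_motif_type(instances: np.ndarray):
    """Split the aligned motif instances of one type into two subgroups.
    `instances` is an integer matrix of shape (u_c, l_c), one row per instance,
    with bases encoded as 0..3.  Returns the two row-index groups."""
    # Pairwise Hamming distances between instances (pdist's 'hamming' metric gives
    # the fraction of mismatching positions, which induces the same ordering).
    d = pdist(instances, metric="hamming")
    # Single-linkage hierarchical clustering, cut into two clusters.
    labels = fcluster(linkage(d, method="single"), t=2, criterion="maxclust")
    return np.where(labels == 1)[0], np.where(labels == 2)[0]

# Toy example: six instances of length 8 forming two visibly different subgroups.
a = np.tile([0, 1, 2, 3, 0, 1, 2, 3], (3, 1))
b = np.tile([3, 3, 0, 0, 1, 1, 2, 2], (3, 1))
print(split_motif_type(np.vstack([a, b])))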

In addition to the above operators, both randomly chosen single motif instances and motif types are proposed to be reinserted into the background model. Since there is a strictly positive probability of associating each motif instance with any type or the background by a successive use of the proposal steps, the MCMC framework is irreducible. Consequently, the following result holds for the stochastic learning, as the algorithm introduced here satisfies the general conditions stated in [26].

Theorem 2. Let ℳ be the model space for any given set of motif instance candidates, and let ℳ_t be the part of the space explored by time t by the search process defined by the parallel nonreversible Metropolis-Hastings algorithm introduced above. Let P_t(m ∣ 𝐲) = P(𝐲 ∣ m)P(m) / Σ_{m' ∈ ℳ_t} P(𝐲 ∣ m')P(m') for m ∈ ℳ_t. Then P_t(m ∣ 𝐲) → P(m ∣ 𝐲) as t → ∞.

5. Empirical Illustration

To briefly illustrate the unsupervised learning framework we have generated simulated data for the motifs reb1, matalpha2, pdr1/3 from the SCPD database [36]. The simulation was performed by first estimating a VLMC based on the SCPD background sequence for these motifs. Then, a sequence was generated using the estimated VLMC model, into which motif data was randomly inserted. Each inserted motif instance was generated using the positional weight matrices of the corresponding motifs [36]. This resulted in a sequence of approximately 10000 nucleotides.

To obtain a set of motif instance candidates, we applied the following procedure. First, we neglected the words z_j^{j+l-1} that occur in 𝐱 fewer than 4 times, because they are deemed to have very small chances of being conclusively judged not to belong to the background DNA sequence. Then, the probabilities of the remaining substrings z_j^{j+l-1} were ranked, and those among the 100 most improbable according to the background model, with respect to P(R_n(z_j^{j+l-1}) ≥ r_n), were finally considered as candidates for being motifs. The prior limits of the motif length were set according to the interval [7, 15] and the upper bound k_max = 10 was used for the number of motif types.
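A simplified version of the frequency filter is sketched below; counting is done with Python's non-overlapping str.count, the threshold of four occurrences and the length limits follow the text, and the function name and final ranking comment are illustrative only.

def frequent_words(x: str, l_min: int = 7, l_max: int = 15, min_count: int = 4):
    """Return the words of length l_min..l_max occurring at least `min_count`
    times (non-overlapping) in the sequence x, together with their counts."""
    seen, candidates = set(), {}
    for l in range(l_min, l_max + 1):
        for j in range(len(x) - l + 1):
            word = x[j:j + l]
            if word in seen:
                continue
            seen.add(word)
            c = x.count(word)              # str.count gives non-overlapping occurrences
            if c >= min_count:
                candidates[word] = c
    return candidates

# The surviving words would then be ranked by P(R_n(word) >= r_n) under the fitted
# background VLMC, and the 100 least probable ones kept as motif instance candidates.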

The behavior of the MCMC optimization for 50 parallel processes is shown visually in Figure 2. Out of the 80 motif instances (30 reb1, 30 matalpha2, 20 pdr1/3) embedded in the SCPD-like sequence of length approximately 10000 nucleotides, our algorithm was able to identify 64. The adjusted Rand index (see [37, 38]) between the optimal unsupervised classification of the candidates and the underlying true classification is 0.9623, which indicates a high fidelity of the results for those motif instances that were found.
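For reference, the adjusted Rand index between the true and inferred groupings can be computed with standard tooling, for example as follows; the labels shown here are made up for illustration, whereas in the experiment the comparison covers the 64 identified motif instances.

from sklearn.metrics import adjusted_rand_score

true_labels      = ["reb1", "reb1", "matalpha2", "matalpha2", "pdr1/3", "pdr1/3"]
inferred_classes = [1, 1, 2, 2, 3, 2]

print(adjusted_rand_score(true_labels, inferred_classes))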

Figure 2: An illustration of the behavior of the parallel stochastic search process. The vertical axis corresponds to log P(𝐲 ∣ m_{tj}), and the horizontal axis is the time index.

We also tested the algorithm on real data under comparable prior settings for the motifs MCB and PDR1/3 from SCPD. This was done by providing the upstream regulating regions containing MCB and PDR1/3 (with a total length of 6149 nucleotides) as an input data set for the algorithm. This resulted in 7/32 motifs identified (2 MCB and 5 PDR1/3), which were all correctly grouped with respect to the classification in SCPD.

6. Discussion

The unsupervised classification model was developed under the assumption that the bases at the different positions of a motif instance are conditionally independent given the motif type. This assumption is analogous to the strategy used in a majority of motif identification methods; however, a more elaborate model structure could also be developed by a factorization of the marginal likelihood under a low-order dependence structure, such as a sparse Bayesian network or a context tree similar to that utilized in the construction of the VLMC. This type of factorization principle has earlier been successfully used in ordinary likelihood-based motif scanning; see, for example, [11, 13]. The advantage of such likelihood factorizations in the Bayesian setting would be that the marginal likelihoods could still be analytically calculated under suitable Dirichlet priors, thus enabling the utilization of the nonreversible MCMC computation for model learning. However, such a strategy would nevertheless significantly increase the computational complexity associated with the learning procedure, as the size of the model space would increase considerably.

The statistical learning method described here could be modified to incorporate more specific biological knowledge in terms of informative prior distributions for the occurrence of certain nucleotides in any given part of a motif instance. Such priors may be designed from well curated biological databases using the basic properties of the Dirichlet distribution. To reduce the computational intensity of our experiments, we used the stochastic search initialized only by the candidate motif instances. Also, the VLMC model for the background was first fitted to the data, whereafter the context tree was not re-estimated during the motif discovery phase. In a fully coherent Bayesian learning procedure it would be necessary to consider both aspects entirely as parts of the structural layer of the model and to learn them in parallel with the motif discovery. Similarly, it would be preferable to also monitor the learning process by creating an extensive set of posterior summaries of the motif configurations and visited segments of the sequence as in [15], instead of solely targeting the posterior mode of the model structure.

The fact that sequence scanning yields a large number of motifs that cannot be explained with the knowledge built from the current databases is an open problem in the scanning field, as confirmed by similar studies using YMF [39] and Weeder [40], as well as metastudies such as [41] and even supervised clustering algorithms such as LOGOS [10]. Despite such a high “noise” level, we have illustrated that a model-based approach has potential to simultaneously discover multiple motif types hidden in the sequences. From an intuitive perspective such an approach should make more sense than screening for a single motif type at a time. The latter can be suboptimal, for example, in cases where separate motif types are present in the data such that they are closely related in an evolutionary sense. Here our focus has been more on the mathematical formulation of a statistical framework that has potential for a simultaneous discovery of multiple motif types. We wish to emphasize the advantages of formulating the statistical problem using a Bayesian learning framework which enables the use of nonstandard MCMC computation. In the future our aim is to develop more concrete practical tools extending the basic model and the implementation of the learning algorithm, by exploiting a massively parallel computation architecture to pursue large-scale validatory and exploratory experiments.

Appendix

Mathematical details of the derivation of the result in Theorem 1 are given below. For convenience of reference we recall the Dirichlet integral

∫_𝒱 ∏_{i=1}^{d} v_i^{α_i - 1} dv = ∏_{i=1}^{d} Γ(α_i) / Γ(α),  (A.1)

where α = Σ_{i=1}^{d} α_i, α_i > 0 for all i, and 𝒱 denotes the corresponding probability simplex.

A marginal distribution for z_{o+1}^n given z_1^o is obtained by integrating the joint density of the data and the parameters (β, p) over the parameters, which gives (A.2). Now (23) and (25) yield an explicit form for the integrand in (A.2). The Dirichlet integral can then be used to evaluate the background factor for each context g, which gives (A.4), and similarly for each motif type c this yields (A.5). Further, using (A.4) and (A.5), the stated result is established.
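For concreteness, the per-context background step has the familiar Dirichlet-multinomial form sketched below in our own notation, assuming Dirichlet priors with one hyperparameter α_{ga} per base a and context g; the motif factor is obtained analogously with counts n_{cja} and hyperparameters λ_{cja} for each type c and position j.

% A sketch (our notation) of the per-context integration behind (A.4).
\[
  \int_{\mathcal{V}_g} \prod_{a \in \mathcal{X}} \beta_{ga}^{\,n_{ga} + \alpha_{ga} - 1}
     \, d\boldsymbol{\beta}_g
  \;=\;
  \frac{\prod_{a \in \mathcal{X}} \Gamma(n_{ga} + \alpha_{ga})}
       {\Gamma\!\bigl(n_g + \sum_{a \in \mathcal{X}} \alpha_{ga}\bigr)}.
\]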

Acknowledgments

The authors thank Dr. Torkel Erhardsson, Linköpings Universitet, for valuable discussions about compound Poisson approximation. The authors would also like to remember their late friend and colleague Björn Larsson, graduate student in Linköping, who participated in the initial stage of this work before losing his life in a traffic accident. The work was supported by grant no. 4042140 from the Swedish Research Council (VR/NT) and by grant no. 121301 from the Academy of Finland.

References

  1. T. Werner, “Models for prediction and recognition of eukaryotic promoters,” Mammalian Genome, vol. 10, no. 2, pp. 168–175, 1999.
  2. E. Eskin and P. A. Pevzner, “Finding composite regulatory patterns in DNA sequences,” Bioinformatics, vol. 18, supplement 1, pp. S354–363, 2002.
  3. L. Marsan and M.-F. Sagot, “Algorithms for extracting structured motifs using a suffix tree with an application to promoter and regulatory site consensus identification,” Journal of Computational Biology, vol. 7, no. 3-4, pp. 345–362, 2000.
  4. U. Ohler and H. Niemann, “Identification and analysis of eukaryotic promoters: recent computational approaches,” Trends in Genetics, vol. 17, no. 2, pp. 56–60, 2001.
  5. J. S. Liu, A. F. Neuwald, and C. E. Lawrence, “Bayesian models for multiple local sequence alignment and Gibbs sampling strategies,” Journal of the American Statistical Association, vol. 90, pp. 1156–1170, 1995.
  6. W. Thompson, E. C. Rouchka, and C. E. Lawrence, “Gibbs recursive sampler: finding transcription factor binding sites,” Nucleic Acids Research, vol. 31, no. 13, pp. 3580–3585, 2003.
  7. M. Gupta and J. S. Liu, “Discovery of conserved sequence patterns using a stochastic dictionary model,” Journal of the American Statistical Association, vol. 98, no. 461, pp. 55–66, 2003.
  8. S. T. Jensen, X. S. Liu, Q. Zhou, and J. S. Liu, “Computational discovery of gene regulatory binding motifs: a Bayesian perspective,” Statistical Science, vol. 19, no. 1, pp. 188–204, 2004.
  9. S. T. Jensen and J. S. Liu, “BioOptimizer: a Bayesian scoring function approach to motif discovery,” Bioinformatics, vol. 20, no. 10, pp. 1557–1564, 2004.
  10. E. P. Xing, W. Wu, M. I. Jordan, and R. M. Karp, “Logos: a modular Bayesian model for de novo motif detection,” Journal of Bioinformatics and Computational Biology, vol. 2, no. 1, pp. 127–154, 2004.
  11. I. Ben-Gal, A. Shani, A. Gohr, et al., “Identification of transcription factor binding sites with variable-order Bayesian networks,” Bioinformatics, vol. 21, no. 11, pp. 2657–2666, 2005.
  12. L. Hertzberg, O. Zuk, G. Getz, and E. Domany, “Finding motifs in promoter regions,” Journal of Computational Biology, vol. 12, no. 3, pp. 314–330, 2005.
  13. Y. Barash, G. Elidan, N. Friedman, and T. Kaplan, “Modeling dependencies in protein-DNA binding sites,” in Proceedings of the 7th Annual International Conference on Computational Molecular Biology (RECOMB '03), pp. 28–37, ACM Press, Berlin, Germany, April 2003.
  14. X. Zhang, H. Huang, M. Li, and T. Speed, “Finding short DNA motifs using permuted Markov models,” Bioinformatics, vol. 21, pp. 894–906, 2005.
  15. S. M. Li, J. Wakefield, and S. Self, “A transdimensional Bayesian model for pattern recognition in DNA sequences,” Biostatistics, vol. 9, no. 4, pp. 668–685, 2008.
  16. J. Hawkins, C. Grant, W. S. Noble, and T. L. Bailey, “Assessing phylogenetic motif models for predicting transcription factor binding sites,” Bioinformatics, vol. 25, no. 12, pp. i339–i347, 2009.
  17. T. Marschall and S. Rahmann, “Efficient exact motif discovery,” Bioinformatics, vol. 25, no. 12, pp. i356–i364, 2009.
  18. P. Bühlmann and A. J. Wyner, “Variable length Markov chains,” Annals of Statistics, vol. 27, no. 2, pp. 480–513, 1999.
  19. M. Mächler and P. Bühlmann, “Variable length Markov chains: methodology, computing, and software,” Journal of Computational and Graphical Statistics, vol. 13, no. 2, pp. 435–455, 2004.
  20. J. Rissanen, “A universal data compression system,” IEEE Transactions on Information Theory, vol. 29, no. 5, pp. 656–664, 1983.
  21. I. Abnizova and W. R. Gilks, “Studying statistical properties of regulatory DNA sequences, and their use in predicting regulatory regions in the eukaryotic genomes,” Briefings in Bioinformatics, vol. 7, no. 1, pp. 48–54, 2006.
  22. M. C. Frith, Y. Fu, L. Yu, J.-F. Chen, U. Hansen, and Z. Weng, “Detection of functional DNA motifs via statistical over-representation,” Nucleic Acids Research, vol. 32, no. 4, pp. 1372–1381, 2004.
  23. J. Zhang, B. Jiang, M. Li, J. Tromp, X. Zhang, and M. Q. Zhang, “Computing exact P-values for DNA motifs,” Bioinformatics, vol. 23, no. 5, pp. 531–537, 2007.
  24. C. P. Robert and G. Casella, Monte Carlo Statistical Methods, Springer, New York, NY, USA, 1999.
  25. P. Green, “Reversible jump Markov chain Monte Carlo computation and Bayesian model determination,” Biometrika, vol. 82, pp. 711–732, 1995.
  26. J. Corander, M. Gyllenberg, and T. Koski, “Bayesian model learning based on a parallel MCMC strategy,” Statistics and Computing, vol. 16, no. 4, pp. 355–362, 2006.
  27. E. E. Stückle, C. Emmrich, U. Grob, and P. J. Nielsen, “Statistical analysis of nucleotide sequences,” Nucleic Acids Research, vol. 18, no. 22, pp. 6641–6647, 1990.
  28. B. Ron, Y. Singer, and N. Tishby, “The power of amnesia: learning of probabilistic automata with variable memory lengths,” Machine Learning, vol. 25, pp. 117–149, 1996.
  29. M. Régnier, “A unified approach to word occurrence probabilities,” Discrete Applied Mathematics, vol. 104, pp. 259–280, 2000.
  30. T. Erhardsson, “Compound Poisson approximation for Markov chains using Stein's method,” Annals of Probability, vol. 27, no. 1, pp. 565–596, 1999.
  31. T. Erhardsson, “Compound Poisson approximation for counts of rare patterns in Markov chains and extreme sojourns in birth-death chains,” Annals of Applied Probability, vol. 10, no. 2, pp. 573–591, 2000.
  32. J. L. Thorne, H. Kishino, and J. Felsenstein, “Inching towards reality: an improved likelihood model for sequence evolution,” Journal of Molecular Evolution, vol. 34, pp. 3–16, 1992.
  33. J. Corander, M. Gyllenberg, and T. Koski, “Random partition models and exchangeability for Bayesian identification of population structure,” Bulletin of Mathematical Biology, vol. 69, no. 3, pp. 797–815, 2007.
  34. D. Geiger and D. Heckerman, “A characterization of the Dirichlet distribution through global and local parameter independence,” Annals of Statistics, vol. 25, no. 3, pp. 1344–1369, 1997.
  35. P. Marttinen, J. Corander, P. Törönen, and L. Holm, “Bayesian search of functionally divergent protein subgroups and their function specific residues,” Bioinformatics, vol. 22, no. 20, pp. 2466–2474, 2006.
  36. J. Zhu and M. Q. Zhang, “SCPD: a promoter database of the yeast Saccharomyces cerevisiae,” Bioinformatics, vol. 15, no. 7-8, pp. 607–611, 1999.
  37. B. G. Mirkin and L. B. Chernyi, “Measurement of the distance between distinct partitions of a finite set of objects,” Automation and Remote Control, vol. 31, pp. 786–792, 1970.
  38. L. Hubert and P. Arabie, “Comparing partitions,” Journal of Classification, vol. 2, no. 1, pp. 193–218, 1985.
  39. S. Sinha and M. Tompa, “Discovery of novel transcription factor binding sites by statistical overrepresentation,” Nucleic Acids Research, vol. 30, no. 24, pp. 5549–5560, 2002.
  40. G. Pavesi, P. Mereghetti, G. Mauri, and G. Pesole, “Weeder Web: discovery of transcription factor binding sites in a set of sequences from co-regulated genes,” Nucleic Acids Research, vol. 32, web server issue, pp. W199–W203, 2004.
  41. M. Tompa, N. Li, T. L. Bailey, et al., “Assessing computational tools for the discovery of transcription factor binding sites,” Nature Biotechnology, vol. 23, no. 1, pp. 137–144, 2005.