Abstract

Background. The emergence of next-generation sequencing platforms has given rise to a new generation of assembly algorithms. Compared with Sanger sequencing data, next-generation sequencing data feature shorter reads, higher coverage depth, and different error profiles. These features pose new challenges for de novo transcriptome assembly. Methodology. To explore the influence of these features on assembly algorithms, we studied the relationship between read overlap size, coverage depth, and error rate using simulated data. Based on this relationship, we propose a de novo transcriptome assembly procedure, called Euler-mix, and demonstrate its performance on a real mouse transcriptome dataset. The simulation and evaluation tools are freely available as open source. Significance. Euler-mix is a straightforward pipeline that focuses on handling the variation in coverage depth of short-read datasets. The experimental results show that Euler-mix improves the performance of de novo transcriptome assembly.

1. Introduction

With the rapid development of next-generation sequencing technologies, studies in genomics and transcriptomics are moving into a new era. However, while these new technologies produce a great quantity of highly accurate sequences, they also have a major drawback: most of them produce short read lengths. For instance, technologies based on cyclic reversible termination [1] and ligation-based sequencing [2] produce read lengths ranging from 15 bp to 125 bp. These lengths are sufficient for resequencing but challenging for de novo assembly. In response to this problem, several new assemblers designed for short reads have recently been introduced. They can be divided into three categories: (1) greedy extension approaches, such as SSAKE [3], VCAKE [4], and SHARCGS [5]; (2) overlap-layout-consensus approaches, such as Edena [6]; and (3) Euler-path approaches, such as Velvet [7], EULER-SR [8], AllPATHS [9], and ABySS [10]. Among them, Euler-path approaches seem more appropriate for processing large amounts of short reads [10], because they use k-mer hashing to detect overlaps at a lower computational cost than traditional overlap-layout-consensus approaches.

Recent research on Euler-path approaches has focused on error removal and repeat resolution for genomic sequences, whereas only a few works shed light on de novo transcriptome assembly [11, 12]. However, de novo transcriptome assembly offers a unique opportunity to study the metabolic states of organisms [12] and provides an alternative path to study nonmodel organisms [13]; it is thus a desirable yet challenging approach. The main difference between genome assembly and transcriptome assembly is the variation of coverage depth. In a genome assembly project, short reads are randomly sampled from a genome, so the coverage depth is expected to be uniformly distributed along the genome. In contrast, the distribution of short reads in a transcriptome project is highly dependent on gene expression levels, and the abundance of expressed genes follows a power-law distribution [14]. Since the coverage depth is related to the key parameter k (the k-mer size) of Euler-path approaches [7, 15], a single run of an Euler-path assembler is unlikely to be sufficient for a de novo transcriptome assembly project.

In this paper, we study the relationships between sequencing error rate, coverage depth, and the parameter k using simulated data. Accordingly, we propose a transcriptome assembly procedure, called Euler-mix, for de novo assembly of whole transcriptome shotgun sequencing data. The primary innovation of Euler-mix is to exploit the relationship between the parameter k of Euler-path approaches and the coverage depth of the sequence data. Finally, we demonstrate the performance and practicability of the proposed procedure using a real mouse transcriptome dataset.

2. Results

2.1. On the Relationship between Coverage Depth and Optimum k

For genome assembly projects, it has been shown that the parameter k of Euler-path approaches affects the assembly results and is related to coverage depth [7]. Because the coverage depths of transcripts are correlated with expression levels and therefore vary widely, it is necessary to study the relationship between coverage depth and the k's that optimize assembly. To do this, we conducted an experiment on two simulated mouse transcriptome datasets, one error-free and the other with a sequencing error rate of ~0.3%. Each dataset is composed of 80 million paired-end 36 bp reads, and the transcript coverage depths range from 1 to 4,266, where coverage depths are proportional to the corresponding expression levels computed from all mouse libraries in the NCBI dbEST database. We assembled each transcript separately with Velvet using different values of k, and a k value was classified as optimum for a transcript if the consistent recall rate (Section 4) of the transcript was above 95%. Figures 1(a) and 1(b) show the relationship between optimum k and coverage depth for the error-free dataset and the 0.3%-error dataset, respectively. A red-green heat map indicates the degree of optimization: a green cell represents a higher ratio of transcripts achieving a 95% consistent recall rate, and a red cell represents a lower ratio. From these figures, we observed two phenomena. First, the upper left corners of Figures 1(a) and 1(b) are red, which means that the optimum k's of transcripts with lower coverage depths are concentrated at smaller values. Second, the lower right corner of Figure 1(b) is red, which implies that the optimum k's of transcripts with higher coverage depths shift toward larger values when sequencing errors are present.

Since k can be interpreted as the minimum overlap length required for two short reads to be joined into a longer contig, a lower coverage depth implies less chance of finding an overlap of length at least k. This leads to shorter contigs and explains why a smaller k is more suitable for transcripts with lower coverage depths. Unsurprisingly, the well-known Lander-Waterman model [16] accounts for this first phenomenon. In their model, the expected number of contigs in a genome assembly project is (cG/L)e^(-(1 - k/L)c), where (1) G = genome length; (2) L = read length; (3) c = coverage depth; and (4) k = minimum length required for the detection of an overlap. Treating each transcript as the genome in the Lander-Waterman model, we used this formula to estimate the relationship between optimum k's and coverage depth, where a k value was classified as optimum for a transcript if the expected number of contigs is at most 1. Figure 1(c) summarizes this relationship in the same way as Figures 1(a) and 1(b); the high similarity between Figures 1(a) and 1(c) accounts for the first phenomenon.
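To make the estimate concrete, the following minimal Python sketch (the function names and the example values are ours, not from the original study) evaluates the Lander-Waterman formula and classifies a k as optimum exactly as described above, that is, when the expected number of contigs is at most 1.

```python
import math

def expected_contigs(coverage: float, read_len: int, k: int, genome_len: int) -> float:
    """Lander-Waterman estimate of the expected number of contigs: (c*G/L) * exp(-(1 - k/L)*c)."""
    return (coverage * genome_len / read_len) * math.exp(-(1.0 - k / read_len) * coverage)

def optimum_ks(coverage: float, read_len: int = 36, genome_len: int = 2000):
    """Classify k as optimum when the expected number of contigs is <= 1 (odd k's only, as in Velvet)."""
    return [k for k in range(19, read_len + 1, 2)
            if expected_contigs(coverage, read_len, k, genome_len) <= 1.0]

# Low coverage admits only the smallest k's (or none); higher coverage admits a wider range.
for c in (8, 16, 64, 256):
    print(c, optimum_ks(c))
```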

The second phenomenon is likely due to the fact that sequencing errors produce "tips" and "detours" in the underlying de Bruijn graphs [7]. Although current Euler-path approaches are designed to handle most such cases, a fixed error rate combined with a higher coverage depth means more erroneously called bases, and therefore more chances to produce long tips and detours that cannot be resolved. Additionally, a smaller k, compared to a larger k, gives more chances to produce tips and detours. Thus, using a larger k to assemble very-high-coverage data is a practical choice when sequencing errors are present.
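As a toy illustration of how a single miscalled base near the end of a read creates a tip, the sketch below builds k-mer edges from a set of reads and reports dead-end nodes. This is our own simplified construction, not the one used by Velvet: a "tip" here is a single dead-end node, whereas real assemblers prune short dead-end paths.

```python
from collections import defaultdict

def de_bruijn_edges(reads, k):
    """Count (k-1)-mer -> (k-1)-mer edges, i.e. a simple de Bruijn graph of the reads."""
    edges = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[(kmer[:-1], kmer[1:])] += 1
    return edges

def tips(edges):
    """Nodes with no outgoing edge; errors near read ends create extra dead ends."""
    sources = {u for u, _ in edges}
    sinks = {v for _, v in edges}
    return sinks - sources

transcript = "ACGTACGGATCCTTGACCAGT"
clean_reads = [transcript[i:i + 10] for i in range(len(transcript) - 9)]

# Flip the last base of one read to simulate a sequencing error near the read end.
erroneous = clean_reads[:]
erroneous[5] = erroneous[5][:-1] + "T"

k = 7
print(len(tips(de_bruijn_edges(clean_reads, k))))   # 1: only the true end of the transcript
print(len(tips(de_bruijn_edges(erroneous, k))))     # 2: the error adds a dead-end tip
```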

2.2. The Effect of Sequencing Error Rate

A sequencing error rate of 0.3% is commonly observed in the control lane of the Illumina Solexa sequencer [17], and the error rate may be even higher in noncontrol lanes. To examine the interplay among coverage depth, sequencing error rate, and optimum k, we arbitrarily picked five mouse transcripts and generated simulated datasets with coverage depths of 2x, 4x, 8x, 16x, …, and 16384x. Additionally, errors were simulated with average rates of 0%, 0.3%, 0.6%, 0.9%, …, and 2.4% for every coverage depth (Section 4). Figure 2 shows the results for one simulated transcript (see Figures S1–S4 in the Supplementary Material available online at http://dx.doi.org/10.5402/2012/816402 for the results of the other four transcripts), which demonstrate a trend consistent with Figure 1(b). As the error rate increases, the range of optimum k's at each coverage depth narrows, and a positive correlation between coverage depth and optimum k becomes noticeable. Note that, for all datasets with sequencing errors, no single k remains optimum across most of the tested coverage depths.

2.3. The Euler-Mix Assembly Procedure

Choosing an appropriate parameter k for Euler-path approaches is a practical issue for short-read sequence data. From the above experiments, we see that coverage depth affects the distribution of optimum k's and that no single k is optimum for all coverage depths when sequencing errors are present. Thus, choosing k becomes tricky, especially for de novo transcriptome assembly, where data of different coverage depths are mixed in one sample. However, by exploiting the correlation between coverage depth and optimum k's, we can merge the results of different k's and produce a more accurate assembly of transcriptome sequencing data.

To this end, we propose the Euler-mix procedure, which integrates existing assembly programs to deal with the varying coverage depth of transcriptome shotgun sequencing data. The Euler-mix procedure is based on two observations: (1) a larger k is suitable for data of higher coverage depth, while a smaller k is suitable for data of lower coverage depth, and (2) the assembly result of an optimum k is similar to that of an adjacent optimum k within a range of coverage depths. Note that the second observation follows from the 95% consistent recall rate ensured by an optimum k. Figure 3 shows an overview of the Euler-mix procedure.

The Euler-mix procedure consists of three stages. In the first stage, reads are assembled with an Euler-path approach. Since the coverage depth affects the choice of the parameter k, we apply all applicable k's, using 19 as the smallest k and the read length as the upper bound, and thus obtain multiple assembly results for the same input dataset. Because of the variation in coverage depths, some results perform better for transcripts of higher coverage depths, while others perform better for transcripts of lower coverage depths. In the second stage, these results are assembled again with a larger overlap size in order to join them together, because the results produced with different k's may contain duplications and overlaps. In our experiment, we used Minimus [18] as the second-stage assembler with its default minimum overlap size (k = 40). Minimus is a lightweight assembler designed as a component of a larger assembly pipeline; it provides a systematic way to compute overlaps, identify uniquely assembled contigs, and generate consensus sequences through multiple sequence alignment. These steps efficiently merge the multiple results of an Euler-path assembler run with different k's into a single, more accurate set of contigs. In the third stage, the original reads are remapped to the merged contigs using resequencing tools such as AMOScmp-shortReads [19], which yields the final assembly together with expression-level information.
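The control flow of the three stages can be summarized by the sketch below. This is a minimal Python orchestration under several assumptions: the input file name reads.fastq is hypothetical, the velveth/velvetg, toAmos/minimus, and AMOScmp-shortReads command lines are indicative only and should be checked against the installed tool versions, and error handling is omitted.

```python
import subprocess
from pathlib import Path

READS = "reads.fastq"          # hypothetical input file of paired-end short reads
READ_LEN = 36

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Stage 1: assemble with every applicable k (Velvet accepts odd k's only).
stage1_contigs = []
for k in range(19, READ_LEN + 1, 2):
    outdir = f"velvet_k{k}"
    run(["velveth", outdir, str(k), "-fastq", "-shortPaired", READS])
    run(["velvetg", outdir])
    stage1_contigs.append(Path(outdir) / "contigs.fa")   # Velvet's contig output

# Stage 2: merge the per-k assemblies with Minimus (default minimum overlap 40).
combined = "stage1_all.fasta"
with open(combined, "w") as out:
    for fasta in stage1_contigs:
        out.write(fasta.read_text())
run(["toAmos", "-s", combined, "-o", "euler_mix.afg"])
run(["minimus", "euler_mix"])          # expects euler_mix.afg; writes merged contigs

# Stage 3: remap the original reads to the merged contigs to recover
# expression information, e.g. with AMOScmp-shortReads (placeholder invocation).
run(["AMOScmp-shortReads", "euler_mix"])
```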

2.4. Comparing Euler-Mix with Existing Tools

To test the performance of Euler-mix and compare it with existing tools, we used the simulated dataset of the entire mouse transcriptome with a sequencing error rate of 0.3% (Section 4) as a benchmark. In this experiment, we ran three Euler-path assemblers, Velvet, EULER-SR, and ABySS, with all applicable k's and compared them with Euler-mix; for Euler-mix, each of these assemblers was separately adopted as the underlying assembly algorithm. Note that we processed the entire dataset at once, instead of processing each transcript one by one, to mimic the actual application of transcriptome assembly. Table 1 shows the results on the simulated data for Velvet with k from 17 to 35 and for Euler-mix using Velvet as the underlying algorithm (see Supplementary Tables S1 and S2 for the other algorithms). For the overlap measures (Section 4), Euler-mix achieved the best precision, recall, and F-measure, improving the recall rate by about 5%. For the consistent measures (see Materials and Methods), Euler-mix achieved the second best precision (96.32%), whereas the best precision (96.59%) was achieved by Velvet with k = 17, whose recall was only 8.13%. With the best consistent recall, Euler-mix also achieved the best consistent F-measure. Compared with the Velvet runs with the best consistent F-measures (k = 21 and k = 23), Euler-mix improved the consistent recall by more than 4%, reduced the number of contigs longer than or equal to 100 bp from 80,166 and 65,348 to 48,183, and extended the average contig size from 582 bp and 701 bp to 1,001 bp. This shows that Euler-mix filled many of the gaps between highly fragmented contigs. Furthermore, the precision rate was higher than 90% regardless of the underlying algorithm, which implies that the quality of the resulting contigs is reliable. Similar improvements were observed when comparing Euler-mix with EULER-SR and ABySS.

2.5. A Transcriptome Assembly Application

To evaluate Euler-mix under real conditions, we used a real dataset recently published by Trapnell et al. [20]. The data include 430 million paired-end 75 bp RNA-Seq reads from a mouse myoblast cell line over a differentiation time series. We selected the dataset from one time point (NCBI Short Read Archive, accession no. SRX017794, run SRR037945), which contains 44.37 million paired-end 75 bp reads, to evaluate Euler-mix and Velvet with different k parameters.

In the Euler-mix procedure, we first ran Velvet with k from 21 to 75 on the dataset. Given the size of the resulting data, we used Velvet in single-end mode with k = 39 as the second-stage assembler, because Minimus is a lightweight assembler that is not suitable for such a large dataset. Note that the choice of k for the second-stage assembler is a trade-off between precision and recall (specificity and sensitivity). Table 2 shows the experimental results of Euler-mix and of Velvet with the best consistent F-measure (k = 33). Note that we used the RNA sequences of M. musculus in the NCBI RefSeq database as the reference sequences for computing all evaluation measures (see Materials and Methods). The number of transcripts detected is defined here as the number of transcripts with an overlap recall rate of at least 80%. Compared with Velvet with the best consistent F-measure, Euler-mix improved the consistent F-measure by more than 8% and increased the number of transcripts detected from 2,544 to 10,646. This result shows that Euler-mix provides a significant improvement for de novo transcriptome assembly in a real case.
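For reference, a minimal sketch of this alternative second stage is shown below, assuming the first-stage contigs have been concatenated into stage1_all.fasta as in the pipeline sketch of Section 2.3. Feeding the contigs to Velvet as long single-end sequences is our assumption; the exact category flag should be checked against the installed Velvet.

```python
import subprocess

# Merge the first-stage contigs with Velvet in single-end mode, k = 39.
merge_k = 39
subprocess.run(["velveth", "merge_dir", str(merge_k), "-fasta", "-long", "stage1_all.fasta"],
               check=True)
subprocess.run(["velvetg", "merge_dir"], check=True)
```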

3. Discussion

In our experiments, we found that the optimal parameter k's of Euler-path assemblers are positively correlated with the coverage depth of the sequence data. This phenomenon arises because the lengths of overlaps between reads depend strongly on the coverage depth and the error rate. The lower the coverage depth, the lower the probability of finding a long overlap between reads; thus, selecting a shorter minimum overlap as the criterion for joining reads is more suitable for low-coverage data. On the other hand, a higher coverage depth amplifies the occurrence of sequencing errors in the reads; therefore, selecting a longer minimum overlap as the criterion for joining reads filters out this noise in high-coverage data.

For transcriptome sequencing data, the coverage depths are associated with expression levels. Because expression levels follow a power-law distribution, choosing a single appropriate parameter k for Euler-path approaches becomes problematic. This problem can, however, be solved by taking all of the possible overlap sizes into account. Our experiments show that, by merging the results obtained with different k's of Euler-path approaches, we obtain better performance for de novo transcriptome assembly.

Similarly, the same benefit also appears in the overlap-layout-consensus approach. We applied Edena [6] to assemble the simulated data with different overlap sizes and compared the individual results with the combined result merged by Minimus. The combined result reduces the number of contigs longer than or equal to 100 bp and extends the average contig size, which suggests that combining the results of different overlap sizes improves the performance (see Supplementary Table S3).

In this paper, we also proposed consistent measures in addition to the commonly used overlap measures (see Materials and Methods). One of the most important differences between the two kinds of measures is that the consistent measures better distinguish good assembly results from poor ones. In most of our experiments, the overlap measures gave precision rates higher than 99%, whereas the consistent measures gave precision values that clearly differed among assemblies. In other words, the consistent measures provide a more accurate evaluation of the performance of assembly results.

Our next goal is to investigate how to use mate-pair information properly in Euler-mix. Most existing assemblers use mate-pair information to resolve repeats in genomic DNA and assume that short-read data have uniform coverage depth. For transcriptome data, it is therefore an interesting problem to propagate the mate-pair information correctly from the first-stage assembler to the second-stage assembler in Euler-mix, so as to produce even longer and more accurate contigs.

4. Materials and Methods

4.1. Simulated Dataset

We created a synthetic dataset that mimics the experimental data of transcriptome shotgun sequencing. The synthetic dataset of 80 million paired-end 36 bp reads was randomly sampled from 26,332 mouse transcripts collected from the NCBI RefSeq database [21]. To mimic the varied coverage depth of transcriptome shotgun sequencing data, the number of reads for each transcript was proportional to its number of ESTs multiplied by the length of the transcript, where the EST numbers were computed from the NCBI dbEST database. Most transcripts have low coverage depths, and the variation in coverage depth is large, ranging from 1 to 4,266 (see Figure 4). Additionally, the coverage depth follows a power-law distribution, similar to that of experimental whole transcriptome shotgun sequencing data for HeLa cells [22]. To better match real-world data, we applied error rates that increase slightly from the start to the end of each read. For an average error rate of 0.3%, the error rate at the first nucleotide is 0.2% and increases by 0.005% for each subsequent nucleotide. Similarly, for average error rates of 0.6%, 0.9%, …, and 2.4%, the error rates start at 0.5%, 0.8%, …, and 2.3%, respectively. Insert sizes were uniformly distributed from 175 to 225 bp.
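The following sketch illustrates the simulation described above for a single transcript. It is a simplified version of the procedure, not the released simulator itself; the function names and the scaling constant are illustrative. Read pairs are sampled with inserts uniform in [175, 225], the number of pairs is proportional to the EST count times the transcript length, and the per-base error rate starts 0.1 percentage points below the average and rises by 0.005% per position.

```python
import random

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def add_errors(read, start_err, step=0.00005):
    """Per-base error rate rises by 0.005% (0.00005) per position along the read."""
    out = []
    for i, base in enumerate(read):
        if random.random() < start_err + i * step:
            base = random.choice("ACGT".replace(base, ""))
        out.append(base)
    return "".join(out)

def n_pairs(est_count, transcript_len, scale=1e-3):
    """Read-pair count proportional to EST count times transcript length (scale is arbitrary)."""
    return max(1, int(est_count * transcript_len * scale))

def simulate_pairs(transcript, est_count, read_len=36, avg_err=0.003):
    start_err = avg_err - 0.001            # e.g. a 0.2% start for a 0.3% average
    pairs = []
    for _ in range(n_pairs(est_count, len(transcript))):
        insert = random.randint(175, 225)
        if len(transcript) < insert:
            break                          # transcript too short for this insert size
        pos = random.randint(0, len(transcript) - insert)
        fwd = transcript[pos:pos + read_len]
        rev = revcomp(transcript[pos + insert - read_len:pos + insert])
        pairs.append((add_errors(fwd, start_err), add_errors(rev, start_err)))
    return pairs
```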

4.2. Overlap Measures

To evaluate the performance of assembly results, we defined overlap measures, which are computed in the following steps. First, we used MegaBLAST [23] to align assembled contigs longer than or equal to 100 bp to the reference sequences, and only alignments with at least 95% identity were taken into account. The union of all alignment areas on the reference was treated as true positives, and the overlap recall rate was computed using the following formula:

Recall = (number of true positive bases in reference) / (total length of reference). (1)

Similarly, the union of all alignment areas on the contig side was treated as true positives in contigs, and the overlap precision rate was defined as

Precision = (number of true positive bases in contigs) / (total length of contigs). (2)

The weighted harmonic mean of precision and recall, the F-measure, was then defined as

F-measure = (2 × Precision × Recall) / (Precision + Recall). (3)

Figure 5(a) presents an example of how overlap precision and overlap recall are computed: the overlap precision rate is (a1 + a2 + a34)/(C1 + C2) and the overlap recall rate is (a12 + a3 + a4)/(R1 + R2), where a12 denotes the merged region of a1 and a2 on the reference side and a34 denotes the merged region of a3 and a4 on the contig side. Note that the true positive area may include overlapping regions of alignments, which is why we call these measures overlap measures. Also note that these measures may overestimate performance, because every overlapping alignment is credited as correct. Nevertheless, most works use them as the benchmark, for example, the "sequence coverage" used in Velvet and the "genome coverage" used in ABySS. Accordingly, we take the overlap measures as an upper bound on performance.
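A minimal sketch of the overlap measures is given below, assuming that alignment intervals have already been extracted from the MegaBLAST output. The function names are ours; with multiple references or contigs, the union lengths and the sequence lengths would simply be summed per side.

```python
def union_length(intervals):
    """Total length covered by a set of (start, end) half-open intervals."""
    total, prev_end = 0, None
    for start, end in sorted(intervals):
        if prev_end is None or start > prev_end:
            total += end - start
            prev_end = end
        elif end > prev_end:
            total += end - prev_end
            prev_end = end
    return total

def overlap_measures(ref_hits, contig_hits, ref_len, contig_len):
    """Overlap recall, precision, and F-measure from alignment intervals on each side."""
    recall = union_length(ref_hits) / ref_len
    precision = union_length(contig_hits) / contig_len
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Toy example: one contig with two alignments that overlap on the reference.
print(overlap_measures(
    ref_hits=[(100, 400), (350, 600)],   # union covers 500 reference bases
    contig_hits=[(0, 300), (250, 500)],  # union covers 500 contig bases
    ref_len=1000, contig_len=500))       # -> precision 1.0, recall 0.5, F ~0.667
```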

4.3. Consistent Measures

Theoretically, a perfect assembly result consists of exactly the same sequences as the reference sequences; in such a perfect result, every repetitive region is maintained and no alignment rearrangement occurs. In order to distinguish better assembly results from those that collapse repetitive regions or contain alignment rearrangements, we designed consistent measures, which exclude some alignments when counting the true positive area. For example, Figure 5(a) shows a case where alignments a1 and a2 overlap on the reference sequence and, similarly, alignments a3 and a4 overlap on a contig. All these cases would inflate the accuracy if evaluated only with the overlap measures. To remedy this problem, we choose only one alignment from a1 and a2 as a true positive; in the same way, only one alignment is chosen from a3 and a4. As a result, the consistent precision rate is (a1 + a4)/(C1 + C2) and the consistent recall rate is (a1 + a4)/(R1 + R2) in Figure 5(a). Figure 5(b) shows a case where alignments a1 and a2 are rearranged. Since assembly results with alignment rearrangements should be treated differently from those without, only one of the two alignments is considered correct. Figure 5(c) shows a contig with two alignments to different transcripts. In transcriptome assembly, one contig should represent only one transcript; thus, only one alignment is chosen in Figure 5(c).

To enforce these rules, the consistent measures were implemented according to the following steps (a sketch of this procedure is given below).

Step 1. BLAST the contigs against the reference sequences and sort all alignments by bit score.

Step 2. For each alignment, from the highest bit score to the lowest, add the alignment to the list Alignment Collection (initially empty) if (a) it does not overlap with any alignment already in Alignment Collection by more than 5% of the aligned area on either side (the case of Figure 5(a)) and (b) it is "consistent" with all alignments in Alignment Collection (the cases of Figures 5(b) and 5(c)).

Step 3. Compute the union area of all alignments in Alignment Collection on the reference side, take this union area as true positives, and compute the recall rate. The precision rate is computed analogously on the contig side.

Note that the true positive area in each contig is aligned to a single transcript, and thus the consistent precision rate reflects the quality of an assembly result.
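The sketch below is our own simplified reading of Steps 1 and 2 on precomputed alignments: the alignment tuple layout, the helper names, and the exact interpretation of the 5% overlap threshold are assumptions. Step 3 then reuses the interval-union idea from the overlap-measures sketch on the kept alignments.

```python
def consistent_selection(alignments, max_overlap_frac=0.05):
    """Greedy Step 2: keep an alignment only if it overlaps every already-kept alignment
    by at most 5% of the aligned interval on either side and is 'consistent' with them.
    Each alignment is a tuple (bit_score, contig, c_start, c_end, ref, r_start, r_end)."""
    kept = []
    for aln in sorted(alignments, reverse=True):      # highest bit score first (Step 1)
        if all(_compatible(aln, other, max_overlap_frac) for other in kept):
            kept.append(aln)
    return kept

def _compatible(a, b, max_frac):
    _, ctg_a, cs_a, ce_a, ref_a, rs_a, re_a = a
    _, ctg_b, cs_b, ce_b, ref_b, rs_b, re_b = b
    if ctg_a == ctg_b and ref_a != ref_b:             # Figure 5(c): one contig, two transcripts
        return False
    if ctg_a == ctg_b and ref_a == ref_b and (cs_a < cs_b) != (rs_a < rs_b):
        return False                                  # Figure 5(b): rearranged order
    if ref_a == ref_b and _overlap(rs_a, re_a, rs_b, re_b) > max_frac * min(re_a - rs_a, re_b - rs_b):
        return False                                  # Figure 5(a): overlap on the reference side
    if ctg_a == ctg_b and _overlap(cs_a, ce_a, cs_b, ce_b) > max_frac * min(ce_a - cs_a, ce_b - cs_b):
        return False                                  # overlap on the contig side
    return True

def _overlap(s1, e1, s2, e2):
    return max(0, min(e1, e2) - max(s1, s2))
```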

Authors’ Contribution

Chien-Chih Chen and Wen-Dar Lin contributed equally to this paper.

Acknowledgments

Chien-Chih Chen, Yu-Jung Chang, and Jan-Ming Ho were partially supported by National Science Council grant NSC-97-2312-B-001-006.

Supplementary Materials

Figures S1–S4 show the relationship between optimum k's and coverage depth for four transcriptome sequences at different error rates.

Table S1 shows the results on the simulated data produced by EULER-SR with k from 19 to 27 and by Euler-mix using EULER-SR as the underlying algorithm. Table S2 shows the results on the simulated data produced by ABySS with k from 19 to 35 and by Euler-mix using ABySS as the underlying algorithm. Table S3 shows the results on the simulated data produced by Edena with k from 19 to 35 and by Euler-mix using Edena as the underlying algorithm.
