- Title
Robust consensus computation.
- Authors
Rausch, Tobias; Emde, Anne-Katrin; Reinert, Knut
- Abstract
Introduction: High-throughput sequencing technologies with short read data pose a new challenge to the current three-phase assembly methodology: Overlap-Phase, Layout-Phase, and Consensus-Phase. We describe a new consensus method that is robust in the face of high coverage, shorter reads, and genomic variation.

Methods: Given an initial layout of the reads, we generate a consensus sequence and a multi-read alignment with the following protocol: (1) Computation of all necessary (with respect to the layout) pairwise overlap alignments. (2) Extraction of all gapless alignment segments and generation of a segment-based weighted overlap graph (see Fig. 1). Conflicts between segment matches are resolved using a novel multiple segment match refinement algorithm. (3) Adjustment of the edge weights using a variant of the triplet extension pioneered in the T-Coffee package. The triplet extension increases the weights of clique edges within the overlap graph, so these edges are more likely to be chosen in the subsequent progressive alignment stage. (4) A progressive graph-based alignment of the reads using the heaviest common subsequence algorithm and a guide tree computed from the pairwise alignment scores. Note that our algorithm does not align single nucleotides but the segments identified in the overlap alignment phase. This ensures that columns with genetic variation (e.g., SNPs) are preserved. (5) Output of the multi-read alignment, the gapped consensus, and all positioned reads with their respective deltas. The output can be visualized in Hawkeye (see Fig. 2).

Results: We used a read simulator and real data from the NCBI Trace Archive to evaluate our consensus tool. The main parameters of the read simulator are the source sequence length, the average read length, the number of reads, and the error rate per base call. In addition, multiple haplotypes can be simulated.
Two further parameters, namely the number of SNPs and the number of indels, specify the genetic variation randomly introduced into these haplotypes. We performed two experiments: (1) Given a source sequence length of 10000, we simulated reads under different settings. The read length varied from 35 to 200, the coverage from 20x to 50x, and the error rate per base call from 2% to 4%. In all cases, the computed gap-free consensus matched the simulated source sequence at every position with coverage > 2. (2) Given two haplotypes, each of length 10000 with 100 SNPs and 5 indels, we simulated reads of length 200 at 20x coverage and a 4% error rate. We then manually inspected the multi-read alignment with Hawkeye to evaluate the consensus in the case of genetic variation (see Fig. 2).

Conclusion: The results on simulated data are encouraging, and preliminary results on real data show that our consensus quality is comparable to other tools. It remains to be shown that our program outperforms other tools in difficult settings, namely high coverage and short, error-prone read data. The consensus tool is part of the SeqAn library (http://www.seqan.de), and the read simulator is available on request: rausch@inf.fu-berlin.de.
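The triplet extension in step (3) of the protocol can be sketched as follows: the weight of each edge (i, j) in the overlap graph is reinforced by the support it receives through every intermediate node k, which boosts edges inside cliques. This is a minimal illustrative sketch of the T-Coffee-style extension only; the node labels and the flat weight map are assumptions for illustration, not the paper's actual segment-based data structures.

```python
from itertools import combinations

def triplet_extend(weights, nodes):
    """T-Coffee-style triplet extension on an undirected weighted graph.

    `weights` maps frozenset({i, j}) -> float; missing edges weigh 0.
    Edge (i, j) gains min(w(i, k), w(k, j)) for every third node k,
    so edges participating in many strong triangles (cliques) are
    weighted up relative to isolated edges.
    """
    def w(a, b):
        return weights.get(frozenset((a, b)), 0.0)

    extended = {}
    for i, j in combinations(nodes, 2):
        support = sum(min(w(i, k), w(k, j)) for k in nodes if k not in (i, j))
        extended[frozenset((i, j))] = w(i, j) + support
    return extended
```

In a triangle of three reads with unit-weight overlaps, each edge gains one unit of support from the third read, while an edge with no common neighbor keeps its original weight, which is why clique edges are preferred in the subsequent progressive alignment stage.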
- Subjects
GENOMICS; ALGORITHMS; GRAPH theory; NUCLEOTIDES; HUMAN genetic variation
- Publication
BMC Bioinformatics, 2008, Vol 9, Suppl 10, P4
- ISSN
1471-2105
- Publication type
Article
- DOI
10.1186/1471-2105-9-S10-P4