
Overview:

This section provides directions for generating SSCS (Single Strand Consensus Sequence) reads and trimming molecular indexes from raw fastq files. 

Learning Objectives:

  1. Use a Python script to generate SSCS reads.
  2. Use fastx_trimmer to trim molecular indexes from duplex seq libraries.

Tutorial: SSCS Reads

First we want to generate SSCS reads, taking advantage of the molecular indexes added during library prep. To do so we will use a "majority rules" python script (named SSCS_DCS.py) which was heavily modified by DED from a script originally created by Mike Schmitt and Scott Kennedy for the original duplex seq paper. This script can be found in the $BI/bin directory. For the purpose of this tutorial, the paired end sequencing of sample DED110 (prepared with a molecular index library) has been placed in the $BI/gva_course/mixed_population directory. Invoking the script is as simple as typing SSCS_DCS.py; adding -h will give a list of the available options. The goal of this command is to generate SSCS reads for any molecular index supported by at least 2 reads, and to generate a log file that summarizes some information about the data.

Click here for the solution for copying the DED110 fastq files to a new directory called BDIB_Error_Correction
cds
mkdir BDIB_Error_Correction
cd BDIB_Error_Correction
cp $BI/gva_course/mixed_population/DED110*.fastq .

 

 

Interrogate the SSCS_DCS.py script to determine how to invoke it. Click here for hints before looking at the answer

You can often get more information about Python scripts by typing the name of the script followed by the -h option.

 

The -h option should show you these as the key options to use/consider:
  -f1 FASTQ1, --fastq1 FASTQ1
                        fastq read1 file to check
  -f2 FASTQ2, --fastq2 FASTQ2
                        fastq read2 file to check
  -p PREFIX, --prefix PREFIX
                        prefix for output files
  -s, --SSCS            calculate SSCS sequence, off by default. IF DCS
                        specificed, automatically on
  -m MINIMUM_READS, --minimum_reads MINIMUM_READS
                        minimum number of reads needed to support SSCS reads
  --log LOG             name of output log file
Using that information, see if you can figure out how to put the command together
SSCS_DCS.py -f1 DED110_CATGGC_L006_R1_001.fastq -f2 DED110_CATGGC_L006_R2_001.fastq -p DED110 -s -m 2 --log SSCS_Log

This should take ~10 minutes to complete in an idev shell. 
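If you are not already in an interactive session, you can request one before running the script. A minimal sketch for TACC systems is below; the time limit shown is an assumption, and depending on your account you may also need to specify an allocation (-A) and queue (-p).

idev -m 120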

Error correction evaluation:

The SSCS_Log is a great place to start. Use the tail command to look at the last 8 lines of the log file to determine how many reads made it from raw reads to error corrected SSCS reads. 

help with the tail command
tail -n 8 SSCS_Log
 Approximately what fraction of raw reads became SSCS reads?

There are approximately 1/6th as many SSCS reads as raw reads:

Total Reads: 6066836

SSCS count: 978142
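If you want to check that fraction without opening a calculator, a quick division with bc (using the two counts above) works:

echo "scale=4; 978142/6066836" | bc

The result is about 0.16, or roughly 1/6.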

While this is somewhat misleading, as it takes a minimum of 2 reads to generate a single SSCS read, we do have some additional information about what happened to the other reads. The first thing to consider is the "Dual MI Reads": these represent the reads which correctly had the 12bp of degenerate sequence and the 4bp anchor on both reads. In this case, more than 1.5 million reads lacked an identifiable molecular index on read 1 and/or read 2. Measured against the reads with intact molecular indexes, we had ~1/4 as many SSCS reads.
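Rather than scrolling through the whole log to find that count, you can try pulling the line out with grep. This assumes the label appears in the log text as "Dual MI"; adjust the search string if your log wording differs.

grep -i "dual mi" SSCS_Log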

Perhaps more interesting is the number of errors removed. This is also available in the SSCS_Log file, but it is in the middle of the file and there is no convenient text to grep for. One option is to cat the entire file and scroll around; another is to combine the tail and head commands to pull out just the lines you want:

help with the tail command
tail -n 94 SSCS_Log | head -n 86

The 3 columns are the read position, the number of bases changed, and the number of bases not changed. If you copy and paste these 3 columns into Excel you can easily calculate the sum of the 2nd column to see that 446,104 bases were changed. The read position is based on the 5' to 3' direction of the read, and you should notice that generally the higher the read position, the more errors were corrected. This should make sense based on what we have discussed about quality scores decreasing as read length increases.
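If you would rather stay on the command line than paste into Excel, awk can compute the same sum. This assumes, as described above, that the changed-base counts are the 2nd whitespace-separated column of those lines.

tail -n 94 SSCS_Log | head -n 86 | awk '{changed += $2} END {print changed}'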

Tutorial (Trimmed Reads):

The molecular index takes up the first 16 bases of each read (the 12bp degenerate sequence plus the 4bp anchor described above), so it can be removed with fastx_trimmer by keeping only bases 17 onward: fastx_trimmer -f 17
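A minimal sketch of the full commands is below. It assumes the FASTX-Toolkit is available on your path and that these fastq files use the standard Illumina 1.8+/Sanger quality encoding (hence -Q 33); the output names match the files referenced in the next step.

fastx_trimmer -Q 33 -f 17 -i DED110_CATGGC_L006_R1_001.fastq -o trimmed_1.fastq
fastx_trimmer -Q 33 -f 17 -i DED110_CATGGC_L006_R2_001.fastq -o trimmed_2.fastq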

Optional (not recommended) tutorial on trimming reads with flexbar:

For another discussion about keeping tools up to date and when it is necessary to switch to new tools and versions of programs, take a look at the trimmed reads tutorial from last year, which used flexbar simply because 'it worked before, so keep using it'. Compare the simplistic fastx_trimmer commands used in this tutorial to all the work that went into flexbar last year. So while "well enough can be left alone", sometimes it is still better to use new tools. As the heading suggests, we don't actually suggest that you USE flexbar to trim this data set or any other; it is just something worth looking at to see how different programs operate, or are invoked, to achieve the same goals.

Next step:

You should now have 3 new .fastq files that we will use for variant calling: DED110_SSCS.fastq, trimmed_1.fastq, and trimmed_2.fastq. You could take these files into a more in-depth breseq tutorial prepared last year to compare the specific mutations that are eliminated by the error correction (SSCS). Link to other tutorial.

Return to GVA2017
