Overview
Before you start the alignment and analysis processes, it can be useful to perform some initial quality checks on your raw data. If you don't do this (or don't do it thoroughly enough), you may find at the end of your analysis that some things are still unclear: for example, a large portion of reads may not map to your reference, or the reads may map well except that their ends do not align at all. Both of these results give you clues about how you need to process the reads to improve the quality of the data going into your analysis.
A stitch in time saves nine
Learning Objectives
This tutorial covers the commands necessary to use several common programs for evaluating read files in FASTQ format and for processing them (if necessary).
- Use basic linux commands to determine read count numbers and pull out specific reads.
- Diagnose common issues in FASTQ read files that will negatively impact analysis.
- Trim adaptor sequences and low quality regions from the ends of reads to improve analysis.
Interactive development (idev) sessions
As we discussed in our first tutorial, the head node is a space shared by all, and we don't like stepping on each other's toes. While the launcher_creator.py helper script makes working with the compute nodes much easier, jobs still take time to initiate (waiting in the queue), and if you have errors in your commands your job will fail and you will lose your place in line. An idev (interactive development) session is a way to move off the head node and onto a single compute node while still working interactively: you can see if your commands actually work, get much quicker feedback, and, if everything goes as you hope, your data. idev sessions are much more limited in duration, and in general it's not necessary to watch every line a program prints once you are familiar with the type of output you will get. Additionally, we are going to use a priority-access reservation set up specially for the summer school; you normally would not have access to it, but it should guarantee that your idev session starts immediately.
Copy and paste the following command, and read through the commented lines to make sure it is functioning correctly:
```shell
idev -m 180 -r CCBB_Day_1 -A UT-2015-05-18

# This should return the following:
# We found an ACTIVE reservation request for you, named CCBB_Day_1.
# Do you want to use it for your interactive session?
# Enter y/n [default y]:

# If for any reason you don't see the above message let me know by raising your hand.
# Your answer should be y, which should return the following:
# Reservation : --reservation=CCBB_Day_1 (ACTIVE)

# Some of you may see a new prompt stating something like the following:
# We need a project to charge for interactive use.
# We will be using a dummy job submission to determine your project(s).
# We will store your (selected) project in the $HOME/.idevrc file.
# Please select the NUMBER of the project you want to charge.
# 1 OTHER_PROJECTS
# 2 UT-2015-05-18
# Please type the NUMBER (default=1) and hit return:
# If you see this message, again let me know.

# You will then see something similar to the following:
# job status: PD
# job status: R
# --> Job is now running on masternode= nid00032...OK
# --> Sleeping for 7 seconds...OK
# --> Checking to make sure your job has initialized an env for you....OK
# --> Creating interactive terminal session (login) on master node nid00032.

# If this takes more than 1 minute get my attention.
```
Your idev command line contains 3 flags: -m, -r, and -A. Using the `idev -h` command, can you figure out what these 3 flags mean and what you told the system you wanted to do?
FASTQ data format
A common question is: after you submit something for sequencing, what do you get back? The answer is FASTQ files. While there are some additional log files that you may be able to get off the instrument, the reality is that none of those are actually 'data' about anything other than high-level instrument performance. The good news is you don't actually need anything else. For single-end sequencing you will have a single file, while paired-end sequencing provides 2 files: one for read 1 and another for read 2. Each file contains a repeating 4-line entry for each individual read:
```
@SRR030257.1 HWI-EAS_4_PE-FC20GCB:6:1:385:567/1
TTACACTCCTGTTAATCCATACAGCAACAGTATTGG
+
AAA;A;AA?A?AAAAA?;?A?1A;;????566)=*1
```
- Line 1 is the read identifier, which describes the machine, flowcell, cluster, grid coordinate, end and barcode for the read. Except for the barcode information, read identifiers will be identical for corresponding entries in the R1 and R2 fastq files.
- Line 2 is the sequence reported by the machine.
- Line 3 is almost always just '+'. (Occasionally the line will be the same as line 1, except the initial @ symbol is changed to a +.)
- Line 4 is a string of Ascii-encoded base quality scores, one character per base in the sequence. For each base, an integer quality score = -10 log(probability base is wrong) is calculated, then added to 33 to make a number in the ASCII printable character range.
See the Wikipedia FASTQ format page for more information.
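You can check the quality-score arithmetic above directly at the shell prompt. This is a generic illustration (the character ';' is taken from the example quality string above; everything else is standard ASCII):

```shell
# Encoding: a base with quality Q=40 is stored as the character with
# ASCII code 40 + 33 = 73, which is 'I'
awk 'BEGIN{ printf "%c\n", 40 + 33 }'   # → I

# Decoding: the character ';' (seen in the example quality string above)
# has ASCII code 59, so its quality score is 59 - 33 = 26
printf '%d\n' "';"                      # → 59
```

A quality of 26 corresponds to an error probability of 10^(-2.6), i.e., roughly a 0.25% chance that the base call is wrong.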
Determine the 2nd sequence in a FASTQ file

What is the 2nd sequence in the file `$BI/gva_course/mapping/data/SRR030257_1.fastq`?
More advanced solutions to do slightly different things:

- The `-n` option can be used to control how many lines of a file the `head` command prints.
- The output of the `head` command can be piped to the `tail` command to isolate specific groups of lines.
- The `grep` command can be used to look for lines that contain only A, C, T, G, or N.

By increasing grep's `-m` value, we can quickly get a block of sequence of any size of our choosing. This is the first truly useful command on this page. With a block of sequence, you can start to see things like:

- if the first/last bases are always the same
- if the reads are the same length
- if a single sequence shows up a huge number of times

This is our first example of there being many different ways to do the same thing in NGS analysis. While some may be faster or more efficient, the same answers are still achieved. Don't let the pursuit of perfection keep you from getting your answers in whatever way you can justify.
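The approaches described above can be sketched as follows. To make the sketch runnable anywhere, it builds a tiny two-read FASTQ file; on TACC you would use `$BI/gva_course/mapping/data/SRR030257_1.fastq` instead:

```shell
# A tiny two-read stand-in for the course FASTQ file
printf '@read1\nAAAA\n+\nIIII\n@read2\nCCGG\n+\nIIII\n' > sample.fastq

# The 2nd read occupies lines 5-8: print the first 8 lines, keep the last 4
head -n 8 sample.fastq | tail -n 4

# Lines made up only of A/C/T/G/N are the sequence lines; -m 2 stops
# grep after the first 2 matches
grep -m 2 "^[ACTGN]*$" sample.fastq
```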
Counting sequences
Often, the first thing you (or your boss) want to know about your sequencing run is simply, "how many reads do I have?". For the $BI/gva_course/mapping/data/SRR030257_1.fastq file, the answer is 3,800,180. How can we figure that out?
The grep (Global Regular Expression Print) command can be used to determine the number of lines that match some criterion. Above we searched for:
- anything from the group of ACTGN with the [] marking them as a group
- matching any number of times *
- from the beginning of the line ^
- to the end of the line $
Here, since we are only interested in the number of reads that we have, we can make use of knowing the 3rd line in the fastq file is a + and a + only, and grep's -c option to simply report the number of reads in a file.
Remember computers always answer exactly what you ask, the trick is asking the right question
Without the anchors you asked the computer, "how many lines have a + symbol on them?" With the anchors you asked, "how many lines start with a + symbol and have no other characters on the line?" Remember, this only works when we know for certain that line 3 is a "+" symbol by itself. This is where head/tail can be useful.
We can also check using similar methods (and give another example of different analysis giving us the same result):
grep -c "^[ACTGN]*$" $BI/gva_course/mapping/data/SRR030257_1.fastq
wc -l $BI/gva_course/mapping/data/SRR030257_1.fastq
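On a small stand-in file you can convince yourself that the two approaches agree (divide the `wc -l` result by 4 by hand, or let the shell do the arithmetic):

```shell
# Two reads -> 8 lines; both methods should report 2 reads
printf '@read1\nAAAA\n+\nIIII\n@read2\nCCGG\n+\nIIII\n' > sample.fastq

grep -c "^+$" sample.fastq                 # one '+' separator per read → 2
echo $(( $(wc -l < sample.fastq) / 4 ))    # 8 lines / 4 lines per read → 2
```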
Counting with compressed files
Thus far we have looked at an uncompressed FASTQ file. Because FASTQ files can contain millions to billions of lines (and billions of characters), they are often stored in a compressed, 'gzipped' format to save storage space, and they typically end with ".gz" so you can identify them. While files are easily converted between compressed and uncompressed forms (and you will do some of this throughout the course, and plenty more in your own work), the bigger the file, the longer such conversions take.
Sometimes you've already set up commands to do a particular analysis with a program that accepts gzipped files as input, but you are still interested in checking how many reads you have overall (maybe you want to calculate how many reads survive a trimming-mapping-extraction pipeline). For years, the way I did this was to use pipes to send the output back to the word count command; I was thrilled because it meant I didn't have to gunzip all the files and then gzip them again when I was done. Specifically, I used `gunzip -c` to write decompressed data to standard output (the `-c` option leaves the original `.gz` file untouched), piped that output to `wc -l` to get the line count, copy-pasted that into Excel, and divided the cell value by 4 to get the final answer.
gunzip -c $BI/gva_course/mapping/data/SRR030257_2.fastq.gz | wc -l
Does that sound tedious to you? Until last year, to me it just sounded like the way to determine the number of reads in a compressed file without having to re-gzip everything after getting the answer.
Then, about a year and a half ago, while trying to do something slightly different with grep, I was looking at the grep manual when the following option jumped out at me:
-Z, -z, --decompress Force grep to behave as zgrep.
After some googling I found out that a tremendous amount of time could be saved by just using the zgrep command:
zgrep -c "^+$" /corral-repl/utexas/BioITeam/gva_course/mapping/data/SRR030257_2.fastq.gz
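A runnable sketch with a made-up gzipped file (the course file on TACC would behave the same way), showing that the old pipe-based count and the zgrep shortcut agree:

```shell
# A tiny gzipped FASTQ stand-in for the course data
printf '@read1\nAAAA\n+\nIIII\n@read2\nCCGG\n+\nIIII\n' | gzip > sample.fastq.gz

# Old way: decompress to stdout, count lines, divide by 4
echo $(( $(gunzip -c sample.fastq.gz | wc -l) / 4 ))   # → 2

# New way: zgrep reads the compressed file directly
zgrep -c "^+$" sample.fastq.gz                          # → 2
```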
While you shouldn't spend a large amount of time looking for the perfect solution, don't be afraid to try new things (as long as your data is backed up somewhere in case you mess a file up beyond recognition or repair).
While checking the number of reads a file has can solve some of the most basic problems, it doesn't really provide any direct evidence as to the quality of the sequencing data. To get this type of information before starting meaningful analysis other programs must be used.
Evaluating FASTQ files with FastQC
FastQC overview
Once you move past the most basic questions about your data, you need to move onto more substantive assessments. As discussed above, this often-overlooked step helps guide the manner in which you process the data, and can prevent many headaches that could require you to redo an entire analysis after they rear their ugly heads.
FastQC is a tool that produces a quality analysis report on FASTQ files; the report has great examples and is easy to understand.
Below is a recap of what was discussed during the presentation:
First and foremost, the FastQC "Summary" on the left should generally be ignored. Its "grading scale" (green - good, yellow - warning, red - failed) incorporates assumptions for a particular kind of experiment, and is not applicable to most real-world data. Instead, look through the individual reports and evaluate them according to your experiment type.
The FastQC reports I find most useful are:
- The Per base sequence quality report, which can help you decide if sequence trimming is needed before alignment.
- The Sequence Duplication Levels report, which helps you evaluate library enrichment / complexity. But note that different experiment types are expected to have vastly different duplication profiles.
- The Overrepresented Sequences report, which helps evaluate adapter contamination.
A couple of other things to note:
- For many of its reports, FastQC analyzes only the first 200,000 sequences in order to keep processing and memory requirements down.
- Some of FastQC's graphs have a 1-100 vertical scale that is tricky to interpret. The 100 is a relative marker for the rest of the graph.
- For example, sequence duplication levels are relative to the number of unique sequences.
Running FastQC
FastQC is available from the TACC module system on Lonestar. Interactive GUI versions are also available for Windows and Macintosh and can be downloaded from the Babraham Bioinformatics web site. We don't want to clutter up our work space, so: **copy** the SRR030257_1.fastq file to a new directory named GVA_fastqc_tutorial on **scratch**, use the **module** system to load fastqc, use fastqc's **help** option after the module is loaded to figure out how to run the program, and once the program has completed, use **scp** to copy the important file back to your local machine. (The bold words are key words that may give you a hint of what steps to take next.)
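One possible sequence of commands for this exercise, sketched under the assumption that the course's `$BI` and `$SCRATCH` environment variables are set (this is TACC-specific and won't run elsewhere; treat it as a hint, not the only solution):

```shell
# Work on scratch rather than cluttering up the work space
# (assumes TACC's module system and the course's $BI / $SCRATCH variables)
mkdir -p $SCRATCH/GVA_fastqc_tutorial
cd $SCRATCH/GVA_fastqc_tutorial
cp $BI/gva_course/mapping/data/SRR030257_1.fastq .

module load fastqc          # make the fastqc command available
fastqc -h                   # skim the usage information first
fastqc SRR030257_1.fastq    # produces SRR030257_1_fastqc.html (plus a .zip)
```

Afterward, use scp from your local machine to pull the .html report back, as described in the next section.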
Looking at FastQC output
As discussed in the introduction tutorial, you can't run a web browser directly from your command line environment. You should copy the results back to your local machine (via `scp`) so you can open them in a web browser.
A reminder about the scp tutorial if you didn't get to it in the first part of today's class
Here is a more detailed description of how to use scp to transfer files around. In this case, you will replace "README" with "SRR030257_1_fastqc.html"
Exercise: Should we trim this data?
Based on this FastQC output, should we trim (1) adaptor sequences from the ends of the reads AND/OR (2) low quality regions from the ends of the reads?
FASTQ Processing Tools
Cutadapt
Cutadapt provides a simple command line tool for manipulating fasta and fastq files. The program description on their website provides good details of all the capabilities and examples for some common tasks. Cutadapt is also available via the TACC module system allowing us to turn it on when we need to use it and not worry about it other times.
```shell
module spider cutadapt
module load cutadapt
```
Trimming low quality bases
Low quality bases called by the sequencer can cause an otherwise mappable sequence not to align. There are a number of open source tools that can trim off 3' bases and produce a FASTQ file of the trimmed reads to use as input to the alignment program, but cutadapt has the advantage of being available as a module on TACC and is therefore the easiest to use here. To run the program, you simply type 'cutadapt' followed by whatever options you want, and then the name of the FASTQ file without any option in front of it. Use the -h option to see all the different things you can do, and see if you can figure out how to trim the reads down to 34 bases.
- The `-l 34` option says that base 34 should be the last base (i.e., trim down to 34 bases).
- The `-o` option sets the output file, in this case `SRR030257_1.trimmed.fastq`.
- Listing the input file without any option in front of it (`SRR030257_1.fastq`) is a common way to specify input files.
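Putting those options together gives `cutadapt -l 34 -o SRR030257_1.trimmed.fastq SRR030257_1.fastq`. If you'd like to see what fixed-length trimming does without cutadapt installed, here is a rough awk stand-in (an illustration only — it assumes a clean 4-line-per-read file and does none of cutadapt's real work):

```shell
# Build a stand-in read that is 40 bases long
printf '@read1\n%s\n+\n%s\n' \
  "$(printf 'A%.0s' $(seq 40))" "$(printf 'I%.0s' $(seq 40))" > long.fastq

# Truncate the sequence (line 2) and quality (line 4) of the record to 34
# characters; the header and '+' lines pass through untouched
awk 'NR % 2 == 0 { print substr($0, 1, 34); next } { print }' long.fastq > trimmed.fastq

awk 'NR == 2 { print length($0) }' trimmed.fastq   # → 34
```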
Exercise: compressing the trimmed file
Compressed files are smaller, easier to transfer, and many programs allow for their use directly. How would you tell cutadapt to compress (gzip) its output file?
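Two approaches should work: cutadapt detects the desired compression from the output file name, so naming the output with a `.gz` extension (e.g., `-o SRR030257_1.trimmed.fastq.gz`) is enough; alternatively, you can pipe cutadapt's standard output through gzip yourself. The sketch below uses `cat` as a stand-in for cutadapt so it runs anywhere:

```shell
# A small input file standing in for real FASTQ data
printf '@read1\nAAAA\n+\nIIII\n' > in.fastq

# Piping through gzip; with the real tool this would look like
#   cutadapt -l 34 SRR030257_1.fastq | gzip > SRR030257_1.trimmed.fastq.gz
cat in.fastq | gzip > out.fastq.gz

# The compressed copy round-trips back to the original
gunzip -c out.fastq.gz
```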
Both of the above solutions give the same final product, but are clearly achieved in different ways. This is done to show you that data analysis is a results-driven process: as long as the result is correct, you know how you got it, and it is reproducible, it counts.
Adapter trimming
As mentioned above, cutadapt can be used to trim specific sequences, and based on our fastqc analysis, the sequence AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA is significantly overrepresented in our data. How would you use cutadapt to remove those sequences from the fastq file?
| Command portion | Purpose |
| --- | --- |
| `-o SRR030257_1.trimmed.depleted.fastq` | create this new output file |
| `-a AAAAAAAAAAAAAAAAA` | trim this sequence (and anything after it) from reads |
| `-l 34` | trim reads to 34 bases |
| `-m 16` | discard any read shorter than 16 bases after the sequence is removed, as these are more difficult to uniquely align to the genome |
| `SRR030257_1.fastq` | use this file as input |
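Assembled from the table above (this assumes the cutadapt module is loaded as shown earlier; note that the input file comes last, with no flag in front of it):

```shell
cutadapt -a AAAAAAAAAAAAAAAAA -l 34 -m 16 \
    -o SRR030257_1.trimmed.depleted.fastq SRR030257_1.fastq
```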
From the summary printed to the screen you can see that this removed a little over an additional 2.2M bp of sequence.
A note on versions
In our first tutorial we mentioned how important knowing what version of a program you are using can be. When we loaded the cutadapt module we didn't specify which version to load. Can you figure out what version you used, and what the most recent version of the program is?
Figuring out the most recent version is a little more complicated. Unlike programs on your computer such as Microsoft Office or your internet browser, there is nothing in an installed command-line program that tells you if you have the newest version, or even what the newest version is. If you go to the program's website (easily found with Google or this link), the changes section lists all the versions that have been released, with v2.3 being released on April 25th of this year.
Optional Exercise: Improve the quality of R2 the same way you did for R1.
Unfortunately we don't have time during the class to do this, but as a potential exercise in your free time, you could improve R2 the same way you did R1 and use the improved fastq files in the subsequent read mapping and variant calling tutorials to see the difference it can make.