...

While you could write your own sequence converter, hopefully it jumps out at you that this is something someone else must have done before. In situations like this, you can often spend a few minutes on Google finding a Stack Overflow question/answer that deals with something similar. Some results will be about how to code such things, but the particularly useful ones point to a program or repository where someone has already done the work for you. 

In our case on TACC, the BioITeam has uploaded such a conversion script to one of the shared spaces. Using the which command, can you figure out where the bp_seqconvert.pl script is located in this case?

Code Block
languagebash
titleRecall that we have used the which command to determine where executable files are located.
which bp_seqconvert.pl

By the end of the class, hopefully you will recognize the "/corral-repl/utexas/BioITeam/" part of the answer as the BioITeam. The "bin" folder specifically is full of binary or (typically small) bash/python/perl/R scripts that someone has written to help the community. 

The bp_seqconvert.pl script is a common script written in bioperl that is a helpful utility for converting between many common sequence formats. If you try to run the script however, you get the following error message:

...

The bp_seqconvert.pl perl script is included as part of the bioperl module package. Rather than attempting to find it as part of a conda package, or in some other repository, we will use the module version. If you need this script in the future outside of TACC, it is available at https://metacpan.org/dist/BioPerl/view/bin/bp_seqconvert

Code Block
languagebash
titleLoad the bioperl module, then use which -a to locate every copy of the script.
module load bioperl/1.007002
which -a bp_seqconvert.pl

If you run the which command above outside of an idev session, you should see 2 results. If you run it from inside an idev node, you get 1 result. On the head node (outside idev), one result points to the BioITeam near where you keep finding your data (/corral-repl/utexas/BioITeam/).

...

The "bin" folder there specifically is full of binary or (typically small) bash/python/perl/R scripts that someone has written to help the TACC community. The other result is in a folder specifically associated with the bioperl module.


Info
titleWhy do you get 2 different results depending on whether you are inside or outside of an idev node?

This has to do with how compute nodes are configured. On stampede2, /corral-repl/ and all of its subdirectories are not accessible from compute nodes, so even though the BioITeam is in your $PATH, the command line can't access it on the compute node. This is why in later tutorials you have to log out of the idev session to copy new raw data files to work with.
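If you are ever unsure whether a storage area is visible from the node you are on, you can test it directly rather than waiting for a command to fail. Here is a small sketch (the function name is made up for illustration; the path is the BioITeam location used in this tutorial):

```shell
#!/usr/bin/env bash
# Check whether a directory is listable and searchable from the current node.
dir_accessible() {
    [ -d "$1" ] && [ -r "$1" ] && [ -x "$1" ]
}

if dir_accessible /corral-repl/utexas/BioITeam; then
    echo "BioITeam is reachable from this node"
else
    echo "BioITeam is NOT reachable (are you on a compute node?)"
fi
```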

If you try to run the BioITeam version of the script from the head node /corral-repl/utexas/BioITeam/bin/bp_seqconvert.pl , you get the following error message:

No Format
Can't locate Bio/SeqIO.pm in @INC (@INC contains: /corral-repl/utexas/BioITeam//local/share/perl5 /corral-repl/utexas/BioITeam//perl5/lib/perl5/x86_64-linux-thread-multi /corral-repl/utexas/BioITeam//perl5/lib/perl5 /corral-repl/utexas/BioITeam//perl5/lib64/perl5/auto /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /corral-repl/utexas/BioITeam/bin/bp_seqconvert.pl line 8.
BEGIN failed--compilation aborted at /corral-repl/utexas/BioITeam/bin/bp_seqconvert.pl line 8.
Info
titleDeciphering error messages

The above error message is pretty helpful, but much less so if you are not familiar with perl. As I doubt anyone in the class is more familiar with perl than I am, and I am hardly familiar with perl at all, this is a great opportunity to explain how I would tackle the error message to figure out what is going on.

  1. "compilation aborted at /corral-repl/utexas/BioITeam/bin/bp_seqconvert.pl line 8." 
    1. The last line here actually tells us that the script did not get very far, only to line 8.
    2. My experience with other programming languages tells me that the beginning of a script is all about checking that the script has access to all the things it needs to do what it is intended to do, so this has me thinking some kind of package might be missing.
  2. "(@INC contains: ..."
    1. This reads like the PATH variable, but is locations I don't recognize as being in my path, suggesting this is not some external binary or other program.
    2. Many of the individual pathways list "lib" in one form or another. This reinforces the idea from above that some kind of package is missing.
  3. "Can't locate Bio/SeqIO.pm in @INC"
    1. "Can't locate" reads like a plain-text version of something being missing: something generic that is not related to my system/environment (like all the listed directories), and not related to the details of the script I am trying to run (like the last line, which names the script we tried to invoke).
    2. This is what should be googled for help solving the problem. 
      1. The Google results list similar error messages associated with different repositories/programs (GitHub issues), suggesting some kind of common underlying problem.
      2. The 3rd result, https://www.biostars.org/p/345331/, reads like a generic version of the problem, and sure enough the answers detail needing to have the Bio library installed from cpan (perl's package management system).
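Following up on that thread, you can check from the command line whether perl can find a given module before even running a script. A quick sketch (the helper name is made up; Bio::SeqIO is the module the error message says is missing):

```shell
#!/usr/bin/env bash
# Ask perl to load a module and do nothing else; this succeeds only if
# perl can find the module somewhere in @INC.
have_perl_module() {
    perl -M"$1" -e 1 2>/dev/null
}

if have_perl_module Bio::SeqIO; then
    echo "Bio::SeqIO is available"
else
    echo "Bio::SeqIO is missing - load the bioperl module (or install it from cpan)"
fi
```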

...

We get this error message because on stampede2, while perl is installed, the required bioperl module is not loaded by default. Rather than having to actually install the library yourself (as we will do for several libraries in the SVDetect tutorial later today), we are lucky that TACC has a bioperl module. So, you must have the bioperl module loaded before the script will run. As it is fairly rare that you need to convert sequence files between different formats, bioperl is not listed as one of the modules in the .bashrc file in your $HOME directory that you set up yesterday. Additionally, this gives you an opportunity to work with the module system. 

After loading the bioperl module to get past the error message, run the script from the BioITeam without any arguments to get the help message:

Code Block
module load bioperl
/corral-repl/utexas/BioITeam/bin/bp_seqconvert.pl

If you find yourself needing to do lots of sequence conversions, and find the bp_seqconvert.pl script useful for doing them, you may want to add a 'module load bioperl/1.007002' line to your .bashrc file. Recall that because the bp_seqconvert.pl script exists in 2 different locations as 2 different copies, only the first one in the PATH variable will be used. Using the which -a command, you can see that the copy used will be the module version unless you specifically invoke the BioITeam version by its full path. In this case it does not matter, as the scripts are the same.
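The PATH-ordering behavior described above is easy to demonstrate with two throwaway copies of a script (all names here are made up for illustration):

```shell
#!/usr/bin/env bash
# Demonstrate that only the first copy of a script on $PATH gets used.
set -e
base=$(mktemp -d)
mkdir -p "$base/moduledir" "$base/teamdir"

# Two copies of the "same" script in two different locations.
printf '#!/bin/sh\necho module copy\n' > "$base/moduledir/myscript.sh"
printf '#!/bin/sh\necho team copy\n'   > "$base/teamdir/myscript.sh"
chmod +x "$base/moduledir/myscript.sh" "$base/teamdir/myscript.sh"

# moduledir comes first on PATH, so its copy wins.
export PATH="$base/moduledir:$base/teamdir:$PATH"

which -a myscript.sh   # lists both copies, moduledir's first
myscript.sh            # runs the first one found: prints "module copy"
```

Prepending a directory to $PATH is (roughly) what module load does for you, which is why the module copy wins after loading bioperl.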

Convert a gbk reference to an embl reference

...

Info
titleyou may be wondering why creating an index is a common first step for many different mapping programs.

Like an index for a book (in the olden days before Kindles and Nooks), creating an index for a computer database allows quick access to any "record" given a short "key". In the case of mapping programs, creating an index for a reference sequence allows it to more rapidly place a read on that sequence at a location where it knows at least a piece of the read matches perfectly or with only a few mismatches. By jumping right to these spots in the genome, rather than trying to fully align the read to every place in the genome, it saves a ton of time.

Indexing is a separate step in running most mapping programs because it can take a LONG time if you are indexing a very large genome (like our own overly complicated human genome). Furthermore, you only need to index a genome sequence once, no matter how many samples you want to map. Keeping it as a separate step means that you can skip it later when you want to align a new sample to the same reference sequence.
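To make the index/map split concrete, here is a hedged sketch of what the one-time indexing step looks like for bowtie2 (the reference FASTA filename is an assumption; the index prefix matches the -x argument used in the mapping command in this tutorial). The guard means the snippet degrades gracefully when bowtie2 is not installed:

```shell
#!/usr/bin/env bash
# Sketch: build a bowtie2 index once, then reuse it for every sample
# mapped against this reference.
build_index() {
    local ref="$1" prefix="$2"
    if command -v bowtie2-build >/dev/null 2>&1; then
        mkdir -p "$(dirname "$prefix")"
        bowtie2-build "$ref" "$prefix"
    else
        echo "bowtie2-build not found - load the bowtie2 module first" >&2
        return 127
    fi
}

# Assumed reference filename; the prefix matches the mapping command's -x.
build_index NC_012967.1.fasta bowtie2/NC_012967.1 || true
```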

Finally, it's time to map the reads! 

Code Block
languagebash
titleAgain, try reading the help for the bowtie2 command to figure out how to run the command yourself. Remember these are paired-end reads.
collapsetrue
bowtie2

Warning
titleIMPORTANT

This command can take a while (~5 minutes) and is extremely taxing. This is longer than we want to run a job on the head node (especially when all of us are doing it at once). In fact, in previous years, TACC has noticed the spike in usage when multiple students forgot to make sure they were on idev nodes and complained pretty forcefully to us about it. Let's not have this be one of those years. Use the hostname or  showq -u command to make sure you are on an idev node.

Command | on idev node | on head node
hostname | lists a compute node starting with a "c" followed by a number, before "stampede2.tacc.utexas.edu" | lists a login node plus a number, before "stampede2.tacc.utexas.edu"
showq -u | -bash: showq: command not found | shows you a summary of jobs you have (very likely empty during these tutorials)
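The hostname check above can be wrapped into a tiny helper if you want a one-line answer (a sketch; the function name is invented, and the patterns follow the hostname forms described above, with lowercase node names):

```shell
#!/usr/bin/env bash
# Classify a stampede2 hostname as a compute (idev) node or a login (head) node.
node_type() {
    case "$1" in
        c[0-9]*.stampede2.tacc.utexas.edu)     echo "compute node (idev)" ;;
        login[0-9]*.stampede2.tacc.utexas.edu) echo "login node (head)"   ;;
        *)                                     echo "unknown"             ;;
    esac
}

node_type "$(hostname)"
```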

If you are not sure if you are on an idev node or are seeing other output with one or both commands, speak up on zoom and I'll show(q) -u what you are looking for. Yes, your instructor likes bad puns. My apologies.

If you are not on an idev node, and need help to relaunch it, click over to the idev tutorial.

...

We have actually massively under-utilized stampede2 in this example by only using 8 cores. We ran the command using only 8 processors rather than the 68 we have available on our idev session. For programs that support multithreaded execution (and most mappers do because they are obsessed with speed) we could have sped things up by using all 68 processors for the bowtie process. If we increase to 68 total processors and rerun the analysis, how long do you expect the command to take?

Expand
titleModify the previous mapping command to re-run this analysis using all 68 cores. Click here for a hint

You need to increase the -p ("processors") option from 8 to 68, since we have 68 cores available to our job. 

Code Block
languagebash
titleclick here to check your answer
collapsetrue
bowtie2 -t -p 68 -x bowtie2/NC_012967.1 -1 SRR030257_1.fastq -2 SRR030257_2.fastq -S bowtie2/SRR030257.sam

Try it out and compare the speed of execution by looking at the times listed at the end of each command.

Expand
titleHow much faster was it using all 68 processors?

8 processors took a little over 5 minutes, 68 processors took ~1 minute. Can you think of any reasons why it was ~5x faster rather than ~8x faster?

Expand
titleAnswer

Anytime you use multiprocessing correctly things will go faster, but even if a program can divide the input perfectly among all available processors, and combine the outputs back together perfectly, there is "overhead" in dividing things up and recombining them. These are the types of considerations you may have to make with your data: When is it better to give more processors to a single sample? How fast do I actually need the data to come back?

An additional note from the stampede2 user manual is that while there are 68 cores available, and each core is capable of hyperthreading (4 hardware threads per core), using all 272 processors is rarely the go-to solution. While I am sure that this is more rigorously and appropriately tested in some other manner, I ran a test using different numbers of processors with the following results:

-p option | time (min:sec)
272 | 1:54
136 | 1:13
68 | 0:57
34 | 1:14
17 | 2:25
8 | 5:12
4 | 9:01
2 | 18:13
1 | 35:01

Again while there are almost certainly better ways to benchmark this, there are 2 things of note that are illustrated here:

  1. ~doubling the number of processors does not cut the time in half, and while some applications may use hyperthreading on the individual cores appropriately, here assuming the program can/will do so actually makes things take longer. 
  2. Working on your laptop (which likely has at most 4-8 processors available) would significantly increase the amount of time these tutorials take.
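You can sanity-check the "overhead" argument with quick arithmetic on the benchmark times above (a sketch; the helper name is invented):

```shell
#!/usr/bin/env bash
# Convert the benchmark times (min:sec) to seconds and compute observed speedup.
to_seconds() {
    IFS=: read -r m s <<< "$1"
    echo $(( 10#$m * 60 + 10#$s ))
}

t8=$(to_seconds 5:12)     # 8 cores  -> 312 s
t68=$(to_seconds 0:57)    # 68 cores ->  57 s

# Ideal speedup for 8 -> 68 cores would be 68/8 = 8.5x; observed is ~5.5x.
awk -v a="$t8" -v b="$t68" 'BEGIN { printf "observed speedup: %.1fx\n", a/b }'
```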

...

  • In the bowtie2 example, we mapped in --local mode. Try mapping in --end-to-end mode (aka global mode).

  • Do the BWA tutorial so you can compare their outputs (note BWA has a conda package making it even easier to try).
    • Did bowtie2 or BWA map more reads?
    • In our examples, we mapped in paired-end mode. Try to figure out how to map the reads in single-end mode and create this output.
    • Which aligner took less time to run? Are there any options you can change that:
      • Lead to a larger percentage of the reads being mapped? (increase sensitivity)
      • Speed up run time without causing many fewer reads to be mapped? (increase performance)

...