Once you know you are working with the best quality data possible (see the Evaluating Raw Sequencing Data tutorial), the first step in nearly every NGS analysis pipeline is to map sequencing reads to a reference genome. In this tutorial we'll explore these basic principles using bowtie2 on TACC.
The world of read mappers is settling down after being a bioinformatics Wild West where there was a new gun in town every week that promised to be a faster and more accurate shot than the current record holder. Things seem to have reached the point where there is mainly a trade-off between speed, accuracy, and configurability among read mappers that have remained popular. There are over 50 read mapping programs listed here. Each mapper has its own set of limitations (on the lengths of reads it accepts, on how it outputs read alignments, on how many mismatches there can be, on whether it produces gapped alignments, etc). It is possible a different read mapper would be better for your set of experiments. More will be discussed about selecting a good tool on Friday.
Previous versions of this class and tutorial have covered using bowtie and bwa. Please consult these tutorials for more specific information on each mapping program. A previous version of this tutorial included a trimmed down version of the bwa tutorial if you just want the 'flavor' of what other read mappers involve.
This tutorial covers the commands necessary to use bowtie2 to map reads to a reference genome, concepts applicable to many more mappers, and the SAM/BAM format used for downstream analysis. Please see the Introduction to mapping presentation on the course outline for more details of the theory behind read mapping algorithms and critical considerations for using these tools and references correctly.
The following DNA sequencing read data files were downloaded from the NCBI Sequence Read Archive via the corresponding European Nucleotide Archive record. They are Illumina Genome Analyzer sequencing of a paired-end library from a (haploid) E. coli clone that was isolated from a population of bacteria that had evolved for 20,000 generations in the laboratory as part of a long-term evolution experiment (Barrick et al, 2009). The reference genome is the ancestor of this E. coli population (strain REL606), so we expect the read sample to have differences from this reference that correspond to mutations that arose during the evolution experiment.
Rather than having to download these files from the SRA or ENA yourself, these data files are available in the following directory:
```
$BI/gva_course/mapping/data
```
You may recognize these as the same files we used for the fastqc and cutadapt tutorial. If you chose to improve the quality of the R2 reads using cutadapt, as you did for R1 in that tutorial, you could use the improved reads here to see what a difference they make for read mapping.
| File Name | Description | Sample |
|---|---|---|
| SRR030257_1.fastq | Paired-end Illumina, first of pair, FASTQ format | Re-sequenced E. coli genome |
| SRR030257_2.fastq | Paired-end Illumina, second of pair, FASTQ format | Re-sequenced E. coli genome |
| NC_012967.1.gbk | Reference genome in GenBank format | E. coli B strain REL606 |
The easiest way to run the tutorial is to copy this entire directory into a new folder called "GVA_bowtie2_mapping" on your $SCRATCH space and then run all of the commands from inside that directory. See if you can figure out how to do that. When you're in the right place, you should get output like this from the ls command.
```
tacc:/scratch/<#>/<UserName>/GVA_bowtie2_mapping$ ls
NC_012967.1.gbk  SRR030257_1.fastq  SRR030257_2.fastq  SRR030257_2.fastq.gz
```
```
cds
cp -r $BI/gva_course/mapping/data GVA_bowtie2_mapping
cd GVA_bowtie2_mapping
ls
```
NGS data can be quite large; a single lane of an Illumina HiSeq run generates 2 files, each with hundreds of millions of lines. Printing all of that can take an enormous amount of time and will likely crash your terminal long before it finishes. If you find yourself in a seemingly endless scroll of sequence (or anything else for that matter), remember control+c will kill whatever command you just executed. If hitting control+c several times doesn't work, control+z will stop the process; you then need to kill the stopped process yourself (for example, with the kill command).
Remember, from the introduction tutorial, there are multiple ways to look at our sequencing files without using cat:
| Command | useful for | bad if |
|---|---|---|
| head | seeing the first lines of a file (10 by default) | file is binary |
| tail | seeing the last lines of a file (10 by default) | file is binary |
| cat | print all lines of a file to the screen | the file is big and/or binary |
| less | opens the entire file in a separate program but does not allow editing | you are going to type a new command based on the content, or forget that the q key exits the view, or file is binary |
| more | prints 1 page worth of a file to the screen; you can hold the enter key down to see the next line repeatedly. Contents will remain when you scroll back up. | you forget that the q key stops showing the file, or file is binary |
```
grep -c "^+$" SRR030257_1.fastq
```
```
sed -n 2p SRR030257_1.fastq | awk -F"[ATCGNatcgn]" '{print NF-1}'
```
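If you trust that every record in the file occupies exactly four lines, a simpler sketch for counting reads is to count lines and divide. This is an alternative I'm offering for illustration, not a command from the original tutorial:

```shell
# Count reads by dividing the total line count by 4.
# Assumes strict four-line FASTQ records (true for this data set,
# but not for FASTQ files with wrapped sequence lines).
wc -l < SRR030257_1.fastq | awk '{print $1 / 4}'
```

If this number disagrees with the grep count above, it usually means some records are not exactly four lines long.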
Occasionally you might download a sequence, or have it emailed to you by a collaborator, in one format, and then the program that you want to use demands that it be in another format. Why do they have to be so picky? Everybody has their own favorite formats and/or those that they are the most familiar with, but humans can typically pick the information they need out of comparable formats. Programs can only be written to assume a single format (or to let you specify a format, if the author is particularly generous), and can only find things in fixed locations based on that format.
While you could write your own sequence converter, hopefully it jumps out at you that this is something someone else must have done before. In situations like this, you can often spend a few minutes on Google finding a Stack Overflow question/answer that deals with something like this. Some will be about how to code such things yourself, but the particularly useful ones point to a program or repository where someone has already done this for you.
In this case the bp_seqconvert.pl perl script is included as part of the bioperl module package. Rather than attempt to find it as part of a conda package, or in some other repository, we will use the module version. If you need this script in the future outside of TACC, it can be found at https://metacpan.org/dist/BioPerl/view/bin/bp_seqconvert.
```
module load bioperl
bp_seqconvert.pl
```
If you run on an idev node you get 1 result related to the bioperl module, but if you run on the head node (outside idev) you get 2 results. On the head node, one points to the BioITeam directory near where you keep finding your data (/corral-repl/utexas/BioITeam/), specifically its "bin" folder, which is full of binaries and (typically small) bash/python/perl/R scripts that people have written to help the TACC community. The other is in a folder specifically associated with the bioperl module. You can load and unload the bioperl module to see the difference.
If you try to run the BioITeam version of the script without the bioperl module loaded, you will get an error message.
We get this error message because, while perl is installed on stampede2, the required SeqIO.pm library is not available by default; it is easily made available with the bioperl module. As it is likely rare that you will need to convert sequence files between formats, bioperl is not one of the modules listed in the .bashrc file in your $HOME directory that you set up yesterday, but if you find yourself using the command `module load bioperl` often, you may want to add it.
How does the computer know which location to use?
Using just the script name by itself will use whichever copy is found first, but you can always force the computer to use a given copy by specifying the full path to the copy you want. Thus, the following 2 commands are not equal:
While the commands are different, both copies can use the same bioperl SeqIO.pm library when the bioperl module is loaded, and thus both work.
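To see this PATH-order resolution in action, here is a small self-contained sketch. The directory and script names (dirA, dirB, hello.sh) are made up for illustration and are not part of the tutorial data:

```shell
# Two scripts with the same name in different directories.
mkdir -p dirA dirB
printf '#!/bin/sh\necho from-dirA\n' > dirA/hello.sh
printf '#!/bin/sh\necho from-dirB\n' > dirB/hello.sh
chmod +x dirA/hello.sh dirB/hello.sh

# Bare name: the shell walks PATH left to right and runs the first match,
# so this prints "from-dirA".
PATH="$PWD/dirA:$PWD/dirB:$PATH" hello.sh

# Full path: PATH is ignored entirely and the named copy runs,
# so this prints "from-dirB".
"$PWD/dirB/hello.sh"
```

On TACC, `which -a bp_seqconvert.pl` will similarly list every copy of the script on your PATH, in the order the shell would find them.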
Convert the Genbank file NC_012967.1.gbk to EMBL format, and name this new file NC_012967.1.embl.
```
module load bioperl
bp_seqconvert.pl --from genbank --to embl < NC_012967.1.gbk > NC_012967.1.embl
head -n 100 NC_012967.1.embl
```
It is somewhat frustrating or confusing that this command gives no output saying it was successful. The fact that you get your prompt back is often the only sign the computer has finished doing something.
Remember that you can quit the less and more views with the q key.
Sometimes you only want to work with a subset of a full data file to check for overall trends, or to try out a new piece of code. Convert only the first 10,000 lines of SRR030257_1.fastq to FASTA format.
```
head -n 10000 SRR030257_1.fastq | bp_seqconvert.pl --from fastq --to fasta > SRR030257_1.fasta
```
The line of ASCII characters was lost. Remember, those are your base quality scores. Many mappers use the base quality scores to improve how the reads are aligned by placing less emphasis on poor-quality bases.
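As an aside, the same FASTQ-to-FASTA conversion can be sketched with plain awk, no bioperl required, assuming strict four-line FASTQ records. The output file name here is my own choice so it doesn't overwrite the file made above:

```shell
# Line 1 of each record is the header: swap the leading '@' for '>'.
# Line 2 is the sequence: print it unchanged.
# Lines 3 ('+') and 4 (qualities) are dropped, which is exactly the
# information loss noted above.
awk 'NR % 4 == 1 {print ">" substr($0, 2)} NR % 4 == 2 {print}' \
    SRR030257_1.fastq > SRR030257_1_awk.fasta
```

This runs anywhere awk exists, but unlike bp_seqconvert.pl it does no validation of the input format.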
Bowtie2 is a complete rewrite of the older program bowtie. In terms of configurability, sensitivity, and speed, it is useful for a wide range of projects. After years of teaching bwa mapping alongside bowtie2, only bowtie2 is now taught, as I never recommend anyone use bwa, and based on positive feedback we continue with this setup. For some more details about how read mappers work, see the bonus presentation about read mapping details and file formats on the course home page; if you find a compelling reason to use bwa (or any other read mapper) rather than bowtie2 after the course is over, I'd love to hear from you.
Create a fresh output directory named bowtie2. We are going to create a specific output directory for the bowtie2 mapper within the directory that has the input files so that you can compare the results of other mappers if you choose to do the other tutorials.
```
mkdir bowtie2
```
First you need to convert the reference file from GenBank to FASTA using what you learned above. Name the new output file NC_012967.1.fasta and put it in the same directory as NC_012967.1.gbk.
```
bp_seqconvert.pl --from genbank --to fasta < NC_012967.1.gbk > NC_012967.1.fasta
```
While you could consult a previous year's tutorial for installing bowtie2 via the module system, this year's course will be using the conda system to install it. The bowtie2 home page can be found here, and if you needed to download the program itself, version 2.5.1 could be downloaded here. Instead, we want to make sure bowtie2 version 2.5.1 is installed via conda, like we did for fastqc and cutadapt. See if you can figure out how to install bowtie2 into a new conda environment named "GVA-bowtie2-mapping". Note that the "2" is actually part of the program name, neither a typo nor a comment on the program version.
Remember that we want to use the https://anaconda.org/ search function to end up at the bowtie2 page: https://anaconda.org/bioconda/bowtie2, like we did for the programs we installed previously.
As mentioned when explaining why cutadapt installed version 1.18 instead of 4.4, the default anaconda channel and the bioconda channel do not always have all the requirements needed to install the latest version of a program. In the list of new packages to be installed, the following line shows that the bowtie2 version that will be installed is 2.4.1: bowtie2 bioconda/linux-64::bowtie2-2.4.1-py38he513fc3_0. While it may seem like installing a different version of the program is bad behavior, this is actually a huge benefit of conda. Often changes from version to version of a program are small and only affect subsets of the program, and the conda package installer is designed to find whatever way it can to get you a working version of the program. If we know there is a particular version we want (be it the newest version, or a previous version we want in order to maintain consistent behavior in a given data set) and we tell conda that we want that version, then if conda can't install that version it won't prompt you to proceed; it will just fail.
Since we don't have a lot of information about what is causing the conflict with bowtie2 version 2.5.1, a simple step to try is to give the installation access to conda-forge. Like bioconda, conda-forge is a channel that is community run rather than company run; it is both more nimble about including new things and more expansive in the tools it includes. More information about conda-forge can be found here.
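Putting the pieces together, one hedged sketch of an install command, assuming the environment name requested above and the version pinning just described, would be:

```shell
# Pin the exact version and list conda-forge ahead of bioconda.
# With the pin in place, conda fails outright if 2.5.1 cannot be
# satisfied, rather than silently installing 2.4.1.
conda create -n GVA-bowtie2-mapping -c conda-forge -c bioconda bowtie2=2.5.1
```

Remember to `conda activate GVA-bowtie2-mapping` afterward before trying to run bowtie2.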
We just went through a lot of work to make sure we installed the version that we wanted, but sometimes we need to work the other way and figure out what version we have already been working with, as we did yesterday with cutadapt.
The above should show that you now have access to version 2.5.1. If you have a different version listed (such as 2.3.5 or 2.3.2), make sure you are using the conda installation with access to conda-forge and not relying on the TACC module, and then get my attention for clarification.
The following command is extremely taxing to the head node, which means we should not run it there (especially when all of us are doing it at once). In fact, in previous years, TACC has noticed the spike in usage when multiple students forgot to make sure they were on idev nodes, and complained pretty forcefully to us about it. Let's not have this be one of those years. Use the hostname and showq -u commands to check that you are on an idev node before continuing.
If you are not sure whether you are on an idev node, or are seeing other output with one or both commands, speak up on Zoom and I'll show(q) -u what you are looking for. Yes, your instructor likes bad puns. My apologies. If you are not on an idev node and need help relaunching one, click over to the idev tutorial.
For many read mappers, the first step in mapping reads to a genome is indexing the reference file. Put the output of this command into the bowtie2 directory we created a minute ago. The command you need is:
```
bowtie2-build
```
Try typing the command by itself in the terminal and figuring out what to do from the help it prints.
The command requires 2 arguments. The first argument is the reference sequence in FASTA format. The second argument is the "base" file name to use for the created index files. It will create a bunch of files whose names begin with bowtie2/NC_012967.1.
```
bowtie2-build NC_012967.1.fasta bowtie2/NC_012967.1
```
Take a look at your output directory using ls bowtie2 to see what new files have appeared. These files are binary files, so looking at them with head or tail isn't instructive and can cause issues with your terminal. If you insist on looking at them (or accidentally do so before you read this) and your terminal begins behaving oddly, simply close it and log back into stampede2 with a new terminal, and start a new idev session.
Like an index for a book (in the olden days before Kindles and Nooks), creating an index for a computer database allows quick access to any "record" given a short "key". In the case of mapping programs, creating an index for a reference sequence allows it to more rapidly place a read on that sequence at a location where it knows at least a piece of the read matches perfectly or with only a few mismatches. By jumping right to these spots in the genome, rather than trying to fully align the read to every place in the genome, it saves a ton of time. Indexing is a separate step in running most mapping programs because it can take a LONG time if you are indexing a very large genome (like our own overly complicated human genome). Furthermore, you only need to index a genome sequence once, no matter how many samples you want to map. Keeping it as a separate step means that you can skip it later when you want to align a new sample to the same reference sequence.
```
bowtie2
```
It is important that you use 8 processors when doing this mapping due to course time constraints.
```
bowtie2 -t -p 8 -x bowtie2/NC_012967.1 -1 SRR030257_1.fastq -2 SRR030257_2.fastq -S bowtie2/SRR030257.sam
# the -t option is not required for the mapping, but it can be particularly
# informative when you begin comparing different mappers
```
Your final output file is in SAM format. It's just a text file, so you can peek at it and see what it's like inside. A warning though: it is very large, so don't print the whole thing to your screen; look at it with head or grep or more, or use a viewer like IGV, which we will cover in a later tutorial. Still, you should recognize some of the information on a line in a SAM file from the input FASTQ, and some of the other information is relatively straightforward to understand, like the position where the read mapped. Give this a try:
```
head bowtie2/SRR030257.sam
```
If you thought the answer was the mapping coordinates of the read pairs, you were right!
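Since the SAM file is plain text, the command-line tools from earlier in this tutorial can already answer simple questions about it. A small sketch, using the file name from the mapping command above:

```shell
# Header lines start with '@'; every other line is one alignment record.
grep -c "^@" bowtie2/SRR030257.sam     # count header lines
grep -vc "^@" bowtie2/SRR030257.sam    # count alignment records

# Columns 3 and 4 of an alignment record are the reference name and the
# 1-based mapping position.
grep -v "^@" bowtie2/SRR030257.sam | head -n 1 | awk '{print $3, $4}'
```

Counting alignment records this way counts both mates of each pair (and any unmapped reads bowtie2 reports), so expect roughly twice the number of read pairs.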
We have actually massively under-utilized stampede2 in this example: we ran the command using only 8 processors rather than the 48 available in our idev session. If we increase to all 48 processors and rerun the analysis, how long do you expect the command to take?
You need to increase the -p option from 8 to 48.
Try it out and compare the speed of execution by looking at the times listed at the end of each command.
One consequence of multithreading that might be confusing is that the aligned reads can appear in your output SAM file in a different order than they were in the input FASTQ. This happens because small sets of reads are continuously packaged, "sent" to the different processors, and whichever set "returns" fastest is written first. You can force them to appear in the same order (at a slight cost in speed) by adding the --reorder flag to your command, but this is typically only necessary if the reads were already ordered or you intend to do some comparison between the input and output (something I have never done in my own work).
The next steps are often to view the output using a specific viewer on your local machine, or to begin identifying variant locations where the reads differ from the reference sequence. These will be the next things we cover in the course.
In the bowtie2 example, we mapped in --local mode. Try mapping in --end-to-end mode (aka global mode).
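If you want a starting point, here is a sketch of the global-mode command. The output file name is my invention, chosen so your earlier result is not overwritten:

```shell
# Identical inputs to the earlier run, but --end-to-end requires the
# entire read to align; no soft-clipping of read ends is allowed.
bowtie2 -t -p 8 --end-to-end -x bowtie2/NC_012967.1 \
    -1 SRR030257_1.fastq -2 SRR030257_2.fastq \
    -S bowtie2/SRR030257_end_to_end.sam
```

Comparing the overall alignment rates printed at the end of each run is a quick way to see how the two modes differ on this data set.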
Here is a link to help you return to the GVA 2023 course schedule.