Evaluating and processing raw sequencing data GVA2019

Overview

Before you start the alignment and analysis processes, it can be useful to perform some initial quality checks on your raw data. If you don't do this (or even if you do), you may notice later that something looks fishy in the output: for example, many of your reads are not mapping, or the ends of many of your reads do not align. Both can give you clues about whether you need to process the reads to improve the quality of data that you are putting into your analysis.

For many years we have debated whether this should be included as a main tutorial, kept as an optional tutorial, or dropped altogether as the quality of sequencing data improves. Recently, a colleague of mine spent several days working with, and trying to understand, some data he got back before reaching out for help; once we sat down together, FastQC showed in less than 30 minutes that the library had not been constructed correctly. That experience cemented this as an important tutorial: a quick check here may save you significant amounts of time later on.

Learning Objectives

This tutorial covers the commands necessary to use several common programs for evaluating read files in FASTQ format and for processing them (if necessary).

  • Introduction to the development nodes (and idev sessions) on TACC.
  • Diagnose common issues in FASTQ read files that will negatively impact analysis.
  • Trim adaptor sequences and low quality regions from the ends of reads to improve analysis.

Table of Contents

Interactive development (idev) sessions

As we discussed in our first tutorial, the head node is a space shared by everyone, and we don't like stepping on each other's toes. While the launcher_creator.py helper script makes working with the compute nodes much easier, jobs still take time to start (waiting in the queue), and if you have errors in your commands your job will fail and you will lose your place in line. An idev (interactive development) session is a way to move off the head node onto a single compute node and work interactively: you can see whether your commands actually work, get much quicker feedback, and, if everything goes as you hope, get your data. idev sessions are more limited in duration, and in general it is not necessary to watch every line a program prints once you are familiar with the type of output you will get. Additionally, we are going to use a priority-access reservation set up specially for the summer school; you normally would not have access to it, but it should guarantee that your idev session starts immediately.

Copy and paste the following command, and read through the commented lines to make sure it is functioning correctly:

Starting an idev session
idev  -m 180 -r CCBB_Day_1 -A UT-2015-05-18
 
# This should return the following:
#  We found an ACTIVE reservation request for you, named CCBB_Day_1.
#  Do you want to use it for your interactive session?
#  Enter y/n [default y]: 


# If for any reason you don't see the above message let me know by raising your hand.
 
# Your answer should be y, which should return the following:
#  Reservation      : --reservation=CCBB_Day_1 (ACTIVE)
 
# Some of you may see a new prompt stating something like the following:
# We need a project to charge for interactive use.
# We will be using a dummy job submission to determine your project(s).
# We will store your (selected) project $HOME/.idevrc file.
# Please select the NUMBER of the project you want to charge.\n
# 1 OTHER_PROJECTS
# 2 UT-2015-05-18
# Please type the NUMBER(default=1) and hit return:
 
# If you see this message, again let me know.
 
# You will then see something similar to the following:
# job status:  PD
# job status:  R
# --> Job is now running on masternode= nid00032...OK
# --> Sleeping for 7 seconds...OK
# --> Checking to make sure your job has initialized an env for you....OK
# --> Creating interactive terminal session (login) on master node nid00032.
 
# If this takes more than 1 minute get my attention.

Your idev command line contains 3 flags: -m, -r, and -A. Using the `idev -h` command, can you figure out what these 3 flags mean and what you told the system you wanted to do?

 Click here to see if you are correct...

From the OPTIONS: section of the idev help output:

-m     minutes            sets time in minutes (default: 30)

-r     reservation_name   requests use of a specific reservation

-A     account_name       sets account name (default: -A none)

So you requested an idev node for 180 minutes, using the reservation named CCBB_Day_1, and asked that it be charged to the account named UT-2015-05-18.


Illumina sequence data format (FASTQ)

GSAF gives you paired end sequencing data in two matching FASTQ format files, containing reads for each end sequenced: for example, Sample_ABC_L005_R1.cat.fastq and Sample_ABC_L005_R2.cat.fastq. Each read end sequenced is represented by a 4-line entry in the FASTQ file.

A 4-line FASTQ file entry looks like this:

A four-line FASTQ file entry representing one sequence
@HWI-ST1097:104:D13TNACXX:4:1101:1715:2142 1:N:0:CGATGT
GCGTTGGTGGCATAGTGGTGAGCATAGCTGCCTTCCAAGCAGTTATGGGAG
+
=<@BDDD=A;+2C9F<CB?;CGGA<<ACEE*1?C:D>DE=FC*0BAG?DB6
  1. Line 1 is the read identifier, which describes the machine, flowcell, cluster, grid coordinate, end and barcode for the read. Except for the barcode information, read identifiers will be identical for corresponding entries in the R1 and R2 fastq files.
  2. Line 2 is the sequence reported by the machine.
  3. Line 3 is always '+' from GSAF (it can optionally include a sequence description but rarely or never actually does)
  4. Line 4 is a string of ASCII-encoded base quality scores, one character per base in the sequence. For each base, an integer quality score = -10 log10(probability the base call is wrong) is calculated, then added to 33 to make a number in the ASCII printable character range (a small sketch of this decoding is shown below).
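
To make the Phred+33 encoding concrete, here is a minimal bash sketch; the character 'B' is just an arbitrary example and is not taken from the read above.

Decoding a quality character (example)
# printf with a leading quote prints the ASCII code of a character ('B' is 66)
printf "%d\n" "'B"
# subtract the +33 offset to recover the Phred quality score (here, 33)
echo $(( $(printf "%d" "'B") - 33 ))
# the probability the base call is wrong is then 10^(-Q/10)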

See the Wikipedia FASTQ format page for more information.


Now that you know the basics, see if you can complete the following exercises on your own.

Exercise: Examine the 2nd sequence in a FASTQ file

What is the 2nd sequence in the file $BI/gva_course/mapping/data/SRR030257_1.fastq?

 Stuck? click here for a hint

Use the head command.

Still stuck? Click here for the full Head command
head $BI/gva_course/mapping/data/SRR030257_1.fastq 
 Answer

The 2nd sequence has ID = @SRR030257.2 HWI-EAS_4_PE-FC20GCB:6:1:407:767/1, and the sequence TAAGCCAGTCGCCATGGAATATCTGCTTTATTTAGC

If that's what you thought it was, congratulations. If it is different, do you see where we got it from? If it doesn't make sense, ask for help.
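
If you want to pull out exactly the second record rather than scanning the head output by eye, one alternative sketch is to print lines 5 through 8, which make up the second 4-line FASTQ entry:

Printing only the 2nd FASTQ entry
sed -n '5,8p' $BI/gva_course/mapping/data/SRR030257_1.fastq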

Counting sequences

If you get an error from running a program, one of the first things to check is that the number of lines in your FASTQ files is evenly divisible by four and, if the program expects paired reads, that the R1 and R2 files have the same number of reads. The wc command (word count), using the -l switch to tell it to count lines rather than words, is perfect for this:

Using wc -l to count lines
wc -l $BI/gva_course/mapping/data/SRR030257_1.fastq 

Exercise: Counting FASTQ file lines

How many sequences are in the FASTQ file above?

 Answer

The wc -l command says there are 15200720 lines. FASTQ files have 4 lines per sequence, so the file has 15,200,720/4 or 3,800,180 sequences.

As mentioned, many programs have problems if R1 and R2 do not have the same number of reads. While you can obviously change the wc -l command to check R2 rather than R1, fastq files are often stored in a compressed state to save disk space. By using pipes to link commands, you can still count the lines, and you don't have to uncompress the file to do it! Specifically, you can use gunzip -c to write decompressed data to standard output (-c means "to console", and leaves the original *.gz file untouched). You then pipe that output to wc -l to get the line count.

Using wc -l on a compressed file
gunzip -c $BI/gva_course/mapping/data/SRR030257_2.fastq.gz | wc -l

How many lines/sequences does the compressed file contain? Does this agree with what you found for R1?

 Can I do math on the command line?

Of course, but the bash shell has a rather unusual syntax for arithmetic: it uses a double-parenthesis operator. Additionally, unlike a calculator that automatically prints the result of every operation, we have to explicitly tell bash that we want to see the result. We do this with the echo command, which prints the value of the arithmetic expression without assigning it to a named variable.

Arithmetic in Bash
echo $((15200720 / 4))

While this is certainly possible, memorizing different formats is often not worth the effort, and it can be easier to use another program (i.e. Excel or a standard calculator) to do this type of work.
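
As a sketch of how this arithmetic can be put to work, the commands below (run against the same R1 file used above) check that the line count is evenly divisible by 4 before converting it to a read count:

Sanity-checking a line count with bash arithmetic
lines=$(wc -l < $BI/gva_course/mapping/data/SRR030257_1.fastq)
echo $(( lines % 4 ))    # should print 0 for a well-formed FASTQ file
echo $(( lines / 4 ))    # the number of reads (3,800,180 here)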



 Alternative using grep

grep (Global Regular Expression Print) can also be used to determine the number of lines that match some criterion. Since we know the 3rd line of each 4-line FASTQ entry is a + and only a +, we can look for lines containing nothing but a +, and use that count to determine the number of sequence blocks in the file.


grep example
grep -c "^+$" $BI/gva_course/mapping/data/SRR030257_2.fastq

The -c option tells grep to count the matching lines rather than printing them all to the screen. The characters between the quotes are what grep is looking for: the ^ symbol anchors the match to the beginning of the line, and the $ symbol anchors it to the end of the line. Once again you see this returns 3800180 reads.
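
Yet another way to arrive at the same number, shown here just as a sketch, is to let awk count the lines and do the division in one step:

awk alternative
# NR holds the total line count when awk reaches the END block
awk 'END{print NR/4}' $BI/gva_course/mapping/data/SRR030257_1.fastq   # should print 3800180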




While checking the number of reads a file has can solve some of the most basic problems, it doesn't really provide any direct evidence as to the quality of the sequencing data. To get this type of information before starting meaningful analysis, other programs must be used.


Place your sticky note on your computer when you have made it this far and start looking over the fastqc links below. Once everyone has caught up we will go over this together.


FASTQ Evaluation Tools

The first order of business after receiving sequencing data should be to check your data quality. As discussed above, this often-overlooked step helps guide the manner in which you process the data, and can prevent many headaches that could require you to redo an entire analysis after they rear their ugly heads.

FastQC

FastQC is a tool that produces a quality analysis report on FASTQ files. Online documentation for FastQC 

First and foremost, the FastQC "Summary" on the left should generally be ignored. Its "grading scale" (green - good, yellow - warning, red - failed) incorporates assumptions for a particular kind of experiment, and is not applicable to most real-world data. Instead, look through the individual reports and evaluate them according to your experiment type.

The FastQC reports I find most useful are:

  1. The Per base sequence quality report, which can help you decide if sequence trimming is needed before alignment.
  2. The Sequence Duplication Levels report, which helps you evaluate library enrichment / complexity. But note that different experiment types are expected to have vastly different duplication profiles.
  3. The Overrepresented Sequences report, which helps evaluate adapter contamination.
 A couple of other things to note about FastQC
  • For many of its reports, FastQC analyzes only the first 200,000 sequences in order to keep processing and memory requirements down.
  • Some of FastQC's graphs have a 1-100 vertical scale that is tricky to interpret. The 100 is a relative marker for the rest of the graph. For example, sequence duplication levels are plotted relative to the number of unique sequences.

Running FastQC

FastQC is available from the TACC module system on Lonestar. Interactive GUI versions are also available for Windows and Macintosh and can be downloaded from the Babraham Bioinformatics web site. We don't want to clutter up our work space, so copy the SRR030257_1.fastq file to a new directory named GVA_fastqc_tutorial on scratch, use the module system to load fastqc, and use fastqc's help option after the module is loaded to figure out how to run the program. Once the program has completed, use scp to copy the important file back to your local machine. (The bold words are key words that may give you a hint of what steps to take next.)

Running FastQC example
mkdir $SCRATCH/GVA_fastqc_tutorial
cd $SCRATCH/GVA_fastqc_tutorial
cp $BI/gva_course/mapping/data/SRR030257_1.fastq .
module load fastqc
 
fastqc -h  # examine program options
fastqc SRR030257_1.fastq  # run the program

Exercise: FastQC results

What did FastQC create?

 Answer
ls -l shows something like this
-rwxr-xr-x 1 ded G-802740 498588268 May 23 12:06 SRR030257_1.fastq
-rw-r--r-- 1 ded G-802740    291714 May 23 12:07 SRR030257_1_fastqc.html
-rw-r--r-- 1 ded G-802740    455677 May 23 12:07 SRR030257_1_fastqc.zip

The SRR030257_1.fastq file is what we analyzed, so FastQC created the other two items. SRR030257_1_fastqc.html contains the results in a form viewable in a web browser. SRR030257_1_fastqc.zip is a zipped (compressed) version of the full results.
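
If you would like a quick look at the results without leaving TACC, the zip archive contains plain-text summaries you can read directly in the terminal. This sketch assumes FastQC's usual output layout of a SRR030257_1_fastqc directory containing summary.txt and fastqc_data.txt:

Peeking at FastQC results on the command line
unzip SRR030257_1_fastqc.zip
cat SRR030257_1_fastqc/summary.txt        # PASS/WARN/FAIL status for each module
less SRR030257_1_fastqc/fastqc_data.txt   # the underlying numbers behind each report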

Looking at FastQC output

You can't run a web browser directly from your command line environment. You should copy the results back to your local machine (via scp) to open them in a web browser.

Transferring fastqc data back to computer
# on tacc terminal
pwd
 
# on new terminal of local computer
scp <username>@ls5.tacc.utexas.edu:<pwd_results_from_other_window>/SRR030257_1_fastqc.html ~/Desktop
 
# open the newly transferred file from the desktop and see how the data looks


Exercise: Should we trim this data?

Based on this FastQC output, should we trim (1) adaptor sequences from the ends of the reads AND/OR (2) low quality regions from the ends of the reads?

 Answer

The Per base sequence quality report does not look great, but more importantly, nearly 1.5% of all the sequences are all A's according to the Overrepresented sequences report. This is something that often comes up in MiSeq data that has insert sizes shorter than the overall read length. Next we'll start looking at how to trim our data before continuing.

FASTQ Processing Tools

Cutadapt

Cutadapt provides a simple command line tool for manipulating fasta and fastq files. The program description on their website provides good details of all the capabilities and examples for some common tasks. Cutadapt is also available via the TACC module system, allowing us to turn it on when we need it and not worry about it at other times.

cutadapt module description
module spider cutadapt
module load cutadapt

Trimming low quality bases

Low quality base calls from the sequencer can cause an otherwise mappable sequence not to align. There are a number of open source tools that can trim off 3' bases and produce a FASTQ file of the trimmed reads to use as input to the alignment program, but cutadapt has the advantage of being available as a module on TACC and is therefore the easiest to use here. To run the program, you simply type 'cutadapt' followed by whatever options you want, and then the name of the fastq file without any option in front of it. Use the -h option to see all the different things you can do, and see if you can figure out how to trim the reads down to 34 bases.

 Hint

Type cutadapt -h to see program documentation.

As there are a large number of options, look below the possible solution for more detailed information on what to focus on.

One possible solution
cutadapt -l 34 -o SRR030257_1.trimmed.fastq SRR030257_1.fastq
  • The -l 34 option says that base 34 should be the last base (i.e., trim down to 34 bases)
  • The -o sets the output file, in this case SRR030257_1.trimmed.fastq
  • Listing the input file without any option in front of it (SRR030257_1.fastq) is a common way to specify input files.
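
A quick way to confirm the trimming did what you expect, sketched here under the assumption that the first read was originally at least 34 bases long, is to measure the length of the first sequence line in the new file:

Checking the trimmed read length
# NR==2 is the first read's sequence line; should print 34 after trimming
awk 'NR==2{print length($0); exit}' SRR030257_1.trimmed.fastq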

Exercise: compressing the trimmed file 

Compressed files are smaller, easier to transfer, and many programs allow for their use directly. How would you tell cutadapt to compress (gzip) its output file?

 Hint

Type cutadapt -h to see program documentation and look for information about compressed files

 key portion of help

Above the citation you see a paragraph that starts:

Input may also be in FASTA format. Compressed input and output is supported and auto-detected from the file name (.gz, .xz, .bz2)

So simply by adding .gz to the output file name, cutadapt will compress it after it does the trimming.

Possible solution using the program directly
cutadapt -l 34 -o SRR030257_1.trimmed.fastq.gz SRR030257_1.fastq
Possible solution using gzip yourself
gzip SRR030257_1.trimmed.fastq

Both of the above solutions give the same final product, but achieve it in different ways. This is meant to show you that data analysis is a results-driven process: if the result is correct, and you know how you got it, the approach is valid as long as it is reproducible.
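
If you want to convince yourself the two routes really are equivalent, one sketch (assuming you still have both the uncompressed trimmed file and a cutadapt-compressed copy side by side) is to compare checksums of the uncompressed content:

Comparing the two outputs
md5sum SRR030257_1.trimmed.fastq
gunzip -c SRR030257_1.trimmed.fastq.gz | md5sum
# identical checksums mean identical trimmed reads, whichever route produced them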

Adapter trimming

 As mentioned above, cutadapt can be used to trim specific sequences, and based on our fastqc analysis, the sequence AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA is significantly overrepresented in our data. How would you use cutadapt to remove those sequences from the fastq file?

 

 Hint

Again, we go back to the program documentation to find what we are looking for: cutadapt -h

Look below the possible solution for more detailed information on what to focus on if you can't find what you are looking for.

Possible solution
cutadapt -o SRR030257_1.trimmed.depleted.fastq -a AAAAAAAAAAAAAAAAAAAA -l 34 -m 16 SRR030257_1.fastq

 

Command portion and its purpose:

  • The -o SRR030257_1.trimmed.depleted.fastq option creates this new output file
  • The -a AAAAAAAAAAAAAAAAAAAA option removes bases containing this sequence
  • The -l 34 option trims reads to 34 bases
  • The -m 16 option discards any read shorter than 16 bases after the sequence is removed, as these are more likely to be difficult to uniquely align to the genome
  • Listing the input file without any option in front of it (SRR030257_1.fastq) specifies it as input

From the summary printed to the screen you can see that this removed a little over an additional 2.2M bp of sequence.
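
Because -m 16 discards whole reads, the output can also contain fewer reads than the input. A quick sketch of how to check this, assuming you are still in the directory containing both files, is to count the reads before and after:

Counting reads before and after adapter trimming
echo $(( $(wc -l < SRR030257_1.fastq) / 4 ))                    # reads in the input
echo $(( $(wc -l < SRR030257_1.trimmed.depleted.fastq) / 4 ))   # reads kept in the output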

A note on versions 

In our first tutorial we mentioned how important it can be to know what version of a program you are using. When we loaded the cutadapt module we didn't specify which version to load. Can you figure out what version you used, and what the most recent version of the program is?

 How to figure out the currently installed version

try using the module system or the program's help files

Still not sure?
module spider cutadapt

cutadapt --version

Figuring out the most recent version is a little more complicated. Unlike programs on your computer such as Microsoft Office or your internet browser, there is nothing in an installed program that tells you if you have the newest version or even what the newest version is. If you go to the program's website (easily found with Google or this link), the changes section lists all the versions that have been released, with v2.3 being released on April 25th of this year.

 Take a moment to think about why there might be such a big discrepancy before clicking here for the list of possible reasons I put together.

The biggest reason is that someone at TACC has to notice that there is a new version, figure out whether all of the changes are compatible with TACC, install it, and then field questions and problems from users who were used to the old version and run into trouble with the new one.

The next reason is that the existing version works, and if you read through some of the recent changes, they are very small and do not affect the function of the program very much.


Together, this is why I encourage you to make note of what version of the programs you use when you use them (primarily by loading modules, with their versions specified, in your .bashrc file), and to consider installing programs yourself when appropriate (as discussed in the advanced trimmomatic tutorial for read trimming).
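
As a sketch of what pinning a version looks like in practice (the version number below is only a placeholder; use module spider to see which versions are actually installed):

Loading a specific module version
module spider cutadapt        # lists the versions TACC currently provides
module load cutadapt/1.18     # 1.18 is a placeholder; substitute a version from the list above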


Optional Exercise: Improve the quality of R2 the same way you did for R1.

Unfortunately we don't have time during the class to do this, but as a potential exercise in your free time, you could improve R2 the same way you did R1 and use the improved fastq files in the subsequent read mapping and variant calling tutorials to see the difference it can make.



Return to GVA2019 course page