Evaluating and processing raw sequencing data (GVA14)

Overview

Before you start the alignment and analysis processes, it can be useful to perform some initial quality checks on your raw data. If you don't do this (or even if you do), you may notice later that something looks fishy in the output: for example, many of your reads are not mapping, or the ends of many of your reads do not align. Both can give you clues about whether you need to process the reads to improve the quality of the data you are putting into your analysis.

Here we will assume you have data from GSAF's Illumina HiSeq or MiSeq sequencer.

Learning Objectives

This tutorial covers the commands necessary to use several common programs for evaluating read files in FASTQ format and for processing them (if necessary).

  • Diagnose common issues in FASTQ read files that will negatively impact analysis.
  • Trim adaptor sequences and low quality regions from the ends of reads to improve analysis.


When following along here, please start an idev session for running any example commands:

idev -m 60 -q development

Illumina sequence data format (FASTQ)

GSAF gives you paired-end sequencing data in two matching FASTQ format files, containing the reads for each end sequenced: for example, Sample_ABC_L005_R1.cat.fastq and Sample_ABC_L005_R2.cat.fastq. Each sequenced read end is represented by a 4-line entry in the FASTQ file.

A 4-line FASTQ file entry looks like this:

A four-line FASTQ file entry representing one sequence
@HWI-ST1097:104:D13TNACXX:4:1101:1715:2142 1:N:0:CGATGT
GCGTTGGTGGCATAGTGGTGAGCATAGCTGCCTTCCAAGCAGTTATGGGAG
+
=<@BDDD=A;+2C9F<CB?;CGGA<<ACEE*1?C:D>DE=FC*0BAG?DB6
  1. Line 1 is the read identifier, which describes the machine, flowcell, cluster, grid coordinate, end and barcode for the read. Except for the barcode information, read identifiers will be identical for corresponding entries in the R1 and R2 fastq files.
  2. Line 2 is the sequence reported by the machine.
  3. Line 3 is always '+' from GSAF (it can optionally include a sequence description)
  4. Line 4 is a string of ASCII-encoded base quality scores, one character per base in the sequence. For each base, an integer quality score = -10 log10(probability the base is wrong) is calculated, then added to 33 to make a number in the ASCII printable character range (see the worked example below).

See the Wikipedia FASTQ format page for more information.
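To make the encoding concrete, here is one way to decode a quality character at the bash prompt. This is just a sketch using standard printf and shell arithmetic; '=' is the first quality character in the example entry above.

Decoding one base quality character
printf '%d\n' "'="    # prints 61, the ASCII code of '='
echo $((61 - 33))     # Phred quality 28, i.e. P(error) = 10^-2.8, or about 0.0016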

Exercise: Examine the 2nd sequence in a FASTQ file

What is the 2nd sequence in the file $BI/gva_course/mapping/data/SRR030257_1.fastq ?

 Hint

Use the head command.

 Answer
head $BI/gva_course/mapping/data/SRR030257_1.fastq

Executing the command above reports that the 2nd sequence has ID = @SRR030257.2 HWI-EAS_4_PE-FC20GCB:6:1:407:767/1, and the sequence TAAGCCAGTCGCCATGGAATATCTGCTTTATTTAGC
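If you want to see exactly the 2nd record rather than picking it out of the head output by eye, you can combine head and tail. This works because each FASTQ record is exactly 4 lines, so the 2nd record occupies lines 5 through 8:

Showing only the 2nd FASTQ record
head -n 8 $BI/gva_course/mapping/data/SRR030257_1.fastq | tail -n 4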

Counting sequences

If you get an error from running a program, one of the first things to check is that the length of your FASTQ files is evenly divisible by four and, if the program expects paired reads, that the R1 and R2 files have the same number of reads. The wc command (word count), with the -l switch telling it to count lines rather than words, is perfect for this:

Using wc -l to count lines
wc -l $BI/gva_course/mapping/data/SRR030257_1.fastq
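Here is a small sketch of how you might automate those two checks. The paired file names in the second command are placeholders; substitute your own R1 and R2 files:

Checking divisibility by 4 and matching R1/R2 counts
lines=$(wc -l < $BI/gva_course/mapping/data/SRR030257_1.fastq)
echo $((lines % 4))    # should print 0 if there are no partial records

# for paired data, both line counts should be identical (file names are placeholders)
wc -l my_sample_R1.fastq my_sample_R2.fastq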

Exercise: Counting FASTQ file lines

How many sequences are in the FASTQ file above?

 Answer

The wc -l command says there are 15200720 lines. FASTQ files have 4 lines per sequence, so the file has 15,200,720/4 or 3,800,180 sequences.

What if your fastq file has been compressed, for example by gzip? By using pipes to link commands, you can still count the lines, and you don't have to uncompress the file to do it!

Using wc -l on a compressed file
gunzip -c $BI/web/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz | wc -l

Here you use gunzip -c to write decompressed data to standard output (-c means "to console", and leaves the original *.gz file untouched). You then pipe that output to wc -l to get the line count.

Exercise: Counting compressed FASTQ lines

How many sequences are in the compressed FASTQ file above?

 Answer

The wc -l command says there are 2368720 lines so the file has 2,368,720/4 or 592,180 sequences.

 How do I do math on the command line?

The bash shell has a really strange syntax for arithmetic: it uses a double-parenthesis operator. Go figure.

Arithmetic in Bash
echo $((2368720 / 4))
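Putting the pieces together, you can count the sequences in a compressed FASTQ file in one line by wrapping the pipe in a command substitution:

Counting sequences in a compressed FASTQ file in one step
echo $(( $(gunzip -c $BI/web/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz | wc -l) / 4 ))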

FASTQ Evaluation Tools

The first order of business after receiving sequencing data should be to check your data quality. This often-overlooked step helps guide the manner in which you process the data, and can prevent many headaches that could require you to redo an entire analysis after they rear their ugly heads.

FastQC

FastQC is a tool that produces a quality analysis report on FASTQ files.


First and foremost, the FastQC "Summary" should generally be ignored. Its "grading scale" (green - good, yellow - warning, red - failed) incorporates assumptions for a particular kind of experiment, and is not applicable to most real-world data. Instead, look through the individual reports and evaluate them according to your experiment type.

The FastQC reports I find most useful are:

  1. The Per base sequence quality report, which can help you decide if sequence trimming is needed before alignment.
  2. The Sequence Duplication Levels report, which helps you evaluate library enrichment / complexity. But note that different experiment types are expected to have vastly different duplication profiles.
  3. The Overrepresented Sequences report, which helps evaluate adapter contamination.
 A couple of other things to note about FastQC
  • For many of its reports, FastQC analyzes only the first 200,000 sequences in order to keep processing and memory requirements down.
  • Some of FastQC's graphs have a 1-100 vertical scale that is tricky to interpret. The 100 is a relative marker for the rest of the graph. For example, sequence duplication levels are plotted relative to the number of unique sequences.

Running FastQC

FastQC is available from the TACC module system on Lonestar. Interactive GUI versions are also available for Windows and Macintosh and can be downloaded from the Babraham Bioinformatics web site.

FastQC creates a sub-directory for each analyzed FASTQ file, so we should first copy the file we want to look at into a local directory. Here's how to run FastQC using the TACC module:

Running FastQC example
# setup
module load fastqc
cds
mkdir fastqc_test
cd fastqc_test
cp $BI/web/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz .

# running the program
fastqc Sample_Yeast_L005_R1.cat.fastq.gz

 
# examine extra options
fastqc -h

Exercise: FastQC results

What did FastQC create?

 Answer
ls -l shows something like this
drwxrwxr-x 4 abattenh G-803889     4096 May 20 22:59 Sample_Yeast_L005_R1.cat_fastqc
-rw-rw-r-- 1 abattenh G-803889   198239 May 20 22:59 Sample_Yeast_L005_R1.cat_fastqc.zip
-rwxr-xr-x 1 abattenh G-803889 51065629 May 20 22:59 Sample_Yeast_L005_R1.cat.fastq.gz

The Sample_Yeast_L005_R1.cat.fastq.gz file is what we analyzed, so FastQC created the other two items. Sample_Yeast_L005_R1.cat_fastqc is a directory (the "d" in "drwxrwxr-x"), so use ls Sample_Yeast_L005_R1.cat_fastqc to see what's in it. Sample_Yeast_L005_R1.cat_fastqc.zip is just a Zipped (compressed) version of the whole directory.

Looking at FastQC output

You can't run a web browser directly from your "dumb terminal" command line environment. The FastQC results have to be placed where a web browser can access them. You should copy the results back to your local machine (via scp or a GUI secure ftp client) to open them in a web browser.
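For example, run from a terminal on your local machine (not at TACC), a command along these lines would copy the report directory back. The username, host name, and remote path shown are placeholders that you would need to adjust:

Copying FastQC results back with scp (run on your local machine; paths are placeholders)
scp -r your_username@your_tacc_host:/path/to/fastqc_test/Sample_Yeast_L005_R1.cat_fastqc .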

If you want to skip that step (we recommend doing it for practice!), we have put a copy of the output at this URL:

FastQC results URL
http://web.corral.tacc.utexas.edu/BioITeam/yeast_stuff/Sample_Yeast_L005_R1.cat_fastqc/fastqc_report.html

Exercise: Should we trim this data?

Based on this FastQC output, should we trim (1) adaptor sequences from the ends of the reads AND/OR (2) low quality regions from the ends of the reads?

 Answer

The Per base sequence quality report does not look good. The data should probably be trimmed (to a constant 40 or 50 bp) before alignment.

Samstat

The samstat program can also produce a quality report for FASTQ files. (We will use it again later to report on aligned sequences in a BAM file.)

This program is not available through the TACC module system, but it is available in our $BI/bin directory (which is on your $PATH because of our common profile). You should be able to just type samstat and see some documentation.

Running samstat on FASTQ files

Running samstat on FASTQ example
# setup
cds
mkdir samstat_test
cd samstat_test
cp $BI/gva_course/mapping/data/SRR030257_1.fastq .

# run the program
samstat SRR030257_1.fastq

This produces a file named SRR030257_1.fastq.html which you need to view in a web browser. We put a copy at this URL:

URL for viewing samstat results
http://loving.corral.tacc.utexas.edu/bioiteam/SRR030257_1.fastq.html

FASTQ Processing Tools

Trimming low quality bases

Low quality base calls from the sequencer can cause an otherwise mappable sequence not to align. There are a number of open source tools that can trim off low quality 3' bases and produce a FASTQ file of the trimmed reads to use as input to the alignment program.

FASTX Toolkit

The FASTX-Toolkit provides a set of command line tools for manipulating fasta and fastq files. The available modules are described on their website. They include a fast fastx_trimmer utility for trimming fastq sequences (and quality score strings) before alignment.

FASTX-Toolkit is available via the TACC module system.

FASTX_toolkit module description
module spider fastx_toolkit
module load fastx_toolkit

Here's an example of how to run fastx_trimmer to trim all input sequences down to 50 bases. By default the program reads its input data from standard input and writes trimmed sequences to standard output:

fastx_trimmer example
gunzip -c $BI/web/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 > trimmed.fq
  • The -l 50 option says that base 50 should be the last base (i.e., trim each sequence down to 50 bases).
  • The -Q 33 option specifies how base qualities on the 4th line of each FASTQ entry are encoded. The FASTX Toolkit is an older program, written at a time when Illumina base qualities were encoded differently. These days Illumina base qualities follow the Sanger FASTQ standard (Phred score + 33 to make an ASCII character).

Exercise: compressing the fastx_trimmer output

How would you tell fastx_trimmer to compress (gzip) its output file?

 Hint

Type fastx_trimmer -h to see program documentation

 Answer

You could supply the -z option like this:

fastx_trimmer example
gunzip -c $BI/web/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 -z > trimmed.fq.gz

Or you could gzip the output yourself:

fastx_trimmer example
gunzip -c $BI/web/yeast_stuff/Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 | gzip > trimmed.fq.gz

Exercise: fastx toolkit programs

What other fastx manipulation programs are part of the fastx toolkit?

 Hint

Type fastx_ then tab to see their names
See all the programs like this:

fastx toolkit programs
ls $TACC_FASTX_BIN

Adapter trimming

Data from RNA-seq or other library prep methods that produce very short fragments can cause problems for moderately long (100-250 base) reads, because the 3' end of the sequence can read through into the 3' adapter at a variable position, or even past the end of the fragment. This 3' adapter contamination can cause the "real" insert sequence not to align, because the adapter sequence at the 3' end of the read does not correspond to the reference genome sequence.

Unlike general fixed-length trimming (e.g. trimming 100 bp sequences to 40 or 50 bp), adapter trimming removes differing numbers of 3' bases depending on where the adapter sequence is found.

The GSAF website describes the flavors of Illumina adapter and barcode sequence in more detail: https://utexas.atlassian.net/wiki/display/GSAF/Illumina+-+all+flavors

Cutadapt

The cutadapt program is an excellent tool for removing adapter contamination. The program is not available through TACC's module system but we've installed a copy in our $BI/bin directory.

The most common application of cutadapt is to remove adapter contamination from small RNA library sequence data, so that's what we'll show here. Note that this step is increasingly needed for genomic sequencing as well, for example with 250 base MiSeq reads.

Running cutadapt on small RNA library data

When you run cutadapt you give it the adapter sequence to trim, and this sequence is different for R1 and R2 reads (a complete example invocation is sketched after the notes below).

cutadapt command for R1 sequences
cutadapt -m 22 -O 10 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
cutadapt command for R2 sequences
cutadapt -m 22 -O 10 -a TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA

Notes:

  • The -m 22 option says to discard any sequence that is smaller than 22 bases after trimming. This avoids problems trying to map very short, highly ambiguous sequences.
  • The -O 10 option says not to trim 3' adapter sequences unless at least the first 10 bases of the adapter are seen at the 3' end of the read. This prevents trimming short 3' sequences that just happen by chance to match the first few adapter sequence bases.
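As a sketch of a complete invocation (the input and output file names here are hypothetical; recent versions of cutadapt let you name a gzip-compressed output file with -o, while very old versions may require redirecting standard output instead):

Example cutadapt commands for R1 and R2 files (file names are placeholders)
cutadapt -m 22 -O 10 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC -o my_R1.trimmed.fastq.gz my_R1.fastq.gz
cutadapt -m 22 -O 10 -a TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA -o my_R2.trimmed.fastq.gz my_R2.fastq.gz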
 The gory details on the *-a* adapter sequence argument

Please refer to https://utexas.atlassian.net/wiki/display/GSAF/Illumina+-+all+flavors for Illumina library adapter layout.

The top strand, 5' to 3', of a sequenced library fragment looks like this.

Illumina library read layout
<P5 capture> <indexRead2> <Read 1 primer> [insert] <Read 2 primer> <indexRead1> <P7 capture>

The -a argument to cutadapt is documented as the "sequence of adapter that was ligated to the 3' end". So we care about the <Read 2 primer> for R1 reads, and the <Read 1 primer> for R2 reads.

The "contaminent" for adapter trimming will be the <Read 2 primer> for R1 reads. There is only one Read 2 primer:

Read 2 primer, 5' to 3', used as R1 sequence adapter
AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC

The "contaminant" for adapter trimming will be the <Read 1 primer> for R2 reads. However, there are three different Read 1 primers, depending on library construction:

Read 1 primer depends on library construction
TCTACACGTTCAGAGTTCTACAGTCCGACGATCA    # small RNA sequencing primer site
CAGGTTCAGAGTTCTACAGTCCGACGATCA        # "other"
TCTACACTCTTTCCCTACACGACGCTCTTCCGATCT  # TruSeq Read 1 primer site. This is the RC of the R2 adapter

Since R2 reads are the reverse complement of R1 reads, the R2 adapter contaminant will be the RC of the Read 1 primer used.

For ChIP-seq libraries, where reads come from both DNA strands, the TruSeq Read 1 primer is always used.
Since it is the RC of the Read 2 primer, its RC is just the Read 2 primer back.
Therefore, for ChIP-seq libraries only one cutadapt adapter sequence is needed, for both R1 and R2 reads:

Cutadapt adapter sequence for ChIP-seq libraries, both R1 and R2 reads
cutadapt -a GATCGGAAGAGCACACGTCTGAACTCCAGTCAC

For RNA-seq libraries, we use the small RNA sequencing primer as the Read 1 primer.
The adapter contaminant for R2 reads is then the RC of this primer:

Small RNA library Read 1 primer, 5' to 3', used as R2 sequence adapter
TCTACACGTTCAGAGTTCTACAGTCCGACGATCA    # R1 primer - small RNA sequencing Read 1 primer site, 5' to 3'
TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA    # R2 adapter contaminant (RC of the small RNA sequencing Read 1 primer)
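If you ever want to double-check one of these reverse complements yourself, the standard rev and tr utilities make a quick sanity check. This is just a sketch; the input shown is the small RNA sequencing Read 1 primer from above:

Reverse complementing a sequence on the command line
echo TCTACACGTTCAGAGTTCTACAGTCCGACGATCA | rev | tr 'ACGT' 'TGCA'
# prints TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA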

Flexbar

Flexbar provides a flexible suite of commands for demultiplexing barcoded reads and removing adapter sequences or low quality regions from the ends of reads.

Example of trimming adaptor sequence from right end of reads
flexbar -n 1 --adapters adaptors.fna --source example.fastq --target example.ar --format fastq-sanger --adapter-threshold 2 --adapter-min-overlap 6 --adapter-trim-end RIGHT_TAIL
Example adaptors.fna file
>adaptor1
AATGATACGGCGACCACCGAGATCTACACTCTTTCCCTACACGACGCTCTTCCGATCT
>adaptor2
AGATCGGAAGAGCACACGTCTGAACTCCAGTCACNNNNNNNNATCTCGTATGCCGTCTTCTGCTTG
>adaptor1_RC
AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATT
>adaptor2_RC
CAAGCAGAAGACGGCATACGAGATNNNNNNNNGTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT

Note that flexbar only searches for the sequences given (with options to allow a given number of mismatches), NOT their reverse complements, so you must provide any reverse complement sequences yourself (as the example adaptors.fna file above does).

Trimmomatic

Trimmomatic offers similar options to Flexbar, with the potential benefit that many Illumina adaptor sequences are already built in. It is available here.
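As a rough sketch of what a paired-end Trimmomatic run looks like (the jar path, adapter file, input/output file names, and step settings here are placeholders; consult the Trimmomatic manual for values appropriate to your data):

Example paired-end Trimmomatic command (paths and file names are placeholders)
java -jar trimmomatic.jar PE -phred33 \
  my_R1.fastq.gz my_R2.fastq.gz \
  my_R1.trimmed.fastq.gz my_R1.unpaired.fastq.gz \
  my_R2.trimmed.fastq.gz my_R2.unpaired.fastq.gz \
  ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 SLIDINGWINDOW:4:20 MINLEN:22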

More Example Data

See if you can figure out what's wrong with these data sets (copy them to your $SCRATCH directory before analyzing them) and then process them to get rid of the problem(s). If you're very ambitious, you could also map them to the reference genomes and perform variant calling before and after cleaning them up to see how the results change. Each file has a different problem.

Example #1: Single-end Illumina MiSeq data for E. coli

Example read and reference files #1
$BI/gva_course/read_processing/JJM104_TAAGGCGA-TAGATCGC_L001_R1_001.fastq.gz
$BI/gva_course/read_processing/REL606.fna
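To get started with this data set, you might set up a working directory something like this (the directory name is arbitrary), then run FastQC as described above:

Setting up a working directory for Example #1 (directory name is arbitrary)
cds
mkdir read_processing_test
cd read_processing_test
cp $BI/gva_course/read_processing/JJM104_TAAGGCGA-TAGATCGC_L001_R1_001.fastq.gz .
cp $BI/gva_course/read_processing/REL606.fna .

module load fastqc
fastqc JJM104_TAAGGCGA-TAGATCGC_L001_R1_001.fastq.gz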
 What's wrong with this data?
This

 

Example #2: Paired-end Illumina Genome Analyzer IIx data for E. coli

Example read and reference files #2
$BI/gva_course/read_processing/61FTVAAXX_2_R1_ZDB172.fastq.gz
$BI/gva_course/read_processing/61FTVAAXX_2_R2_ZDB172.fastq.gz
$BI/gva_course/read_processing/REL606.fna
 What's wrong with this data?
There was some sort of problem during library prep that highly biased the beginning of reads to "T". Unfortunately, post-processing can't help with this one. The read sequences are fine, but the coverage across the genome is so uneven that many regions of the genome were not sampled (have zero coverage) even though the volume of sequencing data was very high for this microbial genome. The facility had to do a new library prep and re-sequence to correct this issue.