2022 Pre-processing raw sequences
Before you start the alignment and analysis processes, it is useful to perform some initial quality checks on your raw data. You may also need to pre-process the sequences to trim them or remove adapters. Here we will assume you have paired-end data from GSAF's Illumina HiSeq sequencer.
Reservations
Use our summer school reservation (CoreNGSday3) when submitting batch jobs to get higher priority on the ls6 normal queue today:
sbatch --reservation=CoreNGSday3 <batch_file>.slurm
idev -m 120 -N 1 -A OTH21164 -r CoreNGSday3
- 1 FASTQ Quality Assurance tools
- 1.1 FastQC
- 1.1.1 Running FastQC
- 1.1.1.1 Start an idev session
- 1.1.1.2 Running fastqc on a FASTQ file
- 1.1.2 Looking at FastQC output
- 1.2 Using MultiQC to consolidate multiple QC reports
- 1.2.1 Start an idev session
- 2 Trimming sequences
- 3 Adapter trimming with cutadapt
- 3.1.1 Start an idev session
- 3.1.2 cutadapt command for R1 sequences (GSAF RNA library)
- 3.1.3 cutadapt command for R2 sequences (GSAF RNA library)
- 3.1.4 Illumina library read layout
- 3.1.5 Read 2 primer, 5' to 3', used as R1 sequence adapter
- 3.1.6 Read 1 primer depends on library construction
- 3.1.7 Cutadapt adapter sequence for ChIP-seq lib
- 3.1.8 Small RNA library Read 1 primer, 5' to 3', used as R2 sequence adapter
- 3.2 cutadapt example
- 3.3 paired-end data considerations
- 3.4 running cutadapt in a batch job
FASTQ Quality Assurance tools
The first order of business after receiving sequencing data should be to check your data quality. This often-overlooked step helps guide the manner in which you process the data, and can prevent many headaches.
FastQC
FastQC is a tool that produces a quality analysis report on FASTQ files.
Useful links:
FastQC report for a Good Illumina dataset
FastQC report for a Bad Illumina dataset
First and foremost, the FastQC "Summary" should generally be ignored. Its "grading scale" (green - good, yellow - warning, red - failed) incorporates assumptions for a particular kind of experiment, and is not applicable to most real-world data. Instead, look through the individual reports and evaluate them according to your experiment type.
The FastQC reports I find most useful, and why:
Should I trim low quality bases?
consult the Per base sequence quality report
based on all sequences
Do I need to remove adapter sequences?
consult the Adapter Content report
Do I have other contamination?
consult the Overrepresented Sequences report
based on the 1st 100,000 sequences, trimmed to 75bp
How complex is my library?
consult the Sequence Duplication Levels report
but remember that different experiment types are expected to have vastly different duplication profiles
For many of its reports, FastQC analyzes only the first ~100,000 sequences in order to keep processing and memory requirements down. Consult the Online documentation for each FastQC report for full details.
Running FastQC
FastQC is available as part of BioContainers on ls6. To make it available:
# Load the main BioContainers module then load the fastqc module
module load biocontainers # may take a while
module load fastqc
It has a number of options (see fastqc --help | more) but can be run very simply with just a FASTQ file as its argument.
Running fastqc on a FASTQ file
# make sure you're in your $SCRATCH/core_ngs/fastq_prep directory
cds
cd core_ngs/fastq_prep
fastqc small.fq
Exercise: What did FastQC create?
Let's unzip the .zip file and see what's in it.
unzip small_fastqc.zip
What was created?
Looking at FastQC output
You can't run a web browser directly from your "dumb terminal" command line environment. The FastQC results have to be placed where a web browser can access them. One way to do this is to copy the results back to your laptop, for example by using scp from your computer (read more at Copying files from TACC to your laptop).
For convenience, we put an example FastQC report at this URL:
https://web.corral.tacc.utexas.edu/BioinformaticsResource/CoreNGS/yeast_stuff/Sample_Yeast_L005_R1.cat_fastqc/fastqc_report.html
Exercise: Based on this FastQC output, should we trim this data?
Newer versions of FastQC have slightly different report formats. See this example:
https://web.corral.tacc.utexas.edu/BioinformaticsResource/CoreNGS/reports/wcaar_mqc_report.html
Using MultiQC to consolidate multiple QC reports
FastQC reports are all well and good, but what if you have dozens of samples? It quickly becomes tedious to have to look through all the separate FastQC reports, including separate R1 and R2 reports for paired end datasets.
The MultiQC tool helps address this issue. Once FastQC reports have been generated, it can scan them and create a consolidated report from all the individual reports.
What's even cooler is that MultiQC can also consolidate reports from other bioinformatics tools (e.g. bowtie2 aligner statistics, samtools statistics, cutadapt, Picard, and many more). And if your favorite tool is not known by MultiQC, you can configure custom reports fairly easily. For more information, see this recent Byte Club tutorial on Using MultiQC.
Here we're just going to create a MultiQC report for two paired-end ATAC-seq datasets – 4 FASTQ files total. First stage the data:
mkdir -p $SCRATCH/core_ngs/multiqc/fqc.atacseq
cd $SCRATCH/core_ngs/multiqc/fqc.atacseq
cp $CORENGS/multiqc/fqc.atacseq/*.zip .
You should see these 4 files in your $SCRATCH/core_ngs/multiqc/fqc.atacseq directory:
50knuclei_S56_L007_R1_001_fastqc.zip 5knuclei_S77_L008_R1_001_fastqc.zip
50knuclei_S56_L007_R2_001_fastqc.zip 5knuclei_S77_L008_R2_001_fastqc.zip
Now make the BioContainers MultiQC accessible in your environment.
# Load the main BioContainers module if you have not already
module load biocontainers # may take a while
# Load the multiqc module and ask for its usage information
module load multiqc
multiqc --help
Even though multiqc has many options, it is quite easy to create a basic report by just pointing it to the directory where individual reports are located:
cd $SCRATCH/core_ngs/multiqc
multiqc fqc.atacseq
Exercise: How many reports did multiqc find?
Exercise: What was created by running multiqc?
You can see the resulting MultiQC report here: https://web.corral.tacc.utexas.edu/BioinformaticsResource/CoreNGS/reports/atacseq/multiqc_report.html.
And an example of a MultiQC report that includes both standard and custom plots is this is the Tag-Seq post-processing MultiQC report produced by the Bioinformatics Consulting Group: https://web.corral.tacc.utexas.edu/BioinformaticsResource/CoreNGS/reports/mqc_tagseq_trim_JA21030_SA21045_mouse.html
Trimming sequences
There are two main reasons you may want to trim your sequences:
As a quick way to remove 3' adapter contamination, when extra bases provide little additional information
For example, 75+ bp ChIP-seq reads – 50 bases are more than enough for a good mapping, and trimming to 50 is easier than adapter removal, especially for paired end data.
You would not choose this approach for RNA-seq data, where 3' bases may map to a different exon, and that is valuable information.
Instead you would specifically remove adapter sequences.
Low quality base calls from the sequencer can affect some programs
This is an issue with sequencing for genome or transcriptome assembly.
Aligners such as bwa and bowtie2 seem to do fine with a few low quality bases, soft clipping them if necessary.
There are a number of open source tools that can trim off 3' bases and produce a FASTQ file of the trimmed reads to use as input to the alignment program.
FASTX Toolkit
The FASTX Toolkit provides a set of command line tools for manipulating both FASTA and FASTQ files. The available modules are described on their website. They include a fast fastx_trimmer utility for trimming FASTQ sequences (and quality score strings) before alignment.
FASTX Toolkit is available as a BioContainers module.
module load biocontainers # takes a while
module spider fastx
module load fastxtools
Here's an example of how to run fastx_trimmer to trim all input sequences down to 50 bases. By default the program reads its input data from standard input and writes trimmed sequences to standard output:
Trimming FASTQ sequences to 50 bases with fastx_trimmer
# make sure you're in your $SCRATCH/core_ngs/fastq_prep directory
cd $SCRATCH/core_ngs/fastq_prep
zcat Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 > trim50_R1.fq
The -l 50 option says that base 50 should be the last base (i.e., trim down to 50 bases)
The -Q 33 option specifies how base Qualities on the 4th line of each FASTQ entry are encoded.
The FASTX Toolkit is an older program, written at a time when Illumina base qualities were encoded differently, so its default does not work for modern FASTQ files.
These days Illumina base qualities follow the Sanger FASTQ standard (Phred score + 33 to make an ASCII character).
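As a quick illustration of the Phred+33 (Sanger) encoding described above, you can convert between a Phred score and its ASCII quality character directly in the shell. This is just a sketch; the score 40 and the character "I" are example values:

```shell
# Encode: Phred score 40 + 33 = ASCII 73, which is the character "I"
awk 'BEGIN { printf "%c\n", 40 + 33 }'

# Decode: the character "I" is ASCII 73; 73 - 33 = Phred score 40
echo $(( $(printf '%d' "'I") - 33 ))
```

So a run of "I" characters on the 4th line of a FASTQ entry means consistently high (Q40) base quality.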
Exercise: compressing fastx_trimmer output
How would you tell fastx_trimmer to compress (gzip) its output file?
Exercise: other fastx toolkit programs
What other FASTQ manipulation programs are part of the FASTX Toolkit?
The FASTX Toolkit also has programs that work on FASTA files. To see them, type fasta_ then tab twice (completion) to see their names.
Adapter trimming with cutadapt
Data from RNA-seq or other library prep methods that result in short fragments can cause problems with moderately long (50-100bp) reads, since the 3' end of the sequence can read through into the 3' adapter at a variable position. This 3' adapter contamination can cause the "real" insert sequence not to align because the adapter sequence does not correspond to the bases at the 3' end of the reference genome sequence.
Unlike general fixed-length trimming (e.g. trimming 100 bp sequences to 50 bp), specific adapter trimming removes differing numbers of 3' bases depending on where the adapter sequence is found.
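To make the "differing numbers of 3' bases" point concrete, here is a toy illustration – this is not how cutadapt works internally (it tolerates mismatches and partial matches), and the two reads and the 13-base adapter prefix are made-up examples. An exact adapter match starts at a different position in each read, so a different amount is trimmed:

```shell
ADAPTER=AGATCGGAAGAGC   # made-up example: first 13 bases of an adapter
printf '%s\n' \
  ACGTACGTACGTAGATCGGAAGAGCTTTT \
  ACGTAGATCGGAAGAGCACGTACGTACGT |
awk -v a="$ADAPTER" '{
  p = index($0, a)                       # where the adapter starts, or 0
  print (p ? substr($0, 1, p - 1) : $0)  # keep only the insert bases
}'
```

The first read keeps 12 insert bases, the second only 4 – the trim point depends entirely on where the adapter is found.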
You must tell any adapter trimming program what your R1 and R2 adapters look like.
The GSAF website describes the flavors of Illumina adapter and barcode sequences in more detail: Illumina - all flavors (use with caution: this page is outdated, but it is useful for a basic understanding of the adapters; GSAF now primarily uses UDIs for all projects).
The cutadapt program, available in BioContainers, is an excellent tool for removing adapter contamination.
module load biocontainers
module spider cutadapt
module load cutadapt
cutadapt --help
A common application of cutadapt is to remove adapter contamination from RNA library sequence data. Here we'll show that for some small RNA libraries sequenced by GSAF, using their documented small RNA library adapters.
When you run cutadapt you give it the adapter sequence to trim, and this is different for R1 and R2 reads. Here's what the options look like (without running it on our files yet).
cutadapt command for R1 sequences (GSAF RNA library)
cutadapt -m 22 -O 4 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC <fastq_file>
cutadapt command for R2 sequences (GSAF RNA library)
cutadapt -m 22 -O 4 -a TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA <fastq_file>
Notes:
The -m 22 option says to discard any sequence that is smaller than 22 bases (minimum) after trimming.
This avoids problems trying to map very short, highly ambiguous sequences.
The -O 4 (Overlap) option says not to trim 3' adapter sequences unless at least the first 4 bases of the adapter are seen at the 3' end of the read.
This prevents trimming short 3' sequences that just happen by chance to match the first few adapter sequence bases.
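The -m length filter can be pictured with a similar toy sketch (again, not cutadapt's actual implementation – it operates on full FASTQ records, not bare sequences): after trimming, keep only sequences that are still at least 22 bases long.

```shell
# Toy version of a minimum-length filter on bare sequences:
# a 24-base read passes, a 4-base leftover (likely primer dimer) is dropped
printf '%s\n' ACGTACGTACGTACGTACGTACGT ACGT |
awk 'length($0) >= 22'
```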
Figuring out which adapter sequence to use when can be tricky. Your sequencing provider can tell you what adapters they used to prep your libraries. For GSAF's adapter layout, please refer to Illumina - all flavors (use with caution: this page is outdated, but it is useful for a basic understanding of the adapters; GSAF now primarily uses UDIs for all projects) (you may want to read all the "gory details" below later).
Exercise: other cutadapt options
The cutadapt program has many options. Let's explore a few.
How would you tell cutadapt to trim trailing N's?
How would you control the accuracy (error rate) of cutadapt's matching between the adapter sequences and the FASTQ sequences?
Suppose you are processing 100 bp reads with 30 bp adapters. By default, how many mismatches between the adapter and a sequence will be tolerated?
How would you require a more stringent matching (i.e., allowing fewer mismatches)?
cutadapt example
Let's run cutadapt on some real human miRNA data.
First, stage the data we want to use. This data is from a small RNA library where the expected insert size is around 15-25 bp.
Setup for cutadapt on miRNA FASTQ
mkdir -p $SCRATCH/core_ngs/fastq_prep
cd $SCRATCH/core_ngs/fastq_prep
cp $CORENGS/human_stuff/Sample_H54_miRNA_L004_R1.cat.fastq.gz .
cp $CORENGS/human_stuff/Sample_H54_miRNA_L005_R1.cat.fastq.gz .
Exercise: How many reads are in these files? Is it single end or paired end data?
Exercise: How long are the reads?
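Both exercises above hinge on one fact: a FASTQ record is exactly 4 lines (header, sequence, "+" separator, qualities), so read count = line count / 4. A self-contained toy illustration (the 3-record file here is made up):

```shell
# Build a toy FASTQ with 3 records (4 lines each = 12 lines)
printf '@read%d\nACGT\n+\nIIII\n' 1 2 3 > toy.fq

# Read count = line count / 4
echo $(( $(wc -l < toy.fq) / 4 ))    # prints 3
```

For the real (gzipped) files you would count lines with zcat <file> | wc -l instead.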
Adapter trimming is a rather slow process, and these are large files. So to start with we're going to create a smaller FASTQ file to work with.
zcat Sample_H54_miRNA_L004_R1.cat.fastq.gz | head -2000 > miRNA_test.fq
Now execute cutadapt like this:
Cutadapt batch command for R1 FASTQ
cutadapt -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC miRNA_test.fq 2> miRNA_test.cuta.log | gzip > miRNA_test.cutadapt.fq.gz
Notes:
Here's one of those cases where you have to be careful about separating standard output and standard error.
cutadapt writes its FASTQ output to standard output, and writes summary information to standard error.
In this command we first redirect standard error to a log file named miRNA_test.cuta.log using the 2> syntax, in order to capture cutadapt diagnostics.
Then standard output is piped to gzip, whose output is then redirected to a new compressed FASTQ file.
(Read more about Standard I/O streams)
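The same redirection pattern can be demonstrated with a self-contained toy command that, like cutadapt, writes data to standard output and diagnostics to standard error (demo.log and demo.gz are made-up file names):

```shell
# A toy "tool": data goes to stdout, a diagnostic message to stderr
( echo "data line"; echo "diagnostic line" >&2 ) 2> demo.log | gzip > demo.gz

zcat demo.gz   # the data, compressed:  prints "data line"
cat demo.log   # the captured stderr:   prints "diagnostic line"
```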
You should see a miRNA_test.cuta.log log file when the command completes. How many lines does it have?
Take a look at the first 15 lines.
It will look something like this:
cutadapt log file
This is cutadapt 1.18 with Python 3.7.1
Command line parameters: -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC miRNA_test.fq
Processing reads on 1 core in single-end mode ...
Finished in 0.06 s (113 us/read; 0.53 M reads/minute).
=== Summary ===
Total reads processed: 500
Reads with adapters: 492 (98.4%)
Reads that were too short: 64 (12.8%)
Reads written (passing filters): 436 (87.2%)
Total basepairs processed: 50,500 bp
Total written (filtered): 10,909 bp (21.6%)
Notes:
The Total reads processed line tells you how many sequences were in the original FASTQ file.
Reads with adapters tells you how many of the reads you gave it had at least part of an adapter sequence that was trimmed.
Here adapter was found in nearly all (98.4%) of the reads. This makes sense given this is a short (15-25 bp) RNA library.
The Reads that were too short line tells you how many sequences were filtered out because they were shorter than our minimum length (20) after adapter removal (these may have been primer dimers).
Here ~13% of the original sequences were removed, which is reasonable.
Reads written (passing filters) tells you the total number of reads that were written out by cutadapt
These are reads that were at least 20 bases long after adapters were removed
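The summary numbers in the log are internally consistent, and it's worth knowing how they relate: reads written = total reads minus reads discarded as too short, and the percentage is computed against the total. A quick arithmetic check using the values from the example log:

```shell
# From the example log: 500 total reads, 64 too short after trimming
echo $(( 500 - 64 ))                                # reads written: 436
awk 'BEGIN { printf "%.1f%%\n", 436 / 500 * 100 }'  # prints "87.2%"
```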