Before you start the alignment and analysis processes, it is useful to perform some initial quality checks on your raw data. You may also need to pre-process the sequences to trim them or remove adapters. Here we will assume you have paired-end data from one of GSAF's Illumina sequencers.
Reservations
Use today's summer school reservation (core-ngs-class-0604) when submitting batch jobs to get higher priority on the ls6 normal queue.
# Request a 120 minute idev node on the normal queue using our reservation
idev -m 120 -N 1 -A OTH21164 -r core-ngs-class-0604   # Wednesday
idev -m 120 -N 1 -A OTH21164 -r core-ngs-class-0605   # Thursday

# Request a 120 minute interactive node on the development queue
idev -m 120 -N 1 -A OTH21164 -p development
# Using our reservation
sbatch --reservation=core-ngs-class-0604 <batch_file>.slurm
# or this on Thursday:
sbatch --reservation=core-ngs-class-0605 <batch_file>.slurm
Note that the reservation name (core-ngs-class-0604) is different from the TACC allocation/project for this class, which is OTH21164.
The first order of business after receiving sequencing data should be to check your data quality. This often-overlooked step helps guide the manner in which you process the data, and can prevent many headaches.
FastQC is a tool that produces a quality analysis report on FASTQ files.
Useful links:
First and foremost, the FastQC "Summary" should generally be ignored. Its "grading scale" (green - good, yellow - warning, red - failed) incorporates assumptions for a particular kind of experiment, and is not applicable to most real-world data. Instead, look through the individual reports and evaluate them according to your experiment type.
The FastQC reports I find most useful, and why:
For many of its reports, FastQC analyzes only the first ~100,000 sequences in order to keep processing and memory requirements down. Consult the Online documentation for each FastQC report for full details.
Make sure you're in an idev session. If you're in an idev session, the hostname command will display a name like c455-021.ls6.tacc.utexas.edu. But if you're on a login node the hostname will be something like login2.ls6.tacc.utexas.edu.
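This check can also be scripted. A minimal sketch, using two hypothetical hostnames for illustration (the classification logic is the point, not these specific names):

```shell
# Classify hostnames by prefix: login nodes start with "login",
# while compute nodes have names like c455-021.
for h in login2.ls6.tacc.utexas.edu c455-021.ls6.tacc.utexas.edu; do
  case "$h" in
    login*) echo "$h: login node - start an idev session first" ;;
    *)      echo "$h: compute node - ok to run programs here" ;;
  esac
done
```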
If you're on a login node, start an idev session like this:
idev -m 120 -N 1 -A OTH21164 -r core-ngs-class-0604   # Wednesday
idev -m 120 -N 1 -A OTH21164 -r core-ngs-class-0605   # Thursday

# or, without the reservation
idev -m 120 -N 1 -A OTH21164 -p development
FastQC is available as part of BioContainers on ls6. To make it available:
# Load the main BioContainers module, then load the fastqc module
module load biocontainers
module load fastqc
It has a number of options (see fastqc --help | more) but can be run very simply with just a FASTQ file as its argument.
# make sure you're in your $SCRATCH/core_ngs/fastq_prep directory
cds
cd core_ngs/fastq_prep

fastqc small.fq
Exercise: What did FastQC create?
Let's unzip the .zip file and see what's in it.
unzip small_fastqc.zip
What was created?
You can't run a web browser directly from your "dumb terminal" command line environment. The FastQC results have to be placed where a web browser can access them. One way to do this is to copy the results back to your laptop, for example by using scp from your computer (read more at Copying files from TACC to your laptop).
For convenience, we put an example FastQC report at this URL:
https://web.corral.tacc.utexas.edu/BioinformaticsResource/CoreNGS/yeast_stuff/Sample_Yeast_L005_R1.cat_fastqc/fastqc_report.html
Exercise: Based on this FastQC output, should we trim this data?
FastQC reports are all well and good, but what if you have dozens of samples? It quickly becomes tedious to have to look through all the separate FastQC reports, including separate R1 and R2 reports for paired end datasets.
The MultiQC tool helps address this issue. Once FastQC reports have been generated, it can scan them and create a consolidated report from all the individual reports.
What's even cooler is that MultiQC can also consolidate reports from other bioinformatics tools (e.g. bowtie2 aligner statistics, samtools statistics, cutadapt, Picard, and many more). And if your favorite tool is not known by MultiQC, you can configure custom reports fairly easily. For more information, see this recent Byte Club tutorial on Using MultiQC.
Here we're just going to create a MultiQC report for two paired-end ATAC-seq datasets – 4 FASTQ files total. First stage the data:
mkdir -p $SCRATCH/core_ngs/multiqc/fqc.atacseq
cd $SCRATCH/core_ngs/multiqc/fqc.atacseq
cp $CORENGS/multiqc/fqc.atacseq/*.zip .
You should see these 4 files in your $SCRATCH/core_ngs/multiqc/fqc.atacseq directory:
50knuclei_S56_L007_R1_001_fastqc.zip
50knuclei_S56_L007_R2_001_fastqc.zip
5knuclei_S77_L008_R1_001_fastqc.zip
5knuclei_S77_L008_R2_001_fastqc.zip
Now make the BioContainers MultiQC accessible in your environment.
# Load the main BioContainers module if you have not already
module load biocontainers   # may take a while

# Load the multiqc module and ask for its usage information
module load multiqc
multiqc --help | more
Even though multiqc has many options, it is quite easy to create a basic report by just pointing it to the directory where individual reports are located:
cd $SCRATCH/core_ngs/multiqc
multiqc fqc.atacseq
Exercise: How many reports did multiqc find?
Exercise: What was created by running multiqc?
You can see the resulting MultiQC report here: https://web.corral.tacc.utexas.edu/BioinformaticsResource/CoreNGS/reports/atacseq/multiqc_report.html.
An example of a MultiQC report that includes both standard and custom plots is the Tag-Seq post-processing MultiQC report produced by the Bioinformatics Consulting Group: https://web.corral.tacc.utexas.edu/BioinformaticsResource/CoreNGS/reports/mqc_tagseq_trim_JA21030_SA21045_mouse.html
There are two main reasons you may want to trim your sequences:
There are a number of open source tools that can trim off 3' bases and produce a FASTQ file of the trimmed reads to use as input to the alignment program.
The FASTX Toolkit provides a set of command line tools for manipulating both FASTA and FASTQ files. The available modules are described on their website. They include a fast fastx_trimmer utility for trimming FASTQ sequences (and quality score strings) before alignment.
FASTX Toolkit is available as a BioContainers module.
module load biocontainers   # takes a while
module spider fastx
module load fastxtools
Here's an example of how to run fastx_trimmer to trim all input sequences down to 50 bases.
Where does fastx_trimmer read its input from? And where does it write its output? Ask the program for its usage.
# will fastx_trimmer give us usage information?
fastx_trimmer --help

# no, it wants you to use the -h option to ask for help:
fastx_trimmer -h
The usage line it prints is its help information:
fastx_trimmer [-h] [-f N] [-l N] [-t N] [-m MINLEN] [-z] [-v] [-i INFILE] [-o OUTFILE]
Because the [-i INFILE] [-o OUTFILE] options are shown in brackets [ ], reading from a file and writing to a file are optional. That means that by default the program reads its input data from standard input and writes trimmed sequences to standard output:
# make sure you're in your $SCRATCH/core_ngs/fastq_prep directory
cd $SCRATCH/core_ngs/fastq_prep

zcat Sample_Yeast_L005_R1.cat.fastq.gz | fastx_trimmer -l 50 -Q 33 \
  > trim50_R1.fq
Exercise: compressing fastx_trimmer output
How would you tell fastx_trimmer to compress (gzip) its output file?
Exercise: other fastx toolkit programs
What other FASTQ manipulation programs are part of the FASTX Toolkit?
The FASTX Toolkit also has programs that work on FASTA files. To see them, type fasta_ then tab twice (completion) to see their names.
Data from RNA-seq or other library prep methods that produce short fragments can cause problems with moderately long (50-100 bp) reads, since the sequencer can read into (or even through) the 3' adapter at different read offsets. This 3' adapter contamination can prevent the "real" insert sequence from aligning, because the adapter sequence does not correspond to the bases at the 3' end of the reference genome sequence.
Unlike general fixed-length trimming (e.g. trimming 100 bp sequences to 50 bp), specific adapter trimming removes differing numbers of 3' bases depending on where the adapter sequence is found.
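The idea can be illustrated with plain shell string matching. This is a conceptual sketch only, with made-up sequences; real trimmers like cutadapt also handle mismatches, partial adapters at the end of a read, and the quality-score lines:

```shell
# Conceptual sketch of 3' adapter trimming: delete everything from the
# first occurrence of the adapter onward. Reads without the adapter
# are left unchanged. All sequences here are invented examples.
adapter=AGATCGGAAGAGC
for seq in ACGTACGTAGATCGGAAGAGCTTTT GGGGCCCCAGATCGGAAGAGC TTTTAAAA; do
  echo "$seq -> ${seq%%${adapter}*}"
done
```

Note how the number of bases removed differs per read, depending on where the adapter starts.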
You must tell any adapter trimming program what your R1 and R2 adapters look like.
The GSAF website describes the flavors of Illumina adapter and barcode sequences in more detail: /wiki/spaces/GSAF/pages/38735668.
The cutadapt program, available in BioContainers, is an excellent tool for removing adapter contamination.
module load biocontainers
module spider cutadapt
module load cutadapt

cutadapt --help | more   # or: cutadapt --help | less
A common application of cutadapt is to remove adapter contamination from RNA library sequence data. Here we'll show that for some small RNA libraries sequenced by GSAF, using their documented small RNA library adapters.
When you run cutadapt you give it the adapter sequence to trim, and the adapter sequence is different for R1 and R2 reads. Here's what the options look like (without running it on our files yet).
cutadapt -m 22 -O 4 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC <fastq_file>
cutadapt -m 22 -O 4 -a TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA <fastq_file>
Notes:
Figuring out which adapter sequence to use when can be tricky. Your sequencing provider can tell you what adapters they used to prep your libraries. For GSAF's adapter layout, please refer to /wiki/spaces/GSAF/pages/38735668 (you may want to read all the "gory details" below later).
Exercise: other cutadapt options
The cutadapt program has many options. Let's explore a few.
How would you tell cutadapt to trim trailing N's?
How would you control the accuracy (error rate) of cutadapt's matching between the adapter sequences and the FASTQ sequences?
Suppose you are processing 100 bp reads with 30 bp adapters. By default, how many mismatches between the adapter and a sequence will be tolerated?
How would you require a more stringent matching (i.e., allowing fewer mismatches)?
Let's run cutadapt on some real human miRNA (micro-RNA) data.
First, stage the data we want to use. This data is from a small RNA library where the expected insert size is around 15-25 bp.
mkdir -p $SCRATCH/core_ngs/fastq_prep
cd $SCRATCH/core_ngs/fastq_prep

cp $CORENGS/human_stuff/Sample_H54_miRNA_L004_R1.cat.fastq.gz .
cp $CORENGS/human_stuff/Sample_H54_miRNA_L005_R1.cat.fastq.gz .
Exercise: How many reads are in these files? Is it single end or paired end data?
Exercise: How long are the reads?
Adapter trimming is a rather slow process, and these are large files. So to start with we're going to create a smaller FASTQ file to work with.
# Remember, FASTQ files have 4 lines per read
zcat Sample_H54_miRNA_L004_R1.cat.fastq.gz | head -2000 > miRNA_test.fq
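Since each FASTQ read occupies 4 lines, head -2000 yields 500 reads. A quick way to count reads in any FASTQ file is to divide its line count by 4. A sketch using a tiny made-up file; substitute miRNA_test.fq in practice:

```shell
# Count reads in a FASTQ file: line count divided by 4.
# tiny.fq is a stand-in holding 2 reads of invented data.
printf '@r1\nACGTACGT\n+\nIIIIIIII\n@r2\nTTTTCCCC\n+\nIIIIIIII\n' > tiny.fq
echo $(( $(wc -l < tiny.fq) / 4 ))   # prints 2
```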
Now execute cutadapt like this. Note that the backslash ( \ ) here is just a line continuation character so that we can split a long command onto multiple lines to make it more readable.
cd $SCRATCH/core_ngs/fastq_prep
cp $CORENGS/human_stuff/miRNA_test.fq .

cutadapt -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC miRNA_test.fq \
  2> miRNA_test.cuta.log \
  | gzip > miRNA_test.cutadapt.fq.gz
Notes:
You should see a miRNA_test.cuta.log log file when the command completes. How many lines does it have?
Take a look at the first 15 lines.
It will look something like this:
This is cutadapt 1.18 with Python 3.7.1
Command line parameters: -m 20 -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC miRNA_test.fq
Processing reads on 1 core in single-end mode ...
Finished in 0.06 s (113 us/read; 0.53 M reads/minute).

=== Summary ===

Total reads processed:             500
Reads with adapters:               492 (98.4%)
Reads that were too short:          64 (12.8%)
Reads written (passing filters):   436 (87.2%)

Total basepairs processed:      50,500 bp
Total written (filtered):       10,909 bp (21.6%)
Notes:
Special care must be taken when removing adapters from paired-end FASTQ files: the R1 and R2 files must stay in sync, so any read filtered out of one file must also be removed from its mate, or downstream paired-end alignment will fail.
Now we're going to run cutadapt on the larger FASTQ files, and also perform paired-end adapter trimming on some yeast paired-end RNA-seq data.
Since batch jobs can't be submitted from an idev session, make sure you are back on a login node (just exit the idev session).
First stage the 4 FASTQ files we will work on:
mkdir -p $SCRATCH/core_ngs/cutadapt
cd $SCRATCH/core_ngs/cutadapt

cp $CORENGS/human_stuff/Sample_H54_miRNA_L004_R1.cat.fastq.gz .
cp $CORENGS/human_stuff/Sample_H54_miRNA_L005_R1.cat.fastq.gz .
cp $CORENGS/alignment/Yeast_RNAseq_L002_R1.fastq.gz .
cp $CORENGS/alignment/Yeast_RNAseq_L002_R2.fastq.gz .
Instead of running cutadapt on the command line, we're going to submit a job to the TACC batch system to perform single-end adapter trimming on the two lanes of miRNA data, and paired-end adapter trimming on the two yeast RNAseq FASTQ files.
Paired end adapter trimming is rather complicated, so instead of trying to do it all in one command line we will use one of the handy BioITeam scripts that handles all the details of paired-end read trimming, including all the environment setup.
Paired-end RNA fastq trimming script
The BioITeam has a number of useful NGS scripts that can be executed by anyone on ls6 or stampede3. They are located in the /work/projects/BioITeam/common/script/ directory.
For groups that participate in BRCF pods, the scripts are available in /mnt/bioi/script on any compute server.
The name of the script we want is trim_adapters.sh. Just type the full path of the script with no arguments to see its help information:
/work/projects/BioITeam/common/script/trim_adapters.sh
You should see something like this:
trim_adapters.sh 2025_05_01
Trim adapters from single- or paired-end sequences using cutadapt. Usage:
trim_adapters.sh <in_fq> <out_pfx> [ paired min_len adapter1 adapter2 ]
Required arguments:
in_fq For single-end alignments, path to input fastq file.
For paired-end alignemtts, path to the the R1 fastq file
which must contain the string 'R1' in its name. The
corresponding 'R2' must have the same path except for 'R1'
out_pfx Desired prefix of output files.
Optional arguments:
paired 0 = single end alignment (default); 1 = paired end.
min_len Minimum sequence length after adapter removal. Default 32.
adapter1 3' adapter. Default GATCGGAAGAGCACACGTCTGAACTCCAGTCAC (NEB).
Specifiy 'illumina' for AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
(standard Illumina TruSeq3 indexed adapter).
adapter2 5' adapter. Default TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA (NEB).
Specifiy 'illumina' for AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTA
(standard Illumina TruSeq universal adapter).
Environment variables:
show_only 1 = only show what would be done (default not set)
keep 1 = keep intermediate file(s) (default 0, don't keep)
cuta_args other cutadapt options (e.g. '--trim-n --max-n=0.25')
Examples:
export cuta_args='-O 5'; trim_adapters.sh my.fastq.gz h54_b1 1 40
trim_adapters.sh my_fastq.gz yeast_b3 1 28 Illumina Illumina
Based on this information, here are the three cutadapt commands we want to execute:
/work/projects/BioITeam/common/script/trim_adapters.sh Sample_H54_miRNA_L004_R1.cat.fastq.gz H54_miRNA_L004 0 20
/work/projects/BioITeam/common/script/trim_adapters.sh Sample_H54_miRNA_L005_R1.cat.fastq.gz H54_miRNA_L005 0 20
/work/projects/BioITeam/common/script/trim_adapters.sh Yeast_RNAseq_L002_R1.fastq.gz yeast_rnaseq 1
Let's put these commands into a cuta.cmds commands file. But first we need to learn a bit about Editing files in Linux.
Exercise: Create cuta.cmds file
Use nano or emacs to create a cuta.cmds file with the 3 cutadapt processing commands above. If you have trouble with this, you can copy a pre-made commands file:
cd $SCRATCH/core_ngs/cutadapt cp $CORENGS/tacc/cuta.cmds .
Or use this "cat to MARKER" trick, also known as a heredoc. The MARKER tag can be anything; below it is EOL.
cd $SCRATCH/core_ngs/cutadapt
cat > cuta.cmds << EOL
/work/projects/BioITeam/common/script/trim_adapters.sh Sample_H54_miRNA_L004_R1.cat.fastq.gz H54_miRNA_L004 0 20
/work/projects/BioITeam/common/script/trim_adapters.sh Sample_H54_miRNA_L005_R1.cat.fastq.gz H54_miRNA_L005 0 20
/work/projects/BioITeam/common/script/trim_adapters.sh Yeast_RNAseq_L002_R1.fastq.gz yeast_rnaseq 1
EOL
When you're finished you should have a cuta.cmds file that is 3 lines long (check this with wc -l).
Next create a batch submission script for your job and submit it to the normal queue with a maximum run time of 1 hour.
cd $SCRATCH/core_ngs/cutadapt

launcher_creator.py -j cuta.cmds -n cuta -t 01:00:00 -a OTH21164 \
  -m 'module unload xalt' -q normal
sbatch --reservation=core-ngs-class-0604 cuta.slurm
# or this on Thursday:
sbatch --reservation=core-ngs-class-0605 cuta.slurm
showq -u

# or, if you're not on the reservation:
launcher_creator.py -j cuta.cmds -n cuta -t 01:00:00 -a OTH21164 \
  -m 'module unload xalt' -q development
sbatch cuta.slurm
showq -u
(The -m 'module unload xalt' option addresses a bug in the module system when running certain bioinformatics programs.)
How will you know your job is done?
You should see several log files when the job is finished:
Take a look at the first part of the yeast_rnaseq.acut.pass1.log log file:
It will look something like this:
This is cutadapt 1.18 with Python 3.7.1
Command line parameters: -m 32 -a GATCGGAAGAGCACACGTCTGAACTCCAGTCAC --trim-n --paired-output yeast_rnaseq_R2.tmp.cuta.fastq -o yeast_rnaseq_R1.tmp.cuta.fastq Yeast_RNAseq_L002_R1.fastq.gz Yeast_RNAseq_L002_R2.fastq.gz
Processing reads on 1 core in paired-end legacy mode ...
WARNING: Legacy mode is enabled. Read modification and filtering options
*ignore* the second read. To switch to regular paired-end mode, provide the
--pair-filter=any option or use any of the -A/-B/-G/-U/--interleaved options.
Finished in 105.06 s (16 us/read; 3.68 M reads/minute).

=== Summary ===

Total read pairs processed:      6,440,847
  Read 1 with adapter:           3,875,741 (60.2%)
  Read 2 with adapter:                   0 (0.0%)
Pairs that were too short:         112,847 (1.8%)
Pairs written (passing filters): 6,328,000 (98.2%)
...
The corresponding yeast_rnaseq.acut.pass2.log file looks like this:
This is cutadapt 1.18 with Python 3.7.1
Command line parameters: -m 32 -a TGATCGTCGGACTGTAGAACTCTGAACGTGTAGA --paired-output yeast_rnaseq_R1.cuta.fastq -o yeast_rnaseq_R2.cuta.fastq yeast_rnaseq_R2.tmp.cuta.fastq yeast_rnaseq_R1.tmp.cuta.fastq
Processing reads on 1 core in paired-end legacy mode ...
Finished in 65.08 s (10 us/read; 5.83 M reads/minute).

=== Summary ===

Total read pairs processed:      6,328,000
  Read 1 with adapter:              90,848 (1.4%)
  Read 2 with adapter:                   0 (0.0%)
Pairs that were too short:               0 (0.0%)
Pairs written (passing filters): 6,328,000 (100.0%)

Total basepairs processed: 1,198,172,994 bp
  Read 1:   639,128,000 bp
  Read 2:   559,044,994 bp
Total written (filtered):  1,197,894,462 bp (100.0%)
...
Exercise: Verify that both adapter-trimmed yeast_rnaseq fastq files have 6,328,000 reads
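The same lines-divided-by-4 arithmetic works on compressed FASTQ files without writing a decompressed copy to disk. A sketch with stand-in data; in practice you would point gzip -dc (or zcat) at your trimmed output files, whose names appear in the logs above:

```shell
# Count reads in a gzipped FASTQ by streaming it through gzip -dc.
# demo.fq.gz is a stand-in holding 1 invented read.
printf '@r1\nACGT\n+\nIIII\n' | gzip > demo.fq.gz
echo $(( $(gzip -dc demo.fq.gz | wc -l) / 4 ))   # prints 1
```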