Overview:
This section provides directions for generating SSCS (Single Strand Consensus Sequence) reads and trimming molecular indexes from raw fastq files.
Learning Objectives:
- Use a python script to generate SSCS reads.
- Use cutadapt to trim molecular indexes from duplex-seq libraries.
Tutorial: SSCS Reads
First we want to generate SSCS reads, taking advantage of the molecular indexes added during library prep. To do so we will use a "majority rules" python script (named SSCS_DCS.py) which was heavily modified by DED from a script originally created by Mike Schmitt and Scott Kennedy for the original duplex-seq paper. This script can be found in the $BI/bin directory. For the purposes of this tutorial, the paired-end sequencing of sample DED110 (prepared with a molecular index library) has been placed in the $BI/gva_course/mixed_population directory. Invoking the script is as simple as typing SSCS_DCS.py; adding -h will give a list of the available options. The goal of this command is to generate SSCS reads for any molecular index with at least 2 reads present, and to generate a log file which will tell us some information about the data.
Code Block:

```shell
cds
mkdir GVA_Error_Correction
cd GVA_Error_Correction
cp $BI/gva_course/mixed_population/DED110*.fastq .
```
Expand for a hint:

You can often get more information about python scripts by typing the name of the script followed by the -h option.
This should take ~10 minutes or less to complete in an idev shell. We suggest looking over the alternative library prep presentation or the duplex sequencing paper itself in the meantime.
Error correction evaluation:
The SSCS_Log is a great place to start. Use the tail command to look at the last 8 lines of the log file to determine how many reads made it from raw reads to error corrected SSCS reads.
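The tail usage above can be sketched on a stand-in file first (the real file name will be whatever SSCS_DCS.py wrote into your directory; the demo file below is made up so the command is runnable anywhere):

```shell
# Build a 10-line stand-in log, then view only its last 8 lines with tail.
printf 'line %s\n' 1 2 3 4 5 6 7 8 9 10 > demo_log.txt
tail -n 8 demo_log.txt
```

Once you see how `-n` controls the number of lines kept, point `tail -n 8` at the actual SSCS log in your directory.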
Expand for answer:

There are approximately 1/6th as many SSCS reads as raw reads:

Total Reads: 6066836
SSCS count: 978142

While this is somewhat misleading, as it takes a minimum of 2 reads to generate a single SSCS read, we do have some additional information about what happened to the other reads. The first thing to consider is the "Dual MI Reads": these represent the reads which correctly had the 12bp of degenerate sequence and the 4bp anchor. In this case, more than 1.5 million reads lacked an identifiable molecular index on read 1 and/or read 2. By that measure, we had ~1/4 as many SSCS reads as raw reads.
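The ratio quoted above can be double-checked with a one-line awk calculation (awk is used here purely as a calculator; the numbers are the ones reported in the log):

```shell
# Raw reads divided by SSCS reads: about 6.2, i.e. roughly 1/6th as many SSCS reads.
awk 'BEGIN { printf "%.2f\n", 6066836 / 978142 }'
```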
The 3 columns are the read position, the number of bases changed, and the number of bases not changed. If you copy and paste these 3 columns into excel, you can easily calculate the sum of the 2nd column to see that 446,104 bases were changed. The read position is based on the 5'-3' sequence, and you should notice that, generally, the higher the read position, the more errors were corrected. This should make sense based on what we have talked about with quality scores decreasing as read length increases.
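If you would rather stay on the command line than paste into excel, the same column sum can be done with awk. The three-column file below is a made-up stand-in for the real log columns, just to show the pattern:

```shell
# Stand-in columns: read position, bases changed, bases not changed.
printf '1 10 990\n2 25 975\n3 40 960\n' > changes_demo.txt
# Sum column 2 (bases changed) across all rows.
awk '{ sum += $2 } END { print sum }' changes_demo.txt
```

Run the same awk one-liner against the real log columns to reproduce the 446,104 total.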
Tutorial (Trimmed Reads with cutadapt):
From our earlier tutorial on read quality control you likely remember that you can load cutadapt as a module. If you feel like you need a hint to do this, pause, think for a minute, and try some things. If you still can't get it, raise your hand and talk to us, as this is a concept you should be able to do on your own by now, so we need to help explain things differently.
Code Block:

```shell
cutadapt -h   # will display all of cutadapt's available options
# Interrogate the help output to determine how to trim the first 16 bases off
# DED110_CATGGC_L006_R1_001.fastq and DED110_CATGGC_L006_R2_001.fastq
```
Code Block:

```shell
cutadapt -u 16 -o DED110.R1.trimmed.fastq DED110_CATGGC_L006_R1_001.fastq
cutadapt -u 16 -o DED110.R2.trimmed.fastq DED110_CATGGC_L006_R2_001.fastq
```
Code Block:

```shell
# 1. use a semicolon to separate the two commands so that the second will start
#    as soon as the first finishes:
cutadapt -u 16 -o DED110.R1.trimmed.fastq DED110_CATGGC_L006_R1_001.fastq; cutadapt -u 16 -o DED110.R2.trimmed.fastq DED110_CATGGC_L006_R2_001.fastq

# 2. use && between the commands so the second will start as soon as the first
#    finishes, but only if it finishes without any errors:
cutadapt -u 16 -o DED110.R1.trimmed.fastq DED110_CATGGC_L006_R1_001.fastq && cutadapt -u 16 -o DED110.R2.trimmed.fastq DED110_CATGGC_L006_R2_001.fastq

# 3. use a trailing & to have the commands run in the background at the same time:
cutadapt -u 16 -o DED110.R1.trimmed.fastq DED110_CATGGC_L006_R1_001.fastq &
cutadapt -u 16 -o DED110.R2.trimmed.fastq DED110_CATGGC_L006_R2_001.fastq &
```
Expand for answer:

The 3rd solution will finish before the other two because the two commands are actually executed at the same time rather than waiting for one to finish. In many circumstances this is among the best ways to do something like this, and 'simple' read trimming with cutadapt is one of them. If you are doing something much more computationally intense (say read mapping, variant calling, or genome assembly), trying to complete the tasks at the same time will often leave you with no results at all, as you run out of memory even on the compute nodes and the programs error out.
Checking the current contents of the directory will show that we've now made 2 new .trimmed.fastq files in addition to the trio of .fastq files we made in the error correction part of the tutorial. The DED110_SSCS.fastq file is the one of most interest to us for the follow-up tutorial, while both .trimmed.fastq files will also be of interest. Rather than working with 3 files for 2 samples (error corrected and trimmed), use what you have learned about redirection to generate a single file called DED110_all.trimmed.fastq, and check your work.
Expand for answer:
Code Block:

```shell
cat *.trimmed.fastq > DED110_all.trimmed.fastq
# The above could also be done as 2 sequential steps, naming each file separately
# and using >> on the second line.
head DED110_all.trimmed.fastq
tail DED110_all.trimmed.fastq
wc -l *.trimmed.fastq
# These 4 commands should give you all the information you need to make sure you
# have a single file with all the information from the first 2. Ask if you aren't
# sure you have the right solution.
```
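Since each fastq record is exactly 4 lines, `wc -l` also lets you convert line counts into read counts. A tiny made-up fastq shows the arithmetic:

```shell
# Two 4-line fastq records -> 8 lines -> 8 / 4 = 2 reads.
printf '@r1\nACGT\n+\nFFFF\n@r2\nACGT\n+\nFFFF\n' > demo.fastq
lines=$(wc -l < demo.fastq)
echo $(( lines / 4 ))
```

The same division applied to `wc -l *.trimmed.fastq` tells you whether the combined file has exactly as many reads as the two inputs together.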
Next step:
You should now have 2 new .fastq files which we will use for calling variants: DED110_SSCS.fastq and DED110_all.trimmed.fastq. You should take these files into a more in-depth breseq tutorial to compare the specific mutations that are eliminated using the error correction (SSCS). Link to other tutorial.
Optional (not recommended) tutorial: trimming reads with flexbar:
For another discussion about version control and when it is necessary to update to new tools and versions of programs, take a look at the trimmed-reads tutorial from last year, which used flexbar simply because 'it worked before, so keep using it'. Compare the simple cutadapt commands used in this tutorial to all the work that went into flexbar last year. So while "well enough can be left alone", sometimes it is still better to use new tools. As the heading suggests, we don't actually suggest that you USE flexbar to trim this data set or any other; it is just worth looking at to see how different programs operate or are invoked to achieve the same goals.