Overview:

This section provides directions for generating SSCS (Single Strand Consensus Sequence) reads and trimming molecular indexes from raw fastq files. 

Learning Objectives:

  1. Use a python script to generate SSCS reads.
  2. Use cutadapt to trim molecular indexes from duplex seq libraries.

Tutorial: SSCS Reads

First we want to generate SSCS reads, taking advantage of the molecular indexes added during library prep. To do so we will use a "majority rules" python script (named SSCS_DCS.py) which was heavily modified by DED from a script originally created by Mike Schmitt and Scott Kennedy for the original duplex sequencing paper. This script can be found in the $BI/bin directory. For the purpose of this tutorial, the paired end sequencing of sample DED110 (prepared with a molecular index library) has been placed in the $BI/gva_course/mixed_population directory. Invoking the script is as simple as typing SSCS_DCS.py; adding -h will give a list of the available options. The goal of this command is to generate SSCS reads for any molecular index where we have at least 2 reads present, and to generate a log file which will tell us some information about the data.

Code Block
languagebash
titleClick here for solution of how to copy the DED110 fastq files to a new directory called GVA_Error_Correction
collapsetrue
cds
mkdir GVA_Error_Correction
cd GVA_Error_Correction
cp $BI/gva_course/mixed_population/DED110*.fastq .
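If you want to confirm the copy worked before moving on, a quick listing of the new directory is enough (a minimal sketch; the file names assume the DED110 paired-end fastq files copied above):

Code Block
languagebash
titleOptional: verify the fastq files were copied
collapsetrue
# list the copied fastq files and their sizes; you should see both the R1 and R2 files for DED110
ls -lh DED110*.fastq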

Expand
titleInterrogate the SSCS_DCS.py script to determine how to invoke it. Click here for hints before the answer

You can often get more information about python scripts by typing the name of the script followed by the -h option.


Code Block
titleThe -h option should show you these as the key options to use/consider
  -f1 FASTQ1, --fastq1 FASTQ1
                        fastq read1 file to check
  -f2 FASTQ2, --fastq2 FASTQ2
                        fastq read2 file to check
  -p PREFIX, --prefix PREFIX
                        prefix for output files
  -s, --SSCS            calculate SSCS sequence, off by default. IF DCS
                        specificed, automatically on
  -m MINIMUM_READS, --minimum_reads MINIMUM_READS
                        minimum number of reads needed to support SSCS reads
  --log LOG             name of output log file
Code Block
languagebash
titleUsing that information, see if you can figure out how to put the command together
collapsetrue
SSCS_DCS.py -f1 DED110_CATGGC_L006_R1_001.fastq -f2 DED110_CATGGC_L006_R2_001.fastq -p DED110 -s -m 2 --log SSCS_Log

This should take ~10 minutes or less to complete in an idev shell. We suggest looking over the alternative library prep presentation or the duplex sequencing paper itself in the meantime.

Error correction evaluation:

The SSCS_Log is a great place to start. Use the tail command to look at the last 8 lines of the log file to determine how many reads made it from raw reads to error-corrected SSCS reads.
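As a concrete example (assuming you kept the log file name SSCS_Log from the command above), the last 8 lines can be printed like this:

Code Block
languagebash
titleLook at the summary at the end of the log file
collapsetrue
# print the last 8 lines of the log generated by SSCS_DCS.py
tail -n 8 SSCS_Log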


Expand
titleApproximately what fraction of raw reads became SSCS reads?

There are approximately 1/6th as many SSCS reads as raw reads:

Total Reads: 6066836

SSCS count: 978142

While this is somewhat misleading, as it takes a minimum of 2 reads to generate a single SSCS read, we do have some additional information regarding what happened to the other reads. The first thing to consider is the "Dual MI Reads"; these represent the reads which correctly had the 12bp of degenerate sequence and the 4bp anchor. In this case, more than 1.5 million reads lacked an identifiable molecular index on read 1 and/or read 2. By that measure, we had ~1/4 as many SSCS reads as correctly indexed reads.
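If you want to double-check the fraction above, a one-line calculation is enough (a sketch; the two counts are simply the ones printed in the log):

Code Block
languagebash
titleOptional: quick check of the SSCS to raw read ratio
collapsetrue
# divide the SSCS count by the total read count reported in SSCS_Log
awk 'BEGIN {print 978142 / 6066836}'   # ~0.16, i.e. roughly 1/6 of the raw reads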


The 3 columns are the read position, the number of bases changed, and the number of bases not changed. If you copy and paste these 3 columns into Excel you can easily calculate the sum of the 2nd column to see that 446,104 bases were changed. The read position is based on the 5' to 3' sequence, and you should notice that generally the higher the read position, the more errors were corrected. This should make sense based on what we have talked about with decreasing quality scores as read length increases.
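If you would rather stay on the command line, a rough awk one-liner can compute the same sum. This is only a sketch: it assumes the per-position table in SSCS_Log consists of lines containing exactly 3 whitespace-separated numbers and that nothing else in the log matches that pattern, so compare its output against the table before trusting it.

Code Block
languagebash
titleOptional: sum the 'bases changed' column without Excel
collapsetrue
# keep only lines made of exactly 3 numeric columns, then sum the 2nd column
grep -E '^[0-9]+[[:space:]]+[0-9]+[[:space:]]+[0-9]+[[:space:]]*$' SSCS_Log | awk '{sum += $2} END {print sum, "bases changed"}'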

Tutorial: Trimmed Reads with cutadapt

From our earlier tutorial on read quality control you likely remember that you can load cutadapt as a module. If you feel like you need a hint to do this, pause, think for a minute, and try some things. If you still can't get it, raise your hand and talk to us; this is a concept you should be able to do on your own by now, so we may need to explain things differently.
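If you are stuck on the module step itself, the following is a minimal sketch assuming an Lmod-style module system like the one used on TACC; the exact module name or version string may differ on your system.

Code Block
languagebash
titleOptional hint: finding and loading cutadapt as a module
collapsetrue
# search the module system for cutadapt (name/versions may differ on your system)
module spider cutadapt
# load it (add a version string if more than one is available)
module load cutadapt
# confirm it is on your PATH
cutadapt --version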

Code Block
titleUse what you know about cutadapt and help functions to try to determine how you want to trim the first 16 bases off the R1 and R2 reads. Click here for a hint.
collapsetrue
cutadapt -h
# will display all of the available options

# look through those options for one that removes a fixed number of bases from the beginning of each read
# to determine how to trim the first 16 bases off DED110_CATGGC_L006_R1_001.fastq and DED110_CATGGC_L006_R2_001.fastq
Code Block
languagebash
titleClick here for 2 example commands that will work.
collapsetrue
# -u 16 removes the first 16 bases (the 12bp molecular index plus the 4bp anchor) from the 5' end of each read
cutadapt -u 16 -o DED110.R1.trimmed.fastq DED110_CATGGC_L006_R1_001.fastq
cutadapt -u 16 -o DED110.R2.trimmed.fastq DED110_CATGGC_L006_R2_001.fastq
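As a quick sanity check (a sketch reusing the file names above), the trimmed files should have exactly the same number of lines as the originals, since we are only removing bases from each read, not removing reads:

Code Block
languagebash
titleOptional: confirm trimming did not drop any reads
collapsetrue
# the line count of each trimmed file should match its untrimmed input
wc -l DED110_CATGGC_L006_R1_001.fastq DED110.R1.trimmed.fastq
wc -l DED110_CATGGC_L006_R2_001.fastq DED110.R2.trimmed.fastq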


Checking the current contents of the directory will show that we've now made 2 new .trimmed.fastq files in addition to the trio of .fastq files we made in the error correction part of the tutorial. The DED110_SSCS.fastq file is the one of most interest to us for the follow-up tutorial, and both .trimmed.fastq files will also be of interest. Rather than working with 3 files for what are really 2 samples (error corrected and trimmed), use what you have learned about redirecting output to generate a single file called DED110_all.trimmed.fastq, and check your work.

Expand
titleNeed a hint?
command - function
cat - prints the contents of a file to the screen
> - writes the output of the command on the left to the file named on the right
>> - appends the output of the command on the left to the file named on the right
wc -l - counts how many lines are in whatever is specified next
head - view the top lines of a file
tail - view the bottom lines of a file
Code Block
titlePossible solution
collapsetrue
cat *.trimmed.fastq > DED110_all.trimmed.fastq  
# The above could also be done as 2 sequential steps, listing each input file separately and using >> on the second line.
head DED110_all.trimmed.fastq
tail DED110_all.trimmed.fastq
wc -l *.trimmed.fastq
 
# these 4 commands should give you all the information you need to make sure you have a single file with all the information from the first 2. Ask if you aren't sure you have the right solution.

Next step:

You should now have 2 new .fastq files which we will use to call variants: DED110_SSCS.fastq and DED110_all.trimmed.fastq. You should take these files into a more in-depth breseq tutorial to compare the specific mutations that are eliminated by the error correction (SSCS). Link to other tutorial.

Optional (not recommended) tutorial on trimming reads with flexbar:

For another discussion about version control and when it is necessary to update to new tools and versions of programs, take a look at the trimmed reads tutorial from last year, which used flexbar simply because 'it worked before, so keep using it'. Compare the simple cutadapt commands used in this tutorial to all the work that went into flexbar last year. So while "well enough can be left alone", sometimes it is still better to use new tools. As the heading suggests, we don't actually suggest that you USE flexbar to trim this data set or any other; it is just something worth looking at to see how different programs operate or are invoked to achieve the same goals.
