Diploid genomes
The initial steps in calling variants for diploid or polyploid organisms with NGS data are the same as those we've already seen:
- Map the raw data to a suitable reference
- Look for SNPs and small-scale indels
Expectations of the output are quite different, however, and knowing the expected ploidy can add statistical power to uncovering variation in populations or in organisms with more than two expected alleles at the same location.
If you happen to be working with a model organism with extensive external data (ESPECIALLY HUMAN), then there are even more sophisticated tools like the Broad Institute's GATK that can improve both sensitivity and specificity of your variant calls.
Many example datasets are available from the 1000 Genomes Project specifically for method evaluation and training. We'll explore a trio (mom, dad, child); their accession numbers are NA12892, NA12891, and NA12878, respectively. To make the exercise run more quickly, we'll focus on data only from chromosome 20.
All the data we'll use is located here:
$BI/ngs_course/human_variation
This directory contains raw data (the .fastq files), mapped data (the .bam files), and variant calls (the .vcf files). It also contains the subdirectory ref with special references.
The 1000 Genomes Project is oriented toward producing .vcf files; the file "ceu20.vcf" contains all the latest genotypes for this trio, based on abundant data from the project.
.bam files containing a subset of mapped human whole-exome data are also available for these three individuals; those are the three "NA*.bam" files.
We've pre-run samtools and GATK on each sample individually - those are the *GATK.vcf and *samtools.vcf files.
We've also pre-run samtools and GATK on the trio, resulting in GATK.all.vcf and samtools.all.vcf.
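To orient yourself, it can help to list the directory and peek at the top of the trio VCF, skipping its ## meta-data lines (a quick sketch; safe to run on the head node):
ls $BI/ngs_course/human_variation                                  # fastq, bam, and vcf files, plus the ref subdirectory
grep -v "^##" $BI/ngs_course/human_variation/ceu20.vcf | head -5   # column header plus the first few variant records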
We'll now show the steps for:
- Mapping
- Single-sample variant calling with samtools
- Multiple-sample variant calling with samtools
We'll return to this example data shortly to demonstrate GATK, a much more involved tool, performing these same steps.
Note if you're trying this on Stampede: The $BI directory is not accessible from compute nodes on Stampede so you will need to make a copy of your data on $SCRATCH and update file locations accordingly to get this demo to run.
Optional: Mapping Exercise
Let's use Anna Battenhouse's shell script align_bwa.sh to align the fastq files to the hg19 version of the genome, and the python program launcher_creator.py to create the job submission script. Don't forget that for this to work, you need to have appended $BI/bin to your path.
To modify your path:
BI="/corral-repl/utexas/BioITeam"
PATH=$PATH:$BI/bin
export PATH
Move into your scratch directory, then figure out how to create and qsub the align_bwa.sh command to align the data file $BI/ngs_course/human_variation/allseqs_R1.fastq against the hg19 reference. Call the output "test".
Solution
echo "align_bwa.sh $BI/ngs_course/human_variation/allseqs_R1.fastq test hg19 1 > aln.test.log 2>&1" > commands
launcher_creator.py -l map_call.sge -n map_call -t 01:00:00 -j commands
module load bwa
module load samtools
qsub map_call.sge
Caution: If you are using this example outside an SSC course, you must use the -a option to specify a valid allocation (e.g. BME_2012)
Output
login1$ echo "align_bwa.sh $BI/ngs_course/human_variation/allseqs_R1.fastq test hg19 1" > commands
launcher_creator.py -l map_call.sge -n map_call -t 01:00:00 -j commands
Job file has 1 lines.
Using 12 cores.
Launcher successfully created. Type "qsub map_call.sge" to queue your job.
login1$ qsub map_call.sge
-----------------------------------------------------------------
-- Welcome to the Lonestar4 Westmere/QDR IB Linux Cluster --
-----------------------------------------------------------------
--> Checking that you specified -V...
--> Checking that you specified a time limit...
--> Checking that you specified a queue...
--> Setting project...
--> Checking that you specified a parallel environment...
--> Checking that you specified a valid parallel environment name...
--> Checking that the number of PEs requested is valid...
--> Ensuring absence of dubious h_vmem,h_data,s_vmem,s_data limits...
--> Requesting valid memory configuration (23.4G)...
--> Verifying HOME file-system availability...
--> Verifying WORK file-system availability...
--> Verifying SCRATCH file-system availability...
--> Checking ssh setup...
--> Checking that you didn't request more cores than the maximum...
--> Checking that you don't already have the maximum number of jobs...
--> Checking that you don't already have the maximum number of jobs in queue development...
--> Checking that your time limit isn't over the maximum...
--> Checking available allocation...
--> Submitting job...
Your job 586249 ("map_call") has been submitted
Note that the input is paired-end data.
Output
Your directory should have content like this when done (from ls -lt):
-rw-r--r-- 1 sphsmith G-801020 5732 May 20 23:01 map_call.o586338
-rw------- 1 sphsmith G-801020 392 May 20 23:01 test.flagstat.txt
-rw------- 1 sphsmith G-801020 2175952 May 20 23:01 test.sorted.bam.bai
-rw------- 1 sphsmith G-801020 334782188 May 20 23:01 test.sorted.bam
-rw-r--r-- 1 sphsmith G-801020 13135 May 20 23:00 map_call.e586338
-rw------- 1 sphsmith G-801020 411244396 May 20 23:00 test.bam
-rw------- 1 sphsmith G-801020 45695040 May 20 22:49 test.read2.sai
-rw------- 1 sphsmith G-801020 45372400 May 20 22:39 test.read1.sai
-rw-r--r-- 1 sphsmith G-801020 0 May 20 22:26 map_call.pe586338
and samtools flagstat test.sorted.bam should yield:
Running flagstat...
4546280 + 0 in total (QC-passed reads + QC-failed reads)
0 + 0 duplicates
3992274 + 0 mapped (87.81%:nan%)
4546280 + 0 paired in sequencing
2273140 + 0 read1
2273140 + 0 read2
40290 + 0 properly paired (0.89%:nan%)
3636946 + 0 with itself and mate mapped
355328 + 0 singletons (7.82%:nan%)
44128 + 0 with mate mapped to a different chr
15634 + 0 with mate mapped to a different chr (mapQ>=5)
We would normally use the BAM file from the previous mapping step to call variants in this raw data. However, for the purposes of this course we will use the actual BAM file provided by the 1000 Genomes Project (the one from which the .fastq file above was derived, which leads to some oddities in it):
$BI/ngs_course/human_variation/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam
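If you'd like a quick sanity check of what's inside this BAM first, samtools view can print a few alignments (assuming the samtools module is loaded; quick enough for the head node):
module load samtools   # if samtools isn't already on your path
samtools view $BI/ngs_course/human_variation/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam | head -5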
With samtools, this is a two-step process:
- samtools mpileup transposes the mapped data in a sorted BAM file fully to genome-centric coordinates. It starts at the first base on the first chromosome for which there is coverage and prints out one line per base. Each line has information on every base observed in the raw data at that position, along with a lot of auxiliary information depending on which flags are set. It calculates the Bayesian prior probability given by the data, but does not estimate an actual genotype.
- bcftools, with a few options added, uses the prior probability distribution and the data to calculate a genotype for the variants detected.
Here are the commands, piped together:
samtools mpileup -uf $BI/ref_genome/fasta/ucsc/ucsc.hg19.fasta \
$BI/ngs_course/human_variation/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
| bcftools view -vcg - > test.raw.vcf
Note that the reference chosen for mpileup must be exactly the same as the reference used to create the BAM file. The 1000 Genomes Project created its own reference, so the command listed above won't work; we have to use the 1000 Genomes reference, which is $BI/ngs_course/human_variation/ref/hs37d5.fa. (We could have chosen another mapper and reference and re-mapped the data ourselves if we'd wanted to, though.)
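If you're ever unsure which reference a BAM was mapped against, the @SQ lines in its header name the sequences; the 1000 Genomes reference uses plain chromosome names like 20, while UCSC hg19 uses chr20. For example:
samtools view -H $BI/ngs_course/human_variation/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam | grep "^@SQ" | head -3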
Exercise:
Write a modified version of the command above, using the proper reference, into a commands file; create the job submission script with launcher_creator.py; and submit the job.
Solution
As we did for mapping, we need to submit these to the Lonestar queue:
echo "samtools mpileup -uf $BI/ngs_course/human_variation/ref/hs37d5.fa \
$BI/ngs_course/human_variation/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
| bcftools view -vcg - > test.raw.vcf" > commands
launcher_creator.py -l call_vars.sge -n call_vars -t 01:00:00 -j commands
qsub call_vars.sge
Output
login1$ echo "samtools mpileup -uf /corral-repl/utexas/BioITeam/ngs_course/human_variation/ref/hs37d5.fa \
> /corral-repl/utexas/BioITeam/ngs_course/human_variation/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
> | bcftools view -vcg - > test.raw.vcf" > commands
login1$ launcher_creator.py -l call_vars.sge -n call_vars -t 01:00:00 -j commands
Job file has 1 lines.
Using 12 cores.
Launcher successfully created. Type "qsub call_vars.sge" to queue your job.
login1$ qsub call_vars.sge
-----------------------------------------------------------------
-- Welcome to the Lonestar4 Westmere/QDR IB Linux Cluster --
-----------------------------------------------------------------
--> Checking that you specified -V...
--> Checking that you specified a time limit...
--> Checking that you specified a queue...
--> Setting project...
--> Checking that you specified a parallel environment...
--> Checking that you specified a valid parallel environment name...
--> Checking that the number of PEs requested is valid...
--> Ensuring absence of dubious h_vmem,h_data,s_vmem,s_data limits...
--> Requesting valid memory configuration (23.4G)...
--> Verifying HOME file-system availability...
--> Verifying WORK file-system availability...
--> Verifying SCRATCH file-system availability...
--> Checking ssh setup...
--> Checking that you didn't request more cores than the maximum...
--> Checking that you don't already have the maximum number of jobs...
--> Checking that you don't already have the maximum number of jobs in queue development...
--> Checking that your time limit isn't over the maximum...
--> Checking available allocation...
--> Submitting job...
Your job 586369 ("call_vars") has been submitted
After your single-sample variant calling job completes
If you don't want to wait, CHANGE TO A NEW DIRECTORY and do this:
ln -s $BI/ngs_course/human_variation/NA12878.chrom20.samtools.vcf test.raw.vcf
and continue with the rest of these exercises. Remember to return to your job directory if you want to see your actual data.
You can examine some of the variant calls with, for example:
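more test.raw.vcf                        # page through with the space bar; q to quit
grep -v "^##" test.raw.vcf | head -20    # or skip past the meta-data lines entirely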
Note the extensive header information, followed by an ordered list of variants.
A bcftools auxiliary perl script called vcfutils.pl can provide some useful stats. It's safe to run this on the head node since it's quick. It is not part of a standard TACC module, but it's in our common $BI/bin directory.
vcfutils.pl qstats test.raw.vcf
Output
login1$ vcfutils.pl qstats test.raw.vcf
QUAL #non-indel #SNPs #transitions #joint ts/tv #joint/#ref #joint/#non-indel
186 1011 1011 757 0 2.9803 0.0000 0.0000 2.9803
142 2036 2036 1441 0 2.4218 0.0000 0.0000 2.0059
122 3091 3091 2156 0 2.3059 0.0000 0.0000 2.1029
109 4177 4177 2925 0 2.3363 0.0000 0.0000 2.4259
99.5 5237 5237 3647 0 2.2937 0.0000 0.0000 2.1361
92 6273 6273 4354 0 2.2689 0.0000 0.0000 2.1489
85.5 7328 7328 5105 0 2.2964 0.0000 0.0000 2.4704
79 8352 8352 5811 0 2.2869 0.0000 0.0000 2.2201
74 9369 9369 6497 0 2.2622 0.0000 0.0000 2.0725
69 10553 10553 7311 0 2.2551 0.0000 0.0000 2.2000
65 11782 11782 8176 0 2.2673 0.0000 0.0000 2.3764
61 13059 13059 9090 0 2.2902 0.0000 0.0000 2.5179
57 14280 14280 9945 0 2.2941 0.0000 0.0000 2.3361
53 15301 15301 10656 0 2.2941 0.0000 0.0000 2.2935
48.8 16323 16323 11347 0 2.2803 0.0000 0.0000 2.0876
45 17460 17460 12122 0 2.2709 0.0000 0.0000 2.1409
41.8 18639 18639 12899 0 2.2472 0.0000 0.0000 1.9328
39.8 19660 19660 13572 0 2.2293 0.0000 0.0000 1.9339
37.8 21013 21013 14496 0 2.2243 0.0000 0.0000 2.1538
35.8 22309 22309 15380 0 2.2197 0.0000 0.0000 2.1456
33.8 23568 23568 16251 0 2.2210 0.0000 0.0000 2.2448
31.8 24846 24846 17101 0 2.2080 0.0000 0.0000 1.9860
29.8 26171 26171 17975 0 2.1931 0.0000 0.0000 1.9379
28 27308 27308 18688 0 2.1680 0.0000 0.0000 1.6816
26 28654 28654 19551 0 2.1478 0.0000 0.0000 1.7867
24 29947 29947 20371 0 2.1273 0.0000 0.0000 1.7336
22 31050 31050 21065 0 2.1097 0.0000 0.0000 1.6968
20 32081 32081 21719 0 2.0960 0.0000 0.0000 1.7347
17.1 33251 33251 22446 0 2.0774 0.0000 0.0000 1.6411
13.2 34447 34447 23238 0 2.0732 0.0000 0.0000 1.9604
10.4 35751 35751 24067 0 2.0598 0.0000 0.0000 1.7453
9.53 36772 36772 24724 0 2.0521 0.0000 0.0000 1.8049
8.65 38023 38023 25539 0 2.0457 0.0000 0.0000 1.8693
7.8 39301 39301 26360 0 2.0369 0.0000 0.0000 1.7965
6.98 40665 40665 27196 0 2.0192 0.0000 0.0000 1.5833
6.2 42394 42394 28243 0 1.9958 0.0000 0.0000 1.5352
5.46 44018 44018 29211 0 1.9728 0.0000 0.0000 1.4756
4.77 45553 45553 30168 0 1.9609 0.0000 0.0000 1.6557
4.13 47020 47020 31081 0 1.9500 0.0000 0.0000 1.6480
3.54 48572 48572 32063 0 1.9422 0.0000 0.0000 1.7228
3.01 50463 50463 33139 0 1.9129 0.0000 0.0000 1.3202
There is an honest write-up about the Ts/Tv metric referencing a 2002 paper by Ebersberger. Bottom line: between about 2.1 and 2.8 is OK for Ts/Tv (also called Ti/Tv).
You can also get some quick stats with the Linux one-liners below; there are also more thorough analysis programs built to work with VCFs.
Linux one-liner stats for this case
login1$ grep -c INDEL test.raw.vcf
3432
login1$ cat test.raw.vcf | awk 'BEGIN {FS=";"} {for (i=1;i<=NF;i++) {if (index($i,"AF1")!=0) {print $i} }}' | \
awk 'BEGIN {FS="="} {print int($2*10)/10}' | sort | uniq -c | sort -n -r | head
36757 1
19404 0.5
1819 0.4
36 0.6
17 0.8
16 0.7
1 0
Keep in mind that variant files only record variation that can be seen with the data provided. Wherever the sample sequence exactly matches the reference (i.e. is homozygous wildtype relative to the reference) there will be no data, which looks exactly the same as having no data at all in those regions. This leads us to our next topic.
This is all fine, but if you're actually trying to study human (or other organism) genetics, you must discriminate homozygous WT from a lack of data. This is done by providing many samples to the variant caller simultaneously. This concept extends further to populations; calling variants across a large and diverse population provides a stronger Bayesian prior probability distribution for more sensitive detection.
To instruct samtools to call variants across many samples, you simply give it mapped data with each sample tagged separately. Samtools allows two methods to do this:
1) By providing separate bam files for each sample, like this:
samtools mpileup -uf hs37d5.fa \
NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
NA12891.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
NA12892.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
| bcftools view -vcg - > all.samtools.vcf
2) By providing one or more bam files, each containing mapped reads from multiple samples tagged with unique samtools @RG tags, like this:
samtools mpileup -uf hs37d5.fa all.bam | bcftools view -vcg - > all.raw.vcf
The output file from the first option is in $BI/ngs_course/human_variation
CAVEAT: If you intend to use the second option, you must make sure you tag your reads with the right @RG tag. This can easily be done during the samse or sampe stage of mapping with bwa using the -r option, with samtools merge, with Picard's AddOrReplaceReadGroups command, or with your own perl/python/bash commands. Note that our align_bwa.sh script takes care of this detail for us, as sketched below.
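As a sketch, tagging reads at the sampe stage might look like this (file names here are illustrative; the .sai files come from earlier bwa aln steps):
bwa sampe -r '@RG\tID:NA12878\tSM:NA12878' hs37d5.fa \
    read1.sai read2.sai read1.fastq read2.fastq > NA12878.withRG.sam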
Identify the lineage
If genetics works, you should be able to identify the child based strictly on the genotypes. Can you do it?
Hint...
You're trying to find the genotypes in the all.samtools.vcf and all.GATK.vcf files, and then use your knowledge of Mendelian inheritance to figure out which of the three samples is the only one that could be a child of the other two.
One solution...
This linux one-liner should give you a snapshot of data sufficient to figure it out:
cat all.samtools.vcf | head -10000 | awk '{if ($6>500) {print $2"\t"$10"\t"$11"\t"$12}}' | grep "0/0" | sed s/':'/' '/g | awk '{print $2"\t"$5"\t"$8}' | tail -100 | sort | uniq -c
Here's the output and discussion
tacc:/scratch/01057/sphsmith/human_variation$ cat all.samtools.vcf | head -10000 | awk '{if ($6>500) {print $2"\t"$10"\t"$11"\t"$12}}' | grep "0/0" | sed s/':'/' '/g | awk '{print $2"\t"$5"\t"$8}' | tail -100 | sort | uniq -c
12 0/0 0/1 0/0
5 0/0 0/1 0/1
3 0/1 0/0 0/0
4 0/1 0/0 0/1
8 0/1 0/0 1/1
43 0/1 0/1 0/0
24 0/1 1/1 0/0
1 1/1 0/1 0/0
Here are the steps going into this command:
1) Dump the contents of all.samtools.vcf
2) Take the first 10,000 lines
3) If the variant quality score is greater than 500, then print fields 2 (SNP position), 10, 11, and 12 (the 3 genotypes).
4) Filter for only lines that have at least one homozygous reference (0/0) genotype (exercise to the reader to understand why...)
5) Break the genotype call apart from other information about depth: "sed" turns the colons into spaces so that awk can just print the genotype fields.
6) Take the last 100 lines, sort them, then count the unique lines
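If the one-liner is hard to read in one piece, here is the same pipeline with one stage per line and a comment on each (bash continues a pipeline across newlines after a trailing |):
cat all.samtools.vcf |                                  # 1) dump the VCF
head -10000 |                                           # 2) take the first 10,000 lines
awk '{if ($6>500) {print $2"\t"$10"\t"$11"\t"$12}}' |   # 3) QUAL > 500: print position plus the three genotype columns
grep "0/0" |                                            # 4) keep lines with at least one homozygous reference call
sed s/':'/' '/g |                                       # 5) turn the colons into spaces...
awk '{print $2"\t"$5"\t"$8}' |                          #    ...so awk can print just the GT fields
tail -100 | sort | uniq -c                              # 6) take the last 100 lines, sort, count unique patterns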
Here is my interpretation of the data:
1) This method effectively looks at a very narrow genomic region, probably within a homologous recombination block.
2) The most telling data: the child will have heterozygous SNPs from two homozygous parents.
3) So all this data is consistent with column 1 (NA12878) being the child:
12 0/0 0/1 0/0
5 0/0 0/1 0/1
4 0/1 0/0 0/1
8 0/1 0/0 1/1
43 0/1 0/1 0/0
24 0/1 1/1 0/0
"Outlier" data are:
3 0/1 0/0 0/0
1 1/1 0/1 0/0
This is, in fact, the correct assessment - NA12878 is the child.
Look at the differences between single- and multiple-sample SNP calls
bcftools has many other sub-commands, such as performing association tests and estimating allele frequency spectra.
You can't get gene-context information from bcftools - you have to shift to variant annotation tools to do that.
If you don't need a variant caller that uses a full Bayesian inference engine with robust filtering, protections, etc., but just want to look for weird base artifacts within your mapped data, check out the samtools mpileup output directly.
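For example, running mpileup without the -u option (which requests BCF output) prints a plain-text pileup, one line per covered position: chromosome, position, reference base, depth, read bases, and base qualities:
samtools mpileup -f $BI/ngs_course/human_variation/ref/hs37d5.fa \
    $BI/ngs_course/human_variation/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam | head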