Human Trios -- GVA2023

Overview

Trio (or familial) analysis has been exceptionally powerful for identifying rare childhood diseases.  The most prominent publication in this area is this first example of whole exome sequencing saving a life.  There are many other publications since and some review articles such as this one.  Familial analysis is critical for rare, autosomal dominant diseases because, almost by definition, the mutations may be "private" to each individual so we can't look across big populations to find one single causative mutation.  But within families, we can identify bad private mutations in single genes or pathways and then look across populations to find commonality at the gene or pathway level to explain a phenotype.

Learning Objectives

  1. Review differences in calling mutations on higher level organisms
  2. Call SNVs from multiple samples simultaneously
  3. Determine relationship between the individuals based on the results

Diploid genomes

The initial steps in calling variants for diploid or multi-ploid organisms with NGS data are the same as what we've already seen:

  1. Map the raw data to a suitable reference
  2. Look for SNVs.

Expectations of the output are quite different, however, which can add statistical power to uncovering variation in populations, or in organisms with more than two expected alleles at the same location. If you happen to be working with a model organism with extensive external data (ESPECIALLY HUMAN), then there are even more sophisticated tools like the Broad Institute's GATK that can improve both sensitivity and specificity of your variant calls. A separate tutorial will introduce you to GATK and another will look at the exome nature of this data.

Get some data

Many example datasets are available from the 1000 Genomes Project specifically for method evaluation and training. We'll explore a trio (mom, dad, child) known as the CEU Trio from the 1000 Genomes Project; their accession numbers are NA12892, NA12891, and NA12878 respectively. To make the exercise run FAR more quickly, we'll focus on data only from chromosome 20.

Possible errors on idev nodes

As mentioned yesterday, you cannot copy from the BioITeam (because it is on corral-repl) while on an idev node. Log out of your idev session, then copy the files.

All the data we'll use is located here:

Diploid genome (human) example files. This directory is very large and will take a few minutes to copy. Read ahead while the copy command runs
mkdir $SCRATCH/GVA_Human_trios
cd $SCRATCH/GVA_Human_trios
cp -r $BI/ngs_course/human_variation raw_files
ls raw_files

This directory contains the following:

  1. compressed raw data (the .fastq.gz files)
  2. mapped data (the .bam files)
  3. variant calls (the .vcf files)
  4. the subdirectory ref with special references
  5. .bam files containing a subset of mapped human whole exome data for these three individuals; those are the three files "NA*.bam".
  6. We've pre-run samtools and GATK on each sample individually - those are the *GATK.vcf and *samtools.vcf files.
  7. We've also pre-run samtools and GATK on the trio, resulting in GATK.all.vcf and samtools.all.vcf. (these files were generated with older versions of the tools)
  8.  The 1000 Genomes project is really oriented to producing .vcf files; the file "ceu20.vcf" contains all the latest genotypes from this trio based on abundant data from the project. 

Single-sample variant calling with bcftools

We would normally use the BAM file from a previous mapping step to call variants in this raw data. However, for the purposes of this tutorial we will use the actual BAM file provided by the 1000 Genomes Project (from which the .fastq file above was derived), which does lead to some oddities in the data. As a bonus exercise, you could map the data yourself using what you learned in the bowtie2 tutorial and then use the resulting .bam files.

For now, the bam file we want to focus on is:

$SCRATCH/GVA_Human_trios/raw_files/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam

With bcftools, this is a two-step process:

  1. The bcftools mpileup command transposes the mapped data in a sorted BAM file fully to genome-centric coordinates. It starts at the first base on the first chromosome for which there is coverage and prints out one line per base. Each line has information on every base observed in the raw data at that position, along with a lot of auxiliary information depending on which flags are set. It calculates the Bayesian prior probability given by the data, but does not estimate an actual genotype.
  2. bcftools call with a few options added uses the prior probability distribution and the data to calculate a genotype for the variants detected.


Remember to make sure you are on an idev node

It is unlikely that you are currently on an idev node, as copying the files while on an idev node causes the problems discussed above. Remember that the hostname command and showq -u can be used to check whether you are on a login node or a compute node.

If you need more information or help re-launching a new idev node, please see this tutorial.

You should request at least 60 minutes on the idev session to make sure the commands have time to finish running.

Recall that we installed samtools and bcftools in our GVA-SNV conda environment. Make sure you have activated your GVA-SNV environment and you have access to samtools and bcftools version 1.17

 Click here for commands and expected output if you are not sure
activate conda environment
conda activate GVA-SNV
Make sure you have the expected versions of samtools and bcftools installed
samtools --version
bcftools --version


samtools --version output:

samtools 1.17
Using htslib 1.17
Copyright (C) 2023 Genome Research Ltd.
# Followed by a bunch of compilation details.

bcftools --version output:

bcftools 1.17
Using htslib 1.17
Copyright (C) 2023 Genome Research Ltd.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
 What to do if you do not get version 1.17 for both samtools and bcftools in the above version checks?

If you are not seeing the correct versions, there is a problem with either activating or creating your environment. Try activating the environment again, go back to the SNV tutorial, or ask for help before continuing.



Calling variants using bcftools. Note the bcftools command is quite long and may wrap onto 2 lines on your monitor or extend to the right of what you can see without scrolling over
cd $SCRATCH/GVA_Human_trios 
bcftools mpileup --threads 48 -O u -f raw_files/ref/hs37d5.fa raw_files/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam | bcftools call --threads 48 -v -c - > trios_tutorial.raw.vcf

The above command may take ~20 minutes to run, based on past students' experience, and produces very little output, which can lead students to worry that their terminal has locked up. More than likely this is not the case, but you can hit the return key once or twice to verify that your terminal still adds blank lines. Since this command takes so long, it is recommended that you read ahead and/or switch to another tutorial while waiting for it to finish. This is one reason that idev nodes are not the usual way you will interact with your data; it is much better to submit to the queue and do something (anything) else.

One potential issue with this type of approach is that vcf files only record variation that can be seen in the data provided. When all reads mapping to a given location exactly match the reference (i.e. the sample is homozygous wildtype relative to the reference), no record is produced, which looks the same as having no data in those regions. This leads us to our next topic.

Multiple-sample variant calling with bcftools

Not being able to distinguish between no data and wildtype is not the end of the world for a single sample, but if you're actually trying to study human (or other organism) genetics, you must discriminate homozygous WT from a lack of data. This is done by providing many samples to the variant caller simultaneously. The concept extends further to populations: calling variants across a large and diverse population provides a stronger Bayesian prior probability distribution and therefore more sensitive detection.

To instruct bcftools to call variants across many samples, you must simply give it mapped data with each sample tagged separately. bcftools allows two methods to do this:

  1. By providing separate bam files for each sample. Note that in the following code block, the trailing \ symbol tells the command line that you are not done entering the command and not to execute anything until you hit return without a preceding \ mark.

    Warning, DO NOT run this command yet.

    bcftools multi-sample variants: separate bam files
    #bcftools mpileup --threads 48 -O u -f raw_files/ref/hs37d5.fa \
    #  raw_files/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
    #  raw_files/NA12891.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
    #  raw_files/NA12892.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
    #    | bcftools call -v -c - > trios_tutorial.multi-sample.vcf
    


    A pre-computed output file from this option is available at
    $BI/gva_course/GVA.multi-sample.vcf if you want to work with it without having to wait on the analysis to run yourself.


  2. By providing one or more bam files, each containing mapped reads from multiple samples tagged with unique samtools @RG tags.

    Do not run this command

    This command comes with a sizable caveat: if you intend to use this option, you must make sure you tag your reads with the right @RG tag. This can easily be done during the samse or sampe stage of mapping with bwa (the -r option), with samtools merge, with Picard's AddOrReplaceReadGroups command, or with your own perl/python/bash commands. The problem is that it requires knowing all the samples you are going to group together ahead of time, and the individual sample input files have to be merged together. There are obviously times when this will be the desired and easier approach, but for the purposes of this course, we believe it makes more sense to keep all input files separate and have only a single merged output file. As part of the tutorial we do provide the files necessary for the following command to run.

    bcftools multi-sample variants: one or more bam files using @RG
    bcftools mpileup --threads 48 -O u -f raw_files/ref/hs37d5.fa <all_with@RG.bam> | bcftools call --threads 48 -v -c - > trios_tutorial.all.raw.vcf 

    An observation for the individuals interested in phylogenies

    I have a hunch that the use of read tags could be useful for your analysis IF you have a finite set of samples that you intend to analyze, have all the data available at once, and are willing to repeat your analysis from near scratch in the event of either of these factors changing. This is an excellent example of the importance of finding papers that have done similar analyses, and mimicking as much of what they have done as possible/reasonable.
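For reference, the @RG tag described in option 2 is a line in the BAM header naming a read group and the sample (SM) it belongs to; each read then carries a matching RG:Z: tag. The values below are hypothetical:

```text
@RG	ID:NA12878.lane1	SM:NA12878	LB:lib1	PL:ILLUMINA
```

With bwa samse/sampe, the same line can be supplied at mapping time as a single -r string with literal \t separators, e.g. -r "@RG\tID:NA12878.lane1\tSM:NA12878".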

Based on the discussion above, we are selecting the first option and providing the details of how the command should be run. As this command will generate very little output and take ~30 minutes to complete, you are once again reminded that a pre-computed output file is available at $BI/gva_course/GVA.multi-sample.vcf if you want to work with it without having to wait on the analysis to run yourself.

bcftools multi-sample variants: separate bam files
cd $SCRATCH/GVA_Human_trios
bcftools mpileup --threads 48 -O u -f raw_files/ref/hs37d5.fa \
  raw_files/NA12878.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
  raw_files/NA12891.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
  raw_files/NA12892.chrom20.ILLUMINA.bwa.CEU.exome.20111114.bam \
    | bcftools call -v -c - > trios_tutorial.multi-sample.vcf 

Observant students who run this command might notice that this time mpileup reports that it is working with 3 samples and 3 input files (while all previous analyses used 1 file and 1 sample).
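Before moving on, it may help to see the shape of a multi-sample VCF data line: after the FORMAT column (column 9) there is one genotype column per sample, in the order the BAM files were given on the command line. Here is a fabricated, simplified line (not taken from the real output) picked apart with cut:

```shell
# Fabricated tab-separated VCF data line; columns 10-12 each hold
# GT:DP-style values for one of the three individuals.
printf '20\t68749\t.\tT\tC\t501\t.\tDP=90\tGT:DP\t0/1:30\t0/0:31\t0/1:29\n' \
  | cut -f10-12
# prints the three per-sample columns: 0/1:30, 0/0:31 and 0/1:29
```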

Identify the lineage

If genetics works, you should be able to identify the child based strictly on the genotypes.  Can you do it?

 Hint...

You're trying to find the genotypes in the trios_tutorial.multi-sample.vcf file, and then use your knowledge of Mendelian inheritance to figure out which of the three samples is the only one that could be a child of the other two. 
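If you would rather script the check than eyeball it, here is a minimal sketch. The mendel_ok helper is our own invention (not part of samtools or bcftools): it simply asks whether a candidate child genotype can be assembled by taking one allele from each parent.

```shell
# mendel_ok PARENT1 PARENT2 CHILD -- prints "consistent" if the child
# genotype can be formed by taking one allele from each parent.
mendel_ok() {
  awk -v p1="$1" -v p2="$2" -v c="$3" 'BEGIN {
    split(p1, a, "/"); split(p2, b, "/"); split(c, k, "/")
    for (i = 1; i <= 2; i++)
      for (j = 1; j <= 2; j++)
        if ((a[i] == k[1] && b[j] == k[2]) || (a[i] == k[2] && b[j] == k[1])) {
          print "consistent"; exit
        }
    print "inconsistent"
  }'
}

mendel_ok 0/1 0/1 0/0   # prints: consistent
mendel_ok 0/1 0/0 1/1   # prints: inconsistent (1/1 needs a 1 from each parent)
```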

If you are working with provided data rather than generating the data yourself

Note that depending on how you obtained the data, you may have one of 2 different file names: "trios_tutorial.multi-sample.vcf" if you analyzed the data yourself, or "GVA.multi-sample.vcf" if you copied it from the BioITeam directory.

In the next code block you will either need to change the name of the file that you print to the screen with the cat command, or rename your file with the mv command.

This linux one-liner should give you a snapshot of data sufficient to figure it out:
cat trios_tutorial.multi-sample.vcf | tail -10000 | awk '{if ($6>500) {print $2"\t"$10"\t"$11"\t"$12}}' | grep "0/0" | sed s/':'/'\t'/g | awk '{print $2"\t"$4"\t"$6}' | tail -100 | sort | uniq -c | sort -n -r
 Explanation of command

Here are the steps going into this command:

  1. cat trios_tutorial.multi-sample.vcf |
    1. Dump the contents of trios_tutorial.multi-sample.vcf and pipe it to the next command
  2. tail -10000 |
    1. Take the last 10,000 lines and pipe it to the next command. As the top of the file has header information, the last lines are all data
  3. awk '{if ($6>500) {print $2"\t"$10"\t"$11"\t"$12}}' | 
    1. If the variant quality score (the 6th column, $6) is greater than 500, print fields 2 (SNP position), 10, 11, and 12 (the 3 genotypes), and pipe to the next command
  4. grep "0/0" |
    1. Filter for only lines where at least one sample has a homozygous reference (0/0) genotype and pipe them to the next command
    2.  Think about genetics and why this is important. If you aren't sure ask.
  5. sed s/':'/'\t'/g | awk '{print $2"\t"$4"\t"$6}' |
    1. Break the genotype call apart from the other per-sample information (such as depth): sed turns the colons into tabs so that awk can print just the genotype fields, and pipe to the next command
  6. tail -100 | sort | uniq -c | sort -n -r
    1. Take the last 100 lines. 100 is used to ensure we get some good informative counts, but not so many that noise becomes a significant problem.
    2. sort them so that identical lines are adjacent
    3. then count the unique lines (uniq -c)
    4. sort them again in numeric order, printed in reverse so the most common combinations come first

For more information about the individual commands and their options https://explainshell.com/ is a good jumping off point.
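The counting idiom at the end of the pipeline (sort | uniq -c | sort -n -r) is worth internalizing on its own; here it is applied to three made-up genotype rows:

```shell
# uniq -c only collapses adjacent duplicates, so the lines are sorted
# first; the final sort -n -r puts the most common row at the top.
printf '0/1\t0/1\t0/0\n0/0\t0/1\t0/0\n0/1\t0/1\t0/0\n' \
  | sort | uniq -c | sort -n -r
# the repeated row gets a count of 2, the unique row a count of 1
```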

example output of sample solution
     35 0/1	0/1	0/0
     20 0/0	0/1	0/0
     18 0/1	0/0	0/1
     14 0/1	1/1	0/0
      6 0/0	0/1	0/1
      5 1/1	1/1	0/0
      1 1/1	0/1	0/0
      1 0/1	0/0	0/0

Given this information, can you make any determination about family structure for these 3 individuals? The first column is the number of occurrences generated by the uniq -c command in our big one-liner; the following 3 columns are the three individual samples. Consider that while some lines may not show a viable Mendelian inheritance pattern, you should weight things according to how many times each scenario occurred, since our filtering was fairly limited.

 Discussion of the output

Overall this data is consistent with column 1 (NA12878) being the child. Lines marked with an * are inconsistent:

     35 0/1	0/1	0/0
     20 0/0	0/1	0/0
     18 0/1	0/0	0/1
     14 0/1	1/1	0/0	# middle can't be child
      6 0/0	0/1	0/1
      5 1/1	1/1	0/0	* no Mendelian combination exists
      1 1/1	0/1	0/0	*
      1 0/1	0/0	0/0	*

This is, in fact, the correct assessment - NA12878 is the child.

Going further

Refining your analysis

Can you modify the big one-liner command to be more strict, so that you eliminate the non-Mendelian inheritance situations, or to consider a larger number of loci?
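As a nudge in one possible direction (a sketch, not the full answer): the strictness lives in the awk quality test ($6>500) and the tail -100 cap. The stage-by-stage behavior of the filtering can be checked on a single fabricated VCF-like line, here with a quality of 950 against a raised cutoff of 900:

```shell
# One fabricated line run through the same filtering stages as the big
# one-liner, but with a stricter quality cutoff ($6>900):
printf '20\t68749\t.\tT\tC\t950\t.\tDP=90\tGT:DP\t0/1:30\t0/0:31\t0/1:29\n' \
  | awk '{if ($6>900) {print $2"\t"$10"\t"$11"\t"$12}}' \
  | grep "0/0" \
  | sed s/':'/'\t'/g \
  | awk '{print $2"\t"$4"\t"$6}'
# the line passes the cutoff and reduces to the three genotypes 0/1, 0/0, 0/1
```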

Scope and usefulness

  • This same type of analysis can be done on much larger cohorts, say 100s or even 1000s of individuals with known disease state, to attempt to identify associations between allelic state and disease state.
  • This is the first step of building a phylogeny of 3 related individuals. Expanding this could be of use for genetic counseling or larger phylogenetic analysis.


Return to GVA2023 page.