...

We should mention that launcher_creator.py does some under-the-hood magic for you and automatically calculates how many cores to request on Lonestar, assuming you want one core per process. You may not notice it, but be grateful: this saves you from ever having to think about a confusing calculation that even the most seasoned computational biologists routinely got wrong (which is exactly why the script was written to handle it).
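For the curious, here is a rough sketch of the sort of arithmetic being handled for you. The 24 cores per node figure and the variable names are assumptions made purely for illustration (cores per node differ between TACC systems), so treat this as a back-of-the-envelope check rather than what launcher_creator.py literally does.

Code Block
language: bash
title: a rough sketch of the node math launcher_creator.py hides from you
# ASSUMPTIONS: one core per process and 24 cores per node (a Lonestar5-era figure);
# the variable names are illustrative, not taken from launcher_creator.py
TASKS=$(wc -l < commands)      # one task per line of your commands file
CORES_PER_NODE=24              # adjust for whichever TACC system you are on
NODES=$(( (TASKS + CORES_PER_NODE - 1) / CORES_PER_NODE ))  # divide and round up
echo "$TASKS tasks at 1 core each -> request $NODES node(s)"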

...

Now that we have an understanding of what the different parts of running a job are, let's actually run a job. The goal of this sample job is to give you something to look back on and remember what you did while you were here. As a safety measure, you cannot submit jobs from inside an idev node (similarly, you cannot run a commands file that itself submits new jobs on the compute nodes). So check if you are on an idev node (showq -u), and if so, log out before continuing. Navigate to the $SCRATCH directory before doing the following.
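If you are not sure where you are, the few lines below (using the showq -u and cds commands mentioned above) will get you to the right place; they are reminders of the steps rather than commands you must run verbatim.

Code Block
language: bash
title: make sure you are not in an idev session and move to $SCRATCH
showq -u   # the check suggested above: an active interactive (idev) job listed here means you may still be inside it
logout     # only if you are inside an idev session; this drops you back to a login node
cds        # move to your scratch directory (equivalent to cd $SCRATCH)
pwd        # confirm you are sitting in $SCRATCH before continuing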

Code Block
language: bash
title: how to make a sample commands file
collapse: true
# remember that things after the # sign are ignored by bash and most other programming languages
cds  # move to your scratch directory
nano commands
 
# the following lines should be entered into nano
echo "My name is _____ and todays date is:" > BDIBGVA2019.output.txt
date >> BDIBGVA2019.output.txt
echo "I have just demonstrated that I know how to redirect output to a new file, and to append things to an already created file. Or at least thats what I think I did" >> BDIBGVA2019.output.txt
 
echo "i'm going to test this by counting the number of lines in the file that I am writing to. So if the next line reads 4 I remember I'm on the right track" >> BDIBGVA2019.output.txt
wc -l BDIBGVA2019.output.txt >> BDIBGVA2019.output.txt
 
echo "I know that normally i would be typing commands on each line of this file, that would be executed on a compute node instead of the head node so that my programs run faster, in parallel, and do not slow down others or risk my tacc account being locked out" >> BDIBGVA2019.output.txt
 
echo "i'm currently in my scratch directory on lonestar. there are 2 main ways of getting here: cds and cd $SCRATCH:" >>BDIB>>GVA2019.output.txt
pwd >> BDIBGVA2019.output.txt
 
echo "over the last week I've conducted multiple different types of analysis on a variety of sample types and under different conditions. Each of the exercises was taken from the website https://wikis.utexas.edu/display/bioiteam/Genome+Variant+Analysis+Course+20172019" >> BDIBGVA2019.output.txt
 
echo "using the ls command i'm now going to try to remind you (my future self) of what tutorials I did" >> BDIBGVA2019.output.txt
 
ls -1 >> BDIBGVA2019.output.txt
 
echo "the contents of those directories (representing the data i downloaded and the work i did) are as follows: ">> BDIBGVA2019.output.txt
find */* >> GVA2019.output.txt
 
echo "the commands that i have run on the headnode are: " >> BDIBGVA2019.output.txt
history >> BDIBGVA2019.output.txt
 
echo "the contents of this, my commands file, which i will use in the launcher_creator.py script are: ">>BDIB>>GVA2019.output.txt
cat commands >> BDIBGVA2019.output.txt
 
echo "finally, I will be generating a job.slurm file using the launcher_creator.py script using the following command:" >> BDIBGVA2019.output.txt
echo 'launcher_creator.py -w 1 -N 1 -n "what_i_did_at_BDIB_2017GVA2019" -t 00:0215:00 -a "UT-2015-05-18"' >> BDIBGVA2019.output.txt # this will create a my_first_job.slurm file that will run for 215 minutes
echo "and i will send this job to the que using the the command: sbatch what_i_did_at_BDIB_2017GVA2019.slurm" >> BDIBGVA2019.output.txt  # this will actually submit the job to the Queue Manager and if everything has gone right, it will be added to the development queue.

 
ctrl o  # keyboard command to write out (save) your nano file
ctrl x  # keyboard command to close the nano interface
wc -l commands  # use this command to verify the number of lines in your commands file.
# expected output:
30 commands

# if you get a much larger number than 30, edit your commands file with nano so each command is a single line as they appear above.
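# a quick reminder of the flags used below (our paraphrase; launcher_creator.py's own help text is the authoritative reference):
# -w tasks per node, -N number of nodes, -n job name (which also names the .slurm file),
# -t maximum run time (hh:mm:ss), -a allocation to charge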
launcher_creator.py -w 1 -N 1 -n "what_i_did_at_GVA2019" -t 00:15:00 -a "UT-2015-05-18"
sbatch what_i_did_at_GVA2019.slurm

Interrogating the launcher queue

...

Based on our example you may have expected one new file to have been created during the job submission (GVA2019.output.txt), but you will actually find two extra files as well: what_i_did.e(job-ID) and what_i_did.o(job-ID). When things have worked well, these files are typically ignored. When your job fails, they offer insight into why, so you can fix things and resubmit.
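If you want a quick look at them, something like the following works; the exact file names depend on your job name and job ID, so the wildcards here are just a convenient guess based on the names used in this example.

Code Block
language: bash
title: peek at the job's output and error files
# the wildcards below assume your job name starts with what_i_did, as in this example
ls what_i_did*       # the .o(job-ID) and .e(job-ID) files sit alongside your .slurm file
cat what_i_did*.e*   # anything the job wrote to standard error (hopefully empty)
cat what_i_did*.o*   # anything the job wrote to standard output while it ran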

...

Code Block
language: bash
title: make a single final file using the cat command and copy it to a useful work directory
collapse: true
# remember that things after the # sign are ignored by bash 
cat GVA2019.output.txt > end_of_class_job_submission.final.output
mkdir $WORK/GVA2019
mkdir $WORK/GVA2019/end_of_course_summary/  # each directory must be made in order, otherwise you will get a "no such file or directory" error
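# (alternatively, mkdir -p $WORK/GVA2019/end_of_course_summary/ creates both directories in a single command)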
cp end_of_class_job_submission.final.output $WORK/GVA2019/end_of_course_summary/
cp what_i_did* $WORK/GVA2019/end_of_course_summary/  # note this grabs the 2 output files generated by TACC about your job run as well as the .slurm file you created to tell it how to run your commands file

cp commands $WORK/GVA2019/end_of_course_summary/
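A quick way to confirm everything landed where you expected it (the path below simply matches the directories created above):

Code Block
language: bash
title: confirm your summary files made it to $WORK
ls -l $WORK/GVA2019/end_of_course_summary/  # should list the final output file, the .slurm file, the .o/.e files, and your commands file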


Return to GVA2019 to work on any additional tutorials you are interested in.