Running Madgraph 5 (and aMC@NLO) on Stampede
First, you need to be a member of our project on Stampede. This determines how TACC accounts for CPU use. If you aren't a member, first get an account at the TACC portal page, then let Peter know and he will add you to the project.
For questions on connecting to the machine and other details of use, check out the Stampede User Guide.
I've installed a copy of Madgraph 5 and Fastjet on Stampede under /work/02130/ponyisi/madgraph/. I think all members of the project should have access to this directory. I've made modifications to make a 126 GeV Higgs the default.
Before running Madgraph you should run module swap intel gcc; we need to use the gcc compiler family (in particular gfortran), not the Intel ones.
After running bin/mg5 from the top-level Madgraph directory, one can configure either an LO or an NLO computation.
- LO:
Example: Madgraph ttZ + up to 2 jets, leading order

    generate p p > t t~ z @0
    add process p p > t t~ z j @1
    add process p p > t t~ z j j @2
    output ttZ-LO  # output directory

You probably want to edit output_dir/run_card.dat to change the number of events that will be generated in a run, and to set the ickkw variable to 1 to enable ME+PS (matrix element + parton shower) matching.
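The relevant run_card.dat entries look roughly like the following sketch (the values and comment text are illustrative; the inline comments in the card itself are authoritative):

```
  10000 = nevents ! number of unweighted events requested
      1 = ickkw   ! 0 = no matching, 1 = MLM (ME+PS) matching
```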
- NLO:
I haven't fully validated NLO yet.
Example: aMC@NLO ttZ

    generate p p > t t~ z [QCD]
    output ttZ-NLO  # output directory
Do not run the launch command. We want to submit to the batch queues on our own terms. Stampede uses "SLURM" as its batch system; this is vaguely like any other batch system out there.
One feature of Stampede is that computing cores are allocated in blocks of 16 (one node), so even a single job will take (and be charged for) 16 slots. We can take advantage of this by running Madgraph in multicore mode (the default): each submitted job will then use all 16 cores of its node. In short, we submit one Madgraph job per run, and that job uses 16 cores. Create the following script in the output directory above, changing ttZ-LO as appropriate:
Batch script goes here ...
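A minimal sketch of such a SLURM batch script, written here with a heredoc so it can be pasted in one go. The job name, queue, and time limit are assumptions; adjust them for your project and allocation:

```shell
# Write a SLURM batch script (job name, queue, and time limit are assumptions).
cat > batch_script <<'EOF'
#!/bin/bash
#SBATCH -J ttZ-LO          # job name
#SBATCH -o ttZ-LO.%j.out   # output file (%j expands to the job ID)
#SBATCH -N 1               # one node = 16 cores on Stampede
#SBATCH -n 16              # use all 16 tasks on that node
#SBATCH -p normal          # queue (assumed; check your allocation)
#SBATCH -t 12:00:00        # wall-clock limit (assumed)

module swap intel gcc      # same compiler setup as for interactive use
./bin/generate_events -f   # run Madgraph from the output directory
                           # (-f accepts the default answers; verify locally)
EOF
```

The #SBATCH lines are comments to the shell but directives to SLURM, so the same file is both a valid shell script and a job description.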
Then call sbatch batch_script from the output directory. This will go off and run Madgraph on a node somewhere; squeue -u $USER shows the job's status while it waits or runs. You can look at the job output in the file SLURM writes to the submission directory (named by the #SBATCH -o directive in your batch script, or slurm-<jobid>.out by default).
The programs scp and rsync can be used to move files to and from Stampede. Keep files in $WORK or $HOME on Stampede.