...
For questions on connecting to the machine and other details of use, check out the Stampede User Guide. Also look at this page on setting permissions on your files and directories.
...
Running Madgraph manually
I've installed a copy of Madgraph 5 and Fastjet on Stampede under `/work/02130/ponyisi/madgraph/`. I think all members of the project should have access to this directory. I've made modifications to make a 125 GeV Higgs the default.
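A quick way to confirm that you can actually read the shared installation (the path is just the one quoted above):

```bash
# Check that the shared MadGraph/Fastjet installation is visible to your account
ls /work/02130/ponyisi/madgraph/
```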
...
Do not run the `launch` command. We want to submit to the batch queues on our own terms. Stampede uses SLURM as its batch system, which is broadly similar to other batch systems out there.
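For reference, the handful of SLURM commands you will actually use day to day (the script name below is just a placeholder):

```bash
# Submit a batch script to the queue (prints the job ID)
sbatch my_batch_script

# List your own jobs and their current states
squeue -u $USER

# Cancel a job using the ID reported by sbatch/squeue
scancel 123456
```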
Running Madgraph in multicore mode (still works, but Condor is better, see below)
One feature of Stampede is that computing cores are allocated in blocks of 16 (one node), so even a single job will take (and be charged for) 16 slots. We can take advantage of this by submitting Madgraph jobs to a node in multicore mode (the default); they will then use all 16 cores. (In short: we submit one Madgraph job per run, and that job uses 16 cores.) Create the following script in the output directory above, changing `ttZ-LO` as appropriate:
```bash
#!/bin/bash
#SBATCH -J ttZ-LO
#SBATCH -o ttZ-LO.o
#SBATCH -n 1
#SBATCH -p normal
#SBATCH -t 10:00:00
# For peace of mind, in case we forgot before submission
module swap intel gcc
# Following is needed for Delphes
. /work/02130/ponyisi/root/bin/thisroot.sh
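# The two 0s fed on stdin below answer generate_events' interactive prompts
# (take the defaults and start generation), so the job runs unattended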
bin/generate_events <<EOF
0
0
EOF
```
Then call `sbatch batch_script_multicore` from the output directory. This will go off and run Madgraph on a node somewhere. You can follow the job output in the file `ttZ-LO.o`.
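While the job runs you can follow that log file; once it finishes, the generated events normally land under the `Events/` subdirectory of the process directory (the exact run name, `run_01` here, depends on how many runs you have already done):

```bash
# Follow the batch log while the job is running
tail -f ttZ-LO.o

# After the run completes, look for the event files (run name may differ)
ls Events/run_01/
```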
Running Madgraph with Condor
One feature of Stampede is that computing cores are allocated in blocks of 16 (one node). Madgraph doesn't deal with this so well (it wants a batch system with individual slots), so we humor it by booting a small Condor cluster within the node allocation from SLURM.
Create the following script in the output directory above, changing `ttZ-LO` as appropriate:
```bash
#!/bin/bash
#SBATCH -J ttZ-LO
#SBATCH -o ttZ-LO.o
# MUST ask for one job per node (so we get one Condor instance per node)
#SBATCH -n 5 -N 5
#SBATCH -p normal
#SBATCH -t 10:00:00
# For peace of mind, in case we forgot before submission
module swap intel gcc
# Following is needed for Delphes
. /work/02130/ponyisi/root/bin/thisroot.sh
# path to Condor installation. Every job gets a private configuration file, handled by our scripts
CONDOR=/work/02130/ponyisi/condor
# create Condor configuration files specific to this job
$CONDOR/condor_configure.py --configure
# update environment variables to reflect job-local configuration
$($CONDOR/condor_configure.py --env)
# start Condor servers on each node
ibrun $CONDOR/condor_configure.py --startup
# Run job
bin/generate_events <<EOF
0
0
EOF
# cleanly shut down Condors
ibrun $CONDOR/condor_configure.py --shutdown
```
Then call `sbatch batch_script_condor` from the output directory. This will go off and run Madgraph across the allocated nodes. You can follow the job output in the file `ttZ-LO.o`.
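If you want a larger (or smaller) Condor pool, scale the node request; keep `-n` and `-N` equal so SLURM starts exactly one Condor instance per node, as the comment in the script stresses. For example:

```bash
# e.g. request a 10-node pool instead of 5
#SBATCH -n 10 -N 10
```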
File transfer
The programs `scp` and `rsync` can be used to move files to and from Stampede. Keep files in `$WORK` or `$HOME` on Stampede.
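For example (the hostname and remote paths below are illustrative; substitute your own TACC username and work directory, and check the Stampede User Guide if the login host differs):

```bash
# Push an input file from your own machine to your work area on Stampede
# (NNNNN stands in for the numeric part of your $WORK path)
scp run_card.dat myusername@stampede.tacc.utexas.edu:/work/NNNNN/myusername/madgraph/ttZ-LO/Cards/

# Pull generated events back to your machine; rsync only transfers what changed
rsync -av myusername@stampede.tacc.utexas.edu:/work/NNNNN/myusername/madgraph/ttZ-LO/Events/ ./ttZ-LO-events/
```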