
...

For questions on connecting to the machine and other details of use, check out the Stampede User Guide. Also look at this page on setting permissions on your files and directories.
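As a sketch of the permissions setup, the commands below make a directory group-readable so collaborators can fetch files while others are locked out. The directory and file names here are placeholders, not from the Stampede guide; adapt them to your own `$WORK` layout.

```shell
# Sketch: make a shared area group-readable (placeholder names).
mkdir -p demo_share
# Owner: full access; group: read and traverse; others: nothing.
chmod 750 demo_share
# Files inside should be group-readable but not group-writable.
touch demo_share/events.lhe
chmod 640 demo_share/events.lhe
# Show the resulting octal modes.
stat -c '%a %n' demo_share demo_share/events.lhe
```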

...

Code Block
title: batch_script_condor
#!/bin/bash
#SBATCH -J ttZ-LO
#SBATCH -o ttZ-LO.o
# MUST ask for one job per node (so we get one Condor instance per node)
#SBATCH -n 5 -N 5
#SBATCH -p normal
#SBATCH -t 10:00:00
# For peace of mind, in case we forgot before submission
module swap intel gcc
# Following is needed for Delphes
. /work/02130/ponyisi/root/bin/thisroot.sh
# path to Condor installation.  Every job gets a private configuration file, created by our scripts
CONDOR=/work/02130/ponyisi/condor
# create Condor configuration files specific to this job
$CONDOR/condor_configure.py --configure
# update environment variables to reflect job-local configuration
$($CONDOR/condor_configure.py --env)
# start Condor servers on each node
ibrun $CONDOR/condor_configure.py --startup
# Run job
bin/generate_events --cluster <<EOF
0
0
EOF
# cleanly shut down the Condor daemons on each node
ibrun $CONDOR/condor_configure.py --shutdown

Then call sbatch batch_script_condor from the output directory. This will go off and run Madgraph over a bunch of nodes. You can look at the job output in the file ttZ-LO.o.

...

The programs scp and rsync can be used to move files to and from Stampede. Keep files in $WORK or $HOME on Stampede.

Running Pythia and Delphes

Pythia and Delphes 3 are part of the Madgraph installation. They will be run automatically as part of bin/generate_events if the cards pythia_card.dat and delphes_card.dat exist in the Cards directory. A Delphes card for 140-pileup ATLAS is part of the template, so it will be in your Cards directory; you can copy it to delphes_card.dat. However, this will not handle pileup.
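The enabling step can be sketched as below. The template card name delphes_card_ATLAS_140PileUp.dat is an assumption for illustration (use whatever name the template actually put in Cards/), and the first two lines just fabricate a stand-in so the copy has something to act on.

```shell
# Stand-in for the template card Madgraph placed in Cards/ (illustration only).
mkdir -p Cards
echo "# ATLAS 140-pileup detector card" > Cards/delphes_card_ATLAS_140PileUp.dat

# Copy the template into the name that bin/generate_events looks for;
# with Cards/delphes_card.dat present, Delphes runs automatically.
cp Cards/delphes_card_ATLAS_140PileUp.dat Cards/delphes_card.dat
ls Cards/
```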

Making a gridpack

It's great to have Stampede, but we may need to run the generated Madgraph code in other environments. In particular it appears that the Snowmass effort is trying to collect Madgraph codes for various processes. One way to make a distributable version of a Madgraph run is to create a "gridpack." These are frozen versions of the code (no parameter changes allowed, integration grids already fixed) which can be easily run on Grid sites.

To make a gridpack, ensure that you're happy with the cards for your process, then edit Cards/run_card.dat to set the gridpack option to .true. (the line reads .true. = gridpack). Then run generate_events as normal via a batch job (you probably want to set it to generate very few events). This will produce a file in your output directory called something like run_01_gridpack.tar.gz. Now you can follow the instructions under Submitting Madgraph gridpacks to Panda to run jobs on the Grid.
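The run_card edit can be scripted rather than done by hand. The sketch below builds a two-line stand-in run_card (Madgraph run cards use "value = name ! comment" lines) and flips the gridpack flag with sed, also dropping nevents since only a few events are needed; the stand-in values are for illustration only.

```shell
# Stand-in run_card.dat mimicking the relevant lines (illustration only).
mkdir -p Cards
cat > Cards/run_card.dat <<'EOF'
 10000 = nevents ! number of events
 .false. = gridpack ! True = setting up grid pack
EOF

# Enable gridpack mode and shrink the event count.
sed -i 's/\.false\.\( *= *gridpack\)/.true.\1/' Cards/run_card.dat
sed -i 's/10000\( *= *nevents\)/100\1/' Cards/run_card.dat

# Confirm the flag flipped.
grep gridpack Cards/run_card.dat
```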