
...

After running bin/mg5 from the top-level Madgraph directory, you can configure either a leading-order (LO) or a next-to-leading-order (NLO) computation.
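
A minimal sketch of getting to that prompt (the directory name Madgraph is an assumption; use the path of your own installation):

cd Madgraph   # top-level Madgraph directory
./bin/mg5     # opens the interactive MG5 prompt; the commands below are entered there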

  • LO:
    Example: Madgraph ttZ + up to 2 jets, leading order

    generate p p > t t~ z @0
    add process p p > t t~ z j @1
    add process p p > t t~ z j j @2
    output ttZ-LO # output directory
    
    You will probably want to edit output_dir/Cards/run_card.dat to change the number of events generated per run, and to set the ickkw variable to 1 to enable ME+PS (matrix element + parton shower) matching; the relevant lines are sketched after this list.
  • NLO:
    aMC@NLO ttZ

    generate p p > t t~ z [QCD]
    output ttZ-NLO # output directory
    
    I haven't fully validated NLO yet.
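
The run card is plain text made of "value = name" pairs; a sketch of the two relevant lines (the exact comments and defaults vary with the Madgraph version):

 10000 = nevents ! number of unweighted events to generate per run
     1 = ickkw   ! 0 = no matching, 1 = ME+PS (MLM) matching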

...

One feature of Stampede is that computing cores are allocated in blocks of 16 (one node), so even a single job will occupy (and be charged for) 16 slots. We can take advantage of this by submitting Madgraph jobs to a node in multicore mode (the default); each job will then use all 16 cores. In short: we submit one Madgraph job per run, and that job uses the whole node. Create the following script in the output directory above, changing ttZ-LO as appropriate:

batch_script_multicore:
#!/bin/bash
#SBATCH -J ttZ-LO
#SBATCH -o ttZ-LO.o
#SBATCH -n 1
#SBATCH -p normal
#SBATCH -t 10:00:00
# For peace of mind, in case we forgot before submission
module swap intel gcc
# Following is needed for Delphes
. /work/02130/ponyisi/root/bin/thisroot.sh
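# the two 0s below answer generate_events' interactive prompts (accept the defaults at each)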
bin/generate_events <<EOF
0
0
EOF
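
Submit the script to SLURM with sbatch from the output directory (the file name batch_script_multicore is just the title used above):

sbatch batch_script_multicore   # queue the job
squeue -u $USER                 # watch its status in the queue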

...

Create the following script in the output directory above, changing ttZ-LO as appropriate:

batch_script_condor:
#!/bin/bash
#SBATCH -J ttZ-LO
#SBATCH -o ttZ-LO.o
# MUST ask for one job per node (so we get one Condor instance per node)
#SBATCH -n 5 -N 5
#SBATCH -p normal
#SBATCH -t 10:00:00
# For peace of mind, in case we forgot before submission
module swap intel gcc
# Following is needed for Delphes
. /work/02130/ponyisi/root/bin/thisroot.sh
# path to Condor installation.  Every job gets a private configuration file, created by our scripts
CONDOR=/work/02130/ponyisi/condor
# create Condor configuration files specific to this job
$CONDOR/condor_configure.py --configure
# update environment variables to reflect job-local configuration
$($CONDOR/condor_configure.py --env)
# start Condor servers on each node
ibrun $CONDOR/condor_configure.py --startup
# Run job in cluster mode (the 0s answer the interactive prompts, as above)
bin/generate_events --cluster <<EOF
0
0
EOF
# cleanly shut down the Condor daemons on each node
ibrun $CONDOR/condor_configure.py --shutdown
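
Submission works the same way as before. If you change the node count, keep -n and -N equal so each node runs exactly one Condor instance (the 8-node figure below is just an example):

sbatch batch_script_condor
# e.g. for 8 nodes (8 x 16 = 128 cores), edit the script to read:
#   #SBATCH -n 8 -N 8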

...