...
To run Sherpa, create a directory and an appropriate run card in it:
# The following generates p p -> t t~ tau+ tau-
(run){
# ATLAS general parameters
MASS[6]=172.5
MASS[23]=91.1876
MASS[24]=80.399
WIDTH[23]=2.4952
WIDTH[24]=2.085
SIN2THETAW=0.23113
MAX_PROPER_LIFETIME=10.0
MI_HANDLER=Amisic
# Number of events to generate
EVENTS = 1000
# Output events in HEPMC format, with this filename prefix
HEPMC_OUTPUT = ttll-hi
# take tau mass into account
MASSIVE[15] = 1
}(run)
(beam){
# beams are protons at 7 TeV each
BEAM_1 = 2212; BEAM_ENERGY_1 = 7000;
BEAM_2 = 2212; BEAM_ENERGY_2 = 7000;
}(beam)
(processes){
# collide light quarks/gluons (container 93) into top, antitop, tau+, tau-, and up to two additional light quarks/gluons
Process 93 93 -> 6[a] -6[b] 15 -15 93{2};
# on-shell top -> W b
DecayOS 6[a] -> 5 24;
DecayOS -6[b] -> -5 -24;
# matrix element/parton shower matching with the CKKW prescription
CKKW sqr(30/E_CMS)
# enhance rates for the seven- and eight-particle final states (one and two extra partons) - these events will be weighted
Enhance_Factor 2.0 {7}
Enhance_Factor 5.0 {8}
# Factorization/renormalization scales: loose prescription for high-multiplicity states
Scales LOOSE_METS{MU_F2}{MU_R2} {8}
# Allow a larger integration error for high-multiplicity states
Integration_Error 0.1 {7,8}
End process;
}(processes)
(selector){
# Ditau invariant mass between 7 GeV and the CM energy
Mass 15 -15 7.0 E_CMS
}(selector)
(mi){
# Multiple interactions on
MI_HANDLER = Amisic # None or Amisic
}(mi)
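Before submitting a batch job, it is worth doing a short interactive test run in the run directory to check that the run card parses and to let Sherpa set up its process libraries and integration results. The sketch below is illustrative: it assumes the run card is saved under Sherpa's default name Run.dat, reuses the installation path from the batch script further down, and picks an arbitrary directory name and event count. If Sherpa stops and asks you to compile its process libraries (the makelibs step used by the Amegic matrix-element generator), do so and rerun it before submitting.

# create the run directory and place the run card in it (names are illustrative)
mkdir ttll-run && cd ttll-run
# save the run card shown above as Run.dat (Sherpa's default run-card name)
# short test run: a command-line tag=value setting overrides the run card, as with MPI_COMBINE_PROCS below
/work/02130/ponyisi/sherpa-intel/bin/Sherpa EVENTS=10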
You can then create a SLURM batch submission script and submit it:
#!/bin/bash
#SBATCH -J my-process
#SBATCH -o my-process.txt
#SBATCH -n 160
#SBATCH -p normal
#SBATCH -t 24:00:00
# launch on 160 cores; the maximum job time is 24 hours. In practice this runs as a hybrid job (1 process per node, 16 threads per process).
ibrun /work/02130/ponyisi/sherpa-intel/bin/Sherpa MPI_COMBINE_PROCS=16
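Submit the script with sbatch and monitor it with the usual SLURM tools; the Sherpa log goes to the file named by the -o option, and the generated events are written in the run directory to HepMC files whose names begin with the HEPMC_OUTPUT prefix (ttll-hi). A minimal sketch, assuming the script above was saved as submit.slurm (the file name is illustrative):

sbatch submit.slurm    # submit the batch script shown above
squeue -u $USER        # check the job's state in the queue
tail -f my-process.txt # follow the Sherpa log (the -o file from the script)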