DG-SWEM Fortran User Guide
This tutorial walks you through running your first test case in DG-SWEM, the experimental discontinuous Galerkin counterpart to ADCIRC, on TACC systems.
Obtaining DG-SWEM
There is a C version of DG-SWEM and a Fortran version. We will focus on the Fortran version, which can be obtained from a private GitHub repository (ask the maintainers for access).
Building DG-SWEM
DG-SWEM comes with its own makefile. Make sure the Intel compilers are loaded, then cd into the work directory and run make all. This automatically compiles and links the programs based on your runtime environment. You should end up with four binaries: adcprep, adcpost, dgswem, and dgswem_serial.
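That is, from the repository root:
cd work
make all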
Building on Frontera
Load the necessary modules:
module load TACC intel/23.1.0 impi
This loads the Intel Fortran and C compilers (ifort and icx), as well as Intel MPI. Then run make all as above.
Building on Vista
On the new machine Vista, we will instead use NVIDIA compilers. Here we can choose between the regular CPU version and a GPU-accelerated version (still in development). First, load the necessary modules by running
module load TACC
The Fortran compiler is now nvfortran, and the C and C++ compilers are nvc and nvc++, respectively.
CPU version
This is the default version. To compile, run
cd work
make compiler=nvhpc all
GPU version
This code is maintained in a separate branch, gpu. Sync this branch, then compile, as sketched below.
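Assuming a standard git workflow and the same nvhpc flag as the CPU build, the steps would be:
git checkout gpu
git pull
cd work
make compiler=nvhpc all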
Building a test case
Zach from the Water Institute has graciously lent us a script to build a test case, which we will adapt for DG-SWEM. It generates a rectangular sloping beach with incoming waves, using a specified number of elements. Let's create a 1,000-element case for now.
python slopingbeach.py 1000 sb1000
Output: sb1000 (mesh file), sb1000.15 (control file), sb1000.info (metadata).
Adapting the Test Case
The control file will have two options that are not currently valid in DG-SWEM:
IM = 511112: change this to IM = 0.
NOUTGE = -3: change this to NOUTGE = 1 (DG-SWEM does not support NetCDF output yet).
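If you prefer not to edit by hand, a quick sed patch would work, assuming each of these values appears only once in the control file:
sed -i 's/511112/0/; s/-3/1/' sb1000.15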
Running DG-SWEM
Preparing the input files
Apart from the ADCIRC input above, we need a DG-specific control file, fort.dg, in the directory from which we run the program. A sample file can be found in the work directory.
An important option is the rainfall flag, which takes the following values:
rainfall = 0: no rain
rainfall = 1: rain is generated using the R-CLIPER model based on the wind input
rainfall = 2: constant rain is generated over the whole domain
rainfall = 3: rain is generated using the IPET model
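For example, to enable R-CLIPER rain, you would set (assuming fort.dg uses the keyword = value syntax shown above):
rainfall = 1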
Serial run
To run on a single CPU, just launch the serial binary from the run directory.
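Assuming the dgswem_serial binary from the build step has been copied into (or is invoked from) the run directory, this is simply:
./dgswem_serial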
Once done, output files such as fort.61 and fort.63 will be created in the same directory (if enabled in fort.15).
Parallel run
To run in parallel, we first need to perform domain decomposition.
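Presumably this uses the adcprep binary from the build, run interactively from the directory containing the input files:
./adcprep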
It will ask for the number of MPI ranks you want to run on and the names of the input files. This creates multiple PE**** folders, one per MPI rank.
If running on a local machine, we can proceed to the next step. Otherwise, if running on TACC, consult the TACC documentation for starting jobs.
Run the parallel executable through the MPI launcher.
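On TACC systems the launcher is ibrun; assuming the dgswem binary is in the run directory, the launch would look like:
ibrun -n <N> ./dgswem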
where <N> is the number of MPI ranks we configured in the previous step. (Note that DG-SWEM doesn't have dedicated writer cores like ADCIRC.) Afterwards, run the post-processing step below to gather the partitioned output files into single files, i.e. to merge the fort.63 from each PE directory into one fort.63 in the run directory.
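Presumably this is the adcpost binary from the build, run from the same directory:
./adcpost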
GPU run on Vista (experimental)
Here we will run the code on an interactive queue. Start a 60-minute GPU session on a single node with:
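On TACC machines, interactive sessions are started with idev; assuming Vista's gh (Grace Hopper) GPU queue, the command would look something like:
idev -p gh -N 1 -m 60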
Then run:
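Presumably this invokes the serial binary built from the gpu branch (an assumption; the GPU build may use a different name):
./dgswem_serial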
Note that the MPI-GPU version is not yet supported. Additionally, only rainfall = 0 currently works in the GPU version.
For more details on interactive sessions, see https://docs.tacc.utexas.edu/hpc/vista/#launching-interactive
Congrats! You have just run your first DG-SWEM test case!