This tutorial teaches you how to get to your first test case in DG-SWEM, the experimental discontinuous Galerkin counterpart to ADCIRC, on TACC systems.
Obtaining DG-SWEM
There is a C version of DG-SWEM and a Fortran version. We will focus on the Fortran version, which can be obtained from a GitHub repo (it's private, so you have to ask around).
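Once you have access, cloning follows the usual pattern; the URL below is a placeholder, since the repo is private:

    git clone git@github.com:<org>/dgswem.git   # <org> is a placeholder; ask for the real URL
    cd dgswem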
Building DG-SWEM
DG-SWEM comes with its own makefile. To build it (see the example below):
- Make sure you have the Intel compilers loaded.
- cd into the work directory and run make all. This will automatically compile and link the programs based on your runtime environment.
- You should get adcprep, adcpost, dgswem, and dgswem_serial binaries.
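For example, from the top of a fresh clone (assuming the repo layout described above):

    cd work
    make all
    ls adcprep adcpost dgswem dgswem_serial   # the four expected binaries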
Building on Frontera
Load the necessary modules:
...
This loads the Intel compilers for Fortran and C (ifort and icx), as well as MPI. Then run make all as above.
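Exact module names and versions vary; a typical (assumed) set on Frontera looks like:

    module load intel impi   # Intel compilers + Intel MPI (names assumed; verify with module list)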
Building on Vista
On the new machine Vista, we will instead use NVIDIA compilers. Here we can choose between the regular CPU version and a GPU-accelerated version (still in development). First, load the necessary modules by running
...
The Fortran compiler is now nvfortran, and the C and C++ compilers are nvc and nvc++.
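The module names below are assumptions; check the Vista user guide or module spider for the current ones. Something along these lines:

    module load nvidia openmpi   # NVHPC compilers + MPI (module names assumed)
    nvfortran --version          # confirm the NVIDIA Fortran compiler is active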
CPU version
This is the default version. To compile, run
    cd work
    make compiler=nvhpc all
GPU version
This code is maintained in a separate branch, gpu. Run
    git fetch --all
    git checkout gpu
to sync this branch. Then compile:
    cd work
    make all
...
Building a Test Case
Zach from the Water Institute has graciously lent us a program to build a test case, which we will adapt for DG-SWEM:
(Attached file: slopingbeach.py)
    python slopingbeach.py 1000 sb1000

This outputs sb1000 (mesh file), sb1000.15 (control file), and sb1000.info (metadata).
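A quick check that the generator produced the expected files:

    ls sb1000*   # expect sb1000 (mesh), sb1000.15 (control), sb1000.info (metadata)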
Adapting the Test Case
The control file will have two options that are not currently valid in DG-SWEM:
- IM = 511112: change this to 0.
- NOUTGE = -3 (DG-SWEM does not support NetCDF output yet): change this to 1.
(A scripted edit is sketched below.)
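A minimal scripted edit, assuming the values appear at the start of their lines in sb1000.15; verify the result by hand, since -3 could begin other output-flag lines too:

    sed -i 's/^511112/0/' sb1000.15   # IM: 511112 -> 0
    sed -i 's/^-3/1/'     sb1000.15   # NOUTGE: -3 -> 1 (double-check no other line starts with -3)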
...
Running DG-SWEM
...
Preparing the input files
Apart from the ADCIRC input above, we need a DG-specific control file fort.dg in the same directory where we run the program. A sample file can be found in the work directory.
An important option is the rainfall flag, which denotes the following options (see the sketch after this list):
- rainfall = 0: no rain
- rainfall = 1: rain is generated using the R-CLIPER model based on the wind input
- rainfall = 2: constant rain is generated on the whole domain
- rainfall = 3: rain is generated using the IPET model
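One hedged way to get started, assuming you built in the work directory and are running from a sibling run directory (paths are assumptions):

    cp ../work/fort.dg .   # copy the sample DG control file
    grep -i rain fort.dg   # locate the rainfall flag before editing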
Serial run
To run on a single CPU, just type
    ./dgswem_serial
Once done, the output files such as fort.61 and fort.63 will be created in the same directory (if enabled in fort.15).
Parallel run
To run in parallel, we need to first perform domain decomposition. Run
    ./adcprep
It will ask for the number of MPI ranks you want to run on, and the names of the input files.
Running the Test Case
The adcprep step creates multiple PE**** folders, one per MPI rank.
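A quick sanity check after adcprep completes:

    ls -d PE*   # expect one PExxxx directory per MPI rank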
If running on a local machine, we can proceed to the next step. Otherwise, if running on TACC, consult the docs for starting jobs.
Run the following:
    mpirun -np <N> ./dgswem
where <N> is the number of MPI ranks we configured in the previous step. (Note that DG-SWEM doesn't have dedicated writer cores like ADCIRC.) Afterwards, run
    ./adcpost
to grab and agglomerate the partitioned output files into one, i.e., the fort.63 in each PE directory into a single fort.63 in the run directory.
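For instance, for the global elevation output:

    ls PE0000/fort.63   # partitioned output written by dgswem
    ls fort.63          # single agglomerated file produced by adcpost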
GPU run on Vista (experimental)
Here we will run the code on an interactive queue. Start a 60-minute GPU session on a single node with:
    idev -p gh -N 1 -m 60
Then run
    ./dgswem_serial
Note that the MPI-GPU version is still not supported. Additionally, only rainfall = 0 currently works for this GPU version.
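Inside the idev session, it can be worth confirming the GPU is visible before launching (nvidia-smi is the standard NVIDIA utility for this):

    nvidia-smi        # confirm the node's GPU is visible
    ./dgswem_serial   # GPU build from the gpu branch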
For more details on interactive sessions, see https://docs.tacc.utexas.edu/hpc/vista/#launching-interactive
...
Congrats! You have just run your first test case of DG-SWEM!
...