Two BRCF research pods have NVIDIA GPU servers; however, their use is restricted to the groups that own those pods.
Servers
Hopefog pod
hfogcomp04.ccbb.utexas.edu compute server on the Hopefog pod (Ellington/Marcotte):
- Dell PowerEdge R750XA
- dual 24-core/48-thread CPUs (48 cores, 96 hyperthreads total)
- 512 GB system RAM
- 2 NVIDIA Ampere A100 GPUs with 32 GB onboard RAM each
Wilke pod
wilkcomp03.ccbb.utexas.edu compute server on the Wilke pod:
- GIGABYTE MC62-G40-00 workstation
- AMD Ryzen 5975WX CPU (32 cores, 64 hyperthreads total)
- 512 GB system RAM
- 4 NVIDIA RTX 6000 Ada GPUs
GPU-enabled software
AlphaFold
The AlphaFold protein structure prediction software is available on all AMD GPU servers. The /stor/scratch/AlphaFold directory contains the large required databases, under the data.3 sub-directory. There is also an AMD example script, /stor/scratch/AlphaFold/alphafold_example_amd.sh, and an alphafold_example_nvidia.sh script on pods that also have NVIDIA GPUs (e.g. the Hopefog pod). Interestingly, our timing tests indicate that AlphaFold performance is quite similar across all the AMD and NVIDIA GPU servers.
TensorFlow and PyTorch examples
Two Python scripts located in /stor/scratch/GPU_info can be used to verify that you have access to the server's GPUs from TensorFlow or PyTorch. Run them from the command line under time to compare run times.
- TensorFlow
- time ( python3 /stor/scratch/GPU_info/tensorflow_example.py )
- should take 30s or less with a GPU, more than 1 minute with CPUs only
- this is a simple test; on CPU-only servers multiple cores are used, but only 1 GPU on GPU servers, which is one reason the times are not more different
- PyTorch
- time ( python3 /stor/scratch/GPU_info/pytorch_example.py )
- takes ~30s or less to complete on wilkcomp03
- takes ~1 minute to complete on hfogcomp04
If GPUs are available and accessible, the output generated will indicate they are being used.
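As a lighter-weight check than the timing scripts, each framework can report GPU visibility directly. A minimal sketch, assuming python3 with TensorFlow and PyTorch installed (as on these servers); each snippet still prints a status line if the framework is missing:

```shell
# Ask TensorFlow which GPUs it can see
python3 - <<'EOF'
try:
    import tensorflow as tf
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
except ImportError:
    print("TensorFlow: not installed")
EOF

# Ask PyTorch whether CUDA devices are available
python3 - <<'EOF'
try:
    import torch
    print("PyTorch CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch: not installed")
EOF
```

An empty list or a False here, on a server that does have GPUs, usually points to a driver or environment problem rather than a problem with your own code.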
Resources
Command-line diagnostics
Use nvidia-smi to verify access to the server's GPUs
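Beyond the default summary, nvidia-smi also supports a scriptable query mode. A few common invocations (the query field names below are standard ones; nvidia-smi --help-query-gpu lists them all), guarded so they are skipped on machines without the NVIDIA driver:

```shell
# Skip everything if the NVIDIA driver/tools are not installed
if command -v nvidia-smi >/dev/null; then
    nvidia-smi                                    # one-shot summary of all GPUs
    nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total \
               --format=csv                       # scriptable CSV output
    # nvidia-smi -l 5                             # uncomment to refresh every 5s
else
    echo "nvidia-smi not found"
fi
```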
CUDA
Both hfogcomp04 and wilkcomp03 have CUDA 11.8 and CUDA 12.x installed.
To make CUDA 11 active:
export CUDA_HOME=/usr/local/cuda-11.8
export PATH=$CUDA_HOME/bin:$PATH
To make CUDA 12 active:
export CUDA_HOME=/usr/local/cuda-12
export PATH=$CUDA_HOME/bin:$PATH
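The exports above can be wrapped in a small shell function for convenience. This helper is an assumption (it is not installed on the pods); it also prepends the toolkit's lib64 directory to LD_LIBRARY_PATH, which some GPU builds expect:

```shell
# use_cuda: hypothetical convenience wrapper around the exports shown above
use_cuda() {
    export CUDA_HOME="/usr/local/cuda-$1"            # e.g. 11.8 or 12
    export PATH="$CUDA_HOME/bin:$PATH"
    # Assumption: some builds also need the toolkit libraries on the loader path
    export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
}

use_cuda 11.8
echo "$CUDA_HOME"                                    # prints /usr/local/cuda-11.8
command -v nvcc >/dev/null && nvcc --version || true # confirm the active toolkit
```

Note that these settings only affect the current shell session; add them to your ~/.bashrc if you want one toolkit active by default.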
Sharing resources
Since there is no batch system on BRCF pod compute servers, users should monitor their own resource usage, and that of other users, in order to share resources appropriately.
- Use top to monitor running tasks (or top -i to exclude idle processes)
- commands while top is running include:
- M - sort task list by memory usage
- P - sort task list by processor usage
- N - sort task list by process ID (PID)
- T - sort task list by run time
- 1 - show usage of each individual hyperthread
- they're called "CPUs" but are really hyperthreads
- this list can be long; non-interactive mpstat may be preferred
- Use mpstat to monitor overall CPU usage
- mpstat -P ALL to see usage for all hyperthreads
- mpstat -P 0 to see specific hyperthread usage
- Use free -g to monitor overall RAM memory and swap space usage (in GB)
- Use nvidia-smi to monitor GPU usage
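The interactive checks above can be combined into a one-shot, non-interactive snapshot. A convenience sketch (mpstat comes from the sysstat package; both it and nvidia-smi are skipped if absent):

```shell
echo "== Top processes (batch mode, no interactive keys needed) =="
top -b -n 1 | head -15
echo "== Memory and swap (GB) =="
free -g
echo "== Per-hyperthread CPU usage =="
command -v mpstat >/dev/null && mpstat -P ALL 1 1 || echo "mpstat not installed"
echo "== GPU usage =="
command -v nvidia-smi >/dev/null && nvidia-smi || echo "nvidia-smi not installed"
```

Running a snapshot like this before launching a large job is a quick way to confirm the server is not already heavily loaded by other users.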