
Two BRCF research pods have NVIDIA GPU servers; however, their use is restricted to the groups that own those pods.

Servers

Hopefog pod

hfogcomp04.ccbb.utexas.edu compute server on the Hopefog pod (Ellington/Marcotte):

  • Dell PowerEdge R750XA
  • dual 24-core/48-thread CPUs (48 cores, 96 hyperthreads total)
  • 512 GB RAM
  • 2 NVIDIA Ampere A100 GPUs with 32 GB of onboard RAM each

Wilke pod

wilkcomp03.ccbb.utexas.edu compute server on the Wilke pod:

** In progress **

Resources

Tests

Two Python scripts in /stor/scratch/GPU_info can be used to verify that you have access to the server's GPUs. Run them from the command line under time to compare their run times.

  • TensorFlow
    • time (python3 /stor/scratch/GPU_info/tensorflow_example.py)
      • should take 30 seconds or less with a GPU, over 1 minute with CPUs only
  • PyTorch – **not yet working**
    • time (python3 /stor/scratch/GPU_info/pytorch_example.py)

Note that this is a simple test: on CPU-only servers it uses multiple cores, while on GPU servers it uses only one GPU, which is one reason the run times do not differ more.

CUDA

These servers have both CUDA 11 and CUDA 12 installed.
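With two CUDA toolkits installed side by side, you typically select one by putting its bin directory first on your PATH. The sketch below assumes the common /usr/local/cuda-&lt;version&gt; install layout; the exact directory names on these servers may differ, so adjust CUDA_HOME accordingly.

```shell
# Select a CUDA toolkit by prepending its directories to the search paths.
# CUDA_HOME here is an assumed location; verify it exists on the server first.
CUDA_HOME=/usr/local/cuda-12
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# Confirm which toolkit is now active (skipped if nvcc is not on PATH).
if command -v nvcc >/dev/null 2>&1; then
    nvcc --version
fi
```

Switching to CUDA 11 would be the same with a different CUDA_HOME; add the export lines to your ~/.bashrc if you always want the same version.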
