Table of Contents

...

POD name | Description | BRCF delegates | Compute servers | Storage server | Unix Groups
AMD GPU POD

POD with GPU resources available for instructional and research use.

Note: This POD uses UT EID authentication


Anna Battenhouse
  • amdgcomp01.ccbb.utexas.edu, amdgcomp02.ccbb.utexas.edu, amdgcomp03.ccbb.utexas.edu
    • Dual 64-core EPYC 7V13 CPUs
    • 512 GB RAM
    • 8 AMD Radeon Instinct MI-100 GPUs w/32GB onboard RAM each

amdbstor01.ccbb.utexas.edu

  • 12 6-TB disks
  • 72 TB raw, 42 TB usable

Per course and research project. See

CBRS POD

Shared POD for CBRS core facilities

Anna Battenhouse
  • cbrscomp01.ccbb.utexas.edu,
    cbrscomp02.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 26-core/52-thread CPUs
    • 768 GB RAM
    • 960 GB SATA SSD for ultra-high-speed local I/O, mounted as /ssd1 (not backed up)

cbrsstor01.ccbb.utexas.edu

  • 24 16-TB disks
  • 384 TB raw, 220 TB usable
BCG, CBRS_BIC, CBRS_CryoEM, CBRS_microscopy, CBRS_org, CBRS_proteomics
Chen/Wallingford/Raccah POD

Shared POD for members of the Jeffrey Chen, John Wallingford and Doran Raccah labs


  • chencomp01.ccbb.utexas.edu
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM
  • chencomp02.ccbb.utexas.edu
    • Dell AMD node
    • dual 64-core/128-thread AMD EPYC CPUs
    • 768 GB RAM
    • 1.9 TB NVMe for ultra-high-speed local I/O, mounted as /ssd1 (not backed up)

chenstor01.ccbb.utexas.edu

  • 24 8-TB disks
  • 192 TB raw, 106 TB usable


Chen, Raccah, Wallingford
Dickinson/Cambronne POD

Shared POD for members of the Dan Dickinson and Lulu Cambronne labs
  • Dan Dickinson
  • Lulu Cambronne
  • djdicomp01.ccbb.utexas.edu
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM

djdistor01.ccbb.utexas.edu

  • 24 8-TB disks
  • 192 TB raw, 106 TB usable


Dickinson, Cambronne
Educational (EDU) POD

Dedicated instructional POD

Note: This POD uses UT EID authentication

Course instructors.

See The Educational POD

  • edupod.cns.utexas.edu
    • virtual host for pool of 3 physical servers listed below
  • educcomp01.ccbb.utexas.edu
  • educcomp02.ccbb.utexas.edu
  • educcomp04.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 28-core/52-thread CPUs
    • 1 TB RAM

educstor01.ccbb.utexas.edu

  • 24 4-TB disks
  • 96 TB raw, 53 TB usable


Per course. See The Educational POD
Georgiou/WCAAR POD

Shared POD for members of the Georgiou lab and the Waggoner Center for Alcoholism & Addiction Research (WCAAR)
  • Russ Durrett (Georgiou lab)
  • Dayne Mayfield (WCAAR)
  • wcarcomp01.ccbb.utexas.edu
    • Dell PowerEdge R430
    • dual 16-core/32-thread CPUs
    • 256 GB RAM
  • wcarcomp02.ccbb.utexas.edu
    • Dell PowerEdge R430
    • dual 18-core/36-thread CPUs
    • 384 GB RAM
  • wcarcomp03.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 26-core/52-thread CPUs
    • 1 TB RAM
    • 1.8 TB SATA SSD for ultra-high-speed local I/O, mounted as /ssd1 (not backed up)

georstor01.ccbb.utexas.edu

  • 12 8-TB disks + 12 14-TB disks
  • 264 TB raw, 158 TB usable


Georgiou, WCAAR

GSAF POD

Shared POD for use by GSAF customers. A 2 TB Work area allocation is available for participating groups.

Contact Anna Battenhouse for more information.

  • Anna Battenhouse
  • Dhivya Arasappan
  • gsafcomp01.ccbb.utexas.edu
  • gsafcomp02.ccbb.utexas.edu
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM
  • gsafcbig01.ccbb.utexas.edu
    • Dell PowerEdge R720
    • dual  6-core/12-thread CPUs
    • 192 GB RAM

gsafstor01.ccbb.utexas.edu

  • 24 6-TB disks
  • 144 TB raw, 90 TB usable

GSAF customer groups:
Alper, Atkinson, Baker, Barrick, Bolnick, Bray,  Browning, Cannatella, Contrearas, Crews, Drew, Dudley, Eberhart, Ellington, GSAFGuest, Hawkes, HoWinson, HyunJunKim, Kirisits, Leahy, Leibold, LiuHw, Lloyd, Manning, Matz, Mueller, Paull, Press, SSung, ZhangYJ

GSAF internal & instructional groups:
GSAF, 
BioComputing2017, CCBB_Workshops_1,   FRI-BigDataBio

Hopefog (Ellington) POD

Shared POD for Ellington & Marcotte lab special projects
  • Anna Battenhouse
  • hfogcomp01.ccbb.utexas.edu
    • Dell PowerEdge R730xd
    • dual 10-core/20-thread CPUs
    • 250 GB RAM
    • 37 TB local RAID storage,  mounted as /raid (not backed up)
  • hfogcomp02.ccbb.utexas.edu,
    hfogcomp03.ccbb.utexas.edu
    • AMD GPU servers
    • 48-core/96-hyperthread EPYC CPU
    • 512 GB RAM
    • 8 AMD Radeon Instinct MI-50 GPUs w/32GB onboard RAM each
  • hfogcomp04.ccbb.utexas.edu
    • Dell PowerEdge R750XA
    • dual 24-core/48-thread CPUs
    • 512 GB RAM
    • 2 NVIDIA Ampere A100 GPUs w/80GB onboard RAM each
  • hfogcomp05.ccbb.utexas.edu – available soon!
    • GIGABYTE MC62-G40-00
    • 32-core/64-thread AMD Ryzen CPU
    • 512 GB RAM
    • 4 NVIDIA RTX 6000 Ada GPUs, 48 GB RAM each

hfogstor01.ccbb.utexas.edu

  • 24 6-TB disks
  • 144 TB raw, 90 TB usable
Ellington, Marcotte, Wilke
Iyer/Kim POD

Shared POD for members of the Vishy Iyer and Jonghwan Kim labs
  • Anna Battenhouse
  • iyercomp02.ccbb.utexas.edu (aka dragonfly.icmb.utexas.edu)
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64GB RAM
  • iyercomp03.ccbb.utexas.edu (aka adler3.icmb.utexas.edu)
    • Dell PowerEdge R720
    • dual  6-core/12-thread CPUs
    • 192 GB RAM

iyerstor01.ccbb.utexas.edu

  • 24 6-TB disks
  • 144 TB raw, 90 TB usable


Iyer, JKim
Kirkpatrick POD

Shared POD for members of the Kirkpatrick and Harpak labs

TBD
  • kirkcomp01.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 26-core/52-thread CPUs
    • 768 GB RAM
    • 1.9 TB SSD for high-speed local I/O, mounted as /ssd1 (not backed up)

kirkstor01.ccbb.utexas.edu

  • 12 18-TB disks
  • 216 TB raw, 124 TB usable
Kirkpatrick, Harpak
Lambowitz/CCBB POD

Shared POD for use by CCBB affiliates and the Alan Lambowitz lab.


  • Hans Hofmann, Rebecca Young Brim (Hofmann lab & CCBB affiliates)
  • Jun Yao (Lambowitz lab)
  • lambcomp01.ccbb.utexas.edu
    • Dell PowerEdge R410
    • dual 4-core/8-thread CPUs
    • 64 GB RAM
  • ccbbcomp01.ccbb.utexas.edu
    • Dell PowerEdge R420
    • dual 4-core CPUs
    • 96 GB RAM
  • ccbbcomp02.ccbb.utexas.edu
    • Dell PowerEdge R720
    • dual  6-core/12-thread CPUs
    • 192 GB RAM

lambstor01.ccbb.utexas.edu

  • 18 16-TB disks
  • 288 TB raw, 170 TB usable


Lambowitz groups:
Lambowitz, LambGuest

CCBB groups:
Cannatella, Hawkes, Hillis, Hofmann, Jansen

 

Instructional groups:
FRI-BigDataBio


LiveStrong DT POD

POD for members of Dell Medical School's LiveStrong Diagnostic Therapeutics group.

Note: This POD uses UT EID authentication

  • Jeanne Kowalski
  • Song (Stephen) Yi
  • livecomp01.ccbb.utexas.edu
    • Dell PowerEdge R440
    • dual 14-core/28-thread CPUs
    • 192 GB RAM
    • 480 GB SATA SSD for ultra-high-speed local I/O, mounted as /ssd1 (not backed up)
  • livecomp02.ccbb.utexas.edu, livecomp03.ccbb.utexas.edu
    • AMD GPU server
    • 48-core/96-hyperthread EPYC CPU
    • 512 GB RAM
    • 8 AMD Radeon Instinct MI-50 GPUs with 32GB onboard RAM each
  • livecomp04.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 26-core/52-hyperthread CPUs
    • 768 GB RAM
    • 1.9 TB SSD for high-speed local I/O, mounted as /ssd1 (not backed up)

livestor01.ccbb.utexas.edu

  • 24 10-TB disks
  • 240 TB raw, 132 TB usable

Jeanne Kowalski groups:
CancerClinicalGenomics, ColoradoData, MultipleMyeloma

Stephen Yi groups:
SongYi

Lauren Ehrlich groups:
Ehrlich_COVID19, Ehrlich

Instructional groups:
FRI-BigDataBio

Marcotte POD

Single-lab POD for members of the Edward Marcotte lab
  • Anna Battenhouse
  • marccomp01.ccbb.utexas.edu (aka hopper.icmb.utexas.edu)
    • Dell PowerEdge R730
    • dual 18-core/36-thread CPUs
    • 768 GB RAM
  • marccomp02.ccbb.utexas.edu (aka ada.icmb.utexas.edu)
    • Dell PowerEdge R610
    • dual 4-core/8-thread CPUs
    • 96 GB RAM
  • marccomp03.ccbb.utexas.edu (aka perutz.ccbb.utexas.edu)
    • Dell PowerEdge R610
    • dual 4-core/8-thread CPUs
    • 96 GB RAM

marcstor02.ccbb.utexas.edu

  • 24 12-TB disks
  • 288 TB raw, 160 TB usable


Marcotte
Ochman/Moran POD

Shared POD for members of the Howard Ochman and Nancy Moran labs
  • Howard Ochman
  • ochmcomp01.ccbb.utexas.edu
    • Dell PowerEdge R430
    • dual 18-core/36-thread CPUs
    • 384 GB RAM
  • ochmcomp02.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 26-core/52-hyperthread CPUs
    • 1024 GB RAM
    • 1.9 TB SSD for high-speed local I/O, mounted as /ssd1 (not backed up)

ochmstor01.ccbb.utexas.edu

  • 24 8-TB disks
  • 192 TB raw, 106 TB usable


Ochman, Moran
Rental POD

Shared POD for POD rental customers
  • Anna Battenhouse (overall)
  • Daylin Morgan (Brock)
  • rentcomp01.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 18-core/36-thread CPUs
    • 768 GB RAM
    • 900 GB SATA SSD for ultra-high-speed local I/O, mounted as /ssd1 (not backed up)
  • rentcomp02.ccbb.utexas.edu
    • Dell PowerEdge R640
    • dual 18-core/36-thread CPUs
    • 256 GB RAM
    • 450 GB SATA SSD for ultra-high-speed local I/O, mounted as /ssd1 (not backed up)

rentstor01.ccbb.utexas.edu

  • 12 12-TB disks
  • 144 TB raw, 90 TB usable
Brock, Calder, Champagne, Curley, Fleet, Gaydosh (AddHealth, FragileFamilies, VUSNAPS), Gray, Gross, Hillis, Raccah, Seidlits, Sullivan, YiLu, Zamudio
Wilke POD

For use by members of the Claus Wilke lab and the AG3C collaboration
  • Aaron Feller
  • Alexis Hill
  • wilkcomp01.ccbb.utexas.edu
  • wilkcomp02.ccbb.utexas.edu
    • Dell PowerEdge R930
    • quad 14-core/28-thread CPUs
    • 1 TB RAM
  • wilkcomp03.ccbb.utexas.edu
    • GIGABYTE MC62-G40
    • 48-core AMD Ryzen 5975 CPU
    • 500 GB system RAM
    • 4 NVIDIA RTX 6000 Ada GPUs, 48 GB RAM each
    • 2 TB SSD for fast local I/O, mounted as /ssd1 (not backed up)

wilkstor01.ccbb.utexas.edu

  • 18 16-TB disks
  • 288 TB raw, 170 TB usable


Wilke

...

Resource | Description | Network availability | For details
SSH

Remote access to the bash shell's command line, and remote file transfer commands such as scp and rsync.


  • Standard ssh command unrestricted from the UT campus network (excluding Dell Medical School)
  • Off-campus ssh access:
    • UT VPN service active, or
    • Public key installed in ~/.ssh/authorized_keys (see the key setup sketch below this table)
  • Notes:
    • Direct storage server access for file transfers is only available from the UT campus network or with the UT VPN service active.
Samba

Allows mounting of shared POD storage as a remote file system that can be browsed from your Windows or Mac desktop/laptop computer
  • Unrestricted from the UT campus network (excluding Dell Medical School)
  • Off-campus access requires the UT VPN service to be active
HTTPS

Access to web-based RStudio Server and JupyterHub applications
  • Unrestricted for BRCF-managed accounts
    • For PODs using EID authentication (e.g. Livestrong), an active UT EID is required
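
For example, here is a minimal sketch of setting up key-based SSH access for off-campus use. The username and compute server below are placeholders; substitute your own POD login and host.

Code Block
languagebash
# On your laptop/desktop: generate a key pair if you don't already have one
ssh-keygen -t ed25519

# From the campus network (or with the UT VPN active), install the public key
# in ~/.ssh/authorized_keys on your POD compute server
ssh-copy-id my_username@gsafcomp01.ccbb.utexas.edu

# Off-campus logins can then authenticate with the key instead of requiring the VPN
ssh my_username@gsafcomp01.ccbb.utexas.edu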

...

Code Block
languagebash
# change to your home directory where the symlinks will be created
cd 
ln -sf /stor/work/BCG bcg_work
ln -sf /stor/scratch/BCG bcg_scratch

# Then, use the symbolic link when copying data from TACC
rsync -avrW $SCRATCH/analysis/ abattenh@cbrsstor01.ccbb.utexas.edu:~/bcg_scratch/analysis/

...

Shared Work areas are backed up weekly. Scratch areas are not backed up. Both Work and Scratch areas may have quotas, depending on the POD (e.g. on the Rental or GSAF pod); such quotas are generally in the multi-terabyte range.

Because it has a large quota and is regularly backed up and archived, your group's Work area is where large research artifacts that need to be preserved should be located.

Scratch, on the other hand, can be used for artifacts that are transient or can easily be re-created (such as downloads from public databases).

See Manage storage areas by project activity for important guidelines for Work and Scratch area contents.

...

Note that any directory in any file system tree named tmp, temp, or backups is not backed up. Directories with these names are intended for temporary files, especially large numbers of small temporary files. See "Cannot create tempfile" error and Avoid having too many small files.

Periodic and long-term archiving

...

What is too many? Ten million or more.

If the files are small, they don't take up much storage space. But the fact that there are so many causes the backup or archiving to run for a really long time. For weekly backups, this can mean that the previous week's backup is not done by the time the next one starts. For archiving, it means it can take weeks on end to archive a single directory that has many millions of small files.

Backing up gets even worse when a directory with many files is just moved or renamed. In this case the files need to be deleted from the old location and added to the new one – and both of these operations can be extremely long-running.

To see how many files (tracked as "inodes" in Unix) exist in the file system containing a directory, use the df -i command. For example:

Code Block
languagebash
df -i /stor/work/MyGroup/my_dir
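
If you want to count files under one specific sub-directory rather than the whole file system, a simple (if slower) sketch:

Code Block
languagebash
# count all files and directories (inodes) under a directory tree; this can take a while
find /stor/work/MyGroup/my_dir | wc -l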

...

1) Move the files to a temporary directory.
The backup process excludes any sub-directory anywhere in the file system directory tree named tmp, temp, or backups. So if there are files you don't care about, just rename the directory to, for example, tmp. There will be a one-time deletion of the directory under its previous name, but that would be it. 
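
A minimal example of this rename (paths are placeholders):

Code Block
languagebash
# rename a directory of disposable intermediate files so the backup process skips it
mv /stor/work/MyGroup/intermediate_files /stor/work/MyGroup/tmp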

...

3) Zip or Tar the directory
If these are important files you need to have backed up, zipping or tarring the directory is the way to go. This converts a directory and all its contents into a single, larger file that can be backed up or archived efficiently. Please Contact Us if you would like us to help with this, since with our direct access to the storage server we can perform zip and tar operations much more efficiently than you can from a compute server.

If your analysis pipeline creates many small files as a matter of course, you should consider modifying the processing to create small files in a tmp directory, then zipping or tarring them as a final step.
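
For example, a minimal sketch of bundling such a directory (names are placeholders):

Code Block
languagebash
# bundle the directory and its contents into a single compressed archive,
# then remove the original directory full of small files
tar -czf many_small_files.tar.gz many_small_files/ && rm -rf many_small_files/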

Memory usage considerations

...

Running processes unattended

While POD compute servers do not have a batch system, you can still run multiple tasks simultaneously in several different ways. 

For example, you can use terminal multiplexer tools like screen or tmux to create virtual terminal sessions that won't go away when you log off. Then, inside a screen or tmux session, you can create multiple sub-shells where you can run different commands.

You can also use the command line utility nohup to start processes in the background, again allowing you to log off and still have the process running.
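
For example, a minimal nohup sketch (the script name is a placeholder):

Code Block
languagebash
# run a long task in the background, immune to hangup when you log off;
# stdout and stderr are captured in my_job.log
nohup ./my_long_pipeline.sh > my_job.log 2>&1 &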

 Here are some links on how to use these tools:

...

Lower priority for large, long-running jobs

If you have one or more jobs that use multiple threads, or that perform significant I/O, their execution can affect system responsiveness for other users.

To help avoid this, please use the renice tool to manipulate the priority of your tasks (a priority of 15 is a good choice). It's easy to do, and here's a quick tutorial: http://www.thegeekstuff.com/2013/08/nice-renice-command-examples/?utm_source=tuicool

For example, before you start any tasks, you can set the default priority to nice 15 as shown here. Anything you start from then on (from this shell) should inherit the nice 15 value.

Code Block
languagebash
renice +15 $$
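
You can also lower the priority of tasks that are already running; a quick sketch (the process ID is a placeholder):

Code Block
languagebash
# lower the CPU priority of an already-running process with ID 12345
renice +15 -p 12345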

...

Many programs offer an option to divide their work among multiple processes, which can reduce the total clock time the program will run. The option may refer to "processes", "cores" or "threads", but these options all actually target the available computing units on a server. Examples include: the samtools sort --threads option; the bowtie2 -p/--threads option; and in R, library(doParallel); registerDoParallel(cores = NN).
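
For example, a sketch of a multi-threaded samtools sort (file names are placeholders; see the discussion below before choosing a thread count):

Code Block
languagebash
# sort a BAM file using 4 threads
samtools sort --threads 4 -o sorted.bam unsorted.bam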

One thing to keep in mind here is the difference between cores and hyperthreads. Cores are physical computing units, while hyperthreads are virtual computing units -- kernel objects that "split" each core into two hyperthreads so that the single compute unit can be used by two processes.

The Available PODs table describes the compute servers associated with each BRCF POD, along with their available cores and (hyper)threads. (Note that most servers are dual-CPU, meaning the total core count is double the per-CPU core count, so a dual 4-core CPU machine has 8 cores.) You can also see the hyperthread and core counts on any server via:

Code Block
languagebash
cat /proc/cpuinfo | grep -c 'core id'           # actually the number of hyperthreads!
cat /proc/cpuinfo | grep 'siblings' | head -1   # the real number of physical cores

(Yes, the fact that 'core id' gives hyperthreads and 'siblings' the number of cores is confusing. But what do you expect -- this is Unix (smile))
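Alternatively, a sketch using lscpu, which reports both counts directly:

Code Block
languagebash
# "CPU(s)" is the hyperthread count; "Core(s) per socket" x "Socket(s)" gives physical cores
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'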

Since hyperthreads look like available computing units ("CPUs" in OS displays), parallel processing options that detect "cores" usually actually detect hyperthreads. Why does this matter?

...

So before you select a process/core/thread count for your program, consider whether it will perform significant I/O. If so, you can specify a higher count. If it is compute bound (e.g. machine learning), be sure to specify a count low enough to leave free hyperthreads for others to use.

Note that this issue with machine learning (ML) workflows being extremely compute bound is the main reason ML processing is best run on GPU-enabled servers. Only a few of our PODs have GPU-enabled servers (e.g. the AMD GPU, Hopefog and LiveStrong PODs); GPU-enabled servers are also available at TACC. Additionally, Austin-based Advanced Micro Devices (AMD), which competes with NVIDIA in the GPU market, will soon be offering a "GPU cloud" available to UT researchers. We're working with them on this initiative and will provide access information when it is available.

Input/Output considerations

...

Code Block
ls /st                   # Typing this + Tab expands to /stor
ls /stor/sy              # Typing this + Tab expands to /stor/system
ls /stor/system/o        # Typing this + Tab expands to /stor/system/opt
ls /stor/system/opt/sam  # Typing this + Tab expands to /stor/system/opt/samtools (not uniquely)

# Typing this + Tab twice will list many possible completions:
ls /stor/system/opt/samtools/bam

Reduce the I/O priority of your processes

Similar to the way renice reduces the CPU priority of your processes (see above), ionice can reduce the I/O priority. This can be done for all your processes or for specific ones:

Code Block
# lower I/O priority for process number <pid>
ionice -c 2 -n 7 -p <pid>

# lower I/O priority for all your processes
ionice -c 2 -n 7 -u <uid>

# and here's how to find your <uid> (user ID)
grep $USER /etc/passwd | awk -F ':' '{print $3}'

Transfer large files directly to the storage server

...