Lonestar Essentials
Introduction
The Lonestar Linux cluster consists of 1,888 compute nodes, each with two six-core processors, for a total of 22,656 cores. It is configured with 44 TB of total memory and 276 TB of local disk space. The theoretical peak compute performance is 302 TFLOPS. In addition, for users interested in large-memory, high-throughput computing, Lonestar provides five large-memory nodes, 24 cores each, with 1 TB of memory per node. For visualization and GPU programming, Lonestar also provides eight GPU nodes with 12 cores each. The system supports 1 PB of global, parallel file storage, managed by the Lustre file system. Nodes are interconnected with InfiniBand technology in a fat-tree topology with 40 Gbit/s point-to-point bandwidth. A 10 PB capacity archival system is available for long-term storage and backups.
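The 302 TFLOPS figure can be sanity-checked from the core count. A quick back-of-the-envelope sketch, assuming the Westmere-class cores run at 3.33 GHz and retire 4 double-precision flops per cycle (both assumptions, not stated in this guide):

```shell
# Peak = cores x clock x flops/cycle (assumed values; illustrative only)
awk 'BEGIN { printf "%.1f TFLOPS\n", 22656 * 3.33e9 * 4 / 1e12 }'
# prints "301.8 TFLOPS", which rounds to the quoted 302 TFLOPS
```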
Lonestar Architecture Overview
- 2 public login nodes
  - 2x six-core 3.3 GHz Intel Xeon 5680 processors
  - 24 GB RAM
- 1,888 compute nodes
  - 2x six-core 3.3 GHz Intel Xeon 5680 processors
  - 24 GB RAM
  - 22,656 cores total
- 14 largemem nodes
  - 4x six-core 2.0 GHz Intel Xeon E7540 processors
  - 1 TB RAM
- 8 GPU nodes
  - 2x six-core 2.93 GHz Intel Xeon X5670 processors
  - 24 GB RAM
  - 2x NVIDIA Tesla M2070 GPUs
- Interconnect
  - Mellanox QDR InfiniBand (40 Gbit/s)
- Filesystems
  - 65 GB/node local SATA disk
  - $HOME, $WORK, and $SCRATCH filesystems
  - Corral filesystem (if you have an allocation)
  - Ranch archival system
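Once logged in, the three main filesystems are exposed through environment variables. A minimal sketch, assuming TACC's usual convention that the login environment sets `$WORK` and `$SCRATCH` (the fallback values below are illustrative placeholders so the snippet runs anywhere, not the cluster's actual paths):

```shell
# $WORK and $SCRATCH are set by the cluster login environment;
# the defaults here are hypothetical stand-ins for off-cluster use.
WORK=${WORK:-/work/$USER}
SCRATCH=${SCRATCH:-/scratch/$USER}

echo "home:    $HOME"       # small, quota-limited home directory
echo "work:    $WORK"       # larger project space
echo "scratch: $SCRATCH"    # large parallel scratch for job I/O
```

Large parallel jobs are typically launched from the scratch area rather than `$HOME`, since scratch sits on the parallel filesystem.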