Add TAMU data #2
GPU Resources at Texas A&M University HPRC (by Cluster and GPU Type)
Grace Cluster (Production since 2021)
(Grace is a 940-node Dell cluster. It includes 132 GPU-equipped nodes: 100 with A100s, 9 with RTX 6000s, 8 with T4s, and 15 with A40s[4][5].)
FASTER Cluster (Composable GPU Cluster, since 2021)
(FASTER is a 180-node Dell cluster using Liqid composable fabric. It has no fixed GPU nodes; instead, a pool of 200 T4, 40 A100, 8 A10, 4 A30, and 8 A40 GPUs can be dynamically attached to nodes as needed[9][10].)
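FASTER's composability is easiest to picture as a shared pool from which whole GPUs are checked out by a node for the duration of a job and returned afterward. The sketch below is a toy Python model of that idea only; it is not the Liqid fabric API or HPRC's scheduler configuration, and the node name is hypothetical. The pool sizes are the ones quoted above.

```python
# Toy model of a composable GPU pool (illustrative only; not the Liqid API).
from collections import Counter

class ComposablePool:
    def __init__(self, inventory):
        self.free = Counter(inventory)  # GPUs currently unattached

    def attach(self, node, gpu_type, count):
        """Check out `count` GPUs of `gpu_type` for `node`, if available."""
        if self.free[gpu_type] < count:
            raise RuntimeError(f"only {self.free[gpu_type]} {gpu_type} GPUs free")
        self.free[gpu_type] -= count
        return {"node": node, "gpu_type": gpu_type, "count": count}

    def detach(self, lease):
        """Return a lease's GPUs to the shared pool."""
        self.free[lease["gpu_type"]] += lease["count"]

# Pool sizes as quoted for FASTER above.
pool = ComposablePool({"T4": 200, "A100": 40, "A10": 8, "A30": 4, "A40": 8})
lease = pool.attach("fc042", "A100", 4)  # "fc042" is a hypothetical node name
print(pool.free["A100"])                 # 36 while the job holds them
pool.detach(lease)                       # back to 40 after the job
```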
ACES Cluster (Composable Accelerator Testbed)
(ACES is a 130-node Dell cluster with a “composable” accelerator testbed. It hosts a mix of GPUs and accelerators: 30 NVIDIA H100 GPUs, 4 NVIDIA A30 GPUs, and 120 Intel GPU Max (Ponte Vecchio) units on Liqid PCIe fabrics[14]. In addition, ACES includes a single NVIDIA Grace-Hopper (GH200) node (gh01) containing one H100 GPU paired with an ARM Grace CPU[15].)
Launch Cluster (Launched 2023, Regional HPC)
(Launch is a 45-node Dell cluster with AMD EPYC Genoa CPUs. It has 10 GPU nodes, each with two A30 GPUs (20 total)[18]. Additionally, each of its two login nodes is equipped with one A30 GPU for visualization, bringing the A30 count to 22[16][17].)
ViDaL 2.0 Cluster (Secure Data Analytics, 2025)
(ViDaL 2.0 is an 18-node secure Dell cluster for sensitive data. It includes 4 GPU nodes, each with two NVIDIA H100 NVL 94GB GPUs (Hopper-generation GPUs with HBM3 memory, bridged in pairs via NVLink)[19]. ViDaL 2.0 replaced the older ViDaL 1.0, which had 4 nodes with V100 GPUs[20].)
NVIDIA DGX SuperPOD (Announced 2025)
(Texas A&M is deploying a new AI supercomputer based on NVIDIA’s DGX SuperPOD architecture. This system will comprise a total of 760 NVIDIA H200 Hopper-generation GPUs, connected by NVIDIA Quantum-2 InfiniBand networking[21]. The $45 million SuperPOD is expected to triple A&M’s computing capacity and rank among the fastest academic AI systems[22][23].)
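As a quick cross-check of the arithmetic in the cluster descriptions above, the short Python tally below sums GPU counts using only the figures quoted in this document. Grace is omitted because the text gives GPU node counts (132) rather than GPU totals.

```python
# GPU counts per cluster, taken directly from the figures quoted above.
gpus = {
    "FASTER":    {"T4": 200, "A100": 40, "A10": 8, "A30": 4, "A40": 8},
    "ACES":      {"H100": 30 + 1, "A30": 4, "Intel GPU Max": 120},  # +1 H100 in the gh01 GH200 node
    "Launch":    {"A30": 10 * 2 + 2},  # 10 GPU nodes x 2, plus 1 per login node
    "ViDaL 2.0": {"H100 NVL": 4 * 2},  # 4 GPU nodes x 2
    "SuperPOD":  {"H200": 760},
}

for cluster, inventory in gpus.items():
    print(f"{cluster:>10}: {sum(inventory.values()):4d} GPUs  {inventory}")
print("Total (excluding Grace):", sum(sum(inv.values()) for inv in gpus.values()))
```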
Student Count (Fall 2025)
As of fall 2025, the Department of Computer Science and Engineering at Texas A&M University enrolls 1,790 undergraduate, 395 master's, and 179 doctoral students [26].
[1] [2] [3] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/Terra/Hardware/
[4] [5] [6] [7] [8] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/Grace/Hardware/
[9] [11] [12] [13] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/FASTER/Hardware/
[10] [14] [24] Systems | High Performance Research Computing
https://hprc.tamu.edu/resources/comparison.html
[15] Grace-Hopper - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/ACES/Grace_Hopper/
[16] [17] [18] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/Launch/Hardware/
[19] [20] [25] HPRC Fact Sheet (PDF) - Texas A&M Division of Research
https://research.tamu.edu/wp-content/uploads/2024/12/dor-fact-sheet-hprc.pdf
[21] [23] AI for Aggieland | Texas A&M University Engineering
https://engineering.tamu.edu/news/2025/02/ai-for-aggieland.html
[22] Texas A&M System Triples Supercomputing Capacity – Texas A&M Stories
https://stories.tamu.edu/news/2025/02/10/texas-am-system-triples-supercomputing-capacity/
[26] Facts - Department of Computer Science and Engineering, Texas A&M University
https://engineering.tamu.edu/cse/about/facts.html