@mrsergazinov
GPU Resources at Texas A&M University HPRC (by Cluster and GPU Type)

Grace Cluster (Production since 2021)

  • NVIDIA A100 (40 GB, PCIe): 200 GPUs (in 100 nodes, each with 2× A100 40GB)[4][5].
  • NVIDIA Quadro RTX 6000 (Turing, 24 GB): 18 GPUs (in 9 nodes, each with 2× RTX 6000)[6][5].
  • NVIDIA Tesla T4 (16 GB): 32 GPUs (in 8 nodes, each with 4× T4)[7][8].
  • NVIDIA A40 (48 GB): 30 GPUs (in 15 nodes, each with 2× A40)[7][8].
    (Grace is a 940-node Dell cluster. It includes 132 GPU-equipped nodes: 100 with A100s, 9 with RTX6000s, 8 with T4s, and 15 with A40s[4][5].)

FASTER Cluster (Composable GPU Cluster, since 2021)

  • NVIDIA Tesla T4 (16 GB): 200 GPUs[9][10].
  • NVIDIA A100 (40 GB, PCIe): 40 GPUs[9][10].
  • NVIDIA A10 (24 GB): 8 GPUs[11][10].
  • NVIDIA A30 (24 GB): 4 GPUs[12][10].
  • NVIDIA A40 (48 GB): 8 GPUs[13][10].
    (FASTER is a 180-node Dell cluster using Liqid composable fabric. It has no fixed GPU nodes; instead, a pool of 200 T4, 40 A100, 8 A10, 4 A30, and 8 A40 GPUs can be dynamically attached to nodes as needed[9][10].)

ACES Cluster

  • NVIDIA H100 (80 GB, Hopper PCIe): 30 GPUs[14].
  • NVIDIA A30 (24 GB): 4 GPUs[14].
  • Intel Ponte Vecchio (GPU Max 1100): 120 GPUs[14].
  • NVIDIA GH200 Grace-Hopper Superchip: 1 superchip (1× Hopper H100 GPU + 72-core Grace CPU on a single module)[15].
    (ACES is a 130-node Dell cluster with a “composable” accelerator testbed. It hosts a mix of GPUs and accelerators: 30 NVIDIA H100 GPUs, 4 NVIDIA A30 GPUs, and 120 Intel GPU Max (Ponte Vecchio) units on Liqid PCIe fabrics[14]. In addition, ACES includes a single NVIDIA Grace-Hopper (GH200) node (gh01) containing one H100 GPU paired with an ARM Grace CPU[15].)

Launch Cluster (Launched 2023, Regional HPC)

  • NVIDIA A30 (24 GB): 22 GPUs (20 across 10 compute nodes, each with 2× A30; plus 1 A30 in each of the 2 Launch login nodes)[16][17].
  • (No H100 GPUs in Launch) – this cluster’s GPU nodes are A30-based[18].
    (Launch is a 45-node Dell cluster with AMD EPYC Genoa CPUs. It has 10 GPU nodes, each with two A30 GPUs (20 total)[18]. Additionally, each of its two login nodes is equipped with one A30 GPU for visualization, bringing the A30 count to 22[16][17].)

ViDaL 2.0 Cluster (Secure Data Analytics, 2025)

  • NVIDIA H100 NVL (Hopper, 94 GB): 8 GPUs (in 4 secure GPU nodes, each with 2× H100 NVL 94GB)[19].
    (ViDaL 2.0 is an 18-node secure Dell cluster for sensitive data. It includes 4 GPU nodes, each with two NVIDIA H100 “NVL” 94GB GPUs (Hopper GPUs with HBM3 memory, paired via NVLink)[19]. ViDaL 2.0 replaced the older ViDaL 1.0, which had 4 nodes with V100 GPUs[20].)

NVIDIA DGX SuperPOD

  • NVIDIA H200 (Hopper next-gen GPUs): 760 GPUs[21].
    (Texas A&M is deploying a new AI supercomputer based on NVIDIA’s DGX SuperPOD architecture. This system will comprise a total of 760 NVIDIA H200 Hopper-generation GPUs, connected by NVIDIA Quantum-2 InfiniBand networking[21]. The $45 million SuperPOD is expected to triple A&M’s computing capacity and rank among the fastest academic AI systems[22][23].)
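As a quick sanity check on the arithmetic above, the per-cluster GPU counts can be tallied with a short script. All figures are transcribed directly from this summary; the single GH200 superchip on ACES is counted as one GPU:

```python
# GPU counts per cluster, transcribed from the summary above.
clusters = {
    "Grace": {"A100": 200, "RTX 6000": 18, "T4": 32, "A40": 30},
    "FASTER": {"T4": 200, "A100": 40, "A10": 8, "A30": 4, "A40": 8},
    "ACES": {"H100": 30, "A30": 4, "PVC Max 1100": 120, "GH200": 1},
    "Launch": {"A30": 22},
    "ViDaL 2.0": {"H100 NVL": 8},
    "DGX SuperPOD": {"H200": 760},
}

# Sum GPUs within each cluster, then across all clusters.
per_cluster = {name: sum(gpus.values()) for name, gpus in clusters.items()}
total = sum(per_cluster.values())

for name, count in per_cluster.items():
    print(f"{name}: {count} GPUs")
print(f"Total: {total} GPUs")
# Grace: 280, FASTER: 260, ACES: 155, Launch: 22,
# ViDaL 2.0: 8, DGX SuperPOD: 760 — Total: 1485
```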

Student Count

In fall 2025, the Computer Science Department at Texas A&M University enrolled 1,790 undergraduate, 395 master's, and 179 doctoral students [26].

[1] [2] [3] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/Terra/Hardware/
[4] [5] [6] [7] [8] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/Grace/Hardware/
[9] [11] [12] [13] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/FASTER/Hardware/
[10] [14] [24] Systems | High Performance Research Computing
https://hprc.tamu.edu/resources/comparison.html
[15] Grace-Hopper - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/ACES/Grace_Hopper/
[16] [17] [18] Hardware - Texas A&M HPRC
https://hprc.tamu.edu/kb/User-Guides/Launch/Hardware/
[19] [20] [25] HPRC Fact Sheet (PDF) – Texas A&M Division of Research
https://research.tamu.edu/wp-content/uploads/2024/12/dor-fact-sheet-hprc.pdf
[21] [23] AI for Aggieland | Texas A&M University Engineering
https://engineering.tamu.edu/news/2025/02/ai-for-aggieland.html
[22] Texas A&M System Triples Supercomputing Capacity – Texas A&M Stories
https://stories.tamu.edu/news/2025/02/10/texas-am-system-triples-supercomputing-capacity/
[26] Student count Texas A&M CS Department
https://engineering.tamu.edu/cse/about/facts.html
