The Nazarbayev University High Performance Computing (NU HPC) team currently operates three main facilities: Irgetas, Shabyt, and Muon. Below we provide a brief overview of each.


== Irgetas cluster ==
[[File:Irgetas_picture_1.jpg|420x420px|border]] [[File:Irgetas_picture_2.jpg|420x420px|border]] [[File:Irgetas_picture_3.jpg|420x420px|border]]
 
[[File:Irgetas_picture_4.jpg|420x420px|border]] [[File:Irgetas_picture_5.jpg|420x420px|border]] [[File:Irgetas_picture_6.jpg|420x420px|border]]
 
The Irgetas cluster is NU's most advanced computational facility on campus. It was deployed in September 2025 and features high compute density and efficiency enabled by direct liquid cooling. Manufactured by Hewlett Packard Enterprise (HPE), it has the following configuration:     


*; 6 GPU compute nodes. Each GPU node features
*: Two AMD EPYC 9654 CPUs (96 cores / 192 threads, 2.4 GHz Base)
*: Four Nvidia H100 SXM5 GPUs (80 GB HBM3)
*: 768 GB DDR5-4800 RAM (12-channel)
*: 1.92 TB local SSD scratch storage
*: Two Infiniband NDR 400 Gbps network adapters (800 Gbps total)
*: 25 Gbps SFP28 Ethernet network adapter
*: Rocky Linux 9.6

*; 10 CPU compute nodes. Each CPU node features
*: Two AMD EPYC 9684X CPUs (96 cores / 192 threads, 2.55 GHz Base, 1152 MB 3D V-Cache)
*: 384 GB DDR5-4800 RAM (12-channel)
*: 1.92 TB local SSD scratch storage
*: Infiniband NDR 200 Gbps network adapter
*: 25 Gbps SFP28 Ethernet network adapter
*: Rocky Linux 9.6
 
*; 1 Interactive login node
*: AMD EPYC 9684X (96 cores / 192 threads, 2.55 GHz Base, 1152 MB 3D V-Cache)
*: 192 GB DDR5-4800 RAM (12-channel)
*: 7.68 TB local SSD scratch storage
*: Infiniband NDR 200 Gbps network adapter
*: 25 Gbps SFP28 Ethernet network adapter
*: Rocky Linux 9.6


*; 1 Management node
*: AMD EPYC 9354 CPU (32 cores / 64 threads, 3.25 GHz Base)
*: 256 GB DDR5-4800 RAM (8-channel)
*: 15.36 TB local SSD storage
*: 25 Gbps SFP28 Ethernet network adapter
*: Rocky Linux 9.6


*; NVMe SSD storage server for software and user home directories (/shared)
*: Two AMD EPYC 9354 CPUs (32 cores / 64 threads, 3.25 GHz Base)
*: 768 GB DDR5-4800 RAM (12-channel)
*: 122 TB total raw capacity
*: 92 TB total usable space in RAID 6 configuration
*: Sustained sequential read speed from compute nodes > 80 GBps
*: Sustained sequential write speed from compute nodes > 20 GBps
*: Two Infiniband NDR 400 Gbps network adapters (800 Gbps total)
*: 25 Gbps SFP28 Ethernet network adapter
*: Rocky Linux 9.6


*; Nvidia Infiniband NDR Quantum-2 QM9700 managed switch (compute network)
*: 64 ports (400 Gbps per port)

*; HPE Aruba Networking CX 8325-48Y8C 25G SFP/SFP+/SFP28 switch (application network)
*: 48 ports (SFP28, 25 Gbps per port)
 
*; HPE Aruba Networking 2930F 48G 4SFP+ switch (management network)
*: 48 ports (1 Gbps per port)
 
*; HPE Cray XD Direct liquid cooling system
*: HPE Cray XD 75kW 208V FIO In-Rack Coolant Distribution Unit
*: Three-chiller setup with BlueBox ZETA Rev HE FC 3.2
 
The system is assembled in a single rack and is physically located in the NU data center in Block 1.
 
{| class="wikitable"
|+Irgetas cluster theoretical peak performance
!Subsystem
!FP8
!FP16
!FP32
!FP64
|-
|CPUs (total)
|
|
|245.0 TFLOPS
|122.5 TFLOPS
|-
|GPUs (total)
|47,492 TFLOPS
|23,746 TFLOPS
|1,606 TFLOPS
|803 TFLOPS
|}
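
The CPU figures in this table appear to follow the standard peak estimate of cores × clock × FLOPs per cycle. A sanity check under that assumption (16 FP64 FLOPs per core per cycle on Zen 4, i.e. two 256-bit FMA pipes, counting compute nodes only and excluding the login node):

<math display="block">
\underbrace{(10 \times 2 \times 96)}_{\text{CPU-node cores}} \times 2.55\,\text{GHz} \times 16 + \underbrace{(6 \times 2 \times 96)}_{\text{GPU-node cores}} \times 2.4\,\text{GHz} \times 16 \approx 122.5\ \text{TFLOPS}
</math>

The FP32 figure is twice that, and the GPU row is consistent with 24 H100 SXM5 cards at Nvidia's published dense (non-sparse) peaks of roughly 1979 / 989 / 67 / 33.5 TFLOPS for FP8 / FP16 / FP32 / FP64.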
 
[[File:Irgetas_rack.png|frameless|496x496px]]
 
<br>


== Shabyt cluster ==
[[File:Shabyt_picture_3.jpg|420x420px|border]] [[File:Shabyt picture 1.jpg|420x420px|border]] [[File:Shabyt_picture_2.jpg|alt=|420x420px|border]]
 
The Shabyt cluster was manufactured by Hewlett Packard Enterprise (HPE) and deployed in 2020. For several years it served as the primary computational platform for NU researchers. It has the following hardware configuration:

*; 20 CPU compute nodes. Each CPU node features
*: Two AMD EPYC 7502 CPUs (32 cores / 64 threads, 2.5 GHz Base)
*: 256 GB DDR4-2933 RAM (8-channel)
*: Infiniband EDR 100 Gbps network adapter
*: Rocky Linux 8.10

*; 4 GPU compute nodes. Each GPU node features
*: Two AMD EPYC 7452 CPUs (32 cores / 64 threads, 2.3 GHz Base)
*: Two Nvidia V100 GPUs (32 GB HBM2)
*: 256 GB DDR4-2933 RAM (8-channel)
*: Infiniband EDR 100 Gbps network adapter
*: Rocky Linux 8.10

*; 1 Interactive login node
*: AMD EPYC 7502P CPU (32 cores / 64 threads, 2.5 GHz Base)
*: 256 GB DDR4-2933 RAM (8-channel)
*: Infiniband EDR 100 Gbps network adapter
*: Rocky Linux 8.10
 
*; BeeGFS storage system consisting of two NVMe SSD storage servers in RAID 6 configuration for software and user home directories (/shared); total capacity 16 TB (raw), 9.9 TB (usable). Each storage server features
*: AMD EPYC 7452 CPU (32 cores / 64 threads, 2.3 GHz Base)
*: 128 GB DDR4-2933 RAM (8-channel)
*: Two Infiniband EDR 100 Gbps network adapters (200 Gbps total)
*: Rocky Linux 8.10
 
*; 144 TB (raw) HPE MSA 2050 SAS HDD Array in RAID 6 configuration for backups and large data storage for user groups (/zdisk)
 
*; Mellanox Infiniband EDR v2 Managed switch (compute network)
*: 36 ports (100 Gbps per port)
 
*; HPE 5700 48G 4XG 2QSFP+ switch (application network)
*: 48 ports (1 Gbps per port)
 
*; Aruba 2540 48G 4SFP+ switch (management network)
*: 48 ports (1 Gbps per port)
 
The system is assembled in two racks and is physically located in the NU data center in Block C2.
 
{| class="wikitable"
|+Shabyt cluster theoretical peak performance
!Subsystem
!FP8
!FP16
!FP32
!FP64
|-
|CPUs (total)
|
|
|121.7 TFLOPS
|60.8 TFLOPS
|-
|GPUs (total)
|
|897.6 TFLOPS
|112.2 TFLOPS
|56.1 TFLOPS
|}
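
The implied per-card GPU peaks here (897.6 / 8 = 112.2 TFLOPS FP16 tensor, roughly 14.0 TFLOPS FP32 and 7.0 TFLOPS FP64) match Nvidia's published figures for the PCIe variant of the V100, which suggests (our assumption, not confirmed above) that the totals are straight multiples over the 8 installed cards:

<math display="block">
4\ \text{nodes} \times 2\ \text{GPUs} \times 112.2\ \text{TFLOPS} = 897.6\ \text{TFLOPS (FP16 tensor)}
</math>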


[[File:Shabyt racks.png|frameless|532x532px]]  &nbsp; &nbsp; &nbsp; &nbsp;  [[File:Shabyt_hardware_scheme.png|frameless|544x544px]]
<br>


== Muon cluster ==
[[File:Muon picture 1.jpg|420x420px|border]] [[File:Muon picture 3.jpg|420x420px|border]] [[File:Muon picture 2.jpg|420x420px|border]]


Muon is an older cluster used by the faculty of the Physics Department. It was manufactured by HPE and first deployed in 2017. It has the following hardware configuration:
*; 10 CPU compute nodes. Each CPU node features
*: Intel Xeon CPU E5-2690v4 (14 cores / 28 threads, 2.6 GHz Base)
*: 64 GB DDR4-2400 RAM (4-channel)
*: 1 Gbps Ethernet network adapter
*: Rocky Linux 8.10

*; Interactive login node
*: Intel Xeon CPU E5-2640v4 (10 cores / 20 threads, 2.4 GHz Base)
*: 64 GB DDR4-2400 RAM (4-channel)
*: 10 Gbps Ethernet network adapter (WAN traffic)
*: 10 Gbps Ethernet network adapter (compute traffic)
*: Rocky Linux 8.10

*; 2.84 TB (raw) SSD storage for software and user home directories (/shared)

*; 7.2 TB (raw) HDD RAID 5 storage for backups and large data storage for user groups (/zdisk)

*; HPE 5800 Ethernet switch (compute network)
*: 1 Gbps per port
 
The system is physically located in the NU data center in Block 1.
 
{| class="wikitable"
|+Muon cluster theoretical peak performance
!Subsystem
!FP8
!FP16
!FP32
!FP64
|-
|CPUs (total)
|
|
|11.6 TFLOPS
|5.8 TFLOPS
|}
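
The CPU row again matches the textbook estimate, assuming Broadwell's 16 FP64 FLOPs per core per cycle (two 256-bit FMA units) and counting compute nodes only:

<math display="block">
10 \times 14 \times 2.6\,\text{GHz} \times 16 \approx 5.8\ \text{TFLOPS (FP64)}
</math>

with FP32 at twice that rate.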
 
<br>


== Other facilities on campus ==
There are several other computational facilities at NU that are not managed by the NU HPC Team. Brief information about them is provided below. All inquiries regarding their use for research projects should be directed to the person responsible for each facility.
{| class="wikitable" style="float: left; margin: auto"
{| class="wikitable" style="float: left; margin: auto"
|+
|+

|-
|High-performance bioinformatics cluster "Q-Symphony"
|
HPE Apollo R2600 Gen10 cluster

Compute nodes: 8 nodes x dual Intel Xeon Gold 6226R (16 cores / 32 threads, 3.3 GHz Base), 512 GB DDR4-2933 RAM per node

Storage: 1.3 PB (raw) HPE D6020 HDD storage

Interconnect: Infiniband FDR

OS: RedHat Linux

This cluster is optimized for bioinformatics research and the analysis of large genomics datasets.
|
Ulykbek Kairov

Head of Laboratory - Leading Researcher, Laboratory of Bioinformatics and Systems Biology, Private Institution National Laboratory Astana

Email: ulykbek.kairov@nu.edu.kz
|-
|Computational resources for AI infrastructure at NU
|
NVIDIA DGX-1 (1 unit)

CPU: dual Intel Xeon E5-2698v4 (20 cores / 40 threads, 2.2 GHz Base), 512 GB DDR4 RAM

GPUs: 8 x NVIDIA Tesla V100

GPU memory: 8 x 32 GB HBM2

Storage: 4 x 1.92 TB SSD in RAID 0

OS: Ubuntu Linux

NVIDIA DGX-2 (2 units)

CPU: dual Intel Xeon Platinum 8168 (24 cores / 48 threads, 2.7 GHz Base), 512 GB DDR4-2133 RAM

GPUs: 16 x NVIDIA Tesla V100

GPU memory: 16 x 32 GB HBM2

Storage: 30.72 TB NVMe SSD

OS: Ubuntu Linux

NVIDIA DGX A100 (4 units)

CPU: dual AMD EPYC 7742 (Rome, 64 cores / 128 threads, 2.25 GHz Base), 512 GB DDR4 RAM

GPUs: 8 x NVIDIA A100

GPU memory: 8 x 40 GB HBM2

Storage: 15 TB NVMe SSD

OS: Ubuntu Linux
|
Yerbol Absalyamov

Technical Project Coordinator, Institute of Smart Systems and Artificial Intelligence, Nazarbayev University

Email: yerbol.absalyamov@nu.edu.kz

Makat Tlebaliyev

Computer Engineer, Institute of Smart Systems and Artificial Intelligence, Nazarbayev University

Email: makat.tlebaliyev@nu.edu.kz
|}