Systems
Revision as of 15:49, 10 May 2024
Nazarbayev University's Research Computing currently operates two HPC facilities: the Shabyt and Muon clusters.
Shabyt cluster
The Shabyt cluster was manufactured by Hewlett Packard Enterprise (HPE) and deployed in 2020. It serves as the primary platform for computational work by NU and NLA researchers. It has the following hardware configuration:
- 20 Compute nodes with dual AMD EPYC 7502 CPUs (32 cores / 64 threads, 2.5 GHz Base), 256 GB 8-channel DDR4-2933 RAM, Rocky Linux 8.7
- 4 Compute nodes with dual AMD EPYC 7452 CPUs (32 cores / 64 threads, 2.3 GHz Base), 256 GB 8-channel DDR4-2933 RAM, dual NVidia Tesla V100 GPUs 32GB HBM2 RAM, Rocky Linux 8.7
- 1 interactive login node with AMD EPYC 7502P CPU (32 cores / 64 threads, 2.5 GHz Base), 256 GB 8-channel DDR4-2933 RAM, Rocky Linux 8.7
- Mellanox Infiniband EDR 100 Gb/s interconnect in each node for compute traffic
- Mellanox Infiniband EDR v2 36P Managed switch (100 Gb/s per port)
- 16 TB (raw) internal NVMe SSD Storage for software and user home directories
- 144 TB (raw) HPE MSA 2050 SAS HDD Array for large data storage
- Theoretical peak CPU performance of the system is about 60 TFlops (double precision)
- Theoretical peak GPU performance of the system is about 65 TFlops (double precision)
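The peak figures above can be sanity-checked with simple arithmetic. A minimal sketch follows, assuming 16 double-precision FLOPs per core per cycle for the Zen 2 based EPYC 7502/7452 CPUs (two 256-bit FMA units) and roughly 7.8 TFlops FP64 per Tesla V100 card; these per-unit throughput numbers are assumptions not stated in this page.

```python
# Back-of-the-envelope check of Shabyt's theoretical peak performance.
# Assumptions (not from this page): Zen 2 EPYC cores do 16 FP64 FLOPs
# per cycle; a Tesla V100 delivers about 7.8 TFlops FP64.

def cpu_peak_tflops(nodes, cpus_per_node, cores, base_ghz, flops_per_cycle=16):
    """Peak FP64 TFlops: nodes x sockets x cores x GHz x FLOPs/cycle."""
    return nodes * cpus_per_node * cores * base_ghz * flops_per_cycle / 1000.0

cpu_total = (cpu_peak_tflops(20, 2, 32, 2.5)    # 20 dual EPYC 7502 nodes
             + cpu_peak_tflops(4, 2, 32, 2.3))  # 4 dual EPYC 7452 GPU nodes

gpu_total = 4 * 2 * 7.8  # 4 nodes x 2 V100 cards x ~7.8 TFlops FP64

print(f"CPU peak: ~{cpu_total:.0f} TFlops")  # ~61 TFlops, i.e. "about 60"
print(f"GPU peak: ~{gpu_total:.0f} TFlops")  # ~62 TFlops, i.e. "about 65"
```

The login node is excluded, since it is not intended for compute jobs.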
Muon cluster
Muon is an older cluster used by the Physics Department. It was manufactured by HPE and first deployed in 2017. It has the following hardware configuration:
- 10 Compute nodes with Intel Xeon CPU E5-2690v4 (14 cores / 28 threads, 2600 MHz Base), 64 GB 4-channel DDR4-2400 RAM, Rocky Linux 8.9
- 1 interactive login node with Intel Xeon CPU E5-2640v4 (14 cores / 28 threads, 2400 MHz Base), 64 GB 4-channel DDR4-2400 RAM, Rocky Linux 8.9
- 2 TB (raw) SSD storage for software and user home directories
- 4 TB (raw) HDD RAID 5 storage for large temporary data
- 1 Gb/s Ethernet network adapter in all nodes for compute traffic
- HPE 5800 1 Gb/s Ethernet switch
Other facilities
There are several other computational facilities at NU that are not managed by the NU HPC team. Brief information about them is provided below.
High-performance bioinformatics cluster "Q-Symphony"
- Hardware: Hewlett Packard Enterprise Apollo (208 Intel Xeon cores, 3.26 TB RAM, 258 TB RAID HDD, Red Hat Linux)
- Max computing performance: 16 TFlops
- Optimized for bioinformatics research and analysis of large genomics datasets
- Total RAM: 5.6 TB
- Storage (HDD): 1.2 PB
- Contact: Ulykbek Kairov, Head of Laboratory - Leading Researcher, Laboratory of Bioinformatics and Systems Biology, Private Institution National Laboratory Astana. Email: ulykbek.kairov@nu.edu.kz

Computational resources for AI infrastructure at NU
- NVIDIA DGX-1 (1 unit), used for experimentation:
  - GPUs: 8 x NVIDIA Tesla V100
  - GPU memory: 256 GB
  - CPU: dual 20-core Intel Xeon E5-2698 v4, 2.2 GHz
  - Storage: 4 x 1.92 TB SSD, RAID 0
- NVIDIA DGX-2 (2 units)
- NVIDIA DGX A100 (4 units)
- Contacts: Yerbol Absalyamov, Technical Project Coordinator, Office of the Provost - Institute of Smart Systems and Artificial Intelligence, Nazarbayev University. Email: yerbol.absalyamov@nu.edu.kz; Makat Tlebaliyev, Computer Engineer, Office of the Provost - Institute of Smart Systems and Artificial Intelligence, Nazarbayev University. Email: makat.tlebaliyev@nu.edu.kz