Other NU research computing clusters

   Cluster name: High-performance bioinformatics cluster "Q-Symphony"
   Short description: Hewlett-Packard Enterprise Apollo (208 cores x
   Intel Xeon, 3.26 TB RAM, 258 TB RAID HDD, RedHat Linux); max
   computing performance 7.5 TFlops. Specifically designed architecture
   optimized for bioinformatics research and the analysis of big
   genomics datasets (whole-genome/whole-transcriptome datasets and bulk
   genomics datasets with more than 100 samples processed
   simultaneously).
   Contact details: Ulykbek Kairov (Head of Laboratory - Leading
   Researcher, Laboratory of Bioinformatics and Systems Biology, Private
   Institution National Laboratory Astana)
   Email: [11]ulykbek.kairov@nu.edu.kz

   Cluster name: Intelligence-Cognition-Robotics
   Short description:
     * GPUs: 8 x NVIDIA Tesla V100
     * Performance (mixed precision): 1 petaFLOPS
     * GPU memory: 256 GB (total system)
     * CPU: dual 20-core Intel Xeon E5-2698 v4, 2.2 GHz
     * NVIDIA CUDA cores: 40,960
     * NVIDIA Tensor cores (on Tesla V100 based systems): 5,120
     * System memory: 512 GB 2,133 MHz DDR4 RDIMM
     * Storage: 4 x 1.92 TB SSD, RAID 0
     * Network: dual 10 GbE, 4 x IB EDR
     * Operating system: Canonical Ubuntu, Red Hat Enterprise Linux
   Contact details: Zhandos Yessenbayev (Senior Researcher, Laboratory
   of Computational Materials Science for Energy Application, Private
   Institution National Laboratory Astana)
   Email: [12]zhyessenbayev@nu.edu.kz

   Cluster name: Computational resources for AI infrastructure at NU
   Short description:
   NVIDIA DGX-1 (1 supercomputer):
     * GPUs: 8 x NVIDIA Tesla V100
     * GPU memory: 256 GB
     * CPU: dual 20-core Intel Xeon E5-2698 v4, 2.2 GHz
     * System memory: 512 GB, 2,133 MHz DDR4 RDIMM
     * Storage: 4 x 1.92 TB SSD, RAID 0
     * Performance: 1 PF
   NVIDIA DGX-2 (2 supercomputers):
     * GPUs: 16 x NVIDIA Tesla V100
     * GPU memory: 512 GB total
     * CPU: dual 24-core Intel Xeon Platinum 8168, 2.7 GHz
     * System memory: 1.5 TB DDR4 RDIMM
     * Storage: 2 x 960 GB NVMe SSDs
     * Internal storage: 30 TB (8 x 3.84 TB) NVMe SSDs
     * Performance: 4 PF
   NVIDIA DGX A100 (4 supercomputers: DGX A100 01, 02, 03, 04):
     * GPUs: 8 x NVIDIA A100 40 GB
     * GPU memory: 320 GB total
     * CPU: dual AMD Rome 7742, 128 cores total, 2.25 GHz (base),
       3.4 GHz (max boost)
     * System memory: 1 TB DDR4 RDIMM
     * Storage: 2 x 1.92 TB M.2 NVMe drives
     * Internal storage: 15 TB (4 x 3.84 TB) U.2 NVMe drives
     * Performance: 20 PF
   Total: NVIDIA DGX (580 cores x Intel/AMD, 3 TB RAM, 128 TB RAID HDD,
   Ubuntu); max computing performance 25 PFlops. Specifically designed
   architecture optimized for Deep Learning, Machine Learning, Natural
   Language Processing, and Computer Vision.
   Contact details: Yerbol Absalyamov (Technical Project Coordinator,
   Office of the Provost - Institute of Smart Systems and Artificial
   Intelligence, Nazarbayev University)
   Email: [13]yerbol.absalyamov@nu.edu.kz
   Makat Tlebaliyev (Computer Engineer, Office of the Provost -
   Institute of Smart Systems and Artificial Intelligence, Nazarbayev
   University)
   Email: [14]makat.tlebaliyev@nu.edu.kz

Key features of Shabyt at a glance:
   * 20 Compute nodes with dual AMD EPYC 7502 CPUs (32 cores / 64
     threads, 2.5 GHz base), 256 GB 8-channel DDR4-2933 RAM, CentOS 7.9
   * 4 Compute nodes with dual AMD EPYC 7452 CPUs (32 cores / 64
     threads, 2.3 GHz base), 256 GB 8-channel DDR4-2933 RAM, dual NVIDIA
     Tesla V100 GPUs with 32 GB HBM2 RAM, CentOS 7.9
   * 1 interactive login node with an AMD EPYC 7502P CPU (32 cores / 64
     threads, 2.5 GHz base), 256 GB 8-channel DDR4-2933 RAM, Red Hat
     Enterprise Linux 7.9
   * EDR InfiniBand 100 Gb/s interconnect for compute traffic
   * 16 TB internal NVMe SSD storage (HPE Clustered Extents File System)
   * 144 TB HPE MSA 2050 SAS HDD array
   * Theoretical peak CPU performance of the system is 60 TFlops (double
     precision)
   * Theoretical peak GPU performance of the system is 65 TFlops (double
     precision); a rough check of both peak figures is sketched after
     this list
   * SLURM job scheduler (a minimal job-submission example also follows
     below)
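The two theoretical peak figures above can be roughly re-derived from
the node counts and clock speeds listed in this section. The short
Python sketch below does that arithmetic; the per-core and per-GPU
constants it uses (16 double-precision FLOPs per cycle for AMD EPYC
7002-series cores, about 7.8 TFlops FP64 per Tesla V100) are
assumptions based on the vendors' published peak specifications, not
values stated on this page.

   # Back-of-the-envelope peak-performance estimate for Shabyt, using
   # the node counts and clock speeds quoted in the feature list above.
   # Assumptions: an EPYC 7002-series core retires 16 FP64 FLOPs/cycle
   # (2 x 256-bit FMA units); a Tesla V100 peaks near 7.8 TFlops FP64.

   FP64_FLOPS_PER_CYCLE = 16          # assumed per Zen 2 core
   V100_FP64_TFLOPS = 7.8             # assumed per GPU (SXM2 variant)

   def cpu_tflops(nodes, sockets, cores, ghz):
       """Peak FP64 TFlops for a group of identical CPU sockets."""
       return nodes * sockets * cores * ghz * FP64_FLOPS_PER_CYCLE / 1000.0

   cpu_peak = (
       cpu_tflops(20, 2, 32, 2.5)     # 20 nodes, dual EPYC 7502
       + cpu_tflops(4, 2, 32, 2.3)    # 4 GPU nodes, dual EPYC 7452
   )
   gpu_peak = 4 * 2 * V100_FP64_TFLOPS  # 4 nodes x 2 Tesla V100 each

   print(f"CPU peak ~ {cpu_peak:.0f} TFlops")  # ~61, quoted as 60 TFlops
   print(f"GPU peak ~ {gpu_peak:.0f} TFlops")  # ~62, quoted as 65 TFlops

With these assumptions the CPU estimate lands at roughly 61 TFlops and
the GPU estimate at roughly 62 TFlops, close to the quoted 60 and 65
TFlops; the quoted GPU figure suggests a slightly higher per-V100 peak
was assumed.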
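Because jobs on Shabyt are managed by SLURM, the usual workflow is to
write a batch script and submit it with sbatch. The sketch below is a
minimal illustration only, driven from Python to keep it
self-contained: the script name, resource requests, and job command are
placeholder assumptions, and it omits any Shabyt-specific partition,
account, or module settings, which are not documented here.

   import subprocess

   # Minimal illustrative SLURM submission: write a batch script, then
   # hand it to sbatch. Resource requests are placeholders, not Shabyt
   # policy.
   job_script = """#!/bin/bash
   #SBATCH --job-name=example
   #SBATCH --nodes=1
   #SBATCH --ntasks-per-node=32
   #SBATCH --time=01:00:00
   #SBATCH --output=example-%j.out

   srun hostname
   """

   with open("example.sbatch", "w") as f:
       f.write(job_script)

   # On success sbatch prints the new job ID,
   # e.g. "Submitted batch job 1234".
   result = subprocess.run(["sbatch", "example.sbatch"],
                           capture_output=True, text=True, check=True)
   print(result.stdout.strip())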
Schematic view of Shabyt
Scheme
The system is assembled in a two-rack configuration and is physically
located at the NU Data Center.
Racks