Systems

From NU HPC Wiki
=Shabyt=

Revision as of 16:24, 12 April 2024

Summary of the Shabyt cluster

* 20 compute nodes with dual AMD EPYC 7502 CPUs (32 cores / 64 threads, 2.5 GHz base), 256 GB 8-channel DDR4-2933 RAM, CentOS 7.9
* 4 compute nodes with dual AMD EPYC 7452 CPUs (32 cores / 64 threads, 2.3 GHz base), 256 GB 8-channel DDR4-2933 RAM, dual NVIDIA Tesla V100 GPUs with 32 GB HBM2 each, CentOS 7.9
* 1 interactive login node with an AMD EPYC 7502P CPU (32 cores / 64 threads, 2.5 GHz base), 256 GB 8-channel DDR4-2933 RAM, Red Hat Enterprise Linux 7.9
* Mellanox InfiniBand 100 Gb/s interconnect for compute traffic
* 16 TB internal NVMe SSD storage (HPE Clustered Extents File System)
* 144 TB HPE MSA 2050 SAS HDD array


==Head Node==

The head node is the system that acts as an intermediary between users and the compute nodes: users log in here and dispatch their work to the cluster from it.
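After logging in over SSH, it is worth confirming which machine you are on before running anything heavy; computation belongs on the compute nodes, not the head node. A minimal check using standard Linux tools (nothing cluster-specific is assumed):

```shell
# Confirm which machine you are on after logging in. Heavy jobs should be
# dispatched to compute nodes, not run on the head node itself.
hostname    # prints this machine's name (e.g. the login node)
uname -m    # CPU architecture; x86_64 on these AMD EPYC systems
```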

==Compute Nodes==

The Shabyt HPC cluster consists of 20 compute nodes in total, with 40 physical CPUs and 5120 GB of memory.

{| class="wikitable"
|+ Shabyt Compute Nodes
! Nodes !! CPU !! CPUs per Node !! Cores per CPU !! Base Frequency !! RAM per CPU
|-
| cn[01-20] || AMD EPYC 7502 32-Core Processor || 2 || 32 || 2500 MHz || 128 GB
|}
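The cluster totals quoted above can be cross-checked against the per-node figures in the table. A quick arithmetic sanity check (the variable names are ours; the numbers are from this page):

```shell
# Cross-check the cluster totals against the per-node figures in the table.
NODES=20          # cn[01-20]
CPUS_PER_NODE=2   # dual AMD EPYC 7502
CORES_PER_CPU=32
RAM_PER_CPU_GB=128

TOTAL_CPUS=$((NODES * CPUS_PER_NODE))
TOTAL_CORES=$((TOTAL_CPUS * CORES_PER_CPU))
TOTAL_RAM_GB=$((TOTAL_CPUS * RAM_PER_CPU_GB))

echo "CPUs: $TOTAL_CPUS, cores: $TOTAL_CORES, RAM: $TOTAL_RAM_GB GB"
# prints: CPUs: 40, cores: 1280, RAM: 5120 GB
```

The memory total matches the 5120 GB figure stated above (40 CPUs x 128 GB per CPU).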

==Storage Nodes==

* 16 TB internal NVMe SSD storage (HPE Clustered Extents File System)
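The mount points for the NVMe scratch space and the MSA array are not named on this page; `df` shows what is actually mounted and how full it is (a standard tool, nothing cluster-specific):

```shell
# List mounted filesystems with human-readable sizes; look for the NVMe
# scratch space and the HPE MSA array among the entries.
df -h
```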

==System Interconnect==

* Cluster Interconnect: Mellanox InfiniBand EDR 100 Gb/s v2
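On a node, the InfiniBand link state and rate can be inspected with `ibstat`, part of the standard InfiniBand userspace tools typically installed with Mellanox OFED (a generic check, not a command documented on this page):

```shell
# Report InfiniBand adapter status; on an EDR fabric the active port should
# show "Rate: 100". Falls back gracefully where the tool is absent.
if command -v ibstat >/dev/null 2>&1; then
    ibstat
else
    echo "ibstat not installed on this machine"
fi
```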

=Muon=