Systems 1


Key features at a glance:

    * 20 Compute nodes with dual AMD EPYC 7502 CPUs (32 cores / 64
      threads, 2.5 GHz base), 256 GB 8-channel DDR4-2933 RAM, CentOS 7.9
    * 4 Compute nodes with dual AMD EPYC 7452 CPUs (32 cores / 64
      threads, 2.35 GHz base), 256 GB 8-channel DDR4-2933 RAM, dual NVIDIA
      Tesla V100 GPUs with 32 GB HBM2 each, CentOS 7.9
    * 1 interactive login node with an AMD EPYC 7502P CPU (32 cores / 64
      threads, 2.5 GHz base), 256 GB 8-channel DDR4-2933 RAM, Red Hat
      Enterprise Linux 7.9
    * EDR InfiniBand 100 Gb/s interconnect for compute traffic
    * 16 TB internal NVMe SSD storage (HPE Clustered Extents File System)
    * 144 TB HPE MSA 2050 SAS HDD array
    * Theoretical peak CPU performance of the system is 60 TFlops (double
      precision); a worked estimate is given after this list
    * Theoretical peak GPU performance of the system is 65 TFlops (double
      precision)
    * SLURM job scheduler
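
  The quoted CPU figure can be reproduced from the node specifications
  above. The sketch below is a rough sanity check, assuming 16
  double-precision FLOPs per cycle per core (two 256-bit FMA units on
  Zen 2 EPYC CPUs); node counts and base clocks come from the list, and
  the login node is excluded. The GPU figure follows similarly from the
  per-card double-precision throughput of the eight V100s.

      # Rough sanity check of the quoted theoretical peak CPU performance.
      # Assumption: 16 double-precision FLOPs per cycle per core
      # (two 256-bit FMA units on Zen 2 EPYC CPUs).
      FLOPS_PER_CYCLE = 16

      nodes = [
          # (node count, sockets per node, cores per socket, base clock in GHz)
          (20, 2, 32, 2.50),  # EPYC 7502 compute nodes
          (4,  2, 32, 2.35),  # EPYC 7452 GPU nodes (CPU portion only)
      ]

      peak = sum(n * s * c * ghz * 1e9 * FLOPS_PER_CYCLE
                 for n, s, c, ghz in nodes)
      print(f"Peak CPU: {peak / 1e12:.1f} TFlops")  # ~60.8 TFlops, consistent
                                                    # with the ~60 TFlops above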
  [Figure: Schematic view of Shabyt]

  The system is assembled in a two-rack configuration and is physically
  located at the NU Data Center.

  [Figure: Racks]

Other NU research computing clusters