Revision as of 09:11, 6 September 2023
Policies
Note
Please note that Shabyt was recently set up and is now open for general access. Software configurations are continually being updated. The policies described here are subject to change based on decisions made by the NU HPC Committee and actual system utilization.
Acceptable Use
The HPC system is a unique resource for NU researchers and the community. It has special characteristics, such as a large amount of RAM and the capability for massive parallelism. Due to its uniqueness and expense, its use is supervised by the HPC team to ensure fair and efficient utilization.
Users are responsible for complying with the general policies.
Job Submission
Jobs are submitted using the SLURM batch system. Below are examples of batch scripts for different types of jobs:
Serial Job
#!/bin/bash
#SBATCH --job-name=Test_Serial
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=3-0:00:00
#SBATCH --mem=5G
#SBATCH --partition=CPU
#SBATCH --output=stdout%j.out
#SBATCH --error=stderr%j.out
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=my.email@nu.edu.kz
#SBATCH --get-user-env
#SBATCH --no-requeue

pwd; hostname; date
cp myfile1.dat myfile2.dat
./my_program myfile2.dat
SMP Job
#!/bin/bash
#SBATCH --job-name=Test_SMP
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=3-0:00:00
#SBATCH --mem=20G
#SBATCH --partition=CPU
#SBATCH --output=stdout%j.out
#SBATCH --error=stderr%j.out
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=my.email@nu.edu.kz
#SBATCH --get-user-env
#SBATCH --no-requeue

pwd; hostname; date
export OMP_NUM_THREADS=8
./my_smp_program myinput.inp > myoutput.out
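In the SMP script, the thread count (8) appears twice: once in the --cpus-per-task directive and once in OMP_NUM_THREADS. Inside a running job, SLURM exports the allocated value as the SLURM_CPUS_PER_TASK environment variable, so the two can be kept in sync automatically. A minimal sketch (the fallback of 1 is an assumption for runs outside a SLURM job):

```shell
# Derive the OpenMP thread count from the SLURM allocation so the two
# values cannot drift apart; default to 1 when SLURM_CPUS_PER_TASK is unset.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "${OMP_NUM_THREADS}"
```

With this line, changing --cpus-per-task in the header is enough; no second edit in the script body is needed.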
Distributed Memory Parallelism (MPI) Job
#!/bin/bash
#SBATCH --job-name=Test_MPI
#SBATCH --nodes=2
#SBATCH --ntasks=256
#SBATCH --ntasks-per-node=128
#SBATCH --time=3-0:00:00
#SBATCH --mem=250G
#SBATCH --partition=CPU
#SBATCH --exclusive
#SBATCH --output=stdout%j.out
#SBATCH --error=stderr%j.out
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=my.email@nu.edu.kz
#SBATCH --get-user-env
#SBATCH --no-requeue

pwd; hostname; date
NP=${SLURM_NTASKS}
module load gcc/9.3.0
module load openmpi/gcc9/4.1.0
mpirun -np ${NP} ./my_mpi_program myinput.inp > myoutput.out
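Once a batch script is saved (job.sh below is a hypothetical filename), it is submitted and monitored with the standard SLURM commands; a sketch of the typical workflow on the login node:

```shell
sbatch job.sh              # submit; prints "Submitted batch job <jobid>"
squeue -u $USER            # list your pending and running jobs
scontrol show job <jobid>  # detailed state of one job (nodes, limits, reason)
scancel <jobid>            # cancel a job before or during execution
```

With the --output/--error directives above, stdout and stderr land in stdout<jobid>.out and stderr<jobid>.out in the submission directory.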