New Software
New software can be installed system-wide at the request of users, provided it meets the following criteria:
- It is either freely available or NU has a site license for it.
- It is compatible with the existing OS environment on Shabyt.
- It can utilize resources available on Shabyt effectively.
For assistance regarding new software packages, please contact Shabyt system administrators at hpcadmin@nu.edu.kz.
Software Priorities
Software applications are installed in the following order of priority:
- Software that can be installed via the EasyBuild application. The list of software supported by EasyBuild can be found here: EasyBuild. (A quick way to check whether a package is already available as a module is sketched after this list.)
- Applications that are crucial for user groups but cannot be installed through EasyBuild.
- Individual user requests. Such requests are processed only after the higher priorities have been fulfilled.
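Before requesting a new package, it is worth checking whether it is already provided as a module. A minimal sketch, assuming the cluster's module system is Lmod or compatible (GROMACS is used only as an example name):
<clippy show="true">
# List installed modules whose names match a pattern
module avail GROMACS
# On Lmod systems, module spider also finds versions outside the default module path
module spider GROMACS
</clippy>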
Anaconda
Description: "Anaconda" (shortly "conda"), a Python package management, permits the creation of "environments," which are sets of modifiable packages. It accomplishes this by placing them in your residence. This page will take you through conda loading, environment creation, and modification so you may install and utilize any Python packages you require.
Usage: Anaconda can be loaded with: module load Anaconda3/2022.05
Creating the Conda Environment:
Every user can create their own environments, and packages that are shared with the system-wide environments will not be reinstalled or copied to your file store; instead, they will be symlinked. This reduces the amount of space required in your /home directory to install numerous Python environments.
To create a fresh environment with only Python 3.9 and numpy, execute:
conda create -n mynumpy1 python=3.9 numpy
To modify an existing environment, such as one of the system-wide anaconda installations, clone it first:
conda create --clone mynumpy2 -n mynumpy3
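Putting these commands together, a minimal sketch of creating and checking an environment (the environment and package names are only examples):
<clippy show="true">
module load Anaconda3/2022.05
conda create -n mynumpy1 python=3.9 numpy            # create the environment
source activate mynumpy1                             # activate it
python -c "import numpy; print(numpy.__version__)"   # confirm numpy is importable
conda list                                           # list the packages in the active environment
source deactivate                                    # return to the base environment
</clippy>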
Package Installation Within a Conda Environment:
After creating your own environment, you can install additional packages or different versions of existing ones. There are two ways to accomplish this: conda and pip. If a package is available through conda, it is strongly advised that you install it with conda. You can search for packages with conda:
conda search pandas
then download the package by following these steps:
conda install pandas
When attempting to install packages outside of your own environment, you will receive a permission denied message. If this occurs, create or activate an environment you own. If a package is not available via conda, you can install it with pip (note that pip search is no longer supported by PyPI, so look packages up on pypi.org instead):
pip install colormath
Usage of conda Environments:
Once the conda module has been loaded, the required conda environments must be loaded or created. See the conda manual for documentation on conda environments. You can load a conda environment with the following:
source activate mycondaenv
where mycondaenv is the environment's name; unload one with:
source deactivate
which returns you back to the base environment. You can list all the accessible environments by using:
conda env list
A set of anaconda environments is provided system-wide; these are installed with the anaconda version number in the environment name and are never updated, so they provide a fixed base for deriving your own environments or for direct use.
Using Conda in conjunction with the SLURM scheduler:
Using the Anaconda batch mode
To submit jobs to the Slurm job scheduler, you must run your primary application in batch mode from within your Conda environment. There are several steps:
- Create an application script
- Create a Slurm job script that executes the application script
- Submit the job script to the scheduler with sbatch
Your application script should contain the sequence of commands needed for your analysis. A Slurm job script is a special kind of Bash shell script that the Slurm job scheduler recognizes as a job.
Create a batch job submission script like the following and name it myscript.slurm:
<clippy show="true">
#!/bin/bash
#SBATCH --partition=NVIDIA
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=16GB
#SBATCH --time=1:00:00
module purge
eval $(conda shell.bash hook)
conda activate myenvironment
python script.py
</clippy>
The following describes each line:
- #!/bin/bash : use Bash to execute the script
- #SBATCH : syntax that lets Slurm read your requests (ignored by Bash)
- --partition=NVIDIA : submit the job to the NVIDIA partition
- --nodes=1 : use only one compute node
- --ntasks=1 : run only one task
- --cpus-per-task=8 : reserve 8 CPUs for the task
- --mem=16GB : reserve 16 GB of RAM
- --time=1:00:00 : reserve the resources for one hour
- module purge : purge (clear) any loaded environment modules
- eval $(conda shell.bash hook) : initialize the shell to use Conda
- conda activate myenvironment : activate your Conda environment (here, myenvironment)
- python script.py : use Python to run script.py
Be sure to adjust the requested resources to your needs, but keep in mind that requesting fewer resources will reduce the waiting time for your job. To fully exploit the resources, particularly the number of cores, it may be necessary to modify your application script. You can write application scripts and job scripts on your local machine and then transfer them to the cluster, or you can develop them directly on the cluster using one of the available text editor modules (e.g., nano, micro, vim). Submit the job to the scheduler using Slurm's sbatch command:
sbatch myscript.slurm
To check the status of your job, enter:
myqueue
If no job status is listed, the job has finished. The job's output is logged and, by default, stored in a plain-text file named as follows:
slurm-<jobid>.out
in the same directory from which the job script was submitted. To view this file's contents, enter:
less slurm-<jobid>.out
then press q to close the viewer.
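Putting the steps together, a minimal sketch of the submit / monitor / inspect loop using generic Slurm commands (squeue and sacct are the standard counterparts of the myqueue helper; <jobid> stands for the number printed by sbatch):
<clippy show="true">
sbatch myscript.slurm        # prints "Submitted batch job <jobid>"
squeue -u $USER              # list your jobs that are still pending or running
sacct -j <jobid>             # accounting summary once the job has finished
less slurm-<jobid>.out       # view the captured output; press q to quit
</clippy>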
CUDA
Description: CUDA is a parallel computing platform and programming model created by Nvidia for general-purpose computing on its GPUs (graphics processing units). By using the power of GPUs for the parallelizable portion of a computation, CUDA lets developers accelerate computationally heavy applications.
Usage:
module load CUDA/11.4.1
To check if CUDA has been loaded, type:
nvcc --version
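As a quick sanity check that the toolkit works, here is a minimal sketch that writes a trivial kernel, compiles it with nvcc, and runs it (the file name hello.cu is only an example; run it on a GPU node, e.g. in the NVIDIA partition):
<clippy show="true">
# Write a trivial CUDA kernel to hello.cu
cat > hello.cu <<'EOF'
#include <cstdio>

__global__ void hello_kernel() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello_kernel<<<1, 4>>>();   // launch one block of 4 GPU threads
    cudaDeviceSynchronize();    // wait for the kernel to finish
    return 0;
}
EOF
# Compile and run (requires a node with a GPU)
nvcc -o hello hello.cu
./hello
</clippy>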
ANSYS
Description: The ANSYS suite of tools can be used to numerically simulate a wide range of structural and fluid dynamics issues encountered in several engineering, physics, medical, aerospace, and automotive sector applications.
Usage: Load the ANSYS module:
module load ansys/2022r1
Launch the workbench with:
runwb2
The workbench provides access to Fluent, CFX, ICEM, Mechanical APDL, and many other solvers and tools. The corresponding GUIs can also be launched outside of the workbench using fluent, cfx5pre, icemcfd, and launcher.
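For non-interactive runs, ANSYS solvers can also be driven from a Slurm job script. A minimal sketch for Fluent in batch mode, assuming a journal file named run.jou and reusing the partition from the Anaconda example above (adjust the resources, partition, and file names to your case):
<clippy show="true">
#!/bin/bash
#SBATCH --partition=NVIDIA
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --time=4:00:00
module purge
module load ansys/2022r1
# 3ddp = 3-D double-precision solver, -g = no GUI, -t = number of processes, -i = journal file
fluent 3ddp -g -t${SLURM_NTASKS} -i run.jou
</clippy>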
GROMACS
Description: GROMACS is a flexible package for performing molecular dynamics, simulating the Newtonian equations of motion for systems containing hundreds of thousands to millions of particles. It is intended for biochemical molecules, such as proteins, lipids, and nucleic acids, with complex bonded interactions. However, GROMACS is fast at calculating nonbonded interactions, so many groups use it for non-biological systems, like polymers.
Usage: To load the GROMACS software:
module load GROMACS/2021.5-foss-2021b-CUDA-11.4.1
The GROMACS executable is gmx (or gmx_mpi when an MPI build is used). Typing gmx help commands displays the list of gmx commands and their functions.
Batch jobs: Users are encouraged to create their own scripts for batch submissions. Below is an example of a batch submission script.
Parallel MPI
<clippy show="true">
#!/bin/bash
#SBATCH --job-name=gromacs
#SBATCH --mail-user=<YOUR_NU_ID>@nu.edu.kz
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --output=gmx-%j.out
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=4
#SBATCH --ntasks-per-socket=1
#SBATCH --time=24:00:00
#SBATCH --mem-per-cpu=1gb
module purge
module load OpenMPI/4.1.1-GCC-11.2.0
module load GROMACS/2021.5-foss-2021b-CUDA-11.4.1
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun --mpi=pmix_v3 gmx mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s topol.tpr
</clippy>
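The script above assumes that a run input file topol.tpr already exists. A minimal sketch of producing it with gmx grompp (the .mdp, .gro, and .top file names are only examples):
<clippy show="true">
module load GROMACS/2021.5-foss-2021b-CUDA-11.4.1
# Combine run parameters, starting coordinates, and topology into topol.tpr
gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
</clippy>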