To log in to a Zaratan login node, ssh to login.zaratan.umd.edu:
ssh <username>@login.zaratan.umd.edu
You can change your shell by using the chsh command.
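For example, to switch your default shell to bash (assuming /bin/bash is one of the shells available on the system), you would run:
chsh -s /bin/bash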
Modules enable a user to load system-installed software. To load MPI, type:
module load openmpi/gcc
Once you do this, you'll notice that mpicc and mpicxx are in your PATH. You can view your PATH variable by typing:
echo ${PATH}
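To confirm that the MPI compiler wrappers are being picked up from the module, you can also check which binary they resolve to:
which mpicc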
You can look for other installed software by typing:
module avail <name>
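For example, to see which OpenMPI installations are available (openmpi here is just an illustration; substitute any package name):
module avail openmpi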
module list lists your currently loaded modules.
You need to use mpicc (for C programs) or mpicxx (for C++ programs) to compile your code. You can do this on the login node.
mpicc -O3 -o myexe myprogram.c
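Similarly, for a C++ source file (myprogram.cpp is a placeholder name):
mpicxx -O3 -o myexe myprogram.cpp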
All programs should be run on compute nodes by submitting a job via a batch script. Below we show a sample batch script for launching the myexe executable on 2 compute nodes, with 8 processes per node.
#!/bin/bash
#SBATCH -N 2                   # number of compute nodes
#SBATCH --ntasks-per-node=8    # MPI processes per node
#SBATCH -t 00:10:00            # wall clock time limit of 10 minutes (hh:mm:ss)

mpirun -np 16 ./myexe          # 2 nodes x 8 processes per node = 16 processes
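You can add further #SBATCH directives to the script if you wish. For example, the standard Slurm options -J and -o set the job name and the output file (myjob and myjob.out below are placeholder names):
#SBATCH -J myjob
#SBATCH -o myjob.out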
Jobs are submitted to the batch system using sbatch. Let's say you save the snippet above in a file called submit.sh. You can then type:
sbatch submit.sh
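sbatch prints the ID of the submitted job. Unless you specify an output file (e.g., with -o as shown above), the job's output is written to a file named slurm-<jobid>.out in the directory you submitted from, which you can inspect once the job finishes:
cat slurm-<jobid>.out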
For a quicker turnaround time, you may use the debug queue. However, the maximum allowed wall clock time is 15 minutes.
sbatch -p debug submit.sh
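Equivalently, you can select the partition inside the script itself by adding this line to submit.sh:
#SBATCH -p debug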
You can check the status of your jobs by typing:
squeue -u <username>
You can also request interactive sessions on Zaratan. In an interactive session, Slurm (the job scheduler) will allocate you a compute node for a specified amount of wall clock time. You can then get a shell on this compute node and run your commands interactively. This is especially useful for debugging code.
You can request an interactive session in one of the following two ways. The first is sinteractive; for example, to request 16 cores for 60 minutes:
sinteractive -c 16 -t 60
Once your job is scheduled, you will be placed on a compute node automatically, where you can run your programs directly in the shell. For more details, see the Zaratan documentation. The second way is salloc; for example, to request one node for one hour:
salloc -N 1 --time=01:00:00
Unlike sinteractive, salloc does not place you on the compute node directly. Once your job is scheduled, you will see the name of the compute node assigned to you (for example, compute-b8-21). You then have to ssh into this node:
ssh compute-b8-21
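Once you are on a compute node (by either method), load the modules you need and run your program directly; for example (adjust -np to match the resources you actually requested):
module load openmpi/gcc
mpirun -np 8 ./myexe
When you are done, type exit to leave the compute node. With salloc, exiting the shell that salloc started on the login node releases the allocation.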