Quickstart Guide for the Harlow cluster
This cluster is named for a giant in the field of computational fluid dynamics, Frank Harlow. It was commissioned Oct 3, 2018.
Node configuration:
- 30 standard compute nodes
Login
Access to the Harlow system requires a user account. See our computing page for details on how to request one. Access is only possible from within the GSU network.
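From a machine inside the GSU network, login is via SSH. The hostname below is an assumption for illustration; replace it with the address supplied with your account details:

ssh <username>@harlow.gsu.edu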
File Systems
Software and Environment
To manage access to pre-installed software such as compilers, libraries, pre- and post-processing tools, and other application software, Harlow uses the module command. This command offers the following functionality:
- Show lists of available software
- Access software in different versions
harlow:~ $ module avail
...
intel/18.0.3.222
...
harlow:~ $ module load intel/18.0.3.222
harlow:~ $ module list
Currently Loaded Modulefiles:
...
intel/18.0.3.222
...
Job Scripts
Standard batch system jobs are executed in the following steps:
- Provide (write) a batch job script, see the examples below.
- Submit the job script with the command sbatch.
- Monitor and control the job execution, e.g. with the commands squeue and scancel (to cancel the job); see the example after this list.
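A minimal submission cycle looks like the following sketch; the script name my_job.sh and the job ID 123456 are placeholders:

harlow:~ $ sbatch my_job.sh
Submitted batch job 123456
harlow:~ $ squeue -u $USER      # show your pending and running jobs
harlow:~ $ scancel 123456       # cancel the job if needed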
MPI job script
Requesting 4 nodes in the student partition with 16 cores per node (no hyperthreading possible) for 10 minutes, using MPI.

#!/bin/bash
#SBATCH -J harlow_mpi_test
#SBATCH --partition=student
#SBATCH -t 00:10:00
#SBATCH -N 4
#SBATCH --ntasks-per-node=16
#SBATCH -o job%j.out   # stdout filename (%j is the job ID)
#SBATCH -e job%j.err   # stderr filename (%j is the job ID)

module load intel/18.0.3.222
export SLURM_CPU_BIND=none

mpirun -iface ib0 -env I_MPI_FAULT_CONTINUE=on -n $SLURM_NPROCS ./hello_world > hello.out
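The script assumes an MPI executable hello_world in the submit directory. A minimal sketch of building one with the Intel MPI compiler wrapper, assuming a C source file hello_world.c (the file name is hypothetical):

harlow:~ $ module load intel/18.0.3.222
harlow:~ $ mpiicc -O2 -o hello_world hello_world.c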
Hybrid MPI+OpenMP job script
Requesting 2 nodes with 2 MPI tasks per node, and 8 OpenMP threads per MPI task.

#!/bin/bash
#SBATCH -J harlow_hyb_test
#SBATCH --partition=student
#SBATCH -t 00:20:00
#SBATCH -N 2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=8
#SBATCH -o job%j.out   # stdout filename (%j is the job ID)
#SBATCH -e job%j.err   # stderr filename (%j is the job ID)

# This binds each thread to one core
export OMP_PROC_BIND=TRUE
# Number of threads as given by -c / --cpus-per-task
export OMP_NUM_THREADS=8
export KMP_AFFINITY=verbose,scatter

# 4 MPI ranks in total, 2 per node
mpiexec -iface ib0 -n 4 --perhost 2 ./hello_world > hello.out
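For the hybrid case the executable must additionally be compiled with OpenMP support; with the Intel 18 compilers this is the -qopenmp flag. A sketch, again with a hypothetical source file name:

harlow:~ $ module load intel/18.0.3.222
harlow:~ $ mpiicc -qopenmp -O2 -o hello_world hello_world_hybrid.c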
Batch partitions
Partition | Max. walltime | Nodes per job | Remark
--------- | ------------- | ------------- | ------
devel | 04:00:00 | up to 30; more than 8 nodes per job requires prior permission | high-priority development tests
student | 04:00:00 | minimum 2, maximum 4 | normal queue for students running codes that are not under development
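The current state and node availability of the partitions can be checked with the standard Slurm command sinfo, e.g.:

harlow:~ $ sinfo -p devel,student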
Help
For questions, please contact Dr. Justin Cantrell or Dr. Jane Pratt.