
Chimera Scripts

Jobs can be submitted using the 'sbatch' command, generally followed by the name of the submission script. Common command-line options are included in the sample script below; a full listing can be found in the man pages ('man sbatch'). The status of queued or running jobs can be obtained via the 'squeue' command.
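For example, after copying the sample script (see below), a typical submit-and-check sequence looks like this; the job ID printed by sbatch is illustrative:

sbatch run_scavenger.sh
squeue -u $USER        # all of your queued and running jobs
squeue -j 12345        # a specific job, by the ID sbatch printed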

Download the sample script below (you need to remove the .txt file extension after downloading it). When you are logged in on Chimera, you can also copy the sample script from /share/apps/training/sample_scripts/run_scavenger.sh to your working directory and modify it as needed.
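For example, from a login shell on Chimera (the editor choice is up to you):

cp /share/apps/training/sample_scripts/run_scavenger.sh .
nano run_scavenger.sh    # adjust job name, time, memory, partition, etc.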

Interactive job submission can also be used; see the Interactive section below.

#!/bin/bash

# Sample slurm submission script for the Chimera compute cluster
# Lines beginning with # are comments, and will be ignored by
# the interpreter.  Lines beginning with #SBATCH are directives
# to the scheduler.  These in turn can be commented out by
# adding a second # (e.g. ##SBATCH lines will not be processed
# by the scheduler).
#
#
# set name of job
#SBATCH --job-name=slurm-sample
#
# set the number of processors/tasks needed
#SBATCH -n 2

# set an account to use
# (if not set, your default account will be used)
# for scavenger users, use this format:
##SBATCH --account=pi_first.last
# for contributing users, use this format:
##SBATCH --account=<deptname|lastname>

# set max wallclock time, in the format DD-HH:MM:SS
# (the default is 1 hour if no time is requested)
#SBATCH --time=00-1:00:00

# set a memory request
#SBATCH --mem=1gb

# Set filenames for stdout and stderr.  %j can be used for the jobid.
# see "filename patterns" section of the sbatch man page for
# additional options
#SBATCH --error=%x-%j.err
#SBATCH --output=%x-%j.out
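# e.g. with the job name slurm-sample and an (illustrative) job id of
# 12345, the two lines above produce slurm-sample-12345.err and
# slurm-sample-12345.out (%x expands to the job name, %j to the job id)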
#

# set the partition where the job will run.  Multiple partitions can
# be specified as a comma separated list
# Use command "sinfo" to get the list of partitions
#SBATCH --partition=Intel6240
##SBATCH --partition=Intel6240,Intel6248,DGXA100

# When submitting to a GPU node, the following three lines are needed
# (remove one '#' from each ##SBATCH line and the '#' before 'source'):

##SBATCH --gres=gpu:1
##SBATCH --export=NONE
#source /etc/profile

#Optional
# mail alert at start, end and/or failure of execution
# see the sbatch man page for other options
##SBATCH --mail-type=ALL
# send mail to this address
##SBATCH --mail-user=first.last@umb.edu

# Put your job commands here, including loading any needed
# modules or diagnostic echos.

# this job simply reports the hostname and sleeps for two minutes

echo "using $SLURM_CPUS_ON_NODE CPUs"
echo "start time is `date`"

hostname
sleep 120

# Diagnostic/Logging Information
echo "Finish Run"
echo "end time is `date`"

Interactive

Use commands like the following to submit an interactive job on Chimera:

srun -n 1 -N 1 --cpus-per-task=4 -p Intel6126 -t 01:00:00 --pty /bin/bash
srun -n 1 -N 1 --cpus-per-task=16 -p DGXA100 -t 01:00:00 --gres=gpu:1 --export=NONE --pty /bin/bash
Modify the various parameters as appropriate. See 'man srun' for information on the command-line options.
 
When submitting to the GPU nodes, add "--gres=gpu:1 --export=NONE" to the srun command (as in the second example above), and after getting a prompt on the compute node, issue the command "source /etc/profile".
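Putting it together, a minimal GPU session sketch (assuming the DGXA100 partition, one GPU, and that 'nvidia-smi' is available on the node to confirm the GPU is visible):

srun -n 1 -N 1 --cpus-per-task=16 -p DGXA100 -t 01:00:00 --gres=gpu:1 --export=NONE --pty /bin/bash
source /etc/profile    # restore the environment on the compute node
nvidia-smi             # confirm the allocated GPU is visible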

 
