# Longleaf SLURM Examples

These are just examples to give you an idea of how to submit jobs on Longleaf for some commonly used applications. You’ll need to specify SBATCH options as appropriate for your job and application.

## Matlab Examples

a. Single cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --mem=2g
#SBATCH -t 5-00:00:00

module add matlab
matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out
```

The above will submit the Matlab code (mycode.m), requesting one task (-n 1) on a single node (-N 1) in the general partition (-p general), with a 5-day run time limit (-t 5-00:00:00) and 2 GB of memory for the job (--mem=2g). Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu.

The equivalent command-line method:

```bash
module add matlab
sbatch -p general -N 1 -n 1 --mem=2g -t 05-00:00:00 --wrap="matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out"
```

b. Multi cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH -n 17
#SBATCH --mem=10g
#SBATCH -t 02-00:00:00

module add matlab
matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out
```

The above will submit the Matlab code (mycode.m), requesting 17 tasks (-n 17) on a single node (-N 1) in the general partition (-p general), with a 2-day run time limit (-t 02-00:00:00) and 10 GB of memory (--mem=10g). Note: Because the default is one cpu per task, -n 17 can be thought of as requesting 17 cpus.

Also, for jobs needing 60%-100% of the cpus (cores) on a node (but not more than are on one node), the job will likely have a shorter wait time before it starts running if it is submitted to the snp partition instead of the general partition. See Longleaf Technical Specifications for how many cores each node in the general partition currently has. Send email to research@unc.edu to request that your account be added to the snp partition, then submit those jobs to the snp partition.

The equivalent command-line method:

```bash
module add matlab
sbatch -p general -N 1 -n 17 --mem=10g -t 02-00:00:00 --wrap="matlab -nodesktop -nosplash -singleCompThread -r mycode -logfile mycode.out"
```

c. Running the Matlab GUI:

```bash
module add matlab
srun -p interact -N 1 -n 1 --mem=4g --x11=first matlab -desktop -singleCompThread
```

The above will run the Matlab GUI on Longleaf and display it to your local machine. It will use one task (-n 1) on one node (-N 1) in the interact partition (-p interact), with a 4 GB memory limit (--mem=4g). Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu.

## R Examples

a. Single cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH --mem=5g
#SBATCH -n 1
#SBATCH -t 1-

module add r
Rscript mycode.R
```

The above will submit the R code (mycode.R), requesting one task (-n 1) on a single node (-N 1) in the general partition (-p general), with a one day run time limit (-t 1-) and 5 GB of memory for the job (--mem=5g). Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu.

The equivalent command-line method:

```bash
module add r
sbatch -p general -N 1 --mem=5g -n 1 -t 1- --wrap="Rscript mycode.R"
```

b. Multi cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH --mem=5g
#SBATCH -n 12
#SBATCH -t 00:20:00

module add r
R CMD BATCH --no-save mycode.R
```

The above will submit the R job (mycode.R) to a single node (-N 1) in the general partition (-p general), with a twenty minute run time limit (-t 00:20:00), a 5 GB memory limit (--mem=5g), and 12 tasks (-n 12). Note: Because the default is one cpu per task, -n 12 can be thought of as requesting 12 cpus or cores.

Also, for jobs needing 60%-100% of the cpus (cores) on a node (but not more than are on one node), the job will likely have a shorter wait time before it starts running if it is submitted to the snp partition instead of the general partition. See Longleaf Technical Specifications for how many cores each node in the general partition currently has. Send email to research@unc.edu to request that your account be added to the snp partition, then submit those jobs to the snp partition.

The equivalent command-line method:

```bash
module add r
sbatch -p general -N 1 --mem=5g -n 12 -t 00:20:00 --wrap="R CMD BATCH --no-save mycode.R"
```

c. Running the RStudio GUI:

```bash
module add r
module add rstudio
srun --mem=10g -t 5:00:00 -p interact -N 1 -n 1 --x11=first rstudio
```

The above will run the RStudio GUI on Longleaf and display it to your local machine. It will request one task (-n 1) on one node (-N 1), run in the interact partition (-p interact), with a 10 GB memory limit (--mem=10g) and a five hour run time limit (-t 5:00:00). Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu.

## Python Examples

a. Single cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH --mem 5120
#SBATCH -n 1
#SBATCH -t 2:00:00
#SBATCH --mail-type=end
#SBATCH --mail-user=onyen@email.unc.edu

module add python
python3 myscript.py
```

The above will submit the Python 3 job to a single node (-N 1) in the general partition (-p general), with a 2 hour run time limit (-t 2:00:00), a 5120 MB memory limit (--mem 5120), using 1 task (-n 1), and will send you an email when the job has finished (--mail-type=end, --mail-user=onyen@email.unc.edu). **Make sure to use your actual email address.** While SLURM sends emails to any email address, we prefer you use your onyen@email.unc.edu email address; system administrators will use onyen@email.unc.edu if they need to contact you about a job. Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu or core.

The equivalent command-line method:

```bash
module add python
sbatch -p general -N 1 --mem 5120 -n 1 -t 2:00:00 --mail-type=end --mail-user=onyen@email.unc.edu --wrap="python3 myscript.py"
```
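For reference, the myscript.py above can be any ordinary Python program; a single-cpu job needs no SLURM-specific code at all. A minimal sketch (the file name and the computation are just placeholders):

```python
# myscript.py -- a placeholder single-cpu workload; any ordinary
# Python program can be submitted the same way.
import math

# Sum the square roots of 1..10000 on a single core.
total = sum(math.sqrt(i) for i in range(1, 10001))
print(f"sum of square roots 1..10000: {total:.2f}")
```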

b. Multi cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH --mem=1g
#SBATCH -n 1
#SBATCH -c 12
#SBATCH -t 5-

module add python
python3 myscript.py
```

The above will submit the Python 3 job to a single node (-N 1), as a single task (-n 1) that uses 12 cpus (-c 12), in the general partition (-p general), with a five day run time limit (-t 5-) and a 1 GB memory limit (--mem=1g). Note: The default setting of one cpu per task is not applicable here because -c overrides that default.

Also, for jobs needing 60%-100% of the cpus (cores) on a node (but not more than are on one node), the job will likely have a shorter wait time before it starts running if it is submitted to the snp partition instead of the general partition. See Longleaf Technical Specifications for how many cores each node in the general partition currently has. Send email to research@unc.edu to request that your account be added to the snp partition, then submit those jobs to the snp partition.

The equivalent command-line method:

```bash
module add python
sbatch -p general -N 1 --mem=1g -n 1 -c 12 -t 5- --wrap="python3 myscript.py"
```

c. Tensorflow (gpu) job submission script:

```bash
#!/bin/bash
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -p gpu
#SBATCH --mem=1g
#SBATCH -t 02-00:00:00
#SBATCH --qos gpu_access
#SBATCH --gres=gpu:1

module add tensorflow
python mycode.py
```

The above will submit your tensorflow code (mycode.py) as a single task (-n 1), to a single node (-N 1), requesting gpu access (--qos gpu_access), in the gpu partition (-p gpu), with a 2 day run time limit (-t 02-00:00:00), a 1 GB memory limit (--mem=1g), and 1 gpu (--gres=gpu:1). *Longleaf accounts are created without access to the gpu nodes. To get access, include your onyen in a request email to research@unc.edu.*

Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu or core.

The equivalent command-line method:

```bash
module add tensorflow
sbatch -N 1 -n 1 -p gpu --mem=1g -t 02-00:00:00 --qos gpu_access --gres=gpu:1 --wrap="python mycode.py"
```
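To confirm from inside the job that a gpu was actually allocated, note that SLURM's --gres=gpu:1 restricts the job to its assigned device(s), typically by exporting CUDA_VISIBLE_DEVICES. A minimal sketch of such a check (it simply prints zero when run outside a gpu job):

```python
import os

# SLURM's --gres allocation typically exports CUDA_VISIBLE_DEVICES,
# limiting the process to the gpu(s) assigned to the job.
visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
gpu_ids = [g for g in visible.split(",") if g]
print(f"gpus visible to this job: {len(gpu_ids)}")
```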

## SAS Examples

a. Single cpu job submission script to the general partition:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -t 00:60:00
#SBATCH --mem=2g
#SBATCH --mail-type=end
#SBATCH --mail-user=onyen@email.unc.edu

module add sas
sas -noterminal mycode.sas
```

The above will submit your SAS code (mycode.sas) as a single task (-n 1), to a single node (-N 1), in the general partition (-p general), with a 2 GB memory limit (--mem=2g) and a one hour run time limit (-t 00:60:00), and will send you an email when the job has finished (--mail-type=end, --mail-user=onyen@email.unc.edu). **Make sure to use your actual email address.** While SLURM sends emails to any email address, we prefer you use your onyen@email.unc.edu email address; system administrators will use onyen@email.unc.edu if they need to contact you about a job. Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu or core.

The equivalent command-line method:

```bash
module add sas
sbatch -p general -N 1 -n 1 -t 00:60:00 --mem=2g --mail-type=end --mail-user=onyen@email.unc.edu --wrap="sas -noterminal mycode.sas"
```

## Stata Examples

a. Single cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH -t 72:00:00
#SBATCH --mem=6g
#SBATCH -n 1

module add stata
stata-se -b do mycode.do
```

The above will submit the Stata job (mycode.do) to a single node (-N 1) in the general partition (-p general), with a 3 day time limit (-t 72:00:00), a 6 GB memory limit (--mem=6g), and 1 task (-n 1). Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu or core.

The equivalent command-line method:

```bash
module add stata
sbatch -p general -N 1 -t 72:00:00 --mem=6g -n 1 --wrap="stata-se -b do mycode.do"
```

b. Multi cpu job submission script:

```bash
#!/bin/bash
#SBATCH -p general
#SBATCH -N 1
#SBATCH -t 01-00:00:00
#SBATCH --mem=6g
#SBATCH -n 8

module add stata
stata-mp -b do mycode.do
```

The above will submit the Stata job (mycode.do) to a single node (-N 1) in the general partition (-p general), with a 1 day time limit (-t 01-00:00:00), a 6 GB memory limit (--mem=6g), and 8 tasks (-n 8). Note: Because the default is one cpu per task, -n 8 can be thought of as requesting 8 cpus or cores.

Note: You may also need to add the line

**set processors 8**

to the top of your Stata script to tell Stata/MP how many cpus on the host to use. Due to our licensing, the maximum you can use is 8.

The equivalent command-line method:

```bash
module add stata
sbatch -p general -N 1 -t 01-00:00:00 --mem=6g -n 8 --wrap="stata-mp -b do mycode.do"
```

c. Running the Stata GUI:

First get an interactive bash session:

```bash
srun -t 5:00:00 -p interact -N 1 -n 1 --x11=first --pty /bin/bash
```

Once in your bash session do:

```bash
module add stata
xstata-se
```

The above will run the Stata GUI on Longleaf and display it to your local machine. It will use one task (-n 1), on one node (-N 1), run in the interact partition (-p interact), with a five hour run time limit (-t 5:00:00). Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu or core.

## Interactive Bash Example

To start an interactive bash session:

```bash
srun -t 5:00:00 -p interact -N 1 -n 1 --x11=first --pty /bin/bash
```

This will start a bash session with a five hour time limit (-t 5:00:00) in the interact partition (-p interact) for one task (-n 1) on one node (-N 1). Note: Because the default is one cpu per task, -n 1 can be thought of as requesting just one cpu or core.
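Once the session starts, you can confirm what SLURM actually granted by inspecting the job's environment, for example with a short Python snippet. A sketch, assuming the standard SLURM-exported variables (outside an allocation they are simply absent):

```python
import os

# These variables are exported by SLURM inside an allocation;
# outside of one they are not set.
alloc = {
    var: os.environ.get(var, "(not set)")
    for var in ("SLURM_JOB_ID", "SLURM_JOB_PARTITION",
                "SLURM_NTASKS", "SLURM_CPUS_ON_NODE")
}
for var, val in alloc.items():
    print(f"{var} = {val}")
```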

Users may encounter an X11 forwarding error (`srun: error: No DISPLAY variable set, cannot setup x11 forwarding`) when starting interactive sessions. One way to solve this is to change the ssh option "-X" to "-Y" (or "-XY") when connecting to Longleaf.