  1. How do I get an account on Longleaf (or Dogwood)?
  2. What are the details of the new filesystem location(s)?
  3. Why aren’t `/netscr` or `/lustre` present on Longleaf and Dogwood?
  4. What is the queue structure on Longleaf and Dogwood?
  5. How do I transfer data between Research Computing clusters?
  6. How do I transfer data onto a Research Computing cluster?
  7. How do I access the OnDemand Web Portal for Longleaf?

 

1. HOW DO I GET AN ACCOUNT ON LONGLEAF (OR DOGWOOD)?

Follow the steps outlined on https://its.unc.edu/research-computing/request-a-cluster-account/.

For more information on Longleaf, see: https://its.unc.edu/research-computing/longleaf-cluster/.

For more information on Dogwood, see: https://its.unc.edu/research-computing/dogwood-cluster/.

 

2. WHAT ARE THE FILESYSTEM LOCATION(S)?

See the Longleaf Directory Spaces page or the Dogwood Directory Spaces page.

 

3. WHY AREN’T NET-SCRATCH OR LUSTRE PRESENT ON LONGLEAF AND DOGWOOD?

The `/lustre` filesystem is available only via the Infiniband fabric, which we had on Killdevil. Because Longleaf and Dogwood nodes do not connect to that fabric, `/lustre` is not present on them.

With respect to net-scratch, `/netscr`, it is not present on Longleaf and Dogwood for performance reasons. First, running research cluster workloads against `/netscr` would add a load it cannot sustain and would severely degrade performance for everyone. Second, the `/pine` filesystem is purpose-built for I/O and balanced for our research clusters: it may take some effort to move your files and data onto a filesystem mounted on the clusters, but your results will be vastly better than computing against `/netscr`. Third, the quotas on the `/pine` filesystem are higher, so you have more resources to work with.

 

4. WHAT IS THE QUEUE STRUCTURE ON LONGLEAF AND DOGWOOD?

The queue systems are managed through SLURM partitions, which vary by research cluster.

If you have jobs that require a queue (partition) you do not have access to, please contact us via research@unc.edu or via help ticket at https://help.unc.edu.
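
As a minimal sketch of how to see and use the partitions on your cluster (the partition name `general` below is only a placeholder; substitute one that `sinfo` actually reports for your account):

```
# List the partitions (queues) visible to your account,
# with their availability, time limit, node count, and node list.
sinfo -o "%P %a %l %D %N"

# Minimal batch script requesting a specific partition.
# "general" is a placeholder partition name; use one shown by sinfo.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=general
#SBATCH --ntasks=1
#SBATCH --mem=4g
#SBATCH --time=01:00:00
hostname
EOF

sbatch myjob.sh
```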

 

5. HOW DO I TRANSFER DATA BETWEEN RESEARCH COMPUTING CLUSTERS?

  • To copy files to or from mass storage, use SLURM to submit a `cp` command, or use GLOBUS (see the sketch after this list).
  • To copy a big file or thousands of small files, use GLOBUS.
  • To copy a medium-sized file, do not connect to longleaf.unc.edu (or dogwood.unc.edu); instead, connect to one of our data mover nodes and use the `cp` command. There are four data mover nodes: `rc-dm1.its.unc.edu`, `rc-dm2.its.unc.edu`, `rc-dm3.its.unc.edu`, and `rc-dm4.its.unc.edu`. Connecting to the general host address `rc-dm.its.unc.edu` will connect you to the least busy of the four, which generally gives the best performance.
  • To copy small files to or from anywhere other than mass storage, use the `cp` command from the login node.
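
As a rough sketch of the first and third bullets above (the scratch and mass-storage paths are placeholders for your own directories, and `onyen` stands for your own user name):

```
# Submit the mass-storage copy as a SLURM job instead of running it on a login node.
# Both paths are placeholders for your own scratch and mass-storage locations.
sbatch --time=02:00:00 --mem=2g \
       --wrap="cp -r /pine/scr/o/n/onyen/mydata /ms/home/onyen/"

# For a medium-sized file, log in to a data mover node (rc-dm picks the least
# busy of the four) and run cp there rather than on longleaf.unc.edu.
ssh onyen@rc-dm.its.unc.edu
cp /pine/scr/o/n/onyen/results.tar /ms/home/onyen/
```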

 

 

6. HOW DO I TRANSFER DATA ONTO A RESEARCH COMPUTING CLUSTER?

For transfers from your desktop or home computer, or another computer external to Research Computing, to one of the Research Computing clusters, there are several methods; which one to use depends mainly on the size of the transfer, as described below.


WHEN IS A FILE TRANSFER BIG ENOUGH TO BE CONSIDERED A BIG FILE TRANSFER?

The transfer size that counts as a big transfer keeps increasing as technology and network bandwidth improve. Data transfers are fastest when copying a file between two UNC research clusters, slower between a research cluster and your on-campus computer, and significantly slower between a research cluster and your off-campus computer (due to the VPN and your internet connection). Slowest of all is copying a file from removable media (e.g., CD or DVD) onto your computer's hard drive [to then copy to a research cluster]. Because of this, listing any explicit size threshold for a big file is problematic.

One solution is to treat every file as a large file and use GLOBUS for every copy. This works, but it requires more overhead than some users find practical for their workflow. Instead, here is a guideline based on time: if the copy command (cp, scp, sftp, etc.) is going to take more than 10 minutes, treat the copy as a big file transfer; if it takes less than a minute, it is a small transfer. Medium-sized transfers therefore take 1-10 minutes. As of this writing, this roughly translates to the following cutoff sizes between medium and big files:

  • Between 2 research clusters:
    • any directory to mass storage: Use SLURM or GLOBUS.
    • home directory to home directory: 200 GB.
    • home directory to/from scratch or project directory: 500 GB.
    • scratch or project directory to scratch or project directory: 1 TB.
  • Between a research cluster and your on-campus computer:
    • to/from mass storage: Use GLOBUS, or first copy the file to scratch and then use SLURM to copy between scratch and mass storage.
    • to/from home directory: 50 GB.
    • to/from scratch or project directory: 50 GB.
  • Between a research cluster and an off-campus computer: This is highly variable depending on the speed of your internet connection. We recommend using GLOBUS here, partly because it is harder to maintain a long-lived connection from off campus.
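
For the small- and medium-sized cases above, a plain `scp` (or `sftp`) from your own computer is usually enough. A minimal sketch, assuming the data mover nodes accept `scp` connections and using placeholder file and scratch paths:

```
# Push a file from your computer to your scratch space via the least-busy
# data mover node; the destination path is a placeholder for your own directory.
scp data.tar onyen@rc-dm.its.unc.edu:/pine/scr/o/n/onyen/

# Pull a results file from the cluster back to your current local directory.
scp onyen@rc-dm.its.unc.edu:/pine/scr/o/n/onyen/results.tar .
```

Off campus, remember to connect to the UNC VPN first, or use GLOBUS as recommended above.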