The Kure cluster is a Linux-based computing system available to researchers across campus. With more than 1,840 computing cores across 230 blade servers and a large amount of scratch disk space, it provides an environment that can accommodate many types of computational problems. The blades are interconnected with a high-speed InfiniBand network, making the cluster especially appropriate for large parallel jobs, and with 48 GB or more of memory on each host there is also room for large-memory jobs.

You may request an account on Kure by selecting Subscribe to Services on the Onyen Services page and choosing Kure Cluster. Accounts on Kure are primarily available to faculty, graduate students, and staff working with genomic data, as well as to research team members of current Kure faculty patrons.


Operating System

  • RedHat Enterprise Linux 5.6

System Maintenance Guidelines

  • Maintenance will be scheduled and announced in advance with a Change Notice and an email to all Kure account holders. Depending on the nature of the maintenance, outages may last from several hours to several days.
  • Unscheduled maintenance may involve little or no advance notice, depending on the nature of the problem. An Emergency or Follow-Up Change Notice will be issued as soon as possible after unscheduled outages.

Technical Specifications

  • HP blade-based Linux Cluster
  • Largely focused on high-throughput sequencing and genome sciences, with some astrophysics computing
  • Machine Name: kure.its.unc.edu (see the login example after this list)
  • One Login Node
  • 136 Compute Nodes: 48GB RAM
  • 80 Compute Nodes: 72GB RAM
  • 2 Compute Nodes: 96GB RAM
  • 3 Compute Nodes: 192GB RAM
  • Access to 42TB NetApp NAS RAID array used for scratch mounted as /netscr
  • Access to 24TB IBM GPFS Disk Space used for scratch mounted as /largefs
  • Access to 42TB NetApp NAS FC array used for departmental space, mounted as /nas01
  • Access to 36TB NetApp NAS SATA array used for home directories, mounted as /nas02/home
  • Access to 36TB NetApp NAS SATA array for departmental space, mounted as /nas02/depts
  • Access to 909 TB Isilon system for sequencing data, mounted as /proj
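
As a minimal sketch of a typical session, the commands below connect to the login node and stage work in scratch space. The Onyen shown is a placeholder, and the per-user directory layout under /netscr is an assumption; confirm the local convention before relying on it.

    # Connect to the Kure login node with your Onyen (placeholder shown)
    ssh onyen@kure.its.unc.edu

    # Create a personal working directory in scratch space
    # (layout assumed; scratch space is intended for temporary files only)
    mkdir -p /netscr/onyen
    cd /netscr/onyen
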
We use LSF (Load Sharing Facility) from Platform Computing, Inc. for job management. It balances the workload on our central computational servers while giving you access to the software and hardware you need, regardless of where you are logged in. We are available to assist users in submitting jobs through LSF in a fashion that makes optimal use of cluster resources.
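
As a sketch of the basic LSF workflow, the commands below submit, monitor, and cancel a job. The queue name, resource request, and script name are placeholders rather than Kure-specific values; run bqueues to list the queues actually configured on the cluster.

    # Submit a script to LSF on 4 cores; "week" is a placeholder queue name
    bsub -q week -n 4 -o out.%J -e err.%J ./my_analysis.sh

    # Check the status of your pending and running jobs
    bjobs

    # Cancel a job by its numeric job ID if needed
    bkill 12345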