The Holland Computing Center supports a diverse collection of research computing hardware.  Anyone in the University of Nebraska system is welcome to apply for an account on HCC machines.

Access to these resources is by default shared with the rest of the user community via various job schedulers; the scheduling policies may be found on the page for each resource. Alternatively, a user may buy into an existing resource, acquiring 'priority access'. Finally, several machines are available via Condor for opportunistic use. This allows users almost immediate access, but jobs are subject to preemption.

New Users Sign Up

Quick Start Guides

Which Cluster to Use?

Crane: Crane is the newest and most powerful HCC resource. If you are new to using HCC resources, Crane is the recommended cluster to use initially. Limitations: Crane has only 2 CPU/16 cores and 64GB RAM per node. If your job requires more than 16 cores per node or more than 64GB of memory, consider using Tusker instead.

Tusker: Similar to Crane, Tusker is another cluster shared by all campus users. It has 4 CPU/64 cores and 256GB RAM per node; two nodes have 512GB RAM for very large memory jobs. For jobs requiring more than 16 cores per node or large amounts of memory, Tusker is the better option (see the example submission script after this list).

Sandhills: Sandhills is a condominium-style cluster, the majority of which is owned by various research groups on campus. Jobs from resource owners have first priority in the partitions they own. Users who do not own resources (guests) can run jobs opportunistically, but we recommend using Crane or Tusker instead.
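
As a rough illustration of these per-node limits, the sketch below shows a minimal submission script sized for Crane (at most 16 cores and 62.5GB of requested RAM per node). It assumes the SLURM scheduler used on Crane and Tusker; the job name, walltime, and application command are placeholders.

    #!/bin/sh
    #SBATCH --job-name=myjob        # placeholder job name
    #SBATCH --nodes=1               # single node
    #SBATCH --ntasks-per-node=16    # Crane nodes have at most 16 cores
    #SBATCH --mem=62G               # stay under the 62.5GB per-node request limit on Crane
    #SBATCH --time=01:00:00         # placeholder walltime

    cd $WORK/myjob                  # run from /work, not /home (see Important Notes below)
    ./my_application                # placeholder executable

On Tusker, the same script could request up to 64 cores and 250GB of RAM per node.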

User Login

Windows users, please refer to the page For Windows Users. Mac or Linux users, please refer to the page For Mac/Linux Users.

Logging into Crane, Tusker, or Sandhills
ssh -l <username> crane.unl.edu
ssh -l <username> tusker.unl.edu
ssh -l <username> sandhills.unl.edu

Duo Security

Duo two-factor authentication is required for access to HCC resources. Instructions for registering and using Duo can be found in the section Setting up and using Duo.

Important Notes
  • The Crane, Tusker, and Sandhills clusters are separate systems, but they are similar enough that submission scripts written for one will generally work on the others.
  • The worker nodes cannot write to the /home directories. You must use your /work directory for processing in your job (a short staging example follows these notes). You may access your work directory by using the command:

    $ cd $WORK


  • Crane - HCC's newest machine, with 7,232 Intel Xeon cores in 452 nodes and 64GB RAM per node.

  • Tusker - consists of 106 AMD Interlagos-based nodes (6784 cores) interconnected with Mellanox QDR Infiniband.

  • Red - This cluster is the resource for UNL's US CMS Tier-2 site.

  • Glidein - A gateway to running jobs on the Open Science Grid (OSG), a collection of computing resources across the US.
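
Following the /work note above, here is a minimal sketch of staging and submitting a job from the command line. The directory and file names are hypothetical, and sbatch assumes the SLURM scheduler used on Crane and Tusker.

    $ cd $WORK                      # switch from /home to your /work directory
    $ mkdir myjob && cd myjob       # hypothetical job directory
    $ cp $HOME/input.dat .          # copy input data out of /home (not writable from worker nodes)
    $ sbatch submit.sh              # submit the job script from /work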

Resource Capabilities

Crane
  • 452-node production-mode Linux cluster
  • Intel Xeon E5-2670 2.60GHz, 2 CPU/16 cores per node
  • 64GB RAM per node*
  • QDR Infiniband
  • ~1452 TB shared Lustre storage
  • ~1.8 TB local scratch per node

Tusker
  • 106-node production-mode Linux cluster
  • Opteron 6272 2.1GHz, 4 CPU/64 cores per node
  • 256GB RAM per node**
  • 2 nodes with 512GB RAM per node***
  • 1 node with 1024GB RAM per node****
  • QDR Infiniband
  • ~500 TB shared Lustre storage
  • ~500GB local scratch per node

Sandhills
  • 108-node production-mode Linux cluster (condominium model)
  • 62 4-socket Opteron 6376 (2.3 GHz, 64 cores/node), 192GB RAM per node
  • 44 4-socket Opteron 6128 (2.0 GHz, 32 cores/node), 128GB RAM per node
  • 2 4-socket Opteron 6168 (1.9 GHz, 48 cores/node), 256GB RAM per node
  • QDR Infiniband and Gigabit Ethernet
  • 175 TB shared Lustre storage
  • ~1.5 TB local scratch per node

Red
  • 337-node production-mode Linux cluster
  • Various Xeon and Opteron processor models
  • 5,888 cores maximum; the actual number of job slots depends on RAM usage
  • 1.5-4GB RAM per job slot
  • Gigabit and 10Gb Ethernet
  • ~4.0 PB of raw storage space

You may only request the following amounts of RAM: *62.5GB, **250GB, ***500GB, and ****1000GB.
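
As a hedged illustration of these request limits (assuming SLURM directives), a large-memory job targeting one of Tusker's standard 256GB nodes could be sized like this:

    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=64    # Tusker nodes have 4 CPU/64 cores
    #SBATCH --mem=250G              # at most 250GB may be requested on a 256GB Tusker node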
