
Quick overview: 

  • Connected read/write on all HCC HPC cluster resources – you will see the same files "in common" on any HCC cluster (e.g. Crane, Tusker, Sandhills)
  • 30 TB per-group quota at no charge – a larger quota is available for $100/TB/year
  • No backups are made!  Don't be silly – precious data should still be stored/backed up elsewhere, such as on Attic.
  • No purge!  While your files could still be lost to disk failure or user error, they will never be removed by the purge scripts.

Accessing common

Your /common directory can be accessed via the $COMMON environment variable, i.e. cd $COMMON.
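
For example, from a login node on any cluster (a minimal sketch – the project directory name is just a placeholder):

Shell
cd $COMMON
pwd                    # prints your group's path under /common
mkdir -p my_project    # this directory will be visible from every HCC cluster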

How should I use /common? 

  • Store things that are routinely needed on multiple clusters
  • /common is a network-attached filesystem, so limit the number of files per directory (1 million files in a single directory is a very bad idea)
  • If you are accessing /common from a job, you will need to add a line to your submission script!
    • Each user checks out a "license" to access /common for a given job
    • This lets us know exactly who is accessing the filesystem, and for how long, so that if a shutdown is needed we can avoid killing jobs whenever possible
    • It also lets us limit how many jobs can hammer this single filesystem, so it remains healthy and happy

To gain access to the path on worker nodes, a job must be submitted with the following SLURM directive:

SLURM Submit File
#SBATCH --licenses=common

If a job lacks the above SLURM directive, /common will not be accessible from the worker nodes.  (Briefly, this construct allows us to quickly do maintenance on a single cluster without having to unmount $COMMON from all HCC resources.)
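
For reference, a complete submit file that uses /common might look like the following (a minimal sketch – the job name, resource requests, directory, and program are placeholders):

SLURM Submit File
#!/bin/bash
#SBATCH --job-name=common_example   # placeholder job name
#SBATCH --time=01:00:00             # placeholder walltime
#SBATCH --mem=1G                    # placeholder memory request
#SBATCH --licenses=common           # required for /common access on worker nodes

cd $COMMON/my_project               # hypothetical directory under /common
./my_analysis input.dat             # hypothetical program reading static input from /common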

What should I not do when using /common? 

  • Don't use it for high-I/O workflows – use /work for that.  /common should mostly be used to read largely static files or data
  • Do not expect your compiled program binaries to work everywhere!  /common is available on machines with different CPU architectures, different network interconnects, and so on.  Caveat emptor!
    • Serial codes will not be optimized for all clusters
    • MPI codes, in particular, will likely not work unless recompiled for each cluster
    • If you use modules, things should be just fine!  (See the sketch after this list.)
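
As a sketch of that last point, have the job load its software through the module system and keep per-cluster builds of your own codes under /work, while reading static input from /common.  The module names, the $WORK variable, and the directory and program names below are only examples – check module avail on each cluster:

SLURM Submit File
#SBATCH --licenses=common           # still needed – the job reads input from /common

module purge                        # start from a clean environment
module load gcc openmpi             # example module names – check `module avail` on each cluster
cd $WORK/my_mpi_project             # per-cluster build of the code lives under /work
mpirun ./my_mpi_app $COMMON/shared_data/input.dat   # read static input from /common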