
I have an account, now what?

Congrats on getting an HCC account! Now you need to connect to a Holland cluster. To do this, we use an SSH connection. SSH stands for Secure Shell, and it allows you to securely connect to a remote computer and operate it just like you would a personal machine.

Depending on your operating system, you may need to install software to make this connection. Check out our Quick Start Guides for information on how to install the necessary software for your operating system.
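For example, once an SSH client is installed, connecting from a terminal looks like the following; the username and hostname are placeholders, so substitute your HCC username and the login address of the cluster you were assigned:

```
# demo01 and cluster-name.unl.edu are placeholders for your HCC
# username and your cluster's login hostname
ssh demo01@cluster-name.unl.edu
```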

How do I change my password?

All passwords must be at least 8 characters in length and must contain at least one capital letter and one numeric digit. Passwords also cannot contain any dictionary words. If you need help picking a good password, consider using a (secure!) password generator such as the one provided by Random.org.

To preserve the security of your account, we recommend changing the default password you were given as soon as possible. To do this, run the command `passwd` once you successfully connect to a Holland cluster. If you do not remember your current password or are unable to log in to the clusters, please contact us at hcc-support@unl.edu and we will assist you.
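A typical `passwd` session looks something like this; the username is a placeholder, and the exact prompts vary by system:

```
$ passwd
Changing password for user demo01.
Current password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
```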

For additional information, check out the documentation on How to change your password.

I just deleted some files and didn't mean to! Can I get them back?

That depends. Where were the files you deleted?

If the files were in your $HOME directory (/home/group/user/): It's possible.

$HOME directories are backed up daily and we can restore your files as they were at the time of our last backup. Please note that any changes made to the files between when the backup was made and when you deleted them will not be preserved. To have these files restored, please contact HCC Support at hcc-support@unl.edu as soon as possible.

If the files were in your $WORK directory (/work/group/user/): No.

Unfortunately, the $WORK directories are created as a short-term place to hold job files. This storage was designed to be quickly and easily accessed by our worker nodes and as such is not backed up. Any irreplaceable files should be backed up in a secondary location, such as Attic, the cloud, or your personal machine. For more information on how to prevent file loss, check out Don't Let This Happen To You: Preventing File Loss.
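As a sketch, irreplaceable files can be copied to a secondary location with standard tools such as `scp` or `rsync`; the username, hostname, and paths below are placeholders:

```
# Run from your personal machine: copy a results directory out of $WORK
scp -r demo01@cluster-name.unl.edu:/work/group/demo01/results ~/backups/

# Or, from the cluster, sync it to another storage location you can reach
rsync -av /work/group/demo01/results/ /path/to/secondary/storage/
```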

How do I (re)activate Duo?

If you have not activated Duo before:

Please stop by our offices along with a photo ID and we will be happy to activate it for you. If you are not local to Omaha or Lincoln, contact us at hcc-support@unl.edu and we will help you activate Duo remotely.

If you have activated Duo previously but now have a different phone number:

Stop by our offices along with a photo ID and we can help you reactivate Duo and update your account with your new phone number.

If you have activated Duo previously and have the same phone number:

Email us at hcc-support@unl.edu from the email address your account is registered under and we will send you a new link that you can use to activate Duo.

How many nodes/memory/time should I request?

Short answer: We don’t know.

Long answer: The amount of resources required is highly dependent on the application you are using, the input file sizes, and the parameters you select. Sometimes it can help to speak with someone else who has used the software before to see if they can give you an idea of what has worked for them.

But ultimately, it comes down to trial and error: try different combinations and see what works and what doesn’t. A good practice is to check the output and utilization of each job you run. This will help you determine what parameters you will need in the future.
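For example, on clusters using the SLURM scheduler, `sacct` can report how much memory and CPU a completed job actually used; the job ID below is a placeholder:

```
# Show requested vs. used resources for a completed job (1234567 is a placeholder job ID)
sacct -j 1234567 --format=JobID,Elapsed,AllocCPUS,ReqMem,MaxRSS,State
```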

For more information on how to determine how many resources a completed job used, check out the documentation on Monitoring Jobs.

I am trying to run a job, but nothing happens. What's wrong?

Where are you trying to run the job from? You can check this by typing the command `pwd` into the terminal.

If you are running from inside your $HOME directory (/home/group/user/):

Move your files to your $WORK directory (/work/group/user/) and resubmit your job.

The worker nodes on our clusters have read-only access to the files in $HOME directories. This means that when a job is submitted from $HOME, the scheduler cannot write the output and error files in the directory and the job is killed. It appears the job does nothing because no output is produced.
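A minimal fix might look like the following; the directory and script names are placeholders:

```
pwd                        # prints /home/group/user/myjob -- the read-only location
cp -r $HOME/myjob $WORK/   # copy the job files to your $WORK directory
cd $WORK/myjob
sbatch submit.sh           # resubmit from the writable location
```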

If you are running from inside your $WORK directory:

Contact us at hcc-support@unl.edu with your login, the name of the cluster you are running on, and the full path to your submit script and we will be happy to help solve the issue.

I keep getting the error "slurmstepd: error: Exceeded step memory limit at some point." What does this mean and how do I fix it?

This error occurs when the job you are running uses more memory than was requested in your submit script.

If you specified --mem or --mem-per-cpu in your submit script, try increasing this value and resubmitting your job.

If you did not specify --mem or --mem-per-cpu in your submit script, chances are the default amount allotted is not sufficient. Add the line `#SBATCH --mem=<memory_amount>` to your script with a reasonable amount of memory and try running it again. If you keep getting this error, continue to increase the requested memory amount and resubmit the job until it finishes successfully.
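As a sketch, a submit script with an explicit memory request might look like this; the values and names are placeholders to adjust for your job:

```
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --time=02:00:00
#SBATCH --mem=8G           # increase and resubmit if the memory limit error recurs

./my_program input.dat     # placeholder for your actual command
```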

For additional details on how to monitor usage on jobs, check out the documentation on Monitoring Jobs.

If you continue to run into issues, please contact us at hcc-support@unl.edu for additional assistance.

I want to talk to a human about my problem. Can I do that?

Of course! We have an open door policy and invite you to stop by either of our offices anytime Monday through Friday between 9 am and 5 pm. One of the HCC staff would be happy to help you with whatever problem or question you have. Alternatively, you can drop one of us a line and we'll arrange a time to meet (http://hcc.unl.edu/contact-us).
