- User access to the cluster is only allowed through a Secure Shell (SSH) client.
- User access to batch compute nodes is only via the scheduling system, Moab/Torque. Direct SSH access to compute nodes is blocked.
- For testing and development purposes, users are encouraged to use the interactive nodes via qlogin.
- Users are encouraged to use the local disk on the compute nodes as scratch space when running jobs. For each submitted job, a temporary directory of 10 GB (by default) is created at “/localhd/<your JOB ID>”; users can request more disk space if required. The temporary directory and its contents are removed when the job completes, so copy any results you need back to permanent storage before the job ends.
- Devices or virtual machines not managed by the HPC team may not mount the Tier 1 storage system via NFS.
- HPC cluster batch compute nodes are blocked from reaching the Internet and other resources external to the HPC architecture, to ensure the security and availability of compute resources. The interactive qlogin machines and the data-mover nodes are provided for external downloads.
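As a sketch of the scratch-space workflow described above, the snippet below writes out a Torque job script that runs inside the per-job directory under /localhd and copies results back before the directory is removed. The job name, resource values, input file, and analysis program are illustrative assumptions, not real cluster requirements; `$PBS_JOBID` and `$PBS_O_WORKDIR` are standard Torque environment variables.

```shell
# Write out a sample Torque job script that uses per-job scratch space.
cat > scratch_job.pbs <<'EOF'
#!/bin/bash
#PBS -N scratch-demo
#PBS -l nodes=1:ppn=4,mem=8g,walltime=02:00:00

# The scheduler creates this 10 GB (default) directory for each job:
SCRATCH="/localhd/$PBS_JOBID"
cd "$SCRATCH" || exit 1

# Stage input from the submission directory and run in scratch:
cp "$PBS_O_WORKDIR/input.dat" .
my_analysis input.dat > results.out    # hypothetical program

# Scratch is deleted when the job completes, so copy results back first:
cp results.out "$PBS_O_WORKDIR/"
EOF
echo "wrote scratch_job.pbs"
```

On the cluster, such a script would be submitted from a login node with `qsub scratch_job.pbs`.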
Batch Job Scheduling
The HPC employs Moab HPC Enterprise Suite as the queuing system for batch job submissions.
Job priorities are calculated from each user's resource requests and the time the job has been queued in the system. Jobs are scheduled once resources become available and Moab's policies are satisfied.
Users are encouraged to avoid specifying compute node hostnames and queue names when submitting jobs, except in special circumstances (“let the scheduler figure it out for you”).
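Following that advice, a submission would request only resources and leave placement to Moab. The snippet below assembles and prints such a command rather than running it, since `qsub` exists only on the cluster; the resource values and script name are assumptions for illustration.

```shell
# Build a qsub command that requests resources only -- no compute node
# hostnames and no queue names, so Moab can place the job freely.
# Values below are illustrative assumptions.
JOB_SCRIPT="run_analysis.sh"                        # hypothetical job script
RESOURCES="nodes=1:ppn=4,mem=8g,walltime=01:00:00"  # CPUs, memory, wall time
CMD="qsub -l $RESOURCES $JOB_SCRIPT"

# Print the command rather than run it, since qsub is available only on
# the cluster's login/interactive nodes:
echo "$CMD"
```

After submitting on the cluster, `qstat -u "$USER"` shows the queued job and its state.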
For special purposes, such as meeting a project deadline or accommodating unusual job requirements, the HPC team can set up special queues and/or reserve resources for you. To inquire, email firstname.lastname@example.org with the subject line “HPC Batch Job Scheduling”. We review such requests on a case-by-case basis and will do our best to meet your needs.