Usage fees for the CfN cluster are based on two components: disk usage and Project Slot Quotas.
Users of the CfN cluster are assigned to one or more Project Slot Quotas. Each Project Slot Quota is assigned to a single Billing Entity, which is responsible for payment of usage fees.
There is no charge per user assigned to a Billing Entity. Charges are based only on cumulative disk usage and on Project Slot Quotas, which can be shared among any number of users.
A Billing Entity is typically a PI or research center that can be billed for cluster usage. A Billing Entity is responsible for all disk space and Project Slot Quotas assigned to it. Via the Project Slot Quotas, any number of cluster users (i.e. people with cluster login accounts) can be included within a Billing Entity. There is no charge for user accounts themselves. Project Slot Quotas can include users that are not part of the Billing Entity's lab per se, but rather are collaborating with the lab on one of its projects. Individual users can belong to any number of Project Slot Quotas under any number of Billing Entities.
Disk space is organized into project trees, each consisting of a directory and all its sub-directories (e.g. /data/jet/mgstauff). Each project tree is assigned to a single Billing Entity. A Billing Entity is responsible for all project trees assigned to it, regardless of file ownership within the tree.
Project trees can be set up as a shared directory for all users within a lab and/or within a project, or they can be set up separately for each user within a lab or project. However, the shared-tree approach is preferred because it simplifies management and maintenance and facilitates file sharing within a lab or project.
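Since a Billing Entity is billed for everything under its project trees, it can be helpful to check roughly how much space a tree currently occupies. The following is a minimal sketch, assuming you just want a quick total of file sizes under a tree; the path /data/jet/mgstauff is only the example from above, and actual billed usage comes from our own tracking reports, not from this script.

```python
#!/usr/bin/env python
# Rough, unofficial estimate of disk usage under a project tree.
# Billed usage comes from the cluster's own tracking reports.
import os
import sys

def tree_usage_bytes(root):
    """Sum the sizes of all files under 'root' (symlinks not followed)."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # skip files that vanish or are unreadable
    return total

if __name__ == '__main__':
    # Example project tree from the text; pass your own tree as an argument.
    root = sys.argv[1] if len(sys.argv) > 1 else '/data/jet/mgstauff'
    print('%s: %.1f GB' % (root, tree_usage_bytes(root) / float(1024 ** 3)))
```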
All storage is on dedicated enterprise-grade hardware RAIDs utilizing RAID-6 for redundancy.
You may want to archive your data off of the cluster to save costs. If so, we have some suggestions here.
All storage is backed up to tape on a near-quarterly basis (at least 3 times per year) and tapes are stored off-site. Users are responsible for maintaining their own copies of original data in the event of catastrophic failure of the system.
See here for suggestions on archiving your data off of the CfN cluster.
A Project Slot Quota is an assignment of SGE (Sun Grid Engine) slot quotas (see here for details) to one or more users. Each Project Slot Quota is assigned to exactly one Billing Entity, and a Billing Entity must have at least one Project Slot Quota.
One slot is a quota of 1 CPU core and 6GB of RAM (3GB on the 'basic' compute nodes).
Each Project Slot Quota sets an aggregate slot quota that limits the total number of slots all of its users can occupy concurrently. Currently there is a maximum of 200 slots per Billing Entity.
In addition, there is an individual limit of 40 slots for each user within a Project Slot Quota. Different individual quotas can be assigned per user if needed, e.g. so that a power user can be guaranteed more slots than the casual users in a group.
For example:
| Project Slot Quota Name | picsl |
|---|---|
| Users | mgstauff, pcook, jtduda |
| Aggregate quota | 80 slots |
| mgstauff quota | 20 slots |
| pcook, jtduda quota | 40 slots each |
In this example, the three users together can never use more than 80 slots at the same time. In addition, user mgstauff can never use more than 20, and users pcook and jtduda can each never use more than 40. Note that even though pcook and jtduda have 40-slot quotas, if pcook were already using 40 slots and mgstauff were using 15, then jtduda could only use 25 slots with any jobs he submitted at that time.
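The slot accounting in this example boils down to a simple rule: a user can only take additional slots if they fit under both that user's individual quota and the group's aggregate quota. The short sketch below just restates that arithmetic with the picsl numbers above; it is an illustration, not how SGE itself enforces the limits.

```python
# Illustration of the picsl example: how many more slots a user could
# occupy given the individual and aggregate quotas. SGE enforces the
# real limits; this only mirrors the arithmetic described above.

AGGREGATE_QUOTA = 80
USER_QUOTA = {'mgstauff': 20, 'pcook': 40, 'jtduda': 40}

def available_slots(user, in_use):
    """Slots 'user' could still claim, given current per-user usage."""
    group_free = AGGREGATE_QUOTA - sum(in_use.values())
    user_free = USER_QUOTA[user] - in_use.get(user, 0)
    return max(0, min(group_free, user_free))

# Scenario from the text: pcook is using 40 slots and mgstauff 15.
in_use = {'pcook': 40, 'mgstauff': 15}
print(available_slots('jtduda', in_use))    # 25 -- aggregate quota binds
print(available_slots('mgstauff', in_use))  # 5  -- individual quota binds
```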
A Billing Entity may decide to create individual Project Slot Quotas for power users, so they are not limited by an overall group slot quota.
If a user works on projects for more than one Billing Entity, their resources are handled as follows:
The data for the different labs/PIs should be kept in separate high-level directory trees. For example, if you currently have all your data for both labs in /data/jet/mydir, we would create a new directory /data/jet/myotherdir for all data belonging to the second lab/PI. Each directory is then tracked separately for usage on our side. Alternatively, you can move your data to a shared lab directory if your lab has one (or we can set one up).
A single user can belong to multiple Project Slot Quotas assigned to different Billing Entities (and also to multiple Project Slot Quotas belonging to a single Billing Entity). In that case you'll need to run your cluster jobs with an additional parameter to specify which Project Slot Quota the jobs should count against. I'll have details on that in the coming weeks.
Each Billing Entity must have one Basic Account.
This account is a means to provide very affordable basic computing services to small labs and casual users.
It provides:
It does not provide:
Billing will be conducted quarterly and published as a disk usage billing report and a slot usage billing report. Slot usage details are reported under Slot Usage Reports.
NOTE: The first billing cycle will begin 7/1/2015, with charges applied 10/1/2015. For the first cycle, storage amounts won't be tracked weekly until 9/1/2015, to give labs time to clean up disk space.
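The exact formula behind the disk usage billing report isn't spelled out here; since storage amounts are tracked weekly, one reasonable mental model is the average of the weekly snapshots over the quarter. The sketch below illustrates only that assumption; the per-GB rate and the averaging rule are hypothetical placeholders, not the actual billing formula.

```python
# Hypothetical illustration only: average weekly storage snapshots over a
# quarter and apply a placeholder rate. The real rates and formula are
# defined by the CfN billing reports, not by this sketch.

def quarterly_storage_charge(weekly_gb, rate_per_gb):
    """Average the weekly usage samples (GB) and apply a per-GB rate."""
    if not weekly_gb:
        return 0.0
    return (sum(weekly_gb) / float(len(weekly_gb))) * rate_per_gb

# Example input: 13 weekly snapshots of a project tree's usage, in GB.
weekly_gb = [510, 515, 520, 530, 540, 540, 545, 550, 555, 560, 560, 565, 570]
print(quarterly_storage_charge(weekly_gb, rate_per_gb=0.05))  # placeholder rate
```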
For comparison, you may want to check the PMACS HPC services.