form.yml (generated from OSC/bc_osc_pymol)
cluster:
- "cardinal"
form:
- schrodinger_version
- auto_accounts
- bc_num_hours
- bc_num_slots
- num_cores
- node_type
- licenses
- bc_vnc_resolution
- bc_email_on_started
attributes:
schrodinger_version:
widget: 'select'
options:
- ["2023.2", "schrodinger/2023.2"]
- ["2024.3", "schrodinger/2024.3"]
num_cores:
widget: "number_field"
label: "Number of cores"
value: 1
help: |
Number of cores on the chosen node type (4 GB of memory per core unless
requesting a whole node). Leave blank to request the full node.
min: 0
max: 96
step: 1
bc_vnc_resolution:
required: true
node_type:
widget: select
label: "Node type"
help: |
- **any** - (*96 cores*) Use any available Cardinal node. This reduces the
wait time as there are no node requirements.
- **vis** - (*96 cores*) Use a Cardinal node that has an [NVIDIA H100
GPU](https://www.nvidia.com/en-us/data-center/h100/) with an X server
running in the background. This utilizes the GPU for hardware-accelerated
3D visualization. There are 32 of these nodes on Owens.
- **hugemem** - (*96 cores*) Use a Cardinal node that has 2 TB of
available RAM as well as 96 cores. There are 16 of these nodes on
Owens. Requesting these nodes always reserves the entire node.
options:
- [
'any', 'any',
data-max-num-cores: 96,
data-min-num-cores: 1,
]
- [
'vis', 'vis',
data-max-num-cores: 96,
data-min-num-cores: 1,
]
- [
'hugemem', 'hugemem',
data-max-num-cores: 96,
data-min-num-cores: 43,
]
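# A note on the data-* entries above: in Open OnDemand dynamic forms, option
# attributes such as data-max-num-cores and data-min-num-cores adjust the bounds
# of the num_cores field when the selected node type changes. That reading is an
# assumption from the attribute names; this form itself does not document it.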
licenses:
help: |
If you want to use the macromodel, glide, ligprep, qikprep, or epik features in Schrodinger,
you need to specify the corresponding licenses. Licenses are comma-separated in the format '<name>@osc:<# of licenses>',
e.g. 'glide@osc:1'. More help about these changes can be found on our website under the [slurm migration] page.
[slurm migration]: https://www.osc.edu/resources/technical_support/supercomputers/owens/environment_changes_in_slurm_migration
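# Illustrative example (hypothetical values, following the format described in the
# help text above): a session needing one Glide license and one Epik license would
# set the licenses field to:
#   glide@osc:1,epik@osc:1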