I have added you to the Flowcron group in Globus. If you go to Flows -> Library and search for “Flowcron-AI”, you will find it. Please use the flow named “Flowcron-AI”, since this one is for the AI project on Baskerville.
In Flowcron you select and submit a whole job directory, which should have the following structure (I attach a template job directory with a template script in it):
```
Job_Dir/
├── data
│   └──
└── scripts
    └── submission_script.sh
```
There can be other sub-directories besides ‘data’ and ‘scripts’, but those two must be included (even as empty directories), and there must not be any sub-directory named ‘sentinels’.
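The layout above can be created in one step; a minimal sketch (the directory and file names are taken from the template structure above):

```shell
# Create the minimal Job_Dir layout required by Flowcron:
# 'data' and 'scripts' must exist (even if empty), and there must be
# no sub-directory named 'sentinels'.
mkdir -p Job_Dir/data Job_Dir/scripts
touch Job_Dir/scripts/submission_script.sh
```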
Also, in submission_script.sh:
- Use the jgms5830-rfi-ai project.
- Edit the Slurm #SBATCH directives with the time, number of tasks, CPUs per task, and GPUs per task that you want.
- The #SBATCH directives can be exactly as in your other scripts, but DO NOT include any -o/--output or -e/--error arguments (these are filled in automatically by Flowcron).
- Add any module load commands before the no-edit zone.
- Do not edit the no-edit zone.
- Add any non-module-load commands after the no-edit zone.
- Do not use the $HOME and/or $USER variables, because the script will be submitted under the account of the person who set up Flowcron (for Flowcron-AI, my account).
When you point to specific files and/or folders by path, the rules are:
- The present working directory is Job_Dir. So if you want, for instance, to point to a file ‘test.txt’ in a directory named ‘test’ inside the ‘data’ directory, use the path data/test/test.txt.
- If the files/folders (data, models, etc.) are packaged with the job directory (e.g. inside the ‘data’ directory), reference them with relative paths.
- If the inputs (data, models, etc.) are already somewhere on Baskerville (e.g. in your personal directory), use absolute paths.
- For output/resulting files/folders, use relative paths.
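A small illustration of the path rules above; all file names and the absolute path are made up for the example:

```shell
# Hypothetical paths only; adjust to your own files.
packaged_input="data/test/test.txt"             # shipped inside Job_Dir -> relative path
existing_input="/path/on/baskerville/model.pt"  # already on Baskerville -> absolute path
output_path="results/predictions.txt"           # outputs -> relative, so they transfer back
echo "$packaged_input $existing_input $output_path"
```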
After you have created a job directory, to submit it, log in to Globus, select Flows in the left toolbar, go to the Library tab, find the ‘Flowcron-AI’ flow, and click Start.
As input, use the Globus collection that holds the job directory you want to submit, and the path to the job directory. The path has to end with a ‘/’ character to signify that you are uploading the whole Job_Dir/ directory.
You can select whether to clean up after processing. By default this option is off. Please do not turn it on, so that we can collect the results later.
Label the flow and click ‘Start run’.
At the end, any file/folder generated on Baskerville within Job_Dir will be transferred back (the transfer uses sync, so time is not wasted re-transferring the original files) to the same location (Globus collection/path) from which you uploaded Job_Dir.
After you have submitted a run, click Flows in the left toolbar and then the Runs tab to see your flow running. If you click on it and open the Event Log, you can see which state it is in.
Important: when you submit for the first time, you have to tick Allow in the Globus prompts to use Flows, and you have to go to the Flow Runs and open your first submitted flow, because it will ask for further authentication in order to access both your source Globus collection and the Baskerville Globus collection. Otherwise it will not continue.