Thanks for bringing this up. We can work on it, but this might be something that needs to change on the API side first; then we can add corresponding options on the client side.
The API takes a list of keyword arguments and can use those to limit the results from sacct or squeue, but only the jobid and user options are ever passed to the sacct or squeue call. Any other keywords are just filters applied on the server side of the API before the response is sent to the user. We'll need to work on the server side to settle on a good set of keyword arguments, and then we can expose analogous options on the client side. A rough sketch of the current behavior is below.
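Here is a minimal sketch of that filtering behavior, with made-up helper names and sample data (this is not the actual API server code): only `jobid` and `user` are forwarded to sacct/squeue, and every other keyword argument only prunes the rows that come back.

```python
def run_sacct(jobid=None, user=None):
    """Stand-in for the real sacct/squeue call; returns sample rows."""
    return [
        {"jobid": "123", "user": "alice", "qos": "regular_1", "partition": "regular_milan_ss11"},
        {"jobid": "456", "user": "alice", "qos": "gpu_regular", "partition": "gpu_ss11"},
    ]

def query_jobs(jobid=None, user=None, **filters):
    # Only jobid and user reach the scheduler command.
    rows = run_sacct(jobid=jobid, user=user)
    # Remaining keywords never reach sacct/squeue; they filter the response.
    for key, value in filters.items():
        rows = [row for row in rows if row.get(key) == value]
    return rows

# Asking for qos="regular" returns nothing here, because the stored value is
# "regular_1" -- the mismatch discussed below.
print(query_jobs(user="alice", qos="regular"))
```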
Since the qos you submit to on Perlmutter is different from what's returned by sacct, it was hard to get back jobs that ran on CPU vs. GPU, so partition was the easiest choice since it captures both debug and regular qos jobs. In the API we might need to think about some sensible defaults for queries or filtering, because it will be confusing for a user to ask for jobs with qos="regular" and get nothing back because the job was recorded under qos="regular_1" or qos="gpu_regular". One possible default is sketched below.
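As one illustration of a "sensible default" (the matching rule and function name here are assumptions, not a decided design), the server could treat a qos filter as a loose substring match so qos="regular" also catches "regular_1" and "gpu_regular":

```python
def qos_matches(requested: str, actual: str) -> bool:
    # Loose matching: "regular" matches "regular_1" and "gpu_regular",
    # while "debug" still does not match "regular_1".
    return requested == actual or requested in actual

print(qos_matches("regular", "regular_1"))    # True
print(qos_matches("regular", "gpu_regular"))  # True
print(qos_matches("debug", "regular_1"))      # False
```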
The example code under https://nersc.github.io/sfapi_client/examples/check_current_and_past_jobs/#check-currently-running-jobs queries a partition. This is not in line with NERSC documentation and typical system use, which have users specify a QOS for job submission and monitoring.