Submission queue - add missing continue #11
Conversation
@bfhealy unrelated to that, but I retried the same analysis and got the same problem. Looks like the … Any idea? Everything seems fine on the API side; it does submit with the argument.
LGTM. As for the failing analysis, I merged this NMMA PR yesterday afternoon (nuclear-multimessenger-astronomy/nmma#270) that adjusts how the plots are generated based on which files are checkpointed. It might be worth retrying that analysis once more.
@Theodlz I think there's an additional wrinkle with the expiring analysis (assuming it's ZTF23abnpynv): the transient has photometric detections in filters that do not have a corresponding trained model on Zenodo. When the code tries to download the associated model from Zenodo, it raises an error.
To address this, we could either define a list of filters for the API to use in the config, or we could modify the NMMA code to handle missing filters differently. The first approach might be easier, but I'm open to either; what do you think?
Oh, nice catch! Sorry for not figuring this one out… I didn't think of this. We might want both: if the filters are similar enough, have some kind of mapper and notify in the response; otherwise, we throw an error when submitting from Fritz, before running anything. I'm happy to look into that tomorrow.
Thanks Theo, that sounds like a good plan!
@mcoughlin do you think that mapping some filters to others would be reasonable? If so, we might want to decide together what goes into that mapper to make sure it's correct.
@Theodlz yes, and happy to help! For a start, you can just map the sdss filters to panstarrs.
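A minimal sketch of what such a mapper could look like; the exact SDSS and Pan-STARRS filter names that NMMA expects are assumptions here, not taken from the PR:

```python
# Hypothetical mapper from SDSS filter names to Pan-STARRS names with
# trained models. The exact keys and values are assumptions.
FILTER_MAPPER = {
    "sdssg": "ps1__g",
    "sdssr": "ps1__r",
    "sdssi": "ps1__i",
    "sdssz": "ps1__z",
}

def map_filters(filters: list[str]) -> list[str]:
    """Replace filters that have a mapped equivalent; pass others through."""
    return [FILTER_MAPPER.get(f, f) for f in filters]
```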
Thanks! I'll send a version with that mapper here a little later (I need to head out for an hour or so). @bfhealy the other issue is that the API does not always know which filters each model has been trained on. Either we maintain a copy of the models.yaml from Zenodo in the API (easy enough!) that we can use to filter the data after applying the mapper, or we might want NMMA to output some kind of error file in the directory where it's running the analysis. Right now I see on Expanse that there are some logs from what I guess is pymultinest, but it might be nice to have some logs from NMMA there too.
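A rough sketch of that validation step, assuming a cached local copy of models.yaml that maps each model name to the filters it was trained on; the file layout and function names below are assumptions, not the actual API code:

```python
import yaml  # pyyaml

def load_trained_filters(path: str = "models.yaml") -> dict[str, set[str]]:
    """Load the cached copy of models.yaml and return model -> trained filters."""
    with open(path) as f:
        models = yaml.safe_load(f)
    # Assumed layout: {model_name: {"filters": [...], ...}, ...}
    return {name: set(spec.get("filters", [])) for name, spec in models.items()}

def validate_submission(model: str, filters: list[str], trained: dict[str, set[str]]) -> list[str]:
    """Return the subset of submitted filters the model was trained on."""
    if model not in trained:
        raise ValueError(f"Unknown model {model}")
    return [f for f in filters if f in trained[model]]
```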
@Theodlz I think maintaining a copy of models.yaml in the API is a good way to go for now, but I'm all for more NMMA logging going forward. The current …
@bfhealy @mcoughlin I'm wrapping up that PR today (with the mappers, and fetching models.yaml to verify upon submission that we have the submitted filters in NMMA). Any idea there?
These models rely on central wavelengths, I think. So they should not be subject to the same constraints.
Sounds good @mcoughlin, just made that change! I'm wondering: if a user submits an analysis that has filters we do not support, should we just skip those and still run on the rest of the filters, or cancel entirely? What do you think? Maybe the former makes more sense?
Agreed. But it would be nice to report in the logs what was skipped, so users know.
Added a warning in the analysis results if we skipped any filters when submitting. I still keep all the data in the analysis table, for bookkeeping and to be able to rerun past analyses if we add newly supported filters to the models in the future. We never know :)
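A sketch of that skip-and-warn behavior, with hypothetical names (the function name, error message, and warning format below are illustrative, not from the actual PR):

```python
def select_filters(submitted: list[str], supported: set[str]) -> tuple[list[str], str]:
    """Keep supported filters and build a warning listing any that were skipped."""
    kept = [f for f in submitted if f in supported]
    skipped = [f for f in submitted if f not in supported]
    if not kept:
        raise ValueError("No valid filters found in photometry data")
    warning = f"Skipped unsupported filters: {', '.join(skipped)}" if skipped else ""
    return kept, warning
```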
One can dream ;)
@Theodlz I'm getting an error when I test this code with my usual demo SkyPortal source (ZTF21aaqjmps):
```
[20231129_19:31:24 main] No valid filters found in photometry data for this model, cancelling analysis submission.
[20231129_19:31:24 main] Validation error: no valid filters found in photometry data
```
I think this is happening because the model name passed to the API is `Bu2022Ye`, but since we currently only support `tensorflow` analyses, the model name that the API should be checking is `Bu2022Ye_tf`. A quick fix could be to try appending a `_tf` to any input model name that lacks one before raising an error. This could be done by modifying `enums.py` starting at L44 as follows:
```python
elif model not in FIXED_FILTERS_MODELS:
    model = f"{model}_tf"
    if model not in FIXED_FILTERS_MODELS:
        raise ValueError(f"Model {model} not found")
```
This would be problematic if we end up supporting `sklearn_gp` analyses as well, but I don't think that's a current goal of this work. What do you think?
Nice catch @bfhealy! Yes, I think it's fine as is to just append the `_tf`; just added it! Please let me know if that works for you now.
@Theodlz Thanks, this now works for me! LGTM
Not critical, but I noticed that problem here.