BUG: NeptuneCallback fails for losses other than BaseLoss #22
Comments
Hi @slawekslex, Prince Canuma here from Neptune.ai. Thank you for submitting this feedback. I have managed to replicate it with your description and submitted a couple of possible fixes to the engineering team; I will let you know once it is fixed. For now, if this is blocking you, you can edit line 214 of your local ./neptune_fastai/impl/__init__.py file to:
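(The exact snippet from this comment is not preserved in the thread. The sketch below only illustrates the kind of guard it describes, assuming the goal is to tolerate loss functions that lack a `func` attribute; the helper name `_loss_name` is illustrative and not the actual neptune-fastai code.)

```python
import torch.nn as nn

def _loss_name(loss_func):
    """Return a readable name for the Learner's loss function (illustrative helper)."""
    # fastai's BaseLoss wraps the underlying torch loss in `.func`;
    # plain callables (nn.MSELoss(), a python function, ...) do not have it.
    inner = getattr(loss_func, "func", loss_func)
    return getattr(inner, "__name__", type(inner).__name__)

print(_loss_name(nn.MSELoss()))  # -> "MSELoss"
```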
Let me know if this helps you!
Thanks for looking into it @Blaizzy. I was able to work around it by setting a dummy attribute on my loss function.
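For illustration only, a minimal sketch of that kind of workaround; the commenter's actual code is not shown, and the choice of `F.mse_loss` as the dummy value is an assumption:

```python
import torch.nn as nn
import torch.nn.functional as F

# Give the plain loss a `func` attribute so the callback's `loss_func.func`
# lookup no longer raises AttributeError. The value assigned here is arbitrary.
loss = nn.MSELoss()
loss.func = F.mse_loss  # dummy attribute

assert hasattr(loss, "func")
# learn = Learner(dls, model, loss_func=loss, cbs=[NeptuneCallback(run=run)])  # placeholders
```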
I'm going to open issues here if I run into anything else. Thanks again for the responsiveness.
Most welcome! 👍 Have a great weekend 💯
Describe the bug
The line here:
neptune-fastai/neptune_fastai/impl/__init__.py, line 214 in b8e0f3c,
assumes that Learner.loss_func is an object with a func attribute. However, loss_func can be any callable, like torch.nn.MSELoss or just a plain Python function. This causes it to crash with AttributeError.
Reproduction
Create a Learner by passing loss_func=nn.MSELoss() (or any other custom function) together with NeptuneCallback.
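A minimal, self-contained reproduction sketch, assuming the neptune.new API that neptune-fastai used at the time and NEPTUNE_API_TOKEN / NEPTUNE_PROJECT set in the environment; the synthetic data and model are illustrative only:

```python
import torch
import torch.nn as nn
import neptune.new as neptune
from fastai.data.core import DataLoaders
from fastai.learner import Learner
from neptune_fastai.impl import NeptuneCallback

# Tiny synthetic regression problem, only to get a Learner that can fit.
x = torch.randn(64, 4)
y = x.sum(dim=1, keepdim=True)
items = list(zip(x, y))
dls = DataLoaders.from_dsets(items[:48], items[48:], bs=16)

run = neptune.init()
learn = Learner(
    dls,
    nn.Linear(4, 1),
    loss_func=nn.MSELoss(),          # plain callable, not a fastai BaseLoss
    cbs=[NeptuneCallback(run=run)],
)
learn.fit(1)  # crashes with AttributeError when the callback looks up loss_func.func
```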
Expected behavior
The model trains without crashing.