Conversation
Tests are failing because I have not committed the requirements change for commons and stubs yet.

I added a pre-release to see if the tests are running. Everything seems to work as expected. One test is failing because the loss optimizers are not added yet. This should work once #664 is merged.
LGTM! But why is CI failing?
```python
train_data = 'path/to/another/data.csv'

run2 = finetuner.fit(
    model='efficientnet_b0',
```
Do we still need the `model`?
Okay, now I see the hint. Do I understand correctly that the `model` type must be identical to the artifact's model type?
We need the model name to detect the task, so the tasks they are used for must be equal; so basically yes. If you use a DocArray dataset, it probably does not matter.
Once the PR for the new pooling and loss options is merged, everything should pass.
LGTM!
docs/walkthrough/run-job.md (Outdated)
```
:class: hint
When you want to continue training, you still need to provide the `model` parameter
beside the `model_artifact` parameter for Finetuner to correctly configure the new run.
```

```suggestion
:class: hint
When you want to continue training, you still need to provide the `model` parameter
as well as the `model_artifact` parameter for Finetuner to correctly configure the new run.
```
```diff
- name=model,
+ name=model if not kwargs.get(MODEL_ARTIFACT) else None,
  artifact=kwargs.get(MODEL_ARTIFACT),
```
Just to make sure I understand: this means that if there is a `MODEL_ARTIFACT`, then `name` is set to `None`?
Yes, because George implemented the JSON schema so that both arguments are mutually exclusive, which makes sense: otherwise it would be a little hard to see what will happen, e.g., whether a new model is constructed or the artifact is used. This way, the final config sent to the API is clearer about what a run should do.
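The mutual exclusivity in the diff above can be sketched as a small helper. This is an illustrative reconstruction only; `MODEL_ARTIFACT` and `build_model_config` are hypothetical names, not Finetuner's actual internals.

```python
# Illustrative sketch of the mutually exclusive name/artifact logic.
MODEL_ARTIFACT = 'model_artifact'

def build_model_config(model, **kwargs):
    """Return the model section of a run config.

    If a model artifact is passed, drop the model name so the config
    sent to the API names exactly one source for the weights.
    """
    return {
        'name': model if not kwargs.get(MODEL_ARTIFACT) else None,
        'artifact': kwargs.get(MODEL_ARTIFACT),
    }
```

With no artifact, `name` keeps the model; with an artifact, `name` becomes `None` so the API sees an unambiguous config.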
As said in the comment above, the stubs version already includes the ArcFace changes, specifically the loss optimizers etc. The tests are therefore failing, but this will be addressed by Louis' PR. So I need to merge main into this branch after Louis finishes his PR; then it should not fail anymore.
📝 Docs are deployed on https://ft-feat-support-continue-training--jina-docs.netlify.app 🎉
Add support for continuing the training
This PR enables continuing the training from an existing artifact of a fine-tuned model.
Since we need the model name to detect the task in order to construct the training dataset correctly, users still need to pass the `model` property to the `fit` function.