[RFC] Metrics Parameter in training loops #93
Comments
Metrics is a little abstract; maybe we could also integrate wandb or TensorBoard into quickvision to show the performance of a training strategy.
People can already use wandb / TensorBoard, e.g. a wandb notebook works out of the box with the current metrics. We should probably avoid a wandb dependency. Again, PyTorch Lightning provides these loggers out of the box, and the current API stays flexible with plain PyTorch too. By not integrating wandb or similar loggers, people can manually log whatever, whenever, and wherever they like.
After a few discussions over Slack with @zlapp and the PyTorch Lightning team: should we start this refactor? It would mean updating tests a bit, as well as the docs.
So here is an API proposal for metrics. Let's not tie pycocotools to quickvision; this is for an extra param.

@zlapp, using pycocotools or any other metrics will be very simple.

`train_results` will be a dict. Its values will follow some consistency which we will maintain.
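To make that contract concrete, here is a hedged sketch of what a `train_results` dict might look like. The exact keys (`loss`, `predictions`, `targets`) are assumptions for illustration; the point is only the "always a dict with stable keys" consistency.

```python
# Hypothetical shape of the `train_results` dict; the keys below are
# assumptions, not the final quickvision API.
train_results = {
    "loss": 0.87,                               # scalar training loss
    "predictions": [[0.1, 0.9], [0.8, 0.2]],    # raw model outputs
    "targets": [1, 0],                          # ground-truth labels
}

# Because the keys stay consistent across models, downstream tooling
# (pycocotools, wandb, TensorBoard) can consume the same fields everywhere.
print(sorted(train_results))
```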
Another alternative is that we do not compute metrics at all, and instead provide utilities to compute them. I'm neither for nor against it.
OK, so we can now use torchmetrics. It has no required dependencies, so we can easily add it, and we will use it to calculate metrics. In case any metric is not available there, we will keep our own implementation.
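For context, torchmetrics metrics follow a stateful `update()` / `compute()` pattern: accumulate state batch by batch, then reduce. Below is a minimal pure-Python stand-in for that pattern (a hypothetical mean-absolute-error metric), written without the torchmetrics dependency so it runs anywhere; it is a sketch of the interface, not torchmetrics itself.

```python
# Minimal stand-in for a torchmetrics-style metric (stateful update/compute),
# in pure Python so the snippet runs without torchmetrics installed.
class MeanAbsoluteError:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, preds, targets):
        # Accumulate state batch by batch, like a torchmetrics Metric.update.
        self.total += sum(abs(p - t) for p, t in zip(preds, targets))
        self.count += len(targets)

    def compute(self):
        # Reduce the accumulated state, like Metric.compute.
        return self.total / self.count

metric = MeanAbsoluteError()
metric.update([1.0, 2.0], [1.0, 4.0])  # errors: 0, 2
metric.update([3.0], [0.0])            # error: 3
print(metric.compute())                # 5 / 3 ≈ 1.667
```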
🚀 Feature
Slightly important as engines are concerned here 😅
Metrics parameter in training.
Motivation
Currently the `train_step`, `val_step`, and `fit` APIs return a dictionary by calculating metrics from torchvision, returning the metrics but not the predictions. This hardcodes the API and is trouble.

A `metrics` param would open up this API.

Currently `train_step` will always return metrics, and never the results/model outputs. Let's add a `metrics` param which will work as follows.
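A hedged sketch of those semantics, under assumed names (`train_step`, the dict keys, and the callable-per-name metrics argument are all illustrative, not the final API): with `metrics=None` the step returns raw outputs, otherwise it computes the requested metrics.

```python
# Hypothetical sketch of the proposed `metrics` parameter; names and keys
# here are assumptions, not the final quickvision API.
def train_step(batch, model=None, metrics=None):
    # Stand-in for a real forward pass; a real trainer would run the model.
    outputs = {"predictions": [0, 1, 1], "targets": [0, 1, 0]}
    results = {"loss": 0.42}
    if metrics is None:
        # metrics=None: return raw outputs so users can compute anything
        # themselves (e.g. feed them to pycocotools).
        results.update(outputs)
    else:
        # metrics given: compute each requested metric from the outputs.
        for name, fn in metrics.items():
            results[name] = fn(outputs["predictions"], outputs["targets"])
    return results

def accuracy(preds, targets):
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(targets)

res = train_step(batch=None, metrics={"accuracy": accuracy})
print(sorted(res))  # loss plus the requested metric
```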
Pitch
For detection models in the PyTorch trainers, the param would work as follows.

Since the PyTorch Lightning API is completely decoupled from the PyTorch one, people can already subclass it and use PL metrics. E.g.

Metrics with Lightning are easier: we can use PL metrics and calculate them on the fly with a param in `Trainer`. Not sure about this, but it is easily possible.
Alternatives
This is open to discussion; a really neat solution here could be a major feature upgrade.
Additional context
This will solve a lot of issues like #62 (being able to use the COCO API for detection). We should not have that dependency, so COCO is still a little confusing; with `metrics=None`, people can postprocess (with utilities we provide) and use the COCO API themselves.
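A hedged sketch of that escape hatch: `postprocess` and the detection dict format below are hypothetical stand-ins for quickvision's detection utilities, shown only to illustrate handing plain outputs to the user for pycocotools.

```python
# `postprocess` and the output format are hypothetical stand-ins for
# quickvision's detection utilities, not its actual API.
def postprocess(raw_outputs, score_threshold=0.5):
    # Filter low-confidence boxes; a real implementation would also apply NMS.
    return [det for det in raw_outputs if det["score"] >= score_threshold]

# Raw model outputs obtained with metrics=None.
raw_outputs = [
    {"bbox": [0, 0, 10, 10], "score": 0.9, "category_id": 1},
    {"bbox": [5, 5, 20, 20], "score": 0.3, "category_id": 2},
]

detections = postprocess(raw_outputs)
# `detections` is now a plain list of dicts that the user could dump to JSON
# and evaluate themselves with pycocotools (COCO.loadRes / COCOeval).
print(len(detections))  # 1
```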
cc @zlapp @zhiqwang @hassiahk