[feat] Add classification fine-tuning utilities #8
Conversation
@ankitade has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Stack from ghstack (oldest at bottom):
The PR aims at adding starter classification utils to the FLAVA examples. As of now the PR adds the following things:

- Finetuning trainer
- Classification FLAVA
- TorchVisionDataModule for easy composability of datasets from torchvision (sketched below, after the TODO list)
- Some changes to the MLP module for more generalization
- Some improvements/bug fixes to the original FLAVA code
- Splits the datamodules to better serve their individual concerns

TODOs:

- Add support for the rest of the datasets. This involves leveraging the existing datamodules that we created in this PR along with support for seamlessly plugging in different datasets
- Add command line overriding on top
- Add support for retrieval, zero-shot and other downstream tasks in an easily accessible form
- Expose more things from the model other than just the loss
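For illustration, here is a minimal sketch of how a TorchVisionDataModule of the kind described above could wrap an arbitrary torchvision dataset for use with a Lightning trainer. The class name matches the one mentioned in this PR, but the constructor arguments, defaults, and the CIFAR10 choice are assumptions made for the sketch, not the exact API added here.

```python
# Illustrative sketch only: the actual TorchVisionDataModule in this PR may differ.
# Assumes pytorch_lightning and torchvision are installed; CIFAR10 stands in for
# any torchvision classification dataset.
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class TorchVisionDataModule(pl.LightningDataModule):
    """Wraps a torchvision dataset class so datasets can be swapped easily."""

    def __init__(self, dataset_cls=datasets.CIFAR10, data_dir="./data", batch_size=32):
        super().__init__()
        self.dataset_cls = dataset_cls
        self.data_dir = data_dir
        self.batch_size = batch_size
        self.transform = transforms.ToTensor()

    def prepare_data(self):
        # Download once; setup() then builds the train/val datasets.
        self.dataset_cls(self.data_dir, train=True, download=True)
        self.dataset_cls(self.data_dir, train=False, download=True)

    def setup(self, stage=None):
        self.train_set = self.dataset_cls(self.data_dir, train=True, transform=self.transform)
        self.val_set = self.dataset_cls(self.data_dir, train=False, transform=self.transform)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)
```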
Test Plan:
The code is not in a 100% working state. I have tested only the changes in my PR. I expect everything to be stable by the end of the stack.
Differential Revision: D35361821
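And a rough sketch of how the finetuning trainer and classification pieces are intended to fit together. `ClassificationFineTuneModule` and its toy backbone are stand-ins invented for this sketch; the real module would wrap the pretrained FLAVA encoder and the generalized MLP head from this PR rather than the placeholder layers shown here.

```python
# Sketch of the finetuning flow, not the trainer added in this PR.
import torch
import pytorch_lightning as pl
from torch import nn


class ClassificationFineTuneModule(pl.LightningModule):
    """Stand-in for a classification finetuning module; the real version would
    wrap the pretrained FLAVA model instead of this toy backbone."""

    def __init__(self, num_classes=10, lr=1e-4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
        self.head = nn.Linear(256, num_classes)  # the PR's generalized MLP would slot in here
        self.loss_fn = nn.CrossEntropyLoss()
        self.lr = lr

    def training_step(self, batch, batch_idx):
        images, labels = batch
        logits = self.head(self.backbone(images))
        loss = self.loss_fn(logits, labels)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)


if __name__ == "__main__":
    # TorchVisionDataModule is the sketch from the comment above.
    dm = TorchVisionDataModule()
    trainer = pl.Trainer(max_epochs=1, limit_train_batches=10)
    trainer.fit(ClassificationFineTuneModule(), datamodule=dm)
```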