[RFC] Batteries Included - Phase 3 #6323
Comments
Tagging a few of the regular contributors in case they are interested in specific items. Feel free to propose additional candidates.
I will be happy to take losses :) dice loss first.
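(For context on this item: dice loss is one minus the Dice coefficient, i.e. 1 - 2|X ∩ Y| / (|X| + |Y|). A minimal PyTorch sketch of the binary case follows; the function name, signature and smoothing term are illustrative, not the eventual torchvision API.)

```python
import torch

def dice_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # logits: raw predictions of shape (N, ...); targets: binary masks of the same shape.
    probs = logits.sigmoid().flatten(1)
    targets = targets.flatten(1)
    intersection = (probs * targets).sum(dim=1)
    cardinality = probs.sum(dim=1) + targets.sum(dim=1)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return (1.0 - dice).mean()  # average the per-sample loss over the batch
```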
Like Christmas in July hehe 😁 I have a few questions though:
Looking forward to helping with the next release :)
I'd like to take the Polynomial LR scheduler!
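(For readers unfamiliar with it: the polynomial scheduler decays the learning rate as base_lr * (1 - t / total_iters) ** power. This is the scheduler that eventually landed in core as torch.optim.lr_scheduler.PolynomialLR; a minimal usage sketch, with a placeholder model and hyperparameters:)

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# lr follows 0.1 * (1 - t / 100) ** 2 for the first 100 steps, then stays at 0
scheduler = torch.optim.lr_scheduler.PolynomialLR(optimizer, total_iters=100, power=2.0)

for _ in range(100):
    optimizer.step()    # the actual training step is elided here
    scheduler.step()
```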
Thanks for offering to help! We are lucky to have you guys! :) @frgfm excellent questions. Let me try to provide some more context here:
Hello all 👋 Is mosaic still available? If so, I would like to try it.
@abhi-glitchhg Mosaic is available. It's a bit unclear how it will be implemented at the moment as there are multiple approaches seen online. I would prefer it if we can implement it as a Transform (rather than a Dataset or preloader etc), potentially similar to what we do for MixUp or SimpleCopyPaste. I think it would be best to disconnect its addition from the Transforms V2 initiative and add it first on the references. Then @pmeier and @vfdev-5 can propose moving forward with it using the new API. To test the new transform we can use a similar approach as in #5825. The contributor has provided enough visual proof that the transform works as expected and then I helped him verify it by training models on internal infra. Let me know if that makes sense to you.
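(To make the Transform-style idea concrete, here is a rough sketch of the core mosaic operation; the function name, the fixed 2x2 layout and the four-image interface are assumptions for illustration, not the final design:)

```python
import torch

def mosaic(images, boxes):
    """Naive 2x2 mosaic: stitch four equally-sized images into one canvas
    and shift their boxes accordingly. images: list of four (C, H, W) tensors;
    boxes: list of four (N_i, 4) tensors in (x1, y1, x2, y2) format."""
    c, h, w = images[0].shape
    canvas = images[0].new_zeros((c, 2 * h, 2 * w))
    offsets = [(0, 0), (w, 0), (0, h), (w, h)]  # top-left corner of each tile
    out_boxes = []
    for img, bxs, (dx, dy) in zip(images, boxes, offsets):
        canvas[:, dy:dy + h, dx:dx + w] = img
        shift = torch.tensor([dx, dy, dx, dy], dtype=bxs.dtype)
        out_boxes.append(bxs + shift)  # move boxes into canvas coordinates
    return canvas, torch.cat(out_boxes)
```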
@datumbox I can work on MobileViT. 👌
I wanted to know how helpful it is to implement network architectures without training them, validating the implementations just by using/adapting/porting weights released by the authors. I can provide some specific examples if needed.
@federicopozzi33 Though the final training (especially of the large variants) is done by our team, we typically ask contributors to train at least one variant of the architecture to prove it works. The hardest part of such contributions is often to reproduce the accuracies of the paper and that's why we request this. We've been known to be flexible though, especially if a contributor has experience in implementing and contributing similar architectures to us. Another approach would be to partner with another contributor who has access to an infra and co-author the PR. That's the approach taken on the FCOS model by @xiaohu2015 and @zhiqwang.
@yassineAlouini I realized that my fat finger gave you a thumbs down instead of thumbs up on your comment to work on MobileViT. Sorry about that. Are you still interested in it?
@datumbox Yes I am, and I understood that you meant 👍 instead, so all is good. I should start on Friday. 👌
I'd like to take on the LARS optimizer, but I have a question: to test the correctness of the optimizer, is it required to reproduce the experiments of the paper?
@federicopozzi33 I don't think it's required to reproduce experiments, but we would need to be very careful to ensure the optimizer works the same as a reference implementation. If reproducing experiments is necessary, I can run them for you. @frgfm you said you already had implementations; are you still interested, and do you have time to contribute? Perhaps you could work with Federico. Let me know your preferences guys and we can come up with a plan. Because the contribution will land on Core, we would need to align with their practices. The earlier PR on PolynomialLR went super smoothly, so we can try replicating that approach.
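(As an aside, a common way to check an optimizer against a reference implementation without reproducing paper experiments is a step-for-step parity test on identical models. A sketch, where the two factory arguments stand in for the candidate and a trusted reference such as an existing LARS implementation:)

```python
import copy
import torch

def assert_optimizers_match(make_candidate, make_reference, steps: int = 10) -> None:
    torch.manual_seed(0)
    model_a = torch.nn.Linear(4, 4)
    model_b = copy.deepcopy(model_a)  # identical initial weights
    opt_a = make_candidate(model_a.parameters())
    opt_b = make_reference(model_b.parameters())
    for _ in range(steps):
        x = torch.randn(8, 4)  # feed the same batch to both models
        for model, opt in ((model_a, opt_a), (model_b, opt_b)):
            opt.zero_grad()
            model(x).pow(2).sum().backward()
            opt.step()
    for p_a, p_b in zip(model_a.parameters(), model_b.parameters()):
        torch.testing.assert_close(p_a, p_b)  # parameters must stay in lockstep
```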
Ok. There should be some reference implementations.
Oh sorry, I looked at the main thread without paying attention to the other messages. @frgfm, let me know if you're still interested in contributing; I can choose other issues without any problem :)
@federicopozzi33 @frgfm @yassineAlouini @abhi-glitchhg @oke-aditya It would be great if you could either open issues with the items you plan to work on, or open dummy (empty) initial PRs for them, so that we can link them from the ticket and know which work is assigned to whom. This would allow other community members to pick up work. I would also recommend assigning one task to each person so that we can progress the work faster and without blocking others who want to contribute (though I'm happy to group together things that make sense, such as the losses or the optimizers, if that's what we want).
Will do @datumbox 👌
Yes. Fortunately I'm well and in good health, so I will take dice loss and this. 😊
Thanks a lot @oke-aditya for the help!
I see that Mixup for Detection [1, 2] is still available.
@ambujpawar It is! Would you be happy to give the new Transforms API a try (it's at
I don't have a preference for now, but I think the new Transforms API would be nicer, right?
Sounds great! Could you create an issue or a dummy PR so that we can assign it to you and keep track of this item more easily?
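(For anyone following this sub-thread: unlike classification MixUp, the detection variant typically blends the two images but keeps the union of both sets of boxes and labels instead of interpolating them. A rough sketch under those assumptions; the names and target-dict layout are illustrative:)

```python
import torch

def detection_mixup(img1, targets1, img2, targets2, alpha: float = 1.5):
    # img1/img2: (C, H, W) tensors of the same size; targets: dicts with
    # "boxes" (N, 4) and "labels" (N,) tensors, as in torchvision detection.
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    mixed = lam * img1 + (1.0 - lam) * img2  # blend pixels only
    merged = {
        "boxes": torch.cat([targets1["boxes"], targets2["boxes"]]),
        "labels": torch.cat([targets1["labels"], targets2["labels"]]),
    }
    return mixed, merged
```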
Since the LARS optimizer is still available, I would like to pick it up. @datumbox, is that OK with you?
@datumbox I would like to take the LAMB optimizer.
@federicopozzi33 I think your message fell through the cracks... I apologise; would you like to pick it up? @Atharva-Phatak sounds great, I assigned the issue you started to you. Note that this is meant to be upstreamed to PyTorch Core. Ping me when you have an early version so we can do an early check before involving PyTorch Core engineers. :)
No problem. Yes, I'm still interested. I should open the draft PR directly in the PyTorch repo, right?
@federicopozzi33 Yes, that sounds great! Feel free to ping me like the previous time to go through checks together, and when we are mostly ready, I'll ping the Core engineers to get their input. :)
@datumbox Are you recommending I make a draft PR so we can go over the changes, and then we file the main PR for pytorch-core, right?
@Atharva-Phatak Yes, sounds good. You can start a draft PR on core and put me as reviewer to discuss details before looping other devs in. If you have specific details in mind, you can also post them on the issue you raised at TorchVision. Previously that's what we did with @federicopozzi33 and it worked fine (see #4438 (comment)). I'm quite flexible to adjust to whatever works for you; just make sure you mark the PR as draft to indicate it's work in progress.
Oops, I realized I hadn't opened the PRs for LARS & LAMB on core 😅
@frgfm I thought no one was working on LAMB and hence I took it up. If it's okay, I would like to keep working on it. 😃
Of course it is!
Same as @Atharva-Phatak. Moreover, I've already started working on that, so I'd like to continue.
Roger that! In case one of you encounters trouble, let me know, as I implemented those a while back 👍 (cf. https://github.com/frgfm/Holocron/tree/main/holocron/optim)
I am (almost) finished with MixUp for Detection. I would like to pick up Deformable DETR next, since it's not taken up yet.
@ambujpawar I was wondering if you could perhaps help out with the normal DETR first. The work previously started at #5922 but was not completed. Let me know if that's of interest.
Sure, it seems interesting. I thought other contributors were already working on it, so I chose a different one. I see a PR for DETR but not an issue. Shall I create one?
Yes, good idea; create an issue and perhaps ping the devs who are on the PR to see if there are opportunities for collaboration. We've previously done shared PRs (for FCOS, see #4961), so this might also work here. Otherwise we can find another ticket for you. I just wanted to make sure we add DETR soon, as it would be the first Transformer-based detection model, something missing in TorchVision at the moment.
Sounds good. Looking forward to the realization of the MTV candidates.
@datumbox I have some free time and would like to contribute. I see that the DETR implementation is also not moving ahead. |
@oke-aditya Thanks. :) Are there any open topics? I can see that many of the topics/tasks are already taken. |
@deepwilson It's a tough period for the team, as it doesn't have enough resources. I myself have changed jobs, so it's harder to follow up with every ongoing initiative. It would be very nice to finally add DETR to the library, but training it might be a challenge. Not sure if @pmeier or @vfdev-5 have any good issues where they could use community help?
🚀 The feature
Note: To track the progress of the project, check out this board.
This is the 3rd phase of TorchVision's modernization project (see phases 1 and 2). We aim to keep TorchVision relevant by ensuring it provides, off the shelf, all the necessary primitives, model architectures and recipe utilities to produce SOTA results for the supported Computer Vision tasks.
1. New Primitives
To enable our users to reproduce the latest state-of-the-art research, we will enhance TorchVision with the following data augmentations, layers, losses and other operators:
Data Augmentations
Losses
Operators added in PyTorch Core
2. New Architectures & Model Iterations
To ensure that our users have access to the most popular SOTA models, we will add the following architectures along with pre-trained weights:
Image Classification
Video Classification
3. Improved Training Recipes & Pre-trained models
To ensure that our users have access to strong baselines and SOTA weights, we will improve our training recipes to incorporate the newly released primitives and offer improved pre-trained models:
Reference Scripts
Pre-trained weights
Other Candidates
There are several other Operators (#5414), Losses (#2980), Augmentations (#3817) and Models (#2707) proposed by the community. Here are some potential candidates that we could implement depending on bandwidth. Contributions are welcome for any of the below:
cc @datumbox @vfdev-5