Support for TorchEval for metrics usage #1708
Comments
torcheval captures several ideas that its authors were exposed to while collaborating on torchmetrics; the differences are very minor and will be addressed by our team in no time. Mosaic team, you have Lightning’s full support to help unblock you in any issues you might run into. Not a great way to do OSS, @ananthsub.
A bit more background here. The Lightning team led the development of torchmetrics. There was a period when @ananthsub was a close member of the torchmetrics team, during which we were under the impression that he was contributing back to the Lightning/torchmetrics OSS effort; however, it seems we have since diverged. We developed metrics for the larger community (beyond Lightning), and torchmetrics has become a de-facto standard across the PyTorch community.

We valued API stability when Meta started engaging, to the point where we went back and forth on design decisions that didn’t bring crystal-clear value but would break people’s code without benefiting the broad PyTorch community. Meta pushed for changes that our team championed but decided not to go ahead with, then decided to start their own very similar project, and is very actively working at having projects adopt their solution. We don’t think that is fair, because it fragments the community, and there’s nothing that we couldn’t fundamentally fix. This mostly just fragments the ecosystem. The “differences” are so minor that one of our engineers will address them in the next week.

I’m sure that torcheval is a good attempt at metrics, and you can be the judge of what you prefer to use. What I can say is that we have a whole company dedicated to making sure our software is the best in the world, and we are committed to providing first-class support and integrating feedback into torchmetrics. We’ve been working on this for years and have deep in-house expertise that you are leveraging through torchmetrics, not to mention a massive contributor ecosystem.

Thanks for the thorough comparison! We will take this feedback into consideration as we prepare for our next release. Cheers!
Thanks @ananthsub for the request and @williamFalcon for the context! We’ll take a look at torcheval. Our principle has always been to maximize user choice, so we are inclined to investigate supporting both and let our users decide. In fact, a few months ago we refactored our metrics API to give the user more control over which metrics framework they prefer, so this would be well aligned with that direction.

@williamFalcon torchmetrics is fantastic to use; thank you and your team for the hard work building that library 🙏
🚀 Feature Request
Motivation
TorchEval is a newly released library from PyTorch for common evaluation metrics and tools: https://pytorch.org/torcheval/main/
The MosaicML framework currently has a deep integration with TorchMetrics. TorchMetrics is an awesome library with an easy-to-use API.
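For readers unfamiliar with that API, here is a minimal sketch of the stateful update/compute pattern TorchMetrics is built around. The batch shapes, loop, and `num_classes=3` are illustrative assumptions, not details from this issue:

```python
import torch
from torchmetrics.classification import MulticlassAccuracy

# Stateful metric: accumulate statistics per batch, compute once at the end.
metric = MulticlassAccuracy(num_classes=3)  # num_classes is a placeholder

for _ in range(2):                    # stand-in for an evaluation loop
    logits = torch.randn(8, 3)        # dummy per-batch predictions
    target = torch.randint(3, (8,))   # dummy per-batch labels
    metric.update(logits, target)

# In a distributed run, the cross-process sync happens implicitly inside
# compute(); single-process use works the same way with no extra setup.
accuracy = metric.compute()
print(accuracy)
```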
For performance-sensitive users, it's important to control the synchronization logic, and this is one area where TorchEval currently offers first-class support. More analysis is here: pytorch/torcheval#82 (comment)
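As a point of comparison, here is a hedged sketch of that explicit-sync design as we understand TorchEval's documented API: `compute()` stays local, and the caller invokes `sync_and_compute` from `torcheval.metrics.toolkit` only when a cross-rank result is actually needed. The dummy tensors are illustrative assumptions:

```python
import torch
from torcheval.metrics import MulticlassAccuracy
from torcheval.metrics.toolkit import sync_and_compute

metric = MulticlassAccuracy()

for _ in range(2):                    # stand-in for an evaluation loop
    logits = torch.randn(8, 3)        # dummy per-batch predictions
    target = torch.randint(3, (8,))   # dummy per-batch labels
    metric.update(logits, target)

# compute() is purely local: no collective communication happens here.
local_accuracy = metric.compute()

# Synchronization is opt-in and caller-controlled, e.g. once per eval epoch.
# This requires an initialized torch.distributed process group:
# global_accuracy = sync_and_compute(metric)
print(local_accuracy)
```

Keeping the collective out of `compute()` lets a trainer batch or skip synchronizations, which is the performance control referred to above.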
MosaicML users may stand to benefit from being able to use TorchEval in addition to TorchMetrics.
[Optional] Implementation
Additional context