Comparison to torchmetrics #82
I wonder as well; torchmetrics is quite mature and complete in the metric tools it offers.
Hi @rsokl and @yongen9696, thanks for the great question~

Kudos to the community

Support for metrics and evaluation has been a long-running request from the PyTorch community. First, we would like to give kudos to scikit-learn metrics, Keras Metrics, Ignite Metrics, and TorchMetrics as existing projects in the ML community that have inspired TorchEval. In particular, we have discussed these design points on multiple occasions with the developers of TorchMetrics.

What makes TorchEval unique?

Philosophy for TorchEval

TorchEval is a library that enables easy and performant model evaluation for PyTorch. The library's philosophy is to provide minimal interfaces that are bolstered by a robust toolkit, alongside a rich collection of performant, out-of-the-box implementations. Critically, we believe in the following axes:
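To make the "minimal interfaces" philosophy concrete, here is a pure-Python sketch of the update/compute pattern that stateful metric libraries in this space generally follow. The class and method names below are illustrative only, not the actual TorchEval API:

```python
# Minimal sketch of the update/compute metric pattern, assuming a
# hypothetical MeanMetric class; not TorchEval's real interface.

class MeanMetric:
    """Accumulates a running mean via incremental state updates."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value, n=1):
        # Fold a new observation (or a batch of n observations) into state.
        self.total += value * n
        self.count += n
        return self

    def compute(self):
        # Derive the final metric value from the accumulated state.
        return self.total / self.count if self.count else 0.0

m = MeanMetric()
m.update(2.0).update(4.0)
print(m.compute())  # 3.0
```

The key design point is that `update` touches only cheap local state on every batch, while `compute` does the (potentially expensive) final reduction once.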
- Interface clarity
- Metric synchronization in distributed applications
- Performance

Components in TorchEval
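To illustrate the distributed-synchronization axis above: the usual approach is for each rank to accumulate metric state locally and merge the states only when a global value is needed. The sketch below simulates this with plain Python lists standing in for ranks; the `merge_state` name and the gather step are illustrative assumptions (a real library would gather states with a collective op, not a list):

```python
# Hedged sketch of merging per-rank metric state at compute time.
# SumMetric and merge_state are hypothetical names for illustration.

class SumMetric:
    def __init__(self):
        self.total = 0

    def update(self, x):
        self.total += x

    def merge_state(self, others):
        # Combine state gathered from other ranks' metric replicas.
        for other in others:
            self.total += other.total
        return self

    def compute(self):
        return self.total

# Simulate three ranks, each updating its own local replica.
ranks = [SumMetric() for _ in range(3)]
for rank_id, metric in enumerate(ranks):
    metric.update(rank_id + 1)  # ranks contribute 1, 2, 3

global_value = ranks[0].merge_state(ranks[1:]).compute()
print(global_value)  # 6
```

Merging state once at the end avoids a collective communication on every batch, which is where much of the performance concern in distributed evaluation lives.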
Beyond Metrics

TorchEval also includes tools for evaluation, like FLOPs counting and summarization techniques for modules. We are open to your feedback about what else you'd find helpful in this library!
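As a rough illustration of the kind of accounting a FLOPs tool performs, here is a back-of-the-envelope estimate for a fully connected layer. The formula (one multiply and one add per weight per sample, i.e. 2 * in_features * out_features) is a standard approximation, not TorchEval's exact method, and the function name is made up for this sketch:

```python
# Back-of-the-envelope FLOPs estimate for y = x @ W.T.
# Standard approximation; bias adds omitted for simplicity.

def linear_flops(in_features: int, out_features: int, batch_size: int = 1) -> int:
    """Multiply-add FLOPs for one forward pass of a linear layer."""
    return 2 * in_features * out_features * batch_size

print(linear_flops(784, 256, batch_size=32))  # 12845056
```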
I think @ninginthecloud's reply summarizes the difference very well, so I'll close out this issue. @rsokl, please let us know if you have further questions about this, though!
Thank you! This response was very useful.
(Given the engagement on this thread, you might consider pinning it in your issues section so that other inquiring users can find it easily 😄)
Hi! William here from Lightning. The Lightning team led the development of torchmetrics. There was a period when @ananthsub was a close member of the torchmetrics team; we were under the impression that he was contributing back to Lightning/TorchMetrics OSS, but it seems that we have since diverged.

We developed metrics for the larger community (beyond Lightning), and Metrics has become a de-facto standard across the PyTorch community. We valued API stability when Meta started engaging, to the point where we went back and forth on design decisions that didn't bring crystal-clear value but would break people's code without benefiting the broader PyTorch community.

Meta pushed for changes that our team championed but decided not to go ahead with, then decided to start their own very similar project, and is now very actively working to have projects adopt its solution. We don't think that is fair, because it fragments the community, and there is nothing we couldn't fundamentally have fixed. This mostly just fragments the ecosystem. The "differences" are so minor that one of our engineers will address them in the next week.

I'm sure that eval is a good attempt at metrics, and you can be the judge of what you prefer to use, @rsokl. What I can say is that we have a whole company dedicated to making sure our software is the best in the world, and we are committed to providing first-class support and integrating feedback into torchmetrics. We've been working on this for years and have deep in-house expertise that you leverage through torchmetrics, not to mention a massive contributor ecosystem.

Thanks for the thorough comparison! We will be taking this feedback into consideration as we prepare for our next release. Cheers!
Hello!

torcheval looks great! I'd be interested to know how torcheval compares to torchmetrics. Are there certain shortcomings in torchmetrics that torcheval hopes to address? Any other insights into what inspired the creation of torcheval might help users understand what makes this project unique 😄