Improve the tracking of `Metric` members and deprecate the `metric_attribute` argument to `LightningModule.log` #9067
Comments
Dear @yifuwang, This design was made as a tradeoff between the performance and reliability of this feature. Curious to hear if you have another alternative? Best,
Thanks for the comment @tchaton!
Yes, I understood the intention of the cache. The proposal won't require rediscovering all metric members every time: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L1180-L1225
Yes, I agree, and I don't consider this a bug (I mentioned this in the proposal above). Though we still had to help an internal user navigate around this. Since there's an opportunity to get the best of both worlds (same perf, no need for user intervention under special circumstances, and one fewer argument to `LightningModule.log`), I think it's worth pursuing.
Oh, I didn't see the attached link. Thanks for the clarification. Yes, that's a great idea 🤗 Would you be willing to implement it? Best,
@tchaton happy to!
Dear @yifuwang, Actually, I remember why I decided not to rely on `__setattr__`: I am not sure it would work with nested modules. Example:

```python
class ModelA(LightningModule):
    def __init__(self):
        super().__init__()
        self.metrics = ...


class ModelB(LightningModule):
    def __init__(self):
        super().__init__()
        self.model = ModelA()
```

Any ideas?
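Whether `__setattr__`-based tracking copes with nesting is the crux of this concern. Notably, `torch.nn.Module` handles exactly this case: submodules are recorded in `__setattr__` and discovered recursively via `named_modules()`. Below is a minimal pure-Python sketch of that idea; the `Module` and `Metric` classes are illustrative stand-ins, not the actual torch or torchmetrics classes:

```python
class Metric:
    """Stand-in for torchmetrics.Metric (illustrative only)."""


class Module:
    """Stand-in for torch.nn.Module's attribute-tracking behavior."""

    def __init__(self):
        # object.__setattr__ avoids triggering our own __setattr__
        object.__setattr__(self, "_metrics", {})
        object.__setattr__(self, "_children", {})

    def __setattr__(self, name, value):
        # Register metrics and submodules as they are assigned
        if isinstance(value, Metric):
            self._metrics[name] = value
        elif isinstance(value, Module):
            self._children[name] = value
        object.__setattr__(self, name, value)

    def named_metrics(self, prefix=""):
        # Yield (dotted_name, metric) pairs, recursing into submodules
        # the way torch.nn.Module.named_modules() does
        for name, metric in self._metrics.items():
            yield prefix + name, metric
        for child_name, child in self._children.items():
            yield from child.named_metrics(prefix + child_name + ".")


class ModelA(Module):
    def __init__(self):
        super().__init__()
        self.metric = Metric()


class ModelB(Module):
    def __init__(self):
        super().__init__()
        self.model = ModelA()
```

Under this sketch, `dict(ModelB().named_metrics())` contains the nested metric under the dotted name `"model.metric"`, so nesting is not an obstacle as long as submodules are tracked alongside metrics.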
@yifuwang just as a note: Lightning-AI/torchmetrics#478 might be related, since that ensures each metric instance has a unique hash (so the tracking of modules through their hashes works as expected).
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
Proposed refactoring or deprecation

Motivation

Currently, when the `value` argument is a `Metric` object, `LightningModule.log` assumes that the `Metric` object is a member of the `LightningModule`. The method uses the following logic to deduce the name of the member: https://github.com/PyTorchLightning/pytorch-lightning/blob/1e4d8929fb4fe79877fe5996a793a42ceb8cdb4b/pytorch_lightning/core/lightning.py#L439-L457

If the user reassigns a `Metric` member, `self._metric_attributes` becomes stale and a `MisconfigurationException` is raised. While the suggestion in the error message is useful (i.e. telling the user to pass in `metric_attribute` explicitly), it's possible to prevent the issue in the first place without relying on the `metric_attribute` argument.

Pitch

Track `Metric` members with the same approach `torch.nn.Module` uses to track parameters and submodules: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L1143-L1188
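The pitch above can be sketched in plain Python. The `TrackerMixin` and `Metric` classes below are hypothetical stand-ins (not Lightning's actual implementation): intercepting `__setattr__` registers `Metric`-valued attributes as they are assigned, so the name lookup can never go stale, even if the user reassigns a metric member:

```python
class Metric:
    """Stand-in for torchmetrics.Metric (illustrative only)."""

    def __init__(self, name):
        self.name = name


class TrackerMixin:
    """Hypothetical mixin mirroring how torch.nn.Module tracks
    parameters/submodules in __setattr__."""

    def __init__(self):
        # object.__setattr__ avoids recursing into our own __setattr__
        object.__setattr__(self, "_metrics", {})

    def __setattr__(self, name, value):
        metrics = self.__dict__.get("_metrics")
        if metrics is not None:
            if isinstance(value, Metric):
                # Register (or re-register) the metric under this name;
                # reassignment overwrites the old entry, so nothing goes stale
                metrics[name] = value
            elif name in metrics:
                # The attribute no longer holds a Metric: drop the entry
                del metrics[name]
        object.__setattr__(self, name, value)

    def metric_attribute(self, metric):
        # Deduce the attribute name of a Metric by identity, with no
        # user-supplied metric_attribute hint needed
        for name, m in self._metrics.items():
            if m is metric:
                return name
        raise KeyError("metric is not a member of this module")


class MyModel(TrackerMixin):
    def __init__(self):
        super().__init__()
        self.acc = Metric("accuracy")
```

With this scheme, `MyModel().metric_attribute(model.acc)` returns `"acc"` immediately after construction and also after `model.acc` is reassigned to a new `Metric`, which is exactly the staleness scenario that currently raises `MisconfigurationException`.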
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
Bolts: Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.