[Docs] Note on running metric in dp (#4494)
* note

* Update docs/source/metrics.rst

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
Co-authored-by: Jeff Yang <ydcjeff@outlook.com>
(cherry picked from commit 01a925d)
SkafteNicki authored and SeanNaren committed Nov 10, 2020
1 parent 55ab025 commit a47b621
Showing 1 changed file with 20 additions and 0 deletions: docs/source/metrics.rst
@@ -78,6 +78,26 @@ If ``on_epoch`` is True, the logger automatically logs the end of epoch metric value
self.valid_acc(logits, y)
self.log('valid_acc', self.valid_acc, on_step=True, on_epoch=True)
.. note::

    If using metrics in data parallel (dp) mode, the metric update/logging should be done
    in the ``<mode>_step_end`` method (where ``<mode>`` is either ``training``, ``validation``
    or ``test``). This is because otherwise the metric states would be destroyed after each
    forward pass, leading to wrong accumulation. In practice, do the following:

    .. code-block:: python

        def training_step(self, batch, batch_idx):
            data, target = batch
            preds = self(data)
            ...  # compute ``loss`` from ``preds`` and ``target``
            return {'loss': loss, 'preds': preds, 'target': target}

        def training_step_end(self, outputs):
            # update and log the metric
            self.metric(outputs['preds'], outputs['target'])
            self.log('metric', self.metric)
This metrics API is independent of PyTorch Lightning. Metrics can be used directly in plain PyTorch, as shown in the example:

.. code-block:: python
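
    # NOTE: the rest of this code block is collapsed in the diff view above.
    # What follows is a minimal standalone sketch, not the original example,
    # assuming the ``pytorch_lightning.metrics.Accuracy`` API of this release.
    import torch
    from pytorch_lightning import metrics

    # a metric is a plain ``nn.Module``; no Trainer or LightningModule needed
    train_accuracy = metrics.Accuracy()

    # simulate a batch of class probabilities and integer targets
    preds = torch.randn(10, 5).softmax(dim=-1)
    target = torch.randint(5, (10,))

    # calling the metric updates its internal state and returns the batch value
    batch_acc = train_accuracy(preds, target)

    # accumulated accuracy over all batches seen so far
    epoch_acc = train_accuracy.compute()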
