Mocking Loggers Part 5/5 (final) (#3926)
* base

* add xfail

* new test

* import

* missing import

* xfail if not installed


include mkpatch


fix test

* mock comet


comet mocks


fix test


remove dep


undo merge duplication

* line

* line

* convert doctest

* doctest

* docs

* prune Results usage in notebooks (#3911)

* notebooks

* notebooks

* revamp entire metrics (#3868)

* removed metric

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* added new metrics

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* pep8

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* pep8

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* reset in compute, cache compute

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* reduce_ops handling

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* sync -> sync_dist, type annotations

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* wip docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* mean squared error

* docstring

* added mean ___ error metrics

* added mean ___ error metrics

* separated files

* accuracy doctest

* gpu fix

* remove unnecessary mixin

* metric and accuracy docstring

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* metric docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* pep8, changelog

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* refactor dist utils, pep8

* refactor dist utils, pep8

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* Callback docs with autosummary (#3908)

* callback docs with autosummary

* do not show private methods

* callback base docstring

* skip some docker builds (temporally pass) (#3913)

* skip some docker builds

* todos

* skip

* use badges only with push (#3914)

* testtube

* mock test tube

* mock mlflow

* remove mlflow

* clean up

* test

* test

* test

* test

* test

* test

* code blocks

* remove import

* codeblock

* logger

* wandb causes stall

Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
Co-authored-by: Teddy Koker <teddy.koker@gmail.com>
Co-authored-by: Jeff Yang <ydcjeff@outlook.com>
6 people authored Oct 7, 2020
1 parent 1a345a4 commit d65b037
Showing 8 changed files with 93 additions and 67 deletions.
6 changes: 3 additions & 3 deletions docs/source/loggers.rst
@@ -74,7 +74,7 @@ First, install the package:
Then configure the logger and pass it to the :class:`~pytorch_lightning.trainer.trainer.Trainer`:

-.. testcode::
+.. code-block:: python
from pytorch_lightning.loggers import MLFlowLogger
mlf_logger = MLFlowLogger(
@@ -169,7 +169,7 @@ First, install the package:
Then configure the logger and pass it to the :class:`~pytorch_lightning.trainer.trainer.Trainer`:

-.. testcode::
+.. code-block:: python
from pytorch_lightning.loggers import TestTubeLogger
logger = TestTubeLogger('tb_logs', name='my_model')
@@ -232,7 +232,7 @@ Multiple Loggers
Lightning supports the use of multiple loggers, just pass a list to the
:class:`~pytorch_lightning.trainer.trainer.Trainer`.

-.. testcode::
+.. code-block:: python
from pytorch_lightning.loggers import TensorBoardLogger, TestTubeLogger
logger1 = TensorBoardLogger('tb_logs', name='my_model')
2 changes: 1 addition & 1 deletion docs/source/logging.rst
@@ -306,7 +306,7 @@ Snapshot code
Loggers also allow you to snapshot a copy of the code used in this experiment.
For example, TestTubeLogger does this with a flag:

-.. testcode::
+.. code-block:: python
from pytorch_lightning.loggers import TestTubeLogger
logger = TestTubeLogger('.', create_git_tag=True)
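
Why the docs hunks above trade ``.. testcode::`` for ``.. code-block:: python``: Sphinx's doctest builder executes testcode blocks at build time, and since this PR also drops ``mlflow`` and ``test_tube`` from ``requirements/extra.txt``, those snippets would now crash the docs build on a machine without the packages. A minimal sketch of the failure mode (the try/except is illustrative, not part of this commit):

.. code-block:: python

    from pytorch_lightning.loggers import TestTubeLogger

    try:
        # constructing the logger is what requires test_tube to be installed
        logger = TestTubeLogger('tb_logs', name='my_model')
    except ImportError as err:
        # a ``testcode`` block would abort the docs build here;
        # ``code-block`` only renders the snippet and never runs it
        print(f'docs build would have failed: {err}')
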
41 changes: 22 additions & 19 deletions pytorch_lightning/loggers/mlflow.py
@@ -43,25 +43,28 @@ class MLFlowLogger(LightningLoggerBase):
pip install mlflow
Example:
->>> from pytorch_lightning import Trainer
->>> from pytorch_lightning.loggers import MLFlowLogger
->>> mlf_logger = MLFlowLogger(
-... experiment_name="default",
-... tracking_uri="file:./ml-runs"
-... )
->>> trainer = Trainer(logger=mlf_logger)
-Use the logger anywhere in you :class:`~pytorch_lightning.core.lightning.LightningModule` as follows:
->>> from pytorch_lightning import LightningModule
->>> class LitModel(LightningModule):
-... def training_step(self, batch, batch_idx):
-... # example
-... self.logger.experiment.whatever_ml_flow_supports(...)
-...
-... def any_lightning_module_function_or_hook(self):
-... self.logger.experiment.whatever_ml_flow_supports(...)
+.. code-block:: python
+from pytorch_lightning import Trainer
+from pytorch_lightning.loggers import MLFlowLogger
+mlf_logger = MLFlowLogger(
+experiment_name="default",
+tracking_uri="file:./ml-runs"
+)
+trainer = Trainer(logger=mlf_logger)
+Use the logger anywhere in your :class:`~pytorch_lightning.core.lightning.LightningModule` as follows:
+.. code-block:: python
+from pytorch_lightning import LightningModule
+class LitModel(LightningModule):
+def training_step(self, batch, batch_idx):
+# example
+self.logger.experiment.whatever_ml_flow_supports(...)
+def any_lightning_module_function_or_hook(self):
+self.logger.experiment.whatever_ml_flow_supports(...)
Args:
experiment_name: The name of the experiment
33 changes: 17 additions & 16 deletions pytorch_lightning/loggers/test_tube.py
@@ -21,10 +21,8 @@

try:
from test_tube import Experiment
-_TEST_TUBE_AVAILABLE = True
except ImportError: # pragma: no-cover
Experiment = None
-_TEST_TUBE_AVAILABLE = False

from pytorch_lightning.core.lightning import LightningModule
from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_experiment
@@ -41,22 +39,25 @@ class TestTubeLogger(LightningLoggerBase):
pip install test_tube
Example:
->>> from pytorch_lightning import Trainer
->>> from pytorch_lightning.loggers import TestTubeLogger
->>> logger = TestTubeLogger("tt_logs", name="my_exp_name")
->>> trainer = Trainer(logger=logger)
+.. code-block:: python
+from pytorch_lightning import Trainer
+from pytorch_lightning.loggers import TestTubeLogger
+logger = TestTubeLogger("tt_logs", name="my_exp_name")
+trainer = Trainer(logger=logger)
Use the logger anywhere in your :class:`~pytorch_lightning.core.lightning.LightningModule` as follows:
->>> from pytorch_lightning import LightningModule
->>> class LitModel(LightningModule):
-... def training_step(self, batch, batch_idx):
-... # example
-... self.logger.experiment.whatever_method_summary_writer_supports(...)
-...
-... def any_lightning_module_function_or_hook(self):
-... self.logger.experiment.add_histogram(...)
+.. code-block:: python
+from pytorch_lightning import LightningModule
+class LitModel(LightningModule):
+def training_step(self, batch, batch_idx):
+# example
+self.logger.experiment.whatever_method_summary_writer_supports(...)
+def any_lightning_module_function_or_hook(self):
+self.logger.experiment.add_histogram(...)
Args:
save_dir: Save directory
@@ -83,7 +84,7 @@ def __init__(
create_git_tag: bool = False,
log_graph: bool = False
):
-if not _TEST_TUBE_AVAILABLE:
+if Experiment is None:
raise ImportError('You want to use `test_tube` logger which is not installed yet,'
' install it with `pip install test-tube`.')
super().__init__()
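
The ``test_tube.py`` hunks above replace the module-level ``_TEST_TUBE_AVAILABLE`` flag with a sentinel check on the imported symbol itself, which is what lets the test suite patch ``pytorch_lightning.loggers.test_tube.Experiment`` directly. A condensed sketch of the pattern (the class name is illustrative, not from the repo):

.. code-block:: python

    try:
        from test_tube import Experiment
    except ImportError:  # pragma: no-cover
        Experiment = None  # sentinel: optional dependency missing

    class OptionalDepLogger:
        """Importable (and mockable) even when test_tube is absent."""

        def __init__(self):
            # fail at construction time, not import time
            if Experiment is None:
                raise ImportError('You want to use `test_tube` logger which is not installed yet,'
                                  ' install it with `pip install test-tube`.')
            self._experiment = Experiment()
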
4 changes: 0 additions & 4 deletions requirements/extra.txt
@@ -1,9 +1,5 @@
# extended list of package dependencies to reach full functionality

-# TODO: this shall be removed as we mock them in tests
-mlflow>=1.0.0
-test_tube>=0.7.5

matplotlib>=3.1.1
# no need to install with [pytorch] as pytorch is already installed and torchvision is required only for Horovod examples
horovod>=0.19.2, != 0.20.0 # v0.20.0 has problem with building the wheel/installation
6 changes: 0 additions & 6 deletions tests/base/models.py
@@ -8,12 +8,6 @@

from tests.base.datasets import TrialMNIST, AverageDataset, MNIST

-try:
-from test_tube import HyperOptArgumentParser
-except ImportError as exp:
-# TODO: this should be discussed and moved out of this package
-raise ImportError('Missing test-tube package.') from exp

from pytorch_lightning.core.lightning import LightningModule


57 changes: 40 additions & 17 deletions tests/loggers/test_all.py
@@ -1,4 +1,3 @@
-import atexit
import inspect
import os
import pickle
@@ -20,6 +19,7 @@
from pytorch_lightning.loggers.base import DummyExperiment
from tests.base import EvalModelTemplate
from tests.loggers.test_comet import _patch_comet_atexit
+from tests.loggers.test_mlflow import mock_mlflow_run_creation


def _get_logger_args(logger_class, save_dir):
@@ -34,27 +34,31 @@ def _get_logger_args(logger_class, save_dir):


def test_loggers_fit_test_all(tmpdir, monkeypatch):
-_patch_comet_atexit(monkeypatch)
""" Verify that basic functionality of all loggers. """

+_test_loggers_fit_test(tmpdir, TensorBoardLogger)

with mock.patch('pytorch_lightning.loggers.comet.comet_ml'), \
mock.patch('pytorch_lightning.loggers.comet.CometOfflineExperiment'):
+_patch_comet_atexit(monkeypatch)
_test_loggers_fit_test(tmpdir, CometLogger)

-_test_loggers_fit_test(tmpdir, MLFlowLogger)
+with mock.patch('pytorch_lightning.loggers.mlflow.mlflow'), \
+mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient'):
+_test_loggers_fit_test(tmpdir, MLFlowLogger)

with mock.patch('pytorch_lightning.loggers.neptune.neptune'):
_test_loggers_fit_test(tmpdir, NeptuneLogger)

-_test_loggers_fit_test(tmpdir, TensorBoardLogger)
-_test_loggers_fit_test(tmpdir, TestTubeLogger)
+with mock.patch('pytorch_lightning.loggers.test_tube.Experiment'):
+_test_loggers_fit_test(tmpdir, TestTubeLogger)

with mock.patch('pytorch_lightning.loggers.wandb.wandb'):
_test_loggers_fit_test(tmpdir, WandbLogger)


def _test_loggers_fit_test(tmpdir, logger_class):
"""Verify that basic functionality of all loggers."""
os.environ['PL_DEV_DEBUG'] = '0'

model = EvalModelTemplate()

class StoreHistoryLogger(logger_class):
@@ -78,6 +82,13 @@ def log_metrics(self, metrics, step):
logger.experiment.id = 'foo'
logger.experiment.project_name = 'bar'

+if logger_class == TestTubeLogger:
+logger.experiment.version = 'foo'
+logger.experiment.name = 'bar'
+
+if logger_class == MLFlowLogger:
+logger = mock_mlflow_run_creation(logger, experiment_id="foo", run_id="bar")

trainer = Trainer(
max_epochs=1,
logger=logger,
@@ -109,21 +120,27 @@ def log_metrics(self, metrics, step):


def test_loggers_save_dir_and_weights_save_path_all(tmpdir, monkeypatch):
-_patch_comet_atexit(monkeypatch)
""" Test the combinations of save_dir, weights_save_path and default_root_dir. """

+_test_loggers_save_dir_and_weights_save_path(tmpdir, TensorBoardLogger)

with mock.patch('pytorch_lightning.loggers.comet.comet_ml'), \
mock.patch('pytorch_lightning.loggers.comet.CometOfflineExperiment'):
+_patch_comet_atexit(monkeypatch)
_test_loggers_save_dir_and_weights_save_path(tmpdir, CometLogger)

-_test_loggers_save_dir_and_weights_save_path(tmpdir, TensorBoardLogger)
-_test_loggers_save_dir_and_weights_save_path(tmpdir, MLFlowLogger)
-_test_loggers_save_dir_and_weights_save_path(tmpdir, TestTubeLogger)
+with mock.patch('pytorch_lightning.loggers.mlflow.mlflow'), \
+mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient'):
+_test_loggers_save_dir_and_weights_save_path(tmpdir, MLFlowLogger)
+
+with mock.patch('pytorch_lightning.loggers.test_tube.Experiment'):
+_test_loggers_save_dir_and_weights_save_path(tmpdir, TestTubeLogger)

with mock.patch('pytorch_lightning.loggers.wandb.wandb'):
_test_loggers_save_dir_and_weights_save_path(tmpdir, WandbLogger)


def _test_loggers_save_dir_and_weights_save_path(tmpdir, logger_class):
""" Test the combinations of save_dir, weights_save_path and default_root_dir. """

class TestLogger(logger_class):
# for this test it does not matter what these attributes are
@@ -255,18 +272,24 @@ def on_train_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_
assert pl_module.logger.experiment.something(foo="bar") is None


-@pytest.mark.skipif(platform.system() == "Windows", reason="Distributed training is not supported on Windows")
@pytest.mark.parametrize("logger_class", [
-TensorBoardLogger,
CometLogger,
MLFlowLogger,
-# NeptuneLogger, # TODO: fix: https://github.com/PyTorchLightning/pytorch-lightning/pull/3256
+NeptuneLogger,
+TensorBoardLogger,
TestTubeLogger,
])
-@mock.patch('pytorch_lightning.loggers.neptune.neptune')
-def test_logger_created_on_rank_zero_only(neptune, tmpdir, monkeypatch, logger_class):
+@pytest.mark.skipif(platform.system() == "Windows", reason="Distributed training is not supported on Windows")
+def test_logger_created_on_rank_zero_only(tmpdir, monkeypatch, logger_class):
""" Test that loggers get replaced by dummy loggers on global rank > 0"""
_patch_comet_atexit(monkeypatch)
+try:
+_test_logger_created_on_rank_zero_only(tmpdir, logger_class)
+except (ImportError, ModuleNotFoundError):
+pytest.xfail(f"multi-process test requires {logger_class.__class__} dependencies to be installed.")


+def _test_logger_created_on_rank_zero_only(tmpdir, logger_class):
logger_args = _get_logger_args(logger_class, tmpdir)
logger = logger_class(**logger_args)
model = EvalModelTemplate()
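
The pattern running through ``test_all.py`` is the same for every logger: patch the third-party SDK at the name the logger module imports it under, so the Lightning-side code runs against a ``MagicMock`` instead of the real backend. A condensed sketch of how one such patched test behaves (the test name and body are illustrative, not from this commit):

.. code-block:: python

    from unittest import mock

    from pytorch_lightning.loggers import TestTubeLogger

    def test_test_tube_logger_without_dependency(tmp_path):
        # patch where the logger looks the symbol up, not where it is defined
        with mock.patch('pytorch_lightning.loggers.test_tube.Experiment') as experiment:
            logger = TestTubeLogger(save_dir=str(tmp_path))
            logger.log_metrics({'acc': 0.9}, step=0)
        # the lazily created experiment came from the mock, not from test_tube
        assert experiment.call_count == 1
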
11 changes: 10 additions & 1 deletion tests/loggers/test_mlflow.py
@@ -5,13 +5,22 @@
from unittest.mock import MagicMock
import pytest

+from mlflow.tracking import MlflowClient

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import MLFlowLogger
from tests.base import EvalModelTemplate


+def mock_mlflow_run_creation(logger, experiment_name=None, experiment_id=None, run_id=None):
+""" Helper function to simulate mlflow client creating a new (or existing) experiment. """
+run = MagicMock()
+run.info.run_id = run_id
+logger._mlflow_client.get_experiment_by_name = MagicMock(return_value=experiment_name)
+logger._mlflow_client.create_experiment = MagicMock(return_value=experiment_id)
+logger._mlflow_client.create_run = MagicMock(return_value=run)
+return logger


@mock.patch('pytorch_lightning.loggers.mlflow.mlflow')
@mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient')
def test_mlflow_logger_exists(client, mlflow, tmpdir):
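
``mock_mlflow_run_creation`` pre-seeds the already-patched ``MlflowClient`` so that ``MLFlowLogger``'s lazy experiment/run creation returns deterministic ids, which is what the ``test_all.py`` fixups above rely on. A sketch of the intended usage (the test name and assertions are illustrative, assuming the logger's lazy id properties at this commit):

.. code-block:: python

    from unittest import mock

    from pytorch_lightning.loggers import MLFlowLogger
    from tests.loggers.test_mlflow import mock_mlflow_run_creation

    @mock.patch('pytorch_lightning.loggers.mlflow.mlflow')
    @mock.patch('pytorch_lightning.loggers.mlflow.MlflowClient')
    def test_mlflow_ids_are_pinned(client, mlflow, tmpdir):
        logger = MLFlowLogger('test', save_dir=str(tmpdir))
        logger = mock_mlflow_run_creation(logger, experiment_id='foo', run_id='bar')
        # accessing the ids triggers the (mocked) experiment/run creation
        assert logger.run_id == 'bar'
        assert logger.experiment_id == 'foo'
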
