
Commit 1fb4e92

Merge branch 'master' into prune-earlystopping-auto

2 parents 83962d9 + 46617d9

30 files changed: +246 −615 lines

CHANGELOG.md

Lines changed: 18 additions & 1 deletion

@@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Added
 
+- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
+
 
 ### Changed
 
@@ -18,6 +20,19 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Removed
 
+- Removed support for passing a bool value to `profiler` argument of Trainer ([#6164](https://github.com/PyTorchLightning/pytorch-lightning/pull/6164))
+
+
+- Removed deprecated Trainer argument `enable_pl_optimizer` and `automatic_optimization` ([#6163](https://github.com/PyTorchLightning/pytorch-lightning/pull/6163))
+
+
+- Removed deprecated metrics ([#6161](https://github.com/PyTorchLightning/pytorch-lightning/pull/6161))
+    * from `pytorch_lightning.metrics.functional.classification` removed `to_onehot`, `to_categorical`, `get_num_classes`, `roc`, `multiclass_roc`, `average_precision`, `precision_recall_curve`, `multiclass_precision_recall_curve`
+    * from `pytorch_lightning.metrics.functional.reduction` removed `reduce`, `class_reduce`
+
+
+- Removed deprecated `ModelCheckpoint` arguments `prefix`, `mode="auto"` ([#6162](https://github.com/PyTorchLightning/pytorch-lightning/pull/6162))
+
 
 - Removed `mode='auto'` from `EarlyStopping` ([#6167](https://github.com/PyTorchLightning/pytorch-lightning/pull/6167))
 
@@ -36,6 +51,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115))
 
 
+- Fixed epoch level schedulers not being called when `val_check_interval < 1.0` ([#6075](https://github.com/PyTorchLightning/pytorch-lightning/pull/6075))
+
+
 ## [1.2.1] - 2021-02-23
 
 ### Fixed
@@ -91,7 +109,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Added `Trainer` flag to activate Stochastic Weight Averaging (SWA) `Trainer(stochastic_weight_avg=True)` ([#6038](https://github.com/PyTorchLightning/pytorch-lightning/pull/6038))
 - Added DeepSpeed integration ([#5954](https://github.com/PyTorchLightning/pytorch-lightning/pull/5954),
     [#6042](https://github.com/PyTorchLightning/pytorch-lightning/pull/6042))
-- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
 
 ### Changed
 
docs/source/common/hyperparameters.rst

Lines changed: 0 additions & 3 deletions

@@ -167,9 +167,6 @@ improve readability and reproducibility.
         def train_dataloader(self):
             return DataLoader(mnist_train, batch_size=self.hparams.batch_size)
 
-.. warning:: Deprecated since v1.1.0. This method of assigning hyperparameters to the LightningModule
-    will no longer be supported from v1.3.0. Use the ``self.save_hyperparameters()`` method from above instead.
-
 
 4. You can also save full objects such as `dict` or `Namespace` to the checkpoint.
 
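The removed warning pointed to ``self.save_hyperparameters()``; for context, that pattern looks roughly like this (a minimal sketch, with made-up argument names and layer sizes):

    import torch
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def __init__(self, batch_size: int = 32, lr: float = 1e-3):
            super().__init__()
            # records the init args under self.hparams and saves them to checkpoints
            self.save_hyperparameters()
            self.layer = torch.nn.Linear(28 * 28, 10)
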

docs/source/common/optimizers.rst

Lines changed: 0 additions & 2 deletions

@@ -300,8 +300,6 @@ override the :meth:`optimizer_step` function.
 
 For example, here step optimizer A every 2 batches and optimizer B every 4 batches
 
-.. note:: When using Trainer(enable_pl_optimizer=True), there is no need to call `.zero_grad()`.
-
 .. testcode::
 
     def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
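The hunk stops at the signature; a plausible body for the alternating-optimizer example described above might be the following (an illustrative guess, not the file's actual code):

    def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
        # zero grads for optimizer A (index 0) every 2 batches, B (index 1) every 4
        if opt_idx == 0 and batch_idx % 2 == 0:
            optimizer.zero_grad()
        if opt_idx == 1 and batch_idx % 4 == 0:
            optimizer.zero_grad()
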

docs/source/starter/new-project.rst

Lines changed: 1 addition & 1 deletion

@@ -737,7 +737,7 @@ Lightning has many tools for debugging. Here is an example of just a few of them
 .. testcode::
 
     # Profile your code to find speed/memory bottlenecks
-    Trainer(profiler=True)
+    Trainer(profiler="simple")
 
 ---------------
 
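After this change the profiler is selected by name rather than by a bool. A short usage sketch (assuming the built-in profiler names of this release):

    from pytorch_lightning import Trainer

    trainer = Trainer(profiler="simple")      # per-hook wall-clock summary
    # trainer = Trainer(profiler="advanced")  # cProfile-based detailed output
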

pytorch_lightning/callbacks/model_checkpoint.py

Lines changed: 9 additions & 40 deletions

@@ -86,26 +86,14 @@ class ModelCheckpoint(Callback):
             if ``save_top_k >= 2`` and the callback is called multiple
             times inside an epoch, the name of the saved file will be
             appended with a version count starting with ``v1``.
-        mode: one of {auto, min, max}.
-            If ``save_top_k != 0``, the decision
-            to overwrite the current save file is made
-            based on either the maximization or the
-            minimization of the monitored quantity. For `val_acc`,
-            this should be `max`, for `val_loss` this should
-            be `min`, etc. In `auto` mode, the direction is
-            automatically inferred from the name of the monitored quantity.
-
-            .. warning::
-               Setting ``mode='auto'`` has been deprecated in v1.1 and will be removed in v1.3.
-
+        mode: one of {min, max}.
+            If ``save_top_k != 0``, the decision to overwrite the current save file is made
+            based on either the maximization or the minimization of the monitored quantity.
+            For ``'val_acc'``, this should be ``'max'``, for ``'val_loss'`` this should be ``'min'``, etc.
         save_weights_only: if ``True``, then only the model's weights will be
             saved (``model.save_weights(filepath)``), else the full model
             is saved (``model.save(filepath)``).
         period: Interval (number of epochs) between checkpoints.
-        prefix: A string to put at the beginning of checkpoint filename.
-
-            .. warning::
-               This argument has been deprecated in v1.1 and will be removed in v1.3
 
     Note:
         For extra customization, ModelCheckpoint includes the following attributes:
@@ -122,7 +110,7 @@ class ModelCheckpoint(Callback):
         MisconfigurationException:
             If ``save_top_k`` is neither ``None`` nor more than or equal to ``-1``,
             if ``monitor`` is ``None`` and ``save_top_k`` is none of ``None``, ``-1``, and ``0``, or
-            if ``mode`` is none of ``"min"``, ``"max"``, and ``"auto"``.
+            if ``mode`` is none of ``"min"`` or ``"max"``.
         ValueError:
             If ``trainer.save_checkpoint`` is ``None``.
 
@@ -166,9 +154,8 @@ def __init__(
         save_last: Optional[bool] = None,
         save_top_k: Optional[int] = None,
         save_weights_only: bool = False,
-        mode: str = "auto",
+        mode: str = "min",
         period: int = 1,
-        prefix: str = "",
     ):
         super().__init__()
         self.monitor = monitor
@@ -178,7 +165,6 @@ def __init__(
         self.save_weights_only = save_weights_only
         self.period = period
         self._last_global_step_saved = -1
-        self.prefix = prefix
         self.current_score = None
         self.best_k_models = {}
         self.kth_best_model_path = ""
@@ -188,12 +174,6 @@ def __init__(
         self.save_function = None
         self.warned_result_obj = False
 
-        if prefix:
-            rank_zero_warn(
-                'Argument `prefix` is deprecated in v1.1 and will be removed in v1.3.'
-                ' Please prepend your prefix in `filename` instead.', DeprecationWarning
-            )
-
         self.__init_monitor_mode(monitor, mode)
         self.__init_ckpt_dir(dirpath, filename, save_top_k)
         self.__validate_init_configuration()
@@ -300,18 +280,8 @@ def __init_monitor_mode(self, monitor, mode):
             "max": (-torch_inf, "max"),
         }
 
-        if mode not in mode_dict and mode != 'auto':
-            raise MisconfigurationException(f"`mode` can be auto, {', '.join(mode_dict.keys())}, got {mode}")
-
-        # TODO: Update with MisconfigurationException when auto mode is removed in v1.3
-        if mode == 'auto':
-            rank_zero_warn(
-                "mode='auto' is deprecated in v1.1 and will be removed in v1.3."
-                " Default value for mode with be 'min' in v1.3.", DeprecationWarning
-            )
-
-        _condition = monitor is not None and ("acc" in monitor or monitor.startswith("fmeasure"))
-        mode_dict['auto'] = ((-torch_inf, "max") if _condition else (torch_inf, "min"))
+        if mode not in mode_dict:
+            raise MisconfigurationException(f"`mode` can be {', '.join(mode_dict.keys())} but got {mode}")
 
         self.kth_value, self.mode = mode_dict[mode]
 
@@ -410,7 +380,7 @@ def format_checkpoint_name(self, epoch: int, step: int, metrics: Dict[str, Any],
             'step=0.ckpt'
 
         """
-        filename = self._format_checkpoint_name(self.filename, epoch, step, metrics, prefix=self.prefix)
+        filename = self._format_checkpoint_name(self.filename, epoch, step, metrics)
         if ver is not None:
             filename = self.CHECKPOINT_JOIN_CHAR.join((filename, f"v{ver}"))
 
@@ -523,7 +493,6 @@ def _save_last_checkpoint(self, trainer, pl_module, ckpt_name_metrics):
                 trainer.current_epoch,
                 trainer.global_step,
                 ckpt_name_metrics,
-                prefix=self.prefix
             )
             last_filepath = os.path.join(self.dirpath, f"{last_filepath}{self.FILE_EXTENSION}")
         else:
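To illustrate the stricter ``mode`` handling introduced above (a sketch using only arguments visible in this diff):

    from pytorch_lightning.callbacks import ModelCheckpoint

    # the direction must now be explicit; "auto" raises MisconfigurationException
    ckpt_min = ModelCheckpoint(monitor="val_loss", mode="min")  # lower is better
    ckpt_max = ModelCheckpoint(monitor="val_acc", mode="max")   # higher is better
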

pytorch_lightning/core/lightning.py

Lines changed: 0 additions & 3 deletions

@@ -1324,9 +1324,6 @@ def optimizer_step(
         By default, Lightning calls ``step()`` and ``zero_grad()`` as shown in the example
         once per optimizer.
 
-        .. tip:: With ``Trainer(enable_pl_optimizer=True)``, you can use ``optimizer.step()`` directly
-           and it will handle zero_grad, accumulated gradients, AMP, TPU and more automatically for you.
-
 
         Warning:
             If you are overriding this method, make sure that you pass the ``optimizer_closure`` parameter
             to ``optimizer.step()`` function as shown in the examples. This ensures that
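For reference, an override that satisfies this warning might look like the following (signature abbreviated from this release's hook; treat it as a sketch, not the canonical implementation):

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       optimizer_closure, on_tpu=False, using_native_amp=False,
                       using_lbfgs=False):
        # forwarding the closure is required: it runs training_step and backward
        optimizer.step(closure=optimizer_closure)
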

pytorch_lightning/metrics/functional/__init__.py

Lines changed: 0 additions & 1 deletion

@@ -21,7 +21,6 @@
     multiclass_auroc,
     stat_scores_multiple_classes,
     to_categorical,
-    to_onehot,
 )
 from pytorch_lightning.metrics.functional.confusion_matrix import confusion_matrix  # noqa: F401
 from pytorch_lightning.metrics.functional.explained_variance import explained_variance  # noqa: F401
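With `to_onehot` removed from this import list (and from `functional.classification` per the changelog), plain PyTorch offers a possible stand-in (an assumption about user intent, not something this diff prescribes):

    import torch
    import torch.nn.functional as F

    # F.one_hot places the class dimension last, which may differ from the removed helper
    labels = torch.tensor([0, 2, 1])
    one_hot = F.one_hot(labels, num_classes=3)  # shape (3, 3)
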
