
Tuner leaves checkpoint files behind when interrupted #13856

@awaelchli

Description

🐛 Bug

When an error occurs during tuning (LR or batch size), for example caused by a user error in their training_step, the tuner leaves checkpoint files behind in the default_root_dir.

To Reproduce

import os

import torch
from torch.utils.data import DataLoader, Dataset

from pytorch_lightning import LightningModule, Trainer


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.learning_rate = 0.1

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        assert False  # simulate a user error
        return {"loss": loss}

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=self.learning_rate)


def run():
    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)

    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        num_sanity_val_steps=0,
        max_epochs=1,
        enable_model_summary=False,
        auto_lr_find=True
    )
    trainer.tune(model, train_dataloaders=train_data)


if __name__ == "__main__":
    run()
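
After the script fails, the stray checkpoint is left in the working directory. The exact filename depends on the Lightning version, so the broad glob below is only meant to illustrate the check:

import glob

# Look for both visible and hidden checkpoint files left in the cwd
# by the tuner (the exact naming pattern is version-dependent).
print(glob.glob("*.ckpt") + glob.glob(".*.ckpt"))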

Expected behavior

No checkpoint files should be left behind. The tuner should save its restore-point checkpoint to a temporary location (e.g. via tempfile) and remove it even if tuning raises an exception.
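
A minimal sketch of the suggested behavior, assuming the tuner snapshots the model with trainer.save_checkpoint before running; the helper name and surrounding structure are illustrative, not the actual tuner code:

import os
import tempfile

import torch


def lr_find_with_temp_checkpoint(trainer, model):
    # Hypothetical helper: write the restore-point checkpoint to a temporary
    # file instead of default_root_dir, and always delete it afterwards.
    fd, ckpt_path = tempfile.mkstemp(suffix=".ckpt")
    os.close(fd)
    trainer.save_checkpoint(ckpt_path)  # snapshot the model before tuning
    try:
        ...  # the actual LR range test / batch size scaling would run here
    finally:
        # Restore the snapshot and remove the file, even if the user's
        # training_step raised an exception.
        model.load_state_dict(torch.load(ckpt_path)["state_dict"])
        os.remove(ckpt_path)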

Environment

PyTorch Lightning master (1.7).

cc @akihironitta @Borda @rohitgr7

Labels

bug (Something isn't working), tuner
