fast_dev_run fail on log_hyperparams #6395

Closed
FredrikM97 opened this issue Mar 7, 2021 · 1 comment · Fixed by #6398
Labels: bug (Something isn't working), help wanted (Open to be worked on), logger (Related to the Loggers)

Comments

@FredrikM97

🐛 Bug

Issue when running with fast_dev_run=True:
TypeError: log_hyperparams() takes 2 positional arguments but 3 were given

To Reproduce

When using the following, where self.hp_metrics is a list of strings, each naming a metric that is being logged (for example "accuracy/val"):

def on_train_start(self):
    if self.logger:
        self.logger.log_hyperparams(self.hparams, {metric: 0 for metric in self.hp_metrics})
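
For reference, TensorBoardLogger documents an optional metrics dictionary as the second argument to log_hyperparams, which is what the call above relies on. A minimal standalone sketch of that usage (the metric name and log directory here are made up):

from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger("tb_logs")
# hparams plus an initial value for each metric shown in the hparams tab
logger.log_hyperparams({"lr": 0.1}, {"accuracy/val": 0.0})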

Expected behavior

I assume the failing check is wrong, since the documentation says that self.logger.log_hyperparams takes one positional argument plus one dictionary of metrics. The code runs fine without fast_dev_run=True, and everything is logged correctly to TensorBoard.
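
A plausible explanation (my assumption; the thread does not spell this out): fast_dev_run swaps the configured logger for a dummy stand-in, and if that stand-in's log_hyperparams accepts only params, the extra metrics dict produces exactly this TypeError. A stripped-down illustration with stub classes (not the real Lightning classes):

class RealLoggerStub:
    def log_hyperparams(self, params, metrics=None):
        pass  # tolerates the extra metrics dict

class DummyLoggerStub:
    def log_hyperparams(self, params):
        pass  # no metrics parameter

RealLoggerStub().log_hyperparams({"lr": 0.1}, {"accuracy/val": 0})   # fine
DummyLoggerStub().log_hyperparams({"lr": 0.1}, {"accuracy/val": 0})  # TypeError: takes 2 positional arguments but 3 were given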

Environment

pytorch_lightning 1.2.2

@awaelchli
Contributor

import torch
from torch.utils.data import DataLoader, Dataset

from pytorch_lightning import LightningModule, Trainer


class RandomDataset(Dataset):

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        output = self.layer(batch)
        return output.sum()

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        return optimizer

    def on_train_start(self):
        if self.logger:
            # this call raises the TypeError when fast_dev_run=True
            self.logger.log_hyperparams(self.hparams, {"x": 0})


if __name__ == '__main__':
    train_data = DataLoader(RandomDataset(32, 64))
    model = BoringModel()
    trainer = Trainer(fast_dev_run=True)
    trainer.fit(model, train_data)

Minimal repro example.
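
Until the fix referenced above (#6398) is available, one possible workaround is to skip the metrics argument whenever fast_dev_run is active. This guard is my own suggestion, not something proposed in the thread, and it assumes the Trainer exposes fast_dev_run as an attribute:

def on_train_start(self):
    if self.logger is None:
        return
    if self.trainer.fast_dev_run:
        # the stand-in logger used by fast_dev_run may not accept
        # a metrics dict, so pass only the hparams
        self.logger.log_hyperparams(self.hparams)
    else:
        self.logger.log_hyperparams(self.hparams, {"x": 0})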
