
[BUG] MisconfigurationException: Trainer was configured with enable_progress_bar=False but found TQDMProgressBar in callbacks list. #1455

Closed
johnnyb1509 opened this issue Dec 29, 2022 · 4 comments · Fixed by #1459
Labels
bug Something isn't working triage Issue waiting for triaging

Comments

@johnnyb1509

Describe the bug
As the title says, I loaded a saved model.pt and then ran a historical forecast on a dataset, and got an error. The code is below:

# Load the model (imports added for completeness)
import os
import torch
from darts.models import TFTModel

modelLoad = TFTModel(
    input_chunk_length=18,
    output_chunk_length=3,
).load(
    os.path.join(os.getcwd(), "model_save/experiment_11_runAllDataSet.pt"),
    map_location=torch.device("cuda:0"),
)

# Run historical forecast
pred = modelLoad.historical_forecasts(
    series=endog_test,
    past_covariates=exogs_test,
    forecast_horizon=3,
    stride=1,
    retrain=False,
    verbose=False,
)

The error is

MisconfigurationException                 Traceback (most recent call last)
File ~/anaconda3/envs/model_build/lib/python3.9/site-packages/darts/utils/utils.py:179, in _with_sanity_checks.<locals>.decorator.<locals>.sanitized_method(self, *args, **kwargs)
    176     only_args.pop("self")
    178     getattr(self, sanity_check_method)(*only_args.values(), **only_kwargs)
--> 179 return method_to_sanitize(self, *only_args.values(), **only_kwargs)

File ~/anaconda3/envs/model_build/lib/python3.9/site-packages/darts/models/forecasting/forecasting_model.py:880, in ForecastingModel.historical_forecasts(self, series, past_covariates, future_covariates, num_samples, train_length, start, forecast_horizon, stride, retrain, overlap_end, last_points_only, verbose)
    857 if (not self._fit_called) or retrain_func(
    858     counter=_counter,
    859     pred_time=pred_time,
   (...)
    872     else None,
    873 ):
    874     self._fit_wrapper(
    875         series=train_series,
    876         past_covariates=past_covariates_,
    877         future_covariates=future_covariates_,
    878     )
--> 880 forecast = self._predict_wrapper(
    881     n=forecast_horizon,
    882     series=train_series,
...
    176     )
    178 if enable_progress_bar:
    179     progress_bar_callback = TQDMProgressBar()

MisconfigurationException: Trainer was configured with `enable_progress_bar=False` but found `TQDMProgressBar` in callbacks list.

Additional information
I did not import tqdm or the TQDMProgressBar callback from the PyTorch Lightning module anywhere in my code.

System (please complete the following information):

  • Python version: 3.10
  • darts version 0.23.0

How to fix this error?
I have been using darts since version 0.13, and most of the bugs I run into are about the Trainer configuration. Is there any way to fix these errors through the darts wrappers?

@johnnyb1509 johnnyb1509 added bug Something isn't working triage Issue waiting for triaging labels Dec 29, 2022
@dennisbader
Collaborator

Hey @johnnyb1509, I tested and couldn't reproduce the issue.
Could you give a minimal working example to reproduce the issue?

You can use one of our datasets as a dummy time series.

For example, this is what I used:

import os

from darts.models import NBEATSModel
from darts.datasets import AirPassengersDataset
from darts.utils.timeseries_generation import linear_timeseries

series = AirPassengersDataset().load()
past_covs = linear_timeseries(start=series.start_time(), length=len(series), freq=series.freq)

save_path = os.path.join(os.getcwd(), "nbeats_1455.pt")
model = NBEATSModel(input_chunk_length=12, output_chunk_length=6)
model.fit(series, past_covariates=past_covs, verbose=True, epochs=3)
model.save(save_path)

model_loaded = NBEATSModel.load(save_path)
model_loaded.historical_forecasts(
    series=series, 
    past_covariates=past_covs, 
    forecast_horizon=3, 
    stride=1, 
    retrain=False,
    verbose=False,
)

@johnnyb1509
Author

johnnyb1509 commented Dec 30, 2022

> (quoting @dennisbader's reproduction example above)

Let's try this code; the only difference is the model: I'm using TFTModel instead of NBEATSModel.

import os

from darts.models import TFTModel
from darts.datasets import AirPassengersDataset
from darts.utils.likelihood_models import QuantileRegression
from darts.utils.timeseries_generation import linear_timeseries

series = AirPassengersDataset().load()
past_covs = linear_timeseries(start=series.start_time(), length=len(series), freq=series.freq)

pl_kwarg = {
    "accelerator": "gpu",
    "devices": [0],
    "log_every_n_steps": 100,
}

quantiles = [
    0.01,
    0.05,
    0.1,
    0.15,
    0.2,
    0.25,
    0.3,
    0.4,
    0.5,
    0.6,
    0.7,
    0.75,
    0.8,
    0.85,
    0.9,
    0.95,
    0.99,
]

save_path = os.path.join(os.getcwd(), "tft_1455.pt")
model = TFTModel(
    input_chunk_length = 18,
    output_chunk_length = 3,
    hidden_size = 5, # minus 1 to get rid of endog
    hidden_continuous_size = 5, 
    lstm_layers = 5,
    num_attention_heads = 4,
    full_attention = False,
    dropout = 0.2,
    batch_size = 128,
    n_epochs = 100,
    add_relative_index = True,
    add_encoders = None,
    likelihood = QuantileRegression(quantiles = quantiles),
    # loss_fn=MSELoss(),
    random_state = 0,
    log_tensorboard = True,
    pl_trainer_kwargs = pl_kwarg,
    save_checkpoints = True,
    model_name = 'tft_1455'
)

model.fit(series, past_covariates=past_covs, verbose=True, epochs=3)
model.save(save_path)


model_loaded = TFTModel.load(save_path)

model_loaded.historical_forecasts(
    series=series, 
    past_covariates=past_covs, 
    forecast_horizon=3, 
    stride=1, 
    retrain=False,
    verbose=False,
)

and the result on my machine is

MisconfigurationException                 Traceback (most recent call last)
f:\GitHub\debugging.ipynb Cell 7 in <cell line: 63>()
 model.save(save_path)
 model_loaded = TFTModel.load(save_path)
---> model_loaded.historical_forecasts(
    series=series, 
     past_covariates=past_covs, 
     forecast_horizon=3, 
     stride=1, 
     retrain=False,
     verbose=False,
     )

File ~/anaconda3/envs/model_build/lib/python3.9/site-packages/darts/utils/utils.py:179, in _with_sanity_checks.<locals>.decorator.<locals>.sanitized_method(self, *args, **kwargs)
    176     only_args.pop("self")
    178     getattr(self, sanity_check_method)(*only_args.values(), **only_kwargs)
--> 179 return method_to_sanitize(self, *only_args.values(), **only_kwargs)

File ~/anaconda3/envs/model_build/lib/python3.9/site-packages/darts/models/forecasting/forecasting_model.py:880, in ForecastingModel.historical_forecasts(self, series, past_covariates, future_covariates, num_samples, train_length, start, forecast_horizon, stride, retrain, overlap_end, last_points_only, verbose)
    857 if (not self._fit_called) or retrain_func(
    858     counter=_counter,
    859     pred_time=pred_time,
   (...)
    872     else None,
...
    176     )
    178 if enable_progress_bar:
    179     progress_bar_callback = TQDMProgressBar()

MisconfigurationException: Trainer was configured with `enable_progress_bar=False` but found `TQDMProgressBar` in callbacks list.

For more information, the package versions are:

  • tqdm == 4.64.1
  • darts == 0.23.0

@dennisbader
Collaborator

Thanks, I tested and could successfully reproduce the issue. This is indeed a bug and comes from a mutable copy of the trainer parameters: it resulted in PyTorch Lightning appending its callbacks to the original trainer_params every time fit()/predict() is called.

#1459 will fix this.
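To make the failure mode concrete, here is a minimal, purely illustrative sketch (a hypothetical helper and dict, not darts' actual internals) of how reusing one mutable kwargs dict across calls can leave `enable_progress_bar=False` next to a leftover `TQDMProgressBar`:

from copy import deepcopy

def build_trainer_kwargs(params, verbose):
    # Hypothetical helper: mimics merging progress-bar settings into trainer kwargs.
    params["enable_progress_bar"] = verbose
    if verbose:
        params["callbacks"].append("TQDMProgressBar")  # stands in for the real Lightning callback
    return params

base_params = {"enable_progress_bar": True, "callbacks": []}

# Buggy pattern: fit() and historical_forecasts() mutate the same shared dict.
build_trainer_kwargs(base_params, verbose=True)            # fit(): appends the progress bar
broken = build_trainer_kwargs(base_params, verbose=False)  # predict(): flag off, callback left behind
print(broken)  # {'enable_progress_bar': False, 'callbacks': ['TQDMProgressBar']}
# -> exactly the inconsistent combination that Lightning rejects

# Safer pattern: copy the params on every call so earlier calls cannot leak callbacks.
fresh = {"enable_progress_bar": True, "callbacks": []}
ok = build_trainer_kwargs(deepcopy(fresh), verbose=False)
print(ok)  # {'enable_progress_bar': False, 'callbacks': []}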

In the meantime, as a quick fix, you can use the same verbosity for fit()/predict()/historical_forecasts() ...
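For instance, a minimal sketch of that workaround, reusing the variables from the reproduction example above (series, past_covs, save_path); the only point is that the `verbose` flag stays the same across calls:

# Workaround sketch: keep `verbose` identical in fit() and historical_forecasts()
# so the Trainer's progress-bar setup does not change between calls.
model.fit(series, past_covariates=past_covs, verbose=True, epochs=3)
model.save(save_path)

model_loaded = TFTModel.load(save_path)
model_loaded.historical_forecasts(
    series=series,
    past_covariates=past_covs,
    forecast_horizon=3,
    stride=1,
    retrain=False,
    verbose=True,  # same verbosity as fit(), avoiding the callback mismatch
)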

@johnnyb1509
Copy link
Author

Thank you for your support! I'll close the thread here.
