
ValueError: you tried to log -1 which is currently not supported. Try a dict or a scalar/tensor. #902

Closed
kkckk1110 opened this issue Feb 26, 2024 · 4 comments


kkckk1110 commented Feb 26, 2024

What happened + What you expected to happen

I ran into an error while running getting_started.ipynb with the code below. The error is:

ValueError: you tried to log -1 which is currently not supported. Try a dict or a scalar/tensor.

I also have a question about exogenous features: if I want to use the Auto version of a model, how can I add exogenous features to it? When I tried passing futr_exog_list to AutoLSTM(h=h, config=config_lstm, loss=MQLoss(), num_samples=2), it raised an error saying the futr_exog_list argument could not be found.

Versions / Dependencies

I have neuralforecast==1.6.4

Reproduction script

%%capture
horizon = 12

models = [
    LSTM(h=horizon,                # forecast horizon
         max_steps=500,            # number of training steps
         scaler_type='standard',   # scaler used to normalize the data
         encoder_hidden_size=64,   # size of the LSTM hidden state
         decoder_hidden_size=64),  # hidden units per layer of the MLP decoder
    NHITS(h=horizon,               # forecast horizon
          input_size=2 * horizon,  # length of the input sequence
          max_steps=100,           # number of training steps
          n_freq_downsample=[2, 1, 1]),  # downsampling factors for each stack
]
nf = NeuralForecast(models=models, freq='M')
nf.fit(df=Y_df)

Issue Severity

None

kkckk1110 added the bug label Feb 26, 2024
jmoralez (Member) commented

Hey @kkckk1110, thanks for using neuralforecast. Can you provide the full stack trace for the error you're getting? For the Auto models you have to provide the exogenous features in the config.
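A minimal sketch of what the maintainer's suggestion could look like: the exogenous feature names go inside the config dict passed to the Auto model, not on its constructor. The column name "exog_1" and the hyperparameter values below are hypothetical.

```python
# Declare exogenous features inside the Auto model's config dict
# (assumption based on the reply above; names/values are illustrative).
config_lstm = {
    "input_size": 24,              # length of the input window
    "encoder_hidden_size": 64,     # LSTM hidden state size
    "futr_exog_list": ["exog_1"],  # future exogenous columns go here...
}

# ...then, assuming neuralforecast is installed:
# model = AutoLSTM(h=12, config=config_lstm, loss=MQLoss(), num_samples=2)
```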

jmoralez changed the title from "[<Library component: Model|Core|etc...>]" to "ValueError: you tried to log -1 which is currently not supported. Try a dict or a scalar/tensor." Feb 26, 2024

kkckk1110 commented Feb 27, 2024

Thank you very much for your attention!

Stacktrace
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File ~/miniforge3/lib/python3.9/site-packages/lightning_fabric/loggers/tensorboard.py:202, in TensorBoardLogger.log_metrics(self, metrics, step)
    201 try:
--> 202     self.experiment.add_scalar(k, v, step)
    203 # TODO(fabric): specify the possible exception

File ~/miniforge3/lib/python3.9/site-packages/lightning_fabric/loggers/logger.py:117, in rank_zero_experiment.<locals>.experiment(self)
    115     return fn(self)
--> 117 return get_experiment() or _DummyExperiment()

File ~/miniforge3/lib/python3.9/site-packages/lightning_utilities/core/rank_zero.py:32, in rank_zero_only.<locals>.wrapped_fn(*args, **kwargs)
     31 if rank == 0:
---> 32     return fn(*args, **kwargs)
     33 return None

File ~/miniforge3/lib/python3.9/site-packages/lightning_fabric/loggers/logger.py:115, in rank_zero_experiment.<locals>.experiment.<locals>.get_experiment()
    113 @rank_zero_only
    114 def get_experiment() -> Callable:
--> 115     return fn(self)

File ~/miniforge3/lib/python3.9/site-packages/lightning_fabric/loggers/tensorboard.py:181, in TensorBoardLogger.experiment(self)
    180 if _TENSORBOARD_AVAILABLE:
--> 181     from torch.utils.tensorboard import SummaryWriter
    182 else:

File ~/miniforge3/lib/python3.9/site-packages/torch/utils/tensorboard/__init__.py:12, in <module>
     10 del tensorboard
---> 12 from .writer import FileWriter, SummaryWriter  # noqa: F401
     13 from tensorboard.summary.writer.record_writer import RecordWriter

File ~/miniforge3/lib/python3.9/site-packages/torch/utils/tensorboard/writer.py:10, in <module>
      9 from tensorboard.compat import tf
---> 10 from tensorboard.compat.proto import event_pb2
     11 from tensorboard.compat.proto.event_pb2 import Event, SessionLog

File ~/miniforge3/lib/python3.9/site-packages/tensorboard/compat/proto/event_pb2.py:6, in <module>
      5 from google.protobuf.internal import enum_type_wrapper
----> 6 from google.protobuf import descriptor as _descriptor
      7 from google.protobuf import descriptor_pool as _descriptor_pool

File ~/miniforge3/lib/python3.9/site-packages/google/protobuf/descriptor.py:47, in <module>
     46 import os
---> 47 from google.protobuf.pyext import _message
     48 _USE_C_DESCRIPTORS = True

TypeError: bases must be types

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
Input In [16], in <cell line: 1>()
----> 1 model.fit(train)

File ~/miniforge3/lib/python3.9/site-packages/neuralforecast/core.py:274, in NeuralForecast.fit(self, df, static_df, val_size, sort_df, use_init_models, verbose)
    271         print("WARNING: Deleting previously fitted models.")
    273 for model in self.models:
--> 274     model.fit(self.dataset, val_size=val_size)
    276 self._fitted = True

File ~/miniforge3/lib/python3.9/site-packages/neuralforecast/common/_base_recurrent.py:633, in BaseRecurrent.fit(self, dataset, val_size, test_size, random_seed)
    630 self.trainer_kwargs["check_val_every_n_epoch"] = None
    632 trainer = pl.Trainer(**self.trainer_kwargs)
--> 633 trainer.fit(self, datamodule=datamodule)

File ~/miniforge3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:520, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    518 model = _maybe_unwrap_optimized(model)
    519 self.strategy._lightning_module = model
--> 520 call._call_and_handle_interrupt(
    521     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    522 )

File ~/miniforge3/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py:44, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     42         return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
     43     else:
---> 44         return trainer_fn(*args, **kwargs)
     46 except _TunerExitException:
     47     _call_teardown_hook(trainer)

File ~/miniforge3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:559, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    549 self._data_connector.attach_data(
    550     model, train_dataloaders=train_dataloaders, val_dataloaders=val_dataloaders, datamodule=datamodule
    551 )
    553 ckpt_path = self._checkpoint_connector._select_ckpt_path(
    554     self.state.fn,
    555     ckpt_path,
    556     model_provided=True,
    557     model_connected=self.lightning_module is not None,
    558 )
--> 559 self._run(model, ckpt_path=ckpt_path)
    561 assert self.state.stopped
    562 self.training = False

File ~/miniforge3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:918, in Trainer._run(self, model, ckpt_path)
    915     call._call_callback_hooks(self, "on_fit_start")
    916     call._call_lightning_module_hook(self, "on_fit_start")
--> 918 _log_hyperparams(self)
    920 if self.strategy.restore_checkpoint_after_setup:
    921     log.debug(f"{self.__class__.__name__}: restoring module and callbacks from checkpoint path: {ckpt_path}")

File ~/miniforge3/lib/python3.9/site-packages/pytorch_lightning/loggers/utilities.py:94, in _log_hyperparams(trainer)
     92 for logger in trainer.loggers:
     93     if hparams_initial is not None:
---> 94         logger.log_hyperparams(hparams_initial)
     95     logger.log_graph(pl_module)
     96     logger.save()

File ~/miniforge3/lib/python3.9/site-packages/lightning_utilities/core/rank_zero.py:32, in rank_zero_only.<locals>.wrapped_fn(*args, **kwargs)
     30     raise RuntimeError("The `rank_zero_only.rank` needs to be set before use")
     31 if rank == 0:
---> 32     return fn(*args, **kwargs)
     33 return None

File ~/miniforge3/lib/python3.9/site-packages/pytorch_lightning/loggers/tensorboard.py:181, in TensorBoardLogger.log_hyperparams(self, params, metrics)
    178 else:
    179     self.hparams.update(params)
--> 181 return super().log_hyperparams(params=params, metrics=metrics)

File ~/miniforge3/lib/python3.9/site-packages/lightning_utilities/core/rank_zero.py:32, in rank_zero_only.<locals>.wrapped_fn(*args, **kwargs)
     30     raise RuntimeError("The `rank_zero_only.rank` needs to be set before use")
     31 if rank == 0:
---> 32     return fn(*args, **kwargs)
     33 return None

File ~/miniforge3/lib/python3.9/site-packages/lightning_fabric/loggers/tensorboard.py:233, in TensorBoardLogger.log_hyperparams(self, params, metrics)
    230     metrics = {"hp_metric": metrics}
    232 if metrics:
--> 233     self.log_metrics(metrics, 0)
    235     if _TENSORBOARD_AVAILABLE:
    236         from torch.utils.tensorboard.summary import hparams

File ~/miniforge3/lib/python3.9/site-packages/lightning_utilities/core/rank_zero.py:32, in rank_zero_only.<locals>.wrapped_fn(*args, **kwargs)
     30     raise RuntimeError("The `rank_zero_only.rank` needs to be set before use")
     31 if rank == 0:
---> 32     return fn(*args, **kwargs)
     33 return None

File ~/miniforge3/lib/python3.9/site-packages/lightning_fabric/loggers/tensorboard.py:206, in TensorBoardLogger.log_metrics(self, metrics, step)
    204 except Exception as ex:
    205     m = f"\n you tried to log {v} which is currently not supported. Try a dict or a scalar/tensor."
--> 206     raise ValueError(m) from ex

ValueError: 
 you tried to log -1 which is currently not supported. Try a dict or a scalar/tensor.

jmoralez (Member) commented

Thanks. That seems to be a protobuf error, can you try the fix suggested here?
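The linked fix is not quoted in the thread, but two commonly suggested workarounds for the underlying "TypeError: bases must be types" protobuf error are sketched below (assumptions; they may differ from the fix the maintainer linked):

```shell
# Option 1: force protobuf's pure-Python implementation, which avoids the
# broken C-extension descriptor classes.
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python

# Option 2: reinstall protobuf so the C extension matches the installed
# Python package version.
# pip install --force-reinstall protobuf
```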

kkckk1110 (Author) commented

That's really helpful! Thank you very much!
