🐛 Bug
I can't set hparams.gpus to None:
I0318 11:12:45.466972 15554 lightning_utils.py:182] <class '__main__.ResnetLightningExample'> hparams: Namespace(amp_level='O2', backend='', batch_size=16, debug_print_env=False, debug_skip_loaded_hparams_check=False, do_test=False, early_stop_metric='val_loss', early_stop_mode='min', early_stop_patience=10, enable_batch_size_scaling=True, enable_early_stop=False, gpus=None, learning_rate=0.01, max_epochs=1, min_epochs=1, model_load_checkpoint_path='', model_save_path='', nodes=1, use_amp=False)
Traceback (most recent call last):
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/zoox/experimental/estevens/pytorch/lightning_resnet50.py", line 128, in <module>
ResnetLightningExample.init_from_cli(sys.argv[1:]).main()
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/zoox/tflight/lightning_utils/lightning_utils.py", line 255, in main
trainer.fit(self)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/trainer/trainer.py", line 630, in fit
self.run_pretrain_routine(model)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/trainer/trainer.py", line 748, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/loggers/base.py", line 18, in wrapped_fn
fn(self, *args, **kwargs)
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__pytorch_lightning_python3_deps/pytorch_lightning/loggers/tensorboard.py", line 113, in log_hyperparams
exp, ssi, sei = hparams(params, {})
File "/home/estevens/.cache/bazel/_bazel_estevens/7fa7f74bbe03dba6c4e36403df17d704/execroot/zoox/bazel-out/k8-py3-fastbuildcuda/bin/experimental/estevens/pytorch/lightning_resnet50.runfiles/pypi__torch_python3_deps/torch/utils/tensorboard/summary.py", line 156, in hparams
raise ValueError('value should be one of int, float, str, bool, or torch.Tensor')
ValueError: value should be one of int, float, str, bool, or torch.Tensor
To Reproduce
To reproduce, add parser.add_argument('--gpus', default=None, type=str) to the argument parser and then don't pass --gpus on the command line.
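For reference, a minimal sketch of the failing setup (assuming the parsed Namespace is handed to the module as hparams and then logged via the TensorBoard logger, as in the traceback above):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--gpus', default=None, type=str)

# No --gpus on the command line, so the argparse default None lands in the Namespace.
hparams = parser.parse_args([])
assert hparams.gpus is None

# When the TensorBoard logger later calls log_hyperparams() with this Namespace,
# the None value reaches torch.utils.tensorboard.summary.hparams() and raises:
#   ValueError: value should be one of int, float, str, bool, or torch.Tensor
```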
Expected behavior
None values should be screened out before the hparams are passed on to the logger.
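One possible way this could look (a hypothetical helper sketch, not existing Lightning API) is to drop unsupported values before logging:

```python
def _drop_unloggable_hparams(hparams_dict):
    """Hypothetical helper: keep only values TensorBoard's hparams() accepts.

    torch.Tensor values are also accepted upstream but omitted here for brevity.
    """
    allowed = (int, float, str, bool)
    return {k: v for k, v in hparams_dict.items() if isinstance(v, allowed)}

# Example: {'gpus': None, 'batch_size': 16} -> {'batch_size': 16}
```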
Environment
PyTorch Version (e.g., 1.0): 1.4.0
OS (e.g., Linux): Ubuntu 14.04
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.6.8
CUDA/cuDNN version: 10.0
GPU models and configuration: 2080 Ti
Any other relevant information:
This is set by TensorBoard, not Lightning...
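For context, the type check that raises lives on the TensorBoard summary side; a rough sketch of the per-value check (based on the ValueError in the traceback above, not the exact upstream code):

```python
import torch

def _check_hparam_value(value):
    # Sketch of the kind of check enforced in torch.utils.tensorboard.summary.hparams,
    # inferred from the error message in the traceback above.
    if not isinstance(value, (int, float, str, bool, torch.Tensor)):
        raise ValueError('value should be one of int, float, str, bool, or torch.Tensor')
```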
Why would you need to set gpus to None when None is the default value?
We did the conversion to primitives in #1130
I tested this and it is fixed on master. Hparams with unsupported types (such as None in your case) are converted to strings and now appear in the hparams tab in TensorBoard.
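Roughly, the conversion described above behaves like this (an illustrative sketch, not the exact code from #1130):

```python
def _params_to_primitives(params):
    """Illustrative sketch: stringify anything TensorBoard can't log directly."""
    allowed = (int, float, str, bool)
    return {k: v if isinstance(v, allowed) else str(v) for k, v in params.items()}

# {'gpus': None, 'batch_size': 16} -> {'gpus': 'None', 'batch_size': 16}
```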