from gluonts.mx.model.deepvar import DeepVAREstimator
from gluonts.mx.trainer import Trainer

estimator = DeepVAREstimator(
    freq=freq,
    context_length=context_length,
    prediction_length=prediction_length,
    num_layers=num_layers,
    num_cells=num_cells,
    target_dim=16,
    cardinality=[3],  # to be filled with the number of unique switch types
    trainer=Trainer(epochs=epochs, callbacks=callbacks),
)

training_network = estimator.create_training_network()
training_network.initialize()

transformation = estimator.create_transformation()

predictor = estimator.create_predictor(
    transformation=transformation, trained_network=training_network
)
and the error trace:
---------------------------------------------------------------------------
DeferredInitializationError Traceback (most recent call last)
/var/folders/mc/ph4ybm0x2mld5hj1nch6fpdr0000gn/T/ipykernel_27210/1339024294.py in <module>
49 transformation = estimator.create_transformation()
50
---> 51 predictor = estimator.create_predictor(
52 transformation=transformation, trained_network=training_network)
53
~/opt/anaconda3/lib/python3.9/site-packages/gluonts/mx/model/deepvar/_estimator.py in create_predictor(self, transformation, trained_network)
480 )
481
--> 482 copy_parameters(trained_network, prediction_network)
483
484 return RepresentableBlockPredictor(
~/opt/anaconda3/lib/python3.9/site-packages/gluonts/mx/util.py in copy_parameters(net_source, net_dest, ignore_extra, allow_missing)
112 ) as model_dir:
113 model_dir_path = str(Path(model_dir) / "tmp_model")
--> 114 net_source.save_parameters(model_dir_path)
115 net_dest.load_parameters(
116 model_dir_path,
~/opt/anaconda3/lib/python3.9/site-packages/mxnet/gluon/block.py in save_parameters(self, filename, deduplicate)
447 params = {v: k for k, v in reverse_params.items()}
448
--> 449 arg_dict = {key: val._reduce() for key, val in params.items()}
450 save_fn = _mx_npx.save if is_np_array() else ndarray.save
451 save_fn(filename, arg_dict)
~/opt/anaconda3/lib/python3.9/site-packages/mxnet/gluon/block.py in <dictcomp>(.0)
447 params = {v: k for k, v in reverse_params.items()}
448
--> 449 arg_dict = {key: val._reduce() for key, val in params.items()}
450 save_fn = _mx_npx.save if is_np_array() else ndarray.save
451 save_fn(filename, arg_dict)
~/opt/anaconda3/lib/python3.9/site-packages/mxnet/gluon/parameter.py in _reduce(self)
389 ctx = context.cpu()
390 if self._stype == 'default':
--> 391 block = self.list_data()
392 if len(block) > 1:
393 if is_np_array():
~/opt/anaconda3/lib/python3.9/site-packages/mxnet/gluon/parameter.py in list_data(self)
587 "list_data() because its storage type is %s. Please use " \
588 "row_sparse_data() instead." % (self.name, self._stype))
--> 589 return self._check_and_get(self._data, list)
590
591 def grad(self, ctx=None):
~/opt/anaconda3/lib/python3.9/site-packages/mxnet/gluon/parameter.py in _check_and_get(self, arr_list, ctx)
228 self.name, str(ctx), str(self._ctx_list)))
229 if self._deferred_init:
--> 230 raise DeferredInitializationError(
231 "Parameter '%s' has not been initialized yet because initialization was " \
232 "deferred. Actual initialization happens during the first forward pass. " \
DeferredInitializationError: Parameter 'deepvartrainingnetwork5_None_distr_mu_weight' has not been initialized yet because initialization was deferred. Actual initialization happens during the first forward pass. Please pass one batch of data through the network before accessing Parameters. You can also avoid deferred initialization by specifying in_units, num_features, etc., for network layers
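If I read the final message correctly, DeepVAR's MXNet blocks create their parameters lazily, so calling create_training_network() and initialize() alone never materializes them; their shapes are only fixed during a first forward pass. A minimal sketch of what I believe the intended workflow is (untested here; train_ds is a placeholder for my multivariate training dataset, not something defined above):

# Untested sketch: let the estimator run training, which performs the forward
# passes that materialize the deferred parameters and returns a ready predictor.
# `train_ds` is a placeholder for my multivariate training dataset.
predictor = estimator.train(training_data=train_ds)

# The returned predictor can then be used directly for inference.
forecasts = list(predictor.predict(train_ds))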
For context: I need a way to implement early stopping based on a metric value for the DeepVAR and LSTNet models. I added a custom Trainer callback, following https://ts.gluon.ai/stable/tutorials/advanced_topics/trainer_callbacks.html?highlight=patience, but I am getting the error above. The code does not throw this error with SimpleFeedForwardEstimator; as soon as I change the estimator to DeepVAREstimator, it throws the DeferredInitializationError shown in the trace.
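I have not pasted my exact callback here; it is essentially the metric-based early-stopping pattern from the linked tutorial. A minimal sketch of that pattern, assuming the gluonts.mx.trainer.callback.Callback base class and the on_epoch_end hook described in the tutorial (the class name LossEarlyStopping, the patience logic, and the exact keyword list are illustrative, not my actual code):

from gluonts.mx.trainer.callback import Callback

class LossEarlyStopping(Callback):
    """Illustrative sketch: stop training when the epoch loss stops improving.

    Assumes the Callback API from the linked tutorial, where returning
    False from a hook ends the training loop early.
    """

    def __init__(self, patience: int = 10):
        self.patience = patience
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def on_epoch_end(
        self,
        epoch_no,
        epoch_loss,
        training_network,
        trainer,
        best_epoch_info=None,
        ctx=None,
        **kwargs,
    ) -> bool:
        # Track whether the average training loss improved this epoch.
        if epoch_loss < self.best_loss:
            self.best_loss = epoch_loss
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        # Returning False tells the Trainer to stop the training loop.
        return self.epochs_without_improvement < self.patience

callbacks = [LossEarlyStopping(patience=5)]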