BUG: RecursionError: maximum recursion depth exceeded while calling a Python object #2947
Labels: bug
Comments
Can reproduce. Further reduced example:

```python
import numpy as np
import pandas as pd

from gluonts.torch import TemporalFusionTransformerEstimator

freq = "H"

estimator = TemporalFusionTransformerEstimator(
    freq=freq,
    context_length=10,
    prediction_length=5,
    num_batches_per_epoch=2,
    trainer_kwargs={"max_epochs": 1},
)

train = [
    {
        "target": np.random.random(size=(50,)),
        "start": pd.Period("01-01-2023", freq=freq),
    }
    for _ in range(20)
]

model = estimator.train(train)

pred = [
    {"target": np.arange(10), "start": pd.Period("01-01-2023", freq=freq)},
    {"target": np.arange(10), "start": pd.Period("01-01-2023", freq=freq)},
    {"target": np.arange(10), "start": pd.Period("01-01-2023", freq=freq)},
]

for i in range(5000):
    forecasts = model.predict(pred)
    list(forecasts)
```
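With CPython's default recursion limit of 1000, the loop should hit the RecursionError well before all 5000 iterations finish; the "[Previous line repeated 964 more times]" line quoted in the description is consistent with that default limit.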
Fixed in #2951, will be released shortly.
Description
I'm getting a RecursionError (maximum recursion depth exceeded) that I believe comes from GluonTS repeatedly calling get_batch; see the stack trace line "[Previous line repeated 964 more times]".
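For illustration only, here is a minimal sketch of that failure mode; it is hypothetical and not the actual GluonTS code (wrap, get_batch, and the loop are all stand-ins): if each predict() call wraps the previous batching function in a fresh closure instead of reusing it, every call deepens the nesting by one frame, until a single invocation exceeds the interpreter's recursion limit.

```python
# Hypothetical sketch of the failure mode, not GluonTS internals:
# each "wrap" adds one closure layer, and every layer costs one extra
# stack frame when the function is finally called. After ~1000 layers
# a single call exceeds the default CPython recursion limit of 1000.
def wrap(inner):
    def get_batch(data):
        return inner(data)  # one extra frame per layer of wrapping
    return get_batch

get_batch = lambda data: data   # base case: identity "batcher"
for _ in range(5000):           # analogous to 5000 predict() calls
    get_batch = wrap(get_batch)

get_batch([1, 2, 3])  # raises RecursionError: maximum recursion depth exceeded
```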
To Reproduce
This is an issue with the way the forecast_generator unpacks the batch. It only happens after calling model.predict several times and iterating through each returned generator.
My own example isn't totally complete because you have to provide your own model, sorry (the reduced example in the comment above is self-contained).
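Until the fix is released, one possible stopgap (my assumption, not something suggested in the thread) is to raise Python's recursion limit; this only postpones the error, since each predict()/iterate cycle still consumes additional stack depth.

```python
import sys

# Stopgap only, not a fix: a higher recursion limit lets more
# predict()/iterate cycles run before the RecursionError fires.
# The underlying per-call growth remains, and very large limits
# risk crashing the interpreter with a C-level stack overflow.
sys.setrecursionlimit(10_000)
```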
Error message or code output
Environment
I'm using the CPU-based environment; no CUDA.