Why the results are different for make_evaluation_predictions and predictor.predict #1303
Unanswered
ethanqi1109 asked this question in Q&A
For example,
Training dataset: 2020-01-01 ~ 2020-01-25
Testing dataset: 2020-01-01 ~ 2020-01-31
After training a DeepAR model with the training dataset,
predictor = model.train(train_set)
the evaluation forecast for 2020-01-25 ~ 2020-01-31 is obtained with
forecast_it, ts_it = make_evaluation_predictions(
    dataset=test_set,
    predictor=predictor,
    num_samples=num_samples
)
In my understanding, another way to evaluate should give the same result:
forecast_it = predictor.predict(train_set)
However, the two results differ by a large gap, so I'd like to know whether there is any difference between "make_evaluation_predictions" and "predictor.predict".
Thanks in advance.
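A minimal sketch of the setup described above, assuming GluonTS with the MXNet-based DeepAREstimator; the series values, freq, and hyperparameters are illustrative stand-ins, not taken from the question. It runs both evaluation paths side by side. Note that forecasts are drawn by sampling, so without fixing the RNG state the two paths differ by sampling noise even when their inputs are identical.

```python
# Sketch only: toy data, default hyperparameters, GluonTS 0.x import paths.
import numpy as np
import mxnet as mx
from gluonts.dataset.common import ListDataset
from gluonts.evaluation.backtest import make_evaluation_predictions
from gluonts.model.deepar import DeepAREstimator

freq = "D"
prediction_length = 6  # forecast 2020-01-26 ~ 2020-01-31

# 31 daily values covering 2020-01-01 ~ 2020-01-31 (toy data)
values = np.random.RandomState(0).normal(loc=10.0, scale=1.0, size=31)

# Test set holds the full series; the train set is the same series with the
# last prediction_length values removed, i.e. it ends on 2020-01-25.
test_set = ListDataset([{"start": "2020-01-01", "target": values}], freq=freq)
train_set = ListDataset(
    [{"start": "2020-01-01", "target": values[:-prediction_length]}], freq=freq
)

estimator = DeepAREstimator(freq=freq, prediction_length=prediction_length)
predictor = estimator.train(train_set)

# Path 1: make_evaluation_predictions truncates each test series by
# prediction_length internally before calling the predictor.
mx.random.seed(0); np.random.seed(0)
forecast_it, ts_it = make_evaluation_predictions(
    dataset=test_set, predictor=predictor, num_samples=100
)
f1 = next(iter(forecast_it))

# Path 2: call the predictor directly on the already-truncated train set.
mx.random.seed(0); np.random.seed(0)
f2 = next(iter(predictor.predict(train_set, num_samples=100)))

# With identical inputs and identical RNG state these should (nearly) coincide;
# without seeding they only agree in distribution, which can look like a big
# gap when comparing single sample paths on a very short series.
print(f1.mean)
print(f2.mean)
```

The point of the sketch is that both paths feed the predictor the same 25 observations; a large systematic gap usually means the two datasets do not line up as intended or the RNG state differs between the two calls.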
Replies: 1 comment

- That should give the same forecasts if your train set is the same as the test set except that the last prediction_length values are trimmed off.
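A quick way to check the condition the reply describes on your own data (assuming train_set, test_set, and prediction_length as defined in the question, with items listed in the same order in both datasets):

```python
import numpy as np

# Each GluonTS dataset entry is a dict with a "target" array; the train target
# should equal the test target with the last prediction_length values dropped.
for train_entry, test_entry in zip(train_set, test_set):
    train_target = np.asarray(train_entry["target"])
    test_target = np.asarray(test_entry["target"])
    assert len(test_target) == len(train_target) + prediction_length
    assert np.allclose(train_target, test_target[:-prediction_length])
```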