PEMS-BAY results #12

Open

vgsatorras opened this issue Sep 23, 2021 · 1 comment

@vgsatorras commented Sep 23, 2021

Hi,

Thank you for publishing the code. I am trying to reproduce the results for the PEMS-BAY dataset, but the loss I get is larger than the one reported in the appendix of the paper. I just pulled the repository and ran the code with the provided commands. Below is the log at epoch 99.

2021-09-23 02:36:07,372 - INFO - Epoch [99/200] (57000) train_mae: 10.9693, val_mae: 2.6471
2021-09-23 02:36:48,472 - INFO - Test: mae: 2.5019, mape: 0.0420, rmse: 4.2803
2021-09-23 02:36:48,473 - INFO - Horizon 15mins: mae: 1.4035, mape: 0.0296, rmse: 3.0428
2021-09-23 02:36:48,473 - INFO - Horizon 30mins: mae: 1.8508, mape: 0.0425, rmse: 4.3016
2021-09-23 02:36:48,473 - INFO - Horizon 60mins: mae: 2.3758, mape: 0.0592, rmse: 5.5099
2021-09-23 02:36:48,474 - INFO - Epoch [99/200] (57000) train_mae: 10.9693, test_mae: 2.5019, lr: 0.000005, 357.9s, 378.4s

The training loss seems too large; could it be diverging? Perhaps an error was introduced into the repository in one of the recent updates?

Best,
Victor

@chaoshangcs (Owner) commented Sep 27, 2021

Thanks for your message. I checked this situation. The performance on PEMS-BAY does have a gap compared with our previous implementation; it might have been introduced by the recent updates. I will check the code, re-tune the parameters, and get back to you soon. Thanks for the reminder.

Update:
I quickly fine-tuned some parameters. It seems I used a learning rate that was too large before; when I set base_lr to 0.001, the performance improved. I suspect the model is very sensitive to the learning-rate parameters: base_lr, lr_decay_ratio, and steps.
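For anyone else tuning these, here is a minimal sketch of how base_lr, lr_decay_ratio, and steps typically interact in DCRNN-style training code, assuming a PyTorch MultiStepLR schedule (the exact wiring and values in this repo may differ; check the YAML config and trainer):

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

# Hypothetical values for illustration; the real ones live in the config file.
base_lr = 0.001           # the fix suggested above (was larger before)
lr_decay_ratio = 0.1      # multiplicative decay applied at each milestone
steps = [20, 30, 40, 50]  # epochs at which the decay kicks in

model = torch.nn.Linear(10, 1)  # stand-in for the real model
optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
scheduler = MultiStepLR(optimizer, milestones=steps, gamma=lr_decay_ratio)

for epoch in range(60):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).mean()  # dummy training step
    loss.backward()
    optimizer.step()
    # After each milestone in `steps`, lr = base_lr * lr_decay_ratio ** (milestones passed)
    scheduler.step()
```

With a schedule like this, a too-large base_lr keeps the loss high or divergent for the first tens of epochs, which would match the train_mae of ~10.97 at epoch 99 reported above.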
