

Pre-processing and Training #2

Open
BettyHcZhang opened this issue Apr 22, 2020 · 6 comments

Comments

@BettyHcZhang

Thank you for sharing your code. I followed your steps for pre-processing and training, but my overall F1 score is 69, lower than the paper reports. I think I must have missed some important part of the experiment; could you give me some suggestions? Thank you again.

@taoshen58
Owner

I've never encountered such a problem. I re-ran this code from scratch two months ago and got a similar result.

How did you search for the full logical forms? Because there is a timeout parameter, too much load on one CPU would decrease the search success rate and, in turn, hurt model training.
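I can't confirm this repo's exact implementation, but a minimal Python sketch (with a hypothetical `search_fn` standing in for the logical-form search) shows why a per-example wall-clock timeout makes the success rate sensitive to CPU load: a slow or busy machine hits the alarm more often, silently dropping training examples.

```python
import signal

class SearchTimeout(Exception):
    """Raised when the per-example search budget is exhausted."""
    pass

def _on_timeout(signum, frame):
    raise SearchTimeout

def search_with_timeout(search_fn, example, timeout_s=5):
    # Give each example a fixed wall-clock budget; on a heavily
    # loaded CPU the alarm fires more often, lowering the ratio
    # of examples with a successfully found logical form.
    signal.signal(signal.SIGALRM, _on_timeout)
    signal.alarm(timeout_s)
    try:
        return search_fn(example)
    except SearchTimeout:
        return None  # search failed for this example
    finally:
        signal.alarm(0)  # cancel any pending alarm

# Toy example: compute the success ratio over a small "dataset".
results = [search_with_timeout(lambda ex: ex * 2, ex, timeout_s=1)
           for ex in range(10)]
success_ratio = sum(r is not None for r in results) / len(results)
```

Under this sketch, the success ratio reported during pre-processing directly bounds how much supervised data the trainer sees, which is why a depressed ratio can show up later as a lower F1.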

@BettyHcZhang
Author

BettyHcZhang commented Apr 22, 2020

I searched for the logical forms on the training and dev sets; the success ratios are listed in the attached picture. Maybe those figures are already affected by the timeout parameter?
(attached image: lf_ratio)

@taoshen58
Owner

The rates are reasonable. It's strange that you get only a 69% overall score.

@BettyHcZhang
Author

I think your model architecture is reasonable and brilliant. Thank you very much for your quick reply.

@taoshen58
Owner

Sorry I couldn't be of more help. If you have any further questions, feel free to ask anytime.

@sixlife

sixlife commented Feb 17, 2022

The data link is invalid; could you send me a working link? Thank you.

3 participants