
Result on ReadMe ? #1

Open
dylanee2 opened this issue Jul 25, 2017 · 6 comments
@dylanee2

Hi @tobyyouup ,

I thought it would be helpful for people like me (who want to use your implementation) if you included the results of running your implementation on the dataset.

Were you able to reproduce the result of original conv seq2seq?

Thanks!

@anglil

anglil commented Jul 28, 2017

@tobyyouup could you report the result in terms of BLEU scores in the wmt'14 en-de translation task? Thanks!

@tobyyouup
Owner

@dylanee2 @anglil I have run the IWSLT14 de-en task (the command is shown in the README), and I get a BLEU score of 25 at step 60k with batch size 32.

@anglil

anglil commented Jul 31, 2017

@tobyyouup thanks, that looks promising. Could you say how many GPUs you used, and on which test set (newstest2014, newstest2015, or something else) you got the BLEU score of 25?

@tobyyouup
Owner

Hi anglil, just one Tesla K40 GPU. The training data is IWSLT German-English; I concatenated dev2010, tst2010, tst2011, and tst2012 as the test set.
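For reference, the test-set construction described above can be sketched as follows. The file names and extensions are assumptions based on a typical IWSLT14 de-en data layout, not something specified in this thread; adjust them to match your own preprocessing output.

```shell
# Build a combined test set from the IWSLT14 de-en dev/test splits
# by concatenating dev2010, tst2010, tst2011, and tst2012.
# File names below are assumed; adapt to your data directory.
for lang in de en; do
  cat dev2010.${lang} tst2010.${lang} tst2011.${lang} tst2012.${lang} > test.${lang}
done
```

The concatenation order must be identical on the source (de) and target (en) sides so that the sentence pairs stay aligned.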

@anglil

anglil commented Sep 13, 2017

Thanks.

@cauivy

cauivy commented Nov 7, 2018

@tobyyouup Hi, tobyyouup, do you remember how long it took to train 60k steps with batch size 32 on IWSLT14 de-en?
