Congratulations on this interesting work!

I am trying to reproduce the results from your paper, including both cutoff and back-translation, which show great improvements. However, I'm struggling because some details are missing.
For example, after a great deal of trial and error tuning the hyper-parameters (cutoff_ratio, ce_loss, js_loss), I can reproduce the CoLA results, but not those of the other GLUE tasks. This may simply be because I have not yet found the right hyper-parameters, so it would be very helpful if you could share the detailed hyper-parameters used for each GLUE task (e.g., the alpha and beta values).
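To be concrete, this is how I am currently combining the three loss terms. It is only a minimal numpy sketch of my own understanding, not your implementation: the exact weighting of the augmented-view cross-entropy (alpha) and the Jensen-Shannon consistency term (beta) is my assumption, and it is exactly these two values I am asking about.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL(p || q) per row; assumes strictly positive distributions
    return np.sum(p * np.log(p / q), axis=-1)

def js_divergence(p, q):
    # symmetric Jensen-Shannon divergence between two distributions
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cutoff_loss(logits_orig, logits_aug, labels, alpha=1.0, beta=1.0):
    # my assumed objective: CE on the original view, alpha-weighted CE on
    # the cutoff-augmented view, beta-weighted JS consistency between them
    p, q = softmax(logits_orig), softmax(logits_aug)
    rows = np.arange(len(labels))
    ce_orig = -np.log(p[rows, labels]).mean()
    ce_aug = -np.log(q[rows, labels]).mean()
    js = js_divergence(p, q).mean()
    return ce_orig + alpha * ce_aug + beta * js
```

With identical logits for both views the JS term vanishes, so the loss reduces to (1 + alpha) times the plain cross-entropy, which is how I sanity-check my implementation.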
Also, for back-translation, I am trying to reproduce it with the WMT'19 en-de and de-en models in fairseq, but I cannot improve on the naive baseline: my reproduction reaches about 88 on RTE with RoBERTa-large, whereas you report 91.7. Could you share the details of your back-translation setup and the related hyper-parameters?
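For reference, this is roughly how I generate the paraphrases with the fairseq torch.hub models. The round-trip structure follows fairseq's hub interface; the use of sampling and the temperature value are my own choices, which may well be where my setup differs from yours:

```python
def backtranslate(sentence, en2de, de2en, temperature=0.9):
    """Round-trip translate en -> de -> en to produce a paraphrase.

    en2de / de2en are fairseq hub translation models exposing .translate();
    sampling with a temperature (rather than beam search) is my assumption
    for getting diverse augmentations.
    """
    german = en2de.translate(sentence, sampling=True, temperature=temperature)
    return de2en.translate(german, sampling=True, temperature=temperature)

# I load the WMT'19 single models via torch.hub (downloads the weights):
# import torch
# en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model',
#                        tokenizer='moses', bpe='fastbpe')
# de2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.de-en.single_model',
#                        tokenizer='moses', bpe='fastbpe')
```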
Thanks for your help, and I look forward to hearing from you!