Poor performance and poor results #15
Hmm, 9 hours compared to 10 minutes? Wow, that is horrible. Sadly, I'm swamped with work and can't figure out what's wrong (I checked your code and it seems fine). @HighCWu do you have any ideas?
I guess it might be caused by a small batch size. Have you used the same training batch size and the same number of epochs as the official implementation? @colanim A large batch size keeps training steady, but it also means training takes longer.
Thanks for the answer. I used the same number of epochs. With the official implementation I used a batch size of 32; with this one, a batch size of 64, so I don't think the problem comes from there. I think there is a problem in my code, because 9 hours compared to 10 minutes with the TensorFlow script is weird.
I tried this code and https://github.com/hanxiao/bert-as-service to get sentence representations, and TensorFlow is much faster: roughly 200 ms vs 2000 ms.
Thanks @MrKamiZhou, so I'm guessing that something is wrong here, because Keras shouldn't be this slow. I will try to figure it out as soon as I have some free time.
I'm trying to fine-tune BERT on the STS-B dataset.
I used the following notebook to fine-tune it using BERT-keras.
(As described in the paper, I just added a classification layer on top of the CLS token of BERT's output.)
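For context, the "classification layer on the CLS token" step can be sketched independently of any framework: take the hidden state of the first token of BERT's sequence output and feed it through a single dense layer (one output unit, since STS-B is a regression task). This is a minimal NumPy illustration of the idea, not the notebook's actual code; the shapes and names are assumptions.

```python
import numpy as np

def cls_regression_head(sequence_output, W, b):
    """Score sentence pairs from BERT's sequence output.

    sequence_output: (batch, seq_len, hidden) array of token embeddings.
    W: (hidden, 1) weight matrix of the regression head.
    b: scalar bias.
    Returns a (batch,) vector of similarity scores.
    """
    cls = sequence_output[:, 0, :]      # hidden state of the [CLS] token
    return (cls @ W + b).squeeze(-1)    # single linear output for STS-B regression

# Toy shapes: batch of 2, sequence length 5, hidden size 4.
rng = np.random.default_rng(0)
seq_out = rng.normal(size=(2, 5, 4))
W = rng.normal(size=(4, 1))
scores = cls_regression_head(seq_out, W, 0.0)
print(scores.shape)
```

In a real Keras model, the slice and dense layer would be trained end to end on top of the pretrained BERT weights.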
However, there are great differences in performance and results between this notebook and the script used in the official implementation for fine-tuning:
Note: Pearson and Spearman correlation metrics are used to evaluate accuracy on the STS-B dataset.
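For anyone reproducing the evaluation, the two metrics can be computed with `scipy.stats`; the prediction and label values below are made-up illustrations, not results from either implementation.

```python
from scipy.stats import pearsonr, spearmanr

predictions = [1.2, 3.4, 2.2, 4.8, 0.5]   # hypothetical model scores
gold_scores = [1.0, 3.0, 2.5, 5.0, 0.0]   # hypothetical STS-B labels (0-5 scale)

# Pearson measures linear correlation; Spearman measures rank correlation.
pearson = pearsonr(predictions, gold_scores)[0]
spearman = spearmanr(predictions, gold_scores)[0]
print(pearson, spearman)
```

A large gap between the two implementations should show up directly in these two numbers on the STS-B dev set.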
Why is there such a difference between the two approaches?