Hi Tianduo,
I really appreciate your work on developing learnable data augmentation for sentence representation learning. Your proposed method, DiffAug, shows very strong performance in the semi-supervised and supervised settings.
However, I was wondering how DiffAug performs in the unsupervised setting.
If you have already tried this, does DiffAug still outperform SimCSE?
If not, what do you think of first training the prefix with unsupervised contrastive learning (keeping the language model frozen), and then jointly training the language model and the prefix?
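To make the two-stage idea concrete, here is a rough PyTorch-style sketch of what I have in mind. The `PrefixEncoder` toy module, the `simcse_loss` helper, and all hyperparameters are illustrative placeholders rather than DiffAug's actual implementation: stage 1 freezes the backbone and optimizes only the prefix with an unsupervised SimCSE-style objective (dropout-based positives), and stage 2 unfreezes everything for joint training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixEncoder(nn.Module):
    """Toy stand-in for a prefix-tuned sentence encoder.

    A small Transformer backbone plays the role of the pretrained LM, and a
    few learnable prefix embeddings are prepended to every input sequence.
    Only the prefix is trained in stage 1.
    """
    def __init__(self, vocab_size=30522, dim=128, prefix_len=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dropout=0.1, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.prefix = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)

    def backbone_parameters(self):
        return list(self.embed.parameters()) + list(self.backbone.parameters())

    def forward(self, input_ids):
        tok = self.embed(input_ids)                              # (B, L, D)
        pre = self.prefix.unsqueeze(0).expand(tok.size(0), -1, -1)
        hidden = self.backbone(torch.cat([pre, tok], dim=1))
        return hidden.mean(dim=1)                                # sentence embedding

def simcse_loss(model, input_ids, temperature=0.05):
    """Unsupervised InfoNCE: two dropout-perturbed views of the same batch."""
    z1, z2 = model(input_ids), model(input_ids)                  # dropout differs
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim / temperature, labels)

model = PrefixEncoder()
batch = torch.randint(0, 30522, (16, 32))                        # fake token ids

# Stage 1: freeze the "language model", train only the prefix.
for p in model.backbone_parameters():
    p.requires_grad = False
opt = torch.optim.AdamW([model.prefix], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    simcse_loss(model, batch).backward()
    opt.step()

# Stage 2: unfreeze everything and train the LM and prefix jointly.
for p in model.parameters():
    p.requires_grad = True
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
for _ in range(100):
    opt.zero_grad()
    simcse_loss(model, batch).backward()
    opt.step()
```

In a real run the toy backbone would of course be replaced by the pretrained language model, and stage 2 would typically use a much lower learning rate for the backbone than for the prefix.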
In our preliminary experiments, we did try unsupervised learning objectives (e.g., MLM), but the final performance was not satisfactory.
Regarding your question about whether it is possible to do contrastive learning twice (once for prefix-tuning and once for joint tuning), I suggest you read this paper; the idea is quite relevant to yours.
I believe it is interesting and worthwhile to explore whether we can train a data augmentation module (e.g., a prefix) with only unsupervised data. As we suggest in our paper, making positive pairs meaningfully different is a promising way to improve contrastive learning.