
Poor results when using BERT in a seq2seq model for keyphrase generation #59

Closed
whqwill opened this issue Nov 28, 2018 · 23 comments

Comments

@whqwill

whqwill commented Nov 28, 2018

Hi,

Recently, I have been doing research on keyphrase generation. Usually, people use a seq2seq model with attention to tackle this problem. Specifically, I use this framework: https://github.com/memray/seq2seq-keyphrase-pytorch, which is an implementation of http://memray.me/uploads/acl17-keyphrase-generation.pdf.

Now I have simply changed its encoder to BERT, but the results are not good. An experimental comparison of the two models is in the attachment.

Could you give me some advice on whether what I did is reasonable and whether BERT is suitable for this kind of task?

Thanks.
RNN vs BERT in Keyphrase generation.pdf

@waynedane

Have you tried a Transformer decoder instead of the RNN decoder?
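For reference, a minimal sketch of what a Transformer decoder on top of the BERT encoder outputs could look like, using PyTorch's built-in nn.TransformerDecoder. All dimensions, the vocabulary size, and the random tensors are placeholders for illustration, not the setup from this thread:

```python
import torch
import torch.nn as nn

d_model, vocab_size, src_len, tgt_len, batch = 768, 30522, 128, 16, 2

encoder_states = torch.randn(src_len, batch, d_model)     # BERT outputs as decoder memory
tgt_tokens = torch.randint(0, vocab_size, (tgt_len, batch))

embed = nn.Embedding(vocab_size, d_model)
decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
generator = nn.Linear(d_model, vocab_size)

# Causal mask so each target position only attends to earlier positions.
causal_mask = torch.triu(torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)

out = decoder(embed(tgt_tokens), encoder_states, tgt_mask=causal_mask)
logits = generator(out)                                    # (tgt_len, batch, vocab_size)
```

The BERT hidden states serve as the decoder `memory`, so every source position stays available to cross-attention instead of only the last token.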

@whqwill
Author

whqwill commented Nov 28, 2018

Not yet, I will try. But I don't think the RNN decoder should be this bad.

@waynedane

Not yet, I will try. But I don't think the RNN decoder should be this bad.

Hmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer.
I am also very curious about the results with a Transformer decoder. If you get it done, could you tell me? Thank you.
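Something like the following sketch is what I mean, using the pytorch-pretrained-bert API; the example sentence, the projection layer, and the decoder hidden size are placeholders for illustration, not the code from the attached experiment:

```python
import torch
from pytorch_pretrained_bert import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

tokens = tokenizer.tokenize("deep keyphrase generation with a bert encoder")
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
attention_mask = torch.ones_like(input_ids)          # no padding in this toy example

with torch.no_grad():
    last_layer, _ = bert(input_ids, attention_mask=attention_mask,
                         output_all_encoded_layers=False)   # (batch, seq_len, 768)

# Option 1: last token's representation from the last layer (reportedly what was used).
last_token_state = last_layer[:, -1, :]                      # (batch, 768)

# Option 2: mask-aware mean of the last layer, as suggested above.
mask = attention_mask.unsqueeze(-1).float()
mean_state = (last_layer * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, 768)

# Either vector can be projected to the decoder size to build the initial hidden state.
proj = torch.nn.Linear(768, 512)                 # 512 = assumed RNN decoder hidden size
h0 = torch.tanh(proj(mean_state)).unsqueeze(0)   # (num_layers=1, batch, hidden)
```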

@waynedane

I think the batch size of the RNN-with-BERT model is too small. Please see

https://github.com/memray/seq2seq-keyphrase-pytorch/blob/master/pykp/dataloader.py
lines 377-378

@whqwill
Author

whqwill commented Nov 28, 2018

I don't know what you mean by giving me this link. I only set it to 10 because of the memory problem. Actually, when the sentence length is 512, the maximum batch size is only 5; if it is 6 or larger, I get an out-of-memory error on my GPU.

@whqwill
Author

whqwill commented Nov 28, 2018

Not yet, I will try. But I don't think the RNN decoder should be this bad.

Hmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer.
I am also very curious about the results with a Transformer decoder. If you get it done, could you tell me? Thank you.

You are right. Maybe the mean is better; I will try that as well. Thanks.

@waynedane

May I ask a question? Are you Chinese? Hahaha

@waynedane

Because each example has N target keyphrases, we want to put all of an example's targets in the same batch. 10 is so small that the targets of one example would probably end up split across different batches.

@whqwill
Author

whqwill commented Nov 28, 2018

I know, but ... it's the same problem ... my memory is limited ... so ...

PS: I am Chinese

@waynedane

I know, but ... it's the same problem ... my memory is limited ... so ...

PS: I am Chinese

I am as well, hahaha.

@waynedane

Could it be a corpus issue? BERT was trained on Wikipedia. I trained a mini BERT on KP20k, and its accuracy on the test set is currently 80%. Do you want to try using mine as the encoder?

@whqwill
Author

whqwill commented Nov 29, 2018 via email

@waynedane

The accuracy is for the masked LM and next-sentence-prediction tasks, not for keyphrase generation; sorry I didn't make that clear. My compute is limited, two P100s, and after almost a month it still hasn't finished training. 80% is the current figure.

@whqwill
Author

whqwill commented Nov 29, 2018

What do you mean by the "mini BERT" you mentioned?

@whqwill
Author

whqwill commented Nov 29, 2018

I think I roughly understand what you mean: you essentially pre-trained a new BERT from scratch on KP20k. But doing it that way ... really does feel like quite a hassle.

@waynedane

I think I roughly understand what you mean: you essentially pre-trained a new BERT from scratch on KP20k. But doing it that way ... really does feel like quite a hassle.

Yes. I used Junseong Kim's code: https://github.com/codertimo/BERT-pytorch. The model is much smaller than Google's BERT-Base Uncased; this one is L-8 H-256 A-8. I will send you the current training checkpoint and the vocab file.

@whqwill
Author

whqwill commented Nov 29, 2018

But can my version load your checkpoint directly, or do I have to install your version of the code?

@whqwill
Author

whqwill commented Nov 29, 2018

You can send it to my email, whqwill@126.com. Thanks.

@waynedane

But can my version load your checkpoint directly, or do I have to install your version of the code?

You can build a BERT model from Junseong Kim's code and then load the parameters into it; you don't necessarily have to install it.
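Roughly like the sketch below, assuming the checkpoint was produced with torch.save in that repo's trainer; the file names are placeholders, and the constructor arguments just mirror the L-8 H-256 A-8 configuration mentioned above:

```python
import torch
from bert_pytorch.model import BERT          # BERT encoder class from codertimo/BERT-pytorch
from bert_pytorch.dataset import WordVocab   # vocab helper from the same repo

vocab = WordVocab.load_vocab("kp20k_vocab.pkl")                  # placeholder file name
bert = BERT(len(vocab), hidden=256, n_layers=8, attn_heads=8)    # L-8 H-256 A-8

checkpoint = torch.load("mini_bert.ep_latest.pt", map_location="cpu")  # placeholder name
if isinstance(checkpoint, dict):      # checkpoint saved as a state_dict
    bert.load_state_dict(checkpoint)
else:                                 # checkpoint saved as the whole module
    bert = checkpoint
bert.eval()
```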

@whqwill
Author

whqwill commented Nov 29, 2018

OK then. Send me the checkpoint and I'll give it a try.

@thomwolf
Member

thomwolf commented Nov 29, 2018

Hi guys,
I would like to keep the issues of this repository focused on the package itself.
I also think it's better to keep the conversation in English so everybody can participate.
Please move this conversation to your repository, https://github.com/memray/seq2seq-keyphrase-pytorch, or to email.
Thanks, I am closing this discussion.
Best,

@InsaneLife

The accuracy is for the masked LM and next-sentence-prediction tasks, not for keyphrase generation; sorry I didn't make that clear. My compute is limited, two P100s, and after almost a month it still hasn't finished training. 80% is the current figure.
Hi, could you also send me the mini model? 993001803@qq.com. Thanks a lot!

@Accagain2014

Hi @whqwill, I have some doubts about how BERT is used with the RNN.
In the BERT-with-RNN method, I see that you only use the last term's representation (I mean TN's) as the input to the RNN decoder. Why not use the other terms' representations, like T1 to TN-1? I think the last term alone carries too little information to represent the whole context.
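For concreteness, letting the decoder attend over all of the token states T1..TN instead of feeding only TN would look roughly like this sketch; the shapes and the dot-product attention are illustrative assumptions, not the code under discussion:

```python
import torch
import torch.nn.functional as F

batch, src_len, bert_dim, dec_dim = 2, 128, 768, 512
encoder_states = torch.randn(batch, src_len, bert_dim)    # all BERT token states T1..TN
decoder_state = torch.randn(batch, dec_dim)                # current RNN decoder hidden state

proj = torch.nn.Linear(bert_dim, dec_dim)                  # bridge BERT dim -> decoder dim
memory = proj(encoder_states)                              # (batch, src_len, dec_dim)

# Dot-product attention: score every source position against the decoder state.
scores = torch.bmm(memory, decoder_state.unsqueeze(-1)).squeeze(-1)   # (batch, src_len)
weights = F.softmax(scores, dim=-1)                                    # attention weights
context = torch.bmm(weights.unsqueeze(1), memory).squeeze(1)           # (batch, dec_dim)

# `context` is combined with the decoder state at each step to predict the next token,
# so information from every source position, not just TN, reaches the decoder.
```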
