What is the implementation of embedding in PaddlePaddle #3867
Thanks for tracking this down. Our network is:

```python
data = paddle.layer.data("word", paddle.data_type.integer_value_sequence(input_dim))
emb = paddle.layer.embedding(input=data, size=emb_dim)
conv_3 = paddle.networks.sequence_conv_pool(input=emb, context_len=3,
                                            hidden_size=hid_dim)
```

We want to replace the embedding step (data → emb) with our own: how is the data parsed, and once we have converted it into our own vectors, how do we wrap them up as emb? We are looking into embedding in order to understand its internal format, so that the result can be fed into paddle.networks.sequence_conv_pool.
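Conceptually, an embedding layer is just a table lookup: an input_dim × emb_dim matrix indexed by word ID. The following is a minimal numpy sketch of that idea (all names and sizes here are illustrative, not Paddle's actual internals):

```python
import numpy as np

# Illustrative sizes: a 10-word dictionary, 4-dimensional embeddings.
input_dim, emb_dim = 10, 4

# The embedding table: one row per dictionary word.
emb_table = np.random.rand(input_dim, emb_dim).astype(np.float32)

# An input sequence of word IDs (dictionary positions), the kind of
# input paddle.data_type.integer_value_sequence describes.
word_ids = np.array([3, 1, 7])

# The "data -> emb" step is a row lookup: each ID selects its row,
# yielding a (sequence_length, emb_dim) array for the next layer.
emb = emb_table[word_ids]
print(emb.shape)  # (3, 4)
```

If your own vectors follow this layout (row i is the vector for dictionary index i), they plug into the same lookup.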
Please refer to the replies under this issue; I will also add this question to the PaddlePaddle FAQ:
Thanks, we will look into it.
The SRL section of PaddleBook, https://github.com/PaddlePaddle/book/blob/develop/07.label_semantic_roles/train.py#L142, loads pretrained parameters; you can also refer to that example for the usage.
I read https://github.com/PaddlePaddle/book/blob/develop/07.label_semantic_roles/train.py#L142 and it looks very useful. Could the maintainer on duty check whether my understanding is correct: via parameters.set, a pretrained 44068×32 matrix is loaded (44068 word vectors, each 32-dimensional, each dimension a float32; the i-th vector in the matrix corresponds to the word at position i in the dictionary). Once it is loaded, the embedding layer uses this matrix as its map: each word in the input (a dictionary position) is mapped to the corresponding word vector in the loaded matrix, and after the mapping the output is passed on to the other layers. So we only need to load our own trained matrix to get a custom embedding.
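The understanding above can be sketched in numpy: swapping the embedding table for your own pretrained float32 matrix makes the lookup return your vectors. This only simulates what parameters.set does conceptually; the parameter name and the exact Paddle call should be taken from the SRL example itself, not from this sketch:

```python
import numpy as np

# Shape and dtype follow the comment above: 44068 words, 32 dims, float32.
vocab_size, emb_dim = 44068, 32

# Your own pretrained matrix: row i is the vector for the word at
# dictionary position i.
my_matrix = np.random.rand(vocab_size, emb_dim).astype(np.float32)

# Simulated parameters.set: replace the embedding table with ours.
emb_table = my_matrix

# After loading, the embedding layer maps each word ID to the
# corresponding row of the loaded matrix.
sentence = np.array([42, 1000, 7])
vectors = emb_table[sentence]

# Each looked-up vector is exactly the row we supplied.
assert np.array_equal(vectors[1], my_matrix[1000])
```

With this in place the output of the lookup, shape (sequence_length, 32), feeds the downstream layers unchanged, which is why loading a custom matrix is all that is needed for a custom embedding.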
OK, thank you!
We want to know how the embedding function is implemented internally, but we cannot find it using tools such as Source Insight. Could you help us locate this function in the Paddle project?