add external memory network demo #696
Conversation
self.name = name
self.mem_slot_size = mem_slot_size
self.mem_fea_size = mem_fea_size
self.scale = 5
self.scale = scale
This has been corrected.
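The point of the suggestion above is that the hard-coded `self.scale = 5` should use the constructor argument. For context, the scale factor sharpens the softmax over cosine similarities when addressing memory slots. A minimal NumPy sketch of that effect (illustrative names only; the actual demo builds this out of Paddle layers):

```python
import numpy as np

def addressing_weights(key, memory, scale):
    # Cosine similarity between the key and each memory slot.
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    # A larger scale sharpens the softmax over the similarities.
    e = np.exp(scale * (sims - sims.max()))
    return e / e.sum()

mem = np.array([[1.0, 0.0], [0.0, 1.0]])
key = np.array([1.0, 0.2])
w_soft = addressing_weights(key, mem, scale=1.0)
w_sharp = addressing_weights(key, mem, scale=5.0)
# The larger scale concentrates more weight on the best-matching slot.
```

Exposing `scale` as a constructor argument lets users tune this sharpness per memory instance.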
self.scale = 5
self.external_memory = memory(name=self.name,
    size=mem_fea_size*mem_slot_size,
    boot_bias=ParamAttr(initial_std=0.01,
bad indent
This has been updated.
bias_attr = False,
act = SoftmaxActivation(),
size = self.mem_slot_size,
name='read_weight')
In order to avoid name conflicts when using multiple memories, this and the other layer names should be prefixed by self.name.
Similar issues have been addressed.
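To illustrate the prefixing suggestion, a small pure-Python sketch (a hypothetical helper, not the Paddle API) of how prefixing every internal layer name with `self.name` keeps two memory instances from colliding in one network config:

```python
class ExternalMemory(object):
    def __init__(self, name):
        self.name = name

    def layer_name(self, suffix):
        # Prefix each internal layer name with the instance name so
        # two memories can coexist in the same network configuration.
        return "%s_%s" % (self.name, suffix)

m1 = ExternalMemory("mem_a")
m2 = ExternalMemory("mem_b")
# m1.layer_name("read_weight") and m2.layer_name("read_weight") differ,
# so the two read-weight layers no longer share a name.
```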
return memory_output

def MakeConstantVector(self, vec_size, value, dummy_input):
Python naming convention: make_constant_vector.
Changed the function name following the convention.
memory_removed = mixed_layer(input = [identity_projection(input=self.external_memory),
                                      identity_projection(input=memory_remove_neg)],
                             bias_attr = False,
                             act = LinearActivation())
Lines 78 and 81 can be combined and written as: memory_removed = self.external_memory - memory_remove.
See https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/trainer_config_helpers/tests/configs/math_ops.py
This part of the code has been updated using math_ops.
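Element-wise, the simplification suggested above is just a subtraction: negating one input and summing via projections is equivalent to subtracting directly. A NumPy sketch of the erase step (illustrative values, not the demo's code):

```python
import numpy as np

memory = np.array([[0.5, 0.5], [0.2, 0.8]])        # slots x features
memory_remove = np.array([[0.1, 0.0], [0.0, 0.3]])  # amount to erase

# Equivalent of the two-projection mixed_layer: negate, then sum.
memory_remove_neg = -memory_remove
via_mixed = memory + memory_remove_neg

# The suggested one-liner using overloaded math operators.
memory_removed = memory - memory_remove
```

Both forms produce the same updated memory, so the operator form is preferable for readability.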
print_layer(input=[erase_vec])
print_layer(input=[add_vec])

out_prod = out_prod_layer(norm_cosine_similarity_write, erase_vec, name="outer")
Creating a constant vector erase_vec for this is very ugly. A nicer way to do this is to enhance the "repeat" layer to allow repeating in both directions, similar to "repmat" in MATLAB.
Looking into the repeat layer currently.
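The reviewer's point is that the constant vector of ones only serves to broadcast the write weights across the feature dimension, which a repmat-style repeat would do directly. A NumPy sketch of the equivalence (hypothetical sizes):

```python
import numpy as np

w = np.array([0.7, 0.3])   # write weights over memory slots
fea_size = 4               # feature size of each memory slot

# Current approach: outer product with a constant vector of ones.
ones_vec = np.ones(fea_size)
via_outer = np.outer(w, ones_vec)

# A repmat-style repeat (MATLAB repmat, NumPy tile) gives the same matrix
# without constructing the constant vector at all.
via_repeat = np.tile(w[:, None], (1, fea_size))
```

An enhanced repeat layer would therefore remove the need for the `MakeConstantVector` workaround entirely.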
out_prod_add = out_prod_layer(norm_cosine_similarity_write, add_vec, name="outer_add")

memory_output = mixed_layer(input = [identity_projection(input=memory_removed),
Using addto_layer can make this look simpler.
Switched to addto_layer.
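For context, the sum that addto_layer expresses is the add step of the standard erase-then-add memory write. A NumPy sketch of the full write update (illustrative values, not the demo's Paddle code):

```python
import numpy as np

M = np.array([[0.5, 0.5], [0.2, 0.8]])  # memory: slots x features
w = np.array([0.9, 0.1])                # write weights over slots
erase = np.array([1.0, 0.0])            # erase vector (per feature)
add = np.array([0.0, 0.5])              # add vector (per feature)

# Erase step: scale down each slot by its write weight times erase.
memory_removed = M - M * np.outer(w, erase)

# Add step: the sum that mixed_layer with identity projections computed;
# addto_layer expresses the same element-wise sum more directly.
memory_output = memory_removed + np.outer(w, add)
```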
from paddle.trainer_config_helpers import *

class ExternalMemory(object):
Need comments for the class and its member functions.
Comments have been added to both the class and member functions.
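A sketch of the requested documentation style (hypothetical wording; the method body here is a placeholder, not the Paddle implementation):

```python
class ExternalMemory(object):
    """External memory for a memory-augmented network.

    The memory is a matrix of `mem_slot_size` slots, each a vector of
    `mem_fea_size` features, read and written through soft attention
    weights computed from scaled cosine similarity.
    """

    def make_constant_vector(self, vec_size, value, dummy_input):
        """Build a constant vector of length `vec_size` filled with `value`.

        `dummy_input` only anchors the layer in the network graph.
        """
        return [value] * vec_size  # placeholder body for illustration
```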
Thank you for contributing code to PaddlePaddle. Since Paddle V1/V2 is no longer maintained and the related code has been removed from the develop branch, this PR is being closed. You are welcome to contribute to the latest Paddle version, Fluid.
This PR includes an example implementation of an external memory network and example usage with a simple task.