Paddle V4 API - Word to Vec #10214
Thanks for this design! I understand that this design proposes a way to define and call Fluid functions from Python. In particular, the definition of a Fluid function depends not only on Fluid blocks, but also on a Python decorator. However, given that Fluid functions might be called from host languages other than Python (C++, Java, Go), we'd prefer Fluid function definitions implemented only in Fluid. The reason Fluid functions might be called from other host languages comes from the way we do inference: the Fluid inference engine could take the form of a gRPC server in C++, Java, or Go, an Objective-C program built for ARM, etc.; in any case, it needs to be able to call the Fluid function describing the inference process. I am drafting a proposal for function definition and function invocation in Fluid: #10244 |
Agree. We need to store function signatures in the Fluid program desc, just like the exported functions of a shared library. That way, a program desc can be called from other languages. |
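The shared-library analogy above can be sketched in plain Python. This is a hypothetical illustration only (the class and method names below are invented, not the real Fluid `ProgramDesc` API): a program description that records exported function signatures, analogous to the symbol table of a shared library, so any host language that can parse the desc can discover its callable entry points.

```python
# Hypothetical sketch, NOT the real Fluid ProgramDesc API: a program
# description carrying exported function signatures, like the symbol
# table of a shared library.
from dataclasses import dataclass, field


@dataclass
class FunctionSignature:
    name: str
    input_names: list   # names of the feed variables the caller must provide
    output_names: list  # names of the fetch variables the caller gets back


@dataclass
class ProgramDesc:
    exported: dict = field(default_factory=dict)

    def export(self, sig: FunctionSignature):
        # Register an entry point, like exporting a symbol.
        self.exported[sig.name] = sig

    def lookup(self, name: str) -> FunctionSignature:
        # Any host language that can parse the desc can do this lookup.
        return self.exported[name]


desc = ProgramDesc()
desc.export(FunctionSignature(
    "infer", ["firstw", "secondw", "thirdw", "forthw"], ["predict"]))
sig = desc.lookup("infer")
print(sig.input_names)  # the inputs a caller in any language must feed
```

A C++ or Go runtime would do the equivalent lookup on the deserialized desc before binding feed/fetch variables and running the program.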
Agree. |
To my understanding, the user-facing syntax has two layers: the underlying layer implements basic but flexible elements that can cover 100% of use cases, trivial but complete, like TF; the upper layer adds encapsulation to make the 80% most frequent use cases easier, like Keras or other wrappers. Is this design a mix of these two layers, or just the underlying layer? |
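The two-layer split described above can be made concrete with a toy sketch. All names here are invented for illustration: a low-level layer of primitive, fully general operations, and a thin high-level wrapper bundling the common case, in the spirit of the TF-vs-Keras contrast.

```python
# Illustrative sketch only; these functions are invented, not part of
# any real framework.

# --- underlying layer: trivial but complete primitives ---
def matmul(x, w):
    # plain matrix multiply over nested lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*w)]
            for row in x]

def add_bias(x, b):
    return [[v + bv for v, bv in zip(row, b)] for row in x]

# --- upper layer: convenience wrapper over the primitives ---
def dense(x, w, b):
    """The frequent 80% case: matmul followed by bias add, in one call."""
    return add_bias(matmul(x, w), b)


x = [[1.0, 2.0]]
w = [[1.0, 0.0], [0.0, 1.0]]  # identity weights
b = [0.5, 0.5]
print(dense(x, w, b))  # same result as composing the primitives by hand
```

Users who need the uncommon 20% drop down to `matmul`/`add_bias` directly; everyone else stays on `dense`. The question above is which of these layers this API design targets.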
```python
def predict(self):
    # every embedding will share the same parameter
    with fluid.var_scope("shared_embedding"):
        embed_first = fluid.layers.embedding(
            input=first_word,
            size=[self.dict_size, EMBED_SIZE],
            dtype='float32',
            is_sparse=self.is_sparse)
    with fluid.var_scope("shared_embedding"):
        embed_second = fluid.layers.embedding(
            input=second_word,
            size=[self.dict_size, EMBED_SIZE],
            dtype='float32',
            is_sparse=self.is_sparse)
    with fluid.var_scope("shared_embedding"):
        embed_third = fluid.layers.embedding(
            input=third_word,
            size=[self.dict_size, EMBED_SIZE],
            dtype='float32',
            is_sparse=self.is_sparse)
    with fluid.var_scope("shared_embedding"):
        embed_forth = fluid.layers.embedding(
            input=forth_word,
            size=[self.dict_size, EMBED_SIZE],
            dtype='float32',
            is_sparse=self.is_sparse)
```

can be rewritten to

```python
def predict(self):
    embeds = []
    words = [first_word, second_word, third_word, forth_word]
    # every embedding will share the same parameter
    with fluid.var_scope("shared_embedding"):
        for word in words:
            embed = fluid.layers.embedding(
                input=word,
                size=[self.dict_size, EMBED_SIZE],
                dtype='float32',
                is_sparse=self.is_sparse)
            embeds.append(embed)
```

Also,

```python
@network("firstw", "secondw", "thirdw", "forthw")
def infer(self):
    first_word = fluid.layers.data(name='firstw', shape=[1], dtype='int64')
    second_word = fluid.layers.data(name='secondw', shape=[1], dtype='int64')
    third_word = fluid.layers.data(name='thirdw', shape=[1], dtype='int64')
    forth_word = fluid.layers.data(name='forthw', shape=[1], dtype='int64')
```

can be rewritten to

```python
word_names = 'firstw secondw thirdw forthw nextw'.split()

@network(*word_names)
def train_step(self):
    words = [fluid.layers.data(name=n, shape=[1], dtype='int64')
             for n in word_names]
```
|
Thanks @Superjomn! That is a very good question! Sorry, maybe the title is a little misleading: this issue mainly tries to address how to do cross-language invocation (e.g., Python calling a Fluid program). It is primarily done by using the … The issue you raise is very important, but I don't think it's the focus of this issue. The example "network construction" code is taken directly from our Fluid examples. We need to answer this question in another discussion about Fluid as a whole. Sorry, perhaps my title is too general. |
@Superjomn thanks! The first code change actually should be:

```python
def predict(self):
    embeds = []
    words = [first_word, second_word, third_word, forth_word]
    for word in words:
        # every embedding will share the same parameter
        with fluid.var_scope("shared_embedding"):
            embed = fluid.layers.embedding(
                input=word,
                size=[self.dict_size, EMBED_SIZE],
                dtype='float32',
                is_sparse=self.is_sparse)
        embeds.append(embed)
```

Changed. For the second one, I prefer keeping it plain and simple for illustration purposes :) |
Hello, this issue has not been updated for nearly a month, so we will close it within the day. If you still need to follow up after it is closed, you can reopen it and we will reply within 24 hours. We apologize for any inconvenience caused by the closing, and thank you for your support of PaddlePaddle! |
API design: #10152