Some usage questions about the High-Level API #11246
I can answer the memory-optimize question: we apply memory optimization to the program by default, so there seems to be no need to export a high-level API for it.
@dzhwinter Enabling memory optimization causes a performance problem that has not been solved yet, so it is necessary to export an interface so that it can be set to False.
```python
def train_network():
    predict = inference_network()
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(cost)
    accuracy = fluid.layers.accuracy(input=predict, label=label)
    return [avg_cost, accuracy]

if use_cuda and not fluid.core.is_compiled_with_cuda():
    return
```
In the trainer usage below, some arguments are passed to the Trainer constructor while others go to the member function train. Can they be merged or simplified?

```python
trainer = fluid.Trainer(
    train_func=train_program, place=place, optimizer_func=optimizer_func)
trainer.train(
    reader=train_reader,
    num_epochs=1,
    event_handler=event_handler,
    feed_order=['pixel', 'label'])
```
As for the function-argument question, that can be solved with Python's partial functions.
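A minimal sketch of the partial-function idea: `functools.partial` pre-binds some keyword arguments so that configuration and invocation can be merged into one call site. The `make_trainer` function below is a made-up stand-in for `fluid.Trainer`, not the real API.

```python
from functools import partial

def make_trainer(train_func, place, optimizer_func):
    # Illustrative stand-in for fluid.Trainer: just records its arguments.
    return {'train_func': train_func, 'place': place,
            'optimizer_func': optimizer_func}

# Pre-bind the configuration once...
configured = partial(make_trainer, place='CPU', optimizer_func='sgd')

# ...then supply only the remaining argument at call time.
trainer = configured(train_func='train_program')
print(trainer['place'])  # CPU
```

The same pattern would let a high-level API accept all arguments in one place while still constructing the trainer lazily.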
Hiding the training loop is not how frameworks usually do it; we suggest still exposing the main loop that feeds the training data. That also makes it easier to write debugging code. The current event-callback approach is very inflexible: inside a single global function you can do little more than print logs.
Hello, this issue has had no updates for nearly a month, so we will close it today. If you still need to follow up after it is closed, you can reopen it and we will reply within 24 hours. We apologize for any inconvenience caused by the closure. Thank you for your support of PaddlePaddle!
Background: while upgrading some models in the models repository, we ran into several problems:

1. How can the test_program used during training differ from the train_program? Currently we only have `self.test_program = self.train_program.clone()`.
2. How can only a subset of parameters be loaded for initialization? A predicate function has to be passed to load_vars in order to load only some parameters; the current usage, `io.load_persistables(exe, dirname=param_path)`, cannot load a partial set.
3. How is memory optimization done?
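On the partial-loading question, the predicate mechanism can be illustrated without Fluid: a `load_vars`-style API calls a user-supplied predicate on each variable and loads only those for which it returns True. The variable names below are invented for illustration and are not from any real program.

```python
# Illustrative stand-ins for a program's persistable variable names.
all_vars = ['fc_0.w_0', 'fc_0.b_0', 'conv_1.w_0', 'batch_norm_2.w_0']

def predicate(var_name):
    # Keep only the fully connected layer's parameters.
    return var_name.startswith('fc_')

# A load_vars-style API applies the predicate to decide what gets loaded.
to_load = [v for v in all_vars if predicate(v)]
print(to_load)  # ['fc_0.w_0', 'fc_0.b_0']
```

With `io.load_persistables` there is no such hook, which is why only `load_vars` with a predicate can restrict loading to a subset.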