Prune the feed op in compiler #18997

Merged

Conversation

@chengduoZH (Contributor) commented Aug 5, 2019

Fixes #18922. ParallelExecutor does not need the feed op, so the compiler now prunes it from the program before execution.
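To make the idea concrete, here is a minimal sketch of what pruning means (not the actual diff; the real change lives in the compiler, and Block._remove_op is a private fluid.framework helper used here only for illustration):

def prune_feed_ops(program):
    # `program` is a fluid.Program; feed ops live in its global block.
    block = program.global_block()
    feed_op_indices = [i for i, op in enumerate(block.ops) if op.type == "feed"]
    # Remove back to front so the remaining indices stay valid.
    for i in reversed(feed_op_indices):
        block._remove_op(i)
    return program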

@chengduoZH force-pushed the prune_feed_op_in_compiler branch from 09139e3 to e836bc3 on August 9, 2019 08:32
@chengduoZH changed the title from "[WIP] Prune the feed op in compiler" to "Prune the feed op in compiler" on Aug 9, 2019
@chengduoZH force-pushed the prune_feed_op_in_compiler branch from e836bc3 to ab58c72 on August 9, 2019 08:34
# Code under review (excerpt from the test); see the author's comment below about save_inference_model:
self.exe,
model_filename=self.model_filename,
params_filename=self.params_filename,
main_program=main)
@chengduoZH (author) commented:
save_inference_model inserts feed ops into the program, as this excerpt from its implementation shows:

main_program = main_program._prune(targets=target_vars)
main_program = main_program._inference_optimize(prune_read_op=True)
fetch_var_names = [v.name for v in target_vars]
prepend_feed_ops(main_program, feeded_var_names)
append_fetch_ops(main_program, fetch_var_names)
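With the feed ops pruned at compile time, the workflow asked about in #18922 should look roughly like this (a usage sketch; the model directory and input shape are placeholders, and the model is assumed to have a single input):

import numpy as np
import paddle.fluid as fluid

place = fluid.CUDAPlace(0) if fluid.core.is_compiled_with_cuda() else fluid.CPUPlace()
exe = fluid.Executor(place)

# The loaded program still contains the feed/fetch ops that
# save_inference_model prepended/appended.
[infer_prog, feed_names, fetch_targets] = fluid.io.load_inference_model(
    "path/to/inference_model", exe)  # placeholder path

# Compile for ParallelExecutor-style execution; with this PR the feed ops
# are pruned during compilation instead of being left in the graph.
compiled_prog = fluid.CompiledProgram(infer_prog).with_data_parallel(places=[place])

data = np.random.random([1, 3, 224, 224]).astype("float32")  # placeholder shape
results = exe.run(compiled_prog,
                  feed={feed_names[0]: data},
                  fetch_list=fetch_targets)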

@chengduoZH requested a review from luotao1 on August 9, 2019 08:48
@luotao1 (Contributor) left a comment:

LGTM


Successfully merging this pull request may close these issues.

I saved an inference model; how should I use ParallelExecutor for prediction? (#18922)