How about implement another Run() in framework::Executor #7610

Closed
Xreki opened this issue Jan 17, 2018 · 4 comments
Assignees
Labels
预测 (Prediction; originally named Inference, includes C-API prediction issues, etc.)

Comments

Contributor

Xreki commented Jan 17, 2018

As suggested by @qingqing01, we may implement another Run() in paddle/framework/executor.h which does the same things as the run() of the Python Executor:

def run(self,
        program=None,
        feed=None,
        fetch_list=None,
        feed_var_name='feed',
        fetch_var_name='fetch',
        scope=None,
        return_numpy=True):

The interface may look like the following, and will do these things:

void Run(const ProgramDesc& program,
         Scope* scope,
         const std::map<std::string, Tensor>& feeds,
         std::map<std::string, Tensor>* fetches,
         const std::string& feed_var_name = "feed",
         const std::string& fetch_var_name = "fetch") {
  // 1. Clone the program
  // 2. Prepend feed_op
  // 3. Set feed variables
  // 4. Append fetch_op
  // 5. Call Run(const ProgramDesc&, Scope*, int, bool create_local_scope = true, bool create_vars = true)
  // 6. Get fetch variables
}
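To make the six steps concrete, here is a minimal, self-contained sketch using toy stand-ins for ProgramDesc, Scope, and Tensor. All type names and the "scale" op below are hypothetical illustrations for this sketch, not the real Paddle API:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

using Tensor = std::vector<float>;         // toy tensor: a flat float vector
struct OpDesc { std::string type, var; };  // toy op description
struct ProgramDesc { std::vector<OpDesc> ops; };
struct Scope { std::map<std::string, Tensor> vars; };

void Run(const ProgramDesc& program, Scope* scope,
         const std::map<std::string, Tensor>& feeds,
         std::map<std::string, Tensor>* fetches,
         const std::string& feed_var_name = "feed",
         const std::string& fetch_var_name = "fetch") {
  // 1. Clone the program so feed/fetch ops do not pollute the original.
  ProgramDesc copy = program;
  // 2. + 3. Prepend one feed_op per feed and copy the data into the scope.
  for (const auto& kv : feeds) {
    copy.ops.insert(copy.ops.begin(), {"feed", kv.first});
    scope->vars[kv.first] = kv.second;
  }
  // 4. Append one fetch_op per requested output.
  for (const auto& kv : *fetches) {
    copy.ops.push_back({"fetch", kv.first});
  }
  // 5. "Execute" the program: here the only compute op, "scale",
  //    just doubles its variable in place.
  for (const auto& op : copy.ops) {
    if (op.type == "scale") {
      for (float& v : scope->vars[op.var]) v *= 2.0f;
    }
  }
  // 6. Read the fetch variables back out of the scope.
  for (auto& kv : *fetches) kv.second = scope->vars[kv.first];
}
```

Note that `feeds` is taken by const reference and `fetches` by pointer, since the literal defaults "feed"/"fetch" cannot bind to non-const references as written in the original signature.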

This way, Run() can be called directly from both the Python API and the C++ API.

@Xreki Xreki added the 预测 (Prediction; originally named Inference, includes C-API prediction issues, etc.) label Jan 17, 2018
@sidgoyal78
Contributor

@Xreki: This idea seems like a reasonable approach to avoid the Resolver class, so I would prefer it.

(Another comment, comparing against the Resolver class design: steps 1-4, which are extra in this design, should be relatively cheap computationally. So calling Run() repeatedly, e.g. for inference in a streaming fashion, should also be fine.)

Contributor Author

Xreki commented Jan 18, 2018

Can you give some details about running steps 1-4 in a streaming fashion?

@sidgoyal78
Contributor

I was just saying that if we call the Run() function again and again:

  • in both the previous design with ProgramResolver and the current design, the only repetitive part I see is steps 1 to 4 in the function above.
    So I think this should be fine, since those steps are not costly.
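If those repeated steps ever did become costly, one way to amortize them is to prepare the feed/fetch-augmented program once and reuse it across calls. The sketch below is purely hypothetical (the `PreparedProgram`/`StreamingRunner` names are invented for illustration, not part of Paddle):

```cpp
// Hypothetical sketch: amortize steps 1, 2, and 4 across repeated Run()
// calls by caching the prepared program; only the per-call work
// (steps 3, 5, 6) happens every time.
struct PreparedProgram {
  // In a real framework this would hold the cloned ProgramDesc with
  // feed_op/fetch_op already inserted.
  bool prepared = false;
};

class StreamingRunner {
 public:
  // Returns the total number of runs so far.
  int RunOnce() {
    if (!cache_.prepared) {
      cache_.prepared = true;  // one-time program preparation (steps 1, 2, 4)
      ++prepare_count_;
    }
    ++run_count_;              // per-call work (steps 3, 5, 6)
    return run_count_;
  }
  int prepare_count() const { return prepare_count_; }

 private:
  PreparedProgram cache_;
  int prepare_count_ = 0;
  int run_count_ = 0;
};
```

With this shape, the preparation cost is paid once no matter how many times the runner is invoked in a streaming loop.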

Contributor Author

Xreki commented Jan 18, 2018

Okay, that sounds like an optimization opportunity. I have added it to the TODO list of the inference framework project as a reminder.
