
[Relay] Port param dict save/load from NNVM #2620

Merged · 7 commits · Feb 27, 2019

Conversation

weberlo (Contributor) commented Feb 18, 2019

This PR addresses #2413 by porting the nnvm.compiler.save_param_dict and nnvm.compiler.load_param_dict functions to Relay as tvm.relay.save_param_dict and tvm.relay.load_param_dict, respectively.

To serialize NDArrays, we wrap them in TensorValues, which have serialization functions defined on them. Let me know of any concerns about this approach.
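As background on what a param-dict serializer does, here is a deliberately simplified, hypothetical sketch in plain Python: names and flat float32 tensors written length-prefixed into one blob. The real NNVM/TVM format additionally writes a magic header, a reserved field, and full NDArray metadata (dtype, shape, context), so this illustrates only the round-trip idea, not the actual wire format, and the `mock_` names are invented for this sketch.

```python
import struct
from io import BytesIO

def mock_save_param_dict(params):
    """Serialize {name: [float, ...]} into one length-prefixed blob."""
    buf = BytesIO()
    buf.write(struct.pack("<Q", len(params)))          # number of entries
    for name, values in params.items():
        name_bytes = name.encode("utf-8")
        buf.write(struct.pack("<Q", len(name_bytes)))  # name length
        buf.write(name_bytes)                          # name payload
        buf.write(struct.pack("<Q", len(values)))      # element count
        buf.write(struct.pack("<%df" % len(values), *values))  # float32 data
    return buf.getvalue()

def mock_load_param_dict(blob):
    """Inverse of mock_save_param_dict."""
    buf = BytesIO(blob)
    (count,) = struct.unpack("<Q", buf.read(8))
    params = {}
    for _ in range(count):
        (nlen,) = struct.unpack("<Q", buf.read(8))
        name = buf.read(nlen).decode("utf-8")
        (elems,) = struct.unpack("<Q", buf.read(8))
        params[name] = list(struct.unpack("<%df" % elems, buf.read(4 * elems)))
    return params
```

A format like this is only readable by a loader that agrees on every field, which is why the discussion below insists on reusing the exact NNVM v1 byte layout.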

While porting, I found that ClosureNode, TupleValueNode, and TensorValueNode were not registered with TVM_REGISTER_NODE_TYPE. This PR also addresses this issue.

The test_bigendian_rpc_param test has not yet been confirmed to pass, so I still need to wait for the results from CI.

@jroesch @tqchen @srkreddy1238

Apologies for being late on this one. Let me know of anyone else I should @.

weberlo (Contributor, Author) commented Feb 20, 2019

test_bigendian_rpc_param seems to be passing, when it shouldn't be.

I think this path

host = os.environ.get("TVM_POWERPC_TEST_HOST", None)
port = os.environ.get("TVM_POWERPC_TEST_PORT", 9090)
if host is None:
    # no PowerPC test host configured, so the test silently skips
    return

is still being taken, since CI doesn't set TVM_POWERPC_TEST_HOST.

tqchen (Member) commented Feb 20, 2019

@weberlo we should use the original function that actually saves the NDArray dict (as in NNVM v1); otherwise the binary format won't be readable by graph_runtime.load_params.

weberlo (Contributor, Author) commented Feb 20, 2019

@tqchen Do you mean these functions? If all of the tests are passing without using those functions, does that mean we need a new test?

tqchen (Member) commented Feb 20, 2019

Yes, I do mean those functions. A more general test is to use the save_param_dict function and then call graph_runtime.load_params to load the result back in.
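The round-trip check described here can be sketched generically: serialize with one function, load with the other, and compare. The harness below is hypothetical, not the PR's actual test; in the real test, save_fn and load_fn would be tvm.relay.save_param_dict and the graph runtime's load_params path, with numpy comparison of NDArray contents. JSON stand-ins keep the sketch self-contained and runnable.

```python
import json

def check_param_roundtrip(save_fn, load_fn, params):
    """Serialize params, load them back, and verify nothing was lost."""
    blob = save_fn(params)
    loaded = load_fn(blob)
    # every parameter name must survive the round trip
    assert set(loaded) == set(params), "parameter names must survive"
    # every tensor's contents must survive the round trip
    for name, values in params.items():
        assert loaded[name] == values, "tensor contents must survive"
    return True

# JSON stand-ins for the real save/load pair (assumption: the real test
# would compare NDArray contents with numpy rather than ==).
check_param_roundtrip(json.dumps, json.loads, {"w": [1.0, 2.0], "b": [0.5]})
```

The key property being tested is cross-component compatibility: the bytes produced by the saver must be accepted by the independent loader, not merely by a matching deserializer in the same module.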

@tqchen added the "status: need update" label Feb 22, 2019
tqchen (Member) commented Feb 22, 2019

@weberlo please update the PR

weberlo (Contributor, Author) commented Feb 22, 2019

@tqchen I'm having some weird problems with includes. Maybe my approach is wrong, but I'm trying to take the functions from nnvm/src/compiler/graph_runtime.cc and put them in src/runtime/graph/graph_runtime.cc. These functions require the following node to be defined in the header:

/*!
 * \brief wrapper node container for exchange.
 */
struct NDArrayWrapperNode : public ::tvm::Node {
  std::string name;
  tvm::runtime::NDArray array;

  void VisitAttrs(tvm::AttrVisitor* v) final {
    v->Visit("name", &name);
    v->Visit("array", &array);
  }

  static constexpr const char* _type_key = "NDArrayWrapper";
  TVM_DECLARE_NODE_TYPE_INFO(NDArrayWrapperNode, Node);
};

TVM_DEFINE_NODE_REF(NDArrayWrapper, NDArrayWrapperNode);

But when I try to put #include <tvm/node/node.h> in the header, the compiler says it can't find that header.

Is there a different header I should be including, or is this struct definition using outdated types?

weberlo (Contributor, Author) commented Feb 26, 2019

@tqchen: @jroesch suggested to move the functions to a new set of files src/relay/backend/param_dict.[h/cc]. Any objections?

tqchen (Member) commented Feb 26, 2019

That works. You can move the NNVM implementation there and use the newly exposed packed function.

Review comment (Member) on the docs snippet:

.. code-block:: python

    # compile and save the modules to file.
    graph, lib, params = nnvm.compiler.build(

change the example code to relay

tqchen (Member) commented Feb 27, 2019

@weberlo some final comments

@tqchen merged commit 3e765ed into apache:master Feb 27, 2019
@tqchen added the "status: accepted" label and removed the "status: need update" label Feb 27, 2019
@yzhliu mentioned this pull request Mar 2, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 9, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 12, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Mar 12, 2019