
feat: add tvm wasm export #582

Closed
wants to merge 28 commits into from

Conversation

@WenheLI (Collaborator) commented Sep 9, 2020

This PR adds a wasm export solution for TensorFlow.
Support for more models will come in the future.

To export the model to wasm, you need to run `export WASM=1` in the command line to set the environment variable.

The above environment variable is just a temporary solution. In the final release, we should be able to configure the output format per job, so that support will be added after #565 lands.
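As a sketch of how such an environment switch could be read on the Node.js side (the `WASM` variable name is from this PR; the helper function below is hypothetical):

```javascript
// Hypothetical helper: decide the export format from the WASM env variable.
// Running `WASM=1 pipcook run ...` would select the wasm exporter.
function outputFormatFromEnv(env) {
  return env.WASM === '1' ? 'wasm' : 'node';
}
```

In the daemon this would be called as `outputFormatFromEnv(process.env)`.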

@yorkie (Member) commented Sep 10, 2020

To export the model to wasm, you need to run `export WASM=1` in the command line to set the environment variable.

I think adding a CLI option is better than an environment variable:

$ pipcook run foobar.json --type wasm|node
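A minimal sketch of how such a `--type` flag could be parsed; the parser below is hypothetical, not Pipcook's actual CLI code:

```javascript
// Hypothetical sketch: pick the export type from argv, defaulting to 'node'.
function parseExportType(argv) {
  const idx = argv.indexOf('--type');
  if (idx < 0) {
    return 'node';
  }
  const value = argv[idx + 1];
  if (value !== 'wasm' && value !== 'node') {
    throw new TypeError(`unsupported --type: ${value}`);
  }
  return value;
}
```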

@yorkie (Member) commented Sep 10, 2020

And I found some .bc files in our source; they might be unstable because they expect to run on a specific V8 version.

@@ -10,6 +10,8 @@ Before starting the installation, please make sure the following environments ar
- macOS, Linux
- Node.js 12

**Note:** To use the `wasm` output format, you need to manually install [`emsdk`](https://emscripten.org/docs/introducing_emscripten/index.html) and add `emcc` and `emsdk` to your environment variables.
Member:

Shall we include the emcc/emsdk inside Pipcook?

return output.toArray();
}

module.exports = predict;
Member:

EOF

@@ -61,7 +62,8 @@
"test": "midway-bin test --ts --full-trace",
"migration": "sequelize-cli db:migrate",
"migration:undo": "sequelize-cli db:migrate:undo",
"benchmark": "node benchmark/bootstrap.js"
"benchmark": "node benchmark/bootstrap.js",
"postinstall": "python3 -m pip install https://files.pythonhosted.org/packages/67/ff/011f588d54153c5d8ee3841d137acf752933f78eb1e3d1c5ffc6c1cb5a32/pipcook_tvm-0.7.dev1-cp37-cp37m-macosx_10_15_x86_64.whl"
Member:

Shall we have a PyPI package for this? Installing from this whl causes the following two issues:

  1. Python upgrades/downgrades do not work.
  2. The mirror index does not work.

@@ -0,0 +1,41 @@
/*
Member:

Was this created by us? We don't specify the license this way :(

const {dict, open} = boa.builtins();

// download tvm runtime from oss
const tvmjsPromise = execAsync(`wget http://ai-sample.oss-cn-hangzhou.aliyuncs.com/tvmjs/dist/tvmjs.bundle.js`, {cwd: dist});
Member:

No need to use a template string here.
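For illustration: a template literal with no interpolation is identical to a plain string, so the backticks can simply be replaced with quotes:

```javascript
// A template literal without any ${...} interpolation is exactly equal
// to the same text in a plain string literal.
const withTemplate = `wget http://ai-sample.oss-cn-hangzhou.aliyuncs.com/tvmjs/dist/tvmjs.bundle.js`;
const plain = 'wget http://ai-sample.oss-cn-hangzhou.aliyuncs.com/tvmjs/dist/tvmjs.bundle.js';
```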

Member:

We need a cache for this bundle.

const tvmjsPromise = execAsync(`wget http://ai-sample.oss-cn-hangzhou.aliyuncs.com/tvmjs/dist/tvmjs.bundle.js`, {cwd: dist});
fileQueue.push(tvmjsPromise);

const model = keras.models.load_model(path.join(opts.modelPath, "model.h5"));
Member:

How about splitting the framework-specific generator into another file?
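One possible shape for that split, sketched below: each framework-specific generator lives in its own module and a small registry dispatches to it. The module layout and the stub generators are hypothetical, not code from this PR:

```javascript
// Hypothetical registry for the suggested split. In a real layout each
// entry would be require('./generators/keras'), etc.
const generators = {
  keras: (opts) => `keras wasm output for ${opts.modelPath}`,
  tensorflow: (opts) => `tensorflow wasm output for ${opts.modelPath}`
};

// Dispatch to the framework-specific generator, failing loudly when
// no generator is registered for the requested framework.
function generateWasm(framework, opts) {
  const gen = generators[framework];
  if (!gen) {
    throw new Error(`no wasm generator for framework: ${framework}`);
  }
  return gen(opts);
}
```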

@yorkie (Member) left a comment:

And the CI should be fixed :)

@WenheLI (Collaborator, Author) commented Sep 22, 2020

Awaiting the TVM release.

@yorkie (Member) commented Sep 22, 2020

Awaiting the TVM release.

@WenheLI May I ask which release we are waiting for, and what problem(s) this release will resolve?

@WenheLI (Collaborator, Author) commented Sep 22, 2020

We are awaiting apache/tvm#6486 (v0.7).
They will provide binary distributions after this release, which will help us reduce redundant installation work.

@yorkie (Member) commented Oct 9, 2020

See the following notes from CPython Documentation:

if the Python executable is found in /usr/local/bin/python, it will assume that the libraries are in /usr/local/lib/pythonX.Y. (In fact, this particular path is also the “fallback” location, used when no executable file named python is found along PATH.) The user can override this behavior by setting the environment variable PYTHONHOME, or insert additional directories in front of the standard path by setting PYTHONPATH.

Thus setting PYTHONHOME is unnecessary, because if the Python executable is /path/to/boa/.miniconda/bin/python, the libraries are at /path/to/boa/.miniconda/lib/pythonX.Y.

/cc @FeelyChau @WenheLI @rickycao-qy

@yorkie (Member) commented Oct 9, 2020

I will remove it after #602 gets merged because this PR might require it as well :)

@yorkie yorkie mentioned this pull request Oct 10, 2020
@@ -317,6 +317,66 @@ export class PipelineService {
return path.join(CoreConstants.PIPCOOK_RUN, id, 'output.tar.gz');
}

// private _generateWASMOutput(dist: string, opts: GenerateOptions, fileQueue: Array<Promise<void | string>>): void {
Member:

@WenheLI @FeelyChau Boa is working with worker_threads in #602; this function could be reimplemented with worker_threads to avoid blocking the server.

@WenheLI WenheLI closed this Dec 11, 2020
@WenheLI (Collaborator, Author) commented Dec 11, 2020

Too many conflicts; I will open another PR for this feature.

@WenheLI WenheLI deleted the feat/tvm-support branch December 11, 2020 03:14