[microTVM][PyTorch][Tutorial]Adding a PyTorch tutorial for microTVM with CRT #13324
Conversation
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.
Generated by tvm-bot
this is great, thanks for writing it up!
@@ -35,6 +37,18 @@
IS_TEMPLATE = not os.path.exists(os.path.join(PROJECT_DIR, MODEL_LIBRARY_FORMAT_RELPATH))

MEMORY_SIZE_BYTES = 2 * 1024 * 1024
nit: maybe add a comment explaining why this is the default memory size
Added. It's not an interesting reason; the value is basically chosen so the CRT tests in TVM pass.
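To make the resolution above concrete, here is a hedged sketch of the constant with the kind of comment the reviewer asked for. The `fits_in_arena` helper is hypothetical, added here only to illustrate what the 2 MiB budget means in practice; it is not part of the tutorial.

```python
# Default memory size for the emulated CRT device. Per the discussion,
# the value is not principled: it is simply large enough for the TVM
# CRT tests to pass.
MEMORY_SIZE_BYTES = 2 * 1024 * 1024  # 2 MiB

# Hypothetical helper (not in the tutorial): check whether a model's
# workspace plus its I/O tensors fit within the memory budget.
def fits_in_arena(workspace_bytes, io_bytes):
    return workspace_bytes + io_bytes <= MEMORY_SIZE_BYTES
```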
Force-pushed from b7e242a to 091d332
Force-pushed from b1de307 to 5284076
@mehrdadh Thanks for the tutorial! Nice.
I just have nits (see inline) and one question:
Could we add the -Wno-unused-variable flag to MODEL_CFLAGS? Like:
diff --git a/src/runtime/crt/host/Makefile.template b/src/runtime/crt/host/Makefile.template
index a8e725ade..2caf7ba0b 100644
--- a/src/runtime/crt/host/Makefile.template
+++ b/src/runtime/crt/host/Makefile.template
@@ -22,7 +22,7 @@ CXXFLAGS ?= -Werror -Wall -std=c++11 -DTVM_HOST_USE_GRAPH_EXECUTOR_MODULE -DMEMO
LDFLAGS ?= -Werror -Wall
# Codegen produces spurious lines like: int32_t arg2_code = ((int32_t*)arg_type_ids)[(2)];
-MODEL_CFLAGS ?= -Wno-error=unused-variable -Wno-error=missing-braces -Wno-error=unused-const-variable
+MODEL_CFLAGS ?= -Wno-error=unused-variable -Wno-error=missing-braces -Wno-error=unused-const-variable -Wno-unused-variable
AR ?= ${PREFIX}ar
CC ?= ${PREFIX}gcc
Otherwise I get a bazillion unused-variable warnings when I run the script, which is not encouraging for people running / exploring the tutorial for the first time. -Wno-error=unused-variable is not enough to silence them. wdyt?
# Define Target, Runtime and Executor
# -------------------------------
#
# In this tutorial we use AOT host driven executor. To compile the model
nit: host-driven?
done
from tvm.contrib.download import download_testdata
from tvm.relay.backend import Executor

#################################
nit: add one more # to "cover" the line below?
done
#################################
# Load a pre-trained PyTorch model
# -------------------------------
nit: add one more - to match the end of the line above?
done
shape_list = [(input_name, input_shape)]
relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

#################################
nit: add more # chars to "cover" the end of the line below?
done
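For context on the `shape_list` snippet quoted above, here is a sketch of how that input spec is typically built. The name `"input0"` and the 1x3x224x224 shape are assumptions for a torchvision-style image classifier, not values taken from this PR.

```python
# Assumed graph input name and NCHW shape for a typical torchvision
# classifier (batch of one 3-channel 224x224 image).
input_name = "input0"
input_shape = (1, 3, 224, 224)

# Pair each graph input name with its shape; this list is what
# relay.frontend.from_pytorch expects as its second argument:
shape_list = [(input_name, input_shape)]
# relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
```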
#################################
# Define Target, Runtime and Executor
# -------------------------------
nit: add enough - chars to align this to the end of the line above?
done
# -------------------------------
#
# In this tutorial we use AOT host driven executor. To compile the model
# for an emulated embedded environment on an X86 machine we use C runtime (CRT)
s/X86/x86/
done
# In this tutorial we use AOT host driven executor. To compile the model
# for an emulated embedded environment on an X86 machine we use C runtime (CRT)
# and we use `host` micro target. Using this setup, TVM compiles the model
# for C runtime which can run on a X86 CPU machine with the same flow that
same here: x86 instead of X86
done
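To summarize the setup this thread keeps circling back to, here is a hedged sketch. The three string values come from the tutorial text under review; the commented-out TVM calls show the usual API shape but are assumptions, not executed or verified here.

```python
# The three choices named in the "Define Target, Runtime and Executor" section:
EXECUTOR = "aot"   # ahead-of-time, host-driven executor
RUNTIME = "crt"    # C runtime, for an emulated embedded environment
TARGET = "host"    # `host` micro target: simulate the device on an x86 machine

# In TVM these would typically become (sketch only, assumed API):
#   target = tvm.target.target.micro(TARGET)
#   runtime = tvm.relay.backend.Runtime(RUNTIME)
#   executor = tvm.relay.backend.Executor(EXECUTOR)
```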
# Simulate a microcontroller on the host machine. Uses the main() from `src/runtime/crt/host/main.cc`
# To use physical hardware, replace "host" with something matching your hardware.
How about instead of using "something", say: replace "host" with another physical micro target, e.g. "nrf52840" or "mps2_an521" -- see more target examples in the micro_train.py and micro_tflite.py tutorials?
done
@gromero that's a good point. I added that compiler flag.
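For readers following along, applying the reviewer's diff quoted earlier leaves the relevant Makefile line reading as below. This is a sketch assembled from that diff, not a quote of the merged file:

```
MODEL_CFLAGS ?= -Wno-error=unused-variable -Wno-error=missing-braces -Wno-error=unused-const-variable -Wno-unused-variable
```

The distinction matters: -Wno-error=unused-variable only downgrades the diagnostic from an error back to a warning, while -Wno-unused-variable suppresses the warning entirely, which is what silences the codegen noise.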
@mehrdadh Thanks, LGTM.
…pache#13324) This commit adds a tutorial to compile and run a PyTorch model using microTVM, the AOT host-driven executor, and C runtime (CRT).
This PR adds a tutorial to compile/run a PyTorch model using microTVM CRT.