
[microTVM][PyTorch][Tutorial] Adding a PyTorch tutorial for microTVM with CRT #13324

Merged: 9 commits merged into apache:main on Nov 9, 2022

Conversation

@mehrdadh (Member) commented on Nov 8, 2022:

This PR adds a tutorial to compile/run a PyTorch model using microTVM CRT.
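
At a high level the tutorial follows the usual microTVM CRT flow: import the PyTorch model into Relay, build it for the `host` micro target, generate and build a CRT project, and run it through the host-driven AOT executor. A condensed sketch of the project-level part is below; the variable names and project options are assumptions for illustration, not copied from the merged tutorial (see the Target/Runtime/Executor sketch further down in this conversation for `module`).

import pathlib

import tvm
import tvm.micro
from tvm.contrib import utils
from tvm.runtime.executor.aot_executor import AotModule

# `module` is the relay.build() result for the CRT `host` target and
# `input_data` is a preprocessed input tensor; both names are placeholders.
template_project_path = pathlib.Path(tvm.micro.get_microtvm_template_projects("crt"))
project_options = {"verbose": False}  # option set assumed

work_dir = utils.tempdir().relpath("generated-project")
project = tvm.micro.generate_project(template_project_path, module, work_dir, project_options)
project.build()
project.flash()  # a no-op for the emulated host target, required for real boards

with tvm.micro.Session(project.transport()) as session:
    aot_executor = AotModule(session.create_aot_executor())
    aot_executor.get_input(0).copyfrom(input_data)
    aot_executor.run()
    output = aot_executor.get_output(0).numpy()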

@mehrdadh requested a review from @gromero on November 8, 2022 at 17:24
@tvm-bot (Collaborator) commented on Nov 8, 2022:

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@alanmacd (Contributor) left a comment:

this is great, thanks for writing it up!

@@ -35,6 +37,18 @@

IS_TEMPLATE = not os.path.exists(os.path.join(PROJECT_DIR, MODEL_LIBRARY_FORMAT_RELPATH))

MEMORY_SIZE_BYTES = 2 * 1024 * 1024
@alanmacd (Contributor):

nit: maybe add a comment as to why this is the default memory size

@mehrdadh (Member, Author):

Added. It's not an interesting reason; it's basically chosen to pass the CRT tests in TVM.

@gromero (Contributor) commented Nov 8, 2022:

@mehrdadh I think that's actually interesting / important info: the constant is determined experimentally, so it's really good to have a comment about it as @alanmacd requested. I don't even consider it a nit :-)
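
For what it's worth, a sketch of the kind of comment being asked for here; the wording below is assumed for illustration and is not the text that was actually committed.

# Memory size for the emulated host target. The value is not derived from the
# model: it was chosen experimentally as a size large enough for TVM's CRT
# tests to pass (see the review discussion on this constant).
MEMORY_SIZE_BYTES = 2 * 1024 * 1024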

@gromero (Contributor) left a comment:

@mehrdadh Thanks for the tutorial! Nice.

I just have nits (see inline) and one question:

Could we add the -Wno-unused-variable flag to MODEL_CFLAGS? Like:

diff --git a/src/runtime/crt/host/Makefile.template b/src/runtime/crt/host/Makefile.template
index a8e725ade..2caf7ba0b 100644
--- a/src/runtime/crt/host/Makefile.template
+++ b/src/runtime/crt/host/Makefile.template
@@ -22,7 +22,7 @@ CXXFLAGS ?= -Werror -Wall -std=c++11 -DTVM_HOST_USE_GRAPH_EXECUTOR_MODULE -DMEMO
 LDFLAGS ?= -Werror -Wall
 
 # Codegen produces spurious lines like: int32_t arg2_code = ((int32_t*)arg_type_ids)[(2)];
-MODEL_CFLAGS ?= -Wno-error=unused-variable -Wno-error=missing-braces -Wno-error=unused-const-variable
+MODEL_CFLAGS ?= -Wno-error=unused-variable -Wno-error=missing-braces -Wno-error=unused-const-variable -Wno-unused-variable
 
 AR ?= ${PREFIX}ar
 CC ?= ${PREFIX}gcc

Otherwise I get bazillions of unused-variable warnings when I run the script, which is not encouraging for people running / exploring the tutorial for the first time. -Wno-error=unused-variable is not enough to silence them. wdyt?

# Define Target, Runtime and Executor
# -------------------------------
#
# In this tutorial we use AOT host driven executor. To compile the model
@gromero (Contributor):

nit: host-driven?

@mehrdadh (Member, Author):

done

from tvm.contrib.download import download_testdata
from tvm.relay.backend import Executor

#################################
@gromero (Contributor):

nit: add one more # to "cover" the line below?

@mehrdadh (Member, Author):

done


#################################
# Load a pre-trained PyTorch model
# -------------------------------
@gromero (Contributor):

nit: add one more - to match end of line above?

@mehrdadh (Member, Author):

done
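
For context, loading and scripting the pre-trained model for this section could look roughly like the sketch below; the concrete torchvision model, weights argument, and input resolution used in the merged tutorial may differ (MobileNetV2 at 224x224 is assumed here).

import torch
import torchvision

# Load a pre-trained model from torchvision (model choice assumed for illustration).
model = torchvision.models.mobilenet_v2(weights="DEFAULT")
model = model.eval()

# TorchScript the model with an example input so TVM's PyTorch frontend can consume it.
input_shape = (1, 3, 224, 224)
example_input = torch.randn(input_shape)
scripted_model = torch.jit.trace(model, example_input).eval()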

shape_list = [(input_name, input_shape)]
relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

#################################
@gromero (Contributor):

nit: add more # chars to "cover" end of line below?

@mehrdadh (Member, Author):

done
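
To make the shape_list / from_pytorch lines quoted above self-contained, here is a sketch of the surrounding input setup plus a quick sanity check of what from_pytorch returns; the input name and shape are assumptions and may not match the merged tutorial.

from tvm import relay

# The name must match the input expected by the traced graph; "data" is just a
# common convention, the tutorial's actual input name may differ.
input_name = "data"
input_shape = (1, 3, 224, 224)

shape_list = [(input_name, input_shape)]
relay_mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

# from_pytorch returns a Relay IRModule plus a dict of constant parameters;
# printing the main function is a quick way to inspect the imported graph.
print(relay_mod["main"])
print(len(params), "parameter tensors")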


#################################
# Define Target, Runtime and Executor
# -------------------------------
@gromero (Contributor):

nit: add enough - chars to align this to the end of line above?

@mehrdadh (Member, Author):

done

# -------------------------------
#
# In this tutorial we use AOT host driven executor. To compile the model
# for an emulated embedded environment on an X86 machine we use C runtime (CRT)
@gromero (Contributor):

s/X86/x86/

@mehrdadh (Member, Author):

done

# In this tutorial we use AOT host driven executor. To compile the model
# for an emulated embedded environment on an X86 machine we use C runtime (CRT)
# and we use `host` micro target. Using this setup, TVM compiles the model
# for C runtime which can run on a X86 CPU machine with the same flow that
@gromero (Contributor):

same here: x86 instead of X86

@mehrdadh (Member, Author):

done
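
For reference, a minimal sketch of how this section's Target, Runtime and Executor could be defined for the emulated CRT host target; the option dictionaries and PassContext settings follow other microTVM tutorials and are assumptions here, not necessarily what the merged tutorial uses.

import tvm
from tvm.relay.backend import Executor, Runtime

# `host` emulates a microcontroller on an x86 machine using the C runtime (CRT).
target = tvm.target.target.micro("host")

# CRT runtime with a system library (option set assumed).
runtime = Runtime("crt", {"system-lib": True})

# Ahead-of-time, host-driven executor.
executor = Executor("aot")

with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
    module = tvm.relay.build(
        relay_mod, target=target, runtime=runtime, executor=executor, params=params
    )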



# Simulate a microcontroller on the host machine. Uses the main() from `src/runtime/crt/host/main.cc`
# To use physical hardware, replace "host" with something matching your hardware.
@gromero (Contributor):

How about, instead of "something", saying: "replace 'host' with another physical micro target, e.g. 'nrf52840' or 'mps2_an521' -- see more target examples in the micro_train.py and micro_tflite.py tutorials"?

@mehrdadh (Member, Author):

done
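
As a rough illustration of the suggestion above, swapping the emulated host target for a physical micro target might look like the lines below; the target names are examples taken from gromero's comment, and the tutorial itself keeps using "host".

import tvm

# Emulated microcontroller on the host machine; main() comes from
# src/runtime/crt/host/main.cc.
target = tvm.target.target.micro("host")

# A physical micro target instead, e.g. a Nordic nRF52840 or an Arm MPS2-AN521;
# see the micro_train.py and micro_tflite.py tutorials referenced above for
# maintained examples with physical hardware.
# target = tvm.target.target.micro("nrf52840")
# target = tvm.target.target.micro("mps2_an521")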

@mehrdadh (Member, Author) commented on Nov 8, 2022:

@gromero that's a good point. I added that compiler flag.

@gromero (Contributor) left a comment:

@mehrdadh Thanks, LGTM.

@gromero merged commit fbe174b into apache:main on Nov 9, 2022
@mehrdadh deleted the microtvm/pytorch_tutorial branch on November 9, 2022 at 16:44
xinetzone pushed a commit to daobook/tvm that referenced this pull request on Nov 10, 2022
…pache#13324)

This commit adds a tutorial to compile and run a PyTorch model using microTVM, the AOT host-driven executor, and C runtime (CRT).

xinetzone pushed a commit to daobook/tvm that referenced this pull request on Nov 25, 2022
…pache#13324)

This commit adds a tutorial to compile and run a PyTorch model using microTVM, the AOT host-driven executor, and C runtime (CRT).