Stablehlo compiler #338
base: main
Conversation
Add COMPILE_STABLEHLO_OP_BY_OP CompileDepth. Allow compilation/execution starting from stablehlo.
❌ 4 Tests Failed
env/activate (Outdated)

fi
-pip install $TT_TORCH_HOME/dist/torchvision*.whl
+pip install --pre torch-mlir torchvision
what's torchvision for here?
tt_torch/tools/utils.py (Outdated)

@@ -275,7 +276,7 @@ def post_init(self):
        else:
            torch._dynamo.config.inline_inbuilt_nn_modules = True

-    def save_unique_ops(self):
+    def save_unique_ops(self, mode=None):
perhaps default to mode="torch" as opposed to None.
I could do that. Just want to confirm: by default, do we want the files to be named {self.results_path}{pytest_test}_torch_unique_ops.json or {self.results_path}{pytest_test}_unique_ops.json? Right now, the way it is:

default --> {self.results_path}{pytest_test}_unique_ops.json
torch --> {self.results_path}{pytest_test}_torch_unique_ops.json
stablehlo --> {self.results_path}{pytest_test}_stablehlo_unique_ops.json
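For illustration, a minimal sketch of that naming scheme assuming mode="torch" as the default; the attributes pytest_test and unique_ops are placeholders taken from this thread, not the actual implementation:

import json

def save_unique_ops(self, mode="torch"):
    # Sketch only: pick the dump file name from mode, per the mapping above.
    # mode="torch"     -> {results_path}{pytest_test}_torch_unique_ops.json
    # mode="stablehlo" -> {results_path}{pytest_test}_stablehlo_unique_ops.json
    suffix = f"_{mode}_unique_ops.json" if mode else "_unique_ops.json"
    path = f"{self.results_path}{self.pytest_test}{suffix}"
    with open(path, "w") as f:
        json.dump(self.unique_ops, f)  # self.unique_ops is a stand-in attribute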
executor.add_gm(gm, graph_constants)
return executor
This section is duplicated. Can you move it into a helper function?
I think my latest commit should address this. By complicated, I assumed you meant the isinstance checks. I agree. I now check and assign parsed_module when initializing the Executor object.
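A hedged sketch of the pattern described in this reply - normalize the input once in the constructor so downstream code only ever sees parsed_module. The constructor signature is illustrative, not the PR's exact code; parse_module_from_str is the helper that appears elsewhere in this PR:

class StablehloExecutor:
    def __init__(self, parsed_module=None, module_str=None, compiler_config=None):
        # Accept either an already-parsed module or its string form, and
        # parse exactly once here instead of at every call site.
        if parsed_module is None and module_str is not None:
            parsed_module = parse_module_from_str(module_str)
        assert parsed_module is not None, "Compiler input not valid"
        self.parsed_module = parsed_module
        self.compiler_config = compiler_config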
self.gm = gm
self.graph_constants = tuple(graph_constants)

def gm_op_by_op(self, *inputs):
maybe shlo_op_by_op
So this is actually the same function from torch_backend, but stripped. It goes through a torch graph op by op and is only called during COMPILE_STABLEHLO_OP_BY_OP, since it's implied the input is a torch graph. Doing so, we create two JSON dumps during one run - one for stablehlo and one for torch.
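For context, op-by-op traversal of a torch.fx graph is commonly written as a torch.fx.Interpreter subclass. This is a generic sketch of the idea only, not the PR's gm_op_by_op; the seen_ops list stands in for the unique-ops JSON dump:

import torch

class OpByOpRunner(torch.fx.Interpreter):
    def __init__(self, gm):
        super().__init__(gm)
        self.seen_ops = []  # stand-in for the unique-ops record dumped to JSON

    def run_node(self, node):
        # Execute one node at a time, recording every call_function op.
        if node.op == "call_function":
            self.seen_ops.append(str(node.target))
        return super().run_node(node)

Calling OpByOpRunner(gm).run(*inputs) then yields both the graph outputs and the per-op record in a single pass, which mirrors how one run can produce both a torch and a stablehlo dump.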
# No conversion required.
new_inputs = new_inputs + ((input),)
inputs = new_inputs
if self.compiler_config.compile_depth == CompileDepth.EXECUTE:
isn't this handled by the base backend?
This is _base_backend, which returns the executor. This is in parallel with the torch executor:
def _base_backend(gm_or_shlo, example_inputs, compiler_config):
    # Called during EXECUTE
    # input is a torch graph
    if isinstance(gm_or_shlo, torch.fx.GraphModule):
        shlo, executor, gm, graph_constants = torch_to_shlo(
            gm_or_shlo, example_inputs, compiler_config
        )
    # input is a stablehlo string module
    elif isinstance(gm_or_shlo, str):
        shlo = parse_module_from_str(gm_or_shlo)
        executor = StablehloExecutor(
            parsed_module=shlo, compiler_config=compiler_config
        )
    else:
        assert False, "Compiler input not valid"
    binary = shlo_to_flatbuffer(shlo, compiler_config)
    executor.set_binary(binary)
    return executor
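For illustration, a hedged sketch of how a three-argument backend like this could be wired into torch.compile, which expects a (gm, example_inputs) callable; the partial application is an assumption, not code from this PR, and model and compiler_config are assumed to be defined:

import functools
import torch

# Hypothetical wiring: bind compiler_config so the callable matches the
# (gm, example_inputs) signature torch.compile expects of a backend.
backend = functools.partial(_base_backend, compiler_config=compiler_config)
compiled_model = torch.compile(model, backend=backend)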
Ticket
#203
Problem description
Currently, we only support compilation or execution of torch graphs. We do not support compilation of stablehlo graphs, because whole-graph compilation would fail at the first unsupported stablehlo op. The solution is to create a module for each op, which the new stablehlo support enables.
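To make the per-op idea concrete, here is a toy, purely illustrative sketch of wrapping a single stablehlo op into its own standalone textual module so it can be compiled in isolation. The PR's actual mechanism differs; wrap_single_op is a hypothetical helper:

def wrap_single_op(op_text, operand_types, result_type):
    # Toy sketch: the op's operands become module arguments so the op can be
    # compiled and executed on its own.
    args = ", ".join(f"%arg{i}: {t}" for i, t in enumerate(operand_types))
    return (
        "module {\n"
        f"  func.func @main({args}) -> {result_type} {{\n"
        f"    {op_text}\n"
        f"    return %0 : {result_type}\n"
        "  }\n"
        "}"
    )

# A single stablehlo.add wrapped as its own module:
print(wrap_single_op(
    "%0 = stablehlo.add %arg0, %arg1 : tensor<2xf32>",
    ["tensor<2xf32>", "tensor<2xf32>"],
    "tensor<2xf32>",
))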
What's changed

- _base_backend --> _torch_backend
- _base_backend is for execution; for other compile depths, _torch_backend or _shlo_backend is used
- tt_torch/dynamo/backend.py is split into tt_torch/dynamo/backend.py, tt_torch/dynamo/shlo_backend.py, and tt_torch/dynamo/torch_backend.py
- tt_torch/dynamo/backend.py contents are mostly moved to tt_torch/dynamo/torch_backend.py and tt_torch/dynamo/executor.py
- tt_torch/dynamo/executor.py provides the base executor class used by both the stablehlo and torch backends
- env/activate edited to add support for dependencies

Testing strategy: