
Conversation

@anijain2305 (Contributor) commented Aug 26, 2022

This work is based on @Chillee's minifier infrastructure.

The existing minifier runs after AOT Autograd and operates on the generated forward and backward passes. Therefore, it cannot minify issues in AOT Autograd tracing, decompositions, the partitioner, etc.

This PR adds a minifier for the TorchDynamo-produced FX graph (as opposed to the AOT Autograd-produced FX graphs), which makes it backend-agnostic. In the generated repro file, we use torchdynamo.optimize("compiler_name") (as opposed to make_fx in the existing infra).
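For illustration, a minimal sketch of what such a generated repro file could look like. The module body, input shapes, and the "compiler_name" string are placeholders, not the actual generated output; only the torchdynamo.optimize("compiler_name") wrapping is taken from the description above.

```python
# Hypothetical sketch of a generated repro file; the module body and
# inputs are placeholders, not real minifier output.
import torch
import torchdynamo


class Repro(torch.nn.Module):
    # Stands in for the minified subgraph extracted from the
    # original TorchDynamo FX graph.
    def forward(self, x):
        return torch.relu(x).sum()


args = [torch.randn(4, 4)]

# Re-run the minified graph through the failing backend, instead of
# re-tracing with make_fx as the existing infra does.
opt_mod = torchdynamo.optimize("compiler_name")(Repro())
opt_mod(*args)
```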

However, there are a few points to note:

  • The existing infra first dumps a file that we can then run to minify further. It uses make_fx to generate the FX graph module for the minifier, which keeps it nicely isolated (see the make_fx sketch after this list).
  • We can't use make_fx here because we want to run the torchdynamo.optimize'd compiler. Therefore, minification runs in the main process, which could be problematic if the process crashes during minification.
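For reference, a minimal sketch of the make_fx-based flow that the existing infra relies on; the function and inputs here are illustrative, and the real infra traces the failing forward/backward graphs produced by AOT Autograd.

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx


def fn(x):
    # Illustrative function standing in for a failing graph.
    return torch.sin(x) + torch.cos(x)


# make_fx takes a callable plus example args and returns an FX
# GraphModule, which can be dumped to a standalone repro file and
# minified in a fresh process, isolated from the original run.
gm = make_fx(fn)(torch.randn(4))
print(gm.graph)
```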

@voznesenskym (Contributor) commented

  • We can't use make_fx here because we want to run the torchdynamo.optimize'd compiler. Therefore, minification runs in the main process, which could be problematic if the process crashes during minification.

Can we use export? I believe it has a very similar API to make_fx, in that it takes a callable and args and produces an FX graph.
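A rough sketch of what that could look like, assuming the torchdynamo.export API of the time, which returned the captured graph module along with its guards; the function and inputs are illustrative.

```python
import torch
import torchdynamo


def fn(x):
    # Illustrative callable; in practice this would be the failing model.
    return torch.relu(x).sum()


# export takes a callable and example args and returns the captured FX
# graph module plus the guards under which that capture is valid.
graph_module, guards = torchdynamo.export(fn, torch.randn(4, 4))
print(graph_module.graph)
```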

@anijain2305 changed the title from [RFC][WIP] Backend Agnostic Minifier for TorchDynamo produced Fx Graphs to Backend Agnostic Minifier for TorchDynamo produced Fx Graphs on Aug 31, 2022
@Chillee (Contributor) left a comment

Mostly LGTM, modulo naming nits.

@Chillee (Contributor) left a comment

LGTM
