
Commit 601baa9 (1 parent: 1e43618)

turn off normalize_ir, turn on use_functionalization by default

ghstack-source-id: 86d8054
Pull Request resolved: #1026

2 files changed: +5, -1 lines

torchdynamo/__init__.py: 4 additions, 0 deletions

@@ -13,6 +13,10 @@
 from .utils import guard_failures
 from .utils import orig_code_map
 
+# TODO: remove this config entirely
+import functorch.compile
+functorch.compile.config.use_functionalize = True
+
 __all__ = [
     "optimize",
     "optimize_assert",

torchdynamo/config.py: 1 addition, 1 deletion

@@ -66,7 +66,7 @@ class AccessLimitingConfig(ModuleType):
 fake_tensor_propagation = True
 
 # run FX normalization passes in optimizer
-normalize_ir = True
+normalize_ir = False
 
 # If a tensor subclass type is in this set, torchdynamo will inline the
 # __torch_function__ logic of the subclass.
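Similarly, a caller can still turn the FX normalization passes back on per process. A sketch, assuming the AccessLimitingConfig wrapper in this file allows assignment to existing config attributes:

import torchdynamo.config

# Re-enable the FX normalization passes that this commit disables by default.
torchdynamo.config.normalize_ir = True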
