This repository was archived by the owner on Aug 1, 2025. It is now read-only.

Commit 366629c

turn off normalize_ir, turn on use_functionalization by default

ghstack-source-id: 1561bb9
Pull Request resolved: #1026

1 parent: 7d77d92

2 files changed: +6 −1 lines changed


torchdynamo/__init__.py (5 additions, 0 deletions)

@@ -13,6 +13,11 @@
 from .utils import guard_failures
 from .utils import orig_code_map
 
+# TODO: remove this config entirely
+import functorch.compile
+
+functorch.compile.config.use_functionalize = True
+
 __all__ = [
     "optimize",
     "optimize_assert",

torchdynamo/config.py (1 addition, 1 deletion)

@@ -66,7 +66,7 @@ class AccessLimitingConfig(ModuleType):
 fake_tensor_propagation = True
 
 # run FX normalization passes in optimizer
-normalize_ir = True
+normalize_ir = False
 
 # If a tensor subclass type is in this set, torchdynamo will inline the
 # __torch_function__ logic of the subclass.
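Both settings remain plain module-level config attributes, so callers who relied on the old defaults can flip them back after import. A hedged sketch, assuming only the attribute names visible in the diffs above:

    import functorch.compile
    import torchdynamo.config

    # Restore the pre-commit defaults (assumption: these are writable
    # module-level config values, as the diffs suggest).
    torchdynamo.config.normalize_ir = True
    functorch.compile.config.use_functionalize = False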
