Increase DEFAULT_CONSTANT_FOLD_INPUT_SIZE_LIMIT #2527
Conversation
I have seen graphs like `Add(bias, 1)` in gemma3, where `bias` is an initializer. (Why?) This PR increases the default input size limit so that these bias initializers can be folded.
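A minimal sketch of what the size limit controls, using plain NumPy (the function and limit value here are illustrative, not the actual onnxscript implementation): when every input to a node is a known constant and each is under the input size limit, the optimizer can evaluate the node at optimization time and replace it with a precomputed initializer.

```python
import numpy as np

# Illustrative value only; the real DEFAULT_CONSTANT_FOLD_INPUT_SIZE_LIMIT
# lives in the optimizer and is what this PR changes.
INPUT_SIZE_LIMIT = 1024

def try_fold_add(bias: np.ndarray, scalar: float,
                 size_limit: int = INPUT_SIZE_LIMIT):
    """Fold Add(bias, scalar) into a constant, or return None if the
    input initializer exceeds the size limit (node stays in the graph)."""
    if bias.size > size_limit:
        return None  # too large: keep the Add node as-is
    return bias + scalar  # folded result becomes a new initializer

# A small bias initializer is folded away:
small_bias = np.zeros(512, dtype=np.float32)
assert try_fold_add(small_bias, 1.0) is not None

# A bias over the limit is left unfolded:
big_bias = np.zeros(4096, dtype=np.float32)
assert try_fold_add(big_bias, 1.0) is None
```

Raising the default limit moves more of these `Add(bias, 1)` patterns into the "folded" case.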
❌ 11 Tests Failed
Maybe we should make this an option we can change if needed.
It is already an option in the `optimize()` interface. This changes the default.
Sounds fine to me as long as it doesn't have any other unexpected impact on standard models. The one place where this may be relevant is the ConstantOfShape operator (already discussed in an issue/PR elsewhere), because I think it is common to see ConstantOfShape nodes that generate tensors with on the order of 500 or 1000 values. I think we keep them as ConstantOfShape instead of folding, so there may be no impact there.
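A short sketch of why ConstantOfShape is the edge case worth checking (pure NumPy, reproducing the op's reference semantics rather than calling onnxscript): its *input* is a tiny shape tensor that easily passes any input size limit, but folding it would materialize the full output tensor, which may hold hundreds or thousands of values.

```python
import numpy as np

def constant_of_shape(shape: np.ndarray, value: float = 0.0) -> np.ndarray:
    """Reference semantics of the ONNX ConstantOfShape op: produce a
    tensor of the given shape filled with a constant value."""
    return np.full(tuple(shape.tolist()), value, dtype=np.float32)

shape_input = np.array([1000], dtype=np.int64)  # one-element input tensor
out = constant_of_shape(shape_input)

assert shape_input.size == 1   # input is well under any input size limit
assert out.size == 1000        # but the folded output would be large
```

This is why keeping ConstantOfShape nodes unfolded (i.e., gating on output size rather than only input size) means raising the input limit should have no impact there.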