[Builtins] Optimize 'MakeKnownM' #4587
All these `INLINE` annotations seem a little risky to me. If you have any substantial `do` block in `MakeKnownM`, you're going to be at risk of exponential code bloat. Maybe we should just do `INLINABLE`?
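For context, the difference between the two pragmas under discussion can be sketched as follows (a hedged illustration; the function names are made up for the example, not taken from the PR):

```haskell
-- INLINE: GHC unfolds the definition at (essentially) every saturated
-- call site, so a bind-heavy 'do' block duplicates the body once per call.
{-# INLINE stepInline #-}
stepInline :: Int -> Maybe Int
stepInline n = if n > 0 then Just (n - 1) else Nothing

-- INLINABLE: the unfolding is recorded in the interface file so it *can*
-- be inlined or specialised across modules, but GHC still applies its
-- usual per-call-site heuristics, which limits code growth.
{-# INLINABLE stepInlinable #-}
stepInlinable :: Int -> Maybe Int
stepInlinable n = if n > 0 then Just (n - 1) else Nothing
```

Both definitions behave identically at runtime; the pragmas only change how aggressively GHC copies the body into call sites.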
Why exponential? It's a constant factor: every call to `>>=` becomes, what, six pattern matches. Not too much of a deal, given that we don't do much in `MakeKnownM` and we want all of it to be efficient, apart from stuff in tests, but it's still not that much.
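To see where the "six pattern matches" come from, here is a hedged sketch of a `MakeKnownM`-style monad; the constructor names follow the discussion, but the log and error types (`[String]`, `String`) are simplified stand-ins, not the actual types from the PR:

```haskell
import Control.Monad (ap)

-- Simplified stand-in for the PR's 'MakeKnownM'.
data MakeKnownM a
    = MakeKnownFailure [String] String
    | MakeKnownSuccess a
    | MakeKnownSuccessWithLogs [String] a
    deriving (Show, Eq)

instance Functor MakeKnownM where
    fmap _ (MakeKnownFailure logs err)       = MakeKnownFailure logs err
    fmap f (MakeKnownSuccess x)              = MakeKnownSuccess (f x)
    fmap f (MakeKnownSuccessWithLogs logs x) = MakeKnownSuccessWithLogs logs (f x)

instance Applicative MakeKnownM where
    pure  = MakeKnownSuccess
    (<*>) = ap

instance Monad MakeKnownM where
    -- Three outer matches plus up to three inner ones in the logging case:
    -- the handful of pattern matches each inlined '>>=' expands into.
    MakeKnownFailure logs err       >>= _ = MakeKnownFailure logs err
    MakeKnownSuccess x              >>= f = f x
    MakeKnownSuccessWithLogs logs x >>= f = case f x of
        MakeKnownFailure logs' err       -> MakeKnownFailure (logs ++ logs') err
        MakeKnownSuccess y               -> MakeKnownSuccessWithLogs logs y
        MakeKnownSuccessWithLogs logs' y -> MakeKnownSuccessWithLogs (logs ++ logs') y
    {-# INLINE (>>=) #-}

-- Logs accumulate across binds; the result stays in the logging constructor.
example :: MakeKnownM Int
example =
    MakeKnownSuccessWithLogs ["one"] 1 >>= \x ->
        MakeKnownSuccessWithLogs ["two"] (x + 2)
```

With `INLINE` on `>>=`, each bind in a `do` block becomes this fixed, constant-size block of matches, which is why the growth is linear in the number of binds rather than exponential.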
Yes, you're right, it's not exponential. It would only become so if GHC got happy with case-of-case, which it's probably too smart to do. I'd be interested to know if performance is actually worse with `INLINABLE`.
Hm, there's something to worry about here indeed. So let's say we didn't have `MakeKnownSuccessWithLogs`; then a chain of binds would expand into nested `case` expressions, which would be turned by case-of-case into a `case` with the continuation duplicated into each branch, which would be turned by case-of-known-constructor into something that is still perfectly linear.

But we do have `MakeKnownSuccessWithLogs`, and so it is indeed very much possible we'll get duplication in the two success cases. But then `f` and `g` are going to be known, and so the case-of-known-constructor will save us. Really, I don't think we should worry about this. Especially compared to the ridiculous code bloat with meanings of builtins inlining all the way through the pipeline at each and every step until the very construction of `EvaluationContext` (we won't have any problems with `MakeKnownM` at any step, because the denotations have always been perfectly linear due to everything being fully known).

It probably isn't any worse, but we're building quite a house of cards with all the optimizations here; I don't want to make it any more shaky unless we have to.