8364991: Incorrect not-exhaustive error #27247
Conversation
👋 Welcome back jlahoda! A progress list of the required criteria for merging this PR is included below.
❗ This change is not yet ready to be integrated.
Webrevs
    if (nestedRPOne.equals(currentReplaced)) {
        foundMatchingReplaced = true;
        break;
Can we just use continue ACCEPT;, and remove the foundMatchingReplaced variable?
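As an aside, the suggested refactoring can be sketched in a self-contained way; the names below (LabeledContinueDemo, countMatches) are invented for illustration and are not javac's actual code. A labeled continue jumps straight to the next iteration of the outer loop, making a boolean found-flag like foundMatchingReplaced unnecessary:

```java
// Hypothetical sketch: a labeled continue replaces a boolean found-flag.
class LabeledContinueDemo {
    // Counts how many groups contain the target value.
    static int countMatches(int[][] groups, int target) {
        int matches = 0;
        ACCEPT:
        for (int[] group : groups) {
            for (int value : group) {
                if (value == target) {
                    matches++;
                    continue ACCEPT;   // next group; no flag, no break needed
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        System.out.println(countMatches(new int[][] {{1, 2}, {3, 4}, {2}}, 2)); // prints 2
    }
}
```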
    public class Test {
        private int test(Root r) {
            return switch (r) {
                case Root(R1 _, _, _) -> 0;
        !(rpOther.nested[i] instanceof BindingPattern bpOther) ||
        !types.isSubtype(types.erasure(bpOne.type), types.erasure(bpOther.type))) {
        if (useHashes) {
            continue NEXT_PATTERN;
I guess the code would be more readable if these labels and jumps could be (re)factored out. Another improvement in this sense could be using explicit types instead of var.
I've moved this check into a separate method (9138ef6), and added some comments. Looks much better, thanks!
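The shape of this kind of refactoring can be sketched as follows; all names here (ExtractHelperDemo, nestedComponentMatches, firstMismatch) are invented stand-ins, not the actual method added in 9138ef6. The idea is that the condition behind a labeled jump moves into a small, well-named boolean helper, so the loop body reads as intent rather than mechanics:

```java
// Hypothetical sketch: extracting a loop-body condition into a helper method.
class ExtractHelperDemo {
    // Stand-in for the extracted per-component check.
    static boolean nestedComponentMatches(int a, int b) {
        return a <= b;
    }

    // Returns the index of the first mismatching component, or -1 if none.
    // The caller can then decide whether to skip to the next candidate.
    static int firstMismatch(int[] one, int[] other) {
        for (int i = 0; i < one.length; i++) {
            if (!nestedComponentMatches(one[i], other[i])) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(firstMismatch(new int[] {1, 2, 9}, new int[] {1, 3, 4})); // prints 2
    }
}
```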
            continue NEXT_PATTERN;
        }
        if (rpOne.nested[i] instanceof BindingPattern bpOne) {
            if (!types.isSubtype(types.erasure(bpOne.type), types.erasure(bpOther.type))) {
It seems from the description that the subtyping test is on the critical path when no hashes are being used. Would it make sense to use a cache, and thereby reduce the number of times we need to do the full-fledged subtyping?
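The suggested caching can be sketched roughly as below; SubtypeCacheDemo, Key, and the startsWith stand-in are invented for illustration and are not javac's actual types. Each distinct pair of inputs pays for the expensive check at most once:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of memoizing an expensive pairwise check,
// in the spirit of the isSubtypeCache added in 11ee4df.
class SubtypeCacheDemo {
    record Key(String left, String right) {}

    private final Map<Key, Boolean> cache = new HashMap<>();
    int slowCalls = 0;                       // counts full computations

    // Toy stand-in for a full subtyping computation.
    boolean isSubtypeSlow(String a, String b) {
        slowCalls++;
        return b.startsWith(a);
    }

    boolean isSubtypeCached(String a, String b) {
        return cache.computeIfAbsent(new Key(a, b),
                k -> isSubtypeSlow(k.left(), k.right()));
    }
}
```

Repeated queries for the same pair hit the map instead of recomputing, which is exactly the saving being asked about for the slow (hash-free) reduction rounds.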
I've added caching (11ee4df), let's see how that will work. Thanks!
            }
        }
        Set<PatternDescription> patterns = patternSet;
        Map<PatternDescription, Set<PatternDescription>> replaces = new IdentityHashMap<>();
Why use IdentityHashMap here? Won't this potentially imply having duplicated info in this map? I'm sure there is a good reason for this, but it's not clear to me at first glance. It seems like we need to preserve the position of the element being replaced.
Also, if the implementation depends on this data structure, we should probably document it.
Yes, the identity map is intentional. It is entirely possible that two equivalent types are produced from two different originating pattern sets, and when backtracking, we want to use the correct set. That is achieved by using the identity search. I've added a comment:
51b7fc2
Thanks!
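The behavior in question (equal-looking patterns from different origins staying distinct) can be sketched with a small self-contained example; the Pattern class and the description strings are hypothetical, not javac's PatternDescription:

```java
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: IdentityHashMap keeps two value-equal keys apart,
// so each reduced pattern can be traced back to its own originating set.
class ReplacesDemo {
    static final class Pattern {
        final String desc;
        Pattern(String desc) { this.desc = desc; }
        // Value equality on purpose: two reductions can yield "equal" patterns.
        @Override public boolean equals(Object o) {
            return o instanceof Pattern p && p.desc.equals(desc);
        }
        @Override public int hashCode() { return desc.hashCode(); }
    }

    static int distinctOrigins() {
        Map<Pattern, Set<Pattern>> replaces = new IdentityHashMap<>();
        Pattern fromPathA = new Pattern("Root(Base, _)");
        Pattern fromPathB = new Pattern("Root(Base, _)"); // equal, distinct origin
        replaces.put(fromPathA, Set.of(new Pattern("Root(R1, _)")));
        replaces.put(fromPathB, Set.of(new Pattern("Root(R2, _)")));
        return replaces.size(); // identity keys keep both entries
    }

    public static void main(String[] args) {
        System.out.println(distinctOrigins()); // prints 2
    }
}
```

With a plain HashMap the second put would overwrite the first, conflating the two reduction paths during backtracking.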
thanks for adding the comment!
To be honest, the problem I have with IdentityHashMap is that it has a lot of misses, at least that's what I saw while debugging some examples. What I saw was that, in order to make sure that a given PatternDescription was not in the map, several keys in the table were visited. This can somewhat kill the benefit of using a map, as it can degenerate in some cases. So I wonder if it could be possible to define a key that takes into consideration the position of the JCRecordPattern we got the record pattern from, in order to make them "unique". Of course this is on the performance side and could be done in a follow-up patch.
        alive = alive.or(resolveYields(tree, prevPendingExits));
    }

    private final Map<Pair<Type, Type>, Boolean> isSubtypeCache = new HashMap<>();
I think you want to compare types using Types::isSameType instead of the Type::equals method. There is one cache that does this: Infer::incorporationCache. Although it could be that identity comparison is enough for this application, dunno.
EDIT: But I can see that this current definition should be faster than doing Types::isSameType, so scratch the above. I think it is better the way you defined it. In type inference we do the isSameType to derive more constraints, which is not applicable/necessary here. Also, if the Pair stores the erased types instead of the unerased types, then we have the equivalent of comparing two types using Types::isSameType, as erased types tend to be unique.
     * - it was produced by a reduction from a record pattern that is equivalent to
     *   the existing pattern
     */
    private boolean nestedComponentsEquivalent(RecordPattern existing,
yep looks better now, thanks!
Consider this code:
javac (JDK 25) will produce a compile-time error for this code:
This error is not correct according to the JLS. JLS defines a set of possible reductions of pattern sets, and if there exists a series of reductions from the pattern set into a pattern set that covers the selector type, the switch is exhaustive.
One such reduction is that if there is a subset of (record) patterns that differ only in one component ("the mismatching component"), we can replace them with a (set of) pattern(s) in which that component is reduced and the other components are unmodified.
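A minimal illustrative example of this reduction (with invented type names, not the reproducer from this issue, and requiring JDK 21+ for record patterns): the two record patterns below differ only in their single component, so the analysis may reduce them to the single pattern Root(Base b), which covers the selector type and makes the switch exhaustive without a default:

```java
// Hypothetical minimal example of the single-component reduction rule.
sealed interface Base permits R1, R2 {}
record R1() implements Base {}
record R2() implements Base {}
record Root(Base c) {}

class ReductionDemo {
    static int test(Root r) {
        return switch (r) {
            case Root(R1 x) -> 1;   // these two patterns differ only in
            case Root(R2 x) -> 2;   // their component: reduce to Root(Base b)
        };                          // exhaustive: no default branch needed
    }

    public static void main(String[] args) {
        System.out.println(test(new Root(new R2()))); // prints 2
    }
}
```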
Such path exists here (every line shows a set of patterns that is being transformed):
The problem here is that in the first step, javac chooses this path:
If javac did full backtracking, it could go back, choose the other path, and find out that the switch is exhaustive. But full, naive backtracking is, I think, prohibitively slow for even relatively small switches. The implementation approach javac has used so far is to not remove some of the reduced patterns from the set, so it can use a pattern again and hence effectively pick a different reduction path. But that fails here, and if we kept the relevant patterns in the set, the overall performance would be too bad.
So, this PR proposes that, when reducing a subset of patterns to another set of patterns, javac keeps a record that the new pattern(s) originate in specific original pattern(s), and if it needs to, it will consult this record when searching for possible reductions. javac normally does "fast reduction rounds" using hashes, but if it fails to find reductions using the fast approach, it switches to a (much) slower approach that uses plain subtyping instead of hashes. The new approach to searching for reductions proposed here is part of this slow round only.
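The fast/slow round structure described above can be sketched roughly as follows; the string comparisons are toy stand-ins for javac's hash-based and subtyping-based checks, and TwoPhaseDemo is an invented name:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of a two-phase search: a cheap hash-based round first,
// and only if it finds nothing, a slower full-comparison round.
class TwoPhaseDemo {
    static Optional<String> findMatch(List<String> candidates, String target) {
        // Fast round: compare hashes before doing anything expensive.
        int targetHash = target.hashCode();
        for (String c : candidates) {
            if (c.hashCode() == targetHash && c.equals(target)) {
                return Optional.of(c);
            }
        }
        // Slow round: expensive structural check (toy stand-in for subtyping).
        for (String c : candidates) {
            if (c.equalsIgnoreCase(target)) {
                return Optional.of(c);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(findMatch(List.of("Root", "Base"), "root").get()); // prints Root
    }
}
```

The design point is that the slow round only runs after the fast round has been exhausted, which is where the PR's new origin-record lookup lives.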
So, basically, the new chain after this PR is roughly:
Progress
Issue
Reviewing
Using git
Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/27247/head:pull/27247
$ git checkout pull/27247
Update a local copy of the PR:
$ git checkout pull/27247
$ git pull https://git.openjdk.org/jdk.git pull/27247/head
Using Skara CLI tools
Checkout this PR locally:
$ git pr checkout 27247
View PR using the GUI difftool:
$ git pr show -t 27247
Using diff file
Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/27247.diff
Using Webrev
Link to Webrev Comment