
Canonicalize root var when making response from new solver #108839

Merged: 2 commits into rust-lang:master from compiler-errors:canonicalize-the-root-var on Mar 9, 2023

Conversation

compiler-errors (Member)
During trait solving, if we equate two inference variables `?0` and `?1` but don't equate them with any rigid types, then `InferCtxt::probe_ty_var` will return `Err` for both of these. The canonicalizer code will then canonicalize the variables independently(!), and the response will not reflect the fact that these two variables have been made equal.

This hinders inference and I also don't think it's sound? I haven't thought too much about it past that, so let's talk about it.
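For intuition, the fix is to resolve an unresolved inference variable to its unification root before assigning it a canonical variable, so that variables that were equated with each other end up as the same canonical variable in the response. A minimal, self-contained sketch of that idea (the types here are simplified stand-ins, not the actual rustc canonicalizer):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct TyVid(usize);

struct InferCtxt {
    // `parent[i] == i` means `TyVid(i)` is a unification root.
    parent: Vec<usize>,
}

impl InferCtxt {
    fn root_var(&self, v: TyVid) -> TyVid {
        let mut i = v.0;
        while self.parent[i] != i {
            i = self.parent[i];
        }
        TyVid(i)
    }
}

struct Canonicalizer<'a> {
    infcx: &'a InferCtxt,
    // Maps *root* variables to their canonical index in the response.
    canonical: HashMap<TyVid, usize>,
}

impl<'a> Canonicalizer<'a> {
    fn canonicalize_unresolved(&mut self, v: TyVid) -> usize {
        // Without this `root_var` call, `?0` and `?1` that were equated with
        // each other would each get a fresh canonical variable, and the
        // response would no longer record that they are equal.
        let root = self.infcx.root_var(v);
        let next = self.canonical.len();
        *self.canonical.entry(root).or_insert(next)
    }
}

fn main() {
    // `?0` and `?1` have been equated, but not with any rigid type:
    // both resolve to the root `?0`.
    let infcx = InferCtxt { parent: vec![0, 0] };
    let mut canonicalizer = Canonicalizer { infcx: &infcx, canonical: HashMap::new() };
    let c0 = canonicalizer.canonicalize_unresolved(TyVid(0));
    let c1 = canonicalizer.canonicalize_unresolved(TyVid(1));
    // Both variables map to the same canonical variable in the response.
    assert_eq!(c0, c1);
}
```

In rustc terms this roughly corresponds to looking up the root variable (via `InferCtxt::root_var`) for a type variable on which `probe_ty_var` returned `Err`, before canonicalizing it.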

r? @lcnr

@@ -238,6 +238,7 @@ impl<'a, 'tcx> EvalCtxt<'a, 'tcx> {
             && has_changed
             && !self.in_projection_eq_hack
             && !self.search_graph.in_cycle()
+            && false
compiler-errors (Member, Author) commented on this diff on Mar 7, 2023:
Both of the UI tests I added in this PR fail here. That's because the inference guidance we applied from a `Certainty::AMBIGUOUS` response can make the goal pass the next time it's evaluated.

This seems like it's to be expected though, as long as we're returning canonical var values in ambiguous responses 🤔

lcnr (Contributor) replied:

It shouldn't be; the issue is again that the solver proves goals in sequence and doesn't reprove previous goals.

I guess disabling it for now is alright, but I do want us to get to a point where we always internally keep reproving goals until we hit a fixpoint.

lcnr (Contributor) commented on Mar 7, 2023:

> This hinders inference and I also don't think it's sound?

Yeah, it's unsound. If you change the mirror example to not require `?0: Sized`, we drop the equality constraint. This example causes an ICE during MIR typeck:

trait Mirror {
    type Item: ?Sized;
}
// For any `*const T`, the associated `Item` is `T` itself.
impl<T: ?Sized> Mirror for *const T {
    type Item = T;
}

// Requires `<*const T as Mirror>::Item` to be `i32`.
fn mirror<T>() -> T
where
    *const T: Mirror<Item = i32>,
{
    todo!()
}

// Requires `T` to equal `<*const u32 as Mirror>::Item`, i.e. `u32`.
fn mirror_reverse<T>(_: T)
where
    *const u32: Mirror<Item = T>,
{
}

fn main() {
    // With the bug described above, an equality constraint on the type of `x`
    // is dropped from the solver's response, and the mismatch only surfaces
    // as an ICE during MIR typeck.
    let x = mirror();
    mirror_reverse(x);
}

lcnr (Contributor) left a review comment:

Yeah, this should affect the old canonicalization routine as well, but we don't use that with inference vars, at least not in soundness-critical ways IIRC. Ah well, it would be good to fix if you want; r=me after adding this as a test regardless.

rustbot added the T-compiler (Relevant to the compiler team, which will review and decide on the PR/issue.) and WG-trait-system-refactor (The Rustc Trait System Refactor Initiative (-Znext-solver)) labels on Mar 7, 2023
rustbot (Collaborator) commented on Mar 7, 2023:

Some changes occurred to the core trait solver

cc @rust-lang/initiative-trait-system-refactor

compiler-errors (Member, Author) commented:

@bors r=lcnr

bors (Contributor) commented on Mar 7, 2023:

📌 Commit 3bfcfd0 has been approved by lcnr

It is now in the queue for this repository.

bors added the S-waiting-on-bors (Status: Waiting on bors to run and complete tests. Bors will change the label on completion.) label on Mar 7, 2023
GuillaumeGomez added a commit to GuillaumeGomez/rust that referenced this pull request Mar 8, 2023
…oot-var, r=lcnr

Canonicalize root var when making response from new solver

bors added a commit to rust-lang-ci/rust that referenced this pull request Mar 9, 2023
…iaskrgr

Rollup of 8 pull requests

Successful merges:

 - rust-lang#108754 (Retry `pred_known_to_hold_modulo_regions` with fulfillment if ambiguous)
 - rust-lang#108759 (1.41.1 supported 32-bit Apple targets)
 - rust-lang#108839 (Canonicalize root var when making response from new solver)
 - rust-lang#108856 (Remove DropAndReplace terminator)
 - rust-lang#108882 (Tweak E0740)
 - rust-lang#108898 (Set `LIBC_CHECK_CFG=1` when building Rust code in bootstrap)
 - rust-lang#108911 (Improve rustdoc-gui/tester.js code a bit)
 - rust-lang#108916 (Remove an unused return value in `rustc_hir_typeck`)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
bors merged commit 2428083 into rust-lang:master on Mar 9, 2023
rustbot added this to the 1.70.0 milestone on Mar 9, 2023
flip1995 pushed a commit to flip1995/rust that referenced this pull request Mar 10, 2023
Dylan-DPC added a commit to Dylan-DPC/rust that referenced this pull request Mar 21, 2023
…lcnr

new solver: make all goal evaluation able to be automatically rerun

It is generally wrong to call `evaluate_goal` multiple times, or `evaluate_goal` and `evaluate_all`, for the same `QueryResult` without correctly handling rerunning the goals when inference makes progress. Not doing so will result in the assertion in `evaluate_goal` firing, because rerunning the goal will lead to a more accurate `QueryResult`.

Currently there are lots of places that get this wrong, and it is generally complex and error-prone to handle correctly everywhere. This PR introduces a way to add goals to the `EvalCtxt` and then run all the added goals in a loop, so that calling `evaluate_goal`/`evaluate_all` manually is not necessary.
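The add-goals-then-rerun loop could look roughly like the following self-contained sketch; `Goal`, `Certainty`, `NoSolution`, and the placeholder `evaluate_goal` are simplified stand-ins rather than the actual rustc definitions:

```rust
struct Goal;

enum Certainty {
    Yes,
    Maybe,
}

struct NoSolution;

struct EvalCtxt {
    nested_goals: Vec<Goal>,
}

impl EvalCtxt {
    fn add_goal(&mut self, goal: Goal) {
        self.nested_goals.push(goal);
    }

    // Placeholder: the real solver canonicalizes and proves the goal here,
    // returning whether inference made progress and how certain the result is.
    fn evaluate_goal(&mut self, _goal: &Goal) -> Result<(bool, Certainty), NoSolution> {
        Ok((false, Certainty::Yes))
    }

    fn try_evaluate_added_goals(&mut self) -> Result<Certainty, NoSolution> {
        loop {
            let goals = std::mem::take(&mut self.nested_goals);
            if goals.is_empty() {
                return Ok(Certainty::Yes);
            }
            let mut has_changed = false;
            let mut all_certain = true;
            for goal in goals {
                let (changed, certainty) = self.evaluate_goal(&goal)?;
                has_changed |= changed;
                if let Certainty::Maybe = certainty {
                    // Still ambiguous: keep the goal around for another pass.
                    self.nested_goals.push(goal);
                    all_certain = false;
                }
            }
            // Fixpoint: no goal constrained inference any further, so
            // re-running the remaining goals would not change anything.
            if !has_changed {
                return Ok(if all_certain { Certainty::Yes } else { Certainty::Maybe });
            }
        }
    }
}

fn main() {
    let mut ecx = EvalCtxt { nested_goals: Vec::new() };
    ecx.add_goal(Goal);
    assert!(ecx.try_evaluate_added_goals().is_ok());
}
```

As noted below, the real implementation additionally has to special-case the `normalizes-to` hack.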

There are a few complications for making everything work "right":
1. The `normalizes-to` hack that replaces the RHS with an unconstrained infer var requires special-casing in the new `try_evaluate_added_goals` function, similar to how `evaluate_goal`'s assertion special-cases that hack.

2. `assemble_candidates_after_normalizing_self_ty`'s normalization step needs to be rerun for each candidate; otherwise the found candidates will potentially get a more accurate `QueryResult` when rerunning the projection/trait goal, which can affect the `QueryResult` of the projection/trait goal.
This is implemented by having the `EvalCtxt` passed to `EvalCtxt::probe`'s closure inherit the added goals of the `EvalCtxt` that `probe` is called on, allowing us to add goals in a probe and then enter a nested probe for each candidate and evaluate the added goals, which include the normalization step's goals.

I made `make_canonical_response` evaluate added goals so that it will be hard to mess up the solver implementation by forgetting to evaluate added goals. Right now the only way to mess this up would be to call `response_no_constraints` (which, from the name, is obviously weird).

The visibility of `evaluate_goal` means that it can be called from various `compute_x_goal` or candidate assembly functions. This is generally wrong: we should never call `evaluate_goal` manually and should instead be calling `add_goal`/`add_goals`. This is solved by moving `evaluate_goal`, `evaluate_canonical_goal`, and `compute_goal` into `eval_ctxt`'s module and making them private so they cannot be called from elsewhere, forcing people to call `add_goal`/`add_goals` and `evaluate_added_goals_and_make_canonical_response`/`try_evaluate_added_goals`.

---

Other changes:
- removed the `&& false` that was introduced to the assertion in `evaluate_goal` in rust-lang#108839
- removed a `!self.did_overflow()` requirement in `search_graph.is_empty()` which caused goals that overflow to ICE
- made `EvalCtxt::eq` take `&mut self` and add all the nested goals via `add_goals` instead of returning them, as 99% of call sites just immediately called `EvalCtxt::add_goals` manually.

r? `@lcnr`
matthiaskrgr added a commit to matthiaskrgr/rust that referenced this pull request Mar 21, 2023
matthiaskrgr added a commit to matthiaskrgr/rust that referenced this pull request Mar 21, 2023
Noratrieb added a commit to Noratrieb/rust that referenced this pull request Mar 21, 2023
Noratrieb added a commit to Noratrieb/rust that referenced this pull request Mar 21, 2023
compiler-errors deleted the canonicalize-the-root-var branch on August 11, 2023