Equivalence propagation #24155

Merged

Conversation

@frankmcsherry frankmcsherry commented Dec 29, 2023

A transformation that propagates "equivalence" information of the form `expr1 = expr2`, under Rust's `==`. This allows us to communicate arbitrary predicates (`expr = true`), but also to compactly represent normalizing equivalences that equate classes of expressions (and allow substitution by the simplest representatives).

At the moment, it optimizes away queries like

-- Demonstrates absence of cross-join input sharing.
SELECT COUNT(*) FROM 
    (SELECT * FROM foo WHERE foo.a = 7) foo,
    (SELECT * FROM bar WHERE bar.a < 5) bar
WHERE foo.a = bar.a;

but this should be explored more fully to see what it does to expressions, and whether it mis-optimizes any of them. This PR is meant to surface any test failures in that regard.
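As a toy illustration of the idea (simplified types and hypothetical names; the real transform operates on `MirScalarExpr`, not strings), the equivalences arising from the query above can be tracked and reduced to representatives roughly like this:

```rust
/// Classes of expressions asserted equal; after sorting, the first (minimal)
/// element of each class serves as the representative to substitute toward.
#[derive(Debug, Default)]
struct EquivalenceClasses {
    classes: Vec<Vec<String>>,
}

impl EquivalenceClasses {
    /// Record that `a` and `b` are equal, merging any classes they touch.
    fn equate(&mut self, a: &str, b: &str) {
        let pos = |classes: &Vec<Vec<String>>, e: &str| {
            classes.iter().position(|c| c.iter().any(|x| x == e))
        };
        match (pos(&self.classes, a), pos(&self.classes, b)) {
            (Some(i), Some(j)) if i != j => {
                let merged = self.classes.remove(std::cmp::max(i, j));
                self.classes[std::cmp::min(i, j)].extend(merged);
            }
            (Some(i), None) => self.classes[i].push(b.to_string()),
            (None, Some(j)) => self.classes[j].push(a.to_string()),
            (None, None) => self.classes.push(vec![a.to_string(), b.to_string()]),
            _ => {}
        }
        for class in &mut self.classes {
            class.sort();
            class.dedup();
        }
    }

    /// Map an expression to its class representative (the minimal member).
    fn representative<'a>(&'a self, e: &'a str) -> &'a str {
        self.classes
            .iter()
            .find(|c| c.iter().any(|x| x == e))
            .map(|c| c[0].as_str())
            .unwrap_or(e)
    }
}

fn main() {
    let mut eq = EquivalenceClasses::default();
    // From `WHERE foo.a = 7` and `WHERE foo.a = bar.a`:
    eq.equate("foo.a", "7");
    eq.equate("foo.a", "bar.a");
    // Substituting representatives reveals `bar.a = 7`, which contradicts
    // `bar.a < 5`, so the query above can be optimized to produce no rows.
    println!("bar.a ~ {}", eq.representative("bar.a"));
}
```

This is only a sketch of the bookkeeping; the actual transform additionally minimizes classes and pushes the information through relational operators.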

The longer-term goal is to consolidate the places where we reason about "equivalence" in ad-hoc ways. Several transforms overlap with this reasoning, and several of them could be simplified, improved, or potentially removed outright. They include:

  1. Predicate pushdown (encourage Get nodes to house predicates, and provide guarantees for users)
  2. Literal lifting (identify columns as literals, and remove dependence on the columns themselves)
  3. Column knowledge (identify columns as literals or non-null; more generally, other predicates of columns)
  4. Non-null requirements (push down the information that rows with null columns can be dropped)
  5. Nonnullability (harvest more accurate information about non-nullability than typ() provides)
  6. Predicate canonicalization (Filter-local predicate simplification that now extends further)
  7. Demand analysis (identify equivalent columns to reduce demand on one of them)

The main gotcha at the moment is that several of the above perform light physical-plan modification as they go. For example, `LiteralLifting` projects away any lifted columns, in addition to surfacing the information that the column is a literal. To do this it goes through a fair amount of permutation pain, but it ends up with an expression that, for example, removes unit join terms; without the projection they would remain as a column containing unread data that we don't know how to prune (that code was never written).

Motivation

Tips for reviewer

Checklist

  • This PR has adequate test coverage / QA involvement has been duly considered.
  • This PR has an associated up-to-date design doc, is a design doc (template), or is sufficiently small to not require a design.
  • If this PR evolves an existing $T ⇔ Proto$T mapping (possibly in a backwards-incompatible way), then it is tagged with a T-proto label.
  • If this PR will require changes to cloud orchestration or tests, there is a companion cloud PR to account for those changes that is tagged with the release-blocker label (example).
  • This PR includes the following user-facing behavior changes:

@frankmcsherry frankmcsherry requested a review from a team December 29, 2023 22:16
@frankmcsherry frankmcsherry force-pushed the equivalence_propagation branch 6 times, most recently from 903d306 to 0f00775 Compare January 2, 2024 21:34
@frankmcsherry frankmcsherry force-pushed the equivalence_propagation branch 3 times, most recently from da98e1e to b7b45ca Compare February 14, 2024 20:04
frankmcsherry added a commit that referenced this pull request Feb 21, 2024
This PR introduces an `Analysis` trait that is meant to be a
simplification of the `Attribute` trait/framework.

It is simpler in many ways, and less expressive, but seemingly the
restrictions are not yet limiting and are potentially helpful in
broadening the uptake of the framework.

The main idea is that an `Analysis` implementor specifies an output type
it will produce for each expression, and the logic that produces this
output given an expression, analysis results for child expressions, and
analysis results for other depended-upon analyses. All analysis state is
indexed by "offset": the post-order visitation numbering of the
expressions.

The framework runs the analyses in dependency order, establishes links
between `Let`-bound identifiers and offsets, but otherwise has very few
opinions. By contrast, the Attribute framework allowed its hosted
attributes to propose various bits of logic that would run in addition
as it traverses an expression, used to set up `Let`-binding information
but also potentially much more.
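A minimal sketch of that idea, with toy types and illustrative names rather than the actual `mz_transform` API: an analysis names a result type and derives one result per expression, and all results are stored at the expression's post-order offset:

```rust
/// A toy expression tree standing in for `MirRelationExpr`.
enum Expr {
    Leaf(i64),
    Add(Box<Expr>, Box<Expr>),
}

/// An analysis names a result type and derives one result per expression,
/// given the results already derived for its children.
trait Analysis {
    type Value: Clone;
    fn derive(expr: &Expr, child_values: &[Self::Value]) -> Self::Value;
}

/// Example analysis: the constant value each subexpression evaluates to.
struct ConstEval;
impl Analysis for ConstEval {
    type Value = i64;
    fn derive(expr: &Expr, child_values: &[i64]) -> i64 {
        match expr {
            Expr::Leaf(v) => *v,
            Expr::Add(..) => child_values[0] + child_values[1],
        }
    }
}

/// Run an analysis bottom-up; the result for each node lands at its
/// post-order offset in `out`, and the node's offset is returned.
fn run<A: Analysis>(expr: &Expr, out: &mut Vec<A::Value>) -> usize {
    let children = match expr {
        Expr::Leaf(_) => vec![],
        Expr::Add(l, r) => vec![run::<A>(l, out), run::<A>(r, out)],
    };
    let child_values: Vec<_> = children.iter().map(|&o| out[o].clone()).collect();
    out.push(A::derive(expr, &child_values));
    out.len() - 1
}

fn main() {
    let e = Expr::Add(Box::new(Expr::Leaf(2)), Box::new(Expr::Leaf(3)));
    let mut results = Vec::new();
    let root = run::<ConstEval>(&e, &mut results);
    println!("post-order results: {:?}, root = {}", results, results[root]);
}
```

The real framework additionally resolves dependencies between analyses and links `Let`-bound identifiers to offsets, which this sketch omits.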

There is a demonstration of how to use the framework in
#24155, but we are
looking at this PR in isolation to break up the moving parts of that PR
into manageable pieces.
1. The main annoyance in using the output is that one must keep notes,
as one descends into an expression, about the current expression
offsets, as very little help is currently offered here (though a
`Slice` variant of the results that scopes itself down to an expression
and its child expressions would probably be helpful).
2. There is a fair amount of cloning of results, even though many
expressions only wish to say "whatever my only child says". One could
imagine an analysis result type of `Result<AnalysisType, usize>`, where
the `Err(offset)` variant indicates an offset to consult for the state;
expressions that don't modify the analysis result could hold
`Err(offset)` for some appropriate `offset` (their child, or wherever
their child references).
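Item 2's sharing idea could be sketched as follows (hypothetical types, assuming results are stored at post-order offsets as described above):

```rust
/// Either an owned analysis value, or `Err(offset)` pointing at another
/// post-order slot whose value should be reused instead of cloned.
type Shared<T> = Result<T, usize>;

/// Resolve a possibly-indirect result, chasing `Err(offset)` links.
fn resolve<T: Clone>(results: &[Shared<T>], mut offset: usize) -> T {
    loop {
        match &results[offset] {
            Ok(value) => return value.clone(),
            Err(next) => offset = *next,
        }
    }
}

fn main() {
    // Offsets 1 and 2 forward to offset 0 instead of cloning its value,
    // as an expression that "says whatever its only child says" might.
    let results: Vec<Shared<String>> =
        vec![Ok("columns {0, 1}".to_string()), Err(0), Err(1)];
    println!("{}", resolve(&results, 2));
}
```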

The main reviewing question is whether this simplification is net
valuable, in that it makes things easier to use and adopt without
ruling out too much expressivity or, potentially, performance.
Additionally, I have not yet begun to integrate it further than the
previous PR, but I'm happy to follow up with commits that do this if we
identify where we could most fruitfully slice out the existing uses. I
put together prototype versions for all existing attributes other than
`Cardinality` and `ColumnNames`, as they each had a fair bit of inherent
complexity.

### Motivation

<!--
Which of the following best describes the motivation behind this PR?

  * This PR fixes a recognized bug.

    [Ensure issue is linked somewhere.]

  * This PR adds a known-desirable feature.

    [Ensure issue is linked somewhere.]

  * This PR fixes a previously unreported bug.

    [Describe the bug in detail, as if you were filing a bug report.]

  * This PR adds a feature that has not yet been specified.

[Write a brief specification for the feature, including justification
for its inclusion in Materialize, as if you were writing the original
     feature specification.]

   * This PR refactors existing code.

[Describe what was wrong with the existing code, if it is not obvious.]
-->

### Tips for reviewer

<!--
Leave some tips for your reviewer, like:

    * The diff is much smaller if viewed with whitespace hidden.
    * [Some function/module/file] deserves extra attention.
* [Some function/module/file] is pure code movement and only needs a
skim.

Delete this section if no tips.
-->

### Checklist

- [ ] This PR has adequate test coverage / QA involvement has been duly
considered.
- [ ] This PR has an associated up-to-date [design
doc](https://github.com/MaterializeInc/materialize/blob/main/doc/developer/design/README.md),
is a design doc
([template](https://github.com/MaterializeInc/materialize/blob/main/doc/developer/design/00000000_template.md)),
or is sufficiently small to not require a design.
  <!-- Reference the design in the description. -->
- [ ] If this PR evolves [an existing `$T ⇔ Proto$T`
mapping](https://github.com/MaterializeInc/materialize/blob/main/doc/developer/command-and-response-binary-encoding.md)
(possibly in a backwards-incompatible way), then it is tagged with a
`T-proto` label.
- [ ] If this PR will require changes to cloud orchestration or tests,
there is a companion cloud PR to account for those changes that is
tagged with the release-blocker label
([example](MaterializeInc/cloud#5021)).
<!-- Ask in #team-cloud on Slack if you need help preparing the cloud
PR. -->
- [ ] This PR includes the following [user-facing behavior
changes](https://github.com/MaterializeInc/materialize/blob/main/doc/developer/guide-changes.md#what-changes-require-a-release-note):
- <!-- Add release notes here or explicitly state that there are no
user-facing behavior changes. -->
@frankmcsherry frankmcsherry force-pushed the equivalence_propagation branch 3 times, most recently from fd8fc76 to e7e33ec Compare February 24, 2024 15:24
@frankmcsherry frankmcsherry changed the title WIP/DNM: Equivalence propagation Equivalence propagation Feb 26, 2024
@frankmcsherry frankmcsherry force-pushed the equivalence_propagation branch from f63ad17 to 779a6d9 Compare February 26, 2024 14:30
ggevay previously requested changes Feb 26, 2024
@ggevay ggevay left a comment

I'll continue reading the code tomorrow, but I'm sending my comments so far.

classes: Vec::new(),
};
for class1 in self.classes.iter() {
for class2 in other.classes.iter() {
Contributor
Could you please add some limit here that just gives up if self.classes.len() * other.classes.len() is too large? (I've already seen a join in a customer's MIR plan with 196 equivalences...)

Contributor Author
I'm conflicted on this, given the status of the codebase and the struggles with predictability of transforms. If we can figure out a principled way to do this that doesn't fight with idempotence, that doesn't make the outer fixpoint iteration brittle, etc., then yes! But if it's "don't do something quadratic in the input" then .. we have larger problems. :D

Contributor Author
Ah, looking at the code there is a fine linear time implementation (move to HashSets for intersection). Does that work for you?

Contributor
Maybe we'll never run into the limit. How about you add a soft_assert_or_log? And then CI would fail if we run into the limit, and in production we'd get a Sentry error. And then if it never happens then everyone is happy, and if it happens then we can figure out the next steps (increasing the limit, optimizing the code, ignoring the problem if it doesn't look dangerous that the optimization was not applied in the specific situation, ...).

Contributor
Ah, I've only now seen your comment about using HashSets. I'm not sure how to do it with HashSets, but a possible approach would be to have a HashMap that maps from expressions to indexes in classes (relying on the fact that minimize makes it so that each expression can appear in only one class). And a similar thing should also work for my other similar comment.
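A sketch of that suggestion, under the assumption (noted above) that `minimize` leaves each expression in at most one class; strings stand in for expressions, and names are illustrative:

```rust
use std::collections::HashMap;

type Expr = String;

/// Expressions equated in *both* inputs: key each expression by the pair
/// (class index in `left`, class index in `right`); expressions sharing a
/// key are equal under both sets of classes. Building the indexes and
/// grouping is linear-ish in total expressions, avoiding the quadratic
/// pairwise class intersection.
fn meet(left: &[Vec<Expr>], right: &[Vec<Expr>]) -> Vec<Vec<Expr>> {
    let index = |classes: &[Vec<Expr>]| -> HashMap<Expr, usize> {
        classes
            .iter()
            .enumerate()
            .flat_map(|(i, c)| c.iter().map(move |e| (e.clone(), i)))
            .collect()
    };
    let (li, ri) = (index(left), index(right));
    let mut groups: HashMap<(usize, usize), Vec<Expr>> = HashMap::new();
    for expr in li.keys() {
        if let (Some(&l), Some(&r)) = (li.get(expr), ri.get(expr)) {
            groups.entry((l, r)).or_default().push(expr.clone());
        }
    }
    // Singleton groups assert nothing; sort for deterministic output.
    let mut result: Vec<Vec<Expr>> = groups
        .into_values()
        .filter(|class| class.len() > 1)
        .collect();
    for class in &mut result {
        class.sort();
    }
    result.sort();
    result
}

fn main() {
    let left: Vec<Vec<String>> = vec![vec!["a".into(), "b".into(), "c".into()]];
    let right: Vec<Vec<String>> =
        vec![vec!["a".into(), "b".into()], vec!["c".into(), "d".into()]];
    // Only `a = b` holds in both inputs.
    println!("{:?}", meet(&left, &right));
}
```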

// They stop being sorted as soon as we make any modification, though.
// But, it would be a fast rejection when faced with lots of data.
for index1 in 0..self.classes.len() {
for index2 in 0..index1 {
Contributor
Same comment as in union: could you please add a safety limit?

ggevay commented Feb 26, 2024

Even with the above `complete = false;` issue fixed, `EquivalenceClasses::minimize` is somehow still not idempotent: I ran all slts with the following assert added, and it's failing for test/sqllogictest/window_funcs.slt:
ggevay@b95a39e
(The commit runs a copy of minimize again at the end of minimize. The copy is to make it not run itself infinitely.)

E.g.:

thread 'tokio-runtime-worker' panicked at src/transform/src/analysis/equivalences.rs:352:9:
assertion `left == right` failed
  left: EquivalenceClasses { classes: [[Literal(Ok(Row{[True]}), ColumnType { scalar_type: Bool, nullable: false }), CallUnary { func: Not(Not), expr: CallUnary { func: IsNull(IsNull), expr: CallUnary { func: RecordGet(RecordGet(0)), expr: CallUnary { func: RecordGet(RecordGet(1)), expr: Column(2) } } } }], [CallUnary { func: RecordGet(RecordGet(0)), expr: Column(2) }, CallUnary { func: CastInt32ToInt64(CastInt32ToInt64), expr: CallUnary { func: RecordGet(RecordGet(0)), expr: CallUnary { func: RecordGet(RecordGet(1)), expr: Column(2) } } }]] }
 right: EquivalenceClasses { classes: [[Literal(Ok(Row{[True]}), ColumnType { scalar_type: Bool, nullable: false }), CallUnary { func: Not(Not), expr: CallUnary { func: IsNull(IsNull), expr: CallUnary { func: RecordGet(RecordGet(0)), expr: Column(2) } } }, CallUnary { func: Not(Not), expr: CallUnary { func: IsNull(IsNull), expr: CallUnary { func: RecordGet(RecordGet(0)), expr: CallUnary { func: RecordGet(RecordGet(1)), expr: Column(2) } } } }], [CallUnary { func: RecordGet(RecordGet(0)), expr: Column(2) }, CallUnary { func: CastInt32ToInt64(CastInt32ToInt64), expr: CallUnary { func: RecordGet(RecordGet(0)), expr: CallUnary { func: RecordGet(RecordGet(1)), expr: Column(2) } } }]] }
stack backtrace:
   0: rust_begin_unwind
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:645:5
   1: core::panicking::panic_fmt
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:72:14
   2: core::panicking::assert_failed_inner
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:342:17
   3: core::panicking::assert_failed
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:297:5
   4: mz_transform::analysis::equivalences::EquivalenceClasses::minimize
             at ./src/transform/src/analysis/equivalences.rs:352:9
   5: <mz_transform::analysis::equivalences::Equivalences as mz_transform::analysis::Analysis>::derive
             at ./src/transform/src/analysis/equivalences.rs:176:9
   6: <mz_transform::analysis::common::Bundle<A> as mz_transform::analysis::common::AnalysisBundle>::analyse
             at ./src/transform/src/analysis.rs:345:25
   7: mz_transform::analysis::common::DerivedBuilder::visit
             at ./src/transform/src/analysis.rs:319:25
   8: <mz_transform::equivalence_propagation::EquivalencePropagation as mz_transform::Transform>::transform
             at ./src/transform/src/equivalence_propagation.rs:61:23
   9: <mz_transform::Fixpoint as mz_transform::Transform>::transform::{{closure}}
             at ./src/transform/src/lib.rs:279:25
  10: tracing::span::Span::in_scope
             at /home/gabor/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tracing-0.1.37/src/span.rs:1102:9
  11: <mz_transform::Fixpoint as mz_transform::Transform>::transform
             at ./src/transform/src/lib.rs:277:17
  12: mz_transform::Optimizer::transform
             at ./src/transform/src/lib.rs:663:13
  13: mz_transform::Optimizer::optimize
             at ./src/transform/src/lib.rs:626:32
  14: mz_adapter::optimize::optimize_mir_local
             at ./src/adapter/src/optimize/mod.rs:312:16
  15: <mz_adapter::optimize::peek::Optimizer as mz_adapter::optimize::Optimize<mz_sql::plan::expr::HirRelationExpr>>::optimize
             at ./src/adapter/src/optimize/peek.rs:158:20
  16: mz_adapter::optimize::Optimize::catch_unwind_optimize::{{closure}}
             at ./src/adapter/src/optimize/mod.rs:120:63
  17: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panic/unwind_safe.rs:272:9
  18: std::panicking::try::do_call
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:552:40
  19: __rust_try
  20: std::panicking::try
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:516:19
  21: std::panic::catch_unwind
             at /rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panic.rs:142:14
  22: mz_ore::panic::catch_unwind::{{closure}}
             at ./src/ore/src/panic.rs:78:19
...

chuck-alt-delete pushed a commit that referenced this pull request Mar 1, 2024
@frankmcsherry frankmcsherry force-pushed the equivalence_propagation branch 3 times, most recently from 8eac2db to 5aeedfc Compare March 2, 2024 15:19
(_, MirScalarExpr::Literal(_, _)) => std::cmp::Ordering::Greater,
(MirScalarExpr::Column(_), MirScalarExpr::Column(_)) => e1.cmp(e2),
(MirScalarExpr::Column(_), _) => std::cmp::Ordering::Less,
(_, MirScalarExpr::Column(_)) => std::cmp::Ordering::Greater,
Contributor
Could you add a comment here mentioning that the first elements of the classes are the representatives, and these ordering considerations are important because we'll be simplifying expressions to these representatives?
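For illustration, the ordering in the quoted snippet prefers literals, then columns, then everything else, so sorting a class puts the simplest expression first as its representative. A toy version (not the real `MirScalarExpr` ordering):

```rust
use std::cmp::Ordering;

#[derive(Debug, PartialEq, Eq, Clone)]
enum Expr {
    Literal(i64),
    Column(usize),
    Call(String),
}

/// Literals sort before columns, which sort before anything else, so the
/// first element of a sorted class is the simplest member.
fn simplicity(e1: &Expr, e2: &Expr) -> Ordering {
    use Expr::*;
    match (e1, e2) {
        (Literal(l1), Literal(l2)) => l1.cmp(l2),
        (Literal(_), _) => Ordering::Less,
        (_, Literal(_)) => Ordering::Greater,
        (Column(c1), Column(c2)) => c1.cmp(c2),
        (Column(_), _) => Ordering::Less,
        (_, Column(_)) => Ordering::Greater,
        (Call(f1), Call(f2)) => f1.cmp(f2),
    }
}

fn main() {
    let mut class = vec![
        Expr::Call("not(isnull(#2))".into()),
        Expr::Column(2),
        Expr::Literal(7),
    ];
    class.sort_by(simplicity);
    // The literal lands first and becomes the class representative.
    println!("{:?}", class[0]);
}
```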

self.classes.dedup();
}

/// Update `self` to maintain the same equivalences while potentially reducing along `Ord::le`.
Contributor
Whose Ord::le is this sentence referring to?

// TODO: remove these measures once we are more confident about idempotence.
let prev = self.clone();
self.minimize_once(columns);
assert_eq!(self, &prev);
Contributor
Copying here from Slack:
I'd suggest to leave the idempotence assertion in the code (as a soft_assert_or_log) until we test this on customer queries.

ggevay commented Mar 4, 2024

I've started a Nightly CI run on the current version: https://buildkite.com/materialize/nightlies/builds/6753

Alex says he'll run RQG tomorrow. He will also add a feature flag as discussed here.

@aalexandrov aalexandrov left a comment

After the latest fix I couldn't find any more correctness issues.

I suggest waiting for me to rebase and push a commit that adds the transform behind a feature flag and enables it in CI before we merge this PR. This can be done immediately after #25628 and #25812 are merged, so probably later today.

@def-: you'll have to ping me next week if the SQLSmith / RQG failures due to OOMs become too frequent. Once we have good example queries I can try tracing with and without the feature flag turned on to see what (if anything) is causing the extra memory consumption.

/// The mutations should never invalidate an equivalence the operator has been reported as providing, as that
/// information may have already been acted upon by others.
///
/// The `expr_index` argument must equal `expr`'s position in post-order, so that it can be used as a reference
Contributor
There is no `expr_index` mentioned anywhere else in this file.

Contributor Author
Happy to fix, but maybe after it's in a state that you like / it gets merged. Mostly want to avoid randomizing things with my primitive git skills.

@aalexandrov aalexandrov force-pushed the equivalence_propagation branch from e36b409 to 35ea714 Compare March 8, 2024 20:01
@frankmcsherry frankmcsherry requested a review from a team as a code owner March 8, 2024 20:01
@frankmcsherry frankmcsherry requested review from a team and jkosh44 March 8, 2024 20:01
shepherdlybot bot commented Mar 8, 2024

Risk Score: 83/100 | Bug Hotspots: 1 | Resilience Coverage: 60%

Mitigations

Completing required mitigations increases Resilience Coverage.

  • (Required) Code Review 🔍 Detected
  • (Required) Feature Flag
  • (Required) Integration Test 🔍 Detected
  • (Required) Observability
  • (Required) QA Review 🔍 Detected
  • Unit Test
Risk Summary:

The risk associated with this pull request is high, with a score of 83, indicating a significant likelihood of introducing bugs. This assessment is informed by factors such as the average line count in files and the number of executable lines within files, which historically have made pull requests 141% more likely to cause a bug compared to the baseline. Additionally, the pull request modifies files that have a recent history of bug fixes. The repository's observed bug trend is currently decreasing, which is separate from the risk score but suggests an improvement in the overall code stability.

Note: The risk score is not based on semantic analysis but on historical predictors of bug occurrence in the repository. The attributes above were deemed the strongest predictors based on that history. Predictors and the score may change as the PR evolves in code, time, and review activity.

Bug Hotspots:

File Percentile
../statement/ddl.rs 96

@aalexandrov aalexandrov dismissed ggevay’s stale review March 8, 2024 20:07

We discussed with Gábor that further change requests can be addressed in a follow-up PR, since we are merging this behind a feature flag that is off by default.

@jkosh44 jkosh44 left a comment

Adapter code LGTM

Contributor
@jkosh44 I think you were flagged when I force-pushed my feature flag commits, because in those I extended the parser to allow overriding the new feature flag at the SQL level in `EXPLAIN WITH(...)` and `CREATE CLUSTER ... FEATURES(...)`.

We already have parser tests for this syntax. I don't think it's worth adding specific tests for each new feature flag that we define.

@aalexandrov aalexandrov force-pushed the equivalence_propagation branch from 35ea714 to 90575e5 Compare March 8, 2024 21:01
Temporarily set the default value to `true` in order to make all of the
failing `*.td` tests happy.
@aalexandrov aalexandrov enabled auto-merge March 8, 2024 22:09
@aalexandrov aalexandrov merged commit 3cfaa82 into MaterializeInc:main Mar 8, 2024
77 checks passed
5 participants