Add support for recursive CTEs #462
Comments
This is interesting to handle. I imagine it's easier than window functions, but I guess I shall finish that first.
@Dandandan I've been experimenting with whether support for recursive CTEs could be done without too much overhead (like holding onto all the results from all the iterations until the recursion is over) and without using any real 'tables' (like the "working table" concept in Postgres, a table where the previous iteration's results are materialized). It seems like there might be a solution, but before going in much deeper and spending a lot of time on the details I wanted to see if it makes sense in general (or whether there are any obvious obstacles that I might have missed) and whether I should submit it as a PR to DataFusion. (It is probably too large to be sent as a single PR, so I might also start a task list and send it as smaller / localized parts.)

Implementation (very wordy)

The branch (the source code also has a decent-ish amount of documentation left over from my experiments, in case anyone is interested).

WITH RECURSIVE nodes(node_1) AS (
    SELECT 1 as node_1
    UNION ALL
    SELECT node_1+1 as node_1 FROM nodes WHERE node_1 < 100
)
SELECT node_1 FROM nodes;

I've tried to localize the number of changes needed to compile the recursive queries in the SQL planner, and it seems like we could get away with no changes outside of the
Here is the plan for the query above:
In terms of the physical plan, for each
The names on the sketch might be a bit cryptic, so here is a brief explanation:
The parent operation receives its input no matter what happens internally (in terms of buffering). The buffering is currently only for the previous iteration's records, but we might also want to keep a hashset of all the rows we have seen once we add support for UNION (distinct) semantics.
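To make the buffering described above concrete, here is a minimal, self-contained Rust sketch of the UNION ALL fixpoint loop for the example query. Plain i64 values stand in for record batches, and eval_base / eval_recursive are hypothetical stand-ins for executing the static and recursive terms; this illustrates the idea, not DataFusion's actual operators.

```rust
// Sketch of the recursive-CTE loop for UNION ALL semantics: only the previous
// iteration's rows are buffered between iterations, never the whole result.
fn eval_base() -> Vec<i64> {
    vec![1] // SELECT 1 as node_1
}

fn eval_recursive(prev: &[i64]) -> Vec<i64> {
    // SELECT node_1+1 FROM nodes WHERE node_1 < 100, evaluated against the
    // previous iteration's rows only (no materialized "working table").
    prev.iter().filter(|&&n| n < 100).map(|&n| n + 1).collect()
}

fn main() {
    let mut output = Vec::new();
    let mut working = eval_base();
    while !working.is_empty() {
        output.extend_from_slice(&working);
        working = eval_recursive(&working);
    }
    println!("{} rows, last = {:?}", output.len(), output.last()); // 100 rows, last = Some(100)
}
```

Running the sketch yields the rows 1 through 100 while holding only one iteration's worth of rows in the loop at a time.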
I was thinking about how we can split it, and an initial plan might look like this if there are no objections on separating it.

Possible roadmap?
The implementation is self-contained enough that I think it could be split (with tests), and it would include the
This would be a sizable change that can actually implement the initial piece of logic (without distinct) where we could execute queries up until a certain condition has been met. It would also include new logical operations (
The implementation in terms of SQL is completely decoupled from the actual logical/physical representation, and I think it can be added last; the algorithm basically uses a temporary CTE and then replaces it with the original form (more details in the main PR).
This would require us to record what sort of values we have actually collected (probably not direct references, but hashes), and it would be a bit less efficient than the UNION ALL case.
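As a rough illustration of the hash-tracking idea in that last item, here is a small Rust sketch that layers distinct (UNION) semantics on top of the same loop by recording hashes of rows already emitted; the row type, the hashing scheme, and the tiny recursive term are simplified assumptions, not the actual DataFusion code.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Hash a row; a real implementation would hash Arrow rows and would also have
// to deal with hash collisions, which this sketch ignores.
fn row_hash<T: Hash>(row: &T) -> u64 {
    let mut h = DefaultHasher::new();
    row.hash(&mut h);
    h.finish()
}

fn main() {
    let mut seen: HashSet<u64> = HashSet::new();
    let mut output: Vec<i64> = Vec::new();
    let mut working = vec![1i64];
    while !working.is_empty() {
        // Keep only rows whose hash has not been seen before (distinct semantics).
        let fresh: Vec<i64> = working
            .into_iter()
            .filter(|r| seen.insert(row_hash(r)))
            .collect();
        if fresh.is_empty() {
            break; // fixpoint: the recursive term produced nothing new
        }
        output.extend_from_slice(&fresh);
        // The next iteration is computed from the newly discovered rows only.
        working = fresh.iter().filter(|&&n| n < 5).map(|&n| n + 1).collect();
    }
    println!("{:?}", output); // [1, 2, 3, 4, 5]
}
```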
Thank you for the write-up @isidentical!
Huge plus one for support here @isidentical and @Dandandan :) I need this for a use case; it would help bring DataFusion up to feature parity with DuckDB.
I thought I'd take a stab at refreshing your PR @isidentical. Here's the refreshed PR. I quickly threw it together and it compiles, but it fails at runtime when using your recursive example. I haven't taken a step back to figure out why (yet). Thought I'd share in case you know the next steps.

thread 'main' panicked at 'partition not used yet', /Users/matthewgapp/.cargo/git/checkouts/arrow-datafusion-152bda9bc50d03e7/9479b2b/datafusion/physical-plan/src/repartition/mod.rs:562:14
@matthewgapp thanks for trying to revive it (and if you are interested, you have my full blessing on taking over it), but unfortunately I might not be able to give a lot of help on it (I am unavailable due to projects that require my attention). I am hoping that the written material in this issue and the comments on the PR might guide you if you choose to take over that implementation as-is (although it was my first attempt, and I haven't poured too much time into it).
@isidentical thanks for the response and encouragement! I'll try to take a stab at getting it working. Seems like

Here are the logical and physical plans for reference:
Ok, I got it working by eliminating

As an aside, this approach seems to be about 4x faster than DuckDB (using the trivial example with 1 million iterations). I have yet to test it thoroughly.
I've continued to clean up the rough edges and test more use cases. I have a couple of priorities before requesting additional feedback:
I think you can control this by setting some combination of https://docs.rs/datafusion/latest/datafusion/physical_plan/trait.ExecutionPlan.html#method.required_input_distribution and https://docs.rs/datafusion/latest/datafusion/physical_plan/trait.ExecutionPlan.html#method.benefits_from_input_partitioning
Definitely -- at the very least I think it should have some sqllogictest end-to-end SQL test coverage (see https://github.com/apache/arrow-datafusion/tree/main/datafusion/sqllogictest for more details). Excited to see this work progress. Thank you @matthewgapp
@alamb thanks for the pointer on those "characteristic metadata" methods. I ran across them late last night and they seemed to solve the blocking issue. But the optimizer still slides RepartitionExec and CoalesceExec into the physical plan, which slows things down by an order of magnitude (~30x). To that end, I've drafted a follow-on PR that omits them (only) when they'd be descendants of a recursive query. I will work on getting this thing tested (might not be until this weekend). Let me know if you have any other comments/thoughts in the meantime.
Per the conversation in #7581, I'll break down #7581 into, roughly, the following PRs
I filed #9554 to track enabling recursive CTEs by default. Let's track progress there |
Which version began to support it?

ctx.sql('with recursive t as (select 1 union all select 2 from t)select * from t')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception: DataFusion error: NotImplemented("Recursive CTEs are not enabled")
I believe the initial implementation is in version 37 -- https://github.com/apache/datafusion/blob/main/dev/changelog/37.0.0.md
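For anyone trying this from Rust on a recent release, a minimal usage sketch could look roughly like the following; the datafusion.execution.enable_recursive_ctes config key, and whether it still needs to be set explicitly at all, are assumptions that depend on the version in use (recursive CTEs were gated behind a flag before being enabled by default).

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // Assumed config key; on versions where recursive CTEs are on by default
    // this setting should be unnecessary.
    let config = SessionConfig::new()
        .set_bool("datafusion.execution.enable_recursive_ctes", true);
    let ctx = SessionContext::new_with_config(config);

    let df = ctx
        .sql(
            "WITH RECURSIVE t(n) AS (
                 SELECT 1
                 UNION ALL
                 SELECT n + 1 FROM t WHERE n < 5
             )
             SELECT n FROM t",
        )
        .await?;
    df.show().await?;
    Ok(())
}
```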
The speed seems not very fast. I used the binary file
I am sure there are ways the performance could be improved -- the overhead of running each iteration is non-trivial (it starts a new plan). It might be worth filing a new ticket to track potential performance improvements if you are interested in helping out there.
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
Recursive CTEs are interesting to support more complex algorithms like graph processing
Describe the solution you'd like
Implement recursive CTEs. Some research needs to be done to do this in an efficient way in DataFusion
Describe alternatives you've considered
Additional context