[CT-2723] [spike+] Maximally parallelize `dbt clone` operations, a different mechanism for processing a queue (#7914)
I said in #7881 (comment):

> After looking into it a bit more, I don't think this actually is true. It would make sense, though. So: this would be a first in dbt, and would require some modification to how we iterate over the job/graph queue.
Notes from refinement:
> Where would this speed things up, and by how much? @jtcohen6, do you have an example here to illustrate the concrete benefit of doing this? We'd like to understand what the performance gain would be, to decide if this change is worth it.
We still need to construct a DAG for the [...]. It's a fair thought re: ephemeral models. The good news is, at least for [...]. I believe this would significantly speed up the execution of `dbt clone`. In theory, if you're running with as many threads as you have selected models for cloning:
Meaning: the maximum theoretical speedup is to run this Y times faster, where Y is the greatest parent-child depth among selected resources. In dbt Labs' internal-analytics project, Y = 31:

```python
import networkx

# Note: read_gpickle was removed in networkx 3.0; this snippet assumes networkx < 3.0.
G = networkx.readwrite.gpickle.read_gpickle("target/graph.gpickle")
longest_path = networkx.dag_longest_path(G)
print(len(longest_path))
```

We're never going to achieve that theoretical speedup in practice, but I'd hope it could be pretty darn significant. There is additional latency due to dbt maintaining a central adapter cache, which each thread must lock while updating. @peterallenwebb had identified some bonus cache-related slowness in #6844. I had tried pulling that change into my previous spike PR, and it shaved off ~40% of the total runtime: #7258 (comment).
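To make the depth argument concrete, here is an illustrative sketch (not dbt's actual scheduler): with unlimited threads, a DAG-ordered queue needs one "round" per topological level, so the total number of rounds equals the longest-path length; an edge-free queue finishes every node in a single round. The toy DAG below is an assumption for demonstration only.

```python
import networkx as nx

# Toy DAG standing in for a dbt project graph: a -> b -> c, plus a -> d.
# Longest parent-child path has 3 nodes, so "Y" here is 3.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("a", "d")])

# DAG-ordered execution: one round per topological generation.
dag_rounds = len(list(nx.topological_generations(G)))

# Edge-free execution: drop every edge, so all nodes are ready at once.
edge_free_rounds = len(list(nx.topological_generations(nx.create_empty_copy(G))))

print(dag_rounds, edge_free_rounds)  # 3 1
```

With enough threads, the edge-free queue collapses Y sequential rounds into one, which is exactly the Y-times theoretical speedup described above.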
> I believe this will also be relevant for our unit testing work, since unit tests do not need to run in DAG order.
> A simple approach that might work here to achieve a 'maximally parallelized execution mode' would be to modify the `get_graph_queue` method to accept an optional config that builds the graph queue without any edges.
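The idea above could be sketched roughly as follows. This is a hypothetical illustration, not dbt's real `get_graph_queue` signature (the actual method lives in dbt-core's graph code and returns a `GraphQueue`, not a plain queue); the `ignore_edges` parameter and the batch representation here are assumptions for demonstration.

```python
import networkx as nx
from queue import SimpleQueue


def get_graph_queue(graph: nx.DiGraph, ignore_edges: bool = False) -> SimpleQueue:
    """Return a queue of node batches; nodes within a batch may run in parallel.

    Hypothetical sketch: with ignore_edges=True, every selected node is
    immediately ready, so the whole graph becomes a single batch.
    """
    if ignore_edges:
        # Keep the nodes but drop all edges, so nothing blocks on a parent.
        graph = nx.create_empty_copy(graph)
    q: SimpleQueue = SimpleQueue()
    for generation in nx.topological_generations(graph):
        q.put(sorted(generation))  # one batch per topological level
    return q


# Toy 3-model chain: DAG order yields 3 batches, edge-free mode yields 1.
dag = nx.DiGraph([("model_a", "model_b"), ("model_b", "model_c")])
print(get_graph_queue(dag).qsize())                     # 3
print(get_graph_queue(dag, ignore_edges=True).qsize())  # 1
```

The appeal of this shape is that the edge-free mode reuses the existing queue machinery unchanged; only the graph handed to it differs.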
More notes from refinement:
> This might also apply to unit tests (in the `dbt test` command).
Originally posted by @jtcohen6 in #7881 (comment)