Support bq legacySQL queries, to access partition metadata #2552
Comments
@jtcohen6 It may be overkill to add a full-blown feature for this — changing the signatures of `run_query`/`statement`, handling the new parameter as an error on non-BigQuery adapters, etc. It may be quicker (and dirtier) to have the BQ adapter look for …
@jtcohen6 my instinct here is that we should not add new general-purpose support for legacy SQL. If we want to add a helper adapter function that gets the zero-cost partitions for a table, we can certainly do that, but the more we can firewall this functionality from the rest of the plugin, the better, IMO!
Describe the feature
Allow dbt to run Legacy SQL queries to access older BigQuery features that are not yet supported in Standard SQL.
The required change involves an ability to dynamically override `job_params` in `raw_execute`. Ideally, there would be an additional argument to `run_query`/`statement`, such as `use_legacy_sql: true`, which would translate to `job_params: {"use_legacy_sql": true}`.
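A minimal sketch of how such an override could be merged on the adapter side. The names here (`build_job_params`, `DEFAULT_JOB_PARAMS`) are illustrative placeholders, not dbt's actual internals; in the real plugin, the merged dict would feed `google.cloud.bigquery.QueryJobConfig` inside `raw_execute`:

```python
# Illustrative sketch only: merging a caller-supplied job_params
# override on top of the adapter's defaults. Names are hypothetical.

DEFAULT_JOB_PARAMS = {"use_legacy_sql": False}

def build_job_params(overrides=None):
    """Merge caller-supplied job_params over the adapter defaults."""
    params = dict(DEFAULT_JOB_PARAMS)
    if overrides:
        params.update(overrides)
    return params

# A run_query(..., use_legacy_sql=True) call would then amount to:
params = build_job_params({"use_legacy_sql": True})
# ...which raw_execute could pass as QueryJobConfig(**params).
```

The point of merging rather than replacing is that all other adapters (and all existing BigQuery call sites) keep their current behavior when no override is supplied.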
Specific use case
There is a compelling and free (zero-byte) way to access partition metadata, but it's only available to Legacy SQL:
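The original snippet is not preserved in this copy of the issue; per BigQuery's legacy SQL documentation, partition metadata is exposed through the `$__PARTITIONS_SUMMARY__` meta-table, so the query would look roughly like this (`mydataset.mytable` is a placeholder):

```sql
#legacySQL
SELECT
  partition_id,
  creation_time,
  last_modified_time
FROM [mydataset.mytable$__PARTITIONS_SUMMARY__]
```

Because this reads only table metadata, it processes zero bytes and is therefore free.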
This offers substantial savings over the Standard SQL query to get the latest partition value, which is a big chunk of the overhead in the dynamic `insert_overwrite` incremental strategy.

Describe alternatives you've considered
Wait for BigQuery to release a zero-cost way of accessing partition metadata from Standard SQL. The signs of that happening soon aren't promising, but I'm not a huge fan of adding new support for legacy functionality.
Who will this benefit?
BigQuery users with large partitioned tables