[CT-1878] [Feature] <Optionally provide job_id for every model that gets executed> #475
Comments
Thanks for creating the issue!

Yes Christophe, something like that. A full --debug run gets too verbose and difficult to navigate. What I would suggest here is to have the option to always show the

It makes sense to improve logging, and I think it could be useful. I'm curious whether @jtcohen6 has a higher-level plan for that.
Quick thoughts:
I do think those are the right foundational mechanisms to have in place, as far as making this information available / programmatically parseable. Open to hearing your thoughts about preferred UX, though! Should we include the BQ job link for test failures, same as we do for query errors?

dbt-bigquery/dbt/adapters/bigquery/connections.py, lines 193 to 199 at 3ce88d7
I agree on the fundamentals! Something like this?

```python
logger.error(
    cls._bq_job_link(
        error.query_job.location, error.query_job.project, error.query_job.job_id
    )
)
```
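For context on what `_bq_job_link` produces, here is a minimal sketch of building a BigQuery console link from a job's location, project, and job_id. The exact URL shape is an assumption based on the BigQuery console's query-string format, not a verbatim copy of the adapter's implementation:

```python
def bq_job_link(location: str, project: str, job_id: str) -> str:
    # Assumed console URL format: deep-links to the query results page
    # for a specific job, keyed by "bq:<location>:<job_id>".
    return (
        "https://console.cloud.google.com/bigquery"
        f"?project={project}&j=bq:{location}:{job_id}&page=queryresults"
    )
```

Logging a link like this on test failure, the same way it is already logged on query errors, would make the job immediately clickable from CI logs.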
Hmm, I don't know that we should make this a documented & supported pattern... but technically, this is already possible. If I add some code like this into a custom version of the ...

```sql
{% call statement('main', fetch_result=True) -%}
  {{ get_test_sql(main_sql, fail_calc, warn_if, error_if, limit) }}
{%- endcall %}

{# ---- my custom code #}
{% set result = load_result('main') %}
{% set should_warn_or_error = result.table.columns[1][0] or result.table.columns[2][0] %}
{% if should_warn_or_error %}
  {{ log("Job ID for test that found failures: " ~ result.response.job_id, info = true) }}
{% endif %}
{# ---- #}

{{ return({'relations': relations}) }}
```

... Voila:
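Beyond logging, the "programmatically parseable" angle mentioned above can also be served by reading the adapter response out of dbt's `run_results.json` artifact after the run. A hedged sketch follows; the `adapter_response` / `job_id` field names are assumptions based on dbt's artifact schema for dbt-bigquery and may vary by dbt version:

```python
import json


def job_ids_from_run_results(run_results: dict) -> dict:
    """Map each node's unique_id to the BigQuery job_id found in its
    adapter_response, skipping nodes that report no job_id."""
    ids = {}
    for result in run_results.get("results", []):
        job_id = (result.get("adapter_response") or {}).get("job_id")
        if job_id:
            ids[result["unique_id"]] = job_id
    return ids


# Usage sketch: after `dbt build`, inspect the artifact, e.g.
# with open("target/run_results.json") as f:
#     print(job_ids_from_run_results(json.load(f)))
```

This keeps the console output clean while still letting an orchestrator (Airflow, etc.) trace every node back to its BigQuery job.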
Absolutely!
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days.

Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest. Just add a comment to notify the maintainers.
Is this your first time submitting a feature request?
Describe the feature
When several hundred models get executed in a single `dbt build`, it sometimes happens that the models get created successfully but some of the tests fail. In this case, it is currently very time-consuming to look up the job_id and execution details, as they are not explicitly output except in `--debug` mode, which would unnecessarily clutter the logs. This gets particularly challenging when the `dbt build` is triggered by an automated process, such as a scheduled dbt execution with Airflow running in its own Kubernetes environment.

Describe alternatives you've considered
Alternatives considered so far are:
Neither of those options, however, seems maintainable or offers a smooth developer experience where all the information is condensed in the log shown in the console.
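Short of a first-class option, one stopgap today is to run with structured logs (`dbt --log-format json`) and filter them programmatically instead of eyeballing a full `--debug` run. A minimal sketch, with the caveat that dbt's structured log schema varies by version, so the `info`/`msg` field names here are assumptions:

```python
import json


def lines_mentioning_job_id(log_lines):
    """Yield the message text of structured dbt log lines that appear to
    mention a BigQuery job. Assumes `dbt --log-format json` output; the
    'info'/'msg' field names are assumptions about the log schema."""
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (banners, tracebacks, etc.)
        msg = record.get("info", {}).get("msg", "") or record.get("msg", "")
        if "job_id" in msg or "console.cloud.google.com/bigquery" in msg:
            yield msg
```

This is clearly workaround territory; it just narrows the gap until a proper "always show the job_id" option exists.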
Who will this benefit?
The main benefit is for anyone running/building multiple models at once who needs to trace a particular execution back to its counterpart in BigQuery. The benefit becomes evident as soon as the number of models increases.
Are you interested in contributing this feature?
I would love to help, but I might need a couple of pointers regarding best practices, etc.
Anything else?
No response