dbt uses SQL parameters when inserting static data into seeds. The most common SQL parameter marker is `%s`, but pyodbc uses `?` instead. Today, we achieve that switch at the very last moment, right when dbt is going to execute the SQL (`dbt-spark/dbt/adapters/spark/connections.py`, lines 268 to 271 at `1f84005`):
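For context, here is a minimal sketch of that last-moment conversion, assuming a pyodbc cursor; the `execute_with_qmark` helper name is hypothetical, and the `sqlparams` format-to-qmark conversion is my reading of what the connection wrapper does at the lines referenced above, not a verbatim copy:

```python
import sqlparams

def execute_with_qmark(cursor, sql, bindings=None):
    # Hypothetical sketch: pyodbc only accepts qmark-style (`?`)
    # placeholders, so rewrite the format-style (`%s`) placeholders
    # that dbt templated into the SQL right before execution.
    if bindings is None:
        cursor.execute(sql)
        return
    converter = sqlparams.SQLParams('format', 'qmark')
    sql, bindings = converter.format(sql, bindings)
    cursor.execute(sql, bindings)
```

Because the rewrite happens inside the connection wrapper, everything upstream of it, including the logger, only ever sees the `%s` version.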
This is tricky, though, because it means that the SQL logged to logs/dbt.log appears to contain the incorrect SQL parameters. I'd much rather have it be the correct one, if possible, to aid in debugging what dbt actually ran.
We refactored the `seed` materialization for v0.21, and kicked `get_binding_char()` into its own macro. I think this means that we could:
- reimplement `spark__get_binding_char` to return `%s` or `?` based on `target.method == 'odbc'` (sketched after this list)
- reimplement `spark__load_csv_rows` to look more like `default__load_csv_rows`
- do we still need a custom `seed` materialization at all?
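A minimal sketch of what the first step could look like, assuming dbt's standard adapter-dispatch macro naming; the exact shape of the conditional is illustrative:

```sql
{% macro spark__get_binding_char() %}
    {#- pyodbc is the only connection method that requires qmark-style
        (`?`) placeholders; everything else can keep the default `%s`. -#}
    {% if target.method == 'odbc' %}
        {{ return('?') }}
    {% else %}
        {{ return('%s') }}
    {% endif %}
{% endmacro %}
```

If that works, the second and third steps may mostly be deletions: `default__load_csv_rows` in dbt-core already templates its insert statements through `get_binding_char()`, so dispatching the macro above might let us drop `spark__load_csv_rows`, and perhaps the custom `seed` materialization with it.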