[Bug] dbt unit tests raise an error when QUOTED_IDENTIFIERS_IGNORE_CASE = true in Snowflake #1160
Hi, any updates on this error? Our team is encountering similar issues.
On Snowflake, if the `QUOTED_IDENTIFIERS_IGNORE_CASE` parameter is set to `true`, dbt unit tests raise an error. Project setup:

```yaml
# dbt_project.yml
name: my_dbt_project
profile: all
config-version: 2
version: "1.0.0"

models:
  my_dbt_project:
    +materialized: table

on-run-start: "{{ check_param() }}"
```
```yaml
# models/tests.yml
unit_tests:
  - name: test_is_valid
    model: bar
    given:
      - input: ref('foo')
        rows:
          - {id: 1}
    expect:
      rows:
        - {id: 2}
```

```sql
-- models/foo.sql
select 1 id
```

```sql
-- models/bar.sql
select id * 2 as id from {{ ref('foo') }}
```
```sql
-- macros/check_param.sql
{% macro check_param() %}
    {% set r = run_query("show parameters like 'QUOTED_IDENTIFIERS_IGNORE_CASE' in account;") %}
    {% if execute %}
        {% do r.print_table() %}
    {% endif %}
{% endmacro %}
```

(1) First make sure the param is set to `false`, then build:

$ dbt --debug build
03:35:14 Running 1 on-run-start hook
03:35:14 Using snowflake connection "master"
03:35:14 On master: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "connection_name": "master"} */
show parameters like 'QUOTED_IDENTIFIERS_IGNORE_CASE' in account;
03:35:14 Opening a new connection, currently in state init
03:35:16 SQL status: SUCCESS 1 in 3.0 seconds
| key | value | default | level | description | type |
| -------------------- | ----- | ------- | ------- | -------------------- | ------- |
| QUOTED_IDENTIFIER... | false | false | ACCOUNT | If true, the case... | BOOLEAN |
03:35:16 Writing injected SQL for node "operation.my_dbt_project.my_dbt_project-on-run-start-0"
03:35:16 1 of 1 START hook: my_dbt_project.on-run-start.0 ............................... [RUN]
03:35:16 1 of 1 OK hook: my_dbt_project.on-run-start.0 .................................. [OK in 0.00s]
03:35:16
03:35:16 On master: Close
03:35:17 Concurrency: 1 threads (target='sf-eu')
03:35:17
03:35:17 Began running node model.my_dbt_project.foo
03:35:17 1 of 3 START sql table model dbt_jyeo.foo ...................................... [RUN]
03:35:17 Re-using an available connection from the pool (formerly list_analytics_dbt_jyeo, now model.my_dbt_project.foo)
03:35:17 Began compiling node model.my_dbt_project.foo
03:35:17 Writing injected SQL for node "model.my_dbt_project.foo"
03:35:17 Began executing node model.my_dbt_project.foo
03:35:17 Writing runtime sql for node "model.my_dbt_project.foo"
03:35:17 Using snowflake connection "model.my_dbt_project.foo"
03:35:17 On model.my_dbt_project.foo: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "model.my_dbt_project.foo"} */
create or replace transient table analytics.dbt_jyeo.foo
as
(select 1 id
);
03:35:17 Opening a new connection, currently in state closed
03:35:20 SQL status: SUCCESS 1 in 3.0 seconds
03:35:20 On model.my_dbt_project.foo: Close
03:35:21 Sending event: {'category': 'dbt', 'action': 'run_model', 'label': '72b6ef92-cde4-4239-bb45-03aff682b5b5', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x155241c10>]}
03:35:21 1 of 3 OK created sql table model dbt_jyeo.foo ................................. [SUCCESS 1 in 4.31s]
03:35:21 Finished running node model.my_dbt_project.foo
03:35:21 Began running node unit_test.my_dbt_project.bar.test_is_valid
03:35:21 2 of 3 START unit_test bar::test_is_valid ...................................... [RUN]
03:35:21 Re-using an available connection from the pool (formerly model.my_dbt_project.foo, now unit_test.my_dbt_project.bar.test_is_valid)
03:35:21 Began compiling node unit_test.my_dbt_project.bar.test_is_valid
03:35:21 Began executing node unit_test.my_dbt_project.bar.test_is_valid
03:35:21 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:35:21 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
describe table analytics.dbt_jyeo.foo
03:35:21 Opening a new connection, currently in state closed
03:35:24 SQL status: SUCCESS 1 in 2.0 seconds
03:35:24 Writing injected SQL for node "unit_test.my_dbt_project.bar.test_is_valid"
03:35:24 Writing injected SQL for node "unit_test.my_dbt_project.bar.test_is_valid"
03:35:24 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:35:24 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
create or replace temporary table analytics.dbt_jyeo.test_is_valid__dbt_tmp
as
(select * from (
with __dbt__cte__foo as (
-- Fixture for foo
select
try_cast('1' as NUMBER(1,0))
as id
) select id * 2 as id from __dbt__cte__foo
) as __dbt_sbq
where false
limit 0
);
03:35:25 SQL status: SUCCESS 1 in 1.0 seconds
03:35:25 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:35:25 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
describe table analytics.dbt_jyeo.test_is_valid__dbt_tmp
03:35:25 SQL status: SUCCESS 1 in 0.0 seconds
03:35:25 Writing runtime sql for node "unit_test.my_dbt_project.bar.test_is_valid"
03:35:25 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:35:25 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
-- Build actual result given inputs
with dbt_internal_unit_test_actual as (
select
id, 'actual' as "actual_or_expected"
from (
with __dbt__cte__foo as (
-- Fixture for foo
select
try_cast('1' as NUMBER(1,0))
as id
) select id * 2 as id from __dbt__cte__foo
) _dbt_internal_unit_test_actual
),
-- Build expected result
dbt_internal_unit_test_expected as (
select
id, 'expected' as "actual_or_expected"
from (
select
try_cast('2' as NUMBER(2,0))
as id
) _dbt_internal_unit_test_expected
)
-- Union actual and expected results
select * from dbt_internal_unit_test_actual
union all
select * from dbt_internal_unit_test_expected
03:35:26 SQL status: SUCCESS 2 in 1.0 seconds
03:35:26 Applying DROP to: analytics.dbt_jyeo.test_is_valid__dbt_tmp
03:35:26 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:35:26 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
drop table if exists analytics.dbt_jyeo.test_is_valid__dbt_tmp cascade
03:35:26 SQL status: SUCCESS 1 in 0.0 seconds
03:35:26 On unit_test.my_dbt_project.bar.test_is_valid: Close
03:35:27 2 of 3 PASS bar::test_is_valid ................................................. [PASS in 5.82s]
03:35:27 Finished running node unit_test.my_dbt_project.bar.test_is_valid
03:35:27 Began running node model.my_dbt_project.bar
03:35:27 3 of 3 START sql table model dbt_jyeo.bar ...................................... [RUN]
03:35:27 Re-using an available connection from the pool (formerly unit_test.my_dbt_project.bar.test_is_valid, now model.my_dbt_project.bar)
03:35:27 Began compiling node model.my_dbt_project.bar
03:35:27 Writing injected SQL for node "model.my_dbt_project.bar"
03:35:27 Began executing node model.my_dbt_project.bar
03:35:27 Writing runtime sql for node "model.my_dbt_project.bar"
03:35:27 Using snowflake connection "model.my_dbt_project.bar"
03:35:27 On model.my_dbt_project.bar: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "model.my_dbt_project.bar"} */
create or replace transient table analytics.dbt_jyeo.bar
as
(select id * 2 as id from analytics.dbt_jyeo.foo
);
03:35:27 Opening a new connection, currently in state closed
03:35:32 SQL status: SUCCESS 1 in 5.0 seconds
03:35:32 On model.my_dbt_project.bar: Close
03:35:33 Sending event: {'category': 'dbt', 'action': 'run_model', 'label': '72b6ef92-cde4-4239-bb45-03aff682b5b5', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x1546cc8d0>]}
03:35:33 3 of 3 OK created sql table model dbt_jyeo.bar ................................. [SUCCESS 1 in 5.85s]
(2) Now set the param to `true` and build again:

$ dbt --debug build
03:36:50 Running 1 on-run-start hook
03:36:50 Using snowflake connection "master"
03:36:50 On master: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "connection_name": "master"} */
show parameters like 'QUOTED_IDENTIFIERS_IGNORE_CASE' in account;
03:36:50 Opening a new connection, currently in state init
03:36:53 SQL status: SUCCESS 1 in 2.0 seconds
| key | value | default | level | description | type |
| -------------------- | ----- | ------- | ------- | -------------------- | ------- |
| QUOTED_IDENTIFIER... | true | false | ACCOUNT | If true, the case... | BOOLEAN |
03:36:53 Writing injected SQL for node "operation.my_dbt_project.my_dbt_project-on-run-start-0"
03:36:53 1 of 1 START hook: my_dbt_project.on-run-start.0 ............................... [RUN]
03:36:53 1 of 1 OK hook: my_dbt_project.on-run-start.0 .................................. [OK in 0.00s]
03:36:53
03:36:53 On master: Close
03:36:54 Concurrency: 1 threads (target='sf-eu')
03:36:54
03:36:54 Began running node model.my_dbt_project.foo
03:36:54 1 of 3 START sql table model dbt_jyeo.foo ...................................... [RUN]
03:36:54 Re-using an available connection from the pool (formerly list_analytics, now model.my_dbt_project.foo)
03:36:54 Began compiling node model.my_dbt_project.foo
03:36:54 Writing injected SQL for node "model.my_dbt_project.foo"
03:36:54 Began executing node model.my_dbt_project.foo
03:36:54 Writing runtime sql for node "model.my_dbt_project.foo"
03:36:54 Using snowflake connection "model.my_dbt_project.foo"
03:36:54 On model.my_dbt_project.foo: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "model.my_dbt_project.foo"} */
create or replace transient table analytics.dbt_jyeo.foo
as
(select 1 id
);
03:36:54 Opening a new connection, currently in state closed
03:36:58 SQL status: SUCCESS 1 in 4.0 seconds
03:36:58 On model.my_dbt_project.foo: Close
03:36:58 Sending event: {'category': 'dbt', 'action': 'run_model', 'label': '36b5e6c9-f0d6-4016-a8e8-e1737f793857', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x13ff9b650>]}
03:36:58 1 of 3 OK created sql table model dbt_jyeo.foo ................................. [SUCCESS 1 in 4.97s]
03:36:58 Finished running node model.my_dbt_project.foo
03:36:58 Began running node unit_test.my_dbt_project.bar.test_is_valid
03:36:58 2 of 3 START unit_test bar::test_is_valid ...................................... [RUN]
03:36:58 Re-using an available connection from the pool (formerly model.my_dbt_project.foo, now unit_test.my_dbt_project.bar.test_is_valid)
03:36:58 Began compiling node unit_test.my_dbt_project.bar.test_is_valid
03:36:58 Began executing node unit_test.my_dbt_project.bar.test_is_valid
03:36:59 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:36:59 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
describe table analytics.dbt_jyeo.foo
03:36:59 Opening a new connection, currently in state closed
03:37:03 SQL status: SUCCESS 1 in 4.0 seconds
03:37:03 Writing injected SQL for node "unit_test.my_dbt_project.bar.test_is_valid"
03:37:03 Writing injected SQL for node "unit_test.my_dbt_project.bar.test_is_valid"
03:37:03 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:37:03 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
create or replace temporary table analytics.dbt_jyeo.test_is_valid__dbt_tmp
as
(select * from (
with __dbt__cte__foo as (
-- Fixture for foo
select
try_cast('1' as NUMBER(1,0))
as id
) select id * 2 as id from __dbt__cte__foo
) as __dbt_sbq
where false
limit 0
);
03:37:04 SQL status: SUCCESS 1 in 1.0 seconds
03:37:04 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:37:04 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
describe table analytics.dbt_jyeo.test_is_valid__dbt_tmp
03:37:04 SQL status: SUCCESS 1 in 0.0 seconds
03:37:04 Writing runtime sql for node "unit_test.my_dbt_project.bar.test_is_valid"
03:37:04 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:37:04 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
-- Build actual result given inputs
with dbt_internal_unit_test_actual as (
select
id, 'actual' as "actual_or_expected"
from (
with __dbt__cte__foo as (
-- Fixture for foo
select
try_cast('1' as NUMBER(1,0))
as id
) select id * 2 as id from __dbt__cte__foo
) _dbt_internal_unit_test_actual
),
-- Build expected result
dbt_internal_unit_test_expected as (
select
id, 'expected' as "actual_or_expected"
from (
select
try_cast('2' as NUMBER(2,0))
as id
) _dbt_internal_unit_test_expected
)
-- Union actual and expected results
select * from dbt_internal_unit_test_actual
union all
select * from dbt_internal_unit_test_expected
03:37:05 SQL status: SUCCESS 2 in 0.0 seconds
03:37:05 Applying DROP to: analytics.dbt_jyeo.test_is_valid__dbt_tmp
03:37:05 Using snowflake connection "unit_test.my_dbt_project.bar.test_is_valid"
03:37:05 On unit_test.my_dbt_project.bar.test_is_valid: /* {"app": "dbt", "dbt_version": "1.8.3", "profile_name": "all", "target_name": "sf-eu", "node_id": "unit_test.my_dbt_project.bar.test_is_valid"} */
drop table if exists analytics.dbt_jyeo.test_is_valid__dbt_tmp cascade
03:37:05 SQL status: SUCCESS 1 in 1.0 seconds
03:37:05 On unit_test.my_dbt_project.bar.test_is_valid: Close
03:37:06 Unhandled error while executing
'actual_or_expected'
03:37:06 Traceback (most recent call last):
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/dbt/task/base.py", line 368, in safe_run
result = self.compile_and_execute(manifest, ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/dbt/task/base.py", line 314, in compile_and_execute
result = self.run(ctx.node, manifest)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/dbt/task/base.py", line 415, in run
return self.execute(compiled_node, manifest)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/dbt/task/test.py", line 264, in execute
unit_test_node, unit_test_result = self.execute_unit_test(test, manifest)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/dbt/task/test.py", line 237, in execute_unit_test
actual = self._get_unit_test_agate_table(table, "actual")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/dbt/task/test.py", line 336, in _get_unit_test_agate_table
unit_test_table = result_table.where(
^^^^^^^^^^^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/agate/table/where.py", line 21, in where
if test(row):
^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/dbt/task/test.py", line 337, in <lambda>
lambda row: row["actual_or_expected"] == actual_or_expected
~~~^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeremy/git/dbt-basic/venv_dbt_1.8.latest/lib/python3.11/site-packages/agate/mapped_sequence.py", line 88, in __getitem__
return self.dict()[key]
~~~~~~~~~~~^^^^^
KeyError: 'actual_or_expected'
03:37:06 2 of 3 ERROR bar::test_is_valid ................................................ [ERROR in 7.74s]
03:37:06 Finished running node unit_test.my_dbt_project.bar.test_is_valid
03:37:06 Began running node model.my_dbt_project.bar
03:37:06 3 of 3 SKIP relation dbt_jyeo.bar .............................................. [SKIP]
03:37:06 Finished running node model.my_dbt_project.bar
03:37:06 Connection 'master' was properly closed.
03:37:06 Connection 'unit_test.my_dbt_project.bar.test_is_valid' was properly closed.
03:37:06 Connection 'list_analytics_dbt_jyeo' was properly closed.
03:37:06
03:37:06 Finished running 2 table models, 1 unit test, 1 project hook in 0 hours 0 minutes and 23.85 seconds (23.85s).
03:37:06 Command end result
03:37:06
03:37:06 Completed with 1 error and 0 warnings:
03:37:06
03:37:06 'actual_or_expected'
03:37:06
03:37:06 Done. PASS=1 WARN=0 ERROR=1 SKIP=1 TOTAL=3

The test retrieval query is the same in both runs, but the resulting "actual_or_expected" column name comes back lower- or uppercased depending on the param: when the param is on, the returned result set is uppercased, so dbt's lookup of the lowercase key fails.

Workaround: see below, where we override the default built-in macro that generates the unit test SQL.
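The failure can be reproduced outside dbt with a plain dict lookup; `row_param_off`, `row_param_on`, and the helper below are illustrative names, not dbt internals:

```python
# When QUOTED_IDENTIFIERS_IGNORE_CASE = true, Snowflake returns the column
# name uppercased even though the query aliased it as a quoted
# "actual_or_expected", so the row dict no longer has the lowercase key.
row_param_off = {"ID": 2, "actual_or_expected": "actual"}   # param = false
row_param_on = {"ID": 2, "ACTUAL_OR_EXPECTED": "actual"}    # param = true

def get_actual_or_expected(row):
    # Mirrors the lookup in dbt/task/test.py: row["actual_or_expected"]
    return row["actual_or_expected"]

get_actual_or_expected(row_param_off)   # works: returns "actual"
try:
    get_actual_or_expected(row_param_on)
except KeyError as err:
    print(f"KeyError: {err}")   # the same KeyError shown in the traceback
```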
Hello, in my case the param is TRUE. Thanks for the information. I am not an admin, so I can't set it to false.
I was just testing the unit test feature now, and I stumbled on the same error.
This is the system config:
I don't want, and don't intend, to change any account parameters for the sake of unit testing without first evaluating the side effects, so I would expect this to be solved on the dbt side. Is there any timeline for fixing this bug?
I tried a workaround by altering the session parameter in an on-run-start hook:

```yaml
# dbt_project.yml
on-run-start:
  [
    "ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE",
    "{{ check_param() }}",
  ]
```

```sql
-- macros/check_params.sql
{% macro check_param() %}
    {% set r = run_query("show parameters like 'QUOTED_IDENTIFIERS_IGNORE_CASE';") %}
    {% if execute %} {% do r.print_table() %} {% endif %}
{% endmacro %}
```

Log output (same error):

Config:
@sqnico I think that's because of dbt's multi-threaded connection pooling mechanism: when the unit test runs, it's on a different session from the one that ran the alter session statement. You can probably confirm this in Snowflake Query History's session id column.
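A toy sketch of that pooling behavior; `FakeConnection` is purely illustrative and is not a dbt or Snowflake API:

```python
# Each pooled connection is its own Snowflake session with its own copy of
# session parameters; an ALTER SESSION on one connection does not touch others.
class FakeConnection:
    def __init__(self):
        # Session-level view of the account-level parameter (param = true)
        self.quoted_identifiers_ignore_case = True

    def execute(self, sql):
        if "quoted_identifiers_ignore_case = false" in sql.lower():
            self.quoted_identifiers_ignore_case = False

pool = [FakeConnection(), FakeConnection()]

# The on-run-start hook runs on one connection...
pool[0].execute("alter session set QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE")

# ...but the unit test may be handed a different connection from the pool,
# where the parameter is still true, so the hook workaround has no effect.
print(pool[0].quoted_identifiers_ignore_case, pool[1].quoted_identifiers_ignore_case)
# → False True
```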
@jeremyyeo, thank you for your message. I checked, you are correct (see below).
@sqnico can you add this macro to your project and try again?

```sql
-- macros/get_unit_test.sql
{% macro get_unit_test_sql(main_sql, expected_fixture_sql, expected_column_names) -%}
-- For accounts that have this param set to true, we need to make it false for the query.
-- https://github.com/dbt-labs/dbt-snowflake/issues/1160
alter session set QUOTED_IDENTIFIERS_IGNORE_CASE = false;

-- Build actual result given inputs
with dbt_internal_unit_test_actual as (
  select
    {% for expected_column_name in expected_column_names %}{{ expected_column_name }}{% if not loop.last -%},{% endif %}{%- endfor -%}, {{ dbt.string_literal("actual") }} as {{ adapter.quote("actual_or_expected") }}
  from (
    {{ main_sql }}
  ) _dbt_internal_unit_test_actual
),
-- Build expected result
dbt_internal_unit_test_expected as (
  select
    {% for expected_column_name in expected_column_names %}{{ expected_column_name }}{% if not loop.last -%}, {% endif %}{%- endfor -%}, {{ dbt.string_literal("expected") }} as {{ adapter.quote("actual_or_expected") }}
  from (
    {{ expected_fixture_sql }}
  ) _dbt_internal_unit_test_expected
)
-- Union actual and expected results
select * from dbt_internal_unit_test_actual
union all
select * from dbt_internal_unit_test_expected
{%- endmacro %}
```

I'm overriding the default macro from https://github.com/dbt-labs/dbt-adapters/blob/4b3806818f3831bbd0206a6b22fdcb4fe2a505c5/dbt/include/global_project/macros/materializations/tests/helpers.sql#L23 and adding the session param alteration within it, so the alter session statement runs in the same "session" as the unit test query.
@jeremyyeo Amazing, this is a great workaround! It works! Thank you so much for your help.
I reached out to @dbt-labs/adapters internally to see if they have a preferred solution here.

From an offline sync with @mikealfare @colin-rogers-dbt, we determined that this is a Snowflake issue, so transferred it over.

Will address as part of #1181
Is this a new bug in dbt-core?
Current Behavior
No significant log output without the debug log level, and the error is unexpected.
14:14:13 Finished running 4 data tests, 1 unit test, 1 project hook in 0 hours 0 minutes and 9.68 seconds (9.68s).
14:14:13 Command end result
14:14:13
14:14:13 Completed with 1 error and 0 warnings:
14:14:13
14:14:13 'actual_or_expected'
14:14:13
14:14:13 Done. PASS=4 WARN=0 ERROR=1 SKIP=0 TOTAL=5
Expected Behavior
The test should complete successfully, like:
06:57:00 Finished running 4 data tests, 1 unit test, 1 project hook in 0 hours 0 minutes and 11.69 seconds (11.69s).
06:57:00
06:57:00 Completed successfully
06:57:00
06:57:00 Done. PASS=5 WARN=0 ERROR=0 SKIP=0 TOTAL=5
Steps To Reproduce
With this config:
Running the test like this:
With this output:
Relevant log output
Environment
Which database adapter are you using with dbt?
snowflake
Additional Context
I've had to replace the `_get_unit_test_agate_table` method to parse all dict keys to lowercase, like this:

The original method is:
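A minimal sketch of that key-lowercasing idea; `lowercase_keys` is a hypothetical helper, not dbt's actual `_get_unit_test_agate_table` implementation:

```python
def lowercase_keys(row):
    """Return a copy of the row dict with every column name lowercased."""
    return {key.lower(): value for key, value in row.items()}

# A row as returned when QUOTED_IDENTIFIERS_IGNORE_CASE = true:
uppercased_row = {"ID": 2, "ACTUAL_OR_EXPECTED": "actual"}
normalized = lowercase_keys(uppercased_row)
print(normalized["actual_or_expected"])  # → actual
```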