Standardize timestamp column types in the Archival materialization #938
Comments

Thanks for the writeup @foundinblank. You indicated that changing […]

Changing both to […]

Changed the source tables from […]

Updating […]
This is prioritized for the Wilt Chamberlain release.

Databases like Redshift and Postgres only have a single type to represent timestamps, whereas Snowflake and BigQuery (and presumably other plugin-able databases) have more nuanced ways of representing time. This particular issue arises in archival when dbt tries to union together "archived" data (with a known […] type) and new, not-yet-archived data. Instead of supplying a timestamp type for the initially null […] column, dbt should […].

Note: dbt should not try to coerce types if a source table […].

Test cases: […]
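A minimal sketch of the failure mode described above, with invented table and column names (the code spans elided from this comment are not preserved, so the exact columns are assumptions): the archived side of the union carries a concrete Snowflake timestamp type, while the new side supplies a bare null::timestamp, which resolves to timestamp_ntz under Snowflake's default settings.

```sql
-- Sketch only: hypothetical names, not dbt's actual generated SQL.

-- Archived rows: valid_to was materialized earlier, e.g. as timestamp_ltz.
select id, updated_at, valid_from, valid_to
from archives.users_archived

union all

-- New rows: the hard-coded null::timestamp resolves to timestamp_ntz by
-- default, which Snowflake may refuse to reconcile with the
-- timestamp_ltz column selected above.
select id,
       updated_at,
       updated_at      as valid_from,
       null::timestamp as valid_to
from analytics.users;
```

One way to avoid hard-coding the type is to take it from an existing column, e.g. case when false then updated_at else null end as valid_to, which yields a null carrying updated_at's type; whether that matches the fix intended in this comment is an assumption.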
Issue

Issue description

Running dbt archive (on Snowflake) throws a cryptic error related to timezone types.

Results

[…]
System information

The output of dbt --version: […]

The operating system you're running on: OS X High Sierra 10.13.6
The python version you're using: Python 3.5.4
Steps to reproduce
Slack discussion here: https://getdbt.slack.com/archives/C2JRRQDTL/p1534508532000100
When I first got the error, I cast my created_at field in the source table as timestamp_ltz, and verified that Snowflake returned that data type when querying the source table. It did not fix the error, though.
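A sketch of that attempted cast, with invented table and column names (the reporter's actual statements are not shown in the issue):

```sql
-- Rebuild the source table with created_at cast to timestamp_ltz
-- (illustrative names only):
create or replace table analytics.users_src as
select id,
       name,
       created_at::timestamp_ltz as created_at
from analytics.users_raw;

-- Verify the type Snowflake now reports for the column:
describe table analytics.users_src;
```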
It could be something related to the null::timestamp command. This appears in the last query that failed; however, changing null::timestamp to null::timestamp_ltz, etc., doesn't fix the error. This is the query that failed: […]
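To confirm what type each typed null actually resolves to, Snowflake's SYSTEM$TYPEOF function can be used; note that the bare timestamp alias follows the TIMESTAMP_TYPE_MAPPING session parameter, which defaults to timestamp_ntz. This is a sketch, not part of the original report:

```sql
-- Inspect the resolved type of each typed null (assuming the default
-- TIMESTAMP_TYPE_MAPPING = TIMESTAMP_NTZ):
select system$typeof(null::timestamp)     as plain_ts,  -- TIMESTAMP_NTZ(9)...
       system$typeof(null::timestamp_ltz) as ltz_ts;    -- TIMESTAMP_LTZ(9)...
```

A mismatch between the resolved type here and the archived table's column type would line up with the union failure described in the prioritization comment above.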