Support delta lake format #30
Thanks for the suggestion @tongqqiu! I love the idea of being able to use Delta-specific DML in an execution environment like Databricks. dbt has the ability to define incremental strategies that control how incremental models should be built. Is this something you're interested in contributing? We're super happy to help out if so!
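A model adopting such a strategy might be configured along the lines of the sketch below. Note this is illustrative only: the `file_format` and `incremental_strategy` config keys, the model name, and the source/column names are assumptions, not a confirmed dbt-spark API.

```sql
-- models/my_delta_model.sql
-- Hypothetical sketch; config keys and names are illustrative,
-- not a confirmed dbt-spark API.
{{
  config(
    materialized='incremental',
    file_format='delta',
    incremental_strategy='merge',
    unique_key='id'
  )
}}

select id, event_ts, payload
from {{ source('events', 'raw_events') }}

{% if is_incremental() %}
  -- only pick up rows newer than what the target already holds
  where event_ts > (select max(event_ts) from {{ this }})
{% endif %}
```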
@drewbanin When a model is materialized as a "table", the current behavior is to drop and recreate the table. Since Spark doesn't support transactions, it is not ideal to drop the table first. The alternative is to use an "INSERT OVERWRITE" statement: https://docs.databricks.com/spark/latest/spark-sql/language-manual/insert.html. It is similar to what you did for the incremental type, just without needing partitions. It keeps the table live, and the Delta format ensures ACID at the single-table level as well. Any suggestions on how to make that change? BTW, setting the file format to delta works just as well as the default parquet.
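The suggested approach could be sketched as follows. The table and column names here are made up for illustration; the point is that the target table is never dropped, only its contents are replaced.

```sql
-- Sketch: overwrite a Delta table's contents atomically instead of
-- dropping and recreating it (names are hypothetical).
CREATE TABLE IF NOT EXISTS analytics.my_table (id INT, value STRING)
USING delta;

-- Replaces all rows; the table itself stays live throughout,
-- and Delta guarantees ACID semantics at the single-table level.
INSERT OVERWRITE TABLE analytics.my_table
SELECT id, value
FROM analytics.my_table__staging;
```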
Hey @tongqqiu, to follow up on this issue:
@jtcohen6 Sounds all good to me.
The Delta format supports normal merging:
https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html
I wish we could support something like this.
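For reference, the MERGE INTO statement linked above looks roughly like this sketch (table and column names are made up for illustration):

```sql
-- Sketch of Delta's MERGE INTO upsert syntax (hypothetical names).
MERGE INTO analytics.target AS t
USING analytics.updates AS s
  ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET t.value = s.value
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (s.id, s.value);
```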