diff --git a/README.md b/README.md
index 4b5dc4e95..890c5cc48 100644
--- a/README.md
+++ b/README.md
@@ -127,7 +127,7 @@ The following configurations can be supplied to models run with the dbt-spark pl
 **Incremental Models**
 
 To use incremental models, specify a `partition_by` clause in your model config. The default incremental strategy used is `insert_overwrite`, which will overwrite the partitions included in your query. Be sure to re-select _all_ of the relevant
-data for a partition when using the `insert_overwrite` strategy.
+data for a partition when using the `insert_overwrite` strategy. If a `partition_by` config is not specified, dbt will overwrite the entire table as an atomic operation, replacing it with new data of the same schema. This is analogous to `truncate` + `insert`.
 
 ```
 {{ config(
diff --git a/dbt/include/spark/macros/materializations/incremental.sql b/dbt/include/spark/macros/materializations/incremental.sql
index f5d7335fb..000659a8f 100644
--- a/dbt/include/spark/macros/materializations/incremental.sql
+++ b/dbt/include/spark/macros/materializations/incremental.sql
@@ -100,9 +100,11 @@
     {% do dbt_spark_validate_merge(file_format) %}
   {% endif %}
 
-  {% call statement() %}
-    set spark.sql.sources.partitionOverwriteMode = DYNAMIC
-  {% endcall %}
+  {% if config.get('partition_by') %}
+    {% call statement() %}
+      set spark.sql.sources.partitionOverwriteMode = DYNAMIC
+    {% endcall %}
+  {% endif %}
 
   {% call statement() %}
     set spark.sql.hive.convertMetastoreParquet = false
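
For reference, a minimal sketch of an incremental model that exercises the partitioned `insert_overwrite` path described above. The model name, source table, column names, and `file_format` choice are illustrative assumptions, not part of this change:

```sql
-- models/daily_logins.sql (hypothetical model)
{{ config(
    materialized='incremental',
    partition_by=['date_day'],
    file_format='parquet'
) }}

-- Each run re-selects *all* rows for the affected date_day partitions,
-- since insert_overwrite replaces those partitions wholesale.
select
    date_day,
    count(*) as login_count
from {{ source('events', 'logins') }}
group by date_day
```

With `partition_by` set, the macro change above enables Spark's dynamic partition overwrite mode; without it, the whole table is replaced atomically.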