- This file provides a full account of all changes to dbt-spark.
- Changes are listed under the (pre)release in which they first appear. Subsequent releases include changes from previous releases.
- "Breaking changes" listed under a version may require action from end users or external maintainers when upgrading to that version.
- Do not edit this file directly. This file is auto-generated using changie. For details on how to document a change, see the contributing guide.
- Persist Column level comments when creating views (#372)
- Replace sasl with pure-sasl for PyHive (#818)
- Wrap expression for check constraints in parentheses (#7480)
- Support for data type constraints in Spark, following the dbt Core feature #6271 (#558)
- Enforce contracts on models materialized as tables and views (#639, #654)
- Modify adapter to support unified constraint fields (#655)
- Modify order of columns in create_table_as to match contract (#671)
- Support for Iceberg v2 tables. Added the ability to use multiple join conditions, allowing multiple columns to make a row distinct. (#294)
- Use take() instead of collect() on dataframes to improve performance (#526)
- Add merge_exclude_columns tests (#00)
- Fix pyodbc type_code -> data_type conversion (#665)
- Fixed issue where table materialization was not always properly refreshing for non-admin users on Databricks (#725)
- Allow thrift 0.16.0 (#605)
- Update ipdb requirement from ~=0.13.11 to ~=0.13.13 (#677)
- Update wheel requirement from ~=0.38 to ~=0.40 (#680)
- Update pre-commit requirement from ~=2.21 to ~=3.2 (#687)
- Bump mypy from 1.0.1 to 1.1.1 (#675)
- Update types-pytz requirement from ~=2022.7 to ~=2023.2 (#697)
- Update pytz requirement from ~=2022.7 to ~=2023.2 (#696)
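Several entries above (data type constraints #558, contract enforcement #639/#654, parenthesized check expressions #7480) relate to dbt's model contracts. A minimal sketch of how a contracted model might be configured in a `schema.yml` file; the model and column names here are hypothetical, not taken from the changes above:

```yaml
models:
  - name: my_model  # hypothetical model name
    config:
      contract:
        enforced: true  # enforce column names and data types at build time
    columns:
      - name: id
        data_type: int
        constraints:
          - type: not_null
          - type: check
            expression: "id > 0"  # check expressions are wrapped in parentheses when rendered (#7480)
```

With `contract.enforced: true`, the adapter compares the model's returned columns against the declared `data_type`s before materializing the table or view, and applies the listed constraints where the platform supports them.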
For information on prior major and minor releases, see their changelogs: