
Commit

Merge pull request #106 from fivetran/explore/drop-integration-test-schemas

exploring integration test schema cleanup
fivetran-jamie authored Apr 18, 2023
2 parents e384059 + 20e3de0 commit cc710ca
Showing 6 changed files with 101 additions and 4 deletions.
8 changes: 7 additions & 1 deletion CHANGELOG.md
@@ -1,5 +1,11 @@
# dbt_fivetran_utils v0.4.3
# dbt_fivetran_utils v0.4.4

## Feature Updates
[PR #106](https://github.com/fivetran/dbt_fivetran_utils/pull/106) introduces the following two new macros:
- [drop_schemas_automation](https://github.com/fivetran/dbt_fivetran_utils/tree/explore/drop-integration-test-schemas#drop_schemas_automation-source)
- [wrap_in_quotes](https://github.com/fivetran/dbt_fivetran_utils/tree/explore/drop-integration-test-schemas#wrap_in_quotes-source)

# dbt_fivetran_utils v0.4.3

## Feature Updates
- ([PR #100](https://github.com/fivetran/dbt_fivetran_utils/pull/100)) Expanded the `union_data` macro to create an empty table if none of the provided schemas or databases contain a source table. If the source table does not exist anywhere, `union_data` will return a **completely** empty table (i.e., `limit 0`) with just one string column (`_dbt_source_relation`) and raise a compiler warning that the respective staging model is empty.
40 changes: 39 additions & 1 deletion README.md
@@ -21,7 +21,7 @@ This package includes macros that are used across Fivetran's dbt packages. This

# 🎯 How do I use the dbt package?
## Step 1: Installing the Package
Include the following fivetran_utils package version in your `packages.yml`. Please note that this package is installed by default within **all** Fivetran dbt packages.
Include the following fivetran_utils package version in your `packages.yml` if you do not have any other Fivetran dbt package dependencies. Please note that this package is installed by default within **all** Fivetran dbt packages.
> Check [dbt Hub](https://hub.getdbt.com/) for the latest installation instructions, or [read the dbt docs](https://docs.getdbt.com/docs/package-management) for more information on installing packages.
```yaml
packages:
@@ -68,10 +68,12 @@ dispatch:
- [timestamp\_add (source)](#timestamp_add-source)
- [timestamp\_diff (source)](#timestamp_diff-source)
- [try\_cast (source)](#try_cast-source)
- [wrap\_in\_quotes (source)](#wrap_in_quotes-source)
- [SQL and field generators](#sql-and-field-generators)
- [add\_dbt\_source\_relation (source)](#add_dbt_source_relation-source)
- [add\_pass\_through\_columns (source)](#add_pass_through_columns-source)
- [calculated\_fields (source)](#calculated_fields-source)
- [drop\_schemas\_automation (source)](#drop_schemas_automation-source)
- [dummy\_coalesce\_value (source)](#dummy_coalesce_value-source)
- [fill\_pass\_through\_columns (source)](#fill_pass_through_columns-source)
- [fill\_staging\_columns (source)](#fill_staging_columns-source)
@@ -300,7 +302,16 @@ This macro allows a field to be cast to a specified datatype. If the datatype is
* `type` (required): The datatype you want to try to cast the base field to.

----
### wrap_in_quotes ([source](macros/wrap_in_quotes.sql))
This macro takes a SQL object name (i.e., a database, schema, or column) and returns it wrapped in the appropriate quote characters for the target warehouse (upper-casing it as well on Snowflake).

**Usage:**
```sql
{{ fivetran_utils.wrap_in_quotes(object_to_quote="reserved_keyword_mayhaps") }}
```
**Args:**
* `object_to_quote` (required): SQL object you want to quote.
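
To make the warehouse-specific behavior concrete, here is a minimal Python sketch (illustrative only, not part of the package) of the quoting rules the adapter dispatch encodes: backticks for BigQuery, Spark, and Databricks (the default), upper-cased double quotes for Snowflake, and plain double quotes for Redshift and Postgres:

```python
def wrap_in_quotes(object_to_quote: str, adapter: str) -> str:
    """Illustrative sketch of the fivetran_utils.wrap_in_quotes dispatch."""
    if adapter == "snowflake":
        # Snowflake resolves unquoted identifiers to upper case,
        # so the quoted name is upper-cased to match
        return '"' + object_to_quote.upper() + '"'
    if adapter in ("redshift", "postgres"):
        return '"' + object_to_quote + '"'
    # default: bigquery, spark, and databricks use backticks
    return "`" + object_to_quote + "`"

print(wrap_in_quotes("group", "snowflake"))  # prints "GROUP"
```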
----
## SQL and field generators
These macros create SQL or fields to be included when running the package.
### add_dbt_source_relation ([source](macros/add_dbt_source_relation.sql))
@@ -344,6 +355,33 @@ vars:
* `variable` (required): The variable containing the calculated field `name` and `transform_sql`.

----

### drop_schemas_automation ([source](macros/drop_schemas_automation.sql))
This macro was created to help clean up schemas in our integration test environments. It drops all schemas whose names are `like` the `target.schema` prefix. By default it drops the target schema as well, but this can be configured.

**Usage:**
At the end of a Buildkite integration test job in `.buildkite/scripts/run_models.sh`:
```sh
# do all the setup, dbt seed, compile, run, test steps beforehand...
dbt run-operation fivetran_utils.drop_schemas_automation --target "$db"
```

As a Fivetran Transformation job step in a `deployment.yml`:
```yml
jobs:
- name: cleanup
schedule: '0 0 * * 0' # The example will run once a week at 00:00 on Sunday.
steps:
- name: drop schemas but leave target
command: dbt run-operation fivetran_utils.drop_schemas_automation --vars '{"drop_target_schema": False}'
- name: drop schemas including target
command: dbt run-operation fivetran_utils.drop_schemas_automation
```
**Args:**
* `drop_target_schema` (optional): Boolean that is `true` by default. If `false`, the target schema will not be dropped.
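
The matching rule can be sketched in Python (hypothetical helper, not part of the package): the macro builds a SQL `like` pattern from the lower-cased `target.schema`, and appends an underscore to the prefix when `drop_target_schema` is false so the exact target schema no longer matches. This approximation treats the underscore literally, whereas SQL `like` treats `_` as a single-character wildcard:

```python
def schemas_to_drop(all_schemas, target_schema, drop_target_schema=True):
    """Illustrative approximation of the macro's schema filter."""
    prefix = target_schema.lower()
    if not drop_target_schema:
        prefix += "_"  # the exact target schema no longer matches
    return [s for s in all_schemas if s.lower().startswith(prefix)]

schemas = ["int_tests", "int_tests_stg", "int_tests_source", "prod"]
schemas_to_drop(schemas, "int_tests")                            # all three int_tests* schemas
schemas_to_drop(schemas, "int_tests", drop_target_schema=False)  # only the two suffixed schemas
```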

----

### dummy_coalesce_value ([source](macros/dummy_coalesce_value.sql))
This macro creates a dummy coalesce value based on the data type of the field. See below for the respective data type and dummy values:
- String = 'DUMMY_STRING'
2 changes: 1 addition & 1 deletion dbt_project.yml
@@ -1,4 +1,4 @@
name: 'fivetran_utils'
version: '0.4.3'
version: '0.4.4'
config-version: 2
require-dbt-version: [">=1.3.0", "<2.0.0"]
2 changes: 1 addition & 1 deletion integration_tests/dbt_project.yml
@@ -1,5 +1,5 @@
name: 'fivetran_utils_integration_tests'
version: '0.4.3'
version: '0.4.4'
config-version: 2
profile: 'integration_tests'

31 changes: 31 additions & 0 deletions macros/drop_schemas_automation.sql
@@ -0,0 +1,31 @@
{% macro drop_schemas_automation(drop_target_schema=true) %}
{{ return(adapter.dispatch('drop_schemas_automation', 'fivetran_utils')(drop_target_schema)) }}
{%- endmacro %}

{% macro default__drop_schemas_automation(drop_target_schema=true) %}

{% set fetch_list_sql %}
{% if target.type not in ('databricks', 'spark') %}
select schema_name
from
{{ wrap_in_quotes(target.database) }}.INFORMATION_SCHEMA.SCHEMATA
where lower(schema_name) like '{{ target.schema | lower }}{%- if not drop_target_schema -%}_{%- endif -%}%'
{% else %}
SHOW SCHEMAS LIKE '{{ target.schema }}{%- if not drop_target_schema -%}_{%- endif -%}*'
{% endif %}
{% endset %}

{% set results = run_query(fetch_list_sql) %}

{% if execute %}
{% set results_list = results.columns[0].values() %}
{% else %}
{% set results_list = [] %}
{% endif %}

{% for schema_to_drop in results_list %}
{% do adapter.drop_schema(api.Relation.create(database=target.database, schema=schema_to_drop)) %}
{{ print('Schema ' ~ schema_to_drop ~ ' successfully dropped from the ' ~ target.database ~ ' database.\n')}}
{% endfor %}

{% endmacro %}
22 changes: 22 additions & 0 deletions macros/wrap_in_quotes.sql
@@ -0,0 +1,22 @@
{%- macro wrap_in_quotes(object_to_quote) -%}

{{ return(adapter.dispatch('wrap_in_quotes', 'fivetran_utils')(object_to_quote)) }}

{%- endmacro -%}

{%- macro default__wrap_in_quotes(object_to_quote) -%}
{# bigquery, spark, databricks #}
`{{ object_to_quote }}`
{%- endmacro -%}

{%- macro snowflake__wrap_in_quotes(object_to_quote) -%}
"{{ object_to_quote | upper }}"
{%- endmacro -%}

{%- macro redshift__wrap_in_quotes(object_to_quote) -%}
"{{ object_to_quote }}"
{%- endmacro -%}

{%- macro postgres__wrap_in_quotes(object_to_quote) -%}
"{{ object_to_quote }}"
{%- endmacro -%}
