Merge branch 'main' into debuggability-alert-notifications
Siddhanttimeline authored Oct 10, 2024
2 parents bd2d6a5 + 384b624 commit 091c1b3
Showing 74 changed files with 541 additions and 162 deletions.
2 changes: 1 addition & 1 deletion SECURITY.md
@@ -7,8 +7,8 @@ currently being supported with security updates.

| Version | Supported |
| ------- | ------------------ |
| 1.5.x | :white_check_mark: |
| 1.4.x | :white_check_mark: |
| 1.3.x | :white_check_mark: |

## Reporting a Vulnerability

2 changes: 1 addition & 1 deletion ingestion/setup.py
@@ -183,7 +183,7 @@
VERSIONS["azure-storage-blob"],
VERSIONS["azure-identity"],
},
"db2": {"ibm-db-sa~=0.3"},
"db2": {"ibm-db-sa~=0.4.1", "ibm-db>=2.0.0"},
"db2-ibmi": {"sqlalchemy-ibmi~=0.9.3"},
"databricks": {
VERSIONS["sqlalchemy-databricks"],
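For context on why the `db2` extra now needs both packages: `ibm-db-sa` supplies the SQLAlchemy dialect, while `ibm-db` is the underlying DBAPI driver it wraps. A minimal sketch of a connection made with that pairing (the host, credentials, and database below are illustrative placeholders, not part of this change):

```python
from sqlalchemy import create_engine, text

# ibm-db-sa registers the "db2+ibm_db" dialect; ibm-db provides the driver.
# Connection details here are placeholders for illustration only.
engine = create_engine("db2+ibm_db://db2inst1:password@localhost:50000/SAMPLE")

with engine.connect() as conn:
    # SYSIBM.SYSDUMMY1 is DB2's one-row dummy table, handy for a smoke test.
    print(conn.execute(text("SELECT 1 FROM SYSIBM.SYSDUMMY1")).scalar())
```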
@@ -18,7 +18,7 @@
from typing import Any, Optional
from urllib.parse import quote_plus

from pydantic import SecretStr
from pydantic import SecretStr, ValidationError
from sqlalchemy.engine import Engine

from metadata.generated.schema.entity.automations.workflow import (
@@ -187,7 +187,22 @@ def test_connection(
    of a metadata workflow or during an Automation Workflow
    """

    if service_connection.metastoreConnection:
    if service_connection.metastoreConnection and isinstance(
        service_connection.metastoreConnection, dict
    ):
        try:
            service_connection.metastoreConnection = MysqlConnection.model_validate(
                service_connection.metastoreConnection
            )
        except ValidationError:
            try:
                service_connection.metastoreConnection = (
                    PostgresConnection.model_validate(
                        service_connection.metastoreConnection
                    )
                )
            except ValidationError:
                raise ValueError("Invalid metastore connection")
    engine = get_metastore_connection(service_connection.metastoreConnection)

    test_connection_db_schema_sources(
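The new branch above accepts a metastore connection that arrives as a plain dict and validates it first as MySQL, then as Postgres, before giving up. A standalone sketch of that fallback pattern with stand-in pydantic models (the field names are illustrative, not the generated `MysqlConnection`/`PostgresConnection` schemas):

```python
from pydantic import BaseModel, ValidationError


# Stand-ins for the generated connection schemas, for illustration only.
class MysqlLikeConnection(BaseModel):
    hostPort: str
    username: str


class PostgresLikeConnection(BaseModel):
    hostPort: str
    username: str
    database: str


def coerce_metastore(raw: dict) -> BaseModel:
    """Try each supported metastore model in order; raise if none matches."""
    for model in (MysqlLikeConnection, PostgresLikeConnection):
        try:
            return model.model_validate(raw)
        except ValidationError:
            continue
    raise ValueError("Invalid metastore connection")


print(coerce_metastore({"hostPort": "localhost:3306", "username": "openmetadata"}))
```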
@@ -412,7 +412,7 @@ def compute(self):
        )
        res = self.runner._session.execute(query).first()
        if not res:
            return None
            return super().compute()
        if res.rowCount is None or (
            res.rowCount == 0 and self._entity.tableType == TableType.View
        ):
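The one-line change above swaps an early `return None` for a fallback to the parent implementation when the dialect-specific stats query returns no row. A toy sketch of that pattern (class names and values are placeholders, not the profiler's real classes):

```python
class RowCountMetric:
    """Generic metric: computes the value the slow but reliable way."""

    def compute(self):
        return self._count_rows()

    def _count_rows(self):
        return 42  # placeholder for e.g. a SELECT COUNT(*) query


class SystemTableRowCountMetric(RowCountMetric):
    """Tries a cheap system-table lookup first."""

    def _system_table_lookup(self):
        return None  # pretend the stats table had no entry for this table

    def compute(self):
        res = self._system_table_lookup()
        if not res:
            # Previously this returned None; now it falls back to the
            # generic computation so a missing stats row is not fatal.
            return super().compute()
        return res


print(SystemTableRowCountMetric().compute())  # -> 42
```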
@@ -2,6 +2,6 @@

{% connectorInfoCard name="S3 Storage" stage="PROD" href="/connectors/storage/s3" platform="OpenMetadata" / %}
{% connectorInfoCard name="ADLS" stage="PROD" href="/connectors/storage/adls" platform="Collate" / %}
{% connectorInfoCard name="GCS" stage="PROD" href="/connectors/storage/gcs" platform="Collate" / %}
{% connectorInfoCard name="GCS" stage="PROD" href="/connectors/storage/gcs" platform="OpenMetadata" / %}

{% /connectorsListContainer %}
@@ -2,6 +2,6 @@

{% connectorInfoCard name="S3 Storage" stage="PROD" href="/connectors/storage/s3" platform="OpenMetadata" / %}
{% connectorInfoCard name="ADLS" stage="PROD" href="/connectors/storage/adls" platform="Collate" / %}
{% connectorInfoCard name="GCS" stage="PROD" href="/connectors/storage/gcs" platform="Collate" / %}
{% connectorInfoCard name="GCS" stage="PROD" href="/connectors/storage/gcs" platform="OpenMetadata" / %}

{% /connectorsListContainer %}
@@ -112,11 +112,11 @@ This is a sample config for Glue:

```yaml {% isCodeBlock=true %}
source:
  type: glue
  type: gluepipeline
  serviceName: local_glue
  serviceConnection:
    config:
      type: Glue
      type: GluePipeline
      awsConfig:
```
```yaml {% srNumber=1 %}
@@ -112,11 +112,11 @@ This is a sample config for Glue:

```yaml {% isCodeBlock=true %}
source:
  type: glue
  type: gluepipeline
  serviceName: local_glue
  serviceConnection:
    config:
      type: Glue
      type: GluePipeline
      awsConfig:
```
```yaml {% srNumber=1 %}
@@ -0,0 +1,78 @@
---
title: Collate Automations Documentation
slug: /how-to-guides/data-governance/automation
collate: true
---

# Collate Automations

{% youtube videoId="ug08aLUyTyE" start="0:00" end="14:52" width="560px" height="315px" /%}

## Overview

Collate's **Automation** feature is a powerful tool designed to simplify and streamline metadata management tasks. By automating repetitive actions such as assigning owners, domains, or tagging data, Collate helps maintain consistency in metadata across an organization's datasets. These automations reduce manual effort and ensure that metadata is always up-to-date, accurate, and governed according to predefined policies.

## Why Automations are Useful

Managing metadata manually can be challenging, particularly in dynamic environments where data constantly evolves. Collate's Automation feature addresses several key pain points:

- **Maintaining Consistency**: Automation helps ensure that metadata such as ownership, tags, and descriptions are applied consistently across all data assets.
- **Saving Time**: Automations allow data teams to focus on higher-value tasks by eliminating the need for manual updates and maintenance.
- **Enforcing Governance Policies**: Automations help ensure that data follows organizational policies at all times by automatically applying governance rules (e.g., assigning data owners or domains).
- **Data Quality and Accountability**: Data quality suffers without clear ownership. Automating ownership assignments helps ensure that data quality issues are addressed efficiently.

## Key Use Cases for Collate Automations

### 1. Bulk Ownership and Domain Assignment

{% image
src="/images/v1.5/how-to-guides/governance/bulk-ownership-and.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: Many data assets lack proper ownership and domain assignment, leading to governance and accountability issues. Manually assigning owners can be error-prone and time-consuming.
- **Solution**: Automations can bulk-assign ownership and domains to datasets, ensuring all data assets are correctly categorized and owned. This process can be applied to tables, schemas, or other assets within Collate.
- **Benefit**: This use case ensures data assets have a designated owner and are organized under the appropriate domain, making data more discoverable and accountable.

### 2. Bulk Tagging and Glossary Term Assignment

{% image
src="/images/v1.5/how-to-guides/governance/bulk-tagging-glossary.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: Manually applying the same tags or glossary terms to multiple datasets can be inefficient and inconsistent.
- **Solution**: Automations allow users to bulk-apply tags (e.g., PII) or glossary terms (e.g., Customer ID) to specific datasets, ensuring uniformity across the platform.
- **Benefit**: This automation reduces the risk of missing important tags, such as PII-sensitive labels, and ensures that key metadata elements are applied consistently across datasets.

### 3. Metadata Propagation via Lineage

{% image
src="/images/v1.5/how-to-guides/governance/metadata-propogation.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: When metadata such as tags, descriptions, or glossary terms are updated in one part of the data lineage, they may not be propagated across related datasets, leading to inconsistencies.
- **Solution**: Use automations to propagate metadata across related datasets, ensuring that all relevant data inherits the correct metadata properties from the source dataset.
- **Benefit**: Metadata consistency is ensured across the entire data lineage, reducing the need for manual updates and maintaining a single source of truth.

### 4. Automatic PII Detection and Tagging

{% image
src="/images/v1.5/how-to-guides/governance/automatic-detection.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: Manually identifying and tagging Personally Identifiable Information (PII) across large datasets is labor-intensive and prone to errors.
- **Solution**: Automations can automatically detect PII data (e.g., emails, usernames) and apply relevant tags to ensure that sensitive data is flagged appropriately for compliance.
- **Benefit**: Ensures compliance with data protection regulations by consistently tagging sensitive data, reducing the risk of non-compliance.

## Best Practices

- **Validate Assets Before Applying Actions**: Always use the **Explore** page to verify the assets that will be affected by the automation. This ensures that only the intended datasets are updated.
- **Use Automation Logs**: Regularly check the **Recent Runs** logs to monitor automation activity and ensure that they are running as expected.
- **Propagate Metadata Thoughtfully**: When propagating metadata via lineage, make sure that the source metadata is correct before applying it across multiple datasets.
@@ -0,0 +1,67 @@
---
title: How to Set Up Automations in OpenMetadata
slug: /how-to-guides/data-governance/automation/set-up-automation
collate: true
---

# How to Set Up Automations in Collate

### Step 1: Access the Automations Section
In the OpenMetadata UI, navigate to **Govern > Automations**.
This will take you to the Automations page where you can view and manage your existing automations.

{% image
src="/images/v1.5/how-to-guides/governance/automation-1.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

### Step 2: Add a New Automation
In the Automations page, click the **Add Automation** button located at the top right of the page.
A pop-up window will appear to begin the process of adding a new automation.

{% image
src="/images/v1.5/how-to-guides/governance/automation-2.png"
alt="Add Automation"
caption="Add Automation"
/%}

### Step 3: Fill in Automation Details
In the pop-up window, provide the necessary information to set up the automation:
- **Automation Name**: Give a meaningful name to the automation for easy identification.
- **Description**: Add a brief description explaining what this automation will do (e.g., "Daily metadata ingestion for database XYZ").
- **Logic/Conditions**: Define any conditions or specific criteria needed for this automation to work (e.g., specific tables or columns to be included).
Ensure that the logic is set up as per your specific requirements to make the automation useful for your workflows.

{% image
src="/images/v1.5/how-to-guides/governance/automation-4.png"
alt="Automation details"
caption="Automation details"
/%}

{% image
src="/images/v1.5/how-to-guides/governance/automation-5.png"
alt="Automation logics"
caption="Automation logics"
/%}

### Step 4: Configure Automation Interval
Once you've filled in the required details, click **Next**.
On the next page, you’ll be prompted to select the interval for the automation. This defines how frequently the automation should run (e.g., daily, weekly, or custom intervals).
Review your settings and click **Automate** once you are satisfied with the configuration.

{% image
src="/images/v1.5/how-to-guides/governance/automation-6.png"
alt="Automation Interval"
caption="Automation Interval"
/%}

### Step 5: Manage Your Automation
After completing the setup, your automation will appear in the Automations list.
To manage the automation, click on the three dots next to the automation entry. From here, you can **edit**, **re-deploy**, or **delete** it, among other options.

{% image
src="/images/v1.5/how-to-guides/governance/automation-7.png"
alt="Manage Your Automation"
caption="Manage Your Automation"
/%}
@@ -1,6 +1,6 @@
---
title: Adding test suits through the UI
slug: /how-to-guides/data-quality-observability/quality/adding-test-suits
title: Adding test suites through the UI
slug: /how-to-guides/data-quality-observability/quality/adding-test-suites
---

# Adding Test Suites Through the UI
22 changes: 14 additions & 8 deletions openmetadata-docs/content/v1.5.x/menu.md
@@ -689,14 +689,6 @@ site_menu:
    color: violet-70
    icon: openmetadata

  - category: How-to Guides / Data Quality Observability / Visualize
    url: /how-to-guides/data-quality-observability/visualize
  - category: How-to Guides / Data Quality Observability / Test Cases From YAML Config
    url: /how-to-guides/data-quality-observability/quality/test-cases-from-yaml-config
  - category: How-to Guides / Data Quality Observability / Adding Test Suits
    url: /how-to-guides/data-quality-observability/quality/adding-test-suits
  - category: How-to Guides / Data Quality Observability / Adding Test Cases
    url: /how-to-guides/data-quality-observability/quality/adding-test-cases
  - category: How-to Guides / Getting Started
    url: /how-to-guides/getting-started
  - category: How-to Guides / Day 1
@@ -814,6 +806,14 @@ site_menu:
    url: /how-to-guides/data-quality-observability/quality/test
  - category: How-to Guides / Data Quality and Observability / Data Quality / Configure Data Quality
    url: /how-to-guides/data-quality-observability/quality/configure
  - category: How-to Guides / Data Quality Observability / Data Quality / Adding Test Cases
    url: /how-to-guides/data-quality-observability/quality/adding-test-cases
  - category: How-to Guides / Data Quality Observability / Data Quality / Adding Test Suites
    url: /how-to-guides/data-quality-observability/quality/adding-test-suites
  - category: How-to Guides / Data Quality Observability / Data Quality / Test Cases From YAML Config
    url: /how-to-guides/data-quality-observability/quality/test-cases-from-yaml-config
  - category: How-to Guides / Data Quality Observability / Data Quality / How to Visualize Test Results
    url: /how-to-guides/data-quality-observability/quality/visualize
  - category: How-to Guides / Data Quality and Observability / Data Quality / Tests - YAML Config
    url: /how-to-guides/data-quality-observability/quality/tests-yaml
  - category: How-to Guides / Data Quality and Observability / Data Quality / Custom Tests
@@ -880,6 +880,12 @@

  - category: How-to Guides / Data Governance
    url: /how-to-guides/data-governance
  - category: How-to Guides / Data Governance / Automation
    url: /how-to-guides/data-governance/automation
    isCollateOnly: true
  - category: How-to Guides / Data Governance / Automation / How to Set Up Automations in Collate
    url: /how-to-guides/data-governance/automation/set-up-automation
    isCollateOnly: true
  - category: How-to Guides / Data Governance / Glossary
    url: /how-to-guides/data-governance/glossary
  - category: How-to Guides / Data Governance / Glossary / What is a Glossary Term
@@ -112,11 +112,11 @@ This is a sample config for Glue:

```yaml {% isCodeBlock=true %}
source:
  type: glue
  type: gluepipeline
  serviceName: local_glue
  serviceConnection:
    config:
      type: Glue
      type: GluePipeline
      awsConfig:
```
```yaml {% srNumber=1 %}
@@ -0,0 +1,78 @@
---
title: Collate Automations Documentation
slug: /how-to-guides/data-governance/automation
collate: true
---

# Collate Automations

{% youtube videoId="ug08aLUyTyE" start="0:00" end="14:52" width="560px" height="315px" /%}

## Overview

Collate's **Automation** feature is a powerful tool designed to simplify and streamline metadata management tasks. By automating repetitive actions such as assigning owners, domains, or tagging data, Collate helps maintain consistency in metadata across an organization's datasets. These automations reduce manual effort and ensure that metadata is always up-to-date, accurate, and governed according to predefined policies.

## Why Automations are Useful

Managing metadata manually can be challenging, particularly in dynamic environments where data constantly evolves. Collate's Automation feature addresses several key pain points:

- **Maintaining Consistency**: Automation helps ensure that metadata such as ownership, tags, and descriptions are applied consistently across all data assets.
- **Saving Time**: Automations allow data teams to focus on higher-value tasks by eliminating the need for manual updates and maintenance.
- **Enforcing Governance Policies**: Automations help ensure that data follows organizational policies at all times by automatically applying governance rules (e.g., assigning data owners or domains).
- **Data Quality and Accountability**: Data quality suffers without clear ownership. Automating ownership assignments helps ensure that data quality issues are addressed efficiently.

## Key Use Cases for Collate Automations

### 1. Bulk Ownership and Domain Assignment

{% image
src="/images/v1.6/how-to-guides/governance/bulk-ownership-and.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: Many data assets lack proper ownership and domain assignment, leading to governance and accountability issues. Manually assigning owners can be error-prone and time-consuming.
- **Solution**: Automations can bulk-assign ownership and domains to datasets, ensuring all data assets are correctly categorized and owned. This process can be applied to tables, schemas, or other assets within Collate.
- **Benefit**: This use case ensures data assets have a designated owner and are organized under the appropriate domain, making data more discoverable and accountable.

### 2. Bulk Tagging and Glossary Term Assignment

{% image
src="/images/v1.6/how-to-guides/governance/bulk-tagging-glossary.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: Manually applying the same tags or glossary terms to multiple datasets can be inefficient and inconsistent.
- **Solution**: Automations allow users to bulk-apply tags (e.g., PII) or glossary terms (e.g., Customer ID) to specific datasets, ensuring uniformity across the platform.
- **Benefit**: This automation reduces the risk of missing important tags, such as PII-sensitive labels, and ensures that key metadata elements are applied consistently across datasets.

### 3. Metadata Propagation via Lineage

{% image
src="/images/v1.6/how-to-guides/governance/metadata-propogation.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: When metadata such as tags, descriptions, or glossary terms are updated in one part of the data lineage, they may not be propagated across related datasets, leading to inconsistencies.
- **Solution**: Use automations to propagate metadata across related datasets, ensuring that all relevant data inherits the correct metadata properties from the source dataset.
- **Benefit**: Metadata consistency is ensured across the entire data lineage, reducing the need for manual updates and maintaining a single source of truth.

### 4. Automatic PII Detection and Tagging

{% image
src="/images/v1.6/how-to-guides/governance/automatic-detection.png"
alt="Getting started with Automation"
caption="Getting started with Automation"
/%}

- **Problem**: Manually identifying and tagging Personally Identifiable Information (PII) across large datasets is labor-intensive and prone to errors.
- **Solution**: Automations can automatically detect PII data (e.g., emails, usernames) and apply relevant tags to ensure that sensitive data is flagged appropriately for compliance.
- **Benefit**: Ensures compliance with data protection regulations by consistently tagging sensitive data, reducing the risk of non-compliance.

## Best Practices

- **Validate Assets Before Applying Actions**: Always use the **Explore** page to verify the assets that will be affected by the automation. This ensures that only the intended datasets are updated.
- **Use Automation Logs**: Regularly check the **Recent Runs** logs to monitor automation activity and ensure that they are running as expected.
- **Propagate Metadata Thoughtfully**: When propagating metadata via lineage, make sure that the source metadata is correct before applying it across multiple datasets.