JDBC Destinations: improve error message for conflicting streams #21342

Merged

edgao merged 7 commits into master from edgao/config_error_duplicate_stream on Jan 13, 2023

Conversation

edgao
Contributor

@edgao edgao commented Jan 12, 2023

What

see https://github.com/airbytehq/airbyte-internal-issues/issues/2542 for information

I'm only planning to publish destination-snowflake; other destinations can get published whenever 🤷

How

In JDBC destination connectors, detect when multiple streams write into the same table (currently this just throws an IllegalStateException) and throw a more informative config error instead.
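
A minimal sketch of the kind of check described above (not the PR's actual code; WriteConfig, the method names, and the exception type here are hypothetical stand-ins for the real connector classes):

import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class ConflictingStreamCheckSketch {

  // Hypothetical stand-in for the connector's write config.
  record WriteConfig(String streamName, String namespace, String outputTableName) {}

  static void validateNoConflictingStreams(final List<WriteConfig> writeConfigs) {
    // Group the configured streams by the (namespace, table) pair they write to.
    final Map<String, List<WriteConfig>> byTargetTable = writeConfigs.stream()
        .collect(Collectors.groupingBy(w -> w.namespace() + "." + w.outputTableName()));

    // Any target table with more than one stream writing to it is a conflict.
    final Set<String> conflictingStreams = byTargetTable.values().stream()
        .filter(configs -> configs.size() > 1)
        .flatMap(List::stream)
        .map(WriteConfig::streamName)
        .collect(Collectors.toSet());

    if (!conflictingStreams.isEmpty()) {
      // Surface a user-facing configuration error instead of a bare IllegalStateException.
      throw new IllegalArgumentException(
          "Each stream must write to a distinct table. Conflicting streams: " + conflictingStreams);
    }
  }
}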

Recommended reading order

yes

🚨 User Impact 🚨

None; this just improves how we handle an already-broken situation.

Pre-merge Checklist

Community member or Airbyter

  • Grant edit access to maintainers (instructions)
  • Secrets in the connector's spec are annotated with airbyte_secret
  • Unit & integration tests added and passing. Community members, please provide proof of success locally, e.g. a screenshot or copy-pasted unit, integration, and acceptance test output. To run acceptance tests for a Python connector, follow the instructions in the README. For Java connectors, run ./gradlew :airbyte-integrations:connectors:<name>:integrationTest.
  • Code reviews completed
  • Documentation updated
    • Connector's README.md
    • Connector's bootstrap.md. See description and examples
    • Changelog updated in docs/integrations/<source or destination>/<name>.md including changelog. See changelog example
  • PR name follows PR naming conventions

Airbyter

If this is a community PR, the Airbyte engineer reviewing this PR is responsible for the below items.

  • Create a non-forked branch based on this PR and test the below items on it
  • Build is successful
  • If new credentials are required for use in CI, add them to GSM. Instructions.
  • /test connector=connectors/<name> command is passing
  • New Connector version released on Dockerhub and connector version bumped by running the /publish command described here

@github-actions
Contributor

github-actions bot commented Jan 12, 2023

Affected Connector Report

NOTE ⚠️ Changes in this PR affect the following connectors. Make sure to do the following as needed:

  • Run integration tests
  • Bump connector or module version
  • Add changelog
  • Publish the new version

✅ Sources (0)

Connector Version Changelog Publish
  • See "Actionable Items" below for how to resolve warnings and errors.

❌ Destinations (21)

Connector                              Version                    Changelog            Publish
destination-azure-blob-storage         0.1.6
destination-clickhouse                 0.2.1
destination-clickhouse-strict-encrypt  0.2.1                      🔵 (ignored)         🔵 (ignored)
destination-databricks                 0.3.1
destination-dynamodb                   0.1.7
destination-gcs                        0.2.12
destination-mariadb-columnstore        0.1.7
destination-mssql                      0.1.22
destination-mssql-strict-encrypt       0.1.22                     🔵 (ignored)         🔵 (ignored)
destination-mysql                      0.1.20
destination-mysql-strict-encrypt       0.1.21 (mismatch: 0.1.20)  🔵 (ignored)         🔵 (ignored)
destination-oracle                     0.1.19
destination-oracle-strict-encrypt      0.1.19                     🔵 (ignored)         🔵 (ignored)
destination-postgres                   0.3.26
destination-postgres-strict-encrypt    0.3.26                     🔵 (ignored)         🔵 (ignored)
destination-redshift                   0.3.53
destination-rockset                    0.1.4
destination-snowflake                  0.4.42                     (changelog missing)
destination-teradata                   0.1.0
destination-tidb                       0.1.0
destination-yugabytedb                 0.1.0

  • See "Actionable Items" below for how to resolve warnings and errors.

👀 Other Modules (1)

  • base-normalization

Actionable Items

Category    Status              Actionable Item
Version     mismatch            The version of the connector is different from its normal variant. Please bump the version of the connector.
Version     doc not found       The connector does not seem to have a documentation file. This can be normal (e.g. a basic connector like source-jdbc is not published or documented). Please double-check to make sure that it is not a bug.
Changelog   doc not found       The connector does not seem to have a documentation file. This can be normal (e.g. a basic connector like source-jdbc is not published or documented). Please double-check to make sure that it is not a bug.
Changelog   changelog missing   There is no changelog for the current version of the connector. If you are the author of the current version, please add a changelog.
Publish     not in seed         The connector is not in the seed file (e.g. source_definitions.yaml), so its publication status cannot be checked. This can be normal (e.g. some connectors are cloud-specific, and only listed in the cloud seed file). Please double-check to make sure that it is not a bug.
Publish     diff seed version   The connector exists in the seed file, but the latest version is not listed there. This usually means that the latest version is not published. Please use the /publish command to publish the latest version.

@edgao edgao temporarily deployed to more-secrets January 12, 2023 18:49 — with GitHub Actions Inactive
@edgao
Contributor Author

edgao commented Jan 12, 2023

/test connector=connectors/destination-snowflake

🕑 connectors/destination-snowflake https://github.com/airbytehq/airbyte/actions/runs/3905155704
✅ connectors/destination-snowflake https://github.com/airbytehq/airbyte/actions/runs/3905155704
Python tests coverage:

Name                                                              Stmts   Miss  Cover
-------------------------------------------------------------------------------------
normalization/transform_config/__init__.py                            2      0   100%
normalization/transform_catalog/reserved_keywords.py                 14      0   100%
normalization/transform_catalog/__init__.py                           2      0   100%
normalization/destination_type.py                                    14      0   100%
normalization/__init__.py                                             4      0   100%
normalization/transform_catalog/destination_name_transformer.py     166      8    95%
normalization/transform_catalog/table_name_registry.py              174     34    80%
normalization/transform_config/transform.py                         189     48    75%
normalization/transform_catalog/utils.py                             51     14    73%
normalization/transform_catalog/dbt_macro.py                         22      7    68%
normalization/transform_catalog/catalog_processor.py                147     80    46%
normalization/transform_catalog/transform.py                         61     38    38%
normalization/transform_catalog/stream_processor.py                 595    400    33%
-------------------------------------------------------------------------------------
TOTAL                                                              1441    629    56%

Build Passed

Test summary info:

All Passed

@edgao edgao requested a review from ryankfu January 12, 2023 18:51
@edgao edgao marked this pull request as ready for review January 12, 2023 18:51
@edgao edgao requested a review from a team as a code owner January 12, 2023 18:51
writeConfigs.stream()
    .collect(Collectors.toUnmodifiableMap(
        StagingConsumerFactory::toNameNamespacePair, Function.identity()));
final Set<WriteConfig> conflictingStreams = new HashSet<>();
Contributor

This block seems to make sense in the onStartFunction, since none of the values you're accessing here seem to be isolated to the flushBufferFunction section.

The reason this makes sense to me in the onStartFunction is primarily that you already have the writeConfig values (since we're setting up the temporary tables for each stream, synonymous with writeConfig), and the metadata for each streamIdentifier will be known at setup time.

It also fits with what onStartFunction means (set up), so it is only called once, as opposed to many times throughout the flushing of the buffer (N times, where N can be infinitely large).

Contributor Author

This block happens before the lambda gets created (i.e. it's not actually part of the flush buffer function), so I think it only gets executed once per sync.

Contributor

That does make sense, although I would be hesitant to have this in the flushBufferFunction, since it muddies the single responsibility of the buffer flush. Also, since we know that the lambda for onStartFunction is called only once and it already includes writeConfigs, it would be another suitable location.

That said, this isn't blocking
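
As a side note on the thread above, here is a minimal, hypothetical sketch of the distinction being discussed (buildFlushFunction and the types below are illustrative stand-ins, not the connector's actual API): code placed before the lambda is constructed runs once per sync when the consumer is built, while only the lambda body runs on each buffer flush.

import java.util.List;
import java.util.function.BiConsumer;

public class FlushFunctionSketch {

  // Hypothetical stand-in type for illustration only.
  record WriteConfig(String streamName, String targetTable) {}

  static BiConsumer<String, List<String>> buildFlushFunction(final List<WriteConfig> writeConfigs) {
    // Everything up here runs once per sync, when the flush function is created.
    // That is where the conflicting-streams check sits: it validates the write
    // configs a single time, before any records are flushed.
    validateNoConflictingStreams(writeConfigs);

    // Only the body of this lambda runs per buffer flush (potentially many times).
    return (streamName, records) -> {
      // write the buffered records for streamName to the staging area (omitted)
    };
  }

  static void validateNoConflictingStreams(final List<WriteConfig> writeConfigs) {
    // See the conflict-check sketch in the PR description above.
  }
}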

@edgao
Contributor Author

edgao commented Jan 12, 2023

submitted https://github.com/airbytehq/airbyte-internal-issues/issues/2543 to solve for non-jdbc destinations

@edgao edgao temporarily deployed to more-secrets January 12, 2023 20:47 — with GitHub Actions Inactive
@edgao edgao temporarily deployed to more-secrets January 12, 2023 21:25 — with GitHub Actions Inactive
@edgao
Contributor Author

edgao commented Jan 12, 2023

/publish connector=connectors/destination-snowflake

🕑 Publishing the following connectors:
connectors/destination-snowflake
https://github.com/airbytehq/airbyte/actions/runs/3906501628


Connector Did it publish? Were definitions generated?
connectors/destination-snowflake

if you have connectors that successfully published but failed definition generation, follow step 4 here ▶️

@edgao edgao enabled auto-merge (squash) January 12, 2023 23:43
@octavia-squidington-iii octavia-squidington-iii temporarily deployed to more-secrets January 12, 2023 23:44 — with GitHub Actions Inactive
@edgao edgao merged commit 1e44c34 into master Jan 13, 2023
@edgao edgao deleted the edgao/config_error_duplicate_stream branch January 13, 2023 00:22
jbfbell pushed a commit that referenced this pull request Jan 13, 2023
JDBC Destinations: improve error message for conflicting streams (#21342)

* catch conflicting streams as configerror

* add test

* bump version + changelog

* derp, fix test setup

* derp

* auto-bump connector version

Co-authored-by: Octavia Squidington III <octavia-squidington-iii@users.noreply.github.com>