JDBC Destinations: improve error message for conflicting streams #21342
Conversation
Affected Connector Report

NOTE: See "Actionable Items" below for how to resolve warnings and errors.

❌ Destinations (21)

| Connector | Version | Changelog | Publish |
|---|---|---|---|
| destination-azure-blob-storage | 0.1.6 | ✅ | ✅ |
| destination-clickhouse | 0.2.1 | ✅ | ✅ |
| destination-clickhouse-strict-encrypt | 0.2.1 | 🔵 (ignored) | 🔵 (ignored) |
| destination-databricks | 0.3.1 | ✅ | ✅ |
| destination-dynamodb | 0.1.7 | ✅ | ✅ |
| destination-gcs | 0.2.12 | ✅ | ✅ |
| destination-mariadb-columnstore | 0.1.7 | ✅ | ✅ |
| destination-mssql | 0.1.22 | ✅ | ✅ |
| destination-mssql-strict-encrypt | 0.1.22 | 🔵 (ignored) | 🔵 (ignored) |
| destination-mysql | 0.1.20 | ✅ | ✅ |
| destination-mysql-strict-encrypt | ❌ 0.1.21 (mismatch: 0.1.20) | 🔵 (ignored) | 🔵 (ignored) |
| destination-oracle | 0.1.19 | ✅ | ✅ |
| destination-oracle-strict-encrypt | 0.1.19 | 🔵 (ignored) | 🔵 (ignored) |
| destination-postgres | 0.3.26 | ✅ | ✅ |
| destination-postgres-strict-encrypt | 0.3.26 | 🔵 (ignored) | 🔵 (ignored) |
| destination-redshift | 0.3.53 | ✅ | ✅ |
| destination-rockset | 0.1.4 | ✅ | ✅ |
| destination-snowflake | 0.4.42 | ❌ (changelog missing) | ✅ |
| destination-teradata | 0.1.0 | ✅ | ✅ |
| destination-tidb | 0.1.0 | ✅ | ✅ |
| destination-yugabytedb | 0.1.0 | ✅ | ✅ |
👀 Other Modules (1)
- base-normalization
Actionable Items

| Category | Status | Actionable Item |
|---|---|---|
| Version | ❌ mismatch | The version of the connector is different from its normal variant. Please bump the version of the connector. |
| Version | ⚠ doc not found | The connector does not seem to have a documentation file. This can be normal (e.g. a basic connector like source-jdbc is not published or documented). Please double-check to make sure that it is not a bug. |
| Changelog | ⚠ doc not found | The connector does not seem to have a documentation file. This can be normal (e.g. a basic connector like source-jdbc is not published or documented). Please double-check to make sure that it is not a bug. |
| Changelog | ❌ changelog missing | There is no changelog for the current version of the connector. If you are the author of the current version, please add a changelog. |
| Publish | ⚠ not in seed | The connector is not in the seed file (e.g. `source_definitions.yaml`), so its publication status cannot be checked. This can be normal (e.g. some connectors are cloud-specific, and only listed in the cloud seed file). Please double-check to make sure that it is not a bug. |
| Publish | ❌ diff seed version | The connector exists in the seed file, but the latest version is not listed there. This usually means that the latest version is not published. Please use the /publish command to publish the latest version. |
/test connector=connectors/destination-snowflake
Build Passed

Test summary info:
```java
writeConfigs.stream()
    .collect(Collectors.toUnmodifiableMap(
        StagingConsumerFactory::toNameNamespacePair, Function.identity()));
final Set<WriteConfig> conflictingStreams = new HashSet<>();
```
This block seems to make sense in the `onStartFunction`, since none of the values you're accessing here appear to be isolated to the `flushBufferFunction` section.

The reason this makes sense to me in `onStartFunction` is primarily that you already have the `writeConfig` values (we're setting up the temporary tables for each stream, which is synonymous with a `writeConfig`), and the metadata for each `streamIdentifier` will be known at setup time.

It also fits with what `onStartFunction` means: it is set-up logic, so it is only called once, as opposed to many times throughout the flushing of the buffer (N times, where N can be arbitrarily large).
this block happens before the lambda gets created (i.e. it's not actually part of the flush buffer function), so I think it only gets executed once per sync
That does make sense, although I would be hesitant to have this in the `flushBufferFunction`, since it muddies the single responsibility of the buffer flush. Also, since we know the lambda for `onStartFunction` is called only once and already includes `writeConfigs`, it would be another suitable location.

That said, this isn't blocking.
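For context, the check under discussion can be sketched roughly as follows. This is a hypothetical, simplified version: `WriteConfig` here is a stand-in record, and `findConflicts` is an illustrative helper; the real connector works with its own `WriteConfig` class and `AirbyteStreamNameNamespacePair` keys.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the conflicting-stream check discussed in this thread.
public class ConflictingStreams {

  // Simplified stand-in for the connector's WriteConfig.
  record WriteConfig(String namespace, String streamName, String outputTable) {}

  // Group write configs by their target (namespace, table) pair and return every
  // config that shares a target with another one, so the caller can raise a config
  // error naming all conflicting streams instead of hitting an opaque
  // IllegalStateException from a duplicate map key.
  static Set<WriteConfig> findConflicts(final List<WriteConfig> writeConfigs) {
    final Map<String, List<WriteConfig>> byTarget = writeConfigs.stream()
        .collect(Collectors.groupingBy(w -> w.namespace() + "." + w.outputTable()));
    return byTarget.values().stream()
        .filter(group -> group.size() > 1)
        .flatMap(List::stream)
        .collect(Collectors.toSet());
  }

  public static void main(final String[] args) {
    final List<WriteConfig> configs = List.of(
        new WriteConfig("public", "users", "users"),
        new WriteConfig("public", "Users", "users"), // collides after name normalization
        new WriteConfig("public", "orders", "orders"));
    // Two configs target public.users, so both are reported as conflicting.
    System.out.println("conflicting streams: " + findConflicts(configs).size()); // prints 2
  }
}
```

Because the collision set is computed up front from `writeConfigs` alone, it could live in either `onStartFunction` or the setup code before the flush lambda, per the discussion above.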
submitted https://github.com/airbytehq/airbyte-internal-issues/issues/2543 to solve for non-JDBC destinations
/publish connector=connectors/destination-snowflake
If you have connectors that successfully published but failed definition generation, follow step 4 here.
What
see https://github.com/airbytehq/airbyte-internal-issues/issues/2542 for information
I'm only planning to publish destination-snowflake; other destinations can get published whenever 🤷
How
In JDBC destination connectors, detect multiple streams writing into the same table (currently this just throws an `IllegalStateException`) and throw a more informative config error.
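The pre-existing failure mode comes from building a map of streams keyed by name/namespace: `Collectors.toUnmodifiableMap` throws a bare `IllegalStateException` ("Duplicate key ...") when two entries map to the same key. A minimal reproduction, with plain strings standing in for stream configs and lowercasing standing in for name normalization:

```java
import java.util.List;
import java.util.Locale;
import java.util.function.Function;
import java.util.stream.Collectors;

// Reproduces the opaque error this PR replaces with a proper config error:
// two streams whose normalized names collide break the map construction.
public class DuplicateKeyDemo {
  public static void main(final String[] args) {
    final List<String> streams = List.of("users", "USERS");
    try {
      streams.stream().collect(Collectors.toUnmodifiableMap(
          s -> s.toLowerCase(Locale.ROOT), // normalized name acts as the map key
          Function.identity()));
    } catch (final IllegalStateException e) {
      // The JDK's message names the key but not which streams or tables conflict.
      System.out.println("caught: " + e.getMessage());
    }
  }
}
```

The improved error can instead enumerate the conflicting streams before attempting the map build, which is what makes it actionable for users.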
Recommended reading order
yes
🚨 User Impact 🚨
none, this is just improving how we handle an already broken situation
Pre-merge Checklist

Community member or Airbyter

- Secrets in the connector's spec are annotated with `airbyte_secret`
- Unit and integration tests added and passing (`./gradlew :airbyte-integrations:connectors:<name>:integrationTest`)
- Connector's `README.md` and `bootstrap.md` updated. See description and examples
- `docs/integrations/<source or destination>/<name>.md` updated, including changelog. See changelog example

Airbyter

If this is a community PR, the Airbyte engineer reviewing this PR is responsible for the below items.

- `/test connector=connectors/<name>` command is passing
- New connector version released by running the `/publish` command described here