Increase memory allocation for JDBC buffers in source DB connectors #20939
Conversation
…uffer from 60% to 70%
/test connector=connectors/source-postgres
Build Passed. Test summary info:
@evantahler / @davinchia I was trying to follow the comments from the previous PR. What's a good way to test this increase?
... I didn't come up with a good way to test this kind of change 😢 I think for now, the best we can do is to publish the change (OSS) and run a few syncs with it on a local or staging K8s cluster and compare new vs old memory consumption for a sync. In the future, we'll be looking into blue/green connector deployments to try this connector out on a few cloud workspaces before rolling it out to everyone.
@akashkulk you should be able to build the image locally, load it into a local Airbyte deployment (either Docker or K8s), and inspect memory usage as the sync runs. This can all be done before we merge this change into OSS. Alternatively, you can build the connector locally, run the connector's read command, and inspect local memory usage. I'm going to find some time over the next few days to draft out a local test harness for this. Till then, I would recommend either of these two options.
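For the "run locally and inspect memory usage" option, one lightweight approach is to sample JVM heap usage while the connector runs. A minimal sketch (the `HeapSampler` class is a hypothetical helper, not part of the Airbyte codebase):

```java
// Hypothetical helper: periodically samples JVM heap usage so you can watch
// memory consumption while a local connector read is in progress.
public class HeapSampler {

    // Heap currently in use = total allocated heap minus the free portion of it.
    public static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) throws InterruptedException {
        long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        // Sample a few times; in practice you would loop until the sync finishes.
        for (int i = 0; i < 3; i++) {
            System.out.printf("used heap: %d MiB / max %d MiB%n",
                    usedHeapBytes() / (1024 * 1024), maxMiB);
            Thread.sleep(200);
        }
    }
}
```

For a containerized run, `docker stats` on the connector container gives the equivalent view from outside the JVM.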
Approving.... but you probably want to test it first
@davinchia What do you recommend to inspect Docker memory usage in a local Airbyte deployment? I was going to basically run …
Airbyte Code Coverage
Affected Connector Report
| Connector | Version | Changelog | Publish |
|---|---|---|---|
| source-alloydb | 1.0.36 | ✅ | ✅ |
| source-alloydb-strict-encrypt | 1.0.36 | 🔵 (ignored) | 🔵 (ignored) |
| source-mysql | 1.0.21 | ✅ | ✅ |
| source-mysql-strict-encrypt | 1.0.21 | 🔵 (ignored) | 🔵 (ignored) |
| source-postgres-strict-encrypt | 1.0.41 | 🔵 (ignored) | 🔵 (ignored) |
- See "Actionable Items" below for how to resolve warnings and errors.
✅ Destinations (0)
✅ Other Modules (0)
Actionable Items
| Category | Status | Actionable Item |
|---|---|---|
| Version | ❌ mismatch | The version of the connector is different from its normal variant. Please bump the version of the connector. |
| Version | ⚠ doc not found | The connector does not seem to have a documentation file. This can be normal (e.g. a basic connector like source-jdbc is not published or documented). Please double-check to make sure that it is not a bug. |
| Changelog | ⚠ doc not found | The connector does not seem to have a documentation file. This can be normal (e.g. a basic connector like source-jdbc is not published or documented). Please double-check to make sure that it is not a bug. |
| Changelog | ❌ changelog missing | There is no changelog for the current version of the connector. If you are the author of the current version, please add a changelog. |
| Publish | ⚠ not in seed | The connector is not in the seed file (e.g. source_definitions.yaml), so its publication status cannot be checked. This can be normal (e.g. some connectors are cloud-specific, and only listed in the cloud seed file). Please double-check to make sure that it is not a bug. |
| Publish | ❌ diff seed version | The connector exists in the seed file, but the latest version is not listed there. This usually means that the latest version is not published. Please use the /publish command to publish the latest version. |
/publish connector=connectors/source-postgres-strict-encrypt run-tests=false
If you have connectors that successfully published but failed definition generation, follow step 4 here.
/publish connector=connectors/source-postgres run-tests=false
If you have connectors that successfully published but failed definition generation, follow step 4 here.
/publish connector=connectors/source-mysql run-tests=false
If you have connectors that successfully published but failed definition generation, follow step 4 here.
/publish connector=connectors/source-mysql-strict-encrypt run-tests=false
If you have connectors that successfully published but failed definition generation, follow step 4 here.
Work is part of #20417.
- Increased MAX_FETCH_SIZE from 1,000,000 to 2,000,000.
- Replaced MAX_BUFFER_BYTE_SIZE, which was hard-coded to 1GB. The memory allocation now has an upper bound of TARGET_BUFFER_SIZE_RATIO * MAX_HEAP_SPACE, so we can take advantage of the increased memory for source containers.

Tested locally, and this hasn't caused any additional OOMs.
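The heap-relative upper bound described above can be sketched as follows. This is a simplified illustration, not the actual connector code: the 0.7 ratio comes from this PR's "60% to 70%" change, and the `maxBufferBytes` method name is hypothetical.

```java
// Sketch: instead of a hard-coded 1GB MAX_BUFFER_BYTE_SIZE, bound the JDBC
// fetch buffer by a fixed fraction of the JVM's maximum heap.
public class BufferSizing {

    // Ratio raised from 0.6 to 0.7 in this PR.
    static final double TARGET_BUFFER_SIZE_RATIO = 0.7;

    // Upper bound on buffer size, derived from the available heap rather
    // than a fixed constant, so larger containers get larger buffers.
    static long maxBufferBytes(long maxHeapBytes) {
        return (long) (TARGET_BUFFER_SIZE_RATIO * maxHeapBytes);
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("buffer upper bound: " + maxBufferBytes(maxHeap) + " bytes");
    }
}
```

With this scheme, a container given a 4GB heap gets roughly a 2.8GB buffer cap instead of the old fixed 1GB, which is what lets the source containers benefit from the increased memory allocation.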