Support limiting bucket number in connectors and mongodb 6 #2539
Conversation
Hello williamlardier, my role is to assist you with the merge of this pull request.
Available options
Available commands
Status report is not available.
Request integration branches: waiting for integration branch creation to be requested by the user. To request integration branches, please comment on this pull request with the following command:
/create_integration_branches
Alternatively, the
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
... and 3 files with indirect coverage changes
@@                Coverage Diff                  @@
##           development/8.6     #2539     +/-   ##
===================================================
+ Coverage           69.08%    69.33%    +0.25%
===================================================
  Files                 192       194        +2
  Lines               12713     12791       +78
===================================================
+ Hits                 8783      8869       +86
+ Misses               3920      3912        -8
  Partials               10        10
Flags with carried forward coverage won't be shown.
Force-pushed 758ed65 to 45a2e59
Force-pushed 15a205e to 80261a8
Force-pushed 74a0cc7 to 7736410
extensions/oplogPopulator/allocationStrategy/LeastFullConnector.js
The approach seems to go against the "design" of the code, which makes it much harder to separate the concerns and to understand ("prove") that it works well... And it may create corner cases.
There may obviously be more in the details, but from afar this change should:
- Add a new ImmutableAllocationStrategy (maybe not needed at first, but it would avoid going through the now possibly long list of connectors; it could also help handle the "hysteresis" case where a connector may keep more than one bucket if it is already like that, while we must always allocate "new" buckets to new connectors) - see the sketch after this list
- Update the Allocator (and the AllocationStrategy "contract") to support the new situations (e.g. the need to allocate a new connector)
- Connector is a class to manage a connector: it should not be affected; we will simply never reach a situation where we change its list of buckets (thanks to the new allocator/strategy)
- Maybe some change to ConnectorsManager, as it bridges Allocator and Connectors, not sure
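For illustration, a minimal sketch of what such an ImmutableAllocationStrategy could look like, assuming a getConnector/canUpdate contract and a bucketCount field on connectors (all names here are assumptions, not the actual backbeat interfaces):

// Hypothetical sketch only, not the backbeat API.
class ImmutableAllocationStrategy {
    /**
     * Pick a connector for a new bucket: only reuse a connector that has no
     * bucket assigned yet; otherwise return null so the Allocator knows a new
     * connector must be created. Connectors that already own several buckets
     * are left untouched (the "hysteresis" case mentioned above).
     * @param {Array} connectors - currently known connectors
     * @returns {Object|null} connector to reuse, or null
     */
    getConnector(connectors) {
        return connectors.find(c => c.bucketCount === 0) || null;
    }

    /**
     * An immutable strategy never updates the bucket list of a live connector.
     * @returns {boolean} always false
     */
    canUpdate() {
        return false;
    }
}

module.exports = ImmutableAllocationStrategy;

Returning null (rather than picking the least full connector) is what would let the Allocator decide to spawn a new connector instead of growing an existing one.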
Force-pushed 36f5611 to 7dc61a2
Force-pushed f0a136a to f431436
Force-pushed d17510b to 93efc96
await connector.removeBucket(bucket);
this.emit('bucket-removed', bucket, connector);
similarly, this should be done the other way around to avoid race conditions:
Suggested change:
- await connector.removeBucket(bucket);
- this.emit('bucket-removed', bucket, connector);
+ this.emit('bucket-removed', bucket, connector);
+ await connector.removeBucket(bucket);
(to avoid these, maybe we should [in a later PR] refactor the code to make these calls non-async, since currently they do not perform the change...)
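As a tiny illustration of that idea, a sketch of a non-async removeBucket that only does synchronous bookkeeping (class and field names are assumptions, not the actual backbeat Connector); with no await between the state change and the event, the ordering concern above disappears:

const { EventEmitter } = require('events');

// Assumed names throughout; this is not the backbeat Connector class.
class ConnectorBucketsSketch extends EventEmitter {
    constructor(name) {
        super();
        this._name = name;
        this._buckets = new Set();
    }

    // Synchronous bookkeeping only: the actual pipeline change would be applied
    // later (e.g. by a scheduled update), so there is no await and no window
    // for a race between the state change and the emitted event.
    removeBucket(bucket) {
        this._buckets.delete(bucket);
        this.emit('bucket-removed', bucket, this);
    }
}

module.exports = ConnectorBucketsSketch;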
Done
- } else if (connector.isRunning) {
-     return connector.updatePipeline(true);
+ } else if (connector.isRunning && this._allocationStrategy.canUpdate()) {
+     const isPipelineUpdated = connector.updatePipeline(true);
the function is async, so it needs to be awaited:
Suggested change:
- const isPipelineUpdated = connector.updatePipeline(true);
+ const isPipelineUpdated = await connector.updatePipeline(true);
...however this makes the event be generated only after the update was performed, which is not what we want: we need to somehow send the event from the connector (e.g. the connector becomes an event generator, or the ConnectorsManager provides an event generator to the connector)
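For illustration, a minimal sketch of the first option (the connector becoming an event generator and emitting before the awaited update); the event name, constructor shape, and the kafkaConnect client method are assumptions, not the actual backbeat API:

const { EventEmitter } = require('events');

class ConnectorSketch extends EventEmitter {
    constructor(name, config, kafkaConnect) {
        super();
        this._name = name;
        this._config = config;
        this._kafkaConnect = kafkaConnect;
    }

    async updatePipeline(doUpdate) {
        // Notify listeners before the asynchronous update, so the event is not
        // generated only after the pipeline has already changed.
        this.emit('connector-updated', this);
        if (doUpdate) {
            await this._kafkaConnect.updateConnectorConfig(this._name, this._config);
        }
        return doUpdate;
    }
}

module.exports = ConnectorSketch;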
I added the event in the connector, and eventually moved both events into it, so we can handle all events (destroy & update pipeline) from the Connector class; the event is then propagated to the oplog populator through the connectors manager.
I also used a constant to avoid duplicating the string.
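As an illustration of that description, a rough sketch of the propagation through the connectors manager with a shared event-name constant; the class, method, and constant names here are assumptions, not the actual backbeat code:

const { EventEmitter } = require('events');

// Shared constant so the event name string is not duplicated across classes.
const constants = {
    connectorUpdatedEvent: 'connector-updated',
};

class ConnectorsManagerSketch extends EventEmitter {
    /**
     * Re-emit connector-level events so the oplog populator only has to listen
     * to the manager. Illustrative only; the method name is an assumption.
     * @param {EventEmitter} connector - connector to track
     * @returns {undefined}
     */
    trackConnectorEvents(connector) {
        connector.on(constants.connectorUpdatedEvent,
            c => this.emit(constants.connectorUpdatedEvent, c));
    }
}

module.exports = { constants, ConnectorsManagerSketch };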
Force-pushed 1e8a7e7 to bcf782e
@@ -185,6 +188,7 @@ class Connector {
      }
      try {
          await this._kafkaConnect.deleteConnector(this._name);
+         this.emit(constants.connectorUpdatedEvent, this);
should be sent before actually deleting the connector
Either way has flaws: if we send it before and the deletion fails, we lose track of the retained bucket (and there is no easy way to handle it, as we work with events (same with callbacks)).
(I will change it to emit before, but just sharing this for visibility)
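For visibility, a minimal sketch of the "emit before delete" ordering and its failure mode; constants.connectorUpdatedEvent and _kafkaConnect.deleteConnector follow the diff quoted above, while the class shape and logger are assumptions:

const { EventEmitter } = require('events');

const constants = { connectorUpdatedEvent: 'connector-updated' };

class ConnectorDeleteSketch extends EventEmitter {
    constructor(name, kafkaConnect, logger) {
        super();
        this._name = name;
        this._kafkaConnect = kafkaConnect;
        this._logger = logger;
    }

    async destroy() {
        // Emit first so listeners can release the connector's buckets right away...
        this.emit(constants.connectorUpdatedEvent, this);
        try {
            await this._kafkaConnect.deleteConnector(this._name);
        } catch (err) {
            // ...but if the deletion then fails, the retained-bucket bookkeeping
            // is out of sync: this is the trade-off discussed in this thread.
            this._logger.error('error deleting connector',
                { error: err, connector: this._name });
            throw err;
        }
    }
}

module.exports = ConnectorDeleteSketch;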
Force-pushed bcf782e to ccd7d42
extensions/oplogPopulator/allocationStrategy/RetainBucketsDecorator.js
Force-pushed 44bb359 to 34f5e87
Force-pushed 34f5e87 to 7396ddc
Issue: BB-601
Force-pushed 7396ddc to a1e9895
/create_integration_branches
Integration data created: I have created the integration data for the additional destination branches.
The following branches will NOT be impacted:
You can set option
The following options are set: create_integration_branches
Waiting for approval: the following approvals are needed before I can proceed with the merge:
The following options are set: create_integration_branches
/approve
I have successfully merged the changeset of this pull request.
The following branches have NOT changed:
Please check the status of the associated issue BB-601.
Goodbye williamlardier.
The following options are set: approve, create_integration_branches
Observations: