[Framework] Allow bulk operations result logging #2117
Conversation
One nit, but largely LGTM.
Is it possible to do it not here, but after the bulk call was made?
The bulk call should return stats of the request and will give you much more precise stats (e.g. when you upsert, it can tell you whether it created or updated the record).
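The per-item results the reviewer refers to are part of the documented Elasticsearch Bulk API response shape. A minimal sketch of tallying them after the bulk call (the function and counter names are illustrative, not the PR's actual code):

```python
# Sketch: tally per-operation results from an Elasticsearch bulk response.
# Each entry in response["items"] is keyed by its operation ("index",
# "update", "delete", ...); successful items carry a "result" field
# ("created", "updated", "deleted", ...), failed items carry "error".
from collections import Counter

def tally_bulk_results(bulk_response):
    """Count bulk results (e.g. 'created', 'updated') and failures per item."""
    counts = Counter()
    for item in bulk_response.get("items", []):
        for op, details in item.items():
            if "error" in details:
                counts[f"{op}_failed"] += 1
            else:
                counts[details.get("result", "unknown")] += 1
    return counts

response = {
    "items": [
        {"index": {"_id": "1", "result": "created", "status": 201}},
        {"index": {"_id": "2", "result": "updated", "status": 200}},
        {"delete": {"_id": "3", "error": {"type": "..."}, "status": 404}},
    ]
}
# Each of 'created', 'updated', and 'delete_failed' is counted once here.
print(tally_bulk_results(response))
```

This is why counting after the call is more precise: the same upsert request can come back as either "created" or "updated", which cannot be known before sending.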
… sending (operations can fail)
connectors/es/sink.py
Outdated
result = item[action_item].get("result")
operation_failed = result not in SUCCESSFUL_RESULTS

if operation_failed:
For a failed operation, you should check:
if "error" in item[action_item]
See comment above. Maybe it makes sense to name the condition non_successful_result / failed_result?
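Combining the two suggestions above (detect failure via the "error" key, and rename the flag), the check could be reworked roughly like this; SUCCESSFUL_RESULTS and the item shape mirror the diff, while the function and variable names are assumptions, not the final code in connectors/es/sink.py:

```python
# Illustrative rework: a failed operation is detected by the presence of an
# "error" object in the per-item response, while non_successful_result covers
# results outside the expected set (e.g. "noop"). Names are assumptions.
SUCCESSFUL_RESULTS = ("created", "updated", "deleted")

def classify_item(item, action_item):
    details = item[action_item]
    failed_result = "error" in details            # operation was rejected
    non_successful_result = details.get("result") not in SUCCESSFUL_RESULTS
    return failed_result, non_successful_result

item = {"index": {"_id": "42", "result": "noop", "status": 200}}
print(classify_item(item, "index"))  # (False, True)
```

The distinction matters because a "noop" result is not an error, so treating any non-successful result as a failure would over-report.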
Co-authored-by: Dmitriy Burlutskiy <dmitrii.burlutckii@elastic.co>
config.yml
Outdated
@@ -147,6 +147,10 @@
 #elasticsearch.bulk.retry_interval: 10
 #
 #
+## Enable to log ids of created/indexed/deleted/updated documents during a sync.
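Assuming the dotted key in the commented snippet maps to nested YAML, as other elasticsearch.bulk options do, the enabled form of the option might look like this (a sketch, not a line from the PR):

```yaml
# Hypothetical enabled configuration; the setting name comes from the PR
# description, the nesting from the other elasticsearch.bulk options.
elasticsearch:
  bulk:
    # Log ids of created/indexed/deleted/updated documents during a sync.
    enable_operations_logging: true
```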
help me make sure that this isn't lost with #2280 (depending on which merges first)
mine merged first. To resolve the conflict, this'll just need to be moved to config.yml.example
Few nits, but I like it!
Co-authored-by: Sean Story <sean.j.story@gmail.com>
…imgrein/log-deleted-ids
Good stuff!
We have an open tech improvement which states that, for audit purposes, it's important for some customers to be able to log the ids of deleted documents.
This PR introduces one new configuration setting, elasticsearch.bulk.enable_operations_logging, to enable this option. The logs will be emitted at DEBUG level.
Sample logs for successful operations:
Sample log for failed operations:
Checklists
Pre-Review Checklist
v7.13.2, v7.14.0, v8.0.0)
- [ ] if you added or changed Rich Configurable Fields for a Native Connector, you made a corresponding PR in Kibana

Release Note
Add a configuration setting to be able to log the ids of deleted documents.