Add missing data types to IngestDocument deep copy #14380
Conversation
PR opensearch-project#11725 added a new deep copy in the ScriptProcessor flow. If a script uses a Short or Byte data type, this new deep copy introduces a regression. This commit fixes that regression. However, it appears there has been an existing bug where using a Character type in the same way will fail (this failed before PR 11725). The failure is different, and appears to be related to something deeper in the XContent serialization layer. For now, I have fixed the regression but not yet dug into the failure with the Character data type. I have added a test that expects this failure.

Resolves opensearch-project#14379

Signed-off-by: Andrew Ross <andrross@amazon.com>
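To illustrate the kind of fix described above, here is a minimal sketch of a type-dispatching deep copy that branches on value types. The class and method names are hypothetical, not the actual OpenSearch IngestDocument code; the point is that immutable wrapper types such as Short and Byte must appear in the dispatch list or the copy fails on documents containing them.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only; names are hypothetical, not the real
// OpenSearch IngestDocument implementation.
public final class DeepCopyExample {

    @SuppressWarnings("unchecked")
    static Object deepCopy(Object value) {
        if (value == null) {
            return null;
        } else if (value instanceof Map) {
            // Recursively copy map entries into a new map
            Map<Object, Object> copy = new HashMap<>();
            for (Map.Entry<Object, Object> e : ((Map<Object, Object>) value).entrySet()) {
                copy.put(e.getKey(), deepCopy(e.getValue()));
            }
            return copy;
        } else if (value instanceof List) {
            // Recursively copy list elements into a new list
            List<Object> copy = new ArrayList<>();
            for (Object o : (List<Object>) value) {
                copy.add(deepCopy(o));
            }
            return copy;
        } else if (value instanceof byte[]) {
            return ((byte[]) value).clone();
        } else if (value instanceof String || value instanceof Integer || value instanceof Long
            || value instanceof Float || value instanceof Double || value instanceof Boolean
            // The kind of regression described above: if immutable wrapper
            // types like Short and Byte are missing from this list, the
            // copy throws instead of sharing the value.
            || value instanceof Short || value instanceof Byte) {
            return value; // immutable, safe to share between copies
        } else {
            throw new IllegalArgumentException("unexpected value type [" + value.getClass() + "]");
        }
    }

    public static void main(String[] args) {
        Map<Object, Object> source = new HashMap<>();
        source.put("short_field", (short) 1);
        source.put("byte_field", (byte) 2);
        Object copy = deepCopy(source);
        System.out.println(copy.equals(source)); // prints "true"
    }
}
```

Without the Short and Byte branches, a document containing those types would fall through to the exception arm, which is the shape of failure the PR describes.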
Force-pushed from dc0ccdc to 40b9e07
❌ Gradle check result for dc0ccdc: FAILURE. Please examine the workflow log, locate and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
❕ Gradle check result for 40b9e07: UNSTABLE. Please review all flaky tests that succeeded after retry, and create an issue if one does not already exist to track the flaky failure.
Codecov Report: all modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@             Coverage Diff             @@
##               main    #14380     +/-  ##
============================================
+ Coverage     71.42%    71.68%    +0.26%
- Complexity    59978     62066     +2088
============================================
  Files          4985      5118      +133
  Lines        282275    291829     +9554
  Branches      40946     42188     +1242
============================================
+ Hits         201603    209190     +7587
- Misses        63999     65304     +1305
- Partials      16673     17335      +662

View full report in Codecov by Sentry.
No, it did not. If you look at the code again, you will see that the PR only reused an existing deepCopyMap. This means the issue has been there since the introduction of the Byte data type in the processor, which wasn't reflected in the deep copy map, and surprisingly there were no tests to validate a byte field in an ingest document in the script processor. Without my PR, the code will still break in toXContent.
@vikasvb90 My initial read of the code, and all subsequent readings, suggest that a new invocation of deep copy was added in the ScriptProcessor code flow that was not there before, which is why this fix is needed now. I'm happy to update the description if that is incorrect. I would also appreciate a review of this code so I can address any comments. Thanks!
@andrross Yes, the invocation is new in the ingest flow. It wasn't there before. I thought you meant that I added the deep copy method itself. The code looks fine to me and I tested the rest tests myself. There are two things, though, which we can probably take up in a separate PR.
I intended to do this with the new IngestDocumentTests::testCopy method. Let me know if you think we need more coverage of that method.
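In the same spirit as that test, a minimal self-contained round-trip check might look like the following. The `deepCopyMap` helper here is a hypothetical stand-in for the method under test, not the actual IngestDocumentTests::testCopy code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a copy round-trip check; deepCopyMap is a hypothetical
// stand-in, not the real IngestDocument implementation.
public final class CopyRoundTrip {

    static Map<String, Object> deepCopyMap(Map<String, Object> source) {
        // Immutable wrapper values (including Short and Byte, per this PR)
        // can safely be shared between the original and the copy.
        return new HashMap<>(source);
    }

    public static void main(String[] args) {
        Map<String, Object> source = new HashMap<>();
        source.put("short_field", (short) 42);
        source.put("byte_field", (byte) 7);

        Map<String, Object> copy = deepCopyMap(source);
        System.out.println(copy.equals(source)); // prints "true"
        System.out.println(copy == source);      // prints "false"
    }
}
```

The check asserts both that the copy preserves every entry (including Short and Byte fields) and that it is a distinct map instance from the original.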
Correct. As far as I can tell this was broken in all previous versions as well. I opened #14382 to track it separately. Thanks @vikasvb90!
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.