Fix large field keys preventing snapshot compactions #8425
Merged
This fixes an issue where a very large field key would be accepted during a write, but would subsequently fail when the key was snapshotted to TSM. The problem was that series key validation only checked the measurement + tagset against the max key length, while the actual series key also includes the field name and an internal TSM value. A write with a very large field key would therefore pass validation and be written to a TSM file. When that TSM file was loaded, loading would fail because the actual key size overflowed the 2 bytes allocated for storing the key length.
This fixes the validation to fail such writes early so the client receives a 400/partial write error. It also fixes a bug in the TSM writer to prevent writing keys that are too large: the check existed in WriteValues, but was mistakenly missing from WriteBlock.

Fixes #8417