normalise field name: change prefix from @ to _ to allow proper querying #1514
Conversation
Walkthrough: Introduces JSON key normalization via a public `normalize_field_name` helper.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @src/event/format/json.rs:
- Around lines 121-122: `collect_keys` currently returns raw field names (e.g., "@foo"), which are only normalized later by `rename_json_keys`, so `derive_arrow_schema` misses matches against the stored schema and re-infers it. Fix by normalizing the collected keys immediately after `collect_keys` and before calling `derive_arrow_schema` (i.e., pass the normalized keys into `derive_arrow_schema`, or normalize the keys array returned by `collect_keys`), and update `rename_json_keys`, `collect_keys`, and `derive_arrow_schema` accordingly so schema lookup uses the normalized names.
🧹 Nitpick comments (2)
src/event/format/mod.rs (2)
60-67: Consider optimizing to avoid allocation. The `format!` macro allocates a new String. Since the PR aims to "avoid extra allocation by mutating the var," consider using `replace_range` for in-place mutation:

⚡ More efficient in-place mutation
```diff
 #[inline]
 pub fn normalize_field_name(name: &mut String) {
-    if let Some(stripped) = name.strip_prefix('@') {
-        *name = format!("_{}", stripped);
+    if name.starts_with('@') {
+        name.replace_range(0..1, "_");
     }
 }
```
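As a quick sanity check, `replace_range` mutates the string in place (a standalone sketch, not the repo's test suite):

```rust
fn main() {
    let mut name = String::from("@timestamp");
    if name.starts_with('@') {
        // Mutate in place: no new String is allocated.
        name.replace_range(0..1, "_");
    }
    assert_eq!(name, "_timestamp");
}
```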
347-376: Logic is correct but consider adding clarification. The normalization flow is correct: the field name is normalized for schema creation and pattern matching (line 357), while the original field name is used for value lookup from the JSON map (line 350). This works because JSON key normalization happens later in the pipeline via `rename_json_keys`. Consider adding a brief comment at lines 347-349 to clarify this subtle but important distinction for future maintainers.
📝 Suggested clarifying comment
```diff
 .map(|field| {
+    // Normalize field name for schema - JSON keys are normalized later in the pipeline
     let mut field_name = field.name().to_string();
     normalize_field_name(&mut field_name);
     match (schema_version, map.get(field.name())) {
```
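A standalone sketch of that distinction, with a plain `HashMap` standing in for the JSON map (illustrative names, not the repo's code):

```rust
use std::collections::HashMap;

fn main() {
    // Raw JSON keys have not been renamed yet at this point in the pipeline.
    let mut map: HashMap<String, i64> = HashMap::new();
    map.insert("@level".to_string(), 3);

    let original = "@level".to_string();
    let mut field_name = original.clone();
    if field_name.starts_with('@') {
        field_name.replace_range(0..1, "_"); // the schema sees "_level"
    }

    // Value lookup still uses the original key; rename_json_keys runs later.
    assert_eq!(field_name, "_level");
    assert_eq!(map.get(&original), Some(&3));
}
```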
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/event/format/json.rs
- src/event/format/mod.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-18T09:59:20.177Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/metrics/mod.rs:700-756
Timestamp: 2025-09-18T09:59:20.177Z
Learning: In src/event/mod.rs, the parsed_timestamp used in increment_events_ingested_by_date() is correctly UTC-normalized: for dynamic streams it remains Utc::now(), and for streams with time partition enabled it uses the time partition value. Both cases result in proper UTC date strings for metrics labeling, preventing double-counting issues.
Applied to files:
src/event/format/mod.rs
🧬 Code graph analysis (1)
src/event/format/json.rs (2)
src/event/format/mod.rs (1)
- `normalize_field_name` (63-67)
src/utils/arrow/mod.rs (1)
- `get_field` (76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
- GitHub Check: coverage
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: Build Default x86_64-unknown-linux-gnu
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Build Kafka aarch64-apple-darwin
🔇 Additional comments (1)
src/event/format/json.rs (1)
294-296: LGTM! The normalization here is necessary and correct since the schema being validated against contains normalized field names (with "_" prefix instead of "@").
All fields prefixed with `@` will be renamed to have a `_` prefix. This is to make the fields queryable.
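A minimal sketch of what such a recursive rename could look like over `serde_json` values; the PR's actual `rename_json_keys` may differ:

```rust
use serde_json::Value;

// Recursively rename keys with a leading '@' to a leading '_'.
fn rename_json_keys(value: &mut Value) {
    match value {
        Value::Object(map) => {
            // Collect the keys to rename first to avoid mutating while iterating.
            let prefixed: Vec<String> = map
                .keys()
                .filter(|k| k.starts_with('@'))
                .cloned()
                .collect();
            for key in prefixed {
                if let Some(v) = map.remove(&key) {
                    map.insert(format!("_{}", &key[1..]), v);
                }
            }
            map.values_mut().for_each(rename_json_keys);
        }
        Value::Array(arr) => arr.iter_mut().for_each(rename_json_keys),
        _ => {}
    }
}

fn main() {
    let mut event = serde_json::json!({"@timestamp": "2025-01-01", "nested": {"@id": 7}});
    rename_json_keys(&mut event);
    assert_eq!(event["_timestamp"], "2025-01-01");
    assert_eq!(event["nested"]["_id"], 7);
}
```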
Force-pushed from 277674d to 0e0a8f8 (Compare)
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/handlers/http/ingest.rs (1)
546-557: Fix: Avoid logging client errors at ERROR level; use conditional log levels. The `error!("{self}")` syntax at line 547 is correct: `error!` is a tracing macro and will compile fine. However, the underlying concern is valid: logging all errors (including 4xx client errors) at ERROR level creates unnecessary noise. `PostError::status_code()` returns `BAD_REQUEST` (4xx) for ~19 error variants, including malformed requests, invalid parameters, and stream-not-found conditions. These should be logged at WARN level, not ERROR. Reserve ERROR logging for server errors (5xx). The proposed fix with conditional logging based on status code is appropriate:
Proposed fix
```diff
-use tracing::error;
+use tracing::{error, warn};
@@
 fn error_response(&self) -> actix_web::HttpResponse<actix_web::body::BoxBody> {
-    error!("{self}");
+    let status = self.status_code();
+    if status.is_client_error() {
+        warn!(status = %status, error = %self, "request failed");
+    } else {
+        error!(status = %status, error = %self, "request failed");
+    }
     match self {
```
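A self-contained sketch of the status-based split with `tracing`; the function and call sites here are stand-ins for the handler's real `error_response` code:

```rust
use actix_web::http::StatusCode;
use tracing::{error, warn};

// Pick the log level by status class: 4xx is the client's fault, 5xx is ours.
fn log_by_status(status: StatusCode, err: &dyn std::fmt::Display) {
    if status.is_client_error() {
        // 4xx: malformed request, bad parameters, etc. WARN keeps logs quiet.
        warn!(status = %status, error = %err, "request failed");
    } else {
        // 5xx and anything else server-side deserves ERROR.
        error!(status = %status, error = %err, "request failed");
    }
}

fn main() {
    // Assumes the tracing-subscriber crate for output; any subscriber works.
    tracing_subscriber::fmt::init();
    log_by_status(StatusCode::BAD_REQUEST, &"malformed log event");
    log_by_status(StatusCode::INTERNAL_SERVER_ERROR, &"storage unavailable");
}
```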
🧹 Nitpick comments (1)
src/handlers/http/ingest.rs (1)
27-27: Prefer importing the macros you'll actually use (`error!`/`warn!`) after fixing the call site. Once the logging call is corrected (see below), consider importing both macros (or using `tracing::error!`/`tracing::warn!` explicitly) to avoid another edit later.
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/event/format/json.rs
- src/event/format/mod.rs
- src/handlers/http/ingest.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/event/format/json.rs
- src/event/format/mod.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-06T04:26:17.191Z
Learnt from: parmesant
Repo: parseablehq/parseable PR: 1424
File: src/enterprise/utils.rs:65-72
Timestamp: 2025-09-06T04:26:17.191Z
Learning: In Parseable's metastore implementation, MetastoreError::to_detail() returns a MetastoreErrorDetail struct (not a string), which contains structured error information including operation, message, stream_name, and other contextual fields. This struct is designed to be boxed in ObjectStorageError::MetastoreError(Box<MetastoreErrorDetail>).
Applied to files:
src/handlers/http/ingest.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: Quest Smoke and Load Tests for Standalone deployments
- GitHub Check: Quest Smoke and Load Tests for Distributed deployments
- GitHub Check: Build Default x86_64-pc-windows-msvc
- GitHub Check: Build Default aarch64-apple-darwin
- GitHub Check: Build Default x86_64-unknown-linux-gnu
- GitHub Check: Build Default aarch64-unknown-linux-gnu
- GitHub Check: Build Kafka x86_64-unknown-linux-gnu
- GitHub Check: Build Kafka aarch64-apple-darwin
- GitHub Check: coverage
Summary by CodeRabbit
New Features
Bug Fixes
Chores