feat(server): Send metrics via the global endpoint #2902
Conversation
```rust
        tags: &self.tags,
    }
    .hash64()
let mut hasher = FnvHasher::default();
```
`BucketKeyRef` is no longer required. The hash created here and the hash used for partitioning do not have to be the same, which is why we can hash differently.
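As a minimal sketch of hashing a bucket key by value, assuming a hypothetical simplified `BucketKey` and substituting std's `DefaultHasher` for the `fnv` crate's `FnvHasher` so the snippet stays dependency-free:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical stand-in for a metric bucket key; the real key in
/// Relay carries more fields (timestamp, tags, ...).
#[derive(Hash)]
struct BucketKey {
    project_key: u64,
    metric_name: String,
}

/// Hash the key by value. Since this hash no longer has to match the
/// partitioning hash, any stable hasher works here (DefaultHasher
/// stands in for FnvHasher in this sketch).
fn hash_bucket_key(key: &BucketKey) -> u64 {
    let mut hasher = DefaultHasher::default();
    key.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let key = BucketKey {
        project_key: 1,
        metric_name: "endpoint.hits".into(),
    };
    // The same key always produces the same hash within one process.
    assert_eq!(hash_bucket_key(&key), hash_bucket_key(&key));
}
```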
```rust
///
/// The distribution of buckets should be even.
/// If it is not, this metric should expose it.
PartitionKeys,
```
As partitioning is now exclusively in the processor, this metric has moved here from `relay-metrics`. Its key is still the same.
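A sketch of the distribution such a metric would observe, assuming a hypothetical modulo-based `partition_key` helper (the real partitioning scheme may differ):

```rust
use std::collections::HashMap;

/// Hypothetical partitioning: map a bucket hash onto one of
/// `partition_count` partitions.
fn partition_key(bucket_hash: u64, partition_count: u64) -> u64 {
    bucket_hash % partition_count
}

fn main() {
    // Count how many hashes land in each partition; a metric keyed on
    // the partition key would expose exactly this distribution.
    let mut counts: HashMap<u64, u32> = HashMap::new();
    for hash in 0..1000u64 {
        *counts.entry(partition_key(hash, 4)).or_insert(0) += 1;
    }
    // With evenly spread hashes, each of the 4 partitions gets 250.
    assert_eq!(counts.len(), 4);
    assert!(counts.values().all(|&c| c == 250));
}
```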
```rust
        bucket: self.current.bucket + at,
    };
}
self.current = Index {
```
The check for whether the bucket can split has been moved into `split_at`. Invariants for splitting are still checked in the inner iterator.
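A sketch of that `split_at` shape on a hypothetical simplified bucket (the real bucket carries timestamps, tags, and byte-size accounting):

```rust
/// Hypothetical bucket of counter values.
#[derive(Debug, PartialEq)]
struct Bucket {
    values: Vec<u32>,
}

impl Bucket {
    /// Split the bucket at `at`, returning the tail as a new bucket.
    /// The "can this bucket split?" check lives here, so callers only
    /// rely on the invariants.
    fn split_at(&mut self, at: usize) -> Option<Bucket> {
        if at == 0 || at >= self.values.len() {
            return None; // nothing to split off
        }
        Some(Bucket {
            values: self.values.split_off(at),
        })
    }
}

fn main() {
    let mut bucket = Bucket { values: vec![1, 2, 3, 4] };
    let tail = bucket.split_at(2).unwrap();
    assert_eq!(bucket.values, vec![1, 2]);
    assert_eq!(tail.values, vec![3, 4]);
    // Splitting at or past the end is rejected.
    assert!(bucket.split_at(5).is_none());
}
```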
Force-pushed from c62bbe4 to b11353e.
See the comment about test coverage; apart from that, LGTM!
```rust
    })
}
}
```
There are a lot of metrics-specific methods here now; maybe we should move them to a submodule (as standalone functions) in a follow-up PR.
Introduces an option `http.global_metrics`. When enabled and not in processing mode, Relay sends metrics to the global batch endpoint at `/api/0/relays/metrics/` instead of sending them in envelopes. This endpoint allows for batched submission of metrics from multiple projects, which should reduce the overall number of requests.
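A sketch of the batching idea: grouping a flat list of buckets by project so that many projects share one request body. All names here are hypothetical; the real request format is defined by the endpoint above.

```rust
use std::collections::HashMap;

/// Group (project, bucket) pairs into one batch per project. A single
/// request to the global endpoint can then carry the whole map,
/// instead of one envelope per project.
fn batch_by_project(
    buckets: Vec<(&'static str, &'static str)>,
) -> HashMap<&'static str, Vec<&'static str>> {
    let mut batch: HashMap<_, Vec<_>> = HashMap::new();
    for (project, bucket) in buckets {
        batch.entry(project).or_default().push(bucket);
    }
    batch
}

fn main() {
    let batch = batch_by_project(vec![
        ("proj_a", "counter:hits"),
        ("proj_b", "gauge:mem"),
        ("proj_a", "set:users"),
    ]);
    // Two projects, one request body: three buckets total.
    assert_eq!(batch.len(), 2);
    assert_eq!(batch["proj_a"].len(), 2);
}
```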
Bug Fixes
This change contains additional bug fixes that were discovered during implementation:
- …`emit_outcomes` outcomes flag to be set. This was invalid copy & paste from the outcomes endpoint.
- …compressed body. However, Relay requires the signature on the uncompressed body.
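A sketch of the ordering this fix enforces, with stand-ins for both primitives (plain hashing for the signature, byte reversal for compression), since only the sign-before-compress order matters here — neither stand-in is Relay's real scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

/// Stand-in signature: a plain hash of the bytes.
fn sign(body: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(body);
    h.finish()
}

/// Stand-in "compression" (byte reversal) to keep the example
/// dependency-free; imagine gzip here.
fn compress(body: &[u8]) -> Vec<u8> {
    body.iter().rev().copied().collect()
}

fn decompress(body: &[u8]) -> Vec<u8> {
    body.iter().rev().copied().collect()
}

fn main() {
    let body = b"metrics payload";
    // Correct order: sign the uncompressed body, then compress.
    let signature = sign(body);
    let wire = compress(body);
    // The receiver decompresses first, then verifies against the
    // uncompressed bytes.
    assert_eq!(sign(&decompress(&wire)), signature);
    // A signature over the compressed body would not verify.
    assert_ne!(sign(&wire), signature);
}
```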
Details
Building the request occurs in the `EnvelopeProcessor` in place of building envelopes, in the following steps:
- …partition reaches the batch size limit, flush the partition eagerly. Buckets at the border may be split.
- …apply HTTP encoding (compression).
- …`SendMetricsRequest` with the payload and outcome metadata directly to the upstream.
- …request does not have to be awaited.
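The eager-flush step above can be sketched as a hypothetical partition that counts items rather than bytes (the real limit is a payload size):

```rust
/// Hypothetical partition that flushes eagerly once it holds
/// `limit` items, instead of waiting for the flush interval.
struct Partition {
    limit: usize,
    items: Vec<u32>,
    flushed: Vec<Vec<u32>>,
}

impl Partition {
    fn new(limit: usize) -> Self {
        Partition { limit, items: Vec::new(), flushed: Vec::new() }
    }

    fn push(&mut self, item: u32) {
        self.items.push(item);
        if self.items.len() >= self.limit {
            // Batch size limit reached: flush the partition eagerly.
            self.flushed.push(std::mem::take(&mut self.items));
        }
    }
}

fn main() {
    let mut partition = Partition::new(3);
    for item in 0..7 {
        partition.push(item);
    }
    assert_eq!(partition.flushed.len(), 2); // two eager flushes
    assert_eq!(partition.items, vec![6]);   // remainder awaits the next flush
}
```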
In processing mode, Relay still produces to Kafka; the configuration option has no effect there.
A note on stability: this endpoint and functionality are meant for operation at scale within a distributed Sentry installation. At this moment, enabling this option for external Relays is not recommended.
Tasks
Possible Improvements
The changes in this PR allow for further optimization and changes. To keep the scope of this PR to a necessary minimum, these have not been included yet:
- …`EnvelopeProcessor`, avoiding the roundtrip through the `EnvelopeManager`. This also avoids a redundant call back to compress envelopes in the processor, should HTTP encoding be enabled.
- …`ExtractionMode`. It can be accessed more conveniently through a getter. Additionally, it resides on project config, where it should actually be included in global config. Currently it is being passed around in excess.