Split collection limit out of cardinality limit #3813
Comments
I think I agree with this proposal. The Lightstep metrics SDK, which I used for prototyping, does have two limits that roughly match what @MrAlias describes above. We might disagree on what "within a collection cycle" means. In my implementation, this "interior" cardinality limit is enforced between any two collection cycles by any two Readers. So -- and I admit this is not very intuitive -- when the interior limit is being reached, one way to address it is for the user to add another Reader with a shorter collection cycle. This pushes the cardinality out of the interior data structure into each reader sooner, at which point the per-reader collection limit is well defined. @utpilla looking for input.
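For illustration, a minimal sketch of the workaround described in the comment above, using the OpenTelemetry Go SDK: two PeriodicReaders attached to one MeterProvider, where the second reader's shorter collection interval drains accumulated attribute sets out of the shared ("interior") state more often. The exporters are assumed to be constructed elsewhere; only the reader wiring is shown.

```go
package main

import (
	"time"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// newMeterProvider attaches two periodic readers to the same SDK.
// The second reader's shorter interval collects (and so drains)
// accumulated attribute sets more frequently, easing pressure on
// the interior limit described above.
func newMeterProvider(exp1, exp2 sdkmetric.Exporter) *sdkmetric.MeterProvider {
	return sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp1,
			sdkmetric.WithInterval(60*time.Second))),
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp2,
			sdkmetric.WithInterval(10*time.Second))),
	)
}
```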
If I understand this correctly, this is saying that for a given attribute set (i.e. …)
Is there something unclear with the description?
We may want to revisit this issue after #3798 is resolved, since there seems to be some interdependency between them.
+1, I am confused too. Why do we want to limit the number of measurements allowed for this?
Are we saying we need to have a limit on the number of measurements allowed? That is, if I have 100 measurements and the limit is 90, what happens to the other 10 measurements? What is the need for this limit? Isn't the whole point of metrics that the output is of a fixed, predictable size, so that whether a user has 100 measurements or a million of them, the output is still a predictable size? It may be the case that we didn't understand each other. @MrAlias, can you clarify whether my (and Jack's) understanding is correct? Also, #3856 has clarified that the cardinality limit in the spec today corresponds to limit 2 from this issue.
Note from the description:
Which means that if you made 100 measurements for distinct attribute sets, yes, you would limit to 90. If you made 100 measurements for the same attribute set, you would measure all 100. This assumes filtering is done in the collection phase.
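As a sketch of that distinction (invented names, not SDK code): the limit below counts distinct attribute sets and is applied only when a reader collects, so repeated measurements for an already-seen set are always aggregated.

```go
// Sketch only: the limit applies to distinct attribute sets, not to
// the number of measurements, and is enforced in the collection phase.
type sumAggregator struct {
	values map[string]int64 // keyed by an encoded attribute set
}

func newSumAggregator() *sumAggregator {
	return &sumAggregator{values: make(map[string]int64)}
}

func (a *sumAggregator) record(attrKey string, v int64) {
	a.values[attrKey] += v // every measurement is aggregated
}

// collect emits at most limit series; the remainder is filtered out
// (the spec's overflow attribute set is omitted for brevity).
func (a *sumAggregator) collect(limit int) map[string]int64 {
	out := make(map[string]int64, limit)
	for k, v := range a.values {
		if len(out) >= limit {
			break
		}
		out[k] = v
	}
	return out
}
```

With a limit of 90, 100 measurements spread over 100 distinct attribute sets produce 90 exported series, while 100 measurements for a single attribute set produce one series aggregating all 100.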
@MrAlias
I think this can be postponed until after stabilization of the cardinality limit if we can decide on possible naming. It seems like what we are after is a "hard" and a "soft" limit definition. The "hard" limit would be the one that is never exceeded, even during the measurement phase, and the "soft" limit is the current definition, which may be exceeded if filtering is not done in the measurement phase. If we want to name them as such, we should consider renaming the existing cardinality limit to the cardinality soft limit.
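Extending the sketch above (still invented names, not a proposed API): a "hard" limit would be checked when a measurement is recorded, so the in-memory state can never exceed it, while the collect-time check above plays the role of the "soft" limit.

```go
// Sketch only: the hard limit is enforced at measurement time, so the
// set of tracked attribute sets never exceeds it; the existing
// collect(limit) check above acts as the soft limit.
func (a *sumAggregator) recordWithHardLimit(attrKey string, v int64, hardLimit int) {
	if _, ok := a.values[attrKey]; !ok && len(a.values) >= hardLimit {
		return // a new attribute set beyond the hard limit is not tracked
	}
	a.values[attrKey] += v
}
```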
If we need this distinction, I think we can use the same name but apply it at different components/levels. This is similar to throttling; we can have the same "requests per second" throttling mechanism at various levels (e.g., for each endpoint, for each binding address, for each client IP address, etc.).
As discussed in the last specification SIG meeting (2024-01-09), the existing cardinality limit is being used to represent two different limits:
This distinction is meaningful for a few reasons.
Proposal
cc @trask @jmacd @jack-berg @jsuereth