map: use batch lookups in Map.Iterate() #1079
Comments
I would love it if
Stumbled upon this. I did some work recently in Cilium that dealt with the issue of default batch sizes, which is likely related. There has been work done to significantly improve the batch lookup API, but it would be nice to start looking at providing an iterator pattern that satisfies the Go rangefunc experimental feature.
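A minimal sketch of what such an iterator could look like, assuming Go 1.23 range-over-func iterators (iter.Seq2) wrapped around the existing Map.Iterate()/MapIterator API; the All helper and the fixed uint32 key/value types are illustrative, not part of the library.

```go
package example

import (
	"iter"

	"github.com/cilium/ebpf"
)

// All adapts the existing Map.Iterate()/MapIterator API to a
// range-over-func iterator. Batch lookups could later be hidden behind
// the same signature without changing callers.
func All(m *ebpf.Map) iter.Seq2[uint32, uint32] {
	return func(yield func(uint32, uint32) bool) {
		var k, v uint32
		it := m.Iterate()
		for it.Next(&k, &v) {
			if !yield(k, v) {
				return
			}
		}
		// Iteration errors would still need surfacing, e.g. via it.Err().
	}
}
```

A caller could then simply write `for k, v := range All(m) { ... }`.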
I would love that. However, your work plus my recent experience with #1485 makes me think that the current API is basically unusable in a generic fashion. We don't know anything about the map in question except the max entries, really. In the worst case, we could have a single bucket with
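To illustrate why sizing batch lookups generically is awkward when only MaxEntries is known, here is a hedged sketch that retries with a larger chunk whenever the kernel reports ENOSPC because a whole hash bucket does not fit into one call. It assumes the cursor-based BatchLookup signature from recent library releases, uint32 keys and values, and that ENOSPC is what surfaces in that situation; batchDump and the starting chunk size are illustrative only.

```go
package example

import (
	"errors"
	"fmt"

	"github.com/cilium/ebpf"
	"golang.org/x/sys/unix"
)

// batchDump reads all entries of a hash map using batch lookups,
// doubling the chunk size whenever a bucket does not fit. In the
// degenerate single-bucket case this only terminates once the chunk
// reaches MaxEntries.
func batchDump(m *ebpf.Map) (map[uint32]uint32, error) {
	out := make(map[uint32]uint32)
	chunk := 64 // initial guess; nothing in the map spec tells us a safe value

	for {
		var cursor ebpf.MapBatchCursor
		keys := make([]uint32, chunk)
		vals := make([]uint32, chunk)

		restart := false
		for !restart {
			n, err := m.BatchLookup(&cursor, keys, vals, nil)
			for i := 0; i < n; i++ {
				out[keys[i]] = vals[i]
			}
			switch {
			case errors.Is(err, ebpf.ErrKeyNotExist):
				return out, nil // all entries read
			case errors.Is(err, unix.ENOSPC):
				// Assumed behaviour: a hash bucket was larger than the
				// buffer, so grow the chunk and start over.
				chunk *= 2
				restart = true
			case err != nil:
				return nil, fmt.Errorf("batch lookup: %w", err)
			}
		}
	}
}
```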
I would like to discuss the future of the Map Iterator API. Currently, it doesn't transparently use batch lookups under the hood.
However, the current batch API, as it is in the library, is relatively barebones, and it hasn't been iterated on since its inception or integrated with other parts of the Map API.
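For reference, this is roughly what iteration looks like today with the per-key MapIterator: every entry costs separate get-next-key and lookup syscalls rather than a single batched call. The uint32 types and the dump helper are illustrative.

```go
package example

import (
	"fmt"

	"github.com/cilium/ebpf"
)

// dump walks a map one entry at a time via the existing iterator.
func dump(m *ebpf.Map) error {
	var key, value uint32
	it := m.Iterate()
	for it.Next(&key, &value) {
		fmt.Println(key, value)
	}
	return it.Err()
}
```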
Maybe it could be an option to either break the API of the current Map.Iterate() (and make it take a config struct), or to introduce a new Map.Range() or Map.For() method that takes a configuration struct. Or maybe simply Map.IterateBatch(), but that sounds rather unimaginative.
Ideally, we'd have a unified API where the caller can disable batch lookups if needed for certain use cases. Thinking of hash map clearing, where each call to .Next() is expected to point at the first element, which gets subsequently deleted.
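Purely as an illustration of the configuration-struct idea above (none of these names exist in the library; IterateOptions, DisableBatching, and RangeWithOptions are hypothetical), such a unified API could look roughly like this, falling back to per-key iteration when batching is disabled or unsupported.

```go
package example

import "github.com/cilium/ebpf"

// IterateOptions is a hypothetical configuration struct for a unified
// iteration API; it is not part of the library.
type IterateOptions struct {
	// BatchSize > 0 requests batched lookups with that chunk size;
	// 0 would let the library pick a default.
	BatchSize uint32
	// DisableBatching forces per-key iteration, e.g. for the map-clearing
	// pattern where each Next() must observe the current first element.
	DisableBatching bool
}

// RangeWithOptions sketches the unified entry point. A real
// implementation would switch to batch lookups when supported and not
// disabled; here it only shows the per-key fallback.
func RangeWithOptions(m *ebpf.Map, opts IterateOptions, fn func(key, value uint32) bool) error {
	var k, v uint32
	it := m.Iterate()
	for it.Next(&k, &v) {
		if !fn(k, v) {
			return nil
		}
	}
	return it.Err()
}
```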
Wondering if anyone's thought about this before, or if there are any opinions around this. Thanks!