Opportunistically decode attributes in AttributeDecoder #157
Hi Matt,
As discussed yesterday, this patch contributes two things:

- `AttributeDecoder.TypeFlags()`, which returns the Type with an inverted `attrTypeMask`
- `UnmarshalAttributes()` and `ad.Next()`, which allow attributes to be decoded opportunistically
This change is motivated by ti-mo/conntrack#13, as I'd like to expose a `netfilter.AttributeDecoder/Encoder` and `conntrack.AttributeDecoder/Encoder` interface all the way up the call chain. If done right, decoding a whole netlink message can be done with only a single allocation of each layer's `Attribute` type. 😱

It doesn't save much on the benches in this repo, because it decodes all attributes anyway, but it allows for large efficiency gains in downstream libraries. Because attributes can now be decoded opportunistically (e.g. calling `ad.Next()` until you have the attributes you care about, then calling it quits), a caller that uses AttributeDecoder no longer implicitly decodes the whole message. This potentially saves a lot of CPU time.

I also cut down on memory allocations: AttributeDecoder now uses a single `netlink.Attribute` as a scratch buffer that it repeatedly unmarshals into, completely removing allocations from the hot path. This is illustrated by the benchmark results.
Before:
After:
The extra allocation visible in benchmarks 0 and 1 is due to an added `make([]Attribute, 0, ad.Len())` in `UnmarshalAttributes` that pre-allocates the backing array of the output slice. This amortizes nicely over time.

The existing test suite was very helpful in weeding out inconsistencies in behaviour; for example, I had to add an extra check in `UnmarshalAttributes` to make sure it explicitly returns a nil `[]Attribute` when no attributes were decoded.

Let me know what you think!