encoding/binary: add func Append #60023
CC @dsnet
SGTM.
I've been playing with the implementation a bit, and I'd like to extend my proposal to cover an equivalent function for decoding:

```go
// Decode decodes data from buf according to the given byte order.
//
// It returns the number of bytes read from the beginning of buf, or
// io.ErrUnexpectedEOF if buf is too short.
func Decode(buf []byte, order ByteOrder, data any) (int, error)
```

The rationale is similar to that for Append. I struggled to come up with a good name for the function: what is the inverse of Append?

```go
func Decode(buf []byte, order ByteOrder, data any) (int, error)          // aka Read
func Encode(buf []byte, order AppendByteOrder, data any) ([]byte, error) // aka Write
```
We have used Append as the word for all of the other appending encoders, both in this package and others; it is confusing to change to Encode now (for example, why does Encode take an AppendByteOrder and not an EncodeByteOrder? why is it Encode but AppendVarint? and so on). Let's keep using Append.
If we need the decoding version, it could be Parse.
This proposal has been added to the active column of the proposals project |
Looking at unicode/utf8, which has both AppendRune and DecodeRune, it would be fine to have Decode+Append here. But sometimes you are not actually appending, and it seems okay to add Encode too, like utf8 has EncodeRune. But Encode would not be an appender. It would return an error if the buffer was not large enough. If we did that, we'd have all three:
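A sketch of the three, written with the parameter shapes that were ultimately accepted (at this point in the thread the encoding half still took an AppendByteOrder, which was dropped later on):

```go
func Append(buf []byte, order ByteOrder, data any) ([]byte, error)
func Encode(buf []byte, order ByteOrder, data any) (int, error)
func Decode(buf []byte, order ByteOrder, data any) (int, error)
```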
That would match utf8 better.
Change https://go.dev/cl/579157 mentions this issue.
I've dusted off the code I've had lying around and mailed a CL. Encode and Decode were straightforward additions. One interesting wrinkle: I've opted to implement all other primitive functions in terms of Append, which requires converting a ByteOrder into an AppendByteOrder:

```go
// toAppendByteOrder returns order itself when it already provides the
// Append* methods, and otherwise wraps it in an adapter.
func toAppendByteOrder(order ByteOrder) AppendByteOrder {
	switch order := order.(type) {
	case littleEndian:
		return order
	case bigEndian:
		return order
	case nativeEndian:
		return order
	case AppendByteOrder:
		return order
	default:
		return appendableByteOrder{order}
	}
}
```
P.S.: ToAppendByteOrder is necessary if a dependent package takes ByteOrder as an argument but would like to use Append internally for performance reasons. Without it the choice of encoding primitive "leaks" into the API and may require breaking changes. I'd like to extend the proposal to include the function (or I can make a separate proposal).
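To illustrate the scenario: putHeader below is a hypothetical function in a dependent package, not code from the CL. Its exported API only promises a ByteOrder, so without an exported conversion it has to do the AppendByteOrder detection and the fallback itself.

```go
package example

import "encoding/binary"

// putHeader wants to append directly, but because its signature takes a
// ByteOrder it must type-assert to AppendByteOrder and keep a slow path.
func putHeader(buf []byte, order binary.ByteOrder, typ, length uint32) []byte {
	if ao, ok := order.(binary.AppendByteOrder); ok {
		buf = ao.AppendUint32(buf, typ)
		return ao.AppendUint32(buf, length)
	}
	// Fallback: encode through a temporary array and copy.
	var tmp [4]byte
	order.PutUint32(tmp[:], typ)
	buf = append(buf, tmp[:]...)
	order.PutUint32(tmp[:], length)
	return append(buf, tmp[:]...)
}
```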
Sorry but we're not going to add ToAppendByteOrder. It seems okay to have some duplication of special cases between Encode and Append.
Based on the discussion above, this proposal seems like a likely accept. The proposal is to add:

```go
func Decode(buf []byte, order ByteOrder, data any) (int, error)
```
No change in consensus, so accepted. 🎉 The proposal is to add:

```go
func Decode(buf []byte, order ByteOrder, data any) (int, error)
```
We've been going around and around on the implementation of this and I think I just realized the central issue that's giving us trouble: why does Append take an AppendByteOrder instead of a ByteOrder? In the general case, it already pre-computes the size of the entire encoded value and grows the slice by that much. At that point, there's no need for the Append* methods of AppendByteOrder. In the current CL, it uses the Append methods for the fast path, but I'm not sure there's much value in that over just growing the slice.
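A minimal sketch of the approach being described, using a single fixed-size value in place of the real reflection-driven sizing: grow the slice once by the encoded size, then fill the new tail with the ordinary ByteOrder Put* methods, so no AppendByteOrder is needed.

```go
package example

import (
	"encoding/binary"
	"slices"
)

// appendUint32 grows buf by the encoded size, extends its length, and fills
// the tail in place via PutUint32.
func appendUint32(buf []byte, order binary.ByteOrder, v uint32) []byte {
	const n = 4
	buf = slices.Grow(buf, n)
	buf = buf[:len(buf)+n]
	order.PutUint32(buf[len(buf)-n:], v)
	return buf
}
```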
Dropping AppendByteOrder works for me.
Change https://go.dev/cl/587096 mentions this issue.
Updates #60023

Change-Id: Ida1cc6c4f5537402e11db6b8c411828f2bcc0a5e
Reviewed-on: https://go-review.googlesource.com/c/go/+/587096
Reviewed-by: Austin Clements <austin@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
I'd like to propose adding a function with the following signature:
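The code block with the signature did not survive extraction; judging from the rest of the thread (Append originally took an AppendByteOrder, which was later changed to ByteOrder), the proposed declaration was roughly:

```go
// Append encodes data according to the given byte order and appends the
// result to buf, returning the extended slice.
func Append(buf []byte, order AppendByteOrder, data any) ([]byte, error)
```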
This is useful when repeatedly encoding the same kind of value multiple times into a larger buffer and is a natural extension to #50601. A related proposal wants to add similar functions to other packages in encoding: #53693.

Together with #53685 it becomes possible to implement a version of binary.Write that doesn't allocate when using common io.Writer implementations. See my comment for writeBuffer(). Roughly (untested):
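The snippet referenced here is missing from the extraction. A minimal sketch of the idea, assuming the AvailableBuffer method that #53685 added to bytes.Buffer; writeTo is a placeholder name, and writeBuffer() refers to a comment on that issue that is not reproduced here:

```go
package example

import (
	"bytes"
	"encoding/binary"
	"io"
)

// writeTo encodes data and writes it to w. When w is a *bytes.Buffer, the
// encoding is appended into the buffer's spare capacity (AvailableBuffer,
// Go 1.21+), so no intermediate slice needs to be allocated.
func writeTo(w io.Writer, order binary.ByteOrder, data any) error {
	var buf []byte
	if b, ok := w.(*bytes.Buffer); ok {
		buf = b.AvailableBuffer()
	}
	buf, err := binary.Append(buf, order, data)
	if err != nil {
		return err
	}
	_, err = w.Write(buf)
	return err
}
```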
If the CLs to avoid escaping in reflect APIs land, Append would allow encoding with zero allocations.

I think it might also allow encoding into stack allocated slices, provided the compiler is (or becomes) smart enough:
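The example that followed is also missing. A contrived sketch of what "stack allocated" could mean here, assuming a value whose encoding fits a small local buffer; whether the backing array actually stays on the stack depends entirely on escape analysis:

```go
package example

import "encoding/binary"

// checksum encodes v into a small local buffer and consumes it immediately.
// If escape analysis can prove that buf never leaves the function, its
// backing array may be allocated on the stack.
func checksum(v uint64) (byte, error) {
	buf := make([]byte, 0, 8)
	buf, err := binary.Append(buf, binary.LittleEndian, v)
	if err != nil {
		return 0, err
	}
	var sum byte
	for _, b := range buf {
		sum += b
	}
	return sum, nil
}
```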