compress/zstd: add new package #62513
Some comments on other popular compression formats and why Zstandard makes the cut, but not others:
Most compression formats (other than bzip2) are a combination of LZ77 with entropy encoding (and maybe other tricks like Markov chains for LZMA). The entropy encoding that GZIP uses is Huffman encoding, which can't compress better than 1 bit per byte. In terms of compression ratio, arithmetic encoding (which LZMA uses), range encoding, and ANS (which Zstandard uses) provide better ratios. ANS has the benefit that it overcomes the limitations of Huffman encoding, is patent unencumbered (to my knowledge), and is also very fast. It seems unlikely a better entropy encoding than ANS will be discovered in the next decade. IMO, the lack of a formal specification should bar inclusion in stdlib as it makes "correctness" somewhat undefined.
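For concreteness (this example is not from the thread), here is a small, self-contained illustration of the gap described above: a Huffman code must spend at least one whole bit per symbol, while the Shannon entropy of a skewed source can be far lower, which is the headroom ANS, arithmetic, and range coders exploit.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// A heavily skewed source: 99% of symbols are 'a', 1% are 'b'.
	probs := []float64{0.99, 0.01}
	var entropy float64
	for _, p := range probs {
		entropy -= p * math.Log2(p)
	}
	fmt.Printf("Shannon entropy: %.3f bits/symbol\n", entropy) // ~0.081
	// A Huffman code assigns whole-bit codewords, so it cannot beat
	// 1.0 bit/symbol here; ANS, arithmetic, and range coders can get
	// close to the 0.081-bit entropy limit.
}
```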
Makes sense, but I have to say that I'm not sure that anybody could write a correct zstd decompressor based solely on reading RFC 8878.
Heh... you're the expert there, having done it. I had written a Brotli decompressor solely from the draft RFC without looking at the implementation. This effort led to a number of updates to the draft so that it would be unambiguous when finalized.
To add to the list of good points: zstd has an extremely wide range of levels available; depending on the compressor options you can get lz4- or gzip-like ratios with very low resource usage.
@dsnet Just to clarify some mistakes in your initial post: zstd compression does not use any assembly or "unsafe" (nor does deflate, for that matter). The assembly is used for decompression and can be fully disabled with a "noasm" tag. So obviously that is trivial to exclude.
Could you elaborate on what you find particularly complex? I expect the stdlib would have the multithreading pulled out and only implement a fully synchronous mode, as well as possibly omitting dictionary features. Without those, the API isn't really more complex than flate, gzip, and similar. Without concurrency you can have the options on the Encoder (or Writer, whatever it is called), since you don't have any sharing to care about.
I could interpret this as an underhanded way of implying my library isn't safe to use. Probably not the intention, but just to be clear: there is continuous fuzzing running and there is third-party security review being done. Considering it doesn't use assembly, nor unsafe, I think it is fair to say that battle-tested code is more secure than freshly written code. If you want to write it, I will of course not take that fun from you - but to wrap it in an argument of "security" seems disingenuous, or at the very least not well researched, to me. I do think that features like concurrent (de)compression and possibly dictionaries should be taken out from the stdlib. I think having the …
@klauspost, thanks for your thoughts.
Correct. There should be no usages of goroutines in the stdlib implementation. The example in klauspost/compress#479 should be perfectly thread safe (as it is with the other stdlib packages). One "dictionary" feature we probably want to keep is the ability to specify the maximum window size.
Not at all the intention; I use your package quite extensively myself. That said, the stdlib avoids the use of assembly except for a few select places. Assembly can be a challenge to review and maintain. Continuous fuzzing is great, but it takes a long time before confidence in the implementation is obtained, and fuzzing doesn't help when trying to review whether a change to the assembly implementation is correct. I should also mention the asynchronous nature …
I'm confused. Just above, you mentioned that assembly is used in your package for decompression, but here you're saying it doesn't use assembly? Did you mean if you built it under …
I'm not itching to write a zstd implementation from scratch. My end goal is to see support for it in stdlib. In my proposal, I deliberately didn't make a statement about how this would come to be fulfilled. Using the pure Go (i.e., no assembly) implementation of your package would be a reasonable possibility.
I personally find it rather strange that the …
Given the acceptance of #54078, I would rather see something like:

func AppendCompress(dst, src []byte, level int) ([]byte, error)
func AppendDecompress(dst, src []byte, maxSize int) ([]byte, error)
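A hypothetical usage sketch of the append-style functions suggested above; the import path, the level value, and the size cap are illustrative assumptions, not part of the proposal text.

```go
package main

import (
	"compress/zstd" // hypothetical: assumes the proposed API lands under this path
	"fmt"
)

func main() {
	src := []byte("hello, hello, hello")

	// Append-style calls in the spirit of #54078: dst can be reused across calls.
	compressed, err := zstd.AppendCompress(nil, src, 3) // 3: an illustrative level
	if err != nil {
		panic(err)
	}

	// maxSize caps the output as a guard against decompression bombs.
	out, err := zstd.AppendDecompress(nil, compressed, 1<<20)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```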
Yes. That is an essential security DoS mitigation. Though when I refer to dictionaries I only refer to predefined dictionaries that seed the first blocks with backreferences and predefined entropy coders.
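As a concrete illustration of that mitigation today, @klauspost's package exposes a decoder option that caps the window size; the option name here is quoted from memory, so treat this as a sketch rather than a definitive API reference.

```go
package main

import (
	"io"
	"os"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// Reject frames that declare a window larger than 8 MiB instead of
	// letting a hostile stream force a huge allocation.
	dec, err := zstd.NewReader(os.Stdin, zstd.WithDecoderMaxWindow(8<<20))
	if err != nil {
		panic(err)
	}
	defer dec.Close()
	if _, err := io.Copy(os.Stdout, dec); err != nil {
		panic(err)
	}
}
```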
I assumed you were only looking for compression given there already is an internal decompressor, which I unfortunately haven't had the time to test. But since everything is marked by tags, taking it out is extremely easy.
I am not sure how you arrive at that. If you use it for streaming, you can only use it for one stream - that isn't different from a sync Writer. If you want writes to only happen when you are calling Write, yes, you need to disable concurrency - but that is an assumption you are making that doesn't hold. The documentation should be pretty clear on what you can expect. Either way, it is a non-issue since the concurrency should go. Both the compressor and the decompressor have an "async" and a "sync" code path. Ripping out the async one is a minor task. Of course some cleanup can probably be done, but that is cosmetics.
So you want each of them to allocate, or try to hide the allocs with internal sync.Pools?
Being able to use the same Reader/Writer for concurrent Encode/Decode operations would have to go, but it would at least give control over how each is reused. Sidenote, …
Stack allocate if possible, reusing the destination buffer if possible (e.g., we don't need to maintain the LZ77 window for decompression since it can be obtained from dst).
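A minimal sketch (not the stdlib code) of why the output buffer can double as the LZ77 window during decompression: every back-reference points into bytes already appended to dst, so resolving a match is just a copy out of dst itself.

```go
// appendMatch appends a match of length n that starts offset bytes before
// the current end of dst. The copy runs byte by byte because matches may
// overlap their own output (offset < n), e.g. run-length-style references:
// appendMatch([]byte("abc"), 3, 6) yields "abcabcabc".
func appendMatch(dst []byte, offset, n int) []byte {
	start := len(dst) - offset
	for i := 0; i < n; i++ {
		dst = append(dst, dst[start+i])
	}
	return dst
}
```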
To comment more on the …
While zstd is a noble goal, I would like to highlight that it still has zero support in browsers as of today. So for anyone looking to use a more efficient …
@silverwind I think this is more relevant for #62492. ZSTD has found many uses outside …
To offer a bit of commentary on how it would be used: at my company Zstandard is becoming our default general-purpose compression algorithm because it's strictly better than gzip on all the dimensions we care about. There are cases where we would decide to use gzip (for compatibility) or snappy (for very high throughput), but in the absence of specific requirements, Zstandard is the go-to.
Yes, just wanted to mention it. I'm in favor of adding any of the "better than gzip" mechanisms to the standard library, both for http transfer, but also …
Correct (and it already uses dst instead of a dedicated history). I can see the single-use decoder working stateless with hidden, internal buffers (for things like literal decompression and entropy decoders, for example). For encoding you have rather big hash tables to deal with, which will put quite some pressure on the stack: Level 1 has 256KB, Level 2 has 1.2MB, Level 3 has 4MB. So these would also need some internal pooling, since having those on the stack and forcing zeroing on each call wouldn't be very optimal. Again, entropy/literal coders would need to have internal pools, since they are rather expensive to set up. Doable, but I am not generally a fan of having a bunch of pools at the package level. Streams will of course be stateful, so I'm ignoring those for now. That said, I don't really see any problem in separating single-shot state from the Reader/Writer state and having an internal pool for those.
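A rough sketch of the internal pooling being described, where the expensive per-call scratch lives in a sync.Pool rather than on the stack; the type, function names, and sizes are assumptions, not the real implementation.

```go
package zstdsketch

import "sync"

// encoderScratch holds per-call working state: match-finder hash tables and
// entropy-coder buffers. The sizes below are placeholders; the real levels
// need roughly 256KB-4MB of tables depending on the level.
type encoderScratch struct {
	hashTable []uint32
	litBuf    []byte
}

var scratchPool = sync.Pool{
	New: func() any {
		return &encoderScratch{
			hashTable: make([]uint32, 1<<16),
			litBuf:    make([]byte, 0, 1<<16),
		}
	},
}

// appendCompress is a stand-in for a single-shot Append-style encoder.
func appendCompress(dst, src []byte) []byte {
	s := scratchPool.Get().(*encoderScratch)
	defer scratchPool.Put(s)
	// ... run the match finder and entropy coders using s's buffers,
	// appending compressed blocks to dst ...
	_ = src
	return dst
}
```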
Sure. I can see that. I try to avoid global package state. That is the main reason I haven't added it, since it would be rather annoying if one bad actor could cause problems for others. A global "pool" is nice for resource control. It is pointless to run more than GOMAXPROCS EncodeAll/DecodeAll calls concurrently, since each will consume exactly one thread fully until it is done. That has a "play nice" feel for preemption, and limiting this below GOMAXPROCS will of course make sure that compression/decompression never oversaturates a system. Without concurrency there isn't the main "annoyance" with the current API - that you have to …
So yeah, I can see that as a way of simplifying things. Let me know if I understood your ideas correctly.
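A tiny sketch of the "global pool as resource control" idea: a semaphore sized to GOMAXPROCS so that no more single-shot encode/decode calls run at once than there are threads to serve them (names are illustrative).

```go
package zstdsketch

import "runtime"

// slots caps concurrent single-shot EncodeAll/DecodeAll-style calls at
// GOMAXPROCS, since each call fully occupies one thread until it returns.
// (GOMAXPROCS is sampled once here; a real implementation might re-check.)
var slots = make(chan struct{}, runtime.GOMAXPROCS(0))

func withSlot(work func()) {
	slots <- struct{}{}        // acquire a slot
	defer func() { <-slots }() // release it when done
	work()
}
```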
@dsnet I tested the …
Assembly disabled. Using the streaming interface. Multithreading disabled. So "not great, not terrible" applies, it seems. With assembly the difference grows significantly. Also, I did find some bugs while testing casually.
Bugs found: I ran my fuzz set through the decoder and found some differences/bugs. I don't really know where to report them (should I open an issue?), so I collapsed them here, since they aren't super relevant for this issue. Here are a bunch of samples: zstd-internal-problems.zip
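For readers unfamiliar with the setup, a round-trip fuzz harness along these lines (using @klauspost's package; API names from memory) is roughly what running a fuzz set through a decoder looks like, though the bugs above came from feeding an existing corpus to the internal decoder rather than from this exact test.

```go
package zstdfuzz

import (
	"bytes"
	"testing"

	"github.com/klauspost/compress/zstd"
)

// FuzzRoundTrip checks that whatever the encoder produces, the decoder gives
// back the original bytes. A differential fuzzer would instead decode the
// same frames with two independent implementations and compare outputs.
func FuzzRoundTrip(f *testing.F) {
	enc, _ := zstd.NewWriter(nil) // only EncodeAll is used, so no io.Writer needed
	dec, _ := zstd.NewReader(nil) // only DecodeAll is used, so no io.Reader needed
	f.Fuzz(func(t *testing.T, data []byte) {
		frame := enc.EncodeAll(data, nil)
		got, err := dec.DecodeAll(frame, nil)
		if err != nil {
			t.Fatalf("decode failed: %v", err)
		}
		if !bytes.Equal(got, data) {
			t.Fatal("round trip mismatch")
		}
	})
}
```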
All failures for https://github.com/klauspost/compress/blob/master/zstd/testdata/benchdecoder.zip are fixed when the window size is configured according to the RFC instead of zero.
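For reference, the window size in question comes straight from RFC 8878's Window_Descriptor byte; a direct transcription of that rule looks like the sketch below (not the CL's code).

```go
// frameWindowSize derives Window_Size from the one-byte Window_Descriptor in
// a frame header (used when Single_Segment_Flag is not set), following the
// exponent/mantissa rule in RFC 8878. The minimum possible result is 1 KiB.
func frameWindowSize(windowDescriptor byte) uint64 {
	exponent := uint64(windowDescriptor >> 3)
	mantissa := uint64(windowDescriptor & 7)
	windowBase := uint64(1) << (10 + exponent)
	windowAdd := (windowBase / 8) * mantissa
	return windowBase + windowAdd
}
```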
I've created #63224 to address this.
Change https://go.dev/cl/531075 mentions this issue:
@klauspost Thanks for the bug reports, @AlexanderYastrebov thanks for the quick fix. @klauspost Please do feel free to open bug reports against the code on this issue tracker. Or I'll take a look either way.
Fix copyFromWindow when match extends past initial buffer size

For golang#62513
I replicated @rsc's proposal in that regard, and I believe he's just copying what the "flate" and "gzip" packages do. IMO, the main utility of it being a named type depends on the options pattern we go with. If we go with variadic options, there is a benefit to having it be a named type that implements the options type, so that you can pass it in directly without going through some options constructor.
Re API, I think we want compress/zstd to be a mostly drop-in replacement for compress/flate except for the bytes it generates. So I think having dictionaries is important. A "raw dictionary" is meant to handle what https://datatracker.ietf.org/doc/html/rfc8878#name-dictionary-format refers to as a "raw content" dictionary and most closely matches compress/flate. But then we also need the zstd-format dictionaries.

As for variadic vs setters, setters seem to work fine in the other compress packages. The variadic options can easily get overwhelming, and I think they're better avoided here.

The #62513 (comment) example for skippable frames makes clear that for starters we can have the decompressor just skip over them. We could also add an API that asks "am I now at a skippable frame" and could be called after every read, or we could have a Reader mode that stops at a skippable frame with ErrAtSkippable or something like that. (Plus we'd need a way to get its data.) But neither of those would actually help the Facebook seekable streams, so we should probably just leave them out for now, until more examples arise.

I don't mind leaving out ReadFrom/WriteTo. For some reason I thought compress/flate had them, but it doesn't.

So that would make the API: …
Feedback still welcome.
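For context on the skippable frames mentioned above, the framing itself is simple — a magic number in the range 0x184D2A50..0x184D2A5F, a 4-byte little-endian length, then opaque user data — so "just skip over them" amounts to something like this sketch (names are illustrative).

```go
package zstdsketch

import "encoding/binary"

// skipFrame reports whether b begins with a skippable frame and, if so,
// returns the frame's payload and the remaining input after the frame.
func skipFrame(b []byte) (payload, rest []byte, ok bool) {
	if len(b) < 8 {
		return nil, b, false
	}
	magic := binary.LittleEndian.Uint32(b)
	if magic&^0xF != 0x184D2A50 { // magic must be 0x184D2A50..0x184D2A5F
		return nil, b, false
	}
	size := binary.LittleEndian.Uint32(b[4:])
	if uint64(len(b)-8) < uint64(size) {
		return nil, b, false // truncated frame
	}
	return b[8 : 8+size], b[8+size:], true
}
```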
@rsc FWIW, I have never seen "raw dictionaries" used, nor had them requested. Only the "structured dictionaries" - and that at a low rate. So I am not sure if that needs to be part of the initial API.
I think we need to keep RawDict because it makes it easier to transition from compress/flate (or zlib) to compress/zstd. I don't mind ParseDict+AddDict(*Dict). As I understand it, you are allowed to have multiple dicts for a reader/writer to choose from, so AddDict not SetDict.
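One detail that makes the multiple-dictionary model workable: a zstd-format dictionary starts with a magic number (0xEC30A437) followed by a 4-byte little-endian Dictionary_ID, and each frame header records which Dictionary_ID it was compressed with, so a reader holding several parsed dicts can pick the right one. A small sketch of reading that ID (the function name is made up):

```go
package zstdsketch

import (
	"encoding/binary"
	"errors"
)

// dictID returns the Dictionary_ID embedded in a zstd-format dictionary.
// A decoder with several registered dictionaries can match this against the
// Dictionary_ID carried in a frame header to choose the right one.
func dictID(dict []byte) (uint32, error) {
	if len(dict) < 8 || binary.LittleEndian.Uint32(dict) != 0xEC30A437 {
		return 0, errors.New("not a zstd-format dictionary")
	}
	return binary.LittleEndian.Uint32(dict[4:]), nil
}
```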
I use them via your lib. I didn't request it because they are already there and work, so 🎉.
It sounds like the revised API would be as follows. Do I have this right?
Should …
@dsnet That seems very reasonable and convenient, since it has a well-defined binary format. The only complication that comes to mind is the dicts without tables. I don't like the …
AFAICT the …
https://pkg.go.dev/compress/flate@go1.23.0#NewWriterDict
https://pkg.go.dev/compress/gzip@go1.23.0#NewWriterLevel
https://pkg.go.dev/compress/zlib@go1.23.0#NewWriterLevel
https://pkg.go.dev/compress/zlib@go1.23.0#NewWriterLevelDict
I'm not sure why that pattern should be changed if an invalid level can be passed to …
Perhaps this is a reason not to have ParseRawDict and ParseDict both return a *Dict.
It's true that we changed to setters instead of NewWriterLevel, NewWriterLevelDict, but I think the setters are quite a bit clearer and avoid NewWriterLevelRawDict, NewWriterLevelRawDictWithOtherDicts, etc. The old approach did not scale. I should also add an error result from SetLevel, although we should make it clear that if you are setting a constant level like BestCompression it will never fail.
Have all remaining concerns about this proposal been addressed? The proposal details are in #62513 (comment).
Based on the discussion above, this proposal seems like a likely accept. The proposal details are in #62513 (comment).
Another datapoint on a potential dictionary API: ideally it should be done in a way that the user can manage the lifecycle of their own dictionaries. In multi-tenanted systems — or any dealing with multiple data categories — you want to be able to hold many dictionaries in memory and switch between them with minimal duplication. We rely on … Should dictionary support be added to the stdlib, I'd love this use-case to be considered.

ETA: A trimmed example of our use of an internal …

```go
type Dict struct {
	Key  uint32
	Data []byte
}

type Codec struct {
	dict *Dict
	// Both of these share the underlying bytes inside dict.Data
	cdict *gozstd.CDict
	ddict *gozstd.DDict
}

func NewCodec(dict *Dict) (Codec, error) {
	if dict == nil {
		return Codec{}, errors.New("'dict' must be a non-nil pointer")
	}
	cdict, err := gozstd.NewCDictByRef(dict.Data)
	if err != nil {
		return Codec{}, fmt.Errorf("creating cdict: %w", err)
	}
	ddict, err := gozstd.NewDDictByRef(dict.Data)
	if err != nil {
		return Codec{}, fmt.Errorf("creating ddict: %w", err)
	}
	return Codec{
		dict:  dict,
		cdict: cdict,
		ddict: ddict,
	}, nil
}

func (c *Codec) Key() uint32 {
	return c.dict.Key
}

func (c *Codec) Compress(dst []byte, src []byte) ([]byte, error) {
	return gozstd.CompressDict(dst, src, c.cdict), nil
}

func (c *Codec) Decompress(dst []byte, src []byte) ([]byte, error) {
	return gozstd.DecompressDict(dst, src, c.ddict)
}
```
@coxley I don't understand what you mean by managing the lifecycle. It seems like the ParseDict approach gives you a read-only Dict that can be shared by many encoders/decoders. What more is needed? Is the point for ParseDict not to copy the []byte it has been passed?
No change in consensus, so accepted. 🎉 The proposal details are in #62513 (comment).
Zstandard (RFC 8878) is well-positioned to replace GZIP (RFC 1952) as the de-facto compression format.

Zstandard: …

Given the likely ubiquitous place Zstandard is going to have in the future, I propose first-class adoption of Zstandard in the stdlib.

Existing Go implementations:
- … unsafe or assembly.

Some goals for this package:
- … compress/gzip.
- … unsafe or assembly, similar to the existing compress package.
- … compress/zstd, while those who want the best performance and/or advanced features of Zstandard can use @klauspost's package. (Any stdlib packages, e.g., net/http, that make use of compress/zstd should make it possible to swap over to a different zstd implementation.)
- … compress/gzip, which is generally good enough (although there is still room for performance optimizations), but power users can use github.com/klauspost/compress/gzip, which is much faster (as it makes use of assembly).

We could: …

Related issues:
- … Content-Encoding: zstd to http.DefaultTransport #62492