
Why Bitswap and compression may not be a good match #27

Open
adlrocha opened this issue Sep 25, 2020 · 2 comments

@adlrocha (Contributor)

As part of RFC|BB|L203A of the Beyond Bitswap project we are exploring the use of compression within Bitswap. In our initial explorations we have implemented two main strategies:

  • BlockCompression: compresses the RawData of each block using GZip.
  • FullCompression: compresses the full Bitswap message using GZip and puts the result in the new CompressedPayload field of a new Bitswap message.

Thus, two additional fields have been added to Bitswap's protobuf message: CompressedPayload, which carries the compressed bytes of the original Bitswap message when FullCompression is enabled, and CompressionType, a flag that signals the compression strategy used.
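
For reference, here is a minimal sketch of what the FullCompression path boils down to, using GZip from the Go standard library; the helper names (compressPayload, decompressPayload) are illustrative, not the actual go-bitswap code:

```go
package compression

import (
	"bytes"
	"compress/gzip"
	"io"
)

// compressPayload gzips an already-serialized Bitswap message so it can be
// placed in the CompressedPayload field. Hypothetical helper, not the real
// go-bitswap code.
func compressPayload(msg []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(msg); err != nil {
		return nil, err
	}
	// Close flushes the gzip footer; only here is the final compressed size known.
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// decompressPayload reverses compressPayload on the receiving side.
func decompressPayload(payload []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```

Note that the full compressed size is only known after Close, which is what clashes with the size-prefix framing discussed below.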

Initial tests show roughly a 10x overhead from the use of compression compared to Bitswap without compression. We compared our GZip compression approach to existing GZip http handlers in golang, and the main difference is that the http handlers pipe the compression writer to the http server's response writer, streaming the compressed bytes directly to the client (see the sketch after the list below). In our case, we can't use stream compression for several reasons:

  • The most limiting one is that we use length prefixes to determine the size of messages in our transmission channel. In order to stream the compressed message to the libp2p stream writer we would need to know the size of the compressed message in advance, and we can't know it until compression has finished.
  • We use a CompressionType field in the protobuf so we can stay backward compatible. This is not a blocker, because we could easily use a multicodec to signal that the stream is compressed.
  • In the current implementation of go-bitswap, where the libp2p protocol stream is referenced and written to in message.go, there is no easy way to pipe the compressed stream to the protocol stream writer (this is probably fixable once we figure out how to avoid the required size prefix).
  • Bitswap transmissions are done "block by block", so we can only compress "on the fly" the blocks requested in a wantlist and being sent in a message. We can't compress all the blocks of a DAG in advance so they are ready for future transmissions (although, based on the blocks already requested, we could have a good sense of "what comes next"). As an idea off the top of my head, we could have JIT compression (compressing blocks "on the fly") and optimized compression (where we try to predict the next block and compress it in advance), using an approach similar to V8's tiered compilation.
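
For comparison, below is the piping pattern that golang GZip http handlers typically rely on (a generic sketch, not taken from any particular middleware package): the gzip writer wraps the response writer, so compressed bytes are streamed straight to the client and the total compressed size never needs to be known up front.

```go
package compression

import (
	"compress/gzip"
	"net/http"
	"strings"
)

// gzipMiddleware streams the compressed response directly to the client by
// piping the handler's output through a gzip.Writer wrapped around the
// http.ResponseWriter. No size prefix is needed because HTTP falls back to
// chunked transfer encoding when the Content-Length is unknown.
func gzipMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		zw := gzip.NewWriter(w)
		defer zw.Close()
		next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, zw: zw}, r)
	})
}

// gzipResponseWriter forwards writes into the gzip stream.
type gzipResponseWriter struct {
	http.ResponseWriter
	zw *gzip.Writer
}

func (g gzipResponseWriter) Write(b []byte) (int, error) {
	return g.zw.Write(b)
}
```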

As a result of all of these evaluations, we want to open a discussion on the following topics:

  • In order to leverage stream handlers, instead of using a prefix to signal the size of sent messages, we could use a multicodec prefix plus KeepReading and EndOfStream signals in protocol streams, so we don't have to know message sizes beforehand and can pipe streams (as we were looking to do for stream compression).
  • Consider porting Bitswap from a message protocol to a stream protocol similar to graphsync, where first there is a discovery phase and then a stream is opened between the requesting peer and the holder of content. RFCBBL1201 goes in this direction.
  • Consider storing blocks' RawData compressed in the datastore, removing the need for "on-the-fly" compression (a rough sketch follows this list).
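
On the last point, here is a rough sketch of what storing blocks compressed could look like; compressedStore and its map-backed datastore are hypothetical stand-ins, not the real go-ipfs blockstore interface:

```go
package compression

import (
	"bytes"
	"compress/gzip"
	"io"
)

// compressedStore is a hypothetical wrapper that gzips a block's RawData
// before handing it to an underlying key-value datastore, so no "on-the-fly"
// compression is needed at transfer time.
type compressedStore struct {
	inner map[string][]byte // stand-in for the real datastore
}

// Put stores the block's raw data compressed under its CID string.
func (s *compressedStore) Put(cid string, rawData []byte) error {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(rawData); err != nil {
		return err
	}
	if err := zw.Close(); err != nil {
		return err
	}
	s.inner[cid] = buf.Bytes()
	return nil
}

// Get returns the decompressed raw data; note that local reads pay the
// decompression cost.
func (s *compressedStore) Get(cid string) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(s.inner[cid]))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```

The trade-off is that local reads also pay the decompression cost, unless the compressed bytes can be sent over the wire as-is and decompressed only by the requesting peer.
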
@jsoares commented Sep 25, 2020

I don't want to detract from your larger point on protocol mismatch, but these numbers must be looked at with some care.

I just tried reading, gzipping, and writing a 20 MB file to disk on my machine (virtualised, under load), and it took 0.08 s for highly redundant data and 0.6 s for random data. This is using -9 as the worst-case scenario... In reality, you'd probably start with a much lower compression level for on-the-fly compression.

Looking at your plot here and for a ~20 MB file, you're reporting an increase of ~20 s in the full message case and ~6 s in the block compression case (vs. the uncompressed baseline). The times are not directly comparable but that's still an order of magnitude difference and doesn't seem entirely algorithmic -- implementation considerations probably have a huge influence. Since compressed communication is always a trade-off (it trades off link capacity and transmission time for computational capacity and CPU time), it is very sensitive to how fast the implementation is. I don't think a naive implementation allows us to draw definitive conclusions.

Looking into it a little more, it appears the Go standard library gzip is pretty slow (or at least was in 2015). It may be worth quickly benchmarking the gzip implementation in isolation and/or trying alternatives.
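
For instance, a quick isolated benchmark could look something like the sketch below (standard testing package, 20 MB of random data as the worst case; swapping an alternative gzip package in place of compress/gzip would give a direct comparison):

```go
package compression

import (
	"bytes"
	"compress/gzip"
	"crypto/rand"
	"testing"
)

// BenchmarkGzip measures gzip compression of 20 MB of random data (the worst
// case for gzip) in isolation from the rest of the Bitswap pipeline.
func BenchmarkGzip(b *testing.B) {
	data := make([]byte, 20<<20)
	if _, err := rand.Read(data); err != nil {
		b.Fatal(err)
	}
	b.SetBytes(int64(len(data)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var buf bytes.Buffer
		zw, _ := gzip.NewWriterLevel(&buf, gzip.BestSpeed)
		if _, err := zw.Write(data); err != nil {
			b.Fatal(err)
		}
		if err := zw.Close(); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run it with `go test -bench=Gzip -benchmem`.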

The other side of the experiment, of course, is that using compression will never look good in an evaluation unless you're evaluating the send time over a constrained link. I don't know if we have any sort of numbers for average upload speed across IPFS, but using something like 10 Mbps may not be terribly unfair. (I assume this is not currently being done, as the plot says "bandwidth 0", though that doesn't explain why it takes 3 s to transfer a 20 MB file in the baseline case...)

All that being said, there's a good chance even an optimised implementation will not suffice to bridge the gap.

@adlrocha (Contributor, Author)

I just realized I left this thread unanswered. A few updates on this compression matter:

@daviddias transferred this issue from protocol/ResNetLab on Jan 8, 2021