feat: buffered writer #70
This'll be out when you go above 24 roots, and then again at 256: CBOR will be adding extra bytes to fit in the array length. You could fix it by adding 1 byte for `>=24` and another byte for `>=256`. There's another boundary at 65,536 where you'd need to add 2 more bytes, but maybe by that time you should be doing a `throw`.

There's also a difference in encoding for different sizes of CIDs at the same boundaries: a tiny CID under 24 bytes is going to be 1 byte more compact, and a huge one over 256 bytes is going to add another byte. These are going to be uncommon, but not out of the question. Unfortunately your arguments won't account for that; maybe the documentation just needs to make it clear that this estimate is only good for sensible defaults.

It seems that the cost of a bad estimate is the huge content shuffle in `close()`, which could be invisibly very expensive. Documenting that would be good too.

So, some solid tests for this function with various CIDs might be good!
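For reference, the boundaries follow CBOR's standard argument encoding, so the correction can be a small lookup table. A minimal sketch (the function name here is hypothetical, not something in this PR):

```js
// Bytes CBOR spends on the prelude that carries a length argument.
// The same table applies to arrays and to byte strings.
function cborLengthPreludeSize (length) {
  if (length < 24) return 1 // length packed into the initial type byte
  if (length < 256) return 2 // type byte + uint8
  if (length < 65536) return 3 // type byte + uint16
  if (length < 4294967296) return 5 // type byte + uint32
  return 9 // type byte + uint64
}
```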
I added an `EncodedLength()` calculator to the Go dag-cbor, which lets you figure out exactly how many bytes an object will take up once encoded. We could do the same for JS because it's not too hard, but it may not end up being that much cheaper than just doing a dag-cbor encode of a faked object and checking the byte count.
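The faked-object approach is only a few lines; a sketch, assuming the CAR v1 header shape of `{ version, roots }`:

```js
import * as dagCbor from '@ipld/dag-cbor'

// Exact header byte length, obtained by encoding a real header object
// and measuring it, rather than estimating from per-CID constants.
function headerByteLength (roots) {
  return dagCbor.encode({ version: 1, roots }).byteLength
}
```

The varint length prefix in front of the header would still need to be accounted for separately.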
I ended up publishing before I did a last push, so I'm not sure if that was prior to a fix I added to account for the `varintSize`. If so, I assume that last change should address this, or is it something else?

Yeah, I'm not sure there is a good way to account for all of that. The only thing I have considered is to overestimate a bit, which I think is better than underestimating. Alternatively we could return a range and let the user decide which one to go with.

I do not think that overhead is huge; as far as I remember, browsers optimize the case of moving bytes within the same buffer.

> So, some solid tests for this function with various CIDs might be good!

Yeah, I was thinking about that as well, but decided it was not worth the effort. However, I think it would be great to have some.
Something else: this is all about CBOR encoding compression. Arrays have their length included at the start; if the length is <24 then that number is included in the single byte at the start that says "this is an array". Between 24 and 256 it's in its own byte, so the array takes up 2 bytes to get started; 256 to 65,536 takes up another byte, etc. The same rule applies to byte arrays, so CIDs have this property too: a CID that's less than 24 bytes long will have one less byte in the prelude, and a CID greater than 256 bytes long will have an extra one. This is why a size estimator would be handy, because the rules are specific and get a bit weird and surprising at some edges. But the rules are fixed, so if you lock them down then you're set.

Unfortunately, if you start down this road then you're not far off making a full estimator for arbitrary objects!

So maybe just get this right for the obvious cases without easy failure modes, and we can get a proper estimator into the dag-cbor codec to use later.
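You can see the byte-string rule directly by encoding bare byte arrays and measuring the overhead; a quick sketch using `@ipld/dag-cbor`:

```js
import * as dagCbor from '@ipld/dag-cbor'

// Encoded size minus payload size = the CBOR prelude for a byte string
const overhead = (n) => dagCbor.encode(new Uint8Array(n)).byteLength - n

console.log(overhead(23)) // 1: length packed into the type byte
console.log(overhead(24)) // 2: length moves into its own uint8
console.log(overhead(256)) // 3: length takes a uint16
```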
We could fix this too now that you have the nice `arrayLengthEncodeSize()`, which will work for CIDs too. I think if you subtract `2` from `ROOT_EXTRA_SIZE` (the byte array is >=24 and <256, so it requires a two-byte CBOR prelude, `<bytes><length>`; the extra bits in `ROOT_EXTRA_SIZE` are to do with tags, mainly) and then add it back in by adding `+ arrayLengthEncodeSize(cid.length)` here, then you get a more accurate CID length. The problem with doing that is that your API allows for an optional `rootsByteLength`, but that could also probably be fixed by adding those `2` to the `DEFAULT_CID_SIZE`.

If you add `bafkqaaia` and `bafkqbbacaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa` to your tests then you should see the impact of the bad sizes here. Identity CIDs are the most likely way we're going to end up here; it would be silly to use one as a root, but it's allowed and it's not out of the question that someone finds a use-case for it.
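Those fixtures can also be built programmatically if pasting the base32 strings is unwieldy; a sketch (the digest sizes are chosen to land on either side of the boundaries, and the `{ version, roots }` header shape is an assumption):

```js
import { CID } from 'multiformats/cid'
import { identity } from 'multiformats/hashes/identity'
import * as raw from 'multiformats/codecs/raw'
import * as dagCbor from '@ipld/dag-cbor'

// Tiny identity CID: whole CID well under 24 bytes, 1-byte CBOR prelude
const tiny = CID.createV1(raw.code, identity.digest(new Uint8Array(2)))
// Huge identity CID: whole CID over 256 bytes, 3-byte CBOR prelude
const huge = CID.createV1(raw.code, identity.digest(new Uint8Array(300)))

for (const root of [tiny, huge]) {
  const actual = dagCbor.encode({ version: 1, roots: [root] }).byteLength
  console.log(root.bytes.byteLength, actual) // compare against the estimate
}
```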