boxo/blockstore: failing to parse keys #293
Comments
Hello @m0ar, we received an alert about leaked AWS keys. I believe you edited your message, but the admins of about 13 repositories were notified and can still see them. Please rotate them.
Regarding your issue: Kubo is at v0.32.1 and the version you are using is over a year old, so please upgrade. Your analysis is probably correct though; would you like to send a patch?
Woah, thanks a lot! So sloppy of me, but bound to leak some secrets at some point ig 😅 We've handled the issue, appreciate the heads up 🧁
All right, that's fair. Are there kubo versions where it needs to perform migrations within the datastore, and if so, what does that process look like? Or can we assume the bucket storage format is stable and freely migrate kubo versions?
Re. the linked commit, I'm not sure it's enough after having a closer look. The change uses `ds.NewKey`:

```go
// NewKey constructs a key from string. it will clean the value.
func NewKey(s string) Key {
	k := Key{s}
	k.Clean()
	return k
}

// Clean up a Key, using path.Clean.
func (k *Key) Clean() {
	switch {
	case len(k.string) == 0:
		k.string = "/"
	case k.string[0] == '/':
		k.string = path.Clean(k.string)
	default:
		k.string = path.Clean("/" + k.string)
	}
}
```

As I understand it after reading the blockstore code, we know it lists keys with a keys-only datastore query:

```go
q := dsq.Query{KeysOnly: true}
res, err := bs.datastore.Query(ctx, q)
// [...]
```

Following the logic in https://github.com/ipfs/go-ds-s3/blob/master/s3.go#L186-L243, this should be the equivalent S3 API query:

```console
❯ aws s3api list-objects-v2 --bucket=desci-ipfs-staging --max-keys 10 --prefix 'ipfs/' --delimiter '/' | jq --raw-output ".Contents[] | .Key"
ipfs/
ipfs/CIQA2223KXGBREW6S5HXAVLGJ2QGYQVA6CC4SE4PTNLJUSQ76R7LBQY
ipfs/CIQA2224437OEF75774FFOCEVPR5VXCM26W6N4N3WPZB4I5WTKP6HEI
ipfs/CIQA222HN4FOUCPIJVL255DMG2I53W6KPCUPRXVRXE6CJCCKGF3QKAQ
ipfs/CIQA222QZZZCHAHNZT2R77AXCYPYQV6EFEBRD6M37X7P6DACGZZTRPI
ipfs/CIQA222RKPSYT7YS56QSC77D7OUW2EHEZXHVBYVGUPLMAFTU6P24IIQ
ipfs/CIQA224PFVXWDIZ6KRZWOG4AFSWXW7ZV3CE3VEMIKX2HU2W4DJPHWEQ
ipfs/CIQA2274WAKEDP4ICBKMSTA5QHDI63OA4NM25GJELROEKYZMBC2QGNI
ipfs/CIQA227AE5IEBPVBCZ7ILEDSVDVEYKUV2RJW6DHV6NM77CW5HXT5UBY
ipfs/CIQA22A7CZPEBSYZFJ4MAQS3F47EPJXPMMWJV2T7OMZ7AY5ES66RYRI
```

Then, the module only packages those results up as-is:

```go
entry := dsq.Entry{
Key: ds.NewKey(*resp.Contents[index].Key).String(),
Size: int(*resp.Contents[index].Size),
}
```

...in which case this error makes total sense to me, because we never chop off the rootDirectory prefix before the keys reach boxo.
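If that's right, the fix might be as simple as trimming the configured root directory before the entry is built. A rough sketch of what I mean, inside the same loop as the `dsq.Entry` construction above (a field like `s.RootDirectory` holding `"ipfs"` is my assumption about the plugin's config, and this is not a tested patch):

```go
// Sketch only: strip the bucket's rootDirectory before handing the key on.
// Needs "strings" in the import list; s.RootDirectory is assumed, not verified.
rawKey := *resp.Contents[index].Key // e.g. "ipfs/CIQA2223..."
rawKey = strings.TrimPrefix(rawKey, s.RootDirectory+"/")

entry := dsq.Entry{
	Key:  ds.NewKey(rawKey).String(), // now "/CIQA2223...", the naked flatfs key
	Size: int(*resp.Contents[index].Size),
}
```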
Isn't that what this comment is doing: #198 (comment)?
Were you previously running an old version of go-ipfs (e.g. v0.8) which created block keys in an old format? If so, the migration to the new format of blockstore keys is found in v0.12: https://github.com/ipfs/kubo/releases/tag/v0.12.0 Or is this an installation of a recent version of kubo and the s3-plugin which is somehow creating new keys in an unsupported format?
If I understood correctly, the problem only affects listing all the keys; a wrong key format would affect any GET as well.
Or maybe we still have the old hack in place for backwards compatibility with old keys on GET.
No, I don't think that exists.
Hi 👋
Problem
We are using `go-ds-s3@0.11.0` bundled/built into `kubo@0.24.0`, but are seeing consistent failures in everything that uses `AllKeysChan`. It prevents reprovide from being functional, as well as all other block-listing operations like `ipfs repo ls`.

These errors are logged at a very high rate every time that happens:

The source of the error is this check in `boxo/blockstore.go` (`v0.15.0` for `go-ds-s3@v0.11.0`): https://github.com/ipfs/boxo/blob/v0.15.0/blockstore/blockstore.go#L296-L300

This is quite bad, as it prevents the node from being a useful network participant. While it can serve most direct queries, it prevents efficient content discovery from other peers.
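My understanding of what that check does, as a paraphrased sketch rather than the boxo source itself: every key coming back from the keys-only query is converted back into a multihash, and any entry whose key fails to parse is dropped with a warning, so prefixed keys never reach `AllKeysChan` consumers.

```go
// Paraphrased sketch of the failing check (my reading of the linked lines,
// not a verbatim copy of boxo). Assumed imports:
//   cid "github.com/ipfs/go-cid"
//   ds "github.com/ipfs/go-datastore"
//   dsq "github.com/ipfs/go-datastore/query"
//   dshelp "github.com/ipfs/go-ipfs-ds-help"
func relayKeys(results dsq.Results, out chan<- cid.Cid) {
	for result := range results.Next() {
		if result.Error != nil {
			return
		}
		// The blockstore expects naked flatfs keys like "/CIQA2223...";
		// anything else fails to decode here, the entry is dropped, and a
		// warning is logged, which matches the log spam we see.
		mh, err := dshelp.DsKeyToMultihash(ds.RawKey(result.Key))
		if err != nil {
			continue
		}
		out <- cid.NewCidV1(cid.Raw, mh)
	}
}
```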
It seems related to #198, as this has the same failure mode. (cc @Stebalien @obo20 )
Not sure if fcbd2ef alone would do the trick, or if it relies on other changes in that branch? 🤔
Analysis
We have a few other nodes running `go-ds-s3@kubo-0.26.0` with `kubo@release-0.26.0` that show the same issues, but on byte 6 instead:

Given those two data points, I suspect it's choking on the `rootDirectory` being included in the response when comparing the datastore specs. The node erroring out at byte 4 has `rootDirectory: "ipfs"`, and the other has `desci-s3-public-ipfs-production`. When looking at the S3 paths, I think it chokes on special chars in the path, but this shouldn't happen, as boxo should only be getting the naked flatfs keys:

Datastore specs (collapsed)
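To make "naked flatfs keys" concrete, here is a small standalone snippet using the first key from the bucket listing above. To the best of my understanding, `go-ipfs-ds-help` is what the blockstore layer relies on for this conversion, so this is an illustration under that assumption: the bare key decodes back into a multihash, while the prefixed one does not.

```go
package main

import (
	"fmt"

	ds "github.com/ipfs/go-datastore"
	dshelp "github.com/ipfs/go-ipfs-ds-help"
)

func main() {
	// Bare flatfs key vs. the same key with the rootDirectory ("ipfs") attached.
	for _, raw := range []string{
		"CIQA2223KXGBREW6S5HXAVLGJ2QGYQVA6CC4SE4PTNLJUSQ76R7LBQY",
		"ipfs/CIQA2223KXGBREW6S5HXAVLGJ2QGYQVA6CC4SE4PTNLJUSQ76R7LBQY",
	} {
		k := ds.NewKey(raw) // becomes "/CIQA2223..." and "/ipfs/CIQA2223..."
		_, err := dshelp.DsKeyToMultihash(k)
		fmt.Printf("%s -> err: %v\n", k, err)
	}
}
```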
Questions