
Add initial support for identity hashes (Part 1) #5117

Merged · 1 commit merged into master on Jul 22, 2018

Conversation

kevina
Contributor

@kevina kevina commented Jun 14, 2018

This provides support for identity hashes by using the new blockstore in ipfs/go-ipfs-blockstore#4.

With this, identity hashes just work without having to be stored anywhere. It also works with bitswap: if a node that doesn't have id-hash support requests an id-hash from one that does, it will be returned.

Towards #4697.
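The wrapper this PR adds comes from ipfs/go-ipfs-blockstore#4. As a rough illustration of the idea only (a map-backed sketch, not the actual go-ipfs-blockstore API; `identityCode`, `key`, and `idStore` are invented names), the point is that for an identity hash the digest *is* the block, so the wrapper can answer `Get` without touching the underlying store and can drop `Put` entirely:

```go
package main

import (
	"bytes"
	"fmt"
)

// 0x00 is the identity code in the multihash table.
const identityCode = 0x00

// key is a simplified stand-in for a CID: a hash-function code
// followed by the digest bytes.
type key struct {
	code   byte
	digest []byte
}

// blockstore is a minimal map-backed store (not the real API).
type blockstore map[string][]byte

func (bs blockstore) Get(k key) ([]byte, bool) {
	v, ok := bs[string(append([]byte{k.code}, k.digest...))]
	return v, ok
}

func (bs blockstore) Put(k key, data []byte) {
	bs[string(append([]byte{k.code}, k.digest...))] = data
}

// idStore wraps another store: identity hashes are answered from the
// key itself and are never written to the inner store.
type idStore struct {
	inner blockstore
}

func (s idStore) Get(k key) ([]byte, bool) {
	if k.code == identityCode {
		return k.digest, true // the digest is the block
	}
	return s.inner.Get(k)
}

func (s idStore) Put(k key, data []byte) {
	if k.code == identityCode {
		return // nothing to store; the key already carries the data
	}
	s.inner.Put(k, data)
}

func main() {
	store := idStore{inner: blockstore{}}
	idKey := key{code: identityCode, digest: []byte("tiny block")}
	data, ok := store.Get(idKey) // never hits the inner store
	fmt.Println(ok, bytes.Equal(data, []byte("tiny block"))) // true true
}
```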

@kevina kevina requested a review from Kubuxu as a code owner June 14, 2018 06:28
@ghost ghost assigned kevina Jun 14, 2018
@ghost ghost added the status/in-progress In progress label Jun 14, 2018
@kevina
Contributor Author

kevina commented Jun 14, 2018

After this goes in, the next step will be to automatically use identity hashes as a way of inlining tiny blocks. That change will likely require a go-cid update, though.
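Inlining works because an identity multihash carries the data itself as the digest, encoded as `<hash-code><digest-length><digest>`. A hand-rolled sketch of that framing (assuming blocks under 128 bytes so the length fits in one varint byte; the real encoding lives in go-multihash):

```go
package main

import (
	"bytes"
	"fmt"
)

const identityCode = 0x00 // identity code in the multihash table

// encodeIdentityMultihash inlines the data itself as the digest:
// <hash-code><digest-length><digest>. Lengths < 128 fit in a single
// varint byte, which covers the tiny blocks discussed here.
func encodeIdentityMultihash(data []byte) []byte {
	out := []byte{identityCode, byte(len(data))}
	return append(out, data...)
}

// decodeIdentityMultihash recovers the inlined block.
func decodeIdentityMultihash(mh []byte) ([]byte, error) {
	if len(mh) < 2 || mh[0] != identityCode {
		return nil, fmt.Errorf("not an identity multihash")
	}
	n := int(mh[1])
	if len(mh) != 2+n {
		return nil, fmt.Errorf("length mismatch")
	}
	return mh[2:], nil
}

func main() {
	block := []byte("hello")
	mh := encodeIdentityMultihash(block)
	got, err := decodeIdentityMultihash(mh)
	fmt.Println(err == nil, bytes.Equal(got, block)) // true true
}
```

No hashing happens at all: the "digest" round-trips the block, which is why such blocks never need to be fetched or stored.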

@ivan386
Contributor

ivan386 commented Jun 14, 2018

if a node that doesn't have id-hash support requests an id-hash from one that does, it will be returned.

Will this be flooded from all connected peers?

@kevina
Contributor Author

kevina commented Jun 14, 2018

Will this be flooded from all connected peers?

I don't know enough about bitswap to answer that. I only tested it with two peers, one with the PR and one without. However, I don't think it will be any worse than requesting any very common hash.

The older node will store the identity-hash in its blockstore (like any other hash) so it won't have to request it again.

@Stebalien
Member

Will this be flooded from all connected peers?

Yes, but that's an issue with bitswap (and already happens for common blocks).

@ivan386
Contributor

ivan386 commented Jun 14, 2018

Yes, but that's an issue with bitswap (and already happens for common blocks).

Common blocks are sent only by the peers that have them. Identity blocks will be sent by every peer that knows the identity hash.

@Stebalien
Member

Unfortunately, there's really not much we can do here other than wait a few releases between this patch and one that allows users to use inline blocks. That is, users of existing releases of IPFS will always either:

  1. Get spammed.
  2. Never get the block.

I'd rather have 1.

@kevina
Contributor Author

kevina commented Jul 16, 2018

@Stebalien @whyrusleeping I would like to see this get in ASAP. Someone whose name I've forgotten spoke to me at the developer conference and is apparently depending on this. Do you see any blockers to getting this reviewed and merged? A migration is not needed for this; it's just that for a little while some users may have identity hashes stored in the blockstore that can't be removed. Nothing should break.

Gx dependency conflicts are starting to get on my nerves ☹️ (and I assume that is the source of most of the conflicts), but I can rebase this if it is required for a review. I will of course rebase this just before it is ready to merge.

Member

@whyrusleeping whyrusleeping left a comment


Sorry I haven't looked at this until now. The change looks really clean; I see no reason not to ship it. cc @magik6k or @schomatis for a quick once-over?

Contributor

@schomatis schomatis left a comment


LGTM

@@ -196,18 +196,20 @@ func setupNode(ctx context.Context, n *IpfsNode, cfg *BuildCfg) error {
 		opts.HasBloomFilterSize = 0
 	}

-	cbs, err := bstore.CachedBlockstore(ctx, bs, opts)
+	wbs, err := bstore.CachedBlockstore(ctx, bs, opts)
Contributor


Small nit: why is the CachedBlockstore renamed from cbs to wbs? Shouldn't wbs (which I'm not sure what it stands for) be defined later in NewIdStore?

Contributor Author


wbs stands for wrapped-data-store.

I could keep cbs and then define yet another variable, but that could lead to bugs where cbs was used when wbs was meant.

Contributor


Ok, I was just asking because the following self-definition seems a bit weird to me,

wbs = bstore.NewIdStore(wbs)

and I would have expected to create it from another blockstore, not from itself, like

wbs = bstore.NewIdStore(cbs)
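The self-assignment being debated is a common Go decorator idiom: reassigning the same variable to its wrapped form means later code cannot accidentally reach the unwrapped store. A self-contained sketch (a toy `Blockstore` interface and `NewLoggingStore` wrapper standing in for `bstore.NewIdStore`; none of these names are from the PR):

```go
package main

import "fmt"

// Blockstore is a minimal interface for illustration (not the real one).
type Blockstore interface {
	Get(key string) (string, bool)
}

type mapStore map[string]string

func (m mapStore) Get(k string) (string, bool) { v, ok := m[k]; return v, ok }

// loggingStore is a stand-in wrapper, analogous to the id-store.
type loggingStore struct{ inner Blockstore }

func (l loggingStore) Get(k string) (string, bool) {
	fmt.Println("get:", k)
	return l.inner.Get(k)
}

func NewLoggingStore(bs Blockstore) Blockstore { return loggingStore{inner: bs} }

func main() {
	var bs Blockstore = mapStore{"a": "1"}
	// Reassigning bs to its wrapped form shadows the raw store, so no
	// later line can bypass the wrapper by mistake.
	bs = NewLoggingStore(bs)
	v, ok := bs.Get("a")
	fmt.Println(v, ok) // 1 true
}
```

Keeping two variables (`cbs` and `wbs`) would make the wrapping read more naturally, at the cost kevina notes: the raw store stays in scope and can be used by accident.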

Contributor


wbs stands for wrapped-data-store.

Shouldn't it be named wds instead, to avoid confusing it with a blockstore (like cbs)?

@kevina
Contributor Author

kevina commented Jul 17, 2018

I will rebase this on Thursday when I get back.

@kevina
Contributor Author

kevina commented Jul 18, 2018 via email

License: MIT
Signed-off-by: Kevin Atkinson <k@kevina.org>
@kevina
Contributor Author

kevina commented Jul 19, 2018

Rebased.

@Stebalien Stebalien added RFM and removed status/in-progress In progress labels Jul 19, 2018
@Stebalien Stebalien mentioned this pull request Jul 20, 2018
@whyrusleeping whyrusleeping merged commit b818679 into master Jul 22, 2018
@whyrusleeping whyrusleeping deleted the kevina/idhash0 branch July 23, 2018 00:39