
Enable encryption and feed-replication-control #18

Merged: 13 commits into kappa-db:master on Mar 24, 2019

Conversation

@telamon (Contributor) commented Dec 21, 2018

Hey!
Here's a patch that somehow required more words than necessary to describe,
apologies.

Summary

It's entirely backwards compatible in storage, so all previously created & stored multifeeds can be safely upgraded. (I haven't touched the storage-handling routines.)

Protocol changes

PROTOCOL_VERSION was bumped to 2.0.0, however, because replication was heavily refactored.

The header exchange was rewritten to use hyperprotocol extension messaging.
This means the multifeed header exchange now uses the 'fake feed' key for encrypting the communication, and as a positive side effect we also benefit from hyperprotocol's built-in multiplexing.
The effective result is that we now have a secure, active sidechannel that allows the transport of custom protocol messages, even if they are split up into multiple chunks.
It also provides us with the opportunity to exchange messages/feeds after the initial manifest exchange.
(Realtime feed-key exchange during live replication; no need to force
reconnection between peers to share new feeds.)

TODO: Update README.md with encouragement for setting the 'fake' public key.

Currently set using: multifeed(hypercore, storage, {key: ENCRYPTION_KEY})
(The private key can still be safely discarded, but I would discourage using
the hardcoded pubkey, as it exposes the feed exchange to non-participating third parties.)
Maybe even radically change the constructor to require multifeed(hypercore, protoKey, storage, opts).

Low-level key-exchange API

I've implemented what I hope to be a 'fair & secure' standard policy that can be
exposed to higher-level applications for fine-grained control.

Replication now follows these rules:

Given multifeed A and multifeed B.

  1. A sends a list of 'have' feeds to B
  2. B sends a list of 'have' feeds to A
  3. A uses B's 'have' to send a list of 'want'
  4. B uses A's 'have' to send a list of 'want'
  5. Both A and B individually compare the 'wants' and 'haves'
     to build an identical array of feeds to be exchanged, using:

     keys_to_replicate = (
       (AWant - (BHave - AWant)) +
       (BWant - (AHave - BWant))
     ).sort()

That should prevent your node from ever sharing something it does not offer
and receiving something it did not ask for.
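The selection step could be sketched roughly like this (a minimal, hypothetical reading of the guarantee above: each side only replicates feeds that were both wanted and offered; `intersect` and `keysToReplicate` are illustrative names, not multifeed internals):

```javascript
// Set intersection: keys present in both lists.
function intersect (a, b) {
  return a.filter(function (key) { return b.indexOf(key) !== -1 })
}

// Both peers run this with the same four lists, so they arrive at
// identical, identically-sorted arrays of feed keys to replicate.
function keysToReplicate (aWant, aHave, bWant, bHave) {
  var keys = intersect(aWant, bHave).concat(intersect(bWant, aHave))
  // de-duplicate, then sort so A and B build identical arrays
  return keys
    .filter(function (key, i) { return keys.indexOf(key) === i })
    .sort()
}
```

Since the computation is symmetric in the two peers' lists, neither side needs to trust the other to produce the shared array.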

Unconfigured 'default' behaviour works the same as before:
Have: All local feeds
Want: All remote feeds

High-level exchange API

This is where I could use some help, because if you're looking at the patch
you're probably going to think "what the hell is _restrictedMode?"

I started this quest by trying to implement a personal-feed exchange policy
which enables selective replication via a signature generated from a
pre-shared secret, in an attempt to overcome the limitation of one writable feed per
device, even when both devices are my own.

Use-cases: personal storage, multi-device profile,
public read but invite-only write cabal. You get the picture.

And well along the way of refactoring, I actually finished a working
implementation, and immediately regretted polluting multifeed with this 'hardcoded' policy.
(It's disabled by default and has no impact on storage or replication unless
activated; test/signed-exchange.js describes its behaviour.)

So, for lack of a better idea, I propose to expose the feed-selection
API using the middleware pattern:

var m = multifeed(...);

// Current 'SHARE_ALL_ACCEPT_ALL' policy
m.use({
  have: function (localKeys, share) {
    share(localKeys)
  },
  want: function (remote, request) {
    request(remote.keys)
  }
})

// Cabal inactive-feeds policy
m.use({
  have: function (localKeys, share) {
    getLastActivity(localKeys, function (timestamps) {
      // share all feeds, with extra metadata about each feed
      share(localKeys, {timestamps: timestamps})
    })
  },
  want: function (remote, request) {
    var activeFeeds = remote.keys.filter(function (key) {
      // reject feeds that haven't had a new entry in the previous 30 days
      return (new Date() - remote.timestamps[key]) <= 30 * 86400000
    })
    request(activeFeeds)
  }
})

That way I'd be able to move out the personal-feed-policy into a separate
reusable multifeed-plugin.
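A hypothetical driver for such a middleware stack could look roughly like this. Only the callback signatures, have(localKeys, share) and want(remote, request), come from the proposal above; the `exchange` helper itself is made up for illustration:

```javascript
// Hypothetical sketch: run one policy's have/want hooks against a
// remote manifest. Not multifeed's actual replication code.
function exchange (policy, localKeys, remoteManifest) {
  var shared
  policy.have(localKeys, function share (keys, meta) {
    // the manifest we would announce to the remote peer
    shared = Object.assign({ keys: keys }, meta || {})
  })
  var wanted
  policy.want(remoteManifest, function request (keys) {
    // the keys we would ask the remote peer for
    wanted = keys
  })
  return { share: shared, want: wanted }
}

// The SHARE_ALL_ACCEPT_ALL policy from above, driven by the sketch:
var shareAll = {
  have: function (localKeys, share) { share(localKeys) },
  want: function (remote, request) { request(remote.keys) }
}
```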

@hackergrrl (Member)

Exciting! I'm not sure if I'll have time to review this before the new year, though I'm looking forward to going over it. 🎉

@telamon (Contributor, Author) commented Dec 21, 2018

@noffle no worries. this is probably my last contribution for 2018 as well 🎉🎊

@telamon (Contributor, Author) commented Dec 21, 2018

Added one more commit that removes all code related to the signed exchange, restoring multifeed back to its minimal state; there should be less code to review now. A copy of the signature-based exchange can be found here: https://github.com/telamon/multifeed/tree/feature/signed-exchange-backup

Formalized passed replication opts to avoid unintended behaviour
@hackergrrl (Member)

Great PR! These are great features & code reorg 🎉

debug('[REPLICATION] New mux initialized', key.toString('hex'), opts)
var self = this
self._opts = opts = opts || {}
self.extensions = opts.extensions = SupportedExtensions || opts.extensions
Suggested change:
- self.extensions = opts.extensions = SupportedExtensions || opts.extensions
+ self.extensions = opts.extensions || SupportedExtensions

Member:
Did you mean to have opts.extensions be used when passed and fallback to Supported, or actually vice versa?

Contributor Author:

Oops, resolved the conversation by accident. I approve of the change. The original intention was to keep self.extensions and opts.extensions in sync; that's why there are two left-hand assignments.

Contributor Author:

Yes, verified: opts.extensions should be used when passed, and SupportedExtensions is the fallback.
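For reference, a minimal sketch of the difference between the two expressions (the array contents are made up for the example; SupportedExtensions' real contents live in mux.js):

```javascript
// Illustrative values only.
var SupportedExtensions = ['RequestFeeds']
var opts = { extensions: ['Custom'] }

// Original line: the || is evaluated first, so SupportedExtensions
// always wins, even when the caller passed opts.extensions.
var original = SupportedExtensions || opts.extensions // -> ['RequestFeeds']

// Suggested change: the caller's extensions take precedence,
// with SupportedExtensions as the fallback.
var suggested = opts.extensions || SupportedExtensions // -> ['Custom']
```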

self._initRepl()
break
// case ANNOUNCE_FEED:
// self._announceFeed(JSON.parse(message.toString('utf8')))
Member:
dead code 👻

@telamon (Contributor, Author) commented Mar 21, 2019:

Uh, this is a mental note to implement 'new feed available' during live replication,
to mitigate the fact that the feed exchange only occurs on new connections.

The code is untested, but it should be the starting point for doing a second feed exchange when a new feed becomes available at a node and we want to announce its existence to other connected nodes without having to reconnect.

Do you want it removed?

Member:

Yeah, I think keeping untested commented code around is confusing. Maybe keep this code, uncommented, in a feature branch so that it's still there for you for future dev?

Contributor Author:

When you put it that way, yeah, I agree. I had optimistically hoped that someone would want to take it over and finish that feature, but it's better to move those lines into a new Issue & Feature and remove them from the stable branch, as you suggested.

@hackergrrl hackergrrl merged commit 4e94ab6 into kappa-db:master Mar 24, 2019
@cblgh (Contributor) commented Mar 25, 2019

woah!!!!!!!!!!!!!!!!!!!! 🔥 🔥 📯

good job @telamon & @noffle !!! this is really big ^_^
