Feature: Choose a pinning service #90
It would be great if we could have a standard for the API interface that would be used by pinning services. Ideally, it would look the same as the one you would use for pinning locally with either js-ipfs or go-ipfs, so it's easy to plug in your own node if wanted.
@victorbjelkholm what @flyingzumwalt is describing is a different type of pinning IMO. "Pinning" in this context means more "follow and persist the CRDT, so that other nodes can bootstrap from it if the author(s) go(es) away." Correct?
@pgte is right. This feature would require pinning services that follow pubsub channels. A variant of this would be pinning services that follow an IPNS name and pin the corresponding content whenever the name gets updated. A key thing here: it's driven by a very common real-world UX need rather than speculation about what a pinning service API should look like. It forces us to grapple with the fact that end-users don't want to keep telling the service to pin their content; they just want it to work. In fact, the ideal UX would ensure that things get pinned in the right places without the user even knowing. That's how things like iCloud have become essential so quickly: most users have had their stuff "pinned" on iCloud for months before they even realize it's happening.
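For concreteness, here's a rough sketch of what such a follow-and-pin service could look like on top of js-ipfs. The topic naming scheme and message format below are assumptions for illustration, not PeerPad's actual protocol:

```js
// Rough sketch of a "follow and pin" service on top of js-ipfs.
// Topic name and message format are assumptions, not PeerPad's real protocol.
const IPFS = require('ipfs')

const ipfs = new IPFS({ EXPERIMENTAL: { pubsub: true } })

ipfs.on('ready', () => {
  const topic = 'peerpad/<pad-id>' // assumed topic naming scheme

  // Follow the pad's channel and pin every root hash announced on it.
  // Note the service never needs the read/write keys: it pins ciphertext by hash.
  ipfs.pubsub.subscribe(topic, (msg) => {
    const hash = msg.data.toString() // assumed: messages carry the new root hash
    ipfs.pin.add(hash, (err) => {
      if (err) return console.error('pin failed:', err)
      console.log('pinned update:', hash)
    })
  })
})
```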
Hm, I see. I agree with the general idea and my point still stands, but I might have dived into the actual implementation too fast/in the wrong place. Pinning based on pubsub topics/IPNS names is a good idea and something that should probably also be implemented in js-ipfs and go-ipfs. So I don't think this is any different from the concept of pinning we already have; it just adds the ability to follow updates when they happen via pubsub/IPNS.
(I am so excited about peerpad and this awesome feature ^)
Both in terms of everyday usefulness and in terms of PeerPad demonstrating how to build peer-to-peer apps, I think this feature is the most important peerpad feature that's not implemented yet.
About privacy: Snapshots can be safely pinned without compromising privacy: whoever holds them still needs the decryption key (given out of band, typically in the full URL) to decrypt the content. (The key is unique per snapshot.) Now, following the changes and persisting them is more tricky. We have to think of a way of saving the changes while knowing only the pad ID; the read or write keys should not be needed. I'll have to take some time to dig into this one further. Ideas are welcome :)
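To illustrate the per-snapshot key idea, here is a minimal sketch using Node's built-in crypto module; the algorithm and encoding are assumptions, not necessarily PeerPad's actual scheme:

```js
// Sketch of the "unique key per snapshot" idea using Node's crypto module.
// AES-256-GCM and base64 encoding are assumptions for illustration.
const crypto = require('crypto')

function encryptSnapshot (plaintext) {
  const key = crypto.randomBytes(32)  // fresh key per snapshot
  const iv = crypto.randomBytes(12)
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()])
  const tag = cipher.getAuthTag()
  // The ciphertext is what gets pinned; the key travels out of band in the URL,
  // so a pinning node only ever sees opaque bytes.
  return { ciphertext, iv, tag, key: key.toString('base64') }
}
```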
@pgte the problem here looks like: "where do I store the keys?" One natural solution: I really love the way MetaMask becomes a form of browser passport that holds your keys. It would be nice to integrate MetaMask or a similar IPFS-based solution (remember: we don't want to require any extensions!). In this case, the key can be stored encrypted under the passport identity.
@pgte a temporary solution is being able to configure an IPFS endpoint, with the open Infura nodes as the default. You can use js-ipfs-api to connect to one:
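A minimal sketch of what that could look like, assuming the js-ipfs-api client; the Infura host and port below are taken from their public docs and should be treated as assumptions:

```js
// Connect js-ipfs-api to a remote, always-on IPFS node (here: Infura's
// public API endpoint; host/port are assumptions based on their docs).
const ipfsAPI = require('ipfs-api')

const ipfs = ipfsAPI({ host: 'ipfs.infura.io', port: 5001, protocol: 'https' })

// Pin a snapshot hash on the remote node so it survives the author going
// offline. 'Qm...' is a placeholder for a real snapshot hash.
ipfs.pin.add('Qm...', (err, pinset) => {
  if (err) throw err
  console.log('pinned:', pinset)
})
```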
This is not perfect because we want more guarantees, but it would give that async experience people are looking for when sharing their snapshot URLs.
@nicola I think it's more complicated than that, if I'm understanding you correctly. Hopefully a pure pinning node would pin the changes without decrypting them, i.e., without having to know the key. I think that'd be possible, but I'd have to experiment a bit.
@diasdavid what do you mean by "configure IPFS endpoint"? Use the ipfs-api client remotely?
Yes, pointing at an IPFS node that can pin the files for us.
@diasdavid One issue: that would not allow snapshotting while offline, unlike the current approach.
With the IIIF work in June we described a pattern where pinning services (e.g. a university library) follow pubsub channels that have been identified by their users (e.g. a researcher at that university). I'd like to keep that pattern in mind here. Yes, users need to tell the pinning service which topics to follow, which is a good fit for hitting an API endpoint, but the ongoing pinning of updates should follow the pubsub pattern if possible.
Once you create info on an IPFS node, you usually need/want to get it pinned onto some other node that's always live. You might be running this pinning service yourself, or it might be run by your office, your family, your local public library, your favorite cloud provider, etc. This gets at the need for Data Together -- ways for groups of people to possess data as collective assets, but in the short run we just need to give people a way to make their peerpads stick around.
Use Case:
I'm using peerpad to collaborate with my coworkers. We have set up an IPFS pinning service to hold copies of the files and data we've shared with each other. I want my peerpads to get pinned by that pinning service. I also want to know when my changes have been successfully saved to that pinning service.
Discussion:
Under the hood, the key is to get the pinning service to follow the topic, and to provide an easy way for end-users to know when new hashes have been successfully pinned. We could handle this all out-of-band, where I use some other interface to tell the pinning service which pubsub topics to follow, but that breaks the user flow. A better journey would be to allow end-users to tell peerpad which pinning service to "use".
Long-term, you would probably want to keep a list of pinning services that you rely on, entering the info once, choosing a default service to automatically pin everything onto, and switching services (or choosing multiple!) on the fly.
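As a thought experiment, the initial "follow this topic" registration might look something like the sketch below; the `/api/v0/follow` endpoint, the payload shape, and the `registerPad` helper are all invented for illustration, not an existing API:

```js
// Purely illustrative: one possible shape for the out-of-band
// "follow this topic" registration discussed above.
// The endpoint and payload are invented, not a real pinning-service API.
const fetch = require('node-fetch')

async function registerPad (serviceUrl, padId) {
  // Ask the pinning service to start following the pad's pubsub topic
  const res = await fetch(`${serviceUrl}/api/v0/follow`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ topic: `peerpad/${padId}` })
  })
  if (!res.ok) throw new Error(`follow failed: ${res.status}`)
  return res.json() // assumed: the service confirms the subscription
}
```

Once registered, the ongoing pinning would happen over pubsub without further user action, which is what keeps the "it just works" experience intact.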
cc @diasdavid @b5