# The Future of "accessing API of remote IPFS node" #137
Stage 2 is when it gets interesting. Stage 1 requires installing IPFS Companion, and then having any browser-based application detect the presence of both IPFS Companion and the local IPFS node; that complicates things to the point of being unlikely to happen. If Stage 2, or some version of it, were implemented, then for example the dweb.archive.org UI could detect the presence of a local node and use it as a persistent cache, rather than using js-ipfs with all the limitations that come from running in the browser (including lack of persistence after the browser window is closed, and the extreme CPU load that encourages people to close pages running IPFS). Obviously, relying on CORS in a content-addressed filesystem makes no sense to me, since both trusted and untrusted content could come from anywhere (e.g. from https://ipfs.io). One option I think would be worth considering, along with authentication, would be allowing a subset of the API to run without authentication (e.g. `get`, `add`, `urlstore`, `pin`), while reserving more sensitive operations (like editing the config) until authentication is implemented.
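Purely as an illustration of the allowlist idea above (nothing like this exists in go-ipfs; the gate and its placement are hypothetical), the split could be as simple as:

```js
// Hypothetical gate: commands that could be served without authentication,
// versus everything else (config, key management, ...) which would require it.
const PUBLIC_COMMANDS = new Set(['get', 'add', 'urlstore', 'pin'])

const requiresAuth = (apiPath) => {
  // go-ipfs API paths look like /api/v0/<command>[/<subcommand>]
  const command = apiPath.replace(/^\/api\/v0\//, '').split('/')[0]
  return !PUBLIC_COMMANDS.has(command)
}

// e.g. requiresAuth('/api/v0/add')         === false
//      requiresAuth('/api/v0/config/show') === true
```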
@Gozala shared some relevant ideas in Progressive peer-to-peer web applications (PPWA). I need to think about this more, but my gut feeling is that Stage 2 could be refined by introducing a sw/iframe-based API provider as the universal entry point. We could do access control there (before it lands in the actual API), and also iterate on graceful fallback / opportunistic upgrade mechanisms (eg. internally using …). @mitra42, we started experimenting with a subset of the API running without authentication in ipfs-companion's …
We really don't want to be running this through ipfs-companion. We want to run IPFS in the web browser and have the libraries (js-ipfs and js-ipfs-api) integrated in the page, so that the user doesn't NEED to do anything other than visit the page, but we do want to take advantage of a local peer if one exists. I acknowledge the risks, but I think they are much smaller than the loss of functionality from not being able to use a local IPFS peer at all, or, even worse, the current situation where people running a peer have the choice between not being able to use it for anything local (leaving CORS restrictions on) or exposing themselves to all kinds of malicious attacks by turning CORS off, since there is no authentication even for damaging activities.
To me it seems that IPFS Companion is great because it enables opt-in. I really don't want websites using my local IPFS node just because I have one. But if I enable IPFS Companion, then I'm telling them they can. At the same time, IPFS Companion abstracts away the need to inject IPFS libraries and/or make manual calls to the IPFS API from webapps that may use a local IPFS node. You can just use …
To be clear, what I was suggesting is to make, say, … As for opting-in / permissions, companion.ipfs.io could do that based on the client origin.
@fiatjaf and @Gozala - I can't figure out how to make either of those suggestions work in practice. Assume a website (such as dweb.archive.org) that wants to run in any situation: it can bundle js-ipfs and js-ipfs-api, but it can't require users to download anything. We have code that tries to autodetect a node in our IPFSAutoConnect function at https://github.com/internetarchive/dweb-transports/blob/TransportIPFS.js#L81. It fails in most cases currently because the local IPFS peer refuses CORS. A vanishingly small portion of our users will have IPFS Companion installed because (as far as I can tell) it doesn't add anything unless they want to interact with IPFS directly. Some might have IPFS, or a nearby IPFS node, as part of the dweb-mirror project. We could include the IPFS code from ipfs-companion in the Wayback Machine extension, which a larger number will have installed, but we haven't had anyone (volunteer or paid) with the bandwidth and browser-extension expertise to either bundle js-ipfs directly into our extension, or bundle some part of ipfs-companion and figure out all the browser limitations.
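For context, a minimal sketch of the kind of autodetection described (constructor names follow js-ipfs / js-ipfs-api of that era; the exact calls in dweb-transports differ):

```js
// Probe the local daemon's default API port; this fails both when no daemon
// is running and when the daemon's CORS policy rejects the page's origin,
// in which case we fall back to an in-page js-ipfs node.
const ipfsAPI = require('ipfs-api')
const IPFS = require('ipfs')

const autoConnect = () =>
  new Promise(resolve => {
    const remote = ipfsAPI('localhost', '5001', { protocol: 'http' })
    remote.version()
      .then(() => resolve(remote)) // local daemon reachable
      .catch(() => {
        const node = new IPFS()    // fall back to in-browser node
        node.on('ready', () => resolve(node))
      })
  })
```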
I am building a proof of concept of the proposed idea. I'll be happy to share it here once it's ready.
I've put together a proof of concept that shows the proposed idea is possible. There is some good news and some bad news. I'll start with what I have working: https://github.com/gozala/lunet

As for the bad news: …
I made a little more progress in my prototype: …

So with …
The next thing I want to do is create another site, say … BTW, I think the IPFS HTTP API would need to learn to pick up some config changes through the API itself. Like, ideally …
After more research I am considering an alternative approach. I think it would work better than the current approach, where the app SW needs to connect to the daemon SW, because SWs are really eager to terminate, and that problem is multiplied by the fact that we're trying to keep the daemon SW alive and connected to the app SW; as they both race to terminate, either of them succeeding breaks the MessageChannel, which is also impossible (without hacks) to detect on the other end.

This is why I'm considering an alternative: the daemon site (the one embedded in the iframe) will spawn a SharedWorker (and fall back to a dedicated Worker pool if the API is not available. Thanks, Apple 😢). This way we don't have to fight the daemon SW to keep it alive; as long as one daemon page is around, the worker will be able to keep the connection alive. In practice that should be the case as long as there is at least one active client app. The only case where that is not true is if all apps have been closed and you later open one, and that case is fairly easy to detect (the SW has no clients), in which case it can serve a page that just embeds the daemon iframe and, once the connection between the daemon worker and the SW is established, redirects to the actual page that was requested. (Please note that this sounds complicated, but it is what happens in the current setup and it works remarkably well.)

It does imply that client apps need to embed the daemon iframe, or else the corresponding worker will terminate. However, that was more or less a problem already, and I was already considering working around it by appending to navigation responses. Additionally, that added markup can be used to prompt the user for permissions (and it needs to be within the iframe so privileges can't be escalated). This approach has an additional advantage for the in-browser node case, as frequent terminations don't exactly mix well with that.

The trickiest bit is going to be supporting browsers without …

It is also worth considering that if the daemon manages to connect to a companion add-on or a local daemon through the REST API, there will be no need to even spawn any workers. Still, there will be some extra work to consider, like propagating content added to the in-worker node to the local daemon.

Edit: Not sure what I was supposed to follow this with.
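A rough sketch of that spawning logic, assuming a hypothetical `/daemon.worker.js` script (both worker flavors expose a postMessage channel, which is what makes the fallback workable):

```js
// Prefer SharedWorker (one worker shared by all daemon pages); fall back to
// a dedicated Worker where SharedWorker is unavailable (e.g. Safari).
const spawnDaemonWorker = () => {
  if (typeof SharedWorker === 'function') {
    const shared = new SharedWorker('/daemon.worker.js')
    shared.port.start() // required when listening via addEventListener
    return shared.port
  }
  return new Worker('/daemon.worker.js')
}

const port = spawnDaemonWorker()
port.postMessage({ type: 'ping' })
```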
This is great. I've been thinking about what developer-facing artifacts could be extracted from this, and I think a drop-in library/toolkit that acts as a replacement for standalone js-ipfs is the way to go, as it should help with addressing two high-level problems: …
@Gozala I agree that SharedWorker is worth investigating. To remove the need for access control and keep things simpler, we may want to focus on (1) initially, as its security perimeter is easier to understand.
...? (the suspense is killing me 😅)
Oops, I'm not sure how my comment ended up like that, nor can I remember if there was anything specific I was going to say. Sorry!
I spent a little more time on this and currently have something in between what I originally made and the alternative option I described. Current status: things work really well in Chrome and Firefox, but I'm struggling to identify the issue with Safari. At the moment the setup looks as follows (a hypothetical client-page sketch follows at the end of this comment):

### Client App / Site

The client site in the example I had …

In terms of interaction, this is what happens: …

### Host

The document that the client embeds in an iframe is what I refer to as the host. The host document is also pretty much just this: …

### Wishlist

Here are the things I would like to change about this setup: …
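The actual snippets were lost in the quote above; purely as an illustration of the shape being described (file names and URLs are hypothetical, not lunet's real API), the client page script might be as small as:

```js
// Hypothetical client page: register the companion SW and embed the host
// document in an iframe so it can broker access to an IPFS node.
const main = async () => {
  await navigator.serviceWorker.register('/lunet.client.js')

  const host = document.createElement('iframe')
  host.src = 'https://lunet.link/host.html' // hypothetical host document URL
  host.style.display = 'none'
  document.body.appendChild(host)
}

main()
```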
It turns out Safari does not implement `SharedWorker`.
Alright, I think something else could be done on Safari (or anywhere where `SharedWorker` is unavailable):

```js
// Keep the SW alive: sleep just under the extendable-event timeout,
// then ping clients and stay alive until any of them messages back.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

const when = (type, target) =>
  new Promise(resolve => target.addEventListener(type, resolve, { once: true }))

const extendLifetime = async () => {
  await sleep(1000 * 60 * 4) // Firefox will wait 5 mins on an extendable event, then abort.
  const clients = await self.clients.matchAll({ includeUncontrolled: true })
  for (const client of clients) {
    client.postMessage("ping")
  }
  await when("message", self)
}

self.addEventListener("activate", event => event.waitUntil(extendLifetime()))
self.addEventListener("message", event => event.waitUntil(extendLifetime()))
```

I believe this should keep the Service Worker alive and going as long as there are clients talking to it, which is in fact the case for …
This is fantastic, especially getting it to work on Safari 👍 I really like the mount metaphor and how little code the end developer needs to put on the static page. This is exactly what we should aim for.

@Gozala Regarding the first item from your Wishlist: we have an API for DNSLink lookups, but may want to support …

ps2. I see how a hybrid approach could be supported as well, where static HTML with a regular website is returned with one extra …
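For reference, a DNSLink lookup can be done against a local daemon via its `/api/v0/dns` endpoint; a minimal sketch, assuming the default API port (note that newer daemons require POST where older ones accepted GET):

```js
// Resolve a DNSLink (TXT record at _dnslink.<domain>) through the daemon.
const resolveDnslink = async (domain) => {
  const url = `http://127.0.0.1:5001/api/v0/dns?arg=${encodeURIComponent(domain)}`
  const res = await fetch(url, { method: 'POST' })
  const { Path } = await res.json()
  return Path // e.g. "/ipfs/Qm..." or "/ipns/..."
}
```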
@lidel I've considered doing a DNS lookup instead of the meta tag (as per your suggestion); however, the goal is for the user not to need a static site for bootstrapping in the first place. Basically, I want the flow to be …
### Status update

I spent more time on this to get the in-browser fallback working. It took quite a bit more effort than I anticipated, but the good news is it works. Below is an image of the fully functional webui loaded via lunet from IPFS through the in-browser node, with 0 changes to the webui code. My peerdium demo also works with the in-browser node, with 0 changes as well. 🎉 At the moment this version lives in a separate branch because: …
### Details

…

### Open questions (would love feedback)

…
Should I be looking at the IPFS Cluster stuff for this?
I found this issue while searching around for a concept I've been thinking more about: the idea of both an in-browser node and a native/standalone node is something that should be fleshed out as a user norm.

Using how users interact with services like Dropbox as an example: I have Dropbox clients on my desktop, laptop, and phone, and have different files "starred" for offline use on my phone than the ones I use most frequently on my laptop or desktop. I think it would be ideal if, among the standard peer discovery methods that any given IPFS node (in-browser or standalone) has, it additionally allowed a user to indicate another node as "theirs" (add authentication/credentials?), and then those nodes actively synced pins/virtual filesystem structures between them.

That way, I could have a standalone node running on my workstation, and when I open a browser on my workstation, laptop, or phone, they would all create an in-browser node, and I'd end up with four nodes that are all "me" and storing my data. I'd probably want to configure the in-browser node on my workstation to do minimal storage (since there's another node on that machine that should be primary), and would like the control to indicate that my workstation node should pin/keep a copy of everything the others pin (primary backup), that the laptop should as well when it's online (secondary backup), and that the phone node would only pin important things (space concerns), though being able to browse "known" hashes/files on the workstation/laptop nodes would be ideal.

From that perspective, it would be fine if all in-browser nodes stayed in-browser nodes (no need to "change over" to a standalone node if it came back online), but pinned/known file syncing could be very useful. A rough sketch of the syncing idea follows.
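As an illustration only (this is not an existing feature; it assumes two node handles with the js-ipfs-style `pin` API of that era):

```js
// One-way pin replication: make `target` pin everything `source` has pinned.
const syncPins = async (source, target) => {
  const pins = await source.pin.ls({ type: 'recursive' })
  for (const { hash } of pins) {
    try {
      await target.pin.add(hash)
    } catch (err) {
      console.warn(`failed to replicate pin ${hash}:`, err.message)
    }
  }
}
```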
One question that might bring some clarity to these questions: what is the delta between what we think an ideal "integrated-in-browser-IPFS-node" would be and a …? The reason I'd like to think about things this way is that this exercise might surface the difference between "features missing in the web platform" that we don't have without native integration, and features the platform doesn't have because of legitimate security and isolation concerns between applications. The security story for the current locally running server (either Go or IPFS Desktop) is practically non-existent. Having a similarly scoped shared resource will need a drastically improved security story, and it's not yet clear to me whether this is the responsibility of IPFS or whether we're actually missing a feature or integration in the browser.
I can only speak for myself, and what I think and am going for is "the browser is your IPFS node". js-ipfs, SW, IPFS Desktop, etc. are just polyfills to deliver / explore that experience.
I think there is a general assumption that the web platform lacks the features to implement a full-fledged IPFS node in the web content context. I think that's an incorrect way to look at things. Even if browsers exposed all the low-level networking primitives to allow it (which is highly unlikely), each browser tab running its own IPFS node would be a terrible experience. Which is to suggest that if/when IPFS is adopted by a browser, the browser itself will become the IPFS node and expose a limited API to access & store content off the network. And yes, it will impose the same or similar origin-separation concerns as it does today. The goal of this exploration is to polyfill the described experience through the variety of tools available: …

That way applications: …
Yes, and that is a huge issue waiting to be exploited. I would absolutely encourage locking it down. Last time I checked, the ipfs daemon / gateway comes with a default of … Both should be locked down to one single origin, maybe …
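A sketch of how such a lockdown could be applied through the config API (key names follow go-ipfs's `API.HTTPHeaders.*` config; the origin value is an example, and in practice this is usually done with the `ipfs config` CLI):

```js
// Restrict the daemon's API to a single origin via the HTTP client's
// config API; the daemon needs a restart for header changes to take effect.
const ipfsClient = require('ipfs-http-client')

const lockDownApi = async () => {
  const ipfs = ipfsClient('http://127.0.0.1:5001')
  await ipfs.config.set('API.HTTPHeaders.Access-Control-Allow-Origin', [
    'https://webui.ipfs.io' // example: the only origin allowed to call the API
  ])
}
```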
No features are missing on that end. What this PoC does is use a special origin (in this case lunet.link, but it should be …)
I have a cluster branch now, which runs an in-browser node and attempts to use the local native node through the REST API simultaneously. At the moment it's pretty dumb: it just forwards requests to both nodes and attempts to serve the response from the native node, with a fallback to the in-browser node. There is no attempt to sync the two yet; for that, it would probably make most sense to borrow the logic from ipfs-cluster rather than trying to hack things together. I'll focus on getting this working in Safari through the SW polyfill for SharedWorker, and then deploy the current version.
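A minimal sketch of that forwarding behavior (the node handles and the single `cat` call are simplifications of what the branch actually does):

```js
// "Dumb" fallback: ask the native node first, and only fall back to the
// slower in-browser node when the native one fails or is absent.
const catWithFallback = async (path, nativeNode, browserNode) => {
  try {
    return await nativeNode.cat(path)
  } catch (err) {
    return browserNode.cat(path)
  }
}
```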
Responding to @MidnightLightning:

That is a good point! However, the case of a native node vs an in-browser node is different, since from the user's point of view it's the same device.

I have been thinking about this in a slightly different way. I imagine a library organized as "collections" (or threads, in Textile terms). The idea is that you can invite others to collaborate on those collections. Those others can be your other devices, your friends, or pinning services.

I think all those use cases fit nicely with the solution described above; furthermore, it follows the interaction flow: the user, during the sharing / publishing phase, chooses who to share with. Implementation-wise, it seems that a "collection" should just be an "ipfs-cluster".
@Gozala I like the idea of a seamless/self-healing abstraction for the browser context (Access Point Facade), but figuring out how to handle the surface of the IPFS API when the API provider is a facade on top of multiple nodes going online and offline will be a challenge. Agreed we should look at ipfs-cluster for inspiration, but the security considerations will not fit exactly; eg. in the browser we want to build a security perimeter around the Origin and, based on it, introduce key/MFS write/read scoping/sandboxing, and limit access to sensitive endpoints such as …

Seems that the MVP would need: …
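One way the origin-based MFS scoping could look (illustrative only; the path scheme is an assumption):

```js
// Map every app's MFS access into a directory derived from its origin,
// so an app can only read/write files under its own subtree.
const scopedPath = (origin, path) =>
  `/apps/${encodeURIComponent(origin)}${path.startsWith('/') ? path : `/${path}`}`

// e.g. a write to "/notes.txt" from https://app.example
// lands in "/apps/https%3A%2F%2Fapp.example/notes.txt"
const writeScoped = (ipfs, origin, path, content) =>
  ipfs.files.write(scopedPath(origin, path), content, { create: true, parents: true })
```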
I agree. I am also getting more and more convinced that exposing the full IPFS API may not be a good idea in the first place. While it is cool to have webui running over this, I think it's the wrong abstraction for most apps.

I need to write a coherent story about the experience I have in mind, but before I get around to doing it, here is the gist: …
Absolutely! However, I think that should happen at the lunet (Access Point Facade) level, before any calls are issued to any of the IPFS nodes. On the sandboxing, I'm still working out some details in my head, but I think there is a real opportunity to improve on the mess we're in on the conventional web, by limiting read/write access to only the app's resources / the document being operated on. The largest issues on the web are due to third parties doing tracking and aggregating user data on servers. I think it would be really great if we enforced a setup something like …

In this setup the app can't really spy on the user; sure, it can save some data, but that data is local, the user personally needs to choose to share it, and even then the app isn't really able to let its own server know where to grab it from. There are things to be worked out, but I'm inclined to think that the combination of SW & sandboxed iframes might allow for such sandboxing.
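On the iframe side, a sketch of the kind of containment being described (the sandbox flag set is an assumption, not lunet's actual code):

```js
// Embed the app with an opaque origin: allow-scripts without
// allow-same-origin denies it cookies, storage, and the embedder's DOM,
// so all persistence has to go through the brokered IPFS API.
const mountSandboxedApp = (appUrl) => {
  const frame = document.createElement('iframe')
  frame.setAttribute('sandbox', 'allow-scripts')
  frame.src = appUrl
  document.body.appendChild(frame)
  return frame
}
```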
I finally got it working in Safari 🥳 (Debugging SW in Safari is quite a throwback to the old days of JS with no debuggers, except there's no alert or reliable way to print output either 😂) Now the peerdium fork loads with no changes, using the in-browser node running the SharedWorker polyfill over ServiceWorker.

### Issues

However, some content, like posts created by me, seems to fail to load; specifically the js-ipfs call … I'll do more digging tomorrow, but thought I'd post in case this is a known issue.
Safari also seems to reject POST requests with form data as the body.
It seems that in Safari a call to …
I am exploring an alternative approach for loading this, described here: Gozala/lunet#2 (comment)
Just to partly revive the discussion: https://datatracker.ietf.org/doc/draft-ietf-dnsop-alt-tld seems like an interesting idea to remember. It reserves `.alt` …
Granting access to a local or remote node remains a challenge on both the UX and security fronts.

This is an attempt to plot possible paths forward.

Disclaimer: below is not a roadmap, but a "what if" exercise to act as a starting point for the discussion and experimentation that follows in the comments.
The initial idea is to think about the problem in three stages:
### Stage 1: `window.ipfs.enable(opts)`

- `postMessage`-based IPFS API Proxy exposed under `window.ipfs` by ipfs-companion

### Stage 2A: Opaque Access Point with Service Worker

### Stage 2B: HTTP/WS `/api/v1/` with access controls

- no more wildcard origins (`*`)
- `/api/v1/` can start as an experimental overlay provided by ipfs-desktop
- `postMessage`-based proxy is removed

### Stage 3: Nodes talking to each other over libp2p

- (eg. `ipfs p2p`)

Parking this here for now; would appreciate thoughts in the comments below.
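For reference, Stage 1 usage would look roughly like this (the `commands` allowlist follows the opt-in proposal; exact option names may differ):

```js
// Detect the companion-provided proxy and opt in to a subset of commands;
// throw so the caller can fall back when window.ipfs is absent.
const getIpfs = async () => {
  if (window.ipfs && window.ipfs.enable) {
    return window.ipfs.enable({ commands: ['id', 'version', 'add', 'cat'] })
  }
  throw new Error('no window.ipfs provider; fall back to js-ipfs or a gateway')
}
```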