Sharing/browsing large volumes of offline content #38
Replies: 6 comments 10 replies
-
I don't think LoRa is well suited to this kind of data transfer, personally. Getting short messages across a multi-hop mesh is challenging enough IMHO. I've been playing with PirateBox a bit here on an Orange Pi, with a view to adding a LoRa shield to that once it's working as a file share device & SSB server. Some degree of sharing SSB messages over the LoRa interface might be possible, although that's a bit of an open question. The content I've pulled together is here: https://rosettaphone.org/. It includes the IWS.apk and shareviahttps.apk, so there is the possibility of using Android devices as hotspots, or further sharing the content Android<>Android.
-
Content browsing and file sharing is the primary goal of RRTP, which is built on the RNS Link and Resource APIs in order to handle data compression and encryption efficiently. Obviously downloading 12 GB over a multi-hop LoRa connection would take quite a while, but searching for individual text files a few kB in size would be doable even over the slowest LoRa connections. I wrote a demo of a basic RRTP server and a text-based browser: https://github.com/4c3e/rrtp-demo-1 NOTE: these demos are already out of date because the RRTP spec is still very much under construction. Once RRTP becomes more stable, users could host an RRTP proxy server for the Kiwix Wikipedia instance. The proxy server would take RRTP requests, translate them into a request that the local Kiwix server can handle, then forward the response over RNS to the client. The cleaner solution would be to create something like Kiwix, but as a native RRTP server, which would be a great project to work on at some point 😄
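To make the proxy idea a bit more concrete, here is a rough sketch of the serving side using plain RNS Links and Resources rather than actual RRTP (the spec is still in flux, so this is just the shape of the thing). It assumes a `kiwix-serve` instance on localhost:8080, and the destination names and request format are made up for illustration:

```python
# Illustrative sketch only: plain RNS Links/Resources, not the
# (still-evolving) RRTP wire format. Assumes kiwix-serve is
# running locally on port 8080; destination names are made up.
import urllib.request
import RNS

KIWIX = "http://localhost:8080"

def request_received(message, link):
    # Treat the packet payload as a UTF-8 content path; the exact
    # path layout depends on the kiwix-serve version.
    path = message.decode("utf-8")
    try:
        with urllib.request.urlopen(KIWIX + path) as response:
            body = response.read()
    except Exception:
        body = b"fetch failed"
    # RNS Resources handle compression, encryption and sequencing
    # of larger transfers over the link for us.
    RNS.Resource(body, link)

def link_established(link):
    link.set_packet_callback(lambda message, packet: request_received(message, link))

reticulum = RNS.Reticulum()
identity = RNS.Identity()
destination = RNS.Destination(
    identity, RNS.Destination.IN, RNS.Destination.SINGLE,
    "kiwixproxy", "content"
)
destination.set_link_established_callback(link_established)
destination.announce()

input("Proxy running, press enter to quit\n")
```

A client would establish a Link to the announced destination, send a path as a packet, and receive the page back as a Resource. Real RRTP would obviously define the request/response format properly instead of shipping raw paths.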
-
I think it's an OK assumption that libraries are mostly centralised in communities (but they should be very easy to replicate and make redundant, so they spread to anyone who wants to host them), and I like the idea of being able to serve ZIMs directly to NomadNet clients. In this case I think search functionality is pretty essential, and it is actually almost possible right now. I have a kinda-working proof-of-concept thingy that serves content from ZIMs directly to the NomadNet browser, but it is an awful hackjob that was more of an experiment to see what would work. Search can happen, but only "statically" (via a predefined link, serving a "predefined" search), since there is not yet any support for inputting and sending data to the page request handlers. I think there are two ways to go about this. Either we make it possible via the NomadNet browser, which would require adding support for inputting and sending data to the page request handlers.
Or we could just build a small standalone server and client only for the purpose of connecting to and browsing ZIMs served over RNS. I think I'm more in favor of getting the transfer spec standardised and integrating it into NomadNet, though; that makes the most sense in the long run.
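For anyone curious what the hackjob roughly looks like: NomadNet can serve dynamic pages by running an executable file in the node's pages directory and rendering its stdout as micron, so something along these lines can work. It assumes the python-libzim bindings, a ZIM file with a fulltext index, and a hypothetical file path; the "search" is the static, predefined kind described above, since the browser can't send input yet:

```python
#!/usr/bin/env python3
# Sketch of a dynamic NomadNet page (an executable file in the node's
# pages directory whose stdout is rendered as micron).
# Assumes the python-libzim bindings; the ZIM path and the query
# are illustrative placeholders.
from libzim.reader import Archive
from libzim.search import Query, Searcher

zim = Archive("/srv/zim/wikipedia_en_all_nopic.zim")

print(">Offline Wikipedia")

# "Static" search: the query is baked into the page, since the
# NomadNet browser cannot yet send input to page request handlers.
searcher = Searcher(zim)
search = searcher.search(Query().set_query("mesh network"))
print(f"{search.getEstimatedMatches()} matches for the predefined query:")
for path in search.getResults(0, 10):
    print(f"  {path}")
```

Serving the article bodies themselves then comes down to `zim.get_entry_by_path(path)` plus the HTML-to-micron problem discussed elsewhere in this thread.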
-
Hi folks; I'm only just getting up to speed in this community, so I'm lacking a lot of context on what currently exists. My apologies if I'm retreading ground! The problem of widely distributing large, identical content is something that IPFS has put some effort into (see https://en.wikipedia-on-ipfs.org/); the code that creates the bundles of data is in distributed-wikipedia-mirror. Reticulum + LXMF seem like an excellent link/transport layer combination for the IPFS protocols. I'd be interested in researching where IPFS's content-addressed retrieval would fit into Reticulum's ecosystem. When it comes to search, IPLD (a method for linking data in an extensible format, used by IPFS) is flexible enough to allow a search index to be stored in such a way that you only need to retrieve the parts required to get the content hash of the data/page you want, then request the data behind that hash from your connected peers. Folks have already experimented with storing a lunr.js search index as a retrievable file that can be used in a JS-based retrievable homepage with solid success (though I've yet to find examples of people breaking apart the index file in a suitable way when large amounts of data are indexed). How can I best direct my thoughts & experience in these spaces to help out here?
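Not speaking for how IPFS actually implements it, but the core idea of content-addressed retrieval is simple enough to sketch: chunk the data, address each chunk by the hash of its contents, and let any peer serve a chunk, since the requester can verify it independently. A toy illustration (names and chunk size are arbitrary, and the "store" stands in for the peer network):

```python
# Toy illustration of content addressing, not the IPFS implementation.
import hashlib

CHUNK_SIZE = 64 * 1024  # arbitrary; real systems tune this carefully

def publish(data: bytes, store: dict) -> list[str]:
    """Split data into chunks keyed by the SHA-256 of their contents.
    Returns the ordered list of chunk hashes (a minimal 'manifest')."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store[digest] = chunk
        manifest.append(digest)
    return manifest

def retrieve(manifest: list[str], store: dict) -> bytes:
    """Fetch chunks by hash and verify them independently, so it does
    not matter which peer actually served the bytes."""
    out = b""
    for digest in manifest:
        chunk = store[digest]  # in reality: ask connected peers
        assert hashlib.sha256(chunk).hexdigest() == digest, "corrupt chunk"
        out += chunk
    return out

store = {}
article = b"example article text" * 10000
manifest = publish(article, store)
assert retrieve(manifest, store) == article
```

The same property is what would make a chunked search index workable over slow links: you fetch only the index chunks you need to resolve a query, then fetch only the pages those chunks point at.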
-
I don't have much to usefully add. I did bring up IPFS here: #45
-
Another relevant protocol for this: https://hypercore-protocol.org/ , https://github.com/hypercore-protocol/hyperdrive Not sure how it compares to IPFS, but there is a nice-looking browser to go along with this protocol: https://beakerbrowser.com/ Edit: just kidding, Beaker Browser is defunct (beakerbrowser/beaker#1944 (comment)). A modern alternative seems to be https://agregore.mauve.moe/
-
This is more of a thought experiment than anything else. The scenario I have in mind while thinking about this is hosting a local Wikipedia instance using Kiwix on the same computer as a NomadNet node, or on another computer on the same local network (i.e. not requiring internet access). I was initially imagining an experience similar to Lynx or other text-based web browsers. I believe this would require external link support (see discussion #34) as well as parsing HTML into micron on the fly. Maybe there is a simpler way to achieve this?
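On the on-the-fly HTML-to-micron idea: a very stripped-down converter is not much code as long as you target only a small subset of tags. A sketch using the standard library, where the micron mapping is deliberately minimal (headings and paragraphs only; links, tables, images, etc. would need real design work):

```python
# Minimal sketch of on-the-fly HTML -> micron conversion.
# Only headings and paragraphs are handled; external links would
# depend on the outcome of discussion #34.
from html.parser import HTMLParser

class MicronConverter(HTMLParser):
    HEADINGS = {"h1": ">", "h2": ">>", "h3": ">>>"}

    def __init__(self):
        super().__init__()
        self.out = []
        self.prefix = ""

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADINGS:
            self.prefix = self.HEADINGS[tag]
        elif tag == "p":
            self.out.append("")  # blank line between paragraphs

    def handle_endtag(self, tag):
        if tag in self.HEADINGS:
            self.prefix = ""

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.out.append(self.prefix + text)

def html_to_micron(html: str) -> str:
    converter = MicronConverter()
    converter.feed(html)
    return "\n".join(converter.out)

print(html_to_micron("<h1>LoRa</h1><p>A long-range radio modulation.</p>"))
```

Real Kiwix article HTML is far messier than this, so a production converter would probably want a proper parser and an explicit whitelist of what survives the translation.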
In general, using NomadNet's file sharing support would make more sense with how the system is structured today. However, the ZIM files used by Kiwix are quite large (the text-only version of Wikipedia is over 12 GB) and quite binary (not easily parsed). Even so, they are the best thing I have found so far for sourcing and locally hosting large volumes of web content (Wikipedia, the Project Gutenberg library, StackExchange, etc.). If anyone knows of a better alternative, please share!
Ideally there would be a better system for compiling content. I have yet to experiment with this, but one idea is to set up a local Kiwix server hosting the desired ZIM content and then use wget to mirror from the local Kiwix server (since the restructuring for offline use is already done in the ZIM file content). This would generate the individual files needed to share over RNS. Additional parsing of links would be required to point to adjacent content on the node (possibly using the wget --base option?). You would lose the search capabilities of Kiwix as well, which would be a significant loss. Maybe the above concept of creating a mirror of individual files, plus basic file content search capability in NomadNet for searching/filtering files on a hosting node, would be a good compromise? Hosted file content search would be a new feature for NomadNet as far as I know.
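For the mirroring experiment, something like the following might be a starting point (untested; assumes kiwix-serve on port 8080). Note that --convert-links rewrites links to work relative to the mirrored tree, which may fit this use case better than --base, which only rebases relative URLs read from an -i input file:

```sh
# Untested sketch: mirror a locally running kiwix-serve instance.
wget --mirror --page-requisites --adjust-extension --convert-links \
     --no-parent --directory-prefix=./kiwix-mirror \
     http://localhost:8080/
```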
I'm interested if others have thoughts on how best to achieve such a system. The answer may be that libraries are centralized within each community for a reason and this case isn't any different, meaning that it would be best for someone in each community to take on the responsibility of hosting large volumes of content for local use instead of trying to distribute it. Or maybe creating an RNS browser should be a separate application outside of NomadNet altogether (possibly related to discussion #12 on RRTP?).