[Request] Chaotic-AUR repositories #63

Chaotic-AUR is an automated build repository for AUR packages. It's not a distro, but it provides prebuilt versions of many popular AUR packages.

Comments
Hey @FireMasterK, thanks for your request. How much storage does this need?
The official website lists @dr460nf1r3 as the main maintainer. He might know the answer, since I couldn't find it online, and he might also be interested in this project.
The repository size is currently 80 GB, and it will likely grow as new packages are added 😊
As the maintainer, I'm also quite interested in this, as it opens up new possibilities for our users :)
Hey @dr460nf1r3! Thanks for the info. Sadly, my main server can't handle that much additional data, since the setup requires me to keep both a local copy and a copy within IPFS. If you have a server for this, I'm happy to help set up an additional collaborative cluster just for Chaotic-AUR. Let me know if that's an option :)
The requirements are a user account and two services on a server that already receives the updates via rsync, or that can be included in the rsync "group". My toolset then just reads the sync log after each sync, pushes the changes to the local IPFS storage, and finally publishes them to the IPFS cluster so other cluster members can fetch them.
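Roughly, the flow looks like this (a minimal sketch, not the actual toolset; the source URL, local paths, and the MFS folder are placeholders):

```bash
#!/usr/bin/env bash
# Minimal sketch (not the actual toolset): sync via rsync, parse the change
# log, replay the changes into IPFS's MFS, then pin the new root on the cluster.
set -euo pipefail

SRC='rsync://mirror.example.org/chaotic-aur/'   # placeholder rsync source
DST='/srv/mirror/chaotic-aur'                   # local copy kept for rsync
MFS='/chaotic-aur'                              # placeholder MFS folder

ipfs files mkdir -p "$MFS"

# 1) Sync and log what changed: %o = operation (send/recv/del.), %n = file name.
rsync -a --delete --out-format='%o %n' "$SRC" "$DST" > /tmp/rsync-changes.log

# 2) Replay the logged changes into the IPFS files API (MFS).
while read -r op name; do
  case "$op" in
    recv|send)
      [ -f "$DST/$name" ] || continue           # skip directory entries
      cid=$(ipfs add -Q --raw-leaves "$DST/$name")
      dir=$(dirname "$name")
      [ "$dir" = "." ] || ipfs files mkdir -p "$MFS/$dir"
      ipfs files rm "$MFS/$name" 2>/dev/null || true
      ipfs files cp "/ipfs/$cid" "$MFS/$name"
      ;;
    del.)
      ipfs files rm -r "$MFS/$name" 2>/dev/null || true
      ;;
  esac
done < /tmp/rsync-changes.log

# 3) Pin the updated root on the cluster so the other members fetch it.
root_cid=$(ipfs files stat --hash "$MFS")
ipfs-cluster-ctl pin add --name chaotic-aur "$root_cid"
```

The real setup differs in the details (chunking, IPNS publishing, error handling), but the general shape is the same: rsync log in, MFS updates and a cluster pin out.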
@RubenKelevra Looking at this issue, people are talking about somehow mounting the directory on IPNS so the storage doesn't need to be duplicated; perhaps that could be used? I'm about to go to sleep, but I'll try to find some documentation tomorrow.
Hm, no, this doesn't work. There are three issues:
Thanks for the heads-up :) Anyway, there's another option I used a while ago: ZFS deduplication below both IPFS and the mirror storage. It costs a bit of I/O and memory and limits my choices for the IPFS block size, but I might give it another try.
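Enabling it is just a per-dataset property; a minimal sketch with placeholder pool and dataset names (note that dedup only applies to data written after it is turned on):

```bash
# Placeholder pool/dataset names; dedup only affects data written afterwards.
zfs set dedup=on tank/ipfs
zfs set dedup=on tank/mirror

# Check how well it works: the pool-wide deduplication ratio.
zpool list -o name,size,alloc,free,dedupratio tank
```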
I've worked on the server to enable ZFS deduplication again. This gives me quite a bit more room, so I'm happy to add this to the cluster:
But I need a way to sync via rsync, as my setup requires parsing the rsync logs to get the changes.
Hey @RubenKelevra, that's some great news! :) We do have
How often would you sync btw? 👀
Thanks :)
@dr460nf1r3 Well, I check the timestamp files via HTTPS for changes once a minute and start syncing if there are any. Would that work for your repo?
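A minimal sketch of such a check, run once a minute from cron or a systemd timer (the URL and the unit name are placeholders):

```bash
#!/usr/bin/env bash
# Minimal sketch: compare the remote timestamp file against the last seen value
# and only kick off a sync when it changed. URL and unit name are placeholders.
set -euo pipefail

URL='https://mirror.example.org/chaotic-aur/lastupdate'
STATE='/var/tmp/chaotic-aur.lastupdate'

new=$(curl -fsS "$URL")
old=$(cat "$STATE" 2>/dev/null || true)

if [ "$new" != "$old" ]; then
  printf '%s\n' "$new" > "$STATE"
  systemctl start chaotic-aur-sync.service   # placeholder for the actual sync job
fi
```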
Yeah, that would be absolutely fine. Thank you! 😊
Hey! :) Is there a URL for it, btw? I'd like to add it to our website so people can make use of it 😊
Hey @dr460nf1r3, I'm just in the process of adding alhp and chaotic-aur to the cluster (it will take an hour or so, I guess). The URL will be
Perfect, I just added it to our mirrorlist :)
@dr460nf1r3 This obviously only works with a locally running IPFS daemon. :) In theory you can also access it through a public gateway like ipfs.io, but that's rather slow and it might block you if there are too many requests. So maybe just link to this project's readme? I'll add the URL there too. :)
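Such a mirrorlist entry would point pacman at the local daemon's HTTP gateway, which listens on 127.0.0.1:8080 by default; a sketch with a placeholder /ipns/ name, not the real address:

```bash
# Placeholder entry for /etc/pacman.d/chaotic-mirrorlist; the /ipns/ name is
# not the real cluster address, and the gateway port assumes IPFS defaults.
cat >> /etc/pacman.d/chaotic-mirrorlist <<'EOF'
Server = http://127.0.0.1:8080/ipns/ipns-name.example/$repo/$arch
EOF
```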
Cool! :)
@dr460nf1r3 While setting it up, I was wondering whether you'd be interested in a dedicated cluster instance for just Chaotic-AUR. That way you could join without the full storage requirements of ALL repos, only Chaotic-AUR's. It's a bit of work to set up, but it doesn't increase my storage requirements, as the data is just linked in two cluster instances while being held only once in a single IPFS daemon. So if you're interested in switching over from Syncthing, or in running this as an additional service, just let me know. :)
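Roughly, a second ipfs-cluster-service instance can run next to the first one by giving it its own state directory and non-conflicting listen ports, while both connect to the same IPFS daemon; a rough sketch with placeholder paths, not the actual setup:

```bash
# Rough sketch: a second, Chaotic-AUR-only cluster peer sharing the same IPFS
# daemon. The path is a placeholder; the listen addresses in the new
# service.json must be changed so they don't collide with the first instance.
export IPFS_CLUSTER_PATH=/var/lib/ipfs-cluster-chaotic
ipfs-cluster-service init --consensus crdt
# edit $IPFS_CLUSTER_PATH/service.json: cluster/REST/proxy listen_multiaddress
ipfs-cluster-service daemon
```

Since both cluster peers talk to the same IPFS daemon, pinning the Chaotic-AUR CIDs in both clusters stores the blocks only once.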
Thanks, that's a kind offer. I'll keep it in mind for the future; maybe it'll come in handy one day! Cheers 🥳
You're welcome! :) |