Can't access InRelease files #7373
Is it a symlink?
There's a huge probability it is; there are way too many links in there. I noticed some of them were just downloaded as regular files, and it looks like some others just aren't reachable.
Could you give me your full multiaddr? I can't find your node.
(But yeah, we need to follow symlinks on the gateway.)
I can't seem to reach any of those addresses. But you can check to see if it's a symlink by calling `ipfs get` on the file in question.
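As a local alternative to `ipfs get`, you can also scan the mirror tree for symlinks before adding. A minimal sketch (it builds a throwaway demo tree; point `MIRROR` at the real mirror root instead):

```shell
# Demo on a throwaway directory; replace with MIRROR=/path/to/mirror in practice.
MIRROR=$(mktemp -d)
mkdir -p "$MIRROR/dists/focal"
ln -s focal "$MIRROR/dists/stable"   # apt mirrors are full of links like this

# List every symbolic link under the mirror root.
find "$MIRROR" -type l -ls

# Or just count them to gauge how widespread they are.
find "$MIRROR" -type l | wc -l   # prints 1 for this demo tree
```

Any path that shows up here would have been added as a link (or skipped) rather than as the file it points at.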
Oh. I think we found the problem.

```
$ ipfs get bafybeihocm6ufvyz44kde6fewu2wsj4qfiecfzbjubbekvcnw3hr7u3smq/ubuntu/dists/focal-updates/
Saving file(s) to focal-updates
311.33 MiB / 311.33 MiB [==================================================================================] 100.00% 1s
Error: data in file did not match. mirrors/ubuntu/dists/focal-updates/InRelease offset 0
```

Is there a way to make the adding process faster? Right now, the command I'm using is … I saw in ipfs-inactive/package-managers#18 that removing … helps.
Removing …
Got it. I wanted to make sure you were using badger without sync writes enabled. I'm not sure why removing … helped.

Note: I'd consider using snapshots to decouple these. That is, you can: …

That will mean that the IPFS mirror will always be a bit behind, but you'll never have to stall the HTTP mirror to wait on the IPFS mirror. This will also ensure that you never modify files after adding them to IPFS.
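A minimal sketch of that snapshot workflow, assuming a Btrfs subvolume at `/srv/mirror` and date-stamped read-only snapshots under `/srv/snapshots` (both paths and the rsync URL are placeholders, not from this thread):

```shell
# 1) Sync upstream into the live (writable) mirror subvolume.
rsync -a rsync://archive.ubuntu.com/ubuntu/ /srv/mirror/ubuntu/

# 2) Freeze a read-only, date-stamped snapshot so files can't change mid-add.
SNAP="/srv/snapshots/$(date +%F)"
btrfs subvolume snapshot -r /srv/mirror "$SNAP"

# 3) Add the immutable snapshot; --nocopy references the files in place.
ipfs add --recursive --nocopy --quieter "$SNAP"
```

Because the snapshot is read-only, nothing can modify a file between `ipfs add` and a later `ipfs get`, which is exactly the "data in file did not match" failure mode above.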
Oh, that's very interesting. For the … Is there a way to clean up the snapshots? What happens if I add using …?
Yes, but the snapshots should dedup.
Unfortunately, I don't think it's possible to override old files with new files. I believe that, for performance reasons, we don't bother replacing old "filestore no copy" records with ones pointing to new files.

Honestly, I think the best approach here would be to create a new repo, add a new snapshot, then delete the old repos and the old snapshots (once every few days). I assume the repos (with …

Otherwise, we may be able to find a way to bypass the "do I already have this block" check by adding yet another flag (but I'd prefer not to if possible).
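One way to sketch that "fresh repo per snapshot" rotation (repo paths and the 3-day retention are assumptions; `IPFS_PATH` is the standard environment variable go-ipfs uses to locate its repo):

```shell
# Create a fresh, date-stamped repo for today's snapshot.
export IPFS_PATH="/srv/ipfs-repos/$(date +%F)"
ipfs init --profile badgerds

# Add the matching filesystem snapshot into the fresh repo.
ipfs add --recursive --nocopy "/srv/snapshots/$(date +%F)"

# Later (e.g. from cron): drop repos older than a few days.
find /srv/ipfs-repos -mindepth 1 -maxdepth 1 -type d -mtime +3 -exec rm -rf {} +
```

Since each snapshot is added from scratch into its own repo, stale "filestore no copy" records never accumulate; old repos are simply deleted wholesale along with their snapshots.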
This seems very useful. In fact, it's confusing that it's not already the case; if I add a new file using … I believe the benefits are real. Should I raise an issue for that?
It deserves an issue, but I'm not sure about the best approach. A really nice property of the current blockstore is that it's idempotent. This change would break that. |
@Stebalien |
I'm closing this as it's not really a bug. Removing/changing a file on disk after adding it to go-ipfs with the … flag isn't supported.
Hey! I just wanted to add that I've updated my script to manage snapshots as you suggested. I had to create a Btrfs subvolume and move the mirror over, but with that done overnight, I'm now adding it back to IPFS using a fresh badgerds. It seems to take a very long time.

The problem with the program I made is that it's now dependent on Btrfs. While I do love Btrfs, I'm not sure it's a great idea for my ipfs-mirror-manager to be tied to a specific filesystem. Moreover, the …

Nonetheless, successfully pulling off an IPFS mirror of the Ubuntu archive on a Raspberry Pi would be very impressive, and I'm extremely proud that IPFS has come this far. At this time, the …
So, my ideal solution here would be to just not use the go-ipfs daemon, but instead write a custom Dropbox-like IPFS service by cobbling together bitswap, libp2p, a datastore, and the DHT. It would: …

The database schema would be: …

On start: …
Perhaps you should create a new issue to track the development of this idea.
Good point. I've filed an issue here: ipfs/notes#434
I can't really afford the time it would take to build a custom IPFS daemon; I have to make do with what I have. And right now, what I have is a mirror that takes around 2 days per update. I posted it on Reddit.

In the meantime, is there any way to optimize it? Right now, the command I'm using is … CPU usage is about 40% and HDD read speeds are about 15-30 Mbps.
Don't use …
Version information:
Description:
I'm trying to build a mirror of the Ubuntu Archive on IPNS using a Raspberry Pi and a 2 TB external HDD. So far, things are going pretty well, but I think I've encountered a breaking bug.
According to those logs, the problem occurs at http://localhost:8080/ipns/QmSbCLwYuqBGQYTG4PBHaFunsKcpLLn97ApNn1wf6cV8jd/ubuntu/dists.
I'm using this to query multiple public gateways to know if they can access the file.
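That kind of gateway check can be scripted; a sketch that probes a few public gateways with curl (the gateway list is an example set, and the IPNS path is the one from this report):

```shell
# IPNS path from the report above; gateway list is just an example set.
IPNS_PATH="ipns/QmSbCLwYuqBGQYTG4PBHaFunsKcpLLn97ApNn1wf6cV8jd/ubuntu/dists"
for gw in https://ipfs.io https://dweb.link https://cloudflare-ipfs.com; do
  # -o /dev/null discards the body; -w prints only the HTTP status code.
  code=$(curl -s -o /dev/null -m 60 -w '%{http_code}' "$gw/$IPNS_PATH")
  echo "$gw -> $code"
done
```

A `200` means the gateway resolved and fetched the path; `504`/`000` usually means it timed out trying to find providers.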
To speed up discovery:

```
ipfs swarm connect /p2p/QmV8TePNsdZiXUpq62739hp5MJLSk8SdpSWcpLxaqhRQdR
```