bug(webtransport): advertisement of webtransport multiaddrs seems to be broken #2568
Comments
A couple notes here:
My guess is this data is already being tracked by Nebula, etc., which would hopefully make it relatively straightforward to investigate, although @dennis-tra would know better. Also, @SgtPooki, if you've got any specific examples of nodes with kubo >= 0.20 advertising public quic-v1 but only private webtransport, that'd be helpful to poke at.
Just some clarification:
These numbers are from a single crawl that happens ~midnight UTC each day. The number
These numbers show unique PeerIDs for each agent version over the course of one full week (so not a single crawl). E.g., this means we have found
Actually, that's
This is tracked and just requires some SQL wizardry. I'll come back here when I have the numbers 👍
I just saw that I'm not storing private multiaddresses, because I found they'd pollute the database with uninteresting records. So, unfortunately, I don't have numbers on private addresses. It's only a two-line change, though, so if you think it makes sense to also track private addresses, I can change Nebula to do that. Still, here are some numbers from the latest crawl:
Webtransport agents:
142 other
Thanks @dennis-tra! So it seems like this is still an issue, because we're advertising 21k quic-v1 addresses but only 1.5k webtransport addresses?
Can we track only private webtransport addresses? I think that would give us a useful figure to compare against public webtransport & quic-v1 to answer Adin's question:
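A minimal sketch of how an analysis script could separate public from private WebTransport multiaddrs, using only Python's stdlib `ipaddress` module. This is a hypothetical helper, not Nebula's actual code; a real tool would use a proper multiaddr parser rather than naive string splitting:

```python
import ipaddress

def classify(maddr: str) -> tuple[bool, bool]:
    """Return (is_webtransport, is_private) for a multiaddr string.

    Hypothetical helper for crawl analysis; assumes well-formed
    multiaddrs that interleave protocol names and values, e.g.
    /ip4/1.2.3.4/udp/4001/quic-v1/webtransport/certhash/...
    """
    parts = maddr.strip("/").split("/")
    is_webtransport = "webtransport" in parts
    is_private = False
    # Walk adjacent (protocol, value) pairs to find the IP component.
    for proto, value in zip(parts, parts[1:]):
        if proto in ("ip4", "ip6"):
            is_private = ipaddress.ip_address(value).is_private
    return is_webtransport, is_private

# A public quic-v1 address vs. a private webtransport address:
print(classify("/ip4/1.2.3.4/udp/4001/quic-v1"))                  # → (False, False)
print(classify("/ip4/192.168.1.7/udp/4001/quic-v1/webtransport"))  # → (True, True)
```

Counting peers whose only webtransport addresses classify as private would give the comparison figure asked for above.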
I just booted up a kubo node on a digitalocean droplet and ran
which means that webtransport listening is working properly on Ubuntu 22.04 (LTS) x64 via Docker. I was able to dial the webtransport address from my local laptop's kubo, but unable to dial it from https://helia-identify.on.fleek.co/
One interesting callout: I can run kubo from the Docker container and fail to connect to the webtransport address from helia-identify, but if I run the kubo daemon directly (kubo binary downloaded from dist.ipfs.tech), I can connect to it.
@SgtPooki could it be that UDP traffic isn't forwarded to the Docker container? By specifying
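If missing UDP forwarding is the culprit, the fix would be publishing the swarm port for UDP as well as TCP. A sketch, assuming kubo's default swarm port 4001 and the official `ipfs/kubo` image (container name is arbitrary):

```shell
# Publish kubo's swarm port for both TCP and UDP; without the /udp
# mapping, QUIC and WebTransport traffic never reaches the container.
docker run -d --name ipfs_node \
  -p 4001:4001 \
  -p 4001:4001/udp \
  ipfs/kubo:latest
```

This would explain why the dist.ipfs.tech binary works while the containerized daemon doesn't: outside Docker there is no port mapping to forget.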
@dennis-tra do you have the breakdown of quic peers by agent? I'm specifically interested in the number of kubo v0.22 peers advertising a public quic-v1 address.
IIUC to understand if there is a bug here/what bug exists it'd be great to get the numbers on the number of kubo >=v0.20 amino server nodes that are:
@dennis-tra is this easy enough to do with nebula, or a pain to modify/plumb through?
Sorry for taking so long to respond. Here are some numbers from the latest crawl:

General
I could do a manual one-off run of the crawler where I enable the tracking of private addresses. However, given the number of peers with agent version >0.20 of
The number of peers with an agent version starting with
Thanks for diving into this, Dennis :)
When looking at https://probelab.io/ipfsdht/#kubo-version-distribution and https://probelab.io/ipfsdht/#dht-transport-distribution, we have the following distributions:
1582 (WT) / 31392 (all) = 5% WT
(26 (Kubo23) + 1300 (Kubo22) + 2574 (Kubo21) + 489 (Kubo20) + 389 (Kubo19)) = 4778 Kubo versions > 18
(4778 + 21068 + 2468 + 1677 + 234 + 480 + 7 + 3709) = 34421 total Kubo nodes
4778 / 34421 = ~14% Kubo versions > 18
Seems like there is a discrepancy between the share of WT transports in the network (5%) and the share of Kubo versions (~14%) that are supposed to be advertising WT successfully.
That ~9-point gap equates to roughly ~3100 nodes that should be advertising WT but aren't.
NOTE: The total kubo nodes and total transports are mismatched, meaning some of my math might be borked.
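The arithmetic above can be double-checked with a short script. The node counts are copied from this comment; note that the per-version counts sum to more than the 31392 total from the transport chart, which is the mismatch flagged in the NOTE:

```python
wt_nodes = 1582       # nodes advertising a WebTransport address
dht_nodes = 31392     # all nodes in the transport-distribution chart
kubo_gt_018 = [26, 1300, 2574, 489, 389]          # Kubo v0.23 .. v0.19
other_kubo = [21068, 2468, 1677, 234, 480, 7, 3709]

total_kubo = sum(kubo_gt_018) + sum(other_kubo)
print(f"WT share:          {wt_nodes / dht_nodes:.1%}")                  # 5.0%
print(f"Kubo > v0.18:      {sum(kubo_gt_018)}")                          # 4778
print(f"Total Kubo nodes:  {total_kubo}")                                # 34421
print(f"Expected WT share: {sum(kubo_gt_018) / total_kubo:.1%}")         # 13.9%
```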