Lotus sync issue: libp2p v0.34.1 #2858

Closed
Tracked by #2778
rjan90 opened this issue Jul 2, 2024 · 13 comments

@rjan90

rjan90 commented Jul 2, 2024

Lotus updated to libp2p v0.34.1 in its latest release, Lotus v1.27.1, and we are getting some reports from users encountering syncing issues which seem to be related to the resource manager:

{"level":"debug","ts":"2024-07-02T07:14:40.235Z","logger":"rcmgr","caller":"resource-manager/scope.go:480","msg":"blocked stream from constraining edge","scope":"stream-16657929","edge":"transient","direction":"Inbound","current":4233,"attempted":1,"limit":4233,"stat":

{"NumStreamsInbound":0,"NumStreamsOutbound":4233,"NumConnsInbound":0,"NumConnsOutbound":0,"NumFD":0,"Memory":0},"error":"transient: cannot reserve stream: resource limit exceeded"}

Another report indicated that they were unable to get peers after updating, but after a couple of restarts of their node they were able to get back in sync. Unfortunately they were not able to get a goroutine dump, but will do so next time they encounter the same issue.

Do you have any additional tips for what information to gather when encountering these rcmgr issues?

@MarcoPolo
Collaborator

Setting up the Grafana dashboards would really help debug issues like this.

The log points to an issue reserving a space for a new inbound stream in the "transient" scope. Streams are considered "transient" before we know what protocol they will be used for. Streams should only be transient until multistream finishes or the 10s timeout is hit, whichever comes first. I would be surprised if we were leaking "transient" streams, but it would be obvious in the dashboard if we are. Does this error log persist? Does the number in NumStreamsOutbound ever go down?
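
In case it's useful for gathering data, here is a minimal sketch of inspecting the transient scope programmatically via go-libp2p's ResourceManager introspection API; it assumes you can reach the host object, and the printed fields match the ones in the log above:

package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/network"
)

func main() {
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// The transient scope holds streams/connections whose protocol or peer
	// is not yet known (e.g. while multistream negotiation is in flight).
	rm := h.Network().ResourceManager()
	err = rm.ViewTransient(func(scope network.ResourceScope) error {
		st := scope.Stat()
		fmt.Printf("transient: streams in=%d out=%d conns in=%d out=%d memory=%d\n",
			st.NumStreamsInbound, st.NumStreamsOutbound,
			st.NumConnsInbound, st.NumConnsOutbound, st.Memory)
		return nil
	})
	if err != nil {
		panic(err)
	}
}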

@MarcoPolo
Collaborator

We've tried to make it as easy as possible to get started with the dashboards, so there's a docker compose file that spins up everything you need. Refer to go-libp2p/dashboards/README.md for more detailed instructions. But in case it helps, here are some step-by-step tips to get it working with Lotus:

  1. Go to go-libp2p/dashboards.
  2. Make the following change to prometheus.yml, which points Prometheus at Lotus's default metrics endpoint:
diff --git a/dashboards/prometheus.yml b/dashboards/prometheus.yml
index f0917188..bfa09fc5 100644
--- a/dashboards/prometheus.yml
+++ b/dashboards/prometheus.yml
@@ -23,8 +23,8 @@ scrape_configs:
   honor_timestamps: true
   scrape_interval: 15s
   scrape_timeout: 10s
-  metrics_path: /debug/metrics/prometheus
+  metrics_path: /debug/metrics
   scheme: http
   static_configs:
   - targets:
-    - host.docker.internal:5001
+    - host.docker.internal:1234
  3. Run docker compose -f docker-compose.base.yml up on macOS, or docker compose -f docker-compose.base.yml -f docker-compose-linux.yml up on Linux.
  4. View the metrics in Grafana at http://localhost:3000.
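
For a bare go-libp2p app (Lotus already serves its metrics on its API port, as configured above), a minimal sketch of exposing a Prometheus endpoint that the stock prometheus.yml can scrape; it assumes go-libp2p's default behaviour of registering metrics with the default Prometheus registry:

package main

import (
	"log"
	"net/http"

	"github.com/libp2p/go-libp2p"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// go-libp2p registers its metrics with the default Prometheus registry
	// unless told otherwise (see the libp2p.PrometheusRegisterer option).
	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	// Serve that registry at the path and port the stock prometheus.yml expects.
	http.Handle("/debug/metrics/prometheus", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":5001", nil))
}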

@rvagg

rvagg commented Jul 11, 2024

Some relevant logs just now from a Lotus user with a stuck node: filecoin-project/lotus#12205 (comment)

@BigLep
Contributor

BigLep commented Jul 17, 2024

@rvagg : what are the next steps for confirming that the new go-libp2p release addresses the issue found with Lotus? Are you checking on your machine, or should we push this out with 1.27.2 and ask more people to upgrade? (Context: I want to have high confidence this is addressed before the 1.28.0 release.)

@rvagg

rvagg commented Jul 17, 2024

Yeah, 1.27.2 is not a bad idea; I can queue that up. So far, running 1.27.1 with libp2p upgraded to v0.35.3 for a little over 24 hours, the various libp2p dashboards for my mainnet node are fairly stable, and it's hard to tell on any of them when I did the upgrade.

On the other hand, for my calibnet node, which is running master and therefore upgrading from the same libp2p that's in 1.28.0, there's a lot of difference, but I don't know what's good or bad here, so @sukunrt or @MarcoPolo would be best placed to interpret these. Transient streams seem the most promising measure, with no evidence of leaky growth. Here's where I see differences (the upgrade is around 07/16 10:00 on these graphs; there was another, longer restart earlier in the graphs that is less relevant):

[Eight Grafana dashboard screenshots from 2024-07-17 showing the metrics around the upgrade]

@sukunrt
Member

sukunrt commented Jul 17, 2024

It does look like the bug fixed by #2869 was the culprit. In #2764 I see that there was a regression when moving from v0.31 to v0.33. This bug was introduced in the v0.32 release, by this change (#2555).

Nothing immediately stands out for me from the graphs @rvagg shared. If you think any particular graph is suspicious, happy to help debug it.

@sukunrt
Member

sukunrt commented Jul 17, 2024

Note: #2875, while related, is not causing this issue. It does add to the transient streams, but those are garbage collected once the stream completes.

We'll make a patch release with #2875 once it's merged.

@hsanjuan
Contributor

Some users are complaining that IPFS Cluster displays lots of connection problems after upgrading from v0.33.2 to v0.35.2, and that they are resolved by downgrading go-libp2p. If this issue was introduced in v0.32, then there must be some other issue at play. @jonasrmichel can perhaps provide more details...

@jonasrmichel

Unfortunately, I can't provide too much more concrete data.

However, here's what I observed in a network with a couple thousand peers.

Running go-libp2p v0.35.1, a majority of the peers in the network have trouble negotiating connections. Both sides log connection resets and messages like this...

failed to negotiate security protocol: privnet: could not read full nonce

Also, when running v0.35.1, the network consistently exhibits many routing errors, with logs indicating routing not found for established peers.

Downgrading go-libp2p to v0.33.2 resolved both of the above issues.

@sukunrt
Member

sukunrt commented Jul 20, 2024

@jonasrmichel @hsanjuan, do you have many nodes on the same IPv4 address or the same /64 IPv6 subnet? If so, the limits added for concurrent connections from the same IP might be the problem (https://github.com/libp2p/go-libp2p/blob/master/p2p/host/resource-manager/conn_limiter.go#L32).

Is it possible to set up metrics as this comment explains?
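
For anyone who does have many nodes behind one IP, a minimal sketch of raising the per-subnet connection allowance with the resource manager's WithLimitPerSubnet option; the prefix lengths and counts below are illustrative placeholders, not recommendations:

package main

import (
	"github.com/libp2p/go-libp2p"
	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

func main() {
	// Allow more concurrent connections from a single IPv4 address
	// (PrefixLength 32 = one address) and from a single IPv6 /64.
	ipv4 := []rcmgr.ConnLimitPerSubnet{{PrefixLength: 32, ConnCount: 64}}
	ipv6 := []rcmgr.ConnLimitPerSubnet{{PrefixLength: 64, ConnCount: 64}}

	limiter := rcmgr.NewFixedLimiter(rcmgr.DefaultLimits.AutoScale())
	rm, err := rcmgr.NewResourceManager(limiter, rcmgr.WithLimitPerSubnet(ipv4, ipv6))
	if err != nil {
		panic(err)
	}

	h, err := libp2p.New(libp2p.ResourceManager(rm))
	if err != nil {
		panic(err)
	}
	defer h.Close()
}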

@jonasrmichel

Ah, fascinating.

Per-subnet connection limiting does map to the symptoms I've observed.

Our network's bootstrap peers are behind a load balancer. While each bootstrap peer is reachable on a dedicated port, these peers do share three public IP addresses assigned to the load balancer.

Is there a way to disable the connection limits per subnet?

@MarcoPolo
Collaborator

Yes, you can disable it. You can also set no bounds for a specific CIDR. This comment may help: #2872 (comment), along with the Go doc: https://pkg.go.dev/github.com/libp2p/go-libp2p@v0.35.4/p2p/host/resource-manager#WithNetworkPrefixLimit
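
To illustrate, a minimal sketch of giving a known prefix (e.g. a load balancer's public IPs) its own, effectively unbounded, connection budget via WithNetworkPrefixLimit; the prefix and count are placeholders:

package main

import (
	"net/netip"

	"github.com/libp2p/go-libp2p"
	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

func main() {
	// 203.0.113.0/24 is a documentation range standing in for the load
	// balancer's real prefix; ConnCount is set high enough to be a non-limit.
	ipv4 := []rcmgr.NetworkPrefixLimit{
		{Network: netip.MustParsePrefix("203.0.113.0/24"), ConnCount: 1 << 20},
	}

	limiter := rcmgr.NewFixedLimiter(rcmgr.DefaultLimits.AutoScale())
	rm, err := rcmgr.NewResourceManager(limiter, rcmgr.WithNetworkPrefixLimit(ipv4, nil))
	if err != nil {
		panic(err)
	}

	h, err := libp2p.New(libp2p.ResourceManager(rm))
	if err != nil {
		panic(err)
	}
	defer h.Close()
}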

sukunrt mentioned this issue Jul 22, 2024
@rjan90
Author

rjan90 commented Jul 29, 2024

From @rvagg in a thread in the Filecoin Slack:

graphs looking pretty healthy on both my nodes, nothing suggesting leaky behaviour, hopefully this is all resolved now
thanks for the help on this! 🤝

I will close this issue now and re-open it if more users report this error after upgrading to go-libp2p v0.35.4.

rjan90 closed this as completed Jul 29, 2024