Initial passive testing is happy but not the next ones started in 10m succession #765
Comments
And it looks like lotus (and by extension Observer, F3, etc.) retains negative scores for 6 hours. This is set at the top-level pubsub configuration; I assume it affects the whole pubsub instance, i.e. all topics over its lifetime.
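For reference, here is a minimal sketch (illustrative values, not Lotus's actual configuration) of where that retention window lives in go-libp2p-pubsub: `RetainScore` on `PeerScoreParams` is instance-wide, so it applies across every topic for the lifetime of the pubsub instance.

```go
package main

import (
	"context"
	"time"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/peer"
)

func main() {
	ctx := context.Background()

	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Instance-wide peer score parameters. RetainScore controls how long a
	// peer's score (including a negative one) is remembered after it
	// disconnects; 6h here mirrors the behaviour described above.
	params := &pubsub.PeerScoreParams{
		AppSpecificScore:            func(peer.ID) float64 { return 0 },
		DecayInterval:               time.Second,
		DecayToZero:                 0.01,
		RetainScore:                 6 * time.Hour,
		IPColocationFactorWeight:    -100, // illustrative value
		IPColocationFactorThreshold: 2,
	}
	// Thresholds below which a peer stops receiving gossip, has its
	// publishes suppressed, or is greylisted entirely (values illustrative).
	thresholds := &pubsub.PeerScoreThresholds{
		GossipThreshold:   -500,
		PublishThreshold:  -1000,
		GraylistThreshold: -2500,
	}

	if _, err := pubsub.NewGossipSub(ctx, h, pubsub.WithPeerScore(params, thresholds)); err != nil {
		panic(err)
	}
}
```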
Anecdotally I see a lot of PeerIDs with the exact same really high negative score:
A total of 139 on my node with the exact same negative score, out of a total of:
Total number of PeerIDs that have negative scores is:
For clarity I also grepped for the ones that subscribe to F3, and most have 0 scores - with some occasional negative ones, but not the high negative score as ^^
And the ones with extremely high negative scores are due to IPColocationFactor.
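For context, a rough sketch of how that penalty is computed (per the gossipsub v1.1 scoring spec, not copied from lotus): every peer sharing an IP beyond the threshold gets the same squared-surplus penalty, which would explain many PeerIDs landing on the exact same large negative value.

```go
package main

import "fmt"

// ipColocationPenalty sketches the gossipsub v1.1 IP colocation term: once
// more than `threshold` peers share an IP, each of those peers is penalised
// by weight * surplus^2, where weight is negative.
func ipColocationPenalty(peersOnSameIP, threshold int, weight float64) float64 {
	surplus := peersOnSameIP - threshold
	if surplus <= 0 {
		return 0
	}
	return weight * float64(surplus) * float64(surplus)
}

func main() {
	// Hypothetical numbers: if the 139 peers observed above shared one IP
	// (e.g. a common NAT or bootstrapper host), every one of them would
	// receive this identical large negative score.
	fmt.Println(ipColocationPenalty(139, 1, -100))
}
```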
Another test after a prolonged pause should be run to rule out peer scoring effects, but it does not seem that peerIDs get negatively scored.
I have not found sufficient evidence to believe this is a genuine issue:
Closing.
Critical question: why is it that the first test in the morning always seems to work nicely, while successive tests do not run as well?
Looking at the pubsub settings we forked over from Lotus, there are... a lot of questionable decisions that seem to be rooted in pre-F3 Filecoin network behaviour (e.g. this).
I wonder if a change in the passive testing network causes some loss of mesh or unfair peer scoring, such that the gossipsub mesh becomes ineffective to the point where messages simply do not propagate fast enough. Take invalid message scoring, for example: when networks change it is inevitable that some messages arrive from the previous network that would be considered invalid. We also observe a spike in invalid message errors in the validation flow documented here at the initial instance.
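A hedged sketch of that dynamic (the validator and the network-name check here are hypothetical stand-ins, not the actual F3 code): every `ValidationReject` counts as an invalid message delivery against the sender on that topic, and the resulting penalty grows as `InvalidMessageDeliveriesWeight * counter^2`, so a burst of stale messages right after a network switch could drag otherwise healthy peers below the gossip/publish thresholds.

```go
package sketch

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/peer"
)

// messageNetwork is a hypothetical stand-in for however the real validator
// extracts the network name a message was produced for.
func messageNetwork(data []byte) string {
	return string(data) // placeholder decoding
}

// registerValidator rejects messages that belong to a different (e.g. the
// previous) passive-testing network. Each rejection is counted as an invalid
// message delivery against the sender in the topic score.
func registerValidator(ps *pubsub.PubSub, topic, currentNetwork string) error {
	return ps.RegisterTopicValidator(topic,
		func(ctx context.Context, from peer.ID, msg *pubsub.Message) pubsub.ValidationResult {
			if messageNetwork(msg.Data) != currentNetwork {
				// Counted against the sender's score; the penalty is roughly
				// InvalidMessageDeliveriesWeight * (invalid deliveries)^2.
				return pubsub.ValidationReject
			}
			return pubsub.ValidationAccept
		})
}
```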
So...