Beacon Processor's Inbound Event Queue Is Unaffected By a Runtime Argument #5390
Comments
You're right — the queue lengths are set in lighthouse/beacon_node/beacon_processor/src/lib.rs, lines 74 to 196 (at 10a38a8).
We are planning to hide, deprecate and remove these flags. The solution in your case is to allocate more resources (CPU, RAM, I/O), as queueing more messages is usually not a useful behaviour.
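For context, here is a minimal sketch of the pattern the permalink above points at: per-work-type queue lengths fixed as compile-time constants, which no CLI flag can change. The constant names, the non-default value, and the FifoQueue type are illustrative assumptions, not Lighthouse's actual code.

```rust
// Illustrative only -- not the actual contents of
// beacon_node/beacon_processor/src/lib.rs. Queue bounds are plain constants,
// so a runtime flag cannot change them.
use std::collections::VecDeque;

const UNAGGREGATED_ATTESTATION_QUEUE_LEN: usize = 16_384; // hypothetical name
const AGGREGATED_ATTESTATION_QUEUE_LEN: usize = 4_096;    // hypothetical name/value

/// A bounded FIFO queue that drops new items once it is full.
struct FifoQueue<T> {
    queue: VecDeque<T>,
    max_len: usize,
}

impl<T> FifoQueue<T> {
    fn new(max_len: usize) -> Self {
        Self { queue: VecDeque::new(), max_len }
    }

    /// Push an item, or drop it (and report) once the bound has been reached.
    fn push(&mut self, item: T) {
        if self.queue.len() < self.max_len {
            self.queue.push_back(item);
        } else {
            eprintln!("queue full, dropping item (max_len = {})", self.max_len);
        }
    }
}

fn main() {
    let mut attestations: FifoQueue<Vec<u8>> = FifoQueue::new(UNAGGREGATED_ATTESTATION_QUEUE_LEN);
    attestations.push(vec![0u8; 32]);
    let _aggregates: FifoQueue<Vec<u8>> = FifoQueue::new(AGGREGATED_ATTESTATION_QUEUE_LEN);
}
```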
Okay, noted. Thank you for the explanation as well as the advice!
It seems like the "Attestation queue full" error message only appears immediately after the usual set of "Previous epoch attestation(s)…" info messages. Is it possible that enough attestation messages are simply being broadcast by enough peers that a length of 16,384 is guaranteed to be momentarily exceeded?

I'm running a 16-core, 3.5 GHz Intel Xeon Ice Lake CPU (with SHA256 extensions and ADX, and built using "maxperf"). My storage is a Provisioned IOPS network drive. This machine is only running (I am using

Is it possible that the allocation of more CPU, RAM, or I/O resources is not primarily what is needed? My resource/process monitoring suggests that I am not making use of what is already available. Sorry if it is a crazy idea, but I've got to ask…
Ethereum mainnet has 977,188 active validators, each attesting once per epoch across 32 slots, so that's roughly 30,000 unaggregated attestations per slot at most. Maybe we need to review the queue numbers, @paulhauner?
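As a quick back-of-the-envelope check of that estimate (the validator count is taken from the comment above; the rest are standard mainnet parameters), the per-slot attestation load already exceeds the default queue length:

```rust
// Back-of-the-envelope check, not Lighthouse code: ~977,188 active validators,
// each attesting once per epoch, spread over 32 slots per epoch.
fn main() {
    let active_validators: u64 = 977_188;
    let slots_per_epoch: u64 = 32;
    let default_queue_len: u64 = 16_384;

    let attestations_per_slot = active_validators / slots_per_epoch; // ~30,537
    println!("~{attestations_per_slot} unaggregated attestations per slot");

    // Roughly twice the default queue length, so a momentary overflow is
    // plausible whenever attestations arrive faster than they are processed.
    assert!(attestations_per_slot > default_queue_len);
}
```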
Totally! Some sort of mechanism to give us a heads up on this would be handy. I'm not sure of the cleanest way to do this, though. It would be nice if it would just alert us devs rather than every user. Adding something to CI which calls an API (beaconcha.in?) could be an option, but it would also be annoying (some random PR would just start failing).
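One rough shape such a CI check could take is a small job that fails once the active validator count outgrows whatever the queue constants were tuned for. The beaconcha.in endpoint and JSON field below are assumptions (not verified here), and the threshold is made up:

```rust
// Hedged sketch of the CI idea above: exit non-zero when the active validator
// count exceeds the tuning assumption, so a human reviews the queue sizes.
// The endpoint and field name are assumptions about the beaconcha.in API;
// substitute whatever data source CI actually uses.
use serde_json::Value;

const QUEUE_TUNED_FOR_VALIDATORS: u64 = 1_000_000; // hypothetical threshold

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed endpoint shape; adjust to the real API.
    let body: Value = reqwest::blocking::get("https://beaconcha.in/api/v1/epoch/latest")?
        .json()?;
    let validators = body["data"]["validatorscount"]
        .as_u64()
        .ok_or("unexpected response shape")?;

    if validators > QUEUE_TUNED_FOR_VALIDATORS {
        eprintln!("validator count {validators} exceeds tuning assumption; review queue sizes");
        std::process::exit(1);
    }
    Ok(())
}
```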
Why not make those values dynamic, based on the state size when the node starts? That should be good enough even for someone running the same process for a year, uninterrupted.
Yep, dynamic is definitely ideal. I think it'll be the first dynamic queue in the
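A minimal sketch of what "dynamic" could mean here: derive the attestation queue length from the active validator count read at startup, instead of a fixed constant. The sizing formula, headroom factor, and names are assumptions for illustration, not how Lighthouse implemented it:

```rust
// Illustrative sketch, not Lighthouse's implementation: size the unaggregated
// attestation queue for roughly one slot's worth of attestations, with headroom.
fn attestation_queue_len(active_validators: usize, slots_per_epoch: usize) -> usize {
    let per_slot = active_validators / slots_per_epoch;
    // Arbitrary 2x headroom for this sketch; never shrink below the old default.
    (per_slot * 2).max(16_384)
}

fn main() {
    // Validator count as read from the beacon state at startup
    // (value taken from the discussion above).
    let len = attestation_queue_len(977_188, 32);
    println!("attestation queue length: {len}"); // ~61,074
}
```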
Beacon processor queue sizes are dynamic as of:

The beacon processor's scheduler is still sub-optimal, and I am planning to write up an issue for that after some more investigation. Will close this issue for now, as the issue identified has been resolved.
Description
The length of the Beacon Processor's inbound event queue (a.k.a. work queue) is unaffected by the --beacon-processor-work-queue-len runtime argument.
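For clarity, here is a generic sketch of the behaviour the flag name implies: the parsed CLI value, not a constant, determines the inbound queue's capacity. This assumes a clap-style parser and is not Lighthouse's actual CLI wiring; the struct and field names are made up.

```rust
// Generic sketch of the expected behaviour: the CLI value determines the
// work queue's capacity. Names here are hypothetical.
use clap::Parser;
use std::collections::VecDeque;

#[derive(Parser)]
struct BeaconNodeArgs {
    /// Length of the beacon processor's inbound event (work) queue.
    #[arg(long = "beacon-processor-work-queue-len", default_value_t = 16_384)]
    work_queue_len: usize,
}

fn main() {
    let args = BeaconNodeArgs::parse();
    // The queue capacity should follow the CLI value rather than a hard-coded constant.
    let _work_queue: VecDeque<[u8; 32]> = VecDeque::with_capacity(args.work_queue_len);
    println!("work queue sized for {} events", args.work_queue_len);
}
```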
Version
Multiple versions of Lighthouse are affected.
Lighthouse
Output of lighthouse --version:
Output of lighthouse --version:
Command Issued for Building
Rust
Output of rustc --version:
Present Behaviour
Runtime Configuration
The Beacon Node service is started with a number of runtime arguments. Here are those which are likely to be relevant:
The following are the relevant runtime parameters for which no argument is declared (given here for the sake of completeness):
Output
Lighthouse's log will include the following:
The log entry suggests that the queue length is 16384 (suspiciously, this is the default value).
Expected Behaviour
I would expect to see log entries like:
Note that queue_len above matches the declared --beacon-processor-work-queue-len value.
Online Documentation Referenced
Output of $ lighthouse bn --help | grep -i beacon-processor-work-queue-len -A2:
Steps To Resolve
I do not know.