perf: pass tipset to StateMinerInfo
#105
Conversation
This makes it easier for Glif to cache this call.
@@ -1,15 +1,37 @@
```js
import { retry } from '../vendor/deno-deps.js'
import { RPC_URL, RPC_AUTH } from './constants.js'

async function getChainHead ({ maxAttempts = 5 } = {}) {
  try {
    const res = await retry(() => rpc('Filecoin.ChainHead'), {
      // (diff truncated in the review view)
```
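For context on what the change enables: `Filecoin.StateMinerInfo` accepts a tipset key as its second parameter, so the `Cids` returned by `Filecoin.ChainHead` can be forwarded to it. A minimal sketch of that idea (not the actual diff; it assumes the `rpc` helper also accepts a params array, and `getMinerInfo` is an illustrative name):

```js
// Sketch only: fetch the chain head once, then pin StateMinerInfo to the
// head's tipset key (`Cids`) so the RPC provider can cache the response.
async function getMinerInfo (minerId, { maxAttempts = 5 } = {}) {
  const chainHead = await getChainHead({ maxAttempts })
  return await retry(
    () => rpc('Filecoin.StateMinerInfo', [minerId, chainHead.Cids]),
    { maxAttempts }
  )
}
```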
Did you check with Glif how well they can cache `Filecoin.ChainHead` requests? If they cannot cache it easily, then we are just shifting the difficulty to a different place, aren't we?

Also, what do you think about obtaining the chain head on the spark-api side as part of the initialization of a new round? Such a solution would ensure that all checkers are querying the same version of the miner info.

What does `Filecoin.ChainHead` return? The documentation is not very clear: https://docs.filecoin.io/reference/json-rpc/chain#chainhead

```json
{
  "Cids": null,
  "Blocks": null,
  "Height": 0
}
```

If you need `Height`, then you can use the value found in round details (the property is called `startEpoch`) - no need to make another RPC call.
> Did you check with Glif how well they can cache `Filecoin.ChainHead` requests? If they cannot cache it easily, then we are just shifting the difficulty to a different place, aren't we?

Good question, I have asked Glif about the performance implications of this.

> Also, what do you think about obtaining the chain head on the spark-api side as part of the initialization of a new round? Such a solution would ensure that all checkers are querying the same version of the miner info.

During our colo I suggested moving the `StateMinerInfo` calls to `spark-api` and you were against it, as it was moving us towards centralization. How is this different here?

> What does `Filecoin.ChainHead` return? The documentation is not very clear:

Yes, I had to perform the actual call to see what's being returned. `res.Cids` is the tipset key you expect.
`ChainHead` always costs 1 CU. Uncached `StateMinerInfo` costs 8 CU, cached 1 CU. Therefore, with the cache and the extra call, we are at a total of 2 CU (1 for `ChainHead` + 1 for the now-cacheable `StateMinerInfo`) instead of the current 8 CU.
> During our colo I suggested moving the `StateMinerInfo` calls to `spark-api` and you were against it as it was moving us towards centralization. How is this different here?

I see your point.

In the current design, spark-api keeps track of the rounds managed by the smart contract and records the block number when a new round started. I am suggesting to rework that part to record not only the block number, but also the tipset CIDs.

That way, we preserve the current level of (de)centralisation.
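To make the suggestion concrete, the reworked round record might look roughly like this (a sketch only; the field names and values are hypothetical, not the actual spark-api schema):

```js
// Hypothetical shape of the round details spark-api could record at the
// start of a new round. `startEpoch` is described above as existing today;
// `startTipSetCids` would be the new field.
const roundDetails = {
  roundId: 42, // illustrative value
  startEpoch: 1_000_000, // illustrative value
  // Tipset key observed at round start: an array of block CIDs,
  // as returned in `Cids` by Filecoin.ChainHead.
  startTipSetCids: [{ '/': 'bafy2...' }] // illustrative placeholder CID
}
```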
> `ChainHead` always costs 1 CU. Uncached `StateMinerInfo` costs 8 CU, cached 1 CU. Therefore, with the cache and the extra call we are at a total of 2 CU, instead of the current 8 CU.
That's a meaningful improvement 👍🏻
It may be good enough for the first iteration.
Having said that, I have an idea to try: how about using `ChainGetTipSetByHeight`?
- The checker node can call `ChainGetTipSetByHeight` with the block height (block number) of when the Spark round started (see the sketch after this list).
- The checker node can also cache the tipset for the duration of the current round - this should reduce RPC API calls by another factor of 10. (Checker nodes execute 10-20 retrievals per round, IIRC.)
- As a benefit, we will get more predictable results in cases where the miner changes their PeerID during a Spark round.
- Also, there is no need to change spark-api; all improvements stay inside the checker codebase.
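A hedged sketch of that checker-side approach (helper names are illustrative; it assumes the round's `startEpoch` is available and reuses the `rpc` helper from the diff above):

```js
// Sketch only: resolve the tipset at the round's start height once,
// then reuse it for every StateMinerInfo call in the same round.
let cachedRound = null

async function getRoundTipSet (startEpoch) {
  if (cachedRound && cachedRound.startEpoch === startEpoch) {
    return cachedRound.tipSet
  }
  // The second parameter is a tipset key anchoring the lookup; an empty
  // key tells the node to walk back from the current heaviest tipset.
  const tipSet = await rpc('Filecoin.ChainGetTipSetByHeight', [startEpoch, []])
  cachedRound = { startEpoch, tipSet }
  return tipSet
}
```

The returned tipset's `Cids` would then be passed to `Filecoin.StateMinerInfo`, as in the earlier sketch.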
This also adds a bit of centralization, as the nodes can't perform their work if this centralized node fails to perform that extra call and expose this extra data.
Let's add a follow-up issue for that? I would rather land this now and get 80% (2 CUs) than go to 100% (1 CU) after spending more time.
Right. We still need to make that second call for `StateMinerInfo` even if we cache the current tipset, so the improvement won't be as significant as I thought. SGTM.