
perf: add pass tipset to StateMinerInfo #105

Merged · 1 commit · Jan 28, 2025
24 changes: 23 additions & 1 deletion lib/miner-info.js
@@ -1,15 +1,37 @@
 import { retry } from '../vendor/deno-deps.js'
 import { RPC_URL, RPC_AUTH } from './constants.js'

+async function getChainHead ({ maxAttempts = 5 } = {}) {
+  try {
+    const res = await retry(() => rpc('Filecoin.ChainHead'), {
Member
Did you check with Glif how well they can cache Filecoin.ChainHead requests? If they cannot cache it easily, then we are just shifting the difficulty to a different place, aren't we?

Also, what do you think about obtaining the chain head on the spark-api side as part of the initialization of a new round? Such a solution would ensure that all checkers are querying the same version of the miner info.


What does Filecoin.ChainHead return? The documentation is not very clear:

https://docs.filecoin.io/reference/json-rpc/chain#chainhead

{
  "Cids": null,
  "Blocks": null,
  "Height": 0
}

If you need Height, then you can use the value found in round details, the property is called startEpoch - no need to make another RPC call.

Member Author

> Did you check with Glif how well they can cache Filecoin.ChainHead requests? If they cannot cache it easily, then we are just shifting the difficulty to a different place, aren't we?

Good question, I have asked Glif about the performance implications of this.

> Also, what do you think about obtaining the chain head on the spark-api side as part of the initialization of a new round? Such a solution would ensure that all checkers are querying the same version of the miner info.

During our colo I suggested moving the StateMinerInfo calls to spark-api and you were against it as it was moving us towards centralization. How is this different here?

> What does Filecoin.ChainHead return? The documentation is not very clear:

Yes, I had to perform the actual call to see what's being returned. res.Cids is the tipset key you expect.
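
For illustration, a real response has this shape (the CIDs and height below are made up, and the Blocks entries are elided; each one is a full block header, one per CID):

{
  "Cids": [
    { "/": "bafy2bzacea..." },
    { "/": "bafy2bzaceb..." }
  ],
  "Blocks": [],
  "Height": 4567890
}

Lotus serialises a tipset key as exactly this kind of array of CIDs, which is why res.Cids can be passed directly as the TipSetKey parameter of Filecoin.StateMinerInfo.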

Member Author

ChainHead always costs 1 CU. An uncached StateMinerInfo costs 8 CU, a cached one 1 CU. Therefore, with the cache and the extra call we are at a total of 2 CU instead of the current 8 CU.

Member

> During our colo I suggested moving the StateMinerInfo calls to spark-api and you were against it as it was moving us towards centralization. How is this different here?

I see your point.

In the current design, spark-api keeps track of the rounds managed by the smart contract and records the block number when a new round starts. I am suggesting we rework that part to record not only the block number but also the tipset CIDs.

That way, we preserve the current level of (de)centralisation.
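
A sketch of what that could look like; the field names here are hypothetical, not the actual spark-api schema:

// Hypothetical per-round record in spark-api:
const round = {
  roundIndex: 123,                 // hypothetical name for the round identifier
  startEpoch: 4567890,             // already recorded: block number when the round started
  startTipSetKey: [                // proposed addition: tipset CIDs at startEpoch
    { '/': 'bafy2bzacea...' },
    { '/': 'bafy2bzaceb...' }
  ]
}

// Checkers could then pin their queries to the round's tipset:
// await rpc('Filecoin.StateMinerInfo', minerId, round.startTipSetKey)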

> ChainHead always costs 1 CU. An uncached StateMinerInfo costs 8 CU, a cached one 1 CU. Therefore, with the cache and the extra call we are at a total of 2 CU instead of the current 8 CU.

That's a meaningful improvement 👍🏻

It may be good enough for the first iteration.

Having said that, I have an idea to try: how about using ChainGetTipSetByHeight? (See the sketch after this list.)

  • The checker node can call ChainGetTipSetByHeight with the block height (block number) at which the Spark round started.
  • The checker node can also cache the tipset for the duration of the current round - this should reduce RPC API calls by another factor of 10. (Checker nodes execute 10-20 retrievals per round, IIRC).
  • As a benefit, we will get more predictable results in cases where the miner changes their PeerID during a Spark round.
  • Also, there is no need to change spark-api; all improvements stay inside the checker codebase.
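
A minimal sketch of this idea, reusing the rpc() helper that lib/miner-info.js already calls; the roundStartEpoch parameter and the module-level cache variables are assumptions, not existing code:

let cachedTipSetKey = null
let cachedEpoch = null

async function getRoundTipSetKey (roundStartEpoch) {
  // Reuse the cached tipset key for the duration of the current round.
  if (cachedEpoch === roundStartEpoch && cachedTipSetKey) return cachedTipSetKey
  // The second parameter is the tipset key anchoring the lookup;
  // null means "resolve relative to the current chain head".
  const res = await rpc('Filecoin.ChainGetTipSetByHeight', roundStartEpoch, null)
  cachedTipSetKey = res.Cids
  cachedEpoch = roundStartEpoch
  return cachedTipSetKey
}

// getMinerPeerId() would then pass this key instead of calling ChainHead:
// await rpc('Filecoin.StateMinerInfo', minerId, await getRoundTipSetKey(startEpoch))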

+      // The maximum amount of attempts until failure.
+      maxAttempts,
+      // The initial and minimum amount of milliseconds between attempts.
+      minTimeout: 5_000,
+      // How much to backoff after each retry.
+      multiplier: 1.5
+    })
+    return res.Cids
+  } catch (err) {
+    if (err.name === 'RetryError' && err.cause) {
+      // eslint-disable-next-line no-ex-assign
+      err = err.cause
+    }
+    err.message = `Cannot obtain chain head: ${err.message}`
+    throw err
+  }
+}
+
 /**
  * @param {string} minerId A miner actor id, e.g. `f0142637`
  * @param {object} options
  * @param {number} [options.maxAttempts]
  * @returns {Promise<string>} Miner's PeerId, e.g. `12D3KooWMsPmAA65yHAHgbxgh7CPkEctJHZMeM3rAvoW8CZKxtpG`
  */
 export async function getMinerPeerId (minerId, { maxAttempts = 5 } = {}) {
+  const chainHead = await getChainHead({ maxAttempts })
   try {
-    const res = await retry(() => rpc('Filecoin.StateMinerInfo', minerId, null), {
+    const res = await retry(() => rpc('Filecoin.StateMinerInfo', minerId, chainHead), {
       // The maximum amount of attempts until failure.
       maxAttempts,
       // The initial and minimum amount of milliseconds between attempts.
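
A usage sketch of the function this diff touches (the miner id and PeerId are the examples from the JSDoc above):

const peerId = await getMinerPeerId('f0142637', { maxAttempts: 3 })
// '12D3KooWMsPmAA65yHAHgbxgh7CPkEctJHZMeM3rAvoW8CZKxtpG'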