WIP: Integrate FIP0013 #6235
Conversation
2021-05-13 conversation:
2021-05-17: for getting this to merge:
FIP 0013 addenda
FIP 13 addenda: correctly handle commit batch timer
About halfway through.
```diff
@@ -1247,7 +1247,7 @@ func upgradeActorsV5Common(
 	// Persist the result.
 	newRoot, err := store.Put(ctx, &types.StateRoot{
-		Version: types.StateTreeVersion3,
+		Version: types.StateTreeVersion4,
```
I thought that bumping to version 3 in actors v4 was a mistake, and that we were not planning to continue bumping the version every time. Is this intentional? If so, what changed?
Oh, I just assumed that /something/ is expecting this to be bumped based on the nv12 diff, might have been wrong
I guess the only good reason for it is the version check in chain/stmgr/forks.go:upgradeActorsV5Common; I'm ok with dropping that.
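For context, the guard being discussed rejects a pre-upgrade state root whose version doesn't match expectations. Here is a self-contained sketch of that kind of check; the function name, error text, and local type are stand-ins, not the exact lotus code:

```go
package main

import "fmt"

// StateTreeVersion mirrors the lotus version constants discussed above;
// the concrete values here are illustrative.
type StateTreeVersion uint64

const (
	StateTreeVersion3 StateTreeVersion = 3
	StateTreeVersion4 StateTreeVersion = 4
)

// checkPreUpgradeVersion sketches the version check in
// chain/stmgr/forks.go:upgradeActorsV5Common: it refuses to run the
// migration on a state root whose version is not the expected one.
func checkPreUpgradeVersion(v StateTreeVersion) error {
	if v != StateTreeVersion3 {
		return fmt.Errorf("expected state tree version 3 before the v5 upgrade, got %d", v)
	}
	return nil
}

func main() {
	fmt.Println(checkPreUpgradeVersion(StateTreeVersion3) == nil) // true
	fmt.Println(checkPreUpgradeVersion(StateTreeVersion4) != nil) // true
}
```

Dropping the check, as suggested above, would simply remove this guard and stop requiring the output version to keep advancing in lockstep.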
I defer to you and @arajasek
```diff
@@ -158,7 +160,8 @@ var prices = map[abi.ChainEpoch]Pricelist{
 	hashingBase:                  31355,
 	computeUnsealedSectorCidBase: 98647,
 	verifySealBase:               2000, // TODO gas , it VerifySeal syscall is not used
```
Does the addition of a new pricelist method warrant creating a new pricelistV0 in the top level map starting at the upgrade epoch?
Unless @Kubuxu disagrees I'd say no
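For background on what "a new pricelistV0 starting at the upgrade epoch" would mean: the `prices` map is keyed by the first epoch at which each pricelist applies, and lookups pick the newest entry at or before the query epoch. A self-contained sketch, with illustrative epochs and gas values (the 170000 upgrade height and the 32000 figure are hypothetical, not real network parameters):

```go
package main

import "fmt"

type ChainEpoch int64

// Pricelist is a cut-down stand-in for lotus's gas pricelist; only
// one field is shown and the values are illustrative.
type Pricelist struct {
	hashingBase int64
}

// prices mirrors the epoch-keyed map edited in the diff above: each
// key is the first epoch at which that pricelist takes effect.
var prices = map[ChainEpoch]Pricelist{
	0:      {hashingBase: 31355},
	170000: {hashingBase: 32000}, // hypothetical upgrade epoch and value
}

// pricelistByEpoch resolves the map the way lotus does: pick the
// entry with the largest effective epoch that is <= the query epoch.
func pricelistByEpoch(epoch ChainEpoch) Pricelist {
	bestEpoch := ChainEpoch(-1)
	var best Pricelist
	for e, pl := range prices {
		if e <= epoch && e > bestEpoch {
			bestEpoch = e
			best = pl
		}
	}
	return best
}

func main() {
	fmt.Println(pricelistByEpoch(100).hashingBase) // 31355
}
```

Since existing prices aren't changing here, adding the new method to the current pricelist (rather than introducing a new epoch-keyed entry) is the simpler option, which matches the answer below.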
Integration LGTM
I have not yet reviewed the extensive storage-sealing changes. If we want a thorough review from me on this it is going to take significant time to get up to speed because the storage fsm is totally new code to me. That might be expected but I want to check in about it before moving forward.
```go
			t.Fatal("sector in a failed state", st.State)
		}
	}
	if states[api.SectorState(sealing.SubmitPreCommitBatch)] == nSectors ||
```
I don't quite understand why both this code and flushSealingBatches exist. Is there any opportunity for deduplicating these two?
This is a "special version" which only flushes batches when all sectors are queued for the next batch (to test how things behave with max-sized batches)
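In other words, the test waits until every sector has reached a batch-submit state before flushing, so the batcher is always exercised with a max-sized batch. A minimal sketch of that condition (the state strings, `nSectors` value, and `shouldFlush` helper are illustrative stand-ins for the test's actual code):

```go
package main

import "fmt"

// nSectors is the number of sectors the hypothetical test pledges.
const nSectors = 8

// shouldFlush reports whether every sector is queued for the next
// batch, i.e. the batch is max-sized and may now be flushed. The
// ordinary flushSealingBatches path, by contrast, flushes whatever
// is pending regardless of batch size.
func shouldFlush(states map[string]int) bool {
	return states["SubmitPreCommitBatch"] == nSectors ||
		states["SubmitCommitAggregate"] == nSectors
}

func main() {
	fmt.Println(shouldFlush(map[string]int{"SubmitPreCommitBatch": 8})) // true
	fmt.Println(shouldFlush(map[string]int{"SubmitPreCommitBatch": 3})) // false
}
```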
extern/sector-storage/mock/mock.go
```go
func (m mockVerifProver) AggregateSealProofs(aggregateInfo proof5.AggregateSealVerifyProofAndInfos, proofs [][]byte) ([]byte, error) {
	out := make([]byte, 200) // todo: figure out more real length
```
FYI from the crypto lab:

| nproofs | aggregate_create_ms | aggregate_verify_ms | batch_verify_ms | batch_all_ms | aggregate_size_bytes | batch_size_bytes |
|---:|---:|---:|---:|---:|---:|---:|
| 8 | 24 | 9 | 3 | 3 | 11220 | 1536 |
| 16 | 44 | 11 | 3 | 6 | 14196 | 3072 |
| 32 | 89 | 11 | 5 | 12 | 17172 | 6144 |
| 64 | 143 | 13 | 7 | 18 | 20148 | 12288 |
| 128 | 257 | 15 | 12 | 36 | 23124 | 24576 |
| 256 | 400 | 17 | 21 | 71 | 26100 | 49152 |
| 512 | 780 | 18 | 38 | 119 | 29076 | 98304 |
| 1024 | 1411 | 21 | 64 | 203 | 32052 | 196608 |
| 2048 | 2455 | 23 | 117 | 406 | 35028 | 393216 |
| 4096 | 4343 | 28 | 222 | 884 | 38004 | 786432 |
| 8192 | 8770 | 33 | 435 | 1761 | 40980 | 1572864 |
`aggregate_size_bytes` is the relevant column.
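Note that `aggregate_size_bytes` grows by a constant 2976 bytes each time the proof count doubles (11220 at 8 proofs, 40980 at 8192). So the mock's fixed 200-byte output could be replaced by a size model fitted to the table above. A sketch; `estimateAggregateProofSize` is a hypothetical helper derived from the benchmark data, not lotus code:

```go
package main

import (
	"fmt"
	"math/bits"
)

// estimateAggregateProofSize models the aggregate_size_bytes column:
// 11220 bytes at 8 proofs, plus 2976 bytes per doubling thereafter.
// Counts below 8 are clamped up; non-powers-of-two round down to the
// nearest power of two, which is fine for a mock.
func estimateAggregateProofSize(nProofs int) int {
	if nProofs < 8 {
		nProofs = 8
	}
	log2 := bits.Len(uint(nProofs)) - 1 // floor(log2(nProofs))
	return 11220 + 2976*(log2-3)
}

func main() {
	fmt.Println(estimateAggregateProofSize(1024)) // 32052, matching the table
}
```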
Reviewed the storage logic, and it looks good. Ran a devnet and it seems to do the thing.
(Verbal green check mark from me, since it's """my PR""")
```go
type AggregateInput struct {
	spt abi.RegisteredSealProof
	// TODO: Something changed in actors, I think this now needs to be AggregateSealVerifyProofAndInfos todo ??
```
This can now be removed
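For reference, the resolved shape is roughly the following. This is a self-contained sketch with local stand-in types (in the PR, `spt` is `abi.RegisteredSealProof` and the per-sector info comes from the specs-actors v5 proof package); the field sets shown here are abbreviated and illustrative:

```go
package main

import "fmt"

// Local stand-ins so the sketch compiles on its own.
type RegisteredSealProof int64

// AggregateSealVerifyInfo abbreviates the per-sector verification
// info carried alongside each proof being aggregated.
type AggregateSealVerifyInfo struct {
	Number                uint64
	InteractiveRandomness []byte
}

// AggregateInput sketches the struct discussed above: the seal proof
// type, the per-sector info, and the individual proof bytes that the
// aggregator will combine.
type AggregateInput struct {
	spt   RegisteredSealProof
	info  AggregateSealVerifyInfo
	proof []byte
}

func main() {
	in := AggregateInput{spt: 8, proof: make([]byte, 192)}
	fmt.Println(len(in.proof)) // 192
}
```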
Overhauls #5769 to be on v5 actors and go into the staging branch.
Currently this is pulling in filecoin-project/filecoin-ffi#169, filecoin-project/go-paramfetch#13 and filecoin-project/specs-actors#1381, and implementing some plumbing logic for proof aggregation
TODO:
asr/temp).