continuation optimization #217
Conversation
Small adjustments are needed.
I cannot figure out how the outputs of test_rlp_slice and test_rlp_from_file are compared.
If this is a WIP PR, please mark it as WIP.
Marked them as WIP.
Regarding comparing: others are marked as WIP.
        context_output.clone(),
    )?;

    write_context_output(&context_out.lock().unwrap(), context_out_path)?;
Replace context_out with context_output and remove the context_out variable?
@@ -41,3 +76,931 @@ impl InitMemoryTable {
        self.0.get(&(ltype, offset))
    }
}

pub fn memory_event_of_step(event: &EventTableEntry) -> Vec<MemoryTableEntry> {
If it’s necessary, could you move the function to specs/src/mtable.rs? I’m curious why it was moved to the specs crate.
… writing witness table
Presently, when zkwasm's instruction count exceeds 2 billion (as observed in zkGo), the generated trace table becomes too large to fit into memory. Moreover, generating the witness table consumes a considerable amount of time, for instance up to 7 hours for 2 billion instructions. This pull request aims to optimize several aspects:
1. Implementing the capability to dump the trace table and reload it to reconstruct the circuit accurately. A specific test case test_rlp_from_file will be provided to ensure the outcome aligns with test_rlp_slice, as in the 1st commit.
2. Introducing a tracer callback mechanism to dump tables periodically per the compute_slice_capability function's output, as in the 2nd commit. Note that there is also a related PR in the wasmi repo. Notably, wasm's maximum memory has been hard-coded to 64MB via LINEAR_MEMORY_MAX_PAGES; otherwise, the current implementation would incur OOM because push_init_memory pushes all the memory into imtable.
3. Adding support for binary files as private inputs for scenarios involving large input sizes. The new arg is --private <filename>:file, as in the 3rd commit.
4. Further optimizing witness generation time through strategies such as caching, flexbuffer utilization, and pooling, as in this commit.
[WIP] reconstruct code