Request: stand-alone VM with instrumentation (especially gas) #1080

@anorth

Description

This is a request for FVM tooling to support the development of efficient actors, motivated specifically by the built-in actors.

With a real VM, tracing execution paths and costs in native actors has become much harder. This is partially a necessary consequence of metering execution cost of compiled code, but there's also plenty of room to restore tooling similar to what we had in Go. At present it is very hard to understand the gas costs (and other execution metrics) while developing actor code, and all but impossible to get detailed traces to inform optimisations.

Here's a brief outline of a workflow I would like to have available, and I think could be built.

  • A stand-alone FVM instance presents a VMDriver API which supports
    • Installing actor WASM bundles
    • Direct manipulation of state
    • Sending messages
    • Direct inspection of state
  • The stand-alone instance gathers traces of gas usage at every syscall boundary, which I can obtain from the VMDriver API
  • It likewise gathers metrics about all storage reads and writes
  • I can write test/benchmark code in Rust that sets up an environment (via direct state manipulation), feeds messages into the VM, then inspects both the final state and the execution/gas/storage traces

To benchmark native actors, I would create a new project that imports this VM and the actors I want to test, and contains the driver "scripts" and assertions.

Ideally, the VMDriver API would be sufficiently abstract that it could also be backed by a fake VM that doesn't involve WASM at all, but executes actor code natively. Then I could write test/benchmark scripts alongside the built-in actors directly, get some tracing (e.g. storage) that way, and merely import and execute them in a third project for gas tracing.
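To make the shape of this concrete, here is a minimal sketch of what such a VMDriver abstraction could look like. Everything below is hypothetical and illustrative: none of these names (`VMDriver`, `GasTraceEntry`, `StorageMetrics`, `FakeVM`) exist in any FVM crate, the "actor logic" is a trivial native counter standing in for real Wasm execution, and the gas numbers are made up. The point is only to show a trait that a real instrumented VM and a fake non-Wasm VM could both implement, with driver scripts written against the trait.

```rust
use std::collections::HashMap;

/// One gas-trace entry, recorded at a (pretend) syscall boundary.
#[derive(Debug, Clone)]
struct GasTraceEntry {
    syscall: &'static str,
    gas_charged: u64,
}

/// Metrics about all storage reads and writes so far.
#[derive(Debug, Default, Clone)]
struct StorageMetrics {
    reads: u64,
    writes: u64,
}

/// The hypothetical driver API: setup, message sending, inspection, tracing.
trait VMDriver {
    /// Direct manipulation of state (environment setup).
    fn set_state(&mut self, actor: &str, state: u64);
    /// Direct inspection of state (assertions).
    fn get_state(&self, actor: &str) -> Option<u64>;
    /// Send a message; returns the gas trace gathered during execution.
    fn send(&mut self, to: &str, delta: u64) -> Vec<GasTraceEntry>;
    /// Storage metrics accumulated across all operations.
    fn storage_metrics(&self) -> StorageMetrics;
}

/// A fake backend: executes "actor code" natively (a counter increment)
/// instead of Wasm, but still records storage traffic and fake gas charges.
struct FakeVM {
    state: HashMap<String, u64>,
    metrics: StorageMetrics,
}

impl FakeVM {
    fn new() -> Self {
        FakeVM { state: HashMap::new(), metrics: StorageMetrics::default() }
    }
}

impl VMDriver for FakeVM {
    fn set_state(&mut self, actor: &str, state: u64) {
        self.metrics.writes += 1;
        self.state.insert(actor.to_string(), state);
    }
    fn get_state(&self, actor: &str) -> Option<u64> {
        self.state.get(actor).copied()
    }
    fn send(&mut self, to: &str, delta: u64) -> Vec<GasTraceEntry> {
        self.metrics.reads += 1;
        let cur = self.state.get(to).copied().unwrap_or(0);
        self.metrics.writes += 1;
        self.state.insert(to.to_string(), cur + delta);
        // Invented gas prices, standing in for real syscall charges.
        vec![
            GasTraceEntry { syscall: "state_read", gas_charged: 100 },
            GasTraceEntry { syscall: "state_write", gas_charged: 300 },
        ]
    }
    fn storage_metrics(&self) -> StorageMetrics {
        self.metrics.clone()
    }
}

fn main() {
    // A driver "script": set up state, feed in a message, inspect results.
    let mut vm = FakeVM::new();
    vm.set_state("counter", 5);
    let trace = vm.send("counter", 2);
    let gas: u64 = trace.iter().map(|e| e.gas_charged).sum();
    let m = vm.storage_metrics();
    println!(
        "state={:?} gas={} reads={} writes={}",
        vm.get_state("counter"), gas, m.reads, m.writes
    );
}
```

A test/benchmark script written against the `VMDriver` trait rather than `FakeVM` could then be reused unchanged against a real Wasm-backed instance, which would report metered gas instead of these placeholder charges.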

For some context, in specs-actors we had a full VM in the project for gathering traces, with which we could write scenario tests that establish an environment and then automate messages. We could gather all necessary metrics because execution was unmetered.

There is probably some overlap with the existing testing/integration code – a sketch of what I'm requesting might start with documentation/examples of how to use that to run built-in actors, though AFAIK the metering isn't there yet.
