This repository has been archived by the owner on Feb 26, 2024. It is now read-only.

In-process provider is 40 to 60% slower than ganache-cli launched at command line #481

Closed
cgewecke opened this issue Sep 11, 2019 · 7 comments


@cgewecke
Contributor

cgewecke commented Sep 11, 2019

In the past I've heard ganache engineers suggest that running as an in-process provider should be faster than running the client separately as a server. Intuitively this makes sense since there's no ipc overhead etc.

However, in practice I'm seeing the opposite. Running zeppelin-solidity (~2200 Truffle unit tests) with the two options consistently has ganache.provider running 40 - 60% slower than ganache-cli as a separate process. Examples can be seen in the CircleCI jobs below.

Using 6.4.1 ( provider ~40% slower than server)
Using 6.7.0 ( provider ~60% slower than server)

Any ideas why this might be?
Is there any way to address the difference?

Context

I'm working on a coverage tool which inspects opcodes with an in-process provider, and I'm seeing worse performance than expected. In some cases computationally intensive tests take more than twice as long to run with coverage than without. TL;DR: I'm trying to isolate what the bottleneck is.

NB: this issue is purely about perf disparities between the server and the in-process provider. In the CircleCI benchmarking jobs, coverage is run as a separate item.

(cf. solidity-coverage #372)

Your Environment

  • Version used: 6.4.1 and 6.7.0
  • Environment name and version: Node 10
  • Server type and version: ganache-cli.provider & ganache-cli launched at command-line
  • Operating System and version: CircleCI (Linux Docker?) and Mac OSX Sierra
  • Link to your project: sc-forks/zeppelin#provider-benchmarks
@davidmurdoch
Member

Interesting observation!

One difference between the two tests is that the gasLimit is set to 0xfffffffffff in the npm run test test, but I really can't see how that difference could possibly slow things down at all.

It could be that our assumptions about the performance of in-process ganache vs. RPC over HTTP (running ganache-cli as a server) are just flat out wrong.

It could be that the tests themselves are doing a lot of work in the process, which forces the "in-process ganache" to wait; whereas ganache-cli gets its own process and is never waiting on the tests' process (it only waits on the tests' requests).
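That shared-event-loop hypothesis can be sketched with a standalone Node demo (hypothetical illustration only, no ganache code involved): a callback queued by an in-process dependency cannot run until the test's synchronous work yields the event loop.

```javascript
// Standalone sketch of the shared-event-loop hypothesis (not ganache code).

function blockFor(ms) {
  // Simulates heavy synchronous test code hogging the event loop.
  const end = Date.now() + ms;
  while (Date.now() < end) {}
}

let firedAt = null;
const scheduled = Date.now();

setImmediate(() => {
  // Stands in for an in-process provider's response callback.
  firedAt = Date.now();
  console.log(`callback ran ${firedAt - scheduled}ms after being queued`);
});

blockFor(100); // a separate server process would not be delayed by this work
```

The callback is queued immediately but observes a ~100ms delay, because the single thread is busy; a ganache-cli server in its own process would have been serving other work in that window.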

I'll definitely want to figure out what is going on here (as well as investigate why 6.7.0 has caused such a slow down)!


That said, have you run these benchmarks multiple times to make sure it wasn't a fluke of CircleCI? I've seen our Travis and AppVeyor vary test times by several minutes on successive runs without even changing any code.

@cgewecke
Contributor Author

cgewecke commented Sep 11, 2019

@davidmurdoch Thanks!

are you sure it wasn't a fluke of CircleCI

I am (pretty) sure that the provider vs. server is consistently slower - have rerun that many times. The version difference I am less sure about - that surprised me a bit. There's definitely variance run to run.

It could be that the tests themselves are doing a lot of work in the process which is causing the "in-process ganache" to wait; whereas with ganache-cli it gets its own process and isn't ever waiting on the tests' process (only waits on the tests requests).

Yes, that makes a lot of sense. Do you know if this is something worker threads might help with? Have not looked into that stuff at all...

@davidmurdoch
Member

Do you know if this is something worker threads might help with?

Worker threads are not great for I/O bound tasks, according to the docs, and ganache currently is I/O bound. I know it doesn't make much sense for it to be this way and is something I've been wanting to optimize for.

...which gives me an idea. Try creating the provider with memdown as the db option passed to the provider. Like this:

const memdown = require("memdown");
const provider = ganache.provider({ db: memdown() });

This may make the provider faster than the server, but only because it now has the unfair advantage of avoiding disk I/O, which ganache-cli doesn't have a way to do (the --db option is different than the provider's db option, an unfortunate circumstance I have to live with :-) ). Of course, this requires more memory and it means you can't persist the db to disk, but I don't think your use case needs any of that.

Back to the idea of using worker_threads. Another issue is that we still need to support node 8, as some major cloud providers are still stuck on Node 8 (I'm looking at you, Google Cloud Functions!).

@cgewecke
Contributor Author

cgewecke commented Sep 12, 2019

@davidmurdoch Thanks so much - memdown does help a bit. There's still a gap but it's smaller.

Also tried the mocha 'min' reporter, which does almost no terminal writes; that seems a bit faster too. There might be several things adding up. I'm going to close this because I suspect there isn't a silver bullet in the offing here. Thanks!

@cgewecke
Contributor Author

Cross-linking to ganache-cli #677 - might be one piece of the differences seen here.

@sohkai

sohkai commented Nov 14, 2019

Try creating the provider with memdown as the db option passed to the provider. Like this:

@davidmurdoch Just wondering, how much interest would there be in a ganache-cli flag to use memdown instead of the default persisted db?

I was playing around with running ganache in Github Actions recently and they have a limit on file handles, making tests inconsistent (see aragon/aragon-court#219). Moving ganache to using an in-memory db not only sounds faster for tests, but also solves this particular issue with Github Actions :).

@davidmurdoch
Member

If you'd like to put in the work to get this feature done I'd merge it in :-D
