
Network: disable cache on localhost #3088

Closed
kumavis opened this issue Jan 24, 2018 · 6 comments
Labels
area-provider Relating to the provider module. type-bug Something isn't working

Comments

@kumavis
Member

kumavis commented Jan 24, 2018

There are some issues with testrpc/ganache-cli and MetaMask's cache when using instant blocks. One easy way to resolve this is to not include the BlockCache when targeting localhost.

Selected P1 because this affects lots of developers using MetaMask: they see old state from before the last tx.
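The fix suggested above could be sketched like this: when composing the provider's middleware stack, include the caching layer only for remote networks. This is a minimal illustration with hypothetical names (`createBlockCacheMiddleware`, `buildStack` are not MetaMask's actual API), not the real implementation.

```javascript
// Toy cache middleware: memoizes results by method + params.
// (Illustrative only; MetaMask's real cache is block-scoped.)
function createBlockCacheMiddleware() {
  const cache = new Map();
  return (req, next) => {
    const key = req.method + ':' + JSON.stringify(req.params);
    if (cache.has(key)) return cache.get(key);
    const result = next(req);
    cache.set(key, result);
    return result;
  };
}

function isLocalhost(rpcUrl) {
  const { hostname } = new URL(rpcUrl);
  return hostname === 'localhost' || hostname === '127.0.0.1';
}

// Compose the stack; skip the cache layer when targeting localhost,
// so instant-block test chains never serve stale state.
function buildStack(rpcUrl, fetchFromNode) {
  const middlewares = [];
  if (!isLocalhost(rpcUrl)) {
    middlewares.push(createBlockCacheMiddleware());
  }
  // Fold middlewares around the terminal handler that hits the node.
  return (req) =>
    middlewares.reduceRight(
      (next, mw) => (r) => mw(r, next),
      fetchFromNode
    )(req);
}
```

With this shape, every `eth_call` against localhost reaches the node, while repeated identical calls against a remote URL are served from cache.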

@timothywangdev

We experienced the same bug while testing on ganache-cli. MetaMask always returns the old (incorrect) value for a contract variable. My current workaround is to connect MetaMask to another network, then reconnect to localhost.

@danfinlay
Contributor

More caching observations by @MicahZoltu:

micah.zoltu [1:47 AM]
I'm trying to author a dapp against MetaMask. When connected to a private network and I attempt to do eth_call, MetaMask returns a JSON-RPC success response with .result: '0x'. When I watch the network traffic coming out of MetaMask, I can see that no request is being made to the private node, which means MetaMask is generating the response internally.
I am able to do net_version successfully and I can witness the request going to the private node I'm pointing at.
When I switch networks to Rinkeby and fire the exact same eth_call request I can see it posting to Infura. This implies to me that the issue is specific to custom nodes and eth_call in some way?
Am I doing something wrong?
Ahah! Apparently MetaMask doesn't like me passing from: '0x00000...' to eth_call. Will file a bug about that.

Hmm, it actually appears to be some super aggressive caching by MetaMask.
As a developer, how do I force MetaMask to bust its cache? I rebuilt my private network with a bugfix, but it appears that MetaMask cached the response from before my bugfix.

Previously I hadn't worked on MetaMask integration personally on other projects, though I have heard other developers complain about "weird MetaMask behavior that eventually went away". Now that I am experiencing it myself, I took some time to debug and figure out the source of the problem, and caching looks like the root cause.

I can certainly appreciate the desire to have the results of many things cached (e.g., eth_call) to avoid having a poorly written app hammering the backing nodes. It just feels like at the moment the caching is a bit too aggressive and busting the cache is a bit too hard.

@bdresser
Contributor

being handled in #4279

@bdresser
Contributor

fixed in #4279

@saturnial

Apparently this issue is fixed, but I'm curious to know what the original caching strategy was, and how exactly it was addressed in #4279.

@kumavis
Member Author

kumavis commented Aug 20, 2018

@saturnial #4279 changes our handling of RPC methods from provider-engine to json-rpc-engine.

Before, with provider-engine, we used this default configuration for all networks. Its cache logic can be found here.

Now, with json-rpc-engine and some additional changes in MetaMask's NetworkController, we configure Infura mainnet like this and localhost like this. The primary difference is the inclusion of two layers of caching: the primary block-based cache and the network-request-deduping inflight cache.
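The block-based layer described here can be sketched minimally as follows: entries are keyed by method and params, and the whole cache is dropped whenever the latest block advances, giving every result a lifetime of at most one block. This is an illustrative sketch (the class and method names are hypothetical), not MetaMask's actual code.

```javascript
// Sketch of a one-block-lifetime cache: results are keyed by
// method + params and cleared whenever a new chain head is observed.
class BlockCache {
  constructor() {
    this.blockNumber = null;
    this.entries = new Map();
  }

  // Call with the current latest block before serving a request.
  setLatestBlock(blockNumber) {
    if (blockNumber !== this.blockNumber) {
      this.blockNumber = blockNumber;
      this.entries.clear(); // new block: cached results may be stale
    }
  }

  get(method, params) {
    return this.entries.get(method + ':' + JSON.stringify(params));
  }

  set(method, params, result) {
    this.entries.set(method + ':' + JSON.stringify(params), result);
  }
}
```

On an instant-block chain like ganache-cli, a head update arrives after every transaction, so a cache with this lifetime would be invalidated almost immediately; the stale-state bug occurs when the cache outlives the state it reflects.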

The block-cache was built to do a straightforward cache of requests with a lifetime of one block. The inflight cache was built to solve the problem where requests 1 and 2 are equivalent but request 2 is made before request 1 finishes: request 2 is held until request 1 completes, then returns with request 1's result. Here is where we determine the methods that we cache (anything marked perma + block).
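The inflight-dedup behavior described above can be sketched with a map of pending promises: an equivalent request that arrives while the first is still in flight piggybacks on the same promise instead of hitting the node again. A minimal sketch, with hypothetical names, not MetaMask's actual implementation:

```javascript
// Sketch of an inflight (request-deduplication) cache: if request 2 is
// equivalent to an in-progress request 1, request 2 waits and resolves
// with request 1's result instead of hitting the network a second time.
function createInflightCache(fetchFromNode) {
  const inflight = new Map(); // key -> pending Promise

  return async function request(method, params) {
    const key = method + ':' + JSON.stringify(params);
    if (inflight.has(key)) {
      // Equivalent request already in progress: share its result.
      return inflight.get(key);
    }
    const promise = fetchFromNode(method, params)
      // Once settled, forget the entry so later requests go through fresh.
      .finally(() => inflight.delete(key));
    inflight.set(key, promise);
    return promise;
  };
}
```

Note that unlike the block cache, this layer holds nothing after a request settles, so it never serves stale state; it only collapses concurrent duplicates.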
