
Watching events: The request took too long to process #2114

Closed
eloudsa opened this issue Sep 18, 2017 · 20 comments
@eloudsa

eloudsa commented Sep 18, 2017

Hello

I'm trying to watch my events, but I run into issues when I'm using the web3 implementation provided by MetaMask and my smart contract is published on the live chain.

My smart contract defines the following event:
event sellArticleEvent (
uint indexed _id,
address indexed _seller,
string _name,
uint256 _price
);

On my app.js file, I have the following code:

listenToEvents: function() {
          App.contracts.ChainList.deployed().then(function(instance) {

               var sellEvent = instance.sellArticleEvent({},
                    {fromBlock: 0,toBlock: 'latest'}).watch(function(error, event) {
                         if (!error) {
                              console.log(event);
                         } else {
                              console.error(error);
                         }
                    })
...

When I open my Dapp on the browser, I can see the following error on the console window:

Error: Gateway timeout. The request took too long to process. This can happen when querying logs over too wide a block range.
    at Object.InvalidResponse (web3.min.js:1)
    at web3.min.js:2
    at inpage.js:209
    at inpage.js:8205

But, if I use Truffle console, I'm able to quickly retrieve the logs:

truffle(development)> ChainList.deployed().then(function(instance) {app = instance; })
truffle(development)> var sellEvent = app.sellArticleEvent({}, {fromBlock: 0,toBlock: 'latest'}).watch(function(error, event) {console.log(event);})
truffle(development)> { address: '0x232795aa05d2c739cb5e81b68984a957db682466',
  blockNumber: 4285152,
  transactionHash: '0xc958f0037ae489593bab6fe37a39dffda090ad19f271a8a626397189585c5eb4',
  transactionIndex: 37,
  blockHash: '0xd75fb764588bb1ae9ad9638c90f0ab1408041c87f3c831cb211d84f9132d98cf',
  logIndex: 16,
  removed: false,
  event: 'sellArticleEvent',
  args: 
   { _id: { [String: '1'] s: 1, e: 0, c: [Array] },
     _seller: '0x3e67b24b83ff770f26496a20f5de12b125b3aca4',
     _name: 'Sample article',
     _price: { [String: '5000000000000000'] s: 1, e: 15, c: [Array] } } }

I should also mention that the frontend app works properly when the smart contract is deployed on testrpc or even on Rinkeby.

Is this a well-known problem or limitation of the web3 provided by the MetaMask extension, or do I have to adjust my code?

Thanks

Said

@eloudsa
Author

eloudsa commented Sep 18, 2017

I have some news.

I've noticed that the problem only occurs when Metamask is connected to the Main Ethereum Network with the error:
Gateway timeout. The request took too long to process. This can happen when querying logs over too wide a block range.

If I run my own node connected to the live chain and switch MetaMask to my localhost:8545, the events are properly retrieved and processed by my frontend app.

@danfinlay
Contributor

Infura does its best to let Dapps request the data they want, but it serves so many users at once that it cannot guarantee that requests for events/logs over an arbitrarily long block range will succeed. A timeout error like this means Infura took long enough that it dropped the request, and that you should request smaller block ranges per request. You can either hard-code those smaller block ranges or compute them dynamically.

It might be cool if we automatically broke up requests like this into smaller chunks, that are easier for Infura to process.

At the very least, we should add to this error message a link to a post that describes in more detail what it means, like I just did.

Leaving this issue open for one of those solutions to be implemented.
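[Editor's note] The workaround described above — requesting smaller block ranges per request — can be sketched as follows. This is only a minimal illustration: `getLogs` is a placeholder for whatever log-fetching call your provider exposes (e.g. a raw `eth_getLogs` JSON-RPC request), and the chunk size is an arbitrary choice you would tune.

```javascript
// Split a [fromBlock, toBlock] interval into sub-ranges that a
// rate-limited provider like Infura is more likely to serve.
function chunkBlockRange(fromBlock, toBlock, chunkSize) {
  const ranges = [];
  for (let start = fromBlock; start <= toBlock; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize - 1, toBlock)]);
  }
  return ranges;
}

// Fetch logs one chunk at a time instead of in one huge request.
// `getLogs` is a placeholder for the provider's log query function.
async function fetchLogsChunked(getLogs, fromBlock, toBlock, chunkSize) {
  const all = [];
  for (const [start, end] of chunkBlockRange(fromBlock, toBlock, chunkSize)) {
    const logs = await getLogs({ fromBlock: start, toBlock: end });
    all.push(...logs);
  }
  return all;
}
```

For example, `chunkBlockRange(0, 25, 10)` yields the sub-ranges `[0,9]`, `[10,19]`, `[20,25]`, each of which becomes its own request.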

@danfinlay danfinlay removed their assignment Sep 18, 2017
@eloudsa
Author

eloudsa commented Sep 18, 2017

Hi @FlySwatter
Thanks for your quick reply.

@eloudsa
Author

eloudsa commented Sep 19, 2017

Hi @FlySwatter
I've just installed 3.10.3-pre.

I'm able to retrieve and display an event only when I send a transaction to the main net.

But if I refresh my page, the watch callback is no longer called.

I no longer get the Gateway timeout, but I do see another warning:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
EventEmitter.addListener @ contentscript.js:2533
Readable.on @ contentscript.js:4463
eos @ contentscript.js:2375
anyStreamEnd @ contentscript.js:3033
createStream @ contentscript.js:2956
setupStreams @ contentscript.js:75
(anonymous) @ contentscript.js:25
1../lib/port-stream.js @ contentscript.js:120
s @ contentscript.js:1
e @ contentscript.js:1
(anonymous) @ contentscript.js:1

@gghidini

Hello @danfinlay .

As of today (Sep. 29, 2017), it looks like MetaMask (or Infura on the back-end) is silently not returning events over large block ranges instead of emitting the Gateway timeout error reported by @eloudsa, which I have also observed in the past.

I'm working w/ MetaMask 3.10.6 connected to the built-in Infura-provided Rinkeby Test Net (not my own local test node).

Based on my own experiments, it looks like MetaMask/Infura return events generated over the previous 12 hours worth of blocks on the Rinkeby test network and silently drop any event before that without firing any error.

Could you please confirm such changes were made?

If this is the case, I think it's an unfortunate development. As a DApp developer, I would much prefer to receive an error from MetaMask instead of just seeing recent events with no information about potential earlier events that have not been reported.

Furthermore, I would like there to be clear usage limits set by MetaMask/Infura instead of what currently seems to be a best-effort approach (as much as I like it).

For instance, it would be great if MetaMask/Infura could state that each user of a DApp is granted an n-block range for event watches at any single time.

Accordingly, the DApp developer could make sure that the DApp stays within that limit, and thus the DApp works as expected for the end-users.

I think this issue is very important to the long-term adoption of MetaMask as an enabler of Ethereum-based DApps for the general public.

BTW, I also agree with your idea of breaking down large block ranges for event watches.

@gghidini

Hello @danfinlay .

I'd like to share a little more info from my investigation.

First, it looks like the event watch is silently not reporting events only when MetaMask is connected to the Rinkeby test network via the official node provided by Infura.

In fact, all events in the interval are properly returned when I connect MetaMask to the Rinkeby test network via my own local Ethereum node (using Geth). (This makes sense given that we're talking about saving Infura's resources.)

Second, it looks like the MetaMask/Infura solution doesn't return ANY event in blocks older than approx. 12 hours, no matter how small the block range.

I confirmed this by setting a 3-block range around an event that I know to exist in a block in the 976530s and thus occurred approx. 14 hrs ago (Sep-29-2017 04:xx:xx AM +UTC). The event is not returned when I'm connected to the Rinkeby test network thru the Infura-backed node, while it is returned when I'm connected to my local node (always via MetaMask).

Such a policy (of returning only events within last 12 hours no matter how small the block range is) from MetaMask/Infura may make sense on the Rinkeby test network, given that it is indeed a test network, and one may just run experiments and then forget about older contracts and related events. Nevertheless, it would be greatly helpful if this was documented somewhere.

Furthermore, is the same policy applied to MM/Infura nodes for the main network? That would be detrimental to any usage of MM for production applications on the main network.

Could you investigate within MetaMask and with Infura to confirm these findings?
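[Editor's note] For a rough sense of the window size being described, here is a back-of-the-envelope sketch. It assumes roughly 15-second Rinkeby block times, which is only an approximation; actual block intervals vary.

```javascript
// Approximate number of Rinkeby blocks produced in a 12-hour window,
// assuming ~15-second block times (an approximation; real times vary).
const SECONDS_PER_BLOCK = 15;
const WINDOW_HOURS = 12;
const blocksIn12Hours = (WINDOW_HOURS * 3600) / SECONDS_PER_BLOCK;
console.log(blocksIn12Hours); // 2880

// So a 12-hour cutoff would correspond to roughly ignoring any log
// below latestBlock - 2880.
```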

@kumavis
Member

kumavis commented Sep 29, 2017

@eloudsa that warning message is unrelated to your filter request

@kumavis
Member

kumavis commented Sep 29, 2017

@gghidini we're investigating with infura

@egalano

egalano commented Sep 30, 2017

@gghidini I'm on the INFURA team and we checked all of our rinkeby nodes and attempted pulling logs from several block ranges and could not reproduce. Could you provide us with the specific block range you are using or getLogs parameters you are using for your test so we can try reproducing?

Here is an example of how we tested our nodes:

$ curl -X POST -s --data '{"jsonrpc":"2.0","method":"eth_getLogs","params":[{"fromBlock": "0xc35fe", "toBlock": "0xc35fe"}],"id":74}' https://rinkeby.infura.io | jq .
{
  "jsonrpc": "2.0",
  "id": 74,
  "result": [
    {
      "address": "0xaca0cc3a6bf9552f2866ccc67801d4e6aa6a70f2",
      "topics": [
        "0x581d416ae9dff30c9305c2b35cb09ed5991897ab97804db29ccf92678e953160"
      ],
      "data": "0x000000000000000000000000000000000000000000000000000000003c4a43e0",
      "blockNumber": "0xc35fe",
      "transactionHash": "0x776af635e167bea91b5efb71c1043a552ac0f3929dffc846a0975887214610b5",
      "transactionIndex": "0x0",
      "blockHash": "0x9985a35918e773f690055c5e9dab3dc7b26cc433ba2f0a220770ddc3a28532a1",
      "logIndex": "0x0",
      "removed": false
    },
    {
      "address": "0xaca0cc3a6bf9552f2866ccc67801d4e6aa6a70f2",
      "topics": [
        "0x7f401c767587c1730c37769fe771d1036a16726244368ea9b7d7a054c30c9bf3"
      ],
      "data": "0x00000000000000000000000000000000000000000000000000219a2155adbf60",
      "blockNumber": "0xc35fe",
      "transactionHash": "0x776af635e167bea91b5efb71c1043a552ac0f3929dffc846a0975887214610b5",
      "transactionIndex": "0x0",
      "blockHash": "0x9985a35918e773f690055c5e9dab3dc7b26cc433ba2f0a220770ddc3a28532a1",
      "logIndex": "0x1",
      "removed": false
    },
    {
      "address": "0xaca0cc3a6bf9552f2866ccc67801d4e6aa6a70f2",
      "topics": [
        "0x5a00e19d43b47dab27926de9fb1aee5b35c9125bf38d0908cb23c8c6e451241d"
      ],
      "data": "0x0000000000000000000000000000000000000000000000005411acecf8427188",
      "blockNumber": "0xc35fe",
      "transactionHash": "0x776af635e167bea91b5efb71c1043a552ac0f3929dffc846a0975887214610b5",
      "transactionIndex": "0x0",
      "blockHash": "0x9985a35918e773f690055c5e9dab3dc7b26cc433ba2f0a220770ddc3a28532a1",
      "logIndex": "0x2",
      "removed": false
    }
  ]
}

@gghidini

Hello @egalano (and @kumavis ).

I think the issue may not be at the level of the Infura nodes, but somewhere between the MetaMask Chrome plugin (inclusive) on one's laptop and the Infura JSON-RPC API.

Indeed, I can retrieve the log of the event in my application both when I point at my local node (on a local VirtualBox VM) and when I point at the Infura Rinkeby endpoint w/:

$ curl -X POST -s --data '{"jsonrpc":"2.0","method":"eth_getLogs","params":[{"fromBlock": "0xee691", "toBlock": "0xee691"}],"id":74}' https://rinkeby.infura.io | jq .
{
  "jsonrpc": "2.0",
  "id": 74,
  "result": [
    {
      "address": "0xaca0cc3a6bf9552f2866ccc67801d4e6aa6a70f2",
      "topics": [
        "0x581d416ae9dff30c9305c2b35cb09ed5991897ab97804db29ccf92678e953160"
      ],
      "data": "0x000000000000000000000000000000000000000000000000000000007537ef40",
      "blockNumber": "0xee691",
      "transactionHash": "0x4c8e12d2e0fe194bc5388ed6783ca1504915bb7de6a13c6b6eeea3e838b59c48",
      "transactionIndex": "0x1",
      "blockHash": "0x47ad318bd7369c1a0c0984e43e7fd5e1713853d12719eb07b725d6a2e0c346ae",
      "logIndex": "0x0",
      "removed": false
    },
    {
      "address": "0xaca0cc3a6bf9552f2866ccc67801d4e6aa6a70f2",
      "topics": [
        "0x7f401c767587c1730c37769fe771d1036a16726244368ea9b7d7a054c30c9bf3"
      ],
      "data": "0x0000000000000000000000000000000000000000000000000023eeadf1fb3cc0",
      "blockNumber": "0xee691",
      "transactionHash": "0x4c8e12d2e0fe194bc5388ed6783ca1504915bb7de6a13c6b6eeea3e838b59c48",
      "transactionIndex": "0x1",
      "blockHash": "0x47ad318bd7369c1a0c0984e43e7fd5e1713853d12719eb07b725d6a2e0c346ae",
      "logIndex": "0x1",
      "removed": false
    },
    {
      "address": "0xaca0cc3a6bf9552f2866ccc67801d4e6aa6a70f2",
      "topics": [
        "0x5a00e19d43b47dab27926de9fb1aee5b35c9125bf38d0908cb23c8c6e451241d"
      ],
      "data": "0x000000000000000000000000000000000000000000000000a345b2df88e5a429",
      "blockNumber": "0xee691",
      "transactionHash": "0x4c8e12d2e0fe194bc5388ed6783ca1504915bb7de6a13c6b6eeea3e838b59c48",
      "transactionIndex": "0x1",
      "blockHash": "0x47ad318bd7369c1a0c0984e43e7fd5e1713853d12719eb07b725d6a2e0c346ae",
      "logIndex": "0x2",
      "removed": false
    },
    {
      "address": "0x0eeab8b9e0f330df7c5581a6101c9a30275e3fa1",
      "topics": [
        "0xa439d3fa452be5e0e1e24a8145e715f4fd8b9c08c96a42fd82a855a85e5d57de"
      ],
      "data": "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000050000000000000000000000000000000000000000000000000000000000000000",
      "blockNumber": "0xee691",
      "transactionHash": "0x2c2c30f96dcc7c9b66fbd93caa8bc782a2a1f6593b52c295e881a01deb22c13f",
      "transactionIndex": "0x2",
      "blockHash": "0x47ad318bd7369c1a0c0984e43e7fd5e1713853d12719eb07b725d6a2e0c346ae",
      "logIndex": "0x3",
      "removed": false
    },
    {
      "address": "0x0eeab8b9e0f330df7c5581a6101c9a30275e3fa1",
      "topics": [
        "0x27b022af4a8347100c7a041ce5ccf8e14d644ff05de696315196faae8cd50c9b"
      ],
      "data": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001",
      "blockNumber": "0xee691",
      "transactionHash": "0x2c2c30f96dcc7c9b66fbd93caa8bc782a2a1f6593b52c295e881a01deb22c13f",
      "transactionIndex": "0x2",
      "blockHash": "0x47ad318bd7369c1a0c0984e43e7fd5e1713853d12719eb07b725d6a2e0c346ae",
      "logIndex": "0x4",
      "removed": false
    },
    {
      "address": "0x0eeab8b9e0f330df7c5581a6101c9a30275e3fa1",
      "topics": [
        "0x27b022af4a8347100c7a041ce5ccf8e14d644ff05de696315196faae8cd50c9b"
      ],
      "data": "0x0000000000000000000000003bb425b58db0e06d9a270d5568b495f649d0c4830000000000000000000000000000000000000000000000000000000000000001",
      "blockNumber": "0xee691",
      "transactionHash": "0x2c2c30f96dcc7c9b66fbd93caa8bc782a2a1f6593b52c295e881a01deb22c13f",
      "transactionIndex": "0x2",
      "blockHash": "0x47ad318bd7369c1a0c0984e43e7fd5e1713853d12719eb07b725d6a2e0c346ae",
      "logIndex": "0x5",
      "removed": false
    },
    {
      "address": "0xab5f0bf6630bb35ff57c57fc4b8cfb43b39154a9",
      "topics": [
        "0x7a6d49299bd51968d16982805ff76ce1e2cb5749ccb7c321bc7100af8f494896",
        "0x00000000000000000000000043704a2ed851febf0d8bb8d568d2ce1ab0220724",
        "0x6ab2b53755559ec1000000000000000000000000000000000000000000000000"
      ],
      "data": "0x00000000000000000000000037c982cb9dc5cb074e5ca77d64f6a0bdd5b18235000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000096b6a617366726b6a750000000000000000000000000000000000000000000000",
      "blockNumber": "0xee691",
      "transactionHash": "0x50935a40376d50fd2b61d6ae5fe7cf5d7b3ec83e9cb0dd4c028e82b5cdc44a35",
      "transactionIndex": "0x3",
      "blockHash": "0x47ad318bd7369c1a0c0984e43e7fd5e1713853d12719eb07b725d6a2e0c346ae",
      "logIndex": "0x6",
      "removed": false
    }
  ]
}

FYI, the log in question is the one at index 0x6 originating from address 0xab5f0bf6630bb35ff57c57fc4b8cfb43b39154a9.

However, my DApp (the same exact version accessed on the same exact laptop w/ the same Chrome browser and MetaMask plugin almost at the same time):

  • receives a callback for this event (on a watch set with Javascript API/Web3.js) when the MetaMask plugin is connected to my Rinkeby node on the VirtualBox VM; but
  • receives no callback when the MetaMask plugin is connected to the Rinkeby Test Net backed by Infura.

BTW, I also tried to run an experiment over JSON-RPC, setting a filter w/ eth_newFilter and then getting the results w/ eth_getFilterLogs, but the Infura endpoint responds with an HTTP/1.1 405 Method Not Allowed. I tried this method because I'm not sure which one (eth_getLogs or eth_newFilter) the Javascript API in Web3.js relies upon to implement an event watch on a contract. (eth_newFilter works fine against my local Ethereum node, as one would expect.)

Then I searched for getLogs in the Web3.js repo on GitHub w/ https://github.com/ethereum/web3.js/search?utf8=%E2%9C%93&q=getLogs&type= and it looks like eth_getLogs is never called by the Web3.js library. In fact, the getLogs method actually calls eth_getFilterLogs (the one I was trying to test w/ my call to set up the filter via eth_newFilter, which the Infura endpoint does not authorize), according to https://github.com/ethereum/web3.js/blob/6d3e61a010501011a107a79574cc7516900fa9e4/lib/web3/methods/watches.js#L58

To sum up, I think it would be useful to test the proper behavior of the eth_newFilter and eth_getFilterLogs in retrieving the event because it seems to me that this is what Web3.js uses to implement the event watch in the Javascript API that is used in DApps.

Furthermore, it may be worthwhile to check what may drop the event log on the way back to a callback in the DApp between the Infura JSON-RPC API and the MM Chrome Plugin.

On my side, I will double check for race conditions in my DApp which may make it work when MM is hitting a local node (short round-trip time) and not work when MM has to reach out to the Infura node (longer latency).
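[Editor's note] The RPC sequence described above — web3.js registering a filter and then pulling its logs — can be sketched against a stand-in provider. This is a minimal illustration only: the provider here is a mock object, and a real provider would send these method names as JSON-RPC requests to a node.

```javascript
// Sketch of the filter-based flow web3.js 0.x uses under the hood for
// contract.event().watch(): eth_newFilter registers the filter, then
// eth_getFilterLogs returns the backfill of matching logs.
function watchViaFilter(provider, options, callback) {
  const filterId = provider.send('eth_newFilter', [options]);
  for (const log of provider.send('eth_getFilterLogs', [filterId])) {
    callback(null, log);
  }
  // A real implementation would keep polling eth_getFilterChanges here.
  return filterId;
}

// Mock provider implementing only the two methods used above; any other
// method simulates the 405 Method Not Allowed seen from the endpoint.
const mockProvider = {
  logs: [{ logIndex: '0x6', address: '0xab5f0bf6630bb35ff57c57fc4b8cfb43b39154a9' }],
  send(method, params) {
    if (method === 'eth_newFilter') return '0x1';
    if (method === 'eth_getFilterLogs') return this.logs;
    throw new Error('405 Method Not Allowed: ' + method);
  },
};
```

This makes the failure mode concrete: if eth_newFilter is rejected by the endpoint, the watch callback is never invoked and no error surfaces through the `watch` API.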

@gghidini

Hello @egalano and @kumavis .

I've run a few more experiments that seem to confirm my hypothesis that something between MM Chrome plugin (included) and Infura JSON-RPC API endpoints is silently dropping events without returning any error to the Web3.js Javascript API employed in my DApp.

In particular, today I ran experiments with the following configurations:

  • Sep. 13 version of my DApp hitting respective contract and events on Rinkeby vs. today's version of my DApp hitting respective contract and events on Rinkeby
  • Same MM plugin version 3.10.8 connected to Infura-backed Rinkeby Test Net vs. MM connected to my local Rinkeby test net node

I ran these experiments because on Sep. 13 MM was properly retrieving events based on watches my DApp set up, and was also throwing the gateway error timeout when the block range was too large. Most likely, I was running MM Chrome plugin v. 3.10.0 (https://github.com/MetaMask/metamask-extension/releases/tag/v3.10.0) given the plugin release schedule and automatic updates on Chrome, when I observed this behavior on Sep. 13.

However, when I ran these experiments today with the Sep. 13 version of my DApp, MM connected to the Infura-backed Rinkeby test net DID NOT retrieve the events and DID NOT throw the gateway timeout error.

That is, MM connected to the Infura-backed Rinkeby test net behaves the same (incorrect) way whether I use my Sep. 13 version of the DApp or the current one (with their respective contracts and events on Rinkeby), but it behaves differently than it did when I ran the same experiment on Sep. 13.

This would suggest that the failure to retrieve the events or throw the gateway timeout error is due to some changes in the MM Chrome plugin or the Infura JSON-RPC API given that the plugin is the only component that changed between the experiments I ran on Sep. 13 and today. (BTW, Mist 0.9.0 on Rinkeby test net w/ Mist-provided local Geth node always performs correctly just like MM connected to Rinkeby via my local node on VirtualBox VM.)

To corroborate the hypothesis that recent versions of MM changed something, I also noticed that several MM releases after Sep. 13 (when the gateway timeout error was still thrown and events were retrieved given a small enough range) dealt with filters and memory leaks, such as pull request #2070 or #2155. I have not delved into the code to figure out whether those changes may have caused the problem I'm reporting.

Furthermore, we could break this problem down into two:

  1. Why is the gateway timeout error (or some related error) NOT thrown anymore in recent releases of the MM Chrome plugin when a large block range (say, 0 to latest) is requested by a DApp via Web3.js? Is anyone able to trigger this error from a DApp (I'm not)? If not, is it because Infura is actually accepting those large block ranges, or because the error is silently dropped somewhere between the Infura JSON-RPC API endpoint and the MM Chrome plugin?
  2. Why are events not retrieved even for 3-block-long ranges via MM and Infura-backed nodes on the Rinkeby Test Net?

@kieranelby

FWIW, I'm testing an exchange Dapp on Rinkeby ( http://ubitok.io/exchange/?pairId=TESTI-ETH ) and seeing a similar problem with some contract events not appearing when using MetaMask - but they /do/ appear when I use Infura directly.

In my case it's not historic events, it's some (but not all) events from new blocks.

I'm not doing anything "unusual" when subscribing, just the web3 contract.event().watch:

    var filter = this.bookContract.MarketOrderEvent();
    filter.watch((error, result) => { /* ... my update logic ... */ });

It does sometimes work on page load - I see the callback invoked for events in the current block. Confusingly, for another type of event in the same contract it seems to work fine - events do appear as expected!

When connecting web3.js to Infura directly I use the ZeroClientProvider from "web3-provider-engine/zero.js" to get filtering working:

      this.web3 = new Web3(
        ZeroClientProvider({
          static: {
            eth_syncing: false,
            web3_clientVersion: "ZeroClientProvider",
          },
          pollingInterval: 4000,
          rpcUrl: endpoint,
          getAccounts: (cb) => cb(null, [])
        })
      );

whereas for MetaMask I just use:

this.web3 = new Web3(window.web3.currentProvider)

I haven't seen this problem on Ropsten (though it's so slow to get my txns mined at the moment it's hard to tell).

Apologies if this is a bit anecdotal, I'd like to come up with a minimal test case but haven't had a chance to do so yet.
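[Editor's note] One way to combine the two setups shown above is a small provider-selection helper: prefer the injected (MetaMask) provider when present, otherwise fall back to a direct HTTP endpoint. This is only an illustrative sketch; `makeHttpProvider` and the shape of the injected provider are placeholders, not a real MetaMask or web3 API.

```javascript
// Choose a web3 provider: use the injected one (e.g. MetaMask) when it
// exists, otherwise build a direct HTTP provider for a known endpoint.
// Names here are illustrative, not an actual library API.
function chooseProvider(injectedProvider, makeHttpProvider, fallbackUrl) {
  if (injectedProvider) {
    return { provider: injectedProvider, source: 'injected' };
  }
  return { provider: makeHttpProvider(fallbackUrl), source: 'fallback' };
}
```

Keeping the selection in one place makes it easy to switch providers while debugging issues like the one in this thread, since the rest of the DApp only sees the returned provider.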

@gghidini

gghidini commented Oct 3, 2017

@kieranelby : Thank you for sharing this info. I also noticed other issues in which this has been referenced.

@egalano : I just developed a simple one-page HTML/JS app that watches events on a bare-bones contract that I deployed to Rinkeby a few minutes ago.

Since the problem I reported occurs for 12-hr-old events, I'll wait until then, and then report back on this issue, and share info so that others can try to reproduce.

gghidini added a commit to gghidini/metamask-extension that referenced this issue Oct 4, 2017
@gghidini

gghidini commented Oct 4, 2017

@egalano , @kumavis, @kieranelby and others:

You can try to reproduce the problem with events not being reported by checking out the branch https://github.com/gghidini/metamask-extension/tree/bug/2114-demo-bug in my fork of this repo and then launching the HTTP server on your localhost as described at https://github.com/gghidini/metamask-extension/blob/bug/2114-demo-bug/test/bug-2114/README.md

Feel free to ask questions/comment on the demo.

Please report whether you can reproduce.

@maxsupera

Hi everyone. I'm experiencing the same problems. Simple events are not being watched with MetaMask, on any network, even localhost, but only when MetaMask is the web3 provider. If I force the provider to be localhost or Infura (Rinkeby), everything works fine. As in, including Web3 and using the following: web3 = new Web3(new Web3.providers.HttpProvider('https://rinkeby.infura.io'))

So it seems there is a bug in MetaMask's watching of contract events.

I'll be checking back here impatiently to see if anyone finds a fix...

@gghidini

@HBWoodrose :

Have you tried reproducing the problem using my self-contained demo at https://github.com/gghidini/metamask-extension/tree/bug/2114-demo-bug ?

I'm still experiencing this issue.

Thanks.

@maxsupera

@gghidini Hey there. No, I've got this problem on my own project. It seems quite easy to reproduce, though... just add MetaMask and events are no longer watched...

@gghidini

gghidini commented Oct 20, 2017 via email

@gghidini

@lazaridiscom : the MetaMask plugin is working properly for me nowadays after the team fixed a few things in this issue and related ones with filters.

@eloudsa
Author

eloudsa commented Feb 12, 2018

@lazaridiscom
The error message is no longer displayed, so we can close this issue.
Anyway, I still have issues catching events from MetaMask. But that's related to another issue (#2393) that seems to have been fixed today.

@eloudsa eloudsa closed this as completed Feb 12, 2018