Client: add basic TxPool #1176
Conversation
Rebased this.
Ok, made some progress here. The tx pool now has some basic - hopefully sufficient - data structure, collects and de-duplicates incoming new txs, and also (naively, so without checking) works with req IDs from ETH/66. Tests are currently not passing and I need to fix them, and I also need to write some tests in general. Feel free to give this some early review and/or also continue some work on this branch if you have some strong opinion on how things can be improved (then please drop some note before). This is what the current output looks like:

Some note: there are recurring errors "peer returned no headers for blocks ..." like these:

I've checked, these have already been present on

The next step would be to do some tx pool cleanup, my current idea is to do this in the following steps:
Going forward with 1. will need the tip-of-the-chain behavior from #1132 to be implemented, so we could either wait on this before continuing or merge here at some point without the cleanup yet (but with e.g. some additional simple limitation of the pool size). Also note that this is only the passive pool behavior, so we are not answering any incoming tx requests yet or broadcasting new txs ourselves. Also - as a TODO note: our own txs submitted via RPC also need to be added to our own pool so that we include them when we produce our own blocks. Still not completely sure when the whole tx pool behavior should be activated (directly at client start or rather when the chain is synced). But in the end this is secondary for this initial implementation, since this can likely be added/adapted very simply via a switch.
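For illustration, here is a minimal TypeScript sketch of the kind of de-duplicating pool structure described above (all names such as SimpleTxPool and PoolObject are invented for the example and are not the actual client code):

interface TxLike {
  // whatever tx type is used; only the hash is needed for de-duplication
  hash(): Buffer
}

interface PoolObject {
  tx: TxLike
  added: number // insertion timestamp, useful for a later pool cleanup step
}

class SimpleTxPool {
  // keyed by the unprefixed hex tx hash, so duplicates are skipped for free
  private pool: Map<string, PoolObject> = new Map()

  add(txs: TxLike[]) {
    for (const tx of txs) {
      const hash = tx.hash().toString('hex')
      if (!this.pool.has(hash)) {
        this.pool.set(hash, { tx, added: Date.now() })
      }
    }
  }

  // reduce an announcement to the hashes we still need to request
  unknownHashes(announced: Buffer[]): Buffer[] {
    return announced.filter((hash) => !this.pool.has(hash.toString('hex')))
  }
}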
Currently there are some open handles left which prevent the client tests from finishing. I tested with this leaked-handles package I just found - which is actually pretty handy 😀 - this is the output:

no of handles 3
tcp stream {
fd: 20,
readable: false,
writable: true,
address: {},
serverAddr: null
}
tcp stream {
fd: 22,
readable: false,
writable: true,
address: {},
serverAddr: null
}
tcp stream {
fd: 23,
readable: true,
writable: false,
address: {},
serverAddr: null
}

Didn't dig any further yet.
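For reference, a minimal usage sketch of the leaked-handles package mentioned above (simply requiring it at the top of a test file is enough to get a periodic handle dump like the one shown; further configuration options exist per the package README):

// at the very top of the test entry file
require('leaked-handles') // periodically prints open handles (like the tcp streams above)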
Force-pushed from 1032b71 to 220ebd0
On this, it was added here as part of the

The general PR for the txpool looks great! I did a little experimenting along the question of the leaked handles but got basically the same result as you, and it's not clear to me how to use the output provided to do useful research. I did notice that if you run
Ah, thanks for the explanation. We had this kind of problem already earlier in client development (too verbose output on these kinds of connection/data errors). One good rule of thumb we came up with: if the error (unexpected behavior) is within the scope of our client (the client is doing something which it shouldn't do), use

This works reasonably well and is a good fit in most cases.
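As a rough sketch of that rule of thumb (assuming a winston-style logger like the client's config.logger; the helper and its parameters are made up purely for illustration):

// Hypothetical helper: client-side bugs surface as errors, expected
// peer/network misbehavior only shows up on the debug level.
function logIssue(
  logger: { error: (msg: string) => void; debug: (msg: string) => void },
  err: Error,
  withinClientScope: boolean
) {
  if (withinClientScope) {
    logger.error(`Client error: ${err.message}`)
  } else {
    logger.debug(`Peer/network error (hidden from info output): ${err.message}`)
  }
}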
Just reviewing the code, looks great! Should we add a cli option

I'm still trying to locate where the process leak is happening but no clues yet. The first culprit is usually the intervals, but they seem to be cleared properly, as every pool.open() is followed by a close(). By the way, those methods are marked as async and awaited properly, but there are no async operations actually within them.

As you mentioned on our last call @holgerd77, I think once we are at the tip of the chain we can have some kind of new block notification we can subscribe to in the pool and remove all the newly mined txs. We can use
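A sketch of the "remove newly mined txs on new block" idea (the newBlocks name matches the test snippet further down in this thread; the surrounding structure is assumed for illustration, and tx.hash() is assumed to return a Buffer as in the client at the time):

import { Block } from '@ethereumjs/block'

class TxPoolSketch {
  // keyed by unprefixed hex tx hash
  public pool: Map<string, unknown> = new Map()

  // called with newly mined/imported blocks once we are at the tip of the chain
  newBlocks(blocks: Block[]) {
    for (const block of blocks) {
      for (const tx of block.transactions) {
        this.pool.delete(tx.hash().toString('hex'))
      }
    }
  }
}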
Not sure, do we need this? Can't this be active by default? 🤔
Ah sure, I was thinking in some cases people might not want the tx pool active, but perhaps not. I checked what flags

Their cli options were an interesting read nonetheless, to get a sense of configurability and defaults:
I also learned the difference between an executable and a non-executable transaction: https://ethereum.stackexchange.com/a/86698
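In short (terminology as in the linked answer, roughly geth's "pending" vs. "queued" pools): a tx is executable when its nonce is the next one the sender's account can actually execute, otherwise it is non-executable and has to wait until the nonce gap is filled. A tiny illustrative helper (not client code):

// Illustrative only: executable ("pending") vs. non-executable ("queued")
function isExecutable(txNonce: bigint, accountNonce: bigint, pendingCountFromSender: number): boolean {
  // executable if it directly follows the account nonce plus txs already pending from this sender
  return txNonce === accountNonce + BigInt(pendingCountFromSender)
}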
I found the hanging test, we needed to stub
if (txHashes.length === 0) {
  return
}
this.config.logger.info(`TxPool: received new pooled hashes number=${txHashes.length}`)
Ok, after finishing, this is just nothing we want on the info level I guess since it is too verbose. We might just move it to debug.
@ryanio cool, nice to see this evolving! 😄
…H/66, added basic data structures for tx request selection, minor fixes
Rebased this.
Some debug note: hanging tests are in

(It's really tricky: one often just suspects the last tests executed to be causing the test run to hang, which is often not the case.)
…tests), fixed light ethereum service tests
…st suite completion
…n, start tx pool if BLOCKS_BEFORE_TARGET_HEIGHT_ACTIVATION threshold is reached
@ryanio wanted to eventually take this in for the block builder PR he plans to do and make some last additions here along that PR. I will therefore put the PR into the "Needs Review" state, but please let Ryan do the (at least final) review.

Some general note: I thought about this for a couple of days and I think I have finally found a practical solution for how and when to start the pool. The pool is now activated when the BLOCKS_BEFORE_TARGET_HEIGHT_ACTIVATION threshold is reached (see the commit above). This should lead to some decent state of the pool once sync is finished. Then we could do another final nonce check on all remaining pool txs to have the pool really ready for including txs in a new block.

Ok, so far. 😄
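A sketch of that activation condition (the constant name is taken from the commit message; the value and the config/pool accessors around it are assumptions for illustration):

// Hypothetical check, run e.g. on every imported block during sync:
const BLOCKS_BEFORE_TARGET_HEIGHT_ACTIVATION = 20 // value chosen only for illustration

function maybeStartPool(
  currentHeight: bigint,
  targetHeight: bigint,
  pool: { running: boolean; start(): void }
) {
  if (!pool.running && targetHeight - currentHeight <= BigInt(BLOCKS_BEFORE_TARGET_HEIGHT_ACTIVATION)) {
    pool.start()
  }
}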
* clean up unneeded EventEmitter usage
* unnest announcedTxHashes logic when length === 0
* add logger debugs for tx pool requests
// Craft block with tx in pool
block = Block.fromBlockData({ transactions: [txB02] }, { common })
pool.newBlocks([block])
t.equal(pool.pool.size, 0, 'pool size 0')
👍
Cool new tests! 😄
lgtm! 🚀🚀
  await new Promise((resolve) => setTimeout(resolve, 5000))
  peer = this.best()
  numAttempts += 1
}*/
}
Oh, interesting. This can now be reactivated without breaking the tests? Might have accidentally fixed something on a more profound level when reworking some initialization parts of the sync and service tests. 😋
hehe, yeah I had to add && this.opened to the while loop conditions to fix it hanging during the integration tests, I was pleased to find it as a simple solution :)
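For context, a simplified sketch of such a best-peer retry loop with the added this.opened guard (not the exact client code; names are assumptions based on the snippet above):

// Hypothetical retry loop: keep looking for a suitable peer, but bail out as
// soon as the component has been closed so tests are not kept hanging.
async function waitForPeer(this: { opened: boolean; best(): unknown | undefined }) {
  let peer = this.best()
  let numAttempts = 1
  while (!peer && this.opened && numAttempts < 10) {
    await new Promise((resolve) => setTimeout(resolve, 5000))
    peer = this.best()
    numAttempts += 1
  }
  return peer
}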
Pretty cool 🎉! Really looking forward to seeing this applied! 😄
Ok, this first round was relatively smooth. Transaction retrieval based on the announced hashes is already working, and TransactionFactory is also able to create the respective tx objects. 😄
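Roughly, that retrieval step could look like the following sketch (paraphrased, not the actual client code; TransactionFactory.fromBlockBodyData from @ethereumjs/tx accepts both the legacy array form and the typed-tx Buffer form from the wire, as far as I recall):

import { TransactionFactory } from '@ethereumjs/tx'

// txsData: raw tx payloads from a peer's PooledTransactions reply
function buildTxObjects(txsData: (Buffer | Buffer[])[]) {
  return txsData.map((data) => TransactionFactory.fromBlockBodyData(data))
}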