Local Tx Monitor #3404
Conversation
Force-pushed from 70e37ad to 3b847a0.
Force-pushed from 2a7cbd7 to f4e4f0b.
Thank you for such a nicely done PR. I pointed out a few style things -- nothing serious.
My only high-level thought is whether a (perhaps buggy) client could cause us problems here. But I'm inferring that the server can only ever hold onto one mempool snapshot at a time (per client), so that at least bounds the memory usage. And mempool snapshots are, relatively speaking, not that large. (We should double-check this: a node could be custom-configured to allow a very large mempool, but I still see no incentive to do that to a degree that would risk resource exhaustion via a LocalTxMonitor client.)
Can you attend the Consensus Planning Meeting next Tuesday (the 12th) at 15:00 Paris Time to discuss this? I'll add you to that event now; feel free to decline.
Review threads opened on:
ouroboros-consensus/src/Ouroboros/Consensus/MiniProtocol/LocalTxMonitor/Server.hs
ouroboros-network/src/Ouroboros/Network/Protocol/LocalTxMonitor/Type.hs
ouroboros-network/src/Ouroboros/Network/Protocol/LocalTxMonitor/Server.hs
Very nice; a few comments follow.
Review threads opened on:
ouroboros-network/src/Ouroboros/Network/Protocol/LocalTxMonitor/Server.hs
ouroboros-network/src/Ouroboros/Network/Protocol/LocalTxMonitor/Client.hs
ouroboros-network/src/Ouroboros/Network/Protocol/LocalTxMonitor/Type.hs
ouroboros-network/protocol-tests/Ouroboros/Network/Protocol/LocalTxMonitor/Test.hs
ouroboros-network/src/Ouroboros/Network/Protocol/LocalTxMonitor/Codec.hs
Force-pushed from c5ff533 to 932a854.
Are you sure this is the style of protocol that you want?
If I've read and understood correctly what you have here is a snapshot style where you can take a mempool snapshot at any time, but you cannot wait for changes to the mempool. So it's a polling interface. How frequently do you plan to poll?
The sketch in the placeholder protocol was not a polling style: it let you ask for the next tx and have it block until a new tx arrived. Yes that version did not give consistent snapshots.
Note that either way, polling or blocking, you cannot guarantee to see every tx that flows through the mempool. It's always possible to be too slow and something can have been added and removed before you notice. In the polling snapshot design you'll miss things that are added and removed between snapshots. In the "next tx" design you'll miss things that are added to the end of the mempool and removed again before you get to that tx.
If you basically like the snapshot style (and sure, consistent snapshots are nice), can I suggest that you recover a blocking / non-polling style by changing MsgReAcquire so that it becomes a blocking operation: it only replies once there has been a change in the content of the mempool.
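To make the suggestion concrete, here is a minimal server-side sketch of such a blocking re-acquire, assuming a version-numbered mempool held in a `TVar`. The types and names (`MempoolSnapshot`, `snapshotNo`, `awaitNextSnapshot`) are illustrative stand-ins, not the consensus layer's actual API:

```haskell
import Control.Concurrent.STM

-- Stand-in for the mempool: a transaction sequence plus a counter that is
-- bumped on every change; 'snapshotNo' plays the role of a version number.
data MempoolSnapshot tx = MempoolSnapshot
  { snapshotNo  :: !Word
  , snapshotTxs :: [tx]
  }

-- Serve a blocking re-acquire: retry (via 'check') until the mempool has
-- moved on from the snapshot number the client currently holds, then
-- hand back the fresh snapshot.
awaitNextSnapshot
  :: TVar (MempoolSnapshot tx)   -- shared state, updated by the mempool
  -> Word                        -- snapshot number the client holds
  -> STM (MempoolSnapshot tx)
awaitNextSnapshot mempoolVar held = do
  snap <- readTVar mempoolVar
  check (snapshotNo snap /= held)  -- blocks until the content changed
  pure snap
```

Running this inside `atomically` gives exactly the "reply only when something changed" semantics, with the STM runtime doing the wake-up bookkeeping.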
No. This is why I am asking for feedback here 😊
Correct.
That's totally up to clients. The main use-case I see for this protocol is to track the state of transactions that one client inserts into the mempool (not so much those received from peers). From the discussions I've had with various folks, they either want to (a) know how empty the mempool is in order to batch a few transactions (although I believe it should be sufficient to wait on local-tx-submission when the mempool is full, which automatically applies back-pressure), or (b) be more efficient w.r.t. transaction re-submission: if a transaction hasn't been seen on-chain and is no longer in the mempool, it should be resubmitted. Some are also just interested in the mempool for monitoring / observability, to better understand how the system behaves behind the scenes. So I imagine a typical poll frequency would be once every block.
Yes, and it also didn't give you any means to know whether a transaction that you had seen in the past was still present in the mempool, which felt quite limiting.
Indeed. I don't think there's any easy way around that, because transactions may come from two sources anyway (local or peer submission). Either way, I don't think it's a problem not to have this guarantee; what's actually more interesting are the transactions which are still in the mempool in the end.
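The re-submission use-case (b) above can be sketched as a small client loop. All of the operations here (`acquireSnapshot`, `hasTx`, `isOnChain`, `submitTx`, `awaitNextBlock`) are hypothetical stand-ins for illustration, not the API introduced by this PR:

```haskell
-- Hypothetical capabilities a client would combine: the LocalTxMonitor
-- protocol for mempool queries, chain-following for on-chain inclusion,
-- and local-tx-submission for (re-)submitting.
data ClientOps m txid tx = ClientOps
  { acquireSnapshot :: m ()            -- take a fresh mempool snapshot
  , hasTx           :: txid -> m Bool  -- is the tx still in the snapshot?
  , isOnChain       :: txid -> m Bool  -- has the tx been adopted on-chain?
  , submitTx        :: tx -> m ()      -- re-submit via local-tx-submission
  , awaitNextBlock  :: m ()            -- poll roughly once per block
  }

-- Keep a transaction alive until it makes it into a block: if it has
-- fallen out of the mempool without being adopted, submit it again.
resubmitUntilAdopted :: Monad m => ClientOps m txid tx -> txid -> tx -> m ()
resubmitUntilAdopted ops tid tx = go
  where
    go = do
      adopted <- isOnChain ops tid
      if adopted
        then pure ()  -- done: the transaction reached the chain
        else do
          acquireSnapshot ops
          pending <- hasTx ops tid
          if pending
            then awaitNextBlock ops >> go                  -- still queued
            else submitTx ops tx >> awaitNextBlock ops >> go -- dropped: retry
```

This matches the "once every block" poll frequency mentioned above: each iteration waits for the next block before checking again.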
I like the snapshot style, I find it pretty useful on the LSQ protocol to be able to reliably make a few queries on a consistent state. It feels like STM over the wire 😊
Oh! That is a nice suggestion indeed! And quite easy to implement on the server side as well.
Force-pushed from eda2e0e to 999773e.
If you like the idea, then it might be good to rename […]
Force-pushed from bde8158 to a284026.
ouroboros-consensus/src/Ouroboros/Consensus/MiniProtocol/LocalTxMonitor/Server.hs
LGTM!
Please also add an entry atop the https://github.com/input-output-hk/ouroboros-network/blob/master/ouroboros-consensus/docs/interface-CHANGELOG.md file. Thanks!
@KtorZ Would you post a status update here? Thanks!
I've added an extra step to acquire a particular mempool snapshot, to better map the MemPool API from the consensus. This is a similar pattern to the Local State Query protocol, although it only acquires the latest mempool snapshot. I've also added a query to check for a particular transaction given its id, which may be faster than traversing the entire sequence (although this highly depends on how the mempool implements it behind the scenes).
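The acquire/query/release session shape described here can be sketched as follows. The query names mirror the protocol states mentioned in this PR (NextTx, HasTx, GetSizes), but the result types and the `Session` record are simplified illustrations, not the PR's actual definitions:

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE RankNTypes #-}

-- Queries a client may run against one acquired mempool snapshot.
data Query txid tx result where
  NextTx   :: Query txid tx (Maybe tx)    -- next transaction, if any
  HasTx    :: txid -> Query txid tx Bool  -- is this txid present?
  GetSizes :: Query txid tx (Word, Word)  -- (current size, capacity)

-- One client session: acquire a snapshot, run any number of queries
-- against that same consistent state, then release -- much like the
-- Local State Query protocol, or "STM over the wire".
data Session m txid tx = Session
  { acquire :: m ()
  , query   :: forall result. Query txid tx result -> m result
  , release :: m ()
  }
```

Because every query between `acquire` and `release` runs against the same snapshot, a client can, for instance, combine `GetSizes` and `HasTx` answers without worrying about the mempool changing in between.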
Mostly following what's done for the other mini-protocols.
```
Test suite test: RUNNING...
  ouroboros-network
    Ouroboros.Network.Protocol
      LocalTxMonitor
        codecM:                    OK
          +++ OK, passed 100 tests.
        codec 2-splits:            OK
          +++ OK, passed 100 tests.
        codec 3-splits:            OK
          +++ OK, passed 100 tests.
        codec cborM:               OK
          +++ OK, passed 100 tests.
        codec valid cbor encoding: OK
          +++ OK, passed 100 tests.
```
…d server.
```
Test suite test: RUNNING...
  ouroboros-network
    Ouroboros.Network.Protocol
      LocalTxMonitor
        direct: OK
          +++ OK, passed 100 tests.
```
* StBusyNext  -> NextTx
* StBusyHas   -> HasTx
* StBusySizes -> GetSizes

This reads better in type signatures when composed with other types of the protocol.
'MsgAwaitAcquire' now replaces 'MsgReAcquire', so that it only replies when there has been a change in the content of the mempool. This allows clients to efficiently consume a snapshot and then wait for the next one, without having to poll for updates at arbitrary intervals. I also took the opportunity to update the module's header and fix the state-machine diagram rendering in Haddock.
Force-pushed from a419c70 to feff0a6.
Alrighty. I think the only obstacle before you can send the […]
I am currently running stylish-haskell through the nix-shell indeed… kicked it off about 20 minutes ago, but Nix decided to re-compile the whole world despite the trusted keys 🤷 … I'll push as soon as it's ready and trigger bors 👍
bors merge
Build succeeded.
'grats @KtorZ! Thank you for your time and determination here.
📍 Resurrect LocalTxMonitor protocol, reworking protocol's types
I've added an extra step to acquire a particular mempool snapshot, to better map the MemPool API from the consensus. This is a similar pattern to the Local State Query protocol, although it only acquires the latest mempool snapshot. I've also added a query to check for a particular transaction given its id, which may be faster than traversing the entire sequence (although this highly depends on how the mempool implements it behind the scenes).
📍 Write client interface for the 'LocalTxMonitor' mini-protocol.
📍 Write server interface for 'LocalTxMonitor' mini-protocol.
📍 Add 'MsgRelease' to allow protocol to loop back to 'StIdle'
And seemingly, allow client to send 'Done' even after acquiring.
📍 Add 'ASCII-like' diagram illustrating the protocol state-machine.
📍 Add codec for LocalTxMonitor protocol.
📍 Implement local tx-monitor server using kernel's mempool.
📍 Wire-up the new mini-protocol with the other Node-To-Client protocols.
📍 Add property tests to cover LocalTxMonitor codecs
Mostly following what's done for the other mini-protocols.
📍 Add extra query to get mempool's size and capacity.
📍 Adjust code style to team's conventions 💅
📍 Add direct execution & property with example LocalTxMonitor client and server.
📍 Rename StBusyKind & TokBusyKind
This reads better in type signatures when composed with other types of the protocol.
📍 Add cddl specification (and tests) for LocalTxMonitor codecs.
📍 Increment NodeToClient version and integrate LocalTxMonitor as client mini-protocol.
📍 Add property tests for LocalTxMonitor via connected channels in IO and IO-sim
📍 Change semantic (and rename) 'MsgReAcquire' to be blocking for updates
'MsgAwaitAcquire' now replaces 'MsgReAcquire', so that it only replies when there has been a change in the content of the mempool. This allows clients to efficiently consume a snapshot and then wait for the next one, without having to poll for updates at arbitrary intervals.
Also, I took the opportunity to update the module's header and fix the state-machine diagram rendering in Haddock.
📍 Add CHANGELOG entry about the LocalTxMonitor protocol.
📍 Fix imports for moved module / code.
📍 Add missing golden serialisation tests for 'GenTxId' & 'SlotNo' in NodeToClient.