lightning-liquidity persistence: Add serialization logic for services and event queue #4059
Conversation
👋 Thanks for assigning @TheBlueMatt as a reviewer!
Force-pushed from 124211d to 26f3ce3
Force-pushed from a98dff6 to d630c4e
@@ -248,7 +252,8 @@ where
	/// [`LiquidityClientConfig`] and [`LiquidityServiceConfig`].
	pub fn new_with_custom_time_provider(
		entropy_source: ES, node_signer: NS, channel_manager: CM, chain_source: Option<C>,
		chain_params: Option<ChainParameters>, service_config: Option<LiquidityServiceConfig>,
		chain_params: Option<ChainParameters>, kv_store: Arc<dyn KVStore + Send + Sync>,
why is this `dyn` vs parameterizing the manager with the `KVStore`?
> why is this `dyn` vs parameterizing the manager with the `KVStore`?
Because dealing with the generics in the API that needs to support both `KVStore` and `KVStoreSync` is very cumbersome to impossible without rewriting things even more fundamentally (I initially tried going the generics way). For instance, we'd then also need to parametrize `LSPS2ServiceHandler`/`LSPS5ServiceHandler` with a `KVStore`, which would be part of the `LiquidityManager` API through `LiquidityManager::lsps2_service_handler`/`lsps5_service_handler`, in turn making wrapping it for `LiquidityManagerSync` ~impossible without having both generics on `LiquidityManagerSync`.
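For illustration, here is a minimal sketch of the trade-off, using stand-in traits and simplified names rather than the actual LDK types: with a trait object, the store never shows up as a type parameter on the manager or on its sync wrapper, whereas a generic store parameter would have to be threaded through every handler type and through `LiquidityManagerSync` as well.

```rust
use std::sync::Arc;

// Stand-ins for the real async/sync store traits (which of course have actual
// read/write methods); everything below is a simplified illustration.
pub trait KVStore {}
pub trait KVStoreSync {}

// Adapter exposing a sync store through the async trait, so a single field
// type can serve both cases.
struct SyncAdapter<S: KVStoreSync>(S);
impl<S: KVStoreSync> KVStore for SyncAdapter<S> {}

// Trait-object approach: no store generic appears on the public type...
pub struct LiquidityManager {
	kv_store: Arc<dyn KVStore + Send + Sync>,
}

// ...so the sync wrapper needs no extra type parameter either.
pub struct LiquidityManagerSync {
	inner: LiquidityManager,
}

impl LiquidityManagerSync {
	pub fn new<S: KVStoreSync + Send + Sync + 'static>(kv_store_sync: S) -> Self {
		let kv_store: Arc<dyn KVStore + Send + Sync> = Arc::new(SyncAdapter(kv_store_sync));
		Self { inner: LiquidityManager { kv_store } }
	}
}
```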
> Because dealing with the generics in the API that needs to support both KVStore and KVStoreSync is very cumbersome

sorry for the dumb / high level question: why do we need to support both `KVStore` AND `KVStoreSync`?
Because we have sync and async versions of the background processor, and the sync one only supports `KVStoreSync`. There is also `process_events_async_with_kv_store_sync` now, which is the async variant that's still using a sync `KVStore` (which is what we're currently using in LDK Node still, though we want to drop that eventually once we get around to writing async wrappers for all remaining sync variants).
Codecov Report
@@            Coverage Diff            @@
## main #4059 +/- ##
==========================================
- Coverage 88.76% 88.59% -0.17%
==========================================
Files 176 178 +2
Lines 129345 129876 +531
Branches 129345 129876 +531
==========================================
+ Hits 114812 115064 +252
- Misses 11925 12192 +267
- Partials 2608 2620 +12
We add `KVStore` to `LiquidityManager`, which will be used in the next commits. We also add a `LiquidityManagerSync` wrapper that wraps the `LiquidityManager` interface, which will soon become async due to usage of the async `KVStore`.
We add a simple `persist` call to `LSPS2ServiceHandler` that sequentially persists all the peer states, each under a key that encodes its node id.
We add a simple `persist` call to `LSPS5ServiceHandler` that sequentially persists all the peer states, each under a key that encodes its node id.
We add a simple `persist` call to `EventQueue` that persists it under an `event_queue` key.
.. this is likely only temporarily necessary, as we can drop our own `dummy_waker` implementation once we bump MSRV.
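To make the commit messages above concrete, here is a rough sketch of the sequential per-peer persistence pattern they describe; the store trait, namespace, key encoding, and type names are illustrative assumptions, not the actual lightning-liquidity code.

```rust
use std::collections::HashMap;
use std::future::Future;
use std::io;
use std::pin::Pin;
use std::sync::Arc;

type WriteFuture = Pin<Box<dyn Future<Output = Result<(), io::Error>> + Send>>;

// Stand-in for an async KV store: write a value under a namespaced key.
trait KVStore: Send + Sync {
	fn write(&self, namespace: &str, key: &str, value: Vec<u8>) -> WriteFuture;
}

struct PeerState;
impl PeerState {
	fn encode(&self) -> Vec<u8> {
		Vec::new() // serialization elided
	}
}

// Stands in for LSPS2ServiceHandler/LSPS5ServiceHandler.
struct ServiceHandler {
	kv_store: Arc<dyn KVStore>,
	// Peer states keyed by the counterparty's node id (33-byte pubkey).
	per_peer_state: HashMap<[u8; 33], PeerState>,
}

impl ServiceHandler {
	// Sequentially persist every peer's state under a key encoding its node id.
	async fn persist(&self) -> Result<(), io::Error> {
		for (node_id, state) in self.per_peer_state.iter() {
			let key: String = node_id.iter().map(|b| format!("{:02x}", b)).collect();
			self.kv_store.write("lsps_service_peer_states", &key, state.encode()).await?;
		}
		Ok(())
	}
}
```

Note that the `?` in this sketch aborts on the first write failure; how errors across multiple persistence futures are surfaced is discussed further below.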
Force-pushed from d630c4e to 70118e7
for fut in futures {
	let res = fut.await;
	if res.is_err() {
		ret = res;
this only returns the last error found; it overwrites the others
Yes, this is correct. I think we could alternatively abort on the first error, but I wanted to at least attempt them all.

In the future (when looking into parallelizing persistence) we could use something like `MultiResultFuturePoller` to poll all futures in parallel and get all results. However, at some point we'll have to actually deal with the error values, and in the background processor that will likely mean logging and ignoring the persistence failure anyway. So I'm not sure it would make a whole lot of a difference to bubble up more than only the last-found error.
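For reference, here is a self-contained sketch of the pattern under discussion, assuming a set of persistence futures resolving to `Result<(), io::Error>` (the function name is made up): all futures are awaited in sequence, a failure doesn't abort the loop, and only the last error seen is returned.

```rust
use std::future::Future;
use std::io;

async fn persist_all<F>(futures: Vec<F>) -> Result<(), io::Error>
where
	F: Future<Output = Result<(), io::Error>>,
{
	let mut ret = Ok(());
	for fut in futures {
		let res = fut.await;
		if res.is_err() {
			// Remember the failure, but keep attempting the remaining futures.
			// If several fail, only the last error is surfaced to the caller.
			ret = res;
		}
	}
	ret
}
```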
We read any previously-persisted state upon construction of `LiquidityManager`.
We read any previously-persisted state upon construction of `LiquidityManager`.
We read any previously-persisted state upon construction of `LiquidityManager`.
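And a matching sketch for the read side mentioned in the commit messages above; the `list`/`read` method shapes, the namespace handling, and the helper name are assumptions for illustration only.

```rust
use std::collections::HashMap;
use std::io;

// Stand-in for a store interface used when loading state at construction time.
trait KVStoreRead {
	// List all keys under a namespace.
	fn list(&self, namespace: &str) -> Result<Vec<String>, io::Error>;
	// Read the value stored under a key.
	fn read(&self, namespace: &str, key: &str) -> Result<Vec<u8>, io::Error>;
}

struct PeerState;
impl PeerState {
	fn decode(_bytes: &[u8]) -> Result<PeerState, io::Error> {
		Ok(PeerState) // deserialization elided
	}
}

// Load every persisted peer state for a service namespace, keyed by the
// hex-encoded node id that was used as the storage key.
fn read_service_peer_states<K: KVStoreRead>(
	kv_store: &K, namespace: &str,
) -> Result<HashMap<String, PeerState>, io::Error> {
	let mut peer_states = HashMap::new();
	for key in kv_store.list(namespace)? {
		let bytes = kv_store.read(namespace, &key)?;
		peer_states.insert(key, PeerState::decode(&bytes)?);
	}
	Ok(peer_states)
}
```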
Force-pushed from 70118e7 to dd43edc
This all LGTM. I have a small concern: maybe I'm being a little paranoid, but `read_lsps2_service_peer_states` and `read_lsps5_service_peer_states` pull every entry from the `KVStore` into memory with no limit. That could lead to unbounded state, exhausting memory and crashing. Maybe we can add a limit on how many entries we load into memory to protect against this DoS? Not sure how realistic this is, though. Maybe an attacker could have access to or share the same storage with the victim, and they could dump effectively infinite data onto disk. In this scenario the victim would probably be vulnerable to other attacks too, but still..
Reading state from disk (currently) happens on startup only, so crashing wouldn't be the worst thing; we would simply fail to start up properly. Some even argue that we need to panic if we hit any IO errors at this point to escalate to an operator. We could add some safeguard/upper bound, but I'm honestly not sure what it would protect against.
Heh, well, if we assume the attacker has write access to our
This is the second PR in a series of PRs adding persistence to `lightning-liquidity` (see #4058). As this is already >1000 LoC, I now decided to put this up as an intermediary step instead of adding everything in one go.

In this PR we add the serialization logic for the LSPS2 and LSPS5 service handlers as well as for the event queue. We also have `LiquidityManager` take a `KVStore` towards which it persists the respective peer states keyed by the counterparty's node id. `LiquidityManager::new` now also deserializes any previously-persisted state from that given `KVStore`. Note that so far we don't actually persist anything, as wiring up `BackgroundProcessor` to drive persistence will be part of the next PR (which will also make further optimizations, such as only persisting when needed, and persisting some important things in-line).

This also adds a bunch of boilerplate to account for both `KVStore` and `KVStoreSync` variants, following the approach we previously took with `OutputSweeper` etc.

cc @martinsaposnic