
archive/storageDiff: Suggestion to make storageDiff subscription-based #160

lexnv opened this issue Oct 10, 2024 · 8 comments

lexnv (Contributor) commented Oct 10, 2024

At the moment, storageDiff is a plain method.

To compute storageDiff between two arbitrary blocks, we'll have to iterate through the keys of both blocks and access the storage at the keys requested by the user.

This operation might be expensive enough to DoS the node, considering users might request differences between genesis <-> latest-finalized.
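For intuition, a naive in-memory version of that key iteration might look something like the sketch below (hypothetical types and function, not the actual polkadot-sdk backend; on a real archive node every lookup goes through the state trie, which is why the cost grows quickly):

```rust
use std::collections::BTreeMap;

// Hypothetical aliases for illustration only; the real implementation works
// against the trie-backed state backend, not in-memory maps.
type Key = Vec<u8>;
type Value = Vec<u8>;

/// Returns the keys whose values differ between state `a` and state `b`,
/// together with the value at `b` (or `None` if the key was removed).
fn storage_diff(
    a: &BTreeMap<Key, Value>,
    b: &BTreeMap<Key, Value>,
) -> Vec<(Key, Option<Value>)> {
    let mut out = Vec::new();
    // Keys added or modified in `b`.
    for (k, v) in b {
        if a.get(k) != Some(v) {
            out.push((k.clone(), Some(v.clone())));
        }
    }
    // Keys present in `a` but deleted in `b`.
    for k in a.keys() {
        if !b.contains_key(k) {
            out.push((k.clone(), None));
        }
    }
    out
}
```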

Another benefit of having a subscription-based approach for storageDiff is that we can leverage the backpressure system implemented for the chainHead subscription.
On the other hand, users will have to adapt their code to work with a subscription-based approach.

Implementation ref: https://github.com/paritytech/polkadot-sdk/pull/5997/files#diff-ac650a2e540d8d37576d05083cce78260192944382ecf1e68f0db50b5d86907eR350-R369

// cc @paritytech/subxt-team @tomaka @josepot Would love to get your thoughts on this 🙏

niklasad1 (Member) commented Oct 10, 2024

I would also want to change archive_unstable_storage to be subscription-based, similar to chainHead_v1_storage.

The main benefits of it:

  • It's easy to apply backpressure, i.e. to "stream" the results so that the client can consume them at the same pace as the server produces them, instead of building a potentially huge method response. Similar to how the chainHead API works.
  • A single method response may be very expensive for the server to produce, because it could be huge without static limits.
  • The implementation of the method response needs some kind of limit (or at least polkadot-sdk has one) on the number of storage queries, to avoid DoS attempts. From the client side such a thing is annoying to deal with: for instance, if one wants to query plenty of storage values, the server may only reply with a handful plus "x items were not processed", and the client would need to make several RPC calls to complete them all. Similar to what occurred in JSON-RPC: performance problem with chainHead_v1_storage queries using descendantValues polkadot-sdk#5589.

I think it would be beneficial to make it subscription-based, but I haven't followed the discussion on why it's not. WDYT?

tomaka (Contributor) commented Oct 13, 2024

There are big differences between chainHead_storage and archive_storage that explain why chainHead_storage uses subscriptions and not archive_storage.
In the case of chainHead, a light client needs to send multiple networking requests to full nodes in order to fulfill the request, and thus it is advantageous for latency to send notifications to the RPC client progressively rather than all at once. It is also important for the request to be cancelable because it might take a long time, and during this time the RPC client might no longer be interested in the operation. Light clients can't implement archive_storage by design, so it's not a problem.

None of these reasons are related to backpressure/DoS attacks, and chainHead_storage does not require/have any backpressure system other than what all JSON-RPC functions have. The chainHead_storage_continue function exists not for backpressure, but for the situation where an RPC client is no longer interested in an operation. Without storage_continue, in the time between when the client is no longer interested in an operation and when the server receives the chainHead_stopOperation, the server might have sent several megabytes. storage_continue exists so that the client and server don't waste bandwidth.

In theory, archive_unstable_storage should be able to simply stream its result onto the TCP connection, and the back-pressure can be handled automatically. Think for example of a std::io::Read or AsyncRead trait implementation that progressively generates results on-the-fly when read or poll_read are called.
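As an illustration of that idea (a minimal sketch with hypothetical types, not the actual server implementation), a blocking std::io::Read wrapper that serializes items lazily could look roughly like this:

```rust
use std::io::{self, Read};

/// Sketch of a response that is generated on-the-fly: items are pulled from an
/// iterator only when the caller reads, so the whole response never has to
/// exist in memory at once.
struct StreamingResponse<I: Iterator<Item = String>> {
    items: I,
    /// Bytes of the item currently being written out.
    pending: Vec<u8>,
    pos: usize,
}

impl<I: Iterator<Item = String>> StreamingResponse<I> {
    fn new(items: I) -> Self {
        Self { items, pending: Vec::new(), pos: 0 }
    }
}

impl<I: Iterator<Item = String>> Read for StreamingResponse<I> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // Refill the pending buffer until there are bytes to hand out,
        // or the iterator is exhausted.
        while self.pos == self.pending.len() {
            match self.items.next() {
                Some(item) => {
                    self.pending = item.into_bytes();
                    self.pos = 0;
                }
                None => return Ok(0), // End of stream.
            }
        }
        // Copy as much of the pending item as fits into the caller's buffer.
        let n = (self.pending.len() - self.pos).min(buf.len());
        buf[..n].copy_from_slice(&self.pending[self.pos..self.pos + n]);
        self.pos += n;
        Ok(n)
    }
}
```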

The implementation for the method response needs to have some kind of limit (or at least polkadot-sdk has that) to limit the number of storage queries to avoid DOS attempts.

This is already covered by the function:

The JSON-RPC server is encouraged to accept at least one `archive_unstable_storageDiff` method per JSON-RPC client. Trying to make more calls might lead to a JSON-RPC error when calling `archive_unstable_storageDiff`. The JSON-RPC server must return an error code if the server is overloaded and cannot accept new method call.

if one wants query plenty of storage values then the server may only reply with a handful, "x items were not processed" and the client would need to make several RPC calls to complete all them.

I don't see how this helps against DoS attacks. You suggest that the server sends a small amount of data, then waits for the client to send back some kind of message before the server continues. This is exactly what happens on the TCP connection with ACK messages. There's no need to build a similar mechanism on top of what TCP/IP already does. The chainHead_storage_continue function does that, but it exists in order to save bandwidth when the client is cooperative, rather than to protect against any kind of attack.

tomaka (Contributor) commented Oct 13, 2024

The one aspect where turning archive_storage into a subscription would be advantageous is the head-of-line blocking problem.
If the client asks for a huge number of keys, and the server sends back a huge response, no other response can arrive while that response is being downloaded. For example, if the client wants to send a small system_name request, the response to this system_name will arrive after the response to archive_storage, which is potentially several seconds later.

But again this is just to help the client, and if the client experiences this problem it can simply decide by itself to send smaller queries in a row. There's no need to adjust the API of archive_storage.

niklasad1 (Member) commented
In theory, archive_unstable_storage should be able to simply stream its result onto the TCP connection, and the back-pressure can be handled automatically. Think for example of a std::io::Read or AsyncRead trait implementation that progressively generates results on-the-fly when read or poll_read are called.

I don't see how this helps against DoS attacks. You suggest that the server sends a small amount of data, then waits for the client to send back some kind of message before the server continues. This is exactly what happens on the TCP connection with ACK messages. There's no need to build a similar mechanism on top of what TCP/IP already does. The chainHead_storage_continue function does that, but it exists in order to save bandwidth when the client is cooperative, rather than to protect against any kind of attack.

Sure, but when it comes to archive_unstable_storage it's quite annoying that the client could send "n big queries" (where n depends on the server's buffer size, because there is no other limit for that AFAIU). For example, if the server doesn't have a streaming JSON serializer, i.e. it serializes the complete method response, all of those responses would be kept in memory until the complete message is sent or the TCP/IP connection is closed because the max retransmission limit is exceeded (which would be in the order of minutes). This can also be prevented by having a limit on the number of storage queries in each call, which I was hoping to avoid; max storage queries per call * buf_cap would be the limit then.

Am I understanding you correctly that you think this is more of an implementation detail which could be achieved with a single method response?

It would certainly be easier to implement such a thing with a subscription instead, by iterating and sending one value at a time, but bounding the number of storage queries in each call works as well for now.

tomaka (Contributor) commented Oct 14, 2024

Am I understanding you correctly that you think this is more of an implementation detail which could be achieved with a single method response?

Yes!

I don't think that the lack of a streaming JSON serializer is a big problem. JSON is a very simple language, and you can simply generate the JSON manually as strings. Send something like {"jsonrpc":"2.0", "id": ..., "response": (I'm writing this off the top of my head), then send the items one by one (properly serialized), then finish by writing }.
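To make the idea concrete, here is a rough sketch of that hand-rolled approach (the helper name is illustrative and not from the spec or polkadot-sdk; it assumes each item is already a valid JSON string):

```rust
use std::io::{self, Write};

/// Write a JSON-RPC 2.0 response by emitting the envelope by hand and
/// streaming the already-serialized items one by one, so the full response
/// never needs to be buffered in memory.
fn write_streamed_response<W: Write>(
    mut out: W,
    id: u64,
    items: impl Iterator<Item = String>,
) -> io::Result<()> {
    // Opening of the envelope, written as a plain string.
    write!(out, "{{\"jsonrpc\":\"2.0\",\"id\":{},\"result\":[", id)?;
    // Each item goes out as soon as it is produced.
    for (i, item) in items.enumerate() {
        if i > 0 {
            out.write_all(b",")?;
        }
        out.write_all(item.as_bytes())?;
    }
    // Close the array and the envelope.
    out.write_all(b"]}")
}
```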

I realize that this might conflict with your existing abstraction layers (which I'm not familiar with anymore), but between having a complicated JSON streaming system, having a complicated notifications system for that RPC function, or simply sending strings, IMO simpler is better.

jsdw (Collaborator) commented Oct 14, 2024

Note: I'm only really thinking about archive_storageDiff here:

I realize that this might conflict with your existing abstraction layers (which I'm not familiar with anymore), but between having a complicated JSON streaming system or having a complicated notifications system for that RPC function, or simply sending strings, IMO simpler is better.

I guess this is the main point, yeah. We could implement the method in a nice way by manually streaming it as you described, but that also requires punching through our JSON-RPC abstraction down to the websocket implementation so that we can break the message into multiple frames (and avoid sending a length up front), which I suspect is very difficult for us to do.

And so, my argument for having it be subscription-based would be that it's difficult for us to implement nicely, and (if we cared) I suspect it would be difficult for anybody else to do the same. If we instead use the tools available at the JSON-RPC abstraction layer (i.e. subscriptions) we'll avoid this difficulty (and also get the nice side effect of avoiding head-of-line blocking, since you can't interleave anything other than opcodes with partial websocket frames).

Owing to this difficulty/complexity of implementation, we'll likely have to build up a response in memory and error if that grows too large, which is less nice for users (who have to then figure out how to break their requests up to avoid such errors) but also doable.

tomaka (Contributor) commented Nov 4, 2024

If I summarize, we agree that ideally the JSON-RPC API would stay as it is, and the implementation would be modified, but in practice it is too difficult to do so.

But I'd like to raise another trade-off: is someone actually going to attack a JSON-RPC node by repeatedly sending large requests? As far as I'm aware no DoS attempt has ever happened to Parity, and I also have never heard of this happening to anyone.
The archive nodes that are "important", for example the ones of Subscan/Polkascan/etc., are, as far as I'm aware, not publicly accessible from the Internet, so this is irrelevant for them.

On the negative side, this modification to the JSON-RPC API is essentially a hack, and makes the life of the JSON-RPC client more complicated. On the positive side, it would prevent some DoS attacks. But if DoS attacks are in reality not really a concern, is it really worth doing this?

jsdw (Collaborator) commented Nov 4, 2024

If I summarize, we agree that ideally the JSON-RPC API would stay as it is, and the implementation would be modified, but in practice it is too difficult to do so.

My summary would be more like (again focusing only on archive_storageDiff here):

Pros of current API:

  • Simpler for clients to call/use (one JSON request/response pair versus having to subscribe and then gather the result up from the subscription responses).
  • Subscriptions aren't standardised either, so some JSON-RPC clients won't help much with handling these.

Pros of subscription based API:

  • Less susceptible to DoS attacks.
  • If the alternative is building up the response in memory:
    • Lower memory usage when large diffs are handed back: we'll stream them rather than build them up in memory.
    • No need to enforce any limits on the size of the response we can accumulate/hand back, so the user won't see errors in these cases and won't need to work out how to break down their request to make it succeed.
    • Even for big diffs, the user gets something back that they can work with much faster, and can potentially work with it in a streaming way to avoid accumulating a big thing in memory on their side, too.
  • If the alternative is using partial websocket frames to stream back a response:
    • Easier to implement the API (this affects us as well as e.g. any proxy wishing to use these APIs).
    • No head-of-line blocking issues; streaming responses for large requests can be interleaved with other responses.

On the positive side, it would prevent some DoS attacks. But if DoS attacks are in reality not really a concern, is it really worth doing this?

Even if they are not a reality, would it not be better to avoid such a potentially easy target?

Overall, I don't overly love having to complicate the API for users even a little bit, but in return we would also make it more likely to succeed (e.g. for large responses), more resilient (e.g. to DoS attacks) and more performant (than the straightforward in-memory implementation at least, for users too). So I can't help but think that it is worthwhile overall.
