
fix: don't use WakuMessageSize in req/resp protocols #2601

Merged · 3 commits · Apr 20, 2024

Conversation

chaitanyaprem (Contributor) commented Apr 18, 2024

Description

While debugging waku-org/go-waku#1076, I noticed that for the lightpush protocol the read from the stream uses defaultMaxMsgSize, which is 150 KiB. This means that even if a node configures a higher value, a lightpush send from a client fails once the message size exceeds 150 KiB plus overhead.

More discussion on this here : https://discord.com/channels/1110799176264056863/1230492037099294720

Changes

  • Rename MaxWakuMessageSize to DefaultMaxWakuMessageSize to avoid confusion.
    - [ ] LightPush service node to use MaxWakuMessageSize to calculate the RPC size while reading a message. Now I am not so sure this is the right approach; should we rather let it try to publish via relay, which would fail?
  • All req/resp protocols to stop using MaxWakuMessageSize and instead read as much as possible by passing -1 to readLP. Not sure of the impact of this yet.
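For illustration, the effect of passing -1 versus a fixed cap when reading a length-prefixed frame can be sketched as follows. This is a minimal Python sketch of a generic length-prefixed read, not nim-libp2p's actual readLP implementation; the function names and the LEB128 varint framing are assumptions.

```python
import io

DEFAULT_MAX_MSG_SIZE = 150 * 1024  # the old hard-coded 150 KiB cap from the PR description

def read_uvarint(stream) -> int:
    """Decode an unsigned LEB128 varint, the framing libp2p uses for length prefixes."""
    shift, result = 0, 0
    while True:
        byte = stream.read(1)
        if not byte:
            raise EOFError("stream ended inside varint")
        b = byte[0]
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result
        shift += 7

def read_length_prefixed(stream, max_size: int = -1) -> bytes:
    """Read one length-prefixed frame; max_size == -1 disables the cap,
    mirroring the effect of passing -1 to readLP."""
    length = read_uvarint(stream)
    if max_size >= 0 and length > max_size:
        raise ValueError(f"frame of {length} bytes exceeds cap of {max_size}")
    return stream.read(length)

# A 200 KiB frame fails under the old cap but succeeds with the cap disabled.
payload = b"x" * (200 * 1024)
prefix = bytearray()
n = len(payload)
while True:
    if n >= 0x80:
        prefix.append((n & 0x7F) | 0x80)
        n >>= 7
    else:
        prefix.append(n)
        break
framed = bytes(prefix) + payload
assert read_length_prefixed(io.BytesIO(framed), max_size=-1) == payload
```

With the cap enabled, the same frame is rejected before any payload bytes are consumed, which is the failure mode described in the PR description.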

github-actions bot commented Apr 18, 2024

You can find the image built from this PR at

quay.io/wakuorg/nwaku-pr:2601-rln-v2-false

Built from 035f90e

jm-clius (Contributor) left a comment

Some comments :)

waku/waku_lightpush/protocol.nim (outdated review thread on this snippet):

    ): T =
      let wl = WakuLightPush(
        rng: rng,
        peerManager: peerManager,
        pushHandler: pushHandler,
        requestRateLimiter: newTokenBucket(rateLimitSetting),
        maxRPCSize: calculateRPCSize(maxMessageSize),
Contributor commented:

I think adding state and dynamic calculation here is overkill. The idea of the max stream read size was simply to have some reasonable safety limit that will prevent unrealistically (read: adversarially) large requests (which presumably could be an attack vector). We could have just used a very, very big magic number, but opted to initialise our constants with some value based off the MaxWakuMsgSize with safety margins to show that there was some rhyme and reason behind our thinking.

Because this max msg size value is now configurable, our generous high limit may indeed now not be generous enough for differently configured nodes, at least in the lightpush case. However, even here the node could likely not configure message size to anything larger than the underlying libp2p limit (of 1MB).

If we want to change this to be future proof, I'd rather keep using the underlying libp2p constant (1 MB) for all nodes and keep the simplicity of pure constants, or just ignore the max RPC size limit (set to -1 all around). The latter could presumably expose the node to more attacks, though.

chaitanyaprem (Contributor, Author) commented Apr 18, 2024

Yep, realised this... will change it to -1.

chaitanyaprem (Contributor, Author) commented:

> The latter could presumably expose the node to more attacks, though

I am wondering what kind of attacks, because the read would be done by the libp2p layer irrespective of the length passed; only the length check on the message is based on the value passed in.
We could keep it at 1MB, but then it is going to cause issues for protocols like Store, where a response can include 100*WakuMessageSize bytes, which would definitely be more than 1MB even with the default value.

If we really have to deal with attacks, then maybe there should be some sort of length negotiation as part of the metadata protocol, or some other protocol, when nodes connect to each other. This I feel is overkill at this point and can be taken up if we notice any vulnerabilities.

chaitanyaprem (Contributor, Author) commented Apr 19, 2024

> However, even here the node could likely not configure message size to anything larger than the underlying libp2p limit (of 1MB).

Also, if this is a hard limit set by libp2p... then maybe we shouldn't let users configure anything more than 1MB-WakuHeaders as maxMsgSize.
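The suggestion above amounts to a simple clamp on the configured value. A minimal sketch, assuming a hypothetical 64 KiB header allowance; the constant names and values here are illustrative, not actual nwaku constants:

```python
LIBP2P_MAX_FRAME = 1024 * 1024     # the 1 MB libp2p limit referenced in the review
WAKU_HEADER_ALLOWANCE = 64 * 1024  # hypothetical allowance for Waku envelope overhead

def clamp_max_msg_size(configured: int) -> int:
    """Cap a user-configured max message size so framed messages still fit the transport."""
    return min(configured, LIBP2P_MAX_FRAME - WAKU_HEADER_ALLOWANCE)
```

Under this sketch, a user asking for a 2 MiB max message size would silently be limited to just under 1 MiB; an alternative design would reject the configuration outright at startup.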

Contributor commented:

> I am wondering what kind of attacks

Mmm. Indeed, not sure of the possible impact. It seems to me that if someone were to pack the length prefix (or just send a very large amount of bytes), we'd have some potential to DoS the stream. This is a sanity check on the buffer length, and if it is not reasonable the stream reader exits: https://github.com/vacp2p/nim-libp2p/blob/b30b2656d52ee304bd56f8f8bbf59ab82b658a36/libp2p/stream/lpstream.nim#L238-L239

Contributor commented:

> We could keep it to 1MB, but then it is going to cause issues for protocols like Store where response can include 100*WakuMessageSize bytes which would definitely be more than 1MB even with default value.

Which is why, for Store, the max RPC size is MaxPageSize*MaxWakuMsgSize + safety_margin. Agree, though; this is complicated by configurable sizes.
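To make the size problem concrete, plugging in the figures mentioned in this thread (150 KiB default message size, 100 messages per page) shows why a flat 1 MB cap cannot work for Store. The safety margin below is an illustrative value, not the real nwaku constant:

```python
KIB = 1024
default_max_waku_msg_size = 150 * KIB  # default message size from the PR description
max_page_size = 100                    # messages per store page, per the discussion
safety_margin = 64 * KIB               # illustrative only; the real margin may differ

# MaxPageSize * MaxWakuMsgSize + safety_margin, as described in the comment above
max_store_rpc_size = max_page_size * default_max_waku_msg_size + safety_margin
assert max_store_rpc_size > 1024 * KIB  # ~14.7 MiB, far above the 1 MB libp2p limit
```

And this is with the default message size only; once the max message size is configurable, the Store RPC bound grows with it, which is the complication noted above.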

Contributor commented:

> This I feel is overkill at this point and can be taken up if we notice any vulnerabilities.

Agree.

gabrielmer (Contributor) left a comment

Thanks so much! 😍 Added a couple small questions/comments

tests/waku_lightpush/test_client.nim (review thread resolved)
tests/waku_filter_v2/test_waku_client.nim (review thread resolved)
gabrielmer (Contributor) left a comment

Thanks so much!

@chaitanyaprem chaitanyaprem requested a review from jm-clius April 19, 2024 07:38
jm-clius (Contributor) left a comment

Not completely sure of possible impact/vulnerabilities, but I think this is a reasonable solution for now.

@chaitanyaprem chaitanyaprem merged commit e61e4ff into master Apr 20, 2024
13 of 15 checks passed
@chaitanyaprem chaitanyaprem deleted the fix/max-msg-size branch April 20, 2024 03:40