
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 32768: 13396197 - discarded #95

Protonull opened this issue Nov 1, 2023

@Protonull
Contributor

This has become an issue during testing with the 4.6GB database that Gjum gave me to stress-test MapSync's memory usage. At some point during the connection, the client will throw this error. I'm not entirely certain, but I think it's thrown while receiving an oversized packet. Here's the full exception:

[07:30:54] [nioEventLoopGroup-2-1/WARN] (DefaultChannelPipeline) An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
 io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 32768: 13396197 - discarded
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:503) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:489) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.exceededFrameLength(LengthFieldBasedFrameDecoder.java:376) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:419) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:332) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.68.Final.jar%2330!/:4.1.68.Final]
	at java.lang.Thread.run(Thread.java:833) [?:?]

This is primarily due to #71, which lowered the maximum frame size from 16,777,216 (2^24) down to 32,768 (2^15). Keep in mind, though, that the exception is referencing a frame length of 13,396,197, so even if we reverted that PR, the packet would only just fit within that ridiculously large frame size. If the database covered even more of the world, or if the world were much larger (like Civclassics size), or if MapSync were used on other servers like 2b2t, you'd run the risk of exceeding the frame size even then.
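
For reference, here's a minimal sketch of how that limit is typically wired up in a Netty pipeline. MapSync's actual initializer and the decoder's field-layout arguments here are assumptions, but the failing cap is the first constructor argument:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;

// Sketch only; MapSync's real pipeline setup likely differs.
public class FrameLimitInitializer extends ChannelInitializer<SocketChannel> {
    // #71 lowered this from 1 << 24 (16,777,216) to 1 << 15 (32,768).
    private static final int MAX_FRAME_LENGTH = 1 << 15;

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
            // Throws TooLongFrameException when the 4-byte length prefix
            // exceeds MAX_FRAME_LENGTH (here: 13,396,197 > 32,768).
            .addLast(new LengthFieldBasedFrameDecoder(MAX_FRAME_LENGTH, 0, 4, 0, 4))
            .addLast(new LengthFieldPrepender(4));
    }
}
```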

The short-term fix is to revert #71, but this is fundamentally an issue with the protocol, which dumps every region's timestamps into a single packet, followed, in all likelihood, by an even larger chunk-timestamps packet, since there are far more chunks and chunk coordinates are ints, not shorts. It may be worth going for a staggered approach, perhaps preventing the client from sending chunk data for a particular region until it has synced that region's timestamps with the server.
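
One hypothetical shape for that: batch the timestamps into fixed-size packets instead of one giant one. The names below (RegionTimestamp, PacketSink, RegionTimestampsPacket) are illustrative, not MapSync's actual API:

```java
import java.util.List;

// Hypothetical sketch of splitting the timestamp sync into fixed-size
// batches so no single packet can exceed the frame cap.
final class TimestampBatcher {
    // Assuming ~12 bytes per entry (2 shorts for the region position plus
    // 1 long for the timestamp), 2,000 entries is ~24,000 bytes, which
    // stays under the 32,768-byte frame limit with room for overhead.
    private static final int BATCH_SIZE = 2_000;

    static void sendInBatches(List<RegionTimestamp> all, PacketSink sink) {
        for (int from = 0; from < all.size(); from += BATCH_SIZE) {
            int to = Math.min(from + BATCH_SIZE, all.size());
            sink.sendPacket(new RegionTimestampsPacket(all.subList(from, to)));
        }
    }

    interface PacketSink { void sendPacket(Object packet); }
    record RegionTimestamp(short x, short z, long timestamp) {}
    record RegionTimestampsPacket(List<RegionTimestamp> entries) {}
}
```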

@Gjum
Member

Gjum commented Nov 1, 2023

Are we sure this is not a decoding error? Can we log the packet id for failed packets like these?
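
One best-effort way to capture that, sketched under the assumption of a 4-byte length prefix followed by a 1-byte packet id (not verified against MapSync's wire format):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.TooLongFrameException;

// Sketch: peek the probable packet id before the superclass can discard
// the frame. Relies on the decoder failing fast (the default), so the
// reader index still sits at the frame boundary when the exception fires.
final class PacketIdLoggingFrameDecoder extends LengthFieldBasedFrameDecoder {
    PacketIdLoggingFrameDecoder(int maxFrameLength) {
        super(maxFrameLength, 0, 4, 0, 4);
    }

    @Override
    protected Object decode(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
        // Peek the byte right after the 4-byte length prefix without consuming it.
        int packetId = in.readableBytes() >= 5
                ? in.getUnsignedByte(in.readerIndex() + 4)
                : -1;
        try {
            return super.decode(ctx, in);
        } catch (TooLongFrameException e) {
            System.err.println("Discarded oversized frame, probable packet id: " + packetId);
            throw e;
        }
    }
}
```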

@Huskydog9988
Contributor

It could very well be a decoding error, but I think a protocol update to make it less wasteful wouldn't hurt. Sending the timestamp for every chunk seems like an exception waiting to happen.
