Memory issues in TCP #488

Open
hannesm opened this issue Jun 10, 2022 · 0 comments

hannesm (Member) commented Jun 10, 2022

I tried to investigate some memory leaks in https://github.com/roburio/tlstunnel, and added some printf debugging to mirage-tcpip (see this commit hannesm@f358e78) that prints statistics about the buffers currently held. Here are the numbers:

tcp.pcb-stats removing pcb from connection tables: [channels=460 (RX 20 TX 12993900 URX 5735 UTX 54732 TOTAL 13054387) listens=0 (RX 0 TX 0 URX 0 UTX 0 TOTAL 0) connects=0]
tcp.pcb-stats process-synack: [channels=215 (RX 0 TX 0 URX 45842733 UTX 0 TOTAL 45842733) listens=0 (RX 0 TX 0 URX 0 UTX 0 TOTAL 0) connects=1]
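
In rough terms, the instrumentation walks the connection tables and sums the bytes sitting in the per-connection buffers, so stuck data shows up as growing totals. A heavily simplified sketch of that idea (the identifiers below are illustrative, not the actual mirage-tcpip code):

```ocaml
(* Illustrative only: sum the bytes buffered per connection in some table
   of live channels, and print a one-line summary. The real commit counts
   RX/TX/URX/UTX separately per pcb; this only shows the shape of it. *)
let report_stats (channels : (int, Cstruct.t list) Hashtbl.t) =
  let total =
    Hashtbl.fold
      (fun _id bufs acc -> acc + Cstruct.lenv bufs)
      channels 0
  in
  Printf.printf "channels=%d total buffered bytes=%d\n"
    (Hashtbl.length channels) total
```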

There are two TCP/IP stacks in the unikernel (one public, which is the top line, with listening sockets; one private, from which client connections are initiated to the backend services).

Now, the observation is: there are quite a few channels open (460 / 215) -- more than I would have expected. Furthermore, there is quite a lot of data in the TX queue and in URX (data awaiting an application read).

I may be doing something completely stupid in tlstunnel, but I thought that the two reader/writer functions should be fine. That is even from a branch that closes more aggressively, robur-coop/tlstunnel@0b985ab.

I'm wondering what is going on in TCP: are there some Lwt_mvar.t that are stuck? Does a close not clean up everything? Is there a simple example of how to write a TCP server and client so that they always free up all their resources after a while (something like the sketch below is what I have in mind)? I also had the impression that the window size is eventually transmitted via TCP, but not respected in User_buffer (i.e. when fresh segments come in, they are not discarded, but instead are acked).
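
For reference, a minimal sketch of the pattern I'd expect to be sufficient, written against Mirage_flow.S (this is not the actual tlstunnel code, just the shape of the loop):

```ocaml
(* Minimal sketch, not the actual tlstunnel code: an echo-style handler over
   any Mirage_flow.S implementation (e.g. a TCP flow of a stack). Every exit
   path (EOF, read error, write error) ends in close, which is what I'd
   expect to be enough to release the pcb and its buffers. *)
module Handler (F : Mirage_flow.S) = struct
  open Lwt.Infix

  let rec serve flow =
    F.read flow >>= function
    | Ok `Eof | Error _ ->
      F.close flow
    | Ok (`Data buf) ->
      (F.write flow buf >>= function
       | Ok () -> serve flow
       | Error _ -> F.close flow)
end
```

On the client side I'd expect the same pattern after create_connection, again with close on every error path.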

I haven't singled out the TCP flows that cause such stuck states, but maybe someone else has further ideas (though, looking at #486 and #470, it doesn't look like there's much activity here).
