I tried to investigate some memory leaks in https://github.com/roburio/tlstunnel, and put some printf debugging into mirage-tcpip (see this commit: hannesm@f358e78) which prints some statistics about the buffers that are present. Here are the numbers:
There are two TCP/IP stacks in the unikernel: one public (the top line), with listening sockets, and one private, from which client connections are initiated to the backend services.
Now, the observation is: there are quite a few channels open (460 / 215), which is more than I'd have expected. Furthermore, there is quite a bit of data in the TX queue and in URX (awaiting an application call to `read`).
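One way data can sit in URX: a handler that stops calling `read` but keeps the flow open. Here is a minimal sketch of that failure mode, written against `Mirage_flow.S` (the `Leaky` module and handler are illustrative, not tlstunnel code):

```ocaml
(* Illustrative failure mode, not actual tlstunnel code: a handler that
   reads once and then parks forever. The flow is never closed, so the
   connection stays in the channels table, and whatever the peer keeps
   sending waits in the receive buffer for a read that never comes. *)
open Lwt.Infix

module Leaky (F : Mirage_flow.S) = struct
  let handler (flow : F.flow) : unit Lwt.t =
    F.read flow >>= fun _first ->
    fst (Lwt.wait ()) (* a promise that never resolves: no read, no close *)
end
```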
I may be doing something completely stupid in tlstunnel, but I thought that the two reader/writer functions should be fine. That's even from a branch including a more aggressive `close`: robur-coop/tlstunnel@0b985ab.
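For reference, this is the reader/writer shape I'd expect to be leak-free, written against `Mirage_flow.S`; the `Proxy` functor and the `Lwt.pick` teardown are a sketch of the aggressive-close idea, not tlstunnel's actual code:

```ocaml
open Lwt.Infix

module Proxy (A : Mirage_flow.S) (B : Mirage_flow.S) = struct
  (* copy from [a] to [b] until EOF or an error ends this direction *)
  let rec a_to_b a b =
    A.read a >>= function
    | Ok (`Data buf) ->
      (B.write b buf >>= function
        | Ok () -> a_to_b a b
        | Error _ -> Lwt.return_unit)
    | Ok `Eof | Error _ -> Lwt.return_unit

  let rec b_to_a a b =
    B.read b >>= function
    | Ok (`Data buf) ->
      (A.write a buf >>= function
        | Ok () -> b_to_a a b
        | Error _ -> Lwt.return_unit)
    | Ok `Eof | Error _ -> Lwt.return_unit

  (* [Lwt.pick] cancels the other direction as soon as one side finishes
     (the aggressive close); [Lwt.finalize] then closes both flows, on
     success, error, and cancellation alike. *)
  let run a b =
    Lwt.finalize
      (fun () -> Lwt.pick [ a_to_b a b; b_to_a a b ])
      (fun () -> A.close a >>= fun () -> B.close b)
end
```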
I'm wondering what is up with TCP: are there some `Lwt_mvar.t` that are stuck? Is a `close` not cleaning up everything? Is there a simple example of how to properly write a TCP server and client so that they always free up all their resources after a while? I also had the impression that the window size is transmitted via TCP, but not respected in `User_buffer` (i.e. when fresh segments come in, they are not discarded, but instead are acked).
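Absent such an example, my best guess at the minimal safe shape is the following sketch, assuming any `Mirage_flow.S` implementation (e.g. a stack's `TCPV4` module); the `Echo` functor and the echo logic are placeholders:

```ocaml
(* A handler that cannot leak its flow: every path, including exceptions
   and Lwt cancellation, ends up running [F.close] via the finalizer. *)
open Lwt.Infix

module Echo (F : Mirage_flow.S) = struct
  let rec echo flow =
    F.read flow >>= function
    | Ok (`Data buf) ->
      (F.write flow buf >>= function
        | Ok () -> echo flow
        | Error _ -> Lwt.return_unit)
    | Ok `Eof | Error _ -> Lwt.return_unit

  let handler flow =
    Lwt.finalize (fun () -> echo flow) (fun () -> F.close flow)
end
```

The point is only that `close` lives in a finalizer, so no early return, error branch, or cancelled thread can skip it; the same wrapper would apply on the client side to flows obtained from `create_connection`.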
I haven't singled out specific TCP flows that cause such stuck states, but perhaps someone else has some further ideas (though, looking at #486 and #470, it doesn't look like there's much activity here).