Accessing remote server via DOCKER_HOST eats all memory #3528
Can you provide more details? Otherwise this may be difficult to look into.
My local machine uses Docker Desktop, but the issue also exists when I run the same command in GitLab CI. Also yes, using `DOCKER_HOST=... docker info`; here is a video of the issue:
```yaml
services:
  mongo:
    image: mongo
  postgres:
    image: postgres
  redis:
    image: redis
  nginx:
    image: nginx
  node:
    image: node
```
Also, I noticed that using …

Memory usage by …
Hm, right, yes. So it would be attaching to each container in the compose stack to stream the output; I can imagine that causing more overhead, especially with SSH here. Wondering if we can make it reuse connections or something along those lines. /cc @AkihiroSuda @ndeloof perhaps you have ideas?
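For reference, one client-side mitigation for the many parallel SSH sessions (a suggestion on my part, not something Compose configures for you) is OpenSSH connection multiplexing, which lets those sessions share one TCP/SSH connection instead of each performing a full handshake. The host alias below is a placeholder:

```
# ~/.ssh/config — "docker-remote" is a hypothetical host alias
Host docker-remote
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m
```

Each `docker`/`compose` invocation still spawns its own `ssh` process, but the control master reuses the established connection, which should cut handshake and per-connection overhead; it does not by itself bound the number of sessions.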
Maybe we should re-revert this (with some fix)?
I will work on this
I don’t think this issue is related to
Client
Server
I can speak to this; the way Docker works over an SSH remote appears to be:
In summary: while I myself am not too familiar with …, I'm currently workshopping a somewhat better solution at the moment. I haven't opened a PR yet, pending further testing, potential cross-platform issues, and error handling, but also the implementation on the Docker CLI side.

Hopefully with this architecture there's less memory overhead, as there would hypothetically be just the one process. [1] I'm not too certain if this is actually needed, but it is a nice feature. I've already pushed code on my fork to take an accepted …
I'm using a remote SSH Docker context on macOS running Docker Desktop to deploy stacks to my server; here's the output of …
I left my computer on overnight, and when I checked my server's metrics I noticed sshd was using almost 6 GB of memory. There were hundreds of these SSH sessions and `dial-stdio` processes.
Does anyone have some insight on this? My system is just constantly creating these sessions for no reason, even when I'm not using the Docker context. There's also a fairly recent forum post about this: Docker Continuously Making Unnecessary SSH Connections to Remote Servers.

EDIT: Exiting Docker Desktop closes all of the SSH sessions and exits all the dial-stdio processes on the remote server. However, if you leave Docker running, it just continuously creates those sessions, eventually leading to a situation where it will use all of the server's memory.
Accessing a remote server via SSH and running a command eats all the memory. Running the same command on the server itself causes no problem. For instance:
I have a Docker Compose file on my local machine; if I run the command below, it eats all the memory and the server shuts down.
But if I copy the same compose file to the server and run `docker compose up` there, the command only uses ~50 MB of memory.