no sftp prompt on client side despite successful connection #341
Comments
It works if I mount the following config as

The main changes are
I suspect I have the same problem, and removing `ChrootDirectory` from the configuration fixes it (but obviously disables chroot). What happens is that when trying to log in, the sshd process in the container consumes 100% CPU for about 150 seconds, after which the login actually succeeds. My guess is that the FileZilla client has a timeout of 20 seconds, which means it will never succeed in that case. I have reproduced this with a fresh virtual install of Fedora 37 inside GNOME Boxes, following the official Docker CE installation instructions and then trying the "Simplest docker run example",
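presumably the one from the atmoz/sftp README:

```sh
docker run -p 22:22 -d atmoz/sftp foo:pass:::upload
```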
followed by
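presumably an interactive SFTP login along these lines (the exact host/user are assumptions matching the example above):

```sh
sftp foo@localhost
```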
and then entering the password. Interestingly, the exact same procedure with Debian 11 does not have the same problem, using the same Docker version.
So there seems to be something Fedora-related that triggers it in this case. I have also tried
Try limiting the number of open files, e.g.: `--ulimit nofile=16000:16000`. I hit the same issue, and this fixes the CPU load and the client receives the prompt.

Background: when a process forks, it tries to close all inherited file descriptors. There is a library function which handles this, but it loops from the nofile maximum down to 1. In the past, the maximum value for nofile was at most 65535, but recent distributions/kernels use much higher values (try `ulimit -H -n`), and it takes time for the code to loop over the whole range.
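A minimal C sketch of the pattern described above, assuming a naive "close everything after fork" loop (an illustration of the mechanism, not the actual sshd or library source): the cost scales with the nofile hard limit, not with the number of descriptors actually open.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_max == RLIM_INFINITY) {
        fprintf(stderr, "nofile hard limit is unlimited; skipping demo\n");
        return 0;
    }
    printf("hard nofile limit: %llu\n", (unsigned long long)rl.rlim_max);

    /* The problematic pattern: one close() syscall per *possible*
       descriptor, regardless of how many are actually open. With a
       limit around 2^30 this loop alone runs for minutes, even though
       nearly every call fails immediately with EBADF. Descriptors
       0-2 (stdin/stdout/stderr) are kept. */
    for (rlim_t fd = rl.rlim_max; fd > 3; fd--)
        close((int)(fd - 1));

    return 0;
}
```

With a hard limit around 10^9, as reported below for the Docker daemon, a loop like this can plausibly account for the ~150-second stall.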
I can confirm that your suggestion works for me. Using this command,
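presumably the earlier run example with the ulimit flag added (a hypothetical reconstruction):

```sh
docker run --ulimit nofile=16000:16000 -p 22:22 -d atmoz/sftp foo:pass:::upload
```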
the 100% CPU delays are gone. On my system I can increase nofile up to 10,000,000 before I start to see a noticeable delay (that value gives a few seconds of delay), so I'm not sure what nofile value Docker is using by default on my system. In any case, while this workaround fixes the problem, it seems like there should be a proper fix. @jnovak-netsystemcz Do you know what the proper fix would be?
I think there is no proper fix. It is how the OS works... A process has many limits. Your OS has the max-open-files limit set to 524288, and Docker inherits it. I expect it is configured under /etc/security/... on your OS. If you change the limits, you must restart Docker so it inherits the new values.
Actually, 524288 works all right, but the Docker daemon process has a nofile limit of 1073741816 (just over a billion). I'm not sure where this is configured, but I found it by looking at /proc/PID/limits. So if application developers expect to be able to loop over this range, then we have a problem, and I guess you are right: the limit must be lowered to fix the problem. I did some googling and didn't find much, but this seems to be a similar issue.
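To check this on a given host, a small sketch (assuming dockerd is running and `pidof` is available):

```sh
# The Docker daemon's soft/hard "Max open files" limits; containers
# inherit these unless overridden via --ulimit or compose "ulimits".
grep 'Max open files' /proc/"$(pidof dockerd)"/limits
```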
Can confirm that lowering the file descriptor limit has fixed this for me on the latest Docker daemon and containerd versions on Arch Linux. I retained the default subsystem and chroot sftp config:
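presumably the stock internal-sftp/chroot stanza the atmoz/sftp image ships in its sshd_config (quoted here as an assumption):

```
Subsystem sftp internal-sftp
ForceCommand internal-sftp
ChrootDirectory %h
```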
and added ulimits to my docker compose:
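A minimal sketch of such a compose file, assuming the stock atmoz/sftp image (service name, port mapping, and user spec are illustrative):

```yaml
services:
  sftp:
    image: atmoz/sftp
    ports:
      - "2222:22"
    command: foo:pass:1001
    ulimits:
      nofile:        # cap the limit sshd sees inside the container
        soft: 16000
        hard: 16000
```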
@jnovak-netsystemcz's and @Lustyn's solutions work for me; thanks to all of you for your input on this! Maybe this could be added to the README?
The docker compose solution worked, thank you @Lustyn. (Fedora 40)
I'm trying to connect to the container via sftp, but it just gets stuck and does nothing.

Directory structure:
docker-compose.yml:
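A hypothetical minimal docker-compose.yml for this kind of setup (paths, names, and ports are illustrative assumptions, not the original file):

```yaml
services:
  sftp:
    image: atmoz/sftp
    ports:
      - "22:22"
    volumes:
      - ./upload:/home/foo/upload
    command: foo:pass:1001
```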
Server log:
Client log (only the last few lines, after the password prompt):
Now the terminal just sits there and does nothing; input has no effect either.
Trying to connect with FileZilla via sftp://localhost logs a successful authentication, but no directory listing. After 20 seconds, it aborts the connection and reconnects, with the same result as before.