[1.10] docker ps slow down #19328
Comments
👍 It's true, I also noticed that. And I'm definitely +1 on a lock-free docker ps.
But how can you be sure the ps output is valid?
Yeah, we'd at least need a read lock on ps.
We cannot be sure about output validity anyway with the current lock. It only guarantees that a container doesn't change while we're reading its information; there are no further guarantees. Imagine that we have 1000 containers. There is no guarantee that container 1 is not deleted while we read the information for container 1000 and return the results for docker ps. I'm assigning this to myself because I already have other changes to docker ps that make this operation faster, and I can add this to it.
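To illustrate that point, here is a minimal sketch (hypothetical types, not the daemon's actual code) of per-container locking during a list: each container is locked only while its own fields are copied, so by the time container 1000 is read, container 1 may already have been removed or changed.

```go
package sketch

import "sync"

type Container struct {
	mu    sync.Mutex
	ID    string
	State string
}

type Summary struct {
	ID    string
	State string
}

// listContainers copies each container's fields under its own lock.
// The result is not a consistent snapshot of the whole set: it only
// guarantees that each individual entry was internally consistent at
// the moment it was read.
func listContainers(all []*Container) []Summary {
	out := make([]Summary, 0, len(all))
	for _, c := range all {
		c.mu.Lock()
		out = append(out, Summary{ID: c.ID, State: c.State})
		c.mu.Unlock()
	}
	return out
}
```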
@LK4D4 you're right that we probably can't be 100% lock-free if we want consistent data. The problem I see at the moment is that the container lock is used both to control concurrency of operations and to keep data consistent. I think the two should be split.
@calavera Will this be addressed in 1.10?
All I can say is that the call to
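A rough sketch of what "splitting the two" could look like, using hypothetical types rather than Docker's actual structures: one mutex serializes long-running operations such as start/stop, while a separate RWMutex guards only the fields that ps needs, so a listing takes a read lock and never waits behind a slow volume-driver call.

```go
package sketch

import "sync"

type Container struct {
	opMu    sync.Mutex   // serializes start/stop/remove
	stateMu sync.RWMutex // guards only the published fields below
	ID      string
	Running bool
}

func (c *Container) Start(slowSetup func() error) error {
	c.opMu.Lock()
	defer c.opMu.Unlock()

	// Slow work (volume drivers, network setup, ...) happens without
	// holding stateMu, so readers are never blocked by it.
	if err := slowSetup(); err != nil {
		return err
	}

	c.stateMu.Lock()
	c.Running = true
	c.stateMu.Unlock()
	return nil
}

// Snapshot is what a ps-style listing would call: it only takes the
// read lock on the state fields.
func (c *Container) Snapshot() (id string, running bool) {
	c.stateMu.RLock()
	defer c.stateMu.RUnlock()
	return c.ID, c.Running
}
```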
Just wanted to point out that this issue applied to
Adding this to the milestone so that we don't lose sight of it.
I'm not sure if we should patch this for 1.10. I understand your frustration, but we should follow the rules 😄 Fixing this might involve something more than removing those 4 lines. I'll open a PR shortly.
That PR is not strictly necessary, but we'll need to do something at the store level, like locking the IDs in the store rather than locking the whole container.
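A hypothetical sketch of that store-level idea (names and types are illustrative, not Docker's API): the store's own lock guards only the ID index, so a listing can copy the set of lightweight entries without ever touching individual container locks.

```go
package sketch

import "sync"

type entry struct {
	ID    string
	Image string
}

type Store struct {
	mu      sync.RWMutex
	entries map[string]entry
}

func NewStore() *Store {
	return &Store{entries: make(map[string]entry)}
}

func (s *Store) Add(e entry) {
	s.mu.Lock()
	s.entries[e.ID] = e
	s.mu.Unlock()
}

func (s *Store) Delete(id string) {
	s.mu.Lock()
	delete(s.entries, id)
	s.mu.Unlock()
}

// List copies the index under the store's read lock only; slow per-container
// operations (start, volume-driver calls, ...) never hold this lock.
func (s *Store) List() []entry {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make([]entry, 0, len(s.entries))
	for _, e := range s.entries {
		out = append(out, e)
	}
	return out
}
```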
I'm removing this from the milestone. As I said, fixing this will carry consequences that we should not take on at this point of the release.
@cpuguy83 is working on a lock-free container structure.
Closing as outdated.
This applies to 1.9 also, but I found this on master. TL;DR: obtaining the container.Lock() at https://github.com/docker/docker/blob/master/daemon/list.go#L115 is very problematic.

Even after all the fixes in Docker 1.9 for docker ps performance, I sometimes see it go to crap. I've largely tracked this down to that one container lock acquire. The problem is that the container lock is also acquired during container start, and while starting a container you can do a lot of things that are very slow or may hang for a while. The one I'm seeing the most right now is calling volume drivers, as in the stack trace below. So the problem is that it takes just one container hanging on start for all docker ps calls to hang. I've seen this get so bad that enough goroutines backed up that the daemon was using 700 MB of memory just for blocked goroutines.

It seems like the best fix is to never acquire this lock at all. Let's have a lock-free docker ps!
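One possible direction for such a lock-free ps, sketched with hypothetical types (not what the daemon does today): containers publish an immutable copy of their state whenever it changes, and ps only reads those published copies via atomic.Value, so it never contends with a start that is stuck in a volume-driver call.

```go
package sketch

import "sync/atomic"

type Snapshot struct {
	ID    string
	State string
}

type Container struct {
	published atomic.Value // holds the latest Snapshot
}

// publish is called by the container whenever its state changes, while it
// holds its own internal lock; readers never need that lock.
func (c *Container) publish(s Snapshot) {
	c.published.Store(s)
}

// ps reads only the published copies and can never block behind a slow
// container operation.
func ps(all []*Container) []Snapshot {
	out := make([]Snapshot, 0, len(all))
	for _, c := range all {
		if v := c.published.Load(); v != nil {
			out = append(out, v.(Snapshot))
		}
	}
	return out
}
```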