Can't run/start any docker container after update #597
Comments
I can confirm this behaviour for
I am not able to start any container after the update. Also a downgrade to 18.09.1 does not work :-( |
Workaround which is working for me:
Then it seems to work for me on Ubuntu 16.04 LTS... |
That sounds great, can you test it for Ubuntu 18.04? |
Do you have a custom |
No, only a proxy configuration in /etc/systemd/system/docker.service.d/http-proxy.conf. |
It's definitely not a general problem with the Ubuntu 16.04 or 18.04 packages (I've been able to run the latest packages on those versions without problems). No direct clue yet what could cause this (at a glance, the error looks to originate from runc). |
I'm also seeing this behavior on latest Debian Buster (kernel 4.19.16-1) after updating docker-ce and containerd.io. Rolling back the updates also solved my problem. |
Same here: brand new instance of RHEL 7.6 in AWS. Installed Docker CE and its requirements and cannot start any container. Rolling back to 18.09.1 fixes it. |
I can confirm what @ghandim suggested (worked for Ubuntu 16.04 LTS.) 👍 |
Could someone post the output of
For RHEL 7.5 and 7.6 I'm wondering if this relates to opencontainers/runc#1988 (which looks to be a kernel bug in the RHEL kernels). |
The error
was mentioned in moby/moby#34776 ("Can't specify memory limit in docker run for docker version 17.07.0-ce, build 8784753") and in two runc tickets: opencontainers/runc#1547 and opencontainers/runc#1914. It's known that the CVE fix requires more memory when starting a container (see opencontainers/runc#1980). The top comment in this ticket (#597 (comment)) shows that just starting a
However, both runc issues, opencontainers/runc#1547 and
The last link ("Tight container limits may cause "read init-p: connection reset by peer"") describes that
ping @justincormack @kolyshkin @cyphar PTAL |
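For anyone trying to confirm the tight-limits theory: the exact commands aren't preserved in this thread, so the following is only a sketch based on the runc reports linked above, and the 4m limit is an arbitrary illustration, not a value from the original report.
# on affected hosts, a deliberately tight memory limit reportedly fails with
# "read init-p: connection reset by peer"
docker run --rm -m 4m busybox true
# the same container without a limit should start normally
docker run --rm busybox true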
It doesn't. The mitigation is all done in C code and only uses |
~# runc --version on
|
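As a side note (not from the original reply): a quick, hedged way to check which runc build the containerd.io package ships is something like:
runc --version          # prints the runc version, commit, and spec version
dpkg -l containerd.io   # Debian/Ubuntu; on RHEL/CentOS use: rpm -q containerd.io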
Thanks; so runc
@ghandim if, in your case, you upgrade just the containerd.io package (which bundles runc):
# downgrade the docker engine and cli to 18.09.1
apt-get -y --allow-downgrades install \
docker-ce=5:18.09.1~3-0~ubuntu-xenial \
docker-ce-cli=5:18.09.1~3-0~ubuntu-xenial
# but make sure the containerd.io package is at the latest version
apt-get -y install containerd.io=1.2.2-3 |
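A hypothetical follow-up check (not part of the original instructions) to confirm which versions ended up installed after the downgrade:
# confirm installed/candidate versions on Debian/Ubuntu
apt-cache policy docker-ce docker-ce-cli containerd.io
# confirm what the daemon itself reports
docker version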
After upgrading only containerd.io I cannot start any container anymore :-( |
Trying to reproduce on a DigitalOcean machine, which looks to have exactly the same kernel, but I don't see the problem 😞;
No clue at this moment what the difference would be. For those on CentOS: #595 (comment) mentions that the problem occurred on an outdated CentOS kernel, but did not reproduce on |
@ghandim Your workaround doesn't fix this problem on Ubuntu 18.04 :/ |
@ChaosRambo did downgrading also downgrade the |
I had the same issue/error message with the following setup:
Docker version: Server: Docker Engine - Community
Kernel:
On a week-old vServer with Ubuntu 18.04.2 I "fixed" it by downgrading docker-ce. My now working setup:
containerd github.com/containerd/containerd 1.2.2 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
Docker version 18.06.1-ce, build e68fc7a
I would also be curious about a real fix, or what even went wrong... Thanks and cheers |
update your kernel >=3.10.0.927 |
@leeningli as posted above, this is not a helpful suggestion.
|
I am so sorry. |
As far as I can tell |
Thanks for pointing it out - maybe that's where my problem arises? I'm not sure where this kernel comes from, it came with my vServer when I installed Ubuntu.
How else could I find out what kernel is running? |
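For reference (not from the original comment), the usual ways to see which kernel is actually running are:
uname -r            # kernel release, e.g. 4.15.0-...
cat /proc/version   # full version string, including build date and compiler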
Just out of curiosity: are you using a Strato vServer? Because I also get this error with the exact same kernel (4.15.0) on my vServer there... Edit: OK, so it's their problem. I just sent them a mail ("Probleme mit Kernel 4.15.0 und Docker", i.e. "Problems with kernel 4.15.0 and Docker") and urged them to solve the problem with the kernel they use... |
Yes I do use Strato. So maybe it's something in the configuration there? I would have thought they install a standard Ubuntu but apparently they modify some things. How can we get to the root of this? |
I just got an answer from them that a needed kernel module is missing on their vServers and that there are no plans to add it in the foreseeable future. The original answer is as follows:
Edit: Just got another answer from Strato, because I ran docker there before I did a fresh server install. So they are working on it, but have no ETA [they use Virtuozzo for their vServers]:
Sorry if this doesn't help others with the problem, but for all Strato customers this could be a warning... |
@flownex could you run the |
First try:
@thaJeztah Any hint on how to get you what you need? |
OK, I have provided details to my ex-colleagues at VZ (although from my perspective Strato should have contacted VZ support); I vaguely remember that, as of some time ago, docker inside a container was a supported configuration. |
@flownex @ogrady thanks for the info! Preliminary reply from the VZ team: the latest VZ7 kernel is tested and works with Docker CE 18.09.2. Perhaps Strato just needs to update their kernel; it should be trivial if they use readykernel. I will update this as soon as I have more info. |
@flownex @ogrady @goddib @ChaosRambo looks like you guys are all using a container under Virtuozzo or OpenVZ kernel:
Now, this appears to be the latest full build of the VZ7 Virtuozzo kernel, version 3.10.0-862.11.6.vz7.64.7 (the actual version can be guessed from the compilation date reported; the in-container version is spoofed for userspace to work). For this kernel, there is a readykernel update v72.1, released 15 Feb. This update, as well as any later one (the list is at https://readykernel.com/?distro=Virtuozzo-7&kernel=3.10.0-862.11.6.vz7.64.7), should work fine with Docker CE 18.09.2. As far as I understand there's no way to see which kernel is running from inside a container, but you can definitely ask your hosting service provider to upgrade to the latest Virtuozzo readykernel, pointing to this comment if needed. |
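A hedged addition, not from the original comment: from inside the container you can at least check whether you are running under OpenVZ/Virtuozzo at all, even if the real host kernel version stays hidden:
systemd-detect-virt   # prints "openvz" inside an OpenVZ/Virtuozzo container
ls /proc/vz 2>/dev/null && echo "OpenVZ/Virtuozzo environment detected"
cat /proc/version     # the reported build date can hint at the real host kernel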
If you can do something like |
There's some confusion between Virtuozzo containers and Docker containers here. What I was talking about is a Virtuozzo host (which provides "OS" containers a la LXC/LXD but with a custom kernel, so slightly more VM-like, yet they are still containers), and dockerd running inside such an (OS) container. The users reporting kernel 4.15 above are renting such OS containers from a hoster, and they don't have access to a (Virtuozzo or OpenVZ) host system. With that, there might be a
Hope that clears things up |
I faced the same error on CentOS 7.6 (basic install, yum updated).
|
Got the same problem on Ubuntu 18.04.
|
In case it helps anyone else... the following packages now work for me on Red Hat Enterprise Linux Server release 7.6 (Maipo) with 3.10.0-957.5.1.el7.x86_64:
and none of those work for me on CentOS Linux release 7.6.1810 (Core) with the same kernel as above (3.10.0-957.5.1.el7.x86_64). |
@jblaine could you also check the version of container-selinux? |
@thaJeztah 2.77-1 on RHEL 7 and 2.74-1 on CentOS 7 |
I think that may be the problem; see containers/container-selinux#63. We contributed a fix upstream (containers/container-selinux#64), but that may not have found its way into a new version of the package yet. |
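If you want to check this yourself on RHEL/CentOS, a minimal sketch (assuming the fixed build is eventually published to your distro's extras repo):
rpm -q container-selinux            # e.g. 2.77-1 (working) vs 2.74-1 (problematic)
sudo yum update container-selinux   # pulls a newer build once it is available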
I had the same error (
I had to downgrade manually following https://docs.docker.com/cs-engine/1.13/ |
@pblayo what kernel version are you running on? (what does
Docker (and containers in general) uses features provided by the kernel; in the case of the patch releases 18.06.3 and 18.09.3, there were no actual changes in Docker itself, but an updated version of the runc runtime was included to address a critical vulnerability (CVE-2019-5736) that allowed container escapes. The fix for that vulnerability requires kernel features that are not available in older kernel versions, so if you're using the original 3.13 kernel, you need to update to a later Ubuntu kernel through the LTS Enablement stack: https://wiki.ubuntu.com/Kernel/LTSEnablementStack (a sketch of these commands follows after this comment).
I highly recommend not running that version; it's a very old version of Docker that is no longer maintained and may have unpatched vulnerabilities (actually, I'm not sure why those pages are still listed in the documentation; I'll open a pull request to have them removed). If you cannot upgrade your kernel, the previous version of Docker 18.06 (18.06.2) should work (but won't have the updated version of runc, so is not patched against CVE-2019-5736). Note that Ubuntu 14.04 reaches EOL next month (April 2019), so it's worth considering upgrading to the current LTS version (18.04). |
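A sketch of that kernel upgrade path on Ubuntu 14.04 via the LTS Enablement stack; the package name follows the Ubuntu wiki page linked above:
# install the Xenial HWE kernel on Ubuntu 14.04 (trusty), then reboot into it
sudo apt-get update
sudo apt-get install -y --install-recommends linux-generic-lts-xenial
sudo reboot
# afterwards, verify the running kernel is a 4.4.x HWE kernel
uname -r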
@thaJeztah : thanks a lot, downgrading Docker to
|
@pblayo you're welcome! At least that would get you going, but of course it's a workaround, because you won't have the fix for the CVE 😕 The kernel version you mentioned: is that the version on which the problem occurred?
Trying to reproduce: I started an Ubuntu 14.04 machine on DigitalOcean, upgraded the kernel, and rebooted (never hurts);
(same version as you reported 👍) I installed Docker;
Unfortunately, I'm not able to reproduce the issue |
@thaJeztah : no sorry |
@thaJeztah : I'm not sure I understand the next step: do I have to wait for an updated package of Docker, or of the kernel? (or both?) |
Sorry for the confusion; if you upgraded your kernel to 4.x, you should be able to install version |
OK thanks so my final working configuration is:
|
Can confirm that with
the bug is fixed on my Ubuntu environment. |
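For anyone landing here later, a hedged sketch of moving to the fixed packages on Ubuntu (exact version strings depend on your release and may differ):
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
docker run --rm hello-world   # should start normally again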
Thanks! I'll tentatively close this issue, but feel free to comment if you're still running into this after installing the versions mentioned above #597 (comment) |
Same
|
I met the same problem when I updated to docker 18.09.5, and fixed it by restarting my PC
|
@ghandim @thaJeztah I just ran into this issue on the following versions:
As with everyone else, a reboot fixes the issue. The server was up for 52 days at the time of the issue if that helps at all. |
Expected behavior
Any Docker container should start normally
Actual behavior
No Docker container starts after the system update
Steps to reproduce the behavior
Any of these commands create the same error:
LOG:
This is an existing container (MongoDB)
LOG:
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.)