Add support for Docker 18.09.3. #6347
Hi @tsuna. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @justinsb @andrewsykim @chrisz100
/ok-to-test
I reworked my change a bit by instead adding a more generic provision for specifying extra packages to install at the same time as Docker. I noticed that RHEL/CentOS has needed this for a while to deploy SELinux policies anyway, so I figured it made more sense to unify this existing need with the new one (the small difference is that the SELinux policies could be installed on their own, which is why the existing kludge in the code was working fine, whereas now, to upgrade Docker with the newer 3-package distribution, everything needs to happen in a single dpkg transaction). Let me know what you think.
I just rebased my change, can someone take a look please?
Ping, pretty please :)
@@ -162,7 +169,7 @@ func (e *Package) findDpkg(c *fi.Context) (*Package, error) {
 		installed = true
 		installedVersion = version
 		healthy = fi.Bool(true)
-	case "iF":
+	case "iF", "iU":
What’s this iU?
This happened in an earlier version of this change that attempted to install the packages one by one (which, ultimately, doesn't work). iF means the package is selected to be installed and is currently in a half-configured state, while iU means it's selected to be installed and is currently merely unpacked. It might not be needed anymore, since this iteration of the code installs all 3 packages in a single transaction, but having the code crap out if it sees the package in iU state, instead of continuing to try/wait until the install is complete, seems like a bad idea. So I figured I'd leave this here, as it makes sense to handle iU the same way iF is handled.
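For context, dpkg encodes each package's state as a two-letter abbreviation (as printed by `dpkg-query -W -f '${db:Status-Abbrev}'`): the first letter is the desired selection state, the second the current state. A minimal Go sketch of the kind of mapping discussed above; `packageState` is a hypothetical helper for illustration, not the actual kops function:

```go
package main

import (
	"fmt"
	"strings"
)

// packageState is a hypothetical helper (not the actual kops code) that
// interprets dpkg's status abbreviation: "i" in the first position means
// the package is selected for install; the second position is the current
// state ("i" = installed, "F" = half-configured, "U" = unpacked).
func packageState(abbrev string) (installed, healthy bool) {
	s := strings.TrimSpace(abbrev)
	if len(s) < 2 {
		return false, false
	}
	switch s[:2] {
	case "ii": // selected for install, fully installed and configured
		return true, true
	case "iF", "iU": // install in progress: half-configured or merely unpacked
		return true, false
	default: // e.g. "un" (not installed) or "rc" (removed, config files remain)
		return false, false
	}
}

func main() {
	fmt.Println(packageState("ii ")) // fully installed and healthy
	fmt.Println(packageState("iU ")) // mid-transaction: installed but not yet healthy
}
```

Treating iU like iF (installed but not yet healthy) lets the caller keep waiting for the install to complete rather than bailing out mid-transaction.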
Just rebased my change one more time, to drop a commit that had become redundant since commit 24222ff merged 8 days ago.
/retest
/retest (it was the same infrastructure failure I mentioned above)
Can we bump this to 18.09.2 to address CVE-2019-5736 at the same time?
I rebased again and adjusted the change for Docker 18.09.3 that came out today. I didn't understand the e2e failure that hit previously. It says
Starting from Docker 18.09.0, the Docker distribution has been split into 3 packages: the Docker daemon, the Docker CLI, and containerd. This adds a twist to how to upgrade Docker from the base image, as the daemon and CLI packages must be installed at the same time, otherwise dpkg/rpm will refuse to upgrade (the new CLI is incompatible with the old package, and the daemon can't be installed without first installing the CLI and the new containerd, so the upgrade MUST happen in a single transaction). This code change thus adds the possibility to specify additional packages to install in the same dpkg/yum transaction, such as the Docker CLI and containerd in nodeup, and the ability to apply the multi-package upgrade atomically with dpkg/rpm. We also use this new mechanism for the SELinux policy on RHEL/CentOS.
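The "single transaction" requirement boils down to handing every package file to one package-manager invocation, so the circular dependency between docker-ce and docker-ce-cli is resolved inside the transaction. A minimal Go sketch of that idea; the function name and package filenames are illustrative, not the actual nodeup code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildInstallCmd is a hypothetical sketch: instead of running dpkg once per
// package (which fails on the docker-ce / docker-ce-cli dependency cycle),
// the main package and all extra packages are passed to a single `dpkg -i`
// invocation, letting dpkg order them within one transaction.
func buildInstallCmd(mainPkg string, extraPkgs []string) *exec.Cmd {
	args := append([]string{"-i", mainPkg}, extraPkgs...)
	return exec.Command("dpkg", args...)
}

func main() {
	cmd := buildInstallCmd("docker-ce_18.09.3.deb", []string{
		"docker-ce-cli_18.09.3.deb",
		"containerd.io_1.2.4.deb", // illustrative version number
	})
	fmt.Println(cmd.Args)
}
```

The same shape applies on RHEL/CentOS with a single rpm/yum invocation covering all packages, including the SELinux policy package.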
Just rebased again. Please, pretty please, help me push this through the finish line this week. 🙏
Pretty please, can I haz LGTM & approvalz? 😢
/lgtm
Thanks @chrisz100! Almost in time for me to check this off my list for our weekly standup in 10min. How can I get the approval now? If you know who I need to bribe, let me know 😄
any idea when this will be approved?
I don't know, I don't even understand the difference between LGTM and approval, but if anyone knows who I need to bribe, let me know.
@mikesplain @justinsb can one of you guys please approve this PR? I'll pay you 100 KubeCoin on the day of the ICO. Just need to finish my business plan for a blockchain-backed k8s control plane. etcd is for losers. TIA.
@tsuna lol'd irl
@tsuna it’s the two-step review process: lgtm is technically "does the code look good", and can be given by anyone in the Kubernetes GitHub org; approver is a higher level that's supposed to check whether it fits our roadmap, i.e. should we approve it right now, or is later a better time. As it's targeted for 1.12 now, that should be rather soon!
Sorry this took so long @tsuna - it hit in the middle of the test account problems followed by the docker CVE, and we had to prioritize the CVE. I think the package version changed in one place, but I'm just going to send another PR rather than risk forcing another rebase on you. Plus I want those kubecoins :-)
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: justinsb, tsuna. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Follow up to kubernetes#6347 - add a test for some of the names based on some heuristics, and fix some of the problems that popped up.
@tsuna Under cluster spec:
kops version: 1.11.1 and then I ran:
Even then the docker-version remains 17.3.2. Any help would be appreciated!
|
@shahbhavik01 this new version will not be in kops until 1.12 is released. |
You can look at #6448 (edit: more specifically, this comment) if you wanna try a version you build yourself from |
@tsuna / @shahbhavik01 I tried update: all good, |
Hi Marek, the default version used by kops remains unchanged. I'm not sure why, TBH, but you need to edit your cluster manifest to pick the Docker version you want.
Starting from Docker 18.09.0, the Docker distribution has been split into 3 packages: the Docker daemon, the Docker CLI, and containerd. This adds a twist to how to upgrade Docker from the base image, as the daemon and CLI packages must be installed at the same time, otherwise dpkg/rpm will refuse to upgrade (the new CLI is incompatible with the old package, and the daemon can't be installed without first installing the CLI and the new containerd, so the upgrade MUST happen in a single transaction). This code change thus adds the possibility to specify sources for the CLI and containerd in nodeup, and the ability to apply the multi-package upgrade atomically with dpkg/rpm.
Note: I tested this on one of my kops clusters in AWS.
This fixes #5747