
docker daemon version out of date #63

Closed
bacongobbler opened this issue Nov 30, 2017 · 56 comments

@bacongobbler

Provisioning an AKS cluster with v1.8.2 nodes shows that the kubelets are running Docker v1.12.6. It'd be great for them to be running 17.06.0-ce, which is minikube's underlying Docker version as of v0.24.0.

@amanohar

amanohar commented Dec 1, 2017

@bacongobbler thanks for reporting this. I have opened an issue in ACS Engine: Azure/acs-engine#1865

@bacongobbler
Author

closing, let's follow up in that ticket. :)

@bacongobbler
Author

I’m re-opening this issue for two reasons:

  1. the patch in acs-engine only makes this configurable but still points to 1.12
  2. AKS still provisions with docker 1.12

I think we can call this closed once either an AKS cluster can be deployed with newer releases of docker or we document the reasons against that. :)

@alexquintero

In my case I want to run my CI build agents inside of my Kubernetes cluster.

A more current version of Docker is required in order to do multi-stage builds in a Dockerfile, which is the recommended way to build a .NET Core Docker container.
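
For context, a minimal sketch of what such a multi-stage build looks like (the image tags and project name here are only placeholders); the COPY --from step is what requires Docker 17.05 or newer on the daemon doing the build:

$ cat > Dockerfile <<'EOF'
# build stage: compile and publish the app
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# runtime stage: copy only the published output into a smaller base image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
EOF
$ docker build -t myapp:ci .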

@gertjvr

gertjvr commented Jan 27, 2018

+1 Any idea when this will be available since the issue has already been resolved in acs-engine?

@paulopez78

+1 Any update on the status of that issue?

@jalberto

The current deploy with 1.9.6 is pointing to Docker 1.13, but 17.05 is still required for multi-stage builds.

This was reported 5 months ago, and there has not been even a simple status update from the AKS team.

@jalberto

@slack @jackfrancis a status update would be appreciated

@jalberto

jalberto commented Apr 12, 2018

Instructions for a manual workaround are here: Azure/acs-engine#2589 (comment)

But be careful, as it will probably put your cluster out of SLA (not a big deal in the current AKS state, where you probably need to create a new cluster each time you need to upgrade or change something).

@sozercan
Member

Just FYI:

The validated docker versions are the same as for v1.9: 1.11.2 to 1.13.1 and 17.03.x (ref)

from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies

@slack
Contributor

slack commented Apr 12, 2018

Sorry for letting this issue age. We are locked to Docker 1.13 (upgraded from 1.12.6). We will not be jumping to 17.03 or 17.05 in AKS. This does disallow cluster-side building of images, but AKS can still run the resulting artifact.

If you need cluster-side multi-stage builds, you will need to use acs-engine directly.

@bacongobbler
Author

closing as answered/wontfix

@aminebizid

Very strange to keep such an old Docker version.

@jackfrancis
Member

Just to clarify: strange isn't quite the right way to interpret the situation. Rather: we agree that it is not ideal that Docker and MS do not have a mutually agreeable distribution partnership for Docker CE, but until that changes we are unable to include Docker CE w/ AKS clusters.

@lkt82

lkt82 commented Jun 16, 2018

Are you working on getting this partnership then :) ?

@jackfrancis
Member

@lkt82 Yes

@damienwebdev

@jackfrancis This just bit me really hard. What can we as the community do to help this get achieved?

@aminebizid

We are stuck here at Engie :(
DevOps: no K8s without Docker 17
Infra team: no K8s without AKS

@weinong
Contributor

weinong commented Jul 27, 2018

@aminebizid you can upvote https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks to help us prioritize. thanks!

@marcel-dempers

Is there any movement on this issue?
We have multiple businesses on Azure running into this issue, and some are starting to evaluate migration options to GKE for their CI/CD because AKS does not support multi-stage builds.

@damienwebdev

damienwebdev commented Oct 24, 2018

@marcel-dempers there are some other issues where the AKS team commented that they're switching the underlying docker engine to moby/moby rather than Docker CE/EE to handle this licensing issue. If I'm not mistaken, work on this is "started". I don't think they've set a release timeline yet, though.

@jackfrancis
Member

We are at the last mile w/ deprecating docker-engine in favor of moby:

Azure/acs-engine#3896

We want to do a phased rollout, especially to AKS, so bear with us as we set up VHD (pre-baked image) pipelines and introduce this into AKS regions gradually. ETA for availability in acs-engine is the end of this week; for the initial AKS rollout, the week after next (early November).

@jackfrancis
Member

jackfrancis commented Nov 9, 2018

This will be available behind a feature flag rolling out next week:

$ az feature register --name MobyImage --namespace Microsoft.ContainerService

After your subscription has been registered for that feature, you'll have to:

$ az provider register -n Microsoft.ContainerService

To get it onto your existing AKS cluster, you'll have to upgrade or scale in/out after the subscription that owns the cluster is registered for the above feature.

For folks on 1.11, v1.11.4 is also going out w/ next week's release, which is an indicator that the release has landed in your region. (And it hopefully offers a convenient version to upgrade onto for folks already on 1.11.)
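
Putting those steps together, a rough end-to-end sketch (the resource group, cluster name, and target version are placeholders; substitute your own):

$ az feature register --name MobyImage --namespace Microsoft.ContainerService
$ # wait until the feature reports "Registered"
$ az feature show --name MobyImage --namespace Microsoft.ContainerService --query properties.state -o tsv
$ az provider register -n Microsoft.ContainerService
$ # upgrade (or scale) so that nodes are re-imaged with the moby runtime
$ az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.11.4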

@DBarthe

DBarthe commented Nov 14, 2018

Hi @jackfrancis,

I don't understand: would it be possible to upgrade the docker daemon inside an existing AKS cluster? And when?

The currently installed Docker 1.13.1 does not allow ARG before FROM; we need something above ~17.09.

Would it break everything if upgraded manually?

Thanks in advance

@jackfrancis
Member

@DBarthe If the subscription that manages that cluster is registered for the feature, then yes, a cluster upgrade will get you vm nodes that have moby as docker runtime. What region is your cluster in?

@DinoSourcesRex

@jackfrancis Hey there. How long does registration for the features take? Or is this the region roll out you were referring to and I just misunderstood? I've run az provider show -n Microsoft.ContainerService but I would be lying if I told you that I understood the output.

My AKS cluster is in West Europe.

@jackfrancis
Member

@DinoSourcesRex could you share the output?

The release hasn't landed in westeurope yet. ETA: within the next 24 hours

@DinoSourcesRex

DinoSourcesRex commented Nov 14, 2018

@jackfrancis

Output from az provider show -n Microsoft.ContainerService (GUIDs redacted):

{
  "authorization": {
    "applicationId": "GUID",
    "managedByRoleDefinitionId": "GUID",
    "roleDefinitionId": "GUID"
  },
  "id": "/subscriptions/GUID/providers/Microsoft.ContainerService",
  "namespace": "Microsoft.ContainerService",
  "registrationState": "Registered",
  "resourceTypes": [
    {
      "aliases": null,
      "apiVersions": [
        "2017-07-01",
        "2017-01-31",
        "2016-09-30",
        "2016-03-30"
      ],
      "capabilities": "None",
      "locations": [
        "Japan East",
        "Central US",
        "East US 2",
        "Japan West",
        "East Asia",
        "South Central US",
        "Australia East",
        "Australia Southeast",
        "Brazil South",
        "Southeast Asia",
        "West US",
        "North Central US",
        "West Europe",
        "North Europe",
        "East US",
        "UK West",
        "UK South",
        "West Central US",
        "West US 2",
        "South India",
        "Central India",
        "West India",
        "Canada East",
        "Canada Central",
        "Korea South",
        "Korea Central"
      ],
      "properties": null,
      "resourceType": "containerServices"
    },
    {
      "aliases": null,
      "apiVersions": [
        "2018-03-31",
        "2017-08-31"
      ],
      "capabilities": "None",
      "locations": [
        "East US",
        "West Europe",
        "Central US",
        "Canada Central",
        "Canada East",
        "UK South",
        "West US",
        "West US 2",
        "Australia East",
        "North Europe",
        "Japan East",
        "East US 2",
        "Southeast Asia",
        "UK West",
        "South India"
      ],
      "properties": null,
      "resourceType": "managedClusters"
    },
    {
      "aliases": null,
      "apiVersions": [
        "2017-08-31",
        "2017-01-31",
        "2016-09-30",
        "2016-03-30",
        "2015-11-01-preview"
      ],
      "locations": [],
      "properties": null,
      "resourceType": "locations"
    },
    {
      "aliases": null,
      "apiVersions": [
        "2018-10-31",
        "2017-08-31",
        "2016-03-30"
      ],
      "locations": [
        "East US",
        "West Europe",
        "Central US",
        "UK West",
        "West Central US",
        "West US 2",
        "South India",
        "Central India",
        "West India",
        "Canada East",
        "Canada Central",
        "Korea South",
        "Korea Central",
        "UK South",
        "Australia East",
        "Australia Southeast",
        "North Europe",
        "Japan East",
        "East US 2",
        "Southeast Asia"
      ],
      "properties": null,
      "resourceType": "locations/operationresults"
    },
    {
      "aliases": null,
      "apiVersions": [
        "2017-07-01",
        "2017-01-31",
        "2016-09-30",
        "2016-03-30"
      ],
      "locations": [
        "Japan East",
        "Central US",
        "East US 2",
        "Japan West",
        "East Asia",
        "South Central US",
        "Australia East",
        "Australia Southeast",
        "Brazil South",
        "Southeast Asia",
        "West US",
        "North Central US",
        "West Europe",
        "North Europe",
        "East US",
        "UK West",
        "UK South",
        "West Central US",
        "West US 2",
        "South India",
        "Central India",
        "West India",
        "Canada East",
        "Canada Central",
        "Korea South",
        "Korea Central"
      ],
      "properties": null,
      "resourceType": "locations/operations"
    },
    {
      "aliases": null,
      "apiVersions": [
        "2018-10-31",
        "2018-03-31",
        "2017-08-31",
        "2017-07-01",
        "2017-01-31",
        "2016-09-30",
        "2016-03-30",
        "2015-11-01-preview"
      ],
      "locations": [],
      "properties": null,
      "resourceType": "operations"
    },
    {
      "aliases": null,
      "apiVersions": [
        "2017-09-30"
      ],
      "locations": [
        "East US",
        "West Europe",
        "Central US",
        "Canada East",
        "Canada Central",
        "UK South",
        "UK West",
        "West US",
        "West US 2",
        "Australia East",
        "North Europe",
        "Japan East",
        "East US 2",
        "Southeast Asia",
        "South India",
        "East Asia"
      ],
      "properties": null,
      "resourceType": "locations/orchestrators"
    }
  ]
}

@jackfrancis
Member

Actually, sorry @DinoSourcesRex, try the az feature list --namespace Microsoft.ContainerService command. Look for the entry with the "name": "Microsoft.ContainerService/MobyImage" key/val pair, and especially the value of "state" for that entry.
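
For example, one way to pull out just that entry with a JMESPath query (the output format flag is optional):

$ az feature list --namespace Microsoft.ContainerService \
    --query "[?name=='Microsoft.ContainerService/MobyImage'].{name:name, state:properties.state}" -o table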

@DinoSourcesRex

DinoSourcesRex commented Nov 14, 2018

@jackfrancis I'm marked as "registered" on that one, however when I run az provider register -n Microsoft.ContainerService (as you linked above) I still get

"Registering is still on-going. You can monitor using 'az provider show -n Microsoft.ContainerService'"

as the response.

    "id": "/subscriptions/GUID/providers/Microsoft.Features/providers/Microsoft.ContainerService/features/MobyImage",
    "name": "Microsoft.ContainerService/MobyImage",
    "properties": {
      "state": "Registered"
    },
    "type": "Microsoft.Features/providers/features"

@edernucci

@jackfrancis 1.11.4 landed here. Will the upgrade change to moby, or do I still have to enable the feature with the steps provided above?

@DaveAurionix

@DinoSourcesRex I get the same behaviour ("Registering is still on-going") but I can't see 1.11.4 offered as an upgrade option yet (I'm in West Europe). I'm assuming/guessing that these are related, so I'll try again once 1.11.4 is visible.

@DinoSourcesRex

@DaveAurionix Bit of a daft question but how do I check to see if / when 1.11.4 is available? And how do I check what I'm currently running?

@DaveAurionix

DaveAurionix commented Nov 15, 2018

I used …

GUI way: take a look in Azure Portal at your Kubernetes Service resource, click the Upgrade tab on the left and see the current (and possible) versions in the drop-down.

CLI way: do an az login followed by az aks get-upgrades --resource-group your_resource_group_name --name your_cluster_name

Note that in both cases the resource group to check/use is not the auto-generated resource group (starting with MC_) with the VMs, load balancer, etc, instead it's the resource group you created to contain your managed cluster resource.
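
For example, with placeholder names (table output is optional but easier to read):

$ az login
$ az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster -o table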

@edernucci

@DaveAurionix you can pass just the location :-)

edernucci@artorias:~$ az aks get-versions -l eastus
KubernetesVersion    Upgrades
-------------------  ----------------------
1.11.4               None available
1.11.3               1.11.4
1.10.9               1.11.3, 1.11.4
1.10.8               1.10.9, 1.11.3, 1.11.4
1.9.11               1.10.8, 1.10.9
1.9.10               1.9.11, 1.10.8, 1.10.9
1.8.15               1.9.10, 1.9.11
1.8.14               1.8.15, 1.9.10, 1.9.11
1.7.16               1.8.14, 1.8.15
1.7.15               1.7.16, 1.8.14, 1.8.15

@DaveAurionix

Way easier :)

@DinoSourcesRex

DinoSourcesRex commented Nov 15, 2018

Awesome, thanks guys.

@edernucci Did you do anything special to get yours formatted as a table? Mine comes back as json which I can read but the table is a lot nicer to look at!

@poveilleux

@DinoSourcesRex It's -o table or --output table (the long version).

@DinoSourcesRex

Thanks guys, learning a lot here!

@edernucci

@jackfrancis There's something I did not quite understand. When I enable the moby feature for my subscription, will all my clusters use moby instead of Docker after the nodes are recreated?

I ask this because I have three clusters and would like to roll this out only in development, to see how it will behave.

@jackfrancis
Member

@edernucci, the feature flag functionality is per-subscription, so any cluster operations (create/upgrade/scale) using that subscription will be paved w/ vm nodes that have the moby docker runtime installed.

Thanks all for the community spirit here!

@edernucci

edernucci commented Nov 15, 2018

Awesome, thanks guys.

@edernucci Did you do anything special to get yours formatted as a table? Mine comes back as json which I can read but the table is a lot nicer to look at!

@DinoSourcesRex If you like the table view and want it to be the default, you can set it using the az configure command.

@timwebster9

timwebster9 commented Nov 20, 2018

So when using the new MobyImage, how do we tell exactly what version of Docker is installed? I see this:

azureuser@aks-nodepool1-11596463-0:~$ docker --version
Docker version 3.0.1, build 09f5e9d

@laurentgrangeau

Hi there,
I have registered for MobyImage and performed an update of AKS, but I still see Docker 1.13.1. Is there something else to do?

~ az feature list --namespace Microsoft.ContainerService -o table
Name                                                      RegistrationState
--------------------------------------------------------  -------------------
Microsoft.ContainerService/MobyImage                      Registered

➜  ~ kubectl get nodes -o wide                                
NAME                       STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-agentpool-26745425-0   Ready    agent   2h    v1.11.4   10.240.0.5    <none>        Ubuntu 16.04.5 LTS   4.15.0-1030-azure   docker://1.13.1

@jackfrancis
Member

@timwebster9, that docker version correlates with the moby package, so you're 👍

@laurentgrangeau that's not the expected outcome if you were registered before you upgraded. :( Are you able to build a new cluster on the registered sub and report if you still get 1.13.1?
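
If in doubt, a rough way to confirm what's actually on a node (assuming you can SSH to it; the exact package name, e.g. moby-engine, depends on the node image):

$ docker version --format '{{.Server.Version}}'
$ dpkg -l | grep -i moby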

@timja

timja commented Nov 21, 2018

@jackfrancis where can you see how the moby package version correlates to moby?
i.e. if I go to https://github.com/moby/moby/tags, the last version was released over a year ago and it's 17.03.2-ce

@laurentgrangeau

@jackfrancis I just created a new cluster, but I got the same result. Maybe the Moby image is not rolled out yet in North Europe.

➜  .kube az aks list -o table
Name       Location     ResourceGroup    KubernetesVersion    ProvisioningState    Fqdn
---------  -----------  ---------------  -------------------  -------------------  ---------------------------------------------------------
mobyhotei  northeurope  hotei            1.11.4               Succeeded            mobyhotei-hotei-001ba5-b861c86c.hcp.northeurope.azmk8s.io

➜  ~ kubectl get nodes -o wide
NAME                       STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-63134054-0   Ready    agent   4m    v1.11.4   10.240.0.5    <none>        Ubuntu 16.04.5 LTS   4.15.0-1030-azure   docker://1.13.1

@babin93

babin93 commented Nov 22, 2018

After upgrading my AKS (it said I was on version 1.11.4 on the Upgrade page), every time I tried to scale my AKS, I got the following error:

Operation is not allowed while cluster is being upgrading or failed in upgrade

Logging out and logging back in solved the issue.

I hope it can help others if they face this issue.

ghost locked this issue as resolved and limited conversation to collaborators on Aug 12, 2020