Support arm64 #166
Looking at https://github.com/kubernetes-sigs/kind/blob/master/pkg/cluster/context.go#L196-L206, where the code errors out, there's a TODO for logging from @BenTheElder. |
Looks like the underlying reason is that kindest/node is not multi-architecture.
|
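A minimal sketch of checking whether a node image tag is a manifest list (i.e. multi-arch), assuming a Docker CLI with the manifest subcommand available; the tag shown is only illustrative:

```sh
# Inspect the manifest for a node image tag. A multi-arch tag is a manifest list
# with one entry per platform under "manifests"; a single-arch tag shows a plain
# image manifest instead. (Tag is illustrative; on older Docker versions the
# manifest subcommand must be enabled via experimental CLI features.)
docker manifest inspect kindest/node:v1.13.2
```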
Yes, now we only have |
/priority important-longterm |
kind create cluster fails on arm64 during the "fixing mounts" step
@dims hacked up a working version of this: https://paste.fedoraproject.org/paste/gdlF9fqXeSADK-aPN-sEbw/raw 🎉 |
We're yet to see details around this announcement, but it ought to simplify the process of building ARM images significantly: https://techcrunch.com/2019/04/24/docker-partners-with-arm/ |
Yes! Also, I somehow forgot to update this issue: we have arm64 support, just no published images yet (that will need some more thinking...). If you build images yourself, kind should work on arm64 today 😅 |
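For reference, a minimal sketch of that self-build route, assuming kind is installed on the arm64 host and a Kubernetes source checkout is available; the checkout path and image tag are assumptions, and exact flags vary between kind versions:

```sh
# Build a node image from a local Kubernetes checkout (run on the arm64 host).
# The checkout path and image tag here are placeholders, not fixed values.
kind build node-image \
  --kube-root="$HOME/go/src/k8s.io/kubernetes" \
  --image=kindest/node:arm64-local

# Create a cluster from the locally built image.
kind create cluster --image=kindest/node:arm64-local
```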
Somewhat; we then either have to publish different images for arm or sort out image manifests and what that pipeline looks like. Ideally we don't always require building both, but do when we publish pre-built images. There are some other options where we stop needing RUN outside of the base image that would make this all simpler as well.
On Wed, Apr 24, 2019 at 10:15 PM, Peter Benjamin wrote:
Once we can cross-build images for ARM from x86 with native docker
tooling, would that solve the image publishing problem?
|
My fervent hope is that the new Docker tooling will make multi-architecture manifests much easier to produce. The current setup for most projects is just a bit complex. |
So FWIW, I did figure out how to work manifest-tool, I think; this definitely seems feasible this year, if a bit clunky. The trickiest part now is that we need to write some tooling to cross compile the kind image (or coordinate with kicking off a build on Packet, or ... 🤔). I think I'd like to get this into GCB-based publishing with a cross compile so we can start automating publishing of multi-arch node images. I punted on looking further into that while working on the breaking image changes in #461, but those are in now :-) |
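Roughly, the manifest-list step looks like the sketch below, whether done with manifest-tool or the docker CLI. This is only an illustration with hypothetical per-architecture tags, not kind's actual publishing pipeline:

```sh
# Combine per-architecture images (already pushed) into one manifest list,
# so a single tag resolves to the right image for the pulling host.
# The per-arch tags are hypothetical placeholders.
docker manifest create kindest/node:v1.15.0 \
  kindest/node-amd64:v1.15.0 \
  kindest/node-arm64:v1.15.0
docker manifest annotate kindest/node:v1.15.0 kindest/node-arm64:v1.15.0 \
  --os linux --arch arm64
docker manifest push kindest/node:v1.15.0
```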
@BenTheElder we can use CircleCI or Travis to do that. I see that another project under kubernetes-sigs has the integrations enabled: https://github.com/kubernetes-sigs/kubeadm-dind-cluster. I have experience building pipelines on those; it would be relatively easy to create a pipeline there to automatically publish the images per PR or per tag. |
We want them to be pushed based on Kubernetes repo changes instead.
The tricky part will not be setting up Travis etc. (we have Prow, GCB, ...), but rather that we need the credential to not be available to presubmits, we need the triggering to be right, and ideally we need to build even for ppc64le, which I doubt Travis etc. run natively, so we need to cross compile.
Kubernetes uses some tricks for doing this with binfmt_misc and qemu that work very portably.
|
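The binfmt_misc + QEMU approach mentioned above can be sketched roughly as follows; using the multiarch/qemu-user-static helper image to register the handlers is an assumption here, not necessarily what the Kubernetes build actually does:

```sh
# Register QEMU user-mode emulators as binfmt_misc handlers on an amd64 host,
# so foreign-architecture binaries (arm64, ppc64le, ...) run transparently.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Sanity check: run an arm64 image on the amd64 host.
# (--platform may require a newer Docker or experimental mode.)
docker run --rm --platform=linux/arm64 arm64v8/alpine uname -m   # expect: aarch64
```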
Hmm, I think I have to read more about Prow: https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md |
Minor update: I've ensured the ip-masq-agent pushes multi-arch (manifest) images upstream before we adopted it, and our own tiny networking daemon is cross compiled and pushes multi-arch images. These were simpler than Kubernetes node images, but at least we have some more multi-arch samples, and we continue to nominally work on arm64. I think the next step is to support building node images from Kubernetes release tarballs, so we can consume Kubernetes's upstream cross-compilation output and save time building & publishing. Then we start building manifest-list images from those. |
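Cross compiling a small, mostly static Go daemon per architecture is the easy part, which is why those images were simpler than the node image; a minimal sketch with hypothetical package and output names:

```sh
# Cross compile one Go binary per target architecture from an amd64 machine;
# the package path and output layout are hypothetical.
for arch in amd64 arm64 ppc64le; do
  CGO_ENABLED=0 GOOS=linux GOARCH="$arch" \
    go build -o "bin/linux-$arch/daemon" ./cmd/daemon
done
```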
@vielmetti the default node image is not multi-arch. please see discussion above. |
Thanks @BenTheElder - per the discussion above I found https://hub.docker.com/r/rossgeorgiev/kind-node-arm64 and will continue with that for my testing. |
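For anyone following along, using such a pre-built arm64 node image just means passing it to --image; the tag below is a placeholder, so check the Docker Hub page for tags that actually exist:

```sh
# Create a cluster from a community-built arm64 node image.
# The tag is a placeholder; pick one published on the Docker Hub repository.
kind create cluster --image=rossgeorgiev/kind-node-arm64:v1.16.3
```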
This is still intended to ship in the next release. I've put up a PR for the current iteration: #2176 |
kind @ HEAD should "just work" on arm64, but we need verification. Then we're ~prepared to release. |
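One way to verify, assuming Go 1.16+ is installed; note that at this point the default node image might not yet be multi-arch, so a locally built or pre-release image may still be needed via --image:

```sh
# Install kind from the development branch (Go 1.16+).
go install sigs.k8s.io/kind@main   # use @master if that is the default branch name

# Create a cluster on the arm64 host; pass --image if the default
# node image for this pre-release build is not available for arm64.
kind create cluster
```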
Hi all. It should, but it's not working for me. I'm getting |
@lasdolphin can you share more about the host OS etc? That error isn't related to architecture. EDIT: we've seen that issue on AMD64 but don't have enough info yet, the thread for that is #2236 (comment) EDIT2: @lasdolphin please add those details to #2236 instead of this thread, the fact that kubelet ran and crashed suggests that the arm64 aspect is working as intended, as far as we could test before hitting the other, unrelated issue. |
We have a report of arm64 working here: https://twitter.com/JimEwald/status/1393411027329445890 (Docker Desktop 3.3.3, macOS 11.3.1, Mac mini with M1 Apple Silicon). |
Docker version 20.10.6, build 370c289 |
Hello🙂 I have a similar environment as @lasdolphin, but the current HEAD looks to work well in my environment. I tried creating Pod, Service, Ingress without issues. Also, I deployed Contour following https://kind.sigs.k8s.io/docs/user/ingress and can access the Service via Ingress from localhost. Environment:
❯ kind version
kind v0.11.0-alpha go1.16.3 darwin/arm64
❯ k version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T21:10:45Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-14T10:11:15Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/arm64"}
❯ docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
compose: Docker Compose (Docker Inc., 2.0.0-beta.1)
scan: Docker Scan (Docker Inc., v0.8.0)
Server:
Containers: 11
Running: 2
Paused: 0
Stopped: 9
Images: 25
Server Version: 20.10.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.10.25-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: aarch64
CPUs: 4
Total Memory: 1.942GiB
Name: docker-desktop
ID: 5Y2M:UAVW:DGCD:DHOB:QFRH:DHKY:Y4FL:OZXY:4IRG:RSNQ:S6Z7:STMB
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
❯ docker version
Client:
Cloud integration: 1.0.14
Version: 20.10.6
API version: 1.41
Go version: go1.16.3
Git commit: 370c289
Built: Fri Apr 9 22:46:57 2021
OS/Arch: darwin/arm64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.6
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 8728dd2
Built: Fri Apr 9 22:44:13 2021
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: 1.4.4
GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc:
Version: 1.0.0-rc93
GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
docker-init:
Version: 0.19.0
GitCommit: de40ad0
OS (e.g. from /etc/os-release):
MacBook Air (M1, 2020)
macOS Big Sur 11.3.1 (20E241)
|
I have kind version ‘0.10.0’. That might explain the different behaviour.
Best regards,
Andrei
|
Posting here too. Confirmed to work on Ubuntu Server 20.04 aarch64, in Parallels Desktop virtualisation on MacBook Air with M1:
|
Success on an Equinix Metal c1.large.arm (96-core ThunderX), the original machine that I opened this issue on. Built with
|
Excellent, thank you all! Yes, it's not necessarily expected to work with kind v0.10; in particular, Kubernetes 1.21 had a breaking change, so the new test image is meant to be run with a newer kind binary. We'll be releasing this soon, probably sometime Monday; this is the last thing we really wanted to make sure we got into v0.11 😅 |
I had a few other things going on today myself, but the key reason we've not released today is kind-ci/containerd-nightlies#19: runc rc93 has a regression that we're currently shipping @ HEAD; we thought we'd already moved past it by upgrading containerd, but we didn't actually pick up the fix due to the bug above. That should be resolved after re-running the build with the linked patch and then updating the kind images. After that we should be pretty clear to release. #2236 is concerning for non-systemd hosts, but there's no actionable root cause yet and we expect most container hosts to be running systemd, so there's no plan to block on that. |
runc rc94 fixes are in. @amwat is cutting the release now. |
Device under test is a Packet c1.large.arm 96-core arm64 machine running Ubuntu 18.04.