
Kernel throttling bug patch in kops node images #8954

Closed
alok87 opened this issue Apr 22, 2020 · 39 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@alok87 (Contributor) commented Apr 22, 2020

What?

The Linux kernel has a bug that causes unnecessary CPU throttling.
KernelBug: https://bugzilla.kernel.org/show_bug.cgi?id=198197
KubernetesIssue: kubernetes/kubernetes#67577

Many of us use the default kops node images and have been experiencing problems due to unnecessary CPU throttling in our pods.

Please confirm whether the latest kops node images include the kernel patch that fixes this CPU throttling. If not, can we bring this patch into these node images, please?

Node Images

kope.io/k8s-1.17-debian-stretch-amd64-hvm-ebs-2020-01-17
kope.io/k8s-1.16-debian-stretch-amd64-hvm-ebs-2020-01-17
kope.io/k8s-1.15-debian-stretch-amd64-hvm-ebs-2020-01-17

cc @Nuru @justinsb

@alok87 (Contributor, Author) commented Apr 22, 2020

The issue is present in the kops image kope.io/k8s-1.15-debian-stretch-amd64-hvm-ebs-2019-09-26 with kernel 4.9.0: unnecessary throttling is happening. We need the kernel fix patched into the node images.

admin@ip-10-2-21-63:~$ uname -a
Linux ip-10-2-21-63 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u1 (2019-09-20) x86_64 GNU/Linux

The cfs.go CFS throttling test (run via docker below) was used for the analysis.

Note: all runs are with 1000 iterations.
We set the CFS quota to 20ms and the CFS period to 100ms, so the container can use at most 20% of a CPU during any 100ms period. If it ever goes over 20ms of CPU time, it is throttled for the remainder of that 100ms period.

CPU Quota | CPU Period | Sleep between iterations | Unneeded throttling
throttling disabled | throttling disabled | 1000ms | none (burn_took=5ms always)
20000 (20ms) | 100000 (100ms) | 100ms | sometimes (burn_took=5ms mostly, sometimes 90+ms)
20000 (20ms) | 100000 (100ms) | 1000ms | frequent (burn_took=5ms mostly, but many times 80+ms due to throttling)
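For anyone reproducing this, the throttling counters can also be read straight from the container's cgroup. A minimal sketch, assuming cgroup v1 (as used on these Debian Stretch images) and the 20ms/100ms settings above; exact paths can vary:

# run inside the container (or against its cgroup directory on the host)
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # expect 20000 (20ms)
cat /sys/fs/cgroup/cpu/cpu.cfs_period_us   # expect 100000 (100ms)
cat /sys/fs/cgroup/cpu/cpu.stat            # nr_periods, nr_throttled, throttled_time
# if nr_throttled keeps climbing even though the workload sleeps for most of each
# period, that is the unnecessary throttling described above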

We should not have been throttled at all, given the sleeps between iterations, yet the container is being throttled unnecessarily.

@hakman (Member) commented Apr 22, 2020

@alok87 Is there a fix for this in the Debian Stretch official repo or images?

@alok87 (Contributor, Author) commented Apr 22, 2020

Not sure. How can I find out?

@hakman (Member) commented Apr 22, 2020

You should be able to find out by applying all the updates, including the kernel, and rebooting before testing.
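Something along these lines should do it (a sketch, assuming a stock Debian Stretch node with apt; package names are the standard Debian ones):

sudo apt-get update
sudo apt-get dist-upgrade -y   # pulls in the newest packaged kernel, if one is available
sudo reboot
# after the reboot, confirm which kernel is running before re-testing
uname -r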

BTW, distros with older kernels also have other bugs, quite a few I would say. Another example is this one: awslabs/amazon-eks-ami#357.

I proposed using Debian Buster or Ubuntu 20.04 instead of Debian Stretch in the near future. Not sure what the policy is for upgrading default images for active releases, though.

@alok87 (Contributor, Author) commented Apr 22, 2020

Do Debian Buster or Ubuntu 20.04 have the bug fixed? Should we move to these images? Do the node images need to be custom-built for Kubernetes?

@hakman (Member) commented Apr 22, 2020

Ubuntu 20.04 should have all of these fixed. I am not sure how many of the fixes made it to Debian Buster.
Personally, I use the official Ubuntu and Debian images. The kops ones are just Debian Stretch images with some packages pre-installed for faster startup.

You can get an idea about how the default images are created and what they contain here:
https://github.com/kubernetes-sigs/image-builder/blob/master/images/kube-deploy/imagebuilder/templates/1.15-stretch.yml

@alok87 (Contributor, Author) commented Apr 22, 2020

Thanks @hakman, let me explore these options. I hope to see the kops images upgraded to these, since they have many issues fixed.

@hakman (Member) commented Apr 22, 2020

You're welcome. Ubuntu 20.04 support will come with the next releases, hopefully this week.
If you want to try Debian Buster, you will need to use this: #7381 (comment). It has worked pretty well for many people since last year.

@alok87 (Contributor, Author) commented Apr 22, 2020

You mean Ubuntu 20.04 support in kops, for the Ubuntu 20.04 release?

@alok87 (Contributor, Author) commented Apr 22, 2020

I upgraded the kernel to 4.19 and the throttling is not reproducible. It has the patch, I guess 🥳

admin@ip-172-31-102-98:~$ uname -r
4.19.0-0.bpo.8-amd64
admin@ip-172-31-102-98:~$ uname -a
Linux ip-172-31-102-98 4.19.0-0.bpo.8-amd64 #1 SMP Debian 4.19.98-1~bpo9+1 (2020-03-09) x86_64 GNU/Linux
docker run --rm -it --cpu-quota 20000 --cpu-period 100000 -v $(pwd):$(pwd) -w $(pwd) golang:1.9.2 go run cfs.go -iterations 100 -sleep 1000ms
2020/04/22 11:22:02 [0] burn took 5ms, real time so far: 5ms, cpu time so far: 6ms
2020/04/22 11:22:03 [1] burn took 5ms, real time so far: 1010ms, cpu time so far: 12ms
2020/04/22 11:22:04 [2] burn took 5ms, real time so far: 2015ms, cpu time so far: 20ms
2020/04/22 11:22:05 [3] burn took 5ms, real time so far: 3021ms, cpu time so far: 25ms
2020/04/22 11:22:06 [4] burn took 5ms, real time so far: 4026ms, cpu time so far: 32ms
2020/04/22 11:22:07 [5] burn took 5ms, real time so far: 5031ms, cpu time so far: 37ms
2020/04/22 11:22:08 [6] burn took 5ms, real time so far: 6036ms, cpu time so far: 44ms
2020/04/22 11:22:09 [7] burn took 5ms, real time so far: 7041ms, cpu time so far: 50ms
2020/04/22 11:22:10 [8] burn took 5ms, real time so far: 8046ms, cpu time so far: 56ms
2020/04/22 11:22:11 [9] burn took 5ms, real time so far: 9051ms, cpu time so far: 63ms
2020/04/22 11:22:12 [10] burn took 5ms, real time so far: 10056ms, cpu time so far: 68ms
2020/04/22 11:22:13 [11] burn took 5ms, real time so far: 11062ms, cpu time so far: 75ms
2020/04/22 11:22:14 [12] burn took 5ms, real time so far: 12067ms, cpu time so far: 80ms
2020/04/22 11:22:15 [13] burn took 5ms, real time so far: 13072ms, cpu time so far: 87ms
2020/04/22 11:22:16 [14] burn took 5ms, real time so far: 14077ms, cpu time so far: 93ms
2020/04/22 11:22:17 [15] burn took 5ms, real time so far: 15082ms, cpu time so far: 99ms
2020/04/22 11:22:18 [16] burn took 5ms, real time so far: 16087ms, cpu time so far: 105ms
2020/04/22 11:22:19 [17] burn took 5ms, real time so far: 17093ms, cpu time so far: 111ms
2020/04/22 11:22:21 [18] burn took 5ms, real time so far: 18098ms, cpu time so far: 118ms
2020/04/22 11:22:22 [19] burn took 5ms, real time so far: 19103ms, cpu time so far: 124ms
2020/04/22 11:22:23 [20] burn took 5ms, real time so far: 20108ms, cpu time so far: 130ms
2020/04/22 11:22:24 [21] burn took 5ms, real time so far: 21113ms, cpu time so far: 136ms
2020/04/22 11:22:25 [22] burn took 5ms, real time so far: 22118ms, cpu time so far: 142ms
2020/04/22 11:22:26 [23] burn took 5ms, real time so far: 23123ms, cpu time so far: 148ms
2020/04/22 11:22:27 [24] burn took 5ms, real time so far: 24128ms, cpu time so far: 155ms
2020/04/22 11:22:28 [25] burn took 5ms, real time so far: 25134ms, cpu time so far: 160ms
2020/04/22 11:22:29 [26] burn took 5ms, real time so far: 26139ms, cpu time so far: 167ms
2020/04/22 11:22:30 [27] burn took 5ms, real time so far: 27144ms, cpu time so far: 173ms
2020/04/22 11:22:31 [28] burn took 5ms, real time so far: 28149ms, cpu time so far: 178ms
2020/04/22 11:22:32 [29] burn took 5ms, real time so far: 29154ms, cpu time so far: 184ms
2020/04/22 11:22:33 [30] burn took 5ms, real time so far: 30159ms, cpu time so far: 191ms
2020/04/22 11:22:34 [31] burn took 5ms, real time so far: 31165ms, cpu time so far: 197ms
2020/04/22 11:22:35 [32] burn took 5ms, real time so far: 32170ms, cpu time so far: 203ms
2020/04/22 11:22:36 [33] burn took 5ms, real time so far: 33175ms, cpu time so far: 209ms
2020/04/22 11:22:37 [34] burn took 5ms, real time so far: 34180ms, cpu time so far: 215ms
2020/04/22 11:22:38 [35] burn took 5ms, real time so far: 35185ms, cpu time so far: 221ms
2020/04/22 11:22:39 [36] burn took 5ms, real time so far: 36190ms, cpu time so far: 227ms
2020/04/22 11:22:40 [37] burn took 5ms, real time so far: 37196ms, cpu time so far: 234ms
2020/04/22 11:22:41 [38] burn took 5ms, real time so far: 38212ms, cpu time so far: 240ms
2020/04/22 11:22:42 [39] burn took 5ms, real time so far: 39217ms, cpu time so far: 246ms
2020/04/22 11:22:43 [40] burn took 5ms, real time so far: 40223ms, cpu time so far: 252ms
2020/04/22 11:22:44 [41] burn took 5ms, real time so far: 41228ms, cpu time so far: 259ms
2020/04/22 11:22:45 [42] burn took 5ms, real time so far: 42233ms, cpu time so far: 264ms
2020/04/22 11:22:46 [43] burn took 5ms, real time so far: 43238ms, cpu time so far: 271ms
2020/04/22 11:22:47 [44] burn took 5ms, real time so far: 44243ms, cpu time so far: 277ms
2020/04/22 11:22:48 [45] burn took 5ms, real time so far: 45248ms, cpu time so far: 283ms
2020/04/22 11:22:49 [46] burn took 5ms, real time so far: 46253ms, cpu time so far: 289ms
2020/04/22 11:22:50 [47] burn took 5ms, real time so far: 47259ms, cpu time so far: 295ms
2020/04/22 11:22:51 [48] burn took 5ms, real time so far: 48264ms, cpu time so far: 302ms
2020/04/22 11:22:52 [49] burn took 5ms, real time so far: 49269ms, cpu time so far: 308ms
2020/04/22 11:22:53 [50] burn took 5ms, real time so far: 50274ms, cpu time so far: 315ms
2020/04/22 11:22:54 [51] burn took 5ms, real time so far: 51279ms, cpu time so far: 320ms
2020/04/22 11:22:55 [52] burn took 5ms, real time so far: 52284ms, cpu time so far: 327ms
2020/04/22 11:22:56 [53] burn took 5ms, real time so far: 53289ms, cpu time so far: 333ms
2020/04/22 11:22:57 [54] burn took 5ms, real time so far: 54294ms, cpu time so far: 339ms
2020/04/22 11:22:58 [55] burn took 5ms, real time so far: 55300ms, cpu time so far: 344ms
2020/04/22 11:22:59 [56] burn took 6ms, real time so far: 56313ms, cpu time so far: 351ms
2020/04/22 11:23:00 [57] burn took 5ms, real time so far: 57319ms, cpu time so far: 356ms
2020/04/22 11:23:01 [58] burn took 5ms, real time so far: 58324ms, cpu time so far: 362ms
2020/04/22 11:23:02 [59] burn took 5ms, real time so far: 59329ms, cpu time so far: 368ms
2020/04/22 11:23:03 [60] burn took 5ms, real time so far: 60334ms, cpu time so far: 374ms
2020/04/22 11:23:04 [61] burn took 5ms, real time so far: 61339ms, cpu time so far: 380ms
2020/04/22 11:23:05 [62] burn took 5ms, real time so far: 62345ms, cpu time so far: 386ms
2020/04/22 11:23:06 [63] burn took 5ms, real time so far: 63350ms, cpu time so far: 391ms
2020/04/22 11:23:07 [64] burn took 5ms, real time so far: 64355ms, cpu time so far: 397ms
2020/04/22 11:23:08 [65] burn took 5ms, real time so far: 65360ms, cpu time so far: 404ms
2020/04/22 11:23:09 [66] burn took 5ms, real time so far: 66365ms, cpu time so far: 409ms
2020/04/22 11:23:10 [67] burn took 5ms, real time so far: 67370ms, cpu time so far: 416ms
2020/04/22 11:23:11 [68] burn took 5ms, real time so far: 68375ms, cpu time so far: 422ms
2020/04/22 11:23:12 [69] burn took 5ms, real time so far: 69381ms, cpu time so far: 428ms
2020/04/22 11:23:13 [70] burn took 5ms, real time so far: 70386ms, cpu time so far: 434ms
2020/04/22 11:23:14 [71] burn took 5ms, real time so far: 71391ms, cpu time so far: 440ms
2020/04/22 11:23:15 [72] burn took 5ms, real time so far: 72396ms, cpu time so far: 446ms
2020/04/22 11:23:16 [73] burn took 5ms, real time so far: 73409ms, cpu time so far: 452ms
2020/04/22 11:23:17 [74] burn took 5ms, real time so far: 74415ms, cpu time so far: 458ms
2020/04/22 11:23:18 [75] burn took 5ms, real time so far: 75420ms, cpu time so far: 464ms
2020/04/22 11:23:19 [76] burn took 9ms, real time so far: 76430ms, cpu time so far: 474ms
2020/04/22 11:23:20 [77] burn took 5ms, real time so far: 77435ms, cpu time so far: 480ms
2020/04/22 11:23:21 [78] burn took 8ms, real time so far: 78443ms, cpu time so far: 484ms
2020/04/22 11:23:22 [79] burn took 5ms, real time so far: 79449ms, cpu time so far: 490ms
2020/04/22 11:23:23 [80] burn took 5ms, real time so far: 80454ms, cpu time so far: 496ms
2020/04/22 11:23:24 [81] burn took 5ms, real time so far: 81459ms, cpu time so far: 502ms
2020/04/22 11:23:25 [82] burn took 5ms, real time so far: 82464ms, cpu time so far: 508ms
2020/04/22 11:23:26 [83] burn took 5ms, real time so far: 83469ms, cpu time so far: 515ms
2020/04/22 11:23:27 [84] burn took 5ms, real time so far: 84474ms, cpu time so far: 521ms
2020/04/22 11:23:28 [85] burn took 5ms, real time so far: 85479ms, cpu time so far: 527ms
2020/04/22 11:23:29 [86] burn took 5ms, real time so far: 86485ms, cpu time so far: 533ms
2020/04/22 11:23:30 [87] burn took 5ms, real time so far: 87490ms, cpu time so far: 540ms
2020/04/22 11:23:31 [88] burn took 5ms, real time so far: 88495ms, cpu time so far: 546ms
2020/04/22 11:23:32 [89] burn took 5ms, real time so far: 89500ms, cpu time so far: 551ms
2020/04/22 11:23:33 [90] burn took 7ms, real time so far: 90507ms, cpu time so far: 555ms
2020/04/22 11:23:34 [91] burn took 5ms, real time so far: 91513ms, cpu time so far: 561ms
2020/04/22 11:23:35 [92] burn took 5ms, real time so far: 92518ms, cpu time so far: 566ms
2020/04/22 11:23:36 [93] burn took 11ms, real time so far: 93529ms, cpu time so far: 576ms
2020/04/22 11:23:37 [94] burn took 5ms, real time so far: 94535ms, cpu time so far: 583ms
2020/04/22 11:23:38 [95] burn took 5ms, real time so far: 95540ms, cpu time so far: 588ms
2020/04/22 11:23:39 [96] burn took 5ms, real time so far: 96545ms, cpu time so far: 595ms
2020/04/22 11:23:40 [97] burn took 5ms, real time so far: 97550ms, cpu time so far: 601ms
2020/04/22 11:23:41 [98] burn took 5ms, real time so far: 98555ms, cpu time so far: 607ms
2020/04/22 11:23:42 [99] burn took 5ms, real time so far: 99560ms, cpu time so far: 613ms
docker run --rm -it --cpu-quota 20000 --cpu-period 100000 -v $(pwd):$(pwd) -w $(pwd) golang:1.9.2 go run cfs.go -iterations 100 -sleep 5000ms
2020/04/22 11:26:17 [0] burn took 5ms, real time so far: 5ms, cpu time so far: 7ms
2020/04/22 11:26:22 [1] burn took 5ms, real time so far: 5010ms, cpu time so far: 13ms
2020/04/22 11:26:27 [2] burn took 5ms, real time so far: 10015ms, cpu time so far: 19ms
2020/04/22 11:26:32 [3] burn took 5ms, real time so far: 15020ms, cpu time so far: 25ms
2020/04/22 11:26:37 [4] burn took 5ms, real time so far: 20025ms, cpu time so far: 32ms
2020/04/22 11:26:42 [5] burn took 5ms, real time so far: 25030ms, cpu time so far: 38ms
2020/04/22 11:26:47 [6] burn took 5ms, real time so far: 30035ms, cpu time so far: 44ms
2020/04/22 11:26:52 [7] burn took 5ms, real time so far: 35041ms, cpu time so far: 50ms
2020/04/22 11:26:57 [8] burn took 5ms, real time so far: 40046ms, cpu time so far: 56ms
2020/04/22 11:27:02 [9] burn took 5ms, real time so far: 45051ms, cpu time so far: 63ms
2020/04/22 11:27:07 [10] burn took 5ms, real time so far: 50056ms, cpu time so far: 69ms
2020/04/22 11:27:12 [11] burn took 5ms, real time so far: 55061ms, cpu time so far: 75ms
2020/04/22 11:27:17 [12] burn took 5ms, real time so far: 60066ms, cpu time so far: 81ms
2020/04/22 11:27:22 [13] burn took 5ms, real time so far: 65071ms, cpu time so far: 88ms
2020/04/22 11:27:27 [14] burn took 5ms, real time so far: 70077ms, cpu time so far: 93ms
2020/04/22 11:27:32 [15] burn took 5ms, real time so far: 75082ms, cpu time so far: 100ms
2020/04/22 11:27:38 [16] burn took 5ms, real time so far: 80087ms, cpu time so far: 106ms
2020/04/22 11:27:43 [17] burn took 5ms, real time so far: 85092ms, cpu time so far: 112ms
2020/04/22 11:27:48 [18] burn took 5ms, real time so far: 90097ms, cpu time so far: 118ms
2020/04/22 11:27:53 [19] burn took 5ms, real time so far: 95102ms, cpu time so far: 125ms
2020/04/22 11:27:58 [20] burn took 5ms, real time so far: 100107ms, cpu time so far: 131ms
2020/04/22 11:28:03 [21] burn took 5ms, real time so far: 105112ms, cpu time so far: 137ms
2020/04/22 11:28:08 [22] burn took 5ms, real time so far: 110118ms, cpu time so far: 143ms

Now we just need to release this for kops! 🏎️

@hakman (Member) commented Apr 22, 2020

One thing you forgot to mention is that this is from backports, not regular updates.

Backports are packages taken from the next Debian release (called "testing"), adjusted and recompiled for usage on Debian stable.
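For reference, getting the backports kernel onto a Stretch node looks roughly like this (a sketch, assuming the standard stretch-backports repository and the linux-image-amd64 metapackage):

echo 'deb http://deb.debian.org/debian stretch-backports main' | sudo tee /etc/apt/sources.list.d/stretch-backports.list
sudo apt-get update
sudo apt-get -t stretch-backports install -y linux-image-amd64
sudo reboot
# uname -r should then report a 4.19 backports kernel, e.g. 4.19.0-0.bpo.8-amd64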

@alok87 (Contributor, Author) commented Apr 22, 2020

Maybe then we should move to Debian Buster or Ubuntu 20.04, as you proposed.

I proposed using Debian Buster or Ubuntu 20.04 in the near future instead of Debian Stretch. Not sure what the policy is for upgrading default images for active releases though.

@alok87 (Contributor, Author) commented Apr 22, 2020

@hakman do you have YAMLs like these for Debian Buster or Ubuntu 20.04?

@hakman (Member) commented Apr 22, 2020

No, but it should be very similar to the Stretch one, or the Buster one from kubernetes-sigs/image-builder#205.

@Nuru commented Apr 22, 2020

I have been trying to track the distribution of this patch as best I can, but have not been completely successful. So far, my best information is that

  • I have reproduced the issue in the current kops stable 1.15 AMI: kope.io/k8s-1.15-debian-stretch-amd64-hvm-ebs-2020-01-17
  • The patch probably made it into Debian Buster, since that is based on Linux kernel 4.19 and the fix landed in 4.19.84
  • The patch never made it into Debian Stretch, but it probably has been backported to stretch-backports, since that now appears to be based on 4.19.98

Since buster is the current stable release, I would recommend updating the AMI to the current Debian buster, which is 10.3.

@alok87 (Contributor, Author) commented Apr 23, 2020

@Nuru you can try this out: I have built a Debian Buster (10.3) AMI using ImageBuilder

@kzap commented Apr 30, 2020

@alok87 I believe there are blockers for using Debian Buster with k8s < 1.17, see #8224

@olemarkus (Member)

If you are fine with some minimal nftables mixed with legacy iptables, and all your pods support nftables, Buster will work.
kops may look into defaulting to Ubuntu 20.04 over Buster because of this, though.
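For anyone checking which backend a node ended up on, the iptables binary reports it (a sketch; on Debian Buster the backend is selected via the alternatives system):

iptables --version                            # prints e.g. "iptables v1.8.2 (nf_tables)" or "(legacy)"
sudo update-alternatives --display iptables   # shows whether iptables-nft or iptables-legacy is active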

@hakman (Member) commented Apr 30, 2020

One piece of good news is that Ubuntu 20.04 support will be available in kops 1.16.2: 1938652.

@alok87 (Contributor, Author) commented May 1, 2020

We reverted from Debian Buster / kernel 4.19 back to the kops Debian Stretch image with kernel 4.9, as we were facing a lot of issues on the nodes after upgrading. Here is the Slack discussion about it. The ksoftirqd/0 process started taking a lot of CPU, and only the nodes running this OS started becoming unroutable.
Does this look like it is caused by the same issue you mentioned? @olemarkus @kzap

@hakman Will Ubuntu 20.04 support come for k8s 1.15?

@alok87 I believe there are blockers for using Debian Buster with k8s < 1.17, see #8224

@Nuru you can try this out: I have built the Debian Buster(10.3) AMI using ImageBuilder

@hakman (Member) commented May 1, 2020

@alok87 there is no check for minimum k8s version for Ubuntu 20.04.

@Nuru commented May 1, 2020

@alok87 If the 4.19 kernel has issues, can you use the 4.14 kernel? It has the appropriate kernel patches as of kernel version v4.14.154. I thought it was shipping in Debian Stretch-Backports at some point.

@alok87 (Contributor, Author) commented May 2, 2020

Thanks @Nuru.
We are waiting for kops 1.16.2 to try out Ubuntu 20.04 with Kubernetes 1.15 and kernel 5+.

@jdomag commented Jun 3, 2020

Hi,
Has anything changed here? Is there any kops image with a patched kernel?

@olemarkus (Member)

Not yet. But kops supports running Ubuntu 20.04, and support for Debian Buster is underway.

@hakman (Member) commented Jun 3, 2020

Has anything changed here? is there any kops image with patched kernel?

Please use image: ubuntu/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-arm64-server-20200528. It should fix your issue.

@jdomag commented Jun 12, 2020

I will wait for an official kops image. For now I'll remove CPU limits, as I use Burstable QoS anyway. Any idea how to protect some system/k8s resources so they always have access to CPU? I'm wondering what the impact will be if all the CPU on the node is being used.

@diversario (Member)

@jdomag without limits, I think your best bet is to set requests for your system processes at a level that should always, or almost always, be enough.
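As a concrete illustration (a hypothetical pod spec, not from this thread): setting a CPU request but no CPU limit keeps the pod in the Burstable QoS class, so it gets a guaranteed CPU share via cpu.shares while no CFS quota is ever applied (the quota is what the throttling bug acts on):

apiVersion: v1
kind: Pod
metadata:
  name: system-helper          # hypothetical name
spec:
  containers:
  - name: app
    image: nginx               # hypothetical image
    resources:
      requests:
        cpu: 200m              # translated into cpu.shares, guaranteeing a share of CPU
        memory: 256Mi
      # no limits.cpu here, so no cfs_quota_us is set and CFS throttling never kicks in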

@hakman (Member) commented Jun 12, 2020

There are no immediate plans to build an official Kops image with Ubuntu 20.04.
For now, 1.18 will use the official Ubuntu 20.04 image as default.
#9283

@prashantkalkar (Contributor) commented Jul 14, 2020

@hakman Is it possible to use the Ubuntu 20.04 image for clusters on older k8s versions? I am using kops 1.17 with k8s version 1.14.10.

@hakman (Member) commented Jul 14, 2020

@prashantkalkar you can use Ubuntu 20.04 by manually setting the image for each instance group:

- name: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200701
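For context, in a kops InstanceGroup spec the AMI goes into the image field; a minimal sketch of what that could look like (the cluster name, group name and machine type here are hypothetical):

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes                              # hypothetical instance group name
  labels:
    kops.k8s.io/cluster: example.k8s.local # hypothetical cluster name
spec:
  role: Node
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200701
  machineType: t3.medium                   # hypothetical
  minSize: 1
  maxSize: 3

Editing each group with kops edit ig <name> and then running kops update cluster --yes (followed by a rolling update) should apply the new image in the usual way.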

@prashantkalkar (Contributor)

@prashantkalkar you can use Ubuntu 20.04 by manually setting the image for each instance group:

- name: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200701

Thanks, will use.

@prashantkalkar (Contributor)

@hakman Unfortunately the Ubuntu image did not work for me when I tried it. The kops-configuration service and the kubelet service were both failing due to the absence of Docker.

kops-configuration service was failing with the message:

Output: Failed to restart docker.service: Unit docker.socket not found.

and docker-healthcheck.service was also failing with the message:

timeout: failed to run command ‘docker’: No such file or directory

Let me know if I need to submit a separate issue.

@hakman (Member) commented Jul 15, 2020

Let me know if I need to submit a separate issue.

Please do. Thanks.

@prashantkalkar (Contributor)

A new issue has been submitted here: #9574

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (an issue or PR has remained open with no activity and has become stale) on Oct 13, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Nov 12, 2020
@hakman (Member) commented Nov 12, 2020

kOps started using unmodified Ubuntu 20.04 images as default as of 1.18.
/close

@k8s-ci-robot (Contributor)

@hakman: Closing this issue.

In response to this:

kOps started using unmodified Ubuntu 20.04 images as default as of 1.18.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
