
Update API server certSANs #1447

Closed
dungdm93 opened this issue Mar 13, 2019 · 19 comments
Labels: area/security, priority/backlog (Higher priority than priority/awaiting-more-evidence.)
@dungdm93 commented Mar 13, 2019

What keywords did you search in kubeadm issues before filing this one?

apiserver
sa
certificate
certSANs

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):
v1.13.4

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: VM on GCE
  • OS (e.g. from /etc/os-release): Ubuntu 18.04
  • Kernel (e.g. uname -a): Linux kube-master 4.18.0-1007-gcp
  • Others:

What happened?

After bootstrapping the k8s cluster using kubeadm, I realized that I forgot to include the public IP/DNS of the API server. So I'd like to reconfigure the cluster using kubeadm, but it doesn't work.

What you expected to happen?

The cluster is updated with the new config.

How to reproduce it (as minimally and precisely as possible)?

  1. Bootstrap cluster
root@kube-master:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:94:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.148.0.2/32 scope global dynamic ens4
       valid_lft 2525sec preferred_lft 2525sec
    inet6 fe80::4001:aff:fe94:2/64 scope link 
       valid_lft forever preferred_lft forever
root@kube-master:~# kubeadm init
....

# Check PKI Certificates
root@kube-master:~# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
....
X509v3 Subject Alternative Name: 
    DNS:kube-master, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:10.148.0.2
...
  2. Get kubeadm config
kubeadm config view > kubeadm.yaml
  3. Update kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
+  certSANs:
+  - 35.198.193.188
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
- controlPlaneEndpoint: ""
+ controlPlaneEndpoint: 10.148.0.2:6443
kubernetesVersion: v1.13.4
...
  4. Apply config
root@kube-master:~# kubeadm upgrade apply --config=kubeadm.yaml
root@kube-master:~# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
# certSANs are the same as above

I also tried to renew the certificates, but the result is the same.

root@kube-master:~# kubeadm alpha certs renew all --config kubeadm.yaml
@dungdm93 changed the title to Update API server certSANs Mar 13, 2019
@neolit123 (Member) commented Mar 13, 2019

root@kube-master:~# kubeadm upgrade apply --config=kubeadm.yaml

try this command instead:
kubeadm config upload from-file --config kubeadm.yaml
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-from-file

then do kubeadm config view to see if you have made the change.

root@kube-master:~# kubeadm alpha certs renew all --config kubeadm.yaml

you first need to get the previous step right.
do you get some sort of an error here?

you also have the option to just start over with kubeadm reset -f and kubeadm init...

@neolit123 neolit123 added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Mar 13, 2019
@dungdm93 (Author)

The config is already published to the k8s ConfigMap, so there is no need for kubeadm config upload. Proof:

root@kube-master:~# kubectl get cm -n kube-system 
NAME                                 DATA   AGE
coredns                              1      4h29m
extension-apiserver-authentication   6      4h29m
kube-flannel-cfg                     2      4h28m
kube-proxy                           2      4h29m
kubeadm-config                       2      4h29m
kubelet-config-1.13                  1      4h29m

@fabriziopandini (Member)

@dungdm93 I fear that there is no easy answer to your question...
Changing the SANs can be "easily" achieved, but changing the control-plane address is complex. Here you can find some community-provided solutions: #338

@dungdm93 (Author)

@fabriziopandini actually, controlPlaneEndpoint in my cluster is already 10.148.0.2:6443, so there is no control-plane address change at all. I just want to add extra SANs to my apiServer certificates.

@fabriziopandini (Member)

@dungdm93
If it is only to get a certificate re-created, my first try would be

  • delete the API server certificate
  • recreate it using kubeadm init phase certs apiserver
  • force restart the API server

(afaik kubeadm upgrade doesn't replace certificates unless they are near expiration)
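
A minimal shell sketch of that sequence (assuming a single control-plane node and a Docker-based container runtime; the container ID is a placeholder):

# 1. Delete the old API server serving certificate and key
rm /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key

# 2. Recreate it, passing the config so the new certSANs are picked up
kubeadm init phase certs apiserver --config kubeadm.yaml

# 3. Force a restart of the API server (the kubelet recreates it from the static pod manifest)
docker ps | grep kube-apiserver | grep -v pause
docker kill <apiserver-container-id>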

@dungdm93 (Author) commented Mar 26, 2019

@fabriziopandini I found a workaround, but something about it is not right.

  1. Before update:
# kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
...
    X509v3 Subject Alternative Name: 
        DNS:master, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:10.148.0.39
...
  2. Update kubeadm config
# kubeadm upgrade apply --config=kubeadm.yml
# kubeadm config view
apiServer:
  certSANs:
  - 35.197.150.45
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
  3. Re-generate apiServer certificate
# rm -rf /etc/kubernetes/pki/apiserver.*
# kubeadm init phase certs apiserver
I0326 02:57:52.228780    2975 version.go:237] remote version is much newer: v1.14.0; falling back to: stable-1.13
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.148.0.39]

As you can see from the log, certSANs are still IPs [10.96.0.1 10.148.0.39]. However:

# rm -rf /etc/kubernetes/pki/apiserver.*
# kubeadm init phase certs apiserver --config=kubeadm.yml 
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.148.0.39 35.197.150.45]

This time, kubeadm adds the extra SAN into the certificate. However, on the third try, if I do not specify --config=kubeadm.yml, kubeadm still applies the old config, even though I already updated it.

# kubeadm init phase certs apiserver
I0326 02:57:52.228780    2975 version.go:237] remote version is much newer: v1.14.0; falling back to: stable-1.13
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.148.0.39]

@fabriziopandini (Member)

@dungdm93 you have to update the kubeadm-config ConfigMap in the kube-system namespace to make your change persistent (this is what kubeadm uses if you don't provide --config; the same goes for updates)
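
For anyone following along, a quick sketch of inspecting and updating that ConfigMap (commands as they exist in kubeadm v1.13):

# See the stored config that kubeadm falls back to when --config is not passed
kubectl -n kube-system get configmap kubeadm-config -o yaml

# Upload an edited local file into that ConfigMap
kubeadm config upload from-file --config kubeadm.yaml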

@dungdm93 (Author)

@fabriziopandini In step 2 above, I already updated kubeadm-config via

kubeadm upgrade apply --config=kubeadm.yml

Check it:

# kubectl get cm/kubeadm-config -n kube-system -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - 35.197.150.45
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta1
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: ""
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.4
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      master:
        advertiseAddress: 10.148.0.39
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2019-03-25T14:58:35Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "13969"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: 76d36104-4f0e-11e9-ac8b-42010a940027

@ryangrahamnc commented Apr 26, 2019

I get this issue too.

Dump the config to a file and edit:
kubeadm config view > /root/kubeadmconf.yml
vim /root/kubeadmconf.yml to add my new hostname:

apiServer:
  certSANs:
   - "example.com"
...

Apply the change:
kubeadm config upload from-file --config /root/kubeadmconf.yml

kubectl get -n kube-system cm kubeadm-config -o yaml shows my change

Then try to update the keys:

rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs apiserver
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout

^ does not show my new hostname

Trying again with --config specified:

rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs apiserver --config=/root/kubeadmconf.yml
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout

^ does show my new hostname has been applied

Restarting the api server (via docker ps | grep apiserver and docker restart ${id}) has no effect.

@timothysc timothysc added the priority/backlog Higher priority than priority/awaiting-more-evidence. label May 3, 2019
@timothysc timothysc added this to the Next milestone May 3, 2019
@timothysc timothysc added area/security and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels May 3, 2019
@fabriziopandini (Member) commented May 5, 2019

@dungdm93
I investigated the problem a little bit.
kubeadm upgrade does not apply changes to certificates. See #1540 for more info.

AFAIK, as of today the only viable way to get the SANs changed is to delete the existing api-server certificate, recreate it with kubeadm init phase certs apiserver --config your-new-local-config.yaml, and restart the component (please always pass the config, because if the config file is missing the command will use CLI flag values, not the kubeadm-config stored in the cluster).

Please note as well that you can use kubeadm config upload from-file to modify the kubeadm-config ConfigMap (instead of a full upgrade).
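
Putting that together, a sketch of the whole flow (assuming the stored cluster config is the source of truth and kubeadm.yaml is just a local file name):

# Pull the stored cluster config down to a local file, then edit it
kubeadm config view > kubeadm.yaml

# Upload the edited config (instead of a full upgrade)
kubeadm config upload from-file --config kubeadm.yaml

# Recreate the serving certificate, always passing --config
rm /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key
kubeadm init phase certs apiserver --config kubeadm.yaml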

@omegazeng commented May 6, 2019

same issue

kubeadm v1.14.0

  • In kubeadm-config.yaml (ClusterConfiguration), add a new hostname to apiServer.certSANs
kubeadm alpha certs renew all --config kubeadm-config.yaml
kubeadm upgrade apply --config kubeadm-config.yaml
systemctl restart kubelet.service
kubeadm config view
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text

kubeadm config view # I see the new hostname.
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text # Only the validity period was updated; the certSANs change had no effect.


I want to change controlPlaneEndpoint, need help.
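
(Side note: for quickly checking whether a SAN change actually landed in the certificate, printing just the SAN extension is handy; the -ext flag needs OpenSSL 1.1.1+.)

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -ext subjectAltName
# or, on older OpenSSL:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"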

@fabriziopandini (Member)

@omegazeng
Please follow instructions above (don't use upgrade/don't use certs renew)

@omegazeng

@fabriziopandini
Thanks! I succeeded in adding the certSAN to apiserver.crt via

kubeadm init phase certs apiserver --config kubeadm-config.yaml

Details:

I have 3 master nodes.
Copy apiserver.* to the other 2 master nodes, then restart the 3 apiserver pods.
Change the cluster server domain in ~/.kube/config to the new one.
kubectl then connects to the apiserver successfully.

kubectl config view
kubeadm config view

Both show the new domain.

Finally:

# all k8s node
sed -i 's/old-domain/new-domain/' /etc/kubernetes/*.conf
service kubelet restart

Restart all k8s master nodes one by one (because I am not sure which kube-system pods need a restart).

Check cluster status.
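
A sketch of that distribution step (hostnames are placeholders; paths assume the default kubeadm PKI directory):

# From the node where the new cert was generated
for node in master2 master3; do
  scp /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key \
      root@${node}:/etc/kubernetes/pki/
done

# On each master, restart the apiserver container so it reloads the cert
docker ps | grep kube-apiserver | grep -v pause
docker restart <apiserver-container-id>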

@maxdebayser

The steps in this thread worked for me:

kubeadm config view > /root/kubeadmconf.yml
kubeadm config upload from-file --config /root/kubeadmconf.yml
cd /etc/kubernetes/pki
# check cert before
openssl x509 -in apiserver.crt -text -noout
rm apiserver.*
kubeadm init phase certs apiserver --config=/root/kubeadmconf.yml
# check cert after
openssl x509 -in apiserver.crt -text -noout 
systemctl daemon-reload
systemctl restart kubelet
# find the apiserver container ID, then restart it
docker ps | grep apiserver
docker restart <apiserver_container_id>   # substitute the ID from docker ps

# Verify connection to apiserver:
openssl s_client -connect myserver:6443 | openssl x509 -noout -text

After all these steps, the last command showed the added SANs.

@fabriziopandini (Member)

It seems the issue is solved now. Feel free to reopen if there are still problems.
/close

NB. If you are in an HA cluster created with kubeadm join --experimental-control-plane, certificates differ from one control-plane node to another, so you have to delete and recreate them on each node (while uploading the config can be done on the first control-plane node only)

@k8s-ci-robot (Contributor)

@fabriziopandini: Closing this issue.

In response to this:

It seems the issue is solved now. Feel free to reopen if there are still problems.
/close

NB. If you are in an HA cluster created with kubeadm join --experimental-control-plane, certificates differ from one control-plane node to another, so you have to delete and recreate them on each node (while uploading the config can be done on the first control-plane node only)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@myonlyzzy

thanks!

@juliohm1978

This isn't working for me. I ran the commands given by @maxdebayser, but the same error keeps hitting me. New certs are created and the apiserver is using them.

root@k8s-master01-tre-20190606:/etc/kubernetes/pki# /usr/local/bin/kubeadm upgrade apply -y v1.15.3 --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --allow-experimental-upgrades --allow-release-candidate-upgrades --etcd-upgrade=false --force
[upgrade/config] Making sure the configuration is correct:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.3"
[upgrade/versions] Cluster version: v1.14.3
[upgrade/versions] kubeadm version: v1.15.3
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.3"...
Static pod: kube-apiserver-k8s-master01-tre-20190606.dis.tjpr.jus.br hash: 89e0fe1c0aae8f7d7ffa9d65cb836de6
Static pod: kube-controller-manager-k8s-master01-tre-20190606.dis.tjpr.jus.br hash: 01dc19eb0a9c7eb3c988612a4f0b67e8
Static pod: kube-scheduler-k8s-master01-tre-20190606.dis.tjpr.jus.br hash: fd29bfff9a9c75a09e7573f98e900cd5
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests361320450"
[controlplane] Adding extra host path mount "usr-share-ca-certificates" to "kube-apiserver"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [failed to renew certificates for component "kube-apiserver": failed to renew certificate apiserver-kubelet-client: unable to sign certificate: must specify at least one ExtKeyUsage, rename /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-11-23-01-44-24/kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml: no such file or directory]

What am I missing here?

@juliohm1978 commented Nov 23, 2019

Ok, I managed to get around it.

The same error message appears if your apiserver-kubelet-client certificate is also messed up. So, recreating that cert helped.

However, the error message comes back for the front-proxy-client cert.

failed to renew certificate front-proxy-client: unable to sign certificate: must specify at least one ExtKeyUsage

At this point, I backed up and recreated all certs.

cd /etc/kubernetes/pki

mv apiserver.crt apiserver.crt.old
mv apiserver.key apiserver.key.old
mv apiserver-kubelet-client.crt apiserver-kubelet-client.crt.old
mv apiserver-kubelet-client.key apiserver-kubelet-client.key.old
mv front-proxy-ca.crt front-proxy-ca.crt.old
mv front-proxy-ca.key front-proxy-ca.key.old
mv front-proxy-client.crt front-proxy-client.crt.old
mv front-proxy-client.key front-proxy-client.key.old

kubeadm init phase certs all --config /etc/kubernetes/kubeadm-config.yaml

After that, I was able to upgrade the cluster.
