Update API server certSANs #1447
Comments
Try this command instead: … then do …
You first need to get the previous step right. You also have the option to just restart.
The config is already published to a k8s ConfigMap, with no need of …
root@kube-master:~# kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      4h29m
extension-apiserver-authentication   6      4h29m
kube-flannel-cfg                     2      4h28m
kube-proxy                           2      4h29m
kubeadm-config                       2      4h29m
kubelet-config-1.13                  1      4h29m
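For reference, the ClusterConfiguration stored in that ConfigMap can also be dumped on its own with a jsonpath query; a minimal sketch (the output file name kubeadm.yml is only an example):
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yml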
@fabriziopandini actually, …
@dungdm93 (AFAIK kubeadm upgrade doesn't replace certificates unless they are near expiration).
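Since kubeadm only touches certificates that are close to expiry, it can help to check how far out the current serving certificate actually is. A simple check with plain openssl (newer kubeadm releases also ship a certs check-expiration subcommand, but that is not assumed here):
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate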
@fabriziopandini I found a workaround, but something is still not right.
# kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
...
X509v3 Subject Alternative Name:
DNS:master, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:10.148.0.39
...
# kubeadm upgrade apply --config=kubeadm.yml
# kubeadm config view
apiServer:
  certSANs:
  - 35.197.150.45
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# rm -rf /etc/kubernetes/pki/apiserver.*
# kubeadm init phase certs apiserver
I0326 02:57:52.228780 2975 version.go:237] remote version is much newer: v1.14.0; falling back to: stable-1.13
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.148.0.39]
As you can see from the log, the certSANs are still missing.
# rm -rf /etc/kubernetes/pki/apiserver.*
# kubeadm init phase certs apiserver --config=kubeadm.yml
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.148.0.39 35.197.150.45]
This time the certSANs are applied, but without --config they are not:
# kubeadm init phase certs apiserver
I0326 02:57:52.228780    2975 version.go:237] remote version is much newer: v1.14.0; falling back to: stable-1.13
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.148.0.39]
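One detail worth noting here: even after apiserver.crt is regenerated, the running kube-apiserver keeps serving the old certificate until its static pod is restarted. A rough sketch of one common way to force that, assuming the default static-pod control plane (the 20-second pause is only to give the kubelet time to notice the change):
# move the manifest away so the kubelet tears the pod down ...
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20
# ... then move it back so the kubelet recreates the pod with the new cert
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/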
@dungdm93 you have to update the cluster configuration stored in the kubeadm-config ConfigMap.
@fabriziopandini In step 2 above, I already updated it with kubeadm upgrade apply --config=kubeadm.yml. Check it:
# kubectl get cm/kubeadm-config -n kube-system -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - 35.197.150.45
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta1
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: ""
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.4
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      master:
        advertiseAddress: 10.148.0.39
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2019-03-25T14:58:35Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "13969"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: 76d36104-4f0e-11e9-ac8b-42010a940027
I get this issue too.
Dump the config to a file and edit it:
Apply the change:
Then try to update the keys:
^ does not show my new hostname. Trying again with --config specified:
^ does show my new hostname has been applied. Restarting the api server (via …)
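The actual commands behind those steps were not captured above; the following is only a hedged reconstruction of the sequence being described, pieced together from the rest of this thread (the file name kubeadm.yaml is a placeholder, and the certSANs edit itself has to be done by hand):
# dump the current cluster configuration to a file, then edit it and add the
# desired entries under apiServer.certSANs
kubeadm config view > kubeadm.yaml
# publish the edited configuration back to the kubeadm-config ConfigMap
kubeadm upgrade apply --config kubeadm.yaml
# remove the old serving certificate and key, then regenerate them with the new SANs
rm -rf /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs apiserver --config kubeadm.yaml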
@dungdm93 AFAIK, as of today the only viable way to get the SANs changed is to delete the existing api-server certificate and recreate it with kubeadm init phase certs apiserver --config (as shown above). Please note as well that you can use …
same issue kubeadm v1.14.0
kubeadm alpha certs renew all --config kubeadm-config.yaml
kubeadm upgrade apply --config kubeadm-config.yaml
systemctl restart kubelet.service
kubeadm config view
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text
kubeadm config view shows the new hostname. I want to change controlPlaneEndpoint, need help.
@omegazeng …
@fabriziopandini
kubeadm init phase certs apiserver --config kubeadm-config.yaml
Details: I have 3 master nodes. kubectl config view and kubeadm config view have shown the new domain. Finally:
# all k8s nodes
sed -i 's/old-domain/new-domain/' /etc/kubernetes/*.conf
service kubelet restart
Restart all k8s master nodes one by one (because I am not sure which kube-system pod needs a restart), then check cluster status.
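A few generic checks that could cover the "check cluster status" step, assuming kubectl access from a control-plane node:
kubectl get nodes
kubectl get pods -n kube-system
kubectl cluster-info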
The steps in this thread worked for me:
After all these steps, the last command showed the added SANs. |
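One way to confirm that the added SANs really ended up in the regenerated certificate is to filter the openssl output used earlier in this thread, for example:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'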
It seems the issue is solved now. Feel free to reopen if there are still problems.
NB: if you are in an HA cluster created with kubeadm join --experimental-control-plane, certificates differ from one control-plane node to another, so you have to delete and recreate them on each node (while uploading the config can be done on the first control-plane node only).
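A rough sketch of what that per-node step might look like, assuming the edited config has already been uploaded from the first control-plane node and copied to /root/kubeadm.yaml on every master (the host names are placeholders):
for node in master-1 master-2 master-3; do
  # regenerate the serving cert with the new SANs on each control-plane node
  ssh "$node" 'rm -f /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key && kubeadm init phase certs apiserver --config /root/kubeadm.yaml'
done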
@fabriziopandini: Closing this issue.
Thanks!
This isn't working for me. I ran the commands given by @maxdebayser, but I keep hitting the same error. New certs are created and the apiserver is using them.
What am I missing here? |
Ok, I managed to get around it. The same error message appears if your apiserver-kubelet-client certificate is also messed up. So, recreating that cert helped. However, the error message comes back for the front-proxy-client cert.
At this point, I backed up and recreated all certs.
After that, I was able to upgrade the cluster.
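A hedged sketch of that "back up and recreate all certs" step, assuming the cluster configuration with the desired SANs has already been published (the backup directory name is arbitrary):
# keep a copy of the existing PKI material in case something goes wrong
cp -a /etc/kubernetes/pki /etc/kubernetes/pki.bak
# renew all leaf certificates from the stored cluster configuration (kubeadm v1.14)
kubeadm alpha certs renew all --config kubeadm-config.yaml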
What keywords did you search in kubeadm issues before filing this one?
apiserver
sa
certificate
certSANs
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version): v1.13.4
Environment:
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a): Linux kube-master 4.18.0-1007-gcp
What happened?
After bootstrapping the k8s cluster using kubeadm, I realized that I forgot the public IP/DNS of the API server. So I'd like to reconfigure the k8s cluster using kubeadm, but it doesn't work.
What you expected to happen?
Cluster update with new config.
How to reproduce it (as minimally and precisely as possible)?
kubeadm.yaml
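The attached kubeadm.yaml is not reproduced above; a hypothetical minimal example of what the relevant part could look like, based on the config dumps elsewhere in this thread (the IP is a placeholder):
cat > kubeadm.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
apiServer:
  certSANs:
  - 35.197.150.45
EOF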
I also tried renewing the certificates, but the result is the same.