
Feature request: command to recreate certificates #6024

Closed
RubenAtPA opened this issue Dec 6, 2019 · 23 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@RubenAtPA

RubenAtPA commented Dec 6, 2019

While developing I need to work outside the office as well. The problem is that when I connect to a different network, the Hyper-V minikube machine gets a different address that cannot (easily) be made static. After that, when I run kubectl commands I get the error: Unable to connect to the server: x509: certificate is valid for 192.168.XXX.YYY, 10.96.0.1, 10.0.0.1, not 192.168.XXX.ZZZ

Currently the only workaround is to stop the minikube machine and start it again so that it goes through the recreation of the certificates. That also forces all my deployments to restart, which is undesirable. I know of the --apiserver-ips option, but that only seems to work when you know which IP address the VM will be assigned.

The solution would be a command like minikube renew-certs or something similar.
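As a side note, the rejected IP can be pulled straight out of the error text, which is handy for tooling that wants to detect this situation. A minimal sketch; the error string and IPs below are made-up examples, not real cluster values:

```shell
# Sketch: extract the uncovered IP from a kubectl x509 error message.
# The error text is a hypothetical example of the failure described above.
err='Unable to connect to the server: x509: certificate is valid for 192.168.1.10, 10.96.0.1, 10.0.0.1, not 192.168.1.20'

# Everything after the final "not " is the IP the certificate does not cover.
new_ip=${err##*not }
echo "The VM now has $new_ip, which is missing from the certificate SANs"
```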

@medyagh
Member

medyagh commented Dec 16, 2019

@RubenAtPA I am not sure whether renewing the certs would require the API server to be restarted as well ...
I would be happy to look into any prototype that adds this feature without adding a new command, maybe as a start flag:

minikube start --generate-certs-only

@tstromberg
Contributor

minikube start should recreate the certificates. It should not be necessary to stop it, and does not require deployments to be restarted. Do you mind trying it? On most systems, it should take at most 10 seconds, but I understand Hyper-V may have different performance characteristics.

That said, it might be nice to have a more explicit command.

@tstromberg tstromberg changed the title Feature request: Minikube command to recreate certificates Feature request: command to recreate certificates Dec 19, 2019
@tstromberg tstromberg added kind/feature Categorizes issue or PR as related to a new feature. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Dec 19, 2019
@RubenAtPA
Author

I have tried it with the minikube start command, but that tries to start the entire deployment and takes up to two minutes on my system. I would opt for the explicit command, or indeed the suggested flag, since the IP is already renewed without problems and the only reported issue seems to be the certificate's domain/IP.

@tstromberg
Contributor

2 minutes sounds terrible for an already running cluster. We should fix that.

@RubenAtPA
Author

That is indeed true. However, the need to always go through the restart process would be mitigated if such an option existed. As a small change to the feature request: we have started using the command minikube update-context more often, and it would make even more sense if that also updated the certificates instead of just the kubeconfig files. Is that a possibility?

@tstromberg tstromberg added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Feb 26, 2020
@tstromberg
Contributor

My preference would still be to make start super-fast rather than adding another option. That way it's solved for everyone.

What platform are you seeing 2-minute starts on?

I get 30 seconds on my machine, which is admittedly 29 seconds too long.

@RubenAtPA
Author

I agree that an extra option wouldn't be the best solution. However, the update-context command already reconfigures the kubeconfig file with the newly assigned IP, so it would be quite straightforward to update the certificates too, since they are bound to the IP address in question.

Regarding the start-up time: I work on a very recent laptop with a Core i5-8365U (4 cores/8 threads) and 32 GB RAM, running Windows 10 Pro with the latest updates installed. The minikube VM (Hyper-V) has 8 virtual cores and 12 GB RAM assigned. The VM itself starts within seconds, but from there it takes very long for the Kubernetes services to report up and running.

@tstromberg
Contributor

We did a lot of work in v1.9.2 to improve start time, getting it down to ~5 seconds. Most of that time is spent generating certificates. Can you check if this works for you?

@RubenAtPA
Author

It did indeed improve a lot. It's now down to about 45 seconds, most of which seems to be spent waiting for Kubernetes. That is better, and short enough for a coffee round :-). Thanks for that!

Does that mean it won't become possible to include the certificate generation in update-context too? That command does a lot less work.

@tstromberg
Contributor

45 seconds is still a tremendously long time. Can you provide the command-line you are using?

@sammym1982

sammym1982 commented Jun 3, 2020

We also ran into the same issue, but minikube start did not help; the only thing that worked was delete and recreate, which is really not ideal when people connect and reconnect between home and office networks while working remotely during these times.

Is there a way we can somehow assign an IP that will not change? We could try to script it in our tooling. A command to force-fix the certs would also be great.

In our case the issue happens with the docker build commands, which are configured against minikube using docker-env.

Env: Windows 10
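For the docker-env case, re-exporting the environment after the IP changes keeps the docker CLI pointed at the new address (though it does not fix the certificate itself). A sketch; the DOCKER_HOST value below is a hypothetical example, not output from a real cluster:

```shell
# After the VM gets a new IP, re-evaluate the env that `docker build` uses:
#   eval "$(minikube docker-env)"              # bash / WSL
#   minikube docker-env | Invoke-Expression    # PowerShell
# Sanity check: extract the host IP that DOCKER_HOST points at.
DOCKER_HOST='tcp://192.168.1.20:2376'   # hypothetical value from docker-env
host=${DOCKER_HOST#tcp://}              # strip the scheme
host=${host%%:*}                        # strip the port
echo "docker commands will talk to $host"
```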

@sammym1982

Any thoughts on this?

@RubenAtPA
Author

RubenAtPA commented Jun 5, 2020

45 seconds is still a tremendously long time. Can you provide the command-line you are using?

I am not sure how to interpret your question. We use an Ubuntu CLI on WSL that pipes its commands to minikube running on the Windows host. The initial response is fast, but it takes quite a lot of time on the part where it starts the Kubernetes server.

Does that answer your question?

@RubenAtPA
Author

We also ran into the same issue, but minikube start did not help; the only thing that worked was delete and recreate, which is really not ideal when people connect and reconnect between home and office networks while working remotely during these times.

Is there a way we can somehow assign an IP that will not change? We could try to script it in our tooling. A command to force-fix the certs would also be great.

In our case the issue happens with the docker build commands, which are configured against minikube using docker-env.

Env: Windows 10

Running on Windows you will probably be using Hyper-V as the VM manager, which doesn't support static IPs. That is why regenerating the certificates would help more.

@sammym1982

@RubenAtPA I see you closed this issue. Can you share the workaround you used to get around it? I guess it's minikube start, but I wanted to confirm. Unfortunately, minikube start is not working for me :( I am working not through WSL2 but through PowerShell on Windows; not sure whether that makes a difference in behavior. As I pointed out above, the issue for us occurs when we trigger docker commands.

@RubenAtPA
Author

@RubenAtPA I see you closed this issue. Can you share the workaround you used to get around it? I guess it's minikube start, but I wanted to confirm. Unfortunately, minikube start is not working for me :( I am working not through WSL2 but through PowerShell on Windows; not sure whether that makes a difference in behavior. As I pointed out above, the issue for us occurs when we trigger docker commands.

I have not actively closed the issue. As far as I am concerned, I still hope the proposed solution will be implemented. For now we do indeed use minikube start as a workaround, but that still takes a lot of time, whereas adding this to the minikube update-context command would make it a lot faster.

@RubenAtPA RubenAtPA reopened this Jun 9, 2020
@medyagh
Member

medyagh commented Jun 12, 2020

I would consider any PR that adds a new flag,
called minikube start --refresh-certs, to ensure the certs are re-created.
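Such a flag could even be applied conditionally by first checking whether the VM's current IP is still covered by the certificate. A sketch of that guard; ip_in_sans and both sample values are hypothetical (on a live cluster the SAN list would have to be read from the apiserver certificate, e.g. via openssl):

```shell
# Sketch: decide whether certs need regenerating by checking whether the
# VM's current IP appears in the certificate's SAN list.
ip_in_sans() {
  # $1: comma-separated SAN IPs, $2: IP to look for
  case ",$1," in
    *",$2,"*) return 0 ;;   # IP found in the list
    *)        return 1 ;;   # IP missing -> cert mismatch
  esac
}

sans='192.168.1.10,10.96.0.1,10.0.0.1'  # hypothetical SANs from the cert
current='192.168.1.20'                  # hypothetical current VM IP

if ! ip_in_sans "$sans" "$current"; then
  echo "cert does not cover $current; a refresh would be needed"
fi
```

Wrapping the entries in commas before matching prevents a partial match such as 192.168.1.1 matching inside 192.168.1.10.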

@medyagh medyagh added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jun 12, 2020
@sammym1982

I can confirm that minikube start does not resolve the issue for us: docker build keeps failing with this error, and the only solution is to delete and recreate the minikube cluster, which is highly undesirable.

Error is something like:
error during connect: Get "https://XXX.XX.XX.175:2376/v1.24/version": x509: certificate is valid for XXX.XX.XX.92, 127.0.0.1, not XXX.XX.XX.175

@sammym1982

BTW, it looks like the fix for my issue landed in 11.0 (#8185). Giving it a try.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 15, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 15, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
