

karmada-controller-manager can't restart in helm installation for dependent secret not found #5233

Closed
chaosi-zju opened this issue Jul 23, 2024 · 4 comments · Fixed by #5305
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@chaosi-zju
Member

What happened:

In the helm installation method, when installing karmada-controller-manager, we use an initContainer to wait for karmada-apiserver to become ready, which prevents karmada-controller-manager from entering CrashLoopBackOff. This feature was introduced in #5010.

In order to access the host cluster's kube-apiserver in the initContainer, we mounted a service-account-token type Secret, because the deployment of karmada-controller-manager sets automountServiceAccountToken: false. Unsetting automountServiceAccountToken was introduced in #2523.

However, in #5010, we deleted the Secret mentioned above once the installation finished. In fact, we still need this Secret after installation: without it, karmada-controller-manager can't restart because the mounted Secret no longer exists.
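To illustrate the failure mode, here is a minimal sketch of the relevant deployment fields. The container names, image, and secret name below are hypothetical, not the exact chart contents; the point is that the Secret is referenced as a volume, so deleting it after install makes any later pod restart fail at volume-mount time:

```yaml
# Hypothetical sketch: initContainer reaches the host kube-apiserver via a
# manually mounted service-account-token Secret. If that Secret is deleted
# after installation, the pod can never be recreated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karmada-controller-manager
spec:
  template:
    spec:
      automountServiceAccountToken: false   # set since #2523
      initContainers:
        - name: wait-for-apiserver          # hypothetical name
          image: bitnami/kubectl            # hypothetical image
          command: ["sh", "-c", "until kubectl get --raw=/readyz; do sleep 2; done"]
          volumeMounts:
            - name: sa-token
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      volumes:
        - name: sa-token
          secret:
            # Deleted when the install job finishes (#5010), so a restarted
            # pod fails with "secret not found" while mounting this volume.
            secretName: karmada-controller-manager-sa-token
```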

What you expected to happen:

Now, we have two ways to resolve it:

  • Method one: keep the Secret after the installation is complete.

But I am not sure whether this retained Secret contains sensitive information. Is there any security issue?

  • Method two: stop manually mounting the Secret and instead set automountServiceAccountToken directly.

In #2523, we unset automountServiceAccountToken because we thought karmada-controller-manager had no need to interact with the host cluster, but now it seems that it does. So perhaps we can set automountServiceAccountToken again.
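A sketch of method two, assuming the field is flipped in the deployment template (exact placement in the chart may differ). With the kubelet's automatic token projection, no explicit Secret volume is needed, so there is nothing to clean up or lose on restart:

```yaml
# Method two sketch: rely on the automatically projected service account
# token instead of a manually mounted service-account-token Secret.
spec:
  template:
    spec:
      automountServiceAccountToken: true   # was false since #2523
      # No explicit Secret volume is required; the token is projected into
      # /var/run/secrets/kubernetes.io/serviceaccount by the kubelet and
      # survives pod restarts.
```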

How to reproduce it (as minimally and precisely as possible):

  1. install karmada via the helm method (the latest version)
  2. delete the pod of karmada-controller-manager
  3. check whether the pod of karmada-controller-manager becomes ready

Anything else we need to know?:

Environment:

  • Karmada version: latest
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version): latest
  • Others:
@chaosi-zju chaosi-zju added the kind/bug Categorizes issue or PR as related to a bug. label Jul 23, 2024
@chaosi-zju
Member Author

chaosi-zju commented Jul 23, 2024

@XiShanYongYe-Chang @zhzhuang-zju @calvin0327 @carlory and anyone else

Can you help take a look at this problem?

Which of the two ways do you think is better?

@XiShanYongYe-Chang
Member

Personally, I prefer method two.

Hi @carlory @calvin0327 are there any security issues if we do this, or anything else that doesn't make sense?

@carlory
Member

carlory commented Jul 26, 2024

There is no essential difference between the two methods, so they are equivalent regarding security. The simpler and easier-to-understand option is the better one.

method two +1

@zhzhuang-zju
Contributor

method two +1
