[GCE kube-up] Don't provision kubeconfig file for kube-proxy service account #52183
Conversation
Forgot to note, this doesn't change how kube-proxy static pods are deployed.
Ref #23225.
Force-pushed 3bdc429 to bd60abd.
cmd/kube-proxy/app/server.go
Outdated
return nil, nil, err
// When only master URL is set, use service account but override the host.
if len(config.KubeConfigFile) == 0 && len(masterOverride) != 0 {
glog.Info("Only --master was specified. Using service account with overrided host.")
If we want to keep backwards-compat, we can't do this. --master currently only means "I can run anywhere, and I will connect insecurely (via the 8080 port) to the API server". I guess we need to add yet another --use-service-account-credentials or similar flag :/
I guess we need to add yet another --use-service-account-credentials or similar flag :/
Thanks, I think you are right...
Added a flag --service-account-master-url. PTAL, thanks!
This isn't targeted for v1.8, right?
Assign back to me once it is LGTM'ed and ready for approval.
Nope, let's do it for v1.9 :)
Force-pushed bd60abd to f1dac0c.
Force-pushed ba84b11 to 3d151bf.
/retest
1 similar comment
/retest
Tests passed, code is ready for review. cc @ncdc for kube-proxy config API changes
Would you be willing to do this without adding a new command line flag to kube-proxy?
I would like to do so. What is the timeline for the componentconfig API (is it enabled by default already)? I'm a bit lost after reading kubernetes/enhancements#115. It would be great to have some guidelines for converting flags to a config file.
The kube-proxy config is functional right now (you can already use --config). Ultimately we need to move the kube-proxy config structs into their own API group, and componentconfig needs to go away. I would recommend converting to a config file soon, but one concern I do have is about upgrades: if we roll out a cluster with version n, then later we adjust the config file format, how do we handle upgrades to n+1 as well as downgrades back to n?
Would populating the config file content via a configmap (like what kubeadm already did for the kubeconfig) and making sure the configmap is updated during upgrade/downgrade be feasible?
As long as you can get a file on disk in a path that kube-proxy can read, yes.
What is blocking stabilizing the config structure, other than moving it out of componentconfig and making your desired changes? Why not just do that now and move it out of alpha, so that you have API stability across upgrades? |
With the Kubelet, we won't convert to using the config file until the kubeletconfig API group is out of alpha, and the loading-from-a-file mechanism will remain behind an alpha gate until then. |
Nothing, other than time 😄
@liggitt Thanks for the comment.
Good point. No, I don't have a compelling reason now. I was thinking that by making it a flag/config we could make the kube-proxy binary self-contained, though that isn't really true, as using InClusterConfig() still depends on the in-cluster environment.
Revised the PR a bit. Pushed as separate commits to hopefully keep review simple.
The last commit tweaks KUBE_PROXY_DAEMONSET to prove this is working; will remove it later. Sorry for the last-minute changes @mikedanese @ncdc
Force-pushed 0d33422 to 5896f00.
/retest
thanks, the client building looks more coherent now
Force-pushed 5896f00 to 5c381d7.
Rebased. @mikedanese @ncdc any chance to take another look? Thanks!
The Go code changes lgtm, although I have a question about whether we could do this without adding a flag.
cmd/kube-proxy/app/server.go
Outdated
&clientcmd.ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: masterOverride}}).ClientConfig()
if err != nil {
return nil, nil, err
if config.UseServiceAccount {
@liggitt @deads2k what if we just did this instead?
loader := clientcmd.NewDefaultClientConfigLoadingRules()
loader.ExplicitPath = config.KubeConfigFile
kubeConfig, err := clientcmd.BuildConfigFromKubeconfigGetter(masterOverride, loader.Load)
I believe this would allow us to skip adding a "use service account" flag:
- Use the kubeconfig file, if it's specified in the kube-proxy config
- Support the master url override
- Use the in-cluster config if the kubeconfig file is not specified
WDYT? If this is too big or risky of a change, then we can certainly keep what's here in this PR
In fact, it seems like BuildConfigFromFlags() already supports this use case?
kubernetes/staging/src/k8s.io/client-go/tools/clientcmd/client_config.go, lines 522 to 539 at 4ee72eb:
// BuildConfigFromFlags is a helper function that builds configs from a master
// url or a kubeconfig filepath. These are passed in as command line flags for cluster
// components. Warnings should reflect this usage. If neither masterUrl or kubeconfigPath
// are passed in we fallback to inClusterConfig. If inClusterConfig fails, we fallback
// to the default config.
func BuildConfigFromFlags(masterUrl, kubeconfigPath string) (*restclient.Config, error) {
	if kubeconfigPath == "" && masterUrl == "" {
		glog.Warningf("Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.")
		kubeconfig, err := restclient.InClusterConfig()
		if err == nil {
			return kubeconfig, nil
		}
		glog.Warning("error creating inClusterConfig, falling back to default config: ", err)
	}
	return NewNonInteractiveDeferredLoadingClientConfig(
		&ClientConfigLoadingRules{ExplicitPath: kubeconfigPath},
		&ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: masterUrl}}).ClientConfig()
}
If using InClusterConfig() when neither kubeconfig nor masterUrl is specified for kube-proxy is not considered breaking backwards-compat, I'd be happy to do it this way. The current behavior is using the default API client. cc @luxas
Took another look; using BuildConfigFromFlags() seems valid here. When neither masterUrl nor kubeconfigPath is defined, it calls InClusterConfig(). Then when InClusterConfig() fails, it falls back to the default config. This seems to match the current behavior.
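The fallback order in the quoted helper can be modeled with a small stdlib-only sketch (the function name and return strings here are illustrative, not part of client-go):

```go
package main

import "fmt"

// configSource models the decision order of clientcmd.BuildConfigFromFlags
// as quoted above: with neither input set, try in-cluster config, then
// fall back to defaults; otherwise load the kubeconfig path and apply
// the master URL as a host override.
func configSource(masterURL, kubeconfigPath string, inClusterAvailable bool) string {
	if kubeconfigPath == "" && masterURL == "" {
		if inClusterAvailable {
			return "in-cluster" // InClusterConfig() succeeded
		}
		return "default" // InClusterConfig() failed; default config
	}
	return "kubeconfig+override"
}

func main() {
	fmt.Println(configSource("", "", true))                 // in-cluster
	fmt.Println(configSource("", "", false))                // default
	fmt.Println(configSource("https://10.0.0.1", "", true)) // kubeconfig+override
}
```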
loader := clientcmd.NewDefaultClientConfigLoadingRules()

That would make kube-proxy honor $KUBECONFIG and $HOME/.kube/config, which seems problematic. Traditionally, we've had server components use explicit flags to specify connection config files, rather than picking up env-based config like $KUBECONFIG.

InClusterConfig() falls somewhere between an explicit --kubeconfig and an implicit $KUBECONFIG or homedir behavior, since it uses paths and envs only intended to be set in a containerized environment.
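The concern above is about which files the default loading rules would consult. The precedence can be sketched roughly like this (a stdlib-only illustration of the default-loading-rules behavior, not the real client-go implementation):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// kubeconfigSearchPaths roughly models the precedence of client-go's
// default client config loading rules: an explicit path wins, otherwise
// the $KUBECONFIG list is used, otherwise $HOME/.kube/config.
func kubeconfigSearchPaths(explicitPath, kubeconfigEnv, home string) []string {
	if explicitPath != "" {
		return []string{explicitPath}
	}
	if kubeconfigEnv != "" {
		// $KUBECONFIG may name several files, separated like a PATH list.
		return strings.Split(kubeconfigEnv, string(filepath.ListSeparator))
	}
	return []string{filepath.Join(home, ".kube", "config")}
}

func main() {
	// With no explicit path, a server component would silently pick up
	// env- or homedir-based config -- the behavior being objected to.
	fmt.Println(kubeconfigSearchPaths("", "/etc/a", "/root"))
	fmt.Println(kubeconfigSearchPaths("", "", "/root"))
}
```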
The kubeconfig is going to be specified in the config file via --config. We won't support --kubeconfig any more.
Maybe if the kubeconfig in the config file is blank, we fall back to in-cluster?
A specific file, falling back to InClusterConfig() if unspecified, seems ok. Just make sure we don't pull in $KUBECONFIG or $HOME/.kube/config.
K
@MrHohn can you make the change such that if config.KubeConfigFile is empty, we use the in-cluster config?
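The rule being asked for here can be sketched as a tiny decision function (a stdlib-only model of the behavior settled on in this thread; names are illustrative, not the actual kube-proxy code):

```go
package main

import "fmt"

// clientConfigSource models the client-building rule: an empty
// KubeConfigFile in the kube-proxy config means "use the in-cluster
// service account config"; otherwise exactly that file is loaded, with
// --master overriding the server address if set. $KUBECONFIG and
// $HOME/.kube/config are never consulted.
func clientConfigSource(kubeConfigFile, masterOverride string) string {
	if kubeConfigFile == "" && masterOverride == "" {
		// Neither specified: fall back to the in-cluster config.
		return "in-cluster"
	}
	// Load only the explicit file, applying --master as an override.
	return fmt.Sprintf("explicit file %q with master override %q",
		kubeConfigFile, masterOverride)
}

func main() {
	fmt.Println(clientConfigSource("", ""))
	fmt.Println(clientConfigSource("/var/lib/kube-proxy/kubeconfig", "https://10.0.0.1"))
}
```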
@ncdc Sounds good, will ping again when ready.
Force-pushed 5c381d7 to a8354ba.
Force-pushed a8354ba to 00454b1.
if len(config.KubeConfigFile) == 0 && len(masterOverride) == 0 {
glog.Warningf("Neither --kubeconfig nor --master was specified. Using default API client. This might not work.")
Do you want to log that it's using in-cluster config?
+1 on logging why it is falling back to in cluster config. LGTM otherwise
Sounds good. Added a log line for this.
LGTM. @liggitt please check the in-cluster setup.
Force-pushed 00454b1 to 476138c.
Thanks for reviewing! Removed the dummy commit.
The cluster changes still look good. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: bowei, mikedanese, MrHohn. Associated issue: 281. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files.
You can indicate your approval by writing /approve in a comment.
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue (batch tested with PRs 52883, 52183, 53915, 53848). If you want to cherry-pick this change to another branch, please follow the instructions here.
What this PR does / why we need it:
Offloads the burden of provisioning the kubeconfig file for the kube-proxy service account from the GCE startup scripts. This also helps us decouple kube-proxy daemonset upgrades from node upgrades.
Previous attempt in #51172; this PR uses InClusterConfig for kube-proxy based on discussions in kubernetes/client-go#281.
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #NONE
Special notes for your reviewer:
/assign @bowei @thockin
cc @luxas @murali-reddy
Release note: