
k8s client doesn't periodically refresh GCP credentials #209

Closed
jlewi opened this issue Sep 21, 2018 · 2 comments

@jlewi
Contributor

jlewi commented Sep 21, 2018

See
#207
#208

We observed the argo client getting unauthorized errors when workflows took longer than an hour to run. This is because the OAuth credential is expiring and not being refreshed.

The K8s client should automatically be refreshing the OAuth credential, but this doesn't appear to happen. In #208 we added a temporary workaround to force a refresh, but we should figure out why it's necessary.

I suspect the problem is here:

get_google_credentials=_refresh_credentials,

To work around an issue in the K8s client, we overrode the loading of kube config and inserted our own logic to refresh credentials. I suspect there is a bug in that refresh logic and that the refresh is never being called (we could instrument it to verify).

We should consider the following:

  1. Removing our custom code and seeing if the K8s client works; the original issue (RefreshError with config.load_kube_config(), kubernetes-client/python#339) is supposedly fixed.
  2. If we need to continue using our workaround, we should figure out why credential refresh isn't happening and fix it.
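A minimal sketch of the decision the refresh hook should be making (the expiry margin and helper name below are assumptions for illustration; `_refresh_credentials` in our code would need equivalent logic and, crucially, needs to actually be invoked before long-running requests):

```python
import datetime

# Hypothetical margin: treat a token as stale if it expires within 5 minutes.
REFRESH_MARGIN = datetime.timedelta(minutes=5)

def needs_refresh(expiry, now=None):
    """Return True if an OAuth token expiring at `expiry` (UTC) should be refreshed.

    If a hook like this is never called, a workflow running longer than the
    token lifetime (~1 hour) starts hitting unauthorized errors, which matches
    what we observed in #207/#208.
    """
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return expiry - now <= REFRESH_MARGIN
```

Instrumenting the real hook with a log line at this point would confirm whether the refresh path is ever exercised.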
@richardsliu
Contributor

richardsliu commented: The bug appears to be in the K8s Python client: kubernetes-client/python-base#59

For now we'll need to keep a workaround on our side.

@tania-python-dev

I also ran into a problem with token refreshing. The Python client reads the token from ~/.kube/config; if it has already expired, the client creates a new token and sends the next request with it. That looks like the usual flow, but the issue is that the kubeconfig itself is left unchanged (still holding the old, expired token), and the API responds with a 403 Forbidden error because the token the client presents is unknown to it.
The only way I can refresh the token in ~/.kube/config is to execute any kubectl command from the console (e.g. kubectl get namespaces). After that, the token is refreshed and the API responds 200 OK. Does anybody know how to fix this?
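A stopgap sketch automating the manual step the comment describes, assuming kubectl is on PATH (the `run` parameter is injected only so the helper can be exercised without a real cluster; the specific command is arbitrary, matching the comment's example):

```python
import subprocess

def refresh_kubeconfig_token(run=subprocess.run):
    """Force kubectl to rewrite the token in ~/.kube/config.

    Running any kubectl command makes kubectl's auth provider fetch a fresh
    token and persist it back to the kubeconfig; a subsequent
    config.load_kube_config() in the Python client then picks it up.
    Returns the command executed, for logging.
    """
    cmd = ["kubectl", "get", "namespaces"]
    run(cmd, check=True, capture_output=True)
    return cmd
```

This is a workaround, not a fix; the underlying problem tracked in kubernetes-client/python-base#59 is that the client does not write the refreshed token back itself.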
