ServiceAccounts #7101
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed, please reply here (e.g. "I signed it!") and we'll verify.
(force-pushed from 467a6ce to 0b75524)
sorry for the massive cc, but a lot of you might be interested in this:
Really awesome
Haven't reviewed yet but may intersect what's being discussed in #6578 cc
func startComponents(etcdClient tools.EtcdClient, cl *client.Client, addr net.IP, port int) {
	machineList := []string{"localhost"}

	runApiServer(etcdClient, addr, port, *masterServiceNamespace)
	runScheduler(cl)
	runControllerManager(machineList, cl, *nodeMilliCPU, *nodeMemory)
	runServiceAccountController(cl)
The controller is also being run from controllermanager.go. Is that correct?
nm, I see that's the standalone kubernetes binary.
which means I need to put my PVC binder in the same place :) Glad I was reviewing this PR!
No, actually, I think this does get run twice.
You have this in controllermanager.go and that's run on line 165 above. Two controllers running.
I'm having a conversation with myself. runControllerManager runs the same controllers as controllermanager but they are not the same thing.
I suppose the question then is why a separate run method for your controller instead of rolling it into runControllerManager (which ideally wouldn't have duplication with controllermanager.go)?
// It must be of type SecretTypeServiceAccountToken
APISecret string `json:"apiSecret"`
// Secrets is the list of secrets allowed to be used by pods running using this ServiceAccount
Secrets []string `json:"secrets"`
So, apiSecret is for this service account to talk to kubernetes/openshift apiserver, and secret is to talk to other things, like, say, the GCE API or the github API?
Should we call it KubeSecret? Or do you not want to leak the k word into OS API (understandable)? Other ideas?
What are your thoughts on tradeoffs on this design versus just having one array of secret names, where one of the entries is by convention a secret of type SecretTypeServiceAccountToken? It does work around the naming problem.
Would some clusters ever have (I'm making this up, but hopefully you get the idea) a GithubSecretController which adds Github keys to pods based on a certain default Github project for the namespace? Or a GoogleCloudStorageController which adds GoogleCloudStorage bucket keys. Etc.
If that is the case, then those initializers will need to be allowed to edit the Secrets array, which requires that the array have patchStrategy:"merge" patchMergeKey:"foo".
What would foo be?
If we aren't sure what foo is, then I think this array needs to be a map.
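For illustration only (this PR keeps Secrets as a plain []string, and using "name" as the merge key here is an assumption, not something the PR defines), a merge-patchable shape might look like:

```go
package main

// Sketch only: if the entries were structs instead of strings, a strategic-merge-patchable
// Secrets field could be tagged like this so independent controllers can add or update
// their own entries without clobbering each other.
type SecretRef struct {
	// Name of the referenced secret; doubles as the merge key.
	Name string `json:"name"`
}

type ServiceAccount struct {
	Secrets []SecretRef `json:"secrets" patchStrategy:"merge" patchMergeKey:"name"`
}

func main() {
	sa := ServiceAccount{Secrets: []SecretRef{{Name: "api-token"}, {Name: "github-deploy-key"}}}
	_ = sa
}
```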
The apiSecret field did feel messy... open to suggestions. My thinking in making it a special field:
- It is sort of special... the secret it refers to has to be of a certain type, and two different service accounts shouldn't reference the same ServiceAccountToken secret.
- Explicitly naming the secret that holds the ServiceAccount's token lets the admission controller that sets up the VolumeMount do so based only on the info in the ServiceAccount, without also listing/searching secrets.
- Avoids ambiguity if there is more than one ServiceAccountToken secret for a ServiceAccount (which is intentionally possible)
As to the name, serviceAccountTokenSecret felt too long (though that might be better than apiToken now that I think about it), and tokenSecret not descriptive enough. If the token is usable against things running on top of Kube with the ability to identify the ServiceAccount from it, calling it a KubeSecret felt too constraining.
Would some clusters ever have ... which adds Github keys to pods ...
Sure. Controllers managing secrets (either to generate them, populate them, or rotate them) seems very reasonable to me.
Which requires that the array have a patchStrategy:"merge" patchMergeKey:"foo".
The patchStrategy stuff is new to me... I'll catch up and update
It is fairly new and explained somewhat in docs/api-conventions.md.
Excited about this PR.
I've been working on service accounts for system components (bootstrapping).
This is a perfect use case for integration testing.
What is the story about updating a token and its dependent processes? I guess the controller could just update the Secret when it is near expiry, and existing pods would keep running with their already mounted data, and newly created pods would use the new secret? Some other thing could make sure all the pods that used the old secret were re-made.
The service account tokens have to be more structured in order for the authenticator to look in the right namespace and secret to validate them. Assuming you generated them the same way the controller would, nothing would prevent you from doing so proactively. The admission plugin is cleanly separated from the account/token-creating controllers.
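A rough sketch of what that structure could look like (the sub and kubernetes.io/service-account.uid claims are described in this PR; the namespace and secret-name claim keys below are assumptions):

```go
package main

import "fmt"

func main() {
	// Illustrative claims only; values are placeholders. The idea is that the token
	// carries enough information for the authenticator to find the right namespace
	// and backing secret without listing or guessing.
	claims := map[string]string{
		"sub":                                       "serviceaccount:default:robot",
		"kubernetes.io/service-account.uid":         "c7e2f0a1-example-uid",
		"kubernetes.io/service-account.namespace":   "default",     // assumed claim name
		"kubernetes.io/service-account.secret-name": "robot-token", // assumed claim name
	}
	fmt.Println(claims)
}
```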
	e.deleteSecret(newSecret)
}

// Handle case where they changed the type or annotations
You need resourceVersion as well as UID to reliably detect changes, since these objects are not immutable.
I'm just making sure the old and new secret reference the same serviceAccount (both name and UID)... I don't think I care about the specific resourceVersion of the serviceaccount
To ponder:
If the source cluster was one of the claims in the token, you could still do cross cluster validation. Alternately, you could expire tokens (not my preference for tokens given to machine accounts) and just verify the token's signature on the remote clusters.
The same mechanism that would let you have a single signing (private) key to sign the tokens and multiple verifying (public) keys would allow:
(force-pushed from 9656370 to b9ab5ce)
@nikhiljindal I've run e2e locally, but hadn't completed a GCE run with this branch yet... working on that today
Jenkins will probably beat you to it now that this is merged. :)
Can you comment here on pass/fail, so I know whether to rebase on top of it or wait for the revert?
You're so supportive :)
aah Sorry. I saw the status/LGTM tag on the PR and assumed that it was ready to merge :)
On GCE:
On GKE-CI:
Looks like we need to add the ...
@liggitt ^^
@roberthbailey I'm having a hard time telling where GKE-CI gets its ...
Fixing it on our side.
@roberthbailey ok... the two changes that were needed for GCE were:
Not sure how GKE starts up ...
The second one gets picked up automatically. Changes to the bash scripts/config to create a cluster need to be mirrored in our provisioning code.
All fixed on our end: GKE-CI is now happy as well:
./hack/local-up-cluster broke for me after syncing this change. /tmp/kube-apiserver.log shows:
After some debugging I found that ...
I ran the following commands to fix it:
./hack/local-up-cluster works fine for me now.
@nikhiljindal nope, opened #8174
worked for me locally because that folder existed from a previous run (it gets created later by the apiserver to hold self-signed certs/keys)
Was this merged in 0.17.0?
Nope. It missed the release by a day or so.
OK, the 0.17.1 tag has just fixed that. Thanks
This is a first pass at ServiceAccount objects and associated tokens.
Overview:
- Tokens are mounted into containers at /var/run/secrets/kubernetes.io/serviceaccount
- ... kubernetes.io/serviceaccount
- Tokens identify the service account as serviceaccount:[namespace]:[service account name]
- The sub and kubernetes.io/service-account.uid claims from the token are used to build the user.Info
Short-term follow-ups:
Longer-term follow-ups:
- kubeconfig and API CA in addition to the token (requires knowing the CA bundle and hostname clients will need to use to contact the API... do we know that?) - Put Root CA Cert for cluster into all(ish) pods #7648
- ... /pods/[pod]/secrets to make this easier
Sample usage
To create a service account:
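In terms of the Go API object added by this PR (a sketch; names are illustrative and the import path assumes the current repo layout), a service account is just an almost-empty object; the controller takes care of the token:

```go
package main

import "github.com/GoogleCloudPlatform/kubernetes/pkg/api"

func main() {
	// Sketch only: a minimal ServiceAccount. Once created through the API, the token
	// controller is responsible for generating a ServiceAccountToken secret for it.
	sa := api.ServiceAccount{
		ObjectMeta: api.ObjectMeta{Name: "robot", Namespace: "default"},
	}
	_ = sa
}
```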
To delete a service account:
To create additional API tokens
A controller loop ensures a secret with an API token exists for each service account. To create additional API tokens for a service account, create a secret of type ServiceAccountToken with an annotation referencing the service account, and the controller will update it with a generated token:
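Sketched with the Go API types (the annotation key and the names below are assumptions, not necessarily what this PR uses), such a secret would look something like:

```go
package main

import "github.com/GoogleCloudPlatform/kubernetes/pkg/api"

func main() {
	// Sketch only: an empty ServiceAccountToken-type secret annotated with the service
	// account it belongs to; the token controller then populates the actual token data.
	// The annotation key "kubernetes.io/service-account.name" is an assumption.
	secret := api.Secret{
		ObjectMeta: api.ObjectMeta{
			Name:      "robot-token-extra",
			Namespace: "default",
			Annotations: map[string]string{
				"kubernetes.io/service-account.name": "robot",
			},
		},
		Type: api.SecretTypeServiceAccountToken,
	}
	_ = secret
}
```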
To delete/invalidate a service account token:
To set a pod's service account
Create or update the pod with the spec.serviceAccount field set to the name of a ServiceAccount. That field is set to the name of the default ServiceAccount if empty.
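As a sketch in the same Go API terms (pod and image names illustrative; the Go field name is inferred from the spec.serviceAccount JSON name):

```go
package main

import "github.com/GoogleCloudPlatform/kubernetes/pkg/api"

func main() {
	// Sketch only: a pod explicitly bound to the "robot" service account. Leaving
	// ServiceAccount empty would select the namespace's default service account.
	pod := api.Pod{
		ObjectMeta: api.ObjectMeta{Name: "robot-worker", Namespace: "default"},
		Spec: api.PodSpec{
			ServiceAccount: "robot",
			Containers:     []api.Container{{Name: "worker", Image: "busybox"}},
		},
	}
	_ = pod
}
```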
To mount the APISecret in containers
Ha! Trick step. You don't have to do anything... the ServiceAccount admission plugin automatically mounts the pod's spec.serviceAccount's ServiceAccountToken secret at /var/run/secrets/kubernetes.io/serviceaccount. That means containers can count on an API token existing at /var/run/secrets/kubernetes.io/serviceaccount/token
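So, inside a container, code can simply read the token off the filesystem when it wants to call the API; for example (error handling kept minimal):

```go
package main

import (
	"fmt"
	"io/ioutil"
)

func main() {
	// The admission plugin mounts the token at this well-known path, so in-cluster
	// code can read it and present it as a bearer token to the apiserver.
	token, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		fmt.Println("no service account token mounted:", err)
		return
	}
	fmt.Printf("read a %d-byte service account token\n", len(token))
}
```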