server: Add optional auth token #736
Conversation
Force-pushed from 1af70b9 to 3bf7431.
Code LGTM; I guess this can't hurt BYO either now, since we just support workers there. Looks sane.
Force-pushed from 3bf7431 to cd24821.
Just for general info: I just discovered that Ignition will retry in an infinite loop on HTTP status
Just writing up some more steps for testing/using this. After you do something like this, your next step is to
pkg/server/server.go (outdated)

@@ -31,10 +31,30 @@ type kubeconfigFunc func() (kubeconfigData []byte, rootCAData []byte, err error)
 // appenderFunc appends Config.
 type appenderFunc func(*ignv2_2types.Config) error

+// Error is returned by the GetConfig API
+type Error struct {
We might want to call it GetConfigError or something like that (it doesn't need uppercase to be exported, does it?)
(@runcom I thought uppercase was required for an exported type)
I like this naming suggestion, btw.
yeah, I realized only after commenting that I uppercased my suggestion 😄
Or possibly configError, to be explicit.
aws route53 issues

/retest
Now a much stronger elaboration of this PR would be for the MCO to support one-time-use tokens. Rather than having a secret, an admin (or automation) would do e.g.:
The secret name is the concatenation of … Then the HTTP request would have e.g. … The MCS would check for this secret's existence and, if it existed, accept the request and then delete the secret. This would also mean access to the EC2 metadata API from a pod couldn't gain a useful token. The machine API would need to learn how to generate per-instance userdata.
/retest
OK, all tests passed on this one, confirming my manual testing before submitting the PR that nothing depends on access to the

I'd like to consider landing this, as we're pretty sure it's not going to break anything, and it gives us the mechanism to gate access to
This is an optional hardening for access to Ignition: the installer generates a random key (separately for the master and worker pools) and installs it into the `openshift-machine-config-operator` namespace. If the MCS finds an `ignition-auth` secret with `master`/`worker` keys, it will use it: openshift/machine-config-operator#736

This PR just generates those secrets, so we can land it before the MCO PR as well.

Installer PR: openshift/installer#1740
/approve

This LGTM. Can this land separately from the installer PR? Do we need a go-ahead from the Auth team?
nits but those are nits 😄
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ashcrow, cgwalters, runcom
-func (cs *clusterServer) GetConfig(cr poolRequest) (*ignv2_2types.Config, error) {
-	mp, err := cs.machineClient.MachineConfigPools().Get(cr.machineConfigPool, metav1.GetOptions{})
+func (cs *clusterServer) GetConfig(cr poolRequest, auth string) (*ignv2_2types.Config, error) {
+	authSecretObj, err := cs.kubeClient.CoreV1().Secrets(componentNamespace).Get(ignitionAuth, metav1.GetOptions{})
The ability to configure multiple secrets would make later rotation much easier.
+// If there's a secret, require that it was passed as a query parameter.
+if authEnabled {
+	authSecret := string(authSecretObj.Data[name])
If an administrator forgets to configure an auth secret for a new machine pool, this will fail open.
That's intentional, for backwards-compatibility reasons.
/hold

This doesn't buy us much in terms of security, and it makes disaster recovery more difficult.
If access to the EC2 metadata endpoint is shut off (which it needs to be anyways), or in cases where the bootstrap config isn't accessible at all, I think this is fairly strong security. It entirely shuts down the problem. How is this not buying much?
Slightly... and if desired, we can switch this to just landing the code to enable auth tokens for both master/worker without actually enabling it by default. In other words, drop the change that disables the master pool. Further, keep in mind that this code is a step towards further integration with the machine API.
Ignition may contain secret data; pods running on the cluster shouldn't have access. This adds opt-in support for denying serving that data. It is disabled by default so we can check whether this would happen in any CI scenarios to start. Run `oc -n openshift-machine-config-operator create configmap machine-config-server provision-check=yes` to switch to enforcing mode.

First, we deny any request that appears to come from the pod overlay network. This closes off a lot of avenues without any risk. However, according to the SDN team, we can't guarantee that all in-cluster requests appear to originate from the pod network, particularly for machines that have multiple NICs. Hence, this PR also closes off access to any IP that responds on port 22, as that is a port that is:

- Known to be active by default
- Not firewalled

A previous attempt at this was to have an [auth token](openshift#736), but this fix doesn't require changing the installer or people's PXE setups. In the future we may reserve a port in the 9xxx range and have the MCD respond on it, so that admins who disable/firewall SSH don't have indirectly reduced security.
@cgwalters: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Closing in favor of #2223
Since the bootstrap node serves Ignition to masters, and today we don't support scaling masters up/down, just disable access to the master Ignition config in the in-cluster MCS by default.