Bug 1839621: Add an ability to proxy requests to Che Workspace #5332
Conversation
Hi @sleshchenko. Thanks for your PR. I'm waiting for an openshift member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/uncc @rhamilto
Do you have a doc for setup / testing this?
Force-pushed from 1956703 to af9aa8e (compare)
@spadgett @benjaminapetersen Regarding our security concerns around the backend endpoint, @sleshchenko and I came up with a potential solution: currently, the webhooks used by the operator to secure workspaces are owned by the controller, so removing the controller removes the webhooks as well. If we removed that ownerRef and kept the webhooks in the cluster when the controller is removed, this would provide security, assuming the webhooks aren't manually removed.
This leaves an opening for potential privilege escalation for users who can modify webhooks or workspace/status, but those are pretty nonstandard privileges. WDYT?
Force-pushed from ccdb51f to 44b2843 (compare)
Force-pushed from 07c98ab to 7793e45 (compare)
Included in PR description (see the Setup and Testing sections below; note that it's necessary to grant additional permissions to the console SA).
/ok-to-test
Added a few minor fixups.
Wanted to also share a process for testing changes locally: testing the backend proxy while running bridge locally.
Any guidance on these CI failures? I don't know how to resolve them.
Signed-off-by: Angel Misevski <amisevsk@redhat.com>
/retest
pkg/terminal/proxy.go (Outdated)

    r2.URL.Path = path
    ...
    // TODO a new proxy per request is created. Must be revised and probably changed
    terminalProxy := proxy.NewProxy(&proxy.Config{
I'm a little unclear why we need to use a proxy. If this is just a single, well-known endpoint for initializing the terminal, why not just call that endpoint directly from the backend? This would also tighten the security because you couldn't proxy with the token to other workspace paths.
Are there other requests we intend to proxy?
The requests currently proxied to the workspace are:
- .../exec/init to initialize a terminal
- .../activity/tick to allow scale-to-zero after inactivity — pings every minute (see Bug 1839233: send activity tick to keep the cloud shell terminal alive #5546)
We can probably just make those requests directly. I think the proxy may currently be design cruft from our initial goal of serving the proxied console frontend.
Removing the proxy may interfere with future plans to reuse some of the features provided by the front-end (e.g. saving history when reopening). @sleshchenko would have to confirm on that front.
request.Clone instead?
if it's relevant at this point.
I looked into writing out such a solution and feel like it introduces some unnecessary complexity and room for error (we'd basically be writing a simplified implementation of reverseProxy.ServeHTTP at a certain point). If there are no significant objections, I'd prefer to keep it as-is for now and potentially look into it in the future.
Regarding the security concerns:
- It's likely we'll have additional API paths on the workspace side for 4.6, as we intend to look into reusing some of the cloudshell UI
- Maliciously proxying the user's token to different paths is not a significant concern, since anything malicious you could do with an alternate path you could do with the default path (e.g. log the token on /activity/tick). If we want to restrict API paths, it'd be easier to check against a list of "approved" paths.
There may be some headers or other data processed by reverseProxy that's problematic and that I'm unaware of. If it's important I can work on this today and have changes by tomorrow. @benjaminapetersen @spadgett WDYT?
Do we need the token for these other requests? I'd expect only to need the token once on init.
We should only pass the token when we need to.
There may be some headers/other data that is processed by reverseProxy that's problematic that I'm unaware of.
It's very likely. We should not be passing cookies for instance. Probably some other headers could cause problems.
If there are only a handful of requests we'll need to make, it would be better to support only those explicitly. The proxy seems unnecessary to me here. I'd think removing the proxy would reduce complexity as opposed to adding to it.
@spadgett makes sense to me, I'll work on it.
Implemented, PTAL.
Signed-off-by: Angel Misevski <amisevsk@redhat.com>
Reduce duplication by extracting k8s cluster config struct to separate method. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
/retest
/retitle Bug 1839621: Add an ability to proxy requests to Che Workspace
@sleshchenko: This pull request references Bugzilla bug 1839621, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Limit /api/terminal/proxy endpoint from being a full reverseProxy to a workspace to simplify code. Signed-off-by: Angel Misevski <amisevsk@redhat.com>
@sleshchenko: This pull request references Bugzilla bug 1839621, which is valid. 3 validation(s) were run on this bug
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sleshchenko, spadgett. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
@sleshchenko: All pull requests linked via external trackers have merged: openshift/console#5332, openshift/console-operator#432. Bugzilla bug 1839621 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What does this PR do?
It solves https://bugzilla.redhat.com/show_bug.cgi?id=1839621
It adds two backend endpoints:
- /api/terminal/proxy/[namespace]/[workspace-name]/[path]: proxies requests on [path] to the workspace in the given namespace, attaching the user's bearer token. This endpoint is used to construct a kubeconfig in the user's workspace, allowing them to access command-line tools.
- /api/terminal/available/: returns 204 if the proxy is available, 503 otherwise. Used to check whether the endpoint above will process requests (see security considerations below).

This PR is needed for the OpenShift Console to open a terminal. See the screencast of how it works:
Screencast
Setup
Note: To ease testing, this PR includes some commits that include an early implementation of the frontend UI that will be used. These commits will be dropped from this PR before merging. The nature of the changes means it's much harder to test otherwise.
The built image can be used:
The image has a date in its tag so you can make sure it's up to date; it will be updated after any changes to the PR.
Setting WEBHOOK_ENABLED above to false should disable all endpoints.

Testing
Note: it's necessary to grant additional permissions to the console SA, as in openshift/console-operator#432.

Testing the /api/terminal/available/ endpoint: use curl from a terminal.
Should return 204 if WEBHOOK_ENABLED, 503 otherwise.

Testing the main proxy endpoint:
Cases:
- Log in as a cluster-admin user, click the terminal button, and wait for the workspace to be running. You should see a request sent to .../exec/init with a 403 response and the message "Terminal is disabled for cluster-admin users."
- Log in as a regular user and click the terminal button. You should see a request to .../exec/init with a 200 status, and the terminal should start. Once the terminal is initialized, oc whoami should list the current user as logged in. Note: a few front-end workarounds are required to make this work. You can also exec directly into the dev container in the workspace pod and test oc there.
- In both cases, if WEBHOOK_ENABLED=false when deploying the operator, requests to .../exec/init should return 403 with the message "Terminal endpoint is disabled: workspace operator is not deployed."

Workflow implementation details
The current implementation uses the following flow:
Related issues/PRs
It depends on openshift/console-operator#432
Security concerns
We're adding a proxy to the backend that submits a user's auth token to a URL defined in a status field. As a result, it's necessary to carefully consider cases to avoid sending the auth token to an unintended recipient. The current security model on the workspace operator side is:
To support these backend changes, requests to the new backend endpoints perform some checks before proxying the user's token (among them, a check against the openshift-operators namespace).