Support a console (bash shell) in a pod #78
it would be awesome to be able to open a shell inside a container in a pod from the console, doing the equivalent of 'oc exec ... bash' - then folks could noodle around inside containers to diagnose issues
This pretty much requires websocket support for exec, which doesn't exist yet. There is some work on this in openshift & kubernetes, so we should be able to support it once that lands.
Yeah, that would be awesome! We would also need a terminal emulator in JavaScript (something like terminal.js or the client part of butterfly).
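For illustration, a minimal sketch of what the browser side could look like, assuming term.js's documented Terminal API; the websocket endpoint here is hypothetical, since that is exactly the piece that doesn't exist yet:

```typescript
// Minimal sketch: wire a term.js terminal to a websocket carrying raw
// shell I/O. The endpoint URL is hypothetical - no such endpoint exists
// yet, which is the gap this issue is about.
declare const Terminal: any; // term.js exposes a global Terminal constructor

const socket = new WebSocket('wss://example.invalid/exec?pod=my-pod'); // hypothetical

const term = new Terminal({ cols: 80, rows: 24, screenKeys: true });
term.open(document.body);

// keystrokes -> server
term.on('data', (data: string) => socket.send(data));

// server output -> terminal
socket.onmessage = (ev: MessageEvent) => term.write(ev.data);
socket.onclose = () => term.write('\r\nconnection closed\r\n');
```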
We don't have direct access to the docker daemon, right? Otherwise we could use the websocket version of exec.
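For reference, the docker remote API does expose a websocket attach endpoint (GET /containers/{id}/attach/ws); exec itself only offers a hijacked-HTTP stream, not websockets, which is part of why the thread keeps coming back to proxies. A sketch of using attach, assuming a daemon that listens on TCP - which, as noted below, real clusters don't allow:

```typescript
// Sketch only: attach to a container's main process over docker's
// /attach/ws endpoint. This assumes the daemon listens on TCP
// (e.g. -H tcp://0.0.0.0:2375), which production clusters don't do -
// they expose only the local Unix socket. It is also only useful as a
// shell if the container's main process is a shell.
const containerId = 'abc123'; // hypothetical container id backing the pod

const ws = new WebSocket(
  `ws://localhost:2375/containers/${containerId}/attach/ws` +
  '?logs=0&stream=1&stdin=1&stdout=1&stderr=1'
);

ws.onopen = () => ws.send('ls -la\n');       // raw bytes in = stdin
ws.onmessage = (ev) => console.log(ev.data); // stdout/stderr come back out
```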
Here's an issue I raised ages ago: openshift/origin#3366
term.js was my original thought; I tried to knock up a PoC with it until the lack of websocket support stumped me.
@rhuss Can't use the docker daemon directly, no.
@jimmidyson Why not? At least when building and pushing stuff with the Maven plugin you go through the docker daemon directly. If you knew the container id for a certain pod, I wonder why calling an exec shouldn't work, too? (besides possible CORS issues ...)
In a real cluster the docker daemon is only accessible via a local Unix socket. No remote access.
OK, understood. What about the idea of an exec-proxy service like https://github.com/taskcluster/docker-exec-websocket-server ?
Not sure how that would work. Perhaps spin up the proxy container on the node the target pod is on, mount the docker socket (which means running as privileged) & proxy through to this new container? How would you target the proxy container onto the correct node? It would mean labeling nodes & using a node selector, but labels aren't normally configured so granularly as to target an individual node.
Just brainstorming ;-), but yeah, something like that. Isn't the node a pod is running on retrievable? But even then you are right: how do we start the proxy on a specific node? We could put a proxy into the application pod when starting it, but then there's the issue that we would have to run in privileged mode (and that the application pod must be prepared for it, too). Maybe this is all too complicated and we should push more on openshift/origin#3366 to get an 'official' solution. Does pure Kubernetes have the same problem?
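On the node-targeting question: the node a pod runs on is reported in its spec/status, and a new pod can be pinned to a specific node directly via spec.nodeName, bypassing the scheduler and node selectors entirely. A hypothetical sketch against the Kubernetes REST API - the server address, auth, names and image are all placeholders:

```typescript
// Sketch: create a proxy pod pinned to the node the target pod runs on.
// spec.nodeName bypasses the scheduler and places the pod directly on
// that node. All names, the namespace and the image are hypothetical.
const apiServer = 'https://kubernetes.example:6443'; // hypothetical
const token = process.env.TOKEN;                     // hypothetical auth

async function spawnProxyOn(nodeName: string): Promise<void> {
  const pod = {
    apiVersion: 'v1',
    kind: 'Pod',
    metadata: { name: 'exec-proxy', namespace: 'default' },
    spec: {
      nodeName, // pin to the target pod's node
      containers: [{
        name: 'proxy',
        image: 'example/docker-exec-websocket-server', // hypothetical image
        securityContext: { privileged: true },         // per the thread: docker socket needs privileged
        volumeMounts: [{ name: 'dockersock', mountPath: '/var/run/docker.sock' }],
      }],
      volumes: [{ name: 'dockersock', hostPath: { path: '/var/run/docker.sock' } }],
    },
  };

  await fetch(`${apiServer}/api/v1/namespaces/default/pods`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
    body: JSON.stringify(pod),
  });
}
```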
Yeah, same problem for kubernetes - this is all the same code. I would wait for it to be implemented in openshift/kubernetes.
They have this in cockpit already; I think they spawn a kubectl shell process and redirect stdout/stderr so they can serve it out to the frontend.
Is that how they do it? I thought they deploy cockpit shell/bridge containers on every node? |
they may, but they definitely spawn a kubectl process under the covers when you connect. |
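That approach is straightforward to sketch: spawn kubectl exec on the backend and shuttle its stdio over a websocket. A rough illustration, assuming a Node.js backend with the ws package (everything here is illustrative, and note the auth caveat raised in the next comment):

```typescript
// Sketch of the cockpit-style approach described above: spawn kubectl
// under the covers and shuttle its stdio over a websocket. Pod name and
// port are illustrative; a real terminal would also need a TTY (-t),
// which requires more plumbing than plain pipes.
import { spawn } from 'child_process';
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  // roughly the equivalent of: kubectl exec -i my-pod -- bash
  const kubectl = spawn('kubectl', ['exec', '-i', 'my-pod', '--', 'bash']);

  kubectl.stdout.on('data', (chunk) => ws.send(chunk)); // shell output -> browser
  kubectl.stderr.on('data', (chunk) => ws.send(chunk));
  ws.on('message', (msg) => kubectl.stdin.write(msg));  // keystrokes -> shell
  ws.on('close', () => kubectl.kill());
});
```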
Reading the only docs I can find, it looks like they also require disabling OpenShift auth. This is another reason to wait for proper support - propagating auth through separate processes is going to be tricky otherwise.
We have the shell now. Is there more to do or can we close this ticket? |