I have a question regarding SSH connections to a Kubernetes cluster and their scalability. Since the Kubernetes provider uses the `kubectl exec` command and an SSH tunnel over stdio to establish the connection, all connections are established using the kubeconfig information and the IP of the master node (or the proxy, in an HA Kubernetes cluster).
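For reference, the stdio tunnel described above can be sketched as an SSH `ProxyCommand` entry; the host name, namespace, pod name, and the command executed inside the pod are hypothetical placeholders, not DevPod's actual invocation:

```
# ~/.ssh/config — illustrative sketch only; names and the exec'd helper are assumptions
Host my-devpod
  User devpod
  # kubectl exec relays stdin/stdout through the apiserver to the pod,
  # so every SSH session is proxied by the cluster's API endpoint
  ProxyCommand kubectl exec -i -n devpod-ns pod/my-workspace -- <ssh-stdio-helper>
```

This is why each open workspace session holds a long-lived streaming connection through the apiserver rather than connecting to the pod directly.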
From this, I would like to ask what would happen if 1k users tried to connect to their devpods within the same cluster. Do you think the way DevPod connects to the cluster could become a scalability problem in the future? Do you think an ingress exposing the TCP/SSH port (or a NodePort) could solve the problem of too many connections to the same cluster? Moreover, keeping long-lived connections open may require nodes to support many I/O handlers.
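A NodePort Service of the kind alluded to above might look like the following sketch; the names, labels, and port numbers are illustrative assumptions, not a tested DevPod setup:

```yaml
# Sketch: expose a workspace's SSH port directly on the nodes,
# so sessions bypass the apiserver entirely.
apiVersion: v1
kind: Service
metadata:
  name: my-workspace-ssh
  namespace: devpod-ns
spec:
  type: NodePort
  selector:
    app: my-workspace      # assumed pod label
  ports:
    - name: ssh
      port: 22             # port the workspace's sshd listens on
      targetPort: 22
      nodePort: 30022      # reachable on any node's IP
```

The trade-off is that every workspace would need its own Service (or a TCP-capable ingress route), which is the configuration overhead mentioned later in this thread.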
Have you run a benchmark to test how many concurrent connections the current approach can handle?
Thanks in advance.
Regards
Javier
Hey @jsa4000, that's something to keep an eye on indeed.
We haven't tested the limits of this approach yet, but as you mentioned, the apiserver is a single point of failure.
If the apiserver goes down, no one is able to connect to their workspaces although the workloads aren't affected. We're currently looking into ways of circumventing this and connecting to the workspaces directly.
Exposing the workloads via an ingress would certainly help, but it would also be very cumbersome to configure.