
feat: K8s deployment. Enjoy! #384

Open
wants to merge 1 commit into master
Conversation

felipe241916

I've added a k8s manifest, along with the details in the readme file.

@rustdesk
Owner

rustdesk commented Mar 6, 2024

@paspo could you help review? I have no knowledge of k8s :)

@paspo
Contributor

paspo commented Mar 7, 2024

Thanks @felipe241916 for your contribution.

IMHO it's not yet time for official Kubernetes support. Felipe's work is not the issue here; in fact, he did what is possible with the current state of the project.

I'll try to explain each aspect.

Image used

Felipe chose the s6 image probably because of its more flexible environment management. The classic images are better because they contain only the required binaries (no shell, no extra binaries) and are consequently more secure.
Sadly, these images aren't yet ready to be used in Kubernetes, primarily because of secret management.
To fix that, we must agree on a more streamlined way to handle configuration inside the binaries (see #371), implement those modifications, and have our binaries fetch secrets from the correct place, as sketched below.
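For illustration only, here is a minimal sketch of what "fetch secrets from the correct place" could look like on the Kubernetes side once #371 lands: a Secret injected as environment variables via `secretKeyRef`. The variable names `KEY_PUB` and `KEY_PRIV` are hypothetical; today's binaries do not read them.

```yaml
# Hypothetical sketch: keep the key pair in a Kubernetes Secret...
apiVersion: v1
kind: Secret
metadata:
  name: rustdesk-keys
type: Opaque
stringData:
  id_ed25519.pub: "<public key>"    # placeholder values
  id_ed25519: "<private key>"
---
# ...and inject it into the container as environment variables.
# KEY_PUB / KEY_PRIV are hypothetical; the binaries would need the
# #371-style configuration changes to actually consume them.
apiVersion: v1
kind: Pod
metadata:
  name: hbbs-example
spec:
  containers:
    - name: hbbs
      image: rustdesk/rustdesk-server:latest   # classic image
      command: ["hbbs"]
      env:
        - name: KEY_PUB
          valueFrom:
            secretKeyRef:
              name: rustdesk-keys
              key: id_ed25519.pub
        - name: KEY_PRIV
          valueFrom:
            secretKeyRef:
              name: rustdesk-keys
              key: id_ed25519
```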

hbbs (signaling) and hbbr (relay) should be two separate deployments

This is somewhat related to the previous topic: you want two separate containers because you have a single signaling server and multiple relays.
Sadly, you can't split the s6 image or disable only one of its components, and this is a big limitation. If you plan a Kubernetes deployment, it's presumably because you have a big workload, so scaling has to be planned; see the sketch after this paragraph.
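A minimal sketch of that split, assuming the classic `rustdesk/rustdesk-server` image where `hbbs` and `hbbr` are started as separate commands (as in the official docker-compose example); names, ports, and replica counts are illustrative:

```yaml
# Sketch: hbbs (signaling) as its own Deployment, kept at a single replica.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hbbs
spec:
  replicas: 1
  selector:
    matchLabels: { app: hbbs }
  template:
    metadata:
      labels: { app: hbbs }
    spec:
      containers:
        - name: hbbs
          image: rustdesk/rustdesk-server:latest
          command: ["hbbs"]
          ports:
            - containerPort: 21115
            - containerPort: 21116            # ID registration, TCP
            - containerPort: 21116            # ID registration, UDP
              protocol: UDP
---
# Sketch: hbbr (relay) as a separate Deployment that can scale out.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hbbr
spec:
  replicas: 3    # illustrative: several relays behind one signaling server
  selector:
    matchLabels: { app: hbbr }
  template:
    metadata:
      labels: { app: hbbr }
    spec:
      containers:
        - name: hbbr
          image: rustdesk/rustdesk-server:latest
          command: ["hbbr"]
          ports:
            - containerPort: 21117
```

Keeping hbbr in its own Deployment is what makes it possible to scale the relays independently of the single signaling server.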

(minor issue) you are suggesting a specific storageClass

I know that the target of this feature is a Kubernetes admin who'll never apply a manifest without proper vetting, but there's no need to "propose" a specific storageClass (nfs-provisioner-storage in this case) when it isn't required; see the sketch below.
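A sketch of the neutral alternative: simply omit `storageClassName`, so the cluster's default StorageClass applies (claim name and size are illustrative):

```yaml
# Sketch: no storageClassName, so the cluster default is used instead of
# hard-coding something site-specific like nfs-provisioner-storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rustdesk-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```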

secrets are stored in PV, it's better to use k8s secrets

This is related to the first issue above, and it's the only way possible ATM. Still, a PV is not the place for secrets; they should be handled by Kubernetes Secrets, along the lines sketched below.
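A minimal sketch of that direction, assuming the binaries keep reading the key pair from `id_ed25519` / `id_ed25519.pub` in their working directory as they do today; the Secret name and mount path are illustrative:

```yaml
# Sketch: mount the key pair from a Kubernetes Secret instead of a PV.
apiVersion: v1
kind: Pod
metadata:
  name: hbbs-secret-mount-example
spec:
  containers:
    - name: hbbs
      image: rustdesk/rustdesk-server:latest
      command: ["hbbs"]
      workingDir: /data                  # illustrative: where hbbs looks for keys
      volumeMounts:
        - name: keys
          mountPath: /data
          readOnly: true
  volumes:
    - name: keys
      secret:
        secretName: rustdesk-keys        # Secret holding id_ed25519 / id_ed25519.pub
```

Note that mounting over the whole working directory hides anything else hbbs writes there, which is exactly why a configurable secret location, as discussed in #371, would be cleaner.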

final recap

@felipe241916: Please keep this PR in standby. Please see #371 and give your feedback.
Once we're OK with that proposal, the implementation can start, and it will give us a container that should be a better option for k8s deployment.

@rustdesk
Owner

rustdesk commented Mar 7, 2024

Thanks @paspo

@felipe241916
Author

Hi @paspo, thanks for the feedback.

Regarding your recommendations:

Image used,

You are correct: I used the s6 image for the flexibility of having a busybox container available, where I was able to bind some useful variables.

Initially I used a deployment with two separate containers, hbbs and hbbr. I had a problem exposing ports through a LoadBalancer with different protocols because I use a bare-metal environment. While looking for a solution I tested the s6 image, which seemed like a good idea at the time, but I believe it is possible to run the components separately using only the binaries. (One bare-metal workaround is sketched below.)
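For the mixed-protocol LoadBalancer problem on bare metal, one known workaround, assuming MetalLB is the load balancer, is to create two Services, one for TCP and one for UDP, that share a single address via the `metallb.universe.tf/allow-shared-ip` annotation; a sketch:

```yaml
# Sketch: two Services sharing one MetalLB address, one per protocol.
apiVersion: v1
kind: Service
metadata:
  name: hbbs-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: rustdesk   # same key on both Services
spec:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.10    # illustrative: both Services request the same IP
  selector:
    app: hbbs
  ports:
    - name: nat-test
      port: 21115
      protocol: TCP
    - name: id-reg-tcp
      port: 21116
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hbbs-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: rustdesk
spec:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.10    # illustrative: both Services request the same IP
  selector:
    app: hbbs
  ports:
    - name: id-reg-udp
      port: 21116
      protocol: UDP
```

On Kubernetes 1.26 and later, a single LoadBalancer Service can mix TCP and UDP ports directly (MixedProtocolLBService is GA there), which removes the need for this workaround.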

About storageClass,

I don't think I explained it in the best way: the intention wasn't exactly to propose a specific storageClass, but re-reading what I wrote, I agree that I gave that impression.

About secrets,

You're absolutely right, using secrets within a PV is far from ideal. I'm going to delve deeper into how to declare these secrets inside the containers to improve the deployment.

Finally,

Thank you again for evaluating my deployment. RustDesk has helped me a lot recently, and I'd also like to contribute positively to the project.

Thank you all!

@h0jeZvgoxFepBQ2C

Could you provide a Helm chart for this? It would be really great!

Also linking my discussion here: rustdesk/rustdesk#8384
