Is your feature request related to a problem? Please describe
Currently the KOTS install insists that secrets used to access external services be entered directly into the configuration. This causes problems because KOTS then stores those secrets as static values in its config files.
For any service or solution where secrets are rotated outside of KOTS, a redeploy will overwrite the live secrets with stale credentials from whatever was saved in the Gitpod configuration in KOTS.
This manifests most immediately when setting up ECR as a registry for Gitpod - today someone can configure a batch job to refresh the secret on a periodic basis outside of Gitpod itself. If Gitpod is redeployed, access to ECR is broken until the periodic job runs again and refreshes the secret.
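The out-of-band refresh described above is typically a Kubernetes CronJob along these lines - a hedged sketch, where the namespace, image, secret name, account ID, and region are all illustrative placeholders (the image would need both the AWS CLI and kubectl):

```yaml
# Sketch: refresh the ECR pull secret periodically, outside of Gitpod/KOTS.
# ECR authorization tokens expire after 12 hours, so an 8-hour schedule works.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-secret-refresh
  namespace: gitpod
spec:
  schedule: "0 */8 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ecr-secret-refresher  # needs RBAC to update secrets
          restartPolicy: OnFailure
          containers:
          - name: refresh
            image: example/aws-cli-kubectl:latest   # placeholder: aws + kubectl
            command:
            - /bin/sh
            - -c
            - |
              TOKEN=$(aws ecr get-login-password --region us-east-1)
              kubectl create secret docker-registry ecr-pull-secret \
                --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
                --docker-username=AWS \
                --docker-password="$TOKEN" \
                --dry-run=client -o yaml | kubectl apply -f -
```

If Gitpod instead referenced the `ecr-pull-secret` secret by name, a redeploy would never clobber what this job maintains.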
Describe the behaviour you'd like
In fields where a username/password or certificate is requested from the user, one should be able to select "use existing Kubernetes secret" and provide the name of a secret to be used.
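In KOTS terms this could look like the following Config fragment - a sketch of the requested behaviour, where the group and field names are illustrative, not Gitpod's actual config keys:

```yaml
# Hypothetical KOTS Config fragment: toggle between entering credentials
# inline and naming an existing Kubernetes secret. All names illustrative.
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: gitpod-config
spec:
  groups:
  - name: registry
    title: Container Registry
    items:
    - name: reg_credential_source
      title: Registry credentials
      type: select_one
      default: inline
      items:
      - name: inline
        title: Enter username and password
      - name: existing_secret
        title: Use existing Kubernetes secret
    - name: reg_username
      title: Username
      type: text
      when: '{{repl ConfigOptionEquals "reg_credential_source" "inline"}}'
    - name: reg_password
      title: Password
      type: password
      when: '{{repl ConfigOptionEquals "reg_credential_source" "inline"}}'
    - name: reg_existing_secret_name
      title: Existing secret name
      type: text
      when: '{{repl ConfigOptionEquals "reg_credential_source" "existing_secret"}}'
```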
The same applies to service accounts, pending support for IAM-style passwordless / real-time lookup of credentials - this is achieved with a ClusterRoleBinding to a Kubernetes service account, which then exposes credentials for those services via additional secrets mounted into that service's pod.
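For reference, consuming a pre-existing secret from a component pod only requires the secret's name - the deployment never needs the credential values themselves. A sketch with illustrative names:

```yaml
# Sketch: a component pod consuming a user-provided secret by name only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-component
spec:
  replicas: 1
  selector:
    matchLabels: { app: example-component }
  template:
    metadata:
      labels: { app: example-component }
    spec:
      serviceAccountName: example-component  # bound via ClusterRoleBinding if needed
      containers:
      - name: main
        image: example/component:latest      # placeholder image
        volumeMounts:
        - name: external-creds
          mountPath: /secrets/external
          readOnly: true
      volumes:
      - name: external-creds
        secret:
          secretName: user-provided-secret   # name supplied through the installer
```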
For certificates this also means a user can perform their certificate generation / Let's Encrypt verification (or use another third-party CA that works with cert-manager, like Venafi) out of band of the installation and then provide the finished certificate to Gitpod. This cert will still be renewed via cert-manager for them, and they don't have to worry about an expired SSL certificate embedded in their config file.
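With cert-manager, the out-of-band flow boils down to a Certificate resource whose target secret Gitpod would then reference by name - a sketch, with the issuer, namespace, and DNS names as placeholders:

```yaml
# Sketch: a certificate issued and renewed out of band of the installer.
# Gitpod would only need the resulting secret's name (gitpod-https-cert).
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: gitpod-https
  namespace: gitpod
spec:
  secretName: gitpod-https-cert   # renewed in place by cert-manager
  dnsNames:
  - gitpod.example.com
  - "*.gitpod.example.com"
  issuerRef:
    name: letsencrypt-prod        # or a Venafi/other third-party ClusterIssuer
    kind: ClusterIssuer
```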
Describe alternatives you've considered
Most of them involve deploying Gitpod in a technically functional way and then patching components - either before the installation fails, or, if it succeeds, patching things and then never hitting redeploy in KOTS until they are ready to re-apply that patching and the extra steps.
Additional context
Right now we're fighting our own installer because we're not exposing open-ended configuration options or ways to pass extra data to underlying components.
Cloud best practices involve not storing secrets in plaintext and making rotation possible; role bindings to service accounts and other utilities that rotate secrets for us are meant to help with those problems. Hard-coding values in our configuration file just so our installer doesn't have to directly support Kubernetes API calls (while we already shell out to kubectl to perform these actions) will continue to create more work for us, and will make deployment more difficult in situations where Gitpod has to conform to internal security requirements or be removed.
One open question is whether our components / services would automatically pick up changes to these secrets. It would not be beneficial if you configure the installer to use existing secrets, but when those secrets are changed the new values are not picked up by Gitpod components.
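Whether updates are picked up depends on how a secret is consumed: Kubernetes refreshes volume-mounted secrets in running pods (after a kubelet sync delay), while secrets injected as environment variables are read only at container start and require a pod restart. A sketch contrasting the two patterns, with illustrative names:

```yaml
# Sketch: the two secret-consumption patterns and their rotation behaviour.
apiVersion: v1
kind: Pod
metadata:
  name: secret-consumption-example
spec:
  containers:
  - name: main
    image: example/component:latest  # placeholder image
    env:
    - name: DB_PASSWORD              # snapshot at container start;
      valueFrom:                     # a rotated secret needs a pod restart
        secretKeyRef:
          name: db-credentials
          key: password
    volumeMounts:
    - name: registry-creds           # file contents are refreshed in place
      mountPath: /secrets/registry   # by the kubelet after the secret changes
      readOnly: true
  volumes:
  - name: registry-creds
    secret:
      secretName: registry-credentials
```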