
Airgapped installs of Gitpod force the use of installation registry #10298

Closed
2 tasks done
mrzarquon opened this issue May 27, 2022 · 2 comments · Fixed by #10685
Labels
feature: airgap · self-hosted · team: delivery · team: workspace · type: bug

Comments

@mrzarquon (Contributor) commented May 27, 2022

Bug description

Currently, if Gitpod is installed via a private registry (i.e. in an airgapped environment), we silently enforce that registry as the place to store user images by overwriting the Gitpod install YAML, instead of surfacing the option on the configuration screen.

Since we want users to use the KOTS installer screen, this information should at minimum be exposed there.

This also complicates situations where users can `docker pull` from a URL but, due to networking and proxy limitations, cannot fetch an HTTP status for the image from that same URL. The result is that Kubernetes can deploy from the registry, but registry-facade's own status checks against it fail.

Steps to reproduce

Install Gitpod in airgapped mode and block port 80 access to the target registry from the pod network, but not from the node network. The services can still be deployed (the nodes can pull images), but the services themselves, running in pods, cannot reach the same registry.
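One way to reproduce the split network view is with a NetworkPolicy sketched below (names and port list are illustrative, not part of any Gitpod manifest): pod egress on port 80 is denied while DNS and HTTPS stay open. Image pulls performed by the kubelet use the node network and bypass pod NetworkPolicies, so deployments still succeed.

```yaml
# Hypothetical reproduction aid: deny egress on port 80 from all pods
# in the gitpod namespace by only allowing DNS (53/UDP) and HTTPS (443/TCP).
# Kubelet-level image pulls are unaffected, mimicking the reported split.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-registry-http
  namespace: gitpod
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - ports:
        - port: 53
          protocol: UDP
        - port: 443
          protocol: TCP
```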

Workspace affected

No response

Expected behavior

The Container Registry section of our configuration page explains that this is where Gitpod needs to store container images, mostly for user workspaces. One would expect this to be a separate registry from the one Gitpod is installed from, configurable independently.

Example repository

No response

Anything else?

The fact that an airgapped Gitpod installation stores images built by users alongside the images it retrieves for its own services has multiple consequences:

  • It opens a security issue: if other vulnerabilities were found in the workspace generation process, a user-generated image could be deployed in place of a Gitpod service (e.g. a custom version of agent-smith).
    • Ideally, Gitpod's service registry would be read-only to Gitpod itself, ensuring it cannot overwrite its own services, whether by accident or intentionally.
  • Gitpod's service images and end-user workspace images have different lifecycle needs: workspace images can grow exponentially in number and size, while service images remain static, with monthly updates and pruning of previous versions. Depending on the kinds of registries customers use, these could be entirely different registries for optimal cost savings and auditing.

Scenarios

There are two different uses of registries for Gitpod: Service Images and User Images. A solution should consider the following combinations:

  • Same registry, same credentials, same namespace, because it's all considered untrusted content (current scenario)
  • Same registry, same credentials, different namespaces for Service vs User: the namespaces enforce restrictions, and the gitpod user only has read access to Service images
    • This implies that a different process and/or credentials are used to upload the airgapped images, so we can't assume we can copy the information from KOTS
  • Same registry, different credentials, same namespace: much less likely, but still feasible
  • Different registry, different credentials, same namespace: artifactory/gitpod for services, acr/gitpod for images
  • Different registry, different credentials, different namespace
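A configuration surface covering these scenarios might look like the following sketch. All field names here are illustrative assumptions, not the actual Installer/KOTS schema:

```yaml
# Illustrative only: separate registries, credentials, and namespaces
# for service images vs user workspace images.
serviceRegistry:
  url: artifactory.internal/gitpod     # where Gitpod's own services live
  pullSecretName: service-registry-ro  # read-only credentials for Gitpod
userRegistry:
  url: acr.internal/gitpod             # where built workspace images go
  pullSecretName: workspace-registry-rw # read-write credentials
```

Collapsing both sections to the same values would recover the current single-registry behaviour, so the existing scenario remains expressible.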

Dependent Tasks

@mrsimonemms (Contributor) commented

Good call @mrzarquon. The reason for this is actually a limitation in image-builder-mk3. It's a bit of a tricky one to explain, so strap in...

Context

In a normal Kubernetes deployment, imagePullSecrets is a list of references to registry-credential secrets. How that works is:

  1. try to pull the image with no credentials
  2. loop through the secrets and try to pull with each set of credentials

This will either exit successfully if it can pull the image, or exit with an error code and bubble that up to the Kubernetes control plane for a human to decide what to do.
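For reference, the standard Kubernetes shape looks like this, where a Pod lists several secrets and the kubelet tries each in turn after an anonymous attempt (the secret and image names below are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workspace
spec:
  imagePullSecrets:          # tried in order after an anonymous attempt
    - name: installer-registry
    - name: workspace-registry
  containers:
    - name: workspace
      image: registry.internal/workspace:latest
```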

The problem

The image-builder-mk3 only accepts a single pullSecret. Consequently, it is unable to cycle through multiple pull secrets.

My understanding is that it first tries without any credentials and then tries with the given credentials. This subtle difference between image-builder-mk3 and Kubernetes' imagePullSecrets is the underlying problem. For non-airgapped installations this works: Gitpod's own images are pulled from our container registry without credentials, and workspace images are pulled from the user's private registry, injecting the pull secret when anonymous authentication fails.

For an airgapped installation, all images come from a private registry whose pullSecret differs from the workspace registry's. When anonymous authentication fails, image-builder-mk3 has only a single secret with which to attempt authentication.

The solution

This is dependent upon @gitpod-io/engineering-workspace amending image-builder-mk3 so that it can accept an array of strings in the pullSecret config parameter (perhaps also renaming to pullSecrets). Once this is done, we should be able to do this ticket.

@lucasvaltl (Contributor) commented

Currently blocked by #10396
