Proposal: Add mattmoor/kontext to repo #53
Comments
This would be nice, but I wonder how Tekton would be able to access it if it's not in a remote data plane somewhere.
Yeah, we need to have a story on pipeline too for binary or source-to-image builds without a git resource.
So would this belong here or somewhere else?
@chmouel would Tekton not have access to a container registry?
@poy it would be very useful to skip the registry and stream the packaged local directory directly to a pipeline.
@siamaksade I haven't given it deep thought, but my initial gut says that has quite a few edge cases that could make it brittle. Can you retry a pipeline if something goofs? Where is the directory stored? How does auth work? It seems like that makes the instance stateful, which normally we wouldn't want.
Agree that retry wouldn't make sense for this use case. The use case is to allow a developer to run their local changes through the pipeline, for example on minikube, before committing them to the git repository.
Is auth a concern then?
How do you mean?
@siamaksade Using a container registry to store the local directory implies the developer has write access to the registry, so we can just piggyback on that auth. However, if we push directly to the pipeline pod, then the pod will have to have an external IP with an exposed endpoint. Ideally this endpoint is also secured somehow, but that means we'll have to solve for that.
Related issue upstream (I think): tektoncd/pipeline#924
Streaming the build context to the Pod is (probably?) going to be secure if proxied by the API server, but that creates unnecessary API server load (I believe the OpenShift folks pointed this out early on). The other key thing that kontext was meant to experiment with was leveraging layering to make incremental rebuilds faster: if you touch a single file, you could augment your prior upload with a single-file layer by extracting a manifest from the prior kontext image and computing the delta. This would mean that if the Build hit the same node on-cluster, the only file transfer would end up being the layer with the single file. Personally, I also like the simplicity of the provenance story when you build from a kontext container's digest. Sorry for the brain dump, but happy to discuss more if needed.
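The incremental-rebuild idea above can be sketched in a few lines of plain Python (this is a conceptual sketch, not kontext's actual implementation; `snapshot` and `delta_layer` are hypothetical names): keep a manifest of per-file content hashes from the prior upload, then ship only the files whose hashes changed as a thin tar layer.

```python
import hashlib
import io
import tarfile


def snapshot(files: dict[str, bytes]) -> dict[str, str]:
    """Manifest mapping each path to the SHA-256 of its content."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}


def delta_layer(prev_manifest: dict[str, str], files: dict[str, bytes]) -> bytes:
    """Tarball containing only the files added or changed since prev_manifest."""
    changed = {
        path: data
        for path, data in files.items()
        if prev_manifest.get(path) != hashlib.sha256(data).hexdigest()
    }
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path, data in sorted(changed.items()):
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()
```

Touching a single file then produces a layer containing only that file, which is what would keep the on-cluster transfer small when the Build lands on a node that already has the prior layers.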
This is what we do for what we call "binary" builds in OpenShift, but yes, there are open (but as yet unrealized) concerns about apiserver load.
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle stale |
/remove-lifecycle rotten |
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
I built a simplified form of this into github.com/mattmoor/mink. It now supports uploading a multi-arch version of kontext, and I've used it to run kaniko builds on Tekton against amd64 and arm64 clusters. /lifecycle frozen
mattmoor/kontext is a useful way to package a local directory up and store it in a container registry.
This is useful for when a user would like to upload a directory without requiring access to a new data plane (e.g., GCP's Cloud Storage).
/cc @mattmoor