
Proposal: Add mattmoor/kontext to repo #53

Open
poy opened this issue May 31, 2019 · 21 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@poy commented May 31, 2019

mattmoor/kontext is a useful way to package up a local directory and store it in a container registry.

This is useful when a user would like to upload a directory without requiring access to a new data plane (e.g., GCP's Cloud Storage).

/cc @mattmoor
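
For illustration, here is a minimal sketch of the idea (not kontext's actual API): it uses google/go-containerregistry to wrap a tarball of the local directory as a single image layer and push it to a registry. The tarball name and image reference are placeholders.

```go
// Sketch only: bundle a local directory (pre-tarred as context.tar) into a
// one-layer image and push it to a registry the developer can already write to.
// Not kontext's real API; uses github.com/google/go-containerregistry.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// context.tar is assumed to be a tar of the local source directory,
	// e.g. produced with `tar -cf context.tar .`
	layer, err := tarball.LayerFromFile("context.tar")
	if err != nil {
		log.Fatal(err)
	}

	// Start from an empty image and add the directory contents as one layer.
	img, err := mutate.AppendLayers(empty.Image, layer)
	if err != nil {
		log.Fatal(err)
	}

	// Push to a registry the developer already has credentials for; a Tekton
	// task can later pull this image by digest to recover the source tree.
	if err := crane.Push(img, "registry.example.com/dev/context:latest"); err != nil {
		log.Fatal(err)
	}
}
```

A Task on the cluster would then pull that reference (ideally by digest) and extract the layer into its workspace.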

@vdemeester added the kind/feature label May 31, 2019
@chmouel (Member) commented May 31, 2019

This would be nice, but I wonder how Tekton would be able to access it if it's not in a remote data plane somewhere.

@vdemeester (Member)

Yeah, we need to have a story on Pipeline too for binary or source-to-image builds that don't use a git resource.

@poy (Author) commented May 31, 2019

So would this belong here or somewhere else?

@poy (Author) commented May 31, 2019

@chmouel would Tekton not have access to a container registry?

@siamaksade

@poy it would be very useful to skip the registry and stream the packaged local directory directly to a pipeline.

@poy (Author) commented Jun 4, 2019

@siamaksade I haven't given it deep thought, but my initial gut reaction is that this has quite a few edge cases that could make it brittle. Can you retry a pipeline if something goofs? Where is the directory stored? How does auth work?

It seems like that would make the instance stateful, which we normally wouldn't want.

@siamaksade

Agreed that retry wouldn't make sense for this use case. The use case is to allow a developer to run their local changes through the pipeline (for example, on minikube) before committing them to the git repository.

@poy (Author) commented Jun 4, 2019

Is auth a concern then?

@siamaksade

How do you mean?

@poy (Author) commented Jun 14, 2019

@siamaksade Using a container registry to store the local directory implies the developer has write access to the registry, so we can just piggyback on that auth.

However, if we push directly to the pipeline pod, then the pod has to have an external IP with an exposed endpoint. Ideally that endpoint is also secured somehow, which means we'd have to solve for that as well.
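
To make the piggybacking concrete, here is a small sketch (again assuming go-containerregistry; the registry reference is a placeholder): the push resolves credentials from the developer's existing keychain, so no new endpoint or auth mechanism is needed.

```go
// Sketch only: push the bundled context using the developer's existing
// registry credentials (the same keychain Docker uses), rather than exposing
// an authenticated endpoint on the pipeline pod. The reference is a placeholder.
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	ref, err := name.ParseReference("registry.example.com/dev/context:latest")
	if err != nil {
		log.Fatal(err)
	}
	// empty.Image stands in for the bundled context image from the earlier sketch.
	if err := remote.Write(ref, empty.Image,
		remote.WithAuthFromKeychain(authn.DefaultKeychain)); err != nil {
		log.Fatal(err)
	}
}
```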

@vdemeester (Member)

Related issue upstream (I think): tektoncd/pipeline#924

@mattmoor (Member)

Streaming the build context to the Pod is (probably?) going to be secure if proxied by the API server, which creates unnecessary API Server load (I believe the OpenShift folks pointed this out early in the knative/build days, cc @bparees), and it's unclear that cluster-admins would allow this in general. I'm also unsure whether this would work with mTLS enabled on a cluster with a mesh (worth testing), especially since, post-initContainers, Tekton can support in-mesh builds (we had folks interested in this in the knative/build days). Another implication is that clients must wait for builds to schedule before hanging up, which is exacerbated in multi-build pipelines where the same context may be used by multiple phases (you have to wait for all tasks to schedule).

The other key thing that kontext was meant to experiment with was leveraging layering to make incremental rebuilds faster, so if you touch a single file, you could augment your prior upload with a single-file layer by extracting a manifest from the prior kontext image and computing the delta. This would mean that if the Build hit the same node on-cluster, the only file transfer would end up being the layer with the single file. Personally, I also like the simplicity of the provenance story when you build from a kontext container's digest.

Sorry for the brain dump, but happy to discuss more, if needed.
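
To illustrate the incremental-rebuild idea, here is a minimal sketch (assuming go-containerregistry; the references and file name are placeholders, and a real implementation would compute the changed files by diffing against the prior image's manifest):

```go
// Sketch only: append a layer containing just the changed file to the prior
// context image and push under a new tag. Unchanged layers are already in the
// registry, so only the new single-file layer has to cross the wire.
package main

import (
	"archive/tar"
	"bytes"
	"io"
	"log"
	"os"

	"github.com/google/go-containerregistry/pkg/crane"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// singleFileLayer wraps one file in a tar stream and returns it as an image layer.
func singleFileLayer(path string) (v1.Layer, error) {
	buf := &bytes.Buffer{}
	tw := tar.NewWriter(buf)
	info, err := os.Stat(path)
	if err != nil {
		return nil, err
	}
	hdr, err := tar.FileInfoHeader(info, "")
	if err != nil {
		return nil, err
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return nil, err
	}
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	if _, err := io.Copy(tw, f); err != nil {
		return nil, err
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return tarball.LayerFromReader(bytes.NewReader(buf.Bytes()))
}

func main() {
	prev, err := crane.Pull("registry.example.com/dev/context:v1") // prior upload
	if err != nil {
		log.Fatal(err)
	}
	layer, err := singleFileLayer("changed.txt") // only the touched file
	if err != nil {
		log.Fatal(err)
	}
	img, err := mutate.AppendLayers(prev, layer)
	if err != nil {
		log.Fatal(err)
	}
	// Push under a new tag; the registry deduplicates the existing layers.
	if err := crane.Push(img, "registry.example.com/dev/context:v2"); err != nil {
		log.Fatal(err)
	}
}
```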

@bparees commented Jun 14, 2019

Streaming the build context to the Pod is (probably?) going to be secure if proxied by the API server, which creates unnecessary API Server load (I believe the OpenShift folks pointed this out early in the knative/build days, cc @bparees),

This is what we do for what we call "binary" builds in OpenShift, but yes, there are open (but as yet unrealized) concerns about apiserver load.

@tekton-robot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot (Contributor)

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot (Contributor)

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot added the lifecycle/stale label Jul 28, 2020
@tekton-robot (Contributor)

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tekton-robot added the lifecycle/rotten label Jul 28, 2020
@vdemeester reopened this Jul 29, 2020
@vdemeester (Member)

/remove-lifecycle stale

@tekton-robot removed the lifecycle/stale label Jul 29, 2020
@vdemeester (Member)

/remove-lifecycle rotten

@tekton-robot removed the lifecycle/rotten label Jul 29, 2020
@tekton-robot (Contributor)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot added the lifecycle/stale label Oct 27, 2020
@mattmoor (Member)

I built a simplified form of this into github.com/mattmoor/mink. It now supports uploading a multi-arch version of kontext, and I've used it to run kaniko builds with Tekton against amd64 and arm64 clusters.

/lifecycle frozen

@tekton-robot added the lifecycle/frozen label and removed the lifecycle/stale label Oct 27, 2020