kaniko info in readme is incorrect #43
FYI, here's the original kanikache proposal: GoogleContainerTools/kaniko#300
Sorry for the inaccuracy; we will fix our README. Regarding the cache part, we meant to advertise the #!COMMIT feature, but the words came out wrong... Our design here is actually very similar - we also use the target registry for the layer cache. The only difference is that we use redis for cache key-value storage with a TTL (I think kaniko uses registry tags?), mostly because our internal registry's tag storage cannot handle mutation. We would love to join the working group and discuss in person :)
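For context, here is a minimal sketch of the #!COMMIT annotation mentioned above (a hypothetical Dockerfile; the makisu README is authoritative for the exact semantics of explicit commits):

```dockerfile
# Hypothetical illustration of makisu's explicit-commit mode: only RUN steps
# annotated with #!COMMIT are committed as layers, and unannotated steps are
# folded into the next committed layer. Package names are placeholders.
FROM python:3.7-slim
RUN apt-get update
RUN apt-get install -y --no-install-recommends gcc #!COMMIT
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt #!COMMIT
```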
@yiranwang52 very cool, I'd love to hear more. FWIW, I still feel as though customizing the layering of produced images is one of the major unexplored frontiers with kaniko (one that I'd like to see us explore), and a major longstanding issue with Dockerfile. The most common heuristic I've seen is "collapse it all". However, this approach is still beholden to Dockerfile directives as the finest resolution for commits: I can't, for example, break a single directive up into finer-grained commits. If you join the working group call, we can dig into this in more detail.
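To make that layering limitation concrete, a hypothetical example (not from the original comment): a single RUN that installs an entire requirements file yields exactly one layer, so no individual dependency can be cached or invalidated on its own.

```dockerfile
# Hypothetical Dockerfile illustrating the granularity limit: one directive,
# one layer. Every package pip installs here lands in the same layer, so
# changing any single entry in requirements.txt invalidates and re-uploads
# the whole thing.
FROM python:3.7-slim
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
```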
@mattmoor looking at https://docs.google.com/document/d/1pXfg6pLPpQoIb5_E6PeVWHLttL2YgUPX6ysoqmyVzik/edit#heading=h.j5f2ov1fvu9f, what if we run FTL inside a build container (essentially splitting FTL into a list of RUN commands) so we get the benefits of caching language-specific dependency layers?
(assuming I understand your suggestion correctly) If Dockerfile synthesis is possible, and you can adequately split your "planning" from your "doing", then you may be able to abuse Dockerfile to achieve this goal. I have a(n internal) doc where I use a Dockerfile to convey the ideas behind FTL. Note that this assumes that your per-layer construction can be made completely deterministic, etc. I wasn't aware there were public docs on Python FTL, so that's neat :)
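To sketch that idea (hypothetical output, assuming the planning step can deterministically enumerate dependencies from a lockfile), the "planning" phase would synthesize a Dockerfile whose "doing" is just one RUN per dependency, so each dependency lands in its own cacheable layer:

```dockerfile
# Hypothetical Dockerfile emitted by a planning step that has already parsed
# requirements.txt; each dependency is installed in its own RUN so it becomes
# its own cacheable layer. Package versions shown are placeholders.
FROM python:3.7-slim
RUN pip install --no-deps flask==1.0.2
RUN pip install --no-deps six==1.11.0
COPY . /app
```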
Hey, cool to see more tools emerging in this space. I'm the original author of the Bazel stuff, and Uber TL'd kaniko.
This isn't really accurate. kaniko heavily uses https://github.com/google/go-containerregistry, which is a generic container registry library (also used in skaffold, buildpacks v3, Knative, ko), and for auth uses a "keychain" that mimics the Docker keychain.
Because setting up auth (esp. in a container) can be a royal pain, the tool provides options for making things smoother via GCP credentials, as well as Azure and AWS credentials.
In fact, the only "tight" integration with Kubernetes I'm aware of is that it falls back on Kubelet-style authentication using Node identity (instead of anonymous) if the standard Docker keychain resolution fails to find a credential (think: universal credential helper).
I'm curious if you have followed the "kanikache" work, where kaniko leverages the final Docker registry as a distributed cache? I'd be surprised if a redis-based cache outperformed this because, while redis is fast, the registry yields no-copy caching: kaniko won't even download the image if the only remaining directives are metadata manipulation. This is mostly done, but there are a few places left that the team is working on optimizing.
For lack of a better forum to ask, I figured I'd reach out and see if you would be interested in coming to the Knative Build working group to talk about makisu? While the general focus is on Knative Build and Pipelines, this group is deeply interested in safe on-cluster Build, and typically has representatives from related groups (buildah, kaniko, buildpacks). We've had presentations on all three in the past, so I'd love to hear more about makisu.
cc @imjasonh who typically runs these meetings. Also feel free to reach out over email (my github handle at google.com), or find me on Knative slack (same handle) if you want to chat or exchange tips/tricks for building container images.
Again, very cool to see this :)