
kaniko info in readme is incorrect #43

Closed
mattmoor opened this issue Nov 18, 2018 · 5 comments
Labels
documentation Related to documentation or examples

Comments

@mattmoor

Hey, cool to see more tools emerging in this space. I'm the original author of the Bazel stuff, and Uber TL'd kaniko.

"Kaniko is tightly integrated with Kubernetes, and manages secrets with Google Cloud Credential..."

This isn't really accurate. kaniko heavily uses https://github.com/google/go-containerregistry, which is a generic container registry library (also used in skaffold, buildpacks v3, Knative, ko), and for auth it uses a "keychain" that mimics the Docker keychain.

Because setting up auth (especially in a container) can be a royal pain, the tool provides options to make things smoother with GCP credentials, but also Azure and AWS credentials.

In fact, the only "tight" integration with Kubernetes I'm aware of is that it falls back on Kubelet-style authentication using Node identity (instead of anonymous) if the standard Docker keychain resolution fails to find a credential (think: universal credential helper).
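
To make that concrete, here's a minimal sketch of Docker-keychain-style resolution with go-containerregistry (not kaniko's actual code; the image reference is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	// Placeholder reference; any registry the keychain can resolve works.
	ref, err := name.ParseReference("gcr.io/my-project/my-image:latest")
	if err != nil {
		log.Fatal(err)
	}

	// authn.DefaultKeychain mimics the Docker CLI's resolution: it reads
	// ~/.docker/config.json and invokes any configured credential helpers,
	// falling back to anonymous if nothing matches. A builder can layer
	// extra keychains (e.g. node identity) behind this one with
	// authn.NewMultiKeychain.
	img, err := remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
	if err != nil {
		log.Fatal(err)
	}

	digest, err := img.Digest()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(digest)
}
```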

"However, Makisu's more flexible caching features make it optimal for higher build volume across many repos and developers."

I'm curious whether you have followed the "kanikache" work, where kaniko leverages the final Docker registry as a distributed cache? I'd be surprised if a Redis-based cache outperformed this, because while Redis is fast, the registry yields no-copy caching: kaniko won't even download the image if the only remaining directives are metadata manipulation. This is mostly done, but there are a few places left that the team is still optimizing.
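
As a rough illustration of the registry-as-cache idea (hypothetical cache repo and key derivation; not the real kaniko implementation), the builder fingerprints a step and probes the registry before executing it:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// cacheKey is a stand-in for however the builder fingerprints a step:
// here, a hash over the base image digest and the directive text.
func cacheKey(baseDigest, directive string) string {
	sum := sha256.Sum256([]byte(baseDigest + "|" + directive))
	return fmt.Sprintf("%x", sum)[:32]
}

func main() {
	key := cacheKey("sha256:deadbeef", "RUN apt-get update")

	// Probe the cache repo for a tag named after the key. A HEAD request
	// is enough to learn whether the cached result exists; on a hit, the
	// layer never needs to be copied out of the registry.
	ref, err := name.ParseReference("gcr.io/my-project/cache:" + key)
	if err != nil {
		log.Fatal(err)
	}
	if desc, err := remote.Head(ref); err == nil {
		fmt.Println("cache hit:", desc.Digest)
	} else {
		fmt.Println("cache miss; run the step and push the result")
	}
}
```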


For lack of a better forum to ask, I figured I'd reach out and see if you would be interested in coming to the Knative Build working group to talk about makisu? While the general focus is on Knative Build and Pipelines, this group is deeply interested in safe on-cluster Build, and typically has representatives from related groups (buildah, kaniko, buildpacks). We've had presentations on all three in the past, so I'd love to hear more about makisu.

cc @imjasonh who typically runs these meetings. Also feel free to reach out over email (my github handle at google.com), or find me on Knative slack (same handle) if you want to chat or exchange tips/tricks for building container images.

Again, very cool to see this :)

@mattmoor
Author

FYI, here's the original kanikache proposal: GoogleContainerTools/kaniko#300

@yiranwang52
Collaborator

Sorry for the inaccuracy; we will fix our README.

Regarding the cache part, we meant to advertise the #!COMMIT feature, but the words came out wrong. Our design is actually very similar: we also use the target registry as the layer cache. The only difference is that we use Redis for cache key-value storage with a TTL (I think kaniko uses registry tags?), mostly because our internal registry's tag storage cannot handle mutation.
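
Concretely, the key-value side looks roughly like this (a minimal sketch assuming the go-redis client, with hypothetical key and digest values; not our actual cache code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Map a cache key (e.g. a hash of the directive chain) to the digest
	// of the cached layer in the target registry. The TTL lets stale
	// entries age out without ever mutating a registry tag.
	key := "makisu:cache:deadbeef"
	layer := "sha256:0123456789abcdef"
	if err := rdb.Set(ctx, key, layer, 14*24*time.Hour).Err(); err != nil {
		panic(err)
	}

	// On a later build, a hit returns the layer digest to pull from the
	// registry; redis.Nil signals a miss.
	val, err := rdb.Get(ctx, key).Result()
	switch {
	case err == redis.Nil:
		fmt.Println("cache miss")
	case err != nil:
		panic(err)
	default:
		fmt.Println("cache hit:", val)
	}
}
```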

We would love to join the working group and discuss in person :)

@mattmoor
Author

@yiranwang52 very cool, I'd love to hear more.

FWIW, I still feel as though customizing the layering of produced images is one of the major unexplored frontiers (that I'd like to see us explore) with kaniko, and a major longstanding issue with Dockerfile.

The most common heuristic I've seen is "collapse it all", and #!COMMIT is a very nice middle ground (I wonder if we can get at it with the standard buildkit parser?).
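
Since the marker rides in what the shell would treat as a trailing comment, I'd expect a stock Dockerfile parser to pass it through as part of the RUN command, so detecting it can be as simple as a string scan. A toy sketch (not Makisu's actual parser):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// A toy Dockerfile using the #!COMMIT annotation to force a layer
// boundary only where one is wanted.
const dockerfile = `FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y curl #!COMMIT
RUN rm -rf /var/lib/apt/lists/* #!COMMIT
`

func main() {
	scanner := bufio.NewScanner(strings.NewReader(dockerfile))
	for n := 1; scanner.Scan(); n++ {
		line := strings.TrimSpace(scanner.Text())
		// Directives carrying the trailing marker end a layer; everything
		// since the previous commit gets squashed into it.
		if strings.HasSuffix(line, "#!COMMIT") {
			fmt.Printf("line %d commits a layer: %s\n", n,
				strings.TrimSpace(strings.TrimSuffix(line, "#!COMMIT")))
		}
	}
}
```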

However, this approach is still beholden to Dockerfile directives as the finest resolution for commits. I can't, for example, break up RUN pip install so that each .whl file's output gets its own layer, which we use as part of a technique called "FTL" (Bazel is capable of something similar with py_image and several other {lang}_image rules; it was the original proving ground for many of these ideas).
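
To sketch the per-wheel idea with go-containerregistry (a stub base image and hypothetical per-wheel tarballs; not FTL's actual code), each wheel's installed output becomes its own appended layer:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/random"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// Stand-in for a real base image (e.g. pulled via remote.Image).
	img, err := random.Image(0, 0)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical tarballs, each holding the installed output of one
	// wheel. If layer construction is deterministic, identical wheels
	// always produce identical (and therefore shareable) layers.
	for _, tb := range []string{"werkzeug.tar", "jinja2.tar", "flask.tar"} {
		layer, err := tarball.LayerFromFile(tb)
		if err != nil {
			log.Fatal(err)
		}
		img, err = mutate.AppendLayers(img, layer)
		if err != nil {
			log.Fatal(err)
		}
	}

	digest, err := img.Digest()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("per-wheel layered image:", digest)
}
```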

If you join the knative-users google group (self-service) you should be able to peruse this deck on FTL: https://docs.google.com/presentation/d/1rZNK_Lb2NM0xm-AfQE0W1DX96uevtdHksz40j-XPzhI/edit (it foreshadows the birth of kaniko at the end).

@evelynl94
Collaborator

@mattmoor looking at https://docs.google.com/document/d/1pXfg6pLPpQoIb5_E6PeVWHLttL2YgUPX6ysoqmyVzik/edit#heading=h.j5f2ov1fvu9f, what if we ran FTL inside a build container (essentially splitting FTL into a list of RUN commands), so we get the benefit of caching language-specific dependency layers?

@mattmoor
Author

(Assuming I understand your suggestion correctly.) If Dockerfile synthesis is possible, and you can adequately split your "planning" from your "doing", then you may be able to abuse Dockerfile to achieve this goal. I have a(n internal) doc where I use a Dockerfile to convey the ideas behind FTL. Note that this assumes your per-layer construction can be made completely deterministic, and so on.
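
A toy sketch of that planning/doing split (hypothetical pinned packages; illustrative, not from my doc): the "plan" resolves an exact, ordered dependency list, and the synthesized Dockerfile gives each dependency its own layer boundary.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// "Planning": resolve the full dependency closure up front, fully
	// pinned and in install order, so each layer's contents are
	// reproducible. Hard-coded here; a real planner would compute this.
	deps := []string{"werkzeug==0.14.1", "jinja2==2.10", "flask==1.0.2"}

	// "Doing": synthesize a Dockerfile with one RUN (one layer) per
	// dependency instead of a single opaque pip install.
	var b strings.Builder
	b.WriteString("FROM python:3.7-slim\n")
	for _, d := range deps {
		fmt.Fprintf(&b, "RUN pip install --no-deps %s\n", d)
	}
	fmt.Print(b.String())
}
```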

I wasn't aware there were public docs on Python FTL, so that's neat :)

yiranwang52 added the bug (Something isn't working) label on Nov 21, 2018
yiranwang52 added the documentation (Related to documentation or examples) label and removed the bug (Something isn't working) label on Dec 14, 2018