Allow ECR pull from controller IAM role #35
Conversation
Goes some way to resolve coreos/coreos-kubernetes#620; a replacement of coreos/coreos-kubernetes#731.
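For context, a change like this typically amounts to adding the read-only ECR actions to the controller role's policy in the CloudFormation stack template. Below is a minimal sketch in CloudFormation YAML; the resource names (`IAMPolicyControllerEcrPull`, `IAMRoleController`) are assumed for illustration, and the real kube-aws template is JSON and laid out differently:

```yaml
# Hypothetical sketch of the kind of statement this PR adds; the actual
# kube-aws stack template and its logical resource IDs may differ.
IAMPolicyControllerEcrPull:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: controller-ecr-pull
    Roles:
      - !Ref IAMRoleController        # assumed logical ID of the controller role
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            # read-only actions required to pull images from ECR
            - ecr:GetAuthorizationToken
            - ecr:BatchCheckLayerAvailability
            - ecr:GetDownloadUrlForLayer
            - ecr:BatchGetImage
          Resource: "*"
```

`ecr:GetAuthorizationToken` must apply to `Resource: "*"`, which is why pull policies like this are usually written against all resources rather than a single repository ARN.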
Current coverage is 56.90% (diff: 100%)

@@            master      #35   diff @@
=======================================
  Files            4        4
  Lines          949      949
  Methods          0        0
  Messages         0        0
  Branches         0        0
=======================================
  Hits           540      540
  Misses         329      329
  Partials        80       80
@c-knowles LGTM. Thanks for your contribution!
Great, thanks! I just need to figure out if it's worth re-doing coreos/coreos-kubernetes#716 next. I've already been using it for a while now on a forked build.
@c-knowles As a kube-aws user, I'd be really happy if I could deploy kube-aws created clusters into existing VPCs. After a deployment, it would be much better if services hosted in both kubernetes and the existing infrastructure could start communicating with each other (when we want them to), without the manual steps we have to take for now. Those manual steps seem to include (1) maintaining route tables and (2) assigning them to kube-aws created subnets, etc. After reading your great write-up in coreos/coreos-kubernetes#716, I now realize that this is basically what your PR addresses, right? In short: I'm a fan of your PR 👍
Sure, I'll redo that one soon then. It's exactly what I'm using it for - to access RDS in the same VPC. I'm interested in getting some acceptance tests in there too, but we can discuss that in the PR once I've made it.
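For readers following the existing-VPC thread: the proposal boils down to letting cluster.yaml reference pre-existing network resources instead of creating new ones. A minimal sketch follows, with key names taken from the direction of coreos/coreos-kubernetes#716 but not guaranteed to match what eventually shipped:

```yaml
# Hypothetical cluster.yaml excerpt for deploying into an existing VPC.
# Key names and IDs are illustrative; consult the kube-aws docs for the
# options that actually landed.
vpcId: vpc-0123abcd          # reuse an existing VPC instead of creating one
routeTableId: rtb-0123abcd   # attach kube-aws subnets to an existing route table
vpcCIDR: "10.0.0.0/16"       # must match the existing VPC's CIDR
instanceCIDR: "10.0.5.0/24"  # subnet carved out for kube-aws instances
```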
/off current ECR topic: the described use case (using existing services) is already possible by using … It might be better to implement this into the node pool concept we've been talking about.
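The exact mechanism this comment refers to got truncated above, but one well-known way to consume an existing backend such as RDS from inside the cluster is an ExternalName Service, which gives pods a stable in-cluster DNS name for it. A sketch, with the Service name and endpoint made up and no claim that this is the approach meant here:

```yaml
# Hypothetical manifest: expose an existing RDS endpoint to pods under a
# stable cluster-internal name. Not necessarily the mechanism meant above.
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  externalName: mydb.abc123xyz.eu-west-1.rds.amazonaws.com  # assumed RDS endpoint
```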
@pieterlange Thanks for the confirmation. Really helpful 👍 So, for anyone interested, my comment #35 (comment) was missing the point.
...so that we can make services in kubernetes and existing infrastructure communicate with each other. And what @pieterlange described follows. (Any correction is welcome!)
Let's move to #44.
Merge …/add-helm-deploy-operator to hcom-flavour

* commit '8162cdf3338247991eece79e8cbb0be676a885e7':
  RUN-861 Remove mantle-helm-deploy object as decided we don't want to create this from kube-aws.
  Correct typo
  RUN-861 Rename apiGroup based on changes I made to deploy-operator source.
  Rename clusterrole + binding to match other kube controller conventions.
  Add in mantle-helm-deploy resource to (hopefully) deploy mantle immediately when the helmOperator comes online.
  Correct namespace of helmdeploy serviceaccount
  Make controllers create resources to run helmDeployOperator: add helmDeploy.yaml, crd.yaml & rbac.yaml to install-kube-system.service