Run cloud-controller-manager (CCM) on AWS #145
Comments
I'm confused by this as well. I'm not very fluent in Go, but looking at the source code of the in-tree cloud controller manager command, I don't see the code that gets loaded based on the `--cloud-provider` flag. In any case, have you had the chance to try it?
Also, naming of the machines is important for this to work. We need to change the way nodes are named on the AWS provider.
I've already taken care of the naming part so far.
Per kubernetes/cloud-provider-aws#42 (comment), it seems that this project is not usable yet 😕 I guess we need to use the built-in cloud provider for now then...
It seems the CCM doesn't handle Dynamic Provisioning of PVCs, so I was looking at https://github.com/kubernetes-sigs/aws-ebs-csi-driver/, which seems to be the way to go. Here are my notes:
After that, I followed their Dynamic Provisioning example and everything worked as expected: the StorageClass was created, the PersistentVolumeClaim was created and after a few seconds it was Bound, then the Pod using that claim was started and I could see that the volume worked. As they mention in their readme, there are two ways to grant permissions to the ebs-csi-driver: granting them via the nodes' IAM instance profile, or creating an IAM user and putting its credentials in a secret.
I'm not sure what's best, so we should discuss it: the InstanceProfile route would mean that anything on that machine can perform the operations listed in the policy IIUC, so it seems creating an IAM user and placing its credentials in a secret is safer (making sure the cluster is set up properly so only the CSI driver has access to the secret, of course).
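For reference, the Dynamic Provisioning flow described above boils down to roughly these objects. This is a sketch, not copied verbatim from the driver repo; the `ebs.csi.aws.com` provisioner name is the driver's, everything else (names, sizes, image) is illustrative:

```yaml
# Sketch only: a StorageClass backed by the EBS CSI driver and a claim that
# triggers dynamic provisioning. Names and parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
# A Pod that mounts the claim; once it is scheduled, the driver provisions
# the EBS volume and the claim becomes Bound.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: ebs-claim
```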
It also seems to me that creating the credentials as a secret is a nicer way to go.
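For the secret route, a rough sketch of what it could look like, assuming the driver's controller simply reads the standard AWS SDK environment variables. The exact secret name and keys the driver's manifests or Helm chart expect may differ, so treat all names here as illustrative:

```yaml
# Sketch only: IAM user credentials stored in a Secret and injected into the
# EBS CSI controller as the standard AWS SDK environment variables.
apiVersion: v1
kind: Secret
metadata:
  name: aws-ebs-csi-credentials   # illustrative name
  namespace: kube-system
stringData:
  key_id: "<AWS_ACCESS_KEY_ID>"
  access_key: "<AWS_SECRET_ACCESS_KEY>"
---
# Excerpt of the controller's container spec (not a complete Deployment):
#   env:
#   - name: AWS_ACCESS_KEY_ID
#     valueFrom:
#       secretKeyRef:
#         name: aws-ebs-csi-credentials
#         key: key_id
#   - name: AWS_SECRET_ACCESS_KEY
#     valueFrom:
#       secretKeyRef:
#         name: aws-ebs-csi-credentials
#         key: access_key
```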
A summary of the work so far: we've been able to get the CCM running manually on AWS and get the `--cloud-provider` flags configured correctly. However, getting the CCM integrated properly into lokomotive is where we hit a roadblock. When setting up the Helm chart within lokomotive, we hit a point where bootstrapping failed. Mateusz and Johannes looked into it, and we need to add the
We discussed this OOB, so I'll summarize the discussion:
So for now we decided to put the work on getting the CCM running on hold and focus efforts on running the EBS CSI driver, as mentioned in #145 (comment). Here's a new issue about the EBS CSI driver: #379
Changes the hostname for AWS clusters to the naming scheme preferred by Cloud Controller Manager in order to allow us to set up LoadBalancer services on AWS. Required for #145
Right now we don't run the CCM on AWS, so, for example, creating Services of type LoadBalancer doesn't work. We should run it on the AWS platform.
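To make the problem concrete, this is the kind of object that currently stays pending on our AWS clusters; a minimal sketch with made-up names (with a working CCM, creating it should provision an AWS load balancer):

```yaml
# Sketch only: a Service of type LoadBalancer. Without a cloud provider
# integration running, its external address never gets assigned.
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```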
Cloud providers in Kubernetes
It seems that in the beginning the code that communicated with the cloud provider lived in each core Kubernetes component (except the scheduler and kube-proxy), so they all had a `--cloud-provider` flag and they all talked to the cloud. Today there are out-of-tree providers too, and a new component called cloud-controller-manager. This way, only that component communicates with the cloud and it can be released independently from Kubernetes. This is the recommended way forward. Check https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/ for more details.
Options
One option is configuring the cloud-controller-manager component with `--cloud-provider=aws`. You can find an example DaemonSet here: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#examples. As mentioned there, the kubelet needs to run with the flag `--cloud-provider=external`, and there should be no `--cloud-provider` flag on the API Server or kube-controller-manager. Also, we might need to play with the CLI flags to get it working correctly.

There's also the cloud-provider-aws repo, which adds to the confusion. It seems to be the home of the out-of-tree AWS cloud provider, but development doesn't seem very active (see kubernetes/cloud-provider-aws#42). Its readme shows the IAM policy needed and the node naming requirement, but it says to pass `--cloud-provider=external` to the kubelet, API Server and kube-controller-manager, contradicting the previous paragraph. Also, there's no example or yaml file showing how to deploy it.
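To make the flag placement concrete, here is a trimmed-down sketch along the lines of the example DaemonSet linked above. RBAC, tolerations, and most flags are omitted; the image and version are placeholders, so this is not a tested manifest:

```yaml
# Sketch only. The kubelets run with --cloud-provider=external, the API Server
# and kube-controller-manager get no --cloud-provider flag at all, and the CCM
# below is the only component started with --cloud-provider=aws.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: cloud-controller-manager
        image: k8s.gcr.io/cloud-controller-manager:v1.18.0   # placeholder image/tag
        command:
        - /usr/local/bin/cloud-controller-manager
        - --cloud-provider=aws
        - --leader-elect=true
        - --use-service-account-credentials
```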
What to do
I think what we should do is use the `cloud-controller-manager` and try to deploy it as mentioned in Kubernetes Cloud Controller Manager for now, and if in the future cloud-provider-aws is more active we can consider switching to it.