This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

Clusters without public IPs #221

Closed
anhowe opened this issue Jan 26, 2017 · 39 comments · Fixed by #2326 or #2401

Comments

@anhowe
Contributor

anhowe commented Jan 26, 2017

In some cases customers would like to use Azure ExpressRoute without public IP addresses assigned. This issue tracks the need for and popularity of this request.

@trevor-MSFT

I'm also interested in this feature. Happy to test if needed.

@colemickens
Contributor

I think in order to do this, we would need to:

  1. Make it possible to provision the master LB as an internal load balancer.

  2. Build and document a way of creating a new apiserver cert for the internal LB IP and then replacing the current apiserver cert.
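
For reference, a minimal sketch of what step 1 might look like in the generated ARM template — the resource/variable names, address, and API version here are illustrative, not what acs-engine currently emits. An LB becomes internal by giving its frontend a subnet reference and a private IP instead of a public IP:

```json
{
  "type": "Microsoft.Network/loadBalancers",
  "apiVersion": "2016-09-01",
  "name": "[variables('masterInternalLbName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "internalLbFrontend",
        "properties": {
          "privateIPAllocationMethod": "Static",
          "privateIPAddress": "10.240.255.5",
          "subnet": {
            "id": "[variables('masterSubnetRef')]"
          }
        }
      }
    ]
  }
}
```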

@rrudduck

I'm interested in this feature as well and am able to test / contribute if needed.

@dcieslak19973

I'm also interested

@simon-heinen

Especially for large companies that use multiple SaaS applications, this is an important feature: it lets them encapsulate each SaaS application in its own subnet. That's why I think this would be a really valuable feature to add.

@knee-berts
Contributor

This is very interesting to me and several customers I am working with.

@evillgenius75
Contributor

I have at least 3 large customers who need this in order to move production work onto an acs-engine-built cluster. I would also prefer that it's there when the ACS RP gets the custom VNET feature, hopefully soon.

@colemickens
Contributor

To the folks in this thread: this would be an easy change for someone to contribute and test, but, as far as I know, it is not something we're likely to get to in the immediate future.

  1. Always add the ILB, not just in single-master scenarios.
  2. Add an apimodel flag for "InternalOnly" (or something).
  3. When InternalOnly is set, skip outputting the external LB resource and the corresponding public IP (see the sketch below for the apimodel shape).
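
A minimal sketch of what step 2 could look like in the apimodel — "internalOnly" is a placeholder name, as noted above, not an existing field:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "internalOnly": true
      }
    }
  }
}
```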

@shelwig

shelwig commented Mar 22, 2017

Really need this for customers in the financial sector looking to move to ACS.

@bogdangrigg

Same here - we are interested in the feature & we'd be willing to test / otherwise contribute.

@colemickens
Contributor

@bogdangrigg I gave some of the high level tasks in the post above. My post in #458 gives some further guidance. If you want to take this on, I can offer pointers along the way.

@Savithra

@colemickens Will changing the current external LB in the azuredeploy.json to an internal LB work until this feature is implemented?

@dcieslak19973

dcieslak19973 commented Apr 21, 2017

I'm excited to see that K8s now supports Azure internal load balancers. How soon before ACS-Engine incorporates the following?

kubernetes/kubernetes#38901

via

kubernetes/kubernetes#43510

Thanks
Dan

@colemickens
Contributor

@dcieslak19973 those aren't even in a released version of Kubernetes. I suggest you look at the release schedule for 1.7 and then add 2-4 weeks for us to get it into ACS.

@dcieslak19973

Thanks! Just really excited for the feature

@ajhewett

This is also an important prerequisite for my use of ACS (medical sector). Unfortunately, kubernetes/kubernetes#43510 is not yet assigned to a milestone.

@andymoe

andymoe commented Apr 26, 2017

Chiming in here. This is an important feature for the use case (ours) where we put an App Gateway in front of the worker cluster in a Docker Swarm deployment. When using an App Gateway, there's no good reason to have the backend pool traffic go over the public internet, and doing so greatly complicates the deployment.

@ExchMaster

Almost all public sector entities in the Azure US National Cloud environment would need/require this capability.

@colemickens
Contributor

I don't think this is an upstream issue @anhowe. The only work remaining here is in ACS-Engine.

@anhowe
Contributor Author

anhowe commented May 16, 2017

Marked as upstream, because doesn't kubernetes/kubernetes#43510 have to be assigned to a milestone?

@colemickens
Contributor

That PR is already merged, and if you click on the merge commit hash where the bot merged it in, you can see that it's already in the alpha 1.7 builds. As such, I'm going to remove the upstream label.

@colemickens
Contributor

To reiterate, this would be low-hanging fruit for someone from the community to pick up. In fact, I think agents are already connecting via the ILB... it's possible that the only thing necessary would be to plumb something from the apimodel to the LB resource in the ARM template part to exclude the public configuration and the public IP resource when the apimodel flag is enabled...

In fact, someone could probably see what I'm doing in this PR (#479) and then emulate it to:

  • add a new apimodel field to indicate no-public-IP + no-public-LB
  • put a conditional around the public IP resource
  • put a conditional around the public IP (PIP) config on the LB resource

That should more or less be it. The PKI is already configured correctly for the ILB, and the agents are already connecting through the ILB (or the master IP directly in the single-master case).
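
To make those conditionals concrete: the ARM template parts are Go text/templates, so the change would look roughly like the sketch below. The apimodel field name is hypothetical (whatever the flag ends up being called), and the resource body is illustrative:

```
{{if not .OrchestratorProfile.KubernetesConfig.PrivateCluster}}
{
  "type": "Microsoft.Network/publicIPAddresses",
  "apiVersion": "[variables('apiVersionDefault')]",
  "name": "[variables('masterPublicIPAddressName')]",
  "location": "[variables('location')]",
  "properties": {
    "dnsSettings": {
      "domainNameLabel": "[variables('masterFqdnPrefix')]"
    },
    "publicIPAllocationMethod": "Dynamic"
  }
},
{{end}}
```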

And I'll reiterate, I'd be happy to provide pointers/guidance to anyone who wanted to pick this up.

@lachie83
Member

I've taken a look at this. Your guidance was spot on, although I have a couple of questions.

If master.count is greater than 1, an ILB is generated. I've opted to keep the existing LB that had a public IP and simply provision it as an ILB, which is what you recommended above (whether this is the right approach is up for grabs). In doing so I ran into the following challenges.

  • All the storage account resources have a dependency on the PublicIP resource. Why?
  • The outputs publish an FQDN for the master. It doesn't appear that you can give an FQDN to an ILB, so I would have to write out the IP address instead. I would also have to update the server field in the kubeconfig to be the IP address. Is this the best approach? I would rather continue using the FQDN, but that doesn't appear to be available on an ILB. Other options include creating an Azure DNS zone along with a record set and pinning that record to the ILB address (sketched below).
  • It also looks like cert generation isn't currently adding the ILB address, most probably because it wasn't expected to have to. It looks like the master.count > 1 condition triggers the ILB creation. Need to dig a little more on this one.
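
On the DNS option in the second bullet, a rough ARM sketch of pinning a record set to the ILB address — the zone name, record name, and address are illustrative, and this assumes the zone is resolvable from wherever clients sit:

```json
{
  "type": "Microsoft.Network/dnsZones/A",
  "apiVersion": "2016-04-01",
  "name": "contoso.internal/k8s-master",
  "properties": {
    "TTL": 300,
    "ARecords": [
      { "ipv4Address": "10.240.255.5" }
    ]
  }
}
```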

@seanknox seanknox modified the milestones: v0.4.0, v0.3.0 Jul 6, 2017
@seanknox seanknox removed this from the v0.4.0 milestone Jul 14, 2017
@sjdweb

sjdweb commented Aug 10, 2017

Is there any update on this?

@travisnielsen
Member

Would it be an issue if ACS Engine were to simply allow the user to specify their own DNS domain (contoso.local) in the input file? In this scenario:

  • No ALB or public IP would ever be provisioned, and an ILB would be created if the master count > 1.
  • If no custom k8s API certificate is specified, one is generated according to the cluster name + custom domain (k8sdemo.contoso.local).
  • The ACS Engine provisioning scripts would set the custom domain in /etc/resolvconf/resolv.conf.d/base as a DNS search suffix. I'm not a Linux expert (and maybe this is a hack), but I've found this technique to work fine in Azure.

This introduces dependencies on the customer's environment, so documentation would need to be added to inform the user that internal DNS entries must be created in advance for the k8s API and for all master and agent nodes to work. The upcoming Private Domains feature in Azure DNS could potentially be leveraged for more automation.
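
Purely as a hypothetical sketch of the input file change — "customDomain" is not an existing field, it's just the shape of the proposal:

```json
{
  "properties": {
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "k8sdemo",
      "customDomain": "contoso.local"
    }
  }
}
```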

@tylerauerbeck

Is there any update on this?

@hmarcelodn

Is there any update on this? We have several projects that need to be accessed internally too.

@snarayanank2

Hi, we're deploying a cluster at a large corporation, and this will be very important. Any idea when this will be available?

@jackfrancis jackfrancis self-assigned this Jan 29, 2018
@yfried
Contributor

yfried commented Feb 5, 2018

Until this is resolved, can I simply remove a public IP from a cluster after it's created? Will that work?

@jackfrancis
Member

No. @lachie83 Could you provide an executive summary for @yfried of why it's not that simple to get a private Kubernetes cluster? (Thanks!)

@yfried
Contributor

yfried commented Feb 6, 2018

@jackfrancis Thank you.
I've already tried to do that myself and found out it doesn't work.

Do you have an ETA on the title feature? We really need this.

Alternatively, is there a way to alter a newly created cluster (no workload yet) such that it's completely isolated from the internet and only accessible via our VPN?

@dennis-benzinger-hybris
Contributor

@jackfrancis and @lachie83, I would also be interested in the executive summary mentioned above. Thanks in advance!

@jackfrancis
Member

@yfried We're spinning up resources on this right now and hope to have progress this month.

@sabbour

sabbour commented Feb 8, 2018

What's the problem with removing the Public IP after the cluster is created? What are the dependencies?

@annegi

annegi commented Feb 9, 2018

The problem with removing the public IP is that it is attached to the external load balancer, and the Kubernetes API certificates are generated for that particular DNS name, so you would not be able to pass certificate authentication.

@dennis-benzinger-hybris
Contributor

The certificate also contains some generic kubernetes* names. So if you control DNS resolution in your setup ("real" DNS, /etc/hosts, ...), you can keep using this certificate with the internal LB.

Is that the only side effect of removing the public load balancer?

@CecileRobertMichon CecileRobertMichon self-assigned this Feb 20, 2018
@jackfrancis
Member

As a starting point, let's build out a configuration feature that does the following:

  • builds a cluster without a publicly facing load balancer (and, as a result, no public IP)
  • generates kubeconfig artifacts that point to the API server via the internal load balancer IP address (see the sketch below)
  • adds simple documentation that describes how to make this work in practice (i.e., build your "non-public" cluster, then move the appropriate kubeconfig artifact onto a kubectl-enabled host in the same k8s VNET, or in a peer VNET that routes to the k8s VNET)

TODO:

  • what should the configuration feature be called, and where should it live in the api model?
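
For the kubeconfig bullet above, a minimal sketch of the artifact's shape (kubectl also accepts kubeconfigs in JSON; the ILB address and port here are illustrative). The essential change is that "server" carries the internal LB IP rather than a public FQDN:

```json
{
  "apiVersion": "v1",
  "kind": "Config",
  "clusters": [
    {
      "name": "private-cluster",
      "cluster": {
        "server": "https://10.240.255.5:443",
        "certificate-authority-data": "<base64 CA cert>"
      }
    }
  ],
  "users": [
    {
      "name": "private-cluster-admin",
      "user": {
        "client-certificate-data": "<base64 client cert>",
        "client-key-data": "<base64 client key>"
      }
    }
  ],
  "contexts": [
    {
      "name": "private-cluster",
      "context": {
        "cluster": "private-cluster",
        "user": "private-cluster-admin"
      }
    }
  ],
  "current-context": "private-cluster"
}
```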

@dennis-benzinger-hybris
Contributor

FWIW, for our use case a flag called enablePublicIP in the kubernetesConfig object would be sufficient. For backwards compatibility, the default would be true. The name would match existing options like enableRbac or enableAggregatedAPIs.
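
For illustration, that proposal would look like this in the apimodel (enablePublicIP doesn't exist yet; this is just the shape of it):

```json
{
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "enableRbac": true,
        "enablePublicIP": false
      }
    }
  }
}
```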

@ghost ghost added in progress and removed ready labels Feb 22, 2018
@mcronce

mcronce commented Feb 23, 2018

For this to be successful, you also need to be able to reach the Azure APIs (e.g., to attach persistent volumes) without going out over the public Internet to get to them.

Considering you're sitting in the same data centers, this is not only reasonable but expected; however, it's probably something the Azure product team needs to solve, not something the acs-engine community can really help with.
