---
copyright:
lastupdated: "2019-09-25"
keywords: kubernetes, iks, vlan
subcollection: containers
---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:note: .note}
{:important: .important}
{:deprecated: .deprecated}
{:download: .download}
{:preview: .preview}
# Changing service endpoints or VLAN connections
{: #cs_network_cluster}
After you initially set up your network when you create a cluster, you can change the service endpoints that your Kubernetes master is accessible through or change the VLAN connections for your worker nodes. {: shortdesc}
The content on this page is specific to classic clusters. For information about VPC on Classic clusters, see Understanding network basics of VPC clusters. {: note}
## Setting up the private service endpoint
{: #set-up-private-se}
Enable the private service endpoint for your cluster. {: shortdesc}
The private service endpoint makes your Kubernetes master privately accessible. Your worker nodes and your authorized cluster users can communicate with the Kubernetes master over the private network. To determine whether you can enable the private service endpoint, see Worker-to-master and user-to-master communication. Note that you cannot disable the private service endpoint after you enable it.
Did you create a cluster with only a private service endpoint before you enabled your account for VRF and service endpoints? Try setting up the public service endpoint so that you can use your cluster until your support cases are processed to update your account. {: tip}
1. Enable VRF in your IBM Cloud infrastructure account. To check whether a VRF is already enabled, use the `ibmcloud account show` command.

2. Enable your {{site.data.keyword.cloud_notm}} account to use service endpoints.

3. Enable the private service endpoint.
    ```
    ibmcloud ks cluster feature enable private-service-endpoint --cluster <cluster_name_or_ID>
    ```
    {: pre}

4. Refresh the Kubernetes master API server to use the private service endpoint. You can follow the prompt in the CLI, or manually run the following command.
    ```
    ibmcloud ks cluster master refresh --cluster <cluster_name_or_ID>
    ```
    {: pre}

5. Create a configmap to control the maximum number of worker nodes that can be unavailable at a time in your cluster. When you update your worker nodes, the configmap helps prevent downtime for your apps because the apps are rescheduled in an orderly way onto available worker nodes. For a sketch of such a configmap, see the example after these steps.

6. Update all the worker nodes in your cluster to pick up the private service endpoint configuration.

    By issuing the update command, the worker nodes are reloaded to pick up the service endpoint configuration. If no worker update is available, you must [reload the worker nodes manually](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli). If you reload, be sure to cordon, drain, and manage the order to control the maximum number of worker nodes that are unavailable at a time.
    ```
    ibmcloud ks worker update --cluster <cluster_name_or_ID> --worker <worker1,worker2>
    ```
    {: pre}

7. If the cluster is in an environment behind a firewall:
    - Allow your authorized cluster users to run `kubectl` commands to access the master through the private service endpoint.
    - Allow outbound network traffic to the private IPs for infrastructure resources and for the {{site.data.keyword.cloud_notm}} services that you plan to use.

8. Optional: To use the private service endpoint only, set up access to the master on the private service endpoint and then [disable the public service endpoint](#disable-public-se).
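The configmap in step 5 is not defined on this page. The following is a minimal sketch that assumes a configmap named `ibm-cluster-update-configuration` in the `kube-system` namespace with example tuning keys; check the worker node update documentation for the exact name and schema before you apply it.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  # Assumed name and namespace; verify against the worker node update documentation.
  name: ibm-cluster-update-configuration
  namespace: kube-system
data:
  # Example values only: allow 2 minutes to drain each worker, retry up to 10 times,
  # and cap the number of unavailable workers at 30 percent of the pool.
  drain_timeout_seconds: "120"
  max_retries: "10"
  max_unavailable_percentage: "30"
```
{: codeblock}

Apply the file with `kubectl apply -f <filename>.yaml` before you run the `ibmcloud ks worker update` command.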
## Setting up the public service endpoint
{: #set-up-public-se}
Enable or disable the public service endpoint for your cluster. {: shortdesc}
The public service endpoint makes your Kubernetes master publicly accessible. Your worker nodes and your authorized cluster users can securely communicate with the Kubernetes master over the public network. For more information, see Worker-to-master and user-to-master communication.
### Steps to enable
If you previously disabled the public endpoint, you can re-enable it.
1. Enable the public service endpoint.
    ```
    ibmcloud ks cluster feature enable public-service-endpoint --cluster <cluster_name_or_ID>
    ```
    {: pre}

2. Refresh the Kubernetes master API server to use the public service endpoint. You can follow the prompt in the CLI, or manually run the following command.
    ```
    ibmcloud ks cluster master refresh --cluster <cluster_name_or_ID>
    ```
    {: pre}

3. Create a configmap to control the maximum number of worker nodes that can be unavailable at a time in your cluster. When you update your worker nodes, the configmap helps prevent downtime for your apps because the apps are rescheduled in an orderly way onto available worker nodes.

4. Update all the worker nodes in your cluster to pick up the public service endpoint configuration.

    By issuing the update command, the worker nodes are reloaded to pick up the service endpoint configuration. If no worker update is available, you must reload the worker nodes manually with the `ibmcloud ks worker reload` command. If you reload, be sure to cordon, drain, and manage the order to control the maximum number of worker nodes that are unavailable at a time.
    ```
    ibmcloud ks worker update --cluster <cluster_name_or_ID> --worker <worker1,worker2>
    ```
    {: pre}
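After the worker update completes, you can confirm which service endpoints are active by reviewing the cluster details. This is a quick check, assuming that your CLI version lists the public and private service endpoint URL fields in the `cluster get` output.

```
ibmcloud ks cluster get --cluster <cluster_name_or_ID> | grep -i "service endpoint"
```
{: pre}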
### Steps to disable
{: #disable-public-se}
To disable the public service endpoint, you must first enable the private service endpoint so that your worker nodes can communicate with the Kubernetes master.
1. Enable the private service endpoint.

2. Disable the public service endpoint.
    ```
    ibmcloud ks cluster feature disable public-service-endpoint --cluster <cluster_name_or_ID>
    ```
    {: pre}

3. Refresh the Kubernetes master API server to remove the public service endpoint by following the CLI prompt or by manually running the following command.
    ```
    ibmcloud ks cluster master refresh --cluster <cluster_name_or_ID>
    ```
    {: pre}

4. Create a configmap to control the maximum number of worker nodes that can be unavailable at a time in your cluster. When you update your worker nodes, the configmap helps prevent downtime for your apps because the apps are rescheduled in an orderly way onto available worker nodes.

5. Update all the worker nodes in your cluster to remove the public service endpoint configuration.

    By issuing the update command, the worker nodes are reloaded to pick up the service endpoint configuration. If no worker update is available, you must reload the worker nodes manually with the `ibmcloud ks worker reload` command. If you reload, be sure to cordon, drain, and manage the order to control the maximum number of worker nodes that are unavailable at a time.
    ```
    ibmcloud ks worker update --cluster <cluster_name_or_ID> --worker <worker1,worker2>
    ```
    {: pre}
## Switching worker node communication to the private service endpoint
{: #migrate-to-private-se}
Enable worker nodes to communicate with the master over the private network instead of the public network by enabling the private service endpoint. {: shortdesc}
All clusters that are connected to a public and a private VLAN use the public service endpoint by default. Your worker nodes and your authorized cluster users can securely communicate with the Kubernetes master over the public network. To enable worker nodes to communicate with the Kubernetes master over the private network instead of the public network, you can enable the private service endpoint. Then, you can optionally disable the public service endpoint.
- If you enable the private service endpoint and keep the public service endpoint enabled too, workers always communicate with the master over the private network, but your users can communicate with the master over either the public or private network.
- If you enable the private service endpoint but disable the public service endpoint, workers and users must communicate with the master over the private network.
Note that you cannot disable the private service endpoint after you enable it.
1. Enable VRF in your IBM Cloud infrastructure account. To check whether a VRF is already enabled, use the `ibmcloud account show` command.

2. Enable your {{site.data.keyword.cloud_notm}} account to use service endpoints.

3. Enable the private service endpoint.
    ```
    ibmcloud ks cluster feature enable private-service-endpoint --cluster <cluster_name_or_ID>
    ```
    {: pre}

4. Refresh the Kubernetes master API server to use the private service endpoint by following the CLI prompt or by manually running the following command.
    ```
    ibmcloud ks cluster master refresh --cluster <cluster_name_or_ID>
    ```
    {: pre}

5. Create a configmap to control the maximum number of worker nodes that can be unavailable at a time in your cluster. When you update your worker nodes, the configmap helps prevent downtime for your apps because the apps are rescheduled in an orderly way onto available worker nodes.

6. Update all the worker nodes in your cluster to pick up the private service endpoint configuration.

    By issuing the update command, the worker nodes are reloaded to pick up the service endpoint configuration. If no worker update is available, you must [reload the worker nodes manually](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli). If you reload, be sure to cordon, drain, and manage the order to control the maximum number of worker nodes that are unavailable at a time.
    ```
    ibmcloud ks worker update --cluster <cluster_name_or_ID> --worker <worker1,worker2>
    ```
    {: pre}

7. Optional: To use the private service endpoint only:
    1. Set up access to the master on the private service endpoint.
    2. Disable the public service endpoint.
        ```
        ibmcloud ks cluster feature disable public-service-endpoint --cluster <cluster_name_or_ID>
        ```
        {: pre}
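To confirm which endpoint your own `kubectl` context targets after the switch, you can inspect the server URL in your kubeconfig. This check assumes that the private service endpoint URL contains `private`, as current endpoint URLs do; if the server URL still points to the public endpoint, download the cluster configuration again.

```
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```
{: pre}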
## Changing your worker node VLAN connections
{: #change-vlans}
When you create a cluster, you choose whether to connect your worker nodes to a private and a public VLAN or to a private VLAN only. Your worker nodes are part of worker pools, which store networking metadata that includes the VLANs to use to provision future worker nodes in the pool. You might want to change your cluster's VLAN connectivity setup later, in cases such as the following. {: shortdesc}
- The worker pool VLANs in a zone run out of capacity, and you need to provision a new VLAN for your cluster worker nodes to use.
- You have a cluster with worker nodes that are on both public and private VLANs, but you want to change to a private-only cluster.
- You have a private-only cluster, but you want some worker nodes such as a worker pool of edge nodes on the public VLAN to expose your apps on the internet.
Trying to change the service endpoint for master-worker communication instead? Check out the topics to set up public and private service endpoints. {: tip}
Before you begin:
- Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster. For an example login sequence, see the sketch after this list.
- If your worker nodes are stand-alone (not part of a worker pool), update them to worker pools.
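For example, a typical login and context setup looks like the following. The resource group and cluster name are placeholders, and the `cluster config` syntax assumes a current version of the CLI plug-in.

```
ibmcloud login
ibmcloud target -g <resource_group>
ibmcloud ks cluster config --cluster <cluster_name_or_ID>
```
{: pre}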
To change the VLANs that a worker pool uses to provision worker nodes:
1. List the names of the worker pools in your cluster.
    ```
    ibmcloud ks worker-pool ls --cluster <cluster_name_or_ID>
    ```
    {: pre}

2. Determine the zones for one of the worker pools. In the output, look for the Zones field.
    ```
    ibmcloud ks worker-pool get --cluster <cluster_name_or_ID> --worker-pool <pool_name>
    ```
    {: pre}
3. For each zone that you found in the previous step, get an available public and private VLAN that are compatible with each other.
    1. Check the available public and private VLANs that are listed under Type in the output.
        ```
        ibmcloud ks vlan ls --zone <zone>
        ```
        {: pre}
    2. Check that the public and private VLANs in the zone are compatible. To be compatible, the Router must have the same pod ID. In this example output, the Router pod IDs match: `01a` and `01a`. If one pod ID was `01a` and the other was `02a`, you cannot set these public and private VLAN IDs for your worker pool.
        ```
        ID        Name   Number   Type      Router         Supports Virtual Workers
        229xxxx          1234     private   bcr01a.dal12   true
        229xxxx          5678     public    fcr01a.dal12   true
        ```
        {: screen}
    3. If you need to order a new public or private VLAN for the zone, you can order one in the {{site.data.keyword.cloud_notm}} console, or use the following command. Remember that the VLANs must be compatible, with matching Router pod IDs as in the previous step. If you are creating a pair of new public and private VLANs, they must be compatible with each other.
        ```
        ibmcloud sl vlan create -t [public|private] -d <zone> -r <compatible_router>
        ```
        {: pre}
    4. Note the IDs of the compatible VLANs.
4. Set up a worker pool with the new VLAN network metadata for each zone. You can create a new worker pool, or modify an existing worker pool.
    - Create a new worker pool: See adding worker nodes by creating a new worker pool.
    - Modify an existing worker pool: Set the worker pool's network metadata to use the VLAN for each zone. Worker nodes that were already created in the pool continue to use the previous VLANs, but new worker nodes in the pool use the new VLAN metadata that you set.
        - Example to add both public and private VLANs, such as if you change from private-only to both private and public:
            ```
            ibmcloud ks zone network-set --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --private-vlan <private_vlan_id> --public-vlan <public_vlan_id>
            ```
            {: pre}
        - Example to add only a private VLAN, such as if you change from public and private VLANs to private-only when you have a VRF-enabled account that uses service endpoints:
            ```
            ibmcloud ks zone network-set --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --private-vlan <private_vlan_id> --private-only
            ```
            {: pre}
5. Add worker nodes to the worker pool by resizing the pool.
    ```
    ibmcloud ks worker-pool resize --cluster <cluster_name_or_ID> --worker-pool <pool_name> --size-per-zone <number_of_workers_per_zone>
    ```
    {: pre}

    If you want to remove worker nodes that use the previous network metadata, change the number of workers per zone to double the previous number of workers per zone. For example, if you previously had 3 workers per zone, resize to 6 so that the replacement workers are created on the new VLANs. Later in these steps, you can cordon, drain, and remove the previous worker nodes.
    {: tip}
6. Verify that new worker nodes are created with the appropriate Public IP and Private IP in the output. For example, if you change the worker pool from a public and private VLAN to private-only, the new worker nodes have only a private IP. If you change the worker pool from private-only to both public and private VLANs, the new worker nodes have both public and private IPs.
    ```
    ibmcloud ks worker ls --cluster <cluster_name_or_ID> --worker-pool <pool_name>
    ```
    {: pre}
7. Optional: Remove the worker nodes with the previous network metadata from the worker pool.
    1. In the output of the previous step, note the ID and Private IP of the worker nodes that you want to remove from the worker pool.
    2. Mark the worker node as unschedulable in a process that is known as cordoning. When you cordon a worker node, you make it unavailable for future pod scheduling.
        ```
        kubectl cordon <worker_private_ip>
        ```
        {: pre}
    3. Verify that pod scheduling is disabled for your worker node.
        ```
        kubectl get nodes
        ```
        {: pre}

        Your worker node is disabled for pod scheduling if the status displays `SchedulingDisabled`.
    4. Force pods to be removed from your worker node and rescheduled onto remaining worker nodes in the cluster.
        ```
        kubectl drain <worker_private_ip>
        ```
        {: pre}

        This process can take a few minutes.
    5. Remove the worker node. Use the worker ID that you previously retrieved.
        ```
        ibmcloud ks worker rm --cluster <cluster_name_or_ID> --worker <worker_name_or_ID>
        ```
        {: pre}
    6. Verify that the worker node is removed.
        ```
        ibmcloud ks worker ls --cluster <cluster_name_or_ID> --worker-pool <pool_name>
        ```
        {: pre}
8. Optional: You can repeat steps 2 - 7 for each worker pool in your cluster. After you complete these steps, all worker nodes in your cluster are set up with the new VLANs.

9. Optional: If you no longer need the subnets on the old VLANs, you can remove them. For a sketch of how to review and cancel classic subnets, see the example after these steps.
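This page does not list the subnet removal commands. As a sketch, assuming the classic infrastructure (`sl`) CLI plug-in is installed, you can review your subnets and cancel the ones that you no longer need; double-check the subnet IDs and confirm that no resources still use them before you cancel.

```
ibmcloud sl subnet list
ibmcloud sl subnet cancel <subnet_id>
```
{: pre}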