---
copyright:
lastupdated: "2018-03-28"
---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:download: .download}
# Setting up VPN connectivity
{: #vpn}
With VPN connectivity, you can securely connect apps in a Kubernetes cluster on {{site.data.keyword.containerlong}} to an on-premises network. You can also connect apps that are external to your cluster to an app that is running inside your cluster. {:shortdesc}
To connect your worker nodes and apps to an on-premises data center, you can configure an IPSec VPN endpoint with a strongSwan service, a Vyatta Gateway Appliance, or a Fortigate Security Appliance:
- **Vyatta Gateway Appliance or Fortigate Security Appliance**: If you have a larger cluster, want to access non-Kubernetes resources over the VPN, or want to access multiple clusters over a single VPN, you might choose to set up a Vyatta Gateway Appliance or Fortigate Security Appliance to configure an IPSec VPN endpoint. To configure a Vyatta, see [Setting up VPN connectivity with Vyatta](#vyatta).
- **strongSwan IPSec VPN service**: You can set up a strongSwan IPSec VPN service that securely connects your Kubernetes cluster with an on-premises network. The strongSwan IPSec VPN service provides a secure end-to-end communication channel over the internet that is based on the industry-standard Internet Protocol Security (IPsec) protocol suite. To set up a secure connection between your cluster and an on-premises network, [configure and deploy the strongSwan IPSec VPN service](#vpn-setup) directly in a pod in your cluster.

## Setting up VPN connectivity with Vyatta
{: #vyatta}

The Vyatta Gateway Appliance is a bare metal server that runs a special distribution of Linux. You can use a Vyatta as a VPN gateway to securely connect to an on-premises network. {:shortdesc}
All public and private network traffic that enters or leaves the cluster VLANs is routed through the Vyatta. You can use the Vyatta as a VPN endpoint to create an encrypted IPSec tunnel between servers in IBM Cloud infrastructure (SoftLayer) and on-premises resources. For example, the following diagram shows how an app on a private-only worker node in {{site.data.keyword.containershort_notm}} can communicate with an on-premises server via a Vyatta VPN connection:
1. An app in your cluster, `myapp2`, receives a request from an Ingress or LoadBalancer service and needs to securely connect to data in your on-premises network.
2. Because `myapp2` is on a worker node that is on a private VLAN only, the Vyatta acts as a secure connection between the worker nodes and the on-premises network. The Vyatta uses the destination IP address to determine which network packets should be sent to the on-premises network.
3. The request is encrypted and sent over the VPN tunnel to the on-premises data center.
4. The incoming request passes through the on-premises firewall and is delivered to the VPN tunnel endpoint (router), where it is decrypted.
5. The VPN tunnel endpoint (router) forwards the request to the on-premises server or mainframe, depending on the destination IP address specified in step 2. The necessary data is sent back over the VPN connection to `myapp2` through the same process.
To set up a Vyatta Gateway Appliance, configure IPSec on the Vyatta to enable a VPN connection. For more information, see this blog post on connecting a cluster to an on-premises data center.
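As a rough orientation, the following sketch shows the general shape of an IPSec site-to-site configuration on a Vyatta. The group names, interface, addresses, subnets, and secret are placeholders, and the proposal values are illustrative assumptions only; use the encryption, hash, and lifetime settings that your on-premises VPN endpoint requires.

```
set vpn ipsec ike-group IKE-VPN proposal 1 encryption aes256
set vpn ipsec ike-group IKE-VPN proposal 1 hash sha1
set vpn ipsec esp-group ESP-VPN proposal 1 encryption aes256
set vpn ipsec esp-group ESP-VPN proposal 1 hash sha1
set vpn ipsec ipsec-interfaces interface bond1
set vpn ipsec site-to-site peer <on_prem_gateway_ip> authentication mode pre-shared-secret
set vpn ipsec site-to-site peer <on_prem_gateway_ip> authentication pre-shared-secret <secret>
set vpn ipsec site-to-site peer <on_prem_gateway_ip> ike-group IKE-VPN
set vpn ipsec site-to-site peer <on_prem_gateway_ip> local-address <vyatta_public_ip>
set vpn ipsec site-to-site peer <on_prem_gateway_ip> tunnel 1 esp-group ESP-VPN
set vpn ipsec site-to-site peer <on_prem_gateway_ip> tunnel 1 local prefix <cluster_subnet>
set vpn ipsec site-to-site peer <on_prem_gateway_ip> tunnel 1 remote prefix <on_prem_subnet>
commit
save
```
{: codeblock}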

## Setting up the strongSwan IPSec VPN service
{: #vpn-setup}
Use a Helm chart to configure and deploy the strongSwan IPSec VPN service inside of a Kubernetes pod. {:shortdesc}
Because strongSwan is integrated within your cluster, you don't need an external gateway device. When VPN connectivity is established, routes are automatically configured on all of the worker nodes in the cluster. These routes allow two-way connectivity through the VPN tunnel between pods on any worker node and the remote system. For example, the following diagram shows how an app in {{site.data.keyword.containershort_notm}} can communicate with an on-premises server via a strongSwan VPN connection:
1. An app in your cluster, `myapp`, receives a request from an Ingress or LoadBalancer service and needs to securely connect to data in your on-premises network.
2. The request to the on-premises data center is forwarded to the IPSec strongSwan VPN pod. The destination IP address is used to determine which network packets should be sent to the IPSec strongSwan VPN pod.
3. The request is encrypted and sent over the VPN tunnel to the on-premises data center.
4. The incoming request passes through the on-premises firewall and is delivered to the VPN tunnel endpoint (router), where it is decrypted.
5. The VPN tunnel endpoint (router) forwards the request to the on-premises server or mainframe, depending on the destination IP address specified in step 2. The necessary data is sent back over the VPN connection to `myapp` through the same process.

### Configuring the strongSwan Helm chart
{: #vpn_configure}
Before you begin:
- Install an IPsec VPN gateway in your on-premises data center.
- Either create a standard cluster or update an existing cluster to version 1.7.4 or later.
- The cluster must have at least one available public Load Balancer IP address. You can check to see your available public IP addresses or free up a used IP address.
- Target the Kubernetes CLI to the cluster. A minimal sketch of this step follows this list.
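If you have not targeted the cluster yet, the following is a minimal sketch that assumes the IBM Cloud CLI with the container-service plug-in from this document's era and a cluster that is named `mycluster`:

```
bx cs cluster-config mycluster
```
{: pre}

Copy the `export KUBECONFIG=...` line from the command output and run it in your shell so that subsequent `kubectl` commands target the cluster.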
For more information about the Helm commands that are used to set up the strongSwan chart, see the Helm documentation.
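If your Helm client does not yet have the `ibm` repository that provides the `ibm/strongswan` chart, add it first. The repository URL shown here is an assumption based on the IBM Cloud Helm repository from this document's era; verify it against the current {{site.data.keyword.containershort_notm}} documentation.

```
helm repo add ibm https://registry.bluemix.net/helm/ibm
helm repo update
```
{: pre}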
To configure the Helm chart:
1. Save the default configuration settings for the strongSwan Helm chart in a local YAML file.

    ```
    helm inspect values ibm/strongswan > config.yaml
    ```
    {: pre}

2. Open the `config.yaml` file and make the following changes to the default values according to the VPN configuration that you want. You can find descriptions for more advanced settings in the configuration file comments ("Understanding the YAML file components") and in the example excerpt that follows these steps. **Important**: If you do not need to change a property, comment that property out by placing a `#` in front of it.

3. Save the updated `config.yaml` file.

4. Install the Helm chart to your cluster with the updated `config.yaml` file. The updated properties are stored in a configmap for your chart.

    **Note**: If you have multiple VPN deployments in a single cluster, you can avoid naming conflicts and differentiate between your deployments by choosing more descriptive release names than `vpn`. To avoid the truncation of the release name, limit the release name to 35 characters or less.

    ```
    helm install -f config.yaml --namespace=kube-system --name=vpn ibm/strongswan
    ```
    {: pre}

5. Check the chart deployment status. When the chart is ready, the **STATUS** field near the top of the output has a value of `DEPLOYED`.

    ```
    helm status vpn
    ```
    {: pre}

6. Once the chart is deployed, verify that the updated settings in the `config.yaml` file were used.

    ```
    helm get values vpn
    ```
    {: pre}
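The following is a hypothetical excerpt of a `config.yaml`, shown only to illustrate the shape of the file. The property names and nesting (`ipsec.keyexchange`, `local.subnet`, `remote.gateway`, `remote.subnet`, `preshared.secret`) are assumptions that can differ between chart versions, and all values are placeholders; rely on the comments in the `config.yaml` that you generated in step 1.

```yaml
# Hypothetical excerpt: names, nesting, and values vary by chart version and environment.
ipsec:
  keyexchange: ikev2                     # IKE version that your on-premises endpoint expects
local:
  subnet: 172.21.0.0/16,172.30.0.0/16    # cluster service and pod subnets to expose over the VPN
remote:
  gateway: 203.0.113.10                  # public IP address of the on-premises VPN endpoint
  subnet: 10.91.152.128/26               # on-premises subnet that the cluster needs to reach
preshared:
  secret: "<my_preshared_secret>"        # must match the secret on the on-premises endpoint
```
{: codeblock}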

### Testing the VPN connectivity
{: #vpn_test}
After you deploy your Helm chart, test the VPN connectivity. {:shortdesc}
1. If the VPN on the on-premises gateway is not active, start the VPN.

2. Set the `STRONGSWAN_POD` environment variable.

    ```
    export STRONGSWAN_POD=$(kubectl get pod -n kube-system -l app=strongswan,release=vpn -o jsonpath='{ .items[0].metadata.name }')
    ```
    {: pre}

3. Check the status of the VPN. A status of `ESTABLISHED` means that the VPN connection was successful.

    ```
    kubectl exec -n kube-system $STRONGSWAN_POD -- ipsec status
    ```
    {: pre}

    Example output:

    ```
    Security Associations (1 up, 0 connecting):
        k8s-conn[1]: ESTABLISHED 17 minutes ago, 172.30.244.42[ibm-cloud]...192.168.253.253[on-premises]
        k8s-conn{2}:  INSTALLED, TUNNEL, reqid 12, ESP in UDP SPIs: c78cb6b1_i c5d0d1c3_o
        k8s-conn{2}:  172.21.0.0/16 172.30.0.0/16 === 10.91.152.128/26
    ```
    {: screen}
    **Note**:
    - When you try to establish VPN connectivity with the strongSwan Helm chart, it is likely that the VPN status is not `ESTABLISHED` the first time. You might need to check the on-premises VPN endpoint settings and change the configuration file several times before the connection is successful:
      1. Run `helm delete --purge <release_name>`.
      2. Fix the incorrect values in the configuration file.
      3. Run `helm install -f config.yaml --namespace=kube-system --name=<release_name> ibm/strongswan`.
    - If the VPN pod is in an `ERROR` state or continues to crash and restart, it might be due to parameter validation of the `ipsec.conf` settings in the chart's configmap:
      1. Check for any validation errors in the strongSwan pod logs by running `kubectl logs -n kube-system $STRONGSWAN_POD`.
      2. If validation errors exist, run `helm delete --purge <release_name>`.
      3. Fix the incorrect values in the configuration file.
      4. Run `helm install -f config.yaml --namespace=kube-system --name=<release_name> ibm/strongswan`.
4. You can further test the VPN connectivity by running the five Helm tests that are included in the strongSwan chart definition.

    ```
    helm test vpn
    ```
    {: pre}

    - If all tests pass, your strongSwan VPN connection is successfully set up.
    - If any test fails, continue to the next step.

5. View the output of a failed test by looking at the logs of the test pod.

    ```
    kubectl logs -n kube-system <test_program>
    ```
    {: pre}
    **Note**: Some of the tests have requirements that are optional settings in the VPN configuration. If some of the tests fail, the failures might be acceptable depending on whether you specified these optional settings. Refer to the table "Understanding the Helm VPN connectivity tests" for more information about each test and why it might fail.
    {: #vpn_tests_table}
6. Delete the current Helm chart.

    ```
    helm delete --purge vpn
    ```
    {: pre}

7. Open the `config.yaml` file and fix the incorrect values.

8. Save the updated `config.yaml` file.

9. Install the Helm chart to your cluster with the updated `config.yaml` file. The updated properties are stored in a configmap for your chart.

    ```
    helm install -f config.yaml --namespace=kube-system --name=<release_name> ibm/strongswan
    ```
    {: pre}

10. Check the chart deployment status. When the chart is ready, the **STATUS** field near the top of the output has a value of `DEPLOYED`.

    ```
    helm status vpn
    ```
    {: pre}

11. Once the chart is deployed, verify that the updated settings in the `config.yaml` file were used.

    ```
    helm get values vpn
    ```
    {: pre}

12. Clean up the current test pods.

    ```
    kubectl get pods -a -n kube-system -l app=strongswan-test
    ```
    {: pre}

    ```
    kubectl delete pods -n kube-system -l app=strongswan-test
    ```
    {: pre}

13. Run the tests again.

    ```
    helm test vpn
    ```
    {: pre}

### Upgrading the strongSwan Helm chart
{: #vpn_upgrade}

Make sure that your strongSwan Helm chart is up to date by upgrading it. {:shortdesc}
To upgrade your strongSwan Helm chart to the latest version:
```
helm upgrade -f config.yaml --namespace kube-system <release_name> ibm/strongswan
```
{: pre}
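To confirm which revision of the chart is deployed after the upgrade, you can list the release history. This is a minimal sketch that assumes the Helm 2 CLI used throughout this topic and a release that is named `vpn`:

```
helm history vpn
```
{: pre}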

#### Upgrading from version 1.0.0
{: #vpn_upgrade_1.0.0}

Due to some of the settings that are used in the version 1.0.0 Helm chart, you cannot use `helm upgrade` to update from version 1.0.0 to the latest version. {:shortdesc}
To upgrade from version 1.0.0, you must delete the 1.0.0 chart and install the latest version:
1. Delete the 1.0.0 Helm chart.

    ```
    helm delete --purge <release_name>
    ```
    {: pre}

2. Save the default configuration settings for the latest version of the strongSwan Helm chart in a local YAML file.

    ```
    helm inspect values ibm/strongswan > config.yaml
    ```
    {: pre}

3. Update the configuration file and save the file with your changes.

4. Install the Helm chart to your cluster with the updated `config.yaml` file.

    ```
    helm install -f config.yaml --namespace=kube-system --name=<release_name> ibm/strongswan
    ```
    {: pre}
Additionally, certain `ipsec.conf` timeout settings that were hardcoded in version 1.0.0 are exposed as configurable properties in later versions. The names and defaults of some of these configurable `ipsec.conf` timeout settings were also changed to be more consistent with strongSwan standards. If you are upgrading your Helm chart from version 1.0.0 and want to retain the 1.0.0 defaults for the timeout settings, add the new settings to your chart configuration file with the old default values, as shown in the following table and the example after it.
| 1.0.0 setting name | 1.0.0 default | Latest version setting name | Latest version default |
|---|---|---|---|
| `ikelifetime` | 60m | `ikelifetime` | 3h |
| `keylife` | 20m | `lifetime` | 1h |
| `rekeymargin` | 3m | `margintime` | 9m |
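For example, a hypothetical `config.yaml` fragment that keeps the 1.0.0 timeout behavior might look like the following. The nesting under `ipsec` is an assumption; match the layout of the `config.yaml` that you generated for the latest chart version.

```yaml
# Hypothetical fragment: keeps the 1.0.0 timeout defaults by using the new setting names.
ipsec:
  ikelifetime: 60m   # 1.0.0 default for ikelifetime
  lifetime: 20m      # 1.0.0 default for the setting that was named keylife
  margintime: 3m     # 1.0.0 default for the setting that was named rekeymargin
```
{: codeblock}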

### Disabling the strongSwan IPSec VPN service
{: #vpn_disable}

You can disable the VPN connection by deleting the Helm chart. {:shortdesc}
```
helm delete --purge <release_name>
```
{: pre}
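To confirm that the VPN pods are removed, you can check for pods with the `app=strongswan` label that is used earlier in this topic:

```
kubectl get pods -n kube-system -l app=strongswan
```
{: pre}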