---

lastupdated: "2019-10-01"

keywords: kubernetes, iks, versions, update

subcollection: containers

---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:note: .note}
{:important: .important}
{:deprecated: .deprecated}
# Kubernetes version information
{: #cs_versions}
## Available Kubernetes versions
{: #version_types}
{{site.data.keyword.containerlong}} concurrently supports multiple versions of Kubernetes. When the latest version (n) is released, versions up to two behind (n-2) remain supported. Versions that are more than two behind the latest (n-3 and earlier) are first deprecated and then unsupported.
{: shortdesc}
Supported Kubernetes versions:
- Latest: 1.15.4
- Default: 1.14.7
- Other: 1.13.11
Deprecated and unsupported Kubernetes versions:
- Deprecated: 1.12.10
- Unsupported: 1.5, 1.7, 1.8, 1.9, 1.10, 1.11
Deprecated versions: When clusters are running on a deprecated Kubernetes version, you have a minimum of 45 days to review and update to a supported Kubernetes version before the version becomes unsupported. During the deprecation period, your cluster is still functional, but might require updates to a supported release to fix security vulnerabilities. For example, you can add and reload worker nodes, but you cannot create new clusters that use the deprecated version when the unsupported date is 45 or fewer days away.
Unsupported versions: If your clusters run a Kubernetes version that is not supported, review the following potential update impacts and then immediately update the cluster to continue receiving important security updates and support. Unsupported clusters cannot add or reload existing worker nodes. You can find out whether your cluster is unsupported by reviewing the **State** field in the output of the `ibmcloud ks cluster ls` command or in the {{site.data.keyword.containerlong_notm}} console.
If you wait until your cluster is three or more minor versions behind the oldest supported version, you cannot update the cluster. Instead, create a new cluster, deploy your apps to the new cluster, and delete the unsupported cluster.
To avoid this issue, update deprecated clusters to a version that is less than three minor versions ahead of the current version, such as from 1.11 to 1.12, and then continue updating until you reach a supported version such as 1.14. If the worker nodes run a version that is three or more minor versions behind the master, your pods might fail by entering a state such as `MatchNodeSelector`, `CrashLoopBackOff`, or `ContainerCreating` until you update the worker nodes to the same version as the master. After you update from a deprecated to a supported version, your cluster can resume normal operations and continue receiving support.
{: important}
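For example, a quick way to spot clusters that run a deprecated or unsupported version and pods that are stuck in one of these states is to list them from the CLI. This is a minimal sketch; the commands list all clusters and all pods, so narrow them down as needed.

```
# List clusters and review the State column for deprecated or unsupported versions.
ibmcloud ks cluster ls

# List pods across all namespaces and check the STATUS column for states such as
# MatchNodeSelector, CrashLoopBackOff, or ContainerCreating.
kubectl get pods --all-namespaces -o wide
```
{: pre}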
To check the server version of a cluster, run the following command.
kubectl version --short | grep -i server
{: pre}
Example output:
Server Version: v1.14.7+IKS
{: screen}
## Update types
{: #update_types}
Your Kubernetes cluster has three types of updates: major, minor, and patch. {:shortdesc}
Update type | Examples of version labels | Updated by | Impact |
---|---|---|---|
Major | 1.x.x | You | Operation changes for clusters, including scripts or deployments. |
Minor | x.9.x | You | Operation changes for clusters, including scripts or deployments. |
Patch | x.x.4_1510 | IBM and you | Kubernetes patches, as well as other {{site.data.keyword.cloud_notm}} Provider component updates such as security and operating system patches. IBM updates masters automatically, but you apply patches to worker nodes. See more about patches in the following section. |
{: caption="Impacts of Kubernetes updates" caption-side="top"}
As updates become available, you are notified when you view information about the worker nodes, such as with the `ibmcloud ks worker ls --cluster <cluster>` or `ibmcloud ks worker get --cluster <cluster> --worker <worker>` commands.
- Major and minor updates (1.x): First, update your master and then update the worker nodes; see the sketch after this list for the overall flow. Worker nodes cannot run a Kubernetes major or minor version that is greater than the master's.
- You cannot update a Kubernetes master three or more minor versions ahead. For example, if your current master is version 1.11 and you want to update to 1.14, you must update to 1.12 first.
- If you use a `kubectl` CLI version that does not match at least the `major.minor` version of your clusters, you might experience unexpected results. Make sure to keep your Kubernetes cluster and CLI versions up-to-date.
- Patch updates (x.x.4_1510): Changes across patches are documented in the Version changelog. Master patches are applied automatically, but you initiate worker node patch updates. Worker nodes can also run patch versions that are greater than the master's. As updates become available, you are notified when you view information about the master and worker nodes in the {{site.data.keyword.cloud_notm}} console or CLI, such as with the `ibmcloud ks cluster ls`, `cluster get`, `workers`, or `worker get` commands.
  - Worker node patches: Check monthly to see whether an update is available, and use the `ibmcloud ks worker update` command or the `ibmcloud ks worker reload` command to apply these security and operating system patches. During an update or reload, your worker node machine is reimaged, and data is deleted if not stored outside the worker node.
  - Master patches: Master patches are applied automatically over the course of several days, so a master patch version might show up as available before it is applied to your master. The update automation also skips clusters that are in an unhealthy state or have operations currently in progress. Occasionally, IBM might disable automatic updates for a specific master fix pack, as noted in the changelog, such as a patch that is needed only if a master is updated from one minor version to another. In any of these cases, you can choose to safely use the `ibmcloud ks cluster master update` command yourself without waiting for the update automation to apply.
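The overall flow for a major or minor update might look like the following sketch. The cluster name, worker node ID, and target version are placeholders, and the `--version` option on the master update is an assumption; run `ibmcloud ks cluster master update --help` to confirm the options in your CLI plug-in version.

```
# 1. Update the Kubernetes master one minor version at a time (never skip ahead
#    by three or more minor versions).
ibmcloud ks cluster master update --cluster <cluster> --version <version>

# 2. After the master update completes, list the worker nodes to see which
#    updates are available.
ibmcloud ks worker ls --cluster <cluster>

# 3. Update each worker node to match the master version. The worker node is
#    reimaged, so keep your data in persistent storage outside the worker node.
ibmcloud ks worker update --cluster <cluster> --worker <worker_id>
```
{: pre}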
## Preparing to update
{: #prep-up}

This information summarizes updates that are likely to have an impact on deployed apps when you update a cluster to a new version from the previous version.
- [Version 1.15 preparation actions](#cs_v115).
- [Version 1.14 preparation actions](#cs_v114).
- [Version 1.13 preparation actions](#cs_v113).
- Deprecated: [Version 1.12 preparation actions](#cs_v112).
- [Archive of unsupported versions](#k8s_version_archive).
For a complete list of changes, review the following information:
## Release history
{: #release-history}
The following table records {{site.data.keyword.containerlong_notm}} version release history. You can use this information for planning purposes, such as to estimate general time frames when a certain release might become unsupported. After the Kubernetes community releases a version update, the IBM team begins a process of hardening and testing the release for {{site.data.keyword.containerlong_notm}} environments. Availability and unsupported release dates depend on the results of these tests, community updates, security patches, and technology changes between versions. Plan to keep your cluster master and worker node version up-to-date according to the `n-2` version support policy.
{: shortdesc}
{{site.data.keyword.containerlong_notm}} was first generally available with Kubernetes version 1.5. Projected release or unsupported dates are subject to change. To go to the version update preparation steps, click the version number.
Dates that are marked with a dagger (`†`) are tentative and subject to change.
{: important}
## Version 1.15
{: #cs_v115}
{{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.15 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._
Review changes that you might need to make when you update from the previous Kubernetes version to 1.15. {: shortdesc}
### Update before master
{: #115_before}
The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
`kubelet` cgroup metrics collection | `kubelet` now collects only cgroups metrics for the node, container runtime, kubelet, pods, and containers. If any automation or components rely on additional cgroup metrics, update the components to reflect these changes. |
Default Calico policy change | If you created custom Calico HostEndpoints that refer to an `iks.worker.interface == 'private'` label, a new default Calico policy, `allow-all-private-default`, might disrupt network traffic. You must create a Calico policy with the `iks.worker.interface == 'private'` label to override the default policy. For more information, see [Default Calico and Kubernetes network policies](/docs/containers?topic=containers-network_policies#default_policy). |
### Update after master
{: #115_after}
The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
Unsupported: `kubectl exec --pod` | The `kubectl exec` command's `--pod` and shorthand `-p` flags are no longer supported. If your scripts rely on these flags, update them. See the sketch after this table for the updated syntax. |
Unsupported: `kubectl scale job` | The `kubectl scale job` command is removed. If your scripts rely on this command, update them. |
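As a minimal sketch of the `kubectl exec` change, scripts that passed the pod name through the removed `--pod` or `-p` flag can pass the pod name as the first positional argument instead. The pod name and command are placeholders.

```
# Before (no longer supported): kubectl exec -p <pod_name> -- ls /tmp
# After: pass the pod name as a positional argument.
kubectl exec <pod_name> -- ls /tmp
```
{: pre}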
### Update after worker nodes
{: #115_after_worker}
The following table shows the actions that you must take after you update your worker nodes. {: shortdesc}
Type | Description |
---|---|
`kubelet` probe metrics are now counters rather than gauges | The previous method of using the gauge type for probe metrics is replaced by the counter type. The gauge type returned `0` for success and `1` for failed operations. Now, the counter type keeps track of the number of times that the metric returns `successful`, `failure`, or `unknown`. If your automation processes rely on a `0` successful or `1` failed gauge response, update the processes to use the counter response statuses. The numerical response value can now indicate the number of times that the counter response statuses are reported. Additionally, to reflect this change in functionality, the `prober_probe_result` metric is replaced by the `prober_probe_total` metric. |
## Version 1.14
{: #cs_v114}
{{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.14 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._
Review changes that you might need to make when you update from the previous Kubernetes version to 1.14. {: shortdesc}
Kubernetes 1.14 introduces new capabilities for you to explore. Try out the new `kustomize` project that you can use to help write, customize, and reuse your Kubernetes resource YAML configurations. Or take a look at the new `kubectl` CLI docs.
{: tip}
### Update before master
{: #114_before}
The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
CRI pod log directory structure change | The container runtime interface (CRI) changed the pod log directory structure from `/var/log/pods/<pod_UID>/` to `/var/log/pods/<pod_namespace>_<pod_name>_<pod_UID>/`. If your apps bypass Kubernetes and the CRI to access pod logs directly on worker nodes, update them to handle both directory structures. Accessing pod logs via Kubernetes, for example by running `kubectl logs`, is not impacted by this change. |
Health checks no longer follow redirects | Health check liveness and readiness probes that use an `HTTPGetAction` no longer follow redirects to hostnames that are different from the original probe request. Instead, these non-local redirects return a `Success` response and an event with reason `ProbeWarning` is generated to indicate that the redirect was ignored. If you previously relied on the redirect to run health checks against different hostname endpoints, you must perform the health check logic outside the `kubelet`. For example, you might proxy the external endpoint instead of redirecting the probe request. |
Unsupported: KubeDNS cluster DNS provider | CoreDNS is now the only supported cluster DNS provider for clusters that run Kubernetes version 1.14 and later. If you update an existing cluster that uses KubeDNS as the cluster DNS provider to version 1.14, KubeDNS is automatically migrated to CoreDNS during the update. Thus, before you update the cluster, consider [setting up CoreDNS as the cluster DNS provider](/docs/containers?topic=containers-cluster_dns#set_coredns) and testing it. For example, if your app relies on an older DNS client, you might need to [update the app or customize CoreDNS](/docs/containers?topic=containers-cs_troubleshoot_network#coredns_issues). CoreDNS supports [cluster DNS specification ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/kubernetes/dns/blob/master/docs/specification.md#25---records-for-external-name-services) to enter a domain name as the Kubernetes service `ExternalName` field. The previous cluster DNS provider, KubeDNS, does not follow the cluster DNS specification, and as such, allows IP addresses for `ExternalName`. If any Kubernetes services use IP addresses instead of DNS, you must update the `ExternalName` to DNS for continued functionality. A sketch for testing cluster DNS resolution follows this table. |
Unsupported: Kubernetes `Initializers` alpha feature | The Kubernetes `Initializers` alpha feature, `admissionregistration.k8s.io/v1alpha1` API version, `Initializers` admission controller plug-in, and use of the `metadata.initializers` API field are removed. If you use `Initializers`, switch to use [Kubernetes admission webhooks ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) and delete any existing `InitializerConfiguration` API objects before you update the cluster. |
Unsupported: Node alpha taints | The use of taints `node.alpha.kubernetes.io/notReady` and `node.alpha.kubernetes.io/unreachable` are no longer supported. If you rely on these taints, update your apps to use the `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` taints instead. |
Unsupported: The Kubernetes API swagger documents | The `swagger/*`, `/swagger.json`, and `/swagger-2.0.0.pb-v1` schema API docs are now removed in favor of the `/openapi/v2` schema API docs. The swagger docs were deprecated when the OpenAPI docs became available in Kubernetes version 1.10. Additionally, the Kubernetes API server now aggregates only OpenAPI schemas from `/openapi/v2` endpoints of aggregated API servers. The fallback to aggregate from `/swagger.json` is removed. If you installed apps that provide Kubernetes API extensions, ensure that your apps support the `/openapi/v2` schema API docs. |
Unsupported and deprecated: Select metrics | Review the [removed and deprecated Kubernetes metrics ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#removed-and-deprecated-metrics). If you use any of these deprecated metrics, change to the available replacement metric. |
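To test cluster DNS resolution, for example before and after you set up CoreDNS as described in the preceding table, you might run a one-off pod. This sketch is not part of the official migration steps; the `busybox` image and the service name are assumptions that you can swap for your own.

```
# Start a temporary pod, resolve a cluster-internal service name, and remove
# the pod automatically when the command exits.
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default
```
{: pre}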
### Update after master
{: #114_after}
The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
Unsupported: `kubectl --show-all` | The `--show-all` and shorthand `-a` flags are no longer supported. If your scripts rely on these flags, update them. |
Kubernetes default RBAC policies for unauthenticated users | The Kubernetes default role-based access control (RBAC) policies no longer grant access to [discovery and permission-checking APIs to unauthenticated users ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles). This change applies only to new version 1.14 clusters. If you update a cluster from a prior version, unauthenticated users still have access to the discovery and permission-checking APIs. If you want to update to the more secure default for unauthenticated users, remove the `system:unauthenticated` group from the `system:basic-user` and `system:discovery` cluster role bindings. See the sketch after this table for one way to review and edit these bindings. |
Deprecated: Prometheus queries that use `pod_name` and `container_name` labels | Update any Prometheus queries that match `pod_name` or `container_name` labels to use `pod` or `container` labels instead. Example queries that might use these deprecated labels include kubelet probe metrics. The deprecated `pod_name` and `container_name` labels become unsupported in the next Kubernetes release. |
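If you want to adopt the more secure default for unauthenticated users after updating from a prior version, one way is to inspect and edit the two cluster role bindings directly. This is a hedged sketch; review the output before you remove any subjects.

```
# Review which subjects are bound to the discovery and basic-user roles.
kubectl get clusterrolebinding system:discovery system:basic-user -o yaml

# Edit each binding and delete the subject entry for the system:unauthenticated group.
kubectl edit clusterrolebinding system:discovery
kubectl edit clusterrolebinding system:basic-user
```
{: pre}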
## Version 1.13
{: #cs_v113}
{{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.13 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._
Review changes that you might need to make when you update from the previous Kubernetes version to 1.13. {: shortdesc}
### Update before master
{: #113_before}
The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
N/A |
### Update after master
{: #113_after}
The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
CoreDNS available as the new default cluster DNS provider | CoreDNS is now the default cluster DNS provider for new clusters in Kubernetes 1.13 and later. If you update an existing cluster to 1.13 that uses KubeDNS as the cluster DNS provider, KubeDNS continues to be the cluster DNS provider. However, you can choose to [use CoreDNS instead](/docs/containers?topic=containers-cluster_dns#dns_set). For example, you might test your apps on CoreDNS in preparation for the next Kubernetes version update to make sure that you do not need to [update the app, or to customize CoreDNS](/docs/containers?topic=containers-cs_troubleshoot_network#coredns_issues). CoreDNS supports [cluster DNS specification ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/kubernetes/dns/blob/master/docs/specification.md#25---records-for-external-name-services) to enter a domain name as the Kubernetes service `ExternalName` field. The previous cluster DNS provider, KubeDNS, does not follow the cluster DNS specification, and as such, allows IP addresses for `ExternalName`. If any Kubernetes services use IP addresses instead of DNS, you must update the `ExternalName` to DNS for continued functionality. |
`kubectl` output for `Deployment` and `StatefulSet` | The `kubectl` output for `Deployment` and `StatefulSet` now includes a `Ready` column and is more human-readable. If your scripts rely on the previous behavior, update them. |
`kubectl` output for `PriorityClass` | The `kubectl` output for `PriorityClass` now includes a `Value` column. If your scripts rely on the previous behavior, update them. |
`kubectl get componentstatuses` | The `kubectl get componentstatuses` command does not properly report the health of some Kubernetes master components because these components are no longer accessible from the Kubernetes API server now that `localhost` and insecure (HTTP) ports are disabled. After introducing highly available (HA) masters in Kubernetes version 1.10, each Kubernetes master is set up with multiple `apiserver`, `controller-manager`, `scheduler`, and `etcd` instances. Instead, review the cluster health by checking the [{{site.data.keyword.cloud_notm}} console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com/kubernetes/landing) or by using the `ibmcloud ks cluster get` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_get). |
Unsupported: `kubectl run-container` | The `kubectl run-container` command is removed. Instead, use the `kubectl run` command. |
`kubectl rollout undo` | When you run `kubectl rollout undo` for a revision that does not exist, an error is returned. If your scripts rely on the previous behavior, update them. |
Deprecated: `scheduler.alpha.kubernetes.io/critical-pod` annotation | The `scheduler.alpha.kubernetes.io/critical-pod` annotation is now deprecated. Change any pods that rely on this annotation to use [pod priority](/docs/containers?topic=containers-pod_priority#pod_priority) instead. |
### Update after worker nodes
{: #113_after_workers}
The following table shows the actions that you must take after you update your worker nodes. {: shortdesc}
Type | Description |
---|---|
containerd `cri` stream server | In containerd version 1.2, the `cri` plug-in stream server now serves on a random port, `http://localhost:0`. This change supports the `kubelet` streaming proxy and provides a more secure streaming interface for container `exec` and `logs` operations. Previously, the `cri` stream server listened on the worker node's private network interface by using port 10010. If your apps use the container `cri` plug-in and rely on the previous behavior, update them. |
## Version 1.12 (deprecated)
{: #cs_v112}
{{site.data.keyword.containerlong_notm}} is a Certified Kubernetes product for version 1.12 under the CNCF Kubernetes Software Conformance Certification program. _Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation._
Review changes that you might need to make when you update from the previous Kubernetes version to 1.12. {: shortdesc}
### Update before master
{: #112_before}
The following table shows the actions that you must take before you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
Kubernetes Metrics Server | If you currently have the Kubernetes `metric-server` deployed in your cluster, you must remove the `metric-server` before you update the cluster to Kubernetes 1.12. This removal prevents conflicts with the `metric-server` that is deployed during the update. See the sketch after this table for one hedged removal example. |
Role bindings for `kube-system` `default` service account | The `kube-system` `default` service account no longer has **cluster-admin** access to the Kubernetes API. If you deploy features or add-ons such as [Helm](/docs/containers?topic=containers-helm#public_helm_install) that require access to processes in your cluster, set up a [service account ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/). If you need time to create and set up individual service accounts with the appropriate permissions, you can temporarily grant the **cluster-admin** role with the following cluster role binding: `kubectl create clusterrolebinding kube-system:default --clusterrole=cluster-admin --serviceaccount=kube-system:default` |
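How you remove the `metric-server` depends on how you deployed it. As one hedged example, if it runs as a deployment named `metrics-server` in the `kube-system` namespace, the removal might look like the following; the resource name and namespace are assumptions, so check your own deployment first.

```
# Look for an existing metrics server deployment (the name is an assumption).
kubectl get deployments --all-namespaces | grep -i metrics

# Remove it before you update the cluster to Kubernetes 1.12.
kubectl delete deployment metrics-server -n kube-system
```
{: pre}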
### Update after master
{: #112_after}
The following table shows the actions that you must take after you update the Kubernetes master. {: shortdesc}
Type | Description |
---|---|
APIs for Kubernetes | The Kubernetes API replaces several deprecated APIs. Update all your YAML `apiVersion` fields to use the appropriate Kubernetes API before the deprecated APIs become unsupported. Also, review the [Kubernetes docs ![External link icon](../icons/launch-glyph.svg "External link icon")](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) for changes that are related to `apps/v1`. A sketch for finding manifests that still use deprecated API groups follows this table. |
CoreDNS available as cluster DNS provider | The Kubernetes project is in the process of transitioning to support CoreDNS instead of the current Kubernetes DNS (KubeDNS). In version 1.12, the default cluster DNS remains KubeDNS, but you can [choose to use CoreDNS](/docs/containers?topic=containers-cluster_dns#dns_set). |
`kubectl apply --force` | Now, when you force an apply action (`kubectl apply --force`) on resources that cannot be updated, such as immutable fields in YAML files, the resources are recreated instead. If your scripts rely on the previous behavior, update them. |
`kubectl get componentstatuses` | The `kubectl get componentstatuses` command does not properly report the health of some Kubernetes master components because these components are no longer accessible from the Kubernetes API server now that `localhost` and insecure (HTTP) ports are disabled. After introducing highly available (HA) masters in Kubernetes version 1.10, each Kubernetes master is set up with multiple `apiserver`, `controller-manager`, `scheduler`, and `etcd` instances. Instead, review the cluster health by checking the [{{site.data.keyword.cloud_notm}} console ![External link icon](../icons/launch-glyph.svg "External link icon")](https://cloud.ibm.com/kubernetes/landing) or by using the `ibmcloud ks cluster get` [command](/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli#cs_cluster_get). |
`kubectl logs --interactive` | The `--interactive` flag is no longer supported for `kubectl logs`. Update any automation that uses this flag. |
`kubectl patch` | If the `patch` command results in no changes (a redundant patch), the command no longer exits with a `1` return code. If your scripts rely on the previous behavior, update them. |
`kubectl version -c` | The `-c` shorthand flag is no longer supported. Instead, use the full `--client` flag. Update any automation that uses this flag. |
`kubectl wait` | If no matching selectors are found, the command now prints an error message and exits with a `1` return code. If your scripts rely on the previous behavior, update them. |
kubelet cAdvisor port | The [Container Advisor (cAdvisor) ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/google/cadvisor) web UI that the kubelet exposed by starting the `--cadvisor-port` is removed from Kubernetes 1.12. If you still need to run cAdvisor, [deploy cAdvisor as a daemon set ![External link icon](../icons/launch-glyph.svg "External link icon")](https://github.com/google/cadvisor/tree/master/deploy/kubernetes). In the daemon set, specify the `ports` section so that cAdvisor can be reached via `http://node-ip:4194`. The cAdvisor pods fail until the worker nodes are updated to 1.12 because earlier versions of the kubelet use host port 4194 for cAdvisor. |
Kubernetes dashboard | If you access the dashboard via `kubectl proxy`, the **SKIP** button on the login page is removed. Instead, [use a **Token** to log in](/docs/containers?topic=containers-app#cli_dashboard). |
Kubernetes Metrics Server | Kubernetes Metrics Server replaces Kubernetes Heapster (deprecated since Kubernetes version 1.8) as the cluster metrics provider. If you run more than 30 pods per worker node in your cluster, [adjust the `metrics-server` configuration for performance](/docs/containers?topic=containers-kernel#metrics). The Kubernetes dashboard does not work with the `metrics-server`, so if you want to display metrics in a dashboard, you must use an alternative metrics option. |
`rbac.authorization.k8s.io/v1` Kubernetes API | The `rbac.authorization.k8s.io/v1` Kubernetes API (supported since Kubernetes 1.8) is replacing the `rbac.authorization.k8s.io/v1alpha1` and `rbac.authorization.k8s.io/v1beta1` API. You can no longer create RBAC objects such as roles or role bindings with the unsupported `v1alpha` API. Existing RBAC objects are converted to the `v1` API. |
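To find configuration files that still reference deprecated API groups before they become unsupported, a simple search of your local manifests can help. This sketch assumes that your YAML files live in a local `manifests/` directory; the deprecated workload API groups shown are examples.

```
# List local YAML files that still use deprecated workload API groups.
grep -rlE "apiVersion: (extensions/v1beta1|apps/v1beta1|apps/v1beta2)" manifests/

# Confirm that the cluster serves the apps/v1 API.
kubectl api-versions | grep apps
```
{: pre}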
## Archive of unsupported versions
{: #k8s_version_archive}
Find an overview of Kubernetes versions that are unsupported in {{site.data.keyword.containerlong_notm}}. {: shortdesc}
### Version 1.11 (unsupported)
{: #cs_v111}
As of 20 July 2019, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.11 are unsupported. Version 1.11 clusters cannot receive security updates or support unless they are updated to the next most recent version. {: shortdesc}
Review the potential impact of each Kubernetes version update, and then update your clusters immediately to at least 1.12.
### Version 1.10 (unsupported)
{: #cs_v110}
As of 16 May 2019, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.10 are unsupported. Version 1.10 clusters cannot receive security updates or support unless they are updated to the next most recent version. {: shortdesc}
Review the potential impact of each Kubernetes version update, and then update your clusters to Kubernetes 1.12.
### Version 1.9 (unsupported)
{: #cs_v19}
As of 27 December 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.9 are unsupported. Version 1.9 clusters cannot receive security updates or support unless they are updated to the next most recent version. {: shortdesc}
To continue running your apps in {{site.data.keyword.containerlong_notm}}, create a new cluster and deploy your apps to the new cluster.
### Version 1.8 (unsupported)
{: #cs_v18}
As of 22 September 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.8 are unsupported. Version 1.8 clusters cannot receive security updates or support. {: shortdesc}
To continue running your apps in {{site.data.keyword.containerlong_notm}}, create a new cluster and deploy your apps to the new cluster.
### Version 1.7 (unsupported)
{: #cs_v17}
As of 21 June 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.7 are unsupported. Version 1.7 clusters cannot receive security updates or support. {: shortdesc}
To continue running your apps in {{site.data.keyword.containerlong_notm}}, create a new cluster and deploy your apps to the new cluster.
### Version 1.5 (unsupported)
{: #cs_v1-5}
As of 4 April 2018, {{site.data.keyword.containerlong_notm}} clusters that run Kubernetes version 1.5 are unsupported. Version 1.5 clusters cannot receive security updates or support. {: shortdesc}
To continue running your apps in {{site.data.keyword.containerlong_notm}}, create a new cluster and deploy your apps to the new cluster.