---
kep-number: 18
title: Azure Availability Zones
authors:
  - "@feiskyer"
owning-sig: sig-azure
participating-sigs:
  - sig-azure
  - sig-storage
reviewers:
  - name: "@khenidak"
  - name: "@colemickens"
approvers:
  - name: "@brendanburns"
editor: TBD
creation-date: 2018-07-11
last-updated: 2018-07-11
status: provisional
---

# Azure Availability Zones

## Table of Contents

- [Azure Availability Zones](#azure-availability-zones)
  - [Summary](#summary)
  - [Scopes and Non-scopes](#scopes-and-non-scopes)
    - [Scopes](#scopes)
    - [Non-scopes](#non-scopes)
  - [AZ label format](#az-label-format)
  - [Cloud provider options](#cloud-provider-options)
  - [Node registration](#node-registration)
    - [Get by instance metadata](#get-by-instance-metadata)
    - [Get by Go SDK](#get-by-go-sdk)
  - [LoadBalancer and PublicIP](#loadbalancer-and-publicip)
  - [AzureDisk](#azuredisk)
    - [PVLabeler](#pvlabeler)
    - [PersistentVolumeLabel](#persistentvolumelabel)
    - [StorageClass](#storageclass)
  - [Appendix](#appendix)

## Summary

This proposal aims to add [Azure Availability Zones (AZ)](https://azure.microsoft.com/en-us/global-infrastructure/availability-zones/) support to Kubernetes.

## Scopes and Non-scopes

### Scopes

The proposal covers the changes required to support availability zones in various functions of the Azure cloud provider and in AzureDisk volumes:

- Detect availability zones automatically when registering new nodes, and set the node label `failure-domain.beta.kubernetes.io/zone` to the AZ instead of the fault domain
- Provision LoadBalancer and PublicIP as zone-redundant
- Implement the `GetLabelsForVolume` interface for Azure managed disks and add it to the `PersistentVolumeLabel` admission controller, so as to support DynamicProvisioningScheduling

### Non-scopes

Provisioning Kubernetes masters and nodes with availability zone support is not included in this proposal. It should be done by the provisioning tools (e.g. acs-engine). The Azure cloud provider will auto-detect the node's availability zone if the `availabilityZones` option has been configured.

## AZ label format

Currently, Azure nodes are registered with the label `failure-domain.beta.kubernetes.io/zone=faultDomain`.

Fault domains are plain numbers (e.g. `1` or `2`), which is the same format as AZs (e.g. `1` or `3`). If AZs used the same format as fault domains, clusters with both AZ and non-AZ nodes would run into scheduling issues. So AZs will use a different format in Kubernetes: `<region>-<AZ>`, e.g. `centralus-1`.

The AZ label will be applied to multiple Kubernetes resources, e.g.

- Nodes
- AzureDisk PersistentVolumes
- AzureDisk StorageClasses

## Cloud provider options

Because only the standard load balancer is supported with AZ, it is a prerequisite for enabling AZ in the cluster.

Standard load balancer support was added in Kubernetes v1.11; the related options are:

| Option                      | Default | **AZ Value**  | Releases | Notes                                 |
| --------------------------- | ------- | ------------- | -------- | ------------------------------------- |
| loadBalancerSku             | basic   | **standard**  | v1.11    | Enable standard LB                    |
| excludeMasterFromStandardLB | true    | true or false | v1.11    | Exclude master nodes from LB backends |

These options should be configured in the Azure cloud provider configuration file (e.g. `/etc/kubernetes/azure.json`):

```json
{
  ...,
  "loadBalancerSku": "standard",
  "excludeMasterFromStandardLB": true
}
```

Note that with a standard-SKU LoadBalancer, `primaryAvailabilitySetName` and `primaryScaleSetName` are not required, because all available nodes (with masters configurable via `excludeMasterFromStandardLB`) are added to the LoadBalancer backend pools.

## Node registration

When nodes are started, the kubelet automatically adds labels to them with region and zone information:

- Region: `failure-domain.beta.kubernetes.io/region=centralus`
- Zone: `failure-domain.beta.kubernetes.io/zone=centralus-1`

```sh
$ kubectl get nodes --show-labels
NAME                STATUS    AGE   VERSION   LABELS
kubernetes-node12   Ready     6m    v1.11     failure-domain.beta.kubernetes.io/region=centralus,failure-domain.beta.kubernetes.io/zone=centralus-1,...
```

Today the Azure cloud provider sets the fault domain as the value of the `failure-domain.beta.kubernetes.io/zone` label. With AZ enabled, we should set the node's availability zone instead. To keep backward compatibility and to distinguish it from the fault domain, the `<region>-<AZ>` format is used here.

The node's zone can be obtained from the ARM API or from instance metadata. This will be added in `GetZoneByProviderID()` and `GetZoneByNodeName()`.
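
For illustration, a minimal sketch of what returning the new zone value could look like. This is not the actual implementation; `azureCloud`, `Zone` and `getAvailabilityZone` are simplified stand-ins for the real cloud-provider types and helpers:

```go
package azure

import (
	"context"
	"fmt"
)

// Zone mirrors the cloud provider's zone result (failure domain + region),
// for illustration purposes only.
type Zone struct {
	FailureDomain string
	Region        string
}

// azureCloud is a simplified stand-in for the Azure cloud provider struct;
// Location comes from the cloud provider configuration (e.g. "centralus").
type azureCloud struct {
	Location string
}

// getAvailabilityZone is a hypothetical helper that would query the ARM API
// or instance metadata for the VM's zone number (e.g. "1").
func (az *azureCloud) getAvailabilityZone(ctx context.Context, providerID string) (string, error) {
	return "1", nil // placeholder
}

// GetZoneByProviderID sketches how the "<region>-<AZ>" value would be
// reported as the node's failure domain.
func (az *azureCloud) GetZoneByProviderID(ctx context.Context, providerID string) (Zone, error) {
	zone, err := az.getAvailabilityZone(ctx, providerID)
	if err != nil {
		return Zone{}, err
	}
	return Zone{
		FailureDomain: fmt.Sprintf("%s-%s", az.Location, zone), // e.g. "centralus-1"
		Region:        az.Location,                             // e.g. "centralus"
	}, nil
}
```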

### Get by instance metadata

This method is used in kube-controller-manager.

```sh
# Instance metadata API should be upgraded to 2017-12-01.
$ curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute/zone?api-version=2017-12-01&format=text"
2
```
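
The same query can be made programmatically. A minimal sketch (not the actual cloud-provider code) of fetching the zone from the instance metadata service:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

// queryInstanceZone fetches the VM's availability zone from the Azure
// instance metadata service. The "zone" field requires api-version
// 2017-12-01 or newer; the response is empty for non-zoned VMs.
func queryInstanceZone() (string, error) {
	const url = "http://169.254.169.254/metadata/instance/compute/zone?api-version=2017-12-01&format=text"
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata", "true")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil // e.g. "2"
}

func main() {
	zone, err := queryInstanceZone()
	if err != nil {
		fmt.Println("failed to query zone:", err)
		return
	}
	fmt.Println("zone:", zone)
}
```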

### Get by Go SDK

This method is used in cloud-controller-manager.

The Azure Go SDK does not yet include a `zones` property for `VirtualMachineScaleSetVM` (including the latest 2018-04-01 compute API).

We need to ask the Azure Go SDK to add `zones` to `VirtualMachineScaleSetVM`; issue https://github.com/Azure/azure-sdk-for-go/issues/2183 has been opened to track this.

> Note: there is already a `zones` property on `VirtualMachineScaleSet`, `VirtualMachine` and `Disk`.

## LoadBalancer and PublicIP

The LoadBalancer will be created with the standard SKU, and all available nodes (both VirtualMachines and VirtualMachineScaleSetVMs, with masters optionally included via `excludeMasterFromStandardLB`) are added to the LoadBalancer backend pools.

PublicIPs will also be created with the standard SKU, and they are zone-redundant by default.

Note that zonal PublicIPs are not supported. We may add this easily if clear use cases emerge in the future.

## AzureDisk

When Azure managed disks are created, the `PersistentVolumeLabel` admission controller automatically adds zone labels to them. The scheduler (via `VolumeZonePredicate`) will then ensure that pods that claim a given volume are only placed into the same zone as that volume, as volumes cannot be attached across zones.

> Note that only managed disks are supported. Blob disks don't support availability zones on Azure.

### PVLabeler

The `PVLabeler` interface should be implemented for AzureDisk:

```go
// PVLabeler is an abstract, pluggable interface for fetching labels for volumes
type PVLabeler interface {
	GetLabelsForVolume(ctx context.Context, pv *v1.PersistentVolume) (map[string]string, error)
}
```

It should return the region and zone of the AzureDisk, e.g.

- `failure-domain.beta.kubernetes.io/region=centralus`
- `failure-domain.beta.kubernetes.io/zone=centralus-1`

so that the PV will be created with labels:

```sh
$ kubectl get pv --show-labels
NAME             CAPACITY   ACCESSMODES   STATUS    CLAIM            REASON    AGE   LABELS
pv-managed-abc   5Gi        RWO           Bound     default/claim1             46s   failure-domain.beta.kubernetes.io/region=centralus,failure-domain.beta.kubernetes.io/zone=centralus-1
```
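
A minimal sketch of what such an implementation could look like. This is illustrative only; `azureCloud` is the same simplified stand-in as in the earlier sketch, and `getDiskZone` is a hypothetical helper that would look up the managed disk's zone via the ARM API:

```go
package azure

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// azureCloud is a simplified stand-in for the Azure cloud provider struct.
type azureCloud struct {
	Location string // e.g. "centralus", from the cloud provider configuration
}

// getDiskZone is a hypothetical helper that would query the ARM API for the
// managed disk resource and return its zone number, e.g. "1".
func (az *azureCloud) getDiskZone(ctx context.Context, diskName string) (string, error) {
	return "1", nil // placeholder
}

// GetLabelsForVolume returns region and zone labels for an AzureDisk PV.
// Only managed disks carry zone information; other volume kinds are skipped.
func (az *azureCloud) GetLabelsForVolume(ctx context.Context, pv *v1.PersistentVolume) (map[string]string, error) {
	if pv.Spec.AzureDisk == nil || pv.Spec.AzureDisk.Kind == nil || *pv.Spec.AzureDisk.Kind != v1.AzureManagedDisk {
		return nil, nil
	}
	zone, err := az.getDiskZone(ctx, pv.Spec.AzureDisk.DiskName)
	if err != nil {
		return nil, err
	}
	return map[string]string{
		"failure-domain.beta.kubernetes.io/region": az.Location,
		"failure-domain.beta.kubernetes.io/zone":   fmt.Sprintf("%s-%s", az.Location, zone), // e.g. "centralus-1"
	}, nil
}
```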

### PersistentVolumeLabel

Besides the PVLabeler interface, the [PersistentVolumeLabel](https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/admission/storage/persistentvolume/label/admission.go) admission controller should also be updated with AzureDisk support, so that new PVs get the above labels applied automatically.

```go
func (l *persistentVolumeLabel) Admit(a admission.Attributes) (err error) {
	...
	// For AzureDisk volumes, look up the disk's region and zone and use them
	// as the volume's labels.
	if volume.Spec.AzureDisk != nil {
		labels, err := l.findAzureDiskLabels(volume)
		if err != nil {
			return admission.NewForbidden(a, fmt.Errorf("error querying AzureDisk volume %s: %v", volume.Spec.AzureDisk.DiskName, err))
		}
		volumeLabels = labels
	}
	...
}
```

### StorageClass

Note that the above interfaces only apply to AzureDisk PVs, not to the StorageClass. For the AzureDisk StorageClass, we should add new optional parameters `zone` and `zones` (which must not be used at the same time) for specifying which zones should be used to provision AzureDisks:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  labels:
    kubernetes.io/cluster-service: "true"
  name: managed-premium
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
  zone: "centralus-1"
  # zones: "centralus-1,centralus-2,centralus-3"
provisioner: kubernetes.io/azure-disk
```

If multiple zones are specified, a new AzureDisk will be provisioned in a zone chosen arbitrarily among them.

If neither `zone` nor `zones` is specified, a new AzureDisk will be provisioned in a zone chosen round-robin across all active zones, which means:

- If there are no zoned nodes, the AzureDisk will also be provisioned without a zone
- A zoned AzureDisk will only be provisioned when there are zoned nodes
- If there are multiple zones, those zones are chosen round-robin

Note that there are risks if the cluster runs with both zoned and non-zoned nodes. In that case, the AzureDisk is always zoned, and it can't be attached to non-zoned nodes. This means:

- New pods with zoned AzureDisks are always scheduled to zoned nodes
- Old pods using non-zoned AzureDisks can't be scheduled to zoned nodes

So if users plan to migrate workloads to zoned nodes, old AzureDisks should be recreated (probably backed up first and restored to the new disks).
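
As a rough sketch of the selection rules above (illustrative only, not the actual provisioner code; the function and parameter names are hypothetical):

```go
package azure

// pickZone sketches the provisioning-time zone choice described above.
//   - If the StorageClass sets "zone", that zone is used.
//   - If it sets "zones", one of them is chosen (round-robin here, as one way
//     of picking arbitrarily).
//   - Otherwise a zone is chosen round-robin across all active (zoned) node
//     zones; if there are none, "" is returned and the disk is non-zoned.
func pickZone(zoneParam string, zonesParam []string, activeZones []string, counter int) string {
	switch {
	case zoneParam != "":
		return zoneParam
	case len(zonesParam) > 0:
		return zonesParam[counter%len(zonesParam)]
	case len(activeZones) > 0:
		return activeZones[counter%len(activeZones)]
	default:
		return "" // no zoned nodes: provision a non-zoned disk
	}
}
```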

## Appendix

Kubernetes automatically spreads the pods in a replication controller or service across nodes in a single-zone cluster (to reduce the impact of failures). With multiple-zone clusters, this spreading behavior is extended across zones (to reduce the impact of zone failures); this is achieved via `SelectorSpreadPriority`. The placement is best-effort, so if the zones in your cluster are heterogeneous (e.g. different numbers of nodes, different types of nodes, or different pod resource requirements), this might prevent perfectly even spreading of your pods across zones. If desired, you can use homogeneous zones (same number and types of nodes) to reduce the probability of unequal spreading.

There are also some [limitations of availability zones for various Kubernetes functions](https://kubernetes.io/docs/setup/multiple-zones/#limitations), e.g.

- No zone-aware network routing
- Volume zone-affinity will only work with a `PersistentVolume`, and will not work if you directly specify an AzureDisk volume in the pod spec.
- Clusters cannot span clouds or regions (this functionality will require full federation support).
- StatefulSet volume zone spreading when using dynamic provisioning is currently not compatible with pod affinity or anti-affinity policies.
- If the name of the StatefulSet contains dashes ("-"), volume zone spreading may not provide a uniform distribution of storage across zones.
- When specifying multiple PVCs in a Deployment or Pod spec, the StorageClass needs to be configured for a specific, single zone, or the PVs need to be statically provisioned in a specific zone. Another workaround is to use a StatefulSet, which will ensure that all the volumes for a replica are provisioned in the same zone.

See more at [running Kubernetes in multiple zones](https://kubernetes.io/docs/setup/multiple-zones/).