Table of Contents generated with DocToc
- User Guide
- Prerequisites
- Helm Chart Deployment
- Operations
- Enabling federation of an API type
- Disabling federation of an API type
- Propagation status
- Deletion policy
- Example
- Create the Test Namespace
- Create Test Resources
- Check Status of Resources
- Update FederatedNamespace Placement
- Using Cluster Selector
- Neither `spec.placement.clusters` nor `spec.placement.clusterSelector` is provided
- Both `spec.placement.clusters` and `spec.placement.clusterSelector` are provided
- `spec.placement.clusters` is not provided, `spec.placement.clusterSelector` is provided but empty
- `spec.placement.clusters` is not provided, `spec.placement.clusterSelector` is provided and not empty
- Example Cleanup
- Troubleshooting
- Namespaced Federation
- Local Value Retention
- Higher order behaviour
- Controller-Manager Leader Election
If you are looking to use kubefed, you've come to the right place. Below is a walkthrough tutorial for how to deploy the kubefed control plane.
Please refer to Kubefed Concepts before going through this user guide.
The kubefed deployment requires Kubernetes version >= 1.11. The following is a detailed list of binaries required.

- `kubectl` is installed by the guide.
- `kubefedctl` is the federation command line utility. You can download the latest binary from the release page.

```bash
VERSION=<latest-version>
curl -LO https://github.com/kubernetes-sigs/kubefed/releases/download/${VERSION}/kubefedctl.tgz
tar -zxvf kubefedctl.tgz
chmod u+x kubefedctl
sudo mv kubefedctl /usr/local/bin/ # make sure the location is in the PATH
```

NOTE: `kubefedctl` is built for Linux only in the release package.
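To confirm the binary is installed and on your `PATH`, you can print its version; this assumes the release you downloaded supports the `version` subcommand:

```bash
kubefedctl version
```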
If you follow this user guide without any changes you will be using the latest stable released version of the kubefed image, tagged as `latest`. Alternatively, we support the ability to deploy the latest master image, tagged as `canary`, or your own custom image.
The kubefed control plane can run on any v1.13 or greater Kubernetes clusters. The following is a list of Kubernetes environments that have been tested and are supported by the Kubefed community:
After completing the steps in one of the above guides, return here to continue the Kubefed deployment.
NOTE: You must set the correct context using the command below as this guide depends on it.
```bash
kubectl config use-context cluster1
```
You can refer to the Helm chart installation guide to install and uninstall a kubefed control plane.
Next, you'll want to use the `kubefedctl` tool to join all of the clusters that you want to test against.

```bash
kubefedctl join cluster1 --cluster-context cluster1 \
    --host-cluster-context cluster1 --v=2
kubefedctl join cluster2 --cluster-context cluster2 \
    --host-cluster-context cluster1 --v=2
```
You can repeat these steps to join any additional clusters.
NOTE: `cluster-context` defaults to the joining cluster name if not specified.
Check the status of the joined clusters until you verify they are ready:
```bash
kubectl -n kube-federation-system get kubefedclusters

NAME       READY   AGE
cluster1   True    1m
cluster2   True    1m
```

If required, federation allows you to unjoin clusters using the `kubefedctl` tool:

```bash
kubefedctl unjoin cluster2 --cluster-context cluster2 --host-cluster-context cluster1 --v=2
```
You can repeat these steps to unjoin any additional clusters.
It is possible to enable federation of any Kubernetes API type (including CRDs) using the `kubefedctl` command:

```bash
kubefedctl enable <target kubernetes API type>
```
The `<target kubernetes API type>` above can be the Kind (e.g. `Deployment`), plural name (e.g. `deployments`), group-qualified plural name (e.g. `deployments.apps`), or short name (e.g. `deploy`) of the intended target API type.
The command will create a CRD for the federated type named `Federated<Kind>`. The command will also create a `FederatedTypeConfig` in the federation system namespace with the group-qualified plural name of the target type. A `FederatedTypeConfig` associates the federated type CRD with the target Kubernetes type, enabling propagation of federated resources of the given type to the member clusters. The `FederatedTypeConfig` is named `<target kubernetes API type name>.<group name>`, except for Kubernetes `core` group types, whose names use the format `<target kubernetes API type name>`.
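As a concrete illustration, the following sketch enables federation of Jobs and then checks the objects created by `enable`; it assumes the default `types.kubefed.k8s.io` federation group and the `kube-federation-system` namespace used elsewhere in this guide:

```bash
# Enable propagation of the batch Job type.
kubefedctl enable jobs.batch

# The FederatedTypeConfig is named after the target type's group-qualified plural name.
kubectl -n kube-federation-system get federatedtypeconfigs jobs.batch

# The generated CRD for the federated type is named after Federated<Kind>.
kubectl get crd federatedjobs.types.kubefed.k8s.io
```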
It is also possible to output the YAML to `stdout` instead of applying it to the API server:

```bash
kubefedctl enable <target API type> --output=yaml
```
NOTE: Federation of an API type requires that the API type be installed on all member clusters. If the API type is not installed on a member cluster, propagation to that cluster will fail. See issue 314 for more details.
If the API type is not installed on one of your member clusters, you will see a repeated `controller-manager` log error similar to the one reported in issue 314. At this time, you must manually verify that the API type is installed on each of your clusters, as the `controller-manager` log error is the only indication.
For an example API type `bars.example.com`, you can verify that the API type is installed on each of your clusters by running:

```bash
CLUSTER_CONTEXTS="cluster1 cluster2"
for c in ${CLUSTER_CONTEXTS}; do
    echo ----- ${c} -----
    kubectl --context=${c} api-resources --api-group=example.com
done
```
The output should look like the following:
```
----- cluster1 -----
NAME   SHORTNAMES   APIGROUP      NAMESPACED   KIND
bars                example.com   true         Bar
----- cluster2 -----
NAME   SHORTNAMES   APIGROUP      NAMESPACED   KIND
bars                example.com   true         Bar
```
The output shown below is an example if you do not have the API type installed on `cluster2`. Note that `cluster2` did not return any resources:

```
----- cluster1 -----
NAME   SHORTNAMES   APIGROUP      NAMESPACED   KIND
bars                example.com   true         Bar
----- cluster2 -----
NAME   SHORTNAMES   APIGROUP      NAMESPACED   KIND
```
Verifying that the API type exists on all member clusters helps ensure successful propagation to each of them.
When `kubefedctl enable` is used to enable types whose plural names match (e.g. `deployments.example.com` and `deployments.apps`), the CRD name of the generated federated type would also match (e.g. `federateddeployments.types.kubefed.k8s.io`), so the two federated types would collide.
The `--federation-group` flag of `kubefedctl enable` specifies the name of the API group to use for the generated federated type; it is `types.kubefed.k8s.io` by default. If a new federation group is enabled, the RBAC permissions for the kubefed controller manager will need to be updated to include permissions for the new group.
For example, `deployments.apps` is enabled by default after federation deployment. To enable `deployments.example.com`, you should run:

```bash
kubefedctl enable deployments.example.com --federation-group federation.example.com

kubectl patch clusterrole kubefed-role --type='json' -p='[{"op": "add", "path": "/rules/1", "value": {
  "apiGroups": [
    "federation.example.com"
  ],
  "resources": [
    "*"
  ],
  "verbs": [
    "get",
    "watch",
    "list",
    "update"
  ]
  }
}]'
```
This example is for a cluster scoped federation deployment. For a namespaced federation deployment, you can patch the Role `kubefed-role` in the kubefed namespace instead.
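For instance, assuming the control plane runs in the `kube-federation-system` namespace (substitute your kubefed namespace), the namespaced equivalent of the patch above would look roughly like this sketch:

```bash
kubectl -n kube-federation-system patch role kubefed-role --type='json' -p='[{"op": "add", "path": "/rules/1", "value": {
  "apiGroups": ["federation.example.com"],
  "resources": ["*"],
  "verbs": ["get", "watch", "list", "update"]
  }
}]'
```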
It is possible to disable propagation of a type that is configured for propagation using the `kubefedctl` command:

```bash
kubefedctl disable <FederatedTypeConfig Name>
```

This command will set the `propagationEnabled` field of the `FederatedTypeConfig` associated with the target API type to `false`, which will prompt the sync controller for the target API type to be stopped.

If the goal is to permanently disable federation of the target API type, passing the `--delete-from-api` flag will remove the `FederatedTypeConfig` and the federated type CRD created by `enable`:

```bash
kubefedctl disable <FederatedTypeConfig Name> --delete-from-api
```
WARNING: All custom resources for the type will be removed by this command.
When the sync controller reconciles a federated resource with member clusters, propagation status will be written to the resource as per the following example:
```yaml
apiVersion: types.kubefed.k8s.io/v1alpha1
kind: FederatedNamespace
metadata:
  name: myns
  namespace: myns
spec:
  placement:
    clusterSelector: {}
status:
  # The status "True" of the condition of type Propagation
  # indicates that the state of all member clusters is as
  # intended as of the last probe time.
  conditions:
  - type: Propagation
    status: "True"
    lastProbeTime: "2019-05-08T01:23:20Z"
    lastTransitionTime: "2019-05-08T01:23:20Z"
  # The namespace 'myns' has been verified to exist in the
  # following clusters as of the lastProbeTime recorded
  # in the 'Propagation' condition.
  clusters:
  - name: cluster1
  - name: cluster2
```
If the sync controller encounters an error in creating, updating or deleting managed resources in member clusters, the `Propagation` condition will have a status of `False` and the reason field will be one of the following values:

| Reason | Description |
| --- | --- |
| CheckClusters | One or more clusters is not in the desired state. |
| ClusterRetrievalFailed | An error prevented retrieval of member clusters. |
| ComputePlacementFailed | An error prevented computation of placement. |
For reasons other than `CheckClusters`, an event will be logged with the same reason and can be examined for more detail:

```bash
kubectl describe federatednamespace myns -n myns | grep ComputePlacementFailed

  Warning  ComputePlacementFailed  5m  federatednamespace-controller  Invalid selector <nil>
```
If the `Propagation` condition has status `False` and reason `CheckClusters`, the cluster status can be examined to determine the clusters for which reconciliation was not successful. In the following example, the namespace `myns` has been verified to exist in `cluster1`. The namespace should not exist in `cluster2`, but deletion has failed.
```yaml
apiVersion: types.kubefed.k8s.io/v1alpha1
kind: FederatedNamespace
metadata:
  name: myns
  namespace: myns
spec:
  placement:
    clusters:
    - name: cluster1
status:
  conditions:
  - type: Propagation
    status: "False"
    reason: CheckClusters
    lastProbeTime: "2019-05-08T01:23:20Z"
    lastTransitionTime: "2019-05-08T01:23:20Z"
  clusters:
  - name: cluster1
  - name: cluster2
    status: DeletionFailed
```
When a cluster has a populated status, as in the example above, the sync controller will have written an event with a matching reason that may provide more detail as to the nature of the problem.

```bash
kubectl describe federatednamespace myns -n myns | grep cluster2 | grep DeletionFailed

  Warning  DeletionFailed  5m  federatednamespace-controller  Failed to delete Namespace "myns" in cluster "cluster2"...
```
The following table enumerates the possible values for cluster status:
| Status | Description |
| --- | --- |
| AlreadyExists | The target resource already exists in the cluster, and cannot be adopted due to skipAdoptingResources being configured. |
| CachedRetrievalFailed | An error occurred when retrieving the cached target resource. |
| ClientRetrievalFailed | An error occurred while attempting to create an API client for the member cluster. |
| ClusterNotReady | The latest health check for the cluster did not succeed. |
| ComputeResourceFailed | An error occurred when determining the form of the target resource that should exist in the cluster. |
| CreationFailed | Creation of the target resource failed. |
| CreationTimedOut | Creation of the target resource timed out. |
| DeletionFailed | Deletion of the target resource failed. |
| DeletionTimedOut | Deletion of the target resource timed out. |
| FieldRetentionFailed | An error occurred while attempting to retain the value of one or more fields in the target resource (e.g. clusterIP for a service). |
| LabelRemovalFailed | Removal of the federation label from the target resource failed. |
| LabelRemovalTimedOut | Removal of the federation label from the target resource timed out. |
| RetrievalFailed | Retrieval of the target resource from the cluster failed. |
| UpdateFailed | Update of the target resource failed. |
| UpdateTimedOut | Update of the target resource timed out. |
| VersionRetrievalFailed | An error occurred while attempting to retrieve the last recorded version of the target resource. |
| WaitingForRemoval | The target resource has been marked for deletion and is awaiting garbage collection. |
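To list the per-cluster status entries of a federated resource without reading the full YAML, a jsonpath query such as the following can help; this sketch uses the `myns` FederatedNamespace from the examples above, and clusters without problems may print an empty status column:

```bash
kubectl -n myns get federatednamespace myns \
    -o jsonpath='{range .status.clusters[*]}{.name}{"\t"}{.status}{"\n"}{end}'
```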
All federated resources reconciled by the sync controller have a finalizer (`kubefed.k8s.io/sync-controller`) added to their metadata. This finalizer prevents deletion of a federated resource until the sync controller has a chance to perform pre-deletion cleanup.

Pre-deletion cleanup of a federated resource includes removal of the resources managed by the federated resource from member clusters. To ensure retention of managed resources, add `kubefed.k8s.io/orphan: true` as an annotation to the federated resource prior to deletion:

```bash
kubectl patch <federated type> <name> \
    --type=merge -p '{"metadata": {"annotations": {"kubefed.k8s.io/orphan": "true"}}}'
```
If the sync controller for a given federated type is not able to reconcile a federated resource slated for deletion (for example, because propagation is disabled for that type, or because the federation control plane is not running), a federated resource that still has the federation finalizer will linger rather than being garbage collected. If necessary, the federation finalizer can be removed manually to allow garbage collection.
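One way to remove the finalizer manually is with a merge patch. The sketch below clears all finalizers on the resource, so it assumes the kubefed sync-controller finalizer is the only one present; inspect the finalizers first to be sure:

```bash
# Inspect the finalizers currently set on the federated resource.
kubectl -n myns get federatednamespace myns -o jsonpath='{.metadata.finalizers}'

# Clear the finalizers so the resource can be garbage collected.
kubectl -n myns patch federatednamespace myns \
    --type=merge -p '{"metadata": {"finalizers": []}}'
```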
Follow these instructions to run an example that verifies your deployment is working. The example will create a test namespace with a `federatednamespace` resource, as well as a federated resource for each of the following Kubernetes resources: `configmap`, `secret`, `deployment`, `service` and `serviceaccount`. It will then show how to update the `federatednamespace` resource to move resources between clusters.
First create the `test-namespace` for the test resources:

```bash
kubectl apply -f example/sample1/namespace.yaml \
    -f example/sample1/federatednamespace.yaml
```
Create all the test resources by running:
```bash
kubectl apply -R -f example/sample1
```
NOTE: If you get an error like the following while creating a test resource, it indicates that the given type may need to be enabled with `kubefedctl enable <type>`:

```
unable to recognize "example/sample1/federated<type>.yaml": no matches for kind "Federated<type>" in version "types.kubefed.k8s.io/v1alpha1"
```
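If several of the sample types turn out to be missing, they can be enabled in one pass. The type names below are assumptions based on the resources used in this example; adjust them to whatever the error messages actually report:

```bash
for t in configmaps secrets services deployments.apps serviceaccounts jobs.batch; do
    kubefedctl enable ${t}
done
```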
Check the status of all the resources in each cluster by running:
```bash
for r in configmaps secrets service deployment serviceaccount job; do
    for c in cluster1 cluster2; do
        echo; echo ------------ ${c} resource: ${r} ------------; echo
        kubectl --context=${c} -n test-namespace get ${r}
        echo; echo
    done
done
```
The status of propagation is also recorded on each federated resource:
```bash
for r in federatedconfigmaps federatedsecrets federatedservice federateddeployment federatedserviceaccount federatedjob; do
    echo; echo ------------ resource: ${r} ------------; echo
    kubectl -n test-namespace get ${r} -o yaml
    echo; echo
done
```
Now make sure `nginx` is running properly in each cluster:

```bash
for c in cluster1 cluster2; do
    NODE_PORT=$(kubectl --context=${c} -n test-namespace get service \
        test-service -o jsonpath='{.spec.ports[0].nodePort}')
    echo; echo ------------ ${c} ------------; echo
    curl $(echo -n $(minikube ip -p ${c})):${NODE_PORT}
    echo; echo
done
```

Remove `cluster2` via a patch command or manually:

```bash
kubectl -n test-namespace patch federatednamespace test-namespace \
    --type=merge -p '{"spec": {"placement": {"clusters": [{"name": "cluster1"}]}}}'
```

or

```bash
kubectl -n test-namespace edit federatednamespace test-namespace
```
Then wait to verify all resources are removed from `cluster2`:

```bash
for r in configmaps secrets service deployment serviceaccount job; do
    for c in cluster1 cluster2; do
        echo; echo ------------ ${c} resource: ${r} ------------; echo
        kubectl --context=${c} -n test-namespace get ${r}
        echo; echo
    done
done
```

We can quickly add back all the resources by simply updating the `FederatedNamespace` to add `cluster2` again, via a patch command or manually:

```bash
kubectl -n test-namespace patch federatednamespace test-namespace \
    --type=merge -p '{"spec": {"placement": {"clusters": [{"name": "cluster1"}, {"name": "cluster2"}]}}}'
```

or

```bash
kubectl -n test-namespace edit federatednamespace test-namespace
```

Then wait and verify all resources are added back to `cluster2`:

```bash
for r in configmaps secrets service deployment serviceaccount job; do
    for c in cluster1 cluster2; do
        echo; echo ------------ ${c} resource: ${r} ------------; echo
        kubectl --context=${c} -n test-namespace get ${r}
        echo; echo
    done
done
```

Lastly, make sure `nginx` is running properly in each cluster:

```bash
for c in cluster1 cluster2; do
    NODE_PORT=$(kubectl --context=${c} -n test-namespace get service \
        test-service -o jsonpath='{.spec.ports[0].nodePort}')
    echo; echo ------------ ${c} ------------; echo
    curl $(echo -n $(minikube ip -p ${c})):${NODE_PORT}
    echo; echo
done
```
If you were able to verify that the resources were removed and added back, then you have successfully verified a working kubefed deployment.
In addition to specifying an explicit list of clusters that a resource should be propagated to via the `spec.placement.clusters` field of a federated resource, it is possible to use the `spec.placement.clusterSelector` field to provide a label selector that determines the list of clusters at runtime.

If the goal is to select a subset of member clusters, make sure that the `KubefedCluster` resources that are intended to be selected have the appropriate labels applied. The following command is an example of labeling a `KubefedCluster`:

```bash
kubectl label kubefedclusters -n kube-federation-system cluster1 foo=bar
```
Please refer to the Kubernetes label command documentation for more detail on how `kubectl label` works.
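You can confirm which labels each cluster carries, and therefore which clusters a selector will match, with `--show-labels`; removing a label uses the standard trailing-dash syntax:

```bash
kubectl -n kube-federation-system get kubefedclusters --show-labels

# Remove the label again if it is no longer needed.
kubectl -n kube-federation-system label kubefedclusters cluster1 foo-
```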
The following sections detail how `spec.placement.clusters` and `spec.placement.clusterSelector` are used in determining the clusters that a federated resource should be propagated to.

```yaml
spec:
  placement: {}
```
In this case, you can either set an empty `placement: {}` as above or omit the `placement` field from the `spec` entirely. The resource will not be propagated to member clusters.
```yaml
spec:
  placement:
    clusters:
    - name: cluster2
    - name: cluster1
    clusterSelector:
      matchLabels:
        foo: bar
```

For this case, `spec.placement.clusterSelector` will be ignored because `spec.placement.clusters` is provided. This ensures that the results of runtime scheduling have priority over the manual definition of a cluster selector.
```yaml
spec:
  placement:
    clusterSelector: {}
```
In this case, the resource will be propagated to all member clusters.
```yaml
spec:
  placement:
    clusterSelector:
      matchLabels:
        foo: bar
```
In this case, the resource will only be propagated to member clusters that are labeled with `foo: bar`.
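Since `clusterSelector` follows the shape of a standard Kubernetes label selector, set-based requirements should also be expressible. The following is a hedged sketch; verify `matchExpressions` support against your kubefed version before relying on it:

```yaml
spec:
  placement:
    clusterSelector:
      matchExpressions:
      - key: region
        operator: In
        values:
        - us-east
        - us-west
```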
To cleanup the example simply delete the namespace:
```bash
kubectl delete ns test-namespace
```
If federated resources are not propagated as expected to the member clusters, you can use the following command to view `Events`, which may aid in diagnosing the problem:

```bash
kubectl describe <federated CRD> <CR name> -n test-namespace
```

An example for the `federatedserviceaccounts` CRD is as follows:

```bash
kubectl describe federatedserviceaccounts test-serviceaccount -n test-namespace
```
It may also be useful to inspect the kubefed controller log as follows:
```bash
kubectl logs -f kubefed-controller-manager-0 -n kube-federation-system
```
All prior instructions referred to the deployment and use of a cluster-scoped kubefed control plane. It is also possible to deploy a namespace-scoped control plane. In this mode of operation, kubefed controllers will target resources in a single namespace on both host and member clusters. This may be desirable when experimenting with federation on a production cluster.
To deploy a federation in a namespaced configuration, set `global.scope` to `Namespaced` as per the Helm chart install instructions.

Joining additional clusters to a namespaced federation requires providing additional arguments to `kubefedctl join`:

- `--kubefed-namespace=<namespace>` to ensure the cluster is joined to the federation running in the specified namespace

To join `mycluster` when `KUBEFED_NAMESPACE=test-namespace` was used for deployment:

```bash
kubefedctl join mycluster --cluster-context mycluster \
    --host-cluster-context mycluster --v=2 \
    --kubefed-namespace=test-namespace
```
In most cases, the federation sync controller will overwrite any changes made to resources it manages in member clusters. The exceptions appear in the following table. Where retention is conditional, an explanation will be provided in a subsequent section.
| Resource Type | Fields | Retention | Requirement |
| --- | --- | --- | --- |
| All | metadata.resourceVersion | Always | Updates require the most recent resourceVersion for concurrency control. |
| Scalable | spec.replicas | Conditional | The HPA controller may be managing the replica count of a scalable resource. |
| Service | spec.clusterIP, spec.ports | Always | A controller may be managing these fields. |
| ServiceAccount | secrets | Conditional | A controller may be managing this field. |
For scalable resources (those that have a `scale` subresource, e.g. `ReplicaSet` and `Deployment`), retention of the `spec.replicas` field is controlled by the `retainReplicas` boolean field of the federated resource. `retainReplicas` defaults to `false`, and should be set to `true` only if the resource will be managed by HPA in member clusters.
Retention of the replicas field is possible either for all clusters or for none. If a resource will be managed by HPA in some clusters but not others, it will be necessary to create a separate federated resource for each retention strategy (i.e. one with `retainReplicas: true` and one with `retainReplicas: false`).
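For illustration, the relevant portion of a federated resource with replica retention enabled might look like the sketch below. The placement of `retainReplicas` at the top level of `spec` is inferred from the description above, so confirm it against your kubefed API version:

```yaml
apiVersion: types.kubefed.k8s.io/v1alpha1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  retainReplicas: true   # let an HPA in member clusters own spec.replicas
  placement:
    clusterSelector: {}
  template:
    # ... the Deployment spec to propagate ...
```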
A populated `secrets` field of a `ServiceAccount` resource managed by federation will be retained if the managing federated resource does not specify a value for the field. This avoids the possibility of the sync controller attempting to repeatedly clear the field while a local serviceaccounts controller attempts to repeatedly set it to a generated value.
The architecture of the kubefed API allows higher level APIs to be constructed using the mechanics provided by the standard form of the federated API types (containing fields for `template`, `placement` and `overrides`) and the associated controllers for a given resource.
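For reference, a federated resource in this standard form looks roughly like the sketch below. The override path syntax has varied between kubefed releases, so treat the `overrides` stanza as illustrative and check the manifests under `example/sample1` for the form your version expects:

```yaml
apiVersion: types.kubefed.k8s.io/v1alpha1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  # The resource to propagate, minus cluster-specific fields.
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
  # Which clusters the resource should exist in.
  placement:
    clusters:
    - name: cluster1
    - name: cluster2
  # Per-cluster deviations from the template.
  overrides:
  - clusterName: cluster2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
```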
The following sections describe a few of the higher level APIs implemented as part of Kubefed.
Multi-Cluster Ingress DNS provides the ability to programmatically manage DNS resource records of Ingress objects through ExternalDNS integration. Review the guides below for different DNS providers to learn more.
- Multi-Cluster Ingress DNS with ExternalDNS Guide for Google Cloud DNS
- Multi-Cluster Ingress DNS with ExternalDNS Guide for CoreDNS in minikube
Multi-Cluster Service DNS provides the ability to programmatically manage DNS resource records of Service objects through ExternalDNS integration. Review the guides below for different DNS providers to learn more.
- Multi-Cluster Service DNS with ExternalDNS Guide for Google Cloud DNS
- Multi-Cluster Service DNS with ExternalDNS Guide for CoreDNS in minikube
ReplicaSchedulingPreference provides an automated mechanism for distributing and maintaining the total number of replicas of `deployment`- or `replicaset`-based federated workloads across federated clusters. The distribution is based on high level preferences given by the user, which include the semantics of weighted distribution and limits (min and max) for distributing the replicas. They also include semantics that allow replicas to be redistributed dynamically if some replica pods remain unscheduled in some clusters, for example due to insufficient resources in that cluster.
RSP is used in place of ReplicaSchedulingPreference for brevity in the text that follows.
The RSP controller works in a sync loop, observing the RSP resource and the `FederatedDeployment` or `FederatedReplicaset` resource with the matching `namespace/name`. If it finds that both the RSP and its associated federated resource (the type of which is specified using `spec.targetKind`) exist, it lists the currently healthy clusters and distributes `spec.totalReplicas` using the associated per-cluster user preferences. If the per-cluster preferences are absent, it distributes `spec.totalReplicas` evenly among all clusters. It then updates (or creates, if missing) the resource of the given `targetKind` with the same `namespace/name`, filling in the calculated per-cluster replica values and leveraging the sync controller to actually propagate the Kubernetes resources to the federated clusters. It is noteworthy that if an RSP is present, `spec.replicas` from the federated resource is unused.
RSP also provides a further useful feature via `spec.rebalance`. If this is set to `true`, the RSP controller monitors the replica pods of the target workload in each federated cluster; if it finds that some clusters are unable to schedule those pods for a prolonged period, it moves (rebalances) the replicas to clusters where all pods are running and healthy. In other words, this helps move replica workloads towards clusters that have enough capacity and away from clusters that are currently running out of capacity. The `rebalance` feature might cause an initial shuffle of replicas before reaching an eventually balanced distribution. The controller might also keep trying to move a few replicas back into the cluster(s) that ran out of capacity, to check whether they can be scheduled again and the normalised state (even distribution, or the state desired by the user preferences) restored; this probing is currently the only mechanism to detect that such a cluster has capacity again. `spec.rebalance` should not be used if this behaviour is unacceptable.
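A minimal sketch of an RSP with rebalancing enabled, following the field names described above and used in the examples below:

```yaml
apiVersion: scheduling.kubefed.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
  # Allow replicas to be shifted away from clusters that cannot schedule them.
  rebalance: true
```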
The RSP can be considered a more user friendly mechanism to distribute replicas, as the inputs needed from the user at the federated control plane are reduced: the user only needs to create the RSP resource and the associated federated resource (with only `spec.template` populated) to distribute the replicas. It can also be considered a more automated approach to distributing and continuously reconciling the workload replicas.
The usage of the RSP semantics is illustrated with some examples below. The examples consider three federated clusters `A`, `B` and `C`.
```yaml
apiVersion: scheduling.kubefed.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
```
or
```yaml
apiVersion: scheduling.kubefed.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
  clusters:
    "*":
      weight: 1
```
A, B and C get 3 replicas each.
```yaml
apiVersion: scheduling.kubefed.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
  clusters:
    A:
      weight: 1
    B:
      weight: 2
```
A gets 3 and B gets 6 replicas, in the proportion of 1:2. C does not get any replicas, as a missing weight preference is treated as weight=0.
```yaml
apiVersion: scheduling.kubefed.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
  clusters:
    A:
      minReplicas: 4
      maxReplicas: 6
      weight: 1
    B:
      minReplicas: 4
      maxReplicas: 8
      weight: 2
```
A gets 4 and B gets 5, because cluster A's minReplicas=4 overrides the purely weighted 1:2 split (which would have given A only 3).
```yaml
apiVersion: scheduling.kubefed.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: test-deployment
  namespace: test-ns
spec:
  targetKind: FederatedDeployment
  totalReplicas: 50
  clusters:
    "*":
      weight: 1
    "C":
      maxReplicas: 20
      weight: 1
```
Possible scenarios:

- All clusters have capacity: replica layout is A=16, B=17, C=17.
- B is offline or has no capacity: replica layout is A=30, B=0, C=20.
- A and B are offline: replica layout is C=20.
The kubefed controller manager is always deployed with the leader election feature to ensure high availability of the control plane. The leader election module ensures that a leader is always elected among the multiple instances, and that leader takes care of running the controllers. If the active instance goes down, one of the standby instances is elected leader, ensuring minimal downtime. Leader election also ensures that only one instance is responsible for reconciliation at any given time. You can refer to the Helm chart configuration to tune the leader election parameters for your environment (the defaults should be sane for most environments).
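As a hedged sketch only, tuning leader election through a Helm values file might look like the following; the key names are assumptions and may differ between chart versions, so check the chart's `values.yaml` for the actual parameter names and defaults before using them:

```yaml
# values.yaml passed to `helm install`/`helm upgrade` for the kubefed chart.
# NOTE: the key names below are assumptions; verify them against the chart's values.yaml.
controllermanager:
  leaderElectLeaseDuration: 15s
  leaderElectRenewDeadline: 10s
  leaderElectRetryPeriod: 5s
  leaderElectResourceLock: configmaps
```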