This project builds a managed Kubernetes Composite Resource (XR) with three Compositions supporting the three main cloud providers (AWS, Azure, GCP).
Each XR can have one or more Compositions. Each Composition describes how the XR should be created by defining the provider package and the list of resources (Managed Resources) that build it.
This allows a Composition to act as a class of service.
You can use this repository as a foundation to understand, build, and operate a managed Kubernetes platform in the cloud.
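For orientation, a Composition skeleton looks roughly like this (a minimal sketch only; names and the API group are hypothetical, the real compositions live under configuration/ and are listed later in this README):
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: aws-managedcluster            # hypothetical name
spec:
  compositeTypeRef:                   # the XR kind this Composition satisfies
    apiVersion: example.org/v1alpha1  # hypothetical XRD group/version
    kind: XManagedCluster
  resources:                          # the Managed Resources that build the XR
    - name: controlplane
      base: {}                        # a provider-specific resource, e.g. an EKS Cluster
      patches: []                     # copy fields from the XR into the resource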
This repository uses two Crossplane distributions:
- XP - Crossplane - the upstream project
- UXP - Upbound Universal Crossplane - a downstream distribution of Crossplane maintained by Upbound
Though it is possible to mix Crossplane distributions with providers, I will use the following combinations in this repo:
- XP + Native providers
- UXP + Official providers
Support for Official providers maintained by Upbound has been added.
Providers are Crossplane packages that allow provisioning the respective infrastructure. They differ in the number of supported cloud resources (CRDs), the implementation language, and the maintenance model. At the moment we can use two different provider types to build the composition:
- Native (Classic) - maintained by the XP community; the fastest one, written in Go, with limited resource coverage.
- Official - maintained by Upbound; the newest one, based on Upjet, with coverage between Native and Jet-preview.
There is another provider available, but it is deprecated and has been replaced by the Official one:
- Jet - maintained by the XP community, based on Terrajet, available in two packages: one with similar coverage to the classic provider and one (with the -preview suffix) with full resource coverage. I will keep it in this project for the time being, but it is no longer maintained and will be removed in a future release.
To give you an idea of the current coverage state (based on the AWS provider), see the provider documentation and marketplace listings:
## Official providers
- AWS Official Doc or AWS Marketplace
- Azure Official or Azure Marketplace
- GCP Official or GCP Marketplace
All resources needed to provision a managed Kubernetes cluster are defined in the smaller classic Jet provider, so there is no need to install the much bigger package with the -preview suffix.
Post-provisioning uses the Helm and Kubernetes providers.
To demonstrate both post-provisioning resource types, I created the following examples (a sketch of the namespace Object follows the list):
- (Universal) Crossplane Provisioning using Helm Chart
- Production Namespace Provisioning using Kubernetes Manifest
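The production namespace example relies on provider-kubernetes' Object kind, which wraps an arbitrary manifest. A minimal sketch (the object name xpaks-ns-prod appears in the status output later in this README; the wrapped namespace name and the ProviderConfig are assumptions):
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: Object
metadata:
  name: xpaks-ns-prod
spec:
  forProvider:
    manifest:              # the manifest applied to the provisioned cluster
      apiVersion: v1
      kind: Namespace
      metadata:
        name: prod         # assumed namespace name
  providerConfigRef:
    name: ...              # points at the kubeconfig of the new cluster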
Install a Kubernetes cluster. I recommend Rancher Desktop for a local cluster.
# Install XP cli
curl -sL https://raw.githubusercontent.com/crossplane/crossplane/master/install.sh | sh
# Install XP
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm upgrade --install \
crossplane crossplane-stable/crossplane \
--namespace crossplane-system \
--create-namespace \
--wait
# --set nodeSelector."agentpool"=xpjetaks2
# Verify status
helm list -n crossplane-system
kubectl get all -n crossplane-system
# Install UP Command-Line
brew install upbound/tap/up
# Install UXP
up uxp install
# Verify status
kubectl get pods -n upbound-system
As an output of the above setup you should have three credential files with the following content.
- aws-cred.conf
[default]
aws_access_key_id = XXXXXXXXXX
aws_secret_access_key = WFhYWFhYWFhYWA==
- azure-cred.json
{
"clientId": "XXXXXXXXXX",
"clientSecret": "WFhYWFhYWFhYWA==",
"subscriptionId": "XXXXXXXXXX",
"tenantId": "XXXXXXXXXX",
"activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
"resourceManagerEndpointUrl": "https://management.azure.com/",
"activeDirectoryGraphResourceId": "https://graph.windows.net/",
"sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
"galleryEndpointUrl": "https://gallery.azure.com/",
"managementEndpointUrl": "https://management.core.windows.net/"
}
- gcp-cred.json
{
"type": "service_account",
"project_id": "XXXXXXXXXX",
"private_key_id": "XXXXXXXXXX",
"private_key": "-----BEGIN PRIVATE KEY-----\nWFhYWFhYWFhYWA==\n-----END PRIVATE KEY-----\n",
"client_email": "XXXXXXXXXX",
"client_id": "XXXXXXXXXX",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "XXXXXXXXXX"
}
You can store them in a cloud Key Vault (KV) or any other secret store. For example, to retrieve the credential files from Azure Key Vault:
KEYVAULT=<Key Vault Name>
az keyvault secret show --name uxpAwsCred --vault-name $KEYVAULT --query value -o tsv | sed -r 's@ aws@\naws@g' > aws-cred.conf
az keyvault secret show --name uxpAzureCred --vault-name $KEYVAULT --query value -o tsv | jq > azure-cred.json
az keyvault secret show --name uxpGcpCred --vault-name $KEYVAULT --query value -o tsv | jq > gcp-cred.json
To provision cloud resources using Crossplane, we have to create and configure a cloud provider resource. This resource stores the cloud credentials reference and is used by Crossplane to interact with the cloud provider.
We need to provide two environment variables:
- the base64-encoded cloud credentials
- the name of the secret namespace: upbound-system for UXP, crossplane-system for XP
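Both variables are substituted into the credentials secret manifest by the eval/echo pattern used throughout this README. A minimal sketch of such a templated manifest (the secret name matches the clean-up section at the end; the real files live under providers/):
apiVersion: v1
kind: Secret
metadata:
  name: aws-account-creds
  namespace: ${PROVIDER_SECRET_NAMESPACE}            # substituted by the eval/echo pattern
type: Opaque
data:
  credentials: ${BASE64ENCODED_AWS_ACCOUNT_CREDS}    # substituted by the eval/echo pattern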
# XP Native
kubectl apply -f providers/native/xp-providers.yaml
# Service providers
kubectl apply -f providers/service-providers.yaml
# Verification
kubectl get provider
PROVIDER_SECRET_NAMESPACE=crossplane-system
BASE64ENCODED_AWS_ACCOUNT_CREDS=$(base64 -i aws-cred.conf | tr -d "\n")
eval "echo \"$(cat providers/secret-aws-provider.yaml)\"" | kubectl apply -f -
eval "echo \"$(cat providers/native/xp-aws-providerconfig.yaml)\"" | kubectl apply -f -
kubectl get providerconfig.aws.crossplane.io
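The applied ProviderConfig ties the provider to that secret. Roughly (a sketch based on the classic provider-aws API, with names taken from the clean-up section; the actual file is providers/native/xp-aws-providerconfig.yaml):
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: aws-xp-provider
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aws-account-creds
      key: credentials        # key inside the secret; an assumption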
# Verification
kubectl apply -f validation/native/xp-aws-bucket.yaml
kubectl get Bucket.s3.aws.crossplane.io
# AWS Validation using cmd (alternatively console UI)
aws s3 ls --output table
# Clean-up
kubectl delete Bucket.s3.aws.crossplane.io xp-aws-bucket
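For reference, validation/native/xp-aws-bucket.yaml contains a single Managed Resource along these lines (a sketch; the region and ACL values are assumptions):
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: xp-aws-bucket
spec:
  forProvider:
    locationConstraint: eu-west-1   # assumed region
    acl: private                    # assumed ACL
  providerConfigRef:
    name: aws-xp-provider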
PROVIDER_SECRET_NAMESPACE=crossplane-system
BASE64ENCODED_AZURE_ACCOUNT_CREDS=$(base64 -i azure-cred.json | tr -d "\n")
eval "echo \"$(cat providers/secret-azure-provider.yaml)\"" | kubectl apply -f -
eval "echo \"$(cat providers/native/xp-azure-providerconfig.yaml)\"" | kubectl apply -f -
kubectl get providerconfig.azure.crossplane.io
# Verification
kubectl apply -f validation/native/xp-azure-bucket.yaml
kubectl get Account.storage.azure.crossplane.io
# Azure Validation using cmd (alternatively console UI)
az group show --resource-group xp-azure-rg -o table
az storage account show -g xp-azure-rg -n xpazurebucket007 -o table
# Clean-up
kubectl delete Account.storage.azure.crossplane.io xpazurebucket007
For GCP we additionally need a third environment variable: PROJECT_ID.
PROVIDER_SECRET_NAMESPACE=crossplane-system
BASE64ENCODED_GCP_PROVIDER_CREDS=$(base64 -i gcp-cred.json | tr -d "\n")
PROJECT_ID=$(gcloud projects list --filter='NAME:<Project Name>' --format="value(PROJECT_ID.scope())")
eval "echo \"$(cat providers/secret-gcp-provider.yaml)\"" | kubectl apply -f -
eval "echo \"$(cat providers/native/xp-gcp-providerconfig.yaml)\"" | kubectl apply -f -
kubectl get providerconfig.gcp.crossplane.io
# Verification
kubectl apply -f validation/native/xp-gcp-bucket.yaml
kubectl get Bucket.storage.gcp.crossplane.io -w
# GCP Validation using cmd (alternatively console UI)
gsutil ls -p <PROJECT_ID>
kubectl delete Bucket.storage.gcp.crossplane.io xp-gcp-bucket
You can install Official providers:
- using a configuration package (sketched after the install steps below)
kubectl apply -f configuration/official.yaml
watch kubectl get pkg
- manually, by applying the manifest files
# UXP
kubectl apply -f providers/official/uxp-providers.yaml
# Service providers
kubectl apply -f providers/service-providers.yaml
kubectl get provider.pkg
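For reference, configuration/official.yaml is a Configuration package manifest along these lines (a sketch; the package reference is not shown in this README, and the name natzka matches the clean-up section):
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: natzka
spec:
  package: <registry>/<org>/<package>:<version>   # OCI package reference; elided here
  packagePullPolicy: IfNotPresent                 # an assumption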
PROVIDER_SECRET_NAMESPACE=upbound-system
BASE64ENCODED_AWS_ACCOUNT_CREDS=$(base64 -i aws-cred.conf | tr -d "\n")
eval "echo \"$(cat providers/secret-aws-provider.yaml)\"" | kubectl apply -f -
kubectl apply -f providers/official/uxp-aws-providerconfig.yaml
kubectl get providerconfig
# Verification
kubectl apply -f validation/official/uxp-aws-bucket.yaml
kubectl get Bucket.s3.aws.upbound.io -w
# AWS Validation using cmd (alternatively console UI)
aws s3 ls --output table
# Clean-up
kubectl delete Bucket.s3.aws.upbound.io uxp-aws-bucket
PROVIDER_SECRET_NAMESPACE=upbound-system
BASE64ENCODED_AZURE_ACCOUNT_CREDS=$(base64 -i azure-cred.json | tr -d "\n")
eval "echo \"$(cat providers/secret-azure-provider.yaml)\"" | kubectl apply -f -
kubectl apply -f providers/official/uxp-azure-providerconfig.yaml
kubectl get providerconfig
# Verification
kubectl apply -f validation/official/uxp-azure-bucket.yaml
kubectl get account.storage.azure.upbound.io/uxpazurebucket007 -w
# Azure Validation using cmd (alternatively console UI)
az group show --resource-group uxp-azure-rg -o table
az storage account show -g uxp-azure-rg -n uxpazurebucket007 -o table
# Clean-up
kubectl delete -f validation/official/uxp-azure-bucket.yaml
For GCP we additionally need a third environment variable: PROJECT_ID.
PROVIDER_SECRET_NAMESPACE=upbound-system
BASE64ENCODED_GCP_PROVIDER_CREDS=$(base64 -i gcp-cred.json | tr -d "\n")
PROJECT_ID=$(gcloud projects list --filter='NAME:<Project Name>' --format="value(PROJECT_ID.scope())")
eval "echo \"$(cat providers/secret-gcp-provider.yaml)\"" | kubectl apply -f -
eval "echo \"$(cat providers/official/uxp-gcp-providerconfig.yaml)\"" | kubectl apply -f -
kubectl get providerconfig
# Verification
kubectl apply -f validation/official/uxp-gcp-bucket.yaml
kubectl get Bucket.storage.gcp.upbound.io -w
# GCP Validation using cmd (alternatively console UI)
gsutil ls -p <PROJECT_ID>
kubectl delete Bucket.storage.gcp.upbound.io uxp-gcp-bucket
# XP Jet
kubectl apply -f providers/jet-providers.yaml
# Service providers
kubectl apply -f providers/service-providers.yaml
# Verification
kubectl get provider
PROVIDER_SECRET_NAMESPACE=crossplane-system
BASE64ENCODED_AWS_ACCOUNT_CREDS=$(base64 -i aws-cred.conf | tr -d "\n")
eval "echo \"$(cat providers/secret-aws-provider.yaml)\"" | kubectl apply -f -
eval "echo \"$(cat providers/jet-aws-provider.yaml)\"" | kubectl apply -f -
kubectl get providerconfig.aws.jet.crossplane.io
PROVIDER_SECRET_NAMESPACE=crossplane-system
BASE64ENCODED_AZURE_ACCOUNT_CREDS=$(base64 -i azure-cred.json | tr -d "\n")
eval "echo \"$(cat providers/secret-azure-provider.yaml)\"" | kubectl apply -f -
eval "echo \"$(cat providers/jet-azure-provider.yaml)\"" | kubectl apply -f -
kubectl get providerconfig.azure.jet.crossplane.io
For GCP we additionally need a third environment variable: PROJECT_ID.
PROVIDER_SECRET_NAMESPACE=crossplane-system
BASE64ENCODED_GCP_PROVIDER_CREDS=$(base64 -i gcp-cred.json | tr -d "\n")
PROJECT_ID=$(gcloud projects list --filter='NAME:<Project Name>' --format="value(PROJECT_ID.scope())")
eval "echo \"$(cat providers/secret-gcp-provider.yaml)\"" | kubectl apply -f -
eval "echo \"$(cat providers/jet-gcp-provider.yaml)\"" | kubectl apply -f -
kubectl get providerconfig.gcp.jet.crossplane.io
unset BASE64ENCODED_AWS_ACCOUNT_CREDS BASE64ENCODED_AZURE_ACCOUNT_CREDS BASE64ENCODED_GCP_PROVIDER_CREDS PROJECT_ID PROVIDER_SECRET_NAMESPACE
rm aws-cred.conf azure-cred.json gcp-cred.json
Crossplane divides responsibility for infrastructure provisioning as follows:
- Ops/SRE defines the platform and APIs for the Dev team
- Dev consumes the infrastructure defined by the Ops team
The Platform (Ops) team creates Compositions and Composite Resource Definitions (XRDs) to define and configure the managed Kubernetes service infrastructure in the cloud.
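The XRD defines the xmanagedcluster API and its managedcluster claim used throughout this README. Schematically (a sketch; the group and version are hypothetical, see configuration/*/definition.yaml for the real schema):
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xmanagedclusters.example.org    # hypothetical group
spec:
  group: example.org                    # hypothetical group
  names:
    kind: XManagedCluster
    plural: xmanagedclusters
  claimNames:                           # enables namespaced claims
    kind: ManagedCluster
    plural: managedclusters
  versions:
    - name: v1alpha1                    # assumed version
      served: true
      referenceable: true
      schema:
        openAPIV3Schema: {}             # cluster parameters elided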
# Compositions using Native providers
kubectl apply -f configuration/native/definition.yaml
kubectl apply -f configuration/native/xp-eks-composition.yaml
kubectl apply -f configuration/native/xp-aks-composition.yaml
kubectl apply -f configuration/native/xp-gke-composition.yaml
# Using configuration
kubectl apply -f configuration/official.yaml
kubectl get pkg
# Manually
kubectl apply -f configuration/official/definition.yaml
kubectl apply -f configuration/official/uxp-eks-composition.yaml
kubectl apply -f configuration/official/uxp-aks-composition.yaml
kubectl apply -f configuration/official/uxp-gke-composition.yaml
# Compositions using Jet providers
kubectl apply -f configuration/jet/definition.yaml
kubectl apply -f configuration/jet/jet-eks-composition.yaml
kubectl apply -f configuration/jet/jet-aks-composition.yaml
kubectl apply -f configuration/jet/jet-gke-composition.yaml
The App (Dev) team provisions infrastructure by creating claim objects for the XRDs defined by the Ops team. In the claim manifest, please ensure that you use a supported region (a sketch of a claim follows the commands below).
kubectl create ns managed
# Claims using native provider
kubectl apply -f claims/native/xp-eks-claim.yaml
kubectl apply -f claims/native/xp-aks-claim.yaml
kubectl apply -f claims/native/xp-gke-claim.yaml
# Claims using official provider
kubectl apply -f claims/official/uxp-eks-claim.yaml
kubectl apply -f claims/official/uxp-aks-claim.yaml
kubectl apply -f claims/official/uxp-gke-claim.yaml
# Claims using jet provider - deprecated
kubectl apply -f claims/jet/jet-eks-claim.yaml
kubectl apply -f claims/jet/jet-aks-claim.yaml
kubectl apply -f claims/jet/jet-gke-claim.yaml
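A claim manifest looks roughly like this (a sketch; the API group, version, and field names are assumptions based on the parameters listed at the end of this README, see the files under claims/ for the real schema):
apiVersion: example.org/v1alpha1   # hypothetical group/version
kind: ManagedCluster
metadata:
  name: uxpeks
  namespace: managed
spec:
  parameters:                      # field names assumed
    version: "1.23"                # Kubernetes version (assumed value)
    nodeSize: small                # node size
    nodeCount: 1                   # node count
    region: EU                     # cross-cloud region abstraction
  writeConnectionSecretToRef:
    name: uxpeks                   # connection secret consumed later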
The Dev team provisions Claims, which either generate new Composite Resources (XRs) or assign existing ones.
We can check progress using:
kubectl get managedcluster -n managed
# Example Output
NAME CLUSTERNAME CONTROLPLANE NODEPOOL FARGATEPROFILE SYNCED READY CONNECTION-SECRET AGE
xpaks cluster-xpaks Succeeded Succeeded NA4-cluster-xpaks True True xpaks 7m
xpgke cluster-xpgke RUNNING RUNNING NA4-cluster-xpgke True True xpgke 9m1s
xpeks cluster-xpeks ACTIVE ACTIVE ACTIVE True True xpeks 14m
uxpaks cluster-uxpaks True True NA4-cluster-uxpaks True True uxpaks 5m59s
uxpgke cluster-uxpgke True True NA4-cluster-uxpgke True True uxpgke 11m
uxpeks cluster-uxpeks ACTIVE ACTIVE ACTIVE True True uxpeks 22m
You can compare the readiness time of the different managed Kubernetes clusters under the AGE column.
To verify the status of the Helm charts and Kubernetes objects:
kubectl get Object,Release
NAME SYNCED READY AGE
object.kubernetes.crossplane.io/xpaks-ns-prod True True 23h
NAME CHART VERSION SYNCED READY STATE REVISION DESCRIPTION AGE
release.helm.crossplane.io/xpaks-crossplane crossplane 1.6.3 True True deployed 1 Install complete 23h
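The Release above is provider-helm's Managed Resource; inside the compositions it is declared roughly like this (a sketch based on provider-helm's v1beta1 API; the chart name and version are taken from the output above, the rest is assumed):
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: xpaks-crossplane
spec:
  forProvider:
    chart:
      name: crossplane
      repository: https://charts.crossplane.io/stable
      version: "1.6.3"
    namespace: crossplane-system       # target namespace on the new cluster
  providerConfigRef:
    name: ...                          # Helm ProviderConfig pointing at the new cluster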
# Using secrets (eks and aks)
kubectl -n managed get secret xpeks --output jsonpath="{.data.kubeconfig}" | base64 -d | tee kubeconfig
kubectl -n managed get secret xpaks --output jsonpath="{.data.kubeconfig}" | base64 -d | tee kubeconfig
export KUBECONFIG=$PWD/kubeconfig
# Using Cloud APIs
export KUBECONFIG=$PWD/kubeconfig
gcloud container clusters get-credentials cluster-xpgke --region europe-west2 --project <project name>
# Using secrets (eks and aks)
kubectl -n managed get secret uxpaks --output jsonpath="{.data.kubeconfig}" | base64 -d | tee kubeconfig
kubectl -n managed get secret uxpeks --output jsonpath="{.data.kubeconfig}" | base64 -d | tee kubeconfig
export KUBECONFIG=$PWD/kubeconfig
# Using Cloud APIs
export KUBECONFIG=$PWD/kubeconfig
gcloud container clusters get-credentials cluster-uxpgke --region europe-west2 --project <project name>
az aks get-credentials --resource-group rg-uxpaks --name cluster-uxpaks --admin
aws eks update-kubeconfig --region eu-west-1 --name cluster-uxpeks --alias uxpeks
Deleting a claim takes care of cleaning up all managed resources created to satisfy it.
# Native
kubectl delete managedcluster -n managed xpeks
kubectl delete managedcluster -n managed xpaks
kubectl delete managedcluster -n managed xpgke
# Official
kubectl delete managedcluster -n managed uxpeks
kubectl delete managedcluster -n managed uxpaks
kubectl delete managedcluster -n managed uxpgke
kubectl get providerconfig
# Clean-up Native Providers
kubectl delete providerconfig.aws.crossplane.io/aws-xp-provider
kubectl delete providerconfig.azure.crossplane.io/azure-xp-provider
kubectl delete providerconfig.gcp.crossplane.io/gcp-xp-provider
# Clean-up Official Providers
kubectl delete providerconfig.aws.upbound.io/aws-uxp-provider
kubectl delete providerconfig.azure.upbound.io/azure-uxp-provider
kubectl delete providerconfig.gcp.upbound.io/gcp-uxp-provider
# Clean-up Jet Providers
kubectl delete providerconfig.aws.jet.crossplane.io/aws-jet-provider
kubectl delete providerconfig.azure.jet.crossplane.io/azure-jet-provider
kubectl delete providerconfig.gcp.jet.crossplane.io/gcp-jet-provider
The secret namespace is upbound-system for UXP and crossplane-system for XP.
# Clean-up Crossplane Secrets
kubectl delete secret -n crossplane-system aws-account-creds
kubectl delete secret -n crossplane-system azure-account-creds
kubectl delete secret -n crossplane-system gcp-account-creds
# Clean-up Upbound Secrets
kubectl delete secret -n upbound-system aws-account-creds
kubectl delete secret -n upbound-system azure-account-creds
kubectl delete secret -n upbound-system gcp-account-creds
# native
kubectl delete provider.pkg aws-provider
kubectl delete provider.pkg azure-provider
kubectl delete provider.pkg gcp-provider
# services
kubectl delete provider.pkg provider-helm
kubectl delete provider.pkg provider-kubernetes
# official with manual
kubectl delete provider.pkg aws-uxp-provider
kubectl delete provider.pkg azure-uxp-provider
kubectl delete provider.pkg gcp-uxp-provider
kubectl delete provider.pkg provider-helm
kubectl delete provider.pkg provider-kubernetes
# official with configuration
kubectl delete configuration.pkg.crossplane.io/natzka
kubectl delete provider.pkg upbound-provider-aws
kubectl delete provider.pkg upbound-provider-azure
kubectl delete provider.pkg upbound-provider-gcp
kubectl delete provider.pkg crossplane-contrib-provider-helm
kubectl delete provider.pkg crossplane-contrib-provider-kubernetes
# jet
kubectl delete provider.pkg aws-jet-provider
kubectl delete provider.pkg azure-jet-provider
kubectl delete provider.pkg gcp-jet-provider
# Verification
kubectl get provider.pkg
# XP
helm delete crossplane --namespace crossplane-system
kubectl get pods -n crossplane-system
# UXP
up uxp uninstall
kubectl get pods -n upbound-system
# If a managed resource gets stuck during deletion, you can remove its finalizer, e.g.:
kubectl patch --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]' subnet.ec2.aws.upbound.io/uxpeks-pub-b
kubectl get xmanagedcluster
NAME CLUSTERNAME CONTROLPLANE NODEPOOL FARGATEPROFILE READY CONNECTION-SECRET AGE
uxpeks uxpeks ACTIVE ACTIVE ACTIVE True uxpeks 4d4h
uxpaks uxpaks True True NA4-uxpaks True uxpaks 4d
uxpgke uxpgke True True NA4-uxpgke True uxpgke 2d16h
# To find out which resources have issues within the Composite Resource:
kubectl describe xmanagedcluster uxpeks-5zxn6
...
Resource Refs:
API Version: iam.aws.crossplane.io/v1beta1
Kind: Role
Name: xpeks-controlplane
...
# To find out the issue with the unhealthy resource:
kubectl get Role.iam.aws.crossplane.io
kubectl describe Role.iam.aws.crossplane.io/xpeks-controlplane
# Native and official providers
kubectl get managed
kubectl get aws
kubectl get azure
kubectl get gcp
or
kubectl get providerconfig | grep aws
The claim exposes the following parameters:
- Cluster ID
- Kubernetes Version
- Node Size
- Node Count
- Region (cross-cloud abstraction)
- FargateProfile Namespace (valid for EKS only)
configuration/native/
- Composite Resource Definition (XRD) with satisfying Compositions - xmanagedcluster XRD
- eks composition includes:
Role
RolePolicyAttachment
VPC
SecurityGroup
SecurityGroupRule
Subnet
InternetGateway
RouteTable
Route
RouteTableAssociation
Cluster
NodeGroup
FargateProfile
Release
Object
- aks composition includes:
ResourceGroup
VirtualNetwork
Subnet
AKSCluster
Release
Object
- gke composition includes:
Network
Subnetwork
Cluster
NodePool
Release
Object
providers/
- Provider Installation and Configuration
claims/native/
- Examples to consume the XRDs defined by Ops
configuration/official/
- Composite Resource Definition (XRD) with satisfying Compositions - xmanagedcluster XRD
- eks composition includes:
Role
RolePolicyAttachment
VPC
SecurityGroup
SecurityGroupRule
Subnet
InternetGateway
RouteTable
Route
RouteTableAssociation
Cluster
NodeGroup
FargateProfile
ClusterAuth
Object
Release
- aks composition includes:
ResourceGroup
VirtualNetwork
Subnet
KubernetesCluster
KubernetesClusterNodePool
Release
Object
- gke composition includes:
Network
Subnetwork
Cluster
NodePool
Release
Object
providers/official/
- Provider Installation and Configuration
claims/official/
- Examples to consume the XRDs defined by Ops
configuration/jet/
- Composite Resource Definition (XRD) with satisfying Compositions - xmanagedcluster XRD
- eks composition includes:
Role
RolePolicyAttachment
VPC
SecurityGroup
SecurityGroupRule
Subnet
InternetGateway
RouteTable
Route
RouteTableAssociation
Cluster
NodeGroup
FargateProfile
Release
Object
- aks composition includes:
ResourceGroup
VirtualNetwork
Subnet
KubernetesCluster
KubernetesClusterNodePool
Release
Object
- gke composition includes:
Network
Subnetwork
Cluster
NodePool
Release
Object
providers/jet/
- Provider Installation and Configuration
claims/jet/
- Examples to consume the XRDs defined by Ops
Here’s how we suggest you go about proposing a change to this project:
- Fork this project to your account.
- Create a branch for the change you intend to make.
- Make your changes to your fork.
- Send a pull request from your fork’s branch to our main branch.
Using the web-based interface to make changes is fine too; it will automatically fork the project and prompt you to send a pull request.