A Kubernetes operator built with Knative's controller framework that automatically applies custom labels to Deployments based on Custom Resource definitions.
The Labeler Controller watches for Labeler custom resources and automatically applies specified labels to all Deployments in the same namespace. This is useful for:
- Automated label management across multiple deployments
- Enforcing organizational labeling standards
- Dynamic label updates without manual intervention
- Centralized label configuration
This project follows the standard Kubernetes Operator pattern with three main components:
- Custom Resource Definition (CRD): defines the `Labeler` resource type and its schema.
- Controller: a Knative-based controller that:
  - watches for `Labeler` custom resources
  - lists all Deployments in the Labeler's namespace
  - applies/updates labels on those Deployments
  - reconciles on CR create, update, delete, and periodic resync
- Custom Resources (CRs): instances of `Labeler` that specify which labels to apply.
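For orientation, the CRD schema boils down to a single map of labels. Below is a minimal sketch of what the API types might look like (the Go field names are assumptions; only the `customLabels` JSON key is confirmed by the CR examples later in this README, and the real definitions live in pkg/apis/clusterops/v1alpha1/types.go):

```go
// Sketch of the Labeler API types. Field names are illustrative assumptions;
// the customLabels JSON tag matches the CR examples in this README.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Labeler asks the controller to apply a set of labels to every Deployment
// in the Labeler's namespace.
type Labeler struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec LabelerSpec `json:"spec"`
}

// LabelerSpec holds the labels the controller should apply.
type LabelerSpec struct {
	// CustomLabels is merged into each Deployment's metadata.labels.
	CustomLabels map[string]string `json:"customLabels"`
}
```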
```
┌───────────────────────────────────────────────┐
│  User creates Labeler CR                      │
│                       ↓                       │
│  Controller detects CR                        │
│                       ↓                       │
│  Lists Deployments in namespace               │
│                       ↓                       │
│  Patches each Deployment with custom labels   │
└───────────────────────────────────────────────┘
```
Prerequisites:

- Kubernetes cluster (v1.25+)
- `kubectl` configured to access your cluster
- ko for building and deploying Go applications
- Go 1.25+ (for development)
Install the CRD:

```bash
kubectl apply -f config/crd/clusterops.io_labelers.yaml
```

Verify the CRD is installed:

```bash
kubectl get crd labelers.clusterops.io
```

Create the namespace and RBAC resources:

```bash
kubectl create namespace labeler
kubectl apply -f config/100-serviceaccount.yaml -n labeler
kubectl apply -f config/200-role.yaml
kubectl apply -f config/201-rolebinding.yaml
```

Deploy the controller using ko:

```bash
ko resolve -f config/controller.yaml | kubectl apply -n labeler -f -
```

Or build and push manually:

```bash
docker build -t your-registry/labeler-controller:latest .
docker push your-registry/labeler-controller:latest
kubectl apply -f config/controller.yaml -n labeler
```

Verify the controller is running:

```bash
kubectl get pods -n labeler
```

Expected output:

```
NAME                           READY   STATUS    RESTARTS   AGE
label-controller-xxxxx-yyyyy   1/1     Running   0          30s
```
Create a Labeler CR to specify which labels to apply:
```yaml
apiVersion: clusterops.io/v1alpha1
kind: Labeler
metadata:
  name: example-labeler
  namespace: labeler
spec:
  customLabels:
    environment: "production"
    team: "platform"
    managed-by: "labeler-controller"
```

Apply it:

```bash
kubectl apply -f config/cr.yaml -n labeler
```

Check that your Deployments now have the custom labels:

```bash
kubectl get deployment -n labeler -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels}{"\n"}{end}'
```

Example output:

```
label-controller    {"clusterops.io/release":"devel","environment":"production","managed-by":"labeler-controller","team":"platform"}
```
Edit your Labeler CR to change or add labels:
```yaml
spec:
  customLabels:
    environment: "staging"   # Changed
    team: "platform"
    version: "v2.0"          # Added
```

Apply the changes:

```bash
kubectl apply -f config/cr.yaml -n labeler
```

The controller will automatically detect the change and update all Deployment labels.
Watch the controller logs:

```bash
kubectl logs -n labeler -l app=label-controller -f
```

Example log output:

```
{"severity":"INFO","message":"Reconciling Labeler : example-labeler"}
{"severity":"INFO","message":"Found 1 deployments in namespace labeler"}
{"severity":"INFO","message":"Reconcile succeeded","duration":"10.97ms"}
```

The Labeler spec supports the following fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `customLabels` | `map[string]string` | Yes | Key-value pairs of labels to apply to Deployments |
Example:
```yaml
spec:
  customLabels:
    environment: "production"
    team: "devops"
    cost-center: "engineering"
    app-version: "2.0"
```

The controller supports Knative's standard configuration via ConfigMaps:
Logging Configuration:
```bash
kubectl apply -f config/config-logging.yaml -n labeler
```

Adjust log levels (debug, info, warn, error) in config/config-logging.yaml.
The controller reconciles (applies labels) when:
- Labeler CR is created - Initial label application
- Labeler CR is updated - Labels are re-applied with new values
- Labeler CR is deleted - (Future: cleanup logic)
- Controller restarts - Resyncs all existing CRs
- Periodic resync - Every 10 hours (default)
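To make the trigger list above concrete, here is a simplified sketch of what a single reconcile pass does (illustrative only; the real logic lives in cmd/labeler/reconciler.go and is wired into Knative's ReconcileKind interface, and the kubeClient field name is an assumption):

```go
package labeler

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// Reconciler sketch; the real reconciler is constructed by Knative's injection framework.
type Reconciler struct {
	kubeClient kubernetes.Interface
}

// reconcileLabels lists the Deployments in the Labeler's namespace and
// strategic-merge-patches the customLabels onto each one. A strategic merge
// patch on metadata.labels adds or updates the given keys and leaves every
// other label untouched.
func (r *Reconciler) reconcileLabels(ctx context.Context, namespace string, customLabels map[string]string) error {
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{"labels": customLabels},
	})
	if err != nil {
		return err
	}

	deployments, err := r.kubeClient.AppsV1().Deployments(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, d := range deployments.Items {
		if _, err := r.kubeClient.AppsV1().Deployments(namespace).Patch(
			ctx, d.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```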
The controller merges labels rather than replacing them:
- Existing labels on Deployments are preserved
- Only specified labels in the Labeler CR are added/updated
- No labels are removed
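A tiny, self-contained sketch of that merge rule (the mergeLabels helper is hypothetical and shown only to illustrate the behavior; see the concrete before/after example that follows):

```go
package main

import "fmt"

// mergeLabels overlays the Labeler's customLabels onto a Deployment's existing
// labels: existing keys are kept, specified keys are added or updated, and
// nothing is ever removed.
func mergeLabels(existing, custom map[string]string) map[string]string {
	merged := make(map[string]string, len(existing)+len(custom))
	for k, v := range existing {
		merged[k] = v
	}
	for k, v := range custom {
		merged[k] = v
	}
	return merged
}

func main() {
	existing := map[string]string{"app": "nginx", "version": "1.0"}
	custom := map[string]string{"team": "devops", "env": "prod"}
	fmt.Println(mergeLabels(existing, custom))
	// Output: map[app:nginx env:prod team:devops version:1.0]
}
```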
Example:
```
# Deployment has: {"app": "nginx", "version": "1.0"}
# Labeler adds:   {"team": "devops", "env": "prod"}
# Result:         {"app": "nginx", "version": "1.0", "team": "devops", "env": "prod"}
```

For local development you'll need:

- Go 1.25+
- Docker or compatible container runtime
- Access to a Kubernetes cluster (kind, minikube, etc.)
Project structure:

```
.
├── cmd/
│   └── labeler/
│       ├── main.go                    # Entry point
│       ├── controller.go              # Controller setup
│       └── reconciler.go              # Reconciliation logic
├── pkg/
│   ├── apis/
│   │   └── clusterops/
│   │       └── v1alpha1/
│   │           ├── doc.go             # Package documentation
│   │           ├── types.go           # API types (Labeler, LabelerSpec)
│   │           ├── register.go        # Scheme registration
│   │           └── zz_generated.deepcopy.go  # Auto-generated
│   └── client/                        # Auto-generated clientsets, listers, informers
├── config/
│   ├── crd/                           # CRD definitions
│   ├── 100-serviceaccount.yaml
│   ├── 200-role.yaml
│   ├── 201-rolebinding.yaml
│   ├── controller.yaml                # Controller Deployment
│   ├── config-logging.yaml
│   └── cr.yaml                        # Example Custom Resource
├── hack/
│   ├── update-codegen.sh              # Code generation script
│   └── tools.go                       # Tool dependencies
├── vendor/                            # Vendored dependencies
├── go.mod
└── README.md
```
```bash
# Build locally
go build -o bin/labeler ./cmd/labeler

# Build and push container image with ko
ko publish github.com/ab-ghosh/knative-controller/cmd/labeler
```

After modifying API types in pkg/apis/clusterops/v1alpha1/types.go, regenerate code:
```bash
# Regenerate deepcopy, clientset, listers, informers, and injection code
./hack/update-codegen.sh

# Regenerate CRDs
GOFLAGS=-mod=mod controller-gen crd paths=./pkg/apis/... output:crd:artifacts:config=config/crd
```
To test locally:

1. Create a local cluster:

   ```bash
   kind create cluster
   ```

2. Install the CRD and RBAC:

   ```bash
   kubectl apply -f config/crd/
   kubectl create namespace labeler
   kubectl apply -f config/100-serviceaccount.yaml -n labeler
   kubectl apply -f config/200-role.yaml
   kubectl apply -f config/201-rolebinding.yaml
   ```

3. Deploy the controller:

   ```bash
   ko apply -f config/controller.yaml -n labeler
   ```

4. Test with the example CR:

   ```bash
   kubectl apply -f config/cr.yaml -n labeler
   ```

5. Watch the logs:

   ```bash
   kubectl logs -n labeler -l app=label-controller -f
   ```
```bash
# Run unit tests
go test ./...

# Run with coverage
go test -cover ./...
```

Troubleshooting:

Check if the controller is running:

```bash
kubectl get pods -n labeler -l app=label-controller
```

View controller logs:

```bash
kubectl logs -n labeler -l app=label-controller --tail=50
```

Verify the Labeler CR exists:

```bash
kubectl get labeler -n labeler
kubectl describe labeler example-labeler -n labeler
```

Check RBAC permissions:

```bash
kubectl auth can-i list deployments --as=system:serviceaccount:labeler:clusterops -n labeler
kubectl auth can-i patch deployments --as=system:serviceaccount:labeler:clusterops -n labeler
```

Trigger a manual reconciliation:

```bash
kubectl annotate labeler example-labeler reconcile=trigger -n labeler --overwrite
```

Check the service account exists:

```bash
kubectl get sa clusterops -n labeler
```

View pod events:

```bash
kubectl describe pod -n labeler -l app=label-controller
```
Additional example Labeler CRs:

Environment labels:

```yaml
apiVersion: clusterops.io/v1alpha1
kind: Labeler
metadata:
  name: env-labeler
  namespace: production
spec:
  customLabels:
    environment: "production"
    tier: "frontend"
    region: "us-west-2"
```
Team ownership labels:

```yaml
apiVersion: clusterops.io/v1alpha1
kind: Labeler
metadata:
  name: team-labeler
  namespace: platform-team
spec:
  customLabels:
    team: "platform"
    owner: "john.doe@company.com"
    cost-center: "engineering-123"
```
Compliance labels:

```yaml
apiVersion: clusterops.io/v1alpha1
kind: Labeler
metadata:
  name: compliance-labeler
  namespace: secure-apps
spec:
  customLabels:
    compliance: "pci-dss"
    data-classification: "confidential"
    backup-required: "true"
```

Benefits of this approach:

- ✅ Automated - no manual intervention needed
- ✅ Declarative - specify desired state
- ✅ Namespace-wide - applies to all deployments
- ✅ Self-healing - reapplies on drift
Compared to an admission webhook:

- ✅ Post-creation modification supported
- ✅ Doesn't require webhook infrastructure
- ✅ Can update existing resources
- ❌ Not preventive (webhook would reject at creation)
Compared to a general-purpose operator:

- ✅ Simpler - focused use case
- ✅ Lighter weight
- ❌ Less flexible - only label management
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
Licensed under the Apache License, Version 2.0. See LICENSE file for details.
Built with:
- Knative - Controller framework
- controller-gen - CRD generation
- ko - Container image building
For questions or issues, please open a GitHub issue.