Kubernetes add-on compliance refers to the process of ensuring that all Kubernetes add-ons within a cluster meet the specific security and compliance requirements of an organization.
+Sveltos is a tool that facilitates the deployment of Kubernetes add-ons across multiple clusters. It supports various deployment methods such as Helm charts, Kustomize resources, and YAML files. Add-ons can be sourced from different locations.
+When programmatically deploying add-ons using Sveltos, it is crucial to ensure that the deployed add-ons adhere to specific compliance requirements. These requirements may differ depending on the cluster, with production clusters typically having more stringent requirements compared to test clusters.
+Sveltos enables the definition of compliance requirements2 for a group of clusters and enforces those requirements for all add-ons deployed to those clusters. It employs Lua to enforce compliance:
+By using Lua, Sveltos provides a comprehensive solution for enforcing Kubernetes add-on compliance1. This helps organizations in ensuring that their Kubernetes clusters are both secure and compliant with industry regulations and standards.
+Here are some additional benefits of using Sveltos to enforce Kubernetes add-on compliance:
+A new Custom Resource Definition is introduced: AddonCompliance.
+Here is an example:
+apiVersion: lib.projectsveltos.io/v1alpha1
+kind: AddonCompliance
+metadata:
+ name: depl-replica
+spec:
+ clusterSelector: env=production
+ luaValidationRefs:
+ - namespace: default
+ name: depl-horizontalpodautoscaler
+ kind: ConfigMap
+
The above instance defines a set of compliances (contained in the referenced ConfigMap) that need to be enforced in any managed cluster matching the clusterSelector field.
The clusterSelector field is a pure Kubernetes label selector, so any cluster with the label env: production
will be a match.
The referenced ConfigMap contains a Lua validation.
The following ConfigMap contains a Lua policy enforcing that any deployment in the foo namespace has an associated HorizontalPodAutoscaler:
+apiVersion: v1
+data:
+ lua.yaml: |
+ function evaluate()
+ local hs = {}
+ hs.valid = true
+ hs.message = ""
+
+ local deployments = {}
+ local autoscalers = {}
+
+      -- Separate deployments and autoscalers from the resources
+ for _, resource in ipairs(resources) do
+ local kind = resource.kind
+ if resource.metadata.namespace == "foo" then
+ if kind == "Deployment" then
+ table.insert(deployments, resource)
+ elseif kind == "HorizontalPodAutoscaler" then
+ table.insert(autoscalers, resource)
+ end
+ end
+ end
+
+ -- Check for each deployment if there is a matching HorizontalPodAutoscaler
+ for _, deployment in ipairs(deployments) do
+ local deploymentName = deployment.metadata.name
+ local matchingAutoscaler = false
+
+ for _, autoscaler in ipairs(autoscalers) do
+ if autoscaler.spec.scaleTargetRef.name == deployment.metadata.name then
+ matchingAutoscaler = true
+ break
+ end
+ end
+
+ if not matchingAutoscaler then
+ hs.valid = false
+ hs.message = "No matching autoscaler found for deployment: " .. deploymentName
+ break
+ end
+ end
+
+ return hs
+ end
+kind: ConfigMap
+metadata:
+ name: depl-horizontalpodautoscaler
+ namespace: default
+
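For reference, a resource pair along these lines (the names here are illustrative, not taken from the policy) would satisfy the policy above, because the autoscaler's scaleTargetRef.name matches the deployment name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # illustrative name
  namespace: foo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa      # illustrative name
  namespace: foo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # must match the deployment name above
  minReplicas: 2
  maxReplicas: 5
```

Removing the HorizontalPodAutoscaler (or pointing its scaleTargetRef at a different name) would make the policy return valid=false for this deployment.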
There are two main components involved:
+These two controllers work together using a synchronization mechanism. When a new cluster is created, the controllers ensure that all the existing compliances for that cluster are discovered before any add-on is deployed.
+When Sveltos needs to deploy an add-on in a managed cluster, it follows these steps:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=production
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ values: |
+ admissionController:
+ replicas: 1
+
An error is reported back, and the Helm chart is not deployed.
Changing the replicas to 3 ensures the Kyverno Helm chart satisfies all compliances, and the chart is deployed:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=production
+ helmCharts:
+ - chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ helmChartAction: Install
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ repositoryName: kyverno
+ repositoryURL: https://kyverno.github.io/kyverno/
+ values: |
+ admissionController:
+ replicas: 3
+ backgroundController:
+ replicas: 3
+ cleanupController:
+ replicas: 3
+ reportsController:
+ replicas: 3
+
➜ addon-controller git:(dev) ✗ kubectl exec -it -n projectsveltos sveltosctl-0 -- /sveltosctl show addons
++-------------------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
++-------------------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+| default/sveltos-management-workload | helm chart | kyverno | kyverno-latest | 3.0.1 | 2023-06-14 02:57:12 -0700 PDT | kyverno |
++-------------------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+
Let's explore the advantages of choosing this approach instead of relying on an admission controller like Kyverno or OPA.
+One immediate benefit is that you won't need to deploy additional services in your managed clusters. By opting for this approach, you can simplify your cluster architecture and reduce the complexity associated with extra services.
+However, there are more significant advantages to consider:
+By considering these advantages, you can make an informed decision when choosing between this approach and utilizing an admission controller for your cluster management and add-on deployment needs.
+If you want to validate your Lua policies:
1. Create a `validate_lua` directory. Inside this directory, create the following files:
    - `lua_policy.yaml`: This file should contain your Lua policy.
    - `valid_resource.yaml`: This file should contain the resources that satisfy the Lua policy.
    - `invalid_resource.yaml`: This file should contain the resources that do not satisfy the Lua policy.
2. Run `make test` from the repo directory.

Running `make test` will initiate the validation process, which thoroughly tests your Lua policies against the provided resource files. This procedure ensures that your defined policy is not only syntactically correct but also functionally accurate. By executing the validation tests, you can gain confidence in the correctness and reliability of your policies written in Lua.
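The steps above can be sketched as follows; the Lua body and resource contents here are placeholders, so substitute your own policy and the resources it should accept and reject:

```shell
# Sketch only: creates the directory layout described above with
# placeholder contents (not a real policy).
mkdir -p validate_lua

cat > validate_lua/lua_policy.yaml <<'EOF'
function evaluate()
  local hs = {}
  hs.valid = true
  hs.message = ""
  return hs
end
EOF

cat > validate_lua/valid_resource.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: foo
EOF

cat > validate_lua/invalid_resource.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orphan-app
  namespace: foo
EOF
```

With the directory in place inside your clone of the addon-controller repository, `make test` picks it up as described above.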
+By following these steps, you can easily validate your Lua policies using the Sveltos addon-controller repository.
If your clusters use mutating webhooks, you should carefully consider whether Sveltos add-on compliance will be effective for you. This is because Sveltos cannot see what mutating webhooks do, so it cannot guarantee that your clusters will be compliant. ↩
Helm charts containing both CustomResourceDefinitions and instances of such CRDs cannot be deployed on clusters where compliance validations were defined. This is because a Helm dry run does not return the full list of resources the chart would deploy, so Sveltos cannot validate them. ↩
+Sveltos is a set of Kubernetes controllers that run in the management cluster. From the management cluster, Sveltos can manage add-ons and applications on a fleet of managed Kubernetes clusters.
+Sveltos comes with support to automatically discover ClusterAPI powered clusters, but it doesn't stop there. You can easily register any other cluster (on-prem, Cloud) and manage Kubernetes add-ons seamlessly.
ClusterProfile and Profile are the CustomResourceDefinitions used to instruct Sveltos which add-ons to deploy on a set of clusters.
+ClusterProfile is a cluster-wide resource. It can match any cluster and reference any resource regardless of their namespace.
+Profile, on the other hand, is a namespace-scoped resource that is specific to a single namespace. It can only match clusters and reference resources within its own namespace.
+By creating a ClusterProfile instance, you can easily deploy:
+across a set of Kubernetes clusters.
+Define which Kubernetes add-ons to deploy and where:
+It is as simple as that!
+The below example deploys a Kyverno helm chart in every cluster with the label env=prod.
The first step is to ensure the CAPI clusters are successfully registered with Sveltos. If you have not registered the clusters yet, follow the instructions mentioned here.
If you have already registered the CAPI clusters, ensure they are listed and ready to receive add-ons.
+$ kubectl get sveltosclusters -n projectsveltos --show-labels
+
+NAME READY VERSION LABELS
+cluster12 true v1.26.9+rke2r1 sveltos-agent=present
+cluster13 true v1.26.9+rke2r1 sveltos-agent=present
+
Please note: The CAPI clusters are registered in the projectsveltos namespace. If you register the clusters in a different namespace, update the command mentioned above.
+The second step is to assign a specific label to the Sveltos Clusters to receive specific add-ons. In this example, we will assign the label env=prod.
+$ kubectl label sveltosclusters cluster12 env=prod -n projectsveltos
+$ kubectl label sveltosclusters cluster13 env=prod -n projectsveltos
+$ kubectl get sveltosclusters -n projectsveltos --show-labels
+
+NAME READY VERSION LABELS
+cluster12 true v1.26.9+rke2r1 env=prod,sveltos-agent=present
+cluster13 true v1.26.9+rke2r1 env=prod,sveltos-agent=present
+
The third step is to create a ClusterProfile Kubernetes resource and apply it to the management cluster.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=prod
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
$ kubectl apply -f "kyverno_cluster_profile.yaml"
+
+$ sveltosctl show addons
+
++--------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
++--------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+| projectsveltos/cluster12 | helm chart | kyverno | kyverno-latest | 3.1.1 | 2023-12-16 00:14:17 -0800 PST | kyverno |
+| projectsveltos/cluster13 | helm chart | kyverno | kyverno-latest | 3.1.1 | 2023-12-16 00:14:17 -0800 PST | kyverno |
++--------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+
For a quick add-ons example, watch the Sveltos introduction video on YouTube.
ClusterProfile is the CustomResourceDefinition used to instruct Sveltos which add-ons to deploy on a set of clusters.
+clusterSelector field selects a set of managed clusters where listed add-ons and applications will be deployed.
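Since clusterSelector is a standard Kubernetes label selector, multiple requirements can be combined with commas; a minimal sketch (the label keys are illustrative):

```yaml
spec:
  # matches only clusters carrying both labels
  clusterSelector: env=prod,region=us-east
```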
helmCharts field consists of a list of helm charts to be deployed to the clusters matching clusterSelector:
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
policyRefs field references a list of ConfigMaps/Secrets, each containing Kubernetes resources to be deployed in the clusters matching clusterSelector.
This field is a slice of PolicyRef structs. Each PolicyRef has the following fields:
+policyRefs:
+- kind: Secret
+ name: my-secret-1
+ namespace: my-namespace-1
+ deploymentType: Local
+- kind: ConfigMap
+ name: my-configmap-1
+ namespace: my-namespace-1
+ deploymentType: Remote
+
kustomizationRefs field is a list of sources containing kustomization files. Resources will be deployed in the clusters matching the clusterSelector specified.
+This field is a slice of KustomizationRef structs. Each KustomizationRef has the following fields:
+Kind: The kind of the referenced resource. The supported kinds are:
+Namespace: The namespace of the referenced resource. This field is optional and can be left empty. If it is empty, the namespace will be set to the cluster's namespace.
+This field can be set to:
+Let's take a closer look at the OneTime syncMode option. Once you deploy a ClusterProfile with a OneTime configuration, Sveltos will check all of your clusters for a match with the clusterSelector. Any matching clusters will have the resources specified in the ClusterProfile deployed. However, if you make changes to the ClusterProfile later on, those changes will not be automatically deployed to already-matching clusters.
+Now, if you're looking for real-time deployment and updates, the Continuous syncMode is the way to go. With Continuous, any changes made to the ClusterProfile will be immediately reconciled into matching clusters. This means that you can add new features, update existing ones, and remove them as necessary, all without lifting a finger. Sveltos will deploy, update, or remove resources in matching clusters as needed, making your life as a Kubernetes admin a breeze.
+ContinuousWithDriftDetection instructs Sveltos to monitor the state of managed clusters and detect a configuration drift for any of the resources deployed because of that ClusterProfile. +When Sveltos detects a configuration drift, it automatically re-syncs the cluster state back to the state described in the management cluster. +To know more about configuration drift detection, refer to this section.
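Enabling drift detection only requires setting the syncMode field; a minimal sketch:

```yaml
spec:
  clusterSelector: env=prod
  syncMode: ContinuousWithDriftDetection
```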
+Imagine you're about to make some important changes to your ClusterProfile, but you're not entirely sure what the results will be. You don't want to risk causing any unwanted side effects, right? Well, that's where the DryRun syncMode configuration comes in. By deploying your ClusterProfile with this configuration, you can launch a simulation of all the operations that would normally be executed in a live run. The best part? No actual changes will be made to the matching clusters during this dry run workflow, so you can rest easy knowing that there won't be any surprises. +To know more about dry run, refer to this section.
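As a sketch, a DryRun profile differs from a live one only in the syncMode field; the example below reuses the Kyverno chart from earlier examples (the profile name is illustrative):

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno-dryrun   # illustrative name
spec:
  clusterSelector: env=prod
  syncMode: DryRun       # simulate only; no changes reach matching clusters
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.1.1
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
```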
+The stopMatchingBehavior field specifies the behavior when a cluster no longer matches a ClusterProfile. By default, all Kubernetes resources and Helm charts deployed to the cluster will be removed. However, if StopMatchingBehavior is set to LeavePolicies, any policies deployed by the ClusterProfile will remain in the cluster.
For instance:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ stopMatchingBehavior: WithdrawPolicies
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
When a cluster matches the ClusterProfile, the Kyverno Helm chart will be deployed in that cluster. If the cluster's labels are subsequently modified and the cluster no longer matches the ClusterProfile, the Kyverno Helm chart will be uninstalled. However, if the stopMatchingBehavior property is set to LeavePolicies, Sveltos will retain the Kyverno Helm chart in the cluster.
+The reloader property determines whether rolling upgrades should be triggered for Deployment, StatefulSet, or DaemonSet instances managed by Sveltos and associated with this ClusterProfile when changes are made to mounted ConfigMaps or Secrets. +When set to true, Sveltos automatically initiates rolling upgrades for affected Deployment, StatefulSet, or DaemonSet instances whenever any mounted ConfigMap or Secret is modified. This ensures that the latest configuration updates are applied to the respective workloads.
+Please refer to this section for more information.
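A sketch of the field in use (placement inside spec as described above):

```yaml
spec:
  clusterSelector: env=prod
  # roll affected Deployments/StatefulSets/DaemonSets when a mounted
  # ConfigMap or Secret changes
  reloader: true
```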
A ClusterProfile might match more than one cluster. When a change is made to a ClusterProfile, by default all matching clusters are updated concurrently.
The maxUpdate field specifies the maximum number of Clusters that can be updated concurrently during an update operation triggered by changes to the ClusterProfile's add-ons or applications.
The specified value can be an absolute number (e.g., 5) or a percentage of the desired cluster count (e.g., 10%). The default value is 100%, allowing all matching Clusters to be updated simultaneously.
For instance, if set to 30%, when modifications are made to the ClusterProfile's add-ons or applications, only 30% of matching Clusters will be updated concurrently. Updates to the remaining matching Clusters will only commence upon successful completion of updates in the initially targeted Clusters. This approach ensures a controlled and manageable update process, minimizing potential disruptions to the overall cluster environment.
Please refer to this section for more information.
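The 30% example above can be sketched as:

```yaml
spec:
  clusterSelector: env=prod
  # update at most 30% of matching clusters at a time;
  # an absolute number (e.g. 5) is also accepted
  maxUpdate: 30%
```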
+The validateHealths property defines a set of Lua functions that Sveltos executes against the managed cluster to assess the health and status of the add-ons and applications specified in the ClusterProfile. These Lua functions act as validation checks, ensuring that the deployed add-ons and applications are functioning properly and aligned with the desired state. By executing these functions, Sveltos proactively identifies any potential issues or misconfigurations that could arise, maintaining the overall health and stability of the managed cluster.
+The ValidateHealths property accepts a slice of Lua functions, where each function encapsulates a specific validation check. These functions can access the managed cluster's state to perform comprehensive checks on the add-ons and applications. The results of the validation checks are aggregated and reported back to Sveltos, providing valuable insights into the health and status of the managed cluster's components.
+Lua's scripting capabilities offer flexibility in defining complex validation logic tailored to specific add-ons or applications.
+Please refer to this section for more information.
+Consider a scenario where a new cluster with the label env:prod is created. The following instructions guide Sveltos to:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ validateHealths:
+ - name: deployment-health
+ featureID: Helm
+ group: "apps"
+ version: "v1"
+ kind: "Deployment"
+ namespace: kyverno
+ script: |
+ function evaluate()
+ hs = {}
+ hs.healthy = false
+ hs.message = "available replicas not matching requested replicas"
+ if obj.status ~= nil then
+ if obj.status.availableReplicas ~= nil then
+ if obj.status.availableReplicas == obj.spec.replicas then
+ hs.healthy = true
+ end
+ end
+ end
+ return hs
+ end
+
The templateResourceRefs property specifies a collection of resources to be gathered from the management cluster. The values extracted from these resources will be utilized to instantiate templates embedded within referenced PolicyRefs and Helm charts. +Refer to template section for more info and examples.
+The dependsOn property specifies a list of other ClusterProfiles that this instance relies on. In any managed cluster that matches to this ClusterProfile, the add-ons and applications defined in this instance will only be deployed after all add-ons and applications in the designated dependency ClusterProfiles have been successfully deployed.
For example, clusterprofile-a can depend on clusterprofile-b. This implies that any Helm charts or raw YAML files associated with clusterprofile-a will not be deployed until all add-ons and applications specified in clusterprofile-b have been successfully provisioned.
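A sketch of such a dependency, reusing the Grafana chart from the examples elsewhere in this document (profile names are illustrative):

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: clusterprofile-a
spec:
  clusterSelector: env=fv
  dependsOn:
  - clusterprofile-b        # must be fully deployed first
  helmCharts:
  - repositoryURL: https://grafana.github.io/helm-charts
    repositoryName: grafana
    chartName: grafana/grafana
    chartVersion: 6.58.9
    releaseName: grafana
    releaseNamespace: grafana
    helmChartAction: Install
```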
A ClusterProfile can have a combination of Helm charts, raw YAML/JSON, and Kustomize configurations.
+Consider a scenario where you want to utilize Kyverno to prevent the deployment of images with the 'latest' tag1. To achieve this, you can create a ClusterProfile that:
+Download the Kyverno policy and create a ConfigMap containing the policy within the management cluster.
+$ wget https://raw.githubusercontent.com/kyverno/policies/main/best-practices/disallow-latest-tag/disallow-latest-tag.yaml
+kubectl create configmap disallow-latest-tag --from-file disallow-latest-tag.yaml
+
To deploy Kyverno and a ClusterPolicy across all managed clusters matching the Sveltos label selector env=fv, utilize the below ClusterProfile.
+ apiVersion: config.projectsveltos.io/v1alpha1
+ kind: ClusterProfile
+ metadata:
+ name: kyverno
+ spec:
+ clusterSelector: env=fv
+ helmCharts:
+ - chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ helmChartAction: Install
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ repositoryName: kyverno
+ repositoryURL: https://kyverno.github.io/kyverno/
+ policyRefs:
+ - kind: ConfigMap
+ name: disallow-latest-tag
+ namespace: default
+
The ':latest' tag is mutable and can lead to unexpected errors if the image changes. A best practice is to use an immutable tag that maps to a specific version of an application Pod. ↩
+ClusterProfile Spec.HelmCharts can list all the Helm charts you want to deploy.
+Please note: Sveltos will deploy the Helm charts in the exact order you define them.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: prometheus-grafana
+spec:
+ clusterSelector: env=fv
+ helmCharts:
+ - repositoryURL: https://prometheus-community.github.io/helm-charts
+ repositoryName: prometheus-community
+ chartName: prometheus-community/prometheus
+ chartVersion: 23.4.0
+ releaseName: prometheus
+ releaseNamespace: prometheus
+ helmChartAction: Install
+ - repositoryURL: https://grafana.github.io/helm-charts
+ repositoryName: grafana
+ chartName: grafana/grafana
+ chartVersion: 6.58.9
+ releaseName: grafana
+ releaseNamespace: grafana
+ helmChartAction: Install
+
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ values: |
+ admissionController:
+ replicas: 1
+
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-calico
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://projectcalico.docs.tigera.io/charts
+ repositoryName: projectcalico
+ chartName: projectcalico/tigera-operator
+ chartVersion: v3.24.5
+ releaseName: calico
+ releaseNamespace: tigera-operator
+ helmChartAction: Install
+ values: |
+ installation:
+ calicoNetwork:
+ ipPools:
+ {{ range $cidr := .Cluster.spec.clusterNetwork.pods.cidrBlocks }}
+ - cidr: {{ $cidr }}
+ encapsulation: VXLAN
+ {{ end }}
+
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-cilium-v1-26
+spec:
+ clusterSelector: env=fv
+ helmCharts:
+ - chartName: cilium/cilium
+ chartVersion: 1.12.12
+ helmChartAction: Install
+ releaseName: cilium
+ releaseNamespace: kube-system
+ repositoryName: cilium
+ repositoryURL: https://helm.cilium.io/
+ values: |
+ k8sServiceHost: "{{ .Cluster.spec.controlPlaneEndpoint.host }}"
+ k8sServicePort: "{{ .Cluster.spec.controlPlaneEndpoint.port }}"
+ hubble:
+ enabled: false
+ nodePort:
+ enabled: true
+ kubeProxyReplacement: strict
+ operator:
+ replicas: 1
+ updateStrategy:
+ rollingUpdate:
+ maxSurge: 0
+ maxUnavailable: 1
+
The below YAML snippet demonstrates how Sveltos utilizes a Flux GitRepository. The git repository, located at https://github.com/gianlucam76/kustomize, comprises multiple kustomize directories. In this instance, Sveltos executes Kustomize on the helloWorld directory and deploys the Kustomize output to the eng namespace for each managed cluster matching the Sveltos clusterSelector.
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: flux-system
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ kustomizationRefs:
+ - namespace: flux-system
+ name: flux-system
+ kind: GitRepository
+ path: ./helloWorld/
+ targetNamespace: eng
+
apiVersion: source.toolkit.fluxcd.io/v1
+kind: GitRepository
+metadata:
+ name: flux-system
+ namespace: flux-system
+spec:
+ interval: 1m0s
+ ref:
+ branch: main
+ secretRef:
+ name: flux-system
+ timeout: 60s
+ url: ssh://git@github.com/gianlucam76/kustomize
+
$ sveltosctl show addons
++-------------------------------------+-----------------+-----------+----------------+---------+-------------------------------+------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
++-------------------------------------+-----------------+-----------+----------------+---------+-------------------------------+------------------+
+| default/sveltos-management-workload | apps:Deployment | eng | the-deployment | N/A | 2023-05-16 00:48:11 -0700 PDT | flux-system |
+| default/sveltos-management-workload | :Service | eng | the-service | N/A | 2023-05-16 00:48:11 -0700 PDT | flux-system |
+| default/sveltos-management-workload | :ConfigMap | eng | the-map | N/A | 2023-05-16 00:48:11 -0700 PDT | flux-system |
++-------------------------------------+-----------------+-----------+----------------+---------+-------------------------------+------------------+
+
If you have directories containing Kustomize resources, you can put the content in a ConfigMap (or Secret) and have a ClusterProfile to reference it.
In this example, we clone the git repository https://github.com/gianlucam76/kustomize locally, then create a kustomize.tar.gz with the content of the helloWorldWithOverlays directory.
$ git clone git@github.com:gianlucam76/kustomize.git
+
+$ tar -czf kustomize.tar.gz -C kustomize/helloWorldWithOverlays .
+
+$ kubectl create configmap kustomize --from-file=kustomize.tar.gz
+
The below ClusterProfile will use the Kustomize SDK to get all the resources that need to be deployed. Then it will deploy those in the production namespace in each managed cluster matching the Sveltos clusterSelector env=fv.
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kustomize-with-configmap
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ kustomizationRefs:
+ - namespace: default
+ name: kustomize
+ kind: ConfigMap
+ path: ./overlays/production/
+ targetNamespace: production
+
$ sveltosctl show addons
++-------------------------------------+-----------------+------------+---------------------------+---------+-------------------------------+--------------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
++-------------------------------------+-----------------+------------+---------------------------+---------+-------------------------------+--------------------------+
+| default/sveltos-management-workload | apps:Deployment | production | production-the-deployment | N/A | 2023-05-16 00:59:13 -0700 PDT | kustomize-with-configmap |
+| default/sveltos-management-workload | :Service | production | production-the-service | N/A | 2023-05-16 00:59:13 -0700 PDT | kustomize-with-configmap |
+| default/sveltos-management-workload | :ConfigMap | production | production-the-map | N/A | 2023-05-16 00:59:13 -0700 PDT | kustomize-with-configmap |
++-------------------------------------+-----------------+------------+---------------------------+---------+-------------------------------+--------------------------+
+
Profile is the CustomResourceDefinition used to instruct Sveltos which add-ons to deploy on a set of clusters.
+Profile is a namespace-scoped resource. It can only match clusters and reference resources within its own namespace.
clusterSelector field selects a set of managed clusters where listed add-ons and applications will be deployed. Only clusters in the same namespace can be a match.
helmCharts field consists of a list of helm charts to be deployed to the clusters matching clusterSelector:
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
policyRefs field references a list of ConfigMaps/Secrets, each containing Kubernetes resources to be deployed in the clusters matching clusterSelector.
This field is a slice of PolicyRef structs. Each PolicyRef has the following fields:
+policyRefs:
+- kind: Secret
+ name: my-secret-1
+ namespace: my-namespace-1
+ deploymentType: Local
+- kind: ConfigMap
+ name: my-configmap-1
+ namespace: my-namespace-1
+ deploymentType: Remote
+
kustomizationRefs field is a list of sources containing kustomization files. Resources will be deployed in the clusters matching the clusterSelector specified.
+This field is a slice of KustomizationRef structs. Each KustomizationRef has the following fields:
+Kind: The kind of the referenced resource. The supported kinds are:
+Namespace: The namespace of the resource being referenced. This field is automatically set to the namespace of the Profile instance. In other words, a Profile instance can only reference resources that are within its own namespace.
+This field can be set to:
+Let's take a closer look at the OneTime syncMode option. Once you deploy a Profile with a OneTime configuration, Sveltos will check all of your clusters for a match with the clusterSelector. Any matching clusters will have the resources specified in the Profile deployed. However, if you make changes to the Profile later on, those changes will not be automatically deployed to already-matching clusters.
+Now, if you're looking for real-time deployment and updates, the Continuous syncMode is the way to go. With Continuous, any changes made to the Profile will be immediately reconciled into matching clusters. This means that you can add new features, update existing ones, and remove them as necessary, all without lifting a finger. Sveltos will deploy, update, or remove resources in matching clusters as needed, making your life as a Kubernetes admin a breeze.
+ContinuousWithDriftDetection instructs Sveltos to monitor the state of managed clusters and detect a configuration drift for any of the resources deployed because of that Profile. +When Sveltos detects a configuration drift, it automatically re-syncs the cluster state back to the state described in the management cluster. +To know more about configuration drift detection, refer to this section.
+Imagine you're about to make some important changes to your Profile, but you're not entirely sure what the results will be. You don't want to risk causing any unwanted side effects, right? Well, that's where the DryRun syncMode configuration comes in. By deploying your Profile with this configuration, you can launch a simulation of all the operations that would normally be executed in a live run. The best part? No actual changes will be made to the matching clusters during this dry run workflow, so you can rest easy knowing that there won't be any surprises. To know more about dry run, refer to this section.
+The stopMatchingBehavior field specifies the behavior when a cluster no longer matches a Profile. By default, all Kubernetes resources and Helm charts deployed to the cluster will be removed. However, if StopMatchingBehavior is set to LeavePolicies, any policies deployed by the Profile will remain in the cluster.
+For instance:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: Profile
+metadata:
+ name: kyverno
+ namespace: eng
+spec:
+ stopMatchingBehavior: WithdrawPolicies
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
When a cluster matches the Profile, the Kyverno Helm chart will be deployed in that cluster. If the cluster's labels are subsequently modified and the cluster no longer matches the Profile, the Kyverno Helm chart will be uninstalled. However, if the stopMatchingBehavior property were set to LeavePolicies, Sveltos would retain the Kyverno Helm chart in the cluster.
+The reloader property determines whether rolling upgrades should be triggered for Deployment, StatefulSet, or DaemonSet instances managed by Sveltos and associated with this Profile when changes are made to mounted ConfigMaps or Secrets. When set to true, Sveltos automatically initiates rolling upgrades for affected Deployment, StatefulSet, or DaemonSet instances whenever any mounted ConfigMap or Secret is modified. This ensures that the latest configuration updates are applied to the respective workloads.
+Please refer to this section for more information.
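A minimal sketch of enabling the reloader behavior (the referenced ConfigMap name and namespaces are hypothetical):

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: Profile
metadata:
  name: nginx
  namespace: eng
spec:
  reloader: true               # trigger rolling upgrades when mounted ConfigMaps/Secrets change
  clusterSelector: env=prod
  policyRefs:
  - name: nginx-config         # hypothetical ConfigMap mounted by a Deployment
    namespace: eng
    kind: ConfigMap
```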
+A Profile might match more than one cluster. When a change is made to a Profile, by default all matching clusters are updated concurrently. The maxUpdate field specifies the maximum number of Clusters that can be updated concurrently during an update operation triggered by changes to the Profile's add-ons or applications. The specified value can be an absolute number (e.g., 5) or a percentage of the desired cluster count (e.g., 10%). The default value is 100%, allowing all matching Clusters to be updated simultaneously. For instance, if set to 30%, when modifications are made to the Profile's add-ons or applications, only 30% of matching Clusters will be updated concurrently. Updates to the remaining matching Clusters will only commence upon successful completion of updates in the initially targeted Clusters. This approach ensures a controlled and manageable update process, minimizing potential disruptions to the overall cluster environment. Please refer to this section for more information.
+The validateHealths property defines a set of Lua functions that Sveltos executes against the managed cluster to assess the health and status of the add-ons and applications specified in the Profile. These Lua functions act as validation checks, ensuring that the deployed add-ons and applications are functioning properly and aligned with the desired state. By executing these functions, Sveltos proactively identifies any potential issues or misconfigurations that could arise, maintaining the overall health and stability of the managed cluster.
+The validateHealths property accepts a slice of Lua functions, where each function encapsulates a specific validation check. These functions can access the managed cluster's state to perform comprehensive checks on the add-ons and applications. The results of the validation checks are aggregated and reported back to Sveltos, providing valuable insights into the health and status of the managed cluster's components.
+Lua's scripting capabilities offer flexibility in defining complex validation logic tailored to specific add-ons or applications.
+Please refer to this section for more information.
+Consider a scenario where a new cluster with the label env:prod is created. The following Profile instructs Sveltos to deploy the Kyverno Helm chart and to report the chart as healthy only once every Deployment in the kyverno namespace has its available replicas matching the requested replicas:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: Profile
+metadata:
+ name: kyverno
+ namespace: eng
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ validateHealths:
+ - name: deployment-health
+ featureID: Helm
+ group: "apps"
+ version: "v1"
+ kind: "Deployment"
+ namespace: kyverno
+ script: |
+ function evaluate()
+ hs = {}
+ hs.healthy = false
+ hs.message = "available replicas not matching requested replicas"
+ if obj.status ~= nil then
+ if obj.status.availableReplicas ~= nil then
+ if obj.status.availableReplicas == obj.spec.replicas then
+ hs.healthy = true
+ end
+ end
+ end
+ return hs
+ end
+
The templateResourceRefs property specifies a collection of resources to be gathered from the management cluster. The values extracted from these resources will be utilized to instantiate templates embedded within referenced PolicyRefs and Helm charts. Refer to the template section for more info and examples.
+The dependsOn property specifies a list of other Profiles that this instance relies on. In any managed cluster that matches to this Profile, the add-ons and applications defined in this instance will only be deployed after all add-ons and applications in the designated dependency Profiles have been successfully deployed.
+For example, profile-a can depend on another profile-b. This implies that any Helm charts or raw YAML files associated with profile-a will not be deployed until all add-ons and applications specified in profile-b have been successfully provisioned.
+The ClusterProfile Spec.PolicyRefs is a list of Secrets/ConfigMaps. Both Secrets and ConfigMaps data fields can be a list of key-value pairs. Any key is acceptable, and the value can be multiple objects in YAML or JSON format1.
+To create a Secret containing Calico YAMLs, use the below commands.
+$ wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
+
+$ kubectl create secret generic calico --from-file=calico.yaml --type=addons.projectsveltos.io/cluster-profile
+
Please note: A ClusterProfile can only reference Secrets of type addons.projectsveltos.io/cluster-profile.
+The YAML file below exemplifies a ConfigMap that holds multiple resources. When a ClusterProfile instance references this ConfigMap, a GatewayClass and a Gateway instance are automatically deployed in any managed cluster that adheres to the ClusterProfile's clusterSelector.
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: contour-gateway
+ namespace: default
+data:
+ gatewayclass.yaml: |
+ kind: GatewayClass
+ apiVersion: gateway.networking.k8s.io/v1beta1
+ metadata:
+ name: contour
+ spec:
+ controllerName: projectcontour.io/projectcontour/contour
+ gateway.yaml: |
+ kind: Namespace
+ apiVersion: v1
+ metadata:
+ name: projectcontour
+ ---
+ kind: Gateway
+ apiVersion: gateway.networking.k8s.io/v1beta1
+ metadata:
+ name: contour
+ namespace: projectcontour
+ spec:
+ gatewayClassName: contour
+ listeners:
+ - name: http
+ protocol: HTTP
+ port: 80
+ allowedRoutes:
+ namespaces:
+ from: All
+
The below code represents a ClusterProfile resource that references the ConfigMap and Secret we created above.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-resources
+spec:
+ clusterSelector: env=fv
+ policyRefs:
+ - name: contour-gateway
+ namespace: default
+ kind: ConfigMap
+ - name: calico
+ namespace: default
+ kind: Secret
+
When a ClusterProfile references a ConfigMap or Secret, the kind and name fields are required, while the namespace field is optional. Specifying a namespace uniquely identifies the resource using the tuple (namespace, name, kind), and that resource will be used for all matching clusters.
+If you leave the namespace field empty, Sveltos will search for the ConfigMap or Secret with the provided name within the namespace of each matching cluster.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-resources
+spec:
+ clusterSelector: env=fv
+ policyRefs:
+ - name: contour-gateway
+ kind: ConfigMap
+
Consider the provided ClusterProfile with two matching workload clusters, one in the foo namespace and another in the bar namespace. Sveltos will search for the ConfigMap contour-gateway in the foo namespace for the cluster in the foo namespace, and for the ConfigMap contour-gateway in the bar namespace for the cluster in the bar namespace.
+More ClusterProfile examples can be found here.
+A ConfigMap is not designed to hold large chunks of data. The data stored in a ConfigMap cannot exceed 1 MiB. If you need to store settings that are larger than this limit, you may want to consider mounting a volume or use a separate database or file service. ↩