diff --git a/content/security/docs/multitenancy.md b/content/security/docs/multitenancy.md index 715e4ebae..d8bcfe1d5 100644 --- a/content/security/docs/multitenancy.md +++ b/content/security/docs/multitenancy.md @@ -297,11 +297,11 @@ In the above examples, we used policies written for OPA/Gatekeeper. However, the ## Hard multi-tenancy Hard multi-tenancy can be implemented by provisioning separate clusters for each tenant. While this provides very strong isolation between tenants, it has several drawbacks. -First, when you have many tenants, this approach can quickly become expensive. Not only will you have to pay for the control plane costs for each cluster, you will not be able to share compute resources between clusters. This will eventually cause fragmentation where a subset of your clusters are under utilized while others are over utilized. +First, when you have many tenants, this approach can quickly become expensive. Not only will you have to pay the control plane costs for each cluster, but you will also be unable to share compute resources between clusters. This will eventually cause fragmentation where a subset of your clusters is underutilized while others are overutilized. Second, you will likely need to buy or build special tooling to manage all of these clusters. In time, managing hundreds or thousands of clusters may simply become too unwieldy. -Finally, creating a cluster per tenant will be slow relative to a creating a namespace. Nevertheless, a hard-tenancy approach may be necessary in highly-regulated industries or in SaaS environments where strong isolation is required. +Finally, creating a cluster per tenant will be slow relative to creating a namespace. Nevertheless, a hard-tenancy approach may be necessary in highly regulated industries or in SaaS environments where strong isolation is required.
## Future directions diff --git a/content/windows/docs/security.md b/content/windows/docs/security.md index 1ed34d47e..13fb3b7d9 100644 --- a/content/windows/docs/security.md +++ b/content/windows/docs/security.md @@ -10,7 +10,7 @@ For more information on Pod Security Policies please reference the Kubernetes [d On the other hand, Pod Security Standards (PSS) which is the recommended security approach and typically implemented using Security Contexts are defined as part of the Pod and container specifications in the Pod manifest. PSS is the official standard that the Kubernetes project team has defined to address the security-related best practices for Pods. It defines policies such as baseline (minimally restrictive, default), privileged (unrestrictive) and restricted (most restrictive). -We recommend starting with the baseline profile. PSS baseline policy profile a solid balance between security and potential friction, requiring a minimal list of exceptions, it serves as a good starting point for workload security. If you are currently using PSPs we recommend switching to PSS. More details on the PSS policies can be found in the Kubernetes [documentation](https://kubernetes.io/docs/concepts/security/pod-security-standards/). These policies can be enforced with several tools including those from [OPA](https://www.openpolicyagent.org/) and [Kyverno](https://kyverno.io/). For example, Kyverno provides the full collection of PSS policies [here](https://kyverno.io/policies/pod-security/). +We recommend starting with the baseline profile. The PSS baseline profile provides a solid balance between security and potential friction; because it requires only a minimal list of exceptions, it serves as a good starting point for workload security. If you are currently using PSPs, we recommend switching to PSS. More details on the PSS policies can be found in the Kubernetes [documentation](https://kubernetes.io/docs/concepts/security/pod-security-standards/).
These policies can be enforced with several tools including those from [OPA](https://www.openpolicyagent.org/) and [Kyverno](https://kyverno.io/). For example, Kyverno provides the full collection of PSS policies [here](https://kyverno.io/policies/pod-security/). Security context settings allow one to give privileges to select processes, use program profiles to restrict capabilities of individual programs, control privilege escalation, and filter system calls, among other things. diff --git a/policies/alternative-gatekeeper/README.md b/policies/alternative-gatekeeper/README.md index 5d0d04b54..ec1bf900d 100644 --- a/policies/alternative-gatekeeper/README.md +++ b/policies/alternative-gatekeeper/README.md @@ -113,7 +113,7 @@ Kubernetes has the concepts of `requests` and `limits` when it comes to CPU & Me By default, we're ensuring that we run a tight ship by not only requiring that each of the containers in our Pods have **BOTH** a CPU & Memory request & limit - and that they are the same thing. -This is the ideal configuration if you are running a multi-tenant cluster to ensure that there are not any 'noisy neighbor' issues where people who don't specify limits burst into over provisioning on the Node where it was scheduled. This forces each service to think about how much CPU and Memory they actually need and declare it in their Spec templates when they deploy to the cluster - and be held to that. +This is the ideal configuration if you are running a multi-tenant cluster to ensure that there are not any 'noisy neighbor' issues where workloads that don't specify limits burst into over-provisioning on the Nodes where they are scheduled. This forces each service to think about how much CPU and Memory it actually needs and declare that in its Spec templates when deploying to the cluster - and be held to it. #### Require any Pods to declare readiness and liveness probes/healthchecks
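Reviewer note: as a sketch of what the requests/limits and probe policies above would accept, a conforming Pod spec might look like the following (the name, image, port, and path are hypothetical placeholders, not values from this repo):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # hypothetical workload name
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      resources:
        requests:            # requests and limits are declared for BOTH
          cpu: "500m"        # CPU and Memory, and are identical, so the
          memory: "256Mi"    # Pod cannot burst beyond what it requested
        limits:
          cpu: "500m"
          memory: "256Mi"
      readinessProbe:        # probes satisfy the healthcheck requirement
        httpGet:
          path: /healthz
          port: 8080
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
```

With requests equal to limits, the Pod also lands in the `Guaranteed` QoS class, which is consistent with the 'noisy neighbor' goal described above.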