Concepts
The Concepts section helps you learn how KES works. In particular, it covers the general architecture, authentication and authorization, and other important aspects.
╔═════════════════════════════════════════════════════════════════════════╗
║ ┌───────┐ ║
║ | KMS ├─────┐ ║
║ └───┬───┘ | ║
║ | | ║
║ | | ║
║ ┌────────────┐ ┌─────┴──────┐ | ┌───────────┐ ║
║ │ KES Client ├──────────────┤ KES Server ├──┴───────────┤ Key Store │ ║
║ └────────────┘ └────────────┘ └───────────┘ ║
╚═════════════════════════════════════════════════════════════════════════╝
Let's start with one KES server and one KES client - even though, in the general case, there can be arbitrarily many clients talking to arbitrarily many servers.
The KES client connects to the KES server over TLS and uses the KES server API to perform operations - for example creating a new key. The KES server itself is stateless and gets started with an initial configuration. However, to perform certain stateful operations - like creating a key - it needs to keep some state somewhere. Therefore, a KES server requires a key store / secret store, specified as part of the initial configuration. Whenever a client request changes the overall state, the KES server updates the key store to represent this new state. Multiple independent KES servers synchronize themselves via a common key store.
In addition to a key store, the KES server can also be connected to a Key-Management-System (KMS). If available, the KES server will use the KMS to protect keys / secrets stored at the key store. The KMS is an optional component and is only required when trying to achieve certain (additional) security guarantees.
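For illustration, a minimal server configuration with a plain filesystem key store might look roughly like the following sketch. Please note that this is not an authoritative example: the field names follow the YAML configuration format of recent KES releases, the address and paths are placeholders, and the exact schema depends on your KES version.

address: 0.0.0.0:7373     # Listen address of the KES server

tls:
  key:  ./server.key      # Private key of the KES server
  cert: ./server.cert     # Certificate presented to KES clients

keystore:
  fs:
    path: ./keys          # Directory where KES persists keys / secrets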
In general, all KES server operations require authentication and authorization. KES uses the same application-independent mechanism for both: TLS - i.e. mutual TLS authentication (mTLS).
┌──────────────┐ ┌──────────────┐
│ KES Client ├───────────────────┤ KES Server |
└──────────────┘ TLS └──────────────┘
(🗝️,📜) 🔒 (📜,🔑)
Therefore, a KES client needs a private key / public key pair and an X.509 certificate. Here, we explicitly distinguish the public key from the certificate to explain how authentication and authorization work:
In general, a KES server only accepts TLS connections from clients that can present a valid and authentic TLS certificate (📜) during the TLS handshake. By valid we mean a well-formed certificate that is, for example, not expired. By authentic we refer to a certificate that has been issued, and therefore cryptographically signed, by a certificate authority (CA) that the KES server trusts.
Now, when a KES client tries to establish a connection to the KES server, the TLS protocol will ensure that:
- The KES client actually has the private key (🗝️) that corresponds to the public key in the certificate (📜) presented by the client.
- The certificate presented by a client has been issued by a CA that the KES server trusts.
=> If the TLS handshake succeeds then the KES server considers the request as authentic.
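To make this concrete, the following Go sketch shows a client that presents its private key / certificate pair during the mTLS handshake and trusts the CA that issued the KES server certificate. The file names, the endpoint (assuming the default port 7373) and the request details (HTTP method and path, taken from the policy examples further below) are illustrative assumptions, not the official KES SDK.

package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "net/http"
    "os"
)

func main() {
    // Load the client's private key (🗝️) and certificate (📜).
    clientCert, err := tls.LoadX509KeyPair("client.crt", "client.key")
    if err != nil {
        panic(err)
    }

    // Trust the CA that issued the KES server certificate.
    caPEM, err := os.ReadFile("ca.crt")
    if err != nil {
        panic(err)
    }
    rootCAs := x509.NewCertPool()
    rootCAs.AppendCertsFromPEM(caPEM)

    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{
                Certificates: []tls.Certificate{clientCert}, // presented during the mTLS handshake
                RootCAs:      rootCAs,                       // used to verify the server certificate
            },
        },
    }

    // Illustrative request: create the key "my-app-key".
    // The HTTP method and body details are assumptions - consult the KES API / SDK docs.
    req, err := http.NewRequest(http.MethodPost, "https://localhost:7373/v1/key/create/my-app-key", nil)
    if err != nil {
        panic(err)
    }
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println("Status:", resp.Status)
}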
It is possible to skip the certificate verification - for example during testing or development. To do so, start the KES server with the --mtls-auth=ignore option. Clients then still have to provide a certificate, but the server will not verify whether the certificate has been issued by a trusted CA. Instead, the client can present a self-signed certificate.
Please note that CA-issued certificates are highly recommended for production deployments and --mtls-auth=ignore should only be used for testing or development.
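For example, a throwaway private key / self-signed certificate pair for such a test setup could be generated with a few lines of Go. This is a sketch using only the standard library; the subject name, validity period and output file names are arbitrary choices.

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "os"
    "time"
)

func main() {
    // Generate a private key (🗝️) for the test client.
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }

    // Self-signed certificate template - only suitable for testing against a
    // server started with --mtls-auth=ignore.
    template := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "my-test-client"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(90 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    }
    der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    keyDER, err := x509.MarshalECPrivateKey(key)
    if err != nil {
        panic(err)
    }

    // Write the PEM-encoded certificate (📜) and private key.
    certOut, err := os.Create("client.crt")
    if err != nil {
        panic(err)
    }
    pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    certOut.Close()

    keyOut, err := os.Create("client.key")
    if err != nil {
        panic(err)
    }
    pem.Encode(keyOut, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
    keyOut.Close()
}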
Once the KES server has considered a client request as authentic, it checks whether the client is actually authorized to perform the requested operation - e.g. create a new secret key. Therefore, the server verifies that the request complies with the policy associated with the client. So, KES relies on a role- and policy-based authorization model.
To associate clients with policies, the KES server again relies on TLS - i.e. on the client certificate (📜). More precisely, the KES server computes an identity from the certificate: 🆔 ═ H(📜). Technically, the identity function (H) could be any unique mapping. KES uses a cryptographic hash of the client's public key as identity function: 🆔 ═ SHA-256(📜.PublicKey).
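The following Go sketch shows one way to compute such an identity from a PEM-encoded client certificate. Whether KES hashes exactly the DER-encoded public key (SubjectPublicKeyInfo) as shown here is an assumption for illustration - compare the output with what your KES tooling reports.

package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/hex"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    // Read the PEM-encoded client certificate (📜).
    pemBytes, err := os.ReadFile("client.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(pemBytes)
    if block == nil || block.Type != "CERTIFICATE" {
        panic("no PEM-encoded certificate found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }

    // 🆔 ═ SHA-256(📜.PublicKey): hash the certificate's DER-encoded public key.
    id := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    fmt.Println(hex.EncodeToString(id[:]))
}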
So, when the KES server receives an authentic client request, it computes the client identity (🆔) from the client certificate (📜) and checks whether this identity is associated with a named policy. If such an identity-policy mapping exists, the KES server validates that the request complies with the policy. Otherwise, the server rejects the request.
=> The KES server considers a request as authorized if the following two statements hold:
- A policy associated with the identity (🆔), computed from the client certificate (📜), exists.
- The associated policy explicitly allows the operation that the request wants to perform (see the sketch below).
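The sketch below mirrors these two checks in Go. The identity value, policy name and patterns are made-up placeholders, and the glob matching via path.Match is an assumption about KES' pattern semantics - the policy syntax itself is described in the next paragraphs.

package main

import (
    "fmt"
    "path"
)

// policies maps a policy name to its set of allowed API path patterns.
var policies = map[string][]string{
    "my-policy": {
        "/v1/key/create/my-app-key",
        "/v1/key/generate/my-app-key",
        "/v1/key/decrypt/my-app-key",
    },
}

// identities maps a client identity (🆔) to the name of its associated policy.
// "example-client-identity" is a placeholder - normally this would be a SHA-256 hex string.
var identities = map[string]string{
    "example-client-identity": "my-policy",
}

// authorize implements the two statements above: a policy associated with the
// identity must exist, and that policy must explicitly allow the request path.
func authorize(identity, requestPath string) bool {
    policyName, ok := identities[identity]
    if !ok {
        return false // no policy associated with this identity
    }
    for _, pattern := range policies[policyName] {
        if matched, _ := path.Match(pattern, requestPath); matched {
            return true
        }
    }
    return false // no pattern allows the requested operation
}

func main() {
    fmt.Println(authorize("example-client-identity", "/v1/key/create/my-app-key")) // true
    fmt.Println(authorize("example-client-identity", "/v1/key/delete/my-app-key")) // false
    fmt.Println(authorize("unknown-identity", "/v1/key/create/my-app-key"))        // false
}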
The KES server policies define whether a client request is allowed to perform a specific operation - e.g. creating a secret key. Overall, KES uses policy definitions that are designed to be human-readable and easy to reason about rather than providing maximum flexibility.
A policy consists of a set of glob patterns. For example:
my-policy = [
# You can create the "my-app-key" key and generate & decrypt data-encryption keys using the "my-app-key" key.
"/v1/key/create/my-app-key",
"/v1/key/generate/my-app-key",
"/v1/key/decrypt/my-app-key",
# You can decrypt data encrypted with any key name that matches "shared-key-*" - for example:
# "shared-key-1" or "shared-key-user-XY".
"/v1/key/decrypt/shared-key-*",
# You can generate & decrypt data-encryption keys encrypted with any key name that matches
# "user-key-" followed by one (single-digit) number, for example: "user-key-5" but not "user-key-12".
"/v1/key/generate/user-key-[0-9]",
"/v1/key/decrypt/user-key-[0-9]",
]
In general, a policy pattern has the following form:
/<API-version>/<API>/<operation>/[<argument0>/<argument1>/...]
However, a glob wildcard is not limited to the argument(s) - it can appear in any path segment. For example:
my-policy = [
# This pattern allows any v1 Key-API operation - e.g. create, delete, decrypt, ... - for the
# "my-app-key" key.
"/v1/key/*/my-app-key"
]
Please note that non-argument wildcards may implicitly grant access to future functionality - e.g. new APIs. Therefore, carefully evaluate whether such wildcards are appropriate in your scenario.
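Assuming the patterns behave like Go's path.Match globs (a "*" matches within a single path segment and "[0-9]" matches a single digit - an assumption used for illustration only), the following sketch shows how broad such a non-argument wildcard is compared to the argument wildcards from the earlier example.

package main

import (
    "fmt"
    "path"
)

func main() {
    checks := []struct {
        pattern     string
        requestPath string
    }{
        // Argument wildcards from the earlier policy example.
        {"/v1/key/decrypt/shared-key-*", "/v1/key/decrypt/shared-key-user-XY"}, // true
        {"/v1/key/decrypt/user-key-[0-9]", "/v1/key/decrypt/user-key-5"},       // true
        {"/v1/key/decrypt/user-key-[0-9]", "/v1/key/decrypt/user-key-12"},      // false
        // Non-argument wildcard: matches every (current and future) v1 Key-API operation.
        {"/v1/key/*/my-app-key", "/v1/key/create/my-app-key"}, // true
        {"/v1/key/*/my-app-key", "/v1/key/delete/my-app-key"}, // true
    }
    for _, c := range checks {
        matched, err := path.Match(c.pattern, c.requestPath)
        if err != nil {
            panic(err)
        }
        fmt.Println(c.pattern, "matches", c.requestPath, "->", matched)
    }
}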
The policy-identity mapping is a one-to-many relation: there can be arbitrarily many identities associated with the same policy. However, the same identity can only be associated with one policy at any point in time. There is one important caveat here:
The one-to-many relation only holds for one server. So, the same identity A can be associated with one policy P at the KES server S1 but with a different policy Q at the KES server S2. The two KES servers S1 and S2 have distinct and independent policy-identity sets.
Further, you may recall that the KES server computes the client identity from its certificate: 🆔 ═ SHA-256(📜.PublicKey). However, when specifying the identity-policy mapping it is totally valid to associate an arbitrary identity value with a policy. So, the identity value does not need to be an actual SHA-256 hash value. It can be "_", "disabled", "foobar123" or literally any other value.
This is particularly useful for dealing with the special root identity.
The KES server has a special root identity that must be specified - either via the configuration file or the --root CLI option. In general, root is like any other identity except that it cannot be associated with any policy but can perform arbitrary API operations.
So, the root identity is especially useful for initial provisioning and management tasks. However, within centrally managed and/or automated deployments - like Kubernetes - root is not necessary and is only a security risk. If an attacker gains access to root's private key and certificate, they can perform arbitrary operations.
Even though a root identity must always be specified, it is possible to effectively disable it. This can be done by specifying a root identity value that will never be an actual SHA-256 hash value - for example --root=_ (underscore) or --root=disabled. Since 🆔 ═ SHA-256(📜.PublicKey) will never be e.g. disabled, it becomes impossible to perform an operation as root.
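For example, a server start command might then include the following flag. The exact invocation - including how the configuration file is passed - depends on your KES version, and the value is just one of the placeholder values mentioned above.

kes server --root=disabled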
Note that even though root can perform arbitrary API operations, it cannot change the root identity itself. The root identity can only be specified/changed via the CLI or the configuration file. So, an attacker cannot become the root identity by tricking the current root. The attacker either has to compromise root's private key or change the initial server configuration.