
Hashicorp Vault Keystore


This guide shows how to set up a KES server that uses Vault's K/V engine as a persistent and secure key store:

                         ╔═════════════════════════════════════════════╗
┌────────────┐           ║  ┌────────────┐              ┌───────────┐  ║
│ KES Client ├───────────╫──┤ KES Server ├──────────────┤   Vault   │  ║
└────────────┘           ║  └────────────┘              └───────────┘  ║
                         ╚═════════════════════════════════════════════╝

Vault Server Setup

1. Generate Vault Private Key & Certificate

KES and Vault will exchange sensitive information. In particular, KES will send secret keys to and fetch them from Vault's HTTP API. Therefore, it is necessary to secure the communication between KES and Vault. Here, we use self-signed certificates for simplicity.

The following command generates a new TLS private key (vault.key) and a self-signed X.509 certificate (vault.crt) issued for the IP 127.0.0.1 and DNS name localhost:

$ kes identity new --key vault.key --cert vault.crt --ip "127.0.0.1" localhost

  Private key:  vault.key
  Certificate:  vault.crt
  Identity:     37ced4538faa0c236b9fa80826b50de9afb45cc29acf6575f069a2d10e6125af

If you already have a TLS private key & certificate - e.g. from a WebPKI or internal CA - you can use them instead. Remember to adjust the vault-config.json later on.

2. Configure Vault Server

The following vault-config.json starts a single Vault server instance on port 8200:

{
  "api_addr": "https://127.0.0.1:8200",
  "backend": {
    "file": {
      "path": "vault/file"
    }
  },

  "default_lease_ttl": "168h",
  "max_lease_ttl": "720h",

  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_cert_file": "vault.crt",
      "tls_key_file": "vault.key",
      "tls_min_version": "tls12"
    }
  }
}

Note that we run Vault with a file backend. For high availability, you may want to use etcd, Consul, or Vault's integrated storage (Raft) instead.
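
For illustration, a minimal vault-config.json using Vault's integrated storage (Raft) could look like the sketch below. It uses the newer storage stanza rather than backend; the node_id and storage path are placeholder values, a production cluster would run three or more nodes, and HashiCorp recommends disabling mlock when using Raft:

{
  "api_addr": "https://127.0.0.1:8200",
  "cluster_addr": "https://127.0.0.1:8201",
  "disable_mlock": true,

  "storage": {
    "raft": {
      "path": "vault/raft",
      "node_id": "vault-node-1"
    }
  },

  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_cert_file": "vault.crt",
      "tls_key_file": "vault.key",
      "tls_min_version": "tls12"
    }
  }
}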

3. Start Vault Server

If you haven't already, download the Vault binary.

On Linux, we can grant the binary the ipc_lock capability so that it can use the mlock syscall without root permissions:

sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))
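
To verify that the capability has been applied, you can inspect the binary with getcap (part of libcap on most Linux distributions):

$ getcap $(readlink -f $(which vault))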

Now, we can start the Vault server instance:

$ vault server -config vault-config.json
4. Set VAULT_ADDR endpoint

The Vault CLI needs to know the Vault endpoint:

export VAULT_ADDR='https://127.0.0.1:8200'

When using a self-signed vault.crt, the Vault CLI also needs to skip TLS certificate verification to talk to the Vault server:

export VAULT_SKIP_VERIFY=true
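
Alternatively, instead of skipping verification, you can point the Vault CLI at the self-signed certificate so that it is trusted explicitly:

export VAULT_CACERT="$PWD/vault.crt"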
5. Initialize Vault Server
$ vault operator init

Unseal Key 1: eyW/+8ZtsgT81Cb0e8OVxzJAQP5lY7Dcamnze+JnWEDT
Unseal Key 2: 0tZn+7QQCxphpHwTm6/dC3LpP5JGIbYl6PK8Sy79R+P2
Unseal Key 3: cmhs+AUMXUuB6Lzsvgcbp3bRT6VDGQjgCBwB2xm0ANeF
Unseal Key 4: /fTPpec5fWpGqWHK+uhnnTNMQyAbl5alUi4iq2yNgyqj
Unseal Key 5: UPdDVPto+H6ko+20NKmagK40MOskqOBw4y/S51WpgVy/
 
Initial Root Token: s.zaU4Gbcu0Wh46uj2V3VuUde0

Vault is initialized with 5 key shares and a key threshold of 3. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.

Vault will print N (5 by default) unseal key shares of which at least M (3 by default) are required to reconstruct the actual unseal key and unseal Vault. Therefore, make sure to store them in a secure and durable location.
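
If you want a different number of shares or a different threshold, both can be set during initialization. For example, the following (hypothetical values) generates 7 shares of which 4 are required:

$ vault operator init -key-shares=7 -key-threshold=4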

6. Set VAULT_TOKEN

The Vault CLI needs an authentication token to perform operations. The root access token is generated by vault operator init.

$ export VAULT_TOKEN=s.zaU4Gbcu0Wh46uj2V3VuUde0

Replace the token with your own Vault root token.

7. Unseal Vault Server

Once initialized, Vault has to be unsealed using M out of N unseal key shares:

$ vault operator unseal eyW/+8ZtsgT81Cb0e8OVxzJAQP5lY7Dcamnze+JnWEDT
$ vault operator unseal 0tZn+7QQCxphpHwTm6/dC3LpP5JGIbYl6PK8Sy79R+P2
$ vault operator unseal cmhs+AUMXUuB6Lzsvgcbp3bRT6VDGQjgCBwB2xm0ANeF

Once enough valid unseal key shares have been submitted, Vault will become unsealed and able to process requests.
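
You can check the seal state at any time with vault status; once unsealed, the output should report Sealed as false:

$ vault status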

8. Enable K/V Backend

KES will store the secret keys at the Vault K/V backend. Vault provides two different K/V engines: v1 and v2.

The following command enables the K/V v1 secret engine:

$ vault secrets enable -version=1 kv

In general, we recommend the K/V v1 engine.

The following command enables the K/V v2 secret engine:

$ vault secrets enable -version=2 kv

Note that the Vault policy for KES depends on the chosen K/V engine version. The v2 engine requires slightly different policy rules compared to the v1 engine. For more information about v1 vs. v2 see: Upgrading from v1
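
To double-check which engine version is mounted at kv/, you can list the secret engines in detail; the Options column of the kv/ mount shows the engine version:

$ vault secrets list -detailed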

9. Create Vault Policy

The Vault policy defines the API paths the KES server will be able to access.

The following kes-policy.hcl policy should be used for the K/V v1 backend:

path "kv/*" {
   capabilities = [ "create", "read", "delete" ]
}

The following kes-policy.hcl policy should be used for the K/V v2 backend:

path "kv/data/*" {
   capabilities = [ "create", "read" ]
}
path "kv/metadata/*" {
   capabilities = [ "list", "delete" ]       
}

The following command creates the policy at Vault:

$ vault policy write kes-policy kes-policy.hcl
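
You can read the policy back to verify that Vault stored the expected rules:

$ vault policy read kes-policy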
10. Enable AppRole Authentication

The KES server will later need to authenticate to Vault. Here, we use the AppRole authentication method.

$ vault auth enable approle
11. Create KES Role

The following command adds a new role kes-server at Vault:

$ vault write auth/approle/role/kes-server token_num_uses=0 secret_id_num_uses=0 period=5m
12. Bind Policy to Role

The following command binds the kes-server role to the kes-policy created earlier:

$ vault write auth/approle/role/kes-server policies=kes-policy
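
To confirm the binding, you can read the role back; kes-policy should appear in the role's policies (token_policies on newer Vault versions):

$ vault read auth/approle/role/kes-server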
13. Generate AppRole ID

Now, we can request an AppRole ID for the KES server:

$ vault read auth/approle/role/kes-server/role-id 
14. Generate AppRole Secret

Further, we can request an AppRole secret for the KES server:

$ vault write -f auth/approle/role/kes-server/secret-id 

The AppRole secret is printed as secret_id. The secret_id_accessor can be ignored.
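
As an optional sanity check, you can log in via AppRole with the generated credentials to confirm that the role and policy work together. The IDs below are placeholders for the values obtained in the previous two steps; a successful login returns a Vault token with the kes-policy attached:

$ vault write auth/approle/login \
    role_id=<your-role-id> \
    secret_id=<your-secret-id>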


KES Server Setup

1. Generate KES Server Private Key & Certificate

First, we need to generate a TLS private key and certificate for our KES server. A KES server can only be run with TLS since it is secure by default. Here, we use self-signed certificates for simplicity.

The following command generates a new TLS private key (private.key) and a self-signed X.509 certificate (public.crt) issued for the IP 127.0.0.1 and DNS name localhost:

$ kes identity new --ip "127.0.0.1" localhost

  Private key:  private.key
  Certificate:  public.crt
  Identity:     2e897f99a779cf5dd147e58de0fe55a494f546f4dcae8bc9e5426d2b5cd35680

If you already have a TLS private key & certificate - e.g. from a WebPKI or internal CA - you can use them instead. Remember to adjust the tls config section later on.

2. Generate Client Credentials

The client application needs some credentials to access the KES server. The following command generates a new TLS private key and certificate for the client:

$ kes identity new --key=client.key --cert=client.crt MyApp

  Private key:  client.key
  Certificate:  client.crt
  Identity:     02ef5321ca409dbc7b10e7e8ee44d1c3b91e4bf6e2198befdebee6312745267b

The identity 02ef5321ca409dbc7b10e7e8ee44d1c3b91e4bf6e2198befdebee6312745267b is a unique fingerprint of the public key in client.crt, and you can re-compute it at any time:

$ kes identity of client.crt

  Identity:  02ef5321ca409dbc7b10e7e8ee44d1c3b91e4bf6e2198befdebee6312745267b
3. Configure KES Server

Next, we can create the KES server configuration file config.yml. Please make sure that the identity in the policy section matches your client.crt identity, and add the AppRole role_id and secret_id obtained earlier.

address: 0.0.0.0:7373 # Listen on all network interfaces on port 7373

admin:
  identity: disabled  # We disable the admin identity since we don't need it in this guide 
   
tls:
  key: private.key    # The KES server TLS private key
  cert: public.crt    # The KES server TLS certificate
   
policy:
  my-app: 
    allow:
    - /v1/key/create/my-key*
    - /v1/key/generate/my-key*
    - /v1/key/decrypt/my-key*
    identities:
    - 02ef5321ca409dbc7b10e7e8ee44d1c3b91e4bf6e2198befdebee6312745267b # Use the identity of your client.crt
   
keystore:
   vault:
     endpoint: https://127.0.0.1:8200
     version:  v1 # The K/V engine version - either "v1" or "v2".
     approle:
       id:     "" # Your AppRole ID
       secret: "" # Your AppRole Secret
       retry:  15s
     status:
       ping: 10s
     tls:
       ca: vault.crt # Manually trust the vault certificate since we use self-signed certificates
4. Start KES Server

Now, we can start a KES server instance:

$ kes server --config config.yml --auth off

On Linux, KES can use the mlock syscall to prevent the OS from writing in-memory data to disk (swapping). This prevents accidentally leaking sensitive data. The following command allows KES to use the mlock syscall without running with root privileges:

$ sudo setcap cap_ipc_lock=+ep $(readlink -f $(which kes))

Then, we can start a KES server instance with memory protection:

$ kes server --config config.yml --auth off --mlock

KES CLI Access

1. Set KES_SERVER Endpoint

The KES CLI needs to know which server it should talk to:

$ export KES_SERVER=https://127.0.0.1:7373
2. Use Client Credentials

Further, the KES CLI needs some access credentials to talk to a KES server:

$ export KES_CLIENT_CERT=client.crt
$ export KES_CLIENT_KEY=client.key
3. Perform Operations

Now, we can perform any API operation that is allowed based on the policy we assigned above. For example, we can create a key:

$ kes key create my-key-1

If you are running KES locally for testing purposes, use the -k or --insecure flag to skip TLS certificate verification when creating a key:

$ kes key create my-key-1 -k

Then, we can use that key to generate a new data encryption key:

$ kes key dek my-key-1
{
  plaintext : UGgcVBgyQYwxKzve7UJNV5x8aTiPJFoR+s828reNjh0=
  ciphertext: eyJhZWFkIjoiQUVTLTI1Ni1HQ00tSE1BQy1TSEEtMjU2IiwiaWQiOiIxMTc1ZjJjNDMyMjNjNjNmNjY1MDk5ZDExNmU3Yzc4NCIsIml2IjoiVHBtbHpWTDh5a2t4VVREV1RSTU5Tdz09Iiwibm9uY2UiOiJkeGl0R3A3bFB6S21rTE5HIiwiYnl0ZXMiOiJaaWdobEZrTUFuVVBWSG0wZDhSYUNBY3pnRWRsQzJqWFhCK1YxaWl2MXdnYjhBRytuTWx0Y3BGK0RtV1VoNkZaIn0=
}

If you are running KES locally for testing purposes, use the -k or --insecure flag as well when generating a new data encryption key:

$ kes key dek my-key-1 -k
{
  plaintext : UGgcVBgyQYwxKzve7UJNV5x8aTiPJFoR+s828reNjh0=
  ciphertext: eyJhZWFkIjoiQUVTLTI1Ni1HQ00tSE1BQy1TSEEtMjU2IiwiaWQiOiIxMTc1ZjJjNDMyMjNjNjNmNjY1MDk5ZDExNmU3Yzc4NCIsIml2IjoiVHBtbHpWTDh5a2t4VVREV1RSTU5Tdz09Iiwibm9uY2UiOiJkeGl0R3A3bFB6S21rTE5HIiwiYnl0ZXMiOiJaaWdobEZrTUFuVVBWSG0wZDhSYUNBY3pnRWRsQzJqWFhCK1YxaWl2MXdnYjhBRytuTWx0Y3BGK0RtV1VoNkZaIn0=
}
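
The plaintext is the data encryption key an application would use locally, e.g. to encrypt an object, while the ciphertext is stored alongside the encrypted data. To recover the plaintext later, ask KES to decrypt the ciphertext with the same master key; the exact argument order may vary between KES versions, so check kes key decrypt --help. Roughly:

$ kes key decrypt my-key-1 "<base64-ciphertext-from-above>" -k

This is allowed by the policy above since it includes the /v1/key/decrypt/my-key* API path.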

Advanced Configuration

Here we show some additional configuration steps that can solve specific problems.

1. Multi-Tenancy with K/V prefixes

With some small changes, Vault can serve as the backend for multiple isolated KES tenants. Each KES tenant can consist of N replicas, and there can be M KES tenants connected to the same Vault server/cluster, so there are NxM KES server instances connected to a single Vault.

Therefore, each KES tenant gets a separate prefix at the K/V secret engine, and for each KES tenant there has to be a corresponding Vault policy.

For K/V version v1:

path "kv/<tenant-name>/*" {
   capabilities = [ "create", "read", "delete" ]
}

For K/V version v2:

path "kv/data/<tenant-name>/*" {
   capabilities = [ "create", "read" ]
}
path "kv/metadata/<tenant-name>/*" {
   capabilities = [ "list", "delete" ]       
}

Each KES tenant has a slightly different configuration file that contains the Vault K/V prefix to use:

keystore:
   vault:
     endpoint: https://127.0.0.1:8200
     prefix: <tenant-name>
     approle:
       id:     "" # Your AppRole ID
       secret: "" # Your AppRole Secret
       retry:  15s
     status:
       ping: 10s
     tls:
       ca: vault.crt # Manually trust the vault certificate since we use self-signed certificates
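
To verify the isolation, you can list the keys Vault has stored for a particular tenant after that tenant's KES server has created some:

$ vault kv list kv/<tenant-name>

For the K/V v2 engine, the vault kv helper translates this to the corresponding metadata path automatically.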
2. Multi-Tenancy with Vault Namespaces

Vault namespaces (a Vault Enterprise feature) are another way to serve multiple isolated KES tenants from the same Vault server/cluster. As before, each KES tenant can consist of N replicas, so there can be NxM KES server instances connected to a single Vault.

Here, each KES tenant uses its own Vault namespace instead of a K/V prefix. Since secret engines and policies are scoped to a namespace, the K/V engine has to be enabled and a corresponding Vault policy has to be created inside each tenant's namespace.

For K/V version v1:

path "kv/*" {
   capabilities = [ "create", "read", "delete" ]
}

For K/V version v2:

path "kv/data/*" {
   capabilities = [ "create", "read" ]
}
path "kv/metadata/*" {
   capabilities = [ "list", "delete" ]
}

Each KES tenant has a slightly different configuration file that contains the Vault namespace to use:

keystore:
   vault:
     endpoint: https://127.0.0.1:8200
     namespace: <vault-namespace>
     approle:
       id:     "" # Your AppRole ID
       secret: "" # Your AppRole Secret
       retry:  15s
     status:
       ping: 10s
     tls:
       ca: vault.crt # Manually trust the vault certificate since we use self-signed certificates
