Use an external database
By default, Deep Security Smart Check will configure a database pod in your Kubernetes cluster. This is convenient for demonstration purposes, but for production it is a good idea to use an external database.
It is important for the database to be geographically close to the Deep Security Smart Check cluster. Network delays between Deep Security Smart Check and your database can cause the system to behave poorly.
You can configure Deep Security Smart Check with an external database running PostgreSQL 9.6, 10, or 11.
You will need to know the user ID and password for the master user on your database server, as well as the host name and port.
Add the snippet below to your overrides.yaml file before installing Deep Security Smart Check:

```yaml
db:
  user: postgres
  password: password
  host: database.example.com
  port: 5432
```
NOTE: Postgres attempts to connect to a database with the same name as the user, and will fail if that database does not exist. Ensure that a database with the same name as the user exists on the server and that the user has administrative rights on that database.
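For example, if the configured database user were named smartcheck (a hypothetical name used only for illustration), a minimal sketch of creating the matching database with psql might look like this; adjust the host, user, and names for your environment:

```sh
# Hypothetical user name "smartcheck"; substitute your own db.user value.
# Postgres connects to a database named after the user by default, so that
# database must exist, and making the user its owner grants it full rights.
psql -h database.example.com -U postgres \
  -c "CREATE DATABASE smartcheck OWNER smartcheck;"
```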
By default, Deep Security Smart Check uses a secure connection to your database server. You will likely need to configure trust for the certificate authority that created your database server's TLS certificate.
To do this, first create a ConfigMap with the certificate authority's certificate:

```sh
$ kubectl create \
    configmap \
    dssc-db-trust \
    --from-file=ca=ca.pem
```
and then update the db section of your overrides.yaml file and add a tls section to tell Deep Security Smart Check where to get the certificate to trust:

```yaml
db:
  tls:
    ca:
      valueFrom:
        configMapKeyRef:
          name: dssc-db-trust
          key: ca
```
You can then install Deep Security Smart Check with these overrides.
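For reference, a minimal install sketch in Helm 2 syntax (matching the helm delete --purge commands used later in this document); the chart location is an assumption, so substitute the chart source you actually use:

```sh
# Helm 2 syntax. The chart URL is an assumption; use your actual chart source.
helm install \
  --name deepsecurity-smartcheck \
  --values overrides.yaml \
  https://github.com/deep-security/smartcheck-helm/archive/master.tar.gz
```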
NOTE: At this time, Deep Security Smart Check only supports database certificates with RSA keys.
Deep Security Smart Check does not support migrating from the built-in database to an external database. You must re-install.
If you don't have an EKS cluster, eksctl is a convenient tool for provisioning one. We suggest deploying RDS for PostgreSQL in the same VPC as the cluster.
To allow communication between the cluster and RDS, add an inbound rule to the RDS security group that permits traffic on port 5432 from your cluster's security group. If the cluster has multiple security groups, you may need to add the rule for each of them.
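As a sketch, assuming hypothetical security group IDs, the rule could be added with the AWS CLI:

```sh
# Hypothetical IDs: replace sg-0aaa... with the RDS security group and
# sg-0bbb... with a cluster node security group. Repeat the command for
# each cluster security group if there is more than one.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp \
  --port 5432 \
  --source-group sg-0bbbbbbbbbbbbbbbb
```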
For security reasons, do not give the RDS instance public access.
By default, TLS is enabled in RDS. You can obtain the root certificate and use it to configure the trust ConfigMap for Smart Check to connect to RDS, as described above.
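For example, a sketch of fetching the bundle and creating the ConfigMap; the bundle URL is the one given in the AWS documentation (see the TIP below) at the time of writing, so confirm the current location before using it:

```sh
# Download the RDS CA bundle and create the trust ConfigMap used above.
curl -o rds-ca.pem https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
kubectl create configmap dssc-db-trust --from-file=ca=rds-ca.pem
```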
You can follow the Azure documentation to provision an AKS cluster using the Azure portal or the Azure CLI. We recommend selecting a virtual machine type that supports accelerated networking for the cluster nodes, as accelerated networking greatly improves networking performance; the D/DSv2 and F/Fs series of virtual machines support it.
Follow the Azure documentation to create a database using either the Azure portal or the Azure CLI. To allow a secure and direct connection between the cluster and the database, enable VNet service endpoints and VNet rules in Azure Database for PostgreSQL.
If TLS connectivity needs to be enforced, you can obtain the root certificate and use it to configure the trust ConfigMap for Smart Check to connect to Azure Database for PostgreSQL, as described above.
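For example, a sketch using the Baltimore CyberTrust Root, which Azure Database for PostgreSQL certificates chained to at the time of writing; confirm the current root CA in the Azure documentation before using this URL:

```sh
# Download the root certificate and create the trust ConfigMap used above.
curl -o azure-ca.pem https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
kubectl create configmap dssc-db-trust --from-file=ca=azure-ca.pem
```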
If there is a problem with your setup, any pods that rely on the database connection will get stuck trying to initialize.
```
$ kubectl get pods --field-selector='status.phase!=Running'
NAME                                  READY   STATUS     RESTARTS   AGE
auth-7d78dccff7-nfh97                 0/1     Init:0/1   0          4m26s
openscap-scan-ddc7b9d-jrhc7           0/2     Init:0/1   0          4m26s
registryviews-5f46786b46-m6x84        0/1     Init:0/1   0          4m25s
scan-568ffb49d7-dp2tt                 0/1     Init:0/1   0          4m25s
vulnerability-scan-7b7c59d6f8-d5ql9   0/1     Init:0/2   0          4m25s
```
Pick one of these pods and run kubectl describe pod on it, then look at the Events section for more information.
In this example, the error shows that the dssc-db-trust ConfigMap does not exist.
```
$ kubectl describe pod auth-7d78dccff7-nfh97
...
Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    7m13s                 default-scheduler  Successfully assigned default/auth-7d78dccff7-nfh97 to minikube
  Warning  FailedMount  61s (x11 over 7m13s)  kubelet, minikube  MountVolume.SetUp failed for volume "database-ca" : configmap "dssc-db-trust" not found
  Warning  FailedMount  36s (x3 over 5m10s)   kubelet, minikube  Unable to mount volumes for pod "auth-7d78dccff7-nfh97_default(ff34aa94-94fc-11e9-90aa-080027ce2867)": timeout expired waiting for volumes to attach or mount for pod "default"/"auth-7d78dccff7-nfh97". list of unmounted volumes=[database-ca]. list of unattached volumes=[database-ca]
```
Create the ConfigMap as described in Configure TLS for your database connection and ensure the overrides.yaml is using the correct value for the ConfigMap name.
If you modify the overrides.yaml file, you will need to use helm upgrade, or helm delete --purge and helm install, to pick up the change (see the upgrade sketch below). If you did not modify the overrides.yaml file, you can simply delete the stuck pods; Kubernetes will re-create them.
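For example, a minimal upgrade sketch in Helm 2 syntax; the release name and chart source are assumptions matching the install sketch earlier in this document:

```sh
# Helm 2 syntax. Substitute your actual release name and chart source.
helm upgrade \
  --values overrides.yaml \
  deepsecurity-smartcheck \
  https://github.com/deep-security/smartcheck-helm/archive/master.tar.gz
```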
In this example, the error shows that the ca key does not exist in the dssc-db-trust ConfigMap.
```
$ kubectl describe pod auth-7d78dccff7-p79j2
...
Events:
  Type     Reason       Age               From               Message
  ----     ------       ----              ----               -------
  Normal   Scheduled    13s               default-scheduler  Successfully assigned default/auth-7d78dccff7-p79j2 to minikube
  Warning  FailedMount  6s (x5 over 13s)  kubelet, minikube  MountVolume.SetUp failed for volume "database-ca" : configmap references non-existent config key: ca
```
Create the ConfigMap as described in Configure TLS for your database connection and ensure the overrides.yaml is using the correct value for the ConfigMap name and key.
If you modify the overrides.yaml file, you will need to use helm upgrade, or helm delete --purge and helm install, to pick up the change. If you did not modify the overrides.yaml file, you can simply delete the stuck pods; Kubernetes will re-create them.
If there are no errors in the Events section of the kubectl describe pod output, check the logs of the db-init container in the pod.
In this example, the i/o timeout error indicates that the container was unable to reach the database server.
```
$ kubectl logs auth-5447fbfb7-gvrbh -c db-init
{"commit":"79d968b712cfba4407e2cdc6f848034435c04859","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-17T15:48:15Z"}
{"component":"db-init","error":"dial tcp 192.168.19.226:5432: i/o timeout","message":"could not get database connection","severity":"info","timestamp":"2019-06-17T15:48:20Z"}
```
Check that your database server allows connections from all nodes in your cluster. This may involve firewall rules, security groups, or other security controls depending on your cluster infrastructure.
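One way to test connectivity from inside the cluster is to run a throwaway psql client pod. This is a sketch that reuses the example host name and user from earlier; substitute your own values:

```sh
# Runs an interactive, self-deleting pod containing the Postgres client.
# psql prompts for the password if the TCP connection succeeds; an i/o
# timeout here points at firewall rules or security groups.
kubectl run psql-test --rm -it --restart=Never --image=postgres:11 -- \
  psql "host=database.example.com port=5432 user=postgres sslmode=require"
```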
Once you have resolved the network connectivity issue, Deep Security Smart Check should automatically recover and complete the initialization process.
In this example, the error logs show that the database server has rejected the username / password that you provided.
```
$ kubectl logs -f auth-66b5d948c-xht4t -c db-init
{"commit":"5c108f2e383fd54fef8d3f6848c0424d9de9e001","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-22T15:22:08Z"}
Error: could not set up database: database not available: could not get database connection: pq: password authentication failed for user "postgres"
```
Check the username and password and update the overrides.yaml file if required.
If you modify the overrides.yaml file, you will need to use helm upgrade, or helm delete --purge and helm install, to pick up the change. If you did not modify the overrides.yaml file, you can simply delete the stuck pods; Kubernetes will re-create them.
In this example, the error logs show that the database server has no database matching the name of the database user that you configured.
```
$ kubectl logs auth-5447fbfb7-gvrbh -c db-init
Error: could not set up database: database not available: could not get database connection: pq: database "sampleuser" does not exist
```
When Deep Security Smart Check connects to the database for the first time, it does not provide a database name. Postgres attempts to connect to a database with the same name as the user, and will fail if that database does not exist.
Create a database with the same name as the user. Deep Security Smart Check should automatically recover, or you can retry your installation.
NOTE: No application data will be stored in this default database. All application data will be stored in service-specific databases on the server.
In this example, the error logs show that a secure connection to the database could not be established because the server certificate was issued by a certificate authority that Deep Security Smart Check does not recognize.
```
$ kubectl logs auth-66b5d948c-t5jj9 -c db-init
{"commit":"5c108f2e383fd54fef8d3f6848c0424d9de9e001","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-22T15:33:25Z"}
{"component":"db-init","error":"x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"database.example.com\")","message":"could not get database connection","severity":"info","timestamp":"2019-06-22T15:33:25Z"}
```
Obtain the certificate (or certificate bundle) for your certificate authority, then create the ConfigMap as described in Configure TLS for your database connection.
TIP: If you are using Postgres in Amazon RDS, read Using SSL with a PostgreSQL DB Instance for more information and to get the certificate bundle you will need.
If you modify the overrides.yaml file, you will need to use helm upgrade, or helm delete --purge and helm install, to pick up the change. If you did not modify the overrides.yaml file, you can simply delete the stuck pods; Kubernetes will re-create them.
In this example, the error logs show that a secure connection to the database could not be established because Deep Security Smart Check has been configured to connect to the database using an IP address, but the server certificate does not include an entry for that address.
```
$ kubectl logs auth-66b5d948c-t5jj9 -c db-init
{"component":"db-init","error":"x509: cannot validate certificate for 192.168.2.54 because it doesn't contain any IP SANs","message":"could not get database connection","severity":"info","timestamp":"2019-07-11T11:29:45Z"}
```
There are two paths to resolving this issue: either configure Deep Security Smart Check to use a host name for the database server (of course, the certificate must be valid for that host name!), or re-create the certificate and ensure that the server's IP address is present in the Subject Alternative Name list.
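If you take the second path, here is a sketch of creating a self-signed server certificate that includes an IP entry in the Subject Alternative Name list. It requires OpenSSL 1.1.1 or later for the -addext option, reuses the example host name and IP address from above, and will need adapting if a CA signs your certificates:

```sh
# Self-signed sketch: assumes the private key server.key already exists.
# Substitute your actual host name and IP address in the SAN list.
openssl req -new -x509 -days 365 \
  -key server.key -out server.crt \
  -subj "/CN=database.example.com" \
  -addext "subjectAltName=DNS:database.example.com,IP:192.168.2.54"
```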
In this example, the error logs show that the database server is not running TLS, but Deep Security Smart Check has been configured to use TLS.
```
$ kubectl logs -f auth-66b5d948c-9cp78 -c db-init
{"commit":"5c108f2e383fd54fef8d3f6848c0424d9de9e001","component":"db-init","message":"Starting up","severity":"audit","timestamp":"2019-06-22T15:26:42Z"}
{"component":"db-init","error":"pq: SSL is not enabled on the server","message":"could not get database connection","severity":"info","timestamp":"2019-06-22T15:26:42Z"}
```
Update your server configuration to use TLS.
If you cannot update your server configuration to use TLS, you can disable TLS in your overrides.yaml file. This option is less secure and could make it easier for your system to be compromised. See the documentation in values.yaml for more details on options for configuring TLS for the database connection.
Network delays between your external database and the Deep Security Smart Check cluster could make Deep Security Smart Check less responsive or cause it to fail. Resolving the network delays should resolve these problems.