
TLSv1.3 ciphers/ciphersuites cannot be changed #8507

Closed
ldawert-sys11 opened this issue Apr 25, 2022 · 17 comments
Labels
kind/bug (Categorizes issue or PR as related to a bug.) · needs-priority · needs-triage (Indicates an issue or PR lacks a `triage/foo` label and requires one.)

Comments

@ldawert-sys11

ldawert-sys11 commented Apr 25, 2022

NGINX Ingress controller version: Release v1.1.2, build bab0fba, NGINX version nginx/1.19.9

Kubernetes version: v1.21.3

Environment:

  • OS: Ubuntu 20.04.3 LTS
  • Kernel: 5.4.0-99-generic
  • Basic cluster related info:
    • kubectl version:
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
  • kubectl get nodes -o wide
kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP       OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
loving-carson-65bdbd7d4b-2grqg   Ready    <none>   55d   v1.21.3   192.168.1.158   195.192.159.33    Ubuntu 20.04.3 LTS   5.4.0-99-generic   docker://19.3.15
loving-carson-65bdbd7d4b-5rgl4   Ready    <none>   30d   v1.21.3   192.168.1.185   195.192.158.14    Ubuntu 20.04.3 LTS   5.4.0-99-generic   docker://19.3.15
loving-carson-65bdbd7d4b-9d4j8   Ready    <none>   55d   v1.21.3   192.168.1.131   195.192.156.157   Ubuntu 20.04.3 LTS   5.4.0-99-generic   docker://19.3.15
loving-carson-65bdbd7d4b-dkltn   Ready    <none>   55d   v1.21.3   192.168.1.44    195.192.158.213   Ubuntu 20.04.3 LTS   5.4.0-99-generic   docker://19.3.15
  • How was the ingress-nginx-controller installed: Helm Chart
    • If helm was used then please show output of helm ls -A | grep -i ingress:
$ helm ls -aA | grep ingress
ingress-nginx                  	syseleven-ingress-nginx        	3       	2022-04-04 09:37:54.845571954 +0000 UTC	deployed	ingress-nginx-4.0.18                 	1.1.2
  • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
helm install values
USER-SUPPLIED VALUES:
controller:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: ingress-nginx
        topologyKey: kubernetes.io/hostname
  allowSnippetAnnotations: true
  config:
    compute-full-forwarded-for: "true"
    custom-http-errors: 502,503,504
    log-format-upstream: $remote_addr - $remote_user [$time_local] $ingress_name "$request"
      $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length
      $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr
      $upstream_response_length $upstream_response_time $upstream_status $req_id
    use-forwarded-headers: "true"
    use-proxy-protocol: "true"
  extraArgs:
    default-backend-service: syseleven-ingress-nginx/ingress-nginx-extension
  ingressClass: nginx
  metrics:
    enabled: true
    prometheusRule:
      enabled: true
      rules:
      - alert: NGINXConfigFailed
        annotations:
          description: bad ingress config - nginx config test failed
          summary: uninstall the latest ingress changes to allow config reloads to
            resume
        expr: count(nginx_ingress_controller_config_last_reload_successful == 0) >
          0
        for: 1s
        labels:
          severity: critical
      - alert: NGINXCertificateExpiry
        annotations:
          description: ssl certificate(s) will expire in less then a week
          summary: renew expiring certificates to avoid downtime
        expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time())
          < 604800
        for: 1s
        labels:
          severity: critical
    serviceMonitor:
      enabled: true
  publishService:
    enabled: true
  replicaCount: 2
  resources:
    limits:
      cpu: 1
      memory: 256Mi
    requests:
      cpu: 1
      memory: 256Mi
  service:
    annotations:
      loadbalancer.openstack.org/proxy-protocol: "true"
  stats:
    enabled: true
  updateStrategy:
    type: RollingUpdate
defaultBackend:
  enabled: false
rbac:
  create: true
  • Current State of the controller:
    • kubectl describe ingressclasses
kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.1.2
              helm.sh/chart=ingress-nginx-4.0.18
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: syseleven-ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl get all
NAME                                            READY   STATUS    RESTARTS   AGE     IP            NODE                             NOMINATED NODE   READINESS GATES
pod/ingress-nginx-controller-59bd6dd5ff-4r7gm   1/1     Running   0          4d23h   172.25.1.33   loving-carson-65bdbd7d4b-2grqg   <none>           <none>
pod/ingress-nginx-controller-59bd6dd5ff-tqkxg   1/1     Running   0          20d     172.25.0.61   loving-carson-65bdbd7d4b-dkltn   <none>           <none>

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE    SELECTOR
service/ingress-nginx-controller             LoadBalancer   10.240.20.72    195.192.153.120   80:31006/TCP,443:30351/TCP   103d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP      10.240.30.100   <none>            443/TCP                      103d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics     ClusterIP      10.240.19.212   <none>            10254/TCP                    103d   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS                IMAGES                                                                                                               SELECTOR
deployment.apps/ingress-nginx-controller   2/2     2            2           103d   controller                k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                                  DESIRED   CURRENT   READY   AGE    CONTAINERS                IMAGES                                                                                                               SELECTOR
replicaset.apps/ingress-nginx-controller-546f5958c4   0         0         0       30d    controller                k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=546f5958c4
replicaset.apps/ingress-nginx-controller-59bd6dd5ff   2         2         2       20d    controller                k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=59bd6dd5ff
replicaset.apps/ingress-nginx-controller-6f64dddc7c   0         0         0       103d   controller                k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6f64dddc7c
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl describe pod
Name:         ingress-nginx-controller-59bd6dd5ff-tqkxg
Namespace:    syseleven-ingress-nginx
Priority:     0
Node:         loving-carson-65bdbd7d4b-dkltn/192.168.1.44
Start Time:   Mon, 04 Apr 2022 11:38:01 +0200
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=59bd6dd5ff
Annotations:  cni.projectcalico.org/podIP: 172.25.0.61/32
              kubectl.kubernetes.io/restartedAt: 2022-03-25T13:40:04+01:00
Status:       Running
IP:           172.25.0.61
IPs:
  IP:           172.25.0.61
Controlled By:  ReplicaSet/ingress-nginx-controller-59bd6dd5ff
Containers:
  controller:
    Container ID:  docker://9ecceaf892ffa0b48a9e088bd0ee5fd4eaf5b02dddc9fbad19a80078c9942438
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
    Image ID:      docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
    Ports:         80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-backend-service=syseleven-ingress-nginx/ingress-nginx-extension
    State:          Running
      Started:      Mon, 04 Apr 2022 11:38:07 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  256Mi
    Requests:
      cpu:      1
      memory:   256Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-59bd6dd5ff-tqkxg (v1:metadata.name)
      POD_NAMESPACE:  syseleven-ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zw5m2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-zw5m2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
Name:         ingress-nginx-controller-59bd6dd5ff-4r7gm
Namespace:    syseleven-ingress-nginx
Priority:     0
Node:         loving-carson-65bdbd7d4b-2grqg/192.168.1.158
Start Time:   Wed, 20 Apr 2022 09:41:14 +0200
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              pod-template-hash=59bd6dd5ff
Annotations:  cni.projectcalico.org/podIP: 172.25.1.33/32
              kubectl.kubernetes.io/restartedAt: 2022-03-25T13:40:04+01:00
Status:       Running
IP:           172.25.1.33
IPs:
  IP:           172.25.1.33
Controlled By:  ReplicaSet/ingress-nginx-controller-59bd6dd5ff
Containers:
  controller:
    Container ID:  docker://53154e06133fd91d6093147e684df4748beddf50d8fe6e0802ba0d9d1792f006
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
    Image ID:      docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
    Ports:         80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-backend-service=syseleven-ingress-nginx/ingress-nginx-extension
    State:          Running
      Started:      Wed, 20 Apr 2022 09:41:20 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  256Mi
    Requests:
      cpu:      1
      memory:   256Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-59bd6dd5ff-4r7gm (v1:metadata.name)
      POD_NAMESPACE:  syseleven-ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ts9sh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  kube-api-access-ts9sh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
kubectl describe svc
Name:                     ingress-nginx-controller
Namespace:                syseleven-ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.1.2
                          helm.sh/chart=ingress-nginx-4.0.18
Annotations:              loadbalancer.openstack.org/proxy-protocol: true
                          meta.helm.sh/release-name: ingress-nginx
                          meta.helm.sh/release-namespace: syseleven-ingress-nginx
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.240.20.72
IPs:                      10.240.20.72
LoadBalancer Ingress:     195.192.153.120
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31006/TCP
Endpoints:                172.25.0.61:80,172.25.1.33:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30351/TCP
Endpoints:                172.25.0.61:443,172.25.1.33:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
  • Current state of ingress object, if applicable:
    • kubectl -n <appnamespace> get all,ing -o wide
kubectl get all,ing -n APP-NS
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE                             NOMINATED NODE   READINESS GATES
pod/nginx                      1/1     Running   0          30d     172.25.0.50   loving-carson-65bdbd7d4b-dkltn   <none>           <none>

NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/nginx   ClusterIP   10.240.31.28   <none>        80/TCP    30d   run=nginx

NAME                                     CLASS    HOSTS                      ADDRESS           PORTS     AGE
ingress.networking.k8s.io/test-ingress   <none>   www2.ldawert.metakube.io   195.192.153.120   80, 443   30d
  • kubectl -n <appnamespace> describe ing <ingressname>
kubectl describe ing test-ingress
Name:             test-ingress
Namespace:        ldawert
Address:          195.192.153.120
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  www2.ldawert.metakube.io-tls terminates www2.ldawert.metakube.io
Rules:
  Host                      Path  Backends
  ----                      ----  --------
  www2.ldawert.metakube.io
                            /   nginx:80 (172.25.0.50:80)
Annotations:                cert-manager.io/cluster-issuer: letsencrypt-production
                            kubernetes.io/ingress.class: nginx
                            nginx.ingress.kubernetes.io/server-snippet:
                              ssl_protocols TLSv1.2 TLSv1.3;
                              ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
                            nginx.ingress.kubernetes.io/ssl-ciphers:
                              'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
Events:                     <none>

What happened:

Trying to configure TLSv1.3 ciphers with:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      ssl_protocols TLSv1.2 TLSv1.3;
      ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
    nginx.ingress.kubernetes.io/ssl-ciphers: |
      'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384' # TLSv1.2 works

The configuration from the server-snippet is rendered correctly into the server block of the controller's nginx config:

< ... >
server {
		server_name www2.ldawert.metakube.io ;

		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;

		set $proxy_upstream_name "-";

		ssl_certificate_by_lua_block {
			certificate.call()
		}

		ssl_ciphers                             'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
		;

		# Custom code snippet configured for host www2.ldawert.metakube.io
		ssl_protocols TLSv1.2 TLSv1.3;
		ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
< ... >

However, when testing the TLS ciphers (for example with nmap), the default TLSv1.3 ciphersuites are still being offered:

$ nmap -sV --script ssl-enum-ciphers -p 443 www2.ldawert.metakube.io
Starting Nmap 7.92 ( https://nmap.org ) at 2022-04-25 09:55 CEST
Nmap scan report for www2.ldawert.metakube.io (195.192.153.120)
Host is up (0.020s latency).

PORT    STATE SERVICE  VERSION
443/tcp open  ssl/http nginx (reverse proxy)
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (ecdh_x25519) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (ecdh_x25519) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|       TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|     cipher preference: server
|_  least strength: A

What you expected to happen:

Setting ssl_conf_command Ciphersuites via nginx.ingress.kubernetes.io/server-snippet should configure the TLSv1.3 ciphersuites used by the server block it is rendered into.
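
One way to verify which TLSv1.3 suites the server actually accepts, independent of nmap, is to force a single suite from the client with openssl s_client (a sketch, assuming an OpenSSL 1.1.1+ client; the hostname is the one from this report):

$ # should succeed: suite is in the configured list
$ openssl s_client -connect www2.ldawert.metakube.io:443 -servername www2.ldawert.metakube.io \
    -tls1_3 -ciphersuites TLS_AES_128_GCM_SHA256 </dev/null 2>/dev/null | grep Cipher
$ # should fail the handshake if the restriction worked: suite is outside the configured list
$ openssl s_client -connect www2.ldawert.metakube.io:443 -servername www2.ldawert.metakube.io \
    -tls1_3 -ciphersuites TLS_CHACHA20_POLY1305_SHA256 </dev/null 2>/dev/null | grep Cipher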

How to reproduce it:

  • install k8s cluster (e.g. minikube)
  • install ingress nginx via helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx 
  • create app
create app
$ cat app.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
status:
  loadBalancer: {}

$ kubectl apply -f app.yaml
  • create ingress
create ingress
$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-ciphers: |
      'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
    nginx.ingress.kubernetes.io/server-snippet: |
      ssl_protocols TLSv1.2 TLSv1.3;
      ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
    kubernetes.io/ingress.class: nginx
  name: test-ingress
spec:
  rules:
  - host: testdomain.local
    http:
      paths:
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific

$ kubectl apply -f ingress.yaml
  • create debugging pod (with nmap binary):
create debugging pod
$ cat netshoot.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: tmp-shell
  name: tmp-shell
spec:
  containers:
  - image: nicolaka/netshoot
    name: tmp-shell
    resources: {}
    command:
    - sleep
    - "100000"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

$ kubectl apply -f netshoot.yaml
  • Get IP of ingress pod
$ kubectl get pod <ingress-controller> -o jsonpath='{.status.podIP}'
  • Check ciphers in debugging pod
check ciphers
$ kubectl exec -ti tmp-shell -- bash

bash-5.1$ echo "<ingress-controller-ip> testdomain.local" >> /etc/hosts
bash-5.1$ nmap -sV --script ssl-enum-ciphers -p 443 testdomain.local
Starting Nmap 7.92 ( https://nmap.org ) at 2022-04-25 08:18 UTC
Nmap scan report for testdomain.local (172.17.0.4)
Host is up (0.000044s latency).

PORT    STATE SERVICE  VERSION
443/tcp open  ssl/http nginx (reverse proxy)
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (ecdh_x25519) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (ecdh_x25519) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|       TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|     cipher preference: server
|_  least strength: A
MAC Address: 02:42:AC:11:00:04 (Unknown)
@ldawert-sys11 ldawert-sys11 added the kind/bug Categorizes issue or PR as related to a bug. label Apr 25, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Apr 25, 2022
@k8s-ci-robot
Contributor

@ldawert-sys11: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 24, 2022
@UnrealCraig

Those other ciphers don't exist for TLSv1.3; why would you want them?
TLSv1.3 has 3 standard ciphersuites and 2 optional ones.
The TLSv1.3 spec also doesn't allow RSA key exchange, and TLSv1.3 ciphersuite names don't include a key-exchange type (because only Diffie-Hellman is allowed), so your attempted config could never be valid for 1.3, only for 1.2.

From OpenSSL docs: "Note that changing the TLSv1.2 and below cipher list has no impact on the TLSv1.3 ciphersuite configuration."
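
In nginx terms these are two unrelated directives (a minimal sketch, assuming nginx built against OpenSSL 1.1.1+):

# TLSv1.2 and below: OpenSSL cipher *list* syntax, hyphenated names
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384';
# TLSv1.3 only: OpenSSL *ciphersuite* syntax, TLS_* names, set via ssl_conf_command
ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";

Changing one has no effect on the other.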

@chancez
Member

chancez commented Aug 24, 2022

I was trying to do something similar today because I'm having trouble connecting to a TLS-enabled gRPC service via ingress-nginx. The backend only supports TLS 1.3, and I can connect to it fine via port-forward.

nginx is failing with the following in the logs:

SSL: error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:SSL alert number 70

I then ran ssldump in the container to troubleshoot and saw the following:

New TCP connection #1105: 10.0.0.232(40460) <-> 10.0.0.129(4245)
1105 1  0.0041 (0.0041)  C>S  Handshake
      ClientHello
        Version 3.3 
        cipher suites
        TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
        TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
        TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
        TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
        TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
        TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
        TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
        TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
        TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
        TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
        TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
        TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA
        TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
        TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
        TLS_DHE_RSA_WITH_AES_128_CBC_SHA
        TLS_RSA_WITH_AES_256_GCM_SHA384
        TLS_RSA_WITH_AES_128_GCM_SHA256
        TLS_RSA_WITH_AES_256_CBC_SHA256
        TLS_RSA_WITH_AES_128_CBC_SHA256
        TLS_RSA_WITH_AES_256_CBC_SHA
        TLS_RSA_WITH_AES_128_CBC_SHA
        TLS_EMPTY_RENEGOTIATION_INFO_SCSV
        compression methods
                  NULL
        extensions
          ec_point_formats
          supported_groups
          session_ticket
          application_layer_protocol_negotiation
          encrypt_then_mac
          extended_master_secret
          signature_algorithms
1105 2  0.0044 (0.0003)  S>C  Alert
    level           fatal
    value           protocol_version
1105    0.0059 (0.0015)  C>S  TCP RST
1103    0.0085 (0.0078)  S>C  TCP FIN

I'm not seeing TLS_AES_256_GCM_SHA384 in this list, despite using TLS 1.3, which supports it. Is it possible nginx has misconfigured ciphers for TLS 1.3?

@UnrealCraig

  ClientHello

...
I'm not seeing TLS_AES_256_GCM_SHA384 in this list, despite using TLS 1.3, which supports it. Is it possible nginx has misconfigured ciphers for TLS 1.3?

No, that's the list of ciphers supported by the client, not the server. The ServerHello tells you which of those NGINX decided to use; in this case it rejected all of them.
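
If you want to see the suite the server actually selects rather than what the client offers, the negotiated protocol and cipher can be read straight from an openssl s_client session (a sketch; host and port are placeholders for the backend above):

$ openssl s_client -connect <backend-host>:4245 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'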

@chancez
Member

chancez commented Aug 24, 2022

@UnrealCraig ah, you're totally right, I mixed up the client/server hellos. Then it seems it's failing due to protocol_version.

Looking at the docs for grpc_ssl_protocols, this might be due to the default not including TLS 1.3:

Default: grpc_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

@chancez
Member

chancez commented Aug 24, 2022

Yep, that was it. gRPC TLS backends that only support TLS 1.3 fail because the default grpc_ssl_protocols doesn't have TLS 1.3 enabled.

The following worked for gRPC with TLS termination at ingress, with TLS enabled on the backend:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_ssl_protocols TLSv1.3;
    cert-manager.io/cluster-issuer: "selfsigned-ca-issuer"
  name: hubble-relay
  namespace: kube-system
spec:
  ingressClassName: nginx
  rules:
  - host: hubble-relay.127-0-0-1.sslip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hubble-relay
            port:
              number: 443
  tls:
  - secretName: hubble-relay-ingress-cert
    hosts:
      - hubble-relay.127-0-0-1.sslip.io
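
To sanity-check gRPC end-to-end through the ingress, something like grpcurl can be pointed at the host from the manifest above (a sketch; -insecure skips verification of the self-signed chain, and the list verb assumes the backend exposes gRPC reflection):

$ grpcurl -insecure hubble-relay.127-0-0-1.sslip.io:443 list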

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 23, 2022
@ldawert-sys11
Author

ldawert-sys11 commented Sep 26, 2022

I don't see how the gRPC solution relates to my case, as there's no gRPC involved and the backends are also not queried via TLS but rather plain HTTP.

@UnrealCraig

Those other ciphers don't exist for TLSv1.3; why would you want them?

I want to configure TLSv1.2 ciphers AND TLSv1.3 ciphersuites.

TLSv1.3 has 3 standard ciphersuites and 2 optional ones. The TLSv1.3 spec also doesn't allow RSA key exchange, and ciphersuite names don't include a key-exchange type (because only Diffie-Hellman is allowed), so your attempted config could never be valid for 1.3, only for 1.2.

Could you please elaborate a bit more on this? Maybe give an example that I can try out for the naming?

From OpenSSL docs: "Note that changing the TLSv1.2 and below cipher list has no impact on the TLSv1.3 ciphersuite configuration."

I know; however, as said, I would like to adjust both the TLSv1.2 and the TLSv1.3 settings.

Thanks in advance for your help!
Leon

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 26, 2022
@jstangroome
Contributor

@ldawert-sys11 your original configuration looks correct, but note that it sets the TLS ciphers only in the server block for your www2.ldawert.metakube.io server name. Is it possible the nmap test was targeting the default server block instead of your server block with the overridden ciphers?

I.e. was nmap sending the correct Server Name Indication (SNI) value in the TLS ClientHello record? According to the nmap ssl-enum-ciphers documentation, the tls.servername script argument would control the SNI value and I suspect it may be blank by default.

E.g. is the test result what you expect if you instead run:

$ nmap -sV --script ssl-enum-ciphers --script-args tls.servername=www2.ldawert.metakube.io -p 443 www2.ldawert.metakube.io

I'd also suggest adding your ssl_conf_command as a http-snippet so it applies to all server blocks, and repeating your test to see if that also corrects the result for your original nmap command. A sketch of that variant follows below.
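
For reference, the http-snippet variant would look roughly like this in the controller ConfigMap (a sketch; the ConfigMap name and namespace are taken from the --configmap flag shown earlier in this report):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: syseleven-ingress-nginx
data:
  http-snippet: |
    ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";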

@ldawert-sys11
Author

ldawert-sys11 commented Sep 27, 2022

Hi @jstangroome, I tried both:

the tls.servername script argument would control the SNI value and I suspect it may be blank by default

I tried it with the --script-args tls.servername=www2.ldawert.metakube.io option but the results stayed the same.

I'd also suggest adding your ssl_conf_command as a http-snippet

Also no success with this one; the behaviour stayed the same.

@azhozhin azhozhin mentioned this issue Jan 31, 2023
@razholio

razholio commented Jul 18, 2023

It's very frustrating that this is not in the docs, because they make it look like TLS 1.2 and 1.3 can be configured together:
https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers
and
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers
Additionally, the helm chart docs (which I can't seem to locate at the moment) also made it look configurable for both 1.2 and 1.3. Unfortunately, the OpenSSL project drastically changed how ciphersuites are configured at runtime between 1.2 and 1.3, and the nginx developers have not been shy with their disapproval.

I would love to see this at least mentioned in the docs somewhere: the 'ssl-ciphers' config directive only applies to TLS 1.2 (and earlier). For TLS 1.3 you have to use a generic http config directive called http-snippet that lets you drop in raw nginx config (and hope it's formatted correctly). This is what we have tested to work (from the ingress-nginx ConfigMap):

ssl-ciphers: ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256
http-snippet: ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256;

Note the ';' at the end of the http-snippet line.
It's been well over a year since we added this workaround, and the latest ingress-nginx still does not have a fix.
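
A quick way to confirm the workaround took effect is to force a single TLSv1.3 suite from the client (a sketch, assuming a curl built against OpenSSL 1.1.1+; the hostname is a placeholder):

$ # should succeed: suite is in the configured list
$ curl -sv --tlsv1.3 --tls13-ciphers TLS_AES_256_GCM_SHA384 https://<your-host>/ -o /dev/null
$ # should fail the handshake: suite was removed from the list
$ curl -sv --tlsv1.3 --tls13-ciphers TLS_CHACHA20_POLY1305_SHA256 https://<your-host>/ -o /dev/null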

@longwuyuan
Contributor

@razholio There are not enough resources, so sometimes things take too long. If you submit a PR fixing the docs, I am sure it will get appropriate attention.

@SpringHgui

TLSv1.2 is not supported either .......

@SpringHgui

only TLSv1 and TLSv1.3 work 😓

@longwuyuan
Contributor

Hi,

Can someone here say for sure that this method https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers is invalid for configuring both the TLS version and the ciphersuites?

I did a test, and I can see that out of the box this is what is offered:
[screenshot of the offered TLS protocols and ciphers]
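
For reference, the documented method boils down to these controller ConfigMap keys (a sketch; the key names are from the linked docs, and note that per the discussion above ssl-ciphers only affects TLSv1.2 and below):

data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"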

So I would assume that as of today, this is not a bug.

If my assessment about being able to configure TLS v1.3 and the ciphersuites via the configMap or the annotation is not true, please re-open with data on the current release of the controller and any other findings.

For now I will close the issue, as there are too many inactive open issues skewing the info on what we are tracking as action items.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to the /close above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
