backend-protocol https does not work with tlsv1.3 #8257

Closed
bh-tt opened this issue Feb 18, 2022 · 19 comments

Labels
kind/bug Categorizes issue or PR as related to a bug. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@bh-tt

bh-tt commented Feb 18, 2022

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
NGINX Ingress controller
Release: v1.1.1
Build: a17181e
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:

  • OS (e.g. from /etc/os-release): "Debian GNU/Linux 11 (bullseye)"

  • Kernel (e.g. uname -a): 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64 GNU/Linux

  • Install tools: kubeadm

  • Basic cluster related info:
    NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
    k8s-intra-rd-master0 Ready control-plane,master 174d v1.23.3 10.247.9.68 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12
    k8s-intra-rd-master1 Ready control-plane,master 174d v1.23.3 10.247.9.69 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12
    k8s-intra-rd-master2 Ready control-plane,master 174d v1.23.3 10.247.9.70 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12
    k8s-intra-rd-node0 Ready 174d v1.23.3 10.247.9.72 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12
    k8s-intra-rd-node1 Ready 174d v1.23.3 10.247.9.73 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12
    k8s-intra-rd-node2 Ready 174d v1.23.3 10.247.9.74 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12
    k8s-intra-rd-node3 Ready 174d v1.23.3 10.247.9.75 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12
    k8s-intra-rd-node4 Ready 174d v1.23.3 10.247.9.76 Debian GNU/Linux 11 (bullseye) 5.10.0-11-amd64 containerd://1.4.12

  • How was the ingress-nginx-controller installed:

  • ingress-nginx ingress-nginx 14 2022-02-07 16:21:08.741712029 +0100 CET deployed ingress-nginx-4.0.17 1.1.1

  • Current State of the controller:

    • kubectl describe ingressclasses
    • Name: nginx
      Labels: app.kubernetes.io/component=controller
      app.kubernetes.io/instance=ingress-nginx
      app.kubernetes.io/managed-by=Helm
      app.kubernetes.io/name=ingress-nginx
      app.kubernetes.io/part-of=ingress-nginx
      app.kubernetes.io/version=1.1.1
      helm.sh/chart=ingress-nginx-4.0.17
      Annotations: meta.helm.sh/release-name: ingress-nginx
      meta.helm.sh/release-namespace: ingress-nginx
      Controller: k8s.io/ingress-nginx
      Events:
    • kubectl -n <ingresscontrollernamespace> get all -A -o wide
    • NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
      pod/ingress-nginx-controller-7445b7d6dc-z4mvd 1/1 Running 0 10d 10.244.3.49 k8s-intra-rd-node0

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx-controller LoadBalancer 10.106.136.24 10.247.9.80 80:30308/TCP,443:30065/TCP 174d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 10.106.134.166 443/TCP 174d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/ingress-nginx-controller 1/1 1 1 174d controller k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/ingress-nginx-controller-54bfb9bb 0 0 0 84d controller k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=54bfb9bb
replicaset.apps/ingress-nginx-controller-568764d844 0 0 0 35d controller k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=568764d844
replicaset.apps/ingress-nginx-controller-5c8d66c76d 0 0 0 112d controller k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5c8d66c76d
replicaset.apps/ingress-nginx-controller-7445b7d6dc 1 1 1 10d controller k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7445b7d6dc
replicaset.apps/ingress-nginx-controller-77f4468d76 0 0 0 86d controller k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=77f4468d76
replicaset.apps/ingress-nginx-controller-fd7bb8d66 0 0 0 174d controller k8s.gcr.io/ingress-nginx/controller:v1.0.0@sha256:0851b34f69f69352bf168e6ccf30e1e20714a264ab1ecd1933e4d8c0fc3215c6 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=fd7bb8d66

  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
  • Name: ingress-nginx-controller-7445b7d6dc-z4mvd
    Namespace: ingress-nginx
    Priority: 0
    Node: k8s-intra-rd-node0/10.247.9.72
    Start Time: Mon, 07 Feb 2022 17:01:53 +0100
    Labels: app.kubernetes.io/component=controller
    app.kubernetes.io/instance=ingress-nginx
    app.kubernetes.io/name=ingress-nginx
    pod-template-hash=7445b7d6dc
    Annotations:
    Status: Running
    IP: 10.244.3.49
    IPs:
    IP: 10.244.3.49
    Controlled By: ReplicaSet/ingress-nginx-controller-7445b7d6dc
    Containers:
    controller:
    Container ID: containerd://a84c1fbcde1a35054a45a2df900a806dbc763aaa9967f21a967c67474a5b03eb
    Image: k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
    Image ID: k8s.gcr.io/ingress-nginx/controller@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
    Ports: 80/TCP, 443/TCP, 8443/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP
    Args:
    /nginx-ingress-controller
    --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
    --election-id=ingress-controller-leader
    --controller-class=k8s.io/ingress-nginx
    --ingress-class=nginx
    --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
    --validating-webhook=:8443
    --validating-webhook-certificate=/usr/local/certificates/cert
    --validating-webhook-key=/usr/local/certificates/key
    State: Running
    Started: Mon, 07 Feb 2022 17:02:04 +0100
    Ready: True
    Restart Count: 0
    Requests:
    cpu: 100m
    memory: 90Mi
    Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
    POD_NAME: ingress-nginx-controller-7445b7d6dc-z4mvd (v1:metadata.name)
    POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
    LD_PRELOAD: /usr/local/lib/libmimalloc.so
    Mounts:
    /usr/local/certificates/ from webhook-cert (ro)
    /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9g28q (ro)
    Conditions:
    Type Status
    Initialized True
    Ready True
    ContainersReady True
    PodScheduled True
    Volumes:
    webhook-cert:
    Type: Secret (a volume populated by a Secret)
    SecretName: ingress-nginx-admission
    Optional: false
    kube-api-access-9g28q:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI: true
    QoS Class: Burstable
    Node-Selectors: kubernetes.io/os=linux
    Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
    Type Reason Age From Message

Normal RELOAD 37m (x9 over 10d) nginx-ingress-controller NGINX reload triggered due to a change in configuration

  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
    Name: ingress-nginx-controller
    Namespace: ingress-nginx
    Labels: app.kubernetes.io/component=controller
    app.kubernetes.io/instance=ingress-nginx
    app.kubernetes.io/managed-by=Helm
    app.kubernetes.io/name=ingress-nginx
    app.kubernetes.io/part-of=ingress-nginx
    app.kubernetes.io/version=1.1.1
    helm.sh/chart=ingress-nginx-4.0.17
    Annotations: meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
    Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
    Type: LoadBalancer
    IP Family Policy: SingleStack
    IP Families: IPv4
    IP: 10.106.136.24
    IPs: 10.106.136.24
    IP: 10.247.9.80
    LoadBalancer Ingress: 10.247.9.80
    Port: http 80/TCP
    TargetPort: http/TCP
    NodePort: http 30308/TCP
    Endpoints: 10.244.3.49:80
    Port: https 443/TCP
    TargetPort: https/TCP
    NodePort: https 30065/TCP
    Endpoints: 10.244.3.49:443
    Session Affinity: None
    External Traffic Policy: Cluster
    Events:

  • Current state of ingress object, if applicable:

    • kubectl -n <appnamespace> get all,ing -o wide
    • kubectl -n <appnamespace> describe ing <ingressname>
      Name:             intranetws
      Labels:           app.kubernetes.io/instance=intranetws
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=intranetws
                        helm.sh/chart=webservice-1.0.31
      Namespace:        intranet
      Address:          10.247.9.80
      Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" is forbidden: User "bh" cannot get resource "endpoints" in API group "" in the namespace "kube-system">)
      TLS:
        intra-rd-ingress terminates intranetws.k8s-intra-rd.local
      Rules:
        Host                           Path  Backends
        ----                           ----  --------
        intranetws.k8s-intra-rd.local
                                       /   intranetws:443 (10.244.2.250:8081,10.244.3.4:8081)
      Annotations:                     meta.helm.sh/release-name: intranetws
                                       meta.helm.sh/release-namespace: intranet
                                       nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      Events:
        Type    Reason  Age                 From                      Message
        ----    ------  ----                ----                      -------
        Normal  Sync    51m (x4 over 102m)  nginx-ingress-controller  Scheduled for sync

    • If applicable, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
      • Trying 10.247.9.80:443...
  • Connected to intranetws.k8s-intra-rd.local (10.247.9.80) port 443 (#0)
  • ALPN, offering h2
  • ALPN, offering http/1.1
  • successfully set certificate verify locations:
  • CAfile: /etc/ssl/certs/ca-certificates.crt
  • CApath: /etc/ssl/certs
  • TLSv1.3 (OUT), TLS handshake, Client hello (1):
  • TLSv1.3 (IN), TLS handshake, Server hello (2):
  • TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
  • TLSv1.3 (IN), TLS handshake, Certificate (11):
  • TLSv1.3 (IN), TLS handshake, CERT verify (15):
  • TLSv1.3 (IN), TLS handshake, Finished (20):
  • TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
  • TLSv1.3 (OUT), TLS handshake, Finished (20):
  • SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
  • ALPN, server accepted to use h2
  • Server certificate:
  • subject: C=NL; ST=ZH; L=Rotterdam; O= BV; OU=ingress; CN=k8s-intra-rd.local; emailAddress=root@.com
  • start date: Nov 10 12:46:30 2021 GMT
  • expire date: Nov 10 12:46:30 2022 GMT
  • subjectAltName: host "intranetws.k8s-intra-rd.local" matched cert's "*.k8s-intra-rd.local"
  • issuer: C=NL; ST=ZH; O= BV; OU=IT; CN= Server CA X2
  • SSL certificate verify ok.
  • Using HTTP2, server supports multi-use
  • Connection state changed (HTTP/2 confirmed)
  • Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
  • Using Stream ID: 1 (easy handle 0x556254899560)

GET /intranetws/ HTTP/2
Host: intranetws.k8s-intra-rd.local
user-agent: curl/7.74.0
accept: */*

  • TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
  • TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
  • old SSL session ID is stale, removing
  • Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
    < HTTP/2 502
    < date: Fri, 18 Feb 2022 10:33:26 GMT
    < content-type: text/html
    < content-length: 150
    < strict-transport-security: max-age=31536000
    < cache-control: no-cache, no-store, must-revalidate
    < pragma: no-cache
    < referrer-policy: no-referrer
    < x-content-type-options: nosniff
    < x-frame-options: sameorigin
    < x-xss-protection: 1; mode=block
    <
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host intranetws.k8s-intra-rd.local left intact
  • Others:
    • Any other related information like ;
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
        Name: ingress-nginx-controller
        Namespace: ingress-nginx
        Labels: app.kubernetes.io/component=controller
        app.kubernetes.io/instance=ingress-nginx
        app.kubernetes.io/managed-by=Helm
        app.kubernetes.io/name=ingress-nginx
        app.kubernetes.io/part-of=ingress-nginx
        app.kubernetes.io/version=1.1.1
        helm.sh/chart=ingress-nginx-4.0.17
        Annotations: meta.helm.sh/release-name: ingress-nginx
        meta.helm.sh/release-namespace: ingress-nginx

Data
====
add-headers:
----
ingress-nginx/custom-headers

allow-snippet-annotations:
----
true

client-body-buffer-size:
----
64k

disable-access-log:
----
true

ssl-protocols:
----
TLSv1.3

BinaryData
====

Events:
Type Reason Age From Message


Normal UPDATE 42m nginx-ingress-controller ConfigMap ingress-nginx/ingress-nginx-controller

What happened:
We have webservices with self-signed certificates that we would like to access through ingress-nginx. When TLSv1.2 is enabled on the webservices everything works; however, when it is disabled, the following error occurs:

2022/02/18 10:33:26 [error] 2188#2188: *31100206 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 10.244.3.1, server: intranetws.k8s-intra-rd.local, request: "GET /intranetws/ HTTP/2.0", upstream: "https://10.244.2.250:8081/intranetws/", host: "intranetws.k8s-intra-rd.local"

This happens even when ssl-protocols is set to TLSv1.3 only in the ingress controller ConfigMap. It appears the controller connects to the upstream with TLSv1.2 regardless of these settings.
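
For reference, this is roughly the ConfigMap entry in question (a sketch matching the describe output above; as far as I understand, ssl-protocols only governs the TLS listener towards clients, not the connection the controller makes to the upstream):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Client-facing TLS versions accepted by the controller.
  ssl-protocols: "TLSv1.3"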

What you expected to happen:
We expected ingress-nginx to use TLSv1.3 for the upstream connection when the application supports it and the controller is configured to use only TLSv1.3.

How to reproduce it:
I'm assuming this can be reproduced using any application which supports only TLSv1.3, but I have not yet tried it.

Anything else we need to know:

@bh-tt bh-tt added the kind/bug Categorizes issue or PR as related to a bug. label Feb 18, 2022
@k8s-ci-robot
Contributor

@bh-tt: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Feb 18, 2022
@longwuyuan
Contributor

I am reading https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/, and if using TLS 1.3 means disabling TLS 1.2, then backward compatibility breaks.

I am not a developer, so a developer needs to comment.

@rikatz
Contributor

rikatz commented Feb 20, 2022

As far as I remember, TLS v1.3 introduces a number of compatibility changes, which may mean it cannot co-exist with TLS v1.2. I might be wrong and need to re-read about it.

Have you tried setting this on your specific Ingress:

nginx.ingress.kubernetes.io/proxy-ssl-protocols: TLSv1.3 

?
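
For clarity, a minimal sketch of how that would sit next to the existing annotation (illustrative only):

nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/proxy-ssl-protocols: TLSv1.3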

@bh-tt
Author

bh-tt commented Feb 21, 2022

The same error happens (nginx returns 502):
2022/02/21 10:18:37 [error] 849#849: *5942535 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 10.244.3.0, server: intranetws.k8s-intra-rd.local, request: "GET / HTTP/2.0", upstream: "https://10.244.7.109:8081/", host: "intranetws.k8s-intra-rd.local"
ingress:

Name:             intranetws
Labels:           app.kubernetes.io/instance=intranetws
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=intranetws
                  helm.sh/chart=webservice-1.0.33
Namespace:        intranet
Address:          10.247.9.80
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" is forbidden: User "bh" cannot get resource "endpoints" in API group "" in the namespace "kube-system">)
TLS:
  intra-rd-ingress terminates intranetws.k8s-intra-rd.local
Rules:
  Host                           Path  Backends
  ----                           ----  --------
  intranetws.k8s-intra-rd.local  
                                 /   intranetws:443 (10.244.2.241:8081,10.244.7.109:8081)
Annotations:                     meta.helm.sh/release-name: intranetws
                                 meta.helm.sh/release-namespace: intranet
                                 nginx.ingress.kubernetes.io/backend-protocol: HTTPS
                                 nginx.ingress.kubernetes.io/proxy-ssl-protocols: TLSv1.3
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    12m (x3 over 41m)  nginx-ingress-controller  Scheduled for sync

I have confirmed I can access the application itself from the pod (using kubectl exec) and via the service. 8081 is the TLS port of the application.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 22, 2022

@vanimi

vanimi commented May 31, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 31, 2022
@chancez
Member

chancez commented Aug 24, 2022

I'm also hitting this same issue with a GRPCS backend, and nginx.ingress.kubernetes.io/proxy-ssl-protocols: "TLSv1.3" does not resolve it.

@chancez
Member

chancez commented Aug 24, 2022

It might be that the ciphers are incorrect:

#8507 (comment)

@chancez
Member

chancez commented Aug 24, 2022

#7084 might also be related. It seems that even though I set proxy-ssl-ciphers, it's probably not doing anything, since that annotation is only applied when proxy-ssl-secret is also set.
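
In other words, something along these lines would presumably be needed on the Ingress for the proxy-ssl-* annotations to take effect (a sketch; the Secret name is hypothetical and would have to contain the backend CA as ca.crt):

# proxy-ssl-* settings are reportedly only rendered into nginx.conf when proxy-ssl-secret is set (see #7084)
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/proxy-ssl-secret: default/backend-ca   # hypothetical Secret containing ca.crt
nginx.ingress.kubernetes.io/proxy-ssl-protocols: TLSv1.3
nginx.ingress.kubernetes.io/proxy-ssl-ciphers: HIGH:!aNULL:!MD5    # placeholder cipher list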

@chancez
Member

chancez commented Aug 24, 2022

For gRPC it might also be that grpc_ssl_ciphers isn't being set, but even when I try configuring it via a server-snippet/configuration-snippet, it doesn't change anything.
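
For the record, this is the kind of snippet I mean (a sketch only, assuming snippet annotations are allowed; grpc_ssl_protocols and grpc_ssl_ciphers are the gRPC-module counterparts of the proxy_ssl_* directives):

nginx.ingress.kubernetes.io/backend-protocol: GRPCS
nginx.ingress.kubernetes.io/configuration-snippet: |
  grpc_ssl_protocols TLSv1.3;
  grpc_ssl_ciphers HIGH:!aNULL:!MD5;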

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 22, 2022
@hostettler

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 25, 2023
@valentin2105

nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_ssl_protocols TLSv1.3;

This works.
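
Expanded into a full Ingress for illustration (names reused from earlier in this thread; it requires snippet annotations to be enabled on the controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: intranetws
  namespace: intranet
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # Force the controller-to-upstream connection to TLSv1.3.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_protocols TLSv1.3;
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - intranetws.k8s-intra-rd.local
      secretName: intra-rd-ingress
  rules:
    - host: intranetws.k8s-intra-rd.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: intranetws
                port:
                  number: 443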

@longwuyuan
Contributor

  • The original issue description is badly formatted, and it is hard to pick out the small required details from it.

  • There is a post reporting that setting backend-protocol to HTTPS and using a snippet to set TLSv1.3 worked.

  • There has been no traction on this issue for months.

  • There are too many open issues that are inactive and do not provide an action item for the project to track.

So I am closing this issue for now. The original creator can re-open it if data is posted here that shows a problem in the controller or implies an action item for the project.

thanks

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.


@bh-tt
Author

bh-tt commented Apr 30, 2024

That's fine; we migrated to Istio quite a while ago, and it supports all of this just fine.

@Tommyf

Tommyf commented May 30, 2024

Could this be re-opened?

Yes, it works with the configuration-snippet, but snippet annotations were recently disabled by default (#10393), and enabling them is not advised due to CVE-2021-25742.

This all means that in order to follow good security practice (using TLSv1.3), you have to bypass the CVE mitigation by enabling, and using, configuration snippets.
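
Concretely, that currently means keeping something like this in the controller ConfigMap (a sketch mirroring the ConfigMap shown earlier in this issue), which is exactly the setting the CVE guidance recommends against:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Needed for the configuration-snippet workaround; weakens the CVE-2021-25742 mitigation.
  allow-snippet-annotations: "true"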

It would be ideal if nginx.ingress.kubernetes.io/proxy-ssl-protocols: TLSv1.3 worked as expected.

This issue was mentioned briefly here, but then someone mentioned that there "are open issues" relating to this annotation. As best as I can figure out, this issue is the only one dealing with it, and it has been closed.

@longwuyuan
Contributor

@Tommyf can you paste all the details (kubectl commands, curl commands, and any other relevant commands) from a kind/minikube cluster with backend-protocol set to HTTPS and the proxy-ssl-protocols annotation set to TLSv1.3?
