auth-tls-pass-certificate-to-upstream does not work with https #3511

Closed
joshrosso opened this issue Dec 4, 2018 · 41 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

@joshrosso
Contributor

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version: 0.21.0

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-27T01:14:37Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

What happened:

When adding auth-tls-pass-certificate-to-upstream: true to an ingress resource, the client certificate passed to the ingress controller is not forwarded to the backend pod.

What you expected to happen:

The backend pod should receive the client certificate.

How to reproduce it (as minimally and precisely as possible):

  1. Start an HTTPS server expecting mTLS (a minimal Go sketch of such a backend appears after these reproduction steps).

  2. Create an ingress resource, such as the following, that points to the mTLS server's service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443
  3. Send a request to the ingress controller, such as the following:
# curl -L --cert 4_client/certs/localhost.cert.pem --key 4_client/private/localhost.key.pem https://172.17.0.11:443/hello -k

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.6</center>
</body>
</html>
  4. View the pod logs (assuming a Go app) to verify the missing client cert:
2018/12/04 17:42:29 http: TLS handshake error from 172.17.0.11:41922: tls: client didn't provide a certificate
  5. Send the same curl directly to the pod's service and verify mTLS succeeds:
# curl -L --cert 4_client/certs/localhost.cert.pem --key 4_client/private/localhost.key.pem https://10.102.202.95:443/hello -k

Hello World/ 
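
For anyone reproducing this, here is a minimal sketch of the kind of mTLS backend described in step 1. It is not the reporter's actual app; the listen address and certificate file paths are placeholders.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// CA bundle used to verify client certificates (placeholder path).
	caPEM, err := os.ReadFile("ca.cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello World/")
	})

	srv := &http.Server{
		Addr: ":443",
		TLSConfig: &tls.Config{
			// Require and verify a client certificate; a missing cert produces a
			// handshake error like the "client didn't provide a certificate" log above.
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  caPool,
		},
	}

	// Server certificate and key (placeholder paths).
	log.Fatal(srv.ListenAndServeTLS("server.cert.pem", "server.key.pem"))
}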

Anything else we need to know:

Unless I'm misunderstanding the annotation, I'd expect the client cert to be passed on to the upstream pod.

@joshrosso
Contributor Author

joshrosso commented Dec 4, 2018

Alright, after digging deeper, I'm finding the issue is more about the lack of standardization in how client certificates are passed in headers, rather than my initial theory that the nginx-ingress-controller isn't passing the client cert at all.

I've found nginx is passing the client cert to the backend pod in the Ssl-client-certificate header.

It seems that for projects like Envoy there's been a lot of discussion around how to accomplish this; in their case they went with x-forwarded-client-cert. Some answers on Stack Overflow suggest headers like x-ssl-cert.
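
For illustration, here is a rough sketch of how an upstream Go service could read such a header. The header name "ssl-client-cert" is an assumption (use whatever name your controller version actually sets), and the sketch assumes the value is the URL-escaped PEM produced by nginx's $ssl_client_escaped_cert.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// clientCertFromHeader parses the URL-escaped PEM certificate forwarded by
// the ingress controller. "ssl-client-cert" is an assumed header name.
func clientCertFromHeader(r *http.Request) (*x509.Certificate, error) {
	escaped := r.Header.Get("ssl-client-cert")
	if escaped == "" {
		return nil, errors.New("no forwarded client certificate header")
	}
	// Undo the escaping applied by $ssl_client_escaped_cert.
	pemText, err := url.QueryUnescape(escaped)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode([]byte(pemText))
	if block == nil {
		return nil, errors.New("forwarded header is not valid PEM")
	}
	return x509.ParseCertificate(block.Bytes)
}

func main() {
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		cert, err := clientCertFromHeader(r)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Fprintf(w, "client cert CN: %s\n", cert.Subject.CommonName)
	})
	// Plain HTTP behind the ingress in this sketch; adjust for an HTTPS backend.
	log.Fatal(http.ListenAndServe(":8080", nil))
}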

I suggest we provide the ability to specify the header in which the client cert will be forwarded. Something such as:

nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream-header: "x-forwarded-client-cert"

This would help ensure compatibility with upstream servers/pods.

Let me know your thoughts.

@aledbf
Member

aledbf commented Dec 4, 2018

I suggest we provide the ability to specify the header in which the client cert will be forwarded. Something such as:

That makes sense. We already do this for another header https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header
That said, this should be a global value instead of a new annotation, at least as a first step.

Edit: just in case, this does not mean the header will be compatible with Envoy (http://nginx.org/en/docs/http/ngx_http_ssl_module.html != https://www.envoyproxy.io/docs/envoy/latest/configuration/http_conn_man/headers#x-forwarded-client-cert)
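
(For context: Envoy's x-forwarded-client-cert value is a set of semicolon-delimited key=value elements such as Hash=...;Cert=..., with the certificate URL-encoded inside the Cert element, whereas nginx's $ssl_client_escaped_cert is simply the URL-encoded PEM certificate, so reusing the header name alone would not make the two formats interchangeable.)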

@joshrosso
Contributor Author

this should be a global value instead of a new annotation,

I can take a stab at this. Can you elaborate on the meaning of global value, @aledbf?

this does not means the header will be compatible with envoy

100% understood.

@aledbf
Member

aledbf commented Dec 4, 2018

Can you elaborate on the meaning of global value, @aledbf?

Sorry, we use a configmap that configures global values (https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/config/config.go#L431), with defaults at https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/config/config.go#L586

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 4, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 3, 2019
@baurmatt

Any update on this? :) I'd really like to see this implemented!

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 16, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 15, 2019
@baurmatt

Still interested! :)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 15, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 13, 2019
@baurmatt

Again, still interested! :)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 14, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 12, 2020
@baurmatt

Can't wait to get this! ;)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 13, 2020
@akashvyom

Did anyone figure this out? We have the same situation and had to spend a few days debugging before landing here.

@manpatel3107

Any update on this? We have the same situation.

@IoakeimSamarasGR

Hello,

I ran into this thread a couple of days ago.

It seems that there is a configuration snippet annotation (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet) that lets you define the HTTP header in which the client certificate will be inserted. The following annotation worked for me:

nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;

Of course, you can replace 'X-SSL-CERT' with whatever header name you prefer.
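
(Note that $ssl_client_escaped_cert is the PEM certificate URL-encoded, so the upstream has to URL-decode the header value before parsing it as PEM.)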

@rdoering

rdoering commented Jul 2, 2020

@IoakeimSamarasGR Could you please share your whole annotation?

I am using the following annotations without success:

annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "MYSERVICE"
      nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
      nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
      #nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
      #nginx.ingress.kubernetes.io/auth-tls-verify-client: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;

@IoakeimSamarasGR

IoakeimSamarasGR commented Jul 6, 2020

@rdoering Here are the annotations that worked for me:

annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-secret: default/{{ .Values.ingress.caTls.caSecretName }}
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
      proxy_ssl_name "mydemo.demo.com";
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: default/{{ .Values.ingress.caTls.caSecretName }}
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "1"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 4, 2020
@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 12, 2021
@rikatz
Contributor

rikatz commented May 4, 2021

/kind feature
/remove-lifecycle stale
/assign

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 4, 2021
@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 2, 2021
@rikatz
Contributor

rikatz commented Aug 6, 2021

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 6, 2021
@aceeric

aceeric commented Oct 21, 2021

To anyone with visibility into this: we're running the NGINX ingress controller 0.47.0 and specifying the following in the ingress YAML:

...
metadata:
  annotations:
    ...
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
...

The upstream (another nginx) is not receiving the cert in the $ssl_client_cert embedded variable. So:

kubectl -n kube-system exec -it rke-ingress-nginx-controller-46tln -- cat /etc/nginx/nginx.conf > nginx.conf

Then looking at the config file:

...
    server {
    ....
    # Pass the extracted client certificate to the backend

    # Allow websocket connections
    proxy_set_header  Upgrade $http_upgrade;
    ...
    }
...

In other words, it looks as if the ingress controller saw the annotation in the manifest to pass the cert, added a comment to the configuration file, but then didn't add any actual configuration directives. So, just wondering if this is a known defect, or if there is a known workaround. I've tried some of the suggestions in this issue without success so far. Thanks.

@bcessa

bcessa commented Oct 26, 2021

Based on inspecting the code, the client cert should be expected in $ssl_client_escaped_cert, but it will only be available if you are also providing a valid value for the auth-tls-secret annotation, specifically one that yields a CAFileName. The parameters expected from the secret object are:

type AuthSSLCert struct {
	// Secret contains the name of the secret this was fetched from
	Secret string `json:"secret"`
	// CAFileName contains the path to the secrets 'ca.crt'
	CAFileName string `json:"caFilename"`
	// CASHA contains the SHA1 hash of the 'ca.crt' or combinations of (tls.crt, tls.key, tls.crt) depending on certs in secret
	CASHA string `json:"caSha"`
	// CRLFileName contains the path to the secrets 'ca.crl'
	CRLFileName string `json:"crlFileName"`
	// CRLSHA contains the SHA1 hash of the 'ca.crl' file
	CRLSHA string `json:"crlSha"`
	// PemFileName contains the path to the secrets 'tls.crt' and 'tls.key'
	PemFileName string `json:"pemFilename"`
}

And the corresponding template snippet:

{{ if $server.CertificateAuth.PassCertToUpstream }}
proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
{{ end }}

cc. @aceeric

@asettouf

Same issue.

@iamNoah1
Contributor

/priority important-longterm

@k8s-ci-robot k8s-ci-robot added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Dec 15, 2021
@vijaysimhajoshi

vijaysimhajoshi commented Mar 10, 2022

The following annotation configuration works:

annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
  nginx.ingress.kubernetes.io/auth-tls-secret: default/ca-root-cert
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_ssl_name "hostname";
  nginx.ingress.kubernetes.io/ingress.allow-http: "false"
  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
  nginx.ingress.kubernetes.io/proxy-ssl-secret: default/ingress-cert-key-with-ca-root-cert
  nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
  nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "1"

As per the documentation (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-certificate-authentication):

nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName: Specifies a Secret with the certificate tls.crt, key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form "namespace/secretName".

@NxSoftware

@vijaysimhajoshi I may be wrong but that looks like it's configuring backend certificate verification rather than client certificate verification. This issue is about client certificate validation.

@paulers

paulers commented Jan 23, 2023

Is it straight up not possible to pass the PEM certificate upstream WITHOUT verifying it? I'm getting SSL handshake errors on a self-signed certificate, while all I'm looking to do is pass it through and verify on the backend. I've tried all kinds of the configurations above, to no avail.

@richshadman

I ran into this this week; it took me 3 days to figure out that nginx passes the cert in a non-standard, uncommon header (perhaps I missed that in the documentation somewhere?).

I was able to work around this with the configuration snippet suggested above; however, I agree that the best approach would be an additional annotation that lets you specify which header to pass the cert in, at least until the following is no longer a draft (or a similar proposal is approved and made official): https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-client-cert-field.

@tuxillo

tuxillo commented Sep 1, 2023

Same issue here. Are there any plans to add something like nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream-header: "x-forwarded-client-cert", as suggested by @joshrosso?

@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Aug 31, 2024
@tuxillo

tuxillo commented Sep 2, 2024

Sorry, I forgot about this one. Please don't close it.

@Sea-you

Sea-you commented Sep 10, 2024

The annotations that worked for Vault, which is also serving requests over TLS (in this case with a self-signed certificate on Vault's side):

    ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    nginx.ingress.kubernetes.io/auth-tls-secret: vault/vault-ingress-tls
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "off"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/proxy-ssl-secret: vault/vault-ingress-tls
    nginx.ingress.kubernetes.io/ssl-verify-client: "on"

In this case, I'm using the same certificate to authenticate to Vault as the certificate on my ingress.

The vault-ingress-tls secret must have the following keys:

tls.crt
tls.key
ca.crt

@longwuyuan
Contributor

Hi,

The annotations required to solve the original issue described by the creator have been posted twice.
Other issues related to mTLS have also been created that suggest mTLS works for those users.

There is no action item visible in this long-standing issue for the project to take up, because if this were broken, then the working configuration posted above would not have been possible.

It's been a long time since this issue was created, and the project has an acute shortage of resources, so the tally of open issues needs to stay closer to tracking real action items. There is also a drive to secure the controller by default, out of the box, while minimizing the effort to maintain/support features that are not implied/inherited from the Ingress API functionality. Implementing the Gateway API is another effort progressing in parallel.

So, since this issue does not track any action item and adds to the tally of open issues, I will close it for now. The original creator can post data from tests using the latest release of the controller if mTLS is not working. Please post data that can be analyzed for the problem, so that the effort required of readers (to reproduce or comment) is not ambiguous or unclear. Thanks.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
