auth-tls-pass-certificate-to-upstream does not work with https #3511
Alright, after digging deeper, I'm finding the issue to be more about the lack of standardization around passing client certificates in headers than about my initial theory that the nginx-ingress-controller was not passing the client cert at all. I've found nginx is passing the client cert to the backend pod, just in a non-standard header. For projects like Envoy there has been a lot of discussion around how to accomplish this; in their case they went with x-forwarded-client-cert. I suggest we provide the ability to specify the header key in which the client cert will be forwarded, to ensure compatibility with upstream servers / pods. Let me know your thoughts.
That makes sense. We already do this for another header: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header Edit: just in case, this does not mean the header will be compatible with Envoy (http://nginx.org/en/docs/http/ngx_http_ssl_module.html != https://www.envoyproxy.io/docs/envoy/latest/configuration/http_conn_man/headers#x-forwarded-client-cert)
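For reference, that existing option is set through the controller's ConfigMap. A minimal sketch, assuming a standard installation (the ConfigMap name and namespace depend on how the controller was deployed):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # installation-specific
  namespace: ingress-nginx         # installation-specific
data:
  # existing option: the header that carries the original client IP
  forwarded-for-header: "X-Forwarded-For"

The proposal above would presumably add an analogous key naming the header that carries the client certificate.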
I can take a stab at this. Can you elaborate on the meaning of "global value", @aledbf?
100% understood.
Sorry, we use a configmap that configures global values https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/config/config.go#L431 with defaults https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/config/config.go#L586
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Any update on this? :) I'd really like to see this implemented! /remove-lifecycle rotten
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Still interested! :) /remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Again, still interested! :) /remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Can't wait to get this! ;) /remove-lifecycle stale
Did anyone figure this out? We have the same situation and had to spend a few days debugging before landing here.
Any update on this? We have the same situation.
Hello, I ran into this thread a couple of days ago. It seems there is a configuration-snippet annotation (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet) that lets you define the HTTP header in which the client certificate will be inserted; an annotation along those lines worked for me. Of course, you can replace 'X-SSL-CERT' with the name of your desired header.
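The snippet itself was not preserved in this thread; a minimal sketch of the kind of annotation being described, assuming the header name X-SSL-CERT and nginx's built-in $ssl_client_escaped_cert variable (the URL-encoded PEM client certificate, available once the controller requests a client certificate via the auth-tls-* annotations):

nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;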
@IoakeimSamarasGR Could you please share your whole annotation? I am using a similar annotation without success.
@rdoering Here are the annotations that worked for me:
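The exact list was likewise not preserved; based on the rest of this thread, a working combination typically looks something like the following (the secret reference and header name are placeholders, not the commenter's actual values):

metadata:
  annotations:
    # CA bundle used to verify the client certificate (namespace/name is a placeholder)
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # ask the controller to forward the client certificate to the backend
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    # additionally expose the certificate under a header the backend expects
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;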
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/kind feature
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
Any/all with visibility into this: we're running the NGINX ingress controller 0.47.0, with the annotation to pass the client certificate to the upstream specified in the ingress yaml. The upstream (another nginx) is not receiving the cert in the expected header. Looking at the generated config file, it looks as if the ingress controller saw the annotation in the manifest to pass the cert and added a comment to the configuration file, but then didn't add any actual configuration directives. So, just wondering if this is a known defect, or if there is a known workaround. I've tried some of the suggestions in this issue without success so far. Thanks.
Based on inspecting the code, the client cert forwarding is tied to the following struct and to the template lines referenced below:

type AuthSSLCert struct {
// Secret contains the name of the secret this was fetched from
Secret string `json:"secret"`
// CAFileName contains the path to the secrets 'ca.crt'
CAFileName string `json:"caFilename"`
// CASHA contains the SHA1 hash of the 'ca.crt' or combinations of (tls.crt, tls.key, tls.crt) depending on certs in secret
CASHA string `json:"caSha"`
// CRLFileName contains the path to the secrets 'ca.crl'
CRLFileName string `json:"crlFileName"`
// CRLSHA contains the SHA1 hash of the 'ca.crl' file
CRLSHA string `json:"crlSha"`
// PemFileName contains the path to the secrets 'tls.crt' and 'tls.key'
PemFileName string `json:"pemFilename"`
}
(See ingress-nginx/rootfs/etc/nginx/template/nginx.tmpl, lines 1057 to 1058 at commit 7d5452d.)
cc @aceeric
Same issue.
/priority important-longterm
The following annotation configuration works: nginx.ingress.kubernetes.io/proxy-ssl-secret. From the docs: "Specifies a Secret with the certificate tls.crt, key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form 'namespace/secretName'."
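In an Ingress manifest, that annotation sits roughly like this (the secret reference is a placeholder); see the follow-up comment below about what it actually configures:

metadata:
  annotations:
    # certificate/key (tls.crt, tls.key) the controller presents to the proxied
    # HTTPS backend, plus the CA (ca.crt) used to verify that backend
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/backend-client-cert"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"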
@vijaysimhajoshi I may be wrong, but that looks like it's configuring backend certificate verification rather than client certificate verification. This issue is about client certificate validation.
Is it straight up not possible to pass the PEM certificate upstream WITHOUT verifying it? I'm getting SSL handshake errors on a self-signed certificate, while all I'm looking to do is pass it through and verify on the backend. I've tried all kinds of configurations above to no avail.
I ran into this this week. It took me three days to figure out that nginx passes the cert on a non-standard, uncommon header (perhaps I missed that in the documentation somewhere?). I was able to work around this with the configuration snippet suggested above; however, I agree that the best approach would be an additional annotation that lets you specify which header to pass the cert on, at least until the following is no longer a draft (or a similar proposal is approved and made official): https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-client-cert-field.
Same issue here. Are there any plans to add something like this?
This issue has not been updated in over 1 year, and should be re-triaged. You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/ /remove-triage accepted
sorry, forgot about this one. please don't close it |
The following annotations worked for a Vault backend that is itself serving requests over TLS (the certificate on the Vault side is self-signed). In this case, I'm using the same certificate to authenticate to Vault as the certificate on my ingress.
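The annotation list itself was lost above; a sketch of a setup matching that description, where every name and the exact proxy-ssl/backend-protocol choices are assumptions rather than the commenter's actual values:

metadata:
  annotations:
    # mTLS between the external client and the ingress
    nginx.ingress.kubernetes.io/auth-tls-secret: "vault/ca-secret"            # placeholder
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    # re-encrypt to the Vault backend, which serves its own (self-signed) TLS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "vault/vault-client-cert"   # placeholder
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"                       # backend cert is self-signed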
Hi. The annotations required to solve the original issue described by the creator have been posted twice in this thread. There is no action item visible in this long-standing issue for the project to take up, because if the feature were broken, the latest successful post above would not have been possible. It has been a long time since this issue was created, the project has an acute shortage of resources, and the tally of open issues needs to stay closer to tracking real action items. There is also a drive to secure the controller by default out of the box while minimizing the effort spent maintaining/supporting features that are not implied/inherited from the Ingress-API functionality, and implementing the Gateway-API is another effort progressing in parallel. Since this issue does not track any action item and adds to the tally of open issues, I will close it for now. The original creator of the issue can post data from tests using the latest release of the controller if mTLS is not working. Please post data that can be analyzed for the problem, so that the effort required of readers to reproduce or comment is not ambiguous or unclear. Thanks. /close
@longwuyuan: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version:
0.21.0
Kubernetes version:
What happened:
When adding

auth-tls-pass-certificate-to-upstream: true

to an ingress resource, the client certificate passed to the ingress controller is not forwarded to the backend pod.

What you expected to happen:
The backend pod should receive the client certificate.
How to reproduce it (as minimally and precisely as possible):
Start an HTTPS server expecting mTLS
Create an ingress resource, such as the following, that points to the mTLS server's service (a sketch is included at the end of this report)
curl directly to the pod's service and verify mTLS succeeds

Anything else we need to know:

Unless I'm misunderstanding the annotation, I'd expect the client cert to be passed on to the upstream pod.
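The manifest from the "such as the following" step above was not preserved; a minimal sketch of an ingress of that shape, with host, service, and secret names as placeholders (written against the current networking.k8s.io/v1 API, which postdates the 0.21.0 controller in this report):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-example                                                      # placeholder
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"      # CA used to verify clients
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts: ["mtls.example.com"]
      secretName: mtls-example-tls
  rules:
    - host: mtls.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mtls-backend
                port:
                  number: 443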