Propagate FQDN for TLS verification with proto=https #362
Comments
Hi @deuch, this is what I understand so far:
The cert is for …
Hello, our Fabio instances are not exposed to the internet, so it's more like this:

Load Balancer (appliance) https://foo.com/bar --> Fabio (at least 4 instances) --> https://1.2.3.4/bar

We cannot use wildcards for our certificates (forbidden by security policy in a banking context). Fabio and the services are containers running on the same platform. For each application we deploy a Load Balancer (appliance), 4 Fabio instances on a dedicated overlay network, and the services (containers too) connected to that overlay network. We repeat this setup on the same platform for every application and environment (dev, int, uat ...).

So I cannot use a 10.0.0.0/8 wildcard certificate (it cannot be generated by our PKI), and it would break the multi-tenancy of the platform. We use a shared platform that runs containers for many applications. So, to be sure that a service is served by the right Fabio (or that Fabio serves the right service), it would be a good thing to check the CN of the backend certificate and not its IP.

Of course this is not the normal behaviour, and it should be an option for use cases like mine. The normal behaviour is to check the CN/SAN of the backend against the backend FQDN registered in Consul (in my case an IP). With containers, generating a certificate every time a container is created is too heavy an operation and is difficult to maintain (certificate revocation would be a nightmare ...). I don't think I'm the only one in this situation :)
Would the …
I'm not sure I understand the behaviour of this: with host=dst, what will the Host header be when connecting to the upstream? The upstream hostname/IP? Or the Host header that came in to Fabio?
For HTTPS, it isn't sufficient to set the … In essence, you want fabio to make the upstream request with the original hostname (e.g. …). fabio could either spoof the DNS lookup for that request, or try to establish the TCP connection first and then run the TLS handshake with the original server name (which circumvents the DNS lookup). This would then allow you to re-use the same cert on all upstream servers. Does that make sense?
Yes, it makes sense. In my use case, the TLS server name has to be the original hostname indeed. I do not know what the best approach is: DNS spoofing, or TCP connection first with the TLS handshake after. Which is the most secure?
Hello, did you have time to try anything for this use case?
not yet. sorry. |
Hello,
We're using Fabio 1.5.2 and we host many APIs with it.
We have a rule that lets us serve an API from Fabio based on FQDN and API name:
myFQDN.society.com/apiname/version
For eg:
analysis.mycompany.com/risk/v1
analysis.mycompany.com/risk/v1.2
analysis.mycompany.com/calculation/v1
computation.mycompany.com/schedule/v1
To avoid generating a certificate for each new version or FQDN, we chose this rule.
To be served by Fabio over full HTTPS (HTTPS to Fabio and HTTPS to the backend), we need to set proto=https so that the right backend can be chosen based on the full URL (with path and context). SNI doesn't work in our case because the path is not taken into account.
Our backends are containers connected to the same overlay network as Fabio, so we register the overlay network IP in Consul.
It works, but we have to add tlsskipverify=true to make it work. Indeed, with TLS verification enabled, it fails because the certificate doesn't have the container's IP in its SAN list. And it becomes difficult to regenerate certificates every time we scale up/down or redeploy the service in Docker.
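For reference, this is roughly how such a route is registered today via fabio's `urlprefix-` Consul tag syntax with the options mentioned above; the service name, address, and port are made-up values for illustration:

```json
{
  "service": {
    "name": "risk-v1",
    "address": "10.0.1.7",
    "port": 8443,
    "tags": [
      "urlprefix-analysis.mycompany.com/risk/v1 proto=https tlsskipverify=true"
    ]
  }
}
```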
So, is it possible, in a sort of passthrough way, to verify that the backend certificate carries the same FQDN that was used to reach Fabio? In fact we are using the same certificate for Fabio and the backend (we cannot use the PKI features of Vault because of security restrictions, but certificates are stored as secrets).
So the idea is to have a new parameter that tells Fabio to use the source FQDN for TLS verification of the backend, instead of the backend's own IP/name. And not for all routes, but only those which need this behaviour (a tag option in Consul?).
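To make the proposal concrete, such a per-route switch could look like a Consul tag option along these lines; `tlsservername=` is purely hypothetical and does not exist in fabio, it only sketches the requested behaviour:

```
urlprefix-analysis.mycompany.com/risk/v1 proto=https tlsservername=analysis.mycompany.com
```

Fabio would then still dial the IP registered in Consul, but run certificate verification against the given name instead of the dial address.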
Maybe this is already possible, but I only saw a global parameter (proxy.tls.header.value ???) and not a per-source one.
Thanks for reading me :)