
[nginx] Allow ELB SSL termination with HSTS and HTTPS redirect #314

Closed
foxylion opened this issue Feb 21, 2017 · 34 comments · Fixed by #365 or #5374

Comments

@foxylion
Contributor

We are terminating our SSL traffic at AWS ELB. This is supported by the nginx ingress.

But we also require the HSTS headers to always be set and HTTPS redirection to be enabled by default.
This can be a bit tricky, because nginx needs to know which traffic from the ELB was originally HTTPS and which was HTTP.

In our current setup we use a different port in nginx which has the only purpose of redirecting all traffic to the HTTPS equivalent.

server {
   listen 8000 proxy_protocol;
   server_tokens off;
   return 301 https://$host$request_uri;
}

This enables us to route all ELB traffic on port 80 to port 8000 of the ingress, and SSL-terminated traffic on port 443 to port 80 of the ingress. So far this did not require major changes to the ingress, but it no longer supports the HSTS headers. Currently we have modified the ingress controller to always set the header (removed the conditionals). It would be great if both features could be integrated into the controller by default; this would allow us to switch back to the default nginx.tmpl.

If this is a change which is welcome I will be happy to contribute some code, but will need some assistance.

@foxylion
Contributor Author

foxylion commented Mar 3, 2017

This is cool. But I think some minor things are still missing. It is still not possible to add HSTS headers when you're not using SSL termination in ingress (this would be an important feature in combination with ForceSSLRedirect).

Edit: I had a look into the code. There is one thing I would like to improve:
This "https redirect" feature works on a per-location basis. This means I have to ensure it is enabled for each ingress. Ingresses are controlled by our developers; if one forgets to add this option, traffic could be served in plain text.
Wouldn't it be a good idea to also support HTTPS redirect and HSTS in a global context?

@aledbf Could you reopen this?

@kylegato

kylegato commented Mar 28, 2017

When I add:

annotations:
  kubernetes.io/ingress.class: "nginx"
  ingress.kubernetes.io/force-ssl-redirect: "true"

To my Ingress config for a service, it just redirects in an endless loop, because we are doing SSL termination at the ELB and therefore passing traffic from the ELB to our Ingress controller via HTTP.

@foxylion is the above issue why you've had to do what you're doing? I may end up doing the same....
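The loop happens because with TCP-level (Layer 4) TLS termination the controller only ever sees plain HTTP, so every request looks insecure and triggers another redirect. A minimal sketch of the effective logic (illustration only, not the controller's actual template):

```nginx
# Sketch only: the ELB terminates TLS and forwards plain HTTP,
# so $scheme is always "http" and this condition matches on
# every request, including ones the client already sent over HTTPS.
if ($scheme != "https") {
    return 301 https://$host$request_uri;
}
```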

@foxylion
Contributor Author

foxylion commented Mar 28, 2017

@kylegato Back then the annotation did not exist, and since then I have not had time to actually test the new annotation, especially because it does not solve everything (the HSTS headers are missing).

For the moment I use the separate port and a modified template that always sets the HSTS header (I removed the if statements in the template).

@asmith60

@kylegato @foxylion I am having the same issue where SSL termination at the ELB + ingress.kubernetes.io/force-ssl-redirect: "true" results in an infinite redirect loop. I am on version 0.9.0-beta.3. Did you ever find a solution?

@kylegato

@asmith60 I am just going to rely on HSTS for now.

@foxylion
Contributor Author

@kylegato The main problem with that solution is that until the client has used HTTPS once, they can still use HTTP.

@kerin

kerin commented Jun 15, 2017

Has anyone managed to get the ingress.kubernetes.io/force-ssl-redirect annotation to work with SSL termination on an ELB? Whatever I do I end up with an infinite HTTPS->HTTPS redirect loop (with 0.9-beta.8), so like others I'm having to override and modify the nginx.conf template, making updates pretty painful.

@philipbjorge
Contributor

Does anyone have feedback on the preferred way to force HSTS for SSL terminated environments?
#917

@danielfm
Copy link

danielfm commented Aug 18, 2017

For those experiencing the redirect loop, make sure you enabled the HTTP backend protocol on the ingress Service via the corresponding annotations:

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...

This will cause the ELB to operate at Layer 7, and thus inject the X-Forwarded-* headers required for the redirect logic to work.


Also please note the redirect won't work if you enable Proxy Protocol as well; in that case the ELB would need to operate on TCP/SSL (Layer 4), so those headers won't be available to NGINX, causing the redirect to loop.
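In other words, once the ELB runs at Layer 7, the redirect can key off the forwarded protocol instead of the connection scheme. A minimal hand-written equivalent, assuming a trusted Layer-7 load balancer sets X-Forwarded-Proto (a sketch, not the controller's actual template):

```nginx
# Redirect only when the original client request was plain HTTP,
# as reported by the trusted Layer-7 load balancer in front.
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
```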

@asmith60

asmith60 commented Sep 7, 2017

@danielfm Your solution worked for me. Thanks 😄

@aledbf
Member

aledbf commented Sep 7, 2017

@joe-elliott

joe-elliott commented Oct 6, 2017

We are currently also running nginx ingress behind an ELB. The ELB is responsible for SSL termination and we are running into difficulties similar to the ones discussed in this thread.

The ingress.kubernetes.io/force-ssl-redirect annotation is currently working nicely to force the redirect.

Will 0.9.0 include a similar option to force HSTS even when nginx isn't terminating SSL?

EDIT:

For others researching this topic I was able to get the desired behavior by adding this setting to the ingress configmap:

http-snippet: 'add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;'

I believe that this option is first available in 0.9.0-beta.14. Adjust values as necessary.

@SharpEdgeMarshall

SharpEdgeMarshall commented Oct 27, 2017

So there is actually no way to have SSL termination and redirect with proxy protocol on an L4 ELB?

@mirath

mirath commented Jan 16, 2018

@aledbf Would adding an HSTS annotation be of interest to the project? I'm willing to work on a PR that implements this feature.

@aledbf
Member

aledbf commented Jan 16, 2018

Would adding an HSTS annotation be of interest to the project?

Yes

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 16, 2018
@sbrandtb

sbrandtb commented Apr 24, 2018

As far as I understand, part of this ticket is addressed by adding force-ssl-redirect to the configmap (or ingress).

However, HSTS does not work in such cases because nginx.tmpl explicitly requires that an SSL certificate be set:

            {{ if (and (not (empty $server.SSLCertificate)) $all.Cfg.HSTS) }}
            if ($scheme = https) {
            more_set_headers                        "Strict-Transport-Security: max-age={{ $all.Cfg.HSTSMaxAge }}{{ if $all.Cfg.HSTSIncludeSubdomains }}; includeSubDomains{{ end }}{{ if $all.Cfg.HSTSPreload }}; preload{{ end }}";
            }
            {{ end }}

I see two solutions here:

  • Introduce a force-hsts configuration flag and change the condition to

              {{ if (or $all.Cfg.ForceHSTS (and (not (empty $server.SSLCertificate)) $all.Cfg.HSTS)) }}

    However, we should also check $pass_access_scheme instead of $scheme, because it will be either the forwarded protocol, or the plain scheme if we're not behind a proxy, so it should always be correct.

    if ($pass_access_scheme = https) {

  • Alternatively, we could introduce a "we are behind SSL offloading" variable, because I suspect that having lots of force variables (like force-ssl-redirect and the proposed force-hsts) will become very confusing at some point.
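Putting the two pieces together, the patched template block would look roughly like this (force-hsts / ForceHSTS is the proposed flag, not an existing option):

```nginx
{{ if (or $all.Cfg.ForceHSTS (and (not (empty $server.SSLCertificate)) $all.Cfg.HSTS)) }}
# $pass_access_scheme is the forwarded protocol when behind a proxy,
# or the plain connection scheme otherwise.
if ($pass_access_scheme = https) {
    more_set_headers "Strict-Transport-Security: max-age={{ $all.Cfg.HSTSMaxAge }}{{ if $all.Cfg.HSTSIncludeSubdomains }}; includeSubDomains{{ end }}{{ if $all.Cfg.HSTSPreload }}; preload{{ end }}";
}
{{ end }}
```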

@mfrister
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 25, 2018
@foxylion
Contributor Author

foxylion commented Apr 25, 2018

Here is the summary of how we are doing TLS offloading currently (including HSTS headers):

We need a custom nginx.tmpl, otherwise the upstream services will receive the wrong X-Forwarded-* headers. In our case this is done by using a custom-built Docker image.

Our Dockerfile:

FROM quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0

# We use a patch file to add our customizations to the current nginx.tmpl
# If something is no longer patchable the patch call will fail
ADD nginx.tmpl.patch /etc/nginx/template/nginx.tmpl.patch
RUN patch /etc/nginx/template/nginx.tmpl < /etc/nginx/template/nginx.tmpl.patch

The patch file:

diff --git a/controller/service/nginx.tmpl b/controller/service/nginx.tmpl
index 6fefb5e..4e7f4c4 100644
--- a/controller/service/nginx.tmpl
+++ b/controller/service/nginx.tmpl
@@ -712,8 +720,8 @@ stream {
             proxy_set_header X-Real-IP              $the_real_ip;
             proxy_set_header X-Forwarded-For        $the_real_ip;
             proxy_set_header X-Forwarded-Host       $best_http_host;
-            proxy_set_header X-Forwarded-Port       $pass_port;
-            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
+            proxy_set_header X-Forwarded-Port       443;
+            proxy_set_header X-Forwarded-Proto      https;
             proxy_set_header X-Original-URI         $request_uri;
             proxy_set_header X-Scheme               $pass_access_scheme;

And finally our (simplified) configuration:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
data:
  use-proxy-protocol: 'true'
  http-snippet: |
    # redirect HTTP traffic to HTTPS
    server {
      listen 8000 proxy_protocol;
      server_tokens off;
      return 301 https://$host$request_uri;
    }
  server-snippet: |
    more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: '$((elb-certificate-arn))'
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 8000
  - name: https
    port: 443
    targetPort: 80
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: ingress-nginx
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
spec:
  replicas: 2
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: ingress-nginx
        k8s-addon: ingress-nginx.addons.k8s.io
    spec:
      terminationGracePeriodSeconds: 300
      containers:
      - name: ingress-nginx
        image: meisterplan/ingress-nginx-controller:$((image-tag))
        imagePullPolicy: Always
        ports:
          - containerPort: 80
            protocol: TCP
          - containerPort: 8000
            protocol: TCP
        lifecycle:
          preStop:
            exec:
              command: ["sleep", "15"]
        args:
        - /nginx-ingress-controller
        - --default-backend-service=infrastructure/ingress-error-pages
        - --configmap=infrastructure/ingress-nginx

I removed a lot of configuration (we use many of the additional features); these should be the relevant parts for TLS offloading.

I hope this helps. 😄

@shake76

shake76 commented Apr 2, 2019

Just to let you know that I'm still facing issues trying to redirect from HTTP to HTTPS with TLS termination. I'm using nginx-ingress controller version 0.23. I have already enabled use-forwarded-headers: "true" in my config and have the annotation nginx.ingress.kubernetes.io/force-ssl-redirect: "true" in my ingress rule. Does anyone know if I'm missing something else here?

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 28, 2020
@heidemn

heidemn commented Feb 24, 2020

/remove-lifecycle rotten

As @sbrandtb suggested, an option like "behind-tls-termination" would be useful. Then lots of HTTPS-related options would work as expected.

Or even better, there could be an option for nginx to trust the X-Forwarded-Proto header and use it to determine whether the client is using HTTPS for this request.
This header is set e.g. by AWS ELB:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/x-forwarded-headers.html
It could be called e.g.:

  • "trust-forwarded-headers" - maybe too generic
  • "trust-forwarded-proto" (specific to HTTP(S))

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Feb 24, 2020
@aledbf
Member

aledbf commented Apr 2, 2020

Changes:

  • Deprecation of ELB as LB
  • Use NLBs instead
  • Remove the need to use proxy-protocol
  • Default externalTrafficPolicy: Local

With these changes the source IP address is preserved, there are no issues with proxy protocol when using the TCP/UDP feature, and the redirect to HTTPS works out of the box.
Please test the deploy-tls-termination.yaml manifest and let me know of any issues.
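For reference, a minimal Service along those lines might look like this (a sketch using the standard in-tree AWS annotation; names are placeholders, adjust ports and selectors for your setup):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  annotations:
    # Provision an NLB instead of a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  # Preserve the client source IP without proxy protocol
  externalTrafficPolicy: Local
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```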
