
Support offloading SSL to the proxy service #796

Merged: 1 commit, Aug 10, 2018

Conversation

pcfens
Contributor

@pcfens pcfens commented Jul 25, 2018

Rather than always having different target ports for the HTTP and HTTPS frontends, setting proxy.https.type to offload tells the service to route both ports 443 and 80 to port 80 on the backend.

For this to work, annotations need to be added to proxy.service.annotations to attach certificates to the load balancer.
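As a sketch of what this looks like in a values file: the offload setting comes from this PR's description, and the AWS ELB annotation names are taken from the example discussed later in this thread (the certificate ARN is a placeholder):

```yaml
proxy:
  https:
    enabled: true
    # Route both 443 and 80 to port 80 on the backend; TLS terminates on the LB
    type: offload
  service:
    annotations:
      # Attach an ACM certificate to the load balancer (placeholder ARN)
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567891011:certificate/uuid"
      # Use TCP on the backend so websockets keep working
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      # Terminate SSL only on the https port
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
```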

@betatim
Member

betatim commented Jul 26, 2018

Is this a common enough use case that we want to take on supporting it?

If yes, can you add some documentation in the HTTPS section of the guide on when and how you'd use this feature?


Fixes jupyterhub#675
@pcfens
Contributor Author

pcfens commented Jul 26, 2018

I don't know how widely it'd be used, but I know it's a common use case for us, especially when testing (the Let's Encrypt duplicate certificate rate limit keeps us at 5 iterations per week otherwise).

The approach I took here is designed to have as small an impact as possible, which should also keep maintenance to a minimum going forward.

I've updated the documentation in the latest push too. The original issue (#675) has a complete example if it helps figure out next steps.

@minrk
Member

minrk commented Aug 10, 2018

Thanks! This looks good to me now. Well documented.

@minrk minrk merged commit 2b8364b into jupyterhub:master Aug 10, 2018
@manics manics mentioned this pull request Aug 15, 2018
@pcfens pcfens deleted the offload-ssl branch December 21, 2018 02:44
@consideRatio
Member

@pcfens @betatim @minrk - I have never understood why proxy.https.type: offload was helpful, but at this point in time I'm blocked in another PR (#1811) without understanding this better.

I look at the provided example and wonder what would happen if...

```diff
 proxy:
   https:
-    enabled: true
-    type: offload
+    enabled: false
   service:
     annotations:
       # Certificate ARN
       service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:1234567891011:certificate/uuid"
       # The protocol to use on the backend, we use TCP since we're using websockets
       service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
       # Which ports should use SSL
-      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
+      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "http"
       service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
```

Would that work? I really want to figure out if we can make use of this AWS LB without having a dummy 443 port alongside an 80 port.

@pcfens
Contributor Author

pcfens commented Oct 17, 2020

Hi @consideRatio-

In the example you provide, a load balancer would be created, but it would only listen on port 80; the annotation tells AWS to attach a certificate to it anyway (HTTPS on port 80, I believe, unless Amazon has some logic on their end). The original intent of this PR was to support offloading SSL termination with the smallest amount of disruption to how things previously worked. I'll comment on the other PR with some information that might be helpful there.
