Hi @kminehart. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with the appropriate command. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
  type: ClusterIP
This should be dynamically set
The service type is now configurable via values.yaml.
app: {{ template "fullname" . }}
ports:
- name: service
  port: 4444
This should also be dynamically set
I've made the service port configurable in values.yaml.
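For reference, a minimal sketch of what that could look like (the flat serviceType/servicePort key names are my assumption, not necessarily what the PR uses):

```yaml
# values.yaml (key names are an assumption)
serviceType: ClusterIP
servicePort: 4444
```

```yaml
# templates/service.yaml (sketch)
spec:
  type: {{ .Values.serviceType }}
  ports:
    - name: service
      port: {{ .Values.servicePort }}
```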
- name: DATABASE_URL
  value: postgres://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ template "postgresql.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}?sslmode=disable
- name: HTTPS_ALLOW_TERMINATION_FROM
  value: 0.0.0.0/0
This should also come from values.yaml.
Are you referring to the HTTPS_ALLOW_TERMINATION_FROM environment variable?
If you're referring to the postgres URL, this was one thing I was pretty unsure about. The requirements.yaml file seems to imply that a new postgres server will spin up for hydra regardless of whether one already exists. I'm not sure how that works.
Yeah, I was just referring to HTTPS_ALLOW_TERMINATION_FROM. We are currently setting that dynamically at runtime. Thanks!
Should I exclude that environment variable then?
I would just have the value set in values.yaml with a default of 0.0.0.0/0.
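A sketch of that wiring (the values.yaml hunk reviewed later in this PR nests the key under config:, so I'll assume that path):

```yaml
# values.yaml
config:
  httpsAllowTerminationFrom: 0.0.0.0/0
```

```yaml
# templates/deployment.yaml (sketch)
- name: HTTPS_ALLOW_TERMINATION_FROM
  value: {{ .Values.config.httpsAllowTerminationFrom | quote }}
```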
Ah, gotcha. Will do.
Sorry this took me so long. I'll push it now.
- name: hydra
  image: {{ .Values.image }}:{{ .Values.imageTag }}
  imagePullPolicy: {{ .Values.imagePullPolicy }}
  command: ["hydra", "host", "--dangerous-auto-logon"]
Remove --dangerous-auto-logon and add sql migrations beforehand: /go/bin/hydra migrate sql $DATABASE_URL
How do you propose we do the sql migrations beforehand?
We can run two commands by doing
/bin/sh -c '/go/bin/hydra migrate sql $DATABASE_URL && /go/bin/hydra host'
but I'm not a huge fan of that solution.
A better solution, in my opinion, would be init containers:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
I'll try to set that up.
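A rough sketch of the init-container approach (assuming the cluster supports spec.initContainers; on 1.5 and earlier this was an annotation). The image and DATABASE_URL wiring just mirror the main container:

```yaml
# templates/deployment.yaml (sketch)
spec:
  initContainers:
    - name: hydra-migrate
      image: {{ .Values.image }}:{{ .Values.imageTag }}
      imagePullPolicy: {{ .Values.imagePullPolicy }}
      # $(DATABASE_URL) is expanded by Kubernetes from the env var below
      command: ["hydra", "migrate", "sql", "$(DATABASE_URL)"]
      env:
        - name: DATABASE_URL
          value: postgres://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ template "postgresql.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}?sslmode=disable
```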
Either would work. Personally, I'm a fan of the first solution; init containers can be tricky to debug.
Thanks for reviewing. I'll get those changes in ASAP @grillz
Awesome, thanks for doing this @kminehart, you beat me to it 😉
Anything else you see, @grillz?
name: {{ template "fullname" . }}-secret
key: system.secret
- name: DATABASE_URL
  value: postgres://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ template "postgresql.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}?sslmode=disable
Use secrets for passwords. Also, is there any other way to specify the password other than in the URL?
@unguiculus I'm not sure using secrets for passwords should be a requirement; secrets in Kubernetes are still not secure and won't be for some time.
@grillz if k8s is properly configured (TLS, don't let just anybody access the REST API), I don't see why a secret is any less secure than the deployment file.
If an attacker gains access to the k8s API, the secret is compromised either way, no?
@20zinnm For one, this is a major compliance issue: kubernetes/kubernetes#12742. Not to mention the partitioning problem they are attempting to solve at the moment, which is not going to happen in the near future. Secrets just really aren't secret in k8s, and that's no secret in the community. Sorry, no pun intended 😉
Regardless, we should still put the password in a secret. Especially now that RBAC is a thing, you can give someone access to deployments but not secrets, and they won't be able to grab the database password.
You can; it's just not actually secret, and RBAC is very limited in its ability to secure secrets at the moment. You can join sig-auth every other Wednesday, where this is a hot topic.
Could we get around this by throwing these credentials into the environment from a secret? I.e. values.yaml --> secret --> environment?
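Something like this, perhaps (an untested sketch; the secret name and key are made up for illustration):

```yaml
# templates/secret.yaml (sketch; names are hypothetical)
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "fullname" . }}-postgres
type: Opaque
data:
  postgres-password: {{ .Values.postgresql.postgresPassword | b64enc | quote }}
```

```yaml
# templates/deployment.yaml (sketch)
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "fullname" . }}-postgres
      key: postgres-password
```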
That doesn't help much; our current setup is just using helm install --set my.secret=foo and pushing those to env vars. If we really want to use secrets, it's not the end of the world; they will be secure at some point in the next year, I imagine 😄
Let's go ahead and set ourselves up for success :). I'll add a commit to kminehart/charts#1 with it as soon as I test it.
I guess... Kubernetes is a funny exercise in branding: call something a secret even if it isn't at all and people will use it as such ¯\_(ツ)_/¯
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
Use {{ template "name" . }} for the app label. Apply everywhere.
replicas: {{ .Values.replicas }}
selector:
  matchLabels:
    app: {{ template "fullname" . }}
Update selector for the label change:
app: {{ template "name" . }}
release: "{{ .Release.Name }}"
Apply everywhere.
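Putting both label suggestions together, the deployment metadata and selector would look something like this (a sketch, not the PR's exact file):

```yaml
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ template "name" . }}
      release: "{{ .Release.Name }}"
```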
incubator/hydra/README.md
Outdated
# Prerequisites

* Kubernetes 1.4+ with Beta APIs enabled
Any reason you still want to support 1.4?
No particular reason. 1.4 + Beta APIs is the minimum requirement, though. Honestly, I'd prefer to just remove this line anyway.
@@ -0,0 +1,69 @@
apiVersion: extensions/v1beta1
kind: Deployment
Wouldn't it make more sense to use a StatefulSet since you are using persistent storage?
@unguiculus StatefulSet is still beta; I'm not sure we should be pushing it yet.
If anything, @kminehart, I would consider removing postgres entirely from here; most people still aren't putting databases on k8s for a variety of reasons. The best method I have seen so far is the operator method, but I still think Hydra should deploy agnostic of postgres, or use the postgres chart as a dependency.
@grillz Postgres chart as a dependency sounds good. It's probably best to have Hydra's chart set up its own database one way or the other because if left to their own devices, inexperienced system admins will likely configure Hydra to use the same database as the application, which is a major security hole. Maintaining database isolation is very important here, so it should be emphasized one way or another.
@20zinnm Yeah, I agree with that, but at the same time most of the k8s deployments we're seeing right now are utilizing RDS or some hosted datastore, so I would hate to marry it to either. Kind of a rock and a hard place.
- We've seen before that beta Kubernetes resources are fairly stable, so I have no problem with making Hydra a StatefulSet. I don't fully understand what the benefit of using StatefulSets would be in this case, though.
If I understand correctly, the only thing you really need persistent storage for is the SSL certificates and CA files created by Hydra, right?
- As for Postgres, it is currently a dependency, as @20zinnm said. I followed GitLab's example, which may not have been the best idea considering GitLab has the "omnibus" methodology of installing their software, but they also have a hell of a lot more than just Postgres as requirements.
There's a solution to this problem in Helm already.
By default, we will install the bundled postgres. In requirements.yaml, I'll add a condition: postgres.bundled, and if that value is true in values.yaml, then the bundled postgres will be installed.
This way, users who prefer RDS can opt in to using RDS, and those of us who don't care or prefer keeping everything in Kubernetes can keep it the way it is.
It makes sense to me to make external database connections opt-in because they're always going to require configuration anyway. Keeping it bundled means the application works out of the box with less configuration.
Bear with me, I'm still fairly new to Helm and Hydra.
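A sketch of that conditional dependency (the version and repository below are placeholders, not the actual chart coordinates):

```yaml
# requirements.yaml (sketch)
dependencies:
  - name: postgresql
    version: "*"                               # placeholder
    repository: "https://example.com/charts"   # placeholder
    condition: postgres.bundled
```

```yaml
# values.yaml (sketch)
postgres:
  bundled: true   # set to false to use an external database such as RDS
```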
Yeah, I think that should work. Hydra should not be a StatefulSet; only postgres would be, and that should be a separate chart, as mentioned.
So is there a plan to get this in soon? Very excited! @kminehart
@20zinnm I'm working on the requested changes right now. Expect them to be applied within the next few days.
Thanks so much @kminehart
######################################################################
{{- end }}

Hydra should not be facing the internet itself, so you can access hydra via.
Part of Hydra's authorization flow requires a redirect to Hydra. It definitely does need to be publicly accessible.
Overview of the OAuth2 Authorization Code flow
Basically, the user is initially redirected by the client to Hydra, which then redirects to the consent app with a signed challenge token. The consent app does necessary authentication and consent before redirecting the user back to Hydra (passing a signed challenge response). Hydra validates the response and generates the tokens before redirecting the user back to the client with the proper tokens.
This is incredibly simplified, but the key thing is that the user does need to be able to access Hydra from outside of the kubernetes cluster.
Gotcha. Sorry about that! Part of the reason for making this helm chart was so that I could set up Hydra on our cluster and give it a test run :P
No worries. Let me know if I can help :) I too need to set up Hydra. Figured I'd go big or go home.
@kminehart I made the requested changes and submitted a PR to your fork. If you merge that, the only thing left to decide on is the database credentials (secrets vs. values).
Thanks @20zinnm. I'll make the final changes within the next hour or so.
I took it upon myself to make a helm chart based on your work that incorporates idiomatic Kubernetes structures, based on the chart for GitLab. I have confirmed that it works locally with minikube. (That's not the packaged version but rather a ZIP of the directory; you must unzip it and then package it.)
@20zinnm I like that a lot more. This was sort of a quick work solution that turned into a pull request, so the quality of the config layout and the readme are not ideal. Do you want to create a separate PR and we can just close this one?
incubator/hydra/values.yaml
Outdated
@@ -37,6 +40,13 @@ config:
  idTokenLifespan: "1h"
  authorizeCodeLifespan: "10m"
  httpsAllowTerminationFrom: 0.0.0.0/0
postgresql:
Hydra supports multiple backing databases; this locks you in with PostgreSQL. I would do "databaseURL" instead.
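That is, something like this (a sketch; the DSN shown is just an example value):

```yaml
# values.yaml (sketch)
# A single databaseURL avoids locking the chart to PostgreSQL;
# any DSN Hydra supports could be passed through.
databaseURL: postgres://hydra:secret@my-postgres:5432/hydra?sslmode=disable
```

```yaml
# templates/deployment.yaml (sketch)
- name: DATABASE_URL
  value: {{ .Values.databaseURL | quote }}
```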
I'd be happy to. You will be listed as a contributor for your work.
In case this was missed, we now have official helm charts: https://k8s.ory.sh/helm/