Allow DNS resolution of the runner pod for all k8s setup #886
Conversation
```yaml
{{- range include "tf-controller.runner.allowedNamespaces" . | fromJsonArray }}
---
apiVersion: v1
kind: Service
metadata:
  name: tf-runner
  namespace: {{ . }}
spec:
  clusterIP: None
  ports:
  - name: grpc
    port: 30000
  selector:
    app.kubernetes.io/created-by: tf-controller
    app.kubernetes.io/name: tf-runner
{{- end }}
```
Hi @syalioune
We want to rely only on Pod hostnames without introducing the Service layer.
Would it be possible to do so with your hacks?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
A Service is mandatory in my proposal so that a consistent DNS entry is generated regardless of the k8s cluster setup.
A headless Service combined with pod hostnames/subdomains is the best compromise, since it requires only one Service per namespace instead of one Service per runner pod.
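To make the "consistent DNS entry" point concrete, here is a minimal sketch (not controller code; helper names are illustrative) contrasting the IP-based pod record that Cloud DNS does not serve with the subdomain-based record that a headless Service enables:

```go
package main

import (
	"fmt"
	"strings"
)

// ipBasedFQDN builds the CoreDNS-style pod record, e.g.
// 10-40-129-81.podinfo.pod.cluster.local. GKE's Cloud DNS does not
// serve these records, which is the problem this PR works around.
func ipBasedFQDN(podIP, namespace, clusterDomain string) string {
	return fmt.Sprintf("%s.%s.pod.%s",
		strings.ReplaceAll(podIP, ".", "-"), namespace, clusterDomain)
}

// subdomainFQDN builds the record created when the runner pod sets
// hostname=<terraform name> and subdomain=tf-runner and a headless
// tf-runner Service exists in the same namespace:
// <name>.tf-runner.<namespace>.svc.<cluster domain>.
func subdomainFQDN(terraformName, namespace, clusterDomain string) string {
	return fmt.Sprintf("%s.tf-runner.%s.svc.%s",
		terraformName, namespace, clusterDomain)
}

func main() {
	fmt.Println(ipBasedFQDN("10.40.129.81", "podinfo", "cluster.local"))
	fmt.Println(subdomainFQDN("my-terraform", "podinfo", "cluster.local"))
}
```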
controllers/tf_controller_runner.go (outdated)

```diff
@@ -233,6 +234,8 @@ func (r *TerraformReconciler) runnerPodSpec(terraform infrav1.Terraform, tlsSecr
 	}
 
 	return v1.PodSpec{
+		Hostname:  terraform.Name,
+		Subdomain: "tf-runner",
```
Maybe we introduce `--use-pod-subdomain-resolution` to guard here.
Yeah, will do!
Those two fields are now guarded by `--use-pod-subdomain-resolution`.
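For illustration, a minimal sketch of what that guard could look like, assuming the new flag is plumbed into the reconciler as a boolean; function and field names here are illustrative, not the exact PR code:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// runnerPodSpec sets Hostname/Subdomain only when subdomain resolution is
// enabled, so clusters that rely on IP-based pod records are unaffected.
func runnerPodSpec(terraformName string, usePodSubdomainResolution bool) v1.PodSpec {
	spec := v1.PodSpec{}
	if usePodSubdomainResolution {
		// With a headless tf-runner Service in the namespace, these two
		// fields yield the A record <name>.tf-runner.<ns>.svc.<domain>.
		spec.Hostname = terraformName
		spec.Subdomain = "tf-runner"
	}
	return spec
}

func main() {
	spec := runnerPodSpec("my-terraform", true)
	fmt.Println(spec.Hostname, spec.Subdomain)
}
```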
mtls/rotator.go (outdated)

```go
cert, key, err := cr.createCertPEM(caArtifacts, hostname, time.Now().Add(-1*time.Hour), caArtifacts.validUntil)
hostnames := []string{
	fmt.Sprintf("*.%s.pod.%s", namespace, cr.ClusterDomain),
	fmt.Sprintf("*.tf-runner.%s.svc.%s", namespace, cr.ClusterDomain),
```
Maybe we introduce `--use-pod-subdomain-resolution` to also guard here.
Yeah, will do!
Done
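A sketch of how that guard on the certificate SANs might look, assuming the rotator receives the same boolean; names are illustrative, not the exact PR code:

```go
package main

import "fmt"

// runnerCertHostnames returns the wildcard SANs embedded in the runner
// certificate. The svc-based wildcard is only meaningful once pods get
// <name>.tf-runner.<ns>.svc.<domain> records, so it is gated by the flag.
func runnerCertHostnames(namespace, clusterDomain string, usePodSubdomainResolution bool) []string {
	hostnames := []string{
		// Matches IP-based pod records such as 10-40-129-81.<ns>.pod.<domain>.
		fmt.Sprintf("*.%s.pod.%s", namespace, clusterDomain),
	}
	if usePodSubdomainResolution {
		// Matches <terraform name>.tf-runner.<ns>.svc.<domain>.
		hostnames = append(hostnames,
			fmt.Sprintf("*.tf-runner.%s.svc.%s", namespace, clusterDomain))
	}
	return hostnames
}

func main() {
	fmt.Println(runnerCertHostnames("flux-system", "cluster.local", true))
}
```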
```yaml
  selector:
    app.kubernetes.io/created-by: tf-controller
    app.kubernetes.io/name: tf-runner
```
Suggested change:

```yaml
  selector:
    app.kubernetes.io/created-by: tf-controller
    app.kubernetes.io/name: tf-runner
```
Please introduce a new chart value called `usePodSubdomainResolution` to:

- add the `--use-pod-subdomain-resolution` flag to the controller (which requires you to create that flag for it too; a flag-registration sketch follows below)
- guard generation of the `Service` objects here.

Please note that we need the `tf-runner` Service by default for the `flux-system` namespace too.
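A sketch of the flag registration on the controller side; the real wiring in the controller's main.go may differ:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Defaults to false so existing CoreDNS-based setups keep resolving
	// runners via IP-based pod records; the chart value
	// usePodSubdomainResolution would turn it on.
	usePodSubdomainResolution := flag.Bool(
		"use-pod-subdomain-resolution",
		false,
		"Resolve runner pods via <name>.tf-runner.<namespace>.svc records instead of IP-based pod DNS records.",
	)
	flag.Parse()
	fmt.Println("use-pod-subdomain-resolution:", *usePodSubdomainResolution)
}
```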
Modifications performed 😄
Thank you for your contributions @syalioune. Maybe we want to go with the subdomain resolution as you suggested.
Signed-off-by: Alioune SY <sy_alioune@yahoo.fr>
Signed-off-by: syalioune <sy_alioune@yahoo.fr>

Force-pushed from e1d4b00 to 574b05e.
LGTM
Thank you so much @syalioune 🥇
```yaml
    app.kubernetes.io/created-by: tf-controller
    app.kubernetes.io/name: tf-runner
{{- end }}
{{- end }}
```
The YAML file needs a trailing newline, I believe.
Co-authored-by: souleb <bah.soule@gmail.com>
Draft PR related to #843 (comment), whose content I'll replicate here.

First of all, thanks for the great work on this controller.

We recently tried deploying the `tf-controller` in standard GKE clusters and hit a wall because of DNS resolution issues: the controller is not able to resolve DNS names like `10-40-129-81.podinfo.pod.cluster.local`. This issue is best described in #365 (comment) and #462 (comment). GKE uses Cloud DNS, which does not provide IP-based DNS name resolution the way CoreDNS does.

It's still very rough, but it revolves around:

- creating a headless `tf-runner` Service in each allowed namespace
- adding `hostname=terraform_crd_name, subdomain=tf-runner` fields to the runner pod so that an A record `terraform_crd_name.tf-runner.namespace.svc.cluster_domain` is automatically created
- adding the corresponding `SAN` to the runner generated certificate

Preliminary tests show that it works; a hedged end-to-end sketch follows below.

Before going further, I'm looking for community/maintainer feedback 😄

Cheers
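To make the end result concrete, a hedged sketch of how a client could dial the runner once the record exists. Port 30000 matches the headless Service above; real connections use mTLS with the generated certificate, which is elided here, and all names are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// <terraform name>.tf-runner.<namespace>.svc.<cluster domain> resolves
	// to the runner pod IP through the headless Service plus the pod's
	// hostname/subdomain pair.
	target := fmt.Sprintf("%s.tf-runner.%s.svc.%s:30000",
		"my-terraform", "flux-system", "cluster.local")
	// Insecure credentials keep the sketch self-contained; the controller
	// itself would use TLS credentials built from the rotator's cert.
	conn, err := grpc.Dial(target, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Printf("dialing runner at %s", target)
}
```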