feat: automatically configure istio #2818
Conversation
Force-pushed from 334e350 to 7e7e866
Force-pushed from 05f9222 to e318fa7
Not an Istio expert in any way, but I think I understand what this does. Nice!
Force-pushed from e318fa7 to 8fa7eca
Awesome!
backend/controller/scaling/k8sscaling/deployment_provisioner.go (outdated thread; resolved)
if err != nil {
    if !errors.IsNotFound(err) {
        logger.Errorf(err, "failed to delete service %s", msg.ModuleName)
nit: capitalise logs
Why is this logging rather than returning an error?
It's in a goroutine due to delayed deletion.
In that case we should use an errgroup.Group or other mechanism to propagate the error, not drop it.
There is a form of eventual consistency here, where 'orphaned' objects get cleaned up on the next schema change. If we used a wait group it would be the same as just adding a big sleep in the main thread.
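For reference, a minimal sketch of the propagation pattern being suggested, using golang.org/x/sync/errgroup; the cleanUp and deleteService names and signatures below are illustrative, not the PR's actual code:

package example

import (
    "context"

    "golang.org/x/sync/errgroup"
)

// cleanUp runs the delayed deletions concurrently and returns the first
// error to the caller instead of dropping it after logging.
func cleanUp(ctx context.Context, modules []string, deleteService func(context.Context, string) error) error {
    g, ctx := errgroup.WithContext(ctx)
    for _, module := range modules {
        module := module // capture loop variable (needed before Go 1.22)
        g.Go(func() error {
            return deleteService(ctx, module)
        })
    }
    // Wait blocks until every goroutine returns and reports the first error.
    return g.Wait()
}

Whether that is worth doing here is exactly the trade-off above: with eventual consistency, blocking on the group is close to just sleeping in the main thread.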
backend/controller/scaling/k8sscaling/deployment_provisioner.go (three further outdated threads; resolved)
in.Run(t,
    in.WithKubernetes(),
    in.WithIstio(istio),
How does in.WithIstio(false) make sense? Should this still be in.WithKubernetes(istio bool)? But also, do we care about testing without Istio, when we're assuming it is always present? Just get rid of the non-Istio test IMO.
Are you sure we want to require Istio to run on kube? Also, having the non-Istio test helps verify that Istio is working as expected and that the naughty deployment is not just hitting some other error.
We already decided that we were going to require Istio, so why wouldn't we?
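For illustration only, a sketch of the two option shapes under discussion; the in package is FTL's integration-test harness, and every type and signature below is assumed rather than copied from it:

package example

import "testing"

type config struct{ kubernetes, istio bool }

// Option configures the hypothetical test harness.
type Option func(*config)

// Shape in the PR: Kubernetes and Istio are independent options.
func WithKubernetes() Option        { return func(c *config) { c.kubernetes = true } }
func WithIstio(enabled bool) Option { return func(c *config) { c.istio = enabled } }

// Shape suggested in review: Istio becomes a flag on the Kubernetes option,
// since it is only meaningful when Kubernetes itself is enabled.
func WithKubernetesAndIstio(istio bool) Option {
    return func(c *config) { c.kubernetes, c.istio = true, istio }
}

func Run(t *testing.T, opts ...Option) {
    var c config
    for _, o := range opts {
        o(&c)
    }
    _ = c // the real harness would start Kubernetes and Istio based on c
}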
Force-pushed from 8fa7eca to 79819e2
Force-pushed from 09da70b to 6f1fcff
This will automatically create Istio AuthorizationPolicy objects to limit ingress to the runners so that only the controller can reach them. This will be expanded to include egress and peer-to-peer traffic in future. Add Istio test.
Force-pushed from c43f981 to 1fb3b38
@@ -20,3 +20,6 @@ rules:
    resourceNames:
      - ftl-controller-deployment-config
    verbs: [ "get" ]
  - apiGroups: [ "security.istio.io" ]
    resources: [ "authorizationpolicies" ]
    verbs: [ "get", "list", "watch", "delete", "create", "update", "patch" ]
This still makes me nervous. I know that this is the direction we agreed on, but I wonder if this should really be broken out into separate roles, or whether the actions of modifying cluster Istio authz policies should be done by an entity other than the controller. Gaining access to the ftl-controller is already root on the deployed FTL cluster, but now it's also root on the underlying infrastructure. I'm imagining privilege escalation scenarios where a cluster is completely hijacked.
This will get moved to the provisioner at some point.
For now I could probably drop down to just the 'create' role, using owner references for deletion, but that will likely change if we start doing P2P and need to modify them as deployments come online.
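As a rough sketch of the owner-reference idea (package, function, and object names here are illustrative, not the controller's actual code), the policy can be marked as owned by the runner's Deployment so Kubernetes garbage collection removes it, which is what would let the 'delete' verb be dropped:

package example

import (
    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func boolPtr(b bool) *bool { return &b }

// withOwner marks the AuthorizationPolicy (held as unstructured here) as
// owned by the runner's Deployment, so deleting the Deployment cascades to
// the policy and the controller never needs explicit delete permission.
func withOwner(policy *unstructured.Unstructured, owner *appsv1.Deployment) {
    policy.SetOwnerReferences([]metav1.OwnerReference{{
        APIVersion:         "apps/v1",
        Kind:               "Deployment",
        Name:               owner.Name,
        UID:                owner.UID,
        Controller:         boolPtr(true),
        BlockOwnerDeletion: boolPtr(true),
    }})
}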
https://github.com/TBD54566975/ftl/pull/2936/files reduces the permissions somewhat
This will automatically create Istio AuthorizationPolicy objects to limit ingress to the runners so that only the controller can reach them.
This will be expanded to include egress and peer-to-peer traffic in future.
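As a rough illustration of the kind of policy this produces (the namespace, selector labels, and service-account names below are assumptions, not the PR's actual manifests), an AuthorizationPolicy restricting runner ingress to the controller's identity might look like:

# Illustrative only: selector labels, namespace, and service account are assumed.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: runner-ingress
  namespace: ftl
spec:
  selector:
    matchLabels:
      app: ftl-runner              # applies to runner pods only
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/ftl/sa/ftl-controller   # the controller's identity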