feat: automatically configure istio #2818

Merged · 1 commit into main · Oct 1, 2024
Conversation

stuartwdouglas (Collaborator):
This will automatically create Istio AuthorizationPolicy objects that limit ingress to the runners to traffic from the controller.

This will be expanded to include egress and peer-to-peer policies in the future.
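For illustration, a minimal sketch of the kind of policy this would create, using Istio's Go client. This is not the PR's code: the `ftl` namespace, the `app` selector label, and the `ftl-controller` service-account principal are all assumptions.

```go
package main

import (
	"context"

	istiosec "istio.io/api/security/v1beta1"
	istiotype "istio.io/api/type/v1beta1"
	securityv1beta1 "istio.io/client-go/pkg/apis/security/v1beta1"
	istioclient "istio.io/client-go/pkg/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// createRunnerIngressPolicy creates an AuthorizationPolicy that allows only
// the controller's service account to reach a module's runner pods. The
// namespace, label key, and principal are illustrative assumptions.
func createRunnerIngressPolicy(ctx context.Context, cfg *rest.Config, module string) error {
	client, err := istioclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	policy := &securityv1beta1.AuthorizationPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: module, Namespace: "ftl"},
		Spec: istiosec.AuthorizationPolicy{
			// Select this module's runner pods (hypothetical label).
			Selector: &istiotype.WorkloadSelector{
				MatchLabels: map[string]string{"app": module},
			},
			Action: istiosec.AuthorizationPolicy_ALLOW,
			Rules: []*istiosec.Rule{{
				From: []*istiosec.Rule_From{{
					Source: &istiosec.Source{
						// Only the controller's mTLS identity may connect.
						Principals: []string{"cluster.local/ns/ftl/sa/ftl-controller"},
					},
				}},
			}},
		},
	}
	_, err = client.SecurityV1beta1().AuthorizationPolicies("ftl").Create(ctx, policy, metav1.CreateOptions{})
	return err
}
```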

@stuartwdouglas stuartwdouglas requested review from a team and jonathanj-square and removed request for a team September 25, 2024 06:19
@stuartwdouglas stuartwdouglas added the run-all label (runs the full set of CI jobs in the PR rather than in the merge queue) Sep 25, 2024
@stuartwdouglas stuartwdouglas force-pushed the stuartwdouglas/istio branch 3 times, most recently from 05f9222 to e318fa7 on September 25, 2024 07:32
@jvmakine (Contributor) left a comment:
Not an Istio expert in any way, but I think I understand what this does. Nice!

@alecthomas (Collaborator) left a comment:

Awesome!

```go
if err != nil {
	if !errors.IsNotFound(err) {
		logger.Errorf(err, "failed to delete service %s", msg.ModuleName)
	}
}
```
Collaborator commented:
nit: capitalise logs

Why is this logging rather than returning an error?

stuartwdouglas (Author) replied:
It's in a goroutine due to delayed deletion.

Collaborator replied:
In that case we should use an errgroup.Group or other mechanism to propagate the error, not drop it.
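A minimal sketch of the errgroup approach being suggested, not the PR's code; `deleteService` is a hypothetical stand-in for the real deletion logic.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// deleteService is a hypothetical stand-in for the real deletion logic.
func deleteService(ctx context.Context, module string) error {
	fmt.Println("deleting service for", module)
	return nil
}

// deleteServices runs the deletions concurrently and propagates the first
// error to the caller instead of only logging it.
func deleteServices(ctx context.Context, modules []string) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, module := range modules {
		module := module // capture loop variable (pre-Go 1.22)
		g.Go(func() error {
			return deleteService(ctx, module)
		})
	}
	// Wait blocks until every goroutine returns, surfacing the first error.
	return g.Wait()
}
```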

stuartwdouglas (Author) replied:
There is a form of eventual consistency here, where 'orphaned' objects get cleaned up on the next schema change. If we used a wait group it would be the same as just adding a big sleep in the main thread.

```go
in.Run(t,
	in.WithKubernetes(),
	in.WithIstio(istio),
```
Collaborator commented:
How does in.WithIstio(false) make sense? Should this still be in.WithKubernetes(istio bool)?

But also, do we care about testing without Istio, when we're assuming it is always present? Just get rid of the non-Istio test IMO.

stuartwdouglas (Author) replied:
Are you sure we want to require Istio to run on kube? Also, having the non-Istio test helps verify that Istio is working as expected and that the naughty deployment is not just hitting some other error.

Collaborator replied:
We already decided that we were going to require Istio, so why wouldn't we?

internal/configuration/dal/db/schema.sql (review thread outdated; resolved)
@stuartwdouglas stuartwdouglas force-pushed the stuartwdouglas/istio branch 7 times, most recently from 09da70b to 6f1fcff on October 1, 2024 00:30
@stuartwdouglas stuartwdouglas requested review from a team as code owners October 1, 2024 00:30
@stuartwdouglas stuartwdouglas requested review from tomdaffurn and removed request for a team October 1, 2024 00:30
@stuartwdouglas stuartwdouglas requested review from worstell and wesbillman and removed request for a team October 1, 2024 00:30
This will automatically create Istio AuthorizationPolicy objects to limit
ingress to the runners to traffic from the controller.
This will be expanded to include egress and peer-to-peer in the future.

add istio test
@stuartwdouglas stuartwdouglas merged commit a416149 into main Oct 1, 2024
93 checks passed
@stuartwdouglas stuartwdouglas deleted the stuartwdouglas/istio branch October 1, 2024 00:49
```diff
@@ -20,3 +20,6 @@ rules:
     resourceNames:
       - ftl-controller-deployment-config
     verbs: [ "get"]
+  - apiGroups: [ "security.istio.io" ]
+    resources: [ "authorizationpolicies" ]
+    verbs: [ "get", "list", "watch", "delete", "create", "update", "patch" ]
```
Contributor commented:
This still makes me nervous. I know that this is the direction we agreed on, but I wonder if this should really be broken out into separate roles, or whether modifying the cluster's Istio authz policies should be done by an entity other than the controller. Gaining access to the ftl-controller is already root on the deployed FTL cluster, but now it's also root on the underlying infrastructure. I'm imagining privilege escalation scenarios where a cluster is completely hijacked.

stuartwdouglas (Author) replied:
This will get moved to the provisioner at some point.

For now I could probably drop down to just the 'create' verb, using owner references for deletion, but that will likely change if we start doing P2P and need to modify policies as deployments come online.
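A rough sketch of the owner-reference idea mentioned above, assuming the runner's Deployment would own the policy; the function name is illustrative.

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setPolicyOwner marks the runner Deployment as the policy's owner, so
// Kubernetes garbage-collects the AuthorizationPolicy when the Deployment
// is deleted, and the controller only ever needs the 'create' verb.
func setPolicyOwner(policy *metav1.ObjectMeta, dep *appsv1.Deployment) {
	policy.OwnerReferences = []metav1.OwnerReference{
		*metav1.NewControllerRef(dep, appsv1.SchemeGroupVersion.WithKind("Deployment")),
	}
}
```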
