Adding an allocator service that acts as a reverse proxy. #768
Any particular reason to tie this to a specific namespace? We could grab the namespace from the GameServerAllocation.ObjectMeta on deserialisation -- might be more flexible?
I am introducing a gRPC interface, and that will not support k8s APIs, including ObjectMeta. The idea is that an allocator service deployed to a namespace is responsible for serving allocations in that namespace. Then matchmaking does not need to be aware of the internal structure of k8s and its namespaces; it only calls the allocator's endpoint.
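To make that shape concrete, here is a rough sketch of what a namespace-free, gRPC-facing request/response could look like. All type and field names below are hypothetical illustrations, not the actual proto from this PR:

```go
package allocator

// AllocationRequest is a hypothetical illustration of a gRPC-facing request:
// note there is no Namespace or ObjectMeta field. The allocator service itself
// decides which namespace the GameServerAllocation is created in.
type AllocationRequest struct {
	// Selectors loosely mirroring GameServerAllocation's required/preferred
	// label matching; names are illustrative only.
	RequiredLabels  map[string]string
	PreferredLabels map[string]string
}

// AllocationResponse is likewise illustrative: just enough for a matchmaker
// to hand a player a connection, with no Kubernetes types exposed.
type AllocationResponse struct {
	GameServerName string
	Address        string
	Port           int32
}
```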
But if a user wants to run GameServers in more than one namespace, then they have to run two different services - which brings us back to the same place - the matchmaker being namespace aware (and we have to run twice the infrastructure, and potentially redeploy if we want to be dynamic about the namespaces being used).
Seems simpler to me to allow the endpoint to take a namespace as an argument? (Maybe the "default" namespace as the default?)
Wdyt?
If there are two allocator services deployed to k8s in different namespaces, the matchmaker only needs to know the endpoints. However, if we want one allocator service deployed to one cluster to handle allocation requests for two namespaces, potentially for two different purposes, then there will not be enough isolation between the traffic for the two namespaces. IOW, QPS for one namespace may impact allocator service performance for another namespace. So there is a trade-off.
I would prefer not to expose the namespace unless we know for sure that it is needed. Adding additional fields is easy, but removing them in the future is hard, as it would be a breaking change. WDYT?
Everything comes back to the k8s api anyway, so we'll always have that bottleneck?
I can't disagree with that statement 👍
The only other devil's advocate point I can make is that I think this makes things for the end user a tad more complicated. Up until this point, everything is installed in the agones-system namespace - now we have Agones system components bleeding into other areas of Kubernetes, whereas before they were pretty tightly contained in the agones-system namespace.
The other thing is - we're saying we're a reverse proxy for this CRD, but we are changing the expected behaviour of that CRD with the reverse proxy. So it might be a bit confusing for users.
But given your excellent point above - I think we'll be okay to have the namespace defined in the env var -- and see how users like it. Much easier to add later 👍
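As a small sketch of that last point, the allocator could pick up its target namespace from the environment. The variable name and the "default" fallback here are assumptions for illustration, not necessarily what this PR wires up:

```go
package allocator

import (
	"log"
	"os"
)

// targetNamespace returns the namespace this allocator instance serves.
// The NAMESPACE variable name and the "default" fallback are assumptions,
// not taken from the PR.
func targetNamespace() string {
	ns := os.Getenv("NAMESPACE")
	if ns == "" {
		ns = "default"
		log.Printf("NAMESPACE not set, falling back to %q", ns)
	}
	return ns
}
```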
Good point. I think you are right.
Can you please explain more? How are we changing the expected behavior?
Sure - basically, through the k8s api I can pass through what namespace I'm working with as part of the URL path - so I can access whatever namespace I want. That being said, in this instance, we aren't doing that - so it's probably not that huge a deal.
The only potentially confusing thing I see is if a user sets the namespace in ObjectMeta.Namespace and it doesn't translate through to the applied namespace in the service.
But I don't see either of these things as blocking issues. As you say above, we can add this functionality later if we need it.
(Also, if someone wants to work in a new namespace, we have to provision a service account and RBAC rules anyway, so it's not like you can dynamically add/remove namespace support that quickly.)
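For reference, this is the k8s API convention being described above: the namespace travels in the request path, so a caller with the right RBAC can target any namespace. A minimal sketch, with the group and version left as placeholders rather than the exact values registered by this PR's CRD:

```go
package allocator

import "fmt"

// allocationPath shows how the namespace is encoded in the URL when talking
// to the Kubernetes API directly. Group and version are placeholders, not the
// precise values used by the GameServerAllocation CRD.
func allocationPath(group, version, namespace string) string {
	return fmt.Sprintf("/apis/%s/%s/namespaces/%s/gameserverallocations", group, version, namespace)
}
```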
In the gRPC interface that I will introduce next, the namespace is not exposed. So ObjectMeta.Namespace will not be relevant.
Normally this should be one CA certificate regardless of the number of clusters; clients will use leaf certificates signed by this CA, so you technically only need one PEM file with the full CA certificate bundle, not all of the client certs.
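A minimal Go sketch of that pattern, assuming the bundle lives at a path like the one below (the path is illustrative): the server trusts a single PEM file containing the CA bundle and verifies client leaf certificates against it, rather than pinning individual client certs.

```go
package allocator

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
)

// serverTLSConfig builds an mTLS config that trusts one CA bundle (a single
// PEM file, possibly holding several CA certificates) and requires clients to
// present leaf certificates signed by one of those CAs. The bundle path is an
// assumption for illustration.
func serverTLSConfig(serverCert tls.Certificate) *tls.Config {
	caPEM, err := ioutil.ReadFile("/home/allocator/client-ca/ca-bundle.pem")
	if err != nil {
		log.Fatalf("could not read CA bundle: %v", err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("no CA certificates found in bundle")
	}
	return &tls.Config{
		Certificates: []tls.Certificate{serverCert},
		ClientCAs:    pool,
		ClientAuth:   tls.RequireAndVerifyClientCert,
	}
}
```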
IOW - we need to make sure we have the certificates example set up correctly, because people will be cargo-culting this a lot.
This acts as a revocation list for certificates as well. If a secret is compromised, its CA should be revoked without impacting other clients calling the allocation service. This pattern is valuable for matchmaking rather than for cluster-to-cluster calls: if a cluster secret is compromised, then all the secrets used to talk to the other cluster are compromised anyway. However, the matchmaker is an independent entity, and this solution helps secure the system from bad-actor matchmakers.
At this point, returning http.Error will probably have no effect, since we have already sent the headers; I would just log a warning.
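For context, a small sketch of why that is: once the handler has written the headers (explicitly via WriteHeader or implicitly via the first Write), a later http.Error cannot change the status code the client already received; net/http logs a "superfluous response.WriteHeader call" and the error text just gets appended to the already-started body. The handler below is illustrative, not code from this PR.

```go
package allocator

import (
	"log"
	"net/http"
)

// handler illustrates the problem: after WriteHeader/Write, http.Error can no
// longer change the status the client already received, so logging is the
// only useful thing left to do.
func handler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	if _, err := w.Write([]byte("partial response")); err != nil {
		// Too late to send a proper error response; just log a warning.
		log.Printf("failed to write response: %v", err)
		return
	}
	// Has no effect on the status code at this point.
	http.Error(w, "something went wrong", http.StatusInternalServerError)
}
```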
This also returns the HTTP status and a body with the error message.