
authorizer/webhook: make webhook authorizer cluster aware #43

Open
wants to merge 89 commits into base: feature-logical-clusters-1.23

Conversation

s-urbaniak

smarterclayton and others added 30 commits February 3, 2022 16:52
with duplicated k8s code.

cmd/kube-apiserver/app/* -> pkg/genericcontrolplane/*
pkg/kubeapiserver/admission/* -> pkg/genericcontrolplane/admission/*
... stripped down from kube-apiserver,
based on the following duplicated k8s files:

pkg/controlplane/instance.go -> pkg/genericcontrolplane/apis/apis.go
pkg/kubeapiserver/options/authentication.go -> pkg/genericcontrolplane/options.go
cmd/kube-apiserver/app/options/options.go -> pkg/genericcontrolplane/options/options.go
Bootstrapped files are copied from existing k8s code:

pkg/api/legacyscheme/scheme.go -> pkg/api/genericcontrolplanescheme/scheme.go
pkg/apis/core/install/install.go -> pkg/apis/core/install/genericcontrolplane/install.go
pkg/apis/core/register.go -> pkg/apis/core/register_generic_control_plane.go
pkg/apis/core/v1/register.go -> pkg/apis/core/v1/register_generic_control_plane.go
staging/src/k8s.io/api/core/v1/register.go -> staging/src/k8s.io/api/core/v1/register_generic_control_plane.go
pkg/registry/core/rest/storage_core.go -> pkg/registry/core/rest/genericcontrolplane/storage_core.go
Some of the core API group is required for generic control plane
server startup - specifically events and namespaces - and so the
core API group must be installed. Instead of loading it as the
legacy group, attempt to make it a standard API group and adapt
the generic startup code as necessary to enable that.
The following existing k8s file is duplicated to prepare the change:

pkg/kubeapiserver/default_storage_factory_builder.go -> pkg/kubeapiserver/legacy_storage_factory_builder.go
…rage

A number of hardcoded assumptions exist in the standard resource factory
that are specific to the versions in use. Instead, create a separate factory
without these assumptions. In the future this might instead be a way to
inject a different form of placement logic (identify resources with a
unique lineage and map them to a single key that can have multiple readers).
This is a generalization of the prototype approach used to retrieve the clusterName
during wildcard watches across clusters.

For etcd3 storage functions, we should detect when an operation involves (or results in) an empty clusterName.
However we only emit a warning, in order to avoid breaking the normal case of Kubernetes starting kube-apiserver.

Of course, in the future, we should add an abstraction layer that allows several implementations of
clusterName setting / retrieval, instead of depending on a single etcd-based implementation.
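
A minimal sketch of the kind of check this describes (helper name and wiring are illustrative, not the actual patch; it relies on the 1.23-era clusterName field on ObjectMeta):

```go
// Sketch only: warn, rather than fail, when an object reaching the etcd3
// storage layer has an empty clusterName.
package clusterwarn

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/klog/v2"
)

// warnIfClusterNameEmpty logs a warning instead of returning an error, so
// plain kube-apiserver usage keeps working while kcp-related gaps stay visible.
func warnIfClusterNameEmpty(obj runtime.Object, key string) {
	accessor, err := meta.Accessor(obj)
	if err != nil {
		return
	}
	if accessor.GetClusterName() == "" {
		klog.Warningf("etcd3 storage: object %q has an empty clusterName", key)
	}
}
```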

Signed-off-by: David Festal <dfestal@redhat.com>
This may be useful when some legacy scheme resources
are not registered, which is the case, for example, in KCP.

There have to be special cases for `core` resources, which
have an empty group and are served at a dedicated API prefix.

This doesn't fully work with `kubectl` or the Kubernetes client yet,
because those client tools systematically use
strategic merge patch for legacy scheme resources, and CRDs
don't support strategic merge patch for now.
The fix for this limitation will come in a subsequent commit.

Signed-off-by: David Festal <dfestal@redhat.com>
…netes#6)

Currently CRDs only allow defining custom table columns based on single, basic JSONPath expressions.
This is obviously not sufficient to reproduce the various column definitions of legacy scheme objects like
deployments, etc., since those definitions are implemented in Go code.

So for example in KCP, when the `deployments` API resource is brought back from physical clusters in the form of a CRD, the table columns shown by a `kubectl get deployments` command are not the ones typically expected.

This PR adds a temporary hack that replaces the table converter of CRDs bringing back legacy-scheme resources with the default table converter of the related legacy scheme resource.

In the future this should probably be replaced by a new mechanism that allows customizing some behaviors of resources defined by CRDs.
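
A rough sketch of the substitution (import paths follow the usual kube registry pattern; how it is wired into the CRD handler is an assumption, not the exact code of this hack):

```go
// Sketch: build the default Go-based table convertor used by the built-in
// registries, so a CRD shadowing a legacy-scheme resource (e.g. apps/v1
// Deployments) can reuse it instead of the JSONPath-based CRD converter.
package tableconvertor

import (
	"k8s.io/apiserver/pkg/registry/rest"
	"k8s.io/kubernetes/pkg/printers"
	printersinternal "k8s.io/kubernetes/pkg/printers/internalversion"
	printerstorage "k8s.io/kubernetes/pkg/printers/storage"
)

// legacyTableConvertor returns the table convertor backed by the hand-written
// print handlers for the legacy scheme types.
func legacyTableConvertor() rest.TableConvertor {
	return printerstorage.TableConvertor{
		TableGenerator: printers.NewTableGenerator().With(printersinternal.AddHandlers),
	}
}
```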

Signed-off-by: David Festal <dfestal@redhat.com>
... based on CRD published openapi v2 definitions
In order to compile from a Go module, the APIServer requires access
to GetOpenAPIDefinitions() for at least a subset of the core code.
The minimal control plane would need OpenAPI docs for all inline
objects - one way to solve that would be to split the setup based
on what objects are registered and to accept definitions at init
time. In either case, a downstream vendor would have to reference
some of these objects, so exclude them from being ignored temporarily
until a longer-term solution can be provided.
...This commit hacks CRD tenancy by ensuring that logical clusters are taken into account in:
- CRD-related controllers
- APIServices-related controllers
- Discovery + OpenAPI endpoints

In the current Kubernetes design, those three areas are highly coupled and intertwined,
which explains why this commit had to hack the code at various levels:
- client-go level
- controller level
- HTTP handler level.
While this gives a detailed idea of which code needs to be touched in order to enable CRD tenancy,
a clean implementation would first require some refactoring,
in order to build the required abstraction layers that would allow decoupling those areas.

In the current implementation, we ensure that the clusterName is managed consistently
and do not allow any operation that involves (or results in) an object having an empty `clusterName`.

Signed-off-by: David Festal <dfestal@redhat.com>
In some contexts, like the controller-runtime library used by
the Operator SDK, all the resources of the client-go scheme
are created / updated using the protobuf content type.
However, when these resources are in fact added as CRDs,
as in the KCP minimal API server scenario, they cannot
be created / updated, since protobuf (de)serialization is not
supported for CRDs.
So in this case we just convert the protobuf request to a JSON one
(using the `client-go` scheme decoder/encoder),
before letting the CRD handler serve it.
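
A simplified sketch of that conversion (function name and wiring are illustrative; the real change sits in the request-handling path in front of the CRD handler):

```go
// Sketch: decode a protobuf-encoded request body against the client-go
// scheme and re-encode it as JSON so the CRD handler can serve it.
package protobuffallback

import (
	"bytes"
	"fmt"
	"io"

	"k8s.io/apimachinery/pkg/runtime/serializer/json"
	"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

// protobufBodyToJSON converts one request body; callers would also rewrite
// the Content-Type header to application/json.
func protobufBodyToJSON(body io.Reader) ([]byte, error) {
	raw, err := io.ReadAll(body)
	if err != nil {
		return nil, err
	}

	protoSerializer := protobuf.NewSerializer(clientgoscheme.Scheme, clientgoscheme.Scheme)
	obj, gvk, err := protoSerializer.Decode(raw, nil, nil)
	if err != nil {
		return nil, fmt.Errorf("decoding protobuf request: %w", err)
	}
	if gvk != nil {
		// Keep apiVersion/kind in the JSON output.
		obj.GetObjectKind().SetGroupVersionKind(*gvk)
	}

	jsonSerializer := json.NewSerializerWithOptions(
		json.DefaultMetaFactory, clientgoscheme.Scheme, clientgoscheme.Scheme, json.SerializerOptions{})
	var buf bytes.Buffer
	if err := jsonSerializer.Encode(obj, &buf); err != nil {
		return nil, fmt.Errorf("re-encoding as JSON: %w", err)
	}
	return buf.Bytes(), nil
}
```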

A real, long-term, non-hacky fix for this problem would be as follows:
when a request uses an unsupported serialization, the server should
reject it with a 406 and provide a list of supported content types.
client-go should then examine whether it can satisfy such a request by
encoding the object with a different scheme.
This would require a KEP, but it is in keeping with content negotiation on GET / WATCH in Kube.

Signed-off-by: David Festal <dfestal@redhat.com>
... and potential client problems

Signed-off-by: David Festal <dfestal@redhat.com>
It is necessary to take into account the `ClusterName` (logical cluster) of the current request context
and use it to validate the request against the given namespace of the same logical cluster.
Before this change, it was not possible to create an object in a given namespace in a non-default logical
cluster unless a namespace with the same name had been created in the default ('admin') logical cluster
(issue kcp-dev/kcp#157).
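
A tiny sketch of the idea (the clusterFromContext helper is hypothetical and stands in for however the logical cluster is carried on the request context):

```go
// Sketch: the namespace existence check must be keyed by logical cluster,
// so namespace "foo" in cluster "user-1" is not satisfied by a "foo"
// namespace that only exists in the default "admin" cluster.
package nskey

import (
	"context"
	"fmt"
)

type clusterContextKey struct{}

// clusterFromContext is a stand-in for the request-scoped logical cluster
// lookup; it is not an upstream API.
func clusterFromContext(ctx context.Context) string {
	cluster, _ := ctx.Value(clusterContextKey{}).(string)
	return cluster
}

// namespaceLookupKey scopes the namespace lookup to the request's cluster.
func namespaceLookupKey(ctx context.Context, namespace string) string {
	return fmt.Sprintf("%s/%s", clusterFromContext(ctx), namespace)
}
```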

Signed-off-by: David Festal <dfestal@redhat.com>
The Namespace controller must take into account the `ClusterName` (== logical cluster) in all
its actions, especially when deleting contained objects during finalization.

This also required hacking the RBAC policy initialization a bit so that it
explicitly uses the `admin` logical cluster (until we have a decent inheritance mechanism
between logical clusters for RBAC).

Signed-off-by: David Festal <dfestal@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Fix an issue where the naming controller was hot-looping trying to
enqueue other CRDs in the same API group.

With the change to cache.MetaNamespaceKeyFunc to now encode the cluster
name as part of the string key, we have to make a similar change to the
naming controller so that it decodes the cluster name from the key and
reinserts it when enqueuing new items.
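
A sketch of the key handling this implies (the "#" separator and helper names are illustrative assumptions, not the exact encoding used by the patched cache.MetaNamespaceKeyFunc):

```go
// Sketch: encode the logical cluster into the informer string key and split
// it back out before re-enqueuing sibling CRDs from the same API group.
package clusterkeys

import (
	"fmt"
	"strings"
)

// clusterAwareKey builds "<cluster>#<namespace>/<name>" (or "<cluster>#<name>"
// for cluster-scoped objects such as CRDs).
func clusterAwareKey(cluster, namespace, name string) string {
	if namespace == "" {
		return fmt.Sprintf("%s#%s", cluster, name)
	}
	return fmt.Sprintf("%s#%s/%s", cluster, namespace, name)
}

// splitClusterAwareKey recovers the cluster so the naming controller can
// reinsert it when enqueuing new items.
func splitClusterAwareKey(key string) (cluster, rest string, err error) {
	parts := strings.SplitN(key, "#", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("unexpected key format: %q", key)
	}
	return parts[0], parts[1], nil
}
```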

Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
This hack fixes issue kcp-dev/kcp#183: one cannot
watch with wildcards (across logical clusters) if the CRD of the related API resource
hasn't been added to the admin logical cluster first.

The fix in this hack is limited, since the request will fail if two logical clusters
contain CRDs for the same GVK with non-equal specs (especially non-equal schemas).

Signed-off-by: David Festal <dfestal@redhat.com>
Transparent sharding needs to occur after a client has passed auth{n,z}
for the primary replica they contact. This means that kcp needs to
inject a handler in between those chains and the gorestful delegates.
Right now the simplest way to do that is to just open up the option.
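
For illustration, the kind of handler this option would let kcp inject (the routing decision itself is elided; names are illustrative):

```go
// Sketch: a middleware placed after authn/authz but before the go-restful
// delegates; the actual shard-routing decision is out of scope here.
package sharding

import "net/http"

func withShardRouting(delegate http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// The request has already passed authentication and authorization.
		// Decide here whether to proxy it to another shard; otherwise fall
		// through to the local handlers.
		delegate.ServeHTTP(w, r)
	})
}
```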

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
ncdc and others added 26 commits February 3, 2022 16:52
As part of the updated Lister interface methods, fix the hand-written
conversionLister to conform to the updated interface.

Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
s.Logs.Apply no longer exists; it is now ValidateAndApply. Upstream
calls this as early as possible in the Cobra RunE method, so this is a
deviation from upstream (it's a minimal change to keep things compiling).

Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Undo most of our kube-aggregator hacks, now that we're no longer using
it for genericcontrolplane.

Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
(cherry picked from commit 31f5bdc)
(cherry picked from commit 9b29aab)
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Signed-off-by: David Festal <dfestal@redhat.com>
(cherry picked from commit d7e9ea1)
Signed-off-by: David Festal <dfestal@redhat.com>
(cherry picked from commit 29fcab6)
…apiserver-config-23

1.23: genericcontrolplane: split generic config construction from kube-apiserver and apiextensions
@@ -67,11 +67,12 @@ type WebhookAuthorizer struct {
 	retryBackoff     wait.Backoff
 	decisionOnError  authorizer.Decision
 	metrics          AuthorizerMetrics
+	cluster          string
Member

This isn't used anywhere?

@s-urbaniak
Author

Side note: I discussed this with @sttts, as I am not convinced this is the right path. This is currently used upstream in k8s.io/apiserver/pkg/server/options to configure delegated subject access reviews, so the approach here has two drawbacks:

  • the cluster is configured statically, for all requests
  • the cluster should instead be read from the request context

Generally, I'll try to wrap the RESTClient somehow (see the sketch below), as patching the upstream delegating authorizers seems heavyweight.
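
Something along these lines (a sketch only; the /clusters/<name> path prefix and helper names are assumptions, and the cluster would ultimately come from the request context rather than a static value):

```go
// Sketch: scope a rest.Config to one logical cluster by wrapping its
// transport, instead of patching the upstream delegating authorizer.
package clusterclient

import (
	"net/http"
	"path"

	"k8s.io/client-go/rest"
)

type clusterRoundTripper struct {
	cluster  string
	delegate http.RoundTripper
}

func (c *clusterRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context())
	// Assumed URL layout: prefix every request with /clusters/<name>.
	r.URL.Path = path.Join("/clusters", c.cluster, req.URL.Path)
	return c.delegate.RoundTrip(r)
}

// withCluster returns a copy of cfg whose requests target the given logical
// cluster; SubjectAccessReviews sent through it would then land in that cluster.
func withCluster(cfg *rest.Config, cluster string) *rest.Config {
	scoped := rest.CopyConfig(cfg)
	scoped.Wrap(func(rt http.RoundTripper) http.RoundTripper {
		return &clusterRoundTripper{cluster: cluster, delegate: rt}
	})
	return scoped
}
```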
