
Support namespace filter in RR for search proxy #3527

Merged
merged 1 commit into from
May 22, 2023

Conversation

ikaven1024
Member

@ikaven1024 ikaven1024 commented May 11, 2023

What type of PR is this?
/kind feature

What this PR does / why we need it:

Which issue(s) this PR fixes:
Fixes #3445

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

`karmada-search`: support namespace filter in RR for search proxy.

@karmada-bot karmada-bot requested a review from huntsman-li May 11, 2023 12:44
@karmada-bot karmada-bot added the kind/feature Categorizes issue or PR as related to a new feature. label May 11, 2023
@karmada-bot karmada-bot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label May 11, 2023
@ikaven1024
Member Author

Test Report

There are two clusters.
In cluster1:

➜  ~ kubectl get po -A
NAMESPACE   NAME                          READY   STATUS    RESTARTS   AGE
default     nginx1-5959467b5-hstfz        0/1     Pending   0          29d
default     nginx1-5959467b5-mdhcw        0/1     Pending   0          29d
test        nginx1-test-5959467b5-c9w6x   0/1     Pending   0          22h
test        nginx1-test-5959467b5-lrg8x   0/1     Pending   0          22h

In cluster2:

➜  ~ kubectl get po -A
NAMESPACE   NAME                           READY   STATUS    RESTARTS   AGE
default     nginx2-7859864b68-6vz4w        0/1     Pending   0          29d
default     nginx2-7859864b68-bvkc4        0/1     Pending   0          29d
test        nginx2-test-7859864b68-k6n2p   0/1     Pending   0          22h
test        nginx2-test-7859864b68-kclzq   0/1     Pending   0          22h

rr:

apiVersion: search.karmada.io/v1alpha1
kind: ResourceRegistry
metadata:
  name: foo
spec:
  resourceSelectors:
  - apiVersion: v1
    kind: Pod
    namespace: test
  • Get with an uncached namespace (default): returns none.
➜  ~ kubectl get po
No resources found in default namespace.
  • Get with the cached namespace (test): returns pods in the test namespace.
➜  ~ kubectl get po -n test
NAME                           AGE
nginx1-test-5959467b5-c9w6x    22h
nginx1-test-5959467b5-lrg8x    22h
nginx2-test-7859864b68-k6n2p   22h
nginx2-test-7859864b68-kclzq   22h
  • Get across all namespaces: returns only pods in the test namespace.
➜  ~ kubectl get po -A
NAMESPACE   NAME                           AGE
test        nginx1-test-5959467b5-c9w6x    22h
test        nginx1-test-5959467b5-lrg8x    22h
test        nginx2-test-7859864b68-k6n2p   22h
test        nginx2-test-7859864b68-kclzq   22h
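The selector semantics demonstrated above can be sketched as follows. This is a minimal illustration, not Karmada's actual implementation: `ResourceSelector`, `object`, and `matches` are simplified stand-ins for the real types in `search.karmada.io/v1alpha1`.

```go
package main

import "fmt"

// ResourceSelector is a simplified stand-in for the selector in a
// ResourceRegistry spec: it names a kind and, optionally, a namespace.
type ResourceSelector struct {
	APIVersion string
	Kind       string
	Namespace  string // empty means "all namespaces"
}

// object is a simplified stand-in for a member-cluster resource.
type object struct {
	APIVersion, Kind, Namespace, Name string
}

// matches reports whether obj is covered by the selector. A namespaced
// selector only admits objects in that namespace, which is why the
// default-namespace pods above are never cached.
func matches(sel ResourceSelector, obj object) bool {
	if sel.APIVersion != obj.APIVersion || sel.Kind != obj.Kind {
		return false
	}
	return sel.Namespace == "" || sel.Namespace == obj.Namespace
}

func main() {
	sel := ResourceSelector{APIVersion: "v1", Kind: "Pod", Namespace: "test"}
	pods := []object{
		{"v1", "Pod", "default", "nginx1-5959467b5-hstfz"},
		{"v1", "Pod", "test", "nginx1-test-5959467b5-c9w6x"},
	}
	for _, p := range pods {
		fmt.Printf("%s/%s cached: %v\n", p.Namespace, p.Name, matches(sel, p))
	}
}
```

With the `namespace: test` selector, only the two `test` pods from each cluster pass the filter and end up in the proxy cache, matching the test report.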

@ikaven1024
Member Author

/assign @XiShanYongYe-Chang

@codecov-commenter

codecov-commenter commented May 11, 2023

Codecov Report

Merging #3527 (67351f4) into master (ee778ff) will increase coverage by 0.44%.
The diff coverage is 84.16%.


@@            Coverage Diff             @@
##           master    #3527      +/-   ##
==========================================
+ Coverage   52.64%   53.09%   +0.44%     
==========================================
  Files         213      216       +3     
  Lines       19581    20107     +526     
==========================================
+ Hits        10308    10675     +367     
- Misses       8720     8856     +136     
- Partials      553      576      +23     
Flag Coverage Δ
unittests 53.09% <84.16%> (+0.44%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
pkg/search/proxy/store/cluster_cache.go 73.33% <55.55%> (-5.24%) ⬇️
pkg/search/proxy/store/store.go 70.39% <75.40%> (+5.13%) ⬆️
pkg/search/proxy/controller.go 80.35% <100.00%> (+0.60%) ⬆️
pkg/search/proxy/store/multi_cluster_cache.go 76.66% <100.00%> (-0.26%) ⬇️
pkg/search/proxy/store/resource_cache.go 79.09% <100.00%> (+0.38%) ⬆️
pkg/search/proxy/store/util.go 92.06% <100.00%> (-1.12%) ⬇️

... and 9 files with indirect coverage changes

@XiShanYongYe-Chang
Member

Thanks a lot, I will take a look ASAP.


@XiShanYongYe-Chang XiShanYongYe-Chang left a comment


Thanks a lot

Review threads (resolved):
pkg/search/proxy/store/util.go
pkg/search/proxy/store/util.go
pkg/search/proxy/store/store_test.go
Signed-off-by: yingjinhui <yingjinhui@didiglobal.com>
@XiShanYongYe-Chang
Member

/lgtm

Hi @hanzai, I think this PR has achieved the function you want. Can you help take a look?

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label May 16, 2023
@hanzai

hanzai commented May 17, 2023

[screenshot]

In this case, should the proxy get pods of the default namespace from the cache or from the karmada cluster?

The code indicates that this request hits the cache plugin even when pods of the default namespace are not cached. @ikaven1024 @XiShanYongYe-Chang

https://github.com/ikaven1024/karmada/blob/proxy/pkg/search/proxy/framework/plugins/cache/cache.go#L55

@XiShanYongYe-Chang
Member

Hi @hanzai, I'm sorry, I didn't catch your point. Do you mean that if it's not in the search cache, it should be obtained directly from the member cluster?

@ikaven1024
Member Author

[screenshot]

In this case, should the proxy get pods of the default namespace from the cache or from the karmada cluster?

Pods in the default namespace are not cached, so the client gets none.

The code indicates that this request hits the cache plugin even when pods of the default namespace are not cached. @ikaven1024 @XiShanYongYe-Chang

https://github.com/ikaven1024/karmada/blob/proxy/pkg/search/proxy/framework/plugins/cache/cache.go#L55

This code indicates that the Cache plugin can handle this request, and it returns an empty result for the default namespace.
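The behavior being discussed — the cache plugin claiming a request based on the resource type alone, so that an uncached namespace of a cached type yields an empty result rather than falling through — can be sketched like this. It is an illustration, not Karmada's actual plugin code; `gvr`, `cache`, `supports`, and `list` are hypothetical names.

```go
package main

import "fmt"

// gvr identifies a resource type. In this sketch the cache decides per
// type, not per namespace, whether it serves a request.
type gvr struct{ group, version, resource string }

type cache struct {
	// cached maps resource type -> namespace -> object names.
	cached map[gvr]map[string][]string
}

// supports reports whether the cache claims requests for this resource
// type at all; the requested namespace is deliberately not consulted.
func (c *cache) supports(r gvr) bool {
	_, ok := c.cached[r]
	return ok
}

// list returns the cached objects in a namespace. An uncached namespace
// of a cached type yields an empty list, not a pass-through.
func (c *cache) list(r gvr, namespace string) []string {
	return c.cached[r][namespace]
}

func main() {
	pods := gvr{"", "v1", "pods"}
	c := &cache{cached: map[gvr]map[string][]string{
		pods: {"test": {"nginx1-test-5959467b5-c9w6x"}},
	}}
	fmt.Println(c.supports(pods))        // the plugin claims all pod requests
	fmt.Println(c.list(pods, "default")) // empty: default is not cached
}
```

Under this model, `kubectl get po` against the default namespace is answered by the cache with an empty list, which is the behavior shown in the test report above.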

@hanzai

hanzai commented May 18, 2023

Hi @hanzai, I'm sorry, I didn't catch your point. Do you mean that if it's not in the search cache, it should be obtained directly from the member cluster?

[screenshots]

The command `get pod -n default` can return different results even if the admin of the default namespace performs no operations.

In a production environment, different namespaces are used by different teams. Someone adding an RR with a namespace should not affect other namespaces.

@XiShanYongYe-Chang
Member

Hi @hanzai, thanks for your wonderful reply, I understand your concern. This is indeed a question worth considering.

According to the current design, ResourceRegistry is a system configuration. When an administrator performs operations on ResourceRegistry, all users who use karmada search to query data are affected. Therefore, all users need to know the existence of the ResourceRegistry, that is, users need to know which resources are cached on the current control plane.

How to let common users know whether the queried information is cached from the member cluster to the control plane? One method is that the user proactively queries the ResourceRegistry, and the other method is based on the returned result.

Currently, resources obtained from the cache carry labels of cluster information, but resources obtained from karmada do not have labels.

Perhaps we can consider removing karmada-plugin so that we can judge based on the returned error information. Of course, it's just an assumption.

Ask @ikaven1024 for some advice.

@ikaven1024
Member Author

Perhaps we can consider removing karmada-plugin so that we can judge based on the returned error information. Of course, it's just an assumption.

karmada-plugin is useful when kubectl issues non-resource requests:

➜  .kube kubectl get po --v=6
I0518 22:31:55.927365     949 loader.go:374] Config loaded from file:  /Users/didi/.kube/config
I0518 22:31:55.928499     949 cert_rotation.go:137] Starting client certificate rotation controller
I0518 22:31:55.949184     949 round_trippers.go:553] GET https://127.0.0.1:32764/api?timeout=32s 200 OK in 20 milliseconds
I0518 22:31:55.955817     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.956970     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/autoscaling/v2?timeout=32s 200 OK in 0 milliseconds
I0518 22:31:55.958407     949 round_trippers.go:553] GET https://127.0.0.1:32764/api/v1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.958422     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/events.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.958408     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.958617     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.958628     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.959020     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.959033     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/node.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.959179     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.959239     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/autoscaling/v2beta2?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.959253     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/discovery.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds
I0518 22:31:55.959341     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.959401     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.959849     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.959860     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.959877     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/policy/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.959898     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/batch/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.959866     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/networking.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.960478     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/flowcontrol.apiserver.k8s.io/v1beta2?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.960486     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.960497     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/autoscaling/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.960502     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/apps/v1?timeout=32s 200 OK in 3 milliseconds
I0518 22:31:55.960507     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/storage.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds
I0518 22:31:55.960519     949 round_trippers.go:553] GET https://127.0.0.1:32764/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds
I0518 22:31:55.969654     949 round_trippers.go:553] GET https://127.0.0.1:32764/api/v1/namespaces/default/pods?limit=500 200 OK in 1 milliseconds

With it, operating with kubectl stays friendly for users.

Of course, users can customize their own strategy by refactoring the proxy framework. But it's not recommended to merge that into the open-source project.

@hanzai

hanzai commented May 19, 2023

karmada-plugin is useful in kubectl query non-resource request:

Yeah, I agree with that. karmada-plugin is necessary in my use case.
I figure that the SupportRequest function can take the namespace into account. When the namespace is not cached, pass the request through to the next plugin, as in the following code.

https://github.com/hanzai/karmada/blob/c7453080a056bc79138dba290c1bdd1eef64e208/pkg/search/proxy/framework/plugins/cache/cache.go#L59
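The proposed change can be sketched as a namespace-aware variant of the support check. Again this is an illustrative sketch, not the actual `SupportRequest` signature; `gvr`, `cache`, and `supports` are hypothetical names.

```go
package main

import "fmt"

// gvr identifies a resource type.
type gvr struct{ group, version, resource string }

type cache struct {
	// cached maps resource type -> namespace -> object names.
	cached map[gvr]map[string][]string
}

// supports sketches the proposed check: the cache only claims a request
// when both the resource type and the requested namespace are cached.
// Otherwise the request passes through to the next plugin (e.g. the
// karmada plugin) instead of returning an empty result.
func (c *cache) supports(r gvr, namespace string) bool {
	namespaces, ok := c.cached[r]
	if !ok {
		return false
	}
	if namespace == "" { // a list across all namespaces is served from cache
		return true
	}
	_, ok = namespaces[namespace]
	return ok
}

func main() {
	pods := gvr{"", "v1", "pods"}
	c := &cache{cached: map[gvr]map[string][]string{
		pods: {"test": {"nginx2-test-7859864b68-k6n2p"}},
	}}
	fmt.Println(c.supports(pods, "test"))    // true: namespace is cached
	fmt.Println(c.supports(pods, "default")) // false: pass through
}
```

The trade-off raised in the following comments is that this makes the source of a response depend on which namespace is queried, which can produce the inconsistent `get -A` vs. `get -n` results described below.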

@ikaven1024
Member Author

For example:

➜  ~ kubectl get po -A
test        nginx-1
test        nginx-2
➜  ~ kubectl get po -n kube-system
kube-system        kube-apiserver-0

It is strange to get some pods from kube-system while they do not appear when listing all namespaces.

IMO, we should keep this logic at present, and revisit it when we have sufficient cases.
Ask @XiShanYongYe-Chang @RainbowMango for advice.

@XiShanYongYe-Chang
Member

IMO, we should keep this logic at present, and revisit it when we have sufficient cases.

I agree with this point.

Hi @hanzai, if you want to discuss it at the meeting, please feel free to add agenda in the next Karmada Community Meeting: https://docs.google.com/document/d/1y6YLVC-v7cmVAdbjedoyR5WL0-q45DBRXTvz5_I7bkA/edit#heading=h.g61sgp7w0d0c

@hanzai

hanzai commented May 22, 2023

IMO, we should keep this logic at present, and revisit it when we have sufficient cases.

I agree with this point.

Hi @hanzai, if you want to discuss it at the meeting, please feel free to add agenda in the next Karmada Community Meeting: https://docs.google.com/document/d/1y6YLVC-v7cmVAdbjedoyR5WL0-q45DBRXTvz5_I7bkA/edit#heading=h.g61sgp7w0d0c

I'm tied up this month and glad to attend the meeting next month.

@XiShanYongYe-Chang
Member

I'm tied up this month and glad to attend the meeting next month.

Okay, we are looking forward to your participation.

I think this PR can continue to advance; you can raise an issue to track the related questions. Thank you.

/approve

@karmada-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: XiShanYongYe-Chang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 22, 2023
@karmada-bot karmada-bot merged commit 6089fa3 into karmada-io:master May 22, 2023