✨ feature: sync resources from syncer virtual server #1995
Conversation
Force-pushed 962d0af to 8f0061b
Are we certain that a reconciler process per synctarget like this is the most efficient way to do it?
	return nil
}

syncerVirtualWorkspaceURL := syncTarget.Status.VirtualWorkspaces[0].URL
Is it too early to handle the N>1 case?
syncTargetKey := workloadv1alpha1.ToSyncTargetKey(c.syncTargetWorkspace, c.syncTargetName)

upstreamInformers := dynamicinformer.NewFilteredDynamicSharedInformerFactoryWithOptions(c.upstreamDynamicClusterClient.Cluster(logicalcluster.Wildcard), metav1.NamespaceAll, func(o *metav1.ListOptions) {
Is it at all possible to have one set of informers and to defer filtering? Do we care about the load here?
The filter is there to handle only the given synctarget.
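The list-options tweak above is what scopes the wildcard informers to a single synctarget. A minimal stdlib-only sketch of that idea follows; the label key prefix and the `Sync` state value are assumptions for illustration, not the exact kcp constants:

```go
package main

import "fmt"

// stateLabelPrefix is an assumed label key prefix; the real kcp constant may differ.
const stateLabelPrefix = "state.workload.kcp.dev/"

// syncTargetSelector builds a label selector string that restricts the shared
// informers to objects scheduled onto one synctarget key.
func syncTargetSelector(syncTargetKey string) string {
	return fmt.Sprintf("%s%s=Sync", stateLabelPrefix, syncTargetKey)
}

// matches mimics the server-side filtering: only objects carrying the
// synctarget's state label are seen by the informers.
func matches(labels map[string]string, syncTargetKey string) bool {
	return labels[stateLabelPrefix+syncTargetKey] == "Sync"
}

func main() {
	fmt.Println(syncTargetSelector("2Wf9Qk3F"))
}
```

In the real code this selector would be set inside the `tweakListOptions` callback passed to the informer factory, so the filtering happens server-side rather than in the syncer.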
}

func getAllGVRs(synctarget *workloadv1alpha1.SyncTarget) (map[schema.GroupVersionResource]bool, error) {
	// add configmap and secrets by default
Why? We have a list of built-in types that's more than just this, and regardless don't we have e.g. the kubernetes export with these bits?
This is from the old code :) https://github.com/kcp-dev/kcp/blob/main/pkg/syncer/syncer.go#L292. I think the reason is that we do not sync cluster-scoped resources and we do not sync service accounts, and the kubernetes export does not include configmaps and secrets.
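The defaulting logic being discussed can be sketched as follows. This is an illustrative stdlib-only version, not the real `getAllGVRs`; the `syncedResource` struct stands in for the actual kcp API type:

```go
package main

import "fmt"

// syncedResource is a stand-in for the synced-resources entries on the
// SyncTarget status; the real kcp type has more fields.
type syncedResource struct {
	Group, Version, Resource string
}

// allGVRs seeds the synced set with configmaps and secrets (which the
// kubernetes APIExport does not cover) and extends it from the synctarget's
// synced resources, skipping service accounts, which are never synced.
func allGVRs(synced []syncedResource) map[string]bool {
	gvrs := map[string]bool{
		"/v1/configmaps": true, // added by default
		"/v1/secrets":    true, // added by default
	}
	for _, sr := range synced {
		if sr.Group == "" && sr.Resource == "serviceaccounts" {
			continue // deliberately not synced
		}
		gvrs[fmt.Sprintf("%s/%s/%s", sr.Group, sr.Version, sr.Resource)] = true
	}
	return gvrs
}

func main() {
	fmt.Println(allGVRs([]syncedResource{{"apps", "v1", "deployments"}}))
}
```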
Force-pushed 8f0061b to fb90e1d
Would you explain the question in more detail? I think the syncer process is meant to handle a single synctarget.
Force-pushed 9004f53 to e974da8
@qiujian16 do we think, at a fundamental design level, this could be done by dynamically managing the lifecycle of informer handlers in the syncer controllers as-is? E.g. when the spec or status syncer sees that the set of GVRs to manage has changed, start and stop handling events on the correct informers. That would allow us to have fewer controllers here and likely make the approach a little simpler.
@qiujian16 could you please explain what you're doing here? 😄 I'm curious why you are starting one set of controllers per GVR. I think you could accomplish the same thing by just filtering out "invalid" GVRs when processing resources, instead of starting lots and lots of controllers?
We do not know in advance what an "invalid" GVR is. The objective is that the syncer learns from the synctarget API (or maybe the discovery API of the syncer virtual workspace) which GVRs should be synced from upstream. The GVR list can be pretty dynamic, so we need to dynamically add/remove an upstream/downstream informer per GVR for spec/status sync.
I think we could. We still need to start/stop an informer per GVR based on the synctarget API, but we could just set an event handler for the spec/status syncers instead of starting a spec/status syncer per GVR. Does that sound sensible?
Force-pushed cefc822 to 75ca22e
@stevekuznetsov @ncdc PTAL again, changed to avoid creating a spec/status syncer per GVR.
defer c.mutex.Unlock()

if _, ok := c.syncerInformerMap[gvr]; ok {
	return
How do you handle the case where the set of resources for a synctarget changes?
In the synctarget we only track per GVR. If a GVR is included in the syncTarget, we start the informer if it is not already started.
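The start-once pattern visible in the diff above (a mutex-guarded map keyed by GVR) can be sketched in plain Go; all names here are illustrative, and the real code would start a shared dynamic informer where the comments indicate:

```go
package main

import "sync"

// informerManager sketches the mutex-guarded map shown in the diff: one stop
// channel per GVR, with each informer started at most once.
type informerManager struct {
	mutex     sync.Mutex
	informers map[string]chan struct{}
}

func newInformerManager() *informerManager {
	return &informerManager{informers: map[string]chan struct{}{}}
}

// ensure starts an informer for the GVR only if none is running yet and
// reports whether a new one was started.
func (m *informerManager) ensure(gvr string) bool {
	m.mutex.Lock()
	defer m.mutex.Unlock()
	if _, ok := m.informers[gvr]; ok {
		return false // already started, nothing to do
	}
	stop := make(chan struct{})
	m.informers[gvr] = stop
	// a real implementation would start the shared informer with this stop
	// channel here
	return true
}

// prune stops informers for GVRs no longer listed on the synctarget and
// returns how many were stopped.
func (m *informerManager) prune(desired map[string]bool) int {
	m.mutex.Lock()
	defer m.mutex.Unlock()
	stopped := 0
	for gvr, stop := range m.informers {
		if !desired[gvr] {
			close(stop)
			delete(m.informers, gvr)
			stopped++
		}
	}
	return stopped
}

func main() {}
```

The `prune` half addresses the question above: when a GVR disappears from the synctarget, closing its stop channel shuts the corresponding informer down.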
Force-pushed 75ca22e to b2bbc31
/retest
Force-pushed 76d5a87 to 79c24d8
Force-pushed 98f44f5 to c3b6f9c
/retest
Some minor comments, but apart from that LGTM.
Force-pushed c3b6f9c to dda8b95
As we discussed during the meeting, do you think we could:
- issue a SSAR before adding a GVR to the map of syncer informers (here?)
- trigger a new reconcile (every 5 or 10 minutes) for resources that do not have the expected permissions
- add the related tests
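The gating suggested in the first two bullets can be sketched as a partition step. This is a stdlib-only illustration: the `authorize` callback stands in for the real SelfSubjectAccessReview call against the syncer virtual workspace, and denied GVRs are returned separately so a later reconcile can retry them:

```go
package main

import "fmt"

// partitionByPermission checks each GVR against the authorize callback before
// it would be added to the informer map. Allowed GVRs go to the informer map;
// denied ones are kept aside for retry on a subsequent reconcile.
func partitionByPermission(gvrs []string, authorize func(gvr string) bool) (allowed, retry []string) {
	for _, gvr := range gvrs {
		if authorize(gvr) {
			allowed = append(allowed, gvr)
		} else {
			retry = append(retry, gvr)
		}
	}
	return allowed, retry
}

func main() {
	allowed, retry := partitionByPermission(
		[]string{"v1/configmaps", "apps/v1/deployments"},
		func(gvr string) bool { return gvr == "v1/configmaps" },
	)
	fmt.Println(allowed, retry)
}
```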
Signed-off-by: Jian Qiu <jqiu@redhat.com>
Signed-off-by: Jian Qiu <jqiu@redhat.com>
Force-pushed dda8b95 to a249760
I think we already have a heartbeat that will trigger the reconcile here every 20s.
Signed-off-by: Jian Qiu <jqiu@redhat.com>
Force-pushed a249760 to d1aafb3
SSAR check and related test added.
[APPROVALNOTIFIER] This PR is APPROVED. Approval requirements bypassed by manually added approval. This pull-request has been approved by: davidfestal. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Signed-off-by: Jian Qiu jqiu@redhat.com
Summary
In addition to syncing resources based on the resources flag on the syncer, also read the syncedResources on the synctarget. This requires starting the spec/status syncer per GVR dynamically. With this change, the user can specify the apiexport on the synctarget and no longer needs to set the resources flag on the syncer.
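The overall reconcile described in the summary can be sketched as a set diff. This is an illustrative stdlib-only version of the flow, not the actual syncer code: the desired GVR set is the union of the --resources flag values and the synctarget's synced resources, and each reconcile compares it against what is already running:

```go
package main

import (
	"fmt"
	"sort"
)

// desiredGVRs unions the syncer's --resources flag values with the GVRs read
// from the synctarget, the two sources combined by this PR.
func desiredGVRs(flagResources, syncedResources []string) map[string]bool {
	desired := map[string]bool{}
	for _, gvr := range flagResources {
		desired[gvr] = true
	}
	for _, gvr := range syncedResources {
		desired[gvr] = true
	}
	return desired
}

// diff computes which per-GVR informers to start and which to stop on each
// reconcile, comparing the desired set against what is already running.
func diff(desired, running map[string]bool) (toStart, toStop []string) {
	for gvr := range desired {
		if !running[gvr] {
			toStart = append(toStart, gvr)
		}
	}
	for gvr := range running {
		if !desired[gvr] {
			toStop = append(toStop, gvr)
		}
	}
	sort.Strings(toStart)
	sort.Strings(toStop)
	return toStart, toStop
}

func main() {
	desired := desiredGVRs([]string{"v1/configmaps"}, []string{"apps/v1/deployments"})
	toStart, toStop := diff(desired, map[string]bool{"v1/secrets": true})
	fmt.Println(toStart, toStop)
}
```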
Related issue(s)
Fixes #1888