
✨ feature: sync resources from syncer virtual server #1995

Merged: 3 commits into kcp-dev:main from dynamic-syncer on Oct 4, 2022

Conversation

qiujian16 (Contributor)

Signed-off-by: Jian Qiu jqiu@redhat.com

Summary

In addition to syncing resources based on the resources flag on the syncer, also read the syncedResources on the SyncTarget. This requires starting a spec/status syncer per GVR dynamically. With this change, users can specify the APIExport on the SyncTarget and no longer need to set the resources flag on the syncer.

Related issue(s)

Fixes #1888

@stevekuznetsov (Contributor) left a comment:

Are we certain that a reconciler process per synctarget like this is the most efficient way to do it?

return nil
}

syncerVirtualWorkspaceURL := syncTarget.Status.VirtualWorkspaces[0].URL
Contributor:

Is it too early to handle the N>1 case?
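For illustration of the N>1 question, a minimal sketch of a guard instead of indexing `[0]` directly; the import path, the helper name, and the requeue-on-error behaviour are assumptions, not the PR's actual code:

```go
package syncer

import (
	"fmt"

	workloadv1alpha1 "github.com/kcp-dev/kcp/pkg/apis/workload/v1alpha1"
)

// virtualWorkspaceURL returns the single reported virtual workspace URL and
// errors out (so the caller can requeue) when zero or more than one entry is
// present, instead of blindly indexing [0].
func virtualWorkspaceURL(syncTarget *workloadv1alpha1.SyncTarget) (string, error) {
	if n := len(syncTarget.Status.VirtualWorkspaces); n != 1 {
		return "", fmt.Errorf("expected exactly one virtual workspace URL on SyncTarget %s, got %d", syncTarget.Name, n)
	}
	return syncTarget.Status.VirtualWorkspaces[0].URL, nil
}
```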


syncTargetKey := workloadv1alpha1.ToSyncTargetKey(c.syncTargetWorkspace, c.syncTargetName)

upstreamInformers := dynamicinformer.NewFilteredDynamicSharedInformerFactoryWithOptions(c.upstreamDynamicClusterClient.Cluster(logicalcluster.Wildcard), metav1.NamespaceAll, func(o *metav1.ListOptions) {
Contributor:

Is it at all possible to have one set of informers and to defer filtering? Do we care about the load here?

Contributor Author:

The filter is there to handle only this specific synctarget.
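For illustration, a rough sketch of how such per-synctarget filtering can be wired into a plain client-go dynamic informer factory; the label key format (state.workload.kcp.dev/<syncTargetKey>) and the resync period are assumptions, and kcp's own WithOptions variant adds cluster-awareness on top of this:

```go
package syncer

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
)

// newSyncTargetScopedInformerFactory builds a dynamic informer factory whose
// list/watch calls are pre-filtered server-side to objects labelled for one
// sync target, so the factory never sees other targets' objects.
// The label key below is an assumption for illustration.
func newSyncTargetScopedInformerFactory(client dynamic.Interface, syncTargetKey string) dynamicinformer.DynamicSharedInformerFactory {
	return dynamicinformer.NewFilteredDynamicSharedInformerFactory(
		client,
		10*time.Minute, // resync period, illustrative
		metav1.NamespaceAll,
		func(o *metav1.ListOptions) {
			// Only objects carrying the per-sync-target state label are
			// listed and watched.
			o.LabelSelector = "state.workload.kcp.dev/" + syncTargetKey
		},
	)
}
```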

}

func getAllGVRs(synctarget *workloadv1alpha1.SyncTarget) (map[schema.GroupVersionResource]bool, error) {
// add configmap and secrets by default
Contributor:

Why? We have a list of built-in types that's more than just this, and regardless don't we have e.g. the kubernetes export with these bits?

Contributor Author:

This is from the old code :) https://github.com/kcp-dev/kcp/blob/main/pkg/syncer/syncer.go#L292. I think the reason is that we do not sync cluster-scoped resources and we do not sync service accounts. The kubernetes export does not include configmaps and secrets.
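A sketch of the shape being discussed, assuming the synced resources are exposed on the SyncTarget status as group/resource/versions tuples (the SyncedResources field layout is an assumption); not the PR's exact code:

```go
package syncer

import (
	"k8s.io/apimachinery/pkg/runtime/schema"

	workloadv1alpha1 "github.com/kcp-dev/kcp/pkg/apis/workload/v1alpha1"
)

// getAllGVRs sketches the idea discussed above: configmaps and secrets are
// always synced, and everything else comes from the GVRs advertised on the
// SyncTarget. Field names on SyncedResources are assumptions.
func getAllGVRs(syncTarget *workloadv1alpha1.SyncTarget) (map[schema.GroupVersionResource]bool, error) {
	gvrs := map[schema.GroupVersionResource]bool{
		// Namespaced built-ins that workloads almost always depend on; the
		// kubernetes export does not cover them, so add them by default.
		{Group: "", Version: "v1", Resource: "configmaps"}: true,
		{Group: "", Version: "v1", Resource: "secrets"}:    true,
	}
	for _, rs := range syncTarget.Status.SyncedResources {
		for _, v := range rs.Versions {
			gvrs[schema.GroupVersionResource{Group: rs.Group, Version: v, Resource: rs.Resource}] = true
		}
	}
	return gvrs, nil
}
```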

@qiujian16 (Contributor, Author)

Are we certain that a reconciler process per synctarget like this is the most efficient way to do it?

Would you explain the question a bit more? I think the syncer process is meant to handle a single synctarget.

@qiujian16 qiujian16 force-pushed the dynamic-syncer branch 3 times, most recently from 9004f53 to e974da8 Compare September 16, 2022 10:16
@stevekuznetsov (Contributor)

@qiujian16 do we think, at a fundamental design level, this could be done by dynamically managing the lifecycle of informers' handlers in the syncer controllers as-is? E.g. when the spec or status syncer sees that the set of GVRs to manage changes, it starts and stops handling events on the correct informers. That would allow us to have fewer controllers here and likely make the approach a little simpler.

@ncdc (Member)

ncdc commented Sep 16, 2022

@qiujian16 could you please explain what you're doing here? 😄 I'm curious why you are starting 1 set of controllers per GVR?

I think you could accomplish the same thing by just filtering out "invalid" GVRs when processing resources, instead of starting lots and lots of controllers?

@qiujian16 (Contributor, Author)

@qiujian16 could you please explain what you're doing here? 😄 I'm curious why you are starting 1 set of controllers per GVR?

I think you could accomplish the same thing by just filtering out "invalid" GVRs when processing resources, instead of starting lots and lots of controllers?

We do not know what an "invalid" GVR is; the objective is that the syncer can learn from the SyncTarget API (or maybe the discovery API of the syncer virtual workspace) which GVRs should be synced from upstream. The list of GVRs can be pretty dynamic, so we need to dynamically add/remove upstream/downstream informers per GVR for spec/status sync.

@qiujian16 (Contributor, Author)

@qiujian16 do we think at a fundamental design level this could be done by dynamically managing the lifecycle of informer's handlers in the syncer controllers as-is? E.G. when the spec or status syncer sees that the set of GVRs to manage changes, start & stop handling events on the correct informers. That would allow us to have fewer controllers happening here and likely make the approach a little simpler.

I think we could. We still need to start/stop an informer per GVR based on the SyncTarget API, but we could just register event handlers for the spec/status sync instead of starting a spec/status syncer per GVR. Does that sound sensible?
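A rough sketch of that alternative, assuming a single dynamic informer factory and an enqueue callback shared by the existing spec/status syncers (the function and parameter names here are illustrative, not the PR's API):

```go
package syncer

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
)

// ensureGVRHandled registers the existing spec/status syncer's enqueue
// function as an event handler on the informer for a newly observed GVR and
// starts any informers that are not running yet, instead of spinning up a
// whole syncer per GVR. The started map and enqueue callback are assumptions.
func ensureGVRHandled(
	factory dynamicinformer.DynamicSharedInformerFactory,
	gvr schema.GroupVersionResource,
	started map[schema.GroupVersionResource]bool,
	enqueue func(gvr schema.GroupVersionResource, obj interface{}),
	stopCh <-chan struct{},
) {
	if started[gvr] {
		return // handlers for this GVR are already wired up
	}
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { enqueue(gvr, obj) },
		UpdateFunc: func(_, newObj interface{}) { enqueue(gvr, newObj) },
		DeleteFunc: func(obj interface{}) { enqueue(gvr, obj) },
	})
	// Start only starts informers that have not been started yet, so calling
	// it again for each newly observed GVR is safe.
	factory.Start(stopCh)
	started[gvr] = true
}
```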

@qiujian16 qiujian16 force-pushed the dynamic-syncer branch 4 times, most recently from cefc822 to 75ca22e Compare September 19, 2022 08:31
@qiujian16 (Contributor, Author)

@stevekuznetsov @ncdc PTAL again; changed to avoid creating a spec/status syncer per GVR.

defer c.mutex.Unlock()

if _, ok := c.syncerInformerMap[gvr]; ok {
return
Contributor:

How do you handle the case where the set of resources for a synctarget changes?

Contributor Author:

In the SyncTarget we only track per GVR. If a GVR is included in the SyncTarget, we start its informer if it is not already started.
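A minimal sketch of that bookkeeping, reusing the syncerInformerMap and mutex names from the snippet above; the per-GVR stop channel and the start callback are illustrative assumptions (the PR itself only describes starting informers that are not yet started):

```go
package syncer

import (
	"sync"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// controller tracks which GVRs already have running informers behind a mutex.
type controller struct {
	mutex             sync.Mutex
	syncerInformerMap map[schema.GroupVersionResource]chan struct{}
}

// ensureInformer starts an informer for the GVR at most once; a per-GVR stop
// channel is kept so it could later be torn down if the GVR disappears.
func (c *controller) ensureInformer(gvr schema.GroupVersionResource, start func(stop <-chan struct{})) {
	c.mutex.Lock()
	defer c.mutex.Unlock()
	if _, ok := c.syncerInformerMap[gvr]; ok {
		return // already running for this GVR
	}
	stop := make(chan struct{})
	c.syncerInformerMap[gvr] = stop
	go start(stop)
}
```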

@qiujian16 (Contributor, Author)

/retest

@qiujian16 qiujian16 force-pushed the dynamic-syncer branch 8 times, most recently from 76d5a87 to 79c24d8 Compare September 23, 2022 05:29
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 26, 2022
@qiujian16 qiujian16 force-pushed the dynamic-syncer branch 6 times, most recently from 98f44f5 to c3b6f9c Compare September 28, 2022 08:10
@qiujian16 (Contributor, Author)

/retest

@davidfestal (Member)

Some minor comments, but apart from that LGTM

@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 28, 2022
@openshift-merge-robot openshift-merge-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Sep 29, 2022
@davidfestal (Member) left a comment:

As we discussed during the meeting, do you think we could:

  • issue an SSAR before adding a GVR to the map of syncer informers (here?)
  • trigger a new reconcile (every 5 or 10 minutes) for resources that do not have the expected permissions
  • add the related tests

Signed-off-by: Jian Qiu <jqiu@redhat.com>
Signed-off-by: Jian Qiu <jqiu@redhat.com>
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 30, 2022
@qiujian16 (Contributor, Author)

As we discussed during the meeting, do you think we could:

  • trigger a new reconcile (every 5 or 10 minutes) for resources that do not have the expected permissions

I think we already have a heartbeat that triggers this reconcile every 20s.

Signed-off-by: Jian Qiu <jqiu@redhat.com>
@qiujian16 (Contributor, Author)

SSAR check and related test added.
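For illustration, a sketch of what such an SSAR check can look like with the standard client-go authorization API before an informer is started for a GVR; the verbs checked and the helper name are assumptions, not necessarily the PR's exact code:

```go
package syncer

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes"
)

// canSyncGVR issues SelfSubjectAccessReviews to check whether the syncer's
// identity may list/watch/update the given resource before starting an
// informer for it. A denied GVR is simply skipped and retried on a later
// reconcile (e.g. the heartbeat-driven one mentioned above).
func canSyncGVR(ctx context.Context, client kubernetes.Interface, gvr schema.GroupVersionResource) (bool, error) {
	for _, verb := range []string{"list", "watch", "update"} {
		ssar := &authorizationv1.SelfSubjectAccessReview{
			Spec: authorizationv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Group:    gvr.Group,
					Version:  gvr.Version,
					Resource: gvr.Resource,
					Verb:     verb,
				},
			},
		}
		resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, ssar, metav1.CreateOptions{})
		if err != nil {
			return false, fmt.Errorf("SSAR for %s (verb %s) failed: %w", gvr.Resource, verb, err)
		}
		if !resp.Status.Allowed {
			return false, nil // missing permission: do not start an informer for this GVR yet
		}
	}
	return true, nil
}
```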

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Oct 4, 2022
@davidfestal davidfestal added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 4, 2022
@openshift-ci (bot)

openshift-ci bot commented Oct 4, 2022

[APPROVALNOTIFIER] This PR is APPROVED

Approval requirements bypassed by manually added approval.

This pull-request has been approved by: davidfestal

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit 7cda1cb into kcp-dev:main Oct 4, 2022
@qiujian16 qiujian16 deleted the dynamic-syncer branch October 8, 2022 06:17
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
lgtm: Indicates that a PR is ready to be merged.
Development

Successfully merging this pull request may close these issues.

feature: syncer should sync all resources exposed in syncer virtual workspace.
5 participants