✨ Partitionset reconciliation #2513
Conversation
/retest
I did a first pass, for now only over the API. Let me know what you think.
// the MatchLabels generated for the Partition according to the dimensions in the PartitionSet.
// This does not limit the filtering as any MatchLabel can also be expressed as
// a MatchExpression.
MatchExpressions []metav1.LabelSelectorRequirement `json:"matchExpressions,omitempty"`
I think that a standard selector can be matchLabels and/or matchExpressions, and I think that both can be set at the same time. Not sure I understand why we cannot use a standard label selector.
Also, could we rename it to shardSelector/shardMatchExpression?
As per the comment: "MatchExpressions are used here instead of Selector to avoid possible collisions with the MatchLabels generated for the Partition".
PartitionSets are used to generate Partitions. Partitions have a LabelSelector. LabelSelectors have two parts:
- MatchLabels
- MatchExpressions
MatchLabels of Partitions are generated from the Dimensions field of the PartitionSet. MatchExpressions of Partitions are copied from the MatchExpressions of the PartitionSet. Keeping this separation provides a clearer UX: it would be more difficult for users to understand the Partitions generated from a PartitionSet if MatchLabels could be a mix of the Dimensions-generated labels and MatchLabels inherited from the PartitionSet. As per the comment, there is no trade-off in terms of capabilities: any MatchLabels can be expressed as MatchExpressions (see the sketch below).
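For illustration only (this is not the PR's generator code), a generated Partition's selector could combine the two parts roughly as follows, keeping the dimension-derived matchLabels and the inherited matchExpressions separate; the helper name and the label values are hypothetical:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildPartitionSelector mirrors the separation described above: matchLabels
// come from the PartitionSet dimensions (one value combination per Partition),
// matchExpressions are copied verbatim from the PartitionSet.
func buildPartitionSelector(dimensionLabels map[string]string, inherited []metav1.LabelSelectorRequirement) *metav1.LabelSelector {
	return &metav1.LabelSelector{
		MatchLabels:      dimensionLabels,
		MatchExpressions: inherited,
	}
}

func main() {
	sel := buildPartitionSelector(
		map[string]string{"region": "europe"}, // generated from Dimensions: ["region"]
		[]metav1.LabelSelectorRequirement{{
			Key:      "cloud",
			Operator: metav1.LabelSelectorOpIn,
			Values:   []string{"aws", "gcp"},
		}},
	)
	fmt.Printf("%+v\n", sel)
}
```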
As per the comment: "MatchExpressions are used here instead of Selector to avoid possible collisions with the MatchLabels generated for the Partition".
If we have a collision, does it mean we can achieve the same by providing a selector on a Partition resource :)?
If I understand your point correctly, PartitionSet is just a convenient way to create Partitions. Service Providers are free to create Partitions in the way that best suits them: using a PartitionSet or directly creating the resource. If they do it manually they can set selectors (MatchLabels and MatchExpressions) in Partitions in a similar way to what is generated by the controller.
@@ -0,0 +1,125 @@
/*
Copyright 2022 The KCP Authors.
nit: 2023
(applies to other places as well)
The PR and commit were created in 2022
// the MatchLabels generated for the Partition according to the dimensions in the PartitionSet.
// This does not limit the filtering as any MatchLabel can also be expressed as
// a MatchExpression.
MatchExpressions []metav1.LabelSelectorRequirement `json:"matchExpressions,omitempty"`
Would it make sense to make PartitionSetSpec.Dimensions and/or PartitionSetSpec.matchExpressions/PartitionSetSpec.selector immutable? I think this could be less disruptive. An SP could create a new PartitionSet, wait for a Partition to be created, switch an APIExport to the new Partition, and then delete the old PartitionSet.
As I see it this is not the likely flow. It does not take into consideration that Partitions need to be colocated with the APIExportEndpointSlices that consume them. Here is how I see the flow:
- A service provider updates a PartitionSet.
- The set of Partitions bound to the PartitionSet is updated accordingly in the PartitionSet workspace.
- Each of these Partitions is copied into the workspace where the intended APIExportEndpointSlice is located. Each APIExportEndpointSlice is intended for an operator/controller manager instance. It makes the URLs for the set of shards identified by the Partition available to it.
- The service provider can choose at any convenient time to patch the APIExportEndpointSlices with the new Partitions.
There is no modification of the service and no disruption until the APIExportEndpointSlices are patched.
The steps above could be automated but that is another discussion.
I'm not sure if I read the code correctly, but it seems that we are deleting old partitions potentially still being used by an APIExportEndpointSlice, which seems disruptive.
xref:
if err := c.deletePartition(ctx, logicalcluster.From(oldPartition).Path(), oldPartition.Name); err != nil && !apierrors.IsNotFound(err) {
Yes, we do delete Partitions, but only in the workspace where the PartitionSet has been created. Partitions used by APIExportEndpointSlices should be copied into the workspaces where the APIExportEndpointSlices are located to suit the placement decision: regional, cloud proximity. These copied Partitions should not get modified or deleted by the reconciliation of the PartitionSet.
pkg/reconciler/topology/resources.go
Outdated
func generatePartition(name string, matchExpressions []metav1.LabelSelectorRequirement, matchLabels map[string]string) *topologyv1alpha1.Partition {
	return &topologyv1alpha1.Partition{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: name + "-",
It appears to me that providers will have to manually introspect generated Partitions to make use of them, because names appear random and there is no explanation of the generated selector. Am I right? Wondering if we could/should improve this.
Names are generated because the Partitions themselves are generated. We could create deterministic names by hashing the Partition spec, but I am not sure it would be reasonable. An annotation may be better suited to store the hash (a sketch follows below). This would be stable as the PartitionSet reconciliation only does Partition create/delete, no update.
That said, I am not sure it would be a significant improvement:
- Shard topology should be stable, e.g. you may add new shards in a region from time to time, but adding a new region is a rare event that would need action from Service Providers for them to cover it. It would also mean just a few additional Partitions to be taken care of. Deletion is not much different.
- A PartitionSet change is driven by the Service Provider and means that they want to change the distribution of their service deployments. In that case they can check the deltas between the deployed Partitions and the newly generated ones, even if a random name is used for Partitions. Some tooling on the client side may help with that.
There may be bits in the workflow that can be improved and I am open to proposals. My current thinking is that this could be better addressed when we develop some client-side tooling that automates service providers' deployments and topology changes, possibly taking a PartitionSet and an APIExport as parameters. I am a bit wary of having this automation on the server side because:
- it could drive automated changes that introduce big disruptions and cost increases for service providers, so it would need to be gated, and hence may not bring much gain compared to a client-side approach;
- I am not sure that cross-shard communication will always be possible.
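For illustration only, a deterministic identifier of the kind mentioned above, derived by hashing the sorted matchLabels and suitable for storing in an annotation, could look like the following sketch; it is not part of the PR:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// matchLabelsHash produces a short, stable identifier from a Partition's
// matchLabels, independent of map iteration order.
func matchLabelsHash(matchLabels map[string]string) string {
	keys := make([]string, 0, len(matchLabels))
	for k := range matchLabels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order

	h := sha256.New()
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%s;", k, matchLabels[k])
	}
	return fmt.Sprintf("%x", h.Sum(nil))[:8] // short, stable suffix
}

func main() {
	fmt.Println(matchLabelsHash(map[string]string{"region": "europe", "cloud": "gcp"}))
}
```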
/retest
pkg/reconciler/topology/resources.go
Outdated
@@ -22,12 +22,20 @@ import (
	topologyv1alpha1 "github.com/kcp-dev/kcp/pkg/apis/topology/v1alpha1"
)

const dimensionsLabelKeyBase = "partition.kcp.io/"
partition.topology.kcp.io/
@sttts @p0lyn0mial I have implemented the changes we discussed:
/retest
// selector (optional) is a label selector that filters shard targets.
Selector *metav1.LabelSelector `json:"selector,omitempty"`
// ShardSelector (optional) specifies filtering for shard targets.
ShardSelector *metav1.LabelSelector `json:"shardSelector,omitempty"`
I like the name :)
@@ -453,6 +453,12 @@ func (s *Server) Run(ctx context.Context) error {
		}
	}

	if s.Options.Controllers.EnableAll || enabled.Has("partition") {
partitionset?
I would be OK with changing to partitionset. I used partition as we don't have any other controller specific to Partitions. Partitions alone don't get reconciled; it is only in the context of an APIExportEndpointSlice that the controller takes the referenced Partition into consideration for populating the endpoints. The same mechanism could also be leveraged in other places in the future.
we should honestly remove all these ifs. We don't use them anywhere.
ack. Outside of the scope of this PR
	return
}

logger := logging.WithObject(logging.WithReconciler(klog.Background(), ControllerName), obj.(*topologyv1alpha1.Partition))
move after line 218 so that we are sure the obj is obj.(*topologyv1alpha1.Partition)
done
for _, ownerRef := range partition.OwnerReferences {
	if ownerRef.Kind == "PartitionSet" {
		path := logicalcluster.From(partition).Path()
		if !ok {
remove
done
if !ok {
	return false
}
if !reflect.DeepEqual(oldShard.Labels, newShard.Labels) {
return reflect.DeepEqual(oldShard.Labels, newShard.Labels)
return !reflect.DeepEqual(oldShard.Labels, newShard.Labels)
done
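A minimal sketch of the resulting filter, assuming a simplified shard type purely for the example (the real controller works on corev1alpha1.Shard): only updates that change labels should trigger a requeue of the owning PartitionSets.

```go
package main

import (
	"fmt"
	"reflect"
)

// shard is a stand-in for the API type; only labels matter for partitioning.
type shard struct {
	Name   string
	Labels map[string]string
}

// labelsChanged reports whether a shard update is relevant, i.e. whether the
// labels differ between the old and the new object.
func labelsChanged(oldShard, newShard *shard) bool {
	return !reflect.DeepEqual(oldShard.Labels, newShard.Labels)
}

func main() {
	old := &shard{Name: "shard-1", Labels: map[string]string{"region": "europe"}}
	updated := &shard{Name: "shard-1", Labels: map[string]string{"region": "asia"}}
	fmt.Println(labelsChanged(old, updated)) // true -> requeue owning PartitionSets
}
```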
}

func (c *controller) deletePartitions(ctx context.Context, partitionSet *topologyv1alpha1.PartitionSet) error {
	partitionSet.Status.Count = 0
why is this modification required?
Oh, it tracks the number of associated partitions. Should we then decrease this number by the number of successful calls to deletePartition?
deletePartitions (mind the s at the end) is only called in case of an issue with the selector (lines 55 and 83); the count should then be set back to 0. In case of regular deletion because of a change in shards, deletePartition without the s is called (lines 147 and 182). In case of no error with the selector, the number of endpoints is already captured in the size of matchLabelsMap, so no need for additional counting.
I will move partitionSet.Status.Count = 0 one level up, as it is not clear from the name of the function that it does that as well.
getPartitionSet: func(clusterName logicalcluster.Name, name string) (*topologyv1alpha1.PartitionSet, error) {
	return partitionSetClusterInformer.Lister().Cluster(clusterName).Get(name)
},
getPartitionsByPartitionSet: func(key string) ([]*topologyv1alpha1.Partition, error) {
Could we change the signature of this method to accept an object instead of the key? We wouldn't have to calculate the key on each invocation.
Yes, I can change the signature of the method, but we will still need to calculate the key in the method to use the indexer, won't we?
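For illustration, the accessor could take the object and compute the key internally, roughly as below; the index name and the use of client-go's MetaNamespaceKeyFunc are assumptions (kcp uses cluster-aware keys in practice), so treat this as a sketch rather than the PR's code.

```go
package partitionset

import (
	topologyv1alpha1 "github.com/kcp-dev/kcp/pkg/apis/topology/v1alpha1"
	"k8s.io/client-go/tools/cache"
)

// getPartitionsByPartitionSet looks up the Partitions owned by a PartitionSet
// via an indexer; the key is computed inside, so callers just pass the object.
func getPartitionsByPartitionSet(indexer cache.Indexer, partitionSet *topologyv1alpha1.PartitionSet) ([]*topologyv1alpha1.Partition, error) {
	key, err := cache.MetaNamespaceKeyFunc(partitionSet)
	if err != nil {
		return nil, err
	}
	objs, err := indexer.ByIndex("partitionsByPartitionSet", key) // hypothetical index name
	if err != nil {
		return nil, err
	}
	partitions := make([]*topologyv1alpha1.Partition, 0, len(objs))
	for _, obj := range objs {
		partitions = append(partitions, obj.(*topologyv1alpha1.Partition))
	}
	return partitions, nil
}
```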
}
for _, partition := range existingPartitions {
	if err := c.deletePartition(ctx, logicalcluster.From(partition).Path(), partition.Name); err != nil && !apierrors.IsNotFound(err) {
		return err
Since this is a network call, we could be more greedy and try all existingPartitions and collect all errors, wdyt?
I am not sure how you would do that. I thought of DeleteCollection with FieldSelector. FieldSelector is however limited to certain fields and the owner reference is not among them.
Just loop over existingPartitions, try to deletePartition, collect any errors and return them at the end :)?
ok so it makes no difference in terms of network calls, does it?
It does: we try all existingPartitions even if we failed to delete a partition. Hopefully on the next iteration we will have less work to do. Does it make sense?
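A sketch of this "try them all and collect errors" approach, using the apimachinery error aggregation helper; the deletePartition callback and the plain string names are simplifications, not the controller's actual signatures:

```go
package partitionset

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

// deleteAllPartitions attempts every deletion and returns an aggregate of the
// failures, so that one error does not prevent the remaining deletions.
func deleteAllPartitions(ctx context.Context, names []string, deletePartition func(ctx context.Context, name string) error) error {
	var errs []error
	for _, name := range names {
		if err := deletePartition(ctx, name); err != nil && !apierrors.IsNotFound(err) {
			errs = append(errs, err) // keep going; remaining partitions are still attempted
		}
	}
	return utilerrors.NewAggregate(errs)
}
```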
topologyv1alpha1.PartitionsReady,
topologyv1alpha1.ErrorGeneratingPartitionsReason,
conditionsv1alpha1.ConditionSeverityError,
"error listing Shards",
Shouldn't we log the err too?
The error is returned by the reconcile function, hence logged by the controller
	labelSelector = labels.Everything()
} else {
	var err error
	labelSelector, err = metav1.LabelSelectorAsSelector(partitionSet.Spec.ShardSelector)
Since an invalid selector is expensive (deletePartitions), would it make sense to move validation to an admission plugin and simply return an error (without deletion) here?
see above. We can discuss it on slack
}

var matchLabelsMap map[string]map[string]string
if partitionSet.Spec.ShardSelector != nil {
It looks like we could get rid of this if statement since the partition function can already handle nil shardSelectorLabels.
I cannot reference MatchLabels on a nil pointer.
// partition populates shard label selectors according to dimensions.
// It only keeps selectors that have at least one match.
func partition(shards []*corev1alpha1.Shard, dimensions []string, shardSelectorLabels map[string]string) (matchLabelsMap map[string]map[string]string) {
Would it make sense to rename it to partitionByDimensionsAndLabels?
I don't see the need for now. There is currently a single way of partitioning. Labels could also be ambiguous in this context. Dimensions are a way to slice and dice by Labels. Labels are also used for filtering, defining the boundaries of the piece to slice and dice. What I mean is that Dimensions and Labels are not of the same nature for the partitioning.
for _, label := range labels {
	labelValue, ok := shard.Labels[label]
	if !ok {
		break
What does it mean? What if we didn't find, let's say, the second label out of four? (len(key) > 0 will apply)
true. Adding a boolean to check the match
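As a sketch of the fix under discussion (not the PR's partition function), grouping shards by dimension values with a completeness guard could look like this, where a shard that lacks one of the dimension labels is skipped:

```go
package main

import (
	"fmt"
	"sort"
)

// shard is a stand-in for the API type; only labels matter here.
type shard struct {
	Name   string
	Labels map[string]string
}

// partitionByDimensions returns, per distinct combination of dimension values,
// the matchLabels a generated Partition would select on.
func partitionByDimensions(shards []*shard, dimensions []string) map[string]map[string]string {
	sorted := append([]string(nil), dimensions...)
	sort.Strings(sorted) // consistent key construction

	result := map[string]map[string]string{}
	for _, s := range shards {
		key := ""
		matchLabels := map[string]string{}
		complete := true // the boolean guard: every dimension label must be present
		for _, dim := range sorted {
			value, ok := s.Labels[dim]
			if !ok {
				complete = false
				break
			}
			key += "+" + dim + "=" + value
			matchLabels[dim] = value
		}
		if complete && len(matchLabels) > 0 {
			result[key] = matchLabels
		}
	}
	return result
}

func main() {
	shards := []*shard{
		{Name: "s1", Labels: map[string]string{"region": "europe", "cloud": "gcp"}},
		{Name: "s2", Labels: map[string]string{"region": "europe", "cloud": "aws"}},
		{Name: "s3", Labels: map[string]string{"region": "europe"}}, // missing "cloud": skipped
	}
	for key, ml := range partitionByDimensions(shards, []string{"cloud", "region"}) {
		fmt.Println(key, ml)
	}
}
```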
conditionsv1alpha1.ConditionSeverityError,
"old partition could not get deleted",
)
return err
Again, since this is a network call, we could be more greedy and try all oldPartitions, wdyt?
I am not sure how you would do that here. There is no single selector for the Partitions to delete.
if !ok {
	break
}
key = key + "+" + label + "=" + labelValue
I'm not sure if I understand why the key has to collect all labels with their values; couldn't we calculate some hash from the labels and values?
This is a kind of hash. We could use a hash function for it. I am not sure that the processing time would be shorter, and it was nice to have something readable when debugging.
If we don't reduce the size of the key it might grow indefinitely, no?
The keys are only used as a transient and quick way of comparing the desired state of Partitions with the existing ones. They are not stored.
conditionsv1alpha1.ConditionSeverityError,
"partition could not get created",
)
return err
since this is a network call, we could be more greedy and try to create all partitions
I am not sure how you would do that
	c.enqueueAllPartitionSets(obj)
},
UpdateFunc: func(oldObj, newObj interface{}) {
	if filterShardEvent(oldObj, newObj) {
explain in a comment what the filtering is and why it makes sense
done
indexers.AddIfNotPresentOrDie(partitionClusterInformer.Informer().GetIndexer(), cache.Indexers{
	indexPartitionsByPartitionSet: indexPartitionsByPartitionSetFunc,
})
nit: indexes first, event handler after that.
done
type controller struct {
	queue workqueue.RateLimitingInterface

	// kcpClusterClient is cluster aware and used to communicate with kcp API servers
no need for the comment. This is convention.
removed
func (c *controller) reconcile(ctx context.Context, partitionSet *topologyv1alpha1.PartitionSet) error {
	labelSelector := labels.Everything()
	if partitionSet.Spec.ShardSelector == nil {
!= nil ?
fixed
}
partitionSet.Status.Count = uint16(len(matchLabelsMap))
existingMatches := map[string]struct{}{}
var placeHolder struct{}
just use boolean type for the map
	matchLabelsMap = partition(shards, partitionSet.Spec.Dimensions, nil)
}
partitionSet.Status.Count = uint16(len(matchLabelsMap))
existingMatches := map[string]struct{}{}
this can be made type-safe. Compare https://goplay.space/#IxSJ1_iWaY9.
I was only interested in the indexed keys of the map; an empty struct has zero width, whereas a bool has one byte. As the map won't be huge and is not stored, I guess it does not matter much. That said, an empty struct holds no data; isn't that type-safe?
I meant the key, i.e. that the key is a string but could be a struct. No need to think about representation.
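A tiny illustration of that point, assuming a fixed pair of dimensions purely for the example: a comparable struct can be used directly as the map key, with no string encoding to think about.

```go
package main

import "fmt"

// dimensionKey is a comparable struct standing in for the string key;
// the fixed fields are an assumption made only for this example.
type dimensionKey struct {
	Region string
	Cloud  string
}

func main() {
	seen := map[dimensionKey]struct{}{}
	seen[dimensionKey{Region: "europe", Cloud: "gcp"}] = struct{}{}
	_, ok := seen[dimensionKey{Region: "europe", Cloud: "gcp"}]
	fmt.Println(ok) // true: struct keys compare by value, no string representation needed
}
```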
// Sorting the keys of the old partition for consistent comparison
sortedOldKeys := make([]string, len(oldMatchLabels))
nit: empty line. Put it after line 135
removed
if _, ok := matchLabelsMap[partitionKey]; ok {
	existingMatches[partitionKey] = placeHolder
} else {
	if err := c.deletePartition(ctx, logicalcluster.From(oldPartition).Path(), oldPartition.Name); err != nil {
&& !IsNotFound
added
if partitionSet.Spec.ShardSelector != nil {
	newMatchExpressions = partitionSet.Spec.ShardSelector.MatchExpressions
}
for _, oldPartition := range oldPartitions {
What is this loop doing? A one-liner comment would make this much easier to read.
added
		}
	}
}
// Create partitions when no existing partition for the set has the same selector.
nit: empty line above
added
if _, ok := matchLabelsMap[partitionKey]; ok {
	existingMatches[partitionKey] = placeHolder
} else {
	if err := c.deletePartition(ctx, logicalcluster.From(oldPartition).Path(), oldPartition.Name); err != nil {
We usually put logger.V(2) before every client call.
added
partition.OwnerReferences = []metav1.OwnerReference{
	*metav1.NewControllerRef(partitionSet, topologyv1alpha1.SchemeGroupVersion.WithKind("PartitionSet")),
}
_, err = c.createPartition(ctx, logicalcluster.From(partitionSet).Path(), partition)
logger output
added
	labels = append(labels, label)
}
// Sorting for consistent comparison.
sort.Strings(labels)
=== empty line ===
// Sorting for consistent comparison.
sort.Strings(labels)
=== empty line ===
or
sort.Strings(labels) // Sorting for consistent comparison.
changed
@@ -0,0 +1,49 @@
/*
Copyright 2022 The KCP Authors.
2023
and everywhere else
I created the PR in 2022. Anyway, I can change that if it matters.
- op: add
  path: /spec/versions/name=v1alpha1/schema/openAPIV3Schema/properties/spec/properties/shardSelector/properties/matchExpressions/items/properties/values/items/pattern
  value:
    ^[A-Za-z0-9]([-A-Za-z0-9_.]{0,61}[A-Za-z0-9])?$
@vincepri label selector validation should come from controller-tools, shouldn't it?
/retest
Signed-off-by: Frederic Giloux <fgiloux@redhat.com>
/lgtm thanks!
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sttts. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Summary
This PR implements a reconciliation mechanism so that shard Partitions can be automatically created based on the specified dimensions and selector.
Design document
Related issue(s)
Follows #2469
Fixes #2334