
✨ Partitionset reconciliation #2513

Merged

Conversation

fgiloux
Contributor

@fgiloux fgiloux commented Dec 20, 2022

Summary

This PR implements a reconciliation mechanism so that shard partitions can be automatically created based on the specified dimensions and selector.

Design document

Related issue(s)

Follows #2469
Fixes #2334

@openshift-ci openshift-ci bot added the kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API label Dec 20, 2022
@fgiloux fgiloux force-pushed the partitionset-reconciliation branch from 7294cce to 2376619 on December 20, 2022 14:20
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 22, 2022
@fgiloux fgiloux force-pushed the partitionset-reconciliation branch from 2376619 to a7fb7b1 on January 13, 2023 14:40
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 13, 2023
@fgiloux fgiloux force-pushed the partitionset-reconciliation branch 2 times, most recently from e1f1ada to c991948 on January 13, 2023 19:40
@fgiloux
Contributor Author

fgiloux commented Jan 13, 2023

/retest

@fgiloux fgiloux requested review from sttts and p0lyn0mial January 13, 2023 20:36
Contributor

@p0lyn0mial p0lyn0mial left a comment

I did a first pass, for now only over the API. Let me know what you think.

// the MatchLabels generated for the Partition according to the dimensions in the PartitionSet.
// This does not limit the filtering as any MatchLabel can also be expressed as
// a MatchExpression.
MatchExpressions []metav1.LabelSelectorRequirement `json:"matchExpressions,omitempty"`
Contributor

I think a standard selector can have matchLabels and/or matchExpressions, and both can be set at the same time. I am not sure I understand why we cannot use a standard label selector here.

Contributor

Also, could we rename it to shardSelector/shardMatchExpression?

Contributor Author

As per the comment: "MatchExpressions are used here instead of Selector to avoid possible collisions with the MatchLabels generated for the Partition."
PartitionSets are used to generate Partitions. Partitions have a LabelSelector. LabelSelectors have two parts:

  • MatchLabels
  • MatchExpressions

MatchLabels of Partitions are generated from the Dimensions field of the PartitionSet. MatchExpressions of Partitions are copied from the MatchExpressions of the PartitionSet. Keeping this separation provides a clearer UX: it would be harder for users to understand the Partitions generated from a PartitionSet if their MatchLabels could be a mix of the Dimensions-generated labels and MatchLabels inherited from the PartitionSet. As per the comment, there is no trade-off in terms of capabilities: any MatchLabels can be expressed as MatchExpressions.
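
To illustrate the separation, here is a minimal sketch (not the PR's code; the helper name and shape are assumptions): MatchLabels come only from the dimension values, MatchExpressions are copied verbatim, so the two parts of the generated selector cannot collide by construction.

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildPartitionSelector is an illustrative helper, not part of the PR:
// MatchLabels come from the dimension values observed on the shards,
// MatchExpressions are copied verbatim from the PartitionSet spec.
func buildPartitionSelector(dimensionValues map[string]string, copied []metav1.LabelSelectorRequirement) *metav1.LabelSelector {
	matchLabels := make(map[string]string, len(dimensionValues))
	for k, v := range dimensionValues {
		matchLabels[k] = v
	}
	return &metav1.LabelSelector{
		MatchLabels:      matchLabels,
		MatchExpressions: copied,
	}
}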

Contributor

as per the comment " MatchExpressions are used here instead of Selector to avoid possible collisions with the MatchLabels generated for the Partition".

if we have a collision, does it mean we can achieve the same by providing a selector on a Partition resource :) ?

Contributor Author

If I understand your point correctly: PartitionSet is just a convenient way to create Partitions. Service providers are free to create Partitions in whatever way best suits them, either through a PartitionSet or by creating the resource directly. If they do it manually they can set the selector, both MatchLabels and MatchExpressions, on the Partition in a similar way to what the controller generates.

@@ -0,0 +1,125 @@
/*
Copyright 2022 The KCP Authors.
Contributor

nit: 2023 (applies to other places as well)

Contributor Author

The PR and commit were created in 2022

// the MatchLabels generated for the Partition according to the dimensions in the PartitionSet.
// This does not limit the filtering as any MatchLabel can also be expressed as
// a MatchExpression.
MatchExpressions []metav1.LabelSelectorRequirement `json:"matchExpressions,omitempty"`
Contributor

@p0lyn0mial p0lyn0mial Jan 16, 2023

Would it make sense to make PartitionSetSpec.Dimensions and/or PartitionSetSpec.matchExpressions/PartitionSetSpec.selector immutable?

I think this could be less disruptive. A SP could create a new PartitionSet, wait for a Partition to be created, switch an APIExport to the new Partition, and then delete the old PartitionSet.

Contributor Author

As I see it this is not the likely flow. It does not take into consideration that Partitions need to be colocated with the APIExportEndpointSlices that consume them. Here is how I see the flow:

  • A service provider updates a PartitionSet.
  • The set of Partitions bound to the PartitionSet is updated accordingly in the PartitionSet workspace.
  • Each of these Partitions is copied into the workspace where the intended APIExportEndpointSlice is located. Each APIExportEndpointSlice is intended for an operator/controller manager instance and makes the URLs of the set of shards identified by the Partition available to it.
  • The service provider can choose, at any convenient time, to patch the APIExportEndpointSlices with the new Partitions. There is no modification of the service and no disruption until the APIExportEndpointSlices are patched.

The steps above could be automated but that is another discussion.

Contributor

I'm not sure if I read the code correctly, but it seems that we are deleting old partitions potentially still being used by an APIExportEndpointSlice, which seems disruptive.

xref:

if err := c.deletePartition(ctx, logicalcluster.From(oldPartition).Path(), oldPartition.Name); err != nil && !apierrors.IsNotFound(err) {

Contributor Author

Yes, we do delete partitions, but only in the workspace where the PartitionSet has been created. Partitions used by APIExportEndpointSlices should be copied into the workspaces where the APIExportEndpointSlices are located to suit the placement decision: regional, cloud proximity. These copied Partitions do not get modified or deleted by the reconciliation of the PartitionSet.

func generatePartition(name string, matchExpressions []metav1.LabelSelectorRequirement, matchLabels map[string]string) *topologyv1alpha1.Partition {
return &topologyv1alpha1.Partition{
ObjectMeta: metav1.ObjectMeta{
GenerateName: name + "-",
Contributor

It appears to me that providers will have to manually introspect generated Partitions to make use of them because names appear random and there is no explanation of the generated selector, am I right?

Wondering if we could/should improve this?

Contributor Author

@fgiloux fgiloux Jan 18, 2023

Names are generated when the Partitions are. We could create deterministic names by hashing the Partition spec but I am not sure it would be reasonable; an annotation may be better suited to store such a hash. It would be stable, as the PartitionSet reconciliation only creates and deletes Partitions, it never updates them.
That said, I am not sure it would be a significant improvement:

  • Shard topology should be stable, e.g. you may add new shards in a region from time to time, but adding a new region is a rare event that would need action from service providers anyway to cover it. It would also mean just a few additional Partitions to take care of. Deletion is not much different.
  • A PartitionSet change is driven by the service provider and means that they want to change the distribution of their service deployments. In that case they can check the deltas between the deployed Partitions and the newly generated ones, even if Partitions have random names. Some tooling on the client side may help with that.

There may be bits in the workflow that can be improved and I am open to proposals. My current thinking is that this could be better addressed when we develop some client-side tooling that automates service provider deployments and topology changes, possibly taking a PartitionSet and an APIExport as parameters. I am a bit wary of having this automation on the server side because:

  • it could drive automated changes that introduce big disruptions and cost increases for service providers, so it would need to be gated, hence may not bring much gain compared to a client-side approach;
  • I am not sure that cross-shard communication will always be possible.

@fgiloux fgiloux mentioned this pull request Jan 18, 2023
@fgiloux
Contributor Author

fgiloux commented Jan 23, 2023

/retest

@@ -22,12 +22,20 @@ import (
topologyv1alpha1 "github.com/kcp-dev/kcp/pkg/apis/topology/v1alpha1"
)

const dimensionsLabelKeyBase = "partition.kcp.io/"
Member

partition.topology.kcp.io/

@fgiloux fgiloux force-pushed the partitionset-reconciliation branch 2 times, most recently from b8ead47 to 2c93afd on January 25, 2023 21:25
@fgiloux
Contributor Author

fgiloux commented Jan 25, 2023

@sttts @p0lyn0mial I have implemented the changes we discussed:

  • using the dimension values for Partition names (still going through the name generator, which takes care of overflows); a rough sketch of the idea follows below
  • removing the labels with the dimensions (the information is now made visible by the name)
  • using a selector in the PartitionSet rather than matchExpressions only (some collision checking was needed)
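
For readers of this thread, a rough sketch of the naming idea (the helper name and length limit are assumptions, not the PR's exact code): dimension values are joined into a readable prefix that is handed to GenerateName.

package sketch

import "strings"

// partitionNamePrefix joins the dimension values into a readable prefix for
// GenerateName. The 57-character cap is an assumption chosen to leave room for
// the random suffix appended by the API server; the real overflow handling may differ.
func partitionNamePrefix(base string, dimensionValues []string) string {
	const maxLen = 57
	name := base + "-" + strings.ToLower(strings.Join(dimensionValues, "-"))
	if len(name) > maxLen {
		name = name[:maxLen]
	}
	return strings.TrimSuffix(name, "-") + "-"
}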

@fgiloux
Contributor Author

fgiloux commented Jan 25, 2023

/retest

@fgiloux fgiloux force-pushed the partitionset-reconciliation branch 2 times, most recently from 0db73e9 to bfa0983 on January 26, 2023 06:39
// selector (optional) is a label selector that filters shard targets.
Selector *metav1.LabelSelector `json:"selector,omitempty"`
// ShardSelector (optional) specifies filtering for shard targets.
ShardSelector *metav1.LabelSelector `json:"shardSelector,omitempty"`
Contributor

I like the name :)

@@ -453,6 +453,12 @@ func (s *Server) Run(ctx context.Context) error {
}
}

if s.Options.Controllers.EnableAll || enabled.Has("partition") {
Contributor

partitionset ?

Contributor Author

I would be ok with changing it to partitionset. I used partition as we don't have any other controller specific to Partitions. Partitions alone don't get reconciled: it is only in the context of an APIExportEndpointSlice that the controller takes the referenced Partition into consideration for populating the endpoints. The same mechanism could also be leveraged in other places in the future.

Member

we should honestly remove all these ifs. We don't use them anywhere.

Contributor Author

ack. Outside of the scope of this PR

return
}

logger := logging.WithObject(logging.WithReconciler(klog.Background(), ControllerName), obj.(*topologyv1alpha1.Partition))
Contributor

move after line 218 so that we are sure the obj is obj.(*topologyv1alpha1.Partition)

Contributor Author

done

for _, ownerRef := range partition.OwnerReferences {
if ownerRef.Kind == "PartitionSet" {
path := logicalcluster.From(partition).Path()
if !ok {
Contributor

remove

Contributor Author

done

if !ok {
return false
}
if !reflect.DeepEqual(oldShard.Labels, newShard.Labels) {
Contributor

return reflect.DeepEqual(oldShard.Labels, newShard.Labels)

Contributor Author

@fgiloux fgiloux Jan 31, 2023

return !reflect.DeepEqual(oldShard.Labels, newShard.Labels)
done

}

func (c *controller) deletePartitions(ctx context.Context, partitionSet *topologyv1alpha1.PartitionSet) error {
partitionSet.Status.Count = 0
Contributor

why is this modification required?

Contributor

oh, it tracks the number of associated partitions.
should we then decrease this number by the number of successful calls to deletePartition ?

Contributor Author

deletePartitions (mind the s at the end) is only called in case of an issue with the selector (lines 55 and 83); the count should then be set back to 0. In case of a regular deletion because of a change in shards, deletePartition (without the s) is called (lines 147 and 182). When there is no error with the selector, the number of partitions is already captured in the size of matchLabelsMap, so no additional counting is needed.
I will move partitionSet.Status.Count = 0 one level up, as it is not clear from the name of the function that it does that as well.

getPartitionSet: func(clusterName logicalcluster.Name, name string) (*topologyv1alpha1.PartitionSet, error) {
return partitionSetClusterInformer.Lister().Cluster(clusterName).Get(name)
},
getPartitionsByPartitionSet: func(key string) ([]*topologyv1alpha1.Partition, error) {
Contributor

could we change the signature of this method to accept an object instead of the key?
we wouldn't have to calculate the key on each invocation.

Contributor Author

Yes, I can change the signature of the method, but we will still need to calculate the key in the method to use the indexer, won't we?

}
for _, partition := range existingPartitions {
if err := c.deletePartition(ctx, logicalcluster.From(partition).Path(), partition.Name); err != nil && !apierrors.IsNotFound(err) {
return err
Contributor

since this is a network call, we could be more greedy and try all existingPartitions and collect all errors, wdyt?

Contributor Author

I am not sure how you would do that. I thought of DeleteCollection with a FieldSelector. FieldSelector is however limited to certain fields and the owner reference is not among them.

Contributor

just loop over existingPartitions, try to deletePartition, collect any errors and return at the end :) ?

Contributor Author

ok so it makes no difference in terms of network calls, does it?

Contributor

@p0lyn0mial p0lyn0mial Jan 31, 2023

It does: we try all existingPartitions even if we failed to delete one of them. Hopefully on the next iteration we will have less work to do. Does it make sense?
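
A minimal sketch of the suggested greedy loop (function and parameter names are assumptions, not the PR's code): every partition is attempted and the failures are returned as a single aggregate, so one failed call no longer stops the cleanup.

package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

// deleteAll tries to delete every partition even if earlier deletions failed,
// collecting the errors and returning them as a single aggregate (nil if none).
func deleteAll(ctx context.Context, names []string, del func(ctx context.Context, name string) error) error {
	var errs []error
	for _, name := range names {
		if err := del(ctx, name); err != nil && !apierrors.IsNotFound(err) {
			errs = append(errs, err)
		}
	}
	return utilerrors.NewAggregate(errs)
}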

topologyv1alpha1.PartitionsReady,
topologyv1alpha1.ErrorGeneratingPartitionsReason,
conditionsv1alpha1.ConditionSeverityError,
"error listing Shards",
Contributor

shouldn't we log the err too?

Contributor Author

The error is returned by the reconcile function, hence logged by the controller
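
For context, the usual work-queue pattern looks roughly like this (a sketch with assumed names, not the PR's code): the error bubbles up from reconcile, is logged once here, and the key is requeued, so the reconciler only needs to record a condition.

package sketch

import (
	"context"
	"fmt"

	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/util/workqueue"
)

// process shows where errors returned by reconcile end up: they are logged by
// HandleError and the key is retried with rate limiting.
func process(ctx context.Context, queue workqueue.RateLimitingInterface, key string, reconcile func(ctx context.Context, key string) error) {
	if err := reconcile(ctx, key); err != nil {
		utilruntime.HandleError(fmt.Errorf("failed to reconcile %q: %w", key, err))
		queue.AddRateLimited(key)
		return
	}
	queue.Forget(key)
}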

labelSelector = labels.Everything()
} else {
var err error
labelSelector, err = metav1.LabelSelectorAsSelector(partitionSet.Spec.ShardSelector)
Contributor

since an invalid selector is expensive (deletePartitions) would it make sense to move validation to an admission plugin and simply return an error (without deletion) here?

Contributor Author

see above. We can discuss it on slack

}

var matchLabelsMap map[string]map[string]string
if partitionSet.Spec.ShardSelector != nil {
Contributor

it looks like we could get rid of this if statement since the partition function can already handle nil shardSelectorLabels

Contributor Author

I cannot reference MatchLabels on a nil pointer
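
A tiny sketch of the point (assumed shape): the labels can only be read once the nil check has passed, which is what the if statement guards.

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// shardSelectorLabels returns the MatchLabels of the selector if it is set;
// dereferencing MatchLabels on a nil *metav1.LabelSelector would panic.
func shardSelectorLabels(sel *metav1.LabelSelector) map[string]string {
	if sel == nil {
		return nil // partition() can handle nil labels
	}
	return sel.MatchLabels
}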


// partition populates shard label selectors according to dimensions.
// It only keeps selectors that have at least one match.
func partition(shards []*corev1alpha1.Shard, dimensions []string, shardSelectorLabels map[string]string) (matchLabelsMap map[string]map[string]string) {
Contributor

would it make sense to rename to partitionByDimensionsAndLabels?

Contributor Author

I don't see the need for now. There is currently a single way of partitioning. Labels could also be ambiguous in this context: Dimensions are a way to slice and dice by Labels, while Labels are also used for filtering, defining the boundaries of the piece to slice and dice. What I mean is that Dimensions and Labels are not of the same nature for the partitioning.

for _, label := range labels {
labelValue, ok := shard.Labels[label]
if !ok {
break
Contributor

What does this mean?
What if we didn't find, let's say, the second label out of four? (len(key) > 0 will still apply)

Contributor Author

true. Adding a boolean to check the match
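
A sketch of the fixed loop (names assumed, shard labels reduced to plain maps for brevity): a shard only contributes a partition key when it carries every dimension label, tracked by a boolean.

package sketch

import "sort"

// partitionKeys builds one key per complete combination of dimension values.
// Shards missing any of the dimension labels are skipped entirely instead of
// contributing a truncated key.
func partitionKeys(shardLabels []map[string]string, dimensions []string) map[string]map[string]string {
	labels := append([]string(nil), dimensions...)
	sort.Strings(labels) // sorted for consistent key comparison
	matchLabelsMap := map[string]map[string]string{}
	for _, shard := range shardLabels {
		key := ""
		matchLabels := map[string]string{}
		matched := true
		for _, label := range labels {
			value, ok := shard[label]
			if !ok {
				matched = false // a missing dimension label disqualifies this shard
				break
			}
			key = key + "+" + label + "=" + value
			matchLabels[label] = value
		}
		if matched && len(key) > 0 {
			matchLabelsMap[key] = matchLabels
		}
	}
	return matchLabelsMap
}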

conditionsv1alpha1.ConditionSeverityError,
"old partition could not get deleted",
)
return err
Contributor

again, since this is a network call, we could be more greedy and try all oldPartitions, wdyt?

Contributor Author

I am not sure how you would do that here. There is no single selector for the Partitions to delete.

if !ok {
break
}
key = key + "+" + label + "=" + labelValue
Contributor

i'm not sure if i understand why the key has to collect all labels with their values, couldn't we calculate some hash from the labels and values ?

Contributor Author

This is a kind of hash. We could use a hash function for it, but I am not sure the processing time would be shorter, and it was nice to have something readable when debugging.

Contributor

if we won't reduce the size of the key it might grow indefinitely, no?

Contributor Author

The keys are only used as a transient and quick way of comparing the desired state of Partitions with the existing ones. They are not stored.
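
In other words, the keys only live for the duration of one reconcile, roughly like this sketch (names assumed): desired and existing Partitions are reduced to keys and diffed, and nothing is persisted.

package sketch

// diffPartitions compares desired partitions (key -> generated match labels) with
// existing partitions (key -> Partition name) and returns what to create and delete.
func diffPartitions(desired map[string]map[string]string, existing map[string]string) (toCreate, toDelete []string) {
	for key := range desired {
		if _, ok := existing[key]; !ok {
			toCreate = append(toCreate, key)
		}
	}
	for key, name := range existing {
		if _, ok := desired[key]; !ok {
			toDelete = append(toDelete, name)
		}
	}
	return toCreate, toDelete
}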

conditionsv1alpha1.ConditionSeverityError,
"partition could not get created",
)
return err
Contributor

since this is a network call, we could be more greedy and try to create all partitions

Contributor Author

I am not sure how you would do that

c.enqueueAllPartitionSets(obj)
},
UpdateFunc: func(oldObj, newObj interface{}) {
if filterShardEvent(oldObj, newObj) {
Member

explain in a comment what the filtering is and why it makes sense

Contributor Author

done
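
For reference, the filter being discussed is roughly of this shape (a sketch; the import path and type assertions are assumptions based on the snippets above): only Shard updates that change labels re-enqueue PartitionSets, since only labels drive the partitioning.

package sketch

import (
	"reflect"

	corev1alpha1 "github.com/kcp-dev/kcp/pkg/apis/core/v1alpha1"
)

// filterShardEvent returns true when a Shard update changed its labels, i.e. when
// the partitioning could be affected and PartitionSets need to be re-reconciled.
func filterShardEvent(oldObj, newObj interface{}) bool {
	oldShard, ok := oldObj.(*corev1alpha1.Shard)
	if !ok {
		return false
	}
	newShard, ok := newObj.(*corev1alpha1.Shard)
	if !ok {
		return false
	}
	return !reflect.DeepEqual(oldShard.Labels, newShard.Labels)
}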


indexers.AddIfNotPresentOrDie(partitionClusterInformer.Informer().GetIndexer(), cache.Indexers{
indexPartitionsByPartitionSet: indexPartitionsByPartitionSetFunc,
})
Member

nit: indexes first, event handler after that.

Contributor Author

done

type controller struct {
queue workqueue.RateLimitingInterface

// kcpClusterClient is cluster aware and used to communicate with kcp API servers
Member

no need for the comment. This is convention.

Contributor Author

removed


func (c *controller) reconcile(ctx context.Context, partitionSet *topologyv1alpha1.PartitionSet) error {
labelSelector := labels.Everything()
if partitionSet.Spec.ShardSelector == nil {
Member

!= nil ?

Contributor Author

fixed

}
partitionSet.Status.Count = uint16(len(matchLabelsMap))
existingMatches := map[string]struct{}{}
var placeHolder struct{}
Member

just use boolean type for the map

matchLabelsMap = partition(shards, partitionSet.Spec.Dimensions, nil)
}
partitionSet.Status.Count = uint16(len(matchLabelsMap))
existingMatches := map[string]struct{}{}
Member

this can be made type-safe. Compare https://goplay.space/#IxSJ1_iWaY9.

Contributor Author

I was only interested in the indexed keys of the map; an empty struct has zero width, whereas a bool takes 1 byte. As the map won't be huge and is not stored, I guess it does not matter much. That said, an empty struct holds no data, so isn't it type-safe?

Member

I meant the key, i.e. that the key is a string but could be a struct. No need to think about representation.
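
A small sketch of the suggestion (assumed names): with a fixed set of key components a comparable struct works directly as a map key and removes the string building; with dynamic dimension lists, as in this PR, a composed string key remains the simpler option.

package sketch

// partitionKey is a comparable struct usable directly as a map key, which avoids
// hand-building and parsing a string representation. It only works when the key
// components are known up front.
type partitionKey struct {
	region string
	cloud  string
}

func exampleUsage() bool {
	existingMatches := map[partitionKey]bool{}
	existingMatches[partitionKey{region: "europe", cloud: "aws"}] = true
	return existingMatches[partitionKey{region: "europe", cloud: "aws"}] // true, no string formatting involved
}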


// Sorting the keys of the old partition for consistent comparison
sortedOldKeys := make([]string, len(oldMatchLabels))

Member

nit: empty line. Put it after line 135

Contributor Author

removed

if _, ok := matchLabelsMap[partitionKey]; ok {
existingMatches[partitionKey] = placeHolder
} else {
if err := c.deletePartition(ctx, logicalcluster.From(oldPartition).Path(), oldPartition.Name); err != nil {
Member

&& !IsNotFound

Contributor Author

added

if partitionSet.Spec.ShardSelector != nil {
newMatchExpressions = partitionSet.Spec.ShardSelector.MatchExpressions
}
for _, oldPartition := range oldPartitions {
Member

what is this loop doing? A oneliner comment makes this much easier to read.

Contributor Author

added

}
}
}
// Create partitions when no existing partition for the set has the same selector.
Member

nit: empty line above

Contributor Author

added

if _, ok := matchLabelsMap[partitionKey]; ok {
existingMatches[partitionKey] = placeHolder
} else {
if err := c.deletePartition(ctx, logicalcluster.From(oldPartition).Path(), oldPartition.Name); err != nil {
Member

We usually put logger.V(2) before every client call

Contributor Author

added
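
As a sketch of the convention (names assumed), the V(2) line goes immediately before the client call so every write to the API server leaves a trace:

package sketch

import (
	"context"

	"github.com/go-logr/logr"
)

// deleteWithLog logs at verbosity 2 right before issuing the delete call.
func deleteWithLog(ctx context.Context, logger logr.Logger, name string, del func(ctx context.Context, name string) error) error {
	logger.V(2).Info("deleting Partition", "name", name)
	return del(ctx, name)
}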

partition.OwnerReferences = []metav1.OwnerReference{
*metav1.NewControllerRef(partitionSet, topologyv1alpha1.SchemeGroupVersion.WithKind("PartitionSet")),
}
_, err = c.createPartition(ctx, logicalcluster.From(partitionSet).Path(), partition)
Member

logger output

Contributor Author

added

labels = append(labels, label)
}
// Sorting for consistent comparison.
sort.Strings(labels)
Member

=== empty line ===
// Sorting for consistent comparison.
sort.Strings(labels)
=== empty line ===

or

sort.Strings(labels) // Sorting for consistent comparison.

Contributor Author

changed

@@ -0,0 +1,49 @@
/*
Copyright 2022 The KCP Authors.
Member

2023

Member

and everywhere else

Contributor Author

I created the PR in 2022. Anyway, I can change that if it matters.

- op: add
path: /spec/versions/name=v1alpha1/schema/openAPIV3Schema/properties/spec/properties/shardSelector/properties/matchExpressions/items/properties/values/items/pattern
value:
^[A-Za-z0-9]([-A-Za-z0-9_.]{0,61}[A-Za-z0-9])?$
Member

@vincepri label selector validation should come from controller-tools, shouldn't it?

@fgiloux fgiloux force-pushed the partitionset-reconciliation branch 2 times, most recently from 0246a36 to 63a6681 on February 7, 2023 14:19
@fgiloux
Contributor Author

fgiloux commented Feb 7, 2023

/retest

@fgiloux fgiloux force-pushed the partitionset-reconciliation branch 2 times, most recently from 8fd892a to e67f08a on February 8, 2023 06:23
Signed-off-by: Frederic Giloux <fgiloux@redhat.com>
@fgiloux fgiloux force-pushed the partitionset-reconciliation branch from e67f08a to d63ce02 on February 8, 2023 07:06
@p0lyn0mial
Contributor

/lgtm

thanks!

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 8, 2023
@sttts
Member

sttts commented Feb 8, 2023

/approve

@openshift-ci
Contributor

openshift-ci bot commented Feb 8, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sttts

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 8, 2023
@openshift-merge-robot openshift-merge-robot merged commit 87f44d4 into kcp-dev:main Feb 8, 2023
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API lgtm Indicates that a PR is ready to be merged.
Development

Successfully merging this pull request may close these issues.

feature: PartitionSet reconciliation
5 participants