[WIP] Sends V*ReplicationClass to client #2727
base: main
Conversation
/cc @Madhu-1
Force-pushed from cb85825 to eeb74d1
/cc @rewantsoni
For Volume*ReplicationClass we are sending only the parameters field, as we did for Volume*SnapshotClass and StorageClass. Should we send the entire spec for it, or maybe the entire object? That would allow us to add the required labels/annotations (ramen labels, reclaimspace annotation) from the provider itself, and the client would be responsible for adding the missing fields, like ProvisionerName and the namespace for secrets, and then creating/updating it.
@nb-ohad WDYT?
Data: mustMarshal(map[string]string{
    "replication.storage.openshift.io/replication-secret-name": provisionerSecretName,
For the flattening VRC, we require the special parameter flattenMode: force to be added; we should add it from the provider side.
services/provider/server/server.go
Outdated
Data: mustMarshal(map[string]string{
    "replication.storage.openshift.io/replication-secret-name": provisionerSecretName,
}),
Along with the replication secret name, we also require the mirroringMode key, which can be sent from the provider. Could you add that as well in all the V*RCs?
Ref: https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/main/controllers/drpolicy_controller.go#L170-L178
services/provider/server/server.go
Outdated
Data: mustMarshal(map[string]string{
    "replication.storage.openshift.io/group-replication-secret-name": provisionerSecretName,
}),
For the flattening VGRC, we require the special parameter flattenMode: force to be added; we should add it from the provider side.
@raaizik: GitHub didn't allow me to request PR reviews from the following users: Rakshith-R. Note that only red-hat-storage members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/cc @Rakshith-R
services/provider/server/server.go
Outdated
}),
},
&pb.ExternalResource{
    Name: "ceph-rbd-image-flattening",
Suggested change:
- Name: "ceph-rbd-image-flattening",
+ Name: "ceph-rbd-flatten",
Please make sure everything matches with https://github.com/red-hat-storage/odf-multicluster-orchestrator/blob/b3df88c23b06d80515f27a36f656296e2d413681/controllers/drpolicy_controller.go#L212
I notice the name's suffix is a hashed scheduling interval. Since I am watching the new DRClusterConfig resource (that provides the sched intervals) from the client op, I'd need to override this field there. But I guess it doesn't matter on your end because all you care about is that the client op sends the correct name. Also, didn't you mean this?:
RBDFlattenVolumeReplicationClassNameTemplate = "rbd-flatten-volumereplicationclass-%v"
Force-pushed from ffd5355 to 5c428c0
"replication.storage.openshift.io/is-default-class": "true"
This should be added as an annotation,
and
"replication.storage.openshift.io/flatten-mode": "force"
this as a label.
I hope both of the above will be handled accordingly on the consumer side.
Other than the above, everything else looks good to me.
@Rakshith-R I'm aware and it will.
I will start with the latter proposal first:
We do not want to do that; an object contains not just the desired state but also status and some metadata that is local to the provider cluster. In internal discussions, we already acknowledged the need to send labels and annotations, but not as part of the serialized spec we send (the data field of the ExternalResource struct). The plan is to add it to the metadata of the ExternalResource.
/test ocs-operator-bundle-e2e-aws
/test ocs-operator-bundle-e2e-aws
/test ocs-operator-bundle-e2e-aws
/lgtm
/retest
/hold
Force-pushed from 1f19940 to f06a010
New changes are detected. LGTM label has been removed.
Force-pushed from f06a010 to af994af
Can unhold now, @rewantsoni 👍
/unhold
/test ocs-operator-bundle-e2e-aws
Should we consider sending the spec of Volume*ReplicationClass instead of spec.parameters from the provider server?
} else if extResource.Kind == "VolumeReplicationClass" {
    name = fmt.Sprintf("%s-volumereplicationclass", name)
} else if extResource.Kind == "VolumeGroupReplicationClass" {
    name = fmt.Sprintf("%s-volumegroupreplicationclass", name)
as we are not creating these resources for filesystem, we don't need them here
This is not dependent on whether it's for FS or not
"replication.storage.openshift.io/replication-secret-name": provisionerSecretName,
}),
Labels: map[string]string{
    "mirroringMode": "snapshot",
mirroringMode is not a label but a parameter under the spec (spec.parameters) section of the VolumeReplicationClass
Labels any `json:"labels"`
Annotations any `json:"annotations"`
as we know the type for Labels and Annotations, let's use that instead of using any
Labels any `json:"labels"`
Annotations any `json:"annotations"`
We are missing an assert on labels in the tests.
Force-pushed from dffe4af to bd6ccbb
I think we've had a previous discussion regarding this, or a similar one. IMO the server should send only the info that's required, not the entire spec.
Sends two V*RCs (one for image flattening) for RDR
Signed-off-by: raaizik <132667934+raaizik@users.noreply.github.com>
Co-Authored-By: Rewant Soni <rewant.soni@gmail.com>
Force-pushed from 8d3d0da to 3986183
Force-pushed from 9d1a00b to 049aa26
Signed-off-by: raaizik <132667934+raaizik@users.noreply.github.com>
Force-pushed from 049aa26 to 5209514
/test ocs-operator-bundle-e2e-aws
assert.True(t, reflect.DeepEqual(extResource.Labels, mockResoruce.Labels))
assert.True(t, reflect.DeepEqual(extResource.Annotations, mockResoruce.Annotations))
We could use assert.Equal() it uses reflect.DeepEqual internally
Both labels and annotations are of type map, which means that in order to use assert.Equal() I'd need to convert from map to []byte or string, which is not straightforward. Therefore I've chosen this assertion instead, so it's compact as a one-liner.
I don't think we need to convert the map to byte or string. See here.
}),
Labels: getExternalResourceLabels("VolumeReplicationClass", replicationEnabled, mirroringEnabled, true, ID, SID),
Annotations: map[string]string{
    "replication.storage.openshift.io/is-default-class": "true",
from what I recall, we don't require this annotation for flatten VRC
I'm pretty sure it is needed. See here
MCO is creating two VRCs, one with flatten mode and one without.
The one without flatten mode has the annotation; the other one doesn't. See here and here.
We are setting the annotations to be empty when creating the flatten-mode VRC.
The same is also mentioned in the design doc under Changes required.
}),
Labels: getExternalResourceLabels("VolumeGroupReplicationClass", replicationEnabled, mirroringEnabled, true, ID, SID),
Annotations: map[string]string{
    "replication.storage.openshift.io/is-default-class": "true",
from what I recall, we don't require this annotation for flatten VGRC
See reply
See reply
@@ -832,6 +911,39 @@ func (s *OCSProviderServer) GetStorageClaimConfig(ctx context.Context, req *pb.S

}

func getExternalResourceLabels(kind string, isReplicationEnabled bool, isMirroringEnabled bool, isFlattenMode bool,
We don't need to differentiate between the Replication and Mirroring flag, we can have one mirroring flag (which will check the cephblockpool) and add labels based on that.
From my perspective, there's no other way to handle these flags. Replication is Vol*RepClass specific, while Mirroring is aimed at both Vol*RepClass and Vol*SnapClass.
Your replication flag depends on the storageClusterPeer, whereas the mirroring flag is derived from the CephBlockPool.
Enabling mirroring on the cephblockpool is the responsibility of the storageClusterPeer. Aren't both eventually pointing to the same field?
I think we should add the labels for rbd-based VRC/VSC and SC based only on the mirroring field. It is possible that a particular blockpool is not selected for mirroring by the StorageClusterPeer CR. See here.
I was going through the PR and I am not able to find the labels applied for CephFS-based V*SC and SC. Do we not need them?
> Enabling mirroring on the cephblockpool is the responsibility of the storageClusterPeer.

I am not aware of such a thing. How does it fetch this flag exactly? Is it merged already?

> I am not able to find the labels applied for CephFS-based V*SC and SC

As discussed, all of this will be handled in another PR. This PR refers only to Vol*RepClass.
> I am not aware of such a thing. How does it fetch this flag exactly? Is it merged already?

This was discussed in the design discussion. Each StorageClusterPeer will enable mirroring on the cephblockpool based on a label selector.
For rbd we can use the mirroring field on the cephblockpool to add the ramen-related labels; we don't need a separate explicit replication flag.
I think you're misunderstanding. I am using the Mirroring flag solely for the StorageID label. It's unrelated to RepID.
Why do we need two flags for the same set of ramen labels? From what I understand, we are creating a differentiation which we don't need; it will just complicate things.
As an example, say we have two cephblockpools, a and b, and the storageclusterpeer is configured only for blockpool a.
There are two clients, 1 and 2: client 1 uses blockpool a, client 2 uses blockpool b.
If we keep the flags separate:
For client 1 we will enable both the replication and storage id labels.
For client 2 we will enable only the replication id label.
In the second case we don't require any ramen-related labels to be added, as that blockpool is not configured for mirroring.
Hence we don't need to check both the storageclusterpeer and the cephblockpool; checking only the cephblockpool will work.
ID = hex.EncodeToString(md5Sum[:16])
} // todo storageID in provider is planned to be string(r.storageClaim.UID)
// SID for RamenDR
SID := req.StorageConsumerUUID
StorageID is used to match V*C to StorageClass, we don't need it to be storageClaim UID, we can have it as storageRequest UID as well.
We've already agreed that it'll be string(r.storageClaim.UID) in #168. Eventually the value that will be assigned to this ID is the one from the client, so it actually doesn't matter what value is chosen on this end.
As the concept of storageClaim might go away in the future because of convergence, we should pivot and not use it.
Even so, the final value assigned would be determined on the client.
Why do we need to determine the label value on the client if we can do it from the provider itself?
We could; I'm saying that, with the current state of things in these PRs, this is what would happen. After this PR gets merged, the value assigned from here will be propagated to the client, and the changes will be reflected in the mentioned PR.
Yes, I agree that we could, but since storageID is an arbitrary value used by ramen to cross-link the VRC/VSC and SC, the hash of the storageRequest UID fits our use case perfectly. I don't think we need to change it further on the client. Let's wait for others to review this.
/retest
Changes
Sends two VRCs (one for image flattening) and one VGRC. Depends on #2620
RHSTOR-5753, RHSTOR-5794