Cluster-mapping during failover not applied #4493
If you are looking for mapping you handle it you can remove the […]
I can confirm that putting the secondary mon IP addresses under the primary ceph ID worked. Here is the final csi config map:
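The actual config map was not captured in this thread. A minimal sketch of what the reporter describes (the secondary cluster's mon addresses listed under the primary cluster's ID) could look like the following; the cluster ID `site1-storage` and the mon addresses are hypothetical placeholders, not values from the issue:

```yaml
# Sketch only: the real config map was not included in the thread.
# "site1-storage" and the mon addresses below are hypothetical placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "site1-storage",
        "monitors": [
          "192.168.2.11:6789",
          "192.168.2.12:6789",
          "192.168.2.13:6789"
        ]
      }
    ]
```

The key point of the workaround is that the monitors listed for the primary cluster ID are in fact the secondary site's mons, so existing volumeHandles keep resolving after failover.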
Thank you very much, I'm closing the issue.
We might have missed adding it to the documentation; please feel free to open a PR to add the missing details. Thank you :)
Describe the bug
I have an issue with cluster-mapping when using mirrored RBD volumes with ceph-csi in a disaster scenario.
In a test environment I'm trying to use mirrored RBD volumes on Kubernetes with ceph-csi. I have a cluster-mapping.json in place where the primary pool and ceph ID are mapped to the secondary ceph cluster, and a config.json with the list of mons for both cephs. The issue is that during failover to the secondary site, when I manually create the PV/PVC the same way as on the primary side, the cluster-mapping is not applied during NodeStageVolume (at least as far as I can see in the code). ceph-csi still tries to access the unreachable primary cluster, which fails and leaves the application pods stuck indefinitely in the ContainerCreating phase. When I manually create the PV on the secondary site with the correct volumeHandle, it works. Why is cluster-mapping.json needed, then, if the volumeHandle still has to be changed manually in case of failover? Shouldn't it also be applied during the NodeStageVolume call?
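For context, a cluster-mapping.json along the lines the reporter describes (mapping the primary cluster ID and RBD pool ID to their secondary counterparts) might look like this. All IDs below are hypothetical placeholders following the format documented in ceph-csi:

```json
[
  {
    "clusterIDMapping": {
      "site1-storage": "site2-storage"
    },
    "RBDPoolIDMapping": [
      {
        "1": "2"
      }
    ]
  }
]
```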
Environment details
Mounter used for mounting PVC (for cephFS it's fuse or kernel; for rbd it's krbd or rbd-nbd): rbd-nbd
Steps to reproduce
Configure the csi-config-map with the mon lists for both Ceph clusters, as in the sketch below.
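The exact reproduction steps were not preserved here, but the setup described in the bug report implies a config.json listing the mons of both clusters, roughly like this (all cluster IDs and addresses are placeholders):

```json
[
  {
    "clusterID": "site1-storage",
    "monitors": [
      "192.168.1.11:6789",
      "192.168.1.12:6789"
    ]
  },
  {
    "clusterID": "site2-storage",
    "monitors": [
      "192.168.2.11:6789",
      "192.168.2.12:6789"
    ]
  }
]
```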
Actual results
The PV and PVC are in Bound state, but the application Pod is stuck in ContainerCreating and the csi pods show the following errors:
I can also see an error log in the csi-rbdplugin pod showing that it tries to connect to the primary ceph cluster, which is down:
Expected behavior
The remount will be done successfully from the secondary cluster and the application will start.
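For reference, the manual workaround mentioned in the description (recreating the PV with a volumeHandle pointing at the secondary cluster) would look roughly like the sketch below. The PV name, pool, cluster ID, and image UUID are placeholders; the volumeHandle shown follows ceph-csi's volume ID encoding (version, cluster ID length, cluster ID, pool ID in hex, image UUID):

```yaml
# Sketch of a manually created PV for the secondary site; all values are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mirrored-rbd-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4
    # volumeHandle encodes <version>-<cluster id length>-<cluster id>-<pool id hex>-<image uuid>;
    # here it is rewritten to reference the secondary cluster ("site2-storage", pool ID 2).
    volumeHandle: 0001-0013-site2-storage-0000000000000002-<image-uuid>
    volumeAttributes:
      clusterID: site2-storage
      pool: mirrored-pool
```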