opennebula_cluster removes resources on secondary run #389
Thanks for the issue, good point. I forgot to check all inter-resource relationships. It is actually possible to tie a datastore/host to a cluster in two ways, and the way it's implemented for now is problematic. When you attach a host to a cluster via the …
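For illustration, the two ways might look like this in HCL. This is a sketch only: the attribute names `hosts` and `cluster_id` are assumptions for illustration, not confirmed provider syntax.

```hcl
# Way 1 (hypothetical): membership is declared on the cluster resource.
resource "opennebula_cluster" "a" {
  name  = "cluster-a"
  hosts = [opennebula_host.node1.id] # assumed attribute
}

# Way 2 (hypothetical): the host declares which cluster it belongs to.
resource "opennebula_host" "node1" {
  name       = "node1"
  cluster_id = 100 # assumed attribute, target cluster ID
}
```

Declaring both at once is exactly the double bookkeeping described above: two resources each claim to own the same relationship, so each apply can undo what the other set.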
We need to remove the direct dependencies between the two resources. Add …
This may be solved by reading only the configured elements of the … Some other ideas:
Another thing that is inconsistent and may be confusing across the provider's resources is the way we delete a group vs. the way we delete a cluster:
I'll probably open one or two issues (on …
The proposed solution does not allow you to associate existing resources with the cluster.
You still have the …
To retrieve the list of cluster members, I propose to modify the cluster fields …
We could instead manage membership from the cluster resource, but I did it this way because:
I had to make a choice and I knew these arguments, but it's not a big deal to manage membership from the cluster resource, so if everyone thinks it's better, I'm open to doing it like this.
The proposed solution is good, but we also need a way to associate existing resources.
Ok, so membership management would be less exclusive from the cluster side. PR updated.
Seems legit. Can you add it as an RC release or something so we can test it in pre-prod?
The …
Partially fixed: new datastores are allocated to the default cluster as well.
With the current way to manage cluster membership, which we just modified for the last RC release of the provider, a new datastore is added to the default cluster due to these points:
To fix this we could:
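One possible fix, sketched here with assumed attribute names: pin the new datastore to its target cluster at creation time, so it never lands in the default cluster (ID 0).

```hcl
# Hypothetical sketch: "cluster_id" and "type" are assumed attribute names.
resource "opennebula_datastore" "images" {
  name       = "images"
  type       = "image"                 # assumed attribute
  cluster_id = opennebula_cluster.a.id # tie it to a specific cluster at creation
}
```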
A few ideas to consider:
Considering your first idea would mean that we use … We could go this way, i.e. manage the dependency from the datastore/host side via the … It's still possible to roll back these changes, as the provider hasn't been released with the related attribute changes...
Keep the current RC1 changes, they are way better than the previous version.
And this might be the best idea yet. That way we don't lose our current gained flexibility and resources are still managed individually. Update: I just realized that updating the default cluster won't work without importing the resource...
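For reference, updating the default cluster from Terraform would indeed first require importing it into state. OpenNebula's default cluster has ID 0; the resource address below is hypothetical.

```shell
# Bring the pre-existing default cluster (ID 0) under Terraform management.
terraform import opennebula_cluster.default 0
```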
Related?
You can't delete a cluster with a host/datastore inside it; that's an OpenNebula constraint, and the provider doesn't try to add behavior to empty the cluster of its members before deleting it.
As I said in this previous comment, a host is only in one cluster at a time (a datastore can be in several clusters at the same time), so when you add it to a new cluster, OpenNebula automatically removes it from the other. This is not related to the provider.
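In Terraform terms (the `cluster_id` attribute name is assumed for illustration), moving a host between clusters is a single attribute change; the detach from the old cluster happens implicitly on the OpenNebula side.

```hcl
# Hypothetical: a host belongs to exactly one cluster at a time.
# Changing cluster_id from cluster A to cluster B re-attaches the host;
# OpenNebula removes it from A automatically, no extra step in the plan.
resource "opennebula_host" "node1" {
  name       = "node1"
  cluster_id = opennebula_cluster.b.id # previously opennebula_cluster.a.id
}
```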
OpenNebula version: 6.4.0
terraform-provider-opennebula: 1.1.0
On the second run, the resources are removed from the opennebula_cluster.
Plan:
Variables:
Run 1:
Run 2:
Expected behavior:
No change is expected.