Proxmox Inventory: added new statuses for qemu #4723
Conversation
/rebuild_failed
plugins/inventory/proxmox.py

```python
# get more details about the status of the qemu VM if want_facts == True
if want_facts:
    item_status = properties.get(self._fact('qmpstatus'), item_status)
self.inventory.add_child(self._group('all_%s' % (item_status)), name)
```
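The override above can be sketched in isolation. This is a minimal sketch, not the plugin itself: the fact key `proxmox_qmpstatus` assumes the plugin's usual `proxmox_` fact prefix, and the sample data is hypothetical.

```python
# Sketch: prefer the detailed QMP status over the coarse Proxmox status.
def effective_status(item_status, properties, want_facts):
    """Return qmpstatus when facts were gathered, else the coarse status."""
    if want_facts:
        # qmpstatus can be e.g. 'paused' or 'prelaunch' while the
        # top-level Proxmox status still says 'running'
        return properties.get('proxmox_qmpstatus', item_status)
    return item_status

# A suspended VM: Proxmox reports 'running', QMP reports 'paused'
props = {'proxmox_qmpstatus': 'paused'}
print(effective_status('running', props, want_facts=True))   # paused
print(effective_status('running', props, want_facts=False))  # running
```

Without `want_facts`, the qmpstatus fact is simply never fetched, so the coarse status wins, which is exactly the ambiguity discussed below.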
Hmm, isn't this a breaking change, since some hosts are now in different groups than before?
Now they would be correct. A VM in the Paused state cannot be used or connected to. The reason for this change is that Proxmox says the VM is running when in actuality it is not. Without this, you have a wrong status for your machine.
The only other thing I can think of is to introduce a new group prefixed with qemu, like proxmox_qemu_paused, and leave the VM in running. That way it will always be added to running even though it's in a paused/prelaunch state.
I'm fine with adding the two new groups and adding the new statuses there. What do you think @felixfontein?
We're going to end up with the following groups, and we can do an intersection between proxmox_all_running and proxmox_all_qemu_running to figure out which ones are actually running.
|--@proxmox_all_lxc:
|--@proxmox_all_qemu:
|--@proxmox_all_qemu_paused:
|--@proxmox_all_qemu_prelaunch:
|--@proxmox_all_qemu_running:
|--@proxmox_all_qemu_stopped:
|--@proxmox_all_running:
|--@proxmox_all_stopped:
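The intersection idea can be demonstrated with plain set operations. The host names and group memberships below are hypothetical sample data, not output from a real inventory:

```python
# Hypothetical group memberships as the inventory plugin might produce them
groups = {
    'proxmox_all_running':      {'vm1', 'vm2', 'vm3'},
    'proxmox_all_qemu_running': {'vm1', 'vm3'},
    'proxmox_all_qemu_paused':  {'vm2'},  # reported running, actually paused
}

# VMs that both the coarse status and QMP agree are actually running
actually_running = (groups['proxmox_all_running']
                    & groups['proxmox_all_qemu_running'])
print(sorted(actually_running))  # ['vm1', 'vm3']
```

In a play, the same intersection can be expressed with Ansible's pattern syntax, e.g. `hosts: proxmox_all_running:&proxmox_all_qemu_running`.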
Adding the new groups (and not modifying the old ones) definitely works; the chance that it breaks an existing setup should be negligible.
What do the other proxmox maintainers think? Do you prefer the old solution, or do you tend more towards this one (which is definitely backwards compatible)?
I've changed it, so we still have the old behavior, and we just add them to the new group.
I'm inclined to still move ahead with it. @felixfontein is absolutely right that it's a breaking change in how it behaves; however, the current behavior should be considered a bug rather than this being a new feature. As @ilijamt pointed out, in his current setup, which might affect other users as well, the module breaks play execution when a KVM is currently being moved somewhere else, because it reports as running while it technically is not.
It's not the only case: you can also manually suspend a VM, and it will report as running while it's suspended.
The only issue is that we need want_facts set to true to be able to tell the difference.
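That requirement can be illustrated with a minimal inventory config for the plugin. This is a sketch with placeholder connection values, not a recommended production config:

```yaml
# my.proxmox.yml -- minimal sketch; url, user, and password are placeholders
plugin: community.general.proxmox
url: https://proxmox.example.com:8006
user: ansible@pve
password: secret
# qmpstatus is only fetched when facts are gathered, so detecting
# paused/prelaunch VMs needs this enabled:
want_facts: true
```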
So what is the consensus? cc @Thulium-Drake @felixfontein
- Option A: add a new flag to allow for the new statuses (empty when the flag is not set), and deprecate it in a new release.
- Option B: fix the issue now, and extend the statuses with the additional statuses prelaunch and paused.
I'm good with either, but I think option A is easier for @felixfontein to release right now :)
If the next release were a major release I would be OK with option B, but since we just had one, it's probably better to go with option A. This feels too close to a breaking change to me, even though I can understand very well why the current behavior sucks :)
SUMMARY
Added support for the missing QEMU statuses (prelaunch and paused)
ISSUE TYPE
COMPONENT NAME
proxmox.inventory