Openshift graph refresh #34
Conversation
This pull request is not mergeable. Please rebase and repush.
Force-pushed from edfe550 to dc4f1e2.
This is now ready. @enoodle @zakiva @zeari @Ladas @agrare please review. Got delayed as enabling the deletion tests on graph refresh revealed some issues, notably with
P.S. @Ladas has some more fixes, but they're not covered by current specs. We'll do them in later PR(s).
I'm good with that now that we have a way of increasing the timeout on the collection.
This looks great. 👍
I thought this was already in; I was wondering yesterday why we don't test the graph refresh in the OpenShift gem. :-)
expect(ManageIQ::Providers::Openshift::ContainerManager::RefreshParser).not_to receive(:ems_inv_to_inv_collections)
end

include_examples "openshift refresher VCR tests"
👍
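For context, a rough sketch of how the two spec contexts could be wired around that shared example group; the stub_settings_merge helper and the context wording here are assumptions, not necessarily the PR's exact code:

context "with :inventory_object_refresh disabled (classic refresh)" do
  before do
    stub_settings_merge(:ems_refresh => {:openshift => {:inventory_object_refresh => false}})
    # The classic path should never reach the graph-refresh parser entry point.
    expect(ManageIQ::Providers::Openshift::ContainerManager::RefreshParser)
      .not_to receive(:ems_inv_to_inv_collections)
  end

  include_examples "openshift refresher VCR tests"
end

context "with :inventory_object_refresh enabled (graph refresh)" do
  before do
    stub_settings_merge(:ems_refresh => {:openshift => {:inventory_object_refresh => true}})
  end

  include_examples "openshift refresher VCR tests"
end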
end

# We rely on Kubernetes initialize_inventory_collections setting up our collections too.
# TODO: does it mean kubernetes refresh would wipe out the openshift tables?
@cben can you explain why? initialize_inventory_collections is filling in some metadata (and the ems); why would that wipe tables?
Yeah, if you ran a kubernetes refresh on an openshift provider, it would delete the tables you are not sending data for. But we don't do that (run another manager type's refresh), since that would break everywhere. :-)
The parser returns all @inv_collections.values as the collections to sync, so it always includes e.g. ems.container_templates. The persistor doesn't know whether the collection is empty because we didn't fill it, or empty because all templates were deleted in openshift; if it finds any ems.container_templates in the DB it will delete them.

These collections should always be empty in the DB for a kubernetes provider, so it's not really a problem. I want to see where @Ladas's upcoming restructurings (including ManageIQ/manageiq-providers-kubernetes#73) end up before moving openshift collections here... But I already had to move image labels to initialize_inventory_collections here, for this very reason, so I'll move this comment to the super call there.
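An illustrative sketch of that failure mode (not the actual persistor code; the accessors and query below are simplified): because the parser hands back every initialized collection, a never-filled collection looks identical to "everything was deleted on the provider side":

@inv_collections.values.each do |collection|        # always includes ems.container_templates
  sent_refs = collection.data.map(&:manager_uuid)   # empty if the parser never filled it
  collection.model_class
            .where(:ems_id => ems.id)
            .where.not(:ems_ref => sent_refs)
            .destroy_all                             # DB rows not re-sent are treated as deleted
end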
Right, I moved initialize_inventory_collections a bit; it's now under the Openshift Persister: #39
inventory["project"].each do |data| | ||
h = parse_project(data) | ||
# TODO: Assumes full refresh, and running after get_namespaces_graph. | ||
# Would be a problem with partial refresh. |
"Must run after get_namespaces_graph" seems like a fair assumption (we have lots of those). Can you rephrase the TODOs as assertions? (I mean something like # Must run after get_namespaces_graph, # needs modification for partial refresh, etc.) I'm not sure there is actually something to do here right now, so let's bring in as few TODOs as possible.
Old refresh had these assumptions (order & full refresh) everywhere.
Graph refresh is pretty close to not having any :-), and there are concrete wins to making them independent.
But yeah, there is more to track for partial refresh; I'll make them assertions, not TODOs.
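For illustration, one way such an ordering assumption could be stated as an assertion rather than a TODO; the method name, guard variable, and message here are assumptions based on the snippet above, not the PR's actual code:

def get_projects_graph(inventory)
  # Hard assumptions: full refresh, and get_namespaces_graph has already run.
  raise "get_projects_graph assumes get_namespaces_graph already ran" unless @namespaces_parsed

  inventory["project"].each do |data|
    h = parse_project(data)
    # ... merge the project attributes into the already-built namespace entries ...
  end
end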
Add ems_refresh.openshift.inventory_object_refresh setting. Run the refresher_spec file for both old and graph refresh. Graph refresh preserves image metadata with get_container_images=false (except the tag column).
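A rough sketch of how a setting like that can gate the refresh path; the refresher hook and save calls shown are approximations of that era's ManageIQ core, not necessarily this PR's exact code:

parser = ManageIQ::Providers::Openshift::ContainerManager::RefreshParser

if refresher_options.inventory_object_refresh
  # Graph refresh: build InventoryCollections and save them via the new persistence path.
  collections = parser.ems_inv_to_inv_collections(ems, inventory)
  ManagerRefresh::SaveInventory.save_inventory(ems, collections)
else
  # Classic refresh: build hashes and save through the legacy EmsRefresh path.
  hashes = parser.ems_inv_to_hashes(ems, inventory)
  EmsRefresh.save_ems_inventory(ems, hashes)
end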
Rewrote comments, PTAL.
Checked commit cben@94f3a5a with ruby 2.2.6, rubocop 0.47.1, and haml-lint 0.20.0
Changes look good 👍 (pending Travis)
Kicking Travis as master is now green.
@cben Travis failure seems to be related to the ContainerDefinition removal.
LGTM 👍
Strange, I thought ContainerDefinition had been all fixed, checking... => Ari fixed it in #40
Manually cherry-picked out of ManageIQ/manageiq-providers-openshift#34. Useful to better test the backported get_container_images option (ManageIQ#14606). https://bugzilla.redhat.com/show_bug.cgi?id=1484548
Depended on: Changes needed for openshift graph refresh (manageiq-providers-kubernetes#57) [merged]
Running all VCR tests, both the older cassette and the newer deletion tests.
RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1470021