
Openshift graph refresh #34

Merged: 1 commit, Jul 27, 2017
Conversation

@cben (Contributor) commented Jul 12, 2017

RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1470021

@miq-bot (Member) commented Jul 17, 2017

This pull request is not mergeable. Please rebase and repush.

@cben cben closed this Jul 18, 2017
@cben cben reopened this Jul 18, 2017
@cben cben force-pushed the graph-openshift branch 4 times, most recently from edfe550 to dc4f1e2 on Jul 20, 2017 21:02
@cben cben changed the title [WIP] Openshift graph refresh Openshift graph refresh Jul 20, 2017
@miq-bot miq-bot removed the wip label Jul 20, 2017
@cben (Contributor, Author) commented Jul 20, 2017

This is now ready. @enoodle @zakiva @zeari @Ladas @agrare please review.

Got delayed as enabling the deletion tests on graph refresh revealed some issues, notably with get_container_images=false mode, and extending that test revealed some more... (Yay tests!)

  • One tiny point I'm not fixing in this PR, because it's also this way with the old refresh, is the images' tag column — in get_container_images=false mode it is not preserved like the rest of the metadata but "regresses" to the kubernetes parser value.
    Fixing that in graph refresh alone would be an easy matter of blacklist_attributes!, but I really want exactly the same behavior in both, and fixing it in both would be uglier: eca2260.
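For illustration, attribute blacklisting boils down to dropping the parsed value before saving, so the DB value survives. A simplified, hypothetical sketch in plain Ruby (`save_attributes` and the attribute names are made up here, not the actual InventoryCollection API):

```ruby
# Simplified model of attribute blacklisting during a refresh save:
# blacklisted attributes are dropped from the parsed data, so the
# existing DB value is preserved instead of being overwritten.
def save_attributes(db_record, parsed_attributes, blacklist: [])
  db_record.merge(parsed_attributes.reject { |k, _v| blacklist.include?(k) })
end

db_image     = { :name => "ruby", :tag => "2.2", :digest => "sha256:abc" }
parsed_image = { :name => "ruby", :tag => nil,   :digest => "sha256:abc" }

# With :tag blacklisted, the DB value survives; without the blacklist,
# :tag "regresses" to the parser's value (nil here).
result = save_attributes(db_image, parsed_image, :blacklist => [:tag])
```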

P.S. get_container_images=false mode is getting annoying to support; I would like to drop it in favor of the store_unused_images=false mode proposed in #9.

P.S. @Ladas has some more fixes but they're not covered by current specs. We'll do them in later PR(s).

@cben (Contributor, Author) commented Jul 24, 2017

Ping: @enoodle @zakiva @zeari please review.

@cben cben closed this Jul 24, 2017
@cben cben reopened this Jul 24, 2017
@agrare (Member) commented Jul 24, 2017

> P.S. get_container_images=false mode is getting annoying to support, I would like to drop it in favor of store_unused_images=false mode proposed in #9.

I'm good with that now that we have a way of increasing the timeout on the collection

@cben (Contributor, Author) commented Jul 26, 2017

@moolitayer @enoodle @yaacov @zgalor @zakiva @zeari Please review — this is blocking delivery of graph refresh for the Scale team to test, and some upcoming PRs...

@Ladas (Contributor) left a comment

This looks great. 👍

I thought this was already in; I was wondering yesterday why we don't test the graph refresh in the OpenShift gem. :-)

expect(ManageIQ::Providers::Openshift::ContainerManager::RefreshParser).not_to receive(:ems_inv_to_inv_collections)
end

include_examples "openshift refresher VCR tests"


👍

end

# We rely on Kubernetes initialize_inventory_collections setting up our collections too.
# TODO: does it mean kubernetes refresh would wipe out the openshift tables?


@cben can you explain why? initialize_inventory_collections is filling in some metadata (and the ems); why would that wipe tables?

Contributor


Yeah, if you ran a kubernetes refresh on an openshift provider, it would delete the tables you are not sending data for. But we don't do that (running another manager type's refresh), since that would break everywhere. :-)

Contributor Author


The parser returns all @inv_collections.values as the collections to sync, so it always includes e.g. ems.container_templates. The persistor can't tell whether the collection is empty because we didn't fill it or because all templates were deleted in openshift; if it finds any ems.container_templates in the DB, it will delete them.

These collections should always be empty in DB for a kubernetes provider, so it's not really a problem. I want to see where @Ladas's upcoming restructurings (including ManageIQ/manageiq-providers-kubernetes#73) end up before moving openshift collections here...

But I already had to move image labels to initialize_inventory_collections here for this very reason, so I'll move this comment to the super call there.
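To make the failure mode discussed above concrete, here is a simplified, hypothetical model of a full-refresh save in plain Ruby (not the actual persistor code): any DB record whose ref is missing from the parsed collection is treated as deleted on the provider side.

```ruby
# Simplified full-refresh semantics: the parsed collection is taken as
# the complete truth, so DB records absent from it get deleted.
def full_refresh_save(db_records, parsed_records)
  parsed_refs = parsed_records.map { |r| r[:ems_ref] }
  deleted     = db_records.reject { |r| parsed_refs.include?(r[:ems_ref]) }
  { :kept => db_records - deleted, :deleted => deleted }
end

db = [{ :ems_ref => "template-1" }, { :ems_ref => "template-2" }]

# An empty parsed collection (e.g. the parser never filled
# container_templates) is indistinguishable from "all templates were
# deleted in openshift": both DB rows get marked for deletion.
result = full_refresh_save(db, [])
```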

Contributor


Right, I moved initialize_inventory_collections a bit; now it's under the Openshift Persister:
#39

inventory["project"].each do |data|
h = parse_project(data)
# TODO: Assumes full refresh, and running after get_namespaces_graph.
# Would be a problem with partial refresh.


`Must run after get_namespaces_graph` seems like a fair assumption (we have lots of those).

Can you rephrase the TODOs as assertions?
(I mean something like # Must run after get_namespaces_graph, # needs modification for partial refresh, etc.)
I'm not sure there is actually something to do here right now, so let's bring in as few TODOs as possible.

Contributor Author


Old refresh had these assumptions (order & full refresh) everywhere.
Graph refresh is pretty close to not having any :-), and there are concrete wins to making them independent.
But yeah, there is more to track for partial refresh, I'll make assertions not TODOs.
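For illustration, one way the TODO could become an assertion (a hypothetical sketch; the method signature and the namespace index are assumed, not the PR's actual code):

```ruby
# Hypothetical sketch: fail loudly when the ordering assumption is
# violated, instead of leaving it as a TODO comment.
def get_projects_graph(inventory, namespaces_by_name)
  # Must run after get_namespaces_graph: projects are matched up with
  # the namespaces collected there. Partial refresh would need rework.
  raise "get_projects_graph must run after get_namespaces_graph" if namespaces_by_name.nil?

  inventory["project"].map do |data|
    { :name => data["name"], :namespace => namespaces_by_name[data["name"]] }
  end
end
```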

Add ems_refresh.openshift.inventory_object_refresh setting.
Run refresher_spec file for both old and graph refresh.
Graph refresh preserve images metadata with get_container_images=false
(except tag column).
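For context, the ems_refresh.openshift.inventory_object_refresh setting added by this commit would be a provider settings entry along these lines (a hypothetical sketch; the exact file location and default value are assumed):

```yaml
# Sketch of the settings toggle (location and default assumed):
ems_refresh:
  openshift:
    inventory_object_refresh: true  # true = graph refresh, false = old refresh parser
```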
@cben (Contributor, Author) commented Jul 26, 2017

Rewrote comments, PTAL.

@miq-bot (Member) commented Jul 26, 2017

Checked commit cben@94f3a5a with ruby 2.2.6, rubocop 0.47.1, and haml-lint 0.20.0
3 files checked, 0 offenses detected
Everything looks fine. 🍰

@zakiva (Contributor) left a comment

Changes look good 👍 (pending Travis)

@cben (Contributor, Author) commented Jul 27, 2017

Kicking Travis, as master is now green.

@cben cben closed this Jul 27, 2017
@cben cben reopened this Jul 27, 2017
@moolitayer

@cben the Travis failure seems to be related to the ContainerDefinition removal.

@moolitayer left a comment

LGTM 👍

@cben (Contributor, Author) commented Jul 27, 2017

Strange, I thought ContainerDefinition had been all fixed; checking... => Ari fixed it in #40.

@cben cben closed this Jul 27, 2017
@cben cben reopened this Jul 27, 2017
@moolitayer moolitayer merged commit 35adfc1 into ManageIQ:master Jul 27, 2017
@moolitayer moolitayer added this to the Sprint 66 Ending Aug 7, 2017 milestone Aug 9, 2017
cben added a commit to cben/manageiq that referenced this pull request Aug 28, 2017
d-m-u pushed a commit to d-m-u/manageiq that referenced this pull request Jun 6, 2018
7 participants