
Fixing the refresh to remove all physical switches. #17390

Merged
merged 1 commit into from
Jun 12, 2018

Conversation

CharlleDaniel
Member

@CharlleDaniel CharlleDaniel commented May 7, 2018

This PR is able to:

  • Fix the refresh so that it no longer removes all PhysicalSwitches after an infrastructure provider is refreshed.

Issue: #17383 - PhysicalSwitches being deleted after the Infrastructure provider refresh

@CharlleDaniel CharlleDaniel changed the title Fix all the physical switches be removed after the refresh. Fixing the refresh removal of physical switches. May 7, 2018
@CharlleDaniel CharlleDaniel changed the title Fixing the refresh removal of physical switches. Fixing the refresh to remove all physical switches. May 7, 2018
@CharlleDaniel
Member Author

@miq-bot add_label bug

@miq-bot miq-bot added the bug label May 7, 2018
@agrare agrare self-assigned this May 14, 2018
@agrare
Member

agrare commented May 14, 2018

:( This is a method that I wish we had done differently. The real problem is that this will impact other providers' switches. I'd prefer not to make this method even worse by tacking on "not physical switches", and instead fix the underlying problem.

I started a PR, #17420, to better model distributed and host-local switches for InfraManagers; we'll see how that goes. If that takes too long we can revisit this. If anything, I'd prefer to scope this to just the switches under this ems.

@CharlleDaniel
Member Author

@agrare I agree with you, so I will close this PR and follow your PR #17420. Is that ok?

# delete from switches as s where s.id not in (select switch_id from host_switches)
# AND s.type not like '%PhysicalInfraManager%'
switch_ids = HostSwitch.all.collect(&:switch_id)
Switch.where.not(:id => switch_ids).where.not("type LIKE '%PhysicalInfraManager%'").destroy_all
Member

Could you do just Switch.where.not(:id => switch_ids, :type => 'PhysicalSwitch').destroy_all

Member Author

@CharlleDaniel CharlleDaniel Jun 8, 2018

hey @agrare, the type is a little longer: ManageIQ::Providers::Lenovo::PhysicalInfraManager::PhysicalSwitch. If I use Switch.where.not(:id => switch_ids, :type => "ManageIQ::Providers::Lenovo::PhysicalInfraManager::PhysicalSwitch").destroy_all, I will only fix the Lenovo PhysicalSwitches, not those of every PhysicalInfraManager. What do you think?

Member Author

Also, today we have a new PhysicalInfraManager: the RedFish provider.

Member

Okay, how about this: before we save any inventory, up here we collect all switch ids up front by doing ems.switches.pluck(:id), which we then pass into this method. That will give us the scope of switches that we are going to delete here and not impact other providers at all.
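The scoping idea suggested here can be sketched with plain Ruby arrays. This is a simulation with hypothetical ids, not the actual ActiveRecord code:

```ruby
# Simulation of the scoping idea with hypothetical ids; the real code
# calls ems.switches.pluck(:id) against the database before the save.

# Switch ids that belonged to this ems before the inventory save
old_switch_ids = [1, 2, 3, 4]

# Switch ids the ems still has after the save (switches 3 and 4 are gone)
current_switch_ids = [1, 2]

# Only switches that belonged to THIS ems and have since disappeared
# are candidates for deletion, so other providers are never touched.
obsolete_switch_ids = old_switch_ids - current_switch_ids
puts obsolete_switch_ids.inspect
```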

@agrare
Member

agrare commented Jun 7, 2018

While I think #17420 is the right way to fix this, I get that it is taking a long time and this will solve things in the short term.

@CharlleDaniel just the one change and then this looks okay to me

# delete from switches as s where s.shared is NULL and s.id not in (select switch_id from host_switches)
# delete from switches as s where s.shared = 't' and s.id not in (select switch_id from host_switches)
Switch.where.not(:id => HostSwitch.all.collect(&:switch).uniq).destroy_all
switch_ids = HostSwitch.all.collect(&:switch_id)
switches.where.not(:id => switch_ids).destroy_all
Member Author

@agrare It works. Is it ok for you?

Member

At this point the switches have already been saved, so this won't actually disconnect anything.

I suggested collecting the IDs here since that will be the original list. You can then destroy the old_switch_ids - switch_ids.

@@ -64,7 +64,7 @@ def save_ems_infra_inventory(ems, hashes, target = nil, disconnect = true)
:orchestration_templates,
:orchestration_stacks
]

old_switch_ids = ems.switches.pluck(:id)
Member Author

@agrare Is this your suggestion?

Member

👍 yes
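The two-step flow agreed on here — capture the ids before the save, diff afterwards — can be simulated in plain Ruby. The Struct and the ids below are hypothetical stand-ins for the ActiveRecord calls:

```ruby
# Hypothetical stand-in for an ExtManagementSystem and its switch ids;
# the real code uses ems.switches.pluck(:id).
Ems = Struct.new(:switch_ids)

ems = Ems.new([10, 11, 12])

# Step 1: before saving inventory, remember which switches this ems had.
old_switch_ids = ems.switch_ids.dup

# ... the inventory save runs; switch 12 no longer exists on the provider ...
ems.switch_ids = [10, 11]

# Step 2: after the save, anything in the old list but not the new one
# is obsolete and safe to destroy.
obsolete_switch_ids = old_switch_ids - ems.switch_ids
puts obsolete_switch_ids.inspect
```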

# delete from switches as s where s.shared is NULL and s.id not in (select switch_id from host_switches)
# delete from switches as s where s.shared = 't' and s.id not in (select switch_id from host_switches)
Switch.where.not(:id => HostSwitch.all.collect(&:switch).uniq).destroy_all
obsolete_switch_ids = old_switch_ids - HostSwitch.all.collect(&:switch_id)
Member

Since our old_switch_ids is scoped down to just this ems (as it should be) we should use - ems.switches.reload.pluck(:id)

Member Author

@CharlleDaniel CharlleDaniel Jun 8, 2018

@agrare I'm not sure, but I think we should consider obsolete only those switches that do not have a relationship with hosts. What do you think?

Member Author

@CharlleDaniel CharlleDaniel Jun 8, 2018

Because if we use obsolete_switches - ems.switches.reload.pluck(:id), we are only considering the difference between the lists.

Member

Because if we use obsolete_switches - ems.switches.reload.pluck(:id), we are only considering the difference between the lists.

Yeah that's the idea, we basically want to replicate the behavior of using the association to do the deletes.

Member Author

oh, I got it. I will change this right now.
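The point of diffing against ems.switches.reload.pluck(:id) rather than the global host_switches list can be shown with hypothetical ids:

```ruby
# All ids below are hypothetical.
old_switch_ids       = [1, 2, 3]  # this ems, captured before the save
ems_switch_ids_after = [1, 2]     # stands in for ems.switches.reload.pluck(:id)
host_switch_ids      = [1, 5]     # global host_switches; 5 belongs to another ems

# Diffing against the global host_switches list would also flag switch 2,
# which this ems still owns but which currently has no host association:
flagged_globally = old_switch_ids - host_switch_ids

# Diffing against the reloaded ems scope flags only what actually
# disappeared, replicating the behavior of deleting via the association:
flagged_by_scope = old_switch_ids - ems_switch_ids_after

puts flagged_globally.inspect
puts flagged_by_scope.inspect
```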

@miq-bot
Member

miq-bot commented Jun 11, 2018

Checked commit CharlleDaniel@a3cd885 with ruby 2.3.3, rubocop 0.52.1, haml-lint 0.20.0, and yamllint 1.10.0
1 file checked, 0 offenses detected
Everything looks fine. 🏆

@agrare
Member

agrare commented Jun 11, 2018

@CharlleDaniel this looks good I just want to give it a quick test before merging

@CharlleDaniel
Member Author

@agrare ok, thanks.

Member

@agrare agrare left a comment

LGTM, confirmed that we do not disconnect other providers' unassociated switches and that we do destroy switches which were deleted.

@agrare agrare merged commit d13ce78 into ManageIQ:master Jun 12, 2018
@agrare
Member

agrare commented Jun 12, 2018

Thanks for bearing with me @CharlleDaniel !

@agrare agrare added this to the Sprint 88 Ending Jun 18, 2018 milestone Jun 12, 2018