fix: RAID volume pre cleanup #169

Merged: 1 commit, Jul 19, 2023
9 changes: 7 additions & 2 deletions library/blivet.py
@@ -1002,8 +1002,13 @@
        if self._device:
            return

-        if safe_mode:
-            raise BlivetAnsibleError("cannot create new RAID in safe mode")
+        for spec in self._volume["disks"]:
+            disk = self._blivet.devicetree.resolve_device(spec)
+            if not disk.isleaf or disk.format.type is not None:

Collaborator: The check for safe_mode has to be a bit more careful, like the ones in BlivetVolume._reformat and BlivetBase._manage_one_encryption. I see that BlivetPool._create_members also needs to check for device.original_format.name != get_format(None).name (to catch formatting reported by blkid but not recognized/handled by blivet).

Author: Fixed.

+                if safe_mode and (disk.format.type is not None or disk.format.name != get_format(None).name):
+                    raise BlivetAnsibleError("cannot remove existing formatting and/or devices on disk '%s' in safe mode" % disk.name)
+                else:
+                    self._blivet.devicetree.recursive_remove(disk)

        # begin creating the devices
        members = self._create_raid_members(self._volume["disks"])
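
To make the new control flow easy to audit, here is a self-contained sketch of the decision the pre-cleanup loop makes for each member disk. Disk, Format, BLANK, and plan are hypothetical stand-ins for illustration, not blivet or role APIs:

```python
# Stub model of the pre-cleanup decision added above; all names hypothetical.
from collections import namedtuple

Format = namedtuple("Format", ["type", "name"])
Disk = namedtuple("Disk", ["name", "isleaf", "format"])

BLANK = Format(type=None, name="blank")  # stands in for get_format(None)

def plan(disk, safe_mode):
    """Return the action the loop would take for a single member disk."""
    if disk.isleaf and disk.format.type is None:
        return "keep"            # clean leaf disk: nothing to remove
    if safe_mode and (disk.format.type is not None
                      or disk.format.name != BLANK.name):
        return "fail"            # existing formatting + safe mode -> error
    return "recursive_remove"    # wipe the disk and anything stacked on it

print(plan(Disk("sda", True, BLANK), safe_mode=True))                 # keep
print(plan(Disk("sdb", True, Format("xfs", "xfs")), safe_mode=True))  # fail
print(plan(Disk("sdc", False, BLANK), safe_mode=False))               # recursive_remove
```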
102 changes: 102 additions & 0 deletions tests/tests_raid_volume_cleanup.yml
@@ -0,0 +1,102 @@
---
- name: Test RAID cleanup
  hosts: all
  become: true
  vars:
    storage_safe_mode: false
    storage_use_partitions: true
    mount_location1: '/opt/test1'
    mount_location2: '/opt/test2'
    volume1_size: '5g'
    volume2_size: '4g'

  tasks:
    - name: Run the role
      include_role:
        name: linux-system-roles.storage

    - name: Mark tasks to be skipped
      set_fact:
        storage_skip_checks:
          - blivet_available
          - packages_installed
          - service_facts

    - name: Get unused disks
      include_tasks: get_unused_disk.yml
      vars:
        max_return: 3
        disks_needed: 3

    - name: Create two LVM logical volumes under volume group 'foo'
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"
              - name: test2
                size: "{{ volume2_size }}"
                mount_point: "{{ mount_location2 }}"

    - name: Enable safe mode
      set_fact:
        storage_safe_mode: true

    - name: >-
        Try to overwrite existing device with RAID volume
        and safe mode on (expect failure)
      include_tasks: verify-role-failed.yml
      vars:
        __storage_failed_regex: cannot remove existing formatting.*in safe mode
        __storage_failed_msg: >-
          Unexpected behavior when overwriting existing device with RAID volume
        __storage_failed_params:
          storage_volumes:
            - name: test1
              type: raid
              raid_level: "raid1"
              raid_device_count: 2
              raid_spare_count: 1
              disks: "{{ unused_disks }}"
              mount_point: "{{ mount_location1 }}"
              state: present

    - name: Disable safe mode
      set_fact:
        storage_safe_mode: false

    - name: Create a RAID1 device mounted on "{{ mount_location1 }}"
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: raid
            raid_level: "raid1"
            raid_device_count: 2
            raid_spare_count: 1
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: present

    - name: Verify role results
      include_tasks: verify-role-results.yml

    - name: Cleanup - remove the disk device created above
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: raid
            raid_level: "raid1"
            raid_device_count: 2
            raid_spare_count: 1
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: absent
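
As a sanity check on the expected-failure step, the __storage_failed_regex above does match the message format introduced in library/blivet.py; verify-role-failed.yml is assumed to apply the regex as a search-style match:

```python
# Sanity check: the test's failure regex matches the new error message.
# 'sda' is an example disk name; the message format comes from the diff above.
import re

pattern = r"cannot remove existing formatting.*in safe mode"
msg = ("cannot remove existing formatting and/or devices on disk 'sda' "
       "in safe mode")
assert re.search(pattern, msg)
print("regex matches")
```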