fix: raid volume pre cleanup
Cause: Existing data were not removed from member disks before RAID
volume creation.

Fix: RAID volumes now remove existing data from member disks as needed before creation.

Signed-off-by: Jan Pokorny <japokorn@redhat.com>
japokorn committed Jul 14, 2023
1 parent d95e590 commit f89f484
Showing 2 changed files with 112 additions and 2 deletions.
9 changes: 7 additions & 2 deletions library/blivet.py
@@ -1002,8 +1002,13 @@ def _create(self):
         if self._device:
             return

-        if safe_mode:
-            raise BlivetAnsibleError("cannot create new RAID in safe mode")
+        for spec in self._volume["disks"]:
+            disk = self._blivet.devicetree.resolve_device(spec)
+            if not disk.isleaf or disk.format.type is not None:
+                if safe_mode and (disk.format.type is not None or disk.format.name != get_format(None).name):
+                    raise BlivetAnsibleError("cannot remove existing formatting and/or devices on disk '%s' in safe mode" % disk.name)
+                else:
+                    self._blivet.devicetree.recursive_remove(disk)

         # begin creating the devices
         members = self._create_raid_members(self._volume["disks"])
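
For readers skimming the hunk above, here is a runnable restatement of the per-disk decision it adds. This is a sketch only, not code from this commit: FakeDisk, FakeFormat, describe_action and the returned strings are stand-ins, blank_format_name plays the role of get_format(None).name, and in the committed code the "fail" branch raises BlivetAnsibleError while the "wipe" branch calls self._blivet.devicetree.recursive_remove(disk).

from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeFormat:                      # stand-in for blivet's DeviceFormat
    type: Optional[str]
    name: str

@dataclass
class FakeDisk:                        # stand-in for a blivet disk device
    name: str
    isleaf: bool
    format: FakeFormat

def describe_action(disk, safe_mode, blank_format_name):
    """Sketch of the per-disk branch added to _create() above."""
    # An empty disk (no child devices, no formatting) is used as-is.
    if disk.isleaf and disk.format.type is None:
        return "keep"
    # Safe mode refuses to destroy anything that looks like real formatting.
    if safe_mode and (disk.format.type is not None
                      or disk.format.name != blank_format_name):
        return "fail"                  # real code: raise BlivetAnsibleError(...)
    return "wipe"                      # real code: devicetree.recursive_remove(disk)

# A disk still holding an old LVM PV is wiped, unless safe mode is on.
pv_disk = FakeDisk(name="sda", isleaf=False, format=FakeFormat("lvmpv", "lvmpv"))
assert describe_action(pv_disk, safe_mode=False, blank_format_name="blank") == "wipe"
assert describe_action(pv_disk, safe_mode=True, blank_format_name="blank") == "fail"
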
105 changes: 105 additions & 0 deletions tests/tests_raid_volume_cleanup.yml
@@ -0,0 +1,105 @@
---
- hosts: all
  become: true
  vars:
    storage_safe_mode: false
    storage_use_partitions: true
    mount_location1: '/opt/test1'
    mount_location2: '/opt/test2'
    volume1_size: '5g'
    volume2_size: '4g'

  tasks:
    - include_role:
        name: linux-system-roles.storage

    - name: Mark tasks to be skipped
      set_fact:
        storage_skip_checks:
          - blivet_available
          - packages_installed
          - service_facts

    - include_tasks: get_unused_disk.yml
      vars:
        max_return: 3
        disks_needed: 3

    - name: Create two LVM logical volumes under volume group 'foo'
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"
              - name: test2
                size: "{{ volume2_size }}"
                mount_point: "{{ mount_location2 }}"

    - name: Enable safe mode
      set_fact:
        storage_safe_mode: true

    - name: Try to overwrite existing device with raid volume and safe mode on (expect failure)
      block:
        - name: Create a RAID0 device mounted on "{{ mount_location1 }}"
          include_role:
            name: linux-system-roles.storage
          vars:
            storage_volumes:
              - name: test1
                type: raid
                raid_level: "raid1"
                raid_device_count: 2
                raid_spare_count: 1
                disks: "{{ unused_disks }}"
                mount_point: "{{ mount_location1 }}"
                state: present

        - name: unreachable task
          fail:
            msg: UNREACH

      rescue:
        - name: Check that we failed in the role
          assert:
            that:
              - ansible_failed_result.msg != 'UNREACH'
            msg: "Role has not failed when it should have"

    - name: Disable safe mode
      set_fact:
        storage_safe_mode: false

    - name: Create a RAID0 device mounted on "{{ mount_location1 }}"
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: raid
            raid_level: "raid1"
            raid_device_count: 2
            raid_spare_count: 1
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: present

    - name: Cleanup - remove the disk device created above
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: raid
            raid_level: "raid1"
            raid_device_count: 2
            raid_spare_count: 1
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: absent