
centos2alma process has failed. Error: Failed: update modern mariadb. #313

Open · bug

osprinzl opened this issue on Jul 24, 2024 · 7 comments

Describe the bug
I tried to upgrade a CentOS 7 system to AlmaLinux 8. The system has a custom MariaDB installation (10.4). As far as I understand, this gets reinstalled with the CentOS/AlmaLinux 8 MariaDB packages. There is a problem in the update/reinstall routine: it produces a yum error and breaks the rest of the installation:

2024-07-24 02:49:52,497 - INFO - Running: ['/usr/bin/yum', 'install', '--repo', 'alma-alma-mariadb', '-y', 'MariaDB-client', 'MariaDB-server']. Output:
2024-07-24 02:49:52,662 - INFO - Loaded plugins: fastestmirror
2024-07-24 02:49:52,666 - INFO - Command line error: no such option: --repo
2024-07-24 02:49:52,666 - INFO - Usage: yum [options] COMMAND
...
2024-07-24 02:49:52,679 - ERROR - Command ['/usr/bin/yum', 'install', '--repo', 'alma-alma-mariadb', '-y', 'MariaDB-client', 'MariaDB-server'] failed with return code 1
2024-07-24 02:49:52,679 - ERROR - Failed: update modern mariadb. The reason: Command '['/usr/bin/yum', 'install', '--repo', 'alma-alma-mariadb', '-y', 'MariaDB-client', 'MariaDB-server']' returned non-zero exit status 1.

Could this be fixed?

The upgrade routine removes the following RPM packages:

- MariaDB-server
- MariaDB-common
- MariaDB-compat
- MariaDB-client

But it needs to remove galera-4-26.4.18-1.el7.centos.x86_64 as well!
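
A manual check and cleanup could look roughly like this (the galera package name and version may differ on other systems):

```bash
# Check whether a galera package from the old MariaDB repository is still installed
rpm -qa | grep -i galera
# If it shows up, remove it before re-running the conversion, for example:
# yum remove galera-4
```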

Kernel: 3.10.0-1160.119.1.el7.x86_64

Feedback archive
centos2alma_feedback.zip

osprinzl added the bug label on Jul 24, 2024
osprinzl (Author) commented on Jul 24, 2024

The option `--repo` is not known in yum. This should be `--enablerepo=`

SandakovMM (Collaborator) commented on Jul 25, 2024

> The option `--repo` is not known in yum. This should be `--enablerepo=`

The --repo option is meant for AlmaLinux 8, where yum is an alias of the dnf utility, which supports the --repo argument. It appears that, for some reason, centos2alma resume was initiated inside CentOS 7, which is unexpected.
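
For illustration, the same installation step expressed for both package managers (the repository id alma-alma-mariadb is taken from the log above):

```bash
# AlmaLinux 8: yum is an alias of dnf, which understands --repo
yum install --repo alma-alma-mariadb -y MariaDB-client MariaDB-server

# CentOS 7: classic yum has no --repo; the closest equivalent is
yum install --disablerepo='*' --enablerepo=alma-alma-mariadb -y MariaDB-client MariaDB-server
```

So the error itself only shows that the command ran on the old CentOS 7 yum, not where the conversion actually went wrong.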

It seems that CentOS 7 was not converted to AlmaLinux 8 in the temporary container created by leapp. To determine exactly what happened, the processes inside the container need to be checked via console. Can you access these logs?

SandakovMM (Collaborator) commented

I suspect that your system did not reboot into the temporary container that ELevate uses to install the new packages. This usually occurs when there is an unexpected issue with the grub configuration. We encountered a similar issue to the one described in #224.

Could you confirm whether you are using an EFI-based system? Alternatively, please provide the following information:

  1. `ls -la /boot/grub2`
  2. `cat /boot/grub2/grub.cfg`
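
Listing the menu entries alone should already show whether the upgrade boot entry was added, for example:

```bash
# List the boot entries defined in the BIOS grub configuration
grep '^menuentry' /boot/grub2/grub.cfg
```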

osprinzl (Author) commented on Oct 8, 2024

First off: this issue is no longer a high priority for me; I am running CentOS 7 for another year with TuxCare support.
The system with this error is not an EFI system; it boots in standard BIOS mode:

```
$ ls -l /sys/firmware/
total 0
drwxr-xr-x.  5 root root 0 Oct  8 11:08 acpi
drwxr-xr-x.  4 root root 0 Oct  8 11:08 dmi
drwxr-xr-x. 12 root root 0 Oct  8 11:08 memmap
drwxr-xr-x.  2 root root 0 Oct  8 11:08 qemu_fw_cfg
```
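
For completeness: the presence of /sys/firmware/efi is the usual indicator of an EFI boot, so a quick check like this confirms legacy BIOS mode:

```bash
# /sys/firmware/efi only exists when the kernel was booted via UEFI
if [ -d /sys/firmware/efi ]; then echo "EFI boot"; else echo "legacy BIOS boot"; fi
```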

MegaS0ra commented on Nov 19, 2024

Hi,
I had the same error, but as pointed out by @SandakovMM, it is not the root cause of the problem (and I guess the underlying cause can be different for everyone). I found the "real" error (in my case, at least) at the very beginning of the file /var/log/leapp/leapp-report.txt; the relevant lines were:

CalledProcessError: Command ['/bin/mount', '-a'] failed with exit code 32.
Actor remove_upgrade_boot_entry unexpectedly terminated with exit code: 1

This was caused by a line in /etc/fstab referring to an unavailable partition (which was not needed anyway): the leapp process tried to mount this partition before starting the conversion, failed to do so, and stopped the conversion right there.

On the production server, `mount -a` gave me this error:

mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error

So in my case, fixing /etc/fstab solved this issue.

If you end up here, I can only advise checking the beginning of the /var/log/leapp/leapp-report.txt file :)
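
The checks described above boil down to roughly this (default leapp log location assumed):

```bash
head -n 40 /var/log/leapp/leapp-report.txt   # the failing actor is usually named near the top
mount -a                                     # reproduce the mount failure outside of leapp
grep -v '^[[:space:]]*#' /etc/fstab          # review the active fstab entries
```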

SandakovMM (Collaborator) commented

Thank you, @MegaS0ra, for the investigation.
At the very least, I can add a pre-checker to ensure that /etc/fstab is correct before beginning the conversion.
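
Something along these lines (a sketch only, not the actual centos2alma code) would already catch this class of problem before the conversion starts:

```bash
#!/bin/sh
# Pre-conversion sketch: refuse to start if the entries in /etc/fstab cannot all be mounted
if ! mount -a; then
    echo "Pre-check failed: 'mount -a' returned an error; fix /etc/fstab before converting." >&2
    exit 1
fi
```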

MegaS0ra commented

Thanks @SandakovMM, I guess a pre-check would help :)

I should also let you know that the entry in my fstab was not mandatory for the system to work, as it contained the "nofail" option:
/var/tmpDIR /tmp ext3 loop,nosuid,nofail,noexec,rw 0 0
