
set-ffi-env-e2e aborts when executing tests manually with tmt #521

Closed
pengshanyu opened this issue Aug 28, 2024 · 16 comments · Fixed by #537

@pengshanyu

When running the FFI tests against a local VM (provisioned via the c9s cloud image), the set-ffi-env-e2e script runs into an error:

        + '[' -z 'Starting setup' ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] Starting setup'
        + info_message ==============================
        + '[' -z ============================== ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] =============================='
        + '[' 0 -ne 0 ']'
        + echo
        + info_message 'Checking if QM already installed'
        + '[' -z 'Checking if QM already installed' ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] Checking if QM already installed'
        + info_message ==============================
        + '[' -z ============================== ']'
        + BLUE='\033[94m'
        + ENDCOLOR='\033[0m'
        + echo -e '[ \033[94mINFO\033[0m  ] =============================='
        ++ systemctl is-enabled qm
        + QM_STATUS='Failed to get unit file state for qm.service: No such file or directory' 

Yarboa commented Aug 28, 2024

Thanks @pengshanyu

We need to add an `rpm -q qm` verification before the systemd checks.


dougsland commented Aug 28, 2024

@Yarboa we can work with @pengshanyu on other issues that involve more complex scenarios, as she is already onboard the project. Let's keep onboarding nsednev. nsednev, could you please investigate?

Yarboa added a commit to Yarboa/qm that referenced this issue Aug 28, 2024
Fix containers#521

Signed-off-by: Yariv Rachmani <yrachman@redhat.com>

Yarboa commented Aug 28, 2024

@nsednev please check this fix; if you want to take it, assign yourself to the issue.

@@ -276,8 +276,9 @@ fi
 echo
 info_message "Checking if QM already installed"
 info_message "=============================="
+QM_INST="$(rpm -q qm)"
 QM_STATUS="$(systemctl is-enabled qm 2>&1)"
-if [ "$QM_STATUS" == "generated" ]; then
+if [[ -n "$QM_INST" && "$QM_STATUS" == "generated" ]]; then
    if [ "$(systemctl is-active qm)" == "active" ]; then
        info_message "QM Enabled and Active"
        info_message "=============================="
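A caveat on this diff: `rpm -q qm` exits non-zero and prints "package qm is not installed" on stdout when the package is absent, so (assuming the script runs with `set -e`, which the abort right after the QM_STATUS assignment suggests) the unguarded assignment would itself abort the script, and QM_INST would be non-empty even on a clean host. A hedged sketch of a guarded variant, not the script's final code:

```shell
#!/usr/bin/env bash
# Sketch only: probe for the qm package without aborting under errexit.
set -euo pipefail

# `rpm -q qm` exits non-zero when the package is absent; `|| true`
# keeps `set -e` from terminating the script on that path.
QM_INST="$(rpm -q qm 2>/dev/null || true)"
# rpm prints "package qm is not installed" on stdout, so normalize
# that message to an empty string before testing with -n.
if [[ "$QM_INST" == package*not*installed ]]; then
    QM_INST=""
fi

# `systemctl is-enabled` also exits non-zero for unknown units.
QM_STATUS="$(systemctl is-enabled qm 2>&1 || true)"

if [[ -n "$QM_INST" && "$QM_STATUS" == "generated" ]]; then
    echo "QM installed and unit generated"
else
    echo "QM not installed or unit not generated"
fi
```

With these guards, both variables are always defined afterwards, so a later reference cannot trip `set -u` either.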

@dougsland

nsednev, here is another suggestion (never tested); however, the less complicated the code, the better for us to maintain and the easier for others to join us.

  • I would remove this part of the code and convert it into an external tool, "check-qm-status"
  • call it from the script

Why?
It is easier to maintain the separate logic in a single place, and it keeps our brains "safe".

@Yarboa Is this what you shared yesterday?

#!/bin/bash

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Function to check QM package and service status
check_qm_status() {
    # Check if the 'qm' package is installed via RPM
    if ! rpm -q qm > /dev/null 2>&1; then
        info_message "QM package is not installed."
        info_message "=============================="
        return 1
    fi

    # Check the status of the 'qm' service
    local qm_status
    qm_status="$(systemctl is-enabled qm 2>&1)"
    if [ "$qm_status" == "generated" ]; then
        if [ "$(systemctl is-active qm)" == "active" ]; then
            info_message "QM Enabled and Active"
            info_message "=============================="
            return 0
        fi
        if [ -d /var/qm ] && [ -d /etc/qm ]; then
            info_message "QM Enabled and not Active"
            info_message "=============================="
            return 1
        fi
    fi

    # Check if the system is booted with OSTree
    if stat /run/ostree-booted > /dev/null 2>&1; then
        info_message "Warning: script cannot run on ostree image"
        info_message "=============================="
        return 0
    fi

    # If none of the above conditions were met
    info_message "QM service status is unclear."
    info_message "=============================="
    return 1
}

Calling it in the set-ffi-env-e2e:

check_qm_status


Yarboa commented Aug 28, 2024

@dougsland sure.
Once it is working, it will be rearranged as you suggest 👍


nsednev commented Aug 28, 2024

Might be related to this issue ("QM_STATUS='Failed to get unit file state for qm.service: No such file or directory'"):
we're receiving some errors now from the TC:

    script:
        cd tests/e2e
        ./set-ffi-env-e2e "${FFI_SETUP_OPTIONS}"
    fail: Command '/var/ARTIFACTS/work-ffiiaeheny1/plans/e2e/ffi/tree/tmt-prepare-wrapper.sh-Set-QM-env-default-0' returned 1.
finish

        summary: 0 tasks completed

plan failed

The exception was caused by 1 earlier exceptions

Cause number 1:

prepare step failed

The exception was caused by 1 earlier exceptions

Cause number 1:

    Command '/var/ARTIFACTS/work-ffiiaeheny1/plans/e2e/ffi/tree/tmt-prepare-wrapper.sh-Set-QM-env-default-0' returned 1.

    stdout (5 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    [ INFO  ] Starting setup
    [ INFO  ] ==============================

    [ INFO  ] Checking if QM already installed
    [ INFO  ] ==============================
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    stderr (77 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.
.
.
+ ENDCOLOR='\033[0m'
+ echo -e '[ \033[94mINFO\033[0m ] Starting setup'
+ info_message ==============================
+ '[' -z ============================== ']'
+ BLUE='\033[94m'
+ ENDCOLOR='\033[0m'
+ echo -e '[ \033[94mINFO\033[0m ] =============================='
+ '[' 0 -ne 0 ']'
+ echo
+ info_message 'Checking if QM already installed'
+ '[' -z 'Checking if QM already installed' ']'
+ BLUE='\033[94m'
+ ENDCOLOR='\033[0m'
+ echo -e '[ \033[94mINFO\033[0m ] Checking if QM already installed'
+ info_message ==============================
+ '[' -z ============================== ']'
+ BLUE='\033[94m'
+ ENDCOLOR='\033[0m'
+ echo -e '[ \033[94mINFO\033[0m ] =============================='
++ systemctl is-enabled qm
+ QM_STATUS='Failed to get unit file state for qm.service: No such file or directory'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

nsednev self-assigned this Aug 28, 2024

Yarboa commented Aug 29, 2024

@nsednev as mentioned in Slack, this is about the manual c9s run, not Packit:
there is no qm installed at all; we just need to add this check.
See the +/- signs relative to main:

#521 (comment)


nsednev commented Sep 2, 2024

@Yarboa

@nsednev please check this fix, if you want to take it, assign your self to issue

@@ -276,8 +276,9 @@ fi
 echo
 info_message "Checking if QM already installed"
 info_message "=============================="
+QM_INST="$(rpm -qa qm)"
 QM_STATUS="$(systemctl is-enabled qm 2>&1)"
-if [ "$QM_STATUS" == "generated" ]; then
+if [[ -n "$QM_INST" && "$QM_STATUS" == "generated" ]]; then
    if [ "$(systemctl is-active qm)" == "active" ]; then
        info_message "QM Enabled and Active"
        info_message "=============================="

After your suggested changes, inside

if [ "$(systemctl is-active qm)" == "active" ]; then

I only see the info messages and that's it:

info_message "QM Enabled and Active"
info_message "=============================="

But in the current code, under that check I see:

# Restart QM after mount /var on separate partition
if grep -qi "${QC_SOC}" "${SOC_DISTRO_FILE}"; then
    systemctl restart qm
fi

and only after that do the info messages appear.

So my question is about the QM restart part: in your #521 (comment) it is missing.
Don't we want to preserve it?
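(For reference, one way to keep that restart while still extracting the check into its own tool would be a small helper along these lines. This is a sketch only; `QC_SOC` and `SOC_DISTRO_FILE` are assumed to be the variables set-ffi-env-e2e already exports, and `restart_qm_on_soc` is a hypothetical name.)

```shell
# Sketch: keep the SoC-specific QM restart as its own helper so a
# consolidated check-qm-status tool can still call it.
QC_SOC="${QC_SOC:-SA8775P}"
SOC_DISTRO_FILE="${SOC_DISTRO_FILE:-/sys/devices/soc0/machine}"

restart_qm_on_soc() {
    # Restart QM after /var is mounted on a separate partition, but only
    # when the machine file identifies the Qualcomm SoC.
    if [ -r "$SOC_DISTRO_FILE" ] && grep -qi "${QC_SOC}" "${SOC_DISTRO_FILE}"; then
        systemctl restart qm
    fi
}
```

A consolidated check could then call `restart_qm_on_soc` after confirming the service is active, keeping the SoC-specific behavior in one place.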


nsednev commented Sep 2, 2024

    [ INFO  ] Starting setup
    [ INFO  ] ==============================

    [ INFO  ] Check if qm requires additional partition
    [ INFO  ] ==============================

    [ INFO  ] Checking if QM already installed
    [ INFO  ] ==============================
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    stderr (103 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ++ date +%s
    + START_TIME=1725291227
    +++ dirname -- ./set-ffi-env-e2e
    ++ cd -- .
    ++ pwd
    + SCRIPT_DIR=/var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/utils
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/container
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/systemd
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/tests
    ++ NODES_FOR_TESTING_ARR='control qm-node1'
    ++ readarray -d ' ' -t NODES_FOR_TESTING
    ++ CONTROL_CONTAINER_NAME=control
    ++ WAIT_BLUECHI_AGENT_CONNECT=5
    + source /var/ARTIFACTS/work-tier-0o98m9ja7/plans/e2e/tier-0/tree/tests/e2e/lib/diskutils
    + export CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + export REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + export WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + export CONTROL_CONTAINER_NAME=control
    + CONTROL_CONTAINER_NAME=control
    + NODES_FOR_TESTING=('control' 'node1')
    + export NODES_FOR_TESTING
    + export IP_CONTROL_MACHINE=
    + IP_CONTROL_MACHINE=
    + export CONTAINER_CAP_ADD=
    + CONTAINER_CAP_ADD=
    + export ARCH=
    + ARCH=
    + export DISK=
    + DISK=
    + export PART_ID=
    + PART_ID=
    + export QC_SOC=SA8775P
    + QC_SOC=SA8775P
    + export SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + export QC_SOC_DISK=sde
    + QC_SOC_DISK=sde
    + export BUILD_BLUECHI_FROM_GH_URL=
    + BUILD_BLUECHI_FROM_GH_URL=
    + export QM_GH_URL=
    + QM_GH_URL=
    + export BRANCH_QM=
    + BRANCH_QM=
    + export SET_QM_PART=
    + SET_QM_PART=
    + export USE_QM_COPR=packit/containers-qm-532
    + USE_QM_COPR=packit/containers-qm-532
    + RED='\033[91m'
    + GRN='\033[92m'
    + CLR='\033[0m'
    + ARGUMENT_LIST=("qm-setup-from-gh-url" "branch-qm" "set-qm-disk-part" "use-qm-copr")
    +++ printf help,%s:, qm-setup-from-gh-url branch-qm set-qm-disk-part use-qm-copr
    +++ basename ./set-ffi-env-e2e
    ++ getopt --longoptions help,qm-setup-from-gh-url:,help,branch-qm:,help,set-qm-disk-part:,help,use-qm-copr:, --name set-ffi-env-e2e --options '' -- none
    + opts=' -- '\''none'\'''
    + eval set '-- -- '\''none'\'''
    ++ set -- -- none
    + '[' 2 -gt 0 ']'
    + case "$1" in
    + break
    + info_message 'Starting setup'
    + '[' -z 'Starting setup' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Starting setup'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' 0 -ne 0 ']'
    + stat /run/ostree-booted
    + echo
    + info_message 'Check if qm requires additional partition'
    + '[' -z 'Check if qm requires additional partition' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Check if qm requires additional partition'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' -n '' ']'
    + echo
    + info_message 'Checking if QM already installed'
    + '[' -z 'Checking if QM already installed' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Checking if QM already installed'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    ++ rpm -qa qm
    + QM_INST=qm-0.6.5-1.20240902150444282009.pr532.68.gf27cba2.el9.noarch
    + [[ -n qm-0.6.5-1.20240902150444282009.pr532.68.gf27cba2.el9.noarch ]]
    ./set-ffi-env-e2e: line 267: QM_STATUS: unbound variable
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

---^---^---^---^---^---
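The `QM_STATUS: unbound variable` abort above is characteristic of bash's `set -u` (nounset): line 267 of the script reads QM_STATUS on a path where it was never assigned, presumably because the assignment was moved or dropped in this revision. A minimal repro and fix sketch (not the script's actual code):

```shell
#!/usr/bin/env bash
# Under `set -u`, reading a variable that was never assigned aborts
# with "unbound variable". A ${var:-default} expansion is legal even
# when the variable is unset, so it works as a safe initializer.
set -u

QM_STATUS="${QM_STATUS:-}"

if [ -z "$QM_STATUS" ]; then
    echo "QM_STATUS defaulted to empty"
fi
```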


Yarboa commented Sep 3, 2024

@nsednev How did you run tmt?
Can you share it?


nsednev commented Sep 3, 2024

I received this from when I hadn't yet wiped out the QM restart part from the code.
It's taken from the testing-farm:centos-stream-9-x86_64:e2e-ffi check from #532.
Now that I have finalized the code in the PR, I see these:
WARN[0010] StopSignal SIGTERM failed to stop container ffi-qm in 10 seconds, resorting to SIGKILL
Deleted: e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
Trying to pull quay.io/centos-sig-automotive/ffi-tools:latest...
Getting image source signatures
Copying blob sha256:80c27f0a59c1ae0fb0437fc08ff5721fe093c488d6f5c5059745b3e57991f775
Copying blob sha256:ea58703461dace3304e355ec5c1ab4d72976ed546f2b896b8546c737ddc4c5b0
Copying config sha256:e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
Writing manifest to image destination
e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
Getting image source signatures
Copying blob sha256:d4cf3585b76c558f542b352c16e3df670a7ac4c4d655a7d618171a1e07a4e399
Copying blob sha256:c8fb351d6683cb7200fd6db901d7a33a67a1e4a52c7ed5b54135ab330bc24c90
Copying config sha256:e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
Writing manifest to image destination
Untagged: quay.io/centos-sig-automotive/ffi-tools:latest
Deleted: e794e15abb35fd558e3afa25fc8c751d7e3418a016b8b0fbee4ebdc4cf2e6a57
Getting image source signatures
Writing manifest to image destination
Error: OCI runtime error: crun: the requested cgroup controller pids is not available
[ INFO ] PASS: qm.container oom_score_adj value == 500
./test.sh: line 39: [: cat: /proc/0/oom_score_adj: No such file or directory: integer expression expected
[ INFO ] FAIL: qm containers oom_score_adj != 750. Current value is cat: /proc/0/oom_score_adj: No such file or directory
Shared connection to 3.15.160.149 closed.

Its available from here:
https://artifacts.dev.testing-farm.io/38432c5c-1e7d-470e-a737-02109ab30c6e/

@dougsland

Should be fixed soon via: #531

@dougsland

I received this from when I didn't wiped out the QM restart part from the code. [...] Its available from here: https://artifacts.dev.testing-farm.io/38432c5c-1e7d-470e-a737-02109ab30c6e/

solved.


nsednev commented Sep 4, 2024

I still see these while running against the testing-farm check:
WARN[0010] StopSignal SIGTERM failed to stop container ffi-qm in 10 seconds, resorting to SIGKILL
Deleted: 2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
Trying to pull quay.io/centos-sig-automotive/ffi-tools:latest...
Getting image source signatures
Copying blob sha256:364b7f4a78417c35ed3d5f4785cbe2b34f4f1f552a7e01655c1c9c5f7b6e5f61
Copying blob sha256:4ca947be8ae2828258086eb666acaac2516cdbca60a8107cb6badb276a65e981
Copying config sha256:2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
Writing manifest to image destination
2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
Getting image source signatures
Copying blob sha256:7555554ffea12f2e51f0dcf41e89523f58f790697442872907f9c2b6955e9ea2
Copying blob sha256:313b79904146885ddf6ce5104fc71cc7e081bfec070a48e3618fac00b6671127
Copying config sha256:2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
Writing manifest to image destination
Untagged: quay.io/centos-sig-automotive/ffi-tools:latest
Deleted: 2477d71ac8f1ce834221178bbe0a3526ef93dbc7d89518d6a6ce757cb8e2ca39
Getting image source signatures
Writing manifest to image destination
Error: OCI runtime error: crun: the requested cgroup controller pids is not available
Retrieved QM_PID: 26154
Retrieved QM_FFI_PID: 0
Retrieved QM_OOM_SCORE_ADJ: '500'
Retrieved QM_FFI_OOM_SCORE_ADJ: '/bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory
'
PASS: qm.container oom_score_adj value == 500
./test.sh: line 91: [[: /bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory
: syntax error: operand expected (error token is "/bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory
")
FAIL: qm containers oom_score_adj != 750. Current value is '/bin/bash: line 1: /proc/0/oom_score_adj: No such file or directory
'
Shared connection to 18.117.235.104 closed.


nsednev commented Sep 5, 2024

I tested the code against a local VM running:
CentOS Stream release 9
Linux ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com 5.14.0-503.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Aug 22 17:03:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
I took it from CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2.

I checked that podman was not installed on the VM before testing, then I ran tmt like so:
tmt -c distro=centos-stream-9 run -a provision --how connect -u root -p ${PASSWORD} -P ${PORT} -g localhost plans -n /plans/e2e/tier-0

The result was:

    stdout (8/8 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    [ INFO  ] Starting setup
    [ INFO  ] ==============================

    [ INFO  ] Check if qm requires additional partition
    [ INFO  ] ==============================

    [ INFO  ] Checking if QM already installed
    [ INFO  ] ==============================
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    stderr (100/103 lines)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ++ cd -- .
    ++ pwd
    + SCRIPT_DIR=/var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/utils
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/container
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/systemd
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/tests
    ++ NODES_FOR_TESTING_ARR='control qm-node1'
    ++ readarray -d ' ' -t NODES_FOR_TESTING
    ++ CONTROL_CONTAINER_NAME=control
    ++ WAIT_BLUECHI_AGENT_CONNECT=5
    + source /var/tmp/tmt/run-001/plans/e2e/tier-0/tree/tests/e2e/lib/diskutils
    + export CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + export REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + export WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + export CONTROL_CONTAINER_NAME=control
    + CONTROL_CONTAINER_NAME=control
    + NODES_FOR_TESTING=('control' 'node1')
    + export NODES_FOR_TESTING
    + export IP_CONTROL_MACHINE=
    + IP_CONTROL_MACHINE=
    + export CONTAINER_CAP_ADD=
    + CONTAINER_CAP_ADD=
    + export ARCH=
    + ARCH=
    + export DISK=
    + DISK=
    + export PART_ID=
    + PART_ID=
    + export QC_SOC=SA8775P
    + QC_SOC=SA8775P
    + export SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + export QC_SOC_DISK=sde
    + QC_SOC_DISK=sde
    + export BUILD_BLUECHI_FROM_GH_URL=
    + BUILD_BLUECHI_FROM_GH_URL=
    + export QM_GH_URL=
    + QM_GH_URL=
    + export BRANCH_QM=
    + BRANCH_QM=
    + export SET_QM_PART=
    + SET_QM_PART=
    + export USE_QM_COPR=rhcontainerbot/qm
    + USE_QM_COPR=rhcontainerbot/qm
    + RED='\033[91m'
    + GRN='\033[92m'
    + CLR='\033[0m'
    + ARGUMENT_LIST=("qm-setup-from-gh-url" "branch-qm" "set-qm-disk-part" "use-qm-copr")
    +++ printf help,%s:, qm-setup-from-gh-url branch-qm set-qm-disk-part use-qm-copr
    +++ basename ./set-ffi-env-e2e
    ++ getopt --longoptions help,qm-setup-from-gh-url:,help,branch-qm:,help,set-qm-disk-part:,help,use-qm-copr:, --name set-ffi-env-e2e --options '' -- none
    + opts=' -- '\''none'\'''
    + eval set '-- -- '\''none'\'''
    ++ set -- -- none
    + '[' 2 -gt 0 ']'
    + case "$1" in
    + break
    + info_message 'Starting setup'
    + '[' -z 'Starting setup' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Starting setup'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' 0 -ne 0 ']'
    + stat /run/ostree-booted
    + echo
    + info_message 'Check if qm requires additional partition'
    + '[' -z 'Check if qm requires additional partition' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Check if qm requires additional partition'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' -n '' ']'
    + echo
    + info_message 'Checking if QM already installed'
    + '[' -z 'Checking if QM already installed' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Checking if QM already installed'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    ++ rpm -qa qm
    + QM_INST=
    ++ systemctl is-enabled qm
    + QM_STATUS='Failed to get unit file state for qm.service: No such file or directory'
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On the VM I checked that it had installed podman-5.2.2-1.el9.x86_64 and had not run QM at all:
[root@ibm-p8-kvm-03-guest-02 ~]# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@ibm-p8-kvm-03-guest-02 ~]#


nsednev commented Sep 5, 2024

Tested on distro: CentOS Stream 9:
Linux ibm-p8-kvm-03-guest-02.virt.pnr.lab.eng.rdu2.redhat.com 5.14.0-503.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Aug 22 17:03:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2

Neither podman nor qm is installed on the clean, fresh OS:
[root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa podman
[root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa qm
[root@ibm-p8-kvm-03-guest-02 ~]#

Running tmt tier-0 against VM:

    multihost name: default-0
    arch: x86_64
    distro: CentOS Stream 9

    summary: 1 guest provisioned
prepare
    queued push task #1: push to default-0
    
    push task #1: push to default-0

    queued prepare task #1: Install podman on default-0
    queued prepare task #2: Set QM environment on default-0
    queued prepare task #3: requires on default-0
    
    prepare task #1: Install podman on default-0
    how: install
    name: Install podman
    package: podman
    
    prepare task #2: Set QM environment on default-0
    how: shell
    name: Set QM environment
    overview: 1 script found
    
    prepare task #3: requires on default-0
    how: install
    summary: Install required packages
    name: requires
    where: default-0
    package: /usr/bin/flock

    queued pull task #1: pull from default-0
    
    pull task #1: pull from default-0

    summary: 3 preparations applied
execute
    queued execute task #1: default-0 on default-0
    
    execute task #1: default-0 on default-0
    how: tmt
    progress:                                                           

    summary: 6 tests executed
report
    how: junit
    output: /var/tmp/tmt/run-010/plans/e2e/tier-0/report/default-0/junit.xml
    summary: 6 tests passed
finish

    summary: 0 tasks completed

total: 6 tests passed

On the VM I see:
[root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa podman
podman-5.2.2-1.el9.x86_64
[root@ibm-p8-kvm-03-guest-02 ~]# rpm -qa qm
qm-0.6.5-1.20240903182315916484.main.77.g64fc09a.el9.noarch

[root@ibm-p8-kvm-03-guest-02 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
717d6c2cfa7c /sbin/init 6 minutes ago Up 6 minutes qm
