ccruntime e2e test nightly - unstable #339
I managed to reproduce it. I created a VM with `kcli create vm -i ubuntu2204 -P memory=8G -P numcpus=4 -P disks=[50] e2e`, logged in with `kcli ssh e2e`, and ran:

```sh
git clone --depth=1 https://github.com/confidential-containers/operator
cd operator/tests/e2e
export PATH="$PATH:/usr/local/bin"
ansible-playbook -i localhost, -c local --tags untagged ansible/main.yml
sudo -E PATH="$PATH" bash -c './cluster/up.sh'
export KUBECONFIG=/etc/kubernetes/admin.conf
```

followed by a loop:
```sh
export "PATH=$PATH:/usr/local/bin"
export KUBECONFIG=/etc/kubernetes/admin.conf
UP=0
TEST=0
DOWN=0
I=0
while :; do
    echo "---< START ITERATION $I: $(date) >---" | tee -a job.log; SECONDS=0
    # distinct exit codes mark which stage failed
    sudo -E PATH="$PATH" timeout 25m bash -c './operator.sh' || { date; exit 1; }
    UP="$SECONDS"; SECONDS=0; echo "UP $(date) ($UP)" | tee -a job.log
    sudo -E PATH="$PATH" timeout 25m bash -c './tests_runner.sh -r kata-qemu' || { date; exit 2; }
    TEST="$SECONDS"; SECONDS=0; echo "TESTS $(date) ($TEST)" | tee -a job.log
    sudo -E PATH="$PATH" timeout 25m bash -c './operator.sh uninstall' || { date; exit 3; }
    DOWN="$SECONDS"; SECONDS=0; echo "DOWN $(date) ($DOWN)" | tee -a job.log
    echo -e "---< END ITERATION $I: $(date) ($UP\t$TEST\t$DOWN)\t[$((UP+TEST+DOWN))] >---" | tee -a job.log
    ((I+=1))
done
```

This eventually resulted in the left-behind labels. Interestingly, the operator stayed installed and the …
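For reference, a minimal sketch of how such leftovers can be spotted after an uninstall; the `katacontainers.io` label prefix is an assumption, not taken from the job logs:

```sh
# List node labels and filter for kata-related keys that the uninstall
# should have removed (label prefix is assumed; adjust as needed).
kubectl get nodes --show-labels | grep katacontainers.io
```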
ldoktor added a commit to ldoktor/coco-operator that referenced this issue on Jan 30, 2024:

> Recent issues in CI indicate that kubectl might sometimes fail, which results in wait_for_process interrupting the loop. Let's improve the command to ensure the kubectl command passed and only then grep for the (un)expected output. Note that the positive commands do not need this treatment, as the output should not contain the pod names on failure.
>
> Fixes: confidential-containers#339
> Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
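The gist of that change, as a hedged sketch (the namespace and pod name are illustrative placeholders, not copied from the actual wait_for_process helper):

```sh
# Fragile: if kubectl itself fails (e.g. a transient API error), grep sees
# empty input and the caller cannot tell "no such pod" from "kubectl broke".
kubectl get pods -n confidential-containers-system | grep cc-operator

# Sturdier: grep only after kubectl has succeeded, so a kubectl failure
# propagates instead of being mistaken for a missing pod.
out=$(kubectl get pods -n confidential-containers-system) && echo "$out" | grep cc-operator
```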
ldoktor added commits to ldoktor/coco-operator that referenced this issue on Jan 31, Feb 5, and Feb 16, 2024:

> The network in the CI environment tends to break from time to time; let's allow up to 3 retries for tasks that support it and that use external sources.
>
> Fixes: confidential-containers#339
> Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
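The referenced commits add the retries at the Ansible level; the same idea expressed in shell, as an illustrative sketch (the URL and destination path are placeholders):

```sh
# Retry a flaky download up to 3 times before giving up.
ok=0
for attempt in 1 2 3; do
    if curl -fsSL -o /tmp/artifact.tar.gz https://example.com/artifact.tar.gz; then
        ok=1; break
    fi
    echo "attempt $attempt failed, retrying in 5s..." >&2
    sleep 5
done
[ "$ok" -eq 1 ] || { echo "download failed after 3 attempts" >&2; exit 1; }
```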
ccruntime e2e test nightly jobs are pretty unstable; the latest 5 out of 9 runs failed. They aren't failing for the same reason. For example: … In another job: …