
Missing image is hard to debug #2265

Open
lentzi90 opened this issue Nov 21, 2024 · 2 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@lentzi90
Contributor

/kind bug

What steps did you take and what happened:

  1. Create a cluster using an image that doesn't exist.
  2. The OpenStackCluster becomes ready (provided no bastion is used, or the bastion uses an existing image).
  3. There is no visible progress and no explanation of what is wrong at the OpenStackMachine level. Worse, there is a misleading message that the OpenStackMachine is waiting for bootstrap data, which is not true.
  4. There is no explanation on the OpenStackServer; it is just "not ready".

What did you expect to happen:

There should be a condition on the OpenStackServer that propagates to the OpenStackMachine, with a message explaining that the image cannot be found. The OpenStackMachine should not claim to be waiting for bootstrap data when that data is already available.
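A minimal sketch of the kind of condition propagation being requested. This is not the actual CAPO implementation; all type, field, and condition names here (`ImageReady`, `ImageNotFound`, etc.) are illustrative assumptions, modeled loosely on Cluster API's condition conventions:

```go
package main

import "fmt"

// Condition is a simplified stand-in for a Cluster API-style status
// condition. Field names are illustrative, not the real CAPO types.
type Condition struct {
	Type    string
	Status  string // "True" or "False"
	Reason  string
	Message string
}

// OpenStackServer and OpenStackMachine are reduced to just their
// status conditions for this sketch.
type OpenStackServer struct {
	Conditions []Condition
}

type OpenStackMachine struct {
	Conditions []Condition
}

// propagateServerConditions mirrors failing server conditions onto the
// owning machine, so that inspecting the OpenStackMachine surfaces the
// root cause (e.g. a missing image) instead of a generic "not ready".
func propagateServerConditions(server *OpenStackServer, machine *OpenStackMachine) {
	for _, c := range server.Conditions {
		if c.Status == "False" {
			machine.Conditions = append(machine.Conditions, c)
		}
	}
}

func main() {
	server := &OpenStackServer{Conditions: []Condition{{
		Type:    "ImageReady",
		Status:  "False",
		Reason:  "ImageNotFound",
		Message: `image "my-image" could not be found`,
	}}}
	machine := &OpenStackMachine{}
	propagateServerConditions(server, machine)
	for _, c := range machine.Conditions {
		fmt.Printf("%s=%s: %s (%s)\n", c.Type, c.Status, c.Message, c.Reason)
	}
}
```

With something like this in place, the missing-image message would be visible directly on the OpenStackMachine rather than requiring the user to dig through the OpenStackServer or OpenStack API logs.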

Anything else you would like to add:

This issue is also present in v0.10; the only difference is that no OpenStackServer is involved there.

Environment:

  • Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built): v0.11.2
  • Cluster-API version: v1.8.5
  • OpenStack version:
  • Minikube/KIND version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Nov 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 19, 2025
@EmilienM
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 19, 2025