libvirt installer - openshift-console pods never start, installation fails #1443
Comments
I also see that only 1 out of 2 routers is running
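For anyone hitting the same symptom, a quick way to confirm how many router replicas are actually running (a sketch; the openshift-ingress namespace and the router-default deployment name are assumptions based on a default install):

# List the ingress router pods and the nodes they were scheduled on
oc get pods -n openshift-ingress -o wide

# Compare desired vs. available replicas on the router deployment
oc get deployment router-default -n openshift-ingress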
Having the same exact problem.
The solution was quite easy in my case: the current ingress deployment is configured with 2 router replica pods,
so you have to spin up at least two compute nodes (aka workers) to fulfill that requirement.
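A minimal sketch of applying that workaround by scaling the worker MachineSet (the MachineSet name is cluster-specific; the one shown below is a hypothetical example, look up the real name first):

# Find the worker MachineSet name for this cluster
oc get machinesets -n openshift-machine-api

# Scale it to two replicas so both router pods can be scheduled
# (replace lab-worker-0 with the name printed above)
oc scale machineset lab-worker-0 -n openshift-machine-api --replicas=2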
Regards,
/label platform/libvirt
I'm guessing this is no longer reproducible. /lifecycle stale
The console issue, which has a workaround, is tracked in #1007. /close
@zeenix: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Version
Platform (aws|libvirt):
libvirt
What happened?
Compiled the Go installer with the flag for libvirt and installed following the instructions. It seems to get nearly all the way there: it removes the bootstrap node and leaves one master and one worker, but the final part of the install times out ...
bin/openshift-install create cluster
? Platform libvirt
? Libvirt Connection URI qemu+tcp://192.168.122.1/system
? Base Domain adrians.laptop
? Cluster Name lab
? Pull Secret [? for help] ********
INFO Fetching OS image: rhcos-maipo-400.7.20190306.0-qemu.qcow2.gz
INFO Creating infrastructure resources...
INFO Waiting up to 30m0s for the Kubernetes API at https://api.lab.adrians.laptop:6443...
INFO API v1.12.4+915ac9d up
INFO Waiting up to 30m0s for the bootstrap-complete event...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.lab.adrians.laptop:6443 to initialize...
FATAL failed to initialize the cluster: Could not update servicemonitor "openshift-kube-scheduler-operator/kube-scheduler-operator" (303 of 310): the server does not recognize this resource, check extension API servers
The location it complains about (https://api.lab.adrians.laptop:6443) is reachable in my web browser
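In case it helps others debug the same failure, a couple of hedged diagnostic commands, assuming the default asset directory layout where openshift-install writes auth/kubeconfig (the ServiceMonitor CRD name is an assumption based on prometheus-operator defaults):

# Use the kubeconfig that the installer wrote into the asset directory
export KUBECONFIG=auth/kubeconfig

# Is the ServiceMonitor CRD registered yet?
oc get crd servicemonitors.monitoring.coreos.com

# Are any aggregated API services unavailable?
oc get apiservices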
What you expected to happen?
To have a working cluster
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?
While the installer does error out and no web UI is available, I was able to run the kubectl command to list pods. It looks like it's my openshift-console pods that have the issue
(see attachment)
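For reference, a sketch of the kind of command used to produce that listing (kubectl and oc are interchangeable here):

# List every pod in every namespace, including the failing openshift-console pods
kubectl get pods --all-namespaces -o wide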
References
Maybe #1397 is related
oc-get-all-pods.txt