
Use PSI for running OpenShift 3.x tests #4288

Closed
2 of 3 tasks
dharmit opened this issue Dec 4, 2020 · 35 comments
Labels: area/infra, area/testing, priority/High
@dharmit (Member) commented Dec 4, 2020

OpenShift 3.11 tests that previously ran on Travis need to run on the PSI infrastructure. This is mainly required to test the Service Catalog.

/area testing
/priority high

Work pending for Sprint 198:
The required scripts are pending review in PR #4406. Once merged, the corresponding Jenkins job will be enabled to complete the setup for the first run.

Acceptance criteria:

  • Trigger Jenkins job to run tests in Minishift
  • E2E and integration tests are executed
  • Tests are running correctly, meaning they are not flaking: if the test suite passes, repeated runs with the same code base pass as well.
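The no-flake criterion could be checked mechanically by rerunning the suite on the same commit. A minimal sketch, where `suite` is a placeholder for the real invocation (e.g. the `make test-cmd-service` target used later in this thread):

```shell
#!/bin/sh
# Flake check sketch: rerun the same suite several times on one commit.
# 'suite' is a placeholder; substitute the real command, e.g. make test-cmd-service.
suite() { true; }

for i in 1 2 3; do
  suite || { echo "run $i failed (flaky or broken)"; exit 1; }
done
echo "no flakes detected in 3 runs"
```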
@openshift-ci-robot added the area/testing and priority/High labels on Dec 4, 2020
@dharmit (Member Author) commented Dec 4, 2020

If we manage to address #4242 elegantly (that is, odo code becoming clearer and more maintainable w.r.t. supporting multiple service backends, Svc Cat and Operator Hub), we can simply run the Service Catalog tests on minikube as proposed in #4287. All we will then have to do is bring up Svc Cat on minikube.

IMO, that's what we should target. I don't see any other reason why we would want to test things on OCP 3.x. @kadel @girishramnani @mik-dass is there anything else we test on 3.x?

@dharmit added the area/infra label on Dec 17, 2020
@dharmit (Member Author) commented Jan 21, 2021

$ minishift openshift component add service-catalog
$ minishift openshift component add automation-broker
$ minishift openshift component add template-service-broker

This is ALL you need to enable Service Catalog on a Minishift VM.

@rnapoles-rh (Contributor) commented Jan 25, 2021

The following needs to be achieved:

  • Provision a VM on OpenStack using the releng-key public key
  • Install Docker
  • Install Minishift inside the created VM
  • Add service-catalog
  • Add automation-broker
  • Add template-service-broker
  • Run tests manually

Create the VM:
openstack server create --flavor m1.large --image Fedora-Cloud-Base-32 --nic net-id=provider_net_shared_3 --security-group default --security-group all-open --key-name releng-pub-key odo-fedora-minishift

ssh to the VM using the releng-key:
ssh -i ~/psi/PSI/lib/common/releng-key fedora@<ip_address>

Ran into ssh problems:

$ ssh -i ~/psi/PSI/lib/common/releng-key fedora@10.0.111.190
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: UNPROTECTED PRIVATE KEY FILE!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/Users/rnapoles@ca.ibm.com/psi/PSI/lib/common/releng-key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/Users/rnapoles@ca.ibm.com/psi/PSI/lib/common/releng-key": bad permissions
fedora@10.0.111.190's password:

Resolved by changing the permissions on the local lib/common/releng-key file from 0644 to 0600.
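The fix, shown here on a throwaway file since the real key path is environment-specific:

```shell
# Demonstrated on a dummy file; run the same chmod on the real key, e.g.
#   chmod 600 ~/psi/PSI/lib/common/releng-key
touch /tmp/releng-key-demo
chmod 600 /tmp/releng-key-demo
stat -c '%a' /tmp/releng-key-demo    # owner read/write only, so ssh accepts the key
```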

$ ssh -i ~/psi/PSI/lib/common/releng-key fedora@10.0.111.190
sudo -i
dnf install grubby
# boot with cgroups v1 (Fedora 32 defaults to v2, which this Docker/Minishift setup predates)
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
systemctl reboot
ssh -i ~/psi/PSI/lib/common/releng-key fedora@<ip_address>
sudo -i

Install Docker, following https://docs.docker.com/engine/install/fedora/:

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo

sudo dnf install docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
sudo systemctl start docker

[root@odo-fedora-minishift ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.2
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        2291f61
 Built:             Mon Dec 28 16:18:28 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.2
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8891c58
  Built:            Mon Dec 28 16:15:41 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Set up Minishift by following:
https://docs.okd.io/3.11/minishift/getting-started/setting-up-virtualization-environment.html#kvm-driver-fedora

sudo dnf install libvirt qemu-kvm
sudo usermod -a -G libvirt $(whoami)
newgrp libvirt

Install Minishift:

curl -Lo minishift.tgz https://github.com/minishift/minishift/releases/download/v1.34.3/minishift-1.34.3-linux-amd64.tgz
tar -xvzf minishift.tgz
sudo mv minishift-1.34.3-linux-amd64/minishift /usr/local/bin

@rnapoles-rh (Contributor) commented Jan 25, 2021

Error starting minishift:

$ minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... 
   Hit github rate limit: GET https://api.github.com/repos/openshift/origin/releases/tags/v3.11.0: 403 API rate limit exceeded for 66.187.232.127. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.) [rate reset in 58m18s]
FAIL

Resolved by creating a GitHub API token (a personal one in this case) and setting the MINISHIFT_GITHUB_API_TOKEN environment variable to it.
@mohammedzee1000 do we have a shared GitHub token to be used for PSI?
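The workaround in shell form (the token value below is a placeholder, not a real credential):

```shell
# Placeholder value; substitute a real GitHub personal access token.
export MINISHIFT_GITHUB_API_TOKEN="ghp_exampletoken123"
# minishift uses this variable for its api.github.com requests, which lifts
# the anonymous rate limit. Confirm it is set before running `minishift start`:
echo "${MINISHIFT_GITHUB_API_TOKEN:?token not set}"
```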

@rnapoles-rh (Contributor) commented:

minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.11.0' is supported ... OK
-- Checking if requested hypervisor 'kvm' is supported on this platform ... OK
-- Checking if KVM driver is installed ... 
   Driver is available at /usr/local/bin/docker-machine-driver-kvm ... 
   Checking driver binary is executable ... OK
-- Checking if Libvirt is installed ... OK
-- Checking if Libvirt default network is present ... FAIL
   See the 'Setting Up the Virtualization Environment' topic (https://docs.okd.io/latest/minishift/getting-started/setting-up-virtualization-environment.html) for more information

Looking into it.

@rnapoles-rh (Contributor) commented Jan 26, 2021

To set up the virtualization environment, following:
https://docs.okd.io/latest/minishift/getting-started/setting-up-virtualization-environment.html
and https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/

sudo dnf install @virtualization
sudo dnf group install --with-optional virtualization
sudo systemctl start libvirtd
sudo systemctl enable libvirtd      # To start the service on boot

@rnapoles-rh (Contributor) commented:

minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.11.0' is supported ... OK
-- Checking if requested hypervisor 'kvm' is supported on this platform ... OK
-- Checking if KVM driver is installed ... 
   Driver is available at /usr/local/bin/docker-machine-driver-kvm ... 
   Checking driver binary is executable ... OK
-- Checking if Libvirt is installed ... OK
-- Checking if Libvirt default network is present ... OK
-- Checking if Libvirt default network is active ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.11.0'
 53.89 MiB / 53.89 MiB [========================================] 100.00% 0s
-- Downloading OpenShift v3.11.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'kvm' hypervisor ...
-- Minishift VM will be configured with ...
   Memory:    4 GB
   vCPUs :    2
   Disk size: 20 GB

   Downloading ISO 'https://github.com/minishift/minishift-centos-iso/releases/download/v1.17.0/minishift-centos7.iso'
 375.00 MiB / 375.00 MiB [==============================================================================================================================] 100.00% 0s
-- Starting Minishift VM ................................................................. FAIL
E0126 14:31:30.949960   18432 start.go:499] Error starting the VM: Error creating the VM. Error creating machine: Error detecting OS: Too many retries waiting for SSH to be available.  Last error: Maximum number of retries (60) exceeded. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error detecting OS: Too many retries waiting for SSH to be available.  Last error: Maximum number of retries (60) exceeded

@rnapoles-rh (Contributor) commented:

sudo virsh net-start default
sudo virsh net-autostart default
minishift delete
rm -rf .minishift
minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.11.0' is supported ... OK
-- Checking if requested hypervisor 'kvm' is supported on this platform ... OK
-- Checking if KVM driver is installed ...
Driver is available at /usr/local/bin/docker-machine-driver-kvm ...
Checking driver binary is executable ... OK
-- Checking if Libvirt is installed ... OK
-- Checking if Libvirt default network is present ... OK
-- Checking if Libvirt default network is active ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.11.0'
53.89 MiB / 53.89 MiB [========================================] 100.00% 0s
-- Downloading OpenShift v3.11.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'kvm' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 4 GB
vCPUs : 2
Disk size: 20 GB

Downloading ISO 'https://github.com/minishift/minishift-centos-iso/releases/download/v1.17.0/minishift-centos7.iso'
375.00 MiB / 375.00 MiB [==============================================================================================================================] 100.00% 0s
-- Starting Minishift VM ..................... OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
-- Writing current configuration for static assignment of IP address ... WARN
Importing 'openshift/origin-control-plane:v3.11.0' CACHE MISS
Importing 'openshift/origin-docker-registry:v3.11.0' CACHE MISS
Importing 'openshift/origin-haproxy-router:v3.11.0' CACHE MISS
-- OpenShift cluster will be configured with ...
Version: v3.11.0
-- Pulling the OpenShift Container Image .
Error pulling the OpenShift container image: ssh command error:
command : docker pull openshift/origin-control-plane:v3.11.0
err : exit status 1
output : Trying to pull repository docker.io/openshift/origin-control-plane ...
toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

@rnapoles-rh (Contributor) commented Jan 26, 2021

As per https://www.docker.com/increase-rate-limits:

"On November 20, 2020, rate limits on anonymous and free authenticated use of Docker Hub went into effect. Anonymous and free Docker Hub users are limited to 100 and 200 container image pull requests per six hours, respectively.

If you are affected by these changes you will receive this error message:
ERROR: toomanyrequests: Too Many Requests.
or
You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits.

To increase your pull rate limits you can upgrade your account to a Docker Pro or Team subscription. Image requests exceeding these limits will be denied until the six hour window elapses."

@rnapoles-rh (Contributor) commented:

Trying again a bit later:

minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.11.0' is supported ... OK
-- Checking if requested hypervisor 'kvm' is supported on this platform ... OK
-- Checking if KVM driver is installed ... 
   Driver is available at /usr/local/bin/docker-machine-driver-kvm ... 
   Checking driver binary is executable ... OK
-- Checking if Libvirt is installed ... OK
-- Checking if Libvirt default network is present ... OK
-- Checking if Libvirt default network is active ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.11.0'
 53.89 MiB / 53.89 MiB [========================================] 100.00% 0s
-- Downloading OpenShift v3.11.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'kvm' hypervisor ...
-- Minishift VM will be configured with ...
   Memory:    4 GB
   vCPUs :    2
   Disk size: 20 GB

   Downloading ISO 'https://github.com/minishift/minishift-centos-iso/releases/download/v1.17.0/minishift-centos7.iso'
 375.00 MiB / 375.00 MiB [==============================================================================================================================] 100.00% 0s
-- Starting Minishift VM ..................... OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ... 
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ... 
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
-- Writing current configuration for static assignment of IP address ... WARN
   Importing 'openshift/origin-control-plane:v3.11.0' . CACHE MISS
   Importing 'openshift/origin-docker-registry:v3.11.0'  CACHE MISS
   Importing 'openshift/origin-haproxy-router:v3.11.0'  CACHE MISS
-- OpenShift cluster will be configured with ...
   Version: v3.11.0
-- Pulling the OpenShift Container Image ............ OK
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster .......................................................................................
Error during 'cluster up' execution: Error starting the cluster. ssh command error:
command : /var/lib/minishift/bin/oc cluster up --routing-suffix 192.168.42.199.nip.io --base-dir /var/lib/minishift/base --image 'openshift/origin-${component}:v3.11.0' --public-hostname 192.168.42.199
err     : exit status 1
output  : Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11.0 is available ...
Pulling image openshift/origin-cli:v3.11.0
E0126 17:21:52.567614    2177 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image docker.io/openshift/origin-cli:v3.11.0 anonymously
Image pull complete
Pulling image openshift/origin-node:v3.11.0
E0126 17:21:53.027758    2177 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image docker.io/openshift/origin-node:v3.11.0 anonymously
Pulled 5/6 layers, 93% complete
Pulled 6/6 layers, 100% complete
Extracting
Image pull complete
Checking type of volume mount ...
Determining server IP ...
Using public hostname IP 192.168.42.199 as the host IP
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11.0 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11.0 ...
I0126 17:22:04.912775    2177 config.go:40] Running "create-master-config"
I0126 17:22:07.981100    2177 config.go:46] Running "create-node-config"
I0126 17:22:08.970032    2177 flags.go:30] Running "create-kubelet-flags"
I0126 17:22:09.631298    2177 run_kubelet.go:49] Running "start-kubelet"
I0126 17:22:09.964747    2177 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I0126 17:22:39.993724    2177 interface.go:26] Installing "kube-proxy" ...
I0126 17:22:39.993830    2177 interface.go:26] Installing "kube-dns" ...
I0126 17:22:39.993848    2177 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I0126 17:22:39.993858    2177 interface.go:26] Installing "openshift-apiserver" ...
I0126 17:22:39.993940    2177 apply_template.go:81] Installing "openshift-apiserver"
I0126 17:22:39.997096    2177 apply_template.go:81] Installing "kube-dns"
I0126 17:22:39.997104    2177 apply_template.go:81] Installing "kube-proxy"
I0126 17:22:39.997673    2177 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I0126 17:22:46.318392    2177 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
I0126 17:24:51.362191    2177 run_self_hosted.go:242] openshift-apiserver available
I0126 17:24:51.363028    2177 interface.go:26] Installing "openshift-controller-manager" ...
I0126 17:24:51.363101    2177 apply_template.go:81] Installing "openshift-controller-manager"
I0126 17:24:56.663651    2177 interface.go:41] Finished installing "openshift-controller-manager"
Adding default OAuthClient redirect URIs ...
Adding web-console ...
Adding registry ...
Adding router ...
Adding persistent-volumes ...
Adding centos-imagestreams ...
Adding sample-templates ...
I0126 17:24:56.695192    2177 interface.go:26] Installing "openshift-web-console-operator" ...
I0126 17:24:56.695214    2177 interface.go:26] Installing "openshift-image-registry" ...
I0126 17:24:56.695224    2177 interface.go:26] Installing "openshift-router" ...
I0126 17:24:56.695231    2177 interface.go:26] Installing "persistent-volumes" ...
I0126 17:24:56.695240    2177 interface.go:26] Installing "centos-imagestreams" ...
I0126 17:24:56.695249    2177 interface.go:26] Installing "sample-templates" ...
I0126 17:24:56.695361    2177 interface.go:26] Installing "sample-templates/rails quickstart" ...
I0126 17:24:56.695370    2177 interface.go:26] Installing "sample-templates/mariadb" ...
I0126 17:24:56.695377    2177 interface.go:26] Installing "sample-templates/django quickstart" ...
I0126 17:24:56.695384    2177 interface.go:26] Installing "sample-templates/postgresql" ...
I0126 17:24:56.695391    2177 interface.go:26] Installing "sample-templates/cakephp quickstart" ...
I0126 17:24:56.695398    2177 interface.go:26] Installing "sample-templates/dancer quickstart" ...
I0126 17:24:56.695404    2177 interface.go:26] Installing "sample-templates/nodejs quickstart" ...
I0126 17:24:56.695413    2177 interface.go:26] Installing "sample-templates/jenkins pipeline ephemeral" ...
I0126 17:24:56.695419    2177 interface.go:26] Installing "sample-templates/sample pipeline" ...
I0126 17:24:56.695427    2177 interface.go:26] Installing "sample-templates/mongodb" ...
I0126 17:24:56.695433    2177 interface.go:26] Installing "sample-templates/mysql" ...
I0126 17:24:56.695483    2177 apply_list.go:67] Installing "sample-templates/mysql"
I0126 17:24:56.695945    2177 apply_template.go:81] Installing "openshift-web-console-operator"
I0126 17:24:56.697488    2177 apply_list.go:67] Installing "centos-imagestreams"
I0126 17:24:56.697707    2177 apply_list.go:67] Installing "sample-templates/rails quickstart"
I0126 17:24:56.697895    2177 apply_list.go:67] Installing "sample-templates/mariadb"
I0126 17:24:56.697999    2177 apply_list.go:67] Installing "sample-templates/django quickstart"
I0126 17:24:56.698095    2177 apply_list.go:67] Installing "sample-templates/postgresql"
I0126 17:24:56.698280    2177 apply_list.go:67] Installing "sample-templates/cakephp quickstart"
I0126 17:24:56.698382    2177 apply_list.go:67] Installing "sample-templates/dancer quickstart"
I0126 17:24:56.698477    2177 apply_list.go:67] Installing "sample-templates/nodejs quickstart"
I0126 17:24:56.698642    2177 apply_list.go:67] Installing "sample-templates/jenkins pipeline ephemeral"
I0126 17:24:56.698773    2177 apply_list.go:67] Installing "sample-templates/sample pipeline"
I0126 17:24:56.698881    2177 apply_list.go:67] Installing "sample-templates/mongodb"
I0126 17:25:20.261735    2177 interface.go:41] Finished installing "sample-templates/rails quickstart" "sample-templates/mariadb" "sample-templates/django quickstart" "sample-templates/postgresql" "sample-templates/cakephp quickstart" "sample-templates/dancer quickstart" "sample-templates/nodejs quickstart" "sample-templates/jenkins pipeline ephemeral" "sample-templates/sample pipeline" "sample-templates/mongodb" "sample-templates/mysql"
E0126 17:30:20.431540    2177 interface.go:34] Failed to install "openshift-web-console-operator": timed out waiting for the condition
I0126 17:30:20.432707    2177 interface.go:41] Finished installing "openshift-web-console-operator" "openshift-image-registry" "openshift-router" "persistent-volumes" "centos-imagestreams" "sample-templates"
Error: timed out waiting for the condition

[fedora@odo-fedora-minishift ~]$ minishift status
Minishift:  Running
Profile:    minishift
OpenShift:  Running (openshift v3.11.0+1cd89d4-542)
DiskUsage:  13% of 19G (Mounted On: /mnt/sda1)
CacheUsage: 513.6 MB (used by oc binary, ISO or cached images)

@rnapoles-rh (Contributor) commented:

After a while, the cluster becomes operational:
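"After a while" can be made concrete with a polling loop. A sketch, where `check` is a stand-in for the real probe (e.g. `minishift status` or `oc get pods`):

```shell
#!/bin/sh
# Readiness poll sketch. 'check' is a stand-in for the real probe,
# e.g. `minishift status` or `oc get pods -n kube-service-catalog`.
check() { echo "Running"; }

for i in $(seq 1 30); do
  if check | grep -q "Running"; then
    echo "ready after $i checks"
    break
  fi
  sleep 10    # wait between probes
done
```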

minishift openshift component add service-catalog
Adding service-catalog ...
I0126 20:55:21.667316   29370 interface.go:26] Installing "openshift-service-catalog" ...
I0126 20:55:22.498959   29370 apply_template.go:81] Installing "service-catalog"
I0126 20:55:50.611690   29370 interface.go:41] Finished installing "openshift-service-catalog"

minishift openshift component add automation-service-broker
Adding automation-service-broker ...
I0126 20:57:55.948860   31149 interface.go:26] Installing "automation-service-broker" ...
I0126 20:57:55.990482   31149 apply_template.go:81] Installing "automation-service-broker"
I0126 20:59:06.991486   31149 interface.go:41] Finished installing "automation-service-broker"

minishift openshift component add template-service-broker
Adding template-service-broker ...
I0126 20:59:16.848960     466 interface.go:26] Installing "openshift-template-service-broker" ...
I0126 20:59:16.851596     466 apply_template.go:81] Installing "template-service-broker-apiserver"
I0126 20:59:48.550132     466 apply_template.go:81] Installing "tsb-registration"
I0126 20:59:52.352624     466 interface.go:41] Finished installing "openshift-template-service-broker"

@rnapoles-rh (Contributor) commented:

oc login -u developer -p developer --insecure-skip-tls-verify $(minishift ip):8443
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

Welcome! See 'oc help' to get started.

@rnapoles-rh (Contributor) commented Jan 29, 2021

All tests for test-cmd-project PASSED.

All tests for test-cmd-service FAILED. Running them manually to check why they are failing:

Summarizing 12 Failures:

[Fail] odo service command tests [BeforeEach] When working from outside a component dir should be able to list services, as well as json list in a given app and project combination 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] When describing services should succeed when we're describing service that could have integer value for default field 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] check catalog service search functionality check that a service does not exist 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] create service with Env non-interactively should be able to create postgresql with env multiple times 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] When working from outside a component dir should be able to create, list and delete a service using a given value for --context 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] When the application is deleted should delete the service(s) in the application as well 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] check search functionality should pass with searching for part of a service name 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] When working from outside a component dir should be able to create, list and delete services without a context and using --app and --project flags instaed 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] when running help for service command should display the help 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] checking machine readable output for service catalog should succeed listing catalog components 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] checking machine readable output for service catalog should succeed listing catalog components 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests [BeforeEach] create service with Env non-interactively should be able to create postgresql with env 
/root/openshift/odo/tests/helper/helper_run.go:34

Ran 12 of 13 Specs in 0.934 seconds
FAIL! -- 0 Passed | 12 Failed | 0 Pending | 1 Skipped

Moving along:

Summarizing 8 Failures:

[Fail] odo service command tests create service with Env non-interactively [It] should be able to create postgresql with env 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests create service with Env non-interactively [It] should be able to create postgresql with env multiple times 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests When working from outside a component dir [It] should be able to list services, as well as json list in a given app and project combination 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests When describing services [It] should succeed when we're describing service that could have integer value for default field 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests When working from outside a component dir [It] should be able to create, list and delete a service using a given value for --context 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests When working from outside a component dir [It] should be able to create, list and delete services without a context and using --app and --project flags instaed 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests When the application is deleted [It] should delete the service(s) in the application as well 
/root/openshift/odo/tests/helper/helper_run.go:34

[Fail] odo service command tests check search functionality [It] should pass with searching for part of a service name 
/root/openshift/odo/tests/helper/helper_run.go:34

Ran 12 of 13 Specs in 9.709 seconds
FAIL! -- 4 Passed | 8 Failed | 0 Pending | 1 Skipped

@rnapoles-rh (Contributor) commented:

Tests are now passing on PSI minishift:

------------------------------
Running tests...
------------------------------

+ make test-cmd-project
ginkgo  -randomizeAllSpecs -slowSpecThreshold=120 -timeout 7200s -nodes=1 -focus="odo project command tests" tests/integration/project/
Running Suite: Project Suite
============================
Random Seed: 1612303647 - Will randomize all specs
Will run 7 of 7 specs

•••••••
JUnit report was created: /home/fedora/openshift/odo/reports/junit_2021-2-2_22-07-31_1.xml

Ran 7 of 7 Specs in 24.943 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 0 Skipped
PASS

Ginkgo ran 1 suite in 28.74755811s
Test Suite Passed
+ make test-cmd-service
ginkgo  -randomizeAllSpecs -slowSpecThreshold=120 -timeout 7200s -nodes=2 -focus="odo service command tests" tests/integration/servicecatalog/
Running Suite: Servicecatalog Suite
===================================
Random Seed: 1612303677 - Will randomize all specs
Will run 13 specs

Running in parallel across 2 nodes

•••S•••••
------------------------------
• [SLOW TEST:170.968 seconds]
odo service command tests
/home/fedora/openshift/odo/tests/integration/servicecatalog/cmd_service_test.go:13
  create service with Env non-interactively
  /home/fedora/openshift/odo/tests/integration/servicecatalog/cmd_service_test.go:81
    should be able to create postgresql with env multiple times
    /home/fedora/openshift/odo/tests/integration/servicecatalog/cmd_service_test.go:100
------------------------------
•••
Ran 12 of 13 Specs in 225.586 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 1 Skipped


Ginkgo ran 1 suite in 3m48.081352406s
Test Suite Passed

@prietyc123 (Contributor) commented:

@rnapoles-rh I tried replicating the steps you mentioned above, but minishift start fails on:

# minishift start
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.0' is valid ... 
   Hit github rate limit: GET https://api.github.com/repos/openshift/origin/releases/tags/v3.11.0: 403 API rate limit exceeded
[...]

I might be doing something silly and ending up rate-limited. Could you please summarise the e2e process/steps in a single comment?
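For what it's worth, minishift supports an authenticated GitHub token (via MINISHIFT_GITHUB_API_TOKEN) for its version check; unauthenticated requests share a small per-IP rate limit, which is what the 403 above indicates. A sketch, where the token value and file path are placeholders and not from this thread:

```shell
# Placeholder token written to a file purely for illustration; in practice
# this would be a real GitHub personal access token kept outside the repo.
printf 'ghp_placeholder_token' > /tmp/github-token
export MINISHIFT_GITHUB_API_TOKEN="$(cat /tmp/github-token)"
echo "token set: ${#MINISHIFT_GITHUB_API_TOKEN} chars"
# minishift start   # the v3.11.0 version check now uses authenticated requests
```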

@rnapoles-rh
Copy link
Contributor

rnapoles-rh commented Feb 3, 2021

Steps to set up an OpenStack Linux VM:

#Install openstack as per: https://docs.openstack.org/mitaka/cli-reference/common/cli_install_openstack_command_line_clients.html

#Download the OpenStack RC file from https://cloud.psi.redhat.com/dashboard/project/api_access/

#Source the rc file devtools-odo-openrc.sh

#Edit the rc file to set user (OS_USERNAME) and password (OS_PASSWORD)

#Create VM:
openstack server create --flavor m1.large --image Fedora-Cloud-Base-32 --nic net-id=provider_net_shared_3 --security-group default --security-group all-open --key-name releng-pub-key odo-fedora-minishift

#ssh to the vm:
ssh -i ~/psi/PSI/lib/common/releng-key fedora@10.0.111.243

sudo -i
dnf install grubby
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
systemctl reboot
ssh -i ~/psi/PSI/lib/common/releng-key fedora@<ip_address>
sudo -i

#Install docker:
#Follow https://docs.docker.com/engine/install/fedora/

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo usermod -aG docker $USER
newgrp docker

#Setup virtualization environment by following:
https://docs.okd.io/3.11/minishift/getting-started/setting-up-virtualization-environment.html#kvm-driver-fedora

sudo dnf install libvirt qemu-kvm
sudo usermod -a -G libvirt $(whoami)
sudo newgrp libvirt
curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 -o /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm
sudo dnf install @virtualization
sudo dnf group install --with-optional virtualization
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
sudo virsh net-autostart default

#Install minishift:
curl -Lo minishift.tgz https://github.com/minishift/minishift/releases/download/v1.34.3/minishift-1.34.3-linux-amd64.tgz
tar -xvzf minishift.tgz
sudo mv minishift-1.34.3-linux-amd64/minishift /usr/local/bin

sudo dnf install make
sudo dnf install gcc
sudo dnf install git
sudo dnf install wget
sudo wget https://golang.org/dl/go1.13.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.13.linux-amd64.tar.gz

#Login to docker to avoid getting "ERROR: toomanyrequests: Too Many Requests. You have reached your pull rate limit." during minishift start
docker login --username <user_name> --password <user_token>
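After the setup steps above, a quick sanity sweep can confirm each tool is on the PATH before kicking off a run. This is a sketch, not from the thread; the tool list mirrors the installs above:

```shell
# Collect any tools from the setup steps that are still missing from PATH.
MISSING=""
for tool in docker virsh minishift go git make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    MISSING="$MISSING $tool"
  fi
done
echo "missing tools:${MISSING:- none}"
```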

Steps to run tests (scripts/minishift-all-tests.sh):

#!/usr/bin/env bash

executing() {
  set +x
  echo -e "\n------------------------------\n${1}\n------------------------------\n"
  set -x
}

set -ex

#Export GitHub token to avoid 
executing "Setting up environment..."

export PATH="$PATH:/usr/local/go/bin/"
export GOPATH=$HOME/go
git clone https://github.com/openshift/odo.git openshift/odo
cd openshift/odo

mkdir -p $GOPATH/bin
make goget-ginkgo
export PATH="$PATH:$(pwd):$GOPATH/bin"

executing "Building ODO..."
make bin
sudo cp odo /usr/bin

executing "Stopping minishift..."
minishift stop
yes | minishift delete
export MINISHIFT_ENABLE_EXPERIMENTAL=y

executing "Starting minishift..."
minishift start 
executing "Adding components: service-catalog, automation-service-broker, and template-service-broker ..."
minishift openshift component add service-catalog
minishift openshift component add automation-service-broker
minishift openshift component add template-service-broker
sleep 3m
eval $(minishift oc-env)

executing "Executing tests..."
make test-cmd-project
make test-cmd-service

executing "Removing cloned repo..."
cd ../..
rm -Rf openshift

@rnapoles-rh
Copy link
Contributor

I am having problems getting minishift started:
Could not set oc CLI context for 'minishift' profile: Error during setting 'minishift' as active profile: The specified path to the kube config '/home/fedora/.minishift/machines/minishift_kubeconfig' does not exist

@rnapoles-rh
Copy link
Contributor

rnapoles-rh commented Feb 8, 2021

Created issue #4410 to track work for Jenkins configuration for running tests on PSI minishift

@dharmit dharmit removed the points/3 label Apr 14, 2021
@dharmit
Copy link
Member Author

dharmit commented Apr 15, 2021

For the issue of pulling images from docker.io, can you PTAL and check whether the minishift image cache-config command can help? I think it will cache the images on the host so that they don't need to be pulled again when someone does minishift delete and then minishift start.

But I could be wrong.
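A sketch of what that caching approach could look like, using minishift's image caching subcommands; the image list below is illustrative, taken from the v3.11.0 images mentioned later in this thread:

```shell
# Build the list of images to persist across minishift delete/start cycles.
IMAGES="openshift/origin-control-plane:v3.11.0 openshift/origin-pod:v3.11.0"
for img in $IMAGES; do
  echo "would run: minishift image cache-config add $img"
  # minishift image cache-config add "$img"
done
# minishift image cache-config view        # confirm the configured list
# minishift image export <images...>       # export images to the host-side cache
```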

@rnapoles-rh
Copy link
Contributor

Exploring minishift image cache-config and the use of local registry mirror

@rnapoles-rh
Copy link
Contributor

rnapoles-rh commented Apr 29, 2021

Anjan suggested using CDK. When starting minishift, it times out waiting for the condition:

I0429 20:43:49.795437    2549 apply_template.go:81] Installing "kube-dns"
I0429 20:43:49.800280    2549 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I0429 20:43:49.801580    2549 apply_template.go:81] Installing "kube-proxy"
I0429 20:43:59.774967    2549 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
Error: timed out waiting for the condition

Others have seen this problem, apparently related to slow network connections. It was reported in Bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1750913 and in OpenShift:
openshift/origin#22194 (closed without fixing). CDK offers newer versions of minishift.

@rnapoles-rh
Copy link
Contributor

rnapoles-rh commented May 3, 2021

Went back to standard minishift due to the CDK (3.11) issues.
I was able to get minishift started by manually pulling all required images and adding them to the cache.
Now the problem seems to be networking within PSI when pinging minishift's IP:

[fedora@odo-fedora-minishift-pr-test ~]$ minishift ip
192.168.42.115
[fedora@odo-fedora-minishift-pr-test ~]$ ping 192.168.41.115
PING 192.168.41.115 (192.168.41.115) 56(84) bytes of data.
^C
--- 192.168.41.115 ping statistics ---
103 packets transmitted, 0 received, 100% packet loss, time 104451ms 

Checking with the PSI people
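One thing worth double-checking before escalating (not raised in the thread): the ping above targets 192.168.41.115, while `minishift ip` reported 192.168.42.115, so the 100% packet loss may simply be a mistyped octet. A sketch of the check:

```shell
# Re-run the ping against the address minishift actually reported;
# MINISHIFT_IP is illustrative here, on the VM it would come from `minishift ip`.
MINISHIFT_IP="192.168.42.115"
echo "address to ping: $MINISHIFT_IP"
# ping -c 3 "$MINISHIFT_IP"
```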

@rnapoles-rh
Copy link
Contributor

rnapoles-rh commented May 5, 2021

I was able to bypass the docker pull limit by logging in to docker within the minishift VM and in the PSI VM itself, then exporting all required images by running:
minishift image export openshift/origin-cli:v3.11.0 openshift/origin-control-plane:v3.11.0 openshift/origin-deployer:v3.11.0 openshift/origin-docker-registry:v3.11.0 openshift/origin-haproxy-router:v3.11.0 openshift/origin-hyperkube:v3.11.0 openshift/origin-hypershift:v3.11.0 openshift/origin-node:v3.11.0 openshift/origin-pod:v3.11.0 openshift/origin-service-serving-cert-signer:v3.11 openshift/origin-web-console:v3.11.0
However, when starting minishift, it fails to start:

-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster ..................................................................Error during 'cluster up' execution: Error starting the cluster. ssh command error:
command : /var/lib/minishift/bin/oc cluster up --base-dir /var/lib/minishift/base --image 'openshift/origin-${component}:v3.11.0' --public-hostname 192.168.42.243 --routing-suffix 192.168.42.243.nip.io
err     : exit status 1
output  : Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11.0 is available ...
Checking type of volume mount ...
Determining server IP ...
Using public hostname IP 192.168.42.243 as the host IP
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11.0 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11.0 ...
I0505 11:11:30.525165    2062 config.go:40] Running "create-master-config"
I0505 11:11:36.096531    2062 config.go:46] Running "create-node-config"
I0505 11:11:37.753153    2062 flags.go:30] Running "create-kubelet-flags"
I0505 11:11:38.944326    2062 run_kubelet.go:49] Running "start-kubelet"
I0505 11:11:39.490756    2062 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
E0505 11:16:39.496344    2062 run_self_hosted.go:571] API server error: Get https://192.168.42.243:8443/healthz?timeout=32s: dial tcp 192.168.42.243:8443: connect: connection refused ()
Error: timed out waiting for the condition

Found a minishift issue where people were able to resolve this problem with Fedora 31. I will try on a Fedora 31 PSI VM.
Alternatively, if the Fedora 31 option does not work, we can deploy an OCP 3.11 cluster.

@rnapoles-rh
Copy link
Contributor

rnapoles-rh commented May 26, 2021

When running provision-hosts.sh I get the following (debugging):

<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1620678402.4327867-5265-14552138360885/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1620678402.4327867-5265-14552138360885/AnsiballZ_os_server.py", line 102, in <module>
    _ansiballz_main()
  File "/root/.ansible/tmp/ansible-tmp-1620678402.4327867-5265-14552138360885/AnsiballZ_os_server.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/root/.ansible/tmp/ansible-tmp-1620678402.4327867-5265-14552138360885/AnsiballZ_os_server.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible.modules.cloud.openstack.os_server', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.9/runpy.py", line 210, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_os_server_payload_63rh70ok/ansible_os_server_payload.zip/ansible/modules/cloud/openstack/os_server.py", line 759, in <module>
  File "/tmp/ansible_os_server_payload_63rh70ok/ansible_os_server_payload.zip/ansible/modules/cloud/openstack/os_server.py", line 749, in main
  File "/tmp/ansible_os_server_payload_63rh70ok/ansible_os_server_payload.zip/ansible/modules/cloud/openstack/os_server.py", line 666, in _get_server_state                                                                                                                                             
  File "/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py", line 518, in get_server
    server = _utils._get_entity(self, searchfunc, name_or_id, filters)
  File "/usr/local/lib/python3.9/site-packages/openstack/cloud/_utils.py", line 198, in _get_entity
    entities = search(name_or_id, filters, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py", line 100, in search_servers
    servers = self.list_servers(
  File "/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py", line 316, in list_servers
    self._servers = self._list_servers(
  File "/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py", line 336, in _list_servers
    for server in self.compute.servers(
  File "/usr/local/lib/python3.9/site-packages/openstack/service_description.py", line 87, in __get__
    proxy = self._make_proxy(instance)
  File "/usr/local/lib/python3.9/site-packages/openstack/service_description.py", line 262, in _make_proxy
    found_version = temp_adapter.get_api_major_version()
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 354, in get_api_major_version
    return self.session.get_api_major_version(auth or self.auth, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/session.py", line 1276, in get_api_major_version
    return auth.get_api_major_version(self, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/base.py", line 500, in get_api_major_version
    data = get_endpoint_data(discover_versions=discover_versions)
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/base.py", line 271, in get_endpoint_data
    service_catalog = self.get_access(session).service_catalog
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/base.py", line 134, in get_access
    self.auth_ref = self.get_auth_ref(session)
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/generic/base.py", line 208, in get_auth_ref
    return self._plugin.get_auth_ref(session, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/v3/base.py", line 187, in get_auth_ref
    resp = session.post(token_url, json=body, headers=headers,
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/session.py", line 1149, in post
    return self.request(url, 'POST', **kwargs)
  File "/usr/local/lib/python3.9/site-packages/keystoneauth1/session.py", line 986, in request
    raise exceptions.from_response(resp, method, url)
keystoneauth1.exceptions.http.BadRequest: Invalid input for field 'identity/password/user/password': None is not of type 'string'

Failed validating 'type' in schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']:
    {'type': 'string'}

On instance['identity']['password']['user']['password']:
    None (HTTP 400) (Request-ID: req-f74cb8a2-c3f2-4a10-b9f7-7decee94b7c0)
fatal: [127.0.0.1]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1620678402.4327867-5265-14552138360885/AnsiballZ_os_server.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1620678402.4327867-5265-14552138360885/AnsiballZ_os_server.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1620678402.4327867-5265-14552138360885/AnsiballZ_os_server.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible.modules.cloud.openstack.os_server', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/lib64/python3.9/runpy.py\", line 210, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib64/python3.9/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib64/python3.9/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_os_server_payload_63rh70ok/ansible_os_server_payload.zip/ansible/modules/cloud/openstack/os_server.py\", line 759, in <module>\n  File \"/tmp/ansible_os_server_payload_63rh70ok/ansible_os_server_payload.zip/ansible/modules/cloud/openstack/os_server.py\", line 749, in main\n  File \"/tmp/ansible_os_server_payload_63rh70ok/ansible_os_server_payload.zip/ansible/modules/cloud/openstack/os_server.py\", line 666, in _get_server_state\n  File \"/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py\", line 518, in get_server\n    server = _utils._get_entity(self, searchfunc, name_or_id, filters)\n  File \"/usr/local/lib/python3.9/site-packages/openstack/cloud/_utils.py\", line 198, in _get_entity\n    entities = search(name_or_id, filters, **kwargs)\n  File \"/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py\", line 100, in search_servers\n    servers = self.list_servers(\n  File 
\"/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py\", line 316, in list_servers\n    self._servers = self._list_servers(\n  File \"/usr/local/lib/python3.9/site-packages/openstack/cloud/_compute.py\", line 336, in _list_servers\n    for server in self.compute.servers(\n  File \"/usr/local/lib/python3.9/site-packages/openstack/service_description.py\", line 87, in __get__\n    proxy = self._make_proxy(instance)\n  File \"/usr/local/lib/python3.9/site-packages/openstack/service_description.py\", line 262, in _make_proxy\n    found_version = temp_adapter.get_api_major_version()\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/adapter.py\", line 354, in get_api_major_version\n    return self.session.get_api_major_version(auth or self.auth, **kwargs)\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/session.py\", line 1276, in get_api_major_version\n    return auth.get_api_major_version(self, **kwargs)\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/base.py\", line 500, in get_api_major_version\n    data = get_endpoint_data(discover_versions=discover_versions)\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n    service_catalog = self.get_access(session).service_catalog\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n    self.auth_ref = self.get_auth_ref(session)\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/generic/base.py\", line 208, in get_auth_ref\n    return self._plugin.get_auth_ref(session, **kwargs)\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/identity/v3/base.py\", line 187, in get_auth_ref\n    resp = session.post(token_url, json=body, headers=headers,\n  File \"/usr/local/lib/python3.9/site-packages/keystoneauth1/session.py\", line 1149, in post\n    return self.request(url, 'POST', **kwargs)\n  File 
\"/usr/local/lib/python3.9/site-packages/keystoneauth1/session.py\", line 986, in request\n    raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.BadRequest: Invalid input for field 'identity/password/user/password': None is not of type 'string'\n\nFailed validating 'type' in schema['properties']['identity']['properties']['password']['properties']['user']['properties']['password']:\n    {'type': 'string'}\n\nOn instance['identity']['password']['user']['password']:\n    None (HTTP 400) (Request-ID: req-f74cb8a2-c3f2-4a10-b9f7-7decee94b7c0)\n",                                                                                
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP *****************************************************************************************************************************************
127.0.0.1                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

+ /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/extract-hosts-ip.sh /tmp/ansible-generated-files/hosts-ip
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
/home/rnapoles/PSI/ocp-311-cluster/hosts-provision/extract-hosts-ip.sh: line 30: /tmp/ansible-generated-files/inventory: No such file or directory
+ ansible-playbook -vv -i /tmp/ansible-generated-files/inventory /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/generate-machines-inventory.yaml
ansible-playbook 2.9.20
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 3.9.4 (default, Apr  6 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
Using /etc/ansible/ansible.cfg as config file
[WARNING]: Unable to parse /tmp/ansible-generated-files/inventory as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: generate-machines-inventory.yaml *********************************************************************************************************
1 plays in /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/generate-machines-inventory.yaml

PLAY [Generate machines configuration file] ********************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************
task path: /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/generate-machines-inventory.yaml:3
ok: [127.0.0.1]
META: ran handlers

TASK [create_machine_inventory : Create main configuration inventory file] *************************************************************************
task path: /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/roles/create_machine_inventory/tasks/main.yml:1
fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'master_ip' is undefined"}

PLAY RECAP *****************************************************************************************************************************************
127.0.0.1                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

+ ansible-playbook -vv -i /tmp/ansible-generated-files/main-configuration-inventory /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/generate-ocp-inventory.yaml
ansible-playbook 2.9.20
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 3.9.4 (default, Apr  6 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
Using /etc/ansible/ansible.cfg as config file
[WARNING]: Unable to parse /tmp/ansible-generated-files/main-configuration-inventory as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.

PLAYBOOK: generate-ocp-inventory.yaml **************************************************************************************************************
1 plays in /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/generate-ocp-inventory.yaml

PLAY [Generate OCP configuration file] *************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************
task path: /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/generate-ocp-inventory.yaml:3
ok: [127.0.0.1]
META: ran handlers

TASK [create_ocp_inventory : Create "OCP" configuration inventory file] ****************************************************************************
task path: /home/rnapoles/PSI/ocp-311-cluster/hosts-provision/roles/create_ocp_inventory/tasks/main.yaml:1
fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'ocp_router_subdomain' is undefined"}

PLAY RECAP *****************************************************************************************************************************************
127.0.0.1                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

++ awk '{print $2}'
++ grep ansible
++ cat /tmp/ansible-generated-files/hosts-ip
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
+ ANSIBLE_IP=
++ awk '{print $2}'
++ grep ssh_key
++ cat /tmp/ansible-generated-files/hosts-ip
cat: /tmp/ansible-generated-files/hosts-ip: No such file or directory
+ SSH_KEY_NAME=
+ SSH_KEY_PATH=/home/rnapoles/PSI/ocp-311-cluster/
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@ rm -rf /root/deploy-ocp-crew
ssh: Could not resolve hostname : Name or service not known
+ scp -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ -r /home/rnapoles/PSI/ocp-311-cluster root@:/root/deploy-ocp-crew
ssh: Could not resolve hostname : Name or service not known
lost connection
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@ rm -rf /tmp/ansible-generated-files
ssh: Could not resolve hostname : Name or service not known
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@ mkdir /tmp/ansible-generated-files
ssh: Could not resolve hostname : Name or service not known
+ scp -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ /tmp/ansible-generated-files/main-configuration-inventory root@:/tmp/ansible-generated-files/main-configuration-inventory
ssh: Could not resolve hostname : Name or service not known
lost connection
+ scp -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ /tmp/ansible-generated-files/ocp-instalation-inventory root@:/tmp/ansible-generated-files/ocp-instalation-inventory
ssh: Could not resolve hostname : Name or service not known
lost connection
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@ chmod 0600 /root/deploy-ocp-crew/
ssh: Could not resolve hostname : Name or service not known
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@ /root/deploy-ocp-crew/deploy-ocp.sh
ssh: Could not resolve hostname : Name or service not known

@rnapoles-rh
Copy link
Contributor

  • PSI resource allocation was increased.
  • Install Ansible and required prerequisites on the control machine (Linux VirtualBox)
  • Provision the 3.11 cluster
  • After 3.11 provisioning, run the tests and ensure they all pass; fix failing tests or report bugs as required

I was able to move forward from the previous errors: the ocp-ansible-machine instance got created, then it failed due to a timeout waiting for it, so I ran the provision-hosts.sh script again, and it then created the ocp-dns-machine. It failed again due to a timeout; I ran it once more and now get the following. Note that I changed the permissions on releng-key and releng-key.pub to 600 and still get this:

PLAY RECAP ********************************************************************************************************************************************************************************************
127.0.0.1                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
++ awk '{print $2}'
++ grep ansible
++ cat /tmp/ansible-generated-files/hosts-ip
+ ANSIBLE_IP=10.0.149.134
++ awk '{print $2}'
++ grep ssh_key
++ cat /tmp/ansible-generated-files/hosts-ip
+ SSH_KEY_NAME=
+ SSH_KEY_PATH=/home/rnapoles/PSI/ocp-311-cluster/
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@10.0.149.134 rm -rf /root/deploy-ocp-crew
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
Connection closed by 10.0.149.134 port 22
+ scp -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ -r /home/rnapoles/PSI/ocp-311-cluster root@10.0.149.134:/root/deploy-ocp-crew
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
root@10.0.149.134: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
lost connection
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@10.0.149.134 rm -rf /tmp/ansible-generated-files
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
root@10.0.149.134: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@10.0.149.134 mkdir /tmp/ansible-generated-files
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
ssh_dispatch_run_fatal: Connection to 10.0.149.134 port 22: Broken pipe
+ scp -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ /tmp/ansible-generated-files/main-configuration-inventory root@10.0.149.134:/tmp/ansible-generated-files/main-configuration-inventory
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
root@10.0.149.134: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
lost connection
+ scp -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ /tmp/ansible-generated-files/ocp-instalation-inventory root@10.0.149.134:/tmp/ansible-generated-files/ocp-instalation-inventory
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
root@10.0.149.134: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
lost connection
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@10.0.149.134 chmod 0600 /root/deploy-ocp-crew/
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
root@10.0.149.134: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
+ ssh -o 'StrictHostKeyChecking no' -i /home/rnapoles/PSI/ocp-311-cluster/ root@10.0.149.134 /root/deploy-ocp-crew/deploy-ocp.sh
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0775 for '/home/rnapoles/PSI/ocp-311-cluster/' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/rnapoles/PSI/ocp-311-cluster/": bad permissions
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
Permission denied, please try again.
root@10.0.149.134's password: 
root@10.0.149.134: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
[rnapoles@localhost ocp-311-cluster]$ ls -las
total 28
0 drwxrwxr-x. 1 rnapoles rnapoles  292 May 26 07:53 .
0 drwxrwxr-x. 1 rnapoles rnapoles  270 May 10 13:09 ..
4 -rw-rw-r--. 1 rnapoles rnapoles 2257 May 10 13:09 cluster-configuration
0 drwxrwxr-x. 1 rnapoles rnapoles   24 May 10 13:09 common
4 -rwxrwxr-x. 1 rnapoles rnapoles  718 May 10 13:09 configure-ansible-host.sh
4 -rwxrwxr-x. 1 rnapoles rnapoles 1743 May 10 13:09 deploy-ocp.sh
0 drwxrwxr-x. 1 rnapoles rnapoles  250 May 10 13:09 hosts-provision
0 drwxrwxr-x. 1 rnapoles rnapoles   56 May 10 13:09 htpasswd
0 drwxrwxr-x. 1 rnapoles rnapoles  252 May 10 13:09 ocp-setup
4 -rwxrwxr-x. 1 rnapoles rnapoles 2828 May 10 13:09 provision-hosts.sh
4 -rw-rw-r--. 1 rnapoles rnapoles  266 May 10 13:09 provision.py
4 -rw-rw-r--. 1 rnapoles rnapoles 2686 May 10 13:09 README.md
4 lrwxrwxrwx. 1 rnapoles rnapoles   24 May 10 13:09 releng-key -> ../lib/common/releng-key
[rnapoles@localhost ocp-311-cluster]$ ls -las ../lib/common/
total 20
0 drwxrwxr-x. 1 rnapoles rnapoles  118 May 20 13:50 .
0 drwxrwxr-x. 1 rnapoles rnapoles   68 May 10 13:09 ..
4 -rw-------. 1 rnapoles rnapoles  245 May 10 13:09 clouds.yaml
4 -rw-------. 1 rnapoles rnapoles  118 May 10 13:09 helpers.sh
4 -rw-------. 1 rnapoles rnapoles  779 May 10 13:09 htpass
0 drw-------. 1 rnapoles rnapoles   48 May 10 13:09 old-keys
4 -rw-------. 1 rnapoles rnapoles 1679 May 10 13:09 releng-key
4 -rw-------. 1 rnapoles rnapoles  399 May 10 13:09 releng-key.pub
[rnapoles@localhost ocp-311-cluster]$ 
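Two things stand out in the log above: the `-i` option is given the directory `/home/rnapoles/PSI/ocp-311-cluster/` rather than a key file, and ssh refuses to load any key it considers group/world accessible (the directory is 0775, while the real key, `releng-key`, is already 0600 per the listing). A minimal sketch of the permission check, using a scratch file in place of the real key:

```shell
# Stand-in for the real key (releng-key in the listing above).
KEY=$(mktemp)

# ssh ignores private keys readable by group or other; 600 is required.
chmod 600 "$KEY"

# Verify the mode; ssh would accept a key file like this.
stat -c '%a' "$KEY"   # prints: 600

# The retry should point -i at the key file itself, not its directory:
# ssh -o 'StrictHostKeyChecking no' -i "$KEY" root@10.0.149.134 ...
rm -f "$KEY"
```

With the scripts fixed to pass `releng-key` instead of the directory, the "UNPROTECTED PRIVATE KEY FILE" warnings and password fallbacks should disappear.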

@rnapoles-rh

Contributor

rnapoles-rh commented Jun 3, 2021

3.11 provisioning is failing with the following error after creating all VM instances:

INSTALLER STATUS ***************************************************************
Initialization  : In Progress (0:00:11)
+ ansible-playbook -vv --private-key=/root/deploy-ocp-crew/releng-key -i /tmp/ansible-generated-files/main-configuration-inventory /root/deploy-ocp-crew/ocp-setup/post-install-actions.yml
ansible-playbook 2.6.20
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Aug 13 2020, 02:51:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: post-install-actions.yml *********************************************
3 plays in /root/deploy-ocp-crew/ocp-setup/post-install-actions.yml
PLAY [Set cluster admin] *******************************************************
TASK [Gathering Facts] *********************************************************
task path: /root/deploy-ocp-crew/ocp-setup/post-install-actions.yml:3
/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.3) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
ok: [10.0.150.229]
META: ran handlers
TASK [set-cluster-admin : Set cluster admin] ***********************************
task path: /root/deploy-ocp-crew/ocp-setup/roles/set-cluster-admin/tasks/main.yml:3
fatal: [10.0.150.229]: FAILED! => {"changed": false, "cmd": "oc adm policy add-cluster-role-to-user cluster-admin developer --as=system:admin", "msg": "[Errno 2] No such file or directory", "rc": 2}
        to retry, use: --limit @/root/deploy-ocp-crew/ocp-setup/post-install-actions.retry
PLAY RECAP *********************************************************************
10.0.150.229               : ok=1    changed=0    unreachable=0    failed=1 
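The `[Errno 2] No such file or directory` from ansible's command module means the `oc` binary itself could not be found on the master, not that the policy command failed. A quick, hedged check to run on the master (10.0.150.229 in the recap); the `/usr/local/bin/oc` path below is only an illustration:

```shell
# Is `oc` visible to a non-interactive shell (the kind ansible's
# command module spawns)?
if command -v oc >/dev/null 2>&1; then
    echo "oc found at $(command -v oc)"
else
    # Installed-but-not-on-PATH is common; the playbook can then call
    # the binary by full path instead, e.g. /usr/local/bin/oc
    # (illustrative location only).
    echo "oc missing from non-interactive PATH"
fi
```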

Following up with Zeeshan.

Just started exploring cluster deployment in the IBM Cloud. IBM Cloud account created. Checking with Karel about budget.

@rnapoles-rh
Contributor

Onboarded to the IBM Cloud. Currently reviewing guidelines, documentation, and troubleshooting permissions.

@rnapoles-rh
Contributor

rnapoles-rh commented Jun 10, 2021

Created a 4.7 cluster in the IBM Cloud. Looking into how to provision a 3.11 cluster (deprecated in IBM Cloud). Also checking how to onboard a robot account. Once we get a robot account in the IBM Cloud, we can start using these clusters.

@dharmit
Member Author

dharmit commented Jun 10, 2021

Looking into how to provision a 3.11 cluster (deprecated in IBM Cloud).

minishift can work with a remote VM. Might be worth exploring.
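To make the remote-VM idea concrete: minishift ships a generic driver that targets an existing machine over SSH instead of creating a local VM. A hedged sketch of the invocation (the IP, user, and key path are placeholders; check `minishift start --help` on the version in use for the exact flags):

```shell
# Guarded so the snippet is harmless where minishift isn't installed.
if command -v minishift >/dev/null 2>&1; then
    # Generic driver: reuse an existing remote VM over SSH.
    minishift start --vm-driver generic \
        --remote-ipaddress 10.0.149.134 \
        --remote-ssh-user root \
        --remote-ssh-key ~/.ssh/releng-key
else
    echo "minishift not installed; this shows the intended invocation only"
fi
```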

@prietyc123
Contributor

Closing this issue as we no longer support 3.11, and hence it doesn't make sense to have tests running on it.
