
Restart QDR after changing the password #530

Merged
merged 8 commits into master on Nov 13, 2023

Conversation


@csibbitt csibbitt commented Nov 9, 2023

  • Fixes bug reported here: QDR Auth for infrared 17.1 script #517 (comment)
  • Avoids an extra manual step when the password changes
  • Would affect users who upgrade from earlier STF and subsequently enable basic auth
  • Also users who need to change their passwords
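The effect of the change can be sketched in shell. This is illustrative only: the real change lives in the Ansible tasks in roles/servicetelemetry/tasks/component_qdr.yml, and the hash-comparison pattern below is one common way to decide that a restart is needed, not necessarily the one the role uses.

```shell
# Illustrative sketch, not the actual Ansible implementation: detect that the
# QDR users secret changed by comparing a short hash of its contents, and flag
# that the Interconnect pods need a restart so qdrouterd reloads the password.
hash_secret() { printf '%s' "$1" | sha256sum | cut -c1-12; }

old_hash=$(hash_secret "old-password")
new_hash=$(hash_secret "new-password")

if [ "$old_hash" != "$new_hash" ]; then
  echo "password changed: restarting QDR pods"
  # In a live cluster this would be something like:
  #   oc delete po -l application=default-interconnect
fi
```

In the transcripts below the same restart is done by hand with `oc delete po -l application=default-interconnect`; the fix makes the operator take care of it.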

csibbitt commented Nov 9, 2023

Testing

Before

I have a working system. Then I regenerate the password by removing the stf_one_time_upgrade label and deleting the STO pod (to trigger an operator run); after that, the system is not working.

$ ../tests/smoketest/smoketest.sh 
[...]
*** [SUCCESS] Smoke test job completed successfully

$ oc patch secret default-interconnect-users --type=json --patch '[{"op": "remove", "path":"/metadata/labels/stf_one_time_upgrade"}]' 
secret/default-interconnect-users patched

$ oc delete po -l name=service-telemetry-operator
pod "service-telemetry-operator-54cd5c8dd-rh8rs" deleted

$ oc get secret default-interconnect-users -o yaml | grep stf_one_time
    stf_one_time_upgrade: "1699468854"

$ ../tests/smoketest/smoketest.sh 
[...]
*** [FAILURE] Smoke test job still not succeeded after 300s

Deleting the QDR pod makes it work again

$ oc delete po -l application=default-interconnect
pod "default-interconnect-7c77794bf7-tjqj9" deleted

$ ./tests/smoketest/smoketest.sh 
[...]
*** [SUCCESS] Smoke test job completed successfully

After

Get working system

$ git reflog | head -1
5ada4e9 HEAD@{0}: commit (amend): Restart QDR after changing the password

$ oc delete project service-telemetry
project.project.openshift.io "service-telemetry" deleted

$ oc new-project service-telemetry
[...]

$ ansible-playbook --extra-vars __local_build_enabled=true --extra-vars working_branch="$(git rev-parse --abbrev-ref HEAD)" --extra-vars __service_telemetry_snmptraps_enabled=true --extra-vars __service_telemetry_storage_ephemeral_enabled=true --extra-vars __service_telemetry_observability_strategy=use_redhat ./run-ci.yaml
[...]

$ ../tests/smoketest/smoketest.sh 
[...]
*** [SUCCESS] Smoke test job completed successfully

Stays working after password regeneration, with no QDR pod restart required

$ oc patch secret default-interconnect-users --type=json --patch '[{"op": "remove", "path":"/metadata/labels/stf_one_time_upgrade"}]' 
secret/default-interconnect-users patched

$ oc delete po -l name=service-telemetry-operator
pod "service-telemetry-operator-6d8b6797b7-k2sqf" deleted

$ oc get secret default-interconnect-users -o yaml | grep stf_one_time
    stf_one_time_upgrade: "1699547215"

$ ../tests/smoketest/smoketest.sh 
[...]
*** [SUCCESS] Smoke test job completed successfully


@leifmadsen leifmadsen left a comment


Approving because this looks good, but a couple minor comments.

Review comments on roles/servicetelemetry/tasks/component_qdr.yml (outdated; resolved)

@vkmc vkmc left a comment


Code looks good. I wouldn't worry too much about HA if we are planning to deprecate it soon.


csibbitt commented Nov 9, 2023

HA Testing

Rebuild with latest changes

$ git reflog | head -1
5cd57e1 HEAD@{0}: commit (amend): Adjust QDR restarts to account for HA

$ oc start-build service-telemetry-operator --wait --from-dir . ; oc delete pod --selector=name=service-telemetry-operator
[...]

Test password regeneration (and restart) has no errors when no Interconnect pods exist

$ oc get po -l application=default-interconnect
NAME                                    READY   STATUS    RESTARTS   AGE
default-interconnect-7c77794bf7-x4m7s   1/1     Running   0          4m

$ oc delete interconnect default-interconnect; oc delete po -l application=default-interconnect
interconnect.interconnectedcloud.github.io "default-interconnect" deleted
pod "default-interconnect-7c77794bf7-x4m7s" deleted

# STO converges without intervention because it watches the Interconnect object
# On the first pass it creates the Interconnect object, but there is no users secret to upgrade yet
# The pod starts up

$ oc get po -l application=default-interconnect
NAME                                    READY   STATUS    RESTARTS   AGE
default-interconnect-7c77794bf7-8x8q2   1/1     Running   0          12s

# On the second pass it upgrades the secret and restarts the pod

$ oc get secret default-interconnect-users -o yaml | grep stf_one_time
    stf_one_time_upgrade: "1699561745"

$ oc get po -l application=default-interconnect
NAME                                    READY   STATUS    RESTARTS   AGE
default-interconnect-7c77794bf7-8x8q2   1/1     Running   0          6s

Enable HA to get multiple pods and ensure they restart when we regenerate the password

$ oc patch stf default --patch '{"spec":{"highAvailability":{"enabled": true}}}' --type=merge
servicetelemetry.infra.watch/default patched

$ oc get po -l application=default-interconnect
NAME                                    READY   STATUS    RESTARTS   AGE
default-interconnect-7c77794bf7-8x8q2   1/1     Running   0          6m21s
default-interconnect-7c77794bf7-c8xjr   1/1     Running   0          2m12s

$ oc patch secret default-interconnect-users --type=json --patch '[{"op": "remove", "path":"/metadata/labels/stf_one_time_upgrade"}]'
secret/default-interconnect-users patched

$ oc delete po -l name=service-telemetry-operator
pod "service-telemetry-operator-6d8b6797b7-pjrq5" deleted

$ oc get po -l application=default-interconnect
NAME                                    READY   STATUS    RESTARTS   AGE
default-interconnect-7c77794bf7-2bm6d   1/1     Running   0          6s
default-interconnect-7c77794bf7-dpb7r   1/1     Running   0          5s


@ayefimov-1 ayefimov-1 left a comment


Seems good to me.

@csibbitt

The 4.14 tests are failing from what looks like a race condition. I have a patch coming that will wait for the qdr-test to be Running, which I hope will fix it.

@csibbitt

Okay, there is actually a SECOND race condition now. Sometimes the qdr-test configmap is created before the password gets upgraded, so it fails to connect to the upgraded/restarted STF QDR. This was masked before, when the QDR was not being restarted, since both sides had the old password and would connect using it. I'll keep working on this branch to solve this so that we don't need additional backports.
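Both follow-up fixes amount to polling with a timeout. A minimal sketch of such a wait loop in shell (the loop shape and the probe commands are illustrative; the smoketest's actual probes check things like "the qdr-test pod is Running" or "the users secret carries the stf_one_time_upgrade label"):

```shell
# Generic poll-until-success helper with a timeout, as a sketch of the waits
# added to the smoketest. The probes shown in comments are placeholders; real
# probes would shell out to `oc` (e.g. checking pod phase or a secret label).
wait_for() {
  timeout=$1; shift
  elapsed=0
  until "$@"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 0
}

# Hypothetical probes:
#   wait_for 300 sh -c 'oc get po qdr-test -o jsonpath="{.status.phase}" | grep -q Running'
#   wait_for 300 sh -c 'oc get secret default-interconnect-users \
#     -o jsonpath="{.metadata.labels.stf_one_time_upgrade}" | grep -q .'
```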

@csibbitt csibbitt added the do-not-merge Code is not ready to be merged label Nov 13, 2023

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/629ee486ebce48e3aeee8bb020b8e707

stf-crc-latest-nightly_bundles TIMED_OUT in 1h 00m 48s
stf-crc-latest-local_build TIMED_OUT in 1h 00m 50s

@csibbitt

recheck

@csibbitt

csibbitt commented Nov 13, 2023

This passed the 4.12 and 4.14 Jenkins tests 3 times in a row (each), but the zuul one timed out waiting for the password to upgrade. Without an STO log I'm not really sure why that might happen.

@csibbitt

csibbitt commented Nov 13, 2023

The previous recheck caused the 4.12 Jenkins job to run twice (once in branch mode, once in merge mode, maybe?). So now the Jenkins jobs have run this 8 times in a row with no trouble, but zuul times out waiting on this line: d0d5460#diff-16335c82c44304ee49855ecebf854c2a8db384a1240f299c5f023b4b9845e407R64

That's the exact same expression as was previously used to capture the password, and it passed in previous iterations of this PR. Not sure what's going on here, but will probably need to extend the logging to get enough information to debug.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/6dc66d329b15441a9da68059ab49fa8c

stf-crc-latest-nightly_bundles TIMED_OUT in 1h 00m 51s
stf-crc-latest-local_build TIMED_OUT in 1h 00m 50s

@csibbitt

csibbitt commented Nov 13, 2023

Oh! It seems that zuul is still testing with auth: none[1] which apparently worked fine until we started waiting around for behavior that only executes when using auth: basic! I'm not sure where that's slipping in, as I thought it was just using defaults the whole way through (since we removed the override in stf-v-c MR 103).

CC: @elfiesmelfie any ideas?

[1] https://review.rdoproject.org/zuul/build/1f7d700e607947cf833640fc48b7bf3e/log/controller/smoketest.log#1718

EDIT: I found it! https://github.com/infrawatch/service-telemetry-operator/blob/master/ci/vars-zuul-common.yml#L5

@csibbitt csibbitt enabled auto-merge (squash) November 13, 2023 20:40
@csibbitt csibbitt removed the do-not-merge Code is not ready to be merged label Nov 13, 2023
@leifmadsen leifmadsen closed this Nov 13, 2023
auto-merge was automatically disabled November 13, 2023 20:42

Pull request was closed

@leifmadsen leifmadsen reopened this Nov 13, 2023
@leifmadsen

Oops! I fat fingered a button. Sorry!

@leifmadsen leifmadsen enabled auto-merge (squash) November 13, 2023 20:43
@elfiesmelfie

> Oh! It seems that zuul is still testing with auth: none[1] which apparently worked fine until we started waiting around for behavior that only executes when using auth: basic! I'm not sure where that's slipping in, as I thought it was just using defaults the whole way through (since we removed the override in stf-v-c MR 103).
>
> CC: @elfiesmelfie any ideas?
>
> [1] https://review.rdoproject.org/zuul/build/1f7d700e607947cf833640fc48b7bf3e/log/controller/smoketest.log#1718
>
> EDIT: I found it! https://github.com/infrawatch/service-telemetry-operator/blob/master/ci/vars-zuul-common.yml#L5

Yup, I added that when the default became "auth: basic" and the smoke tests stopped passing.

@leifmadsen leifmadsen merged commit 16b8197 into master Nov 13, 2023
8 of 9 checks passed
@leifmadsen leifmadsen deleted the csibbitt/STF-1541/qdr-restart-on-pw-change branch November 13, 2023 20:54
csibbitt added a commit that referenced this pull request Nov 13, 2023
* Restart QDR after changing the password

* Fixes bug reported here: #517 (comment)
* Avoids an extra manual step when changing password
* Would affect users who upgrade from earlier STF and subsequently enable basic auth
* Also users who need to change their passwords

* Fixing ansible lint

* Update roles/servicetelemetry/tasks/component_qdr.yml

* Adjust QDR restarts to account for HA

* [smoketest] Wait for qdr-test to be Running

* [smoketest] Wait for QDR password upgrade

* Remove zuul QDR auth override

(cherry picked from commit 16b8197)
leifmadsen pushed a commit that referenced this pull request Nov 14, 2023
* Restart QDR after changing the password

* Fixes bug reported here: #517 (comment)
* Avoids an extra manual step when changing password
* Would affect users who upgrade from earlier STF and subsequently enable basic auth
* Also users who need to change their passwords

* Fixing ansible lint

* Update roles/servicetelemetry/tasks/component_qdr.yml

* Adjust QDR restarts to account for HA

* [smoketest] Wait for qdr-test to be Running

* [smoketest] Wait for QDR password upgrade

* Remove zuul QDR auth override

(cherry picked from commit 16b8197)
vkmc added a commit that referenced this pull request Feb 14, 2024
* Add gitleaks.toml for rh-gitleaks (#510)

Add a .gitleaks.toml file to avoid the false positive leak for the
example certificate when deploying for Elasticsearch.

* [stf-collect-logs] Move describe build|pod from ci/ to the role (#505)

* [stf-run-ci] Fix check to include bool filter (#511)

Update the check to use bool filter instead of a bar var.
By default, ansible parses vars as strings, and without the | bool
filter, this check is invalid, as it will always resolve to true, since
it is a non-empty string. Other instances of the same check did this,
but this one was missed.

* [allow_skip_clone] Allow skipping of the cloning stages (#512)

* [allow_skip_clone] Use <repo>_dir instead of hardcoding all directories relative to base_dir

This will allow configuration of the repo clone destination, so we can
use pre-cloned dirs instead of explicitly cloning the dirs each time.

This is essential for CI systems like zuul, that set-up the repos with
particular versions/branches prior to running the test scripts.

* [zuul] List the other infrawatch repos as required for the job

* [zuul] Set the {sgo,sg-bridge,sg-core,prometheus-webhook-snmp}_dir vars

Add in the repo dir locations where the repos should be pre-cloned by
zuul

* Replace base_dir with sto_dir

* set sto_dir relative to base_dir if it isn't already set

* [ci] use absolute dir for requirements.txt

* [ci] Update sto_dir using explicit reference

zuul.project.src_dir refers to the current project dir. When using the jobs
in another infrawatch project, this becomes invalid.
Instead, sto_dir is explicitly set using
zuul.projects[<project_name>].src_dir, the same way that the other repo dirs
are set in vars-zuul-common

---------

Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>

* Fix qdr auth one_time_upgrade label check (#518)

* Fix qdr auth one_time_upgrade label check

* Fix incorrect variable naming on one_time_upgrade label check

* Adjust QDR authentication password generation (#520)

Adjust the passwords being generated for QDR authentication since
certain characters (such as colon) will cause a failure in the parsing
routine within qpid-dispatch. Updates the lookup function to only use
ascii_letters and digits and increases the length to 32 characters.

---------

Co-authored-by: Leif Madsen <lmadsen@redhat.com>

* Add docs for skip_clone (#515)

* [allow_skip_clone] Add docs for clone_repos and *_dir vars

* Align README table column spacing (#516)

* Align README table column spacing

* Update build/stf-run-ci/README.md

---------

Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>

---------

Co-authored-by: Leif Madsen <lmadsen@redhat.com>

* [zuul] Add STO to required repos (#524)

It appears that STO is not included explicitly when running jobs from
SGO [1]. This will be the case in all the other repos.
This change explicitly adds it, in case it's not already included by
zuul.

[1] https://review.rdoproject.org/zuul/build/edd8f17bfdac4360a94186b46c4cea3f

* QDR Auth in smoketest (#525)

* QDR Auth in smoketest

* Added qdr-test as a mock of the OSP-side QDR
* Connection from qdr-test -> default-interconnect is TLS+Auth
* Collectors point at qdr-test instead of default-interconnect directly
* Much more realistic than the existing setup
* Eliminated a substitution in sensubility config
* Used default QDR basic auth in Jenkinsfile

* QDR Auth for infrared 17.1 script (#517)

* QDR Auth for infrared 17.1 script

* Fix missing substitution for AMQP_PASS in infrared script

* [zuul] Define a project template for stf-crc-jobs (#514)

* [allow_skip_clone] Use <repo>_dir instead of hardcoding all directories relative to base_dir

This will allow configuration of the repo clone destination, so we can
use pre-cloned dirs instead of explicitly cloning the dirs each time.

This is essential for CI systems like zuul, that set-up the repos with
particular versions/branches prior to running the test scripts.

* [zuul] List the other infrawatch repos as required for the job

* [zuul] Set the {sgo,sg-bridge,sg-core,prometheus-webhook-snmp}_dir vars

Add in the repo dir locations where the repos should be pre-cloned by
zuul

* Replace base_dir with sto_dir

* set sto_dir relative to base_dir if it isn't already set

* [ci] use absolute dir for requirements.txt

* [ci] Update sto_dir using explicit reference

zuul.project.src_dir refers to the current project dir. When using the jobs
in another infrawatch project, this becomes invalid.
Instead, sto_dir is explicitly set using
zuul.projects[<project_name>].src_dir, the same way that the other repo dirs
are set in vars-zuul-common

* [zuul] Define a project template for stf-crc-jobs

Instead of listing all the jobs for each project in-repo, and needing to update the list every time
that a new job is added, the project template can be updated and the changes propagated to the
other infrawatch projects

* [zuul] don't enable using the template

* Revert "[zuul] don't enable using the template"

This reverts commit 56e2009.

---------

Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>

* Restart QDR after changing the password (#530)

* Restart QDR after changing the password

* Fixes bug reported here: #517 (comment)
* Avoids an extra manual step when changing password
* Would affect users who upgrade from earlier STF and subsequently enable basic auth
* Also users who need to change their passwords

* Fixing ansible lint

* Update roles/servicetelemetry/tasks/component_qdr.yml

* Adjust QDR restarts to account for HA

* [smoketest] Wait for qdr-test to be Running

* [smoketest] Wait for QDR password upgrade

* Remove zuul QDR auth override

* [zuul] Add jobs to test with different versions of OCP (#432)


* Add crc_ocp_bundle value to select OCP version
* zuul: add log collection post-task to get crc logs
* Add ocp v13 and a timeout to the job

* Update README for 17.1 IR test (#533)

* Update README for 17.1 IR test

Update the 17.1 infrared test script README to show how to deploy a
virtualized workload on the deployed overcloud infrastructure. Helps
with testing by providing additional telemetry to STF required in
certain dashboards.

* Update tests/infrared/17.1/README.md

Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>

* Update tests/infrared/17.1/README.md

---------

Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>

* Support OCP v4.12 through v4.14 (#535)

Support STF 1.5.3 starting at OpenShift version 4.12 due to
incompatibility with 4.11 due to dependency requirements. Our primary
target is support of OCP EUS releases.

Closes: STF-1632

* [stf-collect-logs] Add ignore_errors to task (#529)

The "Question the deployment" task didn't have
ignore_errors: true set, so when the task fails, the play
is finished. This means that we don't get to the
"copy logs" task and can't see the job logs in zuul.

ignore_errors is set to true to be consistent with other tasks

* Mgirgisf/stf 1580/fix log commands (#526)

* update stf-collect-logs tasks
* Update log path
* solve log bugs in stf-run-ci tasks
* create log directory

* Adjust Operator dependency version requirements (#538)

Adjust the operator package dependency requirements to align to known
required versions. Primarily reduce the version of
openshift-cert-manager from 1.10 to 1.7 in order to support the
tech-preview channel which was previously used.

Lowering the version requirement allows for the
openshift-cert-manager-operator installed previously to be used during
the STF 1.5.2 to 1.5.3 update, removing the update from being blocked.

Related: STF-1636

* Clean up stf-run-ci for OCP 4.12 minimum version (#539)

Update the stf-run-ci base setup to no longer need testing against OCP
4.10 and earlier, meaning we can rely on a single workflow for
installation. Also update the deployment to use
cluster-observability-operator via the redhat-operators CatalogSource
for installation via use_redhat and use_hybrid strategies.

* [zuul] Add job to build locally and do an index-based deployment (#495)

* [zuul] Add job to build locally and do an index-based deployment

* Only require Interconnect and Smart Gateway (#541)

* Only require Interconnect and Smart Gateway

Update the dependency management within Service Telemetry Operator to
only require AMQ Interconnect and Smart Gateway Operator, which is
enough to deploy STF with observabilityStrategy: none. Other Operators
can be installed in order to satisfy data storage of telemetry and
events.

Installation of cert-manager is also required, but needs to be
pre-installed similar to Cluster Observability Operator, either as a
cluster-scoped operator with the tech-preview channel, or a single time
on the cluster as a namespace scoped operator, which is how the
stable-v1 channel installs.

Documentation will be updated to adjust for this change.

Related: STF-1636

* Perform CI update to match docs install changes (#542)

* Perform CI update to match docs install changes

Update the stf-run-ci scripting to match the documented installation
procedures which landed in
infrawatch/documentation#513. These changes are
also reflected in #541.

* Update build/stf-run-ci/tasks/setup_base.yml

Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>

---------

Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>

* Also drop cert-manager project

The cert-manager project gets created with workload items when deploying
the cert-manager from the cert-manager-operator project. When removing
cert-manager this project is not cleaned up, so we need to delete it as
well.

---------

Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>

* [stf-run-ci] Explicitly check that validate_deployment was successful (#545)

In [1], the validate_deployment step is successful, despite the
deployment not being successful.
This causes the job to timeout because the following steps continue to
run despite an invalid state.

To get the expected behaviour, the output should be checked for a string
indicating success.
i.e. * [info] CI Build complete. You can now run tests.
[2] shows the output for a successful run.

[1] https://review.rdoproject.org/zuul/build/245ae63e41884dc09353d938ec9058d7/console#5/0/144/controller
[2] https://review.rdoproject.org/zuul/build/802432b23da24649b818985b7b1633bb/console#5/0/82/controller

* Implement dashboard management (#548)

* Implement dashboard management

Implement a new configuration option graphing.grafana.dashboards.enabled
which results in dashboards objects being created for the Grafana
Operator. Previously loading dashboards would be done manually via 'oc
apply' using instructions from documentation.

The new CRD parameters to the ServiceTelemetry object allows the Service
Telemetry Operator to now make the GrafanaDashboard objects directly.

Related: OSPRH-825

* Drop unnecessary cluster roles

* Update CSV for owned parameter

* Remove basic-auth method from grafana (#550)

* Only openshift auth will be allowed

* Adjust Alertmanager SAR to be more specific

* This matches recent changes in prometheus[1] and grafana[2]

[1] https://github.com/infrawatch/service-telemetry-operator/pull/549/files#diff-2cf84bcf66f12393c86949ec0d3f16c473a650173d55549bb02556d23aa22bd2R46
[2] https://github.com/infrawatch/service-telemetry-operator/pull/550/files#diff-ae71801975adb4f8dd4aa5479a66ad46e46f17de40f9d147b2e09e13ce26633eR45

* Revert "Adjust Alertmanager SAR to be more specific"

This reverts commit 0f94fd5.

* Auth to prometheus using token instead of basicauth (#549)

* Auth to prometheus using token instead of basicauth

* Add present/absent logic to prometheus-reader resources

* s/password/token in smoketest output

* [zuul] Make nightly_bundles jobs non-voting (#551)

---------

Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>

* Fix branch co-ordination in stf-run-ci (#555)

I think it got broken by an oops recently[1].

Since that change, working_branch (`branch` at that point) is never used because version_branches.sgo has a default value.

This breaks the branch co-ordination in Jenkins[2] and in local testing[3].

[1] https://github.com/infrawatch/service-telemetry-operator/pull/512/files#diff-c073fe1e346d08112920aa0bbc8a7453bbd3032b7a9b09ae8cbc70df4db4ea2dR19
[2] https://github.com/infrawatch/service-telemetry-operator/blob/0f94fd577617aee6a85fc4141f98ebdfc49a9f92/Jenkinsfile#L157
[3] https://github.com/infrawatch/service-telemetry-operator/blob/0f94fd577617aee6a85fc4141f98ebdfc49a9f92/README.md?plain=1#L62

* Adjust Alertmanager SAR to be more specific (#553)

* This matches recent changes in prometheus[1] and grafana[2]

[1] https://github.com/infrawatch/service-telemetry-operator/pull/549/files#diff-2cf84bcf66f12393c86949ec0d3f16c473a650173d55549bb02556d23aa22bd2R46
[2] https://github.com/infrawatch/service-telemetry-operator/pull/550/files#diff-ae71801975adb4f8dd4aa5479a66ad46e46f17de40f9d147b2e09e13ce26633eR45

* Add optional spec.replaces field to CSV for update graph compliance

The way we generate our CSVs uses OLM's skipRange functionality. This is fine,
but using only this leads to older versions becoming unavailable after the
fact -- see the warning at [1].

By adding an optional spec.replaces to our CSV we allow update testing as
well as actual production updates for downstream builds that leverage it.

Populating the field requires knowledge of the latest-released bundle,
so we take it from an environment variable to be provided by the
builder. If this is unset we don't include the spec.replaces field at
all -- leaving previous behavior unchanged.

Resolves #559
Related: STF-1658

[1] https://olm.operatorframework.io/docs/concepts/olm-architecture/operator-catalog/creating-an-update-graph/#skiprange

* Stop using ephemeral storage for testing (#547)

Update the __service_telemetry_storage_persistent_storage_class to use CRC PVs
Use the default value (false) for __service_telemetry_storage_ephemeral_enabled

* [zuul] Use extracted CRC nodes in stf-base (#531)

* [zuul] Update base job for stf-base

* Add in required projects: dataplane-operator, infra-operator, openstack-operator

* Remove nodeset from stf-base
  it overrides the nodeset set in the base job.
  The nodeset is going to be used to select the OCP version

* [zuul] define nodesets for easy reuse

* Define the nodeset
* Rename the base
* Select OCP version with the nodeset

* [zuul] Add a login command to get initial kubeconfig file

* [stf-run-ci] Add retries to pre-clean

* Update galaxy requirements

* [ci] Add retry to login command

* [ci] Configure kubeconfig for rhol_crc role

* Apply suggestions from code review

* Zuul: Update how we get the initial kubeconfig (#558)

* use ci-framework infra playbook
* add make targets to do set-up
* link the kubeconfig files
* Remove pre-get_kubeconfig.yml; the script is no longer used

* [ci] Add common-tasks.yml to cover the tasks that setup every play (#556)

* [zuul] Update the labels used for extracted CRC

* Remove non-default cifmw_rhol_crc_kubeconfig value

* Implement support for Grafana Operator v5 (#561)

* Implement support for Grafana Operator v5

Implement changes to support Grafana Operator v5 when the new
grafana.integreatly.org CRD is available. Use the new CRDs as default
when they are available. Fallover to deploying with Grafana Operator v4
when the Grafana Operator v5 CRDs are not available, thereby providing
backwards compatibility to allow administrators time to migrate.

Additionally, the polystat plugin has been removed from the rhos-cloud
dashboard due to compatibility issues with grafana-cli usage when
dynamically loading plugins. Usage of Grafana Operator v5 is also a
target for disconnected support, and dynamically loading plugins in
these environments is expected to be a problem.

Related: OSPRH-2577
Closes: STF-1667

* Default Grafana role set to Admin

In order to match the previous (Grafana Operator v4) role, set
auto_assign_org_role to the Admin value. Default is Viewer.

* Remove old vendored operator_sdk/util collection (#563)

Remove the old 0.1.0 vendored collection operator_sdk/util from the
upstream Dockerfile and repository. Instead use the default
operator_sdk/util in the base image which is a newer version of 0.4.0.

We only use the util collection for one call to k8s_status when
ephemeral storage is enabled. The newer collection also provides a
k8s_event module which could be useful in the future.

Closes: STF-1683

* Add nightly_bundle jobs to periodic pipeline (#564)

The nightly_bundle jobs will run once a day

* Remove hard-coded Prometheus version in template (#565)

Remove the hard-coded Prometheus version in the Prometheus template when
using observabilityStrategy use_redhat, which uses Cluster Observability
Operator to manage the Prometheus instance requests.

Previously this value was hard-coded to prevent a potential rollback
when moving from Community Prometheus Operator to Cluster Observability
Operator.

Resolves: JIRA#OSPRH-2140

* Set features.operators.openshift.io/disconnected to True (#570)

STF can now be deployed in disconnected mode. This change updates
the features.operators.openshift.io/disconnected annotation to
reflect this.

* [stf-run-ci] Update validation check for bundle URLs (#571)

* [stf-run-ci] Update validation check for bundle URLs

An empty string passed as the bundle URL will pass the existing test
of "is defined" and "is not None" and still be invalid.

The validation for the bundle URL can be done in one check per var:

* If the var is undefined, it becomes "", and the check fails, because of length
* If the var is None, there's an error because None does not have a length
* If the var is an empty string, the check fails because of the length

This simplifies the check and improves readability

* Prefer Grafana 9 workload (#575)

Prefer usage of Grafana 9 container image from RHCC. Grafana 7 is EOL
upstream and receives no security support. Prefer use of Grafana 9 which
is still supported.

---------

Co-authored-by: Leif Madsen <lmadsen@redhat.com>
Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>
Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>
Co-authored-by: Marihan Girgis <102027102+mgirgisf@users.noreply.github.com>
Co-authored-by: Miguel Garcia <migarcia@redhat.com>