Remove basic-auth method from grafana #550
Conversation
csibbitt commented on Dec 5, 2023
* Only openshift auth will be allowed
- -tls-cert=/etc/tls/private/tls.crt
- -tls-key=/etc/tls/private/tls.key
- -upstream=http://localhost:3000
- -cookie-secret-file=/etc/proxy/secrets/session_secret
- -openshift-service-account=grafana-serviceaccount
- '-openshift-sar={"resource": "namespaces", "verb": "get"}'
- '-openshift-sar={"namespace":"{{ ansible_operator_meta.namespace }}","resource": "grafana", "group":"integreatly.org", "verb":"get"}'
Another reminder for myself that this sort of change is release-notes worthy. Basically I'm proposing that, instead of requiring accounts with cluster-wide admin-like permissions (the ability to see all namespaces), we allow any account with access to read grafana objects in our namespace.
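For illustration, here is a minimal RBAC sketch of what granting that access could look like under the proposed SAR. The namespace (service-telemetry) and user (example-user) are placeholder assumptions, not values from this PR; RBAC compares the SAR resource string literally, so the rule includes the singular "grafana" form used in the proxy argument as well as the plural for oc/kubectl convenience.

# Hypothetical example only: grants the permission the new -openshift-sar checks for.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: grafana-oauth-reader
  namespace: service-telemetry   # placeholder namespace
rules:
  - apiGroups: ["integreatly.org"]
    resources: ["grafana", "grafanas"]   # singular form matches the proxy SAR literally
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: grafana-oauth-reader
  namespace: service-telemetry
subjects:
  - kind: User
    name: example-user             # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: grafana-oauth-reader
  apiGroup: rbac.authorization.k8s.io

An account bound like this would pass the proxy's SAR check without needing cluster-wide permission to read namespaces.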
FWIW, making this SAR the same as the one that I'm proposing we use on prometheus (permission to read prometheus objects in our namespace) would also make sense to me.
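As a sketch of that alternative, the Grafana proxy argument would carry the Prometheus-side resource attributes instead; the resource and group below are assumptions for illustration only (the authoritative values would come from the prometheus change in #549):

# Assumed values, not taken from this PR:
- '-openshift-sar={"namespace":"{{ ansible_operator_meta.namespace }}","resource": "prometheuses", "group":"monitoring.rhobs", "verb":"get"}'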
I had to read through https://github.com/openshift/oauth-proxy#limiting-access-to-users to get a better understanding about what this was doing, but now I get it :)
Is this all standalone code, or does it depend on any of the other work you've been doing around authentication access?
Standalone with no other dependencies. The basic auth only existed to provide backwards compatibility with what we had before we implemented oauth (https://github.com/infrawatch/service-telemetry-operator/pull/281/files#diff-ae71801975adb4f8dd4aa5479a66ad46e46f17de40f9d147b2e09e13ce26633eL19). Now that OAuth has been in place for several releases (since 1.4.0), and our threat model suggests removing all basic-auth and passwords in CRs, we can just turn this all off.
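Since access is now gated only by that SAR, a quick way to check whether a given account will get through the proxy is to submit the equivalent access review as that user; this is a plain Kubernetes API call, and the service-telemetry namespace below is an assumption about where STF is deployed.

# SelfSubjectAccessReview mirroring the grafana oauth-proxy SAR.
# Submit with: oc create -f ssar.yaml -o yaml   and inspect .status.allowed
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    namespace: service-telemetry   # placeholder namespace
    group: integreatly.org
    resource: grafana
    verb: get

If .status.allowed comes back false, the account needs a role binding granting get on grafana objects in the namespace rather than a Grafana password.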
Awesome! Then approved and ready for merge at your discretion :)
Build failed (check pipeline). Post https://review.rdoproject.org/zuul/buildset/703375534b404088b4cee5a17d8a5bb1
✔️ stf-crc-ocp_412-nightly_bundles SUCCESS in 24m 16s
recheck
* Only openshift auth will be allowed
* Add gitleaks.toml for rh-gitleaks (#510). Add a .gitleaks.toml file to avoid the false positive leak for the example certificate when deploying for Elasticsearch.
* [stf-collect-logs] Move describe build|pod from ci/ to the role (#505)
* [stf-run-ci] Fix check to include bool filter (#511). Update the check to use the bool filter instead of a bare var. By default, ansible parses vars as strings, and without the | bool filter, this check is invalid, as it will always resolve to true, since it is a non-empty string. Other instances of the same check did this, but this one was missed.
* [allow_skip_clone] Allow skipping of the cloning stages (#512)
* [allow_skip_clone] Use <repo>_dir instead of hardcoding all directories relative to base_dir. This will allow configuration of the repo clone destination, so we can use pre-cloned dirs instead of explicitly cloning the dirs each time. This is essential for CI systems like zuul, that set up the repos with particular versions/branches prior to running the test scripts.
* [zuul] List the other infrawatch repos as required for the job
* [zuul] Set the {sgo,sg-bridge,sg-core,prometheus-webhook-snmp}_dir vars. Add in the repo dir locations where the repos should be pre-cloned by zuul.
* Replace base_dir with sto_dir
* Set sto_dir relative to base_dir if it isn't already set
* [ci] Use absolute dir for requirements.txt
* [ci] Update sto_dir using explicit reference. zuul.project.src_dir refers to the current project dir. When using the jobs in another infrawatch project, this becomes invalid. Instead, sto_dir is explicitly set using zuul.projects[<project_name>].src_dir, the same way that the other repo dirs are set in vars-zuul-common.
  Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>
* Fix qdr auth one_time_upgrade label check (#518)
* Fix incorrect variable naming on one_time_upgrade label check
* Adjust QDR authentication password generation (#520). Adjust the passwords being generated for QDR authentication since certain characters (such as a colon) will cause a failure in the parsing routine within qpid-dispatch. Updates the lookup function to only use ascii_letters and digits and increases the length to 32 characters.
  Co-authored-by: Leif Madsen <lmadsen@redhat.com>
* Add docs for skip_clone (#515)
* [allow_skip_clone] Add docs for clone_repos and *_dir vars
* Align README table column spacing (#516)
* Update build/stf-run-ci/README.md
  Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>
  Co-authored-by: Leif Madsen <lmadsen@redhat.com>
* [zuul] Add STO to required repos (#524). It appears that STO is not included explicitly when running jobs from SGO [1]. This will be the case in all the other repos. This change explicitly adds it, in case it's not already included by zuul.
  [1] https://review.rdoproject.org/zuul/build/edd8f17bfdac4360a94186b46c4cea3f
* QDR Auth in smoketest (#525)
* Added qdr-test as a mock of the OSP-side QDR
* Connection from qdr-test -> default-interconnect is TLS+Auth
* Collectors point at qdr-test instead of default-interconnect directly
* Much more realistic than the existing setup
* Eliminated a substitution in sensubility config
* Used default QDR basic auth in Jenkinsfile
* QDR Auth for infrared 17.1 script (#517)
* Fix missing substitution for AMQP_PASS in infrared script
* [zuul] Define a project template for stf-crc-jobs (#514)
* [allow_skip_clone] Use <repo>_dir instead of hardcoding all directories relative to base_dir. This will allow configuration of the repo clone destination, so we can use pre-cloned dirs instead of explicitly cloning the dirs each time. This is essential for CI systems like zuul, that set up the repos with particular versions/branches prior to running the test scripts.
* [zuul] List the other infrawatch repos as required for the job
* [zuul] Set the {sgo,sg-bridge,sg-core,prometheus-webhook-snmp}_dir vars. Add in the repo dir locations where the repos should be pre-cloned by zuul.
* Replace base_dir with sto_dir
* Set sto_dir relative to base_dir if it isn't already set
* [ci] Use absolute dir for requirements.txt
* [ci] Update sto_dir using explicit reference. zuul.project.src_dir refers to the current project dir. When using the jobs in another infrawatch project, this becomes invalid. Instead, sto_dir is explicitly set using zuul.projects[<project_name>].src_dir, the same way that the other repo dirs are set in vars-zuul-common.
* [zuul] Define a project template for stf-crc-jobs. Instead of listing all the jobs for each project in-repo, and needing to update the list every time that a new job is added, the project template can be updated and the changes propagated to the other infrawatch projects.
* [zuul] Don't enable using the template
* Revert "[zuul] don't enable using the template". This reverts commit 56e2009.
  Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>
* Restart QDR after changing the password (#530)
* Fixes bug reported here: #517 (comment)
* Avoids an extra manual step when changing password
* Would affect users who upgrade from earlier STF and subsequently enable basic auth
* Also users who need to change their passwords
* Fixing ansible lint
* Update roles/servicetelemetry/tasks/component_qdr.yml
* Adjust QDR restarts to account for HA
* [smoketest] Wait for qdr-test to be Running
* [smoketest] Wait for QDR password upgrade
* Remove zuul QDR auth override
* [zuul] Add jobs to test with different versions of OCP (#432)
* Add crc_ocp_bundle value to select OCP version
* zuul: add log collection post-task to get crc logs
* Add ocp v13 and a timeout to the job
* Update README for 17.1 IR test (#533). Update the 17.1 infrared test script README to show how to deploy a virtualized workload on the deployed overcloud infrastructure. Helps with testing by providing additional telemetry to STF required in certain dashboards.
* Update tests/infrared/17.1/README.md
  Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>
* Support OCP v4.12 through v4.14 (#535). Support STF 1.5.3 starting at OpenShift version 4.12, since dependency requirements are incompatible with 4.11. Our primary target is support of OCP EUS releases. Closes: STF-1632
* [stf-collect-logs] Add ignore_errors to task (#529). The "Question the deployment" task didn't have ignore_errors: true set, so when the task fails, the play is finished. This means that we don't get to the "copy logs" task and can't see the job logs in zuul. ignore_errors is set to true to be consistent with other tasks.
* Mgirgisf/stf 1580/fix log commands (#526)
* Update stf-collect-logs tasks
* Update log path
* Solve log bugs in stf-run-ci tasks
* Create log directory
* Adjust Operator dependency version requirements (#538). Adjust the operator package dependency requirements to align to known required versions. Primarily reduce the version of openshift-cert-manager from 1.10 to 1.7 in order to support the tech-preview channel which was previously used. Lowering the version requirement allows the openshift-cert-manager-operator installed previously to be used during the STF 1.5.2 to 1.5.3 update, removing the update from being blocked. Related: STF-1636
* Clean up stf-run-ci for OCP 4.12 minimum version (#539). Update the stf-run-ci base setup to no longer need testing against OCP 4.10 and earlier, meaning we can rely on a single workflow for installation. Also update the deployment to use cluster-observability-operator via the redhat-operators CatalogSource for installation via use_redhat and use_hybrid strategies.
* [zuul] Add job to build locally and do an index-based deployment (#495)
* Only require Interconnect and Smart Gateway (#541). Update the dependency management within Service Telemetry Operator to only require AMQ Interconnect and Smart Gateway Operator, which is enough to deploy STF with observabilityStrategy: none. Other Operators can be installed in order to satisfy data storage of telemetry and events. Installation of cert-manager is also required, but needs to be pre-installed similar to Cluster Observability Operator, either as a cluster-scoped operator with the tech-preview channel, or a single time on the cluster as a namespace-scoped operator, which is how the stable-v1 channel installs. Documentation will be updated to adjust for this change. Related: STF-1636
* Perform CI update to match docs install changes (#542). Update the stf-run-ci scripting to match the documented installation procedures which landed in infrawatch/documentation#513. These changes are also reflected in #541.
* Update build/stf-run-ci/tasks/setup_base.yml
  Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>
* Also drop cert-manager project. The cert-manager project gets created with workload items when deploying cert-manager from the cert-manager-operator project. When removing cert-manager this project is not cleaned up, so we need to delete it as well.
  Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>
* [stf-run-ci] Explicitly check the validate_deployment was successful (#545). In [1], the validate_deployment step is successful, despite the deployment not being successful. This causes the job to time out because the following steps continue to run despite an invalid state. To get the expected behaviour, the output should be checked for a string indicating success, i.e. "[info] CI Build complete. You can now run tests." [2] shows the output for a successful run.
  [1] https://review.rdoproject.org/zuul/build/245ae63e41884dc09353d938ec9058d7/console#5/0/144/controller
  [2] https://review.rdoproject.org/zuul/build/802432b23da24649b818985b7b1633bb/console#5/0/82/controller
* Implement dashboard management (#548). Implement a new configuration option graphing.grafana.dashboards.enabled which results in dashboard objects being created for the Grafana Operator. Previously, loading dashboards would be done manually via 'oc apply' using instructions from documentation. The new CRD parameters to the ServiceTelemetry object allow the Service Telemetry Operator to now create the GrafanaDashboard objects directly. Related: OSPRH-825
* Drop unnecessary cluster roles
* Update CSV for owned parameter
* Remove basic-auth method from grafana (#550)
* Only openshift auth will be allowed
* Adjust Alertmanager SAR to be more specific. This matches recent changes in prometheus[1] and grafana[2].
  [1] https://github.com/infrawatch/service-telemetry-operator/pull/549/files#diff-2cf84bcf66f12393c86949ec0d3f16c473a650173d55549bb02556d23aa22bd2R46
  [2] https://github.com/infrawatch/service-telemetry-operator/pull/550/files#diff-ae71801975adb4f8dd4aa5479a66ad46e46f17de40f9d147b2e09e13ce26633eR45
* Revert "Adjust Alertmanager SAR to be more specific". This reverts commit 0f94fd5.
* Auth to prometheus using token instead of basicauth (#549)
* Add present/absent logic to prometheus-reader resources
* s/password/token in smoketest output
* [zuul] Make nightly_bundles jobs non-voting (#551)
  Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>
* Fix branch co-ordination in stf-run-ci (#555). I think it got broken by an oops recently[1]. Since that change, working_branch (`branch` at that point) is never used because version_branches.sgo has a default value. This breaks the branch co-ordination in Jenkins[2] and in local testing[3].
  [1] https://github.com/infrawatch/service-telemetry-operator/pull/512/files#diff-c073fe1e346d08112920aa0bbc8a7453bbd3032b7a9b09ae8cbc70df4db4ea2dR19
  [2] https://github.com/infrawatch/service-telemetry-operator/blob/0f94fd577617aee6a85fc4141f98ebdfc49a9f92/Jenkinsfile#L157
  [3] https://github.com/infrawatch/service-telemetry-operator/blob/0f94fd577617aee6a85fc4141f98ebdfc49a9f92/README.md?plain=1#L62
* Adjust Alertmanager SAR to be more specific (#553). This matches recent changes in prometheus[1] and grafana[2].
  [1] https://github.com/infrawatch/service-telemetry-operator/pull/549/files#diff-2cf84bcf66f12393c86949ec0d3f16c473a650173d55549bb02556d23aa22bd2R46
  [2] https://github.com/infrawatch/service-telemetry-operator/pull/550/files#diff-ae71801975adb4f8dd4aa5479a66ad46e46f17de40f9d147b2e09e13ce26633eR45
* Add optional spec.replaces field to CSV for update graph compliance. The way we generate our CSVs uses OLM's skipRange functionality. This is fine, but using only this leads to older versions becoming unavailable after the fact; see the warning at [1]. By adding an optional spec.replaces to our CSV we allow update testing as well as actual production updates for downstream builds that leverage it. Populating the field requires knowledge of the latest-released bundle, so we take it from an environment variable to be provided by the builder. If this is unset we don't include the spec.replaces field at all, leaving previous behavior unchanged. Resolves #559. Related: STF-1658
  [1] https://olm.operatorframework.io/docs/concepts/olm-architecture/operator-catalog/creating-an-update-graph/#skiprange
* Stop using ephemeral storage for testing (#547). Update __service_telemetry_storage_persistent_storage_class to use CRC PVs. Use the default value (false) for __service_telemetry_storage_ephemeral_enabled.
* [zuul] Use extracted CRC nodes in stf-base (#531)
* [zuul] Update base job for stf-base
* Add in required projects: dataplane-operator, infra-operator, openstack-operator
* Remove nodeset from stf-base; it overrides the nodeset set in the base job. The nodeset is going to be used to select the OCP version.
* [zuul] Define nodesets for easy reuse
* Define the nodeset
* Rename the base
* Select OCP version with the nodeset
* [zuul] Add a login command to get initial kubeconfig file
* [stf-run-ci] Add retries to pre-clean
* Update galaxy requirements
* [ci] Add retry to login command
* [ci] Configure kubeconfig for rhol_crc role
* Apply suggestions from code review
* Zuul: Update how we get the initial kubeconfig (#558)
* Use ci-framework infra playbook
* Add make targets to do set-up
* Link the kubeconfig files
* Remove pre-get_kubeconfig.yml; the script is no longer used
* [ci] Add common-tasks.yml to cover the tasks that set up every play (#556)
* [zuul] Update the labels used for extracted CRC
* Remove non-default cifmw_rhol_crc_kubeconfig value
* Implement support for Grafana Operator v5 (#561). Implement changes to support Grafana Operator v5 when the new grafana.integreatly.org CRD is available. Use the new CRDs as default when they are available. Fall back to deploying with Grafana Operator v4 when the Grafana Operator v5 CRDs are not available, thereby providing backwards compatibility to allow administrators time to migrate.
  Additionally, the polystat plugin has been removed from the rhos-cloud dashboard due to compatibility issues with grafana-cli usage when dynamically loading plugins. Usage of Grafana Operator v5 is also a target for disconnected support, and dynamically loading plugins in these environments is expected to be a problem. Related: OSPRH-2577. Closes: STF-1667
* Default Grafana role set to Admin. In order to match the previous (Grafana Operator v4) role, set auto_assign_org_role to the Admin value. Default is Viewer.
* Remove old vendored operator_sdk/util collection (#563). Remove the old 0.1.0 vendored collection operator_sdk/util from the upstream Dockerfile and repository. Instead use the default operator_sdk/util in the base image, which is a newer version (0.4.0). We only use the util collection for one call to k8s_status when ephemeral storage is enabled. The newer collection also provides a k8s_event module which could be useful in the future. Closes: STF-1683
* Add nightly_bundle jobs to periodic pipeline (#564). The nightly_bundle jobs will run once a day.
* Remove hard-coded Prometheus version in template (#565). Remove the hard-coded Prometheus version in the Prometheus template when using observabilityStrategy use_redhat, which uses Cluster Observability Operator to manage the Prometheus instance requests. Previously this value was hard-coded to prevent a potential rollback when moving from Community Prometheus Operator to Cluster Observability Operator. Resolves: JIRA#OSPRH-2140
* Set features.operators.openshift.io/disconnected to True (#570). STF can now be deployed in disconnected mode. This change updates the features.operators.openshift.io/disconnected annotation to reflect this.
* [stf-run-ci] Update validation check for bundle URLs (#571). An empty string passed as the bundle URL will pass the existing test of "is defined" and "is not None" and still be invalid. The validation for the bundle URL can be done in one check per var:
  * If the var is undefined, it becomes "", and the check fails because of the length
  * If the var is None, there's an error because None does not have a length
  * If the var is an empty string, the check fails because of the length
  This simplifies the check and improves readability.
* Prefer Grafana 9 workload (#575). Prefer usage of the Grafana 9 container image from RHCC. Grafana 7 is EOL upstream and receives no security support. Prefer use of Grafana 9, which is still supported.
  Co-authored-by: Leif Madsen <lmadsen@redhat.com>
  Co-authored-by: Emma Foley <elfiesmelfie@users.noreply.github.com>
  Co-authored-by: Chris Sibbitt <csibbitt@redhat.com>
  Co-authored-by: Marihan Girgis <102027102+mgirgisf@users.noreply.github.com>
  Co-authored-by: Miguel Garcia <migarcia@redhat.com>