Subscription watch tracks usage and capacity at the account level. Account-level reporting means that subscriptions are not directly associated with machines, containers, or service instances.
Subscription watch can be thought of as several services that provide related functionality:
Networking diagrams show how requests are routed.

There are currently three different ways to deploy the components; running them locally is the preferred development workflow.
Local Development
First, ensure you have podman-compose, podman, and Java 17 installed:
sudo dnf install -y podman-compose podman java-17-openjdk-devel
Ensure the checkout has the HBI submodule initialized:
git submodule update --init --recursive
Make sure you have `user.email` and `user.name` set in the local `.git/config`.
The Nebula release plugin has an annoying property: it introspects the Git config for the name and email address. When doing container builds, there is no global Git config, so if you haven't set the name and email locally, the container build will fail. Use `git config --local` to set these values:
git config --local user.name "John Doe"
git config --local user.email johndoe@example.com
NOTE: in order to deploy insights-inventory (not always useful), you'll need to log in to quay.io first.
Start via:
podman-compose up -d
NOTE: if the DB hasn't finished starting up (likely), HBI will fail to start. To remedy: `podman start rhsm-subscriptions_inventory_1`.
For more details about which services are defined, see `docker-compose.yml`.

Note that the compose file assumes that none of the services are already running locally (hint: you might need to `sudo systemctl stop postgresql`). If you want to use only some of the services via podman-compose, then `podman-compose up --no-start` can be used to define the services; you can then manually start containers for the services you wish to deploy locally.

If you prefer to use a local postgresql service, you can use `init_dbs.sh`.
`podman-compose` deploys a kafka instance with a UI at http://localhost:3030.

Two environment variables can be used to manipulate the offsets of the kafka consumers:

- `KAFKA_SEEK_OVERRIDE_END`: when set to `true`, seeks to the very end
- `KAFKA_SEEK_OVERRIDE_TIMESTAMP`: when set to an OffsetDateTime, seeks the queue to this position

These changes are permanent: they are committed the next time the kafka consumer is detected as idle.
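`KAFKA_SEEK_OVERRIDE_TIMESTAMP` accepts any ISO-8601 offset timestamp. As a sketch, one way to generate a suitable value (here, one hour in the past, using GNU date) is:

```shell
# Generate an ISO-8601 timestamp one hour in the past (GNU date); any
# OffsetDateTime-parseable string works as KAFKA_SEEK_OVERRIDE_TIMESTAMP.
SEEK_TS=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
echo "$SEEK_TS"
```

The variable would then be exported when launching a consumer, e.g. `KAFKA_SEEK_OVERRIDE_TIMESTAMP="$SEEK_TS" ./gradlew bootRun` (the launch command here is illustrative).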
Some services like swatch-contracts need an AMQ service to handle the UMB messages. For these services, we can start RabbitMQ as an AMQ service locally via:
podman-compose -f config/rabbitmq/docker-compose.yml up -d
RabbitMQ will be listening on port 5672.

NOTE: Our services might be configured to use a different hostname and port. For example, for the swatch-contracts service, we need to provide `UMB_HOSTNAME` and `UMB_PORT` pointing to RabbitMQ:
java -DUMB_HOSTNAME=localhost -DUMB_PORT=5672 -jar swatch-contracts/build/quarkus-app/quarkus-run.jar
Some services export their logging traces to an external service via an exporter. Exporters act like a broker that decides what to do with these traces. For local development, we can start an OTEL exporter that simply logs the traces into the container logs. We can start it via:
podman-compose -f config/otel/docker-compose.yml up -d
The OTEL exporter will be listening for gRPC connections on port 4317.
NOTE: Our services might be configured to use a different hostname and port. For example, for the swatch-contracts service, we need to provide the `OTEL_ENDPOINT` property pointing to the local OTEL exporter: `java -DOTEL_ENDPOINT=http://localhost:4317 -jar swatch-contracts/build/quarkus-app/quarkus-run.jar`
Some services send their log traces to Splunk. For these services, we can start Splunk via:
podman-compose -f config/splunk/docker-compose.yml up -d
Splunk will be accepting events on port 8088. The web interface is on port 8000 and you can log in as `admin`/`admin123`. There are several different environment variables you can pass to the container, and the documentation is helpful.
NOTE: We need to configure our services to work with this Splunk instance. For example, for the SWATCH Azure producer, we need:
SERVER_PORT=8001 \
ENABLE_SPLUNK_HEC=true \
SPLUNK_HEC_URL=https://localhost:8088 \
SPLUNK_HEC_TOKEN=29fe2838-cab6-4d17-a392-37b7b8f41f75 \
./gradlew :swatch-producer-azure:quarkusDev
Some of these environment variables are our own (e.g. `SPLUNK_HEC_URL`) and are piped into the `quarkus.log.handler.splunk` namespace, which is what ultimately controls the Splunk HEC configuration. By default, SSL/TLS certificate validation is disabled in the `dev` profile for the `swatch-producer-aws`, `swatch-producer-azure`, and `swatch-contracts` projects. If you are seeing an SSL/TLS error (generally something like `unable to find valid certification path to requested target`), then setting `QUARKUS_LOG_HANDLER_SPLUNK_DISABLE_CERTIFICATE_VALIDATION=true` is the sure-fire way to disable SSL/TLS certificate validation.
The Export service is a Red Hat service that allows users to request and download data archives for auditing or for use with their own tooling. Some swatch services, like Subscription Sync, listen for Export service requests and upload the generated report to the Export service. For these services, we can start the Export service via:
NOTE: The export service strongly depends on the Kafka and Postgres services being started first. So, make sure you run `podman-compose up -d` before running the Export service.
podman-compose -f config/export-service/docker-compose.yml up -d
The above export-service will expose the internal API on port 10000 and the public API on port 8002. The subscription-sync service is already configured to use port 10000. To create an export request, you can use the following example as a reference:
curl -sS -X POST http://localhost:8002/api/export/v1/exports -H "x-rh-identity: <IDENTITY>" -H "Content-Type: application/json" -d '{
"name": "Test Export Request",
"format": "json",
"expires_at": "2025-01-01T00:00:00Z",
"sources": [
{
"application": "subscriptions",
"resource": "subscriptions",
"filters": {
"product_id": "rosa"
}
}
]
}'
Where IDENTITY is a base64-encoded payload containing the org_id:
echo -n '{"identity":{"account_number":"<any>","org_id":"<org_id>","internal":{"org_id":"<org_id>"},"type":"User","user":{"username":"<any>"}}}' | base64 -w 0
> eyJpZGVudGl0eSI6eyJhY2NvdW50X251bWJlciI6IjEwMDAxIiwib3JnX2lkIjoiMTMyNTk3NzUiLCJpbnRlcm5hbCI6eyJvcmdfaWQiOiIxMzI1OTc3NSJ9LCJ0eXBlIjoiVXNlciIsInVzZXIiOnsidXNlcm5hbWUiOiJ1c2VyX2RldiJ9fX0=
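To double-check what an `x-rh-identity` header carries, you can decode it back; this sketch uses the example payload above:

```shell
# Decode the example x-rh-identity payload from above back into JSON to
# confirm the org_id it carries.
IDENTITY=eyJpZGVudGl0eSI6eyJhY2NvdW50X251bWJlciI6IjEwMDAxIiwib3JnX2lkIjoiMTMyNTk3NzUiLCJpbnRlcm5hbCI6eyJvcmdfaWQiOiIxMzI1OTc3NSJ9LCJ0eXBlIjoiVXNlciIsInVzZXIiOnsidXNlcm5hbWUiOiJ1c2VyX2RldiJ9fX0=
echo "$IDENTITY" | base64 -d
```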
The above request will trigger an event from the export-service to swatch via topics, and swatch will eventually upload the report using the internal export service API.

The uploaded reports are stored in minio (S3). You can verify that the report has been uploaded by checking the folder `config/export-service/tmp/minio/exports-bucket`, where there should be a file at `<BUCKET ID>.json/xl.meta`. The `xl.meta` file is binary, but if you open it in a plain text editor, you should see:
{"data":[{"sku":"MW02393","product_name":"OpenShift Dedicated","service_level":"Premium","usage":"Production","org_id":"13259775","subscription_number":"13579750","quantity":1,"measurements":[{"metric_id":"Cores","capacity":5.0,"measurement_type":"PHYSICAL"},{"metric_id":"Instance-hours","capacity":1.0,"measurement_type":"PHYSICAL"}]}]}
Furthermore, you can also check the export-service database (by default: localhost:5342, user: postgres, password: postgres, database name: postgres), where we can see the records in the table `sources` with all the statuses. For example:
f9288f52-59dc-4ad6-9307-8c67de75c09c fb177ea7-a8ea-43e4-8c2e-cb4bd7170601 subscriptions failed subscriptions {"product_id": "Wrong!"} 400 ProductId: Wrong! not found in configuration
Wiremock is a very helpful tool that lets us mock other services like Prometheus. At the moment, the wiremock service is configured to stub a Prometheus instance, which is necessary for swatch-metrics. Let's see how to start the swatch-metrics service using the wiremock service as Prometheus. We can start the Wiremock service via:
podman-compose -f config/wiremock/docker-compose.yml up -d
Next, we start the swatch-metrics app using:

SERVER_PORT=8002 QUARKUS_MANAGEMENT_PORT=9002 EVENT_SOURCE=telemeter PROM_URL="http://localhost:8101/api/v1/" ./gradlew :swatch-metrics:quarkusDev

Finally, when syncing all the accounts via:

curl -v -H 'Origin: https://service.redhat.com' -X PUT http://localhost:8002/api/swatch-metrics/v1/internal/metering/sync

we should see some events in the service logs and the Kafka topic.
./gradlew :bootRun
Spring Boot defines many properties that can be overridden via args or environment variables (we prefer environment variables). To determine the environment variable name: uppercase, remove dashes, and replace `.` with `_` (per the Spring docs).

We also define a number of service-specific properties (see Environment Variables).

For example, the `server.port` property (or the `SERVER_PORT` env var) changes the listening port:

SERVER_PORT=9090 ./gradlew :bootRun
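The naming rule above can be sketched as a small shell helper (the function name is our own, for illustration):

```shell
# Derive the environment variable name for a Spring Boot property:
# remove dashes, replace '.' with '_', then uppercase.
to_env_var() {
  printf '%s\n' "$1" | tr -d '-' | tr '.' '_' | tr '[:lower:]' '[:upper:]'
}

to_env_var server.port                     # SERVER_PORT
to_env_var spring.kafka.bootstrap-servers  # SPRING_KAFKA_BOOTSTRAPSERVERS
```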
We have a number of profiles. Each profile activates a subset of components in the codebase.
- `api`: Run the user-facing API
- `capacity-ingress`: Run the internal-only capacity ingress API
- `kafka-queue`: Run with a kafka queue (instead of the default in-memory queue)
- `liquibase-only`: Run the Liquibase migrations and stop
- `rh-marketplace`: Run the worker responsible for processing tally summaries and emitting usage to Red Hat Marketplace
- `purge-snapshots`: Run the retention job and exit
- `worker`: Process jobs off the job queue
These can be specified most easily via the `SPRING_PROFILES_ACTIVE` environment variable. For example:
SPRING_PROFILES_ACTIVE=api,kafka-queue ./gradlew bootRun
Each profile has a `@Configuration` class that controls which components get activated; see ApplicationConfiguration for more details.

If no profiles are specified, the default profiles list in `application.yaml` is applied.
RHSM Subscriptions is meant to be deployed under the context path "/". This unusual configuration is due to external requirements that our application base its context path on the value of an environment variable. Using "/" as the context path means that we can have certain resources (such as health checks) with a known, static name while others can vary based on an environment variable given to the pod.
These are served on port 9000. When running locally, you can access them via http://localhost:9000.
- /health - A Spring Actuator that we use as the k8s liveness/readiness probe.
- /info - An actuator that reads information from `META-INF/build-info.properties` and reports it. The response includes things like the version number.
Both the health actuator and info actuator can be modified, expanded, or extended. Please see the documentation for a discussion of extension points.
rhsm-subscriptions uses an RBAC service to determine application authorization. The RBAC service can be configured via environment variables (see below).

For development purposes, the RBAC service can be stubbed out so that the connection to the RBAC service is bypassed and all users receive the 'subscriptions::' role. This can be enabled by setting `RHSM_RBAC_USE_STUB=true`:

RHSM_RBAC_USE_STUB=true ./gradlew bootRun
- `DEV_MODE`: disable anti-CSRF, account filtering, and RBAC role check
- `DEVTEST_SUBSCRIPTION_EDITING_ENABLED`: allow subscription/offering edits via internal APIs
- `DEVTEST_EVENT_EDITING_ENABLED`: allow event edits via internal APIs
- `PRETTY_PRINT_JSON`: configure Jackson to indent outputted JSON
- `APP_NAME`: application name for URLs (default: rhsm-subscriptions)
- `PATH_PREFIX`: path prefix in the URLs (default: api)
- `INVENTORY_USE_STUB`: use stubbed inventory REST API
- `INVENTORY_API_KEY`: API key for inventory service
- `HOST_LAST_SYNC_THRESHOLD`: reject hosts that haven't checked in since this duration (e.g. 24h)
- `INVENTORY_DATABASE_HOST`: inventory DB host
- `INVENTORY_DATABASE_DATABASE`: inventory DB database
- `INVENTORY_DATABASE_USERNAME`: inventory DB user
- `INVENTORY_DATABASE_PASSWORD`: inventory DB password
- `INVENTORY_DATABASE_SCHEMA`: inventory DB schema
- `PRODUCT_DENYLIST_RESOURCE_LOCATION`: location of the product denylist
- `ACCOUNT_LIST_RESOURCE_LOCATION`: location of the account list (opt-in used otherwise)
- `DATABASE_HOST`: DB host
- `DATABASE_PORT`: DB port
- `DATABASE_DATABASE`: DB database
- `DATABASE_USERNAME`: DB username
- `DATABASE_PASSWORD`: DB password
- `CAPTURE_SNAPSHOT_SCHEDULE`: cron schedule for capturing tally snapshots
- `ACCOUNT_BATCH_SIZE`: number of accounts to tally at once
- `TALLY_RETENTION_HOURLY`: number of hourly tallies to keep
- `TALLY_RETENTION_DAILY`: number of daily tallies to keep
- `TALLY_RETENTION_WEEKLY`: number of weekly tallies to keep
- `TALLY_RETENTION_MONTHLY`: number of monthly tallies to keep
- `TALLY_RETENTION_QUARTERLY`: number of quarterly tallies to keep
- `TALLY_RETENTION_YEARLY`: number of yearly tallies to keep
- `KAFKA_TOPIC`: topic for rhsm-subscriptions tasks
- `KAFKA_GROUP_ID`: kafka consumer group ID
- `KAFKA_CONSUMER_MAX_POLL_INTERVAL_MS`: kafka max poll interval in milliseconds
- `KAFKA_MESSAGE_THREADS`: number of consumer threads
- `KAFKA_CONSUMER_RECONNECT_BACKOFF_MS`: kafka consumer reconnect backoff in milliseconds
- `KAFKA_CONSUMER_RECONNECT_BACKOFF_MAX_MS`: kafka consumer reconnect max backoff in milliseconds
- `KAFKA_API_RECONNECT_TIMEOUT_MS`: kafka connection timeout in milliseconds
- `RHSM_RBAC_USE_STUB`: stub out the RBAC service
- `RHSM_RBAC_APPLICATION_NAME`: name of the RBAC permission application (`<APP_NAME>:*:*`); by default this property is set to 'subscriptions'
- `RHSM_RBAC_URL`: RBAC service URL
- `RHSM_RBAC_MAX_CONNECTIONS`: max concurrent connections to the RBAC service
- `SWATCH_*_PSK`: pre-shared keys for internal service-to-service authentication, where the `*` represents the name of an authorized service
- `ENABLE_SYNCHRONOUS_OPERATIONS`: allow any supported APIs to bypass kafka and run the operation immediately when requested
Clowder
Clowder exposes the services it provides in an OpenShift config map. This config map appears in the container as a JSON file located by default at the path defined by the `ACG_CONFIG` environment variable (typically `/cdapp/cdappconfig.json`). The `ClowderJsonEnvironmentPostProcessor` takes this JSON file and flattens it into Java-style properties (prefixed with the `clowder` namespace).
For example,
{ "kafka": {
"brokers": [{
"hostname": "localhost"
}]
}}
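Assuming conventional JSON flattening, the snippet above would surface as a `clowder.`-prefixed property along these lines (the exact key shape is determined by the post-processor, so treat this as illustrative):

```properties
clowder.kafka.brokers[0].hostname=localhost
```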
These properties are then passed into the Spring Environment and may be used elsewhere (the `ClowderJsonEnvironmentPostProcessor` runs before most other environment processing classes).
The pattern we follow is to assign the Clowder-style property to an intermediate property that follows Spring Boot's environment variable binding conventions. It is important to note that this intermediate property must be given a default via the `${value:default}` syntax. If a default is not provided and the Clowder JSON is not available (such as in development runs), Spring will fail to start because the `clowder.` property will not resolve to anything.
An example of an intermediate property would be
KAFKA_BOOTSTRAP_HOST=${clowder.kafka.brokers:localhost}
This pattern has the useful property of allowing us to override any Clowder settings (in development, for example) with environment variables, since a value specified in the environment has a higher precedence than values defined in config data files (e.g. `application.properties`).
The intermediate property is then assigned to any actual property that we wish to use, e.g. `spring.kafka.bootstrap-servers`. Thus, it is trivial to have a value specified by Clowder, overridden from Clowder via an environment variable, or not given by Clowder at all and instead based on a default.
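Putting the two steps together, a hypothetical `application.properties` excerpt (property names other than `spring.kafka.bootstrap-servers` are illustrative) might read:

```properties
# Intermediate property: Clowder value when present, development default otherwise
KAFKA_BOOTSTRAP_HOST=${clowder.kafka.brokers:localhost}
# Actual Spring property wired to the intermediate
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_HOST}
```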
A Clowder environment can be simulated in development by pointing the `ACG_CONFIG` environment variable to a mock Clowder JSON file. E.g.
$ ACG_CONFIG=$(pwd)/swatch-core/src/test/resources/test-clowder-config.json ./gradlew bootRun
Note that there are three properties which `ClowderJsonEnvironmentPostProcessor` actually creates. The `clowder.endpoint....trust-store-*` properties are synthetic: they don't appear in the `cdappconfig.json` file. Clowder provides the CA information via a `tlsCAPath` property, which is just a pointer to a PEM file. Our code generally doesn't want just a bare PEM file. Instead, we take that path and construct a temporary Java truststore (a PKCS12 file) from it. We pass a reference to that truststore whenever CA information is required.
Following the pattern of clowder-quarkus-config-source, we also configure the truststore information on a per-endpoint basis. To do that, if and only if the endpoint has a `tlsPort` key, we generate three synthetic properties: `trust-store-path`, `trust-store-password`, and `trust-store-type`. We populate those synthetic properties with the information from the generated PKCS12.
However, if an endpoint doesn't have a `tlsPort` defined on it, or if the `tlsCAPath` isn't specified, the `ClowderJsonPropertyResolver` can't resolve the synthetic `clowder.endpoints.[...].trust-store-path` property. The Spring property resolver's behavior is to just return the literal string instead.
- Get a token and log in via `oc login`.
- Switch to the ephemeral namespace via `oc project $namespace`.
- Remotely exec kafka-console-consumer.sh with the desired topic (replace `$topic` below):
oc rsh \
$(oc get pod -o name -l app.kubernetes.io/name=kafka) \
bin/kafka-console-consumer.sh \
--topic $topic \
--from-beginning \
--bootstrap-server localhost:9092
Deploy to Openshift via Templates
Prerequisite secrets:
- `pinhead`: secret with `keystore.jks` - keystore for HTTPS communication with RHSM API (formerly Pinhead).
- `swatch-tally-db`: DB connection info, having `db.host`, `db.port`, `db.user`, `db.password`, and `db.name` properties.
- `host-inventory-db-readonly`: inventory read-only clone DB connection info, having `db.host`, `db.port`, `db.user`, `db.password`, and `db.name` properties.
- `ingress`: secret with `keystore.jks` and `truststore.jks` - keystores for mTLS communication with subscription-conduit.
- `tls`: having `keystore.password`, the password used for capacity ingress.
Prerequisite configmaps:

- `capacity-denylist`: having `product-denylist.txt`, a newline-separated list of the SKUs that have been approved for capacity ingress.
Adjust as desired:
oc process -f templates/rhsm-subscriptions-api.yml | oc create -f -
oc process -f templates/rhsm-subscriptions-capacity-ingress.yml | oc create -f -
oc process -f templates/rhsm-subscriptions-scheduler.yml | oc create -f -
oc process -f templates/rhsm-subscriptions-worker.yml | oc create -f -
Deploying with Bonfire
- `sudo dnf install golang`
- Install `bonfire` following the instructions here.
- Configure `bonfire` to use your checkout. This cat command is just a short-cut so the instructions will be succinct. You should open the file and paste in the name and component bits yourself under the `apps:` key. If you paste in the contents, replace `$(pwd)` with the directory where your subscription-watch checkout is. You can override parameters as shown below, or alternatively with the bonfire `-p` argument during the deploy step. The parameters in the example below are useful for development environments.
bonfire config write-default
cat <<BONFIRE >> ~/.config/bonfire/config.yaml
- name: rhsm #Name of app-sre 'application' folder this component lives in
components:
- name: swatch-tally
host: local
repo: $(pwd)/swatch-tally
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
DEV_MODE: "true"
swatch-tally/IMAGE: quay.io/cloudservices/rhsm-subscriptions
RHSM_RBAC_USE_STUB: "true"
- name: swatch-producer-red-hat-marketplace
host: local
repo: $(pwd)/rhsm-subscriptions/swatch-producer-red-hat-marketplace
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
- name: swatch-metrics
host: local
repo: $(pwd)/swatch-metrics
path: /deploy/clowdapp.yaml
parameters:
DEV_MODE: "true"
REPLICAS: 1
swatch-metrics/IMAGE: quay.io/cloudservices/rhsm-subscriptions
- name: swatch-subscription-sync
host: local
repo: $(pwd)/swatch-subscription-sync
path: /deploy/clowdapp.yaml
parameters:
DEV_MODE: "true"
REPLICAS: 1
swatch-subscription-sync/IMAGE: quay.io/cloudservices/rhsm-subscriptions
- name: swatch-system-conduit
host: local
repo: $(pwd)/swatch-system-conduit
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
swatch-system-conduit/IMAGE: quay.io/cloudservices/swatch-system-conduit
- name: swatch-api
host: local
repo: $(pwd)/rhsm-subscriptions/swatch-api
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
swatch-api/IMAGE: quay.io/cloudservices/rhsm-subscriptions
RHSM_RBAC_USE_STUB: "true"
- name: swatch-producer-aws
host: local
repo: $(pwd)/rhsm-subscriptions/swatch-producer-aws
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
swatch-producer-aws/IMAGE: quay.io/cloudservices/swatch-producer-aws
- name: swatch-contracts
host: local
repo: $(pwd)/rhsm-subscriptions/swatch-contracts
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
swatch-contracts/IMAGE: quay.io/cloudservices/swatch-contracts
- name: swatch-producer-azure
host: local
repo: $(pwd)/rhsm-subscriptions/swatch-producer-azure
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
swatch-producer-azure/IMAGE: quay.io/cloudservices/swatch-producer-azure
- name: swatch-billable-usage
host: local
repo: $(pwd)/rhsm-subscriptions/swatch-billable-usage
path: /deploy/clowdapp.yaml
parameters:
REPLICAS: 1
swatch-billable-usage/IMAGE: quay.io/cloudservices/swatch-billable-usage
BONFIRE
The definitive reference is going to be the "Onboarding to the Ephemeral Cluster" page in the Cloud-dot documentation, but here are some essentials:
- Make sure you're part of the RedHatInsights GitHub org and a member of the `ephemeral-users` role in your file under the `users` directory in app-interface.
- Install `oc` from the CLI Tools Download Page on the cluster.
- Activate your virtualenv for Bonfire: `source $ENV_LOCATION/bin/activate`
- Namespaces can be reserved with `bonfire`. E.g. `bonfire namespace reserve --duration HOURS` will reserve a random available namespace for the number of hours you specify. You can always increase a reservation by reserving the namespace again: `bonfire namespace reserve NAMESPACE`.
- Create an account on quay.io and create an image repository for each component (currently, one for rhsm-subscriptions and one for swatch-system-conduit). Use `podman login` so that you can build and push your test images there.
- You can do the builds with the script in `bin/build-images.sh`. By default, bonfire/clowder use the first 7 characters of the git hash as the image tag. Note that currently Clowder has an enforced image pull policy of "IfNotPresent", so using a static tag (even "latest") is not a workable option.
- When you deploy with bonfire during development, you'll want to specify the image and image tag you want to use like so:
bonfire deploy rhsm-subscriptions -n NAMESPACE --no-remove-resources=rhsm-subscriptions -i quay.io/my-repo/my-image=my-tag -p rhsm-subscriptions/IMAGE=quay.io/my-repo/my-image -i quay.io/my-repo/my-conduit-image=my-tag -p rhsm-subscriptions/CONDUIT_IMAGE=quay.io/my-repo/my-conduit-image
The `-i` argument overrides the image tag that you're using. The `-p` argument overrides parameters in specific ClowdApp components (defined in `~/.config/bonfire/config.yaml`). In this case, we override the `IMAGE` and `CONDUIT_IMAGE` parameters in our template with the image to use.

Note that you can also locally change the images used without the parameters - simply add `IMAGE` and `CONDUIT_IMAGE` to `parameters` in `~/.config/bonfire/config.yaml`. (If you do this, the `-p` arguments to `bonfire` are redundant.)

If you don't specify the tag to use with `-i`, bonfire is going to use the first 7 characters of the git hash for HEAD. If you don't specify the repo with the `-p` argument, `bonfire` is going to use what's defined in the ClowdApp, which is going to be the production image that's been pushed to the official repo.

The `--no-remove-resources=all` argument is extremely important. Without it, bonfire will process the template and will not include our resource requests. This "feature" is to prevent apps from requesting too much, but the default resources given are vastly insufficient for our purposes.
- If you want to reset your ephemeral environment from the RHSM stuff entirely, you can delete the special "app" resource that Clowder creates. So `oc delete app rhsm` will essentially delete all the pods, deployments, etc. associated with RHSM while leaving other apps (like RBAC) in place.
- Expose your pods using `oc port-forward`.
- Here's a one-liner to see who has what ephemeral environment reserved: `oc get project -l ephemeral-ns-reserved -L ephemeral-ns-requester-name,ephemeral-ns-reserved`
- Here's a way to monitor events (useful for tracking down deployment issues): `oc get events --sort-by=.metadata.creationTimestamp`
- You can use port-forwarding to connect a debugger to EE pods, e.g. (replace the deployment name as needed): `oc port-forward deployment/swatch-tally-service 5005:5005`. Then you can connect to localhost:5005 to attach the debugger. For IntelliJ, see the official IntelliJ tutorial.
If you use `bonfire deploy` without already having a namespace reserved, it will reserve the namespace for you, BUT if the app doesn't start up in the default amount of time, bonfire will take down/give up the namespace it reserved to begin with. To get around this, you can manually reserve the namespace, then pass `-n <NAMESPACE>` as an argument when running `bonfire deploy`.
- Start bonfire virtual environment
- Reserve a namespace
- Deploy rhsm with `bonfire deploy -n NAMESPACE`
Merges to main
will trigger deployment to a preprod environment. Production
deployments will be handled in an internal App-SRE automation repo.
See App-SRE documentation on updating dashboards for more info.
Essentially:
- Edit the dashboard on the stage grafana instance.
- Export the dashboard, choosing to "export for sharing externally", save JSON to a file.
- Export the dashboard again, this time not selecting the external sharing option and save that JSON to a file.
- For both pieces of JSON, drop them into the `subscription-watch.json` section under `data` in `grafana-dashboard-subscription-watch.configmap.yaml` and update the indentation.
- Do a `git diff`. Select the export that makes the most sense. In my experience, not selecting the "external sharing" option leads to more correct results. An export formatted for sharing has an `__inputs` section that hardcodes some values we don't want hardcoded.
- Rename the file to `subscription-watch.json`.
OR
- Edit the dashboard on the stage grafana instance.
- Navigate to Dashboard Settings (cogwheel top right of page)
- Navigate to JSON Model (left nav)
- Save the contents of the JSON Model into a file named `subscription-watch.json`.
Use the following command to update the configmap YAML:
oc create configmap grafana-dashboard-subscription-watch --from-file=subscription-watch.json -o yaml --dry-run=true > ./grafana-dashboard-subscription-watch.configmap.yaml
cat << EOF >> ./grafana-dashboard-subscription-watch.configmap.yaml
annotations:
grafana-folder: /grafana-dashboard-definitions/Insights
labels:
grafana_dashboard: "true"
EOF
Possibly useful, to extract the JSON from the k8s configmap file:
oc extract -f .rhcicd/grafana/grafana-dashboard-subscription-watch.configmap.yaml --confirm
Once you extract it from the .yaml that's checked into this repo, you can import it into the stage instance of grafana by going to Dashboards -> +Import
from the left nav.
Links to Swagger UI and API specs:
- Customer-facing API (source)
- Internal Billing Producer API (source)
- Internal Metering API (source)
- Internal Subscriptions API (source)
- Internal Tally API (source)
- Internal Contracts API (source)
- Internal AWS Producer API (source)
- Internal Azure Producer API (source)
- Internal System Conduit API (source)
Topics with their associated profiles and pods
Service that syncs system data from Hosted Candlepin into HBI.

| profile | topic(s) | openshift pod |
|---|---|---|
| | platform.rhsm-subscriptions.metering-tasks | swatch-metrics |
| | platform.rhsm-subscriptions.service-instance-ingress | swatch-metrics |
| | platform.rhsm-subscriptions.metering-rhel-tasks | swatch-metrics-rhel |
| | platform.rhsm-subscriptions.service-instance-ingress | swatch-metrics-rhel |
| orgsync | platform.rhsm-conduit.tasks | swatch-system-conduit-sync |
| orgsync | platform.rhsm-conduit.tasks | swatch-system-conduit |
| | platform.inventory.host-ingress | swatch-system-conduit |
| worker | platform.rhsm-subscriptions.tasks | swatch-tally |
| worker | platform.rhsm-subscriptions.tally | swatch-tally |
| | platform.rhsm-subscriptions.billable-usage | swatch-billable-usage |
| worker | platform.rhsm-subscriptions.service-instance-ingress | swatch-tally |
| purge-snapshots | | |
| rh-marketplace | platform.rhsm-subscriptions.billable-usage | swatch-producer-red-hat-marketplace |
| | platform.rhsm-subscriptions.billable-usage | swatch-producer-aws |
| capacity-ingress | platform.rhsm-subscriptions.subscription-sync | swatch-contracts |
| capacity-ingress | platform.rhsm-subscriptions.offering-sync | swatch-contracts |
| capacity-ingress | platform.rhsm-subscriptions.capacity-reconcile | swatch-contracts |
To generate mock data for BASILISK, use `PRODUCT=BASILISK bin/prometheus-mock-data.sh` (this generates mock data into a prometheus process), or use `bin/import-events.py --file bin/BASILISK.csv` to import BASILISK data directly, bypassing prometheus.
Subscription watch components are licensed GPLv3 (see LICENSE for more details).