Description
Status: Open for comments
Need
Plugins often deal with external systems and reference things inside them via annotations, e.g. jenkins.io/github-folder: 'folder-name/project-name' or datadoghq.com/graph-token: <TOKEN>
Sometimes (especially for SaaS systems) there is a single instance and references are obvious (e.g. there is a single datadog.com and a graph token is a reference into that single namespace regardless of whose graph it is).
In other scenarios, there are multiple instances of an external system and each is its own namespace (i.e. a jenkins job called "my-service/build-job" is only meaningful in the context of a single jenkins instance).
There are many reasons for wanting multiple instances of an external system, including poor organisation design (which backstage would be a good first step in fixing ;-) ), acquisitions, or isolation between teams (perhaps we deliberately provision a separate jenkins instance for each team).
Finally, I think this is a pattern which can be applied to a significant number of plugins, and a consistent approach would be easier for users to understand.
Proposal
When defining a configuration schema for your backend plugin, allow for multiple named external systems (in this kafka example, a cluster is a named external system):
```yaml
kafka:
  clientId: backstage
  clusters:
    - name: cluster-name
      brokers:
        - localhost:9092
```
When defining the value of an annotation, include the name of the external system as a /-separated prefix, e.g. for kafka, the annotation is:
```yaml
kafka.apache.org/consumer-groups: cluster-name/consumer-group-name
```
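To make this concrete, here is a minimal sketch of how a backend plugin might consume such an annotation. parseAnnotationValue and getClusterConfig are illustrative names, not an existing Backstage API; the sketch assumes the kafka config shape shown above:

```typescript
import { Config } from '@backstage/config';

// Hypothetical helper: split an annotation value of the form
// "<external-system-name>/<reference>" on its first "/".
export function parseAnnotationValue(value: string): {
  systemName: string;
  reference: string;
} {
  const separator = value.indexOf('/');
  if (separator === -1) {
    throw new Error(
      `Expected annotation value "${value}" to be prefixed with an external system name and "/"`,
    );
  }
  return {
    systemName: value.slice(0, separator),
    reference: value.slice(separator + 1),
  };
}

// Hypothetical lookup of a named kafka cluster from the config shown above.
export function getClusterConfig(rootConfig: Config, systemName: string): Config {
  const cluster = rootConfig
    .getConfigArray('kafka.clusters')
    .find(c => c.getString('name') === systemName);
  if (!cluster) {
    throw new Error(`No kafka cluster named "${systemName}" in config`);
  }
  return cluster;
}
```

For the annotation above, parseAnnotationValue('cluster-name/consumer-group-name') yields { systemName: 'cluster-name', reference: 'consumer-group-name' }, and getClusterConfig then returns the named cluster block, giving the plugin its broker list.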
multiple external systems with duplicated namespace
Some plugins are currently configured with a list of external systems, and the thing an annotation refers to is expected to be present in all of them (i.e. they share a duplicated namespace). Kubernetes is one example:
```yaml
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: http://127.0.0.1:9999
          name: minikube
          authProvider: 'serviceAccount'
          skipTLSVerify: false
          serviceAccountToken: ${K8S_MINIKUBE_TOKEN}
        - url: http://127.0.0.2:9999
          name: aws-cluster-1
          authProvider: 'aws'
    - type: 'gke'
      projectId: 'gke-clusters'
      region: 'europe-west1'
```
This config defines a number of clusters (2 for local testing, plus as many as can be found in the gke-clusters project on Google), but if an entity was annotated with 'backstage.io/kubernetes-id': dice-roller we would expect to find a dice-roller pod in every one of those clusters (let's say dev, test and prod).
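As a rough sketch of that duplicated-namespace behaviour (the function names and shapes here are hypothetical, not the real plugin internals), the same id is simply looked up in every configured cluster:

```typescript
// Hypothetical sketch: look up the same kubernetes-id in every configured
// cluster and collect the results per cluster name.
export async function fetchFromAllClusters<T>(
  clusters: Array<{ name: string; url: string }>,
  kubernetesId: string,
  fetchFromCluster: (clusterUrl: string, id: string) => Promise<T[]>,
): Promise<Record<string, T[]>> {
  const results: Record<string, T[]> = {};
  for (const cluster of clusters) {
    // Every cluster is expected to contain objects labelled with the same id.
    results[cluster.name] = await fetchFromCluster(cluster.url, kubernetesId);
  }
  return results;
}
```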
If we now imagine 2 departments, each with their own kubernetes clusters, we should not expect to find the same dice-roller pod in both departments' clusters, and when we annotate our entity we should instead use 'backstage.io/kubernetes-id': department-a/dice-roller. For this setup, we would define the departments' clusters in config as follows:
```yaml
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterGroups:
    - name: department-a
      clusterLocatorMethods:
        - type: 'config'
          clusters:
            - url: http://127.0.0.1:9999
              name: minikube
              authProvider: 'serviceAccount'
              skipTLSVerify: false
              serviceAccountToken: ${K8S_MINIKUBE_TOKEN}
            - url: http://127.0.0.2:9999
              name: aws-cluster-1
              authProvider: 'aws'
        - type: 'gke'
          projectId: 'gke-clusters'
          region: 'europe-west1'
    - name: department-b
      clusterLocatorMethods:
        - type: 'gke'
          projectId: 'gke-clusters-deptB'
          region: 'europe-west1'
```
I suspect this is what serviceLocatorMethod is trying to solve, but I'm not familiar with the plans for this field, so comments from people who are would be particularly welcome.
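Here is a minimal sketch of how the proposed clusterGroups config could be resolved from an annotation value such as 'department-a/dice-roller'. resolveClusterGroup is an illustrative name and assumes the config structure above:

```typescript
import { Config } from '@backstage/config';

// Hypothetical: split the annotation value into group name and kubernetes-id,
// then return only the locator methods defined for that group.
export function resolveClusterGroup(
  rootConfig: Config,
  annotationValue: string,
): { locatorMethods: Config[]; kubernetesId: string } {
  const separator = annotationValue.indexOf('/');
  if (separator === -1) {
    throw new Error(
      `Expected "${annotationValue}" to be prefixed with a cluster group name`,
    );
  }
  const groupName = annotationValue.slice(0, separator);
  const kubernetesId = annotationValue.slice(separator + 1);

  const group = rootConfig
    .getConfigArray('kubernetes.clusterGroups')
    .find(g => g.getString('name') === groupName);
  if (!group) {
    throw new Error(`No cluster group named "${groupName}" in config`);
  }
  return {
    locatorMethods: group.getConfigArray('clusterLocatorMethods'),
    kubernetesId,
  };
}
```

Only the clusters located by department-a's methods would then be queried for dice-roller pods; department-b's clusters would be left alone.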
multiple external systems with split namespace
I'm not sure if this is a requirement, but I can imagine plugins which would want to list multiple external systems where some annotation values are present in one and some in another, and the backend doesn't need to know which because it will try them in turn.
I suggest this is handled similarly to the kubernetes example above.
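A sketch of what "try them in turn" could mean for such a plugin, with a hypothetical per-system lookup function injected by the caller:

```typescript
// Hypothetical: when the backend does not know which named external system
// holds the reference, try each configured system in order and return the
// first hit (or undefined if none of them know about it).
export async function findInAnySystem<T>(
  systemNames: string[],
  reference: string,
  lookup: (systemName: string, reference: string) => Promise<T | undefined>,
): Promise<{ systemName: string; result: T } | undefined> {
  for (const systemName of systemNames) {
    const result = await lookup(systemName, reference);
    if (result !== undefined) {
      return { systemName, result };
    }
  }
  return undefined;
}
```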
Alternatives
In addition, plugins could support a hook for returning the appropriate config (perhaps dynamically generated) for a given entity. This would default to reading the first part of the annotation value and looking it up in the config but could be replaced with something completely dynamic (search another system by entityRef or use {hostname: `${entity.spec.owner}.jenkins.example.com`})
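As a sketch of such a hook (the interface name and shape are illustrative only, using jenkins as the example), the default implementation would do the prefix-plus-config lookup, while a custom one could derive connection details from the entity itself:

```typescript
import { Entity } from '@backstage/catalog-model';

// Hypothetical extension point: given an entity and its annotation value,
// return the connection details for the external system to use.
export interface JenkinsInstanceResolver {
  resolve(
    entity: Entity,
    annotationValue: string,
  ): Promise<{ hostname: string }>;
}

// A fully dynamic resolver that ignores config and assumes one jenkins
// instance per owning team, e.g. team-a.jenkins.example.com.
export const perTeamJenkinsResolver: JenkinsInstanceResolver = {
  async resolve(entity) {
    return { hostname: `${entity.spec?.owner}.jenkins.example.com` };
  },
};
```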
Risks
Outstanding Questions
- Is this just for plugins which have a specific backend plugin and authenticate to the external system as backstage?
- Should we explicitly support a "default" named external system so we can avoid the prefix in the annotation in the simple case? Does this make parsing too ambiguous if the part after the (now optional) / could itself contain a /? If we don't do this, is backwards compatibility too hard/ugly to maintain?
- Do we need to standardise the name of the config property storing the list-of-named-external-systems (clusters in the kafka example above) or is this going too far?