Sidecar usage extension for the kiali operator. #5028
Comments
I don't fully grok the proposed solution, but... wouldn't it be simpler to just update the Kiali CR with the new prometheus credentials? The operator will see the change and immediately propagate the new credentials down to the Kiali ConfigMap and automatically restart the Kiali pod so it picks up the new credentials. This is already supported. You would just need to patch the Kiali CR with the new credentials in the |
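For reference, the credentials live under the Prometheus section of the Kiali CR, so a patch would touch roughly this fragment (a sketch, assuming bearer-token auth; field names follow the Kiali CR schema):

```yaml
spec:
  external_services:
    prometheus:
      auth:
        type: bearer
        token: "<freshly-issued-token>"
```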
My guess is that it comes down to efficiency. I note that @libesz says that the token expires "very often". The typical expiration I've seen for these cases is 5 minutes. You will want to roll out a fresh token before the old one expires to avoid the situation where Kiali can no longer reach Prometheus, so let's assume that you refresh every 4 minutes. That's 15 updates in an hour. The operator will use far more resources for all these reconciliations (which may also flood the kubeapi) than a sidecar would use by doing a simple request to refresh a token. Also, a sidecar can refresh on-demand (i.e. if nobody uses Kiali for some time, there is no need to refresh the token). @libesz I would like to ask if your Prometheus lives in the same cluster where Kiali would live? If it does, I would think it is easier to connect Kiali directly to Prometheus (skipping whatever is securing it) and protect the Prometheus API only for incoming out-of-cluster requests. If not possible... well... we need to think about it... |
I was not aware that a config change in the Kiali CR triggers a restart of the kiali pod, good to know! However, as @israel-hdez said, it would be great to run this exchange on demand. I would avoid flooding our IAM system when no one is using the Kiali dashboard (I assume Kiali does not pull Prometheus data otherwise). This is a managed service, so a lot of instances are going to be installed eventually by different customers. As this proxy would sit in the data path, it would be fully on-demand. The tokens are valid for 20 minutes in this scenario, and I think it would be bad UX to wait for a Kiali restart every 20 minutes. With a single kiali pod, it takes 20-30 seconds to become Ready in our system. Or do we have any HA solution to overcome this? @israel-hdez the thing that provides the PromQL API is also an external managed service, with a remote endpoint (the one we asked the custom http headers for last year 🙂). |
If this is all you need to get done what you need, it would be the simplest solution and the easiest for us to implement on the Kiali side. We'd have to figure out what the Kiali CR yaml would look like for this. Perhaps something like:

```yaml
spec:
  deployment:
    pod_sidecar_yaml:
      name: my_sidecar
      image: your/sidecar/image:v1.0
      command: ...
      # ...and whatever else needs to go in this named container...
```

When the Kiali operator creates the Kiali Server Deployment, it would add this as a container to the list of existing Kiali containers (of which there is only one, the main kiali server binary):

```yaml
kind: Deployment
metadata:
  name: kiali
spec:
  template:
    spec:
      containers:
      - name: kiali
        image: quay.io/kiali/kiali:v1.50.0
        command: ['/opt/kiali/kiali', '-config', '/kiali-configuration/config.yaml']
      ### HERE IS WHERE THE SIDECAR YAML WILL GO ###
      - name: my_sidecar
        image: your/sidecar/image:v1.0
        command: ...
        # ...and the rest...
```

Is this what you need, and will it work for your use-case? Would there ever be a need to define multiple sidecars/containers? The above example only supports one. I suppose we could support multiple as a generic solution:

```yaml
spec:
  deployment:
    additional_pod_containers_yaml:
    - name: my_sidecar
      image: your/sidecar/image:v1.0
      command: ...
      # ...and whatever else needs to go in this named container...
    - name: my_sidecar2
      image: your/sidecar2/image:v2.2
      command: ...
      # ...and whatever else needs to go in this second named container...
``` |
@jmazzitelli yep, these were exactly the ideas in my mind as well. So far I would think one is enough, but a generic approach allowing multiple sidecars could be more future-proof. |
part of: kiali/kiali#5028 operator PR: kiali/kiali-operator#524
Because this potentially has security implications (you are starting an ad-hoc container in the Kiali pod and all that entails), we are going to add a new "allow-ad-hoc" setting in the operator. "allowAdHocContainers" must be set to true if you want to allow Kiali CR creators to be able to set that. By default, it will be |
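If the setting mirrors the existing allowAdHoc* operator options, enabling it would be a one-line value when installing the operator (a sketch; the exact value name comes from the draft PRs and may change):

```yaml
# kiali-operator helm chart values (hypothetical)
allowAdHocContainers: true
```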
The PRs are still in draft mode, because we do not yet know if we want to merge this into master. Waiting to hear from @libesz if this is truly what he needs. That said, the PRs are finished and ready to be tested/reviewed. To test the new functionality:
Notice the resultant output shows the echoserver container image name as well as the Kiali container image name. This shows you have an added container in the Kiali pod. You can look at the Deployment yaml to confirm you have two containers (but the above shows that you do). |
Thank you for the quick implementation! I already have a working poc sidecar that solves the auth issues I have with the promql API. Will try it out with the new operator asap. |
I was worried about that :) This is going to make it more complicated because now we are adding volumes which are outside of the container. I'm starting to worry if this is going to introduce more security holes that I don't see. Do you require volumes? You can easily pass env vars (those are part of the container definition), but if the env values are coming from mounting things, that might be an issue. |
BTW: there is a way today that you can mount your own secrets already. See: This may be all you need. |
Here is how you can test this to see that you can pass data to your container via secrets. This is how I believe we will want to support this in Kiali (as opposed to, say, providing ad-hoc volume mounts to other file systems). This is using the already-existing feature of "spec.deployment.custom_secrets"
|
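A minimal sketch of that already-existing mechanism, assuming a hypothetical secret named my-sidecar-credentials that the sidecar would read from its mounted path:

```yaml
spec:
  deployment:
    custom_secrets:
    - name: my-sidecar-credentials
      mount: /sidecar-secrets
      optional: false
```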
@jmazzitelli thanks! Will try it out tomorrow. |
@jmazzitelli I think I am mostly done with my tests. So far one interesting behavior popped up. |
Ah... that probably has to do with the way k8s does patch-merge to lists. I've seen this same kind of thing before with other settings. I believe the way you'd have to do it (short of removing and re-creating the CR like you did) is you need to edit the CR to set to "null" the additional containers yaml setting in the CR, and then re-re-edit the CR and add in what you want. (basically you need to null out the list and then recreate it). But even then it might not work - I didn't try it :) I don't think there is an easier way to do it other than what you did. And it is a rare edge case that I don't think we need to spend any time trying to come up with a complicated solution. A user won't typically be editing and changing this additional container yaml - you'll typically know what you want, set it, and forget it. |
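The "null it out first" dance falls out of how merge-patch treats lists: a list in the patch replaces the target list wholesale, and a null deletes the key. A small sketch of RFC 7386 JSON merge-patch semantics (a hypothetical helper for illustration, not operator code) shows the two-step edit:

```python
def merge_patch(target, patch):
    """Apply a JSON merge-patch (RFC 7386) to target and return the result."""
    if not isinstance(patch, dict):
        return patch                    # scalars and lists replace the target outright
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)       # a null in the patch deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

cr = {"spec": {"deployment": {"additional_pod_containers_yaml": [
    {"name": "my_sidecar", "image": "your/sidecar/image:v1.0"}]}}}

# Step 1: null the list out. Step 2: set the new value in a second patch.
cleared = merge_patch(cr, {"spec": {"deployment": {
    "additional_pod_containers_yaml": None}}})
updated = merge_patch(cleared, {"spec": {"deployment": {
    "additional_pod_containers_yaml": [
        {"name": "my_sidecar2", "image": "your/sidecar2/image:v2.2"}]}}})
```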
part of: kiali/kiali#5028 operator PR: kiali/kiali-operator#524
@jmazzitelli Any recent thoughts on this? |
Going to hold off on this. Adds a feature that no one else is asking for and does give me some uneasiness around potential security issues with it. Unless/until we get a compelling use-case that cannot be done with an alternative solution, we should not merge those PRs. |
Closing until we get more push for this enhancement. |
This is also related to connecting to the istiod remotely: #5533. With a sidecar proxy, Kiali wouldn't need to implement all the auth mechanisms, including proprietary ones, that external istiods might require. Kiali could instead talk to the proxy via localhost and the proxy could handle authenticating to istiod. Although we may want to wait until the investigation for: #5626 is complete before supporting this feature. |
part of: kiali/kiali#5028 operator PR: kiali/kiali-operator#524
Hi!
I would like to discuss a generic extension possibility for the Kiali operator. The PromQL API that I would like to integrate Kiali with requires an API bearer token that expires very often. Since Kiali is not able (and I guess also not planning to be able 😄 ) to renew tokens or exchange API keys for tokens, I am thinking of another solution (see the previous ticket from our colleague: #4677).
So the idea would be to run this token exchange in a sidecar next to Kiali. Having the sidecar in a standalone pod would require its own authentication and TLS on its HTTP interface to avoid misuse; implementing it as a classic proxy sidecar seems cleaner.
This could be done by adding a new Kiali operator config option that lets the integrator add a complete container to the Kiali pod spec. This would be very similar to what additional_service_yaml does with the Kiali service object.
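As a sketch of what the sidecar's core would do, here is a lazily refreshing token cache (all names are hypothetical; `fetch` stands in for the real call to the IAM system, and the proxy would attach the returned token as a Bearer header on each upstream PromQL request):

```python
import time

class TokenSource:
    """Caches a bearer token and refreshes it on demand, shortly before expiry."""

    def __init__(self, fetch, margin=60.0, clock=time.monotonic):
        self._fetch = fetch          # callable returning (token, lifetime_seconds)
        self._margin = margin        # refresh this many seconds before expiry
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def token(self):
        now = self._clock()
        # Refresh only when a request actually arrives and the cached token
        # is missing or within `margin` seconds of expiring -- so an idle
        # Kiali never hits the IAM system.
        if self._token is None or now >= self._expires_at - self._margin:
            self._token, lifetime = self._fetch()
            self._expires_at = now + lifetime
        return self._token

# A proxy handler would then do something like:
#   headers["Authorization"] = "Bearer " + source.token()
```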