
Sidecar usage extension for the kiali operator. #5028

Closed
libesz opened this issue Apr 29, 2022 · 23 comments · May be fixed by kiali/kiali-operator#524

Labels: enhancement, stale

Comments

@libesz (Contributor) commented Apr 29, 2022

Hi!
I would like to discuss a generic extension possibility for the Kiali operator. The PromQL API that I would like to integrate Kiali with requires an API bearer token that expires very often. Since Kiali is not able (and I guess is also not planning to be able 😄) to renew tokens or exchange API keys for tokens, I am thinking of another solution (see the previous ticket from our colleague: #4677).
So the idea would be:

  • the integrator (us) implements the token-renewal magic and a (transparent) proxy as part of a sidecar container in the Kiali pod
  • the Prometheus endpoint would point to this sidecar (i.e. localhost)
  • the integrator configures the sidecar container in the Kiali CR

Having the sidecar in a standalone pod would require its own authentication and TLS on its HTTP interface to avoid misuse. Implementing it as a classic sidecar proxy seems cleaner.

This could be done by adding a new Kiali operator config object that lets the integrator add a complete container to the Kiali pod spec. This would be very similar to what additional_service_yaml does for the Kiali Service object.
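
To make the data flow concrete, here is a minimal sketch of the relevant Kiali CR fragment under this idea (the localhost port is an illustrative assumption; external_services.prometheus.url is an existing Kiali CR setting):

spec:
  external_services:
    prometheus:
      # Point Kiali at the sidecar proxy in its own pod; the proxy renews
      # the bearer token and forwards queries to the real PromQL API.
      url: http://localhost:9090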

@libesz added the enhancement label Apr 29, 2022
@jmazzitelli (Collaborator)

I don't fully grok the proposed solution, but... wouldn't it be simpler to just update the Kiali CR with the new prometheus credentials? The operator will see the change and immediately propagate the new credentials down to the Kiali ConfigMap and automatically restart the Kiali pod so it picks up the new credentials. This is already supported. You would just need to patch the Kiali CR with the new credentials in the spec.external_services.prometheus.auth section. Or am I missing something?
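
For reference, a sketch of that approach, assuming a Kiali CR named kiali in the istio-system namespace and bearer-token auth (the token value is a placeholder):

# Merge-patch the CR with the freshly issued token; the operator reconciles,
# regenerates the Kiali ConfigMap, and restarts the Kiali pod.
kubectl patch kiali kiali -n istio-system --type=merge \
  -p '{"spec":{"external_services":{"prometheus":{"auth":{"type":"bearer","token":"<new-token>"}}}}}'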

@israel-hdez (Contributor)

> wouldn't it be simpler to just update the Kiali CR with the new prometheus credentials?

My guess is that it comes down to efficiency.

I note that @libesz says the token expires "very often". The typical expiration I've seen for these cases is 5 minutes. You will want to roll out a fresh token before the old one expires, to avoid the situation where Kiali can no longer reach Prometheus, so let's assume you refresh every 4 minutes. That's 15 updates an hour. The operator will use far more resources on all these reconciliations (which may also flood the kube API) than a sidecar would use to make a simple token-refresh request. Also, a sidecar can refresh on demand (i.e. if nobody uses Kiali for some time, there is no need to refresh the token).

@libesz I would like to ask: does your Prometheus live in the same cluster where Kiali would live? If it does, it may be easier to connect Kiali directly to Prometheus (skipping whatever is securing it) and protect the Prometheus API only against incoming out-of-cluster requests. If that's not possible... well... we need to think about it...

@libesz (Contributor, Author) commented Apr 30, 2022

I was not aware that a config change in the Kiali CR triggers a restart of the Kiali pod, good to know! However, as @israel-hdez said, it would be great to run this exchange on demand. I would like to avoid flooding our IAM system when no one is using the Kiali dashboard (I assume Kiali does not pull Prometheus data otherwise). This is a managed service, so a lot of instances are eventually going to be installed by different customers. As this proxy would sit in the data path, it would be fully on-demand.

The tokens are valid for 20 minutes in this scenario, and I think it would be bad UX to wait for a Kiali restart every 20 minutes. With a single Kiali pod, it takes 20-30 seconds to become Ready in our system. Or do we have any HA solution to overcome this?

@israel-hdez the thing that provides the PromQL API is also an external managed service with a remote endpoint (the one we asked for the custom HTTP headers for last year 🙂).

@jmazzitelli (Collaborator)

> This could be done by adding a new Kiali operator config object that lets the integrator add a complete container to the Kiali pod spec. This would be very similar to what additional_service_yaml does for the Kiali Service object.

I think if this is all you need, it would be the simplest solution and the easiest for us to implement on the Kiali side. We'd have to figure out what the Kiali CR YAML would look like for this. Perhaps something like spec.deployment.pod_sidecar_yaml - you would provide a full named container YAML and the operator would simply "pass through" that YAML to the Deployment. So I envision something like:

spec:
  deployment:
    pod_sidecar_yaml:
      name: my_sidecar
      image: your/sidecar/image:v1.0
      command: ...
      ...and whatever else needs to go in this named container...

When the Kiali operator creates the Kiali Server Deployment, it would add this as a container to the list of existing Kiali containers (of which there is only one: the main Kiali server binary).

kind: Deployment
metadata:
  name: kiali
spec:
  template:
    spec:
      containers:
      - name: kiali
        image: quay.io/kiali/kiali:v1.50.0
        command: ['/opt/kiali/kiali', '-config', '/kiali-configuration/config.yaml']
      ### HERE IS WHERE THE SIDECAR YAML WILL GO ###
      - name: my_sidecar
        image: your/sidecar/image:v1.0
        command: ....
        ...and the rest....

Is this what you need and will work for your use-case?

Would there ever be a need to define multiple sidecars/containers? The above example only supports one. I suppose we could support multiple as a generic solution:

spec:
  deployment:
    additional_pod_containers_yaml:
    - name: my_sidecar
      image: your/sidecar/image:v1.0
      command: ...
      ...and whatever else needs to go in this named container...
    - name: my_sidecar2
      image: your/sidecar2/image:v2.2
      command: ...
      ...and whatever else needs to go in this second named container...

@libesz (Contributor, Author) commented Apr 30, 2022

@jmazzitelli yep, those were exactly the ideas in my mind as well. So far I think one is enough, but a generic approach allowing multiple sidecars could be more future-proof.
We will start working on the proxy PoC ASAP to see if this is really all we need.
Thanks for the proposal!

@jmazzitelli self-assigned this May 2, 2022
@jmazzitelli added the backlog label May 2, 2022
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue May 2, 2022
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue May 2, 2022
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue May 2, 2022
@jmazzitelli (Collaborator)

Because this potentially has security implications (you are starting an ad-hoc container in the Kiali pod, with all that entails), we are going to add a new "allow-ad-hoc" setting to the operator: allowAdHocContainers must be set to true if you want to allow Kiali CR creators to set this. By default it will be false - the operator will not allow a user to set the new additional_pod_containers_yaml setting. You have to tell the operator to allow it. See the draft helm PR for the start of this new flag's implementation: https://github.com/kiali/helm-charts/pull/137/files
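
Based on the draft helm PR linked above, enabling it would be a chart-level value along these lines (a sketch; only the flag name and its false default come from this discussion):

# kiali-operator chart values
allowAdHocContainers: true  # defaults to false; must be true to honor additional_pod_containers_yaml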

jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue May 2, 2022
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue May 2, 2022
@jmazzitelli (Collaborator) commented May 3, 2022

The PRs are still in draft mode because we do not yet know if we want to merge this into master; we are waiting to hear from @libesz whether this is truly what he needs. That said, the PRs are finished and ready to be tested/reviewed.

To test the new functionality:

  1. Install minikube with Istio (you can use hack/k8s-minikube.sh)
  2. Make sure you have a full Kiali dev environment
  3. Check out the operator PR branch and the helm PR branch in their respective local repos on your dev box
  4. From the kiali/kiali local repo, run make to build and push the Kiali Server and the new Operator to your cluster (make -e CLUSTER_TYPE=minikube build build-ui cluster-push)
  5. From the helm-charts local repo, run make to build the helm charts (make clean build-helm-charts)
  6. Install the default operator and try to add a new container (it should fail - the default is for the operator to not allow this):
helm install \
  --create-namespace \
  --namespace kiali-operator \
  --set cr.create=true \
  --set cr.namespace=istio-system \
  --set image.repo=localhost:5000/kiali/kiali-operator \
  --set image.tag=dev \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].image=k8s.gcr.io/echoserver:1.4 \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].imagePullPolicy=Always \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].name=mysidecar \
  kiali-operator \
  _output/charts/kiali-operator-*.tgz
  7. kubectl logs -n kiali-operator -l app=kiali-operator --tail=-1 | grep forbidden should show the failure message in the logs: fatal: [localhost]: FAILED! => {"changed": false, "msg": "The operator is forbidden from installing additional containers into the Kiali pod."}
  8. kubectl get kiali kiali -n istio-system -o jsonpath={.status} | grep "forbidden" should show you the error somewhere in the status fields (The operator is forbidden from installing additional containers into the Kiali pod)
  9. Uninstall everything: kubectl delete kiali kiali -n istio-system && helm uninstall --namespace kiali-operator kiali-operator
  10. Now reinstall with allowAdHocContainers set to true:
helm install \
  --create-namespace \
  --namespace kiali-operator \
  --set allowAdHocContainers=true \
  --set cr.create=true \
  --set cr.namespace=istio-system \
  --set image.repo=localhost:5000/kiali/kiali-operator \
  --set image.tag=dev \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].image=k8s.gcr.io/echoserver:1.4 \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].imagePullPolicy=Always \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].name=mysidecar \
  kiali-operator \
  _output/charts/kiali-operator-*.tgz
  11. Now see that the Kiali Deployment was created and you have two containers in the pod:
$ kubectl get pods -n istio-system -l app=kiali -o jsonpath={.items[0].spec.containers[*].image}
quay.io/kiali/kiali:v1.51.0-SNAPSHOT k8s.gcr.io/echoserver:1.4

Notice that the resulting output shows the echoserver container image name as well as the Kiali container image name. This shows you have an added container in the Kiali pod. You can look at the Deployment YAML to confirm you have two containers (but the above already shows that you do).
  12. No more tests needed. It all works if you got this far. To clean up, just purge the Kiali CR and uninstall (see step 9).

@libesz (Contributor, Author) commented May 3, 2022

Thank you for the quick implementation! I already have a working PoC sidecar that solves the auth issues I have with the PromQL API. I will try it out with the new operator ASAP.
The only thing that came to my mind is that all the credentials (for the sidecar) now need to be supplied in env vars, possibly referenced from secrets. That seems perfectly fine for my use-case; I am just wondering if there will be a way to mount things as well (the issue may be defining the sidecar volumes, which live outside of the container spec).

@jmazzitelli (Collaborator)

I was worried about that :) This is going to make it more complicated, because now we are adding volumes that live outside of the container. I'm starting to worry that this is going to introduce more security holes that I don't see.

Do you require volumes? You can easily pass env vars (those are part of the container definition), but if the env values come from mounting things, that might be an issue.

@jmazzitelli (Collaborator)

BTW: there is already a way today to mount your own secrets. See:
https://kiali.io/docs/configuration/kialis.kiali.io/#.spec.deployment.custom_secrets

This may be all you need.
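
For reference, the equivalent Kiali CR fragment would look something like this (mirroring the --set flags used in the next comment; the secret name and mount path are examples):

spec:
  deployment:
    custom_secrets:
    # Mounts the pre-existing secret "mysecret" into the Kiali pod at /mysecret
    - name: mysecret
      mount: /mysecret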

@jmazzitelli (Collaborator) commented May 3, 2022

Here is how you can test this to see that you can pass data to your container via secrets. This is how I believe we will want to support this in Kiali (as opposed to, say, providing ad-hoc volume mounts to other file systems). It uses the already-existing spec.deployment.custom_secrets feature.

  1. Create a secret with some data (this little test secret will have two name/value pairs, USER_NAME=admin and PASSWORD=supersecret, but obviously the secret can hold any number of data items):
$ cat <<EOM | kubectl apply -n istio-system -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  USER_NAME: $(echo -n "admin" | base64)
  PASSWORD: $(echo -n "supersecret" | base64)
EOM
  2. Install Kiali with a second container and show how it can get data from the secret:
helm install \
  --create-namespace \
  --namespace kiali-operator \
  --set allowAdHocContainers=true \
  --set cr.create=true \
  --set cr.namespace=istio-system \
  --set image.repo=localhost:5000/kiali/kiali-operator \
  --set image.tag=dev \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].image=k8s.gcr.io/busybox:latest \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].imagePullPolicy=Always \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].name=mysidecar \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].command[0]=/bin/sh \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].command[1]=-c \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].command[2]=env \
  --set cr.spec.deployment.additional_pod_containers_yaml[0].envFrom[0].secretRef.name=mysecret \
  --set cr.spec.deployment.custom_secrets[0].name=mysecret \
  --set cr.spec.deployment.custom_secrets[0].mount=/mysecret \
  kiali-operator \
  _output/charts/kiali-operator-*.tgz
  3. See that the secret data was accessible to the additional container:
$ kubectl logs -c mysidecar -n istio-system -l app=kiali | grep -E "USER_NAME|PASSWORD"
USER_NAME=admin
PASSWORD=supersecret

@libesz (Contributor, Author) commented May 3, 2022

@jmazzitelli thanks! Will try it out tomorrow.

@libesz (Contributor, Author) commented May 4, 2022

@jmazzitelli I think I am mostly done with my tests. So far, one interesting behavior has popped up.
I started with your example helm install and the echoserver sidecar. Then I took the Kiali CR and changed it to my Prometheus proxy (removing echoserver). I applied the Kiali CR and ended up with a Kiali pod with 3 containers in it :). Both the echoserver and my sidecar were part of the pod. I removed and then re-created the same Kiali CR, and it recovered to the expected single sidecar.

@jmazzitelli (Collaborator) commented May 4, 2022

Ah... that probably has to do with the way k8s does patch-merge on lists. I've seen this same kind of thing before with other settings. I believe the way you'd have to do it (short of removing and re-creating the CR like you did) is to edit the CR and set the additional containers YAML setting to "null", then re-edit the CR and add in what you want (basically, you need to null out the list and then recreate it). But even then it might not work - I didn't try it :)

I don't think there is an easier way to do it other than what you did. And it is a rare edge case, so I don't think we need to spend any time trying to come up with a complicated solution. A user won't typically be editing and changing this additional container YAML - you'll typically know what you want, set it, and forget it.
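
A sketch of the null-out edit described above (untested, as noted; it assumes the CR is named kiali in istio-system):

# First null out the list so the merge drops the stale containers...
kubectl patch kiali kiali -n istio-system --type=merge \
  -p '{"spec":{"deployment":{"additional_pod_containers_yaml":null}}}'
# ...then re-edit the CR to add back only the containers you want.
kubectl edit kiali kiali -n istio-system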

jmazzitelli added a commit to jmazzitelli/kiali.io that referenced this issue Sep 1, 2022
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue Sep 1, 2022
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Sep 1, 2022
@jmazzitelli removed the backlog label Sep 1, 2022
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue Oct 3, 2022
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Oct 3, 2022
jmazzitelli added a commit to jmazzitelli/kiali.io that referenced this issue Oct 3, 2022
@jshaughn (Collaborator)

@jmazzitelli Any recent thoughts on this?

@jmazzitelli (Collaborator)

Going to hold off on this. It adds a feature that no one else is asking for, and it gives me some uneasiness about potential security issues. Unless/until we get a compelling use-case that cannot be handled with an alternative solution, we should not merge those PRs.

@jshaughn (Collaborator)

Closing until we get more push for this enhancement.

@nrfox (Contributor) commented Nov 17, 2022

This is also related to connecting to istiod remotely: #5533. With a sidecar proxy, Kiali wouldn't need to implement all the auth mechanisms, including proprietary ones, that external istiods might require. Kiali could instead talk to the proxy via localhost, and the proxy could handle authenticating to istiod. Although we may want to wait until the investigation for #5626 is complete before supporting this feature.

jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Jun 1, 2023
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue Jun 1, 2023
jmazzitelli added a commit to jmazzitelli/kiali.io that referenced this issue Jun 1, 2023
jmazzitelli added a commit to jmazzitelli/kiali.io that referenced this issue Jun 2, 2023
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Jun 23, 2023
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue Jun 23, 2023
jmazzitelli added a commit to jmazzitelli/kiali.io that referenced this issue Jun 23, 2023
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Sep 27, 2023
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue Sep 27, 2023
jmazzitelli added a commit to jmazzitelli/kiali.io that referenced this issue Sep 27, 2023
jmazzitelli added a commit to jmazzitelli/helm-charts that referenced this issue Dec 6, 2023
jmazzitelli added a commit to jmazzitelli/kiali-operator that referenced this issue Dec 6, 2023
jmazzitelli added a commit to jmazzitelli/kiali.io that referenced this issue Dec 6, 2023