Support Kubernetes service deployer #239

Follow-up on: #89

This issue might be blocked by the lack of a recommendation for the managed fleet and Kubernetes cluster. Not sure if it's a temporary situation, but maybe we need to support the unmanaged mode as well.

I suppose that the technical details depend on the decision above (managed vs unmanaged).

cc @ChrsMark @ycombinator

Comments
@mtojek how about supporting both cases (managed == fleet + unmanaged == standalone)? To my mind, the standalone version will be the way to go for many people out there doing k8s ops based on infrastructure-as-code principles, and hence I expect us to support both ways.
We may consider supporting both modes, maybe even breaking it into two iterations. Speaking about unmanaged - let's build the design together:
Question:
Speaking about managed: I would assume that the Elastic Agent is deployed as part of the Elastic stack and we pass all required configuration options/secrets. It's not distributed as a DaemonSet, but it can reach out to the cluster via the HTTP API. This method might be easier to implement in elastic-package, but it's not the original way of running the Agent.
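To illustrate that mode, a minimal sketch of pointing the dockerized Agent at an external cluster; the file name, image tag, and mount path are assumptions, and the Kubernetes integration would still need to be configured to use this kubeconfig:

```sh
# Sketch only: give the Elastic Agent container credentials for a cluster
# running elsewhere, so it can reach the Kubernetes HTTP API from outside.
cat > docker-compose.agent.yml <<'EOF'
version: "2.3"
services:
  elastic-agent:
    image: docker.elastic.co/beats/elastic-agent:7.13.0
    volumes:
      # kubeconfig with the API server address and credentials (assumed path)
      - ./kubeconfig:/usr/share/elastic-agent/kubeconfig:ro
EOF
docker-compose -f docker-compose.agent.yml up -d
```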
Sounds good to me, more or less. We have something similar in Beats, started in elastic/beats#17538, and maybe we can borrow some ideas from it. cc: @jsoriano I guess that after deploying the Agent we will check that data actually flows in, right?
That's true, and this is the reason I actually had to deploy the ES stack on k8s too in the past. From my personal experience of developing for k8s, I don't think we have an easy way to make it work with a Docker-based stack, and I'm afraid that messing up k8s networking with Docker networking wouldn't be a good idea (but maybe I'm mistaken here, so no strong opinion). Also, we have another prerequisite, which is
Hmm, connecting to the k8s API server should be doable, but I'm not sure if connecting to the Kubelet's APIs from the outside world would be possible. I think we're missing information on how the managed approach would look, though.
That is correct, it's the standard verification flow.
Ok, let me reiterate on this topic. Agreed, we'll achieve a better/more natural testing experience if the Elastic Agent is deployed on a real (or minikube) Kubernetes cluster. Here is another approach, which may cover the managed fleet (we need to find a compromise). Prerequisites:
Notice:
Steps:
In the future we may have here the
cc @ChrsMark @ycombinator - I'd appreciate your comments on this idea.
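For illustration only, a minimal sketch of an Elastic Agent DaemonSet enrolling into a Fleet running outside the cluster; the image tag, namespace, Fleet URL, and environment variables are assumptions, not the proposed design:

```sh
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      containers:
        - name: elastic-agent
          image: docker.elastic.co/beats/elastic-agent:7.13.0
          env:
            - name: FLEET_ENROLL
              value: "1"
            # Fleet endpoint reachable from inside the cluster (placeholder)
            - name: FLEET_URL
              value: "https://host.docker.internal:8220"
            - name: FLEET_ENROLLMENT_TOKEN
              value: "<enrollment token>"
EOF
```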
Thanks @mtojek, it looks good to me! Some questions, since I'm missing some of
Yes. Once the agent's Docker image is started, this entrypoint kicks off and performs enrollment:
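For reference, a sketch of triggering that enrollment by hand when starting the image directly; the exact environment variables differ between Agent versions, so treat the names and values below as approximations:

```sh
# FLEET_ENROLL asks the image's entrypoint to enroll on startup;
# the URL and token values are placeholders.
docker run --rm \
  -e FLEET_ENROLL=1 \
  -e FLEET_URL="https://fleet-server:8220" \
  -e FLEET_ENROLLMENT_TOKEN="<enrollment token>" \
  docker.elastic.co/beats/elastic-agent:7.13.0
```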
EDIT: I have concerns regarding the introduction of the
EDIT2: @ChrsMark Is it possible to run
@mtojek I think your proposal makes sense. I just have one question: with the introduction of
As long as Elasticsearch and Kibana are accessible and the Package Registry contains the package, it's expected to work. We're just switching from docker-compose to Kubernetes.
@mtojek I've no strong opinion about
Endpoints:
See the example at https://github.com/elastic/beats/pull/23679/files#diff-7896a70414721b8d0b3d8b90808b92c750d40c56bdf2ad01bf629c9499cde64eR38
@mtojek @ycombinator will
Just trying to think through how the Docker Compose service deployer (which would be used to spin up the Apache service container when
Yes, in this case it would be possible, but...
I don't have a solution/idea for log collection, but I believe it's solvable too. In fact, we don't need the Docker Compose service deployer, because we don't have a service (talking only about monitoring the Kubernetes API, it's similar to the system integration). What we need here is a "null" service deployer - something that doesn't deploy any service.
EDIT: In the meantime I tried to reverse
manually write
Initialize:
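A plausible shape for this step, assuming the control-plane container was bootstrapped with kubeadm (the container name and paths are assumptions):

```sh
# Copy the admin kubeconfig generated by kubeadm out of the container
# and point kubectl at it.
docker cp kubernetes-control-plane:/etc/kubernetes/admin.conf ./kubeconfig
export KUBECONFIG="$PWD/kubeconfig"
kubectl cluster-info
```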
Try with curl:
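Something along these lines, with client certificates extracted from the control-plane node (paths and port are illustrative):

```sh
# Query the API server directly over HTTPS using the cluster CA and
# an admin client certificate/key pair.
curl --cacert ca.crt --cert admin.crt --key admin.key \
  https://127.0.0.1:6443/api/v1/namespaces/default/pods
```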
Works :) I bet that the hardest part is to figure out the networking properties.
I'm talking about the Docker Compose service deployer that will be used if we are system testing the
[EDIT] I think the solution to this may lie in making the service deployers aware of the engine the stack is running with, so as to make them "smarter" about how/where they deploy the service, e.g. as @ChrsMark alluded to in his comment about the Apache service running as a pod.
Honestly, I would bind a service deployer to the engine it supports. I wouldn't like to implement an adapter which can transform Compose files into pod definitions.
EDIT: Support matrix?
You mean like this: https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/? 😉
++ I'm good with restricting certain service deployers to certain stack deployment engines. At the very least, I think this is a fine starting point, since there isn't a use case for system testing the same integration in multiple stack deployment engines AFAIK. We just need to make sure the unsupported cases are handled well with good error messaging, etc.
I spent some time wiring Docker Compose definitions for Kubernetes and managed to run the control node as a Docker container (see the sketch below). Similar actions are performed by kind. I wonder if it can replace the existing Prometheus mocks. I know we would need to introduce some adjustments in the
but maybe we don't need to use kind or minikube at all (these are just wrappers around either Docker Compose or a VM). I didn't go deeper with worker nodes, but I would like to hear your (@ChrsMark @ycombinator) feedback (to be convinced that our decision path is correct). This looks promising: if we decide to create an executor similar to Terraform, the Kubernetes service deployer would be responsible for deploying the Kubernetes stack (in ca. 30s) and then applying custom user definitions.
EDIT: I see it's pending on kube-proxy to fully boot up (that's why no coredns pods are present); I believe it's a networking issue.
EDIT2: I figured this out. Replace the
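A rough sketch of such a Compose definition, loosely mirroring what kind configures; the image tag, mounts, and flags are approximations, and kind additionally handles cgroup mounts and the kubeadm cluster bootstrap:

```sh
cat > docker-compose.yml <<'EOF'
version: "2.3"
services:
  kubernetes-control-plane:
    image: kindest/node:v1.20.2     # the node image kind itself uses
    privileged: true                # systemd + kubelet run inside the container
    ports:
      - "6443:6443"                 # expose the API server to the host
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /var/lib/containerd         # nested container storage
      - /lib/modules:/lib/modules:ro
EOF
docker-compose up -d
```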
It looks like we don't need
EDIT3: I deployed an nginx application:
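For example, a minimal way to do that:

```sh
# Create an nginx deployment, expose it inside the cluster, and check
# that the pod comes up.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -l app=nginx
```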
I think I've finished researching/PoCing this area and would stick to
The