[Ingest Manager] Add support for multiples Kibana hosts. #72731
Pinging @elastic/ingest-management (Team:Ingest Management)
@neptunian I think we should simplify the case here and enforce some rules.

Case 1: Only the IPs are different. KibanaA => https://192.168.1.2/ok

```yaml
kibana:
  protocol: https
  hosts: [192.168.1.2, 192.168.1.3, 192.168.1.4]
  path: ok
  timeout: 30s
```

Case 2: Ports can be different. Result: VALID

```yaml
kibana:
  protocol: https
  hosts: [192.168.1.2:5551, 192.168.1.3:5552, 192.168.1.4:5553]
  path: ok
  timeout: 30s
```

Case 3: Different protocols. KibanaA => https://192.168.1.2:5551/ok

This case should fail: one of the nodes doesn't have the same protocol as the others. Only the hosts can be different; everything else should be constant. I've double-checked the Beats implementation and everything works like that: we do round robin on the hosts.
We must enforce this rule on the API level. But I would also be fine to not have the rules at first and only allow a list, to move this forward.

For the Elasticsearch URL, I had a conversation with @nchaulet in the past about it being confusing that it can be configured in two places. We agree there should be only one place for this config, and the other one should be removed. My personal preference is to have it in the UI, as that allows making changes without requiring a restart of Kibana. This becomes especially useful in case more Kibana instances are added or, in the case of Elasticsearch, if we support multiple Elasticsearch outputs in the future. If a user has to restart Kibana every time such a change is made, I don't think that is expected.

@ph Let's agree on the path forward on the above in a separate issue and make a call for 7.10.
This is the same case for the Kibana URLs. They can be configured in the Kibana yaml config file or in the UI, but the yaml file only works the first time.
Do we not currently support multiple Elasticsearch outputs? Why do we allow them to enter multiple URLs otherwise?
If you change the yaml config urls and restart Kibana, they are ignored. It only works the first time (before we add some default url during setup), which is intended, as @ph informed me. It's meant to function as a bootstrap.
@ph can you update the description to replace the general word
We only support one Elasticsearch cluster output, but you can send data to different nodes.
@ruflin Is the following what you had in mind:
@ph
@ph Your summary is correct. I think it is odd that a config is only used for bootstrapping. One thing that bugs me about removing the config completely is the use case where someone deploys Kibana with, let's say, Puppet and wants to do all the setup via config file, which would not be possible. At the same time, to fully set up and prepare Ingest Management, API calls are needed anyway.
@ph @ruflin In order to trigger the
Jumping in here because I'm nosy
I don't think we need to add a new What do you think of:
The caveat in our current agent policy service implementation is that we can only update/bump the revision of one policy at a time, which is probably not performant. This can be done better by implementing it in bulk, AFAIK.
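Bumping revisions in batches rather than one policy at a time could be sketched like this (illustrative only; the function and field names are assumptions, not the saved objects client API):

```python
def bump_revisions(policies, chunk_size=100):
    """Group revision bumps into chunks suitable for bulk updates.

    Instead of issuing one update call per policy, build the updated
    documents up front and split them into fixed-size batches, so each
    batch can be sent with a single bulk call.
    """
    updates = [
        {"id": p["id"], "attributes": {"revision": p["revision"] + 1}}
        for p in policies
    ]
    return [updates[i:i + chunk_size] for i in range(0, len(updates), chunk_size)]

batches = bump_revisions(
    [{"id": "policy-a", "revision": 1}, {"id": "policy-b", "revision": 7}],
    chunk_size=1,
)
print(len(batches))  # 2
```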
Thanks @jen-huang! If we're okay with bumping revisions without actually changing fields, this sounds good to me.
I would try as much as possible to limit the number of actions we have to implement and keep the configuration as a single atomic "concept". In our configuration in the Elastic Agent we already have a place for that information; it's important that you keep it that way. We are actually passing that information down to Endpoint so it can connect to Kibana and query its manifest. @blakerouse I know that the above host key supports an array of hosts; should that key be plural? cc @nchaulet for awareness here; since you are working on the performance side of Kibana, it would be good to have your input.
I am wondering if we can have support for scripted updates in the saved object API; it would make this a lot more performant, otherwise we need to fetch each config before updating it. Something like:
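A scripted update in the spirit of Elasticsearch's `_update_by_query` API could look roughly like this (an illustrative sketch of the request; the saved objects API does not expose this today, and the index and field names are assumptions):

```
POST /.kibana/_update_by_query
{
  "query": { "term": { "type": "ingest-agent-policies" } },
  "script": {
    "lang": "painless",
    "source": "ctx._source['ingest-agent-policies'].revision += 1"
  }
}
```

This would bump every matching policy's revision in a single request instead of fetching and re-saving each config.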
@ph The full agent policy sent to the Agent would just have the new fleet attribute. This is step 1 of what Jen mentioned in her comment, which is easy enough. It's getting the CONFIG_CHANGE action to trigger, which only happens when the agent policy SO is revised, that is the issue. I guess I was thinking it would be good if there was a way to not have to bump every agent policy config when the kibana_url in settings is updated, but since that is conceptually part of the config, it sounds like this might be our best option. @nchaulet Looking at the current
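For illustration, the fleet attribute in the full agent policy might look something like this (the field names here are assumptions, not a final schema):

```yaml
fleet:
  kibana:
    protocol: https
    hosts: ["192.168.1.2:5601", "192.168.1.3:5601"]
```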
@neptunian Looks good. I was confused by the kibana_url reference; I will check, but I think we will need to improve support on the Agent side for parsing the URLs.
@ph okay, updated my comment re: configuration as a single concept |
@nchaulet We may run into unintended consequences down the line if we edit the saved object document directly; that's why Platform always pushes us to use the scoped saved object client when working with SOs. @neptunian it might be a good idea to have a chat with Platform about your performance concerns and get their recommendation. There is also a
As the URL is part of the config, I think bumping the revision is a must and expected; it is a new config. What happens if someone changes the Kibana hosts to a host that is not available? Will the agent disconnect or refuse the new connection?
@ruflin I think the thing that I found confusing is that the url is not part of the
@ruflin I believe at the moment the Agent is not handling the url coming down in the configuration. We still need to define that behavior. I agree that the Agent should do a sanity check to ensure that it can talk over the new API before switching and making that change permanent.
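The check-before-switch behavior could be sketched like this (a hypothetical helper, not the Agent's actual implementation; the `/api/status` endpoint is an assumption):

```python
import urllib.error
import urllib.request

def switch_kibana_host(current_url, new_url, timeout=5.0):
    """Return the base URL the agent should keep using.

    Probe the new host before committing: only if it responds do we
    make the switch permanent; otherwise we stay on the current host.
    """
    try:
        urllib.request.urlopen(new_url + "/api/status", timeout=timeout)
    except (urllib.error.URLError, OSError):
        # New host unreachable: keep the existing connection.
        return current_url
    return new_url

# A host that refuses connections is rejected and the old URL is kept.
print(switch_kibana_host("https://old-kibana:5601", "http://127.0.0.1:9"))
```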
As the URL of Kibana can already be changed in the Settings today, I assume most of the above discussion applies independently of whether it now becomes an array or not? Should we perhaps split this into 2 parts:
Yes, it's possible to split it if we want the update action to go into 7.9, with the update-action PR needing to happen first, before the kibana_url type is changed into an array. If we want both the Kibana URL and the Elasticsearch URL #76136 to update the agent config, this would require more changes than in my current PR, as the Elasticsearch URL is not part of the settings saved object or API, but part of the output API. It seems like they should happen in the same PR. I assume changes will need to be made on the Agent side (@blakerouse) to handle the modified agent policy now containing these URLs.
@neptunian Agent will handle updated
Yes, @blakerouse, I do not expect it in 7.9 but in 7.10. I think we have an issue open for that on the Agent side; I will make sure we raise it.
We are using Puppet for everything. |
Summary of the problem
The Elastic Agent should be able to connect to more than one Fleet host.
User Stories
Other
The Elastic Agent already defines a place for this in the configuration.