
[Docs] Provide best practices for the safest way of upgrading Kibana #46473

Closed
rwaight opened this issue Sep 24, 2019 · 7 comments · Fixed by #48126
rwaight (Contributor) commented Sep 24, 2019

Describe the feature:
Last week we discussed our upgrade documentation and ways we can improve the upgrade experience; for larger deployments, the proper order becomes important.

While we provide this high level overview for upgrading Kibana, we should also include best practices around the safest way to upgrade Kibana, especially in larger deployments.

For overall Elastic Stack upgrades, I've opened elastic/stack-docs#537 for best practices to be included.

rwaight added the Team:Docs and enhancement labels on Sep 24, 2019
elasticmachine (Contributor) commented:

Pinging @elastic/kibana-docs

KOTungseth (Contributor) commented:

@joshdover can you help us with this one?

joshdover (Contributor) commented Sep 30, 2019

I'm not sure I have any additional guidance here that isn't already outlined in the Stack Upgrade docs.

I think the most important thing we could emphasize is that Kibana does not currently support "rolling upgrades." This means that before bringing up a Kibana instance of the next version, all older instances need to be shut down first. Another way to put this would be: you can't have multiple Kibana nodes running different versions at the same time against the same .kibana index.
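
As a rough illustration (not something from the existing docs), a pre-flight check along these lines could confirm that every old Kibana instance is actually stopped before the new version is started. The hostnames below are placeholders; the only assumption is the standard Kibana status endpoint at /api/status.

```python
# Pre-flight sketch: confirm no old Kibana instances are still running
# before starting the upgraded version. Hostnames/ports are placeholders.
import requests

OLD_KIBANA_INSTANCES = [
    "http://kibana-1.example.com:5601",
    "http://kibana-2.example.com:5601",
]

def instance_is_down(base_url: str) -> bool:
    """Return True if nothing answers on the Kibana status endpoint."""
    try:
        resp = requests.get(f"{base_url}/api/status", timeout=5)
    except requests.ConnectionError:
        return True  # nothing listening, so this instance is down
    version = resp.json().get("version", {}).get("number", "unknown")
    print(f"{base_url} is still up (Kibana {version})")
    return False

if all(instance_is_down(url) for url in OLD_KIBANA_INSTANCES):
    print("All old Kibana instances are down; safe to start the new version.")
else:
    raise SystemExit("Old Kibana instances are still running; shut them down first.")
```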

KOTungseth self-assigned this on Oct 4, 2019
KOTungseth (Contributor) commented Oct 7, 2019

@joshdover in the current Kibana upgrade docs, it says, "The recommended path is to upgrade to 6.7 before upgrading to 7.0. This makes it easier to identify the changes you need to make to upgrade and enables you to perform a rolling upgrade with no downtime."

Since Kibana does not currently support rolling upgrades, should this be removed from the docs?

joshdover (Contributor) commented:

@KOTungseth Yes, I think we should add a section about this. In particular, this is what we need to get across:

  • Kibana does not support running more than one version of Kibana against the same Elasticsearch index.
  • When upgrading, you must shut down all nodes of the old version before bringing up any nodes of the next version.

"The recommended path is to upgrade to 6.7 before upgrading to 7.0. This makes it easier to identify the changes you need to make to upgrade and enables you to perform a rolling upgrade with no downtime."

This still seems pertinent, but I think we should clarify that we mean this "enables you to perform a rolling upgrade of Elasticsearch with no downtime." (not Kibana).
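
To make the Elasticsearch-vs-Kibana distinction concrete, a small sketch like the one below (endpoint and target version are placeholders) could verify that the rolling Elasticsearch upgrade has finished on every node before Kibana, which has to be upgraded all at once, is brought back up:

```python
# Sketch: verify the Elasticsearch rolling upgrade is complete before starting
# the upgraded Kibana. Endpoint and target version are placeholders.
import requests

ES_ENDPOINT = "http://elasticsearch.example.com:9200"
TARGET_VERSION = "7.0.0"

# _cat/nodes with format=json returns one entry per node with the requested fields.
nodes = requests.get(
    f"{ES_ENDPOINT}/_cat/nodes",
    params={"format": "json", "h": "name,version"},
    timeout=10,
).json()

stale = [n for n in nodes if n["version"] != TARGET_VERSION]
if stale:
    names = ", ".join(n["name"] for n in stale)
    raise SystemExit(f"Elasticsearch nodes still on an older version: {names}")
print(f"All {len(nodes)} Elasticsearch nodes are on {TARGET_VERSION}; "
      "Kibana can now be upgraded (all instances at once).")
```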

rwaight (Contributor, Author) commented Oct 15, 2019

Thank you very much @joshdover and @KOTungseth!!

b-deam (Member) commented Jan 20, 2020

Hey @joshdover and @KOTungseth 👋!

The important step of ensuring that all Kibana instances are shut down prior to an upgrade is missing from the documentation for version 6.8.

6.8: [screenshot of the 6.8 upgrade docs]

7.x: [screenshot of the 7.x upgrade docs]

Is it possible to backport this PR/add the upgrade advice to the 6.8 "before you begin" page?

I'm happy to raise a separate PR against the 6.8 branch if that's determined to be the best way forward.
