Merge branch 'master' into issue-60981-remove-namspace-agnostic

elasticmachine authored Apr 14, 2020
2 parents ca0f2c8 + f44d951 commit da045bb
Showing 677 changed files with 10,853 additions and 6,670 deletions.
1 change: 1 addition & 0 deletions .i18nrc.json
@@ -24,6 +24,7 @@
"src/legacy/core_plugins/management",
"src/plugins/management"
],
"maps_legacy": "src/plugins/maps_legacy",
"indexPatternManagement": "src/plugins/index_pattern_management",
"advancedSettings": "src/plugins/advanced_settings",
"kibana_legacy": "src/plugins/kibana_legacy",
@@ -4,10 +4,12 @@

## ScopedHistory.createHref property

Creates an href (string) to the location.
Creates an href (string) to the location. If `prependBasePath` is true (default), it will prepend the location's path with the scoped history basePath.

<b>Signature:</b>

```typescript
createHref: (location: LocationDescriptorObject<HistoryLocationState>) => string;
createHref: (location: LocationDescriptorObject<HistoryLocationState>, { prependBasePath }?: {
prependBasePath?: boolean | undefined;
}) => string;
```
@@ -28,7 +28,7 @@ export declare class ScopedHistory<HistoryLocationState = unknown> implements Hi
| --- | --- | --- | --- |
| [action](./kibana-plugin-core-public.scopedhistory.action.md) | | <code>Action</code> | The last action dispatched on the history stack. |
| [block](./kibana-plugin-core-public.scopedhistory.block.md) | | <code>(prompt?: string &#124; boolean &#124; History.TransitionPromptHook&lt;HistoryLocationState&gt; &#124; undefined) =&gt; UnregisterCallback</code> | Not supported. Use [AppMountParameters.onAppLeave](./kibana-plugin-core-public.appmountparameters.onappleave.md)<!-- -->. |
| [createHref](./kibana-plugin-core-public.scopedhistory.createhref.md) | | <code>(location: LocationDescriptorObject&lt;HistoryLocationState&gt;) =&gt; string</code> | Creates an href (string) to the location. |
| [createHref](./kibana-plugin-core-public.scopedhistory.createhref.md) | | <code>(location: LocationDescriptorObject&lt;HistoryLocationState&gt;, { prependBasePath }?: {</code><br/><code> prependBasePath?: boolean &#124; undefined;</code><br/><code> }) =&gt; string</code> | Creates an href (string) to the location. If <code>prependBasePath</code> is true (default), it will prepend the location's path with the scoped history basePath. |
| [createSubHistory](./kibana-plugin-core-public.scopedhistory.createsubhistory.md) | | <code>&lt;SubHistoryLocationState = unknown&gt;(basePath: string) =&gt; ScopedHistory&lt;SubHistoryLocationState&gt;</code> | Creates a <code>ScopedHistory</code> for a subpath of this <code>ScopedHistory</code>. Useful for applications that may have sub-apps that do not need access to the containing application's history. |
| [go](./kibana-plugin-core-public.scopedhistory.go.md) | | <code>(n: number) =&gt; void</code> | Send the user forward or backwards in the history stack. |
| [goBack](./kibana-plugin-core-public.scopedhistory.goback.md) | | <code>() =&gt; void</code> | Send the user one location back in the history stack. Equivalent to calling [ScopedHistory.go(-1)](./kibana-plugin-core-public.scopedhistory.go.md)<!-- -->. If no more entries are available backwards, this is a no-op. |
Binary file added docs/images/tutorial-ilm-custom-policy.png
Binary file added docs/images/tutorial-ilm-delete-rollover.png
@@ -1,23 +1,179 @@
[role="xpack"]

[[example-using-index-lifecycle-policy]]
=== Example of using an index lifecycle policy
=== Tutorial: Use {ilm-init} to manage {filebeat} time-based indices

With {ilm} ({ilm-init}), you can create policies that perform actions automatically
on indices as they age and grow. {ilm-init} policies help you to manage
performance, resilience, and retention of your data during its lifecycle. This tutorial shows
you how to use {kib}’s *Index Lifecycle Policies* to modify and create {ilm-init}
policies. You can learn more about all of the actions, benefits, and lifecycle
phases in the {ref}/overview-index-lifecycle-management.html[{ilm-init} overview].


[discrete]
[[example-using-index-lifecycle-policy-scenario]]
==== Scenario

You’re tasked with sending syslog files to an {es} cluster. This
log data has the following data retention guidelines:

* Keep logs on hot data nodes for 30 days
* Roll over to a new index if the size reaches 50GB
* After 30 days:
** Move the logs to warm data nodes
** Set {ref}/glossary.html#glossary-replica-shard[replica shards] to 1
** {ref}/indices-forcemerge.html[Force merge] multiple index segments to free up the space used by deleted documents
* Delete logs after 90 days
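
If you prefer working with the {es} API, these guidelines map to an {ilm-init} policy roughly like the one below. This is only a sketch for orientation; the policy name `syslog-policy` is made up, and the rest of this tutorial configures the same behavior through {kib}.

[source,bash]
--------------------------------------------------------------------------------
# Sketch: create an equivalent policy through the ES ILM API
curl -X PUT "localhost:9200/_ilm/policy/syslog-policy" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "warm": {
        "min_age": "30d",
        "actions": {
          "allocate": { "require": { "data": "warm" }, "number_of_replicas": 1 },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}'
--------------------------------------------------------------------------------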


[discrete]
[[example-using-index-lifecycle-policy-prerequisites]]
==== Prerequisites

To complete this tutorial, you'll need:

* An {es} cluster with hot and warm nodes configured for shard allocation
awareness. If you’re using {cloud}/ec-getting-started-templates-hot-warm.html[{ess}],
choose the hot-warm architecture deployment template.
+
For a self-managed cluster, add node attributes as described for {ref}/shard-allocation-filtering.html[shard allocation filtering]
to label data nodes as hot or warm. This step is required to migrate shards between
nodes configured with specific hardware for the hot or warm phases.
+
For example, you can set this in your `elasticsearch.yml` for each data node:
+
[source,yaml]
--------------------------------------------------------------------------------
node.attr.data: "warm"
--------------------------------------------------------------------------------

* A server with {filebeat} installed and configured to send logs to the `elasticsearch`
output as described in {filebeat-ref}/filebeat-getting-started.html[Getting Started with {filebeat}].
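
If you have not loaded the {filebeat} index template and default {ilm-init} policy into {es} yet, the `setup` command can do that for you. This is a sketch; the exact flags and connection settings depend on your `filebeat.yml`.

[source,bash]
--------------------------------------------------------------------------------
# Load the index template and the default ILM policy into Elasticsearch
filebeat setup --index-management
--------------------------------------------------------------------------------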

[discrete]
[[example-using-index-lifecycle-policy-view-fb-ilm-policy]]
==== View the {filebeat} {ilm-init} policy

{filebeat} includes a default {ilm-init} policy that enables rollover. {ilm-init}
is enabled automatically if you’re using the default `filebeat.yml` and index template.

To view the default policy in {kib}, go to *Management > Index Lifecycle Policies*,
search for _filebeat_, and choose the _filebeat-version_ policy.

This policy initiates the rollover action when the index size reaches 50GB or
becomes 30 days old.

[role="screenshot"]
image::images/tutorial-ilm-hotphaserollover-default.png["Default policy"]
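
You can also inspect the same policy through the {es} API. This is a sketch; look for the `filebeat-<version>` entry in the response.

[source,bash]
--------------------------------------------------------------------------------
# List all ILM policies, including the default Filebeat policy
curl -X GET "localhost:9200/_ilm/policy?pretty"
--------------------------------------------------------------------------------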


[float]
==== Modify the policy

The default policy is enough to prevent the creation of many tiny daily indices.
You can modify the policy to meet more complex requirements.

. Activate the warm phase.

. Set either of the following options to control when the index moves to the warm phase:

** Provide a value for *Timing for warm phase*. Setting this to *15* keeps the
indices on hot nodes for a range of 15-45 days, depending on when the initial
rollover occurred.

** Enable *Move to warm phase on rollover*. The index might move to the warm phase
more quickly than intended if it reaches the *Maximum index size* before the
*Maximum age*.

. In the *Select a node attribute to control shard allocation* dropdown, select
*data:warm(2)* to migrate shards to warm data nodes.

. Change *Number of replicas* to *1*.

. Enable *Force merge data* and set *Number of segments* to *1*.
+
NOTE: When rollover is enabled in the hot phase, action timing in the other phases
is based on the rollover date.
+
[role="screenshot"]
image::images/tutorial-ilm-modify-default-warm-phase-rollover.png["Modify to add warm phase"]

. Activate the delete phase and set *Timing for delete phase* to *90* days.
+
[role="screenshot"]
image::images/tutorial-ilm-delete-rollover.png["Add a delete phase"]
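
After saving the modified policy, you can check which phase each {filebeat} index is currently in with the {ilm-init} explain API. This is a sketch; adjust the index pattern if yours differs.

[source,bash]
--------------------------------------------------------------------------------
# Show the lifecycle phase, action, and step for every filebeat index
curl -X GET "localhost:9200/filebeat-*/_ilm/explain?pretty"
--------------------------------------------------------------------------------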

[float]
==== Create a custom policy

If meeting a specific retention time period is most important, you can create a
custom policy. For this option, you will use {filebeat} daily indices without
rollover.

. To create a custom policy in {kib}, go to *Management > Index Lifecycle Policies >
Create Policy*.

. Activate the warm phase and configure it as follows:
+
|===
|*Setting* |*Value*

|Timing for warm phase
|30 days from index creation

|Node attribute
|`data:warm`

|Number of replicas
|1

|Force merge data
|enable

|Number of segments
|1
|===
+
[role="screenshot"]
image::images/tutorial-ilm-custom-policy.png["Modify the custom policy to add a warm phase"]


. Activate the delete phase and set the timing.
+
|===
|*Setting* |*Value*

|Timing for delete phase
|90
|===
+
[role="screenshot"]
image::images/tutorial-ilm-delete-phase-creation.png["Delete phase"]

. Configure the index to use the new policy in *{kib} > Management > Index Lifecycle
Policies*:
.. Find your {ilm-init} policy.
.. Click the *Actions* link next to your policy name.
.. Choose *Add policy to index template*.
.. Select your {filebeat} index template name from the *Index template* list. For example, `filebeat-7.5.x`.
.. Click *Add Policy* to save the changes.
+
NOTE: If you initially used the default {filebeat} {ilm-init} policy, you will
see a notice that the template already has a policy associated with it. Confirm
that you want to overwrite that configuration.
+
TIP: When you change the policy associated with the index template, the active
index will continue to use the policy it was associated with at index creation
unless you manually update it. The next new index will use the updated policy.
For more reasons that your {ilm-init} policy changes might be delayed, see
{ref}/update-lifecycle-policy.html#update-lifecycle-policy[Update Lifecycle Policy].
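
To double-check that the custom policy is now referenced by the template, you can inspect the template settings through the API. This is a sketch; the template name depends on your {filebeat} version.

[source,bash]
--------------------------------------------------------------------------------
# The settings should list your policy under index.lifecycle.name
curl -X GET "localhost:9200/_template/filebeat-7.5.x?pretty"
--------------------------------------------------------------------------------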
4 changes: 2 additions & 2 deletions docs/settings/alert-action-settings.asciidoc
@@ -9,7 +9,7 @@ Alerts and actions are enabled by default in {kib}, but require you configure th

. <<using-kibana-with-security,Set up {kib} to work with {stack} {security-features}>>.
. <<configuring-tls-kib-es,Set up TLS encryption between {kib} and {es}>>.
. <<general-alert-action-settings,Specify a value for `xpack.encrypted_saved_objects.encryptionKey`>>.
. <<general-alert-action-settings,Specify a value for `xpack.encryptedSavedObjects.encryptionKey`>>.

You can configure the following settings in the `kibana.yml` file.

@@ -18,7 +18,7 @@ You can configure the following settings in the `kibana.yml` file.
[[general-alert-action-settings]]
==== General settings

`xpack.encrypted_saved_objects.encryptionKey`::
`xpack.encryptedSavedObjects.encryptionKey`::

A string of 32 or more characters used to encrypt sensitive properties on alerts and actions before they're stored in {es}. Third party credentials &mdash; such as the username and password used to connect to an SMTP service &mdash; are an example of encrypted properties.
+
2 changes: 1 addition & 1 deletion docs/user/alerting/index.asciidoc
@@ -157,7 +157,7 @@ Pre-packaged *alert types* simplify setup, hide the details complex domain-speci
If you are using an *on-premises* Elastic Stack deployment with <<using-kibana-with-security, *security*>>:

* TLS must be configured for communication <<configuring-tls-kib-es, between {es} and {kib}>>. {kib} alerting uses <<api-keys, API keys>> to secure background alert checks and actions, and API keys require {ref}/configuring-tls.html#tls-http[TLS on the HTTP interface].
* In the kibana.yml configuration file, add the <<alert-action-settings-kb,`xpack.encrypted_saved_objects.encryptionKey` setting>>
* In the kibana.yml configuration file, add the <<alert-action-settings-kb,`xpack.encryptedSavedObjects.encryptionKey` setting>>

[float]
[[alerting-security]]
4 changes: 2 additions & 2 deletions package.json
@@ -126,7 +126,7 @@
"@elastic/filesaver": "1.1.2",
"@elastic/good": "8.1.1-kibana2",
"@elastic/numeral": "2.4.0",
"@elastic/request-crypto": "1.1.2",
"@elastic/request-crypto": "1.1.4",
"@elastic/ui-ace": "0.2.3",
"@hapi/good-squeeze": "5.2.1",
"@hapi/wreck": "^15.0.2",
@@ -401,7 +401,7 @@
"chai": "3.5.0",
"chance": "1.0.18",
"cheerio": "0.22.0",
"chromedriver": "^80.0.1",
"chromedriver": "^81.0.0",
"classnames": "2.2.6",
"dedent": "^0.7.0",
"delete-empty": "^2.0.0",
2 changes: 1 addition & 1 deletion rfcs/text/0002_encrypted_attributes.md
@@ -166,7 +166,7 @@ take a look at the source code of this library to know how encryption is perform
parameters are used, but in short it's AES Encryption with AES-256-GCM that uses random initialization vector and salt.

As with encryption key for Kibana's session cookie, master encryption key used by `encrypted_saved_objects` plugin can be
defined as a configuration value (`xpack.encrypted_saved_objects.encryptionKey`) via `kibana.yml`, but it's **highly
defined as a configuration value (`xpack.encryptedSavedObjects.encryptionKey`) via `kibana.yml`, but it's **highly
recommended** to define this key in the [Kibana Keystore](https://www.elastic.co/guide/en/kibana/current/secure-settings.html)
instead. The master key should be cryptographically safe and be equal or greater than 32 bytes.

61 changes: 61 additions & 0 deletions src/core/MIGRATION.md
@@ -24,6 +24,7 @@
- [7. Switch to new platform services](#7-switch-to-new-platform-services)
- [8. Migrate to the new plugin system](#8-migrate-to-the-new-plugin-system)
- [Bonus: Tips for complex migration scenarios](#bonus-tips-for-complex-migration-scenarios)
- [Keep Kibana fast](#keep-kibana-fast)
- [Frequently asked questions](#frequently-asked-questions)
- [Is migrating a plugin an all-or-nothing thing?](#is-migrating-a-plugin-an-all-or-nothing-thing)
- [Do plugins need to be converted to TypeScript?](#do-plugins-need-to-be-converted-to-typescript)
@@ -933,6 +934,66 @@ For a few plugins, some of these steps (such as angular removal) could be a mont

One convention that is useful for this is creating a dedicated `public/np_ready` directory to house the code that is ready to migrate, and gradually move more and more code into it until the rest of your plugin is essentially empty. At that point, you'll be able to copy your `index.ts`, `plugin.ts`, and the contents of `./np_ready` over into your plugin in the new platform, leaving your legacy shim behind. This carries the added benefit of providing a way for us to introduce helpful tooling in the future, such as [custom eslint rules](https://github.com/elastic/kibana/pull/40537), which could be run against that specific directory to ensure your code is ready to migrate.

## Keep Kibana fast

**tl;dr**: Load as much code lazily as possible.

Everyone loves snappy applications with a responsive UI and hates spinners. Users deserve the best experience whether they run Kibana locally or in the cloud, regardless of their hardware and environment.

There are two main aspects to the perceived speed of an application: loading time and responsiveness to user actions.

The new platform loads and bootstraps **all** plugins whenever a user lands on any page. This means that every newly added application affects overall **loading performance**, because plugin code is loaded **eagerly** in order to initialize the plugin and provide its API to dependent plugins.

However, it is usually not necessary to load and initialize all of a plugin's code at once. A plugin can load the code backing its API during Kibana bootstrap, but load UI-related code lazily, on demand, when an application page or management section is mounted.

Always prefer to require UI root components lazily where possible (such as in mount handlers). Even if their own size seems negligible, they likely depend on heavyweight libraries that will also be removed from the initial plugin bundle, reducing its size by a significant amount.

```typescript
import { Plugin, CoreSetup, AppMountParameters } from 'src/core/public';
import { MyPluginSetup, SetupDeps } from './types'; // types defined by your plugin

export class MyPlugin implements Plugin<MyPluginSetup> {
  setup(core: CoreSetup, plugins: SetupDeps) {
    core.application.register({
      id: 'app',
      title: 'My app',
      async mount(params: AppMountParameters) {
        // Load the UI code only when the application is actually mounted.
        const { mountApp } = await import('./app/mount_app');
        return mountApp(await core.getStartServices(), params);
      },
    });
    plugins.management.sections.getSection('another').registerApp({
      id: 'app',
      title: 'My app',
      order: 1,
      async mount(params) {
        // Same pattern for management sections: lazy-load on mount.
        const { mountManagementSection } = await import('./app/mount_management_section');
        return mountManagementSection(core, params);
      },
    });
    return {
      doSomething() {},
    };
  }
}
```

#### How can I tell how big my plugin's bundle is?

New platform plugins are distributed as package artifacts pre-built with `@kbn/optimizer`. This allows us to avoid shipping the optimizer in the distributable version of Kibana.

Every NP plugin artifact contains all the plugin dependencies required to run it, except for some stateful dependencies shared across plugin bundles via `@kbn/ui-shared-deps`. As a result, NP plugin artifacts tend to be bigger than their legacy platform versions.

To check the current size of your plugin artifact, run `@kbn/optimizer`:
```bash
node scripts/build_kibana_platform_plugins.js --dist --no-examples
```
and check the output in the `target` sub-folder of your plugin folder
```bash
ls -lh plugins/my_plugin/target/public/
# output
# an async chunk loaded on demand
... 262K 0.plugin.js
# eagerly loaded chunk
... 50K my_plugin.plugin.js
```
You should see at least one JS bundle - `my_plugin.plugin.js`. This is the only artifact loaded by the platform during bootstrap in the browser, so the rule of thumb is to keep its size as small as possible. The other, lazily loaded parts of your plugin are present in the same folder as separate chunks named `{number}.plugin.js`.

If you want to investigate what your plugin bundle consists of, run `@kbn/optimizer` with the `--profile` flag to generate a [webpack stats file](https://webpack.js.org/api/stats/), as shown below. Many OSS tools allow you to analyze the generated stats file:

- [the official analyse tool](http://webpack.github.io/analyse/#modules) from the webpack authors
- [webpack-visualizer](https://chrisbateman.github.io/webpack-visualizer/)
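
For example, a profiling run might look like this (a sketch; the exact location of the emitted stats file is an assumption based on the output layout shown above):

```bash
# Rebuild the plugin bundles and emit webpack stats alongside them
node scripts/build_kibana_platform_plugins.js --dist --no-examples --profile

# The stats file should appear next to the generated bundles (assumed path)
ls plugins/my_plugin/target/public/stats.json
```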

## Frequently asked questions

### Is migrating a plugin an all-or-nothing thing?
25 changes: 23 additions & 2 deletions src/core/public/application/scoped_history.test.ts
@@ -268,11 +268,32 @@ describe('ScopedHistory', () => {
const gh = createMemoryHistory();
gh.push('/app/wow');
const h = new ScopedHistory(gh, '/app/wow');
expect(h.createHref({ pathname: '' })).toEqual(`/`);
expect(h.createHref({ pathname: '' })).toEqual(`/app/wow`);
expect(h.createHref({})).toEqual(`/app/wow`);
expect(h.createHref({ pathname: '/new-page', search: '?alpha=true' })).toEqual(
`/new-page?alpha=true`
`/app/wow/new-page?alpha=true`
);
});

it('behaves correctly with slash-ending basePath', () => {
const gh = createMemoryHistory();
gh.push('/app/wow/');
const h = new ScopedHistory(gh, '/app/wow/');
expect(h.createHref({ pathname: '' })).toEqual(`/app/wow/`);
expect(h.createHref({ pathname: '/new-page', search: '?alpha=true' })).toEqual(
`/app/wow/new-page?alpha=true`
);
});

it('skips the scoped history path when `prependBasePath` is false', () => {
const gh = createMemoryHistory();
gh.push('/app/wow');
const h = new ScopedHistory(gh, '/app/wow');
expect(h.createHref({ pathname: '' }, { prependBasePath: false })).toEqual(`/`);
expect(
h.createHref({ pathname: '/new-page', search: '?alpha=true' }, { prependBasePath: false })
).toEqual(`/new-page?alpha=true`);
});
});

describe('action', () => {
Expand Down