Merge branch 'master' into event-log/query-support
* master:
  [Drilldowns] Dashboard state fixes for drilldowns (elastic#61457)
  allow null for filterQuery (elastic#62310)
  [ML] call job validation endpoint with complete payload (elastic#62331)
  removing configuration from agents (elastic#62290)
  Allow Enterprise license for service map (elastic#62371)
  docs: updates to apm agent config (elastic#61893)
  [Ingest] Fix package info request returning 500 (elastic#61712)
  move crypto to server utils (elastic#62344)
  Start indexing documents by default (elastic#62159)
  [Endpoint] Update host field accordion (elastic#61878)
  Add more definitions about Ingest Management (elastic#62049)
gmmorris committed Apr 3, 2020
2 parents 2bbac1a + ea7a81d commit a492453
Showing 49 changed files with 605 additions and 168 deletions.
49 changes: 15 additions & 34 deletions docs/apm/agent-configuration.asciidoc
@@ -31,37 +31,18 @@ Kibana communicates any changed settings to APM Server so that your agents only
[float]
==== Supported configurations

[float]
===== `CAPTURE_BODY`

added[7.5.0] Can be `"off"`, `"errors"`, `"transactions"`, or `"all"`. Defaults to `"off"`.

For transactions that are HTTP requests, the Agent can optionally capture the request body, e.g., POST variables.
Remember, request bodies often contain sensitive values like passwords, credit card numbers, etc.
If your service handles sensitive data, enable this feature with care.
Turning on body capturing can also significantly increase the overhead of the Agent
and the Elasticsearch index size.

[float]
===== `TRANSACTION_MAX_SPANS`

added[7.5.0] A number between `0` and `32000`. Defaults to `500`.

Limit the number of spans that are recorded per transaction.
This is helpful in cases where a transaction creates a very high number of spans, e.g., thousands of SQL queries.
Setting an upper limit will help prevent the Agent and the APM Server from being overloaded.
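
As an illustration only, here is a minimal TypeScript sketch of how such a per-transaction span cap could behave; the `Transaction` class and its fields are hypothetical, not the actual agent implementation:

[source,typescript]
----
// Illustrative only: a hypothetical per-transaction span cap, not the real agent code.
interface Span {
  name: string;
  durationMs: number;
}

class Transaction {
  private readonly spans: Span[] = [];
  public droppedSpans = 0;

  constructor(private readonly maxSpans: number = 500) {}

  addSpan(span: Span): void {
    if (this.spans.length >= this.maxSpans) {
      // Spans beyond the limit are counted but not recorded, keeping the
      // agent's memory use and the load on APM Server bounded.
      this.droppedSpans++;
      return;
    }
    this.spans.push(span);
  }
}

// A transaction that produces thousands of SQL query spans only keeps the first 500.
const tx = new Transaction(500);
for (let i = 0; i < 3000; i++) {
  tx.addSpan({ name: `SELECT * FROM orders /* query ${i} */`, durationMs: 2 });
}
console.log(tx.droppedSpans); // 2500
----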

[float]
===== `TRANSACTION_SAMPLE_RATE`

added[7.3.0] A sample rate between `0.000` and `1.0`. Defaults to `1.0` (100% of traces).

Adjusting the sampling rate controls what percent of requests are traced.
`1.0` means _all_ requests are traced. If you set the `TRANSACTION_SAMPLE_RATE` to a value below `1.0`,
the agent will randomly sample only a subset of transactions.
Unsampled transactions only record the name of the transaction, the overall transaction time, and the result.

IMPORTANT: In a distributed trace, the sampling decision is propagated by the initializing Agent.
This means if you're using multiple agents, only the originating service's sampling rate will be used.
Be sure to set sensible defaults in _all_ of your agents, especially the
{apm-rum-ref}/configuration.html#transaction-sample-rate[JavaScript RUM Agent].
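
For illustration, a minimal TypeScript sketch of a head-based sampling decision; the `startTransaction` helper is hypothetical, and real agents read the rate from configuration and propagate the decision through the distributed tracing headers:

[source,typescript]
----
// Illustrative only: a hypothetical head-based sampling decision.
interface TransactionRecord {
  name: string;
  sampled: boolean;
  // Unsampled transactions still report their name, overall duration, and result.
  durationMs?: number;
  result?: string;
}

function startTransaction(name: string, sampleRate: number): TransactionRecord {
  // The originating service decides once; downstream services inherit this
  // decision through the trace context, which is why every agent in a
  // distributed trace needs a sensible rate configured.
  const sampled = Math.random() < sampleRate;
  return { name, sampled };
}

const transaction = startTransaction('GET /api/orders', 0.2); // roughly 20% of traces are sampled
----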
Each Agent has its own list of supported configurations.
After selecting a Service name and environment in the APM app,
a list of all available configuration options,
including descriptions and default values, will be displayed.

Supported configurations are also marked in each Agent's configuration documentation:

[horizontal]
Go Agent:: {apm-go-ref}/configuration.html[Configuration reference]
Java Agent:: {apm-java-ref}/configuration.html[Configuration reference]
.NET Agent:: {apm-dotnet-ref}/configuration.html[Configuration reference]
Node.js Agent:: {apm-node-ref}/configuration.html[Configuration reference]
Python Agent:: {apm-py-ref}/configuration.html[Configuration reference]
Ruby Agent:: {apm-ruby-ref}/configuration.html[Configuration reference]
Real User Monitoring (RUM) Agent:: {apm-rum-ref}/configuration.html[Configuration reference]
Binary file modified docs/apm/images/apm-agent-configuration.png
61 changes: 58 additions & 3 deletions docs/epm/index.asciidoc → docs/ingest_manager/index.asciidoc
@@ -1,8 +1,8 @@
[role="xpack"]
[[epm]]
== Elastic Package Manager
== Ingest Manager

These are the docs for the Elastic Package Manager (EPM).
These are the docs for the Ingest Manager.


=== Configuration
@@ -39,16 +39,71 @@ curl -X DELETE localhost:5601/api/ingest_manager/epm/packages/iptables-1.0.4

This section defines terms used across ingest management.

==== Data Source

A data source is a definition of how to collect data from a service, for example `nginx`. A data source contains
definitions for one or multiple inputs, and each input can contain one or multiple streams.

In the nginx example, the data source contains two inputs, `logs` and `nginx/metrics`, because logs and metrics are collected
differently. The `logs` input contains two streams, `access` and `error`, and the `nginx/metrics` input contains the `stubstatus` stream.
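
To make this structure concrete, here is a hedged TypeScript sketch of the nginx data source described above; the interface names and option fields are illustrative only, and the same shapes also preview the Input and Stream definitions further down:

[source,typescript]
----
// Illustrative shapes only; the real Ingest Manager types may differ.
interface Stream {
  dataset: string; // e.g. 'nginx.access'
  options?: Record<string, unknown>;
}

interface Input {
  type: string; // e.g. 'logs' or 'nginx/metrics'
  // Input-level options such as credentials or a host URL (hypothetical fields).
  options?: Record<string, unknown>;
  streams: Stream[];
}

interface DataSource {
  name: string;
  inputs: Input[];
}

const nginxDataSource: DataSource = {
  name: 'nginx',
  inputs: [
    {
      type: 'logs',
      streams: [{ dataset: 'nginx.access' }, { dataset: 'nginx.error' }],
    },
    {
      type: 'nginx/metrics',
      options: { hosts: ['http://127.0.0.1'] },
      streams: [{ dataset: 'nginx.stubstatus' }],
    },
  ],
};
----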


==== Data Stream

Data Streams are a https://github.com/elastic/elasticsearch/issues/53100[new concept] in Elasticsearch that simplifies
ingesting data and the setup of Elasticsearch.

==== Elastic Agent

A single, unified agent that users can deploy to hosts or containers. It controls which data is collected from the host or containers and where the data is sent. It will run Beats, Endpoint or other monitoring programs as needed. It can operate standalone or pull a configuration policy from Fleet.


==== Elastic Package Registry

The Elastic Package Registry (EPR) is a service that runs at https://epr.elastic.co. It serves the packages through its API.
More details about the registry can be found at https://github.com/elastic/package-registry.
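
As a rough sketch, the registry can be queried over HTTP; the `/search?package=...` path used below is an assumption about the EPR API rather than a documented contract:

[source,typescript]
----
// Hedged example: assumes the registry exposes a `/search?package=<name>`
// endpoint that returns JSON; adjust to the actual EPR API if it differs.
async function findPackages(name: string): Promise<unknown> {
  const response = await fetch(`https://epr.elastic.co/search?package=${name}`);
  if (!response.ok) {
    throw new Error(`Registry request failed: ${response.status}`);
  }
  return response.json();
}

findPackages('nginx').then(packages => console.log(packages));
----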

==== Fleet

Fleet is the part of the Ingest Manager UI in Kibana that handles enrolling Elastic Agents,
managing them, and sending configurations to them.

==== Indexing Strategy

Ingest Management and the Elastic Agent follow a strict new indexing strategy: `{type}-{dataset}-{namespace}`. An example
of this is `logs-nginx.access-default`. More details can be found in the Indexing Strategy section below. All data following
this indexing strategy is sent to Data Streams.
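
A small sketch of the naming rule; the `streamName` helper below is purely illustrative:

[source,typescript]
----
// Illustrative helper: builds a name following {type}-{dataset}-{namespace}.
function streamName(type: string, dataset: string, namespace: string = 'default'): string {
  return `${type}-${dataset}-${namespace}`;
}

streamName('logs', 'nginx.access');                 // 'logs-nginx.access-default'
streamName('metrics', 'nginx.stubstatus', 'prod');  // 'metrics-nginx.stubstatus-prod'
----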

==== Input

An input is the configuration unit in an Agent Config that defines the options for how to collect data from
an endpoint. Examples are a username and password needed to authenticate with a service, or a host URL.

An input is part of a Data Source and contains streams.

==== Integration

An integration is a package of the type `integration`. An integration package has at least one data source
and usually collects data from or about a service.


==== Namespace

A user-specified string that will be used as part of the index name in Elasticsearch. It helps users identify logs coming from a specific environment (like prod or test), an application, or other identifiers.


==== Package

A package contains all the assets for the Elastic Stack. A more detailed definition of a
package can be found under https://github.com/elastic/package-registry.

Besides the assets, a package contains the data source definitions with its inputs and streams.

==== Stream

A stream is a configuration unit in the Elastic Agent config. A stream is part of an input and defines how the data
fetched by this input should be processed and which Data Stream to send it to.

== Indexing Strategy

@@ -22,7 +22,7 @@ jest.mock('fs', () => ({ readFileSync: mockReadFileSync }));

export const mockReadPkcs12Keystore = jest.fn();
export const mockReadPkcs12Truststore = jest.fn();
jest.mock('../../utils', () => ({
jest.mock('../utils', () => ({
readPkcs12Keystore: mockReadPkcs12Keystore,
readPkcs12Truststore: mockReadPkcs12Truststore,
}));
2 changes: 1 addition & 1 deletion src/core/server/elasticsearch/elasticsearch_config.test.ts
@@ -227,7 +227,7 @@ describe('throws when config is invalid', () => {
beforeAll(() => {
const realFs = jest.requireActual('fs');
mockReadFileSync.mockImplementation((path: string) => realFs.readFileSync(path));
const utils = jest.requireActual('../../utils');
const utils = jest.requireActual('../utils');
mockReadPkcs12Keystore.mockImplementation((path: string, password?: string) =>
utils.readPkcs12Keystore(path, password)
);
2 changes: 1 addition & 1 deletion src/core/server/elasticsearch/elasticsearch_config.ts
@@ -21,7 +21,7 @@ import { schema, TypeOf } from '@kbn/config-schema';
import { Duration } from 'moment';
import { readFileSync } from 'fs';
import { ConfigDeprecationProvider } from 'src/core/server';
import { readPkcs12Keystore, readPkcs12Truststore } from '../../utils';
import { readPkcs12Keystore, readPkcs12Truststore } from '../utils';
import { ServiceConfigDescriptor } from '../internal_types';

const hostURISchema = schema.uri({ scheme: ['http', 'https'] });
2 changes: 1 addition & 1 deletion src/core/server/http/ssl_config.test.mocks.ts
@@ -24,7 +24,7 @@ jest.mock('fs', () => {

export const mockReadPkcs12Keystore = jest.fn();
export const mockReadPkcs12Truststore = jest.fn();
jest.mock('../../utils', () => ({
jest.mock('../utils', () => ({
readPkcs12Keystore: mockReadPkcs12Keystore,
readPkcs12Truststore: mockReadPkcs12Truststore,
}));
2 changes: 1 addition & 1 deletion src/core/server/http/ssl_config.test.ts
@@ -45,7 +45,7 @@ describe('#SslConfig', () => {
beforeEach(() => {
const realFs = jest.requireActual('fs');
mockReadFileSync.mockImplementation((path: string) => realFs.readFileSync(path));
const utils = jest.requireActual('../../utils');
const utils = jest.requireActual('../utils');
mockReadPkcs12Keystore.mockImplementation((path: string, password?: string) =>
utils.readPkcs12Keystore(path, password)
);
2 changes: 1 addition & 1 deletion src/core/server/http/ssl_config.ts
@@ -20,7 +20,7 @@
import { schema, TypeOf } from '@kbn/config-schema';
import crypto from 'crypto';
import { readFileSync } from 'fs';
import { readPkcs12Keystore, readPkcs12Truststore } from '../../utils';
import { readPkcs12Keystore, readPkcs12Truststore } from '../utils';

// `crypto` type definitions doesn't currently include `crypto.constants`, see
// https://github.com/DefinitelyTyped/DefinitelyTyped/blob/fa5baf1733f49cf26228a4e509914572c1b74adf/types/node/v6/index.d.ts#L3412
File renamed without changes.
File renamed without changes.
@@ -29,7 +29,7 @@ import {
import { NO_CA_PATH, NO_CERT_PATH, NO_KEY_PATH, TWO_CAS_PATH, TWO_KEYS_PATH } from './__fixtures__';
import { readFileSync } from 'fs';

import { readPkcs12Keystore, Pkcs12ReadResult, readPkcs12Truststore } from '.';
import { readPkcs12Keystore, Pkcs12ReadResult, readPkcs12Truststore } from './index';

const reformatPem = (pem: string) => {
// ensure consistency in line endings when comparing two PEM files
File renamed without changes.
1 change: 1 addition & 0 deletions src/core/server/utils/index.ts
@@ -17,5 +17,6 @@
* under the License.
*/

export * from './crypto';
export * from './from_root';
export * from './package_json';
1 change: 0 additions & 1 deletion src/core/utils/index.ts
@@ -19,7 +19,6 @@

export * from './assert_never';
export * from './context';
export * from './crypto';
export * from './deep_freeze';
export * from './get';
export * from './map_to_object';
@@ -36,6 +36,7 @@ import {
IndexPattern,
IndexPatternsContract,
Query,
QueryState,
SavedQuery,
syncQueryStateWithUrl,
} from '../../../../../../plugins/data/public';
@@ -132,13 +133,6 @@ export class DashboardAppController {
const queryFilter = filterManager;
const timefilter = queryService.timefilter.timefilter;

// starts syncing `_g` portion of url with query services
// note: dashboard_state_manager.ts syncs `_a` portion of url
const {
stop: stopSyncingQueryServiceStateWithUrl,
hasInheritedQueryFromUrl: hasInheritedGlobalStateFromUrl,
} = syncQueryStateWithUrl(queryService, kbnUrlStateStorage);

let lastReloadRequestTime = 0;
const dash = ($scope.dash = $route.current.locals.dash);
if (dash.id) {
@@ -170,9 +164,24 @@

// The hash check is so we only update the time filter on dashboard open, not during
// normal cross app navigation.
if (dashboardStateManager.getIsTimeSavedWithDashboard() && !hasInheritedGlobalStateFromUrl) {
dashboardStateManager.syncTimefilterWithDashboard(timefilter);
if (dashboardStateManager.getIsTimeSavedWithDashboard()) {
const initialGlobalStateInUrl = kbnUrlStateStorage.get<QueryState>('_g');
if (!initialGlobalStateInUrl?.time) {
dashboardStateManager.syncTimefilterWithDashboardTime(timefilter);
}
if (!initialGlobalStateInUrl?.refreshInterval) {
dashboardStateManager.syncTimefilterWithDashboardRefreshInterval(timefilter);
}
}

// starts syncing `_g` portion of url with query services
// note: dashboard_state_manager.ts syncs `_a` portion of url
// it is important to start this syncing after `dashboardStateManager.syncTimefilterWithDashboard(timefilter);` above is run,
// otherwise it will cause a redundant browser history record
const { stop: stopSyncingQueryServiceStateWithUrl } = syncQueryStateWithUrl(
queryService,
kbnUrlStateStorage
);
$scope.showSaveQuery = dashboardCapabilities.saveQuery as boolean;

const getShouldShowEditHelp = () =>
@@ -652,26 +661,21 @@
// This is only necessary for new dashboards, which will default to Edit mode.
updateViewMode(ViewMode.VIEW);

// We need to do a hard reset of the timepicker. appState will not reload like
// it does on 'open' because it's been saved to the url and the getAppState.previouslyStored() check on
// reload will cause it not to sync.
if (dashboardStateManager.getIsTimeSavedWithDashboard()) {
dashboardStateManager.syncTimefilterWithDashboardTime(timefilter);
dashboardStateManager.syncTimefilterWithDashboardRefreshInterval(timefilter);
}

// Angular's $location skips this update because of history updates from syncState which happen simultaneously:
// when calling kbnUrl.change(), angular schedules a url update, and when angular finally starts to process it,
// the update is considered outdated and angular skips it,
// so we have to use the implementation of dashboardStateManager.changeDashboardUrl, which works around those issues
dashboardStateManager.changeDashboardUrl(
dash.id ? createDashboardEditUrl(dash.id) : DashboardConstants.CREATE_NEW_DASHBOARD_URL
);

// We need to do a hard reset of the timepicker. appState will not reload like
// it does on 'open' because it's been saved to the url and the getAppState.previouslyStored() check on
// reload will cause it not to sync.
if (dashboardStateManager.getIsTimeSavedWithDashboard()) {
// have to use $evalAsync here until '_g' is migrated from $location to the state sync utility ('history')
// When the state sync utility changes the url, angular's $location misses its own updates which happen during the same digest cycle
// temporary solution is to delay $location updates to the next digest cycle
// unfortunately, this causes 2 browser history entries, but this is temporary and will be fixed after migrating '_g' to the state_sync utilities
$scope.$evalAsync(() => {
dashboardStateManager.syncTimefilterWithDashboard(timefilter);
});
}
}

overlays
@@ -59,7 +59,7 @@ describe('DashboardState', function() {
mockTime.to = '2015-09-29 06:31:44.000';

initDashboardState();
dashboardState.syncTimefilterWithDashboard(mockTimefilter);
dashboardState.syncTimefilterWithDashboardTime(mockTimefilter);

expect(mockTime.to).toBe('now/w');
expect(mockTime.from).toBe('now/w');
@@ -74,7 +74,7 @@ describe('DashboardState', function() {
mockTime.to = '2015-09-29 06:31:44.000';

initDashboardState();
dashboardState.syncTimefilterWithDashboard(mockTimefilter);
dashboardState.syncTimefilterWithDashboardTime(mockTimefilter);

expect(mockTime.to).toBe('now');
expect(mockTime.from).toBe('now-13d');
@@ -89,7 +89,7 @@ describe('DashboardState', function() {
mockTime.to = 'now/w';

initDashboardState();
dashboardState.syncTimefilterWithDashboard(mockTimefilter);
dashboardState.syncTimefilterWithDashboardTime(mockTimefilter);

expect(mockTime.to).toBe(savedDashboard.timeTo);
expect(mockTime.from).toBe(savedDashboard.timeFrom);