diff --git a/docs/reference/mapping/removal_of_types.asciidoc b/docs/reference/mapping/removal_of_types.asciidoc index 67ef215931b48..03982bac802c2 100644 --- a/docs/reference/mapping/removal_of_types.asciidoc +++ b/docs/reference/mapping/removal_of_types.asciidoc @@ -4,7 +4,8 @@ IMPORTANT: Indices created in Elasticsearch 6.0.0 or later may only contain a single <>. Indices created in 5.x with multiple mapping types will continue to function as before in Elasticsearch 6.x. -Mapping types will be completely removed in Elasticsearch 7.0.0. +Types will be deprecated in APIs in Elasticsearch 7.0.0, and completely +removed in 8.0.0. [float] === What are mapping types? @@ -256,30 +257,28 @@ Elasticsearch 6.x:: * The `_default_` mapping type is deprecated. +* In 6.7, the index creation, index template, and mapping APIs support a query + string parameter (`include_type_name`) which indicates whether requests and + responses should include a type name. It defaults to `true`, and not setting + `include_type_name=false` will result in a deprecation warning. Indices which + don't have an explicit type will use the dummy type name `_doc`. + Elasticsearch 7.x:: -* The `type` parameter in URLs are deprecated. For instance, indexing - a document no longer requires a document `type`. The new index APIs +* Specifying types in requests is deprecated. For instance, indexing a + document no longer requires a document `type`. The new index APIs are `PUT {index}/_doc/{id}` in case of explicit ids and `POST {index}/_doc` for auto-generated ids. -* The index creation, `GET|PUT _mapping` and document APIs support a query - string parameter (`include_type_name`) which indicates whether requests and - responses should include a type name. It defaults to `true`. - 7.x indices which don't have an explicit type will use the dummy type name - `_doc`. Not setting `include_type_name=false` will result in a deprecation - warning. 
+* The `include_type_name` parameter in the index creation, index template, + and mapping APIs will default to `false`. Setting the parameter will result + in a deprecation warning. * The `_default_` mapping type is removed. Elasticsearch 8.x:: -* The `type` parameter is no longer supported in URLs. - -* The `include_type_name` parameter is deprecated, default to `false` and fails - the request when set to `true`. - -Elasticsearch 9.x:: +* Specifying types in requests is no longer supported. * The `include_type_name` parameter is removed. @@ -425,7 +424,87 @@ POST _reindex // NOTCONSOLE [float] -=== Index templates +=== Typeless APIs in 7.0 + +In Elasticsearch 7.0, each API will support typeless requests, +and specifying a type will produce a deprecation warning. Certain +typeless APIs are also available in 6.7, to enable a smooth upgrade +path to 7.0. + +[float] +==== Indices APIs + +Index creation, index template, and mapping APIs support a new `include_type_name` +URL parameter that specifies whether mapping definitions in requests and responses +should contain the type name. The parameter defaults to `true` in version 6.7 to +match the pre-7.0 behavior of using type names in mappings. It defaults to `false` +in version 7.0 and will be removed in version 8.0. + +The following examples show interactions with Elasticsearch when this option is provided: + +[source,js] +-------------------------------------------------- +PUT index?include_type_name=false +{ + "mappings": { + "properties": { <1> + "foo": { + "type": "keyword" + } + } + } +} +-------------------------------------------------- +// CONSOLE +<1> Mappings are included directly under the `mappings` key, without a type name. 
+ +[source,js] +-------------------------------------------------- +PUT index/_mappings?include_type_name=false +{ + "properties": { <1> + "bar": { + "type": "text" + } + } +} +-------------------------------------------------- +// CONSOLE +// TEST[continued] +<1> Mappings are specified directly in the request body, without a type name. + +[source,js] +-------------------------------------------------- +GET index/_mappings?include_type_name=false +-------------------------------------------------- +// CONSOLE +// TEST[continued] + +The above call returns: + +[source,js] +-------------------------------------------------- +{ + "index": { + "mappings": { + "properties": { <1> + "foo": { + "type": "keyword" + }, + "bar": { + "type": "text" + } + } + } + } +} +-------------------------------------------------- +// CONSOLE +// TESTRESPONSE +<1> Mappings are included directly under the `mappings` key, without a type name. + +[float] +==== Index templates It is recommended to make index templates typeless before upgrading to 7.0 by re-adding them with `include_type_name` set to `false`. @@ -495,5 +574,18 @@ PUT index-2-01?include_type_name=false In case of implicit index creation, because of documents that get indexed in an index that doesn't exist yet, the template is always honored. This is -usually not a problem due to the fact that typless index calls work on typed +usually not a problem due to the fact that typeless index calls work on typed indices. + +[float] +==== Mixed-version clusters + +In a cluster composed of both 6.7 and 7.0 nodes, the parameter +`include_type_name` should be specified in indices APIs like index +creation. This is because the parameter has a different default between +6.7 and 7.0, so the same mapping definition will not be valid for both +node versions. + +Typeless document APIs such as `bulk` and `update` are only available as of +7.0, and will not work with 6.7 nodes. 
This also holds true for the typeless +versions of queries that perform document lookups, such as `terms`. diff --git a/docs/reference/settings/security-settings.asciidoc b/docs/reference/settings/security-settings.asciidoc index fbcf2e4c09a56..7c3543c82b411 100644 --- a/docs/reference/settings/security-settings.asciidoc +++ b/docs/reference/settings/security-settings.asciidoc @@ -147,6 +147,36 @@ method. The length of time that a token is valid for. By default this value is `20m` or 20 minutes. The maximum value is 1 hour. +[float] +[[api-key-service-settings]] +==== API key service settings + +You can set the following API key service settings in +`elasticsearch.yml`. + +`xpack.security.authc.api_key.enabled`:: +Set to `false` to disable the built-in API key service. Defaults to `true` unless + `xpack.security.http.ssl.enabled` is `false`. This prevents sniffing the API key + from a connection over plain HTTP. + +`xpack.security.authc.api_key.hashing.algorithm`:: +Specifies the hashing algorithm that is used for securing API key credentials. +See <>. Defaults to `pbkdf2`. + +`xpack.security.authc.api_key.cache.ttl`:: +The time-to-live for cached API key entries. An API key ID and a hash of its +API key are cached for this period of time. Specify the time period using +the standard {es} <>. Defaults to `1d`. + +`xpack.security.authc.api_key.cache.max_keys`:: +The maximum number of API key entries that can live in the +cache at any given time. Defaults to 10,000. + +`xpack.security.authc.api_key.cache.hash_algo`:: (Expert Setting) +The hashing algorithm that is used for the +in-memory cached API key credentials. For possible values, see <>. +Defaults to `ssha256`. 
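To make the relationship between these settings concrete, here is a sketch of how they might be combined in `elasticsearch.yml` — the values shown are simply the documented defaults written out explicitly, not tuning recommendations:

```yaml
# Illustrative only: these are the documented defaults for the API key
# service, spelled out explicitly. Enabling the service is only safe when
# TLS is enabled on the HTTP layer, since API keys are sent as credentials
# in the Authorization header.
xpack.security.authc.api_key.enabled: true
xpack.security.authc.api_key.hashing.algorithm: pbkdf2
xpack.security.authc.api_key.cache.ttl: 1d
xpack.security.authc.api_key.cache.max_keys: 10000
xpack.security.authc.api_key.cache.hash_algo: ssha256
```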
+ [float] [[realm-settings]] ==== Realm settings diff --git a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java index ddaa31b6bbad7..8c99a18aa4078 100644 --- a/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java +++ b/server/src/main/java/org/elasticsearch/rest/action/admin/indices/RestGetIndexTemplateAction.java @@ -51,9 +51,10 @@ public class RestGetIndexTemplateAction extends BaseRestHandler { Collections.singleton(INCLUDE_TYPE_NAME_PARAMETER), Settings.FORMAT_PARAMS)); private static final DeprecationLogger deprecationLogger = new DeprecationLogger( LogManager.getLogger(RestGetIndexTemplateAction.class)); - public static final String TYPES_DEPRECATION_MESSAGE = "[types removal] The response format of get index template requests will change" - + " in the next major version. Please start using the `include_type_name` parameter set to `false` in the request to " - + "move to the new, typeless response format that will be the default in 7.0."; + public static final String TYPES_DEPRECATION_MESSAGE = "[types removal] The response format of get index " + + "template requests will change in 7.0. Please start using the include_type_name parameter set to false " + + "to move to the new, typeless response format that will become the default."; + public RestGetIndexTemplateAction(final Settings settings, final RestController controller) { super(settings); controller.registerHandler(GET, "/_template", this); diff --git a/x-pack/docs/en/rest-api/security/create-api-keys.asciidoc b/x-pack/docs/en/rest-api/security/create-api-keys.asciidoc index e4fa1be71d40e..741a9d79feaf0 100644 --- a/x-pack/docs/en/rest-api/security/create-api-keys.asciidoc +++ b/x-pack/docs/en/rest-api/security/create-api-keys.asciidoc @@ -24,6 +24,8 @@ applicable for the API key in milliseconds. 
NOTE: By default API keys never expire. You can specify expiration at the time of creation for the API keys. +See <> for configuration settings related to the API key service. + ==== Request Body The following parameters can be specified in the body of a POST or PUT request: @@ -97,3 +99,13 @@ API key information. <1> unique id for this API key <2> optional expiration in milliseconds for this API key <3> generated API key + +The API key returned by this API can then be used by sending a request with an +`Authorization` header whose value has the prefix `ApiKey ` followed +by the _credentials_, where _credentials_ is the base64 encoding of `id` and `api_key` joined by a colon. + +[source,shell] +-------------------------------------------------- +curl -H "Authorization: ApiKey VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw==" http://localhost:9200/_cluster/health +-------------------------------------------------- +// NOTCONSOLE diff --git a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/messages/Messages.java b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/messages/Messages.java index 910a3651ee924..1192c2f94731c 100644 --- a/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/messages/Messages.java +++ b/x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/job/messages/Messages.java @@ -79,6 +79,7 @@ public final class Messages { public static final String JOB_AUDIT_DATAFEED_STARTED_FROM_TO = "Datafeed started (from: {0} to: {1}) with frequency [{2}]"; public static final String JOB_AUDIT_DATAFEED_STARTED_REALTIME = "Datafeed started in real-time"; public static final String JOB_AUDIT_DATAFEED_STOPPED = "Datafeed stopped"; + public static final String JOB_AUDIT_DATAFEED_ISOLATED = "Datafeed isolated"; public static final String JOB_AUDIT_DELETING = "Deleting job by task with id ''{0}''"; public static final String JOB_AUDIT_DELETING_FAILED = "Error deleting job: {0}"; public 
static final String JOB_AUDIT_DELETED = "Job deleted"; diff --git a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/MlLifeCycleService.java b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/MlLifeCycleService.java index 261534556968a..af88cbd796df8 100644 --- a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/MlLifeCycleService.java +++ b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/MlLifeCycleService.java @@ -46,7 +46,7 @@ public synchronized void stop() { // datafeeds, so they get reallocated. We have to do this first, otherwise the datafeeds // could fail if they send data to a dead autodetect process. if (datafeedManager != null) { - datafeedManager.isolateAllDatafeedsOnThisNode(); + datafeedManager.isolateAllDatafeedsOnThisNodeBeforeShutdown(); } NativeController nativeController = NativeControllerHolder.getNativeController(environment); if (nativeController != null) { diff --git a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/action/TransportSetUpgradeModeAction.java b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/action/TransportSetUpgradeModeAction.java index d2d085a5fa6b9..c2a9416813396 100644 --- a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/action/TransportSetUpgradeModeAction.java +++ b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/action/TransportSetUpgradeModeAction.java @@ -264,6 +264,9 @@ private void unassignPersistentTasks(PersistentTasksCustomMetaData tasksCustomMe .sorted(Comparator.comparing(PersistentTask::getTaskName)) .collect(Collectors.toList()); + logger.info("Un-assigning persistent tasks : " + + datafeedAndJobTasks.stream().map(PersistentTask::getId).collect(Collectors.joining(", ", "[ ", " ]"))); + TypedChainTaskExecutor> chainTaskExecutor = new TypedChainTaskExecutor<>(client.threadPool().executor(executor()), r -> true, @@ -288,6 +291,7 @@ private void isolateDatafeeds(PersistentTasksCustomMetaData tasksCustomMetaData, ActionListener> 
listener) { Set datafeedsToIsolate = MlTasks.startedDatafeedIds(tasksCustomMetaData); + logger.info("Isolating datafeeds: " + datafeedsToIsolate.toString()); TypedChainTaskExecutor isolateDatafeedsExecutor = new TypedChainTaskExecutor<>(client.threadPool().executor(executor()), r -> true, ex -> true); diff --git a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedManager.java b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedManager.java index 1d26d78ae3668..921e20fcbb46f 100644 --- a/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedManager.java +++ b/x-pack/plugin/ml/src/main/java/org/elasticsearch/xpack/ml/datafeed/DatafeedManager.java @@ -64,7 +64,6 @@ public class DatafeedManager { private final DatafeedJobBuilder datafeedJobBuilder; private final TaskRunner taskRunner = new TaskRunner(); private final AutodetectProcessManager autodetectProcessManager; - private volatile boolean isolated; public DatafeedManager(ThreadPool threadPool, Client client, ClusterService clusterService, DatafeedJobBuilder datafeedJobBuilder, Supplier currentTimeSupplier, Auditor auditor, AutodetectProcessManager autodetectProcessManager) { @@ -130,18 +129,20 @@ public void stopAllDatafeedsOnThisNode(String reason) { * This is used before the JVM is killed. It differs from stopAllDatafeedsOnThisNode in that it leaves * the datafeed tasks in the "started" state, so that they get restarted on a different node. 
*/ - public void isolateAllDatafeedsOnThisNode() { - isolated = true; + public void isolateAllDatafeedsOnThisNodeBeforeShutdown() { Iterator iter = runningDatafeedsOnThisNode.values().iterator(); while (iter.hasNext()) { Holder next = iter.next(); next.isolateDatafeed(); - next.setRelocating(); + // TODO: it's not ideal that this "isolate" method does something a bit different to the one below + next.setNodeIsShuttingDown(); iter.remove(); } } public void isolateDatafeed(long allocationId) { + // This calls get() rather than remove() because we expect that the persistent task will + // be removed shortly afterwards and that operation needs to be able to find the holder Holder holder = runningDatafeedsOnThisNode.get(allocationId); if (holder != null) { holder.isolateDatafeed(); @@ -195,7 +196,7 @@ protected void doRun() { holder.stop("general_lookback_failure", TimeValue.timeValueSeconds(20), e); return; } - if (isolated == false) { + if (holder.isIsolated() == false) { if (next != null) { doDatafeedRealtime(next, holder.datafeedJob.getJobId(), holder); } else { @@ -298,7 +299,7 @@ public class Holder { private final ProblemTracker problemTracker; private final Consumer finishHandler; volatile Scheduler.Cancellable cancellable; - private volatile boolean isRelocating; + private volatile boolean isNodeShuttingDown; Holder(TransportStartDatafeedAction.DatafeedTask task, String datafeedId, DatafeedJob datafeedJob, ProblemTracker problemTracker, Consumer finishHandler) { @@ -324,7 +325,7 @@ boolean isIsolated() { } public void stop(String source, TimeValue timeout, Exception e) { - if (isRelocating) { + if (isNodeShuttingDown) { return; } @@ -344,11 +345,12 @@ public void stop(String source, TimeValue timeout, Exception e) { if (cancellable != null) { cancellable.cancel(); } - auditor.info(datafeedJob.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_DATAFEED_STOPPED)); + auditor.info(datafeedJob.getJobId(), + Messages.getMessage(isIsolated() ? 
Messages.JOB_AUDIT_DATAFEED_ISOLATED : Messages.JOB_AUDIT_DATAFEED_STOPPED)); finishHandler.accept(e); logger.info("[{}] datafeed [{}] for job [{}] has been stopped{}", source, datafeedId, datafeedJob.getJobId(), acquired ? "" : ", but there may be pending tasks as the timeout [" + timeout.getStringRep() + "] expired"); - if (autoCloseJob) { + if (autoCloseJob && isIsolated() == false) { closeJob(); } if (acquired) { @@ -361,16 +363,18 @@ public void stop(String source, TimeValue timeout, Exception e) { } /** - * This stops a datafeed WITHOUT updating the corresponding persistent task. It must ONLY be called - * immediately prior to shutting down a node. Then the datafeed task can remain "started", and be - * relocated to a different node. Calling this method at any other time will ruin the datafeed. + * This stops a datafeed WITHOUT updating the corresponding persistent task. When called it + * will stop the datafeed from sending data to its job as quickly as possible. The caller + * must do something sensible with the corresponding persistent task. If the node is shutting + * down the task will automatically get reassigned. Otherwise the caller must take action to + * remove or reassign the persistent task, or the datafeed will be left in limbo. 
*/ public void isolateDatafeed() { datafeedJob.isolate(); } - public void setRelocating() { - isRelocating = true; + public void setNodeIsShuttingDown() { + isNodeShuttingDown = true; } private Long executeLookBack(long startTime, Long endTime) throws Exception { diff --git a/x-pack/plugin/security/cli/build.gradle b/x-pack/plugin/security/cli/build.gradle index 8515b538bd562..1c684809a3203 100644 --- a/x-pack/plugin/security/cli/build.gradle +++ b/x-pack/plugin/security/cli/build.gradle @@ -24,6 +24,7 @@ dependencyLicenses { if (project.inFipsJvm) { unitTest.enabled = false + testingConventions.enabled = false // Forbiden APIs non-portable checks fail because bouncy castle classes being used from the FIPS JDK since those are // not part of the Java specification - all of this is as designed, so we have to relax this check for FIPS. tasks.withType(CheckForbiddenApis) { @@ -32,4 +33,5 @@ if (project.inFipsJvm) { // FIPS JVM includes many classes from bouncycastle which count as jar hell for the third party audit, // rather than provide a long list of exclusions, disable the check on FIPS. 
thirdPartyAudit.enabled = false + } diff --git a/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/ApiKeyService.java b/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/ApiKeyService.java index a6d30814cd9dd..6323a79fdc43a 100644 --- a/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/ApiKeyService.java +++ b/x-pack/plugin/security/src/main/java/org/elasticsearch/xpack/security/authc/ApiKeyService.java @@ -108,7 +108,7 @@ public class ApiKeyService { static final String API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY = "_security_api_key_limited_by_role_descriptors"; public static final Setting PASSWORD_HASHING_ALGORITHM = new Setting<>( - "xpack.security.authc.api_key_hashing.algorithm", "pbkdf2", Function.identity(), v -> { + "xpack.security.authc.api_key.hashing.algorithm", "pbkdf2", Function.identity(), v -> { if (Hasher.getAvailableAlgoStoredHash().contains(v.toLowerCase(Locale.ROOT)) == false) { throw new IllegalArgumentException("Invalid algorithm: " + v + ". 
Valid values for password hashing are " + Hasher.getAvailableAlgoStoredHash().toString()); diff --git a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authc/ApiKeyIntegTests.java b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authc/ApiKeyIntegTests.java index 0ea5a6a6f3c0f..80afadd805dad 100644 --- a/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authc/ApiKeyIntegTests.java +++ b/x-pack/plugin/security/src/test/java/org/elasticsearch/xpack/security/authc/ApiKeyIntegTests.java @@ -316,6 +316,7 @@ public void testInvalidatedApiKeysDeletedByRemover() throws Exception { }, 30, TimeUnit.SECONDS); } + @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/38408") public void testExpiredApiKeysDeletedAfter1Week() throws Exception { List responses = createApiKeys(2, null); Instant created = Instant.now(); diff --git a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/set_upgrade_mode.yml b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/set_upgrade_mode.yml index 1507314fb5fd7..e27f8a8bf59fe 100644 --- a/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/set_upgrade_mode.yml +++ b/x-pack/plugin/src/test/resources/rest-api-spec/test/ml/set_upgrade_mode.yml @@ -6,6 +6,10 @@ setup: indices.create: index: airline-data body: + settings: + index: + number_of_replicas: 0 + number_of_shards: 1 mappings: response: properties: @@ -44,7 +48,8 @@ setup: body: > { "job_id":"set-upgrade-mode-job", - "indexes":["airline-data"] + "indexes":["airline-data"], + "types": ["response"] } - do: @@ -54,10 +59,9 @@ setup: job_id: set-upgrade-mode-job - do: - headers: - Authorization: "Basic eF9wYWNrX3Jlc3RfdXNlcjp4LXBhY2stdGVzdC1wYXNzd29yZA==" # run as x_pack_rest_user, i.e. 
the test setup superuser - xpack.ml.start_datafeed: - datafeed_id: set-upgrade-mode-job-datafeed + cluster.health: + index: airline-data + wait_for_status: green --- teardown: @@ -71,6 +75,10 @@ teardown: --- "Test setting upgrade_mode to false when it is already false": + - do: + xpack.ml.start_datafeed: + datafeed_id: set-upgrade-mode-job-datafeed + - do: xpack.ml.set_upgrade_mode: enabled: false @@ -93,6 +101,10 @@ teardown: --- "Setting upgrade_mode to enabled": + - do: + xpack.ml.start_datafeed: + datafeed_id: set-upgrade-mode-job-datafeed + - do: cat.tasks: {} - match: @@ -138,6 +150,10 @@ teardown: --- "Setting upgrade mode to disabled from enabled": + - do: + xpack.ml.start_datafeed: + datafeed_id: set-upgrade-mode-job-datafeed + - do: cat.tasks: {} - match: