Merge branch '6.7' into deprecate-llrc-java-7
* 6.7:
  Fix build on Fips (elastic#38546)
  ML: update set_upgrade_mode, add logging (elastic#38372) (elastic#38539)
  Update 'removal of types' docs to reflect the new plan. (elastic#38548)
  Make the 'get templates' types deprecation message consistent. (elastic#38532)
  Add API key settings documentation (elastic#38490) (elastic#38499)
  Mute testExpiredApiKeysDeletedAfter1Week in 6.7 (elastic#38528)
jasontedor committed Feb 8, 2019
2 parents 596ca3e + 293ed6d commit b442cc5
Showing 12 changed files with 204 additions and 41 deletions.
126 changes: 109 additions & 17 deletions docs/reference/mapping/removal_of_types.asciidoc
@@ -4,7 +4,8 @@
IMPORTANT: Indices created in Elasticsearch 6.0.0 or later may only contain a
single <<mapping-type,mapping type>>. Indices created in 5.x with multiple
mapping types will continue to function as before in Elasticsearch 6.x.
Mapping types will be completely removed in Elasticsearch 7.0.0.
Types will be deprecated in APIs in Elasticsearch 7.0.0, and completely
removed in 8.0.0.

[float]
=== What are mapping types?
@@ -256,30 +257,28 @@ Elasticsearch 6.x::

* The `_default_` mapping type is deprecated.

* In 6.7, the index creation, index template, and mapping APIs support a query
string parameter (`include_type_name`) which indicates whether requests and
responses should include a type name. It defaults to `true`; requests that do not
set `include_type_name=false` will receive a deprecation warning. Indices which
don't have an explicit type will use the dummy type name `_doc`.

Elasticsearch 7.x::

* The `type` parameter in URLs are deprecated. For instance, indexing
a document no longer requires a document `type`. The new index APIs
* Specifying types in requests is deprecated. For instance, indexing a
document no longer requires a document `type`. The new index APIs
are `PUT {index}/_doc/{id}` in case of explicit ids and `POST {index}/_doc`
for auto-generated ids.

* The index creation, `GET|PUT _mapping` and document APIs support a query
string parameter (`include_type_name`) which indicates whether requests and
responses should include a type name. It defaults to `true`.
7.x indices which don't have an explicit type will use the dummy type name
`_doc`. Not setting `include_type_name=false` will result in a deprecation
warning.
* The `include_type_name` parameter in the index creation, index template,
and mapping APIs will default to `false`. Setting the parameter will result
in a deprecation warning.

* The `_default_` mapping type is removed.

Elasticsearch 8.x::

* The `type` parameter is no longer supported in URLs.

* The `include_type_name` parameter is deprecated, default to `false` and fails
the request when set to `true`.

Elasticsearch 9.x::
* Specifying types in requests is no longer supported.

* The `include_type_name` parameter is removed.

@@ -425,7 +424,87 @@ POST _reindex
// NOTCONSOLE

[float]
=== Index templates
=== Typeless APIs in 7.0

In Elasticsearch 7.0, each API will support typeless requests,
and specifying a type will produce a deprecation warning. Certain
typeless APIs are also available in 6.7, to enable a smooth upgrade
path to 7.0.
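
For illustration, a minimal sketch of the typeless document APIs (the index
name `my-index` and the field `title` are placeholders, not taken from the
documented examples):

[source,js]
--------------------------------------------------
PUT my-index/_doc/1
{
  "title": "A typeless document"
}

GET my-index/_doc/1
--------------------------------------------------
// NOTCONSOLE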

[float]
==== Indices APIs

Index creation, index template, and mapping APIs support a new `include_type_name`
URL parameter that specifies whether mapping definitions in requests and responses
should contain the type name. The parameter defaults to `true` in version 6.7 to
match the pre-7.0 behavior of using type names in mappings. It defaults to `false`
in version 7.0 and will be removed in version 8.0.

The following examples show interactions with Elasticsearch with this parameter set:

[source,js]
--------------------------------------------------
PUT index?include_type_name=false
{
"mappings": {
"properties": { <1>
"foo": {
"type": "keyword"
}
}
}
}
--------------------------------------------------
// CONSOLE
<1> Mappings are included directly under the `mappings` key, without a type name.

[source,js]
--------------------------------------------------
PUT index/_mappings?include_type_name=false
{
"properties": { <1>
"bar": {
"type": "text"
}
}
}
--------------------------------------------------
// CONSOLE
// TEST[continued]
<1> Mappings are specified directly in the request body, without a type name.

[source,js]
--------------------------------------------------
GET index/_mappings?include_type_name=false
--------------------------------------------------
// CONSOLE
// TEST[continued]

The above call returns:

[source,js]
--------------------------------------------------
{
"index": {
"mappings": {
"properties": { <1>
"foo": {
"type": "keyword"
},
"bar": {
"type": "text"
}
}
}
}
}
--------------------------------------------------
// TESTRESPONSE
<1> Mappings are included directly under the `mappings` key, without a type name.

[float]
==== Index templates

It is recommended to make index templates typeless before upgrading to 7.0 by
re-adding them with `include_type_name` set to `false`.
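
For instance, a previously typed template could be re-added along these lines
(a sketch only; the template name, index pattern, and field are illustrative):

[source,js]
--------------------------------------------------
PUT _template/my_template?include_type_name=false
{
  "index_patterns": ["my-index-*"],
  "mappings": {
    "properties": { <1>
      "foo": {
        "type": "keyword"
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> With `include_type_name=false`, fields sit directly under `properties`,
without a type name.
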
@@ -495,5 +574,18 @@ PUT index-2-01?include_type_name=false

In case of implicit index creation, because of documents that get indexed in
an index that doesn't exist yet, the template is always honored. This is
usually not a problem due to the fact that typless index calls work on typed
usually not a problem due to the fact that typeless index calls work on typed
indices.
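
For example, a typeless call that triggers implicit index creation might look
like this (a sketch; the index name and field are placeholders), and any
matching template, typed or not, is applied:

[source,js]
--------------------------------------------------
POST new-index-000001/_doc
{
  "foo": "bar"
}
--------------------------------------------------
// NOTCONSOLE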

[float]
==== Mixed-version clusters

In a cluster composed of both 6.7 and 7.0 nodes, the parameter
`include_type_name` should be specified in indices APIs like index
creation. This is because the parameter has a different default between
6.7 and 7.0, so the same mapping definition will not be valid for both
node versions.
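
For example, passing the parameter explicitly (a sketch mirroring the earlier
index creation example) makes the request unambiguous regardless of which node
version handles it:

[source,js]
--------------------------------------------------
PUT index?include_type_name=false
{
  "mappings": {
    "properties": {
      "foo": {
        "type": "keyword"
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE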

Typeless document APIs such as `bulk` and `update` are only available as of
7.0, and will not work with 6.7 nodes. This also holds true for the typeless
versions of queries that perform document lookups, such as `terms`.
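
To illustrate, a typeless `bulk` request simply omits `_type` from the action
metadata (a sketch; the index name and field are placeholders, and all nodes
must already be on 7.0):

[source,js]
--------------------------------------------------
POST _bulk
{ "index" : { "_index" : "my-index", "_id" : "1" } }
{ "foo" : "bar" }
--------------------------------------------------
// NOTCONSOLE
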
30 changes: 30 additions & 0 deletions docs/reference/settings/security-settings.asciidoc
@@ -147,6 +147,36 @@ method.
The length of time that a token is valid for. By default this value is `20m` or
20 minutes. The maximum value is 1 hour.

[float]
[[api-key-service-settings]]
==== API key service settings

You can set the following API key service settings in
`elasticsearch.yml`.

`xpack.security.authc.api_key.enabled`::
Set to `false` to disable the built-in API key service. Defaults to `true` unless
`xpack.security.http.ssl.enabled` is `false`. This prevents API keys from being
sniffed over a plain HTTP connection.

`xpack.security.authc.api_key.hashing.algorithm`::
Specifies the hashing algorithm that is used for securing API key credentials.
See <<password-hashing-algorithms>>. Defaults to `pbkdf2`.

`xpack.security.authc.api_key.cache.ttl`::
The time-to-live for cached API key entries. An API key ID and a hash of its
API key are cached for this period of time. Specify the time period using
the standard {es} <<time-units,time units>>. Defaults to `1d`.

`xpack.security.authc.api_key.cache.max_keys`::
The maximum number of API key entries that can live in the
cache at any given time. Defaults to 10,000.

`xpack.security.authc.api_key.cache.hash_algo`:: (Expert Setting)
The hashing algorithm that is used for the
in-memory cached API key credentials. For possible values, see <<cache-hash-algo>>.
Defaults to `ssha256`.

[float]
[[realm-settings]]
==== Realm settings
@@ -51,9 +51,10 @@ public class RestGetIndexTemplateAction extends BaseRestHandler {
Collections.singleton(INCLUDE_TYPE_NAME_PARAMETER), Settings.FORMAT_PARAMS));
private static final DeprecationLogger deprecationLogger = new DeprecationLogger(
LogManager.getLogger(RestGetIndexTemplateAction.class));
public static final String TYPES_DEPRECATION_MESSAGE = "[types removal] The response format of get index template requests will change"
+ " in the next major version. Please start using the `include_type_name` parameter set to `false` in the request to "
+ "move to the new, typeless response format that will be the default in 7.0.";
public static final String TYPES_DEPRECATION_MESSAGE = "[types removal] The response format of get index " +
"template requests will change in 7.0. Please start using the include_type_name parameter set to false " +
"to move to the new, typeless response format that will become the default.";

public RestGetIndexTemplateAction(final Settings settings, final RestController controller) {
super(settings);
controller.registerHandler(GET, "/_template", this);
12 changes: 12 additions & 0 deletions x-pack/docs/en/rest-api/security/create-api-keys.asciidoc
@@ -24,6 +24,8 @@ applicable for the API key in milliseconds.
NOTE: By default, API keys never expire. You can specify an expiration time when
you create them.
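
For example, a minimal sketch of a create request with an explicit expiration
(the key name is a placeholder):

[source,js]
--------------------------------------------------
POST /_security/api_key
{
  "name": "my-api-key",
  "expiration": "1d"
}
--------------------------------------------------
// NOTCONSOLE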

See <<api-key-service-settings>> for configuration settings related to the API key service.

==== Request Body

The following parameters can be specified in the body of a POST or PUT request:
@@ -97,3 +99,13 @@ API key information.
<1> unique id for this API key
<2> optional expiration in milliseconds for this API key
<3> generated API key

The API key returned by this API can then be used by sending a request with an
`Authorization` header whose value is the prefix `ApiKey ` followed by the
_credentials_, where _credentials_ is the base64 encoding of `id` and `api_key`
joined by a colon.

[source,shell]
--------------------------------------------------
curl -H "Authorization: ApiKey VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw==" http://localhost:9200/_cluster/health
--------------------------------------------------
// NOTCONSOLE
@@ -79,6 +79,7 @@ public final class Messages {
public static final String JOB_AUDIT_DATAFEED_STARTED_FROM_TO = "Datafeed started (from: {0} to: {1}) with frequency [{2}]";
public static final String JOB_AUDIT_DATAFEED_STARTED_REALTIME = "Datafeed started in real-time";
public static final String JOB_AUDIT_DATAFEED_STOPPED = "Datafeed stopped";
public static final String JOB_AUDIT_DATAFEED_ISOLATED = "Datafeed isolated";
public static final String JOB_AUDIT_DELETING = "Deleting job by task with id ''{0}''";
public static final String JOB_AUDIT_DELETING_FAILED = "Error deleting job: {0}";
public static final String JOB_AUDIT_DELETED = "Job deleted";
@@ -46,7 +46,7 @@ public synchronized void stop() {
// datafeeds, so they get reallocated. We have to do this first, otherwise the datafeeds
// could fail if they send data to a dead autodetect process.
if (datafeedManager != null) {
datafeedManager.isolateAllDatafeedsOnThisNode();
datafeedManager.isolateAllDatafeedsOnThisNodeBeforeShutdown();
}
NativeController nativeController = NativeControllerHolder.getNativeController(environment);
if (nativeController != null) {
@@ -264,6 +264,9 @@ private void unassignPersistentTasks(PersistentTasksCustomMetaData tasksCustomMe
.sorted(Comparator.comparing(PersistentTask::getTaskName))
.collect(Collectors.toList());

logger.info("Un-assigning persistent tasks : " +
datafeedAndJobTasks.stream().map(PersistentTask::getId).collect(Collectors.joining(", ", "[ ", " ]")));

TypedChainTaskExecutor<PersistentTask<?>> chainTaskExecutor =
new TypedChainTaskExecutor<>(client.threadPool().executor(executor()),
r -> true,
@@ -288,6 +291,7 @@ private void isolateDatafeeds(PersistentTasksCustomMetaData tasksCustomMetaData,
ActionListener<List<IsolateDatafeedAction.Response>> listener) {
Set<String> datafeedsToIsolate = MlTasks.startedDatafeedIds(tasksCustomMetaData);

logger.info("Isolating datafeeds: " + datafeedsToIsolate.toString());
TypedChainTaskExecutor<IsolateDatafeedAction.Response> isolateDatafeedsExecutor =
new TypedChainTaskExecutor<>(client.threadPool().executor(executor()), r -> true, ex -> true);

@@ -64,7 +64,6 @@ public class DatafeedManager {
private final DatafeedJobBuilder datafeedJobBuilder;
private final TaskRunner taskRunner = new TaskRunner();
private final AutodetectProcessManager autodetectProcessManager;
private volatile boolean isolated;

public DatafeedManager(ThreadPool threadPool, Client client, ClusterService clusterService, DatafeedJobBuilder datafeedJobBuilder,
Supplier<Long> currentTimeSupplier, Auditor auditor, AutodetectProcessManager autodetectProcessManager) {
@@ -130,18 +129,20 @@ public void stopAllDatafeedsOnThisNode(String reason) {
* This is used before the JVM is killed. It differs from stopAllDatafeedsOnThisNode in that it leaves
* the datafeed tasks in the "started" state, so that they get restarted on a different node.
*/
public void isolateAllDatafeedsOnThisNode() {
isolated = true;
public void isolateAllDatafeedsOnThisNodeBeforeShutdown() {
Iterator<Holder> iter = runningDatafeedsOnThisNode.values().iterator();
while (iter.hasNext()) {
Holder next = iter.next();
next.isolateDatafeed();
next.setRelocating();
// TODO: it's not ideal that this "isolate" method does something a bit different to the one below
next.setNodeIsShuttingDown();
iter.remove();
}
}

public void isolateDatafeed(long allocationId) {
// This calls get() rather than remove() because we expect that the persistent task will
// be removed shortly afterwards and that operation needs to be able to find the holder
Holder holder = runningDatafeedsOnThisNode.get(allocationId);
if (holder != null) {
holder.isolateDatafeed();
@@ -195,7 +196,7 @@ protected void doRun() {
holder.stop("general_lookback_failure", TimeValue.timeValueSeconds(20), e);
return;
}
if (isolated == false) {
if (holder.isIsolated() == false) {
if (next != null) {
doDatafeedRealtime(next, holder.datafeedJob.getJobId(), holder);
} else {
@@ -298,7 +299,7 @@ public class Holder {
private final ProblemTracker problemTracker;
private final Consumer<Exception> finishHandler;
volatile Scheduler.Cancellable cancellable;
private volatile boolean isRelocating;
private volatile boolean isNodeShuttingDown;

Holder(TransportStartDatafeedAction.DatafeedTask task, String datafeedId, DatafeedJob datafeedJob,
ProblemTracker problemTracker, Consumer<Exception> finishHandler) {
@@ -324,7 +325,7 @@ boolean isIsolated() {
}

public void stop(String source, TimeValue timeout, Exception e) {
if (isRelocating) {
if (isNodeShuttingDown) {
return;
}

@@ -344,11 +345,12 @@ public void stop(String source, TimeValue timeout, Exception e) {
if (cancellable != null) {
cancellable.cancel();
}
auditor.info(datafeedJob.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_DATAFEED_STOPPED));
auditor.info(datafeedJob.getJobId(),
Messages.getMessage(isIsolated() ? Messages.JOB_AUDIT_DATAFEED_ISOLATED : Messages.JOB_AUDIT_DATAFEED_STOPPED));
finishHandler.accept(e);
logger.info("[{}] datafeed [{}] for job [{}] has been stopped{}", source, datafeedId, datafeedJob.getJobId(),
acquired ? "" : ", but there may be pending tasks as the timeout [" + timeout.getStringRep() + "] expired");
if (autoCloseJob) {
if (autoCloseJob && isIsolated() == false) {
closeJob();
}
if (acquired) {
@@ -361,16 +363,18 @@ }
}

/**
* This stops a datafeed WITHOUT updating the corresponding persistent task. It must ONLY be called
* immediately prior to shutting down a node. Then the datafeed task can remain "started", and be
* relocated to a different node. Calling this method at any other time will ruin the datafeed.
* This stops a datafeed WITHOUT updating the corresponding persistent task. When called it
* will stop the datafeed from sending data to its job as quickly as possible. The caller
* must do something sensible with the corresponding persistent task. If the node is shutting
* down the task will automatically get reassigned. Otherwise the caller must take action to
* remove or reassign the persistent task, or the datafeed will be left in limbo.
*/
public void isolateDatafeed() {
datafeedJob.isolate();
}

public void setRelocating() {
isRelocating = true;
public void setNodeIsShuttingDown() {
isNodeShuttingDown = true;
}

private Long executeLookBack(long startTime, Long endTime) throws Exception {
2 changes: 2 additions & 0 deletions x-pack/plugin/security/cli/build.gradle
@@ -24,6 +24,7 @@ dependencyLicenses {

if (project.inFipsJvm) {
unitTest.enabled = false
testingConventions.enabled = false
// Forbidden APIs non-portable checks fail because Bouncy Castle classes are used from the FIPS JDK since those are
// not part of the Java specification - all of this is as designed, so we have to relax this check for FIPS.
tasks.withType(CheckForbiddenApis) {
@@ -32,4 +33,5 @@ if (project.inFipsJvm) {
// FIPS JVM includes many classes from bouncycastle which count as jar hell for the third party audit,
// rather than provide a long list of exclusions, disable the check on FIPS.
thirdPartyAudit.enabled = false

}
@@ -108,7 +108,7 @@ public class ApiKeyService {
static final String API_KEY_LIMITED_ROLE_DESCRIPTORS_KEY = "_security_api_key_limited_by_role_descriptors";

public static final Setting<String> PASSWORD_HASHING_ALGORITHM = new Setting<>(
"xpack.security.authc.api_key_hashing.algorithm", "pbkdf2", Function.identity(), v -> {
"xpack.security.authc.api_key.hashing.algorithm", "pbkdf2", Function.identity(), v -> {
if (Hasher.getAvailableAlgoStoredHash().contains(v.toLowerCase(Locale.ROOT)) == false) {
throw new IllegalArgumentException("Invalid algorithm: " + v + ". Valid values for password hashing are " +
Hasher.getAvailableAlgoStoredHash().toString());
Expand Down
@@ -316,6 +316,7 @@ public void testInvalidatedApiKeysDeletedByRemover() throws Exception {
}, 30, TimeUnit.SECONDS);
}

@AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/38408")
public void testExpiredApiKeysDeletedAfter1Week() throws Exception {
List<CreateApiKeyResponse> responses = createApiKeys(2, null);
Instant created = Instant.now();