Merge pull request #146 from elastic/master
costin authored Jun 15, 2022
2 parents 7425701 + aebb7bc commit 0f0f0b0
Showing 21 changed files with 369 additions and 47 deletions.
1 change: 1 addition & 0 deletions .ci/bwcVersions
@@ -64,5 +64,6 @@ BWC_VERSION:
 - "8.2.1"
 - "8.2.2"
 - "8.2.3"
+- "8.2.4"
 - "8.3.0"
 - "8.4.0"
2 changes: 1 addition & 1 deletion .ci/snapshotBwcVersions
@@ -1,5 +1,5 @@
 BWC_VERSION:
 - "7.17.5"
-- "8.2.3"
+- "8.2.4"
 - "8.3.0"
 - "8.4.0"
7 changes: 4 additions & 3 deletions CONTRIBUTING.md
@@ -450,11 +450,12 @@ causes and their causes, as well as any suppressed exceptions and so on:

     logger.debug("operation failed", exception);

 If you wish to use placeholders and an exception at the same time, construct a
-`ParameterizedMessage`:
+`Supplier<String>` and use `org.elasticsearch.core.Strings.format`
+- note java.util.Formatter syntax

-    logger.debug(() -> "failed at offset [" + offset + "]", exception);
+    logger.debug(() -> Strings.format("failed at offset [%s]", offset), exception);

-You can also use a `Supplier<ParameterizedMessage>` to avoid constructing
+You can also use a `java.util.Supplier<String>` to avoid constructing
 expensive messages that will usually be discarded:

     logger.debug(() -> "rarely seen output [" + expensiveMethod() + "]");
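For readers unfamiliar with the new convention: outside Elasticsearch, the equivalent pattern with plain Log4j 2 looks roughly like this (a minimal sketch; `String.format` stands in for the `org.elasticsearch.core.Strings.format` helper named in the diff, and the offset value is a made-up placeholder):

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class LoggingExample {
        private static final Logger logger = LogManager.getLogger(LoggingExample.class);

        public static void main(String[] args) {
            long offset = 42;
            Exception exception = new IllegalStateException("boom");

            // The lambda defers message construction until the debug level is
            // enabled, so the formatting cost is only paid when it matters.
            logger.debug(() -> String.format("failed at offset [%s]", offset), exception);

            // Same idea for expensive arguments: expensiveMethod() only runs
            // if debug logging is on.
            logger.debug(() -> "rarely seen output [" + expensiveMethod() + "]");
        }

        private static String expensiveMethod() {
            return "expensive";
        }
    }

The supplier indirection keeps the laziness guarantee; the switch itself standardizes on `java.util.Formatter`-style `%s` placeholders instead of Log4j's `{}` syntax.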
@@ -321,8 +321,8 @@ public static boolean isMlCompatible(Version version) {
         .map(v -> Version.fromString(v, Version.Mode.RELAXED))
         .orElse(null);

-        // glibc version 2.35 introduced incompatibilities in ML syscall filters that were fixed in 7.17.5+ and 8.2.2+
-        if (glibcVersion != null && glibcVersion.onOrAfter(Version.fromString("2.35", Version.Mode.RELAXED))) {
+        // glibc version 2.34 introduced incompatibilities in ML syscall filters that were fixed in 7.17.5+ and 8.2.2+
+        if (glibcVersion != null && glibcVersion.onOrAfter(Version.fromString("2.34", Version.Mode.RELAXED))) {
             if (version.before(Version.fromString("7.17.5"))) {
                 return false;
             } else if (version.getMajor() > 7 && version.before(Version.fromString("8.2.2"))) {
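Concretely, the corrected gate means that on a host running glibc 2.34 or later, ML is treated as incompatible for 7.x releases before 7.17.5 and for 8.x releases before 8.2.2, and compatible otherwise.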
@@ -54,3 +54,11 @@ java.util.concurrent.ScheduledThreadPoolExecutor#<init>(int)
 java.util.concurrent.ScheduledThreadPoolExecutor#<init>(int, java.util.concurrent.ThreadFactory)
 java.util.concurrent.ScheduledThreadPoolExecutor#<init>(int, java.util.concurrent.RejectedExecutionHandler)
 java.util.concurrent.ScheduledThreadPoolExecutor#<init>(int, java.util.concurrent.ThreadFactory, java.util.concurrent.RejectedExecutionHandler)
+
+
+@defaultMessage use java.util.Supplier<String> with String.format instead of ParameterizedMessage
+org.apache.logging.log4j.message.ParameterizedMessage#<init>(java.lang.String, java.lang.String[], java.lang.Throwable)
+org.apache.logging.log4j.message.ParameterizedMessage#<init>(java.lang.String, java.lang.Object[], java.lang.Throwable)
+org.apache.logging.log4j.message.ParameterizedMessage#<init>(java.lang.String, java.lang.Object[])
+org.apache.logging.log4j.message.ParameterizedMessage#<init>(java.lang.String, java.lang.Object)
+org.apache.logging.log4j.message.ParameterizedMessage#<init>(java.lang.String, java.lang.Object, java.lang.Object)
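The `@defaultMessage` block follows the forbidden-apis signatures format, so after this change any new call to one of the listed `ParameterizedMessage` constructors fails the build with that message, steering contributors to the `Supplier<String>` pattern documented in CONTRIBUTING.md above.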
5 changes: 5 additions & 0 deletions docs/changelog/87498.yaml
@@ -0,0 +1,5 @@
+pr: 87498
+summary: Avoid attempting PIT close on PIT open failure
+area: EQL
+type: bug
+issues: []
5 changes: 5 additions & 0 deletions docs/changelog/87554.yaml
@@ -0,0 +1,5 @@
+pr: 87554
+summary: "[TSDB] Add Kahan support to downsampling summation"
+area: "Rollup"
+type: enhancement
+issues: []
@@ -284,7 +284,7 @@ GET _ml/datafeeds/datafeed-test2/_preview
 // TEST[skip:continued]

 <1> This runtime field uses the `toLowerCase` function to convert a string to
-all lowercase letters. Likewise, you can use the `toUpperCase{}` function to
+all lowercase letters. Likewise, you can use the `toUpperCase` function to
 convert a string to uppercase letters.

 The preview {dfeed} API returns the following results, which show that "JOE"
2 changes: 2 additions & 0 deletions docs/reference/release-notes.asciidoc
@@ -6,6 +6,7 @@

 This section summarizes the changes in each release.

+* <<release-notes-8.2.3>>
 * <<release-notes-8.2.2>>
 * <<release-notes-8.2.1>>
 * <<release-notes-8.2.0>>
@@ -23,6 +24,7 @@ This section summarizes the changes in each release.

 --

+include::release-notes/8.2.3.asciidoc[]
 include::release-notes/8.2.2.asciidoc[]
 include::release-notes/8.2.1.asciidoc[]
 include::release-notes/8.2.0.asciidoc[]
35 changes: 35 additions & 0 deletions docs/reference/release-notes/8.2.3.asciidoc
@@ -0,0 +1,35 @@
+[[release-notes-8.2.3]]
+== {es} version 8.2.3
+
+Also see <<breaking-changes-8.2,Breaking changes in 8.2>>.
+
+[[bug-8.2.3]]
+[float]
+=== Bug fixes
+
+Authorization::
+* Fix resolution of wildcard application privileges {es-pull}87293[#87293]
+
+CCR::
+* Remove some blocking in CcrRepository {es-pull}87235[#87235]
+
+Indices APIs::
+* Add Resolve Index API to the "read" permission for an index {es-pull}87052[#87052] (issue: {es-issue}86977[#86977])
+
+Infra/Core::
+* Clean up `DeflateCompressor` after exception {es-pull}87163[#87163] (issue: {es-issue}87160[#87160])
+
+Security::
+* Security plugin close releasable realms {es-pull}87429[#87429] (issue: {es-issue}86286[#86286])
+
+Snapshot/Restore::
+* Fork after calling `getRepositoryData` from `StoreRecovery` {es-pull}87254[#87254] (issue: {es-issue}87237[#87237])
+
+[[enhancement-8.2.3]]
+[float]
+=== Enhancements
+
+Infra/Core::
+* Force property expansion for security policy {es-pull}87396[#87396]

1 change: 1 addition & 0 deletions server/src/main/java/org/elasticsearch/Version.java
@@ -112,6 +112,7 @@ public class Version implements Comparable<Version>, ToXContentFragment {
     public static final Version V_8_2_1 = new Version(8_02_01_99, org.apache.lucene.util.Version.LUCENE_9_1_0);
     public static final Version V_8_2_2 = new Version(8_02_02_99, org.apache.lucene.util.Version.LUCENE_9_1_0);
     public static final Version V_8_2_3 = new Version(8_02_03_99, org.apache.lucene.util.Version.LUCENE_9_1_0);
+    public static final Version V_8_2_4 = new Version(8_02_04_99, org.apache.lucene.util.Version.LUCENE_9_1_0);
     public static final Version V_8_3_0 = new Version(8_03_00_99, org.apache.lucene.util.Version.LUCENE_9_2_0);
     public static final Version V_8_4_0 = new Version(8_04_00_99, org.apache.lucene.util.Version.LUCENE_9_2_0);
     public static final Version CURRENT = V_8_4_0;
@@ -12,7 +12,6 @@
 import org.elasticsearch.cluster.SimpleDiffable;
 import org.elasticsearch.cluster.metadata.IndexMetadata;
 import org.elasticsearch.cluster.metadata.MetadataIndexStateService;
-import org.elasticsearch.common.collect.ImmutableOpenMap;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.util.set.Sets;
@@ -36,15 +35,15 @@
  * Represents current cluster level blocks to block dirty operations done against the cluster.
  */
 public class ClusterBlocks implements SimpleDiffable<ClusterBlocks> {
-    public static final ClusterBlocks EMPTY_CLUSTER_BLOCK = new ClusterBlocks(emptySet(), ImmutableOpenMap.of());
+    public static final ClusterBlocks EMPTY_CLUSTER_BLOCK = new ClusterBlocks(emptySet(), Map.of());

     private final Set<ClusterBlock> global;

-    private final ImmutableOpenMap<String, Set<ClusterBlock>> indicesBlocks;
+    private final Map<String, Set<ClusterBlock>> indicesBlocks;

     private final EnumMap<ClusterBlockLevel, ImmutableLevelHolder> levelHolders;

-    ClusterBlocks(Set<ClusterBlock> global, ImmutableOpenMap<String, Set<ClusterBlock>> indicesBlocks) {
+    ClusterBlocks(Set<ClusterBlock> global, Map<String, Set<ClusterBlock>> indicesBlocks) {
         this.global = global;
         this.indicesBlocks = indicesBlocks;
         levelHolders = generateLevelHolders(global, indicesBlocks);
@@ -72,19 +71,19 @@ private Set<ClusterBlock> blocksForIndex(ClusterBlockLevel level, String index)

     private static EnumMap<ClusterBlockLevel, ImmutableLevelHolder> generateLevelHolders(
         Set<ClusterBlock> global,
-        ImmutableOpenMap<String, Set<ClusterBlock>> indicesBlocks
+        Map<String, Set<ClusterBlock>> indicesBlocks
     ) {

         EnumMap<ClusterBlockLevel, ImmutableLevelHolder> levelHolders = new EnumMap<>(ClusterBlockLevel.class);
         for (final ClusterBlockLevel level : ClusterBlockLevel.values()) {
             Predicate<ClusterBlock> containsLevel = block -> block.contains(level);
             Set<ClusterBlock> newGlobal = unmodifiableSet(global.stream().filter(containsLevel).collect(toSet()));

-            ImmutableOpenMap.Builder<String, Set<ClusterBlock>> indicesBuilder = ImmutableOpenMap.builder();
+            Map<String, Set<ClusterBlock>> indicesBuilder = new HashMap<>();
             for (Map.Entry<String, Set<ClusterBlock>> entry : indicesBlocks.entrySet()) {
                 indicesBuilder.put(entry.getKey(), unmodifiableSet(entry.getValue().stream().filter(containsLevel).collect(toSet())));
             }
-            levelHolders.put(level, new ImmutableLevelHolder(newGlobal, indicesBuilder.build()));
+            levelHolders.put(level, new ImmutableLevelHolder(newGlobal, Map.copyOf(indicesBuilder)));
         }
         return levelHolders;
     }
@@ -275,10 +274,7 @@ private static void writeBlockSet(Set<ClusterBlock> blocks, StreamOutput out) th

     public static ClusterBlocks readFrom(StreamInput in) throws IOException {
         final Set<ClusterBlock> global = readBlockSet(in);
-        ImmutableOpenMap<String, Set<ClusterBlock>> indicesBlocks = in.readImmutableOpenMap(
-            i -> i.readString().intern(),
-            ClusterBlocks::readBlockSet
-        );
+        Map<String, Set<ClusterBlock>> indicesBlocks = in.readImmutableMap(i -> i.readString().intern(), ClusterBlocks::readBlockSet);
         if (global.isEmpty() && indicesBlocks.isEmpty()) {
             return EMPTY_CLUSTER_BLOCK;
         }
@@ -294,7 +290,7 @@ public static Diff<ClusterBlocks> readDiffFrom(StreamInput in) throws IOExceptio
         return SimpleDiffable.readDiffFrom(ClusterBlocks::readFrom, in);
     }

-    record ImmutableLevelHolder(Set<ClusterBlock> global, ImmutableOpenMap<String, Set<ClusterBlock>> indices) {}
+    record ImmutableLevelHolder(Set<ClusterBlock> global, Map<String, Set<ClusterBlock>> indices) {}

     public static Builder builder() {
         return new Builder();
@@ -418,11 +414,11 @@ public ClusterBlocks build() {
             return EMPTY_CLUSTER_BLOCK;
         }
         // We copy the block sets here in case of the builder is modified after build is called
-        ImmutableOpenMap.Builder<String, Set<ClusterBlock>> indicesBuilder = ImmutableOpenMap.builder(indices.size());
+        Map<String, Set<ClusterBlock>> indicesBuilder = new HashMap<>(indices.size());
         for (Map.Entry<String, Set<ClusterBlock>> entry : indices.entrySet()) {
             indicesBuilder.put(entry.getKey(), Set.copyOf(entry.getValue()));
         }
-        return new ClusterBlocks(Set.copyOf(global), indicesBuilder.build());
+        return new ClusterBlocks(Set.copyOf(global), Map.copyOf(indicesBuilder));
         }
     }
 }
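The migration pattern repeated throughout `ClusterBlocks` is the same: accumulate into a plain `HashMap`, then publish an immutable snapshot with `Map.copyOf` where `ImmutableOpenMap.builder()...build()` was used before. A minimal sketch of the JDK behavior being relied on (plain JDK types only, no Elasticsearch classes):

    import java.util.HashMap;
    import java.util.Map;

    public class CopyOfExample {
        public static void main(String[] args) {
            Map<String, Integer> builder = new HashMap<>();
            builder.put("index-1", 1);
            builder.put("index-2", 2);

            // Map.copyOf takes an immutable snapshot: later mutation of the
            // source map does not leak into the published copy, and the copy
            // itself rejects modification.
            Map<String, Integer> published = Map.copyOf(builder);
            builder.put("index-3", 3);

            System.out.println(published.containsKey("index-3")); // false
            // published.put("x", 0) would throw UnsupportedOperationException
        }
    }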
@@ -35,6 +35,10 @@ public CompensatedSum(double value, double delta) {
         this.delta = delta;
     }

+    public CompensatedSum() {
+        this(0, 0);
+    }
+
     /**
      * The value of the sum.
      */
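The new zero-argument constructor pairs with the changelog entry above (#87554, Kahan support in downsampling summation). For background, Kahan (compensated) summation carries a small correction term that recovers the low-order bits ordinary double addition drops; a self-contained sketch of the technique (illustrative only, not the Elasticsearch `CompensatedSum` API):

    // Minimal Kahan (compensated) summation sketch.
    public class KahanSum {
        private double value; // running sum
        private double delta; // compensation for lost low-order bits

        public void add(double x) {
            double corrected = x - delta;         // re-apply previous compensation
            double newSum = value + corrected;    // low-order bits may be lost here
            delta = (newSum - value) - corrected; // algebraically zero; numerically the lost bits
            value = newSum;
        }

        public double value() {
            return value;
        }

        public static void main(String[] args) {
            KahanSum kahan = new KahanSum();
            double naive = 0;
            for (int i = 0; i < 10_000_000; i++) {
                kahan.add(0.1);
                naive += 0.1;
            }
            // The compensated result stays much closer to the exact 1,000,000.
            System.out.println("kahan: " + kahan.value() + "  naive: " + naive);
        }
    }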
@@ -10,7 +10,6 @@

 import org.elasticsearch.Version;
 import org.elasticsearch.common.UUIDs;
-import org.elasticsearch.common.collect.ImmutableOpenMap;
 import org.elasticsearch.common.io.stream.BytesStreamOutput;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.rest.RestStatus;
@@ -19,6 +18,7 @@
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
+import java.util.Map;

 import static java.util.EnumSet.copyOf;
 import static org.elasticsearch.test.VersionUtils.randomVersion;
@@ -56,7 +56,7 @@ public void testToStringDanglingComma() {

     public void testGlobalBlocksCheckedIfNoIndicesSpecified() {
         ClusterBlock globalBlock = randomClusterBlock();
-        ClusterBlocks clusterBlocks = new ClusterBlocks(Collections.singleton(globalBlock), ImmutableOpenMap.of());
+        ClusterBlocks clusterBlocks = new ClusterBlocks(Collections.singleton(globalBlock), Map.of());
         ClusterBlockException exception = clusterBlocks.indicesBlockedException(randomFrom(globalBlock.levels()), new String[0]);
         assertNotNull(exception);
         assertEquals(exception.blocks(), Collections.singleton(globalBlock));
@@ -23,7 +23,7 @@

 public class LoggersTests extends ESTestCase {

-    public void testParameterizedMessageLambda() throws Exception {
+    public void testStringSupplierAndFormatting() throws Exception {
         // adding a random id to allow test to run multiple times. See AbstractConfiguration#addAppender
         final MockAppender appender = new MockAppender("trace_appender" + randomInt());
         appender.start();
39 changes: 29 additions & 10 deletions x-pack/docs/en/security/authorization/mapping-roles.asciidoc
@@ -2,17 +2,35 @@
 [[mapping-roles]]
 === Mapping users and groups to roles

-If you authenticate users with the `native` or `file` realms, you can manage
-role assignment by using the <<managing-native-users,user management APIs>> or
-the <<users-command,users>> command-line tool respectively.
+Role mapping is supported by all realms except `native` and `file`.

-For other types of realms, you must create _role-mappings_ that define which
-roles should be assigned to each user based on their username, groups, or
-other metadata.
+The native and file realms assign roles directly to users.
+Native realms use <<managing-native-users,user management APIs>>.
+File realms use <<roles-management-file,File-based role management>>.

+You can map roles through the
+<<mapping-roles-api, Role mapping API>> (recommended) or a <<mapping-roles-file, Role mapping file>>.


+The PKI, LDAP, AD, Kerberos, OpenID Connect, JWT, and SAML realms support the
+<<mapping-roles-api, Role mapping API>>. Only PKI, LDAP, and AD realms
+support <<mapping-roles-file, Role mapping files>>.

+The PKI, LDAP, AD, Kerberos, OpenID Connect, JWT, and
+SAML realms also support <<authorization_realms,delegated authorization>>.
+You can either map roles for a realm or use delegated authorization; you cannot use both simultaneously.

+To use role mapping, you create roles and role mapping rules.
+Role mapping rules can be based on realm name, realm type, username, groups,
+other user metadata, or combinations of those values.

 NOTE: When <<anonymous-access,anonymous access>> is enabled, the roles
 of the anonymous user are assigned to all the other users as well.

+If there are role-mapping rules created through the API as well as a role mapping file,
+the rules are combined.
+It's possible for a single user to have some roles that were mapped through the API,
+and others assigned based on the role mapping file.
-You can define role-mappings via an
-<<mapping-roles-api, API>> or manage them through <<mapping-roles-file, files>>.
-These two sources of role-mapping are combined inside of the {es}
@@ -21,18 +39,19 @@ possible for a single user to have some roles that have been mapped through
-the API, and other roles that are mapped through files.

 NOTE: Users with no roles assigned will be unauthorized for any action.
+In other words, they may be able to authenticate, but they will have no roles.
+No roles means no privileges, and no privileges means no authorizations to
+make requests.

-When you use role-mappings, you assign existing roles to users.
+When you use role mappings to assign roles to users, the roles must exist.
+There are two sources of roles.
 The available roles should either be added using the
 <<security-role-apis,role management APIs>> or defined in the
+<<roles-management-file,roles file>>. Either role-mapping method can use
+either role management method. For example, when you use the role mapping API,
+you are able to map users to both API-managed roles and file-managed roles
+(and likewise for file-based role-mappings).

-TIP: The PKI, LDAP, Kerberos, OpenID Connect, JWT, and SAML realms support using
-<<authorization_realms,authorization realms>> as an alternative to role mapping.

 [[mapping-roles-api]]
 ==== Using the role mapping API

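To make the rule-based mapping described in the rewritten docs above concrete, a role-mapping API request has this general shape (an illustrative sketch; the mapping name `ldap_admins`, the `superuser` role, and the group DN are hypothetical placeholders):

    PUT /_security/role_mapping/ldap_admins
    {
      "roles": [ "superuser" ],
      "enabled": true,
      "rules": {
        "field": { "groups": "cn=admins,dc=example,dc=com" }
      }
    }

Any user the LDAP realm reports as a member of that group would be granted the `superuser` role at authentication time.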
@@ -139,11 +139,14 @@ private <Response> void openPIT(ActionListener<Response> listener, Runnable runn

     @Override
     public void close(ActionListener<Boolean> listener) {
-        client.execute(
-            ClosePointInTimeAction.INSTANCE,
-            new ClosePointInTimeRequest(pitId),
-            map(listener, ClosePointInTimeResponse::isSucceeded)
-        );
-        pitId = null;
+        // the pitId could be null as a consequence of a failure on openPIT
+        if (pitId != null) {
+            client.execute(
+                ClosePointInTimeAction.INSTANCE,
+                new ClosePointInTimeRequest(pitId),
+                map(listener, ClosePointInTimeResponse::isSucceeded)
+            );
+            pitId = null;
+        }
     }
 }
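This is the fix tracked by changelog entry #87498 above: when `openPIT` fails, no point-in-time id was ever assigned, so `close` now becomes a no-op instead of issuing a close request with a null `pitId`.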
@@ -388,7 +388,7 @@ <Response extends ActionResponse> void handleSearchRequest(ActionListener<Respon
             0,
             ShardSearchFailure.EMPTY_ARRAY,
             SearchResponse.Clusters.EMPTY,
-            searchRequestsRemainingCount() == 1 ? searchRequest.pointInTimeBuilder().getEncodedId() : null
+            searchRequest.pointInTimeBuilder().getEncodedId()
         );

         if (searchRequestsRemainingCount() == 1) {
@@ -467,7 +467,7 @@ <Response extends ActionResponse> void handleSearchRequest(ActionListener<Respon
             0,
             failures,
             SearchResponse.Clusters.EMPTY,
-            null
+            searchRequest.pointInTimeBuilder().getEncodedId()
        );

         // this should still be caught and the exception handled properly and circuit breaker cleared