Merge branch 'main' into esql_heap_attack_1mb
nik9000 committed Jan 29, 2025
2 parents ad38f05 + d763805 commit c7ff46b
Showing 32 changed files with 660 additions and 135 deletions.
5 changes: 5 additions & 0 deletions docs/changelog/120573.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 120573
summary: Optimize `IngestDocument` `FieldPath` allocation
area: Ingest Node
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/120807.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 120807
summary: Remove INDEX_REFRESH_BLOCK after index becomes searchable
area: CRUD
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/120824.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 120824
summary: Optimize some per-document hot paths in the geoip processor
area: Ingest Node
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/120997.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
pr: 120997
summary: Allow `SSHA-256` for API key credential hash
area: Authentication
type: enhancement
issues: []
6 changes: 2 additions & 4 deletions docs/reference/indices/shard-stores.asciidoc
Original file line number Diff line number Diff line change
@@ -198,10 +198,8 @@ The API returns the following response:
// TESTRESPONSE[s/"attributes": \{[^}]*\}/"attributes": $body.$_path/]
// TESTRESPONSE[s/"roles": \[[^]]*\]/"roles": $body.$_path/]
// TESTRESPONSE[s/"8.10.0"/\$node_version/]
// TESTRESPONSE[s/"min_index_version": 7000099/"min_index_version": $body.$_path/]
// TESTRESPONSE[s/"max_index_version": 8100099/"max_index_version": $body.$_path/]


// TESTRESPONSE[s/"min_index_version": [0-9]+/"min_index_version": $body.$_path/]
// TESTRESPONSE[s/"max_index_version": [0-9]+/"max_index_version": $body.$_path/]

<1> The key is the corresponding shard id for the store information
<2> A list of store information for all copies of the shard
@@ -96,8 +96,8 @@ or index documents to an enrich index.
Instead, update your source indices
and <<execute-enrich-policy-api,execute>> the enrich policy again.
This creates a new enrich index from your updated source indices.
The previous enrich index will deleted with a delayed maintenance job.
By default this is done every 15 minutes.
The previous enrich index will be deleted with a delayed maintenance
job that executes by default every 15 minutes.
// end::update-enrich-index[]

By default, this API is synchronous: It returns when a policy has been executed.
64 changes: 64 additions & 0 deletions docs/reference/settings/security-hash-settings.asciidoc
@@ -124,4 +124,68 @@ following:
initial input with SHA512 first.
|=======================

Furthermore, {es} supports authentication via securely generated, high-entropy tokens,
for instance <<security-api-create-api-key,API keys>>.
Analogous to passwords, only the tokens' hashes are stored. Since the tokens are guaranteed
to have sufficiently high entropy to resist offline attacks, secure salted hash functions are supported
in addition to the password-hashing algorithms mentioned above.

You can configure the algorithm for API key stored credential hashing
by setting the <<static-cluster-setting,static>>
`xpack.security.authc.api_key.hashing.algorithm` setting to one of the
following:

[[secure-token-hashing-algorithms]]
.Secure token hashing algorithms
|=======================
| Algorithm | | | Description

| `ssha256` | | | Uses a salted `sha-256` algorithm. (default)
| `bcrypt` | | | Uses `bcrypt` algorithm with salt generated in 1024 rounds.
| `bcrypt4` | | | Uses `bcrypt` algorithm with salt generated in 16 rounds.
| `bcrypt5` | | | Uses `bcrypt` algorithm with salt generated in 32 rounds.
| `bcrypt6` | | | Uses `bcrypt` algorithm with salt generated in 64 rounds.
| `bcrypt7` | | | Uses `bcrypt` algorithm with salt generated in 128 rounds.
| `bcrypt8` | | | Uses `bcrypt` algorithm with salt generated in 256 rounds.
| `bcrypt9` | | | Uses `bcrypt` algorithm with salt generated in 512 rounds.
| `bcrypt10` | | | Uses `bcrypt` algorithm with salt generated in 1024 rounds.
| `bcrypt11` | | | Uses `bcrypt` algorithm with salt generated in 2048 rounds.
| `bcrypt12` | | | Uses `bcrypt` algorithm with salt generated in 4096 rounds.
| `bcrypt13` | | | Uses `bcrypt` algorithm with salt generated in 8192 rounds.
| `bcrypt14` | | | Uses `bcrypt` algorithm with salt generated in 16384 rounds.
| `pbkdf2` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 10000 iterations.
| `pbkdf2_1000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 1000 iterations.
| `pbkdf2_10000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 10000 iterations.
| `pbkdf2_50000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 50000 iterations.
| `pbkdf2_100000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 100000 iterations.
| `pbkdf2_500000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 500000 iterations.
| `pbkdf2_1000000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 1000000 iterations.
| `pbkdf2_stretch` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 10000 iterations, after hashing the
initial input with SHA512 first.
| `pbkdf2_stretch_1000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 1000 iterations, after hashing the
initial input with SHA512 first.
| `pbkdf2_stretch_10000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 10000 iterations, after hashing the
initial input with SHA512 first.
| `pbkdf2_stretch_50000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 50000 iterations, after hashing the
initial input with SHA512 first.
| `pbkdf2_stretch_100000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 100000 iterations, after hashing the
initial input with SHA512 first.
| `pbkdf2_stretch_500000` | | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 500000 iterations, after hashing the
initial input with SHA512 first.
| `pbkdf2_stretch_1000000`| | | Uses `PBKDF2` key derivation function with `HMAC-SHA512` as a
pseudorandom function using 1000000 iterations, after hashing the
initial input with SHA512 first.
|=======================
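The `ssha256` entry above is a salted `sha-256` hash, which is sufficient for high-entropy tokens because the salt only needs to defeat precomputation, not slow down brute force. The following is a minimal sketch of that general technique; the class and method names are illustrative, not Elasticsearch's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

// Illustrative salted SHA-256 ("ssha256"-style) hashing for a high-entropy
// token. Only the salt and the resulting digest would be stored, never the
// token itself.
public class SaltedSha256 {
    static byte[] hash(byte[] salt, String token) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(salt);                                   // salt first
        md.update(token.getBytes(StandardCharsets.UTF_8)); // then the token bytes
        return md.digest();
    }

    static boolean verify(byte[] salt, byte[] stored, String candidate) throws NoSuchAlgorithmException {
        // constant-time comparison to avoid leaking a timing side channel
        return MessageDigest.isEqual(stored, hash(salt, candidate));
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // random per-credential salt
        byte[] stored = hash(salt, "my-api-key-secret");
        System.out.println(verify(salt, stored, "my-api-key-secret")); // true
        System.out.println(verify(salt, stored, "wrong-secret"));      // false
    }
}
```

Unlike `bcrypt` or `PBKDF2`, there is no iteration count to tune here, which is why the salted hash is so much cheaper per authentication.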
8 changes: 4 additions & 4 deletions docs/reference/settings/security-settings.asciidoc
@@ -23,8 +23,8 @@ For more information about creating and updating the {es} keystore, see
==== General security settings
`xpack.security.enabled`::
(<<static-cluster-setting,Static>>)
Defaults to `true`, which enables {es} {security-features} on the node.
This setting must be enabled to use Elasticsearch's authentication,
Defaults to `true`, which enables {es} {security-features} on the node.
This setting must be enabled to use Elasticsearch's authentication,
authorization and audit features. +
+
--
@@ -229,7 +229,7 @@ Defaults to `7d`.

--
NOTE: Large real-time clock inconsistency across cluster nodes can cause problems
with evaluating the API key retention period. That is, if the clock on the node
with evaluating the API key retention period. That is, if the clock on the node
invalidating the API key is significantly different than the one performing the deletion,
the key may be retained for longer or shorter than the configured retention period.

@@ -252,7 +252,7 @@ Sets the timeout of the internal search and delete call.
`xpack.security.authc.api_key.hashing.algorithm`::
(<<static-cluster-setting,Static>>)
Specifies the hashing algorithm that is used for securing API key credentials.
See <<password-hashing-algorithms>>. Defaults to `pbkdf2`.
See <<secure-token-hashing-algorithms>>. Defaults to `ssha256`.

[discrete]
[[security-domain-settings]]
@@ -57,7 +57,7 @@ GET _cluster/allocation/explain
[[fix-watermark-errors-temporary]]
==== Temporary Relief

To immediately restore write operations, you can temporarily increase the
To immediately restore write operations, you can temporarily increase
<<disk-based-shard-allocation,disk watermarks>> and remove the
<<index-block-settings,write block>>.

@@ -106,19 +106,33 @@ PUT _cluster/settings
[[fix-watermark-errors-resolve]]
==== Resolve

As a long-term solution, we recommend you do one of the following best suited
to your use case:
To resolve watermark errors permanently, perform one of the following actions:

* add nodes to the affected <<data-tiers,data tiers>>
+
TIP: You should enable <<xpack-autoscaling,autoscaling>> for clusters deployed using our {ess}, {ece}, and {eck} platforms.
* Horizontally scale nodes of the affected <<data-tiers,data tiers>>.

* upgrade existing nodes to increase disk space
+
TIP: On {ess}, https://support.elastic.co[Elastic Support] intervention may
become necessary if <<cluster-health,cluster health>> reaches `status:red`.
* Vertically scale existing nodes to increase disk space.

* delete unneeded indices using the <<indices-delete-index,delete index API>>
* Delete indices using the <<indices-delete-index,delete index API>>, either
permanently if the index isn't needed, or temporarily to later
<<snapshots-restore-snapshot,restore>>.

* update related <<index-lifecycle-management,ILM policy>> to push indices
through to later <<data-tiers,data tiers>>

TIP: On {ess} and {ece}, indices may need to be temporarily deleted via
its {cloud}/ec-api-console.html[Elasticsearch API Console] to later
<<snapshots-restore-snapshot,snapshot restore>> in order to resolve
<<cluster-health,cluster health>> `status:red` which will block
{cloud}/ec-activity-page.html[attempted changes]. If you experience issues
with this resolution flow on {ess}, kindly reach out to
https://support.elastic.co[Elastic Support] for assistance.

==== Prevent watermark errors

To avoid watermark errors in the future, perform one of the following actions:

* If you're using {ess}, {ece}, or {eck}: Enable <<xpack-autoscaling,autoscaling>>.

* Set up {kibana-ref}/kibana-alerts.html[stack monitoring alerts] on top of
<<monitor-elasticsearch-cluster,{es} monitoring>> to be notified before
the flood-stage watermark is reached.
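The Temporary Relief steps referenced above can be sketched as a pair of requests; the watermark values here are illustrative, and the second request clears the write block on all indices once space is available:

```console
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}

PUT */_settings
{
  "index.blocks.read_only_allow_delete": null
}
```

Remember to reset the watermark settings to `null` afterwards, since raised watermarks only defer the underlying disk-space problem.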
@@ -70,6 +70,7 @@
import static org.elasticsearch.ingest.geoip.GeoIpTestUtils.copyDefaultDatabases;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertResponse;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.anEmptyMap;
import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.containsInAnyOrder;
@@ -172,10 +172,15 @@ public void testInvalidTimestamp() throws Exception {
for (Path geoIpTmpDir : geoIpTmpDirs) {
try (Stream<Path> files = Files.list(geoIpTmpDir)) {
Set<String> names = files.map(f -> f.getFileName().toString()).collect(Collectors.toSet());
assertThat(names, not(hasItem("GeoLite2-ASN.mmdb")));
assertThat(names, not(hasItem("GeoLite2-City.mmdb")));
assertThat(names, not(hasItem("GeoLite2-Country.mmdb")));
assertThat(names, not(hasItem("MyCustomGeoLite2-City.mmdb")));
assertThat(
names,
allOf(
not(hasItem("GeoLite2-ASN.mmdb")),
not(hasItem("GeoLite2-City.mmdb")),
not(hasItem("GeoLite2-Country.mmdb")),
not(hasItem("MyCustomGeoLite2-City.mmdb"))
)
);
}
}
});
@@ -53,6 +53,10 @@ public class DatabaseReaderLazyLoader implements IpDatabase {
private volatile boolean deleteDatabaseFileOnShutdown;
private final AtomicInteger currentUsages = new AtomicInteger(0);

// it seems insane, especially if you read the code for UnixPath, but calling toString on a path in advance here is enough
// faster than calling it on every call to cache.putIfAbsent that it makes the slight additional internal complication worth it
private final String cachedDatabasePathToString;

DatabaseReaderLazyLoader(GeoIpCache cache, Path databasePath, String md5) {
this.cache = cache;
this.databasePath = Objects.requireNonNull(databasePath);
@@ -61,6 +65,9 @@ public class DatabaseReaderLazyLoader implements IpDatabase {
this.databaseReader = new SetOnce<>();
this.databaseType = new SetOnce<>();
this.buildDate = new SetOnce<>();

// cache the toString on construction
this.cachedDatabasePathToString = databasePath.toString();
}

/**
@@ -99,7 +106,7 @@ int current() {
@Override
@Nullable
public <RESPONSE> RESPONSE getResponse(String ipAddress, CheckedBiFunction<Reader, String, RESPONSE, Exception> responseProvider) {
return cache.putIfAbsent(ipAddress, databasePath.toString(), ip -> {
return cache.putIfAbsent(ipAddress, cachedDatabasePathToString, ip -> {
try {
return responseProvider.apply(get(), ipAddress);
} catch (Exception e) {
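The change above hoists `databasePath.toString()` out of the per-document hot path into a field computed once at construction. The pattern in isolation looks like this; the class name is illustrative, not the actual Elasticsearch class:

```java
import java.nio.file.Path;

// Sketch of hoisting a per-call toString() into a field computed once at
// construction time, so the hot path does no Path-to-String conversion.
final class CachedKeyExample {
    private final String cachedKey; // computed once, reused on every lookup

    CachedKeyExample(Path databasePath) {
        this.cachedKey = databasePath.toString(); // hoisted out of the hot path
    }

    String cacheKey() {
        return cachedKey; // hot path: no per-call allocation or conversion
    }
}
```

This trade-off only pays off because `getResponse` runs once per document while the path never changes after construction.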
@@ -206,12 +206,32 @@ public static Metadata fromXContent(XContentParser parser) {
}

public boolean isCloseToExpiration() {
return Instant.ofEpochMilli(lastCheck).isBefore(Instant.now().minus(25, ChronoUnit.DAYS));
final Instant now = Instant.ofEpochMilli(System.currentTimeMillis()); // millisecond precision is sufficient (and faster)
return Instant.ofEpochMilli(lastCheck).isBefore(now.minus(25, ChronoUnit.DAYS));
}

// these constants support the micro optimization below, see that note
private static final TimeValue THIRTY_DAYS = TimeValue.timeValueDays(30);
private static final long THIRTY_DAYS_MILLIS = THIRTY_DAYS.millis();

public boolean isNewEnough(Settings settings) {
TimeValue valid = settings.getAsTime("ingest.geoip.database_validity", TimeValue.timeValueDays(30));
return Instant.ofEpochMilli(lastCheck).isAfter(Instant.now().minus(valid.getMillis(), ChronoUnit.MILLIS));
// micro optimization: this looks a little silly, but the expected case is that database_validity is only used in tests.
// we run this code on every document, though, so the argument checking and other bits that getAsTime does are enough
// to show up in a flame graph.

// if you grep for "ingest.geoip.database_validity" you'll see that it's not a 'real' setting -- it's only defined in
// AbstractGeoIpIT, which is why it's an inline string constant here and not some static final, and also why it cannot
// be the case that this setting exists in a real running cluster

final long valid;
if (settings.hasValue("ingest.geoip.database_validity")) {
valid = settings.getAsTime("ingest.geoip.database_validity", THIRTY_DAYS).millis();
} else {
valid = THIRTY_DAYS_MILLIS;
}

final Instant now = Instant.ofEpochMilli(System.currentTimeMillis()); // millisecond precision is sufficient (and faster)
return Instant.ofEpochMilli(lastCheck).isAfter(now.minus(valid, ChronoUnit.MILLIS));
}

@Override
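The two micro-optimizations in `isNewEnough` — skipping settings parsing when no override is present, and using a millisecond-precision clock read instead of `Instant.now()` — can be sketched in isolation like this, with a plain `Map` as a hypothetical stand-in for the `Settings` object:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Map;

// Illustrative version of the freshness check; a Map stands in for
// Elasticsearch's Settings, and the names here are not the real API.
final class FreshnessCheck {
    private static final long THIRTY_DAYS_MILLIS = 30L * 24 * 60 * 60 * 1000;

    static boolean isNewEnough(long lastCheckMillis, Map<String, Long> settings) {
        // Fast path: when the override is absent, skip parsing entirely and
        // fall back to a precomputed constant.
        Long override = settings.get("validity.millis");
        long valid = (override != null) ? override : THIRTY_DAYS_MILLIS;

        // Instant.ofEpochMilli(System.currentTimeMillis()) avoids the
        // higher-resolution clock read that Instant.now() performs;
        // millisecond precision is sufficient for a multi-day window.
        Instant now = Instant.ofEpochMilli(System.currentTimeMillis());
        return Instant.ofEpochMilli(lastCheckMillis).isAfter(now.minus(valid, ChronoUnit.MILLIS));
    }
}
```

Neither change alters behavior; both only shave fixed per-call costs off a method that runs once per ingested document.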
3 changes: 0 additions & 3 deletions muted-tests.yml
@@ -154,9 +154,6 @@ tests:
- class: org.elasticsearch.xpack.ccr.rest.ShardChangesRestIT
method: testShardChangesNoOperation
issue: https://github.com/elastic/elasticsearch/issues/118800
- class: org.elasticsearch.smoketest.DocsClientYamlTestSuiteIT
method: test {yaml=reference/indices/shard-stores/line_150}
issue: https://github.com/elastic/elasticsearch/issues/118896
- class: org.elasticsearch.cluster.service.MasterServiceTests
method: testThreadContext
issue: https://github.com/elastic/elasticsearch/issues/118914