
Commit

…111712_fix
astefan committed Aug 14, 2024
2 parents 6a4f060 + 595628f commit 1352546
Showing 53 changed files with 1,237 additions and 561 deletions.
6 changes: 6 additions & 0 deletions docs/changelog/111740.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,6 @@
pr: 111740
summary: Fix Start Trial API output acknowledgement header for features
area: License
type: bug
issues:
- 111739
5 changes: 5 additions & 0 deletions docs/changelog/111807.yaml
@@ -0,0 +1,5 @@
pr: 111807
summary: Explain Function Score Query
area: Search
type: bug
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/111843.yaml
@@ -0,0 +1,5 @@
pr: 111843
summary: Add maximum nested depth check to WKT parser
area: Geo
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/111879.yaml
@@ -0,0 +1,6 @@
pr: 111879
summary: "ESQL: Have BUCKET generate friendlier intervals"
area: ES|QL
type: enhancement
issues:
- 110916
2 changes: 1 addition & 1 deletion docs/reference/esql/esql-query-api.asciidoc
@@ -102,7 +102,7 @@ Column `name` and `type` for each column returned in `values`. Each object is a
Column `name` and `type` for each queried column. Each object is a single column. This is only
returned if `drop_null_columns` is sent with the request.

-`rows`::
+`values`::
(array of arrays)
Values for the search results.

12 changes: 0 additions & 12 deletions docs/reference/esql/functions/kibana/definition/mv_count.json

Some generated files are not rendered by default. Learn more about how customized files appear on GitHub.

12 changes: 0 additions & 12 deletions docs/reference/esql/functions/kibana/definition/mv_first.json


12 changes: 0 additions & 12 deletions docs/reference/esql/functions/kibana/definition/mv_last.json


12 changes: 0 additions & 12 deletions docs/reference/esql/functions/kibana/definition/mv_max.json


12 changes: 0 additions & 12 deletions docs/reference/esql/functions/kibana/definition/mv_min.json


1 change: 0 additions & 1 deletion docs/reference/esql/functions/types/mv_count.asciidoc


1 change: 0 additions & 1 deletion docs/reference/esql/functions/types/mv_first.asciidoc


1 change: 0 additions & 1 deletion docs/reference/esql/functions/types/mv_last.asciidoc


1 change: 0 additions & 1 deletion docs/reference/esql/functions/types/mv_max.asciidoc


1 change: 0 additions & 1 deletion docs/reference/esql/functions/types/mv_min.asciidoc


5 changes: 2 additions & 3 deletions docs/reference/health/health.asciidoc
@@ -204,9 +204,8 @@ for health status set `verbose` to `false` to disable the more expensive analysi
`help_url` field.
`affected_resources`::
-(Optional, array of strings) If the root cause pertains to multiple resources in the
-cluster (like indices, shards, nodes, etc...) this will hold all resources that this
-diagnosis is applicable for.
+(Optional, object) An object where the keys represent resource types (for example, indices, shards),
+and the values are lists of the specific resources affected by the issue.
`help_url`::
(string) A link to the troubleshooting guide that'll fix the health problem.
2 changes: 1 addition & 1 deletion docs/reference/index-modules.asciidoc
@@ -113,7 +113,7 @@ Index mode supports the following values:

`time_series`::: Index mode optimized for storage of metrics documented in <<tsds-index-settings,TSDS Settings>>.

-`logs`::: Index mode optimized for storage of logs. It applies default sort settings on the `hostname` and `timestamp` fields and uses <<synthetic-source,synthetic `_source`>>. <<index-modules-index-sorting,Index sorting>> on different fields is still allowed.
+`logsdb`::: Index mode optimized for storage of logs. It applies default sort settings on the `hostname` and `timestamp` fields and uses <<synthetic-source,synthetic `_source`>>. <<index-modules-index-sorting,Index sorting>> on different fields is still allowed.
preview:[]

[[routing-partition-size]] `index.routing_partition_size`::
26 changes: 20 additions & 6 deletions docs/reference/inference/inference-apis.asciidoc
@@ -5,21 +5,35 @@
experimental[]

IMPORTANT: The {infer} APIs enable you to use certain services, such as built-in
-{ml} models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio or
-Hugging Face. For built-in models and models uploaded through Eland, the {infer}
-APIs offer an alternative way to use and manage trained models. However, if you
-do not plan to use the {infer} APIs to use these models or if you want to use
-non-NLP models, use the <<ml-df-trained-models-apis>>.
+{ml} models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure,
+Google AI Studio or Hugging Face. For built-in models and models uploaded
+through Eland, the {infer} APIs offer an alternative way to use and manage
+trained models. However, if you do not plan to use the {infer} APIs to use these
+models or if you want to use non-NLP models, use the
+<<ml-df-trained-models-apis>>.

The {infer} APIs enable you to create {infer} endpoints and use {ml} models of
-different providers - such as Cohere, OpenAI, or HuggingFace - as a service. Use
+different providers - such as Amazon Bedrock, Anthropic, Azure AI Studio,
+Cohere, Google AI, Mistral, OpenAI, or HuggingFace - as a service. Use
the following APIs to manage {infer} models and perform {infer}:

* <<delete-inference-api>>
* <<get-inference-api>>
* <<post-inference-api>>
* <<put-inference-api>>

[[inference-landscape]]
.A representation of the Elastic inference landscape
image::images/inference-landscape.png[A representation of the Elastic inference landscape,align="center"]

An {infer} endpoint enables you to use the corresponding {ml} model without
manual deployment and apply it to your data at ingestion time through
<<semantic-search-semantic-text, semantic text>>.

Choose a model from your provider or use ELSER, a retrieval model trained by
Elastic, then create an {infer} endpoint with the <<put-inference-api>>.
Now use <<semantic-search-semantic-text, semantic text>> to perform
<<semantic-search, semantic search>> on your data.

include::delete-inference.asciidoc[]
include::get-inference.asciidoc[]
15 changes: 8 additions & 7 deletions docs/reference/modules/discovery/fault-detection.asciidoc
@@ -168,19 +168,20 @@ reason, something other than {es} likely caused the connection to close. A
common cause is a misconfigured firewall with an improper timeout or another
policy that's <<long-lived-connections,incompatible with {es}>>. It could also
be caused by general connectivity issues, such as packet loss due to faulty
-hardware or network congestion. If you're an advanced user, you can get more
-detailed information about network exceptions by configuring the following
-loggers:
+hardware or network congestion. If you're an advanced user, configure the
+following loggers to get more detailed information about network exceptions:

[source,yaml]
----
logger.org.elasticsearch.transport.TcpTransport: DEBUG
logger.org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport: DEBUG
----

-In extreme cases, you may need to take packet captures using `tcpdump` to
-determine whether messages between nodes are being dropped or rejected by some
-other device on the network.
+If these logs do not show enough information to diagnose the problem, obtain a
+packet capture simultaneously from the nodes at both ends of an unstable
+connection and analyse it alongside the {es} logs from those nodes to determine
+if traffic between the nodes is being disrupted by another device on the
+network.

[discrete]
===== Diagnosing `lagging` nodes
@@ -299,4 +299,4 @@ To reconstruct the output, base64-decode the data and decompress it using
----
cat shardlock.log | sed -e 's/.*://' | base64 --decode | gzip --decompress
----
//end::troubleshooting[]
20 changes: 11 additions & 9 deletions docs/reference/troubleshooting/network-timeouts.asciidoc
@@ -16,20 +16,22 @@ end::troubleshooting-network-timeouts-gc-vm[]
tag::troubleshooting-network-timeouts-packet-capture-elections[]
* Packet captures will reveal system-level and network-level faults, especially
-if you capture the network traffic simultaneously at all relevant nodes. You
-should be able to observe any retransmissions, packet loss, or other delays on
-the connections between the nodes.
+if you capture the network traffic simultaneously at all relevant nodes and
+analyse it alongside the {es} logs from those nodes. You should be able to
+observe any retransmissions, packet loss, or other delays on the connections
+between the nodes.
end::troubleshooting-network-timeouts-packet-capture-elections[]

tag::troubleshooting-network-timeouts-packet-capture-fault-detection[]
* Packet captures will reveal system-level and network-level faults, especially
if you capture the network traffic simultaneously at the elected master and the
-faulty node. The connection used for follower checks is not used for any other
-traffic so it can be easily identified from the flow pattern alone, even if TLS
-is in use: almost exactly every second there will be a few hundred bytes sent
-each way, first the request by the master and then the response by the
-follower. You should be able to observe any retransmissions, packet loss, or
-other delays on such a connection.
+faulty node and analyse it alongside the {es} logs from those nodes. The
+connection used for follower checks is not used for any other traffic so it can
+be easily identified from the flow pattern alone, even if TLS is in use: almost
+exactly every second there will be a few hundred bytes sent each way, first the
+request by the master and then the response by the follower. You should be able
+to observe any retransmissions, packet loss, or other delays on such a
+connection.
end::troubleshooting-network-timeouts-packet-capture-fault-detection[]

tag::troubleshooting-network-timeouts-threads[]
@@ -43,6 +43,7 @@ public class WellKnownText {
public static final String RPAREN = ")";
public static final String COMMA = ",";
public static final String NAN = "NaN";
+public static final int MAX_NESTED_DEPTH = 1000;

private static final String NUMBER = "<NUMBER>";
private static final String EOF = "END-OF-STREAM";
@@ -425,7 +426,7 @@ public static Geometry fromWKT(GeometryValidator validator, boolean coerce, Stri
tokenizer.whitespaceChars('\r', '\r');
tokenizer.whitespaceChars('\n', '\n');
tokenizer.commentChar('#');
-Geometry geometry = parseGeometry(tokenizer, coerce);
+Geometry geometry = parseGeometry(tokenizer, coerce, 0);
validator.validate(geometry);
return geometry;
} finally {
@@ -436,40 +437,35 @@ public static Geometry fromWKT(GeometryValidator validator, boolean coerce, Stri
/**
* parse geometry from the stream tokenizer
*/
-private static Geometry parseGeometry(StreamTokenizer stream, boolean coerce) throws IOException, ParseException {
+private static Geometry parseGeometry(StreamTokenizer stream, boolean coerce, int depth) throws IOException, ParseException {
final String type = nextWord(stream).toLowerCase(Locale.ROOT);
-switch (type) {
-case "point":
-return parsePoint(stream);
-case "multipoint":
-return parseMultiPoint(stream);
-case "linestring":
-return parseLine(stream);
-case "multilinestring":
-return parseMultiLine(stream);
-case "polygon":
-return parsePolygon(stream, coerce);
-case "multipolygon":
-return parseMultiPolygon(stream, coerce);
-case "bbox":
-return parseBBox(stream);
-case "geometrycollection":
-return parseGeometryCollection(stream, coerce);
-case "circle": // Not part of the standard, but we need it for internal serialization
-return parseCircle(stream);
-}
-throw new IllegalArgumentException("Unknown geometry type: " + type);
-}

-private static GeometryCollection<Geometry> parseGeometryCollection(StreamTokenizer stream, boolean coerce) throws IOException,
-ParseException {
+return switch (type) {
+case "point" -> parsePoint(stream);
+case "multipoint" -> parseMultiPoint(stream);
+case "linestring" -> parseLine(stream);
+case "multilinestring" -> parseMultiLine(stream);
+case "polygon" -> parsePolygon(stream, coerce);
+case "multipolygon" -> parseMultiPolygon(stream, coerce);
+case "bbox" -> parseBBox(stream);
+case "geometrycollection" -> parseGeometryCollection(stream, coerce, depth + 1);
+case "circle" -> // Not part of the standard, but we need it for internal serialization
+parseCircle(stream);
+default -> throw new IllegalArgumentException("Unknown geometry type: " + type);
+};
+}

+private static GeometryCollection<Geometry> parseGeometryCollection(StreamTokenizer stream, boolean coerce, int depth)
+throws IOException, ParseException {
if (nextEmptyOrOpen(stream).equals(EMPTY)) {
return GeometryCollection.EMPTY;
}
+if (depth > MAX_NESTED_DEPTH) {
+throw new ParseException("maximum nested depth of " + MAX_NESTED_DEPTH + " exceeded", stream.lineno());
+}
List<Geometry> shapes = new ArrayList<>();
-shapes.add(parseGeometry(stream, coerce));
+shapes.add(parseGeometry(stream, coerce, depth));
while (nextCloserOrComma(stream).equals(COMMA)) {
-shapes.add(parseGeometry(stream, coerce));
+shapes.add(parseGeometry(stream, coerce, depth));
}
return new GeometryCollection<>(shapes);
}
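The change above threads a `depth` counter through the mutually recursive parse methods and fails fast once it passes `MAX_NESTED_DEPTH`, so a deeply nested `GEOMETRYCOLLECTION(GEOMETRYCOLLECTION(...))` input raises a `ParseException` instead of crashing the node with a `StackOverflowError`. The same guard pattern can be sketched standalone — the names below are illustrative, not the actual Elasticsearch WKT parser API:

```java
import java.text.ParseException;

// Minimal sketch of a depth-guarded recursive-descent parser: a depth
// counter is passed down each recursive call and checked against a fixed
// limit, so hostile input like "((((((..." fails cleanly rather than
// exhausting the call stack. Hypothetical class and method names.
public class DepthGuardSketch {
    static final int MAX_NESTED_DEPTH = 1000;

    /** Returns the index just past the group starting at {@code pos}
     *  (which must point at '('), guarding against runaway nesting. */
    static int parseGroup(String s, int pos, int depth) throws ParseException {
        if (depth > MAX_NESTED_DEPTH) {
            throw new ParseException(
                "maximum nested depth of " + MAX_NESTED_DEPTH + " exceeded", pos);
        }
        int i = pos + 1; // skip the opening '('
        while (i < s.length()) {
            char c = s.charAt(i);
            if (c == '(') {
                i = parseGroup(s, i, depth + 1); // recurse one level deeper
            } else if (c == ')') {
                return i + 1; // group closed; resume after it
            } else {
                i++; // ordinary content
            }
        }
        throw new ParseException("unbalanced input", pos);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseGroup("(a(b)c)", 0, 0)); // prints 7
        try {
            parseGroup("(".repeat(2000), 0, 0);
        } catch (ParseException e) {
            System.out.println(e.getMessage()); // limit trips before stack overflow
        }
    }
}
```

Incrementing the counter only where the grammar actually recurses (here, on each nested group; in the diff, on each nested `geometrycollection`) keeps the limit proportional to real nesting depth rather than to input length.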