Commit 828ac34

Merge branch 'master' into pr/40576
* master: (25 commits)
  [DOCS] Correct keystore commands for Email and Jira actions in Watcher (elastic#40417)
  [DOCS] Document common settings for snapshot repository plugins (elastic#40475)
  Remove with(out)-system-key tests (elastic#40547)
  Geo Point parse error fix (elastic#40447)
  Handle null retention leases in WaitForNoFollowersStep (elastic#40477)
  [DOCS] Adds anchors for ruby client (elastic#39867)
  Mute DataFrameAuditorIT#testAuditorWritesAudits
  Disable integTest when Docker is not available (elastic#40585)
  Add randomScore function in script_score query (elastic#40186)
  Test fixtures krb5 (elastic#40297)
  Correct ILM metadata minimum compatibility version (elastic#40569)
  SQL: Centralize SQL test dependencies version handling (elastic#40551)
  Mute testTracerLog
  Mute testHttpInput
  Include functions' aliases in the list of functions (elastic#40584)
  Optimise rejection of out-of-range `long` values (elastic#40325)
  Add docs for cluster.remote.*.proxy setting (elastic#40281)
  Migrate systemd packaging tests from bats to java (elastic#39954)
  Move top-level pipeline aggs out of QuerySearchResult (elastic#40319)
  Use openjdk 12 in packer cache script (elastic#40498)
  ...
jasontedor committed Mar 28, 2019
2 parents 70828f2 + dd8b4bb · commit 828ac34
Showing 100 changed files with 1,773 additions and 1,572 deletions.
2 changes: 1 addition & 1 deletion .ci/packer_cache.sh
@@ -20,5 +20,5 @@ export JAVA_HOME="${HOME}"/.java/${ES_BUILD_JAVA}
 # We are caching BWC versions too, need these so we can build those
 export JAVA8_HOME="${HOME}"/.java/java8
 export JAVA11_HOME="${HOME}"/.java/java11
-export JAVA12_HOME="${HOME}"/.java/java12
+export JAVA12_HOME="${HOME}"/.java/openjdk12
 ./gradlew --parallel clean --scan -Porg.elasticsearch.acceptScanTOS=true -s resolveAllDependencies
4 changes: 2 additions & 2 deletions build.gradle
@@ -162,8 +162,8 @@ task verifyVersions {
  * after the backport of the backcompat code is complete.
  */
 
-boolean bwc_tests_enabled = true
-final String bwc_tests_disabled_issue = "" /* place a PR link here when committing bwc changes */
+boolean bwc_tests_enabled = false
+final String bwc_tests_disabled_issue = "https://github.com/elastic/elasticsearch/pull/40319" /* place a PR link here when committing bwc changes */
 if (bwc_tests_enabled == false) {
   if (bwc_tests_disabled_issue.isEmpty()) {
     throw new GradleException("bwc_tests_disabled_issue must be set when bwc_tests_enabled == false")
@@ -31,7 +31,6 @@
 import org.gradle.api.Task;
 import org.gradle.api.plugins.BasePlugin;
 import org.gradle.api.plugins.ExtraPropertiesExtension;
-import org.gradle.api.tasks.Input;
 import org.gradle.api.tasks.TaskContainer;
 
 import java.lang.reflect.InvocationTargetException;
@@ -104,6 +103,7 @@ public void apply(Project project) {
                 "but none could be found so these will be skipped", project.getPath()
             );
             disableTaskByType(tasks, getTaskClass("com.carrotsearch.gradle.junit4.RandomizedTestingTask"));
+            disableTaskByType(tasks, getTaskClass("org.elasticsearch.gradle.test.RestIntegTestTask"));
             // conventions are not honored when the tasks are disabled
             disableTaskByType(tasks, TestingConventionsTasks.class);
             disableTaskByType(tasks, ComposeUp.class);
@@ -122,6 +122,7 @@ public void apply(Project project) {
                     fixtureProject,
                     (name, port) -> setSystemProperty(task, name, port)
                 );
+                task.dependsOn(fixtureProject.getTasks().getByName("postProcessFixture"));
             })
         );
 
@@ -155,7 +156,6 @@ private void configureServiceInfoForTask(Task task, Project fixtureProject, BiCo
         );
     }
 
-    @Input
     public boolean dockerComposeSupported(Project project) {
         if (OS.current().equals(OS.WINDOWS)) {
             return false;
4 changes: 1 addition & 3 deletions docs/plugins/repository-azure.asciidoc
@@ -126,9 +126,7 @@ The Azure repository supports following settings:
     setting doesn't affect index files that are already compressed by default.
     Defaults to `true`.
 
-`readonly`::
-
-    Makes repository read-only. Defaults to `false`.
+include::repository-shared-settings.asciidoc[]
 
 `location_mode`::
 
2 changes: 2 additions & 0 deletions docs/plugins/repository-gcs.asciidoc
@@ -240,6 +240,8 @@ The following settings are supported:
     setting doesn't affect index files that are already compressed by default.
     Defaults to `true`.
 
+include::repository-shared-settings.asciidoc[]
+
 `application_name`::
 
     deprecated[7.0.0, This setting is now defined in the <<repository-gcs-client, client settings>>]
2 changes: 2 additions & 0 deletions docs/plugins/repository-hdfs.asciidoc
@@ -64,6 +64,8 @@ The following settings are supported:
 
     Whether to compress the metadata or not. (Enabled by default)
 
+include::repository-shared-settings.asciidoc[]
+
 `chunk_size`::
 
     Override the chunk size. (Disabled by default)
2 changes: 2 additions & 0 deletions docs/plugins/repository-s3.asciidoc
@@ -213,6 +213,8 @@ The following settings are supported:
     setting doesn't affect index files that are already compressed by default.
     Defaults to `true`.
 
+include::repository-shared-settings.asciidoc[]
+
 `server_side_encryption`::
 
     When set to `true` files are encrypted on server side using AES256
11 changes: 11 additions & 0 deletions docs/plugins/repository-shared-settings.asciidoc
@@ -0,0 +1,11 @@
+`max_restore_bytes_per_sec`::
+
+    Throttles per node restore rate. Defaults to `40mb` per second.
+
+`max_snapshot_bytes_per_sec`::
+
+    Throttles per node snapshot rate. Defaults to `40mb` per second.
+
+`readonly`::
+
+    Makes repository read-only. Defaults to `false`.
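
These shared settings apply to each of the repository plugins above. As a quick, hypothetical sketch (the repository name `my_backup` and bucket `my-bucket` are made up for illustration, and the snippet follows the `// NOTCONSOLE` convention used elsewhere in these docs), a repository could be registered with the throttling settings like this:

[source,js]
--------------------------------------------------
PUT _snapshot/my_backup <1>
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket", <2>
    "max_snapshot_bytes_per_sec": "20mb",
    "max_restore_bytes_per_sec": "20mb",
    "readonly": false
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> `my_backup` is an illustrative repository name.
<2> `my-bucket` is an illustrative bucket name; the shared settings work the same way for the Azure, GCS, and HDFS repository types.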
10 changes: 9 additions & 1 deletion docs/reference/modules/remote-clusters.asciidoc
@@ -227,7 +227,7 @@ PUT _cluster/settings
   clusters are kept alive. If set to `-1`, application-level ping messages to
   this remote cluster are not sent. If unset, application-level ping messages
   are sent according to the global `transport.ping_schedule` setting, which
-  defaults to ``-1` meaning that pings are not sent.
+  defaults to `-1` meaning that pings are not sent.
 
 `cluster.remote.${cluster_alias}.transport.compress`::
 
@@ -237,6 +237,14 @@ PUT _cluster/settings
   Elasticsearch compresses the response. If unset, the global
   `transport.compress` is used as the fallback setting.
 
+`cluster.remote.${cluster_alias}.proxy`::
+
+  Sets a proxy address for the specified remote cluster. By default this is not
+  set, meaning that Elasticsearch will connect directly to the nodes in the
+  remote cluster using their <<advanced-network-settings,publish addresses>>.
+  If this setting is set to an IP address or hostname then Elasticsearch will
+  connect to the nodes in the remote cluster using this address instead.
+
 [float]
 [[retrieve-remote-clusters-info]]
 === Retrieving remote clusters info
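
For illustration, the new `proxy` setting is applied the same way as the other per-cluster settings above, via `PUT _cluster/settings`. This is a sketch, not taken from the diff: the alias `cluster_one` and the proxy address are hypothetical.

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_one": {
          "proxy": "proxy.example.com:9300" <1>
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> Hypothetical proxy address; with this set, Elasticsearch connects to the
remote cluster's nodes through this address instead of their publish addresses.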
60 changes: 22 additions & 38 deletions docs/reference/query-dsl/script-score-query.asciidoc
@@ -182,60 +182,44 @@ different from the query's vector, 0 is used for missing dimensions
 in the calculations of vector functions.
 
 
-[[random-functions]]
-===== Random functions
-There are two predefined ways to produce random values:
-`randomNotReproducible` and `randomReproducible`.
+[[random-score-function]]
+===== Random score function
+`random_score` function generates scores that are uniformly distributed
+from 0 up to but not including 1.
 
-`randomNotReproducible()` uses `java.util.Random` class
-to generate a random value of the type `long`.
-The generated values are not reproducible between requests' invocations.
+`randomScore` function has the following syntax:
+`randomScore(<seed>, <fieldName>)`.
+It has a required parameter - `seed` as an integer value,
+and an optional parameter - `fieldName` as a string value.
 
 [source,js]
 --------------------------------------------------
 "script" : {
-    "source" : "randomNotReproducible()"
+    "source" : "randomScore(100, '_seq_no')"
 }
 --------------------------------------------------
 // NOTCONSOLE
 
 
-`randomReproducible(String seedValue, int seed)` produces
-reproducible random values of type `long`. This function requires
-more computational time and memory than the non-reproducible version.
-
-A good candidate for the `seedValue` is document field values that
-are unique across documents and already pre-calculated and preloaded
-in the memory. For example, values of the document's `_seq_no` field
-is a good candidate, as documents on the same shard have unique values
-for the `_seq_no` field.
+If the `fieldName` parameter is omitted, the internal Lucene
+document ids will be used as a source of randomness. This is very efficient,
+but unfortunately not reproducible since documents might be renumbered
+by merges.
 
 [source,js]
 --------------------------------------------------
 "script" : {
-    "source" : "randomReproducible(Long.toString(doc['_seq_no'].value), 100)"
+    "source" : "randomScore(100)"
 }
 --------------------------------------------------
 // NOTCONSOLE
 
 
-A drawback of using `_seq_no` is that generated values change if
-documents are updated. Another drawback is not absolute uniqueness, as
-documents from different shards with the same sequence numbers
-generate the same random values.
-
-If you need random values to be distinct across different shards,
-you can use a field with unique values across shards,
-such as `_id`, but watch out for the memory usage as all
-these unique values need to be loaded into memory.
-
-[source,js]
---------------------------------------------------
-"script" : {
-    "source" : "randomReproducible(doc['_id'].value, 100)"
-}
---------------------------------------------------
-// NOTCONSOLE
+Note that documents that are within the same shard and have the
+same value for field will get the same score, so it is usually desirable
+to use a field that has unique values for all documents across a shard.
+A good default choice might be to use the `_seq_no`
+field, whose only drawback is that scores will change if the document is
+updated since update operations also update the value of the `_seq_no` field.
 
 
 [[decay-functions]]
@@ -349,8 +333,8 @@ the following script:
 
 ===== `random_score`
 
-Use `randomReproducible` and `randomNotReproducible` functions
-as described in <<random-functions, random functions>>.
+Use `randomScore` function
+as described in <<random-score-function, random score function>>.
 
 
 ===== `field_value_factor`
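
Putting the snippets above together, a complete search request using the new `randomScore` function might look like the following. This is a sketch rather than part of the diff; the index name `my_index` is hypothetical.

[source,js]
--------------------------------------------------
GET /my_index/_search
{
  "query": {
    "script_score": {
      "query": { "match_all": {} },
      "script": {
        "source": "randomScore(100, '_seq_no')" <1>
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE
<1> Seed `100` with `_seq_no` as the field: each document gets a reproducible,
uniformly distributed score in `[0, 1)`.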