Merge branch '6.x' into ccr-6.x
* 6.x:
  Watcher: Ensure mail message ids are unique per watch action (#30112)
  SQL: Correct error message (#30138)
  SQL: Add BinaryMathProcessor to named writeables list (#30127)
  Tests: Use buildDir as base for generated-resources (#30191)
  Fix SliceBuilderTests#testRandom failures
  Fix edge cases in CompositeKeyExtractorTests (#30175)
  Document time unit limitations for date histograms (#30177)
  Remove licenses missed by the migration
  [DOCS] Updates docker installation package details (#30110)
  Fix TermsSetQueryBuilder.doEquals() method (#29629)
  [Monitoring] Remove unhelpful Monitoring tests (#30144)
  [Test] Fix RenameProcessorTests.testRenameExistingFieldNullValue() (#29655)
  [test] include oss tar in packaging tests (#30155)
  TEST: Update settings should go through cluster state (#29682)
  Add additional shards routing info in ShardSearchRequest (#29533)
  Reinstate missing documentation (#28781)
  Clarify documentation of scroll_id (#29424)
  Fixes Eclipse build for sql jdbc project (#30114)
  Watcher: Fold two smoke test projects into smoke-test-watcher (#30137)
dnhatn committed Apr 27, 2018
2 parents 950ddc3 + 6fed2d4 commit b7727c6
Showing 83 changed files with 1,098 additions and 789 deletions.
@@ -37,8 +37,15 @@ class VagrantTestPlugin implements Plugin<Project> {
             'ubuntu-1404',
     ]

-    /** All onboarded archives by default, available for Bats tests even if not used **/
-    static List<String> DISTRIBUTION_ARCHIVES = ['tar', 'rpm', 'deb', 'oss-rpm', 'oss-deb']
+    /** All distributions to bring into test VM, whether or not they are used **/
+    static List<String> DISTRIBUTIONS = [
+            'archives:tar',
+            'archives:oss-tar',
+            'packages:rpm',
+            'packages:oss-rpm',
+            'packages:deb',
+            'packages:oss-deb'
+    ]

     /** Packages onboarded for upgrade tests **/
     static List<String> UPGRADE_FROM_ARCHIVES = ['rpm', 'deb']
@@ -117,13 +124,8 @@ class VagrantTestPlugin implements Plugin<Project> {
             upgradeFromVersion = Version.fromString(upgradeFromVersionRaw)
         }

-        DISTRIBUTION_ARCHIVES.each {
+        DISTRIBUTIONS.each {
             // Adds a dependency for the current version
-            if (it == 'tar') {
-                it = 'archives:tar'
-            } else {
-                it = "packages:${it}"
-            }
             project.dependencies.add(PACKAGING_CONFIGURATION,
                     project.dependencies.project(path: ":distribution:${it}", configuration: 'default'))
         }
1 change: 0 additions & 1 deletion buildSrc/src/main/resources/checkstyle_suppressions.xml
@@ -554,7 +554,6 @@
 <suppress files="server[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]ReplicaShardAllocatorTests.java" checks="LineLength" />
 <suppress files="server[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]gateway[/\\]ReusePeerRecoverySharedTest.java" checks="LineLength" />
 <suppress files="server[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]get[/\\]GetActionIT.java" checks="LineLength" />
-<suppress files="server[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexServiceTests.java" checks="LineLength" />
 <suppress files="server[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]IndexingSlowLogTests.java" checks="LineLength" />
 <suppress files="server[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]MergePolicySettingsTests.java" checks="LineLength" />
 <suppress files="server[/\\]src[/\\]test[/\\]java[/\\]org[/\\]elasticsearch[/\\]index[/\\]SearchSlowLogTests.java" checks="LineLength" />
@@ -27,11 +27,13 @@ POST /sales/_search?size=0
 // CONSOLE
 // TEST[setup:sales]

-Available expressions for interval: `year`, `quarter`, `month`, `week`, `day`, `hour`, `minute`, `second`
+Available expressions for interval: `year` (`1y`), `quarter` (`1q`), `month` (`1M`), `week` (`1w`),
+`day` (`1d`), `hour` (`1h`), `minute` (`1m`), `second` (`1s`)

 Time values can also be specified via abbreviations supported by <<time-units,time units>> parsing.
 Note that fractional time values are not supported, but you can address this by shifting to another
-time unit (e.g., `1.5h` could instead be specified as `90m`).
+time unit (e.g., `1.5h` could instead be specified as `90m`). Also note that time intervals larger
+than days do not support arbitrary values but can only be one unit large (e.g. `1y` is valid, `2y` is not).

 [source,js]
 --------------------------------------------------
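For illustration, a fractional interval such as `1.5h` would be rewritten as `90m`; a minimal sketch, assuming the `sales` index and its `date` field from the docs' test setup (not part of this commit):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "90m"
            }
        }
    }
}
--------------------------------------------------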
13 changes: 8 additions & 5 deletions docs/reference/docs/delete.asciidoc
@@ -39,11 +39,14 @@ The result of the above delete operation is:
 [[delete-versioning]]
 === Versioning

-Each document indexed is versioned. When deleting a document, the
-`version` can be specified to make sure the relevant document we are
-trying to delete is actually being deleted and it has not changed in the
-meantime. Every write operation executed on a document, deletes included,
-causes its version to be incremented.
+Each document indexed is versioned. When deleting a document, the `version` can
+be specified to make sure the relevant document we are trying to delete is
+actually being deleted and it has not changed in the meantime. Every write
+operation executed on a document, deletes included, causes its version to be
+incremented. The version number of a deleted document remains available for a
+short time after deletion to allow for control of concurrent operations. The
+length of time for which a deleted document's version remains available is
+determined by the `index.gc_deletes` index setting and defaults to 60 seconds.

 [float]
 [[delete-routing]]
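A minimal sketch of a versioned delete (the index name and id are illustrative, not part of this commit); if the supplied `version` no longer matches the document's current version, the delete fails with a version conflict instead of removing the document:

[source,js]
--------------------------------------------------
DELETE /twitter/_doc/1?version=2
--------------------------------------------------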
21 changes: 21 additions & 0 deletions docs/reference/index-modules.asciidoc
@@ -193,6 +193,27 @@ specific index module:
     The maximum number of terms that can be used in Terms Query.
     Defaults to `65536`.

+`index.routing.allocation.enable`::
+
+    Controls shard allocation for this index. It can be set to:
+    * `all` (default) - Allows shard allocation for all shards.
+    * `primaries` - Allows shard allocation only for primary shards.
+    * `new_primaries` - Allows shard allocation only for newly-created primary shards.
+    * `none` - No shard allocation is allowed.
+
+`index.routing.rebalance.enable`::
+
+    Enables shard rebalancing for this index. It can be set to:
+    * `all` (default) - Allows shard rebalancing for all shards.
+    * `primaries` - Allows shard rebalancing only for primary shards.
+    * `replicas` - Allows shard rebalancing only for replica shards.
+    * `none` - No shard rebalancing is allowed.
+
+`index.gc_deletes`::
+
+    The length of time that a <<delete-versioning,deleted document's version number>> remains available for <<index-versioning,further versioned operations>>.
+    Defaults to `60s`.
+
 [float]
 === Settings in other index modules
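Both `index.routing.*.enable` settings and `index.gc_deletes` are dynamic, so they can be changed on a live index through the update-settings API. A minimal sketch, with `my_index` as a placeholder name:

[source,js]
--------------------------------------------------
PUT /my_index/_settings
{
    "index.routing.allocation.enable": "primaries",
    "index.gc_deletes": "2m"
}
--------------------------------------------------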
6 changes: 3 additions & 3 deletions docs/reference/search/request/scroll.asciidoc
@@ -78,9 +78,9 @@ returned with each batch of results. Each call to the `scroll` API returns the
 next batch of results until there are no more results left to return, ie the
 `hits` array is empty.

-IMPORTANT: The initial search request and each subsequent scroll request
-returns a new `_scroll_id` -- only the most recent `_scroll_id` should be
-used.
+IMPORTANT: The initial search request and each subsequent scroll request each
+return a `_scroll_id`, which may change with each request -- only the most
+recent `_scroll_id` should be used.

 NOTE: If the request specifies aggregations, only the initial search response
 will contain the aggregations results.
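In practice this means each scroll call should pass the `_scroll_id` taken from the most recent response, not the one from the original search. A minimal sketch; the id value is a placeholder:

[source,js]
--------------------------------------------------
POST /_search/scroll
{
    "scroll" : "1m",
    "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
--------------------------------------------------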
@@ -128,7 +128,7 @@ public void testRenameExistingFieldNullValue() throws Exception {
         IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random(), new HashMap<>());
         String fieldName = RandomDocumentPicks.randomFieldName(random());
         ingestDocument.setFieldValue(fieldName, null);
-        String newFieldName = RandomDocumentPicks.randomFieldName(random());
+        String newFieldName = randomValueOtherThanMany(ingestDocument::hasField, () -> RandomDocumentPicks.randomFieldName(random()));
         Processor processor = new RenameProcessor(randomAlphaOfLength(10), fieldName, newFieldName, false);
         processor.execute(ingestDocument);
         assertThat(ingestDocument.hasField(fieldName), equalTo(false));
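The fix draws candidate field names with `randomValueOtherThanMany` until one is absent from the document, so the new name can never collide with an existing field. A simplified sketch of that helper's contract (the actual ESTestCase implementation may differ in details):

[source,java]
--------------------------------------------------
// Simplified sketch: re-sample from the generator until the predicate
// no longer rejects the candidate value.
static <T> T randomValueOtherThanMany(java.util.function.Predicate<T> badValue,
                                      java.util.function.Supplier<T> generator) {
    T candidate;
    do {
        candidate = generator.get();
    } while (badValue.test(candidate));
    return candidate;
}
--------------------------------------------------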
@@ -55,7 +55,8 @@ setup() {
 }

 @test "[TAR] archive is available" {
-    count=$(find . -type f -name 'elasticsearch*.tar.gz' | wc -l)
+    local version=$(cat version)
+    count=$(find . -type f -name "${PACKAGE_NAME}-${version}.tar.gz" | wc -l)
     [ "$count" -eq 1 ]
 }
6 changes: 5 additions & 1 deletion qa/vagrant/src/test/resources/packaging/utils/tar.bash
@@ -35,10 +35,12 @@
 install_archive() {
     export ESHOME=${1:-/tmp/elasticsearch}

+    local version=$(cat version)
+
     echo "Unpacking tarball to $ESHOME"
     rm -rf /tmp/untar
     mkdir -p /tmp/untar
-    tar -xzpf elasticsearch*.tar.gz -C /tmp/untar
+    tar -xzpf "${PACKAGE_NAME}-${version}.tar.gz" -C /tmp/untar

     find /tmp/untar -depth -type d -name 'elasticsearch*' -exec mv {} "$ESHOME" \; > /dev/null

@@ -79,6 +81,8 @@ export_elasticsearch_paths() {
     export ESSCRIPTS="$ESCONFIG/scripts"
     export ESDATA="$ESHOME/data"
     export ESLOG="$ESHOME/logs"
+
+    export PACKAGE_NAME=${PACKAGE_NAME:-"elasticsearch-oss"}
 }

 # Checks that all directories & files are correctly installed
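With the `:-` fallback above, `PACKAGE_NAME` defaults to `elasticsearch-oss` unless a caller exports it first. A hypothetical bash sketch of selecting the non-OSS tarball instead (assumes the working directory holds the tarball and the `version` file, as the helpers expect):

[source,bash]
--------------------------------------------------
# Hypothetical override: exercise the default (non-OSS) distribution instead.
# Assumes the tarball and the `version` file are in the current directory.
export PACKAGE_NAME=elasticsearch
install_archive /tmp/elasticsearch-test   # unpacks elasticsearch-<version>.tar.gz into $ESHOME
--------------------------------------------------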
@@ -37,8 +37,10 @@
 import org.elasticsearch.search.internal.ShardSearchTransportRequest;
 import org.elasticsearch.transport.Transport;

+import java.util.Collections;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.Executor;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
@@ -62,6 +64,7 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
     private final long clusterStateVersion;
     private final Map<String, AliasFilter> aliasFilter;
     private final Map<String, Float> concreteIndexBoosts;
+    private final Map<String, Set<String>> indexRoutings;
     private final SetOnce<AtomicArray<ShardSearchFailure>> shardFailures = new SetOnce<>();
     private final Object shardFailuresMutex = new Object();
     private final AtomicInteger successfulOps = new AtomicInteger();
@@ -72,6 +75,7 @@ abstract class AbstractSearchAsyncAction<Result extends SearchPhaseResult> exten
     protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportService searchTransportService,
                                         BiFunction<String, String, Transport.Connection> nodeIdToConnection,
                                         Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,
+                                        Map<String, Set<String>> indexRoutings,
                                         Executor executor, SearchRequest request,
                                         ActionListener<SearchResponse> listener, GroupShardsIterator<SearchShardIterator> shardsIts,
                                         TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion,
@@ -89,6 +93,7 @@ protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportS
         this.clusterStateVersion = clusterStateVersion;
         this.concreteIndexBoosts = concreteIndexBoosts;
         this.aliasFilter = aliasFilter;
+        this.indexRoutings = indexRoutings;
         this.results = resultConsumer;
         this.clusters = clusters;
     }
@@ -128,17 +133,17 @@ public final void executeNextPhase(SearchPhase currentPhase, SearchPhase nextPha
             onPhaseFailure(currentPhase, "all shards failed", cause);
         } else {
             Boolean allowPartialResults = request.allowPartialSearchResults();
-            assert allowPartialResults != null : "SearchRequest missing setting for allowPartialSearchResults"; 
+            assert allowPartialResults != null : "SearchRequest missing setting for allowPartialSearchResults";
             if (allowPartialResults == false && shardFailures.get() != null ){
                 if (logger.isDebugEnabled()) {
                     final ShardOperationFailedException[] shardSearchFailures = ExceptionsHelper.groupBy(buildShardFailures());
                     Throwable cause = shardSearchFailures.length == 0 ? null :
                         ElasticsearchException.guessRootCauses(shardSearchFailures[0].getCause())[0];
-                    logger.debug(() -> new ParameterizedMessage("{} shards failed for phase: [{}]", 
+                    logger.debug(() -> new ParameterizedMessage("{} shards failed for phase: [{}]",
                         shardSearchFailures.length, getName()), cause);
                 }
-                onPhaseFailure(currentPhase, "Partial shards failure", null); 
-            } else { 
+                onPhaseFailure(currentPhase, "Partial shards failure", null);
+            } else {
                 if (logger.isTraceEnabled()) {
                     final String resultsFrom = results.getSuccessfulResults()
                         .map(r -> r.getSearchShardTarget().toString()).collect(Collectors.joining(","));
@@ -271,14 +276,14 @@ public final SearchRequest getRequest() {

     @Override
     public final SearchResponse buildSearchResponse(InternalSearchResponse internalSearchResponse, String scrollId) {

         ShardSearchFailure[] failures = buildShardFailures();
         Boolean allowPartialResults = request.allowPartialSearchResults();
         assert allowPartialResults != null : "SearchRequest missing setting for allowPartialSearchResults";
         if (allowPartialResults == false && failures.length > 0){
-            raisePhaseFailure(new SearchPhaseExecutionException("", "Shard failures", null, failures)); 
-        } 
+            raisePhaseFailure(new SearchPhaseExecutionException("", "Shard failures", null, failures));
+        }

         return new SearchResponse(internalSearchResponse, scrollId, getNumShards(), successfulOps.get(),
             skippedOps.get(), buildTookInMillis(), failures, clusters);
     }
@@ -318,8 +323,11 @@ public final ShardSearchTransportRequest buildShardSearchRequest(SearchShardIter
         AliasFilter filter = aliasFilter.get(shardIt.shardId().getIndex().getUUID());
         assert filter != null;
         float indexBoost = concreteIndexBoosts.getOrDefault(shardIt.shardId().getIndex().getUUID(), DEFAULT_INDEX_BOOST);
+        String indexName = shardIt.shardId().getIndex().getName();
+        final String[] routings = indexRoutings.getOrDefault(indexName, Collections.emptySet())
+            .toArray(new String[0]);
         return new ShardSearchTransportRequest(shardIt.getOriginalIndices(), request, shardIt.shardId(), getNumShards(),
-            filter, indexBoost, timeProvider.getAbsoluteStartMillis(), clusterAlias);
+            filter, indexBoost, timeProvider.getAbsoluteStartMillis(), clusterAlias, routings);
     }
@@ -27,6 +27,7 @@
 import org.elasticsearch.transport.Transport;

 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.Executor;
 import java.util.function.BiFunction;
 import java.util.function.Function;
@@ -47,6 +48,7 @@ final class CanMatchPreFilterSearchPhase extends AbstractSearchAsyncAction<Searc
     CanMatchPreFilterSearchPhase(Logger logger, SearchTransportService searchTransportService,
                                  BiFunction<String, String, Transport.Connection> nodeIdToConnection,
                                  Map<String, AliasFilter> aliasFilter, Map<String, Float> concreteIndexBoosts,
+                                 Map<String, Set<String>> indexRoutings,
                                  Executor executor, SearchRequest request,
                                  ActionListener<SearchResponse> listener, GroupShardsIterator<SearchShardIterator> shardsIts,
                                  TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion,
@@ -56,9 +58,9 @@ final class CanMatchPreFilterSearchPhase extends AbstractSearchAsyncAction<Searc
          * We set max concurrent shard requests to the number of shards to otherwise avoid deep recursing that would occur if the local node
          * is the coordinating node for the query, holds all the shards for the request, and there are a lot of shards.
          */
-        super("can_match", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request,
-            listener, shardsIts, timeProvider, clusterStateVersion, task, new BitSetSearchPhaseResults(shardsIts.size()), shardsIts.size(),
-            clusters);
+        super("can_match", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, indexRoutings,
+            executor, request, listener, shardsIts, timeProvider, clusterStateVersion, task,
+            new BitSetSearchPhaseResults(shardsIts.size()), shardsIts.size(), clusters);
         this.phaseFactory = phaseFactory;
         this.shardsIts = shardsIts;
     }
@@ -131,7 +131,7 @@ public final void run() throws IOException {
         if (shardsIts.size() > 0) {
             int maxConcurrentShardRequests = Math.min(this.maxConcurrentShardRequests, shardsIts.size());
             final boolean success = shardExecutionIndex.compareAndSet(0, maxConcurrentShardRequests);
-            assert success; 
+            assert success;
             assert request.allowPartialSearchResults() != null : "SearchRequest missing setting for allowPartialSearchResults";
             if (request.allowPartialSearchResults() == false) {
                 final StringBuilder missingShards = new StringBuilder();
@@ -140,7 +140,7 @@ public final void run() throws IOException {
                     final SearchShardIterator shardRoutings = shardsIts.get(index);
                     if (shardRoutings.size() == 0) {
                         if(missingShards.length() >0 ){
-                            missingShards.append(", "); 
+                            missingShards.append(", ");
                         }
                         missingShards.append(shardRoutings.shardId());
                     }
@@ -28,6 +28,7 @@
 import org.elasticsearch.transport.Transport;

 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.Executor;
 import java.util.function.BiFunction;

@@ -37,11 +38,13 @@ final class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction

     SearchDfsQueryThenFetchAsyncAction(final Logger logger, final SearchTransportService searchTransportService,
             final BiFunction<String, String, Transport.Connection> nodeIdToConnection, final Map<String, AliasFilter> aliasFilter,
-            final Map<String, Float> concreteIndexBoosts, final SearchPhaseController searchPhaseController, final Executor executor,
+            final Map<String, Float> concreteIndexBoosts, final Map<String, Set<String>> indexRoutings,
+            final SearchPhaseController searchPhaseController, final Executor executor,
             final SearchRequest request, final ActionListener<SearchResponse> listener,
             final GroupShardsIterator<SearchShardIterator> shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider,
             final long clusterStateVersion, final SearchTask task, SearchResponse.Clusters clusters) {
-        super("dfs", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener,
+        super("dfs", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, indexRoutings,
+            executor, request, listener,
             shardsIts, timeProvider, clusterStateVersion, task, new ArraySearchPhaseResults<>(shardsIts.size()),
             request.getMaxConcurrentShardRequests(), clusters);
         this.searchPhaseController = searchPhaseController;