Commit 3c78802

Merge branch 'master' into translog-generation
* master:
  Docs: Fix language on a few snippets
  Painless: Fix regex lexer and error messages (elastic#23634)
  Skip 5.4 bwc test for new name for now
  Count through the primary in list of strings test
  Skip testing new name if it isn't known
  Wait for all shards in list of strings test
  Deprecate request_cache for clear-cache (elastic#23638)
jasontedor committed Mar 22, 2017
2 parents f59233c + 1c1b294 commit 3c78802
Showing 15 changed files with 131 additions and 52 deletions.
RestClearIndicesCacheAction.java

@@ -81,7 +81,7 @@ public static ClearIndicesCacheRequest fromRequest(final RestRequest request, Cl
if (Fields.QUERY.match(entry.getKey())) {
clearIndicesCacheRequest.queryCache(request.paramAsBoolean(entry.getKey(), clearIndicesCacheRequest.queryCache()));
}
- if (Fields.REQUEST_CACHE.match(entry.getKey())) {
+ if (Fields.REQUEST.match(entry.getKey())) {
clearIndicesCacheRequest.requestCache(request.paramAsBoolean(entry.getKey(), clearIndicesCacheRequest.requestCache()));
}
if (Fields.FIELD_DATA.match(entry.getKey())) {
@@ -100,7 +100,7 @@ public static ClearIndicesCacheRequest fromRequest(final RestRequest request, Cl

public static class Fields {
public static final ParseField QUERY = new ParseField("query", "filter", "filter_cache");
- public static final ParseField REQUEST_CACHE = new ParseField("request_cache");
+ public static final ParseField REQUEST = new ParseField("request", "request_cache");
public static final ParseField FIELD_DATA = new ParseField("field_data", "fielddata");
public static final ParseField RECYCLER = new ParseField("recycler");
public static final ParseField FIELDS = new ParseField("fields");
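The rename stays backwards compatible because `ParseField` carries deprecated alternatives alongside the primary name: matching a deprecated name still succeeds, but is reported through the deprecation machinery. A minimal self-contained sketch of that pattern (an illustration only — `SimpleParseField` is a made-up name, and the real `ParseField` routes the warning through Elasticsearch's deprecation logger):

import java.util.Arrays;
import java.util.List;

/** Stand-in for the ParseField pattern used above; not the real class. */
final class SimpleParseField {
    private final String name;
    private final List<String> deprecatedNames;

    SimpleParseField(String name, String... deprecatedNames) {
        this.name = name;
        this.deprecatedNames = Arrays.asList(deprecatedNames);
    }

    boolean match(String fieldName) {
        if (name.equals(fieldName)) {
            return true;
        }
        if (deprecatedNames.contains(fieldName)) {
            // The real implementation emits this through a deprecation logger.
            System.err.println("Deprecated field [" + fieldName + "] used, expected [" + name + "] instead");
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SimpleParseField request = new SimpleParseField("request", "request_cache");
        System.out.println(request.match("request"));       // true
        System.out.println(request.match("request_cache")); // true, plus a deprecation warning
        System.out.println(request.match("query"));         // false
    }
}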
RestClearIndicesCacheActionTests.java (new file)
@@ -0,0 +1,42 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/

package org.elasticsearch.rest.action.admin.indices;

import org.elasticsearch.action.admin.indices.cache.clear.ClearIndicesCacheRequest;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.rest.FakeRestRequest;

import java.util.HashMap;

import static org.hamcrest.Matchers.equalTo;

public class RestClearIndicesCacheActionTests extends ESTestCase {

public void testRequestCacheSet() throws Exception {
final HashMap<String, String> params = new HashMap<>();
params.put("request", "true");
final RestRequest restRequest = new FakeRestRequest.Builder(xContentRegistry())
.withParams(params).build();
ClearIndicesCacheRequest cacheRequest = new ClearIndicesCacheRequest();
cacheRequest = RestClearIndicesCacheAction.fromRequest(restRequest, cacheRequest);
assertThat(cacheRequest.requestCache(), equalTo(true));
}
}
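The new test only exercises the primary parameter name. A hypothetical companion test (not part of this commit) would exercise the deprecated name the same way, since the `ParseField` above accepts `request_cache` as a deprecated alternative:

// Hypothetical, not in this commit: the deprecated "request_cache" name
// should still populate the request (and would trip a deprecation warning).
public void testDeprecatedRequestCacheParameterSet() throws Exception {
    final HashMap<String, String> params = new HashMap<>();
    params.put("request_cache", "true");
    final RestRequest restRequest = new FakeRestRequest.Builder(xContentRegistry())
            .withParams(params).build();
    ClearIndicesCacheRequest cacheRequest = new ClearIndicesCacheRequest();
    cacheRequest = RestClearIndicesCacheAction.fromRequest(restRequest, cacheRequest);
    assertThat(cacheRequest.requestCache(), equalTo(true));
}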
2 changes: 0 additions & 2 deletions docs/build.gradle
@@ -80,8 +80,6 @@ buildRestTests.expectedUnconvertedCandidates = [
'reference/analysis/tokenfilters/stop-tokenfilter.asciidoc',
'reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc',
'reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc',
- 'reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc',
- 'reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc',
'reference/cat/snapshots.asciidoc',
'reference/cat/templates.asciidoc',
'reference/cat/thread_pool.asciidoc',
docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc

@@ -3,7 +3,7 @@

experimental[]

- The `synonym_graph` token filter allows to easily handle synonyms,
+ The `synonym_graph` token filter allows to easily handle synonyms,
including multi-word synonyms correctly during the analysis process.

In order to properly handle multi-word synonyms this token filter
@@ -13,8 +13,8 @@ http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html[Lu

["NOTE",id="synonym-graph-index-note"]
===============================
- This token filter is designed to be used as part of a search analyzer
- only. If you want to apply synonyms during indexing please use the
+ This token filter is designed to be used as part of a search analyzer
+ only. If you want to apply synonyms during indexing please use the
standard <<analysis-synonym-tokenfilter,synonym token filter>>.
===============================

@@ -45,8 +45,8 @@ Here is an example:

The above configures a `search_synonyms` filter, with a path of
`analysis/synonym.txt` (relative to the `config` location). The
- `search_synonyms` analyzer is then configured with the filter.
- Additional settings are: `ignore_case` (defaults to `false`), and
+ `search_synonyms` analyzer is then configured with the filter.
+ Additional settings are: `ignore_case` (defaults to `false`), and
`expand` (defaults to `true`).

The `tokenizer` parameter controls the tokenizers that will be used to
@@ -106,7 +106,7 @@ configuration file (note use of `synonyms` instead of `synonyms_path`):
"synonyms" : [
"lol, laughing out loud",
"universe, cosmos"
- ]
+ ]
}
}
}
docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc

@@ -75,7 +75,7 @@ Advance settings include:
A custom type mapping table, for example (when configured
using `type_table_path`):

- [source,js]
+ [source,type_table]
--------------------------------------------------
# Map the $, %, '.', and ',' characters to DIGIT
# This might be useful for financial data.
@@ -94,4 +94,3 @@ NOTE: Using a tokenizer like the `standard` tokenizer may interfere with
the `catenate_*` and `preserve_original` parameters, as the original
string may already have lost punctuation during tokenization. Instead,
you may want to use the `whitespace` tokenizer.

docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc

@@ -64,7 +64,7 @@ Advance settings include:
A custom type mapping table, for example (when configured
using `type_table_path`):

- [source,js]
+ [source,type_table]
--------------------------------------------------
# Map the $, %, '.', and ',' characters to DIGIT
# This might be useful for financial data.
@@ -83,4 +83,3 @@ NOTE: Using a tokenizer like the `standard` tokenizer may interfere with
the `catenate_*` and `preserve_original` parameters, as the original
string may already have lost punctuation during tokenization. Instead,
you may want to use the `whitespace` tokenizer.

2 changes: 1 addition & 1 deletion docs/reference/modules/indices/request_cache.asciidoc
@@ -42,7 +42,7 @@ The cache can be expired manually with the <<indices-clearcache,`clear-cache` AP

[source,js]
------------------------
- POST /kimchy,elasticsearch/_cache/clear?request_cache=true
+ POST /kimchy,elasticsearch/_cache/clear?request=true
------------------------
// CONSOLE
// TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]
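The same call from the low-level Java REST client just swaps the query parameter; a sketch, assuming the 5.x `RestClient#performRequest(String, String, Map)` signature:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.util.Collections;

public class ClearRequestCacheExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // "request" is the new parameter name; "request_cache" still works
            // but now draws a deprecation warning.
            Response response = client.performRequest(
                    "POST",
                    "/kimchy,elasticsearch/_cache/clear",
                    Collections.singletonMap("request", "true"));
            System.out.println(response.getStatusLine());
        }
    }
}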
2 changes: 1 addition & 1 deletion docs/reference/setup/sysconfig/swap.asciidoc
@@ -33,7 +33,7 @@ After starting Elasticsearch, you can see whether this setting was applied
successfully by checking the value of `mlockall` in the output from this
request:

- [source,sh]
+ [source,js]
--------------
GET _nodes?filter_path=**.mlockall
--------------
2 changes: 1 addition & 1 deletion modules/lang-painless/src/main/antlr/PainlessLexer.g4
@@ -120,7 +120,7 @@ INTEGER: ( '0' | [1-9] [0-9]* ) [lLfFdD]?;
DECIMAL: ( '0' | [1-9] [0-9]* ) (DOT [0-9]+)? ( [eE] [+\-]? [0-9]+ )? [fFdD]?;

STRING: ( '"' ( '\\"' | '\\\\' | ~[\\"] )*? '"' ) | ( '\'' ( '\\\'' | '\\\\' | ~[\\'] )*? '\'' );
- REGEX: '/' ( ~('/' | '\n') | '\\' ~'\n' )+ '/' [cilmsUux]* { slashIsRegex() }?;
+ REGEX: '/' ( '\\' ~'\n' | ~('/' | '\n') )+? '/' [cilmsUux]* { slashIsRegex() }?;

TRUE: 'true';
FALSE: 'false';
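Two things changed in the rule: the alternatives are reordered so the escape branch `'\\' ~'\n'` is tried before a bare character can match, and the greedy `+` becomes the reluctant `+?`, so the lexer closes the regex literal at the first unescaped `/` instead of running to the last one on the line. The greedy-versus-reluctant difference is the same one `java.util.regex` exhibits; a plain-Java illustration (not the ANTLR machinery itself):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GreedyVsReluctant {
    public static void main(String[] args) {
        String input = "/abc/ and /def/";

        // Greedy: ".+" runs to the end, then backtracks only to the LAST '/'.
        Matcher greedy = Pattern.compile("/.+/").matcher(input);
        if (greedy.find()) {
            System.out.println(greedy.group()); // "/abc/ and /def/"
        }

        // Reluctant: ".+?" stops at the FIRST closing '/'.
        Matcher reluctant = Pattern.compile("/.+?/").matcher(input);
        if (reluctant.find()) {
            System.out.println(reluctant.group()); // "/abc/"
        }
    }
}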
PainlessLexer.java (ANTLR-generated)
@@ -1,7 +1,5 @@
// ANTLR GENERATED CODE: DO NOT EDIT
package org.elasticsearch.painless.antlr;


import org.antlr.v4.runtime.Lexer;
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.Token;
@@ -211,29 +209,29 @@ private boolean TYPE_sempred(RuleContext _localctx, int predIndex) {
"\3N\3N\7N\u021a\nN\fN\16N\u021d\13N\3N\3N\3O\3O\3O\3O\3O\3P\3P\3P\3P\3"+
"P\3P\3Q\3Q\3Q\3Q\3Q\3R\3R\3R\3R\7R\u0235\nR\fR\16R\u0238\13R\3R\3R\3S"+
"\3S\7S\u023e\nS\fS\16S\u0241\13S\3T\3T\3T\7T\u0246\nT\fT\16T\u0249\13"+
"T\5T\u024b\nT\3T\3T\3U\3U\7U\u0251\nU\fU\16U\u0254\13U\3U\3U\6\u00b9\u00c3"+
"\u01fd\u0209\2V\4\3\6\4\b\5\n\6\f\7\16\b\20\t\22\n\24\13\26\f\30\r\32"+
"\16\34\17\36\20 \21\"\22$\23&\24(\25*\26,\27.\30\60\31\62\32\64\33\66"+
"\348\35:\36<\37> @!B\"D#F$H%J&L\'N(P)R*T+V,X-Z.\\/^\60`\61b\62d\63f\64"+
"h\65j\66l\67n8p9r:t;v<x=z>|?~@\u0080A\u0082B\u0084C\u0086D\u0088E\u008a"+
"F\u008cG\u008eH\u0090I\u0092J\u0094K\u0096L\u0098M\u009aN\u009cO\u009e"+
"P\u00a0Q\u00a2R\u00a4S\u00a6T\u00a8U\u00aaV\4\2\3\25\5\2\13\f\17\17\""+
"\"\4\2\f\f\17\17\3\2\629\4\2NNnn\4\2ZZzz\5\2\62;CHch\3\2\63;\3\2\62;\b"+
"\2FFHHNNffhhnn\4\2GGgg\4\2--//\6\2FFHHffhh\4\2$$^^\4\2))^^\4\2\f\f\61"+
"\61\3\2\f\f\t\2WWeekknouuwwzz\5\2C\\aac|\6\2\62;C\\aac|\u0277\2\4\3\2"+
"\2\2\2\6\3\2\2\2\2\b\3\2\2\2\2\n\3\2\2\2\2\f\3\2\2\2\2\16\3\2\2\2\2\20"+
"\3\2\2\2\2\22\3\2\2\2\2\24\3\2\2\2\2\26\3\2\2\2\2\30\3\2\2\2\2\32\3\2"+
"\2\2\2\34\3\2\2\2\2\36\3\2\2\2\2 \3\2\2\2\2\"\3\2\2\2\2$\3\2\2\2\2&\3"+
"\2\2\2\2(\3\2\2\2\2*\3\2\2\2\2,\3\2\2\2\2.\3\2\2\2\2\60\3\2\2\2\2\62\3"+
"\2\2\2\2\64\3\2\2\2\2\66\3\2\2\2\28\3\2\2\2\2:\3\2\2\2\2<\3\2\2\2\2>\3"+
"\2\2\2\2@\3\2\2\2\2B\3\2\2\2\2D\3\2\2\2\2F\3\2\2\2\2H\3\2\2\2\2J\3\2\2"+
"\2\2L\3\2\2\2\2N\3\2\2\2\2P\3\2\2\2\2R\3\2\2\2\2T\3\2\2\2\2V\3\2\2\2\2"+
"X\3\2\2\2\2Z\3\2\2\2\2\\\3\2\2\2\2^\3\2\2\2\2`\3\2\2\2\2b\3\2\2\2\2d\3"+
"\2\2\2\2f\3\2\2\2\2h\3\2\2\2\2j\3\2\2\2\2l\3\2\2\2\2n\3\2\2\2\2p\3\2\2"+
"\2\2r\3\2\2\2\2t\3\2\2\2\2v\3\2\2\2\2x\3\2\2\2\2z\3\2\2\2\2|\3\2\2\2\2"+
"~\3\2\2\2\2\u0080\3\2\2\2\2\u0082\3\2\2\2\2\u0084\3\2\2\2\2\u0086\3\2"+
"\2\2\2\u0088\3\2\2\2\2\u008a\3\2\2\2\2\u008c\3\2\2\2\2\u008e\3\2\2\2\2"+
"\u0090\3\2\2\2\2\u0092\3\2\2\2\2\u0094\3\2\2\2\2\u0096\3\2\2\2\2\u0098"+
"T\5T\u024b\nT\3T\3T\3U\3U\7U\u0251\nU\fU\16U\u0254\13U\3U\3U\7\u00b9\u00c3"+
"\u01fd\u0209\u0215\2V\4\3\6\4\b\5\n\6\f\7\16\b\20\t\22\n\24\13\26\f\30"+
"\r\32\16\34\17\36\20 \21\"\22$\23&\24(\25*\26,\27.\30\60\31\62\32\64\33"+
"\66\348\35:\36<\37> @!B\"D#F$H%J&L\'N(P)R*T+V,X-Z.\\/^\60`\61b\62d\63"+
"f\64h\65j\66l\67n8p9r:t;v<x=z>|?~@\u0080A\u0082B\u0084C\u0086D\u0088E"+
"\u008aF\u008cG\u008eH\u0090I\u0092J\u0094K\u0096L\u0098M\u009aN\u009c"+
"O\u009eP\u00a0Q\u00a2R\u00a4S\u00a6T\u00a8U\u00aaV\4\2\3\25\5\2\13\f\17"+
"\17\"\"\4\2\f\f\17\17\3\2\629\4\2NNnn\4\2ZZzz\5\2\62;CHch\3\2\63;\3\2"+
"\62;\b\2FFHHNNffhhnn\4\2GGgg\4\2--//\6\2FFHHffhh\4\2$$^^\4\2))^^\3\2\f"+
"\f\4\2\f\f\61\61\t\2WWeekknouuwwzz\5\2C\\aac|\6\2\62;C\\aac|\u0277\2\4"+
"\3\2\2\2\2\6\3\2\2\2\2\b\3\2\2\2\2\n\3\2\2\2\2\f\3\2\2\2\2\16\3\2\2\2"+
"\2\20\3\2\2\2\2\22\3\2\2\2\2\24\3\2\2\2\2\26\3\2\2\2\2\30\3\2\2\2\2\32"+
"\3\2\2\2\2\34\3\2\2\2\2\36\3\2\2\2\2 \3\2\2\2\2\"\3\2\2\2\2$\3\2\2\2\2"+
"&\3\2\2\2\2(\3\2\2\2\2*\3\2\2\2\2,\3\2\2\2\2.\3\2\2\2\2\60\3\2\2\2\2\62"+
"\3\2\2\2\2\64\3\2\2\2\2\66\3\2\2\2\28\3\2\2\2\2:\3\2\2\2\2<\3\2\2\2\2"+
">\3\2\2\2\2@\3\2\2\2\2B\3\2\2\2\2D\3\2\2\2\2F\3\2\2\2\2H\3\2\2\2\2J\3"+
"\2\2\2\2L\3\2\2\2\2N\3\2\2\2\2P\3\2\2\2\2R\3\2\2\2\2T\3\2\2\2\2V\3\2\2"+
"\2\2X\3\2\2\2\2Z\3\2\2\2\2\\\3\2\2\2\2^\3\2\2\2\2`\3\2\2\2\2b\3\2\2\2"+
"\2d\3\2\2\2\2f\3\2\2\2\2h\3\2\2\2\2j\3\2\2\2\2l\3\2\2\2\2n\3\2\2\2\2p"+
"\3\2\2\2\2r\3\2\2\2\2t\3\2\2\2\2v\3\2\2\2\2x\3\2\2\2\2z\3\2\2\2\2|\3\2"+
"\2\2\2~\3\2\2\2\2\u0080\3\2\2\2\2\u0082\3\2\2\2\2\u0084\3\2\2\2\2\u0086"+
"\3\2\2\2\2\u0088\3\2\2\2\2\u008a\3\2\2\2\2\u008c\3\2\2\2\2\u008e\3\2\2"+
"\2\2\u0090\3\2\2\2\2\u0092\3\2\2\2\2\u0094\3\2\2\2\2\u0096\3\2\2\2\2\u0098"+
"\3\2\2\2\2\u009a\3\2\2\2\2\u009c\3\2\2\2\2\u009e\3\2\2\2\2\u00a0\3\2\2"+
"\2\2\u00a2\3\2\2\2\2\u00a4\3\2\2\2\2\u00a6\3\2\2\2\3\u00a8\3\2\2\2\3\u00aa"+
"\3\2\2\2\4\u00ad\3\2\2\2\6\u00c8\3\2\2\2\b\u00cc\3\2\2\2\n\u00ce\3\2\2"+
@@ -358,9 +356,9 @@ private boolean TYPE_sempred(RuleContext _localctx, int predIndex) {
"\3\2\2\2\u0207\u0206\3\2\2\2\u0208\u020b\3\2\2\2\u0209\u020a\3\2\2\2\u0209"+
"\u0207\3\2\2\2\u020a\u020c\3\2\2\2\u020b\u0209\3\2\2\2\u020c\u020e\7)"+
"\2\2\u020d\u01f5\3\2\2\2\u020d\u0201\3\2\2\2\u020e\u009b\3\2\2\2\u020f"+
"\u0213\7\61\2\2\u0210\u0214\n\20\2\2\u0211\u0212\7^\2\2\u0212\u0214\n"+
"\21\2\2\u0213\u0210\3\2\2\2\u0213\u0211\3\2\2\2\u0214\u0215\3\2\2\2\u0215"+
"\u0213\3\2\2\2\u0215\u0216\3\2\2\2\u0216\u0217\3\2\2\2\u0217\u021b\7\61"+
"\u0213\7\61\2\2\u0210\u0211\7^\2\2\u0211\u0214\n\20\2\2\u0212\u0214\n"+
"\21\2\2\u0213\u0210\3\2\2\2\u0213\u0212\3\2\2\2\u0214\u0215\3\2\2\2\u0215"+
"\u0216\3\2\2\2\u0215\u0213\3\2\2\2\u0216\u0217\3\2\2\2\u0217\u021b\7\61"+
"\2\2\u0218\u021a\t\22\2\2\u0219\u0218\3\2\2\2\u021a\u021d\3\2\2\2\u021b"+
"\u0219\3\2\2\2\u021b\u021c\3\2\2\2\u021c\u021e\3\2\2\2\u021d\u021b\3\2"+
"\2\2\u021e\u021f\6N\3\2\u021f\u009d\3\2\2\2\u0220\u0221\7v\2\2\u0221\u0222"+
ERegex.java

@@ -68,8 +68,9 @@ void analyze(Locals locals) {

try {
Pattern.compile(pattern, flags);
- } catch (PatternSyntaxException exception) {
- throw createError(exception);
+ } catch (PatternSyntaxException e) {
+ throw new Location(location.getSourceName(), location.getOffset() + 1 + e.getIndex()).createError(
+ new IllegalArgumentException("Error compiling regex: " + e.getDescription()));
}

constant = new Constant(location, Definition.PATTERN_TYPE.type, "regexAt$" + location.getOffset(), this::initializeConstant);
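`PatternSyntaxException` carries both a human-readable description and the character offset of the failure within the pattern, which is what lets the new code rebuild an exact script location: `location.getOffset() + 1 + e.getIndex()` shifts the in-pattern offset past the opening `/` of the regex literal into script coordinates. What the exception exposes, in plain Java:

import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class PatternErrorInfo {
    public static void main(String[] args) {
        try {
            Pattern.compile("\\ujjjj"); // invalid unicode escape, as in testBadRegexPattern
        } catch (PatternSyntaxException e) {
            System.out.println(e.getDescription()); // "Illegal Unicode escape sequence"
            System.out.println(e.getIndex());       // offset of the failure within the pattern
        }
    }
}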
RegexTests.java

@@ -20,6 +20,7 @@
package org.elasticsearch.painless;

import org.elasticsearch.common.settings.Settings;
+ import org.elasticsearch.script.ScriptException;

import java.nio.CharBuffer;
import java.util.Arrays;
@@ -44,8 +45,17 @@ public void testPatternAfterReturn() {
assertEquals(false, exec("return 'bar' ==~ /foo/"));
}

- public void testSlashesEscapePattern() {
- assertEquals(true, exec("return '//' ==~ /\\/\\//"));
+ public void testBackslashEscapesForwardSlash() {
+ assertEquals(true, exec("'//' ==~ /\\/\\//"));
}

+ public void testBackslashEscapeBackslash() {
+ // Both of these are single backslashes but java escaping + Painless escaping....
+ assertEquals(true, exec("'\\\\' ==~ /\\\\/"));
+ }
+
+ public void testRegexIsNonGreedy() {
+ assertEquals(true, exec("def s = /\\\\/.split('.\\\\.'); return s[1] ==~ /\\./"));
+ }

public void testPatternAfterAssignment() {
@@ -248,11 +258,14 @@ public void testCantUsePatternCompile() {
}

public void testBadRegexPattern() {
- PatternSyntaxException e = expectScriptThrows(PatternSyntaxException.class, () -> {
+ ScriptException e = expectThrows(ScriptException.class, () -> {
exec("/\\ujjjj/"); // Invalid unicode
});
assertThat(e.getMessage(), containsString("Illegal Unicode escape sequence near index 2"));
assertThat(e.getMessage(), containsString("\\ujjjj"));
assertEquals("Error compiling regex: Illegal Unicode escape sequence", e.getCause().getMessage());

// And make sure the location of the error points to the offset inside the pattern
assertEquals("/\\ujjjj/", e.getScriptStack().get(0));
assertEquals(" ^---- HERE", e.getScriptStack().get(1));
}

public void testRegexAgainstNumber() {
rest-api-spec/src/main/resources/rest-api-spec/api/indices.clear_cache.json

@@ -50,6 +50,10 @@
"type" : "boolean",
"description" : "Clear the recycler cache"
},
"request_cache": {
"type" : "boolean",
"description" : "Clear request cache"
},
"request": {
"type" : "boolean",
"description" : "Clear request cache"
REST test YAML (the "list of strings" test)

@@ -11,6 +11,8 @@

- do:
count:
+ # we count through the primary in case there is a replica that has not yet fully recovered
+ preference: _primary
index: test_index

- match: {count: 2}
indices.clear_cache REST test (YAML)

@@ -2,3 +2,26 @@
"clear_cache test":
- do:
indices.clear_cache: {}

+ ---
+ "clear_cache with request set to false":
+ - skip:
+ version: " - 5.4.99"
+ reason: this name was added in 5.4 - temporarilly skipping 5.4 until snapshot is finished
+
+ - do:
+ indices.clear_cache:
+ request: false
+
+ ---
+ "clear_cache with request_cache set to false":
+ - skip:
+ version: " - 5.4.99"
+ reason: request_cache was deprecated in 5.4.0 - temporarilly skipping 5.4 until snapshot is finished
+ features: "warnings"
+
+ - do:
+ warnings:
+ - 'Deprecated field [request_cache] used, expected [request] instead'
+ indices.clear_cache:
+ request_cache: false
