Remove include_type_name in asciidoc where possible (#37568)
The "include_type_name" parameter was temporarily introduced in #37285 to facilitate
moving the default setting to "false" in many of the documentation code snippets.
Most of these places can simply be reverted without causing errors. In this change I
looked for asciidoc files that added "include_type_name=true" when creating new
indices but didn't look like they made use of the "_doc" type for mappings. This is
mostly the case e.g. in the analysis docs, where index creation often only contains
settings. I manually corrected the use of types in the places where the docs still
used an explicit type name rather than the dummy "_doc" type.
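In console-snippet form, the recurring edit this commit applies looks like the sketch below (schematic only; `my_index`, `my_type`, and `title` are placeholder names, not taken from the diff):

```js
# Before: the snippet opted back into typed mappings
PUT my_index?include_type_name=true
{
  "mappings": {
    "my_type": {
      "properties": {
        "title": { "type": "text" }
      }
    }
  }
}

# After: typeless mapping, matching the new default
PUT my_index
{
  "mappings": {
    "properties": {
      "title": { "type": "text" }
    }
  }
}
```

Document and bulk requests change along the same lines: typed paths such as `POST /issues/issue/0?refresh` become `POST /issues/_doc/0?refresh`, and `POST /reports/report/_bulk?refresh` becomes `POST /reports/_bulk?refresh`.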
Christoph Büscher authored Jan 18, 2019
1 parent 2f0e0b2 commit 25aac4f
Showing 71 changed files with 252 additions and 272 deletions.
26 changes: 12 additions & 14 deletions docs/painless/painless-contexts/painless-context-examples.asciidoc
@@ -43,22 +43,20 @@ the request URL.
+
[source,js]
----
-PUT /seats?include_type_name=true
+PUT /seats
{
  "mappings": {
-    "seat": {
-      "properties": {
-        "theatre": { "type": "keyword" },
-        "play": { "type": "text" },
-        "actors": { "type": "text" },
-        "row": { "type": "integer" },
-        "number": { "type": "integer" },
-        "cost": { "type": "double" },
-        "sold": { "type": "boolean" },
-        "datetime": { "type": "date" },
-        "date": { "type": "keyword" },
-        "time": { "type": "keyword" }
-      }
-    }
+    "properties": {
+      "theatre": { "type": "keyword" },
+      "play": { "type": "text" },
+      "actors": { "type": "text" },
+      "row": { "type": "integer" },
+      "number": { "type": "integer" },
+      "cost": { "type": "double" },
+      "sold": { "type": "boolean" },
+      "datetime": { "type": "date" },
+      "date": { "type": "keyword" },
+      "time": { "type": "keyword" }
+    }
  }
}
14 changes: 7 additions & 7 deletions docs/plugins/analysis-kuromoji.asciidoc
@@ -124,7 +124,7 @@ Then create an analyzer as follows:

[source,js]
--------------------------------------------------
-PUT kuromoji_sample?include_type_name=true
+PUT kuromoji_sample
{
"settings": {
"index": {
@@ -186,7 +186,7 @@ BaseFormAttribute. This acts as a lemmatizer for verbs and adjectives. Example:

[source,js]
--------------------------------------------------
-PUT kuromoji_sample?include_type_name=true
+PUT kuromoji_sample
{
"settings": {
"index": {
@@ -243,7 +243,7 @@ For example:

[source,js]
--------------------------------------------------
-PUT kuromoji_sample?include_type_name=true
+PUT kuromoji_sample
{
"settings": {
"index": {
@@ -317,7 +317,7 @@ katakana reading form:

[source,js]
--------------------------------------------------
-PUT kuromoji_sample?include_type_name=true
+PUT kuromoji_sample
{
"settings": {
"index":{
@@ -381,7 +381,7 @@ This token filter accepts the following setting:

[source,js]
--------------------------------------------------
-PUT kuromoji_sample?include_type_name=true
+PUT kuromoji_sample
{
"settings": {
"index": {
@@ -434,7 +434,7 @@ predefined list, then use the

[source,js]
--------------------------------------------------
-PUT kuromoji_sample?include_type_name=true
+PUT kuromoji_sample
{
"settings": {
"index": {
@@ -493,7 +493,7 @@ to regular Arabic decimal numbers in half-width characters. For example:

[source,js]
--------------------------------------------------
-PUT kuromoji_sample?include_type_name=true
+PUT kuromoji_sample
{
"settings": {
"index": {
8 changes: 4 additions & 4 deletions docs/plugins/analysis-nori.asciidoc
@@ -90,7 +90,7 @@ Then create an analyzer as follows:

[source,js]
--------------------------------------------------
-PUT nori_sample?include_type_name=true
+PUT nori_sample
{
"settings": {
"index": {
@@ -164,7 +164,7 @@ the `user_dictionary_rules` option:

[source,js]
--------------------------------------------------
-PUT nori_sample?include_type_name=true
+PUT nori_sample
{
"settings": {
"index": {
@@ -332,7 +332,7 @@ For example:

[source,js]
--------------------------------------------------
-PUT nori_sample?include_type_name=true
+PUT nori_sample
{
"settings": {
"index": {
@@ -398,7 +398,7 @@ The `nori_readingform` token filter rewrites tokens written in Hanja to their Ha

[source,js]
--------------------------------------------------
-PUT nori_sample?include_type_name=true
+PUT nori_sample
{
"settings": {
"index":{
2 changes: 1 addition & 1 deletion docs/plugins/analysis-phonetic.asciidoc
@@ -29,7 +29,7 @@ The `phonetic` token filter takes the following settings:

[source,js]
--------------------------------------------------
-PUT phonetic_sample?include_type_name=true
+PUT phonetic_sample
{
"settings": {
"index": {
2 changes: 1 addition & 1 deletion docs/plugins/store-smb.asciidoc
@@ -46,7 +46,7 @@ It can also be set on a per-index basis at index creation time:

[source,js]
----
-PUT my_index?include_type_name=true
+PUT my_index
{
"settings": {
"index.store.type": "smb_mmap_fs"
@@ -8,10 +8,9 @@ price for the product. The mapping could look like:

[source,js]
--------------------------------------------------
-PUT /index?include_type_name=true
+PUT /index
{
-  "mappings": {
-    "product" : {
+  "mappings": {
    "properties" : {
      "resellers" : { <1>
        "type" : "nested",
@@ -22,7 +21,6 @@ PUT /index?include_type_name=true
      }
    }
  }
-  }
}
--------------------------------------------------
// CONSOLE
@@ -52,7 +50,7 @@ GET /_search
--------------------------------------------------
// CONSOLE
// TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/]
-// TEST[s/^/PUT index\/product\/0\?refresh\n{"name":"led", "resellers": [{"name": "foo", "price": 350.00}, {"name": "bar", "price": 500.00}]}\n/]
+// TEST[s/^/PUT index\/_doc\/0\?refresh\n{"name":"led", "resellers": [{"name": "foo", "price": 350.00}, {"name": "bar", "price": 500.00}]}\n/]

As you can see above, the nested aggregation requires the `path` of the nested documents within the top level documents.
Then one can define any type of aggregation over these nested documents.
@@ -17,21 +17,19 @@ the issue documents as nested documents. The mapping could look like:

[source,js]
--------------------------------------------------
-PUT /issues?include_type_name=true
+PUT /issues
{
  "mappings": {
-    "issue" : {
-      "properties" : {
-        "tags" : { "type" : "keyword" },
-        "comments" : { <1>
-          "type" : "nested",
-          "properties" : {
-            "username" : { "type" : "keyword" },
-            "comment" : { "type" : "text" }
-          }
-        }
-      }
-    }
+    "properties" : {
+      "tags" : { "type" : "keyword" },
+      "comments" : { <1>
+        "type" : "nested",
+        "properties" : {
+          "username" : { "type" : "keyword" },
+          "comment" : { "type" : "text" }
+        }
+      }
+    }
  }
}
--------------------------------------------------
@@ -45,7 +43,7 @@ tags of the issues the user has commented on:
[source,js]
--------------------------------------------------
-POST /issues/issue/0?refresh
+POST /issues/_doc/0?refresh
{"tags": ["tag_1"], "comments": [{"username": "username_1"}]}
--------------------------------------------------
// CONSOLE
@@ -19,23 +19,21 @@ that is significant and probably very relevant to their search. 5/10,000,000 vs
[source,js]
--------------------------------------------------
-PUT /reports?include_type_name=true
+PUT /reports
{
  "mappings": {
-    "report": {
-      "properties": {
-        "force": {
-          "type": "keyword"
-        },
-        "crime_type": {
-          "type": "keyword"
-        }
-      }
-    }
+    "properties": {
+      "force": {
+        "type": "keyword"
+      },
+      "crime_type": {
+        "type": "keyword"
+      }
+    }
  }
}
-POST /reports/report/_bulk?refresh
+POST /reports/_bulk?refresh
{"index":{"_id":0}}
{"force": "British Transport Police", "crime_type": "Bicycle theft"}
{"index":{"_id":1}}
18 changes: 8 additions & 10 deletions docs/reference/aggregations/bucket/terms-aggregation.asciidoc
@@ -7,23 +7,21 @@ A multi-bucket value source based aggregation where buckets are dynamically buil
[source,js]
--------------------------------------------------
-PUT /products?include_type_name=true
+PUT /products
{
  "mappings": {
-    "product": {
-      "properties": {
-        "genre": {
-          "type": "keyword"
-        },
-        "product": {
-          "type": "keyword"
-        }
-      }
-    }
+    "properties": {
+      "genre": {
+        "type": "keyword"
+      },
+      "product": {
+        "type": "keyword"
+      }
+    }
  }
}
-POST /products/product/_bulk?refresh
+POST /products/_bulk?refresh
{"index":{"_id":0}}
{"genre": "rock", "product": "Product A"}
{"index":{"_id":1}}
4 changes: 2 additions & 2 deletions docs/reference/analysis/analyzers/custom-analyzer.asciidoc
@@ -53,7 +53,7 @@ Token Filters::

[source,js]
--------------------------------
-PUT my_index?include_type_name=true
+PUT my_index
{
"settings": {
"analysis": {
@@ -157,7 +157,7 @@ Here is an example:

[source,js]
--------------------------------------------------
-PUT my_index?include_type_name=true
+PUT my_index
{
"settings": {
"analysis": {
@@ -86,7 +86,7 @@ pre-defined list of English stop words:

[source,js]
----------------------------
-PUT my_index?include_type_name=true
+PUT my_index
{
"settings": {
"analysis": {
@@ -158,7 +158,7 @@ customization:

[source,js]
----------------------------------------------------
-PUT /fingerprint_example?include_type_name=true
+PUT /fingerprint_example
{
"settings": {
"analysis": {
@@ -68,7 +68,7 @@ for further customization:

[source,js]
----------------------------------------------------
-PUT /keyword_example?include_type_name=true
+PUT /keyword_example
{
"settings": {
"analysis": {