
Adding standard analyzer docs #8528

Merged
96 changes: 96 additions & 0 deletions _analyzers/standard.md
@@ -0,0 +1,96 @@
---
layout: default
title: Standard analyzer
nav_order: 40
---

# Standard analyzer

The `standard` analyzer is the default analyzer used when no other analyzer is specified. It provides a basic, efficient approach to generic text processing.

This analyzer consists of the following tokenizer and token filters (a quick test request follows the list):

**Collaborator:** Line 13: I believe we had been using "split text at" (as opposed to "on"), but "on" started appearing in recent docs. Please ensure global consistency.

**Collaborator (reply):** Will address this as a separate PR globally.

- `standard` tokenizer: Removes most punctuation and splits text on spaces and other common delimiters.

**Collaborator:** Suggested change: "splits text at spaces and other common delimiters" instead of "splits text on spaces and other common delimiters".

**Collaborator (reply):** Will consistently use "on".


- `lowercase` token filter: Converts all tokens to lowercase, ensuring case-insensitive matching.
- `stop` token filter: Removes common stopwords, such as "the", "is", and "and", from the tokenized output.
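
You can try the analyzer directly with the `_analyze` API, without creating an index. The following request is an illustrative sketch (the sample text is arbitrary and the request is not part of this PR's diff):

```json
POST /_analyze
{
  "analyzer": "standard",
  "text": "The QUICK brown fox!"
}
```
{% include copy-curl.html %}

With default settings this should return the lowercased tokens `the`, `quick`, `brown`, and `fox`: the tokenizer strips the punctuation, and `the` is kept because `stopwords` defaults to `_none_` (see the Parameters table below).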

## Example

Use the following command to create an index named `my_standard_index` in which the `my_field` field uses the `standard` analyzer:

```json
PUT /my_standard_index
{
"mappings": {
"properties": {
"my_field": {
"type": "text",
"analyzer": "standard"
}
}
}
}
```
{% include copy-curl.html %}
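
To see how text indexed into `my_field` will be tokenized, you can also pass the field name to the `_analyze` API. This follow-up request is illustrative and not part of the PR's diff:

```json
POST /my_standard_index/_analyze
{
  "field": "my_field",
  "text": "The slow turtle swims away"
}
```
{% include copy-curl.html %}

Because `my_field` is mapped with the `standard` analyzer, this is equivalent to passing `"analyzer": "standard"` directly.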

## Parameters

You can configure a `standard` analyzer with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`max_token_length` | Optional | Integer | Sets the maximum length of the produced token. If this length is exceeded, the token is split into multiple tokens at the length configured in `max_token_length`. Default is `255`.
`stopwords` | Optional | String or list of strings | A string specifying a predefined list of stopwords (such as `_english_`) or an array specifying a custom list of stopwords. Default is `_none_`.
`stopwords_path` | Optional | String | The path (absolute or relative to the config directory) to the file containing a list of stopwords.
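
For example, the following request configures a `standard` analyzer with a shorter maximum token length and the predefined English stopword list. This is an illustrative sketch; the index name `my_parameterized_index` and analyzer name `my_standard_analyzer` are placeholders, not part of this PR's diff:

```json
PUT /my_parameterized_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_standard_analyzer": {
          "type": "standard",
          "max_token_length": 5,
          "stopwords": "_english_"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

With `max_token_length` set to `5`, a longer token such as `turtles` would be split into `turtl` and `es`, and common English stopwords such as `the` would be removed.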


## Configuring a custom analyzer

Use the following command to configure an index with a custom analyzer that is equivalent to the `standard` analyzer:

```json
PUT /my_custom_index
{
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"stop"
]
}
}
}
}
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /my_custom_index/_analyze
{
"analyzer": "my_custom_analyzer",
"text": "The slow turtle swims away"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
"tokens": [
{"token": "slow","start_offset": 4,"end_offset": 8,"type": "<ALPHANUM>","position": 1},
{"token": "turtle","start_offset": 9,"end_offset": 15,"type": "<ALPHANUM>","position": 2},
{"token": "swims","start_offset": 16,"end_offset": 21,"type": "<ALPHANUM>","position": 3},
{"token": "away","start_offset": 22,"end_offset": 26,"type": "<ALPHANUM>","position": 4}
]
}
```
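
Note that `The` does not appear in the output because the `stop` token filter removed it. This is also why the first token, `slow`, has `position` `1` rather than `0`: the removed stopword still occupies position `0`.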