Adding standard analyzer docs (#8528)
* adding standard analyzer docs

Signed-off-by: Anton Rubin <anton.rubin@eliatra.com>

* adding further details

Signed-off-by: Anton Rubin <anton.rubin@eliatra.com>

* adding more examples

Signed-off-by: Anton Rubin <anton.rubin@eliatra.com>

* Doc review

Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>

* Apply suggestions from code review

Co-authored-by: Nathan Bower <nbower@amazon.com>
Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>

---------

Signed-off-by: Anton Rubin <anton.rubin@eliatra.com>
Signed-off-by: Fanit Kolchina <kolchfa@amazon.com>
Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Fanit Kolchina <kolchfa@amazon.com>
Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Co-authored-by: Nathan Bower <nbower@amazon.com>
4 people authored Dec 10, 2024
1 parent acb1e9f commit b81500f
Showing 1 changed file with 96 additions and 0 deletions.
96 changes: 96 additions & 0 deletions _analyzers/standard.md
@@ -0,0 +1,96 @@
---
layout: default
title: Standard analyzer
nav_order: 40
---

# Standard analyzer

The `standard` analyzer is the default analyzer used when no other analyzer is specified. It is designed to provide a basic and efficient approach to generic text processing.

This analyzer consists of the following tokenizer and token filters:

- `standard` tokenizer: Splits text on word boundaries using the Unicode text segmentation algorithm, removing most punctuation.
- `lowercase` token filter: Converts all tokens to lowercase, ensuring case-insensitive matching.
- `stop` token filter: Removes common stopwords, such as "the", "is", and "and", from the tokenized output. The analyzer's stopword list defaults to `_none_`, so no stopwords are removed unless you configure the `stopwords` parameter.

## Example

Use the following command to create an index named `my_standard_index` with a `standard` analyzer:

```json
PUT /my_standard_index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "standard"
      }
    }
  }
}
```
{% include copy-curl.html %}

## Parameters

You can configure a `standard` analyzer with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`max_token_length` | Optional | Integer | Sets the maximum length of a produced token. If a token exceeds this length, it is split into multiple tokens at intervals of `max_token_length`. Default is `255`.
`stopwords` | Optional | String or list of strings | A string specifying a predefined list of stopwords (such as `_english_`) or an array specifying a custom list of stopwords. Default is `_none_`.
`stopwords_path` | Optional | String | The path (absolute or relative to the config directory) to a file containing a list of stopwords.
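
For example, the following request configures an analyzer based on the `standard` analyzer that splits tokens longer than five characters and removes English stopwords. This is a minimal sketch: the index name `my_parameterized_index` and the analyzer name `my_parameterized_analyzer` are illustrative.

```json
PUT /my_parameterized_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_parameterized_analyzer": {
          "type": "standard",
          "max_token_length": 5,
          "stopwords": "_english_"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}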


## Configuring a custom analyzer

Use the following command to configure an index with a custom analyzer built from the same tokenizer and token filters as the `standard` analyzer:

```json
PUT /my_custom_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "stop"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /my_custom_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "The slow turtle swims away"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {"token": "slow", "start_offset": 4, "end_offset": 8, "type": "<ALPHANUM>", "position": 1},
    {"token": "turtle", "start_offset": 9, "end_offset": 15, "type": "<ALPHANUM>", "position": 2},
    {"token": "swims", "start_offset": 16, "end_offset": 21, "type": "<ALPHANUM>", "position": 3},
    {"token": "away", "start_offset": 22, "end_offset": 26, "type": "<ALPHANUM>", "position": 4}
  ]
}
```
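
Notice that "The" was removed and the token positions start at 1: the standalone `stop` token filter used in the custom analyzer defaults to the `_english_` stopword list, while the built-in `standard` analyzer's `stopwords` parameter defaults to `_none_`. To see the difference, analyze the same text using the built-in analyzer:

```json
POST /_analyze
{
  "analyzer": "standard",
  "text": "The slow turtle swims away"
}
```
{% include copy-curl.html %}

This request should return the lowercased `the` as the first token, followed by `slow`, `turtle`, `swims`, and `away`.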
