Add Execute Command to API documentation. (#711) (#762)
* Add Execute Command to API documentation.

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Remove Anomaly

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Add editorial review

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Final feedback

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Make sure note renders properly

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>

* Fix capitalization

Signed-off-by: Naarcha-AWS <naarcha@amazon.com>
(cherry picked from commit 16b3099)

Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com>
1 parent 460629a commit 283c436
Showing 2 changed files with 146 additions and 4 deletions.
8 changes: 4 additions & 4 deletions _ml-commons-plugin/algorithms.md
@@ -198,9 +198,9 @@ time_zone | string | The time zone for the time_field field | "UTC"

For FIT RCF, you can train the model with historical data and store the trained model in your index. When you use the Predict API, the model is deserialized and used to predict new data points. However, the model in the index is not refreshed with new data, because the model is fixed in time.

## Anomaly Localization
## Localization

The Anomaly Localization algorithm finds subset level-information for aggregate data (for example, aggregated over time) that demonstrates the activity of interest, such as spikes, drops, changes, or anomalies. Localization can be applied in different scenarios, such as data exploration or root cause analysis, to expose the contributors driving the activity of interest in the aggregate data.
The Localization algorithm finds subset-level information for aggregate data (for example, aggregated over time) that demonstrates the activity of interest, such as spikes, drops, changes, or anomalies. Localization can be applied in different scenarios, such as data exploration or root cause analysis, to expose the contributors driving the activity of interest in the aggregate data.

### Parameters

@@ -219,9 +219,9 @@ num_outputs | integer | The maximum number of values from localization/slicing |
filter_query | QueryBuilder | (Optional) Reduces the collection of data for analysis | Optional.empty()
anomaly_start | Long | (Optional) The time after which the data will be analyzed | Optional.empty()

### Example
### Example: Execute localization

The following example executes Anomaly Localization against an RCA index.
The following example executes Localization against an RCA index.

**Request**

142 changes: 142 additions & 0 deletions _ml-commons-plugin/api.md
@@ -644,6 +644,9 @@ GET /_plugins/_ml/tasks/_search

Delete a task based on its `task_id`.

ML Commons does not check the task status when running the `Delete` request, so a currently running task could be deleted before it completes. To check the status of a task, run `GET /_plugins/_ml/tasks/<task_id>` before deleting the task.
{: .note}
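
The check-before-delete pattern from the note above can be sketched as follows. This is a hypothetical client-side helper, not part of ML Commons: the HTTP calls are injected as callables so the logic can be shown without a live cluster, and the set of running states is an assumption to verify against your cluster's actual task states.

```python
# Sketch of the check-before-delete pattern described in the note above.
# get_task and delete_task stand in for the HTTP calls to the task endpoints.

RUNNING_STATES = {"CREATED", "RUNNING"}  # assumed state names; confirm on your cluster

def safe_delete_task(get_task, delete_task, task_id):
    """Delete a task only if it is not currently running.

    get_task(task_id)    -> dict shaped like the GET /_plugins/_ml/tasks/<task_id> response
    delete_task(task_id) -> performs the DELETE /_plugins/_ml/tasks/<task_id> call
    Returns True if the task was deleted, False if it was skipped.
    """
    task = get_task(task_id)
    if task.get("state") in RUNNING_STATES:
        return False  # still running: deleting now would remove an in-flight task
    delete_task(task_id)
    return True
```

In practice, `get_task` would issue `GET /_plugins/_ml/tasks/<task_id>` and `delete_task` would issue `DELETE /_plugins/_ml/tasks/<task_id>`.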

```json
DELETE /_plugins/_ml/tasks/{task_id}
```
@@ -729,8 +732,147 @@ GET /_plugins/_ml/stats
}
```

## Execute

Some algorithms, such as [Localization]({{site.url}}{{site.baseurl}}/ml-commons-plugin/algorithms#localization), don't require trained models. You can run these algorithms using the `execute` API.

```json
POST _plugins/_ml/_execute/<algorithm_name>
```

### Example: Execute localization

The following example uses the Localization algorithm to find subset-level information for aggregate data (for example, aggregated over time) that demonstrates the activity of interest, such as spikes, drops, changes, or anomalies.

```json
POST /_plugins/_ml/_execute/anomaly_localization
{
  "index_name": "rca-index",
  "attribute_field_names": [
    "attribute"
  ],
  "aggregations": [
    {
      "sum": {
        "sum": {
          "field": "value"
        }
      }
    }
  ],
  "time_field_name": "timestamp",
  "start_time": 1620630000000,
  "end_time": 1621234800000,
  "min_time_interval": 86400000,
  "num_outputs": 10
}
```

Upon execution, the API returns the following:

```json
{
  "results" : [
    {
      "name" : "sum",
      "result" : {
        "buckets" : [
          {
            "start_time" : 1620630000000,
            "end_time" : 1620716400000,
            "overall_aggregate_value" : 65.0
          },
          {
            "start_time" : 1620716400000,
            "end_time" : 1620802800000,
            "overall_aggregate_value" : 75.0,
            "entities" : [
              {
                "key" : [
                  "attr0"
                ],
                "contribution_value" : 1.0,
                "base_value" : 2.0,
                "new_value" : 3.0
              },
              {
                "key" : [
                  "attr1"
                ],
                "contribution_value" : 1.0,
                "base_value" : 3.0,
                "new_value" : 4.0
              },
              {
                "key" : [
                  "attr2"
                ],
                "contribution_value" : 1.0,
                "base_value" : 4.0,
                "new_value" : 5.0
              },
              {
                "key" : [
                  "attr3"
                ],
                "contribution_value" : 1.0,
                "base_value" : 5.0,
                "new_value" : 6.0
              },
              {
                "key" : [
                  "attr4"
                ],
                "contribution_value" : 1.0,
                "base_value" : 6.0,
                "new_value" : 7.0
              },
              {
                "key" : [
                  "attr5"
                ],
                "contribution_value" : 1.0,
                "base_value" : 7.0,
                "new_value" : 8.0
              },
              {
                "key" : [
                  "attr6"
                ],
                "contribution_value" : 1.0,
                "base_value" : 8.0,
                "new_value" : 9.0
              },
              {
                "key" : [
                  "attr7"
                ],
                "contribution_value" : 1.0,
                "base_value" : 9.0,
                "new_value" : 10.0
              },
              {
                "key" : [
                  "attr8"
                ],
                "contribution_value" : 1.0,
                "base_value" : 10.0,
                "new_value" : 11.0
              },
              {
                "key" : [
                  "attr9"
                ],
                "contribution_value" : 1.0,
                "base_value" : 11.0,
                "new_value" : 12.0
              }
            ]
          },
          ...
        ]
      }
    }
  ]
}
```
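
As a sketch of how a client might consume this response, the following hypothetical helper (not part of ML Commons) ranks the entities in one bucket by `contribution_value`. The sample bucket mirrors the shape of the response above, with made-up contribution values so the ranking is visible.

```python
# Rank the entities in one localization bucket by contribution_value.
# `bucket` mirrors the shape of a bucket in the response above (illustrative values).

def top_contributors(bucket, n=3):
    """Return the keys of the n entities with the largest contribution values."""
    entities = bucket.get("entities", [])
    ranked = sorted(entities, key=lambda e: e["contribution_value"], reverse=True)
    return [e["key"][0] for e in ranked[:n]]

bucket = {
    "start_time": 1620716400000,
    "end_time": 1620802800000,
    "overall_aggregate_value": 75.0,
    "entities": [
        {"key": ["attr0"], "contribution_value": 1.0, "base_value": 2.0, "new_value": 3.0},
        {"key": ["attr1"], "contribution_value": 4.0, "base_value": 3.0, "new_value": 7.0},
        {"key": ["attr2"], "contribution_value": 2.0, "base_value": 4.0, "new_value": 6.0},
    ],
}

print(top_contributors(bucket, n=2))  # ['attr1', 'attr2']
```

Buckets without an `entities` list, like the first bucket in the response above, simply yield an empty result.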
