Any plans to add geospatial search? #4
Comments
Yes, very much. There is a Lucene spatial module that I need to look at. There are other solutions running around. When I support that, I want it to be very closely integrated, in the sense that you will have a type: location (or something similar). Not sure it will make it into 0.5, but it's certainly on the short list for 0.6.0.
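(For context, such a dedicated location type later shipped in Elasticsearch as the `geo_point` field type. A minimal mapping sketch as it looks in recent Elasticsearch versions; the index and field names here are made up for illustration:)

```
PUT /places
{
  "mappings": {
    "properties": {
      "name": { "type": "text" },
      "location": { "type": "geo_point" }
    }
  }
}
```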
Thanks - it's a very exciting project, thanks for releasing it!
I will close this for now; I will add an issue that says "Spatial Support" later on.
Due to refactoring in 0.21.x we have to update this plugin. Closes #4.
We don't want too much logging at the INFO level, so we move some parts to log.debug. Closes #4.
changed locale to root
Today, SecureSM has a mechanism that enables a hardcoded list of test packages to exit if the SecureSM instance is constructed with a boolean flag indicating that these packages will be permitted to exit. This commit replaces this mechanism by allowing the SecureSM instance to be constructed with a whitelist of packages that can exit. Relates #4
… fixes on PR elastic#4 Relates to elastic#10708
* Add retention to Snapshot Lifecycle Management (#46407)

This commit adds retention to the existing Snapshot Lifecycle Management feature (#38461) as described in #43663. This allows a user to configure SLM to automatically delete older snapshots based on a number of criteria. An example policy would look like:

```
PUT /_slm/policy/snapshot-every-day
{
  "schedule": "0 30 2 * * ?",
  "name": "<production-snap-{now/d}>",
  "repository": "my-s3-repository",
  "config": { "indices": ["foo-*", "important"] },
  // Newly configured retention options
  "retention": {
    // Snapshots should be deleted after 14 days
    "expire_after": "14d",
    // Keep a maximum of thirty snapshots
    "max_count": 30,
    // Keep a minimum of the four most recent snapshots
    "min_count": 4
  }
}
```

SLM retention is run on a schedule configurable with the `slm.retention_schedule` setting, which supports cron expressions. Deletions are run for a configurable time bounded by the `slm.retention_duration` setting, which defaults to 1 hour. Included in this work is a new SLM stats API endpoint available through

```
GET /_slm/stats
```

that returns statistics about snapshots taken and deleted, as well as successful retention runs, failures, and the time spent deleting snapshots. #45362 has more information as well as an example of the output. These stats are also included when retrieving SLM policies via the API.

* Add base framework for snapshot retention (#43605)

This adds a basic `SnapshotRetentionService` and `SnapshotRetentionTask` to start as the basis for SLM's retention implementation. Relates to #38461
  * Remove extraneous 'public'
  * Use a local var instead of reading class var repeatedly

* Add SnapshotRetentionConfiguration for retention configuration (#43777)

This commit adds the `SnapshotRetentionConfiguration` class and its HLRC counterpart to encapsulate the configuration for SLM retention. Currently only a single parameter is supported as an example (we still need to discuss the different options we want to support and their names) to keep the size of the PR down. It also does not yet include version serialization checks since the original SLM branch has not yet been merged. Relates to #43663
  * Fix REST tests
  * Fix more documentation
  * Use Objects.equals to avoid NPE
  * Put `randomSnapshotLifecyclePolicy` in only one place
  * Occasionally return retention with no configuration

* Implement SnapshotRetentionTask's snapshot filtering and delet… (#44764)

This commit implements the snapshot filtering and deletion for `SnapshotRetentionTask`. Currently only the expire-after age is used for determining whether a snapshot is eligible for deletion. Relates to #43663
  * Fix deletes running on the wrong thread
  * Handle missing or null policy in snap metadata differently
  * Convert Tuple<String, List<SnapshotInfo>> to Map<String, List<SnapshotInfo>>
  * Use the `OriginSettingClient` to work with security, enhance logging
  * Prevent NPE in test by mocking Client

* Allow empty/missing SLM retention configuration (#45018)

Semi-related to #44465, this allows the `"retention"` configuration map to be missing. Relates to #43663

* Add min_count and max_count as SLM retention predicates (#44926)

This adds the configuration options for `min_count` and `max_count`, as well as the logic for determining whether a snapshot meets these criteria, to SLM's retention feature. These options are optional, and one, two, or all three can be specified in an SLM policy. Relates to #43663

* Time-bound deletion of snapshots in retention delete function (#45065)

With a cluster that has a large number of snapshots, it's possible that snapshot deletion can take a very long time (especially since deletes currently have to happen in a serial fashion). To prevent snapshot deletion from taking forever in a cluster and blocking other operations, this commit adds a setting to allow configuring a maximum time to spend deleting snapshots during retention. This dynamic setting defaults to 1 hour and is best-effort, meaning that it doesn't hard-stop a deletion at the hour mark, but ensures that once the time has passed, all subsequent deletions are deferred until the next retention cycle. Relates to #43663
  * Wow snapshots suuuure can take a long time.
  * Use a LongSupplier instead of actually sleeping
  * Remove TestLogging annotation
  * Remove rate limiting

* Add SLM metrics gathering and endpoint (#45362)

This commit adds the infrastructure to gather metrics about the different SLM actions that a cluster takes. These actions are stored in `SnapshotLifecycleStats` and perpetuated in cluster state. The stats stored include the number of snapshots taken, failed, and deleted, the number of retention runs, as well as per-policy counts for snapshots taken, failed, and deleted. It also includes the amount of time spent deleting snapshots from SLM retention. This commit also adds an endpoint for retrieving all stats (further commits will expose this in the SLM get-policy API) that looks like:

```
GET /_slm/stats

{
  "retention_runs" : 13,
  "retention_failed" : 0,
  "retention_timed_out" : 0,
  "retention_deletion_time" : "1.4s",
  "retention_deletion_time_millis" : 1404,
  "policy_metrics" : {
    "daily-snapshots2" : {
      "snapshots_taken" : 7,
      "snapshots_failed" : 0,
      "snapshots_deleted" : 6,
      "snapshot_deletion_failures" : 0
    },
    "daily-snapshots" : {
      "snapshots_taken" : 12,
      "snapshots_failed" : 0,
      "snapshots_deleted" : 12,
      "snapshot_deletion_failures" : 6
    }
  },
  "total_snapshots_taken" : 19,
  "total_snapshots_failed" : 0,
  "total_snapshots_deleted" : 18,
  "total_snapshot_deletion_failures" : 6
}
```

This does not yet include HLRC support, as this commit is quite large on its own. That will be added in a subsequent commit. Relates to #43663
  * Version qualify serialization
  * Initialize counters outside constructor
  * Use computeIfAbsent instead of being too verbose
  * Move part of XContent generation into subclass
  * Fix REST action for master merge
  * Unused import

* Record history of SLM retention actions (#45513)

This commit records the deletion of snapshots by the retention component of SLM into the SLM history index, for the purposes of reviewing operations taken by SLM and alerting.

* Retry SLM retention after currently running snapshot completes (#45802)

This commit adds a ClusterStateObserver to wait until the currently running snapshot is complete before proceeding with snapshot deletion. SLM retention waits for the maximum allowed deletion time for the snapshot to complete; however, the waiting time is not factored into the limit on actual deletions. Relates to #43663
  * Increase timeout waiting for snapshot completion
  * Apply patch from https://github.com/original-brownbear/elasticsearch/commit/2374316f0d1912c9e1498bece195546a1dc60bce.patch
  * Rename test variables
  * [TEST] Be less strict for stats checking

* Skip SLM retention if ILM is STOPPING or STOPPED (#45869)

This adds a check to ensure we take no action during SLM retention if ILM is currently stopped or in the process of stopping. Relates to #43663

* Check all actions preventing snapshot delete during retention (#45992)

Previously we only checked to see if a snapshot was currently running, but it turns out that more things can block snapshot deletion. This changes the check to be a check for:
- a snapshot currently running
- a deletion already in progress
- a repo cleanup in progress
- a restore currently running

This was found by CI, where a third-party delete in a test caused SLM retention deletion to throw an exception. Relates to #43663
  * Add unit test for okayToDeleteSnapshots
  * Fix bug where SLM retention task would be scheduled on every node
  * Enhance test logging
  * Ignore if snapshot is already deleted
  * Missing import
  * Fix SnapshotRetentionServiceTests

* Expose SLM policy stats in get SLM policy API (#45989)

This also adds support for the SLM stats endpoint to the high level rest client. Retrieving a policy now looks like:

```json
{
  "daily-snapshots" : {
    "version": 1,
    "modified_date": "2019-04-23T01:30:00.000Z",
    "modified_date_millis": 1556048137314,
    "policy" : {
      "schedule": "0 30 1 * * ?",
      "name": "<daily-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": ["data-*", "important"],
        "ignore_unavailable": false,
        "include_global_state": false
      },
      "retention": {}
    },
    "stats": {
      "snapshots_taken": 0,
      "snapshots_failed": 0,
      "snapshots_deleted": 0,
      "snapshot_deletion_failures": 0
    },
    "next_execution": "2019-04-24T01:30:00.000Z",
    "next_execution_millis": 1556048160000
  }
}
```

Relates to #43663

* Rewrite SnapshotLifecycleIT as an ESIntegTestCase (#46356)

This commit splits `SnapshotLifecycleIT` into two different tests: `SnapshotLifecycleRestIT`, which includes the tests that do not require slow repositories, and `SLMSnapshotBlockingIntegTests`, which is now an integration test using `MockRepository` to simulate a snapshot being in progress. Relates to #43663. Resolves #46205
  * Add error logging when exceptions are thrown
  * Update serialization versions
  * Fix type inference
  * Use non-Cancellable HLRC return value
  * Fix Client mocking in test
  * Fix SLMSnapshotBlockingIntegTests for 7.x branch

* Update SnapshotRetentionTask for non-multi-repo snapshot retrieval
* Add serialization guards for SnapshotLifecyclePolicy
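For reference, both retention settings named above are dynamic cluster settings, so they can be adjusted via the cluster settings API; a minimal sketch (the values shown here are illustrative, not the defaults):

```
PUT /_cluster/settings
{
  "persistent": {
    "slm.retention_schedule": "0 30 1 * * ?",
    "slm.retention_duration": "1h"
  }
}
```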
Address the issue [ansible-elasticsearch elastic#4](elastic/ansible-elasticsearch#4), which adds a proxy for plugin installation and, more generally, es_proxy_host and es_proxy_port for global usage. Right now, they are only used for plugin installation. In the future, they can also be used (or defined) in the java_opts within the default file (elasticsearch.j2), for instance.
…9765) Previously we did not resolve the attributes recursively, which meant that if a field or expression was re-aliased multiple times (through multiple levels of subqueries), the aliases were only resolved one level down. This led to failed query translation because `ReferenceAttribute`s were pointing to non-existing attributes during query translation. For example the query

```sql
SELECT i AS j FROM ( SELECT int AS i FROM test) ORDER BY j
```

failed during translation because the `OrderBy` resolved the `j` ReferenceAttribute to another `i` ReferenceAttribute that was later removed by an Optimization:

```
OrderBy[[Order[j{r}#4,ASC,LAST]]]   ! OrderBy[[Order[i{r}#2,ASC,LAST]]]
\_Project[[j]]                      = \_Project[[j]]
  \_Project[[i]]                    !
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]     \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] !
```

By resolving the `Attributes` recursively, both `j{r}` and `i{r}` will resolve to `test.int{f}` above:

```
OrderBy[[Order[test.int{f}#22,ASC,LAST]]]   = OrderBy[[Order[test.int{f}#22,ASC,LAST]]]
\_Project[[j]]                              = \_Project[[j]]
  \_Project[[i]]                            !
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]     \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] !
```

The scope of recursive resolution depends on how the `AttributeMap` is constructed and populated. Fixes #67237
…9765) (#70325) Previously we did not resolve the attributes recursively, which meant that if a field or expression was re-aliased multiple times (through multiple levels of subqueries), the aliases were only resolved one level down. This led to failed query translation because `ReferenceAttribute`s were pointing to non-existing attributes during query translation. For example the query

```sql
SELECT i AS j FROM ( SELECT int AS i FROM test) ORDER BY j
```

failed during translation because the `OrderBy` resolved the `j` ReferenceAttribute to another `i` ReferenceAttribute that was later removed by an Optimization:

```
OrderBy[[Order[j{r}#4,ASC,LAST]]]   ! OrderBy[[Order[i{r}#2,ASC,LAST]]]
\_Project[[j]]                      = \_Project[[j]]
  \_Project[[i]]                    !
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]     \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] !
```

By resolving the `Attributes` recursively, both `j{r}` and `i{r}` will resolve to `test.int{f}` above:

```
OrderBy[[Order[test.int{f}#22,ASC,LAST]]]   = OrderBy[[Order[test.int{f}#22,ASC,LAST]]]
\_Project[[j]]                              = \_Project[[j]]
  \_Project[[i]]                            !
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]     \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] !
```

The scope of recursive resolution depends on how the `AttributeMap` is constructed and populated. Fixes #67237

…9765) (#70322) Previously we did not resolve the attributes recursively, which meant that if a field or expression was re-aliased multiple times (through multiple levels of subqueries), the aliases were only resolved one level down. This led to failed query translation because `ReferenceAttribute`s were pointing to non-existing attributes during query translation. For example the query

```sql
SELECT i AS j FROM ( SELECT int AS i FROM test) ORDER BY j
```

failed during translation because the `OrderBy` resolved the `j` ReferenceAttribute to another `i` ReferenceAttribute that was later removed by an Optimization:

```
OrderBy[[Order[j{r}#4,ASC,LAST]]]   ! OrderBy[[Order[i{r}#2,ASC,LAST]]]
\_Project[[j]]                      = \_Project[[j]]
  \_Project[[i]]                    !
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]     \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] !
```

By resolving the `Attributes` recursively, both `j{r}` and `i{r}` will resolve to `test.int{f}` above:

```
OrderBy[[Order[test.int{f}#22,ASC,LAST]]]   = OrderBy[[Order[test.int{f}#22,ASC,LAST]]]
\_Project[[j]]                              = \_Project[[j]]
  \_Project[[i]]                            !
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]     \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] !
```

The scope of recursive resolution depends on how the `AttributeMap` is constructed and populated. Fixes #67237
…4b7a43bf7af9c Make NestedObjectMapper its own class (elastic#74410)

Nested objects are implemented via a Nested class directly on object mappers, even though nested and non-nested objects have quite different semantics. In addition, most call-sites that need to get an object mapper in fact need a nested object mapper. To make it clearer that nested and object mappers are different beasts with different implementations and different requirements, we should split them into different classes. :server:test compiles; :server:tests pass.

This is the commit message elastic#4: fix merge includes
…ons_prepare Prepare commit phase transport layer
Fixes elastic/elasticsearch-internal#497
Fixes ESQL-560

A query like `from test | sort data | limit 2 | project count` fails because the `LocalToGlobalLimitAndTopNExec` planning rule adds a collecting `TopNExec` after the last GATHER exchange, to perform the last reduce; see

```
TopNExec[[Order[data{f}#6,ASC,LAST]],2[INTEGER]]
\_ExchangeExec[GATHER,SINGLE_DISTRIBUTION]
  \_ProjectExec[[count{f}#4]]    // <- `data` is projected away but still used by the TopN node above
    \_FieldExtractExec[count{f}#4]
      \_TopNExec[[Order[data{f}#6,ASC,LAST]],2[INTEGER]]
        \_FieldExtractExec[data{f}#6]
          \_ExchangeExec[REPARTITION,FIXED_ARBITRARY_DISTRIBUTION]
            \_EsQueryExec[test], query[][_doc_id{f}#9, _segment_id{f}#10, _shard_id{f}#11]
```

Unfortunately, at that stage the inputs needed by the TopNExec could already have been projected away by a ProjectExec, so they may no longer be available.

This PR adapts the plan as follows:
- add all the projections used by the `TopNExec` to the existing `ProjectExec`, so that they are available when needed
- add another ProjectExec on top of the plan, to project away the originally removed projections and preserve the query semantics

This approach is a bit dangerous, because it bypasses the mechanism of input/output resolution and validation that happens on the logical plan. The alternative would be to do this manipulation on the logical plan, but that is probably hard to do, because there is no concept of Exchange at that level.
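For illustration only, a sketch of what the adapted plan could look like after the two changes above; the exact node layout is assumed here, not taken from real planner output:

```
ProjectExec[[count{f}#4]]                        // added on top: restores the original output columns
\_TopNExec[[Order[data{f}#6,ASC,LAST]],2[INTEGER]]
  \_ExchangeExec[GATHER,SINGLE_DISTRIBUTION]
    \_ProjectExec[[count{f}#4, data{f}#6]]       // `data` kept so the collecting TopN above can still sort on it
      \_FieldExtractExec[count{f}#4]
        \_TopNExec[[Order[data{f}#6,ASC,LAST]],2[INTEGER]]
          \_FieldExtractExec[data{f}#6]
            \_ExchangeExec[REPARTITION,FIXED_ARBITRARY_DISTRIBUTION]
              \_EsQueryExec[test], query[][_doc_id{f}#9, _segment_id{f}#10, _shard_id{f}#11]
```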
It would be incredibly useful to be able to index documents with latitude/longitude positions and run searches that return results ordered by distance from a specific latitude/longitude point.
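For illustration, this is roughly how such a distance-ordered search is expressed against a `geo_point` field in recent Elasticsearch versions, using a `_geo_distance` sort (the index name, field name, and coordinates are made up):

```
GET /places/_search
{
  "query": { "match_all": {} },
  "sort": [
    {
      "_geo_distance": {
        "location": { "lat": 40.7128, "lon": -74.006 },
        "order": "asc",
        "unit": "km"
      }
    }
  ]
}
```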