
[ML] Space permision checks for job deletion #83871

Merged

Conversation

jgowdyelastic (Member)

@jgowdyelastic jgowdyelastic commented Nov 19, 2020

Adds an endpoint to check which delete-job actions the user is able to perform on a specified list of jobs.

The endpoint returns a map of job IDs with canDelete and canUntag flags for each.

This check will be performed in the UI when a user attempts to delete a job.
The UI work is coming in a different PR.
An API test will be added in a follow-up PR.

Job is in individual spaces, user cannot see all of them
Job can only be untagged from the current space

{
  "foo": {
    "canDelete": false,
    "canUntag": true
  }
}

Job is in individual spaces, all of which the user can see
Job can be deleted or untagged from current space

{
  "foo": {
    "canDelete": true,
    "canUntag": true
  }
}

Job is in * space, user cannot see all spaces
Job cannot be untagged from the space or deleted.

{
  "foo": {
    "canDelete": false,
    "canUntag": false
  }
}

Job is in * space, user can see all spaces
Delete job only, no option to untag

{
  "foo": {
    "canDelete": true,
    "canUntag": false
  }
}

Spaces plugin is disabled
Delete job only, no option to untag

{
  "foo": {
    "canDelete": true,
    "canUntag": false
  }
}

User does not have canDeleteJob or canDeleteDataFrameAnalytics capabilities
Job cannot be untagged from the space or deleted.

{
  "foo": {
    "canDelete": false,
    "canUntag": false
  }
}
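The six scenarios above can be collapsed into a small decision function. This is a minimal sketch of that decision table, not the PR's actual implementation; the function and parameter names are illustrative assumptions:

```typescript
interface DeleteJobPermission {
  canDelete: boolean;
  canUntag: boolean;
}

// Sketch of the scenarios listed above. jobSpaces is the set of spaces the
// job is assigned to ('*' means the job is in all spaces); userSpaces is the
// set of spaces visible to the current user.
function resolveDeleteJobPermission(
  jobSpaces: string[],
  userSpaces: string[],
  hasDeleteCapability: boolean, // canDeleteJob / canDeleteDataFrameAnalytics
  spacesEnabled: boolean // whether the spaces plugin is active
): DeleteJobPermission {
  if (!hasDeleteCapability) {
    return { canDelete: false, canUntag: false };
  }
  if (!spacesEnabled) {
    // No space concept: delete only, nothing to untag from.
    return { canDelete: true, canUntag: false };
  }
  const isGlobal = jobSpaces.includes('*');
  const userSeesAll = isGlobal
    ? userSpaces.includes('*')
    : jobSpaces.every((s) => userSpaces.includes(s));
  return {
    // Deletion requires visibility of every space the job lives in.
    canDelete: userSeesAll,
    // Untagging only makes sense for a job in more than one individual space.
    canUntag: !isGlobal && jobSpaces.length > 1,
  };
}
```

Each branch corresponds to one of the JSON examples above, e.g. a `*`-space job seen by a non-all-spaces user yields `{ canDelete: false, canUntag: false }`.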

Checklist

@jgowdyelastic jgowdyelastic marked this pull request as ready for review November 20, 2020 14:08
@jgowdyelastic jgowdyelastic requested a review from a team as a code owner November 20, 2020 14:08
@jgowdyelastic jgowdyelastic self-assigned this Nov 20, 2020
@jgowdyelastic jgowdyelastic added Feature:Anomaly Detection ML anomaly detection Feature:Data Frame Analytics ML data frame analytics features Feature:Transforms ML transforms non-issue Indicates to automation that a pull request should not appear in the release notes review release_note:skip Skip the PR/issue when compiling release notes v7.11.0 v8.0.0 and removed Feature:Transforms ML transforms labels Nov 20, 2020
}

if (
mlCapabilities.canDeleteJob === false ||
Contributor:

Is it possible that the user could have one permission and not the other? Or is job deletion a single permission across ML?

Member Author:

At the moment, no. We only have ML viewer and ML admin roles.
But just in case we ever allow fine-grained capabilities, I'll change this check to be more explicit.

Member Author:

Changed in 5951246
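The "more explicit" form described here (checking the capability that matches the job type, rather than treating ML permissions as a single monolithic role) might look like the following sketch. The type names follow the snippet quoted above; the branching itself is an assumption for illustration, not the PR's exact code:

```typescript
interface MlCapabilities {
  canDeleteJob: boolean;
  canDeleteDataFrameAnalytics: boolean;
}

type JobType = 'anomaly-detector' | 'data-frame-analytics';

// Check the capability matching the job type, so fine-grained roles
// (if they ever exist) are handled correctly.
function userCanDeleteJobType(caps: MlCapabilities, jobType: JobType): boolean {
  return jobType === 'anomaly-detector'
    ? caps.canDeleteJob
    : caps.canDeleteDataFrameAnalytics;
}
```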

results[jobId] = {
canDelete: true,
canUnTag: false,
// GLOBAL JOB BOOL????
Contributor:

Is this comment needed?

Member Author:

It was a reminder that we might need information in this endpoint that the UI can use to tell the user why they can't untag or delete the job.
I'll remove the comment for now, but when the UI work is done we may need to add flags like isGlobalJob.

Member Author:

fixed ceb41ea

@alvarezmelissa87 (Contributor)

Code LGTM but I'm having trouble testing it. Will update first thing Monday.

@jgowdyelastic (Member Author)

@elasticmachine merge upstream


export interface DeleteJobPermission {
canDelete: boolean;
canUnTag: boolean;
Contributor:

Looking for this word online, it seems untag is the generally accepted spelling, rather than un-tag. In which case, I think it makes sense to go with canUntag here.

Member Author:

fixed ceb41ea

*
* @api {get} /api/ml/saved_objects/delete_job_check Check whether user can delete a job
* @apiName DeleteJobCheck
* @apiDescription Check the users ability to delete jobs. Returns whether they are able
Contributor:

Nit - user's.

Member Author:

fixed ceb41ea


// jobs which are in individual spaces can only be untagged
// from current space if the job is in more than 1 space
const canUnTag = namespaces.length > 1;
Contributor:

As above, should probably be canUntag.

Member Author:

fixed ceb41ea

* @apiSchema (body) jobIdsSchema (params) jobTypeSchema
*
*/
router.post(
Contributor:

Should be DELETE instead

Suggested change
router.post(
router.delete(

Member Author:

It's not deleting anything; it's checking whether you can delete.

Contributor:

Sorry, I read it wrong! It'd be more straightforward to start the method with Check then :)

Member Author:

Yes, I can see how this could be confused.
We could change this to check_delete_job_ability or something like that?
@peteharverson

Contributor:

Or perhaps can_delete_job or check_can_delete_job ?

Comment on lines 242 to 246
path: '/api/ml/saved_objects/delete_job_check/{jobType}',
validate: {
params: jobTypeSchema,
body: jobIdsSchema,
},
Contributor:

maybe it's worth putting everything in params, e.g. '/api/ml/saved_objects/delete_job_check/{jobType}/{jobId}'

Member Author:

jobIds is an array of IDs; for all other similar Kibana endpoints we pass the IDs as an array in the body (not including endpoints which are simple wrappers around ES endpoints).

Contributor:

OK, I recall some endpoints with comma-separated job IDs.
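Per this thread, the job type travels in the path while the (potentially many) job IDs travel as an array in the body. A sketch of assembling such a request follows; the path is as shown in the PR text (it was later renamed in review), and the helper itself is hypothetical:

```typescript
// Build the delete-job check request: jobType in the path, job IDs in the body,
// matching the convention of other Kibana ML endpoints.
function buildDeleteJobCheckRequest(jobType: string, jobIds: string[]) {
  return {
    method: 'POST',
    path: `/api/ml/saved_objects/delete_job_check/${jobType}`,
    body: JSON.stringify({ jobIds }),
  };
}
```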

@jgowdyelastic (Member Author)

@elasticmachine merge upstream

@elasticmachine (Contributor)

Pinging @elastic/ml-ui (:ml)

@peteharverson (Contributor) left a comment:

LGTM

@alvarezmelissa87 (Contributor) left a comment:

LGTM ⚡

@darnautov (Contributor) left a comment:

LGTM

@jgowdyelastic (Member Author)

@elasticmachine merge upstream

@jgowdyelastic jgowdyelastic merged commit 24f262b into elastic:master Nov 24, 2020
@jgowdyelastic jgowdyelastic deleted the delete-job-space-permission-check branch November 24, 2020 16:29
jgowdyelastic added a commit that referenced this pull request Nov 24, 2020
* [ML] Space permision checks for job deletion

* updating spaces dependency

* updating endpoint comments

* adding delete job capabilities check

* small change based on review

* improving permissions checks

* renaming function and endpoint

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
rylnd added a commit to rylnd/kibana that referenced this pull request Nov 24, 2020
* master: (41 commits)
  [Maps] fix code-owners (elastic#84265)
  [@kbn/utils] Clean target before build (elastic#84253)
  [code coverage] collect for oss integration tests (elastic#83907)
  [APM] Use `asTransactionRate` consistently everywhere (elastic#84213)
  Attempt to fix incremental build error (elastic#84152)
  Unskip "Copy dashboards to space" (elastic#84115)
  Remove expressions.legacy from README (elastic#79681)
  Expression: Add render mode and use it for canvas interactivity (elastic#83559)
  [deb/rpm] Move systemd service to /usr/lib/systemd/system (elastic#83571)
  [Security Solution][Resolver] Allow a configurable entity_id field (elastic#81679)
  [ML] Space permision checks for job deletion (elastic#83871)
  [build] Provide ARM build of RE2 (elastic#84163)
  TSVB should use "histogram:maxBars" and "histogram:barTarget" settings for auto instead of a default 100 buckets (elastic#83628)
  [Workplace Search] Initial rendering of Org Sources (elastic#84164)
  update geckodriver to 0.28 (elastic#84085)
  Fix timelion vis escapes single quotes (elastic#84196)
  [Security Solution] Fix incorrect time for dns histogram (elastic#83532)
  [DX] Bump TS version to v4.1 (elastic#83397)
  [Security Solution] Add endpoint policy revision number (elastic#83982)
  [Fleet] Integration Policies List view (elastic#83634)
  ...
@kibanamachine (Contributor)

kibanamachine commented Dec 10, 2020

💔 Build Failed

Failed CI Steps


Test Failures

X-Pack API Integration Tests.x-pack/test/api_integration/apis/ml/results/get_anomalies_table_data·ts.apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 1 times on tracked branches: https://dryrun

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:06:45]           └-: Machine Learning
[00:06:45]             └-> "before all" hook
[00:06:45]             └-> "before all" hook
[00:06:45]               │ debg creating role ft_ml_source
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_source]
[00:06:45]               │ debg creating role ft_ml_source_readonly
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_source_readonly]
[00:06:45]               │ debg creating role ft_ml_dest
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_dest]
[00:06:45]               │ debg creating role ft_ml_dest_readonly
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_dest_readonly]
[00:06:45]               │ debg creating role ft_ml_ui_extras
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_ui_extras]
[00:06:45]               │ debg creating role ft_default_space_ml_all
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_default_space_ml_all]
[00:06:45]               │ debg creating role ft_default_space_ml_read
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_default_space_ml_read]
[00:06:45]               │ debg creating role ft_default_space_ml_none
[00:06:45]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_default_space_ml_none]
[00:06:45]               │ debg creating user ft_ml_poweruser
[00:06:45]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_poweruser]
[00:06:45]               │ debg created user ft_ml_poweruser
[00:06:45]               │ debg creating user ft_ml_poweruser_spaces
[00:06:45]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_poweruser_spaces]
[00:06:45]               │ debg created user ft_ml_poweruser_spaces
[00:06:45]               │ debg creating user ft_ml_viewer
[00:06:45]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_viewer]
[00:06:45]               │ debg created user ft_ml_viewer
[00:06:45]               │ debg creating user ft_ml_viewer_spaces
[00:06:45]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_viewer_spaces]
[00:06:45]               │ debg created user ft_ml_viewer_spaces
[00:06:45]               │ debg creating user ft_ml_unauthorized
[00:06:45]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_unauthorized]
[00:06:45]               │ debg created user ft_ml_unauthorized
[00:06:45]               │ debg creating user ft_ml_unauthorized_spaces
[00:06:45]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_unauthorized_spaces]
[00:06:45]               │ debg created user ft_ml_unauthorized_spaces
[00:09:36]             └-: ResultsService
[00:09:36]               └-> "before all" hook
[00:09:36]               └-: GetAnomaliesTableData
[00:09:36]                 └-> "before all" hook
[00:09:36]                 └-> "before all" hook
[00:09:36]                   │ info [ml/farequote] Loading "mappings.json"
[00:09:36]                   │ info [ml/farequote] Loading "data.json.gz"
[00:09:36]                   │ info [ml/farequote] Skipped restore for existing index "ft_farequote"
[00:09:37]                   │ debg applying update to kibana config: {"dateFormat:tz":"UTC"}
[00:09:37]                   │ debg Creating anomaly detection job with id 'fq_multi_1_ae'...
[00:09:37]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared] creating index, cause [api], templates [.ml-anomalies-], shards [1]/[1]
[00:09:37]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-anomalies-shared]
[00:09:37]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-annotations-6] creating index, cause [api], templates [], shards [1]/[1]
[00:09:37]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-annotations-6]
[00:09:37]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared/1fHRECxdQzWe914yi3QZgw] update_mapping [_doc]
[00:09:37]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-config] creating index, cause [auto(bulk api)], templates [.ml-config], shards [1]/[1]
[00:09:37]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-config]
[00:09:37]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-notifications-000001] creating index, cause [auto(bulk api)], templates [.ml-notifications-000001], shards [1]/[1]
[00:09:37]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-notifications-000001]
[00:09:38]                   │ debg Waiting up to 5000ms for 'fq_multi_1_ae' to exist...
[00:09:38]                   │ debg Creating datafeed with id 'datafeed-fq_multi_1_se'...
[00:09:39]                   │ debg Waiting up to 5000ms for 'datafeed-fq_multi_1_se' to exist...
[00:09:39]                   │ debg Opening anomaly detection job 'fq_multi_1_ae'...
[00:09:39]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] Opening job [fq_multi_1_ae]
[00:09:39]                   │ info [o.e.x.c.m.u.MlIndexAndAlias] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] About to create first concrete index [.ml-state-000001] with alias [.ml-state-write]
[00:09:39]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-state-000001] creating index, cause [api], templates [.ml-state], shards [1]/[1]
[00:09:39]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-state-000001]
[00:09:39]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ml-size-based-ilm-policy]
[00:09:39]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ml-size-based-ilm-policy]
[00:09:39]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] Loading model snapshot [N/A], job latest_record_timestamp [N/A]
[00:09:39]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ml-size-based-ilm-policy]
[00:09:40]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] [autodetect/319670] [CResourceMonitor.cc@74] Setting model memory limit to 20 MB
[00:09:40]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] Successfully set job state to [opened] for job [fq_multi_1_ae]
[00:09:40]                   │ debg Starting datafeed 'datafeed-fq_multi_1_se' with start: '0', end: '1607617056365'...
[00:09:40]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] Datafeed started (from: 1970-01-01T00:00:00.000Z to: 2020-12-10T16:17:36.365Z) with frequency [600000ms]
[00:09:40]                   │ debg Waiting up to 120000ms for datafeed state to be stopped...
[00:09:40]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:40]                   │ debg --- retry.waitForWithTimeout error: expected job state to be stopped but got started
[00:09:40]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared/1fHRECxdQzWe914yi3QZgw] update_mapping [_doc]
[00:09:40]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 10000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:40]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:40]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:40]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 20000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:41]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 30000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:41]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:41]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] to [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] to [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] in policy [kibana-event-log-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] to [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] in policy [kibana-event-log-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"pause-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] in policy [kibana-event-log-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"close-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:41]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 40000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] in policy [ilm-history-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] in policy [kibana-event-log-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"unfollow-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] in policy [ml-size-based-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] in policy [ilm-history-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] in policy [kibana-event-log-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"open-follower-index"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] in policy [ml-size-based-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.kibana-event-log-8.0.0-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
[00:09:41]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-index-color"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ml-size-based-ilm-policy]
[00:09:41]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:41]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:41]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 50000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:42]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:42]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:42]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 60000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:42]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 70000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:42]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:42]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:43]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 80000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:43]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:43]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:43]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:43]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:44]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:44]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:44]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] Lookback has finished
[00:09:44]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] attempt to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]
[00:09:44]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] try lock [20s] to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]...
[00:09:44]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] stopping datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae], acquired [true]...
[00:09:44]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae] has been stopped
[00:09:44]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] Closing job [fq_multi_1_ae], because [close job (api)]
[00:09:44]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] [autodetect/319670] [CCmdSkeleton.cc@51] Handled 86274 records
[00:09:44]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] [autodetect/319670] [CAnomalyJob.cc@1569] Pruning all models
[00:09:44]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared/1fHRECxdQzWe914yi3QZgw] update_mapping [_doc]
[00:09:44]                   │ info [o.e.x.m.p.AbstractNativeProcess] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] State output finished
[00:09:44]                   │ info [o.e.x.m.j.p.a.o.AutodetectResultProcessor] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 120 buckets parsed from autodetect output
[00:09:44]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:44]                   │ debg Waiting up to 120000ms for job state to be closed...
[00:09:44]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:44]                   │ debg --- retry.waitForWithTimeout error: expected job state to be closed but got closing
[00:09:45]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:45]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:45]                   │ info [o.e.x.m.j.p.a.AutodetectCommunicator] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] job closed
[00:09:45]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:45]                 └-> should fetch anomalies table data
[00:09:45]                   └-> "before each" hook: global before each
[00:09:45]                   └- ✖ fail: apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data
[00:09:45]                   │       Error: expected 13 to sort of equal 12
[00:09:45]                   │       + expected - actual
[00:09:45]                   │ 
[00:09:45]                   │       -13
[00:09:45]                   │       +12
[00:09:45]                   │       
[00:09:45]                   │       at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
[00:09:45]                   │       at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
[00:09:45]                   │       at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
[00:09:45]                   │       at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
[00:09:45]                   │ 
[00:09:45]                   │ 

Stack Trace

Error: expected 13 to sort of equal 12
    at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
    at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
    at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
    at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16) {
  actual: '13',
  expected: '12',
  showDiff: true
}
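The failure above comes from a deep-equality assertion on an anomaly count (13 results where the test expected 12). As a rough illustration only, here is a hypothetical minimal re-implementation of that kind of `eql`-style check, producing the same "sort of equal" message seen in the log; this is a sketch of the failure mode, not the real kbn-expect code, and `checkAnomalyCount` is an invented helper name.

```typescript
// Hypothetical sketch of an eql-style deep-equality assertion, mimicking
// the "expected X to sort of equal Y" message from the log above.
// (kbn-expect's real `eql` has more nuanced semantics; this only
// illustrates why 13 vs. 12 throws.)
function expectEql(actual: unknown, expected: unknown): void {
  const a = JSON.stringify(actual);
  const e = JSON.stringify(expected);
  if (a !== e) {
    throw new Error(`expected ${actual} to sort of equal ${expected}`);
  }
}

// The failing test asserted a fixed number of anomalies in the table
// data; one extra anomaly in the results makes the assertion throw.
function checkAnomalyCount(anomalies: unknown[], expectedLength: number): string {
  try {
    expectEql(anomalies.length, expectedLength);
    return 'pass';
  } catch (err) {
    return (err as Error).message;
  }
}
```

Such off-by-one count failures are typically caused by the test data or scoring changing slightly between runs rather than by the endpoint under review.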

X-Pack API Integration Tests.x-pack/test/api_integration/apis/ml/results/get_anomalies_table_data·ts.apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:00]         │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ds-ilm-history-5-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
[00:00:00]         │ info [o.e.c.m.MetadataCreateDataStreamService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-000001] and backing indices []
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
[00:00:00]         │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-000001][0]]])." previous.health="YELLOW" reason="shards started [[.ds-ilm-history-5-000001][0]]"
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ilm-history-ilm-policy]
[00:00:00]         │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ds-ilm-history-5-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ilm-history-ilm-policy]
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook
[00:06:33]           └-: Machine Learning
[00:06:33]             └-> "before all" hook
[00:06:33]             └-> "before all" hook
[00:06:33]               │ debg creating role ft_ml_source
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_source]
[00:06:33]               │ debg creating role ft_ml_source_readonly
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_source_readonly]
[00:06:33]               │ debg creating role ft_ml_dest
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_dest]
[00:06:33]               │ debg creating role ft_ml_dest_readonly
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_dest_readonly]
[00:06:33]               │ debg creating role ft_ml_ui_extras
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_ml_ui_extras]
[00:06:33]               │ debg creating role ft_default_space_ml_all
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_default_space_ml_all]
[00:06:33]               │ debg creating role ft_default_space_ml_read
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_default_space_ml_read]
[00:06:33]               │ debg creating role ft_default_space_ml_none
[00:06:33]               │ info [o.e.x.s.a.r.TransportPutRoleAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added role [ft_default_space_ml_none]
[00:06:33]               │ debg creating user ft_ml_poweruser
[00:06:33]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_poweruser]
[00:06:33]               │ debg created user ft_ml_poweruser
[00:06:33]               │ debg creating user ft_ml_poweruser_spaces
[00:06:33]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_poweruser_spaces]
[00:06:33]               │ debg created user ft_ml_poweruser_spaces
[00:06:33]               │ debg creating user ft_ml_viewer
[00:06:33]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_viewer]
[00:06:33]               │ debg created user ft_ml_viewer
[00:06:33]               │ debg creating user ft_ml_viewer_spaces
[00:06:33]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_viewer_spaces]
[00:06:33]               │ debg created user ft_ml_viewer_spaces
[00:06:33]               │ debg creating user ft_ml_unauthorized
[00:06:34]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_unauthorized]
[00:06:34]               │ debg created user ft_ml_unauthorized
[00:06:34]               │ debg creating user ft_ml_unauthorized_spaces
[00:06:34]               │ info [o.e.x.s.a.u.TransportPutUserAction] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] added user [ft_ml_unauthorized_spaces]
[00:06:34]               │ debg created user ft_ml_unauthorized_spaces
[00:09:20]             └-: ResultsService
[00:09:20]               └-> "before all" hook
[00:09:20]               └-: GetAnomaliesTableData
[00:09:20]                 └-> "before all" hook
[00:09:20]                 └-> "before all" hook
[00:09:20]                   │ info [ml/farequote] Loading "mappings.json"
[00:09:20]                   │ info [ml/farequote] Loading "data.json.gz"
[00:09:20]                   │ info [ml/farequote] Skipped restore for existing index "ft_farequote"
[00:09:21]                   │ debg applying update to kibana config: {"dateFormat:tz":"UTC"}
[00:09:21]                   │ debg Creating anomaly detection job with id 'fq_multi_1_ae'...
[00:09:21]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared] creating index, cause [api], templates [.ml-anomalies-], shards [1]/[1]
[00:09:21]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-anomalies-shared]
[00:09:22]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-annotations-6] creating index, cause [api], templates [], shards [1]/[1]
[00:09:22]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-annotations-6]
[00:09:22]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared/9mDDrjL5Q_m-36WRo_Fk5g] update_mapping [_doc]
[00:09:22]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-config] creating index, cause [auto(bulk api)], templates [.ml-config], shards [1]/[1]
[00:09:22]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-config]
[00:09:22]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-notifications-000001] creating index, cause [auto(bulk api)], templates [.ml-notifications-000001], shards [1]/[1]
[00:09:22]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-notifications-000001]
[00:09:22]                   │ debg Waiting up to 5000ms for 'fq_multi_1_ae' to exist...
[00:09:22]                   │ debg Creating datafeed with id 'datafeed-fq_multi_1_se'...
[00:09:23]                   │ debg Waiting up to 5000ms for 'datafeed-fq_multi_1_se' to exist...
[00:09:23]                   │ debg Opening anomaly detection job 'fq_multi_1_ae'...
[00:09:24]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] Opening job [fq_multi_1_ae]
[00:09:24]                   │ info [o.e.x.c.m.u.MlIndexAndAlias] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] About to create first concrete index [.ml-state-000001] with alias [.ml-state-write]
[00:09:24]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-state-000001] creating index, cause [api], templates [.ml-state], shards [1]/[1]
[00:09:24]                   │ info [o.e.c.r.a.AllocationService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] updating number_of_replicas to [0] for indices [.ml-state-000001]
[00:09:24]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ml-size-based-ilm-policy]
[00:09:24]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] in policy [ml-size-based-ilm-policy]
[00:09:24]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] Loading model snapshot [N/A], job latest_record_timestamp [N/A]
[00:09:24]                   │ info [o.e.x.i.IndexLifecycleTransition] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] moving index [.ml-state-000001] from [{"phase":"hot","action":"unfollow","name":"wait-for-indexing-complete"}] to [{"phase":"hot","action":"unfollow","name":"wait-for-follow-shard-tasks"}] in policy [ml-size-based-ilm-policy]
[00:09:24]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] [autodetect/232329] [CResourceMonitor.cc@74] Setting model memory limit to 20 MB
[00:09:24]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] Successfully set job state to [opened] for job [fq_multi_1_ae]
[00:09:24]                   │ debg Starting datafeed 'datafeed-fq_multi_1_se' with start: '0', end: '1607615488822'...
[00:09:24]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] Datafeed started (from: 1970-01-01T00:00:00.000Z to: 2020-12-10T15:51:28.822Z) with frequency [600000ms]
[00:09:24]                   │ debg Waiting up to 120000ms for datafeed state to be stopped...
[00:09:24]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:24]                   │ debg --- retry.waitForWithTimeout error: expected job state to be stopped but got started
[00:09:24]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared/9mDDrjL5Q_m-36WRo_Fk5g] update_mapping [_doc]
[00:09:24]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 10000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:25]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:25]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:25]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 20000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:25]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 30000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:25]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:25]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:25]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 40000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:26]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:26]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:26]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 50000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:26]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:26]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:26]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 60000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:27]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:27]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:27]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 70000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:27]                   │ info [o.e.x.m.j.p.DataCountsReporter] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 80000 records written to autodetect; missingFieldCount=0, invalidDateCount=0, outOfOrderCount=0
[00:09:27]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:27]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:28]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:28]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:28]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:28]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:29]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:29]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:29]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:29]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:30]                   │ info [o.e.x.m.d.DatafeedJob] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] Lookback has finished
[00:09:30]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] attempt to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]
[00:09:30]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] try lock [20s] to stop datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae]...
[00:09:30]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] stopping datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae], acquired [true]...
[00:09:30]                   │ info [o.e.x.m.d.DatafeedManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [no_realtime] datafeed [datafeed-fq_multi_1_se] for job [fq_multi_1_ae] has been stopped
[00:09:30]                   │ info [o.e.x.m.j.p.a.AutodetectProcessManager] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] Closing job [fq_multi_1_ae], because [close job (api)]
[00:09:30]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] [autodetect/232329] [CCmdSkeleton.cc@51] Handled 86274 records
[00:09:30]                   │ info [o.e.x.m.p.l.CppLogMessageHandler] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] [autodetect/232329] [CAnomalyJob.cc@1569] Pruning all models
[00:09:30]                   │ debg Fetching datafeed state for datafeed datafeed-fq_multi_1_se
[00:09:30]                   │ debg Waiting up to 120000ms for job state to be closed...
[00:09:30]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:30]                   │ debg --- retry.waitForWithTimeout error: expected job state to be closed but got closing
[00:09:30]                   │ info [o.e.c.m.MetadataMappingService] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [.ml-anomalies-shared/9mDDrjL5Q_m-36WRo_Fk5g] update_mapping [_doc]
[00:09:30]                   │ info [o.e.x.m.p.AbstractNativeProcess] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] State output finished
[00:09:30]                   │ info [o.e.x.m.j.p.a.o.AutodetectResultProcessor] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] 120 buckets parsed from autodetect output
[00:09:30]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:30]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:31]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:31]                   │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:09:31]                   │ info [o.e.x.m.j.p.a.AutodetectCommunicator] [kibana-ci-immutable-centos-tests-xxl-1607612177781668663] [fq_multi_1_ae] job closed
[00:09:31]                   │ debg Fetching anomaly detection job stats for job fq_multi_1_ae...
[00:09:31]                 └-> should fetch anomalies table data
[00:09:31]                   └-> "before each" hook: global before each
[00:09:31]                   └- ✖ fail: apis Machine Learning ResultsService GetAnomaliesTableData should fetch anomalies table data
[00:09:31]                   │       Error: expected 13 to sort of equal 12
[00:09:31]                   │       + expected - actual
[00:09:31]                   │ 
[00:09:31]                   │       -13
[00:09:31]                   │       +12
[00:09:31]                   │       
[00:09:31]                   │       at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
[00:09:31]                   │       at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
[00:09:31]                   │       at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
[00:09:31]                   │       at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16)
[00:09:31]                   │ 
[00:09:31]                   │ 

Stack Trace

Error: expected 13 to sort of equal 12
    at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:100:11)
    at Assertion.eql (/dev/shm/workspace/parallel/3/kibana/packages/kbn-expect/expect.js:244:8)
    at Context.<anonymous> (test/api_integration/apis/ml/results/get_anomalies_table_data.ts:79:40)
    at Object.apply (/dev/shm/workspace/parallel/3/kibana/packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:84:16) {
  actual: '13',
  expected: '12',
  showDiff: true
}

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

id | before | after | diff
ml | 5.2MB | 5.2MB | +40.0B

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

Labels
Feature:Anomaly Detection · Feature:Data Frame Analytics · :ml · non-issue · release_note:skip · review · v7.11.0 · v8.0.0