
Kibana: mapping set to strict, dynamic introduction of [references] within [doc] is not allowed #35061

Closed
raulk89 opened this issue Apr 15, 2019 · 27 comments
Labels
Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc Team:Operations Team label for Operations Team

Comments

@raulk89

raulk89 commented Apr 15, 2019

Kibana version: 7.0.0

Elasticsearch version: 7.0.0

Server OS version: Centos 7.6 64 bit

Browser version: Chrome Version 73.0.3683.86 (Official Build) (64-bit)
(tried with IE also)

Browser OS version: Version 73.0.3683.86 (Official Build) (64-bit)

Original install method (e.g. download page, yum, from source, etc.): yum

Describe the bug:
Upgraded both from 6.7.1 to 7.0.0.
The cluster is green, but when I open Kibana in the browser, I get:

{"message":"mapping set to strict, dynamic introduction of [references] within [doc] is not allowed: [strict_dynamic_mapping_exception] mapping set to strict, dynamic introduction of [references] within [doc] is not allowed","statusCode":400,"error":"Bad Request"}

Steps to reproduce:

  1. Upgrade both Elasticsearch and Kibana from 6.7.1 to 7.0.0.
  2. From a browser, go to the Kibana URL; you get the error.

(kibana is using default conf)

Expected behavior: Kibana UI should open up.

Screenshots (if relevant):

Errors in browser console (if relevant):
(screenshot: error message shown in the browser console)

Provide logs and/or server output (if relevant):
Elasticsearch has the following errors in its logfile when the Kibana UI is visited:

[2019-04-11T15:32:31,084][DEBUG][o.e.a.b.TransportShardBulkAction] [dev-dc1-rk] [.kibana_1][0] failed to execute bulk item (create) index {[.kibana][_doc][config:7.0.0], source[{"config":{"buildNum":23117,"defaultIndex":"11ed3b50-4180-11e9-8c13-3da5f557b485"},"type":"config","references":[],"updated_at":"2019-04-11T12:32:31.082Z"}]}
org.elasticsearch.index.mapper.StrictDynamicMappingException: mapping set to strict, dynamic introduction of [references] within [doc] is not allowed
        at org.elasticsearch.index.mapper.DocumentParser.parseArray(DocumentParser.java:536) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:394) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:381) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:98) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:71) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:267) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:770) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:747) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:719) ~[elasticsearch-7.0.0.jar:7.0.0]

Any additional context:
I opened a discussion here, and there are a lot of people having the same problem:

https://discuss.elastic.co/t/kibana-mapping-set-to-strict-dynamic-introduction-of-references-within-doc-is-not-allowed/176403

@azasypkin azasypkin added the Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc label Apr 15, 2019
@elasticmachine
Contributor

Pinging @elastic/kibana-platform

@azasypkin
Member

cc @mikecote

@mikecote
Contributor

Hi @raulk89,

It seems like the migrations may have failed to execute. When migrating on startup, Kibana does the following:

  1. creates a new index (e.g. .kibana_2)
  2. adds the new mappings to the new index
  3. migrates documents over to the new index, applying data transforms to them
  4. changes the .kibana alias to point at the newly created index

I believe a failure happened at step 3, leaving Kibana pointed at the un-migrated index, which contains the old mappings. In that scenario, Kibana should have logged something along the lines of:

server    log   [09:17:36.745] [info][migrations] Migrating .kibana_1 saved objects to .kibana_2
server    log   [09:17:36.759] [warning][migrations] Failed to transform document [object Object]. Transform: index-pattern:7.0.0

You can also confirm whether the migrations failed by looking at which index the .kibana alias points to (curl -XGET 'localhost:9200/_cat/aliases'). If it currently points to .kibana_6 and there is a .kibana_N with N greater than 6 (curl -XGET 'localhost:9200/_cat/indices'), that would confirm the migrations failed.
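The alias check described above can be sketched as a small helper. This is just an illustration: the function name is made up, and the parsing assumes the default `_cat` output format and underscore-numbered index names (.kibana_1, .kibana_2, ...). In a live cluster you would feed it the output of the two curl commands.

```shell
# Given `_cat/aliases` output and `_cat/indices` output, report whether a
# newer .kibana_N index exists than the one the .kibana alias points to.
kibana_migration_check() {
  local aliases="$1" indices="$2"

  # An alias line looks like: ".kibana .kibana_1 - - -"; extract the number.
  local current
  current=$(echo "$aliases" | awk '$1 == ".kibana" {print $2}' | sed 's/.*_//')

  # Highest numbered .kibana_N index present (ignores .kibana_task_manager,
  # which has no digits after the underscore).
  local highest
  highest=$(echo "$indices" | grep -o '\.kibana_[0-9][0-9]*' \
            | sed 's/.*_//' | sort -n | tail -1)

  if [ "$highest" -gt "$current" ]; then
    # A newer index exists but the alias was never moved: step 3 or 4 failed.
    echo "migration-failed"
  else
    echo "ok"
  fi
}
```

For the output shown later in this thread (alias on .kibana_1, but a .kibana_2 index present), this would report "migration-failed".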

Is there any chance logs like those have been created?

cc @tylersmalley let me know if I missed anything about the migration process or if you can step in as well.

@raulk89
Author

raulk89 commented Apr 15, 2019

Hi

Do you mean these messages are in the Kibana log or the Elasticsearch logs?

At the moment, .kibana points to .kibana_1:

.kibana .kibana_1 - - -

[root@dev-dc1-rk ~]# curl -sXGET `hostname`':9200/_cat/indices?v&s=health,status,index:desc' | grep kibana
green  open   .monitoring-kibana-7-2019.04.15 Pp-ME7TvTYePaGoXUC9Nuw   1   1      10130            0      5.3mb          2.6mb
green  open   .monitoring-kibana-7-2019.04.14 NmNnTncCQ62gtzf9X5hL0g   1   1      17278            0      8.8mb          4.4mb
green  open   .monitoring-kibana-7-2019.04.13 pYU1Ks1YRCO1VDhsrwl7RA   1   1      17278            0      8.8mb          4.4mb
green  open   .monitoring-kibana-7-2019.04.12 cCKC8KD9QoCSr7JmQLFmvg   1   1      17278            0      8.8mb          4.4mb
green  open   .monitoring-kibana-7-2019.04.11 edV_9b1yQkSFungLJIpIJw   1   1       8359            0      5.3mb          2.6mb
green  open   .monitoring-kibana-6-2019.04.11 IB3Uy0GWSwmpqfYUyYxdBA   1   1        888            0    837.8kb        412.2kb
green  open   .kibana_task_manager            otlBv6ONQ66Ur0ym3bR67g   1   1          2            0     41.1kb         20.5kb
green  open   .kibana_2                       5ovfWL6dQ2CoALdUwNrBSQ   1   1          0            0       566b           283b
green  open   .kibana_1                       jOBlVfcdSGefj1csT4ta4A   1   1         12            0       53kb         26.5kb

Regards
Raul

@mikecote
Contributor

@raulk89 I meant the Kibana logs. Every time Kibana starts, it logs a message like Migrating .kibana_1 saved objects to .kibana_2, followed by any errors. What logs are created when you start Kibana?

@mikecote
Contributor

@raulk89 those logs confirm that .kibana_2 was created but the migration failed at step 3 of my description above. You can try re-running the migrations by deleting .kibana_2 (or renaming it to something else, to be safe) and restarting Kibana. Kibana should emit error logs when the migrations fail to run.
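The recovery steps suggested here might look like the following sketch. The index names are taken from this thread; the reindex-to-backup step is just a cautious alternative to renaming, and all commands assume a local single-node setup reachable on localhost.

```shell
# Stop Kibana so it doesn't race the cleanup.
systemctl stop kibana

# Keep a copy of the half-migrated index instead of deleting it outright.
curl -s -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": { "index": ".kibana_2" },
  "dest":   { "index": ".kibana_2_backup" }
}'

# Remove the failed migration target so Kibana can re-run the migration
# from .kibana_1 on the next startup.
curl -s -XDELETE 'localhost:9200/.kibana_2'

# Restarting Kibana re-triggers the migration; watch its logs for errors.
systemctl start kibana
journalctl -u kibana -f
```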

@dguendisch

@mikecote: when does the migration of a .kibana index actually happen? Is it on startup?
For me the migration seems to have worked when I did a rolling upgrade, but it failed (or probably was never kicked off) when I set up a fresh 7.0 installation and triggered a snapshot restore.
Would it have been enough to, e.g., simply restart Kibana after the snapshot restore?

@mikecote
Contributor

Hi @dguendisch, yes, the migrations execute on Kibana startup. Regarding snapshot restore, I'm not 100% sure, but restarting Kibana should migrate the saved objects restored from the snapshot. You can try that and keep a close eye on the Kibana logs. (Just confirm you're restoring the proper Kibana index, as the number increases on every migration: .kibana_1, .kibana_2, etc. You can look at the .kibana alias to see which one is currently being used.) I can try to get answers about the rolling upgrade if you like.

@dguendisch

No worries, the rolling upgrade worked for me; only the "new-cluster -> snapshot-restore" approach left me with the error message from this issue. I'll retry and add a Kibana restart as the last step.

@raulk89
Author

raulk89 commented Apr 15, 2019

For me there are no logs at all:

[root@dev-dc1-rk ~]# /usr/share/kibana/bin/kibana

(empty, nothing here)

@tylersmalley
Contributor

@raulk89 for me there is about a 10-second delay before there is any output. Are you in a VM or a resource-restricted environment that would delay that further? If you think this is a bug, create a new issue, or post on the Discuss forums if you would like more assistance.

@raulk89
Author

raulk89 commented Apr 15, 2019

I usually start this through service instead:
systemctl start kibana

I have waited several minutes already.
Kibana is up; if I go to the Kibana UI URL I can see this "mapping set to strict, dyna..." error, so it is running, but... yeah.

But should I delete this .kibana_2?

Raul

@tylersmalley
Contributor

@raulk89, to get this "mapping set to strict, dyna..." error, I am pretty sure you would have had to start Kibana 7 and then modify/swap out the underlying index with one from a 6.x instance. Can you confirm this is the case? If so, you will need to restart Kibana. We do not support swapping out the underlying index while Kibana is running.

For the logs, you probably need to look at the service-level logs: journalctl -u kibana

@dguendisch

@mikecote, I retried my empty 7.0 setup, restored from the snapshot, and upon restarting Kibana it migrated its indices, so all fine so far. Thanks for the hints.

@raulk89
Author

raulk89 commented Apr 15, 2019

Ok, I do not know what I did differently last week when I got these errors. Now I cloned the same VM template again and tried the 6.7.1 -> 7.0.0 upgrade again:

{"type":"log","@timestamp":"2019-04-15T20:29:35Z","tags":["info","migrations"],"pid":7127,"message":"Detected mapping change in \"_meta\""}
{"type":"log","@timestamp":"2019-04-15T20:29:35Z","tags":["info","migrations"],"pid":7127,"message":"Removing index templates: kibana_index_template:.kibana"}
{"type":"log","@timestamp":"2019-04-15T20:29:35Z","tags":["info","migrations"],"pid":7127,"message":"Creating index .kibana_2."}
{"type":"log","@timestamp":"2019-04-15T20:29:36Z","tags":["info","migrations"],"pid":7127,"message":"Migrating .kibana_1 saved objects to .kibana_2"}
{"type":"log","@timestamp":"2019-04-15T20:29:36Z","tags":["info","migrations"],"pid":7127,"message":"Pointing alias .kibana to .kibana_2."}
{"type":"log","@timestamp":"2019-04-15T20:29:36Z","tags":["info","migrations"],"pid":7127,"message":"Finished in 510ms."}
{"type":"log","@timestamp":"2019-04-15T20:29:36Z","tags":["listening","info"],"pid":7127,"message":"Server running at http://localhost:5601"}
{"type":"log","@timestamp":"2019-04-15T20:29:36Z","tags":["status","plugin:spaces@7.0.0","info"],"pid":7127,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

And it worked.

Raul

@tylersmalley
Contributor

That's great @raulk89. As long as you aren't restoring an old index, you should be fine. Good to close this issue now?

@raulk89
Author

raulk89 commented Apr 16, 2019

Well, it's still confusing why it happened. :)
But yes, I think you can close it. Could someone first sum up, in a sentence or two, what exactly the problem was here?

Raul

@tylersmalley
Contributor

Yup, to sum it up:

Kibana runs migrations on startup, so if you swap out the underlying index with one from a previous version, issues can arise due to the data structure. The easy solution is to restart Kibana and allow the migrations to run on the restored data.

@jbudz jbudz added the Team:Operations Team label for Operations Team label Jun 20, 2019
@elasticmachine
Contributor

Pinging @elastic/kibana-operations

@jbudz jbudz reopened this Jun 20, 2019
@jbudz
Member

jbudz commented Jun 20, 2019

We're seeing this on a few more instances, and the Discuss thread has more discussion. Reopening for another look and for steps to recover.

editing so I don't spam too many notifications:

  • possible older .kibana templates?

@rashmivkulkarni
Contributor

More reference here:

Environment: 7.2.0 BC9 cloud instance
Created a user, super, with superuser permissions.
Executed the cURL request below:

curl -XGET https://<username>:<pwd>@55c8096ae8a7403fa9daa22cbbd42a49.us-east-1.aws.staging.foundit.no:9243/api/status

Response :

{"statusCode":400,"error":"Bad Request","message":"mapping set to strict, dynamic introduction of [references] within [_doc] is not allowed: [strict_dynamic_mapping_exception] mapping set to strict, dynamic introduction of [references] within [_doc] is not allowed"}

Then deleted the .kibana index, restarted the entire deployment on cloud, and reissued the command; still getting the same error as above. All requests, including GET, get the same error.

cc @LeeDr

@hume-github

I've just had this occur after a 6.7.2 -> 7.1.1 kibana+es upgrade.

I think this can occur when ES is running slowly on startup. My ES was coming up, reasonably performant but clearly still initializing. I "systemctl start"'d kibana and loaded the browser. Kibana began migrating the indices, but ran into this:

kibana[52860]: {"type":"log","@timestamp":"2019-06-21T12:32:23Z","tags":["fatal","root"],"pid":52860,"message":"{ Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)\n status: undefined,\n displayName: 'RequestTimeout',\n message: 'Request Timeout after 30000ms',\n body: undefined,\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { statusCode: 503,\n error: 'Service Unavailable',\n message: 'Request Timeout after 30000ms' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}

That kibana process exited, and systemctl restarted a new one, which produced the following:

kibana[61831]: {"type":"log","@timestamp":"2019-06-21T12:33:03Z","tags":["warning","migrations"],"pid":61831,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_3 and restarting Kibana."}

Because systemd was so quick to restart it I had no idea anything went wrong at first.

So I think the sum of problems can be described as:

  • Kibana can start a migration but fail, and doesn't clean up if this occurs (which may not be possible, to be fair...)
  • systemd can hide the failure
  • ES doesn't have a means to signal to kibana that it isn't ready
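One way to surface the failure that systemd hid is to check the unit's restart counter and grep the journal for fatal entries. A sketch, assuming a reasonably recent systemd (the NRestarts property was added in v235) and the JSON log format Kibana emits above:

```shell
# A value greater than 0 means the first Kibana process died (e.g. the
# fatal migration timeout above) and was silently replaced by systemd.
systemctl show kibana -p NRestarts

# Look for fatal entries in Kibana's JSON logs from the last hour.
journalctl -u kibana --since "1 hour ago" | grep '"tags":\["fatal'
```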

@liza-mae
Contributor

liza-mae commented Jun 24, 2019

The issue @Rasroh referenced here, which came from the functional tests (canvas smoke test), is not related to this; as far as we can tell it is just a test issue. This issue is related to migrations and upgrades.

@jbudz
Member

jbudz commented Aug 9, 2019

Wanted to bump this to say it's still happening, and to ask if there's a way we can run the migrations check more than once or provide a more detailed recovery workflow.

related: #32237

@rudolf
Contributor

rudolf commented Mar 5, 2020

I'm closing this as we haven't seen any further occurrences, and the root cause always seems to be one of the following two reasons:

  1. An exception occurs while trying to migrate a specific object (e.g. a dashboard), and the unmigrated object is then written to the new Kibana index, which has updated mappings. This usually comes with a log entry like the following; please open a bug report if you see this:

server log [09:17:36.759] [warning][migrations] Failed to transform document [object Object]. Transform: index-pattern:7.0.0

  2. While Kibana is kept running, a snapshot with an old, unmigrated index is restored. This is expected behaviour. Kibana should always be restarted when restoring .kibana* indices from a snapshot.

@rudolf rudolf closed this as completed Mar 5, 2020
@danksim

danksim commented Mar 24, 2020

  While Kibana is kept running, a snapshot with an old, unmigrated index is restored. This is expected behaviour. Kibana should always be restarted when restoring data from a snapshot.

Hm... Are you sure it needs to be restarted? If so, why does Kibana need to be restarted after restoring from a snapshot? @rudolf

@rudolf
Contributor

rudolf commented Apr 20, 2020

@danksim Sorry I missed your comment. When a snapshot with .kibana* indices from a previous version of Kibana (e.g. 7.1.0) is restored while a newer version (e.g. 7.6.0) is running, the newer version won't detect that the saved object indices in Elasticsearch now require a migration. This could lead to the following data-loss scenarios:

  • If mapping changes were introduced between v7.1.0 and v7.6.0, writes from 7.6.0 might fail because of the mismatch between the document and the mappings, leading to data loss.
  • Queries on fields introduced in 7.6.0 won't match any of the outdated documents and will return 0 results. If any writes are based on the result of such a query, the written document will include incorrect/inconsistent data.

Even if a snapshot from the same version of Kibana is restored it might lead to inconsistent data / data corruption:

  1. The browser reads a document from ES, or performs a query
  2. An earlier snapshot is restored
  3. The browser sends a write request based on the results from (1). The data stored in ES is now in an inconsistent state.

Although it's difficult to give concrete examples for all the plugins, I would say most plugins are susceptible to these scenarios, although some of them could be mitigated by using optimistic concurrency control (specifying the version field in the saved objects client / API).
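A hedged sketch of what that optimistic concurrency control looks like against the saved objects HTTP API: the update endpoint accepts a version field obtained from a prior read, and rejects the write with a 409 conflict if the document changed in between (e.g. because an older snapshot was restored). The object type, id, and version value below are placeholders.

```shell
# Update a saved object only if it is still at the version we read earlier.
# A mismatch (document changed underneath us) returns 409 Conflict instead
# of silently overwriting the newer data.
curl -s -XPUT 'localhost:5601/api/saved_objects/index-pattern/my-pattern-id' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d'
{
  "attributes": { "title": "logs-*" },
  "version": "WzQsMV0="
}'
```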

If you're restoring from a snapshot, I would therefore always recommend stopping the Kibana process first, then restoring the snapshot, and only then bringing the Kibana node back online.
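That recommended order can be sketched as follows. The repository and snapshot names are placeholders, and deleting the live .kibana* indices before restoring assumes you want the snapshot to fully replace them.

```shell
# 1. Stop Kibana before touching its indices.
systemctl stop kibana

# 2. Remove the live saved-object indices so the restore can recreate them.
curl -s -XDELETE 'localhost:9200/.kibana*'

# 3. Restore the Kibana indices (and their aliases) from the snapshot.
curl -s -XPOST 'localhost:9200/_snapshot/my_repo/my_snapshot/_restore' \
  -H 'Content-Type: application/json' -d'
{ "indices": ".kibana*", "include_aliases": true }'

# 4. Only now bring Kibana back; it runs any required migrations on startup.
systemctl start kibana
```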
