Kibana no such index - Kibana re-adds index incorrectly. #53322

Closed
GummyDonut opened this issue Dec 17, 2019 · 5 comments
Labels: Team:Operations

Comments

@GummyDonut

Kibana version: 7.4.2

Elasticsearch version: 7.4.2

Server OS version: Windows

Browser version: Google Chrome

Original install method (e.g. download page, yum, from source, etc.): Download Page

Describe the bug:
This is a fairly unusual bug. We currently have Elasticsearch (7.4.2) and Kibana (7.4.2) running and everything is fine. However, if I manually delete the data folder under Elasticsearch, Kibana outputs an error message (see below for details).

It then re-adds the Kibana index without the proper alias; in my case the index name is "ui-kibana".
Without the proper alias, every time I create an index pattern in Kibana it does not save properly, and the Discover page keeps asking me to create a new index pattern.

*Note: restarting Kibana fixes this issue and the index aliases are put back in place properly.

Steps to reproduce:

  1. Run Elasticsearch with this configuration (the default one from the download):
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

  2. Run Kibana with this configuration:
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: "ui-kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

  3. Shut down Elasticsearch and delete the data folder under the Elasticsearch directory.
  4. Start Elasticsearch again.
  5. Query /_cat/indices and you will notice that ui-kibana is there but not aliased properly (see the command sketch below).
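
As a quick way to inspect the resulting state from the command line (a minimal sketch, assuming Elasticsearch is on the default http://localhost:9200 and curl is available):

# List concrete indices; after the repro, ui-kibana shows up here as a real index
curl -s "http://localhost:9200/_cat/indices?v"

# List aliases; in the broken state there is no ui-kibana alias entry
curl -s "http://localhost:9200/_cat/aliases?v"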

Expected behavior:
When Elasticsearch starts up again while Kibana is still running, the newly created Kibana index should have an alias associated with it, similar to Figure 2 below (note that the alias was only re-added after restarting Kibana). At the moment it creates the broken index ui-kibana (Figure 1), and index patterns cannot be stored properly.
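
As a sketch of the difference (the versioned index name ui-kibana_1 is an assumption; Kibana 7.x migrations normally create a versioned index and point the configured kibana.index name at it as an alias):

# Resolve ui-kibana and show its aliases
curl -s "http://localhost:9200/ui-kibana/_alias?pretty"
# Healthy state (roughly): {"ui-kibana_1": {"aliases": {"ui-kibana": {}}}}
# Broken state (roughly):  {"ui-kibana":   {"aliases": {}}}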

Screenshots (if relevant): _cat/indices?v results

[Figure 1 - _cat/indices output after deleting the data folder and restarting Elasticsearch]

[Figure 2 - _cat/indices output after restarting Kibana]

Provide logs and/or server output (if relevant):

After restarting elasticsearch this is the server output

[index_not_found_exception] no such index [ui-kibana], with { resource.type="index_or_alias" & resource.id="ui-kibana" & index_uuid="_na_" & index="ui-kibana" } :: {"path":"/ui-kibana/_search","query":{},"body":"{\"track_total_hits\":true,\"query\":{\"term\":{\"type\":{\"value\":\"space\"}}},\"aggs\":{\"disabledFeatures\":{\"terms\":{\"field\":\"space.disabledFeatures\",\"include\":[\"discover\",\"visualize\",\"dashboard\",\"dev_tools\",\"advancedSettings\",\"indexPatterns\",\"savedObjectsManagement\",\"graph\",\"monitoring\",\"ml\",\"apm\",\"maps\",\"canvas\",\"infrastructure\",\"logs\",\"siem\",\"uptime\"],\"size\":17}}},\"size\":0}","statusCode":404,"response":"{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [ui-kibana]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"ui-kibana\",\"index_uuid\":\"_na_\",\"index\":\"ui-kibana\"}],\"type\":\"index_not_found_exception\",\"reason\":\"no such index [ui-kibana]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"ui-kibana\",\"index_uuid\":\"_na_\",\"index\":\"ui-kibana\"},\"status\":404}"}
    at respond (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\src\lib\transport.js:349:15)
    at checkRespForFailure (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\src\lib\transport.js:306:7)
    at HttpConnector.<anonymous> (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\src\lib\connectors\http.js:173:7)
    at IncomingMessage.wrapper (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\node_modules\lodash\lodash.js:4929:19)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1103:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)

Kibana will then continuously output this until restarted:

 [19:13:06.975] [warning][stats-collection] Unable to fetch data from kibana collector
 error  [19:13:07.032] [warning][stats-collection] [illegal_argument_exception] Fielddata is disabled on text fields by default. Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead. :: {"path":"/ui-kibana/_search","query":{"ignore_unavailable":true,"filter_path":"aggregations.types.buckets"},"body":"{\"size\":0,\"query\":{\"terms\":{\"type\":[\"dashboard\",\"visualization\",\"search\",\"index-pattern\",\"graph-workspace\",\"timelion-sheet\"]}},\"aggs\":{\"types\":{\"terms\":{\"field\":\"type\",\"size\":6}}}}","statusCode":400,"response":"{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"Fielddata is disabled on text fields by default. Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.\"}],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[{\"shard\":0,\"index\":\"ui-kibana\",\"node\":\"TAmV6ORtQW6lUZOKzyB25g\",\"reason\":{\"type\":\"illegal_argument_exception\",\"reason\":\"Fielddata is disabled on text fields by default. Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.\"}}],\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"Fielddata is disabled on text fields by default. Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"Fielddata is disabled on text fields by default. Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.\"}}},\"status\":400}"}
    at respond (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\src\lib\transport.js:349:15)
    at checkRespForFailure (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\src\lib\transport.js:306:7)
    at HttpConnector.<anonymous> (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\src\lib\connectors\http.js:173:7)
    at IncomingMessage.wrapper (C:\Users\phoang\Documents\archive\elastic7Workspace\kibana-7.4.2-windows-x86_64\kibana-7.4.2-windows-x86_64\node_modules\elasticsearch\node_modules\lodash\lodash.js:4929:19)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1103:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)

Any additional context:
I ran into this issue because I wanted to test a complete replacement of the Elasticsearch files while Kibana is still running.

@Bargs added the Team:Operations label on Dec 17, 2019
@elasticmachine
Contributor

Pinging @elastic/kibana-operations (Team:Operations)

@jbudz
Member

jbudz commented Dec 18, 2019

We only run migrations once, on startup, and that is what sets up the aliasing and mappings. So if the Kibana index is created at runtime and something then performs a write without mappings, we get dynamic mappings without an alias and the issues above.

#32237 is the root issue. In the interim maybe we can dig into your scenario of deleting elasticsearch files. Any context on the data folder deletion? Is using the REST API to delete all except system indices an option?
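
For reference, a sketch of the kind of REST-based cleanup suggested above (a hedged example, not an official recommendation: it assumes Elasticsearch on the default http://localhost:9200, that wildcard deletes are allowed, i.e. action.destructive_requires_name is not enabled, and that the exclusion patterns fit your index naming):

# Delete every index except dot-prefixed system indices and the Kibana
# saved-objects indices (ui-kibana* here, since kibana.index is customized),
# instead of removing the data folder on disk.
curl -s -X DELETE "http://localhost:9200/*,-.*,-ui-kibana*"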

@GummyDonut
Author

GummyDonut commented Dec 19, 2019

The data folder deletion was done manually through File Explorer.
Unfortunately, using the REST API is not an option, as we completely remove the Elasticsearch instance and replace it.
We could just restart Kibana after the new Elasticsearch is back online. I was just wondering whether this was a known issue with Kibana.

Thanks for the fast reply.

@tylersmalley
Contributor

> We could just restart Kibana after the new Elasticsearch is back online. I was just wondering whether this was a known issue with Kibana.

Yes, Kibana will need to be restarted if its index is deleted. We used to allow for this, but it added overhead to every request to ensure the index exists with the correct mappings.

Ok to close this issue?
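
For anyone following the same "replace Elasticsearch, then restart Kibana" workflow, a rough sketch of the sequencing (the health check and restart step are assumptions; on Windows the restart would be whatever mechanism normally launches Kibana, e.g. a service manager or re-running bin\kibana.bat):

# Wait until the replacement Elasticsearch is answering before restarting Kibana,
# so Kibana's startup migration can recreate the ui-kibana alias and mappings.
until curl -s "http://localhost:9200/_cluster/health" > /dev/null; do
  sleep 5
done
# ...then restart Kibana by whatever means it is normally run.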

@GummyDonut
Author

Restarting seems to fix the issue; for now I think it's okay to close.
