[Stack monitoring] Kibana self/internal-monitoring vs metricbeat detection doesn't work #84044

Closed
daleckystepan opened this issue Nov 23, 2020 · 6 comments
Labels
bug (Fixes for quality problems that affect the customer experience) · Feature:Stack Monitoring · Team:Monitoring (Stack Monitoring team)

Comments


daleckystepan commented Nov 23, 2020

Kibana version:
7.10.0

Elasticsearch version:
7.10.0

Server OS version:
Ubuntu 20.04

Browser version:
Chrome 87

Browser OS version:
MacOS Big Sur

Original install method (e.g. download page, yum, from source, etc.):
Docker

Describe the bug:
Kibana cannot properly differentiate between self/internal monitoring of components and Metricbeat monitoring of components.

Steps to reproduce:

  1. Install Elasticsearch (multiple instances on one node) and Kibana in Docker (the same happens with deb or rpm installs)
  2. Configure monitoring using Metricbeat (with the Logstash output) and disable internal collection
  3. Install Logstash and configure the pipelines
  4. View the Stack Monitoring page in Kibana (monitoring itself works OK: charts are visible and data is present)

Expected behavior:
Kibana properly detects the monitoring collection method (internal collection vs. Metricbeat) for each component

Screenshots (if relevant):
Screenshot 2020-11-23 at 10:10:41
Screenshot 2020-11-23 at 10:11:05
Screenshot 2020-11-23 at 10:11:17

Errors in browser console (if relevant):

Provide logs and/or server output (if relevant):

Any additional context:
Relevant ES options:

# Keep overall monitoring collection enabled, but turn off Elasticsearch's own
# internal collection so that Metricbeat is the source of ES monitoring documents:
xpack.monitoring.collection.enabled: true
xpack.monitoring.elasticsearch.collection.enabled: false

Relevant MB options:

- module: elasticsearch
  period: 10s
  hosts:
  - "https://XXX:9200"
  - "https://XXX:9210"
  - "https://XXX:9251"
  ssl.certificate_authorities: [ "{{ ssl_ca_cert_path }}" ]
  username: "XXX"
  password: "XXX"
  xpack.enabled: true

Logstash:

input {
    beats {
        port => 5044
    }
}

filter {
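  # If the event does not already carry a target index in its metadata, build one
  # from the Beat name and version (APM events also include processor.event).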
  if ![@metadata][index] {
    if [@metadata][beat] == "apm" {
        mutate {
            add_field => { "[@metadata][index]" => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[processor][event]}" } 
        }
    }
    else {
        mutate {
            add_field => { "[@metadata][index]" => "%{[@metadata][beat]}-%{[@metadata][version]}" }
        }
    }
  }
}

output {
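    # Every branch writes to the same Elasticsearch cluster; the branches differ only
    # in whether document_id and/or pipeline are taken from the event metadata.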
    if [@metadata][_id] and [@metadata][pipeline]  {
        elasticsearch {
            hosts => ["https://{{ elastic_search_server }}:9200"]
            sniffing => true
            user => logstash
            password => XXX

            ssl => true
            ssl_certificate_verification => true
            cacert => "{{ ssl_ca_cert_path }}"
            
            manage_template => false
            ilm_enabled => true
            ilm_pattern => "{now/d}-000001"
            
            index => "%{[@metadata][index]}"
            document_id => "%{[@metadata][_id]}"
            pipeline => "%{[@metadata][pipeline]}"
        }
    } else if [@metadata][_id] {
        elasticsearch {
            hosts => ["https://{{ elastic_search_server }}:9200"]
            sniffing => true
            user => logstash
            password => XXX

            ssl => true
            ssl_certificate_verification => true
            cacert => "{{ ssl_ca_cert_path }}"
            
            manage_template => false
            ilm_enabled => true
            ilm_pattern => "{now/d}-000001"
            
            index => "%{[@metadata][index]}"
            document_id => "%{[@metadata][_id]}"
        }
    } else if [@metadata][pipeline] {
        elasticsearch {
            hosts => ["https://{{ elastic_search_server }}:9200"]
            sniffing => true
            user => logstash
            password => XXX

            ssl => true
            ssl_certificate_verification => true
            cacert => "{{ ssl_ca_cert_path }}"
            
            manage_template => false
            ilm_enabled => true
            ilm_pattern => "{now/d}-000001"
            
            index => "%{[@metadata][index]}"
            pipeline => "%{[@metadata][pipeline]}"
        }
    } else {
        elasticsearch {
            hosts => ["https://{{ elastic_search_server }}:9200"]
            sniffing => true
            user => logstash
            password => XXX

            ssl => true
            ssl_certificate_verification => true
            cacert => "{{ ssl_ca_cert_path }}"
            
            manage_template => false
            ilm_enabled => true
            ilm_pattern => "{now/d}-000001"
            
            index => "%{[@metadata][index]}"
        } 
    }
}

GET _cat/indices/.moni*

green open .monitoring-logstash-7-mb yBH9Ge9_RQCru3K7jqzNIQ 1 1 2143292      0 800.7mb 441.4mb
green open .monitoring-beats-7-mb    Gfy06QeeT_uAO_gC2elQ3g 1 1 1589000      0   1.5gb 797.7mb
green open .monitoring-es-7-mb       amcmOIMARqadZiIkoOxa-A 1 1  974318 252000   1.3gb 679.1mb

Thank you for your help

@daleckystepan changed the title from "[Stack monitoring] Kibana self/internal-monitoring vs metricbeat detection don't work" to "[Stack monitoring] Kibana self/internal-monitoring vs metricbeat detection doesn't work" Nov 23, 2020
@afharo added the bug, Feature:Stack Monitoring, and Team:Monitoring labels Nov 23, 2020
@elasticmachine
Contributor

Pinging @elastic/stack-monitoring (Team:Monitoring)

@chrisronline
Contributor

Hi @daleckystepan!

Thanks for filing this.

To get some more information, are you able to run the following query against your monitoring cluster and return the results?

POST .monitoring-*/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "timestamp": {
              "gte": "now-30s",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "indices": {
      "terms": {
        "field": "_index",
        "size": 50
      },
      "aggs": {
        "es_uuids": {
          "terms": {
            "field": "node_stats.node_id",
            "size": 10000
          },
          "aggs": {
            "by_timestamp": {
              "max": {
                "field": "timestamp"
              }
            }
          }
        },
        "kibana_uuids": {
          "terms": {
            "field": "kibana_stats.kibana.uuid",
            "size": 10000
          },
          "aggs": {
            "by_timestamp": {
              "max": {
                "field": "timestamp"
              }
            }
          }
        },
        "beats_uuids": {
          "terms": {
            "field": "beats_stats.beat.uuid",
            "size": 10000
          },
          "aggs": {
            "by_timestamp": {
              "max": {
                "field": "timestamp"
              }
            },
            "beat_type": {
              "terms": {
                "field": "beats_stats.beat.type",
                "size": 10000
              }
            },
            "cluster_uuid": {
              "terms": {
                "field": "cluster_uuid",
                "size": 10000
              }
            }
          }
        },
        "logstash_uuids": {
          "terms": {
            "field": "logstash_stats.logstash.uuid",
            "size": 10000
          },
          "aggs": {
            "by_timestamp": {
              "max": {
                "field": "timestamp"
              }
            },
            "cluster_uuid": {
              "terms": {
                "field": "cluster_uuid",
                "size": 10000
              }
            }
          }
        }
      }
    }
  }
}

@daleckystepan
Author

@chrisronline
Contributor

Thanks @daleckystepan!

I think I see the issue here.

The code expects the index name to contain -mb- to consider the product migrated. However, your Logstash pipeline is outputting indices that end in -mb (without the trailing -). Typically, the date is part of the index name as well, such as .monitoring-kibana-7-mb-2020.11.23.

Is there a reason you aren't adding the date to your indices? Or some kind of sequencing?
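
For reference, one minimal way to get a date, and therefore a -mb- segment, into the index names produced by the Logstash pipeline above would be to append the event date in the elasticsearch output. This is only a sketch of a possible adjustment, not the change the reporter ultimately made:

            index => "%{[@metadata][index]}-%{+yyyy.MM.dd}"

With the date appended, the Metricbeat-shipped documents would land in indices named like .monitoring-es-7-mb-2020.11.24, which do contain the -mb- marker the detection code looks for.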


daleckystepan commented Nov 24, 2020

You were right. It needed a little bit of tinkering with templates, index names, and ILM, but it works now.

green open .monitoring-logstash-7-mb-2020.11.24-000001 Z6wLs-3VRSaVdqB-6hlurw 1 1   472  403 882.3kb 421.1kb
green open .monitoring-beats-7-mb-2020.11.24-000001    utQI4x_HThmWyAMI1EgqLg 1 1 22944    0  18.1mb   9.2mb
green open .monitoring-es-7-mb-2020.11.24-000001       XwHhlGSST0S-ja6vnAyFng 1 1  2941 2158   4.7mb   2.5mb

The .monitoring-* indices are not migrated to ILM and the new component/index template system; that was the issue. See elastic/elasticsearch#38470.
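
The kind of bootstrap step involved looks roughly like the request below: it creates the first ILM-managed index behind a .monitoring-*-mb write alias, so that this and every rolled-over index keeps the -mb- naming Kibana expects. This is only a sketch, assuming a matching index template and ILM policy are already in place; the exact templates and policies used here are not shown in the thread.

PUT .monitoring-es-7-mb-2020.11.24-000001
{
  "aliases": {
    ".monitoring-es-7-mb": {
      "is_write_index": true
    }
  }
}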

Thank you for your time and quick support.

@chrisronline
Contributor

Glad you got it working. I'm going to close this for now, but feel free to reopen if you have more issues.
