Error when scraping: "...same name and label values" #246

Closed
laughland opened this issue Nov 10, 2017 · 3 comments

Comments

@laughland

Host operating system: output of uname -a:

Linux localhost.localdomain 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

snmp_exporter version: output of snmp_exporter -version:

Docker container: prom/snmp-exporter:master

What device/snmpwalk OID are you using?

1.3.6.1.4.1.318.1.1.1.9.3.3.1.8

If this is a new device, please link to the MIB(s):

MIB

What did you do that produced an error?

I followed the procedures for generating and using the snmp_exporter to gather SNMP metrics.

I used the generator to produce an snmp.yml file; the generator.yml file contains:

modules:
  ups_mibs:
    walk:
      - 1.3.6.1.4.1.318.1.1.1.9.3.3.1.8
    version: 3
    auth:
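
For reference, I ran the generator roughly like this (the paths are from my setup, so adjust the MIB directory and binary location as needed):

export MIBDIRS=mibs     # directory containing the APC PowerNet MIB
./generator generate    # reads generator.yml and writes snmp.yml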

The snmp.yml file:

ups_mibs:
  walk:
  - 1.3.6.1.4.1.318.1.1.1.9.3.3.1.8
  metrics:
  - name: upsPhaseOutputMaxLoad
    oid: 1.3.6.1.4.1.318.1.1.1.9.3.3.1.8
    type: gauge
    help: The maximum output load in VA measured since the last reset (upsPhaseResetMaxMinValues),
      or -1 if it's unsupported by this UPS - 1.3.6.1.4.1.318.1.1.1.9.3.3.1.8
    indexes:
    - labelname: upsPhaseOutputPhaseTableIndex
      type: gauge
    - labelname: upsPhaseOutputPhaseIndex
      type: gauge
  version: 3
  auth:
    community: public
    security_level: authPriv
    username: <redacted>
    password: <redacted>
    auth_protocol: MD5
    priv_protocol: DES
    priv_password: <redacted>

Then, with the snmp_exporter and Prometheus running from the docker-compose file in the notes section below, I used the following URL to view the scrape data from the exporter:
http://localhost:9116/snmp?module=ups_mibs&target=<redacted>
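
The same scrape can be triggered from the command line (assuming the exporter is reachable on localhost:9116, as in my setup):

curl 'http://localhost:9116/snmp?module=ups_mibs&target=<redacted>'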

What did you expect to see?

I expected to see three values, one for each power phase, from the device at the OID above.

What did you see instead?

An error has occurred during metrics gathering:

2 error(s) occurred:
* collected metric upsPhaseOutputMaxLoad label:<name:"upsPhaseOutputPhaseIndex" value:"1" > label:<name:"upsPhaseOutputPhaseTableIndex" value:"1" > gauge:<value:27280 >  was collected before with the same name and label values
* collected metric upsPhaseOutputMaxLoad label:<name:"upsPhaseOutputPhaseIndex" value:"1" > label:<name:"upsPhaseOutputPhaseTableIndex" value:"1" > gauge:<value:25660 >  was collected before with the same name and label values

I ran an snmpwalk and can see that the device returns the correct values:

snmpbulkwalk -v3 -On -t 60 -u <redacted> -l authPriv -a MD5 -A <redacted> -x DES -X <redacted> -M /usr/share/snmp/mibs/ -m all <redacted> upsPhaseOutputMaxLoad


.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.1.1 = INTEGER: 23010
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.1.2 = INTEGER: 27280
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.1.3 = INTEGER: 25660

Notes:

Here is the prometheus.yml section:

global:
  scrape_interval: 65s
  scrape_timeout: 60s
# looking at location of rules files relative to docker-compose.yml
#rule_files:
 #- 'prometheus.rules'

scrape_configs:

 - job_name: 'ups_lond_snmp'
   metrics_path: /snmp
   params:
     module: [ups_mibs]
   static_configs:
     - targets: [
     '<redacted>'
     ]
   relabel_configs:
     - source_labels: [__address__]
       target_label: instance
       regex: '(^[^-]*-[^.]*).*'
       replacement: '$1'

     - source_labels: [__address__]
       target_label: __param_target

     - target_label: __address__
       replacement: <redacted>:9116  # SNMP exporter

And the docker-compose.yml file, which should help with running this:

version: '3.3'

volumes:
  prometheus_data: {}
  grafana_data: {}

services:

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
      - /home/user/prometheus_snmp/certs/:/etc/ssl/certs/:z
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      #- '--storage.tsdb.path=/data'
    expose:
      - 9090
    ports:
      - "9090:9090"
    network_mode: "host"
    depends_on:
      - snmp-exporter

  snmp-exporter:
    image: prom/snmp-exporter:master
    volumes:
      - ./snmp_exporter/:/etc/snmp_exporter/
    expose:
      - 9116
    ports:
      - "9116:9116"
    network_mode: "host"
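
Both containers were started in the usual way from the directory containing docker-compose.yml (the -d flag is just how I run it):

docker-compose up -d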

I've also read through the existing issues and Google Groups threads but cannot determine whether this is a configuration error on my part or a bug in the snmp_exporter.

@brian-brazil
Contributor

That's a device bug: that snmpwalk output doesn't match the MIB - there's an extra index element in those OIDs.
You'll need to take this up with your vendor.
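
To illustrate, with the two indexes configured in the snmp.yml above the exporter expects two index elements after the base OID (upsPhaseOutputPhaseTableIndex.upsPhaseOutputPhaseIndex), but the walk returns three:

Expected per the MIB:
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.1
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.2
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.3

Returned by the device:
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.1.1
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.1.2
.1.3.6.1.4.1.318.1.1.1.9.3.3.1.8.1.1.3

Only the two configured index elements become labels, so all three rows end up with upsPhaseOutputPhaseTableIndex="1" and upsPhaseOutputPhaseIndex="1" - exactly the duplicate the error message reports.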

@laughland
Author

Will do, thanks for the quick reply.

@bissquit

@laughland, did the vendor fix the bug? Could you please say what device and firmware version you are using?
