PMM-10278 postgres_exporter integration tests #71
@@ -0,0 +1,48 @@
#########################
### tests

# measures avg scrape time and compares old vs new exporters
test-performance:
	go test -v -run '^TestPerformance$$' -args -doRun=true

extraMetrics = false
multipleLabels = false
dumpMetrics = false

test-metrics:
	go test -v -run '^TestMissingMetrics$$' -args -doRun=true

test-labels:
	go test -v -run '^TestMissingLabels$$' -args -doRun=true

test-resolutions-duplicates:
	go test -v -run '^TestResolutionsMetricDuplicates$$' -args -doRun=true

test-resolutions:
	go test -v -run '^TestResolutions$$' -args -doRun=true

dump-metrics:
	go test -v -run '^TestDumpMetrics$$' -args -doRun=true -extraMetrics=$(extraMetrics) -multipleLabels=$(multipleLabels) -dumpMetrics=$(dumpMetrics)

test-consistency: test-metrics test-resolutions test-resolutions-duplicates

#########################
### env preparation

# download exporter from provided feature build's client binary url
prepare-exporter-from-fb:
	go test -v -run '^TestPrepareUpdatedExporter$$' -args -doRun=true -url=$(url)

prepare-exporter-from-repo:
	make -C ../ build && cp ../postgres_exporter assets/postgres_exporter

prepare-base-exporter:
	tar -xf assets/postgres_exporter_percona.tar.xz -C assets/

start-postgres-db:
	docker-compose -f assets/postgres-compose.yml up -d --force-recreate --renew-anon-volumes --remove-orphans

stop-postgres-db:
	docker-compose -f assets/postgres-compose.yml down

prepare-env-from-repo: prepare-exporter-from-repo prepare-base-exporter start-postgres-db
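Every target above passes `-args -doRun=true`, which suggests the tests gate themselves behind a custom flag so that a plain `go test ./...` stays a no-op. Below is a minimal sketch of that gating plus the kind of scrape-timing helper `test-performance` implies; the names `doRun` and `scrapeOnce` and the exporter address are assumptions for illustration, not the PR's actual code:

```go
package tests

import (
	"flag"
	"io"
	"net/http"
	"testing"
	"time"
)

// doRun gates the integration tests so that a plain `go test` does nothing;
// the Makefile enables them with `-args -doRun=true`.
var doRun = flag.Bool("doRun", false, "run integration tests")

// scrapeOnce fetches the exporter's /metrics endpoint and reports how long
// the scrape took. The address is a placeholder for this sketch.
func scrapeOnce(addr string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Get(addr + "/metrics")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		return 0, err
	}
	return time.Since(start), nil
}

func TestPerformance(t *testing.T) {
	if !*doRun {
		t.Skip("pass -doRun=true to run integration tests")
	}
	// 9187 is the exporter's usual default port; assumed here.
	elapsed, err := scrapeOnce("http://127.0.0.1:9187")
	if err != nil {
		t.Fatal(err)
	}
	t.Logf("single scrape took %s", elapsed)
}
```

With the environment prepared, such a test would be driven through `make test-performance`.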
@@ -0,0 +1,30 @@
---
version: '3.7'

services:
  postgres:
    image: ${POSTGRES_IMAGE:-postgres:11}
    container_name: postgres-test-srv
    command: >
      -c shared_preload_libraries='${PG_PRELOADED_LIBS:-pg_stat_statements}'
      -c track_activity_query_size=2048
      -c pg_stat_statements.max=10000
      -c pg_stat_monitor.pgsm_query_max_len=10000
      -c pg_stat_statements.track=all
      -c pg_stat_statements.save=off
      -c track_io_timing=on
    ports:
      - "127.0.0.1:5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres-test-srv-vol:/docker-entrypoint-initdb.d/
    networks:
      - postgres-test-srv-net

volumes:
  postgres-test-srv-vol:

networks:
  postgres-test-srv-net:
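Given this compose file, a test can reach the database at 127.0.0.1:5432 with the postgres/postgres credentials it sets. A rough sketch of a readiness check, assuming the lib/pq driver (the helper name and retry policy are illustrative only, not part of this PR):

```go
package tests

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/lib/pq" // Postgres driver; an assumption for this sketch
)

// waitForPostgres pings the database started by `make start-postgres-db`
// until it answers or the retries run out.
func waitForPostgres() (*sql.DB, error) {
	dsn := "postgres://postgres:postgres@127.0.0.1:5432/postgres?sslmode=disable"
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	for i := 0; i < 30; i++ {
		if err = db.Ping(); err == nil {
			return db, nil
		}
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("postgres did not become ready: %w", err)
}
```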
@@ -0,0 +1,6 @@
--auto-discover-databases
--collect.custom_query.hr
--collect.custom_query.lr
--collect.custom_query.mr
--exclude-databases=template0,template1,postgres,cloudsqladmin,pmm-managed-dev,azure_maintenance
--log.level=warn | ||
Comment on lines +1 to +6

Do we really need it to be in a separate file, rather than in the test where it's used?

Not really, it's just a bit simpler to edit the flags list separately from the code. But yep, I'll move them in as a verbatim string.
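For reference, keeping the flags as a verbatim value in the test code could look roughly like this; the `exporterFlags` name and the `assets/postgres_exporter` path are assumptions for this sketch, not the final implementation:

```go
package tests

import "os/exec"

// exporterFlags mirrors the flags file above, kept verbatim in the test code.
var exporterFlags = []string{
	"--auto-discover-databases",
	"--collect.custom_query.hr",
	"--collect.custom_query.lr",
	"--collect.custom_query.mr",
	"--exclude-databases=template0,template1,postgres,cloudsqladmin,pmm-managed-dev,azure_maintenance",
	"--log.level=warn",
}

// startExporter launches the exporter binary under test with those flags.
func startExporter() (*exec.Cmd, error) {
	cmd := exec.Command("assets/postgres_exporter", exporterFlags...)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}
```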
@@ -0,0 +1,11 @@
## ######################################################
## WARNING: This is an example. Do not edit this file.
## To create your own Custom Queries - create a new file
## ######################################################
## Custom query example.
#pg_replication:
#   query: "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) as lag"
#   metrics:
#     - lag:
#         usage: "GAUGE"
#         description: "Replication lag behind master in seconds"
@@ -0,0 +1,7 @@
pg_postmaster_uptime:
  query: "select extract(epoch from current_timestamp - pg_postmaster_start_time()) as seconds"
  master: true
  metrics:
    - seconds:
        usage: "GAUGE"
        description: "Service uptime"
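Assuming postgres_exporter's usual custom-query naming (`<prefix>_<column>`), this query would surface as a gauge named `pg_postmaster_uptime_seconds`, giving the tests a metric that should always be present.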
@@ -0,0 +1,11 @@
## ######################################################
## WARNING: This is an example. Do not edit this file.
## To create your own Custom Queries - create a new file
## ######################################################
## Custom query example.
#pg_replication:
#   query: "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) as lag"
#   metrics:
#     - lag:
#         usage: "GAUGE"
#         description: "Replication lag behind master in seconds"
@@ -0,0 +1,11 @@
## ######################################################
## WARNING: This is an example. Do not edit this file.
## To create your own Custom Queries - create a new file
## ######################################################
## Custom query example.
#pg_replication:
#   query: "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) as lag"
#   metrics:
#     - lag:
#         usage: "GAUGE"
#         description: "Replication lag behind master in seconds"
Why is it in a tar, and what does it contain?

It's our old exporter (before any updates), kept as a baseline. It's in a tar because compressed it takes less space.

Yeah, but it adds additional code to the codebase; maybe it's better to keep it unarchived? Do we have a comparison of the file sizes?

It's 2.2x smaller in the archive (15.2M vs 6.8M), but OK, it's not a problem at all to store it uncompressed.

BTW, can't we substitute it with some golden file?

If you mean stub metrics - those depend on the system the exporter is running on, so I wouldn't. And we are comparing CPU usage, which also depends on hardware.

Also, as a second iteration after finishing more important tasks, I'm going to make these tests runnable on Mac, because for now they only work under Linux. That's OK for occasional exporter updates, but it's neither future-proof nor cross-team friendly.