feat: fix app label of metrics svc for ServiceMonitor discovery #229

Closed
@@ -5,7 +5,7 @@ metadata:
name: {{ include "cert-manager-approver-policy.name" . }}-metrics
namespace: {{ .Release.Namespace | quote }}
labels:
-  app: {{ include "cert-manager-approver-policy.name" . }}
+  app: {{ include "cert-manager-approver-policy.name" . }}-metrics
Member
The disadvantage of this is that I can no longer easily discover all the resources related to this app. E.g.

$ kubectl get all --namespace cert-manager --selector app=cert-manager-approver-policy
NAME                                               READY   STATUS    RESTARTS   AGE
pod/cert-manager-approver-policy-c68dc44c9-5785f   1/1     Running   0          107m

NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/cert-manager-approver-policy           ClusterIP   10.96.10.146    <none>        443/TCP    107m
service/cert-manager-approver-policy-metrics   ClusterIP   10.96.133.139   <none>        9402/TCP   107m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-approver-policy-c68dc44c9   1         1         1       107m

Author

@leotomas837 leotomas837 May 10, 2023

@wallrj

I was looking at the same GitHub issue, and indeed you are right: targetPort sets up Prometheus's scrape config to scrape the container directly (once for each discovered Service, hence the duplicated target).
So there are two solutions: tightening the service discovery (my first approach), or targeting the Service port (by name) directly instead of the container (your approach).

Happy to implement your approach; I like to think of a ServiceMonitor as being set up with Service attributes rather than container ones.

Member
Great. Then please modify the PR to use the port: metrics approach and ping me for another review.
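For illustration, a minimal sketch of what the suggested port: metrics approach could look like in the rendered ServiceMonitor. This assumes the metrics Service names its port "metrics" and that the chart renders into the cert-manager namespace; the field names follow the Prometheus Operator ServiceMonitor CRD, but the concrete values here are hypothetical, not the chart's actual output:

```yaml
# Hypothetical rendered ServiceMonitor: select the metrics Service by its
# label and scrape its *named* port ("metrics") instead of using targetPort,
# so Prometheus scrapes via the Service port rather than hitting the
# container directly for every discovered Service (which caused the
# duplicated target).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: cert-manager-approver-policy
  namespace: cert-manager
spec:
  jobLabel: cert-manager-approver-policy
  selector:
    matchLabels:
      app: cert-manager-approver-policy-metrics
  namespaceSelector:
    matchNames:
      - cert-manager
  endpoints:
    - port: metrics   # named Service port, per the maintainer's suggestion
      interval: 30s
```

With port (a port name) rather than targetPort, only Services exposing a port literally named "metrics" and matching the label selector become scrape targets, so the plain ClusterIP Service on 443 is no longer picked up.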

{{ include "cert-manager-approver-policy.labels" . | indent 4 }}
spec:
type: {{ .Values.app.metrics.service.type }}
@@ -15,7 +15,7 @@ spec:
jobLabel: {{ include "cert-manager-approver-policy.name" . }}
selector:
matchLabels:
-  app: {{ include "cert-manager-approver-policy.name" . }}
+  app: {{ include "cert-manager-approver-policy.name" . }}-metrics
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}