Error while upgrading fluentd using helm chart in k8s 1.31.7 #5110
monduofficial asked this question in Q&A
What is the problem?
Hi All,
I have been trying to add a new volume mount to the fluentd pod, so I had to upgrade the Helm release to pick up the change, which I do by triggering our pipeline:
```sh
helm repo add fluent https://fluent.github.io/helm-charts
envsubst < kubernetes/logging/fluentd/helm_values.yaml | helm upgrade --install fluentd fluent/fluentd -n logging -f -
```
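To double-check that the new mount actually appears in the rendered DaemonSet before touching the cluster, the same values can be fed through `helm template` (a sketch reusing the same repo and paths as above; nothing is applied to the cluster):

```sh
# Render the chart locally with the substituted values and inspect the
# generated DaemonSet spec for the new volume mount.
envsubst < kubernetes/logging/fluentd/helm_values.yaml \
  | helm template fluentd fluent/fluentd -n logging -f -
```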
Any help or clue would be helpful.
Regards
Mondu
Describe the configuration of Fluentd
```yaml
nameOverride: ""
fullnameOverride: ""

# DaemonSet, Deployment or StatefulSet
kind: "DaemonSet"

# Only applicable for Deployment or StatefulSet
replicaCount: 1

image:
  repository: "fluent/fluentd-kubernetes-daemonset"
  pullPolicy: "IfNotPresent"
  tag: "v1.16.5-debian-kafka-1.0"

## Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []

serviceAccount:
  create: true
  annotations: {}
  name: null

rbac:
  create: true
# Configure podsecuritypolicy
# Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false
  annotations: {}

## Security Context policies for controller pods
## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
## notes on enabling and using sysctls
podSecurityContext: {}
#   seLinuxOptions:
#     type: "spc_t"

securityContext: {}
#   capabilities:
#     drop:
#     - ALL
#   readOnlyRootFilesystem: true
#   runAsNonRoot: true
#   runAsUser: 1000
#   runAsUser: 0
#   runAsGroup: 0
#   fsGroup: 0

# Configure the lifecycle
# Ref: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
lifecycle: {}
#   preStop:
#     exec:
#       command: ["/bin/sh", "-c", "sleep 20"]
# Configure the livenessProbe
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
# commenting it as there is a bug in the fluentd deployment
livenessProbe:
  httpGet:
    path: /metrics
    port: metrics
  # initialDelaySeconds: 0
  # periodSeconds: 10
  # timeoutSeconds: 1
  # successThreshold: 1
  # failureThreshold: 3

# Configure the readinessProbe
# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
# commenting it as there is a bug in the fluentd deployment
readinessProbe:
  httpGet:
    path: /metrics
    port: metrics
  # initialDelaySeconds: 0
  # periodSeconds: 10
  # timeoutSeconds: 1
  # successThreshold: 1
  # failureThreshold: 3

resources: {}
# requests:
#   cpu: 10m
#   memory: 128Mi
# limits:
#   memory: 128Mi
# Only available if kind is Deployment
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80
  ## see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics
  customRules: []
  # - type: Pods
  #   pods:
  #     metric:
  #       name: packets-per-second
  #     target:
  #       type: AverageValue
  #       averageValue: 1k
  ## see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
  # behavior:
  #   scaleDown:
  #     policies:
  #       - type: Pods
  #         value: 4
  #         periodSeconds: 60
  #       - type: Percent
  #         value: 10
  #         periodSeconds: 60
priorityClassName: "system-node-critical"

nodeSelector: {}

## Node tolerations for server scheduling to nodes with taints
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
tolerations: []
# - key: null
#   operator: Exists
#   effect: "NoSchedule"

## Affinity and anti-affinity
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

## Annotations to be added to fluentd DaemonSet/Deployment
annotations: {}

## Labels to be added to fluentd DaemonSet/Deployment
labels: {}

## Annotations to be added to fluentd pods
podAnnotations: {}

## Labels to be added to fluentd pods
podLabels: {}

## How long (in seconds) a pod needs to be stable before progressing the deployment
minReadySeconds:

## How long (in seconds) a pod may take to exit (useful with lifecycle hooks to ensure lb deregistration is done)
terminationGracePeriodSeconds:

## Deployment strategy / DaemonSet updateStrategy
updateStrategy: {}
#   type: RollingUpdate
#   rollingUpdate:
#     maxUnavailable: 1
## Additional environment variables to set for fluentd pods
env:
  - name: "FLUENTD_CONF"
    value: "../../../etc/fluent/fluent.conf"
  # NOTE: the name of this entry did not survive the paste; K8S_NODE_NAME is assumed
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  # - name: FLUENT_UID
  #   value: "0"
  - name: FLUENT_ELASTICSEARCH_HOST
    value: "elasticsearch-master"
  - name: FLUENT_ELASTICSEARCH_PORT
    value: "9200"

envFrom: []

volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
  - name: etcfluentd-main
    configMap:
      name: fluentd-main
      defaultMode: 0777
  - name: etcfluentd-config
    configMap:
      name: fluentd-config
      defaultMode: 0777
  # NOTE: the names of the two PVC volumes did not survive the paste; placeholders used
  - name: nfs-logs
    persistentVolumeClaim:
      claimName: nfs-direct-pvc-for-logs
  - name: nfs-media
    persistentVolumeClaim:
      claimName: nfs-direct-pvc-media # New PVC for /opt/backend-files/shared/media

volumeMounts:
  - name: varlog
    mountPath: /var/log
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
  - name: etcfluentd-main
    mountPath: /etc/fluent
  - name: etcfluentd-config
    mountPath: /etc/fluent/config.d/
  - name: nfs-logs
    mountPath: /srv/
  - name: nfs-media
    mountPath: /srv/media # New mount for the additional PVC

resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
## Only available if kind is StatefulSet
## Fluentd persistence
persistence:
  enabled: false
  storageClass: ""
  accessMode: ReadWriteOnce
  size: 10Gi

## Fluentd service
service:
  type: "ClusterIP"
  annotations: {}
  ports: []
  # - name: "forwarder"
  #   protocol: TCP
  #   containerPort: 24224
## Prometheus Monitoring
metrics:
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: fluentd
      prometheus: "true"
    namespace: "monitoring"
    namespaceSelector: {}
    ## metric relabel configs to apply to samples before ingestion.
    ##
    metricRelabelings:
      - sourceLabels: [__name__]
        separator: ;
        regex: ^fluentd_output_status_buffer_(oldest|newest)_.+
        replacement: $1
        action: drop
    ## relabel configs to apply to samples after ingestion.
    ##
    relabelings:
      - sourceLabels: [__meta_kubernetes_pod_node_name]
        separator: ;
        regex: ^(.*)$
        targetLabel: nodename
        replacement: $1
        action: replace

  prometheusRule:
    enabled: true
    additionalLabels: {}
    namespace: "monitoring"
    rules:
      - alert: FluentdDown
        expr: up{job="fluentd"} == 0
        for: 5m
        labels:
          context: fluentd
          severity: warning
        annotations:
          summary: "Fluentd Down"
          description: "{{ $labels.pod }} on {{ $labels.nodename }} is down"
      - alert: FluentdScrapeMissing
        expr: absent(up{job="fluentd"} == 1)
        for: 15m
        labels:
          context: fluentd
          severity: warning
        annotations:
          summary: "Fluentd Scrape Missing"
          description: "Fluentd instance has disappeared from Prometheus target discovery"
## Grafana Monitoring Dashboard
dashboards:
  enabled: "true"
  namespace: "monitoring"
  labels:
    grafana_dashboard: '"1"'

## Fluentd list of plugins to install
plugins:
  - fluent-plugin-out-http

## Add fluentd config files from K8s configMaps
configMapConfigs:
  - fluentd-systemd-conf

## Fluentd configurations:
fileConfigs:
  01_sources.conf: |-
  02_filters.conf: |-
    <label @test-backend>
      <match all.**>
        @type loki
        url "http://$INFRA_METRICS_SERVER_IP:3100"
        flush_interval 10s
        flush_at_shutdown true
        buffer_chunk_limit 1m
      </match>
    </label>
  03_dispatch.conf: |-
    <label @dispatch>
      <filter **>
        @type prometheus
        <metric>
          name fluentd_input_status_num_records_total
          type counter
          desc The total number of incoming records
          <labels>
            tag ${tag}
            hostname ${hostname}
          </labels>
        </metric>
      </filter>
    </label>
  04_outputs.conf: |-
```
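Note that `$INFRA_METRICS_SERVER_IP` in `02_filters.conf` is not resolved by the chart itself; it is filled in by the `envsubst` step of the upgrade command before the values reach Helm. For example (the IP below is a made-up placeholder):

```sh
# envsubst replaces $INFRA_METRICS_SERVER_IP in the values file before the
# result is piped to helm; 203.0.113.10 is only an example value.
export INFRA_METRICS_SERVER_IP=203.0.113.10
envsubst < kubernetes/logging/fluentd/helm_values.yaml | grep -A 2 loki
```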
Describe the logs of Fluentd
"fluent" already exists with the same configuration, skipping
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
##[error]Bash exited with code '1'.
Finishing: Bash
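The error indicates that the manifest Helm stored for the last successful release still contains a PodSecurityPolicy object. PSP was removed in Kubernetes 1.25 (`policy/v1beta1` no longer exists on 1.31), so Helm cannot rebuild the old objects to compute the upgrade diff, even though the new values set `podSecurityPolicy.enabled: false`. One way to confirm this, plus a commonly suggested fix using the `helm-mapkubeapis` plugin (a sketch, assuming the release history is otherwise intact):

```sh
# 1. Confirm the stored release manifest still references the removed API.
helm get manifest fluentd -n logging | grep -B 2 -A 2 PodSecurityPolicy

# 2. Rewrite removed/deprecated APIs in the stored release metadata so the
#    next `helm upgrade` can compute its diff.
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis fluentd --namespace logging
```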
Environment