
<jemalloc>: Unsupported system page size #4328

Closed
vcozzolino opened this issue Oct 20, 2023 · 3 comments

Comments

@vcozzolino

Describe the bug

I'm trying to deploy Fluentd as a DaemonSet on a Kubernetes cluster (ARM machines only) using the image fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-arm64-1. The moment I deploy it, all the pods go into a crash loop, printing the same message:

<jemalloc>: Unsupported system page size
<jemalloc>: Unsupported system page size
<jemalloc>: Unsupported system page size
[FATAL tini (1)] Failed to allocate memory for child args: 'Cannot allocate memory'

I use Fluentd in many other clusters and have never seen this error message.

To Reproduce

Hard to say; I think this is a host-related issue.

Expected behavior

The Fluentd pods start normally.

Your Environment

- Fluentd version: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-arm64-1
- TD Agent version:
- Operating system:
NAME="Rocky Linux"
VERSION="8.8 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.8 (Green Obsidian)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2029-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-8"
ROCKY_SUPPORT_PRODUCT_VERSION="8.8"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.8"
- Kernel version: 4.18.0-477.27.1.el8_lustre.aarch64

Your Configuration

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-arm64-1
        env:
          - name: K8S_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch-logging"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          # Option to configure elasticsearch plugin with self signed certs
          # ================================================================
          - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
            value: "true"
          # Option to configure elasticsearch plugin with tls
          # ================================================================
          - name: FLUENT_ELASTICSEARCH_SSL_VERSION
            value: "TLSv1_2"
          # X-Pack Authentication
          # =====================
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            value: "changeme"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        # When the actual pod logs are in /var/lib/docker/containers, use the following lines.
        # - name: dockercontainerlogdirectory
        #   mountPath: /var/lib/docker/containers
        #   readOnly: true
        # When the actual pod logs are in /var/log/pods, use the following lines.
        - name: dockercontainerlogdirectory
          mountPath: /var/log/pods
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      # When the actual pod logs are in /var/lib/docker/containers, use the following lines.
      # - name: dockercontainerlogdirectory
      #   hostPath:
      #     path: /var/lib/docker/containers
      # When the actual pod logs are in /var/log/pods, use the following lines.
      - name: dockercontainerlogdirectory
        hostPath:
          path: /var/log/pods

Your Error Log

<jemalloc>: Unsupported system page size
<jemalloc>: Unsupported system page size
<jemalloc>: Unsupported system page size
[FATAL tini (1)] Failed to allocate memory for child args: 'Cannot allocate memory'

Additional context

I know this could be related to a different page size on the host (such as 64K instead of 4K), but I would like some help solving the issue.
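
For reference, the effective page size can be checked directly on the affected node; getconf is a standard glibc utility, so this assumes nothing beyond a shell on the host:

getconf PAGESIZE
# 65536 on kernels built with 64 KiB pages (common on some aarch64 distros);
# 4096 on standard 4 KiB-page kernels.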

@kenhys
Contributor

kenhys commented Oct 20, 2023

As jemalloc is built with the default page size, if you want to change it, you need to configure and build the image yourself.

https://github.com/fluent/fluentd-docker-image/blob/master/v1.16/arm64/debian/Dockerfile#L53
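
As a rough sketch of such a rebuild (not the exact upstream Dockerfile: the base image tag, jemalloc version, and paths below are illustrative assumptions), jemalloc's configure script takes --with-lg-page, the base-2 log of the page size, so --with-lg-page=16 targets 2^16 = 64 KiB pages:

# Sketch only: base tag and jemalloc version are assumptions; adjust to match upstream.
FROM fluent/fluentd:v1.16-debian-arm64-1
USER root
# Rebuild jemalloc from source with 64 KiB pages via --with-lg-page=16.
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential wget bzip2 ca-certificates \
 && wget -O /tmp/jemalloc-5.3.0.tar.bz2 \
      https://github.com/jemalloc/jemalloc/releases/download/5.3.0/jemalloc-5.3.0.tar.bz2 \
 && cd /tmp && tar xjf jemalloc-5.3.0.tar.bz2 && cd jemalloc-5.3.0 \
 && ./configure --with-lg-page=16 \
 && make -j"$(nproc)" && make install \
 && cd / && rm -rf /tmp/jemalloc-5.3.0* /var/lib/apt/lists/*
# Path assumes jemalloc's default make install prefix (/usr/local).
ENV LD_PRELOAD=/usr/local/lib/libjemalloc.so.2
USER fluent

As far as I know, a build configured for 64 KiB pages also runs on 4 KiB-page kernels (the "Unsupported system page size" failure only happens when the runtime page size exceeds the compile-time one), so a single image can cover both kinds of nodes.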

@vcozzolino
Author

> As jemalloc is built with the default page size, if you want to change it, you need to configure and build the image yourself.
>
> https://github.com/fluent/fluentd-docker-image/blob/master/v1.16/arm64/debian/Dockerfile#L53

Thanks a lot, I will rebuild the image and try again!

@zijiwork

How to solve it?

> > As jemalloc is built with the default page size, if you want to change it, you need to configure and build the image yourself.
> > https://github.com/fluent/fluentd-docker-image/blob/master/v1.16/arm64/debian/Dockerfile#L53
>
> Thanks a lot, I will rebuild the image and try again!
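
If it helps: once you have a Dockerfile like the sketch above, it is a normal build-and-push, then pointing the DaemonSet at the new image (the registry and tag below are placeholders):

docker build -t registry.example.com/fluentd-arm64-64k:v1.16 .
docker push registry.example.com/fluentd-arm64-64k:v1.16
# then set spec.template.spec.containers[0].image in the DaemonSet to the pushed tag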
