
Dragonfly cache not taking effect #1544

Open

yunkunrao opened this issue May 19, 2021 · 1 comment

@yunkunrao

Question

Environment
Dragonfly: 1.0.6
Harbor: v2.2.1

Helm chart configuration file values.yaml:

rbac:
  create: true

## Define serviceAccount names for components. Defaults to component's fully qualified name.
##
serviceAccounts:
  dfclient:
    create: true
    name:
  supernode:
    create: true
    name:

dfclient:
  name: dfclient
  ## dfclient container image
  ##
  image:
    repository: dragonflyoss/dfclient
    tag: 1.0.6
    pullPolicy: IfNotPresent

  ## dfclient priorityClassName
  ##
  priorityClassName: ""

  ## dfclient container arguments
  ##
  args: []

  ## Node tolerations for dfclient scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: 
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"

  ## Node labels for dfclient pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  nodeSelector: {}

  ## Pod affinity
  ##
  affinity: {}

  ## Annotations to be added to dfclient pods
  ##
  podAnnotations: {}

  ## Labels to be added to dfclient pods
  ##
  podLabels: {}

  ## dfclient resource requests and limits
  ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: {}
    # limits:
    #   cpu: 500m
    #   memory: 512Mi
    # requests:
    #   cpu: 500m
    #   memory: 512Mi

  ## Security context to be added to dfclient pods
  ##
  securityContext: {}

  ## If true, dfclient pods share the host network namespace
  ##
  hostNetwork: true

  dnsPolicy: ClusterFirstWithHostNet

  ## Additional dfclient hostPath mounts
  ##
  extraHostPathMounts:
    # - name: logs
    #   mountPath: /root/.small-dragonfly/logs
    #   hostPath: /var/dragonfly/logs
    # - name: df.key
    #   mountPath: /etc/dragonfly/df.key
    #   hostPath: df.key
    #   readOnly: true
    # - name: df.crt
    #   mountPath: /etc/dragonfly/df.crt
    #   hostPath: df.crt
    #   readOnly: true

  ## Additional dfclient Secret mounts
  ## Defines additional mounts with secrets. Secrets must be manually created in the namespace.
  extraSecretMounts: []
    # - name: secret-files
    #   mountPath: /etc/secrets
    #   subPath: ""
    #   secretName: dfclient-secret-files
    #   readOnly: true

  ## dfdaemon config data
  ## Ref: https://github.com/dragonflyoss/Dragonfly/tree/master/docs/config
  ##
  configData:
    dfget_flags: ["--verbose","-f","Expires&OSSAccessKeyId&Signature"]
    verbose: false
    registry_mirror:
      # Remote URL for the registry mirror, default is https://index.docker.io
      remote: https://core.harbor1
      # Whether to ignore HTTPS certificate errors
      insecure: true
    proxies:
      # Requests matching this pattern (image layer blobs) are fetched via dfget
      - regx: blobs/sha256.*
    hosts:
      # Skip certificate verification for the private Harbor host
      - regx: core.harbor1
        insecure: true
    # hijack_https:
    #   cert: /etc/dragonfly/df.crt
    #   key: /etc/dragonfly/df.key
    #   hosts:
    #     - regx: index.docker.io

supernode:
  name: supernode
  ## supernode container image
  ##
  image:
    repository: registry.cn-hangzhou.aliyuncs.com/yunkun/supernode
    tag: 1.0.6
    pullPolicy: IfNotPresent
  
  ## supernode priorityClassName
  ##
  priorityClassName: ""

  ## Additional supernode container arguments
  ##
  extraArgs: []

  ## Supernode Deployment Strategy type
  # strategy:
  #   type: Recreate

  ingress:
    ## If true, supernode Ingress will be created
    ##
    enabled: false

    ## supernode Ingress annotations
    ##
    annotations: {}
    #   kubernetes.io/ingress.class: nginx
    #   kubernetes.io/tls-acme: 'true'

    ## supernode Ingress additional labels
    ##
    extraLabels: {}

    ## supernode Ingress hostnames with optional path
    ## Must be provided if Ingress is enabled
    ##
    hosts: []
    #   - supernode.domain.com
    #   - domain.com/supernode

    ## supernode Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls: []
    #   - secretName: supernode-tls
    #     hosts:
    #       - supernode.domain.com

  hostAliases:
    - ip: "192.168.0.2"
      hostnames:
        - "core.harbor1"
    # - ip: "127.0.0.1"
    #   hostnames:
    #     - "foo.local"
    #     - "bar.local"


  ## Node tolerations for supernode scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  ##
  tolerations: []

  ## Node labels for supernode pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Pod affinity
  ##
  affinity: {}

  persistence:
    enabled: false
    accessModes:
      - ReadWriteOnce
    annotations: {}
    ## supernode data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"

    ## supernode data Persistent Volume existing claim name
    ## Requires supernode.persistence.enabled: true
    ## If defined, PVC must be created manually before volume will be bound
    existingClaim: ""

    ## supernode data Persistent Volume mount root path
    ##
    mountPath: /home/admin/supernode
    size: 10Gi

  emptyDir:
    # medium: "Memory"
    sizeLimit: 2Gi

  ## Annotations to be added to supernode pods
  ##
  podAnnotations: {}

  ## Labels to be added to supernode pods
  ##
  podLabels: {}

  ## Use a StatefulSet if replicaCount needs to be greater than 1 (see below)
  ##
  replicaCount: 1

  statefulSet:
    ## If true, use a statefulset instead of a deployment for pod management.
    ## This allows scaling replicas to more than one pod
    ##
    enabled: false
    annotations: {}
    labels: {}
    podManagementPolicy: OrderedReady

    ## Supernode headless service to use for the statefulset
    ##
    headless:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8002" 
      labels: {}

  ## supernode resource requests and limits
  ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: {}
    # limits:
    #   cpu: 500m
    #   memory: 512Mi
    # requests:
    #   cpu: 500m
    #   memory: 512Mi

  ## Security context to be added to supernode pods
  ##
  securityContext: {}

  service:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8002"
    labels: {}
    ports:
      targetPortHTTP: 8002
      targetPortNginx: 8001
      # nodePortHTTP: 30000
      # nodePortNginx: 30001
    type: ClusterIP  

  ## Additional supernode Secret mounts
  ## Defines additional mounts with secrets. Secrets must be manually created in the namespace.
  extraSecretMounts: []
    # - name: secret-files
    #   mountPath: /etc/secrets
    #   subPath: ""
    #   secretName: supernode-secret-files
    #   readOnly: true

  # supernode config data
  # Ref: https://github.com/dragonflyoss/Dragonfly/tree/master/docs/config
  #
  configData:
    base:
      listenPort: 8002
      downloadPort: 8001
      homeDir: /home/admin/supernode
      schedulerCorePoolSize: 10
      peerUpLimit: 5
      peerDownLimit: 4
      eliminationLimit: 5
      failureCountLimit: 5
      systemReservedBandwidth: 20M
      maxBandwidth: 5G
      enableProfiler: false
      debug: false
      failAccessInterval: 3m
      gcInitialDelay: 6s
      gcMetaInterval: 2m
      taskExpireTime: 3m
      peerGCDelay: 3m
      gcDiskInterval: 5m
      youngGCThreshold: 5G
      fullGCThreshold: 1G
      IntervalThreshold: 2h
    plugins:
      storage:
        - name: local
          enabled: true
          config: |
            baseDir: /home/admin/supernode/repo
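
For context, in Dragonfly 1.x's registry-mirror mode the node's Docker daemon must be pointed at dfdaemon for any pull to flow through the P2P cache at all. A minimal sketch of that wiring, assuming dfdaemon's default proxy port (65001) and the standard Docker config path, neither of which is taken from this report:

# Hedged sketch: point Docker at dfdaemon's HTTP proxy so pulls go through
# dfget and the P2P cache. 65001 is dfdaemon's default port in Dragonfly 1.x.
cat /etc/docker/daemon.json
# {
#   "registry-mirrors": ["http://127.0.0.1:65001"]
# }
sudo systemctl restart docker   # the mirror setting only applies after a restart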

After manually triggering a preheat on the Harbor side, the operation completes successfully. In the pod dragon1-dragonfly-dfclient-fwjpf, the log /root/.small-dragonfly/logs/dfclient.log shows that the data blocks were downloaded successfully.
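
For reference, that log can be tailed from outside the pod (the namespace flag is omitted here and assumed to be the default):

# Pod name as reported above; add -n <namespace> if the chart was not
# installed in the default namespace.
kubectl exec dragon1-dragonfly-dfclient-fwjpf -- \
  tail -n 100 /root/.small-dragonfly/logs/dfclient.log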

However, when I then run docker pull by hand for the image that was just preheated, while watching the processes in that pod with ps -ef, I can see that on every docker pull, dfget re-downloads the image from Harbor, and the pull time does not decrease. My preliminary conclusion is that the cache is not taking effect.
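
A sketch of that repro, with a hypothetical image name; timing both pulls makes the comparison concrete:

# Hypothetical image; substitute one that was actually preheated via Harbor.
time docker pull core.harbor1/library/nginx:latest

# Watch dfget activity inside the dfclient pod while the pull runs:
kubectl exec dragon1-dragonfly-dfclient-fwjpf -- sh -c 'ps -ef | grep [d]fget'

# Drop the local copy and pull again; with an effective cache the second pull
# should be noticeably faster instead of re-downloading everything from Harbor.
docker rmi core.harbor1/library/nginx:latest
time docker pull core.harbor1/library/nginx:latest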

[screenshot attached]

Is this behavior caused by a problem in my environment's configuration?

@yunkunrao (Author) commented May 19, 2021

I checked the similar issue #1342; judging from my test results, it does not seem to be a filter problem either.
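
For context, the filter in question is dfget's -f flag, already present in dfget_flags above: it strips volatile query parameters (e.g. presigned-URL signatures) so that otherwise-identical blob URLs map to the same task and cache entry. A hedged illustration with a made-up URL:

# Made-up URL for illustration only. With the filter, Expires/Signature are
# ignored when computing the task ID, so re-signed URLs share one cache entry.
dfget -u 'http://core.harbor1/v2/library/nginx/blobs/sha256:0000?Expires=1&Signature=x' \
      -f 'Expires&OSSAccessKeyId&Signature' -o /tmp/blob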
