AdditionalConfig on nodePools or General is not being applied? #204

Closed
emil-jacero opened this issue Jun 30, 2022 · 12 comments
Labels: bug (Something isn't working)

@emil-jacero

Image version: v1.1 (built locally)
Helm version: v1.1 (built locally)

Hi,

I am trying to apply these settings to the cluster (opensearch.yml). I have tried adding them to the general config and the nodePools, as shown in the CRD documentation.

additionalConfig:
  s3.client.default.endpoint: "minio.example.com"
  s3.client.default.max_retries: "3"
  s3.client.default.protocol: "https"

This, however, does not add the config to the ConfigMap; the settings show up as environment variables instead.
Is that the correct behavior?

Best regards,
Emil

@emil-jacero (Author)

An update to this issue:

When I add the additionalConfig, the operator seems to continuously terminate my nodes and rebuild them. I cannot find anything in the operator log suggesting something is wrong, but that is not the behavior I am looking for. :)

My configuration:

apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: os-cluster-01
  namespace: opensearch
spec:
  general:
    serviceName: os-cluster-01
    version: 1.3.3
    imagePullPolicy: Always
    image: reg.example.com/opensearch:1.3.3-s3-plugin  # locally built image with the `repository-s3` plugin added and certificates
  security:
    tls:
      transport:
        perNode: true
        generate: true
      http:
        generate: true
    config:
      securityConfigSecret:
        name: security-config
      adminCredentialsSecret:
        name: admin-credentials
  dashboards:
    enable: true
    version: 1.3.3
    imagePullPolicy: Always
    image: reg.example.com/opensearch-dashboards:1.3.3  # locally built image with certificates
    replicas: 1
    tls:
      enable: true
      generate: true
    opensearchCredentialsSecret:
      name: kibanaserver-credentials
    additionalConfig:
      opensearch_security.cookie.secure: "true"
      opensearch.requestHeadersWhitelist: "['Authorization', 'securitytenant', 'security_tenant']"
      opensearch_security.multitenancy.enabled: "true"
      opensearch_security.multitenancy.tenants.enable_global: "true"
      opensearch_security.multitenancy.tenants.enable_private: "true"
      opensearch_security.multitenancy.tenants.preferred: "['Private', 'Global']"
      opensearch_security.multitenancy.enable_filter: "false"
    resources:
      requests:
         memory: "512Mi"
         cpu: "200m"
      limits:
         memory: "512Mi"
         cpu: "200m"
  nodePools:
    - component: masters
      replicas: 3
      diskSize: "5Gi"
      additionalConfig:
        s3.client.default.endpoint: "minio.example.com"
        s3.client.default.max_retries: "3"
        s3.client.default.protocol: "https"
      persistence:
        pvc:
          storageClass: longhorn
          accessModes:
            - ReadWriteOnce
      nodeSelector:
      env:
        - name: BACKUP_S3_AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: backup-credentials
              key: access-key-id
        - name: BACKUP_S3_AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: backup-credentials
              key: secret-access-key
      resources:
         requests:
            memory: "2Gi"
            cpu: "500m"
         limits:
            memory: "2Gi"
            cpu: "1000m"
      roles:
        - "data"
        - "master"

@swoehrl-mw (Collaborator)

Hi @emil-jacero.

This, however, does not add the config to the ConfigMap; the settings show up as environment variables instead.

That is the currently implemented behaviour. Each additionalConfig entry is added as an env var.
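
For illustration, the s3 settings from your spec end up in the nodePool StatefulSet roughly like this (a sketch of the mapping only; the exact generated manifest may differ slightly):

    env:
      - name: s3.client.default.endpoint
        value: "minio.example.com"
      - name: s3.client.default.max_retries
        value: "3"
      - name: s3.client.default.protocol
        value: "https"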

When I add the additionalConfig, the operator seems to continuously terminate my nodes and rebuild them

Could you check the operator logs for DEBUG messages with a resource diff? I tried a version of your config on my local cluster (minus the image, persistence, nodePool.env and securityconfig) and I can see that the operator continuously tries to apply a patch to the StatefulSet of the nodepool. That diff contains $setElementOrder/env. I'm not sure why, but if you have the same messages it might narrow down the problem.

@dbason (Contributor)

dbason commented Jul 11, 2022

The $setElementOrder directive enforces strict ordering on the YAML list. Basically, the patch isn't changing the environment variables, just their order, which gets picked up as a change. It looks like the additional configuration values are added in a different order every time, so I will add a patch to sort these.
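
To illustrate (sketch only, not actual operator output): the two env lists below contain the same variables, but because the element order differs, the strategic merge patch reports a diff on every reconcile:

    # rendering from one reconcile
    env:
      - name: s3.client.default.endpoint
        value: "minio.example.com"
      - name: s3.client.default.protocol
        value: "https"

    # same variables, different order on the next reconcile
    env:
      - name: s3.client.default.protocol
        value: "https"
      - name: s3.client.default.endpoint
        value: "minio.example.com"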

swoehrl-mw added the bug (Something isn't working) label on Jul 11, 2022
@emil-jacero (Author)

Tested the fix from @dbason and it works great. The operator no longer gets stuck in an endless loop.

Thanks :)

dbason self-assigned this on Jul 12, 2022
dbason closed this as completed on Aug 10, 2022
@prudhvigodithi (Collaborator)

prudhvigodithi commented Oct 3, 2022

Hey @swoehrl-mw and @dbason, I still see this being added as an env var and not to the ConfigMap (opensearch.yml).
While working on #278, I need to add the setting path.repo: ["/tmp/snapshots"] to opensearch.yml, but it is just added as an env var when passed in spec.general.additionalConfig.
(Documentation link: https://opensearch.org/docs/latest/opensearch/snapshots/snapshot-restore/#register-repository)

    additionalConfig:
      path.repo: '["/tmp/snapshots"]'

This creates the StatefulSet with the env entry:

          - name: path.repo
            value: '["/tmp/snapshots"]'

Error

{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_exception",
        "reason" : "[my-fs-repository] location [/tmp/snapshots] doesn't match any of the locations specified by path.repo because this setting is empty"
      }
    ],
    "type" : "repository_exception",
    "reason" : "[my-fs-repository] failed to create repository",
    "caused_by" : {
      "type" : "repository_exception",
      "reason" : "[my-fs-repository] location [/tmp/snapshots] doesn't match any of the locations specified by path.repo because this setting is empty"
    }
  },
  "status" : 500
}

@swoehrl-mw (Collaborator)

Hi @prudhvigodithi. I have no problem with changing the logic for additionalConfig to use a ConfigMap instead of env vars. We just need to make sure to do a rolling restart of the nodes if the config changes.
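
One common way to trigger that (just a sketch of the pattern, not something the operator does today; the annotation name is hypothetical) is to put a checksum of the rendered config on the pod template, so any ConfigMap change rolls the StatefulSet:

    spec:
      template:
        metadata:
          annotations:
            # hypothetical annotation; the value would be a hash of the rendered opensearch.yml
            opensearch.opster.io/config-checksum: "<sha256 of opensearch.yml>"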

@idanl21 (Collaborator)

idanl21 commented Oct 3, 2022

Hi @prudhvigodithi, I think it would be best to also implement a ConfigMap option, so the user can decide whether to use a ConfigMap or env vars. What do you think?

@prudhvigodithi (Collaborator)

prudhvigodithi commented Oct 3, 2022

Thanks @swoehrl-mw and @idanl21, but at this point, is it possible to pass additionalConfig into opensearch.yml during cluster startup itself? I'm following this: https://github.com/Opster/opensearch-k8s-operator/blob/main/docs/userguide/main.md#configuring-opensearchyml :)

@swoehrl-mw (Collaborator)

@prudhvigodithi The operator just relies on OpenSearch picking up the config options from env vars (I think this is done at startup) and doesn't really modify opensearch.yml. I'm guessing that doesn't work for list values? Like I hinted at, we might need to change the logic to not use the env var mechanism but supply an opensearch.yml via a mounted ConfigMap.
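
Roughly what I mean, as a sketch of that alternative (not current operator behaviour; the ConfigMap name and mount path are assumptions):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: os-cluster-01-config   # hypothetical name
    data:
      opensearch.yml: |
        path.repo: ["/tmp/snapshots"]

    # mounted into each OpenSearch container along these lines:
    volumeMounts:
      - name: config
        mountPath: /usr/share/opensearch/config/opensearch.yml
        subPath: opensearch.yml
    volumes:
      - name: config
        configMap:
          name: os-cluster-01-config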

@prudhvigodithi (Collaborator)

@prudhvigodithi The operator just relies on OpenSearch picking up the config options from env vars (I think this is done at startup) and doesn't really modify opensearch.yml. I'm guessing that doesn't work for list values? Like I hinted at, we might need to change the logic to not use the env var mechanism but supply an opensearch.yml via a mounted ConfigMap.

Makes sense, thanks @swoehrl-mw. Let me open a new issue to track this.

@prudhvigodithi (Collaborator)

Also just FYI, I see this works for the dashboards; maybe we have to use the same logic there for OpenSearch as well.
@swoehrl-mw @idanl21

@segalziv (Contributor)

segalziv commented Oct 3, 2022

Coming late into the discussion, but if I'm not mistaken the env var mechanism was introduced by @dbason. Dan, are we missing anything by moving to a ConfigMap? Is there a reason for env vars that we should be aware of?
