
Namespace modification when installing k8s-otel operator #56

Open
eedugon opened this issue Nov 19, 2024 · 3 comments


@eedugon
Contributor

eedugon commented Nov 19, 2024

Per #50 (comment), it looks like we support installing all components in a different namespace, but the provided values file would require certain modifications.

The scope of this issue is:

  1. Determine if there's any way to decouple the namespace installation from values.yaml so a user could set the installation namespace without needing to tweak multiple values.

  2. Test the namespace customization use case to ensure it fully works.

  3. Add to the installation document the information needed by users who might want to tweak the namespace where the resources are installed. The content to add will depend on the outcome of the previous point.

@rogercoll , @gizas : If either of you can take a look and review point 1 above, I can take care of points 2 and 3 afterwards ;)


From the linked PR comment:

helm upgrade --install opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
--namespace opentelemetry-operator-system2 \
--values './values.yaml' \
--version '0.3.3'

In order for that to work, the user needs to (see the sketch after this list):

  1. Update https://github.com/elastic/opentelemetry/blob/8.16/resources/kubernetes/operator/helm/values.yaml#L721
  2. Update the instrumentation endpoint at https://github.com/elastic/opentelemetry/blob/8.16/resources/kubernetes/operator/helm/values.yaml#L936
  3. And, of course, use the correct namespace in the annotations later on, in the manual steps of annotating the deployments' apps
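
For illustration, a minimal sketch of those three edits, assuming the opentelemetry-operator-system2 namespace from the helm command above; the service name and instrumentation name are taken from the default values file, and the annotation follows the operator's "namespace/name" convention:

# 1) Instrumentation exporter endpoint: swap in the custom namespace
instrumentation:
  exporter:
    endpoint: http://opentelemetry-kube-stack-daemon-collector.opentelemetry-operator-system2.svc.cluster.local:4318

# 2) Filelog receiver exclusion for the collectors' own logs
filelog:
  exclude:
  - /var/log/pods/opentelemetry-operator-system2_opentelemetry-kube-stack*/*/*.log

# 3) Workload annotation (Java example), referencing the Instrumentation
#    as "<namespace>/<name>"
metadata:
  annotations:
    instrumentation.opentelemetry.io/inject-java: "opentelemetry-operator-system2/elastic-instrumentation"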

@rogercoll
Contributor

Determine if there's any way to decouple the namespace installation from values.yaml so a user could set the installation namespace without needing to tweak multiple values.

There are two main usages of the namespace name in the values.yaml file:

  1. In the "Instrumentation" section:
instrumentation:
  name: elastic-instrumentation
  enabled: true # Enable/disable auto-instrumentation.
  exporter:
    endpoint: http://opentelemetry-kube-stack-daemon-collector.opentelemetry-operator-system.svc.cluster.local:4318

A possible solution to get rid of the "namespace" in the endpoint value would be to use the node DNS value instead, similar to the proposal in this PR: https://github.com/elastic/opentelemetry/pull/20/files
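
For reference, a sketch of what that node-local alternative could look like, assuming the daemonset collector exposes OTLP/HTTP on a host port (the HOST_IP variable name is illustrative; Kubernetes fills it from the pod's own status):

instrumentation:
  name: elastic-instrumentation
  enabled: true
  env:
    # Resolved by Kubernetes to the IP of the node running the pod
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
  exporter:
    endpoint: http://$(HOST_IP):4318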

  2. In the daemonset collector configuration, to prevent collecting the logs of the collectors/operator deployed in the namespace:
        filelog:
          retry_on_failure:
            enabled: true
          start_at: end
          exclude:
          # exclude collector logs
          - /var/log/pods/opentelemetry-operator-system_opentelemetry-kube-stack*/*/*.log

We could either just remove the namespace from the path:

        filelog:
          retry_on_failure:
            enabled: true
          start_at: end
          exclude:
          # exclude collector logs
          - /var/log/pods/*opentelemetry-kube-stack*/*/*.log

Or we could even re-evaluate whether those logs would be valuable to the user and remove the exclusion altogether.

@eedugon
Contributor Author

eedugon commented Nov 19, 2024

Thanks for your comments @rogercoll!

For the moment I'd leave it as it is and also work on the docs side; the reasons are:

A possible solution to get rid of the "namespace" in the endpoint value would be to use the node DNS value instead, similar to the proposal in this PR: https://github.com/elastic/opentelemetry/pull/20/files

I think it's better to rely on Kubernetes service load balancing rather than sending all the traces to the DaemonSet Pod on the node where the application is running. But if there are stronger reasons for the change, I wouldn't complain.

Ideally (IMO) the best approach would be to put the logic in the chart template, as the template knows the name of the DaemonSet service and the namespace where it is being installed, but considering the chart is not ours, maybe this isn't worthwhile.
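
As a purely hypothetical illustration (it would have to live in the upstream chart's templates, and it assumes the daemonset service follows the <release-name>-daemon-collector naming seen above), the template could derive both values from the release:

exporter:
  # Both the release name and namespace are known at render time
  endpoint: http://{{ .Release.Name }}-daemon-collector.{{ .Release.Namespace }}.svc.cluster.local:4318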

In the daemonset collector configuration to prevent collecting the logs of the collectors/operator deployed in the namespace

Ideally the solution here would be to create a feature like excludeSelfLogs: true / false in the values file and then add or omit the exclusion in the chart template, but I assume that's not feasible because it's not our Helm chart.
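
For the record, a sketch of what such a toggle could look like upstream; both the value name and the template snippet are hypothetical:

# values.yaml
excludeSelfLogs: true

# collector config template
filelog:
  exclude:
  {{- if .Values.excludeSelfLogs }}
  # skip the logs of the release's own collectors
  - /var/log/pods/{{ .Release.Namespace }}_{{ .Release.Name }}*/*/*.log
  {{- end }}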

So let's leave it the way it is and document it properly. If we change the logs-exclusion behavior in the future, we will adapt the docs accordingly.

If you agree I'll work on this issue focusing on the current behavior and the needs to customize the namespace :)

@rogercoll
Contributor

Another solution would be to add the namespace as an environment variable to the deployments and expand it in the configuration:

instrumentation:
  name: elastic-instrumentation
  enabled: true # Enable/disable auto-instrumentation.
  env:
    - name: OTEL_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  exporter:
    endpoint: http://opentelemetry-kube-stack-daemon-collector.$(OTEL_NAMESPACE).svc.cluster.local:4318
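
(A caveat with this approach, as far as I understand it: Kubernetes only expands $(VAR) references against environment variables defined earlier in the same container, so OTEL_NAMESPACE would have to be injected before the variable that carries the exporter endpoint. The service name in the endpoint above is the one from the default values file.)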

Ideally the solution here would be to create a feature like excludeSelfLogs: true / false in the values file and then add or omit the exclusion in the chart template, but I assume that's not feasible because it's not our helm chart.

We can always propose it to the upstream community 😄
