
[opentelemetry-php-contrib] Ability to disable Laravel artisan console command hooks #1440

Closed
LauJosefsen opened this issue Nov 27, 2024 · 5 comments


@LauJosefsen

Is your feature request related to a problem?
Hi guys, I think long-running/worker-mode artisan commands in the Laravel auto-instrumentation still have the issue of creating "forever" outer spans. I see that the Kernel hooks are disabled by default via LaravelInstrumentation::shouldTraceCli() in the Console/Kernel instrumentation; however, we still create a span for each individual command in https://github.com/opentelemetry-php/contrib-auto-laravel/blob/main/src/Hooks/Illuminate/Console/Command.php

This can be observed in any long-running/worker-mode artisan command. A common example is queue:work, but it also applies to custom worker-mode artisan commands such as a Kafka consumer or producer.

I don't know if this is bothering anyone else. Is there currently a way for me, as an integrator, to disable this hook for certain commands or environments? We previously maintained a fork of the auto-instrumentation that disabled the hook when the env variable APP_LONG_RUNNING was set, but I was hoping to discuss an improvement that does not involve us maintaining a fork just for this small change.

Describe the solution you'd like

Unless you have a better idea, I propose adding a check similar to LaravelInstrumentation::shouldTraceCli(), but driven by a different env variable, perhaps with true as the default.
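
A minimal sketch of what that could look like (the method name and environment variable below are hypothetical, not part of the current instrumentation):

```php
<?php

// Hypothetical sketch only: shouldTraceCliCommands() and the
// OTEL_PHP_LARAVEL_TRACE_CLI_COMMANDS variable do not exist today; they
// illustrate the proposed opt-out, mirroring shouldTraceCli().
class LaravelInstrumentation
{
    public static function shouldTraceCliCommands(): bool
    {
        $value = getenv('OTEL_PHP_LARAVEL_TRACE_CLI_COMMANDS');
        if ($value === false) {
            // Default to true so current behaviour is unchanged for existing users.
            $value = 'true';
        }

        return filter_var($value, FILTER_VALIDATE_BOOLEAN);
    }
}
```

The hook in Hooks/Illuminate/Console/Command.php would then return early when this check is false, so no per-command span is started for worker-mode processes.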

Describe alternatives you've considered
We used to maintain a fork that solved the issue for our use cases with a custom environment variable for long-running commands, but it would be nicer to have a solution upstream, as I imagine other integrators are facing the same issue.

@ChrisLightfootWild
Contributor

This is related to #1411 as well.

I actually started work on this but never pushed it up. I will try to find some time later today to do that.

@LauJosefsen
Author

I see. Closing this in favor of #1411. Feel free to ping me if you need any feedback or help with the implementation. I am also on the CNCF Slack 👍

@LauJosefsen
Author

> This is related to #1411 as well.
>
> I actually started work on this but never pushed it up. I will try to find some time later today to do that.

I don't know if it's relevant, but we are looking into this issue again because we started experiencing memory leaks across all services after updating some Composer packages (including the OTEL API, SDK and auto-instrumentations), and the leaks stop when we disable the OTEL auto-instrumentation. It might be an unrelated problem, though.

@flc1125
Member

flc1125 commented Dec 13, 2024

Some supplementary information for reference: https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/#span-limits

It is likely because of the OTEL_SPAN_EVENT_COUNT_LIMIT configuration: a single span supports at most 128 events by default, which is why my dashboard only shows 128.
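
For anyone configuring the SDK in code rather than via environment variables, here is a rough sketch of raising that limit with the PHP SDK's SpanLimitsBuilder (the value 1024 is just an example; the exact wiring into the TracerProvider depends on how the SDK is bootstrapped):

```php
<?php

use OpenTelemetry\SDK\Trace\SpanLimitsBuilder;

// Equivalent to exporting OTEL_SPAN_EVENT_COUNT_LIMIT=1024 in the environment;
// the spec default is 128 events per span.
$spanLimits = (new SpanLimitsBuilder())
    ->setEventCountLimit(1024)
    ->build();

// Pass $spanLimits to the TracerProvider when constructing it manually.
```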


Hope that helps.

@flc1125
Member

flc1125 commented Dec 13, 2024

> > This is related to #1411 as well.
> >
> > I actually started work on this but never pushed it up. I will try to find some time later today to do that.
>
> I don't know if it's relevant, but we are looking into this issue again because we started experiencing memory leaks across all services after updating some Composer packages (including the OTEL API, SDK and auto-instrumentations), and the leaks stop when we disable the OTEL auto-instrumentation. It might be an unrelated problem, though.

From my experience, this does not seem to be the same as my current problem.

I don't know your actual setup, but based on past experience, it is very likely that the data is not being exported to the remote endpoint, which causes it to pile up in local memory.

If that is the cause, you could consider deploying a collector service as close to the application service as possible (on the same network) and exporting to it.
