
Collect more runtime metrics #2155

Open

pkositsyn opened this issue Nov 14, 2020 · 14 comments

Comments

@pkositsyn (Contributor) commented Nov 14, 2020

Is your feature request related to a problem?
Sometimes it's useful to have the goroutine count or GC-related metrics exposed.

Describe the solution you'd like
Add these metrics to the collector's internal metrics exporter.

Describe alternatives you've considered
Adding configuration would be useful as well, but I don't see an elegant way to add it for the internal exporter.
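
A minimal sketch of the kind of data being requested, using only the standard library's runtime package; how this would be wired into the collector's internal metrics exporter is exactly what the issue is asking about.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Goroutine count and GC-related statistics read straight from the Go runtime.
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)

	fmt.Printf("goroutines:     %d\n", runtime.NumGoroutine())
	fmt.Printf("gc cycles:      %d\n", ms.NumGC)
	fmt.Printf("heap alloc:     %d bytes\n", ms.HeapAlloc)
	fmt.Printf("total gc pause: %d ns\n", ms.PauseTotalNs)
}
```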

@bogdandrutu (Member)

I think we should remove the custom runtime metrics from the service and instead use one of the ways we already allow collecting runtime metrics for any Go binary.

I would suggest we expose expvar on our service and scrape these metrics using an expvar receiver. There are a couple of reasons to do this:

  1. This is a generic mechanism that allows us to monitor any Go binary, not just the collector.
  2. It avoids unnecessary code to scrape runtime metrics for this binary.

Thoughts?
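
A minimal sketch of the expvar approach proposed above: merely importing expvar registers a /debug/vars handler on the default HTTP mux that already publishes memstats and cmdline, and a separate expvar receiver would then scrape that endpoint. The port and the extra published variable are illustrative assumptions, not anything the collector defines.

```go
package main

import (
	"expvar" // importing expvar registers /debug/vars on http.DefaultServeMux
	"log"
	"net/http"
	"runtime"
)

func main() {
	// Publish an extra value alongside the built-in memstats and cmdline vars.
	expvar.Publish("goroutines", expvar.Func(func() any {
		return runtime.NumGoroutine()
	}))

	// The values are now scrapeable as JSON at http://localhost:8888/debug/vars
	// (port chosen purely for illustration).
	log.Fatal(http.ListenAndServe(":8888", nil))
}
```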

@bogdandrutu (Member)

@Vemmy124 An alternative is to rely on the opentelemetry-go library to produce these metrics.

@rakyll (Contributor) commented Nov 17, 2020

@bogdandrutu The only issue with expvar is that there are not many tools that know how to parse it. Otherwise, I agree with your points.

@pkositsyn (Contributor, Author) commented Nov 20, 2020

@bogdandrutu @rakyll I agree that expvar isn't popular, and that's a real problem: monitoring would almost certainly require an exporter (e.g. another collector with an expvar receiver and a Prometheus exporter, which still leaves the problem of monitoring that last collector). Moreover, I have doubts about its efficiency, since it takes a lock for every variable update, though it might not be that bad.

By the way, I actually don't mind writing some code for collecting system metrics. Do you think Prometheus is not standardized enough to use? If that's the case, I don't know a good alternative.

@bogdandrutu (Member)

@Vemmy124 Look into opentelemetry-go-contrib; it already has support for more runtime metrics. We should use that instead of OpenCensus for recording internal metrics and install the runtime plugin from there.
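
A minimal sketch of what installing the contrib runtime instrumentation looks like, assuming the go.opentelemetry.io/contrib/instrumentation/runtime package and a MeterProvider configured elsewhere (e.g. by the collector's internal telemetry); the 10-second interval is an illustrative choice.

```go
package main

import (
	"log"
	"time"

	"go.opentelemetry.io/contrib/instrumentation/runtime"
)

func main() {
	// Start registers asynchronous instruments for heap, GC, and goroutine
	// stats on the global MeterProvider; an exporter configured elsewhere is
	// what actually ships the data out of the process.
	if err := runtime.Start(
		runtime.WithMinimumReadMemStatsInterval(10 * time.Second),
	); err != nil {
		log.Fatal(err)
	}

	select {} // block so the periodic collection keeps running
}
```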

@jpkrohling (Member)

Instead of expvar, we should consider exposing this in the OpenMetrics format. Quite a lot of tools can understand it already, including the most popular tool in this area today (Prometheus).
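
A minimal sketch of exposing the standard Go runtime series in a Prometheus/OpenMetrics-compatible format, assuming the prometheus/client_golang library; the listen address and endpoint are illustrative only.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	reg := prometheus.NewRegistry()
	// The Go collector is what produces the familiar go_goroutines,
	// go_gc_duration_seconds, and go_memstats_* series.
	reg.MustRegister(collectors.NewGoCollector())

	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":8888", nil))
}
```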

@pkositsyn (Contributor, Author)

To be honest, even opentelemetry-go by itself seems complicated to me as a client library. @bogdandrutu Do I understand correctly that you want to replace OpenCensus for metrics across the whole repository? Could you provide the reasons for that and what's wrong with the current approach? This issue is stale only because I cannot see the purpose of changing so many things; I think I need a vision for that.

MovieStoreGuy pushed a commit to atlassian-forks/opentelemetry-collector that referenced this issue Nov 11, 2021
github-actions bot added the Stale label Dec 3, 2022
github-actions bot closed this as not planned Jan 4, 2023
jpkrohling reopened this Jan 11, 2023
@jpkrohling (Member)

Reopening, as I think this is still valid, and we might be closer to achieving it now than we were before, given that we are now closer to using the OTel API in the collector.

github-actions bot removed the Stale label Jan 12, 2023
@HudsonHumphries (Member)

Are there any updates on this? I was surprised to find that no go_gc or other Go runtime metrics are emitted by the collector, which lowers our ability to monitor what is happening in our collector processes. I would be interested in helping implement this if there is an agreed-upon way to move forward.

hughesjj pushed a commit to hughesjj/opentelemetry-collector that referenced this issue Apr 27, 2023
@gowtham-sundara

Could you please expose the standard go_ metrics? This would really help with tuning GOGC and GOMEMLIMIT.
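
For reference, a minimal sketch of reading the runtime data behind those go_ series directly from the standard library's runtime/metrics package; the heap goal in particular is what GOGC and GOMEMLIMIT tuning is usually judged against. The sample names below are standard runtime/metrics identifiers, not collector configuration.

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	// A few of the runtime/metrics samples most relevant to GC tuning.
	samples := []metrics.Sample{
		{Name: "/gc/heap/goal:bytes"},
		{Name: "/memory/classes/heap/objects:bytes"},
		{Name: "/sched/goroutines:goroutines"},
	}
	metrics.Read(samples)

	for _, s := range samples {
		fmt.Printf("%s = %d\n", s.Name, s.Value.Uint64())
	}
}
```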

@tomershafir

@bogdandrutu maybe you can help here?

@CarlosLanderas

Any update on this topic? I was going to check garbage collection on some collectors and was surprised that these runtime metrics are not present.

Thanks :)

@XiaoWeiKIN

mark
