[@opentelemetry/instrumentation-http] Configure custom metric attributes #4107
Comments
Hi, thanks for opening this issue 🙂 Could be that I'm misunderstanding, but I'm thinking https://opentelemetry.io/docs/instrumentation/js/resources/ might be what you're looking for. Is it possible to distinguish the variants via resource attributes? Or does that incur the same problems as with adding it to the Prometheus exporter?
Thanks for the review. I realise that, as a maintainer, it is sensitive to add new APIs/features willy-nilly, so hopefully if something is added it is done in a reasonable manner that is in line with the existing library ergonomics. The closest we have managed to solving this with resources is:

```js
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: traceConfiguration.serviceName,
  [SemanticResourceAttributes.SERVICE_INSTANCE_ID]:
    process.env.POD_NAME ?? traceConfiguration.serviceName,
  [SemanticResourceAttributes.SERVICE_VERSION]: 'v666', // hardcoded example
  [SemanticResourceAttributes.SERVICE_NAMESPACE]: 'canary', // hardcoded example
  ...attributesFromEnv,
});

// ...
const sdk = new opentelemetry.NodeSDK({
  metricReader,
  resource,
  instrumentations: [getNodeAutoInstrumentations({ ... })],
  // ...
});

// Configure the meter provider with explicit histogram buckets
sdk.configureMeterProvider({
  views: histogramNames.map(
    (instrumentName) =>
      new View({
        instrumentName,
        aggregation: new ExplicitBucketHistogramAggregation([
          50, 100, 250, 500, 1000, 2500, 5000, 10000, 20000,
        ]),
      }),
  ),
});

sdk.start();
```

The metrics recorded then look like this: *(sample output not preserved in this copy)*
So it seems we got "canary" into the namespace, which could work (although as a hack) if this were possible to determine at boot. However, we see a number of issues with this:

```js
// How do we detect the promotion? Can the SDK even be reconfigured after
// sdk.start() has run, or do we have to shut it down and recreate it completely?
onPromotion(() => {
  sdk.configureTracerProvider(
    {
      resource, // resource was updated with namespace "stable"
    },
    spanProcessor,
  );
});
```

We think that a way of extending attributes as a configuration option at the instrumentation level is in line with the other hooks that are available in node auto-instrumentation; the question is whether we nailed the API usage/ergonomics.
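As an aside, the `attributesFromEnv` value spread into the `Resource` above is not shown in the snippet. One way it could be built — a hypothetical helper, assuming the extra attributes arrive as a comma-separated `key=value` string in the style of the `OTEL_RESOURCE_ATTRIBUTES` environment variable convention — is:

```typescript
// Hypothetical helper: parse a comma-separated "key=value" list (the same
// shape as the OTEL_RESOURCE_ATTRIBUTES convention) into an attribute map.
function parseResourceAttributes(raw: string | undefined): Record<string, string> {
  const attributes: Record<string, string> = {};
  if (!raw) return attributes;
  for (const pair of raw.split(',')) {
    const idx = pair.indexOf('=');
    if (idx <= 0) continue; // skip malformed entries with no key or no '='
    attributes[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  }
  return attributes;
}

const attributesFromEnv = parseResourceAttributes(
  process.env.OTEL_RESOURCE_ATTRIBUTES,
);
```

This keeps the deploy-time variant out of application code: the canary rollout can set the variable, and the stable rollout omits it.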
Hmm, you're right; it does not really fit the namespace. I have to say that Prometheus is not really my area of expertise, but IIRC resources are usually exported via the `target_info` metric. If I understand correctly, this should bring the concept of resource attributes into Prometheus, and then it is possible to split by resource attributes, so it should work with any resource attribute. Of course, if the value of the resource attribute is not determined by the time the container starts, then that's challenging, because I think we don't have a concept for that in OpenTelemetry at the moment.

As such, I think the whole topic should be addressed at the specification level rather than at the individual-instrumentation level, as this would most likely concern not only the telemetry emitted from the http instrumentation but all other instrumentations (across all languages) as well. I'm aware of open-telemetry/oteps#207, but I think this proposal may require some hacking to make it work for this use-case. Then there's also #4045, which prototypes "global attributes" (only for traces and logs at the moment), which sounds more like what would be necessary here. 🤔

Thinking outside what's available in the app, I'm wondering if collector-contrib's kubernetes attributes processor would be able to apply a stable/canary resource attribute based on a k8s label? 🤔
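For the collector-side idea, a rough sketch of what that could look like — assuming a pod label named `deployment-variant` carries the stable/canary value; both the label name and the target attribute name here are illustrative, not part of the thread:

```yaml
processors:
  k8sattributes:
    extract:
      labels:
        # Copy the pod label "deployment-variant" (hypothetical name) into a
        # resource attribute on all telemetry passing through the collector.
        - tag_name: service.variant   # illustrative attribute name
          key: deployment-variant
          from: pod
```

This would keep the variant out of the application entirely, though it only helps deployments that already route telemetry through a collector.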
Have to admit I always forget the discussions feature exists.
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days.
Is your feature request related to a problem? Please describe.
We are deploying canaries and rely heavily on metrics and dashboards to measure how they compare to the stable version currently out in production. However, there is no way to distinguish the Node.js http auto-instrumentation metrics between variants; they all end up mixed together under the same metric labels.
I have created a discussion but no one was able to help me.
Describe the solution you'd like
I want a clear and intuitive API for configuring custom metric attributes that users can inject outside of a span.
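To make the ask concrete, here is a sketch of the kind of hook shape meant here. Note that this API does not exist in `@opentelemetry/instrumentation-http` — the hook name, its signature, and the merge helper below are all hypothetical illustration:

```typescript
// Hypothetical: what a per-instrumentation metric-attributes hook could look like.
type MetricAttributes = Record<string, string | number>;

// The user would supply this; the instrumentation would call it on each
// metric recording and merge the result into the attributes it emits.
type MetricAttributesHook = (base: MetricAttributes) => MetricAttributes;

// Illustrative merge the instrumentation might perform internally:
function withCustomAttributes(
  base: MetricAttributes,
  hook?: MetricAttributesHook,
): MetricAttributes {
  return hook ? { ...base, ...hook(base) } : base;
}

// Example user configuration (hypothetical option):
const customMetricAttributes: MetricAttributesHook = () => ({
  'service.variant': process.env.DEPLOYMENT_VARIANT ?? 'stable',
});
```

With something like this, the canary/stable split would land directly on the http metrics, independent of the resource.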
Describe alternatives you've considered
Inject the attributes from the Prometheus exporter.
This feels wrong for two reasons:
Change pod name
By changing the pod name we could get `exported_job` to become different, but that would be bad because they are all variants of the same app. The resulting breaking changes to dashboards would be too disruptive to be worth it.