Support async/await syntax #62
Comments
If we can solve the <3.6 problem I think it'd be great to include both sync and async versions of some methods. What do we lose by using |
I haven't got time to explore this, so I don't know the answer at the moment. Just listing these questions so I won't forget about them :) From my memory, |
Closing this for now, re-open if it becomes an issue in the future. |
Currently we're only supporting Python 3.6+, so the concerns around breaking tests should be resolved. @thedrow re-opening, would love some help implementing this if you have some bandwidth. |
No news on this topic? |
Dumb question: What does it mean to support async/await exactly? Are we talking about instrumentations working with async/await functions (e.g., tornado request handlers), context propagation working with async/await (if it doesn't already), or do we expect async/await versions of some API methods, and why? |
I have the exact same question as well. I find it a bit weird that we are to add anything to our implementation without it being in the spec first. Is it there already? I can understand that the idea is for an implementation to do some preliminary research of a certain feature, but if that is the case it would be very helpful to have also a preliminary specification (it can be drafted in this issue, I don't mean it to be in the spec repo) so that we have an agreement and understanding on what is being attempted here. |
OK. That clarifies it a bit more. So this does not apply to the API specifically. What we want are async versions of BatchSpanProcessor and probably core exporters. Is that right? |
It would be great if someone interested in this could take this up and produce a design doc or proposal with technical details describing all possible directions we can take, with pros and cons weighed, so we can move forward on this. Ideally it'd be great if we could support both sync and async use cases with the same methods, but I'm not sure how easily that can be done, or even if it's possible. |
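A minimal sketch of one direction such a proposal could weigh: expose both a sync and an async variant of the same operation, with the sync one wrapping the async implementation. The names (`ExampleExporter`, `export`/`export_async`) are illustrative only, not existing OpenTelemetry API.

    import asyncio
    from typing import Sequence


    class ExampleExporter:
        async def export_async(self, spans: Sequence[object]) -> None:
            # Stand-in for a non-blocking network call.
            await asyncio.sleep(0)

        def export(self, spans: Sequence[object]) -> None:
            # Sync facade over the async implementation. This only works when no
            # event loop is already running in the current thread, which is one
            # of the trade-offs a design doc would need to weigh.
            asyncio.run(self.export_async(spans))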
Another aspect of this: Each asyncio task needs to have its own root context span. Currently when you create a span, it assumes that if there's a current span active, that this should be the span to have as the parent span. In asyncio, this is nearly always wrong. |
I've been thinking about this a bit for metrics as well. I think we would want to be able to create Observable instruments with async callbacks too, which requires some API change. This could probably be done by only updating the Meter API's type annotations to accept async or regular forms, and then updating the SDK to handle these.
@Alan-R can you explain this a bit more? I thought our context implementation which uses contextvars should work correctly for asyncio in most cases. |
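On the callback idea above, a minimal sketch of what an annotation-only change could look like, accepting either plain or coroutine callbacks for observable instruments. The type aliases are illustrative, not the actual Meter API.

    from typing import Awaitable, Callable, Iterable, Union

    from opentelemetry.metrics import Observation

    SyncCallback = Callable[[], Iterable[Observation]]
    AsyncCallback = Callable[[], Awaitable[Iterable[Observation]]]
    ObservableCallback = Union[SyncCallback, AsyncCallback]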
> @Alan-R can you explain this a bit more? I thought our context implementation which uses contextvars should work correctly for asyncio in most cases.

I was mistaken. That's how it was for OpenTracing. I mistakenly thought it was the same for OpenTelemetry.
My apologies.
|
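Since the question keeps coming up in this thread, here is a minimal sketch (not from the thread, and assuming opentelemetry-api and opentelemetry-sdk are installed) showing that the contextvars-based context does flow into asyncio tasks, so a span started inside a task is parented to the caller's current span:

    import asyncio

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider

    trace.set_tracer_provider(TracerProvider())
    tracer = trace.get_tracer(__name__)


    async def child() -> None:
        with tracer.start_as_current_span("child") as span:
            # The parent is the "parent" span started in main(), because asyncio
            # copies the current contextvars.Context into each new task.
            print("child's parent span id:", span.parent.span_id)


    async def main() -> None:
        with tracer.start_as_current_span("parent") as span:
            print("parent span id:", span.get_span_context().span_id)
            await asyncio.create_task(child())


    asyncio.run(main())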
Any news about this? it would also be nice to have async tracer as well so we can do something like this

    async with tracer.start_span_async("span"):
        pass

|
@gen-xu What do you have in mind for |
@TBBle yes if we have
The current implementation of |
Thanks for this number. Can you provide a bit more info about the machine and setup? The benchmark tests suggest it should be 50us at most (20k iterations/sec). Or maybe I misunderstood how those benchmarks are supposed to be interpreted. |
The things protected by |
If you're using BatchSpanProcessor and one of the built-in samplers, this should be CPU-bound work plus enqueuing the message in a queue. On my machine:

    $ python3 -m timeit -s "from threading import Lock; l = Lock()" "with l: pass"
    2000000 loops, best of 5: 163 nsec per loop

Acquiring the locks shouldn't be too expensive if there isn't lock contention. Since you mention asyncio, I'm assuming you don't have many threads. I'm trying to think of what async work might be done if we introduced a |
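For a rough number on the tracing hot path itself, a similar one-liner against the SDK (assuming opentelemetry-sdk is installed; with no span processor configured it times only span creation and sampling):

    $ python3 -m timeit -s "from opentelemetry.sdk.trace import TracerProvider; t = TracerProvider().get_tracer('bench')" "with t.start_as_current_span('s'): pass"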
@gshpychka Thanks for the link, I didn't know there was a benchmark page available. I think the numbers I gave were not accurate; they were based on the viztracer library I used, so it was probably just viztracer's overhead. I re-did the measurement myself with simply time() and it gives numbers like 50~60 us. Sorry for the false positive. |
I'm not sure if it should be a separate ticket, but working through how to deliver observable instruments using an async coroutine might be interesting. I haven't really done this thought experiment myself, and so don't know if it's even feasible; I only raise it because a colleague was looking into adding an observable gauge to our asyncio-based code, and had to use the workaround below.

Looking at the anyio equivalent, it's possible that there isn't a good solution, as the callback needs too much event loop info or pre-work, so doing what my colleague has currently done in user code may be our best option.

    def message_count_callback():
        coro = message_source.consumer_count()
        future = asyncio.run_coroutine_threadsafe(coro, loop)
        message_count = future.result()
        yield Observation(message_count)

|
This is basically what I had in mind. We can accept an optional event loop in the MeterProvider (if not provided create a background thread with a loop running for all of OpenTelemetry to use) and run any coroutine callbacks in that loop. The benefit of doing it in the SDK vs user code is that we can run the async callbacks for all instruments in parallel.
I'm not super familiar with anyio; how important is it to everyone that we use anyio instead of just asyncio here? We are usually pretty conservative about taking new dependencies. |
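A minimal sketch (hypothetical, not SDK API) of the background-loop idea described above: the SDK owns a private event loop on a daemon thread and schedules all coroutine callbacks onto it concurrently. The class and method names are illustrative.

    import asyncio
    import threading


    class _BackgroundLoop:
        """Runs a private asyncio event loop on a daemon thread."""

        def __init__(self) -> None:
            self._loop = asyncio.new_event_loop()
            self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
            self._thread.start()

        def run_async_callbacks(self, callbacks, timeout: float = 10.0):
            """Run all coroutine callbacks concurrently and return their results."""

            async def _gather():
                return await asyncio.gather(*(cb() for cb in callbacks))

            future = asyncio.run_coroutine_threadsafe(_gather(), self._loop)
            return future.result(timeout=timeout)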
At least for anyio, I thought it was already being used in the SDK for something, but I just looked and it's not; I must have been thinking of a different project, sorry. I see we're directly using asyncio in the tests and docs, but the SDK code itself has no asyncio usage at all.

I will note that anyio has been a pretty common sight in other libraries I've used that have to deal with async but aren't asyncio-specific, but I haven't really looked into it, or had to deal with it directly myself. I have no idea how widespread non-asyncio event loops, e.g. trio, are in practice, so I can't really comment on the cost/value tradeoff of using it. |
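For reference, a hedged sketch of what the anyio route might look like if it were ever adopted: a blocking portal runs coroutines on a backend-agnostic event loop from synchronous code (for example, an observable-instrument callback). `consumer_count` is a made-up stand-in for an async data source.

    import anyio
    from anyio.from_thread import start_blocking_portal


    async def consumer_count() -> int:
        # Stand-in for an async data source.
        await anyio.sleep(0)
        return 42


    with start_blocking_portal() as portal:
        # Works from synchronous code, such as an observable-instrument callback.
        print(portal.call(consumer_count))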
Is it possible to add to the documentation that |
+1 Also I am using this snippet myself to work around that; it might be helpful to others:

    from contextlib import asynccontextmanager


    @asynccontextmanager
    async def start_as_current_span_async(tracer, *args, **kwargs):
        with tracer.start_as_current_span(*args, **kwargs) as span:
            yield span

and so it can be used as:

    @start_as_current_span_async(tracer, "my_span")
    async def some_async_func():
        ...

|
By looking at it this should work as-is for a wrapper for any function (sync/async), right? |
I don't think the wrapper (if you mean the |
This seems like valuable low-hanging fruit to add as It might even be feasible for |
> Any news about this? it would also be nice to have async tracer as well so we can do something like this
>
>     async with tracer.start_span_async("span"):
>         pass

I was mistaken, this works great as is.
|
I used this snippet and an error occurred: |
Not sure if this is the best way, but I was able to get around the error with this:

    from contextlib import _AsyncGeneratorContextManager
    from functools import wraps


    def customasynccontextmanager(func):
        @wraps(func)
        def helper(*args, **kwds):
            return _CustomAsyncGeneratorContextManager(func, args, kwds)
        return helper


    # Subclass that also works as a decorator for async functions by re-entering
    # the context manager around each call.
    class _CustomAsyncGeneratorContextManager(_AsyncGeneratorContextManager):
        def __call__(self, func):
            @wraps(func)
            async def inner(*args, **kwds):
                async with self.__class__(self.func, self.args, self.kwds):
                    return await func(*args, **kwds)
            return inner


    @customasynccontextmanager
    async def start_as_current_span_async(*args, **kwargs):
        with tracer.start_as_current_span(*args, **kwargs) as span:
            yield span

courtesy of python/cpython@86b833b |
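A usage sketch for the workaround above (the span name and decorated function are hypothetical); positional arguments are forwarded to tracer.start_as_current_span:

    @start_as_current_span_async("my_span")
    async def some_async_func():
        ...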
It's better for the library to handle it internally... this would avoid massive imports or just copy-pasting this shortcut... but in the meantime, I guess it works 🤝 |
I think there are quite a few feature requests in this thread and some discussion that just confuses users. I would like to close this issue since it is a bit confusing and open separate issues for those, if that sounds OK to everyone.
I'll throw another one in
|
Please open new issues if I missed anything. I'm closing this as obsolete in favor of the child issues: |
I wonder if the snippet might either have problems on earlier Python versions, or because the … On Python 3.11, this works for me perfectly (with type hints too - I spent a couple of hours trying and failing to get the longer solution using …):

    from collections.abc import AsyncGenerator
    from contextlib import asynccontextmanager
    from typing import Any

    from opentelemetry.trace import Tracer


    @asynccontextmanager
    async def start_as_current_span_async(
        *args: Any,
        tracer: Tracer,
        **kwargs: Any,
    ) -> AsyncGenerator[None, None]:
        """Start a new span and set it as the current span.

        Args:
            *args: Arguments to pass to the tracer.start_as_current_span method
            tracer: Tracer to use to start the span
            **kwargs: Keyword arguments to pass to the tracer.start_as_current_span method

        Yields:
            None
        """
        with tracer.start_as_current_span(*args, **kwargs):
            yield

I was then able to use this directly like:

    from opentelemetry.trace import get_tracer

    tracer = get_tracer(__name__)


    @start_as_current_span_async(tracer=tracer, name='my_func')
    async def my_func() -> ...:
        ...

Both … |
@jamestrousdale this seems like valuable information, can you please add it to the issue above where it fits better? This is a closed issue and I don't want this comment of yours to be overlooked because of that. |
@jamestrousdale nice solution. But it can be more flexible if you add the function as an argument:

With this code it becomes possible to decorate a function not only with decorator syntax:
|
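A purely hypothetical sketch of one way to read this suggestion, with a made-up `trace_async` helper that accepts the wrapped function, so it can be applied either with decorator syntax or called directly:

    from collections.abc import Awaitable, Callable
    from functools import wraps
    from typing import Any

    from opentelemetry import trace
    from opentelemetry.trace import Tracer

    tracer = trace.get_tracer(__name__)


    def trace_async(tracer: Tracer, name: str):
        """Wrap an async function so each call runs inside a span."""

        def decorator(func: Callable[..., Awaitable[Any]]):
            @wraps(func)
            async def wrapper(*args: Any, **kwargs: Any) -> Any:
                with tracer.start_as_current_span(name):
                    return await func(*args, **kwargs)

            return wrapper

        return decorator


    # Usable with decorator syntax...
    @trace_async(tracer, "my_func")
    async def my_func() -> None:
        ...


    # ...or applied to an existing function directly:
    async def other_func() -> None:
        ...

    traced_other_func = trace_async(tracer, "other_func")(other_func)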
I've encountered a slowdown in my async application after including opentelemetry (I previously used opencensus). Is there any way to use opentelemetry without locking in an asyncio environment? |
I also encountered a slowdown in my async application after including opentelemetry. I have an application with a lot of async tasks running. Response latency improved by 25% just by removing all opentelemetry. I came up with the hypothesis that maybe it was some lock contention when starting a span as the current span. I decided to refactor my application to not use the |
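For anyone experimenting along these lines, one possible illustration (assuming the part being avoided is the current-span context machinery) is to create and end spans manually instead of using the context-manager API, parenting children explicitly:

    from opentelemetry import trace
    from opentelemetry.trace.propagation import set_span_in_context

    tracer = trace.get_tracer(__name__)

    span = tracer.start_span("operation")
    try:
        # Parent the child explicitly instead of relying on the current context.
        child = tracer.start_span("sub-operation", context=set_span_in_context(span))
        child.end()
    finally:
        span.end()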
Python has supported async/await syntax since version 3.5, and async/await became reserved keywords in 3.7.
We want to explore how to support this in OpenTelemetry Python.
Things to be explored are: