- sdk: aworkflow decorator for async generators (#2292)
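A minimal sketch of what the entry above (#2292) enables, assuming the `aworkflow` decorator exported from `traceloop.sdk.decorators` at the time:

```python
# Sketch (assumed API): tracing an async generator with @aworkflow.
import asyncio

from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import aworkflow

Traceloop.init(app_name="changelog-demo", disable_batch=True)

@aworkflow(name="stream_tokens")
async def stream_tokens():
    # Before the fix, decorating an async generator broke iteration;
    # now each value is yielded through while the workflow span stays open.
    for token in ["hello", "world"]:
        yield token

async def main():
    async for token in stream_tokens():
        print(token)

asyncio.run(main())
```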
- cohere: rerank exception on saving response when return_documents=True (#2289)
- sdk: gemini instrumentation was never installed due to package name error (#2288)
- sdk: remove print (#2285)
- general bump of otel dependencies (#2274)
- openai: exception thrown with pydantic v1 (#2262)
- vertex: async / streaming was missing output fields (#2253)
- sdk: don't serialize large jsons as inputs/outputs (#2252)
- anthropic: instrument Anthropic's system message as gen_ai.prompt.0 (#2238)
- llamaindex: streaming LLMs caused detached spans (#2237)
- sdk: missing sagemaker initialization (#2235)
- sdk: support a "block-list" of things to not instrument (#1958)
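The block-list entry above (#1958) is easiest to read as code. A hedged sketch, assuming an `Instruments` enum in `traceloop.sdk.instruments` and a `block_instruments` keyword on `init`; the enum members shown are assumptions:

```python
# Hedged sketch (assumed kwarg and enum members): instrument everything
# except the libraries listed in the block-list.
from traceloop.sdk import Traceloop
from traceloop.sdk.instruments import Instruments

Traceloop.init(
    app_name="changelog-demo",
    block_instruments={Instruments.REQUESTS, Instruments.URLLIB3},
)
```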
- openai+anthropic: async call crashing the app when already in a running asyncio loop (#2226)
- langchain: structured output response parsing (#2214)
- anthropic: add instrumentation for Anthropic prompt caching (#2175)
- bedrock: cohere models failed to report prompts (#2204)
- sdk: capture posthog events as anonymous; roll ingestion key (#2194)
- langchain: various bugs and edge cases in metric exporting (#2167)
- sdk: add header for logging exporter (#2164)
- langchain: metrics support (#2154)
- langchain: Add trace context to client requests (#2152)
- anthropic: add instrumentation for Anthropic tool calling (alternative to #1372) (#2150)
- openai: add structured output instrumentation (#2111)
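The structured-output entry above (#2111) covers calls like the following; this is standard `openai` v1 client usage, and with the instrumentation active no extra wiring is needed for the span to capture the parsed response:

```python
# Standard OpenAI structured-output call; the instrumentation traces it
# transparently once installed.
from openai import OpenAI
from pydantic import BaseModel

class Answer(BaseModel):
    text: str
    confidence: float

client = OpenAI()
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Is the sky blue?"}],
    response_format=Answer,
)
print(completion.choices[0].message.parsed)
```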
- sdk: add OpenTelemetry logging support (#2112)
- ollama: tool calling (#2059)
- llama-index: add attribute to span for llm request type in dispatcher wrapper (#2141)
- traceloop-sdk: add aiohttp as dependency (#2094)
- anthropic: Replace count_tokens with usage for newer models (#2086)
- bedrock: support metrics for bedrock (#1957)
- SageMaker: Add SageMaker instrumentation (#2028)
- langchain: token usage reporting (#2074)
- langchain: serialize inputs and outputs with pydantic (#2065)
- sdk: custom image uploader (#2064)
- support async image upload flows (#2051)
- anthropic: add support for base64 images upload for anthropic (#2029)
- anthropic: token counting exception when prompt contains images (#2030)
- sdk+openai: support base64 images upload (#2000)
- sdk: wrong package check for Vertex AI (#2015)
- langchain: support v0.3.0 (#1985)
- add groq instrumentation (#1928)
- langchain: allow external and langchain metadata (#1922)
- bedrock: llama3 completion wasn't logged (#1914)
- instrumentation: Import redis from OpenTelemetry, add redis sample rag application (#1837)
- langchain: add missing kind property (#1901)
- openai: calculating streaming usage didn't work on Azure models
- langchain: langgraph traces were broken (#1895)
- llama-index: callback improvements (#1859)
- openai: re-enabled token count for azure instances (#1877)
- openai: NOT_GIVEN values threw errors (#1876)
- sdk: `aentity_class` was missing a positional argument (#1816)
- sdk: instrument threading for propagating otel context (#1868)
- openai: TypeError: '<' not supported between instances of 'NoneType' and 'int' in embeddings_wrappers.py (#1836)
- llama-index: Use callbacks (#1546)
- LanceDB Integration (#1749)
- sdk: chained entity path on nested tasks (#1782)
- workflow_name and entity_path support for langchain + fix entity_name (#1844)
- sdk: disable traceloop sync by default (#1835)
- langchain: export metadata as association properties (#1805)
- bedrock: add model name for amazon bedrock response (#1757)
- bedrock: token count for titan (#1748)
- langchain: various cases where not all parameters were logged properly (#1725)
- separate semconv-ai module to avoid conflicts (#1716)
- bump to otel 0.47b0 (#1695)
- openai: log content filter results in proper attributes (#1539)
- openai: add tool call id (#1664)
- pinecone: support v5 (#1665)
- sdk: aworkflow wasn't propagating workflow_name attribute (#1648)
- langchain: agent executors weren't producing traces (#1616)
- openai: pydantic tool calls in prompt weren't serialized correctly (#1572)
- sdk: manual reporting of llm spans (#1555)
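The manual-reporting entry above (#1555) adds a way to emit LLM spans without auto-instrumentation. A hedged sketch; the import path, context manager, and method names follow the SDK docs of that period and should be treated as assumptions:

```python
# Hedged sketch (assumed API): manually reporting an LLM call as a span.
from traceloop.sdk import Traceloop
from traceloop.sdk.tracing.manual import LLMMessage, track_llm_call

Traceloop.init(app_name="changelog-demo")

def call_my_llm() -> str:
    # Hypothetical stand-in for your own model client call.
    return "Why did the span cross the trace?"

with track_llm_call(vendor="openai", type="chat") as span:
    span.report_request(
        model="gpt-4o-mini",
        messages=[LLMMessage(role="user", content="Tell me a joke")],
    )
    span.report_response("gpt-4o-mini", [call_my_llm()])
```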
- langchain: input/output values weren't respecting user config (#1540)
- llamaindex: report entity name (#1525)
- langchain: remove leftover print
- langchain: cleanups, and fix streaming issue (#1522)
- langchain: report llm spans (instead of normal instrumentations) (#1452)
- association properties and workflow / task on metrics (#1494)
- llamaindex: report inputs+outputs on entities (#1495)
- suppress LLM instrumentations through context (#1453)
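The suppression entry above (#1453) works through the OpenTelemetry context API: each instrumentation checks a context value before recording. A hedged sketch; the key string below is an assumed name, not a documented constant:

```python
# Hedged sketch: suppress LLM instrumentation for a block of code via
# OpenTelemetry context. The key name is an assumption.
from opentelemetry import context

token = context.attach(
    context.set_value("suppress_language_model_instrumentation", True)
)
try:
    pass  # LLM calls made here would be skipped by the instrumentations
finally:
    context.detach(token)
```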
- langchain: improve callbacks (#1426)
- sdk: llamaindex instrumentation was never initialized (#1490)
- sdk: prompt versions and workflow versions (#1425)
- openai: add support for parallel function calls (#1424)
- marqo: Add marqo instrumentation (#1373)
- sdk: context detach issues on fastapi (#1432)
- openai: handle `tool_calls` assistant messages (#1429)
- sdk: speed up SDK initialization (#1374)
- gemini: relax version requirements (#1367)
- openai: rename `function_call` to `tool_calls` (#1431)
- langchain: use callbacks (#1170)
- input/output serialization issue for langchain (#1341)
- sdk: remove auto-create dashboard option (#1315)
- sdk: backpropagate association property to nearest workflow/task (#1300)
- sdk: clear context when @workflow or @task is ending (#1301)
- bedrock: utilize invocation metrics from response body for AI21, Anthropic, Meta models when available to record usage on spans (#1286)
- gemini: basic support in generate_content API (#1293)
- alephalpha: Add AlephAlpha instrumentation (#1285)
- instrumentation: add streamed OpenAI function tracing (#1284)
- togetherai: Add together ai instrumentation (#1264)
- anthropic: duplicate creation of metrics (#1294)
- haystack: add input and output (#1202)
- openai: calculate token usage for azure (#1274)
- use constants (#1131)
- instrumentation: Handle OpenAI run polling (#1256)
- openai: handle empty finish_reason (#1236)
- removed debug prints from instrumentations
- vertexai: change the span names to match method calls (#1234)
- openai+anthropic+watsonx: align duration and token.usage metrics attributes with conventions (#1182)
- openai: async streaming responses (#1229)
- sdk: temporarily (?) remove sentry (#1228)
- all packages: Bump opentelemetry-api to 1.25.0 and opentelemetry-instrumentation to 0.46b0 (#1189)
- log tracing errors on debug level (#1180)
- bedrock: support streaming API (#1179)
- weaviate: support v4.6.3 (#1134)
- sdk: wrong package check for mistral instrumentations (#1168)
- vertexai: `vertexai.generative_models` / `llm_model` detection (#1141)
- bedrock: support simple string in prompts (#1167)
- langchain: stringification fails for lists of LangChain `Documents` (#1140)
- mistral: implement instrumentation (#1139)
- ollama: implement instrumentation (#1138)
- anthropic: don't fail if can't count anthropic tokens (#1142)
- ollama: proper unwrapping; limit instrumentations to versions <1
- bedrock: instrument bedrock calls for Langchain (with session) (#1135)
- milvus: add Milvus instrumentation (#1068)
- add explicit buckets to pinecone histograms (#1129)
- pinecone: backport to v2.2.2 (#1122)
- llm metrics naming + views (#1121)
- langchain: better serialization of inputs and outputs (#1120)
- sdk: failsafe against instrumentation initialization errors (#1117)
- sdk: instrument milvus (#1116)
- openai: old streaming handling for backward compatibility with OpenAI v0 (#1064)
- openai: report fingerprint from response (#1066)
- sdk: special handling for metrics with custom traces exporter (#1065)
- openai: fallback to response model if request model is not set when calculating token usage (#1054)
- openai: add default value of stream as false in token usage metric (#1055)
- pinecone: metrics support (#1041)
- sdk: handle workflow & tasks generators (#1045)
- cohere: use billed units for token usage (#1040)
- remove all unneeded tiktoken deps (#1039)
- sdk: removed unneeded tiktoken dependency (#1038)
- openai: relax tiktoken requirements (#1035)
- sdk: loosen SDK requirements for Sentry + Posthog (#1027)
- sdk: separate sentry SDK (#1004)
- langchain: support model-specific packages (#985)
- pinecone: filter argument may be dict (#984)
- instrumentation: correct the module declaration to match package filepath name (#940)
- sdk: otel metrics with traceloop (#883)
- Updated semantic conventions based on otel community (#884)
- sdk: do not instrument sentry requests (used internally by SDK) (#939)
- openai: missing await for Embedding.acreate (#900)
- cohere: support v5 (#899)
- pinecone: support v3 (#895)
- instrumentation: build problem with watsonx auto-instrumentation (#885)
- langchain: input/output reporting (#894)
- sdk: reset the color of messages in the custom metrics exporter (#893)
- openai: azure filtering masked all completions (#886)
- chromadb: exception thrown when metadata isn't set (#882)
- properly handle and report exceptions (#748)
- langchain: bug when retrieving messages as kwargs from model invoke (#856)
- openai: handle filtered content (#854)
- bedrock: loosen version requirement of anthropic (#830)
- haystack: V2 Support (#710)
- sdk: warn for reporting score when not using Traceloop (#829)
- openai: fix aembeddings init error (#828)
- openai: missing aembedding metrics
- anthropic: fix issue with disabled metrics (#820)
- openai: missing metrics for OpenAI v0 instrumentation (#818)
- bedrock: enrich token usage for anthropic calls (#805)
- langchain: use chain names if exist (#804)
- llamaindex: proper support for custom LLMs (#776)
- anthropic: prompt attribute name (#775)
- langchain: BedrockChat model name should be model_id (#763)
- instrumentation-anthropic: Support for OpenTelemetry metrics for Anthropic (#764)
- bedrock: support anthropic v3 (#770)
- sdk: custom instruments missing parameters (#769)
- sdk: import of removed method
- sdk: removed deprecated set_context
- anthropic: do not fail for missing methods
- anthropic: Async and streaming Anthropic (#750)
- openai: async streaming metrics (#749)
- anthropic: token usage (#747)
- openai: switch to init flag for token usage enrichment (#745)
- anthropic: support multi-modal (#746)
- langchain: instrument chat models (#741)
- bump otel -> 0.45.0 (#740)
- enrich spans with related entity name + support entities nesting (#713)
- sdk: stricter dependencies for instrumentations
- openai: missing metric for v0 instrumentation (#735)
- traceloop-sdk: default value for metrics endpoint (#711)
- instrumentation deps without the SDK (#707)
- langchain: support custom models (#706)
- openai: enrich assistant data if not available (#705)
- openai: support pre-created assistants (#701)
- openai: assistants API (#673)
- pinecone: instrument pinecone query embeddings (#368)
- traceloop-sdk: custom span processor's on_start is honored (#695)
- openai: do not import tiktoken if not used
- sdk: exclude api.traceloop.com from requests
- openai: Support report token usage in stream mode (#661)
- anthropic: support messages API (#671)
- auto-instrumentation support (#662)
- sample: poetry issues; litellm sample
- sdk: better logging for otel metrics
- sdk: error for manually providing instrumentation list
- support python 3.12 (#639)
- traceloop-sdk: Log error message when providing wrong API key (#638)
- openai: support tool syntax (#630)
- sdk: protect against unserializable inputs/outputs (#626)
- watsonx instrumentation: Watsonx metric support (#593)
- instrumentations: add entry points to support auto-instrumentation (#592)
- llamaindex: backport to support v0.9.x (#590)
- openai: is_streaming attribute (#589)
- openai: span events on completion chunks in streaming (#586)
- openai: streaming metrics (#585)
- watsonx: Watsonx stream generate support (#552)
- watsonx instrumentation: Init OTEL_EXPORTER_OTLP_INSECURE before import watsonx models (#549)
- link back to repo in pyproject.toml (#548)
- basic Support for OpenTelemetry Metrics and Token Usage Metrics in OpenAI V1 (#369)
- weaviate: implement weaviate instrumentation (#394)
- watsonx: exclude http request, adding span for model initialization (#543)
- llamaindex: instrument agents & tools (#533)
- openai: Fix `with_raw_response` redirect crashing span (#536)
- openai: track client attributes for v1 SDK of OpenAI (#522)
- sdk: replaced MySQL instrumentor with SQLAlchemy (#531)
- sdk: fail gracefully if input/output is not json serializable (#525)
- new PR template (#524)
- cohere: enrich rerank attributes (#476)
- llamaindex: support query pipeline (#475)
- Qdrant instrumentation (#364)
- langchain: support LCEL (#473)
- sdk: fail gracefully in case input/output serialization failure (#472)
- llamaindex: support both new and legacy llama_index versions (#422)
- sdk: url for getting API key (#424)
- openai: handle async streaming responses for openai v1 client (#421)
- support both new and legacy llama_index versions (#420)
- sdk: support input/output of tasks & workflows (#419)
- langchain: backport to 0.0.346 (#418)
- openai: handle OpenAI async completion streaming responses (#409)
- README
- re-enabled haystack instrumentation (#77)
- `resource_attributes` always being None (#359)
- watsonx support for traceloop (#341)
- sdk: support arbitrary resources (#338)
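The arbitrary-resources entry above (#338) amounts to passing extra OpenTelemetry resource attributes at init time. A sketch assuming a `resource_attributes` keyword on `Traceloop.init`:

```python
# Sketch (assumed kwarg): attach custom resource attributes to every
# span the SDK exports.
from traceloop.sdk import Traceloop

Traceloop.init(
    app_name="changelog-demo",
    resource_attributes={"deployment.environment": "staging", "team": "ml"},
)
```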
- bug in managed prompts (#337)
- support langchain v0.1 (#320)
- otel deps (#336)
- openai: instrument embeddings APIs (#335)
- google-vertexai-instrumentation (#289)
- version bump error with replicate (#318)
- replicate release (#316)
- semconv: added top-k (#291)
- support anthropic v0.8.1 (#301)
- ci: fix replicate release (#285)
- replicate support (#248)
- support pydantic v1 (#282)
- broken tests (#281)
- sdk: user feedback scores (#247)
- openai: async streaming instrumentation (#245)
- send SDK version on fetch requests (#239)
- support async workflows in llama-index and openai (#233)
- sdk: support vision api for prompt management (#234)
- openai: langchain streaming bug (#225)
- traceloop-sdk: support explicit prompt versioning in prompt management (#221)
- bedrock support (#218)
- lint issues
- openai: attributes for functions in request (#211)
- llama-index: support ollama completion (#212)
- sdk: flag for dashboard auto-creation (#210)
- new logo
- python 3.8 compatibility (#198)
- cohere: cohere chat token usage (#196)
- disable telemetry in tests (#171)
- sdk telemetry data (#168)
- make auto-create path persisted (#170)
- openai: yield chunks for streaming (#166)
- llamaindex auto instrumentation (#157)
- openai: new OpenAI API v1 (#154)
- sdk: max_tokens are now optional from the backend (#153)
- errors on logging openai streaming completion calls (#144)
- langchain: improved support for agents and tools with Langchain (#143)
- support streaming API for OpenAI (#142)
- prompt-registry: remove redundant variables print
- tracing: add missing prompt manager template variables to span attributes (#140)
- sdk: allow overriding processor & propagator (#139)
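The override entry above (#139) lets callers swap in their own span processor and context propagator. A sketch assuming `processor` and `propagator` keywords on `Traceloop.init`:

```python
# Sketch (assumed kwargs): plug in a custom processor and propagator
# instead of the SDK defaults.
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
from traceloop.sdk import Traceloop

Traceloop.init(
    app_name="changelog-demo",
    processor=SimpleSpanProcessor(ConsoleSpanExporter()),
    propagator=TraceContextTextMapPropagator(),
)
```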
- proper propagation of api key to fetcher (#138)
- ci-cd: release workflow fetched an outdated commit in release package jobs
- disable syncing when no API key is defined (#135)
- ci-cd: finalize release flow (#133)
- ci-cd: fix release workflow publish step
- ci-cd: fix release workflow publish step
- ci-cd: fix release workflow publish step
- ci-cd: add release workflow (#132)
- release workflow credentials
- disable content tracing for privacy reasons (#118)
- add prompt version hash (#119)
- propagate prompt management attributes to llm spans (#109)
- support association IDs as objects (#111)
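The association-IDs entry above (#111) generalizes a single ID into a dict of properties attached to subsequent spans. A sketch assuming the SDK's `set_association_properties` helper:

```python
# Sketch (assumed helper): associate spans with application-level IDs.
from traceloop.sdk import Traceloop

Traceloop.init(app_name="changelog-demo")
Traceloop.set_association_properties(
    {"user_id": "u-123", "session_id": "s-456"}
)
```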
- hugging-face transformers pipeline instrumentation (#104)
- add chromadb instrumentation + fix langchain instrumentation (#103)
- export to Grafana tempo (#95)
- langchain instrumentation (#88)
- cohere: support for chat and rerank (#84)
- cohere instrumentation (#82)
- Anthropic instrumentation (#71)
- basic prompt management (#69)
- Pinecone Instrumentation (#3)
- basic testing framework (#70)
- haystack instrumentations (#55)
- auto-create link to traceloop dashboard
- setting headers for exporting traces
- sdk code + openai instrumentation (#4)
- sdk: disable sync when using external exporter
- disable content tracing when not overridden (#121)
- langchain: add retrieval_qa workflow span (#112)
- traceloop-sdk: logging of service name in traces (#99)
- do not trigger dashboard auto-creation if exporter is set (#96)
- docs: clarification on getting API key
- chore: spaces and nits on README
- docs: bad link for python SDK
- docs: updated TRACELOOP_BASE_URL (#81)
- add openai function call data to telemetry (#80)
- sdk: disabled prompt registry by default (#78)
- support pinecone non-grpc (#76)
- support python 3.12
- docs: upgrades; docs about prompt mgmt (#74)
- traceloop-sdk: missing lockfile (#72)
- traceloop-sdk: flushing in notebooks (#66)
- py security issue
- docs: update exporting.mdx to include nr instrumentation (#12)
- sdk: async decorators not awaited
- sdk: missing dependency
- warn if Traceloop wasn't initialized properly (#11)
- match new dashboard API
- traceloop-sdk: duplicate spans reporting (#10)
- moved api key to /tmp
- /v1/traces is always appended to endpoint
- parse headers correctly
- traceloop-sdk: replace context variables with otel context + refactor (#8)
- traceloop sdk initialization and initial versions release for instrumentations (#7)
- wrong imports and missing code components (#6)
- gitignore
- README