Releases: log10-io/log10
0.10.1
0.10.0
What's Changed
New
- Support Anthropic claude-3-5-sonnet-20240620 model
- Support magentic version 0.27.0
- Support anthropic version 0.29.0
Fixes
- Update Anthropic function call output parsing logic to display messages properly in log10 platform
Chores
- ENG-849 ENG-819 Refactor Anthropic parsing code for async and sync with httpx client by @kxtran in #183
Full Changelog: 0.9.2...0.10.0
0.9.2
What's Changed
- ENG-851 Update google-generativeai version, example and test by @kxtran in #185
- Update openai version to 1.33.0 by @kxtran in #186
- Disable warning for async openai embedding calls by @wenzhe-log10 in #189
- [ENG-856] Remove large file from repo by @nqn in #187
- [ENG-857] Filter large image messages by @nqn in #190
Full Changelog: 0.9.1...0.9.2
0.9.1
0.9.0
What's Changed
New
- Add fetching autofeedback by completion id to cli by @kxtran in #175
  To get auto generated feedback for a completion, use `log10 feedback autofeedback get`
- Use non blocking async for AsyncOpenAI and AsyncAnthropic by @wenzhe-log10 in #179
Release 0.9.0 includes significant improvements in how we handle concurrency while using LLMs in asynchronous streaming mode. This update ensures that logging at steady state incurs no overhead (previously up to 1-2 seconds), providing a smoother and more efficient experience in latency-critical settings.

Important considerations for short-lived scripts:
💡 For short-lived scripts using asynchronous streaming, you may need to wait until all logging requests have completed before terminating your script. We provide a convenient method called `finalize()` to handle this. Here's how you can implement this in your code:

```python
from log10._httpx_utils import finalize

...

await finalize()
```

Ensure `finalize()` is called once, at the very end of your event loop, to guarantee that all pending logging requests are processed before the script exits.
For more details, check the async logging examples.
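The shutdown pattern described above can be sketched with plain asyncio. Here, `finalize` is a hypothetical stand-in for `log10._httpx_utils.finalize` (not the real implementation): it simply awaits every background logging task that is still pending, which is the behavior the release notes describe.

```python
import asyncio

# Hypothetical stand-in for log10._httpx_utils.finalize: awaits every
# background logging task that is still pending.
_pending_logging_tasks: list = []

async def finalize() -> None:
    await asyncio.gather(*_pending_logging_tasks)

async def _log_completion(record: dict, sink: list) -> None:
    await asyncio.sleep(0.01)  # simulate the HTTP round trip to the platform
    sink.append(record)

async def main() -> list:
    sink: list = []
    for i in range(3):  # fire-and-forget logging, as in non-blocking async mode
        _pending_logging_tasks.append(
            asyncio.create_task(_log_completion({"id": i}, sink))
        )
    # ... application work ...
    await finalize()  # called once, at the very end of the event loop
    return sink

records = asyncio.run(main())
print(len(records))  # prints 3: all records were flushed before the loop exited
```

Without the `finalize()` call, the script could exit while some logging tasks are still in flight, dropping those records.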
Chores
- Add dependabot workflow by @kxtran in #169
- Remove setup.py file by @kxtran in #174
- Verify generated completions submitted to the platform by @kxtran in #172
Full Changelog: 0.8.6...0.9.0
0.8.6
What's Changed
Bug fixes
- Update parsing async openai streaming response logic by @kxtran in #167
  - Installing the latest magentic version pulls openai version >= 1.26.0, which includes a `usage` block and an empty `choices` list for streaming responses; this caused the parsing logic to raise an exception. The fix handles the new streaming responses and makes the parsing code more robust.
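The parsing change described above can be sketched as follows. This is a minimal illustration, not the library's actual code: plain dicts stand in for the SDK's streaming chunk objects, and the key point is skipping chunks whose `choices` list is empty while still capturing the trailing `usage` block.

```python
# Sketch: robust handling of openai >= 1.26.0 streaming chunks, where the
# final chunk can carry a `usage` block and an empty `choices` list.
def parse_stream(chunks):
    text_parts, usage = [], None
    for chunk in chunks:
        if chunk.get("usage") is not None:
            usage = chunk["usage"]
        if not chunk.get("choices"):  # final usage-only chunk has choices == []
            continue
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            text_parts.append(delta["content"])
    return "".join(text_parts), usage

chunks = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [], "usage": {"prompt_tokens": 5, "completion_tokens": 2}},
]
print(parse_stream(chunks))  # ('Hello', {'prompt_tokens': 5, 'completion_tokens': 2})
```

Indexing `chunk["choices"][0]` without the emptiness check is exactly what raised an exception on the new response format.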
Full Changelog: 0.8.5...0.8.6
0.8.5
What's Changed
New
- Add gpt-4o in cli benchmark_models by @wenzhe-log10 in #159
- ENG-724: Add a function in load.py to return the last_completion_id by @nqn in #165
- ENG-784 Add anthropic async and tools stream api support by @kxtran in #162
Chores
- Update examples and dependencies by @wenzhe-log10 in #157
- Add a cronjob tests via github actions by @kxtran in #158
- Update langchain test assertion by @kxtran in #160
Full Changelog: 0.8.4...0.8.5
0.8.4
0.8.3
What's Changed
Bug fixes
- fix condition when finish_reason is stop for tool_calls by @wenzhe-log10 in #152
- strip not given kwargs for openai sync calls if openai.NOT_GIVEN is assigned to kwargs by @wenzhe-log10 in #154
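The idea behind the second fix above can be sketched like this. `NotGiven` here is a local stand-in for the `openai.NOT_GIVEN` sentinel (so the snippet is self-contained); the real fix filters such kwargs out before processing sync calls.

```python
class NotGiven:
    """Local stand-in for the openai.NOT_GIVEN sentinel."""
    def __repr__(self):
        return "NOT_GIVEN"

NOT_GIVEN = NotGiven()

def strip_not_given(kwargs: dict) -> dict:
    # Drop any kwarg still set to the sentinel, using identity comparison,
    # so it is never forwarded or logged as a real value.
    return {k: v for k, v in kwargs.items() if v is not NOT_GIVEN}

print(strip_not_given({"model": "gpt-4o", "temperature": 0.2, "stop": NOT_GIVEN}))
# {'model': 'gpt-4o', 'temperature': 0.2}
```

Identity comparison (`is not`) matters here: a sentinel marks "argument omitted," which is distinct from falsy but legitimate values like `0` or `""`.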
Chores
- minor update in makefile for logging test by @wenzhe-log10 in #155
Full Changelog: 0.8.2...0.8.3
0.8.2
What's Changed
New
- support google.generativeai sdk ChatSession.send_message and add examples by @wenzhe-log10 in #148
```python
import google.generativeai as genai
from log10.load import log10

log10(genai)

model = genai.GenerativeModel(
    "gemini-1.5-pro-latest", system_instruction="You are a cat. Your name is Neko."
)
chat = model.start_chat(
    history=[
        {"role": "user", "parts": [{"text": "please say yes."}]},
        {"role": "model", "parts": [{"text": "Yes yes yes?"}]},
    ]
)
prompt = "please say no."
response = chat.send_message(prompt)
print(response.text)
```
- README update - add model comparison using CLI by @wenzhe-log10 in #149
Fixes
- Move cli utils func in its own file by @kxtran in #150
- fix sync stream for both openai tool_calls and magentic function calls by @wenzhe-log10 in #136
Full Changelog: 0.8.1...0.8.2