
Dev #5

Merged
merged 39 commits into from
Jul 8, 2024

Conversation

@Rai220 (Collaborator) commented Jul 8, 2024

No description provided.

eyurtsev and others added 30 commits March 26, 2024 17:07
Update splash screen to take into account information about whether the
playground is enabled/disabled
Swap out to use only langchain_core
…i#549)

Add `Setup` instruction to run `langserve` app locally.
…led (langchain-ai#566)

This change cleans up the schema surfaced via openapi docs to not include callback event schema if the server does not respond with it.
After this PR most of the scaffolding code is in place to support scoped
feedback.

After this PR the remaining work to be done:

1. Expose configuration in APIHandler / add_routes
2. Determine how to support this with the existing feedback endpoint (or
change to a new endpoint)
3. Surface in RemoteRunnable
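The idea of feedback "scoped" to a key can be sketched in a few lines. This is a hypothetical illustration, not langserve's actual implementation: the server mints a signed token tied to one `(run_id, key)` pair, so a client can only submit feedback for the key the token was issued for.

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical server-side signing key

def issue_feedback_token(run_id: str, key: str) -> str:
    """Mint a token tied to one run and one feedback key."""
    payload = f"{run_id}:{key}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_feedback_token(token: str, run_id: str, key: str) -> bool:
    """Accept feedback only if the token matches this run and this key."""
    expected = issue_feedback_token(run_id, key)
    return hmac.compare_digest(token, expected)

token = issue_feedback_token("run-123", "correctness")
assert verify_feedback_token(token, "run-123", "correctness")
assert not verify_feedback_token(token, "run-123", "helpfulness")
```

Because the key is baked into the signature, a token issued for `correctness` cannot be replayed against a different feedback key.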
…langchain-ai#570)

* Create an endpoint to accept token based feedback
* Update the format of returned feedback (since it's scoped by key)
…n-ai#571)

This PR exposes the ability to configure feedback tokens for the APIHandler and add_routes
This PR allows turning on a configuration that makes the server accept
client-provided run IDs.

The server needs to be started with

`add_routes(..., config_keys=['run_id'])`

And a client will then be able to make a request with `config={"run_id":
uuid4}`
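The opt-in behavior of `config_keys` can be sketched as a simple whitelist merge. This is an illustrative sketch of the concept, not langserve's internals: only config keys the server explicitly lists are accepted from the client; anything else is dropped.

```python
from uuid import uuid4

def merge_client_config(client_config: dict, config_keys: list) -> dict:
    """Keep only the client-supplied config keys the server opted in to."""
    return {k: v for k, v in client_config.items() if k in config_keys}

accepted = merge_client_config(
    {"run_id": str(uuid4()), "tags": ["not-allowed"]},  # client-supplied
    config_keys=["run_id"],  # server opted in to run_id only
)
assert set(accepted) == {"run_id"}
```

With `config_keys=['run_id']`, the client's `run_id` passes through while unlisted keys are silently discarded.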
…bled config (langchain-ai#573)

Add test to verify that token_feedback endpoint gets registered
correctly
…angchain-ai#577)

After this PR a better error message will be surfaced to add_routes if a
user passes something that's not a runnable.

This currently occurs a lot when using the langchain CLI, as the langchain
CLI creates an empty app with a placeholder value of NotImplemented for
the runnable.
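The kind of check described above can be sketched as follows. This is a hypothetical illustration (using `invoke` as the marker method is an assumption of this sketch, not the library's actual validation logic): reject anything that is not runnable-like with an actionable error, instead of failing later with a confusing traceback.

```python
def check_runnable(runnable) -> None:
    """Raise a helpful error if the argument is not runnable-like."""
    if not callable(getattr(runnable, "invoke", None)):
        raise TypeError(
            f"Expected a Runnable, got {type(runnable).__name__!r}. "
            "Did you forget to replace the placeholder value (e.g. "
            "NotImplemented) created by the CLI template?"
        )

class Dummy:
    def invoke(self, x):
        return x

check_runnable(Dummy())  # passes silently
try:
    check_runnable(NotImplemented)
except TypeError as e:
    assert "Runnable" in str(e)
```

The error message names the likely cause (the CLI placeholder), which is what makes it more useful than a generic failure downstream.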
* This PR removes dependency on httpx sse
* After this PR, invalid non-streaming requests will be returned with a
status code of 422
- Fixes langsmith client not being enabled if token feedback was
specified
- Add unit test for token feedback endpoint
- Fix schema for token feedback to accept a str in addition to a UUID
- Add token feedback metadata to the astream log as well
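The "accept a str in addition to a UUID" fix amounts to a small coercion step. A minimal sketch of that idea (the helper name is hypothetical): string run IDs are parsed into `UUID` objects, while real `UUID` values pass through untouched.

```python
from uuid import UUID

def coerce_run_id(value):
    """Accept either a UUID or its string form, returning a UUID."""
    if isinstance(value, UUID):
        return value
    return UUID(value)  # raises ValueError for malformed strings

rid = UUID("12345678-1234-5678-1234-567812345678")
assert coerce_run_id(rid) is rid
assert coerce_run_id(str(rid)) == rid
```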
rc2 release to test API for feedback token integration
- remove pulumi links which are 404
- fix stream_events mention
- run markdown formatter
The code was checking whether feedback was enabled rather than whether
token feedback was enabled.
Migrate to new locations of imports
Prepare for 0.2.x release for core
`ChatAnthropic()` raises ValidationError.

Also set model for ChatOpenAI where it appears alongside anthropic.

Best to be explicit about model, otherwise new defaults will cause
unexpected changes.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
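The rationale for being explicit about the model can be shown with a toy example (the `ChatModel` class and model names here are hypothetical, not langchain's API): a library's default can change between releases, silently altering behavior, while an explicitly pinned model is unaffected.

```python
class ChatModel:
    # Imagine this default changed from "model-v1" in a new release.
    DEFAULT_MODEL = "model-v2"

    def __init__(self, model=None):
        self.model = model or self.DEFAULT_MODEL

# Relies on the default: behavior shifts whenever the library updates it.
assert ChatModel().model == "model-v2"
# Pinned explicitly: unaffected by library upgrades.
assert ChatModel(model="model-v1").model == "model-v1"
```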
eyurtsev and others added 9 commits May 24, 2024 13:09
Just a quickfix for langchain-ai#212.

It would be better not to have to install fastapi just to use the
client, but in my opinion this is still better than getting errors when
importing the `RemoteRunnable` after a `pip install
"langserve[client]"`.

Otherwise at least the readme should be updated.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
@Rai220 Rai220 merged commit edf5a21 into main Jul 8, 2024
10 checks passed
9 participants