fix create_xml_agent with parameterless function as a tool #24866
Conversation
**Description:** This PR fixes an issue in the demo notebook of Databricks Vector Search in "Work with Delta Sync Index" section. **Issue:** N/A **Dependencies:** N/A --------- Co-authored-by: Chengzu Ou <chengzu.ou@databrick.com> Co-authored-by: Erick Friis <erick@langchain.dev>
fix: langchain-ai#25473 - **Description:** add prompt to install nltk - **Issue:** langchain-ai#25473
fix: langchain-ai#25482 - **Description:** Add a prompt to install beautifulsoup4 in places where `from langchain_community.document_loaders import WebBaseLoader` is used. - **Issue:** langchain-ai#25482
Updated the Together base URL from `.ai` to `.xyz` since some customers have reported problems with `.ai`.
…erChatLoader` (langchain-ai#25536) Delete redundant args in the API doc.
- **Description:** Fix `QianfanLLMEndpoint` and `Tongyi` input text.
…reate_stuff_documents_chain (langchain-ai#25531) - **Description:** Add `document_variable_name` to the `_validate_prompt` function in `create_stuff_documents_chain`. - **Issue:** According to the documentation of `create_stuff_documents_chain`, the parameter `document_variable_name` can be used to override the default "context" placeholder in the prompt. However, `_validate_prompt` still used `DOCUMENTS_KEY` to check whether the prompt is valid, and the value of `DOCUMENTS_KEY` is always "context". So even when the user overrode the name via `document_variable_name`, the code still checked for "context" in the prompt and raised an error. This change replaces `DOCUMENTS_KEY` with `document_variable_name`; its default value is "context", the same as `DOCUMENTS_KEY`, so behavior is unchanged unless the user overrides it. - **Dependencies:** none - **Twitter handle:** https://x.com/xjr199703 --------- Co-authored-by: Chester Curme <chester.curme@gmail.com>
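A simplified plain-Python sketch of the validation change described above (the function name and signature here are illustrative, not the actual LangChain source):

```python
DOCUMENTS_KEY = "context"  # the previously hard-coded default

def validate_prompt(input_variables, document_variable_name=DOCUMENTS_KEY):
    """Raise if the prompt is missing the documents placeholder.

    Checking against the overridable ``document_variable_name`` (default
    "context") instead of the constant ``DOCUMENTS_KEY`` lets users rename
    the placeholder without tripping the validation.
    """
    if document_variable_name not in input_variables:
        raise ValueError(
            f"Prompt must have input variable {document_variable_name!r}, "
            f"got {input_variables}"
        )

# A prompt using a custom placeholder name now passes validation:
validate_prompt(["docs", "question"], document_variable_name="docs")
```

With the old behavior, the same call would have failed because "context" is not among the prompt's input variables.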
…rning Text Object which raises error (langchain-ai#25271) - **Description:** The following [line](https://github.com/langchain-ai/langchain/blob/fd546196ef0fafa4a4cd7bb7ebb1771ef599f372/libs/community/langchain_community/document_loaders/parsers/audio.py#L117) in `OpenAIWhisperParser` returns a plain text object, despite the official documentation saying it should return a `Transcript` instance with a `text` attribute. For the example given in the issue, and when I tried running it myself, I got the text directly. This small PR accounts for that. - **Issue:** langchain-ai#25218 I was able to replicate the error even without the GenericLoader, as shown below; the issue was with `OpenAIWhisperParser`:

```python
parser = OpenAIWhisperParser(api_key="sk-fxxxxxxxxx", response_format="srt", temperature=0)
list(parser.lazy_parse(Blob.from_path('path_to_file.m4a')))
```
`llama-3.1-8b-instant` often fails some of the tool calling standard tests. Here we update to `llama-3.1-70b-versatile`.
- **Description:** Add ToolMessage for `ChatZhipuAI` to solve the issue langchain-ai#25490
…chain-ai#25548) Here we allow standard tests to specify a value for `tool_choice` via a `tool_choice_value` property, which defaults to None. Chat models [available in Together](https://docs.together.ai/docs/chat-models) have issues passing standard tool calling tests: - llama 3.1 models currently [appear to rely on user-side parsing](https://docs.together.ai/docs/llama-3-function-calling) in Together; - Mixtral-8x7B and Mistral-7B (currently tested) consistently do not call tools in some tests. Specifying tool_choice also lets us remove an existing `xfail` and use a smaller model in Groq tests.
Added reference to the source paper.
This PR adds support for the Azure Database for PostgreSQL vector store and memory. [Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/service-overview) [Azure Database for PostgreSQL pgvector extension](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-use-pgvector) **Description:** Added vector store and memory usage documentation for Azure Database for PostgreSQL. **Twitter handle:** [@_aiabe](https://x.com/_aiabe) --------- Co-authored-by: Abeomor <{ID}+{username}@users.noreply.github.com>
This will allow complex-type metadata to be returned; the current implementation throws an error when dealing with nested metadata. --------- Co-authored-by: Chester Curme <chester.curme@gmail.com>
…ngchain-ai#25549) The new `langchain-ollama` package seems pretty well implemented, but I noticed the docs were still outdated, so I decided to fix them up a bit. - Llama 3.1 was released on 23rd of July: https://ai.meta.com/blog/meta-llama-3-1/ - Ollama supports tool calling since 25th of July: https://ollama.com/blog/tool-support - The LangChain Ollama partner package was released 1st of August: https://pypi.org/project/langchain-ollama/ **Problem**: Docs note langchain-community instead of langchain-ollama. **Solution**: Update docs to https://python.langchain.com/v0.2/docs/integrations/chat/ollama/ **Problem**: OllamaFunctions is deprecated, as noted on [Integrations](https://python.langchain.com/v0.2/docs/integrations/chat/ollama_functions/): it was an experimental wrapper that attempts to bolt on tool-calling support to models that do not natively support it. The [primary Ollama integration](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) now supports tool calling and should be used instead. **Solution**: Delete the old notebook from the repo and update the existing one with `@tool` decorator + pydantic examples. **Problem**: Llama 3.1 was released, while the llama3-groq-tool-call fine-tune is noted in the notebooks. **Solution**: Update docs + notebooks to llama3.1 (which has improved tool-calling support). **Problem**: Install instructions are incomplete; there is no information on downloading a model and/or running the Ollama server. **Solution**: Add simple instructions to start the Ollama service and pull a model (for tool calling). --------- Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com> Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Added missed provider pages and links. Fixed inconsistent formatting.
Added missed integration links. Fixed inconsistent formatting.
👥 Update LangChain people data Co-authored-by: github-actions <github-actions@github.com>
…gchain-ai#25927) - **Description:** The function `_is_assistants_builtin_tool` didn't have support for `file_search` from OpenAI, which caused a conflict and blocked its usage (OpenAI Assistants changed from `retrieval` to `file_search`). The following code

```
agent = OpenAIAssistantV2Runnable.create_assistant(
    name="Data Analysis Assistant",
    instructions=prompt[0].content,
    tools={'type': 'file_search'},
    model=self.chat_config.connection.deployment_name,
    client=llm,
    as_agent=True,
    tool_resources={
        "file_search": {
            "vector_store_ids": vector_store_id
        }
    }
)
```

was throwing the following error:

```
Traceback (most recent call last):
  File "/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/chat_decorators.py", line 500, in get_response
    return await super().get_response(post, context)
  File "/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/chat_decorators.py", line 96, in get_response
    response = await self.inner_chat.get_response(post, context)
  [Previous line repeated 6 more times]
  File "/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/chat/azure_open_ai_chat.py", line 147, in get_response
    chain = chain_factory.get_chain(prompts, post.conversation.id, overrides, context)
  File "/Users/l.guedesdossantos/Documents/codes/shellai-nlp-backend/app/llm_connections/chains.py", line 1324, in get_chain
    agent = OpenAIAssistantV2Runnable.create_assistant(
  File "/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_community/agents/openai_assistant/base.py", line 256, in create_assistant
    tools=[_get_assistants_tool(tool) for tool in tools],  # type: ignore
  File "/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_community/agents/openai_assistant/base.py", line 256, in <listcomp>
    tools=[_get_assistants_tool(tool) for tool in tools],  # type: ignore
  File "/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_community/agents/openai_assistant/base.py", line 119, in _get_assistants_tool
    return convert_to_openai_tool(tool)
  File "/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_core/utils/function_calling.py", line 255, in convert_to_openai_tool
    function = convert_to_openai_function(tool)
  File "/Users/l.guedesdossantos/anaconda3/envs/shell-e/lib/python3.11/site-packages/langchain_core/utils/function_calling.py", line 230, in convert_to_openai_function
    raise ValueError(
ValueError: Unsupported function {'type': 'file_search'} Functions must be passed in as Dict, pydantic.BaseModel, or Callable. If they're a dict they must either be in OpenAI function format or valid JSON schema with top-level 'title' and 'description' keys.
```

With the proposed changes, this is fixed and the function supports `file_search`. This was the only place missing support for `file_search`. Reference doc: https://platform.openai.com/docs/assistants/tools/file-search - **Twitter handle:** luizf0992 --------- Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
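The shape of the fix can be sketched in plain Python (a simplified, hypothetical version of the check; the real helper is `_is_assistants_builtin_tool` in `langchain_community`, and the exact set of built-in types here is an assumption based on the OpenAI Assistants docs):

```python
# Built-in OpenAI Assistants tools are passed through unchanged rather than
# converted to a function schema; the bug was that "file_search" was missing
# from this set after OpenAI renamed "retrieval".
BUILTIN_TOOL_TYPES = {"code_interpreter", "retrieval", "file_search"}

def is_assistants_builtin_tool(tool) -> bool:
    """Dicts like {'type': 'file_search'} denote OpenAI built-in tools."""
    return isinstance(tool, dict) and tool.get("type") in BUILTIN_TOOL_TYPES

def get_assistants_tool(tool):
    """Pass built-ins through; anything else would go to schema conversion."""
    if is_assistants_builtin_tool(tool):
        return tool
    raise ValueError(f"Unsupported function {tool!r}")
```

With `file_search` in the set, `get_assistants_tool({'type': 'file_search'})` returns the dict unchanged instead of raising.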
…locals (langchain-ai#25936)
…enAPI spec (langchain-ai#19002) - **Description:** This PR is intended to improve the generation of payloads for OpenAI functions when converting from an OpenAPI spec file. The solution is to recursively resolve `$refs`. Currently, when converting OpenAPI specs into OpenAI functions using `openapi_spec_to_openai_fn`, if the schemas have nested references, the generated functions contain `$ref` entries that cause the LLM to generate payloads with an incorrect schema. For example, for the following OpenAPI spec: ``` text = """ { "openapi": "3.0.3", "info": { "title": "Swagger Petstore - OpenAPI 3.0", "termsOfService": "http://swagger.io/terms/", "contact": { "email": "apiteam@swagger.io" }, "license": { "name": "Apache 2.0", "url": "http://www.apache.org/licenses/LICENSE-2.0.html" }, "version": "1.0.11" }, "externalDocs": { "description": "Find out more about Swagger", "url": "http://swagger.io" }, "servers": [ { "url": "https://petstore3.swagger.io/api/v3" } ], "tags": [ { "name": "pet", "description": "Everything about your Pets", "externalDocs": { "description": "Find out more", "url": "http://swagger.io" } }, { "name": "store", "description": "Access to Petstore orders", "externalDocs": { "description": "Find out more about our store", "url": "http://swagger.io" } }, { "name": "user", "description": "Operations about user" } ], "paths": { "/pet": { "post": { "tags": [ "pet" ], "summary": "Add a new pet to the store", "description": "Add a new pet to the store", "operationId": "addPet", "requestBody": { "description": "Create a new pet in the store", "content": { "application/json": { "schema": { "$ref": "#/components/schemas/Pet" } } }, "required": true }, "responses": { "200": { "description": "Successful operation", "content": { "application/json": { "schema": { "$ref": "#/components/schemas/Pet" } } } } } } } }, "components": { "schemas": { "Tag": { "type": "object", "properties": { "id": { "type": "integer", "format": "int64" }, "model_type": { "type": "number" } } },
"Category": { "type": "object", "required": [ "model", "year", "age" ], "properties": { "year": { "type": "integer", "format": "int64", "example": 1 }, "model": { "type": "string", "example": "Ford" }, "age": { "type": "integer", "example": 42 } } }, "Pet": { "required": [ "name" ], "type": "object", "properties": { "id": { "type": "integer", "format": "int64", "example": 10 }, "name": { "type": "string", "example": "doggie" }, "category": { "$ref": "#/components/schemas/Category" }, "tags": { "type": "array", "items": { "$ref": "#/components/schemas/Tag" } }, "status": { "type": "string", "description": "pet status in the store", "enum": [ "available", "pending", "sold" ] } } } } } } """ ``` Executing: ``` spec = OpenAPISpec.from_text(text) pet_openai_functions, pet_callables = openapi_spec_to_openai_fn(spec) response = model.invoke("Create a pet named Scott", functions=pet_openai_functions) ``` `pet_open_functions` contains unresolved `$refs`: ``` [ { "name": "addPet", "description": "Add a new pet to the store", "parameters": { "type": "object", "properties": { "json": { "properties": { "id": { "type": "integer", "schema_format": "int64", "example": 10 }, "name": { "type": "string", "example": "doggie" }, "category": { "ref": "#/components/schemas/Category" }, "tags": { "items": { "ref": "#/components/schemas/Tag" }, "type": "array" }, "status": { "type": "string", "enum": [ "available", "pending", "sold" ], "description": "pet status in the store" } }, "type": "object", "required": [ "name", "photoUrls" ] } } } } ] ``` and the generated JSON has an incorrect schema (e.g. 
category is filled with `id` and `name` instead of `model`, `year` and `age`): ``` { "id": 1, "name": "Scott", "category": { "id": 1, "name": "Dogs" }, "tags": [ { "id": 1, "name": "tag1" } ], "status": "available" } ``` With this change, `pet_openai_functions` becomes: ``` [ { "name": "addPet", "description": "Add a new pet to the store", "parameters": { "type": "object", "properties": { "json": { "properties": { "id": { "type": "integer", "schema_format": "int64", "example": 10 }, "name": { "type": "string", "example": "doggie" }, "category": { "properties": { "year": { "type": "integer", "schema_format": "int64", "example": 1 }, "model": { "type": "string", "example": "Ford" }, "age": { "type": "integer", "example": 42 } }, "type": "object", "required": [ "model", "year", "age" ] }, "tags": { "items": { "properties": { "id": { "type": "integer", "schema_format": "int64" }, "model_type": { "type": "number" } }, "type": "object" }, "type": "array" }, "status": { "type": "string", "enum": [ "available", "pending", "sold" ], "description": "pet status in the store" } }, "type": "object", "required": [ "name" ] } } } } ] ``` and the JSON generated by the LLM is: ``` { "id": 1, "name": "Scott", "category": { "year": 2022, "model": "Dog", "age": 42 }, "tags": [ { "id": 1, "model_type": 1 } ], "status": "available" } ``` which has the intended schema. - **Twitter handle:** @brunoalvisio --------- Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
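The core of such a recursive `$ref` resolver fits in a few lines. This is an illustrative sketch, not the actual implementation in the PR; it assumes acyclic references, as a cyclic `$ref` would recurse forever:

```python
def resolve_refs(node, root):
    """Recursively replace {'$ref': '#/path/to/schema'} with the referenced
    schema, looked up from the root of the spec. Assumes acyclic refs."""
    if isinstance(node, dict):
        if "$ref" in node:
            target = root
            # '#/components/schemas/Pet' -> walk each path segment from root
            for part in node["$ref"].lstrip("#/").split("/"):
                target = target[part]
            return resolve_refs(target, root)
        return {k: resolve_refs(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, root) for item in node]
    return node

# Miniature version of the petstore example above:
spec = {
    "components": {"schemas": {
        "Tag": {"type": "object"},
        "Pet": {
            "type": "object",
            "properties": {
                "tags": {"items": {"$ref": "#/components/schemas/Tag"},
                         "type": "array"},
            },
        },
    }},
}
resolved = resolve_refs({"$ref": "#/components/schemas/Pet"}, spec)
# resolved now contains the inlined Tag schema instead of a "$ref" entry
```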
…n-ai#23809) **Description:** Adds a new option to the CSVLoader that allows explicitly specifying the columns used for generating the Document content. Currently these are implicitly set as "all fields not part of the metadata_columns". In some cases, however, it is useful to have a field both as metadata and as part of the document content.
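The described behavior can be sketched with only the stdlib `csv` module (the option name `content_columns` and this loader shape are illustrative assumptions, not the actual CSVLoader API):

```python
import csv
import io

def load_csv(text, content_columns=None, metadata_columns=()):
    """Return (content, metadata) pairs, one per CSV row.

    content_columns explicitly picks the fields that form the document text;
    when it is None, fall back to the old implicit rule: every field that is
    not a metadata column. A field may appear in both content and metadata.
    """
    docs = []
    for row in csv.DictReader(io.StringIO(text)):
        cols = content_columns or [k for k in row if k not in metadata_columns]
        content = "\n".join(f"{k}: {row[k]}" for k in cols)
        metadata = {k: row[k] for k in metadata_columns}
        docs.append((content, metadata))
    return docs

rows = "title,author,year\nDune,Herbert,1965\n"
# 'year' now appears both in the document content and in the metadata:
docs = load_csv(rows, content_columns=["title", "year"], metadata_columns=["year"])
```

Without `content_columns`, the same call would have dropped `year` from the content because it is a metadata column.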
docs: fix a couple of typos in summarization_tutorial
…angchain-ai#23304) Returns an array of results, which is more specific and easier for later use. Tested locally:

```
resp = tool.invoke("what's the weather like in Shanghai?")
for item in resp:
    print(item)
```

returns

```
{'snippet': '<b>Shanghai</b>, <b>Shanghai</b>, China <b>Weather</b> Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days.', 'title': 'Shanghai, Shanghai, China Weather Forecast | AccuWeather', 'link': 'https://www.accuweather.com/en/cn/shanghai/106577/weather-forecast/106577'}
{'snippet': '5. 99 / 87 °F. 6. 99 / 86 °F. 7. Detailed forecast for 14 days. Need some help? Current <b>weather</b> <b>in Shanghai</b> and forecast for today, tomorrow, and next 14 days.', 'title': 'Weather for Shanghai, Shanghai Municipality, China - timeanddate.com', 'link': 'https://www.timeanddate.com/weather/china/shanghai'}
{'snippet': '<b>Shanghai</b> - <b>Weather</b> warnings issued 14-day forecast. <b>Weather</b> warnings issued. Forecast - <b>Shanghai</b>. Day by day forecast. Last updated Friday at 01:05. Tonight, ... Temperature feels <b>like</b> 34 ...', 'title': 'Shanghai - BBC Weather', 'link': 'https://www.bbc.com/weather/1796236'}
{'snippet': 'Current <b>weather</b> <b>in Shanghai</b>, <b>Shanghai</b>, China. Check current conditions <b>in Shanghai</b>, <b>Shanghai</b>, China with radar, hourly, and more.', 'title': 'Shanghai, Shanghai, China Current Weather | AccuWeather', 'link': 'https://www.accuweather.com/en/cn/shanghai/106577/current-weather/106577'}
```

--------- Co-authored-by: Bagatur <baskaryan@gmail.com>
…space when strip_whitespace is false (langchain-ai#23272) Previously, regardless of whether strip_whitespace was set to true or false, the strip-text method in the SpacyTextSplitter class used `sent.text` to get the sentence. I modified this to include a ternary such that if strip_whitespace is false, it uses `sent.text_with_ws`. I also modified the pyproject.toml to include the spacy pipeline package and to lock the numpy version, as higher versions break spacy. - **Issue:** N/A - **Dependencies:** None
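The one-line change can be illustrated in plain Python, with spaCy's sentence spans stubbed out (the `FakeSent` class is a stand-in for illustration; in spaCy, `Span.text` omits trailing whitespace while `Span.text_with_ws` keeps it):

```python
class FakeSent:
    """Stand-in for a spaCy sentence span (for illustration only)."""
    def __init__(self, text, trailing_ws):
        self.text = text
        self.text_with_ws = text + trailing_ws

def split_sentences(sents, strip_whitespace=True):
    # The fix: pick the attribute based on strip_whitespace instead of
    # always using .text.
    return [s.text if strip_whitespace else s.text_with_ws for s in sents]

sents = [FakeSent("Hello.", " "), FakeSent("Bye.", "")]
```

With `strip_whitespace=False`, each chunk now retains its trailing whitespace, so joining the chunks reproduces the original text exactly.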
…ain-ai#23339) - **PR title**: "community: add Jina Search tool" - **Description:** Added the Jina Search tool for querying the Jina search API. This includes the implementation of the JinaSearchAPIWrapper and the JinaSearch tool, along with a Jupyter notebook example demonstrating its usage. - **Issue:** N/A - **Dependencies:** N/A - **Twitter handle:** [Twitter handle](https://x.com/yashp3020?t=7wM0gQ7XjGciFoh9xaBtqA&s=09) - [x] **Add tests and docs**: If you're adding a new integration, please include 1. an example notebook showing its use. It lives in `docs/docs/integrations` directory. - [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/ --------- Co-authored-by: Bagatur <baskaryan@gmail.com> Co-authored-by: Chester Curme <chester.curme@gmail.com>
could we add unit tests for this change?
Updating the gateway pages in the documentation to name the `langchain-databricks` integration. --------- Signed-off-by: B-Step62 <yuki.watanabe@databricks.com> Co-authored-by: Bagatur <baskaryan@gmail.com>
Added missed provider pages and links. Fixed inconsistent formatting. Added arXiv references to docstrings. --------- Co-authored-by: Harrison Chase <hw.chase.17@gmail.com> Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
…22627) **Description:** This updates the langchain_community > huggingface > default bge embeddings ([the current default recommends this change](https://huggingface.co/BAAI/bge-large-en)) **Issue:** None **Dependencies:** None **Twitter handle:** @JonZeolla --------- Co-authored-by: Bagatur <baskaryan@gmail.com>
…tool converts api outputs into string (langchain-ai#22580) Co-authored-by: Bagatur <baskaryan@gmail.com>
langchain-ai#25929 broke the layout because of missing `:::` for the caution clause. Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
…' into parameterless_function_tools_fix
@baskaryan When using `create_xml_agent` or `create_json_chat_agent` to create an agent whose tool wraps a parameterless function, the `XMLAgentOutputParser` or `JSONAgentOutputParser` parses the tool input into an empty string, and `BaseTool` then passes that empty string as a positional argument. The program therefore ends up crashing, because a parameterless function is invoked with a positional argument.
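The failure mode can be reproduced in plain Python, without LangChain at all (the function name here is hypothetical; the empty string stands in for what the output parser produces when the model supplies no tool input):

```python
def get_current_time():
    """A parameterless tool function."""
    return "12:00"

tool_input = ""  # what the agent output parser yields for a no-input tool call

try:
    # how the tool layer ends up calling the function: with the parsed
    # input as a positional argument
    get_current_time(tool_input)
    message = None
except TypeError as e:
    message = str(e)  # "...takes 0 positional arguments but 1 was given"
```

The fix is to avoid forwarding the empty-string input positionally when the wrapped function takes no arguments.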