docs: index pages and sidebars #6522

Merged · 5 commits · Aug 14, 2024
21 changes: 10 additions & 11 deletions docs/core_docs/.gitignore
@@ -348,17 +348,6 @@ docs/integrations/retrievers/self_query/hnswlib.md
docs/integrations/retrievers/self_query/hnswlib.mdx
docs/integrations/retrievers/self_query/chroma.md
docs/integrations/retrievers/self_query/chroma.mdx
docs/integrations/document_loaders/file_loaders/unstructured.md
docs/integrations/document_loaders/file_loaders/unstructured.mdx
docs/integrations/document_loaders/file_loaders/text.md
docs/integrations/document_loaders/file_loaders/text.mdx
docs/integrations/document_loaders/file_loaders/pdf.md
docs/integrations/document_loaders/file_loaders/pdf.mdx
docs/integrations/document_loaders/file_loaders/directory.md
docs/integrations/document_loaders/file_loaders/directory.mdx
docs/integrations/document_loaders/file_loaders/csv.md
docs/integrations/document_loaders/file_loaders/csv.mdx
.vercel
docs/integrations/document_loaders/web_loaders/web_puppeteer.md
docs/integrations/document_loaders/web_loaders/web_puppeteer.mdx
docs/integrations/document_loaders/web_loaders/web_cheerio.md
@@ -369,3 +358,13 @@ docs/integrations/document_loaders/web_loaders/pdf.md
docs/integrations/document_loaders/web_loaders/pdf.mdx
docs/integrations/document_loaders/web_loaders/firecrawl.md
docs/integrations/document_loaders/web_loaders/firecrawl.mdx
docs/integrations/document_loaders/file_loaders/unstructured.md
docs/integrations/document_loaders/file_loaders/unstructured.mdx
docs/integrations/document_loaders/file_loaders/text.md
docs/integrations/document_loaders/file_loaders/text.mdx
docs/integrations/document_loaders/file_loaders/pdf.md
docs/integrations/document_loaders/file_loaders/pdf.mdx
docs/integrations/document_loaders/file_loaders/directory.md
docs/integrations/document_loaders/file_loaders/directory.mdx
docs/integrations/document_loaders/file_loaders/csv.md
docs/integrations/document_loaders/file_loaders/csv.mdx
8 changes: 4 additions & 4 deletions docs/core_docs/docs/concepts.mdx
@@ -22,15 +22,15 @@ import useBaseUrl from "@docusaurus/useBaseUrl";
### `@langchain/core`

This package contains base abstractions of different components and ways to compose them together.
The interfaces for core components like LLMs, vectorstores, retrievers and more are defined here.
The interfaces for core components like LLMs, vector stores, retrievers and more are defined here.
No third party integrations are defined here.
The dependencies are kept purposefully very lightweight.

### `@langchain/community`

This package contains third party integrations that are maintained by the LangChain community.
Key partner packages are separated out (see below).
This contains all integrations for various components (LLMs, vectorstores, retrievers).
This contains all integrations for various components (LLMs, vector stores, retrievers).
All dependencies in this package are optional to keep the package as lightweight as possible.

### Partner packages
@@ -143,7 +143,7 @@ Some components LangChain implements, some components we rely on third-party int
<span data-heading-keywords="chat model,chat models"></span>

Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
These are traditionally newer models (older models are generally `LLMs`, see below).
These are generally newer models (older models are generally `LLMs`, see below).
Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.

Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input.
@@ -480,7 +480,7 @@ The base Embeddings class in LangChain provides two methods: one for embedding d

For specifics on how to use embedding models, see the [relevant how-to guides here](/docs/how_to/#embedding-models).
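For context, here is a minimal sketch of those two methods (`embedQuery` for a single query, `embedDocuments` for a batch of texts), assuming the `@langchain/openai` integration; the model name and inputs are illustrative.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });

// embedQuery embeds a single piece of text, such as a search query...
const queryVector = await embeddings.embedQuery("What is LangChain?");

// ...while embedDocuments embeds a batch of texts for indexing.
const docVectors = await embeddings.embedDocuments([
  "LangChain is a framework for building LLM applications.",
  "Vector stores index embedded documents for similarity search.",
]);

console.log(queryVector.length, docVectors.length);
```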

### Vectorstores
### Vector stores {#vectorstore}

<span data-heading-keywords="vector,vectorstore,vectorstores,vector store,vector stores"></span>

2 changes: 1 addition & 1 deletion docs/core_docs/docs/integrations/chat/azure.ipynb
@@ -21,7 +21,7 @@
"source": [
"# AzureChatOpenAI\n",
"\n",
"[Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/) is a Microsoft Azure service that provides powerful language models from OpenAI.\n",
"Azure OpenAI is a Microsoft Azure service that provides powerful language models from OpenAI.\n",
"\n",
"This will help you getting started with AzureChatOpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all AzureChatOpenAI features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_openai.AzureChatOpenAI.html).\n",
"\n",
66 changes: 29 additions & 37 deletions docs/core_docs/docs/integrations/chat/index.mdx
@@ -6,40 +6,32 @@ hide_table_of_contents: true

# Chat models

## Features (natively supported)

All ChatModels implement the Runnable interface, which comes with default implementations of all methods, i.e. `invoke`, `batch`, and `stream`. This gives all ChatModels basic support for invoking, streaming, and batching, which by default is implemented as follows:

- _Streaming_ support defaults to returning an `AsyncIterator` of a single value, the final result returned by the underlying ChatModel provider. This obviously doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but ensures your code that expects an iterator of tokens can work for any of our ChatModel integrations.
- _Batch_ support defaults to calling the underlying ChatModel in parallel for each input. The concurrency can be controlled with the `maxConcurrency` key in `RunnableConfig`.

Each ChatModel integration can optionally provide native implementations of invoking, streaming, or batching requests.
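For illustration, a minimal sketch of these default methods, assuming the `@langchain/openai` package as the provider; the model name, prompts, and concurrency limit are placeholders.

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// invoke: one input in, one message out.
const message = await model.invoke("Hello!");

// stream: an AsyncIterator of chunks; providers without native streaming
// yield a single chunk containing the final result.
for await (const chunk of await model.stream("Tell me a joke")) {
  console.log(chunk.content);
}

// batch: each input is sent in parallel; maxConcurrency in RunnableConfig caps parallelism.
const results = await model.batch(["Hi", "How are you?"], { maxConcurrency: 2 });
console.log(message.content, results.length);
```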

Some chat models additionally support ways of guaranteeing structure in their outputs by allowing you to pass in a defined schema.
[Tool calling](/docs/how_to/tool_calling) is one such capability, and allows you to use the chat model as the LLM in certain types of agents.
Some models in LangChain have also implemented a `withStructuredOutput()` method that unifies many of these different ways of constraining output to a schema.
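A rough sketch of `withStructuredOutput()` with a Zod schema, assuming a model that implements it (here `ChatOpenAI`); the schema and prompt are illustrative.

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

// Define the schema the output must conform to.
const jokeSchema = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const structuredModel = model.withStructuredOutput(jokeSchema);

// The result is parsed into the schema's shape instead of a raw message.
const joke = await structuredModel.invoke("Tell me a joke about cats");
console.log(joke.setup, joke.punchline);
```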

The table shows, for each integration, which features have been implemented with native support. Yellow circles (🟡) indicate partial support - for example, if the model supports tool calling but not tool messages for agents.

| Model | Stream | JSON mode | [Tool Calling](/docs/how_to/tool_calling/) | [`withStructuredOutput()`](/docs/how_to/structured_output/#the-.withstructuredoutput-method) | [Multimodal](/docs/how_to/multimodal_inputs/) |
| :---------------------- | :----: | :-------: | :----------------------------------------: | :------------------------------------------------------------------------------------------: | :-------------------------------------------: |
| BedrockChat | ✅ | ❌ | 🟡 (Bedrock Anthropic only) | 🟡 (Bedrock Anthropic only) | 🟡 (Bedrock Anthropic only) |
| ChatBedrockConverse | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatAlibabaTongyi | ❌ | ❌ | ❌ | ❌ | ❌ |
| ChatAnthropic | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatBaiduWenxin | ❌ | ❌ | ❌ | ❌ | ❌ |
| ChatCloudflareWorkersAI | ✅ | ❌ | ❌ | ❌ | ❌ |
| ChatCohere | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatFireworks | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatGoogleGenerativeAI | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatVertexAI | ✅ | ❌ | ✅ | ✅ | ✅ |
| ChatGroq | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatLlamaCpp | ✅ | ❌ | ❌ | ❌ | ❌ |
| ChatMinimax | ❌ | ❌ | ❌ | ❌ | ❌ |
| ChatMistralAI | ❌ | ✅ | ✅ | ✅ | ✅ |
| ChatOllama | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatOpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatTencentHunyuan | ✅ | ❌ | ❌ | ❌ | ❌ |
| ChatTogetherAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatYandexGPT | ❌ | ❌ | ❌ | ❌ | ❌ |
| ChatZhipuAI | ❌ | ❌ | ❌ | ❌ | ❌ |
[Chat models](/docs/concepts/#chat-models) are language models that use a sequence of [messages](/docs/concepts/#messages) as inputs and return messages as outputs (as opposed to using plain text). These are generally newer models.

:::info
If you'd like to write your own chat model, see [this how-to](/docs/how_to/custom_chat). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
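As a quick illustration of the messages-in, messages-out interface, here is a minimal sketch assuming the `@langchain/openai` integration; the model name and messages are placeholders.

```typescript
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Roles keep system instructions distinct from user input.
const response = await model.invoke([
  new SystemMessage("You are a helpful assistant that answers concisely."),
  new HumanMessage("What is a chat model?"),
]);
console.log(response.content);
```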

## Featured providers

| Model | Stream | JSON mode | [Tool Calling](/docs/how_to/tool_calling/) | [`withStructuredOutput()`](/docs/how_to/structured_output/#the-.withstructuredoutput-method) | [Multimodal](/docs/how_to/multimodal_inputs/) |
| :----------------------------------------------------------------------- | :----: | :-------: | :----------------------------------------: | :------------------------------------------------------------------------------------------: | :-------------------------------------------: |
| [BedrockChat](/docs/integrations/chat/bedrock/) | ✅ | ❌ | 🟡 (Bedrock Anthropic only) | 🟡 (Bedrock Anthropic only) | 🟡 (Bedrock Anthropic only) |
| [ChatBedrockConverse](/docs/integrations/chat/bedrock_converse/) | ✅ | ❌ | ✅ | ✅ | ✅ |
| [ChatAnthropic](/docs/integrations/chat/anthropic/) | ✅ | ❌ | ✅ | ✅ | ✅ |
| [ChatCloudflareWorkersAI](/docs/integrations/chat/cloudflare_workersai/) | ✅ | ❌ | ❌ | ❌ | ❌ |
| [ChatCohere](/docs/integrations/chat/cohere/) | ✅ | ❌ | ✅ | ✅ | ✅ |
| [ChatFireworks](/docs/integrations/chat/fireworks/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [ChatGoogleGenerativeAI](/docs/integrations/chat/google_generativeai/) | ✅ | ❌ | ✅ | ✅ | ✅ |
| [ChatVertexAI](/docs/integrations/chat/google_vertex_ai/) | ✅ | ❌ | ✅ | ✅ | ✅ |
| [ChatGroq](/docs/integrations/chat/groq/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [ChatMistralAI](/docs/integrations/chat/mistral/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [ChatOllama](/docs/integrations/chat/ollama/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [ChatOpenAI](/docs/integrations/chat/openai/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [ChatTogetherAI](/docs/integrations/chat/togetherai/) | ✅ | ✅ | ✅ | ✅ | ✅ |

## All chat models

import { IndexTable } from "@theme/FeatureTables";

<IndexTable />
@@ -12,6 +12,12 @@ Only available on Node.js.

These loaders are used to load files given a filesystem path or a Blob object.

import DocCardList from "@theme/DocCardList";
:::info
If you'd like to write your own document loader, see [this how-to](/docs/how_to/document_loader_custom/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::

import { CategoryTable, IndexTable } from "@theme/FeatureTables";

## All document loaders

<DocCardList />
<IndexTable />
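For reference, a minimal sketch of using one of these file loaders, assuming the built-in `TextLoader`; the file path is illustrative.

```typescript
import { TextLoader } from "langchain/document_loaders/fs/text";

// Load a local text file into LangChain Document objects.
const loader = new TextLoader("./example.txt");
const docs = await loader.load();
console.log(docs[0].pageContent);
```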
18 changes: 18 additions & 0 deletions docs/core_docs/docs/integrations/document_loaders/index.mdx
@@ -0,0 +1,18 @@
---
sidebar_position: 0
---

# Document loaders

[Document loaders](/docs/concepts#document-loaders) load data into LangChain's expected format for use-cases such as [retrieval-augmented generation (RAG)](/docs/tutorials/rag).

LangChain.js groups document loaders into two categories:

- [File loaders](/docs/integrations/document_loaders/file_loaders/), which load data into LangChain formats from your local filesystem.
- [Web loaders](/docs/integrations/document_loaders/web_loaders/), which load data from remote sources.

See the individual pages for more on each category.

:::info
If you'd like to write your own document loader, see [this how-to](/docs/how_to/document_loader_custom/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::
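As a companion to the custom document loader how-to linked above, a rough sketch of a minimal loader, assuming the `BaseDocumentLoader` base class from `@langchain/core/document_loaders/base`; the `StringLoader` class is hypothetical.

```typescript
import { BaseDocumentLoader } from "@langchain/core/document_loaders/base";
import { Document } from "@langchain/core/documents";

// Hypothetical loader that wraps an in-memory string as a single Document.
class StringLoader extends BaseDocumentLoader {
  constructor(private text: string) {
    super();
  }

  async load(): Promise<Document[]> {
    return [new Document({ pageContent: this.text })];
  }
}

const docs = await new StringLoader("Hello, world!").load();
console.log(docs.length); // 1
```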
@@ -5,8 +5,14 @@ hide_table_of_contents: true

# Web Loaders

These loaders are used to load web resources.
These loaders are used to load web resources. They do not involve the local file system.

import DocCardList from "@theme/DocCardList";
import { CategoryTable, IndexTable } from "@theme/FeatureTables";

<DocCardList />
:::info
If you'd like to write your own document loader, see [this how-to](/docs/how_to/document_loader_custom/). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing).
:::

## All web loaders

<IndexTable />
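For reference, a minimal sketch of using a web loader, assuming the Cheerio-based loader from `@langchain/community`; the URL is illustrative.

```typescript
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";

// Fetch and parse a remote page into Document objects.
const loader = new CheerioWebBaseLoader("https://example.com");
const docs = await loader.load();
console.log(docs[0].metadata.source);
```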