docs[patch]: Add crosslinks, add LangGraph and LangSmith sections (langchain-ai#5702)

* Add crosslinks, add LangGraph and LangSmith sections

* Fix anchors
jacoblee93 authored Jun 7, 2024
1 parent a4448fe commit 6535861
Showing 4 changed files with 95 additions and 26 deletions.
57 changes: 43 additions & 14 deletions docs/core_docs/docs/concepts.mdx
@@ -134,19 +134,6 @@ The **input type** and **output type** vary by component:
LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs.
Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.

### LLMs

<span data-heading-keywords="llm,llms"></span>

Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are `ChatModels`, see below).

Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
This makes them interchangeable with ChatModels.
When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.

LangChain does not provide any LLMs; rather, we rely on third-party integrations.

### Chat models

<span data-heading-keywords="chat model,chat models"></span>
@@ -167,6 +154,23 @@ We have some standardized parameters when constructing ChatModels:

ChatModels also accept other parameters that are specific to that integration.

For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models).
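
As an illustrative sketch (assuming the `@langchain/openai` integration package is installed and an API key is configured), a chat model takes messages in and returns a message:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

// Standardized params such as `model` and `temperature` work across providers;
// other params are specific to the integration.
const chatModel = new ChatOpenAI({ model: "gpt-4o", temperature: 0 });

const aiMessage = await chatModel.invoke([
  new HumanMessage("What is a chat model?"),
]);
console.log(aiMessage.content);
```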

### LLMs

<span data-heading-keywords="llm,llms"></span>

Language models that take a string as input and return a string.
These are traditionally older models (newer models generally are `ChatModels`, see below).

Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
This makes them interchangeable with ChatModels.
When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.

LangChain does not provide any LLMs; rather, we rely on third-party integrations.

For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms).
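
As a sketch of the string-in, string-out interface (again assuming the `@langchain/openai` integration package):

```typescript
import { OpenAI } from "@langchain/openai";

// A traditional text-completion LLM: pass a string, get a string back.
const llm = new OpenAI({ model: "gpt-3.5-turbo-instruct", temperature: 0 });

const completion = await llm.invoke("Write a one-line summary of LangChain.");
console.log(completion); // plain string
```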

### Function/Tool Calling

:::info
@@ -267,7 +271,7 @@ Prompt Templates take as input an object, where each key represents a variable i
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages.
The reason this PromptValue exists is to make it easy to switch between strings and messages.

There are a few different types of prompt templates
There are a few different types of prompt templates:

#### String PromptTemplates

@@ -341,13 +345,17 @@ const promptTemplate = ChatPromptTemplate.fromMessages([
]);
```

For specifics on how to use prompt templates, see the [relevant how-to guides here](/docs/how_to/#prompt-templates).
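
As a minimal sketch of why PromptValue is convenient, the same formatted prompt can be rendered as either messages or a single string:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const chatPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["user", "{question}"],
]);

// Formatting produces a PromptValue rather than a raw string or message array.
const promptValue = await chatPrompt.invoke({ question: "What is a PromptValue?" });

console.log(promptValue.toChatMessages()); // as an array of messages (for chat models)
console.log(promptValue.toString());       // as a single string (for LLMs)
```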

### Example Selectors

One common prompting technique for achieving better performance is to include examples as part of the prompt.
This gives the language model concrete examples of how it should behave.
Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.
Example Selectors are classes responsible for selecting and then formatting examples into prompts.

For specifics on how to use example selectors, see the [relevant how-to guides here](/docs/how_to/#example-selectors).
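
As a rough sketch (assuming an OpenAI embeddings integration and the in-memory vector store), a semantic similarity example selector might be set up like this:

```typescript
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Index a handful of examples, then pick the one closest to the input.
const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
  [
    { input: "happy", output: "sad" },
    { input: "tall", output: "short" },
    { input: "sunny", output: "gloomy" },
  ],
  new OpenAIEmbeddings(),
  MemoryVectorStore,
  { k: 1 }
);

const selected = await exampleSelector.selectExamples({ input: "joyful" });
// => [{ input: "happy", output: "sad" }]
```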

### Output parsers

<span data-heading-keywords="output parser"></span>
@@ -398,6 +406,8 @@ LangChain has many different types of output parsers. This is a list of output p
| [Datetime](https://v02.api.js.langchain.com/classes/langchain_output_parsers.DatetimeOutputParser.html) | | `string` | `Promise<Date>` | Parses response into a `Date`. |
| [Regex](https://v02.api.js.langchain.com/classes/langchain_output_parsers.RegexParser.html) | | `string` | `Promise<Record<string, string>>` | Parses the given text using the regex pattern and returns an object with the parsed output. |

For specifics on how to use output parsers, see the [relevant how-to guides here](/docs/how_to/#output-parsers).
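
As a minimal sketch (reusing the chat prompt and chat model from the sketches above), a string output parser turns the model's message output into a plain string at the end of a chain:

```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";

// chatPrompt and chatModel are the sketch objects defined earlier.
const chain = chatPrompt.pipe(chatModel).pipe(new StringOutputParser());

const answer = await chain.invoke({ question: "What does an output parser do?" });
console.log(typeof answer); // "string"
```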

### Chat History

Most LLM applications have a conversational interface.
@@ -435,6 +445,8 @@ const loader = new CSVLoader();
const docs = await loader.load();
```

For specifics on how to use document loaders, see the [relevant how-to guides here](/docs/how_to/#document-loaders).

### Text splitters

Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
@@ -452,6 +464,8 @@ That means there are two different axes along which you can customize your text
1. How the text is split
2. How the chunk size is measured

For specifics on how to use text splitters, see the [relevant how-to guides here](/docs/how_to/#text-splitters).
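
As an illustrative sketch (assuming the `@langchain/textsplitters` package and the `docs` loaded above), a recursive character splitter customizes both axes at once:

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Try to split on paragraph, then sentence, then word boundaries,
// falling back to raw characters; chunk size is measured in characters.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});

const chunks = await splitter.splitDocuments(docs);
```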

### Embedding models

<span data-heading-keywords="embedding,embeddings"></span>
@@ -462,6 +476,8 @@ Embeddings create a vector representation of a piece of text. This is useful bec

The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).

For specifics on how to use embedding models, see the [relevant how-to guides here](/docs/how_to/#embedding-models).
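
As a sketch of the two methods (assuming an OpenAI embeddings integration):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings();

// Embed documents that will be searched over...
const docVectors = await embeddings.embedDocuments(["First document", "Second document"]);

// ...and embed the search query itself.
const queryVector = await embeddings.embedQuery("Which document is first?");
```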

### Vectorstores

<span data-heading-keywords="vector,vectorstore,vectorstores,vector store,vector stores"></span>
@@ -477,6 +493,8 @@ const vectorstore = new MyVectorStore();
const retriever = vectorstore.asRetriever();
```

For specifics on how to use vector stores, see the [relevant how-to guides here](/docs/how_to/#vectorstores).
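
As a more concrete sketch than the snippet above (using the in-memory vector store as a stand-in for any integration, plus the `chunks` and `embeddings` from earlier sketches):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// Embed the chunks on the way in, then search by similarity.
const vectorstore = await MemoryVectorStore.fromDocuments(chunks, embeddings);
const similar = await vectorstore.similaritySearch("What is a vector store?", 2);

// Expose the store through the retriever interface.
const retriever = vectorstore.asRetriever();
```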

### Retrievers

<span data-heading-keywords="retriever,retrievers"></span>
@@ -488,6 +506,8 @@ Retrievers can be created from vector stores, but are also broad enough to inclu

Retrievers accept a string query as input and return an array of `Document`s as output.

For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
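
For example (continuing the sketch above), invoking any retriever looks the same regardless of what backs it:

```typescript
// String query in, array of Documents out.
const relevantDocs = await retriever.invoke("How do text splitters work?");
console.log(relevantDocs.map((doc) => doc.pageContent));
```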

### Tools

<span data-heading-keywords="tool,tools"></span>
@@ -508,6 +528,8 @@ Many agents will only work with tools that have a single string input.

Importantly, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or JSON schema if the LLM is not understanding how to use the tool.

For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).
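
As an illustrative sketch (the tool name, description, and schema below are hypothetical), a structured tool bundles a schema with the function to call:

```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// The name, description, and schema are all surfaced to the model in the prompt,
// so they should describe exactly how the tool is meant to be used.
const multiplyTool = new DynamicStructuredTool({
  name: "multiply",
  description: "Multiply two numbers together.",
  schema: z.object({
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  func: async ({ a, b }) => (a * b).toString(),
});

const product = await multiplyTool.invoke({ a: 6, b: 7 }); // "42"
```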

### Toolkits

Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
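
For instance (a sketch in which `toolkit` stands in for any concrete toolkit instance), the whole collection can be retrieved at once and handed to an agent:

```typescript
// `toolkit` is assumed to be an instantiated toolkit integration;
// toolkits expose their tools through a common getTools() method.
const tools = toolkit.getTools();
```
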
@@ -540,6 +562,7 @@ In order to solve that we built LangGraph to be this flexible, highly-controllab

If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/docs/how_to/agent_executor).
It is recommended, however, that you start to transition to [LangGraph](https://github.com/langchain-ai/langgraphjs).
In order to assist in this, we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent).

### Multimodal

@@ -549,6 +572,8 @@ Multimodal **outputs** are even less common. As such, we've kept our multimodal
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format.
So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.

For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
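
As a sketch of the content block format (the `base64Image` variable and the reuse of `chatModel` are assumptions for illustration):

```typescript
import { HumanMessage } from "@langchain/core/messages";

// `base64Image` is assumed to hold a base64-encoded image.
// Mix text and an image in a single message using OpenAI-style content blocks.
const message = new HumanMessage({
  content: [
    { type: "text", text: "What does this image show?" },
    {
      type: "image_url",
      image_url: { url: `data:image/jpeg;base64,${base64Image}` },
    },
  ],
});

const described = await chatModel.invoke([message]);
```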

### Callbacks

LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
@@ -600,6 +625,8 @@ of the object.
If you're creating a custom chain or runnable, you need to remember to propagate request time
callbacks to any child objects.

For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
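
As a minimal sketch of request-time callbacks (reusing the `chain` from the output parser sketch), handlers passed at invocation time are propagated to all child runs:

```typescript
const result = await chain.invoke(
  { question: "What are callbacks for?" },
  {
    callbacks: [
      {
        // Fires when any LLM/chat model call inside the chain starts or ends.
        handleLLMStart: async (_llm, prompts) => console.log("LLM start:", prompts),
        handleLLMEnd: async (output) => console.log("LLM end:", output),
      },
    ],
  }
);
```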

## Techniques

### Function/tool calling
@@ -671,6 +698,8 @@ LangChain provides several advanced retrieval types. A full list is below, along
| [Multi-Query Retriever](/docs/how_to/multiple_queries/) | Any | Yes | If users are asking questions that are complex and require multiple pieces of distinct information to respond. | This uses an LLM to generate multiple queries from the original one. This is useful when the original query needs pieces of information about multiple topics to be properly answered. By generating multiple queries, we can then fetch documents for each of them. |
| [Ensemble](/docs/how_to/ensemble_retriever) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |

For a high-level guide on retrieval, see this [tutorial on RAG](/docs/tutorials/rag/).

### Text splitting

LangChain offers many different types of `text splitters`.
46 changes: 35 additions & 11 deletions docs/core_docs/docs/how_to/index.mdx
@@ -47,7 +47,7 @@ These are the core building blocks you can use when building applications.

### Prompt templates

Prompt Templates are responsible for formatting user input into a format that can be passed to a language model.
[Prompt Templates](/docs/concepts/#prompt-templates) are responsible for formatting user input into a format that can be passed to a language model.

- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
@@ -56,15 +56,15 @@ Prompt Templates are responsible for formatting user input into a format that ca

### Example selectors

Example Selectors are responsible for selecting the correct few shot examples to pass to the prompt.
[Example Selectors](/docs/concepts/#example-selectors) are responsible for selecting the correct few shot examples to pass to the prompt.

- [How to: use example selectors](/docs/how_to/example_selectors)
- [How to: select examples by length](/docs/how_to/example_selectors_length_based)
- [How to: select examples by semantic similarity](/docs/how_to/example_selectors_similarity)

### Chat models

Chat Models are newer forms of language models that take messages in and output a message.
[Chat Models](/docs/concepts/#chat-models) are newer forms of language models that take messages in and output a message.

- [How to: do function/tool calling](/docs/how_to/tool_calling)
- [How to: get models to return structured output](/docs/how_to/structured_output)
@@ -76,7 +76,7 @@ Chat Models are newer forms of language models that take messages in and output

### LLMs

What LangChain calls LLMs are older forms of language models that take a string in and output a string.
What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language models that take a string in and output a string.

- [How to: cache model responses](/docs/how_to/llm_caching)
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
@@ -85,7 +85,7 @@ What LangChain calls LLMs are older forms of language models that take a string

### Output parsers

Output Parsers are responsible for taking the output of an LLM and parsing it into a more structured format.
[Output Parsers](/docs/concepts/#output-parsers) are responsible for taking the output of an LLM and parsing it into a more structured format.

- [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured)
- [How to: parse JSON output](/docs/how_to/output_parser_json)
@@ -94,7 +94,7 @@ Output Parsers are responsible for taking the output of an LLM and parsing into

### Document loaders

Document Loaders are responsible for loading documents from a variety of sources.
[Document Loaders](/docs/concepts/#document-loaders) are responsible for loading documents from a variety of sources.

- [How to: load CSV data](/docs/how_to/document_loader_csv)
- [How to: load data from a directory](/docs/how_to/document_loader_directory)
@@ -105,7 +105,7 @@ Document Loaders are responsible for loading documents from a variety of sources.

### Text splitters

Text Splitters take a document and split it into chunks that can be used for retrieval.
[Text Splitters](/docs/concepts/#text-splitters) take a document and split it into chunks that can be used for retrieval.

- [How to: recursively split text](/docs/how_to/recursive_text_splitter)
- [How to: split by character](/docs/how_to/character_text_splitter)
@@ -114,20 +114,20 @@ Text Splitters take a document and split into chunks that can be used for retrie

### Embedding models

Embedding Models take a piece of text and create a numerical representation of it.
[Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it.

- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)

### Vector stores

Vector stores are databases that can efficiently store and retrieve embeddings.
[Vector stores](/docs/concepts/#vectorstores) are databases that can efficiently store and retrieve embeddings.

- [How to: create and query vector stores](/docs/how_to/vectorstores)

### Retrievers

Retrievers are responsible for taking a query and returning relevant documents.
[Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents.

- [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever)
- [How to: generate multiple queries to retrieve data for](/docs/how_to/multiple_queries)
@@ -148,7 +148,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying

### Tools

LangChain Tools contain a description of the tool (to pass to the language model) as well as the implementation of the function to call).
LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call.

- [How to: create custom tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin)
@@ -168,6 +168,8 @@ For in depth how-to guides for agents, please check out [LangGraph](https://lang

### Callbacks

[Callbacks](/docs/concepts/#callbacks) allow you to hook into the various stages of your LLM application's execution.

- [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime)
- [How to: attach callbacks to a module](/docs/how_to/callbacks_attach)
- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
@@ -204,6 +206,7 @@ These guides cover use-case specific details.
### Q&A with RAG

Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data.
For a high-level tutorial on RAG, check out [this guide](/docs/tutorials/rag/).

- [How to: add chat history](/docs/how_to/qa_chat_history_how_to/)
- [How to: stream](/docs/how_to/qa_streaming/)
@@ -214,6 +217,7 @@ Retrieval Augmented Generation (RAG) is a way to connect LLMs to external source
### Extraction

Extraction is when you use LLMs to extract structured information from unstructured text.
For a high-level tutorial on extraction, check out [this guide](/docs/tutorials/extraction/).

- [How to: use reference examples](/docs/how_to/extraction_examples/)
- [How to: handle long text](/docs/how_to/extraction_long_text/)
@@ -222,6 +226,7 @@ Extraction is when you use LLMs to extract structured information from unstructu
### Chatbots

Chatbots involve using an LLM to have a conversation.
For a high-level tutorial on building chatbots, check out [this guide](/docs/tutorials/chatbot/).

- [How to: manage memory](/docs/how_to/chatbots_memory)
- [How to: do retrieval](/docs/how_to/chatbots_retrieval)
@@ -230,6 +235,7 @@ Chatbots involve using an LLM to have a conversation.
### Query analysis

Query Analysis is the task of using an LLM to generate a query to send to a retriever.
For a high-level tutorial on query analysis, check out [this guide](/docs/tutorials/query_analysis/).

- [How to: add examples to the prompt](/docs/how_to/query_few_shot)
- [How to: handle cases where no queries are generated](/docs/how_to/query_no_queries)
@@ -241,6 +247,7 @@ Query Analysis is the task of using an LLM to generate a query to send to a retr
### Q&A over SQL + CSV

You can use LLMs to do question answering over tabular data.
For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).

- [How to: use prompting to improve results](/docs/how_to/sql_prompting)
- [How to: do query validation](/docs/how_to/sql_query_checking)
@@ -249,8 +256,25 @@ You can use LLMs to do question answering over tabular data.
### Q&A over graph databases

You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).

- [How to: map values to a database](/docs/how_to/graph_mapping)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: improve results with prompting](/docs/how_to/graph_prompting)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)

## [LangGraph.js](https://langchain-ai.github.io/langgraphjs)

LangGraph.js is an extension of LangChain aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.

LangGraph.js documentation is currently hosted on a separate site.
You can peruse [LangGraph.js how-to guides here](https://langchain-ai.github.io/langgraphjs/how-tos/).

## [LangSmith](https://docs.smith.langchain.com/)

LangSmith allows you to closely trace, monitor and evaluate your LLM application.
It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.

LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/).
