From ee11067625cee22288fe7b0063b50054b1e28bf6 Mon Sep 17 00:00:00 2001
From: sergiopaniego
Date: Fri, 13 Sep 2024 15:39:18 +0200
Subject: [PATCH 1/4] Corrected agents task link typo

---
 docs/source/en/agents.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/agents.md b/docs/source/en/agents.md
index b100e39f1c9591..d1c131d4df24af 100644
--- a/docs/source/en/agents.md
+++ b/docs/source/en/agents.md
@@ -19,7 +19,7 @@ rendered properly in your Markdown viewer.
 
 ### What is an agent?
 
-Large Language Models (LLMs) trained to perform [causal language modeling](./tasks/language_modeling.) can tackle a wide range of tasks, but they often struggle with basic tasks like logic, calculation, and search. When prompted in domains in which they do not perform well, they often fail to generate the answer we expect them to.
+Large Language Models (LLMs) trained to perform [causal language modeling](./tasks/language_modeling) can tackle a wide range of tasks, but they often struggle with basic tasks like logic, calculation, and search. When prompted in domains in which they do not perform well, they often fail to generate the answer we expect them to.
 
 One approach to overcome this weakness is to create an *agent*.

From 753170c64deba22a14d1a71002bd3411b11e8ff7 Mon Sep 17 00:00:00 2001
From: sergiopaniego
Date: Fri, 13 Sep 2024 15:59:10 +0200
Subject: [PATCH 2/4] Corrected chat templating link

---
 docs/source/en/agents.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/agents.md b/docs/source/en/agents.md
index d1c131d4df24af..f6370dcb666464 100644
--- a/docs/source/en/agents.md
+++ b/docs/source/en/agents.md
@@ -114,7 +114,7 @@ To start with, please install the `agents` extras in order to install all defaul
 pip install transformers[agents]
 ```
 
-Build your LLM engine by defining a `llm_engine` method which accepts a list of [messages](./chat_templating.) and returns text. This callable also needs to accept a `stop` argument that indicates when to stop generating.
+Build your LLM engine by defining a `llm_engine` method which accepts a list of [messages](./chat_templating) and returns text. This callable also needs to accept a `stop` argument that indicates when to stop generating.
 
 ```python
 from huggingface_hub import login, InferenceClient

From c04b702c1e752e0dcce1d42a69ecb3e5d4a8c097 Mon Sep 17 00:00:00 2001
From: sergiopaniego
Date: Fri, 13 Sep 2024 15:59:56 +0200
Subject: [PATCH 3/4] Corrected chat templating link 2

---
 docs/source/en/agents.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/agents.md b/docs/source/en/agents.md
index f6370dcb666464..0b889f4eec867b 100644
--- a/docs/source/en/agents.md
+++ b/docs/source/en/agents.md
@@ -130,7 +130,7 @@ def llm_engine(messages, stop_sequences=["Task"]) -> str:
 ```
 
 You could use any `llm_engine` method as long as:
-1. it follows the [messages format](./chat_templating.md) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`.
+1. it follows the [messages format](./chat_templating) (`List[Dict[str, str]]`) for its input `messages`, and it returns a `str`.
 2. it stops generating outputs at the sequences passed in the argument `stop_sequences`
 
 Additionally, `llm_engine` can also take a `grammar` argument. In the case where you specify a `grammar` upon agent initialization, this argument will be passed to the calls to llm_engine, with the `grammar` that you defined upon initialization, to allow [constrained generation](https://huggingface.co/docs/text-generation-inference/conceptual/guidance) in order to force properly-formatted agent outputs.
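[Editor's note] As context for the `llm_engine` contract these doc patches touch (input `messages` as `List[Dict[str, str]]`, a `str` return, and truncation at the `stop_sequences`), here is a minimal self-contained sketch. The echo body is a hypothetical stand-in for a real model call — it is not the `InferenceClient`-backed engine from the docs, only an illustration of the required interface:

```python
from typing import Dict, List


def llm_engine(messages: List[Dict[str, str]], stop_sequences: List[str] = ["Task"]) -> str:
    """Illustrates the contract only: chat-format messages in, str out,
    output truncated at the first stop sequence encountered."""
    # A real engine would call an LLM here; this toy just echoes the last message.
    reply = "Echo: " + messages[-1]["content"]
    for stop in stop_sequences:
        if stop in reply:
            reply = reply[: reply.index(stop)]  # stop generating at the sequence
    return reply


print(llm_engine([{"role": "user", "content": "hello Task world"}]))  # prints "Echo: hello "
```

Any callable with this shape — including a closure over a local model or a hosted endpoint — satisfies conditions 1 and 2 from the patched list above.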
From 55a8a4b7ff78da74f8d52e56fe89c7a90b4f24c1 Mon Sep 17 00:00:00 2001
From: sergiopaniego
Date: Fri, 13 Sep 2024 17:21:59 +0200
Subject: [PATCH 4/4] Typo fixed in Agents, supercharged

---
 docs/source/en/agents_advanced.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/agents_advanced.md b/docs/source/en/agents_advanced.md
index e7469a310c4102..399eeb9b70eb20 100644
--- a/docs/source/en/agents_advanced.md
+++ b/docs/source/en/agents_advanced.md
@@ -34,7 +34,7 @@ You can easily build hierarchical multi-agent systems with `transformers.agents`
 
 To do so, encapsulate the agent in a [`ManagedAgent`] object. This object needs arguments `agent`, `name`, and a `description`, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.
 
-Here's an example of making an agent that managed a specitif web search agent using our [`DuckDuckGoSearchTool`]:
+Here's an example of making an agent that managed a specific web search agent using our [`DuckDuckGoSearchTool`]:
 
 ```py
 from transformers.agents import ReactCodeAgent, HfApiEngine, DuckDuckGoSearchTool, ManagedAgent
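[Editor's note] The `ManagedAgent` wrapper this last patch touches can be sketched without any dependencies. All names below (`ToyManagedAgent`, `build_manager_system_prompt`) are hypothetical stand-ins, not the `transformers.agents` API; the sketch only illustrates the mechanism the docs describe — a wrapped agent's `name` and `description` are embedded in the manager's system prompt, tool-style, so the manager knows how to delegate:

```python
class ToyManagedAgent:
    """Hypothetical stand-in for the managed-agent idea: wrap an agent
    with a name and description so a manager agent can call it."""

    def __init__(self, agent, name: str, description: str):
        self.agent = agent
        self.name = name
        self.description = description

    def __call__(self, request: str) -> str:
        # Delegate the request to the wrapped agent.
        return self.agent(request)


def build_manager_system_prompt(managed_agents) -> str:
    # List each managed agent in the manager's system prompt, as for tools.
    lines = ["You can delegate tasks to these team members:"]
    lines += [f"- {m.name}: {m.description}" for m in managed_agents]
    return "\n".join(lines)


# A lambda stands in for a real web search agent built on a search tool.
web_agent = ToyManagedAgent(
    agent=lambda query: f"results for {query!r}",
    name="web_search",
    description="Runs web searches and returns the top results.",
)
print(build_manager_system_prompt([web_agent]))
print(web_agent("multi-agent systems"))
```

In the real library the wrapped object would be a full agent with its own tools and LLM engine; the routing principle is the same.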