Add note about jupyter workaround for conflicting event loops (#214)
sydney-runkle authored Dec 11, 2024
1 parent f76afa1 commit 4edb122
Showing 3 changed files with 35 additions and 18 deletions.
12 changes: 11 additions & 1 deletion docs/agents.md
@@ -62,7 +62,7 @@ print(result.data)
There are three ways to run an agent:

1. [`agent.run()`][pydantic_ai.Agent.run] — a coroutine which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response
2. [`agent.run_sync()`][pydantic_ai.Agent.run_sync] — a plain, synchronous function which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response (internally, this just calls `asyncio.run(self.run())`)
2. [`agent.run_sync()`][pydantic_ai.Agent.run_sync] — a plain, synchronous function which returns a [`RunResult`][pydantic_ai.result.RunResult] containing a completed response (internally, this just calls `loop.run_until_complete(self.run())`)
3. [`agent.run_stream()`][pydantic_ai.Agent.run_stream] — a coroutine which returns a [`StreamedRunResult`][pydantic_ai.result.StreamedRunResult], which contains methods to stream a response as an async iterable
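The `run_sync` internals mentioned above changed from `asyncio.run(...)` to `loop.run_until_complete(...)`; the difference can be sketched with the standard library alone (no pydantic-ai code here — `compute()` is a hypothetical stand-in for an agent's async `run()`):

```python
import asyncio

async def compute() -> int:
    # Stand-in for an agent's async run() coroutine.
    await asyncio.sleep(0)
    return 42

# asyncio.run() creates a fresh event loop and tears it down on every call.
print(asyncio.run(compute()))

# run_until_complete() drives a coroutine on an existing, reusable loop —
# a long-lived loop is also what nest-asyncio can patch inside jupyter.
loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(compute()))
finally:
    loop.close()
```

Both calls print `42`; only the loop lifecycle differs.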

Here's a simple example demonstrating all three:
@@ -90,6 +90,16 @@ _(This example is complete, it can be run "as is")_

You can also pass messages from previous runs to continue a conversation or provide context, as described in [Messages and Chat History](message-history.md).

!!! note "jupyter notebooks"
    If you're running `pydantic-ai` in a jupyter notebook, consider using [`nest-asyncio`](https://pypi.org/project/nest-asyncio/)
    to resolve the conflict between jupyter's event loop and `pydantic-ai`'s.

    Before you execute any agent runs, do the following:
    ```py {test="skip", lint="skip"}
    import nest_asyncio
    nest_asyncio.apply()
    ```
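The conflict that note works around can be reproduced with plain `asyncio`: calling `asyncio.run()` while a loop is already running — as one always is inside a jupyter cell — raises a `RuntimeError`, which is exactly what `nest_asyncio.apply()` patches away. A minimal stdlib sketch:

```python
import asyncio

async def main() -> str:
    try:
        # A running loop is already driving main(), so this nested
        # asyncio.run() call fails — the same situation a plain
        # run_sync() call hits inside a jupyter notebook.
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as exc:
        return str(exc)
    return 'no conflict'

print(asyncio.run(main()))
# → asyncio.run() cannot be called from a running event loop
```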

## Runs vs. Conversations

An agent **run** might represent an entire conversation — there's no limit to how many messages can be exchanged in a single run. However, a **conversation** might also be composed of multiple runs, especially if you need to maintain state between separate interactions or API calls.
2 changes: 1 addition & 1 deletion docs/index.md
@@ -144,7 +144,7 @@ To understand the flow of the above runs, we can watch the agent in action using

To do this, we need to set up logfire, and add the following to our code:

```py title="bank_support_with_logfire.py" hl_lines="4-6"
```py title="bank_support_with_logfire.py" hl_lines="4-6" test="skip" lint="skip"
...
from bank_database import DatabaseConn

39 changes: 23 additions & 16 deletions tests/test_examples.py
@@ -81,11 +81,12 @@ def test_docs_examples(

prefix_settings = example.prefix_settings()
opt_title = prefix_settings.get('title')
opt_test = prefix_settings.get('test', '')
opt_lint = prefix_settings.get('lint', '')
cwd = Path.cwd()

if opt_title == 'bank_support_with_logfire.py':
# don't format and no need to run
return
if opt_test.startswith('skip') and opt_lint.startswith('skip'):
pytest.skip('both running code and lint skipped')

if opt_title == 'sql_app_evals.py':
os.chdir(tmp_path)
@@ -107,7 +108,6 @@
line_length = 120

eval_example.set_config(ruff_ignore=ruff_ignore, target_version='py39', line_length=line_length)

eval_example.print_callback = print_callback

call_name = 'main'
@@ -116,19 +116,26 @@
call_name = name
break

if eval_example.update_examples: # pragma: no cover
eval_example.format(example)
module_dict = eval_example.run_print_update(example, call=call_name)
if not opt_lint.startswith('skip'):
if eval_example.update_examples: # pragma: no cover
eval_example.format(example)
else:
eval_example.lint(example)

if opt_test.startswith('skip'):
pytest.skip(opt_test[4:].lstrip(' -') or 'running code skipped')
else:
eval_example.lint(example)
module_dict = eval_example.run_print_check(example, call=call_name)

os.chdir(cwd)
if title := opt_title:
if title.endswith('.py'):
module_name = title[:-3]
sys.modules[module_name] = module = ModuleType(module_name)
module.__dict__.update(module_dict)
if eval_example.update_examples:
module_dict = eval_example.run_print_update(example, call=call_name)
else:
module_dict = eval_example.run_print_check(example, call=call_name)

os.chdir(cwd)
if title := opt_title:
if title.endswith('.py'):
module_name = title[:-3]
sys.modules[module_name] = module = ModuleType(module_name)
module.__dict__.update(module_dict)
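The skip-reason handling added above (`opt_test[4:].lstrip(' -')`) lets a fence such as `test="skip - requires network"` carry its own skip message, falling back to a default for a bare `skip`. The parsing can be checked in isolation (`skip_reason` is a hypothetical helper mirroring that expression, not part of the test suite):

```python
def skip_reason(opt_test: str, default: str) -> str:
    # Hypothetical helper mirroring the expression in the test above:
    # 'skip' -> the default message; 'skip - why' -> 'why'.
    return opt_test[4:].lstrip(' -') or default

print(skip_reason('skip', 'running code skipped'))
# → running code skipped
print(skip_reason('skip - requires network', 'running code skipped'))
# → requires network
```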


def print_callback(s: str) -> str: