<project title="Claudette" summary="Claudette is a Python library that wraps Anthropic's Claude API to provide a higher-level interface for creating AI applications. It automates common patterns while maintaining full control, offering features like stateful chat, prefill support, image handling, and streamlined tool use.">Things to remember when using Claudette:
- You must set the `ANTHROPIC_API_KEY` environment variable with your Anthropic API key
- Claudette is designed to work with Claude 3 models (Opus, Sonnet, Haiku) and supports multiple providers (Anthropic direct, AWS Bedrock, Google Vertex)
- The library provides both synchronous and asynchronous interfaces
- Use `Chat()` for maintaining conversation state and handling tool interactions
- When using tools, the library automatically handles the request/response loop
- Image support is built in but only available on compatible models (not Haiku)<docs><doc title="README" desc="Quick start guide and overview"># claudette
> **NB**: If you are reading this in GitHub’s readme, we recommend you
> instead read the much more nicely formatted [documentation
> version](https://claudette.answer.ai/) of this tutorial.
*Claudette* is a wrapper for Anthropic’s [Python
SDK](https://github.com/anthropics/anthropic-sdk-python).
The SDK works well, but it is quite low level – it leaves the developer
to do a lot of stuff manually. That’s a lot of extra work and
boilerplate! Claudette automates pretty much everything that can be
automated, whilst providing full control. Amongst the features provided:
- A [`Chat`](https://claudette.answer.ai/core.html#chat) class that
creates stateful dialogs
- Support for *prefill*, which tells Claude what to use as the first few
words of its response
- Convenient image support
- Simple and convenient support for Claude’s new Tool Use API.
You’ll need to set the `ANTHROPIC_API_KEY` environment variable to the
key provided to you by Anthropic in order to use this library.
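For example, a minimal way to set it from Python before importing the
library (the key shown is a placeholder – substitute your own, or export
the variable in your shell instead):

``` python
import os
# Placeholder value: replace with the key Anthropic issued to you.
os.environ['ANTHROPIC_API_KEY'] = 'sk-ant-...'
```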
Note that this library is the first ever “literate nbdev” project. That
means that the actual source code for the library is a rendered Jupyter
Notebook which includes callout notes and tips, HTML tables and images,
detailed explanations, and teaches *how* and *why* the code is written
the way it is. Even if you’ve never used the Anthropic Python SDK or
Claude API before, you should be able to read the source code. Click
[Claudette’s Source](https://claudette.answer.ai/core.html) to read it,
or clone the git repo and execute the notebook yourself to see every
step of the creation process in action. The tutorial below includes
links to API details which will take you to relevant parts of the
source. The reason this project is a new kind of literate program is
because we take seriously Knuth’s call to action, that we have a “*moral
commitment*” to never write an “*illiterate program*” – and so we have a
commitment to making literate programming an easy and pleasant
experience. (For more on this, see [this
talk](https://www.youtube.com/watch?v=rX1yGxJijsI) from Hamel Husain.)
> “*Let us change our traditional attitude to the construction of
> programs: Instead of imagining that our main task is to instruct a
> **computer** what to do, let us concentrate rather on explaining to
> **human beings** what we want a computer to do.*” Donald E. Knuth,
> [Literate
> Programming](https://www.cs.tufts.edu/~nr/cs257/archive/literate-programming/01-knuth-lp.pdf)
> (1984)
## Install
``` sh
pip install claudette
```
## Getting started
Anthropic’s Python SDK will automatically be installed with Claudette,
if you don’t already have it.
``` python
import os
# os.environ['ANTHROPIC_LOG'] = 'debug'
```
To print every HTTP request and response in full, uncomment the above
line.
``` python
from claudette import *
```
Claudette only exports the symbols that are needed to use the library,
so you can use `import *` to import them. Alternatively, just use:
``` python
import claudette
```
…and then add the prefix `claudette.` to any usages of the module.
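For instance, a quick sketch of the prefixed style (using the `models`
list introduced next):

``` python
import claudette
# All exported symbols are reached through the module prefix.
chat = claudette.Chat(claudette.models[1])
```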
Claudette provides `models`, which is a list of models currently
available from the SDK.
``` python
models
```
['claude-3-opus-20240229',
'claude-3-5-sonnet-20241022',
'claude-3-haiku-20240307']
For these examples, we’ll use Sonnet 3.5, since it’s awesome!
``` python
model = models[1]
```
## Chat
The main interface to Claudette is the
[`Chat`](https://claudette.answer.ai/core.html#chat) class, which
provides a stateful interface to Claude:
``` python
chat = Chat(model, sp="""You are a helpful and concise assistant.""")
chat("I'm Jeremy")
```
Hello Jeremy, nice to meet you.
<details>
- id: `msg_015oK9jEcra3TEKHUGYULjWB`
- content:
`[{'text': 'Hello Jeremy, nice to meet you.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 19, 'output_tokens': 11, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
``` python
r = chat("What's my name?")
r
```
Your name is Jeremy.
<details>
- id: `msg_01Si8sTFJe8d8vq7enanbAwj`
- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 38, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
``` python
r = chat("What's my name?")
r
```
Your name is Jeremy.
<details>
- id: `msg_01BHWRoAX8eBsoLn2bzpBkvx`
- content: `[{'text': 'Your name is Jeremy.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 54, 'output_tokens': 8, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
As you see above, displaying the results of a call in a notebook shows
just the message contents, with the other details hidden behind a
collapsible section. Alternatively you can `print` the details:
``` python
print(r)
```
Message(id='msg_01BHWRoAX8eBsoLn2bzpBkvx', content=[TextBlock(text='Your name is Jeremy.', type='text')], model='claude-3-5-sonnet-20241022', role='assistant', stop_reason='end_turn', stop_sequence=None, type='message', usage=In: 54; Out: 8; Cache create: 0; Cache read: 0; Total: 62)
Claude supports adding an extra `assistant` message at the end, which
contains the *prefill* – i.e. the text we want Claude to assume the
response starts with. Let’s try it out:
``` python
chat("Concisely, what is the meaning of life?",
prefill='According to Douglas Adams,')
```
According to Douglas Adams,42. Philosophically, it’s to find personal
meaning through relationships, purpose, and experiences.
<details>
- id: `msg_01R9RvMdFwea9iRX5uYSSHG7`
- content:
`[{'text': "According to Douglas Adams,42. Philosophically, it's to find personal meaning through relationships, purpose, and experiences.", 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 82, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
You can add `stream=True` to stream the results as soon as they arrive
(although you will only see the gradual generation if you execute the
notebook yourself, of course!)
``` python
for o in chat("Concisely, what book was that in?", prefill='It was in', stream=True):
print(o, end='')
```
It was in "The Hitchhiker's Guide to the Galaxy" by Douglas Adams.
### Async
Alternatively, you can use
[`AsyncChat`](https://claudette.answer.ai/async.html#asyncchat) (or
[`AsyncClient`](https://claudette.answer.ai/async.html#asyncclient)) for
the async versions, e.g.:
``` python
chat = AsyncChat(model)
await chat("I'm Jeremy")
```
Hi Jeremy! Nice to meet you. I’m Claude, an AI assistant created by
Anthropic. How can I help you today?
<details>
- id: `msg_016Q8cdc3sPWBS8eXcNj841L`
- content:
`[{'text': "Hi Jeremy! Nice to meet you. I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 10, 'output_tokens': 31, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
Remember to use `async for` when streaming in this case:
``` python
async for o in await chat("Concisely, what is the meaning of life?",
prefill='According to Douglas Adams,', stream=True):
print(o, end='')
```
According to Douglas Adams, it's 42. But in my view, there's no single universal meaning - each person must find their own purpose through relationships, personal growth, contribution to others, and pursuit of what they find meaningful.
## Prompt caching
If you use `mk_msg(msg, cache=True)`, then the message is cached using
Claude’s [prompt
caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)
feature. For instance, here we use caching when asking about Claudette’s
readme file:
``` python
chat = Chat(model, sp="""You are a helpful and concise assistant.""")
```
``` python
nbtxt = Path('README.txt').read_text()
msg = f'''<README>
{nbtxt}
</README>
In brief, what is the purpose of this project based on the readme?'''
r = chat(mk_msg(msg, cache=True))
r
```
Claudette is a high-level wrapper for Anthropic’s Python SDK that
automates common tasks and provides additional functionality. Its main
features include:
1. A Chat class for stateful dialogs
2. Support for prefill (controlling Claude’s initial response words)
3. Convenient image handling
4. Simple tool use API integration
5. Support for multiple model providers (Anthropic, AWS Bedrock, Google
Vertex)
The project is notable for being the first “literate nbdev” project,
meaning its source code is written as a detailed, readable Jupyter
Notebook that includes explanations, examples, and teaching material
alongside the functional code.
The goal is to simplify working with Claude’s API while maintaining full
control, reducing boilerplate code and manual work that would otherwise
be needed with the base SDK.
<details>
- id: `msg_014rVQnYoZXZuyWUCMELG1QW`
- content:
`[{'text': 'Claudette is a high-level wrapper for Anthropic\'s Python SDK that automates common tasks and provides additional functionality. Its main features include:\n\n1. A Chat class for stateful dialogs\n2. Support for prefill (controlling Claude\'s initial response words)\n3. Convenient image handling\n4. Simple tool use API integration\n5. Support for multiple model providers (Anthropic, AWS Bedrock, Google Vertex)\n\nThe project is notable for being the first "literate nbdev" project, meaning its source code is written as a detailed, readable Jupyter Notebook that includes explanations, examples, and teaching material alongside the functional code.\n\nThe goal is to simplify working with Claude\'s API while maintaining full control, reducing boilerplate code and manual work that would otherwise be needed with the base SDK.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 4, 'output_tokens': 179, 'cache_creation_input_tokens': 7205, 'cache_read_input_tokens': 0}`
</details>
The response records that a cache has been created using these input
tokens:
``` python
print(r.usage)
```
Usage(input_tokens=4, output_tokens=179, cache_creation_input_tokens=7205, cache_read_input_tokens=0)
We can now ask a followup question in this chat:
``` python
r = chat('How does it make tool use more ergonomic?')
r
```
According to the README, Claudette makes tool use more ergonomic in
several ways:
1. It uses docments to make Python function definitions more
user-friendly - each parameter and return value should have a type
and description
2. It handles the tool calling process automatically - when Claude
returns a tool_use message, Claudette manages calling the tool with
the provided parameters behind the scenes
3. It provides a `toolloop` method that can handle multiple tool calls
in a single step to solve more complex problems
4. It allows you to pass a list of tools to the Chat constructor and
optionally force Claude to always use a specific tool via
`tool_choice`
Here’s a simple example from the README:
``` python
def sums(
a:int, # First thing to sum
b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
"Adds a + b."
print(f"Finding the sum of {a} and {b}")
return a + b
chat = Chat(model, sp=sp, tools=[sums], tool_choice='sums')
```
This makes it much simpler compared to manually handling all the tool
use logic that would be required with the base SDK.
<details>
- id: `msg_01EdUvvFBnpPxMtdLRCaSZAU`
- content:
`[{'text': 'According to the README, Claudette makes tool use more ergonomic in several ways:\n\n1. It uses docments to make Python function definitions more user-friendly - each parameter and return value should have a type and description\n\n2. It handles the tool calling process automatically - when Claude returns a tool_use message, Claudette manages calling the tool with the provided parameters behind the scenes\n\n3. It provides a `toolloop` method that can handle multiple tool calls in a single step to solve more complex problems\n\n4. It allows you to pass a list of tools to the Chat constructor and optionally force Claude to always use a specific tool via `tool_choice`\n\nHere\'s a simple example from the README:\n\n```python\ndef sums(\n    a:int, # First thing to sum \n    b:int=1 # Second thing to sum\n) -> int: # The sum of the inputs\n    "Adds a + b."\n    print(f"Finding the sum of {a} and {b}")\n    return a + b\n\nchat = Chat(model, sp=sp, tools=[sums], tool_choice=\'sums\')\n```\n\nThis makes it much simpler compared to manually handling all the tool use logic that would be required with the base SDK.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 197, 'output_tokens': 280, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 7205}`
</details>
We can see that this only used ~200 regular input tokens – the 7000+
context tokens have been read from cache.
``` python
print(r.usage)
```
Usage(input_tokens=197, output_tokens=280, cache_creation_input_tokens=0, cache_read_input_tokens=7205)
``` python
chat.use
```
In: 201; Out: 459; Cache create: 7205; Cache read: 7205; Total: 15070
## Tool use
[Tool use](https://docs.anthropic.com/claude/docs/tool-use) lets Claude
use external tools.
We use [docments](https://fastcore.fast.ai/docments.html) to make
defining Python functions as ergonomic as possible. Each parameter (and
the return value) should have a type, and a docments comment with the
description of what it is. As an example we’ll write a simple function
that adds numbers together, and will tell us when it’s being called:
``` python
def sums(
a:int, # First thing to sum
b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
"Adds a + b."
print(f"Finding the sum of {a} and {b}")
return a + b
```
Sometimes Claude will say something like “according to the `sums` tool
the answer is” – generally we’d rather it just tells the user the
answer, so we can use a system prompt to help with this:
``` python
sp = "Never mention what tools you use."
```
We’ll get Claude to add up some long numbers:
``` python
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
pr
```
'What is 604542+6458932?'
To use tools, pass a list of them to
[`Chat`](https://claudette.answer.ai/core.html#chat):
``` python
chat = Chat(model, sp=sp, tools=[sums])
```
To force Claude to always answer using a tool, set `tool_choice` to that
function name. When Claude needs to use a tool, it doesn’t return the
answer, but instead returns a `tool_use` message, which means we have to
call the named tool with the provided parameters.
``` python
r = chat(pr, tool_choice='sums')
r
```
Finding the sum of 604542 and 6458932
ToolUseBlock(id='toolu_014ip2xWyEq8RnAccVT4SySt', input={'a': 604542,
'b': 6458932}, name='sums', type='tool_use')
<details>
- id: `msg_014xrPyotyiBmFSctkp1LZHk`
- content:
`[{'id': 'toolu_014ip2xWyEq8RnAccVT4SySt', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `tool_use`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 442, 'output_tokens': 53, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
Claudette handles all that for us – we just call it again, and it all
happens automatically:
``` python
chat()
```
The sum of 604542 and 6458932 is 7063474.
<details>
- id: `msg_01151puJxG8Fa6k6QSmzwKQA`
- content:
`[{'text': 'The sum of 604542 and 6458932 is 7063474.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 524, 'output_tokens': 23, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
You can see how many tokens have been used at any time by checking the
`use` property. Note that (as of May 2024) tool use in Claude uses a
*lot* of tokens, since it automatically adds a large system prompt.
``` python
chat.use
```
In: 966; Out: 76; Cache create: 0; Cache read: 0; Total: 1042
We can do everything needed to use tools in a single step, by using
[`Chat.toolloop`](https://claudette.answer.ai/toolloop.html#chat.toolloop).
This can even call multiple tools as needed to solve a problem. For
example, let’s define a tool to handle multiplication:
``` python
def mults(
a:int, # First thing to multiply
b:int=1 # Second thing to multiply
) -> int: # The product of the inputs
"Multiplies a * b."
print(f"Finding the product of {a} and {b}")
return a * b
```
Now with a single call we can calculate `(a+b)*2` – by passing
`trace_func=print` we can see each response from Claude in the process:
``` python
chat = Chat(model, sp=sp, tools=[sums,mults])
pr = f'Calculate ({a}+{b})*2'
pr
```
'Calculate (604542+6458932)*2'
``` python
chat.toolloop(pr, trace_func=print)
```
Finding the sum of 604542 and 6458932
[{'role': 'user', 'content': [{'type': 'text', 'text': 'Calculate (604542+6458932)*2'}]}, {'role': 'assistant', 'content': [TextBlock(text="I'll help you break this down into steps:\n\nFirst, let's add those numbers:", type='text'), ToolUseBlock(id='toolu_01St5UKxYUU4DKC96p2PjgcD', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01St5UKxYUU4DKC96p2PjgcD', 'content': '7063474'}]}]
Finding the product of 7063474 and 2
[{'role': 'assistant', 'content': [TextBlock(text="Now, let's multiply this result by 2:", type='text'), ToolUseBlock(id='toolu_01FpmRG4ZskKEWN1gFZzx49s', input={'a': 7063474, 'b': 2}, name='mults', type='tool_use')]}, {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01FpmRG4ZskKEWN1gFZzx49s', 'content': '14126948'}]}]
[{'role': 'assistant', 'content': [TextBlock(text='The final result is 14,126,948.', type='text')]}]
The final result is 14,126,948.
<details>
- id: `msg_0162teyBcJHriUzZXMPz4r5d`
- content:
`[{'text': 'The final result is 14,126,948.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 741, 'output_tokens': 15, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
## Structured data
If you just want the immediate result from a single tool, use
[`Client.structured`](https://claudette.answer.ai/core.html#client.structured).
``` python
cli = Client(model)
```
``` python
def sums(
a:int, # First thing to sum
b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
"Adds a + b."
print(f"Finding the sum of {a} and {b}")
return a + b
```
``` python
cli.structured("What is 604542+6458932", sums)
```
Finding the sum of 604542 and 6458932
[7063474]
This is particularly useful for getting back structured information,
e.g:
``` python
import re
from fastcore.basics import store_attr, basic_repr

class President:
"Information about a president of the United States"
def __init__(self,
first:str, # first name
last:str, # last name
spouse:str, # name of spouse
years_in_office:str, # format: "{start_year}-{end_year}"
birthplace:str, # name of city
birth_year:int # year of birth, `0` if unknown
):
assert re.match(r'\d{4}-\d{4}', years_in_office), "Invalid format: `years_in_office`"
store_attr()
__repr__ = basic_repr('first, last, spouse, years_in_office, birthplace, birth_year')
```
``` python
cli.structured("Provide key information about the 3rd President of the United States", President)
```
[President(first='Thomas', last='Jefferson', spouse='Martha Wayles', years_in_office='1801-1809', birthplace='Shadwell', birth_year=1743)]
## Images
Claude can handle image data as well. As everyone knows, when testing
image APIs you have to use a cute puppy.
``` python
fn = Path('samples/puppy.jpg')
display.Image(filename=fn, width=200)
```
<img src="index_files/figure-commonmark/cell-35-output-1.jpeg"
width="200" />
We create a [`Chat`](https://claudette.answer.ai/core.html#chat) object
as before:
``` python
chat = Chat(model)
```
Claudette expects images as a list of bytes, so we read in the file:
``` python
img = fn.read_bytes()
```
Prompts to Claudette can be lists, containing text, images, or both, e.g.:
``` python
chat([img, "In brief, what color flowers are in this image?"])
```
In this adorable puppy photo, there are purple/lavender colored flowers
(appears to be asters or similar daisy-like flowers) in the background.
<details>
- id: `msg_01LHjGv1WwFvDsWUbyLmTEKT`
- content:
`[{'text': 'In this adorable puppy photo, there are purple/lavender colored flowers (appears to be asters or similar daisy-like flowers) in the background.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 110, 'output_tokens': 37, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
The image is included as input tokens.
``` python
chat.use
```
In: 110; Out: 37; Cache create: 0; Cache read: 0; Total: 147
Alternatively, Claudette supports creating a multi-stage chat with
separate image and text prompts. For instance, you can pass just the
image as the initial prompt (in which case Claude will make some general
comments about what it sees), and then follow up with questions in
additional prompts:
``` python
chat = Chat(model)
chat(img)
```
What an adorable Cavalier King Charles Spaniel puppy! The photo captures
the classic brown and white coloring of the breed, with those soulful
dark eyes that are so characteristic. The puppy is lying in the grass,
and there are lovely purple asters blooming in the background, creating
a beautiful natural setting. The combination of the puppy’s sweet
expression and the delicate flowers makes for a charming composition.
Cavalier King Charles Spaniels are known for their gentle, affectionate
nature, and this little one certainly seems to embody those traits with
its endearing look.
<details>
- id: `msg_01Ciyymq44uwp2iYwRZdKWNN`
- content:
`[{'text': "What an adorable Cavalier King Charles Spaniel puppy! The photo captures the classic brown and white coloring of the breed, with those soulful dark eyes that are so characteristic. The puppy is lying in the grass, and there are lovely purple asters blooming in the background, creating a beautiful natural setting. The combination of the puppy's sweet expression and the delicate flowers makes for a charming composition. Cavalier King Charles Spaniels are known for their gentle, affectionate nature, and this little one certainly seems to embody those traits with its endearing look.", 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 98, 'output_tokens': 130, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
``` python
chat('What direction is the puppy facing?')
```
The puppy is facing towards the left side of the image. Its head is
positioned so we can see its right side profile, though it appears to be
looking slightly towards the camera, giving us a good view of its
distinctive brown and white facial markings and one of its dark eyes.
The puppy is lying down with its white chest/front visible against the
green grass.
<details>
- id: `msg_01AeR9eWjbxa788YF97iErtN`
- content:
`[{'text': 'The puppy is facing towards the left side of the image. Its head is positioned so we can see its right side profile, though it appears to be looking slightly towards the camera, giving us a good view of its distinctive brown and white facial markings and one of its dark eyes. The puppy is lying down with its white chest/front visible against the green grass.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 239, 'output_tokens': 79, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
``` python
chat('What color is it?')
```
The puppy has a classic Cavalier King Charles Spaniel coat with a rich
chestnut brown (sometimes called Blenheim) coloring on its ears and
patches on its face, combined with a bright white base color. The white
is particularly prominent on its face (creating a distinctive blaze down
the center) and chest area. This brown and white combination is one of
the most recognizable color patterns for the breed.
<details>
- id: `msg_01R91AqXG7pLc8hK24F5mc7x`
- content:
`[{'text': 'The puppy has a classic Cavalier King Charles Spaniel coat with a rich chestnut brown (sometimes called Blenheim) coloring on its ears and patches on its face, combined with a bright white base color. The white is particularly prominent on its face (creating a distinctive blaze down the center) and chest area. This brown and white combination is one of the most recognizable color patterns for the breed.', 'type': 'text'}]`
- model: `claude-3-5-sonnet-20241022`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage:
`{'input_tokens': 326, 'output_tokens': 92, 'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0}`
</details>
Note that the image is passed in again for every input in the dialog, so
the number of input tokens increases quickly with this kind of chat.
(For large images, using prompt caching might be a good idea – see the
sketch after the usage numbers below.)
``` python
chat.use
```
In: 663; Out: 301; Cache create: 0; Cache read: 0; Total: 964
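As a sketch of that caching idea, we could mark the image-bearing
message with `mk_msg(..., cache=True)` as in the prompt-caching section
above (this assumes `mk_msg` accepts the same list-of-parts prompts that
[`Chat`](https://claudette.answer.ai/core.html#chat) does):

``` python
chat = Chat(model)
# Cache the expensive image tokens so follow-up questions in this
# dialog can read them from cache instead of resending them in full.
chat(mk_msg([img, "In brief, what color flowers are in this image?"], cache=True))
chat('What direction is the puppy facing?')
```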
## Other model providers
You can also use 3rd party providers of Anthropic models, as shown here.
### Amazon Bedrock
These are the models available through Bedrock:
``` python
models_aws
```
['anthropic.claude-3-opus-20240229-v1:0',
'anthropic.claude-3-5-sonnet-20241022-v2:0',
'anthropic.claude-3-sonnet-20240229-v1:0',
'anthropic.claude-3-haiku-20240307-v1:0']
To use them, call `AnthropicBedrock` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):
``` python
from anthropic import AnthropicBedrock
```
``` python
ab = AnthropicBedrock(
aws_access_key=os.environ['AWS_ACCESS_KEY'],
aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
```
Now create your [`Chat`](https://claudette.answer.ai/core.html#chat)
object passing this client to the `cli` parameter – and from then on,
everything is identical to the previous examples.
``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```
It’s nice to meet you, Jeremy! I’m Claude, an AI assistant created by
Anthropic. How can I help you today?
<details>
- id: `msg_bdrk_01V3B5RF2Pyzmh3NeR8xMMpq`
- content:
`[{'text': "It's nice to meet you, Jeremy! I'm Claude, an AI assistant created by Anthropic. How can I help you today?", 'type': 'text'}]`
- model: `claude-3-haiku-20240307`
- role: `assistant`
- stop_reason: `end_turn`
- stop_sequence: `None`
- type: `message`
- usage: `{'input_tokens': 10, 'output_tokens': 32}`
</details>
### Google Vertex
These are the models available through Vertex:
``` python
models_goog
```
['claude-3-opus@20240229',
'claude-3-5-sonnet-v2@20241022',
'claude-3-sonnet@20240229',
'claude-3-haiku@20240307']
To use them, call `AnthropicVertex` with your access details, and pass
that to [`Client`](https://claudette.answer.ai/core.html#client):
``` python
from anthropic import AnthropicVertex
import google.auth
```
``` python
project_id = google.auth.default()[1]
gv = AnthropicVertex(project_id=project_id, region="us-east5")
client = Client(models_goog[-1], gv)
```
``` python
chat = Chat(cli=client)
chat("I'm Jeremy")
```
## Extensions
- [Pydantic Structured
Output](https://github.com/tom-pollak/claudette-pydantic)</doc></docs><api><doc title="API List" desc="A succinct list of all functions and methods in claudette.">404: Not Found</doc></api></project>