README.md — 84 additions, 12 deletions
@@ -120,6 +120,50 @@ async def main() -> None:
asyncio.run(main())
```

## Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from gradientai import GradientAI

client = GradientAI()

stream = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model="llama3.3-70b-instruct",
    stream=True,
)
for completion in stream:
    print(completion.choices)
```

The async client uses the exact same interface.

```python
from gradientai import AsyncGradientAI

client = AsyncGradientAI()

stream = await client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model="llama3.3-70b-instruct",
    stream=True,
)
async for completion in stream:
    print(completion.choices)
```
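
In OpenAI-style SDKs like this one, each streamed chunk typically carries an incremental delta rather than the full message. Assuming the chunks expose `choices[0].delta.content` (an assumption about the chunk shape, not confirmed by this README), the full reply can be assembled by concatenating the deltas. The sketch below mocks the stream with simple stand-in objects so the accumulation logic is self-contained and runnable without an API key.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in chunk shapes mirroring OpenAI-style streaming chunks
# (`chunk.choices[0].delta.content`). These mocks are assumptions for
# illustration, not classes from the gradientai package.
@dataclass
class Delta:
    content: Optional[str]

@dataclass
class Choice:
    delta: Delta

@dataclass
class Chunk:
    choices: list

def collect_text(stream) -> str:
    """Accumulate the incremental deltas of a streamed completion."""
    parts = []
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)

# A mocked stream standing in for `client.chat.completions.create(..., stream=True)`
mock_stream = [
    Chunk([Choice(Delta("The capital "))]),
    Chunk([Choice(Delta("of France "))]),
    Chunk([Choice(Delta("is Paris."))]),
    Chunk([Choice(Delta(None))]),  # final chunks may carry no content
]
print(collect_text(mock_stream))  # -> The capital of France is Paris.
```

With a real client, `mock_stream` would simply be replaced by the `stream` object returned when `stream=True` is passed.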
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
@@ -167,8 +211,14 @@ from gradientai import GradientAI
-version = response.parse()  # get the object that `agents.versions.list()` would have returned
-print(version.agent_versions)
+completion = response.parse()  # get the object that `chat.completions.create()` would have returned
+print(completion.choices)
```
These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) object.
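
The idea behind this pattern is that the wrapper keeps HTTP-level details available while `.parse()` returns the same typed object the plain call would have produced. As a concept sketch only, the hypothetical `FakeAPIResponse` class below (an illustration, not the SDK's real `APIResponse`) shows that shape: headers stay accessible, and repeated `.parse()` calls return the same cached object.

```python
import json

# Hypothetical stand-in for an API response wrapper; illustrates the
# raw-response pattern, not the actual gradientai implementation.
class FakeAPIResponse:
    def __init__(self, headers: dict, body: str):
        self.headers = headers
        self._body = body
        self._parsed = None

    def parse(self):
        # Parse the already-read body once and cache the result.
        if self._parsed is None:
            self._parsed = json.loads(self._body)
        return self._parsed

response = FakeAPIResponse(
    headers={"x-request-id": "req_123"},  # mock header values
    body='{"choices": [{"message": {"content": "Paris"}}]}',
)
completion = response.parse()
print(response.headers["x-request-id"])  # HTTP metadata stays accessible
print(completion["choices"][0]["message"]["content"])  # -> Paris
```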
@@ -300,8 +366,14 @@ The above interface eagerly reads the full response body when you make the reque
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
-with client.agents.versions.with_streaming_response.list(
-    uuid="REPLACE_ME",
+with client.chat.completions.with_streaming_response.create(